Hacker News

It's fun to watch observations like the following re-discovered again and again. From the NATO Software Engineering Conference in 1968:

Ross: The most deadly thing in software is the concept, which almost universally seems to be followed, that you are going to specify what you are going to do, and then do it. And that is where most of our troubles come from.

Fraser: One of the problems that is central to the software production process is to identify the nature of progress and to find some way of measuring it. Only one thing seems to be clear just now. It is that program construction is not always a simple progression in which each act of assembly represents a distinct forward step and that the final product can be described simply as the sum of many sub-assemblies.

(Full transcript of the 1968 conference here. http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PD... It's a really fun read!)



Imagine you are a large company and there are 10 ideas for things you can build next, and you can't do them all. You gather the people that build such things and ask them, "how much will it cost to make each of these, and how long will it take". Do you think it is an acceptable answer if the builders reply, "I have no idea how much it will cost, I don't know how long it will take, and I don't know what you will have at the end of the process." How should a business handle this problem?


I can't give you an exact cost and time, but what I can do is give you relative cost and time. "This is easy; this is easy but will take a while (low variance); this is hard (relatively long time, high variance)." -- so that gives you your estimate and your tolerances.
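Concretely, that relative scheme can be sketched in a few lines. The bucket names and unit ranges below are made up for illustration, not taken from any real team:

```python
# A minimal sketch of relative estimation: each project gets a size
# bucket instead of an absolute figure. The bucket widths here are
# hypothetical -- in practice they would come from a team's history.
BUCKETS = {
    "easy":          (1, 2),    # small, low variance
    "easy-but-long": (4, 6),    # takes a while, still low variance
    "hard":          (6, 20),   # long and high variance
}

def relative_estimate(size: str) -> str:
    """Turn a size bucket into a midpoint plus a tolerance."""
    low, high = BUCKETS[size]
    mid = (low + high) / 2
    spread = (high - low) / 2
    return f"{mid:g} units +/- {spread:g}"

print(relative_estimate("hard"))  # -> "13 units +/- 7"
```

The point is that the widths of the tolerances carry the information: "hard" is not just bigger, it is far less certain.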

Further, I can work with you to figure out what portion of each project will give you the biggest bang for your buck. So you can say, "I want to spend x00,000 dollars." -- and I come back and give you a demo each week that you can play around with. At any point, you can say, "This is enough to generate value for me, let's move to the next thing for a while."

Further still, I can build it with enough test coverage and good design practices that it will be possible to extend the design at a later point without scrapping everything.

On the other hand, you can spend half of that x00,000 developing a specification. If detailed enough, I can give you a very low variance estimate of how long it will take. However, you won't know if it will actually meet your needs until you see it. You won't know about problems in what you really need until it's too late, and you'll end up spending more money in the end.

That's the message that should go out.


You have successfully solved the problem posed, but it was not well-stated. In a real-world scenario, it is important not just to identify which of the 10 ideas is most promising, but to defeat the null hypothesis that the programmers should be fired and no idea should be pursued. After all, to choose the best of 10 bad ideas is to have failed as an economic enterprise.

In order for a business to be investable, it is necessary to demonstrate that it will be profitable to an investor. (And of course all businesses need investment, whether that is VC money or just a lone developer's part-time effort.)

We have avoided this problem because in the early days of software, most software was profitable. But as the software industry has grown, this is proportionally less true. And I seriously question in 2014 whether more LOC are committed each day to "black" or "red" projects.


> to defeat the null hypothesis that the programmers should be fired and no idea should be pursued

Now I'm imagining a website like oDesk or ELance, but which requires employers to "defeat the null hypothesis" for any job contract they want to post. What a wonderful world that would be.


There are two potential problems posed in the scenario:

The first, as I believe you presume, is that the business is looking to develop a revenue-generating product. I agree that continuous delivery, quick iterations, late-binding requirements (a la Behavior-Driven Development), and rigorous testing are not a strong fit for this problem in the context of big business, and I can point to a handful of systems where this approach did not work well.

However, the second interpretation is that a big business has ten internal projects to increase their efficiency. In this case, return on investment is relatively easy to calculate, but requirements are usually nebulous and thus estimates are by necessity high-variance.

I think the subset of agile techniques I described (as well as cross-functional teams, high customer involvement, and exploratory testing) are well suited to this problem because they, as a collection, allow a business to receive value quickly and allow the business to "fail fast" in lean startup parlance.

Are you building embedded software? The agile approaches aren't, collectively, well suited for that domain (though I'd argue pair programming and cross-functional teams aren't a bad idea here). It's not going to work for games. But for internal software and software with fewer than ten clients, I argue that this subset reduces risk and increases the potential for high ROI.

Now, as for a few things you mention.

* The programmers are not responsible for calculating return on investment. However, is an extensive requirements-gathering mission with months of meetings going to be cheaper once the cost of those meetings (in lost productivity and wages) is counted? In my experience, these documents end up being works of fiction, from which aggressive estimates are produced under pressure from management. Then the costs overrun (especially since nobody accounted for the customer's lost productivity) because the estimate was based on business need rather than reality. Then features are dropped and technical debt accrues, leading to a system that is hard to maintain and is scrapped after five years of frustration, when the customer starts again. At least this time they know what they don't want. I don't think that works.

* I think that is the situation that leads to the "red" projects you mention. You speak of the early days of software, when most software was profitable; however, I've seen and heard the horror stories of multi-million (and billion!) dollar projects from the 80s and 90s. While http://calleam.com/WTPF/?page_id=1445 mentions recent studies, I remember studies finding that two-thirds of projects failed in the 80s and 90s as well.

So, most software wasn't profitable then (at least in terms of internal projects), and it isn't now.

However, I'd argue that the collection of techniques above allow a project to fail faster and cheaper if managed correctly.

If we were to put it into a methodology, here it is:

1. Determine the return on investment if X can be automated. How many hours are saved? How many fewer people are needed to do the same job? What percentage better forecast can this project create (estimated)?

2. Use some sort of behavior driven development to get a basic sense of the complexity of the system needed. If the complexity and ROI don't match up, stop here. You've lost a minimum amount of money.

3. Start prioritizing which pieces would provide the most business value for the least complexity cost. Build the first of these pieces, elaborating on the BDD done in step 2. Write this with good tests, so that the system can be extended with a reduced fear of later regression.

4. Demo to customer. Does this fit their needs? Is additional complexity exposed? Does this provide value? This is the next cutoff point. If the system appears to be more complex than thought, or if the team is unable to provide the business value anticipated, stop the project and re-evaluate or shelve.

Everyone gets together and does a retrospective: a question of whether to continue, an honest look at what worked and what didn't, and a set of things to try to fix the problems of the first iteration.

5. Repeat steps 3 and 4 until the software either sufficiently meets the business need or the project is ended early due to a fatal flaw. Always be willing to ask if this is the appropriate time to declare the system "done for now."
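The cutoff in steps 1 and 2 can be made mechanical. A minimal sketch, with a hypothetical `roi_gate` helper and invented figures:

```python
# Hedged sketch of the ROI cutoff in steps 1-2: compare estimated
# yearly savings against estimated build cost before committing.
# The helper name and all figures here are hypothetical.
def roi_gate(hours_saved_per_year: float, hourly_rate: float,
             estimated_build_cost: float, payback_years: float = 2.0) -> bool:
    """Return True if the project pays for itself within payback_years."""
    yearly_savings = hours_saved_per_year * hourly_rate
    return yearly_savings * payback_years >= estimated_build_cost

# A project saving 500 hours/year at $60/hour must cost at most
# $60,000 to clear a two-year payback bar.
print(roi_gate(500, 60, 55_000))  # True
print(roi_gate(500, 60, 80_000))  # False
```

If the BDD exercise in step 2 pushes the complexity estimate past the gate, you stop there, having spent only the cost of that exercise.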

Now, this won't work in most major corporations for a variety of political reasons, but to me this seems to be a better system than the traditional corporate project structure from my experience in both successful and unsuccessful IT projects.


Isn't that one of those methodologies ;)


Yeah, that reads a bit optimistic about the powers of Agile to me, but I think the thing Agile gets right is that we all have to operate under uncertainty about the scope of software projects. In that sense it's not a methodology for estimating the complexity of a software project but a way to start working without that estimate.

Perhaps the biggest reason this keeps causing problems is that companies have no good way of dealing with change. If you expect to finish any project that you start then you need to know more about that project than is realistic at the time you start it. A canceled project is a big failure for most employees and not something they want to happen to their careers.


It's not so much avoiding estimating complexity as moving from absolute estimates to relative estimates. Arlo Belshee has the best article I've seen on the subject: http://arlobelshee.com/post/planning-with-any-hope-of-accura...

However, until you establish reality, all the estimates in the world aren't going to help much. And most customers cannot estimate reality until they are actually in the process.

I'm not saying Agile is a silver bullet -- it can go wrong in many ways, and it's not appropriate for every situation. However, it's the best we have for its niche.


> Do you think it is an acceptable answer if the builders reply, "I have no idea how much it will cost, I don't know how long it will take, and I don't know what you will have at the end of the process."

If that's the actual answer, then anything else is just obfuscating the truth.

Why should we encourage lying to try to cover for the fundamental lack of knowledge? If you don't know, trying to make up numbers never makes things better.


Making up numbers doesn't make things better for you or for delivery of the project. But it makes you liable to the boss, so they can make your head roll if they need to.

A surprising number of people in business think this way: not that they need to make sure of something, but that they need someone to tell them everything will be OK, and then someone to put the blame on if anyone is unhappy. That someone will naturally be whoever is at the bottom of the totem pole.


This reminds me that business is pure gambling, and business people are bad gamblers. They want to play but don't want to lose, and if they lose they want to blame somebody.

Managing egos and hearts is harder than managing a project, I think.


The Second Law of Consulting: No matter how it looks at first, it's always a people problem.

Gerald Weinberg

http://en.wikiquote.org/wiki/Gerald_Weinberg#The_secrets_of_...

http://www.codinghorror.com/blog/2008/01/no-matter-what-they...


haha, I love this guy..

Asking for efficiency and adaptability in the same program is like asking for a beautiful and modest wife. Although beauty and modesty have been known to occur in the same woman, we'll probably have to settle for one or the other. At least that's better than neither.


No haha, this is just plain sexist and wrong. It's one of those wonky generalizations, and part of why we don't have real software engineering yet.


"Make up numbers" glosses over a bunch of actually-useful techniques, though. You might not have any data to begin with, but you can certainly get some.

For example, since a time estimate can trivially be converted into probability estimate for something happening on time, all of this is relevant: http://lesswrong.com/lw/3m6/techniques_for_probability_estim....

The best technique from that set, I usually find, is betting (with actual money) on results: setting up an intra-company prediction market for when your software will ship can tell you more than you ever wanted to know about how much time it'll truly take.
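As a concrete sketch of converting estimates into probabilities, here is a small Monte Carlo over per-task uncertainty. The task durations and the lognormal spread are assumptions invented for illustration, not data from any real project:

```python
# Sketch: turn task-level estimates into a probability of shipping
# by a deadline, by sampling uncertain durations many times.
import random

def p_done_by(tasks, deadline, trials=10_000, seed=42):
    """Probability that the sum of lognormally-perturbed task
    durations (median at the raw estimate) fits within `deadline`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        total = sum(rng.lognormvariate(0, 0.5) * t for t in tasks)
        if total <= deadline:
            hits += 1
    return hits / trials

estimates = [5, 8, 3, 13]  # days per task, hypothetical
print(round(p_done_by(estimates, deadline=35), 2))
```

A "we'll ship in 29 days" point estimate becomes "here is our chance of shipping within 35 days", which is the kind of claim a prediction market can actually price.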


> If that's the actual answer, then anything else is just obfuscating the truth.

There are three possible reasons for an answer of "I don't know":

1. Something is unknowable.
2. The person being questioned has insufficient expertise to discover the answer.
3. The person being questioned hasn't really thought about it.

You appear to be assuming that 1 is always the reason.


There's a lot more than those three possibilities. I don't really believe that anything is unknowable, aside from semantic arguments about Gödel's incompleteness theorems and the like.

What I'm saying is, that at the moment someone asks you "How long will X take to bring through the software development process", "I don't know" has to be a possible answer or your whole method is corrupt.


To know how long something takes you must have done it already.

And that's the problem with software, there are no equals. And there are always "surprises".

A car manufacturer knows how long it takes to build a car, because it does that thousands of times for the same model. Now, designing the car, that's a different issue. (And don't give me the BS enterprise "model" of architects throwing some UML from a high tower and "developers" filling in the voids, because that's the worst way you can build software.)

Ok, it's one thing to build a simple ACID application; another thing is to build something that "has never been tried before".


You can answer with order of magnitude approximations (days, months, years, might be impossible).

The order-of-magnitude approximation is good enough, and for the ones where you can't even make an order-of-magnitude approximation, you should be able to back that up.

e.g. ca. 2002 a PMM said "All we need for this idea to work is speaker-independent voice recognition in a noisy environment; how long will that take?"

The correct answer to that question isn't a time estimate, or "I don't know"; it's pointing out that this is an R&D problem, not an engineering problem.


A business should clearly handle this problem by maintaining the delusion that they control all these factors, and firing underlings if they interfere with the maintenance of the delusion.


Businesses have never had the stomach for basic research. It's why progress is traditionally made under one form or another of governmentally sanctioned monopoly.


The business should handle this "problem" by recognizing it shouldn't be in the business of creating or building new things, because it hasn't got the talent or will "at the top" to understand or stomach all that building or creating new things entails.


What about the other half of the uncertainty?

The company does not know how much revenue each of those ideas will deliver.


Do you expect to get estimates of outcome from stock market traders? They do their thing, and you gain and lose according to external factors.

But seriously, scrum, with its concept of velocity and estimating the size of tasks, is the best approach I've seen. That's if you want to ask programmers at all. The preferable way would be to predict the duration of the project scientifically from metrics gathered in the past (not only of the programmers and projects but also of the environment they operate in).
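The velocity idea can be sketched as a range forecast rather than a point estimate. The backlog sizes and past velocities below are invented for illustration:

```python
# Rough sketch of scrum-style forecasting: project duration from
# relative task sizes plus historical velocity. All numbers are
# hypothetical.
def sprints_needed(backlog_points, past_velocities):
    """Forecast a sprint range using the best and worst observed
    velocity, rather than a single-point average."""
    total = sum(backlog_points)
    worst, best = min(past_velocities), max(past_velocities)
    # Ceiling division: a partial sprint still counts as a sprint.
    return (-(-total // best), -(-total // worst))  # (optimistic, pessimistic)

backlog = [3, 5, 8, 5, 13, 2]       # story points
velocity_history = [12, 9, 14, 11]  # points completed per past sprint
print(sprints_needed(backlog, velocity_history))  # -> (3, 4)
```

Using the observed best and worst velocities keeps the forecast honest about variance instead of hiding it behind an average.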

How can a programmer estimate when a project will get built if he doesn't even know whether anyone cares enough about it being built to actually decide what the software should do?


Beats me. But I have every confidence that you'll figure it out. You're smart, that's why they put you in charge.


Having read "The Mythical Man-Month", I came to the conclusion that the majority of the problems we have today were discovered and described by the 1970s.


Great book. He anticipates a lot of the problems, and "adding idiots at the end" rarely fixes things. (Look at the Obamacare website.)

It's been a while since I've read it, but I don't think he captured the benefits from agility.


He mentions the importance of automated testing once or twice, I think, but it isn't hammered on as a modern Agilist would. But remember, 1970 was a different world. You may literally not have had room to include unit tests in your code, or literally been unable to afford the sort of abstractions that would enable unit testing, for instance. Other parts of that book talk about the difficulty of squeezing bytes out of pages. A lot of modern Agile practices were not possible back then, or were so different as to be entirely different things. TDD would probably have been laughed at by all, for instance: "What, you want me to waste my precious timeshare time running tests I know are going to fail?"


I don't have the reference, but I read somewhere several years ago when digging through old journals that regular testing was part of the Project Mercury? Gemini? software development process.


http://www.testingreferences.com/testinghistory.php, via Google, leads me to Google: http://books.google.nl/books?id=76rnV5Exs50C&lpg=PA81&ots=o8...

"Project Mercury was done with short half-day iterations. [...] Tests were planned and written in advance of each micro-increment"


I would not be surprised if someday we understand why agile works for some projects, and see that it basically comes back to the "vision" observation in The Mythical Man-Month.

Of course, this really just betrays that I want that to happen. So... take it for what it's worth.


The majority of everything in software was discovered by the 70s. Since then we have been reinventing the wheel every 5-10 years on better hardware.


This is so true. And it's not even amusing: it's like we can't come up with anything new; we even go backwards.


It's the politics that change. Sometimes we use a different tool to prove a point. What we lose in productivity we sometimes make back in social progress (ex: yanking control away from certain corporations).

I do hate the reinvention of the wheel too, but it's not all pointless.


I don't think hardware has anything to do with it. I think the most important factor in this evolutionary cycle is that every 4-6 years, new computer science/software development students enter the market, spend a few years catching up, and then try to re-invent everything, rediscovering territory they'd never really learned, which the industry had already moved through at a much greater pace than academia.


No, I mean hardware is genuinely progressing. Software is going in circles. It's a culture thing. Legacy code should mean "battle tested code" not "boring, let's replace it!" We are so busy replacing perfectly working code with the latest fashion that we never advance.


Yeah - and I mean that it's not being driven by the hardware folks; in fact it's a cultural phenomenon derived from the academic world and the industrial world having different gears.

Yes, hardware is progressing (though we've reached a few limits) at an amazing pace, and we've pushed technology into quite a lot of cracks and crevices in the last 40 years (at least, that's as long as I've been in the computer scene). But what I observe is that every 3-6 years there are cycles of entropy and creation, the result of the endless churn of 1. Industry -> 2. Academia -> GOTO 1.

Today's Rubyist is tomorrow's Cobol'er. It goes on and on, and this is - in my opinion - a people thing. Not a hardware thing.


The 1968 NATO SE conference is also the earliest reference I know of discussing software reuse (page 79 of the PDF, "Mass Produced Software Components"). Interesting to see how some of the problems then are still problems today.


Some?


Things which are a LOT easier:

1. Interactive editing
2. Interactive compilation (these two are probably the biggest reason programmers now are more productive)
3. Having enough storage for most problems
4. Having enough computation to solve most problems interactively
5. Handling numbers larger than one machine word
6. Handling floating point numbers
7. Handling precise decimal numbers (except for the lucky few using mainframes / COBOL)
8. Handling text using characters which wouldn't have worked in a telegram
9. Exchanging text with other systems (you couldn't even assume ASCII, with other encodings (e.g. EBCDIC) being common)
10. Tracking changes to source code
11. Distributing shared code and tracking new releases
12. Communicating with other systems
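For a sense of how much easier items 5-7 have become, a quick illustration in a modern language (Python here; nothing in it is specific to this thread):

```python
# Items 5-7: what once took hand-rolled multi-word arithmetic is
# now built into the language or its standard library.
from decimal import Decimal

# 5. Numbers larger than one machine word: ints are arbitrary precision.
print(2 ** 100)          # 1267650600228229401496703205376

# 6. Floating point is a native type (with its usual binary rounding)...
print(0.1 + 0.2)         # 0.30000000000000004

# 7. ...and precise decimal arithmetic is a stdlib import away.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```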

We've gotten a lot better on various technical details. The parts which remain difficult are generally different aspects of human limitations reasoning about complex systems.


"The only things that really changed over the last 30 plus years is computers are a lot faster and people are dumber." -Old Coder


I think that the success of open source libraries shows how software reuse can work rather well. However when designing a new library it is still (and may always be) difficult to "parameterize" (to use McIlroy's term) in the most useful way for your users.


May not be a problem for them any more..


Alan Kay suggested we have these problems because computer science is more like pop culture than science. http://www.youtube.com/watch?v=FvmTSpJU-Xc


Ironically, it is military software projects that are some of the worst examples of what Ross and Fraser warned about.


And the answer is ... ? :-)



