Time to Vanquish the Mythical Man Month (assembla.com)
27 points by tst on Aug 10, 2008 | hide | past | favorite | 21 comments


The key word in the sentence "Adding manpower to a late software project makes it later" is "late." - i.e. it's about adding developers in order to meet a preordained deadline. Open source just removed the deadline from the process.


Agreed - and that's what Agile does, too, to a large extent, by removing the focus on long-term planning, which is why it is more successful at delivering.


There's a bit of a contradiction in this article. He says that programmers seem to be about 5 times as productive as they were in Fred Brooks's day. If you reduce the size of a team by a factor of 5, you also reduce the communication overhead by a factor of roughly 25. I do not see that this is accounted for in his reasoning.
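To make that scaling concrete, here is the pairwise-channels arithmetic behind Brooks's communication argument (the team sizes are just illustrative):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n: n(n-1)/2."""
    return n * (n - 1) // 2

# A 5x smaller team has roughly 25x fewer channels to maintain.
for size in (100, 20):
    print(f"team of {size}: {channels(size)} channels")
# team of 100: 4950 channels
# team of 20: 190 channels
```

The ratio is 4950/190, about 26, which is where the "factor of 25" comes from.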

Also, the truth about most FOSS projects is that the lion's share of the work is done by a minority of the contributors.


Also, the truth about most FOSS projects is that the lion's share of the work is done by a minority of the contributors.

Extremely true - and I'd go further: the lion's share of the work is often done in a minority of the time spent by that minority of contributors. That is, some of the most useful improvements are also the ones that take a surprisingly small amount of time to implement; the bottleneck for many of the most useful changes is not coding time, but good ideas to implement. There are always plenty of hard, complex projects lying around; what is always in short supply are the easy, simple projects whose hardest part is coming up with the idea/algorithm/concept in the first place.

Of course, the other bottleneck for "short and simple" changes is convincing a developer to do it... or even convincing oneself to do it.


Also, the truth about most FOSS projects is that the lion's share of the work is done by a minority of the contributors.

Definitely - which makes the need for a large team even more dubious. What you want is not a team of 100, it's a team of 5 brilliant programmers. Of course, finding 5 brilliant programmers and getting them to work on your enterprise project is easier said than done. However, I'd wager it'll be easier to do once you have the budget of a 100-person team to share amongst 5, and once it's a team of 5 instead of a team of 100. I don't know any brilliant programmer who wants to work on an enterprise team of 100.


Agreed. Excellent points.


I have been on a project where the manager ignored Brooks' Law, and was well-funded enough to actually get away with it. He preferred to get lots (LOTS) of interns and first-time developers from top schools, presumably on the theory that they were smart and willing to work insane hours. But this team was also well-disciplined in writing tests and readable code, and did follow a lot of Agile processes. Pretty much every part of the codebase was accessible to every developer, and it was common to submit patches to other people's work.

So this was a good experiment to test the posted article's thesis; lots of good processes, Open-Source-like development style, giant team.

I wouldn't call the result a trainwreck, exactly, but neither do I ever want to experience that again.

It got to the point where some were openly calling for half the team to be fired; not due to incompetence but because we were digging ourselves deeper into refactoring debt every day. The ratio of "code written to work around misdesigns" versus "code that achieves something" approached 1:1, at least in my experience. At least two different frameworks sprouted up among the developers to work around each others' bugs.


He says:

From this point of view, you can see that open source projects have a huge scaling advantage because all code is shared. If someone is waiting for a component, and they are frustrated enough, and talented enough, they can just fix the problem. The code is all shared, anybody can build and fix any component, and the responsibility for critical components can move around. The answer to the dependency problem is less hierarchy, and fewer official tool builders, not more.

But this can't be right, because even closed-source commercial projects are "open-source" from the perspective of the developers themselves.


Do such comparisons between OSS and non-OSS take into account the vast wasteland of failed projects littering SourceForge et al?


The "vast wasteland of failures" is not at all unique to OSS. It's just that when proprietary software fails, nobody really notices, because it fails precisely because nobody noticed it or bought it.

That's one of the differences between OSS and proprietary software; OSS fails when nobody cares enough about it to continue developing it; proprietary software fails when nobody cares enough about it to use it.


The software process of open source is fascinating, but I think you can't overlook the fact that people are choosing to contribute and want to get something done. If you hired 100 developers and told them to contribute as needed would the outcome be as good as the big open source projects around? Most likely we'll never find out because management wants to control what gets done.


Skunkworks are probably the corporate version of open source. They work well.


Quite true, though they tend to be small. I'm imagining large groups of programmers simply tackling projects as they see fit within a code base. I'm not sure normal employment is quite up to the task - maybe if you paid people per feature, or spec'd bits out TopCoder-style.


Here's an idea... let's try paying them by the line of code or number of bugs fixed. Oh wait, maybe that isn't such a good idea.


Haven't there been comparative studies done on Linux and Windows operating systems as a whole? I think that the quality of both systems has been found to be comparable.


Yes, though I don't think you can say anything really objective about the "quality" based on the data available. I found some of the statistics in the case of Windows source indicative of lower readability and maintainability...defined global funcs and structs counts are higher than the Open Source systems (FreeBSD and Linux) and file length is significantly longer, even though Linux is a significantly larger codebase (the largest of all of the systems analyzed). But, of course, that's just numbers...roughly meaningless without context. And, obviously, Microsoft manages to keep shipping code that mostly works, and they hire some of the best developers in the world.

Here's the paper:

http://www.spinellis.gr/pubs/conf/2008-ICSE-4kernel/html/Spi...

And some additional commentary:

http://www.spinellis.gr/sw/4kernel/

Anyway, while I've never seen the Windows source code, I have looked at FreeBSD, Linux and Solaris, and all are pretty good. I think FreeBSD may be the cleanest (partly because it tries to be fewer things to fewer people than Linux), but probably not technically the most interesting. And, of course, I've only taken a few quick trips through FreeBSD and Solaris, and I've actually written driver code for Linux. Doing real work in a codebase always makes the warts more apparent.


"If one person today can do the work of 5 people from days gone by, that in itself ameliorates the problem."

Does anyone here actually believe that to be true?


Have you ever written assembly language on punch cards? That was, in fact, the toolchain in use for OS/360. No editor: you either write the cards out by hand and give them to a typist, or punch them manually. Even simple typos and syntax errors don't come back for half an hour or more.

So yeah, a factor of five for a typical programmer sounds about right to me. Truly great hackers might get closer (because they aren't as limited by tool overhead), but a typical IT wonk would be utterly lost in that environment and do much worse.


Maybe not by a factor of five, but at least by a factor of two. Assuming you're not using C++.


There's also waterfall vs. incremental approaches.

Most (successful) open source projects can be added to easily. That is, they are modular, with a basic core architecture, so a contributor can discover then understand then modify the part they need to - and those projects that aren't modular will be replaced by those that are.

There is an assumption here that the concept of the project is understood well enough for a workable architecture to be known - without this, it's harder to add contributors, because they must first grasp the project as a whole.


Interesting point that Brooks's square law of communication between project members might change if you can effectively access what anyone said to anyone at any time in the past - i.e. if all communication is in searchable electronic form (email etc).



