Simple, correct, fast: in that order (drewdevault.com)
519 points by ddevault on July 9, 2018 | 344 comments


When I was in school in the '70s (that's the NINETEEN seventies), there was a book called The Psychology of Computer Programming. It predates the microcomputer era as we know it; punched cards were still common when the book was written.

A computer was to control a new assembly line for a car company. They couldn't get the software to work. They called in an outside insultant. The outsider developed a program that worked. (It was more complex.) The book was about the psychology part. The original programmer asked: "How fast does YOUR program process a punched card?" Answer: "About one card per second." "Ah!" said the original programmer, "but MY program processes ten cards per second!"

The outsider said, "Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.

Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.


Your example is a case of premature optimization. That is not what the author is concerned with.

The problem is not the programs that obviously do not work, or that break in a very visible fashion. Programs whose deficiencies are known can be fixed or worked around.

The real problem is programs that appear to be correct but aren't.

In the words of Tony Hoare:

    There are two ways of constructing a software design:
    One way is to make it so simple that there are obviously
    no deficiencies, and the other way is to make it so
    complicated that there are no obvious deficiencies. The
    first method is far more difficult. It demands the same
    skill, devotion, insight, and even inspiration as the
    discovery of the simple physical laws which underlie the
    complex phenomena of nature.
Source: 1980 Turing Award Lecture; Communications of the ACM 24 (2), (February 1981): pp. 75-83.


That quote actually refutes the OP by reinforcing that correctness is more important than simplicity. Achieving correctness is the whole point of making things simple, after all. To put simplicity before correctness would be missing the forest for the trees.


In terms of designing solutions, I would say that "correctness" is relative to the problem statement at hand. It's also a matter of degree, not an absolute: the same solution may be correct in one context and incorrect in another. Given that, I would prefer simplicity over correctness, to allow for ease of optimization.


Correctness is achieved by simplicity.


    For every problem there is a solution that is simple, neat - and wrong.


Indeed - for example, you can make things look simple by leaving necessary parts out.

I think, however, that the more important part of this quote are the words 'problem' and 'solution'. Until you have an understanding of the problem that is correct, it is unlikely that you will come to a solution at all. Avoiding the introduction of gratuitous complexity is not strictly necessary for reaching that understanding, but it sure helps.


Clearly, that solution isn't simple enough.


> Correctness is achieved by simplicity.

That's... literally what I just said? "Achieving correctness is the whole point of making things simple, after all."


That's... literally what the author said.

    If your solution is not simple, it will not be correct or fast.
Correctness may be the end goal. But correctness is absolute, so it is a bad performance indicator to set as the goal. Yes, we can track bugs, but the absence of open bugs is no guarantee of correctness.

I can never say "We are 5% more correct than last week. Keep up the good work!"

Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.


> That's... literally what the author said.

Excellent, so we both agree with the author that correctness is the ultimate point and that simplicity is just a useful tool for achieving correctness. :)

> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.

How does one purport to measure simplicity?


I was considering ASMs (abstract state machines) for it:

http://pages.di.unipi.it/boerger/Papers/Methodology/BcsFacs0...

My thinking was like this: the complexity of software is synonymous with our saying we don't know what it will do on given inputs. As complexity goes up, it gets more unpredictable, because of the input ranges, branching, feedback loops, etc. So a decent measure of complexity might come from simplifying all that down to the purest form that we can still measure.

ASMs are a fundamental model of computation, basically representing states, transitions, and the conditions that make them happen. So those numbers, for individual ASMs and combinations of them, might be a good indicator of the complexity of an algorithm. And note that they can do imperative and functional programming.
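As a toy sketch of that counting idea (everything here is hypothetical and far cruder than the ASM formalism in the linked paper): model a machine as explicit states plus guarded transitions, and score it by counting them.

```python
# Hypothetical sketch (names are mine, not from the ASM literature):
# approximate "ASM complexity" by counting states, transitions, and
# distinct guard conditions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    src: str    # state the rule fires in
    guard: str  # condition that makes the transition happen
    dst: str    # resulting state

def complexity_score(transitions):
    """Crude indicator: more states, transitions, and distinct guards
    mean more ways the machine can behave, i.e. less predictability."""
    states = {t.src for t in transitions} | {t.dst for t in transitions}
    guards = {t.guard for t in transitions}
    return len(states) + len(transitions) + len(guards)

# A tiny card-reader machine as an example.
machine = [
    Transition("idle", "card_present", "reading"),
    Transition("reading", "read_ok", "idle"),
    Transition("reading", "read_error", "halted"),
]
print(complexity_score(machine))  # 3 states + 3 transitions + 3 guards = 9
```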

What do you think of that idea?


> ...reinforcing that correctness is more important than simplicity

It's the other way around. Correctness is obviously the goal (and likely performance too, depending on your use case), but the way to achieve it is through simplicity. So simplicity should be prioritized - as it allows you to ensure correctness.


I'm glad that we can agree that correctness is the goal, though I still take umbrage to the blog post's title, thesis, and conclusion. :P


By that logic, "fast" goes before "correct"; you can't print the answer quickly if you don't have the answer, after all.

> if your solution is not simple, it will not be correct or fast.

The point of the article is that "simple" is a prerequisite of "correct" (and "fast").


We reached the maximum thread depth.

>> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.

>How does one purport to measure simplicity?

There's 40 years of research into that. And loads of tools to support dev teams.

You can start here: https://en.wikipedia.org/wiki/Cyclomatic_complexity

Also related are costing models: https://en.wikipedia.org/wiki/COCOMO
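To make the cyclomatic-complexity idea concrete: it is roughly one plus the number of decision points in the code. A simplified sketch (real tools, e.g. radon for Python, count more constructs; the node list here is deliberately incomplete):

```python
# Simplified cyclomatic complexity: 1 + number of branch points.
# This sketch walks the AST and counts only a few node types.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in (2, 3, 5):
        if n % d == 0:
            return "divisible"
    return "other"
"""
print(cyclomatic(code))  # 1 + two ifs + one for = 4
```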


Derek Jones argues McCabe Complexity and COCOMO were scientifically unsupported with little bandwagons pushing them for reasons of fame and/or funding:

http://shape-of-code.coding-guidelines.com/2018/03/27/mccabe...

http://shape-of-code.coding-guidelines.com/2016/05/19/cocomo...


We also have 40 years of research into improving program correctness, e.g. static analysis, test suites (unit, integration, etc.), fuzzing/mutation testing, and the benefits of code review. The idea that simplicity (which I'm pretty sure that nobody in here is using to specifically mean "the lack of cyclomatic complexity") can be measurably improved but that correctness cannot is incorrect.


> The idea that simplicity (which I'm pretty sure that nobody in here is using to specifically mean "the lack of cyclomatic complexity") can be measurably improved but that correctness cannot is incorrect.

Have you seen a program that comes with a formal proof of correctness? I have. And boy, they are really simple.

The end result can be complicated. But the program is broken up into small, simple, easy-to-understand pieces that are then composed.

http://spinroot.com

https://frama-c.com


I think maybe you mistakenly assumed that response was in opposition to your comment, I read it as a simplification and restatement of what you said.


Yes but I think OP is saying that, paradoxically, prioritizing correctness over simplicity actually makes correctness more elusive than if simplicity were prioritized.


No, that's just the easiest path to it if your only tool is an unaided human brain.


That doesn’t mean simplicity is more important than correctness. The simplest program ever is an empty file, and it doesn’t solve any problem.


Depending on the interpretation of terms, I'd agree with either simplicity or correctness first. To disambiguate, I would say:

  Working, simple, correct, optimized.


Would deffo agree to this.

My approach is usually to send out a PR to a group of reviewers/users as soon as I can; it goes in the following stages.

1) POC - proof of concept. It does 90% of things; some parts are ugly and messy, but it validates a hypothesis. I want to stage this and get it in front of some alpha internal users as soon as I can. First-pass reviewers give a go-ahead on the plan of attack. Lots of sub-TODOs are listed in the PR. The goal is to discover edge cases and unknown unknowns.

2) Simple - Go through the PR and refactor any existing/new code so it's readable and DRY. If reviewers don't understand the "why" of some code, a comment is left. Now 90% of scenarios are covered; some edge cases may not work, but the edge cases are known. The code is simple and at the right layer of abstraction.

3) Correct, Testable - Edge cases are covered, tests are written, internal users have validated that the feature is working as expected.

4) Polish - if it's slow, then slow internals are swapped out for fast parts. Tests would mostly work as is. Same with UI: CSS fixes to make it elegant and pretty.

Sometimes the process is a day, sometimes it’s a week.


> Your example is a case of premature optimization. That is not what the author is concerned with.

I think he is. Premature optimisation is putting them in the order: fast, simple, correct.

So although the author doesn't explicitly state it, premature optimisation is something that would be avoided if you followed his advice.


>> They called in an outside insultant.

This is either a great typo, or a hilarious moniker I have somehow missed (almost 40 years in the business). Either way, it's worth recognizing.

Equal parts hilarious and accurate as "/in/con/sultants" are often brought in to play the part of the court jester -- they can speak the hard truths no-one else could, and survive.

>>"Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.

I think I wrote a device driver like that, more than once. :( Fast as hell, to the point of outstripping the bitrate of the device it talked to, and about as useful as a sailboat on the moon.


There's a great Dilbert where Dogbert wants to both con and insult someone. So he goes to consult for Dilbert's PHB.


> If the program doesn't have to work, I could make it read 100 cards per second.

> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Correctness isn't binary. Roughly no software today is 100% correct, but for most purposes you'd still pick the current version over a highly complex, slower, more-correct version.

Simplicity can save you a lot of cost as you edit the software, which helps you make it correct sooner. Simplicity and correctness go very well together.


Simplicity (however it is vaguely defined) is desirable, but at the end of the day it is a vehicle for correctness, and hence necessarily subordinate to it. Correctness is the destination of any piece of software (ultimately the goal of any piece of software is to work), and simplicity is just one route to it.


> Correctness is the destination of any piece of software

"Good enough" is the destination of any piece of software. Sometimes that means correct, but more often it means "oh yeah, sometimes it starts acting funny, just restart it when that happens"


Seems a bit like the "worse is better" philosophy :

    It is slightly better to be simple than correct.
See https://en.m.wikipedia.org/wiki/Worse_is_better


Agree. And "good enough" depends on your use case.


It never means "correct". Not to mention that 100% correctness is even impossible.


> Not to mention that 100% correctness is even impossible.

In which case please consider that everyone here is using "correctness" to mean "correctness that is achievable by reasonable human effort". :P It's easy to win any argument by taking one side to its logical extreme and asserting that it is therefore impossible, but that doesn't create a useful discussion. By the same logic we could assert that 100% simplicity is impossible, but that would be just as silly.


They said the "destination" is being correct, and I interpreted "destination" in the sense of "goal." My point was that some software has the goal of being 100% correct, but most software does not.


It depends on the severity of a bug. If it's very severe, you'll favor the complex but more-correct solution. Otherwise, you'll favor the simple but more-often-wrong solution, because it's easier to fix and to make progress with.

I use more-* phrases because it's always relative. Even NASA can't claim to have 0 bugs, although people die if they fail.

bit OT: There's a great article about NASA programming: https://www.fastcompany.com/28121/they-write-right-stuff


Why do you equate working software with correctness? Software that works is never correct by any definition of correctness, because working software is a system that exists in the real world and therefore can never have a complete specification written against it.


> Why do you equate working software with correctness?

Because the original author neglected to provide an adequate definition of correctness, thereby inspiring an epic HN flamewar as people now must run around endlessly debating semantics. :P


Sometimes people say things that might not be literally true or even "true in spirit", not because they are lying liars who love to lie, but because relating an exaggeration or a caricature or some other sort of not-totally-true thing will have a better effect on their audience than the strict truth would have. As we're now up to 17 comments you've made here emphasizing the skepticism you have for TFA's message, it seems that you value the "correct" more than the "simple". It could be that you are in the intended audience for TFA...


If correctness is some kind of continuum rather than a binary choice, then pick whatever trade-offs, costs, and other factors you want.

Plenty of times correctness is binary. In some cases it would be: passes all tests. Or: meets all requirements. Even if it could be "more" correct (or "more" simple), those aren't part of the tests/requirements.


I always thought correctness begins when the result of your work does what it's supposed to do.

Maybe it's supposed to move from A to B, maybe it should do it in under x seconds, maybe it should go via Y, maybe it has to be easily understood by a 6-year-old, etc.

But I can't really imagine something that has simplicity as the only requirement ("nothing" is the simplest thing, so that requirement would always be met with no action). So as long as the other requirements are met, simplicity is usually the nice-to-have add-on. And you can have correct and simple, or correct and complex. But correct (does the job) trumps simple. And the world is full of examples that prove this point.

I think the author meant "simple should be part of good design" but couldn't properly convey the message. He focused on making the message simple and ignored the fact that it's not correct.


I’ve noticed a pattern where the simplest solution DOES accomplish the goal, but isn’t what a user might consider the “shortest path”. How do you count a workaround where it technically can accomplish the end result, but requires a minor annoyance? What about a major annoyance?

What about a process so painful nobody has even thought of it?


It's nigh impossible to solve any complex issue with the "simplest solution" on the first try. This means that when you're faced with a complex issue, you will postpone the fix because it's not the simplest.

And you never know if it can be done in an even simpler fashion later.


This seems to mix correctness and completeness.

A good program does only the correct thing in a particular area. It is known to be reliable in that area, sometimes even formally proved to be so.

Outside that area, an ideal program refuses to work, because it detects that it cannot obtain the correct result. This is normally called "error handling".

There's also some gray area where a program may fail to reliably detect whether it can produce the correct result, given the inputs / environment. A reasonably good program would warn the user about that, though.

A "garbage in, garbage out" program is acceptable only in a very narrow set of circumstances (e.g. in cryptography).


Agree. In many (most?) cases there is no formal, verifiable correctness proof. And then you are way better off with the simpler solution once feedback from the real world arrives.


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Seconded. I'm highly confused at how many upvotes the OP has gotten in such a short time despite appearing to say that implementation details matter more than program output. A beautiful machine that doesn't work is, at best, a statue. I'm all for the existence of pretty things that do not need to demonstrate inherent practicality, but most people are not printing out source code for use as wallpaper.


To defend the idea, I think it starts with the assumption that software is often a moving target so "correctness" is at best a temporary state. If you had to use a codebase at any point of time you would obviously want the correct one, but if you look at the lifespan of software it would be better to have the simpler code. Simpler is (usually) easier to fix, easier to extend and easier to learn.

I think the author made this a little inflammatory to get people to think about it in these terms.


Easier to fix, yes. Tends to get more complex in nasty and ugly ways.

Easier to extend, almost never. Proper design for extensibility has an extra bit of complexity over the most obvious implementation. Simplistic implementations tend to be tossed away and are good only for unscalable prototypes.

Easier to learn, definitely not. The simplest code comes from deep understanding of the problem domain and algorithms. It is almost exactly like brevity in writing while not losing the point. It is easy to end up with simplistic instead of simple. There is a famous quote by Dijkstra which I'd rather not butcher from memory.


I think the core consideration is that software isn't static, and a machine that is held together with chewing-gum and silly string can produce the correct output and be a terrible machine at the same time.

What happens when it breaks? What happens when you need to produce doodads as well as gizmos, or a different size gizmo is desired? Who wants to reach inside the silly string and hope for the best?

I'm reminded of that old saying that even a broken clock is right twice a day; an overly complicated piece of software that produces the correct output is only coincidentally correct. Which I think is the point of the article.


That was my first thought as well, but then I realized that by correctness the author means "no bugs", which is considerably more ambitious than just making it "work".

I think the author implicitly assumes the software basically works right from the beginning of the article.


If that's the case then the author is attacking a straw man, because nobody (besides Dijkstra) is suggesting that we rewrite all the software in the world in Coq in order to 100% eliminate bugs at the cost of simplicity.


That's not what Coq would do, and a misrepresentation of Dijkstra's position. We certainly could use tools like TLA+ to assist us with existing code.

Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.


I apologize for using Coq specifically, I just needed a scapegoat for formal verification that people might have actually ever heard of. :P I'm happy to debate definitions, which the author of the OP has regrettably omitted (and the contentious definition here is probably the OP's notion of correctness, rather than their notion of simplicity).


You'd be surprised how simple a well-written proof can be compared to a program implementing an algorithm to do the same.

That said, Coq itself is not the best vehicle for this. There are nicer higher-order logic languages.


To be honest, I'm fine skipping it. I don't understand why this article is so upvoted anyways.


> Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.

Agreed, see Rich Hickey's "Simplicity Matters" presentation on the difference [0].

Simple-Complex vs Easy-Hard

[0]: https://www.youtube.com/watch?v=rI8tNMsozo0


I agree. What he means is: it should work first, as simply as possible. Then you worry about correctness. Correctness here is not referring to working/not working. It means 'how SHOULD this work?' or 'how should this logic or code be written to be most efficient or effective?'

Third is performance.

1. Write a working piece of software that does the job.

2. Refactor to make the working piece of software do the job more efficiently and elegantly.

3. Refactor to make the working piece of software do the job as fast as possible.


Seen this several times when someone refactors. The code is much simpler and easier to read, but does not actually work for several important test cases anymore.

I've never thought of simplicity adding upfront cost. That's probably true, but also true that it pays dividends later on in the project.


If it no longer works then I don't consider that to be "refactoring" but "rewriting".

I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.

And think of the word "factoring" as in high-school algebra, or rather "factoring out" something.

Say I have a dozen instances of the same calculation. How about we refactor it into a function and replace all the instances with a function call?
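Concretely, that "factor it out into a function" move might look like this (hypothetical numbers and names; the point is that observable behavior does not change):

```python
# Before: the same calculation repeated inline (hypothetical example,
# amounts in integer cents to keep the arithmetic exact).
subtotal_a = 10000
subtotal_b = 25000
total_a = subtotal_a + subtotal_a * 7 // 100  # price plus 7% tax
total_b = subtotal_b + subtotal_b * 7 // 100  # same calculation, duplicated

# After: the calculation factored out into one function, and every
# inline instance replaced with a call. Observable behavior is
# unchanged, which is what makes this a refactoring, not a rewrite.
def with_tax(cents: int, rate_percent: int = 7) -> int:
    return cents + cents * rate_percent // 100

assert with_tax(subtotal_a) == total_a
assert with_tax(subtotal_b) == total_b
print(with_tax(10000))  # prints 10700
```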


> I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.

This kind of transformation is precisely what the person who coined the term meant: Taking code which works and turning it into easier-to-read code which works precisely as well, because refactoring never introduces a change in behavior.

To quote Martin Fowler and Kent Beck:

> A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior… It is a disciplined way to clean up code that minimizes the chances of introducing bugs.

[snip]

Not a direct quote this time:

> Fixing any bugs that you find along the way is not refactoring. Optimization is not refactoring. Tightening up error handling and adding defensive code is not refactoring. Making the code more testable is not refactoring – although this may happen as the result of refactoring. All of these are good things to do. But they aren’t refactoring.

https://dzone.com/articles/what-refactoring-and-what-it-0


Simplicity is ALWAYS something desirable to achieve. Correctness comes first.

As code is originally written, people are (or should be) using the most "obviously" simple approach.

A breakthrough in simplicity is often the result of additional thinking and hard work (and cost).


Maybe the test cases / requirements are "wrong"? I think simplicity is the ultimate test that you found a good problem!


How sure are you that a given program is bug-free? I feel that only very rarely would I ever assert 100%. In fact, I would generally assert with 100% confidence that there is some overlooked edge case. How many users that bug may affect... well, I would generally give that a small percentage, but it still doesn't hit the boolean state of correct.

So correctness is generally never satisfied in my mind. At any given moment, the programs I am working on are in some way broken in my mind. Even if the other programmers thought that correctness was priority number 1, I will never consider the program correct. I will always suspect there is some snake in the grass.

I suppose you could feel the same way about simplicity. I think the most charitable stance would be to give them the same level of importance. Overly complex code cannot easily be proven correct amid changing business requirements. Easily testable complex code with a full functional test suite is less simple in one sense. Patently incorrect code is hardly valuable regardless of how easily one can understand its function.


None of them are absolutes. Just as we do not take "simple" before "fast" to mean "the code must be 100% as simple as it could possibly be before we even begin thinking about speed", we do not mean "the code must be 100% correct in every possible way before we even start thinking about simplicity".

These are relative preferences, more about what takes precedence over what than an absolute measure. Nothing is ever perfectly correct, nor perfectly simple, nor perfectly fast.


I cannot be sure the code was bug-free. It was an anecdote in a book whose focus was more about the psychology of those who wrote the code. But it worked, and the first program did not. The non-working code's author took pride in the speed of his code.


Don't get me wrong, I think it's an excellent anecdote. I just shy away from a focus on correctness, since in my experience people who prioritize correctness above all else usually make a shambles. I feel that people who prioritize simplicity still understand that it needs to work more or less correctly.


Yes, correctness absolutely comes first.

One way to achieve greater simplicity is to negotiate for fewer/simpler requirements for the first revision. There's often a core set of functionality that can be implemented correctly in a simpler way, and that gets the work done. Once that's in place it's interesting to see how often people lose interest in what were "hard" requirements before. It's also common that new asks have little to no resemblance to those unimplemented features, and are instead things that they found out they needed after using the new system.


The thing is, correctness is often a transient property. Requirements are frequently changing or evolving. What's correct on a Tuesday may no longer be correct by Friday. Under these conditions it's important that the software be amenable to change. It's for that reason I believe simplicity is more important than correctness.


Simplicity is also a transient property.

> Under these conditions it's important that the software be amenable to change.

At the same time, under all conditions, it is important that the software actually works (i.e. correctness), which is why it's more important than simplicity. Irate users who come to us telling us that our program doesn't work will find little comfort as we regale them with how simple it is.

First, make it correct. Then, make it simple. If requirements change what correctness means, then make it correct again, then make it simple again.


I encountered a similar argument in Clean Architecture by Robert Martin of correctness vs maintainability, where he argues for maintainability over correctness. The argument goes that if you had to choose between code that did the wrong thing but was easy to make do the right thing, and code that does the right thing but is hard to change, you should always pick the former.

He also talks more abstractly about the value of software (as opposed to hardware for instance) being primarily in its "soft"-ness, or ease of changing.

Ultimately this comes from his point of view as an architect, who fights more for system design than say, a PM might for user features. I've encountered the opposite school of thought that says: MVP to deliver features, refactor/rewrite later. I think the strategy to use will depend on the project and team (budget, certainty, tolerance for failure, etc)


"A program that produces incorrect results twice as fast is infinitely slower."

- John Ousterhout


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

It is true mainly for one-time contracts where you actually might not care about simplicity at all. Enough is enough.

However, in the case of iterative projects, keeping complexity under control has a much higher priority, up to top priority for very big projects. Complexity and high entropy can easily kill everything.


I don't think this is the form of correctness discussed here. I believe this is more like the "correct" discussed in the famous "Worse is better" essay.


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness

This may be a matter of definitions. It may be worthwhile to distinguish between general correctness and full correctness, i.e. as close to 100% provable correctness as you can get. That distinction lets us dismiss clearly degenerate cases (you can always write a one-statement no-op program that will be simple but do nothing).

General correctness is what I want in most cases. Example: voice dictation. It requires a final read & polish, but errors are infrequent enough to save me a lot of time. Full correctness is usually requested for jet avionics, nuke power plant control, etc.

With that addition, one should optimize for general correctness and simplicity as the first goal, with full correctness and performance as a very distant second.

When I write software (or build systems) what I end up with is usually significantly different from what I started with; not externally, but under the hood. Keeping designs simple (on large teams being almost militant about it) helps large systems morph as it goes from a proof of concept into an actual thing. My 2c.


> It may be worthwhile distinguish between general correctness and full, as close to 100% provable correctness as you can get.

Which is the root of the endless back-and-forth in this thread: a program has to do what it says on the tin ("general correctness") before anything else, and then probably be as simple and as "fully correct" as possible. But it's far easier to posit a distinction between general and full correctness than to actually find exactly where the dividing line lies between the two. A blog post discussing such a dividing line might have been valuable, but the one we've got here unfortunately just handwaves away all the hard questions.


There is no line between the two. It's something that depends on how much effort and time is put into this, what methods were used, etc. But, the world doesn't actually care about this specific property, as it has no inherent value. Instead we have various levels of assurance of more practical properties, like safety, but not correctness.


I assure you the world cares whether your algorithm is generally correct, passing unit and integration tests, etc. This is programming basics.


You cannot have correctness without simplicity.

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system." - John Gall


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Yes, but what is "correctness"? It's not usually so binary. Get to "good enough" and move on to the next thing.


This book, "The Psychology of Computer Programming", is by Gerald Weinberg, an author who really explores the design and complexity of systems. I recommend his other books, especially "On the Design of Stable Systems: A Companion Volume to an Introduction to General Systems Thinking".

http://www.ppig.org/library/book/psychology-computer-program...


> Simplicity always comes after correctness.

Strong disagreement here. A program that isn't kept simple will stop being correct, fast, or any desirable quality over time.


Nobody's saying that simplicity is unimportant, but if the failure mode of a loss of simplicity is that the program is no longer correct, then it inherently suggests that correctness is the primary metric to strive for. :P


Ability to change ("simplicity") is the key metric: it's what lets you maintain, or further, any desirable invariant. E.g. in B2C, "correctness" may be valued less than some other trait. Do you prefer to know something, or to be able to learn fast?


Works today, but tomorrow the complex solution does not do what is needed and presents a barrier to delivering what is needed now. In a static environment you are right, but static environments are vanishingly rare for software, by definition (because work stops after one iteration!)


What is this comment trying to say? That a simple program that doesn't work today is better than a complex program that works today, but that might not work in some nebulous future? Not all complexity is reducible. The point of software is to work correctly, not to satisfy the author's aesthetic notions, which is what most of the modern hype over simplicity boils down to.


I think you missed the parent comment’s point, which is that a highly complex implementation might have problems very similar to overfitting in statistics. Simplicity in some sense means “room to expand to handle future unseen cases.” If an implementation is very complex, chances are it has some assumptions baked in somewhere and when it hits the wrong corner case or a new requirement is added, it manifests not as some mere refactoring annoyance, but as a complete meltdown where the system is revealed to be incapable entirely, and has to undergo major delays due to huge refactoring that can lead to ripple effect problems in other parts of the system.

In that sense, simplicity is like insurance against the future, and so at any given moment you don’t solely care about the system’s total correctness or performance right now but also you care about some diversification benefit of investing in simplicity too.

Very much like how you don’t choose stocks based solely on what will have the highest expected return right now, but instead you also incorporate some notion of risk management when optimizing.


What I am saying is that a ball of mud that passes all tests is worse than something clear that fails corner cases (genuine corner cases) because sorting that out can be done. Whereas the ball of mud will definitely fail in the future and when it does nothing will help you apart from a complete rebuild. "It passes the tests" simply doesn't cut it.


This makes me think of the Donald Knuth quote, "Beware of bugs in the above code; I have only proved it correct, not tried it."

More info here: https://en.wikiquote.org/wiki/Donald_Knuth


Which is why we now have automated theorem provers that can refine proofs into programs.


+1. Simplicity won't work if it isn't correct.

Simple correctness is the best way to help beginners use software to get faster results. Fast isn't all about computation - it's taking the least amount of the user's time as reasonably necessary.


Gerald Weinberg. A classic, inspired much of Demarco and Lister's Peopleware.

https://leanpub.com/thepsychologyofcomputerprogramming


Does the time it takes to write come into play here at all?

I’m a novice of sorts. Thanks.


And since "fast enough" is a part of "correct", the order should really be "correct, fast enough, simple".


How do you know it was more complex? From what I read it was slower, which is different.


It has been too long. The book described both approaches that both programmers used. I simply no longer remember those details. As I recall, the working program, when explained gave you the "Ah Ha!" experience, and thus was simple enough. The focus of the entire book was more about psychology aspects. One chapter was about how programmers come to feel "ownership" of code.

Another thing: what was an entire program back then, is sometimes a mere function, or maybe a class or code library today.


If it's not simple, it might be incorrect and you'd never know until it bites you.


I was going to say the same.

The conclusion I came to personally was always

Accuracy > Maintainability > Performance

in that order


> Correctness comes first.

I agree with this.

Interestingly, the post is very simple, and not correct. I prefer posts which are slightly more complex but correct, but those don't get as many upvotes.


> Correctness comes first.

Not always. Have you ever used a SNES emulator? There is one emulator that is more correct than all others combined - it's called BSNES and it's the most true to the original SNES hardware of all the available emulators. Yet it is horrifically memory/cpu hungry - that correctness comes at a huge cost.

So no, correctness does not always come first, especially if you value other things like user experience.


Your definition of correctness is wrong in this case. If the purpose is to emulate the hardware as accurately as possible, BSNES wins. If the purpose is to make as many games as possible enjoyable for as many people as possible on the lowest common denominator hardware available today, BSNES loses.

There's no clinical definition of correctness here. Intent matters.


Correctness does come first, or else you can't play games the way they're intended to be played, but the way BSNES does it is wrong.

I believe that it does so through attempting to mimic the working circuit logic and chips, the physical hardware, within code alone, hence it requiring a powerful computer. This is an incredibly unoptimized way of doing it, especially since it's formed out of incorrect assumptions on what "accurate emulation" is.

It's the effects that we want, not the logic. If you're going to emulate something that, through common sense, shouldn't even require that much power, you're doing it wrong.

The saying goes, "keep it simple, stupid!" To overcomplicate things, like the programmer of BSNES did, results in unwieldy and unoptimized code.

Even Nintendo doesn't use this tactic with their official emulators. Yeah, sure, they're known to be inaccurate at times, but that's only because Nintendo's not aiming to build a general emulator to handle all case scenarios. Besides, many of the inaccuracies, as far as I could understand, deal with undefined behaviors of the system, something only things like glitches and bugs ever take advantage of.


The follow on from that is “performance is a feature”. If the emulator is supposed to emulate a fun, playable, game, then perf would be a required feature :-)


BSNES trades off performance for emulation accuracy. Other emulators trade off emulation accuracy for performance. No widely-used emulator that I know of has any care for simplicity at all (all of them are chock full of one-off special cases to benefit specific games). This has little to do with the screed in the OP, especially given how little the OP appears to value performance.


Correctness then is a trade off against other factors. Also it seems correctness in this case is a continuum rather than a binary choice. And you would prefer to trade other important factors for "true" correctness.

But I'll assume that you want the software that calculates your paycheck to be correct.


> So no, correctness does not always come first, especially if you value other things like user experience.

I think you're using a different definition of "correctness" than most other people in this thread. Which is understandable, a lot of folks are using different senses of it. What matters in the definition of correctness for an emulator is not, "Does this perfectly and indistinguishably emulate the hardware?" What matters is, "Can this emulate the cart I want to play right now with a good experience?" and perhaps, "Will this allow a malware maker to own my entire computer if they run a cleverly crafted fake cart file?"


I would argue that correct is more important than simple.

Consider timezones: it's simpler to pretend there's 24 time zones, one for each hour. But the correct assertion is there's 37 time zones (as of this writing). So, the simple solution results in a third of your potential user base having issues.

Other issues to pick: accessibility, cross-browser compatibility, legacy device compatibility... the list goes on.
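For instance, with Python's zoneinfo (3.9+, assuming a tz database is available on the system or via the tzdata package), several inhabited zones sit at half- or quarter-hour offsets that a naive "24 zones" model misses entirely:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs the OS tz database or the tzdata package

# Offsets are not all whole hours -- a "24 zones" model misses these entirely:
moment = datetime(2018, 7, 9, 12, 0, tzinfo=ZoneInfo("UTC"))
for zone in ("Asia/Kolkata", "Asia/Kathmandu", "Australia/Eucla"):
    print(zone, moment.astimezone(ZoneInfo(zone)).utcoffset())
    # Asia/Kolkata 5:30:00, Asia/Kathmandu 5:45:00, Australia/Eucla 8:45:00
```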


I think it's more in the spirit of the article to say, forget timezones, use UTC millis everywhere. If the server doesn't speak in timezones, then you eliminated all bugs where the server mishandles timezones.


> I think it's more in the spirit of the article to say, forget timezones, use UTC millis everywhere. If the server doesn't speak in timezones, then you eliminated all bugs where the server mishandles timezones.

That's a flagrant example of "simple and wholly incorrect". If you don't store timezones, your future dates will eventually turn out incorrect when timezone offsets change e.g. create a meeting at 9AM local, store as UTC, country decides to not follow DST that year bam your reminder will ping an hour early or late.

Or a day off when the country decides to jump across the international date line (https://en.wikipedia.org/wiki/International_Date_Line#Samoan...).
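That Samoan jump is actually recorded in the tz database, so it's easy to demonstrate (Python 3.9+ zoneinfo, assuming a tz database is available): the same local wall-clock time maps to UTC instants a full day apart on either side of the skipped date.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs the OS tz database or the tzdata package

apia = ZoneInfo("Pacific/Apia")
# Samoa skipped 2011-12-30 entirely, jumping across the international date line.
before = datetime(2011, 12, 29, 9, 0, tzinfo=apia)
after = datetime(2011, 12, 31, 9, 0, tzinfo=apia)
print(after.utcoffset() - before.utcoffset())  # 1 day, 0:00:00
```

A "9 AM in Apia" event stored as a pre-computed UTC instant before that rule change would have fired a full day off afterwards.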


Most applications don't need to schedule events into the future though, and it's a smart strategy if all you need to worry about is the past.


For very specific cases? Sure, but none of the comments talking about UTC everywhere care to specify this rather important bit.

Unless restrictions are specified I will assume we're talking about the general case, and for the general case it's just plain wrong.


From my point of view it's the other way around.

There are very few applications that need to schedule events into the future, and that is literally the only situation where you have to worry about the timezone.

Btw, keeping the timezone is insufficient as well if you're building a calendar/scheduler. If the user changes their timezone after scheduling the event... do you keep the old one and alert them at whatever time that works out to, or do you adjust? There are a lot of edge cases with schedulers -- yet as I said before, most applications don't schedule into the future. They're mostly just doing things right now or within the next few minutes and keeping a log of their actions.


> From my point of view it's the other way around.

> There are very few applications that need to schedule events into the future, and that is literally the only situation where you have to worry about the timezone.

My experience is the exact opposite: there are few applications which only store past dates, and in those said date is usually indicative/barely even relevant and could just as well be part of a freeform comment or removed entirely.


The UTC example allows the user to translate output using localised tools.

The solution isn't incorrect, it is modular.


> The UTC example allows the user to translate output using localised tools.

So you provide an alarm clock which works as neither a clock nor an alarm.

> The solution isn't incorrect, it is modular.

It's either not correct or not a solution, either way it's useless.

I also like how proponents of "simplicity at all cost" apparently assume/assert the composition of two systems is no more complex than either, and that there is no additional complexity to the composition layer.


In my experience, it's been easier to push UTC time all the way from the db to the user javascript and operate on time then, than trying to manipulate time before sending to the user. YMMV, and it's only web dev.


There are many kinds of data that get saved as time. For some, yes, it's better to add and remove the TZ at the interface. For some the TZ carries meaning by itself, and it must be stored and keep constant everywhere. For some the TZ carries meaning, but the time must be converted for display.

Know your data, and most of your problems get easy.


Yes, that's the crucial distinction -- whether the TZ, with its coarse encoding of the geography of origin, carries significance to the consumer of the data.


This works for things that have happened. It doesn't work too well for schedules. In those cases, the timezone of the source matters, heavily.


For example, a task that occurs "daily at 15:00" does not always happen every 24 hours. When DST comes into effect, the interval shortens to 23 hours once.


Or, "2:30 am" in the continental US can occur twice in a day, or not at all. Of course, even that isn't guaranteed, if you're in AZ.

Even "notify me in exactly 24 hours" has its own complications. Leap seconds will screw up your day (as will the vague request of "exactly 24 hours").

Corner cases, the bane of simplicity everywhere.
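The repeated hour is the reason PEP 495 added the `fold` attribute to datetime; a small demonstration with zoneinfo (Python 3.9+, assuming a tz database is available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs the OS tz database or the tzdata package

tz = ZoneInfo("America/New_York")
utc = ZoneInfo("UTC")

# Fall back (2018-11-04): the 1:00-1:59 AM hour repeats, so "1:30 AM" is ambiguous.
# PEP 495's `fold` attribute picks which of the two occurrences you mean.
first = datetime(2018, 11, 4, 1, 30, tzinfo=tz, fold=0)   # still on EDT (UTC-4)
second = datetime(2018, 11, 4, 1, 30, tzinfo=tz, fold=1)  # after falling back to EST (UTC-5)
print(first.astimezone(utc))   # 2018-11-04 05:30:00+00:00
print(second.astimezone(utc))  # 2018-11-04 06:30:00+00:00 -- same wall time, one real hour later
```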


To this day, knock on wood, I have had success at voluntarily avoiding those issues ^^ (aside from school assignments)


I agree - UTC is the way to go. The point of the example was to illustrate how simple could be detrimental to a significant portion of your userbase.

How about another example? You're building an android application. Let's pretend there's an API in the latest version of Android that reduces a dozen lines of code down to one function call - ShinyNewMethod().

You can use that ShinyNewMethod() call. It's certainly simpler.

But the vast majority of android devices in use are not running the latest OS. So ShinyNewMethod(), while being simple, will cause your app to not work for them.

Hopefully the framework designers figured out a way to have this automatically translated for older devices, but that's not always a guarantee.


This results in funny things like the Android support library.


But no, what about UTC leap seconds? Use TAI! :)))


The old yarn I'd heard for years is close to this.

Make it work. Make it work right. Make it work fast.

In that order.

Now it could be argued that "work right" can be read as "make it (work right)", or "(make it work) right", or both, but I think the point of this saying is that the "fast" part should always come later.


Yep, this is the version I heard. I think the distinction between "make it work" and "make it work right" is important, though - where "work" means "solve the problem", and "work right" means "solve the problem in a robust and reliable manner."

If your software doesn't solve the problem, it's useless, no matter how correct or fast it is. Once it solves the problem, then you can work on making it bug free and elegant. Once you're done with that, only then should you look at making it fast.

Note, of course, that "make it fast" refers to gratuitous optimisation. If it's too slow to solve the problem, then it doesn't work, and that needs to be fixed.

A similar adage states the rules of code optimisation:

1) Don't.

2) (For experts only) Do it later.


3) Profile before optimization.


4) And only after thinking about how to get the best bang for the buck when optimizing. Profilers don't always pinpoint the culprit.


I have a hard time seeing how profilers don't at least point you in the right direction. Or are we using different definitions of the word? They tell you how much time is spent in any given section of your program. If 80% of the time is spent in 20% of the code, it's usually a safe bet that's where you should start looking!


Generally I agree, but after you've picked off a few low hanging fruit, you'll end up with the profiler pointing at the function which does most of your grunt work, at which point you'd expect that to take up most of the time.

Once you've got this pretty optimized and it's still taking up the lion's share of your execution time, you have to look elsewhere (probably changing your overall approach or applying some higher level optimisation) to improve things further.


Yes, at that point you typically get to apply far reaching architectural changes or switch out the algorithm.

Sometimes it is quicker to start with just that instead of "polishing a turd". You can get it to be shiny but still nowhere near as shiny as gold.

Hope the code is testable and reasonably easy to modify. Otherwise it's going to be a rewrite.

The profile is then useful as a benchmark on real data. If you have enough time, you can turn that into a high level performance test.


Profilers have limitations like anything else, and it's possible to be pointing the flashlight in the wrong place. I probably wouldn't include that as a list item.

By the by, is there more than one kungtotte on the Internet? It took me a minute to think why that name was so familiar, but then I remembered watching a few hundred Beaglerush videos.


There must be more than one, because I've never heard of Beaglerush.

I've used this handle for a long time though (20 years or so), so it's all over the internet.


Kk, thank you for indulging my curiosity. Beaglerush is a very humorous Aussie who is notable for a video series on the Long War mod for the game XCOM. Long War turns a moderately challenging game of thirty or forty hours into an extremely complex, impossibly difficult ordeal of at least 400 hours per game. Beagle apparently often uses the handles of his friends as character names, and one of the best/worst parts of XCOM is that it's really good at making you care about the little blobs of pixels you order into virtual mortal peril, so to me kungtotte is like, the hero of 100 missions :)

If you're into strategy games, XCOM is good, and Long War is matchless. However, Beaglerush is actually surprisingly entertaining even if you don't care for his subject; the girlfriend is still not much into the game, but after the first couple episodes she insisted on watching the other hundred-thirty-odd videos. It's probably not everyone's cuppa, but it could be a thing.


Oh, that's me then :) I didn't put two and two together because I knew him as just beagle, and it was a long time ago that I hung out with him. We played together in a gaming community known as ShackTac or ShackTactical, playing a game called Armed Assault/ArmA.

Way back when a bunch of us put our names into a custom name file for XCOM so people could make campaigns featuring ShackTac people instead of generic dudes. I completely forgot about that until you reminded me. It must be five years since I talked to him :)


Oh cool :) Yeah, I did get the impression that the name list represented more former acquaintances than current ones, and I think he got pretty burnt out on YouTube generally. He seemed like a pretty great guy, and the series was excellent, so it seems you have a small measure of reflected glory at least :)


Sometimes I try to implement a common task in a new library, and I find that if I chain these 4 API calls and then extract some data with a regex, it will do what I want. I rarely actually implement the task this way though, because I know it's a common task, and there must be a better way. Sure enough, after some more research and reading of documentation I find one simple API call that does what I want.

I take this to the extreme, I probably wouldn't implement the complicated API / regex chain without 4+ hours of reading documentation and other research. It bothers me that much. If it seems like a simple and common task, I refuse to believe that there isn't a simple API call already to do what I want, I just have to find it. Sometimes, the simple API call really doesn't exist though, and you have to do what you can, with some comments explaining why.

I've noticed some developers will implement the 4 API chain followed by a regex as soon as they find it, and never give a second thought that there might be a simpler way.


This is good, because that process now becomes the API's problem and not yours. Their tests cover it, not yours, and it makes your work simpler.


The make it work fast portion isn't controversial. Premature optimization and all that...

The "make it work" implies a level of correctness & performance that is acceptable, which is why any subsequent steps are afterthoughts.


The missing part is "make it workable". By that I mean reasonably easy to modify. This may or may not involve simplicity, but usually involves modularity and a lack of hard interdependencies - weak coupling.

If you skip that, you will relatively quickly reach the point of a full rewrite.


Correctness (and any other kind of change in behaviour) is easier to achieve if things are simple.

Changes are not necessarily easy or even possible to make safely if things are correct but not simple.

"Simple" is a proxy for "can be changed safely" and so IMO is the most important quality to have.


I think you do whatever it takes to achieve correctness.

If simplicity helps achieve correctness, then great.

But correctness is not always simple.

Most people think there is a leap year every four years. They are wrong.


I know a couple of guys that would probably implement isLeap by checking that the lower two bits of the year are not set, then boast about how fast it was, and finally, when told that the logic was incorrect, push back because the additional checks would make it "slower".

In their minds, when things stop being simple, fast trumps correct.
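For reference, the full Gregorian rule is barely more code than the broken bit check; a sketch (the function name is mine, not from the thread):

```python
def is_leap_year(year: int) -> bool:
    # Gregorian rule: every 4 years, except century years, except every 400 years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The naive "divisible by four" check agrees most of the time, but not always:
print(is_leap_year(2016))  # True
print(is_leap_year(1900))  # False -- divisible by 4, but a century year
print(is_leap_year(2000))  # True  -- divisible by 400
```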


One nuance is that some parts of correctness may be negotiable.

If I'm writing a file backup program, it absolutely must back up all the files without leaving any out or corrupting the data.

But let's say it has a feature that prints progress indicator percentages on the command line, ranging from 0% to 100%. Maybe under certain circumstances (like files added to a directory after the backup starts), it prints 102%. It's not what I had in mind, nor is it something I'd call correct. But if fixing it complicates the code a lot, maybe leaving it that way is the better choice.

(This is a bit of a contrived example because you could just clip the value at 100, but you get the idea.)


The simple thing to do then is to call an `isLeapYear` function that abstracts any complexity.

Abstraction allows you to hide complexity and make it a simple, reusable part again.


But I could write a simpler (incorrect) leap year function than yours!


You missed the point of the article if you interpreted it as correctness being intentionally sacrificed for simplicity.

Plus, a complete leap year implementation is already what I would consider simple and most standard libraries already have an implementation for it you can use directly.


I don't think simple means discard if complex. It means using the simplest and most obvious tools (algorithms, libraries, &c) possible in order to achieve a correct enough software. So it does not mean discarding intricacies of timekeeping, but implementing it with the clearest, simplest use of abstractions and methods possible.


My calculator watch worked perfectly for 50 years on that "wrong" assumption.


> But the correct assertion is there's 37 time zones (as of this writing). So, the simple solution results in a third of your potential user base having issues.

That assumes your user base is evenly distributed across time zones. The US has 6 time zones, but if you only handled 4, you'd cover 99.3% of people.


The US has more than 6. There's a big difference between Denver and Phoenix, even though they're both "Mountain Time". Grouping them together would be overly simplistic. Things like "weekdays at 10 A.M." mean different things half the year.
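The divergence shows up directly if you resolve "10 AM local" to UTC in both zones (zoneinfo, Python 3.9+, assuming a tz database; the helper name is mine):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs the OS tz database or the tzdata package

utc = ZoneInfo("UTC")

def utc_hour_of_local_10am(zone: str, month: int) -> int:
    # What instant does "10 AM local" on the 9th of the given month map to, in UTC?
    local = datetime(2018, month, 9, 10, 0, tzinfo=ZoneInfo(zone))
    return local.astimezone(utc).hour

# Denver observes DST, Phoenix does not -- "10 AM" diverges for half the year:
print(utc_hour_of_local_10am("America/Denver", 7), utc_hour_of_local_10am("America/Denver", 1))    # 16 17
print(utc_hour_of_local_10am("America/Phoenix", 7), utc_hour_of_local_10am("America/Phoenix", 1))  # 17 17
```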


Obviously to solve the problem, a certain amount of correctness is implied. But once that threshold is reached, simplicity is a better use of your time. At least, that's the thesis of the article.


True.

The problem is always: how to get enough correctness that your customers are happy, but not spend too much time on it, so that rivals won't overtake you.


All general rules like this are terrible if you apply them blindly, but can still be useful if you apply them loosely.

In your example, I would take simple to mean "only use UTC". As soon as you need timezones I would say you've moved into correctness territory and need to do them all properly (you would use a good library, of course).


Every piece of software that has bugs could be considered not correct, which is the vast majority of it.


Depends on the definition of bug.


I don't think what you're saying is in conflict with the OP:

> The complex problem comes later, and it’ll be better served by the composition of simple solutions than with the application of a complex solution.

Complicated problem domains can be made into simple ones by breaking them down into their constituent components. You can solve time zones by having 5000 Rube-Goldberg-esque lines of if/else-if statements, or you can organize the system into simple components that build on each other.

At any given component or level the problems are clear, simple, and identifiable, and the complexity arises as the components join to form abstractions upon which higher levels operate.


Or you can end up with a mishmash of components instead of statements.

Premature abstraction is a kind of premature optimization, except you're not buying performance.

Bad abstractions tend to stay in for a long time.

What matters is clear delineation between functional components and weak binding, so that internals can change, and that the interfaces are relatively minimal.


The point is that it should be as simple as possible to solve the problem but no simpler.


This generic expression indicates why "correct" comes first. "Simple" has it as a dependency. ;)


It's order of importance though rather than an expression of dependencies and I'd definitely err on wanting my solution to be as simple as possible over merely correct. The idea being that whilst you are solving a problem you are always striving to solve it simply and only add complexity as needed.


So long as the software will only be shipped after all three stages, it's a useful metric to go by, I guess. It prevents you getting overwhelmed by the myriad amount of decision-making that writing software is. And also, so long as you accept the fact that you will need minor or major redesigns by the time you get to the "fast" version that is production ready.


Agreed. Software that does what it's supposed to is better than software that is simple (or fast), but buggy.


Which is why everyone uses OpenBSD instead of making a tradeoff with a different OS that isn't designed as securely.


Users go for usability features first, with an OS that is good enough. Good looks second. They can tolerate a bit of weirdness and some quirks as long as they can get the job done without cursing the thing. They will even tolerate non-critical crashes sometimes (no or little data loss).

OpenBSD is not bug free at all, it is just security oriented in the implementation.

Windows got traction because it has even better hardware support, a bunch of backroom OEM deals and nice UI features (at the time of 95), then went far on software availability.


As I understand it, both simplicity and correctness are requirements for most programs, but IMO the article talks about the design, the writing process, where you should focus first on writing a program that's not more complex than it needs to be, because otherwise you are likely compromising the other parts too. This doesn't mean incorrect code is acceptable.

In any case, I think it's also worth mentioning that the article is probably talking about "incorrect" as in "accidental bugs", not as in "purposely ignoring the complexity of the problem" - the idea being to prevent over-engineering rather than to dismiss the specifications of a valid solution.


My read of this that simple means not necessarily handling all possible edge cases yet having enough of a skeleton in place that the core functionality is there. Correct is extending (or replacing) that skeleton with coverage for all cases.


Edge cases? You mean we can't test for leap year by simply checking if the year is a multiple of four?


No that’s a silly example.

I meant more for dealing with edge cases like an external API invocation failing. A final “correct” implementation would need to handle failures but an initial simple one would only handle success.


Indeed. During my visit to India in February this year, I didn't even realize they were at a half-hour time-zone offset, all thanks to the miracle of modern technology (and hardworking people maintaining it).


But would it be better to account for non-end-of-year leap second additions vs something simpler that just ignores them but is wrong once every... decade?


Depends on the software.


He made up his mind, don't confuse him with facts.


Came here to say the same thing. OP doesn't know what he's talking about.


If you cannot achieve correct without simple, redefine correct.


Some problems are just hard and complicated. You can try to make code that dissects it into lots of simpler pieces, but then they have to be put together in a complicated way. Someplace in the code there will be some point of irreducible complexity. In my experience.


In my experience you need a paradigm shift then. If your earth centric approach is creating a need for epicycles of complex code, then you need to paradigm shift to heliocentric which eliminates the epicycles and replaces them with a simple elliptical model.


I'm willing to believe that there are sometimes problems that are inherently complex, but I find that much of the time the problem is complex only because you've insisted it be complex. If you take a step back and re-examine the problem you often see that redefining the problem makes everything much more simple.


Having software people interact directly with the user/customer (internal or external) can be very useful because minor changes in requirements can make a huge difference in implementation complexity.


> If you cannot achieve correct without simple, redefine correct.

More hand-waving.


"redefine correct" = "find a better version of the problem". Software development starts with requirements analysis.


I disagree. I just can't get more specific without specific cases to examine. Like others have pointed out, take the matter of timezones: if "correct" is defined as "handling timezones", you should instead store time in UTC everywhere and redefine "correct" to be "convert times to local time when displayed and back again when input", which can be accomplished with much smaller, simpler, and focused tools.


> you should instead store time in UTC everywhere

It is not possible to store UTC unambiguously on the db server for all future local wall-clock times. (Previous comment about the erroneous assumption of "UTC everywhere" being a "simple solution".[1])

Therefore, redefining "correct" to be "store UTC everywhere" achieves the exact opposite: an incorrect and buggy program. That's because the "universal" in Coordinated Universal Time doesn't apply to governments changing DST and time zone rules in the future.

Pure UTC doesn't have enough metadata to encode future unknowns. For correct handling with zero loss of data, one must store the user's intended "wall-clock" time in his local TZ in the db.

There's irreducible complexity when dealing with user-specified appointment times, so an uncompromising fixation on programming a "simple" implementation with pure UTC-on-dbserver and localtime-only-at-browser-JavaScript will lead to a broken calendaring program.

[1] https://news.ycombinator.com/item?id=10990240
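A minimal sketch of that storage scheme, with a made-up row shape (zoneinfo, Python 3.9+, assuming a tz database is available): persist the naive wall-clock time plus the IANA zone name, and resolve to UTC only at the moment the value is needed, so current tz rules always apply.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs the OS tz database or the tzdata package

# What goes in the DB row: the user's stated intent, nothing pre-resolved.
stored = {"wall_clock": "2030-01-11T09:00", "zone": "Europe/Berlin"}  # hypothetical schema

def resolve_utc(row: dict) -> datetime:
    # Done at query/notification time, so whatever tz rules are current get applied.
    local = datetime.fromisoformat(row["wall_clock"]).replace(tzinfo=ZoneInfo(row["zone"]))
    return local.astimezone(ZoneInfo("UTC"))

print(resolve_utc(stored))  # 2030-01-11 08:00:00+00:00 under today's rules (CET, UTC+1)
```

If Germany changes its offset rules before 2030, nothing in the row needs updating; the next `resolve_utc` call simply produces a different instant.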


You're right, future times are more complex and might require more attention to detail. But I think you can still achieve the requirements in a simple way, perhaps by storing it as UTC + lat/long and running a script to update future dates when someone changes their rules.


> But I think you can still achieve the requirements in a simple way, perhaps by storing it as UTC + lat/long and running a script to update future dates when someone changes their rules.

Congratulations: your emotional refusal to deal with zoned datetimes has led you to a non-standard, ad-hoc reinvention of timezones. Your misguided quest for simplicity and obstinate rejection of reality has thus led you to a system which is definitely more complex, probably less correct, and likely less performant than if you'd just done the right thing in the first place.


>a simple way, [...] and running a script to update future dates

I've commented previously that it's not a good idea to change the rows of UTC times in the database.[1]

Designing "system correctness" to depend on the reliability of correctly written SQL statements completing atomic transactions across millions of rows is not a good idea. In addition to being extremely fragile, batch db updates of UTC times are also not simple.

(It's fascinating to note that multiple programmers independently arrive at the approach of updating database rows of UTC times. There's something cognitively satisfying about it that attracts repeated reinvention.)

[1] https://news.ycombinator.com/item?id=10991894


The event's time has changed, though. The local representation of when it will occur has not changed, but if you set a timer today and they change their timezone tomorrow, the timer will expire at the wrong time.

We should store the actual time of an event and update it when the scheduled time changes.


>, the timer will expire at the wrong time.

A countdown timer is a runtime concept.

Storing pure UTC and/or intended_localtime_plus_TZ in the database is a static concept of data-at-rest.

A timespan/timer is a different abstraction than a desired point-in-time.

Depending on the use case, the correct timer/timespan value can be derived from pure UTC (e.g. scientific celestial events) -- or -- user_specified_localtime_plus_TZ (recurring yoga class at 5:30pm every Wednesday, or take medication every morning at 7:00am).

For user calendaring and scheduling of social appointments, storing pure UTC will lead to data loss and errors. Instead of complicated mass updates of millions of db rows, it's much more straightforward to take a stored localtimeTZ, and then calculate an up-to-date UTC time at runtime, and then derive a countdown timer from that. The key insight is that the best time to use UTC is when the users need that timer at runtime -- and not when they store the row in the db.
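A minimal sketch of that runtime derivation (illustrative Python; the field names and values are hypothetical, not from any real schema):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Data-at-rest: the user's intended wall-clock time plus an IANA zone name.
stored_wall_clock = "2025-11-05 17:30"   # e.g. yoga class at 5:30pm local
stored_zone = "America/Denver"

# Runtime: resolve to UTC using whatever tzdata rules are current *today*;
# a countdown timer can then be derived from this freshly computed instant.
local = datetime.strptime(stored_wall_clock, "%Y-%m-%d %H:%M").replace(
    tzinfo=ZoneInfo(stored_zone))
print(local.astimezone(timezone.utc).isoformat())
```

If the zone's DST rules change between storage and runtime, the stored row stays valid and only the derived UTC instant shifts, which is the point being made above.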


> We should store the actual time of an event and update it when the scheduled time changes.

I would love to see some (simple) code which will send a single alert to me at 1:30am and another at 2:30am. My client registered me as MST (-7) when I set these two alarms in February.

Of particular note for corner cases: Nov 4th and Mar 10, 2019.

The "scheduled time" will change, for many locations, twice yearly.

EDIT: For added fun, instead consider the registration date as May 10th with the same timezone.
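For what it's worth, both corner cases fall out of a few lines of Python's zoneinfo (a sketch; I've used 2018-11-04, an actual fall-back date for that zone):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

denver = ZoneInfo("America/Denver")  # MST/MDT

# Fall back: 1:30am occurs twice; `fold` picks which occurrence you mean.
first = datetime(2018, 11, 4, 1, 30, tzinfo=denver, fold=0)   # still on MDT
second = datetime(2018, 11, 4, 1, 30, tzinfo=denver, fold=1)  # back on MST
assert first.utcoffset() == timedelta(hours=-6)
assert second.utcoffset() == timedelta(hours=-7)

# Spring forward (2019-03-10): 2:30am never appears on the wall clock,
# so "alert me at 2:30am" has no literal answer that day; zoneinfo will
# quietly map the nonexistent time onto one of the neighboring offsets.
gap = datetime(2019, 3, 10, 2, 30, tzinfo=denver)
print(gap.isoformat())
```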


>You're right, future times are more complex and might require more attention to detail. But I think you can still achieve the requirements in a simple way, perhaps by storing it as UTC + lat/long and running a script to update future dates when someone changes their rules.

Seriously? That, in your opinion, is simpler?


>I disagree. I just can't get more specific without specific cases to examine.

You should probably mention somewhere that you're the author of the blog post under discussion. And it looks like you're going to make a reputation for yourself as the guy who argues that it's more important for software to be simple than for it to function correctly.

Good luck with that.


Noop is pretty simple...


And if the opening statement of the article is taken as correct, noop is by design the only way to do things, since its simplicity will always win against more complex code that actually does the task.

"The single most important quality in a piece of software is simplicity. It’s more important than doing the task you set out to achieve. It’s more important than performance. The reason is straightforward: if your solution is not simple, it will not be correct or fast."

It praises a quality that is great as an add-on, not really by itself. Pretty sure everyone prefers a complex thing that "does the task" to a simple one that doesn't.

Simple "helps". Simple never "does". I think the author's values are a bit mixed up.


Is the software engineering profession doomed to lose its memory every generation? The premise of this post is ridiculous:

>The single most important quality in a piece of software is simplicity.

How Panglossian, imagining the best of all possible worlds. Well, the world is intrinsically complex, as Fred Brooks explained in his 1986 essay No Silver Bullet[0].

"The complexity of software is an essential property, not an accidental one."

Sure, there is accidental complexity in most software problems, that can be tackled with skill and experience, and maybe reduced to zero. But then you are left with the essential complexity of the world. And you are done reducing the complexity; you can only manage it from then on. The world is very, very complex and it is a pipe dream to imagine that we can eliminate its complexity just by some bold engineering.

[0] http://worrydream.com/refs/Brooks-NoSilverBullet.pdf


Anyone who has spent any time developing anything but a tiny software system knows that the biggest impediment to productivity (feature delivery, bug fixing, etc) is the complexity of the system at hand.

In a sense, this post is simply stating the obvious.

The biggest differentiator of skilled software practitioners is the ability to construct simple systems.

To call this claim Panglossian or meaningless is to hold the philistine line that this skill set doesn't matter, that any complex system is effectively the same as any ol' simple one -- don't worry about cultivating the skill, it doesn't matter anyway...

But simplicity is the single most important thing in maintaining any system other than one-off scripts, hack jobs, etc. -- It's absolute torture to collaborate on a software project with anyone who rejects this premise.


You are making the common mistake of confounding essential complexity with accidental complexity. One you are stuck managing and one you can eliminate with skill. The world isn't getting less complex just because you work harder on your software.


Also it’s not about working harder; it’s about building smarter—building simple systems—which absolutely makes your world simpler. ‘Essential complexity’ is what sophomoric developers decry when they are unable to architect well—and are unwilling to do the hard work it takes to learn to architect well.


The world, from software’s perspective at least, isn’t growing in complexity. What leads one to that?


The world is growing more complex from the software's perspective.

And by that I mean it has to account for more scenarios or do additional things... your software can hardly avoid growing in complexity unless you're only removing features.

Bugs in software come from thinking we've simplified the world in one particular way through the program, while in reality the program receives a slightly different picture.


A well-built software system doesn't have to grow in complexity over time. More features does not equal more complex in the true sense of Simple.

Faulty data models and system designs -- that aren't fixed -- lead to ever-increasing complexity. But that is the fault of the data model/designer.

I.e., there is a way to build (and grow) systems w/o linear increase in complexity -- but it takes a particular rare skill set.


> The biggest differentiator of skilled software practitioners is the ability to construct simple systems.

I would say it's to construct simple enough systems, and the hallmark of skill is a developer's ability to define enough.


I believe you're conflating simplicity/complexity with flexibility.

_One_ hallmark of developer ability is the discretion/wisdom/experience to know how flexible to make the thing. (How to prioritize and limit feature-creep, etc.)

But this is different than Simplicity. A general purpose programming language or database -- highly flexible/generic systems -- for example, can be built well/simple. But so can highly _specific_ systems.

In both cases though, one can build something that is decoupled and manipulable or one can build something that is coupled and rigid -- and the ability to do so is a function of skill set _not intrinsically_ a function of time. In other words, a skilled developer doesn't have to "take time" to deliver a Simple capability.


Sometimes you just want a hammer, not a swiss army knife.

And sometimes it is good to have a replaceable head on the hammer.


Yes! And the property of Simplicity is orthogonal to whether or not you chose to make the Knife or the Hammer.

A Hammer's construction can be Simple. A Knife's construction can be Simple. Or not.


I can't really understand the equivocating tone a lot of folks are taking in response to this, and more importantly I can't wrap my head around how you could make such a statement in the first place: without correctness you've got nothing. Stating authoritatively that correctness comes after...anything is incomprehensible to me.

It's possible to have a correct solution that is neither simple nor fast, and it can be worth your while to speed up a correct solution while sacrificing simplicity. So there are trade-offs involved in the relationship between simplicity and speed, but correctness is not negotiable. Acknowledging that all software has bugs is not the same thing as throwing out correctness as your first and primary objective in implementing an algorithm, and accepting that your solution may only be partial or fail with certain inputs is fine if that is acceptably correct for the problem at hand, but ascertaining that still comes first. Preferring simplicity over complexity because it makes debugging, profiling, etc. easier is not a reason to insist that correctness can go out the window in service to simplicity--who cares if you've removed all the bloat from your code if it's wrong?


I am reminded of this classic snark from The Elements of Programming Style (1974)[0]:

"Some compilers allow a check during execution that subscripts do not exceed array dimensions. This is a help … many programmers do not use such compilers because “They’re not efficient.” (Presumably this means that it is vital to get the wrong answers quickly.)" (Page 85)

[0] https://en.wikipedia.org/wiki/The_Elements_of_Programming_St...


Usually the real reason is that such checks are pointless, as they do not pinpoint the bugs. They fire too late. You need a real stack trace to begin debugging such issues.

Languages like Ada SPARK or Rust tend to rarely use or need such runtime checks. (They are available as an option to check unsafe code.)

Others like Python and Java do check and give you traces. Not for free though.

And then you probably want something more powerful, such as a virtual machine like Valgrind, the full sanitization of AddressSanitizer, etc.
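The difference in diagnostic value is easy to demonstrate in a checked language (a trivial Python sketch; the function and data are made up for illustration):

```python
import traceback

def read_sensor(samples, i):
    return samples[i]  # the subscript is checked on every access

samples = [0.1, 0.2, 0.3]
try:
    read_sensor(samples, 10)  # out-of-bounds access
except IndexError:
    # Unlike a bare "subscript out of range" abort, the traceback names
    # the file, line, and call chain that produced the bad index.
    traceback.print_exc()
```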


"without correctness you've got nothing."

Sure... but define "correctness".

Suppose my manager comes to me with some incredibly complicated problem. It's going to take six months to solve properly. Suppose in the first three weeks I implement a program that is 98% correct, and let's say it can detect the other 2% and kick it out for a human to solve. But it clearly does not fully and correctly eliminate the problem as brought to me by my manager. Have I solved the problem?

The correct answer is not "no, because your solution is incorrect and there is no such thing as an 'incorrect solution' because all solutions must be correct to even be solutions; you have no professional choice but to spend the next 5 months and a week implementing the correct solution". The correct answer is "the question is underspecified". I need to go to the manager and work with them on the question of what the benefit of just deploying this is, what the benefit of doing it "correctly" according to the original specification is versus the cost, and whether or not there are any other in-between choices. The business may require the full solution, sure. On the other hand, your manager may be inclined to thank you profusely for the 98% solution in a fraction of the time because it was far more than they dreamed possible and is way more than enough to make the remaining 2% nowhere near the largest problem we have now.

"Correctness" is only fully defined in a situation where the spec is completely immutable. Specifications are almost never completely immutable. So for the most part, everyone in this conversation using the word "correct" without being very careful about what they mean are not using a well-defined word.

It's all about costs and benefits, not correctness and incorrectness. For nearly two decades, Python's sort algorithm was technically incorrect: http://envisage-project.eu/proving-android-java-and-python-s... Does this mean that any program that used Python sort was worth "nothing", because it was not correct? Obviously this is absurd (in practice at least), so correctness must be understood in terms of costs & benefits to make any sense. And such an understanding must also be grounded in an understanding of the mutability of requirements as well, to make any sense of the real world.

From this perspective, it honestly isn't even 100% clear to me what prioritizing "correctness" over everything else would even mean. That we are slaves to the first iteration of the spec that comes out, no matter what? (Obviously not, but I can't come up with anything better that it might mean.) Correctness can't be prioritized over everything else because it can only be understood holistically as part of the whole process. There is no way to isolate it and hold it up as the top priority over everything else. And there is no way for the correctness of a bit of software to exceed the scope of the specification itself, almost by definition, which in the real world tends to put a pretty tight cap on how correct your software can even be in theory, honestly.


This is the most important part, and people advocating “correctness first” are missing this point.

“Simplicity first” means having a minimal skeleton code with glaring weaknesses, unimplemented features, and bugs, but having a simple design that sets you up well to absorb the inevitable shitstorm of changing priorities, pivots, revised performance constraints, feature wishlists, budget, deadlines, etc., and to manage extensibility, integration, or abstraction needs as they arrive in random, ad hoc ways.

Usually project stakeholders don’t care about absolute functional correctness, meeting performance criteria, or completeness until far far later in a project lifecycle, after those requirements have been thrashed around and whimsically changed several times.

Early on, they care about a tangible demo apparatus and solid documentation about the design and tentative plan of implementation. They want to see steady progress towards correctness & performance, but generally don’t care if intermediate work-in-progress lacks these things (often even for early releases or version 1 of something, they’ll prioritize what bugs or missing features are OK for the sake of delivery).

In terms of interacting successfully with the business people who actually pay you and determine if your project lives on or gets scrapped, “simplicity first” is a total lifesaver, and matters far more than any of the notions of correctness discussed here.


But this is what ddellacosta is saying. I imagine his interpretation of this argument (and I agree) is that 98% correct IS prioritizing correctness. A very fast, simple solution that is 2% correct is an unacceptable balance.


"A very fast, simple solution that is 2% correct is an unacceptable balance."

That begs the question (in the original sense) of "unacceptable". If I banged out that 2% solution in an hour, and it lacked other costs that outweighed the benefits, it may still be something we ship! It is unlikely that we'd stop there, just because the numbers as you've given are unlikely to favor it because something else substantial would have to overcome the small amount of the problem we've solved, but to be firmly confident it's "unacceptable" you'd have to define "acceptable" a lot more carefully.

I understand the deep temptation to turn to discussions of the virtues of letting bugs through or something, but the costs/benefits framework completely handles that already. If you ship a buggy piece of "incorrect" shit, well, you've incurred a ton of costs with no benefits. That's wrong, by whatever standards you are measuring costs and benefits by. There isn't a "what if your 98% solution actually has a massive bug in it because you were unconcerned about 'correctness'?" argument to be made, because if it does have a massive bug, it's not a 98% solution.


>Sure... but define "correctness".

How about this, for example:

The patient lives.


A nice snarky reply.

But I would put $10 down that if I asked you to assert that all medical software in current use that has never killed a patient because of its software issues is therefore "correct", you'd walk back hard. You'd have to be crazy to assert that all such software is "correct".

Unless you are willing to make that assertion, you don't really mean that as a definition.

This is also an example of what I mean in my cousin message about the temptation to turn this into a discussion about attention-grabbing bugs. But my framework already encompasses that. Software that kills patients is software very high on the costs side. There's a complicated discussion to be had about how to exactly quantify probabilities of failure vs. cost, but you can't have that discussion if you're stuck in a "correct or not correct" mindset.


>But I would put $10 down that if I asked you to assert that all medical software in current use that has never killed a patient because of its software issues is therefore "correct", you'd walk back hard. You'd have to be crazy to assert that all such software is "correct".

I made no such assertion. That's a straw man.

But I think I can safely assume that if the patient dies as a result of the software's functioning, that software is not "correct".

You may disagree, but I think it's preferable to have a patient kept alive by an overly-complex system than killed by a simple, elegant, incorrectly functioning one.

Not to be intentionally blunt or snarky, but I think Drew DeVault's post was a bunch of rambling, hand-waving nonsense. Until today, I wouldn't have expected anyone to seriously argue that simplicity is more important than correctness. But he comes along and makes that very argument, with a self-assured, authoritative tone, but very little in the way of concrete reasoning, and to my surprise, the number of people on HN who apparently agree with him is non-zero.


There's a sweet spot in the neighbourhood of well-defined problems and low complexity, that sings phrases like Before even trying to implement it I know it can be done right, or Even though it can't be done 100% right, that's just the nature of the problem domain. Give the users a good enough solution that they can understand, so they can use it in the way _they_ need.


I think those asserting that correctness comes first are somewhat missing the point. First, a simple solution must still be a solution: if your code doesn't solve the problem, you can't stop there. I think the author is suggesting that truly _correct_ code (code that produces the correct output under all circumstances) is only attainable iteratively, and if your code is not simple (and let's also remember here that simple ain't easy!) then reaching correctness or performance will, in the main, be quite difficult. Not only will it be increasingly difficult to reach a state of correctness again after a bug is found, and it will be found, but even measuring performance will become increasingly challenging. At least that's the lesson I take.


In many years of practice, I haven't found the iterative approach to produce either simple or easily maintainable code. It tends to grow rings instead. Each layer is relatively simple, but altogether it is neither performant nor simple.

The simple code of present was almost always written by someone who understands the problem domain really well in one or two tries.


Yeah, maybe "iterative" is too facile a concept to contain what is meant here. Maybe "fractal", or recursive, is better. Maybe though that's the point: it is hard after "multiple rings" to make code simple anymore, so better to start off optimizing for simple first. Correctness requires exposure to cases you didn't know you didn't know (unknown unknowns).

This is one of the reasons why I am suspicious about the long-term saliency of so-called "smart contracts" on the blockchain. The immutability of code, while super amazing for digital assets, seems like a horror-show of a liability for dApps.


Performance in most non-trivial software, and especially infrastructure software, is architectural. In many cases an architecture that will allow your software to be performant requires a commitment to a very substantial amount of software complexity upfront to ensure adding performance is much simpler (or even possible) later. There are also rarer cases where correctness is not simple, so there is no trivial path between the simple implementation and a rigorously correct one. While "simple" is easier for the software engineer, customers pay for "correct" and "fast".

In my own area of work (database engines), the common mistake is that inexperienced designers do focus on simplicity first, instead of correctness and performance, not understanding that it is at best difficult and sometimes impossible to add correctness and especially performance later. The fast win of "simple" can turn into nearly insurmountable technical debt when you are asked to deliver scale and performance. People often grossly underestimate the minimum amount of initial implementation complexity required for good architecture.

There are many types of software where "simple, correct, fast" is sound advice but it is far from universal.


I think this is lacking a definition of simple. And where in the problem space do we desire simplicity? Simple in the implementation and conversely complex in the interface (i.e., C-style libraries)? Or complex in the implementation but simple in the interface (i.e., Haskell/FP-style libraries)?

My definition of simple software is software that I can validate the correctness of using only equational reasoning and the mathematical tools used to carry it out without any specialized knowledge or verification systems.

If I have to learn a new way to reason about a software system in order to understand it then it is complex.

A priori any system written in C fails this litmus test: one must understand and identify the many ways that undefined behavior can enter into their program and be leveraged by their compiler. One cannot reason about a local expression in the presence of global effects and unchecked side-effects. And if it is possible to write a correct C program it takes considerable effort and the use of very specialized verification tools.

There are many reasons to prefer C however; if we're willing to live within some tolerance of "correct" and "incorrect" then we can leverage a tool-chain that can produce highly performant code... but then we're forced to restrain ourselves from introducing complexity instead of spending that effort on other things.


+1 to acknowledging that when it comes to software, "simplicity" does not have one exact quantitative meaning we all agree on -- for now, simplicity is still in the eye of the beholder in my opinion.


Sounds like you're over complicating simplicity


Not convinced. Simplifying a correct implementation can be easier than correcting a simple implementation. Eventually you need to find the correct model and then everything else will follow easily, but a complex implementation that does the right thing will tell you a lot more about what the correct model is than a simple implementation that doesn't do the right thing.


Without defining what "simplicity" and "correctness" are supposed to mean, this article is empty of content. The title appears to be riffing off of the famous saying "First make it work, then make it right, then make it fast" ( http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast ) which is supposed to be a warning against premature optimization (one that is less often taken out of context than Knuth's famous saying). But by lumping both "making it work" and "making it right" under "correctness", it makes it appear that the author values simple software that doesn't do its job over complex software that does. And the problem is that you can't easily slot simplicity in by drawing a stark dividing line between "making it work" and "making it right", because it's a continuum of correctness. At best, simplicity is more important than performance, much of the time. But at the end of the day the point of software is to perform a specified task, whether or not it is achieved in an aesthetically pleasing way underneath.


This is great, but completely lost on the crowd if what Simple means isn't understood.

One of the best clarifications of what it means to be Simple, to put it out there, is [1]; but the key point: Simple != Easy.

Simple means minimal coupling, high-cohesion etc etc.

Yet IME many developers do not understand the distinction and mistakenly believe that easy is the same as simple, and are willing to couple the hell out of the world under some false notion of "simplicity"...

[1] https://www.infoq.com/presentations/Simple-Made-Easy


In a way, simplicity is the end result of reducing the complex and correct solution without affecting its correctness.

As in math, you come up with the "simple" solution of 0.5 only after you've realized that the "complex" solution is, for example, "sin(pi/4) * cos(pi/4)". There might be no other way to discover the simple solution.
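A quick check of that example (plain Python floating point, so equality only up to rounding):

```python
import math

# sin(x)cos(x) = sin(2x)/2, so sin(pi/4)*cos(pi/4) = sin(pi/2)/2 = 0.5
complex_form = math.sin(math.pi / 4) * math.cos(math.pi / 4)
print(abs(complex_form - 0.5) < 1e-12)  # True
```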


That talk transformed the way I think about software development. I highly recommend watching it.


This title is misleading. The post actually says that the reason "simple" comes first is because without it you can't have "correct" (nor "fast", not that that matters so much). So he's not saying simple is most _important_, just that it comes first chronologically, and has the other two as consequences.


e.g. Gall's law

> A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.


Surprisingly to some, this is neither a law nor true.

A complex system with a good workable and testable architecture will work, starting with passing the tests down to satisfying the user...

Such systems are not designed in detail but in general, and usually start with a single, simple but powerful overarching idea, which is actually quite complex to implement, but ends up working evidently well once even halfway done.

Examples would be message passing architecture, event driven programming, time tracking, microservices, reactors, literate APIs, contract programming, Model-View-* and more... Note how half of those deal with reducing coupling by adding complexity.


That's an excellent clarification since so many people aren't getting his nuanced view.

I also disagree. I have yet in my life to see any programmer crank out a simple solution on the first try for anything that isn't a trivial requirement. The way I think most of us work is to create a complex solution first and then refactor at least a couple of times before we get to simple and elegant. That doesn't mean the complex version didn't work.

I've made plenty of code that is bug free according to the requirements. I tend to start with tests and I'm pretty good at figuring out edge cases and other ways to break my code before I've even written it, so what I end up with is pretty robust. But the first version is rarely elegant or simple. By the time I'm done with the first version I understand the problem space so much better and might throw out 90% of my original code in the first refactor. Am I the only one doing this? Sometimes it even takes weeks or months to get to simplicity. I keep understanding the requirements better and better and noticing how I could eliminate code, often after I've noticed some code I'm still not happy with and having slept on it. Sleep does wonders for seeing how to simplify.


>This title is misleading. The post actually says that the reason "simple" comes first is because without it you can't have "correct" (nor "fast", not that that matters so much). So he's not saying simple is most _important_, just that it comes first chronologically, and has the other two as consequences.

If that's what he meant, he's flat wrong. Simplicity is neither necessary nor sufficient for correctness.


Think about it in terms of each choice you make.

I have a simple solution and a complex solution. Does the simple solution meet the requirement(s) before me? If so, I prefer it. Let's move on to the next requirement and consider my options again.

The alternative might be to look at your requirements, but choose a complex solution (over a simple one) because you think it might meet other requirements, either ones that have not yet been identified or ones you think are likely to happen in the future.

Are there times that the more complex solution wins? Probably. Consider you want to write a blog. You know that you can create an HTML (text) file and slap it on a web server and your blog has started. But if you've done this before, you might also know that you can throw WordPress on your server for a little more up-front pain. You know you want comments and word clouds and date/time stamps and navigation. So you choose the complex solution. (You also know that you now face potential security implications, upgrades, dealing with users causing trouble with comments, having the PHP/MySQL infrastructure/hosting requirements...) Maybe you just wanted to dump your thoughts to the internet. Maybe the text file approach was better...

It may just be another way to say "avoid gold-plating your software."


Certainly don't gold plate.

But simply meeting requirements is only doing the minimum possible. Now in a government job, that's okay.

But in a real job, if you see that the simple solution is OBVIOUSLY wrong for certain likely cases not considered in the requirement, then THE REQUIREMENTS ARE WRONG, or incomplete, and this should be pointed out!


The OP's advice, if applied in the CPU industry, would be disastrous. Modern desktop/server CPUs are incredibly complex... in order to drive maximum performance. Pipelining, OOO execution, branch prediction, and speculative execution: these are all features that introduce a tremendous amount of architectural and design complexity. In many cases, they also harm correctness, because they can lead to functional and security bugs.

And yet, if you try to compete with Intel with a CPU missing the above optimizations, you will get absolutely creamed in the marketplace. No one, not even those touting the importance of simplicity and correctness, will buy what you're selling.

Today's free market is too complex for these overly simple rules. Choosing between simplicity, correctness and performance, is a complex tradeoff that needs to be made on a case-by-case basis. Trying to find shortcuts to avoid these analyses may feel liberating... but you're ultimately only shooting yourself in the foot.


A counter-anecdote: The features you listed started shipping (from Intel & MIPS) in microprocessors in 1996, 22 years ago. Intel's out-of-order Pentium Pro was beaten by the in-order DEC 21164 the same year.

Also, there's the case of Intel losing to in-order ARMs in mobile. First with XScale, and later on with the in-order Atoms. (https://appleinsider.com/articles/15/01/19/how-intel-lost-th...)


Sure, specific optimizations in specific markets may not be worth the cost they incur. Or they may not be valuable enough to overcome other weaknesses in the project.

And yet, if someone tried to sell a server CPU today that was not pipelined, not OOO, and didn't have branch prediction, it would absolutely tank in the marketplace.

I never said that performance optimizations should always be implemented. Just that performance optimizations should sometimes take precedence over simplicity.


> And yet, if someone tried to sell a server CPU today that was not pipelined, not OOO, and didn't have branch prediction, it would absolutely tank in the marketplace.

You could sell it as a niche product for high security applications, since OOO execution is a nasty side-channel.


That's an interesting idea. A "so simple it can't have bugs" design would never win over the mainstream-market, but it might be able to find a niche among extremely security-conscious users. This might be a great project for the open-source community to take on.


They did use OP's strategy in the CPU industry.

The first CPUs were simple as heck. They spent 20 years making them reliable (and faster without compromising simplicity, mostly just node shrinks), and they've only really been complicating the architecture in the last 30 years.


iAPX432


The OP's advice would probably be disappointing if applied to making a curry, too. It's fortunate he regularly used the word software throughout the blog, really.


The attitude of the CPU industry in this regard led to some recent well-publicized, very bad, and nigh-unfixable security vulnerabilities, as you might have heard.


And yet, I don't see you or anyone else committing to buy ultra-simple non-pipelined non-OOO desktop/server CPUs.

If you insist on only hiring chauffeurs who drive at 100mph, you can hardly complain when they get into a few accidents.


The main problem is that most software leans on those misfeatures as a crutch to excuse heaping layers of abstractions. Unfortunately this problem comes from several places, so it's not as easily fixed.

That being said, consider me lined up to buy one of these CPUs.


> And yet, I don't see you or anyone else committing to buy ultra-simple non-pipelined non-OOO desktop/server CPUs.

If you can find a CPU that has the same number of non-cache[1] transistors as an Intel/AMD chip, but spends them on a larger number of simple (and preferably independent/non-hyperthreaded) cores, rather than squandering them on speculative execution and ten thousand obscure model specific registers, I would absolutely buy several of them.

1: and similar amounts of cache, of course.


Intel makes them and you can buy them today, with up to 72 Atom CPU cores, e.g. (1) https://ark.intel.com/products/95830/Intel-Xeon-Phi-Processo...

Very niche products.

For massively parallel number crunching, GPUs are much better in both performance/watt and performance/dollar. That Xeon Phi 7290 delivers up to 3.45TFlops, costs $3200, and consumes 245W. Compare with GeForce 1080Ti 10.6 TFlops, $700, same 250W.
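A quick back-of-the-envelope check of those ratios, using only the figures quoted above:

```python
# Peak FLOPS, price, and power as quoted above (Xeon Phi 7290 vs GeForce 1080 Ti).
phi_flops, phi_price, phi_watts = 3.45e12, 3200, 245
gpu_flops, gpu_price, gpu_watts = 10.6e12, 700, 250

perf_per_dollar = (gpu_flops / gpu_price) / (phi_flops / phi_price)
perf_per_watt = (gpu_flops / gpu_watts) / (phi_flops / phi_watts)
print(f"GPU advantage: {perf_per_dollar:.1f}x per dollar, {perf_per_watt:.1f}x per watt")
# GPU advantage: 14.0x per dollar, 3.0x per watt
```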

For general purpose software they don’t work particularly well either. Most IO interfaces are serial (SATA, PCIe); they have very few wires going to the CPU. If you’re IO bound and you don’t have enough single-thread performance you’ll struggle to saturate the bandwidth; doable, but very hard.

Also for general-purpose software latency matters. Namely, input-to-screen latency for desktops and mobiles, or request-to-response latency for servers. Get a Windows or Android tablet with an Intel Atom Z8300 (available for $80-100) and see how it performs; it has 4 very similar cores (minus AVX-512), and the frequencies are very similar, too.


https://www.intel.com/content/www/us/en/processors/xeon/xeon... shows at least six volumes of datasheets, and I still haven't found an instruction set reference. I have found https://www.intel.com/content/www/us/en/processors/xeon/xeon... (helpfully labeled "Datasheet, volume 2", rather than anything related to its contents) which describes a subset of the aforementioned ten thousand random control registers. So no, Intel does not make [simple cores], it makes heaping piles of shit complete with malware ("Intel® Management Engine") buried at D22:F0 on an internal PCI bus.

It isn't simple, it's designed to be incorrect (and even the parts that are supposed to be correct aren't), and I'm not surprised it fails on fast as well.


> I still haven't found a instruction set reference.

X86-64, SSE, AVX, AVX-512, AES-NI, etc. Their key selling point is software compatibility.

> Intel does not make [simple cores]

The cores are quite simple by today’s standards; otherwise Intel wouldn’t be able to pack 72 of them on a single chip. IME is unrelated to the cores, it’s a separate piece of silicon.

But if you don’t like the IME and don’t need backward compatibility with x86, maybe you’ll like this: https://www.qualcomm.com/products/qualcomm-centriq-2400-proc... But again, the performance benefits of the architecture (48 simple cores) are questionable: GPUs are way faster for parallelizable number crunching, and you need single-thread performance for almost everything else.


> X86-64, [etc]

So it has the ten thousand x86 and x64 registers in addition to the ten thousand ?PCI registers?

> The cores are quite simple by today's standards

That's my point; today's CPUs don't have "ultra-simple" as an option (at modern feature densities).

> IME is unrelated to the cores

Fair point, I probably should have added "and doesn't have technicalities like builtin malware" to my original post.

> https://www.qualcomm.com/products/qualcomm-centriq-2400-proc...

This looks interesting, although I'll need to research a bit more (and "SOC - Features - Integrated management controller" isn't encouraging). Thanks!


In the free software world, this is not a huge problem. You can make new CPUs and recompile programs with far less effort. All that complexity really comes from monetization.


I often repurpose a famous quote from Mark Twain (about letter writing) for the topic of simplicity:

> I didn't have time to write a simple program, so I wrote a complicated program instead.

This is in my experience more than just a clever turn of phrase: the vast majority of software projects (or features etc.) move from _simplistic_ to complicated, and rarely from there toward simplicity. The end result is exactly what this author describes -- a complex mess that's difficult to reason about and rarely performant.

Few of us (usually myself included) are willing to devote the time and effort required to achieve true simplicity.


I've found that, unlike Mark Twain's profession of writing and art, in which each new work is attempting to push the bounds of thought and expression, constructing simple software is something that you can actually get better at over time, and with focussed practice. I hate to break this to us, but what we software developers do over and over again is not as novel and groundbreaking as, say, what a Mark Twain is doing with each work of art he produces.

This is certainly true for me after 20 years of focussed practice. In fact, I have to go out of my way now to introduce coupling--it takes time for me to do the wrong thing. So when I hear someone say "I don't have time to make it simple / decouple everything / design it correctly", what I really hear is "I don't have the skill set" and, often, "I don't want to do the hard work, with the patience it takes, to acquire the skill set." It's a philistine cop-out, really.


I prefer: “the strategy is definitely: first make it work, then make it right, and, finally, make it fast”.

http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast


This is better. If you can make it work while keeping it simple the other two will follow more naturally.


Apparently everybody knows "correctness" comes first. But that is the whole point of this article: It argues that complexity will reduce correctness, and thus simple should come first.

Please NOTE that I am not saying that I agree or disagree with this article.


My take on this from like 10 years ago:

http://www.expatsoftware.com/articles/2007/06/getting-your-p...

I came up with Readable as the top priority, followed by Debuggable and Maintainable. I suppose one could combine that into "Simple" if one liked.

But yeah, Fast was already at the bottom of the list. Even back then.


Those three properties are very highly related.

Personally I found the biggest improvement to my own software came from maintaining the same system that I wrote for 4+ years.

If I came back to a part and didn't understand it more or less immediately, then it was time to refactor it. I wrote the code; I should understand what it is doing. No excuses that someone else had written bad code.


"It was hard to write, it should be hard to understand." (I was joking, of course, when I said this.) It seems to me that that's a reasonable metric for determining if something was well written or possibly well commented. I had an opportunity to go back to some of my older code about a year ago. It was scary to me what portion of it I no longer understood. Worse yet, portions of it referred to documentation that I could no longer find. The company had migrated their documentation through several storage organization technologies and no one seemed to know where the old stuff went. Luckily I ran across a retired engineer who recalled where a lot of it had been archived.

(Unrelated) When I read the article, the first thing I thought was that all of the simple programs had already been written.


I'm getting ERR_SSL_VERSION_OR_CIPHER_MISMATCH (no content displayed) on latest Chrome / Linux when trying to access the link (which force redirects me to use https). In case it's your site, you might want to know / fix.


Thanks, Cloudflare. I noticed after posting that that blog is still running without SSL, so I did the quickest thing that would work and moved it over to Cloudflare and their Fast SSL product.

Evidently, they're happy to start auto-forwarding http traffic over to https before finishing the part where https works.

I've flipped it back off for the time being. Thanks for the heads up.


The simplest code is an empty file. And yet, we live in a world with complex software. There are tradeoffs, but this article doesn't help you make them.


> The simplest code is an empty file

Reminds me of this: https://github.com/kelseyhightower/nocode


> this article doesn't help you make them

Sure it does. It clearly states that sometimes new features or performance optimisations have to be sacrificed to keep the software simple.


Complexity for new features or performance optimizations is simply a trade off. Sometimes one worth making.

By adding a lot more complex code can I make this blitter 100 times faster? By using this more complex algorithm can I make a "simple" string search amazingly faster?

As for features: Word has more features than Notepad. So why doesn't everyone use notepad?


That's like saying, sometimes apples are so expensive you can't buy them. Perhaps surprising, if you haven't thought of it, but it is an obviously true statement about all tradeoffs. Sometimes you shouldn't make them. Still doesn't help you figure out when you should.


Maybe it’s obvious to you, but at least in my experience lots of devs, when confronted with a feature request, rarely consider whether said feature should be implemented. Most of the time they only think about how it could be implemented.


I think this article may be guilty of overstatement. It's an ever more common sin, as overstatement seems to have become a preferred method for communicating thoughts and ideas (and products, though that's nothing new).

Is there a need to overstate to cut through the noise and get your point across or your message heard? Maybe, but it seems unfortunate that when people have some truth or wisdom to share, there is a felt need to amplify and polarize it.

This article has good things to say about the importance of simplicity in code and implementation. I'm fine with value judgements as long as they convincingly define the values they are judging and show evidence that the facts have been thoroughly weighed. 'Correctness' is an ill-defined villain here and the article would do better to state the benefits of simplicity and experiences the author has had with systems designed without simplicity as a first-order goal.

Then again, perhaps I ask too much. Also, I've never had an article on the front page of Hacker News, so what do I know.


One of the lessons I've learned (one that this article echoes) is that you should _always_ factor in the "long term cost" of adding a feature.

When I first started building TrueJob (job board software), I'd add all these really cool features to my app -- and at the time, they felt really useful. But over time, people weren't using them, so I built more features.

But then the old features I had built broke, so I had to fix them. And then they lagged behind the quality of other features I had written, so I had to update them. And after doing this 5-10 times (as the software evolved and I dramatically increased the complexity of my application), these features that no one used really were painful to keep coming back to, but by now enough loud users were using them that I couldn't remove them.

It made me really value the projects where we polished and did just a very few things, but did those very, very well -- it led to higher customer satisfaction, and less pain in the long run for us.


I have encountered this so many times with enthusiastic and well-intentioned but non-technical founders that I created a quick presentation around it. Maybe other folks will find it useful.

https://www.slideshare.net/chaffeet/how-killer-features-will...


I fully buy the argument that code is a liability, while the underlying algorithm, or the overlying service is the asset. Following this line of thinking encourages the writing of simple code.

Far more of a problem than code complexity is the lack of systems thinking when applied to programs. Various factors (abstraction, delegation, nicer APIs, solid products, SaaS, "microservice" trends, package managers and bundlers) have encouraged offloading much of the computation and data flow to other products, whose strengths and liabilities become your own if you make use of them. One-liner lambdas might be simple code, but they're often coupled to a maze of other cloud services, and the complexity there is coming from the dependency substrate, whose shape can't even be expressed in an imperative or declarative notation like code.

In truth, code and libraries and services feed into systems, and those systems must be understandable if correctness and maintainability are a goal.


Some other formulations:

- Occam's razor https://en.wikipedia.org/wiki/Occam%27s_razor

- "Simplicity is the ultimate sophistication" (Leonardo da Vinci)

- "Less is more" (Mies Van Der Rohe)

- "Make everything as simple as possible, but not simpler" (Albert Einstein)


All four of those are great! But if something is wrong, it is still wrong no matter how simple.

Like saying leap years occur every four years. Simple!


I think I read the OP's assertion in relation to your counterpoint like this: when thinking through the problem, your initial iteration of work is allowed to say exactly that: a leap year occurs every 4 years. However, you should do so via a semi-stubbed function (isLeapYear) as mentioned previously. I am imagining that the specifics of when a leap year occurs are not critical to your solution, only that you know when they happen. Thus, you avoid spending too much time on a detail that's eventually important, but not critical for proving your first hypothesis. In the "correct" stage you come back and improve the isLeapYear function to return correct results.
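A minimal Python sketch of that stub-then-refine flow (is_leap_year follows the comment's naming; everything else here is a hypothetical illustration):

```python
# Hypothetical sketch: stub the detail first, refine it in the "correct" stage.

def is_leap_year(year: int) -> bool:
    """First iteration: the naive every-four-years rule, good enough
    to keep moving while proving the rest of the design."""
    return year % 4 == 0

def is_leap_year_refined(year: int) -> bool:
    """The "correct" stage: divisible by 4, except centuries,
    unless the century is also divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The stub is wrong for 1900 but right often enough to build on.
print(is_leap_year(1900), is_leap_year_refined(1900))  # True False
print(is_leap_year(2000), is_leap_year_refined(2000))  # True True
```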

Part of attaining wizard status is not learning how to hold more of the program in your head, but instead learning how to hold _less_. This seems like an excellent step in that direction.


The author assumes that simplicity is a prerequisite for correctness, which is in general true, but he does not say exactly when a simple system turns into a complex one. I would say that if the complexity of your system is still under control, then it qualifies as a simple system.


I would say that the last one places correctness first.

BTW, the quote is usually attributed to Einstein, but it seems he didn't say it.


My reaction to the title is similar to that of other commenters: correctness should go first!

However, the general idea appealed to me, so taking a step back, I tried to post-rationalize a thought similar to the author's that I could reconcile with my initial reaction. The thought I then had is that if a simpler solution can be found that solves a significant subset of the problem, then perhaps it is worth adjusting requirements to go for the simpler solution, because it lets us ship faster and with less risk.

Oftentimes we come up with requirements that aren't really "required": showing business stakeholders that dropping a few requirements could enable you to ship six months faster and with far less risk can be a valuable insight in and of itself. In essence we are still putting correctness first, but we are changing our definition of "correct" slightly in order to increase simplicity.


"Correct" should, by definition, imply simple and fast. "Simple" tends to be the correct and fastest way to code. Code that runs fast might not be "simple", but simple code will be debugged/updated/ported the fastest. I think the author is saying this, but with a more controversial headline (and, intentional or not, controversy creates clicks). After all, this is the premise:

The reason is straightforward: if your solution is not simple, it will not be correct or fast.

This could be reworded in the following ways and the points made in the article would still follow:

If your solution is not correct, it won't be simple or fast. If your solution is not fast, it won't be simple or correct.


I interpreted this as advocating for using a model with the lowest-level abstraction that you think will work. If you start with the simplest abstraction possible, you produce a simpler and more maintainable system. You're also in a better position to incorporate further abstraction later as your understanding of the problem space evolves.

This seems like a good opportunity to recommend Rich Hickey's talk "Simple Made Easy": https://www.infoq.com/presentations/Simple-Made-Easy


What's the right way to cut a bagel?

The obvious solution: Grab a knife, put the bagel on end, and get to slicing.

The commercial solution: Flat and flip. https://www.epicurious.com/expert-advice/best-way-to-cut-a-b...

The mathematician's solution: A Möbius bagel. https://www.youtube.com/watch?v=Ktfo8D3cCr0

The engineer's solution: The bagel jig. http://www.freepatentsonline.com/5228668.pdf http://www.freepatentsonline.com/3347296.pdf http://www.freepatentsonline.com/4807505.pdf http://www.freepatentsonline.com/4747331.pdf

The consumer solution: The bagel guillotine. https://www.surlatable.com/product/PRO-1036557/Sur+La+Table+...

Which is the correct solution? Depends on who you are.


I think clear is the most important. Most software projects I've worked on in the past decade have gotten a lot of assumptions wrong. The projects that were able to turn around quickly were the ones that could find busted logic and fix it.

I've been using rspec-inspired testing frameworks to help with this clarity. Whenever I implement some early business logic, I assert that the logic I wrote does what I expect in words using rspec style tests (I've been writing a lot of JS lately so I've been using Jest). The kind that read like sentences: "the tax component applies 2% tax to all purchases above $400." Even if the logic is initially incorrect, being clear in our incorrectness lets engineers and (more importantly) product people quickly identify incorrect assumptions.

The logic to apply that tax in this example may not be simple. Oftentimes, business logic can't be simplified any further and needs to be a bit thorny. In those cases, clarity of purpose is much more beneficial than simplicity, and I've found writing rspec-style tests, and forcing myself to translate what the logic is doing into words, helps immensely with clarity. It clarifies my thoughts before shipping code, and it clarifies our business assumptions as a whole when that code is running in production.
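The comment describes Jest tests; a rough pytest-flavored Python analogue of the same idea might look like this (apply_tax and the exact 2%-above-$400 rule are hypothetical stand-ins, not the commenter's actual code):

```python
# Hypothetical business rule: 2% tax on purchases above $400.
def apply_tax(purchase_amount: float) -> float:
    if purchase_amount > 400:
        return round(purchase_amount * 1.02, 2)
    return purchase_amount

# Test names read like sentences, so a wrong assumption fails *clearly*.
def test_applies_2_percent_tax_to_all_purchases_above_400():
    assert apply_tax(500) == 510.0

def test_leaves_purchases_at_or_below_400_untaxed():
    assert apply_tax(400) == 400
    assert apply_tax(100) == 100
```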


Anyone can write complex, brittle code, and often even get it to work.

The real superstars solve the same problems with simple code.

I recall a fellow student in CS in the 1980s who used to brag about how many lines of code he wrote to solve an assignment. I never understood that mentality. His programs were always 2-3 times longer than mine. But now that I've had many years in the industry, it almost seems that a lot of people believe more code is better.


My approach towards software development has been pretty similar except that my order of priorities is a little different. I prefer:

Correct, simple, fast, in that order.

In other words:

- First get a correct solution working that solves the problem correctly for typical as well as corner cases, hopefully with good test coverage for both typical and corner cases.

- Then improve the solution to make it simpler while preserving correctness.

- Spend time on making it fast when there is actual evidence of performance problems such as data from performance testing with current workload and expected future workloads.

The reason why I like to ensure correctness before simplicity is that many times what might seem like a simple solution initially might turn out to be the wrong idea when all corner cases need to be accounted for. Ensuring correctness first requires me to think through the corner cases well and write test cases early during the development phase. With correctness taken care of and protected reasonably well with test cases, it becomes easier to iterate on the solution and increase its simplicity.


I like this approach, and it's usually the one I follow.

When I'm first building out a feature, I need to make sure that it actually works (i.e. is correct). Once that's out of the way, I'll have a good idea of what it takes to make that functionality actually work correctly, and can begin making it simpler while preserving correctness.

Simplicity is easy if correctness is not a constraint.


Software is rarely correct, because if it is not correct today it will only be correct tomorrow due to skilled intervention, and if it is correct today then it will almost certainly not be correct tomorrow without skilled intervention. The chance of skilled intervention is always low.

Simple software reduces the skill required to intervene.


That mantra, taken blindly, leads to software that won't scale under realistic circumstances.

I once worked on a project that failed because the lead programmers did not want to learn how to use a database. They insisted on using an ORM incorrectly, and almost all of their code needed to be rewritten in order to handle a typical anticipated load.

Granted, the entire codebase was full of "simple" for-each loops, but the reality is that if they had started by writing correct database queries, the project would never have failed.

Thus I say, optimize for your budget at the beginning of the project. You should pick design patterns that handle anticipated load. Full optimization can come in later, but if your project can't handle anticipated load at the beginning, then you are misinterpreting what this "simple, correct, fast" mantra really means.
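The failure mode described here (row-at-a-time loops instead of set-based queries) can be sketched with sqlite3 from the standard library; the schema and data below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders(id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

# "Simple" for-each version: one query per user (N+1 round trips).
naive = {}
for user_id, name in conn.execute("SELECT id, name FROM users"):
    row = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
        (user_id,),
    ).fetchone()
    naive[name] = row[0]

# Letting the database do the work: one query, one pass.
correct = dict(conn.execute("""
    SELECT u.name, COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
"""))

assert naive == correct == {"alice": 30.0, "bob": 5.0}
```

Both versions produce the same answer on toy data; the difference only shows up under the anticipated load the comment mentions.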


There's a difference between simple and simplistic, and there's absolutely a value in keeping things simple instead of complex. So much so that I think it's reasonable to state categorically that if there is a simple solution that works it's always better than the complex solution.

Not using a database when the solution calls for one clearly violates the "works" principle. And obviously, using a tool incorrectly (the ORM) trumps anything else. That's a tautology.


It's like some people take "optimize last" to the extreme. A project needs to handle anticipated load at the beginning.

For example, if a web application needs to handle 20,000 requests an hour, it's okay if early versions take 10-15 seconds to respond under unusually high load. The optimization phase can bring that down to something more manageable.

Some people take the "optimize last" so far that they ignore their basic scalability requirements; or just assume there are no scalability requirements. That's when a more senior dev needs to step in and demand basic scalability in the design.


I would argue that this was not "simple". For me, simple means that it is easy to understand what is going on. If you're using an ORM wrong, then you don't understand what is going on under the hood, so it is not simple.

If you really write simple code, you should also be aware of the shortcomings of the simple approach.


A DBMS is inherently less simple than for loops, that's entirely true.

But that's the class of problem they were trying to solve. And a DBMS represents vast amounts of energy put into trying to solve that class of problem.

So, the problem was a complex one, but one that had many aspects of it already solved, and rather than building on a good solution (the DBMS) they chose to try and reinvent queries.

> you should also be aware of the shortcomings of the simple approach

Yup, you really can't refuse to learn how databases work. Unfortunately, they didn't know what they didn't know.


Understandably, a lot of people are balking at the idea that simple might come before correct. Code that is simple but not correct is not a solution. But the OP is not asserting that simple, incorrect code is a superior solution to complex, correct code. Instead, I think the assumption is that there is a solution that is simple (enough), correct (enough), and fast (enough), and that you are not going to stop until you reach such a solution.

Correct comes before fast not only because you should make it correct before making it fast, but because you should not sacrifice correctness in order to make it fast.

Similarly, simple comes before correct because you should not sacrifice simplicity in order to make it correct (or fast). Instead, you should continue looking for different ways to make it correct while maintaining simplicity.


Some years ago I tracked down the origins of the phrase "complexity is the enemy", and learned three things:

1. The full phase is "complexity is the enemy of reliability".

2. It dates not to the 1980s or 1970s as I'd thought, but the 1950s.

3. It first appeared in print in The Economist newspaper, 18 January 1958, according to Google Books & Ngram viewer.

I'd very much like to have a copy of the article, though it has proved resistant to my attempts to obtain it. HN username at protonmail.com, should anyone happen to have access to a PDF.

http://books.google.com/books?id=aDsiAQAAMAAJ&q=%22complexit...


Simple is a property of the microscale. You can have a class, function, or module that is simple while the whole system is incredibly complex. You can sacrifice simplicity in many places as long as things are loosely coupled.

Correctness can only be achieved if the programmer has a good understanding of the requirements and possesses the necessary discipline to write tests. I cannot imagine a system that maintains correctness without tests.

Performance is usually something that only good architecture can bring. I disagree with the article: if you focus on simple too much you will miss important requirements and you will make architecture choices that negatively impact performance. More annoyingly, the author is trying to create another silver-bullet approach.

Simple/Correct/Fast - it depends on the problem you try to solve.


Amdahl's law applies here too: if your code is slow everywhere, making the slowest part faster isn't going to help much, so some basic consideration needs to be given to performance throughout the project. Or better yet, have well-performing default ways of doing things and avoid badly performing ones.
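Amdahl's law makes this quantitative: if a fraction p of total runtime is sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). A quick sketch:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the runtime gets s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# If the "slowest part" is only 20% of runtime, even an infinite
# speedup of that part caps the overall gain at 1.25x.
print(amdahl_speedup(0.2, 10))    # modest: ~1.22x
print(amdahl_speedup(0.2, 1e9))   # near the ceiling: ~1.25x
```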

In the worst case, failing to do that will require rewriting the entire project from scratch, because the basic data structures and the structure of the code are the antithesis of performance and there is no single point you can optimize. A reasonable default is using vectors over linked lists. A more complex choice is using a struct of vectors rather than a vector of structs. And neither of those choices is easy to make for data structures that really matter late in the project.
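The struct-of-vectors versus vector-of-structs choice can be sketched in Python; plain lists won't show the cache effects, but the toy below does show the shape of the two layouts and why switching late is invasive (every access site changes):

```python
# Array-of-structs: a list of records; the fields of one record sit together.
aos = [{"x": i, "y": 2 * i} for i in range(5)]
sum_x_aos = sum(p["x"] for p in aos)

# Struct-of-arrays: one array per field; summing one field walks
# only that field's (contiguous) storage.
soa = {"x": list(range(5)), "y": [2 * i for i in range(5)]}
sum_x_soa = sum(soa["x"])

# Same answer, different layout; every p["x"] above became soa["x"][i].
assert sum_x_aos == sum_x_soa == 10
```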


Something I would add that helps when writing simple software is testing. You have to close the feedback loop early and often. Until you see what your program does you won't really understand it.

Thinking you understand what you're making before you really know is one of the worst mistakes you can make. It's under that misunderstanding that you'll think you need to add to your program to make it more correct and fast, when really you're over complicating it and making your life worse when you actually need to get it running.

In contrast, when you can run a test as soon as possible, that's where I usually see an alternative way of writing it that may be shorter, more correct, or faster.

You'll never get to this point of course if it isn't simple in the first place.


I've found that what keeps most programmers from simplicity is that people like things to be in order, like their desks or their living rooms. I've found that seemingly messy code that is actually simple performs better than complex, supposedly in-order code.


Another problem is reliance on "big" frameworks as if those being big and successful would make your particular application successful. No it's just a bunch of code you don't understand and can't fix.


Interesting that he hit on the top three points in the pony philosophy[1] but in a very different order than they chose. For pony, the order is:

1. Correctness

2. Performance

3. Simplicity

4. Consistency

5. Completeness

I think I tend to agree with that order more. While simplicity tends to be helpful with performance and correctness, there are very few cases where you'd sacrifice correctness/performance for simplicity if implementation time/cost were not a factor. Let's not confuse a way of getting to the goal with the goal itself.

> Incorrectness is simply not allowed. It’s pointless to try to get stuff done if you can’t guarantee the result is correct.

[1] https://www.ponylang.org/discover/


Incorrectness for a minority of inputs can also be tolerated if it allows for a vastly more efficient and simple implementation, provided that the environment or driven machinery has failsafe mechanisms and wouldn't be able to cut an arm or a leg in case of an error.

For example, think about a game that runs quickly and is playable but crashes occasionally, versus one that lags all the time but never crashes.


I think I agree with these points but I have a question. What is "consistency" in this case? The article only says,

> Consistency can be sacrificed for simplicity or performance. Don’t let excessive consistency get in the way of getting stuff done.

I take consistency to mean consistency of results being correct. Wouldn't that make consistency a subset of correctness?


Pony is a programming language, so I take consistency to be about how consistent the pony programming language feels to the user (principle of least surprise, etc.). That said, I'm not part of the pony core team; if you really want an answer, you could stop by #ponylang on irc and ask.


I can't agree.

Simple is important, but it's not the most important thing. The overwhelming evidence is that there's plenty of working software, in use, that isn't simple. Users will shape their behaviour and memorize flows around a complex piece of software if there is sufficient motivation.

My own principles for design state this order:

Functional, Simple, Delightful

Does it do the job? Is it simple to use (nothing more than what's required)? Do people get excited to use it?

The first is non-negotiable and the last is optional. There's a lot of software that is functional, but over-complicated. Lots that is delightful, but doesn't work and everything in-between.

The software I admire most (and use daily) has a balance of all three.


How do you measure "simple"? Is PostgreSQL "simple"? If not, is PG not considered to be "correct" or "fast"? Seems like it does a pretty good job to me, and no, PostgreSQL isn't exactly "simple".


Yes! So much covered in so few words. This ordering works shockingly well for constructing robust solutions to complex problems.

Keep it simple. Do what needs to be done now, leave off for later everything else. By the time later arrives, what the project needs will have gone in new and surprising directions. Refactoring complex and coupled code into logical discrete units pays off in multiples down the road. Removing that which is no longer needed is like giving your whole team extra time to breathe and new room to think. For all the times that simplifying also solved the two other problems, we used that time for making more cool stuff.


It's only six paragraphs so you can't hope for much subtlety, but this is very fuzzy thinking.

I always stress a hierarchy of Goals, Strategies, Objectives, and Tactics, which is a fundamental and well known way to tackle strategic thinking/planning. When you engage in fuzzy thinking, it is easy to make the wrong decision. For example: "we must be agile". Well, no, that's not our goal. Our goal is "whatever", and agile might let us meet that goal, or it might hinder it. "Move fast and break things" will (hopefully) never be the motto of Boeing.

So, simple, correct, fast. That conflates different things. It is never my goal that things be simple. A goal might be 'correctness' or 'don't kill people'. A way to achieve correctness is, for example, easily unit testable software, which is a strategy; the objective is a passing unit test; and a tactic might then be keeping the code simple (code with low cyclomatic complexity is easier to exhaustively test, for example).

When you conflate various levels of this hierarchy, bad decisions and religion ensue. "Code must be simple". "We must always be agile". "Everything must be documented". "Functions must be < 10 lines long". You can see that these are not goals, but often chosen ways to get to your goal. The problem is, goals change, and strategies don't get you to the goals in some edge cases. Because you are focused only on strategies/tactics and haven't clearly articulated the goals, you don't notice this and make bad decisions.

For example, short functions are generally a good thing. But, sometimes it takes 50 lines to express a cohesive thought. Splitting that arbitrarily across many functions can just obscure intent, and leave the developer scrolling back and forth, which is proven to reduce comprehension and increase the likelihood of errors.

In short, if you use these three criteria, in this order or any order, I don't think you will end up making very good decisions, because at best you have only implicitly stated and understood the problems you are trying to solve and the interrelationships of how your strategies and tactics affect one another.


The article and a lot of the comments implicitly assume that you must sacrifice something to get simplicity or correctness, at least at the beginning, and that performance can only be picked up "later".

The fact is, retrofitting performance into a sufficiently complex system is hard; more often than not the system has to be redesigned and rewritten to achieve it, as we have seen with so many OSS projects.

I think that with the right amount of forethought, all three can be achieved. That doesn't mean you never need to iterate on your software, because requirements always evolve.


It seems to me like what the author proposes fits with TDD: start building the simplest case by way of a unit test, implement it in the simplest way, and keep going that way; while the “correct first” approach many are saying should be used feels more like having a problem, really thinking about how it should be solved and then solve it all at once.

Now, while I think some small and contained problems may be solved the second way, I think most complex problems of the kind we solve as programmers will be best solved the first way.
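To make the TDD-style progression concrete, here is a hypothetical sketch (FizzBuzz as a stand-in problem, not from the article): each assertion was notionally added one at a time, with just enough implementation to make it pass before moving on.

```python
# Illustrative TDD-style progression: start from the simplest case,
# make it pass, then extend one small step at a time.

def fizzbuzz(n: int) -> str:
    # Each branch below was driven by one new failing test case.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The tests that drove the implementation, in the order they were added.
assert fizzbuzz(1) == "1"
assert fizzbuzz(3) == "Fizz"
assert fizzbuzz(5) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
```

The point isn't this particular toy problem, but that each test forces only the simplest next change, so complexity accrues only where a test demanded it.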


I do agree with the premise, but with at least one caveat: for some problems, there aren't any simple solutions. In that case it's better to choose the _simpler_ solution.


Making correct code simple(r) requires more time (e.g. money) than making correct code in the shortest time possible.

Simple, and still correct, is often the result of doing MORE work to achieve it.


Yep, and that's why "worse-is-better" is better.

I've never read Gabriel's article in such a way that this "worse" means actually, seriously, worse quality. I always thought it meant "YAGNI", "KISS", and so on.

If you can't solve a problem, maybe you shouldn't try so hard? Find a better problem! I'm confident this attitude helped me a lot in the past. Speaking as a clearly-not-a-genius programmer.


My version of this is test cases (i.e. correct), simple, fast: in that order. With test cases you can refactor your initially complex and naive solution as many times as needed until the solution is easy to understand and elegant. I've rarely made something simple the first time around. To get to simple I need to work at it. And I'd like to know I'm not breaking things when I do that work.


Looks like the point about correctness is misunderstood. The post argues that 1) perfect simplicity, correctness or speed is unachievable in most software of any complexity and 2) given that there is necessarily going to be a trade-off, focusing on simplicity makes the other two easier to solve. There is nothing in there about simply writing `return 1` and considering that the perfect solution.


Been working on some legacy code that is so convoluted, you sometimes can't even tell if it's correct, without starting to rewrite. Once you grok what it's doing, there's almost always a simpler & more direct way. This in turn usually makes it faster, AND sometimes more correct. It's pretty gratifying, though there are many SMH moments.


This made me laugh. We were tasked to write an app for a customer, and one of the main requirements from their product manager was that the app had to open within 1 second. Not a problem, we thought; then their technical guy came up with all these requirements that had to be resolved during the startup sequence.

So it was not simple and it was not fast. Nobody wants simple... ever.


I find it depends on how well you understand the problem space. If you are well versed in it, then making a simple solution first is possible, which can be corrected to handle edge cases later.

However, if the problem space is new to you, I find I usually have to write more complex code to understand it first. Then I can come back and simplify the complex code.


mmm... I disagree. The order should be Correct, fast, simple.

First of all, the software needs to be correct; if the software produces incorrect results, then there is no point for it to exist. It needs to be debugged until the results are correct.

Then fast. The point of using software is helping the user to accomplish a task. The better the help the software can provide, the more useful the software is, and performance is often critical to helping the user.

Then simplicity. Simplicity helps future developers understand how the software works and how to properly maintain it. But if some complicated procedure is needed to produce the correct results, then complicated procedure it is. If a different algorithm needs to be used to speed up the software, then the new algorithm it is. It is up to the next developer to study and understand what was done and why.

The purpose of software is the end user. The next developer sits second to the end user.


Sounds like a recipe to deliver hard-to-use software late. The easy-to-use software that was delivered on-time will eat your lunch, even with some bugs.


You can apply this rule to the structure of your codebase/class: make a lot of small, focussed classes rather than one big class that does everything.

This way you can easily debug it, or replace the logic of one class without breaking your whole application.

One of the best pieces of advice I got was: your function must do one thing only, and do it well.
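A minimal, hypothetical sketch of that advice (the names and the record format are made up for illustration): instead of one function that parses, validates, and sums in a tangle, each function does one thing and can be tested in isolation.

```python
# Each function has a single responsibility, so each can be
# debugged or replaced without touching the others.

def parse_order(line: str) -> tuple[str, int]:
    """Parse one 'name,quantity' record into its parts."""
    name, qty = line.split(",")
    return name.strip(), int(qty)

def total_quantity(lines: list[str]) -> int:
    """Sum the quantities across all records."""
    return sum(parse_order(line)[1] for line in lines)

print(total_quantity(["widget, 3", "gadget, 4"]))  # 7
```

If the record format changes, only `parse_order` changes; the summing logic and everything built on it stay untouched.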


OP opines, but cites no evidence, examples, or even anecdotes. If your work is slow and incorrect, your work isn't simple, it's incomplete. Also, some problems are irreducibly complex. Save us from "simple" and "lightweight" "solutions."


At the end of the day your opinion on this doesn't matter. The company you work for is still going to want it as fast as you can possibly manage, and they honestly don't care if you have to write spaghetti code to get there, because they don't have to deal with it.

I think I need a new job...


Simple means that there is less to go wrong, but not necessarily correctness. A seemingly simple implementation can also hide a lot of behavioural complexity behind abstraction layers. The program might not crash, but it might not behave properly either.


In the boardroom, three of Ford's top brass are in the midst of heated debate...

"As you see, our main priorities, in order of importance, are simplicity of design, reliable operation, and good performance. For now we're focussing on simplifying our designs and manufacturing process, we'll figure out the rest afterwards."

"In that order?!" screamed another, sitting to the left. "Why you might as well replace the engine with a brick! If it's not able to drive you around, what even makes it a car?! Scrap your ramblings, first we make a car that works, with an engine and all, then we can figure out how to manufacture it."

"Pah," scoffed the engineer across the table, "and when is it in this story that you realize we're in the business of building cars, not mars rovers? If it doesn't get you from A to B faster than a bike nobody is going to give a hoot that it can run for three years without maintenance at the bottom of the Mariana Trench."

"Idiots!", the original butted back in. "If you make it simple first, then changing it to make it fast will be easy, and of course only simplicity begets correctness."

"You think the Mars Rover was simple?!─"

"Ha, because nuclear reactors are the paragon of simplicity─"

"CHERNOBYL IS EXACTLY THE POINT I'M MAKING HERE!─"

"Though wasn't it economic factors that led to the disuse of nuclear power? I hear solar is getting popular, we should really stick to my plan─"

"A car is not a solar panel, you bumble-headed fool─"

"You might as well be though─"

_Ahem,_ sounded the man at the head, drawing the room's attention, "I'm a little lost, so forgive the stupid question, but why have you not just considered... doing them together?"


I wish more people followed those 3 steps for deploying apps.

A lot of people want to jump straight to the auto scaling, self healing, super deluxe cluster + tax edition to handle a billion requests a week before their app even has 1 visitor.


TLDR: there’s no universal criteria for code quality.

Imagine you have 5 programmers. #1 works in a web development company, writing a CRUD web app in SQL, PHP and JavaScript. #2 is coding C and developing embedded firmware for a PC component, and also works on the driver for that component in the Linux kernel. #3 writes some COBOL for a half-century-old system running in a bank. #4 works in a game studio developing a level editor for a yet-to-be-announced videogame. #5 is a researcher working in a university, mostly writing MATLAB but occasionally some Python and R.

The ideal simplicity/correctness/performance tradeoff is totally different between the five. Just like any other tradeoff, or generalization, or methodology, or approach. All 5 are writing code, but the software they're creating has totally different requirements, expectations, lifetime and budget.

I never saw articles generalizing stuff across all engineers: aircraft, biomedical, marine, telecommunications, etc. However, I saw many articles, this one included, that try to generalize across all software development.


Correct, [Simple|Fast]: In That Order.

The above is definitely the correct algorithm. Surely you don't think it is wrong! Come at me, bro!

If you aren't trying to make your representation correct, then why bother? Correctness is either the telos or closest of these in the pursuit of the telos (the assumed intrinsic purpose). To make it simple and fast are the instrumental (and heuristical) means to increasingly correct versions of that correctness end(s). You can only ever simplify a representation (hello, Kant!) because you never have complete (hello, Gödel!) access to the thing-in-itself directly.

The representation is the only thing you can consciously (the Daseinic emergent result of the non-conscious [that you know of: problem of other minds, OOO, dualist's hard problem of consciousness, etc.] aspects of your brain using language to talk to itself) attend to; it's the only way to tell and retell these stories to yourself. How correctly can you recursively represent correctness? I can only begin to meaningfully compute by starting with a meaningful notion of correctness as my foundation (however flawed it may objectively be), else it is meaningless. What does it mean to correctly tell yourself about an object if you don't assume the concept of correctness in how you tell yourself about an object?

In a sense, you beg the question of the Ontological Proof (hello Kantian idealist vs Gödelian realist in dialectic!), the reality of the possibility of the goodiest good of your program (and I suggest even beyond), in thinking about the telos of your program (and, clearly, you change your mind about what counts as that).

Simplifying and/or optimizing a representation already begs the question of having something to be correct about. "Simplify" according to what standard? The pursuit of correctness is the necessary precondition to having a reason to take the means. Epistemic justification in coding computers (be they silicon or brains) is inevitably tied to this telic process and metanarrative. We do not escape the chain of sublations. Be a transcendental coder! I believe in you, folks. I know you care about correctness, deep down. Don't you want to be correct about this code too?

It's dangerous to go alone! Take this: https://plato.stanford.edu/entries/dialetheism

------------------------------

https://philosopher.life/


A valid observation, but hardly novel: http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast


It's not an order, it's not "or", you must strive for simple AND correct AND fast, that's how you compete. If you don't have one, it is extremely difficult to compete.


For any non-trivial program, I usually end up with: working jank, maintainable jank, fast jank. If I can write something that works AND isn't a ball of mud, I'm pretty happy.


How sure are you that your program is simple enough for the next guy to grok it? If it works and the next guy doesn't see how, it doesn't work.


and funnily enough, writing simple code is the hardest part to learn, because you can't measure it until it falls down on you

now that I think about it, that would be a good exercise for programming students. after writing a program that does a certain task, make the student rewrite it to be simpler/easier to understand/maintain. then take a look at the best solutions in the class.


Simplicity only trumps actual correctness if you want to hire more cheap maintenance programmers, thus pay more for the software.


I read this as "Don't start with the edge cases." Which is _sometimes_ good advice, but is not a universal rule.


What does "simple" mean? When I speak with Java developers, it means enough abstraction for all kinds of stuff (UML graphs) :/


How do you know how far to simplify if you don't pay attention to correctness? Simple has correct as a dependency.


The simplest possible program that is not correct is no program at all, so why bother?


“Everything should be as simple as it can be, but not simpler” — the quote commonly attributed to Einstein, in the spirit of Occam's razor.


These are also the attributes that news should follow.


Make it work, make it good, make it fast.


I disagree. Correct, THEN simple, then fast.

Why? Aside from the excellent reasons given by others below, remember that even conceptually simple systems can and often have surprisingly complex behaviours.

The emergent complexity of interacting simple systems is... often breathtaking in the scope of how whacked out the unexpected can be.

tl;dr - "simple" programs do not necessarily have "simple" behaviour or "simple" interactions with other "simple" systems.

Make sure the damn thing works before simplifying it.


I subscribe to this philosophy, but unfortunately simplicity is surprisingly difficult for most software devs. It's because if you don't have the tool in your toolbox, you won't even know it can be simplified.

I once did a code review for a function that parsed some Linux file for Ethernet stats. It was incredibly convoluted, with tons of substring finding and indexing. I told the author to simplify it, and he declared he already had and it was as simple as it could get. I then showed him regex, and his mind was blown.


Exactly. And this is a major reason to always be on the lookout for new things to learn.

I had a similar experience (but earlier in his coding process) and managed to change what was about to be a months long effort of adding epicycles and epicycles to code into a one week task. I have most of my library at work and constantly speak of design and theory concepts with the younger folks. Based on more recent code it's paying off.


  > The reason is straightforward: if your 
  > solution is not simple, it will not be 
  > correct or fast.
A very hand-wavy statement, and not always true.


"Everything should be made as simple as possible, but not simpler"



