Hacker News

The answer to "how much" is always "it depends".

* Tech stacks evolve, changing the amount of testing that is needed => most stacks let you focus only on the "meat" of the logic, rather than on things like integration (Spring Integration / Camel), networking (Netty), caching (Redis), or even data structures (various language built-ins).

* Humans get better with years of coding => I spot flaws and mistakes during code reviews N times faster than I did 10 years ago. I code in little pieces (usually functions), which "talk" back to me immediately, even before they are finished.

* REPLs are getting really good => Clojure, Scala, Ruby, Groovy, etc. REPLs save lots of time and prevent mistakes: a 5-minute REPLay session reveals a nice and polished approach a lot quicker than a "let's try this / now rerun the test" loop.
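To make that concrete, here is a hypothetical example of the kind of question a quick REPL session settles without writing a throwaway test: does a hand-rolled order-preserving dedupe behave the same as the built-in? The sample data and helper names are mine, not from the comment above.

```ruby
# In irb: try a manual order-preserving dedupe against Array#uniq
# on sample data, and compare the results directly.
sample = [3, 1, 3, 2, 1]

seen = {}
manual = sample.reject { |x| seen.key?(x) ? true : (seen[x] = true; false) }

built_in = sample.uniq  # the built-in turns out to do exactly the same

puts manual.inspect    # => [3, 1, 2]
puts built_in.inspect  # => [3, 1, 2]
```

Two evaluated expressions answer the question immediately; the "rerun the test" loop would have cost several edit/run cycles for the same insight.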

* Domain knowledge and "I've done this exact thing before" greatly impact the amount of testing needed => e.g. deeper domain knowledge allows for [better] tests, while no domain knowledge requires lots of prototyping (even if you think it is the "real thing" at first, it is not, it's a prototype), and would greatly suffer from a large number of tests, as most of the time will be spent rewriting tests instead of learning the domain.

In the end, the rule of thumb I always use is "do whatever makes sense". I don't buy TDD, ADD and other DDs. They are fun to read about, but they are too removed from the "real thing". If any DD term is needed, what I use is MSDD => "Making Sense Driven Development"


