
I don't think you can automate all the testing, since almost by definition, you always test AGAINST something. That something can be a specification, or a different implementation, or just a clock, but it has to come from outside the subject (program) being tested.

A different take is that we want tests to have two qualities: correctness (the test cases should match the desired program behavior) and comprehensiveness (the test cases should cover as much of the behavior as possible). The only way both are 100% attained is if the source of your test cases is effectively another implementation of the same specification, and you compare the behavior of the two.

The comparison itself can be automated, but the creation of the other implementation cannot. So, logically, if you want to automate it anyway, you have to compromise one of the two qualities: either be less correct (less strict when comparing program output) or less comprehensive (verify fewer possible inputs).
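To make the "less strict comparison" compromise concrete, here is a small sketch (the function names and the averaging example are my own, purely illustrative): two implementations that both satisfy the spec "compute the mean" can still produce different floating-point bits, so the automated comparison has to accept a tolerance rather than exact equality.

```python
import math

# Two hypothetical implementations of the same spec ("compute the mean").
def mean_a(xs):
    # Straightforward: sum, then divide.
    return sum(xs) / len(xs)

def mean_b(xs):
    # Incremental (running) mean; mathematically equivalent,
    # but floating-point rounding differs from mean_a.
    acc = 0.0
    for i, x in enumerate(xs, 1):
        acc += (x - acc) / i
    return acc

xs = [0.1, 0.2, 0.3]
# Exact comparison may fail; a tolerant comparison is the "less correct" trade-off.
assert math.isclose(mean_a(xs), mean_b(xs), rel_tol=1e-9)
```

The tolerance is precisely the point where the automated comparator becomes weaker than a human judging "is this output acceptable?".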

So it seems to me that the "automate all the tests" folks want to have their cake and eat it too. They want an automated test suite that is both comprehensive and correct, without the effort of writing an approximation of another implementation to compare against.

In the past (I love how the blog post says "I am not advocating to returning to the Dark Ages of software delivery", as if that was necessarily terrible), you had a QA team to do exactly that: approximate another implementation of the same program directly from the specs. The better the approximation, the better the verification. But the cost is duplication of effort, in some sense. If you try to remove the duplication (e.g. by having the same person create both at the same time), you're likely to compromise one of the two qualities without realizing it.

Let me state the above yet differently. The "let's automate testing" approach is based on the assumption that a human tester runs the same tests over and over. But that's not the case: manual testing is actually different each time, so what you invisibly lose through automation is comprehensiveness.

In fact, the QA's job in the past was to have another person (other than the developer) try to make sense of the specification (and presumably approximate the implementation with their test design), and to check whether the specification was translated correctly by comparing their understanding against the developer's. While the comparison itself can be automated, that second look is important for discovering which parts of the specification are actually weak and can be understood differently. I don't think testing a program is solely about its intrinsic properties; rather, it is about checking the correctness of the translation from specification to executable code.



I think if we had an automated way to check two implementations against each other, it would beat almost every other form of testing that exists.

Then testing would come down to simply writing everything twice and hoping you got it right at least once.


I don't think there is anything theoretical that prevents you from doing that. In fact, it has been tried: https://en.wikipedia.org/wiki/N-version_programming

The problem in practice, though, is that for larger programs it's really difficult to delineate what the inputs and outputs are (so they can be recorded and compared), especially since they take so many different forms.

But on a small scale (functions or modules), it should be possible; the tooling just isn't widespread. IMHO it would save a lot of time writing unit tests for refactoring, and would be real progress.
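At the function level, the comparison can be automated today with nothing more than a loop over generated inputs. A minimal sketch of this kind of differential check (the function names and the "sum of squares" spec are hypothetical stand-ins; in a refactoring, `reference` would be the old code and `candidate` the new):

```python
import random

def reference(xs):
    # Old implementation of the spec "sum of squares".
    total = 0
    for x in xs:
        total += x * x
    return total

def candidate(xs):
    # New (refactored) implementation of the same spec.
    return sum(x * x for x in xs)

def differential_test(f, g, trials=1000, seed=0):
    # Compare the two implementations on randomly generated inputs;
    # any divergence is reported with the input that triggered it.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        assert f(xs) == g(xs), f"implementations diverged on input {xs}"

differential_test(reference, candidate)
```

The hard part, as noted above, is not this loop but delineating the inputs and outputs cleanly enough that such a loop can be written at all.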

And I will go out on a limb and claim that what people (in the OOP world) mean by testability is really just referential transparency (in FP parlance): in other words, our ability to delineate what all the inputs and outputs of a module are. Thus, adopt more FP, and this will become increasingly possible.
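A toy illustration of that claim (the pricing example and names are mine, not from any particular codebase): a referentially transparent function exposes all of its inputs as parameters and all of its output as the return value, so its behavior can be recorded and compared mechanically; a version that reads hidden state cannot be captured that way.

```python
# Referentially transparent: every input is a parameter, the only output
# is the return value, so recording and replaying behavior is trivial.
def price_with_tax(net: float, tax_rate: float) -> float:
    return net * (1 + tax_rate)

# Not referentially transparent: one input (the rate) is hidden global
# state, so (argument, return value) pairs don't fully describe behavior.
TAX_RATE = 0.2

def price_with_tax_impure(net: float) -> float:
    return net * (1 + TAX_RATE)
```

In the transparent version, "testability" falls out for free: the input/output boundary is exactly the function signature.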



