The illogical thinking implicit in the article upsets me.
They take a model, confirm that it performs as expected on some examples, and then assume that discrepancies on other examples mean something. But those discrepancies could just as easily mean the model never worked in the first place. It's machine learning magic: you have no theory for why the model should be correct, so how can you learn anything from using it?
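To make the objection concrete, here is a minimal sketch of the workflow being criticized (synthetic data and a toy model of my own choosing, nothing from the article): a linear model is calibrated on a narrow range where it happens to fit, and the "discrepancies" it then shows on new inputs come entirely from the model being wrong out of range, not from anything interesting in the new data.

    # Sketch of the criticized workflow (hypothetical example, NumPy only).
    import numpy as np

    rng = np.random.default_rng(0)

    def true_process(x):
        # The (unknown) ground truth is mildly nonlinear.
        return 2.0 * x + 0.3 * x**2

    # "Confirm the model performs as expected" on a narrow calibration range.
    x_cal = rng.uniform(0.0, 1.0, size=50)
    y_cal = true_process(x_cal) + rng.normal(0.0, 0.05, size=50)
    slope, intercept = np.polyfit(x_cal, y_cal, deg=1)  # fit a straight line
    rmse_cal = np.sqrt(np.mean((slope * x_cal + intercept - y_cal) ** 2))
    print("calibration RMSE:", rmse_cal)  # small: the model "works" here

    # "Assume discrepancies on other examples mean something."
    x_new = rng.uniform(3.0, 5.0, size=50)
    y_new = true_process(x_new) + rng.normal(0.0, 0.05, size=50)
    residuals = y_new - (slope * x_new + intercept)
    print("mean discrepancy on new examples:", residuals.mean())  # large

    # Without a theory saying the linear form should hold out here, these
    # large residuals are indistinguishable from "the model never worked."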