
In theory I agree with you: we want unbiased models. But here we have an input distribution that is not well understood, so things get much more complex. We don't even have a clear definition of what counts as a face.

The model doesn't work for people wearing masks: a near-100% failure rate on that category of inputs. Should we release it or not?

In general, some inputs are harder than others, so more errors on those are to be expected.

That being said, in practice, under normal conditions, it is not hard to detect people with dark skin if proper training data and training procedures are used (incidentally, if you don't pay attention to how you do things, even a low-light image of a Caucasian face will not be recognized), so there is little excuse to exclude a large part of the population out of sloppiness. Moreover, for this specific category (and of course others), there are ethical and legal considerations that require making sure the system works for them.
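The check being argued for here is straightforward to mechanize: slice evaluation results by input category and look at each slice's failure rate before release. A minimal sketch, with made-up category names, data, and release threshold:

```python
# Hypothetical sketch: per-category failure rates for a face detector.
# The categories, results, and 0.5 threshold are illustrative assumptions.
from collections import defaultdict

def failure_rate_by_group(results):
    """results: iterable of (group, correct) pairs -> {group: failure rate}."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

# Fake evaluation results (group label, was the face detected correctly?).
results = [
    ("dark_skin", True), ("dark_skin", True), ("dark_skin", False),
    ("low_light", False), ("low_light", False),
    ("masked", False), ("masked", False), ("masked", False),
]

rates = failure_rate_by_group(results)
# Flag any slice whose failure rate exceeds the (assumed) release threshold.
flagged = [g for g, r in rates.items() if r > 0.5]
```

Aggregate accuracy would hide exactly the problem discussed above; the per-group breakdown is what surfaces a category that fails almost always.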

Apart from that, I really do think that ML systems with no "operator override" are, in many contexts, a hazard. We cannot expect model creators to have predicted and tested every possible input, so we need ways to manually correct errors (for instance in lending or border control). Incidentally, it is interesting to note that this will be skilled work that will not be taken over by "AI".



I believe we're mostly in agreement. What's not acceptable to me is using "all models are wrong" to imply that it's OK not to understand the ways in which they are wrong, to be willfully ignorant of their failures, or to devalue transparency.

As a professional and practitioner, I have a responsibility to be transparent and honest when I deliver a model. Part of that is understanding and designing for failure modes. That's simply good engineering.


Indeed, I agree. It even seems that for some use cases training data is no longer the bottleneck; robust test suites are. Interesting times. Let's hope we find a responsible way to use these powerful technologies.
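One way to read "robust test suites are the bottleneck": treat each input category as a regression test that gates release on a minimum accuracy. A hypothetical sketch, where the model and datasets are stand-ins (the real work is building representative per-category sets):

```python
# Hypothetical sketch: subgroup coverage as a release-gating test suite.
# The fake model, datasets, and 0.9 accuracy floor are illustrative assumptions.

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Stand-in "model": detects every face except masked ones.
def fake_model(sample):
    return sample != "masked"

# Stand-in per-category evaluation sets; every sample contains a face.
SUITES = {
    "baseline": ["plain", "plain", "plain"],
    "low_light": ["plain", "plain"],
    "masked": ["masked", "masked"],
}

def run_suite(name, min_accuracy=0.9):
    """Return True if the model clears the accuracy floor on this category."""
    samples = SUITES[name]
    preds = [fake_model(s) for s in samples]
    labels = [True] * len(samples)
    return accuracy(preds, labels) >= min_accuracy
```

With this structure, adding a newly discovered hard category is just adding a suite; the gate then fails loudly instead of the failure mode shipping silently.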




