Hacker News

We covered these topics in my Computer Ethics class, and I took a bunch of philosophy classes on ethics too. You can teach a person all about ethics from a book. The problem is that people make the decision to act ethically with their gut, in the moment.

For instance, I was designing an app that would identify whether a food contained an allergen. Eventually I came to the realization that my program, if deployed, could actually hurt people, so I stopped working on the project. I wrote a blog post about this (shameless plug) at https://medium.com/@zitterbewegung/making-computer-vision-sy... . The core issue was that I was giving people information on which they could base medical decisions. This gave me a bad feeling, and a bunch of lawyers told me the same when I referenced this post.

I eventually pivoted what I learned into this (yet another shameless plug): https://steemit.com/twilio/@zitterbewegung/mms2text-let-your...

which doesn't have the issue of harming people.



But not making your app killed people who didn't notice nuts in their food...


That is too strong a conclusion. People with nut allergies already have procedures for finding out whether nuts are in food, for example asking the restaurant or preparing the food themselves.

Without knowing if his app was accurate, we cannot say whether building or not building the app was the right decision.


The app was very inaccurate during my testing. Almonds hidden inside a dish made of pure chocolate (a Hershey's Kiss) would be an easy counterexample.

Also, I have a nut allergy, so I have a complicated set of procedures to figure out whether nuts are in a dish (that was the main motivation for the app).


Humans aren't any better at spotting invisible almonds in a Kiss. An AI could solve that (better than a human could!) by memorizing ingredient lists from public databases and tagging foods that have nutty variants, ones people often wouldn't know about.
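The database-lookup idea could be sketched like this. Everything here is hypothetical: the table `NUTTY_VARIANTS` stands in for a real public ingredient database, and the key point is that an unrecognized food returns no claim at all rather than a guess.

```python
# Hypothetical sketch: flag foods whose known variants may contain nuts,
# using a small hand-made table standing in for a public ingredient database.

NUTTY_VARIANTS = {
    "hershey kiss": ["Hershey Kisses with Almonds"],
    "pesto": ["classic pesto (pine nuts)", "pesto with walnuts"],
    "plain brownie": [],  # recognized, and no known nutty variants
}

def nut_warnings(food_name):
    """Return known nut-containing variants of a recognized food.

    Returns None when the food isn't in the database at all, so the
    caller can tell "no known nutty variants" apart from "no idea".
    """
    variants = NUTTY_VARIANTS.get(food_name.lower())
    if variants is None:
        return None  # unrecognized food: make no claim either way
    return variants
```

Distinguishing an empty list from `None` matters here: a plain brownie is affirmatively clear in this toy database, while a mystery soup is simply unknown.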


I think it's fair to say humans are better at reasoning about uncertainty and risk. If the food isn't in the database, or we aren't sure if it's a match, what does the algorithm say?

ML algorithms optimize statistical performance against loss functions or error rates. They aren't (yet) good at understanding the difference between a mistake that costs you a dessert and a mistake that might kill you. Maybe they can guess correctly a higher percentage of the time when shown flashcards, but that's small consolation from a hospital bed. They also aren't good at recognizing the limits of their own knowledge, i.e. saying "I don't know".
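Both points above can be made concrete in a few lines. This is a minimal sketch with hypothetical names and thresholds, not anyone's actual app: a "contains nuts?" verdict that abstains unless the model is very confident, and a loss that penalizes a missed allergen far more than a false alarm.

```python
import math

def allergen_verdict(p_nuts, abstain_band=(0.02, 0.98)):
    """Map a model's predicted probability of nuts to a verdict.

    Any probability inside the abstain band returns "I don't know"
    instead of guessing, since a wrong "no nuts" answer can be fatal.
    """
    low, high = abstain_band
    if p_nuts >= high:
        return "contains nuts"
    if p_nuts <= low:
        return "no nuts detected"
    return "I don't know -- ask the kitchen"

def asymmetric_loss(y_true, p_nuts, fn_weight=100.0, fp_weight=1.0):
    """Cross-entropy-style loss that penalizes a false negative
    (missed allergen) 100x more than a false positive (false alarm)."""
    eps = 1e-9
    if y_true == 1:  # the dish really contains nuts
        return -fn_weight * math.log(p_nuts + eps)
    return -fp_weight * math.log(1.0 - p_nuts + eps)
```

With the default band, a model that is 50/50 on a dish says "I don't know" rather than flipping a coin; and under the asymmetric loss, being wrong about a dish that truly contains nuts costs 100x more than the reverse, which is one way to encode "missed dessert vs. hospital bed" into training.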


But I would be responsible for people's deaths if my system said there weren't any nuts in a dish and they ate it.



