We covered these topics in my Computer Ethics class, and I took several philosophy courses on ethics as well. You can teach a person all about ethics from a book. The problem is that people make the decision to act ethically with their gut, in the moment.
For instance, I was designing an app that would identify whether a food contained an allergen. Eventually I came to the realization that my program, if deployed, could actually hurt people, so I stopped working on the project. I wrote a blog post about this (shameless plug) at https://medium.com/@zitterbewegung/making-computer-vision-sy... .
The core issue was that I was giving people information on which they could base medical decisions. This gave me a bad feeling, and a number of lawyers agreed when I showed them the post.
That is too strong a conclusion. People with nut allergies already have procedures for knowing whether nuts are in their food, for example asking the restaurant or preparing the food themselves.
Without knowing if his app was accurate, we cannot say whether building or not building the app was the right decision.
Humans aren't any better at detecting whether a dish has invisible almonds. An AI could solve that (better than a human could!) by memorizing ingredient lists from public databases and tagging foods that have nutty variants, ones people often wouldn't know about.
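A toy sketch of what that lookup approach might look like. Everything here is made up for illustration: the dish names, the database contents, and the function name are all hypothetical, not from the app being discussed.

```python
# Hypothetical "memorized ingredient list" lookup. The database
# entries below are illustrative, not real allergen data.
ALLERGEN_DB = {
    "pesto": {"tree nuts"},      # pine nuts, or cashews in some variants
    "marzipan": {"tree nuts"},   # almond paste
    "satay sauce": {"peanuts"},
    "plain rice": set(),
}

def allergens_in(dish: str) -> set[str]:
    """Return known allergens for a dish; fail loudly if it's unknown."""
    if dish not in ALLERGEN_DB:
        # Refusing to answer is safer than silently returning "no allergens".
        raise KeyError(f"no ingredient data for {dish!r}")
    return ALLERGEN_DB[dish]
```

Note the failure mode the next comment raises: the interesting question is what happens on the `KeyError` path, when the dish isn't in the database at all.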
I think it's fair to say humans are better at reasoning about uncertainty and risk. If the food isn't in the database, or we aren't sure if it's a match, what does the algorithm say?
ML algorithms optimize statistical performance against loss functions or error rates. They aren't (yet) good at understanding the difference between a mistake that costs a missed dessert and a mistake that might kill you. Maybe they can guess correctly a higher percentage of the time if shown flashcards, but that's small consolation from the hospital bed. They also aren't good at recognizing the limits of their own knowledge, i.e. saying "I don't know".
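That asymmetry can be sketched in a few lines: wrap the model's output in a decision rule that demands far more certainty before calling something "safe" than before calling it "avoid", and abstains otherwise. The threshold and the cost asymmetry here are illustrative choices, not from any real system.

```python
def decide(p_contains_allergen: float,
           confidence_floor: float = 0.9) -> str:
    """Map a model probability to an action, preferring "I don't know".

    A missed allergen (a false "safe") can be fatal, while a false alarm
    only costs a dessert, so the "safe" call needs much more certainty
    than the "avoid" call.
    """
    if p_contains_allergen >= 0.5:
        return "avoid"  # err on the side of caution
    if (1.0 - p_contains_allergen) >= confidence_floor:
        return "likely safe - verify with the kitchen"
    return "I don't know"  # abstain rather than guess
```

The point is that nothing in a standard loss function produces this behavior by itself; the abstention and the asymmetric thresholds have to be designed in deliberately.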
I eventually pivoted what I learned into this (yet another shameless plug): https://steemit.com/twilio/@zitterbewegung/mms2text-let-your... which doesn't carry the same risk of harming people.