
This article seems really wishy-washy and confusing. What is actually being proposed?

A set of standards that automated algorithms should adhere to? What kind of standards are we talking about?

A new branch of philosophy that talks about the actions of automated algorithms? What would be the point? How do you actually turn that into something that makes progress, and that people will act on?

More ethics classes in universities for CS students? How do you differentiate this from what's already happening? Is it just broader?

---

To try to steelman this a bit, I figure there are two issues at play.

The first is making sure that AI works the way we designed it to: it's free of biases, it doesn't endanger people, it adheres to laws. We're doing horrifically on this metric when it comes to long-term superhuman AI alignment, but most people are talking about next-decade issues. Those are raw technical issues, and though we're still working on robustness and detecting biases, I really cannot see much need for external guidance outside of the natural technical progress that the field is already heavily invested in. These are problems we want to solve already.

The second issue is the use of these technologies as tools. This is where we talk about how large companies' algorithms affect public perception, how automated militaries affect warfare, or how self-driving cars are litigated. These are not AI problems; they're social problems. Yes, the technology behind those examples looks kind of similar behind the scenes, but these are distinct social, legal, and economic problems.

This is kind of like seeing the advent of electricity, predicting its effect on society, and asking for people to study "ethics of electricity".



The first problem is one where CS researchers would likely benefit from more contact with outsiders, although I agree it's a pretty lively area of research already, at least for the shorter-term issues.

The second problem is one where social scientists could benefit from actually understanding how these systems work. Yes, they are social problems, but sociologists could do much better if they had a better understanding of the details of the technology.

The authors say they want "a consolidated, scalable, and scientific approach to the behavioral study of artificial intelligence agents in which social scientists and computer scientists can collaborate seamlessly". So, a new interdisciplinary category.

I get the sense they are especially concerned with the empirical study of behavior and social impact of big ML systems in the real world. In another post I compared this sort of thing to sociologists studying transit infrastructure.


> The first problem is one where CS researchers would likely benefit from more contact with outsiders

Could you be more concrete about how you would like other fields to contribute?

> The second problem is one where social scientists could benefit from actually understanding how these systems work [...]

> The authors say they want "a consolidated, scalable, and scientific approach to the behavioral study of artificial intelligence agents in which social scientists and computer scientists can collaborate seamlessly". So, a new interdisciplinary category.

I don't really agree. You don't need to know how electricity works to study its societal effects, and the societal change from electricity-augmented manufacturing is a completely different problem to the health and safety regulation of indoor sockets.

The same is true for AI. You don't need to know what backpropagation is to work on the long-term ramifications of automated warfare, and that in turn is a largely irrelevant discussion for someone figuring out whether a company's hiring algorithms are racially biased. There is neither a clear need for top-down regulations to be grounded in the minutiae of the systems, nor any obvious advantage of grouping these discussions under one umbrella.

There is a need for the disciplines already dealing with these problems to pick up more specialised knowledge as the systems get more common, but that is a far cry from what seems to be argued in the article.


> Could you be more concrete about how you would like other fields to contribute?

In the world of fair machine learning, "What are the right criteria for fairness under [set of circumstances]?" is usually not easily answerable, and I don't think we'll find satisfactory answers without more involvement from non-CS researchers -- both people in economics, law, and the humanities who are sort of generally concerned with fairness in society, and people who know a lot about specific domains where systems are being deployed. This in particular seems like an issue of "machine behavior" as described in the article.
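
To make the tension concrete, here is a toy sketch in Python (hypothetical hiring data, all numbers invented for illustration) showing that two standard criteria, demographic parity and equal opportunity, measure genuinely different things on the same predictions:

    # Toy sketch: two standard fairness metrics on the same hypothetical
    # hiring predictions (preds: 1 = hired; labels: 1 = qualified).
    # All numbers are invented for illustration.

    def selection_rate(preds):
        return sum(preds) / len(preds)

    def true_positive_rate(preds, labels):
        # Fraction of qualified candidates who were hired.
        hits = [p for p, y in zip(preds, labels) if y == 1]
        return sum(hits) / len(hits)

    group_a_preds, group_a_labels = [1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 1, 1]
    group_b_preds, group_b_labels = [1, 0, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]

    # Demographic parity compares how often each group is selected.
    dp_gap = abs(selection_rate(group_a_preds) - selection_rate(group_b_preds))

    # Equal opportunity compares how often *qualified* members are selected.
    eo_gap = abs(true_positive_rate(group_a_preds, group_a_labels)
                 - true_positive_rate(group_b_preds, group_b_labels))

    print(f"demographic parity gap: {dp_gap:.2f}")  # 0.17
    print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.25

Which gap you are obliged to close is a value judgment that depends on the domain, which is exactly where the economists, lawyers, and domain experts come in.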

With regards to transparency/explainability of models, there's a problem of making sure the "explanation" is actually useful and intuitive to the user, where psychologists (and HCI people who are already in CS depts) may have a lot to contribute.
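
For a concrete example on the explainability side, here is a minimal sketch (assuming scikit-learn, with synthetic data) of permutation importance, one common "explanation" technique. Computing the numbers is the easy CS part; whether a table of accuracy drops is actually intuitive to a loan officer or a doctor is the psychology/HCI part:

    # Minimal sketch of permutation importance with scikit-learn.
    # Data is synthetic: only feature 0 actually determines the label.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] > 0).astype(int)

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Shuffle one feature at a time and measure the drop in accuracy.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, drop in enumerate(result.importances_mean):
        print(f"feature {i}: mean accuracy drop when shuffled = {drop:.3f}")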

In both cases there is a little ad hoc communication, and there ought to be more.

> I don't really agree. You don't need to know how electricity works to study its societal effects, and the societal change from electricity-augmented manufacturing is a completely different problem to the health and safety regulation of indoor sockets.

I think electricity is not the best analogy, because it has a really subtle theory that few people understand, but we're so familiar with its use that many of its interesting properties are too obvious to see. It's also not at all autonomous -- it is often fruitful to think of a specific machine learning system as an agent, whereas this doesn't make much sense for electricity. Most importantly, I think it's just broader and more general than machine learning (trivially so!).

> The same is true for AI. You don't need to know what backpropagation is to work on the long-term ramifications of automated warfare, and that in turn is a largely irrelevant discussion for someone figuring out whether a company's hiring algorithms are racially biased.

The commonality here is you care about understanding or predicting the empirical behavior of machine learning systems interacting with the real world, especially with humans. ("Long-term ramifications of automated warfare" might not qualify, but I think medium-term ramifications certainly could.)

I don't think CS researchers are trained or particularly interested in empirical studies of human behavior, the stock market, etc., nor should they be, so somebody else will have to help. That somebody had better know enough about CS to be able to collaborate with actual CS researchers, though, or the results are going to be poor.


> In the world of fair machine learning, "What are the right criteria for fairness under [set of circumstances]?" is usually not easily answerable

There is nothing particularly machine learning specific about this. If I want to design an AI to detect bank fraud, you're right that I want to do cross-disciplinary research in its design, but I do not understand what AI ethicists would add to that.

My disagreement is not with the idea that AI will involve itself in other fields, and that in the process we need to learn about those fields. It's with the idea that those people should be talking about the raw technology, or that there should be a general field about how AI specifically relates to the sum of every other field.

> The commonality here is you care about understanding or predicting the empirical behavior of machine learning systems interacting with the real world, especially with humans.

Which is just the technical domain of AI research. Narrowing it down to the commonalities has removed all of the interesting points we were going to study!

I agree that when AI is added to social systems or the stock market, we need to involve people versed in social systems or the stock market. I still do not see why they cannot collaborate in exactly the same way that they already have. The only difference visible to me is that the hammers are bigger.



