Hacker News | septor's comments

Here’s a summary: we know this model will harm society, and we are releasing it anyway. We are fiddling with the way it’s released in an attempt to absolve ourselves of blame while simultaneously collecting the profit in the form of a juicy acquisition.

The net result of these advanced forms of signal processing will be negative. Nobody has come forward to prove that they will benefit society on the whole or even that they are safe. But anyone who raises concern is shouted down and called names like “alarmist” and “Luddite.”

These companies are playing with fire, and the whole world stands to be burned. Wake the fuck up.


Someone is going to invent this model sooner or later, simply because it is possible. There is not much sense in trying to stop it. We just have to adapt.


That’s not correct. What you are saying is that there is no plausible organized effort that could stop or slow the creation of signal-processing models with pronounced negative impacts. The error is on two levels: you are leaning too heavily on analogy with other technologies, and you are writing off the possibility of stopping AI when it’s still not clear that it can’t be stopped.

This isn’t something that can be built and tested in isolation like other things we are familiar with. Training these models is not an exact science. Nothing about AI is an exact science. Progress only comes with trial and error, and each trial requires huge compute resources, at least for the most capable and dangerous models. It can’t be done in your basement, not without significant effort and drawing attention to yourself. Could we sense whenever someone was trying to do it? Could we form a global coalition to stop every attempt? That brings us to the next thing.

What you are doing is the following: we are both in a car that is about to roll off a cliff. I propose that we try pressing the brakes. You respond by saying, geez, it looks like we probably wouldn’t stop in time; we are going awfully fast, so why even try pressing the brakes? Let’s just brace our heads and hope the impact doesn’t kill us.

Obviously the better thing to do is to try pressing the brakes, even if you aren’t sure you can stop in time.


I think if you were less cynical about what OpenAI is doing, you might even find them an ally to your perspective. Facebook throwing pocket change at an AI ethics org and Google’s rather embarrassing failure at staffing an ethics panel of its own are evidence that we’ve got bicycle brakes on a freight train.

OpenAI suffered a ton of blowback for not just releasing the full model from the start. You can read their initial blog post [1], looking particularly at the sections on Policy Implications and Release Strategy. I would also highly recommend you listen to Lex Fridman’s podcast with Greg Brockman [2] to hear their rationale for the recent org changes at OpenAI.

Obviously you can posit that everything they say is bullshit and they are only after the almighty dollar. I can’t prove that’s not true, and I personally believe there is at least a kernel of truth to it, but we live in a messy world, and finding imperfect allies is generally better than having none at all.

[1] - https://openai.com/blog/better-language-models/

[2] - https://www.youtube.com/watch?v=bIrEM2FbOLU


What do you mean, "Could we sense when someone was trying to do it"? Are we talking mandatory computer inspections, government-enforced walled gardens, and the death of the general-purpose computer? lol, if you thought gun control was hard...

Also, yeah, you could do it in your basement without being detected.

You can do it in the cloud, too. A lot of us here have the skills and resources to do it, but why spend time on this toy instead of another one, especially if it’s going to cost money we could spend on something more fun?


> The error is on two levels: you are using too much analogy with other technologies. And you are writing off the possibility of stopping ai when it’s still not clear that it can’t be stopped.

No. What I am saying is that it's impossible to centrally control the actions of 7.7 billion free humans. A lot of them will disagree with your position (and any other position as well).

By trying to "put a stop to it" in a central manner, you are only making it harder (but not impossible) for some subset to learn about this phenomenon, to improve on it and to understand its strengths and weaknesses.

I am unconvinced by your argument that we could reliably detect, much less stop, attempts to train a large and useful machine learning model.


Correction: You are in a car that doesn't have any brakes.


I don't understand why it's downvoted (at the moment of writing, at least). I think it's absolutely correct.

Imagine a thing "we should not do": the creation of a really powerful language model, genetic engineering to produce smarter, stronger children, or whatever. Whatever you are opposed to, really. The simple fact is that if you (and by "you" I mean any entity you associate yourself with, be it literally you, or your company, or a group of researchers in your country, or the government of your country) can do something, anybody can, even if it takes them a couple of years longer. You are not unique, nor alone. You can stop "yourselves" as long as you want, but there are other people, companies, research groups, and governments, and they don’t give a fuck about what you think "we" should do.

So, in the end, the car really doesn't have any brakes. Even if it truly means the end of it all, it's just unfortunate, but really, really unavoidable.


Nobody has proven these technologies will be a net negative either; there are many positive applications.

>These companies are playing with fire, and the whole world stands to be burned. Wake the fuck up.

Your tone and arrogance are very unwelcome.


> of a juicy acquisition

OpenAI is a non-profit. They are not looking to get acquired.


Is it still a non-profit? I think they changed their structure to some fudged thing recently.


Yes, but the idea of their for-profit subsidiary is to develop new stuff, not sell old stuff. (It'd be like the Hershey Foundation selling Hershey Chocolate.) No one is making >100x returns off of spinning off a 'GPT-2 startup', either. GPT-2 isn't that far in advance of everything else; there's already a replication of the WebText corpus they trained GPT-2 on.


It's the whole premise of OpenAI: they democratize AI and give everyone access. Since it's known to be possible, someone else would be able to repeat it anyway.

Saying hyperbolic things like "the whole world stands to be burned" is silly. They aren't giving everyone a nuke.

Things like deepfakes and synthesized speech are much, much worse, since you can make any politician say anything you want, and an average person wouldn't be able to tell it's fake.

