Hacker News

Maybe I’m too naive here, but I’m not seeing the potential malicious usage of this model. People will generate text, and then what?


Fiction, but this might be a good place to start: https://slatestarcodex.com/2018/10/30/sort-by-controversial/


Especially what kind of usage could not already be achieved by asking a human to write the text.


It makes it cheaper, faster, and effectively infinitely scalable. You'll run out of man-hours and people to generate this stuff by hand long before you run out of resources you can spin up in the cloud.

With the main concerns being troll army/fake news type stuff, I don't think this makes a difference. We seem pretty sure there are state-level actors behind a lot of that stuff, and I think it would be silly to believe they can't recreate something at the level of GPT-2, especially with the underlying principles out there and understood, competitors like BERT available, etc.

I think their heart is in the right place, but they're also being incredibly naive.


It will accelerate the development of fact-checking AI, which means lying to and manipulating people won't be so profitable any more, I guess.


That ain’t it.



