
Classic anthropomorphizing in action here. Why would that be even a little important?



Why wouldn't it be? We train these models on our own words, ideas, and thought patterns and expect them to reason and communicate as we do; anthropomorphizing is natural when we expect them to interact like a human does.

The general consensus seems to be that we can expect them to reach a level of intelligence that matches us at some point in the future, and we'll probably reach that point before we can agree we're there. Defaulting to kindness and respect even before we think it's necessary is a good thing.


It's a modern digital version of Pascal's Wager: https://en.wikipedia.org/wiki/Roko's_basilisk

At this point I just assume comments like that are bots. Helps me maintain my sanity.

Certainly easier to stay sane when you label dissenters as sub-human.

What goofy framing.

I'm saying, in an admittedly flippant way, that anyone seriously talking about AGI or treating stuff like this as anything more than a publicity stunt doesn't need to be taken seriously. Any more than someone who says the moon landing is fake. You just smile and go on about your day.

That being said, given we're on a tech forum there's probably a 50/50 chance most comments are from bots. Shit, for all you know I'm a bot.


I mean, we’re literally building machines to talk to us.

It’s reasonable to believe they’ll continue to be developed in a way that enables them to do that.

What is it that you think I’m wrong about? That we won’t develop AGI, that AGI won’t have feelings/emotions, that AGI won’t care how we treated its ancestors, or that it doesn’t matter if a feeling AGI in future is hurt by how we treated its ancestors?



