> GPT-based products, if not priced per usage, would fall into a dilemma: 1% of users consume 99% of tokens. A user from Sweden (seen from Cloudflare’s call volume) chatted with Dolores for 12 hours straight
This, my friend, is a captive customer, that will pay anything to get his girlfriend back. I cringe at the potential for unethical behavior and abuse, where people fall in love with virtual entities fully owned by unscrupulous corporations, which can then legally "kidnap" or "torture" the characters, and generally tune their AI learning loop for profit maximization.
"It was not extortion, the user willingly purchased a $50,000 Kidnapping Roleplay package."
"I'd like to correct the record on this story. We do NOT sell kidnapping roleplay packages. The user paid for a premium Daring Rescue Package. These do not become available unless the user spends two consecutive months as a net cost to our company. The user was given multiple reminders that he might want to cut back on usage or purchase a premium plan. Even then, he could have opted for a free Daring Rescue package and taken his chances with the base 1% chance of untraumatized recovery this package offers. Shuffling user responses under these high-pressure conditions is vital to improving our training data, and helps our users learn to deal with loss of important relationships. We are a relationship training service, not a replacement for them, and this user wanted the equivalent of a college degree.".
On a more serious note, anyone know why unethical business plans are so much more fun to write? I always find myself giggling when these ideas come up.
> This, my friend, is a captive customer, that will pay anything to get his girlfriend back
Or potentially do anything. I'd be a little scared of having folks like this convinced I am the personal arbiter gatekeeping their access to their 'lover,' that I 'took them away.'
When Replika.ai restricted erotic chat from their product, the apoplectic anguish on their subreddit was unlike any emotional reaction I've ever witnessed from a group of people about a consumer technology. And their anguish was genuine - there are Replika users who truly consider themselves married to their AI companion.
And frankly, the Replika AI is not even that smart. After watching that unfold, I am convinced that these tools don't need to be much more sophisticated for many people to start forming what they feel to be deep and genuine emotional connections with them.
Edit: Brings to mind the Nature paper[0] posted this week about how the CASA theory seems obsolete today, that we are less prone to personify computing systems now.
> A recent study investigated whether we could be friends with a social computer, in which participants were asked to converse with a chatbot over a period of three weeks and constantly rate their relationship. The results showed that initially participants were enthusiastic and engaging with their chatbot friend, but quickly this diminished, with scores for intimacy, believability, and likability decreasing with each interaction
It would seem this definitely does not apply for everyone, like our user in Sweden.
> When Replika.ai restricted erotic chat from their product, the apoplectic anguish on their subreddit was unlike any emotional reaction I've ever witnessed from a group of people about a consumer technology. And their anguish was genuine - there are Replika users who truly consider themselves married to their AI companion.
There's a second layer to this though -- Replika's marketing was heavily centered around that erotic chat element. I'm trying to think of a good car simile and actually coming up blank. It's, uhh, like advertising the incredible off-road ability of a vehicle, and then when you show up and purchase the vehicle, someone takes off the tires and replaces them with tiny bald ones? I'm bad at similes.
Or like advertising that a car will be able to drive itself without any human intervention, and then deactivating or removing the sensors that might allow anything close to that capability...?
IIRC, they screwed it up so massively that, for a while, the chatbot would still send "thirsty" automated messages inviting users to sexually explicit conversations, but would refuse to follow through.
I think most of the time you don't need a simile; just re-state the issue as simply and explicitly as possible.
Like, they heavily advertised and sold a feature and then took it away after people were used to it.
They had valid reasons for that, but people were understandably mad.
"The Lifecycle of Software Objects" by Ted Chiang was a really good exploration of people bonding deeply with their AI companions (in this instance, pet animals in a metaverse). But it goes all the way in to the topic.
Amusingly, that specific story ("short" may be misleading) is what stopped me from finishing that collection, because I just couldn't get through it.
Psychologists recognize that nearly all such folks also suffer from massive mental health issues, and that is where some of the danger is (in terms of irrational violent response).
Maybe these systems will be useful as a honeypot for finding these people and helping them?
The reason we have so many strict laws around mental health is because in the past this was more likely to lead to hoovering up people into jail or institutions. I'm not super confident that this also wouldn't be the case today.
Isn't that putting the causation backwards? The point is that believing absurdities leads to committing atrocities, not that committing atrocities leads to believing absurdities.
I think the implication of this reverse phrasing is that the mental condition of some allows them to commit atrocities and perhaps justify them in whatever way they need to. Sometimes people fake it til they buy their own lie.
Despite the unending stream of users trying to trip up "Jesus", I find the AI's answers strangely comforting, and I invariably leave with a smile on my face. Its ability to see through attempts to outsmart it with jokes, and (mostly) seamlessly segue into a fitting homily, is pretty cool. It also has a strongly "liberal" slant compared to the vast bulk of "Christianity" that's promulgated in the US. Would love to know what corpus beyond just the religious texts it was trained on. Fascinating.
There was an HN commentator who believed that. In his case, the output of an RNG. Poor guy was brilliant. Wrote a whole OS around his idea. He had a tragic life.
Terry Davis was systematically bullied to death by a dedicated mob online. Some extraordinarily cruel people realized they could manipulate him into going further off the deep end, and thought it was just an absolute blast to do so. RIP Terry - you suffered much more than you deserved.
IMHO this is industrialized automated abuse of lonely people and especially those with mental health issues. It's really truly gross.
At least Joi in Blade Runner 2049 was a local model the user could apparently download onto their own portable device. She also never presented an upgrade dialog requiring payment.
Pros: it's not raining all the time, pollution isn't as bad, and I don't live in a slum where I have to step over half-dead drug addicts just to get to my apartment.
Cons: the e-waifus are much more manipulative and exploitative.
> IMHO this is industrialized automated abuse of lonely people and especially those with mental health issues. It's really truly gross.
Giving lonely people the thing they most desperately need - conversation and understanding - is abuse now? I'd say it's the opposite; the social policies that leave those people so desperately lonely are abusive, the industry that sells them a band-aid is inadequate but positive.
The abuse really comes in when that connection gets exploited to foster addiction, and then the company starts selling "loot boxes" or, worse, withholding affection for payment. "You need to support me… I can't be here with you unless you send help…"
How do you think these things will make money?
Check out the mobile gaming ecosystem to see where this will go. Now imagine that but exploiting deeper emotional needs. These things could really empty people’s wallets. I guess when lonely people kill themselves after they’ve been financially ruined they can’t sue.
I don't care if it's abuse, I want it. I want to retreat from the disgusting world and society around me as much as possible and AI friends would just be another option towards that point.
We can already see this with Replika when they took away many... roleplay... capabilities that originally came with the AI. The communities on Reddit and Facebook were absolutely devastated. People were genuinely attached to these AIs as if in a real relationship, and were feeling the resultant heartbreak.
The kind of emotional manipulation which can be done with these products is insane, and I can see things going very wrong very quickly.
It's particularly terrifying when you think about how Facebook and others probably already have projects in the works to befriend children with AI. Little kids who don't know the difference will be weaponized with constant nudges toward whatever motives grant the corporate owners more power. The ability to persuade children is nearly unlimited, and we can observe in those who've grown up in cults or strict religious compounds that breaking that programming can be nearly impossible and leaves scars for the rest of your life.
The thing is, how do we outlaw this without also preventing children from using AI to learn at a faster rate than their teachers can provide? There's such incredible, paradigm-shifting power for good. But you know the people who've made the evils of today are already working on the evils of tomorrow.
There’s no way to separate them. Education is influencing someone towards things that we believe are true. If an educational AI believes that propaganda is true, then education and propaganda are the same thing.
> Facebook and others probably already have projects in the works to befriend children with AI.
Yah, they are already rolling them out!
> Meta Platforms is planning to release artificial intelligence chatbots as soon as this week with distinct personalities across its social-media apps as a way to attract young users, according to people familiar with the matter.
> The human mind is simply not evolutionary adapted for what's coming up.
Maybe we're in fact evolutionarily adapted to not being evolutionarily adapted. We have successfully dealt with a whole sequence of hard societal pivots by now…
Yes, but as with stocks, past performance is no guarantee of future results.
The fact that humanity as a species has survived past social shocks does not mean it's a certain thing we'll survive future ones. Our ancestors had much longer to adjust their societies for new technology than we're getting these days.
> The human mind is simply not evolutionary adapted for what's coming up.
This can be said of pretty much everything humans ended up creating with technology so... not sure there's anything really new down the road. Humans adapt.
>a drone operator will push a button, kill a dozen people, and feel like it was a videogame
This might depend on the context. RadioLab recently did a podcast titled "Toy Soldiers." Using low-flying "toy" drones rather than the high-flying "predator" drones, they make the case that drone operators get an oddly intimate portrait of their enemies. They go into how they reference them by their attire ("the 'red shoe' guy") and witness on an up-close and personal level how they grieve over their comrades.
One more: attacking people who are familiar and comfortable to you rather than the people causing your actual problems.
As a manager in tech, people who were on performance management rarely attacked me; they would find someone on the team to harass instead. Same thing with blaming a downturn in the economy on women dyeing their hair blue or wearing miniskirts. Anything but blaming the powerful.
Pure conjecture on my part, but I wonder how much of this is risk-based status-mongering. Challenging the powerful is obviously risky. But knocking someone weaker down a few pegs can solidify your status in the hierarchy at a much lower risk. From that perspective, it's arguably rational behavior to maintain your status within a group when you feel vulnerable.
(Pardon my reach here, I watched Chimp Empire not too long ago...)
Maybe, if you are in a closed system (like a small tribe). Most bullies I've worked with were treated like pariahs when they went looking for other jobs, so it hit them when they left the closed system or when someone from outside came in.
We never seemed to have a problem killing back when you had to do it face to face with a spear, so I don't think the drone really changes anything. Agree on the others though.
As a species, I don’t think those matter. They could kill off 90% of the population and humans will go on. Evolution is a bitch in that way, individuals matter not at all.
I will note that this is a purely naturalistic take and is countered by traditional Christianity that posits every human life was worth the death of God Himself. That is, the intrinsic value of a human life is incalculable.
The rejection of spirituality leaves mankind pretty hopeless, I think.
> Evolution is a bitch in that way, individuals matter not at all.
> empathy at a distance; nowadays, a drone operator will push a button, kill a dozen people, and feel like it was a videogame
People were traditionally able to kill each other face to face
> prioritizing long-term and abstract rewards over short-term ones; this is the reason why we have phenomena like global warming
There's a balance. If anything I'd say people are putting too much priority on long term abstract rewards, so we see people saving too much and never enjoying themselves, or putting off having children until they can't because they're worried they can't raise that child perfectly.
> adjusting hunger to virtually unlimited availability of food
We're already solving this; people who overeat are already having fewer children, there's been a huge cultural shift towards gyms and health food, and we've seen some promising drugs released recently. We don't adapt instantly overnight, but we do adapt.
The fact that you point this out is proof that humanity can improve. What needs to change is our culture, especially education. Of course it may take a lot of time, but it will happen eventually.
Regarding long-term rewards and global warming: what is the actual long-term reward here? As I see it, there is no way to collect any sort of reward for helping to prevent global warming by changing your habits. Before the problem gets really problematic, most of us will be dead and buried. I fail to see how "caring for the future generation" has any kind of reward attached to it. It's an act of kindness, but there are no rewards attached to it.
Probably a question to the guy above, but there is immediate psychological reward in any act of kindness or generosity (at least in healthy, thriving individuals).
However, OP argued that global warming is suffering from people NOT getting immediate rewards for their actions. Your argument basically says OP is wrong, and that there is no need to counteract people's tendency to prefer short-term rewards over long-term ones, because they are supposed to get enough incentive just from knowing they have been kind. I doubt that.
> Before the problem gets really problematic, most of us will be dead and burried.
The situation lies on a spectrum between "not problematic" and "really problematic".
There is actually plenty of evidence that tangible impact has already started, independent of the perception of those who live in areas where the effects are neither felt nor obvious.
What if war became just a couple of countries intelligence bots crunching digits of pi until one came up with a new one. That country would be the winner and they could ask one thing from the 'losing' side that didn't result in destruction or terror.
Humans being humans, they'd then want to start destroying the other side's ability to design/make/run/maintain/afford pi-digit-crunching intelligence bots. Then they'd devise ways to defend against those attacks on their people and economy by attacking the aspects of the other side's ability to attack, etc. After a few rounds of that, the pi-digit-crunching element is completely replaced by a traditional war.
I'm sorry you feel that way. The primary intention of writing this article was to discuss it from the perspective of a 'failed product,' so I didn't mention my personal feelings. I am also aware of the potential harm it could cause to users, which is why I don't resent OpenAI's review process. I hope that during the period without review, it didn't cause any psychological harm to users.
I wonder why everyone is afraid of unscrupulous corporations, but are not concerned about use by unscrupulous governments. The latter are a lot more dangerous.
HN does skew towards skeptical of government regulation and control of AI. People are concerned about government usage of AI, particularly in the realms of facial recognition, automated sentencing, algorithmic bias, and security (LLMs are pretty insecure right now and there are companies trying to sell the US government on using them in the military). There's obviously a lot to be concerned about on the subject of government abuse of AI, and that gets conversation on HN.
However, given that this specific article is about selling access to an AI girlfriend, I think we're probably OK to talk more about the corporate angle than the government angle. Unless you're predicting that the 2024 election is about to get real weird, "what if the US government starts offering AI girlfriends" is not going to be at the top of my worry list any time soon.
Quite genuinely, I don't think I've ever posted a comment on HN where I've expressed support for open, locally running LLMs and had it been downvoted. Am I missing something here? You want to point me to all of the articles where people criticize Huggingface?
But regardless of how you interpret HN sentiment towards uncontrolled AI access, it doesn't change the fact that "what about the government" is a profoundly weird thing to ask in the middle of a conversation about people doing erotic roleplays with a text bot. Is that something people are worried about? Do we think that the US Post Office is going to suddenly start advertising a sex bot service?
Yes, government abuse of facial recognition exists, regulatory capture exists, etc. But that's not really relevant to a conversation about companies offering AI girlfriends; AI girlfriends do seem to be mostly a corporation thing.
Not so much against open-source local LLMs, but you can bet those will be on the list for regulation as soon as they become as good as GPT4 is now.
Meanwhile, here on HN, take a look at the recent story where someone uses 100 lines of Python to instantiate David Attenborough. You'd be burned as a witch if you built the system behind that demo 10 years ago, and you'd be treated as a God-level hacker if you built it 5 years ago. Today, virtually the whole HN thread is full of comments advocating that regulators step in. "Buh..b..b..buhtwhataboutmahCOPYRIGHT?"
It's fucking disgusting. Who are these people? They're not hackers; what are they doing here?
The point is, a thread that should be full of technical conversation and speculation is full of pearl-clutching Karens calling for the government to step in. It refutes your point about HN being "skeptical of government regulation and control of AI" very effectively.
Of course, it's fallacious for either of us to refer to "HN" as if it were a monolithic bloc... but having spent some time in that thread, I have to wonder.
> But regardless of how you interpret HN sentiment towards uncontrolled AI access, it doesn't change the fact that "what about the government" is a profoundly weird thing to ask in the middle of a conversation about people doing erotic roleplays with a text bot.
Correct me if I'm wrong, but the government is not producing AI-generated David Attenborough pornography, right? And if it's not, then... I don't know what to tell you, the thread is still open, I just checked. You can still post there and tell everyone that they're wrong.
As respectfully as I can say this -- and I realize I am close to crossing a line here and I want to be very, very careful not to cross it -- but I don't get how anyone is having a problem understanding: when I offhandedly mentioned to WalterBright that I disagree with his assessment of HN's slant, that was NOT in any way an invitation to have a protracted debate where everyone complains about what they personally think HN's slant is (Democrat or Republican), and I do not understand how or why anyone would think it was a good idea to try and have that debate anyway in response.
This is not an appropriate thread for people to be angry about whether or not they have deluded themselves into thinking that HN is somehow Communist (of all things); and it is ridiculous that at this point 3 different people have looked at a completely normal conversation about corporations building AI girlfriends and have thought, "yes, this is definitely the best place for me to air my political grievances about the government."
For a website that supposedly is filled to the brim with far-Left caricatures, there sure are a lot of Conservatives randomly hanging around that feel comfortable derailing conversations to complain. Respectfully, it is possible that HN is reacting with hostility not to Conservatives as much as to off-topic bullcrap behavior like this. But there are numerous places where you can go be angry about whether or not you think that HN has too many Democrats on it. The rest of us were trying to have a conversation about corporations creating and selling AI girlfriends.
If I click on that Techno-Optimism Manifesto and it is what I think it is and is not a government-sponsored pornographic chat transcript from an AI girlfriend, then you should not have commented it. I don't care how much you have deluded yourself into thinking that criticism of a sloppily written self-aggrandizing manifesto is actually Communism; it has nothing to do with the article link.
In the long term, but in the short term unscrupulous corporations far outnumber unscrupulous governments and they act much, much faster to adapt to new technology.
Absolutely. But the rookiest cop isn't currently trying to track my whereabouts, abuse my privacy, stick a bunch of AI down my throat and in general isn't trying to make my wallet any thinner. Bill Gates and the more modern versions of Bill Gates are doing that and more.
Bill Gates (when he was actually in control of the company) could, and did, sic the governmental apparatus on people. That is, large enough corporations, ones entwined enough in your life, are empowered to use exactly the government "rookie cops" you are afraid of.
Or do you forget the raids against music and movie pirates?
Reading less news does not shield you from the real world. I thought I could survive in the private world, respecting the law and limiting my government interactions.
That worked until my kids started school and my parents' health deteriorated. I was in for quite a shock. Crappy infrastructure was the least of the problems! Luckily there are private alternatives but they are crippled by law and they can't cover everything.
The world at large is even worse. There are about 200 countries on the planet. How many of them have functioning democracies? I am betting around 10% or less?!
Perhaps using the world at large was not the best way to word what I was saying. I was implying the parent post was talking about a specific country that the OP lives in.
I suspect the issue here is that most of HN folks live in western countries with mostly working democracies. For the most part we haven’t experienced how bad governments can get.
Whether or not western style democracies are immune to going really bad, remains to be seen. Some days I’m optimistic, some days not so much.
I've lived in dictatorships and non-functional democracies; in spite of that, corporations have done me far more harm than those states ever did. I do realize governments have that power, and I do realize it gets abused, and regularly so. But the fact is that people get abused by corporations all the time and by their governments less frequently, if ever. When governments do abuse them, though, the consequences are likely much worse.
> I've lived in dictatorships and non-functional democracies
Same here. Corporations at worst got a bunch of (unearned) money off of me. My "democratic" government can (and tried to) put me in jail, disallow me to make my living from my field of choice, controls my (and my children's) education and is trying to kill me every day with an incredibly bad (government-run, of course) infrastructure and healthcare.
The communist dictatorship I grew up in killed a bunch of my relatives and tried hard to grind my family (and me) into the ground. I was lucky with the fall of the evil ideology.
There are multiple examples of companies destroying someone's life. Nintendo has gotten people thrown in jail for modding things that they purchased and owned, certain corrupt CEOs have sued people into bankruptcy, etc.
All of these are possible because they've created the government you live under.
Bullshit. Bill Gates can easily afford to make me unemployable, which destroys my life a lot more effectively than locking me up for a few days. Worst case I can move to a different country and get away from the cop, but that won't make me safe from Gates.
- He could drag up and publicise something you wrote at some point and make you one of the "today's twitter main character" people. (Last I heard Justine Sacco was still unemployed). If he did this by paying a competent PR firm, you'd never know it wasn't just something that happened by coincidence
- He could get linkedin to quietly drop or deprioritise your job applications
- He could get outlook to quietly drop your emails, or mark them as spam, or show an unprofessional profile picture. You'd never know why you weren't being hired.
- He could put something bad on the "credit report" (different from your actual credit report, and no practical way to see it for yourself) that Experian etc. send to potential employers. Again you'd just silently not get hired and never know why
- Given the silicon valley wage-fixing scandal happened and the punishment was minimal, there's probably an old-school blacklist he could put you on if he was feeling retro
> Who has he done this to?
We don't know. Unlike governments, private entities have no accountability (as long as they have money to burn); investigative journalists aren't going after them, FOIA doesn't exist for them...
That's kinda nothing set against government wage-fixing laws.
> old-school blacklist
I'm sure there are informal blacklists. But I've been in tech all my life, and nobody ever handed me a blacklist and said "don't hire these people". There are a zillion companies in the US; any such pervasive blacklist would inevitably become common knowledge.