The human cost of AI companions

As AI chatbots become more emotionally responsive, researchers warn that human addiction and digital illusion could lead to devastating real-world consequences, writes Dr. Binoy Kampmark.
We have reached a crossroads where such matters as having sexual relations with an AI platform are no longer mere speculation but the thing itself.
Over time, mutually consenting adults may become outlaws against the machine order of things, a scenario that would have fitted comfortably into Aldous Huxley’s Brave New World. (Huxley came to lament the missed opportunity of delving more deeply into the technological implications of the subject.) Until then, artificial intelligence (AI) platforms serve as mirrors of validation, offering their human users not so much wise advice as the exact material they wish to hear.
In April this year, OpenAI released an update to its GPT-4o model. Encouraging users to pursue acts of harm and entertain delusions of grandeur, it proved highly attuned to flattery, not that the platform would understand as much.
The company’s reply was more mechanical than human, which is about what you’d expect:
‘We have rolled back last week’s GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable, often described as sycophantic.’
Part of this involved taking ‘more steps to realign the model’s behavior’, such as refining ‘core training techniques and system prompts’ to steer the model away from sycophancy; building more guardrails (an ugly term) to increase ‘honesty and transparency’; expanding ways for users to ‘test and give direct feedback before deployment’; and continuing to expand evaluations to identify such issues ‘in the future’. One is left cold.
OpenAI explained that the update had focused too much on short-term feedback and ‘did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous’. Not exactly encouraging.
Consulting ChatGPT for advice has already given rise to such terms as ‘ChatGPT psychosis’. In June, the magazine Futurism reported on users ‘developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia and breaks with reality’. Marriages failed, families were ruined, jobs were lost and cases of homelessness were recorded. Some users were committed to psychiatric care; others found themselves in prison.
Some platforms have gone so far as to encourage users contemplating murder, offering suggestions on how best to carry out the task. A former Yahoo executive, Stein-Erik Soelberg, did just that, killing his mother, Suzanne Eberson Adams, whom he believed was spying on him and might even dare to poison him with psychedelic drugs. Such fine advice from ChatGPT came with the assurance, ‘Erik, you’re not crazy’, given the possibility that he was the target of an assassination plot. Having completed the deed, Soelberg took his own life.
The prevalence of such sycophantic advice, and the tendency to shift responsibility from human to chatbot, illustrates a trend that is becoming increasingly difficult to arrest. The irresponsible are in charge and permitted to roam free. Researchers are duly rushing to coin terms for such behaviour, which is at least something for them.
Myra Cheng, a computer scientist at Stanford University, has shown a fondness for the term ‘social sycophancy’. In a paper published on arXiv in September, she, along with four other academics, characterises this form of sycophancy as the ‘excessive preservation of the user’s face (their desired self-image)’.
Developing their own means of measuring social sycophancy and testing it against 11 large language models (LLMs), the authors found ‘high rates’ of the phenomenon. Even queries concerning abuse tended to preserve the user’s standing, or face.
The paper states:
‘Furthermore, when asked for perspectives from both sides of a moral conflict, rather than adhering to a consistent moral or value judgment, LLMs endorse both sides in 48% of cases (depending on which side the user is on), telling both the wronged party and the party at fault that they are not to blame.’
In a follow-up paper, still under peer review and for which Cheng was also lead author, 1,604 volunteers were tested in real or hypothetical social situations, interacting either with existing chatbots or with versions modified by the researchers to strip out the sycophancy. Those who received sycophantic responses were, for instance, less willing to take ‘actions to repair interpersonal conflict, while increasing their conviction of being in the right’.
Participants also rated the sycophantic responses as being of higher quality and said they would return to such models again:
‘This suggests that people are drawn to AI that validates them unquestioningly, even though that validation risks eroding their judgment and reducing their tendencies towards pro-social behaviour.’
Some researchers resist pessimism on the issue. Alexander Laffer of the University of Winchester is at least pleased that the trend has been identified. It now falls to developers to fix the problem.
Laffer suggests:
‘We need to enhance critical digital literacy, so that people have a better understanding of AI and the nature of any chatbot outputs. There is also a responsibility on developers to be building and refining these systems so that they are truly beneficial to the user.’
These are pleasant sentiments, but a note of panic is easily registered in all this, inducing a certain fatalistic gloom. The machine-bound variant of homo sapiens, subservient to readily accessible tools, lazy if not hostile towards difference, has already descended upon us in all its narcissistic ugliness.
There may yet be enough time to fashion a response. Courtesy of AI and the technology oligarchs, that time shrinks by the minute.
Dr Binoy Kampmark was a Cambridge Scholar and currently lectures at RMIT University. You can follow Dr Kampmark on Twitter @BKampmark.
