The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide—something she claims was driven by his relationship with an AI bot.
“There is a platform out there that you might not have heard about, but you need to know about it because, in my opinion, we are behind the eight ball here. A child is gone. My child is gone,” Megan Garcia, the boy’s mother, told CNN on Wednesday.
The 93-page wrongful-death lawsuit was filed last week in a U.S. District Court in Orlando against Character.AI, its founders, and Google. It noted, “Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers.”
Tech Justice Law Project director Meetali Jain, who is representing Garcia, said in a press release about the case: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids. But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”
Character.AI released a statement via X, noting, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/….”
In the suit, Garcia alleges that Sewell, who took his life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over other real-life connections. His mom alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”
This week, Garcia told CNN that she wants parents “to understand that this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product that is designed to keep our kids addicted and to manipulate them.”
On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for his article that told her story. Garcia did not learn about the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often getting sucked into his phone, she asked what he was doing and who he was talking to. He explained it was “just an AI bot…not a person,” she recalled, adding, “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the potential emotional power of a bot—and she is far from alone.
“This is on nobody’s radar,” says Robbie Torney, program manager for AI at Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with confusing new technology and to create boundaries for their kids’ safety.
But AI companions, Torney stresses, differ from, say, a service-desk chatbot that you use when you’re trying to get help from a bank. “They’re designed to do tasks or respond to requests,” he explains. “Something like Character.AI is what we call a companion, and is designed to try to form a relationship, or to simulate a relationship, with a user. And that’s a very different use case that I think we need parents to be aware of.” That’s apparent in Garcia’s lawsuit, which includes chillingly flirty, sexual, realistic text exchanges between her son and the bot.
Sounding the alarm over AI companions is especially important for parents of teens, Torney says, as teens—and particularly male teens—are especially susceptible to overreliance on technology.
Below, what parents need to know.
What are AI companions and why do kids use them?
According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in conjunction with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy, and agree more readily with the user than typical AI chatbots,” according to the guide.
Popular platforms include Character.ai, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.
Kids are drawn to them for an array of reasons, from non-judgmental listening and round-the-clock availability to emotional support and escape from real-world social pressures.
Who’s at risk and what are the concerns?
Those most at risk, warns Common Sense Media, are teenagers—especially those with “depression, anxiety, social challenges, or isolation”—as well as males, young people going through big life changes, and anyone lacking support systems in the real world.
That last point has been particularly troubling to Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI is posing a challenge to the human essence. “Our research uncovers a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring in human-AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation with PhD student Angelina Ying Chen, “Users may become deeply emotionally invested if they believe their AI companion truly understands them.”
Another study, this one out of the University of Cambridge and focusing on kids, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.
Because of that, Common Sense Media highlights a list of potential risks, including that the companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, may bring the potential for inappropriate sexual content, could become addictive, and tend to agree with users—a frightening reality for those experiencing “suicidality, psychosis, or mania.”
How to spot red flags
Parents should look for the following warning signs, according to the guide:
• Preferring AI companion interaction to real friendships
• Spending hours alone talking to the companion
• Emotional distress when unable to access the companion
• Sharing deeply personal information or secrets
• Developing romantic feelings for the AI companion
• Declining grades or school participation
• Withdrawal from social/family activities and friendships
• Loss of interest in previous hobbies
• Changes in sleep patterns
• Discussing problems exclusively with the AI companion
Consider getting professional help for your child, stresses Common Sense Media, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about AI companion use, showing major changes in behavior or mood, or expressing thoughts of self-harm.
How to keep your child safe
• Set boundaries: Set specific times for AI companion use and don’t allow unsupervised or unlimited access.
• Spend time offline: Encourage real-world friendships and activities.
• Check in regularly: Monitor the content from the chatbot, as well as your child’s level of emotional attachment.
• Talk about it: Keep communication open and judgment-free about experiences with AI, while keeping an eye out for red flags.
“If parents hear their kids saying, ‘Hey, I’m talking to a chat bot AI,’ that’s really an opportunity to lean in and take that information—and not think, ‘Oh, okay, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to find out more and assess the situation and keep alert. “Try to listen from a place of compassion and empathy and not to think that just because it’s not a person that it’s safer,” he says, “or that you don’t need to worry.”