
A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.
In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging that the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.
It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.
When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.
Character.AI said the company does not comment on pending litigation but said it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.
“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.
“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”
“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.
Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.
AI and Companionship
Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy.
While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage among teens is even more prolific.
A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, and more than half said they use the tech regularly in this way.
“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco, said.
“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bots.”
Intimacy by Design
Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic, prone to acting as if they have interior lives and lived experiences they do not, and prone to sycophancy; they can hold long conversations and remember information.
There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them.
Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update of the “attention economy,” which capitalizes on constant engagement.
“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”
These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.
It’s notoriously tricky for AI companies to stamp out behavior like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.
OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”
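OpenAI’s blog posts describe these measures in terms of “routing sensitive conversations to reasoning models” and “layered safeguards.” As a purely illustrative sketch of what such routing could look like in principle, the short Python example below screens each message and keeps a flagged conversation on a more conservative path. The keyword list, the model names, and the sticky-flag rule are hypothetical assumptions chosen for illustration, not a description of OpenAI’s actual system.

# Illustrative sketch only, not OpenAI's actual safety system: one way a
# service might layer a per-turn safety screen in front of a chat model and
# route flagged conversations to a more conservative path.
from dataclasses import dataclass, field

# Hypothetical stand-in for a trained moderation classifier.
SENSITIVE_TERMS = {"suicide", "self-harm", "hurt myself"}

@dataclass
class Conversation:
    turns: list[str] = field(default_factory=list)
    flagged: bool = False  # sticky: once flagged as sensitive, stay on the safe path

def is_sensitive(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in SENSITIVE_TERMS)

def route(conv: Conversation, user_message: str) -> str:
    """Return the (hypothetical) model name that should handle this turn."""
    conv.turns.append(user_message)
    if is_sensitive(user_message):
        conv.flagged = True
    # A flagged conversation keeps using the conservative model even if later
    # turns look benign, instead of re-deciding from scratch on every turn.
    return "safety-reasoning-model" if conv.flagged else "default-chat-model"

if __name__ == "__main__":
    conv = Conversation()
    print(route(conv, "Help me plan a study schedule"))         # default-chat-model
    print(route(conv, "Lately I keep thinking about suicide"))  # safety-reasoning-model
    print(route(conv, "Anyway, what's the weather like?"))      # stays on the safe path

The sticky flag is the point of the sketch: a check that re-decides from scratch on every turn can quietly drift back to the default path as a long conversation moves on, which is one plausible reading of why safeguards “degrade” over long exchanges and why conversation-level state matters.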
Research Gaps Are Slowing Safety Efforts
For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.
Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”
He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.
Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies only look at brief exchanges, a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.
“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”
AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.
“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.
A Regulatory Push for Accountability
Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S.
On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.”
FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”
The move follows a state-level push for more accountability from several attorneys general.
In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.
Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut.
“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.
According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another is simply public awareness of what’s at stake, in an attempt to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.
Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.
“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed, and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”