
OpenAI accused of a "consistent and dangerous pattern": rushing products to market without safety vetting

Fortune China | 2025-11-14 19:37:03

The tech industry is moving fast and breaking things again — and this time it is humanity's shared reality and control of our likeness before and after death — thanks to artificial intelligence image-generation platforms like OpenAI's Sora 2.

The typical Sora video, made on OpenAI's app and spread onto TikTok, Instagram, X and Facebook, is designed to be amusing enough for you to click and share. It could be Queen Elizabeth II rapping or something more ordinary and believable. One popular Sora genre is fake doorbell camera footage capturing something slightly uncanny — say, a boa constrictor on the porch or an alligator approaching an unfazed child — that ends with a mild shock, like a grandma shouting as she beats the animal with a broom.

But a growing chorus of advocacy groups, academics and experts is raising alarms about the dangers of letting people create AI videos of just about anything they can type into a prompt, leading to the proliferation of nonconsensual images and realistic deepfakes in a sea of less harmful "AI slop." OpenAI has cracked down on AI creations of public figures — among them, Michael Jackson, Martin Luther King Jr. and Mister Rogers — doing outlandish things, but only after an outcry from family estates and an actors' union.

The nonprofit Public Citizen is now demanding OpenAI withdraw Sora 2 from the public, writing in a Tuesday letter to the company and CEO Sam Altman that the app's hasty release, made so that it could launch ahead of competitors, shows a "consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails." Sora 2, the letter says, shows a "reckless disregard" for product safety, as well as for people's rights to their own likeness and the stability of democracy. The group also sent the letter to the U.S. Congress.

OpenAI didn't immediately respond to a request for comment Tuesday.

"Our biggest concern is the potential threat to democracy," said Public Citizen tech policy advocate J.B. Branch in an interview. "I think we're entering a world in which people can't really trust what they see. And we're starting to see strategies in politics where the first image, the first video that gets released, is what people remember."

Branch, author of Tuesday's letter, also sees broader threats to people's privacy that disproportionately affect vulnerable populations online.

OpenAI blocks nudity, but Branch said that "women are seeing themselves being harassed online" in other ways, such as with fetishized niche content that makes it through the app's restrictions. The news outlet 404 Media on Friday reported on a flood of Sora-made videos of women being strangled.

OpenAI introduced its new Sora app on iPhones more than a month ago. It launched on Android phones last week in the U.S., Canada and several Asian countries, including Japan and South Korea.

Much of the strongest pushback has come from Hollywood and other entertainment interests, including the Japanese manga industry. OpenAI announced its first big changes just days after the release, saying "overmoderation is super frustrating" for users but that it's important to be conservative "while the world is still adjusting to this new technology."

That was followed by publicly announced agreements with Martin Luther King Jr.'s family on Oct. 16, preventing "disrespectful depictions" of the civil rights leader while the company worked on better safeguards, and another on Oct. 20 with "Breaking Bad" actor Bryan Cranston, the SAG-AFTRA union and talent agencies.

"That's all well and good if you're famous," Branch said. "It's sort of just a pattern that OpenAI has where they're willing to respond to the outrage of a very small population. They're willing to release something and apologize afterwards. But a lot of these issues are design choices that they can make before releasing."

OpenAI has faced similar complaints about its flagship product, ChatGPT. Seven new lawsuits filed last week in California courts claim the chatbot drove people to suicide and harmful delusions even when they had no prior mental health issues. Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project, the lawsuits claim that OpenAI knowingly released GPT-4o prematurely last year, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.

Public Citizen was not involved in the lawsuits, but Branch said he sees parallels in Sora's hasty release.

He said they're "putting the pedal to the floor without regard for harms. Much of this seems foreseeable. But they'd rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users."

OpenAI spent last week responding to complaints from a Japanese trade association representing famed animators like Hayao Miyazaki's Studio Ghibli and video game makers like Bandai Namco and Square Enix. OpenAI said many anime fans want to interact with their favorite characters, but the company has also set guardrails in place to prevent well-known characters from being generated without the consent of the people who own the copyrights.

"We're engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2, including in Japan, where cultural and creative industries are deeply valued," OpenAI said in a statement about the trade group's letter last week.

Translator: Liu Jinlong

Reviewer: Wang Hao

