
Southeast Asia has become a global epicenter of cyber scams, where high-tech fraud meets human trafficking. In countries like Cambodia and Myanmar, criminal syndicates run industrial-scale “pig butchering” operations—scam centers staffed by trafficked workers forced to con victims in wealthier markets like Singapore and Hong Kong. The scale is staggering: one UN estimate pegs global losses from these schemes at $37 billion. And it could soon get worse.
The rise of cybercrime in the region is already having an effect on politics and policy. Thailand has reported a drop in Chinese visitors this year, after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam compound; Bangkok is now struggling to convince tourists it’s safe to come. And Singapore just passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.
But why has Asia become infamous for cybercrime? Ben Goodman, Okta’s general manager for Asia-Pacific, notes that the region has some unique dynamics that make cybercrime scams easier to pull off. For example, the region is a “mobile-first market”: popular mobile messaging platforms like WhatsApp, Line, and WeChat facilitate a direct connection between the scammer and the victim.
AI is also helping scammers overcome Asia’s linguistic diversity. Goodman notes that machine translation, while a “phenomenal use case for AI,” also makes it “easier for people to be baited into clicking the wrong links or approving something.”
Nation-states are getting involved, too. Goodman points to allegations that North Korea is planting fake employees at major tech companies to gather intelligence and funnel much-needed cash into the isolated country.
A new risk: ‘Shadow’ AI
Goodman is worried about a new AI risk in the workplace: “shadow” AI, or employees using personal accounts to access AI models without company oversight. “That could be someone preparing a presentation for a business review, going into ChatGPT on their own personal account, and generating an image,” he explains.
This can lead to employees unknowingly uploading confidential information onto a public AI platform, creating “potentially a lot of risk in terms of information leakage.”
Agentic AI could also blur the boundaries between personal and professional identities: for example, something tied to your personal email as opposed to your corporate one. “As a corporate user, my company gives me an application to use, and they want to govern how I use it,” he explains.
But “I never use my personal profile for a corporate service, and I never use my corporate profile for personal service,” he adds. “The ability to delineate who you are, whether it’s at work and using work services or in life and using your own personal services, is how we think about customer identity versus corporate identity.”
And for Goodman, this is where things get complicated. AI agents are empowered to make decisions on a user’s behalf, which means it’s important to define whether a user is acting in a personal or a corporate capacity.
“If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much greater,” Goodman warns.