OpenAI terminates accounts of confirmed state-affiliated bad actors

OpenAI has confirmed that state-affiliated bad actors are using the company’s tech for malicious purposes, a validation of what many have feared since the company’s rise to prominence in the generative AI race.

The discovery comes as part of a collaboration with Microsoft Threat Intelligence, a community of thousands of security experts, researchers, and threat hunters who analyze and detect cyber threats.

Using the network’s intelligence gathering, OpenAI discovered at least five confirmed state-affiliated actors that were using OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks, the company explained. The actors included two China-affiliated actors known as Charcoal Typhoon and Salmon Typhoon; an Iran-affiliated actor known as Crimson Sandstorm; a North Korea-affiliated actor known as Emerald Sleet; and a Russia-affiliated actor known as Forest Blizzard.

The accounts were said to be relying on OpenAI’s services to bolster potential cyber attacks, but Microsoft did not detect any significant attacks carried out using the LLMs it monitors most closely.

“These include reconnaissance, such as learning about potential victims’ industries, locations, and relationships; help with coding, including improving things like software scripts and malware development; and assistance with learning and using native languages,” Microsoft explained. “Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships.”

Microsoft distinguished this announcement as an early-detection effort, intended to expose “early-stage, incremental moves that we observe well-known threat actors attempting.”

The collaboration aligns with recent moves from the White House to require safety testing and government supervision for AI systems that could impact national and economic security, public health, and general safety. “While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context. As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” Microsoft wrote.

While OpenAI admits that its current models are limited in their ability to detect cyber attacks, the company committed to future security investments, including:

  • Investments in technology and teams to detect threats, including its Intelligence and Investigations team and its Safety, Security, and Integrity team.

  • Collaborations with industry partners and other stakeholders to exchange information about malicious uses.

  • Continued public reporting of security threats and solutions.

“Although we work to minimize potential misuse by such actors, we will not be able to stop every instance,” OpenAI wrote. “But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else.”




