OpenAI uses its own models to fight election interference

OpenAI, the organization behind ChatGPT, has reported blocking more than 20 malicious operations and networks worldwide in 2024. These operations varied in scope and purpose, ranging from creating malware to generating fake content such as social media accounts, bios, and articles.

According to OpenAI, its analysis of the disrupted activity shows that threat actors continually try to exploit its models but have not made meaningful progress in developing new malware or gaining viral traction. This matters especially in a year of elections in the United States, Rwanda, India, and the European Union.

One notable success for OpenAI was thwarting a China-based threat actor known as “SweetSpecter,” which attempted to target OpenAI employees with spear-phishing attacks. OpenAI also collaborated with Microsoft to disrupt an Iranian covert influence operation known as “STORM-2035.”

Despite these attempts, the social media posts generated with OpenAI’s models gained little traction, receiving minimal comments, likes, or shares. OpenAI says it will remain vigilant in monitoring for and preventing the misuse of advanced AI models for malicious purposes.
