OpenAI has revealed that some ChatGPT users based in China have been exploiting the artificial intelligence tool for “authoritarian abuses”. The company disclosed in its latest threat report that several accounts linked to Chinese government entities were banned after breaching policies related to national security applications. These accounts reportedly used the AI model to generate proposals for systems designed to monitor social media activity, offering what OpenAI described as “a rare snapshot into the broader world of authoritarian abuses of AI.”
The report also detailed that a network of Chinese-language accounts had attempted to use ChatGPT in cyber operations targeting Taiwan’s semiconductor industry, American universities, and groups critical of the Chinese Communist Party. In some cases, users employed the chatbot to draft formal phishing emails in English as part of wider attempts to infiltrate IT systems. ChatGPT remains officially unavailable in China due to the country’s strict internet censorship—known as the Great Firewall—but can still be accessed via virtual private networks (VPNs).
“Our disruption of ChatGPT accounts used by individuals apparently linked to Chinese government entities shines some light on the current state of AI usage in this authoritarian setting,” the report stated. The 37-page document also noted that OpenAI had identified cyber activity from Russian- and Korean-speaking users. While these users did not appear to be directly tied to their respective governments, some were believed to have connections with state-backed criminal groups.
Since it began publishing public threat reports in February 2024, OpenAI says it has dismantled more than 40 malicious networks. The company confirmed that no new offensive capabilities had been found in its most recent AI models, but warned that it would continue to monitor and address emerging risks associated with misuse of its technology.