Can we really trust ChatGPT to figure out state-sponsored threats?

By Peter Vogel on June 13, 2025
[Photo: laptop displaying a pirate flag on a red screen. Michael Geiger, Unsplash]

SCWorld.com reported that “OpenAI said in its June security report that it spotted and disrupted a number of attacks, most originating in China and Russia, that appear to have been using ChatGPT to either generate code or automate the process of making social media posts or emails for social engineering campaigns.” The June 11, 2025 article entitled “OpenAI bans ChatGPT accounts linked to state-sponsored threat activity” (https://tinyurl.com/yc8bu8ch) included these comments from the OpenAI team report:

AI investigations are an evolving discipline,…

Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses.

The report included a handful of case studies outlining the various ways in which OpenAI has seen threat actors use ChatGPT. Of the 10 selected cases, seven involved use of ChatGPT for social engineering, while another two involved code generation for malware operations.

Do you trust AI?

First published at https://www.vogelitlaw.com/blog/can-we-really-trust-chatgpt-to-figure-out-state-sponsored-threats

