Selling illegal Azure AI Access busted by Microsoft!

By Peter Vogel on February 28, 2025
[Photo: Robot in Shopping Mall in Kyoto (Lukas, Unsplash)]

Microsoft has publicly named members of the Storm-2139 group, including Yadegarnia, Alan Krysiak from the UK, Hong Kong’s Ricky Yuen, and Phát Phùng Tấn from Vietnam, “who were selling unauthorized access to Azure AI services along with step-by-step instructions for generating titillating images of celebrities and others.” The February 28, 2025 article entitled “Microsoft Busts Hackers Selling Illegal Azure AI Access” (https://www.darkreading.com/application-security/microsoft-openai-hackers-selling-illicit-access-azure-llm-services) included these comments:

Microsoft filed a lawsuit against the group members last month and was able to seize a website behind the operation, he explains. Subsequently, Microsoft attorneys were “doxed,” having personal information posted publicly in retaliation.

Microsoft is responding with an amended complaint along with the public naming of those they believe are behind the cyberattack, known as LLMjacking.

How LLMjacking Works

LLMjacking, similar to proxyjacking and cryptojacking, consists of unauthorized use of others’ computing resources for private usage. With LLMjacking, malicious actors tap large language models (LLMs) from OpenAI, Anthropic, or other GenAI providers to generate illicit images, ignoring a host of laws and regulations in the process. DeepSeek, China’s entry to GenAI services, has been LLMjacked a couple times already this year.

The attack on Azure AI could have happened to any number of LLMs and started with exposed customer credentials scraped from public sources. In this instance, Storm-2139 found exposed API keys and was able to hijack the GenAI services.

Storm-2139 consists of three elements, according to Microsoft. Creators develop illicit tools that enable abuse of AI-generated services; providers modify and supply the tools to end users, often with multiple tiers of service. Customers then tap these tools to generate illegal content.
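
As the excerpt above explains, the scheme began with customer API keys scraped from public sources, so the most practical defense is keeping credentials out of public code in the first place. Below is a minimal sketch, in Python, of the kind of pre-publication scan a customer could run on its own files; the key patterns and file selection are illustrative assumptions rather than official key formats, and dedicated secret-scanning tools are far more thorough.

```python
"""
Minimal sketch: flag strings that look like GenAI API keys in source files
before they are pushed to a public repository. Patterns are illustrative
assumptions, not an official list of key formats.
"""
import re
import sys
from pathlib import Path

# Illustrative patterns only: OpenAI-style secret keys begin with "sk-",
# and many service keys are long hexadecimal strings. Real scanners use
# far more precise rules and entropy checks.
SUSPECT_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret key
    re.compile(r"\b[0-9a-fA-F]{32}\b"),   # generic 32-character hex credential
]


def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, matched_text) pairs for suspicious strings."""
    hits: list[tuple[int, str]] = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SUSPECT_PATTERNS:
            for match in pattern.finditer(line):
                hits.append((lineno, match.group(0)))
    return hits


if __name__ == "__main__":
    # Scan a directory tree (default: current directory). A real scan would
    # also cover config files, notebooks, and environment files.
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*.py"):
        for lineno, secret in scan_file(file):
            # Anyone who scrapes this string from a public repo can call the
            # GenAI service as the paying customer -- the core of the
            # LLMjacking scheme described above.
            print(f"{file}:{lineno}: possible exposed API key: {secret[:8]}...")
```

Rotating keys regularly and preferring identity-based authentication over raw API keys further limits the damage if a credential does leak.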

Good news for Microsoft users!

First published at https://www.vogelitlaw.com/blog/nbspselling-illegal-azure-ai-access-busted-by-microsoft

  • Posted in: E-Discovery, Technology
  • Blog: Internet, IT & e-Discovery
  • Organization: Peter S. Vogel PC