No employee, except the rare bad actor, means to leak sensitive company data. Yet it happens, especially when people use generative AI tools like ChatGPT to “polish a proposal,” “summarize a contract,” or “write code faster.” Here’s the problem: unless you’re using ChatGPT Team or Enterprise, it doesn’t treat your data as confidential.
According to OpenAI’s own Terms of Use: “We do not use Content that you provide to or receive from our API to develop or improve our Services.”
But read the fine print: that protection applies only if you’re on a business plan or using the API. For regular users, ChatGPT can use your prompts, including anything you type or upload, to train its large language models.
☠️ Poof. Trade secret status, gone. ☠️
If you don’t take reasonable measures to maintain their secrecy, your trade secrets lose that legal protection.
So how do you protect your business?
AI isn’t going anywhere. The companies that get ahead of its risks will be the ones still standing when the dust settles. If you don’t have an AI policy and a plan to protect your data, you’re not just behind. You’re exposed.