So, you want to onboard an AI solution that processes personal information. But how?
I recently discussed this and more during a BigID webinar.
Here are some questions I ask clients, and some pointers I give them about where to start:
Why Do You Need It? (And Can You Do It Less Invasively?)
- What is the purpose for this? (Be as specific as you can.)
- What is the expected benefit?
- You need this for compliance with the “purpose limitation” principle: personal data may be used only for the stated purpose, or for something compatible with it.
- You need it for your privacy notice.
Is It AI?
- Is this an automated process?
- What is the output?
Can It Do This?
- Is the AI fit for this purpose? Can it do what it says it can? (One way to check is sketched after this list.)
- Vet the vendor: Do due diligence. Ask questions. Get documentation.
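One way to turn “fit for purpose” into evidence rather than a vendor promise is a small acceptance test over your own labeled data. This is a minimal, purely illustrative sketch: `toy_model`, the sample data, and the 90% threshold are all my assumptions, which you would replace with the vendor's real endpoint and your own quality bar.

```python
def toy_model(text: str) -> str:
    # Hypothetical stand-in for the vendor's model; replace with a real call.
    return "positive" if "great" in text else "negative"


def acceptance_test(model, labeled_samples, threshold: float = 0.9) -> bool:
    """Return True only if the model meets the accuracy bar on our own data."""
    correct = sum(1 for text, expected in labeled_samples
                  if model(text) == expected)
    accuracy = correct / len(labeled_samples)
    print(f"accuracy: {accuracy:.0%} (threshold {threshold:.0%})")
    return accuracy >= threshold


samples = [("great product", "positive"), ("terrible support", "negative")]
assert acceptance_test(toy_model, samples, threshold=0.9)
```

Keep the test results with your vendor due diligence file; they are exactly the kind of documentation you should be asking for anyway.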
What’s the Impact?
- Does it impact safety?
- Does it impact rights?
DPIA (Data Protection Impact Assessment): Conduct and Document a Risk Assessment
- Where did the data come from? Was there permission? Was it scraped legally?
- Is the input data accurate?
- Is the output accurate, or does the model hallucinate?
- Is the output fair, or is it biased?
- What are other risks?
- How can you mitigate them? Can the vendor mitigate them (reporting, QA, accountability)? Can you mitigate them (internal procedures and limitations)? Document all of this; see the sketch after this list.
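Whatever methodology you use, the assessment should live in a form you can review and update. Here is a minimal sketch of a risk register, assuming an invented likelihood-times-severity scoring scale and invented field names; a spreadsheet works just as well, the point is that every risk has an owner and a mitigation on record.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    """One line item in the DPIA risk register."""
    description: str   # e.g., "training data scraped without permission"
    likelihood: Level
    severity: Level
    mitigation: str    # what the vendor or you will do about it
    owner: str         # who is accountable for the mitigation

    @property
    def score(self) -> int:
        # Simple likelihood x severity scoring; adapt to your own methodology.
        return self.likelihood.value * self.severity.value


@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def open_high_risks(self) -> list[Risk]:
        # Risks above a chosen threshold need mitigation before launch.
        return [r for r in self.risks if r.score >= 6]


register = RiskRegister()
register.add(Risk(
    description="Model output may be inaccurate (hallucinations)",
    likelihood=Level.MEDIUM,
    severity=Level.HIGH,
    mitigation="Vendor QA reporting plus internal spot checks before use",
    owner="privacy@example.com",  # hypothetical owner
))
for risk in register.open_high_risks():
    print(f"[score {risk.score}] {risk.description} -> {risk.mitigation}")
```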
Provide Disclosure
- A privacy notice that explains the data processing and its output in plain, human-understandable language.
Provide Choice
- You may need an opt-in for sensitive information.
- You may need an opt-out and/or human intervention in the AI decision. (A minimal gate is sketched below.)
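To make the choice requirement concrete, here is a minimal sketch of a consent gate, assuming hypothetical `User` consent flags and a stand-in `run_model` call: sensitive data never reaches the model without an opt-in, and an opt-out routes the decision to a human instead.

```python
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    opted_in_sensitive: bool  # explicit opt-in for sensitive information
    opted_out_of_ai: bool     # user asked for a human decision instead


def run_model(payload: dict) -> str:
    # Hypothetical stand-in for the vendor's AI endpoint.
    return "approved"


def decide(user: User, payload: dict, is_sensitive: bool) -> str:
    # Opt-in gate: sensitive information never reaches the model without it.
    if is_sensitive and not user.opted_in_sensitive:
        raise PermissionError("No opt-in for sensitive data; cannot proceed.")

    # Opt-out gate: route to a human instead of the automated decision.
    if user.opted_out_of_ai:
        return "queued_for_human_review"

    return run_model(payload)


print(decide(User("u1", opted_in_sensitive=True, opted_out_of_ai=False),
             {"application": "..."}, is_sensitive=True))
```

However you implement it, the gates belong in front of the model call, not after it: once the data has been processed, the choice was never really offered.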