The Privacy Gap in AI (And How to Actually Close It)
I reviewed the report on ‘AI Privacy Risks & Mitigations – Large Language Models (LLMs)’, and I realised something simple – we already have most of the tools to manage these risks. What we often miss is structure, ownership, and the habit of actually using them.
If you’re building anything with AI – even a simple chatbot or study tool – this matters to you. These systems can leak personal data quietly or cross legal boundaries without anyone noticing.
The most common reasons why data slips through are:
- LLMs memorise small fragments of sensitive info.
- Users share personal stuff without thinking.
- It’s often unclear who is responsible – the AI provider or the organisation using it.
- Sometimes models are retrained on data they were never supposed to see.
So instead of waiting for another regulation or checklist, maybe it’s time we get a bit more practical.
Here’s how I see it:
1️⃣ Know Where the Data Is
Know where personal data moves – inputs, outputs, logs. Understand what your system sees before anything else.
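To make this concrete, here’s a rough sketch in Python of what a data-flow inventory for a small chatbot could look like. The touchpoints, retention periods, and owners are made up for illustration – adapt them to whatever your system actually does.

```python
from dataclasses import dataclass

@dataclass
class DataTouchpoint:
    """One place where personal data can appear in an LLM system."""
    name: str            # e.g. "user prompt", "model output", "request log"
    contains_pii: bool   # does this touchpoint carry personal data?
    retention_days: int  # how long it is kept
    owner: str           # who is accountable for it

# A hypothetical inventory for a simple chatbot (illustrative values only).
inventory = [
    DataTouchpoint("user prompt", contains_pii=True, retention_days=30, owner="app team"),
    DataTouchpoint("model output", contains_pii=True, retention_days=30, owner="app team"),
    DataTouchpoint("request log", contains_pii=True, retention_days=90, owner="platform team"),
    DataTouchpoint("analytics events", contains_pii=False, retention_days=365, owner="data team"),
]

# Quick view of what to look at first: everything that carries personal data.
for t in inventory:
    if t.contains_pii:
        print(f"{t.name}: kept {t.retention_days} days, owned by {t.owner}")
```

Even a tiny table like this forces the right questions: why do we keep that log for 90 days, and who actually owns it?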
2️⃣ Focus on What Matters (Prioritise Risks)
Focus on what could cause the biggest harm first. Not everything matters equally.
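One simple way to prioritise is a likelihood-times-impact score per risk. The risks, scores, and thresholds below are invented for illustration – the point is the ranking, not the exact numbers.

```python
# A minimal sketch of prioritising privacy risks by likelihood x impact.
# Scores are on a 1-5 scale and are made up for illustration.
risks = [
    {"risk": "model memorises personal data from training material", "likelihood": 2, "impact": 5},
    {"risk": "users paste sensitive data into prompts",              "likelihood": 5, "impact": 4},
    {"risk": "personal data lingers in request logs",                "likelihood": 4, "impact": 3},
    {"risk": "output reveals another user's information",            "likelihood": 2, "impact": 5},
]

# Sort by score and tackle the highest first.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = r["likelihood"] * r["impact"]
    print(f"{score:>2}  {r['risk']}")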
3️⃣ Set Up Basic Protection
- Hide personal info before it goes to the model (a small sketch of this follows the list).
- Control who sees what.
- Filter AI outputs for sensitive stuff.
- Use ‘unlearning’ techniques when someone requests deletion.
- Keep humans in the loop on the critical decisions.
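Here’s a minimal sketch of masking personal info on the way in and on the way out. The regex patterns only cover emails and phone numbers and the `call_llm` name is a placeholder – a real deployment would use a proper PII-detection library and cover far more categories.

```python
import re

# Deliberately simple patterns, for illustration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before the text
    is sent to the model, and again before the output is shown or logged."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical usage around an LLM call:
user_prompt = "Hi, I'm Anna, reach me at anna@example.com or +48 123 456 789."
safe_prompt = redact(user_prompt)        # this is what goes to the model
# model_output = call_llm(safe_prompt)   # call_llm is a placeholder, not a real API
# safe_output = redact(model_output)     # filter the answer before displaying or logging it
print(safe_prompt)
```

It won’t catch everything (names like “Anna” slip through here), which is exactly why output filtering, access control, and humans in the loop sit next to it on the list.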
4️⃣ Test a Little
Start with one simple, low-risk case. Monitor the alerts, learn from mistakes, and adapt quickly. The EDPB guidance emphasises a lifecycle-based risk assessment approach: before deployment, organisations should identify, classify, and evaluate privacy risks to understand potential impacts.
5️⃣ Make Privacy a Continuous Process (Keep Doing It)
Privacy shouldn’t be treated like a one-time setup; organisations should:
- Regularly review and monitor how data flows through the system to detect any privacy or compliance issues that emerge over time.
- Adapt mitigation measures as needed: as usage evolves (new features, more users, different data types), risk profiles change – security, anonymisation, logging and oversight practices may need updates.
- Maintain documentation and transparency: documentation matters here too. Keep a risk register, log processing activities, document decisions, and ensure accountability (a minimal logging sketch follows this list).
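As a sketch of what “log processing activities” can mean in practice, here is a tiny append-only log. The file name, fields, and the example entry are all hypothetical – a real setup would feed your actual records-of-processing register, but the idea is the same: every decision leaves a timestamped, reviewable trace.

```python
import json
from datetime import datetime, timezone

def log_processing_activity(activity: str, data_categories: list[str],
                            legal_basis: str, decision: str,
                            path: str = "processing_log.jsonl") -> None:
    """Append one processing-activity record to a simple JSON-lines file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "activity": activity,
        "data_categories": data_categories,
        "legal_basis": legal_basis,
        "decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical entry: enabling a new feature that changes the risk profile.
log_processing_activity(
    activity="enable chat history for support bot",
    data_categories=["contact details", "support conversation content"],
    legal_basis="legitimate interest (assessment on file)",
    decision="approved with 30-day retention and output filtering",
)
```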
Thanks to these practices, privacy in LLM adoption becomes not just a “checkbox list” but a living practice – part of corporate culture and technical operations. This long‑term approach helps organisations respond to new threats, regulatory changes, and evolving deployment scopes. Privacy shouldn’t slow AI down – it should guide how we build it, driving us to create smarter, cleaner, and more transparent technology.