The Privacy Gap in AI (And How to Actually Close It)
I reviewed the EDPB report ‘AI Privacy Risks & Mitigations – Large Language Models (LLMs)’, and I realised something simple – we already have most of the tools to manage these risks. What we often miss is structure, ownership, and the habit of actually using them.
If you’re building anything with AI – even a simple chatbot or study tool – this matters too. These systems can leak personal data quietly or cross legal boundaries without anyone noticing.
The most common reasons data slips through:
- LLMs memorise small fragments of sensitive information.
- Users share personal details without thinking.
- It’s often unclear who is responsible – the AI provider or the organisation using it.
- Models are sometimes retrained on data they were never supposed to see.
So instead of waiting for another regulation or checklist, maybe it’s time we get a bit more practical.
Here’s how I see it:
1️⃣ Know Where the Data Is
Map where personal data moves – inputs, outputs, logs. Understand what your system sees before anything else.
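For illustration, here’s a minimal sketch of what such an inventory could look like in code. The flows, retention periods, and field names are hypothetical, not taken from the report:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One place where data enters, leaves, or is stored by the system."""
    name: str            # e.g. "user prompt", "model response", "request log"
    direction: str       # "input", "output", or "storage"
    contains_pii: bool   # does it carry personal data?
    retention_days: int  # how long it is kept

# Hypothetical inventory for a simple chatbot
flows = [
    DataFlow("user prompt", "input", contains_pii=True, retention_days=30),
    DataFlow("model response", "output", contains_pii=True, retention_days=30),
    DataFlow("request log", "storage", contains_pii=True, retention_days=90),
    DataFlow("anonymised analytics", "storage", contains_pii=False, retention_days=365),
]

# The first question to answer: where does personal data actually sit?
for flow in flows:
    if flow.contains_pii:
        print(f"{flow.name}: personal data, kept {flow.retention_days} days")
```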
2️⃣ Focus on What Matters (Prioritise Risks)
Focus on what could cause the biggest harm first. Not everything matters equally.
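One simple way to make prioritisation concrete is a likelihood × impact score. The risks and numbers below are made-up examples, not an official scale:

```python
# Hypothetical risk scoring: likelihood and impact on a 1-5 scale.
risks = {
    "model memorises customer PII": (3, 5),
    "prompt logs retained indefinitely": (4, 4),
    "chat history shown to wrong user": (2, 5),
    "user shares health data in a prompt": (4, 3),
}

# Sort by likelihood * impact so the biggest potential harms come first.
for name, (likelihood, impact) in sorted(risks.items(), key=lambda r: -(r[1][0] * r[1][1])):
    print(f"{likelihood * impact:>2}  {name}")
```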
3️⃣ Set Up Basic Protection
- Hide personal info before it goes to the model (see the sketch after this list).
- Control who sees what.
- Filter AI outputs for sensitive information.
- Use ‘unlearning’ techniques when someone requests deletion.
- Keep humans in the loop for critical decisions.
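As a rough illustration of the first and third bullets, here’s a minimal regex-based redaction and output filter. The patterns are deliberately naive; a real deployment would use a proper PII-detection library and rules agreed with your privacy team:

```python
import re

# Very simple example patterns - real systems need far more robust PII detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers before the text reaches the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def filter_output(text: str) -> str:
    """Apply the same check to model output, in case the model echoes personal data."""
    return redact(text)

prompt = "Contact Anna at anna.kowalska@example.com or +48 600 123 456."
safe_prompt = redact(prompt)
print(safe_prompt)  # Contact Anna at [EMAIL] or [PHONE].
# safe_prompt is what would actually be sent to the LLM.
```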
4️⃣ Test a Little
Start with one simple, low-risk case. Monitor the alerts, learn from mistakes, and adapt quickly. The EDPB guidance emphasises a lifecycle-based risk assessment approach: before deployment, organisations should identify, classify, and evaluate privacy risks to understand potential impacts.
5️⃣ Make Privacy a Continuous Process (Keep Doing It)
Privacy shouldn’t be treated like a one-time setup; organisations should:
- Regularly review and monitor how data flows through the system to detect any privacy or compliance issues that emerge over time.
- Adapt mitigation measures as needed: as usage evolves (new features, more users, different data types), risk profiles change – security, anonymisation, logging and oversight practices may need updates.
- Maintain documentation and transparency: keep a risk register, log processing activities, document decisions, and ensure accountability.
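To make the documentation point concrete, here’s a minimal sketch of logging processing activities as JSON lines. The fields, values, and file name are illustrative, not prescribed by the GDPR or the EDPB:

```python
import json
from datetime import datetime, timezone

def log_processing_activity(path: str, activity: str, data_categories: list[str],
                            legal_basis: str, decision: str) -> None:
    """Append one processing-activity record as a JSON line, so there is an audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "activity": activity,
        "data_categories": data_categories,
        "legal_basis": legal_basis,
        "decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical entry: redaction added to the support chatbot's prompt pipeline.
log_processing_activity(
    "processing_log.jsonl",
    activity="support chatbot prompt handling",
    data_categories=["name", "email"],
    legal_basis="legitimate interest",
    decision="redact PII before sending prompts to the model",
)
```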
With these practices, privacy in LLM adoption stops being a simple checkbox exercise and becomes a living practice – part of corporate culture and technical operations. This long-term approach helps organisations respond to new threats, regulatory changes, and evolving deployment scopes. Privacy shouldn’t slow AI down – it should guide how we build it, driving us to create smarter, cleaner, and more transparent technology.