The Privacy Gap in AI (And How to Actually Close It)

I reviewed the report ‘AI Privacy Risks & Mitigations – Large Language Models (LLMs)’, and I realised something simple – we already have most of the tools to manage these risks. What we often miss is structure, ownership, and the habit of actually using them.
If you’re building anything with AI – even a simple chatbot or study tool – this matters too. These systems can leak personal data quietly or cross legal boundaries without anyone noticing.

The most common reasons data slips through are:

  • LLMs memorise small fragments of sensitive information.
  • Users share personal details without thinking.
  • It’s still unclear who is responsible – the AI provider or the organisation using it.
  • Models are sometimes retrained on data they were never supposed to see.

So instead of waiting for another regulation or checklist, maybe it’s time we get a bit more practical.

Here’s how I see it:

1️⃣ Know Where the Data Is

Map where personal data moves – inputs, outputs, logs. Understand what your system sees before you do anything else.
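One lightweight way to start is a data-flow inventory kept next to the code. The sketch below is a minimal illustration (the flow names and fields are my own assumptions, not from the report): record every place personal data can appear, then list the flows that need protection first.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str                    # e.g. "user prompt", "response log"
    stage: str                   # "input", "output", or "log"
    contains_personal_data: bool

# Hypothetical inventory for a simple chatbot
flows = [
    DataFlow("user prompt", "input", True),
    DataFlow("model response", "output", True),
    DataFlow("latency metrics", "log", False),
    DataFlow("conversation transcript", "log", True),
]

def personal_data_flows(flows):
    """Return the names of flows that carry personal data."""
    return [f.name for f in flows if f.contains_personal_data]

print(personal_data_flows(flows))
# → ['user prompt', 'model response', 'conversation transcript']
```

Even a twenty-line inventory like this forces the question “what does the system actually see?” before any mitigation work starts.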

2️⃣ Focus on What Matters (Prioritise Risks)

Focus on what could cause the biggest harm first. Not everything matters equally.
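A common way to make “biggest harm first” concrete is a likelihood × impact score. The scales and example risks below are illustrative assumptions, not values from the report:

```python
# Minimal likelihood x impact scoring (both on a 1-5 scale).
risks = {
    "model memorises training data": (2, 5),        # (likelihood, impact)
    "user pastes personal data into prompt": (5, 4),
    "logs retained longer than needed": (3, 3),
}

# Rank by likelihood * impact, highest first, so the biggest harms come first
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"{likelihood * impact:>2}  {name}")
```

Here “user pastes personal data into prompt” scores 20 and lands on top – which matches everyday experience: the most likely leak is usually the user, not the model.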

3️⃣ Set Up Basic Protection

  • Hide personal info before it goes to the model.
  • Control who sees what.
  • Filter AI outputs for sensitive content.
  • Use ‘unlearning’ techniques when someone requests deletion.
  • Keep humans in the loop on critical decisions.
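The first bullet – hiding personal info before it reaches the model – can start as simple pattern-based redaction. This is only a rough sketch: real deployments need proper PII detection, and the two regex patterns here are illustrative assumptions.

```python
import re

# Very rough regex-based redaction applied before a prompt leaves your system.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched personal data with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jan.kowalski@example.com or +48 123 456 789."))
# → Contact [EMAIL] or [PHONE].
```

The same `redact` function can be reused on model outputs as a crude output filter – one mechanism covering two of the bullets above.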

4️⃣ Test a Little

Start with one simple, low-risk case. Monitor the alerts, learn from mistakes, and adapt quickly. The EDPB guidance emphasises a lifecycle-based risk assessment approach: before deployment, organisations should identify, classify, and evaluate privacy risks to understand potential impacts.
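For a low-risk pilot, “monitor the alerts” can mean a thin wrapper that flags responses which still contain something that looks like personal data. Everything in this sketch is an assumption for illustration – the detector is naive and `fake_model` stands in for a real LLM call:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
alerts = []

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call in the pilot
    return f"Echo: {prompt}"

def monitored_call(prompt: str) -> str:
    """Call the model and flag any response that leaks an email address."""
    response = fake_model(prompt)
    if EMAIL.search(response):
        alerts.append(response)   # in practice: log, alert, human review
    return response

monitored_call("What is GDPR?")
monitored_call("Summarise the mail from anna@example.com")
print(len(alerts))  # → 1 – one response slipped personal data through
```

Counting and reviewing these alerts during the pilot is exactly the “learn from mistakes, adapt quickly” loop – before the same gap appears in a high-risk use case.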

5️⃣ Make Privacy a Continuous Process (Keep Doing It)

Privacy shouldn’t be treated like a one-time setup; organisations should:

  • Regularly review and monitor how data flows through the system to detect privacy or compliance issues that may emerge over time.
  • Adapt mitigation measures as usage evolves: new features, more users, and different data types change the risk profile, so security, anonymisation, logging, and oversight practices may need updates.
  • Maintain documentation and transparency: keep a risk register, log processing activities, document decisions, and ensure accountability.
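The risk register mentioned above doesn’t need heavy tooling to start – even a structured record in code beats a forgotten spreadsheet. The field names and example entries below are illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisterEntry:
    risk: str
    mitigation: str
    owner: str              # accountability: every risk has a named owner
    status: str = "open"
    logged: date = field(default_factory=date.today)

register = [
    RegisterEntry("prompt logs contain personal data",
                  "redact before storage", "data-protection lead"),
    RegisterEntry("output may reveal training data",
                  "output filter in place", "ML team", status="mitigated"),
]

# The review habit: what is still open, and who owns it?
open_items = [e.risk for e in register if e.status == "open"]
print(open_items)
# → ['prompt logs contain personal data']
```

The point isn’t the data structure – it’s that each entry names an owner and a status, which is what turns a list of worries into accountability.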

With these practices, privacy in LLM adoption becomes not just a “checkbox list” but a living practice – part of corporate culture and technical operations. This long‑term approach helps organisations respond to new threats, regulatory changes, and evolving deployment scopes. Privacy shouldn’t slow AI down – it should guide how we build it, driving us to create smarter, cleaner, and more transparent technology.


About author

Paulina Nowinska

Product Manager, Scrum Master, and AI practitioner passionate about turning ideas into impactful, user-centered solutions. I blend Agile leadership, product strategy, and hands-on experience with AI, automation, and emerging technologies to guide teams through digital transformation. I’m building AI-driven projects, including Vibe Coding apps, exploring creative workflows, automation, and multi-agent orchestration to bring innovative solutions to life.

