AI Tools & Products

Google’s top differential-privacy scientist tells the EU its data-sharing plan can be reversed in two hours

May 6, 2026

Google’s lead scientist on differential privacy, Sergei Vassilvitskii, has warned the European Union that its current plan for mandated anonymized data sharing could be reversed in just two hours. Vassilvitskii, a distinguished Google researcher since 2012, alerted EU regulators after his team tested the Commission’s anonymization scheme and found it vulnerable to reversal attacks. The EU faces a July 27 deadline to approve or revise the data-sharing proposal, which aims to enforce transparency by requiring tech companies to share some of their search data in a way that protects individual privacy.

This matters because the EU’s plan is intended to give regulators and researchers access to crucial search data while preserving user anonymity. If the anonymization can be cracked quickly, as Vassilvitskii’s team demonstrated, the entire purpose of the policy is undermined: sensitive user information could be exposed despite efforts to protect it. This creates risks not only for individual privacy but also for companies that rely on anonymization to comply with legal requirements. Such vulnerabilities could erode public trust in digital privacy safeguards and complicate ongoing regulatory efforts to balance transparency and confidentiality.

The background here involves growing pressure from governments around the world for large tech firms to share more data with regulators, especially data on online search patterns, which can reveal societal trends. The EU has been pushing for stronger data-sharing rules tied to privacy protections like differential privacy, a mathematical technique that adds calibrated “noise” to query results so that individuals cannot be identified even when datasets are analyzed closely. However, testing and red-teaming, as Vassilvitskii’s team performed, are crucial steps to validate whether such anonymization measures actually hold up when adversaries try to defeat them.
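The article does not describe the Commission’s specific scheme, but the noise-adding idea behind differential privacy can be illustrated with the standard Laplace mechanism: a count query is released with noise scaled to its sensitivity divided by the privacy budget epsilon. This is a minimal sketch of that textbook technique, not the EU’s or Google’s implementation; all values below are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: np.random.Generator) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means more noise and stronger privacy; larger epsilon
    means a more accurate answer but weaker protection.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical example: how many users searched a given term.
rng = np.random.default_rng(seed=42)   # fixed seed for reproducibility
true_count = 1000                      # illustrative value, not real data
sensitivity = 1.0                      # one user changes a count by at most 1
epsilon = 1.0                          # illustrative privacy budget

noisy_count = laplace_mechanism(true_count, sensitivity, epsilon, rng)
```

The point of red-teaming such a scheme, as Vassilvitskii’s team did, is that privacy only holds if the noise is calibrated correctly and the epsilon budget is tracked across every query; repeated or correlated queries can let an attacker average the noise away.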

Vassilvitskii’s warning signals a key challenge for regulators and tech companies as they develop data-sharing frameworks: current anonymization techniques may not yet be robust enough to meet the EU’s goals. The next steps should focus on strengthening these privacy methods or reconsidering how much access to raw or semi-processed data can be granted without violating privacy. For developers, the lesson is that relying on differential privacy without thorough stress testing can leave exploitable vulnerabilities. The EU could pause or adjust its deadline while experts refine approaches that better protect user identities. These developments will likely influence global conversations on balancing data transparency with privacy in AI and data science.

— AI Quick Briefs Editorial Desk
