While I once hoped 2017 would be the year of privacy, 2024 closes on a troubling note: a likely decrease in privacy standards across the web. I was surprised by the recent Information Commissioner’s Office post, which criticized Google’s decision to introduce device fingerprinting for advertising purposes from February 2025. According to the ICO, this change risks undermining user control and transparency in how personal data is collected and used. Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy began with a lowering of privacy and data protection standards. Some observers might then wonder whether this approach to online privacy signals similar attitudes in future Google products, including AI.

I can confidently raise this question, having observed and analyzed this area for over 15 years from various perspectives. My background includes experience in web browser security and privacy, including in standardization. I served on the W3C Technical Architecture Group and have authored scientific papers on privacy, tracking, and fingerprinting, as well as assessments of technologies like Web APIs, including the Privacy Sandbox’s Protected Audience API. I was looking forward to the architectural improvements to web privacy. In other words, I am deeply familiar with this context. The media have so far done a great job of bringing attention to the issue, but they frame this development as a controversy between Google’s policy change and the UK ICO’s concerns. I believe that the general public and experts alike would benefit from a broader perspective.

What Is Fingerprinting?

Device fingerprinting involves collecting information about user devices, such as smartphones or computers, to create a unique identifier, often to track people or their activities as they browse the web. This data may include IP addresses, browser user-agent strings, screen resolution, or even details like battery discharge rate. Fingerprinting is particularly concerning because it can be passive, requiring no user interaction. Data is collected without the user’s knowledge and linked to their device. Upon subsequent browsing, systems can recognize the same visitor, enabling ad tracking or uncovering private information, such as browsing habits.
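
To make this concrete, here is a minimal, purely illustrative sketch in TypeScript of how such passive collection might look in a browser. The handful of signals and the FNV-1a hash are my illustrative choices, not any vendor’s actual method; real trackers combine far more signals (canvas rendering, installed fonts, audio processing) and add the IP address server-side from the connection itself.

```typescript
// Illustrative sketch only: combine a few passively readable browser
// signals into one string. No user interaction is required.
const signals = [
  navigator.userAgent,                               // browser + OS details
  navigator.language,                                // preferred language
  `${screen.width}x${screen.height}`,                // screen resolution
  String(navigator.hardwareConcurrency),             // CPU core count
  Intl.DateTimeFormat().resolvedOptions().timeZone,  // time zone
].join("|");

// Hash the combined string (FNV-1a). The exact hash is irrelevant;
// what matters is that the same device yields the same value on every
// site and every visit, with no cookie to clear.
let hash = 0x811c9dc5;
for (let i = 0; i < signals.length; i++) {
  hash ^= signals.charCodeAt(i);
  hash = Math.imul(hash, 0x01000193) >>> 0;
}
const fingerprint = hash.toString(16);
console.log(fingerprint); // stable per device/browser configuration
```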

This form of identification is neither transparent nor user-friendly. Users are often unaware it is happening, and when it is done without their consent, awareness, or another legal basis, it breaches the law. Unlike cookies or other mechanisms, such identifiers cannot be easily “cleared,” making them especially invasive. Nevertheless, websites, advertising technology vendors, and others have continued to use them. Remarkably, large technology companies like Apple and Google once vowed not to engage in such practices. This commitment marked a major achievement for privacy, driven by advancements in privacy research and engineering. Large platforms even began competing to enhance user privacy, benefiting users’ welfare and reducing the risk of data misuse or leaks. This issue cannot simply be reduced to “Google does this, and the ICO critiques it.”

What’s the Change?

The current Google Ads policy concerning fingerprinting says:

“You must not use device fingerprints or locally shared objects (e.g. Flash cookies, Browser Helper Objects, HTML5 local storage) other than HTTP cookies, or user-resettable device identifiers designed for use in measurement or advertising.”

This policy is repeated in many other places, such as:

“You must not use device fingerprints or locally shared objects (e.g., Flash cookies, Browser Helper Objects, HTML5 local storage) other than HTTP cookies, or user-resettable mobile device identifiers designed for use in advertising, in connection with Google’s platform products.”

Until now, fingerprinting was broadly restricted, as seen here: “Google doesn’t allow fingerprinting.” Or here: “We remind you that our policies prohibit fingerprinting for identification.”

The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (thereby allowing such uses) and now only requires disclosure:

“You must disclose clearly any data collection, sharing and usage … such as your use of cookies, web beacons, IP addresses, or other identifiers. This applies for data collection, sharing and usage on any platform, e.g., web, app, Connected TV, gaming console or email publication.”

Privacy Sandbox and Fingerprinting

Google’s earlier privacy direction reflected a strong conviction that the web should prioritize privacy. However, I stress that if the ICO’s claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.

One of the initial elements of the Privacy Sandbox was the Gnatcatcher proposal, now known as IP Protection. This initiative was designed to mask users’ true IP addresses, thereby also combating fingerprinting, something Google now appears to allow. Privacy Sandbox was built on reducing fingerprinting surfaces as a fundamental guarantee, as detailed here:

“to help ensure that it is difficult to re-identify significant numbers of users across sites and apps.”

Privacy Sandbox components explicitly sought to limit fingerprinting. Some Privacy Sandbox proposals treated fingerprinting as a risk to be addressed during design, deployment, and use.
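
As a conceptual illustration of the IP Protection idea (a hedged sketch under my own simplifying assumptions, not Google’s actual design): routing traffic through proxies means the destination only ever observes a proxy’s egress address, removing one of the strongest fingerprinting signals.

```typescript
// Conceptual sketch, not Google's actual IP Protection design: with
// two-hop proxying, no single party sees both who the client is and
// where they are going, and the destination never learns the real IP.
// All addresses below are illustrative documentation values.

type Request = { sourceIp: string; destination: string };

// Direct connection: the site observes the real client IP.
const direct: Request = { sourceIp: "203.0.113.42", destination: "ads.example" };

// Each hop rewrites the source address before forwarding.
function forward(req: Request, egressIp: string): Request {
  return { sourceIp: egressIp, destination: req.destination };
}

// Hop 1 sees the client IP; hop 2 sees only hop 1's address.
const atDestination = forward(forward(direct, "192.0.2.1"), "198.51.100.7");
console.log(atDestination.sourceIp); // "198.51.100.7", not the client's IP
```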

Google’s own PR stance highlighted this focus:

“The Privacy Sandbox technologies aim to make current tracking mechanisms obsolete, and block covert tracking techniques, like fingerprinting.”

Google also acknowledged that:

“users have even less control when ad tech providers use permanent and immutable identifiers, like those derived based on device fingerprinting, since there's no central place for users to manage those”

The Contradiction

Google’s policy adjustments now create a troubling contradiction. While IP addresses are explicitly mentioned, the way the disclosure requirements are worded raises concerns. Is disclosure all that remains of the former restrictions? Google’s messaging implies that fingerprinting is now acceptable because it is common in the industry.

However, Google is not just any market participant; it is a dominant one. The Competition and Markets Authority in the UK has already investigated Google’s adtech practices, citing its significant influence. Is the Privacy Sandbox moving forward? Many web participants had assumed the industry was progressing toward privacy, and investments were made in this direction. Now, this reversal casts doubt, creating a significant shift for the online ecosystem.

Still, the accompanying wording suggests that privacy-preserving technologies are to be used.

Privacy-Preserving Technologies or Handwaving?

Google has argued that its use of Privacy-Enhancing Technologies (PETs) will mitigate risks. However, at least currently, this reassurance rings hollow. PETs are only meaningful if their implementation is clear and robust. Even hashing an IP address with MD5 could theoretically be labeled a PET, though it would be grossly inadequate: the IPv4 address space is small enough to enumerate, so such a “pseudonym” can be reversed by brute force in seconds.
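
A minimal sketch of why, assuming Node.js and an illustrative documentation address: hashing does not hide an IP when the input space is this small.

```typescript
import { createHash } from "node:crypto";

// A naive "PET": store the MD5 digest of the IP instead of the IP.
const pseudonymize = (ip: string): string =>
  createHash("md5").update(ip).digest("hex");

const stored = pseudonymize("203.0.113.42"); // what the tracker keeps

// Reversal: IPv4 has only 2^32 possible values, so an attacker simply
// enumerates candidates and compares digests. Narrowed to a /16 here
// for brevity; the full scan is four nested loops and still fast.
function recover(target: string): string | undefined {
  for (let a = 0; a < 256; a++) {
    for (let b = 0; b < 256; b++) {
      const candidate = `203.0.${a}.${b}`;
      if (pseudonymize(candidate) === target) return candidate;
    }
  }
  return undefined;
}

console.log(recover(stored)); // "203.0.113.42", recovered almost instantly
```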

While Google claims it will collaborate with the broader ads industry, the problem lies in uncertainty. Once the new policy is in force, it is imaginable that a wide array of signals could be used: not only device-based ones like IP addresses and browser configurations, but also behavioral fingerprints. Then suddenly, say in 2030, we wake up in a very different world from that of 2018.
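
For a sense of what “behavioral” means here, consider this hedged, purely illustrative browser sketch (my own construction, not any deployed system): identifying signal can be extracted from how a person moves the mouse, with no device property read at all.

```typescript
// Illustrative sketch: inter-event timing of mouse movement as a
// behavioral signal. Individual motor patterns tend to be consistent,
// so summary statistics of these gaps can help re-identify a user.
const gaps: number[] = [];
let last = performance.now();

document.addEventListener("mousemove", () => {
  const now = performance.now();
  gaps.push(now - last);
  last = now;
});

// Later, reduce the samples to a small feature vector that can be
// compared across visits (mean and variance shown for simplicity).
function features(samples: number[]): { mean: number; variance: number } {
  const mean = samples.reduce((s, x) => s + x, 0) / samples.length;
  const variance =
    samples.reduce((s, x) => s + (x - mean) ** 2, 0) / samples.length;
  return { mean, variance };
}
```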

Summary

These changes contradict the goals of the Privacy Sandbox. Google must explain how this reversal aligns with its previously communicated commitment to user privacy.

Assuming the ICO’s claims are accurate, the announced changes mark a departure from Google’s privacy objectives, potentially reversing a long and consistent trend of improving user privacy. While it is possible to use fingerprinting in line with laws like the EU GDPR and ePrivacy, big technology platforms had set higher standards. This shift arrives at a time when data demands, fueled by AI developments, are intensifying. The 2024 opinion of the European Data Protection Board, which potentially allows data originally acquired on questionable legal grounds to be used later to train and deploy AI models, is merely one recent evolution.

Reversing the stance on fingerprinting could open the door to further data collection, including for crafting dynamic, generative AI-powered ads tailored with great precision. Indeed, such applications would require new data.

Privacy and data protection research, as well as engineering, must continue to evolve: not merely as a future responsibility for data protection officers, but as a priority for strategists, engineers, and regulators, and as a matter of interest for reporters and journalists. Hello, 2025?

I hold a PhD in Computer Science (Privacy) and an LL.M. in IT Law. I have served on the W3C Technical Architecture Group, worked at the European Data Protection Supervisor, and consult on privacy technologies and strategies. I’ve had a busy year working and publishing in cybersecurity and data protection, and am looking forward to new opportunities starting in early 2025. If you could use expertise in global cybersecurity, risk assessment, privacy regulations/GDPR, and standards, I’m open to new projects. Feel free to reach out: me@lukaszolejnik.com.