It is not often that a major product used by millions rolls out changes that reduce its privacy posture. Google Chrome is a recent example. The newly released version 69 automatically signs users into the browser whenever they log into a Google service (e.g. Gmail); the change also makes it easier to turn on data synchronisation by mistake.

Recap

This regression is confusing, and it further blurs the line between what is “local” (the user’s device) and what is “remote” (the “cloud”). Furthermore, the change was not communicated well; in fact, it was not communicated at all. Users were not notified after installation, and the release notes contained no explanation or even the slightest hint. To users, this was a surprise. To add to the oddity, Google ended up forced to modify the Chrome privacy policy in emergency mode (literally, “over a weekend night”), as the then-current privacy policy had not been adapted (which also shows that Google treats privacy policies descriptively, not strategically).

For more information, refer to my detailed write-up (Am I logged in or not? GDPR case study on the example of Chrome browser change). The change sparked controversy, with other researchers highlighting the issue as well (e.g. Matthew Green, Vincent Toubiana).
The case also attracted some public attention (e.g. 1, 2, 3).

I won’t speculate on how the change came to be. Instead, let’s acknowledge that Google has decided to backtrack and roll out additional changes scheduled for Chrome 70. Chrome is to communicate the sign-in state more clearly, and cookie clearing will also apply to Google cookies. It will take about a month to deliver the fix. Meanwhile, the privacy user interface remains confusing, and the browser denies users control over their data (some cookies are not cleared following an explicit user action). I am happy these changes will happen.

In my previous post, I presented an analysis based on the Chrome 69 change. The planned changes in Chrome 70 enrich the case study further. Accordingly, I provide an update below.

Data Protection by Design

Data Protection by Design, sometimes referred to as “Privacy by Design” (PbD), favours a proactive stance. In the case of Chrome, it appears the automatic sign-in feature may not have gone through a privacy review. As a result, Chrome 70 ships changes that address the issue in a reactive fashion. It is interesting to see Privacy by Design (not) at work.

Chrome 70 intends to ship a change that will allow users to disable the “automatic sign-in feature”. The communiqué from the Chrome team suggests that, by default, users are opted in. I wonder whether a change like this should not instead be opt-in for existing browser installations.

Data Protection Impact Assessment

I believe that if a Data Protection Impact Assessment for Chrome exists, this case should be included in it. Furthermore, I would still find it constructive to see the (technical) DPIA made public.

Summary

This is an addition to an already fascinating case study. It highlights the connections between privacy programs and reviews (competing goals of development teams, management priorities), software and system design (how to review for privacy), regulations (GDPR, DPIA, PbD), and privacy policies.

The case highlights shortcomings, but it also demonstrates something positive. Fortunately, it turns out that concerted effort and pressure from concerned users and researchers can (still) have an impact.

However, one final question remains. If a privacy policy can be changed practically overnight, what is its actual value?