Last month, Google made one of the biggest changes in search privacy ever by routing all searches through its Google Secure Search service. Why did it make the change? Why didn’t it close the loopholes that leave some search data vulnerable? Google — which is demanding more transparency from the US government over privacy issues — prefers not to be transparent about its own moves.
What’s The TL;DR?
The too long; didn’t read is that more people should read stuff that they deem too long. The world would be a better place. But OK:
Google claims it has improved search privacy but won’t explain why there are major loopholes in that protection.
There. That even fits into a tweet. That statement should be enough to concern anyone. A major company makes moves it says increase privacy, yet leaves loopholes in them?
Right now, Google is desperately trying to battle the impression that it already gives the NSA, the US government’s spying agency, access to what people do on its services. It really doesn’t need unexplained loopholes of its own.
Over the summer, Google quietly started routing searches from some non-logged-in users through Google Secure Search, something it previously had done only for people who were signed in to Google. In late September, it increased that routing and confirmed to us officially that it was happening.
Why make such a move? Unlike the change two years ago, for logged-in searchers, there was no public blog post about the shift. Google seems to have hoped no one would even notice. That silence has helped fuel speculation that the change was less about protecting privacy and more about protecting Google’s ad business — or that perhaps privacy was a convenient excuse to also boost the ad-side of Google’s interests.
The Questions Google Wouldn’t Answer
To understand more, I asked to speak with Google’s director of privacy, Lawrence You, about the change. Google refused that request. Google also refused to provide answers to any specific questions I emailed. All it responded with was the same statement it gave when we reported in September about the change to send all searches through Google Secure Search:
We added SSL encryption for our signed-in search users in 2011, as well as searches from the Chrome omnibox earlier this year. We’re now working to bring this extra protection to more users who are not signed in.
It’s a pretty sparse response to the questions that I — and others — wanted answered. Answers that might square Google’s claim that the change was designed to protect searchers with the big holes it seemingly still leaves in that protection.
On to the questions.
1) Are search terms considered private information?
This is a fundamental question that Google should answer. It’s core to what most people do at Google — search. Are the words they enter into the Google search box considered private or not?
The answer seems to be, “maybe” or “sometimes.” But good luck figuring that out directly from Google itself.
Search privacy doesn’t appear to be an important enough topic to merit any mention on Google’s “Good To Know” page about staying safe and secure online with its products, not that I can see. Nor is it featured within Google’s Inside Search area.
No, trying to get an answer about whether Google views search terms as private is like a scavenger hunt that you need a search engine like Google to complete. Here’s a page that explains Google may record your search history. Here’s another on how to delete that history. Here’s another that explains that search history is used to personalize results, and that “Google takes your privacy very seriously,” so presumably search terms are indeed private in some way.
2) If search terms are considered private information, why are they provided in various ways that can be viewed by third parties, such as through Google Webmaster Tools, AdWords & Google Suggest?
Google clearly considers search term data private to some degree; otherwise, it wouldn’t be using Google Secure Search as a means to strip those terms out of the “referrer” information that’s passed along to non-advertisers (advertisers still get this data).
That was the whole justification Google gave for adopting Google Secure Search back in 2011. It argued that as people were able to search for more personal things through Google, such as appointments in Google Calendar or messages in Gmail, the searches themselves might be too sensitive to expose to a third party.
For logged-in users, Google Secure Search stopped the transmission of search terms “in the clear” to publishers (except for advertisers). It also prevented the terms from being associated with other information, such as a searcher’s IP address (except for advertisers).
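What that stripping amounts to, in effect, is reducing a full search-results URL to its bare origin before the destination site ever sees it. A minimal sketch of the idea (the function name and details are illustrative, not Google’s actual code):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_referrer(referrer_url: str) -> str:
    """Reduce a full search-results URL to its bare origin,
    dropping the path and query string (and the search terms
    they carry) before it reaches the destination site."""
    parts = urlsplit(referrer_url)
    return urlunsplit((parts.scheme, parts.netloc, "/", "", ""))

full = "https://www.google.com/search?q=sensitive+medical+question"
print(strip_referrer(full))  # https://www.google.com/
```

The publisher still learns the visitor came from Google — just not what they searched for.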
This change made it harder for anyone to “eavesdrop” on a string of searches by any individual. Get enough searches linked to an IP address, and potentially, you can learn enough to know who is doing those searches. That happened in 2006, when AOL released “anonymous” search data that the New York Times used to track back to a particular person. Great story. Read it.
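The mechanics of that AOL-style re-identification are simple to sketch: group logged queries by whatever pseudonymous ID (or IP address) accompanies them, and the accumulated history becomes a profile. A toy illustration — the log format is hypothetical, though the sample queries for user 4417749 are among those the New York Times reported:

```python
from collections import defaultdict

# Toy query log: (pseudonymous_id, query). The AOL release used numeric IDs
# rather than names, which turned out not to be anonymous at all.
log = [
    ("4417749", "landscapers in lilburn ga"),
    ("4417749", "homes sold in shadow lake subdivision gwinnett county georgia"),
    ("4417749", "60 single men"),
    ("9001234", "weather boston"),
]

# Group every query under the ID that issued it
histories = defaultdict(list)
for uid, query in log:
    histories[uid].append(query)

# Enough queries tied to one ID sketch a profile specific enough to re-identify
for uid, queries in histories.items():
    print(uid, "->", len(queries), "queries")
```

The risk isn’t any single term; it’s the linkage of many terms to one identifier.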
But search term data is still provided in other ways — and without any apparent attempt to filter out terms that might be somehow personally identifying on their own.
That’s also, apparently, the answer Google refuses to give to the question above. It clearly doesn’t believe that search terms on their own — separated from potentially personally identifying information — are private, or private enough that they can’t be given to third parties. Otherwise, it wouldn’t continue to do this.
But that’s my guess. Google isn’t saying.
3) If only some search terms are considered private, how does Google filter these from being exposed to the public in some of the ways outlined above?
This is closely related to the second question and really depends on what Google’s answer is.
If the answer is that search terms are considered private, period — no ifs, ands or buts — then Google fails to protect this privacy because it applies no filtering (that it has acknowledged) before handing them over in various ways to third parties.
If the answer is that search terms are considered private when linked to personally identifiable information like a cookie or IP address, then Google fails to protect this privacy when it passes along search terms along with such information to advertisers.
If the answer is that search terms are considered private when linked to personally identifiable information AND when a number of terms can all be linked to one individual, THEN Google’s move to Google Secure Search makes sense and lacks some of the seeming loopholes in the “protection” story that Google wants to spin.
4) What’s considered the bigger privacy issue, potential eavesdropping of a string of terms or the terms themselves?
As explained in my speculation on the third question, I think Google sees eavesdropping on a string of terms to be the real privacy issue. It’s a pity the company won’t explain if this is so.
5) Why are ad clicks not encrypted or withheld? How does this square with privacy?
Do a search on Google, click on an ad, and what you searched for is transmitted to the advertiser, along with your IP address, leaving you open to being targeted for ads based on that term in the future.
This has been a loophole since Google’s first rollout of Google Secure Search in 2011. The company has never offered a decent explanation for why it does this.
The only real explanation that holds up is that terms aren’t deemed private unless you are able to intercept a series of them.
6) If terms aren’t considered private, why not let web sites that support secure connections continue to receive search term data, which would block eavesdropping?
There’s a nearly 20-year-old industry-standard “referrer” system: when you click from one web page to another, the destination page is told where you came from. It’s sort of like a “Caller ID” for the web. Until 2011, Google used this standard to let publishers know the exact terms someone searched for when they found the publisher’s content on Google and clicked through to the site.
Well, except for when people click on advertiser listings. Google still uses the standard, in that case.
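In practice, a publisher’s analytics could read that term straight out of the referrer URL. A rough sketch of what that extraction looked like for an old-style, unencrypted Google referrer (function name illustrative):

```python
from urllib.parse import urlsplit, parse_qs

def search_terms_from_referrer(referrer: str):
    """Return the query a visitor searched for, if the referrer is an
    old-style (unencrypted) Google results URL that still carries it."""
    parts = urlsplit(referrer)
    if parts.netloc.endswith("google.com") and parts.path == "/search":
        return parse_qs(parts.query).get("q", [None])[0]
    return None  # secure search sends only the bare origin, so: nothing

print(search_terms_from_referrer("http://www.google.com/search?q=blue+widgets"))
# blue widgets
```

Run it against a post-2011 secure referrer like `https://www.google.com/` and it returns nothing — which is exactly the “not provided” hole publishers have been living with.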
The change was useful if the goal was to prevent “eavesdropping.” But if the goal was, as Google stated when making it, to encourage the industry to “adopt stronger security standards,” Google missed a huge opportunity.
Google could have continued to provide referrer data to publishers if they agreed to also use secure web sites. Many would have, just as many have increased their site speed when Google said it would reward faster sites with better rankings.
So why not do that, especially when today Google seems even more worried about internet security in the wake of the NSA spying revelations? Again, no answer from Google.
7) Why the sudden change to encrypt everything other than ad clicks?
Moving ALL Google searches to Google Secure Search wasn’t something Google would have done just because someone there got up one day and decided to flip a switch.
Two years ago, Google initially moved logged-in users to Google Secure Search in preparation for the arrival of Google’s Search Plus Your World. That was the motive.
What prompted Google to make the move this summer to full security? To me, the likely candidate is concern over the NSA spying. But it could be that Google decided to make the move to shore up its ad business. Maybe it was both. Google’s sparse statement, however, says nothing about WHY it made this change, out of the blue.
Tell Me More
These are two recent FAQ-like stories I’ve done:
These are background stories that go into detail on many of the things I’ve covered above:
- The Death Of Web Analytics? An Ode To The Threatened Referrer
- Google To Begin Encrypting Searches & Outbound Clicks By Default With SSL Search
- Google Puts A Price On Privacy
- Dark Google: One Year Since Search Terms Went “Not Provided”
- Google’s Plan To Withhold Search Data & Create New Advertisers
- Post-PRISM, Google Confirms Quietly Moving To Make All Searches Secure, Except For Ad Clicks