If you or your company have seen something that harms your reputation abruptly appear in Google’s search results, you may be wondering how and why something negative could appear so fast, and how it gained ground against longer-established material. It’s pretty simple, though: Google’s algorithm likes it better. Let me explain.
First of all, Google has worked very hard to interpret user intent when searches are conducted. It’s not easy to fathom what people may be seeking when they submit a keyword or a keyword phrase.
When someone searches for “pizza,” for instance, Google may assume that most of the time people are seeking local pizza providers, so it provides Map Search results of local pizza restaurants, personalized to the locality of the searcher. But it also provides links to pages from nationwide pizza websites that deliver, as well as lists of top area pizza places, the Wikipedia article for pizza — and more.
Since Google cannot always divine a specific intention when a user submits a search query, it’s evolved to using something of a scattergun approach — it tries to provide a variety of the most likely sorts of things that people are generally seeking when submitting those keywords. When this is the name of a business or a person, Google commonly returns things like the official website of the subject, resumes, directory pages, profiles, business reviews and social media profiles.
Part of the search results variety Google tries to present includes fresh content — newly published things like news articles, videos, images, blog posts and so on.
Another aspect of Google’s desire to present a search results page with a variety of content is the company’s effort to reduce duplicate content. As Google says in its help page about duplicate content, “Google tries hard to index and show pages with distinct information.”
Google makes this all look easy, but for nearly any keyword query, there are typically many thousands of pages that are determined to be more or less relevant, and determining what comes up at the top on the first page of results is complex and difficult.
Unfortunately, Google seems to have embedded a bias in the system (or a few biases), which in many instances gives negative content greater ranking ability than it deserves.
New, negative content gaining prominence
To determine where newly emerging material should fit, Google seems to test it. In my experience, when a new web page is introduced and Google spiders it, it’s not uncommon to find that new page appearing at various places in the search results over time: on page three, then page two, then page one. Sometimes the new content rises up to page one and then subsides lower in the results, where it stabilizes. In other cases, it rises to page one and sticks there.
Of course, Google has always used a number of relevance and prominence signals, such as the links pointing to a web page, to determine relative rankings. As a new web page is introduced, the various other signals may trickle in afterward, as Google iteratively incorporates newly discovered links and other data into its calculations.
But another couple of dynamics appear to be at work as well.
One of the additional factors is user engagement, starting with clicks when people begin encountering the new content in the search results. (Note: I am aware that there’s controversy about how Google uses clicks in determining rankings, but, at SMX West in March, engineer Paul Haahr’s presentation specifically mentioned that the engine conducts experiments that look for changes in click patterns.)
This is where Google’s apparent automated content testing really comes into play, in my opinion. When newly introduced content is surfaced on page one, does it receive more clicks than the other items appearing alongside it? If it does, it may earn a lasting first-page slot.
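To make the idea concrete, here’s a toy sketch of that kind of click test. This is purely illustrative and assumes nothing about Google’s actual implementation; the result data, field names and promotion rule are all invented for the example.

```python
# Toy sketch of click-based result testing -- NOT Google's actual
# algorithm, just an illustration of the dynamic described above.

def ctr(clicks, impressions):
    """Click-through rate; 0.0 when a result has no impressions yet."""
    return clicks / impressions if impressions else 0.0

def should_promote(new_result, page_one_results):
    """Keep the new result on page one if its observed CTR beats the
    average CTR of the results it was tested alongside."""
    baseline = sum(
        ctr(r["clicks"], r["impressions"]) for r in page_one_results
    ) / len(page_one_results)
    return ctr(new_result["clicks"], new_result["impressions"]) > baseline

# Invented example data: a scandalous title can out-click the incumbents.
page_one = [
    {"title": "Acme Corp - Official Site", "clicks": 120, "impressions": 1000},
    {"title": "Acme Corp - LinkedIn", "clicks": 60, "impressions": 1000},
]
newcomer = {"title": "Acme Corp scam lawsuit?", "clicks": 150, "impressions": 1000}

print(should_promote(newcomer, page_one))  # True
```

The point of the sketch is simply that a relative comparison of click rates, rather than any editorial judgment, is enough to let a lurid headline win the test.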
But why might this newer content receive more clicks than other, better-established items that have long been present on page one, and which would naturally have accumulated more clicks than something that has never appeared there before? The answer often lies in the title of the content as it appears in the search results.
I like to point out that on the information superhighway, when there’s an accident, people tend to be rubberneckers. It only makes sense that when you see a link in search results with a scandalous or highly negative headline, your attention is captured. It’s only human nature. If you’re searching for information about a person or a company, seeing such a headline makes you curious, and you want to find out about it.
Combine an individual’s name with words like “arrest record,” “scandal” or “cheat,” and it draws clicks at a far higher rate than more staid content like “homepage,” “resume” or “phone records.” Similarly, combine a company’s name with words like “scam,” “lawsuit” or “rip-off,” and it gets clicked.
From talking with Google engineers at various times, I know that they want search results to be pretty objective. In general, they want to make information about subjects available to everyone — regardless of whether the information is positive, negative or neutral. But these attention-getting negative headlines might be skewing the system.
The negative bias dynamic
Prior to the internet, a single individual could rarely acquire a voice as loud as that of a major corporation. But the internet — and Google — have effectively given people a bully pulpit. It doesn’t take much for an individual to create some content now that tears down a company (or another person), and to get it to rank well.
Unfortunately, I’m fairly convinced there may be another dynamic in Google algorithms that is biased towards negative content, in addition to dramatically headlined links: the desire to deliver content with a variety of sentiments.
Google has long had the ability to analyze content for sentiment. For instance, its patent for “Phrase-Based Snippet Generation” enables it to analyze a document for its various expressions of sentiment and bubble that up into the snippet displayed under the page’s link in search results.
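As a rough illustration of what sentiment analysis of a document can look like, here’s a minimal lexicon-based scorer. To be clear, this is not the method described in Google’s patent; the word lists and scoring rule are invented for the example.

```python
# Minimal lexicon-based sentiment scorer -- a simplified stand-in for
# the kind of analysis described above, not Google's patented method.

NEGATIVE = {"scam", "lawsuit", "rip-off", "scandal", "cheat", "arrest"}
POSITIVE = {"award", "trusted", "official", "best", "praised"}

def sentiment_score(text):
    """Rough score: each positive word adds 1, each negative word subtracts 1."""
    words = text.lower().replace(",", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

print(sentiment_score("Acme Corp wins trusted vendor award"))  # 2
print(sentiment_score("Acme Corp scam lawsuit exposed"))       # -2
```

Real systems use far richer phrase-level and contextual signals, but even a crude score like this is enough to label a page (or a candidate snippet) as positive, neutral or negative.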
Since Google has dedicated a lot of effort to providing a variety of types of content in search result listings, it isn’t much of a stretch to consider that it also works to present a list of content with a combination of sentiments. Though I don’t have proof other than my own observations, it seems highly likely to me that this is happening.
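If Google does blend sentiments in this way, the selection step might conceptually resemble the sketch below. This is my speculation made concrete, not a documented mechanism; the bucket labels, relevance scores and slot count are all assumptions of the example.

```python
# Hypothetical sketch of sentiment-diversified result selection.
# Speculative: illustrates the behavior inferred above, not a known algorithm.

def diversify(candidates, slots=3):
    """Pick up to `slots` results, preferring one from each sentiment
    bucket (positive, neutral, negative) before taking duplicates."""
    buckets = {"positive": [], "neutral": [], "negative": []}
    for c in sorted(candidates, key=lambda c: c["relevance"], reverse=True):
        buckets[c["sentiment"]].append(c)
    picked = []
    while len(picked) < slots and any(buckets.values()):
        for bucket in buckets.values():
            if bucket and len(picked) < slots:
                picked.append(bucket.pop(0))
    return picked

# Invented data: the negative post is the least relevant candidate,
# yet variety-seeking still places it on the results page.
candidates = [
    {"title": "Official site", "sentiment": "positive", "relevance": 0.9},
    {"title": "Wikipedia", "sentiment": "neutral", "relevance": 0.8},
    {"title": "Praise article", "sentiment": "positive", "relevance": 0.7},
    {"title": "Rant blog post", "sentiment": "negative", "relevance": 0.3},
]
print([c["title"] for c in diversify(candidates)])
# ['Official site', 'Wikipedia', 'Rant blog post']
```

Note what happens in the example: the lone negative item outranks a more relevant positive one purely because it fills the “negative” bucket, which is exactly the bias this section describes.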
I’ve worked on a number of cases in which all my research indicates that my clients’ names draw extremely low search volumes. According to my data, the negative materials are likely to receive no more clicks than the positive materials, and, in many cases, they have fewer links.
Once you know the mechanisms at work (the ways people search, what draws people to click on one item versus another, and how to optimize content), you can engineer content to push the negative items down in the search results so they’re less visible.
For a great many queries, Google’s efforts to produce variety work effectively to provide very useful search results. But in many cases I’ve worked on, serving up a variety of information can be problematic: negative content that Google’s system raises into visibility often doesn’t merit the prominence it’s given.
Certainly, companies and individuals should be accountable for their actions. The internet has been a part of enabling that to happen, and many, if not most, people believe this to be a net positive. But there are also many instances of people and companies being harmed unfairly by untrue and malicious defamation.
Should a single crank or crazy person with a bully pulpit be able to bring a company to its knees, figuratively speaking? These things affect the livelihood of companies’ employees and stockholders.
Where Google can improve
Google perhaps can’t be completely blamed for matching results to what its data indicates to be “searcher intent.” After all, the major news organizations have similarly done that for many years, and many have remarked upon the tendency toward reporting on sensationalist and salacious topics. But I’d argue that Google doesn’t have to gravitate toward being the equivalent of the National Enquirer of search; doing so is often simply not fair to those who are affected.
Google could do better in this respect by finding additional signals that can be used to determine the veracity and merit of negative materials. There’s no question that a news story from a site like CNN, which tends to fact-check accusations and verify sources, may merit a prominent ranking for a subject. But should a single blog post from a crazy person, an extortion artist or a disgruntled boyfriend be able to rank just as easily?
I wouldn’t be surprised if many disagree with me. If something bad and damning about a person or company comes out, there can be some degree of public interest in allowing that to come out and be findable (depending on who is involved and the context). But I think that Google ought to apply some of its sophistication to vetting content algorithmically. Simple gossip or hastily sketched-together defamation shouldn’t rise to the same level as professional reporting on a major class-action lawsuit.
Until Google improves on this dynamic, you can use the standard tools of online reputation management to try to counter the negatively biased search results that are impacting you. It goes beyond merely generating content that matches your name. You may need to craft titles that are equally psychologically attractive to searchers to draw them away from the negative content.
[Editor’s note: We’ve reached out to Google for comment and will update this post if and when we receive a response.]
Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.