On August 1st, 2018, Google officially confirmed via the @searchliaison Twitter account that a broad core algorithm update had rolled out during the previous week:
Judging by the ranking fluctuations as measured by Mozcast, the algorithm rollout likely began on July 31st:
According to Danny Sullivan, we can estimate that the algorithm completed rolling out circa August 8th, although again this is only an estimate:
If you were negatively impacted by the update, Google stresses that this was not because there was something specifically wrong with your pages, and that it shouldn’t be thought of as a penalty. Instead, other pages are now outranking you because the previous version of the algorithm wasn’t adequately rewarding them:
Unfortunately, Google’s advice on “recovering” lost rankings, while sound, isn’t particularly comprehensive or actionable beyond saying that you should focus on creating great content:
Thankfully, Google provided us with a more useful clue when they updated their quality raters guidelines on July 20th, 2018. The quality raters guidelines are the guidelines that human raters are taught to use in order to score the quality of pages and their relevance for search queries.
Those scores have no direct impact on how Google’s algorithms work, but they are used to improve Google’s algorithms. Machine learning algorithms are trained on these scores, and manual adjustments to the algorithm are designed to ensure that better-scored pages tend to rank higher for the right queries.
In short, if you are creating pages that would receive a high score from human raters following those guidelines, specifically for the kinds of search queries you are trying to rank for, there is a better chance you will rank higher for those queries.
In the post that follows, we will review what those changes were, as well as any other clues pointing to what may have changed with the algorithm, to ensure that you can rank well after this update.
What is a Google Broad Core Algorithm Update?
Before we dive deeper into what changes were most likely made by the algorithm update, it’s important that we understand specifically what kind of update this was.
Let’s start with what a broad core algorithm update is not. It is not a focused update like Panda or Penguin, designed to address a specific issue. Updates like those are new algorithms incorporated into the main algorithm, sometimes running entirely separately from it, aside from providing the main algorithm with a score or classifier of some kind that is then used to tweak rankings.
A broad core algorithm update means that the main algorithm itself has been revamped in some way. This could include changing the way in which certain existing ranking factors were weighted against one another, or how they interact with one another, or how they form a cohesive whole.
It’s also entirely possible that the update incorporated new ranking factors that weren’t previously taken into consideration.
It’s also worth noting that even Google employees may not know precisely what changes were made, particularly if machine learning is involved. If Google’s core algorithm were, for example, a machine learning model trained on quality rating scores produced under the updated rating guidelines, the resulting algorithm would be a set of rules too complicated for Google’s search engineers to interpret at the base level. This is simply the nature of machine learning.
Ultimately, a broad core algorithm update like this means that so many things have changed, and the way that they interact has changed so much, that there is no one suggestion that will help every website that lost rankings. The solution for each website will be different.
In the spirit of that, Danny Sullivan stressed that sites which have lost rankings after this update don’t have anything to “fix” because they weren’t broken. The update wasn’t designed to target any one thing:
Instead, the update was broad, and he explicitly mentioned Google’s human quality raters guidelines:
How The New Quality Rater Guidelines Likely Influenced The August 1 Broad Update
So, let’s start talking about what changes were made to the raters guidelines that may have impacted your site when the core algorithm was updated. After that, we will share some advice on diagnosing why rankings may have been lost.
For reference, the complete 164-page document is available here.
A New Focus On Author Reputation
One of the most apparent changes to the quality rater guidelines was a change to how reputation is evaluated. Previously, raters were asked to evaluate the reputation of the website or the brand, but the new guidelines also ask raters to evaluate the reputation of individual authors or content creators.
The change is especially apparent because it is reflected in the table of contents. Section 2.6, previously named “Website Reputation,” has been changed to “Reputation of the Website or Creator of the Main Content.” The subsection 2.6.1 has also been changed from “Reputation Research” to “Research on the Reputation of the Website or Creator of the Main Content.”
Some of the related changes to the document include:
- Identifying the content creator if it’s different from the creator of the entire website
- Suggesting quality raters search for the name or alias of the content creator to research their reputation
- Looking for reputation information created by others, not just by the content creator, including biographical information
- Looking at the author’s Wikipedia page
- Quality raters are told that, for well-known content creators, they should expect to be able to find reputation information. They do stress, however, that it may not be possible to find reputation information for smaller organizations, and that this shouldn’t be considered a sign of low quality. Presumably, this advice carries over to less well-known content creators.
- In choosing a page quality rating, raters are asked to consider both the website’s reputation and the reputation of the person responsible for creating the main content on the page.
- References to “quality” have been updated to refer instead to “EAT” which stands for “Expertise, Authoritativeness, and Trustworthiness,” and authors are expected to have “high EAT,” not just websites.
- Medical content is expected to be created by medical professionals, news articles by journalists, and science content by science experts; likewise for financial, tax, and legal advice. Even home remodeling, parenting, and hobby content should be created by experts on those topics.
- Highest quality pages are distinguished from high-quality pages based in part on a very high positive reputation for the content creator. Before giving a page the “highest” rating, raters are now asked to perform “extensive” reputation research.
- Pages may be rated “low quality” if the content creator lacks the expertise to serve the purpose of the page.
- A mildly negative reputation for the website or author is now enough to receive a “low quality” rating.
- For “your money or your life” (YMYL) pages, a “mixed” reputation is enough to receive a “low” rating.
- Clear information about who created the content is expected unless there is a good reason for anonymity. A long-standing alias can also serve the same purpose as a name. For personal websites and forum discussions, email addresses or social media profiles can be considered adequate reputation information. This is very different for “your money or your life” (YMYL) pages, which automatically receive a low rating if information about the author is insufficient.
If the broad core algorithm update negatively impacted your rankings, it’s possible that some of your pages were overtaken by pages written by authors with a clearer or more trustworthy reputation. For sites with multiple authors, this means that you may need to get more discerning with your author choices, or that you should invest more resources in adding content created by authors with a better reputation.
I want to reiterate here that this isn’t about punishing less well-known authors. Do not start firing authors because they don’t already have a big name associated with them.
Do consider how a negative author reputation could impact the user experience. This isn’t to say that you shouldn’t work with controversial authors if they are a good fit for your target audience and search queries. It does mean that if an author has a history of perpetuating misinformation or is likely to be untrustworthy to informed members of your target audience, you probably shouldn’t be working with them.
In any case, this is also a sign that you should start investing more in personal branding for yourself or any authors working for your brand, especially if you have been focused entirely on your organizational branding up until this point.
Note that, based on how raters are asked to score results, we would expect a negative author reputation to affect individual pages, not necessarily the site as a whole, unless, of course, the sum of author contributions damages the reputation of the entire website’s brand.
Introducing Beneficial Purpose
Quality raters have always been asked to determine the quality of a page in large part by determining how well the content of the page lives up to its intended purpose. The new quality ratings expand on this by emphasizing that the purpose itself needs to be “beneficial.”
In Google’s own words: “Most pages are created to be helpful for users, thus having a beneficial purpose.”
In other words, pages that aren’t seen as being helpful for users are seen as having no beneficial purpose. So a page may serve its intended purpose very well, but if that purpose isn’t for the user’s benefit, it’s a low-quality page.
Mentions of “beneficial purpose” have been added throughout the guidelines, including:
- Pages receive the lowest rating if they have no beneficial purpose, regardless of any other factors
- Raters are asked to consider whether non-traditional pages have a beneficial purpose
- Pages that do not attempt to help users, that spread hate, cause harm, misinform, or deceive users are considered to have no beneficial purpose
- Beneficial purposes include providing information, making people laugh, artistic expression, and allowing people to purchase products or services
- Depictions of extreme gore, violence, racial slurs, or offensive terminology are only permitted if they serve a beneficial purpose
To understand how important beneficial purpose is, consider this: a page can receive a medium score simply by having a beneficial purpose and meeting it, but a page with no beneficial purpose will receive the lowest possible score, even if it demonstrates the highest possible expertise, authoritativeness, and trustworthiness.
This hammers home just how central the user needs to be to your marketing strategy if you want to rank well after the August 1 broad core algorithm update. While every page must ultimately serve a business purpose, do not allow this to distract from the need to serve a beneficial purpose for users. The more beneficial that purpose is, the more likely you will be to rank well in the search results.
Many other miscellaneous changes were made to the rater guidelines that could have indirectly impacted the SEO on your site in the wake of the algorithm update. Here are some of the most important ones:
- Clickbait titles are a no-no. Specifically, the title of the page should describe the content, not use exaggeration or misleading language to entice clicks that will leave the user disappointed or confused when the content doesn’t live up to the hype. Raters are instructed to rate a page as “low” if it uses clickbait, regardless of the quality of the content otherwise, so this is an important change.
- High-quality pages must have a descriptive or helpful title.
- The “Your Money or Your Life” (YMYL) page criteria have been updated to include pages that impact safety, in addition to the previously included happiness, health, and financial stability.
- The “Overall Page Quality Rating” section has been revamped to specify that raters need to first understand the purpose of a page and to give it the lowest rating if that purpose isn’t beneficial. They are then instructed to otherwise give it a rating of lowest, low, medium, high, or highest based on how well it achieves its purpose.
- News sources are expected to have published editorial policies and a robust review process.
- “Highest” score pages are now distinguished from “high” score pages based on the quantity of main content, in addition to the previously existing quality, indicating that “highest” score pages are expected to be more comprehensive. (Do not make the mistake of confusing this for word count.)
- An unsatisfying amount of main content to meet the purpose of the page results in a “low” score.
- “Highly distracting” ads used to be necessary to award a “low” rating, but now they merely need to be “distracting.”
- “Unmaintained” websites have been added to the section on hacked, defaced, and spammed pages. Raters are instructed to mark them “lowest” if a lack of maintenance has resulted in the website failing to meet its purpose.
- Socioeconomic status, political beliefs, and victims of atrocities have been added to the section on “Pages That Spread Hate.” Hate speech that is “expressed in polite or even academic-sounding language” is specifically called out.
- A new section on “Potentially Harmful Pages” has been added that asks raters to give the “lowest” rating to pages that encourage or incite mental, physical, or emotional harm to oneself or others. They list as examples the justification of sexually abusing children, terrorist “how-to” guides, extreme depictions of gore or violence with no beneficial purpose, pro-suicide and pro-anorexia sites, and realistic-sounding death threats.
- A new section on “Pages that Potentially Misinform Users” has been added that focuses on conspiracy theories (including ones that some may find amusing because they are outlandish) and demonstrably false information. These are to be given the “lowest” rating.
- Pages designed to trick users into clicking ads or links receive the “lowest” rating.
Of these changes, the ones that seem most relevant are those regarding distracting ads and deceptive clicks, clickbait, hate speech, and conspiracies and misinformation. Any of these changes could have indirectly impacted your site, however.
Diagnoses and Recovery
It’s impossible to pinpoint precisely what changes impacted your pages in which ways, and it would be a mistake to approach diagnoses and recovery in that way. Furthermore, all of the changes mentioned above are things you should be conscious of and begin incorporating into your SEO strategy, regardless of whether they were directly responsible for any competitors outranking you in this particular instance.
Still, with the above changes in mind, it is useful to analyze the results and qualitatively evaluate which strategic changes are most prudent.
If you were monitoring keywords in SEMrush or a similar tool, begin by analyzing the highest-volume keywords you lost rankings for. Starting with the highest-volume keyword, take a look at the pages that recently saw a boost in rankings, and compare them to your page. Use the rating guideline changes above as a guide to determine which factors may have been responsible, and begin updating your pages as though a human quality rater were going to evaluate them, aiming for a higher score. You may even consider having your own human quality rater evaluate the pages.
Next, move to Google Analytics and identify any pages that lost significant traffic. Plug those pages into a tool like SEMrush to identify which keywords they rank well for, which keywords lost traffic, and which competitors pulled ahead of them. Then repeat the process above for these pages.
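As a rough illustration, the prioritization step described above, ordering lost keywords by likely traffic impact, can be scripted against an export from your rank-tracking tool. This is only a sketch: the CSV column names below are hypothetical, not any vendor’s actual export schema, so adjust them to match your data.

```python
import csv
import io

# Hypothetical rank-tracking export; column names are illustrative only.
SAMPLE = """keyword,volume,rank_before,rank_after
best running shoes,12000,3,9
marathon training plan,8000,5,4
trail shoes review,3000,6,14
"""

def biggest_losses(csv_text, top_n=10):
    """Return keywords that lost rankings, ordered by estimated impact
    (monthly search volume weighted by positions lost)."""
    losses = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        drop = int(row["rank_after"]) - int(row["rank_before"])
        if drop > 0:  # positive means the page fell in the rankings
            impact = int(row["volume"]) * drop
            losses.append((row["keyword"], drop, impact))
    # Biggest estimated traffic impact first
    return sorted(losses, key=lambda r: r[2], reverse=True)[:top_n]

for keyword, drop, impact in biggest_losses(SAMPLE):
    print(f"{keyword}: dropped {drop} positions (impact score {impact})")
```

The volume-times-drop score is a crude proxy, but it gives you a triage order: start your page-by-page comparison with the keywords at the top of this list.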
If you lost a significant amount of traffic to competitors and this wasn’t the result of a single page losing rankings, there is a good chance you will see patterns in what went wrong regarding your compliance with the quality rater guidelines. This is where you will need to make some important strategic decisions and refine your processes so that you can systematically avoid repeating the mistakes that led to competitors outranking you.
If your site as a whole seemed to lose traffic, not just individual pages, this suggests that your site as a whole has taken on a negative reputation. In this case, purging pages from your site that would receive lowest or low scores from human quality raters is a good idea. This is also a good time to start investing more in the personal branding of your content creators and to start hiring experts where appropriate. Remove or noindex any pages that wouldn’t be seen as having a beneficial purpose, and institute fact-checking processes for any information-based content. Eliminate any design elements that could result in distracting ads or misplaced clicks, and revamp any clickbait titles.
The August 1st broad core algorithm update was comprehensive, impacting the way ranking factors are weighted and interact with each other. The update wasn’t designed to target any one thing, and that means any impacts you may have seen can’t be attributed to any one thing either, or at least not any one thing that applies to everybody.
By taking a close look at how you and your competitors have jockeyed in the search engines as a result of this update, and taking a close look at the changes to Google’s quality rater guidelines, you can formulate a strategy for building your audience with search engine traffic in the wake of this update. Do not let concerns about the broad nature of this update hold you back. Adapt and move forward, and you will succeed.