The Local Algorithm: Relevance, Proximity, and Prominence

Posted by MaryBowling

How does Google decide what goes into the local pack? It doesn’t have to be a black box — there’s logic behind the order. In this week’s Whiteboard Friday, renowned local SEO expert Mary Bowling lays out the three factors that drive Google’s local algorithm and local rankings in a simple and concise way anyone can understand.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hi, Moz fans. This is Mary Bowling from Ignitor Digital, and today I want to talk to you about the local algorithm. I’d like to make this as simple as possible for people to understand, because I think it’s a very confusing thing for a lot of SEOs who don’t do this every day.

The local algorithm has always been based on relevance, prominence, and proximity

1. Relevance

For relevance, what the algorithm is asking is, “Does this business do or sell or have the attributes that the searcher is looking for?” That’s pretty simple. So that gives us all these businesses over here that might be relevant.

2. Prominence

For prominence, the algorithm is asking, “Which businesses are the most popular and the most well regarded in their local market area?”

3. Proximity

For proximity, the question really is, “Is the business close enough to the searcher to be considered a good answer for this query?” This is what trips people up. This is what really defines the local algorithm: proximity. So I’m going to try to explain that in very simple terms here today.

Let’s say we have a searcher in a particular location, and she’s really hungry today and she wants some egg rolls. So her query is egg rolls. If she were to ask for egg rolls near me, these businesses are the ones that the algorithm would favor. They are the closest to her, and Google would rank them most likely by their prominence.

If she were to ask for something in a particular place, let’s say this is a downtown area and she asked for egg rolls downtown because she didn’t want to be away from work too long, then the algorithm is actually going to favor the businesses that sell egg rolls in the downtown area, even though that’s further away from where the searcher is.

If she were to ask for egg rolls open now, there might be a business here and a business here and a business here that are open now, and they would be the ones that the algorithm would consider. So relevance is kicking in on the query. If she were to ask for the cheapest egg rolls, that might be here and here.

If she were to ask for the best egg rolls, that might be very, very far away, or it could be a combination of all kinds of locations. So you really need to think of proximity as a fluid thing. It’s like a rubber band, and depending on…

  • the query
  • the searcher’s location
  • the relevance to the query
  • and the prominence of the business

…Google adjusts what it’s going to show in that local pack.
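That rubber-band behavior can be sketched as a toy scoring model. This is purely illustrative (Google’s actual local algorithm is not public); every business, score, and the exponential proximity decay below is invented for the example:

```python
# Purely illustrative toy model -- NOT Google's actual algorithm.
# It shows how a query can stretch or shrink the "rubber band" of proximity
# while relevance and prominence re-order the candidates that remain.
import math

businesses = [
    # name, relevance (0-1), prominence (0-1), distance from searcher (km)
    ("Downtown Dumpling Co", 0.9, 0.8, 3.0),
    ("Egg Roll Express",     0.9, 0.5, 0.4),
    ("Golden Wok",           0.7, 0.9, 1.2),
]

def local_score(relevance, prominence, distance_km, radius_km):
    # Proximity decays with distance; the radius is query-dependent:
    # "near me" implies a small radius, "best egg rolls" a large one.
    proximity = math.exp(-distance_km / radius_km)
    return relevance * prominence * proximity

def rank(radius_km):
    return sorted(businesses,
                  key=lambda b: local_score(b[1], b[2], b[3], radius_km),
                  reverse=True)

# A tight radius ("egg rolls near me") favors the closest business:
print([b[0] for b in rank(radius_km=0.5)])
# -> ['Egg Roll Express', 'Golden Wok', 'Downtown Dumpling Co']

# A loose radius ("best egg rolls") lets prominence and relevance dominate:
print([b[0] for b in rank(radius_km=50.0)])
# -> ['Downtown Dumpling Co', 'Golden Wok', 'Egg Roll Express']
```

Notice that nothing about the businesses changed between the two queries; only the radius the query implies did, which is exactly the rubber-band effect described above.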

I hope that makes it much clearer to those of you who haven’t understood the local algorithm. If you have comments or suggestions, please leave them below, and thanks for listening.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Headline Writing and Title Tag SEO in a Clickbait World – Whiteboard Friday

Posted by randfish

When writing headlines and title tags, we’re often conflicted in what we’re trying to say and (more to the point) how we’re trying to say it. Do we want it to help the page rank in SERPs? Do we want people to be intrigued enough to click through? Or are we trying to best satisfy the searcher’s intent? We’d like all three, but a headline that achieves them all is incredibly difficult to write.

In today’s Whiteboard Friday, Rand illustrates just how small the intersection of those goals is, and offers a process you can use to find the best way forward.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about writing titles and headlines, both for SEO and in this new click-bait, Facebook social world. This is kind of a challenge, because many folks are observing that a lot of the ranking signals that can help a page perform well are often preceded by, or well correlated with, social activity. That biases us toward saying, “Hey, how can I do these click-baity, link-baity sorts of social viral pieces?” At the same time, we’re challenged by the fact that those things traditionally don’t perform as well in search results, from a click-through rate and certainly from a search conversion perspective. So how do we balance out these two and make them work together for us based on our marketing goals? I want to try and help with that.

Let’s look at a search query for Viking battles in Google. These are the top two results. One is from Wikipedia. It’s a category page: Battles Involving the Vikings. That’s pretty darn straightforward. But then our second result — actually this might be a third result; I think there’s an indented second Wikipedia result — is “The 7 Most Badass Last Stands in the History of Battles.” It turns out that there happen to be a number of Viking-related battles in there, and you can see that in the meta description that Google pulls. This one’s from Cracked.com.

These are pretty representative of the two different kinds of results or of content pieces that I’m talking about. One is very, very viral, very social focused, clearly designed to sort of do well in the Facebook world. One is much more classic search focused, clearly designed to help answer the user query — here’s a list of Viking battles and their prominence and importance in history, and structure, and all those kinds of things.

Okay. Here’s another query — Viking jewelry. Going to stick with my Viking theme, because why not? We can see a website about Viking jewelry. This one’s JellDragon.com. It’s an eCommerce site. They’re selling sterling silver and bronze Viking jewelry. They’ve done very classic SEO focus. Not only do they have Viking jewelry mentioned twice; in the second instance of Viking jewelry, I think they’ve intentionally (I hope it was intentional) misspelled the word “jewelry” to hopefully catch misspellings. That’s some old-school SEO. I would actually not recommend this for any purpose.

But I thought it was interesting to highlight that, in this search result, it takes until page three before I could really find a viral, social, targeted, more link-baity, click-baity type of article, this one from io9: 1,000-Year-Old Viking Jewelry Found on Danish Farm. You know what the interesting part is? In this case, both of these are on powerful domains. They both have quite a few links to them from many external sources. They’re pretty well-SEO’d pages.

In this case, the first two pages of results are all kind of small jewelry website stores and a few results from like Etsy and Amazon, more powerful authoritative domains. But it really takes a long time before you get these, what I’d consider, very powerful, very strong attempts at ranking for Viking jewelry from more of your click-bait, social, headline, viral sites. io9 certainly, I would kind of expect them to perform higher, except that this doesn’t serve the searcher intent.

I think Google knows that when people look for Viking jewelry, they’re not looking for the history of Viking jewelry or where recent archeological finds of Viking jewelry happened. They’re looking specifically for eCommerce sites. They’re trying to transact and buy, or at least view and see what Viking jewelry looks like. So they’re looking for photo heavy, visual heavy, potentially places where they might buy stuff. Maybe it’s some people looking for artifacts as well, to view the images of those, but less of the click-bait focus kind of stuff.

As for this one, I think it’s very likely that it does indeed perform well for this search query, and lots of people do click on it as a positive result for what they’re looking for from Viking battles, because they’d like to see, “Okay, what were the coolest, most amazing Viking battles that happened in history?”

You can kind of see what’s happened here with two things. One is with Hummingbird and Google’s focus on topic modeling, and the other with searcher intent and how Google has gotten so incredibly good at pattern matching to serve user intent. This is really important from an SEO perspective to understand as well, and I like how these two examples highlight it. One is saying, “Hey, just because you have the most links, the strongest domain, the best keyword targeting, doesn’t necessarily mean you’ll rank if you’re not serving searcher intent.”

Now, when we think about doing this for ourselves, creating that click-bait versus search-optimized experience for our content, what is it about? It’s really about choosing: searcher intent, our website and marketing goals, or click-bait types of goals. I’ve visualized the intersection here with a Venn diagram. The pieces in pink here are the click-bait pieces that are going to resonate in social media — Facebook, Twitter, etc. Blue is the intent of searchers, and purple is your marketing goals, what you want to achieve when visitors get to your site, the reason you’re trying to attract this traffic in the first place.

This intersection, as you will notice, is super, uber tiny. It is minuscule. It is molecule-sized, and it’s a very, very hard intersection to hit. In fact, for the vast majority of content pieces, I’m going to say that it’s going to be close to, not always, but close to impossible to get that perfect mix of click-bait, intent of searchers, and your marketing goals. The times when it works best is really when you’re trying to educate your audience or provide them with informational value, and that’s also something that’s going to resonate in the social web and something searchers are going to be looking for. It works pretty well in B2B types of things, particularly in spaces where there are lots of influencers and amplifiers who also care about educating their followers. It doesn’t work so well when you’re trying to target Viking battles or Viking jewelry. What can I say, the historians of the Viking world simply aren’t that huge on Twitter yet. I hope they will be one day.

This is kind of the process that I would use to think about the structure of these and how to choose between them. First off, I think you need to ask, “Should I create a single piece of content to target all of these, or should I instead be thinking about individual pieces that hit one or two at a time?”

So it could be the case that maybe you’ve got an intersection of intent for searchers and your marketing goals. This happens quite a bit, and oftentimes for these folks — for the JellDragon Viking jewelry site — the intent of searchers and what they’re trying to accomplish on their site are perfectly in harmony, but definitely not with click-bait pieces that are going to resonate on the web. More challenging for io9 with this kind of a thing, because searchers just aren’t looking for that around Viking jewelry. They might instead be thinking about, “Hey, we’re trying to target the specific news item. We want anyone who looks for ‘Viking jewelry Danish farm,’ or ‘Viking jewelry found,’ or those kinds of things to be finding our site.”

Then, I would ask, “How can I best serve my own marketing goals, the marketing goals of my website, through the pages that are targeted at search or social?” Sometimes that’s going to be very direct, like it is over here with JellDragon.com trying to convert folks who are looking for Viking jewelry to buy.

Sometimes it’s going to be indirect. A Moz Whiteboard Friday, for example, is a very indirect example. We’re trying to serve the intent of searchers, and in the long term, eventually, maybe sometime in the future, some folks who watch this video might be interested in Moz’s tools or going to MozCon or signing up for an email list, or whatever it is. But our marketing goals are secondary, and they’re further in the future. You could also think about that happening at the very end of a funnel, coming in if someone searches for, say, Moz versus Searchmetrics, and maybe Searchmetrics has a great page comparing what’s better about their service versus Moz’s service and those types of things, and getting right in at the end of the funnel. So that should be a consideration as well. Same thing with social.

Then lastly, where are you going to focus your keyword targeting and content efforts? What kind of content are you going to build? How are you going to keyword target it best to achieve this, and how much will you interlink between those pages?

I’ll give you a quick example over here, but this can be expanded upon. So for my conversion page, I may try and target the same keywords or a slightly more commercial variation on the search terms I’m targeting with my more informational style content versus entertainment social style content. Then, conversion page might be separate, depending on how I’m structuring things and what the intent of searchers is. My click-bait piece may be not very keyword focused at all. I might write that headline and say, “I don’t care about the keywords at all. I don’t need to rank here. I’m trying to go viral on social media. I’m trying to achieve my click-bait goals. My goal is to drive traffic, get some links, get some topical authority around this subject matter, and later hopefully rank with this page or maybe even this page in search engines.” That’s a viable goal as well.

When you do that, what you want to do then is have a link structure that optimizes around this. So your click-bait piece, a lot of times with click-bait pieces they’re going to perform worse if you go over and try and link directly to your conversion page, because it looks like you’re trying to sell people something. That’s not what plays on Facebook, on Twitter, on social media in general. What plays is, “Hey, this is just entertainment, and I can just visit this piece and it’s fun and funny and interesting.”

What plays well in search, however, is something that lets someone accomplish their task. So it’s fine to have information and then a call to action, and that call to action can point to the conversion page. The click-bait piece’s content can do a great job of helping to send link equity, ranking signals, and maybe some visitor traffic who’s interested in truly learning more over to the informational page that you want ranking in search. This is kind of a beautiful way to think about the interaction between the three of these when you have these different levels of focus, these different searcher versus click-bait intents, and how to bring them all together.

All right everyone, hope to see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com




Long Tail CTR Study: The Forgotten Traffic Beyond Top 10 Rankings

Posted by GaryMoyle

Search behavior is fundamentally changing, as users become more savvy and increasingly familiar with search technology. Google’s results have also changed significantly over the last decade, going from a simple page of 10 blue links to a much richer layout, including videos, images, shopping ads and the innovative Knowledge Graph.

We also know there are an increasing number of touchpoints in a customer journey involving different channels and devices. Google’s Zero Moment of Truth (ZMOT) theory, which describes a revolution in the way consumers search for information online, supports this idea and predicts that the number of times natural search is involved on the path to a conversion will keep getting higher.

Understanding how people interact with Google and other search engines will always be important. Organic click curves show how many clicks you might expect from search engine results and are one way of evaluating the impact of our campaigns, forecasting performance and exploring changing search behavior.

Using search query data from Google UK for a wide range of leading brands, based on millions of impressions and clicks, we can gain insights into how CTR in natural search has evolved beyond those shown in previous studies by Catalyst, Slingshot and AOL.

Our methodology

The NetBooster study is based entirely on UK top search query data and has been segmented by day in order to give us the most accurate sample size possible. This helped us reduce anomalies in the data in order to achieve the most reliable click curve possible, allowing us to extend it well beyond the traditional top 10 results.

We developed a method to extract data day by day to greatly increase the volume of keywords and to help improve the accuracy of the average ranking position. It ensured that the average was taken across the shortest timescale possible, reducing rounding errors.
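The day-by-day aggregation described above can be sketched roughly as follows. The sample rows and the round-to-nearest-position bucketing are assumptions for illustration, not NetBooster’s exact pipeline:

```python
# Sketch: build a click curve from daily (query, avg_position, impressions,
# clicks) exports. CTR per position bucket is impression-weighted, so
# high-volume queries are not drowned out by rare ones.
from collections import defaultdict

daily_rows = [
    # (query, average_position, impressions, clicks) -- hypothetical sample
    ("viking jewelry", 1.2, 1000, 195),
    ("viking battles", 2.4,  800, 120),
    ("egg rolls",      1.0,  500, 101),
    ("local seo",      2.0,  400,  58),
]

def click_curve(rows):
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for _query, avg_pos, imps, clks in rows:
        bucket = round(avg_pos)       # nearest integer ranking position
        impressions[bucket] += imps
        clicks[bucket] += clks
    # Impression-weighted CTR per position bucket
    return {pos: clicks[pos] / impressions[pos] for pos in sorted(impressions)}

print(click_curve(daily_rows))  # CTR per position bucket
```

Taking the average over the shortest possible timescale, as the study does, simply means running this aggregation per day before combining the days.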

The NetBooster study included:

  • 65,446,308 (65 million) clicks
  • 311,278,379 (311 million) impressions
  • 1,253,130 (1.2 million) unique search queries
  • 54 unique brands
  • 11 household brands (sites with a total of 1M+ branded keyword impressions)
  • Data covers several verticals including retail, travel and financial

We also looked at organic CTR for mobile, video and image results to better understand how people are discovering content in natural search across multiple devices and channels. 

We’ll explore some of the most important elements in this article.

How does our study compare against others?

Let’s start by looking at the top 10 results. In the graph below we have normalized the results in order to compare our curve, like-for-like, with previous studies from Catalyst and Slingshot. Straight away we can see that there is higher participation beyond the top four positions when compared to other studies. We can also see much higher CTR for positions lower on the pages, which highlights how searchers are becoming more comfortable with mining search results.

A new click curve to rule them all

Our first click curve is the most useful, as it provides the click-through rates for generic non-brand search queries across positions 1 to 30. Initially, we can see a significant amount of traffic going to the top three results, with position No. 1 receiving 19% of total traffic, 15% at No. 2 and 11.45% at No. 3. The interesting thing to note, however, is that our curve shows a relatively high CTR for positions typically below the fold. Positions 6-10 all received a higher CTR than shown in previous studies. It also demonstrates that searchers are frequently exploring pages two and three.

[Image: NetBooster click curve — CTR for the top 30 organic positions]

When we look beyond the top 10, we can see that CTR is also higher than anticipated, with positions 11-20 accounting for 17% of total traffic. Positions 21-30 also show higher than anticipated results, with over 5% of total traffic coming from page three. This gives us a better understanding of the potential uplift in visits when improving rankings from positions 11-30.
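As a rough illustration of that uplift, here is a back-of-the-envelope calculation using a few of the desktop CTR benchmarks from this study; the 10,000-impression query is hypothetical:

```python
# Estimate extra monthly visits from a ranking improvement, using a handful
# of the desktop CTR benchmarks reported in this study.
desktop_ctr = {3: 0.1145, 5: 0.0721, 10: 0.0282, 15: 0.0179, 25: 0.0056}

def estimated_uplift(monthly_impressions, from_pos, to_pos, curve=desktop_ctr):
    """Extra monthly visits expected when moving between two benchmarked positions."""
    return monthly_impressions * (curve[to_pos] - curve[from_pos])

# Moving a 10,000-impression query from position 15 to position 5:
print(round(estimated_uplift(10_000, from_pos=15, to_pos=5)))  # -> 542 extra visits
```

The same arithmetic works for any pair of benchmarked positions, which is why a curve extending to position 30 is so useful for forecasting.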

This highlights that searchers are frequently going beyond the top 10 to find the exact result they want. The prominence of paid advertising, shopping ads, Knowledge Graph and the OneBox may also be pushing users below the fold more often as users attempt to find better qualified results. It may also indicate growing dissatisfaction with Google results, although this is a little harder to quantify.

Of course, it’s important we don’t just rely on one single click curve. Not all searches are equal. What about the influence of brand, mobile and long-tail searches?

Brand bias has a significant influence on CTR

One thing we particularly wanted to explore was how the size of your brand influences the curve. To explore this, we banded each of the domains in our study into small, medium and large categories based on the sum of brand query impressions across the entire duration of the study.
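A minimal sketch of such banding might look like this; the impression thresholds are assumptions for illustration (the study itself only specifies a 1M+ branded-impression tier for household brands):

```python
# Band domains into small/medium/large by total branded keyword impressions.
# Thresholds are hypothetical, not NetBooster's actual cut-offs.
def brand_band(branded_impressions, small_max=100_000, medium_max=1_000_000):
    if branded_impressions <= small_max:
        return "small"
    if branded_impressions <= medium_max:
        return "medium"
    return "large"

domains = {
    "littleshop.co.uk": 40_000,     # hypothetical totals over the study period
    "midbrand.co.uk": 350_000,
    "household.co.uk": 2_500_000,
}
print({d: brand_band(i) for d, i in domains.items()})
```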

[Image: Organic CTR by brand size — small, medium and large brands]

When we look at how brand bias is influencing CTR for non-branded search queries, we can see that better known brands get a sizable increase in CTR. More importantly, small- to medium-size brands are actually losing out to results from these better-known brands and experience a much lower CTR in comparison.

What is clear is keyphrase strategy will be important for smaller brands in order to gain traction in natural search. Identifying and targeting valuable search queries that aren’t already dominated by major brands will minimize the cannibalization of CTR and ensure higher traffic levels as a result.

How does mobile CTR reflect changing search behavior?

Mobile search has become a huge part of our daily lives, and our clients are seeing a substantial shift in natural search traffic from desktop to mobile devices. According to Google, 30% of all searches made in 2013 were on a mobile device; they also predict mobile searches will constitute over 50% of all searches in 2014.

Understanding CTR from mobile devices will be vital as the mobile search revolution continues. It was interesting to see that the click curve remained very similar to our desktop curve. Despite the lack of screen real estate, searchers are clearly motivated to scroll below the fold and beyond the top 10.

[Image: NetBooster mobile organic CTR — top 30 positions]

NetBooster CTR curves for top 30 organic positions


| Position | Desktop CTR | Mobile CTR | Large Brand | Medium Brand | Small Brand |
|---|---|---|---|---|---|
| 1 | 19.35% | 20.28% | 20.84% | 13.32% | 8.59% |
| 2 | 15.09% | 16.59% | 16.25% | 9.77% | 8.92% |
| 3 | 11.45% | 13.36% | 12.61% | 7.64% | 7.17% |
| 4 | 8.68% | 10.70% | 9.91% | 5.50% | 6.19% |
| 5 | 7.21% | 7.97% | 8.08% | 4.69% | 5.37% |
| 6 | 5.85% | 6.38% | 6.55% | 4.07% | 4.17% |
| 7 | 4.63% | 4.85% | 5.20% | 3.33% | 3.70% |
| 8 | 3.93% | 3.90% | 4.40% | 2.96% | 3.22% |
| 9 | 3.35% | 3.15% | 3.76% | 2.62% | 3.05% |
| 10 | 2.82% | 2.59% | 3.13% | 2.25% | 2.82% |
| 11 | 3.06% | 3.18% | 3.59% | 2.72% | 1.94% |
| 12 | 2.36% | 3.62% | 2.93% | 1.96% | 1.31% |
| 13 | 2.16% | 4.13% | 2.78% | 1.96% | 1.26% |
| 14 | 1.87% | 3.37% | 2.52% | 1.68% | 0.92% |
| 15 | 1.79% | 3.26% | 2.43% | 1.51% | 1.04% |
| 16 | 1.52% | 2.68% | 2.02% | 1.26% | 0.89% |
| 17 | 1.30% | 2.79% | 1.67% | 1.20% | 0.71% |
| 18 | 1.26% | 2.13% | 1.59% | 1.16% | 0.86% |
| 19 | 1.16% | 1.80% | 1.43% | 1.12% | 0.82% |
| 20 | 1.05% | 1.51% | 1.36% | 0.86% | 0.73% |
| 21 | 0.86% | 2.04% | 1.15% | 0.74% | 0.70% |
| 22 | 0.75% | 2.25% | 1.02% | 0.68% | 0.46% |
| 23 | 0.68% | 2.13% | 0.91% | 0.62% | 0.42% |
| 24 | 0.63% | 1.84% | 0.81% | 0.63% | 0.45% |
| 25 | 0.56% | 2.05% | 0.71% | 0.61% | 0.35% |
| 26 | 0.51% | 1.85% | 0.59% | 0.63% | 0.34% |
| 27 | 0.49% | 1.08% | 0.74% | 0.42% | 0.24% |
| 28 | 0.45% | 1.55% | 0.58% | 0.49% | 0.24% |
| 29 | 0.44% | 1.07% | 0.51% | 0.53% | 0.28% |
| 30 | 0.36% | 1.21% | 0.47% | 0.38% | 0.26% |

Creating your own click curve

This study will give you a set of benchmarks for both non-branded and branded click-through rates with which you can confidently compare to your own click curve data. Using this data as a comparison will let you understand whether the appearance of your content is working for or against you.

We have made things a little easier for you by creating an Excel spreadsheet: simply drop your own top search query data in and it’ll automatically create a click curve for your website.

Simply visit the NetBooster website and download our tool to start making your own click curve.
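If you’d rather script the comparison than use a spreadsheet, a minimal stand-in might look like this; the benchmark values are the desktop CTRs from this study, while the `own` curve and the 80% tolerance are invented for the example:

```python
# Flag positions where your observed CTR falls short of the study's
# desktop benchmark by more than a chosen tolerance.
benchmark = {1: 0.1935, 2: 0.1509, 3: 0.1145, 4: 0.0868, 5: 0.0721}

def underperforming(own_curve, reference=benchmark, tolerance=0.8):
    """Positions where observed CTR is below `tolerance` x benchmark CTR."""
    return [pos for pos, ctr in own_curve.items()
            if pos in reference and ctr < tolerance * reference[pos]]

# Hypothetical click curve from your own top search query data:
own = {1: 0.21, 2: 0.09, 3: 0.11, 4: 0.05}
print(underperforming(own))  # -> [2, 4]
```

Positions flagged this way are the ones where your titles, descriptions or rich snippets may be working against you.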

In conclusion

It’s been both a fascinating and rewarding study, and we can clearly see a change in search habits. Whatever the reasons for this evolving search behavior, we need to start thinking beyond the top 10, as pages two and three are likely to get more traffic in the future. We also need to maximize the traffic created from existing rankings and not just think about position.

Most importantly, we can see practical applications of this data for anyone looking to understand and maximize their content’s performance in natural search. Having the ability to quickly and easily create your own click curve and compare this against a set of benchmarks means you can now understand whether you have an optimal CTR.

What could be the next steps?

There is, however, plenty of scope for improvement. We are looking forward to continuing our investigation, tracking the evolution of search behavior. If you’d like to explore this subject further, here are a few ideas:

  • Segment search queries by intent (How does CTR vary depending on whether a search query is commercial or informational?)
  • Understand CTR by industry or niche
  • Monitor the effect of new Knowledge Graph formats on CTR across both desktop and mobile search
  • Conduct an annual analysis of search behavior (Are people’s search habits changing? Are they clicking on more results? Are they mining further into Google’s results?)

Ultimately, click curves like this will change as the underlying search behavior continues to evolve. We are now seeing a massive shift in the underlying search technology, with Google in particular investing heavily in entity-based search (i.e., the Knowledge Graph). We can expect other search engines, such as Bing, Yandex and Baidu, to follow suit and use a similar approach.

The rise of smartphone adoption and constant connectivity also means natural search is becoming more focused on mobile devices. Voice-activated search is also a game-changer, as people start to converse with search engines in a more natural way. This has huge implications for how we monitor search activity.

What is clear is no other industry is changing as rapidly as search. Understanding how we all interact with new forms of search results will be a crucial part of measuring and creating success.



Searchmetrics Ranking Factors 2014: Why Quality Content Focuses on Topics, not Keywords

Posted by searchmetrics

Searchmetrics recently launched its yearly Ranking Factors Study, which bases its numbers on rank correlations and averages across top 10 SEO rankings. This year’s analysis shows that content on top-performing sites is much more holistic and less keyword-focused.

Everybody talks about how “content is king.” People are advised to “create quality content for users,” and, not least since keyword “(not provided),” some have said “the keyword is dead.” Though these phrases may convey somewhat understandable approaches, they are often nothing more than empty clichés, leaving webmasters alone without any further information.

Making relevant content measurable

What is quality content? How can I create relevant content for my users? Should I still place the keyword in the title or use it seven times in the content?

To understand how search engines develop over time and what kind of features increase or decrease in prevalence and importance, we analyze the top 30 ranking sites for over 10,000 keywords (approximately 300,000 URLs) each year. The full study, with all 100 pages of details, is downloadable here.

In a nutshell: To what extent have Panda, Penguin, and not least Hummingbird influenced the algorithm and therefore the search results?

Before we get into detail, let me, as a matter of course, point out that correlation does not imply causation. You can find more comprehensive information, as well as an introduction to and explanation of what a correlation is, here. That is why we took two approaches:

  • Correlation of Top 30 = differences between URLs within SERPs 1 to 3
  • Averages = appearance and/or extent of certain factors per position

The “Fall” of the Keyword?

Most keyword factors are declining. This is one of the major findings of our studies over the years. Let me give you an example:

The decrease of the features “Keyword in URL” and “Keyword in Domain” is one of the more obvious findings of our analyses. You can clearly see the declining correlation from 2012 to 2014. Let’s have a look at some more on-page keyword factors:

What you see here as well are very low correlations. In other words: with regard to these features, there are no huge differences between URLs ranking in positions one to thirty. But there is more than that. It is also important to have a look at the averages here:

Explanation: x-axis: Google position from 1 to 30; y-axis: average share of URLs having the keyword in the description/title (0.10 = 10%). Please note that we have modified the crawling of these features; it is more exact now. This is why last year’s values are likely to be actually even a bit higher than given here. However, you can see that relatively few sites actually have the keywords in their headings. In fact, only about 10% of the URLs in positions 1-30 have the keyword in h2s; 15% have it in h1s. And the trend is also negative.

By the way: What you see in positions 1-2 is what we call the “Brand Factor.” It is often a big brand ranking on these positions, and most of them differ from the rest of the SERPs when it comes to classic SEO measures.

Actually, taking only correlation into consideration can sometimes lead to a false conclusion. Let me show you what I mean with the following example: 

The correlation for the feature “% Backlinks with Keyword” has considerably increased from 2013 to 2014. But the conclusion, “Hey, cool, I will immediately do link building and tell people to put the keyword I want to rank for in the anchor text!”, would be a shot in the dark. A glance at the averages tells you why:

In fact, the average share of links featuring the keyword in the anchor text has declined from 2013 to 2014 (from ~40% to ~27%). What you see in 2014, however, is a steadily falling graph: the better a URL’s position, the higher the share of its backlinks that contain the keyword (on average), and this share continuously decreases with each position. In contrast to last year’s bumpier curve, that monotonic decline is what produces the high(er) positive correlation, even though the overall level is lower.

Conclusion: The keyword as such seems to continue losing influence over time as Google becomes better and better at evaluating other factors. But what kind of factors are these?

The “rise” of content

Co-occurrence evaluations of keywords and relevant terms are something we’ve been focusing on this past year, as we’ve seen big shifts in rankings based on them. I won’t go into much detail here, as that would go beyond the scope of this blog post, but after conducting word co-occurrence analyses, we found that Proof and Relevant Terms played a major role in the quality of top-ranking content. Proof Terms are words that are strongly related to the primary keyword and highly likely to appear at the same time. Relevant Terms are not as closely related to the main keyword, yet are still likely to appear in the same context (or as part of a subtopic). These approaches are based on semantics and context. For example, it is very likely that the word “car” is relevant in a text in which the word “bumper” occurs, while the same is not true for the term “refrigerator.”
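A bare-bones sketch of this idea: terms that appear unusually often in documents about the main keyword, relative to a background corpus, get a higher weight. The documents and the frequency-ratio weighting below are invented for illustration and are not Searchmetrics’ actual method:

```python
# Sketch of co-occurrence weighting: compare a term's frequency in
# topic documents against its (smoothed) frequency in background documents.
from collections import Counter

docs_about_keyword = [      # hypothetical texts mentioning "apple watch"
    "apple watch iphone strap time notification",
    "apple watch time iphone battery",
    "iwatch apple watch iphone price",
]
background_docs = [         # hypothetical general texts, for a baseline
    "car bumper repair price",
    "refrigerator energy rating price",
    "battery charger price",
]

def term_weights(topic_docs, baseline_docs):
    topic = Counter(w for d in topic_docs for w in d.split())
    base = Counter(w for d in baseline_docs for w in d.split())
    topic_total = sum(topic.values())
    base_total = sum(base.values())
    # Ratio of in-topic frequency to add-one-smoothed baseline frequency:
    return {w: (topic[w] / topic_total) / ((base[w] + 1) / base_total)
            for w in topic}

weights = term_weights(docs_about_keyword, background_docs)
# "iphone" co-occurs with the keyword far more than the generic "price":
print(weights["iphone"] > weights["price"])  # True
```

Real systems use far richer semantic models, but the principle is the same: terms like “iphone” surface as Proof Terms because they are rare in general text yet common alongside the main keyword.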

Proof and relevant terms to define and analyze topics

Let’s have a look at an example analysis for Proof and Relevant Terms regarding the keyword “apple watch,” done with the Content Optimization section of the Searchmetrics Suite:

The number after each bar gives the average number of appearances of the word in texts dealing with the topic; the bar length reflects the respective weighting (x-axis, bottom), calculated from the term's semantic closeness to the main keyword. Terms marked with green check-mark bubbles are the 10 most important words, based on a mixed calculation of frequency and semantic weighting (plus some further parameters).

As you can see, the terms "iphone" and "time" are marked as highly important Proof Terms, and "iwatch" is very likely to appear in the context of the main keyword "apple watch" as well. Note that simply reading the list, without knowing the main keyword, already gives you an idea of the text's main topic.

The above chart shows an excerpt from the list of Relevant Terms. Note that both the semantic weighting and the frequency of these terms are somewhat lower than in the previous chart. In contrast to the Proof Terms list, you won't know the exact focus of the text just by looking at these Relevant Terms, but you can probably get an idea of its rough topic.

Content features on the rise

By the way, the length of content also continues to increase. Furthermore, high-ranking content is written in a way that is easier for the average person to read, and is often enriched by other media, such as images or video. This is shown in the following charts:

Shown here is the average text length in characters per position, for both 2014 and 2013. You can see that in 2014 content is, on average, considerably longer at every position among the top 30. (Note the "brand factor" at the first position(s) again.)

And here is the average readability of texts per position based on the
Flesch score ranging from 0 (very difficult) to 100 (very easy):

The Flesch score is given on the y-axis. There is a clear positive correlation: URLs at higher positions feature, on average, easier-to-read texts.
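For readers unfamiliar with the metric, here is a rough implementation of the common English Flesch Reading Ease formula. The syllable counter is a naive approximation (vowel-group counting), so scores are only indicative; real tools use dictionaries or better heuristics.

```python
# Approximate Flesch Reading Ease:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher scores = easier to read. Naive syllable counting below.
import re

def count_syllables(word):
    """Approximate syllables by counting vowel groups (minimum 1 per word)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

easy = flesch_reading_ease("The cat sat. The dog ran. It was fun.")
hard = flesch_reading_ease(
    "Multidimensional organizational considerations necessitate "
    "comprehensive infrastructural reconceptualization.")
# Short, simple sentences score far higher (easier) than dense jargon.
```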

But just creating more (or easier) content does not positively influence rankings. It’s about developing relevant and comprehensive content for users dealing with more than just one aspect of a certain topic.
The findings support the idea that search engines are moving away from focusing on single keywords to analyzing so-called “content clusters” – individual subjects or topic areas that are based around keywords and a variety of related terms.

Stop doing “checklist SEO”

So, please stop the outdated "checklist SEO" practices that, from my perspective, are still overused in the market. It's not about optimizing keywords for search engines; it's about optimizing the search experience for the user. Let me illustrate this with another graphic:

On the left, we have the old SEO paradigm: 1 keyword (maybe with some variations; we all know the "An SEO walks into a bar" joke) = 1 landing page – checklist SEO. That's why, in the past, many websites had a separate landing page for each specific keyword (and those pages were very likely to contain near-duplicate content). Imagine a website about a specific car with a single landing page for each and every car part: "x motor," "x seats," "x front shield," "x head lamps," etc. This does not make sense in most cases. But this is how SEO used to be done (and I must admit: the pages ranked!).

But, to have success in the long term, it’s the content (or better, the
topic) that matters, not the single keyword. That is why landing pages should be focused on comprehensive topics: 1 Landing Page = 1 Topic. To stick with the example: Put the descriptions of all the car parts on one page.

Decreasing diversity in SERPs since the Hummingbird update

How these developments actually influence the SERPs can be seen in the impact of Google's Hummingbird update. This algorithm refactoring means the search engine now has a better understanding of the intent and meaning behind searches, which improves its ability to deliver relevant content in the results. Search engine optimization is therefore increasingly a holistic discipline: it's not enough to optimize for and rank on one relevant keyword – content must be relevant to the whole topic and include several related terms. This helps a page rank for several terms and creates an improved user experience at the same time.

In a recent analysis of Hummingbird, we found that the diversity of search results is actually decreasing: fewer distinct URLs rank for semantically similar ("near-identical") yet different keywords. Most of you will remember that, not long ago, keyword pairs like "bang haircuts" and "hairstyles with bangs" – which overlap considerably in meaning – often produced completely different search results. Now, SERPs for these kinds of keywords are becoming more and more identical. Here are two SERPs, for the queries "rice dish" and "rice recipe," shown both before and after Hummingbird:
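The decreasing diversity described here can be quantified in a simple way: measure the overlap between the URL sets returned for two near-identical queries, for example with a Jaccard similarity. The URLs below are invented placeholders, not real ranking data.

```python
# Jaccard overlap between the URL sets of two SERPs: 1.0 means identical
# results, 0.0 means no shared URLs. Placeholder URLs for illustration.

def serp_overlap(serp_a, serp_b):
    """Jaccard similarity of two ranked URL lists (rank order ignored)."""
    a, b = set(serp_a), set(serp_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

rice_dish = ["example.com/rice-dishes", "cooking.example/rice",
             "blog.example/best-rice", "food.example/pilaf"]
rice_recipe = ["example.com/rice-dishes", "cooking.example/rice",
               "blog.example/best-rice", "recipes.example/rice"]

overlap = serp_overlap(rice_dish, rice_recipe)  # 3 shared of 5 distinct URLs
```

Tracked over time for many query pairs, a rising average overlap would correspond to the post-Hummingbird convergence shown in the screenshots below.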

SERPs pre-Hummingbird


SERPs post-Hummingbird

At a glance: The most important ranking factors

To give you an insight into some of the more important ranking factors, we have developed an infographic that adds evaluations (based on averages and interpretations) in bubble form to the well-known correlation bar chart. Again, you can see the prominence of content factors (shown in blue). (Click/tap for a full-size image.)

The more important factors appear on the left side. Arrows (on both the bubbles and the bars) show the trend compared to last year's analysis. The size of each bubble is a graphical element representing our interpretation of how important the respective factor is likely to be. Please note that the averages in this chart are based on the top 10 only: we restricted the pool of URLs to page 1 of the SERPs to investigate what makes a page rank there, without the data being diluted by URLs ranking from 11 to 30.

Good content generates better user signals

What you will also notice is the prominence of the factors given in purple. This year we included user features such as bounce rate (at the keyword level) and correlated these user signals with rankings. We were able to analyze thousands of Google Webmaster Tools (GWT) accounts in order to avoid a skewed view of the data, and access to such large data sets has also allowed us to see when major shifts occur.

You'll notice that click-through rate is one of the strongest factors in this year's study, with a correlation of 0.67. Average time on site within the top 10 is 101 seconds, while the average bounce rate is only 37%.

Conclusion: What should I be working on?

Brands are maturing in their approach to SEO. However, the number one factor is still relevant page content. This is the same for big brands and small businesses alike. Make sure that the content is designed for the user and relevant in your appropriate niche.

If you’re interested in learning how SEO developed and how to stay ahead of your competition, just
download the study here. Within the study you’ll find many more aspects of potential ranking factors that are covered in this article.

Get the Full Study

So, don’t build landing pages for single keywords. And don’t build landing pages for search engines, either. Focus on topics related to your website/content/niche/product and try to write the best content for these topics and subtopics. Create landing pages dealing with several, interdependent aspects of main topics and write comprehensive texts using semantically closely related terms. This is how you can optimize the user experience as well as your rankings – for more than even the focus keyword – at the same time!

What do you think of this data? Have you seen similar types of results with the companies that you work with? Let us know your feedback in the comments below.

