How to Combat 5 of the SEO World’s Most Infuriating Problems – Whiteboard Friday

Posted by randfish

These days, most of us have learned that spammy techniques aren’t the way to go, and we have a solid sense for the things we should be doing to rank higher, and ahead of our often spammier competitors. Sometimes, maddeningly, it just doesn’t work. In today’s Whiteboard Friday, Rand talks about five things that can infuriate SEOs with the best of intentions, why those problems exist, and what we can do about them.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

What SEO problems make you angry?

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about some of the most infuriating things in the SEO world, specifically five problems that I think plague a lot of folks and some of the ways that we can combat and address those.

I’m going to start with one of the things that really infuriates a lot of new folks to the field, especially folks who are building new and emerging sites and are doing SEO on them. You have all of these best practices lists. You might look at a web developer’s cheat sheet or sort of a guide to on-page and on-site SEO. You go, “Hey, I’m doing it. I’ve got my clean URLs, my good, unique content, my solid keyword targeting, schema markup, useful internal links, my XML sitemap, and my fast load speed. I’m mobile friendly, and I don’t have manipulative links.”

Great. “Where are my results? What benefit am I getting from doing all these things, because I don’t see one?” I took a site that was not particularly SEO friendly, maybe it’s a new site, one I just launched or an emerging site, one that’s sort of slowly growing but not yet a power player. I do all this right stuff, and I don’t get SEO results.

This makes a lot of people stop investing in SEO, stop believing in SEO, and stop wanting to do it. I can understand where you’re coming from. The challenge is not one of you’ve done something wrong. It’s that this stuff, all of these things that you do right, especially things that you do right on your own site or from a best practices perspective, they don’t increase rankings. They don’t. That’s not what they’re designed to do.

1) Following best practices often does nothing for new and emerging sites

This stuff, all of these best practices are designed to protect you from potential problems. They’re designed to make sure that your site is properly optimized so that you can perform to the highest degree that you are able. But this is not actually rank boosting stuff unfortunately. That is very frustrating for many folks. So following a best practices list, the idea is not, “Hey, I’m going to grow my rankings by doing this.”

On the flip side, many folks do these things on larger, more well-established sites, sites that have a lot of ranking signals already in place. They’re bigger brands, they have lots of links to them, and they have lots of users and usage engagement signals. You fix this stuff. You fix stuff that’s already broken, and boom, rankings pop up. Things are going well, and more of your pages are indexed. You’re getting more search traffic, and it feels great. This is a challenge, on our part, of understanding what this stuff does, not a challenge on the search engine’s part of not ranking us properly for having done all of these right things.

2) My competition seems to be ranking on the back of spammy or manipulative links

What’s going on? I thought Google had introduced all these algorithms to kind of shut this stuff down. This seems very frustrating. How are they pulling this off? I look at their link profile, and I see a bunch of the directories, Web 2.0 sites — I love that the spam world decided that that’s Web 2.0 sites — article sites, private blog networks, and dofollow blogs.

You look at this stuff and you go, “What is this junk? It’s terrible. Why isn’t Google penalizing them for this?” The answer, the right way to think about this and to come at this is: Are these really the reason that they rank? I think we need to ask ourselves that question.

One thing that we don’t know, that we can never know, is: Have these links been disavowed by our competitor here?

I’ve got my HulksIncredibleStore.com and their evil competitor Hulk-tastrophe.com. Hulk-tastrophe has got all of these terrible links, but maybe they disavowed those links and you would have no idea. Maybe they didn’t build those links. Perhaps those links came in from some other place. They are not responsible. Google is not treating them as responsible for it. They’re not actually what’s helping them.

If they are helping, and it’s possible they are, there are still instances where we’ve seen spam propping up sites. No doubt about it.

I think the next logical question is: Are you willing to lose your site or brand? What we almost never see anymore is sites like this, ranking on the back of these things with generally less legitimate and good links, lasting for two or three or four years. You can see it for a few months, maybe even a year, but this stuff is getting hit hard and getting hit frequently. So unless you’re willing to lose your site, pursuing their links is probably not a strategy.

Then what other signals that you might not be considering, potentially links but also non-link signals, could be helping them rank? I think a lot of us get blinded in the SEO world by link signals, and we forget to look at things like: Do they have a phenomenal user experience? Are they growing their brand? Are they doing offline kinds of things that are influencing online? Are they gaining engagement from other channels that’s then influencing their SEO? Do they have things coming in that I can’t see? If you don’t ask those questions, you can’t really learn from your competitors, and you just feel the frustration.

3) I have no visibility or understanding of why my rankings go up vs down

On my HulksIncredibleStore.com, I’ve got my infinite stretch shorts, which I don’t know why he never wears — he should really buy those — my soothing herbal tea, and my anger management books. I look at my rankings and they kind of jump up all the time, jump all over the place all the time. Actually, this is pretty normal. I think we’ve done some analyses here, and the average page one search results shift is 1.5 or 2 position changes daily. That’s sort of the MozCast dataset, if I’m recalling correctly. That means that, over the course of a week, it’s not uncommon or unnatural for you to be bouncing around four, five, or six positions up, down, and those kind of things.

I think we should understand what can be behind these things. That’s a very simple list. You made changes, Google made changes, your competitors made changes, or searcher behavior has changed in terms of volume, in terms of what they were engaging with, what they’re clicking on, what their intent behind searches are. Maybe there was just a new movie that came out and in one of the scenes Hulk talks about soothing herbal tea. So now people are searching for very different things than they were before. They want to see the scene. They’re looking for the YouTube video clip and those kind of things. Suddenly Hulk’s soothing herbal tea is no longer directing as well to your site.

So changes like these things can happen. We can’t understand all of them. I think what’s up to us to determine is the degree of analysis and action that’s actually going to provide a return on investment. Looking at these day over day or week over week and throwing up our hands and getting frustrated probably provides very little return on investment. Looking over the long term and saying, “Hey, over the last 6 months, we can observe 26 weeks of ranking change data, and we can see that in aggregate we are now ranking higher and for more keywords than we were previously, and so we’re going to continue pursuing this strategy. This is the set of keywords that we’ve fallen most on, and here are the factors that we’ve identified that are consistent across that group.” I think looking at rankings in aggregate can give us some real positive ROI. Looking at one or two, one week or the next week probably very little ROI.
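The “look at rankings in aggregate” idea above can be sketched in a few lines of Python. This is an illustration only: the keywords echo the Hulk example and the weekly positions are invented.

```python
from statistics import mean

# Hypothetical weekly rank observations per keyword (position per week).
weekly_ranks = {
    "soothing herbal tea": [9, 11, 8, 12, 7, 6],
    "anger management books": [15, 14, 16, 13, 12, 11],
    "infinite stretch shorts": [22, 25, 21, 24, 20, 19],
}

def aggregate_trend(ranks_by_keyword):
    """Compare the average rank in the first half vs. the second half of
    the period. A positive delta means improvement (lower position number)."""
    deltas = {}
    for kw, ranks in ranks_by_keyword.items():
        half = len(ranks) // 2
        deltas[kw] = mean(ranks[:half]) - mean(ranks[half:])
    return deltas

trend = aggregate_trend(weekly_ranks)
improved = [kw for kw in trend if trend[kw] > 0]
print(f"{len(improved)} of {len(trend)} keywords improved in aggregate")
```

Day-over-day comparisons of any single keyword here would look like noise; the half-over-half aggregate is what surfaces the direction of travel.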

4) I cannot influence or affect change in my organization because I cannot accurately quantify, predict, or control SEO

That’s true, especially with things like keyword not provided and certainly with the inaccuracy of data that’s provided to us through Google’s Keyword Planner inside of AdWords, for example, and the fact that no one can really control SEO, not fully anyway.

You get up in front of your team, your board, your manager, your client and you say, “Hey, if we don’t do these things, traffic will suffer,” and they go, “Well, you can’t be sure about that, and you can’t perfectly predict it. Last time you told us something, something else happened. So because the data is imperfect, we’d rather spend money on channels that we can perfectly predict, that we can very effectively quantify, and that we can very effectively control.” That is understandable. I think that businesses have a lot of risk aversion naturally, and so wanting to spend time and energy and effort in areas that you can control feels a lot safer.

Some ways to get around this are, first off, know your audience. If you know who you’re talking to in the room, you can often determine the things that will move the needle for them. For example, I find that many managers, many boards, many executives are much more influenced by competitive pressures than they are by, “We won’t do as well as we did before, or we’re losing out on this potential opportunity.” Saying that is less powerful than saying, “This competitor, who I know we care about and we track ourselves against, is capturing this traffic and here’s how they’re doing it.”

Show multiple scenarios. Many of the SEO presentations that I see and have seen and still see from consultants and from in-house folks come with kind of a single, “Hey, here’s what we predict will happen if we do this or what we predict will happen if we don’t do this.” You’ve got to show multiple scenarios, especially when you know you have error bars because you can’t accurately quantify and predict. You need to show ranges.

So instead of this, I want to see: What happens if we do it a little bit? What happens if we really overinvest? What happens if Google makes a much bigger change on this particular factor than we expect or our competitors do a much bigger investment than we expect? How might those change the numbers?
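One lightweight way to present ranges rather than a single prediction is to model each scenario as a band. The baseline and the multipliers below are invented purely for illustration:

```python
# Invented baseline and per-scenario (low, high) multiplier ranges.
baseline_monthly_visits = 40_000

scenarios = {
    "do nothing":           (0.85, 1.00),
    "invest a little":      (1.00, 1.15),
    "invest heavily":       (1.10, 1.45),
    "big algorithm change": (0.60, 1.30),
}

for name, (low, high) in scenarios.items():
    lo = baseline_monthly_visits * low
    hi = baseline_monthly_visits * high
    print(f"{name:>20}: {lo:>8,.0f} - {hi:>8,.0f} visits/month")
```

Even a toy table like this reframes the conversation from “will your forecast be right?” to “which band are we comfortable landing in?”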

Then I really do like bringing case studies, especially if you’re a consultant, but even in-house there are so many case studies in SEO on the Web today, you can almost always find someone who’s analogous or nearly analogous and show some of their data, some of the results that they’ve seen. Places like SEMrush, a tool that offers competitive intelligence around rankings, can be great for that. You can show, hey, this media site in our sector made these changes. Look at the delta of keywords they were ranking for versus ours over the next six months. Correlation is not causation, but that can be a powerful influencer showing those kinds of things.

Then last, but not least, any time you’re going to get up like this and present to a group around these topics, if you possibly can, try to talk one-on-one with the participants before the meeting actually happens. I have found it almost universally the case that if you haven’t had the discussions beforehand, asking things like, “What are your concerns? What do you think is not valid about this data? Hey, I want to run this by you and get your thoughts before we go to the meeting,” then once you get into a group setting, people can gang up and pile on. One person says, “Hey, I don’t think this is right,” and everybody in the room kind of looks around and goes, “Yeah, I also don’t think that’s right.” Then it just turns into warfare and conflict that you don’t want or need. If you address those things beforehand, then you can include the data, the presentations, and the “I don’t know the answer to this and I know this is important to so-and-so” in that presentation or in that discussion. It can be hugely helpful, and it’s often the difference between winning and losing.

5) Google is biasing to big brands. It feels hopeless to compete against them

A lot of people are feeling this hopelessness, hopelessness in SEO about competing against them. I get that pain. In fact, I’ve felt that very strongly for a long time in the SEO world, and I think the trend has only increased. This comes from all sorts of stuff. Brands now have the little dropdown next to their search result listing. There are these brand and entity connections. As Google is using answers and knowledge graph more and more, it’s feeling like those entities are having a bigger influence on where things rank and where they’re visible and where they’re pulling from.

User and usage behavior signals on the rise means that big brands, who have more of those signals, tend to perform better. Brands in the knowledge graph, brands growing links without any effort, they’re just growing links because they’re brands and people point to them naturally. Well, that is all really tough and can be very frustrating.

I think you have a few choices on the table. First off, you can choose to compete with brands where they can’t or won’t. So this is areas like we’re going after these keywords that we know these big brands are not chasing. We’re going after social channels or people on social media that we know big brands aren’t. We’re going after user generated content because they have all these corporate requirements and they won’t invest in that stuff. We’re going after content that they refuse to pursue for one reason or another. That can be very effective.

You better be building, growing, and leveraging your competitive advantage. Whenever you build an organization, you’ve got to say, “Hey, here’s who is out there. This is why we are uniquely better or a uniquely better choice for this set of customers than these other ones.” If you can leverage that, you can generally find opportunities to compete and even to win against big brands. But those things have to become obvious, they have to become well-known, and you need to essentially build some of your brand around those advantages, or they’re not going to give you help in search. That includes media, that includes content, that includes any sort of press and PR you’re doing. That includes how you do your own messaging, all of these things.

Third, you can choose to serve a market or a customer that they don’t or won’t. That can be a powerful way to go about search, because usually search is bifurcated by the customer type. There will be slightly different forms of search queries that are entered by different kinds of customers, and you can pursue one of those that isn’t pursued by the competition.

Last, but not least, I think for everyone in SEO we all realize we’re going to have to become brands ourselves. That means building the signals that are typically associated with brands — authority, recognition from an industry, recognition from a customer set, awareness of our brand even before a search has happened. I talked about this in a previous Whiteboard Friday, but I think because of these things, SEO is becoming a channel that you benefit from as you grow your brand rather than the channel you use to initially build your brand.

All right, everyone. Hope these have been helpful in combating some of these infuriating, frustrating problems and that we’ll see some great comments from you guys. I hope to participate in those as well, and we’ll catch you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Understanding and Applying Moz’s Spam Score Metric – Whiteboard Friday

Posted by randfish

This week, Moz released a new feature that we call Spam Score, which helps you analyze your link profile and weed out the spam (check out the blog post for more info). There have been some fantastic conversations about how it works and how it should (and shouldn’t) be used, and we wanted to clarify a few things to help you all make the best use of the tool.

In today’s Whiteboard Friday, Rand offers more detail on how the score is calculated, just what those spam flags are, and how we hope you’ll benefit from using it.

For reference, here’s a still of this week’s whiteboard. 

Click on the image above to open a high resolution version in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to chat a little bit about Moz’s Spam Score. Now I don’t typically like to do Whiteboard Fridays specifically about a Moz project, especially when it’s something that’s in our toolset. But I’m making an exception because there have been so many questions and so much discussion around Spam Score and because I hope the methodology, the way we calculate things, the look at correlation and causation, when it comes to web spam, can be useful for everyone in the Moz community and everyone in the SEO community in addition to being helpful for understanding this specific tool and metric.

The 17-flag scoring system

I want to start by describing the 17-flag system. As you might know, Spam Score is shown as a score from 0 to 17; each flag either fires or it doesn’t. You can see a list of those 17 flags in the blog post. Essentially, the flags correlate to the percentage of sites we found penalized or banned by Google at each count of flags (not any specific flags, just the total count). I’ll show you a little bit more in the methodology.

Basically, what this means is that for sites with 0 spam flags, none of the 17 flags fired, 99.5% of those sites were not penalized or banned, on average, in our analysis, and 0.5% were. At 3 flags, it’s 4.2% of those sites. That’s actually still a huge number, probably in the millions of domains or subdomains that Google has potentially banned. All the way down here at 11 flags, 87.3% of sites were penalized or banned. That seems pretty risky. But 12.7% is still a very big number, again probably in the hundreds of thousands of unique websites that are not banned but still have these flags.

If you’re looking at a specific subdomain and you’re saying, “Hey, gosh, this only has 3 flags or 4 flags on it, but it’s clearly been penalized by Google, Moz’s score must be wrong,” no, that actually fits right into those kinds of numbers. Same thing down here. If you see a site that is not penalized but has a number of flags, that’s potentially an indication that you’re in that percentage of sites that we found not to be penalized.

So this is an indication of percentile risk, not a “this is absolutely spam” or “this is absolutely not spam.” The only caveat is anything with, I think, more than 13 flags, we found 100% of those to have been penalized or banned. Maybe you’ll find an odd outlier or two. Probably you won’t.
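The percentile framing can be made concrete in a few lines. This sketch uses only the penalization rates quoted above (0, 3, 4, and 11 flags; other counts live in the blog post and aren’t guessed at here), plus the green/orange/red bands the tool uses:

```python
# Observed penalization rates for the flag counts cited in this post.
PENALIZED_RATE = {0: 0.5, 3: 4.2, 4: 7.5, 11: 87.3}

def risk_band(pct_penalized):
    """Bucket a penalization percentage into the tool's color bands:
    green (0-10%), orange (10-50%), red (50% and above)."""
    if pct_penalized < 10:
        return "green"
    if pct_penalized < 50:
        return "orange"
    return "red"

for flags, pct in sorted(PENALIZED_RATE.items()):
    print(f"{flags:>2} flags -> {pct:5.1f}% penalized/banned ({risk_band(pct)})")
```

The point of the lookup is exactly the one made above: the score communicates percentile risk, not a spam/not-spam verdict.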

Correlation ≠ causation

Correlation is not causation. This is something we repeat all the time here at Moz and in the SEO community. We do a lot of correlation studies around these things. I think people understand those very well in the fields of social media and in marketing in general. Certainly in psychology and electoral voting and election polling results, people understand those correlations. But for some reason in SEO we sometimes get hung up on this.

I want to be clear. Spam flags and the count of spam flags correlates with sites we saw Google penalize. That doesn’t mean that any of the flags or combinations of flags actually cause the penalty. It could be that the things that are flags are not actually connected to the reasons Google might penalize something at all. Those could be totally disconnected.

We are not trying to say with the 17 flags these are causes for concern or you need to fix these. We are merely saying this feature existed on this website when we crawled it, or it had this feature, maybe it still has this feature. Therefore, we saw this count of these features that correlates to this percentile number, so we’re giving you that number. That’s all that the score intends to say. That’s all it’s trying to show. It’s trying to be very transparent about that. It’s not trying to say you need to fix these.

A lot of flags and features that are measured are perfectly fine things to have on a website, like no social accounts or email links. That’s a totally reasonable thing to have, but it is a flag because we saw it correlate. A number in your domain name, I think it’s fine if you want to have a number in your domain name. There’s plenty of good domains that have a numerical character in them. That’s cool.

A TLD extension that happens to be used by lots of spammers, like .info or .cc or a number of other ones, is also totally reasonable. Just because lots of spammers happen to use those TLD extensions doesn’t mean you are necessarily spam because you use one.

Or low link diversity. Maybe you’re a relatively new site. Maybe your niche is very small, so the number of folks who point to your site tends to be small, and lots of the sites that organically naturally link to you editorially happen to link to you from many of their pages, and there’s not a ton of them. That will lead to low link diversity, which is a flag, but it isn’t always necessarily a bad thing. It might still nudge you to try and get some more links because that will probably help you, but that doesn’t mean you are spammy. It just means you fired a flag that correlated with a spam percentile.

The methodology we use

The methodology that we use, for those who are curious — and I do think this is a methodology that might be interesting to potentially apply in other places — is we brainstormed a large list of potential flags, a huge number. We cut that down to the ones we could actually do, because there were some that were just unfeasible for our technology team, our engineering team to do.

Then, we got a huge list, many hundreds of thousands of sites that were penalized or banned. When we say penalized or banned, what we mean is they didn’t rank on page one for either their own domain name or their own brand name, the thing between the www and the .com (or .net or .info or whatever it was). If you didn’t rank for either your full domain name (the www and the .com) or your brand name (like Moz), we said, “Hey, you’re penalized or banned.”

Now you might say, “Hey, Rand, there are probably some sites that don’t rank on page one for their own brand name or their own domain name, but aren’t actually penalized or banned.” I agree, but that’s a very small number. Statistically speaking, it probably is not going to be impactful on this data set, so we ended up not controlling for that.
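The penalized-or-banned check described above can be sketched roughly like this. The input format and the example SERPs are hypothetical; the real analysis of course ran against crawled search results at scale.

```python
def looks_penalized(domain, page_one_results_by_query):
    """Sketch of the check described above: a site counts as penalized or
    banned if its domain appears on page one for NEITHER its full domain
    name nor its brand name. Input maps query -> list of page-one result
    URLs (a hypothetical format)."""
    for query, urls in page_one_results_by_query.items():
        if any(domain in url for url in urls):
            return False  # ranked on page one for at least one query
    return True

# Hypothetical page-one results for the two queries:
serps = {
    "hulk-tastrophe.com": ["https://twitter.com/hulktastrophe",
                           "https://en.wikipedia.org/wiki/Hulk"],
    "Hulk-tastrophe": ["https://www.hulkfansite.example/"],
}
print(looks_penalized("hulk-tastrophe.com", serps))  # True: the domain is absent from both
```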

Then we found which of the features we ideated and brainstormed actually correlated with the penalties and bans, and we created the 17 flags that you see in the product today. There are lots of things that I thought were going to correlate, for example spammy-looking anchor text or poison keywords on the page, like Viagra, Cialis, Texas Hold’em online, pornography. Not all of those turned out to correlate well, and so they didn’t make it into the 17-flag list. I hope over time we’ll add more flags. That’s how things worked out.

How to apply the Spam Score metric

When you’re applying Spam Score, I think there are a few important things to think about. Just like domain authority, or page authority, or a metric from Majestic, or a metric from Google, or any other kind of metric that you might come up with, you should add it to your toolbox and to your metrics where you find it useful. I think playing around with Spam Score, experimenting with it, is a great thing. If you don’t find it useful, just ignore it. It doesn’t actually hurt your website. It’s not like this information goes to Google or anything like that. They have way more sophisticated stuff to figure out things on their end.

Do not just disavow everything with seven or more flags, or eight or more flags, or nine or more flags. We use the color coding to indicate the risk bands: green where 0% to 10% of sites with that flag count were penalized or banned, orange for 10% to 50%, and red for 50% and above. But you should use the count and line that up with the percentile. We do show that inside the tool as well.

Don’t just take everything and disavow it all. That can get you into serious trouble. Remember what happened with Cyrus Shepard, Moz’s head of content and SEO: he disavowed all the backlinks to his site. It took more than a year for him to rank for anything again. Google almost treated it like he was banned; not completely, but they seriously took away all of his link power and didn’t let him back in, even though he changed the disavow file and all that.

Be very careful submitting disavow files. You can hurt yourself tremendously. The reason we offer it in disavow format is because many of the folks in our customer testing said that’s how they wanted it so they could copy and paste, so they could easily review, so they could get it in that format and put it into their already existing disavow file. But you should not submit it blindly. You’ll see a bunch of warnings if you try and generate a disavow file. You even have to edit your disavow file before you can submit it to Google, because we want to be that careful that you don’t go and submit it without review.
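For reference, a disavow file is a plain text file with one entry per line: `#` lines are comments ignored by Google, `domain:` prefixes disavow an entire domain, and bare URLs disavow links from a single page. The domains below are made up:

```text
# Lines beginning with "#" are comments.

# Disavow every link from an entire domain:
domain:spammy-directory.example

# Disavow links from a single page:
http://article-farm.example/some-page.html
```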

You should expect Spam Score’s accuracy to follow these percentiles. If you’re doing spam investigation, you’re probably looking at spammier sites. If you’re looking at a random hundred sites, you should expect the flags to correlate with the percentages. If I look at a random hundred sites with 4 spam flags, I would expect 7.5% of those, on average, to be penalized or banned. If you are seeing sites that don’t seem to fit, they probably fall into the percentile that was not penalized (or, at the high flag counts, the percentile that was).

Hopefully, you find Spam Score useful and interesting and you add it to your toolbox. We would love to hear from you on iterations and ideas that you’ve got for what we can do in the future, where else you’d like to see it, and where you’re finding it useful/not useful. That would be great.

Hopefully, you’ve enjoyed this edition of Whiteboard Friday and will join us again next week. Thanks so much. Take care.

Video transcription by Speechpad.com

ADDITION FROM RAND: I also urge folks to check out Marie Haynes’ excellent Start-to-Finish Guide to Using Google’s Disavow Tool. We’re going to update the feature to link to that as well.



Searchmetrics Ranking Factors 2014: Why Quality Content Focuses on Topics, not Keywords

Posted by searchmetrics

Searchmetrics recently launched their yearly Ranking Factors Study, which bases its numbers on rank correlations and averages of the top 10 SEO rankings, and this year’s analysis shows that content on top-performing sites is much more holistic and less keyword-focused.

Everybody talks about how “content is king.” People are advised to “create quality content for users,” and not least since keyword (not provided), some have said “the keyword is dead.” Though these phrases may convey somewhat understandable approaches, they are often nothing more than empty clichés, leaving webmasters alone without any further information.

Making relevant content measurable

What is quality content? How can I create relevant content for my users? Should I still place the keyword in the title or use it seven times in the content?

To understand how search engines develop over time and what kind of features increase or decrease in prevalence and importance, we analyze the top 30 ranking sites for over 10,000 keywords (approximately 300,000 URLs) each year. The full study with all 100 pages of details is downloadable here.

In a nutshell: To what extent have Panda, Penguin, and not least Hummingbird influenced the algorithm and therefore the search results?

Before we get into detail, let me—as a matter of course—point out the fact that correlation does not imply causation. You can find some more comprehensive information, as well as an introduction and explanation of what a correlation is, here. That is why we took two approaches:


  • Correlation of Top 30 = differences between URLs within SERPs 1 to 3
  • Averages = appearance and/or extent of certain factors per position
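The two approaches can be sketched in plain Python: a Spearman rank correlation as the “correlation” view, and the per-position values themselves as the “averages” view. The data below is invented; the study’s actual pipeline is of course far more involved.

```python
def rank(values):
    """Assign average ranks (ties share the mean of their positions)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: share of URLs carrying some feature at positions 1..10.
positions = list(range(1, 11))
feature_share = [0.30, 0.28, 0.27, 0.25, 0.22, 0.20, 0.18, 0.15, 0.12, 0.10]

# Correlation view: one number summarizing the relationship...
print(f"Spearman(position, share) = {spearman(positions, feature_share):.2f}")
# ...averages view: inspect feature_share per position directly.
```

Note how a share that falls steadily with worse positions yields a strong correlation even when the absolute averages are small, which is exactly why the study reads both views together.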

The “Fall” of the Keyword?

Most keyword factors are declining. This is one of the major findings of our studies over the years. Let me give you an example:

The decrease of the features “Keyword in URL” and “Keyword in Domain” is one of the more obvious findings of our analyses. You can clearly see the declining correlation from 2012 to 2014. Let’s have a look at some more on-page keyword factors:

What you see here as well are very low correlations. In other words: with regard to these features, there are no huge differences between URLs ranking in positions one to thirty. But there is more than that. It is also important to have a look at the averages here:

Explanation: X-axis: Google position from 1 to 30 / Y-axis: average share of URLs having the keyword in the description/title (0.10 = 10%). Please note that we have modified the crawling of these features; it is more exact now. This is why last year’s values are likely to actually be even a bit higher than given here. However, you can see that relatively few sites actually have the keywords in their headings. In fact, only about 10% of the URLs in positions 1-30 have the keyword in h2s; 15% have it in h1s. And the trend is also negative.

By the way: What you see in positions 1-2 is what we call the “Brand Factor.” It is often a big brand ranking on these positions, and most of them differ from the rest of the SERPs when it comes to classic SEO measures.

Actually, taking only correlation into consideration can sometimes lead to a false conclusion. Let me show you what I mean with the following example: 

The correlation for the feature “% Backlinks with Keyword” has considerably increased from 2013 to 2014. But the conclusion, “Hey cool, I will immediately do link building and tell people to put the keyword I want to rank for in the anchor text!”, would be a shot in the dark. A glance at the averages tells you why:

In fact, the average share of links featuring the keyword in the anchor text has declined from 2013 to 2014 (from ~40% to ~27%). But what you see in 2014 is a falling graph, which is why the correlation with better rankings is more positive. That means: the better the position of a URL, the higher the share of backlinks that contain the keyword (on average), and this share continuously decreases with each position. In contrast to last year’s curve, this results in the calculation of a high(er) positive correlation.

Conclusion: The keyword as such seems to continue losing influence over time as Google becomes better and better at evaluating other factors. But what kind of factors are these?

The “rise” of content

Co-occurrence evaluation of keywords and relevant terms is something we’ve been focusing on this past year, as we’ve seen significant shifts in rankings based on these. I won’t go into much detail here, as that would go beyond the scope of this blog post, but what we can say is that, after conducting word co-occurrence analyses, we found that Proof and Relevant Terms played a major role in the quality and ranking of content. Proof Terms are words that are strongly related to the primary keyword and highly likely to appear alongside it. Relevant Terms are not as closely related to the main keyword, yet are still likely to appear in the same context (or as part of a subtopic). These kinds of approaches are based on semantics and context. For example, it is very likely that the word “car” is relevant in a text in which the word “bumper” occurs, while the same is not true for the term “refrigerator.”
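As a rough illustration of the underlying idea (this is a toy sketch, not Searchmetrics’ actual methodology), you can measure, for each candidate term, the share of keyword-bearing documents in which it co-occurs with the main keyword; terms with a high share behave like Proof Terms, lower shares like Relevant Terms:

```python
import re
from collections import Counter

def cooccurring_terms(documents, main_keyword, top_n=5):
    """Share of keyword-bearing documents in which each other term co-occurs."""
    counts = Counter()
    containing = 0  # number of documents that mention the main keyword
    for doc in documents:
        tokens = set(re.findall(r"[a-z']+", doc.lower()))
        if main_keyword in tokens:
            containing += 1
            counts.update(t for t in tokens if t != main_keyword)
    return [(term, count / containing) for term, count in counts.most_common(top_n)]

docs = [
    "The car has a new bumper and a better engine.",
    "This car review covers engine noise and bumper damage.",
    "My refrigerator keeps food cold.",
    "A used car with a dented bumper.",
]
for term, share in cooccurring_terms(docs, "car"):
    print(f"{term}: co-occurs in {share:.0%} of 'car' documents")
```

In this tiny corpus, “bumper” co-occurs with “car” in every keyword-bearing document, while “refrigerator” never does, mirroring the car/bumper/refrigerator example above. Real systems weight terms by semantic closeness rather than raw co-occurrence counts.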

Proof and relevant terms to define and analyze topics

Let’s have a look at an example analysis for Proof and Relevant Terms regarding the keyword “apple watch,” done with the Content Optimization section of the Searchmetrics Suite:

The number beside each bar gives the average number of appearances of the word in a text dealing with the topic; the bar length mirrors the respective weighting (x-axis, bottom) and is calculated based on the term’s semantic closeness to the main keyword. Terms marked with green check-mark bubbles are the 10 most important words, based on a combined calculation of appearance, semantic weighting, and some further parameters.

As you can see, the terms “iphone” and “time” are marked as highly important Proof Terms, and “iwatch” is very likely to appear in the context of the main keyword “apple watch” as well. Note that simply reading the list without knowing the main keyword gives you an idea of the text’s main topic.

The above chart shows an excerpt from the list of Relevant Terms. Note that both the semantic weighting and the frequency of these terms are somewhat lower than in the previous chart. In contrast to the Proof Terms list, you won’t know the exact focus of the text just by looking at these Relevant Terms, but you can probably get an idea of its rough topic.

Content features on the rise

By the way, the length of content also continues to increase. Furthermore, high-ranking content is written in a way that is easier for the average person to read, and is often enriched by other media, such as images or video. This is shown in the following charts:

Shown here is the average text length in characters per position, in both 2014 and 2013. You can see that content is much longer on each and every position among the top 30 (on average) in 2014. (Note the “Brand Factor” at the first position(s) again.)

And here is the average readability of texts per position based on the
Flesch score ranging from 0 (very difficult) to 100 (very easy):

The Flesch score is given on the y-axis. You can see a rather positive correlation: URLs in higher positions feature, on average, easier-to-read texts.
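For reference, the Flesch Reading Ease score combines average sentence length and average syllables per word: score = 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word). A minimal Python sketch follows; the vowel-group syllable counter is a deliberately crude heuristic, and production tools use pronunciation dictionaries or better rules:

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, discounting a silent final 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: 0 = very difficult, 100 = very easy."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

easy = "The cat sat. The dog ran. We had fun."
hard = ("Notwithstanding considerable heterogeneity, comprehensive multidimensional "
        "analysis demonstrates increasingly sophisticated evaluative methodology.")
print(round(flesch_reading_ease(easy), 1))
print(round(flesch_reading_ease(hard), 1))
```

Short sentences built from one-syllable words score far above the long, polysyllabic sentence, which is exactly the gradient the chart shows across ranking positions.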

But simply creating more (or easier-to-read) content does not, by itself, positively influence rankings. It’s about developing relevant and comprehensive content for users that deals with more than just one aspect of a certain topic.
The findings support the idea that search engines are moving away from focusing on single keywords toward analyzing so-called “content clusters”: individual subjects or topic areas that are based around keywords and a variety of related terms.

Stop doing “checklist SEO”

So, please stop these outdated “Checklist SEO” practices, which, from my perspective, are still overused in the market.
It’s not about optimizing keywords for search engines; it’s about optimizing the search experience for the user. Let me show you what I mean with another graphic:

On the left, we have the old SEO paradigm: 1 Keyword (maybe with some keyword variations; we all know the “An SEO walks into a bar” joke) = 1 Landing Page, i.e. Checklist SEO. That’s why, in the past, many websites had single landing pages for each specific keyword (and those pages were very likely to bear near-duplicate content). Imagine a website dealing with a specific car having a single landing page for each and every car part: “x motor,” “x seats,” “x front shield,” “x head lamps,” etc. This does not make sense in most cases. But this is how SEO used to be (and I must admit: the pages ranked!).

But, to have success in the long term, it’s the content (or better, the
topic) that matters, not the single keyword. That is why landing pages should be focused on comprehensive topics: 1 Landing Page = 1 Topic. To stick with the example: Put the descriptions of all the car parts on one page.

Decreasing diversity in SERPs since the Hummingbird update

How these developments actually influence the SERPs can be seen in the impact of Google’s Hummingbird update. The algorithm refactoring means the search engine now has a better understanding of the intent and meaning of searches, which improves its ability to deliver relevant content in search results. This means search engine optimization is increasingly a holistic discipline. It’s not enough to optimize for and rank on one relevant keyword: content must now be relevant to the topic and include several related terms. This helps a page rank for several terms and creates an improved user experience at the same time.

In a
recent analysis on Hummingbird, we found that the diversity in search results is actually decreasing. This means fewer URLs rank for semantically similar (“near-identical”) yet different keywords. Most of you know that not long ago there were often completely different search results for keyword pairs like “bang haircuts” and “hairstyles with bangs,” which overlap considerably in meaning. Now, as it turns out, SERPs for these kinds of keywords are getting more and more identical. Here are two SERPs, one for the query “rice dish” and one for the query “rice recipe,” shown both before and after Hummingbird, as examples:

SERPs pre-Hummingbird


SERPs post-Hummingbird

At a glance: The most important ranking factors

To give an insight into some of the more important ranking factors, we have developed an infographic that adds evaluations (based on averages and interpretations) in bubble form to the well-known correlation bar chart. Again, you see the prominence of content factors (given in blue). (Click/tap for a full-size image.)

The more important factors are given on the left side. Arrows (on both the bubbles and the bars) show the trend in comparison to last year’s analysis. The size of each bubble is a graphic element reflecting our interpretation of how important the respective factor is likely to be. Please note that the averages given in this chart are based on the top 10 only: we condensed the pool of URLs to the first SERP to investigate the secrets of ranking on page 1, without the data being influenced by URLs ranking from 11 to 30.

Good content generates better user signals

What you also notice is the prominent appearance of the factors given in purple. This year we have included user features such as bounce rate (on a keyword level), correlating these user signals with rankings. We were able to analyze thousands of Google Webmaster Tools (GWT) accounts in order to avoid a skewed version of the data, and having access to large data sets has also allowed us to see when major shifts occur.

You’ll notice that click-through rate is one of the biggest factors in this year’s study, with a correlation of 0.67. Average time on site within the top 10 is 101 seconds, while the bounce rate is only 37%.

Conclusion: What should I be working on?

Brands are maturing in their approach to SEO. However, the number one factor is still relevant page content. This is the same for big brands and small businesses alike. Make sure that the content is designed for the user and relevant in your appropriate niche.

If you’re interested in learning how SEO has developed and how to stay ahead of your competition, just
download the study here. Within the study you’ll find many more aspects of potential ranking factors than are covered in this article.

Get the Full Study

So, don’t build landing pages for single keywords. And don’t build landing pages for search engines, either. Focus on topics related to your website/content/niche/product and try to write the best content for these topics and subtopics. Create landing pages dealing with several interdependent aspects of the main topics, and write comprehensive texts using semantically closely related terms. This is how you can optimize the user experience as well as your rankings, for more than just the focus keyword, at the same time!

What do you think of this data? Have you seen similar types of results with the companies that you work with? Let us know your feedback in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


How To Tap Into Social Norms to Build a Strong Brand

Posted by bridget.randolph

In recent years there has been a necessary shift in the way businesses advertise themselves to consumers, thanks to the increasingly common information overload experienced by the average person.

In 1945, just after WWII, the
annual total ad spend in the United States was about $2.8 billion (around $36.8 billion after adjusting for inflation). In 2013, it was around $140 billion.

Don’t forget that this is just paid media advertising; it doesn’t include the many types of earned coverage like search, social, email, supermarket displays, direct mail, and so on. Alongside the growth in media spend is a growth in the sheer volume of products available, made possible by increasingly sophisticated technologies for sales, inventory, delivery, and so on.

What does this mean? Well, simply that the strategy of ‘just buy some ads and sell the benefits’ isn’t enough anymore: you’ll be lost in the noise. How can a brand retain customers and create loyalty in an atmosphere where everyone else has a better offer? Through tapping into the psychology of social relationships.


Imagine that you are at home for Thanksgiving, and your mother has pulled out all the stops to lovingly craft the most delicious, intricate dinner ever known to man. You and your family have enjoyed a wonderful afternoon of socializing and snacking on leftovers and watching football, and now it’s time to leave. As you hug your parents goodbye, you take out your wallet. “How much do I owe you for all the love and time you put into this wonderful afternoon?” you ask. “$100 for the food? Here, have $50 more as a thank you for the great hospitality!” How would your mother respond to such an offer? I don’t know about your mother, but my mom would be deeply offended.

New scenario: You’ve gone to a restaurant for Thanksgiving dinner. It’s the most delicious dinner you’ve ever had, the atmosphere is great with the football playing in the background, and best of all, your server is attentive, warm, and maternal. You feel right at home. At the end of the meal, you give her a hug and thank her for the delicious meal before leaving. She calls the cops and has you arrested for a dine-and-dash.

And herein lies the difference between social norms and market norms.

Social norms vs. market norms

The Thanksgiving dinner example is one which I’ve borrowed from a book by Dan Ariely,
Predictably Irrational: The Hidden Forces that Shape Our Decisions. Ariely discusses two ways in which humans interact: social norms and market norms.


Social norms
, as Ariely explains, “are wrapped up in our social nature and our need for community. They are usually warm and fuzzy. Instant paybacks are not required.” Examples would be: helping a friend move house, babysitting your grandchild, having your parents over for dinner. There is an implied reciprocity on some level but it is not instantaneous nor is it expected that the action will be repaid on a financial level. These are the sort of relationships and interactions we expect to have with friends and family.


Market norms
, on the other hand, are about the exchange of resources and in particular, money. Examples of this type of interaction would be any type of business transaction where goods or services are exchanged for money: wages, prices, rents, interest, and cost-and-benefit. These are the sort of relationships and interactions we expect to have with businesses.

I’ve drawn you a very rough illustration – it may not be the most aesthetically pleasing visual, but it gets the point across:

Market norms come into play any time money enters into the equation, sometimes counter-intuitively! Ariely gives the example of a group of lawyers who were approached by the AARP and asked whether they would provide legal services to needy retirees at a drastically discounted rate of $30/hour. The lawyers said no. From a market norms perspective, the exchange didn’t make sense. Later the same lawyers were asked whether they would consider donating their time free of charge to needy retirees. The vast majority of the lawyers said yes. The difference is that, when no money changes hands, the exchange shifts from a poor-value market exchange to an altruistic and therefore high-value social exchange. It is a strange psychological quirk that ‘once market norms enter our considerations, the social norms depart.’

Mixed signals: when social and market norms collide

In a book called
Positioning: The Battle for Your Mind by Al Ries and Jack Trout (originally published in 1981), the authors describe the 1950s as the ‘product era’ of advertising, when ‘advertising people focused their attention on product features and customer benefits.’ It was all about the unique selling proposition (USP).


In this case, the USP is mildness: “not one single case of throat irritation!” (image source)

However, as the sheer volume of products on the market increased, it became more difficult to sell a product simply by pointing out the benefits. As Ries and Trout put it, ‘Your “better mousetrap” was quickly followed by two more just like it. Both claiming to be better than the first one.’

They describe the next phase of advertising (which hit its peak in the 1960s and 70s and which we can probably all relate to if we watch Mad Men) as the ‘image era’, pioneered by David Ogilvy. In this period, successful campaigns sold the reputation, or ‘image’ of a brand and a product rather than its features. Ries and Trout quote Ogilvy as saying that ‘Every advertisement is a long-term investment in the image of a brand’. Examples include Hathaway shirts and Rolls-Royce.

Rather than the product benefits, this ad focuses on the ‘image’ of the man who smokes Viceroys: “Viceroy has a thinking man’s filter and a smoking man’s taste.” (image source)

But yet again, as more and more brands imitate the strategy of these successful campaigns, the space gets more crowded and the consumer becomes more jaded and these techniques become less effective.

According to Ries and Trout, this brought the world of advertising into the ‘positioning era’ of the 80s, which is where they positioned (hehe) themselves. As they described this, “To succeed in our overcommunicated society, a company must create a position in the prospect’s mind, a position that takes into consideration not only a company’s own strengths and weaknesses, but those of its competitors as well.”

This one’s all about positioning Winston’s in opposition to competitors: as the brand with real taste, as opposed to other brands which ‘promise taste’ but fail to deliver. (image source)

And yet, despite this evolution of advertising strategy over the course of the 20th century, all of these different approaches are ultimately based on market norms. The ‘product era’ sells you features and benefits in exchange for money; the ‘image era’ sells you on an image and a lifestyle in exchange for money, and the ‘positioning era’ sells you on why a particular company is the right one to supply your needs in exchange for money.

Social norms and loyalty


When does cheap not win?
When it comes to social norms. Social norms are about relationships, community, and loyalty. If your sister is getting married, you don’t do a cost-benefit analysis to decide whether you should go to her wedding or whether the food will be better and the travel cheaper if you go to your next-door neighbor’s BBQ instead. If anything, it’s the opposite: some people take it to such an extreme that they will go into massive debt to attend friends’ weddings and bring lavish gifts. That is certainly not a decision based on monetary considerations.

Therefore, if the average brand wants to get out of the vicious cycle of undercutting competitors in order to gain business, they need to start focusing on relationships and community building instead of ‘SUPER CHEAP BEST LOW LOW PRICES!!®’ and sneaky upsells at the point of sale. This is something my colleague
Tim Allen spoke about in a presentation called “Make Me Love Your Brand, Not Just Tolerate It”. This is what a large number of recent ‘advertising success stories’ are based on, and it’s the whole premise behind many of the more recent trends in marketing: email marketing, personalization, SMS marketing, good social media marketing, and so on.

Some of the most popular brands are the ones which are able to find the perfect balance between:

  • a friendly, warm relationship with customers and potential customers, which also often includes a fun, personal tone of voice (the ‘brand personality’) – in these interactions there is often an offering of something to the customer without an expectation of instant payback, and
  • a strong product which they offer at a good price with good ‘market’ benefits like free returns and so on.

One example of this is John Lewis, who have good customer service policies (around returns and so on) but also offer free perks to their shoppers, like the maternity room where breastfeeding mothers can relax. One of my colleagues mentioned that, as a new mother, his girlfriend always prefers to shop at John Lewis over competitor stores for that very reason. Now, if this is purely a convenience factor for her, and after her child is older she stops shopping at John Lewis in favor of a cheaper option, you could argue that this is less of a social interaction and more a market-influenced one (in some sense the perk serves as a service differentiator between JL and their competitors). However, if she continues to shop there after she no longer requires the service, because she wants to reciprocate their past support of her as a breastfeeding mother, that pushes it more firmly into the realm of the social.

Another thing John Lewis do for their fans is the annual Christmas ad, which (much like the 
Coca-Cola Santa truck in the UK) has become something which people look forward to each year because it’s a heartwarming little story more than just an ad for a home and garden store. Their 2012 ad was my favorite (and a lot of other people’s too, with over 4.5 million YouTube views).

But usually, any time a brand does ‘something nice’ for no immediate monetary benefit, it counts as a ‘social’ interaction; a classic example is
Sainsbury’s response to the little girl who wrote to them about ‘tiger bread’.

Some of my other favorite examples of social norm interactions by brands are:

The catch is, you have to be careful and keep the ‘mix’ of social and market norms consistent.

Ariely uses the example of a bank when describing the danger of bringing social norms into a business relationship:

“What happens if a customer’s check bounces? If the relationship is based on market norms, the bank charges a fee, and the customer shakes it off. Business is business. While the fee is annoying, it’s nonetheless acceptable. In a social relationship, however, a hefty late fee–rather than a friendly call from the manager or an automatic fee waiver–is not only a relationship-killer; it’s a stab in the back. Consumers will take personal offense. They’ll leave the bank angry and spend hours complaining to their friends about this awful bank.”

Richard Fergie also summed this issue up nicely in this G+ post about the recent outrage over Facebook manipulating users’ emotions; in this case, the back-stab effect was due to the fact that the implicit agreement between the users and the company about what was being ‘sold’ and therefore ‘valued’ in the exchange changed without warning.


The basic rule of thumb is that whether you choose to emphasize market norms or social norms, you can’t arbitrarily change the rules.

A side note about social media and brands: Act like a normal person

In a time when
the average American aged 18-64 spends 2-3 hours a day on social media, it is only logical that we would start to see brands and the advertising industry follow suit. But if this is your only strategy for building relationships and interacting with your customers socially, it’s not good enough. Instead, in this new ‘relationship era’ of advertising (as I’ve just pretentiously dubbed it, in true Ries-and-Trout fashion), the brands who will successfully merge market and social norms in their advertising will be the brands which are able to develop the sort of reciprocal relationships that we see with our friends and family. I wrote a post over on the Distilled blog about what social media marketers can learn from weddings. That was just one example, but the TL;DR is: as a brand, you still need to use social media the way that normal people do. Otherwise you risk becoming a Condescending Corporate Brand on Facebook. On Twitter too.

Social norms and authenticity: Why you actually do need to care

Another way in which brands tap into social norms are through their brand values. My colleague
Hannah Smith talked about this in her post on The Future of Marketing. Moz themselves are a great example of a brand with strong values: for them it’s TAGFEE. Hannah also gives the examples of Innocent Drinks (sustainability), Patagonia (environmentalism) and Nike (whose strapline ‘Find Your Greatness’ is about their brand values of everyone being able to ‘achieve their own defining moment of greatness’).

Havas Media have been doing some interesting work around trying to ‘measure’ brand sentiment with something called the
‘Meaningful Brands Index’ (MBi), based on how much a brand is perceived as making a meaningful difference in people’s lives, both for personal wellbeing and collective wellbeing. Whether or not you like their approach, they have some interesting stats: apparently only 20% of brands worldwide are seen to ‘meaningfully positively impact people’s lives’, but the brands that rank high on the MBi also tend to outperform other brands significantly (by 120%).

Now there may be a ‘correlation vs causation’ argument here, and I don’t have space to explore it. But regardless of whether you like the MBi as a metric or not, countless case studies demonstrate that it’s valuable for a brand to have strong brand values.

There are two basic rules of thumb when it comes to choosing brand values:

1) It has to be relevant to what you do. If a bingo site is running an environmentalism campaign, it might seem a bit weird, and it won’t resonate well with your audience. You also need to watch out for accidental irony. For example, McDonald’s and Coca-Cola came in for some flak when they sponsored the Olympics, due to their reputation as purveyors of unhealthy food and drink products.

Nike’s #FindYourGreatness campaign, on the other hand, is a great example of how to tie in your values with your product. Another example is one of our clients at Distilled, SimplyBusiness, a business insurance company whose brand values include being ‘the small business champion’. This has informed their content strategy, leading them to develop in-depth resources for small businesses, and it has served them very well.

2) It can’t be so closely connected to what you do that it comes across as self-serving. For example, NatWest’s NatYes campaign claims to be about enabling people to become homeowners, but ultimately (in no small part thanks to the scary legal compliance small print about foreclosure) the authenticity of the message is undermined.

The most important thing when it comes to brand values: it’s very easy for people to be cynical about brands and whether they ‘care’. Havas did a survey that found that
only 32% of people feel that brands communicate honestly about commitments and promises. So choose values that you do feel strongly about and follow through even if it means potentially alienating some people. The recent OKCupid vs Mozilla Firefox episode is an illustration of standing up for brand values (regardless of where you stand on this particular example, it got them a lot of positive publicity).

Key takeaways

So what can we take away from these basic principles of social norms and market norms? If you want to build a brand based on social relationships, here are three things to remember.

1) Your brand needs to provide something besides just a low price. In order to have a social relationship with your customers, your brand needs a personality and a tone of voice, and you need to do nice things for your customers without the expectation of immediate payback.

2) You need to keep your mix of social and market norms consistent at every stage of the customer lifecycle. Don’t pull the rug out from under your loyal fans by hitting them with surprise costs after they check out or other tricks. And don’t give new customers significantly better benefits: what you gain in the short term you will lose in the long term through the resentment they will feel at having been fooled. Instead, treat them with transparency and fairness, and be responsive to customer service issues.

3) You need brand values that make sense for your brand and that you (personally and as a company) really believe in. Don’t have values that don’t relate to your core business. Don’t have values which are obviously self-serving. And don’t be accidentally ironic like McDonald’s.

Have you seen examples of brands building customer relationships based on social norms? Did it work? Do you do this type of relationship-building for your brand?

I’d love to hear your thoughts in the comments.
