Google is also extending ‘Why this ad?’ notices to all services that display Google Ads, including YouTube, Google Play, Gmail, Maps and partner websites and apps.
Please visit Search Engine Land for the full article.
Posted by Cyrus-Shepard
Recently, Moz announced the results of our biennial Ranking Factors study. Today, we’d like to explore one of the most vital elements of the study: the Ranking Factors survey.
2015 Ranking Factors Expert Survey
Every two years, Moz surveys the brightest minds in SEO and search marketing with a comprehensive set of questions meant to gauge the current workings of Google’s search algorithm. This year’s panel of experts possesses a truly unique set of knowledge and perspectives. We’re thankful on behalf of the entire community for their contribution.
In addition to asking the participants about what does and doesn’t work in Google’s ranking algorithm today, one of the most illuminating groups of questions asks the panel to predict the future of search – how the features of Google’s algorithm are expected to change over the next 12 months.
Amazingly, almost all of the factors expected to increase in influence revolve around user experience.
The experts predicted that more traditional ranking signals, such as those around links and URL structures, would largely remain the same, while the more manipulative aspects of SEO, like paid links and anchor text (which is subject to manipulation), would largely decrease in influence.
The survey also asks respondents to rate the importance of various factors within Google’s current ranking algorithm (on a scale of 1-10). Understanding these areas of importance helps to inform webmasters and marketers where to invest time and energy in working to improve the search presence of their websites.
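To make the scale concrete, here is a minimal sketch of how per-factor influence scores like the ones below could be produced by averaging 1-10 ratings. The factor names and ratings are invented for illustration; this is not Moz’s actual aggregation pipeline, which isn’t detailed here.

```python
# Illustrative sketch only: compute a mean influence score per factor
# from hypothetical 1-10 expert ratings. The factors and numbers are
# made up for the example; they are not Moz's raw survey data.
from statistics import mean

ratings = {
    "Keyword present in title element": [9, 8, 8, 9, 7],
    "Keyword present in bold/italic/li/a elements": [4, 5, 3, 4, 5],
}

for factor, scores in ratings.items():
    print(f"{factor}: mean influence {mean(scores):.2f}")
```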
These features describe use of the keyword term/phrase in particular parts of the HTML code on the page (title element, H1s, alt attributes, etc).
Highest influence: Keyword present in title element, 8.34
Lowest influence: Keyword present in specific HTML elements (bold/italic/li/a/etc), 4.16
Titles are still very powerful. Overall, it’s about focus and matching query syntax. If your post is about airplane propellers but you go on a three paragraph rant about gorillas, you’re going to have a problem ranking for airplane propellers.

Keyword usage is vital to making the cut, but we don’t always see it correlate with ranking, because we’re only looking at what already made the cut. The page has to be relevant to appear for a query, IMO, but when it comes to how high the page ranks once it’s relevant, I think keywords have less impact than they once did. So, it’s a necessary but not sufficient condition to ranking.

In my experience, most problems with organic visibility are related to on-page factors. When I look for an opportunity, I try to check for two strong things: presence of the keyword in the title and in the main content. Having both can speed up your visibility, especially on long-tail queries.
These features cover how keywords are used in the root or subdomain name, and how much impact this might have on search engine rankings.
Highest influence: Keyword is the exact match root domain name, 5.83
Lowest influence: Keyword is the domain extension, 2.55
The only domain/keyword factor I’ve seen really influence rankings is an exact match. Subdomains, partial match, and others appear to have little or no effect.

There’s no direct influence, but an exact match root domain name can definitely lead to a higher CTR within the SERPs and therefore a better ranking in the long term.

It’s very easy to link keyword-rich domains with their success in Google’s results for the given keyword. I’m always mindful about other signals that align with the domain name which may have contributed to its success. These include inbound links, mentions, and local citations.
These features describe link metrics for the individual ranking page (such as number of links, PageRank, etc).
Highest influence: Raw quantity of links from high-authority sites, 7.78
Lowest influence: Sentiment of the external links pointing to the page, 3.85
High-quality links still rule rankings. The way a brand can earn links has become more important over the years, whereas link schemes can hurt a site more than ever before. There is a lot of FUD slinging in this respect!

Similar to my thoughts on content, I suspect link-based metrics are going to be used increasingly with a focus on verisimilitude (whether content is actually true or not) and relationships between nodes in Knowledge Graph. Google’s recent issues with things, such as the snippet results for “evolution,” highlight the importance of them only pulling things that are factually correct for featured parts of a SERP. Thus, just counting traditional link metrics won’t cut it anymore.

While anchor text is still a powerful ranking factor, using targeted anchor text carries a significant amount of risk and can easily wipe out your previous success.
These features describe elements that indicate qualities of branding and brand metrics.
Highest influence: Search volume for the brand/domain, 6.54
Lowest influence: Popularity of business’s official social media profiles, 3.99
This is clearly on deck to change very soon with the reintegration of Twitter into Google’s Real-Time Results. It will be interesting to see how this affects the “Breaking News” box and trending topics. Social influencers, quality and quantity of followers, RTs, and favorites will all be a factor. And what’s this?! Hashtags will be important again?! Have mercy!

Google has to give the people what they want, and if most of the time they are searching for a brand, Google is going to give them that brand. Google doesn’t have a brand bias, we do.

It’s already noticeable; brands are more prominently displayed in search results for both informational and commercial queries. I’m expecting Google will be paying more attention to brand-related metrics from now on (and certainly more initiatives to encourage site owners to optimize for better entity detection).
These features relate to third-party metrics from social media sources (Facebook, Twitter, Google+, etc) for the ranking page.
Highest influence: Engagement with content/URL on social networks, 3.87
Lowest influence: Upvotes for the page on social sites, 2.7
Social ranking factors are important in a revamped Query Deserves Freshness algorithm. Essentially, if your content gets a lot of natural tweets, shares, and likes, it will rank prominently for a short period of time, until larger and more authoritative sites catch up.

Social popularity has several factors to consider: (1) Years ago, Google and Bing said they take into account the authority of a social profile sharing a link and the popularity of the link being shared (retweets/reshares), and there was more complexity to social signals that was never revealed even back then. (2) My experience has been that social links and shares have more power for newsy/fresh-type content. For example, a lot of social shares for a dentist’s office website wouldn’t be nearly as powerful (or relevant to consider) as a lot of social shares for an article on a site with a constant flow of fresh content.

Honestly, I do not think that the so-called “social signals” have any direct influence on the Google Algorithm (that does not mean that a correlation doesn’t exist, though). My only doubt is related to Twitter, because of the renewed contract between Google and Twitter itself. That said, as of now I do not consider Twitter to offer any ranking signals, except for very specific niches related to news and “news-able” content, where QDF plays a fundamental role.
These elements describe non-keyword-usage, non-link-metrics features of individual pages (such as length of the page, load speed, etc).
Highest influence: Uniqueness of the content on the page, 7.85
Lowest influence: Page contains Open Graph data and/or Twitter cards, 3.64
By branching mobile search off of Google’s core ranking algorithm, having a “mobile-friendly” website is probably now less important for desktop search rankings. Our clients are seeing an ever-increasing percentage of organic search traffic coming from mobile devices, though (particularly in retail), so this is certainly not an excuse to ignore responsive design – the opposite, in fact. Click-through rate from the SERPs has been an important ranking signal for a long time and continues to be, flagging irrelevant or poor-quality search listings.

I believe many of these will be measured within the ecosystem, rather than absolutely. For example, the effect of bounce rate (or rather, bounce speed) on a site will be relative to the bounce speeds on other pages in similar positions for similar terms.

I want to answer these a certain way because, while I have been told by Google what matters to them, what I see in the SERPs does not back up what Google claims they want. There are a lot of sites out there with horrible UX that rank in the top three. While I believe it’s really important for conversion and to bring customers back, I don’t feel as though Google is all that concerned, based on the sites that rank highly. Additionally, Google practically screams “unique content,” yet sites that more or less steal and republish content from other sites are still ranking highly. What I think should matter to Google doesn’t seem to matter to them, based on the results they give me.
These features describe link metrics about the domain hosting the page.
Highest influence: Quantity of unique linking domains to the domain, 7.45
Lowest influence: Sentiment of the external links pointing to the site, 3.91
Quantity and quality of unique linking domains at the domain level is still among the most significant factors in determining how a domain will perform as a whole in the organic search results, and is among the best SEO “spot checks” for determining if a site will be successful relative to other competitor sites with similar content and selling points.

Throughout this survey, when I say “no direct influence,” this is interchangeable with “no direct positive influence.” For example, I’ve marked exact match domains with low numbers, while their actual influence may be higher – though negatively.

Topical relevancy has, in my opinion, gained much ground as a ranking factor. Although I find it most at play at the page level, I am seeing significant shifts in overall domain relevancy, driven by long-tail growth or by topically relevant domains linking to sites. One way I judge such movements is the growth of the long tail relevant to the subject or ranking, when neither anchor text (exact match or synonyms) nor the exact phrase is used in a site’s content, yet it still ranks very highly for long-tail and mid-tail synonyms.
These features relate to the entire root domain, but don’t directly describe link- or keyword-based elements. Instead, they relate to things like the length of the domain name in characters.
Highest influence: Uniqueness of content across the whole site, 7.52
Lowest influence: Length of time until domain name expires, 2.45
Character length of domain name is another correlative yet not causative factor, in my opinion. They don’t need to rule these out – it just so happens that longer domain names get clicked on, so they get ruled out quickly.

A few points: Google’s document inception date patents describe how Google might handle freshness and maturity of content for a query. The “trust signal” pages sound like a site quality metric that Google might use to score a page on the basis of site quality. Some white papers from Microsoft on web spam signals identified multiple hyphens in subdomains as evidence of web spam. The length of time until the domain expires was cited as a potential signal in Google’s patent on information retrieval through historic data, and was refuted by Matt Cutts after domain sellers started trying to use that information to sell domain extensions to “help the SEO” of a site.

I think that page speed only becomes a factor when it is significantly slow. I think that having error pages on the site doesn’t matter, unless there are so many that it greatly impacts Google’s ability to crawl.
To bring it back to the beginning, we asked the experts if they had any comments or alternative signals they think will become more or less important over the next 12 months.
While I expect that static factors, such as incoming links and anchor text, will remain influential, I think the power of these will be mediated by the presence or absence of engagement factors.

The app world and webpage world are getting lumped together. If you have the more popular app relative to your competitors, expect Google to notice.

Mobile will continue to increase, with directly-related factors increasing as well. Structured data will increase, along with more data partners and user segmentation/personalization of SERPs to match query intent, localization, and device-specific need states.

User location may have more influence in mobile SERPs as (a) more connected devices like cars and watches allow voice search, and (b) sites evolve accordingly to make such signals more accurate.

I really think that over the next 12-18 months we are going to see a larger impact of structured data in the SERPs. In fact, we are already seeing this. Google has teams that focus on artificial intelligence and machine learning. They are studying “relationships of interest” and, at the heart of what they are doing, are still looking to provide the most relevant result in the quickest fashion. Things like schema that help “educate” the search engines as to a given topic or entity are only going to become more important as a result.
For more data, check out the complete Ranking Factors Survey results.
2015 Ranking Factors Expert Survey
Finally, we leave you with this infographic created by Kevin Engle which shows the relative weighting of broad areas of Google’s algorithm, according to the experts.
What’s your opinion on the future of search and SEO? Let us know in the comments below.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
Posted by Cyrus-Shepard
We’re excited to announce the results of Moz’s biennial Search Engine Ranking Correlation Study and Expert Survey, a.k.a. Ranking Factors.
Moz’s Ranking Factors study helps identify which attributes of pages and sites have the strongest association with ranking highly in Google. The study consists of two parts: a survey of professional SEOs and a large correlation study.
This year, with the help of Moz’s data scientist Dr. Matt Peters, new data partners, and over 150 search marketing professionals, we were able to study more data points than in any year past. Altogether, we measured over 170 correlations and collected over 15,000 data points from our panel of SEO experts.
Ready to dig in?
We want to especially thank our data partners. SimilarWeb, Ahrefs, and DomainTools each gave us unparalleled access, and their data was essential to helping make this study a success. It’s amazing and wonderful when different companies—even competitors—can come together for the advancement of knowledge.
You can see all of our findings within the study now. In the coming days and weeks we’ll dive into deeper analysis as to what we can learn from these correlations.
Moz’s Ranking Correlation Study measures which attributes of pages and websites are associated with higher rankings in Google’s search results. This means we look at measurable characteristics of each ranking page and its site.
To be clear, the study doesn’t tell us if Google actually uses these attributes in its core ranking algorithm. Instead, it shows which features of pages and sites are most associated with higher rankings. It’s a fine, but important, distinction.
While correlation studies can’t prove or disprove which attributes Google considers in its algorithm, they do provide valuable hints. In fact, many would argue that correlation studies are even more important than causation studies when working with today’s increasingly complex algorithms.
For the study, Dr. Peters examined the top 50 Google results of 16,521 search queries, resulting in over 700,000 unique URLs. You can read about the full methodology here.
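To illustrate the kind of calculation a rank-correlation study rests on, here is a rough sketch using Spearman’s rho averaged across queries. The SERP data is fabricated, and the exact statistic and pipeline Moz used may differ; treat this as a conceptual example only.

```python
# Sketch: Spearman correlation between a page-level feature and ranking
# position, averaged over queries. The data is invented. Because position
# 1 is the best rank, a negative rho means the feature tends to be larger
# for higher-ranking results.
import numpy as np
from scipy.stats import spearmanr

serps = {
    # query: (ranking positions, feature value per result, e.g. linking root domains)
    "oil filter": ([1, 2, 3, 4, 5], [120, 95, 80, 40, 10]),
    "air filter": ([1, 2, 3, 4, 5], [200, 150, 30, 60, 20]),
}

rhos = []
for positions, feature in serps.values():
    rho, _ = spearmanr(positions, feature)
    rhos.append(rho)

print("Mean Spearman rho across queries:", round(float(np.mean(rhos)), 3))
```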
Here’s a sample of our findings:
The features in the chart below describe link metrics to the individual ranking page (such as number of links, PageRank, etc.) and their correlation to higher rankings in Google.
Despite rumors to the contrary, links continue to show one of the strongest associations with higher rankings out of all the features we studied. While this doesn’t prove how Google uses links in its algorithm, this information combined with statements from Google and the observations of many professional marketers leads us to strongly believe that links remain hugely important for SEO.
Link-based features were only one of the features categories we examined. The complete correlation study includes 12 different categories of data.
While correlation data can provide valuable insight into the workings of Google’s algorithm, we often learn much more by gathering the collective wisdom of search marketing experts working at the top of their game.
For this reason, every two years we conduct the Ranking Factors Expert Survey.
The survey itself is famously grueling: over 100 questions covering every aspect of Google’s ranking algorithm. This year, we sent the invitation-only survey to 150 industry professionals.
Stay tuned for a deeper dive into the Expert Survey later this week. We’re honored to have the participation of so many knowledgeable professionals.
In the meantime, you can freely view all the findings and results right now:
Ranking Factors wouldn’t be possible without the contribution of dozens of very talented people, but we’d especially like to thank Dr. Matt Peters, Kevin Engle, Rand Fishkin, Casey Coates, Trevor Klein, and Kelly Cooper for their efforts, along with our data partners and all the survey participants.
What ranking factors or correlations stand out to you? Leave your thoughts in the comments below.
Posted by EricEnge
Today’s post focuses on a vision for your online presence. This vision outlines what it takes to be the best, both from an overall reputation and visibility standpoint, as well as an SEO point of view. The reason these are tied together is simple: Your overall online reputation and visibility is a huge factor in your SEO. Period. Let’s start by talking about why.
For purposes of this post, let’s define three cornerstone ranking signals that most everyone agrees on:
Links remain a huge factor in overall ranking. Both Cyrus Shepard and Marcus Tober re-confirmed this on the Periodic Table of SEO Ranking Factors session at the SMX Advanced conference in Seattle this past June.
On-page content remains a huge factor too, but with some subtleties now thrown in. I wrote about some of this in earlier posts I did on Moz about Term Frequency and Inverse Document Frequency. Suffice it to say that on-page content is about a lot more than pure words on the page, but also includes the supporting pages that you link to.
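For anyone who hasn’t read those posts, here is a bare-bones sketch of the TF-IDF calculation itself, using the standard textbook formulation; it is meant only to show the idea of weighting terms by how distinctive they are, not how Google scores pages.

```python
# Bare-bones TF-IDF: term frequency within a document, discounted by how
# many documents in the corpus contain the term. Toy corpus for illustration.
import math

corpus = [
    "how to change an oil filter",
    "best oil filter for winter driving",
    "guide to choosing windshield wipers",
]

def tf_idf(term, doc, docs):
    words = doc.split()
    tf = words.count(term) / len(words)
    df = sum(1 for d in docs if term in d.split())
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

for doc in corpus:
    print(f"{doc!r}: tf-idf('filter') = {tf_idf('filter', doc, corpus):.3f}")
```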
This is not one of the traditional SEO signals from the early days of SEO, but most advanced SEO pros that I know consider it a real factor these days. One of the most popular concepts people talk about is called pogo-sticking, which is illustrated here:
You can learn more about the pogosticking concept by visiting this Whiteboard Friday video by a rookie SEO with a last name of Fishkin.
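Search engines don’t disclose how (or even whether) they measure pogo-sticking, but you can approximate the idea on your own analytics data. A rough sketch follows, with the field names and the dwell-time threshold purely assumptions for illustration.

```python
# Sketch: flag likely "pogo-stick" visits, i.e. search-referred sessions
# where the visitor leaves again within a few seconds. Field names and
# the 10-second threshold are assumptions, not a known Google metric.
POGO_THRESHOLD_SECONDS = 10

sessions = [
    {"landing_page": "/oil-filters", "referrer": "google", "dwell_seconds": 4},
    {"landing_page": "/oil-filters", "referrer": "google", "dwell_seconds": 95},
    {"landing_page": "/wipers",      "referrer": "direct", "dwell_seconds": 3},
]

search_visits = [s for s in sessions if s["referrer"] == "google"]
pogo = [s for s in search_visits if s["dwell_seconds"] < POGO_THRESHOLD_SECONDS]

rate = len(pogo) / len(search_visits)
print(f"Pogo-stick rate on search visits: {rate:.0%}")
```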
OK, so these are the more obvious signals, but now let’s look more broadly at the overall web ecosystem and talk about other types of ranking signals. Be warned that some of these signals may be indirect, but that just doesn’t matter. In fact, my first example below is an indirect factor which I will use to demonstrate why whether a signal is direct or indirect is not an issue at all.
Let me illustrate with an example. Say you spend $1 billion building a huge brand around a product that is massively useful to people. Included in this is a sizable $100 million campaign to support a highly popular charitable foundation, and your employees regularly donate time to help out in schools across your country. In short, the great majority of people love your brand.
Do you think this will impact the way people link to your site? Of course it does. Do you think it will impact how likely people are to be satisfied with the quality of the pages on your site? Consider this A/B test scenario of two pages from different “brands” (for the one on the left, imagine the image of Coca-Cola or Pepsi Cola, whichever one you prefer).

Do you think that the huge brand will get the benefit of the doubt on their page that the no-name brand does not, even though the pages are identical? Of course they will. Now let’s look at some simpler scenarios that don’t involve a $1 billion investment.
Imagine that a user arrives on your auto parts site after searching on the phrase “oil filter” at Google or Bing. Chances are pretty good that they want an oil filter, but here are some other items they may also want:
This is just the basics, right? But you would be surprised by how many sites don’t include links or information on directly related products on their money pages. Providing this type of smart site and page design can have a major impact on user engagement with the money pages of your site.
In the prior item we covered the user’s most directly related needs, but they may have secondary needs as well. Someone who is changing a car’s oil is either a mechanic or a do-it-yourself-er. What else might they need? How about other parts, such as windshield wipers or air filters?
These are other fairly easy maintenance steps for someone who is working on their car to complete. Presence of these supporting products could be one way to improve user engagement with your pages.
Publishing world-class content on your site is a great way to produce links to your site. Of course, if you do this on a blog on your site, it may not provide links directly to your money pages, but it will nonetheless lift overall site authority.
In addition, if someone has consumed one or more pieces of great content on your site, the chances of their engaging in a more positive manner with your site overall go way up. Why? Because you’ve earned their trust and admiration.
Are there major media sites that cover your market space? Do they consider you to be an expert? Will they quote you in articles they write? Can you provide them with guest posts, or will they let you be a guest columnist? Will they collaborate on larger content projects with you?
All of these activities put you in front of their audiences, and if those audiences overlap with yours, this provides a great way to build your overall reputation and visibility. This content that you publish, or collaborate on, that shows up on 3rd-party sites will get you mentions and links. In addition, once again, it will provide you with a boost to your branding. People are now more likely to consume your other content more readily, including on your money pages.
The concept here shares much in common with the prior point. Social media provides opportunities to get in front of relevant audiences. Every person that’s an avid follower of yours on a social media site is more likely to show very different behavior characteristics interacting with your site than someone that does not know you well at all.
Note that links from social media sites are nofollowed, but active social media behavior can lead to people implementing “real world” links to your site that are followed, from their blogs and media web sites.
Think your offline activity doesn’t matter online? Think again. Relationships are still most easily built face-to-face. People you meet and spend time with can well become your most loyal fans online. This is particularly important when it comes to building relationships with influential people.
One great way to do that is to go to public events related to your industry, such as conferences. Better still, obtain speaking engagements at those conferences. This can even impact people who weren’t there to hear you speak, as they become aware that you have been asked to do that. This concept can also work for a small local business. Get out in your community and engage with people at local events.
The payoff here is similar to the payoff for other items: more engaged, highly loyal fans who engage with you across the web, sending more and more positive signals, both to other people and to search engines, that you are the real deal.
Whatever your business may be, you need to take care of your customers as best you can. No one can make everyone happy, that’s unrealistic, but striving for much better than average is a really sound idea. Having satisfied customers saying nice things about you online is a big impact item in the grand scheme of things.
While this post is not about the value of influencer relationships, I include this in the list for illustration purposes, for two reasons:
The web provides a level of integrated, real-time connectivity of a kind that the world has never seen before. This is only going to increase. Do something bad to a customer in Hong Kong? Consumers in Boston will know within 5 minutes. That’s where it’s all headed.
Google and Bing (and any future search engine that may emerge) want to measure these types of signals because they tell them how to improve the quality of the experience on their platforms. There are many ways they can perform these measurements.
One simple concept is covered by Rand in this recent Whiteboard Friday video. The discussion is about a recent patent granted to Google that shows how the company can use search queries to detect who is an authority on a topic.
The example he provides is about people who search on “email finding tool”. If Google also finds that a number of people search on “voila norbert email tool”, Google may use that as an authority signal.
Think about that for a moment. How are you going to get people to search on your brand more while putting it together with a non-branded query like that? (OK, please leave Mechanical Turk and other services like that out of the discussion).
Now you can start to see the bigger picture. Measurements like pogosticking and this recent search behavior related patent are just the tip of the iceberg. Undoubtedly, there are many other ways that search engines can measure what people like and engage with the most.
This is all part of SEO now: UX, product breadth, problem solving, engaging in social media, getting face to face, creating great content that you publish in front of other people’s audiences, and more.
For the small local business, you can still win at this game, as your focus just needs to be on doing it better than your competitors. The big brands will never be hyper-local like you are, so don’t think you can’t play the game, because you can.
Whoever you are, get ready, because this new integrated ecosystem is already upon us, and you need to be a part of it.
Posted by MarkTraphagen
Editor’s note: Today we’re featuring back-to-back episodes of Whiteboard Friday from our friends at Stone Temple Consulting. Make sure to also check out the second episode, “UX, Content Quality, and SEO” from Eric Enge.
Like many other areas of marketing, SEO incorporates elements of science. It becomes problematic for everyone, though, when theories that haven’t been the subject of real scientific rigor are passed off as proven facts. In today’s Whiteboard Friday, Stone Temple Consulting’s Mark Traphagen is here to teach us a thing or two about the scientific method and how it can be applied to our day-to-day work.
For reference, here’s a still of this week’s whiteboard.
Click on it to open a high resolution image in a new tab!
Howdy, Mozzers. Mark Traphagen from Stone Temple Consulting here today to share with you how to become a better SEO scientist. We know that SEO is a science in a lot of ways, and everything I’m going to say today applies not only to SEO, but testing things like your AdWords, how does that work, quality scores. There’s a lot of different applications you can make in marketing, but we’ll focus on the SEO world because that’s where we do a lot of testing. What I want to talk to you about today is how that really is a science and how we need to bring better science in it to get better results.
The reason is this: in astrophysics, there’s something they’re talking about these days called dark matter, and dark matter is something that we know is there. It’s pretty much accepted that it’s there. We can’t see it. We can’t measure it directly. We don’t even know what it is. We can’t even imagine what it is yet, and yet we know it’s there because we see its effect on things like gravity and mass. Its effects are everywhere. And that’s a lot like search engines, isn’t it? It’s like Google or Bing. We see the effects, but we don’t see inside the machine. We don’t know exactly what’s happening in there.
An artist’s depiction of how search engines work.
So what do we do? We do experiments. We do tests to try to figure that out, to see the effects, and from the effects outside we can make better guesses about what’s going on inside and do a better job of giving those search engines what they need to connect us with our customers and prospects. That’s the goal in the end.
Now, the problem is there’s a lot of testing going on out there, a lot of experiments that maybe aren’t being run very well. They’re not being run according to scientific principles that have been proven over centuries to get the best possible results.
So today I want to give you just very quickly 10 basic things that a real scientist goes through on their way to trying to give you better data. Let’s see what we can do with those in our SEO testing in the future.
So let’s start with number one. You’ve got to start with a hypothesis. Your hypothesis is the question that you want to solve. You always start with that, a good question in mind, and it’s got to be relatively narrow. You’ve got to narrow it down to something very specific. Something like how does time on page affect rankings, that’s pretty narrow. That’s very specific. That’s a good question. Might be able to test that. But something like how do social signals affect rankings, that’s too broad. You’ve got to narrow it down. Get it down to one simple question.
Then you choose a variable that you’re going to test. Out of all the things that you could do, that you could play with or you could tweak, you should choose one thing or at least a very few things that you’re going to tweak and say, “When we tweak this, when we change this, when we do this one thing, what happens? Does it change anything out there in the world that we are looking at?” That’s the variable.
The next step is to set a sample group. Where are you going to gather the data from? Where is it going to come from? That’s the world that you’re working in here. Out of all the possible data that’s out there, where are you going to gather your data and how much? That’s the small circle within the big circle. Now even though it’s smaller, you’re probably not going to get all the data in the world. You’re not going to scrape every search ranking that’s possible or visit every URL.
You’ve got to ask yourself, “Is it large enough that we’re at least going to get some validity?” If I wanted to find out what the typical person in Seattle is like and I just walked through one part of the Moz offices here, I’d get some kind of view. But is that a typical, average person from Seattle? I’ve been around here at Moz. Probably not. But at least the sample would be large enough.

Also, it should be randomized as much as possible. Again, going back to that example, if I just stayed here within the walls of Moz and did research about Mozzers, I’d learn a lot about what Mozzers do, what Mozzers think, how they behave. But that may or may not be applicable to the larger world outside, so you randomize.
We want to control. So we’ve got our sample group. If possible, it’s always good to have another sample group that you don’t do anything to. You do not manipulate the variable in that group. Now, why do you have that? You have that so that you can say, to some extent, if we saw a change when we manipulated our variable and we did not see it in the control group, the same thing didn’t happen, more likely it’s not just part of the natural things that happen in the world or in the search engine.
If possible, even better, you want to make that what scientists call double blind, which means that even you, the experimenter, don’t know who that control group is out of all the SERPs that you’re looking at or whatever it is. As careful as you might be and as honest as you might be, you can end up manipulating the results if you know who is who within the test group. It’s not going to apply to every test that we do in SEO, but it’s a good thing to have in mind as you work on that.
Next, very quickly, duration. How long does it have to be? Is there sufficient time? If you’re just testing like if I share a URL to Google +, how quickly does it get indexed in the SERPs, you might only need a day on that because typically it takes less than a day in that case. But if you’re looking at seasonality effects, you might need to go over several years to get a good test on that.
Let’s move to the second group here. The sixth thing: keep a clean lab. Now what that means is try as much as possible to keep out anything that might be dirtying your results, any kind of variables creeping in that you didn’t want to have in the test. Hard to do, especially in what we’re testing, but do the best you can to keep out the dirt.

Manipulate only one variable. Out of all the things that you could tweak or change, choose one thing or a very small set of things. That will give more accuracy to your test. The more variables that you change, the more other effects and interactions are going to happen that you may not be accounting for and that are going to muddy your results.
Make sure you have statistical validity when you go to analyze those results. Now that’s beyond the scope of this little talk, but you can read up on that. Or even better, if you are able to, hire somebody or work with somebody who is a trained data scientist or has training in statistics so they can look at your evaluation and say the correlations or whatever you’re seeing, “Does it have a statistical significance?” Very important.
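As a toy illustration of what statistical validity means in practice, here is a sketch comparing rank changes in a test group against a control group with a two-sample t-test. The numbers are invented and far too few for a real decision; an analyst would also check the test’s assumptions or reach for a non-parametric alternative.

```python
# Toy significance check: did pages where we changed the variable move
# differently than the control pages? Uses a two-sample t-test; numbers
# are fabricated and far too small for a real conclusion.
from scipy.stats import ttest_ind

rank_change_test    = [3, 1, 4, 0, 2, 5, 1, 3]    # positions gained, test group
rank_change_control = [0, -1, 1, 0, 1, -2, 0, 1]  # positions gained, control group

t_stat, p_value = ttest_ind(rank_change_test, rank_change_control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference unlikely to be chance alone (at the 5% level).")
else:
    print("Not enough evidence to call this a real effect.")
```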
Transparency. As much as possible, share with the world your data set, your full results, your methodology. What did you do? How did you set up the study? That’s going to be important to our last step here, which is replication and falsification, one of the most important parts of any scientific process.
So what you want to invite is, hey we did this study. We did this test. Here’s what we found. Here’s how we did it. Here’s the data. If other people ask the same question again and run the same kind of test, do they get the same results? Somebody runs it again, do they get the same results? Even better, if you have some people out there who say, “I don’t think you’re right about that because I think you missed this, and I’m going to throw this in and see what happens,” aha they falsify. That might make you feel like you failed, but it’s success because in the end what are we after? We’re after the truth about what really works.
Think about your next test, your next experiment that you do. How can you apply these 10 principles to do better testing, get better results, and have better marketing? Thanks.
Video transcription by Speechpad.com
Posted by russangular
Given this blog’s readership, chances are good you will spend some time this week looking at backlinks in one of the growing number of link data tools. We know backlinks continue to be one of, if not the most important, parts of Google’s ranking algorithm. We tend to take these link data sets at face value, though, in part because they are all we have. But when your rankings are on the line, is there a better way to get at which data set is the best? How should we go about assessing these different link indexes like Moz, Majestic, Ahrefs and SEMrush for quality? Historically, there have been 4 common approaches to this question of index quality…
There are a number of really good studies (some newer than others) using these techniques that are worth checking out when you get a chance:
While these are all excellent at addressing the methodologies above, there is a particular limitation with all of them. They miss one of the most important metrics we need to determine the value of a link index: proportional representation to Google’s link graph. So here at Angular Marketing, we decided to take a closer look.
So, why is it important to determine proportional representation? Many of the most important and valued metrics we use are built on proportional models. PageRank, MozRank, CitationFlow and Ahrefs Rank are proportional in nature. The score of any one URL in the data set is relative to the other URLs in the data set. If the data set is biased, the results are biased.
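To see why “proportional” matters, here is a minimal PageRank-style power iteration on a toy graph. Because the scores are normalized across whatever sample was crawled, the same URL can come out with a very different score in two differently biased crawls. This is a textbook sketch, not any provider’s actual implementation.

```python
# Minimal PageRank power iteration on a toy link graph. Scores always sum
# to 1 across the crawled sample, so each page's score is only meaningful
# relative to the other pages in that sample.
DAMPING = 0.85

graph = {           # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

pages = list(graph)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in graph.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += DAMPING * share
    rank = new_rank

print({page: round(score, 3) for page, score in rank.items()})
```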
Link graphs are biased by their crawl prioritization. Because there is no full representation of the Internet, every link graph, even Google’s, is a biased sample of the web. Imagine for a second that the picture below is of the web. Each dot represents a page on the Internet, and the dots surrounded by green represent a fictitious index by Google of certain sections of the web.
Of course, Google isn’t the only organization that crawls the web. Other organizations like Moz, Majestic, Ahrefs, and SEMrush have their own crawl prioritizations which result in different link indexes.
In the example above, you can see different link providers trying to index the web like Google. Link data provider 1 (purple) does a good job of building a model that is similar to Google. It isn’t very big, but it is proportional. Link data provider 2 (blue) has a much larger index, and likely has more links in common with Google than link data provider 1, but it is highly disproportional. So, how would we go about measuring this proportionality? And which data set is the most proportional to Google?
The first step is to determine a measurement of relativity for analysis. Google doesn’t give us very much information about their link graph. All we have is what is in Google Search Console. The best source we can use is referring domain counts. In particular, we want to look at what we call referring domain link pairs. A referring domain link pair would be something like ask.com->mlb.com: 9,444, which means that ask.com links to mlb.com 9,444 times.
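One plausible way to put a number on that proportionality, offered here purely as an illustration rather than the study’s exact statistic, is to correlate an index’s referring-domain-pair counts with the counts Google Search Console reports for the same pairs:

```python
# Sketch: score how proportional a link index's referring-domain pair
# counts are to the counts reported in Google Search Console. Spearman
# correlation is one reasonable yardstick, not necessarily the statistic
# used in the study. All numbers are invented.
from scipy.stats import spearmanr

# (linking domain, target domain) -> link count
gsc = {
    ("ask.com", "mlb.com"): 9444,
    ("espn.com", "mlb.com"): 3120,
    ("blog.example.com", "mlb.com"): 45,
}
index = {
    ("ask.com", "mlb.com"): 7210,
    ("espn.com", "mlb.com"): 2950,
    ("blog.example.com", "mlb.com"): 12,
}

pairs = sorted(gsc)  # compare on a common set of pairs
rho, _ = spearmanr([gsc[p] for p in pairs], [index[p] for p in pairs])
print(f"Proportionality (Spearman rho): {rho:.2f}")
```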
When placed head-to-head, there seem to be some clear winners at first glance. In head-to-head, Moz edges out Ahrefs, but across the board, Moz and Ahrefs fare quite evenly. Moz, Ahrefs and SEMrush seem to be far better than Majestic Fresh and Majestic Historic. Is that really the case? And why?
It turns out there is an inversely proportional relationship between index size and proportional relevancy. This might seem counterintuitive: shouldn’t the bigger indexes be closer to Google? Not exactly.
Each organization has to create a crawl prioritization strategy. When you discover millions of links, you have to prioritize which ones you might crawl next. Google has a crawl prioritization, and so do Moz, Majestic, Ahrefs and SEMrush. There are lots of different things you might choose to prioritize…
Chances are, an organization’s crawl priority will blend some of these features, but it’s difficult to design one exactly like Google. Imagine for a moment that instead of crawling the web, you want to climb a tree. You have to come up with a tree climbing strategy.
Despite having different climb strategies, everyone chooses the same first branch, and everyone chooses the same second branch. There are only so many different options early on.
But as the climbers go further and further along, their choices eventually produce differing results. This is exactly the same for web crawlers like Google, Moz, Majestic, Ahrefs and SEMrush. The bigger the crawl, the more the crawl prioritization will cause disparities. This is not a deficiency; this is just the nature of the beast. However, we aren’t completely lost. Once we know how index size is related to disparity, we can make some inferences about how similar a crawl priority may be to Google.
Unfortunately, we have to be careful in our conclusions. We only have a few data points with which to work, so it is very difficult to be certain regarding this part of the analysis. In particular, it seems strange that Majestic would get better relative to its index size as it grows, unless Google holds on to old data (which might be an important discovery in and of itself). It is most likely that at this point we can’t make this level of conclusion.
Let’s say you have a list of domains or URLs for which you would like to know their relative values. Your process might look something like this…
It is important to point out that the likelihood that all the URLs you want to check are in a single index increases as the accuracy of the metric decreases. Considering the size of Majestic’s data, you can’t ignore them, because you are less likely to get null value answers from their data than the others. If anything rings true, it is that once again it makes sense to get data from as many sources as possible. You won’t get the most proportional data without Moz, the broadest data without Majestic, or everything in-between without Ahrefs.

What about SEMrush? They are making progress, but they don’t publish any relative statistics that would be useful in this particular case. Maybe we can hope to see more from them soon given their already promising index!
All we hear about these days is big data; we almost never hear about good data. I know that the teams at Moz, Majestic, Ahrefs, SEMrush and others are interested in mimicking Google, but I would love to see some organization stand up against the allure of more data in favor of better data—data more like Google’s. It could begin with testing various crawl strategies to see if they produce a result more similar to that of data shared in Google Search Console. Having the most Google-like data is certainly a crown worth winning.
Thanks to Diana Carter at Angular for assistance with data acquisition and Andrew Cron with statistical analysis. Thanks also to the representatives from Moz, Majestic, Ahrefs, and SEMrush for answering questions about their indices.
Posted by randfish
A lot of fantastic websites (and products, services, ideas, etc.) are in something of a pickle: The keywords they would normally think to target get next to no search volume. It can make SEO seem like a lost cause. In today’s Whiteboard Friday, Rand explains why that’s not the case, and talks about the one extra step that’ll help those organizations create the demand they want.
For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!
Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about a particularly challenging problem in the world of SEO, and that is trying to do SEO or trying to do any type of web marketing when your product, service, or idea has no search volume around it. So nobody is already looking for what you offer. It’s a new thing, a new concept.
I’ll use the example here of a website that I’m very fond of but for which there’s virtually no search volume, called Niice. It’s Niice.co.
It’s great. I searched for things in here. It brings me back all these wonderful visuals from places like Colossus and lots of design portals. I love this site. I use it all the time for inspiration, for visuals, for stuff that I might write about on blogs, for finding new artists. It’s just cool. I love it. I love the discovery aspect of it, and I think it can be really great for finding artists and designers and visuals.
But when I looked at the keyword research — and granted I didn’t go deep into the keyword research, but let’s imagine that I did — I looked for things like: “visual search engine” almost no volume; “search engine for designers” almost no volume; “graphical search engine” almost no volume; “find designer visuals” nada.
So when they look at their keyword research they go, “Man, we don’t even have keywords to target here really.” SEO almost feels like it’s not a channel of opportunity, and I think that’s where many, many companies and businesses make mistakes actually, because just because you don’t see keyword research around exactly around what you’re offering doesn’t mean that SEO can’t be a great channel. It just means we have to do an extra step of work, and that’s what I want to talk about today.
So I think when you encounter this type of challenge — and granted it might not be the challenge that there’s no keyword volume — it could be a challenge in your business, for your organization, for some ideas or products that you have or are launching that there’s just very little, and thus you’re struggling to come up with enough volume to create the quantity of leads, or free trials, or customers that you need. This process really can work.
In Niice’s case, that’s going to be a lot of designers. It might be people who are creating presentations. It might be those who are searching out designers or artists. It could be people seeking inspiration for all sorts of things. So they’re going to figure out who that is.
From there, they can look at the job title, interests, demographics of those people, and then you can do some cool stuff where you can figure out things like, “Oh, you know what? We could do some Facebook ad targeting to those right groups to help boost their interests in our product and potentially, well, create branded search volume down the road, attract direct visitors, build brand awareness for ourselves, and potentially get some traffic to the site directly as well. If we can convert some of that traffic, well, that’s fantastic.”
In their case, I think Niice is ad-supported right now, so all they really need is the traffic itself. But regardless, this is that same type of process you’d use.
What is that target audience searching for? Knowledge, products, tools, services, people, brands, whatever it is, if you know who the audience is, you can figure out what they’re searching for because they have needs. If they have a job title, if they have interests, if you have those profile features about the audience, you can figure out what else they’re going to be searching for, and in this case, knowing what designers are searching for, well, that’s probably relatively simplistic. The other parts of their audience might be more complex, but that one is pretty obvious.
From that, we can do content creation. We can do keyword targeting to be in front of those folks when they’re doing search by creating content that may not necessarily be exactly selling our tools, but that’s the idea of content marketing. We’re creating content to target people higher up in the funnel before they need our product.
We can use that, too, for product and feature inspiration in the product itself. So in this case, Niice might consider creating a design pattern library or several, pulling from different places, or hiring someone to come in and build one for them and then featuring that somewhere on the site if you haven’t done a search yet and then potentially trying to rank for that in the search engine, which then brings qualified visitors, the types of people who once they got exposed to Niice would be like, “Wow, this is great and it’s totally free. I love it.”
UX tool list, so list of tools for user experience, people on the design or UI side, maybe Photoshop tutorials, whatever it is that they feel like they’re competent and capable of creating and could potentially rank for, well, now you’re attracting the right audience to your site before they need your product.
That audience, where are they going on the web? What do they do when they get there? To whom do they listen? Who are their influencers? How can we be visible in those locations? So from that I can get things like influencer targeting and outreach. I can get ad and sponsorship opportunities. I can figure out places to do partnership or guest content or business development.
In Niice’s case, that might be things like sponsor or speak at design events. Maybe they could create an awards project for Dribble. So they go to Dribble, they look at what’s been featured there, or they go to Colossus, or some of the other sites that they feature, and they find the best work of the week. At the end of the week, they feature the top 10 projects, and then they call out the designers who put them together.
Wow, that’s terrific. Now you’re getting in front of the audience whose work you’re featuring, which is going to, in turn, make them amplify Niice’s project and product to an audience who’s likely to be in their target audience. It’s sort of a win-win. That’s also going to help them build links, engagement, shares, and all sorts of signals that potentially will help them with their authority, both topically and domain-wide, which then means they can rank for all the content they create, building up this wonderful engine.
I think what we can glean from this is not just inspiration for content and keyword opportunities as we can from many other kinds of content, but also sites to target, in particular sites to target with advertising, sites to target for guest posting or sponsorship, or sites to target for business development or for partnerships, site to target in an ad network, sites to target psychographically or demographically for Facebook if we want to run ads like that, potentially bidding on ads in Google when people search for that website or for that brand name in paid search.
So if you’re Niice, you could think about contracting some featured artist to contribute visuals maybe for a topical news project. So something big is happening in the news or in the design community, you contract a few of the artists whose work you have featured or are featuring, or people from the communities whose work you’re featuring, and say, “Hey, we might not be able to pay you a lot, but we’re going to get in front of a ton of people. We’re going to build exposure for you, which is something we already do, FYI, and now you’ve got some wonderful content that has that potential to mimic that work.”
You could think about, and I love this just generally as a content marketing and SEO tactic, if you go find viral content, content that has had wide sharing success across the web from the past, say two, three, four, or five years ago, you have a great opportunity, especially if the initial creator of that content or project hasn’t continued on with it, to go say, “Hey, you know what? We can do a version of that. We’re going to modernize and update that for current audiences, current tastes, what’s currently going on in the market. We’re going to go build that, and we have a strong feeling that it’s going to be successful because it’s succeeded in the past.”
That, I think, is a great way to get content ideas from viral content and then to potentially overtake them in the search rankings too. If something from three or five years ago, that was particularly timely then still ranks today, if you produce it, you’re almost certainly going to come out on top due to Google’s bias for freshness, especially around things that have timely relevance.
Then last one, I like to ask about brand advertising in these cases, because when there’s not search volume yet, a lot of times what you have to do is create awareness. I should change this from advertising to a brand awareness, because really there’s organic ways to do it and advertising ways to do it. You can think about, “Well, where are places that we can target where we could build that awareness? Should we invest in press and public relations?” Not press releases. “Then how do we own the market?” So I think one of the keys here is starting with that name or title or keyword phrase that encapsulates what the market will call your product, service or idea.
In the case of Niice, that could be, well, visual search engines. You can imagine the press saying, “Well, visual search engines like Niice have recently blah, blah, blah.” Or it could be designer search engines, or it could be graphical search engines, or it could be designer visual engines, whatever it is. You need to find what that thing is going to be and what’s going to resonate.
In the case of Nest, that was the smart home. In the case of Oculus, it was virtual reality and virtual reality gaming. In the case of Tesla, it was sort of already established. There’s electric cars, but they kind of own that market. If you know what those keywords are, you can own the market before it gets hot, and that’s really important because that means that all of the press and PR and awareness that happens around the organic rankings for that particular keyword phrase will all be owned and controlled by you.
When you search for “smart home,” Nest is going to dominate those top 10 results. When you search for “virtual reality gaming,” Oculus is going to dominate those top 10. It’s not necessarily dominate just on their own site, it’s dominate all the press and PR articles that are about that, all of the Wikipedia page about it, etc., etc. You become the brand that’s synonymous with the keyword or concept. From an SEO perspective, that’s a beautiful world to live in.
So, hopefully, for those of you who are struggling around demand for your keywords, for your volume, this process can be something that’s really helpful. I look forward to hearing from you in the comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.
Video transcription by Speechpad.com
Posted by Casey_Meraz
Competition in local search is fierce. While it’s typical to do some surface level research on your competitors before entering a market, you can go much further down the SEO rabbit hole. In this article we will look at how you can find more competitors, pull their data, and use it to beat them in the search game.
Since there are plenty of resources out there on best practices, this guide will assume that you have already followed the best practices for your own listing and are looking for the little things that might make a big difference in putting you over your competition. So if you haven’t already read how to perform the Ultimate Local SEO Audit or how to Find and Build Citations then you should probably start there.
Disclaimer: While it’s important to mention that correlation does not mean causation, we can learn a lot by seeing what the competition has done.
There are a number of benefits to conducting competitive research.
Once you isolate trends that seem to make a positive difference, you can create a hypothesis and test. This allows you to constantly be testing, finding out what works, and growing those positive elements while eliminating the things that don’t produce results. Instead of making final decisions off of emotion, make your decisions off of the conversion data.
A good competition analysis will give you a strong insight into the market and allow you to test, succeed, or fail fast. The idea behind this process is to really get a strong snapshot of your competition at a glance to isolate factors you may be missing in your company’s online presence.
Disclaimer 2: It’s good to use competitors’ ideas if they work, but don’t make that your only strategy.
Below I will cover a process I commonly use for competition analysis. I have also created this Google Docs spreadsheet for you to follow along with and use for yourself. To make your own copy, simply go to File > Make a Copy. (Please don’t ask me to add you as an owner. 🙂)
Whether you work internally or were hired as an outside resource to help with your client’s SEO campaign, you probably have some idea of who the competition is in your space. Some companies may have good offline marketing but poor online marketing. If you’re looking to be the best, it’s a good idea to do your own research and see who you’re up against.
In my experience, it’s always good to find and verify 5–10 online competitors in your space from a variety of sources. You can use tools for this or take the manual approach; just keep in mind that you have to screen the data these tools give you with your own eye for accuracy.
We’re going to look at some tools you can use to find competitors here in a second, but keep in mind you want to record everything you find.
Make sure to capture the basic information for each competitor including their company name, location, and website. These tools will be useful at a later time. Record these in the “competitor research” tab of the spreadsheet.
This is pointing out the obvious, but if you have a set of keywords you want to rank for, you can look for trends and see who is already ranking where you want to be. Don’t limit this to just one or two keywords; instead, get a broader list of the competitors out there.
To do this, simply come up with a list of several keywords you want to rank for and search for them in your geographic area. Make sure your Geographic preference is set correctly so you get accurate data.
To start we’re just going to collect the data and enter it into the spreadsheet. We will revisit this data shortly.
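If you want to save some typing, here’s a minimal sketch (in Python, with hypothetical keywords and locations) that builds those localized query strings for you; run each one with your geographic preference set correctly and log the competitors you find in the spreadsheet.

```python
# Minimal sketch: build localized query strings to run by hand (or feed into a
# rank tracker). The keywords and locations below are hypothetical placeholders.
keywords = ["emergency dentist", "dental implants", "teeth whitening"]
locations = ["Parker, CO", "Centennial, CO"]

queries = [f"{kw} {loc}" for kw in keywords for loc in locations]

for q in queries:
    print(q)  # search each of these and record who shows up in the pack and organic results
```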
Outside of the basics, I always find it’s good to see who else is out there. Since organic and local rankings are more closely tied together than ever, it’s a good idea to use 3rd party tools to get some insight as to what else your website could be considered related to.
This can help provide hidden opportunities outside of the normal competition you likely look at most frequently.
SEMRush is a pretty neat competitive analysis tool. While it is a paid program, they do in fact have a few free visits a day you can check out. It’s limited, but it will show you 10 competitors based on keyword ranking data. It’s also useful for recording paid competition.
To use the tool, visit www.SEMRush.com and enter your website in the provided search box and hit search. Once the page loads, you simply have to scroll down to the area that says “main competitors”. If you click the “view full report” option you’ll be taken to a page with 10 competition URLs.
Put these URLs into the spreadsheet so we can track them later.
SpyFu is a cool tool that will show your top 5 competitors in paid and organic search. Just like SEMRush, it’s a paid tool that’s easy to use. On the home page, you will see a box where you can enter your URL. Once you hit search, a list of 5 websites will populate for free.
Enter these competitors into your spreadsheet for tracking.
This website is a goldmine of data if you’re trying to learn about a startup. In addition to the basic information we’re looking for, you can also find out things like how much money they’ve raised, staff members, past employee history, and so much more.
Crunchbase also works pretty similarly to the prior tools in the sense that you just enter your website URL and hit the search button. Once the page loads, you can scroll down to the competitors section for some data.
While Crunchbase is cool, it’s not too useful for smaller companies as it doesn’t seem to have too much data outside of the startup world.
Compete seems to have limited data for smaller websites, but it’s worth a shot. It can also be a bit more high-level than I prefer, but you should still check it out.
To use the tool visit www.compete.com and enter the URL you want to examine in the box provided then hit search.
Click the “Find more sites like” box to get a list of three related sites. Enter these in the provided spreadsheet.
SimilarWeb provides a cool tool with a bunch of data to check out websites. After entering your information, you can scroll down to the similar sites section which will show websites it believes to be related.
The good news about SimilarWeb is that it seems to have data no matter how big or small your site is.
Now that we have a list of competitors, we can really do a deep dive to see who is ranking and what factors might be contributing to their success. To start, make sure to pick your top competitors from the spreadsheet and then look for and record the information below about each business on the Competitor Analysis tab.
You will want to pull this information from their Google My Business page.
If you know the company’s name, it’s pretty easy to find them just by searching the brand. You can add the geographic location if it’s a multi-location business.
For example if I was searching for a Wendy’s in Parker, Colorado, I could simply search this: “Wendy’s Parker, CO” and it will pull up the location(s).
Make sure to record the following information from their local listings. Get the data from their Google My Business (Google+) page and record it in the spreadsheet!
Record this information in the spreadsheet.
Since you’ve already optimized your own listing for best practices, we want to see if there are any particular trends that seem to be working better in a certain area. We can then create a hypothesis and test it to see if any gains or losses are made. While we can’t isolate factors, we can get more insight into what’s working the more we test.
In my experience, examining trends is much easier when the data is side by side. You can easily pick out data that stands out from the rest.
You already know the ins and outs of your landing page. Now let’s look at each competitor’s landing page individually. Let’s look at the factors that carry the most weight and see if anything sticks out.
Record the following information into the spreadsheet and compare your company side by side with the successful competitors (a quick automation sketch follows the list below).
Page title of landing page
City present? – Is the city present in the landing page meta title?
State present? – Is the state present in the landing page meta title?
Major KW in title? – Is there a major keyword in the landing page meta title?
Content length on landing page – Possibly minor, but worth examining. Copy/paste into MS Word.
H1 present? – Is an H1 tag present?
City in H1? – Does the H1 contain the city name?
State in H1? – Does the H1 have the state or its abbreviation in the heading?
Keyword in H1? – Do they use a keyword in the H1?
Local business schema present? – Are they using schema? Find out using the Google Structured Data Testing Tool.
Embedded map present? – Are they embedding a Google map?
GPS coordinates present? – Are they using GPS coordinates via schema or text?
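If you’re checking more than a handful of competitors, much of this checklist can be pulled automatically. Here’s a rough Python sketch of that idea; it assumes the requests and beautifulsoup4 packages are installed, and the URL, city, state, and keyword are hypothetical placeholders you’d swap out for each competitor’s landing page.

```python
# Rough sketch of automating the landing-page checklist above.
# The URL, city, state, and keyword are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

url, city, state, keyword = "https://example.com/denver-dentist", "Denver", "CO", "dentist"

html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

title = (soup.title.string or "") if soup.title else ""
h1 = soup.h1.get_text(" ", strip=True) if soup.h1 else ""
text = soup.get_text(" ", strip=True)

report = {
    "title": title,
    "city_in_title": city.lower() in title.lower(),
    "state_in_title": state.lower() in title.lower(),
    "keyword_in_title": keyword.lower() in title.lower(),
    "content_word_count": len(text.split()),
    "h1_present": bool(h1),
    "city_in_h1": city.lower() in h1.lower(),
    "state_in_h1": state.lower() in h1.lower(),
    "keyword_in_h1": keyword.lower() in h1.lower(),
    "local_business_schema": "LocalBusiness" in html,
    "embedded_map": "google.com/maps" in html or "maps.google.com" in html,
}

for factor, value in report.items():
    print(f"{factor}: {value}")
```

Paste the output into the Competitor Analysis tab next to your own numbers so the trends are easy to spot side by side.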
Recently, I was having a conversation with a client who was super-excited about the efforts his staff was making. He proudly proclaimed that his office was building 10 new citations a day and added over 500 within the past couple of months!
His excitement freaked me out. As I suspected, when I asked to see his list, I saw a bunch of low-quality directory sites that were passing little or no value. One way I could tell they were not really helping (besides the fact that some were NSFW websites) was that the citations or listings were not even indexed in Google.
I think it’s reasonable to test what Google already knows about your business. Whatever Google delivers about your brand, it’s serving because it has the most relevance or authority in Google’s eyes.
It’s actually pretty simple. Just do a Google search. One of the ways I try to evaluate whether a citation website is authoritative enough is to take the competition’s NAP and Google it. While you’ve probably done this many times before for citation earning, you can prioritize your efforts based on what recurs across the top-ranked competitors’ listings.
As you can see in the example below where I did a quick search for a competitor’s dental office (by pasting his NAP in the search bar), I see that Google is associating this particular brand with websites like:
Pro Tip: Amazon Local is relatively new, but you can see that it’s going to carry a citation benefit in local search. If your clients are willing, you should sign up for this.
Don’t want to copy and paste the NAP in a variety of formats? Use Andrew Shotland’s NAP Hunter to get your competitor’s variants. This tool will easily open multiple window tabs in your browser and search for combinations of your competitor’s NAP listings. It makes it easy and it’s kind of fun.
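If you’d rather script it yourself, here’s a rough Python sketch of the same idea: it opens a Google search tab for each combination of a competitor’s name, address, and phone number so you can eyeball which citation sites keep recurring. The business details below are hypothetical placeholders.

```python
# Rough sketch: open a Google search for each combination of a competitor's
# NAP (name, address, phone). The details below are hypothetical placeholders.
import webbrowser
from itertools import combinations
from urllib.parse import quote_plus

name = "Example Dental Office"
address = "123 Main St, Parker, CO 80134"
phone = "(303) 555-0100"

parts = [name, address, phone]
for r in (2, 3):
    for combo in combinations(parts, r):
        query = " ".join(f'"{p}"' for p in combo)  # exact-match each NAP piece
        webbrowser.open_new_tab("https://www.google.com/search?q=" + quote_plus(query))
```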
With citations, I’m generally in the camp of quality over quantity. That being said, if you’re just getting the same citations that everyone else has, that doesn’t really set you apart, does it? I like to tell clients that the top citation sources are a must, but it’s good to seek out opportunities and monitor what your competition does so you can keep up and stay ahead of the game.
You need to check the top citations and see where you’re listed vs. your competition. Tools like Whitespark’s Local Citation Finder make it much easier to get a quick snapshot.
If you’re looking to see which citations you should find and check, use these two resources below:
Just like in the example in the section above, you can find powerful hidden gems and also new website opportunities that arise from time to time.
A common mistake I see is businesses thinking it’s OK to just turn things off when they get to the top. That’s a bad idea. If you’re serious about online marketing, you know that someone is always out to get you. So in addition to tracking your brand mentions through Fresh Web Explorer, you also need to be tracking your competition at least once a month! The good news is that you can do this easily with Fresh Web Explorer from Moz.
Plus, track anything else you can think of related to your brand. This will make the ongoing efforts a bit easier.
Did you know some citation sources have dofollow links, which means they pass link juice to your website? While these by themselves likely won’t pass a lot of juice, it adds an incentive for you to be proactive about recording and promoting these listings.
When reviewing my competition’s citations and links I use a simple Chrome plugin called NoFollow which simply highlights nofollow links on pages. It makes it super easy to see what’s a follow vs. a nofollow link.
But what’s the benefit of this? Let’s say that I have a link on a city website that’s a follow link and a citation. If it’s an authority page that talks highly about my business, it would make sense for me to link to it from time to time. If you’re getting links from websites other than your own and linking to these high-quality citations, you will pass link juice to your page. It’s a pretty simple way of increasing the authority of your local landing pages.
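If you want to check a page without the plugin, here’s a rough Python sketch that lists every outbound link on a citation page and whether it’s follow or nofollow. It assumes the requests and beautifulsoup4 packages, and the URL is a hypothetical placeholder.

```python
# Rough sketch: report follow vs. nofollow for every link on a page.
# The URL below is a hypothetical placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example-citation-site.com/listing/example-dental-office"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

for a in soup.find_all("a", href=True):
    rel = [value.lower() for value in (a.get("rel") or [])]
    kind = "nofollow" if "nofollow" in rel else "follow"
    print(f"{kind:9} {a['href']}")
```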
Since the Pigeon update almost a year ago, links have made a bigger impact in local search. You have to earn links, and high-quality links at that, to your website and especially to your Google My Business landing page.
If the factors show you’re on the same playing field as your competition except in domain authority or page authority, you know your primary focus needs to be links.
Now here is where the research gets interesting. Remember the data sources we pulled earlier, like Compete, SpyFu, etc.? We are now going to get a bigger picture of the link profile because we did this extra work. Not only are we going to look at the links that our competition in the pack has, we’ve also branched out beyond that for more ideas which will potentially pay off big in the long run.
Now we want to take every domain we looked at when we started and run Open Site Explorer on each and every domain. Once we have these lists of links, we can then sort them out and go after the high quality ones that you don’t already have.
Typically, when I’m doing this research I will export everything into Excel or Google Docs, combine them into one spreadsheet and then sort from highest authority to least authority. This way you can prioritize your road map and focus on the bigger fish.
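As a rough illustration of that combine-and-sort step, here’s a short Python sketch using pandas. It assumes each export is a CSV sitting in an exports/ folder with “URL” and “Page Authority” columns; column names vary by tool, so adjust them to match what your exports actually contain.

```python
# Rough sketch: merge several link exports and rank them by authority so the
# biggest opportunities float to the top. Folder and column names are assumptions.
import glob
import pandas as pd

frames = [pd.read_csv(path) for path in glob.glob("exports/*.csv")]
links = pd.concat(frames, ignore_index=True)

prioritized = (
    links.drop_duplicates(subset="URL")
         .sort_values("Page Authority", ascending=False)
)
prioritized.to_csv("combined_link_targets.csv", index=False)
print(prioritized.head(20))  # the top targets to chase first
```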
Keep in mind that citations usually have links and some links have citations. If they have a lot of authority you should make sure you add both.
If you feel like you’ve gone above and beyond your competition and yet you’re not seeing the gains you want, there is more you have to look at. Sometimes as an SEO it’s easy to get stuck in a paradigm of focusing only on the technical or link side of things. But what about user behavior?
It’s no secret that user behavior influences rankings, and some recent tests are showing promising data. If your users visit your site and then click back to the search results, it indicates that they didn’t find what they were looking for. Through our own experiments we have seen listings in the SERPs jump a few positions in hours just based on user behavior.
You need to make sure your pages are answering users’ queries as they land on your page, preferably above the fold. For example, if I’m looking for a haircut place and I land on your page, I might want to know the hours, pricing, or directions to your store. Making that information prominent is essential.
Make sure that if you’re going to make these changes, you test them. Come up with a hypothesis, test the results, and come to a conclusion (or run another test) based on the data. If you want to know more about your users, I say you need to find out as much about them as humanly possible. Some services you can use for that are:
1. Inspectlet – Record user sessions and watch how they navigate your website. This awesome tool literally allows you to watch recorded user sessions. Check out their site.
2. LinkedIn Tracking Script – Although I admit it’s a bit creepy, did you know that you can see the actual visitors to your website if they’re logged into LinkedIn while browsing? You sure can. To do this, complete the following steps:
1. Sign up for a LinkedIn Premium Account
2. Enter this code into the body of your website pages:
<img src="https://www.linkedin.com/profile/view?authToken=zRgB&authType=name&id=XXXXX" />
3. Replace the XXXXX with your account number of your profile. You can get this by logging into your profile page and getting the number present after viewid?=
4. Wait for the visitors to start showing up under “who’s viewed your profile”
3. Google Analytics – Watch user behavior and gain insights as to what visitors were doing on your website.
Speaking of user behavior, is your listing the only one without reviews? Does it have fewer or less favorable reviews? All of these are negative signals for user experience. Do your competitors have more positive reviews? If so, you need to work on getting more.
While this post was mainly geared towards local SEO as in Google My Business rankings, you have to consider that there are a lot of localized search queries that do not generate pack results. In these cases they’re just standard organic listings.
If you’ve been deterred from writing these because Google sometimes picks its own meta descriptions, or because they lack a direct ranking benefit, you need to check yourself before you wreck yourself. Seriously. Customers decide which listing to click on based on this information. If you’re not optimizing these for user intent on the corresponding page, then you’re just being lazy. Spend the time, increase CTR, and increase your rankings if you’re serving great content.
The key to success here is realizing that this is a marathon and not a sprint. If you examine the competition in the top areas mentioned above and create a plan to overcome them, you will win long term. This, of course, also assumes you’re not doing anything shady and are staying above board.
While there were many more things I could add to this article, I believe that if you put your focus on what’s mentioned here you’ll have the greatest success. Since I didn’t talk too much about geo-tagged media in this article, I also included some other items to check in the spreadsheet under the competitor analysis tab.
Remember to actively monitor what those around you are doing and develop a proactive plan to be successful for your clients.
What’s the most creative thing you have seen a competitor do successfully in local search? I would love to hear about it in the comments below.