12 Common Reasons Reconsideration Requests Fail

Posted by Modestos

There are several reasons a reconsideration request might fail. But some of the most common mistakes site owners and inexperienced SEOs make when trying to lift a link-related Google penalty are entirely avoidable. 

Here’s a list of the top 12 most common mistakes made when submitting reconsideration requests, and how you can prevent them.

1. Insufficient link data

This is one of the most common reasons why reconsideration requests fail. This mistake is readily evident each time a reconsideration request gets rejected and the example URLs provided by Google are unknown to the webmaster. Relying only on Webmaster Tools data isn’t enough, as Google has repeatedly said. You need to combine data from as many different sources as possible. 

A good starting point is to collate backlink data, at the very least:

  • Google Webmaster Tools (both latest and sample links)
  • Bing Webmaster Tools
  • Majestic SEO (Fresh Index)
  • Ahrefs
  • Open Site Explorer

If you use any toxic link-detection services (e.g., Linkrisk and Link Detox), then you need to take a few precautions to ensure the following:

  • They are 100% transparent about their backlink data sources
  • They have imported all backlink data
  • You can upload your own backlink data (e.g., Webmaster Tools) without any limitations

If you work on large websites with tons of backlinks, most of these automated services will very likely process just a fraction of the links unless you pay for one of their premium packages. If you have direct access to the above data sources, it’s worthwhile to download all backlink data yourself, then manually upload it into your tool of choice for processing. This is the only way to have full visibility over the backlink data that has to be analyzed and reviewed later. Starting with an incomplete data set at this early (yet crucial) stage could seriously hinder the outcome of your reconsideration request.
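To illustrate the collation step, here’s a minimal Python sketch that merges several hypothetical CSV exports (the filenames and column names are assumptions—each tool labels its URL column differently) and normalizes URLs so the same link reported by two tools is counted once:

```python
import csv

# Hypothetical export files; adjust the column names to match each tool's CSV.
EXPORTS = {
    "gwt_links.csv": "URL",                       # Google Webmaster Tools
    "majestic_fresh.csv": "SourceURL",            # Majestic SEO (Fresh Index)
    "ahrefs_backlinks.csv": "Referring Page URL", # Ahrefs
}

def normalize(url):
    """Lower-case the scheme and host and strip trailing slashes so the
    same link exported by two tools is deduplicated."""
    url = url.strip()
    if "://" in url:
        scheme, rest = url.split("://", 1)
        host, _, path = rest.partition("/")
        url = scheme.lower() + "://" + host.lower() + ("/" + path if path else "")
    return url.rstrip("/")

def merge_exports(exports):
    """Read every export and return one sorted, deduplicated URL list."""
    seen = set()
    for path, column in exports.items():
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                link = row.get(column, "").strip()
                if link:
                    seen.add(normalize(link))
    return sorted(seen)

# Usage (assuming the files above exist):
# all_links = merge_exports(EXPORTS)
```

The merged list is then the single data set everything downstream (crawling, manual review, outreach) works from.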

2. Missing vital legacy information

The more you know about a site’s history and past activities, the better. You need to find out (a) which pages were targeted in the past as part of link building campaigns, (b) which keywords were the primary focus and (c) the link building tactics that were scaled (or abused) most frequently. Knowing enough about a site’s past activities, before it was penalized, can help you home in on the actual causes of the penalty. Also, collect as much information as possible from the site owners.

3. Misjudgement

Misreading your current situation can lead to wrong decisions. One common mistake is to treat the example URLs provided by Google as gospel and try to identify only links with the same patterns. Google provides a very small number of examples of unnatural links. Often, these examples are the most obvious and straightforward ones. However, you should look beyond these examples to fully address the issues and take the necessary actions against all types of unnatural links. 

Google is very clear on the matter: “Please correct or remove all inorganic links, not limited to the samples provided above.”

Another common area of bad judgement is the inability to correctly identify unnatural links. This is a skill that requires years of experience in link auditing, as well as link building. Removing the wrong links won’t lift the penalty, and may also result in further ranking drops and loss of traffic. You must remove the right links.


4. Blind reliance on tools

There are numerous unnatural link-detection tools available on the market, and over the years I’ve had the chance to try out most (if not all) of them. Without exception, I’ve found them ineffective and inaccurate, so I do not rely on any such tools for my day-to-day work. In some cases, many of the reported “high risk” links were 100% natural, and in others, numerous toxic links were completely missed. If you have to manually review all the links anyway to find the unnatural ones, and to make sure you don’t accidentally remove any natural ones, it makes no sense to pay for these tools.

If you solely rely on automated tools to identify the unnatural links, you will need a miracle for your reconsideration request to be successful. The only tool you really need is a powerful backlink crawler that can accurately report the current link status of each URL you have collected. You should then manually review all currently active links and decide which ones to remove. 
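As a rough sketch of that crawling step (standard library only; the status-to-bucket mapping is my own assumption—a 404/410 usually means the linking page is gone, while a 200 still needs a manual check that the link is actually present in the page content):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify(status):
    """Map an HTTP status to a review bucket: 404/410 suggests the
    linking page is gone; a 200 still needs a manual content check."""
    if status in (404, 410):
        return "removed"
    if status == 200:
        return "check content"
    return "review"

def check_urls(urls, timeout=10):
    """HEAD-request each collected linking URL and bucket the result.
    A production crawler would add retries, rate limiting, and a
    content check for the link itself."""
    results = {}
    for url in urls:
        try:
            with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
                results[url] = classify(resp.status)
        except HTTPError as err:
            results[url] = classify(err.code)
        except URLError:
            results[url] = "unreachable"
    return results

# Usage (not run here):
# statuses = check_urls(["http://example.com/old-guest-post"])
```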

I could write an entire book on the numerous flaws and bugs I have come across each time I’ve tried some of the most popular link auditing tools. A lot of these issues can be detrimental to the outcome of the reconsideration request, and I have seen many reconsideration requests fail because of them. If Google cannot algorithmically identify all unnatural links and must operate entire teams of humans to review sites (and their links), you shouldn’t trust a $99/month service to identify the unnatural links for you.

If you have an in-depth understanding of Google’s link schemes, you can build your own process to prioritize which links are most likely to be unnatural, as I described in this post (see sections 7 & 8). In an ideal world, you would manually review every single link pointing to your site. Where this isn’t possible (e.g., when dealing with an enormous number of links, or when resources are unavailable), you should at least focus on the links with the most “unnatural” signals and manually review those.

5. Not looking beyond direct links

When trying to lift a link-related penalty, you need to look into all the links that may be pointing to your site directly or indirectly. Such checks include reviewing links pointing to other sites that redirect to your site, legacy URLs with external inbound links that have been internally redirected, and third-party sites that include cross-domain canonicals to your site. For sites that used to buy and redirect domains in order to increase their rankings, the quickest solution is to get rid of the redirects. Both Majestic SEO and Ahrefs report redirects, but some manual digging usually reveals a lot more.
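Part of that manual digging can be scripted. The sketch below (function names are my own) makes a single-hop HEAD request without following redirects and checks whether the Location header points at your domain; a real audit would also catch meta refreshes and JavaScript redirects:

```python
import http.client
from urllib.parse import urlparse

def points_at(status, location, your_domain):
    """True when a single-hop response is a redirect whose Location
    header targets your domain."""
    return status in (301, 302, 307, 308) and bool(location) and your_domain in location

def redirect_target(url, timeout=10):
    """Issue one HEAD request WITHOUT following redirects and return
    (status, Location header). urllib.request follows redirects
    automatically, so http.client is used here instead."""
    parts = urlparse(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=timeout)
    try:
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

# Usage (not run here):
# status, loc = redirect_target("http://bought-domain.com/")
# if points_at(status, loc, "mysite.com"): ...
```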


6. Not looking beyond the first link

All major link intelligence tools, including Majestic SEO, Ahrefs and Open Site Explorer, report only the first link pointing to a given site when crawling a page. This means that if you rely heavily on automated tools to identify links with commercial keywords, the vast majority of them will only take into consideration the first link they discover on a page. If a page on the web links to your site just once, this is not a big deal. But if there are multiple links, the tools will miss all but the first one.

For example, if a page has five different links pointing to your site, and the first one includes branded anchor text, these tools will report just that first link. Most link-auditing tools will in turn evaluate the link as “natural” and completely miss the other four links, some of which may contain manipulative anchor text. The more links that get missed this way, the more likely it is that your reconsideration request will fail.
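Working around this limitation with your own crawl is straightforward; the sketch below collects every anchor on a page rather than stopping at the first, so all five links in a case like the one above would be reported:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every <a href> on a page, not just the first one."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def links_to(html, your_domain):
    """Return every link on the page that points at your domain."""
    parser = LinkCollector()
    parser.feed(html)
    return [h for h in parser.links if your_domain in h]
```

Each returned link can then be reviewed individually for manipulative anchor text.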

7. Going too thin

Many SEOs and webmasters (still) feel uncomfortable with the idea of losing links. They cannot accept that links which once helped their rankings are now being devalued and must be removed. There is no point trying to save “authoritative” unnatural links out of fear of losing rankings. If the main objective is to lift the penalty, then all unnatural links need to be removed.

Often, in the first reconsideration request, SEOs and site owners tend to go too thin, and in the subsequent attempts start cutting deeper. If you are already aware of the unnatural links pointing to your site, try to get rid of them from the very beginning. I have seen examples of unnatural links provided by Google on PR 9/DA 98 sites. Metrics do not matter when it comes to lifting a penalty. If a link is manipulative, it has to go.

In any case, Google’s decision won’t be based only on the number of links that have been removed. Most important in the search giant’s eyes is the quality of the links still pointing to your site. If the remaining links are largely of low quality, the reconsideration request will almost certainly fail.

8. Insufficient effort to remove links

Google wants to see a “good faith” effort to get as many links removed as possible. The higher the percentage of unnatural links removed, the better. Some agencies and SEO consultants tend to rely too much on the disavow tool. However, it isn’t a panacea, and should be used as a last resort, for those links that are impossible to remove—after exhausting all possibilities to physically remove them via the time-consuming (yet necessary) outreach route.

Google is very clear on this:

[Image: Google’s guidance on link removal]

Even if you’re unable to remove all of the links that need to be removed, you must be able to demonstrate that you’ve made several attempts to have them removed, which can have a favorable impact on the outcome of the reconsideration request. Yes, in some cases it might be possible to have a penalty lifted simply by disavowing the links instead of removing them, but these cases are rare and this strategy may backfire in the future. When I reached out to ex-Googler Fili Wiese for advice on the value of removing toxic links (instead of just disavowing them), his response was very straightforward:

[Image: Fili Wiese’s response]

9. Ineffective outreach

Simply identifying the unnatural links won’t get the penalty lifted unless a decent percentage of the links have been successfully removed. The more communication channels you try, the more likely it is that you reach the webmaster and get the links removed. Sending the same email hundreds or thousands of times is highly unlikely to result in a decent response rate. Trying to remove a link from a directory is very different from trying to get rid of a link appearing in a press release, so you should take a more targeted approach with a well-crafted, personalized email. Link removal request emails must be honest and to the point, or else they’ll be ignored.

Tracking the emails will also help you figure out which messages have been read, which webmasters might be worth contacting again, or alert you to the need to try an alternative means of contacting webmasters.

Creativity, too, can play a big part in the link removal process. For example, it might be necessary to use social media to reach the right contact. Again, don’t trust automated emails or contact-form harvesters. In some cases, these applications will pull in any email address they find on the crawled page (without any guarantee of whom it belongs to). In others, they will completely miss masked email addresses or those appearing in images. If you really want to see the links removed, outreach should be carried out by experienced outreach specialists. Unfortunately, there aren’t any shortcuts to effective outreach.

10. Quality issues and human errors

All sorts of human errors can occur when filing a reconsideration request. The most common errors include submitting files that do not exist, files that do not open, files that contain incomplete data, and files that take too long to load. You need to triple-check that the files you are including in your reconsideration request are read-only, and that anyone with the URL can fully access them. 

Poor grammar and language is also bad practice, as it may be interpreted as “poor effort.” You should definitely get the reconsideration request proofread by a couple of people to be sure it is flawless. A poorly written reconsideration request can significantly hinder your overall efforts.

Quality issues can also occur with the disavow file submission. Disavowing at the URL level isn’t recommended because the link(s) you want to get rid of are often accessible to search engines via several URLs you may be unaware of. Therefore, it is strongly recommended that you disavow at the domain or sub-domain level.
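A small helper can enforce the domain-level rule mechanically. This sketch collapses a list of unnatural-link URLs into one `domain:` entry per unique host, following the disavow file format (the `www.` stripping is a simplification; subdomains you genuinely want to keep would need special-casing):

```python
from urllib.parse import urlparse

def disavow_file(urls):
    """Collapse a list of unnatural-link URLs into domain-level
    disavow entries: one 'domain:' line per unique host."""
    domains = set()
    for url in urls:
        host = urlparse(url).netloc.split(":")[0]  # drop any port
        if host.startswith("www."):
            host = host[4:]
        if host:
            domains.add(host)
    return "\n".join("domain:" + d for d in sorted(domains))
```

The returned text is what you would paste into the file uploaded via the disavow tool.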

11. Insufficient evidence

How does Google know you have done everything you claim in your reconsideration request? Because you have to prove each claim is valid, you need to document every single action you take, from sent emails and submitted forms, to social media nudges and phone calls. The more information you share with Google in your reconsideration request, the better. This is the exact wording from Google:

“ …we will also need to see good-faith efforts to remove a large portion of inorganic links from the web wherever possible.”

12. Bad communication

How you communicate your link cleanup efforts is as essential as the work you are expected to carry out. Not only do you need to explain the steps you’ve taken to address the issues, but you also need to share supportive information and detailed evidence. The reconsideration request is the only chance you have to communicate to Google which issues you have identified, and what you’ve done to address them. Being honest and transparent is vital for the success of the reconsideration request.

There is absolutely no point using the space in a reconsideration request to argue with Google. Some of the unnatural link examples they share may not always be useful (e.g., URLs that include nofollow links, removed links, or even no links at all). But taking the argumentative approach virtually guarantees your request will be denied.

Cropped from photo by Keith Allison, licensed under Creative Commons.

Conclusion

Getting a Google penalty lifted requires a good understanding of why you have been penalized, a flawless process and a great deal of hands-on work. Performing link audits for the purpose of lifting a penalty can be very challenging, and should only be carried out by experienced consultants. If you are not 100% sure you can take all the required actions, seek out expert help rather than looking for inexpensive (and ineffective) automated solutions. Otherwise, you will almost certainly end up wasting weeks or months of your precious time, and in the end, see your request denied.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 3 years ago from moz.com

Unraveling Panda Patterns

Posted by billslawski

This is my first official blog post at Moz.com, and I’m going to be requesting your help, expertise, and imagination.

I’m going to be asking you to take over as Panda for a little while to see if you can identify the kinds of things that Google’s Navneet Panda addressed when faced with what looked like an incomplete patent created to identify sites as parked domain pages, content farm pages, and link farm pages. You’re probably better at this now than he was then.

You’re a subject matter expert.

To put things in perspective, I’m going to include some information about what appears to be the very first Panda patent, and some of Google’s effort behind what they were calling the “high-quality site algorithm.”

I’m going to then include some of the patterns they describe in the patent to identify lower-quality pages, and then describe some of the features I personally would suggest to score and rank a higher-quality site of one type.

Google’s Amit Singhal identified a number of questions Google might use to assess higher-quality sites, and told us in the blog post where he listed them that it was an incomplete list, because they didn’t want to make it easy for people to abuse their algorithm.

In my opinion though, any discussion about improving the quality of webpages is one worth having, because it can help improve the quality of the Web for everyone, which Google should be happy to see anyway.

Warning searchers about low-quality content

In “Processing web pages based on content quality,” the original patent filing for Panda, there’s a somewhat mysterious statement that makes it sound as if Google might warn searchers before sending them to a low quality search result, and give them a choice whether or not they might actually click through to such a page.

As it notes, the types of low quality pages the patent was supposed to address included parked domain pages, content farm pages, and link farm pages (yes, link farm pages):

“The processor 260 is configured to receive from a client device (e.g., 110), a request for a web page (e.g., 206). The processor 260 is configured to determine the content quality of the requested web page based on whether the requested web page is a parked web page, a content farm web page, or a link farm web page.

Based on the content quality of the requested web page, the processor is configured to provide for display, a graphical component (e.g., a warning prompt). That is, the processor 260 is configured to provide for display a graphical component (e.g., a warning prompt) if the content quality of the requested web page is at or below a certain threshold.

The graphical component provided for display by the processor 260 includes options to proceed to the requested web page or to proceed to one or more alternate web pages relevant to the request for the web page (e.g., 206). The graphical component may also provide an option to stop proceeding to the requested web page.

The processor 260 is further configured to receive an indication of a selection of an option from the graphical component to proceed to the requested web page, or to proceed to an alternate web page. The processor 260 is further configured to provide for display, based on the received indication, the requested web page or the alternate web page.”

This did not sound like a good idea.

Recently, Google announced in a Google Webmaster Central blog post, Promoting modern websites for modern devices in Google search results, that they would start providing warning notices on mobile versions of sites if there were issues on the pages visitors might go to.

I imagine that as a site owner, you might be disappointed to see such a warning notice shown to searchers because technology used on your site might not work correctly on a specific device. That recent blog post mentions Flash as an example of a technology that might not work correctly on some devices. For example, we know that Apple’s mobile devices and Flash don’t work well together.

That’s not a bad warning in that it provides enough information to act upon and fix to the benefit of a lot of potential visitors. 🙂

But imagine if you tried to visit your website in 2011, and instead of getting to the site, you received a Google warning that the page you were trying to visit was a content farm page or a link farm page, and it provided alternative pages to visit as well.

That “your website sucks” warning still doesn’t sound like a good idea. One of the inventors listed on the patent is described on LinkedIn as presently working on the Google Play store. The warning for mobile devices might have been something he brought to Google from his work on this Panda patent.

We know that when the Panda update was released, it was targeting specific types of pages that people at places such as The New York Times were complaining about, such as parked domains and content farm sites. A follow-up from the Times after the algorithm update was released puts it into perspective for us.

It wasn’t easy to know whether your pages had been targeted by that particular Google update, or whether your site was a false positive—and many site owners ended up posting in the Google Help forums after a Google search engineer invited them to post there if they believed they had been targeted by the update when they shouldn’t have been.

The wording of that invitation is interesting in light of the original name of the Panda algorithm. (Note that the thread was broken into multiple threads when Google migrated posts to new software, and many appear to have disappeared at some point.)

As we were told in the invite from the Google search engineer:

“According to our metrics, this update improves overall search quality. However, we are interested in hearing feedback from site owners and the community as we continue to refine our algorithms. If you know of a high-quality site that has been negatively affected by this change, please bring it to our attention in this thread.

Note that as this is an algorithmic change we are unable to make manual exceptions, but in cases of high quality content we can pass the examples along to the engineers who will look at them as they work on future iterations and improvements to the algorithm.

So even if you don’t see us responding, know that we’re doing a lot of listening.”

The timing for such in-SERP warnings might have been troublesome. A site that mysteriously stops appearing in search results for queries it used to rank well for might be said to have gone astray of Google’s guidelines. Instead, such a warning might be a little like the purposefully embarrassing “Scarlet A” in Nathaniel Hawthorne’s novel The Scarlet Letter.

A page that shows up in search results with a warning to searchers stating that it was a content farm, or a link farm, or a parked domain probably shouldn’t be ranking well to begin with. Having Google continuing to display those results ranking highly, showing both a link and a warning to those pages, and then diverting searchers to alternative pages might have been more than those site owners could handle. Keep in mind that the fates of those businesses are usually tied to such detoured traffic.

My imagination is filled with the filing of lawsuits against Google based upon such tantalizing warnings, rather than site owners filling up a Google Webmaster Help Forum with information about the circumstances of their sites being impacted by the update.

In retrospect, it is probably a good idea that the warnings hinted at in the original Panda Patent were avoided.

Google seems to think that such warnings are appropriate now when it comes to multiple devices and technologies that may not work well together, like Flash and iPhones.

But there were still issues with how well or how poorly the algorithm described in the patent might work.

In the March 2011 interview with Google’s Head of Search Quality, Amit Singhal, and his team member and Head of Webspam at Google, Matt Cutts, titled TED 2011: The “Panda” That Hates Farms: A Q&A With Google’s Top Search Engineers, we learned that the code name Google used for the algorithm update was “Panda,” after an engineer of that name came along and provided suggestions on patterns that could be used to identify high- and low-quality pages.

His input seems to have been pretty impactful—enough for Google to have changed the name of the update, from the “High Quality Site Algorithm” to the “Panda” update.

How the High-Quality Site Algorithm became Panda

Danny Sullivan named the update the “Farmer update” since it supposedly targeted content farm web sites. Soon afterwards the joint interview with Singhal and Cutts identified the Panda codename, and that’s what it’s been called ever since.

Google didn’t completely abandon the name found in the original patent, the “high quality sites algorithm,” as can be seen in the titles of these Google Blog posts:

The most interesting of those is the “more guidance” post, in which Amit Singhal lists 23 questions about things Google might look for on a page to determine whether or not it was high-quality. I’ve spent a lot of time since then looking at those questions thinking of features on a page that might convey quality.

The original patent is at:

Processing web pages based on content quality
Inventors: Brandon Bilinski and Stephen Kirkham
Assigned to Google

US Patent 8,775,924

Granted July 8, 2014

Filed: March 9, 2012

Abstract

“Computer-implemented methods of processing web pages based on content quality are provided. In one aspect, a method includes receiving a request for a web page.

The method includes determining the content quality of the requested web page based on whether it is a parked web page, a content farm web page, or a link farm web page. The method includes providing for display, based on the content quality of the requested web page, a graphical component providing options to proceed to the requested web page or to an alternate web page relevant to the request for the web page.

The method includes receiving an indication of a selection of an option from the graphical component to proceed to the requested web page or to an alternate web page. The method further includes providing, based on the received indication, the requested web page or an alternate web page.”

The patent expands on examples of low-quality web pages, including:

  • Parked web pages
  • Content farm web pages
  • Link farm web pages
  • Default pages
  • Pages that do not offer useful content, and/or pages that contain advertisements and little else

An invitation to crowdsource high-quality patterns

This is the section I mentioned above where I am asking for your help. You don’t have to publish your thoughts on how quality might be identified, but I’m going to start with some examples.

Under the patent, a content quality value score is calculated for every page on a website based upon patterns found on known low-quality pages, “such as parked web pages, content farm web pages, and/or link farm web pages.”

For each of the patterns identified on a page, the content quality value of the page might be reduced based upon the presence of that particular pattern—and each pattern might be weighted differently.

Some simple patterns that might be applied to a low-quality web page might be one or more references to:

  • A known advertising network,
  • A web page parking service, and/or
  • A content farm provider

One of these references may be in the form of an IP address that the destination hostname resolves to, a Domain Name Server (“DNS server”) that the destination domain name is pointing to, an “a href” attribute on the destination page, and/or an “img src” attribute on the destination page.

That’s a pretty simple pattern, but a web page resolving to an IP address known to exclusively serve parked web pages provided by a particular Internet domain registrar can be deemed a parked web page, so it can be pretty effective.

A web page with a DNS server known to be associated with web pages that contain little or no content other than advertisements may very well provide little or no content other than advertising. So that one can be effective, too.

Some of the patterns listed in the patent don’t seem quite as useful or informative—for example, the one stating that a web page containing a common typographical error of a bona fide domain name is likely to be a low-quality or non-existent web page. I’ve seen more than a couple of legitimate sites on common misspellings of good domains, so I’m not sure how helpful that pattern is.

Of course, some textual content is a dead giveaway, the patent tells us, with terms such as “domain is for sale,” “buy this domain,” and/or “this page is parked.”

Likewise, a web page with little or no content is probably (but not always) a low-quality web page.

This is a simple but effective pattern, even if not too imaginative:

… page providing 99% hyperlinks and 1% plain text is more likely to be a low-quality web page than a web page providing 50% hyperlinks and 50% plain text.
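That ratio is easy to measure with a crawler of your own; here’s a minimal sketch using Python’s built-in HTML parser that splits a page’s visible text into anchor text versus plain text:

```python
from html.parser import HTMLParser

class TextRatio(HTMLParser):
    """Counts characters of visible text inside vs. outside anchors."""
    def __init__(self):
        super().__init__()
        self.in_link = 0
        self.link_chars = 0
        self.plain_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        n = len(data.strip())
        if self.in_link:
            self.link_chars += n
        else:
            self.plain_chars += n

def link_text_ratio(html):
    """Fraction of visible text that sits inside hyperlinks (0.0-1.0)."""
    p = TextRatio()
    p.feed(html)
    total = p.link_chars + p.plain_chars
    return p.link_chars / total if total else 0.0
```

By the patent’s logic, a page scoring near 1.0 here would look far more like a link farm than one scoring near 0.5.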

Another pattern is one that I often check and address in site audits, and it involves how functional and responsive the pages on a site are.

The determination of whether a web site is fully functional may be based on an HTTP response code, information received from a DNS server (e.g., hostname records), and/or a lack of a response within a certain amount of time. As an example, an HTTP response that is anything other than 200 (e.g., “404 Not Found”) would indicate that a web site is not fully functional.

As another example, a DNS server that does not return authoritative records for a hostname would indicate that the web site is not fully functional. Similarly, a lack of a response within a certain amount of time, from the IP address of the hostname for a web site would indicate that the web site is not fully functional.
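The three signals above can be sketched as a trivial predicate (the DNS helper is my guess at how the “authoritative records” check might be approximated; the patent doesn’t publish its implementation):

```python
import socket

def dns_resolves(hostname):
    """Rough stand-in for the patent's DNS check: True if the hostname
    has address records at all."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

def fully_functional(status_code, dns_ok, responded_in_time):
    # Mirrors the patent's three signals: a 200 response, DNS records
    # for the hostname, and a response within the time limit.
    return status_code == 200 and dns_ok and responded_in_time
```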

As for user data, sometimes it might play a role as well, as the patent tells us:

“A web page may be suggested for review and/or its content quality value may be adapted based on the amount of time spent on that page.

For example, if a user reaches a web page and then leaves immediately, the brief nature of the visit may cause the content quality value of that page to be reviewed and/or reduced. The amount of time spent on a particular web page may be determined through a variety of approaches. For example, web requests for web pages may be used to determine the amount of time spent on a particular web page.”

My example of some patterns for an e-commerce website

There are a lot of things that you might want to include on an ecommerce site that help to indicate that it’s high quality. If you look at the questions that Amit Singhal raised in the last Google Blog post I mentioned above, one of his questions was “Would you be comfortable giving your credit card information to this site?” Patterns that might fit with this question could include:

  • Is there a privacy policy linked to on pages of the site?
  • Is there a “terms of service” page linked to on pages of the site?
  • Is there a “customer service” page or section linked to on pages of the site?
  • Do ordering forms function fully on the site? Do they return 404 pages or 500 server errors?
  • If an order is made, does a thank-you or acknowledgement page show up?
  • Does the site use an https protocol when sending data or personally identifiable data (like a credit card number)?
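A first-pass automated check for a few of these patterns might look like the sketch below. The regular expressions are my own guesses at the usual wording of such links, and only a manual review can confirm that ordering forms and checkout actually work:

```python
import re

# Hypothetical patterns for the trust signals listed above.
TRUST_PATTERNS = {
    "privacy policy": r"privacy\s+policy",
    "terms of service": r"terms\s+(of\s+service|and\s+conditions)",
    "customer service": r"customer\s+(service|support)|contact\s+us",
}

def trust_signals(html, url):
    """Rough checklist of on-page trust signals for an ecommerce page."""
    found = {name: bool(re.search(pat, html, re.I))
             for name, pat in TRUST_PATTERNS.items()}
    found["https"] = url.startswith("https://")
    return found
```

A crawler could run this over every template on a site and flag pages missing several signals for human review.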

As I mentioned above, the patent tells us that the effect on a page’s content quality score might differ from one pattern to another.

The questions from Amit Singhal imply a lot of other patterns, but as SEOs who work on, build, and improve a lot of websites, this is an area where we probably have more expertise than Google’s search engineers.

What other questions would you ask if you were tasked with looking at this original Panda patent? What patterns would you suggest looking for when trying to identify high- or low-quality pages? Perhaps if we share with one another patterns or features that Google might look for algorithmically, we could build pages that might not be interpreted by Google as low quality. I provided a few patterns for an ecommerce site above. What patterns would you suggest?

(Illustrations: Devin Holmes @DevinGoFish)


Reblogged 4 years ago from feedproxy.google.com

Dear Google, Links from YouMoz Don’t Violate Your Quality Guidelines

Posted by randfish

Recently, Moz contributor Scott Wyden, a photographer in New Jersey, received a warning in his Google Webmaster Tools about some links that violated Google’s Quality Guidelines. Many, many site owners have received warnings like this, and while some are helpful hints, many (like Scott’s) include sites and links that clearly do not violate the guidelines Google has published.

Here’s a screenshot of Scott’s reconsideration request:

(note that the red text was added by Scott as a reminder to himself)

As founder, board member, and majority shareholder of Moz, which owns Moz.com (of which YouMoz is a part), I’m here to tell Google that Scott’s link from the YouMoz post was absolutely editorial. Our content team reviews every YouMoz submission. We reject the vast majority of them. We publish only those that are of value and interest to our community. And we check every frickin’ link.

Scott’s link, ironically, came from this post about Building Relationships, Not Links. It’s a good post with helpful information, good examples, and a message which I strongly support. I also, absolutely, support Scott’s earning of a link back to his Photography SEO community and to his page listing business books for photographers (this link was recently removed from the post at Scott’s request). Note that “Photography SEO community” isn’t just a descriptive name, it’s also the official brand name of the site Scott built. Scott linked the way I believe content creators should on the web: with descriptive anchor text that helps inform a reader what they’re going to find on that page. In this case, it may overlap with keywords Scott’s targeting for SEO, but I find it ridiculous to hurt usability in the name of tiptoeing around Google’s potential overenforcement. That’s a one-way ticket to a truly inorganic, Google-shaped web.

If Google doesn’t want to count those links, that’s their business (though I’d argue they’re losing out on a helpful link that improves the link graph and the web overall). What’s not OK is Google’s misrepresentation of Moz’s link as “inorganic” and “in violation of our quality guidelines” in their Webmaster Tools.

I really wish YouMoz was an outlier. Sadly, I’ve been seeing more and more of these frustratingly misleading warnings from Google Webmaster Tools.

(via this tweet)

Several months ago, Jen Lopez, Moz’s director of community, had an email conversation with Google’s Head of Webspam, Matt Cutts. Matt granted us permission to publish portions of that discussion, which you can see below:

Jen Lopez: Hey Matt,

I made the mistake of emailing you while you weren't answering outside emails for 30 days. 😀 I wanted to bring this up again, though, because we have a question going on in Q&A right now about the topic. People are worried that they'll get penalized if they guest post on Moz: http://moz.com/community/q/could-posting-on-youmoz-get-your-penalized-for-guest-blogging. I was curious if you'd like to jump in and respond, or give your thoughts on the topic?

Thanks!

Matt Cutts: Hey, the short answer is that if a site A links to spammy sites, that can affect site A’s reputation. That shouldn’t be a shock–I think we’ve talked about the hazards of linking to bad neighborhoods for a decade or so.

That said, with the specific instance of Moz.com, for the most part it’s an example of a site that does good due diligence, so on average Moz.com is linking to non-problematic sites. If Moz were to lower its quality standards then that could eventually affect Moz’s reputation.

The factors that make things safer are the commonsense things you'd expect, e.g. adding a nofollow will eliminate the linking issue completely. Short of that, keyword rich anchortext is higher risk than navigational anchortext like a person or site's name, and so on.

Jen, in particular, has been a champion of high standards and non-spammy guest publishing, and I'm very appreciative to Matt for the thoughtful reply (which matches our beliefs). Her talk at SMX Sydney—Guest Blogging Isn't Dead, But Blogging Just for Links Is—and her post—Time for Guest Blogging With a Purpose—help explain Moz's position on the subject (one I believe Google shares).

I can promise that our quality standards are only going up (you can read Keri's post on YouMoz policies to get a sense of how seriously we take our publishing), that Scott's link in particular was entirely editorial, organic, and intentional, and that we take great steps to ensure that all of our authors and links are carefully vetted.

We’d love if Google’s webmaster review team used the same care when reviewing and calling out links in Webmaster Tools. It would help make the web (and Google’s search engine) a better place.


The New Link Building Survey 2014 – Results

Posted by JamesAgate

Many of you may have seen Skyrocket SEO’s Link Building Survey results that we published here on Moz around this same time last year. The reception was fantastic, so we decided to push ahead with turning this into an annual series to see how this strand of the industry is developing and evolving over time.

Firstly, “link building”…

Yep, we’ve not changed the name to a “content marketing survey” or “inbound link acquisition survey;” we still feel link building is a vital part of an SEO’s arsenal of tactics, and therefore it deserves its own survey.

As a company we’re investing just as much in link building for our clients (granted, we’ve adapted what we are doing), but the fact remains that if you want to score big with decent organic search visibility then you need links.

Now that that’s out of the way, let’s get down to the details:

Who took the survey?

A massive thank you to the 315 or so people who took the survey. That number is slightly down from last year, which I feel is partly due to fewer people considering link building to be part of their day-to-day roles; I’d argue that’s a missed opportunity. This year we also had a few duplicate entries and submissions that needed a bit of tidying up, so we trimmed the data back to these 315 submissions.

The makeup of the respondents was broadly similar to last year, as expected, although based on user feedback from our inaugural survey, we added a few more categories for respondents to self-classify—so it is hard to make specific comparisons.

How much does your company spend on link building per month?

In the 2013 survey, 10% of respondents said their company spent $50k+ per month on link building, so it appears that the upper limit to link building spend may have decreased slightly across the industry.

That being said, there now appears to be a much larger number of companies in the $10-$50k per month bracket when you compare this year’s 37% with last year’s 11%.

I would attribute the changes year-on-year to two factors:

  • Reclassification of the term “link building:” Many companies have shifted budget that they would previously have classified as link building budget into content projects that more than likely still have an impact on link building efforts.
  • Recognition of opportunity: Based on our own experiences we see a number of website owners and businesses pushing harder with their content promotion and link building as they recognise an opportunity to invest when their competitors are running scared.

Warren Buffett once said “Be fearful when others are greedy and greedy when others are fearful.” Based on conversations alone that I’ve had with a wide range of businesses, many are now fearful when it comes to building links. In fact, we gathered some data later in the survey that revealed that one of the biggest challenges people face is not knowing which links will help and which will harm them. Google’s widespread action against websites (and dare I say it webmaster propaganda) has had a dramatic impact on some people to the point of paralysis.

There are clear opportunities that, with a sound strategy, can be seized in today’s market.

You can build links like it’s 1999 for a microsite or second-level property, keep it super-clean and identify link opportunities that would be valuable irrespective of Google, or land somewhere in between those extremes. The fact is that links still form the backbone of the internet and of Google’s algorithm, and that isn’t going to change for a very long time.

What percentage of your overall SEO budget is allocated toward building links?

Thanks to John-Henry Scherck for this one, as he made the suggestion following the 2013 survey that having data on this percentage would be really interesting. Looking back we don’t have a point of comparison, but of course moving forward we will, so we should get a clearer picture of whether online marketing budgets are just increasing in general (and therefore link building gets allocated the same percentage but of a bigger pie) or whether folks are seeing the value from building links and therefore allocating a larger percentage of the same-sized pie to link building activities.

Would you say you’ve increased or decreased your spend on link building over the past 12 months?

This aligns with our data on more people entering the $10-$50k per month investment bracket this year:

Why the increase/decrease in spending?

We asked people why they decided to increase or decrease their spending on link building over the past 12 months.

Responses could be categorized into the following areas:

Common reason for increases:

  • Increased costs related to moving away from older style and often “cheaper” link building
  • Increased costs related to production/creativity
  • Good links are just as important as ever; links still move the needle in terms of search engine visibility and performance therefore it makes sense to increase investment in this area.

Common reasons for decreases:

  • Moving link building budget into content marketing projects (to be fair, this budget will probably indirectly fund link acquisition of some kind even if it is seen as a secondary goal for the content campaign.)
  • We wanted to scale back and assess the impact that Google’s manual actions, etc., have on our website.

In the next 12 months, will you look to increase or decrease your spend on link building?

Why the planned increase/decrease in spending?

  • Link building continues to get more expensive
  • To raise the bar on existing efforts, and to beat competitors with increasingly sophisticated content assets
  • Unsure where to invest/which links are working so concentrating budget into other activities.

Which link building tactics do you utilise most often?

(Numbers listed are votes rather than percentages)

When we compare with responses from the 2013 survey, there is a clear shift towards content-led initiatives and a reduction in some tactics. For example, close to 50% said in 2013 that guest blogging was their staple tactic; in 2014, fewer than 15% listed it as one of their staple activities.

Another interesting bit of data is the fact that paid links have seen somewhat of a resurgence in popularity, presumably as companies look for tactics where they can maintain greater control. In 2013, just 5% listed paid links as their staple linking tactic whereas in 2014 over 13% reported paid linking and blog networks as one of their main link building tactics.

What is currently your biggest link building challenge?

  • Getting links to pages that aren’t particularly linkworthy (money pages)
  • Lack of scalability (time, process, training, spreading time between clients)
  • Avoiding Google penalties

These are similar challenges to those reported in 2013 in the sense that there is still concern over which links are helping and harming organic search performance as well as difficulties relating to processes and the lack of scalability.

The interesting thing is that SEO is full of challenges, so as soon as one is overcome, the next appears. In 2013, 28% of respondents said that “finding link prospects” was a key challenge, but this year link prospects weren’t mentioned as an issue at all. This arguably suggests that, as an industry, we were adjusting to the “new world” back in 2013, and that we have since advanced our capabilities enough that this is no longer the primary challenge in our day-to-day work. Now the main problem doesn’t seem to be getting links as such, but getting links to the pages we all need to rank to stay in business … the money pages.

Which link building tactics do you believe to be most effective?

(numbers below are “votes” rather than percentages)

Which link building tactics do you believe to be least effective?

(numbers below are “votes” rather than percentages)

Which link building tactics do you consider to be harmful to a site?

(numbers below are “votes” rather than percentages)

See the complete visual below:

Thank you to everyone who took part in the survey! See you all again next year.
