Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But because of the increasing popularity of this article, and because there’s this hopefulness from worlds outside of the hardcore SEO world that are looking to this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO,” I wanted to address it here.

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

You make a great website that delivers great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and others are SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really gets in my craw. I don’t think “write quality content” means anything. You tell me. When you hear that, to me that is a totally non-actionable, non-useful phrase that’s a piece of advice that is so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (which he notes most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. So, in the 10 minutes before we filmed this video, I brainstormed a list just off the top of my head. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we robots.txt those out? Should we disallow from crawling with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve given them a status code of 200. Or you’ve used a 404 code when you should have used a 410, which is a permanent removal, to get the page completely out of the indexes, as opposed to having Google revisit it and keep it in the index.

Downtime procedures. There’s a specific 5xx code for this, a 503 (“Service Unavailable”), that tells crawlers, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than using a 404, which tells them, “This page is now an error.”

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.
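
To make those status codes concrete, here’s a rough sketch of how the responses differ (the paths here are made up for illustration):

    # A page that's gone for good: return 410 so engines drop it from the index faster
    GET /old-promo-page            ->  HTTP/1.1 410 Gone

    # A page that simply doesn't exist: return a real 404, not a "soft" 200
    GET /no-such-page              ->  HTTP/1.1 404 Not Found

    # Planned downtime: return 503 plus a Retry-After header so engines come back later
    GET /any-page (during maintenance)
        HTTP/1.1 503 Service Unavailable
        Retry-After: 3600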

  • International and multi-language targeting issues. I won’t go into that. But you have to know the protocols there. Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics, connecting those up and trying to see: Why did we go up or down? Did we have fewer or more pages indexed? Are more or fewer pages getting traffic? Are we ranking for more or fewer keywords?
  • Understanding advanced search parameters. Today, just today, I was checking out the related: parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site (well, most other sites on the web) it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and so on and so on. (inanchor: doesn’t work anymore, never mind about that one.)

I have to go faster, because we’re just going to run out of these. Like, come on. Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you, you have all sorts of errors, and you don’t know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, X-Robots-Tag, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical and you don’t know what those are, if you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or MozScape, or Ahrefs or Majestic, or SEMrush or Searchmetrics, or the Alchemy API. Those APIs can do powerful things for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there; there’s a short sketch of that right after this list. There’s a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
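
As a quick illustration of the API point above (this is only a sketch, with a made-up endpoint and key rather than any particular service’s real interface), pulling metrics for a list of URLs into a CSV can be as simple as:

    import csv
    import requests  # pip install requests

    API_URL = "https://api.example-link-service.com/v1/url-metrics"  # hypothetical endpoint
    API_KEY = "YOUR_KEY_HERE"
    urls = ["https://www.example.com/", "https://www.example.com/blog/"]

    with open("url_metrics.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "links", "authority"])
        for url in urls:
            resp = requests.get(API_URL, params={"target": url, "key": API_KEY})
            data = resp.json()  # assumes the service responds with JSON
            writer.writerow([url, data.get("links"), data.get("authority")])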

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Distance from Perfect

Posted by wrttnwrd

In spite of all the advice, the strategic discussions and the conference talks, we Internet marketers are still algorithmic thinkers. That’s obvious when you think of SEO.

Even when we talk about content, we’re algorithmic thinkers. Ask yourself: How many times has a client asked you, “How much content do we need?” How often do you still hear “How unique does this page need to be?”

That’s 100% algorithmic thinking: Produce a certain amount of content, move up a certain number of spaces.

But you and I know it’s complete bullshit.

I’m not suggesting you ignore the algorithm. You should definitely chase it. Understanding a little bit about what goes on in Google’s pointy little head helps. But it’s not enough.

A tale of SEO woe that makes you go “whoa”

I have this friend.

He ranked #10 for “flibbergibbet.” He wanted to rank #1.

He compared his site to the #1 site and realized the #1 site had five hundred blog posts.

“That site has five hundred blog posts,” he said, “I must have more.”

So he hired a few writers and cranked out five thousand blog posts that melted Microsoft Word’s grammar check. He didn’t move up in the rankings. I’m shocked.

“That guy’s spamming,” he decided, “I’ll just report him to Google and hope for the best.”

What happened? Why didn’t adding five thousand blog posts work?

It’s pretty obvious: My, uh, friend added nothing but crap content to a site that was already outranked. Bulk is no longer a ranking tactic. Google’s very aware of that tactic. Lots of smart engineers have put time into updates like Panda to compensate.

He started like this, and ended up like this: more posts, no rankings.

Alright, yeah, I was Mr. Flood The Site With Content, way back in 2003. Don’t judge me, whippersnappers.

Reality’s never that obvious. You’re scratching and clawing to move up two spots, you’ve got an overtasked IT team pushing back on changes, and you’ve got a boss who needs to know the implications of every recommendation.

Why fix duplication if rel=canonical can address it? Fixing duplication will take more time and cost more money. It’s easier to paste in one line of code. You and I know it’s better to fix the duplication. But it’s a hard sell.

Why deal with 302 versus 404 response codes and home page redirection? The basic user experience remains the same. Again, we just know that a server should return one home page without any redirects and that it should send a ‘not found’ 404 response if a page is missing. If it’s going to take 3 developer hours to reconfigure the server, though, how do we justify it? There’s no flashing sign reading “Your site has a problem!”

Why change this thing and not that thing?

At the same time, our boss/client sees that the site above theirs has five hundred blog posts and thousands of links from sites selling correspondence MBAs. So they want five thousand blog posts and cheap links as quickly as possible.

Cue crazy music.

SEO lacks clarity

SEO is, in some ways, for the insane. It’s an absurd collection of technical tweaks, content thinking, link building and other little tactics that may or may not work. A novice gets exposed to one piece of crappy information after another, with an occasional bit of useful stuff mixed in. They create sites that repel search engines and piss off users. They get more awful advice. The cycle repeats. Every time it does, best practices get more muddled.

SEO lacks clarity. We can’t easily weigh the value of one change or tactic over another. But we can look at our changes and tactics in context. When we examine the potential of several changes or tactics before we flip the switch, we get a closer balance between algorithm-thinking and actual strategy.

Distance from perfect brings clarity to tactics and strategy

At some point you have to turn that knowledge into practice. You have to take action based on recommendations, your knowledge of SEO, and business considerations.

That’s hard when we can’t even agree on subdomains vs. subfolders.

I know subfolders work better. Sorry, couldn’t resist. Let the flaming comments commence.

To get clarity, take a deep breath and ask yourself:

“All other things being equal, will this change, tactic, or strategy move my site closer to perfect than my competitors?”

Breaking it down:

“Change, tactic, or strategy”

A change takes an existing component or policy and makes it something else. Replatforming is a massive change. Adding a new page is a smaller one. Adding ALT attributes to your images is another example. Changing the way your shopping cart works is yet another.

A tactic is a specific, executable practice. In SEO, that might be fixing broken links, optimizing ALT attributes, optimizing title tags or producing a specific piece of content.

A strategy is a broader decision that’ll cause change or drive tactics. A long-term content policy is the easiest example. Shifting away from asynchronous content and moving to server-generated content is another example.

“Perfect”

No one knows exactly what Google considers “perfect,” and “perfect” can’t really exist, but you can bet a perfect web page/site would have all of the following:

  1. Completely visible content that’s perfectly relevant to the audience and query
  2. A flawless user experience
  3. Instant load time
  4. Zero duplicate content
  5. Every page easily indexed and classified
  6. No mistakes, broken links, redirects or anything else generally yucky
  7. Zero reported problems or suggestions in each search engine’s webmaster tools, sorry, “Search Consoles”
  8. Complete authority through immaculate, organically-generated links

These 8 categories (and any of the other bazillion that probably exist) give you a way to break down “perfect” and help you focus on what’s really going to move you forward. These different areas may involve different facets of your organization.

Your IT team can work on load time and creating an error-free front- and back-end. Link building requires the time and effort of content and outreach teams.

Tactics for relevant, visible content and current best practices in UX are going to be more involved, requiring research and real study of your audience.

What you need and what resources you have are going to impact which tactics are most realistic for you.

But there’s a basic rule: If a website would make Googlebot swoon and present zero obstacles to users, it’s close to perfect.

“All other things being equal”

Assume every competing website is optimized exactly as well as yours.

Now ask: Will this [tactic, change or strategy] move you closer to perfect?

That’s the “all other things being equal” rule. And it’s an incredibly powerful rubric for evaluating potential changes before you act. Pretend you’re in a tie with your competitors. Will this one thing be the tiebreaker? Will it put you ahead? Or will it cause you to fall behind?

“Closer to perfect than my competitors”

Perfect is great, but unattainable. What you really need is to be just a little perfect-er.

Chasing perfect can be dangerous. Perfect is the enemy of the good (I love that quote. Hated Voltaire. But I love that quote). If you wait for the opportunity/resources to reach perfection, you’ll never do anything. And the only way to reduce distance from perfect is to execute.

Instead of aiming for pure perfection, aim for more perfect than your competitors. Beat them feature-by-feature, tactic-by-tactic. Implement strategy that supports long-term superiority.

Don’t slack off. But set priorities and measure your effort. If fixing server response codes will take one hour and fixing duplication will take ten, fix the response codes first. Both move you closer to perfect. Fixing response codes may not move the needle as much, but it’s a lot easier to do. Then move on to fixing duplicates.

Do the 60% that gets you a 90% improvement. Then move on to the next thing and do it again. When you’re done, get to work on that last 40%. Repeat as necessary.

Take advantage of quick wins. That gives you more time to focus on your bigger solutions.

Sites that are “fine” are pretty far from perfect

Google has lots of tweaks, tools and workarounds to help us mitigate sub-optimal sites:

  • Rel=canonical lets us guide Google past duplicate content rather than fix it
  • HTML snapshots let us reveal content that’s delivered using asynchronous content and JavaScript frameworks
  • We can use rel=next and prev to guide search bots through outrageously long pagination tunnels
  • And we can use rel=nofollow to hide spammy links and banners

Easy, right? All of these solutions may reduce distance from perfect (the search engines don’t guarantee it). But they don’t reduce it as much as fixing the problems.
Just fine does not equal fixed
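
For reference, here’s roughly what those workarounds look like in markup (the URLs are placeholders), which is exactly why they feel so easy compared with fixing the underlying problem:

    <!-- Point engines at the preferred version instead of fixing the duplication -->
    <link rel="canonical" href="https://www.example.com/widgets/" />

    <!-- Stitch a long pagination tunnel together -->
    <link rel="prev" href="https://www.example.com/widgets/?page=2" />
    <link rel="next" href="https://www.example.com/widgets/?page=4" />

    <!-- Decline to vouch for a paid banner -->
    <a href="https://advertiser.example/" rel="nofollow">Sponsored banner</a>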

The next time you set up rel=canonical, ask yourself:

“All other things being equal, will using rel=canonical to make up for duplication move my site closer to perfect than my competitors?”

Answer: Not if they’re using rel=canonical, too. You’re both using imperfect solutions that force search engines to crawl every page of your site, duplicates included. If you want to pass them on your way to perfect, you need to fix the duplicate content.

When you use Angular.js to deliver regular content pages, ask yourself:

“All other things being equal, will using HTML snapshots instead of actual, visible content move my site closer to perfect than my competitors?”

Answer: No. Just no. Not in your wildest, code-addled dreams. If I’m Google, which site will I prefer? The one that renders for me the same way it renders for users? Or the one that has to deliver two separate versions of every page?

When you spill banner ads all over your site, ask yourself…

You get the idea. Nofollow is better than follow, but banner pollution is still pretty dang far from perfect.

Mitigating SEO issues with search engine-specific tools is “fine.” But it’s far, far from perfect. If search engines are forced to choose, they’ll favor the site that just works.

Not just SEO

By the way, distance from perfect absolutely applies to other channels.

I’m focusing on SEO, but think of other Internet marketing disciplines. I hear stuff like “How fast should my site be?” (Faster than it is right now.) Or “I’ve heard you shouldn’t have any content below the fold.” (Maybe in 2001.) Or “I need background video on my home page!” (Why? Do you have a reason?) Or, my favorite: “What’s a good bounce rate?” (Zero is pretty awesome.)

And Internet marketing venues are working to measure distance from perfect. Pay-per-click marketing has the quality score: a codified financial reward for reducing distance from perfect in as many elements as possible of your advertising program.

Social media venues are aggressively building their own forms of graphing, scoring and ranking systems designed to separate the good from the bad.

Really, all marketing includes some measure of distance from perfect. But no channel is more influenced by it than SEO. Instead of arguing one rule at a time, ask yourself and your boss or client: Will this move us closer to perfect?

Hell, you might even please a customer or two.

One last note for all of the SEOs in the crowd. Before you start pointing out edge cases, consider this: We spend our days combing Google for embarrassing rankings issues. Every now and then, we find one, point, and start yelling “SEE! SEE!!!! THE GOOGLES MADE MISTAKES!!!!” Google’s got lots of issues. Screwing up the rankings isn’t one of them.


Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and the nofollow tag, though nofollow is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks, when you say, “Hey, disallow this,” and then you suddenly see those URLs popping up, wonder what’s going on. Look, Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake. They think, “Hey, there are a lot of links pointing to this content, there are a lot of people who are visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.
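
For example (the paths here are hypothetical), a specific rule versus a broad wildcard rule looks like this:

    User-agent: *
    # Specific: a single URL, which engines tend to respect
    Disallow: /old-press-release-2012.html

    # Broad: a whole directory plus a wildcard, which they're more likely to second-guess
    Disallow: /archive/
    Disallow: /*?sessionid=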

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say, “All user agents, you are not allowed to crawl blogtest.html”? Okay, that’s a good way to keep that page from being crawled, but just because something isn’t crawled doesn’t necessarily mean it won’t be in the search results.

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.
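
In other words, here’s a sketch of the setup that causes the confusion above, versus the one that actually gets the page removed:

    # robots.txt -- this combination hides the noindex from Google:
    User-agent: *
    Disallow: /blogtest.html

    <!-- blogtest.html: never seen, because the page can't be crawled -->
    <meta name="robots" content="noindex, follow">

    # The fix: delete the Disallow line so the page can be crawled,
    # and let the meta noindex keep it out of the index.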

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots can allow crawling and link-following while disallowing indexation, which is great, but it does require crawl budget (the page has to be crawled for the noindex to be seen), even though you can still keep it out of the index.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.
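
A minimal sketch of that approach, with made-up folder names:

    # robots.txt while the manufacturer descriptions are being rewritten
    User-agent: *
    Disallow: /products/manufacturer-descriptions/

    # Later, as each folder becomes ready: remove its Disallow line and list those
    # URLs in an XML sitemap, e.g. https://www.example.com/sitemap-products.xml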

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
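
A quick sketch of what that looks like on the variant pages (the file names are illustrative):

    <!-- On starwarsshirt-gray.html, starwarsshirt-blue.html, and starwarsshirt-black.html -->
    <link rel="canonical" href="https://www.example.com/starwarsshirt.html" />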

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. What happens if you disallow crawling on those, Google can’t see the noindex. They don’t know that they can follow it. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior. Trust that the disallow in robots.txt will prevent them from crawling. So I would say, the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say, “Hey, our internal search engine is really for internal visitors only. It’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to turn those into category landing pages,” then you can use the disallow in robots.txt to prevent those from being crawled.

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.
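
If you do go that route, the rule itself is short, assuming your internal search pages live under /search or use a q= parameter (adjust for your own URL structure):

    User-agent: *
    Disallow: /search/
    Disallow: /*?q=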

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!


​The 3 Most Common SEO Problems on Listings Sites

Posted by Dom-Woodman

Listings sites have a very specific set of search problems that you don’t run into everywhere else. By day I’m one of Distilled’s analysts, but by night I run a job listings site, teflSearch. So, for my first Moz Blog post I thought I’d cover the three search problems with listings sites that I spent far too long agonising about.

Quick clarification time: What is a listings site (i.e. will this post be useful for you)?

The classic listings site is Craigslist, but plenty of other sites act like listing sites:

  • Job sites like Monster
  • E-commerce sites like Amazon
  • Matching sites like Spareroom

1. Generating quality landing pages

The landing pages on listings sites are incredibly important. These pages are usually the primary drivers of converting traffic, and they’re usually generated automatically (or are occasionally custom category pages).

For example, if I search “Jobs in Manchester”, you can see nearly every result is an automatically generated landing page or category page.

There are three common ways to generate these pages (occasionally a combination of more than one is used):

  • Faceted pages: These are generated by facets—groups of preset filters that let you filter the current search results. They usually sit on the left-hand side of the page.
  • Category pages: These pages are listings which have already had a filter applied and can’t be changed. They’re usually custom pages.
  • Free-text search pages: These pages are generated by a free-text search box.

Those definitions are still a bit general; let’s clear them up with some examples:

Amazon uses a combination of categories and facets. If you click on browse by department you can see all the category pages. Then on each category page you can see a faceted search. Amazon is so large that it needs both.

Indeed generates its landing pages through free-text search; for example, if we search for “IT jobs in Manchester”, it will generate an “IT jobs in Manchester” landing page.

teflSearch generates landing pages using just facets. The jobs in China landing page is simply a facet of the main search page.

Each method has its own search problems when used for generating landing pages, so let’s tackle them one by one.


Aside

Facets and free-text search will typically generate pages with parameters, e.g. a search for “dogs” would produce:

www.mysite.com?search=dogs

But to make the URLs user-friendly, sites will often alter them to display as folders:

www.mysite.com/results/dogs/

These are still just ordinary free-text searches and facets; the URLs are simply user-friendly. (They’re a lot easier to work with in robots.txt, too!)
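
For what it’s worth, those folder-style URLs are usually just a rewrite sitting in front of the same search handler. A minimal Apache-style sketch, with invented paths and parameter names:

    # .htaccess
    RewriteEngine On
    # /results/dogs/  ->  served internally as /search?query=dogs
    RewriteRule ^results/([^/]+)/?$ /search?query=$1 [L,QSA]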


Free search (& category) problems

If you’ve decided the base of your search will be a free text search, then we’ll have two major goals:

  • Goal 1: Helping search engines find your landing pages
  • Goal 2: Giving them link equity.

Solution

Search engines won’t use search boxes and so the solution to both problems is to provide links to the valuable landing pages so search engines can find them.

There are plenty of ways to do this, but two of the most common are:

  • Category links alongside a search

    Photobucket uses a free-text search to generate pages, but if we look at an example search for photos of dogs, we can see the categories which define the landing pages along the right-hand side. (This is also an example of user-friendly search URLs!)

  • Putting the main landing pages in a top-level menu

    Indeed also uses free text to generate landing pages, and they have a browse jobs section which contains the URL structure to allow search engines to find all the valuable landing pages.

Breadcrumbs are often used in addition to the two approaches above, and in both of the examples you’ll find breadcrumbs that reinforce that hierarchy.

Category (& facet) problems

Categories, because they tend to be custom pages, don’t actually have many search disadvantages. Instead it’s the other attributes that make them more or less desirable. You can create them for the purposes you want and so you typically won’t have too many problems.

However, if you also use a faceted search in each category (like Amazon) to generate additional landing pages, then you’ll run into all the problems described in the next section.

At first facets seem great, an easy way to generate multiple strong relevant landing pages without doing much at all. The problems appear because people don’t put limits on facets.

Let’s take the jobs page on teflSearch. We can see it has 18 facets, each with many options. Some of these options will generate useful landing pages:

The “China” option in the countries facet will generate “Jobs in China,” and that’s a useful landing page.

On the other hand, the “Conditional Bonus” facet will generate “Jobs with a conditional bonus,” and that’s not so great.

We can also see that the options within a single facet aren’t always useful. As of writing, I have a single job available in Serbia. That’s not a useful search result, and the poor user engagement combined with the tiny amount of content will be a strong signal to Google that it’s thin content. Depending on the scale of your site it’s very easy to generate a mass of poor-quality landing pages.

Facets generate other problems too. The primary one being they can create a huge amount of duplicate content and pages for search engines to get lost in. This is caused by two things: The first is the sheer number of possibilities they generate, and the second is because selecting facets in different orders creates identical pages with different URLs.

We end up with four goals for our facet-generated landing pages:

  • Goal 1: Make sure our searchable landing pages are actually worth landing on, and that we’re not handing a mass of low-value pages to the search engines.
  • Goal 2: Make sure we don’t generate multiple copies of our automatically generated landing pages.
  • Goal 3: Make sure search engines don’t get caught in the metaphorical plastic six-pack rings of our facets.
  • Goal 4: Make sure our landing pages have strong internal linking.

The first goal needs to be set internally; you’re always going to be the best judge of the number of results that need to present on a page in order for it to be useful to a user. I’d argue you can rarely ever go below three, but it depends both on your business and on how much content fluctuates on your site, as the useful landing pages might also change over time.

We can solve the next three problems as a group. There are several possible solutions depending on what skills and resources you have access to; here are two of them:

Category/facet solution 1: Blocking the majority of facets and providing external links
  • Easiest method
  • Good if your valuable category pages rarely change and you don’t have too many of them.
  • Can be problematic if your valuable facet pages change a lot

Nofollow all your facet links, and noindex and block (using robots.txt) category pages which aren’t valuable or which sit deeper than x facet/folder levels into your search.

You set x by looking at where your useful facet pages that have search volume exist. So, for example, if you have three facets for televisions (manufacturer, size, and resolution) and even combinations of all three have multiple results and search volume, then you could set x so that you index everything up to three levels.

On the other hand, if people are searching for three levels (e.g. “Samsung 42″ Full HD TV”) but you only have one or two results for three-level facets, then you’d be better off indexing two levels and letting the product pages themselves pick up long-tail traffic for the third level.

If you have valuable facet pages that exist deeper than one facet or folder into your search, then this creates some duplicate content problems, which are dealt with in the aside “Indexing more than one level of facets” below.
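
A rough sketch of what this looks like in practice, assuming facet selections are appended as folders and x = 2 (the paths are invented, and the Disallow pattern relies on Google-style wildcard support):

    # robots.txt -- block facet combinations deeper than two levels
    User-agent: *
    Disallow: /jobs/*/*/*/

    <!-- and in the templates, nofollow the facet links -->
    <a href="/jobs/china/conditional-bonus/" rel="nofollow">Conditional bonus</a>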

The immediate problem with this set-up, however, is that in one stroke we’ve removed most of the internal links to our category pages, and by no-following all the facet links, search engines won’t be able to find your valuable category pages.

In order to re-create the linking, you can add a top-level drop-down menu to your site containing the most valuable category pages, add category links elsewhere on the page, or create a separate part of the site with links to the valuable category pages.

You can see the top-level drop-down menu on teflSearch (it’s the “search jobs” menu); the other two approaches are demonstrated by Photobucket and Indeed, respectively, in the previous section.

The big advantage of this method is how quick it is to implement: it doesn’t require any fiddly internal logic, and adding an extra menu option is usually minimal effort.

Category/facet solution 2: Creating internal logic to work with the facets

  • Requires new internal logic
  • Works for large numbers of category pages with value that can change rapidly

There are four parts to the second solution (a rough sketch of the logic follows the list):

  1. Select valuable facet categories and allow those links to be followed. No-follow the rest.
  2. No-index all pages that return a number of items below the threshold for a useful landing page
  3. No-follow all facets on pages with a search depth greater than x.
  4. Block all facet pages deeper than x level in robots.txt
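
As promised, a very rough sketch of that logic; the thresholds, attribute names, and helper functions here are all hypothetical, so treat it as an outline rather than a drop-in implementation:

    MIN_RESULTS = 3       # below this, the page isn't a useful landing page
    MAX_FOLLOW_DEPTH = 2  # "x": the deepest facet level we want crawled and followed

    def meta_robots(page):
        # Parts 1-3: index only pages with enough results; follow only shallow pages
        index = "index" if page.result_count >= MIN_RESULTS else "noindex"
        follow = "follow" if page.facet_depth <= MAX_FOLLOW_DEPTH else "nofollow"
        return f'<meta name="robots" content="{index}, {follow}">'

    def facet_link_attrs(page, facet_is_valuable):
        # Part 1: only valuable facet links on shallow pages get followed
        if facet_is_valuable and page.facet_depth < MAX_FOLLOW_DEPTH:
            return ""
        return 'rel="nofollow"'

    # Part 4 still lives in robots.txt, as in the first solution.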

As with the last solution, x is set by looking at where your useful facet pages exist that have search volume (full explanation in the first solution), and if you’re indexing more than one level you’ll need to check out the aside below to see how to deal with the duplicate content it generates.


Aside: Indexing more than one level of facets

If you want more than one level of facets to be indexable, then this will create certain problems.

Suppose you have a facet for size:

  • Televisions: Size: 46″, 44″, 42″

And want to add a brand facet:

  • Televisions: Brand: Samsung, Panasonic, Sony

This will create duplicate content because the search engines will be able to follow your facets in both orders, generating:

  • Television – 46″ – Samsung
  • Television – Samsung – 46″

You’ll have to either rel canonical your duplicate pages with another rule or set up your facets so they create a single unique URL.

You also need to be aware that each followable facet you add will multiply with each other followable facet and it’s very easy to generate a mass of pages for search engines to get stuck in. Depending on your setup you might need to block more paths in robots.txt or set-up more logic to prevent them being followed.

Letting search engines index more than one level of facets adds a lot of possible problems; make sure you’re keeping track of them.


2. User-generated content cannibalization

This is a common problem for listings sites (assuming they allow user-generated content). If you’re reading this as an e-commerce site that only lists its own products, you can skip this one.

As we covered in the first area, category pages on listings sites are usually the landing pages aiming for the valuable search terms, but as your users start generating pages they can often create titles and content that cannibalise your landing pages.

Suppose you’re a job site with a category page for PHP Jobs in Greater Manchester. If a recruiter then creates a job advert for PHP Jobs in Greater Manchester for the 4 positions they currently have, you’ve got a duplicate content problem.

This is less of a problem when your site is large and your categories are mature; it will be obvious to any search engine which pages are your high-value category pages. But early on, when you’re lacking authority and individual listings might contain more relevant content than your own search pages, it can be a real issue.

Solution 1: Create structured titles

Set the <title> differently from the on-page title. Depending on the variables you have available, you can set the title tag programmatically, without changing the visible page title, using other information given by the user.

For example, on our imaginary job site, suppose the recruiter also provided the following information in other fields:

  • The no. of positions: 4
  • The primary area: PHP Developer
  • The name of the recruiting company: ABC Recruitment
  • Location: Manchester

We could set the <title> pattern to be: *No of positions* *The primary area* with *recruiter name* in *Location* which would give us:

4 PHP Developers with ABC Recruitment in Manchester
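
A sketch of how that pattern might be filled in programmatically (the field names are invented for the example):

    def listing_title(positions, role, recruiter, location):
        # *No of positions* *The primary area* with *recruiter name* in *Location*
        return f"{positions} {role}s with {recruiter} in {location}"

    print(listing_title(4, "PHP Developer", "ABC Recruitment", "Manchester"))
    # -> 4 PHP Developers with ABC Recruitment in Manchester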

Setting a <title> tag allows you to target long-tail traffic by constructing detailed descriptive titles. In our above example, imagine the recruiter had specified “Castlefield, Manchester” as the location.

All of a sudden, you’ve got a perfect opportunity to pick up long-tail traffic for people searching in Castlefield in Manchester.

On the downside, you lose the ability to pick up long-tail traffic where your users have chosen keywords you wouldn’t have used.

For example, suppose Manchester has a jobs program called “Green Highway.” A job advert title containing “Green Highway” might pick up valuable long-tail traffic. Being able to discover this, however, and find a way to fit it into a dynamic title is very hard.

Solution 2: Use regex to noindex the offending pages

Perform a regex (or string-contains) search on your listings’ titles and noindex the ones which cannibalise your main category pages.

If it’s not possible to construct titles with variables, or your users provide a lot of additional long-tail traffic with their own titles, then this is a great option. On the downside, you miss out on the structured long-tail traffic that you might have been able to aim for.
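
A minimal sketch of that check (the category patterns and the way you apply the noindex will depend on your platform):

    import re

    # One pattern per valuable category page you want to protect
    CATEGORY_PATTERNS = [
        r"php jobs in greater manchester",
        r"php jobs in manchester",
    ]

    def should_noindex(listing_title):
        title = listing_title.lower()
        return any(re.search(pattern, title) for pattern in CATEGORY_PATTERNS)

    # should_noindex("PHP Jobs in Greater Manchester - 4 positions")  ->  True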

Solution 3: De-index all your listings

It may seem rash, but if you’re a large site with a huge number of very similar or low-content listings, you might want to consider this. There’s no common standard: some sites, like Indeed, choose to noindex all their job adverts, whereas others, like Craigslist, index all their individual listings because they drive long-tail traffic.

Don’t de-index them all lightly!

3. Constantly expiring content

Our third and final problem is that user-generated content doesn’t last forever. Particularly on listings sites, it’s constantly expiring and changing.

For most use cases I’d recommend 301’ing expired content to a relevant category page, with a message triggered by the redirect notifying the user of why they’ve been redirected. It typically comes out as the best combination of search and UX.
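
One way to wire that up, sketched in Flask with an invented data-access helper:

    from flask import Flask, redirect, render_template

    app = Flask(__name__)

    def get_job(job_id):
        ...  # hypothetical lookup; returns an object with .expired and .category_slug, or None

    @app.route("/jobs/<int:job_id>")
    def job_listing(job_id):
        job = get_job(job_id)
        if job is None or job.expired:
            category_url = f"/jobs/{job.category_slug}/" if job else "/jobs/"
            # 301 to the most relevant category page; the flag triggers the on-page notice
            return redirect(category_url + "?expired=1", code=301)
        return render_template("job.html", job=job)

    # In the category template, show a "that listing has expired" message when
    # the expired=1 parameter is present.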

For more information or advice on how to deal with the edge cases, there’s a previous Moz blog post on how to deal with expired content which I think does an excellent job of covering this area.

Summary

In summary, if you’re working with listings sites, all three of the following need to be kept in mind:

  • How are the landing pages generated? If they’re generated using free text or facets, have the potential problems been solved?
  • Is user-generated content cannibalising the main landing pages?
  • How has constantly expiring content been dealt with?

Good luck listing! If you’ve come across any other tricky problems or solutions while working on listings sites, let’s chat about them in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from tracking.feedpress.it

SEO Link Ratios Explained! Brand v Money, Text v Image and more!

At http://www.indexicon.com I am often asked what the perfect percentages are for money v brand keyword anchor text, image v text, follow v nofollow and much…

Reblogged 4 years ago from www.youtube.com

12 Common Reasons Reconsideration Requests Fail

Posted by Modestos

There are several reasons a reconsideration request might fail. But some of the most common mistakes site owners and inexperienced SEOs make when trying to lift a link-related Google penalty are entirely avoidable. 

Here’s a list of the top 12 most common mistakes made when submitting reconsideration requests, and how you can prevent them.

1. Insufficient link data

This is one of the most common reasons why reconsideration requests fail. This mistake is readily evident each time a reconsideration request gets rejected and the example URLs provided by Google are unknown to the webmaster. Relying only on Webmaster Tools data isn’t enough, as Google has repeatedly said. You need to combine data from as many different sources as possible. 

A good starting point is to collate backlink data from, at the very least, the following sources:

  • Google Webmaster Tools (both latest and sample links)
  • Bing Webmaster Tools
  • Majestic SEO (Fresh Index)
  • Ahrefs
  • Open Site Explorer

If you use any toxic link-detection services (e.g., Linkrisk and Link Detox), then you need to take a few precautions to ensure the following:

  • They are 100% transparent about their backlink data sources
  • They have imported all backlink data
  • You can upload your own backlink data (e.g., Webmaster Tools) without any limitations

If you work on large websites that have tons of backlinks, most of these automated services will very likely process just a fraction of the links, unless you pay for one of their premium packages. If you have direct access to the above data sources, it’s worthwhile to download all backlink data, then manually upload it into your tool of choice for processing. This is the only way to have full visibility over the backlink data that has to be analyzed and reviewed later. Starting with an incomplete data set at this early (yet crucial) stage could seriously hinder the outcome of your reconsideration request.
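As a rough sketch of that collation step, the following Python merges a folder of CSV exports into a single de-duplicated list ready for review. The file pattern and the source_url column name are assumptions; each tool’s export uses its own headers, so you’d map them accordingly:

```python
# Minimal sketch: merge backlink exports from several tools and de-duplicate by
# source URL before manual review. File names and the "source_url" column are
# assumptions; real exports use different headers per tool.
import csv
import glob

def merge_backlink_exports(pattern="backlinks/*.csv", url_column="source_url"):
    seen = set()
    merged = []
    for path in glob.glob(pattern):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                url = (row.get(url_column) or "").strip().lower()
                if url and url not in seen:
                    seen.add(url)
                    merged.append({"source_url": url, "found_in": path})
    return merged

if __name__ == "__main__":
    links = merge_backlink_exports()
    print(f"{len(links)} unique linking URLs collected")
```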

2. Missing vital legacy information

The more you know about a site’s history and past activities, the better. You need to find out (a) which pages were targeted in the past as part of link building campaigns, (b) which keywords were the primary focus and (c) the link building tactics that were scaled (or abused) most frequently. Knowing enough about a site’s past activities, before it was penalized, can help you home in on the actual causes of the penalty. Also, collect as much information as possible from the site owners.

3. Misjudgement

Misreading your current situation can lead to wrong decisions. One common mistake is to treat the example URLs provided by Google as gospel and try to identify only links with the same patterns. Google provides a very small number of examples of unnatural links. Often, these examples are the most obvious and straightforward ones. However, you should look beyond these examples to fully address the issues and take the necessary actions against all types of unnatural links. 

Google is very clear on the matter: “Please correct or remove all inorganic links, not limited to the samples provided above.”

Another common area of bad judgement is the inability to correctly identify unnatural links. This is a skill that requires years of experience in link auditing, as well as link building. Removing the wrong links won’t lift the penalty, and may also result in further ranking drops and loss of traffic. You must remove the right links.


4. Blind reliance on tools

There are numerous unnatural link-detection tools available on the market, and over the years I’ve had the chance to try out most (if not all) of them. Because (and without any exception) I’ve found them all very ineffective and inaccurate, I do not rely on any such tools for my day-to-day work. In some cases, a lot of the reported “high risk” links were 100% natural links, and in others, numerous toxic links were completely missed. If you have to manually review all the links to discover the unnatural ones, ensuring you don’t accidentally remove any natural ones, it makes no sense to pay for tools. 

If you solely rely on automated tools to identify the unnatural links, you will need a miracle for your reconsideration request to be successful. The only tool you really need is a powerful backlink crawler that can accurately report the current link status of each URL you have collected. You should then manually review all currently active links and decide which ones to remove. 

I could write an entire book on the numerous flaws and bugs I have come across each time I’ve tried some of the most popular link auditing tools. A lot of these issues can be detrimental to the outcome of the reconsideration request. I have seen many reconsideration requests fail because of this. If Google cannot algorithmically identify all unnatural links and has to operate entire teams of humans to review the sites (and their links), you shouldn’t trust a $99/month service to identify the unnatural links.

If you have an in-depth understanding of Google’s link schemes, you can build your own process to prioritize which links are more likely to be unnatural, as I described in this post (see sections 7 & 8). In an ideal world, you should manually review every single link pointing to your site. Where this isn’t possible (e.g., when dealing with an enormous number of links, or when resources are unavailable), you should at least focus on the links that show the most “unnatural” signals and manually review them.
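To make that prioritization concrete, here’s a deliberately simple Python sketch that scores links by a few common “unnatural” signals so manual review can start with the riskiest ones. The signals, weights, and link schema are illustrative assumptions, not a replacement for the process described in the post referenced above:

```python
# Minimal sketch: score links by rough "unnatural" signals so manual review
# starts with the riskiest ones. Signals, weights, and schema are illustrative.

COMMERCIAL_ANCHORS = {"payday loans", "cheap seo services", "best casino bonus"}
SUSPECT_TLDS = (".info", ".biz")

def risk_score(link):
    """link: dict with 'anchor', 'domain', and 'sitewide' keys (assumed schema)."""
    score = 0
    if link["anchor"].lower() in COMMERCIAL_ANCHORS:
        score += 3  # exact-match commercial anchor text is a strong signal
    if link["domain"].endswith(SUSPECT_TLDS):
        score += 1  # a low-quality TLD is only a weak signal on its own
    if link.get("sitewide"):
        score += 2  # sitewide (footer/blogroll) links are often paid placements
    return score

links = [
    {"anchor": "payday loans", "domain": "example.info", "sitewide": True},
    {"anchor": "Acme Corp", "domain": "news.example.org", "sitewide": False},
]
for link in sorted(links, key=risk_score, reverse=True):
    print(risk_score(link), link["domain"], link["anchor"])
```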

5. Not looking beyond direct links

When trying to lift a link-related penalty, you need to look into all the links that may be pointing to your site directly or indirectly. Such checks include reviewing all links pointing to other sites that redirect to your site, legacy URLs with external inbound links that have been internally redirected, and third-party sites that include cross-domain canonicals to your site. For sites that used to buy and redirect domains in order to increase their rankings, the quickest solution is to get rid of the redirects. Both Majestic SEO and Ahrefs report redirects, but some manual digging usually reveals a lot more.


6. Not looking beyond the first link

All major link intelligence tools, including Majestic SEO, Ahrefs and Open Site Explorer, report only the first link pointing to a given site when crawling a page. This means that if you rely too heavily on automated tools to identify links with commercial keywords, the vast majority of those tools will only take into consideration the first link they discover on a page. If a page on the web links to your site just once, this is no big deal. But if there are multiple links, the tools will miss all but the first one.

For example, if a page has five different links pointing to your site, and the first one includes branded anchor text, these tools will report just that first link. Most of the link-auditing tools will in turn evaluate the link as “natural” and completely miss the other four links, some of which may contain manipulative anchor text. The more links that get missed this way, the more likely it is that your reconsideration request will fail.
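If you want to double-check the pages that matter most, it’s straightforward to pull every link to your domain, along with its anchor text, yourself. Here’s a small Python sketch using requests and BeautifulSoup; the example URLs are placeholders:

```python
# Minimal sketch: collect *every* link on a page that points at your domain,
# with its anchor text, instead of only the first one a tool reports.
# Assumes the requests and beautifulsoup4 packages are installed.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def links_to_domain(page_url, target_domain):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = []
    for a in soup.find_all("a", href=True):
        if urlparse(a["href"]).netloc.endswith(target_domain):
            found.append({"href": a["href"], "anchor": a.get_text(strip=True)})
    return found

# Example usage (placeholder URLs):
# for link in links_to_domain("https://example.com/some-page", "yoursite.com"):
#     print(link["anchor"], "->", link["href"])
```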

7. Going too thin

Many SEOs and webmasters (still) feel uncomfortable with the idea of losing links. They cannot accept that links which once helped their rankings are now being devalued and must be removed. There is no point trying to save “authoritative” yet unnatural links out of fear of losing rankings. If the main objective is to lift the penalty, then all unnatural links need to be removed.

Often, in the first reconsideration request, SEOs and site owners tend to go too thin, and in the subsequent attempts start cutting deeper. If you are already aware of the unnatural links pointing to your site, try to get rid of them from the very beginning. I have seen examples of unnatural links provided by Google on PR 9/DA 98 sites. Metrics do not matter when it comes to lifting a penalty. If a link is manipulative, it has to go.

In any case, Google’s decision won’t be based only on the number of links that have been removed. Most important in the search giant’s eyes is the quality of the links still pointing to your site. If the remaining links are largely of low quality, the reconsideration request will almost certainly fail.

8. Insufficient effort to remove links

Google wants to see a “good faith” effort to get as many links removed as possible. The higher the percentage of unnatural links removed, the better. Some agencies and SEO consultants tend to rely too much on the use of the disavow tool. However, this isn’t a panacea, and should be used as a last resort for removing those links that are impossible to remove—after exhausting all possibilities to physically remove them via the time-consuming (yet necessary) outreach route. 

Google is very clear on this:

[Screenshot of Google’s guidance on link removal]

Even if you’re unable to remove all of the links that need to be removed, you must be able to demonstrate that you’ve made several attempts to have them removed, which can have a favorable impact on the outcome of the reconsideration request. Yes, in some cases it might be possible to have a penalty lifted simply by disavowing instead of removing the links, but these cases are rare and this strategy may backfire in the future. When I reached out to ex-Googler Fili Wiese for some advice on the value of removing the toxic links (instead of just disavowing them), his response was very straightforward:

[Screenshot of Fili Wiese’s response]

9. Ineffective outreach

Simply identifying the unnatural links won’t get the penalty lifted unless a decent percentage of the links have been successfully removed. The more communication channels you try, the more likely it is that you reach the webmaster and get the links removed. Sending the same email hundreds or thousands of times is highly unlikely to result in a decent response rate. Trying to remove a link from a directory is very different from trying to get rid of a link appearing in a press release, so you should take a more targeted approach with a well-crafted, personalized email. Link removal request emails must be honest and to the point, or else they’ll be ignored.

Tracking your emails will also help you figure out which messages have been read, which webmasters might be worth contacting again, and when you need to try an alternative means of contacting a webmaster.

Creativity, too, can play a big part in the link removal process. For example, it might be necessary to use social media to reach the right contact. Again, don’t trust automated emails or contact form harvesters. In some cases, these applications will pull in any email address they find on the crawled page (without any guarantee of who the information belongs to). In others, they will completely miss masked email addresses or those appearing in images. If you really want to see that the links are removed, outreach should be carried out by experienced outreach specialists. Unfortunately, there aren’t any shortcuts to effective outreach.

10. Quality issues and human errors

All sorts of human errors can occur when filing a reconsideration request. The most common errors include submitting files that do not exist, files that do not open, files that contain incomplete data, and files that take too long to load. You need to triple-check that the files you are including in your reconsideration request are read-only, and that anyone with the URL can fully access them. 

Poor grammar and sloppy language are also bad practice, as they may be interpreted as “poor effort.” You should definitely get the reconsideration request proofread by a couple of people to be sure it is flawless. A poorly written reconsideration request can significantly hinder your overall efforts.

Quality issues can also occur with the disavow file submission. Disavowing at the URL level isn’t recommended because the link(s) you want to get rid of are often accessible to search engines via several URLs you may be unaware of. Therefore, it is strongly recommended that you disavow at the domain or sub-domain level.
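If you already have a URL-level disavow file, collapsing it to domain-level entries is mechanical. Here’s a minimal Python sketch using the documented domain: syntax of the disavow file format; the input and output file names are placeholders:

```python
# Minimal sketch: collapse a URL-level disavow list into domain-level entries
# using the disavow file's "domain:" syntax. File names are placeholders.
from urllib.parse import urlparse

def to_domain_level(in_path="disavow_urls.txt", out_path="disavow_domains.txt"):
    domains = set()
    with open(in_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            if line.startswith("domain:"):
                domains.add(line.split(":", 1)[1])
            else:
                domains.add(urlparse(line).netloc)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("# Disavowed at domain level\n")
        for domain in sorted(domains):
            f.write(f"domain:{domain}\n")

if __name__ == "__main__":
    to_domain_level()
```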

11. Insufficient evidence

How does Google know you have done everything you claim in your reconsideration request? Because you have to prove each claim is valid, you need to document every single action you take, from sent emails and submitted forms, to social media nudges and phone calls. The more information you share with Google in your reconsideration request, the better. This is the exact wording from Google:

“ …we will also need to see good-faith efforts to remove a large portion of inorganic links from the web wherever possible.”

12. Bad communication

How you communicate your link cleanup efforts is as essential as the work you are expected to carry out. Not only do you need to explain the steps you’ve taken to address the issues, but you also need to share supportive information and detailed evidence. The reconsideration request is the only chance you have to communicate to Google which issues you have identified, and what you’ve done to address them. Being honest and transparent is vital for the success of the reconsideration request.

There is absolutely no point using the space in a reconsideration request to argue with Google. Some of the unnatural link examples they share may not always be useful (e.g., URLs that include nofollow links, removed links, or even no links at all). But taking an argumentative approach virtually guarantees your request will be denied.

Cropped from a photo by Keith Allison, licensed under Creative Commons.

Conclusion

Getting a Google penalty lifted requires a good understanding of why you have been penalized, a flawless process and a great deal of hands-on work. Performing link audits for the purpose of lifting a penalty can be very challenging, and should only be carried out by experienced consultants. If you are not 100% sure you can take all the required actions, seek out expert help rather than looking for inexpensive (and ineffective) automated solutions. Otherwise, you will almost certainly end up wasting weeks or months of your precious time, and in the end, see your request denied.


Reblogged 4 years ago from moz.com

The Best of 2014: Top People and Posts from the Moz Blog

Posted by Trevor-Klein

At the end of every year, we compile a list of the very best posts published on the Moz Blog and YouMoz, along with the most popular and prolific people in our community. It’s a really fun way to look back on what happened this year, and an insight-packed view of what really resonates with our readers.

Here’s what we’ve got in store:

  1. Top Moz Blog posts by 1Metric score
  2. Top Moz Blog posts by unique visits
  3. Top YouMoz Blog posts by unique visits
  4. Top Moz Blog posts by number of thumbs up
  5. Top Moz Blog posts by number of comments
  6. Top Moz Blog posts by number of linking root domains
  7. Top comments from our community by number of thumbs up
  8. Top commenters from our community by total number of thumbs up

A huge thanks goes to Dr. Pete Meyers and Cyrus Shepard; their help cut in half the time it took to create this piece.

We hope you enjoy the look back at the past year, and wish you a very happy start to 2015!

1. Top Moz Blog posts by 1Metric score

Earlier this year, we created a new metric to evaluate the success of our blog posts, calling it “the one metric” in a nod to The Lord of the Rings. We even
wrote about it on this blog. With the help and feedback of many folks in the community as well as some refinement of our own, we’ve now polished the metric, changed the spelling a bit, applied it retroactively to older posts, and are using it regularly in-house. The following posts are those with the highest scores, representing the 10 posts that saw the most overall success this year. In case there was any doubt, Cyrus really (really) knows what he’s doing.

1. More than Keywords: 7 Concepts of Advanced On-Page SEO
October 21 – Posted by Cyrus Shepard
As marketers, helping search engines understand what our content means is one of our most important tasks. Search engines can’t read pages like humans can, so we incorporate structure and clues as to what our content means. This post explores a series of on-page techniques that not only build upon one another, but can be combined in sophisticated ways.

Dr-Pete

2. New Title Tag Guidelines & Preview Tool
March 20 – Posted by Dr. Peter J. Meyers
Google’s 2014 redesign had a big impact on search result titles, cutting them off much sooner. This post includes a title preview tool and takes a data-driven approach to finding the new limit.

MarieHaynes

3. Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird
June 11 – Posted by Marie Haynes
Do you have questions about the Panda algorithm, the Penguin algorithm, or Hummingbird? This guide explains in lay terms what each of these Google algorithm changes is about and how to improve your site so that it looks better in the eyes of the big G.

4. 12 Ways to Increase Traffic From Google Without Building Links
March 11 – Posted by Cyrus Shepard
The job of the Technical SEO becomes more complex each year, but we also have more opportunities now than ever. Here are 12 ways you can improve your rankings without relying on link building.

OliGardner

5. The Most Entertaining Guide to Landing Page Optimization You’ll Ever Read
May 20 – Posted by Oli Gardner
If you’ve ever been bored while reading a blog post, your life just got better. If you’ve ever wanted to learn about conversion rate optimization, and how to design high-converting landing pages, without falling asleep, you’re in the right place. Buckle up, and prepare to be entertained in your learning regions.

6. Illustrated Guide to Advanced On-Page Topic Targeting for SEO
November 17 – Posted by Cyrus Shepard
The concepts of advanced on-page SEO are dizzying: LDA, co-occurrence, and entity salience. The question is “How can I easily incorporate these techniques into my content for higher rankings?” The truth is, you can create optimized pages that rank well without understanding complex algorithms.

josh_bachynski

7. Panda 4.1 Google Leaked Dos and Don’ts – Whiteboard Friday
December 05 – Posted by Josh Bachynski
Panda is about so much more than good content. Let Josh Bachynski give you the inside information on the highlights of what you should (and should not) be doing.

8. 10 Smart Tips to Leverage Google+ for Increased Web Traffic
April 15 – Posted by Cyrus Shepard
While not everyone has an audience active on Google+, the number of people who interact socially with any Google products on a monthly basis now reportedly exceeds 500 million.

9. The Rules of Link Building – Whiteboard Friday
April 04 – Posted by Cyrus Shepard
Google is increasingly playing the referee in the marketing game, and many marketers are simply leaving instead of playing by the rules. In today’s Whiteboard Friday, Cyrus Shepard takes a time-out to explain a winning strategy.

gfiorelli1

10. The Myth of Google’s 200 Ranking Factors
September 30 – Posted by Gianluca Fiorelli
Nothing like the “The 200 Google Ranking Factors” actually exists. It is a myth, and those who claim to be able to offer a final list are its prophets. This post explains how the myth was born and the importance of knowing the stages of search engines’ working process.

2. Top Moz Blog posts by unique visits

The heaviest-weighted ingredient in the 1Metric is unique visits, as one of our primary goals for the Moz Blog is to drive traffic to the rest of the site. With that in mind, we thought it interesting to break things down to just this metric and show you just how different this list is from the last one. Of note: Dr. Pete’s post on Google’s new design for title tags is a nod to the power of evergreen content. That post is one that folks can return to over and over as they fiddle with their own title tags, and amassed more than
twice the traffic of the post in the #2 slot.

Dr-Pete

1. New Title Tag Guidelines & Preview Tool
March 20 – Posted by Dr. Peter J. Meyers
Google’s 2014 redesign had a big impact on search result titles, cutting them off much sooner. This post includes a title preview tool and takes a data-driven approach to finding the new limit.

OliGardner

2. The Most Entertaining Guide to Landing Page Optimization You’ll Ever Read
May 20 – Posted by Oli Gardner
If you’ve ever been bored while reading a blog post, your life just got better. If you’ve ever wanted to learn about conversion rate optimization, and how to design high-converting landing pages, without falling asleep, you’re in the right place. Buckle up, and prepare to be entertained in your learning regions.

3. 12 Ways to Increase Traffic From Google Without Building Links
March 11 – Posted by Cyrus Shepard
The job of the Technical SEO becomes more complex each year, but we also have more opportunities now than ever. Here are 12 ways you can improve your rankings without relying on link building.

briancarter

4. Why Every Business Should Spend at Least $1 per Day on Facebook Ads
February 19 – Posted by Brian Carter
For the last three years I’ve constantly recommended Facebook ads. I recommend them to both B2C and B2B businesses. I recommend them to local theaters and comedians here in Charleston, SC. I recommend them to everyone who wants to grow awareness about anything they’re doing. Here’s why.

5. More than Keywords: 7 Concepts of Advanced On-Page SEO
October 21 – Posted by Cyrus Shepard
As marketers, helping search engines understand what our content means is one of our most important tasks. Search engines can’t read pages like humans can, so we incorporate structure and clues as to what our content means. This post explores a series of on-page techniques that not only build upon one another, but can be combined in sophisticated ways.

MarieHaynes

6. Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird
June 11 – Posted by Marie Haynes
Do you have questions about the Panda algorithm, the Penguin algorithm, or Hummingbird? This guide explains in lay terms what each of these Google algorithm changes is about and how to improve your site so that it looks better in the eyes of the big G.

Chad_Wittman

7. Make Facebook’s Algorithm Change Work For You, Not Against You
January 23 – Posted by Chad Wittman
Recently, many page admins have been experiencing a significant decrease in Total Reach—specifically, organic reach. For pages that want to keep their ad budget as low as possible, maximizing organic reach is vital. To best understand how to make a change like this work for you, and not against you, we need to examine what happened—and what you can do about it.

n8ngrimm

8. How to Rank Well in Amazon, the US’s Largest Product Search Engine
June 04 – Posted by Nathan Grimm
The eCommerce SEO community is ignoring a huge opportunity by focusing almost exclusively on Google. Amazon has roughly three times more search volume for products, and this post tells you all about how to rank.

iPullRank

9. Personas: The Art and Science of Understanding the Person Behind the Visit
January 29 – Posted by Michael King
With the erosion of keyword intelligence and the move to strings-not-things for the user, Google is pushing all marketers to focus more on their target audience. This post will teach you how to understand that audience, the future of Google, and how to build data-driven personas step by step.

Dr-Pete

10. Panda 4.0, Payday Loan 2.0 & eBay’s Very Bad Day
May 21 – Posted by Dr. Peter J. Meyers
Preliminary analysis of the Panda 4.0 and Payday Loan 2.0 updates, major algorithm flux on May 19th, and a big one-day rankings drop for eBay.

3. Top YouMoz Blog posts by unique visits

One of our favorite parts of the Moz community is the YouMoz Blog, where our community members can submit their own posts for potential publishing here on our site. We’re constantly impressed by what we’re sent. These 10 posts all received such high praise that they were promoted to the main Moz Blog, but they all started out as YouMoz posts. 

Chad_Wittman

1. Make Facebook’s Algorithm Change Work For You, Not Against You
January 23 – Posted by Chad Wittman
Recently, many page admins have been experiencing a significant decrease in Total Reach—specifically, organic reach. For pages that want to keep their ad budget as low as possible, maximizing organic reach is vital. To best understand how to make a change like this work for you, and not against you, we need to examine what happened—and what you can do about it.

Carla_Dawson

2. Parallax Scrolling Websites and SEO – A Collection of Solutions and Examples
April 01 – Posted by Carla Dawson
I have observed that there are many articles that say parallax scrolling is not ideal for search engines. Parallax Scrolling is a design technique and it is ideal for search engines if you know how to apply it. I have collected a list of great tutorials and real SEO-friendly parallax websites to help the community learn how to use both techniques together.

Jeffalytics

3. (Provided): 10 Ways to Prove SEO Value in Google Analytics
February 25 – Posted by Jeff Sauer
We and our clients have relied on keyword reports for so long that we’re now using (not provided) as a crutch. This post offers 10 ways you can use Google Analytics to prove your SEO value now that those keywords are gone.

danatanseo

4. How to Set Up and Use Twitter Lead Generation Cards in Your Tweets for Free!
May 07 – Posted by Dana Tan
Working as an in-house SEO strategist for a small business forces me to get “scrappy” every day with tools and techniques. I’m constantly on the lookout for an opportunity that can help my company market to broader audiences for less money. Here’s how to set up your Twitter Cards for free!

Amanda_Gallucci

5. 75 Content Starters for Any Industry
February 06 – Posted by Amanda Gallucci
Suffering from blank page anxiety? Before you go on the hunt for inspiration all over the Internet and elsewhere, turn to the resources around you. Realize that you can create exceptional content with what you already have at hand.

nicoleckohler

6. The Hidden Power of Nofollow Links
June 08 – Posted by Nicole Kohler
For those of us who are trying to earn links for our clients, receiving a nofollow link can feel like a slap in the face. But these links have hidden powers that make them just as important as followed ones. Here’s why nofollow links are more powerful than you might think.

YonDotan

7. A Startling Case Study of Manual Penalties and Negative SEO
March 17 – Posted by Yonatan Dotan
One day in my inbox I found the dreaded notice from Google that our client had a site-wide manual penalty for unnatural inbound links. We quickly set up a call and went through the tooth-rattling ordeal of explaining to our client that they weren’t even ranked for their brand name. Organic traffic dropped by a whopping 94% – and that for a website that gets 66% of its traffic from Google-based organic search.

malditojavi

8. How PornHub Is Bringing its A-Game (SFW)
July 23 – Posted by Javier Sanz
Despite dealing with a sensitive subject, PornHub is doing a great job marketing itself. This (safe-for-work) post takes a closer look at what they are doing.

ajfried

9. Storytelling Through Data: A New Inbound Marketing & SEO Report Structure
January 07 – Posted by Aaron Friedman
No matter what business you are in, it’s a pretty sure thing that someone is going to want to monitor how efficiently and productively you are working. Being able to show these results over time is crucial to maintaining the health of the long term relationship.

robinparallax

10. The Art of Thinking Sideways: Content Marketing for “Boring” Businesses
April 08 – Posted by Robin Swire
In this article, I’ll examine the art of thinking sideways for one of the slightly more tricky marketing clients I’ve worked with. I hope that this will provide an insight for fellow content marketers and SEOs in similar scenarios.

4. Top Moz Blog posts by number of thumbs up

These 10 posts were so well received that quite a few readers took the time to engage with them, logging in to give their stamp of approval. Whiteboard Fridays are always a hit, and two of them managed to make this list after having been live for less than a month.

1. More than Keywords: 7 Concepts of Advanced On-Page SEO
October 21 – Posted by Cyrus Shepard
As marketers, helping search engines understand what our content means is one of our most important tasks. Search engines can’t read pages like humans can, so we incorporate structure and clues as to what our content means. This post explores a series of on-page techniques that not only build upon one another, but can be combined in sophisticated ways.

Dr-Pete

2. New Title Tag Guidelines & Preview Tool
March 20 – Posted by Dr. Peter J. Meyers
Google’s 2014 redesign had a big impact on search result titles, cutting them off much sooner. This post includes a title preview tool and takes a data-driven approach to finding the new limit.

randfish

3. Dear Google, Links from YouMoz Don’t Violate Your Quality Guidelines
July 23 – Posted by Rand Fishkin
Recently, Moz contributor Scott Wyden, a photographer in New Jersey, received a warning in his Google Webmaster Tools about some links that violated Google’s Quality Guidelines. One example was from moz.com.

MarieHaynes

4. Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird
June 11 – Posted by Marie Haynes
Do you have questions about the Panda algorithm, the Penguin algorithm, or Hummingbird? This guide explains in lay terms what each of these Google algorithm changes is about and how to improve your site so that it looks better in the eyes of the big G.

randfish

5. Thank You for 10 Incredible Years
October 06 – Posted by Rand Fishkin
It’s been 10 amazing years since Rand started the blog that would turn into SEOmoz and then Moz, and we never could have come this far without you all. You’ll find letters of appreciation from Rand and Sarah in this post (along with a super-cool video retrospective!), and from all of us at Moz, thank you!

6. Illustrated Guide to Advanced On-Page Topic Targeting for SEO
November 17 – Posted by Cyrus Shepard
The concepts of advanced on-page SEO are dizzying: LDA, co-occurrence, and entity salience. The question is “How can I easily incorporate these techniques into my content for higher rankings?” The truth is, you can create optimized pages that rank well without understanding complex algorithms.

josh_bachynski

7. Panda 4.1 Google Leaked Dos and Don’ts – Whiteboard Friday
December 05 – Posted by Josh Bachynski
Panda is about so much more than good content. Let Josh Bachynski give you the inside information on the highlights of what you should (and should not) be doing.

OliGardner

8. The Most Entertaining Guide to Landing Page Optimization You’ll Ever Read
May 20 – Posted by Oli Gardner
If you’ve ever been bored while reading a blog post, your life just got better. If you’ve ever wanted to learn about conversion rate optimization, and how to design high-converting landing pages, without falling asleep, you’re in the right place. Buckle up, and prepare to be entertained in your learning regions.

randfish

9. Does SEO Boil Down to Site Crawlability and Content Quality? – Whiteboard Friday
July 11 – Posted by Rand Fishkin
What does good SEO really mean these days? Rand takes us beyond crawlability and content quality for a peek inside the art and science of the practice.

randfish

10. How to Avoid the Unrealistic Expectations SEOs Often Create – Whiteboard Friday
December 12 – Posted by Rand Fishkin
Making promises about SEO results too often leads to broken dreams and shredded contracts. In today’s Whiteboard Friday, Rand shows us how to set expectations that lead to excitement but help prevent costly misunderstandings.

5. Top Moz Blog posts by number of comments

While the discussions can take a big chunk out of an already busy day, the conversations we get to have with our community members (and the conversations they have with each other) in the comments below our posts are absolutely one of our favorite parts of the blog. These 10 posts garnered quite a bit of discussion (some with a fair amount of controversy), and are fascinating to follow.

1. Take the SEO Expert Quiz and Rule the Internet
May 28 – Posted by Cyrus Shepard
You are master of the keyword. You create 1,000 links with a single tweet. Google engineers ask for your approval before updating their algorithm. You, my friend, are an SEO Expert. Prove it by taking our new SEO Expert Quiz.

2. The Rules of Link Building – Whiteboard Friday
April 04 – Posted by Cyrus Shepard
Google is increasingly playing the referee in the marketing game, and many marketers are simply leaving instead of playing by the rules. In today’s Whiteboard Friday, Cyrus Shepard takes a time-out to explain a winning strategy.

randfish

3. Dear Google, Links from YouMoz Don’t Violate Your Quality Guidelines
July 23 – Posted by Rand Fishkin
Recently, Moz contributor Scott Wyden, a photographer in New Jersey, received a warning in his Google Webmaster Tools about some links that violated Google’s Quality Guidelines. One example was from moz.com.

Dr-Pete

4. New Title Tag Guidelines & Preview Tool
March 20 – Posted by Dr. Peter J. Meyers
Google’s 2014 redesign had a big impact on search result titles, cutting them off much sooner. This post includes a title preview tool and takes a data-driven approach to finding the new limit.

Carla_Dawson

5. SEO Teaching: Should SEO Be Taught at Universities?
October 09 – Posted by Carla Dawson
Despite the popularity and importance of SEO, the field has yet to gain significant traction at the university level other than a few courses here and there offered as part of a broader digital marketing degree. The tide could be turning, however slowly.

6. 12 Ways to Increase Traffic From Google Without Building Links
March 11 – Posted by Cyrus Shepard
The job of the Technical SEO becomes more complex each year, but we also have more opportunities now than ever. Here are 12 ways you can improve your rankings without relying on link building.

evolvingSEO

7. The Broken Art of Company Blogging (and the Ignored Metric that Could Save Us All)
July 22 – Posted by Dan Shure
Company blogging is broken. We’re tricking ourselves into believing they’re successful while ignoring the one signal we have that tells us whether they’re actually working.

MichaelC

8. Real-World Panda Optimization – Whiteboard Friday
August 01 – Posted by Michael Cottam
From the originality of your content to top-heavy posts, there’s a lot that the Panda algorithm is looking for. In today’s Whiteboard Friday, Michael Cottam explains what these things are, and more importantly, what we can do to be sure we get the nod from this particular bear.

EricaMcGillivray

9. Ways to Proactively Welcome Women Into Online Marketing
September 17 – Posted by Erica McGillivray
SEO may be a male-dominated industry, but let’s step out of our biases and work hard to welcome women, and marketers of all stripes, into our community.

10. More than Keywords: 7 Concepts of Advanced On-Page SEO
October 21 – Posted by Cyrus Shepard
As marketers, helping search engines understand what our content means is one of our most important tasks. Search engines can’t read pages like humans can, so we incorporate structure and clues as to what our content means. This post explores a series of on-page techniques that not only build upon one another, but can be combined in sophisticated ways.

6. Top Moz Blog posts by number of linking root domains

What, you thought you’d get to the bottom of the post without seeing a traditional SEO metric? =)

Dr-Pete

1. New Title Tag Guidelines & Preview Tool
March 20 – Posted by Dr. Peter J. Meyers
Google’s 2014 redesign had a big impact on search result titles, cutting them off much sooner. This post includes a title preview tool and takes a data-driven approach to finding the new limit.

Dr-Pete

2. Panda 4.0, Payday Loan 2.0 & eBay’s Very Bad Day
May 21 – Posted by Dr. Peter J. Meyers
Preliminary analysis of the Panda 4.0 and Payday Loan 2.0 updates, major algorithm flux on May 19th, and a big one-day rankings drop for eBay.

iPullRank

3. Personas: The Art and Science of Understanding the Person Behind the Visit
January 29 – Posted by Michael King
With the erosion of keyword intelligence and the move to strings-not-things for the user, Google is pushing all marketers to focus more on their target audience. This post will teach you how to understand that audience, the future of Google, and how to build data-driven personas step by step.

briancarter

4. Why Every Business Should Spend at Least $1 per Day on Facebook Ads
February 19 – Posted by Brian Carter
For the last three years I’ve constantly recommended Facebook ads. I recommend them to both B2C and B2B businesses. I recommend them to local theaters and comedians here in Charleston, SC. I recommend them to everyone who wants to grow awareness about anything they’re doing. Here’s why.

JamesAgate

5. The New Link Building Survey 2014 – Results
July 16 – Posted by James Agate
How has the marketing industry changed its views of link building since last year? James Agate of Skyrocket SEO is back with the results of a brand new survey.

Dr-Pete

6. Google’s 2014 Redesign: Before and After
March 13 – Posted by Dr. Peter J. Meyers
Google’s SERP and ad format redesign may finally be rolling out, after months of testing. Before we lose the old version forever, here’s the before-and-after of every major vertical that’s changed.

7. Google Announces the End of Author Photos in Search: What You Should Know
June 26 – Posted by Cyrus Shepard
Many of us have been constantly advising webmasters to connect their content writers with Google authorship, and it came as a shock when John Mueller announced Google will soon drop authorship photos from regular search results. Let’s examine what this means.

randfish

8. The Greatest Misconception in Content Marketing – Whiteboard Friday
April 25 – Posted by Rand Fishkin
Great content certainly helps business, but it isn’t as simple as “publish, share, convert new customers.” In today’s Whiteboard Friday, Rand explains what’s really going on.

OliGardner

9. The Most Entertaining Guide to Landing Page Optimization You’ll Ever Read
May 20 – Posted by Oli Gardner
If you’ve ever been bored while reading a blog post, your life just got better. If you’ve ever wanted to learn about conversion rate optimization, and how to design high-converting landing pages, without falling asleep, you’re in the right place. Buckle up, and prepare to be entertained in your learning regions.

MarieHaynes

10. Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird
June 11 – Posted by Marie Haynes
Do you have questions about the Panda algorithm, the Penguin algorithm, or Hummingbird? This guide explains in lay terms what each of these Google algorithm changes is about and how to improve your site so that it looks better in the eyes of the big G.

7. Top comments from our community by number of thumbs up

These 10 comments were the most thumbed-up of any on our blogs this year, offering voices of reason that stand out from the crowd. 

MarieHaynes

1. Marie Haynes | July 23
Commented on: 
Dear Google, Links from YouMoz Don’t Violate Your Quality Guidelines

Backlinko

2. Brian Dean | September 30
Commented on: 
The Myth of Google’s 200 Ranking Factors

mpezet

3. Martin Pezet | July 22
Commented on: 
The Broken Art of Company Blogging (and the Ignored Metric that Could Save Us All)

dannysullivan

4. Danny Sullivan | July 23
Commented on: 
Dear Google, Links from YouMoz Don’t Violate Your Quality Guidelines

5. Cyrus Shepard | October 21
Commented on: 
More than Keywords: 7 Concepts of Advanced On-Page SEO

SarahBird

6. Sarah Bird | September 17
Commented on: 
Ways to Proactively Welcome Women Into Online Marketing

randfish

7. Rand Fishkin | July 04
Commented on: 
5 Fashion Hacks for the Modern Male Marketer – Whiteboard Friday

mpezet

8. Martin Pezet | September 30
Commented on: 
The Myth of Google’s 200 Ranking Factors

FangDigitalMarketing

9. Jeff Ferguson | October 24
Commented on: 
Is It Possible to Have Good SEO Simply by Having Great Content – Whiteboard Friday

magicrob

10. Robert Duckers | March 20
Commented on: 
New Title Tag Guidelines & Preview Tool

8. Top commenters from our community by total thumbs up

We calculated this one a bit differently this year. In the past, we’ve shown the top community members by sheer number of comments. We don’t want, however, to imply that being prolific is necessarily good in itself. So, we added up all the thumbs-up that each comment on our blogs has received, and figured out which community members racked up the most thumbs over the course of the year. (We’ve intentionally omitted staff members and associates from this list, as they’d stack the deck pretty heavily!)

The graphics to the right of each community member show the number of comments they’ve left on blog posts in 2014 as well as the total number of thumbs up those comments have received.

This list is truly an illustration of how amazing the Moz community is. This site would hardly be anything without all of you, and we
so appreciate your involvement on such a regular basis!

SamuelScott

1. Samuel Scott (Moz username: SamuelScott)
MozPoints: 1557 | Rank: 54

paints-n-design

2. Andreas Becker (Moz username: paints-n-design)
MozPoints: 667 | Rank: 148

MarieHaynes

3. Marie Haynes (Moz username: MarieHaynes)
MozPoints: 4706 | Rank: 7

MarkTraphagen

4. Mark Traphagen (Moz username: MarkTraphagen)
MozPoints: 993 | Rank: 102

steviephil

5. Steve Morgan (Moz username: steviephil)
MozPoints: 1249 | Rank: 72

russangular

6. Russ Jones (Moz username: russangular)
MozPoints: 3282 | Rank: 16

mpezet

7. Martin Pezet (Moz username: mpezet)
MozPoints: 464 | Rank: 211

Pixelbypixel

8. Chris Painter (Moz username: Pixelbypixel)
MozPoints: 2707 | Rank: 25

billslawski

9. Bill Slawski (Moz username: billslawski)
MozPoints: 709 | Rank: 140

danatanseo

10. Dana Tan (Moz username: danatanseo)
MozPoints: 4071 | Rank: 11


Reblogged 4 years ago from moz.com