Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But because of the increasing popularity of this article, and because I think there's this hopefulness from folks outside the hardcore SEO world who look at this piece and go, "Look, this is great. We don't have to be technical. We don't have to worry about technical things in order to do SEO," I wanted to address it here.

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

You make a great website that delivers great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and others are SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really sticks in my craw: I don't think "write quality content" means anything. You tell me. When I hear that, it's a totally non-actionable, non-useful phrase, a piece of advice so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (which he notes most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, I brainstormed just off the top of my head in the 10 minutes before we filmed this video a list. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something is loaded after the page and Google isn't able to index it, or if links are only accessible via JavaScript or Ajax, Google may not see them, may not crawl them as effectively, or may crawl them but assign them less link weight than it assigns other links. And if you've also made those pages tough to link to externally, they struggle even more.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we robots.txt those out? Should we disallow from crawling with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you've made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you've made them status code 200. Or you've used a 404 code when you should have used a 410, which signals a permanent removal and gets the page completely out of the index, as opposed to having Google revisit it and keep it in the index.

Downtime procedures. There's a specific 5xx code for this, a 503 (Service Unavailable), that essentially says, "Revisit later. We're having some downtime right now." Google urges you to use that specific code rather than a 404, which tells them, "This page is now an error."
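
To make that concrete, here's a minimal sketch of what a planned-downtime response could look like; the Retry-After value (in seconds) and the page body are just illustrative:

HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html

<html><body>Down for maintenance. Back within the hour.</body></html>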

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.

  • International and multi-language targeting issues. I won't go into that, but you have to know the protocols there.
  • Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content and put it on their site: what should we do? Someone's scraping our content: what can we do? We have duplicate content on our own site: what should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report and at analytics, connecting those up, and trying to see: Why did we go up or down? Did we have fewer pages indexed or more? More pages getting traffic or fewer? More keywords sending traffic or fewer?
  • Understanding advanced search parameters. Today, just today, I was checking out the related: parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site, well, most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and so on. (Inanchor: doesn't work anymore, so never mind about that one.)

I have to go faster, because we're just going to run out of time. Like, come on. Interpreting and leveraging data in Google Search Console: if you don't know how to use that, Google could be telling you that you have all sorts of errors, and you won't know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, X-Robots-Tag, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on (there's a small illustrative snippet after this list). If you're not technical, you don't know what those are, and you think you just need to write good content and lower your bounce rate, it's not going to work.
  • Using APIs from services like AdWords or MozScape, or Ahrefs, or Majestic, or SEMrush, or the Alchemy API. Those APIs can do powerful things for your site. There are some powerful problems they can help you solve if you know how to use them. It's actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there. There are a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
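
To give a flavor of what a few of those protocols look like in practice, here's a minimal, hypothetical head snippet; example.com and the URLs are made up, and you wouldn't normally combine all of these on one real page:

<head>
  <!-- Consolidate duplicate URLs onto one canonical version -->
  <link rel="canonical" href="https://example.com/widgets/" />
  <!-- Point engines at the Spanish-language equivalent -->
  <link rel="alternate" hreflang="es" href="https://example.com/es/widgets/" />
  <!-- Indicate a paginated series (rel=prev/next) -->
  <link rel="prev" href="https://example.com/widgets/?page=1" />
  <link rel="next" href="https://example.com/widgets/?page=3" />
  <!-- Keep this page out of the index but let its links be followed -->
  <meta name="robots" content="noindex, follow" />
</head>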

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn't access, but it doesn't always get respected by Google and Bing. So when a lot of folks say, "Hey, disallow this," and then suddenly see those URLs popping up in results, they wonder what's going on. Look, Google and Bing oftentimes think they just know better. They think maybe you've made a mistake. They think, "Hey, there's a lot of links pointing to this content, there's a lot of people visiting and caring about this content, maybe you didn't intend for us to block it." The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say "everything behind this entire big directory," the worse they are about believing you.
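
For illustration, a bare-bones robots.txt might look like the sketch below (example.com and the paths are hypothetical); the single-URL rule is the kind of specific directive engines tend to respect most reliably:

User-agent: *
# A broad, directory-wide block (less reliably respected)
Disallow: /private/
# A very specific, single-URL block (more reliably respected)
Disallow: /internal-report-2015.html
Sitemap: https://example.com/sitemap.xml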

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
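
As a quick, hypothetical illustration of that distinction (the URLs are made up):

<!-- A normal, editorially vouched-for link; passes link equity -->
<a href="https://example.com/great-guide/">A guide we recommend</a>

<!-- A link we don't vouch for; asks engines not to pass equity through it -->
<a href="https://example.com/user-submitted/" rel="nofollow">User-submitted link</a>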

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like "blogtest.html" on our domain and we say, "All user agents, you are not allowed to crawl blogtest.html." Okay—that's a good way to keep that page away from being crawled, but just because something is not crawled doesn't necessarily mean it won't be in the search results.
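
In robots.txt, that rule would look roughly like this (assuming the page lives at the root of the domain):

User-agent: *
Disallow: /blogtest.html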

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.
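
For the www/non-www case, the 301 approach looks roughly like this at the HTTP level (example.com stands in for whatever domain you're consolidating); the alternative is a rel=canonical on every www URL pointing at its non-www equivalent:

GET /some-page HTTP/1.1
Host: www.example.com

HTTP/1.1 301 Moved Permanently
Location: https://example.com/some-page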

Meta robots can allow crawling and link-following while disallowing indexation, which is great. The trade-off is that it still costs crawl budget, because the page has to be crawled for the tag to be seen, even though it stays out of the index.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.
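
Sketching that out with a hypothetical folder structure (the paths are made up), it could look like this; as each folder's content is rewritten, you'd delete its Disallow line and add those URLs to a sitemap you submit:

User-agent: *
# Manufacturer-description pages we're still rewriting
Disallow: /products/unedited/

# Folders that are ready have no Disallow line and get listed in the sitemap
Sitemap: https://example.com/sitemap-products.xml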

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
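
On each of the color-variant and large-image URLs (whatever those URLs actually are; the ones below are hypothetical), you'd point back to the main product page:

<!-- On starwarsshirt-blue.html, starwarsshirt-gray.html, starwarsshirt-black.html -->
<link rel="canonical" href="https://example.com/starwarsshirt.html" />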

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those pages in robots.txt—many, many folks make this mistake. If you disallow crawling on them, Google can't see the noindex and doesn't know that it can follow the links. Granted, as we talked about before, sometimes Google doesn't obey robots.txt, but you can't rely on that behavior; assume the disallow will prevent them from crawling, which means they'll never see the meta tag at all. So I would say the meta robots "noindex, follow" is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say “hey, our internal search engine, that’s really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to make those into category landing pages.” Then you can use the disallow in robots.txt to prevent those.

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.
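
Assuming your internal search lives under a /search path or a ?q= parameter (adjust to your own URL structure), that disallow could look like:

User-agent: *
Disallow: /search/
Disallow: /*?q=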

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!

Why Good Unique Content Needs to Die – Whiteboard Friday

Posted by randfish

We all know by now that not just any old content is going to help us rank in competitive SERPs. We often hear people talking about how it takes “good, unique content.” That’s the wrong bar. In today’s Whiteboard Friday, Rand talks about where we should be aiming, and how to get there.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about something that I really have a problem with in the SEO world, and that is the phrase “good, unique content.” I’ll tell you why this troubles me so much. It’s because I get so many emails, I hear so many times at conferences and events with people I meet, with folks I talk to in the industry saying, “Hey, we created some good, unique content, but we don’t seem to be performing well in search.” My answer back to that is always that is not the bar for entry into SEO. That is not the bar for ranking.

The content quality scale

So I made this content quality scale to help illustrate what I'm talking about here. You can see that it starts all the way up at 10x, and down at the bottom I've got "Panda Invasion": quality so low that Google Panda is coming for your site. It's going to knock you out of the rankings. It's going to penalize you, because your content is thin and largely useless.

Then you go up a little bit, and it’s like, well four out of five searchers find it pretty bad. They clicked the Back button. Maybe one out of five is thinking, “Well, this is all right. This solves my most basic problems.”

Then you get one level higher than that, and you have good, unique content, which I think many folks think of as where they need to get to. It’s essentially, hey, it’s useful enough. It answers the searcher’s query. It’s unique from any other content on the Web. If you read it, you wouldn’t vomit. It’s good enough, right? Good, unique content.

Problem is almost everyone can get here. They really can. It’s not a high bar, a high barrier to entry to say you need good, unique content. In fact, it can scale. So what I see lots of folks doing is they look at a search result or a set of search results in their industry. Say you’re in travel and vacations, and you look at these different countries and you’re going to look at the hotels or recommendations in those countries and then see all the articles there. You go, “Yeah, you know what, I think we could do something as good as what’s up there or almost.” Well, okay, that puts you in the range. That’s good, unique content.

But in my opinion, the minimum bar today for modern SEO is a step higher, and that is as good as the best in the search results on the search results page. If you can’t consistently say, “We’re the best result that a searcher could find in the search results,” well then, guess what? You’re not going to have an opportunity to rank. It’s much, much harder to get into those top 10 positions, page 1, page 2 positions than it was in the past because there are so many ranking signals that so many of these websites have already built up over the last 5, 10, 15 years that you need to go above and beyond.

Really, where I want folks to go and where I always expect content from Moz to go is here, and that is 10x, 10 times better than anything I can find in the search results today. If I don’t think I can do that, then I’m not going to try and rank for those keywords. I’m just not going to pursue it. I’m going to pursue content in areas where I believe I can create something 10 times better than the best result out there.

What changed?

Why is this? What changed? Well, a bunch of things actually.

  • User experience became a much bigger element in the ranking algorithms, and that’s direct influences, things that we’ve talked about here on Whiteboard Friday before like pogo-sticking, and lots of indirect ones like the links that you earn based on the user experience that you provide and Google rendering pages, Google caring about load speed and device rendering, mobile friendliness, all these kinds of things.
  • Earning links overtook link building. It used to be you put out a page and you built a bunch of links to it. Now that doesn’t so much work anymore because Google is very picky about the links that it’s going to consider. If you can’t earn links naturally, not only can you not get links fast enough and not get good ones, but you also are probably earning links that Google doesn’t even want to count or may even penalize you for. It’s nearly impossible to earn links with just good, unique content. If there’s something better out there on page one of the search results, why would they even bother to link to you? Someone’s going to do a search, and they’re going to find something else to link to, something better.
  • Third, the rise of content marketing over the last five, six years has meant that there’s just a lot more competition. This field is a lot more crowded than it used to be, with many people trying to get to a higher and higher quality bar.
  • Finally, as a result of many of these things, user expectations have gone crazy. Users expect pages to load insanely fast, even on mobile devices, even when their connection’s slow. They expect it to look great. They expect to be provided with an answer almost instantaneously. The quality of results that Google has delivered and the quality of experience that sites like Facebook, which everyone is familiar with, are delivering means that our brains have rewired themselves to expect very fast, very high quality results consistently.

How do we create “10x” content?

So, because of all these changes, we need a process. We need a process to choose, to figure out how we can get to 10x content, not good, unique content, 10x content. A process that I often like to use — this probably is not the only one, but you’re welcome to use it if you find it valuable — is to go, “All right, you know what? I’m going to perform some of these search queries.”

By the way, I would probably perform the search query in two places. One is in Google and their search results, and the other is actually in BuzzSumo, which I think is a great tool for this, where I can see the content that has been most shared. So if you haven’t already, check out BuzzSumo.com.

I might search for something like Costa Rica ecolodges, which I might be considering a Costa Rica vacation at some point in the future. I look at these top ranking results, probably the whole top 10 as well as the most shared content on social media.

Then I'm going to ask myself these questions:

  • What questions are being asked and answered by these search results?
  • What sort of user experience is provided? I look at this in terms of speed, in terms of mobile friendliness, in terms of rendering, in terms of layout and design quality, in terms of what’s required from the user to be able to get the information? Is it all right there, or do I need to click? Am I having trouble finding things?
  • What’s the detail and thoroughness of the information that’s actually provided? Is it lacking? Is it great?
  • What about use of visuals? Visual content can often take best in class all the way up to 10x if it’s done right. So I might check out the use of visuals.
  • The quality of the writing.
  • I’m going to look at information and data elements. Where are they pulling from? What are their sources? What’s the quality of that stuff? What types of information is there? What types of information is missing?

In fact, I like to ask, “What’s missing?” a lot.

From this, I can determine like, hey, here are the strengths and weaknesses of who’s getting all of the social shares and who’s ranking well, and here’s the delta between me and them today. This is the way that I can be 10 times better than the best results in there.

If you use this process or a process like this, and you do this type of content auditing, and you achieve this level of content quality, you have a real shot at rankings. One of the secret reasons for that is the effort axis I have here: "I'll just go to Fiverr" gets you the Panda-invasion level of quality, "I'll make the intern write it" gets you a little higher, "this is going to take a weekend to build" gets you higher still, and at the very top sits "there's no way to scale this content."

This is a super power. When your competitors or other folks in the field look and say, “Hey, there’s no way that we can scale content quality like this. It’s just too much effort. We can’t keep producing it at this level,” well, now you have a competitive advantage. You have something that puts you in a category by yourself and that’s very hard for competitors to catch up to. It’s a huge advantage in search, in social, on the Web as a whole.

All right everyone, hope you’ve enjoyed this edition of Whiteboard Friday, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com

Should I Rebrand and Redirect My Site? Should I Consolidate Multiple Sites/Brands? – Whiteboard Friday

Posted by randfish

Making changes to your brand is a huge step, and while it’s sometimes the best path forward, it isn’t one to be taken lightly. In today’s Whiteboard Friday, Rand offers some guidance to marketers who are wondering whether a rebrand/redirect is right for them, and also those who are considering consolidating multiple sites under a single brand.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

To rebrand, or not to rebrand, that is the question

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. Today we’re going to chat a little bit about whether you should rebrand and consider redirecting your existing website or websites and whether you should potentially consolidate multiple websites and brands that you may be running.

So we’ve talked before about redirection moves best practices. We’ve also talked about the splitting of link equity and domain authority and those kinds of things. But one of the questions that people have is, “Gosh, you know I have a website today and given the moves that Google has been making, that the social media world has been making, that content marketing has been making, I’m wondering whether I should potentially rebrand my site.” Lots of people bought domains back in the day that were exact match domains or partial match domains or that they thought reflected a move of the web toward or away from less brand-centric stuff and toward more keyword matching, topic matching, intent matching kinds of things.

Maybe you're reconsidering those moves and you want to know, "Hey, should I be thinking about making a change now?" That's what I'm here to answer. So this question, to rebrand or not to rebrand, is a tough one, because you know that when you do that rebrand, you will almost certainly take a traffic hit, and SEO is one of the biggest places where people typically take that traffic hit.

Moz previously was at SEOmoz.org and moved to moz.com. We saw a dip in our traffic over about 3 to 4 months before it fully recovered, and I would say that dip was between 15% and 25% of our search traffic, depending on week to week. I’ll link to a list of metrics that I put on my personal blog, Moz.com/rand, so that you can check those out if you’d like to see them. But it was a short recovery time for us.

One of the questions that people always have is, “Well wait, did you lose rankings for SEO since SEO used to be in your domain name?” The answer is no. In fact, six months after the move, we were ranking higher for SEO related terms and phrases.

Scenario A: Rebranding or redirecting scifitoysandgames.com

So let's imagine that today you are running SciFiToysAndGames.com, which, in my opinion, is right on the borderline of barely tolerable. It could be brandable, but it's not great. I don't love the "sci-fi" in there, partially because the Syfy channel (the entity that broadcasts sci-fi on television) has chosen its own spelling, so "sci-fi" can be misinterpreted as to how it's spelled. I don't love having to have "and" in a domain name. It's long. All sorts of stuff.

Let’s say you also own StarToys.com, but you haven’t used it. Previously StarToys.com has been redirecting to SciFiToysAndGames.com, and you’re thinking, “Well, man, is it the right time to make this move? Should I make this change now? Should I wait for the future?”

How memorable or amplifiable is your current brand?

Well, these are the questions that I would urge you to consider. How memorable and amplifiable is your current brand? If you're recognizing, "Hey, I think our brand name is, in fact, holding us back in search results, in social media amplification, in press and blog mentions, in journalist links, and in word of mouth," that's something serious to think about.

Will you maintain your current brand name long term?

So if you know that sometime in the next two, three, four, or five years you do want to move to StarToys, I would actually strongly urge you to do that right now, because the longer you wait, the longer it will take to build up the signals around the new domain and the more pain you’ll potentially incur by having to keep branding this and working on this old brand name. So I would strongly urge you, if you know you’re going to make the move eventually, make it today. Take the pain now, rather than more pain later.

Can or have you tested brand preference with your target audience?

I would urge you to find two different groups, one who are loyal customers today, people who know SciFiToysAndGames.com and have used it, and two, people who are potential customers, but aren’t yet familiar with it.

You don't need big sample sizes. If you can get 5, 10, or 15 people in a room, or talk to them in person, that works; you can also try some web surveys, or try using some social media ads, like things on Facebook. I've seen some companies do some testing around this. Even buying potential PPC ads and seeing how click-through rates and sentiment perform, those kinds of things are a great way to help validate your ideas, especially if you're forced to bring data to the table by executives or other stakeholders.

How much traffic would you need in one year to justify a URL move?

The last thing I think about is imagine, and I want you to either imagine or even model this out, mathematically model it out. If your traffic growth rate — so let’s say you’re growing at 10% year-over-year right now — if that improved 1%, 5%, or 10% annually with a new brand name, would you make the move? So knowing that you might take a short-term hit, but then that your growth rate would be incrementally higher in years to come, how big would that growth rate need to be?

I would say that, in general, if I were thinking about these two domains, granted this is a hard case because you don't know exactly how much more brandable or word-of-mouth-able or amplifiable your new one might be compared to your existing one. Well, gosh, my general thing here is if you think that's going to be a substantive percentage, say 5% plus, almost always it's worth it, because compound growth rate over a number of years will mean that you're winning big time. Remember that that growth rate is different from raw growth. If you can incrementally increase your growth rate, you get tremendously more traffic when you look back two, three, four, or five years later.
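
To make that concrete with made-up numbers: a site getting 100,000 visits this year that grows 10% a year ends up around 100,000 × 1.1^5 ≈ 161,000 visits in year five, while the same site growing 15% a year ends up around 100,000 × 1.15^5 ≈ 201,000. That 5-point difference in growth rate is roughly 40,000 extra visits in year five alone, and the gap keeps widening every year after that.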

Where does your current and future URL live on the domain/brand name spectrum?

I also made this domain name, brand name spectrum, because I wanted to try and visualize crappiness of domain name, brand name to really good domain name, brand name. I wanted to give some examples and then extract out some elements so that maybe you can start to build on these things thematically as you’re considering your own domains.

So from awful, we go to tolerable, good, and great. So Science-Fi-Toys.net is obviously terrible. I’ve taken a contraction of the name and the actual one. It’s got a .net. It’s using hyphens. It’s infinitely unmemorable up to what I think is tolerable — SciFiToysAndGames.com. It’s long. There are some questions about how type-in-able it is, how easy it is to type in. SciFiToys.com, which that’s pretty good. SciFiToys, relatively short, concise. It still has the “sci-fi” in there, but it’s a .com. We’re getting better. All the way up to, I really love the name, StarToys. I think it’s very brandable, very memorable. It’s concise. It’s easy to remember and type in. It has positive associations probably with most science fiction toy buyers who are familiar with at least “Star Wars” or “Star Trek.” It’s cool. It has some astronomy connotations too. Just a lot of good stuff going on with that domain name.

Then, another one, Region-Data-API.com. That sucks. NeighborhoodInfo.com. Okay, at least I know what it is. Neighborhood is a really hard name to type because it is very hard for many people to spell and remember. It’s long. I don’t totally love it. I don’t love the “info” connotation, which is generic-y.

DistrictData.com has a nice, alliterative ring to it. But maybe we could do even better and actually there is a company, WalkScore.com, which I think is wonderfully brandable and memorable and really describes what it is without being too in your face about the generic brand of we have regional data about places.

What if you’re doing mobile apps? BestAndroidApps.com. You might say, “Why is that in awful?” The answer is two things. One, it’s the length of the domain name and then the fact that you’re actually using someone else’s trademark in your name, which can be really risky. Especially if you start blowing up, getting big, Google might go and say, “Oh, do you have Android in your domain name? We’ll take that please. Thank you very much.”

BestApps.io, in the tech world, it’s very popular to use domains like .io or .ly. Unfortunately, I think once you venture outside of the high tech world, it’s really tough to get people to remember that that is a domain name. If you put up a billboard that says “BestApps.com,” a majority of people will go, “Oh, that’s a website.” But if you use .io, .ly, or one of the new domain names, .ninja, a lot of people won’t even know to connect that up with, “Oh, they mean an Internet website that I can type into my browser or look for.”

So we have to remember that we sometimes live in a bubble. Outside of that bubble are a lot of people who, if it’s not .com, questionable as to whether they’re even going to know what it is. Remember outside of the U.S., country code domain names work equally well — .co.uk, .ca, .co.za, wherever you are.

InstallThis.com. Now we're getting better. Memorable, clear. Then all the way up to, I really like the name AppCritic.com. I have positive associations with like, "Oh yeah, restaurant critics, food critics, and movie critics, and this is an app critic. Great, that's very cool."

What are the things that are in here? Well, stuff at this end of the spectrum tends to be generic, forgettable, hard to type in. It’s long, brand-infringing, danger, danger, and sketchy sounding. It’s hard to quantify what sketchy sounding is, but you know it when you see it. When you’re reviewing domain names, you’re looking for links, you’re looking at things in the SERPs, you’re like, “Hmm, I don’t know about this one.” Having that sixth sense is something that we all develop over time, so sketchy sounding not quite as scientific as I might want for a description, but powerful.

On this end of the spectrum though, domain names and brand names tend to be unique, memorable, short. They use .com. Unfortunately, still the gold standard. Easy to type in, pronounceable. That’s a powerful thing too, especially because of word of mouth. We suffered with that for a long time with SEOmoz because many people saw it and thought, “Oh, ShowMoz, COMoz, SeeMoz.” It sucked. Have positive associations, like StarToys or WalkScore or AppCritic. They have these positive, pre-built-in associations psychologically that suggest something brandable.

Scenario B: Consolidating two sites

Scenario B, and then we’ll get to the end, but scenario B is the question like, “Should I consolidate?” Let’s say I’m running both of these today. Or more realistic and many times I see people like this, you’re running AppCritic.com and StarToys.com, and you think, “Boy, these are pretty separate.” But then you keep finding overlap between them. Your content tends to overlap, the audience tends to overlap. I find this with many, many folks who run multiple domains.

How much audience and content overlap is there?

So we’ve got to consider a few things. First off, that audience and content overlap. If you’ve got StarToys and AppCritic and the overlap is very thin, just that little, tiny piece in the middle there. The content doesn’t overlap much, the audience doesn’t overlap much. It probably doesn’t make that much sense.

But what if you’re finding like, “Gosh, man, we’re writing more and more about apps and tech and mobile and web stuff on StarToys, and we’re writing more and more about other kinds of geeky, fun things on AppCritic. Slowly it feels like these audiences are merging.” Well, now you might want to consider that consolidation.

Is there potential for separate sales or exits?

Second point of consideration, the potential for separate exits or sales. So if you know that you’re going to sell AppCritic.com to someone in the future and you want to make sure that’s separate from StarToys, you should keep them separate. If you think to yourself, “Gosh, I’d never sell one without the other. They’re really part of the same company, brand, effort,” well, I’d really consider that consolidation.

Will you dilute marketing or branding efforts?

Last point of positive consideration is dilution of marketing and branding efforts. Remember that you’re going to be working on marketing. You’re going to be working on branding. You’re going to be working on growing traffic to these. When you split your efforts, unless you have two relatively large, separate teams, this is very, very hard to do at the same rate that it could be done if you combined those efforts. So another big point of consideration. That compound growth rate that we talked about, that’s another big consideration with this.

Is the topical focus out of context?

What I don’t recommend you consider and what has been unfortunately considered, by a lot of folks in the SEO-centric world in the past, is topical focus of the content. I actually am crossing this out. Not a big consideration. You might say to yourself, “But Rand, we talked about previously on Whiteboard Friday how I can have topical authority around toys and games that are related to science fiction stuff, and I can have topical authority related to mobile apps.”

My answer is if the content overlap is strong and the audience overlap is strong, you can do both on one domain. You can see many, many examples of this across the web, Moz being a great example where we talk about startups and technology and sometimes venture capital and team building and broad marketing and paid search marketing and organic search marketing and just a ton of topics, but all serving the same audience and content. Because that overlap is strong, we can be an authority in all of these realms. Same goes for any time you’re considering these things.

All right everyone, hope you've enjoyed this edition of Whiteboard Friday. I look forward to some great comments, and we'll see you again next week. Take care.

Video transcription by Speechpad.com

​The 3 Most Common SEO Problems on Listings Sites

Posted by Dom-Woodman

Listings sites have a very specific set of search problems that you don't run into everywhere else. By day I'm one of Distilled's analysts, but by night I run a job listings site, teflSearch. So, for my first Moz Blog post I thought I'd cover the three search problems with listings sites that I spent far too long agonising about.

Quick clarification time: What is a listings site (i.e. will this post be useful for you)?

The classic listings site is Craigslist, but plenty of other sites act like listing sites:

  • Job sites like Monster
  • E-commerce sites like Amazon
  • Matching sites like Spareroom

1. Generating quality landing pages

The landing pages on listings sites are incredibly important. These pages are usually the primary drivers of converting traffic, and they're usually generated automatically (or are occasionally custom category pages).

For example, if I search "Jobs in Manchester", you can see nearly every result is an automatically generated landing page or category page.

There are three common ways to generate these pages (occasionally a combination of more than one is used):

  • Faceted pages: These are generated by facets—groups of preset filters that let you filter the current search results. They usually sit on the left-hand side of the page.
  • Category pages: These pages are listings which have already had a filter applied and can’t be changed. They’re usually custom pages.
  • Free-text search pages: These pages are generated by a free-text search box.

Those definitions are still a bit general; let's clear them up with some examples:

Amazon uses a combination of categories and facets. If you click on browse by department you can see all the category pages. Then on each category page you can see a faceted search. Amazon is so large that it needs both.

Indeed generates its landing pages through free text search, for example if we search for “IT jobs in manchester” it will generate: IT jobs in manchester.

teflSearch generates landing pages using just facets. The jobs in China landing page is simply a facet of the main search page.

Each method has its own search problems when used for generating landing pages, so let’s tackle them one by one.


Aside

Facets and free-text search will typically generate pages with parameters, e.g. a search for “dogs” would produce:

www.mysite.com?search=dogs

But to make the URL user-friendly, sites will often alter the URLs to display them as folders:

www.mysite.com/results/dogs/

These are still just ordinary free-text searches and facets; the URLs are simply user-friendly. (They’re a lot easier to work with in robots.txt, too!)
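
If it helps to see the idea concretely, here’s a minimal sketch of the folder-style approach, assuming a Flask-style app (the route shapes and the run_search function are illustrative, not anything from the post):

    # Minimal sketch of folder-style search URLs, assuming a Flask-style app.
    # The route shapes and run_search() are illustrative assumptions.
    from flask import Flask, request

    app = Flask(__name__)

    def run_search(query):
        # Hypothetical: run the free-text search and render the results page
        return f"Results for: {query}"

    # Parameter style: /search?search=dogs
    @app.route("/search")
    def parameter_search():
        return run_search(request.args.get("search", ""))

    # Folder style: /results/dogs/ -- the same search underneath, but the URL
    # is friendlier and much easier to target with a single robots.txt rule.
    @app.route("/results/<term>/")
    def folder_search(term):
        return run_search(term)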


Free search (& category) problems

If you’ve decided the base of your search will be a free-text search, then you’ll have two major goals:

  • Goal 1: Helping search engines find your landing pages
  • Goal 2: Giving them link equity.

Solution

Search engines won’t use search boxes, so the solution to both problems is to provide links to the valuable landing pages so that search engines can find them.

There are plenty of ways to do this, but two of the most common are:

  • Category links alongside a search

    Photobucket uses a free-text search to generate pages, but if we look at an example search for photos of dogs, we can see the categories which define the landing pages along the right-hand side. (This is also an example of URL-friendly searches!)

  • Putting the main landing pages in a top-level menu

    Indeed also uses free text to generate landing pages, and they have a browse jobs section which contains the URL structure to allow search engines to find all the valuable landing pages.

Breadcrumbs are often used in addition to the two methods above, and in both of the examples you’ll find breadcrumbs that reinforce the hierarchy.

Category (& facet) problems

Categories, because they tend to be custom pages, don’t actually have many search disadvantages. Instead, it’s the other attributes that make them more or less desirable. You can create them for the purposes you want, so you typically won’t have too many problems.

However, if you also use a faceted search in each category (like Amazon) to generate additional landing pages, then you’ll run into all the problems described in the next section.

At first, facets seem great: an easy way to generate multiple strong, relevant landing pages without doing much at all. The problems appear because people don’t put limits on facets.

Let’s take the job page on teflSearch. We can see it has 18 facets, each with many options. Some of these options will generate useful landing pages:

The China facet in countries will generate “Jobs in China,” and that’s a useful landing page.

On the other hand, the “Conditional Bonus” facet will generate “Jobs with a conditional bonus,” and that’s not so great.

We can also see that the options within a single facet aren’t always useful. As of writing, I have a single job available in Serbia. That’s not a useful search result, and the poor user engagement combined with the tiny amount of content will be a strong signal to Google that it’s thin content. Depending on the scale of your site, it’s very easy to generate a mass of poor-quality landing pages.

Facets generate other problems too. The primary one is that they can create a huge amount of duplicate content and pages for search engines to get lost in. This is caused by two things: the first is the sheer number of possibilities they generate, and the second is that selecting facets in different orders creates identical pages with different URLs.

We end up with four goals for our facet-generated landing pages:

  • Goal 1: Make sure our searchable landing pages are actually worth landing on, and that we’re not handing a mass of low-value pages to the search engines.
  • Goal 2: Make sure we don’t generate multiple copies of our automatically generated landing pages.
  • Goal 3: Make sure search engines don’t get caught in the metaphorical plastic six-pack rings of our facets.
  • Goal 4: Make sure our landing pages have strong internal linking.

The first goal needs to be set internally; you’re always going to be the best judge of the number of results that need to be present on a page in order for it to be useful to a user. I’d argue you can rarely ever go below three, but it depends both on your business and on how much content fluctuates on your site, as the useful landing pages might also change over time.

We can solve the next three problems as a group. There are several possible solutions depending on what skills and resources you have access to; here are two possible solutions:

Category/facet solution 1: Blocking the majority of facets and providing external links
  • Easiest method
  • Good if your valuable category pages rarely change and you don’t have too many of them.
  • Can be problematic if your valuable facet pages change a lot

Nofollow all your facet links, and noindex and block (using robots.txt) any category pages which aren’t valuable or which are deeper than x facet/folder levels into your search.

You set x by looking at where your useful facet pages with search volume actually sit. So, for example, if you have three facets for televisions: manufacturer, size, and resolution, and even combinations of all three have multiple results and search volume, then you could index everything up to three levels.

On the other hand, if people are searching for three levels (e.g. “Samsung 42″ Full HD TV”) but you only have one or two results for three-level facets, then you’d be better off indexing two levels and letting the product pages themselves pick up long-tail traffic for the third level.

If you have valuable facet pages that exist deeper than one facet or folder level into your search, then this creates some duplicate content problems, dealt with in the aside “Indexing more than one level of facets” below.
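
As a rough illustration of the robots.txt part of this, here’s a sketch that emits a rule blocking everything deeper than x folder levels under the search path. The /search/ prefix and x = 2 are assumptions for the example, not rules from the post:

    # Rough sketch: generate a robots.txt rule blocking facet pages deeper than
    # x folder levels. The "/search/" prefix and x = 2 are illustrative assumptions.
    x = 2  # deepest facet/folder level you still want crawled

    lines = ["User-agent: *"]
    # With x = 2 this produces: Disallow: /search/*/*/*
    # which blocks paths three or more segments below /search/ (adjust the
    # pattern for your own URL style, e.g. trailing slashes).
    lines.append("Disallow: /search/" + "*/" * x + "*")

    print("\n".join(lines))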

The immediate problem with this set-up, however, is that in one stroke we’ve removed most of the internal links to our category pages: with all the facet links nofollowed, search engines won’t be able to find your valuable category pages.

In order to re-create the linking, you can add a top-level drop-down menu to your site containing the most valuable category pages, add category links elsewhere on the page, or create a separate part of the site with links to the valuable category pages.

You can see the top-level drop-down menu on teflSearch (it’s the search jobs menu); the other two options are demonstrated by Photobucket and Indeed respectively in the previous section.

The big advantage of this method is how quick it is to implement: it doesn’t require any fiddly internal logic, and adding an extra menu option is usually minimal effort.

Category/facet solution 2: Creating internal logic to work with the facets

  • Requires new internal logic
  • Works for large numbers of category pages with value that can change rapidly

There are four parts to the second solution:

  1. Select valuable facet categories and allow those links to be followed. No-follow the rest.
  2. No-index all pages that return a number of items below the threshold for a useful landing page.
  3. No-follow all facets on pages with a search depth greater than x.
  4. Block all facet pages deeper than x levels in robots.txt.

As with the last solution, x is set by looking at where your useful facet pages with search volume exist (full explanation in the first solution), and if you’re indexing more than one level, you’ll need to check out the aside below to see how to deal with the duplicate content it generates.
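
Here’s a rough sketch of what that per-page logic might look like. The threshold, depth limit, and the idea of a whitelist of valuable facets are illustrative assumptions; your own CMS will have its own names for all of this:

    # Sketch of the per-page logic in solution 2. The names and thresholds are
    # illustrative assumptions, not part of the original post.
    MIN_RESULTS = 3          # below this, the landing page isn't worth indexing
    MAX_FOLLOW_DEPTH = 2     # "x": deepest facet level worth exposing to crawlers
    VALUABLE_FACETS = {"country", "category"}   # facets whose links stay followed

    def meta_robots(result_count, facet_depth):
        """Rules 2 and 3: decide the robots meta tag for a facet-generated page."""
        index = "index" if result_count >= MIN_RESULTS else "noindex"
        follow = "follow" if facet_depth <= MAX_FOLLOW_DEPTH else "nofollow"
        return f"{index},{follow}"

    def facet_link_rel(facet_name):
        """Rule 1: only let search engines follow links for the valuable facets."""
        return "" if facet_name in VALUABLE_FACETS else 'rel="nofollow"'

    print(meta_robots(result_count=1, facet_depth=3))   # noindex,nofollow
    print(facet_link_rel("conditional_bonus"))           # rel="nofollow"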


Aside: Indexing more than one level of facets

If you want more than one level of facets to be indexable, then this will create certain problems.

Suppose you have a facet for size:

  • Televisions: Size: 46″, 44″, 42″

And want to add a brand facet:

  • Televisions: Brand: Samsung, Panasonic, Sony

This will create duplicate content because the search engines will be able to follow your facets in both orders, generating:

  • Television – 46″ – Samsung
  • Television – Samsung – 46″

You’ll have to either rel-canonical your duplicate pages with another rule, or set up your facets so they create a single unique URL.

You also need to be aware that each followable facet you add will multiply with every other followable facet, and it’s very easy to generate a mass of pages for search engines to get stuck in. Depending on your setup, you might need to block more paths in robots.txt or set up more logic to prevent them being followed.

Letting search engines index more than one level of facets adds a lot of possible problems; make sure you’re keeping track of them.
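
One common way to get that single unique URL is to sort the selected facet values into a fixed, site-wide order before building the path, so the order a user clicks them in no longer matters. A minimal sketch (the facet names, values, and URL shape are assumptions):

    # Sketch: build one canonical URL regardless of the order facets were selected in.
    # The facet names, values, and URL shape are illustrative assumptions.
    FACET_ORDER = ["brand", "size"]  # one fixed, site-wide ordering

    def canonical_facet_url(selected):
        """selected: dict of facet name -> value, in whatever order the user clicked."""
        parts = [selected[facet] for facet in FACET_ORDER if facet in selected]
        return "/televisions/" + "/".join(parts) + "/"

    # Both click orders collapse to the same URL: /televisions/samsung/46in/
    print(canonical_facet_url({"size": "46in", "brand": "samsung"}))
    print(canonical_facet_url({"brand": "samsung", "size": "46in"}))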


2. User-generated content cannibalization

This is a common problem for listings sites (assuming they allow user-generated content). If you’re reading this as an e-commerce site that only lists its own products, you can skip this one.

As we covered in the first area, category pages on listings sites are usually the landing pages aiming for the valuable search terms, but as your users start generating pages, they can often create titles and content that cannibalise your landing pages.

Suppose you’re a job site with a category page for PHP Jobs in Greater Manchester. If a recruiter then creates a job advert for PHP Jobs in Greater Manchester for the 4 positions they currently have, you’ve got a duplicate content problem.

This is less of a problem when your site is large and your categories mature; it will be obvious to any search engine which pages are your high-value category pages. But at the start, when you’re lacking authority and individual listings might contain more relevant content than your own search pages, this can be a problem.

Solution 1: Create structured titles

Set the <title> differently from the on-page title. Depending on the variables you have available, you can set the title tag programmatically, without changing the page title, using other information given by the user.

For example, on our imaginary job site, suppose the recruiter also provided the following information in other fields:

  • The no. of positions: 4
  • The primary area: PHP Developer
  • The name of the recruiting company: ABC Recruitment
  • Location: Manchester

We could set the <title> pattern to be: *No of positions* *The primary area* with *recruiter name* in *Location*, which would give us:

4 PHP Developers with ABC Recruitment in Manchester
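
A minimal sketch of how that pattern might be built from the structured fields (the field names are illustrative assumptions):

    # Sketch: build the <title> from structured fields instead of the user's
    # own headline. The field names are illustrative assumptions.
    def build_title(listing):
        return "{positions} {area}s with {recruiter} in {location}".format(
            positions=listing["positions"],
            area=listing["area"],
            recruiter=listing["recruiter"],
            location=listing["location"],
        )

    listing = {
        "positions": 4,
        "area": "PHP Developer",
        "recruiter": "ABC Recruitment",
        "location": "Manchester",
    }
    print(build_title(listing))  # 4 PHP Developers with ABC Recruitment in Manchester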

Setting a <title> tag allows you to target long-tail traffic by constructing detailed descriptive titles. In our above example, imagine the recruiter had specified “Castlefield, Manchester” as the location.

All of a sudden, you’ve got a perfect opportunity to pick up long-tail traffic for people searching in Castlefield in Manchester.

On the downside, you lose the ability to pick up long-tail traffic where your users have chosen keywords you wouldn’t have used.

For example, suppose Manchester has a jobs program called “Green Highway.” A job advert title containing “Green Highway” might pick up valuable long-tail traffic. Being able to discover this, however, and find a way to fit it into a dynamic title is very hard.

Solution 2: Use regex to noindex the offending pages

Perform a regex (or string-contains) search on your listings’ titles and noindex the ones which cannibalise your main category pages.

If it’s not possible to construct titles with variables, or your users provide a lot of additional long-tail traffic with their own titles, then this is a great option. On the downside, you miss out on the structured long-tail traffic that you might otherwise have been able to aim for.
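
Here’s a rough sketch of the check itself, assuming you have a list of your category page names to match against (the category list and example titles are illustrative):

    # Sketch: flag listings whose titles cannibalise a category landing page.
    # The category list and example titles are illustrative assumptions.
    import re

    CATEGORY_PAGES = ["PHP Jobs in Greater Manchester", "IT Jobs in Manchester"]
    patterns = [re.compile(re.escape(name), re.IGNORECASE) for name in CATEGORY_PAGES]

    def should_noindex(listing_title):
        return any(pattern.search(listing_title) for pattern in patterns)

    print(should_noindex("PHP jobs in Greater Manchester - 4 positions"))  # True
    print(should_noindex("Senior PHP Developer, Castlefield"))             # False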

Solution 3: De-index all your listings

It may seem rash, but if you’re a large site with a huge number of very similar or low-content listings, you might want to consider this. There is no common standard: some sites like Indeed choose to noindex all their job adverts, whereas others like Craigslist index all their individual listings because they’ll drive long-tail traffic.

Don’t de-index them all lightly!

3. Constantly expiring content

Our third and final problem is that user-generated content doesn’t last forever. Particularly on listings sites, it’s constantly expiring and changing.

For most use cases I’d recommend 301’ing expired content to a relevant category page, with a message triggered by the redirect notifying the user of why they’ve been redirected. It typically comes out as the best combination of search and UX.
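
Here’s a minimal sketch of that redirect-plus-message pattern, assuming a Flask-style app; the parameter name, URL shapes, and the stand-in listings store are all illustrative assumptions:

    # Sketch of the 301 + message pattern, assuming a Flask-style app.
    # URL shapes, the "expired" parameter, and LISTINGS are illustrative assumptions.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    LISTINGS = {  # stand-in for your real listings store
        101: {"expired": True, "category": "china"},
        102: {"expired": False, "category": "spain"},
    }

    @app.route("/job/<int:job_id>/")
    def job_page(job_id):
        job = LISTINGS[job_id]
        if job["expired"]:
            # 301 to the relevant category page, flagging why the user was moved
            return redirect(f"/jobs/{job['category']}/?expired=1", code=301)
        return f"Job {job_id} details"  # stand-in for the real listing template

    @app.route("/jobs/<category>/")
    def category_page(category):
        note = "That listing has expired; here are similar jobs." if request.args.get("expired") else ""
        return f"{note} Jobs in {category}"  # stand-in for the real category template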

For more information or advice on how to deal with the edge cases, there’s a previous Moz blog post on how to deal with expired content which I think does an excellent job of covering this area.

Summary

In summary, if you’re working with listings sites, all three of the following need to be kept in mind:

  • How are the landing pages generated? If they’re generated using free text or facets, have the potential problems been solved?
  • Is user-generated content cannibalising the main landing pages?
  • How has constantly expiring content been dealt with?

Good luck listing, and if there are any other tricky problems or solutions you’ve come across working on listings sites, let’s chat about them in the comments below!


Using Term Frequency Analysis to Measure Your Content Quality

Posted by EricEnge

It’s time to look at your content differently—time to start understanding just how good it really is. I am not simply talking about titles, keyword usage, and meta descriptions. I am talking about the entire page experience. In today’s post, I am going to introduce the general concept of content quality analysis, why it should matter to you, and how to use term frequency (TF) analysis to gather ideas on how to improve your content.

TF analysis is usually combined with inverse document frequency analysis (collectively TF-IDF analysis). TF-IDF analysis has been a staple concept for information retrieval science for a long time. You can read more about TF-IDF and other search science concepts in Cyrus Shepard’s excellent article here.

For purposes of today’s post, I am going to show you how you can use TF analysis to get clues as to what Google is valuing in the content of sites that currently outrank you. But first, let’s get oriented.

Conceptualizing page quality

Start by asking yourself if your page provides a quality experience to people who visit it. For example, if a search engine sends 100 people to your page, how many of them will be happy? Seventy percent? Thirty percent? Less? What if your competitor’s page gets a higher percentage of happy users than yours does? Does that feel like an “uh-oh”?

Let’s think about this with a specific example in mind. What if you ran a golf club site, and 100 people come to your page after searching on a phrase like “golf clubs.” What are the kinds of things they may be looking for?

Here are some things they might want:

  1. A way to buy golf clubs on your site (you would need to see a shopping cart of some sort).
  2. The ability to select specific brands, perhaps by links to other pages about those brands of golf clubs.
  3. Information on how to pick the club that is best for them.
  4. The ability to select specific types of clubs (drivers, putters, irons, etc.). Again, this may be via links to other pages.
  5. A site search box.
  6. Pricing info.
  7. Info on shipping costs.
  8. Expert analysis comparing different golf club brands.
  9. End user reviews of your company so they can determine if they want to do business with you.
  10. How your return policy works.
  11. How they can file a complaint.
  12. Information about your company. Perhaps an “about us” page.
  13. A link to a privacy policy page.
  14. Whether or not you have been “in the news” recently.
  15. Trust symbols that show that you are a reputable organization.
  16. A way to access pages to buy different products, such as golf balls or tees.
  17. Information about specific golf courses.
  18. Tips on how to improve their golf game.

This is really only a partial list, and the specifics of your site can certainly vary for any number of reasons from what I laid out above. So how do you figure out what it is that people really want? You could pull in data from a number of sources. For example, using data from your site search box can be invaluable. You can do user testing on your site. You can conduct surveys. These are all good sources of data.

You can also look at your analytics data to see what pages get visited the most. Just be careful how you use that data. For example, if most of your traffic is from search, this data will be biased by incoming search traffic, and hence what Google chooses to rank. In addition, you may only have a small percentage of the visitors to your site going to your privacy policy, but chances are good that there are significantly more users than that who notice whether or not you have a privacy policy. Many of these will be satisfied just to see that you have one and won’t actually go check it out.

Whatever you do, it’s worth using many of these methods to determine what users want from the pages of your site and then using the resulting information to improve your overall site experience.

Is Google using this type of info as a ranking factor?

At some level, they clearly are. Google and Bing have evolved far beyond the initial TF-IDF concepts, but we can still use those concepts to better understand our own content.

The first major indication we had that Google was performing content quality analysis was with the release of the Panda algorithm in February of 2011. More recently, we know that on April 21 Google will release an algorithm that makes the mobile-friendliness of a web site a ranking factor. Pure and simple, this algo is about the user experience with a page.

Exactly how Google is performing these measurements is not known, but what we do know is their intent. They want to make their search engine look good, largely because it helps them make more money. Sending users to pages that make them happy will do that. Google has every incentive to improve the quality of their search results in as many ways as they can.

Ultimately, we don’t actually know what Google is measuring and using. It may be that the only SEO impact of providing pages that satisfy a very high percentage of users is an indirect one. I.e., so many people like your site that it gets written about more, linked to more, has tons of social shares, gets great engagement, that Google sees other signals that it uses as ranking factors, and this is why your rankings improve.

But, do I care if the impact is a direct one or an indirect one? Well, NO.

Using TF analysis to evaluate your page

TF-IDF analysis is more about relevance than content quality, but we can still use various precepts from it to help us understand our own content quality. One way to do this is to compare the results of a TF analysis of all the keywords on your page with those pages that currently outrank you in the search results. In this section, I am going to outline the basic concepts for how you can do this. In the next section I will show you a process that you can use with publicly available tools and a spreadsheet.

The simplest form of TF analysis is to count the number of uses of each keyword on a page. However, the problem with that is that a page using a keyword 10 times will be seen as 10 times more valuable than a page that uses a keyword only once. For that reason, we dampen the calculations. I have seen two methods for doing this, as follows:

term frequency calculation

The first method relies on dividing the number of repetitions of a keyword by the count for the most popular word on the entire page. Basically, what this does is eliminate the inherent advantage that longer documents might otherwise have over shorter ones. The second method dampens the total impact in a different way, by taking the log base 10 for the actual keyword count. Both of these achieve the effect of still valuing incremental uses of a keyword, but dampening it substantially. I prefer to use method 1, but you can use either method for our purposes here.
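
Here’s a minimal sketch of both dampening methods, assuming you already have the page’s words as a simple list (tokenising the page is out of scope here):

    # Sketch of the two dampened TF calculations described above.
    from collections import Counter
    from math import log10

    def tf_scores(words, method=1):
        counts = Counter(word.lower() for word in words)
        if method == 1:
            # Method 1: divide each keyword's count by the count of the page's
            # most frequent word, removing the advantage of longer documents
            max_count = max(counts.values())
            return {word: count / max_count for word, count in counts.items()}
        # Method 2: dampen with log base 10 of the raw count (a common variant
        # adds 1 so that a single use doesn't score zero)
        return {word: log10(count) for word, count in counts.items()}

    page_words = "golf clubs and more golf clubs for sale".split()
    print(tf_scores(page_words, method=1))  # e.g. {'golf': 1.0, 'clubs': 1.0, 'and': 0.5, ...}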

Once you have the TF calculated for every different keyword found on your page, you can then start to do the same analysis for pages that outrank you for a given search term. If you were to do this for five competing pages, the result might look something like this:

term frequency spreadsheet

I will show you how to set up the spreadsheet later, but for now, let’s do the fun part, which is to figure out how to analyze the results. Here are some of the things to look for:

  1. Are there any highly related words that all or most of your competitors are using that you don’t use at all?
  2. Are there any such words that you use significantly less, on average, than your competitors?
  3. Also look for words that you use significantly more than competitors.

You can then tag these words for further analysis. Once you are done, your spreadsheet may now look like this:

second stage term frequency analysis spreadsheet

In order to make this fit into the screenshot above and keep it legible, I eliminated some columns you saw in my first spreadsheet. However, I did a sample analysis for the movie “Woman in Gold”; you can see the full spreadsheet of calculations here. Note that we used an automated approach to mark some items as “Low Ratio,” “High Ratio,” or “All Competitors Have, Client Does Not.”
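
If you want to automate the same kind of flagging yourself, here’s a rough sketch; the ratio thresholds and labels are assumptions for illustration, not the exact rules we used:

    # Sketch of automated term flagging. Thresholds and labels are illustrative.
    def flag_term(client_tf, competitor_tfs, low=0.5, high=2.0):
        """client_tf: our page's TF for a term; competitor_tfs: TFs for the ranking pages."""
        using_it = [tf for tf in competitor_tfs if tf > 0]
        if client_tf == 0 and using_it and len(using_it) == len(competitor_tfs):
            return "All Competitors Have, Client Does Not"
        if not using_it:
            return ""  # nobody uses the term; nothing to flag
        average = sum(competitor_tfs) / len(competitor_tfs)
        if client_tf < low * average:
            return "Low Ratio"
        if client_tf > high * average:
            return "High Ratio"
        return ""

    print(flag_term(0.0, [0.4, 0.5, 0.3]))  # All Competitors Have, Client Does Not
    print(flag_term(0.9, [0.2, 0.3, 0.1]))  # High Ratio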

None of these flags by themselves have meaning, so you now need to put all of this into context. In our example, the following words probably have no significance at all: “get”, “you”, “top”, “see”, “we”, “all”, “but”, and other words of this type. These are just very basic English language words.

But, we can see other things of note relating to the target page (a.k.a. the client page):

  1. It’s missing any mention of actor Ryan Reynolds.
  2. It’s missing any mention of actor Helen Mirren.
  3. The page has no reviews.
  4. Words like “family” and “story” are not mentioned.
  5. “Austrian” and “Maria Altmann” are not used at all.
  6. The phrase “woman in gold” and the words “billing” and “info” are used proportionally more than they are on the other pages.

Note that the last item is only visible if you open the spreadsheet. The issues above could well be significant, as the lead actors, reviews, and story details are all indications that a page has in-depth content. We see that competing pages that rank have details of the story, so that’s an indication that this is what Google (and users) are looking for. The fact that the main key phrase, and the word “billing”, are used to a proportionally high degree also makes it seem a bit spammy.

In fact, if you look at the information closely, you can see that the target page is quite thin in overall content. So much so that it almost looks like a doorway page. Indeed, it looks like it was put together by the movie studio itself, just not very well, as it presents little in the way of a home page experience that would cause it to rank for the name of the movie!

In the many different times I have done an analysis using these methods, I’ve been able to make many different types of observations about pages. A few of the more interesting ones include:

  1. A page that had no privacy policy, yet was taking personally identifiable info from users.
  2. A major lack of important synonyms that would indicate a real depth of available content.
  3. Comparatively low Domain Authority competitors ranking with in-depth content.

These types of observations are interesting and valuable, but it’s important to stress that you shouldn’t be overly mechanical about this. The value in this type of analysis is that it gives you a technical way to compare the content on your page with that of your competitors. This type of analysis should be used in combination with other methods that you use for evaluating that same page. I’ll address this some more in the summary section below.

How do you execute this for yourself?

The full spreadsheet contains all the formulas, so all you need to do is link in the keyword count data. I have tried this with two different keyword density tools: the one from Searchmetrics, and this one from motoricerca.info.

I am not endorsing these tools, and I have no financial interest in either one—they just seemed to work fairly well for the process I outlined above. To provide the data in the right format, please do the following:

  1. Run all the URLs you are testing through the keyword density tool.
  2. Copy and paste all the one word, two word, and three word results into a tab on the spreadsheet.
  3. Sort them all so you get total word counts aligned by position as I have shown in the linked spreadsheet.
  4. Set up the formulas as I did in the demo spreadsheet (you can just use the demo spreadsheet).
  5. Then do your analysis!

This may sound a bit tedious (and it is), but it has worked very well for us at STC.

Summary

You can also use usability groups and a number of other methods to figure out what users are really looking for on your site. However, what this does is give us a look at what Google has chosen to rank the highest in its search results. Don’t treat this as some sort of magic formula where you mechanically tweak the content to get better metrics in this analysis.

Instead, use this as a method for slicing into your content to better see it the way a machine might see it. It can yield some surprising (and wonderful) insights!


Spam Score: Moz’s New Metric to Measure Penalization Risk

Posted by randfish

Today, I’m very excited to announce that Moz’s Spam Score, an R&D project we’ve worked on for nearly a year, is finally going live. In this post, you can learn more about how we’re calculating spam score, what it means, and how you can potentially use it in your SEO work.

How does Spam Score work?

Over the last year, our data science team, led by Dr. Matt Peters, examined a great number of potential factors that predicted whether a site might be penalized or banned by Google. We found strong correlations with 17 unique factors we call “spam flags,” and turned them into a score.

Almost every subdomain in Mozscape (our web index) now has a Spam Score attached to it, and this score is viewable inside Open Site Explorer (and soon, the MozBar and other tools). The score is simple; it just records the quantity of spam flags the subdomain triggers. Our correlations showed that no particular flag was more likely than others to mean a domain was penalized/banned in Google, but firing many flags had a very strong correlation (you can see the math below).

Spam Score currently operates only on the subdomain level—we don’t have it for pages or root domains. It’s been my experience and the experience of many other SEOs in the field that a great deal of link spam is tied to the subdomain level. There are plenty of exceptions—manipulative links can and do live on plenty of high-quality sites—but as we’ve tested, we found that subdomain-level Spam Score was the best solution we could create at web scale. It does a solid job with the most obvious, nastiest spam, and a decent job highlighting risk in other areas, too.

How to access Spam Score

Right now, you can find Spam Score inside Open Site Explorer, both in the top metrics (just below domain/page authority) and in its own tab labeled “Spam Analysis.” Spam Score is only available for Pro subscribers right now, though in the future, we may make the score in the metrics section available to everyone (if you’re not a subscriber, you can check it out with a free trial).

The current Spam Analysis page includes a list of subdomains or pages linking to your site. You can toggle the target to look at all links to a given subdomain on your site, to specific pages, or to the entire root domain. You can further toggle the source tier to look at the Spam Score for incoming linking pages or subdomains (but in the case of pages, we’re still showing the Spam Score for the subdomain on which that page is hosted).

You can click on any Spam Score row and see the details about which flags were triggered. We’ll bring you to a page like this:

Back on the original Spam Analysis page, at the very bottom of the rows, you’ll find an option to export a disavow file, which is compatible with Google Webmaster Tools. You can choose to filter the file to contain only those sites with a given spam flag count or higher:

Disavow exports usually take less than 3 hours to finish. We can send you an email when it’s ready, too.

WARNING: Please do not export this file and simply upload it to Google! You can really, really hurt your site’s ranking and there may be no way to recover. Instead, carefully sort through the links therein and make sure you really do want to disavow what’s in there. You can easily remove/edit the file to take out links you feel are not spam. When Moz’s Cyrus Shepard disavowed every link to his own site, it took more than a year for his rankings to return!

We’ve actually made the file not-wholly-ready for upload to Google in order to be sure folks aren’t too cavalier with this particular step. You’ll need to open it up and make some edits (specifically to lines at the top of the file) in order to ready it for Webmaster Tools.

In the near future, we hope to have Spam Score in the Mozbar as well, which might look like this: 

Sweet, right? 🙂

Potential use cases for Spam Analysis

This list probably isn’t exhaustive, but these are a few of the ways we’ve been playing around with the data:

  1. Checking for spammy links to your own site: Almost every site has at least a few bad links pointing to it, but it’s been hard to know how much or how many potentially harmful links you might have until now. Run a quick spam analysis and see if there’s enough there to cause concern.
  2. Evaluating potential links: This is a big one where we think Spam Score can be helpful. It’s not going to catch every potentially bad link, and you should certainly still use your brain for evaluation too, but as you’re scanning a list of link opportunities or surfing to various sites, having the ability to see if they fire a lot of flags is a great warning sign.
  3. Link cleanup: Link cleanup projects can be messy, involved, precarious, and massively tedious. Spam Score might not catch everything, but sorting links by it can be hugely helpful in identifying potentially nasty stuff, and filtering out the links that are probably clean.
  4. Disavow Files: Again, because Spam Score won’t perfectly catch everything, you will likely need to do some additional work here (especially if the site you’re working on has done some link buying on more generally trustworthy domains), but it can save you a heap of time evaluating and listing the worst and most obvious junk.

Over time, we’re also excited about using Spam Score to help improve the PA and DA calculations (it’s not currently in there), as well as adding it to other tools and data sources. We’d love your feedback and insight about where you’d most want to see Spam Score get involved.

Details about Spam Score’s calculation

This section comes courtesy of Moz’s head of data science, Dr. Matt Peters, who created the metric and deserves (at least in my humble opinion) a big round of applause. – Rand

Definition of “spam”

Before diving into the details of the individual spam flags and their calculation, it’s important to first describe our data gathering process and “spam” definition.

For our purposes, we followed Google’s definition of spam and gathered labels for a large number of sites as follows.

  • First, we randomly selected a large number of subdomains from the Mozscape index stratified by mozRank.
  • Then we crawled the subdomains and threw out any that didn’t return a “200 OK” (redirects, errors, etc).
  • Finally, we collected the top 10 de-personalized, geo-agnostic Google-US search results using the full subdomain name as the keyword and checked whether any of those results matched the original keyword. If they did not, we called the subdomain “spam,” otherwise we called it “ham.”

We performed the most recent data collection in November 2014 (after the Penguin 3.0 update) for about 500,000 subdomains.
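
To make the labelling rule concrete, here’s a small sketch of the check itself; how you collect the de-personalized top-10 results is up to you, and the example URLs are illustrative:

    # Sketch of the spam/ham labelling rule described above. Collecting the
    # top-10 results is out of scope; the example URLs are illustrative.
    from urllib.parse import urlparse

    def label_subdomain(subdomain, top10_result_urls):
        """Return 'ham' if the subdomain appears in its own top-10 results, else 'spam'."""
        for url in top10_result_urls:
            if urlparse(url).hostname == subdomain:
                return "ham"
        return "spam"

    print(label_subdomain("moz.com", ["https://moz.com/blog", "https://example.com/"]))  # ham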

Relationship between number of flags and spam

The overall Spam Score is currently an aggregate of 17 different “flags.” You can think of each flag as a potential “warning sign” that signals that a site may be spammy. The overall likelihood of spam increases as a site accumulates more and more flags, so the total number of flags is a strong predictor of spam. Accordingly, the flags are designed to be used together—no single flag, or even a few flags, is cause for concern (and indeed most sites will trigger at least a few flags).

The following table shows the relationship between the number of flags and percent of sites with those flags that we found Google had penalized or banned:

ABOVE: The overall probability of spam vs. the number of spam flags. Data collected in Nov. 2014 for approximately 500K subdomains. The table also highlights the three overall danger levels: low/green (<10%), moderate/yellow (10-50%), and high/red (>50%).

The overall spam percent, averaged across a large number of sites, increases in lockstep with the number of flags; however, there are outliers in every category. For example, there are a small number of sites with very few flags that are tagged as spam by Google, and conversely, a small number of sites with many flags that are not spam.

Spam flag details

The individual spam flags capture a wide range of spam signals: link profiles, anchor text, on-page signals, and properties of the domain name. At a high level, the process to determine the spam flags for each subdomain is:

  • Collect link metrics from Mozscape (mozRank, mozTrust, number of linking domains, etc.).
  • Collect anchor text metrics from Mozscape (top anchor text phrases sorted by number of links).
  • Collect the top five pages by Page Authority on the subdomain from Mozscape.
  • Crawl the top five pages plus the home page, and process them to extract on-page signals.
  • Provide the output for Mozscape to include in the next index release cycle.

Since the spam flags are incorporated into the Mozscape index, fresh data is released with each new index. Right now, we crawl and process the spam flags for each subdomain every two to three months, although this may change in the future.

Link flags

The following table lists the link and anchor text related flags with the odds ratio for each flag. For each flag, we can compute two percents: the percent of sites with that flag that are penalized by Google and the percent of sites with that flag that were not penalized. The odds ratio is the ratio of these percents and gives the increase in likelihood that a site is spam if it has the flag. For example, the first row says that a site with this flag is 12.4 times more likely to be spam than one without the flag.

ABOVE: Description and odds ratio of link and anchor text related spam flags. The odds ratio for each flag gives the overall increase in spam likelihood if the flag is present.

Working down the table, the flags are:

  • Low mozTrust to mozRank ratio: Sites with low mozTrust compared to mozRank are likely to be spam.
  • Large site with few links: Large sites with many pages tend to also have many links and large sites without a corresponding large number of links are likely to be spam.
  • Site link diversity is low: If a large percentage of links to a site are from a few domains it is likely to be spam.
  • Ratio of followed to nofollowed subdomains/domains (two separate flags): Sites with a large number of followed links relative to nofollowed are likely to be spam.
  • Small proportion of branded links (anchor text): Organically occurring links tend to contain a disproportionate amount of branded keywords. If a site does not have a lot of branded anchor text, it’s a signal the links are not organic.

On-page flags

Similar to the link flags, the following table lists the on-page and domain name related flags:

ABOVE: Description and odds ratio of on-page and domain name related spam flags. The odds ratio for each flag gives the overall increase in spam likelihood if the flag is present.

  • Thin content: If a site has a relatively small ratio of content to navigation chrome, it’s likely to be spam.
  • Site mark-up is abnormally small: Non-spam sites tend to invest in rich user experiences with CSS, Javascript and extensive mark-up. Accordingly, a large ratio of text to mark-up is a spam signal.
  • Large number of external links: A site with a large number of external links may look spammy.
  • Low number of internal links: Real sites tend to link heavily to themselves via internal navigation, and a relative lack of internal links is a spam signal.
  • Anchor text-heavy page: Sites with a lot of anchor text are more likely to be spam than those with more content and fewer links.
  • External links in navigation: Spam sites may hide external links in the sidebar or footer.
  • No contact info: Real sites prominently display their social and other contact information.
  • Low number of pages found: A site with only one or a few pages is more likely to be spam than one with many pages.
  • TLD correlated with spam domains: Certain TLDs are more spammy than others (e.g. .pw).
  • Domain name length: A long subdomain name like “bycheapviagra.freeshipping.onlinepharmacy.com” may indicate keyword stuffing.
  • Domain name contains numerals: Domain names with numerals may be automatically generated and therefore spam.

If you’d like some more details on the technical aspects of the spam score, check out the video of Matt’s 2012 MozCon talk about Algorithmic Spam Detection or the slides (many of the details have evolved, but the overall ideas are the same):

We’d love your feedback

As with all metrics, Spam Score won’t be perfect. We’d love to hear your feedback and ideas for improving the score, as well as what you’d like to see from its in-product application in the future. Feel free to leave comments on this post, or to email Matt (matt at moz dot com) and me (rand at moz dot com) privately with any suggestions.

Good luck cleaning up and preventing link spam!




Hiring for SEO: How to Find and Hire Someone with Little or No Experience

Posted by RuthBurrReedy

SEO is a seller’s market. The supply of people with SEO experience is currently no match for the demand for search engine marketing services, as anyone who has spent months searching for the right SEO candidate can tell you. Even in a big city with a booming tech scene (like Seattle, LA, New York, or Austin), experienced SEOs are thin on the ground. In a local market where the economy is less tech-driven (like, say, Oklahoma City, where I work), finding an experienced SEO (even one with just a year or two of experience) is like finding a unicorn.

You’re hired. (Photo via Pixabay)

If you’re looking for an in-house SEO or someone to run your whole program, you may have no choice but to hold out for a hero (and think about relocating someone). If you’re an SEO trying to grow a team of digital marketers at an agency or to expand a large in-house team, sometimes your best bet is to hire someone with no digital marketing experience but a lot of potential and train them. 

However, you can’t plug just anyone into an SEO role, train them up right and have them be fantastic (or enjoy their job); there are definite skills, talents and personality traits that contribute to success in digital marketing.

Most advice on hiring SEOs is geared toward making sure they know their stuff and aren’t spammers. That’s not really applicable to hiring at the trainee level, though. So how can you tell whether someone is right for a job they’ve never done? At BigWing, we’ve had a lot of success hiring smart young people and turning them into digital marketers, and there are a few things we look for in a candidate.

Are they an aggressive, independent learner?

Successful SEOs spend a ton of time on continued learning—reading blogs, attending conferences and webinars, discussing and testing new techniques—and a lot of that learning happens outside of normal work hours. The right candidate should be someone who loves learning and has the ability to independently drive their ongoing education.

Ask job candidates about another situation where they’ve had to quickly pick up a new skill. What did they do to learn it? How did that go? If it’s never come up for them, ask what they might do in that situation.

Interview prep is something I always look for in a candidate, since it shows they’re actually interested in the job. Ask what they’ve done to prep for the interview. Did they take a look at your company website? Maybe do some Googling to find other informational resources on what digital marketing entails? What did they learn? Where did they learn it? How did they find it?

Give your candidates some homework before the interview. Have them read the Beginner’s Guide to SEO, maybe Google’s Search Engine Optimization Starter Guide, or the demo modules at Distilled U. How much of it did they retain? More importantly, what did they learn? Which brings us to:

Do they have a basic understanding of what SEO is and why we do it?

I’ve seen a lot of people get excited about learning SEO, do OK for a year or two, and then crash and burn. The number one cause of SEO flame-out or burn-out, in my experience, is an inability to pivot from old tactics to new ones. This failure often stems from a fundamental lack of understanding of what SEO is (marketing, connecting websites that have stuff with people who want that stuff) and what it is not (any single SEO tactic).

It can be frustrating when the methods you originally learned on, or that used to work so well, dry up and blow away (I’m looking at you, siloing and PageRank sculpting). If you’re focused on what tricks and tactics can get you ranking #1, instead of on how you’re using digital techniques to market to and connect with potential customers, sooner or later the rug’s going to get pulled out from under you.

Ask your candidates: what did they retain from their research? Are they totally focused on the search engine, or have they thought about how visits can turn into revenue? Do they seem more interested in being a hacker, or a marketer? Some people really fall in love with the idea that they could manipulate search engines to do what they want; I look for people who are more in love with the idea of using the Internet as a tool to connect businesses with their customers, since ultimately your SEO client is going to want revenue, not just rankings.

Another trait I look for in the interview process is empathy. Can they articulate why a business might want to invest in search? Ask them to imagine some fears or concerns a small business owner might have when starting up an Internet marketing program. This is especially important for agency work, where communicating success requires an understanding of your client’s goals and concerns.

Can they write?

Photo via Pixabay

Even if you’re looking to grow someone into a technical SEO, not a content creator, SEO involves writing well. You’re going to have to be able to create on-page elements that not only communicate topical relevance to search engines but also appeal to users.

This should go without saying, but in my experience definitely doesn’t: their resume should be free of typos and grammatical errors. Not only is this an indicator of their ability to write while unsupervised, it’s also an indicator of their attention to detail and how seriously they’re taking the position.

Any kind of writing experience is a major plus for me when looking at a resume, but isn’t necessarily a requirement. It’s helpful to get some idea of what they’re capable of, though. Ask for a writing sample, and better yet, look for a writing sample in the wild online. Have they blogged before?

You’ll almost certainly be exchanging emails with a candidate before an interview—pay attention to how they communicate via email. Is it hard to tell what they’re talking about? Good writing isn’t just about grammar; it’s about communicating ideas.

I like to give candidates a scenario like “A client saw traffic to their website decline because of an error we failed to detect. We found and corrected the error, but their traffic numbers are still down for the month,” and have them compose a pretend email to the client about what happened. This is a great way to test both their written communication skills and their empathy for the client. Are you going to have to proofread their client emails before they go out? That sounds tedious.

How are their critical thinking and data analysis skills?

A brand-new digital marketer probably won’t have any experience with analytics tools like Google Analytics, and that’s OK—you can teach them how to use those. What’s harder to teach is an ability to think critically and to use data to make decisions.

Have your candidates ever been in a situation where they needed to use data to figure out what to do next? What about to tell a story, back up a claim, or change someone’s mind? Recent college grads should all have recent experience with this, regardless of their major—critical thinking and data analysis are what college is all about.

How comfortable are they in Microsoft Excel? They don’t have to love it, but if they absolutely loathe it, SEO probably isn’t for them. Would it make them miserable to spend most of a day in a spreadsheet (not every day, but fairly regularly)?

Are they a citizen of the web?

Even if they’ve never heard of SEO, a new employee is going to have an easier time learning it if they’re already pretty net savvy. An active web presence also indicates a general interest in the Internet, which is one indicator of whether they’ll have long-term interest in digital marketing as a field. Do some recon: are they active on social media? Have they ever blogged? What comes up when you Google them?

Prior experience

Different applicants will have different backgrounds, and you’ll have the best idea of what skills someone will need to bring to the table to fill the role you need. When I’m reading a resume, I take experience in any of these areas as a good sign:

  • Marketing 
  • Advertising 
  • Public relations 
  • APIs (using them, creating apps with them, what have you) 
  • Web development or coding of any kind 
  • Web design 
  • Copywriting

Your mileage may vary

Photo via Knowyourmeme

Very few candidates are going to excel in all of the areas outlined above, and everyone you talk to is going to be stronger in some areas than others. Since digital marketing can include a wide variety of different tasks, keep in mind the things you’d actually like the person to do on the job; for example, written communication becomes somewhat less important in a non-client-facing role. At the very least, look for a smart, driven person who is excited about digital marketing as a career opportunity (not just as a next paycheck).

Hiring inexperienced people has its risks: the person you hire may not actually turn out to be any good at SEO. They may have more trouble learning it than you anticipated, and once they start doing it, they may decide that SEO just isn’t what they want to do long-term.

On the other hand, hiring and training someone who’s a great fit for your company culture and who is excited about learning often results in a better employee than hiring someone with experience who doesn’t really mesh well with your team. Plus, teaching someone SEO is a great way to make sure they don’t have any bad habits that could put your clients at risk. Best of all, you have the opportunity to unlock a whole career for someone and watch them grow into a world-class marketer—and that’s a great feeling.


Leveraging Panda to Get Out of Product Feed Jail

Posted by MichaelC

This is a story about Panda, customer service, and differentiating your store from others selling the same products.

Many e-commerce websites get the descriptions, specifications, and imagery for products they sell from feeds or databases provided by the
manufacturers. The manufacturers might like this, as they control how their product is described and shown. However, it does their retailers
no good when they are trying to rank for searches for those products and they’ve got the exact same content as every other retailer. If the content
in the feed is thin, then you’ll have pages with…well….thin content. And if there’s a lot of content for the products, then you’ll have giant blocks of content that
Panda might spot as being the same as they’ve seen on many other sites. To throw salt on the wound, if the content is really crappy, badly written,
or downright wrong, then the retailers’ sites will look low-quality to Panda and users as well.

Many webmasters see Panda as a type of Google penalty—but it’s not, really. Panda is a collection of measurements Google
is taking of your web pages to try and give your pages a rating on how happy users are likely to be with those pages.
It’s not perfect, but then again—neither is your website.

Many SEO folks (including me) tend to focus on the kinds of tactical and structural things you can do to make Panda see
your web pages as higher quality: things like adding big, original images, interactive content like videos and maps, and
lots and lots and lots and lots of text. These are all good tactics, but let’s step back a bit and look at a specific
example to see WHY Panda was built to do this, and from that, what we can do as retailers to enrich the content we have
for e-commerce products where our hands are a bit tied—we’re getting a feed of product info from the manufacturers, the same
as every other retailer of those products.

I’m going to use a real-live example that I suffered through about a month ago. I was looking for a replacement sink
stopper for a bathroom sink. I knew the brand, but there wasn’t a part number on the part I needed to replace. After a few Google
searches, I think I’ve found it on Amazon:


Don’t you wish online shopping was always this exciting?

What content actually teaches the customer

All righty… my research has shown me that there are standard sizes for plug stoppers. In fact, I initially ordered a
“universal fit sink stopper.” Which didn’t fit. Then I found 3 standard diameters, and 5 or 6 standard lengths.
No problem…I possess that marvel of modern tool chests, a tape measure…so I measure the part I have that I need to replace. I get about 1.5″ x 5″.
So let’s scroll down to the product details to see if it’s a match:

Kohler sink stopper product info from hell

Whoa. 1.2 POUNDS? This sink stopper must be made of
Ununoctium.
The one in my hand weighs about an ounce. But the dimensions
are way off as well: a 2″ diameter stopper isn’t going to fit, and mine needs to be at least an inch longer.

I scroll down to the product description…maybe there’s more detail there, maybe the 2″ x 2″ is the box or something.

I've always wanted a sink stopper designed for long long

Well, that’s less than helpful, with a stupid typo AND incorrect capitalization AND a missing period at the end.
Doesn’t build confidence in the company’s quality control.

Looking at the additional info section, maybe this IS the right part…the weight quoted in there is about right:

Maybe this is my part after all

Where else customers look for answers

Next I looked at the questions and answers bit, which convinced me that it PROBABLY was the right part:

Customers will answer the question if the retailer won't...sometimes.

If I was smart, I would have covered my bets by doing what a bunch of other customers also did: buy a bunch of different parts,
and surely one of them will fit. Could there possibly be a clearer signal that the product info was lacking than this?

If you can't tell which one to buy, buy them all!

In this case, that was probably smarter than spending another 1/2 hour of my time snooping around online. But in general, people
aren’t going to be willing to buy THREE of something just to make sure they get the right one. This cheap part was an exception.

So, surely SOMEONE out there has the correct dimensions of this part on their site—so I searched for the part number I saw on the Amazon
listing. But as it turned out, that crappy description and wrong weight and dimensions were on every site I found…because they came from
the manufacturer.

Better Homes and Gardens...but not better description.

A few of the sites had edited out the “designed for long long” bit, but apart from that, they were all the same.

What sucks for the customer is an opportunity for you

Many, many retailers are in this same boat—they get their product info from the manufacturer, and if the data sucks in their feed,
it’ll suck on their site. Your page looks weak to both users and to Panda, and it looks the same as everybody else’s page for that product…to
both users and to Panda. So (a) you won’t rank very well, and (b) if you DO manage to get a customer to that page, it’s not as likely to convert
to a sale.

What can you do to improve on this? Here’s a few tactics to consider.

1. Offer your own additional description and comments

Add a new field to your CMS for your own write-ups on products, and when you discover issues like the above, you can add your own information—and
make it VERY clear what’s the manufacturer’s stock info and what you’ve added (that’s VALUE-ADDED) as well. My client
Sports Car Market magazine does this with their collector car auction reports in their printed magazine:
they list the auction company’s description of the car, then their reporter’s assessment of the car. This is why I buy the magazine and not the auction catalog.

2. Solicit questions

Be sure you solicit questions on every product page—your customers will tell you what’s wrong or what important information is missing. Sure,
you’ve got millions of products to deal with, but what the customers are asking about (and your sales volume of course) will help you prioritize as well as
find the problems (read: opportunities).

Amazon does a great job of enabling this, but in this case, I used the Feedback option to update the product info,
and got back a total
bull-twaddle email from the seller about how the dimensions are in the product description thank you for shopping with us, bye-bye.
I tried to help them, for free, and they shat on me.

3. But I don’t get enough traffic to get the questions

Don’t have enough site volume to get many customer requests? No problem, the information is out there for you on Amazon :-).
Take your most important products, and look them up on Amazon, and see what questions are being asked—then answer those ONLY on your own site.

4. What fits with what?

Create fitment/cross-reference charts for products.
You probably have in-house knowledge of what products fit/are compatible with what other products.
Just because YOU know a certain accessory fits all makes and models, because it’s some industry-standard size, doesn’t mean that the customer knows this.

If there’s a particular way to measure a product so you get the correct size, explain that (with photos of what you’re measuring, if it seems
at all complicated). I’m getting a new front door for my house. 

  • How big is the door I need? 
  • Do I measure the width of the door itself, or the width of the
    opening (probably 1/8″ wider)? 
  • Or if it’s pre-hung, do I measure the frame too? Is it inswing or outswing?
  • Right or left hinged…am I supposed to
    look at the door from inside the house or outside to figure this out? 

If you’re a door seller, this is all obvious stuff,
but it wasn’t obvious to me, and NOT having the info on a website means (a) I feel stupid, and (b) I’m going to look at your competitors’ sites
to see if they will explain it…and maybe I’ll find a door on THEIR site I like better anyway.

Again, prioritize based on customer requests.

5. Provide your own photos and measurements

If examples of the physical products are available to you, take your own photos, and take your own measurements.

In fact, take your OWN photo of YOURSELF taking the measurement—so the user can see exactly what part of the product you’re measuring.
In the photo below, you can see that I’m measuring the diameter of the stopper, NOT the hole in the sink, NOT the stopper plus the rubber gasket.
And no, Kohler, it’s NOT 2″ in diameter…by a long shot.

Don't just give the measurements, SHOW the measurements

Keep in mind, you shouldn’t have to tear apart your CMS to do any of this. You can put your additions in a new database table, tied to the
core product content by SKU. In the page template code for the product page, check your database to see whether you have any “extra bits” to display
alongside the feed content; this keeps your additions separate from the core product catalog code and makes updates to the CMS/product catalog less painful as well.
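
To make that concrete, here’s a minimal sketch of what the template-side check might look like, building on the hypothetical product_addendum table from the earlier sketch. Again, the function and field names are made up—the point is that the feed content renders exactly as before, and your “extra bits” only show up when you actually have them.

```python
import sqlite3
from typing import Optional

def get_addendum(sku: str, db_path: str = "catalog_extras.db") -> Optional[dict]:
    """Look up our own write-up for this SKU, if we have one."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    row = conn.execute(
        "SELECT our_description, our_notes FROM product_addendum WHERE sku = ?",
        (sku,),
    ).fetchone()
    conn.close()
    return dict(row) if row else None

def render_product_page(product: dict) -> str:
    """Render the manufacturer's feed content first, then our clearly labeled additions."""
    html = [
        f"<h1>{product['name']}</h1>",
        f"<div class='feed-description'>{product['description']}</div>",
    ]
    extra = get_addendum(product["sku"])
    if extra:
        html.append("<h2>Our notes on this product</h2>")
        html.append(f"<div class='our-description'>{extra['our_description']}</div>")
        if extra["our_notes"]:
            html.append(f"<p class='our-notes'>{extra['our_notes']}</p>")
    return "\n".join(html)
```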

Fixing your content doesn’t have to be all that difficult or expensive

At this point, you’re probably thinking “hey, but I’ve got 1.2 million SKUs, and if I were to do this, it’d take me 20 years to update all of them.”
FINE. Don’t update all of them. Prioritize, based on factors like what you sell the most of, what you make the best margin on, what customers
ask questions about the most, etc. Maybe concentrate on your top 5% in terms of sales, and do those first. Take all that money you used to spend
buying spammy links every month, and spend it instead on junior employees or interns doing the product measurements, extra photos, etc.
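
If you want to make that prioritization a bit more systematic, here’s a rough sketch of scoring SKUs by sales, margin, and question volume. The weights and field names are entirely made up—use whatever signals you actually track.

```python
def priority_score(sku_stats: dict) -> float:
    """Rough, made-up weighting of sales volume, margin, and customer questions."""
    return (
        0.5 * sku_stats["units_sold"]
        + 0.3 * sku_stats["margin_per_unit"] * sku_stats["units_sold"]
        + 0.2 * sku_stats["customer_questions"]
    )

skus = [
    {"sku": "STOPPER-150", "units_sold": 1200, "margin_per_unit": 1.50, "customer_questions": 40},
    {"sku": "DOOR-36-RH",  "units_sold": 300,  "margin_per_unit": 80.0, "customer_questions": 12},
]

# Work the list from the highest score down, e.g. tackle the top 5% first.
for item in sorted(skus, key=priority_score, reverse=True):
    print(item["sku"], round(priority_score(item), 1))
```

Run something like this over your whole catalog, sort descending, and hand the top slice of the list to those interns.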

And don’t be afraid to spend a little effort on a low value product, if it’s one that frequently gets questions from customers.
Simple things can turn a customer into a life-long fan. I once needed to replace a dishwasher door seal, and didn’t know whether I needed special glue,
special tools, how to cut it to fit with or without overlap, etc.
I found a video on how to do the replacement on
RepairClinic.com. So easy!
They got my business for the $10 seal, of course…but now I order my $50 fridge water filter from them every six months as well.

Benefits to your conversion rate

Certainly the tactics we’ve talked about will improve your conversion rate from visitors to purchasers. If JUST ONE of those sites I looked at for that damn sink stopper
had the right measurement (and maybe some statement about how the manufacturer’s specs above are actually incorrect, we measured, etc.), I’d have stopped right there
and bought from that site.

What does this have to do with Panda?

But, there’s a Panda benefit here too. You’ve just added a bunch of additional, unique text to your site…and maybe a few new unique photos as well.
Not only are you going to convert better, but you’ll probably rank better too.

If you’re NOT Amazon, or eBay, or Home Depot, etc., then Panda is your secret weapon to help you rank against those other sites whose backlink profiles are
stronger than
carbon fibre (that’s a really cool video, by the way).
If you saw my
Whiteboard Friday on Panda optimization, you’ll know that
Panda tuning can overcome incredible backlink profile deficits.

It’s go time

We’re talking about tactics that are time-consuming, yes—but relatively easy to implement, using relatively inexpensive staff (and in some
cases, your customers are doing some of the work for you).
And it’s something you can roll out one product at a time.
You’ll be doing things that really DO make your site a better experience for the user…we’re not just trying to trick Panda’s measurements.

  1. Your pages will rank better, and bring more traffic.
  2. Your pages will convert better, because users won’t leave your site to look elsewhere for answers to their questions.
  3. Your customers will be more loyal, because you were able to help them when nobody else bothered.

Don’t be held hostage by other people’s crappy product feeds. Enhance your product information with your own info and imagery.
Like good link-building and outreach, it takes time and effort, but both Panda and your site visitors will reward you for it.
