Distance from Perfect

Posted by wrttnwrd

In spite of all the advice, the strategic discussions and the conference talks, we Internet marketers are still algorithmic thinkers. That’s obvious when you think of SEO.

Even when we talk about content, we’re algorithmic thinkers. Ask yourself: How many times has a client asked you, “How much content do we need?” How often do you still hear “How unique does this page need to be?”

That’s 100% algorithmic thinking: Produce a certain amount of content, move up a certain number of spaces.

But you and I know it’s complete bullshit.

I’m not suggesting you ignore the algorithm. You should definitely chase it. Understanding a little bit about what goes on in Google’s pointy little head helps. But it’s not enough.

A tale of SEO woe that makes you go “whoa”

I have this friend.

He ranked #10 for “flibbergibbet.” He wanted to rank #1.

He compared his site to the #1 site and realized the #1 site had five hundred blog posts.

“That site has five hundred blog posts,” he said, “I must have more.”

So he hired a few writers and cranked out five thousand blog posts that melted Microsoft Word’s grammar check. He didn’t move up in the rankings. I’m shocked.

“That guy’s spamming,” he decided, “I’ll just report him to Google and hope for the best.”

What happened? Why didn’t adding five thousand blog posts work?

It’s pretty obvious: My, uh, friend added nothing but crap content to a site that was already outranked. Bulk is no longer a ranking tactic. Google’s very aware of that tactic. Lots of smart engineers have put time into updates like Panda to compensate.

He started like this:

[image]

And ended up like this:

[image: more posts, no rankings]

Alright, yeah, I was Mr. Flood The Site With Content, way back in 2003. Don’t judge me, whippersnappers.

Reality’s never that obvious. You’re scratching and clawing to move up two spots, you’ve got an overtasked IT team pushing back on changes, and you’ve got a boss who needs to know the implications of every recommendation.

Why fix duplication if rel=canonical can address it? Fixing duplication will take more time and cost more money. It’s easier to paste in one line of code. You and I know it’s better to fix the duplication. But it’s a hard sell.
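To make the trade-off concrete, here’s roughly what that one line of code looks like. It’s a sketch with a placeholder URL, pasted into the <head> of the duplicate page:

<link rel="canonical" href="https://www.example.com/preferred-version/">

The duplicate page still exists and still gets crawled; the tag just asks search engines to consolidate signals onto the preferred URL.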

Why deal with 302 versus 404 response codes and home page redirection? The basic user experience remains the same. Again, we just know that a server should return one home page without any redirects and that it should send a ‘not found’ 404 response if a page is missing. If it’s going to take 3 developer hours to reconfigure the server, though, how do we justify it? There’s no flashing sign reading “Your site has a problem!”
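As a sketch of what those few developer hours might produce on an Apache server (the file names and domain are placeholders, and your stack may differ):

# .htaccess sketch: send the old home page alias to the root with a permanent 301
RewriteEngine On
RewriteRule ^index\.html$ / [R=301,L]

# Missing pages should keep their 404 status; ErrorDocument only customizes the
# page that's shown, it does not change the response code
ErrorDocument 404 /404.html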

Why change this thing and not that thing?

At the same time, our boss/client sees that the site above theirs has five hundred blog posts and thousands of links from sites selling correspondence MBAs. So they want five thousand blog posts and cheap links as quickly as possible.

Cue crazy music.

SEO lacks clarity

SEO is, in some ways, for the insane. It’s an absurd collection of technical tweaks, content thinking, link building and other little tactics that may or may not work. A novice gets exposed to one piece of crappy information after another, with an occasional bit of useful stuff mixed in. They create sites that repel search engines and piss off users. They get more awful advice. The cycle repeats. Every time it does, best practices get more muddled.

SEO lacks clarity. We can’t easily weigh the value of one change or tactic over another. But we can look at our changes and tactics in context. When we examine the potential of several changes or tactics before we flip the switch, we get a closer balance between algorithm-thinking and actual strategy.

Distance from perfect brings clarity to tactics and strategy

At some point you have to turn that knowledge into practice. You have to take action based on recommendations, your knowledge of SEO, and business considerations.

That’s hard when we can’t even agree on subdomains vs. subfolders.

I know subfolders work better. Sorry, couldn’t resist. Let the flaming comments commence.

To get clarity, take a deep breath and ask yourself:

“All other things being equal, will this change, tactic, or strategy move my site closer to perfect than my competitors?”

Breaking it down:

“Change, tactic, or strategy”

A change takes an existing component or policy and makes it something else. Replatforming is a massive change. Adding a new page is a smaller one. Adding ALT attributes to your images is another example. Changing the way your shopping cart works is yet another.

A tactic is a specific, executable practice. In SEO, that might be fixing broken links, optimizing ALT attributes, optimizing title tags or producing a specific piece of content.

A strategy is a broader decision that’ll cause change or drive tactics. A long-term content policy is the easiest example. Shifting away from asynchronous content and moving to server-generated content is another example.

“Perfect”

No one knows exactly what Google considers “perfect,” and “perfect” can’t really exist, but you can bet a perfect web page/site would have all of the following:

  1. Completely visible content that’s perfectly relevant to the audience and query
  2. A flawless user experience
  3. Instant load time
  4. Zero duplicate content
  5. Every page easily indexed and classified
  6. No mistakes, broken links, redirects or anything else generally yucky
  7. Zero reported problems or suggestions in each search engine’s webmaster tools, sorry, “Search Console”
  8. Complete authority through immaculate, organically-generated links

These 8 categories (and any of the other bazillion that probably exist) give you a way to break down “perfect” and help you focus on what’s really going to move you forward. These different areas may involve different facets of your organization.

Your IT team can work on load time and creating an error-free front- and back-end. Link building requires the time and effort of content and outreach teams.

Tactics for relevant, visible content and current best practices in UX are going to be more involved, requiring research and real study of your audience.

What you need and what resources you have are going to impact which tactics are most realistic for you.

But there’s a basic rule: If a website would make Googlebot swoon and present zero obstacles to users, it’s close to perfect.

“All other things being equal”

Assume every competing website is optimized exactly as well as yours.

Now ask: Will this [tactic, change or strategy] move you closer to perfect?

That’s the “all other things being equal” rule. And it’s an incredibly powerful rubric for evaluating potential changes before you act. Pretend you’re in a tie with your competitors. Will this one thing be the tiebreaker? Will it put you ahead? Or will it cause you to fall behind?

“Closer to perfect than my competitors”

Perfect is great, but unattainable. What you really need is to be just a little perfect-er.

Chasing perfect can be dangerous. Perfect is the enemy of the good (I love that quote. Hated Voltaire. But I love that quote). If you wait for the opportunity/resources to reach perfection, you’ll never do anything. And the only way to reduce distance from perfect is to execute.

Instead of aiming for pure perfection, aim for more perfect than your competitors. Beat them feature-by-feature, tactic-by-tactic. Implement strategy that supports long-term superiority.

Don’t slack off. But set priorities and measure your effort. If fixing server response codes will take one hour and fixing duplication will take ten, fix the response codes first. Both move you closer to perfect. Fixing response codes may not move the needle as much, but it’s a lot easier to do. Then move on to fixing duplicates.

Do the 60% that gets you a 90% improvement. Then move on to the next thing and do it again. When you’re done, get to work on that last 40%. Repeat as necessary.

Take advantage of quick wins. That gives you more time to focus on your bigger solutions.

Sites that are “fine” are pretty far from perfect

Google has lots of tweaks, tools and workarounds to help us mitigate sub-optimal sites:

  • Rel=canonical lets us guide Google past duplicate content rather than fix it
  • HTML snapshots let us reveal content that’s delivered asynchronously via JavaScript frameworks
  • We can use rel=next and rel=prev to guide search bots through outrageously long pagination tunnels
  • And we can use rel=nofollow to hide spammy links and banners

Easy, right? All of these solutions may reduce distance from perfect (the search engines don’t guarantee it). But they don’t reduce it as much as fixing the problems.
Just fine does not equal fixed
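For the record, two of those workarounds look like this in markup; the URLs are placeholders:

<!-- Pagination hints on page 2 of a long series -->
<link rel="prev" href="https://www.example.com/articles?page=1">
<link rel="next" href="https://www.example.com/articles?page=3">

<!-- A paid banner link marked so it passes no link equity -->
<a href="https://www.example.com/sponsor" rel="nofollow">Sponsored banner</a>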

The next time you set up rel=canonical, ask yourself:

“All other things being equal, will using rel=canonical to make up for duplication move my site closer to perfect than my competitors?”

Answer: Not if they’re using rel=canonical, too. You’re both using imperfect solutions that force search engines to crawl every page of your site, duplicates included. If you want to pass them on your way to perfect, you need to fix the duplicate content.

When you use Angular.js to deliver regular content pages, ask yourself:

“All other things being equal, will using HTML snapshots instead of actual, visible content move my site closer to perfect than my competitors?”

Answer: No. Just no. Not in your wildest, code-addled dreams. If I’m Google, which site will I prefer? The one that renders for me the same way it renders for users? Or the one that has to deliver two separate versions of every page?

When you spill banner ads all over your site, ask yourself…

You get the idea. Nofollow is better than follow, but banner pollution is still pretty dang far from perfect.

Mitigating SEO issues with search engine-specific tools is “fine.” But it’s far, far from perfect. If search engines are forced to choose, they’ll favor the site that just works.

Not just SEO

By the way, distance from perfect absolutely applies to other channels.

I’m focusing on SEO, but think of other Internet marketing disciplines. I hear stuff like “How fast should my site be?” (Faster than it is right now.) Or “I’ve heard you shouldn’t have any content below the fold.” (Maybe in 2001.) Or “I need background video on my home page!” (Why? Do you have a reason?) Or, my favorite: “What’s a good bounce rate?” (Zero is pretty awesome.)

And Internet marketing venues are working to measure distance from perfect. Pay-per-click marketing has the quality score: a codified financial reward for reducing distance from perfect across as many elements of your advertising program as possible.

Social media venues are aggressively building their own forms of graphing, scoring and ranking systems designed to separate the good from the bad.

Really, all marketing includes some measure of distance from perfect. But no channel is more influenced by it than SEO. Instead of arguing one rule at a time, ask yourself and your boss or client: Will this move us closer to perfect?

Hell, you might even please a customer or two.

One last note for all of the SEOs in the crowd. Before you start pointing out edge cases, consider this: We spend our days combing Google for embarrassing rankings issues. Every now and then, we find one, point, and start yelling “SEE! SEE!!!! THE GOOGLES MADE MISTAKES!!!!” Google’s got lots of issues. Screwing up the rankings isn’t one of them.


From Editorial Calendars to SEO: Setting Yourself Up to Create Fabulous Content

Posted by Isla_McKetta

Quick note: This article is meant to apply to teams of all sizes, from the sole proprietor who spends all night writing their copy (because they’re doing business during the day) to the copy team who occupies an entire floor and produces thousands of pieces of content per week. So if you run into a section that you feel requires more resources than you can devote just now, that’s okay. Bookmark it and revisit when you can, or scale the step down to a more appropriate size for your team. We believe all the information here is important, but that does not mean you have to do everything right now.

If you thought ideation was fun, get ready for content creation. Sure, we’ve all written some things before, but the creation phase of content marketing is where you get to watch that beloved idea start to take shape.

Before you start creating, though, you want to get (at least a little) organized, and an editorial calendar is the perfect first step.

Editorial calendars

Creativity and organization are not mutually exclusive. In fact, they can feed each other. A solid schedule gives you and your writers the time and space to be wild and creative. If you’re just starting out, this document may be sparse, but it’s no less important. Starting early with your editorial calendar also saves you from creating content willy-nilly and then finding out months later that no one ever finished that pesky (but crucial) “About” page.

There’s no wrong way to set up your editorial calendar, as long as it’s meeting your needs. Remember that an editorial calendar is a living document, and it will need to change as a hot topic comes up or an author drops out.

There are a lot of different types of documents that pass for editorial calendars. You get to pick the one that’s right for your team. The simplest version is a straight-up calendar with post titles written out on each day. You could even use a wall calendar and a Sharpie.

Day         Title                                                  Author
Monday      The Five Colors of Oscar Fashion                       Ellie
Tuesday     12 Fabrics We’re Watching for Fall                     James
Wednesday   Is Charmeuse the New Corduroy?                         Marta
Thursday    Hot Right Now: Matching Your Handbag to Your Hatpin    Laila
Friday      Tea-length and Other Fab Vocab You Need to Know        Alex

Teams who are balancing content for different brands at agencies or other more complex content environments will want to add categories, author information, content type, social promo, and more to their calendars.

Truly complex editorial calendars are more like hybrid content creation/editorial calendars, where each of the steps to create and publish the content are indicated and someone has planned for how long all of that takes. These can be very helpful if the content you’re responsible for crosses a lot of teams and can take a long time to complete. It doesn’t matter if you’re using Excel or a Google Doc, as long as the people who need the calendar can easily access it. Gantt charts can be excellent for this. Here’s a favorite template for creating a Gantt chart in Google Docs (and they only get more sophisticated).

Complex calendars can encompass everything from ideation through writing, legal review, and publishing. You might even add content localization if your empire spans more than one continent to make sure you have the currency, date formatting, and even slang right.

Content governance

Governance outlines who is taking responsibility for your content. Who evaluates your content performance? What about freshness? Who decides to update (or kill) an older post? Who designs and optimizes workflows for your team or chooses and manages your CMS?

All these individual concerns fall into two overarching components to governance: daily maintenance and overall strategy. In the long run it helps if one person has oversight of the whole process, but the smaller steps can easily be split among many team members. Read this to take your governance to the next level.

Finding authors

The scale of your writing enterprise doesn’t have to be limited to the number of authors you have on your team. It’s also important to consider the possibility of working with freelancers and guest authors. Here’s a look at the pros and cons of outsourced versus in-house talent.

                            In-house authors                 Guest authors and freelancers
Responsible to              You                              Themselves
Paid by                     You (as part of their salary)    You (on a per-piece basis)
Subject matter expertise    Broad but shallow                Deep but narrow
Capacity for extra work     As you wish                      Show me the Benjamins
Turnaround time             On a dime                        Varies
Communication investment    Less                             More
Devoted audience            Smaller                          Potentially huge

From that table, it might look like in-house authors have a lot more advantages. That’s somewhat true, but do not underestimate the value of occasionally working with a true industry expert who has name recognition and a huge following. Whichever route you take (and there are plenty of hybrid options), it’s always okay to ask that the writers you are working with be professional about communication, payment, and deadlines. In some industries, guest writers will write for links. Consider yourself lucky if that’s true. Remember, though, that the final paycheck can be great leverage for getting a writer to do exactly what you need them to (such as making their deadlines).

Tools to help with content creation

So those are some things you need to have in place before you create content. Now’s the fun part: getting started. One of the beautiful things about the Internet is that new and exciting tools crop up every day to help make our jobs easier and more efficient. Here are a few of our favorites.

Calendars

You can always use Excel or a Google Doc to set up your editorial calendar, but we really like Trello for the ability to gather a lot of information in one card and then drag and drop it into place. Once there are actual dates attached to your content, you might be happier with something like a Google Calendar.

Ideation and research

If you need a quick fix for ideation, turn your keywords into wacky ideas with Portent’s Title Maker. You probably won’t want to write to the exact title you’re given (although “True Facts about Justin Bieber’s Love of Pickles” does sound pretty fascinating…), but it’s a good way to get loose and look at your topic from a new angle.

Once you’ve got that idea solidified, find out what your audience thinks about it by gathering information with Survey Monkey or your favorite survey tool. Or, use Storify to listen to what people are saying about your topic across a wide variety of platforms. You can also use Storify to save those references and turn them into a piece of content or an illustration for one. Don’t forget that a simple social ask can also do wonders.

Format

Content doesn’t have to be all about the words. Screencasts, Google+ Hangouts, and presentations are all interesting ways to approach content. Remember that not everyone’s a reader. Some of your audience will be more interested in visual or interactive content. Make something for everyone.

Illustration

Don’t forget to make your content pretty. It’s not that hard to find free stock images online (just make sure you aren’t violating someone’s copyright). We like Morgue File, Free Images, and Flickr’s Creative Commons. If you aren’t into stock images and don’t have access to in-house graphic design, it’s still relatively easy to add images to your content. Pull a screenshot with Skitch or dress up an existing image with Pixlr. You can also use something like Canva to create custom graphics.

Don’t stop with static graphics, though. There are so many tools out there to help you create gifs, quizzes and polls, maps, and even interactive timelines. Dream it, then search for it. Chances are whatever you’re thinking of is doable.

Quality, not quantity

Mediocre content will hurt your cause

Less is more. That’s not an excuse to pare your blog down to one post per month (check out our publishing cadence experiment), but it is an important reminder that if you’re writing “How to Properly Install a Toilet Seat” two days after publishing “Toilet Seat Installation for Dummies,” you might want to rethink your strategy.

The thing is, and I’m going to use another cliché here to drive home the point, you never get a second chance to make a first impression. Potential customers are roving the Internet right now looking for exactly what you’re selling. And if what they find is an only somewhat informative article stuffed with keywords and awful spelling and grammar mistakes… well, you don’t want that. Oh, and search engines think it’s spammy too…

A word about copyright

We’re not copyright lawyers, so we can’t give you the ins and outs on all the technicalities. What we can tell you (and you already know this) is that it’s not okay to steal someone else’s work. You wouldn’t want them to do it to you. This includes images. So whenever you can, make your own images or find images that you can either purchase the rights to (stock imagery) or license under Creative Commons.

It’s usually okay to quote short portions of text, as long as you attribute the original source (and a link is nice). In general, titles and ideas can’t be copyrighted (though they might be trademarked or patented). When in doubt, asking for permission is smart.

That said, part of the fun of the Internet is the remixing culture which includes using things like memes and gifs. Just know that if you go that route, there is a certain amount of risk involved.

Editing

Your content needs to go through at least one editing cycle by someone other than the original author. There are two types of editing: developmental editing (which looks at the underlying structure of a piece and happens earlier in the writing cycle) and copy editing (which makes sure all the words are there and spelled right in the final draft).

If you have a very small team or are in a rush (and are working with writers that have some skill), you can often skip the developmental editing phase. But know that an investment in that close read of an early draft is often beneficial to the piece and to the writer’s overall growth.

Many content teams peer-edit work, which can be great. Other organizations prefer to run their work by a dedicated editor. There’s no wrong answer, as long as the work gets edited.

Ensuring proper basic SEO

The good news is that search engines are doing their best to get closer and closer to understanding and processing natural language. So good writing (including the natural use of synonyms rather than repeating those keywords over and over and…) will take you a long way towards SEO mastery.

For that reason (and because it’s easy to get trapped in keyword thinking and veer into keyword stuffing), it’s often nice to think of your SEO check as a further edit of the post rather than something you should think about as you’re writing.

But there are still a few things you can do to help cover those SEO bets. Once you have that draft, do a pass for SEO to make sure you’ve covered the following (a rough HTML sketch follows the list):

  • Use your keyword in your title
  • Use your keyword (or long-tail keyword phrase) in an H2
  • Make sure the keyword appears at least once (though not more than four times, especially if it’s a phrase) in the body of the post
  • Use image alt text (including the keyword when appropriate)
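Here’s a minimal sketch of what that checklist might look like in the finished markup; the keyword, headings, and file name are all hypothetical:

<!-- Hypothetical post targeting the phrase "editorial calendar" -->
<title>How to Build an Editorial Calendar That Sticks</title>
...
<h2>Why an editorial calendar saves your content team</h2>
<p>Your editorial calendar doesn't have to be fancy, it just has to exist...</p>
<img src="editorial-calendar-template.png" alt="Sample editorial calendar template">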

Finding time to write when you don’t have any

Writing (assuming you’re the one doing the writing) can require a lot of energy—especially if you want to do it well. The best way to find time to write is to break each project down into little tasks. For example, writing a blog post actually breaks down into these steps (though not always in this order):

  • Research
  • Outline
  • Fill in outline
  • Rewrite and finish post
  • Write headline
  • SEO check
  • Final edit
  • Select hero image (optional)

So if you only have random chunks of time, set aside 15-30 minutes one day (when your research is complete) to write a really great outline. Then find an hour the next day to fill that outline in. After an additional hour the following day (unless you’re dealing with a research-heavy post), you should have a solid draft by the end of day three.

The magic of working this way is that you engage your brain and then give it time to work in the background while you accomplish other tasks. Hemingway used to stop mid-sentence at the end of his writing days for the same reason.

Once you have that draft nailed, the rest of the steps are relatively easy (even the headline, which often takes longer to write than any other sentence, is easier after you’ve immersed yourself in the post over a few days).

Working with design/development

Every designer and developer is a little different, so we can’t give you any blanket cure-alls for inter-departmental workarounds (aka “smashing silos”). But here are some suggestions to help you convey your vision while capitalizing on the expertise of your coworkers to make your content truly excellent.

Ask for feedback

From the initial brainstorm to general questions about how to work together, asking your team members what they think and prefer can go a long way. Communicate all the details you have (especially the unspoken expectations) and then listen.

If your designer tells you up front that your color scheme is years out of date, you’re saving time. And if your developer tells you that the interactive version of that timeline will require four times the resources, you have the info you need to fight for more budget (or reassess the project).

Check in

Things change in the design and development process. If you have interim check-ins already set up with everyone who’s working on the project, you’ll avoid the potential for nasty surprises at the end. Like finding out that no one has experience working with that hot new coding language you just read about and they’re trying to do a workaround that isn’t working.

Proofread

Your job isn’t done when you hand over the copy to your designer or developer. Not only might they need help rewriting some of your text so that it fits in certain areas, but they will also need you to proofread the final version. Accidents happen in the copy-and-paste process, and there’s nothing sadder than a really beautiful (and expensive) piece of content that wraps up with a typo.

Know when to fight for an idea

Conflict isn’t fun, but sometimes it’s necessary. The more people involved in your content, the more watered down the original idea can get and the more roadblocks and conflicting ideas you’ll run into. Some of that is very useful. But sometimes you’ll get pulled off track. Always remember who owns the final product (this may not be you) and be ready to stand up for the idea if it’s starting to get off track.

We’re confident this list will set you on the right path to creating some really awesome content, but is there more you’d like to know? Ask us your questions in the comments.


Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks, when you say, “hey, disallow this,” and then you suddenly see those URLs popping up and you’re wondering what’s going on, look—Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake, they think “hey, there’s a lot of links pointing to this content, there’s a lot of people who are visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say “all user agents, you are not allowed to crawl blogtest.html”? Okay—that’s a good way to keep that page away from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.
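In robots.txt terms, that instruction looks something like this (using the example page from above):

User-agent: *
Disallow: /blogtest.html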

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.
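As a sketch of the 301 approach on an Apache server (the domain is a placeholder, and Twitter’s real setup is obviously more involved):

# Send every www request to the bare domain with a permanent 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]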

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great, but it still consumes crawl budget (the page has to be crawled for the tag to be seen), even though it does conserve space in the index.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.
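On an Apache server, for example, those status codes can be set with simple directives; the paths here are placeholders:

# Permanently gone - returns a 410
Redirect gone /old-page.html

# Permanent redirect - returns a 301
Redirect permanent /old-location.html /new-location.html

# Temporary redirect - returns a 302
Redirect temp /summer-sale.html /current-promotions.html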

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.
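A hypothetical robots.txt for that staged approach might look like this, with each folder removed from the disallow list (and added to a sitemap) as its content is rewritten; the folder names are made up:

User-agent: *
# Still carrying manufacturer boilerplate - keep crawlers out for now
Disallow: /products/appliances/
Disallow: /products/outdoor/
# /products/furniture/ has been rewritten, so it is no longer listed here

Sitemap: https://www.example.com/sitemap-products-furniture.xml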

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
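For the shirt example, each color variant would carry the same canonical tag pointing back at the default page; the file names are placeholders:

<!-- On starwarsshirt-blue.html, starwarsshirt-gray.html, starwarsshirt-black.html -->
<link rel="canonical" href="https://www.example.com/starwarsshirt.html">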

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. What happens if you disallow crawling on those, Google can’t see the noindex. They don’t know that they can follow it. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior. Trust that the disallow in robots.txt will prevent them from crawling. So I would say, the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say “hey, our internal search engine, that’s really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to make those into category landing pages.” Then you can use the disallow in robots.txt to prevent those.

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.
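If you do decide your internal search results shouldn’t be crawled, the robots.txt entry is a one-liner; the /search/ path is a placeholder for wherever your results actually live:

User-agent: *
Disallow: /search/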

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!


8 Ways Content Marketers Can Hack Facebook Multi-Product Ads

Posted by Alan_Coleman

The trick most content marketers are missing

Creating great content is the first half of success in content marketing. Getting quality content read by, and amplified to, a relevant audience is the oft overlooked second half of success. Facebook can be a content marketer’s best friend for this challenge. For reach, relevance and amplification potential, Facebook is unrivaled.

  1. Reach: 1 in 6 mobile minutes on planet earth is somebody reading something on Facebook.
  2. Relevance: Facebook is a lean mean interest and demo targeting machine. There is no online or offline media that owns as much juicy interest and demographic information on its audience and certainly no media has allowed advertisers to utilise this information as effectively as Facebook has.
  3. Amplification: Facebook is literally built to encourage sharing. Here are the first 10 words from their mission statement: “Facebook’s mission is to give people the power to share…” Enough said!

Because of these three digital marketing truths, if a content marketer gets their paid promotion* right on Facebook, the battle for eyeballs and amplification is already won.

For this reason it’s crucial that content marketers keep a close eye on Facebook advertising innovations and seek out ways to use them in new and creative ways.

In this post I will share with you eight ways we’ve hacked a new Facebook ad format to deliver content marketing success.

Multi-Product Ads (MPAs)

In 2014, Facebook unveiled multi-product ads (MPAs) for US advertisers; we got them in Europe earlier this year. They allow retailers to show multiple products in a carousel-type ad unit.

They look like this:

If the user clicks on the featured product, they are guided directly to the landing page for that specific product, from where they can make a purchase.

You could say MPAs are Facebook’s answer to Google Shopping.

Facebook’s mistake is a content marketer’s gain

I believe Facebook has misunderstood how people want to use their social network and the transaction-focused format is OK at best for selling products. People aren’t really on Facebook to hit the “buy now” button. I’m a daily Facebook user and I can’t recall a time this year where I have gone directly from Facebook to an e-commerce website and transacted. Can you remember a recent time when you did?

So, this isn’t an innovation that removes a layer of friction from something that we are all doing online already (as the most effective innovations do). Instead, it’s a bit of a “hit and hope” that, by providing this functionality, Facebook would encourage people to try to buy online in a way they never have before.

The Wolfgang crew felt the MPA format would be much more useful to marketers and users if they were leveraging Facebook for the behaviour we all demonstrate on the platform every day, guiding users to relevant content. We attempted to see if Facebook Ads Manager would accept MPAs promoting content rather than products. We plugged in the images, copy and landing pages, hit “place order”, and lo and behold the ads became active. We’re happy to say that the engagement rates, and more importantly the amplification rates, are fantastic!

Multi-Content Ads

We’ve re-invented the MPA format for multi-advertisers in multi-ways, eight ways to be exact! Here’s eight MPA Hacks that have worked well for us. All eight hacks use the MPA format to promote content rather than promote products.

Hack #1: Multi-Package Ads

Our first variation wasn’t a million miles away from multi-product ads; we were promoting the various packages offered by a travel operator.

By looking at the number of likes, comments, and shares (in blue below the ads) you can see the ads were a hit with Facebook users and they earned lots of free engagement and amplification.

NB: If you have selected “clicks to website” as your advertising objective, all those likes, comments and shares are free!

Independent Travel Multi Product Ad

The ad sparked plenty of conversation amongst Facebook friends in the comments section.

Comments on a Facebook MPA

Hack #2: Multi-Offer Ads

Everybody knows the Internet loves a bargain. So we decided to try another variation moving away from specific packages, focusing instead on deals for a different travel operator.

Here’s how the ads looked:

These ads got valuable amplification beyond the share. In the comments section, you can see people tagging specific friends. This led to the MPAs receiving further amplification, and a very targeted and personalised form of amplification to boot.

Abbey Travel Facebook Ad Comments

Word of mouth referrals have been a trader’s best friend since the stone age. These “personalised” word of mouth referrals en masse are a powerful marketing proposition. It’s worth mentioning again that those engagements are free!

Hack #3: Multi-Locations Ads

Putting the Lo in SOLOMO.

This multi-product feed ad was hacked to promote numerous locations of a waterpark. “Where to go?” is among the first questions somebody asks when researching a holiday. In creating this top of funnel content, we can communicate with our target audience at the very beginning of their research process. A simple truth of digital marketing is: the more interactions you have with your target market on their journey to purchase, the more likely they are to seal the deal with you when it comes time to hit the “buy now” button. Starting your relationship early gives you an advantage over those competitors who are hanging around the bottom of the purchase funnel hoping to make a quick and easy conversion.

Abbey Travel SplashWorld Facebook MPA

What was surprising here, was that because we expected to reach people at the very beginning of their research journey, we expected the booking enquiries to be some time away. What actually happened was these ads sparked an enquiry frenzy as Facebook users could see other people enquiring and the holidays selling out in real time.

Abbey Travel comments and replies

In fact nearly all of the 35 comments on this ad were booking enquiries. This means what we were measuring as an “engagement” was actually a cold hard “conversion”! You don’t need me to tell you a booking enquiry is far closer to the money than a Facebook like.

The three examples outlined so far are for travel companies. Travel is a great fit for Facebook as it sits naturally in the Facebook feed, my Facebook feed is full of envy-inducing friends’ holiday pictures right now. Another interesting reason why travel is a great fit for Facebook ads is because typically there are multiple parties to a travel purchase. What happened here is the comments section actually became a very visible and measurable forum for discussion between friends and family before becoming a stampede inducing medium of enquiry.

So, stepping outside of the travel industry, how do other industries fare with hacked MPAs?

Hack #3a: Multi-Location Ads (combined with location targeting)

Location, location, location. For a property listings website, we applied location targeting and repeated our Multi-Location Ad format to advertise properties for sale to people in and around that location.

Hack #4: Multi-Big Content Ad

“The future of big content is multi platform”

– Cyrus Shepard

The same property website had produced a report and an accompanying infographic to provide their audience with unique and up-to-the-minute market information via their blog. We used the MPA format to promote the report, the infographic and the search rentals page of the website. This brought their big content piece to a larger audience via a new platform.

Rental Report Multi Product Ad

Hack #5: Multi-Episode Ad

This MPA hack was for an online TV player. As you can see we advertised the most recent episodes of a TV show set in a fictional Dublin police station, Red Rock.

Engagement was high, opinion was divided.

TV3s Red Rock viewer feedback

LOL.

Hack #6: Multi-People Ads

In the cosmetic surgery world, past patients’ stories are valuable marketing material. Particularly when the past patients are celebrities. We recycled some previously published stories from celebrity patients using multi-people ads and targeted them to a very specific audience.

Avoca Clinic Multi People Ads

Hack #7: Multi-UGC Ads

Have you witnessed the power of user generated content (UGC) in your marketing yet? We’ve found interaction rates with authentic UGC images can be up to ten times those of the usual stylised images. In order to encourage further UGC, we posted a number of customers’ images in our Multi-UGC Ads.

The CTR on the above ads was 6% (2% is the average CTR for Facebook News feed ads according to our study). Strong CTRs earn you more traffic for your budget. Facebook’s relevancy score lowers your CPC as your CTR increases.

When it comes to conversion, UGC is a power player; we’ve learned that “customers attracting new customers” is a powerful acquisition tool.

Hack #8: Target past customers for amplification

“Who will support and amplify this content and why?”

– Rand Fishkin

Your happy customers Rand, that’s the who and the why! Check out these Multi-Package Ads targeted to past customers via custom audiences. The Camino walkers have already told all their friends about their great trip, now allow them to share their great experiences on Facebook and connect the tour operator with their Facebook friends via a valuable word of mouth referral. Just look at the ratio of share:likes and shares:comments. Astonishingly sharable ads!

Camino Ways Mulit Product Ads

Targeting past converters in an intelligent manner is a super smart way to find an audience ready to share your content.

How will hacking Multi-Product Ads work for you?

People don’t share ads, but they do share great content. So why not hack MPAs to promote your content and reap the rewards of the world’s greatest content sharing machine: Facebook.

MPAs allow you to tell a richer story by allowing you to promote multiple pieces of content simultaneously. So consider which pieces of content you have that will work well as “content bundles” and who the relevant audience for each “content bundle” is.

As Hack #8 above illustrates, the big wins come when you match a smart use of the format with the clever and relevant targeting Facebook allows. We’re massive fans of custom audiences so if you aren’t sure where to start, I’d suggest starting there.

So ponder your upcoming content pieces, consider your older content you’d like to breathe some new life into and perhaps you could become a Facebook Ads Hacker.

I’d love to hear about your ideas for turning Multi-Product Ads into Multi-Content Ads in the comments section below.

We could even take the conversation offline at Mozcon!

Happy hacking.


*Yes I did say paid promotion, it’s no secret that Facebook’s organic reach continues to dwindle. The cold commercial reality is you need to pay to play on FB. The good news is that if you select ‘website clicks’ as your objective you only pay for website traffic and engagement while amplification by likes, comments, and shares are free! Those website clicks you pay for are typically substantially cheaper than Adwords, Taboola, Outbrain, Twitter or LinkedIn. How does it compare to display? It doesn’t. Paying for clicks is always preferable to paying for impressions. If you are spending money on display advertising I’d urge you to fling a few spondoolas towards Facebook ads and compare results. You will be pleasantly surprised.


The Colossus Update: Waking The Giant

Posted by Dr-Pete

Yesterday morning, we woke up to a historically massive temperature spike on MozCast, after an unusually quiet weekend. The 10-day weather looked like this:

That’s 101.8°F, one of the hottest verified days on record, second only to a series of unconfirmed spikes in June of 2013. For reference, the first Penguin update clocked in at 93.1°.

Unfortunately, trying to determine how the algorithm changed from looking at individual keywords (even thousands of them) is more art than science, and even the art is more often Ms. Johnson’s Kindergarten class than Picasso. Sometimes, though, we catch a break and spot something.

The First Clue: HTTPS

When you watch enough SERPs, you start to realize that change is normal. So, the trick is to find the queries that changed a lot on the day in question but are historically quiet. Looking at a few of these, I noticed some apparent shake-ups in HTTP vs. HTTPS (secure) URLs. So, the question becomes: are these anecdotes, or do they represent a pattern?

I dove in and looked at how many URLs for our 10,000 page-1 SERPs were HTTPS over the past few days, and I saw this:

On the morning of June 17, HTTPS URLs on page 1 jumped from 16.9% to 18.4% (roughly a 9% day-over-day increase), after trending up for a few days. This represents the total real-estate occupied by HTTPS URLs, but how did rankings fare? Here are the average rankings across all HTTPS results:

HTTPS URLs also seem to have gotten a rankings boost – dropping (with “dropping” being a positive thing) from an average of 2.96 to 2.79 in the space of 24 hours.

Seems pretty convincing, right? Here’s the problem: rankings don’t just change because Google changes the algorithm. We are, collectively, changing the web every minute of the day. Often, those changes are just background noise (and there’s a lot of noise), but sometimes a giant awakens.

The Second Clue: Wikipedia

Anecdotally, I noticed that some Wikipedia URLs seemed to be flipping from HTTP to HTTPS. I ran a quick count, and this wasn’t just a fluke. It turns out that Wikipedia started switching their entire site to HTTPS around June 12 (hat tip to Jan Dunlop). This change is expected to take a couple of weeks.

It’s just one site, though, right? Well, historically, this one site is the #1 largest land-holder across the SERP real-estate we track, with over 5% of the total page-1 URLs in our tracking data (5.19% as of June 17). Wikipedia is a giant, and its movements can shake the entire web.

So, how do we tease this apart? If Wikipedia’s URLs had simply flipped from HTTP to HTTPS, we should see a pretty standard pattern of shake-up. Those URLs would look to have changed, but the SERPS around them would be quiet. So, I ran an analysis of what the temperature would’ve been if we ignored the protocol (treating HTTP/HTTPS as the same). While slightly lower, that temperature was still a scorching 96.6°F.

Is it possible that Wikipedia moving to HTTPS also made the site eligible for a rankings boost from previous algorithm updates, thus disrupting page 1 without any code changes on Google’s end? Yes, it is possible – even a relatively small rankings boost for Wikipedia from the original HTTPS algorithm update could have a broad impact.

The Third Clue: Google?

So far, Google has only said that this was not a Panda update. There have been rumors that the HTTPS update would get a boost, as recently as SMX Advanced earlier this month, but no timeline was given for when that might happen.

Is it possible that Wikipedia’s publicly announced switch finally gave Google the confidence to boost the HTTPS signal? Again, yes, it’s possible, but we can only speculate at this point.

My gut feeling is that this was more than just a waking giant, even as powerful of a SERP force as Wikipedia has become. We should know more as their HTTPS roll-out continues and their index settles down. In the meantime, I think we can expect Google to become increasingly serious about HTTPS, even if what we saw yesterday turns out not to have been an algorithm update.

In the meantime, I’m going to melodramatically name this “The Colossus Update” because, well, it sounds cool. If this indeed was an algorithm update, I’m sure Google would prefer something sensible, like “HTTPS Update 2” or “Securageddon” (sorry, Gary).

Update from Google: Gary Illyes said that he’s not aware of an HTTPS update (via Twitter).

No comment on other updates, or the potential impact of a Wikipedia change. I feel strongly that there is an HTTPS connection in the data, but as I said – that doesn’t necessarily mean the algorithm changed.


Why We Can’t Do Keyword Research Like It’s 2010 – Whiteboard Friday

Posted by randfish

Keyword Research is a very different field than it was just five years ago, and if we don’t keep up with the times we might end up doing more harm than good. From the research itself to the selection and targeting process, in today’s Whiteboard Friday Rand explains what has changed and what we all need to do to conduct effective keyword research today.

For reference, here’s a still of this week’s whiteboard:

[Whiteboard still: What do we need to change to keep up with the changing world of keyword research?]

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about keyword research: how it’s changed over the last five or six years and what we need to do differently now that things have changed. I want to talk about changing up not just the research but also the selection and targeting process.

There are three big areas that I’ll cover here. There’s lots more in-depth stuff, but I think we should start with these three.

1) The AdWords keyword tool hides data!

This is where almost all of us in the SEO world start, and oftentimes end, with our keyword research. We go to the AdWords keyword tool, what used to be the external Keyword Tool and now lives inside AdWords as the Keyword Planner. We go inside that tool, we look at the volume that’s reported, and we sort of record that as, well, not good, but the best we’re going to do.

However, I think there are a few things to consider here. First off, that tool is hiding data. What I mean by that is not that they’re lying; they’re just not telling the whole truth, and not nothing but the truth either, because those rounded-off numbers that you always see are inaccurate. Anytime you’ve bought keywords, you’ve seen that the impression count never matches the count you see in the AdWords tool. It’s not usually massively off, but it’s often off by a good degree, and the only thing it’s really good for is comparing relative volume from one term to another.

AdWords also hides data by curating what it suggests. Let’s say I type in “college tuition.” Google knows that a lot of people search for how to reduce college tuition, but that phrase doesn’t come up in the suggestions, because it’s not a commercial term, or because they don’t think an advertiser who bids on it will do particularly well. (I’m just giving an example; they might indeed show that one.)

Because that data is hidden, we need to go deeper. We need to go beyond the tool and look at things like Google Suggest and the related searches down at the bottom of the page. We need to start conducting customer interviews and staff interviews, which hopefully have always been part of your brainstorming process but really need to be now. Then you can apply all of that back to AdWords, Suggest, and related searches.

The beautiful thing is, once you collect these terms from places like forums, communities, and discussion boards, seeing what terms and phrases people actually use, you can plug them back into AdWords, and now it will tell you how much volume they’ve got. So you take that “how to lower college tuition” term, you plug it into AdWords, and it will show you a number, a non-zero number. They were just hiding it in the suggestions because they thought, “Hey, you probably don’t want to bid on that. That won’t bring you a good ROI.” So you’ve got to be careful with that, especially when it comes to SEO kinds of keyword research.
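
If you want to harvest suggestions programmatically, here’s a minimal sketch. It uses the unofficial suggest endpoint that powers autocomplete; that endpoint is undocumented, subject to change, and easily rate-limited, so treat this as an illustration, not a supported API:

```python
import json
import urllib.parse
import urllib.request

def suggestions(seed):
    """Fetch autocomplete suggestions for a seed term.

    Hits the unofficial suggest endpoint -- undocumented and liable to
    change or be rate-limited, so this is a sketch only.
    """
    url = ("https://suggestqueries.google.com/complete/search"
           "?client=firefox&q=" + urllib.parse.quote(seed))
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))[1]  # assumes UTF-8 JSON

# Expand a seed with modifiers to surface long-tail phrases, then feed
# the resulting list back into AdWords to recover the volume numbers.
seed = "college tuition"
terms = set()
for modifier in ["", " how", " reduce", " lower"]:
    terms.update(suggestions(seed + modifier))
print(sorted(terms))
```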

2) Building separate pages for each term or phrase doesn’t make sense

It used to be the case that we built separate pages for every single term and phrase that was in there, because we wanted to have the maximum keyword targeting that we could. So it didn’t matter to us that college scholarship and university scholarships were essentially people looking for exactly the same thing, just using different terminology. We would make one page for one and one page for the other. That’s not the case anymore.

Today, we need to group by searcher intent. If two searchers are searching for two different terms or phrases but both have exactly the same intent (they want the same information, they’re looking for the same answers, and their query will be resolved by the same content), we want one page to serve both. That’s changed up a little bit of how we do keyword research and how we do selection and targeting as well.
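
There’s no mechanical test for “same intent” prescribed here, but one common proxy is SERP overlap: if two keywords return mostly the same page-1 URLs, Google is already treating the intent as shared. Here’s a minimal sketch under that assumption, given top-10 URLs per keyword (the 0.5 threshold is a knob to tune, not an established standard):

```python
def jaccard(a, b):
    """Overlap between two URL sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

def group_by_intent(serps, threshold=0.5):
    """Greedily group keywords whose page-1 SERPs overlap heavily.

    `serps` maps keyword -> set of page-1 URLs. The threshold is an
    assumption to calibrate on your own data.
    """
    groups = []
    for kw, urls in serps.items():
        for group in groups:
            if jaccard(urls, serps[group[0]]) >= threshold:
                group.append(kw)
                break
        else:
            groups.append([kw])
    return groups

serps = {
    "college scholarships":    {"a.com", "b.com", "c.com", "d.com"},
    "university scholarships": {"a.com", "b.com", "c.com", "e.com"},
    "student loans":           {"x.com", "y.com", "z.com"},
}
print(group_by_intent(serps))
# [['college scholarships', 'university scholarships'], ['student loans']]
```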

3) Build your keyword consideration and prioritization spreadsheet with the right metrics

Everybody’s got an Excel version of this, because I think there’s just no awesome tool out there yet that everyone loves and that solves this problem for us, and Excel is very, very flexible. So we go into Excel, we put in our keyword and the volume, and then a lot of times we almost stop there. We did keyword volume, then maybe value to the business, and then we prioritized.

What are all these new columns you’re showing me, Rand? Well, this is what I’m seeing sophisticated, modern SEOs, the folks at the more advanced agencies and the more advanced in-house practitioners, add to the keyword process.

Difficulty

A lot of folks have done this, but difficulty helps us say, “Hey, this has a lot of volume, but it’s going to be tremendously hard to rank.”

The difficulty score that Moz uses and attempts to calculate is a weighted average of the domain authorities of the top 10 results. It also uses page authority, so it’s a weighted blend of the two. If you’re seeing very, very challenging pages and very challenging domains in there, it’s going to be super hard to rank against them; the difficulty is high. For all of these terms it’s going to be high, because college and university terms are just incredibly lucrative.
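
As a rough sketch of that blend: the exact weights aren’t given here, so the 50/50 split below is an assumption for illustration, not Moz’s published formula, and the authority numbers are hypothetical:

```python
def difficulty(results, da_weight=0.5):
    """Blend average domain authority and page authority of the top 10.

    `results` is a list of (domain_authority, page_authority) pairs for
    the current page-1 URLs. The 50/50 weighting is an assumption --
    the post doesn't give Moz's actual weights.
    """
    avg_da = sum(da for da, _ in results) / len(results)
    avg_pa = sum(pa for _, pa in results) / len(results)
    return da_weight * avg_da + (1 - da_weight) * avg_pa

# Hypothetical page-1 metrics for a scholarship term:
page_one = [(92, 68), (88, 71), (85, 60), (90, 74), (81, 55),
            (79, 58), (76, 49), (83, 62), (70, 45), (74, 50)]
print(round(difficulty(page_one), 1))  # 70.5 -- a very hard SERP to crack
```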

That difficulty can help bias you against chasing terms and phrases for which you’re very unlikely to rank, at least early on. If you feel like, “Hey, I already have a powerful domain. I can rank for everything I want. I am the thousand-pound gorilla in my space,” great. Go after the difficulty of your choice, but this helps prioritize.

Opportunity

This is actually very rarely used, but I think sophisticated marketers are using it extremely intelligently. Essentially what they’re saying is, “Hey, if you look at a set of search results, sometimes there are two or three ads at the top instead of just the ones on the sidebar, and that’s biasing some of the click-through rate curve.” Sometimes there’s an instant answer or a Knowledge Graph or a news box or images or video, or all these kinds of things that search results can be marked up with, that are not just the classic 10 web results. Unfortunately, if you’re building a spreadsheet like this and treating every single search result like it’s just 10 blue links, well you’re going to lose out. You’re missing the potential opportunity and the opportunity cost that comes with ads at the top or all of these kinds of features that will bias the click-through rate curve.

So what I’ve seen some really smart marketers do is essentially build some kind of a framework to say, “Hey, you know what? When we see that there’s a top ad and an instant answer, we’re saying the opportunity if I was ranking number 1 is not 10 out of 10. I don’t expect to get whatever the average traffic for the number 1 position is. I expect to get something considerably less than that. Maybe something around 60% of that, because of this instant answer and these top ads.” So I’m going to mark this opportunity as a 6 out of 10.

There are 2 top ads here, so I’m giving this a 7 out of 10. This has two top ads and then it has a news block below the first position. So again, I’m going to reduce that click-through rate. I think that’s going down to a 6 out of 10.

You can get more or less scientific and specific with this. Click-through rate curves are imperfect by nature, because we truly can’t measure exactly how those features change them. However, I think smart marketers can make some good assumptions from general click-through rate data, and there are several resources out there on that, to build a model like this and then include it in their keyword research.
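
Here’s a minimal sketch of that kind of framework: start every SERP at 10 and dock a penalty per feature. The penalty values below are assumptions you’d calibrate against whatever click-through data you trust; only the 6- and 7-out-of-10 outcomes echo the examples above:

```python
# Penalty per SERP feature, out of a 10-point opportunity score. These
# values are illustrative assumptions -- calibrate them against whatever
# click-through-rate data you trust.
FEATURE_PENALTIES = {
    "top_ad": 1.5,          # per ad block sitting above the organic results
    "instant_answer": 2.5,
    "knowledge_graph": 1.5,
    "news_block": 1.0,
    "image_block": 1.0,
    "video_block": 1.0,
}

def opportunity(features):
    """Score a SERP out of 10, docking points for features that crowd
    out the classic 10 blue links."""
    score = 10 - sum(FEATURE_PENALTIES.get(f, 0) for f in features)
    return max(round(score), 1)

print(opportunity(["top_ad", "instant_answer"]))        # 6 -- as in the example above
print(opportunity(["top_ad", "top_ad"]))                # 7 -- two top ads
print(opportunity(["top_ad", "top_ad", "news_block"]))  # 6 -- two top ads plus news
```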

This does mean that you have to run a query for every keyword you’re thinking about, but you should be doing that anyway. You want to get a good look at who’s ranking in those search results and what kind of content they’re building. If you’re running a keyword difficulty tool, you are already getting something like that.

Business value

This is a classic one. Business value is essentially saying, “What’s it worth to us if visitors come through with this search term?” You can get that from bidding through AdWords. That’s the most sort of scientific, mathematically sound way to get it. Then, of course, you can also get it through your own intuition. It’s better to start with your intuition than nothing if you don’t already have AdWords data or you haven’t started bidding, and then you can refine your sort of estimate over time as you see search visitors visit the pages that are ranking, as you potentially buy those ads, and those kinds of things.

You can get more sophisticated around this. I think a 10-point scale is just fine. You could also use a one-to-three scale; that’s also fine.
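
If you do have AdWords data, one rough way to turn cost-per-click into a 10-point business value column is a sketch like this. Using CPC as a proxy for value, and the ceiling below, are both assumptions to adjust for your own market:

```python
def business_value(cpc, max_cpc=20.0):
    """Map an AdWords cost-per-click onto a 1-10 value scale.

    Treating CPC as a proxy for commercial value is a rough assumption,
    and the max_cpc ceiling is arbitrary -- set it for your own market.
    """
    return max(1, min(10, round(10 * cpc / max_cpc)))

print(business_value(14.50))  # 7 -- pricey clicks usually mean valuable visitors
print(business_value(0.40))   # 1 -- floor at 1 so nothing scores zero
```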

Requirements or Options

Then there’s a column I don’t exactly know what to call. I can’t remember who showed me the spreadsheet that had it in there; I think they called it Optional Data or Additional SERPs Data, but I’m going to call it Requirements or Options. Requirements, because this is essentially saying, “Hey, if I want to rank in these search results, am I seeing that the top two or three are all video? Oh, they’re all video. They’re all coming from YouTube. If I want to be in there, I’ve got to be video.”

Or something like, “Hey, I’m seeing that most of the top results have been produced or updated in the last six months. Google appears to be biasing toward very fresh information here.” For example, if I were searching for “university scholarships Cambridge 2015,” Google probably wants to show results either from the official page on Cambridge’s website or from articles published this year about getting into that university and the scholarships that are available. I saw that in two of these search results: both the college and university scholarship terms had a significant number of SERPs where a freshness bump appeared to be required. You can spot that easily, because the date will be shown ahead of the description, and the date will be very fresh, sometime in the last six months or a year.
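
As a sketch of how you might flag that at scale, here’s one way to test whether a SERP looks freshness-biased, using the dates shown ahead of the descriptions. The window and threshold are assumptions to tune, and the dates are hypothetical:

```python
from datetime import date, timedelta

def freshness_required(result_dates, today, window_days=365, threshold=0.5):
    """Flag a SERP as freshness-biased when enough dated results are recent.

    `result_dates` holds the dates shown ahead of the descriptions
    (None when no date appears). The window and threshold are
    assumptions to tune, not values from the post.
    """
    cutoff = today - timedelta(days=window_days)
    dated = [d for d in result_dates if d is not None]
    if not dated:
        return False
    return sum(d >= cutoff for d in dated) / len(dated) >= threshold

# Hypothetical dates from a "university scholarships Cambridge 2015" SERP:
dates = [date(2015, 5, 2), date(2015, 3, 18), None, date(2013, 9, 1)]
print(freshness_required(dates, today=date(2015, 6, 17)))  # True -- 2 of 3 dated results are fresh
```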

Prioritization

Then finally I can build my prioritization. So based on all the data I had here, I essentially said, “Hey, you know what? These are not 1 and 2. This is actually 1A and 1B, because these are the same concepts. I’m going to build a single page to target both of those keyword phrases.” I think that makes good sense. Someone who is looking for college scholarships, university scholarships, same intent.

I am giving it a slight prioritization, 1A versus 1B, and the reason I do this is that I always have one keyword phrase I’m leaning on a little more heavily. Because Google isn’t perfect around this, the search results will be a little different, so I want to bias toward one versus the other. In this case, since I’m targeting university more than college, my title tag might say something like “college and university scholarships,” so that “university” and “scholarships” are nicely together, near the front of the title, that kind of thing. Then 1B, 2, 3.
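
If you want the spreadsheet to sort itself, here’s a minimal sketch of collapsing the columns into one score. Everything about it, log-scaling volume, dividing by difficulty, equal weight on opportunity and value, and the toy numbers, is an assumption to tune rather than a formula from this post:

```python
import math

def priority(volume, difficulty, opportunity, value):
    """Collapse the spreadsheet columns into one sortable score.

    Every choice here is an assumption: log-scaling volume, dividing by
    difficulty, equal weight on opportunity and value. Tune it rather
    than trusting these defaults.
    """
    return math.log10(volume) * opportunity * value / difficulty

# Hypothetical column values: (volume, difficulty, opportunity, value).
keywords = {
    "university scholarships": (40500, 71, 7, 8),
    "college scholarships":    (33100, 70, 6, 8),
    "scholarship essay tips":  (2900, 45, 9, 4),
}
for rank, kw in enumerate(sorted(keywords, key=lambda k: priority(*keywords[k]),
                                 reverse=True), 1):
    print(rank, kw, round(priority(*keywords[kw]), 2))
# 1 university scholarships 3.63
# 2 college scholarships 3.1
# 3 scholarship essay tips 2.77
```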

This is kind of the way that modern SEOs are building a more sophisticated process with better data, more inclusive data that helps them select the right kinds of keywords and prioritize to the right ones. I’m sure you guys have built some awesome stuff. The Moz community is filled with very advanced marketers, probably plenty of you who’ve done even more than this.

I look forward to hearing from you in the comments. I would love to chat more about this topic, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com
