The Linkbait Bump: How Viral Content Creates Long-Term Lift in Organic Traffic – Whiteboard Friday

Posted by randfish

A single fantastic (or “10x”) piece of content can lift a site’s traffic curves long beyond the popularity of that one piece. In today’s Whiteboard Friday, Rand talks about why those curves settle into a “new normal,” and how you can go about creating the content that drives that change.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about the linkbait bump, a classic phrase in the SEO world and almost a little dated. I think today we’re talking a little bit more about viral content and how high-quality content, content that really is the cornerstone of a brand’s or a website’s content, can be an incredible and powerful driver of traffic, not just when it initially launches but over time.

So let’s take a look.

This is a classic linkbait bump, viral content bump analytics chart. I’m seeing over here my traffic and over here the different months of the year. You know, January, February, March, like I’m under a thousand. Maybe I’m at 500 visits or something, and then I have this big piece of viral content. It performs outstandingly well from a relative standpoint for my site. It gets 10,000 or more visits, drives a ton more people to my site, and then what happens is that that traffic falls back down. But the new normal down here, new normal is higher than the old normal was. So the new normal might be at 1,000, 1,500 or 2,000 visits whereas before I was at 500.

Why does this happen?

A lot of folks see an analytics chart like this, see examples of content that’s done this for websites, and they want to know: Why does this happen and how can I replicate that effect? The reasons why are it sort of feeds back into that viral loop or the flywheel, which we’ve talked about in previous Whiteboard Fridays, where essentially you start with a piece of content. That content does well, and then you have things like more social followers on your brand’s accounts. So now next time you go to amplify content or share content socially, you’re reaching more potential people. You have a bigger audience. You have more people who share your content because they’ve seen that that content performs well for them in social. So they want to find other content from you that might help their social accounts perform well.

You see more RSS and email subscribers because people see your interesting content and go, “Hey, I want to see when these guys produce something else.” You see more branded search traffic because people are looking specifically for content from you, not necessarily just around this viral piece, although that’s often a big part of it, but around other pieces as well, especially if you do a good job of exposing them to that additional content. You get more bookmark and type in traffic, more searchers biased by personalization because they’ve already visited your site. So now when they search and they’re logged into their accounts, they’re going to see your site ranking higher than they normally would otherwise, and you get an organic SEO lift from all the links and shares and engagement.

So there’s a ton of different factors that feed into this, and you kind of want to hit all of these things. If you have a piece of content that gets a lot of shares, a lot of links, but then doesn’t promote engagement, doesn’t get more people signing up, doesn’t get more people searching for your brand or searching for that content specifically, then it’s not going to have the same impact. Your traffic might fall further and more quickly.

How do you achieve this?

How do we get content that’s going to do this? Well, we’re going to talk through a number of things that we’ve talked about previously on Whiteboard Friday. But there are some additional ones as well. This isn’t just about creating good content or creating high-quality content; it’s about creating a particular kind of content. So for this what you want is a deep understanding, not necessarily of what your standard users or standard customers are interested in, but a deep understanding of what influencers in your niche will share and promote and why they do that.

This often means that you follow a lot of sharers and influencers in your field, and you understand, hey, they’re all sharing X piece of content. Why? Oh, because it does this, because it makes them look good, because it helps their authority in the field, because it provides a lot of value to their followers, because they know it’s going to get a lot of retweets and shares and traffic. Whatever that because is, you have to have a deep understanding of it in order to have success with viral kinds of content.

Next, you want to have empathy for users and what will give them the best possible experience. So if you know, for example, that a lot of people are coming on mobile and are going to be sharing on mobile, which is true of almost all viral content today, FYI, you need to be providing a great mobile and desktop experience. Oftentimes that mobile experience has to be different, not just responsive design, but actually a different format, a different way of being able to scroll through or watch or see or experience that content.

There are some good examples out there of content that does that. It makes a very different user experience based on the browser or the device you’re using.

You also need to be aware of what will turn them off. So promotional messages, pop-ups, trying to sell to them, oftentimes that diminishes user experience. It means that content that could have been more viral, that could have gotten more shares won’t.

Unique value and attributes that separate your content from everything else in the field. So if there’s like ABCD and whoa, what’s that? That’s very unique. That stands out from the crowd. That provides a different form of value in a different way than what everyone else is doing. That uniqueness is often a big reason why content spreads virally, why it gets more shared than just the normal stuff.

I’ve talked about this a number of times, but content that’s 10X better than what the competition provides. So unique value from the competition, but also quality that is not just a step up, but 10X better, massively, massively better than what else you can get out there. That makes it unique enough. That makes it stand out from the crowd, and that’s a very hard thing to do, but that’s why this is so rare and so valuable.

This is a critical one, and I think one that, I’ll just say, many organizations fail at. That is the freedom and support to fail many times, to try to create these types of effects, to have this impact many times before you hit on a success. A lot of managers and clients and teams and execs just don’t give marketing teams and content teams the freedom to say, “Yeah, you know what? You spent a month and developer resources and designer resources and spent some money to go do some research and contracted with this third party, and it wasn’t a hit. It didn’t work. We didn’t get the viral content bump. It just kind of did okay. You know what? We believe in you. You’ve got a lot of chances. You should try this another 9 or 10 times before we throw it out. We really want to have a success here.”

That is something that very few teams invest in. The powerful thing is because so few people are willing to invest that way, the ones that do, the ones that believe in this, the ones that invest long term, the ones that are willing to take those failures are going to have a much better shot at success, and they can stand out from the crowd. They can get these bumps. It’s powerful.

Not a requirement, but it really, really helps to have a strong engaged community, either on your site and around your brand, or at least in your niche and your topic area that will help, that wants to see you, your brand, your content succeed. If you’re in a space that has no community, I would work on building one, even if it’s very small. We’re not talking about building a community of thousands or tens of thousands. A community of 100 people, a community of 50 people even can be powerful enough to help content get that catalyst, that first bump that’ll boost it into viral potential.

Then finally, for this type of content, you need to have a logical and not overly promotional match between your brand and the content itself. You can see many sites in what I call sketchy niches. So like a criminal law site or a casino site or a pharmaceutical site that’s offering like an interactive musical experience widget, and you’re like, “Why in the world is this brand promoting this content? Why did they even make it? How does that match up with what they do? Oh, it’s clearly just intentionally promotional.”

Look, many of these brands go out there and they say, “Hey, the average web user doesn’t know and doesn’t care.” I agree. But the average web user is not an influencer. Influencers know. Well, they’re very, very suspicious of why content is being produced and promoted, and they’re very skeptical of promoting content that they don’t think is altruistic. So this kills a lot of content for brands that try and invest in it when there’s no match. So I think you really need that.

Now, when you do these linkbait bump kinds of things, I would strongly recommend that you follow up: that you consider the quality of the content that you’re producing thereafter, that you invest in reproducing these resources and keeping them updated, and that you don’t simply give up on content production after this. However, if you’re a small business site, a small or medium business, you might think about only doing one or two of these a year. If you are a heavy content player, doing a lot of content marketing, where content marketing is how you’re investing in web traffic, I’d probably be considering these weekly or monthly at the least.

All right, everyone. Look forward to your experiences with the linkbait bump, and I will see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


From Editorial Calendars to SEO: Setting Yourself Up to Create Fabulous Content

Posted by Isla_McKetta

Quick note: This article is meant to apply to teams of all sizes, from the sole proprietor who spends all night writing their copy (because they’re doing business during the day) to the copy team who occupies an entire floor and produces thousands of pieces of content per week. So if you run into a section that you feel requires more resources than you can devote just now, that’s okay. Bookmark it and revisit when you can, or scale the step down to a more appropriate size for your team. We believe all the information here is important, but that does not mean you have to do everything right now.

If you thought ideation was fun, get ready for content creation. Sure, we’ve all written some things before, but the creation phase of content marketing is where you get to watch that beloved idea start to take shape.

Before you start creating, though, you want to get (at least a little) organized, and an editorial calendar is the perfect first step.

Editorial calendars

Creativity and organization are not mutually exclusive. In fact, they can feed each other. A solid schedule gives you and your writers the time and space to be wild and creative. If you’re just starting out, this document may be sparse, but it’s no less important. Starting early with your editorial calendar also saves you from creating content willy-nilly and then finding out months later that no one ever finished that pesky (but crucial) “About” page.

There’s no wrong way to set up your editorial calendar, as long as it’s meeting your needs. Remember that an editorial calendar is a living document, and it will need to change as a hot topic comes up or an author drops out.

There are a lot of different types of documents that pass for editorial calendars. You get to pick the one that’s right for your team. The simplest version is a straight-up calendar with post titles written out on each day. You could even use a wall calendar and a Sharpie.

  • Monday: “The Five Colors of Oscar Fashion” (Author: Ellie)
  • Tuesday: “12 Fabrics We’re Watching for Fall” (Author: James)
  • Wednesday: “Is Charmeuse the New Corduroy?” (Author: Marta)
  • Thursday: “Hot Right Now: Matching Your Handbag to Your Hatpin” (Author: Laila)
  • Friday: “Tea-length and Other Fab Vocab You Need to Know” (Author: Alex)

Teams who are balancing content for different brands at agencies or other more complex content environments will want to add categories, author information, content type, social promo, and more to their calendars.

Truly complex editorial calendars are more like hybrid content creation/editorial calendars, where each of the steps to create and publish the content are indicated and someone has planned for how long all of that takes. These can be very helpful if the content you’re responsible for crosses a lot of teams and can take a long time to complete. It doesn’t matter if you’re using Excel or a Google Doc, as long as the people who need the calendar can easily access it. Gantt charts can be excellent for this. Here’s a favorite template for creating a Gantt chart in Google Docs (and they only get more sophisticated).

Complex calendars can encompass everything from ideation through writing, legal review, and publishing. You might even add content localization if your empire spans more than one continent to make sure you have the currency, date formatting, and even slang right.

Content governance

Governance outlines who is taking responsibility for your content. Who evaluates your content performance? What about freshness? Who decides to update (or kill) an older post? Who designs and optimizes workflows for your team or chooses and manages your CMS?

All these individual concerns fall into two overarching components to governance: daily maintenance and overall strategy. In the long run it helps if one person has oversight of the whole process, but the smaller steps can easily be split among many team members. Read this to take your governance to the next level.

Finding authors

The scale of your writing enterprise doesn’t have to be limited to the number of authors you have on your team. It’s also important to consider the possibility of working with freelancers and guest authors. Here’s a look at the pros and cons of outsourced versus in-house talent.

                              In-house authors             Guest authors and freelancers
  Responsible to              You                          Themselves
  Paid by                     You (as part of salary)      You (on a per-piece basis)
  Subject matter expertise    Broad but shallow            Deep but narrow
  Capacity for extra work     As you wish                  Show me the Benjamins
  Turnaround time             On a dime                    Varies
  Communication investment    Less                         More
  Devoted audience            Smaller                      Potentially huge

From that table, it might look like in-house authors have a lot more advantages. That’s somewhat true, but do not underestimate the value of occasionally working with a true industry expert who has name recognition and a huge following. Whichever route you take (and there are plenty of hybrid options), it’s always okay to ask that the writers you are working with be professional about communication, payment, and deadlines. In some industries, guest writers will write for links. Consider yourself lucky if that’s true. Remember, though, that the final paycheck can be great leverage for getting a writer to do exactly what you need them to (such as making their deadlines).

Tools to help with content creation

So those are some things you need to have in place before you create content. Now’s the fun part: getting started. One of the beautiful things about the Internet is that new and exciting tools crop up every day to help make our jobs easier and more efficient. Here are a few of our favorites.

Calendars

You can always use Excel or a Google Doc to set up your editorial calendar, but we really like Trello for the ability to gather a lot of information in one card and then drag and drop it into place. Once there are actual dates attached to your content, you might be happier with something like a Google Calendar.

Ideation and research

If you need a quick fix for ideation, turn your keywords into wacky ideas with Portent’s Title Maker. You probably won’t want to write to the exact title you’re given (although “True Facts about Justin Bieber’s Love of Pickles” does sound pretty fascinating…), but it’s a good way to get loose and look at your topic from a new angle.

Once you’ve got that idea solidified, find out what your audience thinks about it by gathering information with Survey Monkey or your favorite survey tool. Or, use Storify to listen to what people are saying about your topic across a wide variety of platforms. You can also use Storify to save those references and turn them into a piece of content or an illustration for one. Don’t forget that a simple social ask can also do wonders.

Format

Content doesn’t have to be all about the words. Screencasts, Google+ Hangouts, and presentations are all interesting ways to approach content. Remember that not everyone’s a reader. Some of your audience will be more interested in visual or interactive content. Make something for everyone.

Illustration

Don’t forget to make your content pretty. It’s not that hard to find free stock images online (just make sure you aren’t violating someone’s copyright). We like Morgue File, Free Images, and Flickr’s Creative Commons. If you aren’t into stock images and don’t have access to in-house graphic design, it’s still relatively easy to add images to your content. Pull a screenshot with Skitch or dress up an existing image with Pixlr. You can also use something like Canva to create custom graphics.

Don’t stop with static graphics, though. There are so many tools out there to help you create gifs, quizzes and polls, maps, and even interactive timelines. Dream it, then search for it. Chances are whatever you’re thinking of is doable.

Quality, not quantity

Mediocre content will hurt your cause

Less is more. That’s not an excuse to pare your blog down to one post per month (check out our publishing cadence experiment), but it is an important reminder that if you’re writing “How to Properly Install a Toilet Seat” two days after publishing “Toilet Seat Installation for Dummies,” you might want to rethink your strategy.

The thing is, and I’m going to use another cliché here to drive home the point, you never get a second chance to make a first impression. Potential customers are roving the Internet right now looking for exactly what you’re selling. And if what they find is an only somewhat informative article stuffed with keywords and awful spelling and grammar mistakes… well, you don’t want that. Oh, and search engines think it’s spammy too…

A word about copyright

We’re not copyright lawyers, so we can’t give you the ins and outs on all the technicalities. What we can tell you (and you already know this) is that it’s not okay to steal someone else’s work. You wouldn’t want them to do it to you. This includes images. So whenever you can, make your own images or find images that you can either purchase the rights to (stock imagery) or license under Creative Commons.

It’s usually okay to quote short portions of text, as long as you attribute the original source (and a link is nice). In general, titles and ideas can’t be copyrighted (though they might be trademarked or patented). When in doubt, asking for permission is smart.

That said, part of the fun of the Internet is the remixing culture which includes using things like memes and gifs. Just know that if you go that route, there is a certain amount of risk involved.

Editing

Your content needs to go through at least one editing cycle by someone other than the original author. There are two types of editing: developmental editing (which looks at the underlying structure of a piece and happens earlier in the writing cycle) and copy editing (which makes sure all the words are there and spelled right in the final draft).

If you have a very small team or are in a rush (and are working with writers that have some skill), you can often skip the developmental editing phase. But know that an investment in that close read of an early draft is often beneficial to the piece and to the writer’s overall growth.

Many content teams peer-edit work, which can be great. Other organizations prefer to run their work by a dedicated editor. There’s no wrong answer, as long as the work gets edited.

Ensuring proper basic SEO

The good news is that search engines are doing their best to get closer and closer to understanding and processing natural language. So good writing (including the natural use of synonyms rather than repeating those keywords over and over and…) will take you a long way towards SEO mastery.

For that reason (and because it’s easy to get trapped in keyword thinking and veer into keyword stuffing), it’s often nice to think of your SEO check as a further edit of the post rather than something you should think about as you’re writing.

But there are still a few things you can do to help cover those SEO bets. Once you have that draft, do a pass for SEO to make sure you’ve covered the following (the quick script after this list is one way to automate the check):

  • Use your keyword in your title
  • Use your keyword (or long-tail keyword phrase) in an H2
  • Make sure the keyword appears at least once (though not more than four times, especially if it’s a phrase) in the body of the post
  • Use image alt text (including the keyword when appropriate)
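If you’d rather not eyeball each of those checks by hand, a quick script can do the first pass for you. This is a minimal sketch meant to be pasted into the browser console on a preview of the post; the keyword value is a placeholder you’d swap for your own phrase, and it only covers the mechanical checks above, not writing quality.

```javascript
// Rough on-page SEO pass. Paste into the browser console on a preview of the post.
// "keyword" is a placeholder; replace it with your target phrase.
var keyword = "toilet seat installation".toLowerCase();

// Keyword in the title?
console.log("In title:", document.title.toLowerCase().indexOf(keyword) !== -1);

// Keyword in at least one H2?
var h2s = Array.prototype.slice.call(document.querySelectorAll("h2"));
console.log("In an H2:", h2s.some(function (h) {
  return h.textContent.toLowerCase().indexOf(keyword) !== -1;
}));

// How many times does it appear in the body copy? (Aim for 1-4.)
var body = document.body.innerText.toLowerCase();
console.log("Body occurrences:", body.split(keyword).length - 1);

// Any images missing alt text?
var images = Array.prototype.slice.call(document.querySelectorAll("img"));
console.log("Images missing alt text:", images.filter(function (img) {
  return !img.getAttribute("alt");
}).length);
```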

Finding time to write when you don’t have any

Writing (assuming you’re the one doing the writing) can require a lot of energy—especially if you want to do it well. The best way to find time to write is to break each project down into little tasks. For example, writing a blog post actually breaks down into these steps (though not always in this order):

  • Research
  • Outline
  • Fill in outline
  • Rewrite and finish post
  • Write headline
  • SEO check
  • Final edit
  • Select hero image (optional)

So if you only have random chunks of time, set aside 15-30 minutes one day (when your research is complete) to write a really great outline. Then find an hour the next day to fill that outline in. After an additional hour the following day (unless you’re dealing with a research-heavy post), you should have a solid draft by the end of day three.

The magic of working this way is that you engage your brain and then give it time to work in the background while you accomplish other tasks. Hemingway used to stop mid-sentence at the end of his writing days for the same reason.

Once you have that draft nailed, the rest of the steps are relatively easy (even the headline, which often takes longer to write than any other sentence, is easier after you’ve immersed yourself in the post over a few days).

Working with design/development

Every designer and developer is a little different, so we can’t give you any blanket cure-alls for inter-departmental workarounds (aka “smashing silos”). But here are some suggestions to help you convey your vision while capitalizing on the expertise of your coworkers to make your content truly excellent.

Ask for feedback

From the initial brainstorm to general questions about how to work together, asking your team members what they think and prefer can go a long way. Communicate all the details you have (especially the unspoken expectations) and then listen.

If your designer tells you up front that your color scheme is years out of date, you’re saving time. And if your developer tells you that the interactive version of that timeline will require four times the resources, you have the info you need to fight for more budget (or reassess the project).

Check in

Things change in the design and development process. If you have interim check-ins already set up with everyone who’s working on the project, you’ll avoid the potential for nasty surprises at the end. Like finding out that no one has experience working with that hot new coding language you just read about and they’re trying to do a workaround that isn’t working.

Proofread

Your job isn’t done when you hand over the copy to your designer or developer. Not only might they need help rewriting some of your text so that it fits in certain areas, they will also need you to proofread the final version. Accidents happen in the copy-and-paste process, and there’s nothing sadder than a really beautiful (and expensive) piece of content that wraps up with a typo.

Know when to fight for an idea

Conflict isn’t fun, but sometimes it’s necessary. The more people involved in your content, the more watered down the original idea can get and the more roadblocks and conflicting ideas you’ll run into. Some of that is very useful. But sometimes you’ll get pulled off track. Always remember who owns the final product (this may not be you) and be ready to stand up for the idea if it’s starting to get off track.

We’re confident this list will set you on the right path to creating some really awesome content, but is there more you’d like to know? Ask us your questions in the comments.


Illustrated Guide to Advanced On-Page Topic Targeting for SEO

Posted by Cyrus-Shepard

Topic n. A subject or theme of a webpage, section, or site.

Several SEOs have recently written about topic modeling and advanced on-page optimization.

The concepts themselves are dizzying: LDA, co-occurrence, and entity salience, to name only a few. The question is, “How can I easily incorporate these techniques into my content for higher rankings?”

In fact, you can create optimized pages without understanding complex algorithms. Sites like Wikipedia, IMDB, and Amazon create highly optimized, topic-focused pages almost by default. Utilizing these best practices works exactly the same when you’re creating your own content.

The purpose of this post is to provide a simple framework for on-page topic targeting in a way that makes optimizing easy and scalable while producing richer content for your audience.

1. Keywords and relationships

No matter what topic modeling technique you choose, all rely on discovering relationships between words and phrases. As content creators, how we organize words on a page greatly influences how search engines determine the on-page topics.

When we use keyword phrases, search engines hunt for other phrases and concepts that relate to one another. So our first job is to expand our keyword research to incorporate these related phrases and concepts. Contextually rich content includes:

  • Close variants and synonyms: Includes abbreviations, plurals, and phrases that mean the same thing.
  • Primary related keywords: Words and phrases that relate to the main keyword phrase.
  • Secondary related keywords: Words and phrases that relate to the primary related keywords.
  • Entity relationships: Concepts that describe the properties and relationships between people, places, and things.

A good keyword phrase or entity is one that predicts the presence of other phrases and entities on the page. For example, a page about “The White House” predicts other phrases like “president,” “Washington,” and “Secret Service.” Incorporating these related phrases may help strengthen the topicality of “White House.”
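To make “predicts the presence of other phrases” a little more concrete, here is a toy sketch. The pages and phrases are made up, and real topic models are far more sophisticated than this simple conditional co-occurrence, but it shows the basic intuition: pages about the White House tend to also mention the president.

```javascript
// Toy co-occurrence: of the documents that contain phrase A,
// what share also contain phrase B? Tiny, made-up corpus.
var pages = [
  "the white house is the residence of the president in washington",
  "the president met the secret service at the white house",
  "painting your house white is a popular choice",
  "washington is home to the white house and many museums"
];

function cooccurrence(phraseA, phraseB, corpus) {
  var withA = corpus.filter(function (doc) { return doc.indexOf(phraseA) !== -1; });
  if (withA.length === 0) { return 0; }
  var withBoth = withA.filter(function (doc) { return doc.indexOf(phraseB) !== -1; });
  return withBoth.length / withA.length;
}

console.log(cooccurrence("white house", "president", pages)); // ~0.67: strong co-occurrence
console.log(cooccurrence("white house", "painting", pages));  // 0: unrelated phrase
```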

2. Position, frequency, and distance

How a page is organized can greatly influence how concepts relate to each other.

Once search engines find your keywords on a page, they need to determine which ones are most important, and which ones actually have the strongest relationships to one another.

Three primary techniques for communicating this include:

  • Position: Keywords placed in important areas like titles, headlines, and higher up in the main body text may carry the most weight.
  • Frequency: Using techniques like TF-IDF, search engines determine important phrases by calculating how often they appear in a document compared to a normal distribution (a minimal sketch follows this list).
  • Distance: Words and phrases that relate to each other are often found close together, or grouped by HTML elements. This means leveraging semantic distance to place related concepts close to one another using paragraphs, lists, and content sectioning.
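To make the frequency idea concrete, here is a minimal TF-IDF sketch. It is purely illustrative: the documents are toy strings, the tokenizer is naive, and real search engines use far more sophisticated weighting and normalization than this.

```javascript
// Toy TF-IDF: scores how characteristic a term is of one document
// relative to a small corpus. Purely illustrative.
var docs = [
  "the white house president washington secret service",
  "the white paint house renovation tips",
  "president election washington campaign"
];

function tokenize(text) {
  return text.toLowerCase().split(/\s+/).filter(Boolean);
}

// Term frequency: occurrences of the term / total terms in the document.
function tf(term, doc) {
  var tokens = tokenize(doc);
  var hits = tokens.filter(function (t) { return t === term; }).length;
  return hits / tokens.length;
}

// Inverse document frequency: terms that appear in fewer documents score higher.
// Assumes the term appears in at least one document.
function idf(term, corpus) {
  var containing = corpus.filter(function (d) {
    return tokenize(d).indexOf(term) !== -1;
  }).length;
  return Math.log(corpus.length / containing);
}

function tfidf(term, doc, corpus) {
  return tf(term, doc) * idf(term, corpus);
}

console.log(tfidf("president", docs[0], docs)); // lower: "president" appears in 2 of the 3 docs
console.log(tfidf("secret", docs[0], docs));    // higher: "secret" is unique to the first doc
```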

A great way to organize your on-page content is to employ your primary and secondary related keywords in support of your focus keyword. Each primary related phrase becomes its own subsection, with the secondary related phrases supporting the primary, as illustrated here.

Keyword Position, Frequency and Distance

As an example, the primary keyword phrase of this page is ‘On-page Topic Targeting’. Supporting topics include: keywords and relationships, on-page optimization, links, entities, and keyword tools. Each related phrase supports the primary topic, and each becomes its own subsection.

3. Links and supplemental content

Many webmasters overlook the importance of linking as a topic signal.

Several well-known Google search patents and early research papers describe analyzing a page’s links as a way to determine topic relevancy. These include both internal links to your own pages and external links to other sites, often with relevant anchor text.

Google’s own Quality Rater Guidelines cites the value of external references to other sites. It also describes a page’s supplemental content, which can include internal links to other sections of your site, as a valuable resource.

Links and Supplemental Content

If you need an example of how relevant linking can help your SEO, The New York Times famously saw success, and an increase in traffic, when it started linking out to other sites from its topic pages.

Although this guide discusses on-page topic optimization, topical external links with relevant anchor text can greatly influence how search engines determine what a page is about. These external signals often carry more weight than on-page cues, but it almost always works best when on-page and off-page signals are in alignment.

4. Entities and semantic markup

Google extracts entities from your webpage automatically, without any effort on your part. These are people, places and things that have distinct properties and relationships with each other.

• Christopher Nolan (entity, person) stands 5’4″ (property, height) and directed Interstellar (entity, movie)

Even though entity extraction happens automatically, it’s often essential to mark up your content with Schema for specific supported entities such as business information, reviews, and products. While the ranking benefit of adding Schema isn’t 100% clear, structured data has the advantage of enhanced search results.

Entities and Schema

For a solid guide in implementing schema.org markup, see Builtvisible’s excellent guide to rich snippets.
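As a concrete illustration of the kind of markup being discussed, the snippet below embeds schema.org data for a product review as JSON-LD. Every value is a made-up placeholder, and in practice you would usually place the JSON-LD block directly in the page’s HTML rather than injecting it with a script; it is written this way only to keep the example self-contained. Check schema.org and your search engine’s structured data documentation for the properties your content type actually supports.

```javascript
// Illustrative only: embeds schema.org Product/Review data as JSON-LD.
// All names and values below are placeholders, not a real product or person.
var structuredData = {
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Espresso Machine",
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "Jane Doe" },
    "reviewRating": { "@type": "Rating", "ratingValue": "4", "bestRating": "5" },
    "reviewBody": "Solid machine for the price."
  }
};

// Create a <script type="application/ld+json"> tag and append it to the page.
var tag = document.createElement("script");
tag.type = "application/ld+json";
tag.text = JSON.stringify(structuredData);
document.head.appendChild(tag);
```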

5. Crafting the on-page framework

You don’t need to be a search genius or spend hours on complex research to produce high quality, topic optimized content. The beauty of this framework is that it can be used by anyone, from librarians to hobby bloggers to small business owners; even when they aren’t search engine experts.

A good webpage has much in common with a high quality university paper. This includes:

  1. A strong title that communicates the topic
  2. Introductory opening that lays out what the page is about
  3. Content organized into thematic subsections
  4. Exploration of multiple aspects of the topic and answers to related questions
  5. Provision of additional resources and external citations

Your webpage doesn’t need to be academic, stuffy, or boring. Some of the most interesting pages on the Internet employ these same techniques while remaining dynamic and entertaining.

Keep in mind that ‘best practices’ don’t apply to every situation, and as Rand Fishkin says, “There’s no such thing as ‘perfectly optimized’ or ‘perfect on-page SEO.’” Pulling everything together looks something like this:

On-page Topic Targeting for SEO

This graphic is highly inspired by Rand Fishkin’s great Visual Guide to Keyword Targeting and On-Page SEO. This guide doesn’t replace that canonical resource. Instead, it should be considered a supplement to it.

5 alternative tools for related keyword and entity research

For the search professional, there are dozens of tools available for thematic keyword and entity research. This list is not exhaustive by any means, but contains many useful favorites.

1. Alchemy API

One of the few tools on the market that delivers entity extraction, concept targeting and linked data analysis. This is a great platform for understanding how a modern search engine views your webpage.

2. SEO Review Tools

The SEO Keyword Suggestion Tool was actually designed to return both primary and secondary related keywords, as well as options for synonyms and country targeting.

3. LSIKeywords.com

The LSIKeyword tool performs Latent Semantic Indexing (LSI) on the top pages returned by Google for any given keyword phrase. The tool can go down from time to time, but it’s a great one to bookmark.

4. Social Mention

Quick and easy: enter any keyword phrase and then check “Top Keywords” to see what words appear most with your primary phrase across all of the platforms that Social Mention monitors.

5. Google Trends

Google Trends is a powerful related-keyword research tool, if you know how to use it. The secret is downloading your results to a CSV (under settings) to get a list of up to 50 related keywords per search term.


Google’s Physical Web and its Impact on Search

Posted by Tom-Anthony

In early October, Google announced a new project called “The Physical Web,” which they explain like this:

The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

At the moment this is an experimental project which is designed to promote establishing an open standard by which this mechanism could work. The two key elements of this initiative are:

URLs: The project proposes that all ‘smart devices’ should advertise a URL by which you can interact with that device. The device broadcasts its URL to anyone in the vicinity, who can detect it via their smartphone (with the eventual goal being this functionality is built into the smart phone operating systems rather than needing third-party apps).


Beacons: Not well known until Apple recently jumped on the bandwagon by announcing iBeacons, beacon technology has been around for a couple of years now. Using a streamlined sibling of Bluetooth called Bluetooth Low Energy (no pairing, range of ~70 metres / ~230 feet), it allows smartphones to detect the presence of nearby beacons and their approximate distance. Until now they’ve mostly been used for ‘hyper-local’ location-based applications (check this blog post of mine for some thoughts on how this might impact SEO).

The project proposes adapting and augmenting the signal that Beacons send out to include a URL by which nearby users might interact with a smart device.

This post is about looking to the future at ways this could potentially impact search. It isn’t likely that any serious impact will happen within the next 18 months, and it is hard to predict exactly how things will pan out, but this post is designed to prompt you to think about things proactively.

Usage examples

To help wrap your head around this, let’s look at a few examples of possible uses:

Bus times: This is one of the examples Google gives, where you walk up to a bus stop and, on detecting the smart device embedded in the stop, your phone allows you to pull the latest bus times and travel info.

Item finder: Imagine when you go to the store looking for a specific item. You could pull out your phone and check stock of the item, as well as being directed to the specific part of the store where you can find it.

Check in: Combined with using URLs that are only accessible on local wifi / intranet, you could make a flexible and consistent check in mechanism for people in a variety of situations.

I’m sure there are many many more applications that are yet to be thought up. One thing to notice is that there is no reason you can’t bookmark these advertised URLs and use them elsewhere, so you can’t be sure that someone accessing the URL is actually by the device in question. You can get some of the way there by using URLs that are only accessible within a certain network, but that isn’t going to be a general solution.

Also, note that these URLs don’t need to be constrained to just website URLs; they could just as well be deep links into apps which you might have installed.

Parallels to the web and ranking

There are some obvious parallels to the web (which is likely why Google named it the way they did). There will be many smart devices which will map to URLs which anyone can go to. A corollary of this is that there will be similar issues to those we see in search engines today. Google already identified one such issue—ranking—on the page for the project:

At first, the nearby smart devices will be small, but if we’re successful, there will be many to choose from and that raises an important UX issue. This is where ranking comes in. Today, we are perfectly happy typing “tennis” into a search engine and getting millions of results back, we trust that the first 10 are the best ones. The same applies here. The phone agent can sort by both signal strength as well as personal preference and history, among many other possible factors. Clearly there is lots of work to be done here.

So there is immediately a parallel between Google’s role on the world wide web and their potential role on this new physical web; there is a suggestion here that someone needs to rank beacons if they become so numerous that our phones or wearable devices are often picking up a variety of beacons.

Google proposes proximity as the primary measure of ranking, but the proximity range of BLE technology is very imprecise, so I imagine in dense urban areas that just using proximity won’t be sufficient. Furthermore, given the beacons are cheap (in bulk, $5 per piece will get you standalone beacons with a year-long battery) I imagine there could be “smart device spam.”

At that point, you need some sort of ranking mechanism and that will inevitably lead to people trying to optimise (be it manipulative or a more white-hat approach).
However, I don’t think that will be the sole impact on search. There are several other possible outcomes.

Further impacts on the search industry

1. Locating out-of-range smart devices

Imagine that these smart devices became fairly widespread and were constantly advertising information to anyone nearby with a smart device. I imagine, in a similar vein to schema.org actions, which provide a standard way for websites to describe what they enable someone to do (“affordances,” for the academics), we could establish similar semantic standards for smart devices, enabling them to advertise what services/goods they provide.

Now imagine you are looking for a specific product or service, which you want as quickly as possible (e.g “I need to pick up a charger for my phone,” or “I need to charge my phone on the move”). You could imagine that Google or some other search engine will have mapped these smart devices. If the above section was about “ranking,” then this is about “indexing.”

You could even imagine they could keep track of what is in stock at each of these places, enabling “environment-aware” searches. How might this work? Users in the vicinity whose devices have picked up the beacons, and read their (standardised) list of services could then record this into Google’s index. It sounds like a strange paradigm, but it is exactly how Google’s app indexing methodology works.

2. Added context

Context is becoming increasingly important for all searches that we do. Beyond your search phrase, Google looks at what device you are on, where you are, what you have recently searched for, who you know, and quite a bit more. It makes our search experiences significantly better, and we should expect that they are going to continue to try to refine their understanding of our context ever more.

It is not hard to see that knowing what beacons people are near adds various facets of context. It can help refine location even further, giving indications to the environment you are in, what you are doing, and even what you might be looking for.

3. Passive searches

I’ve spoken a little bit about passive searches before; this is when Google runs searches for you based entirely off your context with no explicit search. Google Now is currently the embodiment of this technology, but I expect we’ll see it become more and more prevalent.

I believe we could even see a more explicit element of this become a reality, with the rise of conversational search. Conversational search is already at a point where search queries can have persistent aspects (“How old is Tom Cruise?”, then “How tall is he?” – the pronoun ‘he’ refers back to the previous search). I expect we’ll see this expand more into multi-stage searches (“Sushi restaurant within 10 minutes of here.”, and then “Just those with 4 stars or more”).

So, I could easily imagine that these elements combine with “environment-aware” searches (whether they are powered in the fashion I described above or not) to enable multi-stage searches that result in explicit passive searches. For example, “nearby shops with iPhone 6 cables in stock,” to which Google fails to find a suitable result (“there are no suitable shops nearby”) and you might then answer “let me know when there is.”

Wrap up

It seems certain that embedded smart devices of some sort are coming, and this project from Google looks like a strong candidate to establish a standard. With the rise of smart devices, whichever form they end up taking and standard they end up using, it is certain this is going to impact the way people interact with their environments and use their smart phones and wearables.

It is hard to believe this won’t also have a heavy impact upon marketing and business. What remains less clear is the scale of impact that this will have on SEO. Hopefully this post has got your brain going a bit so that, as an industry, we can start to prepare ourselves for the rise of smart devices.

I’d love to hear in the comments what other ideas people have and how you guys think this stuff might affect us.


6 Things I Wish I Knew Before Using Optimizely

Posted by tallen1985

Diving into Conversion Rate Optimization (CRO) for the first time can be a challenge. You are faced with a whole armoury of new tools, each containing a huge variety of features. Optimizely is one of those tools you will quickly encounter and through this post I’m going to cover 6 features I wish I had known from day one that have helped improve test performance/debugging and the ability to track results accurately.

1. You don’t have to use the editor

The editor within Optimizely is a useful tool if you don’t have much experience working with code. The editor should be used for making simple visual changes, such as changing an image, adjusting copy or making minor layout changes.

If you are looking to make changes that change the behaviour of the page rather than just straightforward visual changes, then the editor can become troublesome. In this case you should use the “Edit Code” feature at the foot of the editor.

For any large-scale changes to the site, such as completely redesigning the page, Optimizely should be used for traffic allocation and not editing pages. To do this:

1. Build a new version of the page outside of Optimizely

2. Upload the variation page to your site.
Important: Ensure that the variation page is noindexed.

We now have two variations of our page:

www.myhomepage.com & www.myhomepage.com/variation1

3. Select the variation drop down menu and click Redirect to a new page

4. Enter the variation URL, apply the settings and save the experiment. You can now use Optimizely as an A/B test management tool to allocate traffic, exclude traffic/device types, and gather further test data.

If you do use the editor be aware of excess code

One problem to be aware of here is that each time you move or change an element Optimizely adds a new line of code. The variation code below actually repositions the h2 title four times.

Instead, when using the editor, we should make sure that we strip out any excess code. If you move and save a page element multiple times, open the <edit code> tab at the foot of the page and delete any excess code. For example, the following positions my h2 title in exactly the same position as before with three fewer lines of code. Over the course of multiple changes, this excess code can result in an increase of load time for Optimizely.
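As a rough illustration of what that looks like, the jQuery below approximates the kind of variation code the editor generates after an element has been dragged around a few times, followed by the consolidated equivalent. The selector and pixel offsets are invented for the example; your own variation code will differ.

```javascript
// What the editor tends to leave behind after moving the same element several times
// (selector and offsets are invented for illustration):
$("h2.hero-title").css({"position": "relative", "top": "10px"});
$("h2.hero-title").css({"position": "relative", "top": "25px"});
$("h2.hero-title").css({"position": "relative", "left": "5px"});
$("h2.hero-title").css({"position": "relative", "top": "40px", "left": "15px"});

// The same final position expressed once, with three fewer lines of code:
$("h2.hero-title").css({"position": "relative", "top": "40px", "left": "15px"});
```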


2. Enabling analytics tracking

Turning on analytics tracking seems obvious, right? In fact, why would we even need to turn it on in the first place? Surely it would be defaulted to on.

Optimizely currently sets analytics tracking to the default option of off. As a result, if you don’t manually change the setting, nothing will be reported into your analytics platform of choice.

To turn on analytics tracking, simply open the settings in the top right corner from within the editor mode and select Analytics Integration.

Turn on the relevant analytics tracking. If you are using Google Analytics, then at this point you should assign a vacant custom variable slot (for Classic Analytics) or a vacant custom dimension (Universal Analytics) to the experiment.

Once the test is live, wait for a while (up to 24 hours), then check to be sure the data is reporting correctly within the custom segments.


3. Test your variations in a live environment

Before you set your test live, it’s important that you test the new variation to ensure everything works as expected. To do this we need to see the test in a live environment while ensuring no customers see the test versions yet. I’ve suggested a couple of ways to do this below:

Query parameter targeting

Query parameter targeting is available on all accounts and is our preferred method for sharing live versions with clients, mainly because once set up, it is as simple as sharing a URL.

1. Click the audiences icon at the top of the page 

2. Select create a new audience

3. Drag Query Parameters from the possible conditions and enter parameters of your choice.

4. Click Apply and save the experiment.

5. To view the experiment visit the test URL with query parameters added. In the above example the URL would be:
http://www.distilled.net?test=variation

Cookie targeting

1. Open the browser and create a bookmark on any page

2. Edit the bookmark and change both properties to:

a) Name: Set A Test Cookie

b) URL: The following Javascript code:

javascript:(function(){ var hostname = window.location.hostname; var parts = hostname.split("."); var publicSuffix = hostname; var last = parts[parts.length - 1]; var expireDate = new Date(); expireDate.setDate(expireDate.getDate() + 7); var TOP_LEVEL_DOMAINS = ["com", "local", "net", "org", "xxx", "edu", "es", "gov", "biz", "info", "fr", "gr", "nl", "ca", "de", "kr", "it", "me", "ly", "tv", "mx", "cn", "jp", "il", "in", "iq"]; var SPECIAL_DOMAINS = ["jp", "uk", "au"]; if(parts.length > 2 && SPECIAL_DOMAINS.indexOf(last) != -1){ publicSuffix = parts[parts.length - 3] + "."+ parts[parts.length - 2] + "."+ last} else if(parts.length > 1 && TOP_LEVEL_DOMAINS.indexOf(last) != -1) {publicSuffix = parts[parts.length - 2] + "."+ last} document.cookie = "optly_"+publicSuffix.split(".")[0]+"_test=true; domain=."+publicSuffix+"; path=/; expires="+expireDate.toGMTString()+";"; })();

You should end up with a bookmark named “Set A Test Cookie” whose URL is the Javascript above.

3. Open the page where you want to place the cookie and click the bookmark

4. The cookie will now be set on the domain you are browsing and will look something like: ‘optly_YOURDOMAINNAME_test=true’
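For the curious, here is the same bookmarklet logic written out readably with comments. It is functionally equivalent; keep using the single-line javascript: version above as the actual bookmark URL.

```javascript
// Expanded, commented version of the cookie-setting bookmarklet above (same behaviour).
(function () {
  var hostname = window.location.hostname;
  var parts = hostname.split(".");
  var publicSuffix = hostname;
  var last = parts[parts.length - 1];

  // The cookie expires one week from now.
  var expireDate = new Date();
  expireDate.setDate(expireDate.getDate() + 7);

  // Crude public-suffix detection: ordinary TLDs vs. country TLDs that use
  // second-level registrations (e.g. example.co.uk).
  var TOP_LEVEL_DOMAINS = ["com", "local", "net", "org", "xxx", "edu", "es", "gov",
    "biz", "info", "fr", "gr", "nl", "ca", "de", "kr", "it", "me", "ly", "tv",
    "mx", "cn", "jp", "il", "in", "iq"];
  var SPECIAL_DOMAINS = ["jp", "uk", "au"];

  if (parts.length > 2 && SPECIAL_DOMAINS.indexOf(last) != -1) {
    publicSuffix = parts[parts.length - 3] + "." + parts[parts.length - 2] + "." + last;
  } else if (parts.length > 1 && TOP_LEVEL_DOMAINS.indexOf(last) != -1) {
    publicSuffix = parts[parts.length - 2] + "." + last;
  }

  // Sets e.g. optly_YOURDOMAINNAME_test=true across the whole domain.
  document.cookie = "optly_" + publicSuffix.split(".")[0] + "_test=true; domain=." +
    publicSuffix + "; path=/; expires=" + expireDate.toGMTString() + ";";
})();
```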

Next we need to target our experiment to only allow visitors who have the cookie set to see test variations.

5. Click the audiences icon at the top of the page

6. Select create a new audience

7. Drag Cookie into the Conditions and change the name to optly_YOURDOMAINNAME_test=true

8. Click Apply and save the experiment.

Source:
https://help.optimizely.com/hc/en-us/articles/200293784-Setting-a-test-cookie-for-your-site

IP address targeting (only available on Enterprise accounts)

Using IP address targeting is useful when you are looking to test variations in house and on a variety of different devices and browsers.

1. Click the audiences icon at the top of the page

2. Select create a new audience

3. Drag IP Address from the possible conditions and enter the IP address being used. (Not sure of your IP address? Then head to http://whatismyipaddress.com/.)

4. Click Apply and Save the experiment.


4. Force variations using parameters when debugging pages

There will be times, particularly when testing new variations, when you will need to view a specific variation. Obviously this can be an issue if your browser has already been bucketed into an alternative variation. Optimizely overcomes this by allowing you to force the variation you wish to view, simply using query parameters.

The query parameter is structured in the following way: optimizely_xEXPERIMENTID=VARIATIONINDEX

1. The EXPERIMENTID can be found in the browser URL

2. VARIATIONINDEX is the variation you want to run: 0 is the original, 1 is variation #1, 2 is variation #2, etc.

3. Using the above example to force a variation, we would use the following URL structure to display variation 1 of our experiment:
http://www.yourwebsite.com/?optimizely_x1845540742=1

Source:
https://help.optimizely.com/hc/en-us/articles/200107480-Forcing-a-specific-variation-to-run-and-other-advanced-URL-parameters


5. Don’t change the traffic allocation sliders

Once a test is live it is important not to change the amount of traffic allocated to each variation. Doing so can massively affect test results, as one version would potentially begin to receive more return visitors, who in turn have a much higher chance of converting.

My colleague Tom Capper discussed the do’s and don’ts of statistical significance in more detail earlier this year, where he explained:

“At the start of your test, you decide to play it safe and set your traffic allocation to 90/10. After a time, it seems the variation is non-disastrous, and you decide to move the slider to 50/50. But return visitors are still always assigned their original group, so now you have a situation where the original version has a larger proportion of return visitors, who are far more likely to convert.”

To summarize, if you do need to adjust the amount of traffic allocated to each test variation, you should look to restart the test to have complete confidence that the data you receive is accurate.


6. Use segmentation to generate better analysis

Okay, I understand this one isn’t strictly about Optimizely, but it is certainly worth keeping in mind, particularly early on in the CRO process when producing hypotheses around device type.

Conversion rates can vary greatly, particularly when we start segmenting data by locations, browsers, medium, return visits vs new visits, just to name a few. However, by using segmentation we can unearth opportunities that we may have previously overlooked, allowing us to generate new hypotheses for future experiments.


Example

You have been running a test for a month and unfortunately the results are inconclusive. The test version of the page didn’t perform any better or worse than the original. Overall the test results look like the following:


  Page Version    Visitors    Transactions    Conversion Rate
  Original        41781       1196            2.86%
  Variation       42355       1225            2.89%

In this case the test variation overall has only performed 1% better than the original, with a significance of 60%. With these results this test variation certainly wouldn’t be getting rolled out any time soon.
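For reference, a figure in that ballpark can be reproduced from the table above with a simple two-proportion z-test. This is a back-of-the-envelope sketch, not necessarily the model your testing tool uses, but it is a handy sanity check on reported significance.

```javascript
// Approximate probability that the variation beats the original,
// using a two-proportion z-test on the overall results above.
function chanceToBeatOriginal(convA, visitorsA, convB, visitorsB) {
  var pA = convA / visitorsA;
  var pB = convB / visitorsB;
  var pooled = (convA + convB) / (visitorsA + visitorsB);
  var se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  var z = (pB - pA) / se;
  return normalCdf(z);
}

// Standard normal CDF via the Abramowitz-Stegun erf approximation.
function normalCdf(z) {
  var x = Math.abs(z) / Math.SQRT2;
  var t = 1 / (1 + 0.3275911 * x);
  var erf = 1 - (((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t) * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

console.log(chanceToBeatOriginal(1196, 41781, 1225, 42355)); // ~0.60, i.e. roughly 60%
```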

However, when these results are segmented by device, they tell a very different story.

Drilling into the desktop results, we actually find that the test variation saw a 10% increase in conversions over the original, with 97% significance. Yet those using a tablet were converting way below the original, thus driving down the overall conversion rates we were seeing in the first table.

Ultimately with this data we would be able to generate a new hypothesis of “we believe the variation will increase conversion rate for users on a desktop.” We would then re-run the test for desktop-only users to verify the previous data and the new hypothesis.

Using segmented data here could also potentially help the experiment reach significance at a much faster rate, as explained in this video from Opticon 2014.

Should the new test be successful and achieve significance, we would serve desktop users the new variation, whilst those on mobile and tablet continue to see the original site.

Key takeaways

  • Always turn on Google Analytics tracking (and then double check it is turned on).
  • If you plan to make behavioural changes to a page, use the Javascript editor rather than the drag-and-drop feature.
  • Use IP address targeting for device testing and query parameters to share a live test with clients.
  • If you need to change the traffic allocation to test variations you should restart the test.
  • Be aware that test performance can vary greatly based on device.

What problems and solutions have you come across when creating CRO experiments with Optimizely? What pieces of information do you wish you had known 6 months ago?



Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird

Posted by MarieHaynes

If you’re reading the Moz blog, then you probably have a decent understanding of Google and its algorithm changes. However, there is probably a good percentage of the Moz audience that is still confused about the effects that Panda, Penguin, and Hummingbird can have on your site. I did write a post last year about the main differences between Penguin and a Manual Unnatural Links Penalty, and if you haven’t read that, it’ll give you a good primer.

The point of this article is to explain very simply what each of these algorithms are meant to do. It is hopefully a good reference that you can point your clients to if you want to explain an algorithm change and not overwhelm them with technical details about 301s, canonicals, crawl errors, and other confusing SEO terminologies.

What is an algorithm change?

First of all, let’s start by discussing the Google algorithm. It’s immensely complicated and continues to get more complicated as Google tries its best to provide searchers with the information that they need. When search engines were first created, early search marketers were able to easily find ways to make the search engine think that their client’s site was the one that should rank well. In some cases it was as simple as putting in some code on the website called a meta keywords tag. The meta keywords tag would tell search engines what the page was about.

As Google evolved, its engineers, who were primarily focused on making the search engine results as relevant to users as possible, continued to work on ways to stop people from cheating, and looked at other ways to show the most relevant pages at the top of their searches. The algorithm now looks at hundreds of different factors. There are some that we know are significant, such as having a good, descriptive title (between the <title></title> tags in the code). And there are many that are the subject of speculation, such as whether or not Google +1s contribute to a site’s rankings.
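To make that concrete, here is a minimal, hypothetical sketch of the relevant tags in a page’s <head>. The title is the kind of descriptive signal mentioned above; the meta keywords tag is included purely as a historical illustration, since Google has long ignored it for ranking purposes:

    <head>
      <!-- A clear, descriptive title is still a meaningful signal -->
      <title>How to Fix a Leaky Faucet: A Step-by-Step Guide</title>
      <!-- Historical example only: Google no longer uses meta keywords for ranking -->
      <meta name="keywords" content="plumbing, leaky faucet, faucet repair">
    </head>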

In the past, the Google algorithm would change very infrequently. If your site was sitting at #1 for a certain keyword, it was guaranteed to stay there until the next update, which might not happen for weeks or months. Then, they would push out another update and things would change. They would stay that way until the next update happened. If you’re interested in reading about how Google used to push updates out of its index, you may find this Webmaster World forum thread from 2002 interesting. (Many thanks to Paul Macnamara for explaining to me how algo changes used to work on Google in the past and pointing me to the Webmaster World thread.)

This all changed with the launch of “Caffeine” in 2010. Since Caffeine launched, the search engine results have been changing several times a day rather than every few weeks. Google makes over 600 changes to its algorithm in a year, and the vast majority of these are not announced. But when Google makes a really big change, they give it a name, usually make an announcement, and everyone in the SEO world goes crazy trying to figure out how to understand the changes and use them to their advantage.

Three of the biggest changes that have happened in the last few years are the Panda algorithm, the Penguin algorithm and Hummingbird.

What is the Panda algorithm?

Panda first launched on February 23, 2011. It was a big deal. The purpose of Panda was to try to show high-quality sites higher in search results and demote sites that may be of lower quality. This algorithm change was unnamed when it first came out, and many of us called it the “Farmer” update as it seemed to affect content farms. (Content farms are sites that aggregate information from many sources, often stealing that information from other sites, in order to create large numbers of pages with the sole purpose of ranking well in Google for many different keywords.) However, it affected a very large number of sites. The algorithm change was eventually officially named after one of its creators, Navneet Panda.

When Panda first happened, a lot of SEOs in forums thought that this algorithm was targeting sites with unnatural backlink patterns. However, it turns out that links are most likely not a part of the Panda algorithm. It is all about on-site quality.

In most cases, sites that were affected by Panda were hit quite hard, but I have also seen sites take only a slight loss on the date of a Panda update. Panda tends to be a site-wide issue, which means that it doesn’t just demote certain pages of your site in the search engine results; instead, Google considers the entire site to be of lower quality. In some cases, though, Panda can affect just a section of a site, such as a news blog or one particular subdomain.

Whenever a Google employee is asked about what needs to be done to recover from Panda, they refer to a blog post by Google employee Amit Singhal that gives a checklist you can use to determine whether your site really is high quality or not. Here is the list:

  • Would you trust the information presented in this article?
  • Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  • Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
  • Would you be comfortable giving your credit card information to this site?
  • Does this article have spelling, stylistic, or factual errors?
  • Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?
  • Does the article provide original content or information, original reporting, original research, or original analysis?
  • Does the page provide substantial value when compared to other pages in search results?
  • How much quality control is done on content?
  • Does the article describe both sides of a story?
  • Is the site a recognized authority on its topic?
  • Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?
  • Was the article edited well, or does it appear sloppy or hastily produced?
  • For a health related query, would you trust information from this site?
  • Would you recognize this site as an authoritative source when mentioned by name?
  • Does this article provide a complete or comprehensive description of the topic?
  • Does this article contain insightful analysis or interesting information that is beyond obvious?
  • Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
  • Does this article have an excessive amount of ads that distract from or interfere with the main content?
  • Would you expect to see this article in a printed magazine, encyclopedia or book?
  • Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?
  • Are the pages produced with great care and attention to detail vs. less attention to detail?
  • Would users complain when they see pages from this site?

Phew! That list is pretty overwhelming! These questions do not necessarily mean that Google tries to algorithmically figure out whether your articles are interesting or whether you have told both sides of a story. Rather, the questions are there because all of these factors can contribute to how real-life users would rate the quality of your site. No one really knows all of the factors that Google uses in determining the quality of your site through the eyes of Panda. Ultimately, though, the focus is on creating the best site possible for your users. It is also important that only your best stuff is given to Google to have in its index. There are a few factors that are widely accepted as important things to look at with regard to Panda:

Thin content

A “thin” page is a page that adds little or no value to someone who is reading it. It doesn’t necessarily mean that a page has to be a certain number of words, but quite often, pages with very few words are not super-helpful. If you have a large number of pages on your site that contain just one or two sentences and those pages are all included in the Google index, then the Panda algorithm may determine that the majority of your indexed pages are of low quality.

Having the odd thin page is not going to cause you to run into Panda problems. But if a big enough portion of your site contains pages that are not helpful to users, that can drag down Google’s assessment of the entire site.
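Because the concern is what Google has in its index rather than what merely exists on your site, one common way to handle thin pages that you still want to keep for users is to ask search engines not to index them. A minimal, hypothetical example using the standard meta robots tag:

    <!-- Placed in the <head> of a thin page you'd rather keep out of Google's index -->
    <meta name="robots" content="noindex">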

Duplicate content

There are several ways that duplicate content can cause your site to be viewed as a low-quality site by the Panda algorithm. The first is when a site has a large amount of content that is copied from other sources on the web. Let’s say that you have a blog on your site and you populate that blog with articles that are taken from other sources. Google is pretty good at figuring out that you are not the creator of this content. If the algorithm can see that a large portion of your site is made up of content that exists on other sites then this can cause Panda to look at you unfavorably.

You can also run into problems with duplicated content on your own site. One example would be a site that has a large number of products for sale. Perhaps each product has a separate page for each color variation and size. But all of these pages are essentially the same. If one product comes in 20 different colors and each of those comes in 6 different sizes, then that means you have 120 pages for the same product, all of which are almost identical. Now, imagine that you sell 4,000 products. This means that you’ve got almost half a million pages in the Google index when really 4,000 pages would suffice. In this type of situation, the fix for this problem is to use something called a canonical tag. Moz has got a really good guide on using canonical tags here, and Dr. Pete has also written this great article on canonical tag use.
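For the product-variant example above, here is a hedged sketch of what the fix looks like (the URLs are hypothetical): each color/size variant page carries a canonical tag in its <head> pointing back to the main product URL, telling Google which version to treat as the original.

    <!-- On a variant page such as https://www.example.com/widget?color=red&size=large -->
    <link rel="canonical" href="https://www.example.com/widget">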

Low-quality content

When I write an article and publish it on one of my websites, the only type of information that I want to present to Google is information that is the absolute best of its kind. In the past, many SEOs have given advice to site owners saying that it was important to blog every day and make sure that you are always adding content for Google to index. But if what you are producing is not high-quality content, then you could be doing more harm than good. A lot of Amit Singhal’s questions listed above ask whether the content on your site is valuable to readers. Let’s say that I have an SEO blog, and every day I take a short blurb from each of the interesting SEO articles that I have read online and publish it as a blog post on my site. Is Google going to want to show searchers my summary of these articles, or would they rather show them the actual articles? Of course my summary is not going to be as valuable as the real thing! Now, let’s say that I have done this every day for 4 years. My site now has well over 1,000 pages that contain information that is not unique and not as valuable as other sites on the same topics.

Here is another example. Let’s say that I am a plumber. I’ve been told that I should blog regularly, so several times a week I write a 2-3 paragraph article on things like, “How to fix a leaky faucet” or “How to unclog a toilet.” But, I’m busy and don’t have much time to put into my website so each article I’ve written contains keywords in the title and a few times in the content, but the content is not in depth and is not that helpful to readers. If the majority of the pages on my site contain information that no one is engaging with, then this can be a sign of low quality in the eyes of the Panda algorithm.

There are other factors that probably play a role in the Panda algorithm. Glenn Gabe recently wrote an excellent article on his evaluation of sites affected by the most recent Panda update. His bullet-point list of things to improve upon when affected by Panda is extremely thorough.

How to recover from a Panda hit

Google refreshes the Panda algorithm approximately monthly. They used to announce whenever they were refreshing the algorithm, but now they only do this if there is a really big change to the Panda algorithm. When the Panda algorithm refreshes, Google takes a new look at each site on the web and determines whether or not it looks like a quality site against the criteria that the Panda algorithm evaluates. If your site was adversely affected by Panda and you have made changes such as removing thin and duplicate content, then, when Panda refreshes, you should see that things improve. However, for some sites it can take a couple of Panda refreshes to see the full extent of the improvements. This is because it can sometimes take several months for Google to revisit all of your pages and recognize the changes that you have made.

Every now and then, instead of just refreshing the algorithm, Google does what they call an update. When an update happens, this means that Google has changed the criteria that they use to determine what is and isn’t considered high quality. On May 20, 2014, Google did a major update which they called Panda 4.0. This caused a lot of sites to see significant changes with regard to Panda.

Not all Panda recoveries are dramatic. But if you have been affected by Panda and you work hard to make changes to your site, you really should see some improvement.

What is the Penguin algorithm?


The Penguin algorithm initially rolled out on April 24, 2012. The goal of Penguin is to reduce Google’s trust in sites that have cheated by creating unnatural backlinks in order to gain an advantage in the Google results. While the primary focus of Penguin is on unnatural links, other factors can affect a site in the eyes of Penguin as well. Links, though, are known to be by far the most important thing to look at.

Why are links important?

A link is like a vote for your site. If a well respected site links to your site, then this is a recommendation for your site. If a small, unknown site links to you then this vote is not going to count for as much as a vote from an authoritative site. Still, if you can get a large number of these small votes, they really can make a difference. This is why, in the past, SEOs would try to get as many links as they could from any possible source.

Another thing that is important in the Google algorithms is anchor text. Anchor text is the text that is underlined in a link. So, in this link to a great SEO blog, the anchor text would be “SEO blog.” If Moz.com gets a number of sites linking to them using the anchor text “SEO blog,” that is a hint to Google that people searching for “SEO blog” probably want to see sites like Moz in their search results.
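In HTML terms, the anchor text is simply the visible text between the opening and closing <a> tags. A minimal example in the spirit of the one above (the URL here points at the Moz blog):

    <!-- "SEO blog" is the anchor text Google associates with the linked page -->
    <a href="https://moz.com/blog">SEO blog</a>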

It’s not hard to see how people could manipulate this part of the algorithm. Let’s say that I am doing SEO for a landscaping company in Orlando. In the past, one of the ways that I could cheat the algorithm into thinking that my company should rank highly would be to create a bunch of self-made links and use anchor text in those links containing phrases like Orlando Landscaping Company, Landscapers in Orlando, and Orlando Landscaping. While an authoritative link from a well-respected site is good, what people discovered is that creating a large number of links from low-quality sites was quite effective. As such, what SEOs would do is create links from easy-to-get places like directory listings, self-made articles, and links in comments and forum posts.

While we don’t know exactly what factors the Penguin algorithm looks at, what we do know is that this type of low quality, self made link is what the algorithm is trying to detect. In my mind, the Penguin algorithm is sort of like Google putting a “trust factor” on your links. I used to tell people that Penguin could affect a site on a page or even a keyword level, but Google employee John Mueller has said several times now that Penguin is a sitewide algorithm. This means that if the Penguin algorithm determines that a large number of the links to your site are untrustworthy, then this reduces Google’s trust in your entire site. As such, the whole site will see a reduction in rankings.  

While Penguin affected a lot of sites drastically, I have seen many sites that saw only a small reduction in rankings. The difference, of course, depends on the amount of link manipulation that has been done.

How to recover from a Penguin hit

Penguin is a filter just like Panda. What that means is that the algorithm is re-run periodically and sites are re-evaluated with each re-run. At this point it is not run very often at all. The last update was October 4, 2013, which means that we have currently been waiting eight months for a new Penguin update. In order to recover from Penguin, you need to identify the unnatural links pointing to your site and either remove them or, if you can’t remove them, ask Google to no longer count them by using the disavow tool. Then, the next time that Penguin refreshes or updates, if you have done a good enough job of cleaning up your unnatural links, your site will once again regain trust in Google’s eyes. In some cases, it can take a couple of refreshes for a site to completely escape Penguin, because it can take up to six months for a site’s disavow file to be completely processed.
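For reference, the disavow file that the tool accepts is just a plain text file uploaded through Google’s Webmaster Tools (now Search Console), with one rule per line: a full URL to disavow a single link, a domain: rule to disavow every link from a domain, and lines beginning with # treated as comments. A minimal sketch with hypothetical domains:

    # Directory links we could not get removed despite outreach
    domain:spammy-directory.example
    # A single paid link on an otherwise decent site
    http://some-blog.example/sponsored-post-with-paid-link.html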

If you are not certain how to identify which links to your site are unnatural, here are some good resources for you:

The disavow tool is something that you should probably only use if you really understand how it works. It is possible to do more harm than good to your site if you disavow the wrong links. Here is some information on using the disavow tool:

It’s important to note that when sites “recover” from Penguin, they often don’t skyrocket up to top rankings once again, as those previously high rankings were probably based on the power of links that are now considered unnatural. Here is some information on what to expect when you have recovered from a link-based penalty or algorithmic issue.

Also, the Penguin algorithm is not the same thing as a manual unnatural links penalty. You do not need to file a reconsideration request to recover from Penguin. You also do not need to document the work that you have done in order to get links removed, as no Google employee will be manually reviewing your work. As mentioned previously, here is more information on the difference between the Penguin algorithm and a manual unnatural links penalty.

What is Hummingbird?

Hummingbird is a completely different animal than Penguin or Panda. (Yeah, I know…that was a bad pun.) I will commonly get people emailing me telling me that Hummingbird destroyed their rankings. I would say that in almost every case I have evaluated, this was not true. Google made their announcement about Hummingbird on September 26, 2013. However, at that time, they announced that Hummingbird had already been live for about a month. If the Hummingbird algorithm was truly responsible for catastrophic ranking fluctuations, then we really should have seen an outcry from the SEO world about something drastic happening in August of 2013, and this did not happen. There did seem to be some type of fluctuation around August 21, as reported here on Search Engine Roundtable, but not many sites reported huge ranking changes on that day.

If you think that Hummingbird affected you, it’s not a bad idea to look at your traffic to see if you noticed a drop on October 4, 2013, which was actually a refresh of the Penguin algorithm. I believe that a lot of people who thought they were affected by Hummingbird were actually affected by Penguin, which happened just a week after Google made their announcement about Hummingbird.

There are some excellent articles on Hummingbird here and here. Hummingbird was a complete overhaul of the entire Google algorithm. As Danny Sullivan put it, if you consider the Google algorithm as an engine, Panda and Penguin are algorithm changes that were like putting a new part in the engine such as a filter or a fuel pump. But, Hummingbird wasn’t just a new part; it was a completely new engine. That new engine still makes use of many of the old parts (such as Panda and Penguin) but a good amount of the engine is completely original.

The goal of the Hummingbird algorithm is for Google to better understand a user’s query. Bill Slawski, who writes about Google patents, has a great example of this in his post here. He explains that when someone searches for “What is the best place to find and eat Chicago deep dish style pizza?”, Hummingbird is able to discern that by “place” the user would likely be interested in results that show “restaurants.” There is speculation that these changes were necessary in order for Google’s voice search to be more effective. When we’re typing a search query, we might type “best Seattle SEO company,” but when we’re speaking a query (i.e. via Google Glass or Google Now) we’re more likely to say something like, “Which firm in Seattle offers the best SEO services?” The point of Hummingbird is to better understand what users mean when they have queries like this.

So how do I recover or improve in the eyes of Hummingbird?

If you read the posts referenced above, the answer to this question is essentially to create content that answers users’ queries rather than just trying to rank for a particular keyword. But really, this is what you should already be doing!

It appears that Google’s goal with all of these algorithm changes (Panda, Penguin and Hummingbird) is to encourage webmasters to publish content that is the best of its kind. Google’s goal is to deliver answers to people who are searching. If you can produce content that answers people’s questions, then you’re on the right track.

I know that that is a really vague answer when it comes to “recovering” from Hummingbird. Hummingbird really is different than Panda and Penguin. When a site has been demoted by the Panda or Penguin algorithm, it’s because Google has lost some trust in the site’s quality, whether it is on-site quality or the legitimacy of its backlinks. If you fix those quality issues you can regain the algorithm’s trust and subsequently see improvements. But, if your site seems to be doing poorly since the launch of Hummingbird, then there really isn’t a way to recover those keyword rankings that you once held. You can, however, get new traffic by finding ways to be more thorough and complete in what your website offers.

Do you have more questions?

My goal in writing this article was to have a resource to point people to when they had basic questions about Panda, Penguin and Hummingbird. Recently, when I published my penalty newsletter, I had a small business owner comment that it was very interesting but that most of it went over their head. I realized that many people outside of the SEO world are greatly affected by these algorithm changes, but don’t have much information on how or why those changes have affected their websites.

Do you have more questions about Panda, Penguin or Hummingbird? If so, I’d be happy to address them in the comments. I also would love for those of you who are experienced with dealing with websites affected by these issues to comment as well.


Reblogged 4 years ago from feedproxy.google.com