Distance from Perfect

Posted by wrttnwrd

In spite of all the advice, the strategic discussions and the conference talks, we Internet marketers are still algorithmic thinkers. That’s obvious when you think of SEO.

Even when we talk about content, we’re algorithmic thinkers. Ask yourself: How many times has a client asked you, “How much content do we need?” How often do you still hear “How unique does this page need to be?”

That’s 100% algorithmic thinking: Produce a certain amount of content, move up a certain number of spaces.

But you and I know it’s complete bullshit.

I’m not suggesting you ignore the algorithm. You should definitely chase it. Understanding a little bit about what goes on in Google’s pointy little head helps. But it’s not enough.

A tale of SEO woe that makes you go “whoa”

I have this friend.

He ranked #10 for “flibbergibbet.” He wanted to rank #1.

He compared his site to the #1 site and realized the #1 site had five hundred blog posts.

“That site has five hundred blog posts,” he said, “I must have more.”

So he hired a few writers and cranked out five thousand blog posts that melted Microsoft Word’s grammar check. He didn’t move up in the rankings. I’m shocked.

“That guy’s spamming,” he decided, “I’ll just report him to Google and hope for the best.”

What happened? Why didn’t adding five thousand blog posts work?

It’s pretty obvious: My, uh, friend added nothing but crap content to a site that was already outranked. Bulk is no longer a ranking tactic. Google’s very aware of that tactic. Lots of smart engineers have put time into updates like Panda to compensate.

He started like this:

And ended up like this:
(Chart: more posts, no rankings.)

Alright, yeah, I was Mr. Flood The Site With Content, way back in 2003. Don’t judge me, whippersnappers.

Reality’s never that obvious. You’re scratching and clawing to move up two spots, you’ve got an overtasked IT team pushing back on changes, and you’ve got a boss who needs to know the implications of every recommendation.

Why fix duplication if rel=canonical can address it? Fixing duplication will take more time and cost more money. It’s easier to paste in one line of code. You and I know it’s better to fix the duplication. But it’s a hard sell.

Why deal with 302 versus 404 response codes and home page redirection? The basic user experience remains the same. Again, we just know that a server should return one home page without any redirects and that it should send a ‘not found’ 404 response if a page is missing. If it’s going to take 3 developer hours to reconfigure the server, though, how do we justify it? There’s no flashing sign reading “Your site has a problem!”
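If you want to see the problem for yourself, the check is simple enough to script. Here’s a minimal Python sketch (the `audit_responses` helper is hypothetical; in practice you’d feed it the redirect chain your HTTP client or crawler reports):

```python
def audit_responses(home_chain, missing_status):
    """Flag home page and error-handling issues.

    home_chain: HTTP status codes seen while fetching the home page,
        redirect hops first, final response last (e.g. [301, 200]).
    missing_status: status code returned for a URL that doesn't exist.
    """
    findings = []
    # A clean setup serves the home page directly: one 200, no hops.
    if len(home_chain) > 1:
        findings.append("home page redirects: " + " -> ".join(map(str, home_chain)))
    if home_chain[-1] != 200:
        findings.append(f"home page answered {home_chain[-1]}, not 200")
    # A missing page should be an honest 404, not a soft 200 or a 302.
    if missing_status != 404:
        findings.append(f"missing page answered {missing_status}, not 404")
    return findings

# A site that 302s its home page and soft-404s missing pages:
print(audit_responses([302, 200], 200))
```

Run against a real site, `home_chain` would come from something like the `history` list that Python’s requests library attaches to a response.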

Why change this thing and not that thing?

At the same time, our boss/client sees that the site above theirs has five hundred blog posts and thousands of links from sites selling correspondence MBAs. So they want five thousand blog posts and cheap links as quickly as possible.

Cue crazy music.

SEO lacks clarity

SEO is, in some ways, for the insane. It’s an absurd collection of technical tweaks, content thinking, link building and other little tactics that may or may not work. A novice gets exposed to one piece of crappy information after another, with an occasional bit of useful stuff mixed in. They create sites that repel search engines and piss off users. They get more awful advice. The cycle repeats. Every time it does, best practices get more muddled.

SEO lacks clarity. We can’t easily weigh the value of one change or tactic over another. But we can look at our changes and tactics in context. When we examine the potential of several changes or tactics before we flip the switch, we get a closer balance between algorithm-thinking and actual strategy.

Distance from perfect brings clarity to tactics and strategy

At some point you have to turn that knowledge into practice. You have to take action based on recommendations, your knowledge of SEO, and business considerations.

That’s hard when we can’t even agree on subdomains vs. subfolders.

I know subfolders work better. Sorry, couldn’t resist. Let the flaming comments commence.

To get clarity, take a deep breath and ask yourself:

“All other things being equal, will this change, tactic, or strategy move my site closer to perfect than my competitors?”

Breaking it down:

“Change, tactic, or strategy”

A change takes an existing component or policy and makes it something else. Replatforming is a massive change. Adding a new page is a smaller one. Adding ALT attributes to your images is another example. Changing the way your shopping cart works is yet another.

A tactic is a specific, executable practice. In SEO, that might be fixing broken links, optimizing ALT attributes, optimizing title tags or producing a specific piece of content.

A strategy is a broader decision that’ll cause change or drive tactics. A long-term content policy is the easiest example. Shifting away from asynchronous content and moving to server-generated content is another example.

“Perfect”

No one knows exactly what Google considers “perfect,” and “perfect” can’t really exist, but you can bet a perfect web page/site would have all of the following:

  1. Completely visible content that’s perfectly relevant to the audience and query
  2. A flawless user experience
  3. Instant load time
  4. Zero duplicate content
  5. Every page easily indexed and classified
  6. No mistakes, broken links, redirects or anything else generally yucky
  7. Zero reported problems or suggestions in each search engine’s webmaster tools, sorry, “Search Consoles”
  8. Complete authority through immaculate, organically-generated links

These 8 categories (and any of the other bazillion that probably exist) give you a way to break down “perfect” and help you focus on what’s really going to move you forward. These different areas may involve different facets of your organization.

Your IT team can work on load time and creating an error-free front- and back-end. Link building requires the time and effort of content and outreach teams.

Tactics for relevant, visible content and current best practices in UX are going to be more involved, requiring research and real study of your audience.

What you need and what resources you have are going to impact which tactics are most realistic for you.

But there’s a basic rule: If a website would make Googlebot swoon and present zero obstacles to users, it’s close to perfect.

“All other things being equal”

Assume every competing website is optimized exactly as well as yours.

Now ask: Will this [tactic, change or strategy] move you closer to perfect?

That’s the “all other things being equal” rule. And it’s an incredibly powerful rubric for evaluating potential changes before you act. Pretend you’re in a tie with your competitors. Will this one thing be the tiebreaker? Will it put you ahead? Or will it cause you to fall behind?

“Closer to perfect than my competitors”

Perfect is great, but unattainable. What you really need is to be just a little perfect-er.

Chasing perfect can be dangerous. Perfect is the enemy of the good (I love that quote. Hated Voltaire. But I love that quote). If you wait for the opportunity/resources to reach perfection, you’ll never do anything. And the only way to reduce distance from perfect is to execute.

Instead of aiming for pure perfection, aim for more perfect than your competitors. Beat them feature-by-feature, tactic-by-tactic. Implement strategy that supports long-term superiority.

Don’t slack off. But set priorities and measure your effort. If fixing server response codes will take one hour and fixing duplication will take ten, fix the response codes first. Both move you closer to perfect. Fixing response codes may not move the needle as much, but it’s a lot easier to do. Then move on to fixing duplicates.

Do the 60% that gets you a 90% improvement. Then move on to the next thing and do it again. When you’re done, get to work on that last 40%. Repeat as necessary.

Take advantage of quick wins. That gives you more time to focus on your bigger solutions.

Sites that are “fine” are pretty far from perfect

Google has lots of tweaks, tools and workarounds to help us mitigate sub-optimal sites:

  • Rel=canonical lets us guide Google past duplicate content rather than fix it
  • HTML snapshots let us reveal content that’s delivered asynchronously via JavaScript frameworks
  • We can use rel=next and prev to guide search bots through outrageously long pagination tunnels
  • And we can use rel=nofollow to hide spammy links and banners

Easy, right? All of these solutions may reduce distance from perfect (the search engines don’t guarantee it). But they don’t reduce it as much as fixing the problems.

(Image: just fine does not equal fixed.)

The next time you set up rel=canonical, ask yourself:

“All other things being equal, will using rel=canonical to make up for duplication move my site closer to perfect than my competitors?”

Answer: Not if they’re using rel=canonical, too. You’re both using imperfect solutions that force search engines to crawl every page of your site, duplicates included. If you want to pass them on your way to perfect, you need to fix the duplicate content.
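If you want to know how much of your site leans on the mitigation, a quick audit is scriptable. A minimal sketch using Python’s standard html.parser (the `relies_on_canonical` helper is hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of <link rel="canonical"> if one is present."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

def relies_on_canonical(url, html):
    """True if the page points its canonical somewhere else --
    i.e., the duplication is mitigated, not fixed."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical is not None and finder.canonical != url

page = '<head><link rel="canonical" href="https://example.com/widgets"></head>'
print(relies_on_canonical("https://example.com/widgets?sort=price", page))  # True
```

Pages where this returns True are the ones still forcing search engines to crawl duplicates.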

When you use Angular.js to deliver regular content pages, ask yourself:

“All other things being equal, will using HTML snapshots instead of actual, visible content move my site closer to perfect than my competitors?”

Answer: No. Just no. Not in your wildest, code-addled dreams. If I’m Google, which site will I prefer? The one that renders for me the same way it renders for users? Or the one that has to deliver two separate versions of every page?

When you spill banner ads all over your site, ask yourself…

You get the idea. Nofollow is better than follow, but banner pollution is still pretty dang far from perfect.

Mitigating SEO issues with search engine-specific tools is “fine.” But it’s far, far from perfect. If search engines are forced to choose, they’ll favor the site that just works.

Not just SEO

By the way, distance from perfect absolutely applies to other channels.

I’m focusing on SEO, but think of other Internet marketing disciplines. I hear stuff like “How fast should my site be?” (Faster than it is right now.) Or “I’ve heard you shouldn’t have any content below the fold.” (Maybe in 2001.) Or “I need background video on my home page!” (Why? Do you have a reason?) Or, my favorite: “What’s a good bounce rate?” (Zero is pretty awesome.)

And Internet marketing venues are working to measure distance from perfect. Pay-per-click marketing has the quality score: a codified financial reward for reducing distance from perfect across as many elements of your advertising program as possible.

Social media venues are aggressively building their own forms of graphing, scoring and ranking systems designed to separate the good from the bad.

Really, all marketing includes some measure of distance from perfect. But no channel is more influenced by it than SEO. Instead of arguing one rule at a time, ask yourself and your boss or client: Will this move us closer to perfect?

Hell, you might even please a customer or two.

One last note for all of the SEOs in the crowd. Before you start pointing out edge cases, consider this: We spend our days combing Google for embarrassing rankings issues. Every now and then, we find one, point, and start yelling “SEE! SEE!!!! THE GOOGLES MADE MISTAKES!!!!” Google’s got lots of issues. Screwing up the rankings isn’t one of them.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Has Google Gone Too Far with the Bias Toward Its Own Content?

Posted by ajfried

Since the beginning of SEO time, practitioners have been trying to crack the Google algorithm. Every once in a while, the industry gets a glimpse into how the search giant works and we have opportunity to deconstruct it. We don’t get many of these opportunities, but when we do—assuming we spot them in time—we try to take advantage of them so we can “fix the Internet.”

On Feb. 16, 2015, news started to circulate that NBC would start removing images and references of Brian Williams from its website.

This was it!

A golden opportunity.

This was our chance to learn more about the Knowledge Graph.

Expectation vs. reality

Often it’s difficult to predict what Google is truly going to do. We expect something to happen, but in reality it’s nothing like we imagined.

Expectation

What we expected to see was that Google would change the source of the image. Typically, if you hover over the image in the Knowledge Graph, it reveals the location of the image.

(Animated screenshot: hovering over the Keanu Reeves image in the Knowledge Graph reveals its location.)

This would mean that if the image disappeared from its original source, then the image displayed in the Knowledge Graph would likely change or even disappear entirely.

Reality (February 2015)

The only problem was, there was no official source (this changed, as you will soon see) and identifying where the image was coming from proved extremely challenging. In fact, when you clicked on the image, it took you to an image search result that didn’t even include the image.

Could it be? Had Google started its own database of owned or licensed images and was giving it priority over any other sources?

In order to find the source, we tried taking the image from the Knowledge Graph and running it through “search by image” on images.google.com to find others like it. For the NBC Nightly News image, Google failed to locate a match to the image it was actually using anywhere on the Internet. For other television programs, it was successful. Here is an example of what happened for Morning Joe:

(Screenshot: Morning Joe “search by image” results.)

So we found the potential source. In fact, we found three potential sources. Seemed kind of strange, but this seemed to be the discovery we were looking for.

This looks like Google is using someone else’s content and not referencing it. These images have a source, but Google is choosing not to show it.

Then Google pulled the ol’ switcheroo.

New reality (March 2015)

Now things changed and Google decided to put a source to their images. Unfortunately, I mistakenly assumed that hovering over an image showed the same thing as the file path at the bottom, but I was wrong. The URL you see when you hover over an image in the Knowledge Graph is actually nothing more than the title. The source is different.

(Screenshot: the Morning Joe image source.)

Luckily, I still had two screenshots I took when I first saw this saved on my desktop. Success. One screen capture was from NBC Nightly News, and the other from the news show Morning Joe (see above) showing that the source was changed.

(NBC Nightly News screenshot.)

The source is a Google-owned property: gstatic.com. You can clearly see the difference in the source change. What started as a hypothesis is now a fact. Google is certainly creating a database of images.

If this is the direction Google is moving, then it is creating all kinds of potential risks for brands and individuals. The implication is a loss of control for any brand looking to optimize its Knowledge Graph results. It also seems to pose a conflict of interest for Google, whose mission is to organize the world’s information, not license and prioritize it.

How do we think Google is supposed to work?

Google is an information-retrieval system tasked with sourcing information from across the web and supplying the most relevant results to users’ searches. In recent months, the search giant has taken a more direct approach by answering questions and assumed questions in the Answer Box, some of which come from un-credited sources. Google has clearly demonstrated that it is building a knowledge base of facts that it uses as the basis for its Answer Boxes. When it sources information from that knowledge base, it doesn’t necessarily reference or credit any source.

However, I would argue there is a difference between an un-credited Answer Box and an un-credited image. An un-credited Answer Box provides a fact that is indisputable, part of the public domain, unlikely to change (e.g., what year was Abraham Lincoln shot? How long is the George Washington Bridge?) Answer Boxes that offer more than just a basic fact (or an opinion, instructions, etc.) always credit their sources.

There are four possibilities when it comes to Google referencing content:

  • Option 1: It credits the content because someone else owns the rights to it
  • Option 2: It doesn’t credit the content because it’s part of the public domain, as seen in some Answer Box results
  • Option 3: It doesn’t reference it because it owns or has licensed the content. If you search for “Chicken Pox” or other diseases, Google appears to be using images from licensed medical illustrators. The same goes for song lyrics, which Eric Enge discusses here: Google providing credit for content. This adds to the speculation that Google is giving preference to its own content by displaying it over everything else.
  • Option 4: It doesn’t credit the content, but neither does it necessarily own the rights to the content. This is a very gray area, and is where Google seemed to be back in February. If this were the case, it would imply that Google is “stealing” content—which I find hard to believe, but felt was necessary to include in this post for the sake of completeness.

Is this an isolated incident?

At Five Blocks, whenever we see these anomalies in search results, we try to compare the term in question against others like it. This is a categorization concept we use to bucket individuals or companies into similar groups. When we do this, we uncover some incredible trends that help us determine what a search result “should” look like for a given group. For example, when looking at searches for a group of people or companies in an industry, this grouping gives us a sense of how much social media presence the group has on average or how much media coverage it typically gets.

Upon further investigation of terms similar to NBC Nightly News (other news shows), we noticed the un-credited image scenario appeared to be a trend in February, but now all of the images are being hosted on gstatic.com. When we broadened the categories further to TV shows and movies, the trend persisted. Rather than showing an image in the Knowledge Graph that references the actual source, Google tends to show an image and reference the source from its own database of stored images.

And just to ensure this wasn’t a case of tunnel vision, we researched other categories, including sports teams, actors and video games, in addition to spot-checking other genres.

Unlike terms for specific TV shows and movies, terms in each of these other groups all link to the actual source in the Knowledge Graph.

Immediate implications

It’s easy to ignore this and say “Well, it’s Google. They are always doing something.” However, there are some serious implications to these actions:

  1. The TV shows/movies aren’t receiving their due credit because, from within the Knowledge Graph, there is no actual reference to the show’s official site
  2. The more Google moves toward licensing and then retrieving its own information, the more biased it becomes, preferring its own content over the equivalent—or possibly even superior—content from another source
  3. It feels wrong and misleading to get a Google Image Search result rather than an actual site because:
    • The search doesn’t include the original image
    • Considering how poor Image Search results are normally, it feels like a poor experience
  4. If Google is moving toward licensing as much content as possible, then it could make the Knowledge Graph infinitely more complicated when there is a “mistake” or something unflattering. How could one go about changing what Google shows about them?

Google is objectively becoming subjective

It is clear that Google is attempting to create databases of information, including lyrics stored in Google Play, photos, and, previously, facts in Freebase (which is now Wikidata and not owned by Google).

I am not normally one to point my finger and accuse Google of wrongdoing. But this really strikes me as an odd move, one bordering on a clear bias to direct users to stay within the search engine. The fact is, we trust Google with a heck of a lot of information with our searches. In return, I believe we should expect Google to return an array of relevant information for searchers to decide what they like best. The example cited above seems harmless, but what about determining which is the right religion? Or even who the prettiest girl in the world is?

(Screenshot: search results for religion and beauty queries.)

Questions such as these, which Google is returning credited answers for, could return results that are perceived as facts.

Should we next expect Google to decide who is objectively the best service provider (e.g., pizza chain, painter, or accountant), then feature them in an un-credited answer box? The direction Google is moving right now, it feels like we should be calling into question their objectivity.

But that’s only my (subjective) opinion.


Long Tail CTR Study: The Forgotten Traffic Beyond Top 10 Rankings

Posted by GaryMoyle

Search behavior is fundamentally changing, as users become more savvy and increasingly familiar with search technology. Google’s results have also changed significantly over the last decade, going from a simple page of 10 blue links to a much richer layout, including videos, images, shopping ads and the innovative Knowledge Graph.

We also know there is an increasing number of touchpoints in a customer journey involving different channels and devices. Google’s Zero Moment of Truth (ZMOT) theory, which describes a revolution in the way consumers search for information online, supports this idea and predicts that we can expect the number of times natural search is involved on the path to a conversion to get higher and higher.

Understanding how people interact with Google and other search engines will always be important. Organic click curves show how many clicks you might expect from search engine results and are one way of evaluating the impact of our campaigns, forecasting performance and exploring changing search behavior.

Using search query data from Google UK for a wide range of leading brands, based on millions of impressions and clicks, we can gain insights into how CTR in natural search has evolved beyond the results shown in previous studies by Catalyst, Slingshot and AOL.

Our methodology

The NetBooster study is based entirely on UK top search query data and has been refined by day in order to give us the most accurate sample size possible. This helped us reduce anomalies in the data in order to achieve the most reliable click curve possible, allowing us to extend it way beyond the traditional top 10 results.

We developed a method to extract data day by day to greatly increase the volume of keywords and to help improve the accuracy of the average ranking position. It ensured that the average was taken across the shortest timescale possible, reducing rounding errors.

The NetBooster study included:

  • 65,446,308 (65 million) clicks
  • 311,278,379 (311 million) impressions
  • 1,253,130 (1.2 million) unique search queries
  • 54 unique brands
  • 11 household brands (sites with a total of 1M+ branded keyword impressions)
  • Data covers several verticals including retail, travel and financial

We also looked at organic CTR for mobile, video and image results to better understand how people are discovering content in natural search across multiple devices and channels. 

We’ll explore some of the most important elements in this article.

How does our study compare against others?

Let’s start by looking at the top 10 results. In the graph below we have normalized the results in order to compare our curve, like-for-like, with previous studies from Catalyst and Slingshot. Straight away we can see that there is higher participation beyond the top four positions when compared to other studies. We can also see much higher CTR for positions lower on the pages, which highlights how searchers are becoming more comfortable with mining search results.

A new click curve to rule them all

Our first click curve is the most useful, as it provides the click through rates for generic non-brand search queries across positions 1 to 30. Initially, we can see a significant amount of traffic going to the top three results with position No. 1 receiving 19% of total traffic, 15% at position No. 2 and 11.45% at position No. 3. The interesting thing to note, however, is our curve shows a relatively high CTR for positions typically below the fold. Positions 6-10 all received a higher CTR than shown in previous studies. It also demonstrates that searchers are frequently exploring pages two and three.

(Chart: CTR for the top 30 organic positions.)

When we look beyond the top 10, we can see that CTR is also higher than anticipated, with positions 11-20 accounting for 17% of total traffic. Positions 21-30 also show higher than anticipated results, with over 5% of total traffic coming from page three. This gives us a better understanding of the potential uplift in visits when improving rankings from positions 11-30.

This highlights that searchers are frequently going beyond the top 10 to find the exact result they want. The prominence of paid advertising, shopping ads, Knowledge Graph and the OneBox may also be pushing users below the fold more often as users attempt to find better qualified results. It may also indicate growing dissatisfaction with Google results, although this is a little harder to quantify.

Of course, it’s important we don’t just rely on one single click curve. Not all searches are equal. What about the influence of brand, mobile and long-tail searches?

Brand bias has a significant influence on CTR

One thing we particularly wanted to explore was how the size of your brand influences the curve. To explore this, we banded each of the domains in our study into small, medium and large categories based on the sum of brand query impressions across the entire duration of the study.

(Chart: organic CTR for small, medium and large brands.)

When we look at how brand bias is influencing CTR for non-branded search queries, we can see that better known brands get a sizable increase in CTR. More importantly, small- to medium-size brands are actually losing out to results from these better-known brands and experience a much lower CTR in comparison.

What is clear is keyphrase strategy will be important for smaller brands in order to gain traction in natural search. Identifying and targeting valuable search queries that aren’t already dominated by major brands will minimize the cannibalization of CTR and ensure higher traffic levels as a result.

How does mobile CTR reflect changing search behavior?

Mobile search has become a huge part of our daily lives, and our clients are seeing a substantial shift in natural search traffic from desktop to mobile devices. According to Google, 30% of all searches made in 2013 were on a mobile device; they also predict mobile searches will constitute over 50% of all searches in 2014.

Understanding CTR from mobile devices will be vital as the mobile search revolution continues. It was interesting to see that the click curve remained very similar to our desktop curve. Despite the lack of screen real estate, searchers are clearly motivated to scroll below the fold and beyond the top 10.

(Chart: mobile organic CTR for the top 30 positions.)

NetBooster CTR curves for top 30 organic positions


Position  Desktop CTR  Mobile CTR  Large Brand  Medium Brand  Small Brand
1         19.35%       20.28%      20.84%       13.32%        8.59%
2         15.09%       16.59%      16.25%       9.77%         8.92%
3         11.45%       13.36%      12.61%       7.64%         7.17%
4         8.68%        10.70%      9.91%        5.50%         6.19%
5         7.21%        7.97%       8.08%        4.69%         5.37%
6         5.85%        6.38%       6.55%        4.07%         4.17%
7         4.63%        4.85%       5.20%        3.33%         3.70%
8         3.93%        3.90%       4.40%        2.96%         3.22%
9         3.35%        3.15%       3.76%        2.62%         3.05%
10        2.82%        2.59%       3.13%        2.25%         2.82%
11        3.06%        3.18%       3.59%        2.72%         1.94%
12        2.36%        3.62%       2.93%        1.96%         1.31%
13        2.16%        4.13%       2.78%        1.96%         1.26%
14        1.87%        3.37%       2.52%        1.68%         0.92%
15        1.79%        3.26%       2.43%        1.51%         1.04%
16        1.52%        2.68%       2.02%        1.26%         0.89%
17        1.30%        2.79%       1.67%        1.20%         0.71%
18        1.26%        2.13%       1.59%        1.16%         0.86%
19        1.16%        1.80%       1.43%        1.12%         0.82%
20        1.05%        1.51%       1.36%        0.86%         0.73%
21        0.86%        2.04%       1.15%        0.74%         0.70%
22        0.75%        2.25%       1.02%        0.68%         0.46%
23        0.68%        2.13%       0.91%        0.62%         0.42%
24        0.63%        1.84%       0.81%        0.63%         0.45%
25        0.56%        2.05%       0.71%        0.61%         0.35%
26        0.51%        1.85%       0.59%        0.63%         0.34%
27        0.49%        1.08%       0.74%        0.42%         0.24%
28        0.45%        1.55%       0.58%        0.49%         0.24%
29        0.44%        1.07%       0.51%        0.53%         0.28%
30        0.36%        1.21%       0.47%        0.38%         0.26%

Creating your own click curve

This study will give you a set of benchmarks for both non-branded and branded click-through rates with which you can confidently compare to your own click curve data. Using this data as a comparison will let you understand whether the appearance of your content is working for or against you.

We have made things a little easier for you by creating an Excel spreadsheet: simply drop your own top search query data in and it’ll automatically create a click curve for your website.

Simply visit the NetBooster website and download our tool to start making your own click curve.
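If you’d rather script it than use the spreadsheet, the aggregation itself is tiny. A minimal Python sketch (assuming your export has query, average position, clicks and impressions per row):

```python
from collections import defaultdict

def build_click_curve(rows):
    """Aggregate top-search-query rows into a {position: CTR%} curve.

    rows: iterables of (query, avg_position, clicks, impressions),
    e.g. an export of your top search query report.
    """
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for _query, position, c, i in rows:
        bucket = round(position)  # bucket fractional average positions
        clicks[bucket] += c
        impressions[bucket] += i
    return {pos: round(100 * clicks[pos] / impressions[pos], 2)
            for pos in sorted(impressions) if impressions[pos] > 0}

rows = [("blue widgets", 1.2, 50, 400),
        ("widget shop", 1.4, 30, 100),
        ("cheap widgets", 2.0, 10, 200)]
print(build_click_curve(rows))  # {1: 16.0, 2: 5.0}
```

Plot the result next to the benchmark curves above and you can see at a glance where your listings over- or under-perform.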

In conclusion

It’s been both a fascinating and rewarding study, and we can clearly see a change in search habits. Whatever the reasons for this evolving search behavior, we need to start thinking beyond the top 10, as pages two and three are likely to get more traffic in future. 

 We also need to maximize the traffic created from existing rankings and not just think about position.

Most importantly, we can see practical applications of this data for anyone looking to understand and maximize their content’s performance in natural search. Having the ability to quickly and easily create your own click curve and compare this against a set of benchmarks means you can now understand whether you have an optimal CTR.

What could be the next steps?

There is, however, plenty of scope for improvement. We are looking forward to continuing our investigation, tracking the evolution of search behavior. If you’d like to explore this subject further, here are a few ideas:

  • Segment search queries by intent (How does CTR vary depending on whether a search query is commercial or informational?)
  • Understand CTR by industry or niche
  • Monitor the effect of new Knowledge Graph formats on CTR across both desktop and mobile search
  • Conduct an annual analysis of search behavior (Are people’s search habits changing? Are they clicking on more results? Are they mining further into Google’s results?)

Ultimately, click curves like this will change as the underlying search behavior continues to evolve. We are now seeing a massive shift in the underlying search technology, with Google in particular investing heavily in entity-based search (i.e., the Knowledge Graph). We can expect other search engines, such as Bing, Yandex and Baidu, to follow suit and use a similar approach.

The rise of smartphone adoption and constant connectivity also means natural search is becoming more focused on mobile devices. Voice-activated search is also a game-changer, as people start to converse with search engines in a more natural way. This has huge implications for how we monitor search activity.

What is clear is no other industry is changing as rapidly as search. Understanding how we all interact with new forms of search results will be a crucial part of measuring and creating success.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


How to Recover Lost Pageviews in pushState Experiences

Posted by GeoffKenyon

PushState and AJAX can be used in tandem to deliver content without requiring a full page refresh, providing a better user experience. The other week, Richard Baxter dove into the implications of pushState for SEO on Builtvisible. If you’re not familiar with pushState, it’s worth spending some time reading through his post.

If you’re not familiar with delivering content this way, check out these sites, which use pushState and AJAX to deliver content:

  • Time: When you scroll to the bottom of the article, a new article loads and the URL changes
  • Halcyon: When you click on a navigation link, the left-hand panel doesn’t refresh

While pushState is really cool and great for UX, there are analytics issues presented by this technology.

When the content on a page and its URL are updated using AJAX and pushState, in most cases the _trackPageView beacon is not fired and the pageview is not tracked. This artificially increases your bounce rate while reducing your pages per visit, time on site, and total pageviews, along with other metrics associated with pageviews.
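The gap can be seen in miniature with a short sketch. This is a hypothetical illustration, not code from the post: the history object and beacon sender are injected as stand-ins so the behavior is easy to see. The point is that pushState alone never notifies analytics; a pageview beacon is only sent if your code explicitly fires one for the new URL.

```javascript
// Sketch of the tracking gap (hypothetical names; the history object and
// beacon sender are injected so the behavior is easy to see in isolation).
// pushState alone never notifies analytics: a pageview beacon is only sent
// if your code explicitly fires one for the new URL.
function navigate(history, sendBeacon, url, trackManually) {
  history.pushState({}, '', url); // URL changes; content is swapped via AJAX
  if (trackManually) {
    sendBeacon('pageview', url);  // e.g. ga('send', 'pageview', url)
  }
  // without the manual call, GA records nothing for this "navigation"
}
```

In a real page, `history` would be `window.history` and `sendBeacon` would delegate to your analytics snippet; the Tag Manager approach below achieves the same thing without touching your site's code.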

How to tell if you’re having tracking problems

If you have a very high bounce rate or are generally curious to check if this is a problem for you, start by installing the GA Debugger extension for Chrome. Then go to the URL you want to investigate and open up the console (windows: control + shift + j, mac: command + option + j). Now, clear the console using the button at the left, and refresh the URL.

Once you refresh the page, you should see GA debugging show up in the console. To check that the initial page view is being tracked, you should see a “sent beacon” for a pageview.

Once you’ve established the initial pageview is tracked, click a link to load another page. If GA is properly tracking pageviews, you should see another pageview beacon being sent. If you don’t see this, then you have a problem.

Capturing these pageviews with GTM

The good news is that even though this is a huge problem, it can easily be fixed with Google Analytics and Google Tag Manager.

Start by creating a new “History Listener” tag. Set its firing rule to all pages and hit save. This tag simply listens for changes to the URL.

Now we’ll need a tag that fires a pageview when the URL History Listener fires. To do this, create a new GA tag.

If you already run Google Analytics from GTM, you’ll simply need to modify your existing tag. This tag should, by default, be set to track pageviews. 

At this point we’ll need to set the firing rules. First, we should make sure the tag is firing on all of our pages for our basic GA installation.

The firing rule for all pages should be a default option.

If you are already running GA via GTM, you’ll already have this set up. You’ll need to create a subsequent firing rule to fire a pageview for this URL History Listener.

To do this, click to add a new firing rule and then select “create new rule.” Name the rule, and then move on to conditions. The default rule should be [url] [contains]; we need to change this to [event] [equals]. Then we’ll set the condition to gtm.historyChange. Now click save.

Now you should be all set to hit publish on your updated tag container. Overnight, you should see a change in your pageviews and related metrics.
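Conceptually, the setup above boils down to two moving parts. The following is a hypothetical sketch of the mechanism (simplified function and field names, not GTM internals): the History Listener pushes a `gtm.historyChange` event onto the dataLayer whenever the URL changes, and the GA pageview tag fires because its rule matches messages where [event] equals gtm.historyChange.

```javascript
// Hypothetical sketch of the mechanism the GTM setup relies on
// (simplified names, not actual GTM internals).
var dataLayer = [];

function historyListener(newUrl) {
  // the listener contributes an event message, not a pageview itself
  dataLayer.push({ event: 'gtm.historyChange', newUrl: newUrl });
}

function tagsFiredBy(message, rules) {
  // evaluates firing rules of the form [event] [equals] <name>
  return rules
    .filter(function (rule) { return rule.event === message.event; })
    .map(function (rule) { return rule.tag; });
}
```

With a rule like `{ event: 'gtm.historyChange', tag: 'GA pageview' }` in place, every pushState navigation produces a dataLayer message that fires the pageview tag, which is exactly what restores the lost pageviews.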


I See Content Everywhere – Whiteboard Friday

Posted by MarkTraphagen

Most of us who work in content marketing have felt the strain that scaling puts on our efforts. How on Earth are we supposed to keep coming up with great ideas for new pieces of content? The answer is, in some sense, all around us. In today’s Whiteboard Friday, MozCon community speaker Mark Traphagen shows us how to see the world in a different way—a way that’s chock full of content ideas.

Heads-up! We’re publishing a one-two punch of Whiteboard Fridays from our friends at Stone Temple Consulting today. Be sure to check out “Content Syndication” by Eric Enge, as well!

For reference, here’s a still of this week’s whiteboard!

Video transcription

Hey, hello. I’m Mark Traphagen from Stone Temple Consulting, and welcome to this week’s Whiteboard Friday. I want to talk to you today, starting out, about a movie that I hope you’ve all seen by now, because this should not be a spoiler alert. I’m not even going to spoil the movie, but it’s “The Sixth Sense.”

Most of you know that movie. You’ve seen it and remember it. The little kid who says that creepy thing: “I see dead people.”

What I want to try to teach you today is to see, not dead people, but content, and to see it everywhere. Most of us realize that these days we’ve got to be producing content to be effective on the Web, not only for SEO, but to be effective in our marketing, in our branding, and in building the reputation, trust, and authority that we need around our brand. That’s going to happen through content.

We’re all topically challenged

But if you’re the one tasked with coming up with that content and you’ve got to create it, it’s a tough job. Why? Most of us are topically challenged. We come to that moment, “What do I write about? What do I do that video about? What do I make that podcast about? What’s the next thing I’m going to write about?” That’s going to be the hardest thing.

When I talk to people about this, people who do this, like I do every day for a living, producing, inventing content, they’re almost invariably going to put that in the top three and usually number one. What do I do? Where do I get this from?

It’s more important now than ever before. It used to be just most companies that did content at all, websites, would hire an SEO copywriter. They’d actually use that term. We need an SEO copywriter. That usually meant that we’re looking for somebody who’s going to know where to put the keywords in enough times, and we don’t really care what else goes on with the content, what they write or how they say it or how good a writer they are as long as they can know the ways to manipulate the search engines.

Well, I think most of us now, if you watch these Whiteboard Friday videos, know that just doesn’t work anymore. That’s not going to cut it. Not only does that not really work with the search engines so well anymore, but it’s not really using your content effectively. It’s not using it to build, again, that reputation, that trust, that authority that you need around your brand, which content can be so powerful in building.

Get yourself some cyborg content eyes

So what I’m going to challenge you to do today is to get content eyes. You’ve got to get content eyes. You’ve got to get eyes that see content everywhere. This is what I train myself to do. It’s why I’m never out of ideas for that next blog post or that next video. You start to see it everywhere. You’ve got to get those eyes for it.

You’ve got to be like that professional photographer. Professional photographers are like this. This is what they have. Some of them, maybe they are born with it, but I think a lot of them have just developed it. They train themselves that everywhere they walk, when they’re going down the city street, when they’re out in the country, or wherever they are, they see photographs. The rest of us will walk right by it and say, “That’s just stuff happening.” But they see that old man on the street that has a face that tells a story of long ages. They see the way that shadow falls across the street at that moment, that right time of day. They see that’s a photograph. That’s a photograph. That’s a photograph.

You’ve got to start looking for that with content. You’ve got to be like Michelangelo. According to legend anyway, he said that he could look at a block of granite and see the sculpture that was inside it, waiting for him to chisel it out. That’s what you’ve got to train yourself to do.

So what I want to do today with the rest of this time is to give you some ways of doing that, some ways that you can look at the other content that you’re reading online, or videos you’re watching, conversations that you get into, listening to a conference speaker, wherever you are to start to look for that and get those content eyes. So let’s break into what those are.

Like the bumper sticker says, question everything

By questioning everything here, I mean develop a questioning mind. This is a good thing to do anyway when you’re reading, especially when you’re reading non-fiction content or you’re looking at and evaluating things. But for the content producer, this is a great tool.

When I’m looking at a piece of content, when I’m watching one of Rand’s Whiteboard Friday videos, I don’t just say, “Oh, it’s Rand Fishkin. I’ve got to take everything that he says.” I formulate questions in my mind. Why is that true? He just went past that fact there, but how does he know that?

Wait, I’d like to know this, but I’m looking at a Whiteboard video. I could yell at it all day, and Rand’s not going to answer me. But maybe instead of just putting that question in the comments, maybe that becomes my next piece of content.

Install a question antenna

So question everything. Get those questions. Related to that — get a question antenna up. Now what I mean by that is look for questions that are already there, but aren’t getting answered. You see a great blog post on something, and then you look in the comments and see somebody has asked this great question, and neither the author of the blog post nor anybody else is really answering it adequately. Chances are, if that’s a really great question, that person doesn’t have it alone. There are a lot of other people out there with that same question.

So that’s an opportunity for you to take that and make a piece of content out of it. We’re talking here about something that’s relevant to the audience that you’re after, obviously. So that’s another thing is looking for those questions, and not just on other pieces of content, but obviously you should be listening to your customers. What are the questions they’re asking? If you don’t have direct access to that, talk to your sales staff. Talk to your customer service people. Whoever interfaces with the customers, collect their questions. Those are great sources of content.

Finally, here, not finally. Second to finally, penultimate: do the mash-up. I love mash-ups. I’m totally obsessed with them. It’s where an artist takes two or three or sometimes more pieces of pop music, which could be from different eras, and puts them together in a very creative way. It’s not just playing one after the other; the artist finds ways that they sonically match up and can blend over each other. It might be a Beatles song over “Gangsta’s Paradise.” A whole new thing happens when they do that.

Juxtapose this! By which I mean do a mash-up.

Well, you can do mash-ups. When you’re reading content or watching videos or wherever you’re getting your stimulation, look for things that juxtapose in some way, that you could bring that in, in some way that nobody’s done before.

Quickly, there are four kinds of things you should be looking for to do your mash-up. Sometimes you could be writing about things that intersect in some way. You might see two different pieces of content and, because you’ve got your content eyes out there, you say, “Ah, there’s an overlap here that nobody is talking about.” So you talk about it. You write about that.

It might be a total contrast. It might be like over here people are saying this, and over here people are saying that. Why is there such a difference?

Maybe you can either resolve that or even just talk about why that difference is there.

It can be just an actual contradiction. There’s contradiction in this thing. Why is that contradiction there? Or maybe just where they complement each other. That’s supposed to be a bridge between there. Not a very good bridge. The two things, how do they complement each other? The mash-up idea is taking two or more ideas that are out there floating around, that you’ve been thinking about, and bringing them together in a way that nobody else has.

Before I go on to the last one here, I just want to say “Do you see what we’re doing?” We’re synthesizing out of other stimulus that’s out there to produce something that is unique, but birthed out of other ideas. That’s where the best ideas come from. That’s a way that you can be getting those ideas.

Let’s brand-name-acne-treatment this topic up

Let’s go to the last one here. I call it Clearasil because it’s clearing things up. This is one I use a lot. Maybe it’s because I have a background as a teacher from years ago. I’ve got to make this clear. I’ve got to explain this. When you see something out there that is interesting or new, somebody presents some new facts, a test result, whatever it is, but they just kind of presented the facts, you could go, if you understand it, and say, “I think I know why that’s happening. I think I know the implications of that.” You could go and explain that. Now you have cleared that up, and you’ve created a great new piece of useful content.

A quick example of that kind of thing is I had a chat with Jay Baer recently, of Convince & Convert. Something he said just pinged in my mind and I said, “Yes, that’s why some of my content works.” He has this thing that he calls “and therefore” content. He says that he’s trained his staff and himself that when they go out and they see something where somebody has said like, “This happened out there,” kind of reporting of the news, they say, “Let’s write about or do a video about or an audio or whatever, and therefore what this means to you, and therefore the next steps you need to take because of that, and therefore what might happen in the future.” You see the power of that?

So the whole thing here is getting content eyes. Learning to see content everywhere. Train yourself. Begin to ask those questions. Begin to look at the stimulus that comes in around you. Listen, look, and find out what you can put together in a way that nobody else has before, and you’ll never run out of those content ideas. Thanks a lot for joining me today.

Video transcription by Speechpad.com
