How Much Has Link Building Changed in Recent Years?

Posted by Paddy_Moogan

I get asked this question a lot. It’s mainly asked by people who are considering buying my link building book and want to know whether it’s still up to date. This is understandable given that the first edition was published in February 2013 and our industry has a deserved reputation for always changing.

I find myself giving the same answer, even though I’ve been asked it probably dozens of times in the last two years: “not that much”. I don’t think this is solely due to the book itself standing the test of time, although I’ll happily take a bit of credit for that 🙂 I think it’s more a sign of our industry as a whole not changing as much as we’d like to think.

I started to question whether I was right, and honestly, that’s one of the reasons it has taken me over two years to release the second edition of the book.

So I posed this question to a group of friends not so long ago, some via email and some via a Facebook group. I was expecting to be called out by many of them because my position was that in reality, it hasn’t actually changed that much. The thing is, many of them agreed and the conversations ended with a pretty long thread with lots of insights. In this post, I’d like to share some of them, share what my position is and talk about what actually has changed.

My personal view

Link building hasn’t changed as much as we think it has.

The core principles of link building haven’t changed. The signals around link building have changed, but mainly around new machine learning developments that have indirectly affected what we do. One thing that has definitely changed is the mindset of SEOs (and now clients) towards link building.

I think the last big change to link building came in April 2012 when Penguin rolled out. This genuinely did change our industry and put to bed a few techniques that should never have worked so well in the first place.

Since then, we’ve seen some things change, but the core principles haven’t changed if you want to build a business that will be around for years to come and not run the risk of being hit by a link related Google update. For me, these principles are quite simple:

  • You need to deserve links – either an asset you create or your product
  • You need to put this asset in front of a relevant audience who have the ability to share it
  • You need consistency – one new asset every year is unlikely to cut it
  • Anything that scales is at risk

For me, the move towards user data driving search results + machine learning has been the biggest change we’ve seen in recent years and it’s still going.

Let’s dive a bit deeper into all of this and I’ll talk about how this relates to link building.

The typical mindset for building links has changed

I think that most SEOs are coming round to the idea that you can’t get away with building low quality links any more, not if you want to build a sustainable, long-term business. Spammy link building still works in the short-term and I think it always will, but it’s much harder than it used to be to sustain websites that are built on spam. The approach is more “churn and burn” and spammers are happy to churn through lots of domains and just make a small profit on each one before moving onto another.

For everyone else, it’s all about the long-term and not putting client websites at risk.

This has led to many SEOs embracing different forms of link building and generally starting to use content as an asset when it comes to attracting links. A big part of me feels that it was actually Penguin in 2012 that drove the rise of content marketing amongst SEOs, but that’s a post for another day…! For today though, this goes some way towards explaining the trend we see below.

Slowly but surely, I’m seeing clients come to my company already knowing that low quality link building isn’t what they want. It’s taken a few years after Penguin for it to filter down to client / business owner level, but it’s definitely happening. This is a good thing but unfortunately, the main reason for this is that most of them have been burnt in the past by SEO companies who have built low quality links without giving thought to building good quality ones too.

I have no doubt that it’s this change in mindset which has led to trends like this:

The thing is, I don’t think this was by choice.

Let’s be honest. A lot of us used the kind of link building tactics that Google no longer like because they worked. I don’t think many SEOs were under the illusion that it was genuinely high quality stuff, but it worked and it was far less risky to do than it is today. Unless you were super-spammy, the low-quality links just worked.

Fast forward to a post-Penguin world and things are far riskier. For me, it’s because of this that we see trends like the above. As an industry, we had the easiest link building methods taken away from us and we’re left with fewer options. One of the main options is content marketing which, if you do it right, can lead to good quality links and, importantly, the types of links you won’t be removing in the future. Get it wrong and you’ll lose budget and lose the trust of your boss or client in the power of content when it comes to link building.

There are still plenty of other methods to build links and sometimes we can forget this. Just look at this epic list from Jon Cooper. Even with this many tactics still available to us, it’s hard work. Way harder than it used to be.

My summary here is that as an industry, our mindset has shifted but it certainly wasn’t a voluntary shift. If the tactics that Penguin targeted still worked today, we’d still be using them.

A few other opinions…

I definitely think too many people want the next easy win. As someone surfing the edge of what Google is bringing our way, here’s my general take—SEO, in broad strokes, is changing a lot, *but* any given change is more and more niche and impacts fewer people. What we’re seeing isn’t radical, sweeping changes that impact everyone, but a sort of modularization of SEO, where we each have to be aware of what impacts our given industries, verticals, etc.

Dr. Pete

 

I don’t feel that techniques for acquiring links have changed that much. You can either earn them through content and outreach or you can just buy them. What has changed is the awareness of “link building” outside of the SEO community. This makes link building / content marketing much harder when pitching to journalists and even more difficult when pitching to bloggers.

Link building has to be more integrated with other channels and struggles to work in its own environment unless supported by brand, PR and social. Having other channels supporting your link development efforts also creates greater search signals and more opportunity to reach a bigger audience which will drive a greater ROI.

Carl Hendy

 

SEO has grown up in terms of more mature staff and SEOs becoming more ingrained into businesses so there is a smarter (less pressure) approach. At the same time, SEO has become more integrated into marketing and has made marketing teams and decision makers more intelligent in strategies and not pushing for the quick win. I’m also seeing that companies who used to rely on SEO and building links have gone through IPOs and the need to build 1000s of links per quarter has rightly reduced.

Danny Denhard

Signals that surround link building have changed

There is no question about this one in my mind. I actually wrote about this last year in my previous blog post where I talked about signals such as anchor text and deep links changing over time.

Many of the people I asked felt the same, here are some quotes from them, split out by the types of signal.

Domain level link metrics

I think domain level links have become increasingly important compared with page level factors, i.e. you can get a whole site ranking well off the back of one insanely strong page, even with sub-optimal PageRank flow from that page to the rest of the site.

Phil Nottingham

I’d agree with Phil here and this is what I was getting at in my previous post on how I feel “deep links” will matter less over time. It’s not just about domain level links here, it’s just as much about the additional signals available for Google to use (more on that later).

Anchor text

I’ve never liked anchor text as a link signal. I mean, who actually uses exact match commercial keywords as anchor text on the web?

SEOs. 🙂

Sure there will be natural links like this, but honestly, I struggle with the idea that it took Google so long to start turning down the dial on commercial anchor text as a ranking signal. They are starting to turn it down though, slowly but surely. Don’t get me wrong, it still matters and it still works. But like pure link spam, the barrier is a lot lower now in terms of what constitutes too much.

Rand feels that they matter more than we’d expect and I’d mostly agree with this statement:

Exact match anchor text links still have more power than you’d expect—I think Google still hasn’t perfectly sorted what is “brand” or “branded query” from generics (i.e. they want to start ranking a new startup like meldhome.com for “Meld” if the site/brand gets popular, but they can’t quite tell the difference between that and https://moz.com/learn/seo/redirection getting a few manipulative links that say “redirect”)

Rand Fishkin

What I do struggle with though, is that Google still haven’t figured this out and that short-term, commercial anchor text spam is still so effective. Even for a short burst of time.

I don’t think link building as a concept has changed loads—but I think links as a signal have, mainly because of filters and penalties but I don’t see anywhere near the same level of impact from coverage anymore, even against 18 months ago.

Paul Rogers

New signals have been introduced

It isn’t just about established signals changing though, there are new signals too and I personally feel that this is where we’ve seen the most change in Google algorithms in recent years—going all the way back to Panda in 2011.

With Panda, we saw a new level of machine learning where it almost felt like Google had found a way of incorporating human reaction / feelings into their algorithms. They could then run this against a website and answer questions like the ones included in this post. Things such as:

  • “Would you be comfortable giving your credit card information to this site?”
  • “Does this article contain insightful analysis or interesting information that is beyond obvious?”
  • “Are the pages produced with great care and attention to detail vs. less attention to detail?”

It is a touch scary that Google was able to run machine learning against answers to questions like this and write an algorithm to predict the answers for any given page on the web. They have, though, and this was four years ago now.

Since then, they’ve made various moves to utilize machine learning and AI to build out new products and improve their search results. For me, this was one of the biggest changes, and it went pretty unnoticed by our industry until Hummingbird came along. I feel pretty sure that we have Ray Kurzweil to thank for at least some of that.

There seems to be more weight on theme/topic related to sites, though it’s hard to tell if this is mostly link based or more user/usage data based. Google is doing a good job of ranking sites and pages that don’t earn the most links but do provide the most relevant/best answer. I have a feeling they use some combination of signals to say “people who perform searches like this seem to eventually wind up on this website—let’s rank it.” One of my favorite examples is the Audubon Society ranking for all sorts of birding-related searches with very poor keyword targeting, not great links, etc. I think user behavior patterns are stronger in the algo than they’ve ever been.

– Rand Fishkin

Leading on from what Rand has said, it’s becoming more and more common to see search results that just don’t make sense if you look at the link metrics—but are a good result.

For me, the move towards user data driving search results and machine learning has been the biggest change we’ve seen in recent years, and it’s still going.

Edit: since drafting this post, Tom Anthony released this excellent blog post on his views on the future of search and the shift to data-driven results. I’d recommend reading that as it approaches this whole area from a different perspective and I feel that an off-shoot of what Tom is talking about is the impact on link building.

You may be asking at this point, what does machine learning have to do with link building?

Everything. Because as strong as links are as a ranking signal, Google want more signals and user signals are far, far harder to manipulate than established link signals. Yes it can be done—I’ve seen it happen. There have even been a few public tests done. But it’s very hard to scale and I’d venture a guess that only the top 1% of spammers are capable of doing it, let alone maintaining it for a long period of time. When I think about the process for manipulation here, I actually think we go a step beyond spammers towards hackers and more cut and dry illegal activity.

For link building, this means that traditional methods of manipulating signals are going to become less and less effective as these user signals become stronger. For us as link builders, it means we can’t keep searching for that silver bullet or the next method of scaling link building just for an easy win. The fact is that scalable link building is always going to be at risk from penalization from Google—I don’t really want to live a life where I’m always worried about my clients being hit by the next update. Even if Google doesn’t catch up with a certain method, machine learning and user data mean that these methods may naturally become less effective and cost efficient over time.

There are of course other things such as social signals that have come into play. I certainly don’t feel like these are a strong ranking factor yet, but with deals like this one between Google and Twitter being signed, I wouldn’t be surprised if that ever-growing dataset is used at some point in organic results. The one advantage that Twitter has over Google is its breaking news freshness. Twitter is still way quicker at breaking news than Google is—140 characters in a tweet is far quicker than Google News! Google know this, which is why I feel they’ve pulled this partnership back into existence after a couple of years apart.

There is another important point to remember here and it’s nicely summarised by Dr. Pete:

At the same time, as new signals are introduced, these are layers not replacements. People hear social signals or user signals or authorship and want it to be the link-killer, because they already fucked up link-building, but these are just layers on top of on-page and links and all of the other layers. As each layer is added, it can verify the layers that came before it, and what you need isn’t the magic signal but a combination of signals that generally matches what Google expects to see from real, strong entities. So, links still matter, but they matter in concert with other things, which basically means it’s getting more complicated and, frankly, a bit harder. Of course, no one wants to hear that.

– Dr. Pete

The core principles have not changed

This is the crux of everything for me. With all the changes listed above, the key is that the core principles around link building haven’t changed. I could even argue that Penguin didn’t change the core principles because the techniques that Penguin targeted should never have worked in the first place. I won’t argue this too much though because even Google advised website owners to build directory links at one time.

You need an asset

You need to give someone a reason to link to you. Many won’t do it out of the goodness of their heart! One of the most effective ways to do this is to develop a content asset and use this as your reason to make people care. Once you’ve made someone care, they’re more likely to share the content or link to it from somewhere.

You need to promote that asset to the right audience

I really dislike the stance that some marketers take when it comes to content promotion—build great content and links will come.

No. Sorry, but for the vast majority of us, that’s simply not true. The exceptions are people who skydive from space or have huge existing audiences to leverage.

You simply have to spend time promoting your content or your asset for it to get shares and links. It is hard work, and sometimes you can spend a long time on it and get little return, but it’s important to keep working at it until you’re at a point where you have two things:

  • A big enough audience where you can almost guarantee at least some traffic to your new content along with some shares
  • Enough strong relationships with relevant websites who you can speak to when new content is published and stand a good chance of them linking to it

Getting to this point is hard—but that’s kind of the point. There are various hacks you can use along the way but it will take time to get right.

You need consistency

Leading on from the previous point, it takes time and hard work to get links to your content—the types of links that stand the test of time and that you won’t be removing in 12 months’ time anyway! This means that you need to keep pushing content out and getting better each and every time. This isn’t to say you should just churn content out for the sake of it, far from it. I am saying that with each piece of content you create, you will learn to do at least one thing better the next time. Try to give yourself the leverage to do this.

Anything scalable is at risk

Scalable link building is exactly what Google has been trying to crack down on for the last few years. Penguin was the biggest move and hit some of the most scalable tactics we had at our disposal. When you scale something, you often lose some level of quality, which is exactly what Google doesn’t want when it comes to links. If you’re still relying on tactics that could fall into the scalable category, I think you need to be very careful and just look at the trend in the types of links Google has been penalizing to understand why.

The part Google plays in this

To finish up, I want to briefly talk about the part that Google plays in all of this and shaping the future they want for the web.

I’ve always tried to steer clear of arguments involving the idea that Google is actively pushing FUD into the community. I’ve preferred to concentrate more on things I can actually influence and change with my clients rather than what Google is telling us all to do.

However, for the purposes of this post, I want to talk about it.

General paranoia has increased. My bet is there are some companies out there carrying out zero specific linkbuilding activity through worry.

Dan Barker

Dan’s point is a very fair one and just a day or two after reading this in an email, I came across a page related to a client’s target audience that said:

“We are not publishing guest posts on SITE NAME any more. All previous guest posts are now deleted. For more information, see www.mattcutts.com/blog/guest-blogging/.”

I’ve reworded this as to not reveal the name of the site, but you get the point.

This is silly. Honestly, so silly. They are a good site, publish good content, and had good editorial standards. Yet they have ignored all of their own policies, hard work, and objectives to follow a blog post from Matt. I’m 100% confident that it wasn’t sites like this one that Matt was talking about in this blog post.

This is, of course, from the publishers’ angle rather than the link builders’ angle, but it does go to show the effect that statements from Google can have. Google know this, so it does make sense for them to push out messages that make their jobs easier and suit their own objectives—why wouldn’t they? In a similar way, what did they do when they were struggling to classify at scale which links are bad vs. good and didn’t have a big enough web spam team? They got us to do it for them 🙂

I’m mostly joking here, but you see the point.

The most recent infamous mobilegeddon update, discussed here by Dr. Pete, is another example of Google pushing out messages that ultimately scared a lot of people into action. Although to be fair, I think that despite the apparently small impact so far, the broad message from Google is a very serious one.

Because of this, I think we need to remember that Google does have their own agenda and many shareholders to keep happy. I’m not in the camp of believing everything that Google puts out is FUD, but I’m much more sensitive and questioning of the messages now than I’ve ever been.

What do you think? I’d love to hear your feedback and thoughts in the comments.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


What Happened after Google Pulled Author and Video Snippets: A Moz Case Study

Posted by Cyrus-Shepard

In the past 2 months, Google made big changes to its search results. Webmasters saw disappearing Google authorship photos, reduced video snippets, changes to local packs and in-depth articles, and more.

Here at Moz, we’ve closely monitored our own URLs to measure the effect of these changes on our actual traffic. The results surprised us.

Authorship traffic—surprising results

In the early days of authorship, many webmasters worked hard to get their photo in Google search results. I confess, I doubt anyone worked harder at author snippets than me.

Search results soon became crowded with smiling faces staring back at us. Authors hired professional photographers. Publishers worked to correctly follow Google’s guidelines to set up authorship for thousands of authors.

The race for more clicks was on.

Then on June 28th, Google cleared the page. No more author photos.

To gauge the effect on traffic, we examined eight weeks’ worth of data from Google Analytics and Webmaster Tools, before and after the change. We then examined our top 15 authorship URLs (where author photos were known to show consistently) compared to non-authorship URLs. 

The results broke down like this:

Change in Google organic traffic to Moz

  • Total Site:  -1.76%
  • Top 15 Non-Authorship URLs:  -5.96%
  • Top 15 Authorship URLs:  -2.86%
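The comparison above boils down to a simple before-and-after percent change in organic sessions per URL group. Here’s a minimal sketch of that calculation in Python; the session totals are hypothetical placeholders chosen only to illustrate the reported percentages, not Moz’s actual Analytics data:

```python
# Hypothetical sketch: before/after organic-session totals per URL group.
# The numbers are illustrative placeholders, not Moz's real data.

def percent_change(before: float, after: float) -> float:
    """Relative change from `before` to `after`, as a percentage."""
    return (after - before) / before * 100

# (sessions in the weeks before the change, sessions in the weeks after)
groups = {
    "top_15_authorship_urls": (52_400, 50_900),
    "top_15_non_authorship_urls": (48_100, 45_233),
}

for name, (before, after) in groups.items():
    print(f"{name}: {percent_change(before, after):+.2f}%")

# Prints:
#   top_15_authorship_urls: -2.86%
#   top_15_non_authorship_urls: -5.96%
```

The same one-liner applies to any slice of your Analytics export, as long as the before and after windows are the same length.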

Surprisingly, authorship URLs performed as well as non-authorship URLs in terms of traffic. Even though Moz was highly optimized for authors, traffic didn’t significantly change.

On an individual level, things looked much different. We actually observed big changes in traffic with authorship URLs increasing or decreasing in traffic by as much as 45%. There is no clear pattern: Some went up, some went down—exactly like any URL would over an extended time.

Authorship photos don’t exist in a vacuum; each photo competed for attention with all the other photos on the page. Each search result is as unique as a fingerprint. What worked for one result didn’t work for another.

Consider what happens visually when multiple author photos exist in the same search result:

One hypothesis speculates that more photos had the effect of drawing eyes down the page. In the absence of rich snippets, search click-through rates might follow more closely studied models, which dictate that results closer to the top earn more clicks.

In the absence of author photos, it’s likely click-through rate expectations have once again become more standardized.

Video snippets: a complex tale

Shortly after Google removed author photos, they took aim at video snippets as well. On July 17th, MozCast reported a sharp decline in video thumbnails.

Most sites, Moz included, lost 100% of their video results. Other sites appeared to be “white-listed,” as reported by former Mozzer Casey Henry at Wistia.

A few of the sites Casey found where Google continues to show video thumbnails:

  • youtube.com
  • vimeo.com
  • vevo.com
  • ted.com
  • today.com
  • discovery.com

Aside from these “giants,” most webmasters, even very large publishers at the top of the industry, saw their video snippets vanish in search results.

How did this loss affect traffic for our URLs with embedded videos? Fortunately, here at Moz we have a large collection of ready-made video URLs we could easily study: our Whiteboard Friday videos, which we produce every, well, Friday.

To our surprise, most URLs actually saw more traffic.

On average, our Whiteboard Friday videos saw a 10% jump in organic traffic after losing video snippets.

A few other URLs with video saw dramatic increases:

The last example, the Learn SEO page, didn’t have an actual video on it, but a bug on Google’s end caused them to display an older video thumbnail. (Several folks we’ve talked to speculate that Google removed video snippets simply to clean up bugs in the system.)

We witnessed a significant increase in traffic after losing video snippets. How did this happen? 

Did Google change the way they rank and show video pages?

It turns out that many of our URLs that contained videos also saw a significant change in the number of search impressions at the exact same time.

According to Google, impressions for the majority of our video URLs shot up dramatically around July 14th.

Impressions for Whiteboard Friday URLs also rose 20% during this time. For Moz, most of the video URLs saw many more impressions, but for others, it appears rankings dropped.

While Moz saw video impressions rise, other publishers saw the opposite effect.

Casey Henry, our friend at video hosting company Wistia, reports seeing rankings drop for many video URLs that had thin or little content.

“…it’s only pages hosting video with thin content… the pages that only had video and a little bit of text went down.”

Casey Henry

For a broader perspective, we talked to Marshall Simmonds, founder of Define Media Group, who monitors millions of daily video pageviews for large publishers.

Marshall found that despite the fact that most of the sites they monitor lost video snippets, they observed no visible change in either traffic or pageviews across hundreds of millions of visits.

Define Media Group also recently released its 2014 Mid-Year Digital Traffic Report, which sheds fascinating light on current web traffic trends.

What does it all mean?

While we have anecdotal evidence of ranking and impression changes for video URLs on individual sites, on the grand scale across all Google search results these differences aren’t visible.

If you have video content, the evidence suggests it’s now worth more than ever to follow video SEO best practices (taken from video SEO expert Phil Nottingham):

  • Use a crawlable player (all the major video hosting platforms use these today)
  • Surround the video with supporting information (caption files and transcripts work great)
  • Include schema.org video markup
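On that last point, here’s a minimal sketch of what schema.org video markup looked like at the time, using the VideoObject type as microdata. Every URL and value below is a placeholder for illustration, not an actual Moz page:

```html
<!-- Hypothetical sketch of schema.org VideoObject microdata.
     All URLs and values are placeholders. -->
<div itemscope itemtype="http://schema.org/VideoObject">
  <h2 itemprop="name">Whiteboard Friday: Example Topic</h2>
  <meta itemprop="duration" content="PT8M30S" />
  <meta itemprop="thumbnailUrl" content="http://example.com/video-thumb.jpg" />
  <meta itemprop="uploadDate" content="2014-07-18" />
  <!-- The embedded player would sit here, with supporting text and a
       transcript around it, per the advice above. -->
  <p itemprop="description">A short description of the video, with a full
     transcript further down the page body.</p>
</div>
```

Running the page through a structured data testing tool before and after is a sensible sanity check; exactly which properties Google weighed for video snippets was never fully documented, so treat this as a starting point rather than a guarantee.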

SEO finds a way

For the past several years web marketers competed for image and video snippets, and it’s with a sense of sadness that they’ve been taken away.

The smart strategy follows the data, which suggests that more traditional click-through rate optimization techniques and strategies could now be more effective. This means strong titles, meta descriptions, rich snippets (those that remain), brand building, and traditional ranking signals.

What happened to your site when Google removed author photos and video snippets? Let us know in the comments below.


The Month Google Shook the SERPs

Posted by Dr-Pete

As a group, we SEOs still tend to focus most of our attention on just one place – traditional, organic results. In the past two years, I’ve spent a lot of time studying these results and how they change over time. The more I experience the reality of SERPs in the wild, though, the more I’ve become interested in situations like this one (a search for “diabetes symptoms”)…

See the single blue link and half-snippet on the bottom-left? That’s the only thing about this above-the-fold page that most SEOs in 2014 would call “organic”. Of course, it’s easy to find fringe cases, but the deeper I dig into the feature landscape that surrounds and fundamentally alters SERPs, the more I find that the exceptions are inching gradually closer to the rule.

Monday, July 28th was my 44th birthday, and I think Google must have decided to celebrate by giving me extra work (hooray for job security?). In the month between June 28th and July 28th, there were four major shake-ups to the SERPs, all of them happening beyond traditional, organic results. This post is a recap of our data on each of those shake-ups.

Authorship photos disappear (June 28)

On June 25th, Google’s John Mueller made a surprise announcement via Google+:

We had seen authorship shake-ups in the past, but the largest recent drop had measured around 15%. It was clear that Google was rethinking the prevalence of author photos and their impact on perceived quality, but most of us assumed this would be a process of small tweaks. Given Google’s push toward Google+ and its inherent tie-in with authorship, not a single SEO I know had predicted a complete loss of authorship photos.

Yet, over the next few days, culminating on the morning of June 28th, a total loss of authorship photos is exactly what happened:

While some authorship photos still appeared in personalized results, the profile photos completely disappeared from general results, after previously being present on about 21% of the SERPs that MozCast tracks. It’s important to note that the concept of authorship remains, and author bylines are still being shown (we track that at about 24%, as of this writing), but the overall visual impact was dramatic for many SERPs.

In-depth gets deeper (July 2nd)

Most SEOs still don’t pay much attention to Google’s “In-depth Articles,” but they’ve been slowly gaining SERP share. When we first started tracking them, they popped up on about 3.5% of the searches MozCast covers. This data seems to only get updated periodically, and the number had grown to roughly 6.0% by the end of June 2014. On the morning of July 2nd, I (and, seemingly, everyone else) missed a major change:

Overnight, the presence of in-depth articles jumped from 6.0% to 12.7%, more than doubling (a +112% increase, to be precise). Some examples of queries that gained in-depth articles include:

  • xbox 360
  • hotels
  • raspberry pi
  • samsung galaxy tab
  • job search
  • pilates
  • payday loans
  • apartments
  • car sales
  • web design

Here’s an example set of in-depth results for a term SEOs know all too well, “payday loans”:

The motivation for this change is unclear, and it comes even as Google continues to test designs with pared down in-depth results (almost all of their tests seem to take up less space than the current design). Doubling this feature hardly indicates a lack of confidence, though, and many competitive terms are now showing in-depth results.

Video looks more like radio (July 16th)

Just a couple of weeks after the authorship drop, we saw a smaller but still significant shake-up in video results, with about 28% of results MozCast tracks losing video thumbnails:

As you can see, the presence of thumbnails does vary day-to-day, but the two plateaus, before and after July 16th, are clear here. At this point, the new number seems to be holding.

Since our data doesn’t connect the video thumbnails to specific results, it’s tough to say if this change indicates a removal of thumbnails or a drop in rankings for video results overall. Considering how smaller drops in authorship signaled a much larger change down the road, I think this shift deserves more attention. It could be that Google is generally questioning the value and prevalence of rich snippets, especially when quality concerns come into play.

I originally hypothesized that this might not be a true loss, but could be a sign that some video snippets were switching to the new “mega-video” format (or video answer box, if you prefer). This does not appear to be the case, as the larger video format is still fairly uncommon, and the numbers don’t match up.

For reference, here’s a mega-video format (for the query “bartender”):

Mega-videos are appearing on such seemingly generic queries as “partition”, “headlights”, and “california king bed”. If you have the budget and really want to dominate the SERPs, try writing a pop song.

Pigeons attack local results (July 24th)

By now, many of you have heard of Google’s “Pigeon” update. The Pigeon update hit local SERPs hard and seems to have dramatically changed how Google determines and uses a searcher’s location. Local search is more than an algorithmic layer, though – it’s also a feature set. When Pigeon hit, we saw a sharp decline in local “pack” results (the groups of 2-7 pinned local results):

We initially reported that pack results dropped more than 60% after the Pigeon update. We are now convinced that this was a mistake (indicated by the “?” zone) – essentially, Pigeon changed localization so much that it broke the method we were using. We’ve found a new method that seems to match manually setting your location, and the numbers for July 29-30 are, to the best of my knowledge, accurate.

According to these new numbers, local pack results have fallen 23.4% (in our data set) after the Pigeon update. This is the exact same number Darren Shaw of WhiteSpark found, using a completely different data set and methodology. The perfect match between those two numbers is probably a bit of luck, but it suggests that we’re at least on the right track. While I over-reported the initial drop, and I apologize for any confusion that may have caused, the corrected reality still shows a substantial change in pack results.

It’s important to note that this 23.4% drop is a net change – among queries, there were both losers and winners. Here are 10 searches that lost pack results (and have been manually verified):

  • jobs
  • cars for sale
  • apartments
  • cruises
  • train tickets
  • sofa
  • wheels
  • liposuction
  • social security card
  • motorcycle helmets

A couple of important notes – first, some searches that lost packs only lost packs in certain regions. Second, Pigeon is a very recent update and may still be rolling out or being tweaked. This is only the state of the data as we know it today.

Here are 10 searches that gained pack results (in our data set):

  • skechers
  • mortgage
  • apartments for rent
  • web designer
  • long john silvers
  • lamps
  • mystic
  • make a wish foundation
  • va hospital
  • internet service

The search for “mystic” is an interesting example – no matter what your location (if you’re in the US), Google is showing a pack result for Mystic, CT. This pattern seems to be popping up across the Pigeon update. For example, a search for “California Pizza Kitchen” automatically targets California, regardless of your location (h/t Tony Verre), and a search for “Buffalo Wild Wings” sends you to Buffalo, NY (h/t Andrew Mitschke).

Of course, local search is complex, and it seems like Google is trying to do a lot in one update. The simple fact that a search for “apartments” lost pack results in our data, while “apartments for rent” gained them, shows that the Pigeon update isn’t based on a few simplistic rules.
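
To make the “net change” idea concrete, here’s a toy sketch. The query lists and pack flags below are hypothetical, not our MozCast data; the point is only that a net figure hides both losers and gainers:

```python
# Hypothetical snapshots: query -> did a local pack appear in the SERP?
before = {"jobs": True, "apartments": True, "cruises": True,
          "sofa": True, "mortgage": False, "skechers": False}
after  = {"jobs": False, "apartments": False, "cruises": False,
          "sofa": True, "mortgage": True, "skechers": True}

# Queries that lost packs, queries that gained them
lost   = sorted(q for q in before if before[q] and not after[q])
gained = sorted(q for q in before if after[q] and not before[q])

# Net percentage change in the total count of pack results
net = (sum(after.values()) - sum(before.values())) / sum(before.values()) * 100

print(lost)    # three losers
print(gained)  # two gainers
print(f"{net:+.1f}% net change in pack results")  # -25.0% in this toy data
```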

Some local SEOs have commented that Pigeon seemed to increase the number of smaller packs (2-3 results). Looking at the data for pack size before and after Pigeon, this is what we’re seeing:

Both before and after Pigeon, there are no 1-packs, and 4-, 5-, and 6-packs are relatively rare. After Pigeon, the distribution of 2-packs is similar, but there is a notable jump in 3-packs and a corresponding decrease in 7-packs. The total number of 3-packs actually increased after the Pigeon update. While our data set (once we restrict it to just searches with pack results) is fairly small, this data does seem to match the observations of local SEOs.
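
Tallying pack sizes before and after is straightforward with `collections.Counter`. A sketch with made-up size lists (again, not our actual data) shows the kind of shift described above:

```python
from collections import Counter

# Made-up pack sizes observed on tracked queries, before and after Pigeon
before_sizes = [7, 7, 7, 3, 2, 7, 3, 2]
after_sizes  = [3, 3, 7, 3, 2, 3, 3, 2]

print(Counter(before_sizes))  # 7-packs dominate before
print(Counter(after_sizes))   # 3-packs jump and 7-packs fall after
```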

Sleep with one eye open

Ok, maybe that’s a bit melodramatic. All of these changes do go to show, though, that if you’re laser-focused on ranking alone, you may be missing a lot. We as SEOs not only need to look beyond our own tunnel vision; we need to start paying more attention to post-ranking data, like CTR and search traffic. SERPs are getting richer and more dynamic, and Google can change the rules overnight.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from feedproxy.google.com

How to Be TAGFEE when You Disagree

Posted by Lisa-Mozstaff

On being TAGFEE


I’m a big advocate of the TAGFEE culture at Moz. It’s one of the big reasons I joined the team and why I stay here. I also recognize that sometimes it can be hard to practice it in “Real Life.”

How, for instance, can I be both authentic AND fun when I tell Anthony how angry I am that he took the last two donuts? I can certainly be transparent and authentic, but anger and confrontation… where does the fun come in?

But those times when you need to be authentic—those are the times when being generous and empathetic matter the most. It may seem more generous and empathetic to just withhold that difficult feedback, but it’s not. Giving that feedback can be scary, and most people imagine it going horribly wrong and leaving everything in ruins when they really just wanted to help.

Having a little bit of self-awareness and a whole lot of hold-on-there-a-minute can really help with this. I’ve been sharing with other Mozzers a way to be Transparent AND Authentic AND Generous AND Fun AND Empathetic AND Exceptional. And I thought I’d share a little bit of it with you too.

Conflict can be productive

Why it’s important to have productive conflict

If you read about the psychology and physiology of confrontations, you’ll realize that our brains aren’t at their best when we’re in a confrontation.

When threatened, our bodies respond by going back to our most basic, primal instincts, sometimes called the lizard brain or (cue scary music) “amygdala hijack.” Blood and oxygen pump away from your brain and into your muscles so you’re equipped to fight or run away.

However, having your higher-order thinking functions deprived of oxygen when confronted by an angry customer or coworker isn’t such a good thing. Your lizard brain isn’t well-equipped to deal with situations diplomatically, or look at ways to find common ground and a win-win solution. It’s looking to destroy or get the heck out of there (or both), and neither of those approaches work well in a business environment.

To really communicate, *everyone* has to feel safe. If you are calm and collected and using the collaborative parts of your brain, but the person you’re talking to is scared or uncertain, you can’t communicate.

Fighting the lizard

Control the physiological and psychological reactions of fear

When you’re in a confrontation, how do you control the physiological and psychological reactions of fear so you can choose to act rather than react?

To bring your brain back, you need to force your brain to use its higher-order thinking functions. Ask yourself questions that the lizard brain can’t answer, and it’ll have to send some of that oxygen and blood back up into the rest of your brain.

Once you’ve freed your brain from the lizard, you have access to your higher thinking functions – and the resources to have a productive confrontation.

Questions to fight the lizard:

  • Find benevolent intent. Ask yourself what you really want from this interaction. Find an intention that’s benevolent for both you and the other person. Draw on your Empathy and Generosity here. 
  • Get curious. Ask yourself why you or the other person is emotional and seek to understand. The lizard brain hates “why” questions. 

This lizard has no choice, but you do! (Image by Lisa Wildwood)

What does productive conflict look like?

Giving up “winning” to win

Give yourself permission to try something new. Even if you don’t do it perfectly, it’s better than the lizard.

These steps assume you’ve got some time to prepare, but sometimes, you find yourself in a confrontation and have to do the best you can. Give yourself permission to try something new. Even if you don’t do it perfectly, it’s better than the lizard taking over. And the more you practice these, the easier and more natural they’ll feel, and the more confidence you’ll have in the power of productive confrontations.

Once I’ve walked you through all of these steps, I’ll talk about how to put it all together. Also note that these steps may be contrary to how you are used to behaving, particularly if you come from a culture that values personal success over teamwork. It may feel strange to do this at first, and it may feel like you’re giving up the chance to “win”… but it’s worth it.


Steps to productive conflict:

  1. Change your story.
  2. Talk about the right things. 
  3. Get curious.
  4. Inspire and be inspired
  5. Follow up.

1 - Change your story

Create a benevolent story and a positive intent

The first step to Productive Conflict is to change your story. And to do that, you first have to realize you’re telling stories in the first place…

We’re all amazing storytellers

We all make up stories every time we see something happen. It’s human nature.

Here’s my story:

This is Anthony, stealing my donut. He saw me coming and grabbed it before I could. He’s munching on my donut while I despair of ever getting a donut.

I don’t get why he’s so selfish that he took two donuts. I mean, didn’t his mama raise him right?

Image cropped from an image courtesy of Stéfan under a Creative Commons license

My story is one we all make up sometimes. We paint ourselves as helpless victims thwarted by an evil villain. Sometimes we don’t see them as stories, however, but as reality, and that’s where we get into trouble.

The victim/villain story may get you sympathy, but it takes away your power. During a confrontation, it helps if you remember that it *is* a story, and it’s also:

  • Internal – Something you made up based on what you’ve seen, assumed, or experienced in the past in a similar situation
  • Of questionable validity. It could be true, partially true, or completely bogus 
  • Mutable!

“Mutable?” you ask. Why, yes, it is!

Changing the story you’re telling yourself is the key to having a productive (and powerful) conversation.

Make a happy story

You can read body language really well. And so can the person you’re talking to.

If you’re going to make up a story, make one up that helps you resolve an important issue and maintain your relationships.

Change your story to the most kind and generous one that fits the facts you’ve seen, and then believe it. Why? Because non-verbal cues, state of mind, fear or anger, and judgments and stories affect your reactions and approach to the conversation.

If you’ve planned your words out carefully but the intent doesn’t match, the other person can tell. If your intent isn’t good, the interaction won’t be good either. At best, you may appear to be trying to do the right thing but not really managing it. At worst, you appear insincere and manipulative.

Here’s your benevolent story, just waiting to hatch (Image by Pon Malar on Wikimedia under a Creative Commons license)

How to change your story

To help change your story, ask yourself these questions:

  • Why might a reasonable, intelligent, courteous, kind person do that?
  • Could there be circumstances I’m not aware of that could be contributing?
  • What if it was me? How would I explain what happened from my perspective? Be as lenient/forgiving as you can to your imaginary self

Review the facts… what you’ve seen and what you’ve chosen to pay attention to. They may all appear to support a nasty story, but you don’t know for sure. Think of the Rorschach tests… people see different things depending on how they’re feeling and their unique view on life, so find a benevolent story.

My new story

So, let’s try this on my story. I’ll start with the facts, remove my emotional devastation at not getting a donut, and create a benevolent story:

  • My facts are: I saw someone take the last two donuts.
  • My new benevolent story is: Anthony didn’t see me, and didn’t know how much I was craving a donut.

What do you see? (Image by Hermann Rorschach (died 1922), [Public domain], via Wikimedia Commons)

But my story is true!

Let’s assume for a moment that your not-so-nice story is completely, 100%, bona fide TRUE. This is hard, but consider this carefully… It Doesn’t Matter!

Giving someone the benefit of the doubt is the best way to motivate them to change. By creating a benevolent story, you give the person a way to improve AND save face. It’s magic!

Assuming the worst can severely damage your relationships, even if it’s true! Getting caught in a mistake makes people immediately defensive, which will hinder the conversation. Give them a chance to just fix things and they’ll be grateful to you and more inspired to make the change stick.

And then there’s the flip side… what if your story is partly or all wrong? This situation, as you can imagine, is much worse.

You’ll probably never find out what truly happened, and may find yourself arguing about the parts you got wrong rather than the real issue. It also damages the relationship, and here’s the key point: even if the person can get past their anger and hear your message, they will likely not like you, trust you, or want to work with you. And I’ve heard crow tastes really bad.

The power of a benevolent story and positive intent

The last part of changing your story is figuring out what you want from the conversation.

Think about what you want to happen, but also what you want from the relationship. The power of a benevolent story and positive intent is that it fosters a better relationship based on trust. That is huge, and I recommend that it be part of the intent of all conversations.

Judgment doublecheck!

When you’re done, go back through what you’ve got down and make sure a not-so-nice story hasn’t crept back in:

  • Remove judgment
  • Check that the issue matches your intent

Some examples

Here are some examples where I take a nasty story, break it down to the facts, and then create a new, benevolent story and a positive intent for a discussion.

The driver:

  • Judgment & nasty story: What a jerk, he just cut me off! Are you trying to kill me?
  • Fact: A car changed lanes in front of me in a way that I found uncomfortable.
  • New benevolent story: Wow, he must not have seen me.
  • Positive intent: Let him know a head check was needed.

The unanswered email:

  • Judgment & nasty story: Sue doesn’t respect me enough to respond to my email. She thinks it’s a stupid idea.
  • Fact: Sue didn’t answer my email when I expected.
  • New benevolent story: Sue’s busy and either hasn’t seen my email or hasn’t had time to respond.
  • Positive intent: Follow up with Sue on what she thinks.

The report:

  • Judgment & nasty story: What an idiot! That report Bruce turned in didn’t even try to answer the questions I had. It’s useless!
  • Fact: Bruce turned in a report that didn’t have the information I expected and needed.
  • New benevolent story: Bruce wasn’t aware or misunderstood what information I needed.
  • Positive intent: Let Bruce know what I need in the reports.

Remember that stories spread… all storytellers love an audience. So make sure your story is spreading positivity.

2 - Talk about the right things

Get clear on what the conversation needs to be about

What do you want from the conversation?

The next step is to think about what the real issue is. What exactly needs to happen? Who is involved? Who is impacted? Which facts are known? What information is available?

In TAGFEE terms, this is where transparency and being exceptional come in. Make sure that you’re talking about the right issue.

Ask yourself:

  • What is the impact to you and others?
  • What are the facts?
  • Scope – is this the first time? The second? The umpteenth?

Can you spot the judgment?

I just broke my own rules… can you see it?

I’ll give you a hint…it’s that last word in the Scope point… it sneaks in, so check!

Are you talking about apples when the issue is really oranges?

Scope is important:

  • If it’s the first time something has happened, you talk about what happened.
  • If it’s the second or third, talk about how it keeps happening.
  • If you can’t remember how many times it’s happened, talk about how the behavior is affecting your relationship.

3 - Get curious

Ask questions to understand and get to the root causes

Be an information maniac

Find out how the other person sees the situation.

Before you trip too far down that happy path, get more information. Seek to understand. Use Empathy and Generosity, and be Authentic. Ask neutral questions to create safety, and give the other person a chance to respond – you might find out something you didn’t know.

Asking neutral questions can create a space of collaboration, where you are both on the same side trying to figure out how to solve an issue you both agree needs to be resolved. It’s not always possible to turn a conflict into a collaboration, but you’d be surprised how many times it does work that way.

Another benefit of asking neutral questions is that it puts off conclusions and judgments until you have talked to the person involved and heard what they have to say. This is critical to keeping the conversation safe and collaborative.

Questions to ask:

  • What is your perspective? What do you see going on?
  • What’s important to you? Tell me more about that.
  • Here’s what I notice… What do you notice?

State conclusions tentatively

You can state a conclusion tentatively, making it clear you’re looking for their input on whether that conclusion is valid or if they have more information.

Listen carefully and continue to put off judgment until you’ve heard what they have to say.

Putting off judgment makes it easier for *you* to admit that you’ve been wrong. You may find what you thought was going to be a difficult conversation instead opens up a new level of authenticity and collaboration in your relationships.

Make sure anything you state definitively is a fact, devoid of judgment.

Be open to being wrong!

Or being surprised by more information that turns your story on its head.

Just maybe it wasn’t Anthony I saw “stealing” donuts in the stormtrooper outfit…

4 - Inspire and be inspired

Create a mutual purpose or common goal that inspires everyone to move forward

It’s all upside

Why inspire others? Well, why not? There is no downside to inspiring people: it benefits everyone.

The earlier steps talk about getting clear of the negative. This is where the good stuff happens. The Fun in TAGFEE! If you start from what felt like a conflict and end up with a mutual understanding with someone about what an issue is and how to resolve it, all things are possible. It can feel like magic! You move from confrontation to collaboration and win-win thinking that can help you both step outside the box.

Here’s a chart that’s totally made up, but it communicates a key point about communication: collaboration happens when you both trust and respect the people you’re talking to!

True collaboration

You need both a willingness and freedom to disagree, and mutual trust and respect to get into the “Collaboration Zone.”

The key to inspiring others is to seek to understand their point of view and their goals, and work together with them to find common ground.

Start the collaboration engine by asking some powerful questions and seeing what you can agree on and brainstorm solutions.

Collaboration engine questions:

  • What’s working?
  • What do you think?
  • What can we agree on?
  • What are we both interested in achieving?
  • What’s important about resolving this?
  • What can we try?

A rainbow of solutions

Solutions often go from the black-and-white “my” vs. “your” choice to a synergistic combination of mine and yours and other ideas we brainstormed along the way.

You may disagree on how to do something, but if you can agree on a common goal, you’re one step closer to a win-win solution.

Instead of accusing Anthony of taking the last donut and demanding that he promise to never do it again, or be reported to Team Happy for a happiness “adjustment,” my conversation is now about fair access to donuts at Moz. The entire conversation’s focus has shifted from “I want Anthony to know how angry I am he stole my donut” to “how can we make sure no-one at Moz is donut-deprived?” Magic!

Fair Access to Donuts at Moz – Possible solutions:

  • Work with Team Happy to make sure there’s enough donuts for everyone who wants them
  • Ask everyone at the company to only take one
  • Get a fresh donut machine where we can all make our own donuts on demand

5 - Follow up

Agree on what to do next and circle back around

This is a little step with a big impact. Make sure you’ve captured your conversation and everyone is on board to take action to make your solutions a reality.

Being Exceptional and Authentic come into play here. You’re collaborating on a solution and then making it happen.

Once you’ve established a shared understanding of an issue that needs to be resolved, it’s time to figure out how. Solicit ideas for how to solve the problem. Listen, acknowledge feedback and discuss pros and cons on the solutions until you both agree the solution is a good approach.

Make sure everyone is in agreement on:

  • Goals. How will you measure success?
  • Due dates. Who will do what by when?
  • When to check in: What time will we check to see how we’re doing?

Wrapping it up

Have productive, inspiring conversations, whether you agree or disagree

Before you talk to someone

At first, it may help to write down what you’re planning on saying.

I’ve broken this down into discrete before and during steps, but it doesn’t always end up being that way in practice. Use these steps to plan and practice until it comes naturally.

Steps to prepare:

  • Calm down! Lizard brain begone!
  • Create a happy story
  • Make sure you’re talking about the right thing
  • Write out what you want to say and check for your old story & judgments
  • Remember your benevolent intent

Have the conversation

Steps:

  1. Ask if the person has time to talk
  2. State your benevolent intent
  3. Keep to the facts
  4. State conclusions tentatively
  5. Get curious – seek to understand their point of view
  6. Be open to being wrong. Change your mind if needed.
  7. Aim toward collaboration.
  8. Finish with summarizing what you’ve discussed, and who will do what, when.

Remember the conversation may dictate you take a different path.

If the conversation starts to get heated, re-establish safety:

  • Restate your intent
  • Explicitly state what you’re not trying to do. For example, “I’m not saying you’re wrong, I’m trying to help us come to a solution that works for both of us.”

When conflict finds you

If you find yourself in a conversation unexpectedly, these steps can still help. Get curious, find out what they want, how they’re feeling, and tentatively state your perspective and ask for feedback. Some other ideas:

  • Accept the input and acknowledge the emotions but don’t reciprocate. Ask yourself “what do I want from this interaction” to rescue your brain from the lizard.
  • Do your best to establish safety for you and the other person by establishing a positive intent. It can be as simple as “Wow, Lisa, I can see you’re really upset about not getting a donut. I’d like to figure out how I can fix this – can I ask you a few questions?”

Don’t hesitate to take a break

If the conversation is heated, it may be better to step away and take the conversation up later. You might say:

“I can see this is a subject we both care deeply about. I’d like to take some time to prepare for a productive conversation. Can we take a break and meet back here in an hour?”

An example conversation

So, my side of the conversation with Anthony about the donuts might go like this:

“Anthony, do you have time to talk?”

“I’d like to talk to you about making sure everyone at Moz has the opportunity to get a donut. ”

“I saw someone taking the last two donuts this morning, and I was disappointed that I didn’t get one.”

“I thought it might be you, so I wanted to talk to you to see what happened.”

“I’m not accusing you of taking the last two donuts. I’m trying to figure out what happened and then work on how to make sure the donuts are evenly distributed at Moz.”

“Oh, so you were grabbing a donut for Crystal too! Wow, I totally misinterpreted what I saw!”

“Can you think of ways we can ensure everyone gets a donut?”

“Great, so I’ll contact Team Happy about getting a donut machine tomorrow, and you’ll approve the expense report on Friday.”

Image from Nostalgia Electrics

Perfection not required

Not everything will always turn out wonderful, but at least you’ve approached the problem and given feedback in a way that has the best chance for a positive outcome for everyone involved.

Maybe you’re a little closer to what the real issues are, or you’ve agreed to disagree; even those outcomes will keep miscommunication or confusion from being a source of problems.

If I really feel that donut was mine, and Anthony really thinks that donut was promised to Crystal, we may not agree, but at least everything is on the table where we have the chance to deal with it. And, we’re not telling our nasty stories to everyone but the person we need to talk to.

Feedback is a gift

Annette Promes, our CMO, said to me, “Feedback is a gift,” and it is.

Most folks want to know, and are truly interested in being better… better coworkers, friends, and humans. So let’s all resolve to give that gift in the best way we can. And receive it gratefully when it comes to us.

Oh, and that donut conflict… totally made up. I’m gluten-intolerant, so you can always have my share, Anthony! 🙂

Give me feedback

I experimented with converting a training class into a blog post, and would love to have your feedback on what works for you and what could be better.

You can also download this blog post in slidedoc format. It’s a communication technique that’s halfway between presentation and documentation. I learned about it at Write the Docs this year. You can read more and get the free slidedoc ebook at their site. What do you think?

Other resources

You may find these resources helpful too:

5 Rules for Productive Conflict (TED talk)

6 ways to make conflict productive


Unraveling Panda Patterns

Posted by billslawski

This is my first official blog post at Moz.com, and I’m going to be requesting your help and expertise and imagination.

I’m going to be asking you to take over as Panda for a little while to see if you can identify the kinds of things that Google’s Navneet Panda addressed when faced with what looked like an incomplete patent created to identify sites as parked domain pages, content farm pages, and link farm pages. You’re probably better at this now than he was then.

You’re a subject matter expert.

To put things in perspective, I’m going to include some information about what appears to be the very first Panda patent, and some of Google’s effort behind what they were calling the “high-quality site algorithm.”

I’m going to then include some of the patterns they describe in the patent to identify lower-quality pages, and then describe some of the features I personally would suggest to score and rank a higher-quality site of one type.

Google’s Amit Singhal identified a number of questions about higher-quality sites that he might use, and told us, in the blog post where he listed them, that it was an incomplete list because they didn’t want to make it easy for people to abuse their algorithm.

In my opinion though, any discussion about improving the quality of webpages is one worth having, because it can help improve the quality of the Web for everyone, which Google should be happy to see anyway.

Warning searchers about low-quality content

In “Processing web pages based on content quality,” the original patent filing for Panda, there’s a somewhat mysterious statement that makes it sound as if Google might warn searchers before sending them to a low quality search result, and give them a choice whether or not they might actually click through to such a page.

As it notes, the types of low quality pages the patent was supposed to address included parked domain pages, content farm pages, and link farm pages (yes, link farm pages):

“The processor 260 is configured to receive from a client device (e.g., 110), a request for a web page (e.g., 206). The processor 260 is configured to determine the content quality of the requested web page based on whether the requested web page is a parked web page, a content farm web page, or a link farm web page.

Based on the content quality of the requested web page, the processor is configured to provide for display, a graphical component (e.g., a warning prompt). That is, the processor 260 is configured to provide for display a graphical component (e.g., a warning prompt) if the content quality of the requested web page is at or below a certain threshold.

The graphical component provided for display by the processor 260 includes options to proceed to the requested web page or to proceed to one or more alternate web pages relevant to the request for the web page (e.g., 206). The graphical component may also provide an option to stop proceeding to the requested web page.

The processor 260 is further configured to receive an indication of a selection of an option from the graphical component to proceed to the requested web page, or to proceed to an alternate web page. The processor 260 is further configured to provide for display, based on the received indication, the requested web page or the alternate web page.”

This did not sound like a good idea.

Recently, Google announced in a post on the Google Webmaster Central blog, Promoting modern websites for modern devices in Google search results, that they would start providing warning notices on mobile versions of sites if there were issues on those pages that visitors might go to.

I imagine that as a site owner, you might be disappointed to see such a warning notice shown to searchers about technology used on your site possibly not working correctly on a specific device. That recent blog post mentions Flash as an example of a technology that might not work correctly on some devices. For example, we know that Apple’s mobile devices and Flash don’t work well together.

That’s not a bad warning in that it provides enough information to act upon and fix to the benefit of a lot of potential visitors. 🙂

But imagine if you tried to visit your website in 2011, and instead of getting to the site, you received a Google warning that the page you were trying to visit was a content farm page or a link farm page, and it provided alternative pages to visit as well.

That “your website sucks” warning still doesn’t sound like a good idea. One of the inventors listed on the patent is described on LinkedIn as presently working on the Google Play store. The warning for mobile devices might have been something he brought to Google from his work on this Panda patent.

We know that when the Panda Update was released, it was targeting specific types of pages that people at places such as The New York Times were complaining about, such as parked domains and content farm sites. A follow-up from the Times after the algorithm update was released puts it into perspective for us.

It wasn’t easy to know whether your pages had been targeted by that particular Google update, or whether your site was a false positive. Many site owners ended up posting in the Google Help forums after a Google search engineer invited them to do so if they believed they had been targeted by the update when they shouldn’t have been.

The wording of that invitation is interesting in light of the original name of the Panda algorithm. (Note that the thread was broken into multiple threads when Google migrated posts to new software, and many appear to have disappeared at some point.)

As we were told in the invite from the Google search engineer:

“According to our metrics, this update improves overall search quality. However, we are interested in hearing feedback from site owners and the community as we continue to refine our algorithms. If you know of a high-quality site that has been negatively affected by this change, please bring it to our attention in this thread.

Note that as this is an algorithmic change we are unable to make manual exceptions, but in cases of high quality content we can pass the examples along to the engineers who will look at them as they work on future iterations and improvements to the algorithm.

So even if you don’t see us responding, know that we’re doing a lot of listening.”

The timing for such in-SERP warnings might have been troublesome. A site that mysteriously stops appearing in search results for queries it used to rank well for might be said to have run afoul of Google’s guidelines. A warning like this, though, might be a little like the purposely embarrassing scarlet “A” in Nathaniel Hawthorne’s novel The Scarlet Letter.

A page that shows up in search results with a warning to searchers stating that it is a content farm, a link farm, or a parked domain probably shouldn’t be ranking well to begin with. Having Google continue to display those results ranking highly, showing both a link and a warning to those pages, and then diverting searchers to alternative pages might have been more than those site owners could handle. Keep in mind that the fates of those businesses are usually tied to that detoured traffic.

I can easily imagine lawsuits being filed against Google over such stigmatizing warnings, rather than site owners merely filling up a Google Webmaster Help forum with information about how their sites were impacted by the update.

In retrospect, it is probably a good idea that the warnings hinted at in the original Panda Patent were avoided.

Google seems to think that such warnings are appropriate now when it comes to multiple devices and technologies that may not work well together, like Flash and iPhones.

But there were still issues with how well or how poorly the algorithm described in the patent might work.

In a March 2011 interview with Google’s Head of Search Quality, Amit Singhal, and his team member and Head of Web Spam at Google, Matt Cutts, titled
TED 2011: The “Panda” That Hates Farms: A Q&A With Google’s Top Search Engineers, we learned that Google referred to the algorithm update internally as “Panda,” after an engineer of that name came along and provided suggestions on patterns that could be used to identify high- and low-quality pages.

His input seems to have been pretty impactful—enough for Google to have changed the name of the update, from the “High Quality Site Algorithm” to the “Panda” update.

How the High-Quality Site Algorithm became Panda

Danny Sullivan named the update the “Farmer update” since it supposedly targeted content farm web sites. Soon afterwards the joint interview with Singhal and Cutts identified the Panda codename, and that’s what it’s been called ever since.

Google didn’t completely abandon the name found in the original patent, the “high quality sites algorithm,” as can be seen in the titles of the Google Blog posts it published about the update.

The most interesting of those is the “more guidance” post, in which Amit Singhal lists 23 questions about things Google might look for on a page to determine whether or not it was high-quality. I’ve spent a lot of time since then looking at those questions thinking of features on a page that might convey quality.

The original patent is at:

Processing web pages based on content quality
Inventors: Brandon Bilinski and Stephen Kirkham
Assigned to Google

US Patent 8,775,924

Granted July 8, 2014

Filed: March 9, 2012

Abstract

“Computer-implemented methods of processing web pages based on content quality are provided. In one aspect, a method includes receiving a request for a web page.

The method includes determining the content quality of the requested web page based on whether it is a parked web page, a content farm web page, or a link farm web page. The method includes providing for display, based on the content quality of the requested web page, a graphical component providing options to proceed to the requested web page or to an alternate web page relevant to the request for the web page.

The method includes receiving an indication of a selection of an option from the graphical component to proceed to the requested web page or to an alternate web page. The method further includes providing, based on the received indication, the requested web page or an alternate web page.”

The patent expands on examples of low-quality web pages, which include:

  • Parked web pages
  • Content farm web pages
  • Link farm web pages
  • Default pages
  • Pages that do not offer useful content, and/or pages that contain advertisements and little else

An invitation to crowdsource high-quality patterns

This is the section I mentioned above where I am asking for your help. You don’t have to publish your thoughts on how quality might be identified, but I’m going to start with some examples.

Under the patent, a content quality value score is calculated for every page on a website based upon patterns found on known low-quality pages, “such as parked web pages, content farm web pages, and/or link farm web pages.”

For each of the patterns identified on a page, the content quality value of the page might be reduced based upon the presence of that particular pattern—and each pattern might be weighted differently.
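As a rough sketch, that weighted-reduction scoring might look something like this (the pattern names and weight values are invented for illustration; the patent doesn’t publish Google’s actual patterns or weights):

```python
# Hypothetical sketch of the weighted-pattern scoring the patent describes.
# Pattern names and weights below are invented for illustration only.

PATTERN_WEIGHTS = {
    "parked_domain_text": 0.5,     # e.g., "this domain is for sale"
    "known_ad_network_only": 0.3,  # little on the page besides an ad network
    "link_farm_ratio": 0.2,        # overwhelmingly hyperlinks, little text
}

def content_quality_value(detected_patterns, base_score=1.0):
    """Start from a base quality score and reduce it for each detected
    pattern, with each pattern weighted differently."""
    score = base_score
    for pattern in detected_patterns:
        score -= PATTERN_WEIGHTS.get(pattern, 0.0)
    return max(score, 0.0)
```

A page matching both the parked-domain text and the link-farm ratio patterns would end up with a much lower content quality value than a page matching neither.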

Some simple patterns that might be applied to a low-quality web page might be one or more references to:

  • A known advertising network,
  • A web page parking service, and/or
  • A content farm provider

One of these references may be in the form of an IP address that the destination hostname resolves to, a Domain Name Server (“DNS server”) that the destination domain name is pointing to, an “a href” attribute on the destination page, and/or an “img src” attribute on the destination page.

That’s a pretty simple pattern, but a web page resolving to an IP address known to exclusively serve parked web pages provided by a particular Internet domain registrar can be deemed a parked web page, so it can be pretty effective.

A web page with a DNS server known to be associated with web pages that contain little or no content other than advertisements may very well provide little or no content other than advertising. So that one can be effective, too.
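Sketched in code, those two reference checks might look like this (the parking-service IP addresses and nameservers below are invented placeholders; real detection would rely on curated lists of known services):

```python
import socket

# Hypothetical sketch: flag a page as likely parked if its hostname resolves
# to an IP known to serve only parked pages, or its DNS points at a known
# parking service. These IPs/nameservers are invented placeholders.

KNOWN_PARKING_IPS = {"203.0.113.10", "203.0.113.11"}   # example addresses
KNOWN_PARKING_NAMESERVERS = {"ns1.parking-example.com"}

def looks_parked(ip_address=None, nameserver=None):
    """Apply the two reference patterns from the patent."""
    return (ip_address in KNOWN_PARKING_IPS
            or nameserver in KNOWN_PARKING_NAMESERVERS)

def resolve_and_check(hostname):
    """Resolve the hostname and apply the IP check (requires network)."""
    try:
        return looks_parked(ip_address=socket.gethostbyname(hostname))
    except socket.gaierror:
        # Unresolvable hostnames fall under the "fully functional" checks.
        return False
```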

Some of the patterns listed in the patent don’t seem quite as useful or informative. For example, one states that a web page containing a common typographical error of a bona fide domain name is likely to be a low-quality or non-existent web page. I’ve seen more than a couple of legitimate sites at common misspellings of good domains, so I’m not sure how helpful that pattern is.

Of course, some textual content is a dead giveaway, the patent tells us, with terms such as “domain is for sale,” “buy this domain,” and/or “this page is parked.”

Likewise, a web page with little or no content is probably (but not always) a low-quality web page.

This is a simple but effective pattern, even if not too imaginative:

… page providing 99% hyperlinks and 1% plain text is more likely to be a low-quality web page than a web page providing 50% hyperlinks and 50% plain text.
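The hyperlink-to-plain-text ratio is easy to approximate; here is a rough sketch using Python’s standard-library HTML parser (where you draw the line between high and low quality would be a tuning decision the patent leaves open):

```python
from html.parser import HTMLParser

# Sketch of the hyperlink-vs-plain-text ratio pattern: measure how much of
# a page's visible text sits inside <a> tags.

class LinkTextRatio(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = 0      # nesting depth of open <a> tags
        self.link_chars = 0   # characters of text inside links
        self.text_chars = 0   # all visible text characters

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        n = len(data.strip())
        self.text_chars += n
        if self.in_link:
            self.link_chars += n

def link_ratio(html):
    parser = LinkTextRatio()
    parser.feed(html)
    return parser.link_chars / parser.text_chars if parser.text_chars else 0.0
```

A page scoring near 1.0 is almost all links, which by this pattern makes it look more like a link farm than a content page.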

Another pattern is one that I often check on and address in site audits; it involves how functional and responsive the pages on a site are.

The determination of whether a web site is fully functional may be based on an HTTP response code, information received from a DNS server (e.g., hostname records), and/or a lack of a response within a certain amount of time. As an example, an HTTP response that is anything other than 200 (e.g., “404 Not Found”) would indicate that a web site is not fully functional.

As another example, a DNS server that does not return authoritative records for a hostname would indicate that the web site is not fully functional. Similarly, a lack of a response within a certain amount of time, from the IP address of the hostname for a web site would indicate that the web site is not fully functional.
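That check is straightforward to sketch; the five-second timeout below is my assumption, since the patent only says “a certain amount of time”:

```python
import socket
import urllib.request
import urllib.error

# Sketch of the "fully functional" check from the patent: anything other
# than an HTTP 200, a DNS failure, or a lack of response within a timeout
# marks the site as not fully functional. The timeout value is an assumption.

def is_fully_functional(url, timeout=5.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:                    # 404, 500, ...
        return False
    except (urllib.error.URLError, socket.timeout):   # DNS failure, no response
        return False
```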

As for user data, it might sometimes play a role as well, as the patent tells us:

“A web page may be suggested for review and/or its content quality value may be adapted based on the amount of time spent on that page.

For example, if a user reaches a web page and then leaves immediately, the brief nature of the visit may cause the content quality value of that page to be reviewed and/or reduced. The amount of time spent on a particular web page may be determined through a variety of approaches. For example, web requests for web pages may be used to determine the amount of time spent on a particular web page.”
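One simple way to approximate time-on-page from request logs, as the patent suggests, is to treat the gap between a user’s consecutive requests as the dwell time on the earlier page (the field names here are illustrative, not from the patent):

```python
from collections import defaultdict

# Sketch: estimate time-on-page from a log of (user, timestamp, url) web
# requests. The gap until a user's next request approximates dwell time on
# the earlier page; a very short gap looks like an immediate bounce.

def dwell_times(requests):
    """requests: iterable of (user_id, unix_timestamp, url), assumed sorted
    by timestamp. Returns {url: [seconds spent before the next request]}."""
    by_user = defaultdict(list)
    for user, ts, url in requests:
        by_user[user].append((ts, url))
    times = defaultdict(list)
    for visits in by_user.values():
        for (ts, url), (next_ts, _) in zip(visits, visits[1:]):
            times[url].append(next_ts - ts)
    return dict(times)
```

Note the limitation the patent glosses over: the last page in a user’s session has no next request, so its dwell time can’t be measured this way.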

My example of some patterns for an e-commerce website

There are a lot of things that you might want to include on an e-commerce site to help indicate that it’s high quality. If you look at the questions that Amit Singhal raised in the last Google Blog post I mentioned above, one of his questions was “Would you be comfortable giving your credit card information to this site?” Patterns that might fit with this question could include:

  • Is there a privacy policy linked to on pages of the site?
  • Is there a “terms of service” page linked to on pages of the site?
  • Is there a “customer service” page or section linked to on pages of the site?
  • Do ordering forms function fully on the site? Do they return 404 pages or 500 server errors?
  • If an order is made, does a thank-you or acknowledgement page show up?
  • Does the site use an https protocol when sending data or personally identifiable data (like a credit card number)?
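A few of those checks could be sketched as simple pattern tests (the phrases searched for are my assumptions about how such patterns might be detected; the patent doesn’t spell them out):

```python
# Hypothetical sketch of checking a page against a few of the trust patterns
# listed above. The phrases and field names are assumptions for illustration.

TRUST_SIGNALS = {
    "privacy_policy": "privacy policy",
    "terms_of_service": "terms of service",
    "customer_service": "customer service",
}

def trust_checklist(url, html):
    """Return which trust signals appear in the page text, plus whether
    the page itself is served over HTTPS."""
    text = html.lower()
    results = {name: phrase in text for name, phrase in TRUST_SIGNALS.items()}
    results["uses_https"] = url.startswith("https://")
    return results
```

The functional checks (do order forms return 404s or 500s, does a thank-you page appear) would build on the same HTTP response-code test described earlier in the patent.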

As I mentioned above, the patent tells us that a high-quality content score for a page might be different from one pattern to another.

The questions from Amit Singhal imply a lot of other patterns, but as SEOs who work on, build, and improve a lot of websites, this is an area where we probably have more expertise than Google’s search engineers.

What other questions would you ask if you were tasked with looking at this original Panda patent? What patterns would you suggest looking for when trying to identify high- or low-quality pages? Perhaps if we share with one another the patterns or features on a site that Google might look for algorithmically, we can build pages that Google won’t interpret as low quality. I provided a few patterns for an e-commerce site above. What patterns would you suggest?

(Illustrations: Devin Holmes @DevinGoFish)

