The Meta Referrer Tag: An Advancement for SEO and the Internet

Posted by Cyrus-Shepard

The movement to make the Internet more secure through HTTPS brings several useful advancements for webmasters. In addition to security improvements, HTTPS promises future technological advances and potential SEO benefits for marketers.

HTTPS in search results is rising. Recent MozCast data from Dr. Pete shows nearly 20% of first page Google results are now HTTPS.

Sadly, HTTPS also has its downsides.

Marketers run into their first challenge when they switch regular HTTP sites over to HTTPS. The switch is technically challenging and typically involves routing your site through a series of 301 redirects. Historically, these types of redirects are associated with a loss of link equity (thought to be around 15%), which can lead to a loss in rankings. This can offset any SEO advantage Google claims for switching.

Ross Hudgens perfectly summed it up in this tweet:

Many SEOs have anecdotally shared stories of HTTPS sites performing well in Google search results (and our soon-to-be-published Ranking Factors data seems to support this). However, the short-term effect of a large migration can be hard to take. When Moz recently switched to HTTPS to provide better security to our logged-in users, we saw an 8-9% dip in our organic search traffic.

Problem number two is the subject of this post. It involves the loss of referral data. Typically, when one site sends traffic to another, information is sent that identifies the originating site as the source of traffic. This invaluable data allows people to see where their traffic is coming from, and helps spread the flow of information across the web.

SEOs have long used referrer data for a number of beneficial purposes. Oftentimes, people will link back or check out the site sending traffic when they see the referrer in their analytics data. Spammers know this works, as evidenced by the recent increase in referrer spam:

This process stops when traffic flows from an HTTPS site to a non-secure HTTP site. In this case, no referrer data is sent. Webmasters can’t know where their traffic is coming from.

Here’s how referral data to my personal site looked when Moz switched to HTTPS. I lost all visibility into where my traffic came from.

It’s (not provided) all over again!

Enter the meta referrer tag

While we can’t solve the ranking challenges imposed by switching a site to HTTPS, we can solve the loss of referral data, and it’s actually super-simple.

Almost completely unknown to most marketers, the relatively new meta referrer tag (it’s actually been around for a few years) was designed to help out in these situations.

Better yet, the tag allows you to control how your referrer information is passed.

The meta referrer tag works with most browsers to pass referrer information in a manner defined by the user. Traffic remains encrypted and all the benefits of using HTTPS remain in place, but now you can pass referrer data to all websites, even those that use HTTP.

How to use the meta referrer tag

What follows are extremely simplified instructions for using the meta referrer tag. For more in-depth understanding, we highly recommend referring to the W3C working draft of the spec.

The meta referrer tag is placed in the <head> section of your HTML, and references one of five states, which control how browsers send referrer information from your site. The five states are:

  1. None: Never pass referral data
    <meta name="referrer" content="none">
    
  2. None When Downgrade: Sends referrer information to secure HTTPS sites, but not insecure HTTP sites
    <meta name="referrer" content="none-when-downgrade">
    
  3. Origin Only: Sends only the scheme, host, and port (i.e., the origin) and strips the rest of the URL; for example, https://moz.com/example.html would simply send https://moz.com
    <meta name="referrer" content="origin">
    

  4. Origin When Cross-Origin: Sends the full URL as the referrer when the target has the same scheme, host, and port (i.e., the same origin), regardless of whether it’s HTTP or HTTPS, while sending origin-only referral information to external sites. (Note: there is a typo in the official spec; future versions should be “origin-when-cross-origin”.)
    <meta name="referrer" content="origin-when-crossorigin">
    
  5. Unsafe URL: Always passes the full URL string as the referrer. Note that if your URLs contain any sensitive information, this isn’t the safest option. By default, URL fragments, username, and password are automatically stripped out.
    <meta name="referrer" content="unsafe-url">
    

The meta referrer tag in action

By clicking the link below, you can get a sense of how the meta referrer tag works.

Check Referrer

Boom!

We’ve set the meta referrer tag for Moz to “origin”, which means when we link out to another site, we pass our scheme, host, and port. The end result is you see https://moz.com as the referrer, stripped of the full URL path (/meta-referrer-tag).
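If you want to see for yourself what a destination server actually receives, here’s a minimal sketch (purely illustrative; it is not the test page linked above, and the port and messages are arbitrary) that just echoes back whatever Referer header the browser sends:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ReferrerEcho(BaseHTTPRequestHandler):
        def do_GET(self):
            # Browsers put the referring page (if any) in the Referer header.
            referrer = self.headers.get("Referer", "(no referrer sent)")
            body = f"Referrer seen by this server: {referrer}".encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Link to http://localhost:8000/ from pages using different referrer
        # states and click through to compare what shows up.
        HTTPServer(("", 8000), ReferrerEcho).serve_forever()

Clicking through from an HTTPS page with the “origin” policy should show only the scheme and host; with no policy set, an HTTPS-to-HTTP click shows no referrer at all.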

My personal site typically receives several visits per day from Moz. Here’s what my analytics data looked like before and after we implemented the meta referrer tag.

For simplicity and security, most sites may want to implement the “origin” state, but there are drawbacks.

One negative side effect was that as soon as we implemented the meta referrer tag, our AdRoll analytics, which we use for retargeting, stopped working. It turns out that AdRoll uses our referrer information for analytics, but the meta referrer tag “origin” state meant that the only URL they ever saw reported was https://moz.com.

Conclusion

We love the meta referrer tag because it keeps information flowing on the Internet. It’s the way the web is supposed to work!

It helps marketers and webmasters see exactly where their traffic is coming from. It encourages engagement, communication, and even linking, which can lead to improvements in SEO.


Reblogged 4 years ago from tracking.feedpress.it

1 Day After Mobilegeddon: How Far Did the Sky Fall?

Posted by Dr-Pete

Even clinging to the once towering bridge, the only thing Kayce could see was desert. Yesterday, San Francisco hummed with life, but now there was nothing but the hot hiss of the wind. Google’s Mobilegeddon blew out from Mountain View like Death’s last exhale, and for the first time since she regained consciousness, Kayce wondered if she was the last SEO left alive.

We have a penchant for melodrama, and the blogosphere loves a conspiracy, but after weeks of speculation bordering on hysteria, it’s time to see what the data has to say about Google’s Mobile Update. We’re going to do something a little different – this post will be updated periodically as new data comes in. Stay tuned to this post/URL.

If you watch MozCast, you may be unimpressed with this particular apocalypse:

Temperatures hit 66.1°F on the first official day of Google’s Mobile Update (the system is tuned to an average of 70°F). Of course, the problem is that this system only measures desktop temperatures, and as we know, Google’s Mobile Update should only impact mobile SERPs. So, we decided to build MozCast Mobile, which separately tracks mobile SERPs (Android, specifically) across the same 10K keyword set. Here’s what we saw for the past 7 days on MozCast Mobile:

While the temperature across mobile results on April 21st was slightly higher (73.7°F), you’ll also notice that most of the days are slightly higher and the pattern of change is roughly the same. It appears that the first day of the Mobile Update was a relatively quiet day.

There’s another metric we can look at, though. Since building MozCast Mobile, we’ve also been tracking how many page-1 URLs show the “Mobile-friendly” tag. Presumably, if mobile-friendly results are rewarded, we’ll expect that number to jump. Here’s the last 7 days of that stat:

As of the morning of April 22nd, 70.1% of the URLs we track carried the “Mobile-friendly” tag. That sounds like a lot, but that number hasn’t changed much over the past few days. Interestingly, the number has crept up over the past 2 weeks from a low of 66.3%. It’s unclear whether this is due to changes Google made or changes webmasters made, but I suspect this small uptick indicates sites making last-minute changes to meet the mobile deadline. It appears Google is getting what they want from us, one way or another.

Tracking a long roll-out

Although Google has repeatedly cited April 21st, they’ve also said that this update could take days or weeks. If an update is spread out over weeks, can we accurately measure the flux? The short answer is: not very well. We can measure flux over any time-span, but search results naturally change over time – we have no real guidance to tell us what’s normal over longer periods.

The “Mobile-friendly” tag tracking is one solution – this should gradually increase – but there’s another metric we can look at. If mobile results continue to diverge from desktop results, then the same-day flux between the two sets of results should increase. In other words, mobile results should get increasingly different from desktop results with each day of the roll-out. Here’s what that cross-flux looks like:

I’m using raw flux data here, since the temperature conversion isn’t calibrated to this data. This comparison is tricky, because many sites use different URLs for mobile vs. desktop. I’ve stripped out the obvious cases (“m.” and “mobile.” sub-domains), but that still leaves a lot of variants.
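To make the idea concrete, here’s a simplified sketch of that comparison. MozCast’s actual flux formula isn’t public, so this just measures, for one keyword, what share of page-1 slots differ between desktop and mobile once the obvious “m.” and “mobile.” subdomains are normalized away:

    from urllib.parse import urlparse

    def normalize(url):
        """Strip 'm.' and 'mobile.' subdomains so mobile/desktop URLs can match."""
        parts = urlparse(url)
        host = parts.netloc.lower()
        for prefix in ("m.", "mobile."):
            if host.startswith(prefix):
                host = host[len(prefix):]
        return host + parts.path

    def cross_flux(desktop_urls, mobile_urls):
        """Fraction of page-1 slots where the mobile and desktop URLs disagree."""
        pairs = zip(map(normalize, desktop_urls), map(normalize, mobile_urls))
        diffs = sum(1 for d, m in pairs if d != m)
        return diffs / max(len(desktop_urls), 1)

    # Toy example: results match except one slot -> 1 of 3 slots differ (~0.33).
    desktop = ["https://moz.com/a", "https://example.com/b", "https://foo.com/c"]
    mobile  = ["https://m.moz.com/a", "https://example.com/b", "https://other.com/x"]
    print(cross_flux(desktop, mobile))

Averaged across the full keyword set and tracked day over day, a rising number would suggest mobile results pulling further away from desktop as the roll-out progresses.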

Compared to historical levels, we’re not seeing much movement on April 21st. The bump on April 15-16 is probably an error – Google made a change to In-depth Articles on mobile that created some bad data. So, again, not much going on here, but this should give us a view to see compounding changes over time.

Tracking potential losers

No sites are reporting major hits yet, but by looking at the “Mobile-friendly” tag for the top domains in MozCast Mobile, we can start to piece together who might get hit by the update. Here are the top 20 domains (in our 10K data set) as of April 21st, along with the percent of their ranking URLs that are tagged as mobile-friendly:

    1. en.m.wikipedia.org — 96.3%
    2. www.amazon.com — 62.3%
    3. m.facebook.com — 100.0%
    4. m.yelp.com — 99.9%
    5. m.youtube.com — 27.8%
    6. twitter.com — 99.8%
    7. www.tripadvisor.com — 92.5%
    8. www.m.webmd.com — 100.0%
    9. mobile.walmart.com — 99.5%
    10. www.pinterest.com — 97.5%
    11. www.foodnetwork.com — 69.9%
    12. www.ebay.com — 97.7%
    13. www.mayoclinic.org — 100.0%
    14. m.allrecipes.com — 97.1%
    15. m.medlineplus.gov — 100.0%
    16. www.bestbuy.com — 90.2%
    17. www.overstock.com — 98.6%
    18. m.target.com — 41.4%
    19. www.zillow.com — 99.6%
    20. www.irs.gov — 0.0%

I’ve bolded any site under 75% – the IRS is our big Top 20 trouble spot, although don’t expect IRS.gov to stop ranking for tax queries anytime soon. Interestingly, YouTube’s mobile site only shows as mobile-friendly about a quarter of the time in our data set – this will be a key case to watch. Note that Google could consider a site mobile-friendly without showing the “Mobile-friendly” tag, but it’s the simplest/best proxy we have right now.

Changes beyond rankings

It’s important to note that, in many ways, mobile SERPs are already different from desktop SERPs. The most striking difference is design, but that’s not the only change. For example, Google recently announced that they would be dropping domains in mobile display URLs. Here’s a sample mobile result from my recent post:

Notice the display URL, which starts with the brand name (“Moz”) instead of our domain name. That’s followed by a breadcrumb-style URL that uses part of the page name. Expect this to spread, and possibly even hit desktop results in the future.

While Google has said that vertical results wouldn’t change with the April 21st update, that statement is a bit misleading when it comes to local results. Google already uses different styles of local pack results for mobile, and those pack results appear in different proportions. For example, here’s a local “snack pack” on mobile (Android):

Snack packs appear in only 1.5% of the local rankings we track for MozCast Desktop, but they’re nearly 4X as prevalent (6.0%) on MozCast Mobile (for the same keywords and locations). As these new packs become more prevalent, they take away other styles of packs, and create new user behavior. So, to say local is the same just because the core algorithm may be the same is misleading at best.

Finally, mobile adds entirely new entities, like app packs on Android (from a search for “jobs”):

These app packs appear on a full 8.4% of the mobile SERPs we’re tracking, including many high-volume keywords. As I noted in my recent post, these app packs also consume page-1 organic slots.

A bit of good news

If you’re worried that you may be too late to the mobile game, it appears there is some good news. Google will most likely reprocess new mobile-friendly pages quickly. Just in the past few days, Moz redesigned our blog to be mobile-friendly. In less than 24 hours, some of our main blog pages were already showing the “Mobile-friendly” tag:

However big this update ultimately ends up being, Google’s push toward mobile-first design and their clear public stance on this issue strongly signal that mobile-friendly sites are going to have an advantage over time.

Stay tuned to this post (same URL) for the next week or two – I’ll be updating charts and data as the Mobile Update continues to roll out. If the update really does take days or weeks, we’ll do our best to measure the long-term impact and keep you informed.


Reblogged 4 years ago from tracking.feedpress.it

SEO Braintrust Podcast 3/14/2011: Google Panda Update Part 2/2

Dan Thies and Andrea Warner discuss Google’s “Panda” update of February 24, 2011 – and how webmasters can respond if they are affected. For the full podcast …

Reblogged 4 years ago from www.youtube.com

best seo success factor in 2013

coventrywebstudio provides the best SEO factors for webmasters in 2013.

Reblogged 4 years ago from www.youtube.com

How We Fixed the Internet (Ok, an Answer Box)

Posted by Dr-Pete

Last year, Google expanded the Knowledge Graph to use data extracted (*cough* scraped) from the index to create answer boxes. Back in October, I wrote about a failed experiment. One of my posts, an odd dive into Google’s revenue, was being answer-fied for the query “How much does Google make?”:

Objectively speaking, even I could concede that this wasn’t a very good answer in 2014. I posted it on Twitter, and David Iwanow asked the inevitable question:

Enthusiasm may have gotten the best of us, a few more people got involved (like my former Moz colleague Ruth Burr Reedy), and suddenly we were going to fix this once and for all:

There Was Just One Problem

I updated the post, carefully rewriting the first paragraph to reflect the new reality of Google’s revenue. I did my best to make the change user-friendly, adding valuable information but not disrupting the original post. I did, however, completely replace the old text that Google was scraping.

Within less than a day, Google had re-cached the content, and I just had to wait to see the new answer box. So, I waited, and waited… and waited. Two months later, still no change. Some days, the SERP showed no answer box at all (although I’ve since found these answer boxes are very dynamic), and I was starting to wonder if it was all a mistake.

Then, Something Happened

Last week, months after I had given up, I went to double-check this query for entirely different reasons, and I saw the following:

Google had finally updated the answer box with the new text, and they had even pulled an image from the post. It was a strange choice of images, but in fairness, it was a strange post.

Interestingly, Google also added the publication date of the post, perhaps recognizing that outdated answers aren’t always useful. Unfortunately, this doesn’t reflect the timing of the new content, but that’s understandable – Google doesn’t have easy access to that data.

It’s interesting to note that sometimes Google shows the image, and sometimes they don’t. This seems to be independent of whether the SERP is personalized or incognito. Here’s a capture of the image-free version, along with the #1 organic ranking:

You’ll notice that the #1 result is also my Moz post, and that result has an expanded meta description. So, the same URL is essentially double-dipping this SERP. This isn’t always the case – answers can be extracted from URLs that appear lower on page 1 (although almost always page 1, in my experience). Anecdotally, it’s also not always the case that the organic result ends up getting an expanded meta description.

However, it definitely seems that some of the quality signals driving organic ranking and expanded meta descriptions are also helping Google determine whether a query deserves a direct answer. Put simply, it’s not an accident that this post was chosen to answer this question.

What Does This Mean for You?

Let’s start with the obvious – Yes, the v2 answer boxes (driven by the index, not Freebase/WikiData) can be updated. However, the update cycle is independent of the index’s refresh cycle. In other words, just because a post is re-cached, it doesn’t mean the answer box will update. Presumably, Google is creating a second Knowledge Graph, based on the index, and this data is only periodically updated.

It’s also entirely possible that updating could cause you to lose an answer box, if the new content isn’t a strong match to the question or its quality comes into question. Here’s an interesting question – on a query where a competitor has an answer box, could you change your own content enough to either replace them or knock out the answer box altogether? We are currently testing this question, but it may be a few more months before we have any answers.

Another question is what triggers this style of answer box in the first place? Eric Enge has an in-depth look at 850,000 queries that’s well worth your time, and in many cases Google is still triggering on obvious questions (“how”, “what”, “where”, etc.). Nouns that could be interpreted as ambiguous can also trigger the new answer boxes. For example, a search for “ruby” is interpreted by Google as roughly meaning “What is Ruby?”:

This answer box also triggers “Related topics” that use content pulled from other sites but drive users to more Google searches. The small, gray links are the source sites. The much more visible, blue links are more Google searches.

Note that these also have to be questions (explicit or implied) that Google can’t answer with their curated Knowledge Graph (based on sources like Freebase and WikiData). So, for example, the question “When is Mother’s Day?” triggers an older-style answer:

Sites offering this data aren’t going to have a chance to get attribution, because Google essentially already owns the answer to this question as part of their core Knowledge Graph.

Do You Want to Be An Answer?

This is where things get tricky. At this point, we have no clear data on how these answer boxes impact CTR, and it’s likely that the impact depends a great deal on the context. I think we’re facing a certain degree of inevitability – if Google is going to list an answer, better that it’s your answer than someone else’s, IMO. On the other hand, what if that answer is so complete that it renders your URL irrelevant? Consider, for example, the SERP for “how to make grilled cheese”:

Sorry, Food Network, but making a grilled cheese sandwich isn’t really that hard, and this answer box doesn’t leave much to the imagination. As these answers get more and more thorough, expect CTRs to fall.

For now, I’d argue that it’s better to have your link in the box than someone else’s, but that’s cold comfort in many cases. These new answer boxes represent what I feel is a dramatic shift in the relationship between Google and webmasters, and they may be tipping the balance. For now, we can’t do much but wait, see, and experiment.


Reblogged 4 years ago from tracking.feedpress.it

Leveraging Panda to Get Out of Product Feed Jail

Posted by MichaelC

This is a story about Panda, customer service, and differentiating your store from others selling the same products.

Many e-commerce websites get the descriptions, specifications, and imagery for products they sell from feeds or databases provided by the
manufacturers. The manufacturers might like this, as they control how their product is described and shown. However, it does their retailers
no good when they are trying to rank for searches for those products and they’ve got the exact same content as every other retailer. If the content
in the feed is thin, then you’ll have pages with…well….thin content. And if there’s a lot of content for the products, then you’ll have giant blocks of content that
Panda might spot as being the same as they’ve seen on many other sites. To rub salt in the wound, if the content is really crappy, badly written,
or downright wrong, then the retailers’ sites will look low-quality to Panda and users as well.

Many webmasters see Panda as a type of Google penalty—but it’s not, really. Panda is a collection of measurements Google
is taking of your web pages to try and give your pages a rating on how happy users are likely to be with those pages.
It’s not perfect, but then again—neither is your website.

Many SEO folks (including me) tend to focus on the kinds of tactical and structural things you can do to make Panda see
your web pages as higher quality: things like adding big, original images, interactive content like videos and maps, and
lots and lots and lots and lots of text. These are all good tactics, but let’s step back a bit and look at a specific
example to see WHY Panda was built to do this, and from that, what we can do as retailers to enrich the content we have
for e-commerce products where our hands are a bit tied—we’re getting a feed of product info from the manufacturers, the same
as every other retailer of those products.

I’m going to use a real-live example that I suffered through about a month ago. I was looking for a replacement sink
stopper for a bathroom sink. I knew the brand, but there wasn’t a part number on the part I needed to replace. After a few Google
searches, I think I’ve found it on Amazon:


Don’t you wish online shopping was always this exciting?

What content actually teaches the customer

All righty… my research has shown me that there are standard sizes for plug stoppers. In fact, I initially ordered a
“universal fit sink stopper.” Which didn’t fit. Then I found 3 standard diameters, and 5 or 6 standard lengths.
No problem…I possess that marvel of modern tool chests, a tape measure…so I measure the part I have that I need to replace. I get about 1.5″ x 5″.
So let’s scroll down to the product details to see if it’s a match:

Kohler sink stopper product info from hell

Whoa. 1.2 POUNDS? This sink stopper must be made of
Ununoctium.
The one in my hand weighs about an ounce. But the dimensions
are way off as well: a 2″ diameter stopper isn’t going to fit, and mine needs to be at least an inch longer.

I scroll down to the product description…maybe there’s more detail there, maybe the 2″ x 2″ is the box or something.

I've always wanted a sink stopper designed for long long

Well, that’s less than helpful, with a stupid typo AND incorrect capitalization AND a missing period at the end.
Doesn’t build confidence in the company’s quality control.

Looking at the additional info section, maybe this IS the right part…the weight quoted in there is about right:

Maybe this is my part after all

Where else customers look for answers

Next I looked at the questions and answers bit, which convinced me that it PROBABLY was the right part:

Customers will answer the question if the retailer won't...sometimes.

If I’d been smart, I would have hedged my bets by doing what a bunch of other customers also did: buy a bunch of different parts,
and surely one of them would fit. Could there
possibly be a clearer signal that the product info was lacking than this?

If you can't tell which one to buy, buy them all!

In this case, that was probably smarter than spending another 1/2 hour of my time snooping around online. But in general, people
aren’t going to be willing to buy THREE of something just to make sure they get the right one. This cheap part was an exception.

So, surely SOMEONE out there has the correct dimensions of this part on their site—so I searched for the part number I saw on the Amazon
listing. But as it turned out, that crappy description and wrong weight and dimensions were on every site I found…because they came from
the manufacturer.

Better Homes and Gardens...but not better description.

A few of the sites had edited out the “designed for long long” bit, but apart from that, they were all the same.

What sucks for the customer is an opportunity for you

Many, many retailers are in this same boat—they get their product info from the manufacturer, and if the data sucks in their feed,
it’ll suck on their site. Your page looks weak to both users and to Panda, and it looks the same as everybody else’s page for that product…to
both users and to Panda. So (a) you won’t rank very well, and (b) if you DO manage to get a customer to that page, it’s not as likely to convert
to a sale.

What can you do to improve on this? Here’s a few tactics to consider.

1. Offer your own additional description and comments

Add a new field to your CMS for your own write-ups on products, and when you discover issues like the above, you can add your own information—and
make it VERY clear what’s the manufacturer’s stock info and what you’ve added (that’s VALUE-ADDED) as well. My client
Sports Car Market magazine does this with their collector car auction reports in their printed magazine:
they list the auction company’s description of the car, then their reporter’s assessment of the car. This is why I buy the magazine and not the auction catalog.

2. Solicit questions

Be sure you solicit questions on every product page—your customers will tell you what’s wrong or what important information is missing. Sure,
you’ve got millions of products to deal with, but what the customers are asking about (and your sales volume of course) will help you prioritize as well as
find the problems (read: opportunities).

Amazon does a great job of enabling this, but in this case, I used the Feedback option to update the product info,
and got back a total
bull-twaddle email from the seller about how the dimensions are in the product description, thank you for shopping with us, bye-bye.
I tried to help them, for free, and they shat on me.

3. But I don’t get enough traffic to get the questions

Don’t have enough site volume to get many customer requests? No problem, the information is out there for you on Amazon :-).
Take your most important products, and look them up on Amazon, and see what questions are being asked—then answer those ONLY on your own site.

4. What fits with what?

Create fitment/cross-reference charts for products.
You probably have in-house knowledge of what products fit/are compatible with what other products.
Just because YOU know a certain accessory fits all makes and models, because it’s some industry-standard size, doesn’t mean that the customer knows this.

If there’s a particular way to measure a product so you get the correct size, explain that (with photos of what you’re measuring, if it seems
at all complicated). I’m getting a new front door for my house. 

  • How big is the door I need? 
  • Do I measure the width of the door itself, or the width of the
    opening (probably 1/8″ wider)? 
  • Or if it’s pre-hung, do I measure the frame too? Is it inswing or outswing?
  • Right or left hinged…am I supposed to
    look at the door from inside the house or outside to figure this out? 

If you’re a door seller, this is all obvious stuff,
but it wasn’t obvious to me, and NOT having the info on a website means (a) I feel stupid, and (b) I’m going to look at your competitors’ sites
to see if they will explain it…and maybe I’ll find a door on THEIR site I like better anyway.

Again, prioritize based on customer requests.

5. Provide your own photos and measurements

If examples of the physical products are available to you, take your own photos, and take your own measurements.

In fact, take your OWN photo of YOURSELF taking the measurement—so the user can see exactly what part of the product you’re measuring.
In the photo below, you can see that I’m measuring the diameter of the stopper, NOT the hole in the sink, NOT the stopper plus the rubber gasket.
And no, Kohler, it’s NOT 2″ in diameter…by a long shot.

Don't just give the measurements, SHOW the measurements

Keep in mind, you shouldn’t have to tear apart your CMS to do any of this. You can put your additions in a new database table, just tied to the
core product content by SKU. In the page template code for the product page, you can check your database to see if you have any of your “extra bits” to display
alongside the feed content, and this way keep it separate from the core product catalog code. This will make updates to the CMS/product catalog less painful as well.
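Here’s a rough sketch of what that separation might look like. The table and column names are purely illustrative (your CMS and feed structure will differ); the point is that the value-added content lives in its own table keyed by SKU and is only merged in at render time:

    import sqlite3

    conn = sqlite3.connect("store.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS product_extras (
            sku              TEXT PRIMARY KEY,
            our_description  TEXT,  -- your own write-up, kept separate from the feed copy
            our_measurements TEXT,  -- dimensions you actually measured yourself
            our_photo_path   TEXT   -- your own photo, e.g. of the measurement being taken
        )
    """)
    conn.commit()

    def render_product(feed_product):
        """Render the manufacturer's feed content first, then append our additions if any."""
        html = [feed_product["manufacturer_html"]]
        row = conn.execute(
            "SELECT our_description, our_measurements, our_photo_path "
            "FROM product_extras WHERE sku = ?", (feed_product["sku"],)
        ).fetchone()
        if row:
            description, measurements, photo = row
            html.append("<h3>Our notes on this product</h3>")
            if description:
                html.append(f"<p>{description}</p>")
            if measurements:
                html.append(f"<p>As measured by us: {measurements}</p>")
            if photo:
                html.append(f'<img src="{photo}" alt="Our own measurement photo">')
        return "\n".join(html)

Because the extras table is optional per SKU, products without additions render exactly as before, and feed refreshes never overwrite your own content.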

Fixing your content doesn’t have to be all that difficult, nor expensive

At this point, you’re probably thinking “hey, but I’ve got 1.2 million SKUs, and if I were to do this, it’d take me 20 years to update all of them.”
FINE. Don’t update all of them. Prioritize, based on factors like what you sell the most of, what you make the best margin on, what customers
ask questions about the most, etc. Maybe concentrate on your top 5% in terms of sales, and do those first. Take all that money you used to spend
buying spammy links every month, and spend it instead on junior employees or interns doing the product measurements, extra photos, etc.

And don’t be afraid to spend a little effort on a low value product, if it’s one that frequently gets questions from customers.
Simple things can make a life-long fan of the customer. I once needed to replace a dishwasher door seal, and didn’t know if I needed special glue,
special tools, how to cut it to fit with or without overlap, etc.
I found a video on how to do the replacement on
RepairClinic.com. So easy!
They got my business for the $10 seal, of course…but now I order my $50 fridge water filter from them every six months as well.

Benefits to your conversion rate

Certainly the tactics we’ve talked about will improve your conversion rate from visitors to purchasers. If JUST ONE of those sites I looked at for that damn sink stopper
had the right measurement (and maybe some statement about how the manufacturer’s specs above are actually incorrect, we measured, etc.), I’d have stopped right there
and bought from that site.

What does this have to do with Panda?

But, there’s a Panda benefit here too. You’ve just added a bunch of additional, unique text to your site…and maybe a few new unique photos as well.
Not only are you going to convert better, but you’ll probably rank better too.

If you’re NOT Amazon, or eBay, or Home Depot, etc., then Panda is your secret weapon to help you rank against those other sites whose backlink profiles are
stronger than
carbon fibre (that’s a really cool video, by the way).
If you saw my
Whiteboard Friday on Panda optimization, you’ll know that
Panda tuning can overcome incredible backlink profile deficits.

It’s go time

We’re talking about tactics that are time-consuming, yes—but relatively easy to implement, using relatively inexpensive staff (and in some
cases, your customers are doing some of the work for you).
And it’s something you can roll out a product at a time.
You’ll be doing things that really DO make your site a better experience for the user…we’re not just trying to trick Panda’s measurements.

  1. Your pages will rank better, and bring more traffic.
  2. Your pages will convert better, because users won’t leave your site, looking elsewhere for answers to their questions.
  3. Your customers will be more loyal, because you were able to help them when nobody else bothered.

Don’t be held hostage by other people’s crappy product feeds. Enhance your product information with your own info and imagery.
Like good link-building and outreach, it takes time and effort, but both Panda and your site visitors will reward you for it.


Reblogged 4 years ago from tracking.feedpress.it

12 Common Reasons Reconsideration Requests Fail

Posted by Modestos

There are several reasons a reconsideration request might fail. But some of the most common mistakes site owners and inexperienced SEOs make when trying to lift a link-related Google penalty are entirely avoidable. 

Here’s a list of the top 12 most common mistakes made when submitting reconsideration requests, and how you can prevent them.

1. Insufficient link data

This is one of the most common reasons why reconsideration requests fail. This mistake is readily evident each time a reconsideration request gets rejected and the example URLs provided by Google are unknown to the webmaster. Relying only on Webmaster Tools data isn’t enough, as Google has repeatedly said. You need to combine data from as many different sources as possible. 

A good starting point is to collate backlink data, at the very least:

  • Google Webmaster Tools (both latest and sample links)
  • Bing Webmaster Tools
  • Majestic SEO (Fresh Index)
  • Ahrefs
  • Open Site Explorer

If you use any toxic link-detection services (e.g., Linkrisk and Link Detox), then you need to take a few precautions to ensure the following:

  • They are 100% transparent about their backlink data sources
  • They have imported all backlink data
  • You can upload your own backlink data (e.g., Webmaster Tools) without any limitations

If you work on large websites that have tons of backlinks, most of these automated services are very likely used to process just a fraction of the links, unless you pay for one of their premium packages. If you have direct access to the above data sources, it’s worthwhile to download all backlink data, then manually upload it into your tool of choice for processing. This is the only way to have full visibility over the backlink data that has to be analyzed and reviewed later. Starting with an incomplete data set at this early (yet crucial) stage could seriously hinder the outcome of your reconsideration request.
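As a starting point, here’s a rough sketch of that collation step, assuming you’ve exported each source to a CSV with the linking URL in the first column (real export formats differ from tool to tool, and the filenames below are placeholders):

    import csv
    from collections import Counter
    from urllib.parse import urlparse

    SOURCES = ["gwt_links.csv", "bing_links.csv", "majestic_fresh.csv",
               "ahrefs_links.csv", "ose_links.csv"]  # placeholder filenames

    def load_urls(path):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.reader(f):
                if row and row[0].startswith("http"):
                    yield row[0].strip()

    all_links = set()
    for source in SOURCES:
        all_links.update(load_urls(source))

    # Deduplicated master list, plus a per-domain count to get a feel for the scale.
    domain_counts = Counter(urlparse(url).netloc.lower() for url in all_links)
    print(f"{len(all_links)} unique linking URLs across {len(domain_counts)} domains")
    for domain, count in domain_counts.most_common(20):
        print(domain, count)

The deduplicated master list is what you then feed into the crawling and manual review steps described below.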

2. Missing vital legacy information

The more you know about a site’s history and past activities, the better. You need to find out (a) which pages were targeted in the past as part of link building campaigns, (b) which keywords were the primary focus and (c) the link building tactics that were scaled (or abused) most frequently. Knowing enough about a site’s past activities, before it was penalized, can help you home in on the actual causes of the penalty. Also, collect as much information as possible from the site owners.

3. Misjudgement

Misreading your current situation can lead to wrong decisions. One common mistake is to treat the example URLs provided by Google as gospel and try to identify only links with the same patterns. Google provides a very small number of examples of unnatural links. Often, these examples are the most obvious and straightforward ones. However, you should look beyond these examples to fully address the issues and take the necessary actions against all types of unnatural links. 

Google is very clear on the matter: “Please correct or remove all inorganic links, not limited to the samples provided above.”

Another common area of bad judgement is the inability to correctly identify unnatural links. This is a skill that requires years of experience in link auditing, as well as link building. Removing the wrong links won’t lift the penalty, and may also result in further ranking drops and loss of traffic. You must remove the right links.


4. Blind reliance on tools

There are numerous unnatural link-detection tools available on the market, and over the years I’ve had the chance to try out most (if not all) of them. Because (and without any exception) I’ve found them all very ineffective and inaccurate, I do not rely on any such tools for my day-to-day work. In some cases, a lot of the reported “high risk” links were 100% natural links, and in others, numerous toxic links were completely missed. If you have to manually review all the links to discover the unnatural ones, ensuring you don’t accidentally remove any natural ones, it makes no sense to pay for tools. 

If you solely rely on automated tools to identify the unnatural links, you will need a miracle for your reconsideration request to be successful. The only tool you really need is a powerful backlink crawler that can accurately report the current link status of each URL you have collected. You should then manually review all currently active links and decide which ones to remove. 
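As an illustration of what such a crawler boils down to, here’s a bare-bones sketch that fetches each linking page and reports whether it still links to your domain and whether that link is nofollowed. A real crawler also needs politeness delays, retries, redirect handling, and often JavaScript rendering; the domain and URL below are placeholders:

    import urllib.request
    from html.parser import HTMLParser

    TARGET_DOMAIN = "example.com"  # placeholder for your own site

    class LinkFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.found = False
            self.nofollow = False

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            href = attrs.get("href") or ""
            if TARGET_DOMAIN in href:
                self.found = True
                if "nofollow" in (attrs.get("rel") or ""):
                    self.nofollow = True

    def link_status(page_url):
        try:
            with urllib.request.urlopen(page_url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception as exc:
            return f"unreachable ({exc})"
        finder = LinkFinder()
        finder.feed(html)
        if not finder.found:
            return "link no longer present"
        return "live (nofollow)" if finder.nofollow else "live (followed)"

    for url in ["http://spammy-directory.example/page-about-us"]:  # your collected URLs
        print(url, "->", link_status(url))

The output tells you which links are genuinely still live and therefore still need manual judgement and, where necessary, outreach.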

I could write an entire book on the numerous flaws and bugs I have come across each time I’ve tried some of the most popular link auditing tools. A lot of these issues can be detrimental to the outcome of the reconsideration request. I have seen many reconsideration requests fail because of this. If Google cannot algorithmically identify all unnatural links and must operate entire teams of humans to review the sites (and their links), you shouldn’t trust a $99/month service to identify the unnatural links.

If you have an in-depth understanding of Google’s link schemes, you can build your own process to prioritize which links are more likely to be unnatural, as I described in this post (see sections 7 & 8). In an ideal world, you should manually review every single link pointing to your site. Where this isn’t possible (e.g., when dealing with an enormous number of links, or when resources are unavailable), you should at least focus on the links that show the most “unnatural” signals and manually review them.
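For what it’s worth, a triage pass can be as simple as the sketch below. The signals and weights are purely illustrative (they are neither Google’s criteria nor the exact process from the post linked above); the output is only an ordering for manual review, never a removal list in itself:

    COMMERCIAL_TERMS = {"cheap", "buy", "best price", "discount"}  # example money terms

    def risk_score(link):
        """link: dict with 'anchor', 'domain', 'sitewide' (bool) and 'source_type'."""
        score = 0
        anchor = link["anchor"].lower()
        if any(term in anchor for term in COMMERCIAL_TERMS):
            score += 3      # exact/partial-match commercial anchor text
        if link["sitewide"]:
            score += 2      # sitewide footer/blogroll links
        if link["source_type"] in {"directory", "article-network", "blog-comment"}:
            score += 2      # link types commonly listed under Google's link schemes
        return score

    links = [
        {"anchor": "Moz", "domain": "news.example.org",
         "sitewide": False, "source_type": "editorial"},
        {"anchor": "buy cheap widgets", "domain": "free-directory.example",
         "sitewide": True, "source_type": "directory"},
    ]
    for link in sorted(links, key=risk_score, reverse=True):
        print(risk_score(link), link["domain"], repr(link["anchor"]))  # review top scores first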

5. Not looking beyond direct links

When trying to lift a link-related penalty, you need to look into all the links that may be pointing to your site directly or indirectly. Such checks include reviewing all links pointing to other sites that have been redirected to your site, legacy URLs with external inbound links that have been internally redirected, and third-party sites that include cross-domain canonicals to your site. For sites that used to buy and redirect domains in order to increase their rankings, the quickest solution is to get rid of the redirects. Both Majestic SEO and Ahrefs report redirects, but some manual digging usually reveals a lot more.


6. Not looking beyond the first link

All major link intelligence tools, including Majestic SEO, Ahrefs and Open Site Explorer, report only the first link pointing to a given site when crawling a page. This means that, if you rely too heavily on automated tools to identify links with commercial keywords, they will only take into consideration the first link they discover on a page. If a page on the web links just once to your site, this is no big deal. But if there are multiple links, the tools will miss all but the first one.

For example, if a page has five different links pointing to your site, and the first one includes a branded anchor text, these tools will just report the first link. Most of the link-auditing tools will in turn evaluate the link as “natural” and completely miss the other four links, some of which may contain manipulative anchor text. The more links that get missed this way, the more likely your reconsideration request will fail.
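To see why this matters, the sketch below collects every link on a page that points at your domain, along with its anchor text, rather than stopping at the first one. The domain and sample HTML are placeholders:

    from html.parser import HTMLParser

    TARGET_DOMAIN = "example.com"  # placeholder for your own site

    class AllAnchors(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []     # (href, anchor_text) pairs pointing at TARGET_DOMAIN
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href") or ""
                if TARGET_DOMAIN in href:
                    self._href = href
                    self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    page = ('<p><a href="https://example.com/">Example Inc.</a> reviewed the '
            '<a href="https://example.com/widgets">cheap widgets best price</a> range.</p>')
    parser = AllAnchors()
    parser.feed(page)
    for href, anchor in parser.links:
        print(repr(anchor), "->", href)  # both the branded and the exact-match anchor show up

Run against your collected linking pages, this surfaces the manipulative anchors that a first-link-only report would never show you.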

7. Going too thin

Many SEOs and webmasters (still) feel uncomfortable with the idea of losing links. They cannot accept the idea that links which once helped their rankings are now being devalued and must be removed. There is no point trying to save “authoritative”, unnatural links out of fear of losing rankings. If the main objective is to lift the penalty, then all unnatural links need to be removed.

Often, in the first reconsideration request, SEOs and site owners tend to go too thin, and in the subsequent attempts start cutting deeper. If you are already aware of the unnatural links pointing to your site, try to get rid of them from the very beginning. I have seen examples of unnatural links provided by Google on PR 9/DA 98 sites. Metrics do not matter when it comes to lifting a penalty. If a link is manipulative, it has to go.

In any case, Google’s decision won’t be based only on the number of links that have been removed. Most important in the search giant’s eyes is the quality of the links still pointing to your site. If the remaining links are largely of low quality, the reconsideration request will almost certainly fail.

8. Insufficient effort to remove links

Google wants to see a “good faith” effort to get as many links removed as possible. The higher the percentage of unnatural links removed, the better. Some agencies and SEO consultants tend to rely too much on the use of the disavow tool. However, this isn’t a panacea, and should be used as a last resort for removing those links that are impossible to remove—after exhausting all possibilities to physically remove them via the time-consuming (yet necessary) outreach route. 

Google is very clear on this:


Even if you’re unable to remove all of the links that need to be removed, you must be able to demonstrate that you’ve made several attempts to have them removed, which can have a favorable impact on the outcome of the reconsideration request. Yes, in some cases it might be possible to have a penalty lifted simply by disavowing instead of removing the links, but these cases are rare and this strategy may backfire in the future. When I reached out to ex-Googler Fili Wiese for some advice on the value of removing the toxic links (instead of just disavowing them), his response was very straightforward:


9. Ineffective outreach

Simply identifying the unnatural links won’t get the penalty lifted unless a decent percentage of the links have been successfully removed. The more communication channels you try, the more likely it is that you reach the webmaster and get the links removed. Sending the same email hundreds or thousands of times is highly unlikely to result in a decent response rate. Trying to remove a link from a directory is very different from trying to get rid of a link appearing in a press release, so you should take a more targeted approach with a well-crafted, personalized email. Link removal request emails must be honest and to the point, or else they’ll be ignored.

Tracking the emails will also help in figuring out which messages have been read, which webmasters might be worth contacting again, or alert you to the need to try an alternative means of contacting webmasters.

Creativity, too, can play a big part in the link removal process. For example, it might be necessary to use social media to reach the right contact. Again, don’t trust automated emails or contact form harvesters. In some cases, these applications will pull in any email address they find on the crawled page (without any guarantee of who the information belongs to). In others, they will completely miss masked email addresses or those appearing in images. If you really want to see that the links are removed, outreach should be carried out by experienced outreach specialists. Unfortunately, there aren’t any shortcuts to effective outreach.

10. Quality issues and human errors

All sorts of human errors can occur when filing a reconsideration request. The most common errors include submitting files that do not exist, files that do not open, files that contain incomplete data, and files that take too long to load. You need to triple-check that the files you are including in your reconsideration request are read-only, and that anyone with the URL can fully access them. 

Poor grammar and language is also bad practice, as it may be interpreted as “poor effort.” You should definitely get the reconsideration request proofread by a couple of people to be sure it is flawless. A poorly written reconsideration request can significantly hinder your overall efforts.

Quality issues can also occur with the disavow file submission. Disavowing at the URL level isn’t recommended because the link(s) you want to get rid of are often accessible to search engines via several URLs you may be unaware of. Therefore, it is strongly recommended that you disavow at the domain or sub-domain level.
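For reference, a domain-level disavow file looks like the snippet below (the domains here are placeholders; lines starting with “#” are comments, and a bare URL disavows only that specific address):

    # Links we could not get removed despite repeated outreach
    domain:spammy-directory.example
    domain:paid-links.example
    # Disavow a single URL only when you are sure the link is not
    # reachable through other URLs on that site:
    http://blog.example.net/old-post-with-paid-link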

11. Insufficient evidence

How does Google know you have done everything you claim in your reconsideration request? Because you have to prove each claim is valid, you need to document every single action you take, from sent emails and submitted forms, to social media nudges and phone calls. The more information you share with Google in your reconsideration request, the better. This is the exact wording from Google:

“ …we will also need to see good-faith efforts to remove a large portion of inorganic links from the web wherever possible.”

12. Bad communication

How you communicate your link cleanup efforts is as essential as the work you are expected to carry out. Not only do you need to explain the steps you’ve taken to address the issues, but you also need to share supportive information and detailed evidence. The reconsideration request is the only chance you have to communicate to Google which issues you have identified, and what you’ve done to address them. Being honest and transparent is vital for the success of the reconsideration request.

There is absolutely no point using the space in a reconsideration request to argue with Google. Some of the unnatural link examples they share may not always be useful (e.g., URLs that include nofollow links, removed links, or even no links at all). But taking an argumentative approach virtually guarantees your request will be denied.

Cropped from photo by Keith Allison, licensed under Creative Commons.

Conclusion

Getting a Google penalty lifted requires a good understanding of why you have been penalized, a flawless process and a great deal of hands-on work. Performing link audits for the purpose of lifting a penalty can be very challenging, and should only be carried out by experienced consultants. If you are not 100% sure you can take all the required actions, seek out expert help rather than looking for inexpensive (and ineffective) automated solutions. Otherwise, you will almost certainly end up wasting weeks or months of your precious time, and in the end, see your request denied.


Reblogged 4 years ago from moz.com