Moz Local Officially Launches in the UK

Posted by David-Mihm

To all Moz Local fans in the UK, I’m excited to announce that your wait is over. As the sun rises “across the pond” this morning, Moz Local is officially live in the United Kingdom!

A bit of background

As many of you know, we released the US version of Moz Local in March 2014. After 12 months of terrific growth in the US, and a boatload of technical improvements and feature releases–especially for Enterprise customers–we released the Check Listing feature for a limited set of partner search engines and directories in the UK in April of this year.

Over 20,000 of you have checked your listings (or your clients’ listings) in the last 3-1/2 months. Those lookups have helped us refine and improve the background technology immensely (more on that below). We’ve been just as eager to release the fully-featured product as you’ve been to use it, and the technical pieces have finally fallen into place for us to do so.

How does it work?

The concept is the same as the US version of Moz Local: show you how accurately and completely your business is listed on the most important local search platforms and directories, and optimize and perfect as many of those business listings as we can on your behalf.

For customers specifically looking for you, accurate business listings are obviously important. For customers who might not know about you yet, they’re also among the most important factors for ranking in local searches on Google. Basically, the more times Google sees your name, address, phone, and website listed the same way on quality local websites, the more trust they have in your business, and the higher you’re likely to rank.

Moz Local is designed to help on both these fronts.

To use the product, you simply need to type a name and postcode at moz.com/local. We’ll then show you a list of the closest matching listings we found. We prioritize verified listing information that we find on Google or Facebook, and selecting one of those verified listings means we’ll be able to distribute it on your behalf.

Clicking on a result brings you to a full details report for that listing. We’ll show you how accurate and complete your listings are now, and where they could be after using our product.

Clicking the tabs beneath the Listing Score graphic will show you some of the incompletions and inconsistencies that publishing your listing with Moz Local will address.

For customers with hundreds or thousands of locations, bulk upload is also available using a modified version of your data from Google My Business–feel free to e-mail enterpriselocal@moz.com for more details.

Where do we distribute your data?

We’ve prioritized the most important commercial sites in the UK local search ecosystem, and made them the centerpieces of Moz Local. We’ll update your data directly on globally-important players Factual and Foursquare, and the UK-specific players CentralIndex, Thomson Local, and the Scoot network–which includes key directories like TouchLocal, The Independent, The Sun, The Mirror, The Daily Scotsman, and Wales Online.

We’ll be adding two more major destinations shortly, and for those of you who sign up before that time, your listings will be automatically distributed to the additional destinations when the integrations are complete.

How much does it cost?

The cost per listing is £84/year, which includes distribution to the sites mentioned above with unlimited updates throughout the year, monitoring of your progress over time, geographically-focused reporting, and the ability to find and close duplicate listings right from your Moz Local dashboard–all the great upgrades that my colleague Noam Chitayat blogged about here.

What’s next?

Well, as I mentioned just a couple paragraphs ago, we’ve got two additional destinations to which we’ll be sending your data in very short order. Once those integrations are complete, we’ll be just a few weeks away from releasing our biggest set of features since we launched. I look forward to sharing more about these features at BrightonSEO at the end of the summer!

For those of you around the world in Canada, Australia, and other countries, we know there’s plenty of demand for Moz Local overseas, and we’re working as quickly as we can to build additional relationships abroad. And to our friends in the UK, please let us know how we can continue to make the product even better!

An Open-Source Tool for Checking rel-alternate-hreflang Annotations

Posted by Tom-Anthony

In the Distilled R&D department we have been ramping up the amount of automated monitoring and analysis we do, with an internal system that monitors our clients’ sites both directly and via various data sources, to ensure they remain healthy and to alert us to any problems that arise.

Recently we started work on adding support for rel-alternate-hreflang annotations to this system. In this blog post I’m going to share an open-source Python library we’ve just started work on for this purpose, which makes it easy to read the hreflang entries from a page and identify errors in them.

If you’re not a Python aficionado then don’t despair, as I have also built a ready-to-go tool for you to use, which will quickly do some checks on the hreflang entries for any URL you specify. 🙂

Google’s Search Console (formerly Webmaster Tools) does have some basic rel-alternate-hreflang checking built in, but it is limited in how you can use it and you are restricted to using it for verified sites.

rel-alternate-hreflang checklist

Before we introduce the code, I wanted to quickly review a list of five common and easily made mistakes that we will want to check for when looking at rel-alternate-hreflang annotations:

  • return tag errors – Every alternate language/locale URL of a page should, itself, include a link back to the original page. This makes sense, but I’ve seen people get it wrong fairly often.
  • indirect / broken links – Links to alternate language/region versions of the page should not go via redirects, and should not point to missing or broken pages.
  • multiple entries – There should never be multiple entries for a single language/region combo.
  • multiple defaults – You should never have more than one x-default entry.
  • conflicting modes – rel-alternate-hreflang entries can be implemented via inline HTML, XML sitemaps, or HTTP headers. For any one set of pages only one implementation mode should be used.

So now imagine that we want to automate these checks quickly and simply…
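To get a feel for what that involves, here is a minimal, hand-rolled sketch of just the first check (return tags), assuming the requests and BeautifulSoup libraries are available. It only looks at HTML link tags and does naive URL comparison, which is exactly why a dedicated library is worth having:

import requests
from bs4 import BeautifulSoup

def hreflang_alternates(url):
    """Return a {hreflang_value: alternate_url} map parsed from a page's <link> tags."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # treat 4xx/5xx alternates as broken
    soup = BeautifulSoup(resp.text, "html.parser")
    alternates = {}
    for link in soup.find_all("link", hreflang=True):
        if "alternate" in (link.get("rel") or []):
            alternates[link["hreflang"]] = link.get("href")
    return alternates

def missing_return_tags(url):
    """List alternate URLs that fail check #1: they don't link back to the original page."""
    problems = []
    for hreflang, alt_url in hreflang_alternates(url).items():
        try:
            alt_entries = hreflang_alternates(alt_url)
        except requests.RequestException:
            problems.append((hreflang, alt_url, "not retrievable"))
            continue
        # Naive exact-string comparison; a real check should canonicalize URLs first.
        if url not in alt_entries.values():
            problems.append((hreflang, alt_url, "no return tag"))
    return problems

print(missing_return_tags("http://www.example.com/"))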

Introducing: polly – the hreflang checker library

polly is the name of the library we have developed to help us solve this problem, and we are releasing it as open source so the SEO community can use it freely and build upon it. We only started work on it last week, but we plan to continue developing it, and will also accept contributions to the code from the community, so we expect its feature set to grow rapidly.

If you are not comfortable tinkering with Python, then feel free to skip down to the next section of the post, where there is a tool that is built with polly which you can use right away.

Still here? Ok, great. You can install polly easily via pip:

pip install polly

You can then create a PollyPage() object, which does all the work and stores the data, simply by instantiating the class with the desired URL:

my_page = PollyPage("http://www.facebook.com/")

You can quickly see the hreflang entries on the page by running:

print(my_page.alternate_urls_map)

You can list all the hreflang values encountered on a page, and which languages and regions they cover:

print(my_page.hreflang_values)
print(my_page.languages)
print(my_page.regions)

You can also check various aspects of a page: whether the pages it includes in its rel-alternate-hreflang entries link back, or whether there are entries that are not retrievable (due to 404, 500, and similar errors):

print(my_page.is_default)
print(my_page.no_return_tag_pages())
print(my_page.non_retrievable_pages())
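
Putting those pieces together, a short script can run most of the checklist against a single URL. This is only a sketch based on the attributes and methods shown above; the import path is an assumption, so check the GitHub page for the actual module layout:

from polly import PollyPage  # import path is a guess; see the GitHub page

def report(url):
    page = PollyPage(url)

    # Inventory of the entries found on the page.
    print("hreflang values:", page.hreflang_values)
    print("languages:", page.languages)
    print("regions:", page.regions)
    print("alternate URL map:", page.alternate_urls_map)

    # Checks shown earlier in the post.
    print("x-default flag:", page.is_default)
    print("missing return tags:", page.no_return_tag_pages())
    print("not retrievable:", page.non_retrievable_pages())

report("http://www.facebook.com/")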

Get more instructions and grab the code at the polly GitHub page. Hit me up in the comments with any questions.

Free tool: hreflang.ninja

I have put together a very simple tool that uses polly to run some of the checks we highlighted above as being common mistakes with rel-alternate-hreflang, which you can visit right now and start using:

http://hreflang.ninja

Simply enter a URL and hit enter, and you should see something like:

Example output from the ninja!

The tool shows you the rel-alternate-hreflang entries found on the page, the language and region of those entries, the alternate URLs, and any errors identified with the entry. It is perfect for doing quick’n’dirty checks of a URL to identify any errors.

As we add additional functionality to polly we will be updating hreflang.ninja as well, so please tweet me with feature ideas or suggestions.

To-do list!

This is the first release of polly and currently we only handle annotations that are in the HTML of the page, not those in the XML sitemap or HTTP headers. However, we are going to be updating polly (and hreflang.ninja) over the coming weeks, so watch this space! 🙂

Got suggestions?

With the increasing number of SEO directives and annotations available, and the ever-changing guidelines around how to deploy them, it is important to automate whatever checks you can. Hopefully polly is helpful to the community in this regard, and we want to hear what ideas you have for making these tools more useful – here in the comments or via Twitter.

The Colossus Update: Waking The Giant

Posted by Dr-Pete

Yesterday morning, we woke up to a historically massive temperature spike on MozCast, after an unusually quiet weekend. The 10-day weather looked like this:

That’s 101.8°F, one of the hottest verified days on record, second only to a series of unconfirmed spikes in June of 2013. For reference, the first Penguin update clocked in at 93.1°.

Unfortunately, trying to determine how the algorithm changed from looking at individual keywords (even thousands of them) is more art than science, and even the art is more often Ms. Johnson’s Kindergarten class than Picasso. Sometimes, though, we catch a break and spot something.

The First Clue: HTTPS

When you watch enough SERPs, you start to realize that change is normal. So, the trick is to find the queries that changed a lot on the day in question but are historically quiet. Looking at a few of these, I noticed some apparent shake-ups in HTTP vs. HTTPS (secure) URLs. So, the question becomes: are these anecdotes, or do they represent a pattern?

I dove in and looked at how many URLs for our 10,000 page-1 SERPs were HTTPS over the past few days, and I saw this:

On the morning of June 17, HTTPS URLs on page 1 jumped from 16.9% to 18.4% (a relative day-over-day increase of roughly 9%), after trending up for a few days. This represents the total real-estate occupied by HTTPS URLs, but how did rankings fare? Here are the average rankings across all HTTPS results:

HTTPS URLs also seem to have gotten a rankings boost – dropping (with “dropping” being a positive thing) from an average of 2.96 to 2.79 in the space of 24 hours.
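
If you want to run the same sanity check on your own rank-tracking data, here is a rough sketch of the two calculations. The CSV file and its date/url/rank columns are made up for illustration; this isn't MozCast's actual pipeline:

import pandas as pd

# Hypothetical schema: one row per tracked page-1 result, with
# "date", "url", and "rank" (1-10) columns.
results = pd.read_csv("page1_results.csv", parse_dates=["date"])

is_https = results["url"].str.startswith("https://")

# Share of page-1 real estate occupied by HTTPS URLs, per day.
https_share = is_https.groupby(results["date"]).mean()

# Average ranking position of HTTPS results, per day (lower is better).
https_avg_rank = results[is_https].groupby("date")["rank"].mean()

print(https_share.tail())
print(https_avg_rank.tail())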

Seems pretty convincing, right? Here’s the problem: rankings don’t just change because Google changes the algorithm. We are, collectively, changing the web every minute of the day. Often, those changes are just background noise (and there’s a lot of noise), but sometimes a giant awakens.

The Second Clue: Wikipedia

Anecdotally, I noticed that some Wikipedia URLs seemed to be flipping from HTTP to HTTPS. I ran a quick count, and this wasn’t just a fluke. It turns out that Wikipedia started switching their entire site to HTTPS around June 12 (hat tip to Jan Dunlop). This change is expected to take a couple of weeks.

It’s just one site, though, right? Well, historically, this one site is the #1 largest land-holder across the SERP real-estate we track, with over 5% of the total page-1 URLs in our tracking data (5.19% as of June 17). Wikipedia is a giant, and its movements can shake the entire web.

So, how do we tease this apart? If Wikipedia’s URLs had simply flipped from HTTP to HTTPS, we should see a pretty standard pattern of shake-up. Those URLs would appear to have changed, but the SERPs around them would be quiet. So, I ran an analysis of what the temperature would’ve been if we ignored the protocol (treating HTTP/HTTPS as the same). While slightly lower, that temperature was still a scorching 96.6°F.
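
The core of that analysis is just normalizing away the protocol before comparing one day's URLs to the next. A minimal sketch of the idea (the URLs are illustrative only, and MozCast's temperature metric is more involved than a simple diff):

from urllib.parse import urlparse

def ignore_protocol(url):
    """Collapse http/https so a protocol flip doesn't count as a change."""
    parsed = urlparse(url)
    return parsed.netloc + parsed.path + (("?" + parsed.query) if parsed.query else "")

# Two days of page-1 URLs for the same query (illustrative only).
day1 = ["http://en.wikipedia.org/wiki/Example", "https://moz.com/blog"]
day2 = ["https://en.wikipedia.org/wiki/Example", "https://moz.com/blog"]

changed = [a for a, b in zip(day1, day2) if ignore_protocol(a) != ignore_protocol(b)]
print(changed)  # [] -- the Wikipedia flip no longer registers as movement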

Is it possible that Wikipedia moving to HTTPS also made the site eligible for a rankings boost from previous algorithm updates, thus disrupting page 1 without any code changes on Google’s end? Yes, it is possible – even a relatively small rankings boost for Wikipedia from the original HTTPS algorithm update could have a broad impact.

The Third Clue: Google?

So far, Google has only said that this was not a Panda update. There have been rumors, as recently as SMX Advanced earlier this month, that the HTTPS ranking signal would get a boost, but no timeline was given for when that might happen.

Is it possible that Wikipedia’s publicly announced switch finally gave Google the confidence to boost the HTTPS signal? Again, yes, it’s possible, but we can only speculate at this point.

My gut feeling is that this was more than just a waking giant, even as powerful of a SERP force as Wikipedia has become. We should know more as their HTTPS roll-out continues and their index settles down. In the meantime, I think we can expect Google to become increasingly serious about HTTPS, even if what we saw yesterday turns out not to have been an algorithm update.

In the meantime, I’m going to melodramatically name this “The Colossus Update” because, well, it sounds cool. If this indeed was an algorithm update, I’m sure Google would prefer something sensible, like “HTTPS Update 2” or “Securageddon” (sorry, Gary).

Update from Google: Gary Illyes said that he’s not aware of an HTTPS update (via Twitter):

No comment on other updates, or the potential impact of a Wikipedia change. I feel strongly that there is an HTTPS connection in the data, but as I said – that doesn’t necessarily mean the algorithm changed.

Video Thumbnail – YouTube Thumbnail Trick 2012 Still Working – Verified

This Video Thumbnail – YouTube Thumbnail Trick is Still Working! This is a test of the video thumbnail trick, aka YouTube Thumbnail Trick of 2012 – to verify…

The Month Google Shook the SERPs

Posted by Dr-Pete

As a group, we SEOs still tend to focus most of our attention on just one place – traditional, organic results. In the past two years, I’ve spent a lot of time studying these results and how they change over time. The more I experience the reality of SERPs in the wild, though, the more I’ve become interested in situations like this one (a search for “diabetes symptoms”)…

See the single blue link and half-snippet on the bottom-left? That’s the only thing about this above-the-fold page that most SEOs in 2014 would call “organic”. Of course, it’s easy to find fringe cases, but the deeper I dig into the feature landscape that surrounds and fundamentally alters SERPs, the more I find that the exceptions are inching gradually closer to the rule.

Monday, July 28th was my 44th birthday, and I think Google must have decided to celebrate by giving me extra work (hooray for job security?). In the month between June 28th and July 28th, there were four major shake-ups to the SERPs, all of them happening beyond traditional, organic results. This post is a recap of our data on each of those shake-ups.

Authorship photos disappear (June 28)

On June 25th, Google’s John Mueller made a surprise announcement via Google+:

We had seen authorship shake-ups in the past, but the largest recent drop had measured around 15%. It was clear that Google was rethinking the prevalence of author photos and their impact on perceived quality, but most of us assumed this would be a process of small tweaks. Given Google’s push toward Google+ and its inherent tie-in with authorship, not a single SEO I know had predicted a complete loss of authorship photos.

Yet, over the next few days, culminating on the morning of June 28th, a total loss of authorship photos is exactly what happened:

While some authorship photos still appeared in personalized results, the profile photos completely disappeared from general results, after previously being present on about 21% of the SERPs that MozCast tracks. It’s important to note that the concept of authorship remains, and author bylines are still being shown (we track that at about 24%, as of this writing), but the overall visual impact was dramatic for many SERPs.

In-depth gets deeper (July 2nd)

Most SEOs still don’t pay much attention to Google’s “In-depth Articles,” but they’ve been slowly gaining SERP share. When we first started tracking them, they popped up on about 3.5% of the searches MozCast covers. This data seems to only get updated periodically, and the number had grown to roughly 6.0% by the end of June 2014. On the morning of July 2nd, I (and, seemingly, everyone else) missed a major change:

Overnight, the presence of in-depth articles jumped from 6.0% to 12.7%, more than doubling (a +112% increase, to be precise). Some examples of queries that gained in-depth articles include:

  • xbox 360
  • hotels
  • raspberry pi
  • samsung galaxy tab
  • job search
  • pilates
  • payday loans
  • apartments
  • car sales
  • web design

Here’s an example set of in-depth results for a term SEOs know all too well, “payday loans”:

The motivation for this change is unclear, and it comes even as Google continues to test designs with pared down in-depth results (almost all of their tests seem to take up less space than the current design). Doubling this feature hardly indicates a lack of confidence, though, and many competitive terms are now showing in-depth results.

Video looks more like radio (July 16th)

Just a couple of weeks after the authorship drop, we saw a smaller but still significant shake-up in video results, with about 28% of results MozCast tracks losing video thumbnails:

As you can see, the presence of thumbnails does vary day-to-day, but the two plateaus, before and after July 16th, are clear here. At this point, the new number seems to be holding.

Since our data doesn’t connect the video thumbnails to specific results, it’s tough to say if this change indicates a removal of thumbnails or a drop in rankings for video results overall. Considering how smaller drops in authorship signaled a much larger change down the road, I think this shift deserves more attention. It could be that Google is generally questioning the value and prevalence of rich snippets, especially when quality concerns come into play.

I originally hypothesized that this might not be a true loss, but could be a sign that some video snippets were switching to the new “mega-video” format (or video answer box, if you prefer). This does not appear to be the case, as the larger video format is still fairly uncommon, and the numbers don’t match up.

For reference, here’s a mega-video format (for the query “bartender”):

Mega-videos are appearing on such seemingly generic queries as “partition”, “headlights”, and “california king bed”. If you have the budget and really want to dominate the SERPs, try writing a pop song.

Pigeons attack local results (July 24th)

By now, many of you have heard of Google’s “Pigeon” update. The Pigeon update hit local SERPs hard and seems to have dramatically changed how Google determines and uses a searcher’s location. Local search is more than an algorithmic layer, though – it’s also a feature set. When Pigeon hit, we saw a sharp decline in local “pack” results (the groups of 2-7 pinned local results):

We initially reported that pack results dropped more than 60% after the Pigeon update. We now are convinced that this was a mistake (indicated by the “?” zone) – essentially, Pigeon changed localization so much that it broke the method we were using. We’ve found a new method that seems to match manually setting your location, and the numbers for July 29-30 are, to the best of my knowledge, accurate.

According to these new numbers, local pack results have fallen 23.4% (in our data set) after the Pigeon update. This is the exact same number Darren Shaw of WhiteSpark found, using a completely different data set and methodology. The perfect match between those two numbers is probably a bit of luck, but they suggest that we’re at least on the right track. While I over-reported the initial drop, and I apologize for any confusion that may have caused, the corrected reality still shows a substantial change in pack results.

It’s important to note that this 23.4% drop is a net change – among queries, there were both losers and winners. Here are 10 searches that lost pack results (and have been manually verified):

  • jobs
  • cars for sale
  • apartments
  • cruises
  • train tickets
  • sofa
  • wheels
  • liposuction
  • social security card
  • motorcycle helmets

A couple of important notes – first, some searches that lost packs only lost packs in certain regions. Second, Pigeon is a very recent update and may still be rolling out or being tweaked. This is only the state of the data as we know it today.

Here are 10 searches that gained pack results (in our data set):

  • skechers
  • mortgage
  • apartments for rent
  • web designer
  • long john silvers
  • lamps
  • mystic
  • make a wish foundation
  • va hospital
  • internet service

The search for “mystic” is an interesting example – no matter what your location (if you’re in the US), Google is showing a pack result for Mystic, CT. This pattern seems to be popping up across the Pigeon update. For example, a search for “California Pizza Kitchen” automatically targets California, regardless of your location (h/t Tony Verre), and a search for “Buffalo Wild Wings” sends you to Buffalo, NY (h/t Andrew Mitschke).

Of course, local search is complex, and it seems like Google is trying to do a lot in one update. The simple fact that a search for “apartments” lost pack results in our data, while “apartments for rent” gained them, shows that the Pigeon update isn’t based on a few simplistic rules.

Some local SEOs have commented that Pigeon seemed to increase the number of smaller packs (2-3 results). Looking at the data for pack size before and after Pigeon, this is what we’re seeing:

Both before and after Pigeon, there are no 1-packs, and 4-, 5-, and 6-packs are relatively rare. After Pigeon, the distribution of 2-packs is similar, but there is a notable jump in 3-packs and a corresponding decrease in 7-packs. The total number of 3-packs actually increased after the Pigeon update. While our data set (once we restrict it to just searches with pack results) is fairly small, this data does seem to match the observations of local SEOs.

Sleep with one eye open

Ok, maybe that’s a bit melodramatic. All of these changes do go to show, though, that if you’re laser-focused on rankings alone, you may be missing a lot. We as SEOs not only need to look beyond our own tunnel vision; we need to start paying more attention to post-ranking data, like CTR and search traffic. SERPs are getting richer and more dynamic, and Google can change the rules overnight.
