UX, Content Quality, and SEO – Whiteboard Friday

Posted by EricEnge

Editor’s note: Today we’re featuring back-to-back episodes of Whiteboard Friday from our friends at Stone Temple Consulting. Make sure to also check out the first episode, “Becoming Better SEO Scientists” from Mark Traphagen.

User experience and the quality of your content have an incredibly broad impact on your SEO efforts. In this episode of Whiteboard Friday, Stone Temple’s Eric Enge shows you how paying attention to your users can benefit your position in the SERPs.

For reference, here’s a still of this week’s whiteboard.

Video transcription

Hi, Mozzers. I’m Eric Enge, CEO of Stone Temple Consulting. Today I want to talk to you about one of the most underappreciated aspects of SEO, and that is the interaction between user experience, content quality, and your SEO rankings and traffic.

I’m going to take you through a little history first. You know, we all know about the Panda algorithm update that came out on February 23, 2011, and of course more recently we have the search quality update that came out on May 19, 2015. Our Panda friend had 27 different updates that we know of along the way. So a lot of stuff has gone on, but we need to realize that that is not where it all started.

The link algorithm from the very beginning was about search quality. Links allowed Google to have an algorithm that gave better results than the other search engines of their day, which were dependent on keywords. These things however, that I’ve just talked about, are still just the tip of the iceberg. Google goes a lot deeper than that, and I want to walk you through the different things that it does.

So consider for a moment, you have someone search on the phrase “men’s shoes” and they come to your website.

What is it that they want when they come to your website? Do they want sneakers, sandals, dress shoes? Well, those are sort of the obvious things that they might want. But you need to think a little bit more about what the user really wants to be able to know before they buy from you.

First of all, there has to be a way to buy. By the way, affiliate sites don’t have ways to buy. So the line of thinking I’m talking about might not work out so well for affiliate sites and works better for people who can actually sell the product directly. But in addition to a way to buy, they might want a privacy policy. They might want to see an About Us page. They might want to be able to see your phone number. These are all different kinds of things that users look for when they arrive on the pages of your site.

So as we think about this, what is it that we can do to do a better job with our websites? Well, first of all, lose the focus on keywords. Don’t get me wrong, keywords haven’t gone entirely away. But the days of pages that overemphasize one particular keyword over another, or over related phrases, are long gone, and you need to have a broader focus on how you approach things.

User experience is now a big deal. You really need to think about how users are interacting with your page and how that shows your overall page quality. Think about the percent satisfaction. If I send a hundred users to your page from my search engine, how many of those users are going to be happy with the content or the products or everything that they see on your page? You need to think through the big picture. So at the end of the day, this impacts the content on your page to be sure, but a lot more than that, it impacts the design and the related items that you have on the page.

So let me just give you an example of that. I looked at one page recently that was for a flower site. It was a page about annuals on that site, and that page had no link to their perennials page. Well, okay, a fairly good percentage of people who arrive on a page about annuals are also going to want to have perennials as something they might consider buying. So that page was probably coming across as a poor user experience. So these related items concepts are incredibly important.

Then the links on your page are actually a way to get to some of those related items, and so those are really important as well. What are the related products that you link to?

Finally, really it impacts everything you do with your page design. You need to move past the old-fashioned way of thinking about SEO and into the era of: How am I doing with satisfying all the people who come to the pages of my site?

Thank you, Mozzers. Have a great day.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


The Colossus Update: Waking The Giant

Posted by Dr-Pete

Yesterday morning, we woke up to a historically massive temperature spike on MozCast, after an unusually quiet weekend. The 10-day weather looked like this:

That’s 101.8°F, one of the hottest verified days on record, second only to a series of unconfirmed spikes in June of 2013. For reference, the first Penguin update clocked in at 93.1°.

Unfortunately, trying to determine how the algorithm changed from looking at individual keywords (even thousands of them) is more art than science, and even the art is more often Ms. Johnson’s Kindergarten class than Picasso. Sometimes, though, we catch a break and spot something.

The First Clue: HTTPS

When you watch enough SERPs, you start to realize that change is normal. So, the trick is to find the queries that changed a lot on the day in question but are historically quiet. Looking at a few of these, I noticed some apparent shake-ups in HTTP vs. HTTPS (secure) URLs. So, the question becomes: are these anecdotes, or do they represent a pattern?

I dove in and looked at how many URLs for our 10,000 page-1 SERPs were HTTPS over the past few days, and I saw this:

On the morning of June 17, HTTPS URLs on page 1 jumped from 16.9% to 18.4% (a relative day-over-day increase of roughly 9%), after trending up for a few days. This represents the total real-estate occupied by HTTPS URLs, but how did rankings fare? Here are the average rankings across all HTTPS results:

HTTPS URLs also seem to have gotten a rankings boost – dropping (with “dropping” being a positive thing) from an average of 2.96 to 2.79 in the space of 24 hours.
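If you track your own set of SERPs, the two metrics above are straightforward to reproduce. Here is a minimal sketch, assuming your rank data is a simple list of (keyword, rank, URL) rows; MozCast’s actual data model is not public, so treat the format as an illustration only.

```python
# Minimal sketch: share of page-1 URLs that are HTTPS, and their average ranking.
# Assumes `serp_rows` is a list of (keyword, rank, url) tuples for page-1 results;
# the real MozCast data model is not public, so this format is illustrative only.

def https_metrics(serp_rows):
    https_rows = [row for row in serp_rows if row[2].startswith("https://")]
    share = 100.0 * len(https_rows) / len(serp_rows) if serp_rows else 0.0
    avg_rank = (sum(rank for _, rank, _ in https_rows) / len(https_rows)
                if https_rows else None)
    return share, avg_rank

rows = [
    ("men's shoes", 1, "https://example.com/shoes"),
    ("men's shoes", 2, "http://example.org/footwear"),
    ("election guide", 1, "https://en.wikipedia.org/wiki/Election"),
]
print(https_metrics(rows))  # (66.66..., 1.0) on this toy data
```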

Seems pretty convincing, right? Here’s the problem: rankings don’t just change because Google changes the algorithm. We are, collectively, changing the web every minute of the day. Often, those changes are just background noise (and there’s a lot of noise), but sometimes a giant awakens.

The Second Clue: Wikipedia

Anecdotally, I noticed that some Wikipedia URLs seemed to be flipping from HTTP to HTTPS. I ran a quick count, and this wasn’t just a fluke. It turns out that Wikipedia started switching their entire site to HTTPS around June 12 (hat tip to Jan Dunlop). This change is expected to take a couple of weeks.

It’s just one site, though, right? Well, historically, this one site is the #1 largest land-holder across the SERP real-estate we track, with over 5% of the total page-1 URLs in our tracking data (5.19% as of June 17). Wikipedia is a giant, and its movements can shake the entire web.

So, how do we tease this apart? If Wikipedia’s URLs had simply flipped from HTTP to HTTPS, we should see a pretty standard pattern of shake-up. Those URLs would look to have changed, but the SERPs around them would be quiet. So, I ran an analysis of what the temperature would’ve been if we ignored the protocol (treating HTTP/HTTPS as the same). While slightly lower, that temperature was still a scorching 96.6°F.
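The “ignore the protocol” re-check is easy to approximate on your own data. The sketch below strips the scheme before comparing two days of page-1 URLs for the same keyword; MozCast’s real temperature formula is more involved, so plain set churn stands in here just to show the idea.

```python
# Sketch of the "ignore the protocol" re-check: strip the scheme before comparing
# two days of page-1 URLs. MozCast's actual temperature formula is more involved;
# simple set churn is used here only to illustrate the idea.
from urllib.parse import urlsplit

def normalize(url):
    parts = urlsplit(url)
    return parts.netloc + parts.path  # drop http/https so a protocol flip isn't "change"

def churn(day1_urls, day2_urls, ignore_protocol=True):
    key = normalize if ignore_protocol else (lambda u: u)
    a, b = {key(u) for u in day1_urls}, {key(u) for u in day2_urls}
    return 100.0 * len(a ^ b) / max(len(a | b), 1)

before = ["http://en.wikipedia.org/wiki/Shoe", "https://example.com/shoes"]
after = ["https://en.wikipedia.org/wiki/Shoe", "https://example.com/shoes"]
print(churn(before, after, ignore_protocol=False))  # ~66.7: the flip looks like change
print(churn(before, after, ignore_protocol=True))   # 0.0: same pages, new protocol
```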

Is it possible that Wikipedia moving to HTTPS also made the site eligible for a rankings boost from previous algorithm updates, thus disrupting page 1 without any code changes on Google’s end? Yes, it is possible – even a relatively small rankings boost for Wikipedia from the original HTTPS algorithm update could have a broad impact.

The Third Clue: Google?

So far, Google has only said that this was not a Panda update. There have been rumors that the HTTPS ranking signal would get a boost, as recently as SMX Advanced earlier this month, but no timeline was given for when that might happen.

Is it possible that Wikipedia’s publicly announced switch finally gave Google the confidence to boost the HTTPS signal? Again, yes, it’s possible, but we can only speculate at this point.

My gut feeling is that this was more than just a waking giant, even one as powerful a SERP force as Wikipedia has become. We should know more as their HTTPS roll-out continues and their index settles down. In the meantime, I think we can expect Google to become increasingly serious about HTTPS, even if what we saw yesterday turns out not to have been an algorithm update.

In the meantime, I’m going to melodramatically name this “The Colossus Update” because, well, it sounds cool. If this indeed was an algorithm update, I’m sure Google would prefer something sensible, like “HTTPS Update 2” or “Securageddon” (sorry, Gary).

Update from Google: Gary Illyes said (via Twitter) that he’s not aware of an HTTPS update.

No comment on other updates, or the potential impact of a Wikipedia change. I feel strongly that there is an HTTPS connection in the data, but as I said – that doesn’t necessarily mean the algorithm changed.



The Best of the Best: Celebrating the Top 10 of the Moz Top 10 for 2014

Posted by Isla_McKetta

Oh no, another year-end roundup! But before you click away, let me sell you a little on why this is the roundup you actually want to read.

You see, to compile the Moz Top 10 over the last year, we probably read 50 or more articles EACH WEEK; that’s around 100 articles for every issue. We then spent innumerable hours curating and culling until we could share with you the very best of those articles in the bi-weekly Top 10.

So this is not just another listicle. This article is in fact the distillation of the very best content from all over the interwebs for the past year that has anything to do with digital marketing. Basically, we read 2,600 (or so) articles so you don’t have to.

What does “best” mean?

There’s no formula for what makes an article Top-10 worthy. We look for the best content of each two-week period and then try to winnow and fit it until each newsletter contains just the right balance of digital marketing tips, tricks, analysis, and inspiration.

We work to reach beyond SEO and find articles that will help people who specialize in content, social, design, UX, and more broaden their skill set and understand the work their marketing compatriots engage in. The mix and style changes as the author of this newsletter changes. I’m biased toward content marketing; Cyrus loves SEO; Trevor’s a sucker for a journalistic slant.

But whoever is writing the latest edition is trying to find that perfect balance so you come away from the newsletter having found at least one article that teaches you something new, changes the way you think about marketing, or makes your job a little easier.

We look for articles by authors new and old that are well written, well illustrated, and comprehensive. Sometimes we publish something because it’s a really good resource or because it says the thing that needs to be said.

Some pieces make the Top 10 because they are heart-achingly eloquent. And sometimes we include a little something fun, playful, or easy on the eyes (but still educational) at the end to finish your day off right.

Then news breaks (ahem, Google) and we reconfigure it all.

The Top 10 of the Top 10

For the Top 10 of the Moz Top 10, we could have gone with the most newsworthy content—articles that claim some tactic is dead or some era is over, but Search Engine Land already did that, so I wanted to take a different approach.

Instead, I chose the articles from 2014 that endure. Below you’ll find articles that continue to inspire, how-tos and guides so comprehensive they deserve a revisit, and, yes, even a few tips and tricks that you should really get to. Without further ado, here are the best of the best…

1. Life is a Game. This is Your Strategy Guide

If you can master life, all that marketing stuff is a cake walk. Level up in your day-to-day with this thoughtful, comprehensive, and gorgeous guide from Oliver Emberton.

2. Announcing the All-New Beginner’s Guide to Link Building

Paddy Moogan knows a thing or two about link building, and here he’s teamed up with some folks at Moz to turn all of that information into an easy-to-follow yet comprehensive guide. I had no part in this project, so I can safely tell you I <3 the Zelda references.

3. No Words Wasted: A Guide to Creating Focused Content

From getting customer interviews right to nailing content promotion, this massive guide from Distilled covers everything you need to know about content strategy. I learn something new (or rediscover something I should never have forgotten) every time I read it.

4. Micro Data & Schema.org Rich Snippets: Everything You Need to Know

If you don’t know what micro data are and you haven’t figured out what to do with Schema.org, your content marketing is missing a crucial element for SERP success. BuiltVisible to the rescue with this amazing and easy-to-follow guide.

5. The Beginner’s Guide to Conversion Rate Optimization

If you suspect there’s a blockage in your sales funnel, it’s time to think about CRO. This guide from Qualaroo will tell you everything you need to know to start pinpointing (and fixing) your barriers to conversion.

6. 2014 Industry Survey Results

A survey so big we can only do it once every two years. Dig into salaries, tools, and trends to compare where the digital marketing industry was at the beginning of 2014 with where you are now, and get a peek at what the future may hold.

7. UX Crash Course: User Psychology

Composed of 31 lessons, this online “course” will help you understand user motivation and how you can use psychology to massively improve your user experience.

8. A Geek’s Guide to Gaming The Algorithms

Sometimes looking at information from a slightly different angle makes it easier to digest. In this delightful piece, Ian Lurie teaches us when it’s okay to game the algorithms at the same time as he’s spelling out, in plain language, what each algorithm update was really about.

9. The Ultimate List of IFTTT Recipes for Marketers

Favorite part of this amazingly detailed post from SEER? The fact that it starts from a user’s perspective. So whether you want to “stalk your competitors’ stocks” or “keep track of industry meetups,” there’s an answer (in the form of an IFTTT recipe) here for you.

10. The Rich Snippets Algorithm

So much changed in the realm of rich snippets last year. AJ Kohn delves into the relationship between those rich snippets and knowledge graph results. It’s a heady post that just might offer some interesting insight into the future of SERPs.

Sign up for the Moz Top 10

Like what you see? Want us to read all the articles while you peruse a summary of the most important things you need to know?

Sign up for the Moz Top 10

After you click that big red button, you’ll be taken to the Moz Top 10 page and asked to enter your email and hit “subscribe.” At that moment we’ll put you on the list for the very next edition, currently scheduled for January 13.

Submit to the Moz Top 10

And if you’re someone who’s writing Top-10-worthy content and we just haven’t found you yet, we want to read what you’ve got. So please send us your suggestions. Each edition of the Moz Top 10 only covers content from the most recent two-week period, so send that link while the content is still fresh.



Location is Everything: Local Rankings in Moz Analytics

Posted by MatthewBrown

Today we are thrilled to launch local rankings as a feature in Moz Analytics, which gives our customers the ability to assign geo-locations to their tracked keywords. If you’re a Moz Analytics customer and are ready to jump right in, here’s where you can find the new feature within the application:

Not a Moz Analytics customer? You can take the new features for a free spin…

One of the biggest SEO developments of the last several years is how frequently Google is returning localized organics across a rapidly increasing number of search queries. It’s not just happening for “best pizza in Portland” (the answer to that is Apizza Scholls, by the way). Searches like “financial planning” and “election guide” now trigger Google’s localization algorithm:

[Screenshot: localized search results for “election guide”]

This type of query underscores the need to track rankings on a local level. I’m searching for a non-localized keyword (“election guide”), but Google recognizes I’m searching from Portland, Oregon so they add the localization layer to the result.

Local tends to get lost in the shuffle of zoo animal updates we’ve seen from Google in the last couple of years, but search marketers are coming around to realize the 2012 Venice update was one of the most important changes Google made to the search landscape. It certainly didn’t seem like a huge deal when it launched; here’s how Google described Venice as part of the late lamented monthly search product updates they used to provide:

  • Improvements to ranking for local search results. [launch codename “Venice”] This improvement improves the triggering of Local Universal results by relying more on the ranking of our main search results as a signal.

Seems innocent enough, right? What the Venice update actually kicked off was a long-term relationship between local search results (what we see in Google local packs and map results) and the organic search results that, once upon a time, existed on their own. “Localized organics,” as they are known, have been increasingly altering the organic search landscape for keywords that normally triggered “generic” or national rankings. If you haven’t already read it, Mike Ramsey’s article on how to adjust for the Venice update remains one of the best strategic looks at the algorithm update.

This jump in localized organic results has prompted both marketers and business owners to track rankings at the local level. An increasing number of Moz customers have been requesting the ability to add locations to their keywords since the 2012 Venice update, and this is likely due to Google expanding the queries which trigger a localized result. You asked for it, and today we’re delivering. Our new local rankings feature allows our customers to track keywords for any city, state, or ZIP/postal code.

Geo-located searches

We can now return rankings based on a location you specify, just like I set my search to Portland in the example above. This is critical for monitoring the health of your local search campaigns, as Google continues to fold the location layer into the organic results. Here’s how it looks in Moz Analytics:

[Screenshot: tracking a local keyword ranking in Moz Analytics]

A keyword with a location specified counts against your keyword limit in Moz Analytics just like any other keyword.

The location being tracked will also be displayed in your rankings reports as well as on the keyword analysis page:

[Screenshot: local keyword difficulty on the keyword analysis page]

The local rankings feature allows you to enter your desired tracking location by city, state, neighborhood, and zip or postal code. We provide neighborhood-level granularity via dropdown for the United States, United Kingdom, Canada and Australia. The dropdown will also provide city-level listings for other countries. It’s also possible to enter a location of your choice not on the list in the text box. Fair warning: We cannot guarantee the accuracy of rankings in mythical locations like Westeros or Twin Peaks, or mythical spellings like Pordland or Los Andules.

An easy way to get started with the new feature is to look at keywords you are already tracking, and find the ones that have an obvious local intent for searchers. Then add the neighborhood or city you are targeting for the most qualified searchers.

What’s next?

We will be launching local rankings functionality within the Moz Local application in the first part of 2015, which will provide needed visibility to folks who are mainly concerned with Local SEO. We’re also working on functionality to allow users to easily add geo-modifiers to their tracked keywords, so we can provide rankings for “health club Des Moines” alongside tracking rankings for “health clubs” in the 50301 zip code.
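To make that distinction concrete, here is a rough sketch of how a rank tracker might represent the two cases: a geo-modified keyword (“health club Des Moines”) versus a plain keyword tracked from a specified location (the 50301 ZIP code). The class and field names are invented for illustration; this is not Moz Analytics’ actual schema.

```python
# Hypothetical data model for tracked keywords, showing the difference between a
# geo-modified keyword and a keyword tracked from a specified location.
# Field names are invented for illustration; this is not Moz's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedKeyword:
    phrase: str                     # the query as it is sent to the engine
    engine: str = "google"          # Bing and Yahoo! support is noted as coming later
    location: Optional[str] = None  # city, state, neighborhood, or ZIP/postal code

# Geo-modifier baked into the query itself, no tracking location:
geo_modified = TrackedKeyword(phrase="health club Des Moines")

# Plain keyword, but rankings are fetched as if searching from ZIP 50301:
location_tracked = TrackedKeyword(phrase="health clubs", location="50301")

print(geo_modified)
print(location_tracked)
```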

Right now this feature works with all Google engines (we’ll be adding Bing and Yahoo! later). We’ll also be keeping tabs on Google’s advancements on the local front so we can provide our customers with the best data on their local visibility.

Please let us know what you think in the comments below! Customer feedback, suggestions, and comments were instrumental in both the design and prioritization of this feature.



The Danger of Crossing Algorithms: Uncovering The Cloaked Panda Update During Penguin 3.0

Posted by GlennGabe

Penguin 3.0 was one of the most anticipated algorithm updates in recent years when it rolled out on October 17, 2014. Penguin hadn’t run for over a year at that point,
and there were many webmasters sitting in Penguin limbo waiting for recovery. They had cleaned up their link profiles, disavowed what they could, and were
simply waiting for the next update or refresh. Unfortunately, Google was wrestling with the algo internally and over twelve months passed without an
update.

So when Pierre Far finally announced Penguin 3.0 a few days later on October 21, a few things stood out. First, this was not the new algorithm that Gary Illyes had explained it would be at SMX East. It was a refresh and underscored the potential problems Google was battling with Penguin (cough, negative SEO).

Second, we were not seeing the impact that we expected. The rollout seemed to begin with a heavier international focus and the overall U.S. impact has been
underwhelming to say the least. There were definitely many fresh hits globally, but there were a number of websites that should have recovered but didn’t
for some reason. And many are still waiting for recovery today.

Third, the rollout would be slow and steady and could take weeks to fully complete. That’s unusual, but makes sense given the microscope Penguin 3.0 was
under. And this third point (the extended rollout) is even more important than most people think. Many webmasters are already confused when they get hit
during an acute algorithm update (for example, when an algo update rolls out on one day). But the confusion gets exponentially worse when there is an
extended rollout.

The more time that goes by between the initial launch and the impact a website experiences, the more questions pop up. Was it Penguin 3.0 or was it
something else? Since I work heavily with algorithm updates, I’ve heard similar questions many times over the past several years. And the extended Penguin
3.0 rollout is a great example of why confusion can set in. That’s my focus today.


Penguin, Pirate, and the anomaly on October 24

With the Penguin 3.0 rollout, we also had
Pirate 2 rolling out. And yes, there are
some websites that could be impacted by both. That added a layer of complexity to the situation, but nothing like what was about to hit. You see, I picked
up a very strange anomaly on October 24. And I clearly saw serious movement on that day (starting late in the day ET).

So, if there was a third algorithm update, then that’s
three potential algo updates rolling out at the same time. More about this soon,
but it underscores the confusion that can set in when we see extended rollouts, with a mix of confirmed and unconfirmed updates.


Penguin 3.0 tremors and analysis

Since I do a lot of Penguin work, and have researched many domains impacted by Penguin in the past, I heavily studied the Penguin 3.0 rollout. I 
published a blog post based on the first ten days of the update, which included some interesting findings for sure.

And based on the extended rollout, I definitely saw Penguin tremors beyond the initial October 17 launch. For example, check out the screenshot below of a
website seeing Penguin impact on October 17, 22, and 25.

But as mentioned earlier, something else happened on October 24 that set off sirens in my office. I started to see serious movement on sites impacted by
Panda, and not Penguin. And when I say serious movement, I’m referring to major traffic gains or losses all starting on October 24. Again, these were sites heavily dealing with Panda and had
clean link profiles. Check out the trending below from October 24 for several
sites that saw impact.


A good day for a Panda victim:



A bad day for a Panda victim:



And an incredibly frustrating day for a 9/5 recovery that went south on 10/24:

I saw this enough that I tweeted heavily about it and
included a section about Panda in my Penguin 3.0 blog post. And
that’s when something wonderful happened, and it highlights the true beauty and power of the internet.

As more people saw my tweets and read my post, I started receiving messages from other webmasters explaining that
they saw the same exact thing, and on their websites dealing with Panda and not Penguin. And not only did they tell me about it, they showed me the impact.

I received emails containing screenshots and tweets with photos from Google Analytics and Google Webmaster Tools. It was amazing to see, and it confirmed
that we had just experienced a Panda update in the middle of a multi-week Penguin rollout. Yes, read that line again. Panda during Penguin, right when the
internet world was clearly focused on Penguin 3.0.
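If you monitor many domains, this kind of same-day movement across otherwise unrelated Panda sites is the tell. Here is a rough screen for it, assuming you can export daily organic sessions per site; the 40% threshold and seven-day baseline are arbitrary choices for illustration, not anything Google (or this post) defines.

```python
# Rough screen for sites whose organic traffic jumped or dropped on a given date.
# The 7-day baseline and 40% threshold are arbitrary, illustration-only choices.

def flag_movement(daily_sessions, date_index, baseline_days=7, threshold=0.40):
    """daily_sessions: organic session counts, one per day, oldest first.
    date_index: index of the suspected update day (e.g., October 24)."""
    baseline = daily_sessions[date_index - baseline_days:date_index]
    if not baseline or sum(baseline) == 0:
        return None
    avg = sum(baseline) / len(baseline)
    change = (daily_sessions[date_index] - avg) / avg
    return change if abs(change) >= threshold else None

sites = {
    "panda-gainer.example": [900, 880, 910, 905, 890, 900, 895, 1450],  # big gain on day 8
    "steady.example":       [500, 510, 495, 505, 500, 498, 502, 507],   # quiet
}
for domain, sessions in sites.items():
    change = flag_movement(sessions, date_index=7)
    print(domain, "moved" if change is not None else "quiet", change)
```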

That was a sneaky move, Google… very sneaky. 🙂

So, based on what I explained earlier about webmaster confusion and algorithms, can you tell what happened next? Yes, massive confusion ensued. We had the
trifecta of algorithm updates with Penguin, Pirate, and now Panda.


Webmaster confusion and a reminder of the algo sandwich from 2012

So, we had a major algorithm update during two other major algorithm updates (Penguin and Pirate) and webmaster confusion was hitting extremely high
levels. And I don’t blame anyone for being confused. I’m neck deep in this stuff and it confused me at first.

Was the October 24 update a Penguin tremor or was this something else? Could it be Pirate? And if it was indeed Panda, it would have been great if Google told
us it was Panda! Or did they want to throw off SEOs analyzing Penguin and Pirate? Does anyone have a padded room I can crawl into?

Once I realized this was Panda, and started to communicate the update via Twitter and my blog, I had a number of people ask me a very important question:


“Glenn, would Google really roll out two or three algorithm updates so close together, or at the same time?”

Why yes, they would. Anyone remember the algorithm sandwich from April of 2012? That’s when Google rolled out Panda on April 19, then Penguin 1.0 on April 24,
followed by Panda on April 27. Yes, we had three algorithm updates all within ten days. And let’s not forget that the Penguin update on April 24, 2012 was the
first of its kind! So yes, Google can, and will, roll out multiple major algos around the same time.

Where are we headed? It’s fascinating, but not pretty


Panda is near real-time now

When Panda 4.1 rolled out on September 23, 2014, I immediately disliked the title and version number of the update. Danny Sullivan named it 4.1, so it stuck. But for
me, that was not 4.1… not even close. It was more like 4.75. You see, there have been a number of Panda tremors and updates since P4.0 on May 20,
2014.

I saw what I was calling “tremors”
nearly weekly based on having access to a large amount of Panda data (across sites, categories, and countries).
And based on what I was seeing, I reached out to John Mueller at Google to clarify the tremors. John’s response was great and confirmed what I was seeing.
He explained that there
was not a set frequency for algorithms like Panda. Google can roll out an algorithm, analyze the
SERPs, refine the algo to get the desired results, and keep pushing it out. And that’s exactly what I was seeing (again, almost weekly since Panda 4.0).


When Panda and Penguin meet in real time…

…they will have a cup of coffee and laugh at us. 🙂 So, since Panda is near-real time, the crossing of major algorithm updates is going to happen.
And we just experienced an important one on October 24 with Penguin, Pirate, and Panda. But it could (and probably will) get more chaotic than what we have now.
We are quickly approaching a time where major algorithm updates crafted in a lab will be unleashed on the web in near-real time or in actual real time.

And if organic search traffic from Google is important to you, then pay attention. We’re about to take a quick trip into the future of Google and SEO. And
after hearing what I have to say, you might just want the past back…


Google’s brilliant object-oriented approach to fighting webspam

I have presented at the past two SES conferences about Panda, Penguin, and other miscellaneous disturbances in the force. More about those “other
disturbances” soon. In my presentation, one of my slides looks like this:

Over the past several years, Google has been using a brilliant, object-oriented approach to fighting webspam and low quality content. Webspam engineers can
craft external algorithms in a lab and then inject them into the real-time algorithm whenever they want. It’s brilliant because it isolates specific
problems, while also being extremely scalable. And by the way, it should scare the heck out of anyone breaking the rules.

For example, we have Panda, Penguin, Pirate, and Above the Fold. Each was crafted to target a specific problem and can be unleashed on the web whenever
Google wants. Sure, there are undoubtedly connections between them (either directly or indirectly), but each specific algo is its own black box. Again,
it’s object-oriented.

Now, Panda is a great example of an algorithm that has matured to where Google highly trusts it. That’s why Google announced in June of 2013 that Panda
would roll out monthly, over ten days. And that’s also why it matured even more with Panda 4.0 (and why I’ve seen tremors almost weekly).

And then we had Gary Illyes explain that Penguin was moving along the same path. At SMX East,
Gary explained that the new Penguin algorithm (which clearly didn’t roll out on October 17) would be structured in a way where subsequent updates could be rolled out more easily.
You know, like Panda.

And by the way, what if this happens to Pirate, Above the Fold, and other algorithms that Google is crafting in its Frankenstein lab? Well my friends, then
we’ll have absolute chaos and society as we know it will crumble. OK, that’s a bit dramatic, but you get my point.

We already have massive confusion now… and a glimpse into the future reveals a continual flow of major algorithms running in real time, each of which
could pummel a site to the ground. And of course, with little or no sign of which algo actually caused the destruction. I don’t know about you, but I just
broke out in hives. 🙂


Actual example of what (near) real-time updates can do

After Panda 4.0, I saw some very strange Panda movement for sites impacted by recent updates. And it underscores the power of near-real time algo updates.
As a quick example,
temporary Panda recoveries can happen if you
don’t get out of the gray area enough. And now that we are seeing Panda tremors almost weekly, you can experience potential turbulence several times per
month.

Here is a screenshot from a site that recovered from Panda, didn’t get out of the gray area and reentered the strike zone, just five days later.

Holy cow, that was fast. I hope they didn’t plan any expensive trips in the near future. This is exactly what can happen when major algorithms roam the web
in real time. One week you’re looking good and the next week you’re in the dumps. Now, at least I knew this was Panda. The webmaster could tackle more
content problems and get out of the gray area… But the ups and downs of a Panda roller coaster ride can drive a webmaster insane. It’s one of the
reasons I recommend making
significant changes when
you’ve been hit by Panda. Get as far out of the gray area as possible.


An “automatic action viewer” in Google Webmaster Tools could help (and it’s actually being discussed internally by Google)

Based on webmaster confusion, many have asked Google to create an “automatic action viewer” in Google Webmaster Tools. It would be similar to the “manual
actions viewer,” but focused on algorithms that are demoting websites in the search results (versus penalties). Yes, there is a difference by the way.

The new viewer would help webmasters better understand the types of problems that are being impacted by algorithms like Panda, Penguin, Pirate, Above the
Fold, and others. Needless to say, this would be incredibly helpful to webmasters, business owners, and SEOs.

So, will we see that viewer any time soon? Google’s John Mueller
addressed this question during the November 3 webmaster hangout (at 38:30).

John explained they are trying to figure something out, but it’s not easy. There are so many algorithms running that they don’t want to provide feedback
that is vague or misleading. But, John did say they are discussing the automatic action viewer internally. So you never know…


A quick note about Matt Cutts

As many of you know, Matt Cutts took an extended leave this past summer (through the end of October). Well, he announced on Halloween that he is
extending his leave into 2015. I won’t go crazy here talking about his decision overall, but I will
focus on how this impacts webmasters as it relates to algorithm updates and webspam.

Matt does a lot more than just announce major algo updates… He actually gets involved when collateral damage rears its ugly head. And there’s not a
faster way to rectify a flawed algo update than to have Mr. Cutts involved. So before you dismiss Matt’s extended leave as uneventful, take a look at the
trending below:

Notice the temporary drop off a cliff, then 14 days of hell, only to see that traffic return? That’s because Matt got involved. That’s the
movie blog fiasco from early 2014 that I heavily analyzed. If Matt had not been notified of the drop via Twitter, and hadn’t taken action, I’m not sure the movie blogs that got hit would be around today. I told Peter from
SlashFilm that his fellow movie blog owners should all pay him a bonus this year. He’s the one that pinged Matt via Twitter and got the ball rolling.

It’s just one example of how having someone with power out front can nip potential problems in the bud. Sure, the sites experienced two weeks of utter
horror, but traffic returned once Google rectified the problem. Now that Matt isn’t actively helping or engaged, who will step up and be that guy? Will it
be John Mueller, Pierre Far, or someone else? John and Pierre are greatly helpful, but will they go to bat for a niche that just got destroyed? Will they
push changes through so sites can turn around? And even at its most basic level, will they even be aware the problem exists?

These are all great questions, and I don’t want to bog down this post (it’s already incredibly long). But don’t laugh off Matt Cutts taking an extended
leave. If he’s gone for good, you might only realize how important he was to the SEO community
after he’s gone. And hopefully it’s not because
your site just tanked as collateral damage during an algorithm update. Matt might be
running a marathon or trying on new Halloween costumes. Then where will you be?


Recommendations moving forward:

So where does this leave us? How can you prepare for the approaching storm of crossing algorithms? Below, I have provided several key bullets that I think
every webmaster should consider. I recommend taking a hard look at your site
now, before major algos are running in near-real time.

  • Truly understand the weaknesses with your website. Google will continue crafting external algos that can be injected into the real-time algorithm.
    And they will go real-time at some point. Be ready by cleaning up your site now.
  • Document all changes and fluctuations the best you can. Use annotations in Google Analytics and keep a spreadsheet updated with detailed
    information (see the change-log sketch after this list).
  • Along the same lines, download your Google Webmaster Tools data monthly (at least). After helping many companies with algorithm hits, that
    information is incredibly valuable, and can help lead you down the right recovery path.
  • Use a mix of audits and focus groups to truly understand the quality of your site. I mentioned in my post about aggressive advertising and Panda
    that human focus groups are worth their weight in gold (for surfacing Panda-related problems). Most business owners are too close to their own content and
    websites to accurately measure quality. Bias can be a nasty problem and can quickly lead to bamboo-overflow on a website.
  • Beyond on-site analysis, make sure you tackle your link profile as well. I recommend heavily analyzing your inbound links and weeding out unnatural
    links. And use the disavow tool for links you can’t remove. The combination of enhancing the quality of your content, boosting engagement, knocking down
    usability obstacles, and cleaning up your link profile can help you achieve long-term SEO success. Don’t tackle one quarter of your SEO problems. Address
    all of them.
  • Remove barriers that inhibit change and action. You need to move fast. You need to be decisive. And you need to remove red tape that can bog down
    the cycle of getting changes implemented. Don’t water down your efforts because there are too many chefs in the kitchen. Understand the changes that need
    to be implemented, and take action. That’s how you win SEO-wise.
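As a small aid for the documentation bullet above, here is a minimal sketch of the kind of change log you could keep alongside your Google Analytics annotations. The columns are an assumption; record whatever fields your team actually needs.

```python
# Minimal change log for documenting site changes and traffic fluctuations.
# The columns are an assumption; record whatever your team actually needs.
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("site_change_log.csv")
FIELDS = ["date", "site", "event_type", "description", "suspected_algo"]

def log_event(site, event_type, description, suspected_algo=""):
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "site": site,
            "event_type": event_type,
            "description": description,
            "suspected_algo": suspected_algo,
        })

log_event("example.com", "traffic_drop", "Organic sessions down ~35% starting 10/24", "Panda?")
log_event("example.com", "site_change", "Removed thin tag pages; disavowed unnatural links")
```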


Summary: Are you ready for the approaching storm?

SEO is continually moving and evolving, and it’s important that webmasters adapt quickly. Over the past few years, Google’s brilliant object-oriented
approach to fighting webspam and low quality content has yielded algorithms like Panda, Penguin, Pirate, and Above the Fold. And more are on their way. My
advice is to get your situation in order now, before crossing algorithms blend a recipe of confusion that makes it exponentially harder to identify, and
then fix, problems riddling your website.

Now excuse me while I try to build a flux capacitor. 🙂



What SEOs Need to Know About Topic Modeling & Semantic Connectivity – Whiteboard Friday

Posted by randfish

Search engines, especially Google, have gotten remarkably good at understanding searchers’ intent—what we mean to search for, even if that’s not exactly what we search for. How in the world do they do this? It’s incredibly complex, but in today’s Whiteboard Friday, Rand covers the basics—what we all need to know about how entities are connected in search.

For reference, here’s a still of this week’s whiteboard!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re talking topic modeling and semantic connectivity. Those words might sound big and confusing, but, in fact, they are important to understanding the operations of search engines, and they have some direct influence on things that we might do as SEOs, hence our need to understand them.

Now, I’m going to make a caveat here. I am not an expert in this topic. I have not taken the required math classes, stats classes, programming classes to truly understand this topic in a way that I would feel extremely comfortable explaining. However, even at the surface level of understanding, I feel like I can give some compelling information that hopefully you all and myself included can go research some more about. We’re certainly investigating a lot of topic modeling opportunities and possibilities here at Moz. We’ve done so in the past, and we’re revisiting that again for some future tools, so the topic is fresh on my mind.

So here’s the basic concept. The idea is that search engines are smarter than just knowing that a word, a phrase that someone searches for, like “Super Mario Brothers,” is only supposed to bring back results that have exactly the words “Super Mario Brothers,” that perfect phrase in the title and in the headline and in the document itself. That’s still an SEO best practice because you’re trying to serve visitors who have that search query. But search engines are actually a lot smarter than this.

One of my favorite examples is how intelligent Google has gotten around movie topics. So try, for example, searching for “that movie where the guy is called The Dude,” and you will see that Google properly returns “The Big Lebowski” in the first ranking position. How do they know that? Well, they’ve essentially connected up “movie” and “The Dude” and said, “Aha, those things are most closely related to ‘The Big Lebowski.’ That’s what the intent of the searcher is. That’s the document that we’re going to return, not a document that happens to have ‘that movie about the guy named The Dude’ in the title, exactly those words.”

Here’s another example. So this is Super Mario Brothers, and Super Mario Brothers might be connected to a lot of other terms and phrases. So a search engine might understand that Super Mario Brothers is a little bit more semantically connected to Mario than it is to Luigi, then to Nintendo and then Bowser, the jumping dragon guy, turtle with spikes on his back — I’m not sure exactly what he is — and Princess Peach.

As you go down here, the search engine might actually have a topic modeling algorithm, something like latent semantic indexing, which was an early model, or latent Dirichlet allocation, which is a somewhat later model, or even predictive latent Dirichlet allocation, which is later still. The specific model isn’t particularly important, especially for our purposes.

What is important is to know that there’s probably some scoring going on. A search engine — Google, Bing — can understand that some of these words are more connected to Super Mario Brothers than others, and it can do the reverse. They can say Super Mario Brothers is somewhat connected to video games and very not connected to cat food. So if we find a page that happens to have the title element of Super Mario Brothers, but most of the on-page content seems to be about cat food, well, maybe we shouldn’t rank that even if it has lots of incoming links with anchor text saying “Super Mario Brothers” or a very high page rank or domain authority or those kinds of things.
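As a toy-sized illustration of the kind of scoring described above, the sketch below fits an LDA topic model with scikit-learn on a tiny corpus and checks which topic a new page leans toward. This is a stand-in for the general technique, not Google’s actual algorithm, and real use would need a far larger training corpus.

```python
# Toy LDA sketch with scikit-learn: fit topics on a tiny corpus, then see which topic
# a new page leans toward. A stand-in for illustration only; real topic modeling needs
# a much larger corpus, and this is not Google's actual relevance scoring.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "super mario brothers mario luigi nintendo bowser princess peach",
    "mario luigi jump nintendo level world bowser castle",
    "cat food kitten dry food wet food feline nutrition",
    "cat food brands kitten feeding feline diet",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

new_page = ["buy super mario brothers nintendo game with luigi and bowser"]
topic_mix = lda.transform(vectorizer.transform(new_page))[0]
print(topic_mix)  # on this toy corpus, the weight should lean toward the Mario-style topic
```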

So search engines, Google, in particular, has gotten very, very smart about this connectivity stuff and this topic modeling post-Hummingbird. Hummingbird, of course, being the algorithm update from last fall that changed a lot of how they can interpret words and phrases.

So knowing that Google and Bing can calculate this relative connectivity, connectivity between the words and phrases and topics, we want to know how are they doing this. That answer is actually extremely broad. So that could come from co-occurrence in web documents. Sorry for turning my back on the camera. I know I’m supposed to move like this, but I just had to do a little twirl for you.

Distance between the keywords. I mean distance on the actual page itself. Does Google find “Super Mario Brothers” near the word “Mario” on a lot of the documents where the two occur, or are they relatively far away? Maybe Super Mario Brothers does appear with cat food a lot, but they’re quite far away. They might look at citations and links between documents in terms of, boy, there are a lot of pages on the web that, when they talk about Super Mario Brothers, also link to pages about Mario, Luigi, Nintendo, etc.

They can look at the anchor text connections of those links. They could look at co-occurrence of those words biased by a given corpus, a set of corpora, or certain domains. So they might say, “Hey, we only want to pay attention to what’s on the fresh web right now or in the blogosphere or on news sites or on trusted domains, these kinds of things as opposed to looking at all of the documents on the web.” They might choose to do this in multiple different sets of corpora.
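Before moving on to the searcher-behavior signals, here is a rough sketch of the first signal on that list, document-level co-occurrence, scored with simple pointwise mutual information. It is an approximation for illustration only, not a description of how Google or Bing actually compute connectivity.

```python
# Document-level co-occurrence between a target phrase and candidate terms, scored
# with pointwise mutual information. A rough approximation for illustration only.
from math import log

docs = [
    "super mario brothers is a nintendo game starring mario and luigi",
    "mario and luigi rescue princess peach from bowser",
    "nintendo released super mario brothers in 1985",
    "the best dry cat food for kittens and senior cats",
    "cat food reviews and feline nutrition tips",
]

def pmi(target, term, documents):
    n = len(documents)
    has_target = [target in d for d in documents]
    has_term = [term in d for d in documents]
    p_t = sum(has_target) / n
    p_w = sum(has_term) / n
    p_tw = sum(1 for a, b in zip(has_target, has_term) if a and b) / n
    if 0 in (p_t, p_w, p_tw):
        return float("-inf")  # never co-occur in this corpus
    return log(p_tw / (p_t * p_w))

for term in ["mario", "nintendo", "cat food"]:
    print(term, round(pmi("super mario brothers", term, docs), 3))
# "nintendo" and "mario" score well above "cat food", which never co-occurs here
```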

They can look at queries from searchers, which is a really powerful thing that we unfortunately don’t have access to. So they might see searcher behavior saying that a lot of people who search for Mario, Luigi, Nintendo are also searching for Super Mario Brothers.

They might look at searcher clicks, visits, history, all of that browser data that they’ve got from Chrome and from Android and, of course, from Google itself, and they might say those are corpi that they use to connect up words and phrases.

Probably there’s a whole list of other places that they’re getting this from. So they can build a very robust data set to connect words and phrases. For us, as SEOs, this means a few things.

If you’re targeting a keyword for rankings, say “Super Mario Brothers,” those semantically connected and related terms and phrases can help with a number of things. So if you could know that these were the right words and phrases that search engines connected to Super Mario Brothers, you can do all sorts of stuff. Things like inclusion on the page itself, helping to tell the search engine my page is more relevant for Super Mario Brothers because I include words like Mario, Luigi, Princess Peach, Bowser, Nintendo, etc. as opposed to things like cat food, dog food, T-shirts, glasses, what have you.

You can think about it in the links that you earn, the documents that are linking to you and whether they contain those words and phrases and are on those topics, the anchor text that points to you potentially. You can certainly be thinking about this from a naming convention and branding standpoint. So if you’re going to call a product something or call a page something or your unique version of it, you might think about including more of these words or biasing to have those words in the description of the product itself, the formal product description.

For an About page, you might think about the formal bio for a person or a company, including those kinds of words, so that as you’re getting cited around the web or on your book cover jacket or in the presentation that you give at a conference, those words are included. They don’t necessarily have to be links. This is a potentially powerful thing to say a lot of people who mention Super Mario Brothers tend to point to this page Nintendo8.com, which I think actually you can play the original “Super Mario Brothers” live on the web. It’s kind of fun. Sorry to waste your afternoon with that.

Of course, these can also be additional keywords that you might consider targeting. This can be part of your keyword research in addition to your on-page and link building optimization.

What’s unfortunate is right now there are not a lot of tools out there to help you with this process. There is a tool from Virante. Russ Jones, I think, did some funding internally to put this together, and it’s quite cool. It’s nTopic.org. Hopefully, this Whiteboard Friday won’t bring that tool to its knees by sending tons of traffic over there. But if it does, maybe give it a few days and come back. It gives you a broad score with a little more data if you register and log in. It’s got a plugin for Chrome and for WordPress. It’s fairly simplistic right now, but it might help you say, “Is this page on the topic of the term or phrase that I’m targeting?”

There are many, many downloadable tools and libraries. In fact, Code.google.com has an LDA topic modeling tool specifically, and that might have been something that Google used back in the day. We don’t know.

If you do a search for topic modeling tools, you can find these. Unfortunately, almost all of them are going to require some web development background at the very least. Many of them rely on a Python library or an API. Almost all of them also require a training corpus in order to model things on. So you can think about, “Well, maybe I can download Wikipedia’s content and use that as a training model or use the top 10 search results from Google as some sort of training model.”

This is tough stuff. This is one of the reasons why at Moz I’m particularly passionate about trying to make this something that we can help with in our on-page optimization and keyword difficulty tools, because I think this can be very powerful stuff.

What is true is that you can spot check this yourself right now. It is very possible to go look at things like related searches, look at the keyword terms and phrases that also appear on the pages that are ranking in the top 10 and extract these things out and use your own mental intelligence to say, “Are these terms and phrases relevant? Should they be included? Are these things that people would be looking for? Are they topically relevant?” Consider including them and using them for all of these things. Hopefully, over time, we’ll get more sophisticated in the SEO world with tools that can help with this.
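That spot check (pull the pages ranking in the top 10 and see which terms they share) can be roughed out in a few lines. In the sketch below, fetching the SERP and page text is left out, and the stop-word list and threshold are arbitrary; `top_pages` is just assumed to hold the text of competing pages.

```python
# Rough spot check: which terms appear across most of the top-ranking documents?
# Fetching the SERP and page text is out of scope; `top_pages` is assumed input.
from collections import Counter
import re

STOPWORDS = {"the", "a", "and", "of", "in", "is", "to", "for", "on", "with", "every"}

def shared_terms(top_pages, min_docs=2):
    doc_freq = Counter()
    for text in top_pages:
        words = set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS
        doc_freq.update(words)  # count each word once per document
    return [(word, count) for word, count in doc_freq.most_common() if count >= min_docs]

top_pages = [
    "Super Mario Brothers guide covering Mario, Luigi, Bowser and every Nintendo level",
    "History of Super Mario Brothers on the Nintendo Entertainment System",
    "Mario and Luigi characters, Princess Peach, Bowser and the Mushroom Kingdom",
]
print(shared_terms(top_pages))
# terms like mario, luigi, bowser, nintendo, super, brothers bubble to the top
```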

All right, everyone, hope you’ve enjoyed this edition of Whiteboard Friday. Look forward to some great comments, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com



Latest Google Algorithm Update for Exact Match Domain

This video covers the latest Google algorithm update for Exact Match Domains. The video was made by http://www.indian-seo-company.com for internet marketers.
