How Does the Local Algorithm Work? – Whiteboard Friday

Posted by JoyHawkins

When it comes to Google’s algorithms, there’s quite a difference between how they treat local and organic. Get the scoop on which factors drive the local algorithm and how it works from local SEO extraordinaire, Joy Hawkins, as she offers a taste of her full talk from MozCon 2019.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hello, Moz fans. I’m Joy Hawkins. I run a local SEO agency from Toronto, Canada, and a search forum known as the Local Search Forum, which basically is devoted to anything related to local SEO or local search. Today I’m going to be talking to you about Google’s local algorithm and the three main factors that drive it. 

If you’re wondering what I’m talking about when I say the local algorithm, this is the algorithm that fuels what we call the three-pack here. When you do a local search or a search that Google thinks has local intent, like “plumbers,” let’s say, you traditionally will get three results at the top with the map, and then everything below it I refer to as organic. This algorithm I’ll be breaking down is what fuels this three-pack, also known as Google My Business listings or Google Maps listings.

They’re all talking about the exact same thing. If you check Google’s Help Center on what they look at when ranking these entities, they tell you that there are three main things that fuel this algorithm. The three things that they talk about are proximity, prominence, and relevance. I’m going to basically be breaking down each one and explaining how the factors work.

1. Proximity

I’ll kind of start here with proximity. Proximity is basically defined as your location when you are searching on your phone or your computer and you type something in. It’s where Google thinks you are located. If you’re not really sure, you can often scroll down to the bottom of the page, where Google will list the zip code it thinks you’re in.

Zip code (desktop)

The other way to tell is if you’re on a phone, sometimes you can also see a little blue dot on the map, which is exactly where Google thinks you’re located. On a high level, we often assume Google just places us in a city, but that’s actually pretty false. There’s been a lot of talk at MozCon about how Google pretty much always knows where users are located at a deeper level than that.

Generally speaking, if you’re on a computer, they know what zip code you’re in, and they’ll list that at the bottom. There are a variety of tools that can help you check ranking based on zip codes, some of which would be Moz Check Your Presence Tool, BrightLocal, Whitespark, or Places Scout. All of these tools have the ability to track at the zip code level. 

Geo coordinates (mobile)

However, when you’re on a phone, Google usually knows your location in even more detail: it generally knows the geo coordinates of your actual location, and it pinpoints this using that little blue dot.

It knows more than just the zip code. It knows where you’re actually located. It’s a bit creepy. But there are a couple of tools that will actually let you see results based on geo coordinates, which is really cool and very accurate. Those tools include Local Falcon, and there is a 100% free Chrome extension you can put in your browser called GS Location Changer.

I use this all the time in an incognito browser if I want to see what search results look like from a very, very specific location. Now, depending on what industry you are working in, it’s really important to know which of these two levels you need to be looking at. If you work with lawyers, for example, the zip code level is usually good enough.

There aren’t enough lawyers for the results to vary much between specific points inside a given zip code. However, if you work with dentists or restaurants, let’s say, you really need to be looking at the geo-coordinate level. We have seen lots of cases where we will scan a specific keyword using these two tools and, depending on where in that zip code we are, see completely different three-packs.
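To make the geo-coordinate idea concrete, here is a minimal TypeScript sketch of how that kind of grid scan could be structured: it generates a square grid of latitude/longitude points around a business location, and each point would then stand in as the searcher’s location for a separate rank check. The spacing math and the checkRank idea are illustrative assumptions, not how Local Falcon or GS Location Changer actually work under the hood.

```typescript
// Sketch: build a grid of searcher locations around a business (illustrative only).
interface Point {
  lat: number;
  lng: number;
}

function buildGeoGrid(center: Point, radiusKm: number, stepsPerSide: number): Point[] {
  const points: Point[] = [];
  // Rough conversions: ~111 km per degree of latitude; longitude shrinks with latitude.
  const kmPerDegLat = 111.32;
  const kmPerDegLng = 111.32 * Math.cos((center.lat * Math.PI) / 180);

  for (let i = 0; i < stepsPerSide; i++) {
    for (let j = 0; j < stepsPerSide; j++) {
      // Offsets in km, spread evenly from -radiusKm to +radiusKm on each axis.
      const dxKm = -radiusKm + (2 * radiusKm * i) / (stepsPerSide - 1);
      const dyKm = -radiusKm + (2 * radiusKm * j) / (stepsPerSide - 1);
      points.push({
        lat: center.lat + dyKm / kmPerDegLat,
        lng: center.lng + dxKm / kmPerDegLng,
      });
    }
  }
  return points;
}

// Example: a 5x5 grid covering roughly 2 km around a downtown Toronto location.
const grid = buildGeoGrid({ lat: 43.6532, lng: -79.3832 }, 2, 5);
console.log(`${grid.length} scan points generated`);
// Each point would feed a separate localized rank check, e.g. checkRank("dentist", point).
```

Comparing the three-pack returned at each of those points is exactly the kind of scan that exposes different results within a single zip code.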

It’s very, very key to know that this factor here for proximity really influences the results that you see. This can be challenging, because when you’re trying to explain this to clients or business owners, they search from their home, and they’re like, “Why am I not there?” It’s because their proximity or their location is different than where their office is located.

I realize this is a challenging problem to solve for a lot of agencies on how to represent this, but that’s kind of the tools that you need to look at and use. 

2. Prominence

Moving to the next factor, prominence: this is basically how important Google thinks you are. Like, is this business a big deal, or is it just some random, crappy business or a new business that we don’t know much about?

  • This looks at things like links, for example. 
  • Store visits, if you are a brick-and-mortar business and you get no foot traffic, Google likely won’t think you’re very prominent. 
  • Reviews, the number of reviews often factors in here. We often see that businesses with a lot of reviews, including a lot of older reviews, generally have a lot of prominence.
  • Citations can also factor in here. The sheer number of citations a business has can contribute to prominence.

3. Relevance

Moving into the relevance factor, relevance is basically, does Google think you are related to the query that is typed in? You can be as prominent as anyone else, but if you do not have content on your page that is structured well, that covers the topic the user is searching about, your relevance will be very low, and you will run into issues.

It’s very important to know that these three things all kind of work together, and it’s really important to make sure you are looking at all three. On the relevance end, it looks at things like:

  • content
  • onsite SEO, so your title tags, your meta tags, all that nice SEO stuff
  • Citations also factor in here, because it looks at things like your address. Like are you actually in this city? Are you relevant to the city that the user is trying to get locations from? 
  • Categories are huge here, your Google My Business categories. Google currently has just under 4,000 different Google My Business categories, and they add an insane number every year and also remove some. It’s very important to keep on top of that and make sure that you have the correct categories on your listing or you won’t rank well.
  • The business name is unfortunately a huge factor as well in here. Merely having keywords in your business name can often give you relevance to rank. It shouldn’t, but it does. 
  • Then review content. I know Mike Blumenthal did a really cool experiment on this a couple years ago, where he actually had a bunch of people write a bunch of fake reviews on Yelp mentioning certain terms to see if it would influence ranking on Google in the local results, and it did. Google is definitely looking at the content inside the reviews to see what words people are using so they can see how that impacts relevance. 

How to rank without proximity, prominence, or relevance

Obviously you want all three of these things, but it is possible to rank even if you don’t have all three, and I’ll give a couple of examples. Say you’re looking to expand your radius because you service a lot of people.

You don’t just service people on your block. You’re like, “I serve the whole city of Chicago,” for example. You are not likely going to rank in all of Chicago for very common terms, things like dentist or personal injury attorney. However, if you have a lot of prominence and you have a really relevant page or content related to really niche terms, we often see that it is possible to really expand your radius for long tail keywords, which is great.

Prominence is probably the number one thing that will expand your radius inside competitive terms. We’ll often see Google bringing in a business that is slightly outside of the same area as other businesses, just because they have an astronomical number of reviews, or maybe their domain authority is ridiculously high and they have all these linking domains.

Those two factors are definitely what influences the amount of area you cover with your local exposure. 

Spam and fake listings

On the flip side, spam is something I talk a lot about. Fake listings are a big problem in the local search space. Lead gen providers create these fake listings, and they rank with zero prominence.

They have no prominence. They have no citations. They have no authority. They often don’t even have websites, and they still rank because of these two factors. If you create 100 listings in a city, you are going to be close to someone searching. Then if you stuff a bunch of keywords into your business name, you will have some relevance, and by effectively sidestepping the prominence factor, these providers are able to get their listings to rank, which is very frustrating.

Obviously, Google is trying to evolve this algorithm over time. We are hoping that maybe the weight on prominence will increase over time to eliminate that problem, but ultimately we’ll have to see what Google does. We also did a study recently to see which of these two factors carries more weight.

An experiment: Linking to your site within GMB

One thing I’ve highlighted here is that when you link to a website inside your Google My Business listing, there’s often a debate: should I link to my homepage, or should I link to my location page if I’ve got three or four or five offices? We did an experiment to see what happens when we switch a client’s Google My Business listing from their location page to their homepage, and we’ve almost always seen a positive impact from switching to the homepage, even if that homepage is not relevant at all.

In one example, we had a client that was in Houston, and they opened up a location in Dallas. Their homepage was optimized for Houston, but their location page was optimized for Dallas. I had a conversation with a couple of other SEOs, and they were like, “Oh, well, obviously link to the Dallas page on the Dallas listing. That makes perfect sense.”

But we were wondering what would happen if we linked to the homepage, which is optimized for Houston. We saw a lift in rankings and a lift in the number of search queries that this business showed for when we switched to the homepage, even though the homepage didn’t really mention Dallas at all. Something to think about. Make sure you’re always testing these different factors and chasing the right ones when you’re coming up with your local SEO strategy. Finally, something I’ll mention at the top here.

Local algorithm vs organic algorithm

As far as the local algorithm versus the organic algorithm, some of you might be thinking, okay, these things really look at the same factors. They really kind of, sort of work the same way. Honestly, if that is your thinking, I would really strongly recommend you change it. I’ll quote this. This is from a Moz whitepaper that they did recently, where they found that only 8% of local pack listings had their website also appearing in the organic search results below.

I feel like the overlap between these two is definitely shrinking, which is kind of why I’m a bit obsessed with figuring out how the local algorithm works, to make sure we can help clients be successful in both spaces. Hopefully you learned something. If you have any questions, please hit me up in the comments. Thanks for listening.

Video transcription by Speechpad.com


If you liked this episode of Whiteboard Friday, you’ll love all the SEO thought leadership goodness you’ll get from our newly released MozCon 2019 video bundle. Catch Joy’s full talk on the differences between the local and organic algorithm, plus 26 additional future-focused topics from our top-notch speakers:

Grab the sessions now!

We suggest scheduling a good old-fashioned knowledge share with your colleagues to educate the whole team — after all, who didn’t love movie day in school? 😉

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 2 weeks ago from tracking.feedpress.it

How Often Does Google Update Its Algorithm?

Posted by Dr-Pete

In 2018, Google reported an incredible 3,234 improvements to search. That’s more than 8 times the number of updates they reported in 2009 — less than a decade ago — and an average of almost 9 per day. How have algorithm updates evolved over the past decade, and how can we possibly keep tabs on all of them? Should we even try?

To kick this off, here’s a list of every confirmed count we have (sources at end of post):

  • 2018 – 3,234 “improvements”
  • 2017 – 2,453 “changes”
  • 2016 – 1,653 “improvements”
  • 2013 – 890 “improvements”
  • 2012 – 665 “launches”
  • 2011 – 538 “launches”
  • 2010 – 516 “changes”
  • 2009 – 350–400 “changes”

Unfortunately, we don’t have confirmed data for 2014-2015 (if you know differently, please let me know in the comments).
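For context, the “more than 8 times” and “almost 9 per day” figures in the intro fall straight out of the confirmed counts above. A quick back-of-the-envelope sketch (using the midpoint of the 2009 range):

```typescript
// Back-of-the-envelope math on the confirmed update counts listed above.
const updates2018 = 3234;
const updates2009 = 375; // midpoint of the reported 350-400 range

const perDay2018 = updates2018 / 365;           // ~8.86 updates per day
const growthVs2009 = updates2018 / updates2009; // ~8.6x the 2009 count

console.log(perDay2018.toFixed(2));   // "8.86"
console.log(growthVs2009.toFixed(1)); // "8.6"
```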

A brief history of update counts

Our first peek into this data came in spring of 2010, when Google’s Matt Cutts revealed that “on average, [Google] tends to roll out 350–400 things per year.” It wasn’t an exact number, but given that SEOs at the time (and to this day) were tracking at most dozens of algorithm changes, the idea of roughly one change per day was eye-opening.

In fall of 2011, Eric Schmidt was called to testify before Congress, and revealed our first precise update count and an even more shocking scope of testing and changes:

“To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.”

Later, Google would reveal similar data in an online feature called “How Search Works.” Unfortunately, some of the earlier years are only available via the Internet Archive, but here’s a screenshot from 2012:

Note that Google uses “launches” and “improvements” somewhat interchangeably. This diagram provided a fascinating peek into Google’s process, and also revealed a startling jump from 13,311 precision evaluations (changes that were shown to human evaluators) to 118,812 in just two years.

Is the Google algorithm heating up?

Since MozCast has kept the same keyword set since almost the beginning of data collection, we’re able to make some long-term comparisons. The graph below represents five years of temperatures. Note that the system was originally tuned (in early 2012) to an average temperature of 70°F. The redder the bar, the hotter the temperature …

Click to open a high-resolution version in a new tab

You’ll notice that the temperature ranges aren’t fixed — instead, I’ve split the scale into eight roughly equal buckets (i.e. they represent the same number of days). This gives us a little more sensitivity in the more common ranges.
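For readers curious what “eight roughly equal buckets” means mechanically, here is a small sketch of equal-frequency (octile) bucketing over a list of daily temperatures. The sample numbers are made up; the point is that each bucket ends up holding about the same number of days rather than spanning the same number of degrees.

```typescript
// Sketch: split daily temperatures into eight equal-frequency buckets.
function octileBoundaries(temps: number[]): number[] {
  const sorted = [...temps].sort((a, b) => a - b);
  const boundaries: number[] = [];
  for (let k = 1; k < 8; k++) {
    // Cut point for the k-th octile; each bucket holds roughly sorted.length / 8 days.
    const idx = Math.floor((k * sorted.length) / 8);
    boundaries.push(sorted[idx]);
  }
  return boundaries; // seven cut points defining eight buckets
}

// Hypothetical sample of daily temperatures (°F).
const sample = [62, 68, 71, 74, 77, 80, 83, 85, 88, 90, 93, 96, 99, 103, 108, 115];
console.log(octileBoundaries(sample)); // [71, 77, 83, 88, 93, 99, 108]
```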

The trend is pretty clear. The latter half of this 5-year timeframe has clearly been hotter than the first half. While a warming trend is evident, though, it’s not a steady increase over time like Google’s update counts might suggest. Instead, we see a stark shift in the fall of 2016 and a very hot summer of 2017. More recently, we’ve actually seen signs of cooling. Below are the means and medians for each year (note that 2014 and 2019 are partial years):

  • 2019 – 83.7° / 82.0°
  • 2018 – 89.9° / 88.0°
  • 2017 – 94.0° / 93.7°
  • 2016 – 75.1° / 73.7°
  • 2015 – 62.9° / 60.3°
  • 2014 – 65.8° / 65.9°

Note that search engine rankings are naturally noisy, and our error measurements tend to be large (making day-to-day changes hard to interpret). The difference from 2015 to 2017, however, is clearly significant.

Are there really 9 updates per day?

No, there are only 8.86 – feel better? Ok, that’s probably not what you meant. Even back in 2009, Matt Cutts said something pretty interesting that seems to have been lost in the mists of time…

“We might batch [algorithm changes] up and go to a meeting once a week where we talk about 8 or 10 or 12 or 6 different things that we would want to launch, but then after those get approved … those will roll out as we can get them into production.”

In 2016, I did a study of algorithm flux that demonstrated a weekly pattern evident during clearer episodes of ranking changes. From a software engineering standpoint, this just makes sense — updates have to be approved and tend to be rolled out in batches. So, while measuring a daily average may help illustrate the rate of change, it probably has very little basis in the reality of how Google handles algorithm updates.

Do all of these algo updates matter?

Some changes are small. Many improvements are likely not even things we in the SEO industry would consider “algorithm updates” — they could be new features, for example, or UI changes.

As SERP verticals and features evolve, and new elements are added, there are also more moving parts subject to being fixed and improved. Local SEO, for example, has clearly seen an accelerated rate of change over the past 2-3 years. So, we’d naturally expect the overall rate of change to increase.

A lot of this is also in the eye of the beholder. Let’s say Google makes an update to how they handle misspelled words in Korean. For most of us in the United States, that change isn’t going to be actionable. If you’re a Korean brand trying to rank for a commonly misspelled, high-volume term, this change could be huge. Some changes also are vertical-specific, representing radical change for one industry and little or no impact outside that niche.

On the other hand, you’ll hear comments in the industry along the lines of “There are 3,000 changes per year; stop worrying about it!” To me that’s like saying “The weather changes every day; stop worrying about it!” Yes, not every weather report is interesting, but I still want to know when it’s going to snow or if there’s a tornado coming my way. Recognizing that most updates won’t affect you is fine, but it’s a fallacy to stretch that into saying that no updates matter or that SEOs shouldn’t care about algorithm changes.

Ultimately, I believe it helps to know when major changes happen, if only to understand whether rankings shifted due something we did or something Google did. It’s also clear that the rate of change has accelerated, no matter how you measure it, and there’s no evidence to suggest that Google is slowing down.


Appendix A: Update count sources

2009 – Google’s Matt Cutts, video (Search Engine Land)
2010 – Google’s Eric Schmidt, testifying before Congress (Search Engine Land)
2012 – Google’s “How Search Works” page (Internet Archive)
2013 – Google’s Amit Singhal, Google+ (Search Engine Land)
2016 – Google’s “How Search Works” page (Internet Archive)
2017 – Unnamed Google employees (CNBC)
2018 – Google’s “How Search Works” page (Google.com)


Reblogged 4 months ago from tracking.feedpress.it

Will Google’s new search algorithm really penalize popovers?

The technology company has said that it will begin to punish sites that display interstitials or pop-ups that obscure indexed content. The change isn’t due to come into play until January 2017, but we wanted to take the opportunity to explore the extent of the new rules and their possible impact.

We know from Google’s past algorithm updates that the focus has turned to the ever-increasing number of mobile users. One change Google said it was “experimenting” with as a ranking signal was mobile-friendly design. The company added a ‘Mobile-friendly’ label, which appeared in the search results when a site conformed to its criteria – such as using text that’s readable without zooming, sizing content to the screen or avoiding software like Flash.

It’s clear, then, that there are multiple factors in the way Google evaluates a website’s mobile experience – so how much weighting will it apply to those using pop-ups or interstitials? We won’t know until it happens, but we can speculate.

How do people browse the web on mobile?

Let’s think about people’s usage and habits when it comes to browsing the web on a mobile device.

Unlike on a laptop or desktop, those searching the web on their mobile will tend to be picking up the device to look up something specific. These searches will often be long-tail keywords which surface deeper pages from a site, and this is where brands need to be careful. Popovers featured on these deeper pages that distract from the main content can be a barrier to conversion and lead to bounces.

Rather, marketers need to be selective about how they use pop-ups and take a more considered approach when it comes to the user experience.

What constitutes a bad UX?

No one wants to create a bad user experience, because it can be detrimental to credibility, performance and conversions. However, if companies achieve good response rates to newsletter sign-up popovers, you could argue that they aren’t providing a negative web experience and that, in fact, it would be wrong to penalize them.

With the right tool, brands can also be cleverer about when, where and how popovers appear. If a company is trying to collect a steady stream of email addresses from new website visitors, it might make sense to host the popover somewhere on the homepage. After all, the homepage is your company’s shop window and its purpose is to lure people in.

It would also be wise to consider when it pops up. In order not to disrupt the journey and experience, you would want to prevent the popover from appearing immediately. And, of course, you would also want to prevent the pop-up from appearing on the next visit if the user had either signed up or dismissed it.
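As a concrete illustration of that kind of restraint, here is a minimal browser-side sketch: it delays the popover by a configurable number of seconds and suppresses it on later visits once the visitor has signed up or dismissed it. The element IDs, the storage key and the delay value are assumptions for the example, not a recommendation from Google or any specific tool.

```typescript
// Minimal sketch: show a newsletter popover politely (element IDs are assumed).
const POPOVER_DELAY_MS = 15000; // wait a while before showing anything
const STORAGE_KEY = "newsletterPopoverDone";

function maybeShowPopover(): void {
  // Skip entirely if the visitor already signed up or dismissed it on a previous visit.
  if (localStorage.getItem(STORAGE_KEY) === "true") return;

  window.setTimeout(() => {
    const popover = document.getElementById("newsletter-popover");
    if (!popover) return;
    popover.style.display = "block";

    const done = () => {
      popover.style.display = "none";
      localStorage.setItem(STORAGE_KEY, "true"); // don't show it again next visit
    };
    document.getElementById("popover-dismiss")?.addEventListener("click", done);
    document.getElementById("popover-signup")?.addEventListener("click", done);
  }, POPOVER_DELAY_MS);
}

maybeShowPopover();
```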

Will it or will it not?

Let’s remember that the new signal in Google’s algorithm is just one of hundreds of signals that are used to determine rankings – so popovers could make up a small percentage of the overall score. What we take from it all is: if a page’s content is relevant, gets lots of clicks and has a decent dwell time, it may still rank highly (in fact, read the official Google blog post). If a popover is enhancing the experience by giving users another way to consume similar content, and there is positive uptake, we don’t see the harm.

Reblogged 2 years ago from blog.dotmailer.com

Google updates Penguin, says it now runs in real time within the core search algorithm

The latest announced release, Penguin 4.0, will also be the last, given its new real-time nature.

The post Google updates Penguin, says it now runs in real time within the core search algorithm appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.

Reblogged 2 years ago from feeds.searchengineland.com

Everything you need to know about Google’s ‘Possum’ algorithm update

Wondering what’s up with local search rankings lately? Columnist Joy Hawkins has the scoop on a recent local algorithm update that local SEO experts are calling ‘Possum.’

The post Everything you need to know about Google’s ‘Possum’ algorithm update appeared first on Search Engine…

Please visit Search Engine Land for the full article.

Reblogged 2 years ago from feeds.searchengineland.com

Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But I wanted to address it here, because of the increasing popularity of this article, and because I think there’s this hopefulness from folks outside of the hardcore SEO world who are looking at this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO.”

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

You make a great website that delivers great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and others are SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really gets in my craw. I don’t think “write quality content” means anything. You tell me. When you hear that, to me that is a totally non-actionable, non-useful phrase that’s a piece of advice that is so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (which he notes most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, I brainstormed just off the top of my head in the 10 minutes before we filmed this video a list. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we robots.txt those out? Should we disallow from crawling with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve returned a status code 200. Or you’ve used a 404 code when you should have used a 410, which means permanently removed, to get the page completely out of the indexes, as opposed to having Google revisit it and keep it in the index.

Downtime procedures. There’s a specific 5xx code you can use, a 503, that essentially says, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than a 404, which tells them, “This page is now an error.” (There’s a short sketch of this just below.)

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.
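To make the status-code point concrete, here is a minimal Node sketch (the routes and the maintenance flag are assumptions for illustration): during downtime it returns a 503 with a Retry-After header instead of a 404, and it returns a 410 for a page that has been permanently removed.

```typescript
import * as http from "http";

// Hypothetical flag you would flip during planned downtime.
const MAINTENANCE_MODE = false;

const server = http.createServer((req, res) => {
  if (MAINTENANCE_MODE) {
    // 503 + Retry-After says "temporary outage, come back later" rather than "this page is an error".
    res.writeHead(503, { "Retry-After": "3600", "Content-Type": "text/plain" });
    res.end("Down for maintenance, please retry later.");
    return;
  }

  if (req.url === "/retired-page") {
    // 410 says "permanently removed", which helps crawlers drop the URL from the index.
    res.writeHead(410, { "Content-Type": "text/plain" });
    res.end("This page has been permanently removed.");
    return;
  }

  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello");
});

server.listen(8080);
```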

  • International and multi-language targeting issues. I won’t go into that. But you have to know the protocols there.
  • Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics connecting those up and trying to see: Why did we go up or down? Did we have less pages being indexed, more pages being indexed, more pages getting traffic less, more keywords less?
  • Understanding advanced search parameters. Today, just today, I was checking out the related parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site, well, most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and da, da, da, da. Inanchor: doesn’t work anymore, never mind about that one.

I have to go faster, because we’re just going to run out of time. Like, come on.

  • Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you that you have all sorts of errors, and you don’t know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, x robots, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical, you don’t know what those are, you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or MozScape, or Ahrefs, or Majestic, or SEMrush, or Searchmetrics, or the Alchemy API. Those APIs can have powerful things that they can do for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there (there’s a small sketch of this after the list). There’s a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
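Here is a minimal sketch of that “pull from an API and get some data” idea. The endpoint, the key parameter and the response shape are hypothetical placeholders, not the real Mozscape, Ahrefs, or SEMrush API (and it assumes Node 18+ for the built-in fetch); the point is simply how little code a basic pull-and-save script needs.

```typescript
import { writeFileSync } from "fs";

// Hypothetical link-metrics API; swap in the real endpoint and auth for your tool of choice.
const API_URL = "https://api.example.com/v1/url-metrics";
const API_KEY = "YOUR_API_KEY";

interface UrlMetrics {
  url: string;
  linkingDomains: number;
  authority: number;
}

async function fetchMetrics(urls: string[]): Promise<UrlMetrics[]> {
  const results: UrlMetrics[] = [];
  for (const url of urls) {
    const resp = await fetch(`${API_URL}?target=${encodeURIComponent(url)}&key=${API_KEY}`);
    results.push((await resp.json()) as UrlMetrics);
  }
  return results;
}

async function main() {
  const metrics = await fetchMetrics(["https://moz.com", "https://example.com"]);
  // Write a simple CSV you can open in Excel or import into a Google Sheet.
  const rows = metrics.map((m) => `${m.url},${m.linkingDomains},${m.authority}`);
  writeFileSync("metrics.csv", ["url,linking_domains,authority", ...rows].join("\n"));
}

main().catch(console.error);
```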

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Reblogged 4 years ago from tracking.feedpress.it