Eliminate Duplicate Content in Faceted Navigation with Ajax/JSON/jQuery

Posted by EricEnge

One of the classic problems in SEO is that while complex navigation schemes may be useful to users, they create problems for search engines. Many publishers rely on tags such as rel=canonical, or on the parameter settings in Webmaster Tools, to try to solve these types of issues. However, each of these potential solutions has limitations. In today’s post, I am going to outline how you can use JavaScript solutions to eliminate the problem altogether.

Note that I am going to keep this post at a conceptual level, with only a few brief, illustrative code sketches rather than production code. If you are interested in learning more about Ajax/JSON/jQuery, here are some resources you can check out:

  1. Ajax Tutorial
  2. Learning Ajax/jQuery

Defining the problem with faceted navigation

Having a page of products and then allowing users to sort those products the way they want (say, from highest to lowest price), or to apply a filter to pick a subset of the products (say, only those over $60), makes good sense for users. We typically refer to these types of navigation options as “faceted navigation.”

However, faceted navigation can cause problems for search engines, because they don’t want to crawl and index all of your different sort orders or all of your different filtered versions of your pages. They would end up with many variants of your pages that are not significantly different from a search user’s perspective.

Solutions such as rel=canonical tags and the parameter settings in Webmaster Tools have limitations. For example, search engines treat rel=canonical tags as “hints” and may choose not to honor them; even when they are honored, they do not necessarily keep the search engines from continuing to crawl those pages.

A better solution might be to use JSON and jQuery to implement your faceted navigation so that a new page is not created when a user picks a filter or a sort order. Let’s take a look at how it works.

Using JSON and jQuery to filter on the client side

The main benefit of the implementation discussed below is that a new URL is not created when a user is on a page of yours and applies a filter or sort order. When you use JSON and jQuery, the entire process happens on the client device without involving your web server at all.

When a user initially requests one of the product pages on your web site, the interaction looks like this:

[Diagram: using JSON for faceted navigation]

The server delivers the page (along with its embedded JSON data object) to the browser that requested it. Now, when the user picks a sort order (or filter) on that page, here is what happens:

[Diagram: jQuery and faceted navigation]

When the user picks one of those options, a jQuery request is made against the JSON data object. Translation: the entire interaction happens within the client’s browser, and the sort or filter is applied there. Simply put, the smarts to handle that sort or filter reside entirely within the code that was transferred to the client device with the initial request for the page.
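To make that concrete, here is a minimal sketch of the idea. The element IDs, data shape, and variable names (#product-list, #sort-select, productData, and so on) are hypothetical placeholders, not production code:

```javascript
// Product data delivered as a JSON object with the initial page load
// (structure and values are hypothetical).
var productData = [
  { name: "Widget A", price: 49.99 },
  { name: "Widget B", price: 79.99 },
  { name: "Widget C", price: 64.50 }
];

// Redraw the product list from an array of products.
function renderProducts(products) {
  var $list = $("#product-list").empty();
  $.each(products, function (i, p) {
    $list.append("<li>" + p.name + " ($" + p.price.toFixed(2) + ")</li>");
  });
}

// Sorting happens entirely in the browser: no new URL, no server request.
$("#sort-select").on("change", function () {
  var order = $(this).val(); // e.g. "price-asc" or "price-desc"
  var sorted = productData.slice().sort(function (a, b) {
    return order === "price-desc" ? b.price - a.price : a.price - b.price;
  });
  renderProducts(sorted);
});

// Filtering works the same way, e.g. "only products over $60."
$("#filter-over-60").on("change", function () {
  renderProducts(this.checked
    ? productData.filter(function (p) { return p.price > 60; })
    : productData);
});
```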

As a result, there is no new page created and no new URL for Google or Bing to crawl. Any concerns about crawl budget or inefficient use of PageRank are completely eliminated. This is great stuff! However, there remain limitations in this implementation.

Specifically, if your list of products spans multiple pages on your site, the sorting and filtering will only be applied to the data set already transferred to the user’s browser with the initial request. In short, you may only be sorting the first page of products, not the entire set. It’s possible to have the initial JSON data object contain the full product set, but this may not be a good idea if that makes the page weight too large. In that event, we will need to do a bit more.

What Ajax does for you

Now we are going to dig in slightly deeper and outline how Ajax will allow us to handle sorting, filtering, AND pagination. Warning: There is some tech talk in this section, but I will try to follow each technical explanation with a layman’s explanation about what’s happening.

The conceptual Ajax implementation looks like this:

[Diagram: Ajax and faceted navigation]

In this structure, we are using an Ajax layer to manage the communications with the web server. Imagine that we have a set of 10 pages; the user has received the first of those 10 pages on their device and then requests a change to the sort order. The Ajax layer requests a fresh set of data from your web server, much like a normal HTML transaction, except that it runs asynchronously in the background.

If you don’t know what that means, the benefit is that the rest of the page (your main menu, your footer links to related products, and other page elements) can finish loading while the data that the Ajax layer will display is fetched in parallel. This can improve the perceived performance of the page.

When a user selects a different sort order, an event handler that the code has registered on the relevant object (an HTML element or other DOM object) fires and triggers the Ajax request. The browser carries out the request in the background and hands the result back to the page when it arrives. This happens without a full page refresh; only the content controlled by the Ajax layer is redrawn.

To translate this for the non-technical reader, it just means that we can update the sort order of the page, without needing to redraw the entire page, or change the URL, even in the case of a paginated sequence of pages. This is a benefit because it can be faster than reloading the entire page, and it should make it clear to search engines that you are not trying to get some new page into their index.

Effectively, it does this within the existing Document Object Model (DOM), which you can think of as the basic structure of the document and the specification for how it is accessed and manipulated.
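Here is a minimal sketch of what that Ajax layer might look like. The endpoint (/products.json) and parameter names are hypothetical, and a real implementation would also handle errors and loading states:

```javascript
// Ask the server for a freshly sorted/filtered/paginated slice of products,
// asynchronously, then redraw only the product list in the DOM. The URL in
// the address bar never changes, so no new page is created for crawlers.
function loadProducts(options) {
  $.getJSON("/products.json", {
    sort: options.sort,     // e.g. "price-desc"
    filter: options.filter, // e.g. "over-60"
    page: options.page      // e.g. 2
  }).done(function (products) {
    renderProducts(products); // same rendering helper as in the earlier sketch
  });
}

// Wire the UI controls to the Ajax layer.
$("#sort-select").on("change", function () {
  loadProducts({ sort: $(this).val(), page: 1 });
});

$(document).on("click", ".pagination a", function (e) {
  e.preventDefault(); // stay on the current URL; no full page refresh
  loadProducts({ page: $(this).data("page") });
});
```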

How will Google handle this type of implementation?

For those of you who read Adam Audette’s excellent recent post on the tests his team performed on how Google reads JavaScript, you may be wondering whether Google will still load all these page variants on the same URL anyway, and whether they will take issue with it.

I had the same question, so I reached out to Google’s Gary Illyes to get an answer. Here is the dialog that transpired:

Eric Enge: I’d like to ask you about using JSON and jQuery to render different sort orders and filters within the same URL. I.e. the user selects a sort order or a filter, and the content is reordered and redrawn on the page on the client side. Hence no new URL would be created. It’s effectively a way of canonicalizing the content, since each variant is a strict subset.

Then there is a second level consideration with this approach, which involves doing the same thing with pagination. I.e. you have 10 pages of products, and users still have sorting and filtering options. In order to support sorting and filtering across the entire 10 page set, you use an Ajax solution, so all of that still renders on one URL.

So, if you are on page 1, and a user executes a sort, they get that all back in that one page. However, to do this right, going to page 2 would also render on the same URL. Effectively, you are taking the 10 page set and rendering it all within one URL. This allows sorting, filtering, and pagination without needing to use canonical, noindex, prev/next, or robots.txt.

If this was not problematic for Google, the only downside is that it makes the pagination not visible to Google. Does that make sense, or is it a bad idea?

Gary Illyes: If you have one URL only, and people have to click on stuff to see different sort orders or filters for the exact same content under that URL, then typically we would only see the default content.

If you don’t have pagination information, that’s not a problem, except we might not see the content on the other pages that are not contained in the HTML within the initial page load. The meaning of rel-prev/next is to funnel the signals from child pages (page 2, 3, 4, etc.) to the group of pages as a collection, or to the view-all page if you have one. If you simply choose to render those paginated versions on a single URL, that will have the same impact from a signals point of view, meaning that all signals will go to a single entity, rather than distributed to several URLs.

Summary

Keep in mind, the reason Google implemented tags like rel=canonical, noindex, rel=prev/next, and others is to reduce their crawling burden and index bloat, and to help focus incoming signals on the right pages in the best way possible. The use of Ajax/JSON/jQuery as outlined above accomplishes this simply and elegantly.

On most e-commerce sites, there are many different “facets” of how a user might want to sort and filter a list of products. With the Ajax-style implementation, this can be done without creating new pages. End users get the control they are looking for, the search engines don’t have to deal with excess pages they don’t want to see, and signals into the site (such as links) are focused on the main pages where they should be.

The one downside is that Google may not see all the content when it is paginated. A site with lots of very similar products in a paginated list doesn’t need to worry much about Google seeing every additional page, so this isn’t much of a concern if your incremental pages simply contain more of what’s on the first page. Sites whose content is materially different on the additional pages, however, might not want to use this approach.

These solutions do require JavaScript coding expertise but are not really that complex. If you have the ability to consider a path like this, you can free yourself from trying to understand the various tags, their limitations, and whether or not they truly accomplish what you are looking for.

Credit: Thanks to Clark Lefavour for providing a review of the above for technical correctness.


Why We Can’t Do Keyword Research Like It’s 2010 – Whiteboard Friday

Posted by randfish

Keyword research is a very different field than it was just five years ago, and if we don’t keep up with the times, we might end up doing more harm than good. From the research itself to the selection and targeting process, in today’s Whiteboard Friday Rand explains what has changed and what we all need to do to conduct effective keyword research today.

For reference, here’s a still of this week’s whiteboard.

What do we need to change to keep up with the changing world of keyword research?

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about keyword research: why it’s changed over the last five or six years and what we need to do differently now that things have changed. So I want to talk about changing up not just the research but also the selection and targeting process.

There are three big areas that I’ll cover here. There’s lots more in-depth stuff, but I think we should start with these three.

1) The AdWords keyword tool hides data!

This is where almost all of us in the SEO world start, and oftentimes end, with our keyword research. We go to the AdWords Keyword Tool, what used to be the external keyword tool and is now the Keyword Planner inside AdWords. We go inside that tool, and we look at the volume that’s reported, and we sort of record that as, well, it’s not good, but it’s the best we’re going to do.

However, I think there are a few things to consider here. First off, that tool is hiding data. What I mean by that is not that they’re not telling the truth, but they’re not telling the whole truth. They’re not telling nothing but the truth, because those rounded-off numbers that you always see, you know that those are inaccurate. Anytime you’ve bought keywords, you’ve seen that the impression count never matches the count that you see in the AdWords tool. It’s not usually massively off, but it’s often off by a good degree, and the only thing it’s great for is telling relative volume of one term versus another.

But because AdWords hides data essentially by saying like, “Hey, you’re going to type in . . .” Let’s say I’m going to type in “college tuition,” and Google knows that a lot of people search for how to reduce college tuition, but that doesn’t come up in the suggestions because it’s not a commercial term, or they don’t think that an advertiser who bids on that is going to do particularly well and so they don’t show it in there. I’m giving an example. They might indeed show that one.

But because that data is hidden, we need to go deeper. We need to go beyond and look at things like Google Suggest and related searches, which are down at the bottom. We need to start conducting customer interviews and staff interviews, which hopefully have always been part of your brainstorming process but really need to be now. Then you can apply that to AdWords. You can apply that to Suggest and related searches.

The beautiful thing is, once you gather these terms from places like forums, communities, and discussion boards, and see what terms and phrases people are using, you can collect all this stuff up, plug it back into AdWords, and now they will tell you how much volume they’ve got. So you take that “how to lower college tuition” term, you plug it into AdWords, and they will show you a number, a non-zero number. They were just hiding it in the suggestions because they thought, “Hey, you probably don’t want to bid on that. That won’t bring you a good ROI.” So you’ve got to be careful with that, especially when it comes to SEO kinds of keyword research.

2) Building separate pages for each term or phrase doesn’t make sense

It used to be the case that we built separate pages for every single term and phrase that was in there, because we wanted to have the maximum keyword targeting that we could. So it didn’t matter to us that college scholarship and university scholarships were essentially people looking for exactly the same thing, just using different terminology. We would make one page for one and one page for the other. That’s not the case anymore.

Today, we need to group by the same searcher intent. If two searchers are searching for two different terms or phrases but both of them have exactly the same intent (they want the same information, they’re looking for the same answers, and their query is going to be resolved by the same content), we want one page to serve both, and that’s changed up a little bit of how we do keyword research and how we do selection and targeting as well.

3) Build your keyword consideration and prioritization spreadsheet with the right metrics

Everybody’s got an Excel version of this, because I think there’s just no awesome tool out there yet that everyone loves and that solves this problem for us, and Excel is very, very flexible. So we go into Excel, we put in our keyword and the volume, and then a lot of times we almost stop there. We do keyword, volume, then maybe value to the business, and then we prioritize.

What are all these new columns you’re showing me, Rand? Well, here’s what I’m seeing sophisticated, modern SEOs add to the keyword process, in the more advanced agencies and among the more advanced in-house practitioners.

Difficulty

A lot of folks have done this, but difficulty helps us say, “Hey, this has a lot of volume, but it’s going to be tremendously hard to rank.”

The difficulty score that Moz uses and attempts to calculate is a weighted average of the top 10 domain authorities. It also uses page authority, so it’s kind of a weighted stack out of the two. If you’re seeing very, very challenging pages, very challenging domains to get in there, it’s going to be super hard to rank against them. The difficulty is high. For all of these ones it’s going to be high because college and university terms are just incredibly lucrative.
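Moz doesn’t publish the exact formula, but conceptually it’s something like the rough, purely illustrative sketch below (the equal weighting is an assumption, not Moz’s actual calculation):

```javascript
// Rough illustration of a difficulty-style score: blend the page and domain
// authorities of the top 10 ranking results. NOT Moz's actual formula; the
// 50/50 weighting here is an assumption for illustration only.
function roughDifficulty(topResults) {
  var top10 = topResults.slice(0, 10);
  var total = top10.reduce(function (sum, r) {
    return sum + (0.5 * r.domainAuthority + 0.5 * r.pageAuthority);
  }, 0);
  return Math.round(total / top10.length); // 0-100; higher = harder to rank
}
```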

That difficulty can help bias you against chasing after terms and phrases for which you are very unlikely to rank, at least early on. If you feel like, “Hey, I already have a powerful domain. I can rank for everything I want. I am the thousand-pound gorilla in my space,” great. Go after the difficulty of your choice, but this helps prioritize.

Opportunity

This is actually very rarely used, but I think sophisticated marketers are using it extremely intelligently. Essentially what they’re saying is, “Hey, if you look at a set of search results, sometimes there are two or three ads at the top instead of just the ones on the sidebar, and that’s biasing some of the click-through rate curve.” Sometimes there’s an instant answer or a Knowledge Graph or a news box or images or video, or all these kinds of things that search results can be marked up with, that are not just the classic 10 web results. Unfortunately, if you’re building a spreadsheet like this and treating every single search result like it’s just 10 blue links, well you’re going to lose out. You’re missing the potential opportunity and the opportunity cost that comes with ads at the top or all of these kinds of features that will bias the click-through rate curve.

So what I’ve seen some really smart marketers do is essentially build some kind of a framework to say, “Hey, you know what? When we see that there’s a top ad and an instant answer, we’re saying the opportunity if I was ranking number 1 is not 10 out of 10. I don’t expect to get whatever the average traffic for the number 1 position is. I expect to get something considerably less than that. Maybe something around 60% of that, because of this instant answer and these top ads.” So I’m going to mark this opportunity as a 6 out of 10.

There are 2 top ads here, so I’m giving this a 7 out of 10. This has two top ads and then it has a news block below the first position. So again, I’m going to reduce that click-through rate. I think that’s going down to a 6 out of 10.

You can get more or less scientific and specific with this. Click-through rate curves are imperfect by nature, because we truly can’t measure exactly how those things change. However, I think smart marketers can make some good assumptions from general click-through rate data (there are several resources out there on that) to build a model like this and then include it in their keyword research.
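To make that concrete, here is a small, hypothetical scoring function. The deductions are back-fitted to the examples above purely for illustration; they are not measured click-through data:

```javascript
// Hypothetical opportunity score: start at 10 and subtract for SERP features
// that pull clicks away from the classic 10 blue links. The deduction values
// below are illustrative guesses fitted to the examples in the transcript.
function opportunityScore(serp) {
  var score = 10;
  if (serp.instantAnswer) score -= 2.5;
  if (serp.newsBox)       score -= 1;
  score -= 1.5 * (serp.topAds || 0);
  return Math.max(Math.round(score), 1);
}

opportunityScore({ topAds: 1, instantAnswer: true }); // => 6 (top ad + instant answer)
opportunityScore({ topAds: 2 });                      // => 7 (two top ads)
opportunityScore({ topAds: 2, newsBox: true });       // => 6 (two top ads + news block)
```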

This does mean that you have to run a query for every keyword you’re thinking about, but you should be doing that anyway. You want to get a good look at who’s ranking in those search results and what kind of content they’re building. If you’re running a keyword difficulty tool, you are already getting something like that.

Business value

This is a classic one. Business value is essentially saying, “What’s it worth to us if visitors come through with this search term?” You can get that from bidding through AdWords. That’s the most sort of scientific, mathematically sound way to get it. Then, of course, you can also get it through your own intuition. It’s better to start with your intuition than nothing if you don’t already have AdWords data or you haven’t started bidding, and then you can refine your sort of estimate over time as you see search visitors visit the pages that are ranking, as you potentially buy those ads, and those kinds of things.

You can get more sophisticated around this. I think a 10 point scale is just fine. You could also use a one, two, or three there, that’s also fine.

Requirements or Options

Then I don’t exactly know what to call this column. I can’t remember who showed me theirs that had it in there. I think they called it Optional Data or Additional SERPs Data, but I’m going to call it Requirements or Options. Requirements because this is essentially saying, “Hey, if I want to rank in these search results, am I seeing that the top two or three are all video? Oh, they’re all video. They’re all coming from YouTube. If I want to be in there, I’ve got to be video.”

Or something like, “Hey, I’m seeing that most of the top results have been produced or updated in the last six months. Google appears to be biasing to very fresh information here.” So, for example, if I were searching for “university scholarships Cambridge 2015,” well, guess what? Google probably wants to bias toward results that are either from the official page on Cambridge’s website or articles from this year about getting into that university and the scholarships that are available or offered. I saw that in two of these search results: both the college and university scholarship terms had a significant number of SERPs where a freshness bump appeared to be required. You can see that a lot because the date will be shown ahead of the description, and the date will be very fresh, sometime in the last six months or a year.

Prioritization

Then finally I can build my prioritization. So based on all the data I had here, I essentially said, “Hey, you know what? These are not 1 and 2. This is actually 1A and 1B, because these are the same concepts. I’m going to build a single page to target both of those keyword phrases.” I think that makes good sense. Someone who is looking for college scholarships, university scholarships, same intent.

I am giving it a slight prioritization, 1A versus 1B, and the reason I do this is because I always have one keyword phrase that I’m leaning on a little more heavily. Because Google isn’t perfect around this, the search results will be a little different. I want to bias to one versus the other. In this case, since I’m targeting university more than college, my title tag might say something like “college and university scholarships” so that “university” and “scholarships” sit nicely together, near the front of the title, that kind of thing. Then 1B, 2, 3.
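If you want to roll those columns up into a single sortable number, a spreadsheet formula (or a few lines of code) can do it. The weighting below is a hypothetical illustration, not a formula from the video:

```javascript
// Hypothetical roll-up of the spreadsheet columns into one priority number.
// The weighting is illustrative; tune it to your own business.
function keywordPriority(row) {
  // row: { volume, difficulty (0-100), opportunity (1-10), businessValue (1-10) }
  var reach = row.volume * (row.opportunity / 10); // traffic you can realistically capture
  var ease  = (100 - row.difficulty) / 100;        // easier keywords score higher
  return Math.round(reach * ease * row.businessValue);
}

keywordPriority({ volume: 5400, difficulty: 86, opportunity: 7, businessValue: 9 }); // => 4763
```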

This is kind of the way that modern SEOs are building a more sophisticated process with better data, more inclusive data that helps them select the right kinds of keywords and prioritize to the right ones. I’m sure you guys have built some awesome stuff. The Moz community is filled with very advanced marketers, probably plenty of you who’ve done even more than this.

I look forward to hearing from you in the comments. I would love to chat more about this topic, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Introducing Followerwonk Profile Pages

Posted by petebray

Followerwonk has always been primarily about social graph analysis and exploration: from tracking follower growth to comparing relationships, and so on.

Followerwonk now adds content analysis and user profiling, too

In the Analyze tab, you’ll find a new option to examine any Twitter user’s tweets. (Note that this is a Pro-only feature, so you’ll need to be a subscriber to use it.)

You can also access these profile pages by simply clicking on a Twitter username anywhere else in Followerwonk.

For us, this feature is really exciting, because we let you analyze not just yourself, but other people too. In fact, Pro users can analyze as many other Twitter accounts as they want!

Now, you’ll doubtlessly learn lots by analyzing your own tweets. But you already probably have a pretty good sense of what content works well for you (and who you engage with frequently).

We feel that Profile Pages really move the needle by letting you surface the relationships and content strategies of competitors, customers, and prospects.

Let’s take a closer look.

Find the people any Twitter user engages with most frequently

Yep, just plug in a Twitter name and we’ll analyze their most recent 2,000 tweets. We’ll extract all of the mentions and determine which folks they talk to the most.

Here, we see that @dr_pete talks most frequently with (or about) Moz, Rand, Elisa, and Melissa. In fact, close to 10% of his tweets are talking to these four! (Note the percentage above each listed name.)

This analysis is helpful as it lets you quickly get a sense for the relationships that are important for this person. That provides possible inroads to that person in terms of engagement strategies.
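Under the hood, the idea is simple enough to sketch: pull the @mentions out of a batch of tweets, count them, and express each count as a percentage of the tweets analyzed. This is an illustration of the concept only, not Followerwonk’s actual implementation:

```javascript
// Count @mentions across a set of tweets and express each as a percentage
// of tweets analyzed. Illustrative only; not Followerwonk's implementation.
function topMentions(tweets) {
  var counts = {};
  tweets.forEach(function (text) {
    (text.match(/@\w+/g) || []).forEach(function (handle) {
      counts[handle] = (counts[handle] || 0) + 1;
    });
  });
  return Object.keys(counts)
    .map(function (handle) {
      return { handle: handle, pct: (100 * counts[handle] / tweets.length).toFixed(1) + "%" };
    })
    .sort(function (a, b) { return parseFloat(b.pct) - parseFloat(a.pct); });
}

topMentions(["Great post by @Moz and @randfish", "Thanks @Moz!"]);
// => [{ handle: "@Moz", pct: "100.0%" }, { handle: "@randfish", pct: "50.0%" }]
```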

Chart when and what conversations happen with an analyzed user’s most important relationships

We don’t just stop there. By clicking on the little “see engagement” link below each listed user, you can see the history of the relationship.

Here, we can see when the engagements happened in the little chart. And we actually show you the underlying tweets, too.

This is a great way to quickly understand the context of that relationship: is it a friendly back and forth, a heated exchange, or the last gasp of a bad customer experience? Perhaps the tweets from a competitor to one of his top customers occurred weeks back? Maybe there’s a chance for you to make inroads with that customer?

There’s all sorts of productive tea-leaf reading that can happen with this feature. And, by the way, don’t forget that you already have the ability to track all the relationships a competitor forms (or breaks), too.

Rank any Twitter user’s tweets by importance to surface their best content

This is my favorite feature—by far—in Followerwonk.

Sure, there are other tools that tell you your most popular tweets, but there are few that let you turn that feature around and examine other Twitter users. This is important because (let’s face it) few of us have the volume of RTs and favorites to make self-analysis that useful. But when we examine top Twitter accounts, we come away with hints about what content strategies they’re using that work well.

Here we see that Obama’s top tweets include a tribute, an irreverent bit of humor, and an image that creatively criticizes a recent Supreme Court ruling. What lessons might you draw from the content that works best for Obama? What content works best for other people? Their image tweets? Tweets with humor? Shorter tweets? Tweets with links? Go do some analyzing!

Uncover the top source domains of any Twitter user

Yep, we dissect all the URLs for any analyzed user to assemble a list of their top domains.

This feature offers a great way to quickly snapshot the types of content and sources that users draw material from. Moreover, we can click on “see mentions” to see a timeline of when those mentions occurred for each domain, as well as what particular tweets accounted for them.

In sum…

These features offer exciting ways to quickly profile users. Such analysis should be at the heart of any engagement strategy: understand who your target most frequently engages with, what content makes them successful, and what domains they pull from.

At the same time, this approach reveals content strategies—what, precisely, works well for you, but also for other thought leaders in your category. Not only can you draw inspiration from this approach, but you can find content that might deserve a retweet (or reformulation in your own words).

I don’t want to go too Freudian on you, but consider this: what’s the value of self-analysis? My point is that unless you have a lot of data, any analytics product isn’t going to be terribly useful. That’s why this addition to Followerwonk is so powerful. Now you can analyze others, including thought leaders in your particular industry, to find the secrets of their social success.

Start analyzing!

Finally, this is a bittersweet blog post for me. It’s my last one as a Mozzer. I’m off to try my hand at another bootstrapping startup: this time, software that lets you build feature tours and elicit visitor insights. I’m leaving Followerwonk in great hands, and I look forward to seeing awesome new features down the line. Of course, you can always stay in touch with me on Twitter. Keep on wonkin’!


Does SEO Boil Down to Site Crawlability and Content Quality? – Whiteboard Friday

Posted by randfish

We all know that keywords and links alone no longer cut it as a holistic SEO strategy. But there are still plenty of people outside our field who try to “boil SEO down” to a naively simplistic practice – one that isn’t representative of what SEOs need to do to succeed. In today’s Whiteboard Friday, Rand champions the art and science of SEO and offers insight into how very broad the field really is.

For reference, here’s a still of this week’s whiteboard!

Video Transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to try and tackle a question that, if you’re in the SEO world, you probably have heard many, many times from those outside of the SEO world.

I thought a recent question on Quora phrased it perfectly. This question actually had quite a few people who’d seen it. Does SEO boil down to making a site easily crawlable and consistently creating good, relevant content?

Oh, well, yeah, that’s basically all there is to it. I mean why do we even film hundreds of Whiteboard Fridays?

In all seriousness, this is a fair question, and I can empathize with the people asking it, because when we look at a new practice, I think all of us try to boil it down to its basic parts. We say, “Well, I suppose that the field of advertising is just about finding the right audience, then finding the ads that you can afford that are going to reach that target audience, and then making ads that people actually pay attention to.”

Well, yes and no. The advertising field is, in fact, incredibly complex. There are dramatic numbers of inputs that go into it.

You could do this with field after field after field. Oh, well, building a car must just mean X. Or being a photographer must just mean Y.

These things are never true. There’s always complexity underneath there. But I understand why this happens.

We have these two things. In fact, more often I at least hear the addition of keyword research in there: a crawl-friendly website, good, relevant content, and keyword research and targeting, and that’s all SEO is. Right? The answer is no.

This is table stakes. This is what you have to do in order to even attempt to do SEO, in order to attempt to be in the rankings to potentially get search traffic that will drive valuable visits to your website. Table stakes is very different from the art and science of the practice. That comes because good, relevant content is rarely, if ever, good enough to rank competitively, because crawl friendly is necessary, but it’s not going to help you improve any rankings. It’s not going to help you in the competitive sense. You could be extremely crawl friendly and rank on page ten for many, many search terms. That would do nothing for your SEO and drive no traffic whatsoever.

Keyword research and targeting are also required certainly, but so too is ongoing maintenance of these things. This is not a fire and forget strategy in any sense of the word. You need to be tracking those rankings and knowing which search terms and which pages, now that “not provided” exists, are actually driving valuable visits to your site. You’ve got to be identifying new terms as those come out, seeing where your competition is beating you out and what they’ve done. This is an ongoing practice.

It’s the case that you might say, “Okay, all right. So I really need to create remarkable content.” Well, okay, yes, content that’s remarkable helps. It does help you in SEO, but only if that remarkability also yields a high likelihood of engagement and sharing.

If your remarkability is that you’ve produced something wonderful that is incredibly fascinating, but no one particularly cares about, they don’t find it especially more useful, or they do find it more useful, but they’re not interested in sharing it, no one is going to help amplify that content in any way—privately, one to one, through email, or directing people to your website, or linking to you, or sharing socially. There’s no amplification. The media won’t pick it up. Now you’ve kind of lost. You may have remarkable content, but it is not the kind of remarkable that performs well for SEO.

The reason is that links are still a massive, massive input into rankings. So anything—this word is going to be important, I’m going to revisit it—anything that promotes or inhibits link growth helps or hurts SEO. This makes good sense when you think about it.

But SEO, of course, is a competitive practice. You can’t fire and forget as we talked about. Your competition is always going to be seeking to catch up to you or to one up you. If you’re not racing ahead at the right trajectory, someone will catch you. This is the law of SEO, and it’s been seen over and over and over again by thousands and thousands of companies who’ve entered the field.

Okay, I realize this is hard to read. We talked about SEO being anything that impacts potential links. But SEO is really any input that engines use to rank pages. Any input that engines use to rank pages goes into the SEO bucket, and anything that people or technology does to influence those ranking elements is what the practice of SEO is about.

That’s why this field is so huge. That’s why SEO is neuropsychology. SEO is conversion rate optimization. SEO is social media. SEO is user experience and design. SEO is branding. SEO is analytics. SEO is product. SEO is advertising. SEO is public relations. The fill-in-the-blank is SEO if that blank is anything that affects any input directly or indirectly.

This is why this is a huge field. This is why SEO is so complex and so challenging. This is also why, unfortunately, when people try to boil SEO down and put us into a little bucket, it doesn’t work. It doesn’t work, and it defeats the practice. It defeats the investments, and it works against all the things that we are working toward in order to help SEO.

When someone on your team or a client says to you, “Hey, you’re doing SEO. Why are you telling us how to manage our Facebook page? Why are you telling us who to talk to in the media? Why are you telling us what changes to make to our branding campaigns or our advertising?” This is why. I hope maybe you’ll send them this video, maybe you’ll draw them this diagram, maybe you’ll be able to explain it a little more clearly and quickly.

With that, I hope we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com
