3 ways to drive conversions with web behavior data

Session length, clicks, and product views are among the most actively tracked ecommerce metrics, and for good reason. While numbers vary greatly, most e-tailers report an average session duration of between two and five minutes. During that time, a customer will browse multiple product pages, mere clicks away from conversion.

The average conversion rate on a product description page is over 8%, meaning it’s important that merchants track online visitors even before they abandon their cart.

Beyond simply tracking these metrics, merchants and marketers can inform their merchandising and marketing with these web behavior insights. Why not resurface popular products in your next email?

Implementation

The first step is to implement the web tracking script on your site – this can be done manually via your CMS or through Google Tag Manager. Both methods are covered in our support implementation guide.
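If you add it manually, the script is typically a small piece of JavaScript placed on every page. The following is a generic illustration of that pattern only – it is not dotdigital’s actual tracking code, and the host and ID shown are placeholders; use the real snippet provided in your account:

```html
<!-- Generic web-tracking pattern (illustrative only; use the snippet
     from your dotdigital account, which carries your real tracking ID) -->
<script>
  (function (w, d) {
    var s = d.createElement("script");
    s.async = true;
    s.src = "https://tracking.example.com/track.js"; // hypothetical host
    d.getElementsByTagName("head")[0].appendChild(s);
    w._webTrackingId = "YOUR-TRACKING-ID"; // placeholder
  })(window, document);
</script>
```

Whichever route you choose, the snippet needs to load on every page you want tracked, which is why Google Tag Manager is often the simpler option.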

Once the script is live you’ll start collecting session data (from contacts who’ve clicked through from an email) in your WebInsights. One contact can have many web sessions, and each can have multiple page visits recorded. There are lots of data attributes available which you can view within an individual contact record – duration of page views (in minutes) is a good one. You can apply segmentation rules based on these attributes, too.   

Here are three ways to start leveraging web behavior data with dotdigital Engagement Cloud:

Abandoned browse email

Truth is – everyone gets distracted online (who’s multi-tasking right now?). If a customer has viewed a product but hasn’t purchased, sending a relevant and targeted follow-up makes complete sense. So, creating an abandoned browse campaign should be on your radar.

Obviously, it’s impractical to set up triggers for all products; select a popular high-value product, or a product with a big margin, as a first step when testing the abandoned browse waters.  

Ready for the next level? Why not set up an abandoned browse campaign based on a set of more generic rules, like a total number of WebInsight collections or a ‘url contains’ rule?

(!) Just remember to exclude anyone that has already purchased!
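As a sketch of that segment logic – hypothetical data shapes for illustration, not dotdigital’s actual segmentation API – the rule “viewed the product recently but hasn’t purchased it” might look like:

```python
from datetime import datetime, timedelta


def abandoned_browse_segment(contacts, product_url, window_days=7):
    """Select contacts who viewed the product recently but never bought it.

    `contacts` is a list of dicts with 'email', 'page_views' (a list of
    {'url', 'viewed_at'}), and 'purchases' (a list of product URLs).
    These field names are hypothetical, chosen only to illustrate the rule.
    """
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    segment = []
    for contact in contacts:
        viewed = any(
            view["url"] == product_url and view["viewed_at"] >= cutoff
            for view in contact["page_views"]
        )
        purchased = product_url in contact["purchases"]
        if viewed and not purchased:  # exclude anyone who already bought
            segment.append(contact["email"])
    return segment
```

The key line is the `and not purchased` check: it implements the exclusion above, so recent buyers never receive the follow-up.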

The program-entry segment might look something like:

The program flow might look something like:

Product recommendations

Product recs are a great tool to inspire longer and repeat website visits. Shoppers buy recommended products – whether they’re bestsellers or tailored uniquely to them. This unmissable sales tactic accounts for 31% of ecommerce site revenue (Barilliance, 2018).

If you are storing your products in our insight data collection (the product catalog syncs over as part of our commerce connectors: Magento, Shopify Plus, WooCommerce, BigCommerce, Shopware, and Salesforce Commerce Cloud), then once you begin tracking web behavior you can start using most-viewed and trending product recommendation blocks in either BAU campaigns or automated triggers.

The content block will dynamically update with the latest most-viewed products based on the logic you set. For example, ‘only show most-viewed products over a certain price’ (and if you have stock levels, you could exclude any products with low inventory).
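That filtering logic can be sketched in a few lines – again with hypothetical field names, not the actual commerce-connector schema:

```python
def most_viewed_recommendations(products, min_price=20.0, min_stock=5, limit=4):
    """Rank products by view count, keeping only items above a price floor
    and with healthy inventory.

    `products` is a list of dicts with 'name', 'price', 'stock', and
    'views' keys; all names here are illustrative assumptions.
    """
    eligible = [
        p for p in products
        if p["price"] >= min_price and p["stock"] >= min_stock
    ]
    # Most-viewed first, capped at the number of slots in the content block
    eligible.sort(key=lambda p: p["views"], reverse=True)
    return [p["name"] for p in eligible[:limit]]
```

A cheap item or a nearly-out-of-stock item never makes the block, no matter how many views it has, which is exactly the behavior the price and inventory rules above describe.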

A simple rule might look like:

Continue your journey

Rather than target online visitors based on specific pages, you could drive a more generic follow-up; one that prompts them to continue their journey. Something like: ‘We hope you liked what you saw’ – a triggered campaign sent to those who’ve viewed pages numerous times but haven’t yet purchased or converted.

If you have an online store, encourage subscribers to create an account using a dynamic content block that’s only visible to them (and hidden from existing customers). This email campaign should have a light touch; it’s about getting your contacts back to your site to create an account and/or make a purchase.

One simple script.

Engagement Cloud is powerful, but we can’t collect web behavioral insight unless you tell us to with a simple piece of script. To find out more about implementing web tracking on your site, check out our Knowledge Base here.

Once the script is live, Engagement Cloud will start collecting session data for you (from contacts who have clicked through from an email) and store it in a web insights collection.

There are many web behavior attributes available for use; this overview should help you learn more about them and web behavior in general.

Getting started

  1. Think big, start small, and scale quickly
  2. Focus on high-value, bestselling, or biggest-margin products
  3. Put data to use with customer segmentation
  4. Remember to apply frequency rules (gauge engagement based on a minimum number of visits in 7 or 14 days) and exclude recent purchasers

Want more information on WebInsights? Speak to your account manager today.

The post 3 ways to drive conversions with web behavior data appeared first on dotdigital blog.

Reblogged 2 weeks ago from blog.dotdigital.com

Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks say, “hey, disallow this,” then suddenly see those URLs popping up and wonder what’s going on. Look, Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake. They think, “hey, there’s a lot of links pointing to this content, there’s a lot of people visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.
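For example, a robots.txt illustrating that specific-versus-broad distinction (hypothetical paths):

```
# Lives at yoursite.com/robots.txt
User-agent: *

# A specific, single-URL rule tends to be respected reliably:
Disallow: /blogtest.html

# Broad directory and wildcard rules are more likely to be second-guessed:
Disallow: /archive/
Disallow: /*?sessionid=
```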

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
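In markup, the difference between the page-level and link-level controls looks like this (hypothetical URL):

```html
<!-- Page-level: meta robots in the <head> governs the whole page -->
<meta name="robots" content="noindex, follow">

<!-- Link-level: rel="nofollow" applies to this one link only,
     withholding the editorial vouch and link equity -->
<a href="https://example.com/untrusted-page" rel="nofollow">A link I don't vouch for</a>
```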

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say “all user agents, you are not allowed to crawl blogtest.html.” Okay—that’s a good way to keep that page away from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great. The tradeoff is that the page still has to be crawled (so it costs crawl budget) for the engines to see the tag; what you conserve is the index, not the crawl.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.
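At the server level, these status codes map to simple configuration. A sketch in nginx (hypothetical paths):

```nginx
# 410: tell engines this page is gone for good
location = /discontinued-product.html {
    return 410;
}

# 301: permanent redirect; passes the old URL's equity to the new one
location = /old-page.html {
    return 301 /new-page.html;
}

# 302: temporary redirect; the original URL should stay indexed
location = /summer-sale.html {
    return 302 /current-sale.html;
}
```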

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.
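Sketching the large-quantity option: disallow the not-yet-ready folders, then open each one up and submit a sitemap as it’s finished (hypothetical folder and sitemap names):

```
User-agent: *
# Block the folders still being rewritten:
Disallow: /products-draft/

# As each folder is finished, delete its Disallow line and list the
# ready URLs in an XML sitemap:
Sitemap: https://www.example.com/sitemap-ready-products.xml
```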

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
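In the t-shirt example, each color variant page would carry a canonical tag pointing back at the default page (hypothetical URLs):

```html
<!-- In the <head> of the gray, blue, and black variant pages: -->
<link rel="canonical" href="https://www.example.com/starwarsshirt.html">
```

Because the variants must be crawlable for this tag to be seen, they stay out of robots.txt.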

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. If you disallow crawling on those pages, Google can’t see the noindex and doesn’t know it can follow the links. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior; trust that the disallow in robots.txt will prevent them from crawling. So I would say, the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say “hey, our internal search engine, that’s really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to make those into category landing pages.” Then you can use the disallow in robots.txt to prevent those.

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 3 years ago from tracking.feedpress.it

A Vision for Brand Engagement Online, or “The Goal”

Posted by EricEnge

Today’s post focuses on a vision for your online presence. This vision outlines what it takes to be the best, both from an overall reputation and visibility standpoint, as well as an SEO point of view. The reason these are tied together is simple: Your overall online reputation and visibility is a huge factor in your SEO. Period. Let’s start by talking about why.

Core ranking signals

For purposes of this post, let’s define three cornerstone ranking signals that most everyone agrees on:

Links

Links remain a huge factor in overall ranking. Both Cyrus Shepard and Marcus Tober re-confirmed this on the Periodic Table of SEO Ranking Factors session at the SMX Advanced conference in Seattle this past June.

On-page content

On-page content remains a huge factor too, but with some subtleties now thrown in. I wrote about some of this in earlier posts I did on Moz about Term Frequency and Inverse Document Frequency. Suffice it to say that on-page content is about a lot more than pure words on the page, but also includes the supporting pages that you link to.

User engagement with your site

This is not one of the traditional SEO signals from the early days of SEO, but most advanced SEO pros that I know consider it a real factor these days. One of the most popular concepts people talk about is called pogo-sticking, which is illustrated here:

You can learn more about the pogosticking concept by visiting this Whiteboard Friday video by a rookie SEO with a last name of Fishkin.

New, lesser-known signals

OK, so these are the more obvious signals, but now let’s look more broadly at the overall web ecosystem and talk about other types of ranking signals. Be warned that some of these signals may be indirect, but that just doesn’t matter. In fact, my first example below is an indirect factor which I will use to demonstrate why whether a signal is direct or indirect is not an issue at all.

Let me illustrate with an example. Say you spend $1 billion building a huge brand around a product that is massively useful to people. Included in this is a sizable $100 million campaign to support a highly popular charitable foundation, and your employees regularly donate time to help out in schools across your country. In short, the great majority of people love your brand.

Do you think this will impact the way people link to your site? Of course it does. Do you think it will impact how likely people are to be satisfied with the quality of the pages on your site? Consider this A/B test scenario of two pages from different “brands” (for the one on the left, imagine the image of Coca Cola or Pepsi Cola, whichever one you prefer):

Do you think that the huge brand will get the benefit of the doubt on their page that the no-name brand does not, even though the pages are identical? Of course they will. Now let’s look at some simpler scenarios that don’t involve a $1 billion investment.

1. Cover major options related to a product or service on “money pages”

Imagine that a user arrives on your auto parts site after searching on the phrase “oil filter” at Google or Bing. Chances are pretty good that they want an oil filter, but here are some other items they may also want:

  • A guide to picking the right filter for their car
  • Oil
  • An oil filter wrench
  • A drainage pan to drain the old oil into

This is just the basics, right? But, you would be surprised with how many sites don’t include links or information on directly related products on their money pages. Providing this type of smart site and page design can have a major impact on user engagement with the money pages of your site.

2. Include other related links on money pages

In the prior item we covered the user’s most directly related needs, but they may have secondary needs as well. Someone who is changing a car’s oil is either a mechanic or a do-it-yourself-er. What else might they need? How about other parts, such as windshield wipers or air filters?

These are other fairly easy maintenance steps for someone who is working on their car to complete. Presence of these supporting products could be one way to improve user engagement with your pages.

3. Offer industry-leading non-commercial content on-site

Publishing world-class content on your site is a great way to produce links to your site. Of course, if you do this on a blog on your site, it may not provide links directly to your money pages, but it will nonetheless lift overall site authority.

In addition, if someone has consumed one or more pieces of great content on your site, the chance of their engaging in a more positive manner with your site overall go way up. Why? Because you’ve earned their trust and admiration.

4. Be everywhere your audiences are with more high-quality, relevant, non-commercial content

Are there major media sites that cover your market space? Do they consider you to be an expert? Will they quote you in articles they write? Can you provide them with guest posts, or will they let you be a guest columnist? Will they collaborate on larger content projects with you?

All of these activities put you in front of their audiences, and if those audiences overlap with yours, this provides a great way to build your overall reputation and visibility. This content that you publish, or collaborate on, that shows up on 3rd-party sites will get you mentions and links. In addition, once again, it will provide you with a boost to your branding. People are now more likely to consume your other content more readily, including on your money pages.

5. Leverage social media

The concept here shares much in common with the prior point. Social media provides opportunities to get in front of relevant audiences. Every person that’s an avid follower of yours on a social media site is more likely to show very different behavior characteristics interacting with your site than someone that does not know you well at all.

Note that links from social media sites are nofollowed, but active social media behavior can lead to people implementing “real world” links to your site that are followed, from their blogs and media web sites.

6. Be active in the offline world as well

Think your offline activity doesn’t matter online? Think again. Relationships are still most easily built face-to-face. People you meet and spend time with can well become your most loyal fans online. This is particularly important when it comes to building relationships with influential people.

One great way to do that is to go to public events related to your industry, such as conferences. Better still, obtain speaking engagements at those conferences. This can even impact people who weren’t there to hear you speak, as they become aware that you have been asked to do that. This concept can also work for a small local business. Get out in your community and engage with people at local events.

The payoff here is similar to the payoff for other items: more engaged, highly loyal fans who engage with you across the web, sending more and more positive signals, both to other people and to search engines, that you are the real deal.

7. Provide great customer service/support

Whatever your business may be, you need to take care of your customers as best you can. No one can make everyone happy, that’s unrealistic, but striving for much better than average is a really sound idea. Having satisfied customers saying nice things about you online is a big impact item in the grand scheme of things.

8. Actively build relationships with influencers too

While this post is not about the value of influencer relationships, I include this in the list for illustration purposes, for two reasons:

  1. Some opportunities are worth extra effort. Know of someone who could have a major impact on your business? Know that they will be at a public event in the near future? Book your plane tickets and get your butt out there. No guarantee that you will get the result you are looking for, or that it will happen quickly, but your chances go WAY up if you get some face time with them.
  2. Influencers are worth special attention and focus, but your relationship-building approach to the web and SEO is not only about influencers. It’s about the entire ecosystem.

It’s an integrated ecosystem

The web provides a level of integrated, real-time connectivity of a kind that the world has never seen before. This is only going to increase. Do something bad to a customer in Hong Kong? Consumers in Boston will know within 5 minutes. That’s where it’s all headed.

Google and Bing (and any future search engine that may emerge) want to measure these types of signals because they tell them how to improve the quality of the experience on their platforms. There are many ways they can perform these measurements.

One simple concept is covered by Rand in this recent Whiteboard Friday video. The discussion is about a recent patent granted to Google that shows how the company can use search queries to detect who is an authority on a topic.

The example he provides is about people who search on “email finding tool”. If Google also finds that a number of people search on “voila norbert email tool”, Google may use that as an authority signal.

Think about that for a moment. How are you going to get people to search on your brand more while putting it together with a non-branded query like that? (OK, please leave Mechanical Turk and other services like that out of the discussion.)

Now you can start to see the bigger picture. Measurements like pogosticking and this recent search behavior related patent are just the tip of the iceberg. Undoubtedly, there are many other ways that search engines can measure what people like and engage with the most.

This is all part of SEO now: UX, product breadth, problem solving, engaging in social media, getting face to face, creating great content that you publish in front of other people’s audiences, and more.

For the small local business, you can still win at this game, as your focus just needs to be on doing it better than your competitors. The big brands will never be hyper-local like you are, so don’t think you can’t play the game, because you can.

Whoever you are, get ready, because this new integrated ecosystem is already upon us, and you need to be a part of it.


Reblogged 3 years ago from tracking.feedpress.it

How to Combat 5 of the SEO World’s Most Infuriating Problems – Whiteboard Friday

Posted by randfish

These days, most of us have learned that spammy techniques aren’t the way to go, and we have a solid sense for the things we should be doing to rank higher, and ahead of our often spammier competitors. Sometimes, maddeningly, it just doesn’t work. In today’s Whiteboard Friday, Rand talks about five things that can infuriate SEOs with the best of intentions, why those problems exist, and what we can do about them.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

What SEO problems make you angry?

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about some of the most infuriating things in the SEO world, specifically five problems that I think plague a lot of folks and some of the ways that we can combat and address those.

I’m going to start with one of the things that really infuriates a lot of newcomers to the field, especially folks who are building new and emerging sites and doing SEO on them. You have all of these best practices lists. You might look at a web developer’s cheat sheet or a guide to on-page and on-site SEO. You go, “Hey, I’m doing it. I’ve got my clean URLs, my good, unique content, my solid keyword targeting, schema markup, useful internal links, my XML sitemap, and my fast load speed. I’m mobile friendly, and I don’t have manipulative links.”

Great. “Where are my results? What benefit am I getting from doing all these things? Because I don’t see one.” Say you’ve taken a site that was not particularly SEO friendly—maybe a new site, one you just launched, or an emerging site, one that’s slowly growing but not yet a power player. You do all this right stuff, and you don’t get SEO results.

This makes a lot of people stop investing in SEO, stop believing in SEO, and stop wanting to do it. I can understand where you’re coming from. The challenge is not that you’ve done something wrong. It’s that this stuff—all of these things that you do right, especially things that you do right on your own site or from a best practices perspective—doesn’t increase rankings. It doesn’t. That’s not what it’s designed to do.

1) Following best practices often does nothing for new and emerging sites

This stuff, all of these best practices are designed to protect you from potential problems. They’re designed to make sure that your site is properly optimized so that you can perform to the highest degree that you are able. But this is not actually rank boosting stuff unfortunately. That is very frustrating for many folks. So following a best practices list, the idea is not, “Hey, I’m going to grow my rankings by doing this.”

On the flip side, many folks do these things on larger, more well-established sites, sites that have a lot of ranking signals already in place. They’re bigger brands, they have lots of links to them, and they have lots of users and usage engagement signals. You fix this stuff. You fix stuff that’s already broken, and boom, rankings pop up. Things are going well, and more of your pages are indexed. You’re getting more search traffic, and it feels great. This is a challenge, on our part, of understanding what this stuff does, not a challenge on the search engine’s part of not ranking us properly for having done all of these right things.

2) My competition seems to be ranking on the back of spammy or manipulative links

What’s going on? I thought Google had introduced all these algorithms to shut this stuff down. This seems very frustrating. How are they pulling this off? I look at their link profile, and I see a bunch of the directories, Web 2.0 sites — I love that the spam world decided that that’s Web 2.0 sites — article sites, private blog networks, and dofollow blogs.

You look at this stuff and you go, “What is this junk? It’s terrible. Why isn’t Google penalizing them for this?” The answer, the right way to think about this and to come at this is: Are these really the reason that they rank? I think we need to ask ourselves that question.

One thing that we don’t know, that we can never know, is: Have these links been disavowed by our competitor here?

I’ve got my HulksIncredibleStore.com and their evil competitor Hulk-tastrophe.com. Hulk-tastrophe has got all of these terrible links, but maybe they disavowed those links and you would have no idea. Maybe they didn’t build those links. Perhaps those links came in from some other place. They are not responsible. Google is not treating them as responsible for it. They’re not actually what’s helping them.

If they are helping, and it’s possible they are, there are still instances where we’ve seen spam propping up sites. No doubt about it.

I think the next logical question is: Are you willing to lose your site or brand? What we almost never see anymore are sites like this, ranking on the back of these tactics with generally less legitimate links, that hold those rankings for two or three or four years. You can see it for a few months, maybe even a year, but this stuff is getting hit hard and getting hit frequently. So unless you’re willing to lose your site, pursuing their links is probably not a strategy.

Then ask: what other signals, potentially links, but also non-link signals that you might not be considering, could be helping them rank? I think a lot of us in the SEO world get blinded by link signals, and we forget to look at things like: Do they have a phenomenal user experience? Are they growing their brand? Are they doing offline kinds of things that are influencing online? Are they gaining engagement from other channels that’s then influencing their SEO? Do they have things coming in that I can’t see? If you don’t ask those questions, you can’t really learn from your competitors, and you just feel the frustration.

3) I have no visibility or understanding of why my rankings go up vs down

On my HulksIncredibleStore.com, I’ve got my infinite stretch shorts, which I don’t know why he never wears — he should really buy those — my soothing herbal tea, and my anger management books. I look at my rankings and they kind of jump all over the place all the time. Actually, this is pretty normal. We’ve done some analyses here, and the average page-one search result shifts 1.5 to 2 positions daily. That’s from the MozCast dataset, if I’m recalling correctly. That means that, over the course of a week, it’s not uncommon or unnatural for you to be bouncing around four, five, or six positions up or down.
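To get an intuition for why an average daily shift of 1.5 to 2 positions adds up to a multi-position bounce over a week, here’s a tiny simulation. Everything in it is illustrative (a made-up starting rank and step sizes), not real SERP data:

```python
import random

random.seed(42)  # make the illustrative run repeatable

def simulate_week(start_rank=5.0, daily_shift=(1.5, 2.0)):
    """Random-walk a ranking for 7 days: each day the position moves
    roughly 1.5-2 spots in a random direction (hypothetical model)."""
    rank = start_rank
    path = [rank]
    for _ in range(7):
        step = random.uniform(*daily_shift) * random.choice([-1, 1])
        rank = max(1.0, rank + step)  # can't rank better than position 1
        path.append(rank)
    return path

path = simulate_week()
spread = max(path) - min(path)
print("Rank over the week:", [round(p, 1) for p in path])
print(f"Total spread: {spread:.1f} positions")
```

Run it a few times with different seeds and the weekly spread is usually a handful of positions, consistent with the bouncing described above.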

I think we should understand what can be behind these things. That’s a very simple list. You made changes, Google made changes, your competitors made changes, or searcher behavior has changed in terms of volume, in terms of what they were engaging with, what they’re clicking on, what their intent behind searches are. Maybe there was just a new movie that came out and in one of the scenes Hulk talks about soothing herbal tea. So now people are searching for very different things than they were before. They want to see the scene. They’re looking for the YouTube video clip and those kind of things. Suddenly Hulk’s soothing herbal tea is no longer directing as well to your site.

So changes like these things can happen. We can’t understand all of them. I think what’s up to us to determine is the degree of analysis and action that’s actually going to provide a return on investment. Looking at these day over day or week over week and throwing up our hands and getting frustrated probably provides very little return on investment. Looking over the long term and saying, “Hey, over the last 6 months, we can observe 26 weeks of ranking change data, and we can see that in aggregate we are now ranking higher and for more keywords than we were previously, and so we’re going to continue pursuing this strategy. This is the set of keywords that we’ve fallen most on, and here are the factors that we’ve identified that are consistent across that group.” I think looking at rankings in aggregate can give us some real positive ROI. Looking at one or two, one week or the next week probably very little ROI.
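The long-window, aggregate analysis described above can be sketched in a few lines of Python. The keywords and rank histories below are entirely hypothetical, and `aggregate_change` is just one simple way to summarize 26 weeks of data (comparing the average position in the first and last four weeks):

```python
# weekly_ranks maps keyword -> 26 weekly positions (1 = top result).
weekly_ranks = {
    "soothing herbal tea": [12, 11, 11, 10, 9, 9, 8, 8, 7, 7, 7, 6, 6,
                            6, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3],
    "anger management books": [5, 5, 6, 6, 7, 7, 8, 8, 8, 9, 9, 9, 10,
                               10, 10, 11, 11, 11, 12, 12, 12, 13, 13, 13, 14, 14],
    "infinite stretch shorts": [20, 19, 19, 18, 17, 17, 16, 15, 15, 14, 14, 13, 13,
                                12, 12, 11, 11, 10, 10, 9, 9, 8, 8, 7, 7, 7],
}

def aggregate_change(ranks):
    """Average position in the first 4 weeks minus the last 4 weeks.
    Positive = moved up (rank numbers got smaller over time)."""
    start = sum(ranks[:4]) / 4
    end = sum(ranks[-4:]) / 4
    return start - end

changes = {kw: aggregate_change(r) for kw, r in weekly_ranks.items()}

# Keywords that fell the most (largest negative change) deserve a closer look
# for factors that are consistent across that group.
fallen = sorted(changes.items(), key=lambda kv: kv[1])
for kw, delta in fallen:
    direction = "up" if delta > 0 else "down"
    print(f"{kw}: {abs(delta):.1f} positions {direction} over 26 weeks")
```

Looking at a summary like this quarter over quarter, rather than at individual daily jumps, is where the return on investment tends to be.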

4) I cannot influence or affect change in my organization because I cannot accurately quantify, predict, or control SEO

That’s true, especially with things like keyword “(not provided)” and certainly with the inaccuracy of the data provided to us through Google’s Keyword Planner inside of AdWords, for example, and the fact that no one can really control SEO, not fully anyway.

You get up in front of your team, your board, your manager, your client and you say, “Hey, if we don’t do these things, traffic will suffer,” and they go, “Well, you can’t be sure about that, and you can’t perfectly predict it. Last time you told us something, something else happened. So because the data is imperfect, we’d rather spend money on channels that we can perfectly predict, that we can very effectively quantify, and that we can very effectively control.” That is understandable. I think that businesses have a lot of risk aversion naturally, and so wanting to spend time and energy and effort in areas that you can control feels a lot safer.

Some ways to get around this are, first off, know your audience. If you know who you’re talking to in the room, you can often determine the things that will move the needle for them. For example, I find that many managers, many boards, many executives are much more influenced by competitive pressures than they are by, “We won’t do as well as we did before, or we’re losing out on this potential opportunity.” Saying that is less powerful than saying, “This competitor, who I know we care about and we track ourselves against, is capturing this traffic, and here’s how they’re doing it.”

Show multiple scenarios. Many of the SEO presentations that I see and have seen and still see from consultants and from in-house folks come with kind of a single, “Hey, here’s what we predict will happen if we do this or what we predict will happen if we don’t do this.” You’ve got to show multiple scenarios, especially when you know you have error bars because you can’t accurately quantify and predict. You need to show ranges.

So instead of this, I want to see: What happens if we do it a little bit? What happens if we really overinvest? What happens if Google makes a much bigger change on this particular factor than we expect or our competitors do a much bigger investment than we expect? How might those change the numbers?
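Those scenario ranges can be laid out in a few lines. All of the numbers below are hypothetical placeholders; the point is the structure, a low-to-high range per scenario instead of one point estimate:

```python
baseline_monthly_visits = 50_000  # hypothetical current organic traffic

# Each scenario maps to a (low, high) multiplier on baseline traffic,
# expressing the error bars the text calls for.
scenarios = {
    "do nothing": (0.85, 1.00),                # risk of slow decline
    "modest investment": (1.05, 1.20),
    "heavy overinvestment": (1.15, 1.60),
    "Google shifts against us": (0.60, 0.90),  # downside scenario
}

ranges = {
    name: (round(baseline_monthly_visits * low), round(baseline_monthly_visits * high))
    for name, (low, high) in scenarios.items()
}

for name, (lo, hi) in ranges.items():
    print(f"{name}: {lo:,} to {hi:,} visits/month")
```

Presenting all four rows side by side makes the uncertainty explicit instead of hiding it behind a single forecast number.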

Then I really do like bringing case studies, especially if you’re a consultant, but even in-house there are so many case studies in SEO on the Web today, you can almost always find someone who’s analogous or nearly analogous and show some of their data, some of the results that they’ve seen. Places like SEMrush, a tool that offers competitive intelligence around rankings, can be great for that. You can show, hey, this media site in our sector made these changes. Look at the delta of keywords they were ranking for versus ours over the next six months. Correlation is not causation, but that can be a powerful influencer showing those kinds of things.

Then last, but not least, any time you’re going to get up like this and present to a group around these topics, if you possibly can, try to talk one-on-one with the participants before the meeting actually happens. I have found it almost universally the case that if you haven’t had the discussions beforehand, asking things like, “What are your concerns? What do you think is not valid about this data? Hey, I want to run this by you and get your thoughts before we go to the meeting,” then when you get into a group setting people can gang up and pile on. One person says, “Hey, I don’t think this is right,” and everybody in the room kind of looks around and goes, “Yeah, I also don’t think that’s right.” Then it just turns into warfare and conflict that you don’t want or need. If you address those things beforehand, then you can include the data, the presentations, and the “I don’t know the answer to this and I know this is important to so-and-so” in that presentation or in that discussion. It can be hugely helpful. It’s the difference between winning and losing with that.

5) Google is biasing to big brands. It feels hopeless to compete against them

A lot of people are feeling this hopelessness, hopelessness in SEO about competing against them. I get that pain. In fact, I’ve felt that very strongly for a long time in the SEO world, and I think the trend has only increased. This comes from all sorts of stuff. Brands now have the little dropdown next to their search result listing. There are these brand and entity connections. As Google is using answers and knowledge graph more and more, it’s feeling like those entities are having a bigger influence on where things rank and where they’re visible and where they’re pulling from.

User and usage behavior signals on the rise means that big brands, who have more of those signals, tend to perform better. Brands in the knowledge graph, brands growing links without any effort, they’re just growing links because they’re brands and people point to them naturally. Well, that is all really tough and can be very frustrating.

I think you have a few choices on the table. First off, you can choose to compete with brands where they can’t or won’t. So this is areas like we’re going after these keywords that we know these big brands are not chasing. We’re going after social channels or people on social media that we know big brands aren’t. We’re going after user generated content because they have all these corporate requirements and they won’t invest in that stuff. We’re going after content that they refuse to pursue for one reason or another. That can be very effective.

Second, you’d better be building, growing, and leveraging your competitive advantage. Whenever you build an organization, you’ve got to say, “Hey, here’s who is out there. This is why we are uniquely better, or a uniquely better choice, for this set of customers than these other ones.” If you can leverage that, you can generally find opportunities to compete and even to win against big brands. But those things have to become obvious, they have to become well-known, and you need to essentially build some of your brand around those advantages, or they’re not going to help you in search. That includes media, that includes content, that includes any sort of press and PR you’re doing. That includes how you do your own messaging, all of these things.

Third, you can choose to serve a market or a customer that they don’t or won’t. That can be a powerful way to go about search, because usually search is bifurcated by customer type. There will be slightly different forms of search queries entered by different kinds of customers, and you can pursue one of those that isn’t pursued by the competition.

Last, but not least, I think for everyone in SEO we all realize we’re going to have to become brands ourselves. That means building the signals that are typically associated with brands — authority, recognition from an industry, recognition from a customer set, awareness of our brand even before a search has happened. I talked about this in a previous Whiteboard Friday, but I think because of these things, SEO is becoming a channel that you benefit from as you grow your brand rather than the channel you use to initially build your brand.

All right, everyone. Hope these have been helpful in combating some of these infuriating, frustrating problems and that we’ll see some great comments from you guys. I hope to participate in those as well, and we’ll catch you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

