Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. The three primary ones are robots.txt, meta robots, and the nofollow tag, though that last one is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks, when you say, “hey, disallow this,” and then you suddenly see those URLs popping up in results, wonder what’s going on. Look—Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake. They think, “hey, there’s a lot of links pointing to this content, there’s a lot of people visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.
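As a sketch of the specific-versus-broad distinction, a robots.txt for a hypothetical example.com might look like this (the paths are made up for illustration):

```text
# https://example.com/robots.txt
User-agent: *

# A specific URL – usually respected
Disallow: /private-page.html

# A whole directory – more likely to be second-guessed
Disallow: /big-directory/

# A wildcard pattern (a Google/Bing extension) – least reliably honored
Disallow: /*?sessionid=
```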

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.
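To be concrete, the tag sits inside the page’s `<head>`. For example, on a hypothetical page you want kept out of the index and whose links you don’t want followed:

```html
<head>
  <meta name="robots" content="noindex, nofollow">
</head>
```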

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
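On the page, that looks like an ordinary link with a rel attribute (the URL here is hypothetical):

```html
<a href="https://example.com/some-page" rel="nofollow">a link I don't editorially vouch for</a>
```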

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say “all user agents, you are not allowed to crawl blogtest.html”? Okay—that’s a good way to keep that page from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.
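Concretely, using the hypothetical blogtest.html from above, the working combination is to drop the Disallow rule and rely on the tag alone:

```html
<!-- In the <head> of blogtest.html. The page must NOT be
     disallowed in robots.txt, or the engines never see this tag. -->
<meta name="robots" content="noindex, follow">
```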

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great. The catch is that it still consumes crawl budget, because the engines have to fetch the page in order to see the tag; it conserves index space, not crawl bandwidth.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.
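As a sketch of how those status codes might be configured—assuming an Apache server with mod_alias, and hypothetical paths:

```apache
# 301: permanent redirect to the new location
Redirect 301 /old-page.html https://example.com/new-page.html

# 302: temporary redirect (the original URL should return later)
Redirect 302 /sale.html https://example.com/holiday-sale.html

# 410: tell the engines this page is gone for good
Redirect gone /retired-page.html
```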

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.
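A sketch of that robots.txt approach, assuming the unfinished pages live under a hypothetical /products/ directory:

```text
User-agent: *
Disallow: /products/

# Later, as each subfolder's content is rewritten and ready,
# narrow the rule folder by folder, e.g.:
# Disallow: /products/still-in-progress/
```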

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
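A sketch of that rel canonical, assuming the color variants live at URLs like starwarsshirt-blue.html:

```html
<!-- In the <head> of starwarsshirt-blue.html (and the gray and
     black variants), pointing back at the default product page -->
<link rel="canonical" href="https://example.com/starwarsshirt.html">
```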

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. If you disallow crawling on those pages, Google can’t see the noindex, and they don’t know that they can follow the links. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior; you should assume the disallow will keep them from crawling, which means they’ll never see your meta tag at all. So I would say the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say, “hey, our internal search engine is really for internal visitors only; it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to turn them into category landing pages,” then you can use the disallow in robots.txt to prevent those from being crawled.
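If you go the disallow route, the rule is a one-liner—assuming your internal search results live under a hypothetical /search path:

```text
User-agent: *
Disallow: /search
```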

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Should I Use Relative or Absolute URLs? – Whiteboard Friday

Posted by RuthBurrReedy

It was once commonplace for developers to code relative URLs into a site. There are a number of reasons why that might not be the best idea for SEO, and in today’s Whiteboard Friday, Ruth Burr Reedy is here to tell you all about why.


Let’s discuss some non-philosophical absolutes and relatives

Howdy, Moz fans. My name is Ruth Burr Reedy. You may recognize me from such projects as when I used to be the Head of SEO at Moz. I’m now the Senior SEO Manager at BigWing Interactive in Oklahoma City. Today we’re going to talk about relative versus absolute URLs and why they are important.

At any given time, your website can have several different configurations that might be causing duplicate content issues. You could have just a standard http://www.example.com. That’s a pretty standard format for a website.

But the main sources that we see of domain-level duplicate content are when the non-www version (example.com) does not redirect to the www version or vice-versa, and when the HTTPS versions of your URLs are not forced to resolve to the HTTP versions or, again, vice-versa. What this can mean is, if all of these scenarios are true—if http://www.example.com, http://example.com, https://www.example.com, and https://example.com all resolve without being forced to a canonical version—you can, in essence, have four versions of your website out on the Internet. This may or may not be a problem.

It’s not ideal for a couple of reasons. Number one, some people think that duplicate content is going to give you a penalty. It won’t: duplicate content is not going to get your website penalized in the same way that you might see a spammy link penalty from Penguin. There’s no actual penalty involved. You won’t be punished for having duplicate content.

The problem with duplicate content is that you’re basically relying on Google to figure out what the real version of your website is. Google is seeing the URL from all four versions of your website. They’re going to try to figure out which URL is the real URL and just rank that one. The problem with that is you’re basically leaving that decision up to Google when it’s something that you could take control of for yourself.

There are a couple of other reasons that we’ll go into a little bit later for why duplicate content can be a problem. But in short, duplicate content is no good.

However, just having these URLs not resolve to each other may or may not be a huge problem. When it really becomes a serious issue is when that problem is combined with injudicious use of relative URLs in internal links. So let’s talk a little bit about the difference between a relative URL and an absolute URL when it comes to internal linking.

With an absolute URL, you are putting the entire web address of the page that you are linking to in the link. You’re putting your full domain, everything in the link, including /page. That’s an absolute URL.

However, when coding a website, it’s a fairly common web development practice to instead code internal links with what’s called a relative URL. A relative URL is just /page. Basically what that does is it relies on your browser to understand, “Okay, this link is pointing to a page that’s on the same domain that we’re already on. I’m just going to assume that that is the case and go there.”
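Side by side, the two forms look like this (hypothetical example.com):

```html
<!-- Absolute URL: the full web address, domain and all -->
<a href="https://www.example.com/page">Our page</a>

<!-- Relative URL: just the path; the browser assumes the current domain -->
<a href="/page">Our page</a>
```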

There are a couple of really good reasons to code relative URLs

1) It is much easier and faster to code.

When you are a web developer and you’re building a site and there are thousands of pages, coding relative rather than absolute URLs is a way to be more efficient. You’ll see it happen a lot.

2) Staging environments

Another reason why you might see relative rather than absolute URLs is that some content management systems — and SharePoint is a great example of this — have a staging environment on its own domain. Instead of being example.com, it will be examplestaging.com. The entire website will basically be replicated on that staging domain. Having relative URLs means that the same website can exist on staging and on production (the live, accessible version of your website) without having to go back in and recode all of those URLs. Again, it’s more efficient for your web development team. Those are perfectly valid reasons to do these things. So don’t yell at your web dev team if they’ve coded relative URLs, because from their perspective it is a better solution.

Relative URLs will also cause your page to load slightly faster. However, in my experience, the SEO benefits of having absolute versus relative URLs in your website far outweigh the teeny-tiny bit longer that it will take the page to load. It’s very negligible. If you have a really, really long page load time, there’s going to be a whole boatload of things that you can change that will make a bigger difference than coding your URLs as relative versus absolute.

Page load time, in my opinion, not a concern here. However, it is something that your web dev team may bring up with you when you try to address with them the fact that, from an SEO perspective, coding your website with relative versus absolute URLs, especially in the nav, is not a good solution.

There are even better reasons to use absolute URLs

1) Scrapers

If you have all of your internal links as relative URLs, it would be very, very, very easy for a scraper to simply scrape your whole website and put it up on a new domain, and the whole website would just work. That sucks for you, and it’s great for that scraper. But unless you are out there doing public services for scrapers, for some reason, that’s probably not something that you want happening with your beautiful, hardworking, handcrafted website. That’s one reason. There is a scraper risk.

2) Preventing duplicate content issues

But the other reason why it’s very important to have absolute rather than relative URLs is that it really mitigates the duplicate content risk that comes up when you don’t have all of these versions of your website resolving to one version. Google could potentially enter your site on any one of these four versions. To you, they’re the same page on the same domain; to Google, they’re four different pages on four different domains.

But they could enter your site, and if all of your URLs are relative, they can then crawl and index your entire domain using whichever format they entered on. Whereas if you have absolute links coded, even if Google enters your site on the www. version, every link they then crawl points explicitly at the version you chose, so Google is not going to assume that the rest of your pages live at the www. version. That really cuts down on different versions of each page of your website. If you have relative URLs throughout, you basically have four different websites if you haven’t fixed this problem.

Again, it’s not always a huge issue. Duplicate content, it’s not ideal. However, Google has gotten pretty good at figuring out what the real version of your website is.

You do want to think about internal linking, when you’re thinking about this. If you have basically four different versions of any URL that anybody could just copy and paste when they want to link to you or when they want to share something that you’ve built, you’re diluting your internal links by four, which is not great. You basically would have to build four times as many links in order to get the same authority. So that’s one reason.

3) Crawl Budget

The other reason why it’s pretty important not to do this is crawl budget.

When we talk about crawl budget, basically what that is, is every time Google crawls your website, there is a finite depth to which they will go—a finite number of URLs that they will crawl—and then they decide, “Okay, I’m done.” That’s based on a few different things. Your site authority is one of them. Your actual PageRank, not toolbar PageRank, but how good Google actually thinks your website is, is a big part of that. But also how complex your site is, how often it’s updated, and things like that are going to contribute to how often and how deep Google is going to crawl your site.

It’s important to remember when we think about crawl budget that, for Google, crawling costs actual dollars. One of Google’s biggest expenditures as a company is the money and the bandwidth it takes to crawl and index the Web. All of that energy that goes into crawling and indexing the Web lives on servers. That bandwidth comes from servers, and that means that using bandwidth costs Google real dollars.

So Google is incentivized to crawl as efficiently as possible, because when they crawl inefficiently, it costs them money. If your site is not efficient to crawl, Google is going to save itself some money by crawling it less frequently and crawling fewer pages per crawl. That can mean that if you have a site that’s updated frequently, your site may not be updating in the index as frequently as you’re updating it. It may also mean that Google, while it’s crawling and indexing, may be crawling and indexing a version of your website that isn’t the version you really want it to crawl and index.

So having four different versions of your website, all of which are completely crawlable to the last page, because you’ve got relative URLs and you haven’t fixed this duplicate content problem, means that Google has to spend four times as much money in order to really crawl and understand your website. Over time they’re going to do that less and less frequently, especially if you don’t have a really high authority website. If you’re a small website, if you’re just starting out, if you’ve only got a medium number of inbound links, over time you’re going to see your crawl rate and frequency impacted, and that’s bad. We don’t want that. We want Google to come back all the time, see all our pages. They’re beautiful. Put them up in the index. Rank them well. That’s what we want. So that’s what we should do.

There are a couple of ways to fix your relative versus absolute URL problem

1) Fix what is happening on the server side of your website

You have to make sure that you are forcing all of these different versions of your domain to resolve to one version of your domain. For me, I’m pretty agnostic as to which version you pick. You should probably already have a pretty good idea of which version of your website is the real version, whether that’s www, non-www, HTTPS, or HTTP. From my view, what’s most important is that all four of these versions resolve to one version.

From an SEO standpoint, there is evidence to suggest, and Google has certainly said, that HTTPS is a little bit better than HTTP. From a URL length perspective, I like to leave the www. out because it doesn’t really do anything; it just makes your URLs four characters longer. If you don’t know which one to pick, I would pick HTTPS with no W’s. But whichever one you pick, what’s really most important is that all of them resolve to one version. You can do that on the server side, and that’s usually pretty easy for your dev team to fix once you tell them it needs to happen.
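A sketch of that server-side fix—assuming an Apache server with mod_rewrite, and forcing everything to the HTTPS, non-www version of a hypothetical example.com in a single 301:

```apache
RewriteEngine On
# If the request came over HTTP, or the host starts with www.,
# 301-redirect to the canonical https://example.com version
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^ https://example.com%{REQUEST_URI} [L,R=301]
```

Other servers (nginx, IIS) have their own equivalents; the point is that one rule set catches all four variants.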

2) Fix your internal links

Great. So you fixed it on your server side. Now you need to fix your internal links, and you need to recode them for being relative to being absolute. This is something that your dev team is not going to want to do because it is time consuming and, from a web dev perspective, not that important. However, you should use resources like this Whiteboard Friday to explain to them, from an SEO perspective, both from the scraper risk and from a duplicate content standpoint, having those absolute URLs is a high priority and something that should get done.

You’ll need to fix those, especially in your navigational elements. But once you’ve got your nav fixed, also pull out your database or run a Screaming Frog crawl or however you want to discover internal links that aren’t part of your nav, and make sure you’re updating those to be absolute as well.

Then you’ll do some education with everybody who touches your website saying, “Hey, when you link internally, make sure you’re using the absolute URL and make sure it’s in our preferred format,” because that’s really going to give you the most bang for your buck per internal link. So do some education. Fix your internal links.

Sometimes your dev team is going to say, “No, we can’t do that. We’re not going to recode the whole nav. It’s not a good use of our time,” and sometimes they are right. The dev team has more important things to do. That’s okay.

3) Canonicalize it!

If you can’t get your internal links fixed, or if they’re not going to get fixed anytime in the near future, a stopgap or Band-Aid that you can put on this problem is to canonicalize all of your pages. As you’re changing your server to force all of these different versions of your domain to resolve to one, you should at the same time be implementing the canonical tag on all of the pages of your website so they self-canonicalize. On every page, you have a canonical tag saying, “This page right here that they were already on is the canonical version of this page.” Or if there’s another page that’s the canonical version, then obviously you point to that instead.
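A sketch of a self-referencing canonical, for a hypothetical page at https://example.com/page:

```html
<!-- In the <head> of https://example.com/page, pointing at itself -->
<link rel="canonical" href="https://example.com/page">
```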

But having each page self-canonicalize will mitigate both the risk of duplicate content internally and some of the risk posed by scrapers, because when they scrape your website and slap it up somewhere else, those canonical tags will often stay in place, and that lets Google know this is not the real version of the website.

In conclusion, relative links, not as good. Absolute links, those are the way to go. Make sure that you’re fixing these very common domain level duplicate content problems. If your dev team tries to tell you that they don’t want to do this, just tell them I sent you. Thanks guys.

Video transcription by Speechpad.com



Has Google Gone Too Far with the Bias Toward Its Own Content?

Posted by ajfried

Since the beginning of SEO time, practitioners have been trying to crack the Google algorithm. Every once in a while, the industry gets a glimpse into how the search giant works and we have opportunity to deconstruct it. We don’t get many of these opportunities, but when we do—assuming we spot them in time—we try to take advantage of them so we can “fix the Internet.”

On Feb. 16, 2015, news started to circulate that NBC would start removing images and references of Brian Williams from its website.

This was it!

A golden opportunity.

This was our chance to learn more about the Knowledge Graph.

Expectation vs. reality

Often it’s difficult to predict what Google is truly going to do. We expect something to happen, but in reality it’s nothing like we imagined.

Expectation

What we expected to see was that Google would change the source of the image. Typically, if you hover over the image in the Knowledge Graph, it reveals the location of the image.

[Image: Keanu-Reeves-Image-Location.gif]

This would mean that if the image disappeared from its original source, then the image displayed in the Knowledge Graph would likely change or even disappear entirely.

Reality (February 2015)

The only problem was, there was no official source (this changed, as you will soon see) and identifying where the image was coming from proved extremely challenging. In fact, when you clicked on the image, it took you to an image search result that didn’t even include the image.

Could it be? Had Google started its own database of owned or licensed images and was giving it priority over any other sources?

In order to find the source, we tried taking the image from the Knowledge Graph and “search by image” in images.google.com to find others like it. For the NBC Nightly News image, Google failed to even locate a match to the image it was actually using anywhere on the Internet. For other television programs, it was successful. Here is an example of what happened for Morning Joe:

[Image: Morning_Joe_image_search.png]

So we found the potential source. In fact, we found three potential sources. Seemed kind of strange, but this seemed to be the discovery we were looking for.

This looks like Google is using someone else’s content and not referencing it. These images have a source, but Google is choosing not to show it.

Then Google pulled the ol’ switcheroo.

New reality (March 2015)

Now things changed and Google decided to put a source to their images. Unfortunately, I mistakenly assumed that hovering over an image showed the same thing as the file path at the bottom, but I was wrong. The URL you see when you hover over an image in the Knowledge Graph is actually nothing more than the title. The source is different.

[Image: Morning_Joe_Source.png]

Luckily, I still had two screenshots I took when I first saw this saved on my desktop. Success. One screen capture was from NBC Nightly News, and the other from the news show Morning Joe (see above) showing that the source was changed.

[Image: NBC-nightly-news-crop.png]

(NBC Nightly News screenshot.)

The source is a Google-owned property: gstatic.com. You can clearly see the difference in the source change. What started as a hypothesis is now a fact: Google is certainly creating a database of images.

If this is the direction Google is moving, then it is creating all kinds of potential risks for brands and individuals. The implications are a loss of control for any brand that is looking to optimize its Knowledge Graph results. As well, it seems this poses a conflict of interest to Google, whose mission is to organize the world’s information, not license and prioritize it.

How do we think Google is supposed to work?

Google is an information-retrieval system tasked with sourcing information from across the web and supplying the most relevant results to users’ searches. In recent months, the search giant has taken a more direct approach by answering questions and assumed questions in the Answer Box, some of which come from un-credited sources. Google has clearly demonstrated that it is building a knowledge base of facts that it uses as the basis for its Answer Boxes. When it sources information from that knowledge base, it doesn’t necessarily reference or credit any source.

However, I would argue there is a difference between an un-credited Answer Box and an un-credited image. An un-credited Answer Box provides a fact that is indisputable, part of the public domain, unlikely to change (e.g., what year was Abraham Lincoln shot? How long is the George Washington Bridge?) Answer Boxes that offer more than just a basic fact (or an opinion, instructions, etc.) always credit their sources.

There are four possibilities when it comes to Google referencing content:

  • Option 1: It credits the content because someone else owns the rights to it
  • Option 2: It doesn’t credit the content because it’s part of the public domain, as seen in some Answer Box results
  • Option 3: It doesn’t reference it because it owns or has licensed the content. If you search for “Chicken Pox” or other diseases, Google appears to be using images from licensed medical illustrators. The same goes for song lyrics, which Eric Enge discusses here: Google providing credit for content. This adds to the speculation that Google is giving preference to its own content by displaying it over everything else.
  • Option 4: It doesn’t credit the content, but neither does it necessarily own the rights to the content. This is a very gray area, and is where Google seemed to be back in February. If this were the case, it would imply that Google is “stealing” content—which I find hard to believe, but felt was necessary to include in this post for the sake of completeness.

Is this an isolated incident?

At Five Blocks, whenever we see these anomalies in search results, we try to compare the term in question against others like it. This is a categorization concept we use to bucket individuals or companies into similar groups. When we do this, we uncover some incredible trends that help us determine what a search result “should” look like for a given group. For example, when looking at searches for a group of people or companies in an industry, this grouping gives us a sense of how much social media presence the group has on average or how much media coverage it typically gets.

Upon further investigation of terms similar to NBC Nightly News (other news shows), we noticed the un-credited image scenario appeared to be a trend in February, but now all of the images are being hosted on gstatic.com. When we broadened the categories further to TV shows and movies, the trend persisted. Rather than showing an image from the actual source in the Knowledge Graph, Google tends to show an image served from, and referencing, its own database of stored images.

And just to ensure this wasn’t a case of tunnel vision, we researched other categories, including sports teams, actors and video games, in addition to spot-checking other genres.

Unlike terms for specific TV shows and movies, terms in each of these other groups all link to the actual source in the Knowledge Graph.

Immediate implications

It’s easy to ignore this and say “Well, it’s Google. They are always doing something.” However, there are some serious implications to these actions:

  1. The TV shows/movies aren’t receiving their due credit because, from within the Knowledge Graph, there is no actual reference to the show’s official site
  2. The more Google moves toward licensing and then retrieving their own information, the more biased they become, preferring their own content over the equivalent—or possibly even superior—content from another source
  3. It feels wrong and misleading to get a Google Image Search result rather than an actual site because:
    • The search doesn’t include the original image
    • Considering how mediocre Image Search results normally are, it makes for a poor experience
  4. If Google is moving toward licensing as much content as possible, then it could make the Knowledge Graph infinitely more complicated when there is a “mistake” or something unflattering. How could one go about changing what Google shows about them?

Google is objectively becoming subjective

It is clear that Google is attempting to create databases of information, including lyrics stored in Google Play, photos, and, previously, facts in Freebase (which is now Wikidata and not owned by Google).

I am not normally one to point my finger and accuse Google of wrongdoing. But this really strikes me as an odd move, one bordering on a clear bias to direct users to stay within the search engine. The fact is, we trust Google with a heck of a lot of information with our searches. In return, I believe we should expect Google to return an array of relevant information for searchers to decide what they like best. The example cited above seems harmless, but what about determining which is the right religion? Or even who the prettiest girl in the world is?

Religion-and-beauty-queries.png

Questions such as these, which Google is returning credited answers for, could return results that are perceived as facts.

Should we next expect Google to decide who is objectively the best service provider (e.g., pizza chain, painter, or accountant), then feature them in an un-credited answer box? Given the direction Google is moving right now, it feels like we should be calling its objectivity into question.

But that’s only my (subjective) opinion.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz’s 2014 Annual Report

Posted by SarahBird

Moz has a tradition of sharing its financials (check out 2012 and 2013 for funzies). It’s an important part of TAGFEE.

Why do we do it? Moz gets its strength from the community of marketers and entrepreneurs that support it. We celebrated 10 years of our community last October. In some ways, the purpose of this report is to give you an inside look into our company. It’s one of many lenses that tell the story of Moz.

Yep. I know. It’s April. I’m not proud. Better late than never, right?

I had a very long and extensive version of this post planned, something closer to last year’s extravaganza. I finally had to admit to myself that I was letting the perfect become the enemy of the good (or at least the done). There was no way I could capture an entire year’s worth of ups and downs—along with supporting data—in a single blog post.

Without further ado, here’s the meat-and-potatoes 2014 Year In Review (and here’s an infographic with more statistics for your viewing pleasure!):

Moz ended 2014 with $31.3 million in revenue. About $30 million was recurring revenue (mostly from subscriptions to Moz Pro and the API).

Here’s a breakdown of all our major revenue sources:

Compared to previous years, 2014 was a much slower growth year. We knew very early that it was going to be a tough year because we started Q1 with negative growth. We worked very hard and successfully shifted the momentum back to increasingly positive quarterly growth rates. I’m proud of what we’ve accomplished so far. We still have a long way to go to meet our potential, but we’re on the path.

In subscription businesses, if you start the year with negative or even slow growth, it is very hard to have meaningful annual growth. All things being equal, you’re better off having a bad quarter in Q4 than Q1. If you get a new customer in Q1, you usually earn revenue from that customer all year. If you get a new customer in Q4, it will barely make a dent in that year, although it should set you up nicely for the following year.

We exited 2014 on a good flight path, which bodes well for 2015. We slammed right into some nasty billing system challenges in Q1 2015, but still managed to grow revenue 6.5%. Mad props to the team for shifting momentum last year and for digging into the billing system challenges we’re experiencing now.

We were very successful in becoming more efficient and managing costs in 2014. Our Cost of Revenue (COR), the cost of producing what we sell, fell by 30% to $8.2 million. These savings drove our gross profit margin up from 63% in 2013 to 74%.

Our operating profit increased by 30%. Here’s a breakdown of our major expenses (both operating expenses and COR):

Total operating expenses (which don’t include COR) clocked in at about $29.9 million this year.

The efficiency gains positively impacted EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization) by pushing it up 50% year over year. In 2013, EBITDA was -$4.5 million. We improved it to -$2.1 million in 2014. We’re a VC-backed startup, so this was a planned loss.

One of the most dramatic indicators of our improved efficiency in 2014 is the substantial decline in our consumption of cash.

In 2014, we spent $1.5 million in cash. This was a planned burn, and is actually very impressive for a startup. In fact, we are intentionally increasing our burn, so we don’t expect EBITDA and cash burn to look as good in 2015! Hopefully, though, you will see that revenue growth rate increase.

Let’s check in on some other Moz KPIs:

At the end of 2014, we reported a little over 27,000 Pro users. When billing system issues hit in Q1 2015, we discovered some weird under- and over-reporting, so the number of subscribers was adjusted down by roughly 450 after we scrubbed a bunch of inactive accounts out of the database. We expect accounts to stabilize and be more reliable now that we’ve fixed those issues.

We launched Moz Local about a year ago. I’m amazed and thrilled that we were able to end the year managing 27,000 locations for a range of customers. We just recently took our baby steps into the UK, and we’ve got a bunch of great additional features planned. What an incredible launch year!

We published over 300 posts combined on the Moz Blog and YouMoz. Nearly 20,000 people left comments. Well done, team!

Our content and social efforts are paying off with a 26% year-over-year increase in organic search traffic.

We continue to see good growth across many of our off-site communities, too:

The team grew to 149 people last year. We’re at ~37% women, which is nowhere near where I want it to be. We have a long way to go before the team reflects the diversity of the communities around us.

Our paid, paid vacation perk is very popular with Mozzers, and why wouldn’t it be? Everyone gets $3,000/year to use toward their vacations. In 2014, we spent over $420,000 to help our Mozzers take a break and reconnect with what matters most.

PPV.png

Also, we’re hiring! You’ll have my undying gratitude if you send me your best software engineers. Help us, help you. 😉

Last, but certainly not least, Mozzers continue to be generous (the ‘G’ in TAGFEE) and donate to the charities of their choice. In 2014, Mozzers donated $48k, and Moz added another $72k to increase the impact of their gifts. Combining those two figures, we donated $120k to causes our team members are passionate about. That’s an average of $805 per employee!

Mozzers are optimists with initiative. I think that’s why they are so generous with their time and money to folks in need. They believe the world can be a better place if we act to change it.

That’s a wrap on 2014! A year with many ups and downs. Fortunately, Mozzers don’t quit when things get hard. They embrace TAGFEE and lean into the challenge.

Revenue is growing again. We’re still operating very efficiently, and TAGFEE is strong. We’re heads-down executing on some big projects that customers have been clamoring for. Thank you for sticking with us, and for inspiring us to make marketing better every day.



​Inbound Lead Generation: eCommerce Marketing’s Missing Link

Posted by Everett

If eCommerce businesses hope to remain competitive with Amazon, eBay, big box brands, and other online retail juggernauts, they’ll need to learn how to conduct content marketing, lead generation, and contact nurturing as part of a comprehensive inbound marketing strategy.

First, I will discuss some of the ways most online retailers are approaching email from the bottom of the funnel upward, and why this needs to be turned around. Then we can explore how to go about doing this within the framework of “inbound marketing” for eCommerce businesses. Lastly, I’ll discuss popular marketing automation and email marketing solutions in the context of inbound marketing for eCommerce.

Key differences between eCommerce and lead generation approaches to email

Different list growth strategies

Email acquisition sources differ greatly between lead gen. sites and online stores. The biggest driver of email acquisition for most eCommerce businesses is their shoppers, especially when the business doesn’t collect an email address for its contact database until the shopper provides it during the check-out process—often not until the very end.

With most B2B/B2C lead gen. websites, the entire purpose of every landing page is to get visitors to submit a contact form or pick up the phone. Often, the price tag for their products or services is much higher than those of an eCommerce site or involves recurring payments. In other words, what they’re selling is more difficult to sell. People take longer to make those purchasing decisions. For this reason, leads—in the form of contact names and email addresses—are typically acquired and nurtured without having first become a customer.

Contacts vs. leads

Whether it is a B2B or B2C website, lead gen. contacts (called leads) are thought of as potential customers (clients, subscribers, patients) who need to be nurtured to the point of becoming “sales qualified,” meaning they’ll eventually get a sales call or email that attempts to convert them into a customer.

On the other hand, eCommerce contacts are often thought of primarily as existing customers to whom the marketing team can blast coupons and other offers by email.

Retail sites typically don’t capture leads at the top or middle of the funnel. Only once a shopper has checked out do they get added to the list. Historically, the buying cycle has been short enough that eCommerce sites could move many first-time visitors directly to customers in a single visit.
But this has changed.

Unless your brand is very strong—possibly a luxury brand or one with an offline retail presence—it is probably getting more difficult (i.e. expensive) to acquire new customers. At the same time, attrition rates are rising. Conversion optimization helps by converting more bottom of the funnel visitors. SEO helps drive more traffic into the site, but mostly for middle-of-funnel (category page) and bottom-of-funnel (product page) visitors who may not also be price/feature comparison shopping, or are unable to convert right away because of device or time limitations.

Even savvy retailers publishing content for shoppers higher up in the funnel, such as buyer guides and reviews, aren’t getting an email address and are missing a lot of opportunities because of it.

attract-convert-grow-funnel-inflow-2.jpg

Here’s a thought. If your eCommerce site has a 10 percent conversion rate, you’re doing pretty well by most standards. But what happened to the other 90 percent of those visitors? Will you have the opportunity to connect with them again? Even if you bump that up a few percentage points with retargeting, a lot of potential revenue has seeped out of your funnel without a trace.
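To put numbers on that leak, here’s a back-of-the-envelope sketch in Python. All of the figures (traffic, conversion rate, retargeting recovery) are hypothetical, chosen only to illustrate the point:

```python
# Hypothetical funnel numbers to illustrate the leak described above.
monthly_visitors = 100_000
conversion_rate = 0.10          # a strong 10% conversion rate
retargeting_recovery = 0.03     # an optimistic share of the rest won back

converted = monthly_visitors * conversion_rate
recovered = (monthly_visitors - converted) * retargeting_recovery
lost = monthly_visitors - converted - recovered

print(f"Converted: {converted:,.0f}")                 # 10,000
print(f"Recovered via retargeting: {recovered:,.0f}")  # 2,700
print(f"Gone without a trace: {lost:,.0f}")            # 87,300
```

Even under generous assumptions, nearly nine out of ten visitors leave with no way for you to reach them again—which is exactly the gap an email-capture strategy is meant to close.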

I don’t mean to bash the eCommerce marketing community with generalizations. Most lead gen. sites aren’t doing anything spectacular either, and a lot of opportunity is missed all around.

There are many eCommerce brands doing great things marketing-wise. I’m a big fan of Crutchfield for their educational resources targeting early-funnel traffic, and of Neman Tools, Saddleback Leather, and Feltraiger for the stories they tell. Amazon is hard to beat when it comes to scalability, product suggestions, and user-generated reviews.

Sadly, most eCommerce sites (including many of the major household brands) still approach marketing in this way…

The ol’ bait n’ switch: promising value and delivering spam

Established eCommerce brands have gigantic mailing lists (compared with lead gen. counterparts), to whom they typically send out at least one email each week with “offers” like free shipping, $ off, buy-one-get-one, or % off their next purchase. The lists are minimally segmented, if at all. For example, there might be lists for repeat customers, best customers, unresponsive contacts, recent purchasers, shoppers with abandoned carts, purchases by category, etc.

The missing points of segmentation include which campaign resulted in the initial contact (sometimes referred to as a cohort) and—most importantly—the persona and buying cycle stage that best applies to each contact.

Online retailers often send frequent “blasts” to their entire list or to a few of the large segments mentioned above. Lack of segmentation means contacts aren’t receiving emails based on their interests, problems, or buying cycle stage, but instead, are receiving what they perceive as “generic” emails.

The result of these missing segments and the lack of overarching strategy looks something like this:

My, What a Big LIST You Have!

iStock_000017047747Medium.jpg

TIME reported in 2012 on stats from Responsys that the average online retailer sent out between five and six emails the week after Thanksgiving. Around the same time, the Wall Street Journal reported that the top 100 online retailers sent an average of 177 emails apiece to each of their contacts in 2011. Averaged out, that’s somewhere between three and four emails each week that the contact is receiving from these retailers.

The better to SPAM you with!

iStock_000016088853Medium.jpg

A 2014 whitepaper from SimpleRelevance titled Email Fail: An In-Depth Evaluation of Top 20 Internet Retailers’ Email Personalization Capabilities (PDF) found that, while 70 percent of marketing executives believed personalization was of “utmost importance” to their business…

“Only 17 percent of marketing leaders are going beyond basic transactional data to deliver personalized messages to consumers.”

Speaking of email overload, the same report found that some major online retailers sent ten or more emails per week!

simplerelevance-email-report-frequency.png

The result?

All too often, the eCommerce business will carry around big, dead lists of contacts who don’t even bother reading their emails anymore. They end up scrambling toward other channels to “drive more demand,” but because the real problems were never addressed, this ends up increasing new customer acquisition costs.

The cycle looks something like this:

  1. Spend a fortune driving in unqualified traffic from top-of-the-funnel channels
  2. Ignore the majority of those visitors who aren’t ready to purchase
  3. Capture email addresses only for the few visitors who made a purchase
  4. Spam the hell out of those people until they unsubscribe
  5. Spend a bunch more money trying to fill the top of the funnel with even more traffic

It’s like trying to fill your funnel with a bucket full of holes, some of them patched with band-aids.

The real problems

  1. Lack of a cohesive strategy across marketing channels
  2. Lack of a cohesive content strategy throughout all stages of the buying cycle
  3. Lack of persona, buying cycle stage, and cohort-based list segmentation to nurture contacts
  4. Lack of tracking across customer touchpoints and devices
  5. Lack of gated content that provides enough value to early-funnel visitors to get them to provide their email address

So, what’s the answer?

Inbound marketing allows online retailers to stop competing with Amazon and other “price-focused” competitors with leaky funnels, and to instead focus on:

  1. Persona-based content marketing campaigns designed to acquire email addresses from high-quality leads (potential customers) by offering them the right content for each stage in their buyer’s journey
  2. A robust marketing automation system that makes true personalization scalable
  3. Automated contact nurturing emails triggered by certain events, such as viewing specific content, abandoning their shopping cart, adding items to their wish list or performing micro-conversions like downloading a look book
  4. Intelligent SMM campaigns that match visitors and customers with social accounts by email addresses, interests and demographics—as well as social monitoring
  5. Hyper-segmented email contact lists to support the marketing automation described above, as well as to provide highly-customized email and shopping experiences
  6. Cross-channel, closed loop reporting to provide a complete “omnichannel” view of online marketing efforts and how they assist offline conversions, if applicable
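As a rough illustration of the hyper-segmentation described in point 5, here is a toy sketch. The `Contact` fields and the segment-naming scheme are invented for the example and don’t correspond to any particular platform’s schema:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Contact:
    email: str
    persona: str   # audience persona from your research (hypothetical labels)
    stage: str     # buying cycle stage: "awareness" | "consideration" | "purchase"
    cohort: str    # the campaign that first captured the address

def segment_key(c: Contact) -> str:
    # One list per (persona, stage, cohort) combination lets every
    # automated email match where the contact actually is.
    return f"{c.persona}/{c.stage}/{c.cohort}"

contacts = [
    Contact("a@example.com", "trail-hiker", "awareness", "at-guide-ebook"),
    Contact("b@example.com", "trail-hiker", "purchase", "at-guide-ebook"),
]

lists = defaultdict(list)
for c in contacts:
    lists[segment_key(c)].append(c.email)

print(dict(lists))
```

The point isn’t the code—any marketing automation platform does this for you—but the data model: every contact carries persona, stage, and cohort, so no email ever has to be “generic.”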

Each of these areas will be covered in more detail below. First, let’s take a quick step back and define what it is we’re talking about here.

Inbound marketing: a primer

A lot of people think “inbound marketing” is just a way some SEO agencies are re-cloaking themselves to avoid negative associations with search engine optimization. Others think it’s synonymous with “internet marketing.” I think it goes more like this:

Inbound marketing is to Internet marketing as SEO is to inbound marketing: One piece of a larger whole.

There are many ways to define inbound marketing. A cursory review of definitions from several trusted sources reveals some fundamental similarities:

Rand Fishkin

randfishkin.jpeg

“Inbound Marketing is the practice of earning traffic and attention for your business on the web rather than buying it or interrupting people to get it. Inbound channels include organic search, social media, community-building content, opt-in email, word of mouth, and many others. Inbound marketing is particularly powerful because it appeals to what people are looking for and what they want, rather than trying to get between them and what they’re trying to do with advertising. Inbound’s also powerful due to the flywheel-effect it creates. The more you invest in Inbound and the more success you have, the less effort required to earn additional benefit.”


Mike King

mikeking.jpeg

“Inbound Marketing is a collection of marketing activities that leverage remarkable content to penetrate earned media channels such as Organic Search, Social Media, Email, News and the Blogosphere with the goal of engaging prospects when they are specifically interested in what the brand has to offer.”

This quote is from 2012, and is still just as accurate today. It’s from an Inbound.org comment thread where you can also see many other takes on it from the likes of Ian Lurie, Jonathon Colman, and Larry Kim.


Inflow

inflow-logo.jpeg

“Inbound Marketing is a multi-channel, buyer-centric approach to online marketing that involves attracting, engaging, nurturing and converting potential customers from wherever they are in the buying cycle.”

From Inflow’s Inbound Services page.


Wikipedia

wikipedia.jpeg

“Inbound marketing refers to marketing activities that bring visitors in, rather than marketers having to go out to get prospects’ attention. Inbound marketing earns the attention of customers, makes the company easy to be found, and draws customers to the website by producing interesting content.”

From Inbound Marketing – Wikipedia.


Larry Kim

Larry-Kim.jpeg

“Inbound marketing refers to marketing activities that bring leads and customers in when they’re ready, rather than you having to go out and wave your arms to try to get people’s attention.”

Via Marketing Land in 2013. You can also read more of Larry Kim’s interpretation, along with many others, on Inbound.org.


Hubspot

“Instead of the old outbound marketing methods of buying ads, buying email lists, and praying for leads, inbound marketing focuses on creating quality content that pulls people toward your company and product, where they naturally want to be.”

Via HubSpot, a marketing automation platform for inbound marketing.

When everyone has their own definition of something, it helps to think about what they have in common, as opposed to how they differ. In the case of inbound, this includes concepts such as:

  • Pull (inbound) vs. push (interruption) marketing
  • “Earning” media coverage, search engine rankings, visitors and customers with outstanding content
  • Marketing across channels
  • Meeting potential customers where they are in their buyer’s journey

Running your first eCommerce inbound marketing campaign

Audience personas—priority no. 1

The magic happens when retailers begin to hyper-segment their list based on buyer personas and other relevant information (i.e. what they’ve downloaded, what they’ve purchased, if they abandoned their cart…). This all starts with audience research to develop personas. If you need more information on persona development, try these resources:

Once personas are developed, retailers should choose one on which to focus. A complete campaign strategy should be developed around this persona, with the aim of providing the “right value” to them at the “right time” in their buyer’s journey.

Ready to get started?

We’ve developed a quick-start guide in the form of a checklist for eCommerce marketers who want to get started with inbound marketing, which you can access below.

inbound ecommerce checklist

Hands-on experience running one campaign will teach you more about inbound marketing than a dozen articles. My advice: Just do one. You will make mistakes. Learn from them and get better each time.

Example inbound marketing campaign

Below is an example of how a hypothetical inbound marketing campaign might play out, assuming you have completed all of the steps in the checklist above. Imagine you handle marketing for an online retailer of high-end sporting goods.

AT Hiker Tommy campaign: From awareness to purchase

When segmenting visitors and customers for a “high-end sporting goods / camping retailer” based on the East Coast, you identified a segment of “Trail Hikers.” These are people with disposable income who care about high-quality gear, and will pay top dollar if they know it is tested and reliable. The top trail on their list of destinations is the Appalachian Trail (AT).

Top of the Funnel: SEO & Strategic Content Marketing

at-tommy.jpg

Tommy’s first action is to do “top of the funnel” research from search engines (one reason why SEO is still so important to a complete inbound marketing strategy).

A search for “Hiking the Appalachian Trail” turns up your article titled “What NOT to Pack When Hiking the Appalachian Trail,” which lists common items that are bulky/heavy, and highlights slimmer, lighter alternatives from your online catalog.

It also highlights the difference between cheap gear and the kind that won’t let you down on your 2,181-mile journey through the wilderness of Appalachia, something you learned was important to Tommy when developing his persona. This allows you to get the company’s value proposition of “tested, high-end, quality gear only” in front of readers very early in their buyer’s journey—important if you want to differentiate your site from all of the retailers racing Amazon to the bottom of their profit margins.

So far you have yet to make “contact” with AT Hiker Tommy. The key to “acquiring” a contact before the potential customer is ready to make a purchase is to provide something of value to that specific type of person (i.e. their persona) at that specific point in time (i.e. their buying cycle stage).

In this case, we need to provide value to AT Hiker Tommy while he is getting started on his research about hiking the Appalachian Trail. He has an idea of what gear not to bring, as well as some lighter, higher-end options sold on your site. At this point, however, he is not ready to buy anything without researching the trail more. This is where retailers lose most of their potential customers. But not you. Not this time…

Middle of the funnel: Content offers, personalization, social & email nurturing

at-hiker-ebook.png

On the “What NOT to Pack When Hiking the Appalachian Trail” article (and probably several others), you have placed a call-to-action (CTA) in the form of a button that offers something like:

Download our Free 122-page Guide to Hiking the Appalachian Trail

This takes Tommy to a landing page showcasing some of the quotes from the book, and highlighting things like:

“We interviewed over 50 ‘thru-hikers’ who completed the AT and have curated and organized the best first-hand tips, along with our own significant research to develop a free eBook that should answer most of your questions about the trail.”

By entering their email address, potential customers agree to let you send them the free downloadable PDF guide to hiking the AT, along with other relevant information about hiking.

An automated email is sent with a link to the downloadable PDF guide, and several other useful content links, such as “The AT Hiker’s Guide to Gear for the Appalachian Trail”—content designed to move Tommy further toward the purchase of hiking gear.

If Tommy still has not made a purchase within the next two weeks, another automated email is sent asking for feedback about the PDF guide (providing the link again), and to again provide the link to the “AT Hiker’s Guide to Gear…” along with a compelling offer just for him, perhaps “Get 20% off your first hiking gear purchase, and a free wall map of the AT!”

Having Tommy’s email address also allows you to hyper-target him on social channels, while also leveraging his initial visit to initiate retargeting efforts.
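The two-week follow-up described above boils down to a simple rule that a marketing automation platform evaluates on a schedule. A minimal sketch—the field names and offer text are hypothetical, not any vendor’s API:

```python
from datetime import datetime, timedelta

def next_email(contact):
    """Toy drip logic: decide whether a contact is due a follow-up."""
    if contact["purchased"]:
        return None  # the nurture sequence ends at first purchase
    since_download = datetime.now() - contact["guide_downloaded_at"]
    if not contact["followup_sent"] and since_download > timedelta(weeks=2):
        return "feedback + gear guide + 20% first-purchase offer"
    return None  # still inside the two-week window, or already followed up

tommy = {
    "purchased": False,
    "followup_sent": False,
    "guide_downloaded_at": datetime.now() - timedelta(weeks=3),
}
print(next_email(tommy))  # "feedback + gear guide + 20% first-purchase offer"
```

In a real platform these rules are configured in a workflow builder rather than code, but the trigger logic—event, elapsed time, and prior state—is the same.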

Bottom of the funnel: Email nurturing & strategic, segmented offers

Eventually Tommy makes a purchase, and he may or may not receive further emails related to this campaign, such as post-purchase emails for reviews, up-sells and cross-sells.

Upon checkout, Tommy checked the box to opt in to weekly promotional emails. He is now on multiple lists. Your marketing automation system will automatically update Tommy’s status from “Contact” (or lead) to “Customer,” and potentially remove or deactivate him in the marketing automation database. This is accomplished either through built-in integration features or with the help of integration tools like Zapier and IFTTT.

You have now nurtured Tommy from his initial research on Google all the way to his first purchase without ever having sent a spammy newsletter email full of irrelevant coupons and other offers. However, now that he is a loyal customer, Tommy finds value in these bottom-of-funnel email offers.

And this is just the start

Every inbound marketing campaign will have its own mix of appropriate channels. This post has focused mostly on email because acquiring the initial permission to contact the person is what fuels most of the other features offered by marketing automation systems, including:

  • Personalization of offers and other content on the site
  • Knowing exactly which visitors are interacting on social media
  • Knowing where visitors and social followers are in the buying cycle, and which persona best represents them
  • Smart forms that don’t require visitors to enter the same information twice, and allow you to build out more detailed profiles of them over time
  • Blogging platforms that tie into email and marketing automation systems
  • Analytics data that isn’t blocked by Google and is tied directly to real people
  • Closed-loop reporting that integrates with call-tracking and Google’s Data Import tool
  • Up-sell, cross-sell, and abandoned cart reclamation features

Three more things…
  1. If you can figure out a way to get Tommy to “log in” when he comes to your site, the personalization possibilities are nearly limitless.
  2. The persona above is based on a real customer segment. I named it after my friend Tommy Bailey, who actually did write the eBook Guide to Hiking the Appalachian Trail, featured in the image above.
  3. This Moz post is part of an inbound marketing campaign targeting eCommerce marketers, a segment Inflow identified while building out our own personas. Our hope, and the whole point of inbound marketing, is that it provides value to you.

Current state of the inbound marketing industry

Inbound has, for the most part, been applied to businesses in which the website objective is to generate leads for a sales team to follow up with and close. An examination of various marketing automation platforms—a key component of scalable inbound marketing programs—highlights this issue.

Popular marketing automation systems

Most of the major marketing automation systems can be used very effectively as the backbone of an inbound marketing program for eCommerce businesses. However, only one of them (Silverpop) has made significant efforts to court the eCommerce market with content and out-of-the-box features. The next closest thing is HubSpot, so let’s start with those two:

Silverpop – an IBM® Company


Unlike the other platforms below, right out of the box Silverpop allows marketers to tap into very specific behaviors, including the items purchased or left in the cart.

You can easily segment based on metrics like the Recency, Frequency and Monetary Value (RFM) of purchases:

silverpop triggered campaigns
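RFM scoring itself is straightforward to reason about. Here's a minimal, self-contained sketch; the order fields and the score thresholds are illustrative assumptions, not Silverpop's actual schema or defaults:

```python
# Hypothetical RFM scoring sketch: bucket each customer 1 (low) to 3 (high)
# on Recency, Frequency, and Monetary value of their orders.
from datetime import date
from collections import defaultdict

def rfm_scores(orders, today):
    stats = defaultdict(lambda: {"last": date.min, "count": 0, "spend": 0.0})
    for o in orders:
        s = stats[o["customer_id"]]
        s["last"] = max(s["last"], o["order_date"])
        s["count"] += 1
        s["spend"] += o["total"]

    def bucket(value, thresholds):
        # 1 plus one point for each threshold cleared.
        return 1 + sum(value >= t for t in thresholds)

    scores = {}
    for cid, s in stats.items():
        days_since = (today - s["last"]).days
        scores[cid] = {
            "R": bucket(-days_since, [-90, -30]),  # more recent = higher score
            "F": bucket(s["count"], [2, 5]),
            "M": bucket(s["spend"], [100, 500]),
        }
    return scores

orders = [
    {"customer_id": "tommy", "order_date": date(2015, 3, 1), "total": 120.0},
    {"customer_id": "tommy", "order_date": date(2015, 4, 10), "total": 40.0},
]
print(rfm_scores(orders, today=date(2015, 4, 20)))
# {'tommy': {'R': 3, 'F': 2, 'M': 2}}
```

Once each customer carries an R/F/M triple, segments like "recent, frequent, high-value buyers" become simple filters on those scores.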

You can automate personalized shopping cart abandonment recovery emails:

silverpop cart abandonment recovery

You can integrate with many leading brands offering complementary services, including: couponing, CRM, analytics, email deliverability enhancement, social and most major eCommerce platforms.

What you can’t do with Silverpop is blog, find pricing info or a free trial on their website, or enjoy a modern-looking user experience. Sounds like an IBM® company, doesn’t it?

HubSpot

Out of all the marketing automation platforms on this list, HubSpot is the most capable of handling “inbound marketing” campaigns from start to finish. This should come as no surprise, given the phrase is credited to
Brian Halligan, HubSpot’s co-founder and CEO.

While they don’t specifically cater to eCommerce marketing needs with the same gusto they give to lead-gen marketing, HubSpot does have an eCommerce landing page and a demo landing page for eCommerce leads, which suggests that their own personas include eCommerce marketers. Additionally, there is some good content on their blog written specifically for eCommerce.

HubSpot has allowed some key partners to develop plug-ins that integrate with leading eCommerce platforms. This curated approach is not dissimilar to how Google and Apple handle their approved apps.

magento and hubspot

The
Magento Connector for HubSpot, which costs $80 per month, was developed by EYEMAGiNE, a creative design firm for eCommerce websites. A similar HubSpot-approved third-party integration is on the way for Bigcommerce.

Another eCommerce integration for HubSpot is a Shopify plug-in called HubShoply, which was developed by Groove Commerce and costs $100 per month.

You can also use HubSpot’s native integration capabilities with
Zapier to sync data between HubSpot and most major eCommerce SaaS vendors, including the ones above, as well as WooCommerce, Shopify, PayPal, Infusionsoft and more. However, the same could be said of some of the other marketing automation platforms, and using these third-party solutions can sometimes feel like fitting a square peg into a round hole.

HubSpot can and does handle inbound marketing for eCommerce websites. All of the features are there, or easy enough to integrate. But let’s put some pressure on them to up their eCommerce game even more. The least they can do is put an eCommerce link in the footer:

hubspot menus

Despite the lack of clear navigation to their eCommerce content, HubSpot seems to be paying more attention to the needs of eCommerce businesses than the rest of the platforms below.

Marketo

Nothing about Marketo’s in-house marketing strategy suggests “Ecommerce Director Bob” might be one of their personas. The description for each of
their marketing automation packages (from Spark to Enterprise) mentions that it is “for B2B” websites.

marketo screenshot

“Driving Sales” could apply to a retail business, so I clicked on the link. Nope. Clearly, this is for lead generation.

marketo marketing automation

Passing “purchase-ready leads” over to your “sales reps” is a good example of the type of language used throughout the site.

Make no mistake, Marketo is a top-notch marketing automation platform, powerful and clean. It’s a shame they haven’t launched a full-scale eCommerce version of their core product. In the meantime, there’s the Magento Integration for Marketo Plug-in, developed by an agency out of Australia called Hoosh Marketing.

magento marketo integration

I’ve never used this integration, but it’s part of Marketo’s
LaunchPoint directory, which I imagine is vetted, and Hoosh seems like a reputable agency.

Their
pricing page is blurred and gated, which is annoying, but perhaps they’ll come on here and tell everyone how much they charge.

marketo pricing page

As with all others except Silverpop, the Marketo navigation provides no easy paths to landing pages that would appeal to “Ecommerce Director Bob.”

Pardot

This option is a Salesforce product, so—though I’ve never had the opportunity to use it—I can imagine Pardot is heavy on B2B/sales and very light on B2C marketing for retail sites.

The hero image on their homepage says as much.

pardot tagline

pardot marketing automation

Again, no mention of eCommerce or retail, but clear navigation to lead gen and sales.

Eloqua / OMC

eloqua-logo.jpeg

Eloqua, now part of the Oracle Marketing Cloud (OMC), has a landing page
for the retail industry, on which they proclaim:

“Retail marketers know that the path to lifelong loyalty and increased revenue goes through building and growing deep client relationships.”

Since when did retail marketers start calling customers clients?

eloqua integration

The Integration tab on OMC’s “…Retail.html” page helpfully informs eCommerce marketers that their sales teams can continue using CRM systems like Salesforce and Microsoft Dynamics, but doesn’t mention anything about eCommerce platforms and other SaaS solutions for eCommerce businesses.

Others

There are many other players in this arena. Though I haven’t used them yet, three I would love to try out are
SharpSpring, Hatchbuck and Act-On. But none of them appear to be any better suited to handle the concerns of eCommerce websites.

Where there’s a gap, there’s opportunity

The purpose of the section above wasn’t to highlight deficiencies in the tools themselves, but to illustrate a gap in who they are being marketed to and developed for.

So far, most of your eCommerce competitors probably aren’t using tools like these because they are not marketed to by the platforms, and don’t know how to apply the technology to online retail in a way that would justify the expense.

The thing is, a tool is just a tool

The
key concepts behind inbound marketing apply just as much to online retail as they do to lead generation.

In order to “do inbound marketing,” a marketing automation system isn’t even strictly necessary (in theory). They just help make the activities scalable for most businesses.

They also bring a lot of different marketing activities under one roof, which saves time and allows data to be moved and utilized between channels and systems. For example, what a customer is doing on social could influence the emails they receive, or content they see on your site. Here are some potential uses for most of the platforms above:

Automated marketing uses

  • Personalized abandoned cart emails
  • Post-purchase nurturing/reorder marketing
  • Welcome campaigns for newsletter (or other free offer) signups
  • Winback campaigns
  • Lead-nurturing email campaigns for cohorts and persona-based segments
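As one concrete example from the list above, a personalized abandoned-cart campaign usually boils down to a simple filter that runs on a schedule. This is a sketch under assumed field names (not any platform's actual data model):

```python
# Hypothetical abandoned-cart trigger: find carts with items, no completed
# order, and no activity for a configurable idle window.
from datetime import datetime, timedelta

def carts_to_recover(carts, now, idle_after=timedelta(hours=3)):
    """Return the carts that should receive a recovery email."""
    return [
        c for c in carts
        if c["items"] and not c["ordered"]
        and now - c["last_activity"] >= idle_after
    ]

carts = [
    {"email": "a@example.com", "items": ["stopper"], "ordered": False,
     "last_activity": datetime(2015, 4, 20, 9, 0)},
    {"email": "b@example.com", "items": ["filter"], "ordered": True,
     "last_activity": datetime(2015, 4, 20, 9, 0)},
]
due = carts_to_recover(carts, now=datetime(2015, 4, 20, 13, 0))
print([c["email"] for c in due])  # ['a@example.com']
```

The "personalized" part is then a matter of merging the cart's line items (and the contact's persona and lifecycle stage) into the email template.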

Content marketing uses

  • Optimized, strategic blogging platforms and frameworks
  • Landing pages for pre-transactional/educational offers or contests
  • Social media reporting, monitoring, and publishing
  • Personalization of content and user experience

Reporting uses

  • Revenue reporting (by segment or marketing action)
  • Attribution reporting (by campaign or content)
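Revenue reporting by segment is conceptually just an aggregation over orders that have already been tagged with the persona or campaign that drove them. A minimal sketch, with illustrative field names:

```python
# Hypothetical revenue-by-segment rollup over segment-tagged order rows.
from collections import defaultdict

def revenue_by_segment(orders):
    totals = defaultdict(float)
    for o in orders:
        totals[o["segment"]] += o["total"]
    return dict(totals)

orders = [
    {"segment": "Hiker Tommy", "total": 90.0},
    {"segment": "Hiker Tommy", "total": 35.0},
    {"segment": "Ecommerce Director Bob", "total": 250.0},
]
print(revenue_by_segment(orders))
# {'Hiker Tommy': 125.0, 'Ecommerce Director Bob': 250.0}
```

The hard part in practice is the tagging (closed-loop attribution), which is exactly what the marketing automation platforms above sell; the reporting itself is the easy half.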

Assuming you don’t have the budget for a marketing automation system, but already have a good email marketing platform, you can still get started with inbound marketing. Eventually, however, you may want to graduate to a dedicated marketing automation solution to reap the full benefits.

Email marketing platforms

Most of the marketing automation systems claim to replace your email marketing platform, while many email marketing platforms claim to be marketing automation systems. Neither statement is completely accurate.

Marketing automation systems, especially those created specifically for the type of “inbound” campaigns described above, provide a powerful suite of tools all in one place. On the other hand, dedicated email platforms tend to offer “email marketing” features that are better, and more robust, than those offered by marketing automation systems. Some of them are also considerably cheaper—such as
MailChimp—but those are often light on even the email-specific features for eCommerce.

A different type of campaign

Email “blasts” in the form of B.O.G.O., $10 off or free shipping offers can still be very successful in generating incremental revenue boosts — especially for existing customers and seasonal campaigns.

The conversion rate on a 20% off coupon sent to existing customers, for instance, would likely pulverize the conversion rate of an email going out to middle-of-funnel contacts with a link to content (at least with how CR is currently being calculated by email platforms).

Inbound marketing campaigns can also offer quick wins, but they tend to focus mostly on non-customers after the first segmentation campaign (a campaign for the purpose of segmenting your list, such as an incentivised survey). This means lower initial conversion rates, but long-term success with the growth of new customers.

Here’s a good bet if it works with your budget: Rely on a marketing automation system for inbound marketing to drive new customer acquisition from initial visit to first purchase, while using a good email marketing platform to run your “promotional email” campaigns to existing customers.

If you have to choose one or the other, I’d go with a robust marketing automation system.

Some of the most popular email platforms used by eCommerce businesses, with a focus on how they handle various Inbound Marketing activities, include:

Bronto

bronto.jpeg

This platform builds in features like abandoned cart recovery, advanced email list segmentation and automated email workflows that nurture contacts over time.

They also offer a host of eCommerce-related features that you just don’t get with marketing automation systems like HubSpot and Marketo. This includes easy integration with a variety of eCommerce platforms like ATG, Demandware, Magento, Miva Merchant, Mozu and MarketLive, not to mention apps for coupons, product recommendations, social shopping and more. Integration with enterprise eCommerce platforms is one reason why Bronto shows up over and over again in the Internet Retailer Top 500 reports.

On the other hand, Bronto—like the rest of these email platforms—doesn’t have many of the features that assist with content marketing outside of emails. As an “inbound” marketing automation system, it is incomplete because it focuses almost solely on one channel: email.

Vertical Response

verticalresponse.jpeg

Another juggernaut among eCommerce email marketing platforms, Vertical Response has even fewer inbound-related features than Bronto, though it is a good email platform with a free version that includes up to 1,000 contacts and 4,000 emails per month (i.e., 4 emails to a full list of 1,000).

Oracle Marketing Cloud (OMC)

Responsys (the email platform), like Eloqua (the marketing automation system), was gobbled up by Oracle and is now part of their “Marketing Cloud.”

It has been my experience that when a big technology firm like IBM or Oracle buys a great product, it isn’t “great” for the users. Time will tell.

Listrak

listrak.jpeg

Out of the established email platforms for eCommerce, Listrak may do the best job at positioning themselves as a full inbound marketing platform.

Listrak’s value proposition is that they’re an “Omnichannel” solution. Everything is all in one “Single, Integrated Digital Marketing Platform for Retailers.” The homepage image promises solutions for Email, Mobile, Social, Web and In-Store channels.

I haven’t had the opportunity to work with Listrak yet, but would love to hear feedback in the comments on whether they could handle the kind of persona-based content marketing and automated email nurturing campaigns described in the example campaign above.

Key takeaways

Congratulations for making it this far! Here are a few things I hope you’ll take away from this post:

  • There is a lot of opportunity right now for eCommerce sites to take advantage of marketing automation systems and robust email marketing platforms as the infrastructure to run comprehensive inbound marketing campaigns.
  • There is a lot of opportunity right now for marketing automation systems to develop content and build in eCommerce-specific features to lure eCommerce marketers.
  • Inbound marketing isn’t email marketing, although email is an important piece to inbound because it allows you to begin forming lasting relationships with potential customers much earlier in the buying cycle.
  • To see the full benefits of inbound marketing, you should focus on getting the right content to the right person at the right time in their shopping journey. This necessarily involves several different channels, including search, social and email. One of the many benefits of marketing automation systems is their ability to track your efforts here across marketing channels, devices and touch-points.

Tools, resources, and further reading

There is a lot of great content on the topic of Inbound marketing, some of which has greatly informed my own understanding and approach. Here are a few resources you may find useful as well.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from tracking.feedpress.it

Leveraging Panda to Get Out of Product Feed Jail

Posted by MichaelC

This is a story about Panda, customer service, and differentiating your store from others selling the same products.

Many e-commerce websites get the descriptions, specifications, and imagery for products they sell from feeds or databases provided by the
manufacturers. The manufacturers might like this, as they control how their product is described and shown. However, it does their retailers
no good when they are trying to rank for searches for those products and they’ve got the exact same content as every other retailer. If the content
in the feed is thin, then you’ll have pages with…well….thin content. And if there’s a lot of content for the products, then you’ll have giant blocks of content that
Panda might spot as being the same as they’ve seen on many other sites. To throw salt on the wound, if the content is really crappy, badly written,
or downright wrong, then the retailers’ sites will look low-quality to Panda and users as well.

Many webmasters see Panda as a type of Google penalty—but it’s not, really. Panda is a collection of measurements Google
is taking of your web pages to try and give your pages a rating on how happy users are likely to be with those pages.
It’s not perfect, but then again—neither is your website.

Many SEO folks (including me) tend to focus on the kinds of tactical and structural things you can do to make Panda see
your web pages as higher quality: things like adding big, original images, interactive content like videos and maps, and
lots and lots and lots and lots of text. These are all good tactics, but let’s step back a bit and look at a specific
example to see WHY Panda was built to do this, and from that, what we can do as retailers to enrich the content we have
for e-commerce products where our hands are a bit tied—we’re getting a feed of product info from the manufacturers, the same
as every other retailer of those products.

I’m going to use a real-live example that I suffered through about a month ago. I was looking for a replacement sink
stopper for a bathroom sink. I knew the brand, but there wasn’t a part number on the part I needed to replace. After a few Google
searches, I think I’ve found it on Amazon:


Don’t you wish online shopping was always this exciting?

What content actually teaches the customer

All righty… my research has shown me that there are standard sizes for plug stoppers. In fact, I initially ordered a
“universal fit sink stopper.” Which didn’t fit. Then I found 3 standard diameters, and 5 or 6 standard lengths.
No problem…I possess that marvel of modern tool chests, a tape measure…so I measure the part I have that I need to replace. I get about 1.5″ x 5″.
So let’s scroll down to the product details to see if it’s a match:

Kohler sink stopper product info from hell

Whoa. 1.2 POUNDS? This sink stopper must be made of
Ununoctium.
The one in my hand weighs about an ounce. But the dimensions
are way off as well: a 2″ diameter stopper isn’t going to fit, and mine needs to be at least an inch longer.

I scroll down to the product description…maybe there’s more detail there, maybe the 2″ x 2″ is the box or something.

I've always wanted a sink stopper designed for long long

Well, that’s less than helpful, with a stupid typo AND incorrect capitalization AND a missing period at the end.
Doesn’t build confidence in the company’s quality control.

Looking at the additional info section, maybe this IS the right part…the weight quoted in there is about right:

Maybe this is my part after all

Where else customers look for answers

Next I looked at the questions and answers bit, which convinced me that it PROBABLY was the right part:

Customers will answer the question if the retailer won't...sometimes.

If I were smart, I would have covered my bets by doing what a bunch of other customers also did: buy a bunch of different parts, and surely one of them would fit. Could there possibly be a clearer signal than this that the product info was lacking?

If you can't tell which one to buy, buy them all!

In this case, that was probably smarter than spending another 1/2 hour of my time snooping around online. But in general, people
aren’t going to be willing to buy THREE of something just to make sure they get the right one. This cheap part was an exception.

So, surely SOMEONE out there has the correct dimensions of this part on their site—so I searched for the part number I saw on the Amazon
listing. But as it turned out, that crappy description and wrong weight and dimensions were on every site I found…because they came from
the manufacturer.

Better Homes and Gardens...but not better description.

A few of the sites had edited out the “designed for long long” bit, but apart from that, they were all the same.

What sucks for the customer is an opportunity for you

Many, many retailers are in this same boat—they get their product info from the manufacturer, and if the data sucks in their feed,
it’ll suck on their site. Your page looks weak to both users and to Panda, and it looks the same as everybody else’s page for that product…to
both users and to Panda. So (a) you won’t rank very well, and (b) if you DO manage to get a customer to that page, it’s not as likely to convert
to a sale.

What can you do to improve on this? Here’s a few tactics to consider.

1. Offer your own additional description and comments

Add a new field to your CMS for your own write-ups on products, and when you discover issues like the above, you can add your own information—and
make it VERY clear what’s the manufacturer’s stock info and what you’ve added (that’s VALUE-ADDED) as well. My client
Sports Car Market magazine does this with their collector car auction reports in their printed magazine:
they list the auction company’s description of the car, then their reporter’s assessment of the car. This is why I buy the magazine and not the auction catalog.

2. Solicit questions

Be sure you solicit questions on every product page—your customers will tell you what’s wrong or what important information is missing. Sure, you’ve got millions of products to deal with, but what the customers are asking about (and your sales volume, of course) will help you prioritize as well as find the opportunities.

Amazon does a great job of enabling this, but in this case, I used the Feedback option to update the product info,
and got back a total
bull-twaddle email from the seller about how the dimensions are in the product description thank you for shopping with us, bye-bye.
I tried to help them, for free, and they shat on me.

3. But I don’t get enough traffic to get the questions

Don’t have enough site volume to get many customer requests? No problem, the information is out there for you on Amazon :-).
Take your most important products, and look them up on Amazon, and see what questions are being asked—then answer those ONLY on your own site.

4. What fits with what?

Create fitment/cross-reference charts for products.
You probably have in-house knowledge of what products fit/are compatible with what other products.
Just because YOU know a certain accessory fits all makes and models, because it’s some industry-standard size, doesn’t mean that the customer knows this.

If there’s a particular way to measure a product so you get the correct size, explain that (with photos of what you’re measuring, if it seems
at all complicated). I’m getting a new front door for my house. 

  • How big is the door I need? 
  • Do I measure the width of the door itself, or the width of the
    opening (probably 1/8″ wider)? 
  • Or if it’s pre-hung, do I measure the frame too? Is it inswing or outswing?
  • Right or left hinged…am I supposed to
    look at the door from inside the house or outside to figure this out? 

If you’re a door seller, this is all obvious stuff,
but it wasn’t obvious to me, and NOT having the info on a website means (a) I feel stupid, and (b) I’m going to look at your competitors’ sites
to see if they will explain it…and maybe I’ll find a door on THEIR site I like better anyway.

Again, prioritize based on customer requests.

5. Provide your own photos and measurements

If examples of the physical products are available to you, take your own photos, and take your own measurements.

In fact, take your OWN photo of YOURSELF taking the measurement—so the user can see exactly what part of the product you’re measuring.
In the photo below, you can see that I’m measuring the diameter of the stopper, NOT the hole in the sink, NOT the stopper plus the rubber gasket.
And no, Kohler, it’s NOT 2″ in diameter…by a long shot.

Don't just give the measurements, SHOW the measurements

Keep in mind, you shouldn’t have to tear apart your CMS to do any of this. You can put your additions in a new database table, just tied to the
core product content by SKU. In the page template code for the product page, you can check your database to see if you have any of your “extra bits” to display
alongside the feed content, and this way keep it separate from the core product catalog code. This will make updates to the CMS/product catalog less painful as well.
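As a rough illustration of that separation, here's a sketch using SQLite. The table and column names are made up; the point is that your value-added content lives in its own table, joined to the feed-driven catalog only by SKU at render time:

```python
# Hypothetical "extra bits" side table keyed by SKU, kept separate from the
# core product catalog so CMS/catalog upgrades don't touch your additions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE product_extras (
    sku TEXT PRIMARY KEY,
    our_notes TEXT,       -- clearly-labeled value-added description
    measured_dims TEXT,   -- our own measurements, not the feed's
    extra_photos TEXT     -- paths to our own photos
)""")
db.execute(
    "INSERT INTO product_extras VALUES (?, ?, ?, ?)",
    ("K-1234", "Feed specs are wrong; we measured this part ourselves.",
     '1.5" dia x 5" long', "photos/k-1234-measure.jpg"),
)

def extras_for(sku):
    """Called from the product page template; returns None if nothing extra."""
    return db.execute(
        "SELECT our_notes, measured_dims, extra_photos "
        "FROM product_extras WHERE sku = ?", (sku,)
    ).fetchone()

print(extras_for("K-1234")[1])  # 1.5" dia x 5" long
print(extras_for("K-9999"))     # None
```

The `None` case is what keeps this incremental: the template shows the extra block only for the SKUs you've gotten to, so you can roll it out a product at a time.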

Fixing your content doesn’t have to be all that difficult, nor expensive

At this point, you’re probably thinking “hey, but I’ve got 1.2 million SKUs, and if I were to do this, it’d take me 20 years to update all of them.”
FINE. Don’t update all of them. Prioritize, based on factors like what you sell the most of, what you make the best margin on, what customers
ask questions about the most, etc. Maybe concentrate on your top 5% in terms of sales, and do those first. Take all that money you used to spend
buying spammy links every month, and spend it instead on junior employees or interns doing the product measurements, extra photos, etc.

And don’t be afraid to spend a little effort on a low value product, if it’s one that frequently gets questions from customers.
Simple things can make a life-long fan of the customer. I once needed to replace a dishwasher door seal, and didn’t know if I needed special glue,
special tools, how to cut it to fit with or without overlap, etc.
I found a video on how to do the replacement on
RepairClinic.com. So easy!
They got my business for the $10 seal, of course…but now I order my $50 fridge water filter from them every six months as well.

Benefits to your conversion rate

Certainly the tactics we’ve talked about will improve your conversion rate from visitors to purchasers. If JUST ONE of those sites I looked at for that damn sink stopper
had the right measurement (and maybe some statement about how the manufacturer’s specs above are actually incorrect, we measured, etc.), I’d have stopped right there
and bought from that site.

What does this have to do with Panda?

But, there’s a Panda benefit here too. You’ve just added a bunch of additional, unique text to your site…and maybe a few new unique photos as well.
Not only are you going to convert better, but you’ll probably rank better too.

If you’re NOT Amazon, or eBay, or Home Depot, etc., then Panda is your secret weapon to help you rank against those other sites whose backlink profiles are
stronger than
carbon fibre (that’s a really cool video, by the way).
If you saw my
Whiteboard Friday on Panda optimization, you’ll know that
Panda tuning can overcome incredible backlink profile deficits.

It’s go time

We’re talking about tactics that are time-consuming, yes—but relatively easy to implement, using relatively inexpensive staff (and in some
cases, your customers are doing some of the work for you).
And it’s something you can roll out a product at a time.
You’ll be doing things that really DO make your site a better experience for the user…we’re not just trying to trick Panda’s measurements.

  1. Your pages will rank better, and bring more traffic.
  2. Your pages will convert better, because users won’t leave your site, looking elsewhere for answers to their questions.
  3. Your customers will be more loyal, because you were able to help them when nobody else bothered.

Don’t be held hostage by other peoples’ crappy product feeds. Enhance your product information with your own info and imagery.
Like good link-building and outreach, it takes time and effort, but both Panda and your site visitors will reward you for it.
