Q&A: Lost Your Anonymous Google Reviews? The Scoop on Removal and Moving Forward

Posted by MiriamEllis

Did you recently notice a minor or major drop in your Google review count, and then realize that some of your actual reviews had gone missing, too? Read on to see if your experience of review removal was part of the action Google took in late May surrounding anonymous reviews.

Q: What happened?

A: As nearly as I can pinpoint it, Google began discounting reviews left by “A Google User” from total review counts around May 23, 2018. For a brief period, these anonymous reviews were still visible, but were then removed from display. I haven’t seen any official announcement about this, to date, and it remains unclear as to whether all reviews designated as being from “A Google User” have been removed, or whether some still remain. I haven’t been able to discover a single one since the update.

Q: How do I know if I was affected by this action?

A: If, prior to my estimated date, you had reviews that had been left by profiles marked “A Google User,” and these reviews are now gone, that’s the diagnostic of why your total review count has dropped.

Q: The reviews I’ve lost weren’t from “A Google User” profiles. What happened?

A: If you’ve lost reviews from non-anonymous profiles, it’s time to investigate other causes of removal. These could include:

  • Having paid for or incentivized reviews, either directly or via an unethical marketer
  • Reviews stemming from a review station/kiosk at your business
  • Getting too many reviews at once
  • URLs, prohibited language, or other objectionable content in the body of reviews
  • Reviewing yourself, or having employees (past or present) do so
  • Reviews left from the same IP address as your business (as in the case of free on-site Wi-Fi)
  • The use of review strategies/software that prohibit negative reviews or selectively solicit positive reviews
  • Any other violation of Google’s review guidelines
  • A Google bug; in this case, check the GMB forum for reports of similar review loss and wait a few days to see if your reviews return. If they don’t, you can post about your issue in the GMB forum, but chances are not good that removed reviews will be reinstated.

Q: Is anonymous review removal a bug or a test?

A: One month later, these reviews remain absent. This is not a bug, and seems unlikely to be a test.

Q: Could my missing anonymous reviews come back?

A: Never say “never” with Google. From their inception, Google review counts have been wonky, and have been afflicted by various bugs. There have been cases in which reviews have vanished and reappeared. But, in this case, I don’t believe these types of reviews will return. This is most likely an action on Google’s part with the intention of improving their review corpus, which is, unfortunately, plagued with spam.

Q: What were the origins of “A Google User” reviews?

A: Reviews designated by this language came from a variety of scenarios, but are chiefly fallout from Google’s rollout of Google+ and then its subsequent detachment from local. As Mike Blumenthal explains:

As recently as 2016, Google required users to log in as G+ users to leave a review. When they transitioned away from + they allowed users several choices as to whether to delete their reviews or to create a name. Many users did not make that transition. For the users that chose not to give their name and make that transition Google identified them as “A Google User”… also certain devices like the old Blackberry’s could leave a review but not a name. Also users left + and may have changed profiles at Google abandoning their old profiles. Needless to say there were many ways that these reviews became from “A Google User.”

Q: Is the removal of anonymous reviews a positive or negative thing? What’s Google trying to do here?

A: Whether this action has worked out well or poorly for you likely depends on the quality of the reviews you’ve lost. In some cases, the loss may have suddenly put you behind competitors, in terms of review count or rating. In others, the loss of anonymous negative reviews may have just resulted in your star rating improving — which would be great news!

As to Google’s intent with this action, my assumption is that it’s a step toward increasing transparency. Not their own transparency, but the accountability of the reviewing public. Google doesn’t really like to acknowledge it, but their review corpus is inundated with spam, some of it the product of global networks of bad actors who have made a business of leaving fake reviews. Personally, I welcome Google making any attempts to cope with this, but the removal of this specific type of anonymous review is definitely not an adequate solution to review spam when the livelihoods of real people are on the line.

Q: Does this Google update mean my business is now safe from anonymous reviews?

A: Unfortunately, no. While it does mean you’re unlikely to see reviews marked as being from “A Google User”, it does not in any way deter people from creating as many Google identities as they’d like to review your business. Consider:

  • Google’s review product has yet to reach a level of sophistication which could automatically flag reviews left by “Rocky Balboa” or “Whatever Whatever” as, perhaps, somewhat lacking in legitimacy.
  • Google’s product also doesn’t appear to suspect profiles created solely to leave one-time reviews, though this is a clear hallmark of many instances of spam.
  • Google won’t remove text-less negative star ratings, despite owner requests.
  • Google hasn’t historically been swayed to remove reviews on the basis of the owner claiming no records show that a negative reviewer was ever a customer.

Q: Should Google’s removal of anonymous reviews alter my review strategy?

A: No, not really. I empathize with the business owners expressing frustration over the loss of reviews they were proud of and had worked hard to earn. But I see actions like this as important signals to all local businesses: remember that you don’t own your Google reviews, and you don’t own your Google My Business listing/Knowledge Panel. Google owns those assets, and manages them in any way they deem best for Google.

In the local SEO industry, we are increasingly seeing the transformation of businesses from the status of empowered “website owner” to the shakier “Google tenant,” with more and more consumer actions taking place within Google’s interface. The May removal of reviews should be one more nudge to your local brand to:

  • Be sure you have an ongoing, guideline-compliant Google review acquisition campaign in place so that reviews that become filtered out can be replaced with fresh reviews
  • Take an active approach to monitoring your GMB reviews so that you become aware of changes quickly. Software like Moz Local can help with this, especially if you own or market large, multi-location enterprises. Even when no action can be taken in response to a new Google policy, awareness is always a competitive advantage.
  • Diversify your presence on review platforms beyond Google
  • Collect reviews and testimonials directly from your customers to be placed on your own website; don’t forget the Schema markup while you’re at it
  • Diversify the ways in which you are cultivating positive consumer sentiment offline; word-of-mouth marketing, loyalty programs, and the development of real-world relationships with your customers are things you directly control
  • Keep collecting those email addresses and, following the laws of your country, cultivate non-Google-dependent lines of communication with your customers
  • Invest heavily in hiring and training practices that empower staff to offer the finest possible experience to customers at the time of service — this is the very best way to ensure you are building a strong reputation both on and offline
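On the point about adding Schema markup to on-site testimonials: one common approach is Schema.org's Review type emitted as JSON-LD. Below is a minimal Python sketch that builds such an object; the business name, reviewer, rating, and review text are all invented placeholders, and this is an illustration rather than a complete markup strategy.

```python
import json

# Hypothetical example: Schema.org Review markup for a testimonial
# collected directly from a customer. Every name and value below is
# a placeholder to be replaced with your real data.
review_markup = {
    "@context": "https://schema.org",
    "@type": "Review",
    "itemReviewed": {
        "@type": "LocalBusiness",
        "name": "Example Bakery",  # your business name
    },
    "author": {"@type": "Person", "name": "Jane Doe"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "5",
        "bestRating": "5",
    },
    "reviewBody": "Wonderful service and fresh bread every time.",
}

# Serialize and embed the result in your page inside a
# <script type="application/ld+json"> tag.
json_ld = json.dumps(review_markup, indent=2)
print(json_ld)
```

The same structure extends to AggregateRating if you want to mark up an overall star rating across many testimonials.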

Q: So, what should Google do next about review spam?

A: A Google rep once famously stated,

“The wiki nature of Google Maps expands upon Google’s steadfast commitment to open community.”

I’d welcome your opinions as to how Google should deal with review spam, as I find this a very hard question to answer. It may well be a case of trying to lock the barn door after the horse has bolted, and Google’s wiki mentality applied to real-world businesses is one with which our industry has contended for years.

You see, the trouble with Google’s local product is that it was never opt-in. Whether you list your business or not, it can end up in Google’s local business index, and that means you are open to reviews (positive, negative, and fallacious) on the most visible possible platform, like it or not. As I’m not seeing a way to walk this back, review spam should be Google’s problem to fix, and they are obliged to fix it if:

  • They are committed to their own earnings, based on the trust the public feels in their review corpus
  • They are committed to user experience, implementing necessary technology and human intervention to protect consumers from fake reviews
  • They want to stop treating the very businesses on whom their whole product is structured as unimportant in the scheme of things; companies going out of business due to review spam attacks really shouldn’t be viewed as acceptable collateral damage

Knowing that Alphabet has an estimated operating income of $7 billion for 2018, I believe Google could fund these safeguards:

  1. Take a bold step and resource human review mediators. Make this a new department within the local department. Google sends out lots of emails to businesses now. Let them all include clear contact options for reaching the review mediation department if the business experiences spam reviews. Put the department behind a wizard that walks the business owner through guidelines to determine if a review is truly spam, and if this process signals a “yes,” open a ticket and fix the issue. Don’t depend on volunteers in the GMB forum. Invest money in paid staff to maintain the quality of Google’s own product.
  2. If Google is committed to the review flagging process (which is iffy, at best), offer every business owner clear guidelines for flagging reviews within their own GMB dashboard, and then communicate about what is happening to the flagged reviews.
  3. Improve algorithmic detection of suspicious signals, like profiles with one-off reviews, the sudden influx of negative reviews and text-less ratings, global reviews within a single profile, and companies or profiles with a history of guideline violations. Hold the first few reviews left by any profile in a “sandbox,” à la Yelp.

Now it’s your turn! Let’s look at Google’s removal of “A Google User” reviews as a first step in the right direction. If you had Google’s ear, what would you suggest they do next to combat review spam? I’d really like to know.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


The Meta Referrer Tag: An Advancement for SEO and the Internet

Posted by Cyrus-Shepard

The movement to make the Internet more secure through HTTPS brings several useful advancements for webmasters. In addition to security improvements, HTTPS promises future technological advances and potential SEO benefits for marketers.

HTTPS in search results is rising. Recent MozCast data from Dr. Pete shows nearly 20% of first page Google results are now HTTPS.

Sadly, HTTPS also has its downsides.

Marketers run into their first challenge when they switch regular HTTP sites over to HTTPS. The switch is technically challenging, typically routing your site through a series of 301 redirects. Historically, these types of redirects are associated with a loss of link equity (thought to be around 15%), which can lead to a loss in rankings. This can offset any SEO advantage Google claims for switching.

Ross Hudgens perfectly summed it up in this tweet:

Many SEOs have anecdotally shared stories of HTTPS sites performing well in Google search results (and our soon-to-be-published Ranking Factors data seems to support this). However, the short-term effect of a large migration can be hard to take. When Moz recently switched to HTTPS to provide better security to our logged-in users, we saw an 8-9% dip in our organic search traffic.

Problem number two is the subject of this post. It involves the loss of referral data. Typically, when one site sends traffic to another, information is sent that identifies the originating site as the source of traffic. This invaluable data allows people to see where their traffic is coming from, and helps spread the flow of information across the web.

SEOs have long used referrer data for a number of beneficial purposes. Oftentimes, people will link back or check out the site sending traffic when they see the referrer in their analytics data. Spammers know this works, as evidenced by the recent increase in referrer spam:

This process stops when traffic flows from an HTTPS site to a non-secure HTTP site. In this case, no referrer data is sent. Webmasters can’t know where their traffic is coming from.

Here’s how referral data to my personal site looked when Moz switched to HTTPS. I lost all visibility into where my traffic came from.

It’s (not provided) all over again!

Enter the meta referrer tag

While we can’t solve the ranking challenges imposed by switching a site to HTTPS, we can solve the loss of referral data, and it’s actually super-simple.

Almost completely unknown to most marketers, the relatively new meta referrer tag (it’s actually been around for a few years) was designed to help out in these situations.

Better yet, the tag allows you to control how your referrer information is passed.

The meta referrer tag works with most browsers to pass referrer information in a manner defined by the user. Traffic remains encrypted and all the benefits of using HTTPS remain in place, but now you can pass referrer data to all websites, even those that use HTTP.

How to use the meta referrer tag

What follows are extremely simplified instructions for using the meta referrer tag. For more in-depth understanding, we highly recommend referring to the W3C working draft of the spec.

The meta referrer tag is placed in the <head> section of your HTML, and references one of five states, which control how browsers send referrer information from your site. The five states are:

  1. None: Never pass referral data
    <meta name="referrer" content="none">
    
  2. None When Downgrade: Sends referrer information to secure HTTPS sites, but not insecure HTTP sites
    <meta name="referrer" content="none-when-downgrade">
    
  3. Origin Only: Sends only the scheme, host, and port (essentially, the bare domain or subdomain), stripped of the full URL path, as the referrer; i.e. https://moz.com/example.html would simply send https://moz.com
    <meta name="referrer" content="origin">
    

  4. Origin When Cross-Origin: Sends the full URL as the referrer when the target has the same scheme, host, and port (i.e. subdomain), regardless of whether it’s HTTP or HTTPS, while sending origin-only referral information to external sites. (Note: there is a typo in the official spec. Future versions should be “origin-when-cross-origin”)
    <meta name="referrer" content="origin-when-crossorigin">
    
  5. Unsafe URL: Always passes the URL string as a referrer. Note if you have any sensitive information contained in your URL, this isn’t the safest option. By default, URL fragments, username, and password are automatically stripped out.
    <meta name="referrer" content="unsafe-url">
    

The meta referrer tag in action

By clicking the link below, you can get a sense of how the meta referrer tag works.

Check Referrer

Boom!

We’ve set the meta referrer tag for Moz to “origin”, which means when we link out to another site, we pass our scheme, host, and port. The end result is you see https://moz.com as the referrer, stripped of the full URL path (/meta-referrer-tag).

My personal site typically receives several visits per day from Moz. Here’s what my analytics data looked like before and after we implemented the meta referrer tag.

For simplicity and security, most sites may want to implement the “origin” state, but there are drawbacks.

One negative side effect was that as soon as we implemented the meta referrer tag, our AdRoll analytics, which we use for retargeting, stopped working. It turns out that AdRoll uses our referrer information for analytics, but the meta referrer tag “origin” state meant that the only URL they ever saw reported was https://moz.com.

Conclusion

We love the meta referrer tag because it keeps information flowing on the Internet. It’s the way the web is supposed to work!

It helps marketers and webmasters see exactly where their traffic is coming from. It encourages engagement, communication, and even linking, which can lead to improvements in SEO.



Big Data, Big Problems: 4 Major Link Indexes Compared

Posted by russangular

Given this blog’s readership, chances are good you will spend some time this week looking at backlinks in one of the growing number of link data tools. We know backlinks continue to be one of, if not the most important, parts of Google’s ranking algorithm. We tend to take these link data sets at face value, though, in part because they are all we have. But when your rankings are on the line, is there a better way to tell which data set is the best? How should we go about assessing link indexes like Moz, Majestic, Ahrefs, and SEMrush for quality? Historically, there have been four common approaches to this question of index quality:

  • Breadth: We might choose to look at the number of linking root domains any given service reports. We know
    that referring domains correlates strongly with search rankings, so it makes sense to judge a link index by how many unique domains it has
    discovered and indexed.
  • Depth: We also might choose to look at how deep the web has been crawled, looking more at the total number of URLs
    in the index, rather than the diversity of referring domains.
  • Link Overlap: A more sophisticated approach might count the number of links an index has in common with Google Webmaster
    Tools.
  • Freshness: Finally, we might choose to look at the freshness of the index. What percentage of links in the index are
    still live?

There are a number of really good studies (some newer than others) using these techniques that are worth checking out when you get a chance:

  • BuiltVisible analysis of Moz, Majestic, GWT, Ahrefs and Search Metrics
  • SEOBook comparison of Moz, Majestic, Ahrefs, and Ayima
  • Matthew Woodward study of Ahrefs, Majestic, Moz, Raven, and SEO Spyglass
  • Marketing Signals analysis of Moz, Majestic, Ahrefs, and GWT
  • RankAbove comparison of Moz, Majestic, Ahrefs and Link Research Tools
  • StoneTemple study of Moz and Majestic

While these are all excellent at addressing the methodologies above, there is a particular limitation with all of them. They miss one of the most important metrics we need to determine the value of a link index: proportional representation to Google’s link graph. So here at Angular Marketing, we decided to take a closer look.

Proportional representation to Google Search Console data

So, why is it important to determine proportional representation? Many of the most important and valued metrics we use are built on proportional
models. PageRank, MozRank, CitationFlow and Ahrefs Rank are proportional in nature. The score of any one URL in the data set is relative to the
other URLs in the data set. If the data set is biased, the results are biased.

A Visualization

Link graphs are biased by their crawl prioritization. Because there is no full representation of the Internet, every link graph, even Google’s,
is a biased sample of the web. Imagine for a second that the picture below is of the web. Each dot represents a page on the Internet,
and the dots surrounded by green represent a fictitious index by Google of certain sections of the web.

Of course, Google isn’t the only organization that crawls the web. Other organizations like Moz,
Majestic, Ahrefs, and SEMrush
have their own crawl prioritizations which result in different link indexes.

In the example above, you can see different link providers trying to index the web like Google. Link data provider 1 (purple) does a good job of building a model that is similar to Google’s. It isn’t very big, but it is proportional. Link data provider 2 (blue) has a much larger index, and likely has more links in common with Google than link data provider 1, but it is highly disproportional. So, how would we go about measuring this proportionality? And which data set is the most proportional to Google?

Methodology

The first step is to determine a measurement of relativity for analysis. Google doesn’t give us very much information about its link graph. All we have is what is in Google Search Console. The best source we can use is referring domain counts. In particular, we want to look at what we call referring domain link pairs. A referring domain link pair would be something like ask.com->mlb.com: 9,444, which means that ask.com links to mlb.com 9,444 times.

Steps

  1. Determine the root linking domain pairs and values for 100+ sites in Google Search Console
  2. Determine the same for Ahrefs, Moz, Majestic Fresh, Majestic Historic, SEMrush
  3. Compare the referring domain link pairs of each data set to Google’s, assuming a Poisson distribution
  4. Run simulations of each data set’s performance against the others (i.e., Moz vs. Majestic, Ahrefs vs. SEMrush, Moz vs. SEMrush, et al.)
  5. Analyze the results
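The comparison in steps 3 and 4 can be sketched roughly as follows. The study's exact scoring isn't published, so treat this as an illustration of the idea only: score each index by the Poisson likelihood of its referring-domain pair counts, using Google Search Console's counts as the expected values. All counts and domains below are invented.

```python
import math

def poisson_log_likelihood(google_pairs, index_pairs):
    """Score how well an index's referring-domain link-pair counts
    match Google Search Console's, treating each GSC count as the
    mean of a Poisson distribution and the index's count as an
    observation from it. Higher (less negative) means a closer,
    more proportional match.

    Both arguments map (source_domain, target_domain) -> link count.
    """
    total = 0.0
    for pair, lam in google_pairs.items():
        k = index_pairs.get(pair, 0)
        # log of the Poisson pmf: k*log(lam) - lam - log(k!)
        total += k * math.log(lam) - lam - math.lgamma(k + 1)
    return total

google = {("ask.com", "mlb.com"): 9444, ("espn.com", "mlb.com"): 1200}
index_a = {("ask.com", "mlb.com"): 9100, ("espn.com", "mlb.com"): 1150}
index_b = {("ask.com", "mlb.com"): 20000}  # bigger, but disproportional

# The smaller-but-proportional index scores better than the
# larger-but-skewed one.
print(poisson_log_likelihood(google, index_a) >
      poisson_log_likelihood(google, index_b))
# True
```

This captures the core intuition of the study: an index is judged not by raw size, but by how closely its counts track Google's, pair by pair.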

Results

When placed head-to-head, there seem to be some clear winners at first glance. In head-to-head, Moz edges out Ahrefs, but across the board, Moz and Ahrefs fare quite evenly. Moz, Ahrefs and SEMrush seem to be far better than Majestic Fresh and Majestic Historic. Is that really the case? And why?

It turns out there is an inversely proportional relationship between index size and proportional relevancy. This might seem counterintuitive: shouldn’t the bigger indexes be closer to Google? Not exactly.

What does this mean?

Each organization has to create a crawl prioritization strategy. When you discover millions of links, you have to prioritize which ones you might crawl next. Google has a crawl prioritization, and so do Moz, Majestic, Ahrefs, and SEMrush. There are lots of different things you might choose to prioritize:

  • You might prioritize link discovery. If you want to build a very large index, you could prioritize crawling pages on sites that
    have historically provided new links.
  • You might prioritize content uniqueness. If you want to build a search engine, you might prioritize finding pages that are unlike
    any you have seen before. You could choose to crawl domains that historically provide unique data and little duplicate content.
  • You might prioritize content freshness. If you want to keep your search engine recent, you might prioritize crawling pages that
    change frequently.
  • You might prioritize content value, crawling the most important URLs first based on the number of inbound links to that page.

Chances are, an organization’s crawl priority will blend some of these features, but it’s difficult to design one exactly like Google. Imagine
for a moment that instead of crawling the web, you want to climb a tree. You have to come up with a tree climbing strategy.

  • You decide to climb the longest branch you see at each intersection.
  • One friend of yours decides to climb the first new branch he reaches, regardless of how long it is.
  • Your other friend decides to climb the first new branch she reaches only if she sees another branch coming off of it.

Despite having different climb strategies, everyone chooses the same first branch, and everyone chooses the same second branch. There are only
so many different options early on.

But as the climbers go further and further along, their choices eventually produce differing results. This is exactly the same for web crawlers
like Google, Moz, Majestic, Ahrefs and SEMrush. The bigger the crawl, the more the crawl prioritization will cause disparities. This is not a
deficiency; this is just the nature of the beast. However, we aren’t completely lost. Once we know how index size is related to disparity, we
can make some inferences about how similar a crawl priority may be to Google.

Unfortunately, we have to be careful in our conclusions. We only have a few data points to work with, so it is very difficult to be certain about this part of the analysis. In particular, it seems strange that Majestic would get better relative to its index size as it grows, unless Google holds on to old data (which might be an important discovery in and of itself). Most likely, we simply can’t draw that level of conclusion at this point.

So what do we do?

Let’s say you have a list of domains or URLs for which you would like to know their relative values. Your process might look something like
this…

  • Check Open Site Explorer to see if all URLs are in their index. If so, you are looking at metrics most likely to be proportional to Google’s link graph.
  • If any of the links do not occur in the index, move to Ahrefs and use their Ahrefs ranking if all you need is a single PageRank-like metric.
  • If any of the links are missing from Ahrefs’s index, or you need something related to trust, move on to Majestic Fresh.
  • Finally, use Majestic Historic for (by leaps and bounds) the largest coverage available.
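The fallback order above can be sketched as a simple coverage check. The index names and ordering follow the article; the coverage data and the function itself are hypothetical illustrations, not any provider's real API:

```python
def pick_index(urls, coverage):
    """Walk the fallback order described above: prefer the most
    proportional index that covers *all* of the URLs in question,
    falling back to broader but less proportional indexes.
    `coverage` maps an index name to the set of URLs it has indexed.
    """
    preference = ["Open Site Explorer", "Ahrefs",
                  "Majestic Fresh", "Majestic Historic"]
    needed = set(urls)
    for index in preference:
        if needed <= coverage.get(index, set()):
            return index
    return None  # no single index covers everything

# Invented coverage data for illustration
coverage = {
    "Open Site Explorer": {"a.com", "b.com"},
    "Ahrefs": {"a.com", "b.com", "c.com"},
    "Majestic Fresh": {"a.com", "b.com", "c.com", "d.com"},
}

print(pick_index(["a.com", "c.com"], coverage))
# Ahrefs
```

The trade-off the article describes falls straight out of this: the further down the preference list you have to go to avoid null values, the less proportional (and so less accurate) the metric you end up using.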

It is important to point out that the likelihood that all the URLs you want to check are in a single index increases as the accuracy of the metric decreases. Considering the size of Majestic’s data, you can’t ignore them, because you are less likely to get null-value answers from their data than from the others. If anything rings true, it is that once again it makes sense to get data from as many sources as possible. You won’t get the most proportional data without Moz, the broadest data without Majestic, or everything in between without Ahrefs.

What about SEMrush? They are making progress, but they don’t publish any relative statistics that would be useful in this particular
case. Maybe we can hope to see more from them soon given their already promising index!

Recommendations for the link graphing industry

All we hear about these days is big data; we almost never hear about good data. I know that the teams at Moz, Majestic, Ahrefs, SEMrush, and others are interested in mimicking Google, but I would love to see some organization stand up against the allure of more data in favor of better data: data more like Google’s. It could begin with testing various crawl strategies to see if they produce a result more similar to the data shared in Google Search Console. Having the most Google-like data is certainly a crown worth winning.

Credits

Thanks to Diana Carter at Angular for assistance with data acquisition and Andrew Cron with statistical analysis. Thanks also to the representatives from Moz, Majestic, Ahrefs, and SEMrush for answering questions about their indices.


Creating Demand for Products, Services, and Ideas that Have Little to No Existing Search Volume – Whiteboard Friday

Posted by randfish

A lot of fantastic websites (and products, services, ideas, etc.) are in something of a pickle: The keywords they would normally think to target get next to no search volume. It can make SEO seem like a lost cause. In today’s Whiteboard Friday, Rand explains why that’s not the case, and talks about the one extra step that’ll help those organizations create the demand they want.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about a particularly challenging problem in the world of SEO, and that is trying to do SEO or trying to do any type of web marketing when your product, service, or idea has no search volume around it. So nobody is already looking for what you offer. It’s a new thing, a new concept.

I’ll use the example here of a website that I’m very fond of, but which there’s virtually no search volume for, called Niice. It’s Niice.co.

It’s great. I searched for things in here. It brings me back all these wonderful visuals from places like Colossal and lots of design portals. I love this site. I use it all the time for inspiration, for visuals, for stuff that I might write about on blogs, for finding new artists. It’s just cool. I love it. I love the discovery aspect of it, and I think it can be really great for finding artists and designers and visuals.

But when I looked at the keyword research — and granted I didn’t go deep into the keyword research, but let’s imagine that I did — I looked for things like: “visual search engine” almost no volume; “search engine for designers” almost no volume; “graphical search engine” almost no volume; “find designer visuals” nada.

So when they look at their keyword research they go, “Man, we don’t even have keywords to target here really.” SEO almost feels like it’s not a channel of opportunity, and I think that’s where many, many companies and businesses make mistakes, actually, because just because you don’t see keyword volume exactly around what you’re offering doesn’t mean that SEO can’t be a great channel. It just means we have to do an extra step of work, and that’s what I want to talk about today.

So I think when you encounter this type of challenge — and granted it might not be the challenge that there’s no keyword volume — it could be a challenge in your business, for your organization, for some ideas or products that you have or are launching that there’s just very little, and thus you’re struggling to come up with enough volume to create the quantity of leads, or free trials, or customers that you need. This process really can work.

Key questions to start.

1) Who’s the target audience?

In Niice’s case, that’s going to be a lot of designers. It might be people who are creating presentations. It might be those who are searching out designers or artists. It could be people seeking inspiration for all sorts of things. So they’re going to figure out who that is.

From there, they can look at the job title, interests, demographics of those people, and then you can do some cool stuff where you can figure out things like, “Oh, you know what? We could do some Facebook ad targeting to those right groups to help boost their interests in our product and potentially, well, create branded search volume down the road, attract direct visitors, build brand awareness for ourselves, and potentially get some traffic to the site directly as well. If we can convert some of that traffic, well, that’s fantastic.”

In their case, I think Niice is ad-supported right now, so all they really need is the traffic itself. But regardless, this is that same type of process you’d use.

2) What else do they search for?

What is that target audience searching for? Knowledge, products, tools, services, people, brands, whatever it is, if you know who the audience is, you can figure out what they’re searching for because they have needs. If they have a job title, if they have interests, if you have those profile features about the audience, you can figure out what else they’re going to be searching for, and in this case, knowing what designers are searching for, well, that’s probably relatively simplistic. The other parts of their audience might be more complex, but that one is pretty obvious.

From that, we can do content creation. We can do keyword targeting to be in front of those folks when they’re doing search by creating content that may not necessarily be exactly selling our tools, but that’s the idea of content marketing. We’re creating content to target people higher up in the funnel before they need our product.

We can use that, too, for product and feature inspiration in the product itself. So in this case, Niice might consider creating a design pattern library or several, pulling from different places, or hiring someone to come in and build one for them, then featuring that somewhere on the site and potentially trying to rank for it in the search engines, which then brings qualified visitors, the types of people who, once they got exposed to Niice, would be like, “Wow, this is great and it’s totally free. I love it.”

A UX tool list, so a list of tools for user experience people on the design or UI side, maybe Photoshop tutorials, whatever it is that they feel they’re competent and capable of creating and could potentially rank for. Now you’re attracting the right audience to your site before they need your product.

3) Where do they go?

That audience, where are they going on the web? What do they do when they get there? To whom do they listen? Who are their influencers? How can we be visible in those locations? So from that I can get things like influencer targeting and outreach. I can get ad and sponsorship opportunities. I can figure out places to do partnership or guest content or business development.

In Niice’s case, that might be things like sponsoring or speaking at design events. Maybe they could create an awards project for Dribbble. So they go to Dribbble, they look at what’s been featured there, or they go to Colossus, or some of the other sites that they feature, and they find the best work of the week. At the end of the week, they feature the top 10 projects, and then they call out the designers who put them together.

Wow, that’s terrific. Now you’re getting in front of the audience whose work you’re featuring, which is going to, in turn, make them amplify Niice’s project and product to an audience who’s likely to be in their target audience. It’s sort of a win-win. That’s also going to help them build links, engagement, shares, and all sorts of signals that potentially will help them with their authority, both topically and domain-wide, which then means they can rank for all the content they create, building up this wonderful engine.

4) What types of content have achieved broad or viral distribution?

I think what we can glean from this is not just inspiration for content and keyword opportunities, as we can from many other kinds of content, but also sites to target: in particular, sites to target with advertising, sites to target for guest posting or sponsorship, sites to target for business development or for partnerships, sites to target in an ad network, sites to target psychographically or demographically for Facebook if we want to run ads like that, and potentially bidding on ads in Google when people search for that website or for that brand name in paid search.

So if you’re Niice, you could think about contracting some featured artists to contribute visuals, maybe for a topical news project. So something big is happening in the news or in the design community, and you contract a few of the artists whose work you have featured or are featuring, or people from the communities whose work you’re featuring, and say, “Hey, we might not be able to pay you a lot, but we’re going to get in front of a ton of people. We’re going to build exposure for you, which is something we already do, FYI, and now you’ve got some wonderful content that has the potential to mimic that work’s success.”

You could think about, and I love this just generally as a content marketing and SEO tactic, if you go find viral content, content that has had wide sharing success across the web from the past, say two, three, four, or five years ago, you have a great opportunity, especially if the initial creator of that content or project hasn’t continued on with it, to go say, “Hey, you know what? We can do a version of that. We’re going to modernize and update that for current audiences, current tastes, what’s currently going on in the market. We’re going to go build that, and we have a strong feeling that it’s going to be successful because it’s succeeded in the past.”

That, I think, is a great way to get content ideas from viral content and then to potentially overtake it in the search rankings too. If something from three or five years ago that was particularly timely then still ranks today, and you produce an updated version, you’re almost certainly going to come out on top, thanks to Google’s bias for freshness, especially around things that have timely relevance.

5) Should brand advertisement be in our consideration set?

Then the last one: I like to ask about brand advertising in these cases, because when there’s not search volume yet, a lot of times what you have to do is create awareness. I should really call this brand awareness rather than brand advertising, because there are organic ways to do it as well as advertising ways to do it. You can think about, “Well, where are the places we can target where we could build that awareness? Should we invest in press and public relations?” Not press releases. “Then how do we own the market?” So I think one of the keys here is starting with that name or title or keyword phrase that encapsulates what the market will call your product, service, or idea.

In the case of Niice, that could be, well, visual search engines. You can imagine the press saying, “Well, visual search engines like Niice have recently blah, blah, blah.” Or it could be designer search engines, or it could be graphical search engines, or it could be designer visual engines, whatever it is. You need to find what that thing is going to be and what’s going to resonate.

In the case of Nest, that was the smart home. In the case of Oculus, it was virtual reality and virtual reality gaming. In the case of Tesla, it was sort of already established. There’s electric cars, but they kind of own that market. If you know what those keywords are, you can own the market before it gets hot, and that’s really important because that means that all of the press and PR and awareness that happens around the organic rankings for that particular keyword phrase will all be owned and controlled by you.

When you search for “smart home,” Nest is going to dominate those top 10 results. When you search for “virtual reality gaming,” Oculus is going to dominate those top 10. It’s not necessarily dominating with just their own site; it’s dominating all the press and PR articles about it, the Wikipedia page about it, etc., etc. You become the brand that’s synonymous with the keyword or concept. From an SEO perspective, that’s a beautiful world to live in.

So, hopefully, for those of you who are struggling around demand for your keywords, for your volume, this process can be something that’s really helpful. I look forward to hearing from you in the comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


How to Rid Your Website of Six Common Google Analytics Headaches

Posted by amandaecking

I’ve been in and out of Google Analytics (GA) for the past five or so years agency-side. I’ve seen three different code libraries, dozens of new features and reports roll out, IP addresses stop being reported, and keywords not-so-subtly phased out of the free platform.

Analytics has been a focus of mine for the past year or so—mainly, making sure clients get their data right. Right now, our new focus is closed loop tracking, but that’s a topic for another day. If you’re using Google Analytics, and only Google Analytics for the majority of your website stats, or it’s your primary vehicle for analysis, you need to make sure it’s accurate.

Not having data pulling in or reporting properly is like building a house on a shaky foundation: It doesn’t end well. Usually there are tears.

For some reason, a lot of people, including many of my clients, assume everything is tracking properly in Google Analytics… because Google. But it’s not Google who sets up your analytics. People do that. And people are prone to make mistakes.

I’m going to go through six scenarios where issues are commonly encountered with Google Analytics.

I’ll outline the remedy for each issue, and in the process, show you how to move forward with a diagnosis or resolution.

1. Self-referrals

This is probably one of the areas we’re all familiar with. If you’re seeing a lot of traffic from your own domain, there’s likely a problem somewhere—or you need to extend the default session length in Google Analytics. (For example, if you have a lot of long videos or music clips and don’t use event tracking; a website like TEDx or SoundCloud would be a good equivalent.)

Typically one of the first things I’ll do to help diagnose the problem is include an advanced filter to show the full referrer string. You do this by creating a filter, as shown below:

Filter Type: Custom filter > Advanced
Field A: Hostname
Extract A: (.*)
Field B: Request URI
Extract B: (.*)
Output To: Request URI
Constructor: $A1$B1

You’ll then start seeing the subdomains pulling in. Experience has shown me that if you have a separate subdomain hosted in another location (say, if you work with a separate company and they host and run your mobile site or your shopping cart), it gets treated by Google Analytics as a separate domain. Thus, you’ll need to implement cross-domain tracking. This way, you can narrow down whether or not it’s one particular subdomain that’s creating the self-referrals.

In this example below, we can see all the revenue is being reported to the booking engine (which ended up being cross domain issues) and their own site is the fourth largest traffic source:

It’s also a good idea to check the browser and device reports to start narrowing down whether the issue is specific to a particular element. If it’s not, keep digging. Look at pages pulling the self-referrals and go through the code with a fine-tooth comb, drilling down as much as you can.

2. Unusually low bounce rate

If you have a crazy-low bounce rate, it could be too good to be true. Unfortunately. An unusually low bounce rate could (and probably does) mean that at least some pages of your website have the same Google Analytics tracking code installed twice.

Take a look at your source code, or use Google Tag Assistant (though it does have known bugs) to see if you’ve got GA tracking code installed twice.
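If you’d rather check in bulk than eyeball each page’s source, a rough script can do the counting for you. This is a minimal sketch, assuming classic analytics.js snippets with UA-style property IDs; it won’t see code injected through Google Tag Manager (use Tag Assistant for that), and the function name is my own:

```javascript
// Rough sketch: count Universal Analytics loaders and property IDs in a
// page's raw HTML. The same property ID appearing more than once (or more
// than one analytics.js loader) is a hint the snippet is installed twice.
function countTrackingSnippets(html) {
  const loaders = (html.match(/google-analytics\.com\/analytics\.js/g) || []).length;
  const ids = html.match(/UA-\d{4,10}-\d{1,4}/g) || [];
  const counts = {};
  for (const id of ids) counts[id] = (counts[id] || 0) + 1;
  return { loaders, ids: counts };
}
```

Run it over the saved HTML of a few suspect pages; the same property ID showing up twice is your smoking gun.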

While I tell clients that having Google Analytics installed twice on the same page can lead to double the pageviews, I’ve not actually encountered that—I usually just say it to scare them into removing the duplicate implementation more quickly. Don’t tell on me.

3. Iframes anywhere

I’ve heard directly from Google engineers and Google Analytics evangelists that Google Analytics does not play well with iframes, and that it will never play nice with this dinosaur technology.

If you track the iframe, you inflate your pageviews, plus you still aren’t tracking everything with 100% clarity.

If you don’t track across iframes, you lose the source/medium attribution and everything becomes a self-referral.

Damned if you do; damned if you don’t.

My advice: Stop using iframes. They’re Netscape-era technology anyway, with rainbow marquees and Comic Sans on top. Interestingly, and unfortunately, a number of booking engines (for hotels) and third-party carts (for ecommerce) still use iframes.

If you have any clients in those verticals, or if you’re in the vertical yourself, check with your provider to see if they use iframes. Or you can check for yourself, by right-clicking as close as you can to the actual booking element:

iframe-booking.png

There is no neat and tidy way to address iframes with Google Analytics, and usually iframes are not the only complicated element of setup you’ll encounter. I spent eight months dealing with a website on a subfolder, which used iframes and had a cross domain booking system, and the best visibility I was able to get was about 80% on a good day.

Typically, I’d approach diagnosing iframes (if, for some reason, I had absolutely no access to viewing a website or talking to the techs) similarly to diagnosing self-referrals, as self-referrals are one of the biggest symptoms of iframe use.

4. Massive traffic jumps

Massive jumps in traffic don’t typically just happen. (Unless, maybe, you’re Geraldine.) There’s always an explanation—a new campaign launched, you just turned on paid ads for the first time, you’re using content amplification platforms, you’re getting a ton of referrals from that recent press in The New York Times. And if you think it just happened, it’s probably a technical glitch.

I’ve seen everything from inflated pageviews result from including tracking on iframes and unnecessary implementation of virtual pageviews, to not realizing the tracking code was installed on other microsites for the same property. Oops.

Usually I’ve seen this happen when the tracking code was somewhere it shouldn’t be, so if you’re investigating a situation of this nature, first confirm the Google Analytics code is only in the places it needs to be. Tools like Google Tag Assistant and Screaming Frog can be your BFFs in helping you figure this out.

Also, I suggest bribing the IT department with sugar (or booze) to see if they’ve changed anything lately.

5. Cross-domain tracking

I wish cross-domain tracking with Google Analytics out of the box didn’t require any additional setup. But it does.

If you don’t have it set up properly, things break down quickly, and can be quite difficult to untangle.

The older the GA library you’re using, the harder it is. The easiest setup, by far, is Google Tag Manager with Universal Analytics. Hard-coded Universal Analytics is a bit more difficult because you have to implement autoLink manually and decorate forms, if you’re using them (and you probably are). Beyond that, rather than try and deal with it, I say update your Google Analytics code. Then we can talk.

Where I’ve seen the most murkiness with tracking is when parts of cross domain tracking are implemented, but not all. For some reason, if allowLinker isn’t included, or you forget to decorate all the forms, the cookies aren’t passed between domains.
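For reference, the hard-coded Universal Analytics pieces look roughly like this. A minimal sketch: “UA-XXXXX-Y” and “booking.example.com” are placeholders for your own property ID and linked domain, and the recording stub stands in for the command queue that Google’s standard bootstrap snippet provides in the browser:

```javascript
// Recording stub standing in for the analytics.js command queue; included
// only so this sketch is self-contained outside a browser.
const commands = [];
function ga(...args) { commands.push(args); }

// The cross-domain pieces discussed above: enable allowLinker on the
// tracker, require the linker plugin, and list every linked domain.
ga('create', 'UA-XXXXX-Y', 'auto', { allowLinker: true });
ga('require', 'linker');
ga('linker:autoLink', ['booking.example.com']);
```

Forms can be decorated as well by passing a decorate-forms flag to linker:autoLink; check the current Google documentation for the exact arguments for your library version.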

The absolute first place I would start with this would be confirming the cookies are all passing properly at all the right points, forms, links, and smoke signals. I’ll usually use a combination of the Real Time report in Google Analytics, Google Tag Assistant, and GA debug to start testing this. Any debug tool you use will mean you’re playing in the console, so get friendly with it.

6. Internal use of UTM strings

I’ve saved the best for last. Internal use of campaign tagging. We may think, oh, I use Google to tag my campaigns externally, and we’ve got this new promotion on site which we’re using a banner ad for. That’s a campaign. Why don’t I tag it with a UTM string?

Step away from the keyboard now. Please.

When you tag internal links with UTM strings, you override the original source/medium. So that visitor who came in through your paid ad and then who clicks on the campaign banner has now been manually tagged. You lose the ability to track that they came through on the ad the moment they click on the tagged internal link. Their source and medium is now your internal campaign, not that paid ad you’re spending gobs of money on and have to justify to your manager. See the problem?

I’ve seen at least three pretty spectacular instances of this in the past year, and a number of smaller instances of it. Annie Cushing also talks about the evils of internal UTM tags and the odd prevalence of it. (Oh, and if you haven’t explored her blog, and the amazing spreadsheets she shares, please do.)

One clothing company I worked with tagged all of their homepage offers with UTM strings, which resulted in the loss of visibility for one-third of their audience: One million visits over the course of a year, and $2.1 million in lost revenue.

Let me say that again. One million visits, and $2.1 million. That couldn’t be attributed to an external source/campaign/spend.

Another client I audited included campaign tagging on nearly every navigational element on their website. It still gives me nightmares.

If you want to see if you have any internal UTM strings, head straight to the Campaigns report in Acquisition in Google Analytics, and look for anything like “home” or “navigation” or any language you may use internally to refer to your website structure.
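You can also check from the crawl side. Here’s a minimal sketch: feed it a list of link URLs (say, exported from a Screaming Frog crawl) plus your own hostname, and it flags internal links carrying campaign parameters. The function name and the simple hostname comparison are my own simplifications:

```javascript
// Sketch: flag internal links that carry UTM campaign parameters.
const UTM_PARAMS = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];

function findInternalUtmLinks(urls, ownHost) {
  return urls.filter((href) => {
    let url;
    try { url = new URL(href); } catch (e) { return false; } // skip malformed URLs
    if (url.hostname !== ownHost) return false;              // external links are fine
    return UTM_PARAMS.some((p) => url.searchParams.has(p));
  });
}
```

Anything it returns is a link on your own site that will overwrite the visitor’s original source/medium.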

And if you want to see how users are moving through your website, go to the Flow reports. Or if you really, really, really want to know how many people click on that sidebar link, use event tracking. But please, for the love of all things holy (and to keep us analytics lovers from throwing our computers across the room), stop using UTM tagging on your internal links.

Now breathe and smile

Odds are, your Google Analytics setup is fine. If you are seeing any of these issues, though, you have somewhere to start in diagnosing and addressing the data.

We’ve looked at six of the most common points of friction I’ve encountered with Google Analytics and how to start investigating them: self-referrals, bounce rate, iframes, traffic jumps, cross domain tracking and internal campaign tagging.

What common data integrity issues have you encountered with Google Analytics? What are your favorite tools to investigate?


​The 3 Most Common SEO Problems on Listings Sites

Posted by Dom-Woodman

Listings sites have a very specific set of search problems that you don’t run into everywhere else. By day I’m one of Distilled’s analysts, but by night I run a job listings site, teflSearch. So, for my first Moz Blog post I thought I’d cover the three search problems with listings sites that I spent far too long agonising about.

Quick clarification time: What is a listings site (i.e. will this post be useful for you)?

The classic listings site is Craigslist, but plenty of other sites act like listing sites:

  • Job sites like Monster
  • E-commerce sites like Amazon
  • Matching sites like Spareroom

1. Generating quality landing pages

The landing pages on listings sites are incredibly important. These pages are usually the primary drivers of converting traffic, and they’re usually generated automatically (or are occasionally custom category pages).

For example, if I search “Jobs in Manchester”, you can see nearly every result is an automatically generated landing page or category page.

There are three common ways to generate these pages (occasionally a combination of more than one is used):

  • Faceted pages: These are generated by facets—groups of preset filters that let you filter the current search results. They usually sit on the left-hand side of the page.
  • Category pages: These pages are listings which have already had a filter applied and can’t be changed. They’re usually custom pages.
  • Free-text search pages: These pages are generated by a free-text search box.

Those definitions are still a bit general; let’s clear them up with some examples:

Amazon uses a combination of categories and facets. If you click on browse by department you can see all the category pages. Then on each category page you can see a faceted search. Amazon is so large that it needs both.

Indeed generates its landing pages through free text search, for example if we search for “IT jobs in manchester” it will generate: IT jobs in manchester.

teflSearch generates landing pages using just facets. The jobs in China landing page is simply a facet of the main search page.

Each method has its own search problems when used for generating landing pages, so let’s tackle them one by one.


Aside

Facets and free text search will typically generate pages with parameters e.g. a search for “dogs” would produce:

www.mysite.com?search=dogs

But to make the URL user-friendly, sites will often alter the URLs to display them as folders:

www.mysite.com/results/dogs/

These are still just ordinary free-text searches and facets; the URLs are simply friendlier. (They’re a lot easier to work with in robots.txt too!)
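The mapping itself is trivial, which is part of why so many sites do it. A sketch of the idea (in practice this usually lives in server rewrite rules, and the /results/ path here is just illustrative):

```javascript
// Sketch: rewrite a free-text search query to a friendly folder-style URL,
// and recover the query from such a URL. Path scheme is illustrative.
function toFriendlyUrl(query) {
  const slug = query.trim().toLowerCase().replace(/[^a-z0-9]+/g, '-');
  return `/results/${slug}/`;
}

function fromFriendlyUrl(path) {
  const m = path.match(/^\/results\/([^/]+)\/$/);
  return m ? m[1].replace(/-/g, ' ') : null; // null when the path doesn't match
}
```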


Free search (& category) problems

If you’ve decided the base of your search will be a free text search, then we’ll have two major goals:

  • Goal 1: Helping search engines find your landing pages
  • Goal 2: Giving them link equity.

Solution

Search engines won’t use search boxes and so the solution to both problems is to provide links to the valuable landing pages so search engines can find them.

There are plenty of ways to do this, but two of the most common are:

  • Category links alongside a search

    Photobucket uses a free text search to generate pages, but if we look at an example search for photos of dogs, we can see the categories which define the landing pages along the right-hand side. (This is also an example of URL-friendly searches!)

  • Putting the main landing pages in a top-level menu

    Indeed also uses free text to generate landing pages, and they have a browse jobs section which contains the URL structure to allow search engines to find all the valuable landing pages.

Breadcrumbs are also often used in addition to the two above and in both the examples above, you’ll find breadcrumbs that reinforce that hierarchy.

Category (& facet) problems

Categories, because they tend to be custom pages, don’t actually have many search disadvantages. Instead it’s the other attributes that make them more or less desirable. You can create them for the purposes you want and so you typically won’t have too many problems.

However, if you also use a faceted search in each category (like Amazon) to generate additional landing pages, then you’ll run into all the problems described in the next section.

At first facets seem great, an easy way to generate multiple strong relevant landing pages without doing much at all. The problems appear because people don’t put limits on facets.

Let’s take the job page on teflSearch. We can see it has 18 facets, each with many options. Some of these options will generate useful landing pages:

The China facet in countries will generate “Jobs in China” that’s a useful landing page.

On the other hand, the “Conditional Bonus” facet will generate “Jobs with a conditional bonus,” and that’s not so great.

We can also see that the options within a single facet aren’t always useful. As of writing, I have a single job available in Serbia. That’s not a useful search result, and the poor user engagement combined with the tiny amount of content will be a strong signal to Google that it’s thin content. Depending on the scale of your site it’s very easy to generate a mass of poor-quality landing pages.

Facets generate other problems too. The primary one is that they can create a huge amount of duplicate content and pages for search engines to get lost in. This is caused by two things: the first is the sheer number of possibilities they generate, and the second is that selecting facets in different orders creates identical pages with different URLs.

We end up with four goals for our facet-generated landing pages:

  • Goal 1: Make sure our searchable landing pages are actually worth landing on, and that we’re not handing a mass of low-value pages to the search engines.
  • Goal 2: Make sure we don’t generate multiple copies of our automatically generated landing pages.
  • Goal 3: Make sure search engines don’t get caught in the metaphorical plastic six-pack rings of our facets.
  • Goal 4: Make sure our landing pages have strong internal linking.

The first goal needs to be set internally; you’re always going to be the best judge of the number of results that need to be present on a page in order for it to be useful to a user. I’d argue you can rarely ever go below three, but it depends both on your business and on how much content fluctuates on your site, as the useful landing pages might also change over time.

We can solve the next three problems as a group. There are several possible solutions depending on what skills and resources you have access to; here are two possible solutions:

Category/facet solution 1: Blocking the majority of facets and providing external links
  • Easiest method
  • Good if your valuable category pages rarely change and you don’t have too many of them.
  • Can be problematic if your valuable facet pages change a lot

Nofollow all your facet links, then noindex and (using robots.txt) block pages which aren’t valuable or are deeper than x facet/folder levels into your search.

You set x by looking at where your useful facet pages with search volume exist. So, for example, if you have three facets for televisions: manufacturer, size, and resolution, and even combinations of all three have multiple results and search volume, then you could index everything up to three levels.

On the other hand, if people are searching for three levels (e.g. “Samsung 42″ Full HD TV”) but you only have one or two results for three-level facets, then you’d be better off indexing two levels and letting the product pages themselves pick up long-tail traffic for the third level.

If you have valuable facet pages that exist deeper than one facet or folder into your search, then this creates some duplicate content problems, dealt with in the aside “Indexing more than one level of facets” below.
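As a concrete illustration of the “block deeper than x levels” rule, a robots.txt fragment might look like the following. The /search/ path and the two-level cutoff are assumptions; adjust both to your own URL structure and to where your search volume actually lives:

```text
# Illustrative only: folder-style facet URLs like /search/facet1/facet2/
# stay crawlable, while a third facet level and deeper is blocked.
User-agent: *
Disallow: /search/*/*/*/
```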

The immediate problem with this set-up, however, is that in one stroke we’ve removed most of the internal links to our category pages, and by no-following all the facet links, search engines won’t be able to find your valuable category pages.

In order to re-create the linking, you can add a top-level drop-down menu to your site containing the most valuable category pages, add category links elsewhere on the page, or create a separate part of the site with links to the valuable category pages.

You can see the top-level drop-down menu on teflSearch (it’s the search jobs menu); the other two examples are demonstrated by Photobucket and Indeed respectively in the previous section.

The big advantage of this method is how quick it is to implement: it doesn’t require any fiddly internal logic, and adding an extra menu option is usually minimal effort.

Category/facet solution 2: Creating internal logic to work with the facets

  • Requires new internal logic
  • Works for large numbers of category pages with value that can change rapidly

There are four parts to the second solution:

  1. Select valuable facet categories and allow those links to be followed. No-follow the rest.
  2. No-index all pages that return a number of items below the threshold for a useful landing page
  3. No-follow all facets on pages with a search depth greater than x.
  4. Block all facet pages deeper than x level in robots.txt

As with the last solution, x is set by looking at where your useful facet pages exist that have search volume (full explanation in the first solution), and if you’re indexing more than one level you’ll need to check out the aside below to see how to deal with the duplicate content it generates.
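The four rules can be sketched as a single decision function. This is illustrative logic rather than anything framework-specific: maxDepth is the x discussed above, minResults is your useful-landing-page threshold, and the reading of rule 3 (nofollowing facet links once a page sits at the maximum depth) is my interpretation:

```javascript
// Sketch of solution 2's four rules as one decision per facet page.
// maxDepth and minResults are thresholds you choose from keyword research.
function facetPageDirectives({ depth, resultCount, isValuableFacet }, { maxDepth, minResults }) {
  if (depth > maxDepth) return { robotsTxtBlocked: true };   // rule 4: block deep pages
  const d = { robotsTxtBlocked: false, meta: [], followFacetLinks: true };
  if (!isValuableFacet) d.followFacetLinks = false;          // rule 1: nofollow low-value facets
  if (resultCount < minResults) d.meta.push('noindex');      // rule 2: noindex thin pages
  if (depth >= maxDepth) d.followFacetLinks = false;         // rule 3: don't lead crawlers deeper
  return d;
}
```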


Aside: Indexing more than one level of facets

If you want more than one level of facets to be indexable, then this will create certain problems.

Suppose you have a facet for size:

  • Televisions: Size: 46″, 44″, 42″

And want to add a brand facet:

  • Televisions: Brand: Samsung, Panasonic, Sony

This will create duplicate content because the search engines will be able to follow your facets in both orders, generating:

  • Television – 46″ – Samsung
  • Television – Samsung – 46″

You’ll have to either rel canonical your duplicate pages with another rule or set up your facets so they create a single unique URL.
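The “single unique URL” route can be as simple as always serializing the selected facets in one fixed order, so that whichever order the user clicked them in, only one URL exists. A sketch, with illustrative facet names and path scheme:

```javascript
// Sketch: serialize selected facets in a fixed order so "46-inch then
// Samsung" and "Samsung then 46-inch" produce the same URL.
const FACET_ORDER = ['category', 'brand', 'size'];

function canonicalFacetUrl(selected) {
  const parts = FACET_ORDER
    .filter((facet) => facet in selected)
    .map((facet) => encodeURIComponent(String(selected[facet]).toLowerCase()));
  return '/' + parts.join('/') + '/';
}
```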

You also need to be aware that each followable facet you add will multiply with each other followable facet and it’s very easy to generate a mass of pages for search engines to get stuck in. Depending on your setup you might need to block more paths in robots.txt or set-up more logic to prevent them being followed.

Letting search engines index more than one level of facets adds a lot of possible problems; make sure you’re keeping track of them.


2. User-generated content cannibalization

This is a common problem for listings sites (assuming they allow user-generated content). If you’re reading this as an e-commerce site that only lists its own products, you can skip this one.

As we covered in the first area, category pages on listings sites are usually the landing pages aiming for the valuable search terms, but as your users start generating pages they can often create titles and content that cannibalise your landing pages.

Suppose you’re a job site with a category page for PHP Jobs in Greater Manchester. If a recruiter then creates a job advert for PHP Jobs in Greater Manchester for the 4 positions they currently have, you’ve got a duplicate content problem.

This is less of a problem when your site is large and your categories mature; it will be obvious to any search engine which are your high-value category pages. But at the start, when you’re lacking authority and individual listings might contain more relevant content than your own search pages, this can be a problem.

Solution 1: Create structured titles

Set the <title> differently than the on-page title. Depending on the variables you have available, you can set the title tag programmatically, without changing the page title, using other information given by the user.

For example, on our imaginary job site, suppose the recruiter also provided the following information in other fields:

  • The no. of positions: 4
  • The primary area: PHP Developer
  • The name of the recruiting company: ABC Recruitment
  • Location: Manchester

We could set the <title> pattern to be: *No of positions* *The primary area* with *recruiter name* in *Location* which would give us:

4 PHP Developers with ABC Recruitment in Manchester

Setting a <title> tag allows you to target long-tail traffic by constructing detailed descriptive titles. In our above example, imagine the recruiter had specified “Castlefield, Manchester” as the location.

All of a sudden, you’ve got a perfect opportunity to pick up long-tail traffic for people searching in Castlefield in Manchester.
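The pattern above is easy to express as a template function. A sketch, with field names taken from the example (the naive pluralisation, which just appends an “s”, is the kind of detail you’d handle properly in production):

```javascript
// Sketch of the "*No of positions* *primary area* with *recruiter* in
// *location*" title pattern from the example above.
function listingTitle({ positions, area, recruiter, location }) {
  const plural = positions === 1 ? area : `${area}s`; // naive pluralisation
  return `${positions} ${plural} with ${recruiter} in ${location}`;
}
```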

On the downside, you lose the ability to pick up long-tail traffic where your users have chosen keywords you wouldn’t have used.

For example, suppose Manchester has a jobs program called “Green Highway.” A job advert title containing “Green Highway” might pick up valuable long-tail traffic. Being able to discover this, however, and find a way to fit it into a dynamic title is very hard.

Solution 2: Use regex to noindex the offending pages

Perform a regex (or string contains) search on your listings’ titles and noindex the ones which cannibalise your main category pages.

If it’s not possible to construct titles with variables, or your users provide a lot of additional long-tail traffic with their own titles, then this is a great option. On the downside, you miss out on possible structured long-tail traffic that you might’ve been able to aim for.
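A sketch of that check, using simple string-contains matching against a list of the category phrases you care about (the phrases here are just the examples from this post):

```javascript
// Sketch: flag user listing titles that duplicate a category page's
// target phrase, as candidates for noindexing.
const CATEGORY_PHRASES = ['php jobs in greater manchester', 'it jobs in manchester'];

function cannibalisesCategory(listingTitle) {
  const t = listingTitle.toLowerCase();
  return CATEGORY_PHRASES.some((phrase) => t.includes(phrase));
}
```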

Solution 3: De-index all your listings

It may seem rash, but if you’re a large site with a huge number of very similar or low-content listings, you might want to consider this. There’s no common standard: some sites like Indeed choose to noindex all their job adverts, whereas other sites like Craigslist index all their individual listings because they’ll drive long-tail traffic.

Don’t de-index them all lightly!

3. Constantly expiring content

Our third and final problem is that user-generated content doesn’t last forever. Particularly on listings sites, it’s constantly expiring and changing.

For most use cases I’d recommend 301’ing expired content to a relevant category page, with a message triggered by the redirect notifying the user of why they’ve been redirected. It typically comes out as the best combination of search and UX.

For more information or advice on how to deal with the edge cases, there’s a previous Moz blog post on how to deal with expired content which I think does an excellent job of covering this area.

Summary

In summary, if you’re working with listings sites, all three of the following need to be kept in mind:

  • How are the landing pages generated? If they’re generated using free text or facets have the potential problems been solved?
  • Is user generated content cannibalising the main landing pages?
  • How has constantly expiring content been dealt with?

Good luck listing, and if you’ve had any other tricky problems or solutions you’ve come across working on listings sites, let’s chat about them in the comments below!
