Boost your ecommerce success with dynamic pricing

The goal of every ecommerce retailer is to find reliable ways to improve their business: personalized and segmented emails, a great user experience and customer support, well-performing ads… the list goes on. All of these efforts have a real impact on the success of an online business. But in this article, we would like to tap into a less-explored area – building an effective pricing strategy through dynamic pricing to boost ecommerce success.

While dynamic pricing is not a completely new approach, ecommerce retailers have been using it more of late. Until recently, most of them optimized their prices ad hoc, changing them manually based on internal decisions. However, because of the increase in online price competition, and thanks to greater market intelligence and more sophisticated dynamic pricing software, ecommerce retailers have realized the importance and the impact of dynamic pricing on their businesses.

What is dynamic pricing?

In basic terms, dynamic pricing is a pricing approach that enables you to set flexible prices by taking into account your costs, desired profit margins, market demand and your competitors’ prices. In other words, you’ll be able to set the optimal price at the right time in response to real-time demand and competitive conditions, while taking into account your business goals.

Why is dynamic pricing important?

The most prominent example is retail giant Amazon, which changes its prices every 10 minutes and increased its revenues by 27.2%. Another big player, Walmart, adopted dynamic pricing and changed its prices 50,000 times a month; using this pricing model, its sales jumped by 30% in 2013.

Dynamic pricing also gives retailers valuable additional insight into industry trends. Ecommerce retailers can apply different price limits and analyze price elasticities before deciding on the optimal product price. A great way of testing and optimizing your prices is through paid ads. For example, Google Shopping provides instant data on how online shoppers are responding to your new price: you can analyze conversion rates, impressions, CTRs and margins after changing prices. By testing continuously, you can find the optimal price.

The benefits of using a dynamic pricing strategy are abundant: improved margins and revenues; better conversion; control on the market; personalized prices based on season, demand and demographic; and presence in price comparison engines. As such, your prices always stay competitive and optimized in the ecommerce market.

Dynamic pricing use-case scenarios

If you are managing an ecommerce store, you should seriously consider adopting a dynamic pricing model with the right technology, since it has a significant impact on business success.

Here are some proven dynamic pricing use cases you may encounter in your industry:

  • Demand-based pricing

If the general or seasonal demand for your product is low, you need to clear the excess stock in order to avoid the extra carrying costs. The most common practice is to drop prices sharply to increase sales.

On the other hand, if demand in the market is high (whether from a seasonal effect or a sudden spike in interest), you can raise your prices to boost profits.

So, in a nutshell, demand-based pricing lets you benefit from demand fluctuations in the market, for example by increasing prices when demand is high or when your competitors’ products are out of stock.

Identifying your competitors’ out-of-stock products – through a competitor price monitoring tool, Amazon Bestseller, or tracking Google Trends – gives you great insight into market demand and the most popular products over a given time period.
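
To make that rule concrete, here is a minimal sketch in Python; the function name, demand index and percentage thresholds are illustrative assumptions, not a prescription:

```python
# A minimal sketch of a demand-based pricing rule (hypothetical names and thresholds).
def demand_based_price(base_price, demand_index, competitor_in_stock, excess_stock):
    """demand_index: 1.0 = normal demand, above ~1.2 = high, below ~0.8 = low."""
    if demand_index > 1.2 or not competitor_in_stock:
        return round(base_price * 1.10, 2)  # high demand or rival out of stock: raise by 10%
    if demand_index < 0.8 and excess_stock:
        return round(base_price * 0.85, 2)  # weak demand plus excess stock: discount by 15%
    return base_price                       # otherwise leave the price unchanged

# Example: a rival is out of stock during a seasonal spike in demand.
print(demand_based_price(49.99, demand_index=1.3, competitor_in_stock=False, excess_stock=False))
```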

Moreover, ecommerce retailers can apply a dynamic pricing strategy for some seasonal opportunities that occur during the holidays or shopping frenzies.

  • Time-based pricing

Time-based pricing is a dynamic pricing approach which enables ecommerce retailers to optimize their prices based on certain times of a day, month, year or the lifespan of a product in the market.

Unlike demand-based pricing, where sudden spikes in demand drive decisions, time is the core element here, which makes the time-based pricing model more predictable.

Let’s go through some real examples. The best-known users of the time-based approach are airline ticket providers: you have probably noticed that airline ticket prices are much higher during holiday seasons than at other times of the year.

Time-based pricing also works well when a product is getting outdated. Electronics brands use this strategy to increase demand for the older version of a product: whenever Apple releases a new version, the price of the previous version is marked down to attract more customers.

Through time-based pricing, you’ll always be aware of market trends as well as what your rivals are offering. With that intelligence you can always know where and when to decrease or increase your prices.

  • Competitive pricing

There are hundreds of competitors in the market, and they adjust their prices continuously. That’s why you should monitor them and take action based on the pricing competition in the market.

If you ignore your competitors, you won’t know whether you’re priced too high or too low relative to them. That lack of information detaches you from the market.

In that scenario, you’ll face low conversion rates or slim margins, which harm your sales and business growth. This is because your competitors are acting competitively and are more aware of market trends.

Do you know why?

Statistics show that the majority of online shoppers compare prices before finalizing their purchase, visiting at least 3 online stores. Moreover, most of them name price as the very first criterion in their purchasing decision.

In a nutshell, because of price competition in the ecommerce space, online pricing has become one of the key elements influencing the purchasing decision. So retailers should be very attentive to managing and optimizing their prices.

Think about a scenario in which one of your competitors applies discounts and undercuts your pricing. With dynamic pricing, you can automatically react to its discounting strategy and regain your competitive position.

Your prices may also be more competitive than they need to be compared with your closest rival, in which case even an increase of 10% or 20% won’t harm your competitiveness in the market. To grab this opportunity, you need fresh competitor intelligence and dynamic pricing. By investing in both, you will be able to retain your competitiveness and increase your profit margin.

How to ‘really’ apply dynamic pricing to your strategy?

As mentioned above, Amazon is a huge fan of repricing!

Now let’s see how to apply this smart strategy!

Having tons of data is great, but the crucial thing is to convert that data into actionable insights. Fortunately, there is dynamic pricing and repricing software on the market that helps you generate recommendations from the data you’ve collected on competitors. The technology then lets you calculate optimal prices through repricing rules that you set based on your competitors’ prices, market demand and your costs.

Once the pricing rules are set, you can enjoy the rest! The repricing engine works around the clock, and your prices will change according to fluctuations in the market and, of course, the rules that you’ve set. With the mix of competitive intelligence and repricing, your business gains a lasting competitive advantage. As you’re able to react to every move in the market, your prices always stay competitive and optimized.

Let’s give an actual example:

There are two different retailers competing in the same category:

  • The first retailer named ‘Great E-Commerce Retailer’ is selling all types of products from almost every category (electronics, home & kitchen appliances, fashion products, sports products,… the list goes on…)
  • The second retailer named ‘Super Sport Retailer’ specializes in the sports category.

Then, imagine that these retailers are competing fiercely on price in the football shoes vertical. To take advantage of repricing technology and find the optimal price, Super Sport Retailer could set the following rule:

My prices for every product in the ‘football shoes’ category should be 10% lower than ‘Great E-Commerce Retailer’, but they should be at least 15% higher than my costs.

After setting this rule and assigning it in the repricing engine, ‘Super Sport Retailer’ will always have competitive prices in the football shoes category that never drop below its cost floor.
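
As an illustration, here is a minimal sketch of that rule in Python; real repricing tools have their own rule syntax, and the function name and test figures below are hypothetical:

```python
# Sketch of the Super Sport Retailer rule: undercut the competitor by 10%,
# but never price below cost + 15%. Names and numbers are illustrative.
def football_shoe_price(competitor_price, my_cost):
    target = competitor_price * 0.90  # 10% lower than Great E-Commerce Retailer
    floor = my_cost * 1.15            # at least 15% above my own cost
    return round(max(target, floor), 2)

print(football_shoe_price(competitor_price=100.00, my_cost=70.00))  # -> 90.0
print(football_shoe_price(competitor_price=100.00, my_cost=85.00))  # -> 97.75 (cost floor wins)
```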

Repricing in ecommerce is key to remain competitive and grow your business online.

So, what are your thoughts on dynamic pricing and repricing? Have you ever tried it on your ecommerce store? If yes, what are your experiences? Please don’t hesitate to share all of them with us at Prisync.


Reblogged 4 months ago from blog.dotmailer.com

I Can’t Drive 155: Meta Descriptions in 2015

Posted by Dr-Pete

For years now, we (and many others) have been recommending keeping your Meta Descriptions shorter than about 155-160 characters. For months, people have been sending me examples of search snippets that clearly broke that rule, like this one (on a search for “hummingbird food”):

For the record, this one clocks in at 317 characters (counting spaces). So, I set out to discover if these long descriptions were exceptions to the rule, or if we need to change the rules. I collected the search snippets across the MozCast 10K, which resulted in 92,669 snippets. All of the data in this post was collected on April 13, 2015.

The Basic Data

The minimum snippet length was zero characters. There were 69 zero-length snippets, but most of these were the new generation of answer box, which appears organic but doesn’t have a snippet. To put it another way, these were misidentified as organic by my code. The other 0-length snippets were local one-boxes that appeared as organic but had no snippet, such as this one for “chichen itza”:

These zero-length snippets were removed from further analysis, but considering that they only accounted for 0.07% of the total data, they didn’t really impact the conclusions either way. The shortest legitimate, non-zero snippet was 7 characters long, on a search for “geek and sundry”, and appears to have come directly from the site’s meta description:

The maximum snippet length that day (this is a highly dynamic situation) was 372 characters. The winner appeared on a search for “benefits of apple cider vinegar”:

The average length of all of the snippets in our data set (not counting zero-length snippets) was 143.5 characters, and the median length was 152 characters. Of course, this can be misleading, since some snippets are shorter than the limit and others are being artificially truncated by Google. So, let’s dig a bit deeper.

The Bigger Picture

To get a better idea of the big picture, let’s take a look at the display length of all 92,600 snippets (with non-zero length), split into 20-character buckets (0-20, 21-40, etc.):

Most of the snippets (62.1%) cut off as expected, right in the 141-160 character bucket. Of course, some snippets were shorter than that, and didn’t need to be cut off, and some broke the rules. About 1% (1,010) of the snippets in our data set measured 200 or more characters. That’s not a huge number, but it’s enough to take seriously.

That 141-160 character bucket is dwarfing everything else, so let’s zoom in a bit on the cut-off range, and just look at snippets in the 120-200 character range (in this case, by 5-character bins):

Zooming in, the bulk of the snippets are displaying at lengths between about 146-165 characters. There are plenty of exceptions to the 155-160 character guideline, but for the most part, they do seem to be exceptions.

Finally, let’s zoom in on the rule-breakers. This is the distribution of snippets displaying 191+ characters, bucketed in 10-character bins (191-200, 201-210, etc.):

Please note that the Y-axis scale is much smaller than in the previous 2 graphs, but there is a pretty solid spread, with a decent chunk of snippets displaying more than 300 characters.

Without looking at every original meta description tag, it’s very difficult to tell exactly how many snippets have been truncated by Google, but we do have a proxy. Snippets that have been truncated end in an ellipsis (…), which rarely appears at the end of a natural description. In this data set, more than half of all snippets (52.8%) ended in an ellipsis, so we’re still seeing a lot of meta descriptions being cut off.
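
If you want to reproduce this kind of analysis on your own snippet data, here is a minimal sketch in Python; the placeholder snippet list and the 20-character bucketing are assumptions standing in for the actual MozCast data collection:

```python
from collections import Counter

# snippets: display strings collected from your own SERP crawl (placeholder data here).
snippets = [
    "Short description.",
    "A longer snippet that Google truncated somewhere around the usual display limit ...",
]

non_zero = [s for s in snippets if len(s) > 0]

# Bucket display lengths into 20-character bins (1-20, 21-40, ...).
buckets = Counter(((len(s) - 1) // 20 + 1) * 20 for s in non_zero)
for upper in sorted(buckets):
    print(f"{upper - 19}-{upper} chars: {buckets[upper]} snippets")

# Proxy for truncation: snippets ending in an ellipsis were almost certainly cut off.
truncated = sum(1 for s in non_zero if s.rstrip().endswith(("...", "\u2026")))
print(f"Ellipsis-terminated: {truncated / len(non_zero):.1%}")
```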

I should add that, unlike titles/headlines, it isn’t clear whether Google is cutting off snippets by pixel width or character count, since that cut-off is done on the server-side. In most cases, Google will cut before the end of the second line, but sometimes they cut well before this, which could suggest a character-based limit. They also cut off at whole words, which can make the numbers a bit tougher to interpret.

The Cutting Room Floor

There’s another difficulty with telling exactly how many meta descriptions Google has modified – some edits are minor, and some are major. One minor edit is when Google adds some additional information to a snippet, such as a date at the beginning. Here’s an example (from a search for “chicken pox”):

With the date (and minus the ellipsis), this snippet is 164 characters long, which suggests Google isn’t counting the added text against the length limit. What’s interesting is that the rest comes directly from the meta description on the site, except that the site’s description starts with “Chickenpox.” and Google has removed that keyword. As a human, I’d say this matches the meta description, but a bot has a very hard time telling a minor edit from a complete rewrite.

Another minor rewrite occurs in snippets that start with search result counts:

Here, we’re at 172 characters (with spaces and minus the ellipsis), and Google has even let this snippet roll over to a third line. So, again, it seems like the added information at the beginning isn’t counting against the length limit.

All told, 11.6% of the snippets in our data set had some kind of Google-generated data, so this type of minor rewrite is pretty common. Even if Google honors most of your meta description, you may see small edits.

Let’s look at our big winner, the 372-character description. Here’s what we saw in the snippet:

Jan 26, 2015 – Health• Diabetes Prevention: Multiple studies have shown a correlation between apple cider vinegar and lower blood sugar levels. … • Weight Loss: Consuming apple cider vinegar can help you feel more full, which can help you eat less. … • Lower Cholesterol: … • Detox: … • Digestive Aid: … • Itchy or Sunburned Skin: … • Energy Boost:1 more items

So, what about the meta description? Here’s what we actually see in the tag:

Were you aware of all the uses of apple cider vinegar? From cleansing to healing, to preventing diabetes, ACV is a pantry staple you need in your home.

That’s a bit more than just a couple of edits. So, what’s happening here? Well, there’s a clue on that same page, where we see yet another rule-breaking snippet:

You might be wondering why this snippet is any more interesting than the other one. If you could see the top of the SERP, you’d know why, because it looks something like this:

Google is automatically extracting list-style data from these pages to fuel the expansion of the Knowledge Graph. In one case, that data is replacing a snippet and going directly into an answer box, but they’re performing the same translation even for some other snippets on the page.

So, does every 2nd-generation answer box yield long snippets? After 3 hours of inadvisable MySQL queries, I can tell you that the answer is a resounding “probably not”. You can have 2nd-gen answer boxes without long snippets and you can have long snippets without 2nd-gen answer boxes, but there does appear to be a connection between long snippets and the Knowledge Graph in some cases.

One interesting connection is that Google has begun bolding keywords that seem like answers to the query (and not just synonyms for the query). Below is an example from a search for “mono symptoms”. There’s an answer box for this query, but the snippet below is not from the site in the answer box:

Notice the bolded words – “fatigue”, “sore throat”, “fever”, “headache”, “rash”. These aren’t synonyms for the search phrase; these are actual symptoms of mono. This data isn’t coming from the meta description, but from a bulleted list on the target page. Again, it appears that Google is trying to use the snippet to answer a question, and has gone well beyond just matching keywords.

Just for fun, let’s look at one more, where there’s no clear connection to the Knowledge Graph. Here’s a snippet from a search for “sons of anarchy season 4”:

This page has no answer box, and the information extracted is odd at best. The snippet bears little or no resemblance to the site’s meta description. The number string at the beginning comes out of a rating widget, and some of the text isn’t even clearly available on the page. This seems to be an example of Google acknowledging IMDb as a high-authority site and desperately trying to match any text they can to the query, resulting in a Frankenstein’s snippet.

The Final Verdict

If all of this seems confusing, that’s probably because it is. Google is taking a lot more liberties with snippets these days: to better match queries, to add details they feel are important, and to help build and support the Knowledge Graph.

So, let’s get back to the original question – is it time to revise the 155(ish) character guideline? My gut feeling is: not yet. To begin with, the vast majority of snippets are still falling in that 145-165 character range. In addition, the exceptions to the rule are not only atypical situations, but in most cases those long snippets don’t seem to represent the original meta description. In other words, even if Google does grant you extra characters, they probably won’t be the extra characters you asked for in the first place.

Many people have asked: “How do I make sure that Google shows my meta description as is?” I’m afraid the answer is: “You don’t.” If this is very important to you, I would recommend keeping your description below the 155-character limit, and making sure that it’s a good match to your target keyword concepts. I suspect Google is going to take more liberties with snippets over time, and we’re going to have to let go of our obsession with having total control over the SERPs.


Give It Up for Our MozCon 2015 Community Speakers

Posted by EricaMcGillivray

Super thrilled that we’re able to announce this year’s community speakers for MozCon, July 13-15th in Seattle!

Wow. Each year I feel like I say the pool keeps getting more and more talented, but it’s the truth! We had more quality pitches this year than in the past, and quantity-wise, there were 241, around 100 more entries than in previous years. Let me tell you, many of the review committee members filled our email thread with amazement at this.

And even though we had an unprecedented six slots, the choices seemed even tougher!

241 pitches
Let that number sink in for a little while.

Because we get numerous questions about what makes a great pitch, I wanted to share both information about the speakers and their great pitches—with some details removed for spoilers. (We’re still working with each speaker to polish and finalize their topic.) I’ve also included my or Matt Roney‘s own notes on each one from when we read them without knowing who the authors were.

Please congratulate our MozCon 2015 community speakers!

Adrian Vender

Adrian is the Director of Analytics at IMI and a general enthusiast of coding and digital marketing. He’s also a life-long drummer and lover of music. Follow him at @adrianvender.

Adrian’s pitch:

Content Tracking with Google Tag Manager

While marketers have matured in their use of web analytics tools, our ability to measure how users interact with our sites’ content needs improvement. Users are interacting with dynamic content that just isn’t captured in a pageview. While there are JavaScript tricks to help track these details, working with IT to place new code is usually the major hurdle that stops us.

Finally, Google Tag Manager is that bridge to advanced content analysis. GTM may appear technical, but it can easily be used by any digital marketer to track almost any action on a site. My goal is to make ALL attendees users of GTM.

My talk will cover the following GTM concepts:

[Adrian lists 8 highly-actionable tactics he’ll cover.]

I’ll share a client example of tracking content interaction in GA. I’ll also share a link to a GTM container file that can help people pre-load the above tag templates into their own GTM.

Matt’s notes: Could be good. I know a lot of people have questions about Tag Manager, and the ubiquity of GA should help it be pretty well-received.


Chris Dayley

Chris is a digital marketing expert and owner of Dayley Conversion. His company provides full-service A/B testing for businesses, including design, development, and test execution. Follow him at @chrisdayley.

Chris’ pitch:

I would like to present a super actionable 15 minute presentation focused on the first two major steps businesses should take to start A/B testing:

1. Radical Redesign Testing

2. Iterative Testing (Test EVERYTHING)

I am one of the few CROs out there who recommends that businesses start with a radical redesign test. My reasoning for doing so is that most businesses have done absolutely no testing on their current website, so the current landing page/website really isn’t a “best practice” design yet.

I will show several case studies where clients saw more than a 50% lift in conversion rates just from this first step of radical redesign testing, and will offer several tips for how to create a radical redesign test. Some of the tips include:

[Chris lists three direct and interesting tips he’ll share.]

Next I suggest moving into the iterative phase.

I will show several case studies of how to move through iterative testing so you eventually test every element on your page.

Erica’s notes: Direct, interesting, and with promise of multiple case studies.


Duane Brown

Duane is a digital marketer with 10 years’ experience having lived and worked in five cities across three continents. He’s currently at Unbounce. When not working, you can find Duane traveling to some far-flung location around the world to eat food and soak up the culture. Follow him at @DuaneBrown.

Duane’s pitch:

What Is Delightful Remarketing & How You Can Do It Too

A lot of people find remarketing creepy and weird. They don’t get why they are seeing those ads around the internet… let alone how to make them stop showing.

This talk will focus on the difference between remarketing & creating delightful remarketing that can help grow the revenue & profit at a company and not piss customers off. 50% of US marketers don’t use remarketing according to eMarketer (2013).

– [Duane’s direct how-to for e-commerce customers.] Over 60% of customers abandon a shopping cart each year: http://baymard.com/lists/cart-abandonment-rate (3 minute)

– Cover a SaaS company using retargeting to [Duane’s actionable item]. This remarketing helps show your products sticky features while showing off your benefits (3 minute)

– The Dos: [Duane’s actionable tip], a variety of creative & a dedicated landing page creates delightful remarketing that grows revenue (3 minute)

– Wrap up and review main points. (2 minutes)

Matt’s notes: Well-detailed, an area in which there’s a lot of room for improvement.


Gianluca Fiorelli

Moz Associate, official blogger for StateofDigital.com and well-known international SEO and inbound strategist, Gianluca works in the digital marketing industry, but he still believes that he just knows that he knows nothing. Follow him at @gfiorelli1.

Gianluca’s pitch:

Unusual Sources for Keyword and Topical Research

A big percentage of SEOs equate Keyword and Topical Research with using Keyword Planner and Google Suggest.

However, using only those, we cannot achieve a really deep knowledge of the interests, psychology and language of our target.

In this talk, I will present unusual sources and unnoticed features of very well-known tools, and offer a final example based on a true story.

Arguments touched in the speech (not necessarily in this order):

[Gianluca lists seven how-tos and one unique case study.]

Erica’s notes: Theme of Google not giving good keyword info. Lots of unique actionable points and resources. Will work in 15 minute time limit.


Ruth Burr Reedy

Ruth is the head of on-site SEO for BigWing Interactive, a full-service digital marketing agency in Oklahoma City, OK. At BigWing, she manages a team doing on-site, technical, and local SEO. Ruth has been working in SEO since 2006. Follow her at @ruthburr.

Ruth’s pitch:

Get Hired to Do SEO

This talk will go way beyond “just build your own website” and talk about specific ways SEOs can build evidence of their skills across the web, including:

[Ruth lists 7 how-tos with actionable examples.]

All in a funny, actionable, beautiful, easy-to-understand get-hired masterpiece.

Erica’s notes: Great takeaways. Wanted to do a session about building your resume as a marketer for a while.


Stephanie Wallace

Stephanie is director of SEO at Nebo, a digital agency in Atlanta. She helps clients navigate the ever-changing world of SEO by understanding their audience and helping them create a digital experience that both users and Google appreciate. Follow her at @SWallaceSEO.

Stephanie’s pitch:

Everyone knows PPC and SEO complement one another – increased visibility in search results helps increase perceived authority and drive more clickthroughs to your site overall. But are you actively leveraging the wealth of PPC data available to build on your existing SEO strategy? The key to effectively using this information lies in understanding how to test SEO tactics and how to apply the results to your on-page strategies. This session will delve into actionable strategies for using PPC campaign insights to influence on-page SEO and content strategies. Key takeaways include:

[Stephanie lists four how-tos.]

Erica’s notes: Nice and actionable. Like this a lot.


As mentioned, we had 241 entries, and many of them were stage-quality. Notable runners-up included AJ Wilcox, Ed Reese, and Daylan Pearce, and a big pat on the back to all those who tossed their hats in the ring.

Also, a huge thank you to my fellow selection committee members for 2015: Charlene Inoncillo, Cyrus Shepard, Danie Launders, Jen Lopez, Matt Roney, Rand Fishkin, Renea Nielsen, and Trevor Klein.

Buy your ticket now


Reblogged 3 years ago from tracking.feedpress.it

Windsor Web Dynamic | Robot Meta Tags and what they mean to SEO.

A short description of the various robot meta tags, and what they mean to SEO. Brought to you by http://windsorwebdynamic.com.

Reblogged 3 years ago from www.youtube.com

The 3 Most Common SEO Problems on Listings Sites

Posted by Dom-Woodman

Listings sites have a very specific set of search problems that you don’t run into everywhere else. By day I’m one of Distilled’s analysts, but by night I run a job listings site, teflSearch. So, for my first Moz Blog post I thought I’d cover the three search problems with listings sites that I spent far too long agonising over.

Quick clarification time: What is a listings site (i.e. will this post be useful for you)?

The classic listings site is Craigslist, but plenty of other sites act like listing sites:

  • Job sites like Monster
  • E-commerce sites like Amazon
  • Matching sites like Spareroom

1. Generating quality landing pages

The landing pages on listings sites are incredibly important. These pages are usually the primary drivers of converting traffic, and they’re usually generated automatically (or are occasionally custom category pages).

For example, if I search “Jobs in Manchester“, you can see nearly every result is an automatically generated landing page or category page.

There are three common ways to generate these pages (occasionally a combination of more than one is used):

  • Faceted pages: These are generated by facets—groups of preset filters that let you filter the current search results. They usually sit on the left-hand side of the page.
  • Category pages: These pages are listings which have already had a filter applied and can’t be changed. They’re usually custom pages.
  • Free-text search pages: These pages are generated by a free-text search box.

Those definitions are still a bit general; let’s clear them up with some examples:

Amazon uses a combination of categories and facets. If you click on browse by department you can see all the category pages. Then on each category page you can see a faceted search. Amazon is so large that it needs both.

Indeed generates its landing pages through free text search, for example if we search for “IT jobs in manchester” it will generate: IT jobs in manchester.

teflSearch generates landing pages using just facets. The jobs in China landing page is simply a facet of the main search page.

Each method has its own search problems when used for generating landing pages, so let’s tackle them one by one.


Aside

Facets and free text search will typically generate pages with parameters e.g. a search for “dogs” would produce:

www.mysite.com?search=dogs

But to make the URL user friendly sites will often alter the URLs to display them as folders

www.mysite.com/results/dogs/

These are still just ordinary free text searches and facets; the URLs are simply user-friendly. (They’re a lot easier to work with in robots.txt too!)
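
As a quick illustration of that rewriting step, here is a minimal sketch in Python; the domain, the `search` parameter name and the `/results/` folder are hypothetical:

```python
import re
from urllib.parse import urlparse, parse_qs

def friendly_search_url(url):
    """Rewrite www.mysite.com?search=dogs as www.mysite.com/results/dogs/."""
    parsed = urlparse(url)
    query = parse_qs(parsed.query).get("search", [""])[0]
    slug = re.sub(r"[^a-z0-9]+", "-", query.lower()).strip("-")
    return f"{parsed.scheme}://{parsed.netloc}/results/{slug}/"

print(friendly_search_url("https://www.mysite.com?search=dogs"))
# -> https://www.mysite.com/results/dogs/
```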


Free search (& category) problems

If you’ve decided the base of your search will be a free text search, then we’ll have two major goals:

  • Goal 1: Helping search engines find your landing pages
  • Goal 2: Giving them link equity.

Solution

Search engines won’t use search boxes and so the solution to both problems is to provide links to the valuable landing pages so search engines can find them.

There are plenty of ways to do this, but two of the most common are:

  • Category links alongside a search

    Photobucket uses a free text search to generate pages, but if we look at an example search for photos of dogs, we can see the categories which define the landing pages along the right-hand side. (This is also an example of URL-friendly searches!)

  • Putting the main landing pages in a top-level menu

    Indeed also uses free text to generate landing pages, and they have a browse jobs section which contains the URL structure to allow search engines to find all the valuable landing pages.

Breadcrumbs are also often used in addition to the two above and in both the examples above, you’ll find breadcrumbs that reinforce that hierarchy.

Category (& facet) problems

Categories, because they tend to be custom pages, don’t actually have many search disadvantages. Instead it’s the other attributes that make them more or less desirable. You can create them for the purposes you want and so you typically won’t have too many problems.

However, if you also use a faceted search in each category (like Amazon) to generate additional landing pages, then you’ll run into all the problems described in the next section.

At first, facets seem great: an easy way to generate multiple strong, relevant landing pages without doing much at all. The problems appear because people don’t put limits on facets.

Let’s take the job page on teflSearch. We can see it has 18 facets, each with many options. Some of these options will generate useful landing pages:

The China facet in countries will generate “Jobs in China” that’s a useful landing page.

On the other hand, the “Conditional Bonus” facet will generate “Jobs with a conditional bonus,” and that’s not so great.

We can also see that the options within a single facet aren’t always useful. As of writing, I have a single job available in Serbia. That’s not a useful search result, and the poor user engagement combined with the tiny amount of content will be a strong signal to Google that it’s thin content. Depending on the scale of your site it’s very easy to generate a mass of poor-quality landing pages.

Facets generate other problems too. The primary one being they can create a huge amount of duplicate content and pages for search engines to get lost in. This is caused by two things: The first is the sheer number of possibilities they generate, and the second is because selecting facets in different orders creates identical pages with different URLs.

We end up with four goals for our facet-generated landing pages:

  • Goal 1: Make sure our searchable landing pages are actually worth landing on, and that we’re not handing a mass of low-value pages to the search engines.
  • Goal 2: Make sure we don’t generate multiple copies of our automatically generated landing pages.
  • Goal 3: Make sure search engines don’t get caught in the metaphorical plastic six-pack rings of our facets.
  • Goal 4: Make sure our landing pages have strong internal linking.

The first goal needs to be set internally; you’re always going to be the best judge of the number of results that need to be present on a page in order for it to be useful to a user. I’d argue you can rarely ever go below three, but it depends both on your business and on how much content fluctuates on your site, as the useful landing pages might also change over time.

We can solve the next three problems as a group. There are several possible solutions depending on what skills and resources you have access to; here are two possible solutions:

Category/facet solution 1: Blocking the majority of facets and providing external links
  • Easiest method
  • Good if your valuable category pages rarely change and you don’t have too many of them.
  • Can be problematic if your valuable facet pages change a lot

Nofollow all your facet links, and noindex and block category pages which aren’t valuable or are deeper than x facet/folder levels into your search using robots.txt.

You set x by looking at where your useful facet pages with search volume exist. So, for example, if you have three facets for televisions: manufacturer, size, and resolution, and even combinations of all three have multiple results and search volume, then you could index everything up to three levels.

On the other hand, if people are searching for three levels (e.g. “Samsung 42″ Full HD TV”) but you only have one or two results for three-level facets, then you’d be better off indexing two levels and letting the product pages themselves pick up long-tail traffic for the third level.

If you have valuable facet pages that exist deeper than one facet or folder into your search, then this creates some duplicate content problems, dealt with in the aside “Indexing more than one level of facets” below.

The immediate problem with this set-up, however, is that in one stroke we’ve removed most of the internal links to our category pages, and by no-following all the facet links, search engines won’t be able to find your valuable category pages.

In order to re-create the linking, you can add a top-level drop-down menu to your site containing the most valuable category pages, add category links elsewhere on the page, or create a separate part of the site with links to the valuable category pages.

You can see the top-level drop-down menu on teflSearch (it’s the search jobs menu); the other two examples are demonstrated by Photobucket and Indeed, respectively, in the previous section.

The big advantage of this method is how quick it is to implement: it doesn’t require any fiddly internal logic, and adding an extra menu option is usually minimal effort.

Category/facet solution 2: Creating internal logic to work with the facets

  • Requires new internal logic
  • Works for large numbers of category pages with value that can change rapidly

There are four parts to the second solution:

  1. Select valuable facet categories and allow those links to be followed. No-follow the rest.
  2. No-index all pages that return a number of items below the threshold for a useful landing page
  3. No-follow all facets on pages with a search depth greater than x.
  4. Block all facet pages deeper than x level in robots.txt

As with the last solution, x is set by looking at where your useful facet pages exist that have search volume (full explanation in the first solution), and if you’re indexing more than one level you’ll need to check out the aside below to see how to deal with the duplicate content it generates.
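
A minimal sketch of that logic in Python might look like the following; the facet whitelist, thresholds and function name are assumptions, not teflSearch’s actual implementation:

```python
# Sketch of solution 2: per generated facet page, decide whether its facet links
# should be followed, whether the page should be indexed, and whether robots.txt
# should block it. The facet whitelist and thresholds are illustrative only.

VALUABLE_FACETS = {"country", "city"}  # facets worth turning into landing pages
MIN_RESULTS = 3                        # below this, the page is too thin to index
MAX_DEPTH = 2                          # "x": the deepest facet level worth crawling

def facet_page_directives(selected_facets, result_count):
    depth = len(selected_facets)
    return {
        "follow_facet_links": depth <= MAX_DEPTH and all(f in VALUABLE_FACETS for f in selected_facets),
        "index": result_count >= MIN_RESULTS and depth <= MAX_DEPTH,
        "robots_block": depth > MAX_DEPTH,
    }

print(facet_page_directives(["country"], result_count=40))           # useful landing page
print(facet_page_directives(["conditional_bonus"], result_count=1))  # thin page: noindex, nofollow
```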


Aside: Indexing more than one level of facets

If you want more than one level of facets to be indexable, then this will create certain problems.

Suppose you have a facet for size:

  • Televisions: Size: 46″, 44″, 42″

And want to add a brand facet:

  • Televisions: Brand: Samsung, Panasonic, Sony

This will create duplicate content because the search engines will be able to follow your facets in both orders, generating:

  • Television – 46″ – Samsung
  • Television – Samsung – 46″

You’ll have to either rel canonical your duplicate pages with another rule or set up your facets so they create a single unique URL.
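
One simple way to get a single unique URL is to always emit the facets in a fixed order. Here is a minimal sketch in Python; the URL structure is hypothetical:

```python
# Sketch: build facet URLs in a fixed, alphabetical order so that
# "46 inch then Samsung" and "Samsung then 46 inch" resolve to the same URL.

def canonical_facet_url(base, facets):
    """facets: dict of facet name -> selected value, e.g. {'size': '46', 'brand': 'samsung'}."""
    parts = [f"{name}-{value}" for name, value in sorted(facets.items())]
    return base.rstrip("/") + "/" + "/".join(parts) + "/"

print(canonical_facet_url("/televisions", {"size": "46", "brand": "samsung"}))
print(canonical_facet_url("/televisions", {"brand": "samsung", "size": "46"}))
# Both print /televisions/brand-samsung/size-46/
```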

You also need to be aware that each followable facet you add will multiply with each other followable facet, and it’s very easy to generate a mass of pages for search engines to get stuck in. Depending on your setup, you might need to block more paths in robots.txt or set up more logic to prevent them being followed.

Letting search engines index more than one level of facets adds a lot of possible problems; make sure you’re keeping track of them.


2. User-generated content cannibalization

This is a common problem for listings sites (assuming they allow user-generated content). If you’re reading this as an e-commerce site that only lists its own products, you can skip this one.

As we covered in the first area, category pages on listings sites are usually the landing pages aiming for the valuable search terms, but as your users start generating pages they can often create titles and content that cannibalise your landing pages.

Suppose you’re a job site with a category page for PHP Jobs in Greater Manchester. If a recruiter then creates a job advert for PHP Jobs in Greater Manchester for the 4 positions they currently have, you’ve got a duplicate content problem.

This is less of a problem when your site is large and your categories are mature, as it will be obvious to any search engine which pages are your high-value category pages. But at the start, when you’re lacking authority and individual listings might contain more relevant content than your own search pages, this can be a problem.

Solution 1: Create structured titles

Set the <title> differently from the on-page title. Depending on the variables you have available, you can set the title tag programmatically, without changing the on-page title, using other information given by the user.

For example, on our imaginary job site, suppose the recruiter also provided the following information in other fields:

  • The no. of positions: 4
  • The primary area: PHP Developer
  • The name of the recruiting company: ABC Recruitment
  • Location: Manchester

We could set the <title> pattern to be: *No of positions* *The primary area* with *recruiter name* in *Location* which would give us:

4 PHP Developers with ABC Recruitment in Manchester
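
Here is a minimal sketch of building that <title> programmatically in Python; the field names are hypothetical, and the pattern is the one described above:

```python
# Sketch: build the <title> from structured fields supplied by the recruiter,
# rather than from their free-text advert title.

def listing_title(positions, role, recruiter, location):
    plural = "s" if positions != 1 else ""
    return f"{positions} {role}{plural} with {recruiter} in {location}"

print(listing_title(4, "PHP Developer", "ABC Recruitment", "Manchester"))
# -> 4 PHP Developers with ABC Recruitment in Manchester
```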

Setting a <title> tag allows you to target long-tail traffic by constructing detailed descriptive titles. In our above example, imagine the recruiter had specified “Castlefield, Manchester” as the location.

All of a sudden, you’ve got a perfect opportunity to pick up long-tail traffic for people searching in Castlefield in Manchester.

On the downside, you lose the ability to pick up long-tail traffic where your users have chosen keywords you wouldn’t have used.

For example, suppose Manchester has a jobs program called “Green Highway.” A job advert title containing “Green Highway” might pick up valuable long-tail traffic. Being able to discover this, however, and find a way to fit it into a dynamic title is very hard.

Solution 2: Use regex to noindex the offending pages

Perform a regex (or string contains) search on your listings’ titles and no-index the ones which cannibalise your main category pages.

If it’s not possible to construct titles with variables, or your users provide a lot of additional long-tail traffic with their own titles, then this is a great option. On the downside, you miss out on possible structured long-tail traffic that you might’ve been able to aim for.
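
Here is a minimal sketch of that check in Python; the category patterns are illustrative, and how you actually emit the noindex directive depends on your platform:

```python
import re

# Category page titles we want to protect from cannibalisation (illustrative).
CATEGORY_PATTERNS = [
    re.compile(r"php\s+jobs\s+in\s+greater\s+manchester", re.IGNORECASE),
]

def should_noindex(listing_title):
    """True if a user-generated listing title collides with a category page."""
    return any(p.search(listing_title) for p in CATEGORY_PATTERNS)

print(should_noindex("PHP Jobs in Greater Manchester - 4 positions"))  # True -> add noindex
print(should_noindex("Senior Ruby Developer, Leeds"))                  # False -> leave indexable
```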

Solution 3: De-index all your listings

It may seem rash, but if you’re a large site with a huge number of very similar or low-content listings, you might want to consider this. There is no common standard: some sites like Indeed choose to no-index all their job adverts, whereas other sites like Craigslist index all their individual listings because they’ll drive long-tail traffic.

Don’t de-index them all lightly!

3. Constantly expiring content

Our third and final problem is that user-generated content doesn’t last forever. Particularly on listings sites, it’s constantly expiring and changing.

For most use cases I’d recommend 301’ing expired content to a relevant category page, with a message triggered by the redirect notifying the user of why they’ve been redirected. It typically comes out as the best combination of search and UX.
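
A minimal, framework-agnostic sketch of that behaviour in Python; the data shapes, URL pattern and `expired` query flag are assumptions:

```python
# Sketch: when a listing has expired, 301 it to the most relevant category page,
# carrying a flag so the category template can show a short "this listing has
# expired" notice. Data shapes and URL patterns are illustrative.

def respond_to_listing(listing):
    if listing["active"]:
        return {"status": 200, "body": listing["html"]}
    return {
        "status": 301,
        "location": listing["category_url"] + "?expired=1",
    }

expired = {"active": False, "category_url": "/jobs/php/manchester/", "html": ""}
print(respond_to_listing(expired))
# -> {'status': 301, 'location': '/jobs/php/manchester/?expired=1'}
```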

For more information or advice on how to deal with the edge cases, there’s a previous Moz blog post on how to deal with expired content which I think does an excellent job of covering this area.

Summary

In summary, if you’re working with listings sites, all three of the following need to be kept in mind:

  • How are the landing pages generated? If they’re generated using free text or facets, have the potential problems been solved?
  • Is user-generated content cannibalising the main landing pages?
  • How has constantly expiring content been dealt with?

Good luck listing, and if you’ve come across any other tricky problems or solutions while working on listings sites, let’s chat about them in the comments below!


Reblogged 3 years ago from tracking.feedpress.it

Marketing ThinkTank Webinar Series: Understanding Dynamic Google SEO with Loren Baker

Watch our Understanding Dynamic Google SEO Local, Device, Social, & News webinar with Loren Baker as presenter and Kelsey Jones as host. Slides will also be available on Slideshare soon. To…

Reblogged 3 years ago from www.youtube.com

How We Fixed the Internet (Ok, an Answer Box)

Posted by Dr-Pete

Last year, Google expanded the Knowledge Graph to use data extracted (*cough* scraped) from the index to create answer boxes. Back in October, I wrote about a failed experiment. One of my posts, an odd dive into Google’s revenue, was being answer-fied for the query “How much does Google make?”:

Objectively speaking, even I could concede that this wasn’t a very good answer in 2014. I posted it on Twitter, and David Iwanow asked the inevitable question:

Enthusiasm may have gotten the best of us, a few more people got involved (like my former Moz colleague Ruth Burr Reedy), and suddenly we were going to fix this once and for all:

There Was Just One Problem

I updated the post, carefully rewriting the first paragraph to reflect the new reality of Google’s revenue. I did my best to make the change user-friendly, adding valuable information but not disrupting the original post. I did, however, completely replace the old text that Google was scraping.

Within less than a day, Google had re-cached the content, and I just had to wait to see the new answer box. So, I waited, and waited… and waited. Two months later, still no change. Some days, the SERP showed no answer box at all (although I’ve since found these answer boxes are very dynamic), and I was starting to wonder if it was all a mistake.

Then, Something Happened

Last week, months after I had given up, I went to double-check this query for entirely different reasons, and I saw the following:

Google had finally updated the answer box with the new text, and they had even pulled an image from the post. It was a strange choice of images, but in fairness, it was a strange post.

Interestingly, Google also added the publication date of the post, perhaps recognizing that outdated answers aren’t always useful. Unfortunately, this doesn’t reflect the timing of the new content, but that’s understandable – Google doesn’t have easy access to that data.

It’s interesting to note that sometimes Google shows the image, and sometimes they don’t. This seems to be independent of whether the SERP is personalized or incognito. Here’s a capture of the image-free version, along with the #1 organic ranking:

You’ll notice that the #1 result is also my Moz post, and that result has an expanded meta description. So, the same URL is essentially double-dipping this SERP. This isn’t always the case – answers can be extracted from URLs that appear lower on page 1 (although almost always page 1, in my experience). Anecdotally, it’s also not always the case that the organic result ends up getting an expanded meta description.

However, it definitely seems that some of the quality signals driving organic ranking and expanded meta descriptions are also helping Google determine whether a query deserves a direct answer. Put simply, it’s not an accident that this post was chosen to answer this question.

What Does This Mean for You?

Let’s start with the obvious – Yes, the v2 answer boxes (driven by the index, not Freebase/WikiData) can be updated. However, the update cycle is independent of the index’s refresh cycle. In other words, just because a post is re-cached, it doesn’t mean the answer box will update. Presumably, Google is creating a second Knowledge Graph, based on the index, and this data is only periodically updated.

It’s also entirely possible that updating could cause you to lose an answer box, if the new data weren’t a strong match to the question or the quality of the content came into question. Here’s an interesting question – on a query where a competitor has an answer box, could you change your own content enough to either replace them or knock out the answer box altogether? We are currently testing this question, but it may be a few more months before we have any answers.

Another question is what triggers this style of answer box in the first place? Eric Enge has an in-depth look at 850,000 queries that’s well worth your time, and in many cases Google is still triggering on obvious questions (“how”, “what”, “where”, etc.). Nouns that could be interpreted as ambiguous can also trigger the new answer boxes. For example, a search for “ruby” is interpreted by Google as roughly meaning “What is Ruby?”:

This answer box also triggers “Related topics” that use content pulled from other sites but drive users to more Google searches. The small, gray links are the source sites. The much more visible, blue links are more Google searches.

Note that these also have to be questions (explicit or implied) that Google can’t answer with their curated Knowledge Graph (based on sources like Freebase and WikiData). So, for example, the question “When is Mother’s Day?” triggers an older-style answer:

Sites offering this data aren’t going to have a chance to get attribution, because Google essentially already owns the answer to this question as part of their core Knowledge Graph.

Do You Want to Be An Answer?

This is where things get tricky. At this point, we have no clear data on how these answer boxes impact CTR, and it’s likely that the impact depends a great deal on the context. I think we’re facing a certain degree of inevitability – if Google is going to list an answer, better it’s your answer than someone else’s, IMO. On the other hand, what if that answer is so complete that it renders your URL irrelevant? Consider, for example, the SERP for “how to make grilled cheese”:

Sorry, Food Network, but making a grilled cheese sandwich isn’t really that hard, and this answer box doesn’t leave much to the imagination. As these answers get more and more thorough, expect CTRs to fall.

For now, I’d argue that it’s better to have your link in the box than someone else’s, but that’s cold comfort in many cases. These new answer boxes represent what I feel is a dramatic shift in the relationship between Google and webmasters, and they may be tipping the balance. For now, we can’t do much but wait, see, and experiment.


Reblogged 3 years ago from tracking.feedpress.it