Meet Dan Morris, Executive Vice President, North America

  1. Why did you decide to come to dotmailer?

The top three reasons were People, Product and Opportunity. I met the people who make up our business and heard their stories from the past 18 years, learned about the platform and market-leading status they had built in the UK, and saw that I could add value with my U.S. high-growth business experience. I’ve been working with marketers, entrepreneurs and business owners for years across a series of different roles, and saw that I could apply what I’d learned from that and the start-up space to dotmailer’s U.S. operation. dotmailer has had clients in the U.S. for 12 years and we’re positioned to grow the user base of our powerful and easy-to-use platform significantly. I knew I could make a difference here, and what closed the deal for me was the people. Every single person I’ve met is deeply committed to the business, to the success of our customers and to making our solution simple and efficient. We’re a great group of passionate people and I’m proud to have joined the dotfamily.

Dan Morris, dotmailer’s EVP for North America in the new NYC office

  2. Tell us a bit about your new role

dotmailer has been in business and in this space for more than 18 years. We were a web agency, then a systems integrator, and we got into the email business that way, ultimately building the dotmailer platform thousands of people use daily. This means we know this space better than anyone, and we have solutions that align closely with our customers’ needs and are flexible enough to grow with them. My role is to take all that experience and the platform and grow our U.S. presence.

My early focus has been on identifying the right team to execute our growth plans. We want to be the market leader in the U.S. in the next three years – just like we’ve done in the UK – so getting the right people in the right spots was critical. We quickly assessed the skills of the U.S. team and made the changes necessary to provide the right focus on customer success.

Next, we set out to completely rebuild dotmailer’s commercial approach in the U.S. We simplified our offers to three bundles, so that pricing and what’s included in each bundle is transparent to our customers. We’ve heard great things about this already from clients and partners. We’re also increasing our resources on customer success and support. We’re intensely focused on ease of on-boarding, ease of use and speed of use, and we consistently hear how easy and smooth dotmailer’s tools are to use. That’s key for us – when you buy a dotmailer solution, we want to onboard you quickly and make sure you have all of your questions answered right away so that you can move right into using it. Customers are raving about this, so we know it’s working well.

  3. What early accomplishments are you most proud of from your dotmailer time so far?

I’ve been at dotmailer for eight months now and I’m really proud of all we’ve accomplished together.  We spent a lot of time assessing where we needed to restructure and where we needed to invest.  We made the changes we needed, invested in our partner program, localized tech support, customer on-boarding and added customer success team members.  We have the right people in the right roles and it’s making a difference.  We have a commercial approach that is clear with the complete transparency that we wanted to provide our customers.  We’ve got a more customer-focused approach and we’re on-boarding customers quickly so they’re up and running faster.  We have happier customers than ever before and that’s the key to everything we do.

  4. You’ve moved the U.S. team to a new office. Can you tell us why and a bit about the new space?

I thought it was very important to create a NY office space that was tied to our branding and other offices around the world, and that also had its own NY energy and culture for our team here – to foster collaboration and to have some fun. It was also important for us to have a flexible space where we could welcome customers, partners and resellers, and also hold classes and dotUniversity training sessions. I’m really grateful to the team who worked on the space because it really reflects our team and what we care about. At any given time, you’ll see a training session happening, the team collaborating, a customer dropping in to ask a few questions or a partner dropping in to work from here. We love our new NYC space.

We had a spectacular reception this week to celebrate the opening of this office with customers, partners and the dotmailer leadership team in attendance. Please take a look at the photos from our event on Facebook.

Guests and the team at dotmailer’s new NYC office warming party

  5. What did you learn from your days in the start-up space that you’re applying at dotmailer?

The start-up space is a great place to learn. You have to know where every dollar is going and coming from, so every choice you make needs to be backed up with a business case for that investment. You try lots of different things to see if they’ll work, and you’re ready to turn those tactics up or down quickly based on an assessment of the results. You also learn that things don’t have to stay the way they are – they can change if you make them change. You always listen and learn – from customers, partners, industry veterans, advisors and others – to better understand what’s working and what isn’t. dotmailer has been in business for 18 years now, so there are many great contributors across the business who know how things have worked and yet are always keen to keep improving. I am constantly in listening and learning mode so that I can understand all of the unique perspectives our team brings and what we need to act on.

  6. What are your plans for the U.S. and the sales function there?

On our path to being the market leader in the U.S., I’m focused on three things going forward:

  1. I want our customers to be truly happy. It’s already a big focus in the dotmailer organization – and we’re working hard to understand their challenges and goals so we can take product and service to the next level.
  2. Creating an even more robust program around partners and resellers, further building out our channel to continuously improve sales and customer service programs. We recently launched a certification program to ensure partners have all the training and resources they need to support our mutual customers.
  3. We have an aggressive growth plan for the U.S. and I’m very focused on making sure our team is well trained, and that we remain thoughtful and measured as we take the steps to grow. We want to always keep an eye on what we’re known for – tools that are powerful and simple to use – and make sure everything else we offer remains accessible and valuable as we execute our growth plans.

  7. What are the most common questions that you get when speaking to a prospective customer?

The questions we usually get are around price, service level and flexibility. How much does dotmailer cost? How well are you going to look after my business? How will you integrate into my existing stack and support my plans for future growth? We now have three transparent bundle options, with specifics about what’s included published right on our website. We have introduced a customer success team that’s focused only on taking great care of our customers, and we’re hearing stories every day that tell me this is working. And we have all of the tools to support our customers as they grow and to integrate into their existing stacks – often integrating so well that you can use dotmailer from within Magento, Salesforce or Dynamics, for example.

  8. Can you tell us about the dotmailer differentiators you highlight when speaking to prospective customers that seem to really resonate?

In addition to the ones above – ease of use, speed of use and the ability to scale with you. With dotmailer’s tiered program, you can start with a lighter level of functionality and grow into more advanced functionality as you need it. The platform itself is so easy to use that most marketers are able to build campaigns in minutes that would have taken hours on other platforms. Our customer success team is also with you all the way if ever you want or need help. We’ve built a very powerful platform, we have a fantastic team that provides personalized service as an extension of your own team, and we’re ready to grow with you.

  9. How much time is your team on the road vs. in the office? Any road warrior tips to share?

I’ve spent a lot of time on the road; one year I attended 22 tradeshows! My top tip when flying is to be willing to give up your seat for families or groups once you’re at the airport gate, as you’ll often be rewarded with a better seat for helping the airline make the family or group happy. Win-win! Since joining dotmailer, I’ve focused on being in the office and present for the team and customers as much as possible. I can usually be found in our new NYC office, where I spend a lot of time with our team, in customer meetings, in trainings and other hosted events, sales conversations or marketing meetings. I’m here to help the team, clients and partners succeed, and will always do my best to say yes! Once our prospective customers see how quickly and efficiently they can execute tasks with dotmailer solutions vs. their existing solutions, it’s a no-brainer for them. I love seeing and hearing their reactions.

  10. Tell us a bit about yourself – favorite sports team, favorite food, guilty pleasure, favorite band, favorite vacation spot?

I’m originally from Yorkshire in England, and grew up just outside York. I moved to the U.S. about seven years ago to join a very fast-growing startup; we took it from 5 to well over 300 people, which was a fantastic experience. I moved to NYC almost two years ago, and I love exploring this great city. There’s so much to see and do. Outside of dotmailer, my passion is cars, and I also enjoy skeet shooting, almost all types of music, and I love to travel – my goal is to get to India, Thailand, Australia and Japan in the near future.

Want to find out more about the dotfamily? Check out our recent post about Darren Hockley, Global Head of Support.

Reblogged 3 years ago from blog.dotmailer.com

Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I leave in my way.

Hence, I thought it was due time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much of an impact a single ranking can have on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike in SEO, however, very little research or theory has been developed around what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Image credit: Josh Tuininga, Apptentive

The Apple App Store and Google Play each have roughly 1.3 million apps, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across iOS and Android versions of the same apps to determine any differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics:

Both of these assumptions will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.
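The volatility measure used in this study can be sketched in a few lines of Python. The daily rank snapshots below are hypothetical stand-ins (the study's raw data isn't reproduced here); the point is the two views of movement: absolute ranks moved per day, and percent change relative to the previous day's rank.

```python
# Sketch of daily rank volatility: absolute movement and percent change.
# The rank snapshots are hypothetical, chosen to mimic the pattern in the study.

def rank_volatility(daily_ranks):
    """Return (max absolute move, max percent change) across consecutive days."""
    abs_moves = [abs(b - a) for a, b in zip(daily_ranks, daily_ranks[1:])]
    pct_moves = [abs(b - a) / a for a, b in zip(daily_ranks, daily_ranks[1:])]
    return max(abs_moves), max(pct_moves)

# Hypothetical 7-day snapshots for the same app in each store
app_store_ranks = [120, 145, 98, 170, 130, 210, 150]
google_play_ranks = [120, 123, 118, 125, 121, 119, 124]

print(rank_volatility(app_store_ranks))    # large daily swings
print(rank_volatility(google_play_ranks))  # much steadier
```

Standardizing to percent change, as the study does, controls for the fact that a 10-rank move means far more at rank 20 than at rank 450.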

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the lower an app is ranked—meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher ranked apps have more installs, active users, and ratings, and that it would take a larger shift in any of these factors to produce a noticeable change in rank.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum—with a seemingly straight volatility line among Google Play’s Top 100 apps and very few blips within the App Store’s Top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as volatile as the highest ranked apps. (I say slightly as these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is pretty consistent across the App Store charts, while rank has a much greater impact on volatility at the top of the Google Play charts (ranks 1-100 have a 35% correlation) than it does at the bottom (ranks 401-500 have a 1% correlation).
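The rank-vs-volatility correlations in this study can be reproduced in miniature with a Pearson coefficient. The rank/movement pairs below are hypothetical (the actual study used 500 apps per store); the shape of the relationship is what matters.

```python
# Minimal Pearson correlation between an app's current rank and how many
# positions it moved in the last 24 hours, on hypothetical data.
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical sample: current rank vs. ranks moved in the last 24 hours
ranks = [10, 50, 120, 250, 380, 470]
moved = [1, 2, 5, 9, 14, 20]
print(pearson(ranks, moved))  # positive: lower-ranked apps move more
```

A positive coefficient here matches the study's finding that volatility grows as you move down the charts.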

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store against their average rating (both historic and current, for App Store apps). If it looks a little chaotic, that’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average rating of a top 100 iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores, meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As a variation on the above function, I’m using the age of an app’s current version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often as compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s Top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s Top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps, in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) of the ratings count of apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.
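The log transform mentioned above is worth a quick illustration. Rating counts span several orders of magnitude, so correlating rank against log10 of the count keeps a handful of mega-hit apps from dominating the trend. The counts below are hypothetical, loosely echoing the averages reported in this study.

```python
# Why a logarithmic scale helps: rating counts spanning three-plus orders of
# magnitude compress into a comparable range after a log10 transform.
import math

rating_counts = [3_100_000, 850_000, 196_000, 42_000, 9_500]  # hypothetical
log_counts = [math.log10(c) for c in rating_counts]
print([round(v, 2) for v in log_counts])  # → [6.49, 5.93, 5.29, 4.62, 3.98]
```

After the transform, a correlation (or a scatter plot like the one above) reflects proportional differences in rating volume rather than raw gaps of millions.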

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed, as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger, since the low end was further from the midpoint for apps with more installs.
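The range-to-number step described above might look like the sketch below. The range strings and parsing rules are assumptions for illustration, not Google Play's actual data format.

```python
# Map a Google Play-style install range to its low end, as the study does.
# The range format ("500K–1M" etc.) is an assumption for this sketch.

def installs_low_end(range_str):
    """'500K–1M' -> 500000; takes the low end of an install-count range."""
    low = range_str.split("–")[0].strip()
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    if low[-1] in multipliers:
        return int(float(low[:-1]) * multipliers[low[-1]])
    return int(low.replace(",", ""))

print(installs_low_end("500K–1M"))  # 500000
print(installs_low_end("10M–50M"))  # 10000000
```

Because every app in a range collapses to the same low-end value, the measured correlation is a conservative estimate, as the paragraph above notes.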

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and older current versions are correlated with lower ranks.
  7. An app’s update frequency is negatively correlated with ranking volatility in Google Play but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now re-write the function into a formula by weighing each of these four factors, where a, b, c, & d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)
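As a rough sketch only: given observed per-app data, the unknown weights a–d could be estimated with ordinary least squares. All feature values and ranks below are hypothetical, and the real store algorithms are unpublished and almost certainly not linear, so treat this as an illustration of the functional form rather than a way to recover the true weights.

```python
# Hypothetical illustration of fitting the weights a, b, c, d by ordinary
# least squares. Every number here is made up for the sketch.
import numpy as np

# Per-app features: avg rating, log10(rating count), log10(installs), trend score
X = np.array([
    [4.5, 6.5, 7.0, 0.9],
    [4.1, 5.9, 6.4, 0.4],
    [3.8, 5.3, 6.0, 0.2],
    [4.6, 6.8, 7.2, 0.7],
    [3.5, 4.9, 5.5, 0.1],
])
rank = np.array([3.0, 40.0, 120.0, 8.0, 300.0])  # lower number = better rank

# Solve rank ~ a*rating + b*rating_count + c*installs + d*trend + intercept
A = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(A, rank, rcond=None)
print(weights)  # a negative weight pushes apps toward the top of the charts
```

With rank as the target (lower is better), a factor that helps an app would show up as a negative weight in this framing.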

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.


Reblogged 4 years ago from tracking.feedpress.it

I Can’t Drive 155: Meta Descriptions in 2015

Posted by Dr-Pete

For years now, we (and many others) have been recommending keeping your Meta Descriptions shorter than about 155–160 characters. For months, people have been sending me examples of search snippets that clearly broke that rule, like this one (on a search for “hummingbird food”):

For the record, this one clocks in at 317 characters (counting spaces). So, I set out to discover if these long descriptions were exceptions to the rule, or if we need to change the rules. I collected the search snippets across the MozCast 10K, which resulted in 92,669 snippets. All of the data in this post was collected on April 13, 2015.

The Basic Data

The minimum snippet length was zero characters. There were 69 zero-length snippets, but most of these were the new generation of answer box, which appears organic but doesn’t have a snippet. To put it another way, these were misidentified as organic by my code. The other zero-length snippets were local one-boxes that appeared as organic but had no snippet, such as this one for “chichen itza”:

These zero-length snippets were removed from further analysis, but considering that they only accounted for 0.07% of the total data, they didn’t really impact the conclusions either way. The shortest legitimate, non-zero snippet was 7 characters long, on a search for “geek and sundry”, and appears to have come directly from the site’s meta description:

The maximum snippet length that day (this is a highly dynamic situation) was 372 characters. The winner appeared on a search for “benefits of apple cider vinegar”:

The average length of all of the snippets in our data set (not counting zero-length snippets) was 143.5 characters, and the median length was 152 characters. Of course, this can be misleading, since some snippets are shorter than the limit and others are being artificially truncated by Google. So, let’s dig a bit deeper.
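As a rough illustration of the summary statistics above, here’s a minimal Python sketch. The `snippets` list is a made-up stand-in for the 92,600 scraped snippets; the actual MozCast collection pipeline isn’t shown here.

```python
from statistics import mean, median

# Hypothetical sample; the real study measured 92,600 scraped snippets.
snippets = [
    "Learn how to make hummingbird food at home with this simple recipe.",
    "Chichen Itza hours, tickets, and visitor information for travelers.",
    "Felicia Day's Geek and Sundry channel.",
]

# Display length counts every character, including spaces.
lengths = [len(s) for s in snippets]

print("average:", mean(lengths))
print("median:", median(lengths))
```

On the real data set this yields the 143.5-character average and 152-character median reported above.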

The Bigger Picture

To get a better idea of the big picture, let’s take a look at the display length of all 92,600 snippets (with non-zero length), split into 20-character buckets (0-20, 21-40, etc.):

Most of the snippets (62.1%) cut off as expected, right in the 141-160 character bucket. Of course, some snippets were shorter than that, and didn’t need to be cut off, and some broke the rules. About 1% (1,010) of the snippets in our data set measured 200 or more characters. That’s not a huge number, but it’s enough to take seriously.
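The 20-character bucketing used for these charts can be sketched in a few lines. The `bucket` function and sample lengths below are illustrative, not the study’s actual code.

```python
from collections import Counter

def bucket(length, width=20):
    """Map a snippet length to the upper bound of its bucket
    (1-20 -> 20, 21-40 -> 40, 141-160 -> 160, and so on)."""
    if length == 0:
        return 0
    return ((length - 1) // width + 1) * width

# A few lengths from the examples in this post.
lengths = [7, 152, 155, 158, 210, 317]
counts = Counter(bucket(n) for n in lengths)
print(sorted(counts.items()))  # [(20, 1), (160, 3), (220, 1), (320, 1)]
```

The same function with `width=5` produces the finer-grained bins used in the zoomed-in chart below.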

That 141-160 character bucket is dwarfing everything else, so let’s zoom in a bit on the cut-off range, and just look at snippets in the 120-200 character range (in this case, by 5-character bins):

Zooming in, the bulk of the snippets are displaying at lengths between about 146-165 characters. There are plenty of exceptions to the 155-160 character guideline, but for the most part, they do seem to be exceptions.

Finally, let’s zoom in on the rule-breakers. This is the distribution of snippets displaying 191+ characters, bucketed in 10-character bins (191-200, 201-210, etc.):

Please note that the Y-axis scale is much smaller than in the previous 2 graphs, but there is a pretty solid spread, with a decent chunk of snippets displaying more than 300 characters.

Without looking at every original meta description tag, it’s very difficult to tell exactly how many snippets have been truncated by Google, but we do have a proxy. Snippets that have been truncated end in an ellipsis (…), which rarely appears at the end of a natural description. In this data set, more than half of all snippets (52.8%) ended in an ellipsis, so we’re still seeing a lot of meta descriptions being cut off.
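The ellipsis proxy is easy to automate. This is a hypothetical sketch; `looks_truncated` is a name invented for illustration.

```python
def looks_truncated(snippet):
    """Proxy check: Google appends an ellipsis when it cuts a snippet short."""
    return snippet.rstrip().endswith(("…", "..."))

snippets = [
    "Hummingbird food is easy to make at home with sugar and water ...",
    "Felicia Day's Geek and Sundry channel.",
]
truncated = sum(looks_truncated(s) for s in snippets)
print(f"{truncated / len(snippets):.1%} of snippets end in an ellipsis")
# prints: 50.0% of snippets end in an ellipsis
```

Run over the full data set, this check is what produced the 52.8% figure above.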

I should add that, unlike titles/headlines, it isn’t clear whether Google is cutting off snippets by pixel width or character count, since that cut-off is done on the server-side. In most cases, Google will cut before the end of the second line, but sometimes they cut well before this, which could suggest a character-based limit. They also cut off at whole words, which can make the numbers a bit tougher to interpret.

The Cutting Room Floor

There’s another difficulty with telling exactly how many meta descriptions Google has modified – some edits are minor, and some are major. One minor edit is when Google adds some additional information to a snippet, such as a date at the beginning. Here’s an example (from a search for “chicken pox”):

With the date (and minus the ellipsis), this snippet is 164 characters long, which suggests Google isn’t counting the added text against the length limit. What’s interesting is that the rest comes directly from the meta description on the site, except that the site’s description starts with “Chickenpox.” and Google has removed that keyword. As a human, I’d say this matches the meta description, but a bot has a very hard time telling a minor edit from a complete rewrite.

Another minor rewrite occurs in snippets that start with search result counts:

Here, we’re at 172 characters (with spaces and minus the ellipsis), and Google has even let this snippet roll over to a third line. So, again, it seems like the added information at the beginning isn’t counting against the length limit.

All told, 11.6% of the snippets in our data set had some kind of Google-generated data, so this type of minor rewrite is pretty common. Even if Google honors most of your meta description, you may see small edits.

Let’s look at our big winner, the 372-character description. Here’s what we saw in the snippet:

Jan 26, 2015 – Health• Diabetes Prevention: Multiple studies have shown a correlation between apple cider vinegar and lower blood sugar levels. … • Weight Loss: Consuming apple cider vinegar can help you feel more full, which can help you eat less. … • Lower Cholesterol: … • Detox: … • Digestive Aid: … • Itchy or Sunburned Skin: … • Energy Boost:1 more items

So, what about the meta description? Here’s what we actually see in the tag:

Were you aware of all the uses of apple cider vinegar? From cleansing to healing, to preventing diabetes, ACV is a pantry staple you need in your home.

That’s a bit more than just a couple of edits. So, what’s happening here? Well, there’s a clue on that same page, where we see yet another rule-breaking snippet:

You might be wondering why this snippet is any more interesting than the other one. If you could see the top of the SERP, you’d know why, because it looks something like this:

Google is automatically extracting list-style data from these pages to fuel the expansion of the Knowledge Graph. In one case, that data is replacing a snippet and going directly into an answer box, but they’re performing the same translation even for some other snippets on the page.

So, does every 2nd-generation answer box yield long snippets? After 3 hours of inadvisable mySQL queries, I can tell you that the answer is a resounding “probably not”. You can have 2nd-gen answer boxes without long snippets and you can have long snippets without 2nd-gen answer boxes, but there does appear to be a connection between long snippets and Knowledge Graph in some cases.

One interesting connection is that Google has begun bolding keywords that seem like answers to the query (and not just synonyms for the query). Below is an example from a search for “mono symptoms”. There’s an answer box for this query, but the snippet below is not from the site in the answer box:

Notice the bolded words – “fatigue”, “sore throat”, “fever”, “headache”, “rash”. These aren’t synonyms for the search phrase; these are actual symptoms of mono. This data isn’t coming from the meta description, but from a bulleted list on the target page. Again, it appears that Google is trying to use the snippet to answer a question, and has gone well beyond just matching keywords.

Just for fun, let’s look at one more, where there’s no clear connection to the Knowledge Graph. Here’s a snippet from a search for “sons of anarchy season 4”:

This page has no answer box, and the information extracted is odd at best. The snippet bears little or no resemblance to the site’s meta description. The number string at the beginning comes out of a rating widget, and some of the text isn’t even clearly available on the page. This seems to be an example of Google acknowledging IMDb as a high-authority site and desperately trying to match any text they can to the query, resulting in a Frankenstein’s snippet.

The Final Verdict

If all of this seems confusing, that’s probably because it is. Google is taking a lot more liberties with snippets these days: to better match queries, to add details they feel are important, and to help build and support the Knowledge Graph.

So, let’s get back to the original question – is it time to revise the 155(ish) character guideline? My gut feeling is: not yet. To begin with, the vast majority of snippets are still falling in that 145-165 character range. In addition, the exceptions to the rule are not only atypical situations, but in most cases those long snippets don’t seem to represent the original meta description. In other words, even if Google does grant you extra characters, they probably won’t be the extra characters you asked for in the first place.

Many people have asked: “How do I make sure that Google shows my meta description as is?” I’m afraid the answer is: “You don’t.” If this is very important to you, I would recommend keeping your description below the 155-character limit, and making sure that it’s a good match to your target keyword concepts. I suspect Google is going to take more liberties with snippets over time, and we’re going to have to let go of our obsession with having total control over the SERPs.


Understanding and Applying Moz’s Spam Score Metric – Whiteboard Friday

Posted by randfish

This week, Moz released a new feature that we call Spam Score, which helps you analyze your link profile and weed out the spam (check out the blog post for more info). There have been some fantastic conversations about how it works and how it should (and shouldn’t) be used, and we wanted to clarify a few things to help you all make the best use of the tool.

In today’s Whiteboard Friday, Rand offers more detail on how the score is calculated, just what those spam flags are, and how we hope you’ll benefit from using it.

For reference, here’s a still of this week’s whiteboard. 

Click on the image above to open a high resolution version in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to chat a little bit about Moz’s Spam Score. Now I don’t typically like to do Whiteboard Fridays specifically about a Moz project, especially when it’s something that’s in our toolset. But I’m making an exception because there have been so many questions and so much discussion around Spam Score and because I hope the methodology, the way we calculate things, the look at correlation and causation, when it comes to web spam, can be useful for everyone in the Moz community and everyone in the SEO community in addition to being helpful for understanding this specific tool and metric.

The 17-flag scoring system

I want to start by describing the 17-flag system. As you might know, Spam Score is shown as a score from 0 to 17: for each flag, you either fire it or you don’t. You can see a list of those 17 flags on the blog post, and we show them in the tool as well. Essentially, the count of flags, not which specific flags fired, just how many, correlates to the percentage of sites with that count that were penalized or banned by Google. I’ll show you a little bit more in the methodology.

Basically, what this means is that for sites with 0 spam flags (none of the 17 flags fired), 99.5% of those sites were not penalized or banned, on average, in our analysis, and 0.5% were. At 3 flags, 4.2% of those sites were penalized or banned. That’s actually still a huge number, probably millions of domains or subdomains that Google has potentially banned. All the way down here at 11 flags, 87.3% of sites were penalized or banned. That seems pretty risky. But the remaining 12.7% is still a very big number, again probably hundreds of thousands of unique websites that are not banned but still have these flags.
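The relationship Rand describes is essentially a lookup from flag count to an observed penalization rate. This sketch uses only the three data points quoted above; the full 0-17 table lives in the Moz blog post announcing the feature, and the names here are invented for illustration.

```python
# Share of analyzed sites that were penalized or banned, keyed by spam-flag
# count. Only the three data points quoted in the transcript are included.
PENALIZED_RATE = {0: 0.005, 3: 0.042, 11: 0.873}

def penalization_risk(flag_count):
    """Return the observed penalization rate for a given flag count, if known."""
    return PENALIZED_RATE.get(flag_count)

print(penalization_risk(11))  # 0.873: most, but not all, such sites were banned
```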

If you’re looking at a specific subdomain and you’re saying, “Hey, gosh, this only has 3 flags or 4 flags on it, but it’s clearly been penalized by Google; Moz’s score must be wrong,” no, that fits comfortably within those kinds of numbers. Same thing down here. If you see a site that is not penalized but has a number of flags, that’s potentially an indication that you’re in that percentage of sites that we found not to be penalized.

So this is an indication of percentile risk, not a “this is absolutely spam” or “this is absolutely not spam.” The only caveat is anything with, I think, more than 13 flags, we found 100% of those to have been penalized or banned. Maybe you’ll find an odd outlier or two. Probably you won’t.

Correlation ≠ causation

Correlation is not causation. This is something we repeat all the time here at Moz and in the SEO community. We do a lot of correlation studies around these things. I think people understand those very well in the fields of social media and in marketing in general. Certainly in psychology and electoral voting and election polling results, people understand those correlations. But for some reason in SEO we sometimes get hung up on this.

I want to be clear. Spam flags and the count of spam flags correlates with sites we saw Google penalize. That doesn’t mean that any of the flags or combinations of flags actually cause the penalty. It could be that the things that are flags are not actually connected to the reasons Google might penalize something at all. Those could be totally disconnected.

We are not trying to say with the 17 flags these are causes for concern or you need to fix these. We are merely saying this feature existed on this website when we crawled it, or it had this feature, maybe it still has this feature. Therefore, we saw this count of these features that correlates to this percentile number, so we’re giving you that number. That’s all that the score intends to say. That’s all it’s trying to show. It’s trying to be very transparent about that. It’s not trying to say you need to fix these.

A lot of flags and features that are measured are perfectly fine things to have on a website, like no social accounts or email links. That’s a totally reasonable thing to have, but it is a flag because we saw it correlate. A number in your domain name, I think it’s fine if you want to have a number in your domain name. There’s plenty of good domains that have a numerical character in them. That’s cool.

A TLD extension that happens to be used by lots of spammers, like .info or .cc or a number of other ones, is also totally reasonable. Just because lots of spammers happen to use those TLD extensions doesn’t mean you are necessarily spam because you use one.

Or low link diversity. Maybe you’re a relatively new site. Maybe your niche is very small, so the number of folks who point to your site tends to be small, and lots of the sites that organically naturally link to you editorially happen to link to you from many of their pages, and there’s not a ton of them. That will lead to low link diversity, which is a flag, but it isn’t always necessarily a bad thing. It might still nudge you to try and get some more links because that will probably help you, but that doesn’t mean you are spammy. It just means you fired a flag that correlated with a spam percentile.

The methodology we use

The methodology that we use, for those who are curious — and I do think this is a methodology that might be interesting to potentially apply in other places — is we brainstormed a large list of potential flags, a huge number. We cut that down to the ones we could actually do, because there were some that were just unfeasible for our technology team, our engineering team to do.

Then, we got a huge list, many hundreds of thousands of sites that were penalized or banned. When we say penalized or banned, what we mean is they didn’t rank on page one for either their own domain name or their own brand name, the thing between the www and the .com or .net or .info or whatever it was. If you didn’t rank for either your full domain name (www and the .com) or your brand name (e.g. Moz), we said, “Hey, you’re penalized or banned.”
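The penalized-or-banned test Rand describes can be sketched as pure logic. `looks_penalized` and the `serps` structure below are invented for illustration; actually fetching page-one results is outside this sketch.

```python
def looks_penalized(brand, domain, page_one_results):
    """Sketch of the test described above: a site counts as penalized or banned
    if it appears on page one for neither its own domain-name query nor its
    own brand-name query. `page_one_results` maps each query string to the
    list of domains found on page one for that query."""
    ranks_for_domain = domain in page_one_results.get(domain, [])
    ranks_for_brand = domain in page_one_results.get(brand, [])
    return not (ranks_for_domain or ranks_for_brand)

# Hypothetical page-one results for the two queries.
serps = {
    "moz.com": ["moz.com", "twitter.com"],
    "moz": ["moz.com", "wikipedia.org"],
}
print(looks_penalized("moz", "moz.com", serps))  # False: ranks for both queries
```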

Now you might say, “Hey, Rand, there are probably some sites that don’t rank on page one for their own brand name or their own domain name, but aren’t actually penalized or banned.” I agree. That’s a very small number. Statistically speaking, it probably is not going to be impactful on this data set. Therefore, we didn’t have to control for that. We ended up not controlling for that.

Then we found which of the features that we ideated and brainstormed actually correlated with the penalties and bans, and we created the 17 flags that you see in the product today. There are lots of things that I thought were going to correlate, for example spammy-looking anchor text or poison keywords on the page, like Viagra, Cialis, Texas Hold’em online, pornography. Not all of those turned out to correlate well, and so they didn’t make it into the 17 flags list. I hope over time we’ll add more flags. That’s how things worked out.

How to apply the Spam Score metric

When you’re applying Spam Score, I think there are a few important things to think about. Just like Domain Authority, or Page Authority, or a metric from Majestic, or a metric from Google, or any other kind of metric that you might come up with, you should add it to your toolbox and to your metrics where you find it useful. I think playing around with Spam Score, experimenting with it, is a great thing. If you don’t find it useful, just ignore it. It doesn’t actually hurt your website. It’s not like this information goes to Google or anything like that. They have way more sophisticated stuff to figure out things on their end.

Do not just disavow everything with seven or more flags, or eight or more flags, or nine or more flags. We use the color coding to indicate risk: green where 0% to 10% of sites with that flag count were penalized or banned, orange for 10% to 50%, and red for 50% or above. But you should use the count and line it up with the percentile; we show that inside the tool as well.
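The green/orange/red thresholds Rand gives map naturally to a small function. This is an illustrative sketch of the described thresholds, not Moz’s actual implementation.

```python
def flag_color(penalized_rate):
    """Map an observed penalization rate to the green/orange/red coding
    described in the transcript (0-10%, 10-50%, 50%+)."""
    if penalized_rate < 0.10:
        return "green"
    if penalized_rate < 0.50:
        return "orange"
    return "red"

print(flag_color(0.042))  # green: 4.2% of 3-flag sites were penalized
print(flag_color(0.873))  # red: 87.3% of 11-flag sites were penalized
```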

Don’t just take everything and disavow it all. That can get you into serious trouble. Remember what happened with Cyrus Shepard, Moz’s head of content and SEO: he disavowed all the backlinks to his site, and it took more than a year for him to rank for anything again. Google almost treated it like he was banned; not completely, but they seriously took away all of his link power and didn’t let him back in, even though he changed the disavow file and all that.

Be very careful submitting disavow files. You can hurt yourself tremendously. The reason we offer it in disavow format is because many of the folks in our customer testing said that’s how they wanted it so they could copy and paste, so they could easily review, so they could get it in that format and put it into their already existing disavow file. But you should not do that blindly. You’ll see a bunch of warnings if you try and generate a disavow file. You even have to edit your disavow file before you can submit it to Google, because we want to be that careful that you don’t just go and submit it.

You should also set your expectations for Spam Score’s accuracy. If you’re doing spam investigation, you’re probably looking at spammier sites. If you’re looking at a random hundred sites, you should expect the flags to correlate with the percentages: if I look at a random hundred sites with 4 spam flags, I’d expect, on average, 7.5% of those to be penalized or banned. If you’re seeing sites that don’t fit those numbers, they probably fall into the percentile that wasn’t penalized (or, at the high end, the percentile that was), that kind of thing.

Hopefully, you find Spam Score useful and interesting and you add it to your toolbox. We would love to hear from you on iterations and ideas that you’ve got for what we can do in the future, where else you’d like to see it, and where you’re finding it useful/not useful. That would be great.

Hopefully, you’ve enjoyed this edition of Whiteboard Friday and will join us again next week. Thanks so much. Take care.

Video transcription by Speechpad.com

ADDITION FROM RAND: I also urge folks to check out Marie Haynes’ excellent Start-to-Finish Guide to Using Google’s Disavow Tool. We’re going to update the feature to link to that as well.


Grow Your Own SEOs: Professional Development for Digital Marketers

Posted by RuthBurrReedy

Finding your next SEO hire is hard, but it’s only half the battle. Growing a team isn’t just about hiring—it’s about making your whole team, newbies and experts alike, better marketers.

It’s almost impossible to build a one-size-fits-all training program for digital marketers, since the tasks involved will depend a lot on the role. Even “SEO” can mean a lot of different things. Your role might be highly technical, highly creative, or a mix of both. Tactics like local SEO or conversion rate optimization might be a huge part of an SEO’s job or might be handled by another person entirely. Sometimes an SEO role includes elements like social media or paid search. The skills you teach your trainees will depend on what you need them to do, and more specifically, what you need them to do right now.

Whatever the specifics of the marketing role, you need to make sure you’re providing a growth plan for your digital marketers (this goes for your more experienced team members, as well as your newbies). A professional growth plan helps you and your team members:

  • Track whether or not they’re making progress in their roles. Taking on a new skill set can be daunting. Having a growth plan can alleviate some of the stress less-experienced employees may feel when learning a new skill, and makes sure more experienced employees aren’t stagnating. 
  • Spot problem areas. Everyone’s talents are different, but you don’t want someone to miss out on growth opportunities because they’re such a superstar in one area and are neglecting everything else. 
  • Have conversations around promotions and raises. Consistently tracking people’s development across a variety of skill sets allows you to compare where someone is now to where they were when you hired them; it also gives you a framework to discuss what additional steps might be needed before a promotion or raise is in order, and help them develop a plan to get there. 
  • Advance their careers. One of your duties as their manager is to make sure you’re giving them what they need to continue on their career path. A professional development plan should be managed with career goals in mind. 
  • Increase employee retention. Smart people like to learn and grow, and if you’re not providing them ways to do so, they’re not going to stick around.

We have technical/on-page SEOs, content marketers, local SEOs and marketing copywriters all working together on the same team at BigWing. We wanted to create a framework for professional development that we could apply to the whole team, so we identified a set of areas that any digital marketer should be growing in, regardless of their focus. This growth plan is part of everyone’s mid-year and year-end reviews.

Here’s what it looks like:

Growth areas for digital marketers

Want your own copy of the Professional Advancement Sheet? Get it here!

Tactical -> strategic

At the beginner level, team members are still learning the basic concepts and tasks associated with their role, and how those translate to the client metrics they’re being measured on. It takes time to encounter and fix enough different kinds of things to know “in x situation, look at a, b and c and then try y or z.”

As someone grows in their role, they will learn more advanced tactics. They should also be more and more able to use critical thinking to figure out how to solve problems and tackle longer-term client goals and projects. At the senior level, an SEO should be building long-term strategies and be comfortable with unusual campaigns and one-off projects.

Small clients -> big clients

There are plenty of small brochure websites in the world, and these sites are a great testing ground for the fundamentals of SEO: they may still have weird jacked-up problems (so many websites do), but they are a manageable size and don’t usually have the potential for esoteric technical issues that large, complex sites do. Once someone has a handle on SEO, you can start assigning bigger and badder sites and projects (with plenty of mentoring from more experienced team members—more on that later).

We thought about making this one “Easy clients -> difficult clients,” because there’s another dimension to this line of progress: increasingly complex client relationships. Clients with very large or complicated websites (or clients with more than one website) are likely to have higher budgets, bigger internal staff, and more stakeholders. As the number of people involved increases, so does the potential for friction, so a senior-level SEO should be able to handle those complex relationships with aplomb.

Learning -> teaching

At the beginner level, people are learning digital marketing in general and learning about our specific internal processes. As they gain experience, they become a resource for team members still in the “learning” phase, and at the senior level they should be a go-to for tough questions and expert opinions.

Even a beginner digital marketer may have other things to teach the team; skills learned from previous careers, hobbies or side gigs can be valuable additions. For example, we had a brand-new team member with a lot of experience in photography, a valuable skill for content marketers; she was able to start teaching her teammates more about taking good photos while still learning other content marketing fundamentals herself.

I love this stock picture because the chalkboard just says “learning.” Photo via Pixabay.

Since managers can’t be everywhere at once, more experienced employees must take an active role in teaching. It’s not enough that they be experts (which is why this scale doesn’t go from “Learning” to “Mastering”); they have to be able to impart that expertise to others. Teaching is more than just being available when people have questions, too: senior team members are expected to be proactive about taking the time to show junior team members the ropes.

Prescribed -> creative

The ability to move from executing a set series of tasks to creating creative, heavily client-focused digital marketing campaigns is, in my opinion, one of the best predictors of long-term SEO success. When someone is just starting out in SEO, it’s appropriate to have a fairly standard set of tasks they’re carrying out. For a lot of those small sites that SEO trainees start on, that set of SEO fundamentals goes a long way. The challenge comes when the basics aren’t enough.

Creative SEO comes from being able to look at a client’s business, not just their website, and tailor a strategy to their specific needs. Creative SEOs are looking for unique solutions to the unique problems that arise from that particular client’s combination of business model, target market, history and revenue goals. Creativity can also be put to work internally, in the form of suggested process improvements and new revenue-driving projects.

General -> T-shaped

The concept of the T-shaped marketer has been around for a few years (if you’re not familiar with the idea, you can read up on it on Rand’s blog or the Distilled blog). Basically, it means that in addition to deep knowledge of whatever area(s) of inbound marketing we specialize in, digital marketers should also work to develop basic knowledge of a broad set of marketing disciplines, in order to understand more about the craft of marketing as a whole.

Source: The T-Shaped Marketer

A digital marketer who’s just starting out will naturally be focusing more on the broad part of their T, getting their head around the basic concepts and techniques that make up the digital marketing skill set. Eventually most people naturally find a few specialty areas that they’re really passionate about. Encouraging employees to build deep expertise ultimately results in a whole team full of subject matter experts in a whole team’s worth of subjects.

Beginner -> expert

This one is pretty self-explanatory. The important thing to note is that expertise isn’t something that just happens to you after you do something a lot (although that’s definitely part of it).
Honing expertise means actively pursuing new learning opportunities and testing new ideas and tactics, and we look for the pursuit of expertise as part of evaluating someone’s professional growth.

Observing -> leading

Anyone who is working in inbound marketing should be consistently observing the industry—they should be following search engine news, reading blog posts from industry experts, and attending events and webinars to learn more about their craft. It’s a must-do at all levels, and even someone who’s still learning the ropes can be keeping an eye on industry buzz and sharing items of interest with their co-workers.

Not everyone is crazy about the phrase “thought leadership.” When you’re a digital marketing agency, though, your people are your product; their depth of knowledge and quality of work are a big part of what you’re selling. As your team gains experience and confidence, it’s appropriate to expect them to start participating more in the digital marketing space, both online and in person. This participation could look like:

  • Pitching and speaking at marketing conferences 
  • Contributing to blogs, whether on your site or in other marketing communities 
  • Organizing local tech meetups 
  • Regularly participating in online events like #seochat

…or a variety of other activities, depending on the individual’s talents and interests. Not only does this kind of thought-leadership activity promote your agency brand, it also helps your employees build their personal brands—and don’t forget, a professional development plan needs to be as much about helping your people grow in their careers as it is about growing the skill sets you need.

Low output -> high output

I love the idea of meticulous, hand-crafted SEO, but let’s be real: life at an agency means getting stuff done. When people are learning to do stuff, it takes them longer to do (which is BY FAR MY LEAST FAVORITE PART OF LEARNING TO DO THINGS, I HATE IT SO MUCH), so expectations of the number of clients/volume of work they can handle should scale appropriately. It’s okay for people to work at their own pace and in their own way, but at some point you need to be able to rely on your team to turn things around quickly, handle urgent requests, and consistently hit deadlines, or you’re going to lose customers.

You may notice that some of these growth areas overlap, and that’s okay—the idea is to create a nuanced approach that captures all the different ways a digital marketer can move toward excellence.

As with all other aspects of a performance review, it’s important to be as specific as possible when discussing a professional growth plan. If there’s an area where a member of your team needs to make more progress, don’t just say, “You need to be more strategic.” Come up with specific projects and milestones for your marketer to hit, so you’re both clear on whether they’re growing and what they need to do to get to the next level.


Leveraging Panda to Get Out of Product Feed Jail

Posted by MichaelC

This is a story about Panda, customer service, and differentiating your store from others selling the same products.

Many e-commerce websites get the descriptions, specifications, and imagery for the products they sell from feeds or databases provided by the manufacturers. The manufacturers might like this, as they control how their products are described and shown. However, it does their retailers no good when they’re trying to rank for searches for those products and they’ve got the exact same content as every other retailer. If the content in the feed is thin, then you’ll have pages with…well…thin content. And if there’s a lot of content for the products, then you’ll have giant blocks of content that Panda might spot as being the same as it’s seen on many other sites. To rub salt in the wound, if the content is really crappy, badly written, or downright wrong, then the retailers’ sites will look low-quality to Panda and users alike.

Many webmasters see Panda as a type of Google penalty—but it’s not, really. Panda is a collection of measurements Google
is taking of your web pages to try and give your pages a rating on how happy users are likely to be with those pages.
It’s not perfect, but then again—neither is your website.

Many SEO folks (including me) tend to focus on the kinds of tactical and structural things you can do to make Panda see
your web pages as higher quality: things like adding big, original images, interactive content like videos and maps, and
lots and lots and lots and lots of text. These are all good tactics, but let’s step back a bit and look at a specific
example to see WHY Panda was built to do this, and from that, what we can do as retailers to enrich the content we have
for e-commerce products where our hands are a bit tied—we’re getting a feed of product info from the manufacturers, the same
as every other retailer of those products.

I’m going to use a real-life example that I suffered through about a month ago. I was looking for a replacement sink stopper for a bathroom sink. I knew the brand, but there wasn’t a part number on the part I needed to replace. After a few Google searches, I thought I’d found it on Amazon:


Don’t you wish online shopping was always this exciting?

What content actually teaches the customer

All righty…my research has shown me that there are standard sizes for plug stoppers. In fact, I initially ordered a “universal fit sink stopper.” Which didn’t fit. Then I found there are 3 standard diameters and 5 or 6 standard lengths. No problem…I possess that marvel of modern tool chests, a tape measure…so I measure the part I need to replace. I get about 1.5″ x 5″.
So let’s scroll down to the product details to see if it’s a match:

Kohler sink stopper product info from hell

Whoa. 1.2 POUNDS? This sink stopper must be made of Ununoctium. The one in my hand weighs about an ounce. But the dimensions are way off as well: a 2″ diameter stopper isn’t going to fit, and mine needs to be at least an inch longer.

I scroll down to the product description…maybe there’s more detail there, maybe the 2″ x 2″ is the box or something.

I've always wanted a sink stopper designed for long long

Well, that’s less than helpful, with a stupid typo AND incorrect capitalization AND a missing period at the end.
Doesn’t build confidence in the company’s quality control.

Looking at the additional info section, maybe this IS the right part…the weight quoted in there is about right:

Maybe this is my part after all

Where else customers look for answers

Next I looked at the questions and answers bit, which convinced me that it PROBABLY was the right part:

Customers will answer the question if the retailer won't...sometimes.

If I were smart, I would have covered my bets by doing what a bunch of other customers also did: buy a bunch of different parts, so that surely one of them would fit. Could there possibly be a clearer signal than this that the product info was lacking?

If you can't tell which one to buy, buy them all!

In this case, that was probably smarter than spending another 1/2 hour of my time snooping around online. But in general, people
aren’t going to be willing to buy THREE of something just to make sure they get the right one. This cheap part was an exception.

So, surely SOMEONE out there has the correct dimensions of this part on their site—so I searched for the part number I saw on the Amazon
listing. But as it turned out, that crappy description and wrong weight and dimensions were on every site I found…because they came from
the manufacturer.

Better Homes and Gardens...but not better description.

A few of the sites had edited out the “designed for long long” bit, but apart from that, they were all the same.

What sucks for the customer is an opportunity for you

Many, many retailers are in this same boat—they get their product info from the manufacturer, and if the data sucks in their feed,
it’ll suck on their site. Your page looks weak to both users and to Panda, and it looks the same as everybody else’s page for that product…to
both users and to Panda. So (a) you won’t rank very well, and (b) if you DO manage to get a customer to that page, it’s not as likely to convert
to a sale.

What can you do to improve on this? Here are a few tactics to consider.

1. Offer your own additional description and comments

Add a new field to your CMS for your own write-ups on products, and when you discover issues like the above, you can add your own information—making it VERY clear which part is the manufacturer’s stock info and which part you’ve added (that’s VALUE-ADDED). My client Sports Car Market magazine does this with their collector car auction reports in their printed magazine: they list the auction company’s description of the car, then their reporter’s assessment of the car. This is why I buy the magazine and not the auction catalog.

2. Solicit questions

Be sure you solicit questions on every product page—your customers will tell you what’s wrong or what important information is missing. Sure, you’ve got millions of products to deal with, but what customers are asking about (and your sales volume, of course) will help you prioritize as well as find the problems (read: opportunities).

Amazon does a great job of enabling this, but in this case, I used the Feedback option to update the product info, and got back a total bull-twaddle email from the seller about how the dimensions are in the product description, thank-you-for-shopping-with-us, bye-bye. I tried to help them, for free, and they shat on me.

3. But I don’t get enough traffic to get the questions

Don’t have enough site volume to get many customer requests? No problem, the information is out there for you on Amazon :-).
Take your most important products, and look them up on Amazon, and see what questions are being asked—then answer those ONLY on your own site.

4. What fits with what?

Create fitment/cross-reference charts for products.
You probably have in-house knowledge of what products fit/are compatible with what other products.
Just because YOU know a certain accessory fits all makes and models, because it’s some industry-standard size, doesn’t mean that the customer knows this.
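As a rough illustration of the idea, a fitment chart can start as nothing more than a mapping from accessory SKUs to the product SKUs they fit, flattened into rows for your page template. The SKUs and names below are invented for the sketch, not from any real catalog:

```python
# Hypothetical sketch: a fitment/cross-reference chart built from
# in-house compatibility knowledge. All SKUs here are made up.

# Which accessory SKUs fit which product SKUs.
FITMENT = {
    "STOPPER-150x5": ["SINK-A100", "SINK-A200", "SINK-B300"],
    "STOPPER-200x2": ["SINK-C400"],
}

def compatible_products(accessory_sku):
    """Return the list of product SKUs a given accessory fits."""
    return FITMENT.get(accessory_sku, [])

def fitment_rows():
    """Flatten the chart into (accessory, product) rows for a page template."""
    return [(acc, prod)
            for acc, prods in sorted(FITMENT.items())
            for prod in prods]

if __name__ == "__main__":
    for acc, prod in fitment_rows():
        print(f"{acc} fits {prod}")
```

Once the data exists in one place, the same mapping can render a chart on the accessory page and a "compatible accessories" list on each product page.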

If there’s a particular way to measure a product so you get the correct size, explain that (with photos of what you’re measuring, if it seems
at all complicated). I’m getting a new front door for my house. 

  • How big is the door I need?
  • Do I measure the width of the door itself, or the width of the opening (probably 1/8″ wider)?
  • Or if it’s pre-hung, do I measure the frame too? Is it inswing or outswing?
  • Right or left hinged…am I supposed to look at the door from inside the house or outside to figure this out?

If you’re a door seller, this is all obvious stuff,
but it wasn’t obvious to me, and NOT having the info on a website means (a) I feel stupid, and (b) I’m going to look at your competitors’ sites
to see if they will explain it…and maybe I’ll find a door on THEIR site I like better anyway.

Again, prioritize based on customer requests.

5. Provide your own photos and measurements

If examples of the physical products are available to you, take your own photos, and take your own measurements.

In fact, take your OWN photo of YOURSELF taking the measurement—so the user can see exactly what part of the product you’re measuring.
In the photo below, you can see that I’m measuring the diameter of the stopper, NOT the hole in the sink, NOT the stopper plus the rubber gasket.
And no, Kohler, it’s NOT 2″ in diameter…by a long shot.

Don't just give the measurements, SHOW the measurements

Keep in mind, you shouldn’t have to tear apart your CMS to do any of this. You can put your additions in a new database table, just tied to the
core product content by SKU. In the page template code for the product page, you can check your database to see if you have any of your “extra bits” to display
alongside the feed content, and this way keep it separate from the core product catalog code. This will make updates to the CMS/product catalog less painful as well.
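The separate-table approach can be sketched with an in-memory SQLite database; the table and column names here are assumptions made up for illustration, not a real CMS schema:

```python
# Minimal sketch of the "extra bits" idea: keep your own value-added content
# in a separate table keyed by SKU, so the core product catalog stays untouched.
# Table/column names and the sample SKU are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE product_extras (
    sku TEXT PRIMARY KEY,
    own_description TEXT,      -- your value-added write-up
    measured_dimensions TEXT,  -- your own measurements
    extra_photo_url TEXT)""")

def add_extras(sku, description, dimensions, photo_url=None):
    conn.execute("INSERT OR REPLACE INTO product_extras VALUES (?, ?, ?, ?)",
                 (sku, description, dimensions, photo_url))

def get_extras(sku):
    """Called from the product-page template: extras row, or None if absent."""
    return conn.execute(
        "SELECT own_description, measured_dimensions, extra_photo_url "
        "FROM product_extras WHERE sku = ?", (sku,)).fetchone()

add_extras("K-7733",
           "Manufacturer specs are wrong: we measured this part ourselves.",
           '1.5" diameter x 5" long')
```

The product-page template just calls `get_extras(sku)` alongside the feed lookup; a `None` result means "render the stock feed content only," so untouched SKUs keep working exactly as before.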

Fixing your content doesn’t have to be all that difficult, or expensive

At this point, you’re probably thinking “hey, but I’ve got 1.2 million SKUs, and if I were to do this, it’d take me 20 years to update all of them.”
FINE. Don’t update all of them. Prioritize, based on factors like what you sell the most of, what you make the best margin on, what customers
ask questions about the most, etc. Maybe concentrate on your top 5% in terms of sales, and do those first. Take all that money you used to spend
buying spammy links every month, and spend it instead on junior employees or interns doing the product measurements, extra photos, etc.
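The prioritization above can be sketched as a simple weighted score over sales, margin, and question volume. The weights and sample SKUs below are invented; you’d tune them against your own numbers:

```python
# Rough sketch: rank SKUs to decide which products to enrich first.
# Weights are arbitrary placeholders, not recommendations.
def priority_score(units_sold, margin, question_count,
                   w_sales=1.0, w_margin=50.0, w_questions=5.0):
    """Higher score = enrich this product's content sooner."""
    return (units_sold * w_sales
            + margin * w_margin
            + question_count * w_questions)

catalog = [
    {"sku": "K-7733", "units_sold": 900, "margin": 0.40, "questions": 12},
    {"sku": "D-1001", "units_sold": 40,  "margin": 0.10, "questions": 0},
    {"sku": "F-2050", "units_sold": 300, "margin": 0.60, "questions": 25},
]

ranked = sorted(
    catalog,
    key=lambda p: priority_score(p["units_sold"], p["margin"], p["questions"]),
    reverse=True)

# Work the top slice (e.g. your top 5% by score) first.
top = [p["sku"] for p in ranked]
```

Running the enrichment queue off a score like this keeps the interns pointed at the products where better content pays back fastest.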

And don’t be afraid to spend a little effort on a low value product, if it’s one that frequently gets questions from customers.
Simple things can make a life-long fan of the customer. I once needed to replace a dishwasher door seal, and didn’t know if I needed special glue,
special tools, how to cut it to fit with or without overlap, etc.
I found a video on how to do the replacement on
RepairClinic.com. So easy!
They got my business for the $10 seal, of course…but now I order my $50 fridge water filter from them every six months as well.

Benefits to your conversion rate

Certainly the tactics we’ve talked about will improve your conversion rate from visitors to purchasers. If JUST ONE of those sites I looked at for that damn sink stopper
had the right measurement (and maybe some statement about how the manufacturer’s specs above are actually incorrect, we measured, etc.), I’d have stopped right there
and bought from that site.

What does this have to do with Panda?

But, there’s a Panda benefit here too. You’ve just added a bunch of additional, unique text to your site…and maybe a few new unique photos as well.
Not only are you going to convert better, but you’ll probably rank better too.

If you’re NOT Amazon, or eBay, or Home Depot, etc., then Panda is your secret weapon to help you rank against those other sites whose backlink profiles are
stronger than
carbon fibre (that’s a really cool video, by the way).
If you saw my
Whiteboard Friday on Panda optimization, you’ll know that
Panda tuning can overcome incredible backlink profile deficits.

It’s go time

We’re talking about tactics that are time-consuming, yes—but relatively easy to implement, using relatively inexpensive staff (and in some
cases, your customers are doing some of the work for you).
And it’s something you can roll out a product at a time.
You’ll be doing things that really DO make your site a better experience for the user…we’re not just trying to trick Panda’s measurements.

  1. Your pages will rank better, and bring more traffic.
  2. Your pages will convert better, because users won’t leave your site to look elsewhere for answers to their questions.
  3. Your customers will be more loyal, because you were able to help them when nobody else bothered.

Don’t be held hostage by other people’s crappy product feeds. Enhance your product information with your own info and imagery.
Like good link-building and outreach, it takes time and effort, but both Panda and your site visitors will reward you for it.
