Q&A: Lost Your Anonymous Google Reviews? The Scoop on Removal and Moving Forward

Posted by MiriamEllis

Did you recently notice a minor or major drop in your Google review count, and then realize that some of your actual reviews had gone missing, too? Read on to see if your experience of review removal was part of the action Google took in late May surrounding anonymous reviews.

Q: What happened?

A: As nearly as I can pinpoint it, Google began discounting reviews left by “A Google User” from total review counts around May 23, 2018. For a brief period, these anonymous reviews were still visible, but were then removed from display. I haven’t seen any official announcement about this, to date, and it remains unclear as to whether all reviews designated as being from “A Google User” have been removed, or whether some still remain. I haven’t been able to discover a single one since the update.

Q: How do I know if I was affected by this action?

A: If, prior to my estimated date, you had reviews that had been left by profiles marked “A Google User,” and these reviews are now gone, that’s the diagnostic of why your total review count has dropped.

Q: The reviews I’ve lost weren’t from “A Google User” profiles. What happened?

A: If you’ve lost reviews from non-anonymous profiles, it’s time to investigate other causes of removal. These could include:

  • Having paid for or incentivized reviews, either directly or via an unethical marketer
  • Reviews stemming from a review station/kiosk at your business
  • Getting too many reviews at once
  • URLs, prohibited language, or other objectionable content in the body of reviews
  • Reviewing yourself, or having employees (past or present) do so
  • Reviews were left on your same IP (as in the case of free on-site Wi-Fi)
  • The use of review strategies/software that prohibit negative reviews or selectively solicit positive reviews
  • Any other violation of Google’s review guidelines
  • A Google bug, in which case, check the GMB forum for reports of similar review loss, and wait a few days to see if your reviews return; if not, you can take the time to post about your issue in the GMB forum, but chances are not good that removed reviews will be reinstated

Q: Is anonymous review removal a bug or a test?

A: One month later, these reviews remain absent. This is not a bug, and seems unlikely to be a test.

Q: Could my missing anonymous reviews come back?

A: Never say “never” with Google. From their inception, Google review counts have been wonky, and have been afflicted by various bugs. There have been cases in which reviews have vanished and reappeared. But, in this case, I don’t believe these types of reviews will return. This is most likely an action on Google’s part with the intention of improving their review corpus, which is, unfortunately, plagued with spam.

Q: What were the origins of “A Google User” reviews?

A: Reviews designated by this language came from a variety of scenarios, but are chiefly fallout from Google’s rollout of Google+ and then its subsequent detachment from local. As Mike Blumenthal explains:

As recently as 2016, Google required users to log in as G+ users to leave a review. When they transitioned away from + they allowed users several choices as to whether to delete their reviews or to create a name. Many users did not make that transition. For the users that chose not to give their name and make that transition Google identified them as “A Google User”… Also certain devices like the old BlackBerrys could leave a review but not a name. Also users left + and may have changed profiles at Google abandoning their old profiles. Needless to say there were many ways that these reviews became from “A Google User.”

Q: Is the removal of anonymous reviews a positive or negative thing? What’s Google trying to do here?

A: Whether this action has worked out well or poorly for you likely depends on the quality of the reviews you’ve lost. In some cases, the loss may have suddenly put you behind competitors, in terms of review count or rating. In others, the loss of anonymous negative reviews may have just resulted in your star rating improving — which would be great news!

As to Google’s intent with this action, my assumption is that it’s a step toward increasing transparency. Not their own transparency, but the accountability of the reviewing public. Google doesn’t really like to acknowledge it, but their review corpus is inundated with spam, some of it the product of global networks of bad actors who have made a business of leaving fake reviews. Personally, I welcome Google making any attempts to cope with this, but the removal of this specific type of anonymous review is definitely not an adequate solution to review spam when the livelihoods of real people are on the line.

Q: Does this Google update mean my business is now safe from anonymous reviews?

A: Unfortunately, no. While it does mean you’re unlikely to see reviews marked as being from “A Google User”, it does not in any way deter people from creating as many Google identities as they’d like to review your business. Consider:

  • Google’s review product has yet to reach a level of sophistication which could automatically flag reviews left by “Rocky Balboa” or “Whatever Whatever” as, perhaps, somewhat lacking in legitimacy.
  • Google’s product also doesn’t appear to suspect profiles created solely to leave one-time reviews, though this is a clear hallmark of many instances of spam
  • Google won’t remove text-less negative star ratings, despite owner requests
  • Google hasn’t been historically swayed to remove reviews on the basis of the owner claiming no records show that a negative reviewer was ever a customer

Q: Should Google’s removal of anonymous reviews alter my review strategy?

A: No, not really. I empathize with the business owners expressing frustration over the loss of reviews they were proud of and had worked hard to earn. I see actions like this as important signals to all local businesses to remember that you don’t own your Google reviews, you don’t own your Google My Business listing/Knowledge Panel. Google owns those assets, and manages them in any way they deem best for Google.

In the local SEO industry, we are increasingly seeing the transformation of businesses from the status of empowered “website owner” to the shakier “Google tenant,” with more and more consumer actions taking place within Google’s interface. The May removal of reviews should be one more nudge to your local brand to:

  • Be sure you have an ongoing, guideline-compliant Google review acquisition campaign in place so that reviews that become filtered out can be replaced with fresh reviews
  • Take an active approach to monitoring your GMB reviews so that you become aware of changes quickly. Software like Moz Local can help with this, especially if you own or market large, multi-location enterprises. Even when no action can be taken in response to a new Google policy, awareness is always a competitive advantage.
  • Diversify your presence on review platforms beyond Google
  • Collect reviews and testimonials directly from your customers to be placed on your own website; don’t forget the Schema markup while you’re at it (a minimal markup sketch follows this list)
  • Diversify the ways in which you are cultivating positive consumer sentiment offline; word-of-mouth marketing, loyalty programs, and the development of real-world relationships with your customers is something you directly control
  • Keep collecting those email addresses and, following the laws of your country, cultivate non-Google-dependent lines of communication with your customers
  • Invest heavily in hiring and training practices that empower staff to offer the finest possible experience to customers at the time of service — this is the very best way to ensure you are building a strong reputation both on and offline
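
On the Schema point above, here’s a minimal sketch of what on-site review markup could look like, generated as JSON-LD with a bit of Python. The business name, reviewer names, and ratings are invented for illustration, and you should check schema.org and Google’s current structured data guidelines before publishing anything like this:

```python
import json

# Hypothetical testimonials collected directly from customers for the business's own site.
reviews = [
    {"author": "Pat C.", "rating": 5, "body": "Fast, friendly service."},
    {"author": "Lee R.", "rating": 4, "body": "Good value, short wait."},
]

average = round(sum(r["rating"] for r in reviews) / len(reviews), 2)

# schema.org LocalBusiness with Review and AggregateRating properties, serialized
# as JSON-LD for a <script type="application/ld+json"> tag on the testimonials page.
markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Taqueria",  # invented business name
    "review": [
        {
            "@type": "Review",
            "author": {"@type": "Person", "name": r["author"]},
            "reviewRating": {"@type": "Rating", "ratingValue": r["rating"], "bestRating": 5},
            "reviewBody": r["body"],
        }
        for r in reviews
    ],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": average,
        "reviewCount": len(reviews),
    },
}

print(json.dumps(markup, indent=2))
```

The printed JSON can be dropped into the testimonials page; the point is simply that reviews you collect yourself can be marked up on your own site, independent of Google.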

Q: So, what should Google do next about review spam?

A: A Google rep once famously stated,

“The wiki nature of Google Maps expands upon Google’s steadfast commitment to open community.”

I’d welcome your opinions as to how Google should deal with review spam, as I find this a very hard question to answer. It may well be a case of trying to lock the barn door after the horse has bolted, and Google’s wiki mentality applied to real-world businesses is one with which our industry has contended for years.

You see, the trouble with Google’s local product is that it was never opt-in. Whether you list your business or not, it can end up in Google’s local business index, and that means you are open to reviews (positive, negative, and fallacious) on the most visible possible platform, like it or not. As I’m not seeing a way to walk this back, review spam should be Google’s problem to fix, and they are obliged to fix it if:

  • They are committed to their own earnings, based on the trust the public feels in their review corpus
  • They are committed to user experience, implementing necessary technology and human intervention to protect consumers from fake reviews
  • They want to stop treating the very businesses on whom their whole product is structured as unimportant in the scheme of things; companies going out of business due to review spam attacks really shouldn’t be viewed as acceptable collateral damage

Knowing that Alphabet has an estimated operating income of $7 billion for 2018, I believe Google could fund these safeguards:

  1. Take a bold step and resource human review mediators. Make this a new department within the local department. Google sends out lots of emails to businesses now. Let them all include clear contact options for reaching the review mediation department if the business experiences spam reviews. Put the department behind a wizard that walks the business owner through guidelines to determine if a review is truly spam, and if this process signals a “yes,” open a ticket and fix the issue. Don’t depend on volunteers in the GMB forum. Invest money in paid staff to maintain the quality of Google’s own product.
  2. If Google is committed to the review flagging process (which is iffy, at best), offer every business owner clear guidelines for flagging reviews within their own GMB dashboard, and then communicate about what is happening to the flagged reviews.
  3. Improve algorithmic detection of suspicious signals, like profiles with one-off reviews, the sudden influx of negative reviews and text-less ratings, global reviews within a single profile, and companies or profiles with a history of guideline violations. Hold the first few reviews left by any profile in a “sandbox,” à la Yelp.
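
No one outside Google knows how (or whether) detection like that in point 3 is implemented. Purely to make the idea concrete, here’s a toy Python sketch that scores a reviewer profile on a few of those suspicious signals; the fields, thresholds, and weights are all invented:

```python
from dataclasses import dataclass

@dataclass
class ReviewerProfile:
    """Invented reviewer-profile fields; not a real Google data structure."""
    total_reviews: int
    countries_reviewed_in: int
    textless_ratings: int
    prior_guideline_violations: int

def suspicion_score(p: ReviewerProfile) -> int:
    """Tally simple red flags; higher scores might merit a Yelp-style 'sandbox' hold."""
    score = 0
    if p.total_reviews == 1:                       # one-off profile created for a single review
        score += 2
    if p.countries_reviewed_in > 3:                # globe-spanning reviews within one profile
        score += 2
    if p.textless_ratings > p.total_reviews // 2:  # mostly text-less star ratings
        score += 1
    if p.prior_guideline_violations > 0:           # history of guideline violations
        score += 3
    return score

for profile in [
    ReviewerProfile(1, 1, 1, 0),
    ReviewerProfile(40, 6, 25, 2),
]:
    print(profile, "->", suspicion_score(profile))
```

A real system would obviously need far richer signals (and human review of anything it flags), but even simple scoring like this illustrates the kind of triage I’m asking for.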

Now it’s your turn! Let’s look at Google’s removal of “A Google User” reviews as a first step in the right direction. If you had Google’s ear, what would you suggest they do next to combat review spam? I’d really like to know.


Location Data + Reviews: The 1–2 Punch of Local SEO

Posted by MiriamEllis

My father, a hale and hearty gentleman in his seventies, simply won’t dine at a new restaurant these days before he checks its reviews on his cell phone. Your 23-year-old nephew, who travels around the country for his job as a college sports writer, has devoted 233 hours of his young life to writing 932 reviews on Yelp (932 reviews × roughly 15 minutes per review).

Yes, our local SEO industry knows that my dad and your nephew need to find accurate NAP (name, address, phone number) on local business listings to actually find and get to business locations. This is what makes our historic focus on citation data management totally reasonable. But reviews are what help a business to be chosen. Phil Rozek kindly highlighted a comment of mine as being among the most insightful on the Local Search Ranking Factors 2017 survey:

“If I could drive home one topic in 2017 for local business owners, it would surround everything relating to reviews. This would include rating, consumer sentiment, velocity, authenticity, and owner responses, both on third-party platforms and native website reviews/testimonials pages. The influence of reviews is enormous; I have come to see them as almost as powerful as the NAP on your citations. NAP must be accurate for rankings and consumer direction, but reviews sell.”

I’d like to take a few moments here to dive deeper into that list of review elements. It’s my hope that this post is one you can take to your clients, team or boss to urge creative and financial allocations for a review management campaign that reflects the central importance of this special form of marketing.

Ratings: At-a-glance consumer impressions and impactful rankings filter

Whether they’re stars or circles, the majority of rating icons send a 1–5 point signal to consumers that can be instantly understood. This symbol system has been around since at least the 1820s; it’s deeply ingrained in all our brains as a judgement of value.

So, when a modern Internet user is making a snap decision, like where to grab a taco, the food truck with 5 Yelp stars is automatically going to look more appealing than the one with only 2. Ratings can also catch the eye when Schema (or Google serendipity) causes them to appear within organic SERPs or knowledge panels.

All of the above is well-understood, but while the exact impact of high star ratings on local pack rankings has long been speculative (it’s only factor #24 in this year’s Local Search Ranking Factors), we may have just reached a new day with Google. The ability to filter local finder results by rating has been around for some time, but in May, Google began testing the application of a “highly rated” snippet on hotel rankings in the local packs. Meanwhile, searches with the format of “best X in city” (e.g. best burrito in Dallas) appear to be defaulting to local results made up of businesses that have earned a minimum average of 4 stars. It’s early days yet, but totally safe for us to assume that Google is paying increased attention to numeric ratings as indicators of relevance.
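
To make that “best X in city” observation concrete, here’s a tiny sketch of the kind of filtering that appears to be happening: keep only businesses averaging at least 4 stars, then order by rating. This is my own illustration with made-up data, not Google’s actual logic:

```python
# Invented local-finder results: (business name, average rating, review count)
results = [
    ("Taqueria Uno", 4.6, 212),
    ("Taco Palace", 3.9, 88),
    ("El Camion", 4.2, 57),
    ("Fast Taco Chain", 2.1, 430),
]

MIN_RATING = 4.0  # the apparent cutoff for "best X in city" style queries

best = sorted(
    (r for r in results if r[1] >= MIN_RATING),
    key=lambda r: (r[1], r[2]),
    reverse=True,
)
for name, rating, count in best:
    print(f"{name}: {rating} stars ({count} reviews)")
```

If anything like this is in play, a sub-4-star business simply never enters the candidate set for those queries, no matter how strong its other signals are.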

Because we’re now reaching the point from which we can comfortably speculate that high ratings will tend to start correlating more frequently with high local rankings, it’s imperative for local businesses to view low ratings as the serious impediments to growth that they truly are. Big brands, in particular, must stop ignoring low star ratings, or they may find themselves not only having to close multiple store locations, but also, to be on the losing end of competing for rankings for their open stores when smaller competitors surpass their standards of cleanliness, quality, and employee behavior.

Consumer sentiment: The local business story your customers are writing for you

Here is a randomly chosen Google 3-pack result when searching just for “tacos” in a small city in the San Francisco Bay Area:

[Image: Google local 3-pack for a “tacos” search]

We’ve just been talking about ratings, and you can look at a result like this to get that instant gut feeling about the 4-star-rated eateries vs. the 2-star place. Now, let’s open the book on business #3 and see precisely what kind of story its consumers are writing. This is the first step towards doing a professional review audit for any business whose troubling reviews may point to future closure if problems aren’t fixed. A full audit would look at all relevant review platforms, but we’ll be brief here and just look at Google and Yelp and sort negative sentiments by type:

[Image: Negative review sentiment for business #3, from Google and Yelp, sorted by type]
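
If you want to rough out that first sorting pass programmatically, a minimal sketch might look like the following. The review snippets and theme keywords are invented, and a real audit still requires a human reading every review:

```python
from collections import Counter

# Hypothetical exported review snippets from Google and Yelp for one location.
reviews = [
    "Waited 25 minutes in the drive-thru, order was wrong again.",
    "Staff seemed untrained and couldn't answer basic questions.",
    "Slow service and the dining room was dirty.",
    "Wrong order twice in a row. Slow line.",
]

# Negative themes to tally; the keyword lists are illustrative only.
themes = {
    "slow service": ["slow", "waited", "wait", "line"],
    "order accuracy": ["wrong order", "order was wrong", "incorrect"],
    "staff training": ["untrained", "rude", "couldn't answer"],
    "cleanliness": ["dirty", "filthy", "mess"],
}

counts = Counter()
for text in reviews:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(k in lowered for k in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(reviews)} reviews")
```

Even a crude tally like this surfaces which complaints repeat often enough to form a theme worth fixing first.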

It’s easy to ding fast food chains. Their business model isn’t commonly associated with fine dining or the kind of high wages that tend to promote employee excellence. In some ways, I think of them as extreme examples. Yet, they serve as good teaching models for how even the most modest-quality offerings create certain expectations in the minds of consumers, and when those basic expectations aren’t met, it’s enough of a story for consumers to share in the form of reviews.

This particular restaurant location has an obvious problem with slow service, orders being filled incorrectly, and employees who have not been trained to represent the brand in a knowledgeable, friendly, or accessible manner. Maybe a business you are auditing has pain points surrounding outdated fixtures or low standards of cleanliness.

Whatever the case, when the incoming consumer turns to the review world, their eyes scan the story as it scrolls down their screen. Repeat mentions of a particular negative issue can create enough of a theme to turn the potential customer away. One survey says only 13% of people will choose a business that has wound up with a 1–2 star rating based on poor reviews. Who can afford to let the other 87% of consumers go elsewhere?

There are 20 restaurants showing up in Google’s local finder for my “tacos” search, highlighted above. Taco Bell is managing to hold the #3 spot in the local pack right now, perhaps due to brand authority. My question is, what happens next, particularly if Google is going to amplify ratings and review sentiment in the overall local ranking mix? Will this chain location continue to beat out 4-star restaurants with 100+ positive reviews, or will it slip down as consumers continue to chronicle specific and unresolved issues?

No third-party brand controls Google, but your brand can open the book right now and make maximum use of the story your customers are constantly publishing — for free. By taking review insights as real and representative of all the customers who don’t speak up, and by actively addressing repeatedly cited issues, you could be making one of the smartest decisions in your company’s history.

Velocity/recency: Just enough of a timely good thing

This is one of the easiest aspects of review management to teach clients. You can sum it up in one sentence: don’t get too many reviews at once on any given platform but do get enough reviews on an ongoing basis to avoid looking like you’ve gone out of business.

For a little more background on the first part of that statement, watch Mary Bowling describing in this LocalU video how she audited a law firm that went from zero to thirty 5-star reviews within a single month. Sudden gluts of reviews like this not only look odd to alert customers, but they can trip review platform filters, resulting in removal. Remember, reviews are a business lifetime effort, not a race. Get a few this month, a few next month, and a few the month after that. Keep going.

The second half of the review timing paradigm relates to not running out of steam in your acquisition campaigns. One survey found that 73% of consumers don’t believe that reviews that are older than 3 months are still relevant to them, yet you will frequently encounter businesses that haven’t earned a new review in over a year. It makes you wonder if the place is still in business, or if it’s in business but is so unimpressive that no one is bothering to review it.

While I’d argue that review recency may be more important in review-oriented industries (like restaurants) vs. those that aren’t quite as actively reviewed (like septic system servicing), the idea here is similar to that of velocity, in that you want to keep things going. Don’t run a big review acquisition campaign in January and then forget about outreach for the rest of the year. A moderate, steady pace of acquisition is ideal.
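
As a rough illustration of monitoring both halves of that timing picture, the sketch below takes a list of review dates (made up here), counts reviews per month, and checks how long it has been since the most recent one; the burst and staleness thresholds are arbitrary placeholders, not platform rules:

```python
from collections import Counter
from datetime import date

# Hypothetical review dates pulled from a single platform.
review_dates = [
    date(2018, 1, 3), date(2018, 1, 5), date(2018, 1, 9),
    date(2018, 2, 14), date(2018, 4, 2), date(2018, 4, 28),
]
today = date(2018, 7, 1)

per_month = Counter((d.year, d.month) for d in review_dates)
days_since_last = (today - max(review_dates)).days

BURST_THRESHOLD = 10   # arbitrary: many reviews in one month looks odd and may trip filters
STALE_THRESHOLD = 90   # arbitrary: roughly 3 months without a review starts to look stale

for (year, month), n in sorted(per_month.items()):
    flag = "  <- possible burst" if n >= BURST_THRESHOLD else ""
    print(f"{year}-{month:02d}: {n} reviews{flag}")

print(f"Days since last review: {days_since_last}"
      + ("  <- recency problem" if days_since_last > STALE_THRESHOLD else ""))
```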

Authenticity: Honesty is the only honest policy

For me, this is one of the most prickly and interesting aspects of the review world. Three opposing forces meet on this playing field: business ethics, business education, and the temptations engendered by the obvious limitations of review platforms to police themselves.

I recently began a basic audit of a family-owned restaurant for a friend of a friend. Within minutes, I realized that the family had been reviewing their own restaurant on Yelp (a glaring violation of Yelp’s policy). I felt sorry to see this, but being acquainted with the people involved (and knowing them to be quite nice!), I highly doubted they had done this out of some dark impulse to deceive the public. Rather, my guess was that they may have thought they were “getting the ball rolling” for their new business, hoping to inspire real reviews. My gut feeling was that they simply lacked the necessary education to understand that they were being dishonest with their community and how this could lead to them being publicly shamed by Yelp, if caught.

In such a scenario, there is definitely opportunity for the marketer to offer the necessary education to describe the risks involved in tying a brand to misleading practices, highlighting how vital it is to build trust within the local community. Fake positive reviews aren’t building anything real on which a company can stake its future. Ethical business owners will catch on when you explain this in honest terms and can then begin marketing themselves in smarter ways.

But then there’s the other side. Mike Blumenthal recently wrote of his discovery of the largest review spam network he’d ever encountered and there’s simply no way to confuse organized, global review spam with a busy small business making a wrong, novice move. Real temptation resides in this scenario, because, as Blumenthal states:

“Review spam at this scale, unencumbered by any Google enforcement, calls into question every review that Google has. Fake business listings are bad, but businesses with 20, or 50, or 150 fake reviews are worse. They deceive the searcher and the buying public and they stain every real review, every honest business, and Google.”

When a platform like Google makes it easy to “get away with” deception, companies lacking ethics will take advantage of the opportunity. All we can do, as marketers, is to offer the education that helps ethical businesses make honest choices. We can simply pose the question:

Is it better to fake your business’ success or to actually achieve success?

On a final note, authenticity is a two-way street in the review world. When spammers target good businesses with fake, negative reviews, this also presents a totally false picture to the consumer public. I highly recommend reading about Whitespark’s recent successes in getting fake Google reviews removed. No guarantees here, but excellent strategic advice.

Owner responses: Your contributions to the consumer story

In previous Moz blog posts, I’ve highlighted the five types of Google My Business reviews and how to respond to them, and I’ve diagrammed a real-world example of how a terrible owner response can make a bad situation even worse. If the world of owner responses is somewhat new to you, I hope you’ll take a gander at both of those. Here, I’d like to focus on a specific aspect of owner responses, as it relates to the story reviews are telling about your business.

We’ve discussed above the tremendous insight consumer sentiment can provide into a company’s pain points. Negative reviews can be a roadmap to resolving repeatedly cited problems. They are inherently valuable in this regard, and by dint of their high visibility, they carry the inherent opportunity for the business owner to make a very public showing of accountability in the form of owner responses. A business can state all it wants on its website that it offers lightning-quick service, but when reviews complain of 20-minute waits for fast food, which source do you think the average consumer will trust?

The truth is, the hypothetical restaurant has a problem. They’re not going to be able to resolve slow service overnight. Some issues are going to require real planning and real changes to overcome. So what can the owner do in this case?

  1. Whistle past the graveyard, claiming everything is actually fine now, guaranteeing further disappointed expectations and further negative reviews resulting therefrom?
  2. Be gutsy and honest, sharing exactly what realizations the business has had due to the negative reviews, what the obstacles are to fixing the problems, and what solutions the business is implementing to do their best to overcome those obstacles?

Let’s look at this in living color:

[Image: Two owner responses compared: a dismissive reply (yellow) and an accountable one (blue)]

In yellow, the owner response is basically telling the story that the business is ignoring a legitimate complaint, and frankly, couldn’t care less. In blue, the owner has jumped right into the storyline, having the guts to take the blame, apologize, explain what happened and promise a fix — not an instant one, but a fix on the way. In the end, the narrative is going to go on with or without input from the owner, but in the blue example, the owner is taking the steering wheel into his own hands for at least part of the road trip. That initiative could save not just his franchise location, but the brand at large. Just ask Florian Huebner:

“Over the course of 2013 customers of Yi-Ko Holding’s restaurants increasingly left public online reviews about “broken and dirty furniture,” “sleeping and indifferent staff,” and “mice running around in the kitchen.” Per the nature of a franchise system, to the typical consumer it was unclear that these problems were limited to this individual franchisee. Consequently, the Burger King brand as a whole began to deteriorate and customers reduced their consumption across all locations, leading to revenue declines of up to 33% for some other franchisees.”

Positive news for small businesses working like mad to compete: You have more agility to put initiatives into quick action than the big brands do. Companies with 1,000 locations may let negative reviews go unanswered because they lack a clear policy or hierarchy for owner responses, but smaller enterprises can literally turn this around in a day. Just sit down at the nearest computer, claim your review profiles, and jump into the story with the goal of hearing, impressing, and keeping every single customer you can.

Big brands: The challenge for you is larger, by dint of your size, but you’ve also likely got the infrastructure to make this task no problem. You just have to assign the right people to the job, with thoughtful guidelines for ensuring your brand is being represented in a winning way.

NAP and reviews: The 1–2 punch combo every local business must practice

When traveling salesman Duncan Hines first published his 1935 review guide Adventures in Good Eating, he was pioneering what we think of today as local SEO. Here is my color-coded version of his review of the business that would one day become KFC. It should look strangely familiar to every one of you who has ever tackled citation management:

[Image: Color-coded Duncan Hines review of the restaurant that would become KFC]

No phone number on this “citation,” of course, but then again telephones were quite a luxury in 1935. Barring that element, this simple and historic review has the core earmarks of a modern local business listing. It has location data and review data; it’s the 1–2 punch combo every local business still needs to get right today. Without the NAP, the business can’t be found. Without the sentiment, the business gives little reason to be chosen.

Are you heading to a team meeting today? Preparing to chat with an incoming client? Make the winning combo as simple as possible, like this:

  1. We’ve got to manage our local business listings so that they’re accessible, accurate, and complete. We can automate much of this (check out Moz Local) so that we get found.
  2. We’ve got to breathe life into the listings so that they act as interactive advertisements, helping us get chosen. We can do this by earning reviews and responding to them. This is our company heartbeat — our story.

From Duncan Hines to the digital age, there may be nothing new under the sun in marketing, but when you spend year after year looking at the sadly neglected review portions of local business listings, you realize you may have something to teach that is new news to somebody. So go for it — communicate this stuff, and good luck at your next big meeting!


Let’s make 2017 the year of honest reviews!

Reviews can be an important part of a local SEO strategy, but columnist Greg Gifford warns that overdoing it can make your business look shady.


Google updates local reviews schema guidelines

Now only reviews “directly produced by your site” can have local reviews markup, according to Google.


Leveraging social media for local SEO

Exploring new opportunities to integrate social media into your local brand’s SEO strategy? Columnist Lydia Jorden reviews how to elevate your local search presence using Snapchat and Quora.


SearchCap: Google Updates Local Guides, Bing Rewards On MSN & New AdWords Tests

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. From Search Engine Land: Google Using Points To Boost User Reviews, Beef Up Maps Content Nov 13, 2015 by Greg Sterling A couple of years ago, Google created a program in the mold of the…


Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I leave in my way.

Hence, I thought it was due time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much of an impact a single ranking position can have on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike SEO, however, very little research or theorizing has been published around what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Image credit: Josh Tuininga, Apptentive

Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across iOS and Android versions of the same apps to determine any differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics, both of which will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.
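
For anyone who wants to replicate these volatility calculations, here’s a minimal sketch with fabricated daily rank data for one app in each store; it computes day-over-day movement, its standard deviation, and the percent-change standardization described above:

```python
import statistics

# Hypothetical 7 days of rank positions for the same app in each store.
ranks = {
    "App Store":   [120, 85, 150, 95, 170, 110, 140],
    "Google Play": [130, 128, 133, 131, 129, 132, 130],
}

for store, series in ranks.items():
    daily_moves = [abs(b - a) for a, b in zip(series, series[1:])]           # ranks moved per day
    pct_changes = [abs(b - a) / a * 100 for a, b in zip(series, series[1:])]  # standardized as % change
    print(store)
    print("  max daily move:", max(daily_moves), "ranks")
    print("  volatility (std dev of daily moves):", round(statistics.stdev(daily_moves), 2))
    print("  max daily change:", round(max(pct_changes), 1), "%")
```

The percent-change step is what lets apps sitting at very different numeric ranks be compared on equal footing.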

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the higher an app is ranked—meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher ranked apps have more installs, active users, and ratings, and that it would take a large margin to produce a noticeable shift in any of these factors.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly more volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum—with a seemingly straight volatility line among Google Play’s Top 100 apps and very few blips within the App Store’s Top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as volatile as the highest ranked apps. (I say slightly as these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is pretty consistent across the App Store charts, while rank has a much greater impact on volatility at the top of the Google Play charts (ranks 1–100 have a 35% correlation) than it does at the bottom (ranks 401–500 have a 1% correlation).
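
The correlations cited throughout these studies are standard Pearson correlations. As a small reproducibility aid, here’s a sketch computing one from fabricated (rank, 24-hour movement) pairs, with no external libraries:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated sample points: (current rank, ranks moved in the last 24 hours)
sample = [(5, 1), (40, 3), (120, 8), (260, 14), (310, 22), (480, 31)]
ranks, moves = zip(*sample)
print(f"rank vs. volatility correlation: {pearson(ranks, moves):.0%}")
```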

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store against their average rating (both historic and current, for App Store apps). If it looks a little chaotic, it’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average top iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores; meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank, and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

In alteration of the above function, I’m using the age of a current app’s version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often as compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s Top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s Top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) of the ratings count as that of those apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger since the low end was further away from the midpoint for apps with more installs.
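
For clarity on that methodological choice, here’s a small sketch of taking the low end of an install-range string; the range formats shown are examples of the style Google Play displays, not scraped data:

```python
def install_floor(install_range: str) -> int:
    """Return the low end of an install-count range string like '500,000 - 1,000,000'."""
    low = install_range.split("-")[0]
    return int(low.replace(",", "").replace("+", "").strip())

for r in ["100,000 - 500,000", "500,000 - 1,000,000", "10,000,000 - 50,000,000"]:
    print(r, "->", install_floor(r))
```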

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and apps with older current versions are correlated with lower ratings.
  7. An app’s update frequency is negatively correlated with Google Play’s ranking volatility but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now re-write the function into a formula by weighing each of these four factors, where a, b, c, & d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)
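
Expressed as code, with each factor normalized to a rough 0–1 scale so the weights are comparable, the formula might look like the sketch below. The normalization caps and default weights are placeholders of my own, not measured values:

```python
def ranking_score(rating, rating_count, installs, trend,
                  a=0.1, b=0.4, c=0.3, d=0.2):
    """Toy weighted score; a, b, c, d are the unknown multipliers from the formula above."""
    # Normalize each factor to roughly 0-1 before weighting (illustrative scaling only).
    rating_n = rating / 5.0
    count_n = min(rating_count / 1_000_000, 1.0)    # cap at 1M ratings
    installs_n = min(installs / 10_000_000, 1.0)    # cap at 10M installs
    trend_n = max(min(trend, 1.0), 0.0)             # assume trend already expressed as 0-1
    return a * rating_n + b * count_n + c * installs_n + d * trend_n

# Two hypothetical apps: one highly rated but small, one huge with middling ratings.
print(ranking_score(rating=4.8, rating_count=12_000, installs=50_000, trend=0.9))
print(ranking_score(rating=3.9, rating_count=3_100_000, installs=8_000_000, trend=0.4))
```

With the placeholder weights favoring rating count and installs, the huge, middling-rated app wins handily, which is consistent with the correlations these studies surfaced.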

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.
