Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I find in my way.

Hence, I thought it was high time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much impact a single ranking position can have on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike SEO, however, very little research has been conducted into what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Image credit: Josh Tuininga, Apptentive

Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across iOS and Android versions of the same apps to determine any differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics:

Both of these assumptions will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.
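The standardization step described above can be sketched in a few lines of Python. The rank figures below are made-up illustrative numbers, not the study’s raw data:

```python
# Sketch: absolute vs. percent-change ranking volatility for one app.
# Sample ranks are illustrative only, not the study's observations.

def daily_moves(ranks):
    """Absolute day-over-day rank movement."""
    return [abs(today - yesterday) for yesterday, today in zip(ranks, ranks[1:])]

def percent_moves(ranks):
    """Day-over-day movement as a fraction of the previous day's rank,
    which controls for how deep in the charts the app sits."""
    return [abs(today - yesterday) / yesterday
            for yesterday, today in zip(ranks, ranks[1:])]

week_of_ranks = [120, 150, 95, 163, 140, 201, 130]  # 7 daily observations
print(max(daily_moves(week_of_ranks)))              # -> 71 (biggest absolute move)
print(round(max(percent_moves(week_of_ranks)), 2))  # -> 0.72 (biggest relative move)
```

Expressing each day’s movement relative to the previous rank is what lets a jump from rank 400 to 300 be compared fairly against a jump from rank 8 to 6.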

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the lower an app is ranked—meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher-ranked apps have more installs, active users, and ratings, and that it would take a larger shift in any of these factors to produce a noticeable change in rank.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly more volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum—with a seemingly straight volatility line among Google Play’s top 100 apps and very few blips within the App Store’s top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and a 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making the slightly lower-ranked apps more than five times as volatile as the highest-ranked apps. (I say slightly because these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is fairly consistent across the App Store charts, while rank has a much greater impact on volatility at the top of the Google Play charts (ranks 1–100 have a 35% correlation) than at the bottom (ranks 401–500 have a 1% correlation).
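The correlation figures quoted in these studies are standard Pearson coefficients. A minimal sketch, using invented data rather than the study’s, shows how such a number is produced:

```python
# Sketch: Pearson correlation between an app's rank and its 24-hour
# rank movement. The data points below are fabricated for illustration.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, in [-1, 1]."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ranks      = [10, 50, 120, 250, 320, 460]   # current chart position
volatility = [1, 2, 4, 9, 12, 15]           # ranks moved in the last 24 hours
print(round(pearson(ranks, volatility), 2))  # strongly positive here
```

A positive coefficient means volatility rises as the rank number grows (i.e., as apps sit lower in the charts), which is the pattern the study reports.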

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store against their average rating (both historic and current, for App Store apps). If it looks a little chaotic, that’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average top iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores, meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank, and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

In an alteration of the above function, I’m using the age of an app’s current version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps, in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) of the ratings counts of apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed, as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger, since the low end was further from the midpoint for apps with more installs.
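The low-end-of-the-bucket approach can be sketched as a small parsing helper. The exact string format Google Play uses for install ranges is an assumption here:

```python
# Sketch: taking the low end of a Google Play install-count bucket,
# as the study does. The "500,000 - 1,000,000" string format is an
# assumed example, not a documented API response.

def installs_floor(bucket: str) -> int:
    """Return the low end of an install-count range string."""
    low = bucket.split("-")[0]           # keep everything before the dash
    return int(low.replace(",", "").strip())

print(installs_floor("500,000 - 1,000,000"))  # -> 500000
print(installs_floor("10,000 - 50,000"))      # -> 10000
```

Because every app in a bucket is collapsed to the same floor value, the true install counts of the biggest apps are understated the most, which is why the author expects the real correlation to be somewhat stronger than the measured -35.5%.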

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play.
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and apps with older current versions are correlated with lower ratings.
  7. An app’s update frequency is negatively correlated with ranking volatility in Google Play but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now rewrite the function into a formula by weighting each of these four factors, where a, b, c, and d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)
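This weighted form can be expressed as a toy function. The weights are unknown in reality; the values below are placeholders, and in practice each factor would need to be normalized to a comparable scale before being combined:

```python
# A toy instance of the weighted ranking formula above. The weights
# a, b, c, d are placeholders, NOT measured values, and all inputs
# are assumed to be pre-normalized to a 0-1 scale.

def ranking_score(rating, rating_count, installs, trend,
                  a=0.5, b=2.0, c=1.5, d=1.0):
    """Higher score = stronger composite ranking signal."""
    return rating * a + rating_count * b + installs * c + trend * d

# Two hypothetical apps with normalized factor values:
print(round(ranking_score(0.9, 0.2, 0.3, 0.5), 2))  # -> 1.8, well rated, modest volume
print(round(ranking_score(0.7, 0.8, 0.9, 0.5), 2))  # -> 3.8, lower rated, huge volume
```

With Rating Count and Installs weighted most heavily, as the studies suggest, the lower-rated but high-volume app comes out ahead, which matches the pattern seen across both top charts.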

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Inverse Document Frequency and the Importance of Uniqueness

Posted by EricEnge

In my last column, I wrote about how to use term frequency analysis in evaluating your content vs. the competition’s. Term frequency (TF) is only one part of the TF-IDF approach to information retrieval. The other part is inverse document frequency (IDF), which is what I plan to discuss today.

Today’s post will use an explanation of how IDF works to show you the importance of creating content that has true uniqueness. There are reputation and visibility reasons for doing this, and it’s great for users, but there are also SEO benefits.

If you wonder why I am focusing on TF-IDF, consider these words from a Google article from August 2014: “This is the idea of the famous TF-IDF, long used to index web pages.” While the way that Google may apply these concepts is far more than the simple TF-IDF models I am discussing, we can still learn a lot from understanding the basics of how they work.

What is inverse document frequency?

In simple terms, it’s a measure of the rareness of a term. Conceptually, we start by measuring document frequency. It’s easiest to illustrate with an example, as follows:

IDF table

In this example, we see that the word “a” appears in every document in the document set. What this tells us is that it provides no value in telling the documents apart. It’s in everything.

Now look at the word “mobilegeddon.” It appears in 1,000 of the documents, or one thousandth of one percent of them. Clearly, this phrase provides a great deal more differentiation for the documents that contain it.

Document frequency measures commonness, and we prefer to measure rareness. The classic way that this is done is with a formula that looks like this:

idf equation: IDF(term) = log10(total documents / documents containing the term)

For each term we are looking at, we take the total number of documents in the document set and divide it by the number of documents containing our term. This gives us more of a measure of rareness. However, we don’t want the resulting calculation to say that the word “mobilegeddon” is 1,000 times more important in distinguishing a document than the word “boat,” as that is too big of a scaling factor.

This is the reason we take the log base 10 of the result, to dampen that calculation. For those of you who are not mathematicians, you can loosely think of the log base 10 of a number as a count of its zeros—i.e., the log base 10 of 1,000,000 is 6, and the log base 10 of 1,000 is 3. So instead of saying that the word “mobilegeddon” is 1,000 times more important, this type of calculation suggests it’s three times more important, which is more in line with what makes sense from a search engine perspective.

With this in mind, here are the IDF values for the terms we looked at before:

idf table logarithm values

Now you can see that we are providing the highest score to the term that is the rarest.
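The calculation above fits in a couple of lines of Python. The 100-million total is implied by the running example (1,000 documents being one thousandth of one percent of the set), not a figure from any real index:

```python
# Sketch of the IDF calculation described above: document rarity,
# dampened with log base 10. The 100M total is implied by the
# example's percentages, not a real corpus size.
import math

def idf(total_docs, docs_with_term):
    return math.log10(total_docs / docs_with_term)

total = 100_000_000
print(idf(total, total))       # "a" appears everywhere    -> 0.0
print(idf(total, 1_000_000))   # a fairly common word      -> 2.0
print(idf(total, 1_000))       # "mobilegeddon", very rare -> 5.0
```

A term found in every document scores zero, confirming that it carries no power to tell documents apart, while the rarest term earns the highest score.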

What does the concept of IDF teach us?

Think about IDF as a measure of uniqueness. It helps search engines identify what it is that makes a given document special. This needs to be much more sophisticated than how often you use a given search term (e.g. keyword density).

Think of it this way: if you are one of 6.78 million web sites that come up for the search query “super bowl 2015,” you are dealing with a crowded playing field. Your chances of ranking for this term based on the quality of your content alone are pretty much zero.

massive number of results for broad keyword

Overall link authority and other signals will be the only way you can rank for a term that competitive. If you are a new site on the landscape, well, perhaps you should chase something else.

That leaves us with the question of what you should target. How about something unique? Even the addition of a simple word like “predictions”—changing our phrase to “super bowl 2015 predictions”—reduces this playing field to 17,800 results.

Clearly, this is dramatically less competitive already. Slicing into this further, the phrase “super bowl 2015 predictions and odds” returns only 26 pages in Google. See where this is going?

What IDF teaches us is the importance of uniqueness in the content we create. Yes, it won’t pay you nearly as much as ranking for the big head term would, but if your business is a new entrant into a very crowded space, you are not going to rank for the big head term anyway.

If you can pick out a smaller number of terms with much less competition and create content around those needs, you can start to rank for these terms and get money flowing into your business. This is because you are making your content more unique by using rarer combinations of terms (leveraging what IDF teaches us).

Summary

People who do keyword analysis are often wired to pursue the major head terms directly, simply based on the available keyword search volume. The result from this approach can, in fact, be pretty dismal.

Understanding how inverse document frequency works helps us understand the importance of standing out. Creating content that brings unique angles to the table is often a very potent way to get your SEO strategy kick-started.

Of course, the reasons for creating content that is highly differentiated and unique go far beyond SEO. It’s good for your users, good for your reputation and visibility, and good for your SEO.


Local Centroids are Now Individual Users: How Can We Optimize for Their Searches?

Posted by MiriamEllis

“Google is getting better at detecting location at a more granular level—even on the desktop. The user is the new centroid.” – David Mihm

The history of the centroid

The above quote succinctly summarizes the current state of affairs for local business owners and their customers. The concept of a centroid—a central point of relevance—is almost as old as local search. In 2008, people like Mike Blumenthal and Google Maps Manager Carter Maslan were sharing statistics like this:

“…research indicates that up to 80% of the variation in rank can be explained by distance from the centroid on certain searches.”

At that time, businesses located near town hall or a similar central hub appeared to be experiencing a ranking advantage.

Fast forward to 2013, and Mike weighed in again with an updated definition of “industry centroids”:

“If you read their (Google’s) patents, they actually deal with the center of the industries … as defining the center of the search. So if all the lawyers are on the corner of Main and State, that typically defines the center of the search, rather than the center of the city… it isn’t even the centroid of the city that matters. It matters that you are near where the other people in your industry are.”

In other words, Google’s perception of a centralized location for auto dealerships could be completely different from that for medical practices, and neither might be located anywhere near the city center.

While the concepts of city and industry centroids may still play a part in some searches, local search results in 2015 clearly indicate Google’s shift toward deeming the physical location of the desktop or mobile user a powerful factor in determining relevance. The relationship between where your customer is when he performs a search and where your business is physically located has never been more important.

Moreover, in this new, user-centric environment, Google has moved beyond simply detecting cities to detecting neighborhoods and even streets. What this means for local business owners is that your hyperlocal information has become a powerful component of your business data. This post will teach you how to better serve your most local customers.

Seeing the centroid in action

If you do business in a small town with few competitors, ranking for your product/service + city terms is likely to cover most of your bases. The user-as-centroid phenomenon is most applicable in mid-to-large sized towns and cities with reasonable competition. I’ll be using two districts in San Francisco—Bernal Heights and North Beach—in these illustrations and we’ll be going on a hunt for pizza.

On a desktop, searching for “pizza north beach san francisco” or setting my location to this neighborhood and city while searching for the product, Google will show me something like this:

Performing this same search, but with “bernal heights” substituted, Google shows me pizzerias in a completely different part of the city:

local result bernal heights pizza san francisco

And, when I move over to my mobile device, Google narrows the initial results down to just three enviable players in each district. These simple illustrations demonstrate Google’s increasing sensitivity to serving me nearby businesses offering what I want.

The physical address of your business is the most important factor in serving the user as centroid. This isn’t something you can control, but there are things you can do to market your business as being highly relevant to your hyperlocal geography.

Specialized content for the user-centroid

We’ll break this down into four common business models to help get you thinking about planning content that serves your most local customers.

1. Single-location business

Make the shift toward viewing your business not just as “Tony’s Pizza in San Francisco”, but as “Tony’s Pizza in North Beach, San Francisco”. Consider:

  • Improving core pages of your website or creating new pages to include references to the proud part you play in the neighborhood scene. Talk about the history of your area and where you fit into that.
  • Interview locals and ask them to share their memories about the neighborhood and what they like about living there.
  • Showcase your participation in local events.
  • Plan an event, contest or special for customers in your district.
  • Take pictures, label them with hyperlocal terms, post them on your site and share them socially.
  • Blog about local happenings that are relevant to you and your customers, such as a street market where you buy the tomatoes that top your pizzas or a local award you’ve won.
  • Depending on your industry, there will be opportunities for hyperlocal content specific to your business. For example, a restaurant can make sure its menu is in crawlable text and can name some favorite dishes after the neighborhood—The Bernal Heights Special. Meanwhile, a spa in North Beach can create a hyperlocal name for a service—The North Beach Organic Spa Package. Not only does this show district pride, but customers may mention these products and services by name in their reviews, reinforcing your local connection.

2. Multi-location business within a single city

All that applies to the single location applies to you, too, but you’ve got to find a way to scale building out content for each neighborhood.

  • If your resources are strong, build a local landing page for each of your locations, including basic optimization for the neighborhood name. Meanwhile, create blog categories for each neighborhood and rotate your efforts on a week-by-week basis: the first week, blog about neighborhood A; the next week, find something interesting to write about concerning neighborhood B. Over time, you’ll have developed a nice body of content proving your involvement in each district.
  • If you’re short on resources, you’ll still want to build out a basic landing page for each of your stores in your city and make the very best effort you can to showcase your neighborhood pride on these pages.

3. Multiple businesses, multiple cities

Again, scaling this is going to be key and how much you can do will depend upon your resources.

  • The minimum requirement will be a landing page on the site for each physical location, with basic optimization for your neighborhood terms.
  • Beyond this, you’ll be making a decision about how much hyperlocal content you can add to the site/blog for each district, or whether time can be utilized more effectively via off-site social outreach. If you’ve got lots of neighborhoods to cover in lots of different cities, designating a social representative for each store and giving them the keys to your profiles (after a training session in company policies) may make the most sense.

4. Service area businesses (SABs)

Very often, service area businesses are left out in the cold with various local developments, but in my own limited testing, Google is applying at least some hyperlocal care to these business models. I can search for a neighborhood plumber, just as I would a pizza:

[Screenshot: local search results for “plumber bernal heights san francisco”]

To be painstakingly honest, plumbers are going to have to be pretty ingenious to come up with a ton of engaging industry/neighborhood content and may be confined mainly to creating some decent service area landing pages that share a bit about their work in various neighborhoods. Other business models, like contractors, home staging firms and caterers should find it quite easy to talk about district architecture, curb appeal and events on a hyperlocal front.

While your SAB is still unlikely to beat out a competitor with a physical location in a given neighborhood, you still have a chance to associate your business with that area of your town with well-planned content.


Need creative inspiration for the writing projects ahead?
Don’t miss this awesome wildcard search tip Mary Bowling shared at LocalUp. Add an underscore or asterisk to your search terms and just look at the good stuff Google will suggest to you:

[Screenshot: wildcard search suggestions as content ideas]

Does Tony’s patio make his business one of Bernal Heights’ dog-friendly restaurants, or does his rooftop view make his restaurant the most picturesque lunch spot in the district? If so, he’s got two new topics to write about, either on his basic landing pages or his blog.

Hop over to Whitespark’s favorite takeaways from Mike Ramsey’s LocalUp presentation, too.

Citations and reviews with the user centroid in mind

Here are the basics about citations, broken into the same four business models:

1. Single-location business

You get just one citation on each platform, unless you have multiple departments or practitioners. That means one Google+ Local page, one Yelp profile, one Best of the Web listing, etc. You do not get one citation for your city and another for your neighborhood. Very simple.

2. Multi-location business within a single city

As with the single location business, you are entitled to just one set of citations per physical location. That means one Google+ Local listing for your North Beach pizza place and another for your restaurant in Bernal Heights.

A regular FAQ here in the Moz Q&A Forum relates to how Google will differentiate between two businesses located in the same city. Here are some tips:

  • Google no longer supports the use of modifiers in the business name field, so you can no longer be Tony’s Pizza – Bernal Heights, unless your restaurant is actually named this. You can only be Tony’s Pizza.
  • Facebook’s policies are different from Google’s. To my understanding, Facebook won’t permit you to build more than one Facebook Place for the identical brand name. Thus, to comply with their guidelines, you must differentiate by using those neighborhood names or other modifiers. Given that this same rule applies to all of your competitors, it shouldn’t be seen as a danger to your NAP consistency; apparently, no multi-location business creating Facebook Places will have 100% consistent NAP. The playing field is, then, even.
  • The correct place to differentiate your businesses on all other platforms is in the address field. Google will understand that one of your branches is on A St. and the other is on B St. and will choose which one they feel is most relevant to the user.
  • Google is not a fan of call centers. Unless it’s absolutely impossible to do so, use a unique local phone number for each physical location to prevent mix-ups on Google’s part, and use this number consistently across all web-based mentions of the business.
  • Though you can’t put your neighborhood name in the title, you can definitely include it in the business description field most citation platforms provide.
  • Link your citations to their respective local landing pages on your website, not to your homepage.

3. Multiple businesses, multiple cities

Everything in business model #2 applies to you as well. You are allowed one set of citations for each of your physical locations, and while you can’t modify your Google+ Local business name, you can mention your neighborhood in the description. Promote each location equally in all you do and then rely on Google to separate your locations for various users based on your addresses and phone numbers.

4. SABs

You are exactly like business model #1 when it comes to citations, with the exception of needing to abide by Google’s rules about hiding your address if you don’t serve customers at your place of business. Don’t build out additional citations for neighborhoods you serve, other cities you serve or various service offerings. Just create one citation set. You should be fine mentioning some neighborhoods in your citation descriptions, but don’t go overboard on this.

When it comes to review management, you’ll be managing unique sets of reviews for each of your physical locations. One method for preventing business owner burnout is to manage each location in rotation. One week, tend to owner responses for Business A. Do Business B the following week. In week three, ask for some reviews for Business A and do the same for B in week four. Vary the tasks and take your time unless faced with a sudden reputation crisis.
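The four-week rotation described above generalizes to any number of locations. As a minimal sketch (the task wording and location names are my own illustration, not from any review platform or tool), a few lines of Python can lay out the weekly schedule:

```python
from itertools import product

def review_rotation(businesses, weeks):
    """Yield (week_number, business, task) tuples following the rotation
    described above: cycle through owner responses for each location
    first, then review requests for each location, then repeat."""
    tasks = ["respond to recent reviews", "ask customers for reviews"]
    # Build one full cycle: every location gets the first task,
    # then every location gets the second task.
    cycle = [(biz, task) for task, biz in product(tasks, businesses)]
    for week in range(1, weeks + 1):
        biz, task = cycle[(week - 1) % len(cycle)]
        yield week, biz, task

# Example: two pizzerias, covering the four-week cycle from the post
for week, biz, task in review_rotation(["North Beach", "Bernal Heights"], 4):
    print(f"Week {week}: {task} for the {biz} location")
```

With more locations, the cycle simply lengthens; the point is to vary the tasks and pace yourself rather than tackle every profile at once.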

You can take some additional steps to “hyperlocalize” your review profiles:

  • Write about your neighborhood in the business description on your profile.
  • You can’t compel random customers to mention your neighborhood, but you can certainly do so from time to time when you write responses. “We’ve just installed the first soda fountain Bernal Heights has seen since 1959. Come have a cool drink on us this summer.”
  • Offer a neighborhood special to people who bring in a piece of mail with their address on it. Prepare a little handout for all-comers, highlighting a couple of review profiles where you’d love to hear how they liked the Bernal Heights special. Or, gather email addresses if possible and follow up via email shortly after the time of service.
  • If your business model is one that permits you to name your goods or service packages, don’t forget the tip mentioned earlier about thinking hyperlocal when brainstorming names. Pretty cool if you can get your customers talking about how your “North Beach Artichoke Pizza” is the best pie in town!

Investigate your social-hyperlocal opportunities

I still consider website-based content publication to be more than half the battle in ranking locally, but sometimes, real-time social outreach can accomplish things static articles or scheduled blog posts can’t. The amount of effort you invest in social outreach should be based on your resources and an assessment of how naturally your industry lends itself to socialization. Fire insurance salesmen are going to find it harder to light up their neighborhood community than yoga studios will. Consider your options.

Remember that you are investigating each opportunity to see how it stacks up not just to promoting your location in your city, but in your neighborhood.

Who are the people in your neighborhood?

Remember that Sesame Street jingle? It hails from a time when urban dwellers strongly identified with a certain district of their hometown. People were “from the neighborhood.” If my grandfather was a Mission District fella, maybe yours was from Chinatown. Now, we’re shifting in fascinating directions. Even as we’ve settled into telecommuting to jobs in distant states or countries, Amazon is offering one-hour home delivery to our neighbors in Manhattan. Doctors are making house calls again! Any day now, I’m expecting a milkman to start making his rounds around here. Commerce has stretched to span the globe, and now it’s zooming in to meet the needs of the family next door.

If the big guys are setting their sights on near-instant services within your community, take note.
You live in that community. You talk, face-to-face, with your neighbors every day and know the flavor of the local scene better than any remote competitor can right now.

Now is the time to reinvigorate that old neighborhood pride in the way you’re visualizing your business, marketing it and personally communicating to customers that you’re right there for them.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


The Future of Link Building

Posted by Paddy_Moogan

Building the types of links that help grow your online business and organic search traffic is getting harder. It used to be fairly straightforward, back before Google worked out how to treat links with different levels of quality and trust. However, the fact that it’s getting harder doesn’t mean that it’s dead.

What does the future hold?

I’m going to talk about links, but the truth is, the future isn’t really about the links. It is far bigger than that.

Quick sidenote: I’m aware that doing a blog post about the future of link building the week of a likely Penguin update could leave me with egg on my face! But we’ll see what happens.

Links will always be a ranking factor in some form or another. I can see the dials being turned down or off on certain aspects of links (more on that below), but I think they will always be there. Google is always looking for more data, more signals, more indicators of whether or not a certain page is a good result for a user at a certain moment in time. They will find them too, as we can see from patents such as this. A natural consequence is that other signals may be diluted or even replaced as Google becomes smarter and understands the web and users a lot better.

What this means for the future is that the links valued by Google will be the ones you get as a result of having a great product and great marketing. Essentially, links will be symptomatic of amazing marketing. Hat tip to Jess Champion, from whom I’ve borrowed this term.

This isn’t easy, but it shouldn’t be. That’s the point.

To go a bit further, I think we also need to think about the bigger picture. In the grand scheme of things, there are so many more signals that Google can use which, as marketers, we need to understand and use to our advantage. Google is changing and we can’t bury our heads in the sand and ignore what is going on.

A quick side note on spammy links

My background is a spammy one, so I can’t help but address this quickly. Spam will continue to work for short-term hits and churn-and-burn websites. I’ve talked before about my position on this, so I won’t go into too much more detail here. I will say, though, that those people who are in the top 1% of spammers will continue to make money, but even for them, it will be hard to maintain over a long period of time.

Let’s move onto some more of the detail around my view of the future by first looking at the past and present.

What we’ve seen in the past

Google didn’t understand links.

The fundamental issue that Google had for a long, long time was that they didn’t understand enough about links. They didn’t understand things such as:

  • How much to trust a link
  • Whether a link was truly editorially given or not
  • Whether a link was paid for or not
  • If a link was genuinely high quality (PageRank isn’t perfect)
  • How relevant a link was

Whilst they still have work to do on all of these, they have gotten much better in recent years. At one time, a link was a link and it was pretty much a case of whoever had the most links, won. I think that for a long time, Google was trying very hard to understand links and find which ones were high quality, but there was so much noise that it was very difficult. I think that eventually they realised that they had to attack the problem from a different angle, and Penguin came along. So instead of focusing on finding the “good” signals of links, they focused on finding the “bad” signals and started to take action on them. This didn’t fix everything, but it did enough to shock our industry into moving away from certain tactics and, therefore, has probably helped reduce a lot of the noise that Google was seeing.

What we’re seeing right now

Google is understanding more about language.

Google is getting better at understanding everything. Hummingbird was just the start of what Google hopes to achieve on this front, and it stands to reason that the same kind of technology that helps the following query work will also help Google understand links better.

Not many people in the search industry said much when Google hired this guy back in 2012. We can be pretty sure that it’s partly down to his work that we’re seeing the type of understanding of language that we are. His work has only just begun, though, and I think we’ll see more queries like the one above that just shouldn’t work, but they do. I also think we’ll see more instances of Googlers not knowing why something ranks where it does.

Google is understanding more about people.

I talk about this a little more below but to quickly summarise here, Google is learning more about us all the time. It can seem creepy, but the fact is that Google wants as much data as possible from us so that they can serve more relevant search results—and advertising of course. They are understanding more that the keywords we type into Google may not actually be what we want to find, nor are those keywords enough to find what we really want. Google needs more context.

Tom Anthony has talked about this extensively, so I won’t go into loads more detail. But to bring it back to link building, it is important to be aware of this because it means that there are more and more signals that could mean the dial on links gets turned down a bit more.

Some predictions about the future

I want to make a few things more concrete about my view of the future for link building, so let’s look at a few specifics.

1. Anchor text will matter less and less

Anchor text as a ranking signal was always something that worked well in theory but not in practice. Even in my early days of link building, I couldn’t understand why Google put so much weight behind this one signal. My main reason for this view was that using exact-match keywords in a link was not natural for most webmasters. I’d go as far as to say the only people who used it were SEOs!

I don’t think we’re at a point yet where anchor text as a ranking signal is dead, and it will take some more time for Google to turn down the dial. But we definitely are at a point where you can get hurt pretty badly if you have too much commercial anchor text in your link profile. It just isn’t natural.

In the future, Google won’t need this signal. They will be much better at understanding the content of a page and importantly, the context of a page.

2. Deep linking will matter less and less

I was on the fence about this one for a long time but the more I think about it, the more I can see this happening. I’ll explain my view here by using an example.

Let’s imagine you’re an eCommerce website and you sell laptops. Obviously, each laptop you sell will have its own product page, and if you sell different types, you’ll probably have category pages too. With a product like laptops, chances are that other retailers sell the same ones with the same specifications and probably have very similar-looking pages to yours. How does Google know which one to rank above the others?

Links to these product pages can work fine but, in my opinion, are a bit of a crude way of working it out. I think that Google will get better at understanding the subtle differences in queries from users, which will naturally mean that deep links to these laptop pages will be just one of many signals they can use.

Take these queries and the context behind each:

  • “laptop reviews” — Context: I want to buy a laptop but I don’t know which one.
  • “asus laptop reviews” — Context: I like the sound of Asus; I want to read more about their laptops.
  • “sony laptop reviews” — Context: I also like the sound of Sony; I want to read more about their laptops.
  • “sony vs asus laptop” — Context: I’m confused; they both sound the same, so I want a direct comparison to help me decide.
  • “asus laptop” — Context: I want an Asus laptop.

You can see how the mindset of the user has changed over time, and we can easily imagine how the search results will have changed to reflect this. Google already understands this. There are other signals coming into play here too, though. Consider these bits of additional information that Google can gather about us:

  • Location: I’m on a bus in London, I may not want to buy a £1,000 laptop right now but I’ll happily research them.
  • Device: I’m on my iPhone 6, I may not want to input credit card details into it and I worry that the website I’m using won’t work well on a small screen.
  • Search history: I’ve searched for laptops before and visited several retailers, but I keep going back to the same one as I’ve ordered from them before.

These are just a few that are easy for us to imagine Google using. There are loads more that Google could look at, not to mention signals from the retailers themselves, such as secure websites, user feedback, third-party reviews, trust signals, etc.

When you start adding all of these signals together, it’s pretty easy to see why links to a specific product page may not be the strongest signal for Google to use when determining rankings.

Smaller companies will be able to compete more.

One of the things I loved about SEO when I first got into it was the fact that organic search felt like a level playing field. I knew that with the right work, I could beat massive companies in the search results and not have to spend a fortune doing it. Suffice to say, things have changed quite a bit now and there are some industries where you stand pretty much zero chance of competing unless you have a very big budget to spend and a great product.

I think we will see a shift back in the other direction and smaller companies with fewer links will be able to rank for certain types of queries with a certain type of context. As explained above, context is key and allows Google to serve up search results that meet the context of the user. This means that massive brands are not always going to be the right answer for users and Google have to get better at understanding this. Whether a company is classified as a “brand” or not can be subjective. My local craft beer shop in London is the only one in the world and if you were to ask 100 people if they’d heard of it, they’d all probably say no. But it’s a brand to me because I love their products, their staff are knowledgeable and helpful, their marketing is cool and I’d always recommend them.

Sometimes, showing the website of this shop above bigger brands in search results is the right thing to do for a user. Google need lots of additional signals beyond “branding” and links in order to do this but I think they will get them.

What all of this means for us

Predicting the future is hard, knowing what to do about it is pretty hard too! But here are some things that I think we should be doing.

  1. Ask really hard questions
    Marketing is hard. If you or your client wants to compete and win customers, then you need to be prepared to ask really hard questions about the company. Here are just a few that I’ve found difficult when talking to clients:

    • Why does the company exist? (A good answer has nothing to do with making money)
    • Why do you deserve to rank well in Google?
    • What makes you different to your competitors?
    • If you disappeared from Google tomorrow, would anyone notice?
    • Why do you deserve to be linked to?
    • What value do you provide for users?

    The answers to these won’t always give you that silver bullet, but they can provoke conversations that make the client look inwardly and at why they should deserve links and customers. These questions are hard to answer, but again, that’s the point.

  2. Stop looking for scalable link building tactics

    Seriously, just stop. Anything that can be scaled tends to lose quality, and anything that scales is likely to be targeted by the Google webspam team at some point. A recent piece of content we did at Distilled has so far generated links from over 700 root domains—we did NOT send 700 outreach emails! This piece took on a life of its own and generated those links after some promotion by us, but at no point did we worry about scaling outreach for it.

  3. Start focusing on doing marketing that users love

    I’m not talking necessarily about you doing the next Volvo ad or being the next Old Spice guy. If you can, then great, but these are out of reach for most of us. That doesn’t mean you can’t do marketing that people love. I often look at companies like Brewdog and Hawksmoor, who do great marketing around their products but in a way that has personality and appeal. They don’t have to spend millions of dollars on celebrities or TV advertising because they have a great product and a fun marketing message. They have value to add, which is the key; they don’t need to worry about link building because they get links naturally by doing cool stuff.

    Whilst I know that “doing cool stuff” isn’t particularly actionable, I still think it’s fair to say that marketing needs to be loved. In order to do marketing that people love, you need to have some fun and focus on adding value.

  4. Don’t bury your head in the sand

    The worst thing you can do is ignore the trends and changes taking place. Google is changing, user expectations and behaviours are changing, our industry is changing. As an industry, we’ve adapted very well over the last few years. We have to keep doing this if we’re going to survive.

    Going back to link building, you need to accept that this stuff is really hard and building the types of links that Google value is hard.

In summary

Links aren’t going anywhere. But the world is changing and we have to focus on what truly matters: marketing great products and building a loyal audience. 
