Rural Local SEO: A Marketing Package Strong on Education

Posted by MiriamEllis

Can your marketing agency make a profit working with low-budget clients in rural areas?

Could you be overlooking a source of referrals, publicity, and professional satisfaction if you’re mainly focused on landing larger clients in urban locales? Clients in the least-populated areas need to capture every customer they can get in order to stay viable, including locals, new neighbors, and passers-through. Basic Local SEO can go a long way toward helping with this, and even if package offerings aren’t your agency’s typical approach, a simple product that emphasizes education could be exactly what’s called for.

Today, I’d like to help you explore the opportunity of serving rural and very-small-town clients. I’ve pulled together a sample spreadsheet and a ton of other resources that I hope will empower you to develop a bare-bones but high-quality local search marketing package that will work for most of them and could significantly benefit your agency in some remarkable ways.

Everything in moderation

The linchpin of the rural client/agency relationship is that the needs of these businesses are exceedingly moderate. The competitive bar is set so low in a small-town-and-country setting that, with few exceptions, clients can make a strong local showing with a pared-down marketing plan.

Let’s be honest — many businesses in this scenario can squeak by on a website design package from some giant web hosting agency. A few minutes spent with Google’s non-urban local packs attest to this. But I’m personally dissatisfied by independent businesses ending up being treated like numbers because it’s so antithetical to the way they operate. The local hardware store doesn’t put you on hold for 45 minutes to answer a question. The local farm stand doesn’t route you overseas to buy heirloom tomatoes. Few small town institutions stay in business for 150 years by overpromising and under-delivering.

Let’s assume that many rural clients will have some kind of website. If they don’t, you can recommend some sort of freebie or cheapie solution. It will be enough to get them placed somewhere in Google’s results, but if they never move beyond this, they may miss out on the conversions they need to stay in business.

I’ve come to believe that the small-to-medium local marketing agency is the best fit for the small-to-medium rural brand because of shared work ethics and a similar way of doing business. But both entities need to survive monetarily and that means playing a very smart game with a budget on both sides.

It’s a question of organizing an agency offering that delivers maximum value with a modest investment of your time and the client’s money.

Constructing a square deal

When you take on a substantial client in a large town or city, you pull out all the stops. You dive deeply into auditing the business, its market, its assets. You look at everything from technical errors to creative strengths before beginning to build a strategy or implement campaigns, and there may be many months or years of work ahead for you with these clients. This is all entirely appropriate for big, lucrative contracts.

For your rural roster, prepare to scale way back. Here is your working plan:

1. Schedule your first 15-minute phone call with the client

Avoid the whole issue of having to lollygag around waiting for a busy small business owner to fill out a form. Schedule an appointment and have the client be at their place of business in front of a computer at the time of the call. Confirm the following ultra-basic data about the client:

  • Name
  • Address
  • Phone
  • URL
  • Business model (single location brick-and-mortar, SAB, etc.)
  • Category
  • Are there any other businesses at this address?
  • Main products/services offered
  • If SAB, list of cities served
  • Most obvious search phrase they want to rank for
  • Year established and year they first took the business online
  • Have they ever been aware of a penalty on their website or had Google tell them they were removing a listing?
  • Finally, have the client (who is in front of their computer at their place of business) search for the search term that’s the most obviously important and read off to you the names and URLs of the businesses ranking in the local pack and on the first page of the organic results.

And that’s it. If you pay yourself $100/hr, this quick session yields a charge of $25.
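If you’d like to keep those intake answers consistent from client to client, a small structured record is all it takes. Here’s a minimal Python sketch; the field names are hypothetical and not tied to any particular CRM or form tool:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ClientIntake:
        """Ultra-basic data captured on the first 15-minute call (hypothetical field names)."""
        name: str
        address: str
        phone: str
        url: str
        business_model: str              # e.g. "brick-and-mortar", "SAB", "home-based"
        category: str
        other_businesses_at_address: bool
        main_offerings: List[str]
        cities_served: List[str] = field(default_factory=list)   # only relevant for SABs
        target_search_phrase: str = ""
        year_established: int = 0
        year_online: int = 0
        known_penalty_or_listing_removal: bool = False
        local_pack_competitors: List[str] = field(default_factory=list)  # names/URLs the client reads off

    # Filled in while you're on the call (sample values, obviously)
    intake = ClientIntake(
        name="Example Hardware",
        address="12 Main St, Smalltown, USA",
        phone="555-0100",
        url="https://example.com",
        business_model="brick-and-mortar",
        category="Hardware store",
        other_businesses_at_address=False,
        main_offerings=["tools", "paint", "garden supplies"],
        target_search_phrase="hardware store smalltown",
    )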

2. Make a one-time investment in writing a bare-bones guide to Local SEO

Spend less than one working day putting together a .pdf file or Google doc, written in the least technical language possible, containing the following:

  • Your briefest, clearest definition of what local SEO is and how it brings customers to local businesses. Inspiration here.
  • An overview of 3 key business models: brick & mortar, SAB, and home-based so the client can easily identify which of these models is theirs.
  • A complete copy of the Guidelines for representing your business on Google with a link in it to the live guidelines.
  • Foolproof instructions for creating a Google account and creating and claiming a GMB listing. Show the process step-by-step so that anyone can understand it. Inspiration here.
  • A list of top general industry citation platforms with links to the forms for getting listed on them. Inspiration here and if the client can hit at least a few of these, they will be off to a good start.
  • An overview of the role of review acquisition and response, with a few simple tips for earning reviews and a list of the top general industry review platforms. Inspiration here and here.
  • An overview of the role of building offline relationships to earn a few online linktations. Inspiration here.
  • Links to the Google My Business forum and the main Google support platforms including their phone number (844.491.9665), Facebook, Twitter, and online chat. Tell the client this is where to go if they encounter a problem with their Google listing in the future.
  • Links to major independent business associations as a support vehicle for small and rural businesses like AMIBA, ILSR, and Small Business Saturday. Inspiration here.
  • Your agency’s complete contact information so that the business can remember who you are and engage you for further consulting down the road, if ever necessary.

If you pay yourself $100 an hour, investing in creating this guide will cost you less than $1,000. That’s a modest amount that you can quickly earn back from clients. Hopefully, the inspirational links I’ve included will give you a big head start. Avoid covering anything trendy (like some brand new Google feature) so that the only time you should have to update the guide in the near future will be if Google makes some major changes to their guidelines or dashboard.

Deliver this asset to every rural client as their basic training in the bare essentials of local marketing.

3. Create a competitive audit spreadsheet once and fill it out ad infinitum

What you want here is something that lets you swiftly fill in the blanks.

For the competitive audit, you’ll be stacking up your client’s metrics against the metrics of the business they told you was ranking at the top of the local pack when they searched from their location. You can come up with your own metrics, or you can make a copy of this template I’ve created for you and add to it/subtract from it as you like.

Make a copy of the ultra-basic competitive local audit template — you can do so right here.

You’ll notice that my sample sheet does not delve deeply into some of the more technical or creative areas you might explore for clients in tougher markets. With few exceptions, rural clients just don’t need that level of insight to compete.

Give yourself 45 focused minutes filling in the data in the spreadsheet. You’ve now invested 1 hour of time with the client. So let’s give that a value of $100.

4. Transfer the findings of your audit into a custom report

Here’s another one-time investment. Spend no more than one workday creating a .pdf or Google Docs template that takes the fields of your audit and presents them in a readable format for the client. I’m going to leave exact formatting up to you, but here are the sections I would recommend structuring the report around:

  • A side-by-side comparison of the client vs. competitor metrics, bucketed by topic (Website, GMB, Reputation, Links, Citations, etc.)
  • A very basic explanation of what those metrics mean
  • A clear recommendation of what the client should do to improve their metrics

For example, your section on reputation might look like this:

The beauty of this is that, once you have the template, all you have to do is fill it out and then spend an hour making intelligent observations based on your findings.
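If you keep the audit in a simple spreadsheet export, even the fill-in step can be semi-automated. Below is a rough Python sketch that groups placeholder audit rows into the bucketed, side-by-side format described above; the buckets, metrics, and recommendations are invented for illustration and are not the fields of the linked template:

    from collections import defaultdict

    # (bucket, metric, client value, competitor value, recommendation) -- placeholder data
    audit_rows = [
        ("Reputation", "GMB review count", 12, 47, "Ask 2-3 happy customers per week for a review."),
        ("Reputation", "Average GMB rating", 4.2, 4.6, "Respond to every review, especially negatives."),
        ("Citations", "Core citation platforms listed on", 3, 9, "Claim the missing core citations."),
        ("Website", "Target phrase in homepage title tag", "no", "yes", "Add the phrase to the title."),
    ]

    def build_report(rows):
        buckets = defaultdict(list)
        for bucket, metric, client, competitor, rec in rows:
            buckets[bucket].append((metric, client, competitor, rec))
        for bucket, items in buckets.items():
            print(f"\n== {bucket} ==")
            for metric, client, competitor, rec in items:
                print(f"{metric}: client = {client} | competitor = {competitor}")
                print(f"  Recommendation: {rec}")

    build_report(audit_rows)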

Constructing the template should take you less than one workday; so, a one-time investment of less than $1,000 if you are paying yourself $100/hr.

Transferring the findings of your audit from the spreadsheet to the report for each client should take about 1 hour. So, we’re now up to two total hours of effort for a unique client.

5. Excelling at value

So, you’ve now had a 15-minute conversation with a client, given them an introductory guide to the basics of local search marketing, and delivered a customized report filled with your observations and their to-dos. Many agencies might call it a day and leave the client to interpret the report on their own.

But you won’t do that, because you don’t want to waste an incredible opportunity to build a firm relationship with a business. Instead, spend one more hour on the phone with the owner, going over the report with them page by page and allowing a few minutes for any of their questions. This is where you have the chance to deliver exceptional value to the client, telling them exactly what you think will be most helpful for them to know in a true teaching moment.

At the end of this, you will have become a memorable ally, someone they trust, and someone to whom they will have confidence in referring their colleagues, family members, and neighbors.

You’ve made an overall investment of less than $2,000 to create your rural/small town marketing program.

Packaging up the guide, the report and the 1:1 phone consulting, you have a base price of $300 for the product if you pay yourself $100/hour.

However, I’m going to suggest that, based on the level of local SEO expertise you bring to the scenario, you create a price point somewhere between $300 and $500 for the package. If you are still relatively green at local SEO, $300 could be a fair price for three hours of consulting. If you’re an industry adept, scale it up a bit, because you bring a rare level of insight to every client interaction, even if you’re sticking to the absolute basics. Sell several of these packages in a week, and it starts to add up to a good monthly revenue stream.

As a marketer, I’ve generally shied away from packages because, whenever you dig deeply into a client’s scenario, nuances end up requiring so much custom research and communication. But for the very smallest clients in these least competitive markets, packages can hit the spot.

Considerable benefits for your agency

The client is going to walk away from the relationship with a good deal … and likely a lot to do. If they follow your recommendations, it will typically be just what they needed to establish themselves on the web to the extent that neighbors and travelers can easily find them and choose them for transactions. Good job!

But you’re going to walk away with some amazing benefits, too, some of which you might not have considered before. To wit:

1. Relationships and the ripple effect

A client you’ve treated very well on the phone is a client who is likely to remember you for future needs and recommend you. I’ve had businesses send me lovely gifts on top of my consulting fee because I’ve taken the time to really listen and answer questions. SEO agencies are always looking for ways to build authentic relationships. Don’t overlook the small client as a centroid of referrals throughout a tight-knit community and beyond it to their urban colleagues, friends, and family.

2. Big data for insights and bragging rights

If your package becomes popular, a ton of data is going to start passing through your hands. The more of these audits you do, the more time you’re spending actively observing Google’s handling of the localized SERPs. Imagine the blog posts your agency can begin publishing by anonymizing and aggregating this data, pulling insights of value to our industry. There is no end to the potential for you to grow your knowledge.

Apart from case studies, think of the way this package can both build up your proud client roster and serve as a source of client reviews. The friendly relationship you’ve built with that 1:1 time can now become a font of very positive portfolio content and testimonials for you to publish on your website.

3. Agency pride from helping rebuild rural America

Have you noticed the recent spate of hit TV shows that hinge on rebuilding dilapidated American towns? Industry consolidation is most often cited as the root of rural collapse, with small farmers and independent businesses no longer able to create a tax base to support basic community needs like hospitals, fire departments, and schools. Few of us rejoice at the idea of Main Streets — long-cherished hallmarks not just of Americana but of shared American identity — becoming ghost towns.

But if you look for it, you can see signs of brilliant small entrepreneurs uniting to buck this trend. Check out initiatives like Locavesting and Localstake. There’s a reason to hope in small farming co-ops, the Main Street movement, and individuals like these who can re-envision a crumbling building as an independent country store, a B&B, or a job training center with Internet access.

It can be a source of professional satisfaction for your marketing agency if you offer these brave and hard-working business owners a good deal and the education they need to present themselves sufficiently on the web. I live in a rural area, and I know just how much a little solid advice can help. I feel extra good knowing I’m contributing to America’s rural comeback story.

Promoting your rural local SEO package

Once you’ve got your guide and templates created, what next? Here are some simple tips:

  • Create a terrific landing page on your website specifically for this package and call it out on your homepage as well. Wherever appropriate, build internal links to it.
  • Promote on social media.
  • Blog about why you’ve created the package, aligning your agency as an ally to the rebuilding of rural communities.
  • If, like me, you live in a rural area, consider presenting at local community events that will put you in front of small business owners.
  • Don’t overlook old school media like community message boards at the local post office, or even fliers tacked to electric poles.
  • If you’re a city slicker, consider how far you’d have to travel to get to the nearest rural community to participate in events.
  • Advertising both off and online in rural papers can be quite economical. There are also place of worship print bulletins, local school papers, and other publications that welcome sponsors. Give it a try.
  • And, of course, ask happy clients to refer you, telling them what it means to your business. You might even develop a referral program.

The truth is that your agency may not be able to live on rural clients alone. You may still be targeting the bulk of your campaigns towards urban enterprises, because just a few highly competitive clients can bring welcome security to your bank account.

But maybe this is a good day to start looking beyond the fast food franchise, the NY attorney and the LA dermatology group. The more one reads about rural entrepreneurs, the more one tends to empathize with them, and empathy is the best foundation I know of for building rewarding business relationships.


The key to local SEO

Want to spend more time doing great work and less time putting out fires? Columnist Greg Gifford emphasizes the importance of client education in local SEO.


Should I Use Relative or Absolute URLs? – Whiteboard Friday

Posted by RuthBurrReedy

It was once commonplace for developers to code relative URLs into a site. There are a number of reasons why that might not be the best idea for SEO, and in today’s Whiteboard Friday, Ruth Burr Reedy is here to tell you all about why.

Let’s discuss some non-philosophical absolutes and relatives

Howdy, Moz fans. My name is Ruth Burr Reedy. You may recognize me from such projects as when I used to be the Head of SEO at Moz. I’m now the Senior SEO Manager at BigWing Interactive in Oklahoma City. Today we’re going to talk about relative versus absolute URLs and why they are important.

At any given time, your website can have several different configurations that might be causing duplicate content issues. You could have just a standard http://www.example.com. That’s a pretty standard format for a website.

But the main sources of domain-level duplicate content that we see are when the non-www.example.com version does not redirect to the www version or vice-versa, and when the HTTPS versions of your URLs are not forced to resolve to the HTTP versions or, again, vice-versa. What this can mean is, if all of these scenarios are true, if all four of these URLs resolve without being forced to resolve to a canonical version, you can, in essence, have four versions of your website out on the Internet. This may or may not be a problem.

It’s not ideal for a couple of reasons. Number one, duplicate content. Some people think that duplicate content is going to give you a penalty, but duplicate content is not going to get your website penalized in the same way that you might see a spammy link penalty from Penguin. There’s no actual penalty involved. You won’t be punished for having duplicate content.

The problem with duplicate content is that you’re basically relying on Google to figure out what the real version of your website is. Google is seeing the URL from all four versions of your website. They’re going to try to figure out which URL is the real URL and just rank that one. The problem with that is you’re basically leaving that decision up to Google when it’s something that you could take control of for yourself.

There are a couple of other reasons that we’ll go into a little bit later for why duplicate content can be a problem. But in short, duplicate content is no good.

However, just having these URLs not resolve to each other may or may not be a huge problem. When it really becomes a serious issue is when that problem is combined with injudicious use of relative URLs in internal links. So let’s talk a little bit about the difference between a relative URL and an absolute URL when it comes to internal linking.

With an absolute URL, you are putting the entire web address of the page that you are linking to in the link. You’re putting your full domain, everything in the link, including /page. That’s an absolute URL.

However, when coding a website, it’s a fairly common web development practice to instead code internal links with what’s called a relative URL. A relative URL is just /page. Basically what that does is it relies on your browser to understand, “Okay, this link is pointing to a page that’s on the same domain that we’re already on. I’m just going to assume that that is the case and go there.”
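To make that behavior concrete, here’s a small Python sketch using the standard library’s urljoin, which resolves links much the way a browser does. It shows why the identical relative link “works” on every copy of a domain, while an absolute link always points to the version you chose:

    from urllib.parse import urljoin

    relative_link = "/page"

    # The same relative link resolves against whichever version of the domain was entered on
    for base in [
        "http://example.com/",
        "http://www.example.com/",
        "https://example.com/",
        "https://www.example.com/",
    ]:
        print(urljoin(base, relative_link))

    # An absolute link, by contrast, ignores the base and always points at one version
    print(urljoin("http://example.com/", "https://www.example.com/page"))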

There are a couple of really good reasons to code relative URLs

1) It is much easier and faster to code.

When you are a web developer and you’re building a site and there are thousands of pages, coding relative versus absolute URLs is a way to be more efficient. You’ll see it happen a lot.

2) Staging environments

Another reason why you might see relative versus absolute URLs is some content management systems — and SharePoint is a great example of this — have a staging environment that’s on its own domain. Instead of being example.com, it will be examplestaging.com. The entire website will basically be replicated on that staging domain. Having relative versus absolute URLs means that the same website can exist on staging and on production, or the live, accessible version of your website, without having to go back in and recode all of those URLs. Again, it’s more efficient for your web development team. Those are really perfectly valid reasons to do those things. So don’t yell at your web dev team if they’ve coded relative URLs, because from their perspective it is a better solution.

Relative URLs will also cause your page to load slightly faster. However, in my experience, the SEO benefits of having absolute versus relative URLs in your website far outweigh the teeny-tiny bit longer that it will take the page to load. It’s very negligible. If you have a really, really long page load time, there’s going to be a whole boatload of things that you can change that will make a bigger difference than coding your URLs as relative versus absolute.

Page load time, in my opinion, is not a concern here. However, it is something that your web dev team may bring up with you when you try to address with them the fact that, from an SEO perspective, coding your website with relative versus absolute URLs, especially in the nav, is not a good solution.

There are even better reasons to use absolute URLs

1) Scrapers

If you have all of your internal links as relative URLs, it would be very, very, very easy for a scraper to simply scrape your whole website and put it up on a new domain, and the whole website would just work. That sucks for you, and it’s great for that scraper. But unless you are out there doing public services for scrapers, for some reason, that’s probably not something that you want happening with your beautiful, hardworking, handcrafted website. That’s one reason. There is a scraper risk.

2) Preventing duplicate content issues

But the other reason why it’s very important to have absolute versus relative URLs is that it really mitigates the duplicate content risk that can be presented when you don’t have all of these versions of your website resolving to one version. Google could potentially enter your site on any one of these four pages. They’re the same page to you, but they’re four different pages to Google. It’s the same domain to you, but four different domains to Google.

But they could enter your site, and if all of your URLs are relative, they can then crawl and index your entire domain using whatever format those URLs happen to be in. Whereas if you have absolute links coded, even if Google enters your site on www. and that resolves, once they crawl to another page that you’ve got coded without the www., all of that internal link juice and all of the other pages on your website point to the non-www. version, and Google is not going to assume that those live at the www. version. That really cuts down on different versions of each page of your website. If you have relative URLs throughout, you basically have four different websites if you haven’t fixed this problem.

Again, it’s not always a huge issue. Duplicate content, it’s not ideal. However, Google has gotten pretty good at figuring out what the real version of your website is.

You do want to think about inbound linking when you’re thinking about this. If you have basically four different versions of any URL that anybody could just copy and paste when they want to link to you or when they want to share something that you’ve built, you’re diluting your inbound links by four, which is not great. You basically would have to build four times as many links in order to get the same authority. So that’s one reason.

3) Crawl Budget

The other reason why it’s pretty important not to do this is because of crawl budget. I’m going to point it out like this instead.

When we talk about crawl budget, basically what that is, is every time Google crawls your website, there is a finite depth that they will go to. There’s a finite number of URLs that they will crawl and then they decide, “Okay, I’m done.” That’s based on a few different things. Your site authority is one of them. Your actual PageRank, not toolbar PageRank, but how good Google actually thinks your website is, is a big part of that. But also how complex your site is, how often it’s updated, things like that are also going to contribute to how often and how deep Google is going to crawl your site.

It’s important to remember when we think about crawl budget that, for Google, crawl budget costs actual dollars. One of Google’s biggest expenditures as a company is the money and the bandwidth it takes to crawl and index the Web. All of that energy that’s going into crawling and indexing the Web lives on servers. That bandwidth comes from servers, and that means that using bandwidth costs Google actual, real dollars.

So Google is incentivized to crawl as efficiently as possible, because when they crawl inefficiently, it costs them money. If your site is not efficient to crawl, Google is going to save itself some money by crawling it less frequently and crawling fewer pages per crawl. That can mean that if you have a site that’s updated frequently, your site may not be updated in the index as frequently as you’re updating it. It may also mean that Google, while it’s crawling and indexing, may be crawling and indexing a version of your website that isn’t the version that you really want it to crawl and index.

So having four different versions of your website, all of which are completely crawlable to the last page, because you’ve got relative URLs and you haven’t fixed this duplicate content problem, means that Google has to spend four times as much money in order to really crawl and understand your website. Over time they’re going to do that less and less frequently, especially if you don’t have a really high authority website. If you’re a small website, if you’re just starting out, if you’ve only got a medium number of inbound links, over time you’re going to see your crawl rate and frequency impacted, and that’s bad. We don’t want that. We want Google to come back all the time, see all our pages. They’re beautiful. Put them up in the index. Rank them well. That’s what we want. So that’s what we should do.

There are a couple of ways to fix your relative versus absolute URLs problem

1) Fix what is happening on the server side of your website

You have to make sure that you are forcing all of these different versions of your domain to resolve to one version of your domain. For me, I’m pretty agnostic as to which version you pick. You should probably already have a pretty good idea of which version of your website is the real version, whether that’s www, non-www, HTTPS, or HTTP. From my view, what’s most important is that all four of these versions resolve to one version.

From an SEO standpoint, there is evidence to suggest, and Google has certainly said, that HTTPS is a little bit better than HTTP. From a URL length perspective, I like to not have the www. in there because it doesn’t really do anything. It just makes your URLs four characters longer. If you don’t know which one to pick, I would pick this one: HTTPS, no W’s. But whichever one you pick, what’s really most important is that all of them resolve to one version. You can do that on the server side, and that’s usually pretty easy for your dev team to fix once you tell them that it needs to happen.
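In practice this is usually a 301 redirect rule in your web server or CMS settings rather than application code, but the mapping itself is easy to sketch. Here’s a minimal Python sketch of a hypothetical canonicalize() helper that folds all four variants into one preferred version (HTTPS with no www. in this example; swap in whichever version you’ve chosen):

    from urllib.parse import urlsplit, urlunsplit

    PREFERRED_SCHEME = "https"

    def canonicalize(url: str) -> str:
        """Map any http/https, www/non-www variant of a URL to the one preferred version."""
        parts = urlsplit(url)
        host = parts.netloc.lower()
        if host.startswith("www."):
            host = host[4:]   # drop the www.; keep it instead if that's your preference
        return urlunsplit((PREFERRED_SCHEME, host, parts.path, parts.query, parts.fragment))

    for variant in [
        "http://example.com/page",
        "http://www.example.com/page",
        "https://example.com/page",
        "https://www.example.com/page",
    ]:
        print(canonicalize(variant))   # all four print https://example.com/page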

2) Fix your internal links

Great. So you fixed it on your server side. Now you need to fix your internal links, and you need to recode them from being relative to being absolute. This is something that your dev team is not going to want to do because it is time consuming and, from a web dev perspective, not that important. However, you should use resources like this Whiteboard Friday to explain to them that, from an SEO perspective, both from the scraper risk and from a duplicate content standpoint, having those absolute URLs is a high priority and something that should get done.

You’ll need to fix those, especially in your navigational elements. But once you’ve got your nav fixed, also pull out your database or run a Screaming Frog crawl or however you want to discover internal links that aren’t part of your nav, and make sure you’re updating those to be absolute as well.
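If you want a quick, scripted way to spot relative links in a page’s markup outside of a full crawler, something along these lines works. It’s a rough sketch using only the Python standard library, meant as an illustration rather than a replacement for a real crawl:

    from html.parser import HTMLParser
    from urllib.parse import urlsplit

    class LinkCollector(HTMLParser):
        """Collects href values and flags the ones that are relative (no scheme/host)."""
        def __init__(self):
            super().__init__()
            self.relative, self.absolute = [], []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            href = dict(attrs).get("href")
            if not href:
                return
            if urlsplit(href).netloc:
                self.absolute.append(href)
            else:
                self.relative.append(href)

    html = '<a href="/about">About</a> <a href="https://example.com/contact">Contact</a>'
    collector = LinkCollector()
    collector.feed(html)
    print("Relative links to recode:", collector.relative)
    print("Already absolute:", collector.absolute)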

Then you’ll do some education with everybody who touches your website saying, “Hey, when you link internally, make sure you’re using the absolute URL and make sure it’s in our preferred format,” because that’s really going to give you the most bang for your buck per internal link. So do some education. Fix your internal links.

Sometimes your dev team is going to say, “No, we can’t do that. We’re not going to recode the whole nav. It’s not a good use of our time,” and sometimes they are right. The dev team has more important things to do. That’s okay.

3) Canonicalize it!

If you can’t get your internal links fixed or if they’re not going to get fixed anytime in the near future, a stopgap or a Band-Aid that you can kind of put on this problem is to canonicalize all of your pages. As you’re changing your server to force all of these different versions of your domain to resolve to one, at the same time you should be implementing the canonical tag on all of the pages of your website to self-canonicalize. On every page, you have a canonical tag saying, “This page right here that they were already on is the canonical version of this page.” Or if there’s another page that’s the canonical version, then obviously you point to that instead.

But having each page self-canonicalize will mitigate both the risk of duplicate content internally and some of the risk posed by scrapers, because when they are scraping your website and slapping it up somewhere else, those canonical tags will often stay in place, and that lets Google know this is not the real version of the website.

In conclusion, relative links, not as good. Absolute links, those are the way to go. Make sure that you’re fixing these very common domain level duplicate content problems. If your dev team tries to tell you that they don’t want to do this, just tell them I sent you. Thanks guys.

Video transcription by Speechpad.com


Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I leave in my wake.

Hence, I thought it was due time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much of an impact a single ranking position can have on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike SEO, however, very little research and theorizing has been done around what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Image credit: Josh Tuininga, Apptentive

Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across iOS and Android versions of the same apps to determine any trends or differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics:

Both of these assumptions will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.
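If you’d like to reproduce this volatility math on your own rank-tracking exports, the calculation is simple. A minimal Python sketch with made-up daily ranks (illustrative numbers only, not the study’s data):

    from statistics import pstdev

    # Seven days of rank observations for one app in one store (illustrative numbers)
    daily_ranks = [120, 131, 118, 160, 142, 139, 125]

    # Absolute volatility: how many positions the app moved each day
    daily_moves = [abs(b - a) for a, b in zip(daily_ranks, daily_ranks[1:])]

    # Standardized volatility: percent change, to control for how deep in the charts the app sits
    pct_changes = [abs(b - a) / a * 100 for a, b in zip(daily_ranks, daily_ranks[1:])]

    print("Max daily move (ranks):", max(daily_moves))
    print("Std dev of daily moves:", round(pstdev(daily_moves), 2))
    print("Max daily change (%):", round(max(pct_changes), 1))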

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the lower an app is ranked—meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher-ranked apps have more installs, active users, and ratings, and that it would take a larger shift to produce a noticeable change in any of these factors.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly more volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum—with a seemingly straight volatility line among Google Play’s Top 100 apps and very few blips within the App Store’s Top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as volatile as the highest ranked apps. (I say slightly as these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is pretty consistent across the App Store charts, while in Google Play rank has a much greater impact on volatility among the top-ranked apps (ranks 1–100 show a 35% correlation) than it does further down the charts (ranks 401–500 show a 1% correlation).
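For reference, the correlation figures quoted throughout these studies are plain Pearson correlations between two series, e.g. current rank versus ranks moved in the last 24 hours. A toy Python sketch (the numbers below are invented; the real inputs would be the 500 rank/movement pairs per store):

    from statistics import correlation  # Python 3.10+

    # Toy data: (current rank, ranks moved in the last 24 hours)
    ranks = [5, 40, 90, 180, 260, 340, 420, 480]
    moves = [1, 2, 3, 8, 10, 15, 22, 30]

    print("Pearson correlation:", round(correlation(ranks, moves), 2))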

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store with their average rating (both historic and current, for App Store apps). If it looks a little chaotic, it’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average top iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores, meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank, and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As a variation on the above function, I’m using the age of an app’s current version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often as compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s Top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s Top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) of the ratings count of those apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger since the low end was further away from the midpoint for apps with more installs.

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and apps with older current versions are correlated with lower ratings.
  7. An app’s update frequency is negatively correlated with Google Play’s ranking volatility but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now re-write the function into a formula by weighing each of these four factors, where a, b, c, & d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)
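Written out as code, that toy formula looks something like the sketch below. The normalization and the example weights are entirely made up; the point is only to show the shape of the model, not actual store behavior:

    def toy_rank_score(rating, rating_count, installs, trend,
                       a=1.0, b=1.0, c=1.0, d=1.0):
        """Toy scoring function: higher score ~ better (lower) rank. Weights a-d are the unknowns."""
        # Crude normalization so the four inputs sit on comparable scales (illustrative only)
        rating_norm = rating / 5.0
        count_norm = min(rating_count / 1_000_000, 1.0)
        installs_norm = min(installs / 10_000_000, 1.0)
        trend_norm = max(min(trend, 1.0), -1.0)   # e.g. week-over-week growth rate, clamped
        return a * rating_norm + b * count_norm + c * installs_norm + d * trend_norm

    # Two hypothetical apps under an arbitrary weighting that favors rating count and installs
    weights = dict(a=0.5, b=2.0, c=1.5, d=1.0)
    print(toy_rank_score(4.3, 3_100_000, 8_000_000, 0.10, **weights))
    print(toy_rank_score(3.9, 196_000, 1_500_000, 0.25, **weights))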

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.
