Is Australia the land of opportunity for your retail brand?

Australia has a resident population of more than 24 million and, according to eMarketer, the country’s ecommerce sales are predicted to reach A$32.56 billion by 2017. The country’s remote location in the APAC region means that, unlike in European countries or the USA, there has traditionally been a lack of global brands sold locally.

Of course, we also know that many expatriates, particularly from inside the Commonwealth, have made Australia their home and are keen to buy products they know and love from their country of origin.

All of these factors present a huge and potentially lucrative opportunity for non-Australian brands wanting to open up their new and innovative products to a fresh market, or compete for market share.

But it’s not just non-Australian retailers who are at an advantage here: Australia was late to the ecommerce party because native, established brands were trading well without it. Consequently, Australian retailers’ ecommerce technology stacks are much more recent and not burdened by legacy systems. This makes it much easier to extend, or get started with, best-of-breed technologies and cash in on a booming market. To put some of this into perspective, Magento’s ecommerce platform currently takes 42% of Australia’s market share, and the world’s first adopter of Magento 2.0 was an Australian brand.

The GST loophole

At the moment, local retailers are campaigning against a rule that exempts purchases under A$1,000 from foreign websites from Australia’s 10% goods and services tax (GST). In 2013, Australian consumers made A$3.11 billion worth of such purchases.[1]

While the current GST break appears to favor non-Australian retailers, Australian-based brands such as Harvey Norman are also exploiting it by setting up ecommerce operations in Asia to enjoy the same benefit.

Australian consumers have also countered the argument by saying that price isn’t always the motivator when it comes to making purchasing decisions.

It’s not a place where no man has gone before

Often, concerns around meeting local compliance and lack of overseas business knowledge prevent outsiders from taking the leap into cross-border trade. However, this ecommerce passport, created by Ecommerce Worldwide and NORA, is designed to support those considering selling in Australia. The guide provides a comprehensive look into everything from the country’s economy and trade status, to logistics and dealing with international payments.

Global expansion success stories are also invaluable sources of information. And it’s not just lower-end retailers finding success: online luxury fashion retailer Net-a-Porter names Australia as one of its biggest markets.

How tech-savvy are the Aussies?

One of the concerns you might have as a new entrant into the market is how you’ll reach and sell to your new audience, particularly without a physical presence. The good news is that more than 80% of the country is digitally enabled and 60% of mobile phone users own a smartphone, so online is deeply rooted in the majority of Australians’ lives.[2]

Marketing your brand

Heard the saying “fire bullets, then fire cannonballs”? The principle applies here: you’ll want to test the waters and gauge people’s reactions to your product or service before committing fully.

It all starts with the website because, without it, you’re not discoverable or searchable, and you’ve nowhere to drive people to when running campaigns. SEO and SEM should definitely be a priority, and an online store that can handle multiple regions and storefronts, like Magento, will make your life easier. A mobile-first mentality and well thought-out UX will also place you in a good position.

Once your new web store is set up, you should be making every effort to collect visitors’ email addresses, perhaps via a popover. Why? Firstly, email is one of the top three priority areas for Australian retailers, because it’s a cost-effective, scalable marketing channel that enables true personalization.

Secondly, email marketing automation empowers you to deliver the customer experience today’s consumer expects, as well as enabling you to communicate with them throughout the lifecycle. Check out our ‘Do customer experience masters really exist?’ whitepaper for some real-life success stories.

Like the Magento platform, dotmailer is set up to handle multiple languages, regions and accounts, and is designed to grow with you.

In summary, there’s great scope for ecommerce success in Australia, whether you’re a native bricks-and-mortar retailer, a start-up or a non-Australian merchant. The barriers to cross-border trade are falling and Australia is one of APAC’s most developed regions in terms of purchasing power and tech savviness.

We recently worked with ecommerce expert Chloe Thomas to produce a whitepaper on cross-border trade, which goes into much more detail on how to market and sell successfully in new territories. You can download a free copy here.

[1] Australian Passport 2015: Cross-Border Trading Report

[2] Australian Passport 2015: Cross-Border Trading Report

Reblogged 3 years ago from blog.dotmailer.com

Becoming Better SEO Scientists – Whiteboard Friday

Posted by MarkTraphagen

Editor’s note: Today we’re featuring back-to-back episodes of Whiteboard Friday from our friends at Stone Temple Consulting. Make sure to also check out the second episode, “UX, Content Quality, and SEO” from Eric Enge.

Like many other areas of marketing, SEO incorporates elements of science. It becomes problematic for everyone, though, when theories that haven’t been the subject of real scientific rigor are passed off as proven facts. In today’s Whiteboard Friday, Stone Temple Consulting’s Mark Traphagen is here to teach us a thing or two about the scientific method and how it can be applied to our day-to-day work.

For reference, here’s a still of this week’s whiteboard.

Video transcription

Howdy, Mozzers. Mark Traphagen from Stone Temple Consulting here today to share with you how to become a better SEO scientist. We know that SEO is a science in a lot of ways, and everything I’m going to say today applies not only to SEO but also to testing things like your AdWords campaigns and quality scores. There are a lot of different applications you can make in marketing, but we’ll focus on the SEO world because that’s where we do a lot of testing. What I want to talk to you about today is how that really is a science and how we need to bring better science into it to get better results.

Here’s the reason. In astrophysics, there’s something they’re talking about these days called dark matter. Dark matter is something that we know is there; it’s pretty much accepted that it’s there. We can’t see it. We can’t measure it directly. We don’t even know what it is; we can’t even imagine what it is yet. And yet we know it’s there because we see its effect on things like gravity and mass. Its effects are everywhere. And that’s a lot like search engines, isn’t it? It’s like Google or Bing. We see the effects, but we don’t see inside the machine. We don’t know exactly what’s happening in there.

An artist’s depiction of how search engines work.

So what do we do? We do experiments. We do tests to try to figure that out, to see the effects, and from the effects outside we can make better guesses about what’s going on inside and do a better job of giving those search engines what they need to connect us with our customers and prospects. That’s the goal in the end.

Now, the problem is there’s a lot of testing going on out there, a lot of experiments that maybe aren’t being run very well. They’re not being run according to scientific principles that have been proven over centuries to get the best possible results.

Basic data science in 10 steps

So today I want to give you just very quickly 10 basic things that a real scientist goes through on their way to trying to give you better data. Let’s see what we can do with those in our SEO testing in the future.

So let’s start with number one. You’ve got to start with a hypothesis. Your hypothesis is the question that you want to solve. You always start with a good question in mind, and it’s got to be relatively narrow. You’ve got to narrow it down to something very specific. Something like how does time on page affect rankings, that’s pretty narrow. That’s very specific. That’s a good question. You might be able to test that. But something like how do social signals affect rankings, that’s too broad. You’ve got to narrow it down. Get it down to one simple question.

Then you choose a variable that you’re going to test. Out of all the things that you could do, that you could play with or you could tweak, you should choose one thing or at least a very few things that you’re going to tweak and say, “When we tweak this, when we change this, when we do this one thing, what happens? Does it change anything out there in the world that we are looking at?” That’s the variable.

The next step is to set a sample group. Where are you going to gather the data from? Where is it going to come from? That’s the world that you’re working in here. Out of all the possible data that’s out there, where are you going to gather your data and how much? That’s the small circle within the big circle. Now even though it’s smaller, you’re probably not going to get all the data in the world. You’re not going to scrape every search ranking that’s possible or visit every URL.

You’ve got to ask yourself, “Is it large enough that we’re at least going to get some validity?” If I wanted to find out what the typical person in Seattle is like and I just walked through one part of the Moz offices here, I’d get some kind of view. But is that a typical, average person from Seattle? Having been around here at Moz, probably not. The sample just isn’t large enough.

Also, it should be randomized as much as possible. Again, going back to that example, if I just stayed here within the walls of Moz and did research about Mozzers, I’d learn a lot about what Mozzers do, what Mozzers think, and how they behave. But that may or may not be applicable to the larger world outside, so you randomize.

Next, we want a control. So we’ve got our sample group. If possible, it’s always good to have another sample group that you don’t do anything to; you do not manipulate the variable in that group. Now, why do you have that? You have it so that you can say, to some extent, that if we saw a change when we manipulated our variable and did not see the same change in the control group, it’s more likely not just part of the natural fluctuations that happen in the world or in the search engine.

If possible, even better, you want to make it what scientists call double-blind, which means that even you, the experimenter, don’t know which is the control group out of all the SERPs you’re looking at, or whatever it is. As careful as you might be and as honest as you might be, you can end up influencing the results if you know who is who within the test group. It’s not going to apply to every test that we do in SEO, but it’s a good thing to keep in mind as you work.

Next, very quickly, duration. How long does the test have to be? Is there sufficient time? If you’re just testing something like how quickly a URL shared to Google+ gets indexed in the SERPs, you might only need a day, because typically it takes less than a day in that case. But if you’re looking at seasonality effects, you might need to go over several years to get a good test.

Let’s move to the second group here. The sixth thing: keep a clean lab. What that means is to try as much as possible to keep out anything that might be dirtying your results, any kind of variables creeping in that you didn’t want in the test. That’s hard to do, especially in what we’re testing, but do the best you can to keep out the dirt.

Manipulate only one variable. Out of all the things that you could tweak or change, choose one thing, or a very small set of things. That will give more accuracy to your test. The more variables you change, the more side effects and interaction effects will occur that you may not be accounting for, and they’re going to muddy your results.

Make sure you have statistical validity when you go to analyze those results. Now that’s beyond the scope of this little talk, but you can read up on that. Or even better, if you are able to, hire somebody or work with somebody who is a trained data scientist or has training in statistics so they can look at your evaluation and say the correlations or whatever you’re seeing, “Does it have a statistical significance?” Very important.

Transparency. As much as possible, share with the world your data set, your full results, your methodology. What did you do? How did you set up the study? That’s going to be important to our last step here, which is replication and falsification, one of the most important parts of any scientific process.

So what you want to invite is, hey we did this study. We did this test. Here’s what we found. Here’s how we did it. Here’s the data. If other people ask the same question again and run the same kind of test, do they get the same results? Somebody runs it again, do they get the same results? Even better, if you have some people out there who say, “I don’t think you’re right about that because I think you missed this, and I’m going to throw this in and see what happens,” aha they falsify. That might make you feel like you failed, but it’s success because in the end what are we after? We’re after the truth about what really works.

Think about your next test, your next experiment that you do. How can you apply these 10 principles to do better testing, get better results, and have better marketing? Thanks.

Video transcription by Speechpad.com


Reblogged 4 years ago from tracking.feedpress.it

Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I leave in my way.

Hence, I thought it was due time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much of an impact a single ranking can mean on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike SEO, however, very little research and theorizing has been done around what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Image credit: Josh Tuininga, Apptentive

Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across iOS and Android versions of the same apps to determine any differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics:

Both of these assumptions will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.
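The standardization step described above can be sketched in a few lines; the daily ranks here are invented purely for illustration:

```python
# Standardizing daily rank moves as percent change so volatility can be
# compared across apps sitting at very different ranks. Ranks are invented.
daily_ranks = [120, 95, 140, 110, 180, 150, 130]  # one app, seven days

def pct_changes(ranks):
    """Absolute day-over-day change as a fraction of the previous day's rank."""
    return [abs(today - yesterday) / yesterday
            for yesterday, today in zip(ranks, ranks[1:])]

changes = pct_changes(daily_ranks)
max_move = max(changes)
print(f"largest daily move: {max_move:.0%}")
```

Dividing by the prior day’s rank is what controls for the effect of numeric rank on volatility: a 10-position swing means far more at rank 20 than at rank 400.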

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the higher an app is ranked—meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher ranked apps have more installs, active users, and ratings, and that it would take a large margin to produce a noticeable shift in any of these factors.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly more volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum, with a seemingly straight volatility line among Google Play’s Top 100 apps and very few blips within the App Store’s Top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and a 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as volatile as the highest-ranked apps. (I say slightly because these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is fairly consistent across the App Store charts. In Google Play, however, rank has a much greater impact on volatility among the highest-ranked apps (ranks 1–100 show a 35% correlation) than among the lowest-ranked (ranks 401–500 show a 1% correlation).

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store against their average rating (both historic and current, for App Store apps). If it looks a little chaotic, that’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average top iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top-ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores, meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank, and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As a variation on the above function, I’m using the age of an app’s current version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often as compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s Top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s Top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps, in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) of the rating count of apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed, as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger, since the low end was further from the midpoint for apps with more installs.
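For illustration, taking the low end of an install bracket might look like this (the string format shown is an assumption for demonstration, not Google Play’s exact output):

```python
# Normalizing Google Play's broad install brackets by taking the low end.
# The bracket format here is an assumed example, not Google Play's exact string.
def low_end(install_range: str) -> int:
    """Parse a bracket like '500,000 - 1,000,000' and return its low end."""
    low = install_range.split("-")[0]
    return int(low.replace(",", "").strip())

print(low_end("500,000 - 1,000,000"))
```

Using the low end consistently at least keeps the bias in one direction, which is why the reported correlation is described as a conservative estimate.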

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and apps with older current versions are correlated with lower ratings.
  7. An app’s update frequency is negatively correlated with Google Play’s ranking volatility but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now re-write the function into a formula by weighing each of these four factors, where a, b, c, & d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.
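As a toy illustration (the default weights below are placeholders, not values estimated by these studies), the weighted formula could be coded as:

```python
# Toy scoring function for the loosely defined ranking formula above. The
# weights a-d are unknown multipliers; the defaults here are placeholders,
# not values estimated by the studies.

def ranking_score(rating, rating_count, installs, trend,
                  a=1.0, b=1.0, c=1.0, d=1.0):
    """(Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d).
    Inputs are assumed pre-normalized to comparable 0-1 scales."""
    return rating * a + rating_count * b + installs * c + trend * d

# Two hypothetical apps on normalized 0-1 scales
app_a = ranking_score(rating=0.90, rating_count=0.8, installs=0.7, trend=0.5)
app_b = ranking_score(rating=0.95, rating_count=0.3, installs=0.2, trend=0.6)
print(app_a > app_b)  # True: the app with far more ratings/installs wins
```

With equal weights, the app with the larger ratings and install base outscores the slightly higher-rated one, which matches the direction of the correlations found above.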

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from tracking.feedpress.it

Lessons from the Front Line of Front-End Content Development

Posted by richardbaxterseo

As content marketing evolves, the list of media you could choose to communicate your message expands. So does the list of technologies at your disposal. But without a process, a project plan and a tried and tested approach, you might struggle to gain any traction at all.

In this post, based on my MozCon 2014 presentation, I’d like to share the high level approach we take while developing content for our clients, and the lessons we’ve learned from initial research to final delivery. Hopefully there are some takeaways for you to enhance your own approach or make your first project a little less difficult.

This stuff is hard to do

I hate to break it to you, but the first few times you attempt to develop something a little more innovative, you’re going to get burned. Making things is pretty tough, and there are lots of lessons to learn. Sometimes you’ll think your work is going to be huge, and it flops. That sucks; move on, learn, and maybe come back later to revisit your approach.

To structure and execute a genuinely innovative, successful content marketing campaign, you need to understand what’s possible, especially within the context of your available skills, process, budget, available time and scope.

You’ll have a few failures along the journey, but when something goes viral, when people respond positively to your work – that, friends, feels amazing.

What this post is designed to address

In the early days of SEO, we built links. Email outreach, guest posting and, eventually, infographics. It was easy, for a time. Then Penguin came and changed everything.

Our industry learned that we should be finding creative and inventive ways to solve our customers’ problems, inspire, guide, help – whatever the solution, an outcome had to be justified. Yet still, a classic habit of the SEO remained: the need to decide in what form the content should be executed before deciding on the message to tell.

I think we’ve evolved from “let’s do an infographic on something!” to “I’ve got a concept that people will love. Should this be long form, an interactive, a data visualization, an infographic, a video, or something else?”

This post is designed to outline the foundations of an approach you can use to enhance your own content development. If you take one thing away from this article, let it be this:

The first rule of almost anything: be prepared or prepare to fail. This rule definitely applies to content development!

Understand the technical environment you’re hosting your content in

Never make assumptions about the technical environment your content will be hosted in. We’ve learned to ask more about the technical setup of a client’s website. You see, big enterprise-class sites usually have load balancing, pre-rendering, and very custom JavaScript that could introduce technical surprises much too late in the process. Better to be aware of what’s in store than hope your work will be compatible with its eventual home.

Before you get started on any development or design, make sure you’ve built an awareness of your client’s development and production environments. Find out more about their CMS, code base, and ask what they can and cannot host.

Knowing more about the client’s development schedule, for example how quickly a project can be uploaded, will help you plan lead times into your project documentation.

We’ve found that discussing early stage ideas with your client’s development team helps them visualise the scale of the task required to get something live. Involving them at this early stage means you’re informed of any potential risk in technology choice that could harm your project’s integrity later down the line.

Initial stakeholder outreach and ideation

Way back at MozCon 2013, I presented an idea called “really targeted outreach”. The concept was simple: find influential people in your space, learn more about the people they influence, and build content that appeals to both.

We’ve been using a similar methodology for larger content development projects: using social data (gathered from the Twitter Firehose and other freely available tools) to inspire the creative process, then reaching out to identified influencers and asking them to contribute to, or give feedback on, an idea. The trick is to execute your social research at a critical, early stage of the content development process. Essentially, you’re collecting data to gain a sense of confidence in the appeal of your content.

We’ve made content with such a broad range of people involved, from astronauts to butlers working at well known, historic hotels. With a little of the right approach to outreach, it’s amazing how helpful people can be. Supplemented by the confidence you’ve gained from your data, some positive results from your early stage outreach can really set a content project on the right course.

My tip: outreach and research several ideas and tell your clients which was most popular. If you can get them excited and behind the idea with the biggest response then you’ll find it easier to get everyone on the same page throughout your project.

Asset collection and research

Now, the real work begins. As I’ve written elsewhere, I believe that the depth of your content, its accuracy and integrity are an absolute must if it is to be taken seriously by those it’s intended for.

Each project tends to be approached a little differently, although I tend to see these steps in almost every one: research, asset collection, storyboarding and conceptual illustration.

For asset collection and research, we use Mural.ly – a wonderful collaborative tool that helps speed up the creative process. Members of the project team begin by collecting relevant information and assets (think: images, quotes, video snippets) and adding them to the project. As the collection evolves, we begin to arrange the data into something that might resemble a timeline:

After a while, the story begins to take shape. Depending on how complex the concept is, we’ll either go ahead with some basic illustration (a “white board session”) or we’ll detail the storyboard in written form. Here’s the Word document that summarised the chronological order of the content we’d planned for our Messages in the Deep project:

[Image: Messages in the Deep storyboard document]

And, if the brief is more complex, we’ll create a more visual outline in a whiteboard session with our designers:

[Image: whiteboard sketch for an interactive map]

How do you decide on the level of brief needed to describe your project? Generally, the more complex the project, the more important a full array of briefing materials and project scoping will be. If, however, we’re talking simpler, like “long form” article content, the chances are a written storyboard and a collection of assets should be enough.

[Image: Schema Guide]

Over time, we’ve learned how to roll out content that’s partially template based, rather than having to re-invent the wheel each time. Dan’s amazing Log File Analysis guide was reused when we decided to re-skin the Schema Guide, and as a result we’ve decided to give Kaitlin’s Google Analytics Guide the same treatment.

Whichever process you choose, it helps to re-engage your original contributors, influencers and publishers for feedback. Remember to keep them involved at key stages – if for no other reason than to make sure you’re meeting their expectations on content they’d be willing to share.

Going into development

Obviously we could talk all day about the development process. I think I’ll save the detail for my next post, but suffice it to say we’ve learned some big things along the way.

Firstly, it’s good to brief your developers well before the design and content is finalised. Particularly if there are features that might need some thought and experimental prototyping. I’ve found over time that a conversation with a developer leads to a better understanding of what’s easily possible with existing libraries and code. If you don’t involve the developers in the design process, you may find yourself committed to building something extremely custom, and your project timeline can become drastically underestimated.

It’s also really important to make sure that your developers have had the opportunity to specify how they’d like the design work to be delivered: file format, layers, and sizing for different break points are all really important to an efficient development schedule and make a huge difference to the agility of your work.

Our developers like to have a logical structure of layers and groups in a PSD. Layers and groups should all be named and it’s a good idea to attach different UI states for interactive elements (buttons, links, tabs, etc.), too.

Grid layouts are much preferred although it doesn’t matter if it’s 1200px or 960px, or 12/16/24 columns. As long as the content has some structure, development is easier.

As our developers like to say: structure = patterns = abstraction = good things. In an ideal world, they prefer to work with style tiles.

Launching

Big content takes more promotion to get that all important initial traction. Your outreach strategy has already been set, you’ve defined your influencers, and you have buy in from publishers. So, as soon as your work is ready, go ahead and tell your stakeholders it’s live and get that flywheel turning!

My pro tip for a successful launch is to be prepared to offer customised content for certain publishers. Simple touches, like The Washington Post’s animated GIF idea, can be a real touch of genius – I think some people liked the GIF more than the actual interactive! This post on Mashable was made possible by building parts of the interactive so they could be iframed – publishers seem to love a different approach, so try to design that concept in right at the beginning of your plan. From there, stand back, measure, learn and never give up!

That’s it for today’s post. I hope you’ve found it informative, and I look forward to your comments below.


Reblogged 4 years ago from moz.com

Long Tail CTR Study: The Forgotten Traffic Beyond Top 10 Rankings

Posted by GaryMoyle

Search behavior is fundamentally changing, as users become more savvy and increasingly familiar with search technology. Google’s results have also changed significantly over the last decade, going from a simple page of 10 blue links to a much richer layout, including videos, images, shopping ads and the innovative Knowledge Graph.

We also know there is an increasing number of touchpoints in a customer journey, involving different channels and devices. Google’s Zero Moment of Truth (ZMOT) theory, which describes a revolution in the way consumers search for information online, supports this idea and predicts that natural search will be involved on the path to conversion more and more often.

Understanding how people interact with Google and other search engines will always be important. Organic click curves show how many clicks you might expect from search engine results and are one way of evaluating the impact of our campaigns, forecasting performance and exploring changing search behavior.

Using search query data from Google UK for a wide range of leading brands, based on millions of impressions and clicks, we can gain insights into how CTR in natural search has evolved beyond those shown in previous studies by Catalyst, Slingshot and AOL.

Our methodology

The NetBooster study is based entirely on UK top search query data and has been refined by day in order to give us the most accurate sample size possible. This helped us reduce anomalies in the data in order to achieve the most reliable click curve possible, allowing us to extend it way beyond the traditional top 10 results.

We developed a method to extract data day by day to greatly increase the volume of keywords and to help improve the accuracy of the average ranking position. It ensured that the average was taken across the shortest timescale possible, reducing rounding errors.

The NetBooster study included:

  • 65,446,308 (65 million) clicks
  • 311,278,379 (311 million) impressions
  • 1,253,130 (1.2 million) unique search queries
  • 54 unique brands
  • 11 household brands (sites with a total of 1M+ branded keyword impressions)
  • Data covers several verticals including retail, travel and financial

We also looked at organic CTR for mobile, video and image results to better understand how people are discovering content in natural search across multiple devices and channels. 

We’ll explore some of the most important elements in this article.

How does our study compare against others?

Let’s start by looking at the top 10 results. In the graph below we have normalized the results in order to compare our curve, like-for-like, with previous studies from Catalyst and Slingshot. Straight away we can see that there is higher participation beyond the top four positions when compared to other studies. We can also see much higher CTR for positions lower on the pages, which highlights how searchers are becoming more comfortable with mining search results.

A new click curve to rule them all

Our first click curve is the most useful, as it provides the click through rates for generic non-brand search queries across positions 1 to 30. Initially, we can see a significant amount of traffic going to the top three results with position No. 1 receiving 19% of total traffic, 15% at position No. 2 and 11.45% at position No. 3. The interesting thing to note, however, is our curve shows a relatively high CTR for positions typically below the fold. Positions 6-10 all received a higher CTR than shown in previous studies. It also demonstrates that searchers are frequently exploring pages two and three.

[Image: organic CTR for the top 30 positions]
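For readers wanting to reproduce a curve like this from their own top-search-query exports, a minimal sketch might look like the following. The field names and numbers are illustrative, not from the study:

```python
# Hypothetical sketch (illustrative field names, toy numbers): bucket each
# query row by rounded average position, aggregate clicks and impressions,
# and compute CTR per position.
from collections import defaultdict

def click_curve(rows):
    """rows: (avg_position, clicks, impressions) tuples from a query report."""
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for pos, c, i in rows:
        bucket = round(pos)
        clicks[bucket] += c
        impressions[bucket] += i
    return {p: clicks[p] / impressions[p] for p in sorted(impressions)}

rows = [(1.2, 190, 1000), (1.4, 200, 1000), (2.1, 150, 1000), (3.0, 115, 1000)]
print(click_curve(rows))  # {1: 0.195, 2: 0.15, 3: 0.115}
```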

When we look beyond the top 10, we can see that CTR is also higher than anticipated, with positions 11-20 accounting for 17% of total traffic. Positions 21-30 also show higher than anticipated results, with over 5% of total traffic coming from page three. This gives us a better understanding of the potential uplift in visits when improving rankings from positions 11-30.

This highlights that searchers are frequently going beyond the top 10 to find the exact result they want. The prominence of paid advertising, shopping ads, Knowledge Graph and the OneBox may also be pushing users below the fold more often as users attempt to find better qualified results. It may also indicate growing dissatisfaction with Google results, although this is a little harder to quantify.

Of course, it’s important we don’t just rely on one single click curve. Not all searches are equal. What about the influence of brand, mobile and long-tail searches?

Brand bias has a significant influence on CTR

One thing we particularly wanted to explore was how the size of your brand influences the curve. To explore this, we banded each of the domains in our study into small, medium and large categories based on the sum of brand query impressions across the entire duration of the study.

[Image: organic CTR by brand size (small, medium, large)]
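The banding step can be sketched as follows; the impression thresholds and domains here are invented for illustration, as the study doesn't publish its exact cut-offs:

```python
# Sketch of the banding step: sum branded-query impressions per domain, then
# bucket into small/medium/large. Thresholds and domains are invented; the
# study doesn't publish its exact cut-offs.

def brand_band(branded_impressions, small_max=100_000, medium_max=1_000_000):
    if branded_impressions <= small_max:
        return "small"
    if branded_impressions <= medium_max:
        return "medium"
    return "large"

domains = {
    "nichebrand.co.uk": 40_000,
    "midbrand.co.uk": 400_000,
    "householdname.co.uk": 4_000_000,
}
print({d: brand_band(i) for d, i in domains.items()})
# {'nichebrand.co.uk': 'small', 'midbrand.co.uk': 'medium', 'householdname.co.uk': 'large'}
```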

When we look at how brand bias is influencing CTR for non-branded search queries, we can see that better known brands get a sizable increase in CTR. More importantly, small- to medium-size brands are actually losing out to results from these better-known brands and experience a much lower CTR in comparison.

What is clear is keyphrase strategy will be important for smaller brands in order to gain traction in natural search. Identifying and targeting valuable search queries that aren’t already dominated by major brands will minimize the cannibalization of CTR and ensure higher traffic levels as a result.

How does mobile CTR reflect changing search behavior?

Mobile search has become a huge part of our daily lives, and our clients are seeing a substantial shift in natural search traffic from desktop to mobile devices. According to Google, 30% of all searches made in 2013 were on a mobile device; they also predict mobile searches will constitute over 50% of all searches in 2014.

Understanding CTR from mobile devices will be vital as the mobile search revolution continues. It was interesting to see that the click curve remained very similar to our desktop curve. Despite the lack of screen real estate, searchers are clearly motivated to scroll below the fold and beyond the top 10.

[Image: NetBooster mobile organic CTR curve]

NetBooster CTR curves for top 30 organic positions


Position   Desktop CTR   Mobile CTR   Large Brand   Medium Brand   Small Brand
1          19.35%        20.28%       20.84%        13.32%         8.59%
2          15.09%        16.59%       16.25%        9.77%          8.92%
3          11.45%        13.36%       12.61%        7.64%          7.17%
4          8.68%         10.70%       9.91%         5.50%          6.19%
5          7.21%         7.97%        8.08%         4.69%          5.37%
6          5.85%         6.38%        6.55%         4.07%          4.17%
7          4.63%         4.85%        5.20%         3.33%          3.70%
8          3.93%         3.90%        4.40%         2.96%          3.22%
9          3.35%         3.15%        3.76%         2.62%          3.05%
10         2.82%         2.59%        3.13%         2.25%          2.82%
11         3.06%         3.18%        3.59%         2.72%          1.94%
12         2.36%         3.62%        2.93%         1.96%          1.31%
13         2.16%         4.13%        2.78%         1.96%          1.26%
14         1.87%         3.37%        2.52%         1.68%          0.92%
15         1.79%         3.26%        2.43%         1.51%          1.04%
16         1.52%         2.68%        2.02%         1.26%          0.89%
17         1.30%         2.79%        1.67%         1.20%          0.71%
18         1.26%         2.13%        1.59%         1.16%          0.86%
19         1.16%         1.80%        1.43%         1.12%          0.82%
20         1.05%         1.51%        1.36%         0.86%          0.73%
21         0.86%         2.04%        1.15%         0.74%          0.70%
22         0.75%         2.25%        1.02%         0.68%          0.46%
23         0.68%         2.13%        0.91%         0.62%          0.42%
24         0.63%         1.84%        0.81%         0.63%          0.45%
25         0.56%         2.05%        0.71%         0.61%          0.35%
26         0.51%         1.85%        0.59%         0.63%          0.34%
27         0.49%         1.08%        0.74%         0.42%          0.24%
28         0.45%         1.55%        0.58%         0.49%          0.24%
29         0.44%         1.07%        0.51%         0.53%          0.28%
30         0.36%         1.21%        0.47%         0.38%          0.26%

Creating your own click curve

This study gives you a set of benchmarks for both non-branded and branded click-through rates against which you can confidently compare your own click curve data. Using this data as a comparison will let you understand whether the appearance of your content is working for or against you.

We have made things a little easier for you by creating an Excel spreadsheet: simply drop your own top search query data in and it’ll automatically create a click curve for your website.

Simply visit the NetBooster website and download our tool to start making your own click curve.
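If you'd rather script the comparison than use a spreadsheet, here's a hypothetical sketch using the desktop benchmarks from the study's table (top five positions only); the "own" figures are invented:

```python
# Sketch of the comparison the spreadsheet performs, using the desktop
# benchmarks reported in the study (top five positions only). Your own curve
# would come from your top search query data; values here are invented.

BENCHMARK_DESKTOP = {1: 19.35, 2: 15.09, 3: 11.45, 4: 8.68, 5: 7.21}  # CTR, %

def ctr_gap(own_curve):
    """Return position -> (own CTR minus benchmark), in percentage points."""
    return {pos: round(own_curve[pos] - bench, 2)
            for pos, bench in BENCHMARK_DESKTOP.items() if pos in own_curve}

own = {1: 14.0, 2: 15.5, 3: 9.0}  # your measured CTRs, %
print(ctr_gap(own))  # {1: -5.35, 2: 0.41, 3: -2.45}
```

Negative gaps at a given position suggest your snippet is underperforming the benchmark there and may be worth a closer look.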

In conclusion

It’s been both a fascinating and rewarding study, and we can clearly see a change in search habits. Whatever the reasons for this evolving search behavior, we need to start thinking beyond the top 10, as pages two and three are likely to get more traffic in future. We also need to maximize the traffic created from existing rankings and not just think about position.

Most importantly, we can see practical applications of this data for anyone looking to understand and maximize their content’s performance in natural search. Having the ability to quickly and easily create your own click curve and compare this against a set of benchmarks means you can now understand whether you have an optimal CTR.

What could be the next steps?

There is, however, plenty of scope for improvement. We are looking forward to continuing our investigation, tracking the evolution of search behavior. If you’d like to explore this subject further, here are a few ideas:

  • Segment search queries by intent (How does CTR vary depending on whether a search query is commercial or informational?)
  • Understand CTR by industry or niche
  • Monitor the effect of new Knowledge Graph formats on CTR across both desktop and mobile search
  • Conduct an annual analysis of search behavior (Are people’s search habits changing? Are they clicking on more results? Are they mining further into Google’s results?)

Ultimately, click curves like this will change as the underlying search behavior continues to evolve. We are now seeing a massive shift in the underlying search technology, with Google in particular heavily investing in entity-based search (i.e., the Knowledge Graph). We can expect other search engines, such as Bing, Yandex and Baidu, to follow suit and use a similar approach.

The rise of smartphone adoption and constant connectivity also means natural search is becoming more focused on mobile devices. Voice-activated search is also a game-changer, as people start to converse with search engines in a more natural way. This has huge implications for how we monitor search activity.

What is clear is no other industry is changing as rapidly as search. Understanding how we all interact with new forms of search results will be a crucial part of measuring and creating success.


Reblogged 4 years ago from feedproxy.google.com

How To Select The Perfect Clients

Posted by Bill.Sebald

I truly believe in the power of partnerships. There have been some incredible partnerships that changed the fabric of our culture. Larry Page and Sergey Brin. William Procter and James Gamble. The Olsen twins.

Good partnerships provide support, motivation, and complementary skills, often allowing you to overcome hurdles faster and create some truly marvelous things. In consulting or any agency work, the concept of “partnership” should be the backbone of your relationship. Like a puzzle piece, sometimes the fit is initially difficult to find – if available at all. The truth is, you’re only secure if your clients are walking in the same direction as the flow of your service. If they’re walking against the current, you have what I believe to be the most detrimental predicament a service provider can have – a rift. That’s a truly offensive four-letter word.

What kind of rift are we talking about? Let’s do a little calculating.

First think about what you or your agency is really good at. Think about the components you have the most success with; this may actually be different than where you’re most experienced. Think about what you should be selling versus not (even if those items are currently on your menu – let’s be candid here, a lot of us casually promote services we believe we should be selling even though it’s not a fully baked product or core competency). Think about the amount of time you really spent challenging a given service to make sure it’s truly impactful to a client versus your own bottom line.

Next, think about your past client debacles (if you haven’t stopped to perform a postmortem, you should). Chances are these led to events that cost you a lot of time, pain, and possibly money. They are the memories that make you shudder. Those are the days that made you dust off your resume and think about a career change.  

Finally, how many of these past clients should have never been signed in the first place? How many simply weren’t a fit from the start? How many simply never had a shot at being successful with you – and vice-versa? This computation really needs serious consideration. Have you wasted everyone’s time?

There can be a costly fallout. I’ve seen talented team members quit over clients that simply could not be managed. I’ve seen my colleagues go so far as to cry or start seeking therapy (in part) because of overwhelming clients who were not getting what they expected and a parent company who wasn’t providing any relief. Sometimes these clients were bound to an annual contract which only made them more desperate and angry. Rifts like this can kill your business.

This should never happen.

Client/agency relationships are marriages, but marriages start with dating

I really like this 2011 post from A List Apart called Marry Your Clients. A few years old, but nothing has changed. However, my post is going to talk about the courting part before the honeymoon.

My post also assumes you make more money on longer consulting relationships. If you’ve somehow built your model through routinely hunting new business with the expectation you’re going to get fired, then that’s a different story. For most of us however, on-boarding a client is a lot of work, both in terms of hours (which is money) and brainpower. If you “hit it off” with your client, you begin to know their business more intimately, as well as their goals and KPIs. The strategies get easier to build; they also tend to be more successful as you become aware of what their tastes and limitations are. You find you have things in common (perhaps you both enjoy long walks to the bank). You often become true partners with your clients, who in turn promote your ideas to their bosses. These are your most profitable engagements, as well as your most rewarding. They tend to last years, sometimes following your point-of-contact to their next jobs as well.

But you don’t get this way simply because both parties signed a legally binding document.

The truth is not all parties can work together. A lot of client/agency relationships end in divorce. Like in romance, sometimes you just aren’t compatible.

A different kind of online dating

After my first marriage went kaput, I’ll admit I went to Match.com. For those who never tried online dating, it’s really an exercise in personal marketing. You upload your most attractive pictures. You sell yourself above everyone else. You send communications back and forth to the interested parties where you work to craft the “perfect” response; as well as ask qualifying questions. I found it works pretty well – the online process saved me from potentially bad dates. Don’t get me wrong, I still have some awkward online dating stories…

Photo from Chuck Woolery’s Twitter profile

With consulting, if we’re supposed to ultimately marry our clients, we should obviously be allowed to see if there’s a love connection. We should all be our own Chuck Woolery. I tend to think this stage is crucial, but often rushed by agencies or managed by a department outside of your own.

Some agencies seem to have a “no dating” policy. For some, it’s not uncommon to come in to work and have an email from a higher-up with the subject, “congratulations – you’re now married to a new client!” Whether it’s a client development department, or an add-on from an existing client, your marketing department is suddenly forced into an arranged marriage where you can only hope to live up to their expectations.

This is a recipe for disaster. I don’t like to run a business on luck and risk, so clearly this makes no sense to me.

But I’ve been there. I once worked for an agency that handed me a signed contract for a major underwear brand – but I didn’t even know we were speaking to them. Before I had a chance to get the details, the VP of digital marketing called me. I did my best to understand what they were promised in terms of SEO goals without admitting I really had no clue about their business. The promises were unrealistic, but being somewhat timid and naïve back in the day, I went with it. Truth is, their expectations did not fit into our model, philosophies, or workflow. Ultimately I failed to deliver on their expectations. The contract ended early and I vowed to never let that happen again. Not just for the stress and anxiety it brought upon my team and me, but for the blatant neglect of the client as well.

With this being something I never forgot, I would occasionally bring this story up with others I met at networking events or conventions. I quickly learned this is far from an isolated incident occurring only to me. This is how some agencies build their business development departments.

Once again, this should never happen.

How to qualify a client

Let’s assume by now I have successfully inspired a few things:

  1. A client/agency relationship should truly be a partnership akin to a good marriage.
  2. A client should never be thrown into a model that doesn’t make sense for their business (i.e., your style of SEO services), and process should be in place for putting all the parties in the same room before a deal is signed.

Now we’re up to number 3:

  3. Not all relationships work, so all parties should try to truly connect before there is a proposal. Don’t rush the signature!

Here are some of the things we do at Greenlane to really qualify a client. Before I continue, though, I’m proud to brag a little. With these practices in place, our close rate – that is, for the companies we really want to work with – is 90% in our favor. Our retention is also very high. Once we started being prudent with our intake, we’ve only lost a few companies due to funding issues or a change in their business model – not out of performance. I should also add that these tips work with all sizes of clients. While some of our 20+ clients are smaller businesses, we also have household brands and public companies, all of which could attest to going through this process with us.

It’s all in the details

Your website is your Match.com profile. Your website is your personality. If you’re vague, promotional, or full of hype, only to have your “car salesman” gear kick in once you get someone on the phone, I don’t think you’re using the website to the best of its ability. People want to use the website to learn more about you before they reach out.

Our “about us” page is our third most visited page next to the homepage and pricing (outside of the blog). You can see an example from a Hotjar heatmap:

The truth is, I’m always tweaking (and A/B testing) our message on the about us page. This page is currently part of a funnel that we carefully put together. The “about us” page is a quick but powerful overview, putting our team front and center and highlighting our experience (including some past clients).

I believe the website’s more than a brochure. It’s a communication device. Don’t hide or muddle who you are. When I get a prospect email through our form, I always lead them to our “Are We The Right Fit” page. That’s right – I actually ask them to consider choosing wisely. At first glance, this might seem to go against a conversion funnel that heats up the prospect and only encourages momentum, but this page has really been a strong asset. It’s crafted to transparently present our differentiators, values, and even our pricing. It’s also crafted to discourage those who aren’t a good fit. You can find this page here. Even our URL poses the “Are We The Right Fit” question.

We want prospects to make a good decision. We care so much about companies doing great that we’d rather you find someone else if our model isn’t perfect. Sure, sometimes after pointing someone to that link, they never return. That’s OK. Just like a dating profile, this page is designed to target a certain kind of interest. Time is a commodity in agency life – no sense in wasting it on a conversation that isn’t qualified. When we do catch a prospect after reviewing the page and hear, “we went with another firm who better suits our needs,” it actually doesn’t feel like a loss at all.

Everyone who comes back goes into our pipeline. At this stage they all get a follow-up phone call. If they aren’t a good fit from the get-go, we actually try to introduce them to other SEO companies or consultants who would serve them better. But 9 times out of 10, it’s an amazing conversation.

Never drop the transparency

There are a few things I try to tell all the prospects I ultimately speak with. One, I openly admit I’m not a salesman. I couldn’t sell ice water to people in hell. But I’m good at being really candid about our strengths and experiences.

Now this one tends to surprise some, especially in the larger agency setting. We admit that we are really choosy about the clients we take on. For our model, we need clients who are flexible, fast moving, interested in brand building, and interested in long-term relationships. We want clients who think in terms of strategy and will let us work with their existing marketing team and vendors. We audit them for their understanding of SEO services and tell them how we’re either alike or different.

I don’t think a prospect call goes by without me saying, “While you’re checking us out to see if we’re a good fit, we’re doing the same for you.” Then, if the call goes great, I let them know we’d like a follow-up call to continue (a second date, if you will). This follow-up call is where the real decision gets made.

Ask the right questions

I’ve vetted the opportunity; now my partner – who naturally has a different way of approaching opportunities and relationships – asks a different set of questions. This adds a whole different dimension and catches the questions I may not have asked. We’ve had companies ready to sign on the first call, and I’ve had to defer any signatures until the next conversation. This too may seem counterintuitive to traditional business development, but we find it extremely valuable. It’s true that we could have more clients in our current book of business, but I can proudly state that every current client is exactly who we want to be with; this is very much because of everything you’ve read so far.

On each call we have a list of qualifying questions that we ask. Most are “must answer” questions, while others can roll into a needs analysis questionnaire that we give to each signed client. The purpose of the needs analysis is to get more granular into business items (such as seasonal trends, industry intelligence, etc.) for the intention of developing strategies. With so much to ask, it’s important to be respectful of the prospects’ time. At this point they’ve usually already indicated they’ve read our website, can afford our prices, and feel like we’re a good fit.

Many times prospects start with their introduction and answer some of our questions. While they speak, I intently listen and take many notes.

These are 13 questions from my list that I always make sure get answered on a call, with some rationale:

Questions for the prospect:

1. Can you describe your business model and products/services?

  1. What do you sell?
  2. B2B or B2C?
  3. Retail or lead generation?

Rationale: Sometimes when reviewing the website, it’s not immediately clear what kind of business they’re in. Perhaps the site just does a bad job, or sometimes their real money-making services are deeper in the site and easily missed by a fast scan. One of our clients works with the government and seems to have an obvious model, but the real profit is from a by-product – something we would never have picked up on during our initial review of the website. It’s important to find out exactly what the company does. Is it interesting? Can you stay engaged? Is it a sound model that you believe in? Is it a space you have experience in?

2. What has been your experience with [YOUR SERVICE] in the past?

Rationale: Many times, especially if your model is different, a prospect may have a preconceived notion of what you actually do. Let’s take SEO as an example – there are several different styles of SEO services. If they had a link building company in the past, and you’re a more holistic SEO consulting practice, their point of reference may only be with what they’ve experienced. They may even have a bad taste in their mouth from a previous engagement, which gives you a chance to air it out and see how you compare. This is also a chance to know if you’re potentially playing with a penalized site.

3. What are your [PPC/SEO/etc.] goals?

Rationale: Do they have realistic goals, or lofty, impossible goals? Be candid – tell them if you don’t think you can reach the goals on the budget they have, or if you think they should choose other goals. Don’t align yourself with goals you can’t hit. This is where many conversations could end.

4. What’s your mission or positioning statement?

Rationale: If you’re going to do more than just pump up their rankings, you probably want to know the full story. This should provide a glimpse into other marketing the prospect is executing.

5. How do you stand out?

Rationale: Sometimes this is answered with the question above. If not, really dig up the differentiators. Those are typically the key items to build campaigns on.  Whether they are trying to create a new market segment or have a redundant offering, this can help you set timeline and success expectations.

6. Are you comfortable with an agency that may challenge your plans and ideas?

Rationale: This is one of my favorite questions. There are many who hire an agency and expect “yes-men.” Personally, I believe an agency or consultant should be a partner – that is, not afraid to fight for what they know is right for the benefit of the client. You shouldn’t be afraid of injury.

7. Who are your competitors?

Rationale: Not only do you want this for competitive benchmarking, but this can often help you understand more about the prospect. Not to mention, how big a hill you might have to climb to start competing on head terms.

8. What is your business reach (local, national, or international)?

Rationale: An international client is going to need more work than a domestic client. A local client is going to need an expertise in local search. Knowing the scope of the company can help you align your skills with their targets.

9. What CMS are you on?

Rationale: This is a big one. It tells you how much flexibility you will have. WordPress? Great – you’ll probably have a lot of access to files and templates. A proprietary CMS or enterprise solution? Uh-oh. That probably means tickets and project queues. Are you OK with that?

10. What does your internal team look like?

Rationale: Another important question. Who will you be working with? What skill sets? Will you be able to sit at the table with other vendors too? If you’re being hired to fill in the gaps, make sure you have the skills to do so. I ask about copywriters, developers, designers, and link builders at a minimum.

11. What do you use for analytics?

Rationale: A tool like Wappalyzer can probably tell you, but sometimes bigger companies have their own custom analytics through their host. Sometimes it’s bigger than Google Analytics, like Omniture. Will you be allowed to have direct access to it? You’d be surprised how often we hear no.
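To illustrate the idea, here is a minimal sketch of the kind of check a tool like Wappalyzer performs: scanning a page’s HTML for script signatures of known analytics vendors. The signature list below is a small, hand-picked assumption for demonstration – it is not Wappalyzer’s actual ruleset, and a real fingerprinting tool uses far more patterns.

```python
# Naive sketch of analytics-vendor detection by HTML signature matching.
# The signatures below are illustrative assumptions, not an official ruleset.

ANALYTICS_SIGNATURES = {
    "Google Analytics": ("google-analytics.com/analytics.js", "gtag/js"),
    "Adobe Analytics (Omniture)": ("s_code.js", "AppMeasurement.js"),
    "Hotjar": ("static.hotjar.com",),
}

def detect_analytics(html: str) -> list:
    """Return the vendors whose script signatures appear in the HTML."""
    found = []
    for vendor, patterns in ANALYTICS_SIGNATURES.items():
        if any(pattern in html for pattern in patterns):
            found.append(vendor)
    return found

# Example: a page loading Google's gtag snippet (hypothetical markup).
sample = '<script async src="https://www.googletagmanager.com/gtag/js?id=UA-1"></script>'
print(detect_analytics(sample))  # → ['Google Analytics']
```

In practice you would fetch the prospect’s live page source first; the hardcoded `sample` string here just keeps the sketch self-contained.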

12. How big is your site?  Do you have other properties?

Rationale: It’s surprising how often a prospect forgets to mention those 30+ subdomains and microsites. If the prospect envisions them as part of the deal, you should at least be aware of how far the core website extends.

13. What is your budget, preferred start time, and end date?

Rationale: The biggest question of all. Do they even meet your fee requirements? Are you staffed and ready to take on the work? Sure, talking money can be tough, but if you publish firm rates, the prospect is generally more open to talking budget. They don’t feel like a negotiation is going to happen.

Conclusion

While these are the core questions we use, I’m sure the list will eventually grow. I don’t think you should copy our list, or the order. You should ultimately create your own. Every agency or consultant has different requirements, and interviewing your prospect is as important as allowing them to interview you. But remember, you don’t have to have all the business – just the right kind of business. You will grow organically from your positive experiences. We all hear about “those other agencies” and how they consistently fail to meet client expectations. Next to “do great work,” this is one powerful way to stay off that list.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from feedproxy.google.com

Is It Possible to Have Good SEO Simply by Having Great Content – Whiteboard Friday

Posted by randfish

This question, posed by Alex Moravek in our Q&A section, has a somewhat complicated answer. In today’s Whiteboard Friday, Rand discusses how organizations might perform well in search rankings without doing any link building at all, relying instead on the strength of their content to be deemed relevant and important by Google.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about whether it’s possible to have good SEO simply by focusing on great content, to the exclusion of link building.

This question was posed in the Moz Q&A Forum, which I deeply love, by Alex Moravek — I might not be saying your name right, Alex, and for that I apologize — from SEO Agencias in Madrid. My Spanish is poor, but my love for churros is so strong.

Alex, I think this is a great question. In fact, we get asked this all the time by all sorts of folks, particularly people in the blogging world and people with small and medium businesses who hear about SEO and go, “Okay, I think I can make my website accessible, and yes, I can produce great content, but I just don’t feel comfortable, don’t have the time and energy, don’t understand, or just don’t feel okay with doing link building.” Link acquisition through outreach and a manual process is beyond the scope of what they can fit into their marketing activities.

In fact, it is possible, kind of, sort of. But for this strategy to work, you desperately need two things: one, content exposure, and two, time. I’ll explain why you need both.

I’m going to dramatically simplify Google’s ranking algorithm. In fact, I’m going to simplify it so much that those of you who are SEO professionals are going to be like, “Oh God, Rand, you’re killing me.” I apologize in advance. Just bear with me a second.

We basically have keywords and on-page stuff – topical relevance, etc. All your topic-modeling stuff might go in there. There’s content quality – all the factors that Google and Bing might measure around a piece of content’s quality. There’s domain authority – link-based authority, based on the links that point to all the pages on a given domain, which tells Google or Bing how important pages on that particular domain are.

There are probably some topical relevance elements in there, too. There’s page level authority. These could be all the algorithms you’ve heard of like PageRank and TrustRank, etc., and all the much more modern ones of those.

I’m not specifically talking about Moz scores here, the Moz scores DA and PA. Those are rough interpretations of these much more sophisticated formulas that the engines have.

There’s user and usage data, which we know the engines are using. They’ve talked about using that. There’s spam analysis.

Super simplistic. There are these six broad categories of ranking elements. If you have only four of them – keywords and on-page factors, content quality, user and usage data, and a clean spam profile – but no domain authority or page authority, it’s next to impossible to rank for competitive terms, and it’s very challenging and unlikely that you’ll rank even for stuff in the chunky middle. In the long tail you might rank for a few things, if they’re very, very long tail. Taken together, these six things give you a sense of ranking ability.
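Rand’s simplified model can be sketched as a score over the six signal categories. To be clear, the category names are his; the 0–100 scale and the equal weighting below are purely illustrative assumptions, not Google’s actual algorithm or weights.

```python
# Illustrative sketch of the six-signal "ranking ability" model from the
# transcript. All scores and the equal weighting are invented assumptions.

SIGNALS = [
    "keywords_on_page",   # keywords, topical relevance, topic modeling
    "content_quality",    # quality factors the engines might measure
    "domain_authority",   # link-based authority across the domain
    "page_authority",     # PageRank/TrustRank-style page-level signals
    "user_usage_data",    # user and usage signals
    "spam_cleanliness",   # spam analysis (higher = cleaner)
]

def ranking_ability(scores: dict) -> float:
    """Average the six 0-100 signal scores into a rough overall score."""
    return sum(scores.get(signal, 0) for signal in SIGNALS) / len(SIGNALS)

# The "great content, no link building" case: strong everywhere except
# the two authority signals, which drag the overall score down.
no_links = {
    "keywords_on_page": 90,
    "content_quality": 95,
    "domain_authority": 5,   # no links earned yet
    "page_authority": 5,
    "user_usage_data": 80,
    "spam_cleanliness": 100,
}
print(ranking_ability(no_links))  # → 62.5
```

The point of the sketch is the shape of the argument, not the numbers: with authority near zero, even excellent content and usage signals leave the overall score capped well below a site that has both.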

Here’s what some marketers, some bloggers, some folks who invest in content nearly to the exclusion of links have found. They have had success with this strategy. They’ve basically elected to entirely ignore link building and let links come to them.

Instead of focusing on link building, they’re going to focus on product quality, press and public relations, social media, offline marketing, word of mouth, content strategy, email marketing, these other channels that can potentially earn them things. Advertising as well potentially could be in here.

What they rely on is that people find them through these other channels. They find them through social, through ads, through offline, through blogs, through very long tail search, through their content, maybe their email marketing list, word of mouth, press. All of these things are discovery mechanisms that are not search.

Once people get to the site, then these websites rely on the fact that, because of the experience people have, the quality of their products, of their content, because all of that stuff is so good, they’re going to earn links naturally.

This is a leap. In fact, for many SEOs, this is kind of a crazy leap to make, because there are so many things that you can do that will nudge people in this link earning direction. We’ve talked about a number of those at Moz. Of course, if you visit the link building section of our blog, there are hundreds if not thousands of great strategies around this.

These folks have elected to ignore all that link building stuff, let the links come to them, and these signals, these people who visit via other channels eventually lead to links which lead to DA, PA ranking ability. I don’t think this strategy is for everyone, but it is possible.

I think in the utopia that Larry Page and Sergey Brin from Google imagined when they were building their first search engine this is, in fact, how they hoped that the web would work. They hoped that people wouldn’t be out actively gaming and manipulating the web’s link graph, but rather that all the links would be earned naturally and editorially.

I think that’s a very, very optimistic and almost naive way of thinking about it. Remember, they were college students at the time. Maybe they were eating their granola, and dancing around, and hoping that everyone on the web would link only for editorial reasons. Not to make fun of granola. I love granola, especially, oh man, with those acai berries. Bowls of those things are great.

This is a potential strategy if you are very uncomfortable with link building and you feel like you can optimize this process. You have all of these channels going on.

For SEOs who are thinking, “Rand, I’m never going to ignore link building,” you can still get a tremendous amount out of thinking about how you optimize the return on investment and especially the exposure that you receive from these and how that might translate naturally into links.

I find looking at websites that accomplish SEO without active link building fascinating, because they have editorially earned those links through very little intentional effort on their own. I think there’s a tremendous amount that we can take away from that process and optimize around this.

Alex, yes, this is possible. Would I recommend it? Only in a very few instances. I think that there’s a ton that SEOs can do to optimize and nudge and create intelligent, non-manipulative ways of earning links that are a little more powerful than just sitting back and waiting, but it is possible.

All right, everyone. Thanks for joining us, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Reblogged 5 years ago from feedproxy.google.com