[INFOGRAPHIC] Email marketing statistics guide, by Every Cloud

We love this statistics infographic by Every Cloud, which shows you just how effective good email marketing can be for your business.

Want to read more on how you can improve the performance of your email marketing? Check out our resources library, which has oodles of free downloadable cheatsheets, guides and whitepapers.

[ccw-atrib-link]

The Inbound Marketing Economy

Posted by KelseyLibert

When it comes to job availability and security, the future looks bright for inbound marketers.

The Bureau of Labor Statistics (BLS) projects that employment for marketing managers will grow by 13% between 2012 and 2022. Job security for marketing managers also looks positive according to the BLS, which cites that marketing employees are less likely to be laid off since marketing drives revenue for most businesses.

While the BLS provides growth estimates for managerial-level marketing roles, these projections don’t give much insight into the growth of digital marketing or of the specific disciplines within it. As we know, “marketing” can refer to a variety of different specializations and methodologies. Since digital marketing is still relatively new compared to other fields, there is not much comprehensive research on job growth and trends in our industry.

To gain a better understanding of the current state of digital marketing careers, Fractl teamed up with Moz to identify which skills and roles are the most in demand and which states have the greatest concentration of jobs.

Methodology

We analyzed 75,315 job listings posted on Indeed.com during June 2015 based on data gathered from job ads containing the following terms:

  • “content marketing” or “content strategy”
  • “SEO” or “search engine marketing”
  • “social media marketing” or “social media management”
  • “inbound marketing” or “digital marketing”
  • “PPC” (pay-per-click)
  • “Google Analytics”

We chose the above keywords based on their likelihood to return results that were marketing-focused roles (for example, just searching for “social media” may return a lot of jobs that are not primarily marketing focused, such as customer service). The occurrence of each of these terms in job listings was quantified and segmented by state. We then combined the job listing data with U.S. Census Bureau population estimates to calculate the jobs per capita for each keyword, giving us the states with the greatest concentration of jobs for a given search query.
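To make the per-capita calculation concrete, here is a minimal sketch of combining listing counts with population estimates. The listing counts and populations below are hypothetical placeholders, and the sketch assumes “jobs per capita” is expressed per 100,000 residents, as the state-level figures later in this post suggest.

```python
state_population = {"MA": 6_745_000, "NY": 19_750_000, "UT": 2_943_000}

# Count of June 2015 Indeed listings matching a keyword, by state (hypothetical numbers)
listings = {
    "seo": {"MA": 560, "NY": 1_630, "UT": 140},
    "ppc": {"MA": 60, "NY": 190, "UT": 59},
}

def jobs_per_capita(keyword):
    """Listings per 100,000 residents for each state, highest concentration first."""
    per_capita = {
        state: round(count / state_population[state] * 100_000, 1)
        for state, count in listings[keyword].items()
    }
    return sorted(per_capita.items(), key=lambda kv: kv[1], reverse=True)

print(jobs_per_capita("seo"))  # e.g. [('MA', 8.3), ('NY', 8.3), ('UT', 4.8)]
print(jobs_per_capita("ppc"))
```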

Using the same data, we identified which job titles appeared most frequently. We used existing data from Indeed to determine job trends and average salaries. LinkedIn search results were also used to identify keyword growth in user profiles.

Marketing skills are in high demand, but talent is hard to find

As the marketing industry continues to evolve due to emerging technology and marketing platforms, marketers are expected to pick up new skills and broaden their knowledge more quickly than ever before. Many believe this rapid rate of change has caused a marketing skills gap, making it difficult to find candidates with the technical, creative, and business proficiencies needed to succeed in digital marketing.

The ability to combine analytical thinking with creative execution is highly desirable and necessary in today’s marketing landscape. According to an article in The Guardian, “Companies will increasingly look for rounded individuals who can combine analytical rigor with the ability to apply this knowledge in a practical and creative context.” Being both detail-oriented and a big picture thinker is also a sought-after combination of attributes. A report by The Economist and Marketo found that “CMOs want people with the ability to grasp and manage the details (in data, technology, and marketing operations) combined with a view of the strategic big picture.”

But well-rounded marketers are hard to come by. In a study conducted by Bullhorn, 64% of recruiters reported a shortage of skilled candidates for available marketing roles. Wanted Analytics recently found that one of the biggest national talent shortages is for marketing manager roles, with only two available candidates per job opening.

Increase in marketers listing skills in content marketing, inbound marketing, and social media on LinkedIn profiles

While recruiter frustrations may indicate a shallow talent pool, LinkedIn tells a different story—the number of U.S.-based marketers who identify themselves as having digital marketing skills is on the rise. Using data tracked by Rand and LinkedIn, we found the following increases of marketing keywords within user profiles.

growth of marketing keywords in linkedin profiles

The number of profiles containing “content marketing” has seen the largest growth, with a 168% increase since 2013. “Social media” has also seen significant growth with a 137% increase. “Social media” appears on a significantly higher volume of profiles than the other keywords, with more than 2.2 million profiles containing some mention of social media. Although “SEO” has not seen as much growth as the other keywords, it still has the second-highest volume, appearing in 630,717 profiles.

Why is there a growing number of people self-identifying as having the marketing skills recruiters want, yet recruiters think there is a lack of talent?

While there may be a lot of specialists out there, perhaps recruiters are struggling to fill marketing roles due to a lack of generalists, or even a lack of specialists who also have surface-level knowledge of other areas of digital marketing (also known as a T-shaped marketer).

Popular job listings show a need for marketers to diversify their skill set

The job listing data we gathered confirms this, as the 20 most common digital marketing-related job titles being advertised call for a broad mix of skills.

20 most common marketing job titles

It’s no wonder that marketing manager roles are hard to fill, considering the job ads are looking for proficiency in a wide range of marketing disciplines including social media marketing, SEO, PPC, content marketing, Google Analytics, and digital marketing. Even job descriptions for specialist roles tend to call for skills in other disciplines. A particular role such as SEO Specialist may call for several skills other than SEO, such as PPC, content marketing, and Google Analytics.

Taking a more granular look at job titles, the chart below shows the five most common titles for each search query. One might expect mostly specialist roles to appear here, but there is a high occurrence of generalist positions, such as Digital Marketing Manager and Marketing Manager.

5 most common job titles by search query

Only one job title containing “SEO” cracked the top five. This indicates that SEO knowledge is a desirable skill within other roles, such as general digital marketing and development.

Recruiter was the third most common job title among job listings containing social media keywords, which suggests a need for social media skills in non-marketing roles.

Similar to what we saw with SEO job titles, only one job title specific to PPC (Paid Search Specialist) made it into the top job titles. PPC skills are becoming necessary for more general marketing roles, such as Marketing Manager and Digital Marketing Specialist.

Across all search queries, the most common jobs advertised call for a broad mix of skills. This tells us hiring managers are on the hunt for well-rounded candidates with a diverse range of marketing skills, as opposed to candidates with expertise in one area.

Marketers who cultivate diverse skill sets are better poised to gain an advantage over other job seekers, excel in their job role, and accelerate career growth. Jason Miller says it best in his piece about the new breed of hybrid marketer:

future of marketing quote linkedin

Inbound job demand and growth: Most-wanted skills and fastest-growing jobs

Using data from Indeed, we identified which inbound skills have the highest demand and which jobs are seeing the most growth. Social media keywords claim the largest volume of results out of the terms we searched for during June 2015.

number of marketing job listings by keyword

“Social media marketing” or “social media management” appeared the most frequently in the job postings we analyzed, with 46.7% containing these keywords. “PPC” returned the smallest number of results, with only 3.8% of listings containing this term.

Perhaps this is due to social media becoming a more necessary skill across many industries and not only a necessity for marketers (for example, social media’s role in customer service and recruitment). On the other hand, job roles calling for PPC or SEO skills are most likely marketing-focused. The prevalence of social media jobs also may indicate that social media has gained wide acceptance as a necessary part of a marketing strategy. Additionally, social media skills command lower salaries than other marketing skills, making it cheaper to hire for these positions (we will explore this further in the average salaries section below).

Our search results also included a high volume of jobs containing “digital marketing” and “SEO” keywords, which made up 19.5% and 15.5% respectively. At 5.8%, “content marketing” had the lowest search volume after “PPC.”

Digital marketing, social media, and content marketing experienced the most job growth

While the number of job listings tells us which skills are most in demand today, looking at which jobs are seeing the most growth can give insight into shifting demands.

digital marketing growth on  indeed.com

Digital marketing job listings have seen substantial growth since 2009, when they accounted for less than 0.1% of Indeed.com search results. By January 2015, this number had climbed to nearly 0.3%.

social media job growth on indeed.com

While social media marketing jobs have seen some uneven growth, as of January 2015 more than 0.1% of all job listings on Indeed.com contained the term “social media marketing” or “social media management.” This shows a significant upward trend considering this number was around 0.05% for most of 2014. It’s also worth noting that “social media” is currently ranked No. 10 on Indeed’s list of top job trends.

content marketing job growth on indeed.com

Despite its growth from 0.02% to nearly 0.09% of search volume in the last four years, “content marketing” does not make up a large volume of job postings compared to “digital marketing” or “social media.” In fact, “SEO” has seen a decrease in growth but still constitutes a higher percentage of job listings than content marketing.

SEO, PPC, and Google Analytics job growth has slowed down

On the other hand, search volume on Indeed has either decreased or plateaued for “SEO,” “PPC,” and “Google Analytics.”

seo job growth on indeed.com

As we see in the graph, the volume of “SEO” job listings peaked between 2011 and 2012. This is also around the time content marketing began gaining popularity, thanks to the Panda and Penguin updates. The decrease may be explained by companies moving their marketing budgets away from SEO and toward content or social media positions. However, “SEO” still accounts for a significant number of job listings, appearing in more than 0.2% of job listings on Indeed as of 2015.

ppc job growth on indeed.com

“PPC” has seen the most staggered growth among all the search terms we analyzed, with its peak of nearly 0.1% happening between 2012 and 2013. As of January of this year, search volume was below 0.05% for “PPC.”

google analytics job growth on indeed.com

Despite a lack of growth, the need for Google Analytics skills remains steady. Between 2008 and 2009, “Google Analytics” job ads saw a huge spike on Indeed. Since then, the search volume has tapered off and plateaued through January 2015.

Most valuable skills are SEO, digital marketing, and Google Analytics

So we know the number of social media, digital marketing, and content marketing jobs is on the rise. But which skills are worth the most? We looked at the average salaries based on keywords and estimates from Indeed and salaries listed in job ads.

national average marketing salaries

Job titles containing “SEO” had an average salary of $102,000. Meanwhile, job titles containing “social media marketing” had an average salary of $51,000. Considering such a large percentage of the job listings we analyzed contained “social media” keywords, there is a much larger pool of jobs; therefore, a lot of entry-level social media jobs or internships are probably bringing down the average salary.

Job titles containing “Google Analytics” had the second-highest average salary at $82,000, but this should be taken with a grain of salt considering “Google Analytics” will rarely appear as part of a job title. The chart below, which shows average salaries for jobs containing keywords anywhere in the listing as opposed to only in the title, gives a more accurate idea of how much “Google Analytics” job roles earn on average.

national salary averages marketing keywords

Looking at the average salaries based on keywords that appeared anywhere within the job listing (job title, job description, etc.) shows a slightly different picture. Based on this, jobs containing “digital marketing” or “inbound marketing” had the highest average salary of $84,000. “SEO” and “Google Analytics” are tied for second with $76,000 as the average salary.

“Social media marketing” takes the bottom spot with an average salary of $57,000. However, notice that there is a higher average salary for jobs that contain “social media” within the job listing as opposed to jobs that contain “social media” within the title. This suggests that social media skills may be more valuable when combined with other responsibilities and skills, whereas a strictly social media job, such as Social Media Manager or Social Media Specialist, does not earn as much.

Massachusetts, New York, and California have the most career opportunities for inbound marketers

Looking for a new job? Maybe it’s time to pack your bags for Boston.

Massachusetts led the U.S. with the most jobs per capita for digital marketing, content marketing, SEO, and Google Analytics. New York took the top spot for social media jobs per capita, while Utah had the highest concentration of PPC jobs. California ranked in the top three for digital marketing, content marketing, social media, and Google Analytics. Illinois appeared in the top 10 for every term and usually ranked within the top five. Most of the states with the highest job concentrations are in the Northeast, West, and East Coast, with a few exceptions such as Illinois and Minnesota.

But you don’t necessarily have to move to a new state to increase the odds of landing an inbound marketing job. Some unexpected states also made the cut, with Connecticut and Vermont ranking within the top 10 for several keywords.

concentration of digital marketing jobs

marketing jobs per capita

Job listings containing “digital marketing” or “inbound marketing” were most prevalent in Massachusetts, New York, Illinois, and California, which is most likely due to these states being home to major cities where marketing agencies and large brands are headquartered or have a presence. You will notice these four states make an appearance in the top 10 for every other search query and usually rank close to the top of the list.

More surprising to find in the top 10 were smaller states such as Connecticut and Vermont. Many major organizations are headquartered in Connecticut, which may be driving the state’s need for digital marketing talent. Vermont’s high-tech industry growth may explain its high concentration of digital marketing jobs.

content marketing job concentration

per capita content marketing jobs

Although content marketing jobs are growing, the overall volume of available jobs is still low, as shown by the low jobs per capita compared to most of the other search queries. With more than three jobs per capita, Massachusetts and New York topped the list for the highest concentration of job listings containing “content marketing” or “content strategy.” California and Illinois rank third and fourth with 2.8 and 2.1 jobs per capita respectively.

seo job concentration

seo jobs per capita

Again, Massachusetts and New York took the top spots, each with more than eight SEO jobs per capita. Utah took third place for the highest concentration of SEO jobs. Surprised to see Utah rank in the top 10? Its inclusion on this list and others may be due to its booming tech startup scene, which has earned the metropolitan areas of Salt Lake City, Provo, and Park City the nickname Silicon Slopes.

social media job concentration

social media jobs per capita

Compared to the other keywords, “social media” sees a much higher concentration of jobs. New York dominates the rankings with nearly 24 social media jobs per capita. The other top contenders of California, Massachusetts, and Illinois all have more than 15 social media jobs per capita.

The numbers at the bottom of this list can give you an idea of how prevalent social media jobs were compared to any other keyword we analyzed. Minnesota’s 12.1 jobs per capita, the lowest ranking state in the top 10 for social media, trumps even the highest ranking state for any other keyword (11.5 digital marketing jobs per capita in Massachusetts).

ppc job concentration

ppc jobs per capita

Due to its low overall number of available jobs, “PPC” sees the lowest jobs per capita out of all the search queries. Utah has the highest concentration of jobs with just two PPC jobs per 100,000 residents. It is also the only state in the top 10 to crack two jobs per capita.

google analytics job concentration

google analytics jobs per capita

Regionally, the Northeast and West dominate the rankings, with the exception of Illinois. Massachusetts and New York are tied for the most Google Analytics job postings, each with nearly five jobs per capita. At more than three jobs per 100,000 residents, California, Illinois, and Colorado round out the top five.

Overall, our findings indicate that none of the marketing disciplines we analyzed are dying career choices, but there is a need to become more than a one-trick pony, or else you risk being passed over for job opportunities. As the marketing industry evolves, there is a greater need for marketers who “wear many hats” and have competencies across different marketing disciplines. Marketers who develop diverse skill sets can gain a competitive advantage in the job market and achieve greater career growth.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

[ccw-atrib-link]

Becoming Better SEO Scientists – Whiteboard Friday

Posted by MarkTraphagen

Editor’s note: Today we’re featuring back-to-back episodes of Whiteboard Friday from our friends at Stone Temple Consulting. Make sure to also check out the second episode, “UX, Content Quality, and SEO” from Eric Enge.

Like many other areas of marketing, SEO incorporates elements of science. It becomes problematic for everyone, though, when theories that haven’t been the subject of real scientific rigor are passed off as proven facts. In today’s Whiteboard Friday, Stone Temple Consulting’s Mark Traphagen is here to teach us a thing or two about the scientific method and how it can be applied to our day-to-day work.

For reference, here’s a still of this week’s whiteboard.
Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Mozzers. Mark Traphagen from Stone Temple Consulting here today to share with you how to become a better SEO scientist. We know that SEO is a science in a lot of ways, and everything I’m going to say today applies not only to SEO, but testing things like your AdWords, how does that work, quality scores. There’s a lot of different applications you can make in marketing, but we’ll focus on the SEO world because that’s where we do a lot of testing. What I want to talk to you about today is how that really is a science and how we need to bring better science in it to get better results.

The reason is that in astrophysics these days there’s something they talk about called dark matter, and dark matter is something that we know is there. It’s pretty much accepted that it’s there. We can’t see it. We can’t measure it directly. We don’t even know what it is. We can’t even imagine what it is yet, and yet we know it’s there because we see its effect on things like gravity and mass. Its effects are everywhere. And that’s a lot like search engines, isn’t it? It’s like Google or Bing. We see the effects, but we don’t see inside the machine. We don’t know exactly what’s happening in there.

An artist’s depiction of how search engines work.

So what do we do? We do experiments. We do tests to try to figure that out, to see the effects, and from the effects outside we can make better guesses about what’s going on inside and do a better job of giving those search engines what they need to connect us with our customers and prospects. That’s the goal in the end.

Now, the problem is there’s a lot of testing going on out there, a lot of experiments that maybe aren’t being run very well. They’re not being run according to scientific principles that have been proven over centuries to get the best possible results.

Basic data science in 10 steps

So today I want to give you just very quickly 10 basic things that a real scientist goes through on their way to trying to give you better data. Let’s see what we can do with those in our SEO testing in the future.

So let’s start with number one. You’ve got to start with a hypothesis. Your hypothesis is the question that you want to solve. You always start with that, a good question in mind, and it’s got to be relatively narrow. You’ve got to narrow it down to something very specific. Something like how does time on page affect rankings, that’s pretty narrow. That’s very specific. That’s a good question. Might be able to test that. But something like how do social signals affect rankings, that’s too broad. You’ve got to narrow it down. Get it down to one simple question.

Then you choose a variable that you’re going to test. Out of all the things that you could do, that you could play with or you could tweak, you should choose one thing or at least a very few things that you’re going to tweak and say, “When we tweak this, when we change this, when we do this one thing, what happens? Does it change anything out there in the world that we are looking at?” That’s the variable.

The next step is to set a sample group. Where are you going to gather the data from? Where is it going to come from? That’s the world that you’re working in here. Out of all the possible data that’s out there, where are you going to gather your data and how much? That’s the small circle within the big circle. Now even though it’s smaller, you’re probably not going to get all the data in the world. You’re not going to scrape every search ranking that’s possible or visit every URL.

You’ve got to ask yourself, “Is it large enough that we’re at least going to get some validity?” If I wanted to find out what the typical person in Seattle is like and I just walked through one part of the Moz offices here, I’d get some kind of view. But is that a typical, average person from Seattle? I’ve been around here at Moz. Probably not, because that sample wasn’t large enough or representative enough.

Also, it should be randomized as much as possible. Again, going back to that example, if I just stayed here within the walls of Moz and do research about Mozzers, I’d learn a lot about what Mozzers do, what Mozzers think, how they behave. But that may or may not be applicable to the larger world outside, so you randomized.

We want to control. So we’ve got our sample group. If possible, it’s always good to have another sample group that you don’t do anything to. You do not manipulate the variable in that group. Now, why do you have that? You have that so that you can say, to some extent, that if we saw a change when we manipulated our variable and the same thing didn’t happen in the control group, then more likely it’s not just part of the natural things that happen in the world or in the search engine.

If possible, even better, you want to make that what scientists call double blind, which means that even you, the experimenter, don’t know who that control group is out of all the SERPs that you’re looking at or whatever it is. As careful as you might be and as honest as you might be, you can end up manipulating the results if you know who is who within the test group. It’s not going to apply to every test that we do in SEO, but it’s a good thing to have in mind as you work on that.

Next, very quickly, duration. How long does it have to be? Is there sufficient time? If you’re just testing like if I share a URL to Google +, how quickly does it get indexed in the SERPs, you might only need a day on that because typically it takes less than a day in that case. But if you’re looking at seasonality effects, you might need to go over several years to get a good test on that.

Let’s move to the second group here. The sixth thing is to keep a clean lab. Now what that means is to try as much as possible to keep out anything that might be dirtying your results, any kind of variables creeping in that you didn’t want to have in the test. Hard to do, especially in what we’re testing, but do the best you can to keep out the dirt.

Manipulate only one variable. Out of all the things that you could tweak or change, choose one thing or a very small set of things. That will give more accuracy to your test. The more variables you change, the more other effects and interaction effects are going to happen that you may not be accounting for and that are going to muddy your results.

Make sure you have statistical validity when you go to analyze those results. Now that’s beyond the scope of this little talk, but you can read up on that. Or even better, if you are able to, hire somebody or work with somebody who is a trained data scientist or has training in statistics so they can look at your evaluation and say the correlations or whatever you’re seeing, “Does it have a statistical significance?” Very important.
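To make that last point concrete, here is a minimal sketch of one way to sanity-check significance, assuming you have recorded rank changes for a test group and a control group; the numbers are invented, and a two-sample t-test is just one of several reasonable tests a statistician might recommend.

```python
from scipy import stats

# Hypothetical daily rank improvements for URLs where we tweaked the variable,
# versus a control group we left alone.
test_group = [3, 5, 2, 4, 6, 3, 5, 4, 2, 5]
control_group = [1, 0, 2, -1, 1, 0, 1, 2, 0, 1]

# Welch's t-test: does the test group differ from the control beyond chance?
t_stat, p_value = stats.ttest_ind(test_group, control_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is unlikely to be random noise.")
else:
    print("Not enough evidence that the tweak did anything.")
```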

Transparency. As much as possible, share with the world your data set, your full results, your methodology. What did you do? How did you set up the study? That’s going to be important to our last step here, which is replication and falsification, one of the most important parts of any scientific process.

So what you want to invite is, hey we did this study. We did this test. Here’s what we found. Here’s how we did it. Here’s the data. If other people ask the same question again and run the same kind of test, do they get the same results? Somebody runs it again, do they get the same results? Even better, if you have some people out there who say, “I don’t think you’re right about that because I think you missed this, and I’m going to throw this in and see what happens,” aha they falsify. That might make you feel like you failed, but it’s success because in the end what are we after? We’re after the truth about what really works.

Think about your next test, your next experiment that you do. How can you apply these 10 principles to do better testing, get better results, and have better marketing? Thanks.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

[ccw-atrib-link]

Big Data, Big Problems: 4 Major Link Indexes Compared

Posted by russangular

Given this blog’s readership, chances are good you will spend some time this week looking at backlinks in one of the growing number of link data tools. We know backlinks continue to be one of, if not the, most important parts of Google’s ranking algorithm. We tend to take these link data sets at face value, though, in part because they are all we have. But when your rankings are on the line, is there a better way to determine which data set is the best? How should we go about assessing these different link indexes, like Moz, Majestic, Ahrefs, and SEMrush, for quality? Historically, there have been four common approaches to this question of index quality:

  • Breadth: We might choose to look at the number of linking root domains any given service reports. We know that the number of referring domains correlates strongly with search rankings, so it makes sense to judge a link index by how many unique domains it has discovered and indexed.
  • Depth: We also might choose to look at how deep the web has been crawled, looking more at the total number of URLs in the index, rather than the diversity of referring domains.
  • Link Overlap: A more sophisticated approach might count the number of links an index has in common with Google Webmaster Tools.
  • Freshness: Finally, we might choose to look at the freshness of the index. What percentage of links in the index are still live?

There are a number of really good studies (some newer than others) using these techniques that are worth checking out when you get a chance:

  • BuiltVisible analysis of Moz, Majestic, GWT, Ahrefs and Search Metrics
  • SEOBook comparison of Moz, Majestic, Ahrefs, and Ayima
  • MatthewWoodward study of Ahrefs, Majestic, Moz, Raven and SEO Spyglass
  • Marketing Signals analysis of Moz, Majestic, Ahrefs, and GWT
  • RankAbove comparison of Moz, Majestic, Ahrefs and Link Research Tools
  • StoneTemple study of Moz and Majestic

While these are all excellent at addressing the methodologies above, there is a particular limitation with all of them. They miss one of the most important metrics we need to determine the value of a link index: proportional representation to Google’s link graph. So here at Angular Marketing, we decided to take a closer look.

Proportional representation to Google Search Console data

So, why is it important to determine proportional representation? Many of the most important and valued metrics we use are built on proportional models. PageRank, MozRank, CitationFlow and Ahrefs Rank are proportional in nature. The score of any one URL in the data set is relative to the other URLs in the data set. If the data set is biased, the results are biased.

A Visualization

Link graphs are biased by their crawl prioritization. Because there is no full representation of the Internet, every link graph, even Google’s, is a biased sample of the web. Imagine for a second that the picture below is of the web. Each dot represents a page on the Internet, and the dots surrounded by green represent a fictitious index by Google of certain sections of the web.

Of course, Google isn’t the only organization that crawls the web. Other organizations like Moz, Majestic, Ahrefs, and SEMrush have their own crawl prioritizations which result in different link indexes.

In the example above, you can see different link providers trying to index the web like Google. Link data provider 1 (purple) does a good job of building a model that is similar to Google. It isn’t very big, but it is proportional. Link data provider 2 (blue) has a much larger index, and likely has more links in common with Google than link data provider 1, but it is highly disproportional. So, how would we go about measuring this proportionality? And which data set is the most proportional to Google?

Methodology

The first step is to determine a measurement of relativity for analysis. Google doesn’t give us very much information about their link graph. All we have is what is in Google Search Console. The best source we can use is referring domain counts. In particular, we want to look at what we call referring domain link pairs. A referring domain link pair would be something like ask.com->mlb.com: 9,444, which means that ask.com links to mlb.com 9,444 times.

Steps

  1. Determine the root linking domain pairs and values for 100+ sites in Google Search Console
  2. Determine the same for Ahrefs, Moz, Majestic Fresh, Majestic Historic, SEMrush
  3. Compare the referring domain link pairs of each data set to Google, assuming a Poisson Distribution (see the sketch below)
  4. Run simulations of each data set’s performance against each other (e.g., Moz vs. Majestic, Ahrefs vs. SEMrush, Moz vs. SEMrush, and so on)
  5. Analyze the results
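As an illustration of step 3, the sketch below scales one provider’s pair counts to Google Search Console’s total and then scores how likely Google’s counts would be under a Poisson distribution with those scaled counts as expected values. The pair counts are hypothetical, and this scale-then-score approach is only one plausible reading of the methodology, not necessarily the exact calculation performed for this study.

```python
from math import lgamma, log

def poisson_log_likelihood(observed, expected):
    """Log-probability of seeing `observed` links if the true rate is `expected`."""
    expected = max(expected, 1e-9)  # avoid log(0) when a provider missed a pair entirely
    return observed * log(expected) - expected - lgamma(observed + 1)

def proportionality_score(gsc_pairs, provider_pairs):
    """Score how proportional a provider's referring domain link pairs are to
    Google Search Console's, after scaling for overall index size."""
    scale = sum(gsc_pairs.values()) / max(sum(provider_pairs.values()), 1)
    return sum(
        poisson_log_likelihood(gsc_count, provider_pairs.get(pair, 0) * scale)
        for pair, gsc_count in gsc_pairs.items()
    )

# Hypothetical referring domain link pairs, e.g. ("ask.com", "mlb.com"): 9444
gsc = {("ask.com", "mlb.com"): 9444, ("espn.com", "mlb.com"): 1200}
provider_a = {("ask.com", "mlb.com"): 8100, ("espn.com", "mlb.com"): 950}    # small but proportional
provider_b = {("ask.com", "mlb.com"): 40000, ("espn.com", "mlb.com"): 200}   # big but skewed

print(proportionality_score(gsc, provider_a))  # higher (less negative) = more proportional
print(proportionality_score(gsc, provider_b))
```

A higher (less negative) score means the provider’s link graph is closer in shape to what Google reports, regardless of its absolute size.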

Results

When placed head-to-head, there seem to be some clear winners at first glance. Moz edges out Ahrefs in direct comparison, but across the board, Moz and Ahrefs fare quite evenly. Moz, Ahrefs, and SEMrush seem to be far better than Majestic Fresh and Majestic Historic. Is that really the case? And why?

It turns out there is an inversely proportional relationship between index size and proportional relevancy. This might seem counterintuitive: shouldn’t the bigger indexes be closer to Google? Not exactly.

What does this mean?

Each organization has to create a crawl prioritization strategy. When you discover millions of links, you have to prioritize which ones you might crawl next. Google has a crawl prioritization, and so do Moz, Majestic, Ahrefs, and SEMrush. There are lots of different things you might choose to prioritize:

  • You might prioritize link discovery. If you want to build a very large index, you could prioritize crawling pages on sites that have historically provided new links.
  • You might prioritize content uniqueness. If you want to build a search engine, you might prioritize finding pages that are unlike any you have seen before. You could choose to crawl domains that historically provide unique data and little duplicate content.
  • You might prioritize content freshness. If you want to keep your search engine recent, you might prioritize crawling pages that change frequently.
  • You might prioritize content value, crawling the most important URLs first based on the number of inbound links to that page.

Chances are, an organization’s crawl priority will blend some of these features, but it’s difficult to design one exactly like Google. Imagine for a moment that instead of crawling the web, you want to climb a tree. You have to come up with a tree climbing strategy.

  • You decide to climb the longest branch you see at each intersection.
  • One friend of yours decides to climb the first new branch he reaches, regardless of how long it is.
  • Your other friend decides to climb the first new branch she reaches only if she sees another branch coming off of it.

Despite having different climb strategies, everyone chooses the same first branch, and everyone chooses the same second branch. There are only so many different options early on.

But as the climbers go further and further along, their choices eventually produce differing results. This is exactly the same for web crawlers like Google, Moz, Majestic, Ahrefs and SEMrush. The bigger the crawl, the more the crawl prioritization will cause disparities. This is not a deficiency; this is just the nature of the beast. However, we aren’t completely lost. Once we know how index size is related to disparity, we can make some inferences about how similar a crawl priority may be to Google.

Unfortunately, we have to be careful in our conclusions. We only have a few data points with which to work, so it is very difficult to be certain about this part of the analysis. In particular, it seems strange that Majestic would get better relative to its index size as it grows, unless Google holds on to old data (which might be an important discovery in and of itself). Most likely, we can’t yet draw conclusions at that level of detail.

So what do we do?

Let’s say you have a list of domains or URLs for which you would like to know their relative values. Your process might look something like this:

  • Check Open Site Explorer to see if all URLs are in their index. If so, you are looking at metrics most likely to be proportional to Google’s link graph.
  • If any of the links do not occur in the index, move to Ahrefs and use their Ahrefs ranking if all you need is a single PageRank-like metric.
  • If any of the links are missing from Ahrefs’s index, or you need something related to trust, move on to Majestic Fresh.
  • Finally, use Majestic Historic for (by leaps and bounds) the largest coverage available.

It is important to point out that the likelihood that all the URLs you want to check are in a single index increases as the accuracy of the metric decreases. Considering the size of Majestic’s data, you can’t ignore them, because you are less likely to get null-value answers from their data than from the others. If anything rings true, it is that once again it makes sense to get data from as many sources as possible. You won’t get the most proportional data without Moz, the broadest data without Majestic, or everything in-between without Ahrefs.

What about SEMrush? They are making progress, but they don’t publish any relative statistics that would be useful in this particular case. Maybe we can hope to see more from them soon given their already promising index!

Recommendations for the link graphing industry

All we hear about these days is big data; we almost never hear about good data. I know that the teams at Moz, Majestic, Ahrefs, SEMrush and others are interested in mimicking Google, but I would love to see some organization stand up against the allure of more data in favor of better data: data more like Google’s. It could begin with testing various crawl strategies to see if they produce a result more similar to that of data shared in Google Search Console. Having the most Google-like data is certainly a crown worth winning.

Credits

Thanks to Diana Carter at Angular for assistance with data acquisition and Andrew Cron with statistical analysis. Thanks also to the representatives from Moz, Majestic, Ahrefs, and SEMrush for answering questions about their indices.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

[ccw-atrib-link]

Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I leave in my way.

Hence, I thought it was due time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much of an impact a single ranking can mean on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike SEO, however, very little research and theorizing has been done around what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Image credit: Josh Tuininga, Apptentive

Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across iOS and Android versions of the same apps to determine any trends or differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics:

Both of these assumptions will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.
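As a rough illustration of how these volatility figures can be computed, here is a minimal sketch using invented seven-day rank histories; it reports the maximum daily move, the standard deviation of daily moves, and the largest day-over-day percent change used to standardize for numeric rank.

```python
import statistics

# Hypothetical 7-day rank histories for one app in each store.
app_store_ranks = [120, 95, 160, 140, 210, 180, 130]
google_play_ranks = [88, 90, 85, 87, 89, 86, 88]

def daily_moves(ranks):
    """Absolute day-over-day rank changes."""
    return [abs(b - a) for a, b in zip(ranks, ranks[1:])]

def pct_changes(ranks):
    """Day-over-day changes as a percent of the previous day's rank,
    to control for the effect of numeric rank on volatility."""
    return [abs(b - a) / a * 100 for a, b in zip(ranks, ranks[1:])]

for store, ranks in [("App Store", app_store_ranks), ("Google Play", google_play_ranks)]:
    moves = daily_moves(ranks)
    print(store,
          "max move:", max(moves),
          "std dev:", round(statistics.stdev(moves), 1),
          "max % change:", round(max(pct_changes(ranks)), 1))
```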

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the lower an app is ranked, meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher ranked apps have more installs, active users, and ratings, and that it would take a large margin to produce a noticeable shift in any of these factors.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly more volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum—with a seemingly straight volatility line among Google Play’s Top 100 apps and very few blips within the App Store’s Top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as volatile as the highest ranked apps. (I say slightly as these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is pretty consistent across the App Store charts, while rank has a much greater impact on volatility at the top of the Google Play charts (ranks 1–100 have a 35% correlation) than it does at the bottom (ranks 401–500 have a 1% correlation).

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store against their average rating (both historic and current, for App Store apps). If it looks a little chaotic, that’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average rating of a top 100 iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores, meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank, and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

In a variation on the above function, I’m using the age of an app’s current version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often as compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s Top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s Top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) of the rating count of apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.
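To reproduce this kind of check on your own data, a minimal sketch follows; the rank and rating-count pairs are invented, and Pearson correlation on log-scaled counts is just one reasonable way to mirror the logarithmic chart above.

```python
import math
from scipy import stats

# Hypothetical (rank, rating count) pairs for a handful of top-chart apps.
ranks = [1, 10, 25, 50, 75, 100]
rating_counts = [5_200_000, 2_100_000, 900_000, 400_000, 350_000, 120_000]

# Correlate rank with log-scaled rating counts, mirroring the log axis above.
log_counts = [math.log10(c) for c in rating_counts]
r, p = stats.pearsonr(ranks, log_counts)
print(f"correlation between rank and log(rating count): r = {r:.2f} (p = {p:.3f})")
# A negative r means better (numerically lower) ranks go with more ratings.
```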

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger since the low end was further away from the midpoint for apps with more installs.
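For anyone repeating this step, here is a minimal sketch of taking the low end of a Google Play install range; the exact range string format shown is an assumption.

```python
import re

def install_floor(install_range: str) -> int:
    """Return the low end of a Google Play install range string,
    e.g. '500,000 - 1,000,000' -> 500000."""
    numbers = [int(n.replace(",", "")) for n in re.findall(r"[\d,]+", install_range)]
    return min(numbers)

print(install_floor("500,000 - 1,000,000"))      # 500000
print(install_floor("10,000,000 - 50,000,000"))  # 10000000
```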

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and apps with older current versions are correlated with lower ratings.
  7. An app’s update frequency is negatively correlated with Google Play’s ranking volatility but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now re-write the function into a formula by weighing each of these four factors, where a, b, c, & d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.
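As a rough illustration (and nothing more), that weighted formula could be sketched in code like this, with placeholder weights and log-scaled counts standing in for the unknown multipliers:

```python
import math

def ranking_score(rating, rating_count, installs, trend,
                  a=1.0, b=2.0, c=1.5, d=0.5):
    """Toy version of Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d).
    Counts are log-scaled so a 3M-rating app doesn't drown out every other factor.
    The weights a, b, c, d are placeholders -- the real values are unknown."""
    return (rating * a) + (math.log10(rating_count + 1) * b) \
         + (math.log10(installs + 1) * c) + (trend * d)

# Hypothetical app: 4.6 stars, 1.2M ratings, 5M installs, modest growth trend.
print(round(ranking_score(4.6, 1_200_000, 5_000_000, 0.8), 2))
```

The point isn't the score itself, but that the two count-based factors dominate once weighted, which matches what the correlations above suggest.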

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.


[ccw-atrib-link]

The Long Click and the Quality of Search Success

Posted by billslawski

“On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “Long Click” — This occurred when someone went to a search result, ideally the top one, and did not return. That meant Google had successfully fulfilled the query.”

~ Steven Levy, In the Plex: How Google Thinks, Works, and Shapes Our Lives

I often explore and read patents and papers from the search engines to try to get a sense of how they may approach different issues, and learn about the assumptions they make about search, searchers, and the Web. Lately, I’ve been keeping an eye open for papers and patents from the search engines where they talk about a metric known as the “long click.”

A recently granted Google patent uses the metric of a “Long Click” as the center of a process Google may use to track results for queries that were selected by searchers for long visits in a set of search results.

This concept isn’t new. In 2011, I wrote about a Yahoo patent in How a Search Engine May Measure the Quality of Its Search Results, where they discussed a metric that they refer to as a “target page success metric.” It included “dwell time” upon a result as a sign of search success (Yes, search engines have goals, too).


Another Google patent described assigning web pages “reachability scores” based upon the quality of the pages they link to. In the post Does Google Use Reachability Scores in Ranking Resources?, I described how that patent might treat a long click as a sign that visitors to a page are engaged by the content it links to, including videos. Google tells us in that patent that it might consider a “long click” to have been made on a video if someone watches at least half the video or 30 seconds of it. The patent suggests that a high reachability score may mean that a page could be boosted in Google search results.
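That video rule is concrete enough to express in a couple of lines; here's a minimal sketch of it (my reading of the patent language, not Google's code):

```python
def is_video_long_click(watched_seconds: float, video_length_seconds: float) -> bool:
    """A view counts as a 'long click' if the viewer watched at least half
    the video or at least 30 seconds of it, per the patent's description."""
    return watched_seconds >= 30 or watched_seconds >= video_length_seconds / 2

print(is_video_long_click(20, 30))   # True: more than half of a 30-second video
print(is_video_long_click(20, 120))  # False: under 30 seconds and under half
```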


But the patent I’m writing about today is focused primarily upon looking at and tracking a search success metric like a long click or long dwell time. Here’s the abstract:

Modifying ranking data based on document changes

Invented by Henele I. Adams, and Hyung-Jin Kim

Assigned to Google

US Patent 9,002,867

Granted April 7, 2015

Filed: December 30, 2010

Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media for determining a weighted overall quality of result statistic for a document.

One method includes receiving quality of result data for a query and a plurality of versions of a document, determining a weighted overall quality of result statistic for the document with respect to the query including weighting each version specific quality of result statistic and combining the weighted version-specific quality of result statistics, wherein each quality of result statistic is weighted by a weight determined from at least a difference between content of a reference version of the document and content of the version of the document corresponding to the version specific quality of result statistic, and storing the weighted overall quality of result statistic and data associating the query and the document with the weighted overall quality of result statistic.
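In plainer terms, the abstract describes weighting each version's quality-of-result statistic by how closely that version's content matches a reference version, then combining them. A minimal sketch of that idea (my own simplification, with a similarity score standing in for the patent's content-difference weighting) might look like this:

```python
def weighted_quality(version_stats, similarities):
    """
    version_stats: a quality-of-result statistic per document version
                   (e.g., a long-click rate observed while that version was live).
    similarities:  0..1 similarity of each version's content to the reference
                   (current) version; heavily changed versions count for less.
    """
    weighted = [stat * sim for stat, sim in zip(version_stats, similarities)]
    return sum(weighted) / sum(similarities)

# An older, heavily rewritten version (similarity 0.3) barely moves the overall score.
print(round(weighted_quality([0.62, 0.40], [1.0, 0.3]), 3))  # ~0.569
```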

This patent tells us that search results may be ranked in an order according to scores assigned by a scoring function or process based upon things such as:

  • Where, and how often, query terms appear in the given document,
  • How common the query terms are in the documents indexed by the search engine, or
  • A query-independent measure of quality of the document itself.

Last September, I wrote about how Google might identify a category associated with a query term based upon clicks, in the post Using Query User Data To Classify Queries. In a query for “Lincoln,” the results that appear in response might be about the former US President, the town of Lincoln, Nebraska, and the model of automobile. When someone searches for [Lincoln], Google returning all three of those responses as a top result could be said to be reasonable. The patent I wrote about in that post told us that Google might collect information about “Lincoln” as a search entity, and track which category of results people clicked upon most when they performed that search, to determine what categories of pages to show other searchers. Again, that’s another “search success” based upon past search history.

There likely is some value in working to find ways to increase the amount of dwell time someone spends upon the pages of your site, if you are already having some success in crafting page titles and snippets that persuade people to click on your pages when those appear in search results. These approaches can include such things as:

  1. Making visiting your page a positive experience in terms of things like site speed, readability, and scannability.
  2. Making visiting your page a positive experience in terms of things like the quality of the content published on your pages including spelling, grammar, writing style, interest, quality of images, and the links you share to other resources.
  3. Providing a positive experience by offering ideas worth sharing with others, and offering opportunities for commenting and interacting with others, and by being responsive to people who do leave comments.

Here are some resources I found that discuss this long click metric in terms of “dwell time”:

Your ability to create pages that earn a “long click” from someone who has come to your site in response to a query is also a “search success” metric on the search engine’s part, so you both succeed. Just be warned that, as this most recent patent from Google on long clicks shows us, Google will be watching to make sure that the content of your page doesn’t change too much, that people continue to click upon it in search results, and that they spend a fair amount of time upon it.

(Images for this post are from my Go Fish Digital Design Lead Devin Holmes @DevinGoFish. Thank you, Devin!)


[ccw-atrib-link]

Moz’s 2014 Annual Report

Posted by SarahBird

Moz has a tradition of sharing its financials (check out 2012 and 2013 for funzies). It’s an important part of TAGFEE.

Why do we do it? Moz gets its strength from the community of marketers and entrepreneurs that support it. We celebrated 10 years of our community last October. In some ways, the purpose of this report is to give you an inside look into our company. It’s one of many lenses that tell the story of Moz.

Yep. I know. It’s April. I’m not proud. Better late than never, right?

I had a very long and extensive version of this post planned, something closer to last year’s extravaganza. I finally had to admit to myself that I was letting the perfect become the enemy of the good (or at least the done). There was no way I could capture an entire year’s worth of ups and downs—along with supporting data—in a single blog post.

Without further ado, here’s the meat-and-potatoes 2014 Year In Review (and here’s an infographic with more statistics for your viewing pleasure!):

Moz ended 2014 with $31.3 million in revenue. About $30 million was recurring revenue (mostly from subscriptions to Moz Pro and the API).

Here’s a breakdown of all our major revenue sources:

Compared to previous years, 2014 was a much slower growth year. We knew very early that it was going to be a tough year because we started Q1 with negative growth. We worked very hard and successfully shifted the momentum back to increasingly positive quarterly growth rates. I’m proud of what we’ve accomplished so far. We still have a long ways to go to meet our potential, but we’re on the path.

In subscription businesses, if you start the year with negative or even slow growth, it is very hard to have meaningful annual growth. All things being equal, you’re better off having a bad quarter in Q4 than Q1. If you get a new customer in Q1, you usually earn revenue from that customer all year. If you get a new customer in Q4, it will barely make a dent in that year, although it should set you up nicely for the following year.
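A quick back-of-the-envelope illustration (hypothetical numbers, not Moz's) shows why the timing matters so much:

```python
monthly_fee = 100                # hypothetical subscription price
q1_customer = monthly_fee * 12   # signs up in January: ~12 months of revenue this year
q4_customer = monthly_fee * 2    # signs up in November: ~2 months of revenue this year
print(q1_customer, q4_customer)  # 1200 vs. 200 recognized in the same calendar year
```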

We exited 2014 on a good flight path, which bodes well for 2015. We slammed right into some nasty billing system challenges in Q1 2015, but still managed to grow revenue 6.5%. Mad props to the team for shifting momentum last year and for digging into the billing system challenges we’re experiencing now.

We were very successful in becoming more efficient and managing costs in 2014. Our Cost of Revenue (COR), the cost of producing what we sell, fell by 30% to $8.2 million. These savings drove our gross profit margin up from 63% in 2013 to 74%.
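For anyone following along with the arithmetic, the margin falls straight out of the figures above:

```python
revenue = 31.3          # 2014 revenue, $M (from above)
cost_of_revenue = 8.2   # 2014 COR, $M (from above)
gross_margin = (revenue - cost_of_revenue) / revenue
print(f"{gross_margin:.0%}")  # ~74%
```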

Our operating profit increased by 30%. Here’s a breakdown of our major expenses (both operating expenses and COR):

Total operating expenses (which don’t include COR) clocked in at about $29.9 million this year.

The efficiency gains positively impacted EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization) by pushing it up 50% year over year. In 2013, EBITDA was -$4.5 million. We improved it to -$2.1 million in 2014. We’re a VC-backed startup, so this was a planned loss.

One of the most dramatic indicators of our improved efficiency in 2014 is the substantial decline in our consumption of cash.

In 2014, we spent $1.5 million in cash. This was a planned burn, and is actually very impressive for a startup. In fact, we are intentionally increasing our burn, so we don’t expect EBITDA and cash burn to look as good in 2015! Hopefully, though, you will see that revenue growth rate increase.

Let’s check in on some other Moz KPIs:

At the end of 2014, we reported a little over 27,000 Pro users. When billing system issues hit in Q1 2015, we discovered some weird under- and over-reporting, so the number of subscribers was adjusted down by about 450 after we scrubbed a bunch of inactive accounts out of the database. We expect accounts to stabilize and be more reliable now that we’ve fixed those issues.

We launched Moz Local about a year ago. I’m amazed and thrilled that we were able to end the year managing 27,000 locations for a range of customers. We just recently took our baby steps into the UK, and we’ve got a bunch of great additional features planned. What an incredible launch year!

We published over 300 posts combined on the Moz Blog and YouMoz. Nearly 20,000 people left comments. Well done, team!

Our content and social efforts are paying off with a 26% year-over-year increase in organic search traffic.

We continue to see good growth across many of our off-site communities, too:

The team grew to 149 people last year. We’re at ~37% women, which is nowhere near where I want it to be. We have a long way to go before the team reflects the diversity of the communities around us.

Our paid, paid vacation perk is very popular with Mozzers, and why wouldn’t it be? Everyone gets $3,000/year to use toward their vacations. In 2014, we spent over $420,000 to help our Mozzers take a break and get connected with what matters most.


Also, we’re hiring! You’ll have my undying gratitude if you send me your best software engineers. Help us, help you. 😉

Last, but certainly not least, Mozzers continue to be generous (the ‘G’ in TAGFEE) and donate to the charities of their choice. In 2014, Mozzers donated $48k, and Moz added another $72k to increase the impact of their gifts. Combining those two figures, we donated $120k to causes our team members are passionate about. That’s an average of $805 per employee!

Mozzers are optimists with initiative. I think that’s why they are so generous with their time and money to folks in need. They believe the world can be a better place if we act to change it.

That’s a wrap on 2014! A year with many ups and downs. Fortunately, Mozzers don’t quit when things get hard. They embrace TAGFEE and lean into the challenge.

Revenue is growing again. We’re still operating very efficiently, and TAGFEE is strong. We’re heads-down executing on some big projects that customers have been clamoring for. Thank you for sticking with us, and for inspiring us to make marketing better every day.


[ccw-atrib-link]

Local Centroids are Now Individual Users: How Can We Optimize for Their Searches?

Posted by MiriamEllis

“Google is getting better at detecting location at a more granular level—even on the desktop. The user is the new centroid.” – David Mihm

The history of the centroid

The above quote succinctly summarizes the current state of affairs for local business owners and their customers. The concept of a centroid—a central point of relevance—is almost as old as local search. In 2008, people like Mike Blumenthal and Google Maps Manager Carter Maslan were sharing statistics like this:

“…research indicates that up to 80% of the variation in rank can be explained by distance from the centroid on certain searches.”

At that time, businesses located near town hall or a similar central hub appeared to be experiencing a ranking advantage.

Fast forward to 2013, and Mike weighed in again with an updated definition of “industry centroids”:

“If you read their (Google’s) patents, they actually deal with the center of the industries … as defining the center of the search. So if all the lawyers are on the corner of Main and State, that typically defines the center of the search, rather than the center of the city… it isn’t even the centroid of the city that matters. It matters that you are near where the other people in your industry are.”

In other words, Google’s perception of a centralized location for auto dealerships could be completely different than that for medical practices, and neither might be located anywhere near the city center.

While the concepts of city and industry centroids may still play a part in some searches, local search results in 2015 clearly indicate Google’s shift toward deeming the physical location of the desktop or mobile user a powerful factor in determining relevance. The relationship between where your customer is when he performs a search and where your business is physically located has never been more important.

Moreover, in this new, user-centric environment, Google has moved beyond simply detecting cities to detecting neighborhoods and even streets. What this means for local business owners is that your hyperlocal information has become a powerful component of your business data. This post will teach you how to better serve your most local customers.

Seeing the centroid in action

If you do business in a small town with few competitors, ranking for your product/service + city terms is likely to cover most of your bases. The user-as-centroid phenomenon is most applicable in mid-to-large sized towns and cities with reasonable competition. I’ll be using two districts in San Francisco—Bernal Heights and North Beach—in these illustrations and we’ll be going on a hunt for pizza.

On a desktop, searching for “pizza north beach san francisco” or setting my location to this neighborhood and city while searching for the product, Google will show me something like this:

Performing this same search, but with “bernal heights” substituted, Google shows me pizzerias in a completely different part of the city:

local result bernal heights pizza san francisco

And, when I move over to my mobile device, Google narrows the initial results down to just three enviable players in each district. These simple illustrations demonstrate Google’s increasing sensitivity to serving me nearby businesses offering what I want.

The physical address of your business is the most important factor in serving the user as centroid. This isn’t something you can control, but there are things you can do to market your business as being highly relevant to your hyperlocal geography.

Specialized content for the user-centroid

We’ll break this down into four common business models to help get you thinking about planning content that serves your most local customers.

1. Single-location business

Make the shift toward viewing your business not just as “Tony’s Pizza in San Francisco”, but as “Tony’s Pizza in North Beach, San Francisco”. Consider:

  • Improving core pages of your website or creating new pages to include references to the proud part you play in the neighborhood scene. Talk about the history of your area and where you fit into that.
  • Interview locals and ask them to share their memories about the neighborhood and what they like about living there.
  • Showcase your participation in local events.
  • Plan an event, contest or special for customers in your district.
  • Take pictures, label them with hyperlocal terms, post them on your site and share them socially.
  • Blog about local happenings that are relevant to you and your customers, such as a street market where you buy the tomatoes that top your pizzas or a local award you’ve won.
  • Depending on your industry, there will be opportunities for hyperlocal content specific to your business. For example, a restaurant can make sure its menu is in crawlable text and can name some favorite dishes after the neighborhood—The Bernal Heights Special. Meanwhile, a spa in North Beach can create a hyperlocal name for a service—The North Beach Organic Spa Package. Not only does this show district pride, but customers may mention these products and services by name in their reviews, reinforcing your local connection.

2. Multi-location business within a single city

All that applies to the single location applies to you, too, but you’ve got to find a way to scale building out content for each neighborhood.

  • If your resources are strong, build a local landing page for each of your locations, including basic optimization for the neighborhood name. Meanwhile, create blog categories for each neighborhood and rotate your efforts on a week-by-week basis: the first week, blog about neighborhood A; the next week, find something interesting to write about concerning neighborhood B. Over time, you’ll have developed a nice body of content proving your involvement in each district.
  • If you’re short on resources, you’ll still want to build out a basic landing page for each of your stores in your city and make the very best effort you can to showcase your neighborhood pride on these pages.

3. Multiple businesses, multiple cities

Again, scaling this is going to be key and how much you can do will depend upon your resources.

  • The minimum requirement will be a landing page on the site for each physical location, with basic optimization for your neighborhood terms.
  • Beyond this, you’ll be making a decision about how much hyperlocal content you can add to the site/blog for each district, or whether time can be utilized more effectively via off-site social outreach. If you’ve got lots of neighborhoods to cover in lots of different cities, designating a social representative for each store and giving him the keys to your profiles (after a training session in company policies) may make the most sense.

4. Service area businesses (SABs)

Very often, service area businesses are left out in the cold with various local developments, but in my own limited testing, Google is applying at least some hyperlocal care to these business models. I can search for a neighborhood plumber, just as I would a pizza:

local results plumber bernal heights san francisco

To be painstakingly honest, plumbers are going to have to be pretty ingenious to come up with a ton of engaging industry/neighborhood content and may be confined mainly to creating some decent service area landing pages that share a bit about their work in various neighborhoods. Other business models, like contractors, home staging firms and caterers should find it quite easy to talk about district architecture, curb appeal and events on a hyperlocal front.

While your SAB is still unlikely to beat out a competitor with a physical location in a given neighborhood, you still have a chance to associate your business with that area of your town with well-planned content.


Need creative inspiration for the writing projects ahead? Don’t miss this awesome wildcard search tip Mary Bowling shared at LocalUp. Add an underscore or asterisk to your search terms and just look at the good stuff Google will suggest to you:

wildcard search content ideas

Does Tony’s patio make his business one of Bernal Heights’ dog-friendly restaurants, or does his rooftop view make his restaurant the most picturesque lunch spot in the district? If so, he’s got two new topics to write about, either on his basic landing pages or his blog.

Hop over to Whitespark’s favorite takeaways from Mike Ramsey’s LocalUp presentation, too.

Citations and reviews with the user centroid in mind

Here are the basics about citations, broken into the same four business models:

1. Single-location business

You get just one citation on each platform, unless you have multiple departments or practitioners. That means one Google+ Local page, one Yelp profile, one Best of the Web listing, etc. You do not get one citation for your city and another for your neighborhood. Very simple.

2. Multi-location business within a single city

As with the single location business, you are entitled to just one set of citations per physical location. That means one Google+ Local listing for your North Beach pizza place and another for your restaurant in Bernal Heights.

A regular FAQ here in the Moz Q&A Forum relates to how Google will differentiate between two businesses located in the same city. Here are some tips:

  • Google no longer supports the use of modifiers in the business name field, so you can no longer be Tony’s Pizza – Bernal Heights, unless your restaurant is actually named this. You can only be Tony’s Pizza.
  • Facebook’s policies are different than Google’s. To my understanding, Facebook won’t permit you to build more than one Facebook Place for the identical brand name. Thus, to comply with their guidelines, you must differentiate by using those neighborhood names or other modifiers. Given that this same rule applies to all of your competitors, this should not be seen as a danger to your NAP consistency, because apparently, no multi-location business creating Facebook Places will have 100% consistent NAP. The playing field is, then, even.
  • The correct place to differentiate your businesses on all other platforms is in the address field. Google will understand that one of your branches is on A St. and the other is on B St. and will choose which one they feel is most relevant to the user.
  • Google is not a fan of call centers. Unless it’s absolutely impossible to do so, use a unique local phone number for each physical location to prevent mix-ups on Google’s part, and use this number consistently across all web-based mentions of the business.
  • Though you can’t put your neighborhood name in the title, you can definitely include it in the business description field most citation platforms provide.
  • Link your citations to their respective local landing pages on your website, not to your homepage.

3. Multiple businesses, multiple cities

Everything in business model #2 applies to you as well. You are allowed one set of citations for each of your physical locations, and while you can’t modify your Google+ Local business name, you can mention your neighborhood in the description. Promote each location equally in all you do and then rely on Google to separate your locations for various users based on your addresses and phone numbers.

4. SABs

You are exactly like business model #1 when it comes to citations, with the exception of needing to abide by Google’s rules about hiding your address if you don’t serve customers at your place of business. Don’t build out additional citations for neighborhoods you serve, other cities you serve or various service offerings. Just create one citation set. You should be fine mentioning some neighborhoods in your citation descriptions, but don’t go overboard on this.

When it comes to review management, you’ll be managing unique sets of reviews for each of your physical locations. One method for preventing business owner burnout is to manage each location in rotation. One week, tend to owner responses for Business A. Do Business B the following week. In week three, ask for some reviews for Business A and do the same for B in week four. Vary the tasks and take your time unless faced with a sudden reputation crisis.

You can take some additional steps to “hyperlocalize” your review profiles:

  • Write about your neighborhood in the business description on your profile.
  • You can’t compel random customers to mention your neighborhood, but you can certainly do so from time to time when you write responses. “We’ve just installed the first soda fountain Bernal Heights has seen since 1959. Come have a cool drink on us this summer.”
  • Offer a neighborhood special to people who bring in a piece of mail with their address on it. Prepare a little handout for all-comers, highlighting a couple of review profiles where you’d love to hear how they liked the Bernal Heights special. Or, gather email addresses if possible and follow up via email shortly after the time of service.
  • If your business model is one that permits you to name your goods or service packages, don’t forget the tip mentioned earlier about thinking hyperlocal when brainstorming names. Pretty cool if you can get your customers talking about how your “North Beach Artichoke Pizza” is the best pie in town!

Investigate your social-hyperlocal opportunities

I still consider website-based content publication to be more than half the battle in ranking locally, but sometimes, real-time social outreach can accomplish things static articles or scheduled blog posts can’t. The amount of effort you invest in social outreach should be based on your resources and an assessment of how naturally your industry lends itself to socialization. Fire insurance salesmen are going to find it harder to light up their neighborhood community than yoga studios will. Consider your options:

Remember that you are investigating each opportunity to see how it stacks up not just to promoting your location in your city, but in your neighborhood.

Who are the people in your neighborhood?

Remember that Sesame Street jingle? It hails from a time when urban dwellers strongly identified with a certain district of their hometown. People were “from the neighborhood.” If my grandfather was a Mission District fella, maybe yours was from Chinatown. Now, we’re shifting in fascinating directions. Even as we’ve settled into telecommuting to jobs in distant states or countries, Amazon is offering one-hour home delivery to our neighbors in Manhattan. Doctors are making house calls again! Any day now, I’m expecting a milkman to start making his rounds around here. Commerce has stretched to span the globe, and now it’s zooming in to meet the needs of the family next door.

If the big guys are setting their sights on near-instant services within your community, take note. You live in that community. You talk, face-to-face, with your neighbors every day and know the flavor of the local scene better than any remote competitor can right now.

Now is the time to reinvigorate that old neighborhood pride in the way you’re visualizing your business, marketing it and personally communicating to customers that you’re right there for them.


[ccw-atrib-link]