The Inbound Marketing Economy

Posted by KelseyLibert

When it comes to job availability and security, the future looks bright for inbound marketers.

The Bureau of Labor Statistics (BLS) projects that employment for marketing managers will grow by 13% between 2012 and 2022. Job security for marketing managers also looks positive according to the BLS, which notes that marketing employees are less likely to be laid off, since marketing drives revenue for most businesses.

While the BLS provides growth estimates for managerial-level marketing roles, these projections don’t give much insight into the growth of digital marketing, specifically the disciplines within digital marketing. As we know, “marketing” can refer to a variety of different specializations and methodologies. Since digital marketing is still relatively new compared to other fields, there is not much comprehensive research on job growth and trends in our industry.

To gain a better understanding of the current state of digital marketing careers, Fractl teamed up with Moz to identify which skills and roles are the most in demand and which states have the greatest concentration of jobs.

Methodology

We analyzed 75,315 job listings posted on Indeed.com during June 2015, gathered from job ads containing the following terms:

  • “content marketing” or “content strategy”
  • “SEO” or “search engine marketing”
  • “social media marketing” or “social media management”
  • “inbound marketing” or “digital marketing”
  • “PPC” (pay-per-click)
  • “Google Analytics”

We chose the above keywords based on their likelihood of returning marketing-focused roles (just searching for “social media,” for example, may return many jobs that are not primarily marketing focused, such as customer service roles). The occurrence of each of these terms in job listings was quantified and segmented by state. We then combined the job listing data with U.S. Census Bureau population estimates to calculate jobs per capita (expressed throughout as jobs per 100,000 residents) for each keyword, giving us the states with the greatest concentration of jobs for a given search query.
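The per-capita calculation itself is simple, and the figures quoted later in the post (e.g. “two PPC jobs per 100,000 residents”) indicate the rate is normalized per 100,000 residents. Here is a minimal sketch of that arithmetic; the listing counts and populations below are invented for illustration, not the study’s data:

```python
# Sketch of the jobs-per-capita calculation described above.
# All counts and populations below are illustrative, not the study's data.

def jobs_per_capita(job_count, population, per=100_000):
    """Job listings per `per` residents (the post's 'jobs per capita')."""
    return job_count * per / population

# Hypothetical June 2015 listing counts for one keyword, by state
listings = {"MA": 780, "NY": 2_100, "CA": 2_500}
population = {"MA": 6_800_000, "NY": 19_800_000, "CA": 39_000_000}

rates = {
    state: round(jobs_per_capita(listings[state], population[state]), 1)
    for state in listings
}
# Rank states by concentration, highest first
ranked = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # MA can lead despite far fewer raw listings than CA
```

Normalizing by population is what lets a small state like Massachusetts outrank a much larger one like California despite having fewer raw listings.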

Using the same data, we identified which job titles appeared most frequently. We used existing data from Indeed to determine job trends and average salaries. LinkedIn search results were also used to identify keyword growth in user profiles.

Marketing skills are in high demand, but talent is hard to find

As the marketing industry continues to evolve due to emerging technology and marketing platforms, marketers are expected to pick up new skills and broaden their knowledge more quickly than ever before. Many believe this rapid rate of change has caused a marketing skills gap, making it difficult to find candidates with the technical, creative, and business proficiencies needed to succeed in digital marketing.

The ability to combine analytical thinking with creative execution is highly desirable and necessary in today’s marketing landscape. According to an article in The Guardian, “Companies will increasingly look for rounded individuals who can combine analytical rigor with the ability to apply this knowledge in a practical and creative context.” Being both detail-oriented and a big picture thinker is also a sought-after combination of attributes. A report by The Economist and Marketo found that “CMOs want people with the ability to grasp and manage the details (in data, technology, and marketing operations) combined with a view of the strategic big picture.”

But well-rounded marketers are hard to come by. In a study conducted by Bullhorn, 64% of recruiters reported a shortage of skilled candidates for available marketing roles. Wanted Analytics recently found that one of the biggest national talent shortages is for marketing manager roles, with only two available candidates per job opening.

Increase in marketers listing skills in content marketing, inbound marketing, and social media on LinkedIn profiles

While recruiter frustrations may indicate a shallow talent pool, LinkedIn tells a different story—the number of U.S.-based marketers who identify themselves as having digital marketing skills is on the rise. Using data tracked by Rand and LinkedIn, we found the following increases of marketing keywords within user profiles.

[Chart: Growth of marketing keywords in LinkedIn profiles]

The number of profiles containing “content marketing” has seen the largest growth, with a 168% increase since 2013. “Social media” has also seen significant growth, with a 137% increase, and it appears on a far higher volume of profiles than the other keywords: more than 2.2 million profiles contain some mention of social media. Although “SEO” has not grown as much as the other keywords, it still has the second-highest volume, appearing in 630,717 profiles.

Why is there a growing number of people self-identifying as having the marketing skills recruiters want, yet recruiters think there is a lack of talent?

While there may be a lot of specialists out there, perhaps recruiters are struggling to fill marketing roles due to a lack of generalists or even a lack of specialists with surface-level knowledge of other areas of digital marketing (also known as a T-shaped marketer).

Popular job listings show a need for marketers to diversify their skill set

The job listing data we gathered supports this, as the 20 most common digital marketing-related job titles being advertised call for a broad mix of skills.

[Chart: 20 most common marketing job titles]

It’s no wonder that marketing manager roles are hard to fill, considering the job ads are looking for proficiency in a wide range of marketing disciplines including social media marketing, SEO, PPC, content marketing, Google Analytics, and digital marketing. Even job descriptions for specialist roles tend to call for skills in other disciplines. A particular role such as SEO Specialist may call for several skills other than SEO, such as PPC, content marketing, and Google Analytics.

Taking a more granular look at job titles, the chart below shows the five most common titles for each search query. One might expect mostly specialist roles to appear here, but there is a high occurrence of generalist positions, such as Digital Marketing Manager and Marketing Manager.

[Chart: 5 most common job titles by search query]

Only one job title containing “SEO” cracked the top five. This indicates that SEO knowledge is a desirable skill within other roles, such as general digital marketing and development.

Recruiter was the third most common job title among job listings containing social media keywords, which suggests a need for social media skills in non-marketing roles.

Similar to what we saw with SEO job titles, only one job title specific to PPC (Paid Search Specialist) made it into the top job titles. PPC skills are becoming necessary for more general marketing roles, such as Marketing Manager and Digital Marketing Specialist.

Across all search queries, the most common jobs advertised call for a broad mix of skills. This tells us hiring managers are on the hunt for well-rounded candidates with a diverse range of marketing skills, as opposed to candidates with expertise in one area.

Marketers who cultivate diverse skill sets are better poised to gain an advantage over other job seekers, excel in their roles, and accelerate career growth. Jason Miller says it best in his piece about the new breed of hybrid marketer:

[Image: Jason Miller quote on the future of marketing]

Inbound job demand and growth: Most-wanted skills and fastest-growing jobs

Using data from Indeed, we identified which inbound skills have the highest demand and which jobs are seeing the most growth. Social media keywords claim the largest volume of results out of the terms we searched for during June 2015.

[Chart: Number of marketing job listings by keyword]

“Social media marketing” or “social media management” appeared the most frequently in the job postings we analyzed, with 46.7% containing these keywords. “PPC” returned the smallest number of results, with only 3.8% of listings containing this term.

Perhaps this is due to social media becoming a necessary skill across many industries rather than only for marketers (for example, social media’s role in customer service and recruitment). On the other hand, job roles calling for PPC or SEO skills are most likely marketing-focused. The prevalence of social media jobs may also indicate that social media has gained wide acceptance as a necessary part of a marketing strategy. Additionally, social media skills command lower salaries than other marketing skills, making it cheaper to hire for these positions (we will explore this further in the average salaries section below).

Our search results also included a high volume of jobs containing “digital marketing” and “SEO” keywords, which made up 19.5% and 15.5% of listings, respectively. At 5.8%, “content marketing” had the second-lowest share, after “PPC.”

Digital marketing, social media, and content marketing experienced the most job growth

While the number of job listings tells us which skills are most in demand today, looking at which jobs are seeing the most growth can give insight into shifting demands.

[Chart: Digital marketing job growth on Indeed.com]

Digital marketing job listings have seen substantial growth since 2009, when they accounted for less than 0.1% of Indeed.com search results. By January 2015, this number had climbed to nearly 0.3%.

[Chart: Social media job growth on Indeed.com]

While social media marketing jobs have seen some uneven growth, as of January 2015 more than 0.1% of all job listings on Indeed.com contained the term “social media marketing” or “social media management.” This shows a significant upward trend considering this number was around 0.05% for most of 2014. It’s also worth noting that “social media” is currently ranked No. 10 on Indeed’s list of top job trends.

[Chart: Content marketing job growth on Indeed.com]

Despite its growth from 0.02% to nearly 0.09% of search volume in the last four years, “content marketing” does not make up a large volume of job postings compared to “digital marketing” or “social media.” In fact, “SEO” has seen a decrease in growth but still constitutes a higher percentage of job listings than content marketing.

SEO, PPC, and Google Analytics job growth has slowed down

On the other hand, search volume on Indeed has either decreased or plateaued for “SEO,” “PPC,” and “Google Analytics.”

[Chart: SEO job growth on Indeed.com]

As we see in the graph, the volume of “SEO” job listings peaked between 2011 and 2012. This is also around the time content marketing began gaining popularity, thanks to the Panda and Penguin updates. The decrease may be explained by companies moving their marketing budgets away from SEO and toward content or social media positions. However, “SEO” still appears in a significant share of job listings: more than 0.2% of listings on Indeed as of 2015.

[Chart: PPC job growth on Indeed.com]

“PPC” has seen the most staggered growth among all the search terms we analyzed, with its peak of nearly 0.1% happening between 2012 and 2013. As of January of this year, search volume was below 0.05% for “PPC.”

[Chart: Google Analytics job growth on Indeed.com]

Despite a lack of growth, the need for this skill remains steady. Between 2008 and 2009, “Google Analytics” job ads saw a huge spike on Indeed. Since then, the search volume has tapered off and plateaued through January 2015.

Most valuable skills are SEO, digital marketing, and Google Analytics

So we know that social media, digital marketing, and content marketing jobs are on the rise. But which skills are worth the most? We looked at average salaries based on keywords, using estimates from Indeed and salaries listed in job ads.

[Chart: National average marketing salaries]

Job titles containing “SEO” had an average salary of $102,000. Meanwhile, job titles containing “social media marketing” had an average salary of $51,000. Considering such a large percentage of the job listings we analyzed contained “social media” keywords, there is a much larger pool of these jobs; a lot of entry-level social media jobs and internships are probably bringing down the average salary.

Job titles containing “Google Analytics” had the second-highest average salary at $82,000, but this should be taken with a grain of salt considering “Google Analytics” will rarely appear as part of a job title. The chart below, which shows average salaries for jobs containing keywords anywhere in the listing as opposed to only in the title, gives a more accurate idea of how much “Google Analytics” job roles earn on average.

[Chart: National average salaries by marketing keyword]

Looking at the average salaries based on keywords that appeared anywhere within the job listing (job title, job description, etc.) shows a slightly different picture. Based on this, jobs containing “digital marketing” or “inbound marketing” had the highest average salary of $84,000. “SEO” and “Google Analytics” are tied for second with $76,000 as the average salary.

“Social media marketing” takes the bottom spot with an average salary of $57,000. However, notice that there is a higher average salary for jobs that contain “social media” within the job listing as opposed to jobs that contain “social media” within the title. This suggests that social media skills may be more valuable when combined with other responsibilities and skills, whereas a strictly social media job, such as Social Media Manager or Social Media Specialist, does not earn as much.

Massachusetts, New York, and California have the most career opportunities for inbound marketers

Looking for a new job? Maybe it’s time to pack your bags for Boston.

Massachusetts led the U.S. with the most jobs per capita for digital marketing, content marketing, SEO, and Google Analytics. New York took the top spot for social media jobs per capita, while Utah had the highest concentration of PPC jobs. California ranked in the top three for digital marketing, content marketing, social media, and Google Analytics. Illinois appeared in the top 10 for every term and usually ranked within the top five. Most of the states with the highest job concentrations are in the Northeast or the West, with a few exceptions such as Illinois and Minnesota.

But you don’t necessarily have to move to a new state to increase the odds of landing an inbound marketing job. Some unexpected states also made the cut, with Connecticut and Vermont ranking within the top 10 for several keywords.

[Map and chart: Digital marketing jobs per capita by state]

Job listings containing “digital marketing” or “inbound marketing” were most prevalent in Massachusetts, New York, Illinois, and California, which is most likely due to these states being home to major cities where marketing agencies and large brands are headquartered or have a presence. You will notice these four states make an appearance in the top 10 for every other search query and usually rank close to the top of the list.

More surprising to find in the top 10 were smaller states such as Connecticut and Vermont. Many major organizations are headquartered in Connecticut, which may be driving the state’s need for digital marketing talent. Vermont’s high-tech industry growth may explain its high concentration of digital marketing jobs.

[Map and chart: Content marketing jobs per capita by state]

Although content marketing jobs are growing, there is still a low overall volume of available jobs, as shown by the low jobs per capita compared to most of the other search queries. With more than three jobs per capita, Massachusetts and New York topped the list for the highest concentration of job listings containing “content marketing” or “content strategy.” California and Illinois ranked third and fourth with 2.8 and 2.1 jobs per capita, respectively.

[Map and chart: SEO jobs per capita by state]

Again, Massachusetts and New York took the top spots, each with more than eight SEO jobs per capita. Utah took third place for the highest concentration of SEO jobs. Surprised to see Utah rank in the top 10? Its inclusion on this list and others may be due to its booming tech startup scene, which has earned the metropolitan areas of Salt Lake City, Provo, and Park City the nickname Silicon Slopes.

[Map and chart: Social media jobs per capita by state]

Compared to the other keywords, “social media” sees a much higher concentration of jobs. New York dominates the rankings with nearly 24 social media jobs per capita. The other top contenders of California, Massachusetts, and Illinois all have more than 15 social media jobs per capita.

The numbers at the bottom of this list give you an idea of how prevalent social media jobs were compared to any other keyword we analyzed. Minnesota’s 12.1 jobs per capita, the lowest-ranking state in the top 10 for social media, trumps even the highest-ranking state for any other keyword (11.5 digital marketing jobs per capita in Massachusetts).

[Map and chart: PPC jobs per capita by state]

Due to its low overall number of available jobs, “PPC” sees the lowest jobs per capita out of all the search queries. Utah has the highest concentration of jobs with just two PPC jobs per 100,000 residents. It is also the only state in the top 10 to crack two jobs per capita.

[Map and chart: Google Analytics jobs per capita by state]

Regionally, the Northeast and West dominate the rankings, with the exception of Illinois. Massachusetts and New York are tied for the most Google Analytics job postings, each with nearly five jobs per capita. At more than three jobs per 100,000 residents, California, Illinois, and Colorado round out the top five.

Overall, our findings indicate that none of the marketing disciplines we analyzed are dying career choices, but there is a need to become more than a one-trick pony, or else you risk being passed over for job opportunities. As the marketing industry evolves, there is a greater need for marketers who “wear many hats” and have competencies across different marketing disciplines. Marketers who develop diverse skill sets can gain a competitive advantage in the job market and achieve greater career growth.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Simple Steps for Conducting Creative Content Research

Posted by Hannah_Smith

Most frequently, the content we create at Distilled is designed to attract press coverage, social shares, and exposure (and links) on sites our clients’ target audience reads. That’s a tall order.

Over the years we’ve had our hits and misses, and through this we’ve recognised the value of learning about what makes a piece of content successful. Coming up with a great idea is difficult, and it can be tough to figure out where to begin. Today, rather than leaping headlong into brainstorming sessions, we start with creative content research.

What is creative content research?

Creative content research enables you to answer the questions:

“What are websites publishing, and what are people sharing?”

From this, you’ll then have a clearer view on what might be successful for your client.

A few years ago this required quite a lot of work to figure out. Today, happily, it’s much quicker and easier. In this post I’ll share the process and tools we use.

Whoa there… Why do I need to do this?

I think that the value in this sort of activity lies in a couple of directions:

a) You can learn a lot by deconstructing the success of others…

I’ve been taking stuff apart to try to figure out how it works for about as long as I can remember, so applying this process to content research felt pretty natural to me. Perhaps more importantly though, I think that deconstructing content is actually easier when it isn’t your own. You’re not involved, invested, or in love with the piece so viewing it objectively and learning from it is much easier.

b) Your research will give you a clear overview of the competitive landscape…

As soon as a company elects to start creating content, they gain a whole raft of new competitors. In addition to their commercial competitors (i.e. those who offer similar products or services), the company also gains content competitors. For example, if you’re a sports betting company and plan to create content related to the sports events you’re offering betting markets on, then you’re competing not just with other betting companies, but with every other publisher who creates content about those events: major news outlets, sports news sites, fan sites, etc. To make matters even more complicated, it’s likely that you’ll actually be seeking coverage from those same content competitors. As such, you need to understand what’s already being created in the space before creating content of your own.

c) You’re giving yourself the data to create a more compelling pitch…

At some point you’re going to need to pitch your ideas to your client (or your boss if you’re working in-house). At Distilled, we’ve found that getting ideas signed off can be really tough. Ultimately, a great idea is worthless if we can’t persuade our client to give us the green light. This research can be used to make a more compelling case to your client and get those ideas signed off. (Incidentally, if getting ideas signed off is proving to be an issue you might find this framework for pitching creative ideas useful).

Where to start

Good ideas start with a good brief; however, it can be tough to pin clients down to get answers to a long list of questions.

As a minimum you’ll need to know the following:

  • Who are they looking to target?
    • Age, sex, demographic
    • What’s their core focus? What do they care about? What problems are they looking to solve?
    • Who influences them?
    • What else are they interested in?
    • Where do they shop and which brands do they buy?
    • What do they read?
    • What do they watch on TV?
    • Where do they spend their time online?
  • Where do they want to get coverage?
    • We typically ask our clients to give us a wishlist of 10 or so sites they’d love to get coverage on
  • Which topics are they comfortable covering?
    • This question is often the toughest, particularly if a client hasn’t created content specifically for links and shares before. Often clients are uncomfortable about drifting too far away from their core business—for example, if they sell insurance, they’ll typically say that they really want to create a piece of content about insurance. Whilst this is understandable from the client’s perspective, it can severely limit their chances of success. It’s definitely worth offering up a gentle challenge at this stage—I’ll often cite Red Bull, who are a great example of a company who create content based on what their consumers love, not what they sell (i.e. Red Bull sell soft drinks, but create content about extreme sports because that’s the sort of content their audience love to consume). It’s worth planting this idea early, but don’t get dragged into a fierce debate at this stage—you’ll be able to make a far more compelling argument once you’ve done your research and are pitching concrete ideas.

Processes, useful tools and sites

Now you have your brief, it’s time to begin your research.

Given that we’re looking to uncover “what websites are publishing and what’s being shared,” it won’t surprise you to learn that I pay particular attention to pieces of content and the coverage they receive. For each piece that I think is interesting, I’ll note down the following:

  • The title/headline
  • A link to the coverage (and to the original piece if applicable)
  • How many social shares the coverage earned (and the original piece earned)
  • The number of linking root domains the original piece earned
  • Some notes about the piece itself: why it’s interesting, why I think it got shares/coverage
  • Any gaps in the content, whether or not it’s been executed well
  • How we might do something similar (if applicable)

Whilst I’m doing this I’ll also make a note of specific sites I see being frequently shared (I tend to check these out separately later on), any interesting bits of research (particularly if I think there might be an opportunity to do something different with the data), interesting threads on forums etc.
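If you prefer a script or spreadsheet export over freeform notes, the checklist above maps naturally onto a simple record. A sketch in Python (the field names and the example entry are my own, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class ContentNote:
    """One row of creative content research. Field names are an
    illustrative mapping of the checklist above, not a standard."""
    title: str
    coverage_url: str
    original_url: str = ""
    social_shares: int = 0
    linking_root_domains: int = 0
    notes: str = ""
    gaps: str = ""

# Hypothetical example entry
note = ContentNote(
    title="50 years of festival line-ups, visualised",
    coverage_url="https://example.com/coverage",
    social_shares=12_400,
    linking_root_domains=85,
    notes="Strong news hook; nostalgia angle drove shares",
)
print(note.title)
```

Structuring the notes this way makes it trivial to sort or filter later, for instance to pull out the ten most-linked pieces when building a pitch.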

When it comes to kicking off your research, you can start wherever you like, but I’d recommend that you cover off each of the areas below:

What does your target audience share?

Whilst this activity might not uncover specific pieces of successful content, it’s a great way of getting a clearer understanding of your target audience, and getting a handle on the sites they read and the topics which interest them.

  • Review social profiles / feeds
    • If the company you’re working for has a Facebook page, it shouldn’t be too difficult to find some people who’ve liked the company page and have a public profile. It’s even easier on Twitter where most profiles are public. Whilst this won’t give you quantitative data, it does put a human face to your audience data and gives you a feel for what these people care about and share. In addition to uncovering specific pieces of content, this can also provide inspiration in terms of other sites you might want to investigate further and ideas for topics you might want to explore.
  • Demographics Pro
    • This service infers demographic data from your clients’ Twitter followers. I find it particularly useful if the client doesn’t know too much about their audience. In addition to demographic data, you get a breakdown of professions, interests, brand affiliations, and the other Twitter accounts they follow and who they’re most influenced by. This is a paid-for service, but there are pay-as-you-go options in addition to pay monthly plans.

Finding successful pieces of content on specific sites

If you’ve a list of sites you know your target audience read, and/or you know your client wants to get coverage on, there are a bunch of ways you can uncover interesting content:

  • Using your link research tool of choice (e.g. Open Site Explorer, Majestic, ahrefs) you can run a domain level report to see which pages have attracted the most links. This can also be useful if you want to check out commercial competitors to see which pieces of content they’ve created have attracted the most links.
  • There are also tools which enable you to uncover the most shared content on individual sites. You can use Buzzsumo to run content analysis reports on individual domains which provide data on average social shares per post, social shares by network, and social shares by content type.
  • If you just want to see the most shared content for a given domain, you can run a simple search on Buzzsumo using the domain; there’s also the option to refine by topic. For example, a search like [guardian.com big data] will return the most shared content on the Guardian related to big data. You can also run similar reports using ahrefs’ Content Explorer tool.

Both Buzzsumo and ahrefs are paid tools, but both offer free trials. If you need to explore the most shared content without using a paid tool, there are other alternatives. Check out Social Crawlytics which will crawl domains and return social share data, or alternatively, you can crawl a site (or section of a site) and then run the URLs through SharedCount‘s bulk upload feature.
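As a rough sketch of that last workflow (crawl a list of URLs, look up share counts for each, and rank by total), here is some illustrative Python. The `fetch_share_counts` stub stands in for whatever source you actually use, such as a SharedCount bulk-upload export or a Buzzsumo CSV; the URLs and numbers are invented:

```python
# Illustrative only: fetch_share_counts is a stand-in for a real
# share-count source (e.g. a SharedCount bulk-upload export);
# URLs and numbers are invented.

def fetch_share_counts(url):
    """Pretend lookup; swap in a real API call or file import."""
    fake_data = {
        "https://example.com/post-a": {"facebook": 900, "twitter": 340},
        "https://example.com/post-b": {"facebook": 120, "twitter": 80},
    }
    return fake_data.get(url, {})

def rank_by_total_shares(urls):
    """Sum shares across networks and sort most-shared first."""
    totals = {u: sum(fetch_share_counts(u).values()) for u in urls}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

crawled = ["https://example.com/post-a", "https://example.com/post-b"]
print(rank_by_total_shares(crawled))
```

The ranked output gives you a shortlist of the site’s best performers to deconstruct further.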

Finding successful pieces of content by topic

When searching by topic, I find it best to begin with a broad search and then drill down into more specific areas. For example, if I had a client in the financial services space, I’d start out looking at a broad topic like “money” rather than shooting straight to topics like loans or credit cards.

As mentioned above, both Buzzsumo and ahrefs allow you to search for the most shared content by topic and both offer advanced search options.

Further inspiration

There are also several sites I like to look at for inspiration. Whilst these sites don’t give you a great steer on whether or not a particular piece of content was actually successful, with a little digging you can quickly find the original source and pull link and social share data:

  • Visually has a community area where users can upload creative content. You can search by topic to uncover examples.
  • TrendHunter has a searchable archive of creative ideas featuring products, creative campaigns, marketing campaigns, advertising, and more. It’s best to keep your searches broad if you’re looking at this site.
  • Check out Niice (a moodboard app) which also has a searchable archive of handpicked design inspiration.
  • Searching Pinterest can unearth some interesting bits and pieces, as can Google image searches and regular Google searches around particular topics.
  • Reviewing relevant sections of discussion sites like Quora can provide insight into what people are asking about particular topics which may spark a creative idea.

Moving from data to insight

By this point you’ve (hopefully) got a long list of content examples. Whilst this is a great start, what you’ve got here is effectively just data; now you need to convert it into insight.

Remember, we’re trying to answer the questions: “What are websites publishing, and what are people sharing?”

Ordinarily as I go through the creative content research process, I start to see patterns or themes emerge. For example, across a variety of topic areas you’ll see that the most shared content tends to be news. Whilst this is good to know, it’s not necessarily something that’s going to be particularly actionable. You’ll need to dig a little deeper: what else (aside from news) is given coverage? Can you split those things into categories or themes?

This is tough to explain in the abstract, so let me give you an example. We’d identified a set of music sites (e.g. Rolling Stone, NME, CoS, Stereogum, Pitchfork) as target publishers for a client.

Here’s a summary of what I concluded following my research:

The most-shared content on these music publications is news: album launches, new singles, videos of performances etc. As such, if we can work a news hook into whatever we create, this could positively influence our chances of gaining coverage.

Aside from news, the content which gains traction tends to fall into one of the following categories:

Earlier in this post I mentioned that it can be particularly tough to create content which attracts coverage and shares if clients feel strongly that they want to do something directly related to their product or service. The example I gave at the outset was a client who sold insurance and was really keen to create something about insurance. You’re now in a great position to win an argument with data, as thanks to your research you’ll be able to cite several pieces of insurance-related content which have struggled to gain traction. But it’s not all bad news as you’ll also be able to cite other topics which are relevant to the client’s target audience and stand a better chance of gaining coverage and shares.

Avoiding the pitfalls

There are potential pitfalls when it comes to creative content research in that it’s easy to leap to erroneous conclusions. Here’s some things to watch out for:

Make sure you’re identifying outliers…

When seeking out successful pieces of content you need to be certain that what you’re looking at is actually an outlier. For example, the average post on BuzzFeed gets over 30k social shares. As such, that post you found with just 10k shares is not an outlier. It’s done significantly worse than average. It’s therefore not the best post to be holding up as a fabulous example of what to create to get shares.
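One simple way to run this sanity check is to compare a post against the site’s own average and spread of share counts, rather than judging the raw number in isolation. A sketch, with invented sample data:

```python
import statistics

def is_outlier(shares, typical_counts, num_stdevs=2):
    """Flag a post only if it beats the site's mean share count by
    `num_stdevs` standard deviations. Sample numbers are invented."""
    mean = statistics.mean(typical_counts)
    stdev = statistics.stdev(typical_counts)
    return shares > mean + num_stdevs * stdev

# Invented sample of typical share counts for a large publisher
typical = [28_000, 31_000, 29_500, 33_000, 30_500]
print(is_outlier(10_000, typical))   # below the site's average
print(is_outlier(120_000, typical))  # far above the site's average
```

On this sample, a 10k-share post falls below the site’s mean of roughly 30k and is rightly rejected, while a 120k-share post clears the threshold comfortably.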

Don’t get distracted by formats…

Pay more attention to the idea than the format. For example, the folks at Mashable kindly covered an infographic about Instagram which we created for a client. However, the takeaway here is not that Instagram infographics get coverage on Mashable. Mashable didn’t cover this piece because we created an infographic; they covered it because it told a story in a compelling and unusual way.

You probably shouldn’t create a listicle…

This point is related to the one above. In my experience, unless you’re a publisher with a huge, engaged social following, that listicle of yours is unlikely to gain traction. Listicles on huge publisher sites get shares; listicles on client sites typically don’t. This is doubly important if you’re also seeking coverage, as listicles on client sites don’t typically attract links or coverage on other sites.

How we use the research to inform our ideation process

At Distilled, we typically take a creative brief and complete creative content research and then move into the ideation process. A summary of the research is included within the creative brief, and this, along with a copy of the full creative content research is shared with the team.

The research acts as inspiration and direction and is particularly useful in terms of identifying potential topics to explore but doesn’t mean team members don’t still do further research of their own.

This process by no means acts as a silver bullet, but it definitely helps us come up with ideas.


Thanks for sticking with me to the end!

I’d love to hear more about your creative content research processes and any tips you have for finding inspirational content. Do let me know via the comments.

Image credits: Research, typing, audience, inspiration, kitteh.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 3 years ago from tracking.feedpress.it

Inverse Document Frequency and the Importance of Uniqueness

Posted by EricEnge

In my last column, I wrote about how to use term frequency analysis in evaluating your content vs. the competition’s. Term frequency (TF) is only one part of the TF-IDF approach to information retrieval. The other part is inverse document frequency (IDF), which is what I plan to discuss today.

Today’s post will use an explanation of how IDF works to show you the importance of creating content that has true uniqueness. There are reputation and visibility reasons for doing this, and it’s great for users, but there are also SEO benefits.

If you wonder why I am focusing on TF-IDF, consider these words from a Google article from August 2014: “This is the idea of the famous TF-IDF, long used to index web pages.” While the way that Google may apply these concepts is far more than the simple TF-IDF models I am discussing, we can still learn a lot from understanding the basics of how they work.

What is inverse document frequency?

In simple terms, it’s a measure of the rareness of a term. Conceptually, we start by measuring document frequency. It’s easiest to illustrate with an example, as follows:

IDF table

In this example, we see that the word “a” appears in every document in the document set. What this tells us is that it provides no value in telling the documents apart. It’s in everything.

Now look at the word “mobilegeddon.” It appears in 1,000 of the documents, or one thousandth of one percent of them. Clearly, this term provides a great deal more differentiation for the documents that contain it.

Document frequency measures commonness, and we prefer to measure rareness. The classic way that this is done is with a formula that looks like this:

idf equation

For each term we are looking at, we take the total number of documents in the document set and divide it by the number of documents containing our term. This gives us more of a measure of rareness. However, we don’t want the resulting calculation to say that the word “mobilegeddon” is 1,000 times more important in distinguishing a document than the word “boat,” as that is too big of a scaling factor.

This is the reason we take the Log Base 10 of the result, to dampen that calculation. For those of you who are not mathematicians, you can loosely think of the Log Base 10 of a number as being a count of the number of zeros – i.e., the Log Base 10 of 1,000,000 is 6, and the log base 10 of 1,000 is 3. So instead of saying that the word “mobilegeddon” is 1,000 times more important, this type of calculation suggests it’s three times more important, which is more in line with what makes sense from a search engine perspective.
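The damped formula can be sketched in a few lines of Python. The corpus size of one billion documents is a hypothetical figure chosen for the example, and the document frequency for “boat” is assumed; only the 1,000-document figure for “mobilegeddon” comes from the table above:

```python
import math

def idf(total_docs, doc_frequency):
    """Inverse document frequency: log10 of (corpus size / docs containing the term)."""
    return math.log10(total_docs / doc_frequency)

N = 1_000_000_000  # hypothetical corpus size

# A term that appears in every document carries no distinguishing power:
print(idf(N, N))            # 0.0
# "boat" (hypothetically in 1,000,000 docs) vs. "mobilegeddon" (in 1,000):
print(idf(N, 1_000_000))    # 3.0
print(idf(N, 1_000))        # 6.0
```

Note how the logarithm dampens a raw 1,000x difference in rarity down to a difference of 3 in the final score.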

With this in mind, here are the IDF values for the terms we looked at before:

idf table logarithm values

Now you can see that we are providing the highest score to the term that is the rarest.

What does the concept of IDF teach us?

Think about IDF as a measure of uniqueness. It helps search engines identify what it is that makes a given document special. This needs to be much more sophisticated than how often you use a given search term (e.g. keyword density).

Think of it this way: If you are one of 6.78 million web sites that come up for the search query “super bowl 2015,” you are dealing with a crowded playing field. Your chances of ranking for this term based on the quality of your content alone are pretty much zero.

massive number of results for broad keyword

Overall link authority and other signals will be the only way you can rank for a term that competitive. If you are a new site on the landscape, well, perhaps you should chase something else.

That leaves us with the question of what you should target. How about something unique? Even the addition of a simple word like “predictions”—changing our phrase to “super bowl 2015 predictions”—reduces this playing field to 17,800 results.

Clearly, this is dramatically less competitive already. Slicing into this further, the phrase “super bowl 2015 predictions and odds” returns only 26 pages in Google. See where this is going?

What IDF teaches us is the importance of uniqueness in the content we create. Yes, ranking for these rarer terms will not pay nearly as much as ranking for the big head term would, but if your business is a new entrant into a very crowded space, you are not going to rank for the big head term anyway.

If you can pick out a smaller number of terms with much less competition and create content around those needs, you can start to rank for these terms and get money flowing into your business. This is because you are making your content more unique by using rarer combinations of terms (leveraging what IDF teaches us).
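Pairing this with term frequency from the previous column gives the classic TF-IDF score. Here is a toy sketch over a three-document corpus; the documents and terms are invented purely for illustration:

```python
import math

# A tiny invented corpus: two crowded "super bowl" pages, one unrelated page.
docs = [
    "super bowl 2015 predictions and odds for the big game",
    "super bowl 2015 halftime show review",
    "boat maintenance tips for summer",
]
tokenized = [d.split() for d in docs]

def tf_idf(term, doc_tokens, corpus):
    """Score a term in one document: frequency there, damped by corpus rarity."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in corpus if term in d)   # docs containing the term
    idf = math.log10(len(corpus) / df)
    return tf * idf

# "super" appears in 2 of 3 docs; "odds" in only 1 -> "odds" scores higher,
# even though both appear once in the first document.
print(tf_idf("super", tokenized[0], tokenized))
print(tf_idf("odds", tokenized[0], tokenized))
```

The rarer term wins: that is the whole argument for building content around less-contested phrases.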

Summary

People who do keyword analysis are often wired to pursue the major head terms directly, simply based on the available keyword search volume. The result from this approach can, in fact, be pretty dismal.

Understanding how inverse document frequency works helps us understand the importance of standing out. Creating content that brings unique angles to the table is often a very potent way to get your SEO strategy kick-started.

Of course, the reasons for creating content that is highly differentiated and unique go far beyond SEO. It’s good for your users, your reputation, your visibility, and your SEO all at once.

Reblogged 3 years ago from tracking.feedpress.it

What Deep Learning and Machine Learning Mean For the Future of SEO – Whiteboard Friday

Posted by randfish

Imagine a world where even the high-up Google engineers don’t know what’s in the ranking algorithm. We may be moving in that direction. In today’s Whiteboard Friday, Rand explores and explains the concepts of deep learning and machine learning, drawing us a picture of how they could impact our work as SEOs.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are going to take a peek into Google’s future and look at what it could mean as Google advances their machine learning and deep learning capabilities. I know these sound like big, fancy, important words. They’re not actually that tough of topics to understand. In fact, they’re simplistic enough that even a lot of technology firms like Moz do some level of machine learning. We don’t do anything with deep learning and a lot of neural networks. We might be going that direction.

But I found an article that was published in January, absolutely fascinating and I think really worth reading, and I wanted to extract some of the contents here for Whiteboard Friday because I do think this is tactically and strategically important to understand for SEOs and really important for us to understand so that we can explain to our bosses, our teams, our clients how SEO works and will work in the future.

The article is called “Google Search Will Be Your Next Brain.” It’s by Steve Levy. It’s over on Medium. I do encourage you to read it. It’s a relatively lengthy read, but just a fascinating one if you’re interested in search. It starts with a profile of Geoff Hinton, who was a professor in Canada and worked on neural networks for a long time and then came over to Google and is now a distinguished engineer there. As the article says, a quote from the article: “He is versed in the black art of organizing several layers of artificial neurons so that the entire system, the system of neurons, could be trained or even train itself to divine coherence from random inputs.”

This sounds complex, but basically what we’re saying is we’re trying to get machines to come up with outcomes on their own rather than us having to tell them all the inputs to consider, how to process those inputs, and the outcome to spit out. So this is essentially machine learning. Google has used this, for example, to figure out when you give it a bunch of photos and it can say, “Oh, this is a landscape photo. Oh, this is an outdoor photo. Oh, this is a photo of a person.” Have you ever had that creepy experience where you upload a photo to Facebook or to Google+ and they say, “Is this your friend so and so?” And you’re like, “God, that’s a terrible shot of my friend. You can barely see most of his face, and he’s wearing glasses which he usually never wears. How in the world could Google+ or Facebook figure out that this is this person?”

That’s what they use, these neural networks, these deep machine learning processes for. So I’ll give you a simple example. Here at Moz, we do machine learning very simplistically for page authority and domain authority. We take all the inputs — number of links, number of linking root domains, every single metric that you could get from Moz on the page level, on the sub-domain level, on the root-domain level, all these metrics — and then we combine them together and we say, “Hey machine, we want you to build us the algorithm that best correlates with how Google ranks pages, and here’s a bunch of pages that Google has ranked.” I think we use a base set of 10,000, and we do it about quarterly or every 6 months, feed that back into the system and the system pumps out the little algorithm that says, “Here you go. This will give you the best correlating metric with how Google ranks pages.” That’s how you get page authority and domain authority.
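The process described here (feed in a pile of metrics, ask the machine for the combination that best matches observed rankings) can be sketched as an ordinary least-squares fit. The metric values and rank scores below are invented for illustration; this is not Moz’s actual model:

```python
import numpy as np

# Hypothetical inputs: [number of links, number of linking root domains]
# for five pages, plus a score representing how well Google ranks each.
metrics = np.array([
    [120.0, 40.0],
    [300.0, 80.0],
    [50.0, 10.0],
    [500.0, 150.0],
    [90.0, 30.0],
])
observed_rank_score = np.array([38.0, 86.0, 13.0, 145.0, 27.9])

# "Hey machine, build us the algorithm that best correlates with rankings":
weights, *_ = np.linalg.lstsq(metrics, observed_rank_score, rcond=None)

# The fitted weights can now score a page the training set never saw.
new_page = np.array([200.0, 60.0])
print(float(new_page @ weights))
```

Real models use far more inputs and far fancier fitting than a linear least-squares pass, but the shape of the idea is the same: metrics in, best-correlating combination out.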

Cool, really useful, helpful for us to say like, “Okay, this page is probably considered a little more important than this page by Google, and this one a lot more important.” Very cool. But it’s not a particularly advanced system. The more advanced system is to have these kinds of neural nets in layers. So you have a set of networks, and these neural networks, by the way, they’re designed to replicate nodes in the human brain, which is in my opinion a little creepy, but don’t worry. The article does talk about how there’s a board of scientists who make sure Terminator 2 doesn’t happen, or Terminator 1 for that matter. Apparently, no one’s stopping Terminator 4 from happening? That’s the new one that’s coming out.

So one layer of the neural net will identify features. Another layer of the neural net might classify the types of features that are coming in. Imagine this for search results. Search results are coming in, and Google’s looking at the features of all the websites and web pages, your websites and pages, to try and consider like, “What are the elements I could pull out from there?”

Well, there’s the link data about it, and there are things that happen on the page. There are user interactions and all sorts of stuff. Then we’re going to classify types of pages, types of searches, and then we’re going to extract the features or metrics that predict the desired result, that a user gets a search result they really like. We have an algorithm that can consistently produce those, and then neural networks are hopefully designed — that’s what Geoff Hinton has been working on — to train themselves to get better. So it’s not like with PA and DA, our data scientist Matt Peters and his team looking at it and going, “I bet we could make this better by doing this.”

This is standing back and the guys at Google just going, “All right machine, you learn.” They figure it out. It’s kind of creepy, right?
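The layered setup Rand is describing (one layer extracting features, the next combining them, both adjusting themselves from errors) is a small feedforward neural network. Here is a minimal sketch that trains such a net on XOR; the task, network size, and iteration count are illustrative assumptions and have nothing to do with Google’s actual systems:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# XOR: an outcome no single input predicts, so the hidden layer has to
# learn intermediate features on its own -- which is the whole point.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
Xb = np.hstack([X, np.ones((4, 1))])  # append a constant bias input

W1 = rng.normal(size=(3, 4))  # inputs+bias -> hidden "feature" layer
W2 = rng.normal(size=(4, 1))  # hidden layer -> output

for _ in range(10_000):
    hidden = sigmoid(Xb @ W1)     # layer 1: extract features
    out = sigmoid(hidden @ W2)    # layer 2: combine them
    # Backpropagation: nudge both layers toward fewer errors.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out
    W1 -= Xb.T @ d_hid

print(np.round(out).ravel())  # hopefully recovers [0, 1, 1, 0]
```

No one hand-writes the hidden layer’s features; the training loop finds them. Stack many more layers and you get the “deep” in deep learning.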

In the original system, you needed those people, these individuals here to feed the inputs, to say like, “This is what you can consider, system, and the features that we want you to extract from it.”

Then there’s unsupervised learning, which is kind of the next step, where the system figures it out. So this takes us to some interesting places. Imagine the Google algorithm, circa 2005. You had basically a bunch of things in here. Maybe you’d have anchor text, PageRank and you’d have some measure of authority on a domain level. Maybe there are people who are tossing new stuff in there like, “Hey algorithm, let’s consider the location of the searcher. Hey algorithm, let’s consider some user and usage data.” They’re tossing new things into the bucket that the algorithm might consider, and then they’re measuring it, seeing if it improves.

But you get to the algorithm today, and gosh there are going to be a lot of things in there that are driven by machine learning, if not deep learning yet. So there are derivatives of all of these metrics. There are conglomerations of them. There are extracted pieces like, “Hey, we only want to look at and measure anchor text on these types of results when we also see that the anchor text matches up to the search queries that have previously been performed by people who also search for this.” What does that even mean? But that’s what the algorithm is designed to do. The machine learning system figures out things that humans would never extract, metrics that we would never even create from the inputs that they can see.

Then, over time, the idea is that in the future even the inputs aren’t given by human beings. The machine is getting to figure this stuff out itself. That’s weird. That means that if you were to ask a Google engineer in a world where deep learning controls the ranking algorithm, if you were to ask the people who designed the ranking system, “Hey, does it matter if I get more links,” they might be like, “Well, maybe.” But they don’t know, because they don’t know what’s in this algorithm. Only the machine knows, and the machine can’t even really explain it. You could go take a snapshot and look at it, but (a) it’s constantly evolving, and (b) a lot of these metrics are going to be weird conglomerations and derivatives of a bunch of metrics mashed together and torn apart and considered only when certain criteria are fulfilled. Yikes.

So what does that mean for SEOs? Like what do we have to care about from all of these systems and this evolution and this move towards deep learning, which by the way that’s what Jeff Dean, who is, I think, a senior fellow over at Google, he’s the dude that everyone mocks for being the world’s smartest computer scientist over there, and Jeff Dean has basically said, “Hey, we want to put this into search. It’s not there yet, but we want to take these models, these things that Hinton has built, and we want to put them into search.” That for SEOs in the future is going to mean much less distinct universal ranking inputs, ranking factors. We won’t really have ranking factors in the way that we know them today. It won’t be like, “Well, they have more anchor text and so they rank higher.” That might be something we’d still look at and we’d say, “Hey, they have this anchor text. Maybe that’s correlated with what the machine is finding, the system is finding to be useful, and that’s still something I want to care about to a certain extent.”

But we’re going to have to consider those things a lot more seriously. We’re going to have to take another look at them and decide and determine whether the things that we thought were ranking factors still are when the neural network system takes over. It also is going to mean something that I think many, many SEOs have been predicting for a long time and have been working towards, which is more success for websites that satisfy searchers. If the output is successful searches, and that’s what the system is looking for, and that’s what it’s trying to correlate all its metrics to, if you produce something that means more successful searches for Google searchers when they get to your site, and you ranking in the top means Google searchers are happier, well you know what? The algorithm will catch up to you. That’s kind of a nice thing. It does mean a lot less info from Google about how they rank results.

So today you might hear from someone at Google, “Well, page speed is a very small ranking factor.” In the future they might be, “Well, page speed is like all ranking factors, totally unknown to us.” Because the machine might say, “Well yeah, page speed as a distinct metric, one that a Google engineer could actually look at, looks very small.” But derivatives of things that are connected to page speed may be huge inputs. Maybe page speed is something, that across all of these, is very well connected with happier searchers and successful search results. Weird things that we never thought of before might be connected with them as the machine learning system tries to build all those correlations, and that means potentially many more inputs into the ranking algorithm, things that we would never consider today, things we might consider wholly illogical, like, “What servers do you run on?” Well, that seems ridiculous. Why would Google ever grade you on that?

If human beings are putting factors into the algorithm, they never would. But the neural network doesn’t care. It doesn’t care. It’s a honey badger. It doesn’t care what inputs it collects. It only cares about successful searches, and so if it turns out that Ubuntu is poorly correlated with successful search results, too bad.

This world is not here yet today, but certainly there are elements of it. Google has talked about how Panda and Penguin are based off of machine learning systems like this. I think, given what Geoff Hinton and Jeff Dean are working on at Google, it sounds like this will be making its way more seriously into search and therefore it’s something that we’re really going to have to consider as search marketers.

All right everyone, I hope you’ll join me again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Reblogged 4 years ago from tracking.feedpress.it

Local Search Expert Quiz: How Much Do You Know about Local SEO?

Posted by Cyrus-Shepard

How big is local SEO?

Our latest Industry Survey revealed that over 67% of online marketers report spending time on local search. We’ve witnessed demand for local SEO expertise grow as Google’s competitive landscape continues to evolve.

Last year, Moz introduced the SEO Expert Quiz, which to date over 40,000 people have attempted to conquer. Today, we’re proud to announce the Local Search Expert Quiz. Written by local search expert Miriam Ellis, the quiz contains 40 questions and takes less than 10 minutes to complete.

Ready to get started? When you are finished, we’ll automatically score your quiz and reveal the correct answers.

<a href="http://mozbot.polldaddy.com/s/local-search-expert-quiz">View Survey</a>

Rating your score

Keep in mind the Local Search Expert Quiz is just for fun. That said, we’ve established the following guidelines to help judge your results.

  • 0-39% Newbie: Time to study up on your citation data!
  • 40-59% Beginner: Good job, but you’re not quite in the 7-pack yet.
  • 60-79% Intermediate: You’re getting close to the centroid!
  • 80-89% Pro: Let’s tackle multi-location!
  • 90-100% Guru: We all bow down to your local awesomeness!

Resources to improve your performance

Want to learn more about local search? Here’s a collection of free learning resources to help improve your performance (and possibly your income).

  1. The Moz Local Learning Center
  2. Glossary of Local Search Terms and Definitions
  3. Guidelines for Representing Your Business on Google
  4. Local Search Ranking Factors
  5. Blumenthal’s Blog
  6. Local SEO Guide
  7. Whitespark Blog

You can also learn the latest local search tips and tricks by signing up for the LocalUp Advanced one-day conference or reading local SEO posts on the Moz Blog.

Embed this Quiz

We created this quiz using Polldaddy, and we’re making it available to embed on your own site. This isn’t a backlink play – we didn’t even include a link to our own site (but feel free to include one if you feel generous).

Here’s the embed code:

<iframe frameborder="0" width="100%" height="600" scrolling="auto" allowtransparency="true" src="http://mozbot.polldaddy.com/s/local-search-expert-quiz?iframe=1"><a href="http://mozbot.polldaddy.com/s/local-search-expert-quiz">View Survey</a></iframe>

How did you score on the quiz? Let us know in the comments below!

Reblogged 4 years ago from moz.com

10 Predictions for the Marketing World in 2015

Posted by randfish

The beginning of the year marks the traditional week for bloggers to prognosticate about the 12 months ahead, and over the last decade I’ve created a tradition of joining in this festive custom to predict the big trends in SEO and web marketing. However, I divine the future by a strict code: I’m only allowed to make predictions IF my predictions from last year were at least moderately accurate (otherwise, why should you listen to me?). So, before I break out the crystal ball, let’s have a look at how I did for 2014.

Yes, we’ll get to that, but not until you prove you’re a real Wizard, mustache-man.

You can find my post from January 5th of last year here, but I won’t force you to read through it. Here’s how I do grading:

  • Spot On (+2) – when a prediction hits the nail on the head and the primary criteria are fulfilled
  • Partially Accurate (+1) – predictions that are in the area, but are somewhat different than reality
  • Not Completely Wrong (-1) – those that landed near the truth, but couldn’t be called “correct” in any real sense
  • Off the Mark (-2) – guesses which didn’t come close

If the score is positive, prepare for more predictions, and if it’s negative, I’m clearly losing the pulse of the industry. Let’s tally up the numbers.

In 2014, I made 6 predictions:

#1: Twitter will go Facebook’s route and create insights-style pages for at least some non-advertising accounts

Grade: +2

Twitter rolled out Twitter analytics for all users this year (starting in July for some accounts, and then in August for everyone), and while it’s not nearly as full-featured as Facebook’s “Insights” pages, it’s definitely in line with the spirit of this prediction.

#2: We will see Google test search results with no external, organic listings

Grade: -2

I’m very happy to be wrong about this one. To my knowledge, Google has yet to go this direction and completely eliminate external-pointing links on search results pages. Let’s hope they never do.

That said, there are plenty of SERPs where Google is taking more and more of the traffic away from everyone but themselves, e.g.:

I think many SERPs that have basic, obvious functions like “timer” are going to be less and less valuable as traffic sources over time.

#3: Google will publicly acknowledge algorithmic updates targeting both guest posting and embeddable infographics/badges as manipulative linking practices

Grade: -1

Google most certainly did release an update (possibly several) targeted at guest posts, but they didn’t publicly talk about anything specifically algorithmic targeting embedded content/badges. It’s very possible this was included in the rolling Penguin updates, but the prediction said “publicly acknowledge,” so I’m giving myself a -1.

#4: One of these 5 marketing automation companies will be purchased in the 9-10 figure $ range: Hubspot, Marketo, Act-On, Silverpop, or Sailthru

Grade: +2

Silverpop was purchased by IBM in April of 2014. While a price wasn’t revealed, the “sources” quoted by the media estimated the deal in the ~$270mm range. I’m actually surprised there wasn’t another sale, but this one was spot-on, so it gets the full +2.

#5: Resumes listing “content marketing” will grow faster than either SEO or “social media marketing”

Grade: +1

As a percentage, this certainly appears to be the case. Here are some stats:

  • US profiles with “content marketing”
    • June 2013: 30,145
    • January 2015: 68,580
    • Growth: 127.5%
  • US profiles with “SEO”
    • June 2013: 364,119
    • January 2015: 596,050
    • Growth: 63.7%
  • US profiles with “social media marketing”
    • June 2013: 938,951
    • January 2015: 1,990,677
    • Growth: 112%

Granted, content marketing appears on far fewer profiles than SEO or social media marketing, but it has seen greater growth. I’m only giving myself a +1 rather than a +2 on this because, while the prediction was mathematically correct, the numbers of SEO and social still dwarf content marketing as a term. In fact, in LinkedIn’s annual year-end report of which skills got people hired the most, SEO was #5! Clearly, the term and the skillset continue to endure and be in high demand.

#6: There will be more traffic sent by Pinterest than Twitter in Q4 2014 (in the US)

Grade: +1

This is probably accurate, since Pinterest appears to have grown faster in 2014 than Twitter by a good amount AND this was already true in most of 2014 according to SharedCount (though I’m not totally sold on the methodology of coverage for their numbers). However, we won’t know the truth for a few months to come, so I’d be presumptuous in giving a full +2. I am a bit surprised that Pinterest continues to grow at such a rapid pace — certainly a very impressive feat for an established social network.


SOURCE: Global Web Index

With Twitter’s expected moves into embedded video, it’s my guess that we’ll continue to see a lot more Twitter engagement and activity on Twitter itself, and referring traffic outward won’t be as considerable a focus. Pinterest seems to be one of the only social networks that continues that push (as Facebook, Instagram, LinkedIn, and YouTube all seem to be pursuing a “keep them here” strategy).

——————————–

Final Score: +3

That positive number means I’ve passed my bar and can make another set of predictions for 2015. I’m going to be a little more aggressive this year, even though it risks ruining my sterling record, simply because I think it’s more exciting 🙂

Thus, here are my 10 predictions for what the marketing world will bring us in 2015:

#1: We’ll see the first major not-for-profit University in the US offer a degree in Internet Marketing, including classes on SEO.

There are already some private, for-profit offerings from places like Fullsail and Univ. of Phoenix, but I don’t know that these pedigrees carry much weight. Seeing a Stanford, a Wharton, or a University of Washington offer undergraduate or MBA programs in our field would be a boon to those seeking options and an equal boon to the universities.

The biggest reason I think we’re ripe for this in 2015 is the LinkedIn top 25 job skills data showing the immense value of SEO (#5) and digital/online marketing (#16) in a profile when seeking a new job. That should (hopefully) be a direct barometer for what colleges seek to include in their repertoire.

#2: Google will continue the trend of providing instant answers in search results with more interactive tools.

Google has been doing instant answers for a long time, but in addition to queries with immediate and direct responses, they’ve also undercut a number of online tool vendors by building their own versions directly into the SERPs, like they do currently for queries like “timer” and “calculator.”

I predict in 2015, we’ll see more partnerships like what’s provided with OpenTable and the ability to book reservations directly from the SERPs, possibly with companies like Uber, Flixster (they really need to get back to a better instant answer for movies+city), Zillow, or others that have unique data that could be surfaced directly.

#3: 2015 will be the year Facebook begins including some form of web content (not on Facebook’s site) in their search functionality.

Facebook severed their search relationship with Bing in 2014, and I’m going to make a very risky prediction that in 2015, we’ll see Facebook’s new search emerge and use some form of non-Facebook web data. Whether they’ll actually build their own crawler or merely license certain data from outside their properties is another matter, but I think Facebook’s shown an interest in getting more sophisticated with their ad offerings, and any form of search data/history about their users would provide a powerful addition to what they can do today.

#4: Google’s indexation of Twitter will grow dramatically, and a significantly higher percentage of tweets, hashtags, and profiles will be indexed by the year’s end.

Twitter has been putting more muscle behind their indexation and SEO efforts, and I’ve seen more and more Twitter URLs creeping into the search results over the last 6 months. I think that trend continues, and in 2015, we see Twitter.com enter the top 5-6 “big domains” in Mozcast.

#5: The EU will take additional regulatory action against Google that will create new, substantive changes to the search results for European searchers.

In 2014, we saw the EU enforce the “right to be forgotten” and settle some antitrust issues that require Google to edit what it displays in the SERPs. I don’t think the EU is done with Google. As the press has noted, there are plenty of calls in the European Parliament to break up the company, and while I think the EU will stop short of that measure, I believe we’ll see additional regulatory action that affects search results.

On a personal opinion note, I would add that while I’m not thrilled with how the EU has gone about their regulation of Google, I am impressed by their ability to do so. In the US, with Google becoming the second largest lobbying spender in the country and a masterful influencer of politicians, I think it’s extremely unlikely that they suffer any antitrust or regulatory action in their home country — not because they haven’t engaged in monopolistic behavior, but because they were smart enough to spend money to manipulate elected officials before that happened (unlike Microsoft, who, in the 1990’s, assumed they wouldn’t become a target).

Thus, if there is to be any check on Google’s power in search, it will probably come from the EU and the EU alone. There’s no competitor with the teeth or market share to have an impact (at least outside of China, Russia, and South Korea), and no other government is likely to take them on.

#6: Mobile search, mobile devices, SSL/HTTPS referrals, and apps will combine to make traffic source data increasingly hard to come by.

I’ll estimate that by year’s end, many major publishers will see 40%+ of their traffic coming from “direct” even though most of that is search and social referrers that fail to pass the proper referral string. Hopefully, we’ll be able to verify that through folks like Define Media Group, whose data sharing this year has made them one of the best allies marketers have in understanding the landscape of web traffic patterns.

BTW, I’d estimate that already 30-50% of all “direct” traffic is, in fact, search or social traffic that hasn’t been properly attributed. This is a huge challenge for web marketers — maybe one of the greatest challenges we face, because saying “I brought in a lot more traffic, I just can’t prove it or measure it” isn’t going to get you nearly the buy-in, raises, or respect that your paid-traffic compatriots can earn by having every last visit they drive perfectly attributed.
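To illustrate how that misattribution happens: most analytics packages bucket any visit that arrives without a referrer header as “direct,” so a click from an HTTPS search result to an HTTP page, or a tap inside a native mobile app, looks identical to a typed-in URL. Here’s a minimal sketch of that bucketing logic — the visit records and domain lists are hypothetical, not any particular analytics product’s schema:

```python
from urllib.parse import urlparse

# Hypothetical domain lists for classifying referrers.
SEARCH_DOMAINS = {"google.com", "bing.com", "yahoo.com"}
SOCIAL_DOMAINS = {"facebook.com", "twitter.com", "pinterest.com"}

def classify(referrer: str) -> str:
    """Bucket a visit by its referrer string, the way most analytics
    tools do. No referrer -> "direct", even if the visit actually
    came from search or social."""
    if not referrer:
        return "direct"
    host = urlparse(referrer).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host in SEARCH_DOMAINS:
        return "search"
    if host in SOCIAL_DOMAINS:
        return "social"
    return "referral"

# Hypothetical visits: (landing_page, referrer_header_as_received).
visits = [
    ("https://example.com/post", "https://www.google.com/"),  # attributed
    ("http://example.com/post", ""),   # HTTPS search -> HTTP page: referrer stripped
    ("https://example.com/post", ""),  # tap from a native mobile app
    ("https://example.com/", "https://www.facebook.com/"),
]
buckets = [classify(ref) for _, ref in visits]
print(buckets)  # ['search', 'direct', 'direct', 'social']
```

Half the visits in this toy sample land in “direct” despite genuinely being search or social — which is exactly the measurement gap described above.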

#7: The content advertising/recommendation platforms will continue to consolidate, and either Taboola or Outbrain will be acquired or do some heavy acquiring themselves.

We just witnessed the surprising shutdown of nRelate, which I suspect had something to do with IAC politics more than just performance and potential for the company. But given that less than 2% of the web’s largest sites use content recommendation/promotion services and yet both Outbrain and Taboola are expected to have pulled in north of $200m in 2014, this is a massive area for future growth.

Yahoo!, Facebook, and Google are all potential acquirers here, and I could even see AOL (who already own Gravity) or Buzzfeed making a play. Likewise, there’s a slew of smaller/other players that Taboola or Outbrain themselves could acquire: Zemanta, Adblade, ZergNet, Nativo, Disqus, Gravity, etc. It’s a marketplace as ripe for acquisition as it is for growth.

#8: Promoted pins will make Pinterest an emerging juggernaut in the social media and social advertising world, particularly for e-commerce.

I’d estimate we’ll see figures north of $50m spent on promoted pins in 2015. This is coming after Pinterest only just opened their ad platform beyond a beta group this January. But, thanks to high engagement, lots of traffic, and a consumer base that B2C marketers absolutely love and often struggle to reach, I think Pinterest is going to have a big ad opportunity on their hands.

Note the promoted pin from Mad Hippie on the right (apologies for the very unappetizing recipes featured around it).

#9: Foursquare (and/or Swarm) will be bought, merge with someone, or shut down in 2015 (probably one of the first two).

I used to love Foursquare. I used the service multiple times every day, tracked where I went with it, ran into friends in foreign cities thanks to its notifications, and even used it to see where to go sometimes (in Brazil, for example, I found Foursquare’s business location data far superior to Google Maps’). Then came the split from Swarm. Most of my friends who were using Foursquare stopped, and the few who continued did so less frequently. The revamped Foursquare app tried to compete with Yelp, but it looks like neither it nor Swarm is doing well in the app rankings these days.

I feel a lot of empathy for Dennis and the Foursquare team. I can totally understand the appeal, from a development and product perspective, of splitting up the two apps to let each concentrate on what it’s best at, and not dilute a single product with multiple primary use cases. Heck, we’re trying to learn that lesson at Moz and refocus our products back on SEO, so I’m hardly one to criticize. That said, I think there’s trouble brewing for the company and probably some pressure to sell while their location and check-in data, which is still hugely valuable, is robust enough and unique enough to command a high price.

#10: Amazon will not take considerable search share from Google, nor will mobile search harm Google’s ad revenue substantively.

The “Google’s-in-trouble” pundits are mostly talking about two trends that could hurt Google’s revenue in the year ahead. First, mobile searchers being less valuable to Google because they don’t click on ads as often and advertisers won’t pay as much for them. And, second, Amazon becoming the destination for direct, commercial queries ahead of Google.

In 2015, I don’t see either of these taking a toll on Google. I believe most of Amazon’s impact as a direct navigation destination for e-commerce shoppers has already taken place and while Google would love to get those searchers back, that’s already a lost battle (to the extent it was lost). I also don’t think mobile is a big concern for Google — in fact, I think they’re pivoting it into an opportunity, and taking advantage of their ability to connect mobile to desktop through Google+/Android/Chrome. Desktop search may have flatter growth, and it may even decline 5-10% before reaching a state of equilibrium, but mobile is growing at such a huge clip that Google has plenty of time and even plentier eyeballs and clicks to figure out how to drive more revenue per searcher.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from moz.com

Location is Everything: Local Rankings in Moz Analytics

Posted by MatthewBrown

Today we are thrilled to launch local rankings as a feature in Moz Analytics, which gives our customers the ability to assign geo-locations to their tracked keywords. If you’re a Moz Analytics customer and are ready to jump right in, here’s where you can find the new feature within the application:

Not a Moz Analytics customer? You can take the new features for a free spin…

One of the biggest SEO developments of the last several years is how frequently Google is returning localized organics across a rapidly increasing number of search queries. It’s not just happening for “best pizza in Portland” (the answer to that is Apizza Scholls, by the way). Searches like “financial planning” and “election guide” now trigger Google’s localization algorithm:

[Screenshot: localized search results for “election guide”]

This type of query underscores the need to track rankings on a local level. I’m searching for a non-localized keyword (“election guide”), but Google recognizes I’m searching from Portland, Oregon so they add the localization layer to the result.
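Rank trackers simulate that same localization layer programmatically. One common technique is passing Google’s `uule` parameter, which encodes a canonical location name into the search URL; note that this is an informal convention reverse-engineered by the SEO community, not a documented Google API, so treat the sketch below as illustrative only:

```python
from urllib.parse import urlencode

# Key alphabet used by the uule convention: the character at index
# len(canonical_name) encodes the name's length.
UULE_KEY = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz0123456789-_")

def uule_for(canonical_name: str) -> str:
    """Build a uule value for a canonical location name such as
    'Portland,Oregon,United States'. Informal, reverse-engineered
    convention -- not a documented Google parameter."""
    return "w+CAIQICI" + UULE_KEY[len(canonical_name)] + canonical_name

def localized_search_url(query: str, canonical_name: str) -> str:
    """Compose a search URL that asks Google to localize results as
    if the searcher were in the given location."""
    params = {"q": query, "uule": uule_for(canonical_name)}
    return "https://www.google.com/search?" + urlencode(params)

url = localized_search_url("election guide", "Portland,Oregon,United States")
print(url)
```

This is roughly how a tool can check what a Portland searcher sees for “election guide” without physically being in Portland — the same problem the local rankings feature described below solves inside Moz Analytics.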

Local tends to get lost in the shuffle of zoo animal updates we’ve seen from Google in the last couple of years, but search marketers are coming around to realize the 2012 Venice update was one of the most important changes Google made to the search landscape. It certainly didn’t seem like a huge deal when it launched; here’s how Google described Venice as part of the late lamented monthly search product updates they used to provide:

  • Improvements to ranking for local search results. [launch codename “Venice”] This improvement improves the triggering of Local Universal results by relying more on the ranking of our main search results as a signal.

Seems innocent enough, right? What the Venice update actually kicked off was a long-term relationship between local search results (what we see in Google local packs and map results) and the organic search results that, once upon a time, existed on their own. “Localized organics,” as they are known, have been increasingly altering the organic search landscape for keywords that normally triggered “generic” or national rankings. If you haven’t already read it, Mike Ramsey’s article on how to adjust for the Venice update remains one of the best strategic looks at the algorithm update.

This jump in localized organic results has prompted both marketers and business owners to track rankings at the local level. An increasing number of Moz customers have been requesting the ability to add locations to their keywords since the 2012 Venice update, and this is likely due to Google expanding the queries which trigger a localized result. You asked for it, and today we’re delivering. Our new local rankings feature allows our customers to track keywords for any city, state, or ZIP/postal code.

Geo-located searches

We can now return rankings based on a location you specify, just like I set my search to Portland in the example above. This is critical for monitoring the health of your local search campaigns, as Google continues to fold the location layer into the organic results. Here’s how it looks in Moz Analytics:

[Screenshot: tracking a local keyword ranking in Moz Analytics]

A keyword with a location specified counts against your keyword limit in Moz Analytics just like any other keyword.

The location being tracked will also be displayed in your rankings reports as well as on the keyword analysis page:

[Screenshot: keyword analysis page showing the tracked location]

The local rankings feature allows you to enter your desired tracking location by city, state, neighborhood, or ZIP/postal code. We provide neighborhood-level granularity via dropdown for the United States, United Kingdom, Canada, and Australia. The dropdown also provides city-level listings for other countries, and you can type a location that isn’t on the list directly into the text box. Fair warning: We cannot guarantee the accuracy of rankings in mythical locations like Westeros or Twin Peaks, or mythical spellings like Pordland or Los Andules.

An easy way to get started with the new feature is to look at keywords you are already tracking, and find the ones that have an obvious local intent for searchers. Then add the neighborhood or city you are targeting for the most qualified searchers.
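If you have more than a handful of local-intent keywords, it can help to generate the geo-modified variants programmatically before adding them to your tracking campaign. A minimal sketch — the seed keywords and target locations here are made up for illustration:

```python
from itertools import product

# Hypothetical seed keywords with local intent, and the locations
# you want to target for the most qualified searchers.
keywords = ["health club", "personal trainer"]
locations = ["Des Moines", "Des Moines IA", "50301"]

# Each variant pairs a keyword with a geo-modifier. You'd typically
# track the bare keyword at the location too (e.g. "health club"
# ranked from the 50301 ZIP code), as the feature supports both.
variants = [f"{kw} {loc}" for kw, loc in product(keywords, locations)]
for v in variants:
    print(v)
```

The resulting list (“health club Des Moines,” “health club 50301,” and so on) can then be pasted into your keyword tracking alongside the location-assigned versions of the bare terms.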

What’s next?

We will be launching local rankings functionality within the Moz Local application in the first part of 2015, which will provide needed visibility to folks who are mainly concerned with Local SEO. We’re also working on functionality to allow users to easily add geo-modifiers to their tracked keywords, so we can provide rankings for “health club Des Moines” alongside tracking rankings for “health clubs” in the 50301 zip code.

Right now this feature works with all Google engines (we’ll be adding Bing and Yahoo! later). We’ll also be keeping tabs on Google’s advancements on the local front so we can provide our customers with the best data on their local visibility.

Please let us know what you think in the comments below! Customer feedback, suggestions, and comments were instrumental in both the design and prioritization of this feature.


Reblogged 4 years ago from moz.com