Stop Ghost Spam in Google Analytics with One Filter

Posted by CarloSeo

Spam in Google Analytics (GA) is becoming a serious issue. Due to a deluge of referral spam from social buttons, adult sites, and many, many other sources, people are becoming overwhelmed by all the filters they have to set up to manage the useless data they receive.

The good news is, there is no need to panic. In this post, I’m going to focus on the most common mistakes people make when fighting spam in GA, and explain an efficient way to prevent it.

But first, let’s make sure we understand how spam works. A couple of months ago, Jared Gardner wrote an excellent article explaining what referral spam is, including its intended purpose. He also pointed out some great examples of referral spam.

Types of spam

Spam in Google Analytics falls into two types: ghosts and crawlers.

Ghosts

The vast majority of spam is this type. They are called ghosts because they never access your site. It is important to keep this in mind, as it’s key to creating a more efficient solution for managing spam.

As unusual as it sounds, this type of spam doesn’t have any interaction with your site at all. You may wonder how that is possible since one of the main purposes of GA is to track visits to our sites.

They do it by using the Measurement Protocol, which allows people to send data directly to Google Analytics’ servers. Using this method, and probably randomly generated tracking codes (UA-XXXXX-1) as well, the spammers leave a “visit” with fake data, without even knowing who they are hitting.
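To make this concrete, here is a minimal Python sketch of what such a hit can look like, using the requests library. The parameter names come from the Universal Analytics Measurement Protocol; the tracking ID, client ID, page path, and referrer below are made-up examples:

import requests

# Ghost spammers POST hits like this straight to GA's collection
# endpoint -- no visit to the target site ever happens.
payload = {
    "v": "1",                          # Measurement Protocol version
    "tid": "UA-12345-1",               # tracking ID; spammers guess these
    "cid": "555",                      # arbitrary client ID
    "t": "pageview",                   # hit type
    "dp": "/fake-page",                # fake page path
    "dr": "http://spam-site.example",  # fake referrer shown in your reports
}
requests.post("https://www.google-analytics.com/collect", data=payload)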

Crawlers

This type of spam, unlike ghost spam, does access your site. As the name implies, these spam bots crawl your pages, ignoring rules like those found in robots.txt that are supposed to stop them from reading your site. When they exit your site, they leave a record in your reports that looks like a legitimate visit.

Crawlers are harder to identify because they know their targets and use real data. But it is also true that new ones seldom appear. So if you detect a referral in your analytics that looks suspicious, researching it on Google or checking it against this list might help you answer the question of whether or not it is spammy.

Most common mistakes made when dealing with spam in GA

I’ve been following this issue closely for the last few months. According to the comments people have made on my articles and conversations I’ve found in discussion forums, there are primarily three mistakes people make when dealing with spam in Google Analytics.

Mistake #1. Blocking ghost spam from the .htaccess file

One of the biggest mistakes people make is trying to block ghost spam via the .htaccess file.

For those who are not familiar with this file, one of its main functions is to allow/block access to your site. Now we know that ghosts never reach your site, so adding them here won’t have any effect and will only add useless lines to your .htaccess file.

Ghost spam usually shows up for a few days and then disappears. As a result, sometimes people think that they successfully blocked it from here when really it’s just a coincidence of timing.

Then when the spam later returns, site owners get worried because the solution no longer seems to be working, and they assume the spammer somehow bypassed the barriers they set up.

The truth is, the .htaccess file can only effectively block crawlers such as buttons-for-website.com and a few others, since these actually access your site. Most of the spam can’t be blocked this way, so there is no option other than using filters to exclude it.

Mistake #2. Using the referral exclusion list to stop spam

Another error is trying to use the referral exclusion list to stop the spam. The name may be misleading: this list is not intended to exclude referrals the way we need to for spam. It has other purposes.

For example, when a customer buys something, sometimes they get redirected to a third-party page for payment. After making a payment, they’re redirected back to your website, and GA records that as a new referral. It is appropriate to use the referral exclusion list to prevent this from happening.

If you try to use the referral exclusion list to manage spam, however, the referral part will be stripped since there is no preexisting record. As a result, a direct visit will be recorded, and you will have a bigger problem than the one you started with: you will still have spam, and direct visits are harder to track.

Mistake #3. Worrying that bounce rate changes will affect rankings

When people see that the bounce rate changes drastically because of the spam, they start worrying about the impact that it will have on their rankings in the SERPs.

[Image: bounce rate report distorted by spam]

This is another common mistake. With or without spam, Google doesn’t use Google Analytics metrics as a ranking factor. Here is an explanation about this from Matt Cutts, the former head of Google’s web spam team.

And if you think about it, Cutts’ explanation makes sense: although many people have GA, not everyone uses it.

Assuming your site has been hacked

Another common concern when people see strange landing pages coming from spam on their reports is that they have been hacked.

[Image: fake landing pages from spam shown in the reports]

The page that the spam shows on the reports doesn’t exist, and if you try to open it, you will get a 404 page. Your site hasn’t been compromised.

But you do have to verify that the page doesn’t exist, because there are cases (unrelated to spam) where a site suffers a security breach and gets injected with pages full of bad keywords to defame the website.

What should you worry about?

Now that we’ve ruled out security issues and effects on rankings, the only thing left to worry about is your data. The fake trail that the spam leaves behind pollutes your reports.

It might have greater or lesser impact depending on your site traffic, but everyone is susceptible to the spam.

Small and midsize sites are the most easily impacted – not only because a big part of their traffic can be spam, but also because usually these sites are self-managed and sometimes don’t have the support of an analyst or a webmaster.

Big sites with a lot of traffic can also be impacted by spam, and although the impact may be insignificant, invalid traffic means inaccurate reports no matter the size of the website. As an analyst, you should be able to explain what’s going on even in the most granular reports.

You only need one filter to deal with ghost spam

Usually it is recommended to add the referral to an exclusion filter after it is spotted. Although this is useful for a quick action against the spam, it has three big disadvantages.

  • Making filters every week for every new spam source detected is tedious and time-consuming, especially if you manage many sites. Plus, by the time you apply the filter and it starts working, some of your data has already been affected.
  • Some of the spammers use direct visits along with the referrals.
  • These direct hits won’t be stopped by a referral filter, so even if you are excluding the referral you will still be receiving invalid traffic, which explains why some people have seen an unusual spike in direct traffic.

Luckily, there is a good way to prevent all these problems. Most of the spam (the ghost type) works by hitting GA tracking IDs at random, meaning the offender doesn’t really know who the target is; for that reason, the hostname is either not set or fake. (See the report below.)

[Image: hostname report showing ghost spam with fake or missing hostnames]

You can see that they use some weird names or don’t even bother to set one. Although there are some known names in the list, these can be easily added by the spammer.

On the other hand, valid traffic will always use a real hostname. In most cases, this will be your domain, but it can also come from paid services, translation services, or any other place where you’ve inserted your GA tracking code.

[Image: report showing referrals with valid hostnames]

Based on this, we can make a filter that will include only hits that use real hostnames. This will automatically exclude all hits from ghost spam, whether it shows up as a referral, keyword, or pageview, or even as a direct visit.

To create this filter, you will need to find the report of hostnames. Here’s how:

  1. Go to the Reporting tab in GA
  2. Click on Audience in the lefthand panel
  3. Expand Technology and select Network
  4. At the top of the report, click on Hostname

[Image: list of hostnames in the Network report]

You will see a list of all hostnames, including the ones that the spam uses. Make a list of all the valid hostnames you find, as follows:

  • yourmaindomain.com
  • blog.yourmaindomain.com
  • es.yourmaindomain.com
  • payingservice.com
  • translatetool.com
  • anotheruseddomain.com

For small to medium sites, this list of hostnames will likely consist of the main domain and a couple of subdomains. After you are sure you got all of them, create a regular expression similar to this one:

yourmaindomain\.com|anotheruseddomain\.com|payingservice\.com|translatetool\.com

You don’t need to put all of your subdomains in the regular expression; the main domain will match all of them. If you don’t have a view set up without filters, create one now. And if you want to sanity-check your expression before using it, see the sketch below.
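Here’s a quick Python sketch of such a check. GA filter patterns are partial-match regular expressions, so an unanchored search mirrors the filter’s behavior; the hostnames below are the placeholders from the list above:

import re

# The same expression you'd paste into the Filter Pattern box.
valid_hosts = re.compile(
    r"yourmaindomain\.com|anotheruseddomain\.com|"
    r"payingservice\.com|translatetool\.com"
)

samples = [
    "blog.yourmaindomain.com",  # subdomain of the main domain -- kept
    "payingservice.com",        # third-party service you tagged -- kept
    "(not set)",                # typical ghost-spam hostname -- dropped
    "free-buttons.example",     # fake hostname -- dropped
]

for host in samples:
    print("keep" if valid_hosts.search(host) else "drop", host)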

Then create a Custom Filter.

Make sure you select INCLUDE, select “Hostname” in the Filter Field, and copy your expression into the Filter Pattern box.

[Image: custom filter configuration]

You might want to verify the filter before saving to check that everything is okay. Once you’re ready, save it and apply the filter to all the views you want (except the view without filters).

This single filter will get rid of future occurrences of ghost spam that use invalid hostnames, and it doesn’t require much maintenance. But it’s important that every time you add your tracking code to a new service, you also add that service’s hostname to the end of the filter expression.

Now you should only need to take care of the crawler spam. Since crawlers access your site, you can block them by adding these lines to the .htaccess file:

## STOP REFERRER SPAM
# Requires Apache's mod_rewrite. [NC] makes the match case-insensitive,
# [OR] chains the conditions, and [F] returns "403 Forbidden" to the bot.
RewriteEngine On
RewriteCond %{HTTP_REFERER} semalt\.com [NC,OR]
RewriteCond %{HTTP_REFERER} buttons-for-website\.com [NC]
RewriteRule .* - [F]

It is important to note that this file is very sensitive, and misplacing a single character in it can bring down your entire site. Therefore, make sure you create a backup copy of your .htaccess file prior to editing it.

If you don’t feel comfortable messing around with your .htaccess file, you can alternatively build an expression with all the crawlers and add it to an exclude filter by Campaign Source.
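For example, an expression covering the two crawlers from the .htaccess snippet above would look just like the hostname expression:

semalt\.com|buttons-for-website\.com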

Implement these combined solutions, and you will worry much less about spam contaminating your analytics data. This will have the added benefit of freeing up more time for you to spend actually analyzing your valid data.

After stopping spam, you can also get clean reports from the historical data by using the same expressions in an Advanced Segment to exclude all the spam.

Bonus resources to help you manage spam

If you still need more information to help you understand and deal with the spam on your GA reports, you can read my main article on the subject here: http://www.ohow.co/what-is-referrer-spam-how-stop-it-guide/.

Additional information on how to stop spam can be found at these URLs:

In closing, I am eager to hear your ideas on this serious issue. Please share them in the comments below.

(Editor’s Note: All images featured in this post were created by the author.)


Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard:

[Image: this week’s whiteboard]

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks, when you say, “hey, disallow this,” and then you suddenly see those URLs popping up and you’re wondering what’s going on—look, Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake. They think, “hey, there’s a lot of links pointing to this content, there’s a lot of people who are visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say, “All user agents, you are not allowed to crawl blogtest.html”? Okay—that’s a good way to keep that page away from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.
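In robots.txt terms, that rule would look something like this:

User-agent: *
Disallow: /blogtest.html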

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great. The tradeoff is that it requires crawl budget, since the page must be crawled for the tag to be seen, even though it keeps the page out of the index.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.
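If you’re on Apache, for instance, one way to serve a 410 is a one-line directive in .htaccess (the path here is a hypothetical example):

# Respond "410 Gone" for a permanently removed page (uses mod_alias)
Redirect gone /old-page.html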

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
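As a sketch, each color variant would carry a canonical link element in its head pointing back at the default page (the URLs are hypothetical, following the example above):

<!-- On starwarsshirt-blue.html, starwarsshirt-gray.html, etc. -->
<link rel="canonical" href="http://example.com/starwarsshirt.html">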

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. If you disallow crawling on those pages, Google can’t see the noindex, and they don’t know that they can follow the links. Granted, as we talked about before, sometimes Google doesn’t obey robots.txt, but you can’t rely on that behavior; assume that the disallow will prevent them from crawling, which means they’ll never see the noindex. So I would say the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say, “hey, our internal search engine, that’s really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to make those into category landing pages,” then you can use the disallow in robots.txt to prevent those pages from being crawled.

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!


8 Ways Content Marketers Can Hack Facebook Multi-Product Ads

Posted by Alan_Coleman

The trick most content marketers are missing

Creating great content is the first half of success in content marketing. Getting quality content read by, and amplified to, a relevant audience is the oft-overlooked second half of success. Facebook can be a content marketer’s best friend for this challenge. For reach, relevance, and amplification potential, Facebook is unrivaled.

  1. Reach: 1 in 6 mobile minutes on planet earth is somebody reading something on Facebook.
  2. Relevance: Facebook is a lean, mean interest- and demo-targeting machine. No online or offline media owns as much juicy interest and demographic information on its audience, and certainly no media has allowed advertisers to utilise this information as effectively as Facebook has.
  3. Amplification: Facebook is literally built to encourage sharing. Here are the first 10 words from their mission statement: “Facebook’s mission is to give people the power to share…” Enough said!

Because of these three digital marketing truths, if a content marketer gets their paid promotion* right on Facebook, the battle for eyeballs and amplification is already won.

For this reason it’s crucial that content marketers keep a close eye on Facebook advertising innovations and seek out ways to use them in new and creative ways.

In this post I will share with you eight ways we’ve hacked a new Facebook ad format to deliver content marketing success.

Multi-Product Ads (MPAs)

In 2014, Facebook unveiled multi-product ads (MPAs) for US advertisers; we got them in Europe earlier this year. They allow retailers to show multiple products in a carousel-type ad unit.

They look like this:

If the user clicks on the featured product, they are guided directly to the landing page for that specific product, from where they can make a purchase.

You could say MPAs are Facebook’s answer to Google Shopping.

Facebook’s mistake is a content marketer’s gain

I believe Facebook has misunderstood how people want to use its social network, and the transaction-focused format is OK at best for selling products. People aren’t really on Facebook to hit the “buy now” button. I’m a daily Facebook user and I can’t recall a time this year when I have gone directly from Facebook to an e-commerce website and transacted. Can you remember a recent time when you did?

So, this isn’t an innovation that removes a layer of friction from something that we are all doing online already (as the most effective innovations do). Instead, it’s a bit of a “hit and hope” that, by providing this functionality, Facebook would encourage people to try to buy online in a way they never have before.

The Wolfgang crew felt the MPA format would be much more useful to marketers and users if they were leveraging Facebook for the behaviour we all demonstrate on the platform every day, guiding users to relevant content. We attempted to see if Facebook Ads Manager would accept MPAs promoting content rather than products. We plugged in the images, copy and landing pages, hit “place order”, and lo and behold the ads became active. We’re happy to say that the engagement rates, and more importantly the amplification rates, are fantastic!

Multi-Content Ads

We’ve re-invented the MPA format for multi-advertisers in multi-ways, eight ways to be exact! Here are eight MPA hacks that have worked well for us. All eight hacks use the MPA format to promote content rather than products.

Hack #1: Multi-Package Ads

Our first variation wasn’t a million miles away from multi-product ads; we were promoting the various packages offered by a travel operator.

By looking at the number of likes, comments, and shares (in blue below the ads) you can see the ads were a hit with Facebook users and they earned lots of free engagement and amplification.

NB: If you have selected “clicks to website” as your advertising objective, all those likes, comments and shares are free!

[Image: Independent Travel multi-product ad]

The ad sparked plenty of conversation amongst Facebook friends in the comments section.

[Image: comments on a Facebook MPA]

Hack #2: Multi-Offer Ads

Everybody knows the Internet loves a bargain. So we decided to try another variation moving away from specific packages, focusing instead on deals for a different travel operator.

Here’s how the ads looked:

These ads got valuable amplification beyond the share. In the comments section, you can see people tagging specific friends. This led to the MPAs receiving further amplification, and a very targeted and personalised form of amplification to boot.

[Image: Abbey Travel Facebook ad comments]

Word of mouth referrals have been a trader’s best friend since the stone age. These “personalised” word of mouth referrals en masse are a powerful marketing proposition. It’s worth mentioning again that those engagements are free!

Hack #3: Multi-Locations Ads

Putting the Lo in SOLOMO.

This multi-product feed ad was hacked to promote numerous locations of a waterpark. “Where to go?” is among the first questions somebody asks when researching a holiday. In creating this top of funnel content, we can communicate with our target audience at the very beginning of their research process. A simple truth of digital marketing is: the more interactions you have with your target market on their journey to purchase, the more likely they are to seal the deal with you when it comes time to hit the “buy now” button. Starting your relationship early gives you an advantage over those competitors who are hanging around the bottom of the purchase funnel hoping to make a quick and easy conversion.

[Image: Abbey Travel SplashWorld Facebook MPA]

What was surprising here was that, because we were reaching people at the very beginning of their research journey, we expected the booking enquiries to be some time away. What actually happened was that these ads sparked an enquiry frenzy, as Facebook users could see other people enquiring and the holidays selling out in real time.

[Image: Abbey Travel comments and replies]

In fact, nearly all of the 35 comments on this ad were booking enquiries. This means what we were measuring as an “engagement” was actually a cold, hard “conversion”! You don’t need me to tell you a booking enquiry is far closer to the money than a Facebook like.

The three examples outlined so far are for travel companies. Travel is a great fit for Facebook, as it sits naturally in the Facebook feed; my feed is full of envy-inducing friends’ holiday pictures right now. Another interesting reason why travel is a great fit for Facebook ads is that there are typically multiple parties to a travel purchase. What happened here is that the comments section became a very visible and measurable forum for discussion between friends and family before becoming a stampede-inducing medium of enquiry.

So, stepping outside of the travel industry, how do other industries fare with hacked MPAs?

Hack #3a: Multi-Location Ads (combined with location targeting)

Location, location, location. For a property listings website, we applied location targeting and repeated our Multi-Location Ad format to advertise properties for sale to people in and around that location.

Hack #4: Multi-Big Content Ad

“The future of big content is multi platform”

– Cyrus Shepard

The same property website had produced a report and an accompanying infographic to provide their audience with unique and up-to-the-minute market information via their blog. We used the MPA format to promote the report, the infographic and the search rentals page of the website. This brought their big content piece to a larger audience via a new platform.

[Image: rental report multi-product ad]

Hack #5: Multi-Episode Ad

This MPA hack was for an online TV player. As you can see we advertised the most recent episodes of a TV show set in a fictional Dublin police station, Red Rock.

Engagement was high, opinion was divided.

[Image: TV3’s Red Rock viewer feedback]

LOL.

Hack #6: Multi-People Ads

In the cosmetic surgery world, past patients’ stories are valuable marketing material. Particularly when the past patients are celebrities. We recycled some previously published stories from celebrity patients using multi-people ads and targeted them to a very specific audience.

[Image: Avoca Clinic multi-people ads]

Hack #7: Multi-UGC Ads

Have you witnessed the power of user-generated content (UGC) in your marketing yet? We’ve found interaction rates with authentic UGC images can be up to tenfold those of the usual stylised images. In order to encourage further UGC, we posted a number of customers’ images in our Multi-UGC Ads.

The CTR on the above ads was 6% (2% is the average CTR for Facebook News feed ads according to our study). Strong CTRs earn you more traffic for your budget. Facebook’s relevancy score lowers your CPC as your CTR increases.

When it comes to conversion, UGC is a power player; we’ve learned that “customers attracting new customers” is a powerful acquisition tool.

Hack #8: Target past customers for amplification

“Who will support and amplify this content and why?”

– Rand Fishkin

Your happy customers, Rand, that’s the who and the why! Check out these Multi-Package Ads targeted to past customers via custom audiences. The Camino walkers have already told all their friends about their great trip; now allow them to share their great experiences on Facebook and connect the tour operator with their Facebook friends via a valuable word-of-mouth referral. Just look at the ratios of shares:likes and shares:comments. Astonishingly shareable ads!

[Image: CaminoWays multi-product ads]

Targeting past converters in an intelligent manner is a super smart way to find an audience ready to share your content.

How will hacking Multi-Product Ads work for you?

People don’t share ads, but they do share great content. So why not hack MPAs to promote your content and reap the rewards of the world’s greatest content sharing machine: Facebook.

MPAs allow you to tell a richer story by allowing you to promote multiple pieces of content simultaneously. So consider which pieces of content you have that will work well as “content bundles” and who the relevant audience for each “content bundle” is.

As Hack #8 above illustrates, the big wins come when you match a smart use of the format with the clever and relevant targeting Facebook allows. We’re massive fans of custom audiences so if you aren’t sure where to start, I’d suggest starting there.

So ponder your upcoming content pieces, consider your older content you’d like to breathe some new life into and perhaps you could become a Facebook Ads Hacker.

I’d love to hear about your ideas for turning Multi-Product Ads into Multi-Content Ads in the comments section below.

We could even take the conversation offline at Mozcon!

Happy hacking.


*Yes, I did say paid promotion; it’s no secret that Facebook’s organic reach continues to dwindle. The cold commercial reality is you need to pay to play on FB. The good news is that if you select ‘website clicks’ as your objective, you only pay for website traffic and engagement, while amplification by likes, comments, and shares is free! Those website clicks you pay for are typically substantially cheaper than AdWords, Taboola, Outbrain, Twitter, or LinkedIn. How does it compare to display? It doesn’t. Paying for clicks is always preferable to paying for impressions. If you are spending money on display advertising, I’d urge you to fling a few spondoolas towards Facebook ads and compare results. You will be pleasantly surprised.


The Importance of Being Different: Creating a Competitive Advantage With Your USP

Posted by TrentonGreener

“The one who follows the crowd will usually go no further than the crowd. Those who walk alone are likely to find themselves in places no one has ever been before.”

While this quote has been credited to everyone from Francis Phillip Wernig, writing under the pseudonym Alan Ashley-Pitt, to Einstein himself, the powerful message does not lose its substance no matter whom you choose to credit. There is a very important yet often overlooked consequence of not heeding this warning, one which can be applied to all aspects of life. From love and happiness to business and marketing, copying what your competitors are doing and failing to forge your own path can be a detrimental mistake.

While as marketers we are all acutely aware of the importance of differentiation, we’ve been trained for the majority of our lives to seek out the norm.

We spend the majority of our adolescent lives trying desperately not to be different. No one has ever been picked on for being too normal or not being different enough. We would beg our parents to buy us the same clothes little Jimmy or little Jamie wore. We’d want the same backpack and the same bike everyone else had. With the rise of the cell phone and later the smartphone, on hands and knees, we begged and pleaded for our parents to buy us the Razr, the StarTAC (bonus points if you didn’t have to Google that one), and later the iPhone. Did we truly want these things? Yes, but not just because they were cutting edge and nifty. We desired them because the people around us had them. We didn’t want to be the last to get these devices. We didn’t want to be different.

Thankfully, as we mature we begin to realize the fallacy that is trying to be normal. We start to become individuals and learn to appreciate that being different is often seen as beautiful. However, while we begin to celebrate being different on a personal level, it does not always translate into our business or professional lives.

We unconsciously and naturally seek out the normal, and if we want to be different—truly different in a way that creates an advantage—we have to work for it.

The truth of the matter is, anyone can be different. In fact, we all are very different. Even identical twins with the same DNA will often have starkly different personalities. As a business, the real challenge lies in being different in a way that is relevant, valuable to your audience, and creates an advantage.

“Strong products and services are highly differentiated from all other products and services. It’s that simple. It’s that difficult.” – Austin McGhie, Brand Is a Four Letter Word

Let’s explore the example of Revel Hotel & Casino. Revel is a 70-story luxury casino in Atlantic City that was built in 2012. There is simply not another casino of the same class in Atlantic City, but there might be a reason for this. Even if you’re not familiar with the city, a quick jump onto Atlantic City’s tourism website reveals that of the five hero banners that rotate, not one specifically mentions gambling, but three reference the boardwalk. This is further illustrated when exploring their internal linking structure. The beaches, boardwalk, and shopping all appear before a single mention of casinos. There simply isn’t as much of a market for high-end gamblers in the Atlantic City area; in the states Las Vegas serves that role. So while Revel has a unique advantage, their ability to attract customers to their resort has not resulted in profitable earnings reports. In Q2 2012, Revel had a gross operating loss of $35.177M, and in Q3 2012 that increased to $36.838M.

So you need to create a unique selling proposition (also known as unique selling point and commonly referred to as a USP), and your USP needs to be valuable to your audience and create a competitive advantage. Sounds easy enough, right? Now for the kicker. That advantage needs to be as sustainable as physically possible over the long term.

“How long will it take our competitors to duplicate our advantage?”

You really need to explore this question and the possible solutions your competitors could utilize to play catch-up or duplicate what you’ve done. Look no further than Google vs Bing to see this in action. No company out there is going to just give up because your USP is so much better; most will pivot or adapt in some way.

Let’s look at a Seattle-area coffee company with which you may or may not be familiar. Starbucks has tried quite a few times over the years to level up their tea game with limited success, but the markets that Starbucks has really struggled to break into are the pastry, bread, dessert, and food markets.

Other stores had more success in these markets, and they thought that high-quality teas and bakery items were the USPs that differentiated them from the Big Bad Wolf that is Starbucks. And while they were right to think that their brick house would save them from the Big Bad Wolf for some time, this fable doesn’t end with the Big Bad Wolf in a boiling pot.

Never underestimate your competitor’s ability to be agile, specifically when overcoming a competitive disadvantage.

If your competitor can’t beat you by making a better product or service internally, they can always choose to buy someone who can.

After months of courting, on June 4th, 2012, Starbucks announced that they had come to an agreement to purchase La Boulange in order to “elevate core food offerings and build a premium, artisanal bakery brand.” If you’re a small-to-medium-sized coffee shop and/or bakery that even indirectly competed with Starbucks, a new challenger approaches. And while those tea shops momentarily felt safe within the brick walls that guarded their USP, on the final day of that same year, the Big Bad Wolf huffed and puffed and blew a stack of cash all over Teavana, making Teavana a wholly-owned subsidiary of Starbucks for the low, low price of $620M.

Sarcasm aside, this does a great job of illustrating the ability of companies—especially those with deep pockets—to be agile, and demonstrates that they often have an uncanny ability to overcome your company’s competitive advantage. In seven months, Starbucks went from a minor player in these markets to having all the tools they need to dominate tea and pastries. Have you tried their raspberry pound cake? It’s phenomenal.

Why does this matter to me?

Ok, we get it. We need to be different, and in a way that is relevant, valuable, defensible, and sustainable. But I’m not the CEO, or even the CMO. I cannot effect change on a company level; why does this matter to me?

I’m a firm believer that you effect change no matter what the name plate on your desk may say. Sure, you may not be able to call an all-staff meeting today and completely change the direction of your company tomorrow, but you can effect change on the parts of the business you do touch. No matter your title or area of responsibility, you need to know your company’s, client’s, or even a specific piece of content’s USP, and you need to ensure it is applied liberally to all areas of your work.

Look at this example SERP for “Mechanics”:

While yes, this search is very likely to be local-sensitive, that doesn’t mean you can’t stand out. Every single AdWords result, save one, has only the word “Mechanics” in the headline. (While the top of page ad is pulling description line 1 into the heading, the actual headline is still only “Mechanic.”) But even the one headline that is different doesn’t do a great job of illustrating the company’s USP. Mechanics at home? Whose home? Mine or theirs? I’m a huge fan of Steve Krug’s “Don’t Make Me Think,” and in this scenario there are too many questions I need answered before I’m willing to click through. “Mechanics; We Come To You” or even “Traveling Mechanics” illustrates this point much more clearly, and still fits within the 25-character limit for the headline.

If you’re an AdWords user, no matter how big or small your monthly spend may be, take a look at your top 10-15 keywords by volume and evaluate how well you’re differentiating yourself from the other brands in your industry. Test ad copy that draws attention to your USP and reap the rewards.

Now while this is simply an AdWords text ad example, the same concept can be applied universally across all of marketing.

Title tags & meta descriptions

As we alluded to above, not only do companies have USPs, but individual pieces of content can, and should, have their own USP. Use your title tag and meta description to illustrate what differentiates your piece of content from the competition, and do so in a way that attracts the searcher’s click. If you have already established a strong brand within a specific niche, great! Now use it to your advantage. It’s much more likely, though, that you are competing against a strong brand; in these scenarios, ask yourself, “What makes our content different from theirs?” The answer you come up with is your content’s USP. Call attention to that in your title tag and meta description, and watch the CTR climb.

I encourage you to hop into your own site’s analytics and look at your top 10-15 organic landing pages and see how well you differentiate yourself. Even if you’re hesitant to negatively affect your inbound gold mines by changing the title tags, run a test and change up your meta description to draw attention to your USP. In an hour’s work, you just may make the change that pushes you a little further up those SERPs.

Branding

Let’s break outside the world of digital marketing and look at the world of branding. Tom’s Shoes competes against some heavy hitters in Nike, Adidas, Reebok, and Puma just to name a few. While Tom’s can’t hope to compete against the marketing budgets of these companies in a fair fight, they instead chose to take what makes them different, their USP, and disseminate it every chance they get. They have labeled themselves “The One for One” company. It’s in their homepage’s title tag, in every piece of marketing they put out, and it smacks you in the face when you land on their site. They even use the call-to-action “Get Good Karma” throughout their site.

Now as many of us may know, partially because of the scandal it created in late 2013, Tom’s is not actually a non-profit organization. No matter how you feel about the matter, this marketing strategy has created a positive effect on their bottom line. Fast Company conservatively estimated their revenues in 2013 at $250M, with many estimates being closer to the $300M mark. Not too bad of a slice of the pie when competing against the powerhouses Tom’s does.

Wherever you stand on this issue, Tom’s Shoes has done a phenomenal job of differentiating their brand from the big hitters in their industry.

Know your USP and disseminate it every chance you get.

This is worth repeating. Know your USP and disseminate it every chance you get, whether that be in title tags, ad copy, on-page copy, branding, or any other segment of your marketing campaigns. Online or offline, be different. And remember the quote that we started with, “The one who follows the crowd will usually go no further than the crowd. Those who walk alone are likely to find themselves in places no one has ever been before.”

The amount of marketing knowledge that can be taken from this one simple statement is astounding. Heed the words, stand out from the crowd, and you will have success.


Has Google Gone Too Far with the Bias Toward Its Own Content?

Posted by ajfried

Since the beginning of SEO time, practitioners have been trying to crack the Google algorithm. Every once in a while, the industry gets a glimpse into how the search giant works and we have opportunity to deconstruct it. We don’t get many of these opportunities, but when we do—assuming we spot them in time—we try to take advantage of them so we can “fix the Internet.”

On Feb. 16, 2015, news started to circulate that NBC would start removing images and references of Brian Williams from its website.

This was it!

A golden opportunity.

This was our chance to learn more about the Knowledge Graph.

Expectation vs. reality

Often it’s difficult to predict what Google is truly going to do. We expect something to happen, but in reality it’s nothing like we imagined.

Expectation

What we expected to see was that Google would change the source of the image. Typically, if you hover over the image in the Knowledge Graph, it reveals the location of the image.

[Image: hovering over the Keanu Reeves Knowledge Graph image reveals its source location]

This would mean that if the image disappeared from its original source, then the image displayed in the Knowledge Graph would likely change or even disappear entirely.

Reality (February 2015)

The only problem was, there was no official source (this changed, as you will soon see) and identifying where the image was coming from proved extremely challenging. In fact, when you clicked on the image, it took you to an image search result that didn’t even include the image.

Could it be? Had Google started its own database of owned or licensed images and was giving it priority over any other sources?

In order to find the source, we tried taking the image from the Knowledge Graph and “search by image” in images.google.com to find others like it. For the NBC Nightly News image, Google failed to even locate a match to the image it was actually using anywhere on the Internet. For other television programs, it was successful. Here is an example of what happened for Morning Joe:

[Image: search-by-image results for the Morning Joe Knowledge Graph image]

So we found the potential source. In fact, we found three potential sources. Seemed kind of strange, but this seemed to be the discovery we were looking for.

This looks like Google is using someone else’s content and not referencing it. These images have a source, but Google is choosing not to show it.

Then Google pulled the ol’ switcheroo.

New reality (March 2015)

Now things changed, and Google decided to put a source on its images. Unfortunately, I had mistakenly assumed that hovering over an image showed the same thing as the file path at the bottom, but I was wrong. The URL you see when you hover over an image in the Knowledge Graph is actually nothing more than the title. The source is different.

[Image: Morning Joe Knowledge Graph image with its new source shown]

Luckily, I still had two screenshots saved on my desktop from when I first saw this. Success. One screen capture was from NBC Nightly News, and the other was from the news show Morning Joe (see above), showing that the source was changed.

[Image: NBC Nightly News screenshot showing the gstatic.com source]

The source is a Google-owned property: gstatic.com. You can clearly see the change in the source. What started as a hypothesis is now a fact: Google is creating a database of images.

If this is the direction Google is moving, then it is creating all kinds of potential risks for brands and individuals. The implications are a loss of control for any brand that is looking to optimize its Knowledge Graph results. As well, it seems this poses a conflict of interest to Google, whose mission is to organize the world’s information, not license and prioritize it.

How do we think Google is supposed to work?

Google is an information-retrieval system tasked with sourcing information from across the web and supplying the most relevant results to users’ searches. In recent months, the search giant has taken a more direct approach by answering questions and assumed questions in the Answer Box, some of which come from un-credited sources. Google has clearly demonstrated that it is building a knowledge base of facts that it uses as the basis for its Answer Boxes. When it sources information from that knowledge base, it doesn’t necessarily reference or credit any source.

However, I would argue there is a difference between an un-credited Answer Box and an un-credited image. An un-credited Answer Box provides a fact that is indisputable, part of the public domain, unlikely to change (e.g., what year was Abraham Lincoln shot? How long is the George Washington Bridge?) Answer Boxes that offer more than just a basic fact (or an opinion, instructions, etc.) always credit their sources.

There are four possibilities when it comes to Google referencing content:

  • Option 1: It credits the content because someone else owns the rights to it
  • Option 2: It doesn’t credit the content because it’s part of the public domain, as seen in some Answer Box results
  • Option 3: It doesn’t reference it because it owns or has licensed the content. If you search for “Chicken Pox” or other diseases, Google appears to be using images from licensed medical illustrators. The same goes for song lyrics, which Eric Enge discusses here: Google providing credit for content. This adds to the speculation that Google is giving preference to its own content by displaying it over everything else.
  • Option 4: It doesn’t credit the content, but neither does it necessarily own the rights to the content. This is a very gray area, and is where Google seemed to be back in February. If this were the case, it would imply that Google is “stealing” content—which I find hard to believe, but felt was necessary to include in this post for the sake of completeness.

Is this an isolated incident?

At Five Blocks, whenever we see these anomalies in search results, we try to compare the term in question against others like it. This is a categorization concept we use to bucket individuals or companies into similar groups. When we do this, we uncover some incredible trends that help us determine what a search result “should” look like for a given group. For example, when looking at searches for a group of people or companies in an industry, this grouping gives us a sense of how much social media presence the group has on average or how much media coverage it typically gets.

Upon further investigation of terms similar to NBC Nightly News (other news shows), we noticed the un-credited image scenario appeared to be a trend in February, but now all of the images are being hosted on gstatic.com. When we broadened the categories further to TV shows and movies, the trend persisted. Rather than show an image in the Knowledge Graph and from the actual source, Google tends to show an image and reference the source from Google’s own database of stored images.

And just to ensure this wasn’t a case of tunnel vision, we researched other categories, including sports teams, actors and video games, in addition to spot-checking other genres.

Unlike terms for specific TV shows and movies, terms in each of these other groups all link to the actual source in the Knowledge Graph.

Immediate implications

It’s easy to ignore this and say “Well, it’s Google. They are always doing something.” However, there are some serious implications to these actions:

  1. The TV shows/movies aren’t receiving their due credit because, from within the Knowledge Graph, there is no actual reference to the show’s official site
  2. The more Google moves toward licensing and then retrieving their own information, the more biased they become, preferring their own content over the equivalent—or possibly even superior—content from another source
  3. It feels wrong and misleading to get a Google Image Search result rather than an actual site because:
    • The search doesn’t include the original image
    • Considering how poor Image Search results are normally, it feels like a poor experience
  4. If Google is moving toward licensing as much content as possible, then it could make the Knowledge Graph infinitely more complicated when there is a “mistake” or something unflattering. How could one go about changing what Google shows about them?

Google is objectively becoming subjective

It is clear that Google is attempting to create databases of information, including lyrics stored in Google Play, photos, and, previously, facts in Freebase (which is now Wikidata and not owned by Google).

I am not normally one to point my finger and accuse Google of wrongdoing. But this really strikes me as an odd move, one bordering on a clear bias to direct users to stay within the search engine. The fact is, we trust Google with a heck of a lot of information with our searches. In return, I believe we should expect Google to return an array of relevant information for searchers to decide what they like best. The example cited above seems harmless, but what about determining which is the right religion? Or even who the prettiest girl in the world is?

(image: religion and beauty queries)

Questions such as these, for which Google returns credited answers, could produce results that are perceived as facts.

Should we next expect Google to decide who is objectively the best service provider (e.g., pizza chain, painter, or accountant), then feature them in an un-credited answer box? Given the direction Google is moving right now, it feels like we should be calling its objectivity into question.

But that’s only my (subjective) opinion.


How to Create Boring-Industry Content that Gets Shared

Posted by ronell-smith

If you think creating content for boring industries is tough, try creating content for an expensive product that’ll be sold in a so-called boring industry. Such was the problem faced by Mike Jackson, head of sales for a large Denver-based company that was debuting a line of new high-end products for the fishing industry in 2009.

After years of pestering the executives of his traditional, non-flashy company to create a line of products that could be sold to anglers looking to buy premium items, he finally had his wish: products so expensive only a small percentage of anglers could afford them.

(image source)

What looked like being boxed into a corner was actually part of the plan.

When asked how he could ever put his neck on the line for a product he’d find tough to sell and even tougher to market, he revealed his brilliant plan.

“I don’t need to sell one million of [these products] a year,” he said. “All I need to do is sell a few hundred thousand, which won’t be hard. And as far as marketing, that’s easy: I’m ignoring the folks who’ll buy the items. I’m targeting professional anglers, the folks the buyers are influenced by. If the pros, the influencers, talk about and use the products, people will buy them.”

Such was my first introduction to how it’s often wise to ignore who’ll buy the product in favor of marketing to those who’ll help you market and sell the product.

These influencers are a sweet spot in product marketing, yet many brands largely ignore them.

Looking at content for boring industries all wrong

A few months back, I received a message on Google Plus that really piqued my interest: “What’s the best way to create content for my boring business? Just kidding. No one will read it, nor share information from a painter anyway.”

I went from being dismayed to disheartened. Dismayed because the business owner hadn’t yet found a way to connect with his prospects through meaningful content. Disheartened because he seemed to have given up trying.

You can successfully create content for boring industries. Doing so requires nothing out of the ordinary from what you’d normally do to create content for any industry. That’s the good news.

The bad news: Creating successful content for boring industries requires you to think beyond content and SEO, focusing heavily on content strategy and outreach.

Successfully creating content for boring industries—or any industry, for that matter—comes down to who’ll share it and who’ll link to it, not who’ll read it, a point nicely summed up in this tweet:

So when businesses struggle with creating content for their respective industries, the culprits are typically easy to find:

  • They lack clarity on who they are creating content for (e.g., content strategy, personas)
  • There are no specific goals (e.g., traffic, links, or conversions) assigned to the content, so measuring its effectiveness is impossible
  • They’re stuck in neutral, thinking viral content is the only option, while ignoring the value of content amplification (e.g., PR/outreach)

Alone, these three elements are bad; taken together, though, they spell doom for your brand.

(image: content does not equal amplification)

If you lack clarity on who you’re creating content for, the best you can hope for is that sometimes you’ll create and share information members of your audience find useful, but you likely won’t be able to reach or engage them with the needed frequency to make content marketing successful.

Goals, or the lack thereof, are the real bugaboo of content creation. The problem is even worse for boring industries, where the pressure is on to deliver a content vehicle interesting enough to simply gain attention, much less earn engagement.

For all the hype about viral content, it’s dismaying that so few marketers are honest on the topic: it’s typically hard to create, impossible to predict, and has very, very little connection to conversions for most businesses.

What I’ve found is that businesses, regardless of category, struggle to create worthwhile content, leading me to believe there is no boring industry content, only content that’s boring.

“Whenever we label content as ‘boring,’ we’re really admitting we have no idea how to approach marketing something,” says Builtvisible’s Richard Baxter.

Now that we know what the impediments are to producing content for any industry, including boring industries, it’s time to tackle the solution.

Develop a link earning mindset

There are lots of articles on the web regarding how to create content for boring industries, some of which have appeared on this very blog.

But, to my mind, the one issue they all suffer from is a focus on what content should be created, not (a) what content is worthy of promotion, (b) how to identify those who could help with promotion, and (c) how to earn links from boring-industry content. (Remember, much of the content that’s read is never shared; much of what’s shared is never read in its entirety; and some of the most linked-to content is neither heavily shared nor heavily read.)

This is why content creators in boring industries should scrap their notions of having the most-read and most-shared content, shifting their focus to creating content that can earn links in addition to generating traffic and social signals to the site.

After all, links and conversions are the main priorities for most businesses sharing content online, including so-called local businesses.

(Image courtesy of the 2014 Moz Local Search Ranking Factors Survey)

If you’re ready to create link-earning, traffic-generating content for your boring-industry business, follow the tips from the fictitious example of RZ’s Auto Repair, a Dallas, Texas, automobile shop.

With the Dallas-Fort Worth market being large and competitive, RZ’s has narrowed their specialty to storm repair, mainly hail damage, which is huge in the area. Even with the narrowed focus, however, they still have stiff competition from the major players in the vertical, including MAACO.

What the brand does have in its favor, however, is a solid website and a strong freelance copywriter to help produce content.

Remember those three problems we mentioned above: lack of goals, lack of clarity, and lack of focus on amplification? We’ll now turn each around to drive our main objectives of traffic, links, and conversions.

Setting the right goals

For RZ, this is easy: He needs business (i.e., qualified leads and conversions), but he knows he must be patient, since using paid media is not in the cards.

Therefore, he sits down with his partner, and they come up with what seem like the five most workable, important goals:

  1. Increased traffic on the website – He’s noticed that when traffic increases, so does his business.
  2. More phone calls – If they get a customer on the phone, the chances of closing the sale are around 75%.
  3. One blog per week on the site – The more often he blogs, the more web traffic, visits and phone calls increase.
  4. Links from some of the businesses in the area – He’s no dummy. He knows the importance of links, which are that much better when they come from a large company that could send him business.
  5. Develop relationships with small and midsize non-competing businesses in the area for cross promotions, events and the like.

Know the audience

(image: marketing group discussing personas)

Too many businesses create cute blogs that might generate traffic but do nothing for sales. RZ isn’t falling for this trap. He’s all about identifying the audience who’s likely to do business with him.

Luckily, his secretary is a meticulous record keeper, allowing him to build a reasonable profile of his target persona based on past clients.

  • 21-35 years old
  • Drives a truck that’s less than four years old
  • Has an income of $45,000-$59,000
  • Employed by a corporation with more than 500 employees
  • Active on social media, especially Facebook and Twitter
  • Consumes most of their information online
  • Typically referred by a friend or a co-worker

This information will prove invaluable as he goes about creating content. Most important, these nuggets create a clearer picture of how he should go about looking for people and/or businesses to amplify his content.

PR and outreach: Your amplification engines

Armed with his goals and the knowledge of his audience, RZ can now focus on outreach for amplification, thinking along the lines of…

  • Who/what influences his core audience?
  • What could he offer them by way of content to earn their help?
  • What content would they find valuable enough to share and link to?
  • What challenges do they face that he could help them with?
  • How could his brand set itself apart from any other business looking for help from these potential outreach partners?

Putting it all together

Being the savvy businessperson he is, RZ pulls his small staff together and they put their thinking caps on.

Late spring through early fall is prime hail storm season in Dallas. The season accounts for 80% of his yearly business. (The other 20% is fender benders.) Also, they realize, many of the storms happen in the late afternoon or early evening, when people are on their way home from work and stuck in traffic, or when they duck into the grocery store or hit the gym after work.

What’s more, says one of the staffers, often a huge group of clients will come in at once, having been parked in the same lot when a storm hit.

Eureka!

(image: lightbulb)

That’s when RZ bolts out of his chair with the idea that could put his business on the map: Let’s create content for businesses getting a high volume of after-work traffic—sit-down restaurants, gyms, grocery stores, etc.

The businesses would be offering something of value to their customers, who’ll learn about precautions to take in the event of a hail storm, and RZ would have willing amplifiers for his content.

Content is only as boring as your outlook

First (and skipping this step is a fatal mistake too many content creators make), RZ visits the handful of local businesses he’d like to partner with. The key is that he smartly makes them aware he’s done his homework and is eager to help their patrons while introducing them to his service.

This is an integral part of outreach: there must be a clear benefit to the would-be benefactor.

After RZ learns that several of the businesses are amenable to sharing his business’s helpful information, he takes the next step and asks what form the content should take. For now, all he can get them to promote is a glossy one-sheeter, “How To Protect Your Vehicle Against Extensive Hail Damage,” which the biggest gym in the area will promote via a small display at the check-in desk in return for a 10% coupon for its customers.

Three of the five others he talked to also agreed to promote the one-sheeter, though each said they’d be willing to promote other content investments provided they added value for their customers.

The untold truth about creating content for boring industries

When business owners reach out to me about putting together a content strategy for their boring brand, I make two things clear from the start:

  1. There are no boring brands. Those two words are a cop-out. No matter what industry you serve, there are hordes of people who use its products or services and are quite smitten with them.
  2. What they see as boring, I see as an opportunity.

In almost every case, they want to discuss some big content piece or another that’s sure to draw eyes and engagement, and that might even lead to a few links. Sure, I say, if you have tons of money to spend.

(Amazing piece of interactive content created by BuiltVisible)

Assuming you don’t have money to burn, and you want a plan you can replicate easily over time, try what I call the 1-2-1 approach for monthly blog content:

1: A strong piece of local content (goal: organic reach, topical relevance, local SEO)

2: Two pieces of evergreen content (goal: traffic)

1: A link-worthy asset (goal: links)

This plan is not hard at all to pull off, provided you have your ear to the street in the local market; you’ve done your keyword research, identifying several long-tail keywords you can rank for; and you’re willing to keep up the outreach.

What it does is allow the brand to create content with enough frequency to attain significance with the search engines, while also developing the habit of sharing, promoting, and amplifying content. For example, all of the posts would be shared on Twitter, Google Plus, and Facebook. (Don’t sleep on paid promotion via Facebook.)

Also, for the link-worthy asset, there would be outreach in advance of its creation, then amplification, and continued promotion from the company and those who’ve agreed to support the content.

Create a winning trifecta: Outreach, promotion and amplification

To RZ’s credit, he didn’t dawdle, getting right to work creating worthwhile content via the 1-2-1 method:

1: “The Worst Places in Dallas to be When a Hail Storm Hits”
2: “Can Hail Damage Cause Structural Damage to Your Car?” and “Should You Buy a Car Damaged by Hail?”
1: “Big as Hail!” contest

This contest idea came from the owner of a large local gym. RZ’s will give $500 to the local homeowner who sends in the largest piece of hail, as judged by Facebook fans, during the season. In return, the gym will promote the contest at its multiple locations, link to the content promotion page on RZ’s website, and share images of its fans holding large pieces of hail via social media.

What does the gym get in return? A catchy slogan (it plays on “big as hell,” popular gym parlance) to market around during the hail season.

It’s a win-win for everyone involved, especially RZ.

He gets a link but, most important, he realizes how to create content that nails each one of his goals. You can do the same. All it takes is a change in mindset: away from content creation, toward outreach, promotion, and amplification.

Summary

While the story of RZ’s is entirely fictional, it is based on techniques I’ve used with other small and midsize businesses. The keys, I’ve found, are to get away from thinking of your industry or brand as boring, even if it is, and to marshal the resources to find the audience who’ll benefit from your content and, most important, to identify the influencers who’ll promote and amplify it.

What are your thoughts?
