It all starts with the next generation

How many times have you been to Pride? I’d say, countless.

I’ve marched among people who have been doing it since before I was born: the ones I owe so much to, the regulars, and those who allowed themselves to be there for the first time.

Pride is a day of gratitude, emotions and of course celebration. And yes, people celebrate in different ways. Some like to dance in extravagant outfits on top of moving buses, others (like me) just like to march quietly holding a meaningful sign or t-shirt.

Pride is what you make of it.

My pride celebrates diversity, the right to be and the progress made by our community.

What really moved me this year was seeing a few of you bring your children to the parade. It’s to those people in particular that I want to say: thank you!

What a great thing to do: taking a stand for diversity, inspiring the next generation and setting the example. It’s a selfless act that I found simply heart-warming. Showing your children that we can walk side by side with people from different paths, and normalising and encouraging the right to stand up for equality, passes on a great lesson.

One day, friends, relatives or children may come into their lives who need them to be open-minded. I hope this experience will help them embrace diversity and feel free to be whoever they are, without fear or shame.

I’m going to leave you with the magic words of Hannah Gadsby, from her wonderful show ‘Nanette’, currently on Netflix UK.

[…] I believe we could paint a better world if we learned how to see it from all perspectives, as many perspectives as we possibly could.

Because diversity is strength.

Difference is a teacher.

Fear difference, you learn nothing. […]

Fancy being a part of our proud and empowered team? Check out our current career opportunities.

Reblogged 3 months ago from blog.dotmailer.com

Is Australia the land of opportunity for your retail brand?

Australia has a resident population of more than 24 million and, according to eMarketer, the country’s ecommerce sales are predicted to reach A$32.56 billion by 2017. The country’s remote location in the APAC region means that, unlike in European countries or the USA, there has traditionally been a lack of global brands sold locally.

Of course, we also know that many expatriates, particularly from inside the Commonwealth, have made Australia their home and are keen to buy products they know and love from their country of origin.

All of these factors present a huge and potentially lucrative opportunity for non-Australian brands wanting to open up their new and innovative products to a fresh market, or compete for market share.

But it’s not just non-Australian retailers who are at an advantage here: Australia was late to the ecommerce party because native, established brands were trading well without it. As a result, Australian retailers’ ecommerce technology stacks are much more recent and not burdened by legacy systems. This makes it much easier to extend, or get started with, best-of-breed technologies and cash in on a market that’s booming. To put some of this into perspective, Magento’s ecommerce platform currently takes 42% of Australia’s market share, and the world’s first adopter of Magento 2.0 was an Australian brand.

The GST loophole

At the moment, local retailers are campaigning against a rule that exempts purchases under A$1,000 from foreign websites from the 10% goods and services tax (GST). And in 2013, Australian consumers made $3.11 billion worth of purchases under A$1,000.[1]

While the current GST break appears to put non-Australian retailers at an advantage, Australian-based brands such as Harvey Norman are using it to their advantage by setting up ecommerce operations in Asia to enjoy the GST benefit.

Australian consumers have also countered the argument by saying that price isn’t always the motivator when it comes to making purchasing decisions.

It’s not a place where no man has gone before

Often, concerns around meeting local compliance and lack of overseas business knowledge prevent outsiders from taking the leap into cross-border trade. However, this ecommerce passport, created by Ecommerce Worldwide and NORA, is designed to support those considering selling in Australia. The guide provides a comprehensive look into everything from the country’s economy and trade status, to logistics and dealing with international payments.

Global expansion success stories are also invaluable sources of information. And it’s not just lower-end retailers that are succeeding: online luxury fashion retailer Net-a-Porter, for instance, names Australia as one of its biggest markets.

How tech-savvy are the Aussies?

One of the concerns you might have as a new entrant into the market is how you’ll reach and sell to your new audience, particularly without having a physical presence. The good news is that more than 80% of the country is digitally enabled and 60% of mobile phone users own a smartphone – so online is deeply rooted in the majority of Australians’ lives.[2]

Marketing your brand

Heard the saying “Fire bullets, then fire cannonballs”? Either way, you’ll want to test the waters and gauge people’s reactions to your product or service.

It all starts with the website because, without it, you’re not discoverable or searchable, and you’ve nowhere to drive people to when running campaigns. SEO and SEM should definitely be a priority, and an online store that can handle multiple regions and storefronts, like Magento, will make your life easier. A mobile-first mentality and well thought-out UX will also place you in a good position.

Once your new web store is set up, you should be making every effort to collect visitors’ email addresses, perhaps via a popover. Why? Firstly, email is one of the top three priority areas for Australian retailers, because it’s a cost-effective, scalable marketing channel that enables true personalization.

Secondly, email marketing automation empowers you to deliver the customer experience today’s consumer expects, as well as enabling you to communicate with them throughout the lifecycle. Check out our ‘Do customer experience masters really exist?’ whitepaper for some real-life success stories.

Like the Magento platform, dotmailer is set up to handle multiple languages, regions and accounts, and is designed to grow with you.

In summary, there’s great scope for ecommerce success in Australia, whether you’re a native bricks-and-mortar retailer, a start-up or a non-Australian merchant. The barriers to cross-border trade are falling and Australia is one of APAC’s most developed regions in terms of purchasing power and tech savviness.

We recently worked with ecommerce expert Chloe Thomas to produce a whitepaper on cross-border trade, which goes into much more detail on how to market and sell successfully in new territories. You can download a free copy here.

[1] Australian Passport 2015: Cross-Border Trading Report

[2] Australian Passport 2015: Cross-Border Trading Report

Reblogged 2 years ago from blog.dotmailer.com

Stop Ghost Spam in Google Analytics with One Filter

Posted by CarloSeo

The spam in Google Analytics (GA) is becoming a serious issue. Due to a deluge of referral spam from social buttons, adult sites, and many, many other sources, people are starting to become overwhelmed by all the filters they are setting up to manage the useless data they are receiving.

The good news is, there is no need to panic. In this post, I’m going to focus on the most common mistakes people make when fighting spam in GA, and explain an efficient way to prevent it.

But first, let’s make sure we understand how spam works. A couple of months ago, Jared Gardner wrote an excellent article explaining what referral spam is, including its intended purpose. He also pointed out some great examples of referral spam.

Types of spam

Spam in Google Analytics can be categorized into two types: ghosts and crawlers.

Ghosts

The vast majority of spam is this type. They are called ghosts because they never access your site. It is important to keep this in mind, as it’s key to creating a more efficient solution for managing spam.

As unusual as it sounds, this type of spam doesn’t have any interaction with your site at all. You may wonder how that is possible since one of the main purposes of GA is to track visits to our sites.

They do it by using the Measurement Protocol, which allows people to send data directly to Google Analytics’ servers. Using this method, and probably randomly generated tracking codes (UA-XXXXX-1) as well, the spammers leave a “visit” with fake data, without even knowing who they are hitting.
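To make the mechanism concrete, here is a minimal Python sketch of the kind of hit a ghost spammer could send via the (Universal Analytics) Measurement Protocol. The tracking ID, hostname, page and referrer below are invented placeholders; real spammers simply guess tracking IDs at random.

import requests  # third-party HTTP client

# A fake "pageview" sent straight to Google Analytics: no visit to any website involved
payload = {
    "v": "1",                               # Measurement Protocol version
    "tid": "UA-XXXXX-1",                    # guessed tracking ID (placeholder)
    "cid": "555",                           # arbitrary client ID
    "t": "pageview",                        # hit type
    "dh": "fake-host.example",              # spoofed (or simply missing) hostname
    "dp": "/free-social-buttons",           # fake page path
    "dr": "http://spam-referrer.example/",  # the referral the spammer wants you to see
}
requests.post("https://www.google-analytics.com/collect", data=payload)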

Crawlers

This type of spam, the opposite to ghost spam, does access your site. As the name implies, these spam bots crawl your pages, ignoring rules like those found in robots.txt that are supposed to stop them from reading your site. When they exit your site, they leave a record on your reports that appears similar to a legitimate visit.

Crawlers are harder to identify because they know their targets and use real data. But it is also true that new ones seldom appear. So if you detect a referral in your analytics that looks suspicious, researching it on Google or checking it against this list might help you answer the question of whether or not it is spammy.

Most common mistakes made when dealing with spam in GA

I’ve been following this issue closely for the last few months. According to the comments people have made on my articles and conversations I’ve found in discussion forums, there are primarily three mistakes people make when dealing with spam in Google Analytics.

Mistake #1. Blocking ghost spam from the .htaccess file

One of the biggest mistakes people make is trying to block Ghost Spam from the .htaccess file.

For those who are not familiar with this file, one of its main functions is to allow/block access to your site. Now we know that ghosts never reach your site, so adding them here won’t have any effect and will only add useless lines to your .htaccess file.

Ghost spam usually shows up for a few days and then disappears. As a result, sometimes people think that they successfully blocked it from here when really it’s just a coincidence of timing.

Then when the spammers later return, they get worried because the solution is not working anymore, and they think the spammer somehow bypassed the barriers they set up.

The truth is, the .htaccess file can only effectively block crawlers such as buttons-for-website.com and a few others since these access your site. Most of the spam can’t be blocked using this method, so there is no other option than using filters to exclude them.

Mistake #2. Using the referral exclusion list to stop spam

Another error is trying to use the referral exclusion list to stop the spam. The name may confuse you, but this list is not intended to exclude referrals in the way we want to for the spam. It has other purposes.

For example, when a customer buys something, they are sometimes redirected to a third-party page for payment. After making the payment, they’re redirected back to your website, and GA records that as a new referral. It is appropriate to use the referral exclusion list to prevent this from happening.

If you try to use the referral exclusion list to manage spam, however, the referral part will be stripped, since there is no preexisting record. As a result, a direct visit will be recorded, and you will have a bigger problem than the one you started with: you will still have spam, and direct visits are harder to track.

Mistake #3. Worrying that bounce rate changes will affect rankings

When people see that the bounce rate changes drastically because of the spam, they start worrying about the impact that it will have on their rankings in the SERPs.

[Image: bounce rate report distorted by spam]

This is another mistake commonly made. With or without spam, Google doesn’t use Google Analytics metrics as a ranking factor. Here is an explanation of this from Matt Cutts, the former head of Google’s web spam team.

And if you think about it, Cutts’ explanation makes sense: although many people have GA installed, not everyone uses it.

Assuming your site has been hacked

Another common concern when people see strange landing pages coming from spam on their reports is that they have been hacked.

[Image: strange spam landing pages in the reports]

The page that the spam shows on the reports doesn’t exist, and if you try to open it, you will get a 404 page. Your site hasn’t been compromised.

But you do have to make sure the page doesn’t exist, because there are cases (unrelated to spam) where sites suffer a security breach and get injected with pages full of bad keywords to defame the website.

What should you worry about?

Now that we’ve discarded security issues and their effects on rankings, the only thing left to worry about is your data. The fake trail that the spam leaves behind pollutes your reports.

It might have greater or lesser impact depending on your site traffic, but everyone is susceptible to the spam.

Small and midsize sites are the most easily impacted – not only because a big part of their traffic can be spam, but also because usually these sites are self-managed and sometimes don’t have the support of an analyst or a webmaster.

Big sites with a lot of traffic can also be impacted by spam, and although the impact can be insignificant, invalid traffic means inaccurate reports no matter the size of the website. As an analyst, you should be able to explain what’s going on even in the most granular reports.

You only need one filter to deal with ghost spam

Usually it is recommended to add the referral to an exclusion filter after it is spotted. Although this is useful for a quick action against the spam, it has three big disadvantages.

  • Making filters every week for every new spam detected is tedious and time-consuming, especially if you manage many sites. Plus, by the time you apply the filter, and it starts working, you already have some affected data.
  • Some of the spammers use direct visits along with the referrals.
  • These direct hits won’t be stopped by the filter, so even if you are excluding the referral you will still be receiving invalid traffic, which explains why some people have seen an unusual spike in direct traffic.

Luckily, there is a good way to prevent all these problems. Most of the (ghost) spam works by hitting random GA tracking IDs, meaning the offender doesn’t really know who the target is; for that reason, either the hostname is not set or a fake one is used. (See the report below.)

[Image: hostname report showing the fake or missing hostnames used by ghost spam]

You can see that they use some weird names or don’t even bother to set one. Although there are some known names in the list, these can be easily added by the spammer.

On the other hand, valid traffic will always use a real hostname. In most cases this will be your domain, but it can also come from paid services, translation services, or any other place where you’ve inserted your GA tracking code.

[Image: valid hostnames recorded from legitimate traffic]

Based on this, we can make a filter that will include only hits that use real hostnames. This will automatically exclude all hits from ghost spam, whether it shows up as a referral, keyword, or pageview; or even as a direct visit.

To create this filter, you will need to find the report of hostnames. Here’s how:

  1. Go to the Reporting tab in GA
  2. Click on Audience in the lefthand panel
  3. Expand Technology and select Network
  4. At the top of the report, click on Hostname

[Image: hostname report listing valid and spam hostnames]

You will see a list of all hostnames, including the ones that the spam uses. Make a list of all the valid hostnames you find, as follows:

  • yourmaindomain.com
  • blog.yourmaindomain.com
  • es.yourmaindomain.com
  • payingservice.com
  • translatetool.com
  • anotheruseddomain.com

For small to medium sites, this list of hostnames will likely consist of the main domain and a couple of subdomains. After you are sure you got all of them, create a regular expression similar to this one:

yourmaindomain\.com|anotheruseddomain\.com|payingservice\.com|translatetool\.com

You don’t need to put all of your subdomains in the regular expression. The main domain will match all of them. If you don’t have a view set up without filters, create one now.
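Before you rely on the expression, it can be worth sanity-checking it offline. Here is a small Python sketch using the example expression above (GA filter patterns match if the expression is found anywhere in the field, which is what re.search does; the sample hostnames are illustrative):

import re

# The same include expression you would paste into the filter (example domains)
valid_hosts = re.compile(
    r"yourmaindomain\.com|anotheruseddomain\.com|payingservice\.com|translatetool\.com"
)

samples = [
    "yourmaindomain.com",        # main domain: keep
    "blog.yourmaindomain.com",   # subdomain, still matches the main domain: keep
    "spam-hostname.example",     # fake hostname typical of ghost spam: filter out
    "(not set)",                 # hostname missing entirely: filter out
]

for host in samples:
    verdict = "keep" if valid_hosts.search(host) else "filter out"
    print(f"{host:30} -> {verdict}")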

Then create a Custom Filter.

Make sure you select INCLUDE, then select “Hostname” on the filter field, and copy your expression into the Filter Pattern box.

[Image: custom include filter on the Hostname field with the filter pattern]

You might want to verify the filter before saving, to check that everything is okay. Once you’re ready, save it and apply it to all the views you want (except the view without filters).

This single filter will get rid of future occurrences of ghost spam that use invalid hostnames, and it doesn’t require much maintenance. But it’s important that every time you add your tracking code to any service, you add it to the end of the filter.

Now you should only need to take care of the crawler spam. Since crawlers access your site, you can block them by adding these lines to the .htaccess file:

## STOP REFERRER SPAM
# Requires Apache's mod_rewrite; make sure rewriting is enabled
RewriteEngine On
RewriteCond %{HTTP_REFERER} semalt\.com [NC,OR]
RewriteCond %{HTTP_REFERER} buttons-for-website\.com [NC]
RewriteRule .* - [F]

It is important to note that this file is very sensitive, and misplacing a single character in it can bring down your entire site. Therefore, make sure you create a backup copy of your .htaccess file prior to editing it.

If you don’t feel comfortable messing around with your .htaccess file, you can alternatively build an expression covering all the crawlers and add it to an exclude filter on Campaign Source.
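For illustration, an exclude expression built from just the two crawler domains named above would look like this (extend it each time you identify a new crawler):

semalt\.com|buttons-for-website\.com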

Implement these combined solutions, and you will worry much less about spam contaminating your analytics data. This will have the added benefit of freeing up more time for you to actually analyze your valid data.

After stopping the spam, you can also get clean reports from your historical data by using the same expressions in an Advanced Segment to exclude all the spam.

Bonus resources to help you manage spam

If you still need more information to help you understand and deal with the spam on your GA reports, you can read my main article on the subject here: http://www.ohow.co/what-is-referrer-spam-how-stop-it-guide/.

In closing, I am eager to hear your ideas on this serious issue. Please share them in the comments below.

(Editor’s Note: All images featured in this post were created by the author.)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 3 years ago from tracking.feedpress.it

Misuses of 4 Google Analytics Metrics Debunked

Posted by Tom.Capper

In this post I’ll pull apart four of the most commonly used metrics in Google Analytics, explaining how they are collected and why they are so easily misinterpreted.

Average Time on Page

Average time on page should be a really useful metric, particularly if you’re interested in engagement with content that’s all on a single page. Unfortunately, this is actually its worst use case. To understand why, you need to understand how time on page is calculated in Google Analytics:

Time on Page: Total across all pageviews of time from pageview to last engagement hit on that page (where an engagement hit is any of: next pageview, interactive event, e-commerce transaction, e-commerce item hit, or social plugin). (Source)

If there is no subsequent engagement hit, or if there is a gap between the last engagement hit on a site and leaving the site, the assumption is that no further time was spent on the site. Below are some scenarios with an intuitive time on page of 20 seconds, and their Google Analytics time on page:

Scenario 1: 0s: Pageview; 10s: Social plugin; 20s: Click through to next page
Intuitive time on page: 20s | GA time on page: 20s

Scenario 2: 0s: Pageview; 10s: Social plugin; 20s: Leave site
Intuitive time on page: 20s | GA time on page: 10s

Scenario 3: 0s: Pageview; 20s: Leave site
Intuitive time on page: 20s | GA time on page: 0s

Google doesn’t want exits to influence the average time on page, because of scenarios like the third example above, where they have a time on page of 0 seconds (source). To avoid this, they use the following formula (remember that Time on Page is a total):

Average Time on Page: (Time on Page) / (Pageviews – Exits)

However, as the second example above shows, this assumption doesn’t always hold. The second example feeds into the top half of the average time on page fraction, but not the bottom half:

Average Time on Page across the three scenarios: (20s + 10s + 0s) / (3 – 2) = 30s
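To make the arithmetic explicit, here is a small Python sketch that reproduces these figures from the three scenarios above (purely illustrative, not GA’s actual implementation):

# (GA-recorded time on page in seconds, did the session exit on this page?)
scenarios = [
    (20, False),  # scenario 1: clicked through to the next page
    (10, True),   # scenario 2: last engagement hit at 10s, then left the site
    (0, True),    # scenario 3: no engagement hit before leaving
]

time_on_page = sum(seconds for seconds, _ in scenarios)  # 30
pageviews = len(scenarios)                               # 3
exits = sum(1 for _, exited in scenarios if exited)      # 2

average_time_on_page = time_on_page / (pageviews - exits)
print(average_time_on_page)  # 30.0 seconds, longer than any single visit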

There are two issues here:

  1. Overestimation
    Excluding exits from the second half of the average time on page equation doesn’t have the desired effect when their time on page wasn’t 0 seconds—note that 30s is longer than any of the individual visits. This is why average time on page can often be longer than average visit duration. Nonetheless, 30 seconds doesn’t seem too far out in the above scenario (the intuitive average is 20s), but in the real world many pages have much higher exit rates than the 67% in this example, and/or much less engagement with events on page.
  2. Ignored visits
    For visitors who exit without an engagement hit, it makes no difference whether they stayed for 2 seconds, 10 minutes or anything in between: their time doesn’t influence average time on page in the slightest. On many sites, a 10-minute view of a single page without interaction (e.g. a blog post) would be considered a success, but it wouldn’t influence this metric.

Solution: Unfortunately, there isn’t an easy solution to this issue. If you want to use average time on page, you just need to keep in mind how it’s calculated. You could also consider setting up more engagement events on page (like a scroll event without the “nonInteraction” parameter)—this solves issue #2 above, but potentially worsens issue #1.

Site Speed

If you’ve used the Site Speed reports in Google Analytics in the past, you’ve probably noticed that the numbers can sometimes be pretty difficult to believe. This is because the way that Site Speed is tracked is extremely vulnerable to outliers—it starts with a 1% sample of your users and then takes a simple average for each metric. This means that a few extreme values (for example, the occasional user with a malware-infested computer or a questionable wifi connection) can create a very large swing in your data.

The use of an average as a metric is not in itself bad, but in an area so prone to outliers and working with such a small sample, it can lead to questionable results.

Fortunately, you can increase the sampling rate right up to 100% (or the cap of 10,000 hits per day). Depending on the size of your site, this may still only be useful for top-level data. For example, if your site gets 1,000,000 hits per day and you’re interested in the performance of a new page that’s receiving 100 hits per day, Google Analytics will throttle your sampling back to the 10,000 hits per day cap—1%. As such, you’ll only be looking at a sample of 1 hit per day for that page.
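As a quick sanity check on those numbers, here is a back-of-the-envelope Python sketch using the figures from the example above:

site_hits_per_day = 1_000_000
sample_cap = 10_000                                        # GA's daily cap on site speed hits
effective_sampling_rate = sample_cap / site_hits_per_day   # 0.01, i.e. 1%

page_hits_per_day = 100
sampled_timing_hits = page_hits_per_day * effective_sampling_rate
print(sampled_timing_hits)  # 1.0 sampled hit per day for that page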

Solution: Turn up the sampling rate. If you receive more than 10,000 hits per day, keep the sampling rate in mind when digging into less visited pages. You could also consider external tools and testing, such as Pingdom or WebPagetest.

Conversion Rate (by channel)

Obviously, conversion rate is not in itself a bad metric, but it can be rather misleading in certain reports if you don’t realise that, by default, conversions are attributed using a last non-direct click attribution model.

From Google Analytics Help:

“…if a person clicks over your site from google.com, then returns as “direct” traffic to convert, Google Analytics will report 1 conversion for “google.com / organic” in All Traffic.”

This means that when you’re looking at conversion numbers in your acquisition reports, it’s quite possible that every single number is different to what you’d expect under last click—every channel other than direct has a total that includes some conversions that occurred during direct sessions, and direct itself has conversion numbers that don’t include some conversions that occurred during direct sessions.
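Here is a toy Python sketch of the difference (not GA’s actual attribution code): a user arrives once via organic search and later converts in a direct session.

# Channels of a single user's sessions, oldest first; the conversion happens in the last one
sessions = ["google / organic", "(direct)", "(direct)"]

def last_click(sessions):
    return sessions[-1]

def last_non_direct_click(sessions):
    # Walk backwards past any direct sessions, as GA's default acquisition reports do
    for channel in reversed(sessions):
        if channel != "(direct)":
            return channel
    return "(direct)"  # the user only ever arrived directly

print(last_click(sessions))             # (direct)
print(last_non_direct_click(sessions))  # google / organic, which is what the acquisition report credits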

Solution: This is just something to be aware of. If you do want to know your last-click numbers, there’s always the Multi-Channel Funnels and Attribution reports to help you out.

Exit Rate

Unlike some of the other metrics I’ve discussed here, the calculation behind exit rate is very intuitive—”for all pageviews to the page, Exit Rate is the percentage that were the last in the session.” The problem with exit rate is that it’s so often used as a negative metric: “Which pages had the highest exit rate? They’re the problem with our site!” Sometimes this might be true: Perhaps, for example, if those pages are in the middle of a checkout funnel.

Often, however, a user will exit a site when they’ve found what they want. This doesn’t just mean that a high exit rate is ok on informational pages like blog posts or about pages—it could also be true of product pages and other pages with a highly conversion-focused intent. Even on ecommerce sites, not every visitor has the intention of converting. They might be researching towards a later online purchase, or even planning to visit your physical store. This is particularly true if your site ranks well for long tail queries or is referenced elsewhere. In this case, an exit could be a sign that they found the information they wanted and are ready to purchase once they have the money, the need, the right device at hand or next time they’re passing by your shop.

Solution: When judging a page by its exit rate, think about the various possible user intents. It could be useful to take a segment of visitors who exited on a certain page (in the Advanced tab of the new segment menu), and investigate their journey in User Flow reports, or their landing page and acquisition data.

Discussion

If you know of any other similarly misunderstood metrics, you have any questions or you have something to add to my analysis, tweet me at @THCapper or leave a comment below.

Reblogged 3 years ago from tracking.feedpress.it

Why Good Unique Content Needs to Die – Whiteboard Friday

Posted by randfish

We all know by now that not just any old content is going to help us rank in competitive SERPs. We often hear people talking about how it takes “good, unique content.” That’s the wrong bar. In today’s Whiteboard Friday, Rand talks about where we should be aiming, and how to get there.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about something that I really have a problem with in the SEO world, and that is the phrase “good, unique content.” I’ll tell you why this troubles me so much. It’s because I get so many emails, I hear so many times at conferences and events with people I meet, with folks I talk to in the industry saying, “Hey, we created some good, unique content, but we don’t seem to be performing well in search.” My answer back to that is always that is not the bar for entry into SEO. That is not the bar for ranking.

The content quality scale

So I made this content quality scale to help illustrate what I’m talking about here. You can see that it starts all the way up at 10x, and down here I’ve got Panda Invasion. So quality, like Google Panda is coming for your site, it’s going to knock you out of the rankings. It’s going to penalize you, like your content is thin and largely useless.

Then you go up a little bit, and it’s like, well four out of five searchers find it pretty bad. They clicked the Back button. Maybe one out of five is thinking, “Well, this is all right. This solves my most basic problems.”

Then you get one level higher than that, and you have good, unique content, which I think many folks think of as where they need to get to. It’s essentially, hey, it’s useful enough. It answers the searcher’s query. It’s unique from any other content on the Web. If you read it, you wouldn’t vomit. It’s good enough, right? Good, unique content.

Problem is almost everyone can get here. They really can. It’s not a high bar, a high barrier to entry to say you need good, unique content. In fact, it can scale. So what I see lots of folks doing is they look at a search result or a set of search results in their industry. Say you’re in travel and vacations, and you look at these different countries and you’re going to look at the hotels or recommendations in those countries and then see all the articles there. You go, “Yeah, you know what, I think we could do something as good as what’s up there or almost.” Well, okay, that puts you in the range. That’s good, unique content.

But in my opinion, the minimum bar today for modern SEO is a step higher, and that is as good as the best in the search results on the search results page. If you can’t consistently say, “We’re the best result that a searcher could find in the search results,” well then, guess what? You’re not going to have an opportunity to rank. It’s much, much harder to get into those top 10 positions, page 1, page 2 positions than it was in the past because there are so many ranking signals that so many of these websites have already built up over the last 5, 10, 15 years that you need to go above and beyond.

Really, where I want folks to go and where I always expect content from Moz to go is here, and that is 10x, 10 times better than anything I can find in the search results today. If I don’t think I can do that, then I’m not going to try and rank for those keywords. I’m just not going to pursue it. I’m going to pursue content in areas where I believe I can create something 10 times better than the best result out there.

What changed?

Why is this? What changed? Well, a bunch of things actually.

  • User experience became a much bigger element in the ranking algorithms, and that’s direct influences, things that we’ve talked about here on Whiteboard Friday before like pogo-sticking, and lots of indirect ones like the links that you earn based on the user experience that you provide and Google rendering pages, Google caring about load speed and device rendering, mobile friendliness, all these kinds of things.
  • Earning links overtook link building. It used to be you put out a page and you built a bunch of links to it. Now that doesn’t so much work anymore because Google is very picky about the links that it’s going to consider. If you can’t earn links naturally, not only can you not get links fast enough and not get good ones, but you also are probably earning links that Google doesn’t even want to count or may even penalize you for. It’s nearly impossible to earn links with just good, unique content. If there’s something better out there on page one of the search results, why would they even bother to link to you? Someone’s going to do a search, and they’re going to find something else to link to, something better.
  • Third, the rise of content marketing over the last five, six years has meant that there’s just a lot more competition. This field is a lot more crowded than it used to be, with many people trying to get to a higher and higher quality bar.
  • Finally, as a result of many of these things, user expectations have gone crazy. Users expect pages to load insanely fast, even on mobile devices, even when their connection’s slow. They expect it to look great. They expect to be provided with an answer almost instantaneously. The quality of results that Google has delivered and the quality of experience that sites like Facebook, which everyone is familiar with, are delivering means that our brains have rewired themselves to expect very fast, very high quality results consistently.

How do we create “10x” content?

So, because of all these changes, we need a process. We need a process to choose, to figure out how we can get to 10x content, not good, unique content, 10x content. A process that I often like to use — this probably is not the only one, but you’re welcome to use it if you find it valuable — is to go, “All right, you know what? I’m going to perform some of these search queries.”

By the way, I would probably perform the search query in two places. One is in Google and their search results, and the other is actually in BuzzSumo, which I think is a great tool for this, where I can see the content that has been most shared. So if you haven’t already, check out BuzzSumo.com.

I might search for something like Costa Rica ecolodges, which I might be considering a Costa Rica vacation at some point in the future. I look at these top ranking results, probably the whole top 10 as well as the most shared content on social media.

Then I’m going to ask myself these questions:

  • What questions are being asked and answered by these search results?
  • What sort of user experience is provided? I look at this in terms of speed, in terms of mobile friendliness, in terms of rendering, in terms of layout and design quality, in terms of what’s required from the user to be able to get the information? Is it all right there, or do I need to click? Am I having trouble finding things?
  • What’s the detail and thoroughness of the information that’s actually provided? Is it lacking? Is it great?
  • What about use of visuals? Visual content can often take best in class all the way up to 10x if it’s done right. So I might check out the use of visuals.
  • The quality of the writing.
  • I’m going to look at information and data elements. Where are they pulling from? What are their sources? What’s the quality of that stuff? What types of information is there? What types of information is missing?

In fact, I like to ask, “What’s missing?” a lot.

From this, I can determine like, hey, here are the strengths and weaknesses of who’s getting all of the social shares and who’s ranking well, and here’s the delta between me and them today. This is the way that I can be 10 times better than the best results in there.

If you use this process or a process like this and you do this type of content auditing and you achieve this level of content quality, you have a real shot at rankings. One of the secret reasons for that is that the effort axis that I have here, like I go to Fiverr, I get Panda invasion. I make the intern write it. This is going to take a weekend to build versus there’s no way to scale this content.

This is a super power. When your competitors or other folks in the field look and say, “Hey, there’s no way that we can scale content quality like this. It’s just too much effort. We can’t keep producing it at this level,” well, now you have a competitive advantage. You have something that puts you in a category by yourself and that’s very hard for competitors to catch up to. It’s a huge advantage in search, in social, on the Web as a whole.

All right everyone, hope you’ve enjoyed this edition of Whiteboard Friday, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com

Reblogged 3 years ago from tracking.feedpress.it