14 SEO Predictions for 2019 & Beyond, as Told by Mozzers

Posted by TheMozTeam

With the new year in full swing and an already busy first quarter, our 2019 predictions for SEO in the new year are hopping onto the scene a little late — but fashionably so, we hope. From an explosion of SERP features to increased monetization to the key drivers of search this year, our SEO experts have consulted their crystal balls (read: access to mountains of data and in-depth analyses) and made their predictions. Read on for an exhaustive list of fourteen things to watch out for in search from our very own Dr. Pete, Britney Muller, Rob Bucci, Russ Jones, and Miriam Ellis!

1. Answers will drive search

People Also Ask boxes exploded in 2018, and featured snippets have expanded into both multifaceted and multi-snippet versions. Google wants to answer questions, it wants to answer them across as many devices as possible, and it will reward sites with succinct, well-structured answers. Focus on answers that naturally leave visitors wanting more and establish your brand and credibility. [Dr. Peter J. Meyers]

2. Voice search will continue to be utterly useless for optimization

Optimizing for voice search will still be no more than optimizing for featured snippets, and conversions from voice will remain a dark box. [Russ Jones]

3. Mobile is table stakes

This is barely a prediction. If your 2019 plan is to finally figure out mobile, you’re already too late. Almost all Google features are designed with mobile-first in mind, and the mobile-first index has expanded rapidly in the past few months. Get your mobile house (not to be confused with your mobile home) in order as soon as you can. [Dr. Peter J. Meyers]

4. Further SERP feature intrusions in organic search

Expect Google to find more and more ways to replace organic with solutions that keep users on Google’s property. This includes interactive SERP features that replace, slowly but surely, many website offerings in the same way that live scores, weather, and flights have. [Russ Jones]

5. Video will dominate niches

Featured Videos, Video Carousels, and Suggested Clips (where Google targets specific content in a video) are taking over the how-to spaces. As Google tests search appliances with screens, including Home Hub, expect video to dominate instructional and DIY niches. [Dr. Peter J. Meyers]

6. SERPs will become more interactive

We’ve seen the start of interactive SERPs with People Also Ask boxes. Depending on which question you expand, two to three new questions will appear below it that directly pertain to the question you expanded. This real-time engagement keeps people on the SERP longer and helps Google better understand what a user is seeking. [Britney Muller]

7. Local SEO: Google will continue getting up in your business — literally

Google will continue asking your customers more and more intimate questions about your business. Does this business have gender-neutral bathrooms? Is this business accessible? What is the atmosphere like? How clean is it? What kind of lighting does it have? And so on. If Google can acquire accurate, real-world information about your business (your percentage of repeat customers via geofencing, price via transaction history, etc.), it can rely less heavily on website signals and provide more accurate results to searchers. [Britney Muller]

8. Business proximity-to-searcher will remain a top local ranking factor

In Moz’s recent State of Local SEO report, the majority of respondents agreed that Google’s focus on the proximity of a searcher to local businesses frequently emphasizes distance over quality in the local SERPs. I predict that we’ll continue to see this heavily weighting the results in 2019. On the one hand, hyper-localized results can be positive, as they allow a diversity of businesses to shine for a given search. On the other hand, with the exception of urgent situations, most people would prefer to see the best options rather than just the closest ones. [Miriam Ellis]

9. Local SEO: Google is going to increase monetization

Look for Google to monetize more of the local and maps space in unique ways, both through AdWords and potentially through new lead-gen models. This space will become more and more competitive. [Russ Jones]

10. Monetization tests for voice

Google and Amazon have been moving toward voice-supported displays in hopes of better monetizing voice. It will be interesting to see their efforts to get displays into homes and how they integrate display advertising. Bold prediction: Amazon will provide sleep-mode display ads similar to how Kindle displays them today. [Britney Muller]

11. Marketers will place a greater focus on the SERPs

I expect we’ll see a greater focus on the analysis of SERPs as Google does more to give people answers without them having to leave the search results. We’re seeing more and more vertical search engines like Google Jobs, Google Flights, Google Hotels, and Google Shopping. We’re also seeing more in-depth content make it onto the SERP than ever in the form of featured snippets, People Also Ask boxes, and more. With these developments, marketers will increasingly want to measure and report on their overall brand visibility across all the elements of a SERP, not just their own website’s ranking. [Rob Bucci]

12. Targeting topics will be more productive than targeting queries

2019 is going to be another year in which the emphasis on individual search queries declines as people focus more on clusters of queries around topics. People Also Ask queries have made the importance of topics much more obvious to the SEO industry. With PAAs, Google is clearly illustrating that it thinks about searcher satisfaction across an entire topic, not just a specific search query. With this in mind, we can expect SEOs to increasingly want their search queries clustered into topics so they can measure visibility and the competitive landscape across those clusters. [Rob Bucci]

13. Linked unstructured citations will receive increasing focus

I recently conducted a small study in which there was a 75% correlation between organic and local pack rank. Linked unstructured citations (the mention of partial or complete business information + a link on any type of relevant website) are a means of improving organic rankings which underpin local rankings. They can also serve as a non-Google dependent means of driving traffic and leads. Anything you’re not having to pay Google for will become increasingly precious. Structured citations on key local business listing platforms will remain table stakes, but competitive local businesses will need to focus on unstructured data to move the needle. [Miriam Ellis]

14. Reviews will remain a competitive difference-maker

A Google rep recently stated that about one-third of local searches are made with the intent of reading reviews. This is huge. Local businesses that acquire and maintain a good and interactive reputation on the web will have a critical advantage over brands that ignore reviews as fundamental to customer service. Competitive local businesses will earn, monitor, respond to, and analyze the sentiment of their review corpus. [Miriam Ellis]

We’ve heard from Mozzers, and now we want to hear from you. What have you seen so far in 2019 that’s got your SEO Spidey senses tingling? What trends are you capitalizing on and planning for? Let us know in the comments below (and brag to friends and colleagues when your prediction comes true in the next 6–10 months). 😉

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and the nofollow tag (although the nofollow tag is a little bit less about controlling bots).

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks say, “hey, disallow this,” and then suddenly see those URLs popping up in the results and wonder what’s going on. Look—Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake. They think, “hey, there’s a lot of links pointing to this content, there’s a lot of people visiting and caring about this content; maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific (meaning the more you use wildcards or say “everything behind this entire big directory”), the worse they are about believing you.
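To make that concrete, here’s a minimal robots.txt sketch (the paths are hypothetical):

```
# Lives at yoursite.com/robots.txt
User-agent: *
# A specific URL — engines are usually good about respecting this:
Disallow: /blogtest.html
# A broad directory rule — more likely to be second-guessed:
Disallow: /big-directory/
```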

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
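For illustration, a nofollowed link looks like this (the URL here is a made-up placeholder):

```html
<!-- No editorial vouch; PageRank and link equity are not passed -->
<a href="https://example.com/some-page" rel="nofollow">anchor text</a>
```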

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say, “all user agents, you are not allowed to crawl blogtest.html”? Okay—that’s a good way to keep that page from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great. It does require crawl budget, though, since the page has to be crawled for the tag to be seen; what you conserve is space in the index, not crawling.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.
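As a sketch, here’s one way to serve a 410 using Apache’s mod_alias (the path is a made-up example; other servers have their own equivalents):

```
# .htaccess — returns "410 Gone" for a permanently removed page
Redirect gone /old-page.html
```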

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.
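A sketch of the large-quantity approach, releasing folders as they become ready (all paths here are hypothetical; note that Allow is a non-standard directive, though Google and Bing honor it):

```
User-agent: *
Disallow: /products/
# Release rewritten sections one folder at a time:
Allow: /products/rewritten-batch-1/
Sitemap: https://yoursite.com/sitemap-products.xml
```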

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical at all, so if someone linked to the blue version instead of the default version, I potentially wouldn’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
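For reference, the rel canonical on a color variant might look like this (the filenames are illustrative):

```html
<!-- In the <head> of the blue variant, pointing at the default page -->
<link rel="canonical" href="https://example.com/starwarsshirt.html">
```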

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. If you disallow crawling on those, Google can’t see the noindex; they don’t know that they can follow the links. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior; assume that the disallow in robots.txt will prevent them from crawling. So I would say the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say, “hey, our internal search engine is really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to turn them into category landing pages,” then you can use the disallow in robots.txt to prevent them from being crawled.
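Assuming your internal results live under a /search path (a hypothetical layout), the rule is a one-liner:

```
User-agent: *
# Blocks /search, /search?q=..., and everything beneath it
Disallow: /search
```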

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!

8 Ways Content Marketers Can Hack Facebook Multi-Product Ads

Posted by Alan_Coleman

The trick most content marketers are missing

Creating great content is the first half of success in content marketing. Getting quality content read by, and amplified to, a relevant audience is the oft overlooked second half of success. Facebook can be a content marketer’s best friend for this challenge. For reach, relevance and amplification potential, Facebook is unrivaled.

  1. Reach: 1 in 6 mobile minutes on planet earth is somebody reading something on Facebook.
  2. Relevance: Facebook is a lean mean interest and demo targeting machine. There is no online or offline media that owns as much juicy interest and demographic information on its audience and certainly no media has allowed advertisers to utilise this information as effectively as Facebook has.
  3. Amplification: Facebook is literally built to encourage sharing. Here are the first 10 words of their mission statement: “Facebook’s mission is to give people the power to share…” Enough said!

Because of these three digital marketing truths, if a content marketer gets their paid promotion* right on Facebook, the battle for eyeballs and amplification is already won.

For this reason it’s crucial that content marketers keep a close eye on Facebook advertising innovations and seek out ways to use them in new and creative ways.

In this post I will share with you eight ways we’ve hacked a new Facebook ad format to deliver content marketing success.

Multi-Product Ads (MPAs)

In 2014, Facebook unveiled multi-product ads (MPAs) for US advertisers; we got them in Europe earlier this year. They allow retailers to show multiple products in a carousel-type ad unit.

They look like this:

If the user clicks on the featured product, they are guided directly to the landing page for that specific product, from where they can make a purchase.

You could say MPAs are Facebook’s answer to Google Shopping.

Facebook’s mistake is a content marketer’s gain

I believe Facebook has misunderstood how people want to use their social network and the transaction-focused format is OK at best for selling products. People aren’t really on Facebook to hit the “buy now” button. I’m a daily Facebook user and I can’t recall a time this year where I have gone directly from Facebook to an e-commerce website and transacted. Can you remember a recent time when you did?

So, this isn’t an innovation that removes a layer of friction from something that we are all doing online already (as the most effective innovations do). Instead, it’s a bit of a “hit and hope” that, by providing this functionality, Facebook would encourage people to try to buy online in a way they never have before.

The Wolfgang crew felt the MPA format would be much more useful to marketers and users if they were leveraging Facebook for the behaviour we all demonstrate on the platform every day, guiding users to relevant content. We attempted to see if Facebook Ads Manager would accept MPAs promoting content rather than products. We plugged in the images, copy and landing pages, hit “place order”, and lo and behold the ads became active. We’re happy to say that the engagement rates, and more importantly the amplification rates, are fantastic!

Multi-Content Ads

We’ve re-invented the MPA format for multi-advertisers in multi-ways, eight ways to be exact! Here are eight MPA hacks that have worked well for us. All eight use the MPA format to promote content rather than products.

Hack #1: Multi-Package Ads

Our first variation wasn’t a million miles away from multi-product ads; we were promoting the various packages offered by a travel operator.

By looking at the number of likes, comments, and shares (in blue below the ads) you can see the ads were a hit with Facebook users and they earned lots of free engagement and amplification.

NB: If you have selected “clicks to website” as your advertising objective, all those likes, comments and shares are free!

Independent Travel Multi Product Ad

The ad sparked plenty of conversation amongst Facebook friends in the comments section.

Comments on a Facebook MPA

Hack #2: Multi-Offer Ads

Everybody knows the Internet loves a bargain. So we decided to try another variation moving away from specific packages, focusing instead on deals for a different travel operator.

Here’s how the ads looked:

These ads got valuable amplification beyond the share. In the comments section, you can see people tagging specific friends. This led to the MPAs receiving further amplification, and a very targeted and personalised form of amplification to boot.

Abbey Travel Facebook Ad Comments

Word of mouth referrals have been a trader’s best friend since the stone age. These “personalised” word of mouth referrals en masse are a powerful marketing proposition. It’s worth mentioning again that those engagements are free!

Hack #3: Multi-Locations Ads

Putting the Lo in SOLOMO.

This multi-product feed ad was hacked to promote numerous locations of a waterpark. “Where to go?” is among the first questions somebody asks when researching a holiday. In creating this top of funnel content, we can communicate with our target audience at the very beginning of their research process. A simple truth of digital marketing is: the more interactions you have with your target market on their journey to purchase, the more likely they are to seal the deal with you when it comes time to hit the “buy now” button. Starting your relationship early gives you an advantage over those competitors who are hanging around the bottom of the purchase funnel hoping to make a quick and easy conversion.

Abbey Travel SplashWorld Facebook MPA

What was surprising here was that, because we were reaching people at the very beginning of their research journey, we expected the booking enquiries to be some time away. What actually happened was that these ads sparked an enquiry frenzy, as Facebook users could see other people enquiring and the holidays selling out in real time.

Abbey Travel comments and replies

In fact nearly all of the 35 comments on this ad were booking enquiries. This means what we were measuring as an “engagement” was actually a cold hard “conversion”! You don’t need me to tell you a booking enquiry is far closer to the money than a Facebook like.

The three examples outlined so far are for travel companies. Travel is a great fit for Facebook as it sits naturally in the Facebook feed, my Facebook feed is full of envy-inducing friends’ holiday pictures right now. Another interesting reason why travel is a great fit for Facebook ads is because typically there are multiple parties to a travel purchase. What happened here is the comments section actually became a very visible and measurable forum for discussion between friends and family before becoming a stampede inducing medium of enquiry.

So, stepping outside of the travel industry, how do other industries fare with hacked MPAs?

Hack #3a: Multi-Location Ads (combined with location targeting)

Location, location, location. For a property listings website, we applied location targeting and repeated our Multi-Location Ad format to advertise properties for sale to people in and around that location.

Hack #4: Multi-Big Content Ad

“The future of big content is multi platform”

– Cyrus Shepard

The same property website had produced a report and an accompanying infographic to provide their audience with unique and up-to-the-minute market information via their blog. We used the MPA format to promote the report, the infographic and the search rentals page of the website. This brought their big content piece to a larger audience via a new platform.

Rental Report Multi Product Ad

Hack #5: Multi-Episode Ad

This MPA hack was for an online TV player. As you can see we advertised the most recent episodes of a TV show set in a fictional Dublin police station, Red Rock.

Engagement was high, opinion was divided.

TV3s Red Rock viewer feedback

LOL.

Hack #6: Multi-People Ads

In the cosmetic surgery world, past patients’ stories are valuable marketing material. Particularly when the past patients are celebrities. We recycled some previously published stories from celebrity patients using multi-people ads and targeted them to a very specific audience.

Avoca Clinic Multi People Ads

Hack #7: Multi-UGC Ads

Have you witnessed the power of user-generated content (UGC) in your marketing yet? We’ve found interaction rates with authentic UGC images can be up to 10 times those of the usual stylised images. In order to encourage further UGC, we posted a number of customers’ images in our Multi-UGC Ads.

The CTR on the above ads was 6% (2% is the average CTR for Facebook News feed ads according to our study). Strong CTRs earn you more traffic for your budget. Facebook’s relevancy score lowers your CPC as your CTR increases.

When it comes to conversion, UGC is a power player: we’ve learned that “customers attracting new customers” is a powerful acquisition tool.

Hack #8: Target past customers for amplification

“Who will support and amplify this content and why?”

– Rand Fishkin

Your happy customers, Rand: that’s the who and the why! Check out these Multi-Package Ads targeted to past customers via custom audiences. The Camino walkers have already told all their friends about their great trip; now allow them to share their great experiences on Facebook and connect the tour operator with their Facebook friends via a valuable word of mouth referral. Just look at the ratios of shares:likes and shares:comments. Astonishingly sharable ads!

Camino Ways Multi Product Ads

Targeting past converters in an intelligent manner is a super smart way to find an audience ready to share your content.

How will hacking Multi-Product Ads work for you?

People don’t share ads, but they do share great content. So why not hack MPAs to promote your content and reap the rewards of the world’s greatest content sharing machine: Facebook.

MPAs allow you to tell a richer story by allowing you to promote multiple pieces of content simultaneously. So consider which pieces of content you have that will work well as “content bundles” and who the relevant audience for each “content bundle” is.

As Hack #8 above illustrates, the big wins come when you match a smart use of the format with the clever and relevant targeting Facebook allows. We’re massive fans of custom audiences so if you aren’t sure where to start, I’d suggest starting there.

So ponder your upcoming content pieces, consider your older content you’d like to breathe some new life into and perhaps you could become a Facebook Ads Hacker.

I’d love to hear about your ideas for turning Multi-Product Ads into Multi-Content Ads in the comments section below.

We could even take the conversation offline at Mozcon!

Happy hacking.


*Yes, I did say paid promotion: it’s no secret that Facebook’s organic reach continues to dwindle. The cold commercial reality is you need to pay to play on FB. The good news is that if you select ‘website clicks’ as your objective, you only pay for website traffic and engagement, while amplification by likes, comments, and shares is free! Those website clicks you pay for are typically substantially cheaper than AdWords, Taboola, Outbrain, Twitter, or LinkedIn. How does it compare to display? It doesn’t. Paying for clicks is always preferable to paying for impressions. If you are spending money on display advertising, I’d urge you to fling a few spondoolas towards Facebook ads and compare results. You will be pleasantly surprised.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Simple Steps for Conducting Creative Content Research

Posted by Hannah_Smith

Most frequently, the content we create at Distilled is designed to attract press coverage, social shares, and exposure (and links) on sites our clients’ target audience reads. That’s a tall order.

Over the years we’ve had our hits and misses, and through this we’ve recognised the value of learning about what makes a piece of content successful. Coming up with a great idea is difficult, and it can be tough to figure out where to begin. Today, rather than leaping headlong into brainstorming sessions, we start with creative content research.

What is creative content research?

Creative content research enables you to answer the questions:

“What are websites publishing, and what are people sharing?”

From this, you’ll then have a clearer view on what might be successful for your client.

A few years ago this required quite a lot of work to figure out. Today, happily, it’s much quicker and easier. In this post I’ll share the process and tools we use.

Whoa there… Why do I need to do this?

I think that the value in this sort of activity lies in a couple of directions:

a) You can learn a lot by deconstructing the success of others…

I’ve been taking stuff apart to try to figure out how it works for about as long as I can remember, so applying this process to content research felt pretty natural to me. Perhaps more importantly though, I think that deconstructing content is actually easier when it isn’t your own. You’re not involved, invested, or in love with the piece so viewing it objectively and learning from it is much easier.

b) Your research will give you a clear overview of the competitive landscape…

As soon as a company elects to start creating content, they gain a whole raft of new competitors. In addition to their commercial competitors (i.e. those who offer similar products or services), the company also gains content competitors. For example, if you’re a sports betting company and plan to create content related to the sports events that you’re offering betting markets on, then you’re competing not just with other betting companies, but with every other publisher who creates content about these events. That means major news outlets, sports news sites, fan sites, etc. To make matters even more complicated, it’s likely that you’ll actually be seeking coverage from those same content competitors. As such, you need to understand what’s already being created in the space before creating content of your own.

c) You’re giving yourself the data to create a more compelling pitch…

At some point you’re going to need to pitch your ideas to your client (or your boss if you’re working in-house). At Distilled, we’ve found that getting ideas signed off can be really tough. Ultimately, a great idea is worthless if we can’t persuade our client to give us the green light. This research can be used to make a more compelling case to your client and get those ideas signed off. (Incidentally, if getting ideas signed off is proving to be an issue you might find this framework for pitching creative ideas useful).

Where to start

Good ideas start with a good brief; however, it can be tough to pin clients down to get answers to a long list of questions.

As a minimum you’ll need to know the following:

  • Who are they looking to target?
    • Age, sex, demographic
    • What’s their core focus? What do they care about? What problems are they looking to solve?
    • Who influences them?
    • What else are they interested in?
    • Where do they shop and which brands do they buy?
    • What do they read?
    • What do they watch on TV?
    • Where do they spend their time online?
  • Where do they want to get coverage?
    • We typically ask our clients to give us a wishlist of 10 or so sites they’d love to get coverage on
  • Which topics are they comfortable covering?
    • This question is often the toughest, particularly if a client hasn’t created content specifically for links and shares before. Often clients are uncomfortable about drifting too far away from their core business—for example, if they sell insurance, they’ll typically say that they really want to create a piece of content about insurance. Whilst this is understandable from the client’s perspective, it can severely limit their chances of success. It’s definitely worth offering up a gentle challenge at this stage—I’ll often cite Red Bull, who are a great example of a company that creates content based on what their consumers love, not what they sell (i.e. Red Bull sell soft drinks, but create content about extreme sports because that’s the sort of content their audience loves to consume). It’s worth planting this idea early, but don’t get dragged into a fierce debate at this stage—you’ll be able to make a far more compelling argument once you’ve done your research and are pitching concrete ideas.

Processes, useful tools and sites

Now you have your brief, it’s time to begin your research.

Given that we’re looking to uncover “what websites are publishing and what’s being shared,” it won’t surprise you to learn that I pay particular attention to pieces of content and the coverage they receive. For each piece that I think is interesting, I’ll note down the following:

  • The title/headline
  • A link to the coverage (and to the original piece if applicable)
  • How many social shares the coverage earned (and the original piece earned)
  • The number of linking root domains the original piece earned
  • Some notes about the piece itself: why it’s interesting, why I think it got shares/coverage
  • Any gaps in the content, whether or not it’s been executed well
  • How we might do something similar (if applicable)

Whilst I’m doing this I’ll also make a note of specific sites I see being frequently shared (I tend to check these out separately later on), any interesting bits of research (particularly if I think there might be an opportunity to do something different with the data), interesting threads on forums etc.

When it comes to kicking off your research, you can start wherever you like, but I’d recommend that you cover off each of the areas below:

What does your target audience share?

Whilst this activity might not uncover specific pieces of successful content, it’s a great way of getting a clearer understanding of your target audience, and getting a handle on the sites they read and the topics which interest them.

  • Review social profiles / feeds
    • If the company you’re working for has a Facebook page, it shouldn’t be too difficult to find some people who’ve liked the company page and have a public profile. It’s even easier on Twitter where most profiles are public. Whilst this won’t give you quantitative data, it does put a human face to your audience data and gives you a feel for what these people care about and share. In addition to uncovering specific pieces of content, this can also provide inspiration in terms of other sites you might want to investigate further and ideas for topics you might want to explore.
  • Demographics Pro
    • This service infers demographic data from your clients’ Twitter followers. I find it particularly useful if the client doesn’t know too much about their audience. In addition to demographic data, you get a breakdown of professions, interests, brand affiliations, and the other Twitter accounts they follow and who they’re most influenced by. This is a paid-for service, but there are pay-as-you-go options in addition to pay monthly plans.

Finding successful pieces of content on specific sites

If you’ve a list of sites you know your target audience read, and/or you know your client wants to get coverage on, there are a bunch of ways you can uncover interesting content:

  • Using your link research tool of choice (e.g. Open Site Explorer, Majestic, ahrefs) you can run a domain level report to see which pages have attracted the most links. This can also be useful if you want to check out commercial competitors to see which pieces of content they’ve created have attracted the most links.
  • There are also tools which enable you to uncover the most shared content on individual sites. You can use Buzzsumo to run content analysis reports on individual domains which provide data on average social shares per post, social shares by network, and social shares by content type.
  • If you just want to see the most shared content for a given domain you can run a simple search on Buzzsumo using the domain, and there’s also the option to refine by topic. For example, a search like [guardian.com big data] will return the most shared content on the Guardian related to big data. You can also run similar reports using ahrefs’ Content Explorer tool.

Both Buzzsumo and ahrefs are paid tools, but both offer free trials. If you need to explore the most shared content without using a paid tool, there are other alternatives. Check out Social Crawlytics which will crawl domains and return social share data, or alternatively, you can crawl a site (or section of a site) and then run the URLs through SharedCount‘s bulk upload feature.
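
If you go the crawl-it-yourself route, pulling the URL list out of a site’s XML sitemap is usually the quickest first step before pasting the URLs into a bulk share-count tool. A minimal sketch (the sitemap snippet is hypothetical):

```python
import xml.etree.ElementTree as ET

def sitemap_urls(sitemap_xml: str) -> list:
    """Pull page URLs out of a standard XML sitemap, ready for a bulk
    share-count lookup (e.g. SharedCount's bulk upload feature)."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", ns)]

# Hypothetical sitemap snippet for illustration:
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/post-1</loc></url>
  <url><loc>https://example.com/post-2</loc></url>
</urlset>"""
print(sitemap_urls(sample))
```

In practice you’d fetch the live sitemap (or crawl a section of the site) first; the parsing step is the same either way.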

Finding successful pieces of content by topic

When searching by topic, I find it best to begin with a broad search and then drill down into more specific areas. For example, if I had a client in the financial services space, I’d start out looking at a broad topic like “money” rather than shooting straight to topics like loans or credit cards.

As mentioned above, both Buzzsumo and ahrefs allow you to search for the most shared content by topic and both offer advanced search options.

Further inspiration

There are also several sites I like to look at for inspiration. Whilst these sites don’t give you a great steer on whether or not a particular piece of content was actually successful, with a little digging you can quickly find the original source and pull link and social share data:

  • Visually has a community area where users can upload creative content. You can search by topic to uncover examples.
  • TrendHunter have a searchable archive of creative ideas; they feature products, creative campaigns, marketing campaigns, advertising, and more. It’s best to keep your searches broad if you’re looking at this site.
  • Check out Niice (a moodboard app) which also has a searchable archive of handpicked design inspiration.
  • Searching Pinterest can allow you to unearth some interesting bits and pieces as can Google image searches and regular Google searches around particular topics.
  • Reviewing relevant sections of discussion sites like Quora can provide insight into what people are asking about particular topics which may spark a creative idea.

Moving from data to insight

By this point you’ve (hopefully) got a long list of content examples. Whilst this is a great start, effectively what you’ve got here is just data; now you need to convert it into insight.

Remember, we’re trying to answer the questions: “What are websites publishing, and what are people sharing?”

Ordinarily as I go through the creative content research process, I start to see patterns or themes emerge. For example, across a variety of topic areas you’ll see that the most shared content tends to be news. Whilst this is good to know, it’s not necessarily something that’s going to be particularly actionable. You’ll need to dig a little deeper—what else (aside from news) is given coverage? Can you split those things into categories or themes?

This is tough to explain in the abstract, so let me give you an example. We’d identified a set of music sites (e.g. Rolling Stone, NME, CoS, Stereogum, Pitchfork) as target publishers for a client.

Here’s a summary of what I concluded following my research:

The most-shared content on these music publications is news: album launches, new singles, videos of performances etc. As such, if we can work a news hook into whatever we create, this could positively influence our chances of gaining coverage.

Aside from news, the content which gains traction tends to fall into one of the following categories:

Earlier in this post I mentioned that it can be particularly tough to create content which attracts coverage and shares if clients feel strongly that they want to do something directly related to their product or service. The example I gave at the outset was a client who sold insurance and was really keen to create something about insurance. You’re now in a great position to win an argument with data, as thanks to your research you’ll be able to cite several pieces of insurance-related content which have struggled to gain traction. But it’s not all bad news as you’ll also be able to cite other topics which are relevant to the client’s target audience and stand a better chance of gaining coverage and shares.

Avoiding the pitfalls

There are potential pitfalls when it comes to creative content research, in that it’s easy to leap to erroneous conclusions. Here are some things to watch out for:

Make sure you’re identifying outliers…

When seeking out successful pieces of content you need to be certain that what you’re looking at is actually an outlier. For example, the average post on BuzzFeed gets over 30k social shares. As such, that post you found with just 10k shares is not an outlier. It’s done significantly worse than average. It’s therefore not the best post to be holding up as a fabulous example of what to create to get shares.
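
A quick sanity check like the following keeps you honest about what counts as an outlier. The share numbers are made up; the point is to compare a post against the site’s own baseline rather than an absolute threshold:

```python
from statistics import mean, stdev

def is_share_outlier(shares: int, site_history: list, z: float = 2.0) -> bool:
    """A post only counts as an outlier if its share count sits well above
    the site's own baseline -- here, more than `z` standard deviations
    above the mean of a sample of the site's recent posts."""
    mu, sigma = mean(site_history), stdev(site_history)
    return shares > mu + z * sigma

# Made-up sample: a site whose typical post earns ~30k shares.
history = [28000, 31000, 30000, 29500, 32000, 30500]
print(is_share_outlier(10000, history))   # well below the site's baseline
print(is_share_outlier(90000, history))   # genuinely exceptional
```

The 10k-share post from the example above fails this test on a BuzzFeed-like baseline, which is exactly the trap the paragraph describes.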

Don’t get distracted by formats…

Pay more attention to the idea than the format. For example, the folks at Mashable kindly covered an infographic about Instagram which we created for a client. However, the takeaway here is not that Instagram infographics get coverage on Mashable. Mashable didn’t cover this because we created an infographic. They covered the piece because it told a story in a compelling and unusual way.

You probably shouldn’t create a listicle…

This point is related to the point above. In my experience, unless you’re a publisher with a huge, engaged social following, that listicle of yours is unlikely to gain traction. Listicles on huge publisher sites get shares; listicles on client sites typically don’t. This is doubly important if you’re also seeking coverage, as listicles on client sites don’t typically get links or coverage on other sites.

How we use the research to inform our ideation process

At Distilled, we typically take a creative brief and complete creative content research and then move into the ideation process. A summary of the research is included within the creative brief, and this, along with a copy of the full creative content research is shared with the team.

The research acts as inspiration and direction and is particularly useful in terms of identifying potential topics to explore but doesn’t mean team members don’t still do further research of their own.

This process by no means acts as a silver bullet, but it definitely helps us come up with ideas.


Thanks for sticking with me to the end!

I’d love to hear more about your creative content research processes and any tips you have for finding inspirational content. Do let me know via the comments.

Image credits: Research, typing, audience, inspiration, kitteh.


The Long Click and the Quality of Search Success

Posted by billslawski

“On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “Long Click” — This occurred when someone went to a search result, ideally the top one, and did not return. That meant Google has successfully fulfilled the query.”

~ Steven Levy. In the Plex: How Google Thinks, Works, and Shapes our Lives

I often explore and read patents and papers from the search engines to try to get a sense of how they may approach different issues, and learn about the assumptions they make about search, searchers, and the Web. Lately, I’ve been keeping an eye open for papers and patents from the search engines where they talk about a metric known as the “long click.”

A recently granted Google patent uses the metric of a “Long Click” as the center of a process Google may use to track results for queries that were selected by searchers for long visits in a set of search results.

This concept isn’t new. In 2011, I wrote about a Yahoo patent in How a Search Engine May Measure the Quality of Its Search Results, where they discussed a metric that they refer to as a “target page success metric.” It included “dwell time” upon a result as a sign of search success (Yes, search engines have goals, too).


Another Google patent described assigning web pages “reachability scores” based upon the quality of the pages linked to from those initially visited pages. In the post Does Google Use Reachability Scores in Ranking Resources? I described how a Google patent might view a long click as a sign that visitors to a page are engaged by the content its links point to, including links to videos. Google tells us in that patent that it might consider a “long click” to have been made on a video if someone watches at least half the video or 30 seconds of it. The patent suggests that a high reachability score may mean a page could be boosted in Google search results.
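
Taking the patent’s video rule literally, the long-click test is easy to express. This is a sketch of the rule as described, not Google’s code:

```python
def is_video_long_click(watched_seconds: float, duration_seconds: float) -> bool:
    """The patent's video test, read literally: a view counts as a 'long
    click' if the visitor watched at least 30 seconds of the video, or at
    least half of it (whichever threshold is reached first)."""
    return watched_seconds >= 30 or watched_seconds >= duration_seconds / 2

print(is_video_long_click(12, 20))   # over half of a short clip
print(is_video_long_click(35, 600))  # over 30 seconds of a long video
print(is_video_long_click(10, 600))  # neither threshold met
```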


But the patent I’m writing about today is focused primarily upon looking at and tracking a search success metric like a long click or long dwell time. Here’s the abstract:

Modifying ranking data based on document changes

Invented by Henele I. Adams, and Hyung-Jin Kim

Assigned to Google

US Patent 9,002,867

Granted April 7, 2015

Filed: December 30, 2010

Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media for determining a weighted overall quality of result statistic for a document.

One method includes receiving quality of result data for a query and a plurality of versions of a document, determining a weighted overall quality of result statistic for the document with respect to the query including weighting each version specific quality of result statistic and combining the weighted version-specific quality of result statistics, wherein each quality of result statistic is weighted by a weight determined from at least a difference between content of a reference version of the document and content of the version of the document corresponding to the version specific quality of result statistic, and storing the weighted overall quality of result statistic and data associating the query and the document with the weighted overall quality of result statistic.
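
The abstract’s method can be sketched in a few lines. Everything below is an illustrative reading of the patent, not its actual implementation: the “content difference” weight is stood in for by a simple Jaccard word overlap, and the quality-of-result statistic by a made-up long-click rate per version.

```python
def content_similarity(reference: str, version: str) -> float:
    """Jaccard similarity between the word sets of two document versions.
    (The patent only says the weight is 'determined from' the content
    difference; Jaccard overlap is an illustrative stand-in.)"""
    a, b = set(reference.split()), set(version.split())
    return len(a & b) / len(a | b)

def weighted_quality(reference: str, versions: list) -> float:
    """Combine version-specific quality-of-result statistics (e.g. long-click
    rates), weighting each by how similar that version's content is to the
    current reference version of the document."""
    weights = [content_similarity(reference, text) for text, _ in versions]
    return sum(w * q for w, (_, q) in zip(weights, versions)) / sum(weights)

# Hypothetical history: two old versions of a page and their quality stats.
ref = "herbal tea soothes anger and stress"
versions = [
    ("herbal tea soothes anger and stress", 0.9),      # identical: full weight
    ("soothing herbal tea for anger management", 0.2), # partially rewritten: reduced weight
]
print(round(weighted_quality(ref, versions), 3))
```

The effect is the one the patent is after: quality statistics earned by versions of the page that still resemble the current one count for more, while statistics from heavily rewritten versions fade out.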

This patent tells us that search results may be ranked in an order according to scores assigned to the search results by a scoring function or process that would be based upon things such as:

  • Where, and how often, query terms appear in the given document,
  • How common the query terms are in the documents indexed by the search engine, or
  • A query-independent measure of quality of the document itself.
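
A toy scoring function combining those three signal types might look like the following. This is purely a sketch of the general shape such a scorer takes, not Google’s formula; the documents, query, and quality value are invented.

```python
import math

def score(document: str, query: list, corpus: list, quality: float) -> float:
    """Toy illustration of the three signal types above: term frequency in
    the document, rarity of each term across the corpus (an IDF-style
    factor), and a query-independent quality multiplier. Not Google's
    actual formula -- just the general shape of such a scorer."""
    words = document.lower().split()
    s = 0.0
    for term in query:
        tf = words.count(term) / len(words)                      # where / how often
        df = sum(term in doc.lower().split() for doc in corpus)  # how common overall
        idf = math.log((len(corpus) + 1) / (df + 1)) + 1
        s += tf * idf
    return s * quality                                           # query-independent part

docs = ["lincoln was the sixteenth president",
        "lincoln nebraska city guide",
        "soothing herbal tea recipes"]
print(score(docs[0], ["lincoln", "president"], docs, quality=1.2))
```

Note how “president” (appearing in one document) contributes more per occurrence than “lincoln” (appearing in two), which is the “how common the query terms are” factor at work.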

Last September, I wrote about how Google might identify a category associated with a query term based upon clicks, in the post Using Query User Data To Classify Queries. In a query for “Lincoln,” the results that appear in response might be about the former US President, the town of Lincoln, Nebraska, and the model of automobile. When someone searches for [Lincoln], Google returning all three of those responses as a top result could be said to be reasonable. The patent I wrote about in that post told us that Google might collect information about “Lincoln” as a search entity, and track which category of results people clicked upon most when they performed that search, to determine what categories of pages to show other searchers. Again, that’s another “search success” metric based upon past search history.

There is likely some value in working to find ways to increase the amount of dwell time someone spends upon the pages of your site, if you are already having some success in crafting page titles and snippets that persuade people to click on your pages when those appear in search results. These approaches can include such things as:

  1. Making visiting your page a positive experience in terms of things like site speed, readability, and scannability.
  2. Making visiting your page a positive experience in terms of things like the quality of the content published on your pages including spelling, grammar, writing style, interest, quality of images, and the links you share to other resources.
  3. Providing a positive experience by offering ideas worth sharing with others, and offering opportunities for commenting and interacting with others, and by being responsive to people who do leave comments.

Here are some resources I found that discuss this long click metric in terms of “dwell time”:

Your ability to create pages that can end up in a “long click” from someone who has come to your site in response to a query is also a “search success” metric on the search engine’s part, and you both succeed. Just be warned that, as the most recent patent from Google on long clicks shows us, Google will be watching to make sure that the content of your page doesn’t change too much, that people are continuing to click upon it in search results, and that they spend a fair amount of time upon it.

(Images for this post are from my Go Fish Digital Design Lead Devin Holmes @DevinGoFish. Thank you, Devin!)


How to Combat 5 of the SEO World’s Most Infuriating Problems – Whiteboard Friday

Posted by randfish

These days, most of us have learned that spammy techniques aren’t the way to go, and we have a solid sense for the things we should be doing to rank higher, and ahead of our often spammier competitors. Sometimes, maddeningly, it just doesn’t work. In today’s Whiteboard Friday, Rand talks about five things that can infuriate SEOs with the best of intentions, why those problems exist, and what we can do about them.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

What SEO problems make you angry?

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about some of the most infuriating things in the SEO world, specifically five problems that I think plague a lot of folks and some of the ways that we can combat and address those.

I’m going to start with one of the things that really infuriates a lot of folks new to the field, especially folks who are building new and emerging sites and are doing SEO on them. You have all of these best practices lists. You might look at a web developer’s cheat sheet or sort of a guide to on-page and on-site SEO. You go, “Hey, I’m doing it. I’ve got my clean URLs, my good, unique content, my solid keyword targeting, schema markup, useful internal links, my XML sitemap, and my fast load speed. I’m mobile friendly, and I don’t have manipulative links.”

Great. “Where are my results? What benefit am I getting from doing all these things, because I don’t see one?” I took a site that was not particularly SEO friendly, maybe it’s a new site, one I just launched or an emerging site, one that’s sort of slowly growing but not yet a power player. I do all this right stuff, and I don’t get SEO results.

This makes a lot of people stop investing in SEO, stop believing in SEO, and stop wanting to do it. I can understand where you’re coming from. The challenge is not one of you’ve done something wrong. It’s that this stuff, all of these things that you do right, especially things that you do right on your own site or from a best practices perspective, they don’t increase rankings. They don’t. That’s not what they’re designed to do.

1) Following best practices often does nothing for new and emerging sites

This stuff, all of these best practices are designed to protect you from potential problems. They’re designed to make sure that your site is properly optimized so that you can perform to the highest degree that you are able. But this is not actually rank boosting stuff unfortunately. That is very frustrating for many folks. So following a best practices list, the idea is not, “Hey, I’m going to grow my rankings by doing this.”

On the flip side, many folks do these things on larger, more well-established sites, sites that have a lot of ranking signals already in place. They’re bigger brands, they have lots of links to them, and they have lots of users and usage engagement signals. You fix this stuff. You fix stuff that’s already broken, and boom, rankings pop up. Things are going well, and more of your pages are indexed. You’re getting more search traffic, and it feels great. This is a challenge, on our part, of understanding what this stuff does, not a challenge on the search engine’s part of not ranking us properly for having done all of these right things.

2) My competition seems to be ranking on the back of spammy or manipulative links

What’s going on? I thought Google had introduced all these algorithms to kind of shut this stuff down. This seems very frustrating. How are they pulling this off? I look at their link profile, and I see a bunch of the directories, Web 2.0 sites — I love that the spam world decided that that’s Web 2.0 sites — article sites, private blog networks, and do-follow blogs.

You look at this stuff and you go, “What is this junk? It’s terrible. Why isn’t Google penalizing them for this?” The answer, the right way to think about this and to come at this is: Are these really the reason that they rank? I think we need to ask ourselves that question.

One thing that we don’t know, that we can never know, is: Have these links been disavowed by our competitor here?

I’ve got my HulksIncredibleStore.com and their evil competitor Hulk-tastrophe.com. Hulk-tastrophe has got all of these terrible links, but maybe they disavowed those links and you would have no idea. Maybe they didn’t build those links. Perhaps those links came in from some other place. They are not responsible. Google is not treating them as responsible for it. They’re not actually what’s helping them.

If they are helping, and it’s possible they are, there are still instances where we’ve seen spam propping up sites. No doubt about it.

I think the next logical question is: Are you willing to lose your site or brand? We almost never see sites like this anymore, sites that are ranking on the back of these things and have generally less legitimate and good links, ranking for two or three or four years. You can see it for a few months, maybe even a year, but this stuff is getting hit hard and getting hit frequently. So unless you’re willing to lose your site, pursuing their links is probably not a strategy.

Then what other signals, that you might not be considering potentially links, but also non-linking signals, could be helping them rank? I think a lot of us get blinded in the SEO world by link signals, and we forget to look at things like: Do they have a phenomenal user experience? Are they growing their brand? Are they doing offline kinds of things that are influencing online? Are they gaining engagement from other channels that’s then influencing their SEO? Do they have things coming in that I can’t see? If you don’t ask those questions, you can’t really learn from your competitors, and you just feel the frustration.

3) I have no visibility or understanding of why my rankings go up vs down

On my HulksIncredibleStore.com, I’ve got my infinite stretch shorts, which I don’t know why he never wears — he should really buy those — my soothing herbal tea, and my anger management books. I look at my rankings and they kind of jump up all the time, jump all over the place all the time. Actually, this is pretty normal. I think we’ve done some analyses here, and the average page one search results shift is 1.5 or 2 position changes daily. That’s sort of the MozCast dataset, if I’m recalling correctly. That means that, over the course of a week, it’s not uncommon or unnatural for you to be bouncing around four, five, or six positions up, down, and those kind of things.

I think we should understand what can be behind these things. That’s a very simple list. You made changes, Google made changes, your competitors made changes, or searcher behavior has changed in terms of volume, in terms of what they were engaging with, what they’re clicking on, what their intent behind searches are. Maybe there was just a new movie that came out and in one of the scenes Hulk talks about soothing herbal tea. So now people are searching for very different things than they were before. They want to see the scene. They’re looking for the YouTube video clip and those kind of things. Suddenly Hulk’s soothing herbal tea is no longer directing as well to your site.

So changes like these things can happen. We can’t understand all of them. I think what’s up to us to determine is the degree of analysis and action that’s actually going to provide a return on investment. Looking at these day over day or week over week and throwing up our hands and getting frustrated probably provides very little return on investment. Looking over the long term and saying, “Hey, over the last 6 months, we can observe 26 weeks of ranking change data, and we can see that in aggregate we are now ranking higher and for more keywords than we were previously, and so we’re going to continue pursuing this strategy. This is the set of keywords that we’ve fallen most on, and here are the factors that we’ve identified that are consistent across that group.” I think looking at rankings in aggregate can give us some real positive ROI. Looking at one or two, one week or the next week probably very little ROI.
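
That aggregate view is easy to automate. A minimal sketch, with a hypothetical keyword set and only four weeks of data standing in for the 26 described above:

```python
from statistics import mean

def aggregate_ranking_trend(weekly_ranks: dict) -> dict:
    """Compare average rank across a keyword set between the first and last
    week of a tracking window, instead of reacting to week-to-week noise.
    weekly_ranks maps keyword -> list of weekly positions (lower = better)."""
    start = mean(r[0] for r in weekly_ranks.values())
    end = mean(r[-1] for r in weekly_ranks.values())
    fallen = [kw for kw, r in weekly_ranks.items() if r[-1] > r[0]]
    return {"avg_start": start, "avg_end": end, "fell": fallen}

# Hypothetical 4-week sample; a real analysis would cover ~26 weeks.
ranks = {
    "herbal tea": [8, 7, 9, 5],
    "stretch shorts": [12, 11, 10, 9],
    "anger management books": [4, 6, 5, 7],
}
print(aggregate_ranking_trend(ranks))
```

Here the aggregate picture is improving even though one keyword group fell, which is exactly the kind of ROI-worthy signal the week-over-week view hides.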

4) I cannot influence or affect change in my organization because I cannot accurately quantify, predict, or control SEO

That’s true, especially with things like keyword not provided and certainly with the inaccuracy of data that’s provided to us through Google’s Keyword Planner inside of AdWords, for example, and the fact that no one can really control SEO, not fully anyway.

You get up in front of your team, your board, your manager, your client and you say, “Hey, if we don’t do these things, traffic will suffer,” and they go, “Well, you can’t be sure about that, and you can’t perfectly predict it. Last time you told us something, something else happened. So because the data is imperfect, we’d rather spend money on channels that we can perfectly predict, that we can very effectively quantify, and that we can very effectively control.” That is understandable. I think that businesses have a lot of risk aversion naturally, and so wanting to spend time and energy and effort in areas that you can control feels a lot safer.

Some ways to get around this are, first off, know your audience. If you know who you’re talking to in the room, you can often determine the things that will move the needle for them. For example, I find that many managers, many boards, many executives are much more influenced by competitive pressures than they are by, “We won’t do as well as we did before, or we’re losing out on this potential opportunity.” Saying that is less powerful than saying, “This competitor, who I know we care about and we track ourselves against, is capturing this traffic and here’s how they’re doing it.”

Show multiple scenarios. Many of the SEO presentations that I see and have seen and still see from consultants and from in-house folks come with kind of a single, “Hey, here’s what we predict will happen if we do this or what we predict will happen if we don’t do this.” You’ve got to show multiple scenarios, especially when you know you have error bars because you can’t accurately quantify and predict. You need to show ranges.

So instead of this, I want to see: What happens if we do it a little bit? What happens if we really overinvest? What happens if Google makes a much bigger change on this particular factor than we expect or our competitors do a much bigger investment than we expect? How might those change the numbers?

Then I really do like bringing case studies, especially if you’re a consultant, but even in-house. There are so many case studies in SEO on the Web today that you can almost always find someone who’s analogous or nearly analogous and show some of their data, some of the results that they’ve seen. Places like SEMrush, a tool that offers competitive intelligence around rankings, can be great for that. You can show, hey, this media site in our sector made these changes. Look at the delta of keywords they were ranking for versus ours over the next six months. Correlation is not causation, but that can be a powerful influencer showing those kinds of things.

Then last, but not least, any time you’re going to get up like this and present to a group around these topics, if you possibly can, try to talk one-on-one with the participants before the meeting actually happens. I have found it almost universally the case that if you haven’t had the discussions beforehand, asking things like, “What are your concerns? What do you think is not valid about this data? Hey, I want to run this by you and get your thoughts before we go to the meeting,” people can gang up and pile on once you’re in the group setting. One person says, “Hey, I don’t think this is right,” and everybody in the room kind of looks around and goes, “Yeah, I also don’t think that’s right.” Then it just turns into warfare and conflict that you don’t want or need. If you address those things beforehand, then you can include the data, the presentations, and the “I don’t know the answer to this and I know this is important to so and so” in that presentation or in that discussion. It can be hugely helpful. Big difference between winning and losing with that.

5) Google is biasing to big brands. It feels hopeless to compete against them

A lot of people are feeling this hopelessness, hopelessness in SEO about competing against them. I get that pain. In fact, I’ve felt that very strongly for a long time in the SEO world, and I think the trend has only increased. This comes from all sorts of stuff. Brands now have the little dropdown next to their search result listing. There are these brand and entity connections. As Google is using answers and knowledge graph more and more, it’s feeling like those entities are having a bigger influence on where things rank and where they’re visible and where they’re pulling from.

User and usage behavior signals are on the rise, which means that big brands, who have more of those signals, tend to perform better. Brands are in the knowledge graph, and brands grow links without any effort; they’re just growing links because they’re brands and people point to them naturally. Well, that is all really tough and can be very frustrating.

I think you have a few choices on the table. First off, you can choose to compete with brands where they can’t or won’t. This means going after keywords that we know the big brands are not chasing, going after social channels or people on social media that we know big brands aren’t, going after user generated content because they have all these corporate requirements and won’t invest in that stuff, and going after content that they refuse to pursue for one reason or another. That can be very effective.

Second, you’d better be building, growing, and leveraging your competitive advantage. Whenever you build an organization, you’ve got to say, “Hey, here’s who is out there. This is why we are uniquely better or a uniquely better choice for this set of customers than these other ones.” If you can leverage that, you can generally find opportunities to compete and even to win against big brands. But those things have to become obvious, they have to become well-known, and you need to essentially build some of your brand around those advantages, or they’re not going to give you help in search. That includes media, that includes content, that includes any sort of press and PR you’re doing. That includes how you do your own messaging, all of these things.

Third, you can choose to serve a market or a customer that they don’t or won’t. That can be a powerful way to go about search, because usually search is bifurcated by customer type. There will be slightly different forms of search queries entered by different kinds of customers, and you can pursue one of those that isn’t pursued by the competition.

Last, but not least, I think for everyone in SEO we all realize we’re going to have to become brands ourselves. That means building the signals that are typically associated with brands — authority, recognition from an industry, recognition from a customer set, awareness of our brand even before a search has happened. I talked about this in a previous Whiteboard Friday, but I think because of these things, SEO is becoming a channel that you benefit from as you grow your brand rather than the channel you use to initially build your brand.

All right, everyone. Hope these have been helpful in combating some of these infuriating, frustrating problems and that we’ll see some great comments from you guys. I hope to participate in those as well, and we’ll catch you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

[ccw-atrib-link]

I Can’t Drive 155: Meta Descriptions in 2015

Posted by Dr-Pete

For years now, we (and many others) have been recommending keeping your Meta Descriptions shorter than about 155-160 characters. For months, people have been sending me examples of search snippets that clearly broke that rule, like this one (on a search for “hummingbird food”):

For the record, this one clocks in at 317 characters (counting spaces). So, I set out to discover if these long descriptions were exceptions to the rule, or if we need to change the rules. I collected the search snippets across the MozCast 10K, which resulted in 92,669 snippets. All of the data in this post was collected on April 13, 2015.

The Basic Data

The minimum snippet length was zero characters. There were 69 zero-length snippets, but most of these were the new generation of answer box, which appears organic but doesn’t have a snippet. To put it another way, these were misidentified as organic by my code. The other 0-length snippets were local one-boxes that appeared as organic but had no snippet, such as this one for “chichen itza”:

These zero-length snippets were removed from further analysis, but considering that they only accounted for 0.07% of the total data, they didn’t really impact the conclusions either way. The shortest legitimate, non-zero snippet was 7 characters long, on a search for “geek and sundry”, and appears to have come directly from the site’s meta description:

The maximum snippet length that day (this is a highly dynamic situation) was 372 characters. The winner appeared on a search for “benefits of apple cider vinegar”:

The average length of all of the snippets in our data set (not counting zero-length snippets) was 143.5 characters, and the median length was 152 characters. Of course, this can be misleading, since some snippets are shorter than the limit and others are being artificially truncated by Google. So, let’s dig a bit deeper.

The Bigger Picture

To get a better idea of the big picture, let’s take a look at the display length of all 92,600 snippets (with non-zero length), split into 20-character buckets (0-20, 21-40, etc.):

Most of the snippets (62.1%) cut off as expected, right in the 141-160 character bucket. Of course, some snippets were shorter than that, and didn’t need to be cut off, and some broke the rules. About 1% (1,010) of the snippets in our data set measured 200 or more characters. That’s not a huge number, but it’s enough to take seriously.

That 141-160 character bucket is dwarfing everything else, so let’s zoom in a bit on the cut-off range, and just look at snippets in the 120-200 character range (in this case, by 5-character bins):

Zooming in, the bulk of the snippets are displaying at lengths between about 146-165 characters. There are plenty of exceptions to the 155-160 character guideline, but for the most part, they do seem to be exceptions.

Finally, let’s zoom in on the rule-breakers. This is the distribution of snippets displaying 191+ characters, bucketed in 10-character bins (191-200, 201-210, etc.):

Please note that the Y-axis scale is much smaller than in the previous 2 graphs, but there is a pretty solid spread, with a decent chunk of snippets displaying more than 300 characters.

Without looking at every original meta description tag, it’s very difficult to tell exactly how many snippets have been truncated by Google, but we do have a proxy. Snippets that have been truncated end in an ellipsis (…), which rarely appears at the end of a natural description. In this data set, more than half of all snippets (52.8%) ended in an ellipsis, so we’re still seeing a lot of meta descriptions being cut off.
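For anyone who wants to reproduce this kind of analysis on their own snippet data, the bucketing and the ellipsis proxy might look something like this sketch. The sample snippets are invented, and note that Google renders its ellipsis as a single “…” character, so the check accepts both forms:

```python
# Group snippet display lengths into 20-character bins and measure the
# share of snippets ending in an ellipsis (the proxy for truncation).

from collections import Counter

def length_bucket(snippet, size=20):
    """Return the (low, high) bucket a snippet's length falls in, e.g. (141, 160)."""
    high = ((max(len(snippet), 1) - 1) // size + 1) * size
    return (high - size + 1, high)

snippets = [
    "Short description.",
    "A typical meta description that runs right up against the usual "
    "155-160 character guideline and is then cut off by Google …",
    "Another truncated snippet that ends the way cut-off snippets do …",
]

buckets = Counter(length_bucket(s) for s in snippets)
ellipsis_share = sum(s.endswith(("…", "...")) for s in snippets) / len(snippets)
print(buckets)
print(f"{ellipsis_share:.1%} end in an ellipsis")  # 66.7% here
```

Run against a real crawl of 92,600 snippets, the same two lines give the 62.1% and 52.8% figures discussed above.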

I should add that, unlike titles/headlines, it isn’t clear whether Google is cutting off snippets by pixel width or character count, since the cut-off is done server-side. In most cases, Google will cut before the end of the second line, but sometimes they cut well before this, which could suggest a character-based limit. They also cut off at whole words, which can make the numbers a bit tougher to interpret.

The Cutting Room Floor

There’s another difficulty with telling exactly how many meta descriptions Google has modified – some edits are minor, and some are major. One minor edit is when Google adds some additional information to a snippet, such as a date at the beginning. Here’s an example (from a search for “chicken pox”):

With the date (and minus the ellipsis), this snippet is 164 characters long, which suggests Google isn’t counting the added text against the length limit. What’s interesting is that the rest comes directly from the meta description on the site, except that the site’s description starts with “Chickenpox.” and Google has removed that keyword. As a human, I’d say this matches the meta description, but a bot has a very hard time telling a minor edit from a complete rewrite.

Another minor rewrite occurs in snippets that start with search result counts:

Here, we’re at 172 characters (with spaces and minus the ellipsis), and Google has even let this snippet roll over to a third line. So, again, it seems like the added information at the beginning isn’t counting against the length limit.

All told, 11.6% of the snippets in our data set had some kind of Google-generated data, so this type of minor rewrite is pretty common. Even if Google honors most of your meta description, you may see small edits.

Let’s look at our big winner, the 372-character description. Here’s what we saw in the snippet:

Jan 26, 2015 – Health• Diabetes Prevention: Multiple studies have shown a correlation between apple cider vinegar and lower blood sugar levels. … • Weight Loss: Consuming apple cider vinegar can help you feel more full, which can help you eat less. … • Lower Cholesterol: … • Detox: … • Digestive Aid: … • Itchy or Sunburned Skin: … • Energy Boost:1 more items

So, what about the meta description? Here’s what we actually see in the tag:

Were you aware of all the uses of apple cider vinegar? From cleansing to healing, to preventing diabetes, ACV is a pantry staple you need in your home.

That’s a bit more than just a couple of edits. So, what’s happening here? Well, there’s a clue on that same page, where we see yet another rule-breaking snippet:

You might be wondering why this snippet is any more interesting than the other one. If you could see the top of the SERP, you’d know why, because it looks something like this:

Google is automatically extracting list-style data from these pages to fuel the expansion of the Knowledge Graph. In one case, that data is replacing a snippet and going directly into an answer box, but they’re performing the same translation even for some other snippets on the page.

So, does every 2nd-generation answer box yield long snippets? After 3 hours of inadvisable mySQL queries, I can tell you that the answer is a resounding “probably not”. You can have 2nd-gen answer boxes without long snippets and you can have long snippets without 2nd-gen answer boxes, but there does appear to be a connection between long snippets and the Knowledge Graph in some cases.

One interesting connection is that Google has begun bolding keywords that seem like answers to the query (and not just synonyms for the query). Below is an example from a search for “mono symptoms”. There’s an answer box for this query, but the snippet below is not from the site in the answer box:

Notice the bolded words – “fatigue”, “sore throat”, “fever”, “headache”, “rash”. These aren’t synonyms for the search phrase; these are actual symptoms of mono. This data isn’t coming from the meta description, but from a bulleted list on the target page. Again, it appears that Google is trying to use the snippet to answer a question, and has gone well beyond just matching keywords.

Just for fun, let’s look at one more, where there’s no clear connection to the Knowledge Graph. Here’s a snippet from a search for “sons of anarchy season 4”:

This page has no answer box, and the information extracted is odd at best. The snippet bears little or no resemblance to the site’s meta description. The number string at the beginning comes out of a rating widget, and some of the text isn’t even clearly available on the page. This seems to be an example of Google acknowledging IMDb as a high-authority site and desperately trying to match any text they can to the query, resulting in a Frankenstein’s snippet.

The Final Verdict

If all of this seems confusing, that’s probably because it is. Google is taking a lot more liberties with snippets these days: to better match queries, to add details they feel are important, and to help build and support the Knowledge Graph.

So, let’s get back to the original question – is it time to revise the 155(ish) character guideline? My gut feeling is: not yet. To begin with, the vast majority of snippets are still falling in that 145-165 character range. In addition, the exceptions to the rule are not only atypical situations, but in most cases those long snippets don’t seem to represent the original meta description. In other words, even if Google does grant you extra characters, they probably won’t be the extra characters you asked for in the first place.

Many people have asked: “How do I make sure that Google shows my meta description as is?” I’m afraid the answer is: “You don’t.” If this is very important to you, I would recommend keeping your description below the 155-character limit, and making sure that it’s a good match to your target keyword concepts. I suspect Google is going to take more liberties with snippets over time, and we’re going to have to let go of our obsession with having total control over the SERPs.
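If you want to apply the “stay under 155” advice programmatically, a rough helper like this can flag descriptions at risk of truncation and preview a whole-word cut. The 155 here is this post’s guideline, not an official Google limit, and as noted above Google may rewrite the snippet regardless:

```python
# Flag meta descriptions at risk of truncation and preview a cut made
# at a word boundary, roughly mimicking Google's whole-word trimming.

LIMIT = 155  # this post's guideline, not an official Google number

def preview(description, limit=LIMIT):
    """Return (display_text, was_truncated), cutting at a word boundary."""
    if len(description) <= limit:
        return description, False
    cut = description[:limit].rsplit(" ", 1)[0]
    return cut + " …", True

desc = ("Were you aware of all the uses of apple cider vinegar? From "
        "cleansing to healing, to preventing diabetes, ACV is a pantry "
        "staple you need in your home.")
text, truncated = preview(desc)
print(truncated)  # False -- the apple cider vinegar tag fits under the limit
```

Which, of course, is exactly the description Google rewrote into a 372-character list anyway; the check only tells you what Google is likely to display if it honors your tag.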

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

[ccw-atrib-link]

The Nifty Guide to Local Content Strategy and Marketing

Posted by NiftyMarketing

This is my Grandma.

She helped raise me and I love her dearly. That chunky baby with the Gerber cheeks is me. The scarlet letter “A” means nothing… I hope.

This is a rolled up newspaper. 

rolled up newspaper

When I was growing up, I was the king of mischief and had a hard time following parental guidelines. To ensure the lessons she wanted me to learn “sunk in” my grandma would give me a soft whack with a rolled up newspaper and would say,

“Mike, you like to learn the hard way.”

She was right. I have spent my life and career learning things the hard way.

Local content has been no different. I started out my career creating duplicate local doorway pages using “find and replace” with city names. After getting whacked by the figurative newspaper a few times, I decided there had to be a better way. I hope the hard lessons I have learned about local content strategy and marketing spare you from fearing a rolled-up newspaper the way I do.

Lesson one: Local content doesn’t just mean the written word

local content ecosystem

Content is everything around you. It all tells a story. If you don’t have a plan for how that story is being told, then you might not like how it turns out. In the local world, even your brick and mortar building is a piece of content. It speaks about your brand, your values, your appreciation of customers and employees, and can be used to attract organic visitors if it is positioned well and provides a good user experience. If you just try to make the front of a building look good, but don’t back up the inside inch by inch with the same quality, people will literally say, “Hey man, this place sucks… let’s bounce.”

I had this experience proven to me recently while conducting an interview at Nifty for our law division. Our office is a beautifully designed brick, mustache, animal-on-the-wall, leg-lamp-in-the-center-of-the-room piece of work you would expect for a creative company.

nifty offices idaho

Anywho, for our little town of Burley, Idaho it is a unique space, and it helps to set our business apart in our community. But the conference room has a fluorescent ballast light system that can buzz so loudly that you can’t carry on a proper conversation at times, and in the recent interviews I literally had to conduct them in the dark because it was so bad.

I’m cheap and slow to spend money, so I haven’t gotten it fixed yet. The problem is I have two more interviews this week, and I am so embarrassed by the experience in that room that I am thinking of holding them offsite to ensure we don’t produce a bad content experience. What I need to do is just fix the light, but I will end up spending weeks going back and forth with the landlord on whose responsibility it is.

Meanwhile, the content experience suffers. Like I said, I like to learn the hard way.

Start thinking about everything in the frame of content and you will find that you make better decisions and less costly mistakes.

Lesson two: Scalable does not mean fast and easy growth

In every sales conversation I have had about local content, the question of scalability comes up. Usually, people want two things:

  1. Extremely Fast Production 
  2. Extremely Low Cost

While these two things would be great for every project, I have come to find that quality is rarely achieved when you optimize for fast production and low cost. A better way to look at scale is as follows:

The rate of growth in revenue/traffic is greater than the cost of continued content creation.

A good local content strategy at scale will create a model that looks like this:

scaling content graph

Lesson three: You need a continuous local content strategy

This is where the difference between local content marketing and content strategy kicks in. Creating a single piece of content that does well is fairly easy to achieve. Building a true scalable machine that continually puts out great local content and consistently tells your story is not. This is a graph I created outlining the process behind creating and maintaining a local content strategy:

local content strategy

This process is not a one-time thing. It is not a box to be checked off. It is a structure that should become the foundation of your marketing program and will need to be revisited, re-tweaked, and replicated over and over again.

1. Identify your local audience

Most of you reading this will already have a service or product and hopefully local customers. Do you have personas developed for attracting and retaining more of them? Here are some helpful tools available to give you an idea of how many people fit your personas in any given market.

Facebook Insights

Pretend for a minute that you live in the unique market of Utah and have a custom wedding dress line. You focus on selling modest wedding dresses. It is a definite niche product, but one that shows the idea of personas very well.

You have interviewed your customer base and found a few interests that your customer base share. Taking that information and putting it into Facebook insights will give you a plethora of data to help you build out your understanding of a local persona.

facebook insights data

We are able to see from the interests of our customers that there are roughly 6k-7k currently engaged women in Utah who have similar interests to our customer base.

The location tab gives us a breakdown of the specific cities and, understandably, Salt Lake City has the highest percentage, with Provo (home of BYU) in second place. You can also see pages this group would like, activity levels on Facebook, and household income with spending habits. If you wanted to find more potential locations for future growth, you could open up the search to a region or country.

localized facebook insights data

From this data it’s apparent that Arizona would be a great expansion opportunity after Utah.

Nielsen PRIZM

Nielsen offers a free and extremely useful tool for local persona research called Zip Code Lookup that allows you to identify pre-determined personas in a given market.

Here is a look at my hometown and the personas they have developed are dead on.

Nielsen PRIZM data

Each persona can be expanded to learn more about the traits, income level, and areas across the country with other high concentrations of the same persona group.

You can also use the segment explorer to get a better idea of pre-determined persona lists and can work backwards to determine the locations with the highest density of a given persona.

Google Keyword Planner Tool

The keyword tool is fantastic for local research. Using our same Facebook Insights data above, we can match keyword search volume against the audience size to determine how active our persona is in product research and purchasing. In the case of engaged women looking for dresses, it is a very active group, with potentially 20-30% actively searching online for a dress.

google keyword planner tool
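The comparison described above is just a ratio. As a back-of-the-envelope sketch, with both numbers invented for illustration rather than pulled from real Keyword Planner or Facebook Insights exports:

```python
# Monthly keyword search volume divided by the persona's audience size
# approximates how active the group is in researching the product.

audience_size = 6500      # engaged women in Utah (per the Insights step)
monthly_searches = 1800   # hypothetical "modest wedding dress" volume

active_share = monthly_searches / audience_size
print(f"Roughly {active_share:.0%} of the audience searches each month")
# -> Roughly 28% of the audience searches each month
```

A share landing in the 20-30% range, as here, is the signal that the persona is actively shopping rather than just demographically present in the market.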

2. Create goals and rules

I think the most important idea for creating the goals and rules around your local content is the following, from the must-read book Content Strategy for the Web.

You also need to ensure that everyone who will be working on things even remotely related to content has access to style and brand guides and, ultimately, understands the core purpose for what, why, and how everything is happening.

3. Audit and analyze your current local content

The point of this step is to determine how the current content you have stacks up against the goals and rules you established, and determine the value of current pages on your site. With tools like Siteliner (for finding duplicate content) and ScreamingFrog (identifying page titles, word count, error codes and many other things) you can grab a lot of information very fast. Beyond that, there are a few tools that deserve a more in-depth look.

BuzzSumo

With BuzzSumo you can see the social data and incoming links behind important pages on your site. This can give you a good idea of which locations or areas are getting more promotion than others and help identify what some of the causes could be.

BuzzSumo can also give you access to competitors’ information, where you might find some new ideas. In the following example you can see that one of Airbnb.com’s most shared pages was a motion graphic of its impact on Berlin.

Buzzsumo

urlProfiler

This is another great tool for scraping URLs on large sites, and it can return about every type of measurement you could want. For sites with thousands of pages, this tool could save hours of data gathering and can spit out a nicely formatted CSV document that will allow you to sort by things like word count, page authority, link counts, social shares, or about anything else you could imagine.

url profiler
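As a sketch of the kind of sorting described, here is how you might slice a urlProfiler-style CSV with Python’s standard library. The column names (“URL”, “Word Count”, “Page Authority”) are assumptions about the export layout, so check them against your actual file:

```python
# Load a CSV export and sort pages by word count to surface thin content.

import csv
import io

# Inline sample standing in for a real export file.
sample = """URL,Word Count,Page Authority
/boise,120,34
/salt-lake-city,950,41
/provo,15,12
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    row["Word Count"] = int(row["Word Count"])

# Thin pages first: the most likely candidates for a rewrite or pruning.
thin_first = sorted(rows, key=lambda r: r["Word Count"])
print([r["URL"] for r in thin_first])  # ['/provo', '/boise', '/salt-lake-city']
```

Swapping the sort key to page authority or social shares gives you the other audit views mentioned above without re-crawling anything.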

4. Develop local content marketing tactics

This is how most of you look when marketing tactics are brought up.

monkey

Let me remind you of something with a picture. 

rolled up newspaper

Do not start with tactics. Do the other things first. It will ensure your marketing tactics fall in line with a much bigger organizational movement and process. With the warning out of the way, here are a few tactics that could work for you.

Local landing page content

Our initial concept of local landing pages has stood the test of time. If you are scared to even think about local pages with the upcoming doorway page update then please read this analysis and don’t be too afraid. Here are local landing pages that are done right.

Marriott local content

Marriott’s Burley local page is great. They didn’t stop at ensuring they had 500 unique words. They have custom local imagery of the exterior/interior, detailed information about the area’s activities, and even their own review platform that showcases both positive and negative reviews with responses from local management.

If you can’t build your own platform handling reviews like that, might I recommend looking at Get Five Stars as a platform that could help you integrate reviews as part of your continuous content strategy.

Airbnb Neighborhood Guides

I have a not-so-secret crush on Airbnb’s approach to local. These neighborhood guides started it. They have only roughly 21 guides thus far and handle one at a time, with Seoul being the most recent addition. The idea is simple: they looked at extremely hot markets and built out guides not just for the city, but down to a specific neighborhood.

air bnb neighborhood guides

Here is a look at Hell’s Kitchen in New York by imagery. They hire a local photographer to shoot the area, then they take some of their current popular listing data and reviews and integrate them into the page. This idea would have never flown if they only cared about creating content that could be fast and easy for every market they serve.

Reverse infographicing

Every decently sized city has had a plethora of infographics made about it. People spent the time curating information and coming up with the concepts, but a majority just made the image and didn’t think about crawlability or the page title from an SEO standpoint.

Here is an example of an image search for Portland infographics.

image search results portland infographics

Take an infographic and repurpose it into crawlable content with a new twist or timely additions. Usually infographics share their data sources in the footer so you can easily find similar, new, or more information and create some seriously compelling data based content. You can even link to or share the infographic as part of it if you would like.

Become an Upworthy of local content

No one I know does this better than Movoto. Read the link for their own spin on how they did it and then look at these examples and share numbers from their local content.

60k shares in Boise by appealing to that hometown knowledge.

movoto boise content

65k shares in Salt Lake following the same formula.

movoto salt lake city content

It seems to work with video as well.

movoto video results

Think like a local directory

Directories understand where content should be housed. Not every local piece should be on the blog. Look at where Trip Advisor’s famous “Things to Do” page is listed. Right on the main city page.

trip advisor things to do in salt lake city

Or look at how many timely, fresh, quality pieces of content Yelp is showcasing from their main city page.

yelp main city page

The key point to understand is that local content isn’t just about being unique on a landing page. It is about BEING local and useful.

Ideas of things that are local:

  • Sports teams
  • Local celebrities or heroes 
  • Groups and events
  • Local pride points
  • Local pain points

Ideas of things that are useful:

  • Directions
  • Favorite local spots
  • Granular details only “locals” know

The other point to realize, looking back at our definition of scale, is that you don’t need to take shortcuts that un-localize the experience for users. Figure out and test one location at a time until you have a winning formula, and then move forward at a speed that ensures a quality local experience.

5. Create a content calendar

I am not going to get into telling you exactly how or what your content calendar needs to include. That will largely be based on the size and organization of your team and every situation might call for a unique approach. What I will do is explain how we do things at Nifty.

  1. We follow the steps above.
  2. We schedule the big projects and timelines first. These could be months out or weeks out. 
  3. We determine the weekly deliverables, checkpoints, and publish times.
  4. We put all of the information as tasks assigned to individuals or teams in Asana.

asana content calendar

The information then can be viewed by individual, team, groups of team, due dates, or any other way you would wish to sort. Repeatable tasks can be scheduled and we can run our entire operation visible to as many people as need access to the information through desktop or mobile devices. That is what works for us.

6. Launch and promote content

My personal favorite way to promote local content (other than the obvious ideas of sharing with your current followers or outreaching to local influencers) is to use Facebook ads to target the specific local personas you are trying to reach. Here is an example:

I just wrapped up playing Harold Hill in our community’s production of The Music Man. When you live in a small town like Burley, Idaho you get the opportunity to play a lead role without having too much talent or a glee-based upbringing. You also get the opportunity to do all of the advertising, set design, and costuming yourself, and sometimes even get to pay for it.

For my advertising responsibilities, I decided to write a few blog posts and drive traffic to them. As any good Harold Hill would do, I used fear tactics.

music man blog post

I then created Facebook ads with the following stats: a cost of $.06 per click, a 12.7% click-through rate, and natural organic sharing that led to thousands of visits in a small Idaho farming community where people still think a phone book is the only way to find local businesses.

facebook ads setup

Then we did it again.

There was a protestor in Burley for over a year who parked a red pickup with signs saying things like, “I wud not trust Da Mayor” or “Don’t Bank wid Zions”. Basically, you weren’t working hard enough if your name didn’t get on the truck during the year.

Everyone knew that ol’ red pickup as it was parked on the corner of Main and Overland, which is one of the few stoplights in town. Then one day it was gone. We came up with the idea to bring the red truck back, put signs on it that said, “I wud Not Trust Pool Tables” and “Resist Sins n’ Corruption” and other things that were part of The Music Man and wrote another blog complete with pictures.

[Image: The red truck with Music Man signs]

Then I created another Facebook ad.

[Image: Facebook ads setup]

A little under $200 in ad spend resulted in thousands more visits to the site, which promoted the play and sold tickets to a generation that might not otherwise have been familiar with the show.
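To put those figures in perspective, here is a hypothetical back-of-the-envelope calculation using the numbers from this campaign (roughly $200 in spend, a $0.06 cost per click, and a 12.7% click-through rate). The function name and rounding choices are my own, not anything from the post:

```python
def estimate_ad_reach(budget, cpc, ctr):
    """Estimate clicks and impressions for a given ad budget, CPC, and CTR."""
    clicks = budget / cpc        # each click costs `cpc` dollars
    impressions = clicks / ctr   # CTR is defined as clicks / impressions
    return round(clicks), round(impressions)

# Roughly the campaign described above: ~3,333 clicks from ~26,000 impressions,
# which lines up with "thousands more visits" in a town the size of Burley.
clicks, impressions = estimate_ad_reach(budget=200, cpc=0.06, ctr=0.127)
print(clicks, impressions)
```

The takeaway is that a low CPC compounds quickly: at six cents a click, even a modest budget buys enough visits to saturate a small local audience.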

All of it relied on local targeting, and there was no other way we could have driven that much traffic in a community like Burley without paying Facebook and crafting click-worthy ads in hopes that the promotion would lead to organic sharing.

7. Measure and report

This is another very personal step where everyone will have different needs. At Nifty, we put together highly customized weekly or monthly reports that cover the plan, the execution, and relevant stats such as traffic to specific content or locations, share data, revenue or lead data if available, an analysis of what worked and what didn't, and the plan for the following period.

There is no exact set of data that needs to be shared. Everyone will want something slightly different, which is why we moved away from automated reporting years ago (around the time we moved away from auto link building... hehe) and built our reports around our clients, even when it took added time.

I have always said that the product of an SEO or content shop is the report. That is what people buy, because it is likely all they will ever see or understand of the work.
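Since the report itself is the deliverable, it can help to treat it like one. Here is a minimal sketch of how a per-client report section might be assembled; the field names and structure are purely illustrative assumptions, not Nifty's actual format:

```python
def build_report(client, period, metrics, analysis, next_steps):
    """Assemble a plain-text report section covering the elements described
    above: key stats, what worked and what didn't, and the plan ahead."""
    lines = [f"Report for {client} ({period})", "", "Key metrics:"]
    for name, value in metrics.items():
        lines.append(f"  - {name}: {value}")
    lines += [
        "",
        f"What worked / what didn't: {analysis}",
        f"Plan for next period: {next_steps}",
    ]
    return "\n".join(lines)

# Hypothetical client data for illustration only.
report = build_report(
    client="Example Local Biz",
    period="Week of March 4",
    metrics={"Visits to local content": 4200, "Shares": 310, "Leads": 18},
    analysis="The Facebook-promoted post drove most of the new traffic.",
    next_steps="Repeat the promotion with a second local-interest post.",
)
print(report)
```

The point of the template-over-automation design is that each field stays editable per client, which matches the move away from one-size-fits-all automated reporting.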

8. In conclusion, you must refine and repeat the process

[Image: Local content strategy - refine and repeat]

From my point of view, this is by far the most important step, and it sums everything up nicely. This process model isn't perfect. There will be things that are missed, things that need to be tweaked, and ways you can improve your local content strategy and marketing all the time. The idea of the cycle is that it is never done. It never sleeps. It never quits. It never surrenders. You just keep perfecting the process until you reach the point that few locally focused companies ever achieve: your local content reaches and grows your target audience every time you click the publish button.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
