Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.


Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But because of the increasing popularity of this article, and because there’s this hopefulness from worlds outside of the hardcore SEO world that are looking to this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO,” I wanted to address it here.

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

You make a great website that delivers great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we’ve learned about Google. He mentions two sources: one is things that Google tells us, and the other is SEO experiments. I think both of those are true, although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results, kind of having this feel for why things rank and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really sticks in my craw: I don’t think “write quality content” means anything. You tell me. When you hear that, to me it’s a totally non-actionable, non-useful phrase, a piece of advice so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rankings are broadly correlated, but it’s not a one-to-one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five (and sort of a half) steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (he notes that most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, I brainstormed just off the top of my head in the 10 minutes before we filmed this video a list. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we block them in robots.txt? Should we keep them out of the index with meta robots noindex? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve given them a status code 200. Or you’ve used a 404 code when you should have used a 410, which means permanently removed, to get the page completely out of the indexes, as opposed to having Google revisit it and keep it in the index.

Downtime procedures. So there’s a specific 5xx code for this, a 503 (Service Unavailable), that tells search engines, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than using a 404, which tells them, “This page is now an error.”

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.
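If you’re curious what that looks like in practice, here’s a minimal sketch assuming a Flask app (my assumption; any framework can do the same): during planned downtime, answer every request with a 503 and a Retry-After header instead of a 404, so crawlers treat the outage as temporary.

from flask import Flask, Response

app = Flask(__name__)
MAINTENANCE = True  # hypothetical switch; wire this up to your own deploy tooling

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    if MAINTENANCE:
        # 503 tells crawlers the outage is temporary; Retry-After suggests when to come back
        return Response("We're down for maintenance. Please check back shortly.",
                        status=503,
                        headers={"Retry-After": "3600"})
    return "Normal page content would be served here."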

  • International and multi-language targeting issues. I won’t go into that. But you have to know the protocols there. Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics connecting those up and trying to see: Why did we go up or down? Did we have less pages being indexed, more pages being indexed, more pages getting traffic less, more keywords less?
  • Understanding advanced search parameters. Today, just today, I was checking out the related parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and so on. inanchor: doesn’t work anymore, never mind about that one.

I have to go faster, because we’re just going to run out of these. Like, come on. Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you, you have all sorts of errors, and you don’t know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, X-Robots-Tag, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical, you don’t know what those are, and you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or MozScape, or Ahrefs, Majestic, SEMrush, or the Alchemy API. Those APIs can do powerful things for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there (there’s a quick sketch of the idea after this list). There’s a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
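To illustrate the API point above, here’s a rough sketch of pulling metrics into a CSV you can open in Excel or pull into a spreadsheet. The endpoint, key, and field names are placeholders rather than any real service’s API; swap in the details from the documentation of whichever API you’re using.

import csv
import requests

API_ENDPOINT = "https://api.example.com/url-metrics"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                              # hypothetical credential

urls = ["https://moz.com/", "https://moz.com/blog"]

with open("url_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "links", "authority"])
    for url in urls:
        resp = requests.get(API_ENDPOINT, params={"target": url, "key": API_KEY})
        data = resp.json()  # assumes a JSON response with these (hypothetical) fields
        writer.writerow([url, data.get("links"), data.get("authority")])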

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



An Open-Source Tool for Checking rel-alternate-hreflang Annotations

Posted by Tom-Anthony

In the Distilled R&D department we have been ramping up the amount of automated monitoring and analysis we do, with an internal system monitoring our clients’ sites both directly and via various data sources to ensure they remain healthy and we are alerted to any problems that may arise.

Recently we started work to add in functionality for including the rel-alternate-hreflang annotations in this system. In this blog post I’m going to share an open-source Python library we’ve just started work on for the purpose, which makes it easy to read the hreflang entries from a page and identify errors with them.

If you’re not a Python aficionado then don’t despair, as I have also built a ready-to-go tool for you to use, which will quickly do some checks on the hreflang entries for any URL you specify. 🙂

Google’s Search Console (formerly Webmaster Tools) does have some basic rel-alternate-hreflang checking built in, but it is limited in how you can use it and you are restricted to using it for verified sites.

rel-alternate-hreflang checklist

Before we introduce the code, I wanted to quickly review a list of five easy and common mistakes that we will want to check for when looking at rel-alternate-hreflang annotations:

  • return tag errors – Every alternate language/locale URL of a page should, itself, include a link back to the original page. This makes sense, but I’ve seen people make mistakes with it fairly often (a basic version of this check is sketched after this list).
  • indirect / broken links – Links to alternate language/region versions of the page should not go via redirects, and should not link to missing or broken pages.
  • multiple entries – There should never be multiple entries for a single language/region combo.
  • multiple defaults – You should never have more than one x-default entry.
  • conflicting modes – rel-alternate-hreflang entries can be implemented via inline HTML, XML sitemaps, or HTTP headers. For any one set of pages only one implementation mode should be used.
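To make the first check concrete, here’s a minimal sketch of a return-tag check using requests and lxml (my choice of libraries; this is not part of polly). It only looks at annotations in the page HTML and does no redirect handling or URL normalisation, which is exactly the sort of thing polly is meant to take care of for you.

import requests
from lxml import html

def hreflang_entries(url):
    # Map of hreflang value -> alternate URL, taken from the page's <link> tags
    doc = html.fromstring(requests.get(url).content)
    return {link.get("hreflang"): link.get("href")
            for link in doc.xpath('//link[@rel="alternate"][@hreflang]')}

def missing_return_tags(url):
    # Return (hreflang, alternate URL) pairs that do not link back to the original page
    problems = []
    for lang, alt_url in hreflang_entries(url).items():
        if url not in hreflang_entries(alt_url).values():
            problems.append((lang, alt_url))
    return problems

print(missing_return_tags("http://www.example.com/"))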

So now imagine that we want to automate these checks quickly and simply…

Introducing: polly – the hreflang checker library

polly is the name for the library we have developed to help us solve this problem, and we are releasing it as open source so the SEO community can use it freely to build upon. We only started work on it last week, but we plan to continue developing it, and will also accept contributions to the code from the community, so we expect its feature set to grow rapidly.

If you are not comfortable tinkering with Python, then feel free to skip down to the next section of the post, where there is a tool that is built with polly which you can use right away.

Still here? Ok, great. You can install polly easily via pip:

pip install polly

You can then create a PollyPage() object which will do all our work and store the data simply by instantiating the class with the desired URL:

my_page = PollyPage("http://www.facebook.com/")

You can quickly see the hreflang entries on the page by running:

print my_page.alternate_urls_map

You can list all the hreflang values encountered on a page, and which countries and languages they cover:

print my_page.hreflang_values
print my_page.languages
print my_page.regions

You can also check various aspects of a page, such as whether the pages it includes in its rel-alternate-hreflang entries point back, or whether there are entries that are not retrievable (due to 404 or 500 errors, etc.):

print my_page.is_default
print my_page.no_return_tag_pages()
print my_page.non_retrievable_pages()
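For a quick combined report, here’s a sketch that rolls those checks together. It uses only the attributes shown above, and the import path is an assumption on my part, so check the GitHub README for the current interface.

from polly import PollyPage  # assumed import path; see the project README

def hreflang_report(url):
    page = PollyPage(url)
    print("hreflang values: {}".format(page.hreflang_values))
    print("languages: {}".format(page.languages))
    print("regions: {}".format(page.regions))
    print("missing return tags: {}".format(page.no_return_tag_pages()))
    print("non-retrievable alternates: {}".format(page.non_retrievable_pages()))

hreflang_report("http://www.facebook.com/")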

Get more instructions and grab the code at the polly GitHub page. Hit me up in the comments with any questions.

Free tool: hreflang.ninja

I have put together a very simple tool that uses polly to run some of the checks we highlighted above as being common mistakes with rel-alternate-hreflang, which you can visit right now and start using:

http://hreflang.ninja

Simply enter a URL and hit enter, and you should see something like:

[Image: example output from hreflang.ninja]

The tool shows you the rel-alternate-hreflang entries found on the page, the language and region of those entries, the alternate URLs, and any errors identified with the entry. It is perfect for doing quick’n’dirty checks of a URL to identify any errors.

As we add additional functionality to polly we will be updating hreflang.ninja as well, so please tweet me with feature ideas or suggestions.

To-do list!

This is the first release of polly and currently we only handle annotations that are in the HTML of the page, not those in the XML sitemap or HTTP headers. However, we are going to be updating polly (and hreflang.ninja) over the coming weeks, so watch this space! 🙂

Resources

Here are a few links you may find helpful for hreflang:

Got suggestions?

With the increasing number of SEO directives and annotations available, and the ever-changing guidelines around how to deploy them, it is important to automate whatever areas possible. Hopefully polly is helpful to the community in this regard, and we want to hear what ideas you have for making these tools more useful – here in the comments or via Twitter.



Simple Steps for Conducting Creative Content Research

Posted by Hannah_Smith

Most frequently, the content we create at Distilled is designed to attract press coverage, social shares, and exposure (and links) on sites our clients’ target audience reads. That’s a tall order.

Over the years we’ve had our hits and misses, and through this we’ve recognised the value of learning about what makes a piece of content successful. Coming up with a great idea is difficult, and it can be tough to figure out where to begin. Today, rather than leaping headlong into brainstorming sessions, we start with creative content research.

What is creative content research?

Creative content research enables you to answer the questions:

“What are websites publishing, and what are people sharing?”

From this, you’ll then have a clearer view on what might be successful for your client.

A few years ago this required quite a lot of work to figure out. Today, happily, it’s much quicker and easier. In this post I’ll share the process and tools we use.

Whoa there… Why do I need to do this?

I think that the value in this sort of activity lies in a couple of directions:

a) You can learn a lot by deconstructing the success of others…

I’ve been taking stuff apart to try to figure out how it works for about as long as I can remember, so applying this process to content research felt pretty natural to me. Perhaps more importantly though, I think that deconstructing content is actually easier when it isn’t your own. You’re not involved, invested, or in love with the piece so viewing it objectively and learning from it is much easier.

b) Your research will give you a clear overview of the competitive landscape…

As soon as a company elects to start creating content, they gain a whole raft of new competitors. In addition to their commercial competitors (i.e. those who offer similar products or services), the company also gains content competitors. For example, if you’re a sports betting company and plan to create content related to the sports events that you’re offering betting markets on; then you’re competing not just with other betting companies, but every other publisher who creates content about these events. That means major news outlets, sports news site, fan sites, etc. To make matters even more complicated, it’s likely that you’ll actually be seeking coverage from those same content competitors. As such, you need to understand what’s already being created in the space before creating content of your own.

c) You’re giving yourself the data to create a more compelling pitch…

At some point you’re going to need to pitch your ideas to your client (or your boss if you’re working in-house). At Distilled, we’ve found that getting ideas signed off can be really tough. Ultimately, a great idea is worthless if we can’t persuade our client to give us the green light. This research can be used to make a more compelling case to your client and get those ideas signed off. (Incidentally, if getting ideas signed off is proving to be an issue you might find this framework for pitching creative ideas useful).

Where to start

Good ideas start with a good brief, however it can be tough to pin clients down to get answers to a long list of questions.

As a minimum you’ll need to know the following:

  • Who are they looking to target?
    • Age, sex, demographic
    • What’s their core focus? What do they care about? What problems are they looking to solve?
    • Who influences them?
    • What else are they interested in?
    • Where do they shop and which brands do they buy?
    • What do they read?
    • What do they watch on TV?
    • Where do they spend their time online?
  • Where do they want to get coverage?
    • We typically ask our clients to give us a wishlist of 10 or so sites they’d love to get coverage on
  • Which topics are they comfortable covering?
    • This question is often the toughest, particularly if a client hasn’t created content specifically for links and shares before. Often clients are uncomfortable about drifting too far away from their core business—for example, if they sell insurance, they’ll typically say that they really want to create a piece of content about insurance. Whilst this is understandable from the clients’ perspective it can severely limit their chances of success. It’s definitely worth offering up a gentle challenge at this stage—I’ll often cite Red Bull, who are a great example of a company who create content based on what their consumers love, not what they sell (i.e. Red Bull sell soft drinks, but create content about extreme sports because that’s the sort of content their audience love to consume). It’s worth planting this idea early, but don’t get dragged into a fierce debate at this stage—you’ll be able to make a far more compelling argument once you’ve done your research and are pitching concrete ideas.

Processes, useful tools and sites

Now you have your brief, it’s time to begin your research.

Given that we’re looking to uncover “what websites are publishing and what’s being shared,” it won’t surprise you to learn that I pay particular attention to pieces of content and the coverage they receive. For each piece that I think is interesting I’ll note down the following (one way to structure these notes is sketched after the list):

  • The title/headline
  • A link to the coverage (and to the original piece if applicable)
  • How many social shares the coverage earned (and the original piece earned)
  • The number of linking root domains the original piece earned
  • Some notes about the piece itself: why it’s interesting, why I think it got shares/coverage
  • Any gaps in the content, whether or not it’s been executed well
  • How we might do something similar (if applicable)
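If you want to keep these notes consistent, here’s one lightweight way to structure them, sketched as a Python dataclass; a spreadsheet with the same columns works just as well.

from dataclasses import dataclass

@dataclass
class ContentExample:
    title: str
    coverage_url: str
    original_url: str = ""        # if the coverage points at a separate original piece
    coverage_shares: int = 0
    original_shares: int = 0
    linking_root_domains: int = 0
    notes: str = ""               # why it's interesting, why it earned shares/coverage
    gaps: str = ""                # gaps in the content, how well it was executed
    adaptation_idea: str = ""     # how we might do something similar

examples = [
    ContentExample(title="Example headline",
                   coverage_url="https://publisher.example.com/story",
                   coverage_shares=12000,
                   linking_root_domains=85,
                   notes="Strong emotional hook; timed to a news event"),
]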

Whilst I’m doing this I’ll also make a note of specific sites I see being frequently shared (I tend to check these out separately later on), any interesting bits of research (particularly if I think there might be an opportunity to do something different with the data), interesting threads on forums etc.

When it comes to kicking off your research, you can start wherever you like, but I’d recommend that you cover off each of the areas below:

What does your target audience share?

Whilst this activity might not uncover specific pieces of successful content, it’s a great way of getting a clearer understanding of your target audience, and getting a handle on the sites they read and the topics which interest them.

  • Review social profiles / feeds
    • If the company you’re working for has a Facebook page, it shouldn’t be too difficult to find some people who’ve liked the company page and have a public profile. It’s even easier on Twitter where most profiles are public. Whilst this won’t give you quantitative data, it does put a human face to your audience data and gives you a feel for what these people care about and share. In addition to uncovering specific pieces of content, this can also provide inspiration in terms of other sites you might want to investigate further and ideas for topics you might want to explore.
  • Demographics Pro
    • This service infers demographic data from your clients’ Twitter followers. I find it particularly useful if the client doesn’t know too much about their audience. In addition to demographic data, you get a breakdown of professions, interests, brand affiliations, and the other Twitter accounts they follow and who they’re most influenced by. This is a paid-for service, but there are pay-as-you-go options in addition to pay monthly plans.

Finding successful pieces of content on specific sites

If you’ve a list of sites you know your target audience read, and/or you know your client wants to get coverage on, there are a bunch of ways you can uncover interesting content:

  • Using your link research tool of choice (e.g. Open Site Explorer, Majestic, ahrefs) you can run a domain level report to see which pages have attracted the most links. This can also be useful if you want to check out commercial competitors to see which pieces of content they’ve created have attracted the most links.
  • There are also tools which enable you to uncover the most shared content on individual sites. You can use Buzzsumo to run content analysis reports on individual domains which provide data on average social shares per post, social shares by network, and social shares by content type.
  • If you just want to see the most shared content for a given domain you can run a simple search on Buzzsumo using the domain; and there’s also the option to refine by topic. For example a search like [guardian.com big data] will return the most shared content on the Guardian related to big data. You can also run similar reports using ahrefs’ Content Explorer tool.

Both Buzzsumo and ahrefs are paid tools, but both offer free trials. If you need to explore the most shared content without using a paid tool, there are other alternatives. Check out Social Crawlytics which will crawl domains and return social share data, or alternatively, you can crawl a site (or section of a site) and then run the URLs through SharedCount‘s bulk upload feature.

Finding successful pieces of content by topic

When searching by topic, I find it best to begin with a broad search and then drill down into more specific areas. For example, if I had a client in the financial services space, I’d start out looking at a broad topic like “money” rather than shooting straight to topics like loans or credit cards.

As mentioned above, both Buzzsumo and ahrefs allow you to search for the most shared content by topic and both offer advanced search options.

Further inspiration

There are also several sites I like to look at for inspiration. Whilst these sites don’t give you a great steer on whether or not a particular piece of content was actually successful, with a little digging you can quickly find the original source and pull link and social share data:

  • Visually has a community area where users can upload creative content. You can search by topic to uncover examples.
  • TrendHunter has a searchable archive of creative ideas; it features products, creative campaigns, marketing campaigns, advertising, and more. It’s best to keep your searches broad if you’re looking at this site.
  • Check out Niice (a moodboard app) which also has a searchable archive of handpicked design inspiration.
  • Searching Pinterest can allow you to unearth some interesting bits and pieces as can Google image searches and regular Google searches around particular topics.
  • Reviewing relevant sections of discussion sites like Quora can provide insight into what people are asking about particular topics which may spark a creative idea.

Moving from data to insight

By this point you’ve (hopefully) got a long list of content examples. Whilst this is a great start, effectively what you’ve got here is just data, now you need to convert this to insight.

Remember, we’re trying to answer the questions: “What are websites publishing, and what are people sharing?”

Ordinarily as I go through the creative content research process, I start to see patterns or themes emerge. For example, across a variety of topic areas you’ll see that the most shared content tends to be news. Whilst this is good to know, it’s not necessarily something that’s going to be particularly actionable. You’ll need to dig a little deeper—what else (aside from news) is given coverage? Can you split those things into categories or themes?

This is tough to explain in the abstract, so let me give you an example. We’d identified a set of music sites (e.g. Rolling Stone, NME, CoS, Stereogum, Pitchfork) as target publishers for a client.

Here’s a summary of what I concluded following my research:

The most-shared content on these music publications is news: album launches, new singles, videos of performances etc. As such, if we can work a news hook into whatever we create, this could positively influence our chances of gaining coverage.

Aside from news, the content which gains traction tends to fall into one of the following categories:

Earlier in this post I mentioned that it can be particularly tough to create content which attracts coverage and shares if clients feel strongly that they want to do something directly related to their product or service. The example I gave at the outset was a client who sold insurance and was really keen to create something about insurance. You’re now in a great position to win an argument with data, as thanks to your research you’ll be able to cite several pieces of insurance-related content which have struggled to gain traction. But it’s not all bad news as you’ll also be able to cite other topics which are relevant to the client’s target audience and stand a better chance of gaining coverage and shares.

Avoiding the pitfalls

There are potential pitfalls when it comes to creative content research in that it’s easy to leap to erroneous conclusions. Here’s some things to watch out for:

Make sure you’re identifying outliers…

When seeking out successful pieces of content you need to be certain that what you’re looking at is actually an outlier. For example, the average post on BuzzFeed gets over 30k social shares. As such, that post you found with just 10k shares is not an outlier. It’s done significantly worse than average. It’s therefore not the best post to be holding up as a fabulous example of what to create to get shares.
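Here’s a tiny sketch of that check: compare a piece’s shares against the publishing site’s own baseline rather than an absolute number. The figures are made up for illustration, and the “outlier” multiple is a judgment call.

from statistics import mean

site_shares = [31000, 28000, 45000, 22000, 39000]  # typical recent posts on the site
candidate_shares = 10000                           # the piece you're evaluating

baseline = mean(site_shares)
if candidate_shares >= 2 * baseline:  # pick whatever multiple you consider exceptional
    print("Looks like an outlier worth studying.")
else:
    print("At or below the site's normal performance; not a model to copy.")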

Don’t get distracted by formats…

Pay more attention to the idea than the format. For example, the folks at Mashable kindly covered an infographic about Instagram which we created for a client. However, the takeaway here is not that Instagram infographics get coverage on Mashable. Mashable didn’t cover this because we created an infographic. They covered the piece because it told a story in a compelling and unusual way.

You probably shouldn’t create a listicle…

This point is related to the point above. In my experience, unless you’re a publisher with a huge, engaged social following, that listicle of yours is unlikely to gain traction. Listicles on huge publisher sites get shares; listicles on client sites typically don’t. This is doubly important if you’re also seeking coverage, as listicles on client sites don’t typically get links or coverage on other sites.

How we use the research to inform our ideation process

At Distilled, we typically take a creative brief and complete creative content research and then move into the ideation process. A summary of the research is included within the creative brief, and this, along with a copy of the full creative content research is shared with the team.

The research acts as inspiration and direction and is particularly useful in terms of identifying potential topics to explore but doesn’t mean team members don’t still do further research of their own.

This process by no means acts as a silver bullet, but it definitely helps us come up with ideas.


Thanks for sticking with me to the end!

I’d love to hear more about your creative content research processes and any tips you have for finding inspirational content. Do let me know via the comments.




​The 3 Most Common SEO Problems on Listings Sites

Posted by Dom-Woodman

Listings sites have a very specific set of search problems that you don’t run into everywhere else. By day I’m one of Distilled’s analysts, but by night I run a job listings site, teflSearch. So, for my first Moz Blog post I thought I’d cover the three search problems with listings sites that I spent far too long agonising about.

Quick clarification time: What is a listings site (i.e. will this post be useful for you)?

The classic listings site is Craigslist, but plenty of other sites act like listing sites:

  • Job sites like Monster
  • E-commerce sites like Amazon
  • Matching sites like Spareroom

1. Generating quality landing pages

The landing pages on listings sites are incredibly important. These pages are usually the primary drivers of converting traffic, and they’re usually generated automatically (or are occasionally custom category pages).

For example, if I search “Jobs in Manchester“, you can see nearly every result is an automatically generated landing page or category page.

There are three common ways to generate these pages (occasionally a combination of more than one is used):

  • Faceted pages: These are generated by facets—groups of preset filters that let you filter the current search results. They usually sit on the left-hand side of the page.
  • Category pages: These pages are listings which have already had a filter applied and can’t be changed. They’re usually custom pages.
  • Free-text search pages: These pages are generated by a free-text search box.

Those definitions are still a bit general; let’s clear them up with some examples:

Amazon uses a combination of categories and facets. If you click on browse by department you can see all the category pages. Then on each category page you can see a faceted search. Amazon is so large that it needs both.

Indeed generates its landing pages through free text search, for example if we search for “IT jobs in manchester” it will generate: IT jobs in manchester.

teflSearch generates landing pages using just facets. The jobs in China landing page is simply a facet of the main search page.

Each method has its own search problems when used for generating landing pages, so let’s tackle them one by one.


Aside

Facets and free text search will typically generate pages with parameters, e.g. a search for “dogs” would produce:

www.mysite.com?search=dogs

But to make the URL user-friendly, sites will often rewrite the URLs to display them as folders:

www.mysite.com/results/dogs/

These are still just ordinary free-text searches and facets; the URLs are simply more user-friendly. (They’re a lot easier to work with in robots.txt too!)
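As a quick illustration, here’s a minimal sketch assuming a Flask app (my assumption; any framework works the same way): both the parameter form and the folder form resolve to the same ordinary free-text search handler.

from flask import Flask, request

app = Flask(__name__)

def run_search(query):
    return "Results for: {}".format(query)  # your real search logic would go here

@app.route("/")
def parameter_search():
    # www.mysite.com?search=dogs
    return run_search(request.args.get("search", ""))

@app.route("/results/<query>/")
def friendly_search(query):
    # www.mysite.com/results/dogs/
    return run_search(query)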


Free search (& category) problems

If you’ve decided the base of your search will be a free text search, then you’ll have two major goals:

  • Goal 1: Helping search engines find your landing pages
  • Goal 2: Giving them link equity.

Solution

Search engines won’t use search boxes, so the solution to both problems is to provide links to the valuable landing pages so that search engines can find them.

There are plenty of ways to do this, but two of the most common are:

  • Category links alongside a search

    Photobucket uses a free text search to generate pages, but if we look at an example search for photos of dogs, we can see the categories which define the landing pages along the right-hand side. (This is also an example of URL-friendly searches!)

  • Putting the main landing pages in a top-level menu

    Indeed also uses free text to generate landing pages, and they have a browse jobs section which contains the URL structure to allow search engines to find all the valuable landing pages.

Breadcrumbs are also often used in addition to the two above and in both the examples above, you’ll find breadcrumbs that reinforce that hierarchy.

Category (& facet) problems

Categories, because they tend to be custom pages, don’t actually have many search disadvantages. Instead it’s the other attributes that make them more or less desirable. You can create them for the purposes you want and so you typically won’t have too many problems.

However, if you also use a faceted search in each category (like Amazon) to generate additional landing pages, then you’ll run into all the problems described in the next section.

At first facets seem great, an easy way to generate multiple strong relevant landing pages without doing much at all. The problems appear because people don’t put limits on facets.

Let’s take the job page on teflSearch. We can see it has 18 facets, each with many options. Some of these options will generate useful landing pages:

The China facet in countries will generate “Jobs in China”; that’s a useful landing page.

On the other hand, the “Conditional Bonus” facet will generate “Jobs with a conditional bonus,” and that’s not so great.

We can also see that the options within a single facet aren’t always useful. As of writing, I have a single job available in Serbia. That’s not a useful search result, and the poor user engagement combined with the tiny amount of content will be a strong signal to Google that it’s thin content. Depending on the scale of your site it’s very easy to generate a mass of poor-quality landing pages.

Facets generate other problems too. The primary one being they can create a huge amount of duplicate content and pages for search engines to get lost in. This is caused by two things: The first is the sheer number of possibilities they generate, and the second is because selecting facets in different orders creates identical pages with different URLs.

We end up with four goals for our facet-generated landing pages:

  • Goal 1: Make sure our searchable landing pages are actually worth landing on, and that we’re not handing a mass of low-value pages to the search engines.
  • Goal 2: Make sure we don’t generate multiple copies of our automatically generated landing pages.
  • Goal 3: Make sure search engines don’t get caught in the metaphorical plastic six-pack rings of our facets.
  • Goal 4: Make sure our landing pages have strong internal linking.

The first goal needs to be set internally; you’re always going to be the best judge of the number of results that need to be present on a page in order for it to be useful to a user. I’d argue you can rarely ever go below three, but it depends both on your business and on how much content fluctuates on your site, as the useful landing pages might also change over time.

We can solve the next three problems as a group. There are several possible solutions depending on what skills and resources you have access to; here are two:

Category/facet solution 1: Blocking the majority of facets and providing external links
  • Easiest method
  • Good if your valuable category pages rarely change and you don’t have too many of them.
  • Can be problematic if your valuable facet pages change a lot

Nofollow all your facet links, and noindex and block (via robots.txt) any category or facet pages which aren’t valuable or which sit deeper than x facet/folder levels into your search.

You set x by looking at where your useful facet pages that have search volume exist. So, for example, if you have three facets for televisions: manufacturer, size, and resolution, and even combinations of all three have multiple results and search volume, then you could index everything up to three levels.

On the other hand, if people are searching for three levels (e.g. “Samsung 42″ Full HD TV”) but you only have one or two results for three-level facets, then you’d be better off indexing two levels and letting the product pages themselves pick up long-tail traffic for the third level.

If you have valuable facet pages that exist deeper than one facet or folder into your search, then this creates some duplicate content problems, dealt with in the aside “Indexing more than one level of facets” below.

The immediate problem with this set-up, however, is that in one stroke we’ve removed most of the internal links to our category pages, and by no-following all the facet links, search engines won’t be able to find your valuable category pages.

In order to re-create the linking, you can add a top-level drop-down menu to your site containing the most valuable category pages, add category links elsewhere on the page, or create a separate part of the site with links to the valuable category pages.

You can see the top-level drop-down menu on teflSearch (it’s the search jobs menu); the other two examples are demonstrated by Photobucket and Indeed, respectively, in the previous section.

The big advantage of this method is how quick it is to implement: it doesn’t require any fiddly internal logic, and adding an extra menu option is usually minimal effort.
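Once the blocking is in place, it’s worth sanity-checking that the rules behave as intended. Here’s a small sketch using Python’s standard urllib.robotparser; the URLs are hypothetical stand-ins for your own valuable category pages and deep facet pages.

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.mysite.com/robots.txt")
rp.read()

should_be_crawlable = ["http://www.mysite.com/jobs/china/"]
should_be_blocked = ["http://www.mysite.com/jobs/china/conditional-bonus/weekend-work/"]

for url in should_be_crawlable:
    print(url, "crawlable:", rp.can_fetch("*", url))  # expect True
for url in should_be_blocked:
    print(url, "crawlable:", rp.can_fetch("*", url))  # expect False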

Category/facet solution 2: Creating internal logic to work with the facets

  • Requires new internal logic
  • Works for large numbers of category pages with value that can change rapidly

There are four parts to the second solution:

  1. Select valuable facet categories and allow those links to be followed. No-follow the rest.
  2. No-index all pages that return a number of items below the threshold for a useful landing page
  3. No-follow all facets on pages with a search depth greater than x.
  4. Block all facet pages deeper than x level in robots.txt

As with the last solution, x is set by looking at where your useful facet pages exist that have search volume (full explanation in the first solution), and if you’re indexing more than one level you’ll need to check out the aside below to see how to deal with the duplicate content it generates.
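Here’s a minimal sketch of what that internal logic might look like. The facet names, depth limit, and results threshold are placeholders; set them from your own search-volume research.

VALUABLE_FACETS = {"country", "city", "job_type"}  # facets worth followed links
MAX_FOLLOWED_DEPTH = 2                             # the "x" from the rules above
MIN_RESULTS_FOR_INDEX = 3                          # threshold for a useful landing page

def facet_link_rel(facet_name, current_depth):
    # Rules 1 and 3: follow links only for valuable facets within the depth limit
    if facet_name in VALUABLE_FACETS and current_depth < MAX_FOLLOWED_DEPTH:
        return ""  # plain, followed link
    return "nofollow"

def meta_robots(result_count, current_depth):
    # Rule 2: noindex thin pages; pages deeper than x should also be blocked in robots.txt (rule 4)
    if result_count < MIN_RESULTS_FOR_INDEX or current_depth > MAX_FOLLOWED_DEPTH:
        return "noindex, follow"
    return "index, follow"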


Aside: Indexing more than one level of facets

If you want more than one level of facets to be indexable, then this will create certain problems.

Suppose you have a facet for size:

  • Televisions: Size: 46″, 44″, 42″

And want to add a brand facet:

  • Televisions: Brand: Samsung, Panasonic, Sony

This will create duplicate content because the search engines will be able to follow your facets in both orders, generating:

  • Television – 46″ – Samsung
  • Television – Samsung – 46″

You’ll have to either rel canonical your duplicate pages with another rule or set up your facets so they create a single unique URL.
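Here’s a small sketch of the “single unique URL” approach: build facet URLs from the selected options in a fixed (alphabetical) order, so either selection order collapses into one URL. The URL pattern itself is just an example.

def facet_url(base, selected):
    # Sort facets alphabetically so "46-inch then Samsung" and "Samsung then 46-inch" produce the same URL
    parts = ["{}-{}".format(facet, value.lower().replace('"', ""))
             for facet, value in sorted(selected.items())]
    return base + "/".join(parts) + "/"

print(facet_url("http://www.mysite.com/televisions/", {"size": '46"', "brand": "Samsung"}))
# -> http://www.mysite.com/televisions/brand-samsung/size-46/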

You also need to be aware that each followable facet you add will multiply with each other followable facet, and it’s very easy to generate a mass of pages for search engines to get stuck in. Depending on your setup you might need to block more paths in robots.txt or set up more logic to prevent them being followed.

Letting search engines index more than one level of facets adds a lot of possible problems; make sure you’re keeping track of them.


2. User-generated content cannibalization

This is a common problem for listings sites (assuming they allow user-generated content). If you’re an e-commerce site that only lists its own products, you can skip this one.

As we covered in the first area, category pages on listings sites are usually the landing pages aiming for the valuable search terms, but as your users start generating pages they can often create titles and content that cannibalise your landing pages.

Suppose you’re a job site with a category page for PHP Jobs in Greater Manchester. If a recruiter then creates a job advert for PHP Jobs in Greater Manchester for the 4 positions they currently have, you’ve got a duplicate content problem.

This is less of a problem when your site is large and your categories are mature; it will be obvious to any search engine which pages are your high-value category pages. But early on, when you’re lacking authority and individual listings might contain more relevant content than your own search pages, this can be a problem.

Solution 1: Create structured titles

Set the <title> differently from the on-page title. Depending on the variables you have available, you can set the title tag programmatically, using other information given by the user, without changing the on-page title.

For example, on our imaginary job site, suppose the recruiter also provided the following information in other fields:

  • The no. of positions: 4
  • The primary area: PHP Developer
  • The name of the recruiting company: ABC Recruitment
  • Location: Manchester

We could set the <title> pattern to be: *No of positions* *The primary area* with *recruiter name* in *Location* which would give us:

4 PHP Developers with ABC Recruitment in Manchester
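Here’s a sketch of building that <title> from the structured fields; the naive pluralisation is only for illustration.

def build_listing_title(positions, primary_area, recruiter, location):
    role = primary_area if positions == 1 else primary_area + "s"
    return "{} {} with {} in {}".format(positions, role, recruiter, location)

print(build_listing_title(4, "PHP Developer", "ABC Recruitment", "Manchester"))
# -> 4 PHP Developers with ABC Recruitment in Manchester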

Setting a <title> tag allows you to target long-tail traffic by constructing detailed descriptive titles. In our above example, imagine the recruiter had specified “Castlefield, Manchester” as the location.

All of a sudden, you’ve got a perfect opportunity to pick up long-tail traffic for people searching in Castlefield in Manchester.

On the downside, you lose the ability to pick up long-tail traffic where your users have chosen keywords you wouldn’t have used.

For example, suppose Manchester has a jobs program called “Green Highway.” A job advert title containing “Green Highway” might pick up valuable long-tail traffic. Being able to discover this, however, and find a way to fit it into a dynamic title is very hard.

Solution 2: Use regex to noindex the offending pages

Perform a regex (or string contains) search on your listings’ titles and no-index the ones which cannibalise your main category pages.

If it’s not possible to construct titles with variables, or your users provide a lot of additional long-tail traffic with their own titles, then this is a great option. On the downside, you miss out on possible structured long-tail traffic that you might’ve been able to aim for.
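Here’s a rough sketch of that check: flag listings whose titles collide with a category page’s target phrase so they can be noindexed. The pattern is specific to the imaginary example above.

import re

CATEGORY_PATTERNS = [
    re.compile(r"php\s+jobs?\s+in\s+(greater\s+)?manchester", re.IGNORECASE),
]

def should_noindex(listing_title):
    return any(pattern.search(listing_title) for pattern in CATEGORY_PATTERNS)

print(should_noindex("PHP Jobs in Greater Manchester - 4 positions"))  # True
print(should_noindex("Senior PHP Developer, Castlefield"))             # False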

Solution 3: De-index all your listings

It may seem rash, but if you’re a large site with a huge number of very similar or low-content listings, you might want to consider this. There is no common standard: some sites like Indeed choose to no-index all their job adverts, whereas others like Craigslist index all their individual listings because they’ll drive long-tail traffic.

Don’t de-index them all lightly!

3. Constantly expiring content

Our third and final problem is that user-generated content doesn’t last forever. Particularly on listings sites, it’s constantly expiring and changing.

For most use cases I’d recommend 301’ing expired content to a relevant category page, with a message triggered by the redirect notifying the user of why they’ve been redirected. It typically comes out as the best combination of search and UX.
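Here’s a minimal sketch of that pattern, assuming a Flask app (my assumption) and a stand-in data store: expired listings 301 to the relevant category page with a flag the template uses to show a short notice.

from flask import Flask, abort, redirect, request, render_template_string

app = Flask(__name__)

# Stand-in data so the sketch runs; in reality this comes from your database
JOBS = {1: {"title": "PHP Developer", "category": "php", "expired": True}}

@app.route("/job/<int:job_id>/")
def job_listing(job_id):
    job = JOBS.get(job_id)
    if job is None:
        abort(404)
    if job["expired"]:
        # 301 to the relevant category page, flagging why the user was redirected
        return redirect("/jobs/{}/?expired=1".format(job["category"]), code=301)
    return render_template_string("<h1>{{ title }}</h1>", title=job["title"])

@app.route("/jobs/<category>/")
def category_page(category):
    notice = ""
    if request.args.get("expired"):
        notice = "The listing you were looking for has expired; here are similar jobs."
    return render_template_string("<p>{{ notice }}</p><h1>{{ category }} jobs</h1>",
                                  notice=notice, category=category)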

For more information or advice on how to deal with the edge cases, there’s a previous Moz blog post on how to deal with expired content which I think does an excellent job of covering this area.

Summary

In summary, if you’re working with listings sites, all three of the following need to be kept in mind:

  • How are the landing pages generated? If they’re generated using free text or facets, have the potential problems been solved?
  • Is user generated content cannibalising the main landing pages?
  • How has constantly expiring content been dealt with?

Good luck listing, and if you’ve had any other tricky problems or solutions you’ve come across working on listings sites, let’s chat about them in the comments below!

