Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But this article keeps getting more popular, and I think there’s this hopefulness from worlds outside of the hardcore SEO world, folks who are looking to this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO.”

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

You make a great website that delivers great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.
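To make that kind of diagnosis concrete, here's a minimal sketch of the check described above: fetch the raw HTML a crawler would receive first, and look for an exact on-page sentence in it. The URL and the sentence are placeholders, not anything from the article.

```python
# Minimal sketch: is this sentence in the initial HTML response, or is it
# probably being injected later by JavaScript from a remote file?
import requests

url = "https://www.example.com/some-page"           # placeholder URL
sentence = "the exact sentence you expect to rank"  # placeholder text

html = requests.get(url, timeout=10).text

if sentence.lower() in html.lower():
    print("Found in the raw HTML source; crawlers can see it on first fetch.")
else:
    print("Not in the raw HTML; it may be loaded after the page via JS/Ajax.")
```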

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and others are SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really gets in my craw. I don’t think “write quality content” means anything. You tell me. When you hear that, to me that is a totally non-actionable, non-useful phrase that’s a piece of advice that is so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (which he notes most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, in the 10 minutes before we filmed this video, I brainstormed a list just off the top of my head. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we robots.txt those out? Should we disallow from crawling with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not-found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve made them status code 200. Or you’ve used a 404 code when you should have used a 410, which is the “permanently removed” code, to get the page completely out of the indexes, as opposed to having Google revisit it and keep it in the index. (There’s a quick status-code check sketched out after the Disney story below.)

Downtime procedures. So there’s a specific code for this, the 503 (Service Unavailable), that you can use to say, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than a 404, which tells them, “This page is now an error.”

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.
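Here's the kind of quick status-code check referenced above, as a hedged sketch. The site and paths are made up; the point is simply that a nonsense URL should come back 404 (or 410), never 200, and planned downtime should answer with a 503.

```python
# Quick status-code sanity check (placeholder site and paths).
import uuid
import requests

site = "https://www.example.com"
checks = {
    "real page": f"{site}/products/widget",
    "nonsense page (should be 404, not 200)": f"{site}/{uuid.uuid4().hex}",
}

for label, url in checks.items():
    status = requests.get(url, allow_redirects=False, timeout=10).status_code
    print(f"{label}: {url} -> {status}")
```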

  • International and multi-language targeting issues. I won’t go into that. But you have to know the protocols there. Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics, connecting those up, and trying to see: Why did we go up or down? Did we have fewer pages indexed or more pages indexed? Are more or fewer pages getting traffic? Are we ranking for more or fewer keywords?
  • Understanding advanced search parameters. Today, just today, I was checking out the related: parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site, well, most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and da, da, da, da. (inanchor: doesn’t work anymore, never mind about that one.)

I have to go faster, because we’re just going to run out of these. Like, come on. Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you, you have all sorts of errors, and you don’t know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, X-Robots-Tag, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical, you don’t know what those are, and you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or Mozscape, or Ahrefs or Majestic, or SEMrush or Searchmetrics, or the Alchemy API. Those APIs can do powerful things for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there. (There’s a small sketch of that right after this list.) There are a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
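As a rough illustration of the API point above, here's a hedged sketch of pulling a couple of metrics per URL into a CSV you could open in Excel or import into a Google Sheet. The endpoint, parameters, and field names are hypothetical stand-ins; each service (Mozscape, Ahrefs, Majestic, SEMrush) has its own authentication and response schema, so check the real documentation before using anything like this.

```python
# Hypothetical example: pull a couple of metrics per URL from an SEO API
# and write them to a CSV. Endpoint, params, and fields are placeholders.
import csv
import requests

API_URL = "https://api.example-seo-tool.com/v1/url-metrics"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

pages = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
]

with open("url_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "linking_domains", "page_authority"])
    for page in pages:
        resp = requests.get(API_URL, params={"target": page, "key": API_KEY}, timeout=10)
        data = resp.json()
        # Field names depend entirely on the service you're actually calling.
        writer.writerow([page, data.get("linking_domains"), data.get("page_authority")])

print("Wrote url_metrics.csv")
```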

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Building Better Content By Improving Upon Your Competitors

Posted by Bill.Sebald

In rock n’ roll music, stealing is expected. Led Zeppelin allegedly lifted from lots of earlier blues and folk artists. The famous I-IV-V chord progression of The Wild Ones’ song “Wild Thing” was used only a couple years later on “Mony, Mony.” My favorite example of musical larceny – “Let It Be” by The Beatles, “Farmhouse” by Phish, and “No Woman, No Cry” by Bob Marley are built around the exact same chord progression. Yet in all these cases, the songs were tweaked enough to stand on their own in meaning, served as distinct entities, and inspired unique feelings from the listener. Granted, record company execs often disapproved, but some artists were flattered to see interpretations of their riffs and progressions. At the end of the day, this is what spawned (and advanced) the rock music genre. Sometimes stealing is the engine of innovation.

“Your idea isn’t new. Pick an idea; at least 50 other people have thought of it. Get over your stunning brilliance and realize that execution matters more.” —Mark Fletcher of Bloglines.com.

In marketing, we don’t just “steal” the minds of consumers, we sometimes steal – and interpret – from our competitors. Sometimes we’re lazy about it, and sometimes we’re perceived as originals. Remember one of the immutable laws of marketing – always appear to be first. Well then why not be first to make someone’s content strategy more effective (for your own gain)?

Wait – so do I condone being a pickpocket, cat burglar, or politician? No. What I’m suggesting is reviewing what inspires you, analyzing why it was successful, and inspiring yourself to make something better. Better for us, better for our clients, and better for their customers.

Oh no; is this another “Content Is King” post?

I’m not a huge fan of that phrase anymore. SEO has gone through some serious developmental stages in its lifetime. Once the hype was all about “keyword density,” then “anchor text,” then “duplicate content;” now I feel like our latest bandwagon concept is the semi-vague “content is king.”

These are certainly all valid concepts in SEO, but without proper context, they often fall short of sound advice. They become blind directives. So here we are in 2014, with many business executives nodding along, “yes – content is king. I’ve read that a trillion times. We need to crank out 100 posts a month. Go, go go…” But I think this is a problem. Now that SEO is mainstream, there’s so much “good content” that the noise ceiling has simply been raised. I’ve said it before, “Fair-quality copy is becoming the new Google spam.” I go into pitches now where businesses can’t understand why their legacy content isn’t getting searches. In other words, they ask why “content is king” isn’t producing results. It’s usually because content was treated as a homogeneous tactic where a marketing or SEO strategy wasn’t put in place to link the pieces together.

I think it’s time SEOs put that phrase to rest, and start thinking in terms of how a traditional content marketer would think about it. “Content that is unique in value, strong in expertise, provides a necessary point-of-view, and leads the pack in terms of usefulness is more than king – it’s fundamental to success.” A bit of a mouthful (and less sexy), not to mention harder to develop, but it really needs to be adopted.

So if you would, please keep that in mind during this post. Continue on!

What are your competitors doing?

Content ideas come from lots of sources. Some are vapid (like content topic generators) and some are interpreted (like reviewing customer poll results). Often a simple interview with your sales or service team can teach you plenty about the mindset of your consumer. Studying on-page product reviews can also be inspiring. Focus groups, experiments; all this and more can help produce pieces of content that can be strung together and tracked in order to build a truly converting funnel.

We all know the most effective content is inspired by data, versus “crazy ideas” with no concrete evidence quickly thrown against the wall. While this occasionally has some SEO benefit (arguably less and less with Panda updates), it rarely does much for your conversion funnel. It takes that extra digging that some aren’t quick to execute (at least in my experience). But what happens when your competitor is willing to do the work?

That’s where you can learn some interesting things. Marketing espionage!

Granted, most competitors don’t want to share their data with you, no matter how much beer you try to bribe them with (believe me, I’ve tried). We have tools like SEMrush to estimate search metrics, and services like Hitwise and Compete to get more online visitor data. While that is certainly helpful, it’s still directional. But we’re marketers – so what do we do? We get creative.

How to get a bird’s-eye view of a content play (with common SEO tools)

It’s time to lift the hood. I like to start with Screaming Frog. Most SEOs know this tool. If you don’t, it’s a spider that emulates what a search engine spider might find. In my experience there’s no better way to find the topics a website is targeting than with a “screaming” crawl.

Filter down to HTML, and you’ll find the URL, Title Tag, Meta Description, H1, and sometimes the Meta Keyword data. If you already have your own keywords and entities in mind, and want to see what a competitor is doing with them, it’s as simple as searching for them in Screaming Frog (or an Excel export) and scanning for them.
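If you'd rather not eyeball the export by hand, here's a small sketch of scanning a Screaming Frog CSV for your target terms. The file name is a placeholder, and column headers like "Address", "Title 1", "Meta Description 1", and "H1-1" should be verified against your own export, since they can vary by version.

```python
# Scan a Screaming Frog HTML export (CSV) for target keywords/entities.
import pandas as pd

crawl = pd.read_csv("competitor_internal_html.csv")  # placeholder file name
terms = ["shammy", "ipad cloth", "screen cleaner"]   # your own terms

candidate_cols = ["Address", "Title 1", "Meta Description 1", "H1-1"]
text_cols = [c for c in candidate_cols if c in crawl.columns]

for term in terms:
    # True for any row where the term appears in any of the text columns.
    mask = crawl[text_cols].apply(
        lambda col: col.astype(str).str.contains(term, case=False, na=False)
    ).any(axis=1)
    print(f"'{term}': {int(mask.sum())} matching pages")
    if "Address" in crawl.columns:
        print(crawl.loc[mask, "Address"].head().to_string(index=False))
```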

[Screenshot: Screaming Frog crawl export from a competitor’s site, filtered to HTML pages.]

Consider this totally random “shammy” example in the screenshot above. If I worked in the shammy business, through a quick scan I might be interested to know that at least one of my competitors found value enough in creating a section around an iPad cloth. Is that a segment I never considered?

Don’t have Screaming Frog? The site: operator is a less powerful option. You can’t export into a spreadsheet without a scrape.

Ubersuggest or keywordtool.io can be used in a clever “quick and dirty” way – put in a keyword you think there’s opportunity for, and add “who,” “what,” “where,” “why,” or “how” to the query. Your fragmented query will often show some questions people have asked Google. After all, plenty of great content is used to answer a query. Search some of these queries in Google and see what competitor content shows up! At the very least, this is a nice way to find more competitors who are actively creating content for their users.
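Here's a quick sketch of that trick in code, for anyone who wants to batch it: prepend question words to a seed keyword and check each variant against Google's public autocomplete endpoint. The seed term is a placeholder, and the suggest URL is an informal, undocumented endpoint, so treat the results as best-effort.

```python
# Generate question-style variants of a seed keyword and pull autocomplete
# suggestions for each (unofficial endpoint; may change without notice).
import requests

seed = "ipad shammy"  # placeholder seed keyword
question_words = ["who", "what", "where", "when", "why", "how"]

for qw in question_words:
    query = f"{qw} {seed}"
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": query},
        timeout=10,
    )
    suggestions = resp.json()[1]
    print(query, "->", suggestions[:5])
```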

At this point you should be taking notes, jotting down ideas, observations, potential content titles, and questions you want to research. Whether in a spreadsheet or the back of a napkin, you’re now brainstorming with light research. Let your brain-juice flow. You should also be looking for connections between the posts you are finding. Why were they written? How do they link together? What funnels are the calls-to-action suggesting? Take notes on everything, Sherlock!


Collect the right data

Next, step it up with more quantifying data. Time to trim the fat.

Search data

By entering your extracted keywords into Google’s Keyword Planner and measuring them, you’ll see not only is there interest in an iPad cleaner (where an “iPad Shammy” might make sense with its own strategy), but also some searcher interest in the best ways to clean an iPad. That could be fun, playful content to write – even for a shammy retailer. It could tie directly to products you already sell, or possibly lead you into carrying new products.

[Screenshot: Google Keyword Planner estimates for iPad-cleaning keywords.]

Estimated searches don’t tell the whole story. We know plenty of keywords and metrics from this tool are either interpolated or missing. I’ve found that small estimated searches can sometimes still lead to more highly-converting volume than expected. Keep that in mind.

Social data

What searchers enter into Google’s search box isn’t the only indicator of value. Ultimately, if nobody likes a certain topic or item in your content, they aren’t going to share or link to it. Wouldn’t it be great to have another piece of evidence before you get to structuring a strategy and writing copy? That evidence may lie with your competitors’ social audience.

At this point you have keyword ideas, content titles, sample competitor URLs, and possible strategies sketched out. There are some great tools for checking out what is shared in the social space. Topsy, Social Crawlytics, and BuzzSumo are solid selections. You can look up the social popularity of a given URL or domain, and in some cases drill down to influencers. If it’s heavily shared, that may suggest perceived value.

[Screenshot: social share counts for the author’s blog posts.]

Look at the image above. If my agency is a competitor of yours, you might be interested that one of my posts got 413 social shares. It was a post called “Old School SEO Tests In Action (A 2014 SEO Experiment)”. You can dig in to see the debates boiling through the comments or the reactions through social media. You can go so far as to see who shared the post, how influential those people are, and what kind of topics they usually share. This helps qualify the shares.

With these social metrics, I believe it’s reasonably safe to infer that people in the SEO space care about experiments, learning about things that move rankings, and that most believe older tactics aren’t worth pursuing. With very little time at all, you might be able to come up with ways to improve upon this post or ideas for your own follow-up. Maybe even a counterargument? Looking at who the post resonated with, you could presume my target audience was SEOs, with a goal of providing industry insights. With a prominent lead generation form on this post, you might even suspect a secondary interest was as a source of new client leads.

If you surmised any of these things from the social data, you’re 100% right! This was certainly a well-thought-out post with those goals in mind.

Backlink data

Let’s examine link popularity and return to the shammy industry. Specifically, let’s look at a pretty unique item – a shammy for Apple products – https://www.klearscreen.com/detail.aspx?ID=11.

  • Open Site Explorer found 1 link from a retailer.
  • Ahrefs found 8 links from 8 domains, one being a forum conversation on Stackexchange.com, and the others from a retailer.
  • Majestic found 13 links from 6 domains. Similar to what Ahrefs found.
  • WebMeUp found 30 backlinks from 9 domains.
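If you pull exports like these from several tools, a short script can merge them into one deduplicated list of referring domains. The file names and column headers below are assumptions; each tool labels its export differently, so adjust them to match your actual files.

```python
# Combine linking-domain exports from multiple backlink tools and count
# unique referring domains. File names and column names are placeholders.
import pandas as pd

exports = {
    "open_site_explorer.csv": "Root Domain",
    "ahrefs.csv": "Referring Domain",
    "majestic.csv": "Domain",
}

domains = set()
for path, column in exports.items():
    df = pd.read_csv(path)
    if column in df.columns:
        domains.update(df[column].dropna().astype(str).str.lower())

print(f"{len(domains)} unique referring domains across all exports")
```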

From this data it looks like the iPad shammy market isn’t exactly on fire. Now it doesn’t appear iKlear (or Klear Screen) is doing much marketing for this particular product – at least not according to Google. Their other Apple product cleaners seem to get more attention, but perhaps iKlear simply knows this isn’t a high demand product. It could be true – after all it hasn’t gone viral. It hasn’t generated much in the way of online discussions. But it also hasn’t been marketed much.

This is why all the data needs to be collected, correlated, and analyzed.  You want the best hypothesis you can get before you start committing your time to a content strategy. Did this just kill a possible content strategy for an iPad Shammy, or is this a huge untapped opportunity? It entirely depends on how you interpret all the data you collect.

You’ve got some ideas; now what’s the execution?

You just did a lot of work. You can’t go off half-cocked throwing up willy-nilly content. Jeepers, no! The next step is the most crucial!

At this point you should have uncovered some great ideas based on your competitor’s clues. Now comes the part where you thoughtfully determine how to implement these ideas and craft a strategic roadmap. The options are endless, which could provide a decision-making struggle. From new microsites to overhauling existing content, there’s so much you can do with the gems you’ve dug up.

Remember to examine what your competitors did. How did they plug everything together?

But sometimes your competitors don’t have a discernible content strategy – instead, just fragmented content floating like an island. This is even better for you. Now you have the opportunity to not only outshine them in the actual content, but put together an actual experience that your users will value, thus providing a likely positive SEO result. Here are three options I tend to build a strategy around most often:

  • Create a new funnel
  • Create content for off-page SEO
  • Create emphasis content

With fresh metrics, the new funnel is often necessary. Chances are you discovered uncharted territory (at least from your website’s perspective). All future or existing content should have pre-conceived goals – there’s a top and bottom to every funnel, and maybe some strategic off-ramps leading to forms, contact pages, or products. Remember, your goal is to drive the reader through an experience, eliciting emotions and appealing to the needs you’ve already built a hypothesis upon. This new funnel can dip into your current website or run parallel (i.e., a microsite, subdomain, or otherwise disconnected grouping). The greatest thing about digital marketing is that nothing is set in stone. It’s so easy to test these funnels and redesign with collected data when necessary.


Off-page content is also very common (right, link builders?). Find something that is popular, and go share it with sites more popular than yours. Maybe you can even start generating new popularity and create a segment of its own. Build a strategy to take this burgeoning topic and let the widest audience know about it. Get branding, mind share, links, and ideally profit like a beast.

The “emphasis content” (as I call it) has been a solid go-to plan for me when I discover small pockets of opportunity; notably the stuff that may have a smaller impact and isn’t worth a month-long content strategy. If I were to create my own iPad shammy play, based on what I’m seeing so far, I’d probably think about a page or two as emphasis content.

This content is like an independent port of entry or landing page, either to an existing funnel or a direct money maker. In a previous post I talked about creating niche collection pages for eCommerce. That could serve as emphasis content to a parent collection, but I’m usually thinking of heavier use of text in this case – where you really take your goal, slice it up, and provide nice, beefy communication about it.

This play can be nuclear. By creating these one-off pages based on all the metrics discussed above, it’s usually much easier to do targeted outreach and social marketing. A well-placed page, providing well-placed internal links (ideally off popular pages), can pass PageRank and context like a dream. A tool like Alchemy API can help you see the relevance of pages and determine the best place to publish this page.

Summary

A content strategy doesn’t go far if it’s phoned in. Take all the help you can get, even if it’s from a competitor. Learn from businesses who took steps before you. They may have very well discovered the holy grail. Competitive research has always been a part of any marketing campaign, but scratching the surface only gets you superficial results. Look deeper to uncover more than just a competitor’s marketing plan, but the very reason why the competitor may be beating you in search. Then, hopefully you’ll become the rock star others are trying to copy from. That’s a good problem to have.

