Should I Use Relative or Absolute URLs? – Whiteboard Friday

Posted by RuthBurrReedy

It was once commonplace for developers to code relative URLs into a site. There are a number of reasons why that might not be the best idea for SEO, and in today’s Whiteboard Friday, Ruth Burr Reedy is here to tell you all about why.


Let’s discuss some non-philosophical absolutes and relatives

Howdy, Moz fans. My name is Ruth Burr Reedy. You may recognize me from such projects as when I used to be the Head of SEO at Moz. I’m now the Senior SEO Manager at BigWing Interactive in Oklahoma City. Today we’re going to talk about relative versus absolute URLs and why they are important.

At any given time, your website can have several different configurations that might be causing duplicate content issues. You could have just a standard http://www.example.com. That’s a pretty standard format for a website.

But the main sources that we see of domain level duplicate content are when the non-www.example.com does not redirect to the www or vice-versa, and when the HTTPS versions of your URLs are not forced to resolve to HTTP versions or, again, vice-versa. What this can mean is if all of these scenarios are true, if all four of these URLs resolve without being forced to resolve to a canonical version, you can, in essence, have four versions of your website out on the Internet. This may or may not be a problem.

It’s not ideal for a couple of reasons. Number one is duplicate content. Some people think that duplicate content is going to give you a penalty, but duplicate content is not going to get your website penalized in the same way that you might see a spammy link penalty from Penguin. There’s no actual penalty involved. You won’t be punished for having duplicate content.

The problem with duplicate content is that you’re basically relying on Google to figure out what the real version of your website is. Google is seeing the URL from all four versions of your website. They’re going to try to figure out which URL is the real URL and just rank that one. The problem with that is you’re basically leaving that decision up to Google when it’s something that you could take control of for yourself.

There are a couple of other reasons that we’ll go into a little bit later for why duplicate content can be a problem. But in short, duplicate content is no good.

However, just having these URLs not resolve to each other may or may not be a huge problem. When it really becomes a serious issue is when that problem is combined with injudicious use of relative URLs in internal links. So let’s talk a little bit about the difference between a relative URL and an absolute URL when it comes to internal linking.

With an absolute URL, you are putting the entire web address of the page that you are linking to in the link. You’re putting your full domain, everything in the link, including /page. That’s an absolute URL.

However, when coding a website, it’s a fairly common web development practice to instead code internal links with what’s called a relative URL. A relative URL is just /page. Basically what that does is it relies on your browser to understand, “Okay, this link is pointing to a page that’s on the same domain that we’re already on. I’m just going to assume that that is the case and go there.”
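To see what the browser is doing there, here’s a tiny Python sketch (standard library only; example.com is just a placeholder domain). The same relative link resolves to a different absolute URL depending on which version of the domain you happen to be on:

```python
from urllib.parse import urljoin

# The same relative link resolves against whichever host the visitor (or crawler) is on.
relative_link = "/page"

bases = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

for base in bases:
    print(urljoin(base, relative_link))
# -> http://example.com/page
# -> http://www.example.com/page
# -> https://example.com/page
# -> https://www.example.com/page
```

That’s the mechanism behind the duplicate content problem discussed below: the link itself never pins down which version of the domain it belongs to.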

There are a couple of really good reasons to code relative URLs

1) It is much easier and faster to code.

When you are a web developer and you’re building a site and there are thousands of pages, coding relative versus absolute URLs is a way to be more efficient. You’ll see it happen a lot.

2) Staging environments

Another reason why you might see relative versus absolute URLs is some content management systems — and SharePoint is a great example of this — have a staging environment that’s on its own domain. Instead of being example.com, it will be examplestaging.com. The entire website will basically be replicated on that staging domain. Having relative versus absolute URLs means that the same website can exist on staging and on production, or the live accessible version of your website, without having to go back in and recode all of those URLs. Again, it’s more efficient for your web development team. Those are really perfectly valid reasons to do those things. So don’t yell at your web dev team if they’ve coded relative URLs, because from their perspective it is a better solution.

Relative URLs will also cause your page to load slightly faster. However, in my experience, the SEO benefits of having absolute versus relative URLs in your website far outweigh the teeny-tiny bit longer that it will take the page to load. It’s very negligible. If you have a really, really long page load time, there’s going to be a whole boatload of things that you can change that will make a bigger difference than coding your URLs as relative versus absolute.

Page load time, in my opinion, not a concern here. However, it is something that your web dev team may bring up with you when you try to address with them the fact that, from an SEO perspective, coding your website with relative versus absolute URLs, especially in the nav, is not a good solution.

There are even better reasons to use absolute URLs

1) Scrapers

If you have all of your internal links as relative URLs, it would be very, very, very easy for a scraper to simply scrape your whole website and put it up on a new domain, and the whole website would just work. That sucks for you, and it’s great for that scraper. But unless you are out there doing public services for scrapers, for some reason, that’s probably not something that you want happening with your beautiful, hardworking, handcrafted website. That’s one reason. There is a scraper risk.

2) Preventing duplicate content issues

But the other reason why it’s very important to have absolute versus relative URLs is that it really mitigates the duplicate content risk that can be presented when you don’t have all of these versions of your website resolving to one version. Google could potentially enter your site on any one of these four pages. They’re the same page to you, but they’re four different pages to Google. They’re the same domain to you, but they are four different domains to Google.

But they could enter your site, and if all of your URLs are relative, they can then crawl and index your entire domain using whatever format these are. Whereas if you have absolute links coded, even if Google enters your site on the www. version and that resolves, once they crawl to another page that you’ve got coded without the www., Google is not going to assume that all of that other internal link juice and all of the other pages on your website live at the www. version. That really cuts down on different versions of each page of your website. If you have relative URLs throughout and you haven’t fixed this problem, you basically have four different websites.

Again, it’s not always a huge issue. Duplicate content, it’s not ideal. However, Google has gotten pretty good at figuring out what the real version of your website is.

You do want to think about internal linking, when you’re thinking about this. If you have basically four different versions of any URL that anybody could just copy and paste when they want to link to you or when they want to share something that you’ve built, you’re diluting your internal links by four, which is not great. You basically would have to build four times as many links in order to get the same authority. So that’s one reason.

3) Crawl Budget

The other reason why it’s pretty important not to do this is crawl budget.

When we talk about crawl budget, basically what that is, is that every time Google crawls your website, there is a finite depth and a finite number of URLs that they will crawl before they decide, “Okay, I’m done.” That’s based on a few different things. Your site authority is one of them. Your actual PageRank, not toolbar PageRank, but how good Google actually thinks your website is, is a big part of that. But also how complex your site is, how often it’s updated, things like that are also going to contribute to how often and how deep Google is going to crawl your site.

It’s important to remember when we think about crawl budget that, for Google, crawl budget costs actual dollars. One of Google’s biggest expenditures as a company is the money and the bandwidth it takes to crawl and index the Web. All of that energy that’s going into crawling and indexing the Web lives on servers. That bandwidth comes from servers, and that means that using bandwidth costs Google actual, real dollars.

So Google is incentivized to crawl as efficiently as possible, because when they crawl inefficiently, it costs them money. If your site is not efficient to crawl, Google is going to save itself some money by crawling it less frequently and crawling fewer pages per crawl. That can mean that if you have a site that’s updated frequently, your site may not be updating in the index as frequently as you’re updating it. It may also mean that Google, while it’s crawling and indexing, may be crawling and indexing a version of your website that isn’t the version that you really want it to crawl and index.

So having four different versions of your website, all of which are completely crawlable to the last page, because you’ve got relative URLs and you haven’t fixed this duplicate content problem, means that Google has to spend four times as much money in order to really crawl and understand your website. Over time they’re going to do that less and less frequently, especially if you don’t have a really high authority website. If you’re a small website, if you’re just starting out, if you’ve only got a medium number of inbound links, over time you’re going to see your crawl rate and frequency impacted, and that’s bad. We don’t want that. We want Google to come back all the time, see all our pages. They’re beautiful. Put them up in the index. Rank them well. That’s what we want. So that’s what we should do.

There are a couple of ways to fix your relative versus absolute URLs problem

1) Fix what is happening on the server side of your website

You have to make sure that you are forcing all of these different versions of your domain to resolve to one version of your domain. For me, I’m pretty agnostic as to which version you pick. You should probably already have a pretty good idea of which version of your website is the real version, whether that’s www, non-www, HTTPS, or HTTP. From my view, what’s most important is that all four of these versions resolve to one version.

From an SEO standpoint, there is evidence to suggest, and Google has certainly said, that HTTPS is a little bit better than HTTP. From a URL length perspective, I like to not have the www. in there because it doesn’t really do anything. It just makes your URLs four characters longer. If you don’t know which one to pick, I would pick this one: HTTPS, no W’s. But whichever one you pick, what’s really most important is that all of them resolve to one version. You can do that on the server side, and that’s usually pretty easy for your dev team to fix once you tell them that it needs to happen.
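If you want to sanity-check how your own domain behaves once the redirects are in place, here’s a minimal sketch (assuming the third-party requests package is installed; the example.com variants and the preferred version are placeholders you’d swap for your own):

```python
import requests  # third-party: pip install requests

# Placeholder domain variants; swap in your own hostname and preferred version.
VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]
PREFERRED = "https://example.com/"

for url in VARIANTS:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    first_hop = resp.history[0].status_code if resp.history else "no redirect"
    print(f"{url} -> {resp.url} (first hop: {first_hop})")
    if resp.url != PREFERRED:
        print(f"  warning: does not resolve to {PREFERRED}")
```

Ideally every variant reports a 301 first hop and lands on the one preferred version.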

2) Fix your internal links

Great. So you fixed it on your server side. Now you need to fix your internal links, and you need to recode them from being relative to being absolute. This is something that your dev team is not going to want to do because it is time consuming and, from a web dev perspective, not that important. However, you should use resources like this Whiteboard Friday to explain to them that, from an SEO perspective, both from the scraper risk and from a duplicate content standpoint, having those absolute URLs is a high priority and something that should get done.

You’ll need to fix those, especially in your navigational elements. But once you’ve got your nav fixed, also pull out your database or run a Screaming Frog crawl or however you want to discover internal links that aren’t part of your nav, and make sure you’re updating those to be absolute as well.
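Alongside a database query or a Screaming Frog crawl, a quick way to spot relative links on any single page is a short script like this (a sketch assuming requests and beautifulsoup4 are installed; the page URL is a placeholder):

```python
from urllib.parse import urlparse

import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

PAGE = "https://example.com/"    # placeholder page to audit

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")

relative, absolute = [], []
for a in soup.find_all("a", href=True):
    href = a["href"]
    if href.startswith(("#", "mailto:", "tel:", "javascript:")):
        continue
    # A link with no host portion is relative; protocol-relative (//host) counts as absolute here.
    (absolute if urlparse(href).netloc else relative).append(href)

print(f"{len(absolute)} absolute links, {len(relative)} relative links")
for href in relative:
    print("relative:", href)
```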

Then you’ll do some education with everybody who touches your website saying, “Hey, when you link internally, make sure you’re using the absolute URL and make sure it’s in our preferred format,” because that’s really going to give you the most bang for your buck per internal link. So do some education. Fix your internal links.

Sometimes your dev team is going to say, “No, we can’t do that. We’re not going to recode the whole nav. It’s not a good use of our time,” and sometimes they are right. The dev team has more important things to do. That’s okay.

3) Canonicalize it!

If you can’t get your internal links fixed or if they’re not going to get fixed anytime in the near future, a stopgap or a Band-Aid that you can kind of put on this problem is to canonicalize all of your pages. As you’re changing your server to force all of these different versions of your domain to resolve to one, at the same time you should be implementing the canonical tag on all of the pages of your website to self-canonicalize. On every page, you have a canonical tag saying, “This page right here that they were already on is the canonical version of this page.” Or if there’s another page that’s the canonical version, then obviously you point to that instead.

But having each page self-canonicalize will mitigate both the risk of duplicate content internally and some of the risk posed by scrapers, because when they scrape, if they are scraping your website and slapping it up somewhere else, those canonical tags will often stay in place, and that lets Google know this is not the real version of the website.
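To spot-check that pages really do self-canonicalize once the tag is rolled out, something like this works (again a sketch assuming requests and beautifulsoup4; the URL list is hypothetical and would normally come from your crawl export):

```python
import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

# Placeholder URLs; in practice, feed in your full crawl export.
URLS = [
    "https://example.com/",
    "https://example.com/page",
]

for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tag = soup.find("link", rel="canonical")
    canonical = tag.get("href") if tag else None
    if canonical == url:
        print(f"{url}: self-canonical")
    else:
        print(f"{url}: canonical is {canonical!r}")
```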

In conclusion, relative links, not as good. Absolute links, those are the way to go. Make sure that you’re fixing these very common domain level duplicate content problems. If your dev team tries to tell you that they don’t want to do this, just tell them I sent you. Thanks guys.

Video transcription by Speechpad.com



Technical Site Audit Checklist: 2015 Edition

Posted by GeoffKenyon

Back in 2011, I wrote a technical site audit checklist, and while it was thorough, there have been a lot of additions to what is encompassed in a site audit. I have gone through and updated that old checklist for 2015. Some of the biggest changes were the addition of sections for mobile, international, and site speed.

This checklist should help you put together a thorough site audit and determine what is holding back the organic performance of your site. At the end of your audit, don’t write a document that says what’s wrong with the website. Instead, create a document that says what needs to be done. Then explain why these actions need to be taken and why they are important. What I’ve found to be really helpful is to provide, along with your document, a prioritized list of all the actions that you would like them to implement. This list can be handed off to a dev or content team to be implemented easily. These teams can refer to your more thorough document as needed.


Quick overview

Check indexed pages  
  • Do a site: search.
  • How many pages are returned? (This can be way off so don’t put too much stock in this).
  • Is the homepage showing up as the first result? 
  • If the homepage isn’t showing up as the first result, there could be issues, like a penalty or poor site architecture/internal linking, affecting the site. This may be less of a concern as Google’s John Mueller recently said that your homepage doesn’t need to be listed first.

Review the number of organic landing pages in Google Analytics

  • Does this match with the number of results in a site: search?
  • This is often the best view of how many pages in a search engine’s index the engine actually finds valuable.

Search for the brand and branded terms

  • Is the homepage showing up at the top, or are correct pages showing up?
  • If the proper pages aren’t showing up as the first result, there could be issues, like a penalty, in play.
Check Google’s cache for key pages
  • Is the content showing up?
  • Are navigation links present?
  • Are there links that aren’t visible on the site?
PRO Tip: Don’t forget to check the text-only version of the cached page. Here is a bookmarklet to help you do that.

Do a mobile search for your brand and key landing pages

  • Does your listing have the “mobile friendly” label?
  • Are your landing pages mobile friendly?
  • If the answer is no to either of these, it may be costing you organic visits.

On-page optimization

Title tags are optimized
  • Title tags should be optimized and unique.
  • Your brand name should be included in your title tag to improve click-through rates.
  • Title tags should be about 55-60 characters (512 pixels) to be fully displayed. You can test here or review title pixel widths in Screaming Frog.
Important pages have click-through rate optimized titles and meta descriptions
  • This will help improve your organic traffic independent of your rankings.
  • You can use SERP Turkey for this.

Check for pages missing page titles and meta descriptions
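A quick way to spot-check a handful of pages for missing or over-long titles and missing meta descriptions (a Python sketch assuming requests and beautifulsoup4 are installed; the URL list is a placeholder for your own crawl export):

```python
import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

# Placeholder URLs; a Screaming Frog export works just as well here.
URLS = ["https://example.com/", "https://example.com/page"]

for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = (soup.title.string or "").strip() if soup.title else ""
    desc_tag = soup.find("meta", attrs={"name": "description"})
    desc = (desc_tag.get("content") or "").strip() if desc_tag else ""

    if not title:
        print(f"{url}: MISSING title")
    elif len(title) > 60:
        print(f"{url}: title is {len(title)} characters and may be truncated")
    if not desc:
        print(f"{url}: MISSING meta description")
```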
  
The on-page content includes the primary keyword phrase multiple times as well as variations and alternate keyword phrases
  
There is a significant amount of optimized, unique content on key pages
 
The primary keyword phrase is contained in the H1 tag
  

Images’ file names and alt text are optimized to include the primary keyword phrase associated with the page.
 
URLs are descriptive and optimized
  • While it is beneficial to include your keyword phrase in URLs, changing your URLs can negatively impact traffic when you do a 301. As such, I typically recommend optimizing URLs when the current ones are really bad or when you don’t have to change URLs with existing external links.
Clean URLs
  • No excessive parameters or session IDs.
  • URLs exposed to search engines should be static.
Short URLs
  • 115 characters or shorter – this character limit isn’t set in stone, but shorter URLs are better for usability.

Content

Homepage content is optimized
  • Does the homepage have at least one paragraph?
  • There has to be enough content on the page to give search engines an understanding of what a page is about. Based on my experience, I typically recommend at least 150 words.
Landing pages are optimized
  • Do these pages have at least a few paragraphs of content? Is it enough to give search engines an understanding of what the page is about?
  • Is it template text or is it completely unique?
Site contains real and substantial content
  • Is there real content on the site or is the “content” simply a list of links?
Proper keyword targeting
  • Does the intent behind the keyword match the intent of the landing page?
  • Are there pages targeting head terms, mid-tail, and long-tail keywords?
Keyword cannibalization
  • Do a site: search in Google for important keyword phrases.
  • Check for duplicate content/page titles using the Moz Pro Crawl Test.
Content to help users convert exists and is easily accessible to users
  • In addition to search engine driven content, there should be content to help educate users about the product or service.
Content formatting
  • Is the content formatted well and easy to read quickly?
  • Are H tags used?
  • Are images used?
  • Is the text broken down into easy to read paragraphs?
Good headlines on blog posts
  • Good headlines go a long way. Make sure the headlines are well written and draw users in.
Amount of content versus ads
  • Since the implementation of Panda, the amount of ad-space on a page has become important to evaluate.
  • Make sure there is significant unique content above the fold.
  • If you have more ads than unique content, you are probably going to have a problem.

Duplicate content

There should be one URL for each piece of content
  • Do URLs include parameters or tracking code? This will result in multiple URLs for a piece of content.
  • Does the same content reside on completely different URLs? This is often due to products/content being replicated across different categories.
Pro Tip: Exclude common parameters, such as those used to designate tracking code, in Google Webmaster Tools. Read more at Search Engine Land.
Do a search to check for duplicate content
  • Take a content snippet, put it in quotes and search for it.
  • Does the content show up elsewhere on the domain?
  • Has it been scraped? If the content has been scraped, you should file a content removal request with Google.
Sub-domain duplicate content
  • Does the same content exist on different sub-domains?
Check for a secure version of the site
  • Does the content exist on a secure version of the site?
Check other sites owned by the company
  • Is the content replicated on other domains owned by the company?
Check for “print” pages
  • If there are “printer friendly” versions of pages, they may be causing duplicate content.

Accessibility & Indexation

Check the robots.txt

  • Has the entire site, or important content, been blocked? Is link equity being orphaned due to pages being blocked via the robots.txt?
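A quick way to test whether key URLs are blocked is the standard library’s robotparser (the site and paths below are placeholders):

```python
from urllib import robotparser

# Placeholder site and key paths to verify against its robots.txt.
SITE = "https://example.com"
KEY_PATHS = ["/", "/category/", "/category/product"]

rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

for path in KEY_PATHS:
    allowed = rp.can_fetch("Googlebot", f"{SITE}{path}")
    print(f"{path}: {'crawlable' if allowed else 'BLOCKED by robots.txt'}")
```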

Turn off JavaScript, cookies, and CSS

Now change your user agent to Googlebot

PRO Tip: Use SEO Browser to do a quick spot check.

Check the SEOmoz PRO Campaign

  • Check for 4xx errors and 5xx errors.

XML sitemaps are listed in the robots.txt file

XML sitemaps are submitted to Google/Bing Webmaster Tools

Check pages for meta robots noindex tag

  • Are pages accidentally being tagged with the meta robots noindex command?
  • Are there pages that should have the noindex command applied?
  • You can check the site quickly via a crawl tool such as Moz or Screaming Frog.
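For a quick spot check outside a full crawl, something like this flags noindex in either the meta tag or the X-Robots-Tag header (a sketch assuming requests and beautifulsoup4; the URLs are placeholders):

```python
import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

# Placeholder URLs; goal/thank-you pages are good candidates to include.
URLS = ["https://example.com/", "https://example.com/thank-you"]

for url in URLS:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    meta = soup.find("meta", attrs={"name": "robots"})
    meta_value = (meta.get("content") or "").lower() if meta else ""
    header_value = resp.headers.get("X-Robots-Tag", "").lower()
    if "noindex" in meta_value or "noindex" in header_value:
        print(f"{url}: noindex ({meta_value or header_value})")
    else:
        print(f"{url}: indexable")
```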

Do goal pages have the noindex command applied?

  • This is important to prevent direct organic visits from showing up as goals in analytics

Site architecture and internal linking

Number of links on a page
Vertical linking structures are in place
  • Homepage links to category pages.
  • Category pages link to sub-category and product pages as appropriate.
  • Product pages link to relevant category pages.
Horizontal linking structures are in place
  • Category pages link to other relevant category pages.
  • Product pages link to other relevant product pages.
Links are in content
  • Does not utilize massive blocks of links stuck in the content to do internal linking.
Footer links
  • Does not use a block of footer links instead of proper navigation.
  • Does not link to landing pages with optimized anchors.
Good internal anchor text
 
Check for broken links
  • Link Checker and Xenu are good tools for this.

Technical issues

Proper use of 301s
  • Are 301s being used for all redirects?
  • If the root is being directed to a landing page, are they using a 301 instead of a 302?
  • Use Live HTTP Headers Firefox plugin to check 301s.
“Bad” redirects are avoided
  • These include 302s, 307s, meta refresh, and JavaScript redirects as they pass little to no value.
  • These redirects can easily be identified with a tool like Screaming Frog.
Redirects point directly to the final URL and do not leverage redirect chains
  • Redirect chains significantly diminish the amount of link equity associated with the final URL.
  • Google has said that they will stop following a redirect chain after several redirects.
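To see the full hop-by-hop path for a given URL, a sketch like this (assuming requests; the URL is a placeholder) prints each redirect and flags chains:

```python
import requests  # pip install requests

# Placeholder URL whose redirect hops we want to inspect.
URL = "http://example.com/old-page"

resp = requests.get(URL, allow_redirects=True, timeout=10)
hops = resp.history + [resp]

for i, hop in enumerate(hops):
    print(f"hop {i}: {hop.status_code} {hop.url}")

if len(resp.history) > 1:
    print("Redirect chain detected: point the first URL straight at the final one.")
```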
Use of JavaScript
  • Is content being served in JavaScript?
  • Are links being served in JavaScript? Is this to do PR sculpting or is it accidental?
Use of iFrames
  • Is content being pulled in via iFrames?
Use of Flash
  • Is the entire site done in Flash, or is Flash used sparingly in a way that doesn’t hinder crawling?
Check for errors in Google Webmaster Tools
  • Google WMT will give you a good list of technical problems that they are encountering on your site (such as: 4xx and 5xx errors, inaccessible pages in the XML sitemap, and soft 404s)
XML Sitemaps  
  • Are XML sitemaps in place?
  • Are XML sitemaps covering for poor site architecture?
  • Are XML sitemaps structured to show indexation problems?
  • Do the sitemaps follow proper XML protocols?
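A simple sitemap sanity check (a sketch assuming requests; the sitemap URL is a placeholder, and this parses a plain urlset rather than a sitemap index) lists the URLs and flags any that don’t return a 200:

```python
import xml.etree.ElementTree as ET

import requests  # pip install requests

SITEMAP = "https://example.com/sitemap.xml"   # placeholder sitemap location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]
print(f"{len(urls)} URLs listed in the sitemap")

# Spot-check a handful of entries; anything that isn't a 200 is worth investigating.
for url in urls[:20]:
    status = requests.head(url, allow_redirects=False, timeout=10).status_code
    if status != 200:
        print(f"{status}: {url}")
```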
Canonical version of the site established through 301s
 
Canonical version of site is specified in Google Webmaster Tools
 
Rel canonical link tag is properly implemented across the site
Uses absolute URLs instead of relative URLs
  • Relative URLs can cause a lot of problems if you have a root domain with secure sections.

Site speed


Review page load time for key pages 

Make sure compression is enabled


Enable caching


Optimize your images for the web


Minify your CSS/JS/HTML

Use a good, fast host
  • Consider using a CDN for your images.


Mobile

Review the mobile experience
  • Is there a mobile site set up?
  • If there is, is it a mobile site, responsive design, or dynamic serving?


Make sure analytics are set up if separate mobile content exists


If dynamic serving is being used, make sure the Vary HTTP header is being used

Review how the mobile experience matches up with the intent of mobile visitors
  • Do your mobile visitors have a different intent than desktop based visitors?
Ensure faulty mobile redirects do not exist
  • If your site redirects mobile visitors away from their intended URL (typically to the homepage), you’re likely going to run into issues impacting your mobile organic performance.
Ensure that the relationship between the mobile site and desktop site is established with proper markup
  • If a mobile site (m.) exists, does the desktop equivalent URL point to the mobile version with rel="alternate"?
  • Does the mobile version canonicalize to the desktop version?
  • Official documentation.
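To verify the bidirectional annotation on a single desktop/mobile pair, a sketch like this helps (requests and beautifulsoup4 assumed; the URLs are placeholders, and the check is simplified since the official desktop alternate link also carries a media attribute):

```python
import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

# Placeholder desktop/mobile pair to verify.
DESKTOP = "https://example.com/page"
MOBILE = "https://m.example.com/page"

def link_tags(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return soup.find_all("link")

alternate_ok = any(
    "alternate" in (tag.get("rel") or []) and tag.get("href") == MOBILE
    for tag in link_tags(DESKTOP)
)
canonical_ok = any(
    "canonical" in (tag.get("rel") or []) and tag.get("href") == DESKTOP
    for tag in link_tags(MOBILE)
)

print("desktop rel=alternate points to mobile:", alternate_ok)
print("mobile rel=canonical points to desktop:", canonical_ok)
```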

International

Review international versions indicated in the URL
  • ex: site.com/uk/ or uk.site.com
Enable country based targeting in webmaster tools
  • If the site is targeted to one specific country, is this specified in webmaster tools? 
  • If the site has international sections, are they targeted in webmaster tools?
Implement hreflang / rel alternate if relevant
If there are multiple versions of a site in the same language (such as /us/ and /uk/, both in English), update the copy so that both versions are unique
 

Make sure the currency reflects the country targeted
 
Ensure the URL structure is in the native language 
  • Try to avoid having all URLs in the default language

Analytics

Analytics tracking code is on every page
  • You can check this using the “custom” filter in a Screaming Frog Crawl or by looking for self referrals.
  • Are there pages that should be blocked?
There is only one instance of a GA property on a page
  • Loading the same Google Analytics property more than once on a page will create problems with pageview-related metrics, such as inflating page views and pages per visit and reducing the bounce rate.
  • It is OK to have multiple different GA properties listed; that won’t cause a problem.
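A rough way to check both of these items at once is to count occurrences of your property ID in each page’s HTML (a sketch assuming requests; the URLs and the UA-style property ID are placeholders):

```python
import re

import requests  # pip install requests

# Placeholder pages and a UA-style property ID to look for.
URLS = ["https://example.com/", "https://example.com/page"]
PROPERTY_ID = "UA-12345678-1"

for url in URLS:
    html = requests.get(url, timeout=10).text
    count = len(re.findall(re.escape(PROPERTY_ID), html))
    if count == 0:
        print(f"{url}: tracking code MISSING")
    elif count > 1:
        print(f"{url}: property appears {count} times (likely double-counting pageviews)")
    else:
        print(f"{url}: OK")
```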
Analytics is properly tracking and capturing internal searches
 

Demographics tracking is set up

Adwords and Adsense are properly linked if you are using these platforms
Internal IP addresses are excluded
UTM Campaign Parameters are used for other marketing efforts
Meta refresh and JavaScript redirects are avoided
  • These can artificially lower bounce rates.
Event tracking is set up for key user interactions

This audit covers the main technical elements of a site and should help you uncover any issues that are holding a site back. As with any project, the deliverable is critical. I’ve found focusing on the solution and impact (business case) is the best approach for site audit reports. While it is important to outline the problems, too much detail here can take away from the recommendations. If you’re looking for more resources on site audits, I recommend the following:

Helpful tools for doing a site audit:

Annie Cushing’s Site Audit
Web Developer Toolbar
User Agent Add-on
Firebug
Link Checker
SEObook Toolbar
MozBar (Moz’s SEO toolbar)
Xenu
Screaming Frog
Your own scraper
Inflow’s technical mobile best practices



Landing Page Optimization: Interview with CRC Health

In this MarketingSherpa (http://www.marketingsherpa.com) webinar you’ll hear from Jon Ciampi, Vice President, Marketing, Business Development & Corporate Dev…


DEVMET Search Engine Optimization (SEO)

High Page Ranking secrets by Google Personnel. Further: http://www.devmet.com/portfolio ps: This clip is taken from the documentary “GOOGLE, Behind the Scene” …


WebProjects101 – Web Design, Programming, SEO, Marketing Portfolio

http://webprojects101.com – WebProjects101 provides web design, web programming, web development, internet marketing, social media marketing, and SEO services….


Is that Mind-Blowing Title Blowing Your Credibility? You Decide

Posted by Isla_McKetta


Image of Tantalus courtesy of Clayton Cusak

What if I told you I could teach you to write the perfect headline? One that is so irresistible every person who sees it will click on it. You’d sign up immediately and maybe even promise me your firstborn.

But what if I then told you not one single person out of all the millions who will click on that headline will convert? And that you might lose all your credibility in the process. Would all the traffic generated by that “perfect” headline be worth it?

Help us solve a dispute

It isn’t really that bad, but with all the emphasis lately on headline science and the curiosity gap, Trevor (your faithful editor) and I (a recovering copywriter) started talking about the importance of headlines and what their role should be in regards to content. I’m for clickability (as long as there is strong content to back the headline) and, if he has to choose, Trevor is for credibility (with an equal emphasis on quality of the eventual content).

credible vs clickable headlines

What’s the purpose of a headline?

Back in the good ol’ days, headlines were created to sell newspapers. Newsboys stood on street corners shouting the headlines in an attempt to hawk those newspapers. Headlines had to be enough of a tease to get readers interested but they had to be trustworthy enough to get a reader to buy again tomorrow. Competition for eyeballs was less fierce because a town only had so many newspapers, but paper cost money and editors were always happy to get a repeat customer.

Nowadays the competition for eyeballs feels even stiffer because it’s hard to get noticed in the vast sea of the internet. It’s easy to feel a little desperate. And it seems like the opportunity cost of turning away a customer is much lower than it was before. But aren’t we doing content as a product? Does the quality of that product matter?

The forbidden secrets of clickable headlines

There’s no arguing that headlines are important. In fact, at MozCon this year, Nathalie Nahai reminded us that many copywriters recommend an 80:20 ratio of energy spent on headline to copy. That might be taking things a bit far, but a bad (or even just boring) headline will tank your traffic. Here is some expert advice on writing headlines that convert:

  • Nahai advises that you take advantage of psychological trigger words like, “weird,” “free,” “incredible,” and “secret” to create a sense of urgency in the reader. Can you possibly wait to read “Secret Ways Butter can Save Your Life”?
  • Use question headlines like “Can You Increase Your Sales by 45% in Only 5 Minutes a Day?” that get a reader asking themselves, “I dunno, can I?” and clicking to read more.
  • Key into the curiosity gap with a headline like “What Mother Should Have Told You about Banking. (And How Not Knowing is Costing You Friends.)” Ridiculous claim? Maybe, but this kind of headline gets a reader hooked on narrative and they have to click through to see how the story comes together.
  • And if you’re looking for a formula for the best headlines ever, Nahai proposes the following:
    Number/Trigger word + Adjective + Keyword + Promise = Killer Headline.

Many readers still (consciously or not) consider headlines a promise. So remember, as you fill the headline with hyperbole and only write eleven of the twelve tips you set out to write, there is a reader on the other end hoping butter really is good for them.

The headline danger zone

This is where headline science can get ugly. Because a lot of “perfect” titles simply do not have the quality or depth of content to back them.

Those types of headlines remind me of the Greek myth of Tantalus. For sharing the secrets of the gods with the common folk, Tantalus was condemned to spend eternity surrounded by food and drink that were forever out of his reach. Now, content is hardly the secrets of the gods, but are we tantalizing our customers with teasing headlines that will never satisfy?

buzzfeed headlines

For me, reading headlines on BuzzFeed and Upworthy and their ilk is like talking to the guy at the party with all those super wild anecdotes. He’s entertaining, but I don’t believe a word he says, soon wish he would shut up, and can’t remember his name five seconds later. Maybe I don’t believe in clickability as much as I thought…

So I turn to credible news sources for credible headlines.

washington post headlines

I’m having trouble deciding at this point if I’m more bothered by the headline at The Washington Post, the fact that they’re covering that topic at all, or that they didn’t really go for true clickbait with something like “You Won’t Believe the Bizarre Reasons Girls Scream at Boy Band Concerts.” But one (or all) of those things makes me very sad.

Are we developing an immunity to clickbait headlines?

Even Upworthy is shifting their headline creation tactics a little. But that doesn’t mean they are switching away from clickbait; it just means they’ve seen their audience get tired of the same old tactics. So they’re looking for new and better tactics to keep you engaged and clicking.

The importance of traffic

I think many of us would sell a little of our soul if it would increase our traffic, and of course those clickbaity curiosity gap headlines are designed to do that (and are mostly working, for now).

But we also want good traffic. The kind of people who are going to engage with our brand and build relationships with us over the long haul, right? Back to what we were discussing in the intro, we want the kind of traffic that’s likely to convert. Don’t we?

As much as I advocate for clickable headlines, the riskier the headline I write, the more closely I compare overall traffic (especially returning visitors) to click-throughs, time on page, and bounce rate to see if I’ve pushed it too far and am alienating our most loyal fans. Because new visitors are awesome, but loyal customers are priceless.

Headline science at Moz

At Moz, we’re trying to find the delicate balance between attracting all the customers and attracting the right customers. In my first week here when Trevor and Cyrus were polling readers on what headline they’d prefer to read, I advocated for a more clickable version. See if you can pick out which is mine…

headline poll

Yep, you guessed it. I suggested “Your Google Algorithm Cheat Sheet: Panda, Penguin, and Hummingbird” because it contained a trigger word and a keyword, plus it was punchy. I actually liked “A Layman’s Explanation of the Panda Algorithm, the Penguin Algorithm, and Hummingbird,” but I was pretty sure no one would click on it.

Last time I checked, that has more traffic than any other post for the month of June. I won’t say that’s all because of the headline—it’s a really strong and useful post—but I think the headline helped a lot.

But that’s just one data point. I’ve also been spicing up the subject lines on the Moz Top 10 newsletter to see what gets the most traffic.

most-read subject lines

And the results here are more mixed. Titles I felt were much more clickbaity, like “Did Google Kill Spam?…” and “Are You Using Robots.txt the Right Way?…”, underperformed compared to the straight-up “Moz Top 10.”

Meanwhile, the most clickbaity “Groupon Did What?…” and the two about Google selling domains (which was accurate but suggested that Google was selling its own domains, which worried me a bit) have the most opens overall.

Help us resolve the dispute

As you can tell, I have some unresolved feelings about this whole clickbait versus credibility thing. While Trevor and I have strong opinions, we also have a lot of questions that we hope you can help us with. Blow my mind with your headline logic in the comments by sharing your opinion on any of the following:

  • Do clickbait titles erode trust? If yes, do you ever worry about that affecting your bottom line?
  • Would you sacrifice credibility for clickability? Does it have to be a choice?
  • Is there such a thing as a formula for a perfect headline? What standards do you use when writing headlines?
  • Does a clickbait title affect how likely you are to read an article? What about sharing one? Do you ever feel duped by the content? Does that affect your behavior the next time?  
  • How much of your soul would you sell for more traffic?

