Preparing for Black Friday

That’s a 16% increase on Black Friday last year and a 200% increase compared to a typical day!

 

It’s that last statistic which really stands out for the dotmailer technical teams. We need to be ready for that level of increased usage, so we spend the preceding weeks trawling through telemetry data from our platform and scaling our infrastructure accordingly. This year we already knew a lot of the work had been done, thanks to the incredibly busy period in the run-up to GDPR in May.

 

There are four major elements to our server infrastructure: website clusters, background service clusters, database servers and email servers. Because we’re cloud-based, each can be scaled separately and so we:

 

  • added extra web servers to handle the 37 million hits our click-and-open tracking website saw, plus another 34 million for Web Behavior Tracking.
  • increased the number, size and performance of the servers which handle background tasks such as sending and importing.
  • added new email-sending capacity to our US deployment, which has grown rapidly over the last 12 months. At peak we sent over 320GB of email an hour!
  • optimized email delivery throughput by improving our compliance with email receivers’ guidelines.
Throughout the day, our engineers monitor system metrics, noting areas that will soon need attention. Busy days like this let us spot the early warning signs of weaknesses in our different systems.

 

Sometimes concerns can be addressed quickly and easily with more computing power, or by slightly altering a system’s behavior via a setting. Other bottlenecks are fed into the development roadmap so those systems can be overhauled over the coming year. As demand from customers continues to increase, we continue to reinvest in our platform, and we’re already looking forward to next November!

 

If you missed our latest product release, 18four, please find out more here.


Reblogged 1 week ago from blog.dotmailer.com

We’re Black Friday ready. Are you?

With retailers expecting to deliver 30% of their annual sales and 40% of their profits in the fourth quarter, we know how important your emails are in generating that demand. We wanted to give you an insight into the data, along with top tips to make sure you get the most out of these key days.

So what have we improved since last year?

  1. We partnered with one of the leaders in cloud computing, Microsoft Azure, and moved our entire infrastructure to the public cloud. We now have all the compute we could ever need at our fingertips (in both the EU and the US) to make sure we perform when you need us the most.
  2. We have increased the bandwidth we use to send emails by 500%.
  3. We doubled the number of servers we use to send email.
  4. We have rewritten parts of the application that processes emails so it’s over 40% faster.
  5. We increased the amount of processing power our databases have by 50%.

What did last year look like?

Last year I wrote a similar Black Friday post, and many of our customers commented on how useful it was to see the trends in email opens, clicks and sends on the day. So, as a recap, this is how it went last year:

Email opens on Black Friday, 2015 vs 2014. We saw morning opens grow much faster than on Black Friday 2014. The peak was at 9am GMT, whereas previously it was 4pm. It is likely that the same trend will continue and there will be many consumers ready to hunt down those Black Friday deals early doors.

Top tip: Make sure your subject line is catchy, creates urgency and mentions Black Friday or Cyber Monday to get recipients’ attention. If you haven’t already dipped your toe in with emojis, now is the time to try them! You can get a free copy of our Black Friday email marketing cheatsheet here for more tips.


Email clicks on Black Friday 2015. Like the opens, we can see good consumer engagement in the morning, with the peak at 10am GMT.

Top tip: Bargain hunters start early and will shop around to get the best deal on the day, so make sure your email lands in the inbox early and that your calls to action are clear and irresistible. If you know someone has clicked but not converted, use that engagement to re-target them in the evening using automated programs and segmentation.

Email sends on Black Friday, 2015 vs 2014. Again, we saw far more email sends in the morning, and over 10% of all the Black Friday emails went out between 8am and 9am GMT.


Top tip: Make sure your campaigns are mobile-optimised. Most consumers will be reading while ‘data snacking’ on their mobile, whether that’s while they’re commuting, in the bathroom (yes, you know you do!) or at any spare moment in their day-to-day lives.

To sum up…

Remember, these are busy days with marketing professionals worldwide sending far more emails. This means that the receivers (Gmail, Hotmail, AOL, BT Internet etc) will also be receiving far more emails than ever before, and they will experience delays under the additional stress and load.

Make sure you get those emails through early to capitalise on the opportunity. You can also better your chances of conversion by ensuring that those offers are irresistible, by providing clear CTAs and by making sure your subject lines pop out!


Reblogged 2 years ago from blog.dotmailer.com

Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard.

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, but it doesn’t always get respected by Google and Bing. So a lot of folks say, “hey, disallow this,” then suddenly see those URLs popping up and wonder what’s going on; the truth is that Google and Bing oftentimes think they just know better. They think maybe you’ve made a mistake; they think, “hey, there are a lot of links pointing to this content, there are a lot of people visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.
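For illustration (these paths are purely hypothetical), a simple robots.txt pairs a User-agent line with one or more Disallow rules; the difference described here is between a rule that names a specific URL and a broad rule that covers a whole directory:

User-agent: *
Disallow: /old-press-release.html
Disallow: /staging/

The engines are usually better about honoring the first, very specific rule than the second, directory-wide one.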

Meta robots—a little different—that lives in the <head> of individual pages, so you can only control a single page with a meta robots tag. It tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
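As a quick illustration (the URL and anchor text here are made up), a nofollowed link is just an ordinary anchor tag with a rel attribute added:

<a href="http://example.com/some-page/" rel="nofollow">some page</a>

The engines can still discover that URL through other paths; the nofollow only says you aren’t editorially vouching for it and don’t want to pass PageRank and link equity through that particular link.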

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say “all user agents, you are not allowed to crawl blogtest.html”? Okay—that’s a good way to keep that page from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.
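In robots.txt syntax, that rule (using the blogtest.html example from the whiteboard) would look like this:

User-agent: *
Disallow: /blogtest.html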

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing instead is using rel=canonical or a 301 redirect.
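In practice (using example.com as a stand-in, since this is just a sketch), that means putting a canonical tag on each page of the version you don’t want indexed, pointing at its counterpart on the version you do:

<link rel="canonical" href="http://example.com/page.html">

Both versions stay crawlable, and the engines consolidate the link signals onto the canonical one.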

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great. The catch is that it still costs crawl budget, because the page has to be crawled for the noindex to be seen, even though it does keep the page out of the index.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content whose quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google; maybe I have a bunch of products and I only have the descriptions from the manufacturer, and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.
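Sketched out with a hypothetical folder name, the robots.txt for that not-yet-ready section might simply be:

User-agent: *
Disallow: /manufacturer-descriptions/

Then, as each folder of rewritten pages becomes ready, you remove or narrow that Disallow line and submit those URLs via the XML sitemap.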

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
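For example (assuming the color pages live at URLs like starwarsshirt-blue.html, which is just a guess at how such a site might structure them, with example.com standing in for the domain), each variant would carry a canonical tag pointing back at the default page:

<link rel="canonical" href="http://example.com/starwarsshirt.html">

The variants stay crawlable, and any links they pick up get consolidated onto the main product page.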

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. If you disallow crawling on those, Google can’t see the noindex and doesn’t know that it can follow the links. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior; assume that the disallow in robots.txt will prevent them from crawling. So I would say the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many results drawn from a database of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: when you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and turn the most common or popular sets of those search results into category-style pages: pages that provide real, unique value, that aren’t just lists of search results, and that are more landing pages than search results pages.

However, that being said, if you’ve got a long tail of these, or if you’d say, “hey, our internal search engine is really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to turn them into category landing pages,” then you can use a disallow in robots.txt to keep those out.
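If you go that route, the rule is usually just a disallow over the internal search path; the /search/ path below is a placeholder, so substitute whatever URL pattern your own results pages actually use:

User-agent: *
Disallow: /search/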

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 3 years ago from tracking.feedpress.it

What Deep Learning and Machine Learning Mean For the Future of SEO – Whiteboard Friday

Posted by randfish

Imagine a world where even the high-up Google engineers don’t know what’s in the ranking algorithm. We may be moving in that direction. In today’s Whiteboard Friday, Rand explores and explains the concepts of deep learning and machine learning, drawing us a picture of how they could impact our work as SEOs.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are going to take a peek into Google’s future and look at what it could mean as Google advances their machine learning and deep learning capabilities. I know these sound like big, fancy, important words, but they’re not actually that tough to understand. In fact, they’re simplistic enough that even a lot of technology firms like Moz do some level of machine learning. We don’t do anything with deep learning or large neural networks yet, though we might be going in that direction.

But I found an article published in January that is absolutely fascinating and, I think, really worth reading, and I wanted to extract some of its contents here for Whiteboard Friday, because I do think this is tactically and strategically important for SEOs to understand, and really important so that we can explain to our bosses, our teams, and our clients how SEO works and will work in the future.

The article is called “Google Search Will Be Your Next Brain.” It’s by Steve Levy. It’s over on Medium. I do encourage you to read it. It’s a relatively lengthy read, but just a fascinating one if you’re interested in search. It starts with a profile of Geoff Hinton, who was a professor in Canada and worked on neural networks for a long time and then came over to Google and is now a distinguished engineer there. As the article says, a quote from the article: “He is versed in the black art of organizing several layers of artificial neurons so that the entire system, the system of neurons, could be trained or even train itself to divine coherence from random inputs.”

This sounds complex, but basically what we’re saying is that we’re trying to get machines to come up with outcomes on their own, rather than us having to tell them all the inputs to consider, how to process those inputs, and what outcome to spit out. So this is essentially machine learning. Google has used this, for example, so that when you give it a bunch of photos it can say, “Oh, this is a landscape photo. Oh, this is an outdoor photo. Oh, this is a photo of a person.” Have you ever had that creepy experience where you upload a photo to Facebook or to Google+ and they say, “Is this your friend so and so?” And you’re like, “God, that’s a terrible shot of my friend. You can barely see most of his face, and he’s wearing glasses which he usually never wears. How in the world could Google+ or Facebook figure out that this is this person?”

That’s what they use these neural networks, these deep machine learning processes, for. So I’ll give you a simple example. Here at Moz, we do machine learning very simplistically for Page Authority and Domain Authority. We take all the inputs — number of links, number of linking root domains, every single metric that you could get from Moz on the page level, on the subdomain level, on the root-domain level, all these metrics — and then we combine them together and we say, “Hey machine, we want you to build us the algorithm that best correlates with how Google ranks pages, and here’s a bunch of pages that Google has ranked.” I think we use a base set of 10,000, and we do it about quarterly or every 6 months, feed that back into the system, and the system pumps out the little algorithm that says, “Here you go. This will give you the best correlating metric with how Google ranks pages.” That’s how you get Page Authority and Domain Authority.

Cool, really useful, helpful for us to say like, “Okay, this page is probably considered a little more important than this page by Google, and this one a lot more important.” Very cool. But it’s not a particularly advanced system. The more advanced system is to have these kinds of neural nets in layers. So you have a set of networks, and these neural networks, by the way, they’re designed to replicate nodes in the human brain, which is in my opinion a little creepy, but don’t worry. The article does talk about how there’s a board of scientists who make sure Terminator 2 doesn’t happen, or Terminator 1 for that matter. Apparently, no one’s stopping Terminator 4 from happening? That’s the new one that’s coming out.

So one layer of the neural net will identify features. Another layer of the neural net might classify the types of features that are coming in. Imagine this for search results. Search results are coming in, and Google’s looking at the features of all the websites and web pages, your websites and pages, to try and consider like, “What are the elements I could pull out from there?”

Well, there’s the link data about it, and there are things that happen on the page. There are user interactions and all sorts of stuff. Then we’re going to classify types of pages, types of searches, and then we’re going to extract the features or metrics that predict the desired result, that a user gets a search result they really like. We have an algorithm that can consistently produce those, and then neural networks are hopefully designed — that’s what Geoff Hinton has been working on — to train themselves to get better. So it’s not like with PA and DA, our data scientist Matt Peters and his team looking at it and going, “I bet we could make this better by doing this.”

This is standing back and the guys at Google just going, “All right machine, you learn.” They figure it out. It’s kind of creepy, right?

In the original system, you needed those people, these individuals here to feed the inputs, to say like, “This is what you can consider, system, and the features that we want you to extract from it.”

Then unsupervised learning, which is kind of this next step, the system figures it out. So this takes us to some interesting places. Imagine the Google algorithm, circa 2005. You had basically a bunch of things in here. Maybe you’d have anchor text, PageRank and you’d have some measure of authority on a domain level. Maybe there are people who are tossing new stuff in there like, “Hey algorithm, let’s consider the location of the searcher. Hey algorithm, let’s consider some user and usage data.” They’re tossing new things into the bucket that the algorithm might consider, and then they’re measuring it, seeing if it improves.

But you get to the algorithm today, and gosh there are going to be a lot of things in there that are driven by machine learning, if not deep learning yet. So there are derivatives of all of these metrics. There are conglomerations of them. There are extracted pieces like, “Hey, we only want to look at and measure anchor text on these types of results when we also see that the anchor text matches up to the search queries that have previously been performed by people who also search for this.” What does that even mean? But that’s what the algorithm is designed to do. The machine learning system figures out things that humans would never extract, metrics that we would never even create from the inputs that they can see.

Then, over time, the idea is that in the future even the inputs aren’t given by human beings. The machine is getting to figure this stuff out itself. That’s weird. That means that if you were to ask a Google engineer in a world where deep learning controls the ranking algorithm, if you were to ask the people who designed the ranking system, “Hey, does it matter if I get more links,” they might be like, “Well, maybe.” But they don’t know, because they don’t know what’s in this algorithm. Only the machine knows, and the machine can’t even really explain it. You could go take a snapshot and look at it, but (a) it’s constantly evolving, and (b) a lot of these metrics are going to be weird conglomerations and derivatives of a bunch of metrics mashed together and torn apart and considered only when certain criteria are fulfilled. Yikes.

So what does that mean for SEOs? What do we have to care about from all of these systems, this evolution, and this move towards deep learning? That move, by the way, is what Jeff Dean, who is, I think, a senior fellow over at Google (he’s the dude everyone jokingly calls the world’s smartest computer scientist over there), has basically been pushing for: “Hey, we want to put this into search. It’s not there yet, but we want to take these models, these things that Hinton has built, and we want to put them into search.” For SEOs, that future is going to mean far fewer distinct, universal ranking inputs, or ranking factors. We won’t really have ranking factors in the way that we know them today. It won’t be like, “Well, they have more anchor text and so they rank higher.” That might be something we’d still look at and we’d say, “Hey, they have this anchor text. Maybe that’s correlated with what the machine is finding, what the system is finding to be useful, and that’s still something I want to care about to a certain extent.”

But we’re going to have to consider those things a lot more seriously. We’re going to have to take another look at them and decide and determine whether the things that we thought were ranking factors still are when the neural network system takes over. It also is going to mean something that I think many, many SEOs have been predicting for a long time and have been working towards, which is more success for websites that satisfy searchers. If the output is successful searches, and that’s what the system is looking for, and that’s what it’s trying to correlate all its metrics to, if you produce something that means more successful searches for Google searchers when they get to your site, and you ranking in the top means Google searchers are happier, well you know what? The algorithm will catch up to you. That’s kind of a nice thing. It does mean a lot less info from Google about how they rank results.

So today you might hear from someone at Google, “Well, page speed is a very small ranking factor.” In the future they might say, “Well, page speed, like all ranking factors, is totally unknown to us.” Because the machine might say, “Well yeah, page speed as a distinct metric, one that a Google engineer could actually look at, looks very small.” But derivatives of things that are connected to page speed may be huge inputs. Maybe page speed is something that, across all of these, is very well connected with happier searchers and successful search results. Weird things that we never thought of before might be connected with them as the machine learning system tries to build all those correlations, and that means potentially many more inputs into the ranking algorithm, things that we would never consider today, things we might consider wholly illogical, like, “What servers do you run on?” Well, that seems ridiculous. Why would Google ever grade you on that?

If human beings are putting factors into the algorithm, they never would. But the neural network doesn’t care. It doesn’t care. It’s a honey badger. It doesn’t care what inputs it collects. It only cares about successful searches, and so if it turns out that Ubuntu is poorly correlated with successful search results, too bad.

This world is not here yet today, but certainly there are elements of it. Google has talked about how Panda and Penguin are based off of machine learning systems like this. I think, given what Geoff Hinton and Jeff Dean are working on at Google, it sounds like this will be making its way more seriously into search and therefore it’s something that we’re really going to have to consider as search marketers.

All right everyone, I hope you’ll join me again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Reblogged 3 years ago from tracking.feedpress.it

List of Black Hat SEO Techniques

List of Black Hat SEO Techniques http://www.scrapeboxsenukevps.com We offer the best black hat SEO VPS solution for beginners and professional marketers with…

Reblogged 3 years ago from www.youtube.com

For Writers Only: Secrets to Improving Engagement on Your Content Using Word Pictures (and I Don’t Mean Wordle)

Posted by Isla_McKetta

“Picture it.”

If you’re of a certain generation, those two words can only conjure images of tiny, white-haired Sophia from the Golden Girls about to tell one of her engaging (if somewhat long and irrelevant) stories as she holds her elderly roommates hostage in the kitchen or living room of their pastel-hued Miami home.

Even if you have no idea what I’m talking about, those words should become your writing mantra, because what readers do with your words is take all those letters and turn them into mind pictures. And as the writer, you have control over what those pictures look like and how long your readers mull them over.

According to Reading in the Brain by Stanislas Dehaene, reading involves a rich back and forth between the language areas and visual areas of our brains. Although the full extent of that connectivity is not yet known, it’s easy to imagine that the more sensory (interesting) information we can include in our writing, the more fully we can engage our readers.

So if you’re a writer or content marketer you should be harnessing the illustrative power of words to occupy your readers’ minds and keep them interested until they’re ready to convert. Here’s how to make your words work for you.

Kill clichés

I could have titled this piece “Painting a Picture with Words” but you’ve heard it. Over and over and over. And I’m going to propose that every time you use a cliché, a puppy dies. 

While that’s a bit extreme (at least I hope so, because that’s a lot of dead puppies and Rocky’s having second thoughts about his choice of parents), I hope it will remind you to read over what you’ve written and see where your attention starts to wander (wandering attention=cliché=one more tragic, senseless death), or rather, where you get bored. Chances are it’s right in the middle of a tired bit of language that used to be a wonderful word picture but has been used and abused to the point that we readers can’t even summon the image anymore.

Make up metaphors (and similes)

Did you know that most clichés used to be metaphors? And that we overused them because metaphors are possibly the most powerful tool we have at our disposal for creating word pictures (and, thus, engaging content)? You do now.

By making unexpected comparisons, metaphors and similes force words to perform like a stage mom on a reality show. These comparisons shake our brains awake and force us to pay attention. So apply a whip to your language. Make it dance like a ballerina in a little pink tutu. Give our brains something interesting to sink our teeth into (poor Rocky!), gnaw on, and share with our friends.

Engage the senses

If the goal of all this attention to language is to turn reading into a full brain experience, why not make it a little easier by including sensory information in whatever you’re writing? Here are a few examples:

  • These tickets are selling so fast we can smell the burning rubber.
  • Next to a crumbling cement pillar, our interview subject sits typing on his pristine MacBook Air.
  • In a sea (yelp!), or rather a never-ending horde, of black and gray umbrellas, this red cowboy hat will show the world you own your look.
  • Black hat tactics left your SERPs stinking as bad as a garbage strike in late August? Let us help you clear the air by cleaning up those results.

See how those images and experiences continue to unfold and develop in your mind? You have the power to affect your readers the same way—to create an image so powerful it stays with them throughout their busy days. One note of caution, though: sensory information is so strong that you want to be careful when creating potentially negative associations (like that garbage strike stench in the final example).

Leverage superlatives (wisely) and ditch hyperbole

SUPERLATIVES ARE THE MOST EFFECTIVEST TOOL YOU CAN USE EVER (until you wear your reader out or lose their trust). Superlatives (think “best,” “worst,” “hairiest” – any form of the adjective or adverb that is the most exaggerated form of the word) are one of the main problems with clickbait headlines (the other being the failure to deliver on those huge promises).

Speaking of exaggeration, be careful with it in all of its forms. You don’t actually have to stop using it, but think of your reader’s credence in your copy as a grasshopper handed over by a child. They think it’s super special and they want you to as well. If you mistreat that grasshopper by piling exaggerated fact after exaggerated fact on top of it, the grasshopper will be crushed and your reader will not easily forgive you.

So how do you stand out in a crowded field of over-used superlatives and hyperbolic claims? Find the places your products honestly excel and tout those. At Moz we don’t have the largest link index in the world. Instead, we have a really high quality link index. I could have obfuscated there and said we have “the best” link index, but by being specific about what we’re actually awesome at, we end up attracting customers who want better results instead of more results (and they’re happier for it).

Unearth the mystery

One of the keys to piquing your audience’s interest is to tap into (poor puppy!), or rather create or find, the mystery in what you’re writing. I’m not saying your product description will suddenly feature PIs in fedoras (I can dream, though), but figure out what’s intriguing or new about what you’re talking about. Here are some examples:

  • Remember when shortcuts meant a few extra minutes to yourself after school? How will you spend the 15-30 minutes our email management system will save you? We won’t tell…
  • You don’t need to understand how this toilet saves water while flushing so quietly it won’t wake the baby; just enjoy a restful night’s sleep (and lower water bills).
  • Check out this interactive to see what makes our work boots more comfortable than all the rest.

Secrets, surprises, and inside information make readers hunger for more knowledge. Use that power to get your audience excited about the story you’re about to tell them.

Don’t forget the words around your imagery

Notice how some of these suggestions aren’t about the word picture itself but about the frame around the picture? I firmly believe that a reader comes to a post with a certain amount of energy. You can waste that energy by soothing them to sleep with boring imagery and clichés while they try to find something to be interested in. Or you can give them energy by giving them word pictures they can get excited about.

So picture it. You’ve captured your reader’s attention with imagery so engaging they’ll remember you after they put down their phone, read their social streams (again), and check their email. They’ll come back to your site to read your content again or to share that story they just can’t shake.

Good writing isn’t easy or fast, but it’s worth the time and effort.

Let me help you make word pictures

Editing writing to make it better is actually one of my great pleasures in life, so I’m going to make you an offer here. Leave a sentence or two in the comments that you’re having trouble activating, and I’ll see what I can do to offer you some suggestions. Pick a cliché you can’t get out of your head or a metaphor that needs a little refresh. Give me a little context for the best possible results.

I’ll do my best to help the first 50 questions or so (I have to stop somewhere or I’ll never write the next blog post in this series), so ask away. I promise no puppies will get hurt in the process. In fact, Rocky’s quite happy to be the poster boy for this post—it’s the first time we’ve let him have beach day, ferry day, and all the other spoilings all at once.


Reblogged 3 years ago from tracking.feedpress.it