Making the most of your Black Friday campaign

It’s safe to say that getting your Black Friday campaign right is essential.

Over the last ten years, we’ve seen a massive shift in Black Friday sales. Traditionally an in-store shopping extravaganza, Black Friday is now dominated by online shoppers scooping up deals, sparking a clicks vs. bricks war. A whopping $6.2bn was spent online in the US on Black Friday – up 23.6% on 2017.

Consumers are clearly shunning the stores in favor of a more accessible and stress-free way to bag the shopping day’s deals. This presents a tricky challenge to ecommerce brands; while high-street retailers are rewarded with limited rivalry, online stores are fighting in an arena where their voices can be silenced by the sheer size of the competition. Our November send volume is proof that consumers’ inboxes were groaning with messages from brands: 1.6 billion emails were sent from the platform, peaking over the Black Friday weekend.

Shopping day turned shopping week

An extended Black Friday shopping period – sometimes referred to by retailers as the Black Tag or Cyber Week event – has been a growing trend over the last couple of years. Have too many brands jumped on the bandwagon and saturated the market, resulting in a lack of sales on the day itself?

Michele Dupré from Verizon Enterprise Solutions thinks that consumers now see Black Friday as a marathon and not a sprint: “Retailers need to be prepared. Everything used to be built around Black Friday. Now, shopping starts in early November and continues to December 24. Retailers must keep consumers engaged throughout.”

And, on that very note, we thought it was about time we checked on some of the big brands to see how they made the most of their Black Friday campaigns.

Trends and findings

A prominent observation from our research was that retailers see Black Friday as a weekend-long or even week-long event.

None of the brands we looked at restricted discounting to one day. 100% of retailers who participated in the occasion offered a sale for four days or more, though email promos and previews often spanned beyond that period. The name of the shopping event varied, but most were branded Black Friday, the Black Tag Event, or Cyber Week.

66% of retailers didn’t try and claw back the abandoned cart

Using email to combat cart abandonment was a proven tactic that was missed by 56% of the brands in our 2018 Hitting the Mark report. It seems retailers have still not realized the huge revenue potential of cart recovery emails, with 66% of the brands we looked at failing to follow up on the lost sale.

Abandoned cart automations are easily set up and can often deliver ROI in a matter of weeks, if not days. And with the global shopping cart abandonment rate sitting at around 75.6%, we’re baffled as to why more retailers aren’t utilizing this ROI-generating automation.

44% of brands didn’t use email effectively

14 of our sample of 32 retailers didn’t adopt email as a method to market Black Friday deals, despite 22 of the brands participating in the promotion. This surprised us considering email delivers the best ROI of all the marketing channels, with average returns of $44 for every $1 spent.

However, as you’ll see from the next set of stats, many brands relied heavily on email to amplify their shopping day sales.

Consumers received an average of 18 emails a day during the period

We received a total of 130 Black Friday-related emails that arrived before, during and in the final throes of the shopping event. That means over the week-long period, an average of 18 emails landed in our inbox every day.

What we found was that brands tended to use email to tease us with previews in the run-up to the event, then went hard on the day. For the remainder of the event, it was common to receive countdown reminders or category-specific product deals.

Black Friday still reigns in the US

Black Friday has swept its way across the world with retailers in the UK, France and New Zealand jumping on board. But it seems that US-based brands are still the ones who go the biggest on Black Friday.

We received a total of 95 emails from US brands, compared to just 27 from UK-based retailers. This isn’t totally surprising, given that the Black Friday shopping period is tied to Thanksgiving and its origins as the start of the United States’ Christmas shopping season. However, some American brands were quite aggressive in their use of email. For example, Overstock sent us 18 emails over the week, but it was Best Buy that smashed that number with a total of 26 emails over the seven-day event – averaging nearly four a day.

Noteworthy brands

Easy opportunities to boost sales are still being missed by companies of all sizes in all sectors. We decided to home in on a few retailers that performed highly, along with a couple that surprised us. From this, we hope to inspire you to branch out and try something new for this year’s Black Friday marketing campaign.

ASOS

ASOS is a global fashion destination for 20-somethings, selling cutting-edge fashion and offering a wide variety of fashion-related content. The brand scooped the top spot in our 2017 Hitting the Mark report, collecting the highest number of points for email marketing compared to the other 99 retailers.

ASOS ranked above the rest for timeliness, its use of automation and cross-channel promotion. But how did it fare on Black Friday?

The fashion retailer ran a site-wide 20% discount over the four-day period from Black Friday. It was widely promoted via email and as an appropriately themed homepage takeover. What we really liked was that despite offering 20% off all its products, ASOS targeted us with a gender-specific promo email that homed in on a category we’d shown an interest in previously.

ASOS Black Friday event

After we browsed the ASOS website, added something to our cart and abandoned the site, a few days passed but no cart recovery message arrived. The lack of an abandoned cart email surprised us, as the fashion retailer scored highly for this automation program in our previous study. One recommendation would be to use the abandoned cart program as a last-ditch attempt to bag the sale on the last day of the event.

John Lewis

John Lewis is a chain of upmarket department stores operating throughout the United Kingdom. In 2017’s Hitting the Mark, the retailer was crowned the King of Customer Experience, scoring 32 points from a possible 35.

For a well-respected brand like John Lewis, a shopping event like Black Friday necessitates a careful balancing act: attracting custom without cheapening its name. And, no surprise, the department store managed it in admirable style.

The John Lewis homepage tastefully reflected its Black Friday campaign, which complemented its existing design. What’s more, the retailer fought off the competition by advertising that it was price matching a close competitor’s promotion.

At the start of the event and on the final day, John Lewis used on-brand responsive emails to promote its offer and link us off to the various price-dropped product categories.

John Lewis Black Friday abandoned cart

Finally, we were impressed to see that John Lewis followed up with a well-designed abandoned cart email to encourage us to complete our purchase. The cart recovery program delivered the email the day after we’d placed the item in our basket. We especially liked John Lewis’s emphasis on free and easy returns if we weren’t happy with the product once it arrived.

Timpson

Timpson is a British retailer specializing in shoe repairs, key cutting, and engraving, as well as dry cleaning and photo processing.

The brand came 96th out of 100 in our main report – did it improve its tactics for Black Friday?

In short, no. A year and a half on from joining the company’s mailing list – and having bought a product a couple of months later – we still hadn’t received a marketing email. Timpson didn’t jump on the Black Friday bandwagon, and this was evident from its homepage and website.

Timpson Black Friday homepage

In the 2017 study, Timpson lost points for failing to send a cart recovery email. We tried again for this Black Friday special, to no avail.

Ralph Lauren

Ralph Lauren is a leader in the design, marketing, and distribution of premium apparel, homeware, accessories, and fragrances. In our 2017 report, the retailer performed quite poorly, coming in 86th out of 100. However, we were pleasantly surprised by its efforts on Black Friday.

Understated Black Friday design appeared to be a theme for premium brands. Ralph Lauren named its event ‘Cyber Weekend Deals’ and the style of its website promotion was very much aligned with its typical brand colors.

Ralph Lauren used email to the max during its Black Friday campaign, first offering us exclusive early access to its deals and then catching us with a cart recovery email after we ‘forgot’ to purchase our chosen item.

Notonthehighstreet.com

Notonthehighstreet.com is the leading curated modern marketplace, connecting the best small creative businesses with the world. The brand came up trumps in Hitting the Mark 2017, coming in at joint 11th place – and it didn’t disappoint in our Black Friday review!

We really liked Notonthehighstreet.com’s cute Black Friday homepage banner styled out as a handmade typographic tapestry. Like many other retailers we reviewed, its Black Friday event spanned beyond the day itself.

Notonthehighstreet.com made sure we knew about its Black Friday campaign with an email to wake up to. It was creative and eye-catching with a GIF taking center stage, along with light-hearted copy to encourage us to click.

The Notonthehighstreet.com abandoned cart email was equally original, with witty words and helpful touches in case we were having issues checking out. A perfectly executed Black Friday campaign.

Notonthehighstreet abandoned cart

Key takeaways for 2019

Once an in-store shopping day, Black Friday has successfully merged with Cyber Monday to become one mammoth event.

Smartphones contributed more than $2bn of the $6.2bn in online sales, so mobile-readiness is essential. We witnessed some great examples of mobile-friendly emails, particularly from John Lewis and ASOS, which used optimized templates, images, and copy, plus responsive content blocks.

However, what was jarring was the brands’ underuse of email marketing automation, namely abandoned cart programs. Overall, there was a lack of personalization and a penchant for one-size-fits-all offers. These pitfalls can easily be avoided with smart segmentation and marketing automation.

Implement today

Our advice for this year’s Black Friday campaign would be to focus on key customer segments using Engagement Cloud’s RFM personas. Using these, you’ll be able to differentiate your new customers from your inactive users and loyal shoppers, making it easier to take each segment on a personalized, automated journey.

For more inspiration, check out our Hitting the mark: Black Friday special report.




Reblogged 5 days ago from blog.dotdigital.com

Are you ready for Black Friday? If not, here’s our checklist

For many of us, Black Friday conjures up terrifying images of crushing crowds and fighting over TVs.


That’s why we’ve put together the ultimate checklist to help you plan your Black Friday sales with confidence.

Today, Black Friday and Cyber Monday are some of the biggest shopping days of the year. Last year, Black Friday brought in $6.2 billion in online sales in the US. Still a relatively new event in the UK, Black Friday saw 2018 online sales increase by nearly 8% Y-o-Y to reach £1.49 billion. In Australia, interest in Black Friday has grown by 614% over the last five years, with shoppers spending an average of AU$263 a day.

Despite this, the public’s appetite for Black Friday deals seems to be declining. With fewer people visiting stores and more opting to surf for sales online instead, competition is getting fiercer.

Black Friday boredom?

‘Declining’ might not be the right word. What we’re seeing is a slowdown. Where sales increases had been in double figures, that growth is beginning to die down.

There could be many reasons for this, but boredom is the most obvious explanation.

In the lead-up, shoppers’ inboxes are flooded with emails. Every single one of them shouting about not-to-be-missed deals and limited-time offers. They’ve heard and seen it all before.

Tactical shopping

Shoppers are adapting. They’re preparing themselves and approaching Black Friday with specific items in mind. 42% of shoppers know what they want to purchase, and they’re wise enough to have a rough price in mind.

What they’re buying depends on who they’re shopping for. 62% of shoppers are buying for their families, while 45% are shopping for themselves.

Some customers may be going through the year, racking up a list of items they want to get when the Black Friday sales start. So, if they’re going to be tactical about it, it’s time you were too.

Get Black Friday ready

Experience matters. On an average day, 80% of customers are willing to pay more for an item, if a brand offers them superior service.

To create unforgettable engagements, planning is essential. Especially for the holiday season. We’ve put together a killer checklist, so you can be fully prepared and focus on delivering the experiences that drive sales.

What to do today:

1.) Decide what you’re offering

When we think ‘Black Friday’ we think sales, discounts, and price drops, but that doesn’t have to be the case. Big-ticket items tend to be where shoppers are looking for deals, with laptops, TVs, and smartphones being the most popular options. But if these fall outside of your remit, why not try another tactic? If you have a loyalty scheme or use RFM segments, how about targeting loyal customers with exclusive content? That’s what sports brand Nike did. There are no rules around Black Friday, so try something different to stand out in the crowded inbox.

2.) Update your PPC plan

We’ve already mentioned – see above – that shoppers have items in mind when Black Friday starts. What they may not have chosen is where they’re going to get it. By updating and increasing your PPC budget for the holiday season, your chances of being discovered will improve.

3.) Improve website UX

Experience extends far beyond how you make the reader feel with an email. Your website’s ease of use can be the difference between a sale, and an abandoned cart. Leading up to the holiday season, you should carry out a full audit of your site. Options such as guest checkout and gift-wrapping can make a huge difference to customers.

Guest checkout: clear and simple. This is a perfect example from tech giant Apple.

4.) Establish timelines

The most successful ecommerce retailer over Black Friday weekend is (no surprise) Amazon. Why? Because it offers a full week’s worth of discounts. Kicking off at the beginning of the week, Amazon runs deals all the way from Monday to Cyber Monday. Again, this is a great way to stand out from the crowd, but it’s not the only way.

Do you want to build up excitement and anticipation with a series of ramp-up emails? Figure out what tactic you’re going to take and get planning. The further in advance you can do this, the more time you’ll have to plan your design, segments, and content, and build your automation.

Also, don’t forget to think about delivery. Can you fulfill a promise to deliver before Christmas? Are you going to need to outsource or hire more employees to manage your warehouse stock?

5.) Design your templates

Are you planning on refreshing your templates from last year, or looking for something a bit more bespoke to hook subscribers? Either way, make sure they’re created and ready to go in good time, so you can add them to your Black Friday automation programs.

It’s all important, and now’s the time to get these details sorted. It’ll help ensure things run more smoothly as the big day approaches.

Let the countdown begin:

6.) Push new sign-ups

Popovers are a great way to encourage new visitors to sign up for your email marketing. In the lead-up to Black Friday, use these to grow your marketing lists. Hook them with exclusive access to pre-Black Friday sales or free delivery during the sales when they subscribe.

7.) Plan your content

Show shoppers clearly and concisely why they should shop with you. What are your USPs? Bring your brand’s personality to life and give shoppers a reason to choose you over your competitors. Make sure your offers are clear and easy to understand. If you’re offering a discount via code, make sure it stands out and is eye-catching, so it’s not missed by readers. If it’s a site-wide discount, make it obvious. Remember, subscribers will be receiving hundreds of these emails. They’ll be skimming yours at best, so make sure you’re hooking them above the fold.

BB Dakota Black Friday sale
Great Simple Studio Black Friday sale

You also need to reflect this across your whole site. You don’t want missed deals to lead to negative feedback because they weren’t visible on your site. Your offers need to be easy to find across your homepage and all Black Friday campaign landing pages.

8.) Check your providers

Make sure your website hosts and email providers know about your upcoming campaign and expect the rise in traffic. Updates or changes to their software could cause massive problems if they go wrong. Luckily, the development team at dotdigital put a coding freeze on Engagement Cloud during the holiday season, to ensure your marketing runs as smoothly as possible.

This year, Black Friday falls on Friday 29 November and runs until Cyber Monday on Monday 2 December.

Follow this checklist, and you’ll be ready to take on the world this holiday season.




Reblogged 3 weeks ago from blog.dotdigital.com

Preparing for Black Friday

That’s a 16% increase on Black Friday last year and a 200% increase compared to a typical day!


It’s that last statistic which really stands out for the dotmailer technical teams. We need to be ready for that level of increased usage, and so we spend the weeks prior trawling through telemetry data from our platform to scale our infrastructure accordingly. This year we already knew a lot of the work had been done due to the incredibly busy period running up to the GDPR deadline in May.

There are four major elements to our server infrastructure: website clusters, background service clusters, database servers and email servers. Because we’re cloud-based, each can be scaled separately and so we:


  • added extra web servers to handle the 37 million hits our click-and-open tracking website saw, plus another 34 million for Web Behavior Tracking.
  • increased the number, size and performance of some of our servers which handle background tasks such as sending and importing.
  • added new email sending capacity to our US deployment, which has seen rapid growth in the last 12 months. At peak we sent over 320GB of email an hour!
  • optimized email delivery throughput with improved compliance with email receiver guidelines.
Throughout the day, our engineers monitor system metrics, noting areas that will soon need attention. It’s busy days like this that enable us to see early warning signs of weaknesses in our different systems.


Sometimes concerns can be addressed quickly and easily with more computing power, or by slightly altering the behavior of a system via a setting. Alternatively, bottlenecks are fed into the development roadmap so systems can be overhauled for the coming year. As demand from customers continues to increase, we continue to reinvest in our platform – and we’re already looking forward to next November!

If you missed our latest product release, 18four, please find out more here.


Reblogged 10 months ago from blog.dotmailer.com

We’re Black Friday ready. Are you?

With retailers expecting to deliver 30% of their annual sales and 40% of their profits in the fourth quarter, we know how important your emails are in generating that demand. We wanted to give you an insight into the data, plus top tips to make sure you make the most of these key days.

So what have we improved since last year?

  1. We partnered with one of the leaders in cloud computing, Microsoft Azure, and moved our entire infrastructure to utilise the public cloud. We now have all the compute we could ever need at our fingertips (both in EU and US) to make sure we perform when you need us the most.
  2. We have increased the bandwidth we use to send emails by 500%.
  3. We doubled the number of servers we use to send email.
  4. We have rewritten parts of the application that processes emails, so it’s over 40% faster.
  5. We increased the amount of processing power our databases have by 50%.

What did last year look like?

Last year I wrote a similar Black Friday post and many of our customers commented on how useful it was to see the trend of email opens, clicks and emails being sent on the day. So this is how it went last year, in recap:

Email opens on Black Friday, 2015 vs 2014. We saw morning opens grow much faster than on Black Friday 2014. The peak was at 9am, whereas previously it was 4pm GMT. It is likely that the same trend will continue and there will be many consumers ready to hunt down those Black Friday deals early doors.

Top tip: Make sure your subject line is catchy, creates urgency and mentions Black Friday or Cyber Monday to get their attention. If you haven’t already dipped your toe in with emojis, then now is the time to put them in! You can get a free copy of our Black Friday email marketing cheatsheet here for more tips.


Email clicks on Black Friday 2015. Like the opens, we can see good consumer engagement in the morning, with 10am GMT being the peak.

Top tip: Bargain hunters will start early and shop around to get the best deal on the day, so make sure your email gets in the inbox early and that the calls to action are clear and irresistible. If you know they have clicked and not converted, use that engagement to re-target them in the evening using automated programs and segmentation.

Email sends on Black Friday, 2015 vs 2014. Again, we saw far more email sends in the morning, and over 10% of all the Black Friday emails went out between 8am and 9am GMT.


Top tip: Make sure your campaigns are mobile-optimised. Most consumers will be reading while ‘data snacking’ on their mobile, whether that’s while they’re commuting, in the bathroom (yes, you know you do!) or at any spare moment in their day-to-day lives.

To sum up…

Remember, these are busy days with marketing professionals worldwide sending far more emails. This means that the receivers (Gmail, Hotmail, AOL, BT Internet etc) will also be receiving far more emails than ever before, and they will experience delays under the additional stress and load.

Make sure you get those emails through early to capitalise on the opportunity. You can also better your chances of conversion by ensuring that those offers are irresistible, by providing clear CTAs and by making sure your subject lines pop out!


Reblogged 2 years ago from blog.dotmailer.com

Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard.

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt, it tells crawlers what they should and shouldn’t access, it doesn’t always get respected by Google and Bing. So a lot of folks when you say, “hey, disallow this,” and then you suddenly see those URLs popping up and you’re wondering what’s going on, look—Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake, they think “hey, there’s a lot of links pointing to this content, there’s a lot of people who are visiting and caring about this content, maybe you didn’t intend for us to block it.” The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say “everything behind this entire big directory,” the worse they are about necessarily believing you.

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
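For reference, a nofollowed link is just a regular anchor tag with the rel attribute added (the URL here is a placeholder, not one from the example site):

<a href="https://example.com/some-page" rel="nofollow">anchor text</a>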

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like "blogtest.html" on our domain and we say "all user agents, you are not allowed to crawl blogtest.html." Okay—that’s a good way to keep that page away from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.
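In robots.txt terms, that instruction would look something like this minimal sketch (the file lives at the root of your domain):

User-agent: *
Disallow: /blogtest.html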

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great, but it requires crawl budget and you can still conserve indexing.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.
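As a rough sketch of the robots.txt route for the large-quantity case (the directory name here is purely illustrative), you’d block the folder while the content is being rewritten, then delete the rule and submit an XML sitemap once those pages are ready:

User-agent: *
Disallow: /unedited-products/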

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
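For instance, each color variant could point back to the default product page with a canonical tag in its head section (the domain shown is a placeholder):

<link rel="canonical" href="https://www.example.com/starwarsshirt.html" />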

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. What happens if you disallow crawling on those, Google can’t see the noindex. They don’t know that they can follow it. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior. Trust that the disallow in robots.txt will prevent them from crawling. So I would say, the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say “hey, our internal search engine, that’s really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to make those into category landing pages.” Then you can use the disallow in robots.txt to prevent those.

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.
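If you do decide to go that route, and assuming your internal search results live under a /search path (adjust the rule to match your own URL structure), the robots.txt entry is a one-liner:

User-agent: *
Disallow: /search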

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!


Reblogged 4 years ago from tracking.feedpress.it

What Deep Learning and Machine Learning Mean For the Future of SEO – Whiteboard Friday

Posted by randfish

Imagine a world where even the high-up Google engineers don’t know what’s in the ranking algorithm. We may be moving in that direction. In today’s Whiteboard Friday, Rand explores and explains the concepts of deep learning and machine learning, drawing us a picture of how they could impact our work as SEOs.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are going to take a peek into Google’s future and look at what it could mean as Google advances their machine learning and deep learning capabilities. I know these sound like big, fancy, important words. They’re not actually that tough of topics to understand. In fact, they’re simplistic enough that even a lot of technology firms like Moz do some level of machine learning. We don’t do anything with deep learning and a lot of neural networks. We might be going that direction.

But I found an article that was published in January, absolutely fascinating and I think really worth reading, and I wanted to extract some of the contents here for Whiteboard Friday because I do think this is tactically and strategically important to understand for SEOs and really important for us to understand so that we can explain to our bosses, our teams, our clients how SEO works and will work in the future.

The article is called “Google Search Will Be Your Next Brain.” It’s by Steve Levy. It’s over on Medium. I do encourage you to read it. It’s a relatively lengthy read, but just a fascinating one if you’re interested in search. It starts with a profile of Geoff Hinton, who was a professor in Canada and worked on neural networks for a long time and then came over to Google and is now a distinguished engineer there. As the article says, a quote from the article: “He is versed in the black art of organizing several layers of artificial neurons so that the entire system, the system of neurons, could be trained or even train itself to divine coherence from random inputs.”

This sounds complex, but basically what we’re saying is we’re trying to get machines to come up with outcomes on their own rather than us having to tell them all the inputs to consider and how to process those inputs and the outcome to spit out. So this is essentially machine learning. Google has used this, for example, to figure out when you give it a bunch of photos and it can say, “Oh, this is a landscape photo. Oh, this is an outdoor photo. Oh, this is a photo of a person.” Have you ever had that creepy experience where you upload a photo to Facebook or to Google+ and they say, “Is this your friend so and so?” And you’re like, “God, that’s a terrible shot of my friend. You can barely see most of his face, and he’s wearing glasses which he usually never wears. How in the world could Google+ or Facebook figure out that this is this person?”

That’s what they use, these neural networks, these deep machine learning processes for. So I’ll give you a simple example. Here at MOZ, we do machine learning very simplistically for page authority and domain authority. We take all the inputs — numbers of links, number of linking root domains, every single metric that you could get from MOZ on the page level, on the sub-domain level, on the root-domain level, all these metrics — and then we combine them together and we say, “Hey machine, we want you to build us the algorithm that best correlates with how Google ranks pages, and here’s a bunch of pages that Google has ranked.” I think we use a base set of 10,000, and we do it about quarterly or every 6 months, feed that back into the system and the system pumps out the little algorithm that says, “Here you go. This will give you the best correlating metric with how Google ranks pages.” That’s how you get page authority domain authority.

Cool, really useful, helpful for us to say like, “Okay, this page is probably considered a little more important than this page by Google, and this one a lot more important.” Very cool. But it’s not a particularly advanced system. The more advanced system is to have these kinds of neural nets in layers. So you have a set of networks, and these neural networks, by the way, they’re designed to replicate nodes in the human brain, which is in my opinion a little creepy, but don’t worry. The article does talk about how there’s a board of scientists who make sure Terminator 2 doesn’t happen, or Terminator 1 for that matter. Apparently, no one’s stopping Terminator 4 from happening? That’s the new one that’s coming out.

So one layer of the neural net will identify features. Another layer of the neural net might classify the types of features that are coming in. Imagine this for search results. Search results are coming in, and Google’s looking at the features of all the websites and web pages, your websites and pages, to try and consider like, “What are the elements I could pull out from there?”

Well, there’s the link data about it, and there are things that happen on the page. There are user interactions and all sorts of stuff. Then we’re going to classify types of pages, types of searches, and then we’re going to extract the features or metrics that predict the desired result, that a user gets a search result they really like. We have an algorithm that can consistently produce those, and then neural networks are hopefully designed — that’s what Geoff Hinton has been working on — to train themselves to get better. So it’s not like with PA and DA, our data scientist Matt Peters and his team looking at it and going, “I bet we could make this better by doing this.”

This is standing back and the guys at Google just going, “All right machine, you learn.” They figure it out. It’s kind of creepy, right?

In the original system, you needed those people, these individuals here to feed the inputs, to say like, “This is what you can consider, system, and the features that we want you to extract from it.”

Then unsupervised learning, which is kind of this next step, the system figures it out. So this takes us to some interesting places. Imagine the Google algorithm, circa 2005. You had basically a bunch of things in here. Maybe you’d have anchor text, PageRank and you’d have some measure of authority on a domain level. Maybe there are people who are tossing new stuff in there like, “Hey algorithm, let’s consider the location of the searcher. Hey algorithm, let’s consider some user and usage data.” They’re tossing new things into the bucket that the algorithm might consider, and then they’re measuring it, seeing if it improves.

But you get to the algorithm today, and gosh there are going to be a lot of things in there that are driven by machine learning, if not deep learning yet. So there are derivatives of all of these metrics. There are conglomerations of them. There are extracted pieces like, “Hey, we only want to look and measure anchor text on these types of results when we also see that the anchor text matches up to the search queries that have previously been performed by people who also search for this.” What does that even mean? But that’s what the algorithm is designed to do. The machine learning system figures out things that humans would never extract, metrics that we would never even create from the inputs that they can see.

Then, over time, the idea is that in the future even the inputs aren’t given by human beings. The machine is getting to figure this stuff out itself. That’s weird. That means that if you were to ask a Google engineer in a world where deep learning controls the ranking algorithm, if you were to ask the people who designed the ranking system, “Hey, does it matter if I get more links,” they might be like, “Well, maybe.” But they don’t know, because they don’t know what’s in this algorithm. Only the machine knows, and the machine can’t even really explain it. You could go take a snapshot and look at it, but (a) it’s constantly evolving, and (b) a lot of these metrics are going to be weird conglomerations and derivatives of a bunch of metrics mashed together and torn apart and considered only when certain criteria are fulfilled. Yikes.

So what does that mean for SEOs. Like what do we have to care about from all of these systems and this evolution and this move towards deep learning, which by the way that’s what Jeff Dean, who is, I think, a senior fellow over at Google, he’s the dude that everyone mocks for being the world’s smartest computer scientist over there, and Jeff Dean has basically said, “Hey, we want to put this into search. It’s not there yet, but we want to take these models, these things that Hinton has built, and we want to put them into search.” That for SEOs in the future is going to mean much less distinct universal ranking inputs, ranking factors. We won’t really have ranking factors in the way that we know them today. It won’t be like, “Well, they have more anchor text and so they rank higher.” That might be something we’d still look at and we’d say, “Hey, they have this anchor text. Maybe that’s correlated with what the machine is finding, the system is finding to be useful, and that’s still something I want to care about to a certain extent.”

But we’re going to have to consider those things a lot more seriously. We’re going to have to take another look at them and decide and determine whether the things that we thought were ranking factors still are when the neural network system takes over. It also is going to mean something that I think many, many SEOs have been predicting for a long time and have been working towards, which is more success for websites that satisfy searchers. If the output is successful searches, and that’s what the system is looking for, and that’s what it’s trying to correlate all its metrics to, if you produce something that means more successful searches for Google searchers when they get to your site, and you ranking in the top means Google searchers are happier, well you know what? The algorithm will catch up to you. That’s kind of a nice thing. It does mean a lot less info from Google about how they rank results.

So today you might hear from someone at Google, “Well, page speed is a very small ranking factor.” In the future they might be, “Well, page speed is like all ranking factors, totally unknown to us.” Because the machine might say, “Well yeah, page speed as a distinct metric, one that a Google engineer could actually look at, looks very small.” But derivatives of things that are connected to page speed may be huge inputs. Maybe page speed is something, that across all of these, is very well connected with happier searchers and successful search results. Weird things that we never thought of before might be connected with them as the machine learning system tries to build all those correlations, and that means potentially many more inputs into the ranking algorithm, things that we would never consider today, things we might consider wholly illogical, like, “What servers do you run on?” Well, that seems ridiculous. Why would Google ever grade you on that?

If human beings are putting factors into the algorithm, they never would. But the neural network doesn’t care. It doesn’t care. It’s a honey badger. It doesn’t care what inputs it collects. It only cares about successful searches, and so if it turns out that Ubuntu is poorly correlated with successful search results, too bad.

This world is not here yet today, but certainly there are elements of it. Google has talked about how Panda and Penguin are based off of machine learning systems like this. I think, given what Geoff Hinton and Jeff Dean are working on at Google, it sounds like this will be making its way more seriously into search and therefore it’s something that we’re really going to have to consider as search marketers.

All right everyone, I hope you’ll join me again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Reblogged 4 years ago from tracking.feedpress.it