The truth about purchased lists

Purchased lists: as email marketers, we all know we shouldn’t use them. But when you’re under pressure to get results, buying one can sometimes seem like a quick solution. You might have heard that a competitor got incredible results from buying X list, or perhaps your boss has used them before and wants to see a beefed-up database to maximise holiday revenue.

The truth is this: it makes no logical sense for marketers to take an interest in purchased lists. Aside from the legal implications on the horizon, there can be no benefit to stunting your highest performing channel with data that doesn’t convert.

Not convinced? Here are six reasons to avoid bought data now and forever:

Bought data is cold

Recipients who’ve chosen to receive your marketing emails have shown an active and recent interest in your brand; they’re warm and ready for your team to work on them. Bought data’s a different story. The prospects are unengaged, and there’s often no way of telling how old the data is; it’s cold. Unengaged contacts take longer to warm up and even longer to convert, putting greater pressure on your team while incurring greater cost to your business. There’s very little chance of achieving the ROI that you’re after.

Bought data is a drain on your resources

The costs associated with cold, bought data are manifold. Get charged per contact by your ESP? That’s money down the drain for every unengaged contact you’ve acquired. And the more cluttered your list gets, the less efficient your strategy becomes. On a pay-per-email contract? The same applies. Every cold email address is a detriment to your ROI.

Purchasing lists cripples your email marketing

So you’ve invested your hard-earned budget into a top email marketing platform, you’ve taken the time to train up your team, and your emails are looking better than ever. You’re ready to hit send on a huge campaign and watch the returns rack up. But you bought your email list for this campaign and – unknown to you or the seller – some of those emails are spam traps.

A spam trap is a fraud management tool used by the big ISPs to catch out malicious senders and marketers with poor data hygiene and acquisition practices. Bought data lists are peppered with traps. How do they get there? Check out this comprehensive guide from Laura Atkins for Word to the Wise.

The spam trap has no way of telling whether you’re a bad guy or an unsuspecting marketer, so you’ll be treated the same way as a spammer. Your sender reputation will start to deteriorate with every send, and some of your best customers’ mail servers will block your emails from reaching them, causing those relationships to suffer. You’re single-handedly shooting your ROI in the metaphorical foot.

Bought data skews your reporting

If your contact list is riddled with cold or false data, it’s impossible to get an accurate measure of your email marketing’s performance; every campaign report you collect will be distorted by the non-opens, bounces and poor engagement rates associated with these addresses. This means that one of the most important features you’ve gained access to by investing in an ESP won’t work properly.

Purchased lists are a legal minefield

There’s no two ways about this point. Sending to bought data means you’re contacting people who haven’t opted in to receive your messages. In many jurisdictions, this is an illegal practice – not to mention a poor introduction to your brand. And no one wants to be on the wrong side of the law when the new General Data Protection Regulation (GDPR) comes into force on 25th May 2018. Check out this blog post from dotmailer’s Chief Privacy Officer, James Koons, for a deeper dive into the legal implications of using purchased lists.

You could impair your marketing stack

The majority of ESPs are unable to provide a service to businesses that use purchased lists, in order to protect their customers – and themselves – from poor deliverability scores. A marketing team looking to graduate to a more empowering and scalable automation solution will struggle to get the best fit for their business, purely because top providers are unable to accommodate their bought data. And at the other end of the scale, an ESP without a robust anti-spam policy isn’t a clever investment of your time or resource.

At dotmailer, our Terms & Conditions prohibit the use of purchased lists, because we know that’s how you’ll get the best out of your strategy. And because we’re only interested in empowering marketers (and not punishing them), we work hard to ensure that you always have access to the most cutting-edge list growth and nurture tactics, along with the latest in sending best practice. Check out this whitepaper on list acquisition, or download our comprehensive guide to Deliverability.

Why purchased lists are a big no-no

It is not the question of right or wrong that surprises me; it is the fact that some marketers still take the gamble of using purchased lists. I know at one point it was a widely accepted practice in the marketing world, but with today’s much stricter regulatory landscape and advanced classification and detection technology, one would think the gamble is not worth it.

To this day, there is still much debate on the use of purchased lists. Most reputable Email Service Providers do not allow their use and will terminate a sender if they detect them. At dotmailer, our Terms & Conditions strictly prohibit their use. While this may seem harsh, there are many good reasons NOT to be using purchased lists in the first place:

Addresses on a purchased list are likely to be poor quality. The main point to remember is that the recipients on such a list have never opted in for your specific emails. Additionally, many purchased lists contain older email addresses. Because of these two factors, you are very likely to see higher complaint rates and higher bounce rates. Typically, a bounce rate below 2% is optimal; over 5% signals a problem. Any active email accounts on this type of list are more likely to delete the email without reading it, since they never asked to receive it. Worse, these active recipients are also more prone to mark the email as spam, which will definitely hurt your sender reputation and ultimately your brand reputation.

Another good reason not to use purchased lists? Spam traps! Often, list brokers will create fake email addresses in order to increase the size of their lists and to create a more “attractive” product. They will make up domain names and use random words as email addresses. Several groups within the anti-abuse community will discover these domains, purchase them and turn them into spam traps. If you hit a spam trap used by a receiver like Gmail, Hotmail or Yahoo! or an anti-abuse organization like Spamhaus, you will quickly find yourself blacklisted or blocked from sending.

The last point I will make about purchased lists concerns the obvious legal and ethical implications that come with their use. It is always important to remember that when you send an email campaign, you are sending to a real person. If people didn’t ask to hear from you and an unexpected email suddenly lands in their inbox, they tend to complain. In many jurisdictions, this would be considered illegal. You have forced your way into someone’s inbox, and that is not a great way to start a business relationship.

Keep in mind that the new General Data Protection Regulation (GDPR) will come into effect on 25 May 2018. While consent remains a lawful basis to transfer personal data under the GDPR, the definition of consent is significantly restricted. Directive 95/46/EC allowed controllers to rely on implicit and “opt-out” consent in some circumstances, but the GDPR requires the data subject to signal agreement by “a statement or a clear affirmative action.” Not to mention, almost all other anti-spam, privacy and data protection legislation prohibits contacting someone who has not given you the proper consent.

There are many ways to increase your subscriber base without hurting your reputation or brand.  The most obvious way is to use sign-up forms on your website. Contests, giveaways and other promotions are great for incentivizing these sign-ups. You can use social media to drive traffic to your promotions and entice even more organic sign-ups. Pop-ups, sliders, feature boxes and surveys are other great methods for healthy list growth.

Buying a list may seem like a quick fix, but in the long run it will cost you in terms of lost revenue and damage to your sender reputation. It can even get you into legal hot water. Take the time to organically grow your lists and ensure all of your recipients are opted in. We recommend double opt-in or confirmed opt-in as a best practice. Email relationships are a valuable marketing asset. In fact, 91 percent of consumers want to receive emails from the organizations they do business with. Building these relationships starts with making sure you have permission before adding someone to your email list.

For more reading on this subject, check out this article by Word to the Wise. Laura Atkins, email marketing legend and well-respected industry expert, points out that even if you hear someone say their ESP allows them to use purchased lists, they probably just haven’t been caught yet.

The 2015 Online Marketing Industry Survey

Posted by Dr-Pete

It’s been another wild year in search marketing. Mobilegeddon crushed our Twitter streams, but not our dreams, and Matt Cutts stepped out of the spotlight to make way for an uncertain Google future. Pandas and Penguins continue to torment us, but most days, like anyone else, we were just trying to get the job done and earn a living.

This year, over 3,600 brave souls, each one more intelligent and good-looking than the last, completed our survey. While the last survey was technically “2014”, we collected data for it in late 2013, so the 2015 survey reflects about 18 months of industry changes.

A few highlights

Let’s dig in. Almost half (49%) of our 2015 respondents involved in search marketing were in-house marketers. In-house teams still tend to be small – 71% of our in-house marketers reported only 1-3 people in their company being involved in search marketing at least quarter-time. These teams do have substantial influence, though, with 86% reporting that they were involved in purchasing decisions.

Agency search marketers reported larger teams and more diverse responsibilities. More than one-third (36%) of agency marketers in our survey reported working with more than 20 clients in the previous year. Agencies covered a wide range of services, with the top 5 being:

More than four-fifths (81%) of agency respondents reported providing both SEO and SEM services for clients. Please note that respondents could select more than one service/tool/etc., so the charts in this post will not add up to 100%.

The vast majority of respondents (85%) reported being directly involved with content marketing, which was on par with 2014. Nearly two-thirds (66%) of agency content marketers reported “Content for SEO purposes” as their top activity, although “Building Content Strategy” came in a solid second at 44% of respondents.

Top tools

Where do we get such wonderful toys? We marketers love our tools, so let’s take a look at the Top 10 tools across a range of categories. Please note that this survey was conducted here on Moz, and our audience certainly has a pro-Moz slant.

Up first, here are the Top 10 SEO tools in our survey:

Just like last time, Google Webmaster Tools (now “Search Console”) leads the way. Moz Pro and Majestic slipped a little bit, and Firebug fell out of the Top 10. The core players remained fairly stable.

Here are the Top 10 Content tools in our survey:

Even with its uncertain future, Google Alerts continues to be widely used. There are a lot of newcomers to the content tools world, so year-over-year comparisons are tricky. Expect even more players in this market in the coming year.

Following are our respondents’ Top 10 analytics tools:

For an industry that complains about Google so much, we sure do seem to love their stuff. Google Analytics dominates, crushing the enterprise players, at least in the mid-market. KISSmetrics gained solid ground (from the #10 spot last time), while home-brewed tools slipped a bit. CrazyEgg and WordPress Stats remain very popular since our last survey.

Finally, here are the Top 10 social tools used by our respondents:

Facebook Insights and Hootsuite retained the top spots from last year, but newcomer Twitter Analytics rocketed into the #3 position. LinkedIn Insights emerged as a strong contender, too. Overall usage of all social tools increased. Tweetdeck held the #6 spot in 2014 with 19% usage, but dropped to #10 this year even though its usage edged up slightly to 20%.

Of course, digging into social tools naturally raises the question of which social networks are at the top of our lists.

The Top 6 are unchanged since our last survey, and it’s clear that the barriers to entry to compete with the big social networks are only getting higher. Instagram doubled its usage (from 11% of respondents last time), but this still wasn’t enough to overtake Pinterest. Reddit and Quora saw steady growth, and StumbleUpon slipped out of the Top 10.

Top activities

So, what exactly do we do with these tools and all of our time? Across all online marketers in our survey, the Top 5 activities were:

For in-house marketers, “Site Audits” dropped to the #6 position and “Brand Strategy” jumped up to the #3 spot. Naturally, in-house marketers have more resources to focus on strategy.

For agencies and consultants, “Site Audits” bumped up to #2, and “Managing People” pushed down social media to take the #5 position. Larger agency teams require more traditional people wrangling.

Here’s a much more detailed breakdown of how we spend our time in 2015:

In terms of overall demand for services, the Top 5 winners (calculated by % reporting increase – % reporting decrease) were:

Demand for CRO is growing at a steady clip, but analytics still leads the way. Both “Content Creation” (#2) and “Content Curation” (#6) showed solid demand increases.

Some categories reported both gains and losses – 30% of respondents reported increased demand for “Link Building”, while 20% reported decreased demand. Similarly, 20% reported increased demand for “Link Removal”, while almost as many (17%) reported decreased demand. This may be a result of overall demand shifts, or it may represent more specialization by agencies and consultants.

What’s in store for 2016?

It’s clear that our job as online marketers is becoming more diverse, more challenging, and more strategic. We have to have a command of a wide array of tools and tactics, and that’s not going to slow down any time soon. On the bright side, companies are more aware of what we do, and they’re more willing to spend the money to have it done. Our evolution has barely begun as an industry, and you can expect more changes and growth in the coming year.

Raw data download

If you’d like to take a look through the raw results from this year’s survey (we’ve removed identifying information like email addresses from all responses), we’ve got that for you here:

Download the raw results

Controlling Search Engine Crawlers for Better Indexation and Rankings – Whiteboard Friday

Posted by randfish

When should you disallow search engines in your robots.txt file, and when should you use meta robots tags in a page header? What about nofollowing links? In today’s Whiteboard Friday, Rand covers these tools and their appropriate use in four situations that SEOs commonly find themselves facing.

For reference, here’s a still of this week’s whiteboard.

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about controlling search engine crawlers, blocking bots, sending bots where we want, restricting them from where we don’t want them to go. We’re going to talk a little bit about crawl budget and what you should and shouldn’t have indexed.

As a start, what I want to do is discuss the ways in which we can control robots. Those include the three primary ones: robots.txt, meta robots, and—well, the nofollow tag is a little bit less about controlling bots.

There are a few others that we’re going to discuss as well, including Webmaster Tools (Search Console) and URL status codes. But let’s dive into those first few first.

Robots.txt lives at yoursite.com/robots.txt. It tells crawlers what they should and shouldn’t access, and it doesn’t always get respected by Google and Bing. So a lot of folks, when you say, "hey, disallow this," and then you suddenly see those URLs popping up in search results, you’re left wondering what’s going on. Look—Google and Bing oftentimes think that they just know better. They think that maybe you’ve made a mistake, they think "hey, there’s a lot of links pointing to this content, there’s a lot of people who are visiting and caring about this content, maybe you didn’t intend for us to block it." The more specific you get about an individual URL, the better they usually are about respecting it. The less specific, meaning the more you use wildcards or say "everything behind this entire big directory," the worse they are about necessarily believing you.
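
To make that concrete, here’s a minimal, purely illustrative robots.txt with one very specific rule and one broad directory-wide rule (the paths are made up):

User-agent: *
# Specific rule (usually respected)
Disallow: /blog/old-draft-post.html
# Broad rule over a whole directory (more likely to be second-guessed)
Disallow: /archive/

The engines read both, but the sweeping pattern is the one they’re more inclined to overrule if strong signals point at content inside it.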

Meta robots—a little different—that lives in the headers of individual pages, so you can only control a single page with a meta robots tag. That tells the engines whether or not they should keep a page in the index, and whether they should follow the links on that page, and it’s usually a lot more respected, because it’s at an individual-page level; Google and Bing tend to believe you about the meta robots tag.

And then the nofollow tag, that lives on an individual link on a page. It doesn’t tell engines where to crawl or not to crawl. All it’s saying is whether you editorially vouch for a page that is being linked to, and whether you want to pass the PageRank and link equity metrics to that page.
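
For instance, a nofollowed link in HTML (the URL and anchor text here are placeholders) looks like this:

<a href="https://example.com/untrusted-page" rel="nofollow">anchor text</a>

The engines can still discover and crawl that URL through other paths; the attribute only withholds your editorial vouch and the link equity you’d otherwise pass.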

Interesting point about meta robots and robots.txt working together (or not working together so well)—many, many folks in the SEO world do this and then get frustrated.

What if, for example, we take a page like “blogtest.html” on our domain and we say “all user agents, you are not allowed to crawl blogtest.html”? Okay—that’s a good way to keep that page away from being crawled, but just because something is not crawled doesn’t necessarily mean it won’t be in the search results.
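
In robots.txt terms, that instruction is simply:

User-agent: *
Disallow: /blogtest.html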

So then we have our SEO folks go, “you know what, let’s make doubly sure that doesn’t show up in search results; we’ll put in the meta robots tag:”

<meta name="robots" content="noindex, follow">

So, “noindex, follow” tells the search engine crawler they can follow the links on the page, but they shouldn’t index this particular one.

Then, you go and run a search for “blog test” in this case, and everybody on the team’s like “What the heck!? WTF? Why am I seeing this page show up in search results?”

The answer is, you told the engines that they couldn’t crawl the page, so they didn’t. But they are still putting it in the results. They’re actually probably not going to include a meta description; they might have something like “we can’t include a meta description because of this site’s robots.txt file.” The reason it’s showing up is because they can’t see the noindex; all they see is the disallow.

So, if you want something truly removed, unable to be seen in search results, you can’t just disallow a crawler. You have to say meta “noindex” and you have to let them crawl it.

So this creates some complications. Robots.txt can be great if we’re trying to save crawl bandwidth, but it isn’t necessarily ideal for preventing a page from being shown in the search results. I would not recommend, by the way, that you do what we think Twitter recently tried to do, where they tried to canonicalize www and non-www by saying “Google, don’t crawl the www version of twitter.com.” What you should be doing is rel canonical-ing or using a 301.

Meta robots—that can allow crawling and link-following while disallowing indexation, which is great, but it still costs crawl budget, because the page has to be fetched for the tag to be seen; what you conserve is the index, not the crawl.

The nofollow tag, generally speaking, is not particularly useful for controlling bots or conserving indexation.

Webmaster Tools (now Google Search Console) has some special things that allow you to restrict access or remove a result from the search results. For example, if you have 404’d something or if you’ve told them not to crawl something but it’s still showing up in there, you can manually say “don’t do that.” There are a few other crawl protocol things that you can do.

And then URL status codes—these are a valid way to do things, but they’re going to obviously change what’s going on on your pages, too.

If you’re not having a lot of luck using a 404 to remove something, you can use a 410 to permanently remove something from the index. Just be aware that once you use a 410, it can take a long time if you want to get that page re-crawled or re-indexed, and you want to tell the search engines “it’s back!” 410 is permanent removal.

301—permanent redirect, we’ve talked about those here—and 302, temporary redirect.
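
As a quick sketch, assuming an Apache server with mod_alias and using placeholder paths (other servers and frameworks have their own equivalents), those status codes can be returned like this:

# 410: the page is permanently gone
Redirect gone /retired-page.html
# 301: permanent redirect to a new home
Redirect permanent /old-page.html https://example.com/new-page.html
# 302: temporary redirect
Redirect temp /seasonal-page.html https://example.com/holding-page.html

What matters to the engines is simply which status code the URL returns, however you configure it.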

Now let’s jump into a few specific use cases of “what kinds of content should and shouldn’t I allow engines to crawl and index” in this next version…

[Rand moves at superhuman speed to erase the board and draw part two of this Whiteboard Friday. Seriously, we showed Roger how fast it was, and even he was impressed.]

Four crawling/indexing problems to solve

So we’ve got these four big problems that I want to talk about as they relate to crawling and indexing.

1. Content that isn’t ready yet

The first one here is around, “If I have content of quality I’m still trying to improve—it’s not yet ready for primetime, it’s not ready for Google, maybe I have a bunch of products and I only have the descriptions from the manufacturer and I need people to be able to access them, so I’m rewriting the content and creating unique value on those pages… they’re just not ready yet—what should I do with those?”

My options around crawling and indexing? If I have a large quantity of those—maybe thousands, tens of thousands, hundreds of thousands—I would probably go the robots.txt route. I’d disallow those pages from being crawled, and then eventually as I get (folder by folder) those sets of URLs ready, I can then allow crawling and maybe even submit them to Google via an XML sitemap.
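
Sketching that out with a hypothetical folder name, the interim robots.txt might be:

User-agent: *
Disallow: /products-in-progress/

As each folder of pages gets its rewritten content, its Disallow line comes out.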

If I’m talking about a small quantity—a few dozen, a few hundred pages—well, I’d probably just use the meta robots noindex, and then I’d pull that noindex off of those pages as they are made ready for Google’s consumption. And then again, I would probably use the XML sitemap and start submitting those once they’re ready.
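
And once a batch of pages is ready, a minimal XML sitemap listing them (the URL is a placeholder) is just:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/products/rewritten-product.html</loc>
  </url>
</urlset>

You can then submit that file in Search Console so Google knows those URLs are ready for its attention.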

2. Dealing with duplicate or thin content

What about, “Should I noindex, nofollow, or potentially disallow crawling on largely duplicate URLs or thin content?” I’ve got an example. Let’s say I’m an ecommerce shop, I’m selling this nice Star Wars t-shirt which I think is kind of hilarious, so I’ve got starwarsshirt.html, and it links out to a larger version of an image, and that’s an individual HTML page. It links out to different colors, which change the URL of the page, so I have a gray, blue, and black version. Well, these four pages are really all part of this same one, so I wouldn’t recommend disallowing crawling on these, and I wouldn’t recommend noindexing them. What I would do there is a rel canonical.

Remember, rel canonical is one of those things that can be precluded by disallowing. So, if I were to disallow these from being crawled, Google couldn’t see the rel canonical back, so if someone linked to the blue version instead of the default version, now I potentially don’t get link credit for that. So what I really want to do is use the rel canonical, allow the indexing, and allow it to be crawled. If you really feel like it, you could also put a meta “noindex, follow” on these pages, but I don’t really think that’s necessary, and again that might interfere with the rel canonical.
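
Concretely, following the t-shirt example (the domain here is a placeholder), each color variant would carry this tag in its <head>, pointing back to the default page:

<link rel="canonical" href="https://example.com/starwarsshirt.html">

The variants stay crawlable and indexable, but the engines consolidate them, and the link equity pointing at them, onto the canonical URL.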

3. Passing link equity without appearing in search results

Number three: “If I want to pass link equity (or at least crawling) through a set of pages without those pages actually appearing in search results—so maybe I have navigational stuff, ways that humans are going to navigate through my pages, but I don’t need those appearing in search results—what should I use then?”

What I would say here is, you can use the meta robots to say “don’t index the page, but do follow the links that are on that page.” That’s a pretty nice, handy use case for that.

Do NOT, however, disallow those in robots.txt—many, many folks make this mistake. If you disallow crawling on those pages, Google can’t see the noindex. They don’t know that they can follow the links. Granted, as we talked about before, sometimes Google doesn’t obey the robots.txt, but you can’t rely on that behavior. Trust that the disallow in robots.txt will prevent them from crawling. So I would say, the meta robots “noindex, follow” is the way to do this.

4. Search results-type pages

Finally, fourth, “What should I do with search results-type pages?” Google has said many times that they don’t like your search results from your own internal engine appearing in their search results, and so this can be a tricky use case.

Sometimes a search result page—a page that lists many types of results that might come from a database of types of content that you’ve got on your site—could actually be a very good result for a searcher who is looking for a wide variety of content, or who wants to see what you have on offer. Yelp does this: When you say, “I’m looking for restaurants in Seattle, WA,” they’ll give you what is essentially a list of search results, and Google does want those to appear because that page provides a great result. But you should be doing what Yelp does there, and make the most common or popular individual sets of those search results into category-style pages. A page that provides real, unique value, that’s not just a list of search results, that is more of a landing page than a search results page.

However, that being said, if you’ve got a long tail of these, or if you’d say “hey, our internal search engine, that’s really for internal visitors only—it’s not useful to have those pages show up in search results, and we don’t think we need to make the effort to make those into category landing pages.” Then you can use the disallow in robots.txt to prevent those.
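
In that case the rule is usually a directory- or parameter-level disallow. The exact pattern depends entirely on how your internal search URLs are structured, so this is just an illustrative sketch:

User-agent: *
# Block internal search result pages
Disallow: /search/
# Google and Bing also support the * wildcard, e.g. for query-string search URLs
Disallow: /*?q=

Anything you’ve promoted into a proper category-style landing page should, of course, live outside those patterns.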

Just be cautious here, because I have sometimes seen an over-swinging of the pendulum toward blocking all types of search results, and sometimes that can actually hurt your SEO and your traffic. Sometimes those pages can be really useful to people. So check your analytics, and make sure those aren’t valuable pages that should be served up and turned into landing pages. If you’re sure, then go ahead and disallow all your search results-style pages. You’ll see a lot of sites doing this in their robots.txt file.

That being said, I hope you have some great questions about crawling and indexing, controlling robots, blocking robots, allowing robots, and I’ll try and tackle those in the comments below.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care!
