The power of preference centers

Earlier this year, the DMA revealed that new subscribers are willing to hand over their data in exchange for personalized experiences. Savvy shoppers know that brands have the tools to create something really special. Preference centers simply give brands the opportunity to hear what customers want, directly from the source.

What makes a good preference center?

Like a one-to-one with thousands of subscribers, a good preference center shows that you’re a brand that cares about customers’ experience. You want to get to know as much about them as possible to ensure their experience is memorable.

Giving control to subscribers is the fastest, most efficient way to increase satisfaction and decrease unsubscribes. You’ll quickly discover what they want, so you can easily tailor your marketing efforts to meet their needs. The more you meet their needs, the more likely they are to convert. And the more they convert, the more they spend.

A preference center should allow subscribers to:

  • Pick newsletters or topics they’d like to see in their inboxes, e.g. daily briefing, weekly travel updates, monthly product releases
  • Inform brands about areas of interest including products and departments, e.g. shoes and knitwear
  • Decide the channel they prefer to hear from brands on, e.g. email, SMS, or WhatsApp

A smart preference center is a massive data-gathering tool. So, while you’re letting customers tailor their experience, don’t forget to collect the data you need to create personalized email perfection.
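
To make that concrete, here’s a minimal sketch (in Python) of the kind of preference record such a center might capture. The field names are hypothetical; any real ESP or CRM will have its own schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class SubscriberPreferences:
    """One subscriber's self-reported preferences (hypothetical schema)."""
    email: str
    newsletters: list[str] = field(default_factory=list)  # e.g. ["daily_briefing", "weekly_travel"]
    interests: list[str] = field(default_factory=list)    # e.g. ["shoes", "knitwear"]
    channels: list[str] = field(default_factory=list)     # e.g. ["email", "sms", "whatsapp"]
    birthday: Optional[date] = None                       # optional extras, gathered over time
    zip_code: Optional[str] = None

# A subscriber who only wants a weekly travel digest, delivered by email
prefs = SubscriberPreferences(
    email="jane@example.com",
    newsletters=["weekly_travel"],
    interests=["knitwear"],
    channels=["email"],
)
```

The value of a record like this is that every field maps to a choice the subscriber made themselves, so each one can drive segmentation directly.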

Making preference centers work hard for you

While customers are crafting their perfect journey, you’ll be learning a lot about them. But there’s always something more you could know. You’re building a two-way relationship with shoppers, and as a result, they’re happy to give you extra tidbits of information.

It all helps them receive a better experience, so consider the extra data you could be gathering:

  • First name, last name, and date of birth
  • Address or zip code
  • A favorite product or service/wishlist builder
  • Important dates, e.g. anniversaries, birthdays
  • How they discovered your brand
  • Interests and hobbies
  • Favorite bricks-and-mortar stores or how frequently they visit one

The possibilities are endless, but it’s vital you’re clear and honest about why you’re collecting this data. If you put it to work improving their experience, they’ll happily hand it over.

Epic examples of perfect preference centers

Preference centers can be presented to shoppers in a number of different ways. Whether you decide to collect this information at newsletter sign-up, during the welcome email, or as part of account set-up, these examples should be enough to get you thinking creatively about your data capture.

Science in Sport

Science in Sport sign-up

Sports nutrition brand Science in Sport starts its preference collection simply. Using a sign-up incentive, it makes it clear that shoppers will benefit right from the beginning of the relationship. The brand also uses the opportunity to gather its first preference from customers.

By asking “What is your preferred sport?”, the brand can use this information to promote products specifically associated with running, cycling, or triathlons.

You don’t always need to collect every piece of information immediately. By gradually collecting customer information in the same way as Science in Sport, you don’t risk overwhelming your audience, and potentially ruining their experience.

Spotify

Spotify, on the other hand, allows customers to control the content they receive on an account level.

Spotify preference

Once an account has successfully been created, customers are able to choose what they hear about. For each topic, listeners can also choose whether these communications come in the form of an email or a push notification.

By diversifying its marketing channels, Spotify is increasing its chances of successfully engaging customers.

MATCHESFASHION

To ensure customers don’t miss their chance to shape their experience, MATCHESFASHION uses its welcome program to encourage subscribers to submit their preferences.

MATCHESFASHION sign up
MATCHESFASHION preference center

This preference center not only outlines what each newsletter covers but also states the day that it’ll land in customers’ inboxes. This helps build anticipation for the latest announcements and product launches. It also gives shoppers the opportunity to follow their favorite fashion designers. This information can then be used to deliver personalized edits every Monday and Friday as part of the ‘WOMENSWEAR JUST IN’ subscription.

MATCHESFASHION also goes on to confirm whether customers would be interested in receiving marketing via phone and post, as well as joining its loyalty program.

The overall result: endless opportunities for shoppers to personalize their experience.

J. Crew

J. Crew Unsubscribe

Don’t miss a single chance to let customers tell you what they want.

J. Crew doesn’t. In fact, when customers click ‘unsubscribe’ in an email, J. Crew gives shoppers a final opportunity to tailor its communications. This offers a chance to save the relationship. Customers can either adjust the frequency of their emails or specify the products they would rather hear about.

It’s what you do with the data that counts

The secret to successful marketing is the quality of the data you have.

The better your data, the better the experience you can build for your customers. By creating intelligent, personalized journeys and putting the customer first, you’re creating an engaged fanbase dedicated to your brand.

A powerful preference center gives you all the information you need to deliver the right message to the right customer, at the right time and on the right channel for them. To discover more about how Engagement Cloud is empowering brands to use customer data, don’t miss our very own Gavin Laugenie speaking at Magento Live later this month.




Exposing The Generational Content Gap: Three Ways to Reach Multiple Generations

Posted by AndreaLehr

With more people of all ages online than ever before, marketers must create content that resonates with multiple generations. Successful marketers realize that each generation has unique expectations, values and experiences that influence consumer behaviors, and that offering your audience content that reflects their shared interests is a powerful way to connect with them and inspire them to take action.

We’re in the midst of a generational shift, with Millennials expected to surpass Baby Boomers in 2015 as the largest living generation. In order to be competitive, marketers need to realize where key distinctions and similarities lie in terms of how these different generations consume content and share it with others.

To better understand the habits of each generation, BuzzStream and Fractl surveyed over 1,200 individuals and segmented their responses into three groups: Millennials (born between 1977–1995), Generation X (born between 1965–1976), and Baby Boomers (born between 1946–1964). [Eds note: The official breakdown for each group is as follows: Millennials (1981–1997), Generation X (1965–1980), and Boomers (1946–1964)]

Our survey asked them to identify their preferences for over 15 different content types while also noting their opinions on long-form versus short-form content and different genres (e.g., politics, technology, and entertainment).

We compared their responses and found similar habits and unique trends among all three generations.

Here’s our breakdown of the three key takeaways you can use to elevate your future campaigns:

1. Baby Boomers are consuming the most content

However, they have a tendency to enjoy it earlier in the day than Gen Xers and Millennials.

Although we found striking similarities between the younger generations, the oldest generation distinguished itself by consuming the most content. Over 25 percent of Baby Boomers consume 20 or more hours of content each week. Additional findings:

  • Baby Boomers also hold a strong lead in the 15–20 hours bracket at 17 percent, edging out Gen Xers and Millennials at 12 and 11 percent, respectively
  • The largest share of Gen Xers and Millennials—just over 22 percent each—consume between 5 and 10 hours per week
  • Less than 10 percent of Gen Xers consume less than five hours of content a week—the lowest of all three groups

We also compared the times of day each generation enjoys consuming content. The most popular window—chosen by over 30 percent of respondents—is between 8 p.m. and midnight. However, there are trends that distinguish the oldest generation from the younger ones:

  • Baby Boomers consume a majority of their content in the morning. Nearly 40 percent of respondents are online between 5 a.m. and noon.
  • The least popular time for most respondents to engage with content online is late at night, between midnight and 5 a.m., earning less than 10 percent from each generation
  • Gen X is the only generation to dip below 10 percent in three time slots: 5 a.m. to 9 a.m., 6 to 8 p.m., and midnight to 5 a.m.

When Do We Consume Content

When it comes to which device each generation uses to consume content, laptops are the most common, followed by desktops. The biggest distinction is in mobile usage: Over 50 percent of respondents who use their mobile as their primary device for content consumption are Millennials. Other results reveal:

  • Not only do Baby Boomers use laptops the most (43 percent), but they also use their tablets the most (40 percent of all primary tablet users are Baby Boomers)
  • Over 25 percent of Millennials use a mobile device as their primary source for content
  • Gen Xers are the least active tablet users, with less than 8 percent of respondents using it as their primary device

Device To Consume Content

2. Preferred content types and lengths span all three generations

One thing every generation agrees on is the type of content they enjoy seeing online. Our results reveal that the top four content types—blog articles, images, comments, and eBooks—are exactly the same for Baby Boomers, Gen Xers, and Millennials. Additional comparisons indicate:

  • The least preferred content types—flipbooks, SlideShares, webinars, and white papers—are the same across generations, too (although not in the exact same order)
  • Surprisingly, Gen Xers and Millennials list quizzes as one of their five least favorite content types

Most Consumed Content Type

All three generations also agree on ideal content length, around 300 words. Further analysis reveals:

  • Baby Boomers have the highest preference for articles under 200 words, at 18 percent
  • Gen Xers have a strong preference for articles over 500 words compared to other generations. Over 20 percent of respondents favor long-form articles, while only 15 percent of Baby Boomers and Millennials share the same sentiment.
  • Gen Xers also prefer short articles the least, with less than 10 percent preferring articles under 200 words

Content Length Preferences

However, when it comes to the verticals or genres in which they consume content, each generation has its own unique preferences:

  • Baby Boomers have a comfortable lead in world news and politics, at 18 percent and 12 percent, respectively
  • Millennials hold a strong lead in technology, at 18 percent, while Baby Boomers come in at 10 percent in the same category
  • Gen Xers fall between Millennials and Baby Boomers in most verticals, although they have slight leads in personal finance, parenting, and healthy living
  • Although entertainment is the top genre for each generation, Millennials and Baby Boomers prefer it slightly more than Gen Xers do

Favorite Content Genres

3. Facebook is the preferred content sharing platform across all three generations

Facebook remains king in terms of content sharing, and is used by about 60 percent of respondents in each generation studied. Surprisingly, YouTube came in second, followed by Twitter, Google+, and LinkedIn, respectively. Additional findings:

  • Baby Boomers share on Facebook the most, edging out Millennials by only a fraction of a percent
  • Although Gen Xers use Facebook slightly less than other generations, they lead in both YouTube and Twitter, at 15 percent and 10 percent, respectively
  • Google+ is most popular with Baby Boomers, at 8 percent, nearly double that of both Gen Xers and Millennials

Preferred Social Platform

Although a majority of each generation shares content on Facebook, the type of content they share, especially visuals, varies by age group. The oldest generation prefers more traditional content, such as images and videos. Millennials prefer newer content types, such as memes and GIFs, while Gen X predictably falls in between the two generations in all categories except SlideShares. Other findings:

  • The most popular content type for Baby Boomers is video, at 27 percent
  • Parallax is the least popular type for every generation, earning 1 percent or less in each age group
  • Millennials share memes the most, while less than 10 percent of Baby Boomers share similar content

Most Shared Visual Content

Marketing to several generations can be challenging, given the different values and ideas that resonate with each group. With the number of online content consumers growing daily, it’s essential for marketers to understand the specific types of content each of their audiences connects with, and to align their content marketing strategy accordingly.

Although there is no one-size-fits-all campaign, successful marketers can create content that multiple generations will want to share. If you need more help getting started, you can review this deck of additional insights, which includes the preferred video length and weekend consumption habits of each generation discussed in this post.



Has Google Gone Too Far with the Bias Toward Its Own Content?

Posted by ajfried

Since the beginning of SEO time, practitioners have been trying to crack the Google algorithm. Every once in a while, the industry gets a glimpse into how the search giant works, and we have an opportunity to deconstruct it. We don’t get many of these opportunities, but when we do—assuming we spot them in time—we try to take advantage of them so we can “fix the Internet.”

On Feb. 16, 2015, news started to circulate that NBC would start removing images and references of Brian Williams from its website.

This was it!

A golden opportunity.

This was our chance to learn more about the Knowledge Graph.

Expectation vs. reality

Often it’s difficult to predict what Google is truly going to do. We expect something to happen, but in reality it’s nothing like we imagined.

Expectation

What we expected to see was that Google would change the source of the image. Typically, if you hover over the image in the Knowledge Graph, it reveals the location of the image.

Keanu-Reeves-Image-Location.gif

This would mean that if the image disappeared from its original source, then the image displayed in the Knowledge Graph would likely change or even disappear entirely.

Reality (February 2015)

The only problem was, there was no official source (this changed, as you will soon see) and identifying where the image was coming from proved extremely challenging. In fact, when you clicked on the image, it took you to an image search result that didn’t even include the image.

Could it be? Had Google started its own database of owned or licensed images and was giving it priority over any other sources?

In order to find the source, we tried taking the image from the Knowledge Graph and using “search by image” on images.google.com to find others like it. For the NBC Nightly News image, Google failed to locate a match to the image it was actually using anywhere on the Internet. For other television programs, it was successful. Here is an example of what happened for Morning Joe:

Morning_Joe_image_search.png

So we found the potential source. In fact, we found three potential sources. It seemed kind of strange, but this appeared to be the discovery we were looking for.

This looks like Google is using someone else’s content and not referencing it. These images have a source, but Google is choosing not to show it.

Then Google pulled the ol’ switcheroo.

New reality (March 2015)

Now things changed, and Google decided to put a source on its images. Unfortunately, I had mistakenly assumed that hovering over an image showed the same thing as the file path at the bottom, but I was wrong. The URL you see when you hover over an image in the Knowledge Graph is actually nothing more than the title. The source is different.

Morning_Joe_Source.png

Luckily, I still had two screenshots saved on my desktop from when I first saw this. Success. One screen capture was from NBC Nightly News, and the other from the news show Morning Joe (see above), showing that the source was changed.

NBC-nightly-news-crop.png

(NBC Nightly News screenshot.)

The source is a Google-owned property: gstatic.com. You can clearly see the difference in the source change. What started as a hypothesis is now a fact: Google is certainly creating a database of images.

If this is the direction Google is moving, then it is creating all kinds of potential risks for brands and individuals. The implication is a loss of control for any brand that is looking to optimize its Knowledge Graph results. It also seems to pose a conflict of interest for Google, whose mission is to organize the world’s information, not license and prioritize it.

How do we think Google is supposed to work?

Google is an information-retrieval system tasked with sourcing information from across the web and supplying the most relevant results to users’ searches. In recent months, the search giant has taken a more direct approach by answering questions and assumed questions in the Answer Box, some of which come from un-credited sources. Google has clearly demonstrated that it is building a knowledge base of facts that it uses as the basis for its Answer Boxes. When it sources information from that knowledge base, it doesn’t necessarily reference or credit any source.

However, I would argue there is a difference between an un-credited Answer Box and an un-credited image. An un-credited Answer Box provides a fact that is indisputable, part of the public domain, unlikely to change (e.g., what year was Abraham Lincoln shot? How long is the George Washington Bridge?) Answer Boxes that offer more than just a basic fact (or an opinion, instructions, etc.) always credit their sources.

There are four possibilities when it comes to Google referencing content:

  • Option 1: It credits the content because someone else owns the rights to it
  • Option 2: It doesn’t credit the content because it’s part of the public domain, as seen in some Answer Box results
  • Option 3: It doesn’t reference it because it owns or has licensed the content. If you search for “Chicken Pox” or other diseases, Google appears to be using images from licensed medical illustrators. The same goes for song lyrics, which Eric Enge discusses here: Google providing credit for content. This adds to the speculation that Google is giving preference to its own content by displaying it over everything else.
  • Option 4: It doesn’t credit the content, but neither does it necessarily own the rights to the content. This is a very gray area, and is where Google seemed to be back in February. If this were the case, it would imply that Google is “stealing” content—which I find hard to believe, but felt was necessary to include in this post for the sake of completeness.

Is this an isolated incident?

At Five Blocks, whenever we see these anomalies in search results, we try to compare the term in question against others like it. This is a categorization concept we use to bucket individuals or companies into similar groups. When we do this, we uncover some incredible trends that help us determine what a search result “should” look like for a given group. For example, when looking at searches for a group of people or companies in an industry, this grouping gives us a sense of how much social media presence the group has on average or how much media coverage it typically gets.

Upon further investigation of terms similar to NBC Nightly News (other news shows), we noticed the un-credited image scenario appeared to be a trend in February, but now all of the images are being hosted on gstatic.com. When we broadened the categories further to TV shows and movies, the trend persisted. Rather than show an image in the Knowledge Graph and from the actual source, Google tends to show an image and reference the source from Google’s own database of stored images.

And just to ensure this wasn’t a case of tunnel vision, we researched other categories, including sports teams, actors and video games, in addition to spot-checking other genres.

Unlike terms for specific TV shows and movies, terms in each of these other groups all link to the actual source in the Knowledge Graph.

Immediate implications

It’s easy to ignore this and say “Well, it’s Google. They are always doing something.” However, there are some serious implications to these actions:

  1. The TV shows/movies aren’t receiving their due credit because, from within the Knowledge Graph, there is no actual reference to the show’s official site
  2. The more Google moves toward licensing and then retrieving their own information, the more biased they become, preferring their own content over the equivalent—or possibly even superior—content from another source
  3. It feels wrong and misleading to get a Google Image Search result rather than an actual site because:
    • The search doesn’t include the original image
    • Considering how poor Image Search results are normally, it feels like a poor experience
  4. If Google is moving toward licensing as much content as possible, then it could make the Knowledge Graph infinitely more complicated when there is a “mistake” or something unflattering. How could one go about changing what Google shows about them?

Google is objectively becoming subjective

It is clear that Google is attempting to create databases of information, including lyrics stored in Google Play, photos, and, previously, facts in Freebase (which is now Wikidata and not owned by Google).

I am not normally one to point my finger and accuse Google of wrongdoing. But this really strikes me as an odd move, one bordering on a clear bias to direct users to stay within the search engine. The fact is, we trust Google with a heck of a lot of information with our searches. In return, I believe we should expect Google to return an array of relevant information for searchers to decide what they like best. The example cited above seems harmless, but what about determining which is the right religion? Or even who the prettiest girl in the world is?

Religion-and-beauty-queries.png

Questions such as these, which Google is returning credited answers for, could return results that are perceived as facts.

Should we next expect Google to decide who is objectively the best service provider (e.g., pizza chain, painter, or accountant), then feature them in an un-credited answer box? Given the direction Google is moving right now, it feels like we should be calling its objectivity into question.

But that’s only my (subjective) opinion.



Should I Rebrand and Redirect My Site? Should I Consolidate Multiple Sites/Brands? – Whiteboard Friday

Posted by randfish

Making changes to your brand is a huge step, and while it’s sometimes the best path forward, it isn’t one to be taken lightly. In today’s Whiteboard Friday, Rand offers some guidance to marketers who are wondering whether a rebrand/redirect is right for them, and also those who are considering consolidating multiple sites under a single brand.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

To rebrand, or not to rebrand, that is the question

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. Today we’re going to chat a little bit about whether you should rebrand and consider redirecting your existing website or websites and whether you should potentially consolidate multiple websites and brands that you may be running.

So we’ve talked before about redirection moves best practices. We’ve also talked about the splitting of link equity and domain authority and those kinds of things. But one of the questions that people have is, “Gosh, you know I have a website today and given the moves that Google has been making, that the social media world has been making, that content marketing has been making, I’m wondering whether I should potentially rebrand my site.” Lots of people bought domains back in the day that were exact match domains or partial match domains or that they thought reflected a move of the web toward or away from less brand-centric stuff and toward more keyword matching, topic matching, intent matching kinds of things.

Maybe you’re reconsidering those moves and you want to know, “Hey, should I be thinking about making a change now?” That’s what I’m here to answer. So this question, to rebrand or not to rebrand, is tough, because you know that when you do that rebrand, you will almost certainly take a traffic hit, and SEO is one of the biggest places where people typically take that traffic hit.

Moz previously was at SEOmoz.org and moved to moz.com. We saw a dip in our traffic over about 3 to 4 months before it fully recovered, and I would say that dip was between 15% and 25% of our search traffic, depending on week to week. I’ll link to a list of metrics that I put on my personal blog, Moz.com/rand, so that you can check those out if you’d like to see them. But it was a short recovery time for us.

One of the questions that people always have is, “Well wait, did you lose rankings for SEO since SEO used to be in your domain name?” The answer is no. In fact, six months after the move, we were ranking higher for SEO related terms and phrases.

Scenario A: Rebranding or redirecting scifitoysandgames.com

So let’s imagine that today you are running SciFiToysAndGames.com, which is right on the borderline. In my opinion, that’s right on the borderline of barely tolerable. Like it could be brandable, but it’s not great. I don’t love the “sci-fi” in here, partially because of how the Syfy channel, the entity that broadcasts stuff on television has chosen to delineate their spelling, sci-fi can be misinterpreted as to how it’s spelled. I don’t love having to have “and” in a domain name. This is long. All sorts of stuff.

Let’s say you also own StarToys.com, but you haven’t used it. Previously StarToys.com has been redirecting to SciFiToysAndGames.com, and you’re thinking, “Well, man, is it the right time to make this move? Should I make this change now? Should I wait for the future?”

How memorable or amplifiable is your current brand?

Well, these are the questions that I would urge you to consider. How memorable and amplifiable is your current brand? That’s something that if you are recognizing like, “Hey I think our brand name, in fact, is holding us back in search results and social media amplification, press, in blog mentions, in journalist links and these kinds of things,” well, that’s something serious to think about. Word of mouth too.

Will you maintain your current brand name long term?

So if you know that sometime in the next two, three, four, or five years you do want to move to StarToys, I would actually strongly urge you to do that right now, because the longer you wait, the longer it will take to build up the signals around the new domain and the more pain you’ll potentially incur by having to keep branding this and working on this old brand name. So I would strongly urge you, if you know you’re going to make the move eventually, make it today. Take the pain now, rather than more pain later.

Can or have you tested brand preference with your target audience?

I would urge you to find two different groups, one who are loyal customers today, people who know SciFiToysAndGames.com and have used it, and two, people who are potential customers, but aren’t yet familiar with it.

You don’t need big sample sizes. If you can get 5, 10, or 15 people either in a room or talk to them in person, you can try some web surveys, you can try using some social media ads like things on Facebook. I’ve seen some companies do some testing around this. Even buying potential PPC ads and seeing how click-through rates perform and sentiment and those kinds of things, that is a great way to help validate your ideas, especially if you’re forced to bring data to the table by executives or other stakeholders.

How much traffic would you need in one year to justify a URL move?

The last thing I think about is imagine, and I want you to either imagine or even model this out, mathematically model it out. If your traffic growth rate — so let’s say you’re growing at 10% year-over-year right now — if that improved 1%, 5%, or 10% annually with a new brand name, would you make the move? So knowing that you might take a short-term hit, but then that your growth rate would be incrementally higher in years to come, how big would that growth rate need to be?

I would say that, in general, if I were thinking about these two domains (granted, this is a hard case because you don’t know exactly how much more brandable or word-of-mouth-able or amplifiable your new one might be compared to your existing one), well, gosh, my general rule here is if you think that’s going to be a substantive percentage, say 5% plus, it’s almost always worth it, because the compound growth rate over a number of years will mean that you’re winning big time. Remember that growth rate is different than raw growth. If you can incrementally increase your growth rate, you get tremendously more traffic when you look back two, three, four, or five years later.
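
To see how powerful that compounding is, here’s a toy model of the trade-off. The starting traffic, growth rates, and the size of the dip are invented purely for illustration:

```python
# Toy model of the rebrand trade-off: take a one-off traffic dip now in
# exchange for a permanently higher growth rate. All numbers are assumptions.

def projected(start, annual_growth, years, one_off_dip=0.0):
    """Monthly traffic after `years`, with an optional up-front dip."""
    return start * (1 - one_off_dip) * (1 + annual_growth) ** years

START = 100_000  # monthly visits today (illustrative)
for years in (1, 3, 5, 7):
    keep = projected(START, 0.10, years)                    # old brand: 10%/yr growth
    move = projected(START, 0.15, years, one_off_dip=0.20)  # rebrand: 20% hit, then 15%/yr
    print(f"{years}yr  keep={keep:>9,.0f}  rebrand={move:>9,.0f}")
```

With these made-up numbers the rebrand lags for roughly five years and then pulls ahead for good, which is exactly the compounding effect described above.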

Where does your current and future URL live on the domain/brand name spectrum?

I also made this domain name, brand name spectrum, because I wanted to try and visualize crappiness of domain name, brand name to really good domain name, brand name. I wanted to give some examples and then extract out some elements so that maybe you can start to build on these things thematically as you’re considering your own domains.

So from awful, we go to tolerable, good, and great. So Science-Fi-Toys.net is obviously terrible. I’ve taken a contraction of the name and the actual one. It’s got a .net. It’s using hyphens. It’s infinitely unmemorable up to what I think is tolerable — SciFiToysAndGames.com. It’s long. There are some questions about how type-in-able it is, how easy it is to type in. SciFiToys.com, which that’s pretty good. SciFiToys, relatively short, concise. It still has the “sci-fi” in there, but it’s a .com. We’re getting better. All the way up to, I really love the name, StarToys. I think it’s very brandable, very memorable. It’s concise. It’s easy to remember and type in. It has positive associations probably with most science fiction toy buyers who are familiar with at least “Star Wars” or “Star Trek.” It’s cool. It has some astronomy connotations too. Just a lot of good stuff going on with that domain name.

Then, another one, Region-Data-API.com. That sucks. NeighborhoodInfo.com. Okay, at least I know what it is. Neighborhood is a really hard name to type because it is very hard for many people to spell and remember. It’s long. I don’t totally love it. I don’t love the “info” connotation, which is generic-y.

DistrictData.com has a nice, alliterative ring to it. But maybe we could do even better and actually there is a company, WalkScore.com, which I think is wonderfully brandable and memorable and really describes what it is without being too in your face about the generic brand of we have regional data about places.

What if you’re doing mobile apps? BestAndroidApps.com. You might say, “Why is that in awful?” The answer is two things. One, it’s the length of the domain name and then the fact that you’re actually using someone else’s trademark in your name, which can be really risky. Especially if you start blowing up, getting big, Google might go and say, “Oh, do you have Android in your domain name? We’ll take that please. Thank you very much.”

BestApps.io, in the tech world, it’s very popular to use domains like .io or .ly. Unfortunately, I think once you venture outside of the high tech world, it’s really tough to get people to remember that that is a domain name. If you put up a billboard that says “BestApps.com,” a majority of people will go, “Oh, that’s a website.” But if you use .io, .ly, or one of the new domain names, .ninja, a lot of people won’t even know to connect that up with, “Oh, they mean an Internet website that I can type into my browser or look for.”

So we have to remember that we sometimes live in a bubble. Outside of that bubble are a lot of people who, if it’s not .com, questionable as to whether they’re even going to know what it is. Remember outside of the U.S., country code domain names work equally well — .co.uk, .ca, .co.za, wherever you are.

InstallThis.com. Now we’re getting better. Memorable, clear. Then all the way up to, I really like the name AppCritic.com. I have positive associations with like, “Oh yeah, restaurant critics, food critics, and movie critics, and this is an app critic. Great, that’s very cool.”

What are the things that are in here? Well, stuff at this end of the spectrum tends to be generic, forgettable, hard to type in. It’s long, brand-infringing, danger, danger, and sketchy sounding. It’s hard to quantify what sketchy sounding is, but you know it when you see it. When you’re reviewing domain names, you’re looking for links, you’re looking at things in the SERPs, you’re like, “Hmm, I don’t know about this one.” Having that sixth sense is something that we all develop over time, so sketchy sounding not quite as scientific as I might want for a description, but powerful.

On this end of the spectrum though, domain names and brand names tend to be unique, memorable, short. They use .com. Unfortunately, still the gold standard. Easy to type in, pronounceable. That’s a powerful thing too, especially because of word of mouth. We suffered with that for a long time with SEOmoz because many people saw it and thought, “Oh, ShowMoz, COMoz, SeeMoz.” It sucked. Have positive associations, like StarToys or WalkScore or AppCritic. They have these positive, pre-built-in associations psychologically that suggest something brandable.

Scenario B: Consolidating two sites

Scenario B, and then we’ll get to the end, but scenario B is the question like, “Should I consolidate?” Let’s say I’m running both of these today. Or, more realistically (I see people like this many times), you’re running AppCritic.com and StarToys.com, and you think, “Boy, these are pretty separate.” But then you keep finding overlap between them. Your content tends to overlap, the audience tends to overlap. I find this with many, many folks who run multiple domains.

How much audience and content overlap is there?

So we’ve got to consider a few things. First off, that audience and content overlap. If you’ve got StarToys and AppCritic and the overlap is very thin, just that little, tiny piece in the middle there. The content doesn’t overlap much, the audience doesn’t overlap much. It probably doesn’t make that much sense.

But what if you’re finding like, “Gosh, man, we’re writing more and more about apps and tech and mobile and web stuff on StarToys, and we’re writing more and more about other kinds of geeky, fun things on AppCritic. Slowly it feels like these audiences are merging.” Well, now you might want to consider that consolidation.

Is there potential for separate sales or exits?

Second point of consideration, the potential for separate exits or sales. So if you know that you’re going to sell AppCritic.com to someone in the future and you want to make sure that’s separate from StarToys, you should keep them separate. If you think to yourself, “Gosh, I’d never sell one without the other. They’re really part of the same company, brand, effort,” well, I’d really consider that consolidation.

Will you dilute marketing or branding efforts?

Last point of positive consideration is dilution of marketing and branding efforts. Remember that you’re going to be working on marketing. You’re going to be working on branding. You’re going to be working on growing traffic to these. When you split your efforts, unless you have two relatively large, separate teams, this is very, very hard to do at the same rate that it could be done if you combined those efforts. So another big point of consideration. That compound growth rate that we talked about, that’s another big consideration with this.

Is the topical focus out of context?

What I don’t recommend you consider and what has been unfortunately considered, by a lot of folks in the SEO-centric world in the past, is topical focus of the content. I actually am crossing this out. Not a big consideration. You might say to yourself, “But Rand, we talked about previously on Whiteboard Friday how I can have topical authority around toys and games that are related to science fiction stuff, and I can have topical authority related to mobile apps.”

My answer is if the content overlap is strong and the audience overlap is strong, you can do both on one domain. You can see many, many examples of this across the web, Moz being a great example where we talk about startups and technology and sometimes venture capital and team building and broad marketing and paid search marketing and organic search marketing and just a ton of topics, but all serving the same audience and content. Because that overlap is strong, we can be an authority in all of these realms. Same goes for any time you’re considering these things.

All right everyone, hope you’ve enjoyed this edition of Whiteboard Friday. I look forward to some great comments, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com



Try Your Hand at A/B Testing for a Chance to Win the Email Subject Line Contest

Posted by danielburstein

This blog post ends with an opportunity for you to win a stay at the ARIA in Vegas and a ticket to Email Summit, but it begins with an essential question for marketers…

How can you improve already successful marketing, advertising, websites and copywriting?

Today’s Moz blog post is unique. Not only are we going to teach you how to address this challenge, we’re going to offer an example that you can dig into to help drive home the lesson.

Give the people what they want

Some copy and design is so bad, the fixes are obvious. Maybe you shouldn’t insult the customer in the headline. Maybe you should update the website that still uses a dot matrix font.

But when you’re already doing well, how can you continue to improve?

I don’t have the answer for you, but I’ll tell you who does – your customers.

There are many tricks, gimmicks and technologies you can use in marketing, but when you strip away all the hype and rhetoric, successful marketing is pretty straightforward: clearly communicate the value your offer provides to people who will pay you for that value.

Easier said than done, of course.

So how do you determine what customers want? And the best way to deliver it to them?

Well, there are many ways to learn from customers, such as focus groups, surveys and social listening. While there is value in asking people what they want, there is also a major challenge in it. “People’s ability to understand the factors that affect their behavior is surprisingly poor,” according to research from Dr. Noah J. Goldstein, Associate Professor of Management and Organizations, UCLA Anderson School of Management.

Or, as Malcolm Gladwell more glibly puts it when referring to coffee choices, “The mind knows not what the tongue wants.”

Not to say that opinion-based customer preference research is bad. It can be helpful. However, it should be the beginning and not the end of your quest.

…by seeing what they actually do

You can use what you learn from opinion-based research to create a hypothesis about what customers want, and then run an experiment to see how they actually behave in real-world customer interactions with your product, marketing messages, and website.

The technique that powers this kind of research is often known as A/B testing, split testing, landing page optimization, and/or website optimization. If you are testing more than one thing at a time, it may also be referred to as multi-variate testing.

To offer a simple example, you might assume that customers buy your product because it tastes great. Or because it’s less filling. So you could create two landing pages – one with a headline that promotes that taste (treatment A) and another that mentions the low carbs (treatment B). You then send half the traffic that visits that URL to each version and see which performs better.

Here is a simple visual that Joey Taravella, Content Writer, MECLABS, created to illustrate the concept…

That’s just one test. To really learn about your customers, you must continue the process and create a testing-optimization cycle in your organization – continue to run A/B tests, record the findings, learn from them, create more hypotheses, and test again based on these hypotheses.

This is true marketing experimentation, and helps you build your theory of the customer.
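
As a rough illustration of the mechanics, here’s a minimal sketch (in Python) of a stable 50/50 split plus a significance check on the results. The function names are our own, not from any particular testing tool, and a production setup would also need sample-size planning and stopping rules:

```python
# Sketch of A/B test mechanics: deterministic traffic splitting and a
# two-proportion z-test on the observed conversion rates.
import hashlib
from math import sqrt, erf

def assign_treatment(visitor_id: str) -> str:
    """Stable 50/50 split: the same visitor always sees the same version."""
    digest = hashlib.sha256(visitor_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. "tastes great" converted 180 of 2,000 visitors; "less filling" 230 of 2,000
z, p = z_test(180, 2000, 230, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests the lift isn't just chance
```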

But you probably know all that already. So here’s your chance to practice while helping us shape an A/B test. You might even win a prize in the process.

The email subject line contest

The Moz Blog and MarketingExperiments Blog have joined forces to run a unique marketing experimentation contest. We’re presenting you with a real challenge from a real organization (VolunteerMatch) and asking you to write a subject line to test (it’s simple, just leave your subject line as a comment in this blog post).

We’re going to pick three subject lines suggested by readers of The Moz Blog and three from the MarketingExperiments Blog and run a test with this organization’s customers. Whoever writes the best-performing subject line will win a stay at the ARIA Resort in Las Vegas as well as a two-day ticket to MarketingSherpa Email Summit 2015 to help them gain lessons to further improve their marketing.

Sound good? OK, let’s dive in and tell you more about your “client”…

Craft the best-performing subject line to win the prize

Every year at Email Summit, we run a live A/B test where the audience helps craft the experiment. We then run, validate, close the experiment, and share the results during Summit as a way to teach about marketing experimentation. We have typically run the experiment using MarketingSherpa as the “client” website to test (MarketingExperiments and MarketingSherpa are sister publications, both owned by MECLABS Institute).

However, this year we wanted to try something different and interviewed three national non-profits to find a new “client” for our tests.

We chose VolunteerMatch – a nonprofit organization that uses the power of technology to make it easier for good people and good causes to connect. One of the key reasons we chose VolunteerMatch is that it is an already successful organization looking to further improve. (Here is a case study explaining one of its successful implementations – Lead Management: How a B2B SaaS nonprofit decreased its sales cycle 99%.)

Another reason we chose VolunteerMatch for this opportunity is that it has three types of customers, so the lessons from the content we create can help marketers across a wide range of sales models. VolunteerMatch’s customers are:

  • People who want to volunteer (B2C)
  • Non-profit organizations looking for volunteers (non-profit)
  • Businesses looking for corporate volunteering solutions (B2B) to which it offers a Software-as-a-Service product through VolunteerMatch Solutions

Designing the experiment

After we took VolunteerMatch on as the Research Partner “client,” Jon Powell, Senior Executive Research and Development Manager, MECLABS, worked with Shari Tishman, Director of Engagement and Lauren Wagner, Senior Manager of Engagement, VolunteerMatch, to understand their challenges, take a look at their current assets and performance, and craft a design of experiments to determine what further knowledge about its customers would help VolunteerMatch improve performance.

That design of experiments includes a series of split tests – including the live test we’re going to run at Email Summit, as well as the one you have an opportunity to take part in by writing a subject line in the comments section of this blog post. Let’s take a look at that experiment…

The challenge

VolunteerMatch wants to increase the response rate of its corporate email list (B2B) by discovering the best possible messaging to use. In order to find out, MarketingExperiments wants to run an A/B split test to determine the best messaging.

However, the B2B list is relatively small compared to the volunteer/cause list (B2C), which makes it harder to test in (and gain statistical significance from) and determine which messaging is most effective.
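
To illustrate why a small list is hard to test in, here’s a back-of-the-envelope sample-size calculation using the standard two-proportion formula. The 3% baseline response rate is an assumption for illustration, not VolunteerMatch’s actual figure:

```python
# Required recipients per treatment to detect a relative lift at ~95%
# confidence and 80% power (standard two-proportion approximation).
from math import ceil, sqrt

def n_per_arm(p_base, lift, z_alpha=1.96, z_power=0.84):
    p2 = p_base * (1 + lift)
    p_bar = (p_base + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_power * sqrt(p_base * (1 - p_base) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p_base) ** 2)

for lift in (0.50, 0.25, 0.10):
    print(f"{lift:.0%} lift -> {n_per_arm(0.03, lift):,} recipients per arm")
```

With a 3% baseline, spotting a subtle 10% lift takes over 50,000 recipients per treatment, while a dramatic 50% lift needs only a few thousand. A small B2B list can only detect big effects.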

So we’re going to run a messaging test to the B2C list. This isn’t without its challenges though, because most individuals on the B2C list are not likely to immediately connect with B2B corporate solutions messaging.

So the question is…

How do we create an email that is relevant (to the B2C list), which doesn’t ask too much, that simultaneously helps us discover the most relevant aspect of the solutions (B2B) product (if any)?

The approach – Here’s where you come in

This is where the Moz and MarketingExperiments community comes in to help.

We would like you to craft subject lines relevant to the B2C list, which highlight various benefits of the corporate solutions tool.

We have broken down the corporate solutions tool into three main categories of benefit for the SaaS product. In the comments section below, include which category you are writing a subject line for, along with what you think is an effective subject line.

The crew at Moz and MarketingExperiments will then choose the top subject line in each category to test. Below you will find the emails that will be sent as part of the test. They are identical, except for the subject lines (which you will write) and the bolded line in the third paragraph (that ties into that category of value).

Category #1: Proof, recognition, credibility


Category #2: Better, more opportunities to choose from


Category #3: Ease-of-use

About VolunteerMatch’s brand

Since we’re asking you to try your hand at crafting messaging for this example “client,” here is some more information about the brand to inform your messaging…


VolunteerMatch’s brand identity


VolunteerMatch’s core values

Ten things VolunteerMatch believes:

  1. People want to do good
  2. Every great cause should be able to find the help it needs
  3. People want to improve their lives and communities through volunteering
  4. You can’t make a difference without making a connection
  5. In putting the power of technology to good use
  6. Businesses are serious about making a difference
  7. In building relationships based on trust and excellent service
  8. In partnering with like-minded organizations to create systems that result in even greater impact
  9. The passion of our employees drives the success of our products, services and mission
  10. In being great at what we do

And now, we test…

To participate, you must leave your comment with your idea for a subject line before midnight on Tuesday, January 13, 2015. The contest is open to all residents of the 50 US states, the District of Columbia, and Canada (excluding Quebec), 18 or older. If you want more info, here are the official rules.

When you enter your subject line in the comments section, also include which category you’re entering for (and if you have an idea outside these categories, let us know…we just might drop it in the test).

Next, the Moz marketing team will pick the subject lines they think will perform best in each category from all the comments on The Moz Blog, and the MarketingExperiments team will pick the subject lines we think will perform the best in each category from all the comments on the MarketingExperiments Blog.

We’ll give the VolunteerMatch team a chance to approve the subject lines based on their brand standards, then test all six to eight subject lines and report back to you through the Moz and MarketingExperiments blogs which subject lines won and why they won to help you improve your already successful marketing.

So, what have you got? Write your best subject lines in the comments section below. I look forward to seeing what you come up with.





Google’s Physical Web and its Impact on Search

Posted by Tom-Anthony

In early October, Google announced a new project called “The Physical Web,” which they explain like this:

The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

At the moment this is an experimental project which is designed to promote establishing an open standard by which this mechanism could work. The two key elements of this initiative are:

URLs: The project proposes that all ‘smart devices’ should advertise a URL by which you can interact with that device. The device broadcasts its URL to anyone in the vicinity, who can detect it via their smartphone (with the eventual goal being that this functionality is built into smartphone operating systems rather than needing third-party apps).


Beacons: Not well known until Apple recently jumped on the bandwagon by announcing iBeacon, beacon technology has been around for a couple of years now. Using a streamlined sibling of Bluetooth called Bluetooth Low Energy (no pairing, range of ~70 metres / ~230 feet), it allows smartphones to detect the presence of nearby beacons and their approximate distance. Until now they’ve mostly been used for ‘hyper-local’ location-based applications (check this blog post of mine for some thoughts on how this might impact SEO).

The project proposes adapting and augmenting the signal that Beacons send out to include a URL by which nearby users might interact with a smart device.
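
To give a flavour of how a full URL squeezes into a tiny BLE advertisement, here’s a sketch of the decoding side. It follows the byte layout of the UriBeacon/Eddystone-URL encoding associated with the Physical Web project, simplified for illustration (a real scanner would also validate frames and use the TX power byte for distance estimates):

```python
# Decode a compressed URL frame: [frame type 0x10, TX power, scheme, chars...].
# Single bytes stand in for common scheme prefixes and suffixes.

SCHEMES = {0x00: "http://www.", 0x01: "https://www.", 0x02: "http://", 0x03: "https://"}
EXPANSIONS = {0x00: ".com/", 0x01: ".org/", 0x02: ".edu/", 0x03: ".net/",
              0x04: ".info/", 0x05: ".biz/", 0x06: ".gov/",
              0x07: ".com", 0x08: ".org", 0x09: ".edu", 0x0A: ".net",
              0x0B: ".info", 0x0C: ".biz", 0x0D: ".gov"}

def decode_url_frame(frame: bytes) -> str:
    """Expand a beacon's advertised frame back into a usable URL."""
    assert frame[0] == 0x10, "not a URL frame"
    url = SCHEMES[frame[2]]
    for byte in frame[3:]:
        url += EXPANSIONS.get(byte, chr(byte))  # expansion code or literal character
    return url

# 15 bytes of advertisement payload carry a complete URL
frame = bytes([0x10, 0xEB, 0x03]) + b"bus-stop-42" + bytes([0x07])
print(decode_url_frame(frame))  # https://bus-stop-42.com
```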

This post is about looking to the future at ways this could potentially impact search. It isn’t likely that any serious impact will happen within the next 18 months, and it is hard to predict exactly how things will pan out, but this post is designed to prompt you to think about things proactively.

Usage examples

To help wrap your head around this, let’s look at a few examples of possible uses:

Bus times: This is one of the examples Google gives, where you walk up to a bus stop and on detecting the smart device embedded into the stop your phone allows you to pull the latest bus times and travel info.

Item finder: Imagine when you go to the store looking for a specific item. You could pull out your phone and check stock of the item, as well as being directed to the specific part of the store where you can find it.

Check in: Combined with using URLs that are only accessible on local wifi / intranet, you could make a flexible and consistent check in mechanism for people in a variety of situations.

I’m sure there are many more applications yet to be thought up. One thing to notice is that there is no reason you can’t bookmark these advertised URLs and use them elsewhere, so you can’t be sure that someone accessing the URL is actually near the device in question. You can get some of the way there by using URLs that are only accessible within a certain network, but that isn’t going to be a general solution.

Also, note that these URLs don’t need to be constrained to just website URLs; they could just as well be deep links into apps which you might have installed.

Parallels to the web and ranking

There are some obvious parallels to the web (which is likely why Google named it the way they did). There will be many smart devices which will map to URLs which anyone can go to. A corollary of this is that there will be similar issues to those we see in search engines today. Google already identified one such issue—ranking—on the page for the project:

At first, the nearby smart devices will be small, but if we’re successful, there will be many to choose from and that raises an important UX issue. This is where ranking comes in. Today, we are perfectly happy typing “tennis” into a search engine and getting millions of results back, we trust that the first 10 are the best ones. The same applies here. The phone agent can sort by both signal strength as well as personal preference and history, among many other possible factors. Clearly there is lots of work to be done here.

So there is immediately a parallel between Google’s role on the world wide web and their potential role on this new physical web; there is a suggestion here that someone needs to rank beacons if they become so numerous that our phones or wearable devices are often picking up a variety of them.

Google proposes proximity as the primary measure of ranking, but the proximity range of BLE technology is very imprecise, so I imagine in dense urban areas that just using proximity won’t be sufficient. Furthermore, given the beacons are cheap (in bulk, $5 per piece will get you standalone beacons with a year-long battery) I imagine there could be “smart device spam.”

At that point, you need some sort of ranking mechanism, and that will inevitably lead to people trying to optimise (be it manipulative or a more white-hat approach). However, I don’t think that will be the sole impact on search. There are several other possible outcomes.
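
As a sketch of what such a ranking mechanism might look like: distance is typically estimated from RSSI using the log-distance path-loss model, and a ranker could blend that estimate with a personal-preference signal. The weights and the environment exponent below are invented for illustration:

```python
# Toy beacon ranker: proximity estimated from RSSI via the log-distance
# path-loss model, blended with a per-user preference score.

def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, n=2.5):
    """d = 10 ** ((txPower - RSSI) / (10 * n)); n grows with obstructions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def rank_beacons(beacons, user_prefs):
    """beacons: [(url, rssi)]; user_prefs: {url: affinity between 0 and 1}."""
    def score(item):
        url, rssi = item
        proximity = 1 / (1 + estimated_distance_m(rssi))  # nearer means higher
        return 0.7 * proximity + 0.3 * user_prefs.get(url, 0.0)
    return sorted(beacons, key=score, reverse=True)

nearby = [("https://bus-stop-42.example", -75), ("https://vending.example", -60)]
print(rank_beacons(nearby, {"https://bus-stop-42.example": 0.9}))
```

Because RSSI fluctuates with walls, bodies, and device orientation, that distance estimate can easily be off by a factor of two or more, which is why proximity alone probably won’t cut it in dense urban areas.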

Further impacts on the search industry

1. Locating out-of-range smart devices

Imagine that these smart devices became fairly widespread and were constantly advertising information to anyone nearby with a smartphone. I imagine, in a similar vein to schema.org actions, which provide a standard way for websites to describe what they enable someone to do (“affordances,” for the academics), we could establish similar semantic standards for smart devices, enabling them to advertise what services and goods they provide.

Now imagine you are looking for a specific product or service, which you want as quickly as possible (e.g “I need to pick up a charger for my phone,” or “I need to charge my phone on the move”). You could imagine that Google or some other search engine will have mapped these smart devices. If the above section was about “ranking,” then this is about “indexing.”

You could even imagine they could keep track of what is in stock at each of these places, enabling “environment-aware” searches. How might this work? Users in the vicinity whose devices have picked up the beacons and read their (standardised) list of services could then report this back into Google’s index. It sounds like a strange paradigm, but it is exactly how Google’s app indexing methodology works.
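
No such standard exists yet, but to make the idea concrete, here’s a purely hypothetical descriptor that a smart device’s URL might serve, in the spirit of schema.org actions. None of these property names come from any current specification:

```python
# Hypothetical machine-readable descriptor served at a beacon's URL.
device_descriptor = {
    "@type": "SmartDevice",
    "name": "Vending machine, platform 2",
    "location": {"lat": 51.5033, "lng": -0.1195},
    "offers": [
        {"item": "iPhone 6 charging cable", "inStock": True, "price": "9.99 GBP"},
        {"item": "USB battery pack", "inStock": False},
    ],
    "actions": ["ReserveAction", "BuyAction"],
}

# A crawler, or nearby phones reporting back (as with app indexing), could
# fold `offers` into an index keyed by item and location, enabling
# "environment-aware" queries like "iPhone 6 cables in stock near me".
```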

2. Added context

Context is becoming increasingly important for all the searches we do. Beyond your search phrase, Google looks at what device you are on, where you are, what you have recently searched for, who you know, and quite a bit more. This makes our search experiences significantly better, and we should expect Google to keep refining its understanding of our context.

It is not hard to see that knowing which beacons people are near adds various facets of context. It can help refine location even further, giving indications of the environment you are in, what you are doing, and even what you might be looking for.

3. Passive searches

I’ve spoken a little bit about passive searches before; this is when Google runs searches for you based entirely on your context, with no explicit query. Google Now is currently the embodiment of this technology, and I expect we’ll see it become more and more prevalent.

I believe we could even see a more explicit element of this become a reality with the rise of conversational search. Conversational search is already at a point where queries can have persistent aspects (“How old is Tom Cruise?”, then “How tall is he?” – the pronoun “he” refers back to the previous search). I expect we’ll see this expand into multi-stage searches (“Sushi restaurants within 10 minutes of here,” and then “Just those with 4 stars or more”).

So I could easily imagine these elements combining with “environment-aware” searches (whether or not they are powered in the fashion I described above) to enable multi-stage searches that end in an explicitly requested passive search. For example: “nearby shops with iPhone 6 cables in stock,” to which Google fails to find a suitable result (“there are no suitable shops nearby”), and you might then answer, “let me know when there is.”
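
As a toy sketch of the plumbing such a standing request might involve (entirely speculative – every name here is invented), the engine would store the query and re-evaluate it whenever its crowdsourced stock index changes:

```python
# Speculative sketch: a standing "passive search" stored by the engine and
# re-run whenever its environment index updates. All names are invented.

standing_queries = []

def register(query, notify):
    # "Let me know when there is" becomes a stored (query, callback) pair.
    standing_queries.append((query, notify))

def on_index_update(stock_index):
    # stock_index: shop -> set of in-stock items, e.g. crowdsourced from
    # beacon reads as described in the indexing section above.
    for query, notify in standing_queries:
        matches = [shop for shop, stock in stock_index.items() if query in stock]
        if matches:
            notify(f"'{query}' now in stock at: {', '.join(matches)}")

register("iPhone 6 cable", print)
on_index_update({"Corner Electronics": {"iPhone 6 cable", "USB chargers"}})
```
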

Wrap up

It seems certain that embedded smart devices of some sort are coming, and this project from Google looks like a strong candidate to establish a standard. Whichever form they end up taking and whichever standard they end up using, it is certain that smart devices are going to change the way people interact with their environments and use their smartphones and wearables.

It is hard to believe this won’t also have a heavy impact on marketing and business. What remains less clear is the scale of the impact on SEO. Hopefully this post has got your brain going a bit, so that as an industry we can start to prepare ourselves for the rise of smart devices.

I’d love to hear in the comments what other ideas people have and how you guys think this stuff might affect us.


Panda 4.1: The Devil Is in the Aggregate

Posted by russvirante

I wish I didn’t have to say this. I wish I could look in the eyes of every victim of the last Panda 4.1 update and tell them it was something new, something unforeseeable, something out of their control. I wish I could tell them that Google pulled a fast one that no one saw coming. But I can’t.

Like many in the industry, I have been studying Panda closely since its inception. Google gave us a rare glimpse behind the curtain by providing us with the very guidelines they set in place to build their massive machine-learned algorithm which came to be known as Panda. Three and a half years later, Panda is still with us and seems to still catch us off guard.
Enough is enough.

What I intend to show you throughout this piece is that the original Panda questionnaire still remains a powerful predictive tool to wield in defense of what can be a painful organic traffic loss. By analyzing the winner/loser reports of Panda 4.1 using standard Panda surveys, we can determine whether Google’s choices are still in line with their original vision. So let’s dive in.

The process

The first thing we need to do is acquire a winners-and-losers list. I picked this excellent one from SearchMetrics, although any list would do as long as it is accurate. Second, I ran a 10-question Panda questionnaire on random pages from each of the sites (both the winners and losers). You can run your own Panda survey by following Distilled and Moz’s instructions here, or just use PandaRisk like I did. After completing these analyses, we simply compare the scores across the board to determine whether they continue to reflect what we would expect given the original goals of the Panda algorithm.

The aggregate results

I actually want to do this a little bit backwards to drive home a point. Normally we would build up to the aggregate results, starting with the details and leaving you with the big picture. But Panda is a big-picture kind of algorithmic update. It is specifically focused on the intersection of myriad features; the sum is greater than the parts. While breaking down these features can give us some insight, at the end of the day we need to stay acutely aware that unless we do well across the board, we are at risk.

Below is a graph of the average cumulative scores across the winners and losers. The top row are winners, the bottom row are losers. The left and right red circles indicate the lowest and highest scores within those categories, and the blue circle represents the average. There is something very important that I want to point out on this graph.
The highest individual average score of all the losers is less than the lowest average score of the winners. This means that in our randomly selected data set, not a single loser averaged as high a score as the worst winner. When we aggregate the data together, even with a crude system of averages rather than the far more sophisticated machine learning techniques employed by Google, there is a clear disparity between the sites that survive Panda and those that do not.
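
To see the shape of that comparison, here is a crude sketch with made-up numbers; the real analysis averaged many survey responses across many pages per site:

```python
# Illustrative sketch of the winner/loser comparison: average each site's
# survey scores, then check how the two groups separate. Scores are made up.

from statistics import mean

winners = {"winner-a.com": [78, 84, 91], "winner-b.com": [80, 75, 88]}
losers  = {"loser-c.com": [55, 62, 48], "loser-d.com": [60, 58, 49]}

worst_winner = min(mean(scores) for scores in winners.values())
best_loser   = max(mean(scores) for scores in losers.values())

print(f"worst winner: {worst_winner:.1f}, best loser: {best_loser:.1f}")
# The headline finding above: in this sample, best_loser < worst_winner.
print("clean separation" if best_loser < worst_winner else "overlap")
```
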

It is also worth pointing out here that, to our knowledge, there is no positive Panda algorithm. Sites that perform well on Panda do not see boosts because Google is giving them ranking preference; rather, their competitors have seen ranking losses, or their own previous Panda penalties have been lifted. In either scenario, we should remember that performing well on Panda assessments isn’t necessarily going to increase your rankings, but it should help you sustain them.

Now, let’s move on to some of the individual questions. We are going to start with the least correlated questions and move to those which most strongly correlate with performance in Panda 4.1. While all of the questions had positive correlations, a few lacked statistical significance.
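
If you want to run the same kind of significance check on your own survey data, a standard two-sample t-test is one reasonable way to do it; this sketch uses SciPy and illustrative numbers:

```python
# Illustrative significance check for one question's winner/loser gap,
# using Welch's two-sample t-test from SciPy. The scores are made up.

from scipy import stats

winner_scores = [72, 78, 81, 75, 79, 83]
loser_scores  = [70, 74, 77, 71, 73, 69]

t, p = stats.ttest_ind(winner_scores, loser_scores, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # p < 0.05 suggests a meaningful gap
```
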


Insignificant correlation

The first question whose correlation with Panda performance was not statistically significant was “This page has visible errors on it.” The scores have been inverted here, so the higher the score, the fewer the respondents who reported that the page has errors. You can see that while more respondents did say the winners had no visible errors, the difference was very slight: only 5.35% separated the two groups. I will save comment on this until after we discuss the next question.

The second question lacking a statistically significant correlation with Panda performance was “This page has too many ads.” The scores have once again been inverted, so the higher the score, the fewer the respondents who reported that the page has too many ads. This was even closer: the winners performed only 2.3% better than the losers in Panda 4.1.

I think there is a clear takeaway from these two questions. Nearly everyone gets the easy stuff right, but that isn’t enough. First, a lot of pages just have no ads whatsoever because that isn’t their business model. Even those that do have ads have caught on for the most part and optimized their pages accordingly, especially given that Google has other layout algorithms in place aside from Panda. Moreover, content inaccuracy is more likely to impact scrapers and content spinners than most sites, so it is unsurprising that few if any reported that the pages were filled with errors. If you score poorly on either of these, you have only begun to scratch the surface, because most websites get these right enough.


Moderate correlation

A number of Panda questions showed statistically significant differences in means, but with substantial crossover between the winners and losers. Whenever the average of the losers was greater than the lowest of the winners, I considered it only a moderate correlation. While the difference between means remained strong, there was still a good deal of variance in the scores.

The first of these to consider was the question of whether the content was “trustworthy.” You will notice a trend in a lot of these questions: a great deal of subjective human opinion is involved. This subjectivity plays itself out quite a bit when the topics of a site deal with very different categories of knowledge. For example, a celebrity fact site might be seen as very trustworthy (even if it is ad-laden), while an opinion piece in the New Yorker on the same celebrity might not be – even though it is plainly labeled as opinion. The trustworthiness question ties back nicely to the “does this page have errors” question, drawing attention to the difference between subjective and objective questions, and to the way asking respondents for more of a personal opinion spreads the means out. This might seem unfair, but in the real world your site, and Google itself, is judged by that subjective opinion, so it is understandable that Google wants to get at it algorithmically. Nevertheless, there was a strong difference in means between winners and losers of 12.57%, more than double the difference we saw on the errors question.

Original content has long been a known requirement of organic search success, so no one was surprised when it made its way into the Panda questionnaire. It still remains an influential piece of the puzzle, with a difference in means of nearly 20%. It was only barely ruled out as a heavily correlated feature, because a single winner scored below the losers’ average. Notice, though, that one of the winners scored a perfect 100% on the survey, despite hundreds of respondents. It can be done.

As you can imagine, perception of what is and is not an authority is very subjective. This question is powerful because it pulls in all kinds of assumptions and presuppositions about brand, subject matter, content quality, design, justification, citations, and more. This likely explains why the question carries one of the highest variances on the survey. Nevertheless, there was a 13.42% difference in means. And, on the other side of the scale, we did see what it is like to have a site that is clearly not an authority, with one scoring the worst possible 0% on this question. This is what happens when you include highly irrelevant content on your site just for the purpose of picking up links or traffic. Be wary.

Everyone hates the credit card question, and luckily there is huge variance in the answers. At least one site survived Panda despite scoring 5% on this question. Notice that there is a huge overlap between the lowest winner and the average of the losing sites. Also, the placement of the mean (blue circle) in the winners’ category shows the average wasn’t skewed to the right by a single outlier; there was strong variance in the responses across the board, and the same was true of the losers. However, with a difference in means of more than 15%, there was a clear average differentiation between the performance of winners and losers. Once again, though, we are drawn back to that aggregate score at the top, where we see how Google can use all these questions together to build a much clearer picture of site and content quality. For example, it is possible that Google pays more attention to this question when analyzing a site with features like the words “shopping cart” or “check out” on the homepage.

I must admit that the bookmarking question surprised me. I always considered it the most subjective of the bunch; it seemed unfair that a site might be judged because its material simply doesn’t appeal to the masses. The survey just didn’t bear this out, though. There was a clear difference in means, but after comparing sites from similar content categories, there was no reason to believe that subject matter created a bias. The 14.64% difference seemed, editorially speaking, to relate more to the construction of the page and the quality of the content than to the topic being discussed. Perhaps a better way to think about this question is: would you be embarrassed if your friends knew THIS was the site you were getting your information from, rather than another?

This wraps up the five questions that had good correlations but enough variance that winners and losers still overlapped. I think one clear takeaway from this section is that these questions, while harder to improve upon than the Low Ads and No Errors questions before them, are completely within the webmaster’s grasp. Making your content and site original, trustworthy, authoritative, and worthy of bookmarking isn’t terribly difficult. Sure, it takes some time and effort, but these goals, unlike the next, don’t appear that far out of reach.


Heavy correlation

The final three questions, which distinguished most clearly between the winners and losers of Panda 4.1, all had high differences in means and, more importantly, little to no crossover between the highest loser and lowest winner. In my opinion, these questions are also the hardest for a webmaster to address. They require thoughtful design, high-quality content, and real, expert human authors.

The first question that met this classification was “could this content appear in print.” With a difference in means of 22.62%, the winners thoroughly trounced the losers in this category. Their sites and content were simply better designed and better written. They showed the kind of editorial oversight you would expect in a print publication. The content wasn’t trite and unimportant; it was thorough and timely.

The next heavily correlated question was whether the page was written by experts. With over a 34% difference in means between the winners and losers, and literally no overlap at all between the winners’ and losers’ individual averages, it was clearly the strongest question. You can see why Google would want to look into things like authorship when they knew that expertise was such a powerful distinguisher between Panda winners and losers. This raises the question: who is writing your content, and do your readers know it?

Finally, insightful analysis had a huge difference in means of more than 32% between winners and losers. It is worth noting that the highest loser is an outlier, as shown by the skewed mean (blue circle) sitting closer to the bottom than the top; most of the answers were closer to the lower score, so the overlap is exaggerated a bit. But once again, this draws us back to the original conclusion: the devil is not in the details; the devil is in the aggregate. You might be able to score highly on one or two of the questions, but it won’t be enough to carry you through.


The takeaways

OK, so hopefully it is clear that Panda really hasn’t changed all that much. The same questions we looked at for Panda 1.0 still matter. In fact, I would argue that Google is just getting better at algorithmically answering those same questions, not changing them. They are still the right way to judge a site in Google’s eyes. So how should you respond?

The first and most obvious thing is to run a Panda survey on your (or your clients’) sites. Select a random sample of pages from the site. The easiest way to do this is to get an export of all of your site’s pages, perhaps from Open Site Explorer, put them in Excel, and shuffle them; then choose the top 10 that come up (or script the sampling, as sketched below). You can follow the Moz instructions I linked to above, do it at PandaRisk, or just survey your employees, friends, and colleagues. While the latter will probably be positively biased, it is still better than nothing. Go ahead and get yourself a benchmark.
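
If you’d rather script that sampling step than shuffle rows in Excel, a few lines of Python will do; this assumes a hypothetical urls.txt export with one URL per line:

```python
# Sketch: draw a random sample of 10 pages for a Panda survey. Assumes a
# hypothetical urls.txt file (one URL per line, e.g. an Open Site Explorer export).

import random

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in random.sample(urls, min(10, len(urls))):
    print(url)
```
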

The next step is to start pushing those scores up one at a time. I give some solid examples in the Panda 4.0 release article about improving press-release sites, but another, better resource has just come out as well: Josh Bachynski released an amazing set of known Panda factors over at his website, The Moral Concept, and it is well worth a thorough read. There is a lot to take in, but there are tons of easy-to-implement improvements that could help you out quite a bit. Once you have knocked out a few for each of your low-scoring questions, run the exact same survey again and see how you improve. Keep iterating this process until you beat each of the winners’ question averages. At that point, you can rest assured that your site is safe from Panda, having beaten the devil in the aggregate.
