Meet Dan Morris, Executive Vice President, North America

  1. Why did you decide to come to dotmailer?

The top three reasons were People, Product and Opportunity. I met the people who make up our business and heard their stories from the past 18 years, learned about the platform and market-leading status they had built in the UK, and saw that I could add value with my U.S. high-growth business experience. I’ve been working with marketers, entrepreneurs and business owners for years across a series of different roles, and saw that I could apply what I’d learned from that and the start-up space to dotmailer’s U.S. operation. dotmailer has had clients in the U.S. for 12 years and we’re positioned to grow the user base of our powerful and easy-to-use platform significantly. I knew I could make a difference here, and what closed the deal for me was the people. Every single person I’ve met is deeply committed to the business, to the success of our customers and to making our solution simple and efficient. We’re a great group of passionate people and I’m proud to have joined the dotfamily.

Dan Morris, dotmailer’s EVP for North America in the new NYC office

  2. Tell us a bit about your new role

dotmailer has been in business and in this space for more than 18 years. We were a web agency, then a systems integrator, and we got into the email business that way, ultimately building the dotmailer platform thousands of people use daily. This means we know this space better than anyone, and we have solutions that align closely with our customers and are flexible enough to grow with them. My role is to take all that experience and the platform and grow our U.S. presence. My early focus has been on identifying the right team to execute our growth plans. We want to be the market leader in the U.S. in the next three years – just like we’ve done in the UK – so getting the right people in the right spots was critical. We quickly assessed the skills of the U.S. team and made the changes necessary to provide the right focus on customer success. Next, we set out to completely rebuild dotmailer’s commercial approach in the U.S. We simplified our offers to three bundles, so that pricing and what’s included in those bundles is transparent to our customers. We’ve heard great things about this already from clients and partners. We’re also increasing our resources on customer success and support. We’re intensely focused on ease of on-boarding, ease of use and speed of use. We consistently hear how easy and smooth a process it is to use dotmailer’s tools. That’s key for us – when you buy a dotmailer solution, we want to onboard you quickly and make sure you have all of your questions answered right away so that you can move right into using it. Customers are raving about this, so we know it’s working well.

  3. What early accomplishments are you most proud of from your dotmailer time so far?

I’ve been at dotmailer for eight months now and I’m really proud of all we’ve accomplished together. We spent a lot of time assessing where we needed to restructure and where we needed to invest. We made the changes we needed, invested in our partner program, localized tech support and customer on-boarding, and added customer success team members. We have the right people in the right roles and it’s making a difference. We have a commercial approach that is clear, with the complete transparency we wanted to provide our customers. We’ve got a more customer-focused approach and we’re on-boarding customers quickly so they’re up and running faster. We have happier customers than ever before and that’s the key to everything we do.

  4. You’ve moved the U.S. team to a new office. Can you tell us why and a bit about the new space?

I thought it was very important to create a NY office space that was tied to our branding and our other offices around the world, and also had its own NY energy and culture for our team here – to foster collaboration and to have some fun. It was also important for us that we had a flexible space where we could welcome customers, partners and resellers, and also hold classes and dotUniversity training sessions. I’m really grateful to the team who worked on the space because it really reflects our team and what we care about. At any given time, you’ll see a training session happening, the team collaborating, a customer dropping in to ask a few questions or a partner dropping in to work from here. We love our new NYC space.

We had a spectacular reception this week to celebrate the opening of this office with customers, partners and the dotmailer leadership team in attendance. Please take a look at the photos from our event on Facebook.

Guests and the team at dotmailer’s new NYC office warming party

  5. What did you learn from your days in the start-up space that you’re applying at dotmailer?

The start-up space is a great place to learn. You have to know where every dollar is going and coming from, so every choice you make needs to be backed up with a business case for that investment.  You try lots of different things to see if they’ll work and you’re ready to turn those tactics up or down quickly based on an assessment of the results. You also learn things don’t have to stay the way they are, and can change if you make them change. You always listen and learn – to customers, partners, industry veterans, advisors, etc. to better understand what’s working and not working.  dotmailer has been in business for 18 years now, and so there are so many great contributors across the business who know how things have worked and yet are always keen to keep improving.  I am constantly in listening and learning mode so that I can understand all of the unique perspectives our team brings and what we need to act on.

  6. What are your plans for the U.S. and the sales function there?

On our path to being the market leader in the U.S., I’m focused on three things going forward: 1 – I want our customers to be truly happy.  It’s already a big focus in the dotmailer organization – and we’re working hard to understand their challenges and goals so we can take product and service to the next level. 2 – Creating an even more robust program around partners, resellers and further building out our channel partners to continuously improve sales and customer service programs. We recently launched a certification program to ensure partners have all the training and resources they need to support our mutual customers.  3 – We have an aggressive growth plan for the U.S. and I’m very focused on making sure our team is well trained, and that we remain thoughtful and measured as we take the steps to grow.  We want to always keep an eye on what we’re known for – tools that are powerful and simple to use – and make sure everything else we offer remains accessible and valuable as we execute our growth plans.

  7. What are the most common questions that you get when speaking to a prospective customer?

The questions we usually get are around price, service level and flexibility. How much does dotmailer cost? How well are you going to look after my business? How will you integrate with my existing stack and support my plans for future growth? We now have three transparent bundle options with specifics around what’s included published right on our website. We have introduced a customer success team that’s focused only on taking great care of our customers, and we’re hearing stories every day that tell me this is working. And we have all of the tools to support our customers as they grow and to integrate into their existing stacks – often integrating so well that you can use dotmailer from within Magento, Salesforce or Dynamics, for example.

  8. Can you tell us about the dotmailer differentiators you highlight when speaking to prospective customers that seem to really resonate?

In addition to the ones above – ease of use, speed of use and the ability to scale with you. With dotmailer’s tiered program, you can start with a lighter level of functionality and grow into more advanced functionality as you need it. The platform itself is so easy to use that most marketers are able to build campaigns in minutes that would have taken hours on other platforms. Our customer success team is also with you all the way if ever you want or need help. We’ve built a very powerful platform, we have a fantastic team that provides personalized service as an extension of your own, and we’re ready to grow with you.

  9. How much time is your team on the road vs. in the office? Any road warrior tips to share?

I’ve spent a lot of time on the road; one year I attended 22 trade shows! Top tip when flying is to be willing to give up your seat for families or groups once you’re at the airport gate, as you’ll often be rewarded with a better seat for helping the airline make the family or group happy. Win-win! Since joining dotmailer, I’m focused on being in the office and present for the team and customers as much as possible. I can usually be found in our new NYC office, where I spend a lot of time with our team, in customer meetings, in trainings and other hosted events, sales conversations or marketing meetings. I’m here to help the team, clients and partners to succeed, and will always do my best to say yes! Once our prospective customers see how quickly and efficiently they can execute tasks with dotmailer solutions vs. their existing solutions, it’s a no-brainer for them. I love seeing and hearing their reactions.

  10. Tell us a bit about yourself – favorite sports team, favorite food, guilty pleasure, favorite band, favorite vacation spot?

I’m originally from Yorkshire in England, and grew up just outside York. I moved to the U.S. about seven years ago to join a very fast-growing startup; we took it from 5 to well over 300 people, which was a fantastic experience. I moved to NYC almost two years ago, and I love exploring this great city. There’s so much to see and do. Outside of dotmailer, my passion is cars, and I also enjoy skeet shooting, almost all types of music, and I love to travel – my goal is to get to India, Thailand, Australia and Japan in the near future.

Want to find out more about the dotfamily? Check out our recent post about Darren Hockley, Global Head of Support.


Simple Steps for Conducting Creative Content Research

Posted by Hannah_Smith

Most frequently, the content we create at Distilled is designed to attract press coverage, social shares, and exposure (and links) on sites our clients’ target audience reads. That’s a tall order.

Over the years we’ve had our hits and misses, and through this we’ve recognised the value of learning about what makes a piece of content successful. Coming up with a great idea is difficult, and it can be tough to figure out where to begin. Today, rather than leaping headlong into brainstorming sessions, we start with creative content research.

What is creative content research?

Creative content research enables you to answer the questions:

“What are websites publishing, and what are people sharing?”

From this, you’ll then have a clearer view on what might be successful for your client.

A few years ago this took quite a lot of work to figure out. Today, happily, it’s much quicker and easier. In this post I’ll share the process and tools we use.

Whoa there… Why do I need to do this?

I think the value in this sort of activity lies in a few directions:

a) You can learn a lot by deconstructing the success of others…

I’ve been taking stuff apart to try to figure out how it works for about as long as I can remember, so applying this process to content research felt pretty natural to me. Perhaps more importantly though, I think that deconstructing content is actually easier when it isn’t your own. You’re not involved, invested, or in love with the piece so viewing it objectively and learning from it is much easier.

b) Your research will give you a clear overview of the competitive landscape…

As soon as a company elects to start creating content, they gain a whole raft of new competitors. In addition to their commercial competitors (i.e. those who offer similar products or services), the company also gains content competitors. For example, if you’re a sports betting company and plan to create content related to the sports events that you’re offering betting markets on, then you’re competing not just with other betting companies, but with every other publisher who creates content about these events. That means major news outlets, sports news sites, fan sites, etc. To make matters even more complicated, it’s likely that you’ll actually be seeking coverage from those same content competitors. As such, you need to understand what’s already being created in the space before creating content of your own.

c) You’re giving yourself the data to create a more compelling pitch…

At some point you’re going to need to pitch your ideas to your client (or your boss if you’re working in-house). At Distilled, we’ve found that getting ideas signed off can be really tough. Ultimately, a great idea is worthless if we can’t persuade our client to give us the green light. This research can be used to make a more compelling case to your client and get those ideas signed off. (Incidentally, if getting ideas signed off is proving to be an issue you might find this framework for pitching creative ideas useful).

Where to start

Good ideas start with a good brief; however, it can be tough to pin clients down to get answers to a long list of questions.

As a minimum you’ll need to know the following:

  • Who are they looking to target?
    • Age, sex, demographic
    • What’s their core focus? What do they care about? What problems are they looking to solve?
    • Who influences them?
    • What else are they interested in?
    • Where do they shop and which brands do they buy?
    • What do they read?
    • What do they watch on TV?
    • Where do they spend their time online?
  • Where do they want to get coverage?
    • We typically ask our clients to give us a wishlist of 10 or so sites they’d love to get coverage on
  • Which topics are they comfortable covering?
    • This question is often the toughest, particularly if a client hasn’t created content specifically for links and shares before. Often clients are uncomfortable about drifting too far away from their core business—for example, if they sell insurance, they’ll typically say that they really want to create a piece of content about insurance. Whilst this is understandable from the client’s perspective, it can severely limit their chances of success. It’s definitely worth offering up a gentle challenge at this stage—I’ll often cite Red Bull, who are a great example of a company who create content based on what their consumers love, not what they sell (i.e. Red Bull sell soft drinks, but create content about extreme sports because that’s the sort of content their audience love to consume). It’s worth planting this idea early, but don’t get dragged into a fierce debate at this stage—you’ll be able to make a far more compelling argument once you’ve done your research and are pitching concrete ideas.

Processes, useful tools and sites

Now you have your brief, it’s time to begin your research.

Given that we’re looking to uncover “what websites are publishing and what’s being shared,” it won’t surprise you to learn that I pay particular attention to pieces of content and the coverage they receive. For each piece that I think is interesting I’ll note down the following (there’s a small record-keeping sketch after the list):

  • The title/headline
  • A link to the coverage (and to the original piece if applicable)
  • How many social shares the coverage earned (and the original piece earned)
  • The number of linking root domains the original piece earned
  • Some notes about the piece itself: why it’s interesting, why I think it got shares/coverage
  • Any gaps in the content, whether or not it’s been executed well
  • How we might do something similar (if applicable)
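
To keep those notes consistent across dozens of examples, it can help to capture each one in a small, structured record. Here is a minimal sketch in Python; the field names simply mirror the list above and are illustrative rather than prescriptive.

```python
from dataclasses import dataclass

@dataclass
class ContentExample:
    """One interesting piece of content logged during research."""
    title: str
    coverage_url: str
    original_url: str = ""         # the original piece, if the coverage links to one
    coverage_shares: int = 0       # social shares earned by the coverage
    original_shares: int = 0       # social shares earned by the original piece
    linking_root_domains: int = 0  # links earned by the original piece
    notes: str = ""                # why it's interesting, why it earned shares/coverage
    gaps: str = ""                 # gaps in the content / how well it was executed
    our_angle: str = ""            # how we might do something similar

# Hypothetical example entry.
examples = [
    ContentExample(
        title="Hypothetical interactive about festival spending",
        coverage_url="https://example.com/coverage-article",
        coverage_shares=12400,
        linking_root_domains=85,
        notes="Strong news hook; simple, visual data story.",
    ),
]
```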

Whilst I’m doing this I’ll also make a note of specific sites I see being frequently shared (I tend to check these out separately later on), any interesting bits of research (particularly if I think there might be an opportunity to do something different with the data), interesting threads on forums etc.

When it comes to kicking off your research, you can start wherever you like, but I’d recommend that you cover off each of the areas below:

What does your target audience share?

Whilst this activity might not uncover specific pieces of successful content, it’s a great way of getting a clearer understanding of your target audience, and getting a handle on the sites they read and the topics which interest them.

  • Review social profiles / feeds
    • If the company you’re working for has a Facebook page, it shouldn’t be too difficult to find some people who’ve liked the company page and have a public profile. It’s even easier on Twitter where most profiles are public. Whilst this won’t give you quantitative data, it does put a human face to your audience data and gives you a feel for what these people care about and share. In addition to uncovering specific pieces of content, this can also provide inspiration in terms of other sites you might want to investigate further and ideas for topics you might want to explore.
  • Demographics Pro
    • This service infers demographic data from your clients’ Twitter followers. I find it particularly useful if the client doesn’t know too much about their audience. In addition to demographic data, you get a breakdown of professions, interests, brand affiliations, and the other Twitter accounts they follow and who they’re most influenced by. This is a paid-for service, but there are pay-as-you-go options in addition to pay-monthly plans.

Finding successful pieces of content on specific sites

If you’ve a list of sites you know your target audience read, and/or you know your client wants to get coverage on, there are a bunch of ways you can uncover interesting content:

  • Using your link research tool of choice (e.g. Open Site Explorer, Majestic, ahrefs) you can run a domain level report to see which pages have attracted the most links. This can also be useful if you want to check out commercial competitors to see which pieces of content they’ve created have attracted the most links.
  • There are also tools which enable you to uncover the most shared content on individual sites. You can use Buzzsumo to run content analysis reports on individual domains which provide data on average social shares per post, social shares by network, and social shares by content type.
  • If you just want to see the most shared content for a given domain you can run a simple search on Buzzsumo using the domain, and there’s also the option to refine by topic. For example, a search like [guardian.com big data] will return the most shared content on the Guardian related to big data. You can also run similar reports using ahrefs’ Content Explorer tool.

Both Buzzsumo and ahrefs are paid tools, but both offer free trials. If you need to explore the most shared content without using a paid tool, there are other alternatives. Check out Social Crawlytics which will crawl domains and return social share data, or alternatively, you can crawl a site (or section of a site) and then run the URLs through SharedCount‘s bulk upload feature.
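
If you go the crawl-and-bulk-lookup route, a minimal sketch might look like the following. The endpoint, parameters, and response shape here are assumptions for illustration only (check SharedCount’s current API documentation), and the URL list and API key are hypothetical placeholders.

```python
import csv
import json
import requests

# Assumption: endpoint, parameter names, and response shape are illustrative --
# verify against SharedCount's current API docs before relying on them.
API_URL = "https://api.sharedcount.com/v1.0/"
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder

def share_counts(url: str) -> dict:
    """Fetch the social share data reported for one URL."""
    resp = requests.get(API_URL, params={"url": url, "apikey": API_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# crawled_urls.txt: one URL per line, e.g. the output of crawling a site section.
with open("crawled_urls.txt") as urls, open("share_counts.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["url", "share_data"])
    for line in urls:
        url = line.strip()
        if url:
            writer.writerow([url, json.dumps(share_counts(url))])
```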

Finding successful pieces of content by topic

When searching by topic, I find it best to begin with a broad search and then drill down into more specific areas. For example, if I had a client in the financial services space, I’d start out looking at a broad topic like “money” rather than shooting straight to topics like loans or credit cards.

As mentioned above, both Buzzsumo and ahrefs allow you to search for the most shared content by topic and both offer advanced search options.

Further inspiration

There are also several sites I like to look at for inspiration. Whilst these sites don’t give you a great steer on whether or not a particular piece of content was actually successful, with a little digging you can quickly find the original source and pull link and social share data:

  • Visually has a community area where users can upload creative content. You can search by topic to uncover examples.
  • TrendHunter has a searchable archive of creative ideas featuring products, creative campaigns, marketing campaigns, advertising and more. It’s best to keep your searches broad if you’re looking at this site.
  • Check out Niice (a moodboard app) which also has a searchable archive of handpicked design inspiration.
  • Searching Pinterest can allow you to unearth some interesting bits and pieces as can Google image searches and regular Google searches around particular topics.
  • Reviewing relevant sections of discussion sites like Quora can provide insight into what people are asking about particular topics which may spark a creative idea.

Moving from data to insight

By this point you’ve (hopefully) got a long list of content examples. Whilst this is a great start, effectively what you’ve got here is just data; now you need to convert it into insight.

Remember, we’re trying to answer the questions: “What are websites publishing, and what are people sharing?”

Ordinarily, as I go through the creative content research process, I start to see patterns or themes emerge. For example, across a variety of topic areas you’ll see that the most shared content tends to be news. Whilst this is good to know, it’s not necessarily something that’s going to be particularly actionable. You’ll need to dig a little deeper—what else (aside from news) is given coverage? Can you split those things into categories or themes?

This is tough to explain in the abstract, so let me give you an example. We’d identified a set of music sites (e.g. Rolling Stone, NME, CoS, Stereogum, Pitchfork) as target publishers for a client.

Here’s a summary of what I concluded following my research:

The most-shared content on these music publications is news: album launches, new singles, videos of performances etc. As such, if we can work a news hook into whatever we create, this could positively influence our chances of gaining coverage.

Aside from news, the content which gains traction tends to fall into one of the following categories:

Earlier in this post I mentioned that it can be particularly tough to create content which attracts coverage and shares if clients feel strongly that they want to do something directly related to their product or service. The example I gave at the outset was a client who sold insurance and was really keen to create something about insurance. You’re now in a great position to win an argument with data, as thanks to your research you’ll be able to cite several pieces of insurance-related content which have struggled to gain traction. But it’s not all bad news as you’ll also be able to cite other topics which are relevant to the client’s target audience and stand a better chance of gaining coverage and shares.

Avoiding the pitfalls

There are potential pitfalls when it comes to creative content research in that it’s easy to leap to erroneous conclusions. Here are some things to watch out for:

Make sure you’re identifying outliers…

When seeking out successful pieces of content you need to be certain that what you’re looking at is actually an outlier. For example, the average post on BuzzFeed gets over 30k social shares. As such, that post you found with just 10k shares is not an outlier. It’s done significantly worse than average. It’s therefore not the best post to be holding up as a fabulous example of what to create to get shares.
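
If you want to go beyond gut feel, a quick sanity check is to compare a candidate post’s shares against a sample of that site’s recent posts. The numbers below are made up purely for illustration:

```python
import statistics

# Hypothetical share counts sampled from a publisher's recent posts.
site_shares = [31000, 28500, 45200, 33100, 29800, 52000, 30400, 27900]
candidate = 10000  # the post you were tempted to hold up as a great example

mean = statistics.mean(site_shares)
stdev = statistics.stdev(site_shares)
z_score = (candidate - mean) / stdev

print(f"Site average: {mean:,.0f} shares (sd {stdev:,.0f})")
print(f"Candidate post z-score: {z_score:.1f}")
if z_score <= 0:
    print("At or below the site's average -- not a positive outlier.")
```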

Don’t get distracted by formats…

Pay more attention to the idea than the format. For example, the folks at Mashable kindly covered an infographic about Instagram which we created for a client. However, the takeaway here is not that Instagram infographics get coverage on Mashable. Mashable didn’t cover this because we created an infographic. They covered the piece because it told a story in a compelling and unusual way.

You probably shouldn’t create a listicle…

This point is related to the point above. In my experience, unless you’re a publisher with a huge, engaged social following, that listicle of yours is unlikely to gain traction. Listicles on huge publisher sites get shares; listicles on client sites typically don’t. This is doubly important if you’re also seeking coverage, as listicles on client sites don’t typically get links or coverage on other sites.

How we use the research to inform our ideation process

At Distilled, we typically take a creative brief, complete creative content research, and then move into the ideation process. A summary of the research is included within the creative brief, and this, along with a copy of the full creative content research, is shared with the team.

The research acts as inspiration and direction, and is particularly useful for identifying potential topics to explore, but it doesn’t mean team members don’t still do further research of their own.

This process by no means acts as a silver bullet, but it definitely helps us come up with ideas.


Thanks for sticking with me to the end!

I’d love to hear more about your creative content research processes and any tips you have for finding inspirational content. Do let me know via the comments.



SEO-XC Search Engine Optimization – Extended Concept

SEO (Search Engine Optimization) is all about placing web pages at the top of search engine results. Internet marketing is all about increasing sales and profits. Is it…


The Danger of Crossing Algorithms: Uncovering The Cloaked Panda Update During Penguin 3.0

Posted by GlennGabe

Penguin 3.0 was one of the most anticipated algorithm updates in recent years when it rolled out on October 17, 2014. Penguin hadn’t run for over a year at that point,
and there were many webmasters sitting in Penguin limbo waiting for recovery. They had cleaned up their link profiles, disavowed what they could, and were
simply waiting for the next update or refresh. Unfortunately, Google was wrestling with the algo internally and over twelve months passed without an
update.

So when Pierre Far finally
announced Penguin 3.0 a few days later on October 21, a few things
stood out. First, this was
not a new algorithm like Gary Illyes had explained it would be at SMX East. It was a refresh and underscored
the potential problems Google was battling with Penguin (cough, negative SEO).

Second, we were not seeing the impact that we expected. The rollout seemed to begin with a heavier international focus, and the overall U.S. impact has been
underwhelming to say the least. There were definitely many fresh hits globally, but there were a number of websites that should have recovered but didn’t
for some reason. And many are still waiting for recovery today.

Third, the rollout would be slow and steady and could take weeks to fully complete. That’s unusual, but makes sense given the microscope Penguin 3.0 was
under. And this third point (the extended rollout) is even more important than most people think. Many webmasters are already confused when they get hit
during an acute algorithm update (for example, when an algo update rolls out on one day). But the confusion gets exponentially worse when there is an
extended rollout.

The more time that goes by between the initial launch and the impact a website experiences, the more questions pop up. Was it Penguin 3.0 or was it
something else? Since I work heavily with algorithm updates, I’ve heard similar questions many times over the past several years. And the extended Penguin
3.0 rollout is a great example of why confusion can set in. That’s my focus today.


Penguin, Pirate, and the anomaly on October 24

With the Penguin 3.0 rollout, we also had
Pirate 2 rolling out. And yes, there are
some websites that could be impacted by both. That added a layer of complexity to the situation, but nothing like what was about to hit. You see, I picked up a very strange anomaly on October 24. And I clearly saw serious movement on that day (starting late in the day ET).

So, if there was a third algorithm update, then that’s
three potential algo updates rolling out at the same time. More about this soon,
but it underscores the confusion that can set in when we see extended rollouts, with a mix of confirmed and unconfirmed updates.


Penguin 3.0 tremors and analysis

Since I do a lot of Penguin work, and have researched many domains impacted by Penguin in the past, I heavily studied the Penguin 3.0 rollout. I 
published a blog post based on the first ten days of the update, which included some interesting findings for sure.

And based on the extended rollout, I definitely saw Penguin tremors beyond the initial October 17 launch. For example, check out the screenshot below of a
website seeing Penguin impact on October 17, 22, and 25.

But as mentioned earlier, something else happened on October 24 that set off sirens in my office. I started to see serious movement on sites impacted by
Panda, and not Penguin. And when I say serious movement, I’m referring to major traffic gains or losses all starting on October 24. Again, these were sites heavily dealing with Panda and had
clean link profiles. Check out the trending below from October 24 for several
sites that saw impact.


A good day for a Panda victim:

A bad day for a Panda victim:

And an incredibly frustrating day for a 9/5 recovery that went south on 10/24:

I saw this enough that I tweeted heavily about it and
included a section about Panda in my Penguin 3.0 blog post. And
that’s when something wonderful happened, and it highlights the true beauty and power of the internet.

As more people saw my tweets and read my post, I started receiving messages from other webmasters explaining that
they saw the same exact thing, and on their websites dealing with Panda and not Penguin. And not only
did they tell me about it, they showed me the impact.

I received emails containing screenshots and tweets with photos from Google Analytics and Google Webmaster Tools. It was amazing to see, and it confirmed
that we had just experienced a Panda update in the middle of a multi-week Penguin rollout. Yes, read that line again. Panda during Penguin, right when the
internet world was clearly focused on Penguin 3.0.

That was a sneaky move, Google… very sneaky. 🙂

So, based on what I explained earlier about webmaster confusion and algorithms, can you tell what happened next? Yes, massive confusion ensued. We had the
trifecta of algorithm updates with Penguin, Pirate, and now Panda.


Webmaster confusion and a reminder of the algo sandwich from 2012

So, we had a major algorithm update during two other major algorithm updates (Penguin and Pirate) and webmaster confusion was hitting extremely high
levels. And I don’t blame anyone for being confused. I’m neck deep in this stuff and it confused me at first.

Was the October 24 update a Penguin tremor or was this something else? Could it be Pirate? And if it was indeed Panda, it would have been great if Google told
us it was Panda! Or did they want to throw off SEOs analyzing Penguin and Pirate? Does anyone have a padded room I can crawl into?

Once I realized this was Panda, and started to communicate the update via Twitter and my blog, I had a number of people ask me a very important question:


“Glenn, would Google really roll out two or three algorithm updates so close together, or at the same time?”

Why yes, they would. Anyone remember the algorithm sandwich from April of 2012? That’s when Google rolled out Panda on April 19, then Penguin 1.0 on April 24,
followed by Panda on April 27. Yes, we had three algorithm updates all within ten days. And let’s not forget that the Penguin update on April 24, 2012 was the
first of its kind! So yes, Google can, and will, roll out multiple major algos around the same time.

Where are we headed? It’s fascinating, but not pretty


Panda is near real-time now

When Panda 4.1 rolled out on September 23, 2014, I immediately disliked the title and version number of the update. Danny Sullivan named it 4.1, so it stuck. But for
me, that was not 4.1… not even close. It was more like 4.75. You see, there have been a number of Panda tremors and updates since P4.0 on May 20,
2014.

I saw what I was calling “tremors”
nearly weekly based on having access to a large amount of Panda data (across sites, categories, and countries).
And based on what I was seeing, I reached out to John Mueller at Google to clarify the tremors. John’s response was great and confirmed what I was seeing.
He explained that there
was not a set frequency for algorithms like Panda. Google can roll out an algorithm, analyze the
SERPs, refine the algo to get the desired results, and keep pushing it out. And that’s exactly what I was seeing (again, almost weekly since Panda 4.0).


When Panda and Penguin meet in real time…

…they will have a cup of coffee and laugh at us. 🙂 So, since Panda is near-real time, the crossing of major algorithm updates is going to happen.
And we just experienced an important one on October 24 with Penguin, Pirate, and Panda. But it could (and probably will) get more chaotic than what we have now.
We are quickly approaching a time where major algorithm updates crafted in a lab will be unleashed on the web in near-real time or in actual real time.

And if organic search traffic from Google is important to you, then pay attention. We’re about to take a quick trip into the future of Google and SEO. And
after hearing what I have to say, you might just want the past back…


Google’s brilliant object-oriented approach to fighting webspam

I have presented at the past two SES conferences about Panda, Penguin, and other miscellaneous disturbances in the force. More about those “other
disturbances” soon. In my presentation, one of my slides looks like this:

Over the past several years, Google has been using a brilliant, object-oriented approach to fighting webspam and low quality content. Webspam engineers can
craft external algorithms in a lab and then inject them into the real-time algorithm whenever they want. It’s brilliant because it isolates specific
problems, while also being extremely scalable. And by the way, it should scare the heck out of anyone breaking the rules.

For example, we have Panda, Penguin, Pirate, and Above the Fold. Each was crafted to target a specific problem and can be unleashed on the web whenever
Google wants. Sure, there are undoubtedly connections between them (either directly or indirectly), but each specific algo is its own black box. Again,
it’s object-oriented.

Now, Panda is a great example of an algorithm that has matured to where Google highly trusts it. That’s why Google announced in June of 2013 that Panda
would roll out monthly, over ten days. And that’s also why it matured even more with Panda 4.0 (and why I’ve seen tremors almost weekly.)

And then we had Gary Illyes explain that Penguin was moving along the same path. At SMX East,
Gary explained that the new Penguin algorithm (which clearly didn’t roll out on October 17) would be structured in a way where subsequent updates could be rolled out more easily.
You know, like Panda.

And by the way, what if this happens to Pirate, Above the Fold, and other algorithms that Google is crafting in its Frankenstein lab? Well my friends, then
we’ll have absolute chaos and society as we know it will crumble. OK, that’s a bit dramatic, but you get my point.

We already have massive confusion now… and a glimpse into the future reveals a continual flow of major algorithms running in real-time, each that
could pummel a site to the ground. And of course, with little or no sign of which algo actually caused the destruction. I don’t know about you, but I just
broke out in hives. 🙂


Actual example of what (near) real-time updates can do

After Panda 4.0, I saw some very strange Panda movement for sites impacted by recent updates. And it underscores the power of near-real time algo updates.
As a quick example,
temporary Panda recoveries can happen if you
don’t get out of the gray area enough. And now that we are seeing Panda tremors almost weekly, you can experience potential turbulence several times per
month.

Here is a screenshot from a site that recovered from Panda, didn’t get out of the gray area and reentered the strike zone, just five days later.

Holy cow, that was fast. I hope they didn’t plan any expensive trips in the near future. This is exactly what can happen when major algorithms roam the web
in real time. One week you’re looking good and the next week you’re in the dumps. Now, at least I knew this was Panda. The webmaster could tackle more
content problems and get out of the gray area… But the ups and downs of a Panda roller coaster ride can drive a webmaster insane. It’s one of the
reasons I recommend making
significant changes when
you’ve been hit by Panda. Get as far out of the gray area as possible.


An “automatic action viewer” in Google Webmaster Tools could help (and it’s actually being discussed internally by Google)

Based on webmaster confusion, many have asked Google to create an “automatic action viewer” in Google Webmaster Tools. It would be similar to the “manual
actions viewer,” but focused on algorithms that are demoting websites in the search results (versus penalties). Yes, there is a difference by the way.

The new viewer would help webmasters better understand the types of problems that are being impacted by algorithms like Panda, Penguin, Pirate, Above the
Fold, and others. Needless to say, this would be incredibly helpful to webmasters, business owners, and SEOs.

So, will we see that viewer any time soon? Google’s John Mueller
addressed this question during the November 3 webmaster hangout (at 38:30).

John explained they are trying to figure something out, but it’s not easy. There are so many algorithms running that they don’t want to provide feedback
that is vague or misleading. But, John did say they are discussing the automatic action viewer internally. So you never know…


A quick note about Matt Cutts

As many of you know, Matt Cutts took an extended leave this past summer (through the end of October). Well, he announced on Halloween that he is
extending his leave into 2015. I won’t go crazy here talking about his decision overall, but I will
focus on how this impacts webmasters as it relates to algorithm updates and webspam.

Matt does a lot more than just announce major algo updates… He actually gets involved when collateral damage rears its ugly head. And there’s not a
faster way to rectify a flawed algo update than to have Mr. Cutts involved. So before you dismiss Matt’s extended leave as uneventful, take a look at the
trending below:

Notice the temporary drop off a cliff, then 14 days of hell, only to see that traffic return? That’s because Matt got involved. That’s the
movie blog fiasco from early 2014 that I heavily analyzed. If
Matt was not notified of the drop via Twitter, and didn’t take action, I’m not sure the movie blogs that got hit would be around today. I told Peter from
SlashFilm that his fellow movie blog owners should all pay him a bonus this year. He’s the one that pinged Matt via Twitter and got the ball rolling.

It’s just one example of how having someone with power out front can nip potential problems in the bud. Sure, the sites experienced two weeks of utter
horror, but traffic returned once Google rectified the problem. Now that Matt isn’t actively helping or engaged, who will step up and be that guy? Will it
be John Mueller, Pierre Far, or someone else? John and Pierre are greatly helpful, but will they go to bat for a niche that just got destroyed? Will they
push changes through so sites can turn around? And even at its most basic level, will they even be aware the problem exists?

These are all great questions, and I don’t want to bog down this post (it’s already incredibly long). But don’t laugh off Matt Cutts taking an extended
leave. If he’s gone for good, you might only realize how important he was to the SEO community
after he’s gone. And hopefully it’s not because
your site just tanked as collateral damage during an algorithm update. Matt might be
running a marathon or trying on new Halloween costumes. Then where will you be?


Recommendations moving forward:

So where does this leave us? How can you prepare for the approaching storm of crossing algorithms? Below, I have provided several key bullets that I think
every webmaster should consider. I recommend taking a hard look at your site
now, before major algos are running in near-real time.

  • Truly understand the weaknesses with your website. Google will continue crafting external algos that can be injected into the real-time algorithm.
    And they will go real-time at some point. Be ready by cleaning up your site now.
  • Document all changes and fluctuations the best you can. Use annotations in Google Analytics and keep a spreadsheet updated with detailed information (there’s a small logging sketch after this list).
  • Along the same lines, download your Google Webmaster Tools data monthly (at least). After helping many companies with algorithm hits, that
    information is incredibly valuable, and can help lead you down the right recovery path.
  • Use a mix of audits and focus groups to truly understand the quality of your site. I mentioned in my post about aggressive advertising and Panda that human focus groups are worth their weight in gold (for surfacing Panda-related problems). Most business owners are too close to their own content and websites to accurately measure quality. Bias can be a nasty problem and can quickly lead to bamboo-overflow on a website.
  • Beyond on-site analysis, make sure you tackle your link profile as well. I recommend heavily analyzing your inbound links and weeding out unnatural
    links. And use the disavow tool for links you can’t remove. The combination of enhancing the quality of your content, boosting engagement, knocking down
    usability obstacles, and cleaning up your link profile can help you achieve long-term SEO success. Don’t tackle one quarter of your SEO problems. Address
    all of them.
  • Remove barriers that inhibit change and action. You need to move fast. You need to be decisive. And you need to remove red tape that can bog down
    the cycle of getting changes implemented. Don’t water down your efforts because there are too many chefs in the kitchen. Understand the changes that need
    to be implemented, and take action. That’s how you win SEO-wise.
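
As a minimal illustration of the documentation point above, the sketch below lines up daily organic sessions (e.g. exported from Google Analytics) against the dates of known or suspected algorithm updates. All session numbers are hypothetical; the update dates are the ones discussed in this post.

```python
from datetime import date

# Hypothetical daily organic sessions, e.g. exported from Google Analytics.
daily_sessions = {
    date(2014, 10, 16): 4200,
    date(2014, 10, 17): 3100,
    date(2014, 10, 24): 1900,
    date(2014, 10, 25): 1850,
}

# Confirmed or suspected algorithm updates you are tracking.
algo_updates = {
    date(2014, 10, 17): "Penguin 3.0 begins rolling out",
    date(2014, 10, 24): "Suspected Panda update (see above)",
}

prev = None
for day in sorted(daily_sessions):
    sessions = daily_sessions[day]
    change = f"{(sessions - prev) / prev:+.1%}" if prev else "n/a"
    note = algo_updates.get(day, "")
    print(f"{day}  sessions={sessions:>6}  change={change:>7}  {note}")
    prev = sessions
```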


Summary: Are you ready for the approaching storm?

SEO is continually moving and evolving, and it’s important that webmasters adapt quickly. Over the past few years, Google’s brilliant object-oriented
approach to fighting webspam and low quality content has yielded algorithms like Panda, Penguin, Pirate, and Above the Fold. And more are on their way. My
advice is to get your situation in order now, before crossing algorithms blend a recipe of confusion that makes it exponentially harder to identify, and then fix, the problems riddling your website.

Now excuse me while I try to build a flux capacitor. 🙂


What Happened after Google Pulled Author and Video Snippets: A Moz Case Study

Posted by Cyrus-Shepard

In the past two months, Google made big changes to its search results.

Webmasters saw disappearing Google authorship photos, reduced video snippets, changes to local packs and in-depth articles, and more.

Here at Moz, we’ve closely monitored our own URLs to measure the effect of these changes on our actual traffic.
The results surprised us.

Authorship traffic—surprising results

In the early days of authorship, many webmasters worked hard to get their photo in Google search results. I confess, I doubt anyone worked harder at author snippets than me.

Search results soon became crowded with smiling faces staring back at us. Authors hired professional photographers. Publishers worked to correctly follow Google’s guidelines to set up authorship for thousands of authors.

The race for more clicks was on.

Then on June 28th,
Google cleared the page. No more author photos. 

To gauge the effect on traffic, we examined eight weeks’ worth of data from Google Analytics and Webmaster Tools, before and after the change. We then examined our top 15 authorship URLs (where author photos were known to show consistently) compared to non-authorship URLs. 

The results broke down like this:

Change in Google organic traffic to Moz

  • Total Site:  -1.76%
  • Top 15 Non-Authorship URLs:  -5.96%
  • Top 15 Authorship URLs:  -2.86%

Surprisingly,
authorship URLs performed as well as non-authorship URLs in terms of traffic. Even though Moz was highly optimized for authors, traffic didn’t significantly change.

On an individual level, things looked much different. We actually observed big changes in traffic with authorship URLs increasing or decreasing in traffic by as much as 45%. There is no clear pattern: Some went up, some went down—exactly like any URL would over an extended time.

Authorship photos don’t exist in a vacuum; each photo on the page competed for attention with all the other photos on the page.
Each search result is as unique as a fingerprint. What worked for one result didn’t work for another.

Consider what happens visually when multiple author photos exist in the same search result:

One hypothesis speculates that more photos have the effect of drawing eyes down the page. In the absence of rich snippets, search click-through rates might follow more closely studied models, which dictate that results closer to the top earn more clicks.

In the absence of author photos, it’s likely click-through rate expectations have once again become more standardized.

Video snippets: a complex tale

Shortly after Google removed author photos, they took aim at video snippets as well. On July 17th,
MozCast reported a sharp decline in video thumbnails.

Most sites, Moz included, lost
100% of their video results. Other sites appeared to be “white-listed” as reported by former Mozzer Casey Henry at Wistia. 

A few of the sites Casey found where Google continues to show video thumbnails:

  • youtube.com
  • vimeo.com
  • vevo.com
  • ted.com
  • today.com
  • discovery.com

Aside from these “giants,” most webmasters, even very large publishers at the top of the industry, saw their video snippets vanish in search results.

How did this loss affect traffic for our URLs with embedded videos? Fortunately, here at Moz we have a large collection of ready-made video URLs we could easily study: our
Whiteboard Friday videos, which we produce every, well, Friday. 

To our surprise, most URLs actually saw more traffic.

On average, our Whiteboard Friday videos saw a
10% jump in organic traffic after losing video snippets.

A few other URLs with video saw dramatic increases:

The last example, the Learn SEO page, didn’t have an actual video on it, but a bug with Google caused them to display an older video thumbnail. (Several folks we’ve talked to speculate that Google removed video snippets simply to clean up bugs in the system.)

We witnessed a significant increase in traffic after losing video snippets. How did this happen? 

Did Google change the way they rank and show video pages?

It turns out that many of our URLs that contained videos also saw a significant change in the number of search
impressions at the exact same time.

According to Google, impressions for the majority of our video URLs shot up dramatically around July 14th.

Impressions for Whiteboard Friday URLs also rose 20% during this time. For Moz, most of the video URLs saw many more impressions, but for others, it appears rankings dropped.

While Moz saw video impressions rise,
other publishers saw the opposite effect.

Casey Henry, our friend at video hosting company
Wistia, reports seeing rankings drop for many video URLs that had thin or little content.

“…it’s only pages hosting video with thin content… the pages that only had video and a little bit of text went down.”


Casey Henry

For a broader perspective, we talked to
Marshall Simmonds, founder of Define Media Group, who monitors traffic to millions of daily video pageviews for large publishers. 

Marshall found that despite the fact that
most of the sites they monitor lost video snippets, they observed no visible change in either traffic or pageviews across hundreds of millions of visits.

Define Media Group also recently released its
2014 Mid-Year Digital Traffic Report which sheds fascinating light on current web traffic trends.

What does it all mean?

While we have anecdotal evidence of ranking and impression changes for video URLs on individual sites, on the grand scale across all Google search results these differences aren’t visible.

If you have video content, the evidence suggests it’s now worth more than ever to follow video SEO best practices (taken from video SEO expert Phil Nottingham):

  • Use a crawlable player (all the major video hosting platforms use these today)
  • Surround the video with supporting information (caption files and transcripts work great)
  • Include schema.org video markup (a minimal markup sketch follows this list)
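
For the markup point, here is a minimal sketch that builds schema.org VideoObject properties as JSON-LD and prints the script tag you would embed in the page. All of the values (title, URLs, date, duration) are hypothetical placeholders; adapt the properties to your own video.

```python
import json

# Hypothetical values -- replace with your own video's details.
video_schema = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Example Whiteboard Video",
    "description": "A short walkthrough of the topic covered on this page.",
    "thumbnailUrl": "https://example.com/videos/example-thumb.jpg",
    "uploadDate": "2014-11-01",
    "duration": "PT7M30S",  # ISO 8601 duration: 7 minutes 30 seconds
    "contentUrl": "https://example.com/videos/example.mp4",
    "embedUrl": "https://example.com/videos/example-player",
}

# The <script> block you would place in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(video_schema, indent=2))
print("</script>")
```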

SEO finds a way

For the past several years web marketers competed for image and video snippets, and it’s with a sense of sadness that they’ve been taken away.

The smart strategy follows the data, which suggest that more traditional click-through rate optimization techniques and strategies could now be more effective. This means strong titles, meta descriptions, rich snippets (those that remain), brand building and traditional ranking signals.

What happened to your site when Google removed author photos and video snippets? Let us know in the comments below.
