My Favorite 5 Analytics Dashboards – Whiteboard Friday

Posted by Sixthman

Finding effective ways of organizing your analytics dashboards is quite a bit easier if you can get a sense for what has worked for others. To that end, in today’s Whiteboard Friday the founder of Sixth Man Marketing, Ed Reese, shares his five favorite approaches.

UPDATE: At the request of several commenters, Ed has generously provided GA templates for these dashboards. Check out the links in his comment below!

For reference, here’s a still of this week’s whiteboard!

Video transcription

Hi, I’m Ed Reese with Sixth Man Marketing and Local U. Welcome to this edition of Whiteboard Friday. Today we’re going to talk about one of my favorite things in terms of Google Analytics — the dashboard.

So think of your dashboard like the dashboard on your car — what’s important to you and what’s important to your client. I have the new Tesla dashboard, you might recognize it. So, for my Tesla dashboard, I want navigation, tunes, calendar, everything and a bag of chips. You notice my hands are not on the wheel because it drives itself now. Awesome.

So, what’s important? I have the top five dashboards that I like to share with my clients and create for them. These are the executive dashboards — one for the CMO on the marketing side, new markets, content, and a tech check. You can actually create dashboards and make sure that everything is working.

These on the side are some of the few that I think people don’t take a look at as often. It’s my opinion that we have a lot of very generic dashboards, so I like to really dive in and see what we can learn so that your client can really start using them for their advantage.

#1 – Executives

Let’s start with the executive dashboard. There is a lot of debate on whether or not to go from left to right or right to left. So in terms of outcome, behavior, and acquisition, Google Analytics gives you those areas. They don’t mark them as these three categories, but I follow Avinash’s language and the language that GA uses.

When you’re talking to executives or CFOs, it’s my personal opinion that executives always want to see the money first. So focus on financials, conversion rates, number of sales, number of leads. They don’t want to go through the marketing first and then get to the numbers. Just give them what they want. On a dashboard, they’re seeing that first.

So let’s start with the result and then go back to behavior. Now, this is where a lot of people have very generic metrics — pages viewed, generic bounce rate, very broad metrics. To really dive in, I like focusing and using the filters to go to specific areas on the site. So if it’s a destination like a hotel, “Oh, are they viewing the pages that helped them get there? Are they looking at the directional information? Are they viewing discounts and all sorts of packages?” Think of the behavior on those types of pages you want to measure, and then reverse engineer. That way you can tell the executive, “Hey, this hotel reservation viewed these packages, which came from these sources, campaigns, search, and social.” Remember, you’re building it so that they can view it for themselves and really take advantage and see, “Oh, that’s working, and this campaign from this source had these behaviors that generated a reservation,” in that example.

#2 – CMO

Now, let’s look at it from a marketing perspective. You want to help make them look awesome. So I like to reverse it and start with the marketing side in terms of acquisition, then go to behavior on the website, and then end up with the same financials — money, conversion rate percentages, number of leads, number of hotel rooms booked, etc. I like to get really, really focused.

So when you’re building a dashboard for a CMO or anyone on the marketing side, talk to them about what metrics matter. What do they really want to learn? A lot of times you need to know their exact territory and really fine tune it in to figure out exactly what they want to find out.

Again, I’m a huge fan of filters. What behavior matters? So for example, one of our clients is Beardbrand. They sell beard oil and they support the Urban Beardsman. We know that their main markets are New York, Texas, California, and the Pacific Northwest. So we could have a very broad regional focus for acquisition, but we don’t. We know where their audience lives, we know what type of behavior they like, and ultimately what type of behavior on the website influences purchases.

So really think from a marketing perspective, “How do we want to measure the acquisition to the behavior on the website and ultimately what does that create?”

These are pretty common, so I think most people are using a marketing and executive dashboard. Here are some that have really made a huge difference for clients of ours.

#3 – New markets

Love new market dashboards. Let’s say, for example, you’re a hotel chain and you normally have people visiting your site from Washington, Oregon, Idaho, and Montana. Well, in our case we had those states excluded, and we were looking at a broader set of states — Hawaii, Alaska, Colorado, Texas. Not normally people who would come to this particular hotel.

Well, we discovered in the dashboard — and it was actually the client that discovered it — that we suddenly had a 6000% increase in Hawaii. They called me and said, “Are we marketing to Hawaii?” I said no. They said, “Well, according to the dashboard, we’ve had 193 room nights in the past 2 months.” Like, “Wow, 193 room nights from Hawaii, what happened?” So we started reverse engineering that, and we found out that Allegiant Airlines suddenly had a direct flight from Honolulu to Spokane, and the hotel in this case was two miles from the airport. They could then do paid search campaigns in Hawaii. They could try to connect with Allegiant to co-op some advertising and some messaging. Boom. It would never have been discovered without that dashboard.

#4 – Top content

Another example, top content. Again, going back to Beardbrand, they have a site called the Urban Beardsman, and they publish a lot of content for help and videos and tutorials. Measuring that content is really important, because they’re putting a lot of work into educating their market and new people who are growing beards and using their product. They want to know, “Is it worth it?” They’re hiring photographers, they’re hiring writers, and we’re able to see if people are reading the content they’re providing. Ultimately, we’re focusing much more on their content on the behavior side and then figuring out what that outcome is.

A lot of people have content or viewing of the blog as part of an overall dashboard, let’s say for your CMO. I’m a big fan of, in addition to having that, also having a very specific content dashboard so you can see your top blogs. Whatever content you provide, I want you to always know what that’s driving on your website.

#5 – Tech check

One of the things that I’ve never heard anyone talk about before, that we use all the time, is a tech check. So we want to see a setup so we can view mobile, tablet, desktop, browsers. What are your gaps? Where is your site possibly not being used to its fullest potential? Are there any issues with shopping carts? Where do they fall off on your website? Set up any possible tech that you can track. I’m a big fan of looking both on the mobile, tablet, any type of desktop, browsers especially to see where they’re falling off. For a lot of our clients, we’ll have two, three, or four different tech dashboards. Get them to the technical person on the client side so they can immediately see if there’s an issue. If they’ve updated the website, but maybe they forgot to update a certain portion of it, they’ve got a technical issue, and the dashboard can help detect that.

So these are just a few. I’m a huge fan of dashboards. They’re very powerful. But the big key is to make sure that not only you, but your client understands how to use them, and they use them on a regular basis.

I hope that’s been very helpful. Again, I’m Ed Reese, and these are my top five dashboards. Thanks.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 4 years ago from tracking.feedpress.it

Lessons from the Front Line of Front-End Content Development

Posted by richardbaxterseo

As content marketing evolves, the list of media you could choose to communicate your message expands. So does the list of technologies at your disposal. But without a process, a project plan and a tried and tested approach, you might struggle to gain any traction at all.

In this post, based on my MozCon 2014 presentation, I’d like to share the high-level approach we take while developing content for our clients, and the lessons we’ve learned from initial research to final delivery. Hopefully there are some takeaways for you to enhance your own approach or make your first project a little less difficult.

This stuff is hard to do

I hate to break it to you, but the first few times you attempt to develop something a little more innovative, you’re going to get burned. Making things is pretty tough and there are lots of lessons to learn. Sometimes you’ll think your work is going to be huge, and it flops. That sucks; move on, learn, and maybe come back later to revisit your approach.

To structure and execute a genuinely innovative, successful content marketing campaign, you need to understand what’s possible, especially within the context of your available skills, process, budget, available time and scope.

You’ll have a few failures along the journey, but when something goes viral, when people respond positively to your work – that, friends, feels amazing.

What this post is designed to address

In the early days of SEO, we built links. Email outreach, guest posting, eventually, infographics. It was easy, for a time. Then, Penguin came and changed everything.

Our industry learned that we should be finding creative and inventive ways to solve our customers’ problems, inspire, guide, help – whatever the solution, an outcome had to be justified. Yet still, a classic habit of the SEO remained: the need to decide in what form the content should be executed before deciding on the message to tell.

I think we’ve evolved from “let’s do an infographic on something!” to “I’ve got a concept that people will love. Should this be long form, an interactive, a data visualization, an infographic, a video, or something else?”

This post is designed to outline the foundations of an approach you can use to enhance your content development. If you take one thing away from this article, let it be this:

The first rule of almost anything: be prepared or prepare to fail. This rule definitely applies to content development!

Understand the technical environment you’re hosting your content in

Never make assumptions about the technical environment your content will be hosted in. We’ve learned to ask more about the technical setup of a client’s website. You see, big enterprise-class sites usually have load balancing, pre-rendering, and very custom JavaScript that could introduce technical surprises much too late in the process. Better to be aware of what’s in store than to hope your work will be compatible with its eventual home.

Before you get started on any development or design, make sure you’ve built an awareness of your client’s development and production environments. Find out more about their CMS, code base, and ask what they can and cannot host.

Knowing more about the client’s development schedule, for example how quickly a project can be uploaded, will help you plan lead times into your project documentation.

We’ve found that discussing early stage ideas with your client’s development team will help them visualise the level of task required to get something live. Involving them at this early stage means you’re informed on any potential risk in technology choice that will harm your project integrity later down the line.

Initial stakeholder outreach and ideation

Way back at MozCon 2013, I presented an idea called “really targeted outreach“. The concept was simple: find influential people in your space, learn more about the people they influence, and build content that appeals to both.

We’ve been using a similar methodology for larger content development projects: using social data gathered from the Twitter Firehose and other freely available tools to inspire the creative process, then reaching out to identified influencers and asking them to contribute or give feedback on an idea. The trick is to execute your social research at a critical, early stage of the content development process. Essentially, you’re collecting data to gain a sense of confidence in the appeal of your content.

We’ve made content with such a broad range of people involved, from astronauts to butlers working at well known, historic hotels. With a little of the right approach to outreach, it’s amazing how helpful people can be. Supplemented by the confidence you’ve gained from your data, some positive results from your early stage outreach can really set a content project on the right course.

My tip: outreach and research several ideas and tell your clients which was most popular. If you can get them excited and behind the idea with the biggest response then you’ll find it easier to get everyone on the same page throughout your project.

Asset collection and research

Now, the real work begins. As I’ve written elsewhere, I believe that the depth of your content, its accuracy, and its integrity are an absolute must if it is to be taken seriously by those it’s intended for.

Each project tends to be approached a little differently, although I tend to see these steps in almost every one: research, asset collection, storyboarding and conceptual illustration.

For asset collection and research, we use a tool called Mural.ly – a wonderful collaborative tool to help speed up the creative process. Members of the project team begin by collecting relevant information and assets (think: images, quotes, video snippets) and adding them to the project. As the collection evolves, we begin to arrange the data into something that might resemble a timeline:

After a while, the story begins to take shape. Depending on how complex the concept is, we’ll either go ahead with some basic illustration (a “white board session”) or we’ll detail the storyboard in written form. Here’s the Word document that summarised the chronological order of the content we’d planned for our Messages in the Deep project:

[Image: Messages in the Deep storyboard]

And, if the brief is more complex, we’ll create a more visual outline in a whiteboard session with our designers:

[Image: interactive map sketch]

How do you decide on the level of brief needed to describe your project? Generally, the more complex the project, the more important a full array of briefing materials and project scoping will be. If, however, we’re talking simpler, like “long form” article content, the chances are a written storyboard and a collection of assets should be enough.

[Image: Schema Guide]

Over time, we’ve learned how to roll out content that’s partially template-based, rather than having to re-invent the wheel each time. Dan’s amazing Log File Analysis guide was reused when we decided to re-skin the Schema Guide, and as a result we’ve decided to give Kaitlin’s Google Analytics Guide the same treatment.

Whichever process you choose, it helps to re-engage your original contributors, influencers and publishers for feedback. Remember to keep them involved at key stages – if for no other reason than to make sure you’re meeting their expectations on content they’d be willing to share.

Going into development

Obviously we could talk all day about the development process. I think I’ll save the detail for my next post, but suffice it to say we’ve learned some big things along the way.

Firstly, it’s good to brief your developers well before the design and content are finalised, particularly if there are features that might need some thought and experimental prototyping. I’ve found over time that a conversation with a developer leads to a better understanding of what’s easily possible with existing libraries and code. If you don’t involve the developers in the design process, you may find yourself committed to building something extremely custom, and your project timeline can become drastically underestimated.

It’s also really important to make sure that your developers have had the opportunity to specify how they’d like the design work to be delivered: file format, layers, and sizing for different breakpoints are all really important to an efficient development schedule and make a huge difference to the agility of your work.

Our developers like to have a logical structure of layers and groups in a PSD. Layers and groups should all be named and it’s a good idea to attach different UI states for interactive elements (buttons, links, tabs, etc.), too.

Grid layouts are much preferred although it doesn’t matter if it’s 1200px or 960px, or 12/16/24 columns. As long as the content has some structure, development is easier.

As our developers like to say: structure = patterns = abstraction = good things. And in an ideal world, they prefer to work with style tiles.

Launching

Big content takes more promotion to get that all-important initial traction. Your outreach strategy has already been set, you’ve defined your influencers, and you have buy-in from publishers. So, as soon as your work is ready, go ahead and tell your stakeholders it’s live and get that flywheel turning!

My pro tip for a successful launch is to be prepared to offer customised content for certain publishers. Simple touches go a long way: The Washington Post’s animated GIF idea was a real touch of genius – I think some people liked the GIF more than the actual interactive! This post on Mashable was made possible because we built parts of the interactive so they could be iframed – publishers seem to love a different approach, so try to design that concept in right at the beginning of your plan. From there, stand back, measure, learn, and never give up!

That’s it for today’s post. I hope you’ve found it informative, and I look forward to your comments below.


Reblogged 4 years ago from moz.com

Panda 4.1: The Devil Is in the Aggregate

Posted by russvirante

I wish I didn’t have to say this. I wish I could look in the eyes of every victim of the last Panda 4.1 update and tell them it was something new, something unforeseeable, something out of their control. I wish I could tell them that Google pulled a fast one that no one saw coming. But I can’t.

Like many in the industry, I have been studying Panda closely since its inception. Google gave us a rare glimpse behind the curtain by providing us with the very guidelines they set in place to build their massive machine-learned algorithm which came to be known as Panda. Three and a half years later, Panda is still with us and seems to still catch us off guard. Enough is enough.

What I intend to show you throughout this piece is that the original Panda questionnaire still remains a powerful predictive tool to wield in defense of what can be a painful organic traffic loss. By analyzing the winner/loser reports of Panda 4.1 using standard Panda surveys, we can determine whether Google’s choices are still in line with their original vision. So let’s dive in.

The process

The first thing we need to do is acquire a winners-and-losers list. I picked this excellent one from SearchMetrics, although any list would do as long as it is accurate. Second, I ran a Panda questionnaire with 10 questions on random pages from each of the sites (both the winners and losers). You can run your own Panda survey by following Distilled and Moz’s instructions here, or just use PandaRisk like I did. After completing these analyses, we simply compare the scores across the board to determine whether they continue to reflect what we would expect given the original goals of the Panda algorithm.

The aggregate results

I actually want to do this a little bit backwards to drive home a point. Normally we would build to the aggregate results, starting with the details and leaving you with the big picture. But Panda is a big-picture kind of algorithmic update. It is specially focused on the intersection of myriad features, where the sum is greater than the parts. While breaking down these features can give us some insight, at the end of the day we need to stay acutely aware that unless we do well across the board, we are at risk.

Below is a graph of the average cumulative scores across the winners and losers. The top row shows winners, the bottom row losers. The left and right red circles indicate the lowest and highest scores within those categories, and the blue circle represents the average. There is something very important that I want to point out on this graph: the highest individual average score of all the losers is less than the lowest average score of the winners. This means that in our randomly selected data set, not a single loser averaged as high a score as the worst winner. When we aggregate the data together, even with a crude system of averages rather than the far more sophisticated machine learning techniques employed by Google, there is a clear disparity between the sites that survive Panda and those that do not.
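That no-crossover check is easy to reproduce once you have per-site averages. A minimal sketch with invented scores (the study’s raw survey data isn’t public, so the `winners` and `losers` values here are purely illustrative):

```python
# Hypothetical per-site average Panda survey scores on a 0-100 scale.
# These numbers are invented for illustration, not the study's data.
winners = {"site-a": 78.0, "site-b": 84.5, "site-c": 91.0}
losers = {"site-x": 42.0, "site-y": 55.5, "site-z": 61.0}

winner_avg = sum(winners.values()) / len(winners)
loser_avg = sum(losers.values()) / len(losers)

# The key observation: the best loser still scores below the worst
# winner, i.e. the two groups don't overlap at all.
no_crossover = max(losers.values()) < min(winners.values())

print(f"winners avg: {winner_avg:.1f}, losers avg: {loser_avg:.1f}")
print(f"no crossover between groups: {no_crossover}")
```

With real data you would feed in one average per surveyed site; the comparison itself is the same.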

It is also worth pointing out here that there is no positive Panda algorithm to our knowledge. Sites that perform well on Panda do not see boosts because they are being given ranking preference by Google; rather, their competitors have seen rankings losses or their own previous Panda penalties have been lifted. In either scenario, we should remember that performing well on Panda assessments isn’t necessarily going to increase your rankings, but it should help you sustain them.

Now, let’s move on to some of the individual questions. We are going to start with the least correlated questions and move to those which most strongly correlate with performance in Panda 4.1. While all of the questions had positive correlations, a few lacked statistical significance.


Insignificant correlation

The first question which was not statistically significant in its correlation with Panda performance was “This page has visible errors on it”. The scores have been inverted here so that the higher the score, the fewer the number of people who reported that the page has errors. You can see that while more respondents did say that the winners had no visible errors, the difference was very slight. In fact, there was only a 5.35% difference between the two. I will save comment on this until after we discuss the next question.

The second question which was not statistically significant in its correlation with Panda performance was “This page has too many ads”. The scores have once again been inverted here so that the higher the score, the fewer the number of people who reported that the page has too many ads. This was even closer. The winners performed only 2.3% better than the losers in Panda 4.1.

I think there is a clear takeaway from these two questions. Nearly everyone gets the easy stuff right, but that isn’t enough. First, a lot of pages just have no ads whatsoever because that isn’t their business model. Even those that do have ads have caught on for the most part and optimized their pages accordingly, especially given that Google has other layout algorithms in place aside from Panda. Moreover, content inaccuracy is more likely to impact scrapers and content spinners than most sites, so it is unsurprising that few if any reported that the pages were filled with errors. If you score poorly on either of these, you have only begun to scratch the surface, because most websites get these right enough.
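The “statistically significant” distinction used throughout this analysis can be approximated without a statistics library. Below is a stdlib-only sketch of a one-sided permutation test on invented 0/1 survey responses (not the article’s data); a large p-value, as here, corresponds to the “not significant” verdict these two questions received:

```python
import random

# Hypothetical per-respondent answers to one survey question
# (1 = favorable, 0 = unfavorable). Invented for illustration.
winners = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
losers = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(winners) - mean(losers)

# Permutation test: shuffle the pooled responses many times and count
# how often a difference at least as large arises by chance alone.
random.seed(0)
pooled = winners + losers
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(winners)]) - mean(pooled[len(winners):])
    if diff >= observed:
        count += 1
p_value = count / trials

print(f"difference in means: {observed:.2f}, p ≈ {p_value:.3f}")
```

Here the 10% difference in means could easily happen by chance (p is near 0.5), which is exactly the situation described for the “errors” and “too many ads” questions.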


Moderate correlation

A number of Panda questions drew statistically significant differences in means, but there was still substantial crossover between the winners and losers. Whenever the average of the losers was greater than the lowest of the winners, I considered it only a moderate correlation. While the difference between means remained strong, there was still a good deal of variance in the scores.

The first of these to consider was the question as to whether the content was “trustworthy”. You will notice a trend in a lot of these questions: there is a great deal of subjective human opinion. This subjectivity plays itself out quite a bit when the topics of the site deal with very different categories of knowledge. For example, a celebrity fact site might be very trustworthy (although the site might be ad-laden), while an opinion piece in the New Yorker on the same celebrity might not be seen as trustworthy – even though it is plainly labeled as opinion. The trustworthy question ties back to the “does this page have errors” question quite nicely, drawing attention to the difference between a subjective and an objective question, and the way asking respondents for more of a personal opinion can spread the means out. This might seem unfair, but in the real world your site and Google itself are being judged by that subjective opinion, so it is understandable why Google wants to get at it algorithmically. Nevertheless, there was a strong difference in means between winners and losers of 12.57%, more than double the difference we saw on the question of errors.

Original content has long been a known requirement of organic search success, so no one was surprised when it made its way into the Panda questionnaire. It still remains an influential piece of the puzzle, with a difference in means of nearly 20%. It was barely ruled out from being a heavily correlated feature because one winner scored just below the losers’ average mean. Notice, though, that one of the winners scored a perfect 100% on the survey. This perfect score was received despite hundreds of respondents. It can be done.

As you can imagine, perception on what is and is not an authority is very subjective. This question is powerful because it pulls in all kinds of assumptions and presuppositions about brand, subject matter, content quality, design, justification, citations, etc. This likely explains why this question is beleaguered by one of the highest variances on the survey. Nevertheless, there was a 13.42% difference in means. And, on the other side of the scale, we did see what it is like to have a site that is clearly not an authority, scoring the worst possible 0% on this question. This is what happens when you include highly irrelevant content on your site just for the purpose of picking up either links or traffic. Be wary.

Everyone hates the credit card question, and luckily there is huge variance in answers. At least one site survived Panda despite scoring 5% on this question. Notice that there is a huge overlap between the lowest winner and the average of the losing sites. Also, notice the placement of the mean (blue circle) in the winners category: the average wasn’t skewed to the right, which would indicate just one outlier. There was strong variance in the responses across the board. The same was true of the losers. However, with a +15% difference in means, there was a clear average differentiation between the performance of winners and losers. Once again, though, we are drawn back to that aggregate score at the top, where we see how Google can use all these questions together to build a much clearer picture of site and content quality. For example, it is possible that Google pays more attention to this question when it is analyzing a site that has other features like the words “shopping cart” or “check out” on the homepage.

I must admit that the bookmarking question surprised me. I always considered it to be the most subjective of the bunch. It seemed unfair that a site might be judged because it has material that simply doesn’t appeal to the masses. The survey just didn’t bear this out, though. There was a clear difference in means, but after comparing the sites that were from similar content categories, there just wasn’t any reason to believe that a bias was created by subject matter. The 14.64% difference seemed to be, editorially speaking, related more to the construction of the page and the quality of the content, not the topic being discussed. Perhaps a better way to think about this question is: would you be embarrassed if your friends knew THIS was the site you were getting your information from, rather than another?

This wraps up the 5 questions that had good correlations but substantial enough variance that the losers’ average could beat out the lowest winner. I think one clear takeaway from this section is that these questions, while harder to improve upon than the Low Ads and No Errors questions before, are completely within the webmaster’s grasp. Making your content and site appear original, trustworthy, authoritative, and worthy of bookmarking isn’t terribly difficult. Sure, it takes some time and effort, but these goals, unlike the next, don’t appear that far out of reach.


Heavy correlation

The final three questions that seemed to distinguish the most between the winners and losers of Panda 4.1 all had high difference-in-means and, more importantly, had little to no crossover between the highest loser and lowest winner. In my opinion, these questions are also the hardest for the webmaster to address. They require thoughtful design, high quality content, and real, expert human authors.

The first question that met this classification was “could this content appear in print”. With a difference in means of 22.62%, the winners thoroughly trounced the losers in this category. Their sites and content were just better designed and better written. They showed the kind of editorial oversight you would expect in a print publication. The content wasn’t trite and unimportant; it was thorough and timely.

The next heavily correlated question was whether the page was written by experts. With over a 34% difference in means between the winners and losers, and literally no overlap at all between the winners’ and losers’ individual averages, it was clearly the strongest question. You can see why Google would want to look into things like authorship when they knew that expertise was such a powerful distinguisher between Panda winners and losers. This really raises the question: who is writing your content, and do your readers know it?

Finally, insightful analysis had a huge difference in means of +32% between winners and losers. It is worth noting that the highest loser is an outlier, as shown by the skewed mean (blue circle) sitting closer to the bottom than the top. Most of the answers were closer to the lower score than the top, so the overlap is exaggerated a bit. But once again, this just draws us back to the original conclusion – that the devil is not in the details, the devil is in the aggregate. You might be able to score highly on one or two of the questions, but it won’t be enough to carry you through.


The takeaways

OK, so hopefully it is clear that Panda really hasn’t changed all that much. The same questions we looked at for Panda 1.0 still matter. In fact, I would argue that Google is just getting better at algorithmically answering those same questions, not changing them. They are still the right way to judge a site in Google’s eyes. So how should you respond?

The first and most obvious thing is that you should run a Panda survey on your (or your clients’) sites. Select a random sample of pages from the site. The easiest way to do this is to get an export of all of the pages of your site, perhaps from Open Site Explorer, put them in Excel, and shuffle them. Then choose the top 10 that come up. You can follow the Moz instructions I linked to above, do it at PandaRisk, or just survey your employees, friends, colleagues, etc. While the latter will probably be positively biased, it is still better than nothing. Go ahead and get yourself a benchmark.
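If you’d rather skip the Excel shuffle, the same sampling step is a few lines of scripting. A minimal sketch; `example.com` and the page count are placeholders standing in for your own URL export:

```python
import random

# Placeholder for a real export; in practice, load the URL list from a
# CSV produced by Open Site Explorer or your crawler.
all_pages = [f"https://example.com/page-{i}" for i in range(1, 201)]

random.seed(42)  # drop the seed for a genuinely random sample each run
sample = random.sample(all_pages, k=10)  # 10 distinct pages, no repeats

for url in sample:
    print(url)
```

`random.sample` draws without replacement, so you never survey the same page twice.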

The next step is to start pushing those scores up one at a time. I give some solid examples in the Panda 4.0 release article about improving press release sites, but there is another, better resource that just came out as well. Josh Bachynski released an amazing set of known Panda factors over at his website The Moral Concept. It is well worth a thorough read. There is a lot to take in, but there are tons of easy-to-implement improvements that could help you out quite a bit. Once you have knocked out a few for each of your low-scoring questions, run the exact same survey again and see how you improve. Keep iterating this process until you beat out each of the question averages for winners. At that point, you can rest assured that your site is safe from the Panda by beating the devil in the aggregate.


Reblogged 4 years ago from feedproxy.google.com