Eliminate Duplicate Content in Faceted Navigation with Ajax/JSON/jQuery

Posted by EricEnge

One of the classic problems in SEO is that while complex navigation schemes may be useful to users, they create problems for search engines. Many publishers rely on tags such as rel=canonical, or the parameter settings in Webmaster Tools, to try to solve these types of issues. However, each of these potential solutions has limitations. In today’s post, I am going to outline how you can use JavaScript solutions to eliminate the problem altogether.

Note that I am going to keep this post largely at the conceptual level, with only a few small illustrative sketches rather than production code. If you are interested in learning more about Ajax/JSON/jQuery, here are some resources you can check out:

  1. Ajax Tutorial
  2. Learning Ajax/jQuery

Defining the problem with faceted navigation

Having a page of products and then allowing users to sort those products the way they want (say, from highest to lowest price), or to use a filter to pick a subset of the products (say, only those over $60), makes good sense for users. We typically refer to these types of navigation options as “faceted navigation.”

However, faceted navigation can cause problems for search engines, because they don’t want to crawl and index every sort order and every filtered version of your pages. They would end up with many variants of your pages that are not significantly different from a search user’s perspective.

Solutions such as rel=canonical tags and parameter settings in Webmaster Tools have their limitations. For example, rel=canonical tags are treated as “hints” by the search engines, which may choose not to honor them; and even when they are honored, they do not necessarily keep the search engines from continuing to crawl those pages.

A better solution might be to use JSON and jQuery to implement your faceted navigation so that a new page is not created when a user picks a filter or a sort order. Let’s take a look at how it works.

Using JSON and jQuery to filter on the client side

The main benefit of the implementation discussed below is that a new URL is not created when a user applies a filter or sort order to one of your pages. When you use JSON and jQuery, the entire process happens on the client device without involving your web server again after the initial page load.

When a user initially requests one of the product pages on your web site, the interaction looks like this:

[Diagram: using JSON on faceted navigation]

This transfers the page to the user’s browser. Now, when the user picks a sort order (or filter) on that page, here is what happens:

[Diagram: jQuery and faceted navigation]

When the user picks one of those options, a jQuery request is made against the JSON data object. Translation: the entire interaction happens within the client’s browser, and the sort or filter is applied there. Simply put, the smarts to handle that sort or filter reside entirely within the code that was transferred to the client device with the initial request for the page.
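
To make that concrete, here is a minimal sketch of what the client-side piece might look like. Everything below (the products array, the element IDs, the field names) is hypothetical; the point is simply that the sort and filter operate on data that is already sitting in the browser.

```javascript
// Hypothetical product data, embedded in the page as a JSON object
// when the server answered the initial request.
var products = [
  { name: "Widget A", price: 40 },
  { name: "Widget B", price: 75 },
  { name: "Widget C", price: 62 }
];

// Redraw the (hypothetical) #product-list element from an array of products.
function render(list) {
  var html = list.map(function (p) {
    return "<li>" + p.name + " - $" + p.price + "</li>";
  }).join("");
  $("#product-list").html(html);
}

// Sort from highest to lowest price: no request ever leaves the browser.
$("#sort-price-desc").on("click", function () {
  render(products.slice().sort(function (a, b) { return b.price - a.price; }));
});

// Filter to products over $60: again, handled entirely on the client.
$("#filter-over-60").on("click", function () {
  render(products.filter(function (p) { return p.price > 60; }));
});
```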

As a result, there is no new page created and no new URL for Google or Bing to crawl. Any concerns about crawl budget or inefficient use of PageRank are completely eliminated. This is great stuff! However, there remain limitations in this implementation.

Specifically, if your list of products spans multiple pages on your site, the sorting and filtering will only be applied to the data set already transferred to the user’s browser with the initial request. In short, you may only be sorting the first page of products, and not across the entire set of products. It’s possible to have the initial JSON data object contain the full set of pages, but this may not be a good idea if the page size ends up being large. In that event, we will need to do a bit more.

What Ajax does for you

Now we are going to dig in slightly deeper and outline how Ajax will allow us to handle sorting, filtering, AND pagination. Warning: There is some tech talk in this section, but I will try to follow each technical explanation with a layman’s explanation about what’s happening.

The conceptual Ajax implementation looks like this:

[Diagram: Ajax and faceted navigation]

In this structure, we are using an Ajax layer to manage the communications with the web server. Imagine that we have a set of 10 pages, the user has received the first of those 10 pages on their device, and then requests a change to the sort order. The Ajax layer requests a fresh set of data from your site’s web server, much like a normal HTML transaction, except that it runs asynchronously rather than blocking the rest of the page.

If you don’t know what that means, the benefit is that the rest of the page (your main menu, your footer links to related products, and other page elements) can finish loading while the data that the Ajax layer will display is still being fetched in parallel. This can improve the perceived performance of the page.

In code terms, an event handler is registered for a given object (e.g. an HTML element or another DOM object). When the user selects a different sort order, that handler fires and issues the asynchronous request; when the response comes back, only the content controlled by the Ajax layer is refreshed, without a full page reload.

To translate this for the non-technical reader, it just means that we can update the sort order of the page, without needing to redraw the entire page, or change the URL, even in the case of a paginated sequence of pages. This is a benefit because it can be faster than reloading the entire page, and it should make it clear to search engines that you are not trying to get some new page into their index.

Effectively, it does this within the existing Document Object Model (DOM), which you can think of as the basic structure of the document and the spec for how the document is accessed and manipulated.
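
As a rough sketch of what that event-handler-plus-Ajax flow might look like (the endpoint, parameter names, and element IDs here are all hypothetical, not a prescribed API):

```javascript
// Sketch: when the user changes the (hypothetical) sort-order control,
// ask the server for a fresh, fully sorted data set asynchronously.
$("#sort-order").on("change", function () {
  $.getJSON("/api/products", { sort: $(this).val(), page: 1 }, function (data) {
    // Redraw only the product list; the URL and the rest of the page are untouched.
    var html = data.products.map(function (p) {
      return "<li>" + p.name + " - $" + p.price + "</li>";
    }).join("");
    $("#product-list").html(html);
  });
});
```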

How will Google handle this type of implementation?

For those of you who read Adam Audette’s excellent recent post on the tests his team performed on how Google reads JavaScript, you may be wondering whether Google will render all of these page variants on the same URL anyway, and whether it will object to that.

I had the same question, so I reached out to Google’s Gary Illyes to get an answer. Here is the dialog that transpired:

Eric Enge: I’d like to ask you about using JSON and jQuery to render different sort orders and filters within the same URL. I.e. the user selects a sort order or a filter, and the content is reordered and redrawn on the page on the client side. Hence no new URL would be created. It’s effectively a way of canonicalizing the content, since each variant is a strict subset.

Then there is a second level consideration with this approach, which involves doing the same thing with pagination. I.e. you have 10 pages of products, and users still have sorting and filtering options. In order to support sorting and filtering across the entire 10 page set, you use an Ajax solution, so all of that still renders on one URL.

So, if you are on page 1, and a user executes a sort, they get that all back in that one page. However, to do this right, going to page 2 would also render on the same URL. Effectively, you are taking the 10 page set and rendering it all within one URL. This allows sorting, filtering, and pagination without needing to use canonical, noindex, prev/next, or robots.txt.

If this was not problematic for Google, the only downside is that it makes the pagination not visible to Google. Does that make sense, or is it a bad idea?

Gary Illyes: If you have one URL only, and people have to click on stuff to see different sort orders or filters for the exact same content under that URL, then typically we would only see the default content.

If you don’t have pagination information, that’s not a problem, except we might not see the content on the other pages that are not contained in the HTML within the initial page load. The meaning of rel-prev/next is to funnel the signals from child pages (page 2, 3, 4, etc.) to the group of pages as a collection, or to the view-all page if you have one. If you simply choose to render those paginated versions on a single URL, that will have the same impact from a signals point of view, meaning that all signals will go to a single entity, rather than distributed to several URLs.

Summary

Keep in mind, the reason Google implemented tags like rel=canonical, NoIndex, and rel=prev/next is to reduce its crawling burden and overall index bloat, and to help focus incoming signals on the right pages. The use of Ajax/JSON/jQuery as outlined above accomplishes this simply and elegantly.

On most e-commerce sites, there are many different “facets” of how a user might want to sort and filter a list of products. With the Ajax-style implementation, this can be done without creating new pages. The end users get the control they are looking for, the search engines don’t have to deal with excess pages they don’t want to see, and signals in to the site (such as links) are focused on the main pages where they should be.

The one downside is that Google may not see all of your content when it is paginated. A site with lots of very similar products in a paginated list doesn’t have to worry much about this, since the incremental pages mostly contain more of what’s already on the first page. Sites whose content is materially different on the additional pages, however, might not want to use this approach.

These solutions do require JavaScript coding expertise, but they are not really that complex. If you have the ability to consider a path like this, you can free yourself from trying to understand the various tags, their limitations, and whether or not they truly accomplish what you are looking for.

Credit: Thanks to Clark Lefavour for providing a review of the above for technical correctness.


Reblogged 4 years ago from tracking.feedpress.it

How to Use Server Log Analysis for Technical SEO

Posted by SamuelScott

It’s ten o’clock. Do you know where your logs are?

I’m introducing this guide with a pun on a common public-service announcement that has run on late-night TV news broadcasts in the United States because log analysis is something that is extremely newsworthy and important.

If your technical and on-page SEO is poor, then nothing else that you do will matter. Technical SEO is the key to helping search engines to crawl, parse, and index websites, and thereby rank them appropriately long before any marketing work begins.

The important thing to remember: Your log files contain the only data that is 100% accurate in terms of how search engines are crawling your website. By helping Google to do its job, you will set the stage for your future SEO work and make your job easier. Log analysis is one facet of technical SEO, and correcting the problems found in your logs will help to lead to higher rankings, more traffic, and more conversions and sales.

Here are just a few reasons why:

  • Too many response code errors may cause Google to reduce its crawling of your website and perhaps even your rankings.
  • You want to make sure that search engines are crawling everything, new and old, that you want to appear and rank in the SERPs (and nothing else).
  • It’s crucial to ensure that all URL redirections will pass along any incoming “link juice.”

However, log analysis is something that is unfortunately discussed all too rarely in SEO circles. So, here, I wanted to give the Moz community an introductory guide to log analytics that I hope will help. If you have any questions, feel free to ask in the comments!

What is a log file?

Computer servers, operating systems, network devices, and computer applications automatically generate something called a log entry whenever they perform an action. In an SEO and digital marketing context, the action of interest is a page being requested by a visiting bot or human.

Web server log entries are typically written in the Common Log Format (an NCSA standard). Here is one example from Wikipedia with my accompanying explanations:

127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
  • 127.0.0.1 — The remote hostname. An IP address is shown, like in this example, whenever the DNS hostname is not available or DNSLookup is turned off.
  • user-identifier — The remote logname / RFC 1413 identity of the user. (It’s not that important.)
  • frank — The user ID of the person requesting the page. Based on what I see in my Moz profile, Moz’s log entries would probably show either “SamuelScott” or “392388” whenever I visit a page after having logged in.
  • [10/Oct/2000:13:55:36 -0700] — The date, time, and timezone of the action in question in strftime format.
  • GET /apache_pb.gif HTTP/1.0 — “GET” is one of the most common HTTP request methods (another is “POST”). “GET” fetches a URL, while “POST” submits something (such as a forum comment). The second part is the URL that is being accessed, and the last part is the version of HTTP that is being used.
  • 200 — The status code of the document that was returned.
  • 2326 — The size, in bytes, of the document that was returned.

Note: A hyphen is shown in a field when that information is unavailable.
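
To illustrate (this is just a sketch; the regular expression and the field names are mine, not part of any standard), here is how an entry like the one above can be split into those fields programmatically:

```javascript
// Sketch: parsing one Common Log Format entry (Node.js / plain JavaScript).
var line = '127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326';

var pattern = /^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\d+|-)$/;
var m = line.match(pattern);

if (m) {
  var entry = {
    host: m[1],                             // 127.0.0.1
    identity: m[2],                         // user-identifier
    user: m[3],                             // frank
    timestamp: m[4],                        // 10/Oct/2000:13:55:36 -0700
    method: m[5],                           // GET
    url: m[6],                              // /apache_pb.gif
    protocol: m[7],                         // HTTP/1.0
    status: Number(m[8]),                   // 200
    bytes: m[9] === "-" ? 0 : Number(m[9])  // 2326
  };
  console.log(entry);
}
```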

Every single time that you — or the Googlebot — visit a page on a website, a line with this information is output, recorded, and stored by the server.

Log entries are generated continuously and anywhere from several to thousands can be created every second — depending on the level of a given server, network, or application’s activity. A collection of log entries is called a log file (or often in slang, “the log” or “the logs”), and it is displayed with the most-recent log entry at the bottom. Individual log files often contain a calendar day’s worth of log entries.

Accessing your log files

Different types of servers store and manage their log files differently, so consult the documentation for your particular server software for guidance on finding and managing log data.

What is log analysis?

Log analysis (or log analytics) is the process of going through log files to learn something from the data. Some common reasons include:

  • Development and quality assurance (QA) — Creating a program or application and checking for problematic bugs to make sure that it functions properly
  • Network troubleshooting — Responding to and fixing system errors in a network
  • Customer service — Determining what happened when a customer had a problem with a technical product
  • Security issues — Investigating incidents of hacking and other intrusions
  • Compliance matters — Gathering information in response to corporate or government policies
  • Technical SEO — This is my favorite! More on that in a bit.

Log analysis is rarely performed regularly. Usually, people go into log files only in response to something — a bug, a hack, a subpoena, an error, or a malfunction. It’s not something that anyone wants to do on an ongoing basis.

Why? Here is a screenshot of ours showing just a very small part of an original (unstructured) log file:

Ouch. If a website gets 10,000 visitors who each go to ten pages per day, then the server will create a log file every day that will consist of 100,000 log entries. No one has the time to go through all of that manually.

How to do log analysis

There are three general ways to make log analysis easier in SEO or any other context:

  • Do-it-yourself in Excel
  • Proprietary software such as Splunk or Sumo Logic
  • The ELK Stack open-source software

Tim Resnik’s Moz essay from a few years ago walks you through the process of exporting a batch of log files into Excel. This is a (relatively) quick and easy way to do simple log analysis, but the downside is that one will see only a snapshot in time and not any overall trends. To obtain the best data, it’s crucial to use either proprietary tools or the ELK Stack.

Splunk and Sumo Logic are proprietary log analysis tools that are primarily used by enterprise companies. The ELK Stack is a free, open-source set of three platforms (Elasticsearch, Logstash, and Kibana) that is owned by Elastic and used more often by smaller businesses. (Disclosure: We at Logz.io use the ELK Stack to monitor our own internal systems as well as for the basis of our own log management software.)

For those who are interested in using this process to do technical SEO analysis, monitor system or application performance, or for any other reason, our CEO, Tomer Levy, has written a guide to deploying the ELK Stack.

Technical SEO insights in log data

However you choose to access and understand your log data, there are many important technical SEO issues to address as needed. I’ve included screenshots of our technical SEO dashboard with our own website’s data to demonstrate what to examine in your logs.

Bot crawl volume

It’s important to know the number of requests made by Baidu, BingBot, GoogleBot, Yahoo, Yandex, and others over a given period of time. If, for example, you want to get found in search in Russia but Yandex is not crawling your website, that is a problem. (You’d want to consult Yandex Webmaster and see this article on Search Engine Land.)
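
If your parsed entries include the user-agent string (the widely used “combined” log format appends the referrer and user agent to each line), a simple tally per bot is enough to spot a missing crawler. A sketch, using a hypothetical array of parsed entries like the one built above:

```javascript
// Sketch: tally requests per search engine bot over an array of parsed log entries.
// Assumes each entry has a userAgent field (present in the "combined" log format).
var bots = ["Googlebot", "bingbot", "Baiduspider", "YandexBot", "Slurp"];

function crawlVolume(entries) {
  var counts = {};
  bots.forEach(function (bot) { counts[bot] = 0; });
  entries.forEach(function (entry) {
    bots.forEach(function (bot) {
      if (entry.userAgent && entry.userAgent.indexOf(bot) !== -1) {
        counts[bot] += 1;
      }
    });
  });
  return counts; // e.g. { Googlebot: 154, bingbot: 40, ... }
}
```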

Response code errors

Moz has a great primer on the meanings of the different status codes. I have an alert system set up that tells me about 4XX and 5XX errors immediately, because those are very significant.
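
Pulling those errors out of a set of parsed entries is simple enough to script. A sketch, again over a hypothetical parsed-entries array:

```javascript
// Sketch: pull out 4XX and 5XX responses from an array of parsed log entries
// so they can be reviewed (or fed into an alerting system).
function errorResponses(entries) {
  return entries.filter(function (entry) {
    return entry.status >= 400 && entry.status <= 599;
  });
}

// e.g. errorResponses(parsedEntries).length -> number of error responses today
```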

Temporary redirects

Temporary 302 redirects do not pass along the “link juice” of external links from the old URL to the new one. Almost all of the time, they should be changed to permanent 301 redirects.

Crawl budget waste

Google assigns a crawl budget to each website based on numerous factors. If your crawl budget is, say, 100 pages per day (or the equivalent amount of data), then you want to be sure that all 100 are things that you want to appear in the SERPs. No matter what you write in your robots.txt file and meta-robots tags, you might still be wasting your crawl budget on advertising landing pages, internal scripts, and more. The logs will tell you — I’ve outlined two script-based examples in red above.

If you hit your crawl limit but still have new content that should be indexed to appear in search results, Google may abandon your site before finding it.

Duplicate URL crawling

The addition of URL parameters — typically used in tracking for marketing purposes — often results in search engines wasting crawl budgets by crawling different URLs with the same content. To learn how to address this issue, I recommend reading the resources on Google and Search Engine Land here, here, here, and here.
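
One quick way to see how much crawling is going to parameterized duplicates is to strip the query strings and group requests by the underlying path. A sketch over the same hypothetical parsed entries:

```javascript
// Sketch: group crawled URLs by path (query string stripped) so that
// parameterized duplicates of the same content stand out.
function duplicateCrawls(entries) {
  var byPath = {};
  entries.forEach(function (entry) {
    var path = entry.url.split("?")[0]; // drop tracking and other parameters
    byPath[path] = (byPath[path] || 0) + 1;
  });
  // Paths requested more than once are candidates for wasted crawl budget.
  return Object.keys(byPath).filter(function (path) {
    return byPath[path] > 1;
  });
}
```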

Crawl priority

Google might be ignoring (and not crawling or indexing) a crucial page or section of your website. The logs will reveal what URLs and/or directories are getting the most and least attention. If, for example, you have published an e-book that attempts to rank for targeted search queries but it sits in a directory that Google only visits once every six months, then you won’t get any organic search traffic from the e-book for up to six months.

If a part of your website is not being crawled very often — and it is updated often enough that it should be — then you might need to check your internal-linking structure and the crawl-priority settings in your XML sitemap.
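
To see which sections get the most and least attention, you can bucket a bot’s requests by top-level directory. Another sketch over hypothetical parsed entries:

```javascript
// Sketch: count a bot's requests per top-level directory to spot neglected sections.
function crawlByDirectory(entries, bot) {
  var counts = {};
  entries.forEach(function (entry) {
    if (!entry.userAgent || entry.userAgent.indexOf(bot) === -1) { return; }
    var segments = entry.url.split("?")[0].split("/");
    // "/ebooks/seo-guide" -> ["", "ebooks", "seo-guide"] -> bucket under "/ebooks/"
    var directory = segments.length > 2 ? "/" + segments[1] + "/" : "/";
    counts[directory] = (counts[directory] || 0) + 1;
  });
  return counts;
}

// e.g. crawlByDirectory(parsedEntries, "Googlebot") -> { "/products/": 840, "/ebooks/": 3, "/": 57 }
```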

Last crawl date

Have you uploaded something that you hope will be indexed quickly? The log files will tell you when Google has crawled it.
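
A sketch of answering that question from parsed entries (the URL and field names are hypothetical):

```javascript
// Sketch: find the most recent Googlebot request for a specific URL.
function lastCrawl(entries, url) {
  var hits = entries.filter(function (entry) {
    return entry.url === url &&
      entry.userAgent && entry.userAgent.indexOf("Googlebot") !== -1;
  });
  // Log entries are appended in chronological order, so the last match is the newest.
  return hits.length ? hits[hits.length - 1].timestamp : null;
}

// e.g. lastCrawl(parsedEntries, "/new-ebook/") -> "10/Oct/2000:13:55:36 -0700" or null
```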

Crawl budget

One thing I personally like to check and see is Googlebot’s real-time activity on our site because the crawl budget that the search engine assigns to a website is a rough indicator — a very rough one — of how much it “likes” your site. Google ideally does not want to waste valuable crawling time on a bad website. Here, I had seen that Googlebot had made 154 requests of our new startup’s website over the prior twenty-four hours. Hopefully, that number will go up!

As I hope you can see, log analysis is critically important in technical SEO. It’s eleven o’clock — do you know where your logs are now?



Reblogged 4 years ago from tracking.feedpress.it

Kasper Szymanski, ex-Google Spam Fighter Guests at #MajesticMemo

Kasper is a former Web Spam Fighter at Google. He founded SEARCHBROTHERS with fellow spam fighter Fili Wiese, and specialises in backlink analysis, reconsideration requests, and site recovery. Here’s a summary of the #MajesticMemo tweetchat: [View the story “Kaspar Szymanski guests at #MajesticMemo” on Storify]


Reblogged 4 years ago from blog.majestic.com

Check Your Local Business Listings in the UK

Posted by David-Mihm

One of the most consistent refrains from the Moz community as we’ve released features over the last two years has been the desire to see Moz Local expand to countries outside the U.S. Today I’m pleased to announce that we’re embarking on our journey to global expansion with support for U.K. business listing searches in our Check Listing tool.

Some of you may remember limited U.K. functionality as part of GetListed.org, but as a very small company we couldn’t keep up with the maintenance required to present reliable results. It’s taken us longer than we would have liked to get here, but now with more resources, the Moz Local team has the bandwidth and important experience from the past year of Moz Local in the U.S. to fully support U.K. businesses.

How It Works

We’ve updated our search feature to accept both U.S. and U.K. postal codes, so just head on over to moz.com/local/search to check it out!

After entering the name of your business and a U.K. postcode, we go out and ping Google and other important local search sites in the U.K., and return what we find. Simply select the closest-matching business and we’ll proceed to run a full audit of your listings across these sites.

You can click through and discover incomplete listings, inconsistent NAP information, duplicate listings, and more.

This check listing feature is free to all Moz community members.

You’ve no doubt noted in the screenshot above that we project a listing score improvement. We do plan to release a fully-featured U.K. version of Moz Local later this spring (with the same distribution, reporting, and duplicate-closure features that are available in the U.S.), and you can enter your email address—either on that page or right here—to be notified when we do!


U.K.-Specific Partners

As I’ve mentioned in previous blog comments, there are a certain number of global data platforms (Google, Facebook, Yelp, Bing, Foursquare, and Factual, among others) where it’s valuable to be listed correctly and completely no matter which country you’re in.

But every country has its own unique set of domestically relevant players as well, and we’re pleased to have worked with two of them on this release: Central Index and Thomson Local. (Head on over to the Moz Local Learning Center for more information about country-specific data providers.)

We’re continuing discussions with a handful of other prospective data partners in the U.K. If you’re interested in working with us, please let us know!

What’s Next?

I’m sure requests for further expansion, especially to Canada and Australia, will be loud and clear in the comments below! Further expansion is on our roadmap, but it’s balanced against building a more complete feature set in the (more populous) U.S. and U.K. markets. We’ll continue to use our experience in those markets as we prioritize when and where to expand next.

A few lucky members of the Moz Local team are already on their way to BrightonSEO. So if you’re attending that awesome event later this week, please stop by our booth and let us know what you’d like to see us work on next.


Reblogged 4 years ago from tracking.feedpress.it

How To Automate & Manage Guest Posting with LinkAssistant

Is LinkAssistant an outdated SEO tool that only sends automated requests? No way! LinkAssistant, the first tool in the SEO PowerSuite bundle, is in vogue more than ever! A growing number of our loyal…

Reblogged 4 years ago from www.youtube.com

Grow Your Own SEOs: Professional Development for Digital Marketers

Posted by RuthBurrReedy

Finding your next SEO hire is hard, but it’s only half the battle. Growing a team isn’t just about hiring—it’s about making your whole team, newbies and experts alike, better marketers.

It’s almost impossible to build a one-size-fits-all training program for digital marketers, since the tasks involved will depend a lot on the role. Even “SEO” can mean a lot of different things. Your role might be highly technical, highly creative, or a mix of both. Tactics like local SEO or conversion rate optimization might be a huge part of an SEO’s job or might be handled by another person entirely. Sometimes an SEO role includes elements like social media or paid search. The skills you teach your trainees will depend on what you need them to do, and more specifically, what you need them to do right now.

Whatever the specifics of the marketing role, you need to make sure you’re providing a growth plan for your digital marketers (this goes for your more experienced team members as well as your newbies). A professional growth plan helps you and your team members:

  • Track whether or not they’re making progress in their roles. Taking on a new skill set can be daunting. Having a growth plan can alleviate some of the stress less-experienced employees may feel when learning a new skill, and helps make sure more experienced employees aren’t stagnating. 
  • Spot problem areas. Everyone’s talents are different, but you don’t want someone to miss out on growth opportunities because they’re such a superstar in one area and are neglecting everything else. 
  • Have conversations around promotions and raises. Consistently tracking people’s development across a variety of skill sets allows you to compare where someone is now to where they were when you hired them; it also gives you a framework to discuss what additional steps might be needed before a promotion or raise is in order, and help them develop a plan to get there. 
  • Advance their careers. One of your duties as their manager is to make sure you’re giving them what they need to continue on their career path. A professional development plan should be managed with career goals in mind. 
  • Increase employee retention. Smart people like to learn and grow, and if you’re not providing them ways to do so, they’re not going to stick around.

We have technical/on-page SEOs, content marketers, local SEOs and marketing copywriters all working together on the same team at BigWing. We wanted to create a framework for professional development that we could apply to the whole team, so we identified a set of areas that any digital marketer should be growing in, regardless of their focus. This growth plan is part of everyone’s mid-year and year-end reviews.

Here’s what it looks like:

[Image: growth areas for digital marketers]

Want your own copy of the Professional Advancement Sheet? Get it here!

Tactical -> strategic

At the beginner level, team members are still learning the basic concepts and tasks associated with their role, and how those translate to the client metrics they’re being measured on. It takes time to encounter and fix enough different kinds of things to know “in x situation, look at a, b and c and then try y or z.”

As someone grows in their role, they will learn more advanced tactics. They should also be more and more able to use critical thinking to figure out how to solve problems and tackle longer-term client goals and projects. At the senior level, an SEO should be building long-term strategies and be comfortable with unusual campaigns and one-off projects.

Small clients -> big clients

There are plenty of small brochure websites in the world, and these sites are a great testing ground for the fundamentals of SEO: they may still have weird jacked-up problems (so many websites do), but they are a manageable size and don’t usually have the potential for esoteric technical issues that large, complex sites do. Once someone has a handle on SEO, you can start assigning bigger and badder sites and projects (with plenty of mentoring from more experienced team members—more on that later).

We thought about making this one “Easy clients -> difficult clients,” because there’s another dimension to this line of progress: increasingly complex client relationships. Clients with very large or complicated websites (or clients with more than one website) are likely to have higher budgets, bigger internal staff, and more stakeholders. As the number of people involved increases, so does the potential for friction, so a senior-level SEO should be able to handle those complex relationships with aplomb.

Learning -> teaching

At the beginner level, people are learning digital marketing in general and learning about our specific internal processes. As they gain experience, they become a resource for team members still in the “learning” phase, and at the senior level they should be a go-to for tough questions and expert opinions.

Even a beginner digital marketer may have other things to teach the team; skills learned from previous careers, hobbies or side gigs can be valuable additions. For example, we had a brand-new team member with a lot of experience in photography, a valuable skill for content marketers; she was able to start teaching her teammates more about taking good photos while still learning other content marketing fundamentals herself.

[Stock photo: a chalkboard that says “learning”]

I love this stock picture because the chalkboard just says “learning.” Photo via Pixabay.

Since managers can’t be everywhere at once, more experienced employees must take an active role in teaching. It’s not enough that they be experts (which is why this scale doesn’t go from “Learning” to “Mastering”); they have to be able to impart that expertise to others. Teaching is more than just being available when people have questions, too: senior team members are expected to be proactive about taking the time to show junior team members the ropes.

Prescribed -> creative

The ability to move from executing a set series of tasks to designing creative, heavily client-focused digital marketing campaigns is, in my opinion, one of the best predictors of long-term SEO success. When someone is just starting out in SEO, it’s appropriate to have a fairly standard set of tasks they’re carrying out. For a lot of those small sites that SEO trainees start on, that set of SEO fundamentals goes a long way. The challenge comes when the basics aren’t enough.

Creative SEO comes from being able to look at a client’s business, not just their website, and tailor a strategy to their specific needs. Creative SEOs are looking for unique solutions to the unique problems that arise from that particular client’s combination of business model, target market, history and revenue goals. Creativity can also be put to work internally, in the form of suggested process improvements and new revenue-driving projects.

General -> T-shaped

The concept of the T-shaped marketer has been around for a few years (if you’re not familiar with the idea, you can read up on it on Rand’s blog or the Distilled blog). Basically, it means that in addition to deep knowledge of whatever area(s) of inbound marketing we specialize in, digital marketers should also work to develop basic knowledge of a broad set of marketing disciplines, in order to understand more about the craft of marketing as a whole.

[Image: the T-shaped marketer]

Source: The T-Shaped Marketer

A digital marketer who’s just starting out will naturally be focusing more on the broad part of their T, getting their head around the basic concepts and techniques that make up the digital marketing skill set. Eventually most people naturally find a few specialty areas that they’re really passionate about. Encouraging employees to build deep expertise ultimately results in a whole team full of subject matter experts in a whole team’s worth of subjects.

Beginner -> expert

This one is pretty self-explanatory. The important thing to note is that expertise isn’t something that just happens to you after you do something a lot (although that’s definitely part of it). Honing expertise means actively pursuing new learning opportunities and testing new ideas and tactics, and we look for the pursuit of expertise as part of evaluating someone’s professional growth.

Observing -> leading

Anyone who is working in inbound marketing should be consistently observing the industry—they should be following search engine news, reading blog posts from industry experts, and attending events and webinars to learn more about their craft. It’s a must-do at all levels, and even someone who’s still learning the ropes can be keeping an eye on industry buzz and sharing items of interest with their co-workers.

Not everyone is crazy about the phrase “thought leadership.” When you’re a digital marketing agency, though, your people are your product—their depth of knowledge and quality of work is a big part of what you’re selling. As your team gains experience and confidence, it’s appropriate to expect them to start participating more in the digital marketing space, both online and in person. This participation could look like:

  • Pitching and speaking at marketing conferences 
  • Contributing to blogs, whether on your site or in other marketing communities 
  • Organizing local tech meetups 
  • Regularly participating in online events like #seochat

…or a variety of other activities, depending on the individual’s talents and interests. Not only does this kind of thought-leadership activity promote your agency brand, it also helps your employees build their personal brands—and don’t forget, a professional development plan needs to be as much about helping your people grow in their careers as it is about growing the skill sets you need.

Low output -> high output

I love the idea of meticulous, hand-crafted SEO, but let’s be real: life at an agency means getting stuff done. When people are learning to do stuff, it takes them longer to do (which is BY FAR MY LEAST FAVORITE PART OF LEARNING TO DO THINGS, I HATE IT SO MUCH), so expectations of the number of clients/volume of work they can handle should scale appropriately. It’s okay for people to work at their own pace and in their own way, but at some point you need to be able to rely on your team to turn things around quickly, handle urgent requests, and consistently hit deadlines, or you’re going to lose customers.

You may notice that some of these growth areas overlap, and that’s okay—the idea is to create a nuanced approach that captures all the different ways a digital marketer can move toward excellence.

Like with all other aspects of a performance review, it’s important to be as specific as possible when discussing a professional growth plan. If there’s an area a member of your team needs to make more progress in, don’t just say e.g. “You need to be more strategic.” Come up with specific projects and milestones for your marketer to hit so you’re both clear on when they’re growing and what they need to do to get to the next level.


Reblogged 4 years ago from tracking.feedpress.it