Exposing The Generational Content Gap: Three Ways to Reach Multiple Generations

Posted by AndreaLehr

With more people of all ages online than ever before, marketers must create content that resonates with multiple generations. Successful marketers realize that each generation has unique expectations, values and experiences that influence consumer behaviors, and that offering your audience content that reflects their shared interests is a powerful way to connect with them and inspire them to take action.

We’re in the midst of a generational shift, with Millennials expected to surpass Baby Boomers in 2015 as the largest living generation. In order to be competitive, marketers need to realize where key distinctions and similarities lie in terms of how these different generations consume content and share it with others.

To better understand the habits of each generation, BuzzStream and Fractl surveyed over 1,200 individuals and segmented their responses into three groups: Millennials (born between 1977–1995), Generation X (born between 1965–1976), and Baby Boomers (born between 1946–1964). [Eds note: The official breakdown for each group is as follows: Millennials (1981–1997), Generation X (1965–1980), and Boomers (1946–1964)]

Our survey asked them to identify their preferences for over 15 different content types while also noting their opinions on long-form versus short-form content and different genres (e.g., politics, technology, and entertainment).

We compared their responses and found similar habits and unique trends among all three generations.

Here’s our breakdown of the three key takeaways you can use to elevate your future campaigns:

1. Baby Boomers are consuming the most content

However, they have a tendency to enjoy it earlier in the day than Gen Xers and Millennials.

Although we found striking similarities between the younger generations, the oldest generation distinguished itself by consuming the most content. Over 25 percent of Baby Boomers consume 20 or more hours of content each week. Additional findings:

  • Baby Boomers also hold a strong lead in the 15–20 hours bracket at 17 percent, edging out Gen Xers and Millennials at 12 and 11 percent, respectively
  • The largest share of Gen Xers and Millennials—just over 22 percent of each group—consumes between 5 and 10 hours per week
  • Less than 10 percent of Gen Xers consume less than five hours of content a week—the lowest of all three groups

We also compared the times of day that each generation enjoys consuming content. The results show that the largest share of our respondents—over 30 percent—consume content between 8 p.m. and midnight. However, there are several trends that distinguish the oldest generation from the younger ones:

  • Baby Boomers consume a majority of their content in the morning: nearly 40 percent of Baby Boomer respondents are online between 5 a.m. and noon.
  • The least popular time for most respondents to engage with content online is late at night, between midnight and 5 a.m., earning less than 10 percent from each generation
  • Gen X is the only generation to dip below 10 percent in three time slots: 5 a.m. to 9 a.m., 6 to 8 p.m., and midnight to 5 a.m.

When Do We Consume Content

When it comes to which device each generation uses to consume content, laptops are the most common, followed by desktops. The biggest distinction is in mobile usage: Over 50 percent of respondents who use their mobile as their primary device for content consumption are Millennials. Other results reveal:

  • Not only do Baby Boomers use laptops the most (43 percent), but they also use tablets the most (40 percent of all primary tablet users are Baby Boomers)
  • Over 25 percent of Millennials use a mobile device as their primary source for content
  • Gen Xers are the least active tablet users, with less than 8 percent of respondents using it as their primary device

Device To Consume Content

2. Preferred content types and lengths span all three generations

One thing every generation agrees on is the type of content they enjoy seeing online. Our results reveal that the top four content types—blog articles, images, comments, and eBooks—are exactly the same for Baby Boomers, Gen Xers, and Millennials. Additional comparisons indicate:

  • The least preferred content types—flipbooks, SlideShares, webinars, and white papers—are the same across generations, too (although not in the exact same order)
  • Surprisingly, Gen Xers and Millennials list quizzes as one of their five least favorite content types

Most Consumed Content Type

All three generations also agree on ideal content length, around 300 words. Further analysis reveals:

  • Baby Boomers have the highest preference for articles under 200 words, at 18 percent
  • Gen Xers have a strong preference for articles over 500 words compared to the other generations: over 20 percent of Gen X respondents favor long-form articles, while only 15 percent of Baby Boomers and Millennials share the same sentiment.
  • Gen Xers also prefer short articles the least, with less than 10 percent preferring articles under 200 words

Content Length Preferences

However, when it comes to verticals, or the genres in which they consume content, each generation has its own unique preferences:

  • Baby Boomers have a comfortable lead in world news and politics, at 18 percent and 12 percent, respectively
  • Millennials hold a strong lead in technology, at 18 percent, while Baby Boomers come in at 10 percent in the same category
  • Gen Xers fall between Millennials and Baby Boomers in most verticals, although they have slight leads in personal finance, parenting, and healthy living
  • Although entertainment is the top genre for each generation, Millennials and Baby Boomers prefer it slightly more than Gen Xers do

Favorite Content Genres

3. Facebook is the preferred content sharing platform across all three generations

Facebook remains king in terms of content sharing, and is used by about 60 percent of respondents in each generation studied. Surprisingly, YouTube came in second, followed by Twitter, Google+, and LinkedIn, respectively. Additional findings:

  • Baby Boomers share on Facebook the most, edging out Millennials by only a fraction of a percent
  • Although Gen Xers use Facebook slightly less than other generations, they lead in both YouTube and Twitter, at 15 percent and 10 percent, respectively
  • Google+ is most popular with Baby Boomers, at 8 percent, nearly double that of both Gen Xers and Millennials

Preferred Social Platform

Although a majority of each generation shares content on Facebook, the type of content they share, especially visual content, varies by age group. The oldest generation prefers more traditional content, such as images and videos. Millennials prefer newer content types, such as memes and GIFs, while Gen X predictably falls between the two in every category except SlideShares. Other findings:

  • The most popular content type for Baby Boomers is video, at 27 percent
  • Parallax is the least popular type for every generation, earning 1 percent or less in each age group
  • Millennials share memes the most, while less than 10 percent of Baby Boomers share similar content

Most Shared Visual Content

Marketing to several generations can be challenging, given the different values and ideas that resonate with each group. With the number of online content consumers growing daily, it’s essential for marketers to understand the specific types of content each of their audiences connects with, and to align their content marketing strategy accordingly.

Although there is no one-size-fits-all campaign, successful marketers can create content that multiple generations will want to share. If you need more information to get started, you can review this deck of additional insights, which includes the preferred video length and weekend consumption habits of each generation discussed in this post.


Reblogged 4 years ago from tracking.feedpress.it

12 Common Reasons Reconsideration Requests Fail

Posted by Modestos

There are several reasons a reconsideration request might fail. But some of the most common mistakes site owners and inexperienced SEOs make when trying to lift a link-related Google penalty are entirely avoidable. 

Here’s a list of the top 12 most common mistakes made when submitting reconsideration requests, and how you can prevent them.

1. Insufficient link data

This is one of the most common reasons why reconsideration requests fail. This mistake is readily evident each time a reconsideration request gets rejected and the example URLs provided by Google are unknown to the webmaster. Relying only on Webmaster Tools data isn’t enough, as Google has repeatedly said. You need to combine data from as many different sources as possible. 

A good starting point is to collate backlink data, at the very least:

  • Google Webmaster Tools (both latest and sample links)
  • Bing Webmaster Tools
  • Majestic SEO (Fresh Index)
  • Ahrefs
  • Open Site Explorer

If you use any toxic link-detection services (e.g., Linkrisk and Link Detox), then you need to take a few precautions to ensure the following:

  • They are 100% transparent about their backlink data sources
  • They have imported all backlink data
  • You can upload your own backlink data (e.g., Webmaster Tools) without any limitations

If you work on large websites that have tons of backlinks, most of these automated services are likely to process just a fraction of the links, unless you pay for one of their premium packages. If you have direct access to the above data sources, it’s worthwhile to download all backlink data, then manually upload it into your tool of choice for processing. This is the only way to have full visibility over the backlink data that has to be analyzed and reviewed later. Starting with an incomplete data set at this early (yet crucial) stage could seriously hinder the outcome of your reconsideration request.

2. Missing vital legacy information

The more you know about a site’s history and past activities, the better. You need to find out (a) which pages were targeted in the past as part of link building campaigns, (b) which keywords were the primary focus and (c) the link building tactics that were scaled (or abused) most frequently. Knowing enough about a site’s past activities, before it was penalized, can help you home in on the actual causes of the penalty. Also, collect as much information as possible from the site owners.

3. Misjudgement

Misreading your current situation can lead to wrong decisions. One common mistake is to treat the example URLs provided by Google as gospel and try to identify only links with the same patterns. Google provides a very small number of examples of unnatural links. Often, these examples are the most obvious and straightforward ones. However, you should look beyond these examples to fully address the issues and take the necessary actions against all types of unnatural links. 

Google is very clear on the matter: “Please correct or remove all inorganic links, not limited to the samples provided above.”

Another common area of bad judgement is the inability to correctly identify unnatural links. This is a skill that requires years of experience in link auditing, as well as link building. Removing the wrong links won’t lift the penalty, and may also result in further ranking drops and loss of traffic. You must remove the right links.


4. Blind reliance on tools

There are numerous unnatural link-detection tools available on the market, and over the years I’ve had the chance to try out most (if not all) of them. Because (and without any exception) I’ve found them all very ineffective and inaccurate, I do not rely on any such tools for my day-to-day work. In some cases, a lot of the reported “high risk” links were 100% natural links, and in others, numerous toxic links were completely missed. If you have to manually review all the links to discover the unnatural ones, ensuring you don’t accidentally remove any natural ones, it makes no sense to pay for tools. 

If you solely rely on automated tools to identify the unnatural links, you will need a miracle for your reconsideration request to be successful. The only tool you really need is a powerful backlink crawler that can accurately report the current link status of each URL you have collected. You should then manually review all currently active links and decide which ones to remove. 

I could write an entire book on the numerous flaws and bugs I have come across each time I’ve tried some of the most popular link auditing tools. A lot of these issues can be detrimental to the outcome of the reconsideration request. I have seen many reconsideration requests fail because of this. If Google cannot algorithmically identify all unnatural links and must operate entire teams of humans to review the sites (and their links), you shouldn’t trust a $99/month service to identify the unnatural links.

If you have an in-depth understanding of Google’s link schemes, you can build your own process to prioritize which links are more likely to be unnatural, as I described in this post (see sections 7 & 8). In an ideal world, you should manually review every single link pointing to your site. Where this isn’t possible (e.g., when dealing with an enormous number of links, or when resources are unavailable), you should at least focus on the links that show the most “unnatural” signals and manually review them.

5. Not looking beyond direct links

When trying to lift a link-related penalty, you need to look into all the links that may be pointing to your site directly or indirectly. Such checks include reviewing links pointing to other sites you own that have been redirected to your site, legacy URLs with external inbound links that have been internally redirected, and third-party sites that include cross-domain canonicals to your site. For sites that used to buy and redirect domains in order to increase their rankings, the quickest solution is to get rid of the redirects. Both Majestic SEO and Ahrefs report redirects, but some manual digging usually reveals a lot more.


6. Not looking beyond the first link

All major link intelligence tools, including Majestic SEO, Ahrefs and Open Site Explorer, report only the first link pointing to a given site when crawling a page. This means that, if you rely on automated tools to identify links with commercial keywords, they will only take into consideration the first link they discover on a page. If a page on the web links just once to your site, this is not a big deal. But if there are multiple links, the tools will miss all but the first one.

For example, if a page has five different links pointing to your site, and the first one includes branded anchor text, these tools will report just that first link. Most of the link-auditing tools will in turn evaluate the link as “natural” and completely miss the other four links, some of which may contain manipulative anchor text. The more links that get missed this way, the more likely it is that your reconsideration request will fail.
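
To make the failure mode concrete, here is a contrived snippet (the example.com URLs are placeholders) of a page that links to the same site five times. A tool that records only the first link would report the branded anchor and never see the keyword-rich links that follow it:

<p>Read the full review at <a href="http://example.com/">Example Brand</a>.</p>
<p>Related offers:
  <a href="http://example.com/loans">cheap payday loans</a>,
  <a href="http://example.com/loans/rates">best loan rates</a>,
  <a href="http://example.com/insurance">cheap car insurance</a> and
  <a href="http://example.com/insurance/quotes">low-cost insurance quotes</a>.
</p>

Only a crawler that records every outgoing link on a page, rather than just the first, will surface the four manipulative anchors.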

7. Going too thin

Many SEOs and webmasters (still) feel uncomfortable with the idea of losing links. They cannot accept that links which once helped their rankings are now being devalued and must be removed. There is no point trying to save “authoritative,” unnatural links out of fear of losing rankings. If the main objective is to lift the penalty, then all unnatural links need to be removed.

Often, in the first reconsideration request, SEOs and site owners tend to go too thin, and in the subsequent attempts start cutting deeper. If you are already aware of the unnatural links pointing to your site, try to get rid of them from the very beginning. I have seen examples of unnatural links provided by Google on PR 9/DA 98 sites. Metrics do not matter when it comes to lifting a penalty. If a link is manipulative, it has to go.

In any case, Google’s decision won’t be based only on the number of links that have been removed. Most important in the search giant’s eyes is the quality of the links still pointing to your site. If the remaining links are largely of low quality, the reconsideration request will almost certainly fail.

8. Insufficient effort to remove links

Google wants to see a “good faith” effort to get as many links removed as possible. The higher the percentage of unnatural links removed, the better. Some agencies and SEO consultants tend to rely too much on the disavow tool. However, it isn’t a panacea; it should be used as a last resort, for those links that are impossible to remove, and only after exhausting all possibilities to physically remove them via the time-consuming (yet necessary) outreach route.

Google is very clear on this:

[Screenshot of Google’s guidance on this point]

Even if you’re unable to remove all of the links that need to be removed, you must be able to demonstrate that you’ve made several attempts to have them removed, which can have a favorable impact on the outcome of the reconsideration request. Yes, in some cases it might be possible to have a penalty lifted simply by disavowing instead of removing the links, but these cases are rare and this strategy may backfire in the future. When I reached out to ex-Googler Fili Wiese for some advice on the value of removing the toxic links (instead of just disavowing them), his response was very straightforward:

[Screenshot of Fili Wiese’s response]

9. Ineffective outreach

Simply identifying the unnatural links won’t get the penalty lifted unless a decent percentage of the links have been successfully removed. The more communication channels you try, the more likely it is that you reach the webmaster and get the links removed. Sending the same email hundreds or thousands of times is highly unlikely to result in a decent response rate. Trying to remove a link from a directory is very different from trying to get rid of a link appearing in a press release, so you should take a more targeted approach with a well-crafted, personalized email. Link removal request emails must be honest and to the point, or else they’ll be ignored.

Tracking the emails will also help in figuring out which messages have been read, which webmasters might be worth contacting again, and when you need to try an alternative means of contacting a webmaster.

Creativity, too, can play a big part in the link removal process. For example, it might be necessary to use social media to reach the right contact. Again, don’t trust automated emails or contact form harvesters. In some cases, these applications will pull in any email address they find on the crawled page (without any guarantee of who the information belongs to). In others, they will completely miss masked email addresses or those appearing in images. If you really want to see that the links are removed, outreach should be carried out by experienced outreach specialists. Unfortunately, there aren’t any shortcuts to effective outreach.

10. Quality issues and human errors

All sorts of human errors can occur when filing a reconsideration request. The most common errors include submitting files that do not exist, files that do not open, files that contain incomplete data, and files that take too long to load. You need to triple-check that the files you are including in your reconsideration request are read-only, and that anyone with the URL can fully access them. 

Poor grammar and language is also bad practice, as it may be interpreted as “poor effort.” You should definitely get the reconsideration request proofread by a couple of people to be sure it is flawless. A poorly written reconsideration request can significantly hinder your overall efforts.

Quality issues can also occur with the disavow file submission. Disavowing at the URL level isn’t recommended because the link(s) you want to get rid of are often accessible to search engines via several URLs you may be unaware of. Therefore, it is strongly recommended that you disavow at the domain or sub-domain level.
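
To illustrate (the domains below are made up), a domain-level disavow file is simply a plain-text list of domain: entries, optionally annotated with comments documenting your removal attempts:

# Outreach emails sent in March and April; no response received
domain:spammy-article-directory.example
domain:low-quality-bookmark-site.example

# Webmaster asked for payment to remove the links
domain:paid-links-blog.example

A domain: entry covers every URL on that host, so you won’t miss duplicate or parameterized URLs that happen to link to your site.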

11. Insufficient evidence

How does Google know you have done everything you claim in your reconsideration request? Because you have to prove each claim is valid, you need to document every single action you take, from sent emails and submitted forms, to social media nudges and phone calls. The more information you share with Google in your reconsideration request, the better. This is the exact wording from Google:

“ …we will also need to see good-faith efforts to remove a large portion of inorganic links from the web wherever possible.”

12. Bad communication

How you communicate your link cleanup efforts is as essential as the work you are expected to carry out. Not only do you need to explain the steps you’ve taken to address the issues, but you also need to share supportive information and detailed evidence. The reconsideration request is the only chance you have to communicate to Google which issues you have identified, and what you’ve done to address them. Being honest and transparent is vital for the success of the reconsideration request.

There is absolutely no point using the space in a reconsideration request to argue with Google. Some of the unnatural link examples they share may not always be useful (e.g., URLs that include nofollow links, removed links, or even no links at all). But taking an argumentative approach virtually guarantees your request will be denied.

Cropped from photo by Keith Allison, licensed under Creative Commons.

Conclusion

Getting a Google penalty lifted requires a good understanding of why you have been penalized, a flawless process and a great deal of hands-on work. Performing link audits for the purpose of lifting a penalty can be very challenging, and should only be carried out by experienced consultants. If you are not 100% sure you can take all the required actions, seek out expert help rather than looking for inexpensive (and ineffective) automated solutions. Otherwise, you will almost certainly end up wasting weeks or months of your precious time, and in the end, see your request denied.


Reblogged 4 years ago from moz.com

Feed the Hummingbird: Structured Markup Isn’t the Only Way to Talk to Google

Posted by Cyrus-Shepard

I used to laugh at the idea of Hummingbird optimization.

In a recent poll, Moz asked nearly 300 marketers which Google update affected their traffic the most. Penguin and Panda were first and second, followed by Hummingbird in a distant third.

Unsurprising, because unlike Panda and Penguin, Hummingbird doesn’t specifically combat webspam.

Ever wonder why Google named certain algorithms after black and white animals (i.e., black hat vs. white hat)? Hummingbird is a broader algorithm altogether, and hummingbirds can be any color of the rainbow.

One aspect of Hummingbird is a better understanding of your content, not just specific SEO tactics.

Hummingbird also represents an evolutionary step in entity-based search that Google has worked on for years, and it will continue to evolve. In a way, optimizing for entity search is optimizing for search itself.

Many SEOs limit their understanding of entity search to vague concepts of structured data, Schema.org, and Freebase. They fall into the trap of thinking that the only way to participate in the entity SEO revolution is to mark up your HTML with complex schema.org microdata.

Not true.

Don’t misunderstand; schema.org and structured data are awesome. If you can implement structured data on your website, you should. Structured data is precise, can lead to enhanced search snippets, and helps search engines to understand your content. But Schema.org and classic structured data vocabularies also have key shortcomings:

  1. Schema types are limited. Structured data is great for people, products, places, and events, but these cover only a fraction of the entire content of the web. Many of us mark up our content using Article schema (a minimal example follows this list), but this falls well short of describing the hundreds of possible entity associations within the text itself.
  2. Markup is difficult. Realistically, in a world where it’s sometimes difficult to get authors to write a title tag or get engineers to attach an alt attribute to an image, implementing proper structured data to source HTML can be a daunting task.
  3. Adoption is low. A study last year of 2.4 billion web pages showed less than 25% contained structured data markup. A recent SearchMetrics study showed even less adoption, with only 0.3% of websites out of over 50 million domains using Schema.org.
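
As a minimal sketch of that Article markup (all of the values below are placeholders), the microdata looks something like this:

<article itemscope itemtype="http://schema.org/Article">
  <h1 itemprop="headline">Example Headline</h1>
  <span itemprop="author">Jane Doe</span>
  <time itemprop="datePublished" datetime="2014-05-20">May 20, 2014</time>
  <div itemprop="articleBody">The article text, full of entities the markup says nothing about...</div>
</article>

Useful, but notice how little it says about the people, places, and concepts mentioned inside the article body itself.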

This presents a challenge for search engines, which want to understand entity relationships across the entire web – not simply the parts we choose to mark up.

In reality, search engines have worked for over 10 years – since the early days of Google – at extracting entities from our content without the use of complex markup.

How search engines understand relationships without markup

Here’s a simple explanation of a complex subject. 

Search engines can structure your content using the concept of triples. This means organizing keywords into a framework of subject-predicate-object.

Structured data frameworks like schema.org work great because they automatically classify information into a triple format. Take this example from Schema.org.

<div itemscope itemtype="http://schema.org/Movie">
  <h1 itemprop="name">Avatar</h1>
  <span>Director: <span itemprop="director">James Cameron</span> (born August 16, 1954)</span>
  <span itemprop="genre">Science fiction</span>
  <a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
</div>

Extracting the triples from this code sample would yield:

  • Subject: Avatar (Movie)
  • Predicate: Has Director
  • Object: James Cameron

The challenge is: Can search engines extract this information for the 90%+ of your content that isn’t marked up with structured data? 

Yes, they can.

Triples, triples everywhere

Ask Google a question like who is the president of Harvard or how many astronauts walked on the moon, and Google will often answer from a page with no structured data present.

Consider this query for the ideal length of a title tag.

Google is able to extract the semantic meaning from this page even though the properties of “length” and its value of 50-60 characters are not structured using classic schema.org markup.

Matt Cutts recently revealed that Google uses over 500 algorithms. That means 500 algorithms that layer, filter and interact in different ways. The evidence indicates that Google has many techniques of extracting entity and relationship data that may work independent of each other.

Regardless of whether you are a master of schema.org or not, here are some tips for communicating entity and relationship signals within your content.

1. Keywords

Yes, good old fashioned keywords.

Even without structured markup, search engines have the ability to parse keywords into their respective structure. 

But keywords by themselves only go so far. In order for this method to work, your keywords must be accompanied by appropriate predicates and objects. In other words, your sentences provide fuel to search engines when they contain detailed information with clear subjects and organization.

Consider this example of the relationships extracted from our title tag page by AlchemyAPI:

Entities Extracted via AlchemyAPI

There’s evidence Google has worked on this technology for over 10 years, ever since it acquired the company Applied Semantics in 2003.

For a deeper understanding, Bill Slawski wrote an excellent piece on Google’s ability to extract relationship meaning from text, and AJ Kohn offers excellent advice on optimizing for Google’s Knowledge Graph.

2. Tables and HTML elements

This is old school SEO that folks today often forget.

HTML (and HTML5), by default, provide structure to webpages that search engines can extract. By using lists, tables, and proper headings, you organize your content in a way that makes sense to robots. 

In the example below, the technology exists for search engines to easily extract structured relationship data about US President John Adams from this Wikipedia table.
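
As a simplified sketch of that kind of table (the rows below are a hand-picked subset of the facts a Wikipedia infobox carries), plain HTML with a caption and row headers makes the relationships explicit without any schema.org markup:

<table>
  <caption>John Adams</caption>
  <tr><th scope="row">Office</th><td>2nd President of the United States</td></tr>
  <tr><th scope="row">Preceded by</th><td>George Washington</td></tr>
  <tr><th scope="row">Succeeded by</th><td>Thomas Jefferson</td></tr>
  <tr><th scope="row">Born</th><td>October 30, 1735</td></tr>
</table>

Each row reads naturally as a triple: John Adams (subject), succeeded by (predicate), Thomas Jefferson (object).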

The goal isn’t to get into Google’s Knowledge Graph (which is exclusive to Wikipedia and Freebase). Instead, the objective is to structure your content in a way that makes the most sense and makes the relationships between words and concepts clear.

For a deeper exploration, Bill Slawski has another excellent write up exploring many different techniques search engines can use to extract structured data from HTML-based content.

3. Entities and synonyms

What do you call the President of the United States? How about:

  • Barack Obama
  • POTUS (President Of The United States)
  • Commander in Chief
  • Michelle Obama’s Husband
  • First African American President

In truth, all of these apply to the same entity, even though searchers will look for them in different ways. If you wanted to make clear what exactly your content was about (which president?), two common techniques would be to include:

  1. Synonyms of the subject: We mean the President of the United States → Barack Obama → Commander in Chief → Michelle Obama’s Husband
  2. Co-occurring phrases: If we’re talking about Barack Obama, we’re more likely to include phrases like Honolulu (his place of birth), Harvard (his college), 44th (he is the 44th president), and even Bo (his dog). This helps specify exactly which president we mean, and goes way beyond the individual keyword itself. (A short example passage follows this list.)
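
Putting both techniques together, a contrived passage might read:

<p>Barack Obama, the 44th President of the United States and Commander in Chief,
was born in Honolulu and studied law at Harvard before moving into the White House
with Michelle Obama and, eventually, the family dog Bo.</p>

A single paragraph like this carries both the synonyms and the co-occurring phrases that make the entity unambiguous.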

entities and synonyms for SEO

Using synonyms and entity association also has the benefit of appealing to broader searcher intent. A recent case study by Cognitive SEO demonstrated this by showing significant gains after adding semantically related synonyms to their content.

4. Anchor text and links

Links are the original relationship connector of the web.

Bill Slawski (again, because he is an SEO god) writes about one method Google might use to identify synonyms for entities using anchor text. It appears Google also uses anchor text in far more sophisticated ways.

When looking at Google answer box results, you almost always find related keyword-rich anchor text pointing to the referenced URL. Ask Google “How many people walked on the moon?” and you’ll see these words in the anchor text that points to the URL Google displays as the answer.

Other queries:

Anchor text of Google's Answer Box URL

In these examples and more that I researched, matching anchor text was present every time in addition to the relevant information and keywords on the page itself.

Additionally, there seems to be an indication that internal anchor text might also influence these results.

This is another argument to avoid generic anchor text like “click here” and “website.” Descriptive and clear anchor text, without overdoing it, provides a wealth of information for search engines to extract meaning from.
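
As a small sketch (the URL is hypothetical), compare a generic anchor with a descriptive one:

<!-- Says nothing about the target page -->
<a href="http://example.com/title-tag-guide">click here</a>

<!-- Describes what the target page is about -->
<a href="http://example.com/title-tag-guide">ideal title tag length</a>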

5. Leverage Google Local

For local business owners, the easiest and perhaps most effective way to establish structured relationships is through Google Local. The entire interface is like a structured data dashboard without Schema.org.

When you consider all the data you can upload both in Google+ and even Moz Local, the possibilities for mapping your business data are fairly complete in the local search sense.

In case you missed it, last week Google introduced Google My Business, which makes maintaining your listings even easier.

6. Google Structured Data Highlighter

Sometimes, structured data is still the way to go.

In times when you have trouble adding markup to your HTML, Google offers its Structured Data Highlighter tool. This allows you to tell Google how your data should be structured, without actually adding any code.

The tool uses a type of machine learning to understand what type of schema applies to your pages, up to thousands at a time. No special skills or coding required.

Google Webmaster Structured Data Highlighter

Although the Structured Data Highlighter is both easy and fun, the downsides are:

  1. The data is only available to Google. Other search engines can’t see it.
  2. Markup types are limited to a few major top categories (Articles, Events, etc.)
  3. If your HTML changes even a little, the tool can break.

Even though it’s simple, the Structured Data Highlighter should only be used when it’s impossible to add actual markup to your site. It’s not a substitution for the real thing.

7. Plugins

For pure schema.org markup, depending on the CMS you use, there’s often a multitude of plugins to make the job easier.

If you’re a WordPress user, your options are many.

Looking forward

If you have a chance to add Schema.org (or any other structured data) to your site, this will help you earn those coveted SERP enhancements that may help with click-through rate, and may help search engines better understand your content.

That said, semantic understanding of the web goes far beyond rich snippets. Helping search engines to better understand all of your content is the job of the SEO. Even without Hummingbird, these are exactly the types of things we want to be doing.

It’s not “create content and let the search engines figure it out.” It’s “create great content with clues and proper signals to help the search engines figure it out.” 

If you do the latter, you’re far ahead in the game.


Reblogged 5 years ago from feedproxy.google.com