An Open-Source Tool for Checking rel-alternate-hreflang Annotations

Posted by Tom-Anthony

In the Distilled R&D department we have been ramping up the amount of automated monitoring and analysis we do, with an internal system monitoring our clients’ sites both directly and via various data sources to ensure they remain healthy and that we are alerted to any problems that may arise.

Recently we started work to add functionality for checking rel-alternate-hreflang annotations to this system. In this blog post I’m going to share an open-source Python library we’ve just started work on for this purpose, which makes it easy to read the hreflang entries from a page and identify errors with them.

If you’re not a Python aficionado then don’t despair, as I have also built a ready-to-go tool for you to use, which will quickly do some checks on the hreflang entries for any URL you specify. 🙂

Google’s Search Console (formerly Webmaster Tools) does have some basic rel-alternate-hreflang checking built in, but it is limited in how you can use it and you are restricted to using it for verified sites.

rel-alternate-hreflang checklist

Before we introduce the code, I wanted to quickly review a list of five easy and common mistakes that we will want to check for when looking at rel-alternate-hreflang annotations:

  • return tag errors – Every alternate language/locale URL of a page should, itself, include a link back to the first page. This makes sense but I’ve seen people make mistakes with it fairly often.
  • indirect / broken links – Links to alternate language/region versions of the page should not go via redirects, and should not link to missing or broken pages.
  • multiple entries – There should never be multiple entries for a single language/region combo.
  • multiple defaults – You should never have more than one x-default entry.
  • conflicting modes – rel-alternate-hreflang entries can be implemented via inline HTML, XML sitemaps, or HTTP headers. For any one set of pages only one implementation mode should be used.

So now imagine that we want to automate these checks quickly and easily…
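To give a sense of what’s involved, here is a bare-bones sketch of the return-tag check done by hand with the requests and lxml libraries (the URL and helper names are purely illustrative; polly, introduced below, packages this kind of logic up for you):

import requests
from lxml import html

def hreflang_entries(url):
    # Map each hreflang value to its alternate URL, reading only inline <link> annotations.
    tree = html.fromstring(requests.get(url, timeout=10).content)
    links = tree.xpath('//link[@rel="alternate"][@hreflang]')
    return {link.get('hreflang'): link.get('href') for link in links}

def missing_return_tags(url):
    # Flag alternate URLs that don't link back to the original page, or can't be fetched.
    problems = []
    for hreflang, alternate_url in hreflang_entries(url).items():
        if not alternate_url or alternate_url == url:
            continue
        try:
            return_urls = hreflang_entries(alternate_url).values()
        except requests.RequestException:
            problems.append((alternate_url, "not retrievable"))
            continue
        if url not in return_urls:
            problems.append((alternate_url, "no return tag"))
    return problems

print(missing_return_tags("http://www.example.com/"))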

Introducing: polly – the hreflang checker library

polly is the name for the library we have developed to help us solve this problem, and we are releasing it as open source so the SEO community can use it freely to build upon. We only started work on it last week, but we plan to continue developing it, and will also accept contributions to the code from the community, so we expect its feature set to grow rapidly.

If you are not comfortable tinkering with Python, then feel free to skip down to the next section of the post, where there is a tool that is built with polly which you can use right away.

Still here? Ok, great. You can install polly easily via pip:

pip install polly

You can then create a PollyPage() object, which will do all the work and store the data, simply by instantiating the class with the desired URL:

my_page = PollyPage("http://www.facebook.com/")

You can quickly see the hreflang entries on the page by running:

print my_page.alternate_urls_map

You can list all the hreflang values encountered on a page, and which countries and languages they cover:

print my_page.hreflang_values
print my_page.languages
print my_page.regions

You can also check various aspects of a page, such as whether the pages it includes in its rel-alternate-hreflang entries point back, or whether there are entries that are not retrievable (due to 404, 500, or other errors):

print my_page.is_default
print my_page.no_return_tag_pages()
print my_page.non_retrievable_pages()
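Putting those calls together, a quick automated check might look something like this rough sketch (the import path is an assumption, so check the GitHub README for the definitive usage):

from polly import PollyPage   # exact import path may differ; see the polly README

def check_hreflang(url):
    # Instantiate the page object, then run the checks highlighted above.
    page = PollyPage(url)
    report = {
        "hreflang values": page.hreflang_values,
        "missing return tags": page.no_return_tag_pages(),
        "non-retrievable pages": page.non_retrievable_pages(),
    }
    for check, result in report.items():
        print(check, result)
    return report

check_hreflang("http://www.facebook.com/")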

Get more instructions and grab the code at the polly github page. Hit me up in the comments with any questions.

Free tool: hreflang.ninja

I have put together a very simple tool that uses polly to run some of the checks we highlighted above as being common mistakes with rel-alternate-hreflang, which you can visit right now and start using:

http://hreflang.ninja

Simply enter a URL and hit enter, and you should see something like:

Example output from the ninja!

The tool shows you the rel-alternate-hreflang entries found on the page, the language and region of those entries, the alternate URLs, and any errors identified with the entry. It is perfect for doing quick’n’dirty checks of a URL to identify any errors.

As we add additional functionality to polly we will be updating hreflang.ninja as well, so please tweet me with feature ideas or suggestions.

To-do list!

This is the first release of polly and currently we only handle annotations that are in the HTML of the page, not those in the XML sitemap or HTTP headers. However, we are going to be updating polly (and hreflang.ninja) over the coming weeks, so watch this space! 🙂

Resources

Here are a few links you may find helpful for hreflang:

Got suggestions?

With the increasing number of SEO directives and annotations available, and the ever-changing guidelines around how to deploy them, it is important to automate whatever areas we can. Hopefully polly is helpful to the community in this regard, and we want to hear what ideas you have for making these tools more useful – here in the comments or via Twitter.


5 Spreadsheet Tips for Manual Link Audits

Posted by MarieHaynes

Link auditing is the part of my job that I love the most. I have audited a LOT of links over the last few years. While there are some programs out there that can be quite helpful to the avid link auditor, I still prefer to create a spreadsheet of my links in Excel and then to audit those links one-by-one from within Google Spreadsheets. Over the years I have learned a few tricks and formulas that have helped me in this process. In this article, I will share several of these with you.

Please know that while I am quite comfortable being labelled a link auditing expert, I am not an Excel wizard. I am betting that some of the things that I am doing could be improved upon if you’re an advanced user. As such, if you have any suggestions or tips of your own I’d love to hear them in the comments section!

1. Extract the domain or subdomain from a URL

OK. You’ve downloaded links from as many sources as possible and now you want to manually visit and evaluate one link from every domain. But, holy moly, some of these domains can have THOUSANDS of links pointing to the site. So, let’s break these down so that you are just seeing one link from each domain. The first step is to extract the domain or subdomain from each url.

I am going to show you examples from a Google spreadsheet as I find that these display nicer for demonstration purposes. However, if you’ve got a fairly large site, you’ll find that the spreadsheets are easier to create in Excel. If you’re confused about any of these steps, check out the animated gif at the end of each step to see the process in action.

Here is how you extract a domain or subdomain from a url:

  • Create a new column to the left of your url column.
  • Use this formula:

    =LEFT(B1,FIND("/",B1,9)-1)

    What this will do is remove everything from the first slash after the domain name onward. http://www.example.com/article.html will now become http://www.example.com and http://www.subdomain.example.com/article.html will now become http://www.subdomain.example.com.

  • Copy our new column A and paste it right back where it was using the “paste as values” function. If you don’t do this, you won’t be able to use the Find and Replace feature.
  • Use Find and Replace to replace each of the following with a blank (i.e. nothing):
    http://
    https://
    www.

And BOOM! We are left with a column that contains just domain names and subdomain names. This animated gif shows each of the steps we just outlined:

2. Just show one link from each domain

The next step is to filter this list so that we are just seeing one link from each domain. If you are manually reviewing links, there’s usually no point in reviewing every single link from every domain. I will throw in a word of caution here though. Sometimes a domain can have both a good link and a bad link pointing to you. Or in some cases, you may find that links from one page are followed and from another page on the same site they are nofollowed. You can miss some of these by just looking at one link from each domain. Personally, I have some checks built in to my process where I use Scrapebox and some internal tools that I have created to make sure that I’m not missing the odd link by just looking at one link from each domain. For most link audits, however, you are not going to miss very much by assessing one link from each domain.

Here’s how we do it:

  • Highlight our domains column and sort the column in alphabetical order.
  • Create a column to the left of our domains, so that the domains are in column B.
  • Use this formula:

    =IF(B1=B2,"duplicate","unique")

  • Copy that formula down the column.
  • Use the filter function so that you are just seeing the duplicates.
  • Delete those rows. Note: If you have tens of thousands of rows to delete, the spreadsheet may crash. A workaround here is to use “Clear Rows” instead of “Delete Rows” and then sort your domains column from A-Z once you are finished.

We’ve now got a list of one link from every domain linking to us.

Here’s the gif that shows each of these steps:

You may wonder why I didn’t use Excel’s dedupe function to simply deduplicate these entries. I have found that it doesn’t take much deduplication to crash Excel, which is why I do this step manually.
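If your export is big enough to make Excel or Google Sheets struggle, the same two steps can also be scripted. Here is a minimal pandas sketch under a few assumptions (a CSV export with a "url" column; the file names are placeholders):

import pandas as pd
from urllib.parse import urlparse

def to_domain(url):
    # Keep just the domain/subdomain, mirroring the LEFT/FIND formula and the "www." find-and-replace.
    netloc = urlparse(url).netloc
    return netloc[4:] if netloc.startswith("www.") else netloc

links = pd.read_csv("links.csv")                      # assumed export with a "url" column
links["domain"] = links["url"].apply(to_domain)

# Step 2: keep one link per domain instead of sorting and deleting duplicates by hand.
one_per_domain = links.drop_duplicates(subset="domain", keep="first")
one_per_domain.to_csv("one_link_per_domain.csv", index=False)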

3. Finding patterns FTW!

Sometimes when you are auditing links, you’ll find that unnatural links have patterns. I LOVE when I see these, because sometimes I can quickly go through hundreds of links without having to check each one manually. Here is an example. Let’s say that your website has a bunch of spammy directory links. As you’re auditing you notice patterns such as one of these:

  • All of these directory links come from a url that contains …/computers/internet/item40682/
  • A whole bunch of spammy links that all come from a particular free subdomain like blogspot, wordpress, weebly, etc.
  • A lot of links that all contain a particular keyword for anchor text (this is assuming you’ve included anchor text in your spreadsheet when making it.)

You can quickly find all of these links and mark them as “disavow” or “keep” by doing the following:

  • Create a new column. In my example, I am going to create a new column in Column C and look for patterns in urls that are in Column B.
  • Use this formula:

    =FIND("/item40682",B1)
    (You would replace “item40682” with the phrase that you are looking for.)

  • Copy this formula down the column.
  • Filter your new column so that you are seeing any rows that have a number in this column. If the phrase doesn’t exist in that URL, the formula will return an error instead of a number, and we can ignore those.
  • Now you can mark these all as disavow.
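Scripted, the same pattern check is a single filter. Continuing from the pandas sketch above, with "/item40682" standing in for whatever footprint you have spotted:

# Rows whose URL contains the pattern; mark them for the disavow list.
pattern_hits = one_per_domain["url"].str.contains("/item40682", na=False, regex=False)
one_per_domain.loc[pattern_hits, "action"] = "disavow"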

4. Check your disavow file

This next tip is one that you can use to check your disavow file across your list of domains that you want to audit. The goal here is to see which links you have disavowed so that you don’t waste time reassessing them. This particular tip only works for checking links that you have disavowed on the domain level.

The first thing you’ll want to do is download your current disavow file from Google. For some strange reason, Google gives you the disavow file in CSV format. I have never understood this because they want you to upload the file in .txt. Still, I guess this is what works best for Google. All of your entries will be in column A of the CSV:

What we are going to do now is add these to a new sheet on our current spreadsheet and use a VLOOKUP function to mark which of our domains we have disavowed.

Here are the steps:

  • Create a new sheet on your current spreadsheet workbook.
  • Copy and paste column A from your disavow spreadsheet onto this new sheet. Or, alternatively, use the import function to import the entire CSV onto this sheet.
  • In B1, write “previously disavowed” and copy this down the entire column.
  • Remove the “domain:” from each of the entries by doing a Find and Replace to replace domain: with a blank.
  • Now go back to your link audit spreadsheet. If your domains are in column A and if you had, say, 1500 domains in your disavow file, your formula would look like this:

    =VLOOKUP(A1,Sheet2!$A$1:$B$1500,2,FALSE)

When you copy this formula down the spreadsheet, it will check each of your domains, and if it finds the domain in Sheet 2, it will write “previously disavowed” on our link audit spreadsheet.

Here is a gif that shows the process:
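If you would rather script this cross-check too, it boils down to a membership test. Here is a sketch continuing from the earlier pandas example, assuming a disavow file saved as text with one "domain:example.com" entry per line:

# Load the disavow file (assumed filename) and strip the "domain:" prefix.
with open("disavow.txt") as disavow_file:
    disavowed = {line.strip().replace("domain:", "", 1)
                 for line in disavow_file if line.strip().startswith("domain:")}

# Equivalent of the VLOOKUP: flag domains we have already disavowed.
one_per_domain["previously_disavowed"] = one_per_domain["domain"].isin(disavowed)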

5. Make monthly or quarterly disavow work easier

That same formula described above is a great one to use if you are doing regular repeated link audits. In this case, your second sheet on your spreadsheet would contain domains that you have previously audited, and column B of this sheet would say “previously audited” rather than “previously disavowed.”

Your tips?

These are just a few of the formulas that you can use to help make link auditing work easier. But there are lots of other things you can do with Excel or Google Sheets to help speed up the process as well. If you have some tips to add, leave a comment below. Also, if you need clarification on any of these tips, I’m happy to answer questions in the comments section.


How to Use Server Log Analysis for Technical SEO

Posted by SamuelScott

It’s ten o’clock. Do you know where your logs are?

I’m introducing this guide with a pun on a common public-service announcement that has run on late-night TV news broadcasts in the United States because log analysis is something that is extremely newsworthy and important.

If your technical and on-page SEO is poor, then nothing else that you do will matter. Technical SEO is the key to helping search engines to crawl, parse, and index websites, and thereby rank them appropriately long before any marketing work begins.

The important thing to remember: Your log files contain the only data that is 100% accurate in terms of how search engines are crawling your website. By helping Google to do its job, you will set the stage for your future SEO work and make your job easier. Log analysis is one facet of technical SEO, and correcting the problems found in your logs will help to lead to higher rankings, more traffic, and more conversions and sales.

Here are just a few reasons why:

  • Too many response code errors may cause Google to reduce its crawling of your website and perhaps even your rankings.
  • You want to make sure that search engines are crawling everything, new and old, that you want to appear and rank in the SERPs (and nothing else).
  • It’s crucial to ensure that all URL redirections will pass along any incoming “link juice.”

However, log analysis is something that is unfortunately discussed all too rarely in SEO circles. So, here, I wanted to give the Moz community an introductory guide to log analytics that I hope will help. If you have any questions, feel free to ask in the comments!

What is a log file?

Computer servers, operating systems, network devices, and computer applications automatically generate something called a log entry whenever they perform an action. In an SEO and digital marketing context, one type of action is whenever a page is requested by a visiting bot or human.

Server log entries are typically output in the standardized Common Log Format. Here is one example from Wikipedia with my accompanying explanations:

127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
  • 127.0.0.1 — The remote hostname. An IP address is shown, like in this example, whenever the DNS hostname is not available or DNSLookup is turned off.
  • user-identifier — The remote logname / RFC 1413 identity of the user. (It’s not that important.)
  • frank — The user ID of the person requesting the page. Based on what I see in my Moz profile, Moz’s log entries would probably show either “SamuelScott” or “392388” whenever I visit a page after having logged in.
  • [10/Oct/2000:13:55:36 -0700] — The date, time, and timezone of the action in question in strftime format.
  • GET /apache_pb.gif HTTP/1.0 — “GET” is one of the two most common methods (the other is “POST”). “GET” fetches a URL, while “POST” submits something (such as a forum comment). The second part is the URL that is being accessed, and the last part is the version of HTTP that is being used.
  • 200 — The status code of the document that was returned.
  • 2326 — The size, in bytes, of the document that was returned.

Note: A hyphen is shown in a field when that information is unavailable.
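If you want to pull a log line apart yourself, the format above maps cleanly onto a short regular expression. Here is a minimal Python sketch using the Wikipedia example:

import re

CLF_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" (?P<status>\d{3}) (?P<size>\S+)'
)

line = '127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'
entry = CLF_PATTERN.match(line).groupdict()
print(entry["path"], entry["status"], entry["size"])   # /apache_pb.gif 200 2326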

Every single time that you — or the Googlebot — visit a page on a website, a line with this information is output, recorded, and stored by the server.

Log entries are generated continuously and anywhere from several to thousands can be created every second — depending on the level of a given server, network, or application’s activity. A collection of log entries is called a log file (or often in slang, “the log” or “the logs”), and it is displayed with the most-recent log entry at the bottom. Individual log files often contain a calendar day’s worth of log entries.

Accessing your log files

Different types of servers store and manage their log files differently. Here are the general guides to finding and managing log data on three of the most-popular types of servers:

What is log analysis?

Log analysis (or log analytics) is the process of going through log files to learn something from the data. Some common reasons include:

  • Development and quality assurance (QA) — Creating a program or application and checking for problematic bugs to make sure that it functions properly
  • Network troubleshooting — Responding to and fixing system errors in a network
  • Customer service — Determining what happened when a customer had a problem with a technical product
  • Security issues — Investigating incidents of hacking and other intrusions
  • Compliance matters — Gathering information in response to corporate or government policies
  • Technical SEO — This is my favorite! More on that in a bit.

Log analysis is rarely performed regularly. Usually, people go into log files only in response to something — a bug, a hack, a subpoena, an error, or a malfunction. It’s not something that anyone wants to do on an ongoing basis.

Why? Here is a screenshot of just a very small part of one of our original (unstructured) log files:

Ouch. If a website gets 10,000 visitors who each go to ten pages per day, then the server will create a log file every day that will consist of 100,000 log entries. No one has the time to go through all of that manually.

How to do log analysis

There are three general ways to make log analysis easier in SEO or any other context:

  • Do-it-yourself in Excel
  • Proprietary software such as Splunk or Sumo Logic
  • The ELK Stack open-source software

Tim Resnik’s Moz essay from a few years ago walks you through the process of exporting a batch of log files into Excel. This is a (relatively) quick and easy way to do simple log analysis, but the downside is that one will see only a snapshot in time and not any overall trends. To obtain the best data, it’s crucial to use either proprietary tools or the ELK Stack.

Splunk and Sumo Logic are proprietary log analysis tools that are primarily used by enterprise companies. The ELK Stack is a free and open-source batch of three platforms (Elasticsearch, Logstash, and Kibana) that is owned by Elastic and used more often by smaller businesses. (Disclosure: We at Logz.io use the ELK Stack to monitor our own internal systems as well as for the basis of our own log management software.)

For those who are interested in using this process to do technical SEO analysis, monitor system or application performance, or for any other reason, our CEO, Tomer Levy, has written a guide to deploying the ELK Stack.

Technical SEO insights in log data

However you choose to access and understand your log data, there are many important technical SEO issues to address as needed. I’ve included screenshots of our technical SEO dashboard with our own website’s data to demonstrate what to examine in your logs.

Bot crawl volume

It’s important to know the number of requests made by Baidu, BingBot, GoogleBot, Yahoo, Yandex, and others over a given period of time. If, for example, you want to get found in search in Russia but Yandex is not crawling your website, that is a problem. (You’d want to consult Yandex Webmaster and see this article on Search Engine Land.)
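As a rough illustration, counting requests per crawler takes only a few lines once the user-agent is in the log. The sketch below assumes your server writes the combined log format (the fields above plus referrer and user-agent) and that a simple substring match on the user-agent token is good enough for a first pass:

from collections import Counter

BOT_TOKENS = ["Googlebot", "bingbot", "Baiduspider", "YandexBot", "Slurp"]

def bot_crawl_volume(log_path):
    # Tally requests per search-engine crawler by user-agent token.
    counts = Counter()
    with open(log_path) as log_file:
        for line in log_file:
            for bot in BOT_TOKENS:
                if bot in line:          # crude match; verify crawler IPs for a real audit
                    counts[bot] += 1
                    break
    return counts

print(bot_crawl_volume("access.log"))    # log path is a placeholder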

Response code errors

Moz has a great primer on the meanings of the different status codes. I have an alert system set up that tells me about 4XX and 5XX errors immediately because those are very significant.
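A similar pass over the status-code field will surface the URLs returning 4XX and 5XX responses. Another small sketch, under the same log-format assumptions:

from collections import Counter

error_counts = Counter()
with open("access.log") as log_file:                  # log path is a placeholder
    for line in log_file:
        try:
            request = line.split('"')[1]              # e.g. GET /missing-page/ HTTP/1.1
            status = line.split('"')[2].split()[0]    # field right after the quoted request
            path = request.split()[1]
        except IndexError:
            continue
        if status.startswith(("4", "5")):
            error_counts[(status, path)] += 1

for (status, path), hits in error_counts.most_common(20):
    print(status, path, hits)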

Temporary redirects

Temporary 302 redirects do not pass along the “link juice” of external links from the old URL to the new one. Almost all of the time, they should be changed to permanent 301 redirects.

Crawl budget waste

Google assigns a crawl budget to each website based on numerous factors. If your crawl budget is, say, 100 pages per day (or the equivalent amount of data), then you want to be sure that all 100 are things that you want to appear in the SERPs. No matter what you write in your robots.txt file and meta-robots tags, you might still be wasting your crawl budget on advertising landing pages, internal scripts, and more. The logs will tell you — I’ve outlined two script-based examples in red above.

If you hit your crawl limit but still have new content that should be indexed to appear in search results, Google may abandon your site before finding it.

Duplicate URL crawling

The addition of URL parameters — typically used in tracking for marketing purposes — often results in search engines wasting crawl budgets by crawling different URLs with the same content. To learn how to address this issue, I recommend reading the resources on Google and Search Engine Land here, here, here, and here.
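To gauge how much crawl is going to parameterized duplicates, you can group the URLs a crawler requested by path with the query string stripped. A rough sketch, again assuming a combined-format log and a simple user-agent match:

from collections import defaultdict
from urllib.parse import urlsplit

parameter_variants = defaultdict(set)
with open("access.log") as log_file:                  # log path is a placeholder
    for line in log_file:
        if "Googlebot" not in line:
            continue
        try:
            requested_url = line.split('"')[1].split()[1]
        except IndexError:
            continue
        parts = urlsplit(requested_url)
        if parts.query:
            parameter_variants[parts.path].add(parts.query)

# Paths that Googlebot fetched under the most distinct query strings.
for path, queries in sorted(parameter_variants.items(), key=lambda item: -len(item[1]))[:20]:
    print(path, len(queries), "parameter variants crawled")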

Crawl priority

Google might be ignoring (and not crawling or indexing) a crucial page or section of your website. The logs will reveal what URLs and/or directories are getting the most and least attention. If, for example, you have published an e-book that attempts to rank for targeted search queries but it sits in a directory that Google only visits once every six months, then you won’t get any organic search traffic from the e-book for up to six months.

If a part of your website is not being crawled very often — and it is updated often enough that it should be — then you might need to check your internal-linking structure and the crawl-priority settings in your XML sitemap.

Last crawl date

Have you uploaded something that you hope will be indexed quickly? The log files will tell you when Google has crawled it.
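Finding that out is a matter of grabbing the timestamp of the most recent Googlebot request for the URL in question. A quick sketch under the same assumptions as above (the file and URL paths are placeholders):

def last_googlebot_crawl(log_path, target_path):
    # Log entries are chronological, so the last matching line is the most recent crawl.
    last_seen = None
    with open(log_path) as log_file:
        for line in log_file:
            if "Googlebot" in line and '"GET ' + target_path + ' ' in line:
                last_seen = line.split("[", 1)[1].split("]", 1)[0]
    return last_seen

print(last_googlebot_crawl("access.log", "/new-ebook/"))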

Crawl budget

One thing I personally like to check and see is Googlebot’s real-time activity on our site because the crawl budget that the search engine assigns to a website is a rough indicator — a very rough one — of how much it “likes” your site. Google ideally does not want to waste valuable crawling time on a bad website. Here, I had seen that Googlebot had made 154 requests of our new startup’s website over the prior twenty-four hours. Hopefully, that number will go up!

As I hope you can see, log analysis is critically important in technical SEO. It’s eleven o’clock — do you know where your logs are now?

Additional resources


Give It Up for Our MozCon 2015 Community Speakers

Posted by EricaMcGillivray

Super thrilled that we’re able to announce this year’s community speakers for MozCon, July 13-15th in Seattle!

Wow. Each year I feel that I say the pool keeps getting more and more talented, but it’s the truth! We had more quality pitches this year than in the past, and quantity-wise, there were 241, around 100 more entries than in previous years. Let me tell you, many of the review committee members filled our email thread with amazement at this.

And even though we had an unprecedented six slots, the choices seemed even tougher!

241 pitches
Let that number sink in for a little while.

Because we get numerous questions about what makes a great pitch, I wanted to share both information about the speakers and their great pitches—with some details removed to avoid spoilers. (We’re still working with each speaker to polish and finalize their topic.) I’ve also included my or Matt Roney’s own notes on each one from when we read them without knowing who the authors were.

Please congratulate our MozCon 2015 community speakers!

Adrian Vender

Adrian is the Director of Analytics at IMI and a general enthusiast of coding and digital marketing. He’s also a life-long drummer and lover of music. Follow him at @adrianvender.

Adrian’s pitch:

Content Tracking with Google Tag Manager

While marketers have matured in the use of web analytics tools, our ability to measure how users interact with our sites’ content needs improvement. Users are interacting with dynamic content that just isn’t captured in a pageview. While there are JavaScript tricks to help track these details, working with IT to place new code is usually the major hurdle that stops us.

Finally, Google Tag Manager is that bridge to advanced content analysis. GTM may appear technical, but it can easily be used by any digital marketer to track almost any action on a site. My goal is to make ALL attendees users of GTM.

My talk will cover the following GTM concepts:

[Adrian lists 8 highly-actionable tactics he’ll cover.]

I’ll share a client example of tracking content interaction in GA. I’ll also share a link to a GTM container file that can help people pre-load the above tag templates into their own GTM.

Matt’s notes: Could be good. I know a lot of people have questions about Tag Manager, and the ubiquity of GA should help it be pretty well-received.


Chris Dayley

Chris is a digital marketing expert and owner of Dayley Conversion. His company provides full-service A/B testing for businesses, including design, development, and test execution. Follow him at @chrisdayley.

Chris’ pitch:

I would like to present a super actionable 15 minute presentation focused on the first two major steps businesses should take to start A/B testing:

1. Radical Redesign Testing

2. Iterative Testing (Test EVERYTHING)

I am one of the few CROs out there who recommends that businesses start with a radical redesign test. My reasoning for doing so is that most businesses have done absolutely no testing on their current website, so the current landing page/website really isn’t a “best practice” design yet.

I will show several case studies where clients saw more than a 50% lift in conversion rates just from this first step of radical redesign testing, and will offer several tips for how to create a radical redesign test. Some of the tips include:

[Chris lists three direct and interesting tips he’ll share.]

Next I suggest moving into the iterative phase.

I will show several case studies of how to move through iterative testing so you eventually test every element on your page.

Erica’s notes: Direct, interesting, and with promise of multiple case studies.


Duane Brown

Duane is a digital marketer with 10 years’ experience having lived and worked in five cities across three continents. He’s currently at Unbounce. When not working, you can find Duane traveling to some far-flung location around the world to eat food and soak up the culture. Follow him at @DuaneBrown.

Duane’s pitch:

What Is Delightful Remarketing & How You Can Do It Too

A lot of people find remarketing creepy and weird. They don’t get why they are seeing those ads around the internet…. let alone how to make them stop showing.

This talk will focus on the difference between remarketing & creating delightful remarketing that can help grow the revenue & profit at a company and not piss customers off. 50% of US marketers don’t use remarketing, according to eMarketer (2013).

– [Duane’s direct how-to for e-commerce customers.] Over 60% of customers abandon a shopping cart each year: http://baymard.com/lists/cart-abandonment-rate (3 minute)

– Cover a SaaS company using retargeting to [Duane’s actionable item]. This remarketing helps show your products sticky features while showing off your benefits (3 minute)

– The Dos: [Duane’s actionable tip], a variety of creative & a dedicated landing page creates delightful remarketing that grows revenue (3 minute)

– Wrap up and review main points. (2 minutes)

Matt’s notes: Well-detailed, an area in which there’s a lot of room for improvement.


Gianluca Fiorelli

Moz Associate, official blogger for StateofDigital.com, and well-known international SEO and inbound strategist, Gianluca works in the digital marketing industry, but he still believes that he just knows that he knows nothing. Follow him at @gfiorelli1.

Gianluca’s pitch:

Unusual Sources for Keyword and Topical Research

A big percentage of SEOs equate Keyword and Topical Research with using Keyword Planner and Google Suggest.

However, using only those tools, we cannot achieve a truly deep knowledge of the interests, psychology, and language of our target.

In this talk, I will present unusual sources and unnoticed features of very well-known tools, and offer a final example based on a true story.

Arguments touched in the speech (not necessarily in this order):

[Gianluca lists seven how-tos and one unique case study.]

Erica’s notes: Theme of Google not giving good keyword info. Lots of unique actionable points and resources. Will work in the 15-minute time limit.


Ruth Burr Reedy

Ruth is the head of on-site SEO for BigWing Interactive, a full-service digital marketing agency in Oklahoma City, OK. At BigWing, she manages a team doing on-site, technical, and local SEO. Ruth has been working in SEO since 2006. Follow her at @ruthburr.

Ruth’s pitch:

Get Hired to Do SEO

This talk will go way beyond “just build your own website” and talk about specific ways SEOs can build evidence of their skills across the web, including:

[Ruth lists 7 how-tos with actionable examples.]

All in a funny, actionable, beautiful, easy-to-understand get-hired masterpiece.

Erica’s notes: Great takeaways. Wanted to do a session about building your resume as a marketer for a while.


Stephanie Wallace

Stephanie is director of SEO at Nebo, a digital agency in Atlanta. She helps clients navigate the ever-changing world of SEO by understanding their audience and helping them create a digital experience that both the user and Google appreciates. Follow her at @SWallaceSEO.

Stephanie’s pitch:

Everyone knows PPC and SEO complement one another – increased visibility in search results helps increase perceived authority and drive more clickthroughs to your site overall. But are you actively leveraging the wealth of PPC data available to build on your existing SEO strategy? The key to effectively using this information lies in understanding how to test SEO tactics and how to apply the results to your on-page strategies. This session will delve into actionable strategies for using PPC campaign insights to influence on-page SEO and content strategies. Key takeaways include:

[Stephanie lists four how-tos.]

Erica’s notes: Nice and actionable. Like this a lot.


As mentioned, we had 241 entries, and many of them were stage quality. Notable runners-up included AJ Wilcox, Ed Reese, and Daylan Pearce, and a big pat on the back to all those who tossed their hats in the ring.

Also, a huge thank you to my fellow selection committee members for 2015: Charlene Inoncillo, Cyrus Shepard, Danie Launders, Jen Lopez, Matt Roney, Rand Fishkin, Renea Nielsen, and Trevor Klein.

Buy your ticket now


Using Modern SEO to Build Brand Authority

Posted by kaiserthesage

It’s obvious that the technology behind search engines’ ability to determine and understand web entities is gradually leaning towards how real people will normally perceive things from a traditional marketing perspective.

The emphasis on E-A-T (expertise, authoritativeness, trustworthiness) from Google’s recently updated Quality Rating Guide shows that search engines are shifting towards brand-related metrics to identify sites/pages that deserve to be more visible in search results.

Online branding, or authority building, is quite similar to the traditional SEO practices that many of us have already become accustomed to.

Building a stronger brand presence online and improving a site’s search visibility both require two major processes: the things you implement on the site and the things you do outside of the site.

This is where several of the more advanced aspects of SEO can blend perfectly with online branding when implemented the right way. In this post, I’ll use some examples from my own experience to show you how.

Pick a niche and excel

Building on your brand’s topical expertise is probably the fastest way to go when you’re looking to build a name for yourself or your business in a very competitive industry.

There are a few reasons why:

  • Proving your field expertise in one or two areas of your industry can be a strong unique selling point (USP) for your brand.
  • It’s easier to expand and delve into the deeper and more competitive parts of your industry once you’ve already established yourself as an expert in your chosen field.
  • Obviously, search engines favour brands known to be experts in their respective fields.

Just to give a brief example, when I started blogging back in 2010, I was all over the place. Then, a few months later, I decided to focus on one specific area of SEO—link building—and wrote dozens of guides on how I do it.

By aiming to build my blog’s brand identity to become a prime destination for link building tutorials, it became a lot easier for me to sell my ideas on the other aspects of inbound marketing to my continuously growing audience (from technical SEO to social media, content marketing, email marketing and more).

Strengthening your brand starts with the quality of your brand’s content, whether it’s your product/service or the plethora of information available on your website.

You can start by assessing the categories where you’re getting the most traction in terms of natural link acquisitions, social shares, conversions, and/or sales.

Prioritize your content development efforts on the niche where your brand can genuinely compete in and will have a better fighting chance to dominate the market. It’s the smartest way to stand out and scale, especially when you’re still in your campaign’s early stages.

Optimize for semantic search and knowledge graph

In the past, most webmasters and publishers would rely on the usage of generic keywords/terms in optimizing their website’s content to make it easier for search engines to understand what they are about.

But now, while the continuously evolving technologies behind search may seem to make the optimization process more complicated, the fact is that they may just reward those who pursue high-level, trustworthy marketing efforts by letting them stand out in the search results.

These technologies and factors for determining relevance—which include entity recognition and disambiguation (ERD), structured data or schema markup, natural language processing (NLP), phrase-based indexing for co-occurrence and co-citations, concept matching, and a lot more—are all driven by branding campaigns and by how an average human would normally find, talk, or ask about a certain thing.

Easily identifiable brands will surely win in this type of setup.

Where to start? See if Google already knows what your brand is about.

How to optimize your site for the Knowledge Graph and at the same time build it as an authority online

1. Provide the best and the most precise answers to the “who, what, why, and how” queries that people might look for in your space.

Razvan Gavrilas did an extensive study on how Google’s Answer Boxes work. Getting listed in the answer box will not just drive more traffic and conversions to a business, but can also help position a brand on a higher level in its industry.

But of course, getting one of your entries placed for Google’s answer boxes for certain queries will also require other authority signals (like natural links, domain authority, etc.).

Here is what search crawlers typically look for when evaluating whether a page’s content is appropriate to be displayed in the answer boxes (according to Razvan’s post):

  • If the page selected for the answer contains the question in a very similar (if not exact) form, along with the answer, at a short distance from the question (repeating at least some of the words from the question) and
  • If the page selected for the answer belongs to a trustworthy website. So most of the times, if it’s not Wikipedia, it will be a site that it can consider a non-biased third party, such as is the case with a lot of “.edu” sites, or news organization websites.

Admittedly, John Mueller mentioned recently that Knowledge Graph listings should not be branded, which might make you think that the approach and effort will be for nothing.

But wait, just think about it—the intent alone of optimizing your content for Google’s Knowledge Graph will allow you to serve better content to your users (which is what Google rewards the most these days, so it’s still the soundest action to take if you want to really build a solid brand, right?).

2. Clearly define your brand’s identity to your audience.

Being remarkable and being able to separate your brand from your competitors is crucial in online marketing (be it through your content or the experience people feel when they’re using your site/service/product).


Optimizing for humans through branding allows you to condition the way people will talk about you. This factor is very important when you’re aiming to get more brand mentions that would really impact your site’s SEO efforts, branding, and conversions.

The more signals search engines get (even unlinked mentions) verifying that you’re an authority in your field, the more your brand will be trusted and the better your pages will rank in the SERPs.

3. Build a strong authorship portfolio.

Author photos/badges may have been taken down from the search results a few weeks ago, but it doesn’t mean that authorship markup no longer has value.

Both Mark Traphagen and Bill Slawski have shared why authorship markup still matters. And clearly, an author’s authority will still be a viable search ranking factor, given that it enables Google to easily identify topical experts and credible documents available around the web.

It will continue to help tie entities (publishers and brands) to their respective industries, which may still accumulate scores over time based on the popularity and reception from the author’s works (AuthorRank).

This approach is a great complement to personal brand building, especially when you’re expanding your content marketing efforts’ reach through guest blogging on industry-specific blogs where you can really absorb more new readers and followers.

There’s certainly more to implement under Knowledge Graph Optimization, and here’s a short list of what AJ Kohn has already shared on his blog earlier this year, all of which is still useful to this day:

  • Use entities (aka Nouns) in your writing
  • Get connected and link out to relevant sites
  • Implement Structured Data to increase entity detection
  • Use the sameAs property
  • Optimize your Google+ presence
  • Get exposure on Wikipedia
  • Edit and update your Freebase entry
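To make the structured data and sameAs items a little more concrete, here is a minimal sketch of the kind of Organization markup they refer to, generated with a few lines of Python (the brand name and profile URLs are placeholders):

import json

organization_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "http://www.example.com/",
    "sameAs": [   # the social and knowledge-base profiles that identify the brand
        "https://plus.google.com/+ExampleBrand",
        "https://twitter.com/examplebrand",
        "https://www.facebook.com/examplebrand",
    ],
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(organization_markup, indent=2))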

Online branding through scalable link building

The right relationships make link building scalable.

In the past, many link builders believed that it’s best to have thousands of links from diversified sources, which apparently forced a lot of early practitioners to resort to tactics focused on manually dropping links to thousands of unique domains (and spamming).

And, unfortunately, guest blogging as a link building tactic has eventually become a part of this craze.

I’ve mentioned this dozens of times before, and I’m going to say it one more time:
It’s better to have multiple links from a few link sources that are highly trusted than having hundreds of one-off links from several mediocre sites.

Focus on building signals that will strongly indicate relationships, because it’s probably the most powerful off-site signal you can build out there.

When other influential entities in your space are vouching for your brand (whether it’s through links, social shares, or even unlinked brand mentions), it allows you to somehow become a part of the list of sites that will most likely be trusted by search engines.

It can most definitely impact how people will see your brand as an authority as well, when they see that you’re being trusted by other credible brands in your industry.

These relationships can also open a lot of opportunities for natural link acquisitions and lead generation, knowing that some of the most trusted brands in your space trust you.

Making all of this actionable

1. Identify and make a list of the top domains and publishers in your industry, particularly those that have high search share.

There are so many tools that you can use to get this data, like SEMRush, Compete.com, and/or Alexa.com.

You can also use Google Search and SEOQuake to make a list of sites that are performing well on search for your industry’s head terms (given that Google is displaying better search results these days, it’s probably one of the best prospecting tools you can use).

I also use other free tools for this type of prospecting, particularly for cleaning up the list (removing duplicate domains and extracting unique hostnames, and filtering out highly authoritative sites that are clearly irrelevant to the task, such as ranking pages from Facebook, Wikipedia, and other popular news sites).

2. Try to penetrate at least 2 high authority sites from the first 50 websites on your list—and become a regular contributor for them.

Start engaging them by genuinely participating in their existing communities.

The process shouldn’t stop with you contributing content for them on a regular basis, as along the way you can initiate collaborative tasks, such as inviting them to publish content on your site as well.

This can help draw more traffic (and links) from their end, and can exponentially improve the perceived value of your brand as a publisher (based on your relationships with other influential entities in your industry).

These kinds of relationships will make the latter part of your link building campaign less stressful. As soon as you get to build a strong footing with your brand’s existing relationships and content portfolio (in and out of your site), it’ll be a lot easier for you to pitch and get published on other authoritative industry-specific publications (or even in getting interview opportunities).

3. Write the types of content that your target influencers are usually reading.

Stalk your target influencers on social networks, and take note of the topics/ideas that interest them the most (related to your industry). See what type of content they usually share to their followers.

Knowing these things will give you a ton of ideas on how you can effectively approach your content development efforts and can help you come up with content ideas that are most likely to be read, shared, and linked to.

You can also go the extra mile by finding out which sites they mostly link out to or use as references for their own work (use ScreamingFrog).

4. Take advantage of your own existing community (or others’ as well).

Collaborate with the people who are already participating in your brand’s online community (blog comments, social networks, discussions, etc.). Identify those who truly contribute and really add value to the discussions, and see if they run their own websites or work for a company that’s also in your industry.

Leverage these interactions, as these can form long-term relationships that can also be beneficial to both parties (for instance, inviting them to write for you or having you write for their blog, and/or cross-promote your works/services).

And perhaps you can use this approach with other brands’ communities as well, like reaching out to people who share really smart input about your industry (which you’ll see in other blogs’ comment sections) and asking them if they’d be interested in talking/sharing more about that topic and having it published on your website instead.

Building a solid community can easily help automate link building, but more importantly, it can surely help strengthen a brand’s online presence.

Conclusion

SEO can be a tremendous help to your online branding efforts. Likewise, branding can be a tremendous help to your SEO efforts. Alignment and integration of both practices is what keeps winners winning in this game (just look at Moz).

If you liked this post or have any questions, let me know in the comments below, and you can find me on Twitter @jasonacidre.


The New Link Building Survey 2014 – Results

Posted by JamesAgate

Many of you may have seen Skyrocket SEO’s Link Building Survey results that we published here on Moz around this same time last year. The reception was fantastic, so we decided to push ahead with turning this into an annual series to see how this strand of the industry is developing and evolving over time.

Firstly, “link building”…

Yep, we’ve not changed the name to a “content marketing survey” or “inbound link acquisition survey;” we still feel link building is a vital part of an SEOs arsenal of tactics, and therefore it deserves its own survey.

As a company we’re investing just as much in link building for our clients (granted, we’ve adapted what we are doing), but the fact remains that if you want to score big with decent organic search visibility then you need links.

Now that that’s out of the way, let’s get down to the details:

Who took the survey?

A massive thank you to the 315 or so people who took the survey. That number is slightly down from last year, which I feel is partly due to fewer people considering link building to be a part of their day-to-day roles (I’d argue that’s a missed opportunity). This year we also had a few duplicate entries and submissions that needed a bit of tidying up, so we trimmed it back to these 315 submissions.

The makeup of the respondents was broadly similar to last year, as expected, although based on user feedback from our inaugural survey, we added a few more categories for respondents to self-classify—so it is hard to make specific comparisons.

How much does your company spend on link building per month?

In the 2013 survey, 10% of respondents said their company spent $50k+ per month on link building, so it appears that the upper limit to link building spend may have decreased slightly across the industry.

That being said, there now appears to be a much larger number of companies in the $10-$50k per month bracket when you compare this year’s 37% with last year’s 11%.

I would attribute the changes year-on-year to two factors:

  • Reclassification of the term “link building”: Many companies have shifted budget that they would previously have classified as link building budget into content projects that more than likely still have an impact on link building efforts.
  • Recognition of opportunity: Based on our own experiences we see a number of website owners and businesses pushing harder with their content promotion and link building as they recognise an opportunity to invest when their competitors are running scared.

Warren Buffett once said “Be fearful when others are greedy and greedy when others are fearful.” Based on conversations alone that I’ve had with a wide range of businesses, many are now fearful when it comes to building links. In fact, we gathered some data later in the survey that revealed that one of the biggest challenges people face is not knowing which links will help and which will harm them. Google’s widespread action against websites (and dare I say it webmaster propaganda) has had a dramatic impact on some people to the point of paralysis.

There are clear opportunities that, with a sound strategy, can be seized in today’s market.

You can build links like it’s 1999 for a microsite or second-level property, keep it super-clean and identify link opportunities that would be valuable irrespective of Google, or land somewhere in between those extremes. The fact is that links still form the backbone of the internet and of Google’s algorithm, and that isn’t going to change for a very long time.

What percentage of your overall SEO budget is allocated toward building links?

Thanks to John-Henry Scherck for this one, as he made the suggestion following the 2013 survey that having data on the percentage would be really interesting. Looking back we don’t have a point of comparison, but of course moving forward we will, so we should get a clearer picture of whether online marketing budgets are just increasing in general (and therefore link building gets allocated the same percentage of a bigger pie) or whether folks are seeing the value from building links and therefore allocating a larger percentage of the same-sized pie to link building activities.

Would you say you’ve increased or decreased your spend on link building over the past 12 months?

This aligns with our data on more people entering the $10-$50k per month investment bracket this year:

Why the increase/decrease in spending?

We asked people why they decided to increase or decrease their spending on link building over the past 12 months.

Responses could be categorized into the following areas:

Common reason for increases:

  • Increased costs related to moving away from older style and often “cheaper” link building
  • Increased costs related to production/creativity
  • Good links are just as important as ever; links still move the needle in terms of search engine visibility and performance therefore it makes sense to increase investment in this area.

Common reasons for decreases:

  • Moving link building budget into content marketing projects (to be fair, this budget will probably indirectly fund link acquisition of some kind even if it is seen as a secondary goal for the content campaign.)
  • We wanted to scale back and assess the impact that Google’s manual actions etc have on our website.

In the next 12 months, will you look to increase or decrease your spend on link building?

Why the planned increase/decrease in spending?

  • Link building continues to get more expensive
  • To raise the bar on existing efforts, and to beat competitors with increasingly sophisticated content assets
  • Unsure where to invest/which links are working so concentrating budget into other activities.

Which link building tactics do you utilise most often?

(Numbers listed are votes rather than percentages)

When we compare with responses from the 2013 survey, there is a clear shift towards content-led initiatives and a reduction in some tactics. For example, close to 50% said in 2013 that guest blogging was their staple tactic; in 2014, fewer than 15% listed it as one of their staple activities.

Another interesting bit of data is the fact that paid links have seen somewhat of a resurgence in popularity, presumably as companies look for tactics where they can maintain greater control. In 2013, just 5% listed paid links as their staple linking tactic whereas in 2014 over 13% reported paid linking and blog networks as one of their main link building tactics.

What is currently your biggest link building challenge?

  • Getting links to pages that aren’t particularly linkworthy (money pages)
  • Lack of scalability (time, process, training, spreading time between clients)
  • Avoiding Google penalties

These are similar challenges to those reported in 2013 in the sense that there is still concern over which links are helping and harming organic search performance as well as difficulties relating to processes and the lack of scalability.

The interesting thing is that SEO is full of challenges, so as soon as one is overcome, the next appears. In 2013, 28% of respondents said that “finding link prospects” was a key challenge, but this year there was no mention of link prospects being an issue. This arguably suggests that we as an industry were adjusting to the “new world” back in 2013 and that we have now advanced our capabilities enough for this to no longer be the primary challenge in our day-to-day work. Now the main problem doesn’t seem to be getting links as such, but getting links into the pages that we all need to rank to stay in business… the money pages.

Which link building tactics do you believe to be most effective?

(numbers below are “votes” rather than percentages)

Which link building tactics do you believe to be least effective?

(numbers below are “votes” rather than percentages)

Which link building tactics do you consider to be harmful to a site?

(numbers below are “votes” rather than percentages)

See the complete visual below:

Thank you to everyone who took part in the survey! See you all again next year.
