Meet Dan Morris, Executive Vice President, North America

  1. Why did you decide to come to dotmailer?

The top three reasons were People, Product and Opportunity. I met the people who make up our business and heard their stories from the past 18 years, learned about the platform and market-leading status they had built in the UK, and saw that I could add value with my U.S. high-growth business experience. I’ve been working with marketers, entrepreneurs and business owners for years across a series of different roles, and saw that I could apply what I’d learned from that and the start-up space to dotmailer’s U.S. operation. dotmailer has had clients in the U.S. for 12 years and we’re positioned to grow the user base of our powerful and easy-to-use platform significantly. I knew I could make a difference here, and what closed the deal for me was the people. Every single person I’ve met is deeply committed to the business, to the success of our customers and to making our solution simple and efficient. We’re a great group of passionate people and I’m proud to have joined the dotfamily.

Dan Morris, dotmailer’s EVP for North America in the new NYC office

  2. Tell us a bit about your new role

dotmailer has been in business and in this space for more than 18 years. We were a web agency, then a systems integrator, and we got into the email business that way, ultimately building the dotmailer platform thousands of people use daily. This means we know this space better than anyone, and we have solutions that align closely with our customers’ needs and are flexible enough to grow with them. My role is to take all that experience and the platform and grow our U.S. presence. My early focus has been on identifying the right team to execute our growth plans. We want to be the market leader in the U.S. in the next three years – just like we’ve done in the UK – so getting the right people in the right spots was critical. We quickly assessed the skills of the U.S. team and made the changes necessary to provide the right focus on customer success. Next, we set out to completely rebuild dotmailer’s commercial approach in the U.S. We simplified our offers to three bundles, so that pricing and what’s included in those bundles is transparent to our customers. We’ve heard great things about this already from clients and partners. We’re also increasing our resources on customer success and support. We’re intensely focused on ease of on-boarding, ease of use and speed of use. We consistently hear how easy and smooth a process it is to use dotmailer’s tools. That’s key for us – when you buy a dotmailer solution, we want to onboard you quickly and make sure you have all of your questions answered right away so that you can move right into using it. Customers are raving about this, so we know it’s working well.

  3. What early accomplishments are you most proud of from your dotmailer time so far?

I’ve been at dotmailer for eight months now and I’m really proud of all we’ve accomplished together.  We spent a lot of time assessing where we needed to restructure and where we needed to invest.  We made the changes we needed, invested in our partner program, localized tech support, customer on-boarding and added customer success team members.  We have the right people in the right roles and it’s making a difference.  We have a commercial approach that is clear with the complete transparency that we wanted to provide our customers.  We’ve got a more customer-focused approach and we’re on-boarding customers quickly so they’re up and running faster.  We have happier customers than ever before and that’s the key to everything we do.

  4. You’ve moved the U.S. team to a new office. Can you tell us why and a bit about the new space?

I thought it was very important to create a NY office space that was tied to our branding and our other offices around the world, and that also had its own NY energy and culture for our team here – to foster collaboration and to have some fun. It was also important for us to have a flexible space where we could welcome customers, partners and resellers, and also hold classes and dotUniversity training sessions. I’m really grateful to the team who worked on the space because it really reflects our team and what we care about. At any given time, you’ll see a training session happening, the team collaborating, a customer dropping in to ask a few questions or a partner dropping in to work from here. We love our new NYC space.

We had a spectacular reception this week to celebrate the opening of this office with customers, partners and the dotmailer leadership team in attendance. Please take a look at the photos from our event on Facebook.

Guests and the team at dotmailer’s new NYC office warming party

  5. What did you learn from your days in the start-up space that you’re applying at dotmailer?

The start-up space is a great place to learn. You have to know where every dollar is going and coming from, so every choice you make needs to be backed up with a business case for that investment.  You try lots of different things to see if they’ll work and you’re ready to turn those tactics up or down quickly based on an assessment of the results. You also learn things don’t have to stay the way they are, and can change if you make them change. You always listen and learn – to customers, partners, industry veterans, advisors, etc. to better understand what’s working and not working.  dotmailer has been in business for 18 years now, and so there are so many great contributors across the business who know how things have worked and yet are always keen to keep improving.  I am constantly in listening and learning mode so that I can understand all of the unique perspectives our team brings and what we need to act on.

  6. What are your plans for the U.S. and the sales function there?

On our path to being the market leader in the U.S., I’m focused on three things going forward: 1 – I want our customers to be truly happy.  It’s already a big focus in the dotmailer organization – and we’re working hard to understand their challenges and goals so we can take product and service to the next level. 2 – Creating an even more robust program around partners, resellers and further building out our channel partners to continuously improve sales and customer service programs. We recently launched a certification program to ensure partners have all the training and resources they need to support our mutual customers.  3 – We have an aggressive growth plan for the U.S. and I’m very focused on making sure our team is well trained, and that we remain thoughtful and measured as we take the steps to grow.  We want to always keep an eye on what we’re known for – tools that are powerful and simple to use – and make sure everything else we offer remains accessible and valuable as we execute our growth plans.

  7. What are the most common questions that you get when speaking to a prospective customer?

The questions we usually get are around price, service level and flexibility. How much does dotmailer cost? How well are you going to look after my business? How will you integrate with my existing stack and support my plans for future growth? We now have three transparent bundle options, with specifics about what’s included published right on our website. We have introduced a customer success team that’s focused only on taking great care of our customers, and we’re hearing stories every day that tell me this is working. And we have all of the tools to support our customers as they grow and to integrate into their existing stacks – often integrating so well that you can use dotmailer from within Magento, Salesforce or Dynamics, for example.

  8. Can you tell us about the dotmailer differentiators you highlight when speaking to prospective customers that seem to really resonate?

In addition to the ones above – ease of use, speed of use and the ability to scale with you. With dotmailer’s tiered program, you can start with a lighter level of functionality and grow into more advanced functionality as you need it. The platform itself is so easy to use that most marketers are able to build campaigns in minutes that would have taken hours on other platforms. Our customer success team is also with you all the way if you ever want or need help. We’ve built a very powerful platform, we have a fantastic team providing personalized service as an extended part of your own team, and we’re ready to grow with you.

  9. How much time is your team on the road vs. in the office? Any road warrior tips to share?

I’ve spent a lot of time on the road; one year I attended 22 tradeshows! My top tip when flying is to be willing to give up your seat for families or groups once you’re at the airport gate, as you’ll often be rewarded with a better seat for helping the airline make the family or group happy. Win win! Since joining dotmailer, I’ve focused on being in the office and present for the team and customers as much as possible. I can usually be found in our new NYC office, where I spend a lot of time with our team, in customer meetings, in trainings and other hosted events, and in sales and marketing conversations. I’m here to help the team, clients and partners succeed, and will always do my best to say yes! Once our prospective customers see how quickly and efficiently they can execute tasks with dotmailer solutions vs. their existing solutions, it’s a no-brainer for them. I love seeing and hearing their reactions.

  10. Tell us a bit about yourself – favorite sports team, favorite food, guilty pleasure, favorite band, favorite vacation spot?

I’m originally from Yorkshire in England, and grew up just outside York. I moved to the U.S. about seven years ago to join a very fast-growing startup; we took it from 5 to well over 300 people, which was a fantastic experience. I moved to NYC almost two years ago, and I love exploring this great city. There’s so much to see and do. Outside of dotmailer, my passion is cars, and I also enjoy skeet shooting, almost all types of music, and I love to travel – my goal is to get to India, Thailand, Australia and Japan in the near future.

Want to find out more about the dotfamily? Check out our recent post about Darren Hockley, Global Head of Support.


Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

For reference, here’s a still of this week’s whiteboard.

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But because of the increasing popularity of this article, and because I think there’s, like, this hopefulness from worlds outside of kind of the hardcore SEO world that are looking to this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO.”

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

If you make a great website that delivers a great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, then generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and the other is SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really gets in my craw. I don’t think “write quality content” means anything. You tell me. When you hear that, to me that is a totally non-actionable, non-useful phrase that’s a piece of advice that is so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (which he notes most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, in the 10 minutes before we filmed this video, I brainstormed a list just off the top of my head. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we robots.txt those out? Should we disallow from crawling with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve made it a status code 200, or you’ve used a 404 code when you should have used a 410, which is a permanently removed, to be able to get it completely out of the indexes, as opposed to having Google revisit it and keep it in the index.

Downtime procedures. So there’s specifically a… I can’t even remember. It’s a 5xx code that you can use. Maybe it was a 503 or something that you can use that’s like, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than using a 404, which tells them, “This page is now an error.”

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.
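
To make that concrete, here’s a minimal sketch in Python of the kind of status-code audit this implies. The URLs and expected codes are hypothetical stand-ins; the point is just to verify that error pages actually return the codes they should:

import requests

# Hypothetical examples: each URL paired with the status code we expect.
# A "soft 404" shows up here as a 200 where we expected a 404.
EXPECTED_STATUS = {
    "http://www.example.com/no-such-page": 404,
    "http://www.example.com/permanently-removed": 410,  # gone for good
}

for url, expected in EXPECTED_STATUS.items():
    status = requests.get(url, allow_redirects=False).status_code
    if status != expected:
        print("PROBLEM: %s returned %s, expected %s" % (url, status, expected))

# During planned downtime, pages should return 503 (Service Unavailable),
# ideally with a Retry-After header, rather than a 404.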

  • International and multi-language targeting issues. I won’t go into that. But you have to know the protocols there.
  • Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics connecting those up and trying to see: Why did we go up or down? Did we have less pages being indexed, more pages being indexed, more pages getting traffic less, more keywords less?
  • Understanding advanced search parameters. Today, just today, I was checking out the related parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site, well, most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and da, da, da, da. inanchor: doesn’t work anymore, never mind about that one.

I have to go faster, because we’re just going to run out of these. Like, come on. Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you, you have all sorts of errors, and you don’t know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, X-Robots-Tag, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical, you don’t know what those are, you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or Mozscape, or Ahrefs, or Majestic, or SEMrush, or the Alchemy API. Those APIs can do powerful things for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there (a rough sketch follows this list). There’s a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
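
On that point above about pulling API data into a spreadsheet: here’s a rough sketch of fetching JSON from a metrics API and writing it to a CSV you could open in Excel. The endpoint, key, and response field are hypothetical stand-ins; every real service (Mozscape, Ahrefs, SEMrush, etc.) has its own URL, auth scheme, and response schema:

import csv
import requests

API_URL = "https://api.example.com/v1/url-metrics"  # hypothetical endpoint
API_KEY = "your-api-key"

urls = ["http://www.example.com/", "http://www.example.com/blog/"]

with open("metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "link_count"])  # hypothetical response field
    for url in urls:
        resp = requests.get(API_URL, params={"target": url, "key": API_KEY})
        resp.raise_for_status()
        writer.writerow([url, resp.json().get("link_count")])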

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Measure Your Mobile Rankings and Search Visibility in Moz Analytics

Posted by jon.white

We have launched a couple of new things in Moz Pro that we are excited to share with you all: Mobile Rankings and a Search Visibility score. If you want, you can jump right in by heading to a campaign and adding a mobile engine, or keep reading for more details!

Track your mobile vs. desktop rankings in Moz Analytics

Mobilegeddon came and went with slightly less fanfare than expected, somewhat due to the vast ‘Mobile Friendly’ updates we all did at super short notice (nice work everyone!). Nevertheless, mobile rankings visibility is now firmly on everyone’s radar, and will only become more important over time.

Now you can track your campaigns’ mobile rankings for all of the same keywords and locations you are tracking on desktop.

For this campaign my mobile visibility is almost 20% lower than my desktop visibility and falling;
I can drill down to find out why

Clicking on this will take you into a new Engines tab within your Keyword Rankings page where you can find a more detailed version of this chart as well as a tabular view by keyword for both desktop and mobile. Here you can also filter by label and location.

Here I can see Search Visibility across engines including mobile;
in this case, for my branded keywords.

We have given an extra engine to all campaigns

We’ve given customers an extra engine for each campaign, increasing the number from 3 to 4. Use the extra slot to add the mobile engine and unlock your mobile data!

We will begin to track mobile rankings within 24 hours of adding the mobile engine to a campaign. Once you are set up, you will notice a new chart on your dashboard comparing Desktop vs. Mobile Search Visibility.

Measure your Search Visibility score vs. competitors

The overall Search Visibility for my campaign

Along with this change we have also added a Search Visibility score to your rankings data. Use your visibility score to track and report on your overall campaign ranking performance, compare to your competitors, and look for any large shifts that might indicate penalties or algorithm changes. For a deeper drill-down into your data you can also segment your visibility score by keyword labels or locations. Visit the rankings summary page on any campaign to get started.

How is Search Visibility calculated?

Good question!

The Search Visibility score is the percentage of clicks we estimate you receive based on your rankings positions, across all of your keywords.

We take each ranking position for each keyword, multiply by an estimated click-thru-rate, and then take the average of all of your keywords. You can think of it as the percentage of your SERPs that you own. The score is expressed as a percentage, though scores of 100% would be almost impossible unless you are tracking keywords using the “site:” modifier. It is probably more useful to measure yourself vs. your competitors rather than focus on the actual score, but, as a rule of thumb, mid-40s is probably the realistic maximum for non-branded keywords.
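
For the curious, here’s a minimal sketch of that arithmetic in Python. The CTR-by-position numbers are made-up illustrative values, not the actual model behind the score:

# Illustrative click-through rates by ranking position (not the real model).
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def search_visibility(rankings):
    """rankings maps keyword -> ranking position (None when not ranking)."""
    clicks = [CTR_BY_POSITION.get(pos, 0.0) for pos in rankings.values()]
    return 100.0 * sum(clicks) / len(clicks)  # average across all keywords

# (0.30 + 0.07 + 0.0) / 3 = ~12.3% visibility
print(search_visibility({"red shoes": 1, "blue shoes": 4, "green shoes": None}))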

Jeremy, our Moz Analytics TPM, came up with this metaphor:

Think of the SERPs for your keywords as villages. Each position on the SERP is a plot of land in SERP-village. The Search Visibility score is the average amount of plots you own in each SERP-village. Prime real estate plots (i.e., better ranking positions, like #1) are worth more. A complete monopoly of real estate in SERP-village would equate to a score of 100%. The Search Visibility score equates to how much total land you own in all SERP-villages.

Some neat ways to use this feature

  • Label and group your keywords, particularly when you add them – As visibility score is an average of all of your keywords, when you add or remove keywords from your campaign you will likely see fluctuations in the score that are unrelated to performance. Solve this by getting in the habit of labeling keywords when you add them. Then segment your data by these labels to track performance of specific keyword groups over time.
  • See how location affects your mobile rankings – Using the Engines tab in Keyword Rankings, use the filters to select just local keywords. Look for big differences between Mobile and Desktop where Google might be assuming local intent for mobile searches but not for desktop. Check out how your competitors perform for these keywords. Can you use this data?



An Open-Source Tool for Checking rel-alternate-hreflang Annotations

Posted by Tom-Anthony

In the Distilled R&D department we have been ramping up the amount of automated monitoring and analysis we do, with an internal system monitoring our clients’ sites both directly and via various data sources to ensure they remain healthy and we are alerted to any problems that may arise.

Recently we started work to add in functionality for including the rel-alternate-hreflang annotations in this system. In this blog post I’m going to share an open-source Python library we’ve just started work on for the purpose, which makes it easy to read the hreflang entries from a page and identify errors with them.

If you’re not a Python aficionado then don’t despair, as I have also built a ready-to-go tool for you to use, which will quickly do some checks on the hreflang entries for any URL you specify. 🙂

Google’s Search Console (formerly Webmaster Tools) does have some basic rel-alternate-hreflang checking built in, but it is limited in how you can use it and you are restricted to using it for verified sites.

rel-alternate-hreflang checklist

Before we introduce the code, I wanted to quickly review a list of five easy and common mistakes that we will want to check for when looking at rel-alternate-hreflang annotations:

  • return tag errors – Every alternate language/locale URL of a page should, itself, include a link back to the first page. This makes sense but I’ve seen people make mistakes with it fairly often.
  • indirect / broken links – Links to alternate language/region versions of the page should not go via redirects, and should not link to missing or broken pages.
  • multiple entries – There should never be multiple entries for a single language/region combo.
  • multiple defaults – You should never have more than one x-default entry.
  • conflicting modes – rel-alternate-hreflang entries can be implemented via inline HTML, XML sitemaps, or HTTP headers. For any one set of pages only one implementation mode should be used.

So now imagine that we want to simply automate these checks quickly and simply…
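
To give a flavor of what automating the first of those checks involves, here’s a rough sketch using requests and BeautifulSoup. It is deliberately simplified: it only reads inline HTML annotations, ignores XML sitemaps and HTTP headers, and does no URL normalization:

import requests
from bs4 import BeautifulSoup

def hreflang_urls(url):
    """Return the alternate URLs declared in a page's inline hreflang tags."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    links = soup.select('link[rel="alternate"][hreflang]')
    return {link.get("href") for link in links if link.get("href")}

def missing_return_tags(url):
    """Check 1: every alternate URL should link back to the original page."""
    return [alt for alt in hreflang_urls(url) if url not in hreflang_urls(alt)]

polly handles these checks (and the edge cases) properly, so treat the above as illustration only.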

Introducing: polly – the hreflang checker library

polly is the name for the library we have developed to help us solve this problem, and we are releasing it as open source so the SEO community can use it freely to build upon. We only started work on it last week, but we plan to continue developing it, and will also accept contributions to the code from the community, so we expect its feature set to grow rapidly.

If you are not comfortable tinkering with Python, then feel free to skip down to the next section of the post, where there is a tool that is built with polly which you can use right away.

Still here? Ok, great. You can install polly easily via pip:

pip install polly

You can then create a PollyPage() object which will do all our work and store the data simply by instantiating the class with the desired URL:

from polly import PollyPage  # assumed import path

my_page = PollyPage("http://www.facebook.com/")

You can quickly see the hreflang entries on the page by running:

print(my_page.alternate_urls_map)

You can list all the hreflang values encountered on a page, and which countries and languages they cover:

print(my_page.hreflang_values)
print(my_page.languages)
print(my_page.regions)

You can also check various aspects of a page, such as whether the pages it includes in its rel-alternate-hreflang entries point back, or whether there are entries that are not retrievable (due to 404 or 500 errors, etc.):

print(my_page.is_default)
print(my_page.no_return_tag_pages())
print(my_page.non_retrievable_pages())

Get more instructions and grab the code at the polly github page. Hit me up in the comments with any questions.

Free tool: hreflang.ninja

I have put together a very simple tool that uses polly to run some of the checks we highlighted above as being common mistakes with rel-alternate-hreflang, which you can visit right now and start using:

http://hreflang.ninja

Simply enter a URL and hit enter, and you should see something like:

Example output from the ninja!

The tool shows you the rel-alternate-hreflang entries found on the page, the language and region of those entries, the alternate URLs, and any errors identified with the entry. It is perfect for doing quick’n’dirty checks of a URL to identify any errors.

As we add additional functionality to polly we will be updating hreflang.ninja as well, so please tweet me with feature ideas or suggestions.

To-do list!

This is the first release of polly and currently we only handle annotations that are in the HTML of the page, not those in the XML sitemap or HTTP headers. However, we are going to be updating polly (and hreflang.ninja) over the coming weeks, so watch this space! 🙂

Resources

Here are a few links you may find helpful for hreflang:

Got suggestions?

With the increasing number of SEO directives and annotations available, and the ever-changing guidelines around how to deploy them, it is important to automate whatever areas possible. Hopefully polly is helpful to the community in this regard, and we want to hear what ideas you have for making these tools more useful – here in the comments or via Twitter.



The Long Click and the Quality of Search Success

Posted by billslawski

“On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “Long Click” — This occurred when someone went to a search result, ideally the top one, and did not return. That meant Google had successfully fulfilled the query.”

~ Steven Levy, In the Plex: How Google Thinks, Works, and Shapes Our Lives

I often explore and read patents and papers from the search engines to try to get a sense of how they may approach different issues, and learn about the assumptions they make about search, searchers, and the Web. Lately, I’ve been keeping an eye open for papers and patents from the search engines where they talk about a metric known as the “long click.”

A recently granted Google patent uses the metric of a “Long Click” as the center of a process Google may use to track results for queries that were selected by searchers for long visits in a set of search results.

This concept isn’t new. In 2011, I wrote about a Yahoo patent in How a Search Engine May Measure the Quality of Its Search Results, where they discussed a metric that they refer to as a “target page success metric.” It included “dwell time” upon a result as a sign of search success (Yes, search engines have goals, too).


Another Google patent described assigning web pages “reachability scores” based upon the quality of pages linked to from those initially visited pages. In the post Does Google Use Reachability Scores in Ranking Resources? I described a Google patent suggesting that a long click may be a sign that visitors to a page are engaged by the content its links point to, including links to videos. Google tells us in that patent that it might consider a “long click” to have been made on a video if someone watches at least half the video or 30 seconds of it. The patent suggests that a high reachability score on a page may mean that page could be boosted in Google search results.


But the patent I’m writing about today is focused primarily upon looking at and tracking a search success metric like a long click or long dwell time. Here’s the abstract:

Modifying ranking data based on document changes

Invented by Henele I. Adams, and Hyung-Jin Kim

Assigned to Google

US Patent 9,002,867

Granted April 7, 2015

Filed: December 30, 2010

Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media for determining a weighted overall quality of result statistic for a document.

One method includes receiving quality of result data for a query and a plurality of versions of a document, determining a weighted overall quality of result statistic for the document with respect to the query including weighting each version specific quality of result statistic and combining the weighted version-specific quality of result statistics, wherein each quality of result statistic is weighted by a weight determined from at least a difference between content of a reference version of the document and content of the version of the document corresponding to the version specific quality of result statistic, and storing the weighted overall quality of result statistic and data associating the query and the document with the weighted overall quality of result statistic.
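
Reading between the lines of that abstract, the combination step might look something like the sketch below. The similarity-based weighting is my own illustrative stand-in for the patent’s “difference between content” weight, not its actual formula:

def weighted_quality(version_stats):
    """version_stats: (quality_stat, similarity_to_reference) pairs, where
    similarity is in [0, 1] and 1.0 means identical to the reference version."""
    total_weight = sum(sim for _, sim in version_stats)
    if total_weight == 0:
        return 0.0
    return sum(stat * sim for stat, sim in version_stats) / total_weight

# Versions that have drifted far from the current (reference) content
# contribute little to the overall quality-of-result statistic.
print(weighted_quality([(0.8, 1.0), (0.9, 0.6), (0.4, 0.1)]))  # ~0.81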

This patent tells us that search results may be ranked in an order, according to scores assigned to the search results by a scoring function or process that would be based upon things such as:

  • Where, and how often, query terms appear in the given document,
  • How common the query terms are in the documents indexed by the search engine, or
  • A query-independent measure of quality of the document itself.
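
Those three ingredients are essentially the classic relevance-plus-quality recipe. A toy version in Python, purely to make the shape of it concrete:

import math

def toy_score(term_freq, docs_with_term, total_docs, page_quality):
    """Combines the three signals above: how often the query term appears in
    the document, how rare the term is across the index (an inverse document
    frequency), and a query-independent measure of document quality."""
    idf = math.log(total_docs / (1.0 + docs_with_term))
    return term_freq * idf * page_quality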

Last September, I wrote about how Google might identify a category associated with a query term based upon clicks, in the post Using Query User Data To Classify Queries. In a query for “Lincoln,” the results that appear in response might be about the former US President, the town of Lincoln, Nebraska, and the model of automobile. When someone searches for [Lincoln], Google returning all three of those responses as a top result could be said to be reasonable. The patent I wrote about in that post told us that Google might collect information about “Lincoln” as a search entity, and track which category of results people clicked upon most when they performed that search, to determine what categories of pages to show other searchers. Again, that’s another “search success” metric based upon past search history.

There likely is some value in working to find ways to increase the amount of dwell time someone spends upon the pages of your site, if you are already having some success in crafting page titles and snippets that persuade people to click on your pages when those appear in search results. These approaches can include such things as:

  1. Making visiting your page a positive experience in terms of things like site speed, readability, and scannability.
  2. Making visiting your page a positive experience in terms of things like the quality of the content published on your pages including spelling, grammar, writing style, interest, quality of images, and the links you share to other resources.
  3. Providing a positive experience by offering ideas worth sharing with others, and offering opportunities for commenting and interacting with others, and by being responsive to people who do leave comments.

Here are some resources I found that discuss this long click metric in terms of “dwell time”:

Your ability to create pages that can end up in a “long click” from someone who has come to your site in response to a query is also a “search success” metric on the search engine’s part, and you both succeed. Just be warned that, as the most recent patent from Google on long clicks shows us, Google will be watching to make sure that the content of your page doesn’t change too much, that people continue to click upon it in search results, and that they spend a fair amount of time upon it.

(Images for this post are from my Go Fish Digital Design Lead Devin Holmes @DevinGoFish. Thank you, Devin!)

