Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But because of the increasing popularity of this article, and because there’s this hopefulness from worlds outside of the hardcore SEO world that are looking to this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO,” I think it deserves a response.

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

You make a great website that delivers great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and others are SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really gets in my craw. I don’t think “write quality content” means anything. You tell me. When you hear that, to me that is a totally non-actionable, non-useful phrase that’s a piece of advice that is so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (noting that most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, I brainstormed a list just off the top of my head in the 10 minutes before we filmed this video. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we disallow them in robots.txt? Should we keep them out of the index with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console? (There’s a quick sketch of a couple of these options after this list.)
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve made it a status code 200, or you’ve used a 404 code when you should have used a 410, which is a permanently removed, to be able to get it completely out of the indexes, as opposed to having Google revisit it and keep it in the index.
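
To make a couple of the thin-content options from that list concrete, here’s a minimal sketch, assuming a hypothetical internal search results page on example.com; the URLs are placeholders, and which option fits depends on the site.

<!-- In the head of a thin page like http://example.com/search?q=widgets -->
<!-- Option 1: let Google crawl the page but keep it out of the index -->
<meta name="robots" content="noindex, follow" />

<!-- Option 2: instead, consolidate signals to a stronger, search-targeted page -->
<link rel="canonical" href="http://example.com/widgets/" />

These are alternatives rather than a pair: the first keeps thin pages out of the index entirely, while the second only makes sense when the thin page is essentially a duplicate of something you do want ranking.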

Downtime procedures. There’s a specific 5xx code you can use for this, the 503 (Service Unavailable), which tells Google, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than a 404, which tells them, “This page is now an error.”

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.

  • International and multi-language targeting issues. I won’t go into that in depth, but you have to know the protocols there, like hreflang (there’s a quick sketch after this list).
  • Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics, connecting those up and trying to see: Why did we go up or down? Did we have fewer pages being indexed, more pages being indexed, more or fewer pages getting traffic, more or fewer keywords sending traffic?
  • Understanding advanced search parameters. Today, just today, I was checking out the related: parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site, well, most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.
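
Since hreflang comes up again below, here’s a quick, hedged sketch of what those international annotations look like in a page’s head; the URLs and language/region codes are purely illustrative.

<link rel="alternate" hreflang="en-us" href="http://example.com/en-us/" />
<link rel="alternate" hreflang="en-gb" href="http://example.com/en-gb/" />
<link rel="alternate" hreflang="de" href="http://example.com/de/" />

Each language or region version carries the full set of annotations, including a reference to itself, which is part of why this protocol is so easy to get subtly wrong.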

Then there are tons of parameters, like inurl: and inanchor:, and da, da, da, da. Inanchor: doesn’t work anymore, never mind about that one.

I have to go faster, because we’re just going to run out of these. Like, come on.

  • Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you that you have all sorts of errors, and you don’t know what they are.
  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet formats to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic. (There’s a small markup sketch after this list.)
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, x robots, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical, you don’t know what those are, you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or MozScape, or Ahrefs or Majestic, or SEMrush, or the Alchemy API. Those APIs can have powerful things that they can do for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there. There’s a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
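
On the rich snippet point in the list above, structured data is what powers most of those enhanced listings. Here’s a minimal, hypothetical schema.org sketch for a product page; the product name and numbers are invented.

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
</script>

Whether Google actually shows review stars for markup like this depends on their guidelines and quality thresholds, but without it you’re generally not even eligible.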

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



How To Expose “Top 10” Style Posts

How accurate are “Top 10” posts? How do you define influence, compare them, or know if the author was being genuine? Wonder no more… You have probably seen countless articles telling you that you must follow these top 10 industry influencers on Twitter or subscribe to their blogs. However, with Majestic’s Clique…



Everything You Need to Know About Mobile App Search

Posted by Justin_Briggs

Mobile isn’t the future. It’s the present. Mobile apps are not only changing how we interact with devices and websites, they’re changing the way we search. Companies are creating meaningful experiences on mobile-friendly websites and apps, which in turn create new opportunities to get in front of users.

I’d like to explore the growth of mobile app search and its current opportunities to gain visibility and drive engagement.

Rise of mobile app search

The growth of mobile device usage has driven a significant lift in app-related searches. This is giving rise to mobile app search as a vertical within traditional universal search.

While it has been clear for some time that mobile search is important, that importance has been more heavily emphasized by Google recently, as they continue to push mobile-friendly labels in SERPs, and are likely increasing mobile-friendliness’s weight as a ranking factor.

The future of search marketing involves mobile, and it will not be limited to optimizing HTML webpages, creating responsive designs, and optimizing UX. Mobile SEO is a world where apps, knowledge graph, and conversational search are front and center.

For the top 10 leading properties online, 34% of visitors are mobile-only (comScore data), and, anecdotally, we’re seeing similar numbers with our clients, if not more.

Mobile device and app growth

It’s also worth noting that 72% of mobile engagement relies on apps vs. on browsers. Looking at teen usage, apps are increasingly dominant. Additionally, 55% of teens use voice search more than once per day.

If you haven’t read it, grab some coffee and read A Teenager’s View on Social Media, which is written by a 19-year-old who gives his perspective on online behavior. Reading between the lines shows a number of subtle shifts in behavior. I noticed that every time I expected him to say website, he said application. In fact, he referenced application 15 times, and it is the primary way he describes social networks.

This means that one of the fastest-growing segments of mobile users cannot be marketed to by optimizing HTML webpages alone, requiring search marketers to expand their skills into app optimization.

The mobile app pack

This shift is giving rise to the mobile app pack and app search results, which are triggered on searches from mobile devices in instances of high mobile app intent. Think of these as being similar to local search results. Considering mobile searcher behavior, these listings dominate user attention.

Mobile app search results and mobile app pack

As with local search, mobile app search can reorder traditional results, completely push them down, or integrate app listings with traditional web results.

You can test on your desktop using a user-agent switcher, or by searching on your iOS or Android device.

There are slight differences between iPhone and Android mobile app results:

iOS and Android mobile search result listing

From what I’ve seen, mobile app listings trigger more frequently, and with more results, on Android search results when compared to iOS. Additionally, iOS mobile app listings are represented as a traditional website result listing, while mobile app listings on Android are more integrated.

Some of the differences also come from the differences in app submission guidelines on the two major stores, the Apple App Store and Google Play.

Overview of differences in mobile app results

  1. Title – Google uses the app listing page’s HTML title (which is the app’s title). iOS app titles can exceed the 55-62 characters that Google displays, which causes wrapping and title truncation, just like a traditional result. Android app title requirements are shorter, so titles are typically shorter on Android mobile app listings.
  2. URL – iOS mobile app listings display the iTunes URL to the App Store as part of the search result.
  3. Icon – iOS icons are square and Android icons have rounded corners.
  4. Design – Android results stand out more, with an “Apps” headline above the pack and a link to Google Play at the end.
  5. App store content – The other differences show up in the copy, ratings, and reviews on each app store.

Ranking in mobile app search results

Ranking in mobile app search results is a combination of App Store Optimization (ASO) and traditional SEO. The on-page factors are dependent upon your app listing, so optimization starts with having solid ASO. If you’re not familiar with ASO, it’s the process of optimizing your app listing for internal app store search.

Basics of ASO

Ranking in the Apple App Store and in Google Play is driven by two primary factors: keyword alignment and app performance. Text fields in the app store listing, such as title, description, and keyword list, align the app with a particular set of keywords. Performance metrics including download velocity, app ratings, and reviews determine how well the app will rank for each of those keywords. (Additionally, the Google Play algorithm may include external, web-based performance metrics like citations and links as ranking factors.)

App store ranking factors

Mobile app listing optimization

While I won’t explore ASO in-depth here, as it’s very similar to traditional SEO, optimizing app listings is primarily a function of keyword targeting.

Tools like Sensor Tower, MobileDevHQ, and App Annie can help you with mobile app keyword research. However, keep in mind that mobile app search listings show up in universal search, so it’s important to leverage traditional keyword research tools like the AdWords Tool or Google Trends.

While there are similarities with ASO, optimizing for these mobile app search listings on the web has some slight differences.

Differences between ASO & mobile app SEO targeting

  1. Titles – While the Apple App Store allows relatively long titles, they are limited to the preview length in organic search. Titles should be optimized with Google search in mind, in addition to optimizing for the app store. Additionally, several apps aggressively target keywords in their app title, but caution should be used as spamming keywords could influence app performance in Google.
  2. Description – The app description on the App Store may not be a factor in internal search, but it will impact external app search results. Leverage keyword targeting best practices when writing your iOS app description, as well as your Android app description.
  3. Device and platform keywords – When targeting for app store search, it is not as important to target terms related to the OS or device. However, these terms can help visibility in external search. Include device and OS terms, such as Android, Samsung Note, iOS, iPad, and iPhone.

App performance optimization

Outside of content optimization, Google looks at the performance of the app. On the Android side, they have access to the data, but for iOS they have to rely on publicly available information.

App performance factors

  • Number of ratings
  • Average rating score
  • Content and sentiment analysis of reviews
  • Downloads / installs
  • Engagement and retention
  • Internal links on app store

For iOS, the primary public metrics are ratings and reviews. However, app performance can be inferred using the App Store’s ranking charts and search results, which can be leveraged as proxies of these performance metrics.


The following objectives will have the greatest influence on your mobile app search ranking:

  1. Increase your average rating number
  2. Increase your number of ratings
  3. Increase downloads

For app ratings and reviews, leverage platforms like Apptentive to improve your ratings. They are very effective at driving positive ratings. Additionally, paid tactics are a great way to drive install volume and are one area where paid budget capacity could directly influence organic results in Google. Anecdotally, both app stores use rating numbers (typically above or below 4 stars) to make decisions around promoting an app, either through merchandising spots or co-branded campaigns. I suspect this is being used as a general cut-off for what is displayed in universal results. Increasing your rating above 4 stars should improve the likelihood you’ll appear in mobile app search results.

Lastly, think of merchandising and rankings in terms of internal linking structures. The more visible you are inside of the app store, the more visibility you have in external search.

App web performance optimization

Lastly, we’re talking Google rankings, so factors like links, citations, and social shares matter. You should be conducting content marketing, PR, and outreach for your app. Focus on merchandising your app on your own site, as well as increasing coverage of your app (linking to the app store page). The basics of link optimization apply here.

App indexation – drive app engagement

Application search is not limited to driving installs via app search results. With app indexing, you can leverage your desktop/mobile website visibility in organic search to drive engagement with those who have your app installed. Google can discover and expose content deep inside your app directly in search results. This means that when a user clicks on your website in organic search, it can open your app directly, taking them to that exact piece of content in your app, instead of opening your website.

App indexation fundamentally changes technical SEO, extending SEO from server and webpage setup to the setup and optimization of applications.

App indexation on Google

This also fundamentally changes search. Your most avid and engaged user may choose to no longer visit your website. For example, on my Note 4, when I click a link to a site of a brand that I have an app installed for, Google gives me the option not only to open in the app, but to set opening the app as a default behavior.

If a user chooses to open your site in your app, they may never visit your site from organic search again.

App indexation is currently limited to Android devices, but there is evidence to suggest that it’s already in the works and is soon to be released on iOS devices. There have been hints for some time, but markup is showing up in the wild suggesting that Google is actively working with Apple and select brands to develop iOS app indexing.

URI optimization for apps

The first step in creating an indexable app is to set up your app to support deep links. Deep links are URIs that are understood by your app and will open up a specific piece of content. They are effectively URLs for applications.

Once this URI is supported, a user can be sent to deep content in the app. These can be discovered as alternates to your desktop site’s URLs, similar to how separate-site mobile sites are defined as alternate URLs for the desktop site. In instances of proper context (on an Android device with the app installed), Google can direct a user to the app instead of the website.

Setting this up requires working with your app developer to implement changes inside the app as well as working with your website developers to add references on your desktop site.

Adding intent filters

Android has documented the technical setup of deep links in detail, but it starts with setting up intent filters in an app’s Android manifest file. This is done with the following code.

<activity android:name="com.example.android.GizmosActivity"
          android:label="@string/title_gizmos" >
    <intent-filter android:label="@string/filter_title_viewgizmos">
        <action android:name="android.intent.action.VIEW" />
        <data android:scheme="http"
              android:host="example.com"
              android:pathPrefix="/gizmos" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
    </intent-filter>
</activity>

This dictates the technical optimization of your app URIs for app indexation and defines the elements used in the URI example above.

  • The <intent-filter> element should be added for activities that should be launchable from search results.
  • The <action> element specifies the ACTION_VIEW intent action so that the intent filter can be reached from Google Search.
  • The <data> tag represents a URI format that resolves to the activity. At minimum, the <data> tag must include the android:scheme attribute.
  • Include the BROWSABLE category. The BROWSABLE category is required in order for the intent filter to be accessible from a web browser. Without it, clicking a link in a browser cannot resolve to your app. The DEFAULT category is optional, but recommended. Without this category, the activity can be started only with an explicit intent, using your app component name.

Testing deep links

Google has created tools to help test your deep link setup. You can use Google’s Deep Link Test Tool to test your app behavior with deep links on your phone. Additionally, you can create an HTML page with an intent:// link in it.

For example:

<a href="intent://example.com/page-1#Intent;scheme=http;package=com.example.android;end;">http://example.com/page-1</a>

This link would open up deep content inside the app from the HTML page.

App URI crawl and discovery

Once an app has deep link functionality, the next step is to ensure that Google can discover these URIs as part of its traditional desktop crawling.

Ways to get apps crawled

  1. Rel=”alternate” in HTML head
  2. ViewAction with Schema.org
  3. Rel=”alternate” in XML Sitemap

Implementing all three will create clear signals, but at minimum you should add the rel=”alternate” tag to the HTML head of your webpages.

Effectively, think of the app URI as being similar to a mobile site URL when setting up a separate-site mobile site for SEO. The mobile deep link is an alternative way to view a webpage on your site. You map a piece of content on your site to a corresponding piece of content inside the app.

Before you get started, be sure to verify your website and app following the guidelines here. This will verify your app in Google Play Developer Console and Google Webmaster Tools.

#1: Rel=”alternate” in HTML head

On an example page, such as example.com/page-1, you would add the following code to the head of the document. Again, very similar to separate-site mobile optimization.

<html>
<head>
    ...
    <link rel="alternate" href="android-app://com.example.android/http/example.com/page-1" />
    ...
</head>
<body>
</body>
</html>
#2: ViewAction with Schema.org

Additionally, you can reference the deep link using Schema.org and JSON-LD by using a ViewAction.

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebPage",
  "@id": "http://example.com/gizmos",
  "potentialAction": {
    "@type": "ViewAction",
    "target": "android-app://com.example.android/http/example.com/gizmos"
  }
}
</script>
#3: Rel=”alternate” in XML sitemap

Lastly, you can reference the alternate URL in your XML Sitemaps, similar to using the rel=”alternate” for mobile sites.

<?xml version="1.0" encoding="UTF-8" ?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"> 
<url> 
<loc>http://example.com/page-1</loc> 
<xhtml:link rel="alternate" href="android-app://com.example.android/http/example.com/page-1" /> 
</url> 
... 
</urlset>

Once these are in place, Google can discover the app URI and provide your app as an alternative way to view content found in search.

Bot control and robots noindex for apps

There may be instances where there is content within your app that you do not want indexed in Google. A good example of this might be content or functionality that is built out on your site, but has not yet been developed in your app. This would create an inferior experience for users. The good news is that we can block indexation with a few updates to the app.

First, add the following to your app resource directory (res/xml/noindex.xml).

<?xml version="1.0" encoding="utf-8"?> 
<search-engine xmlns:android="http://schemas.android.com/apk/res/android"> 
<noindex uri="http://example.com/gizmos/hidden_uri"/> 
<noindex uriPrefix="http://example.com/gizmos/hidden_prefix"/> 
<noindex uri="gizmos://hidden_path"/> 
<noindex uriPrefix="gizmos://hidden_prefix"/> 
</search-engine>

As you can see above, you can block an individual URI or define a URI prefix to block entire folders.

Once this has been added, you need to update the AndroidManifest.xml file to denote that you’re using noindex.xml to block indexation.

<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.android.Gizmos"> 
<application> 
<activity android:name="com.example.android.GizmosActivity" android:label="@string/title_gizmos" > 
<intent-filter android:label="@string/filter_title_viewgizmos"> 
<action android:name="android.intent.action.VIEW"/> 
... 
</activity> 
<meta-data android:name="search-engine" android:resource="@xml/noindex"/> 
</application> 
<uses-permission android:name="android.permission.INTERNET"/> 
</manifest>

App indexing API to drive re-engagement

In addition to URI discovery via desktop crawl, your mobile app can integrate Google’s App Indexing API, which communicates with Google when users take actions inside your app. This sends information to Google about what users are viewing in the app. This is an additional method for deep link discovery and has some benefits.

The primary benefit is the ability to appear in autocomplete. This can drive re-engagement through Google Search query autocompletions, providing access to inner pages in apps.

App auto suggest

Again, be sure to verify your website and app following the guidelines here. This will verify your app in Google Play Developer Console and Google Webmaster Tools.

App actions with knowledge graph

The next, and most exciting, evolution of search is leveraging actions. These will be powerful when combined with voice search, allowing search engines to take action on behalf of users, turning spoken language into executed actions.

App indexing allows you to take advantage of actions by allowing Google to not only launch an app, but execute actions inside of the app. Order me a pizza? Schedule my meeting? Drive my car? Ok, Google.

App actions work via entity detection and the application of the knowledge graph, allowing search engines to understand actions, words, ideas and objects. With that understanding, they can build an action graph that allows them to define common actions by entity type.

Here is a list of actions currently supported by Schema.org.

For example, a ListenAction could be used to play a song in a music app. This can be achieved with the following markup.

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "MusicGroup",
  "name": "Weezer",
  "potentialAction": {
    "@type": "ListenAction",
    "target": "android-app://com.spotify.music/http/we.../listen"
  }
}
</script>
Once this is implemented, these app actions can begin to appear in search results and knowledge graph.

deep links in app search results

Overview of mobile app search opportunities

In summary, there are five primary ways to increase visibility and engagement for your mobile app in traditional organic search efforts.

Mobile apps in search results

The growth of mobile search is transforming how we define technical SEO, moving beyond front-end and back-end optimization of websites into the realm of structured data and application development. As app indexing expands to include iOS, I suspect the possibilities and opportunities associated with indexing applications, and their corresponding actions, will grow extensively.

For those with Android apps, app indexing is a potential leapfrog-style opportunity to get ahead of competitors who are dominant in traditional desktop search. Those with iOS apps should start by optimizing their app listings, while preparing to implement indexation, as I suspect it’ll be released for iOS this year.

Have you been leveraging traditional organic search to drive visibility and engagement for apps? Share your experiences in the comments below.



Announcing the New & Improved Link Intersect Tool

Posted by randfish

Y’all remember how last October, we launched a new section in Open Site Explorer called “Link Opportunities?” While I was proud of that work, there was one section that really disappointed me at the time (and I said as much in my comments on the post).

Well, today, that disappointment is over, because we’re stepping up the Link Intersect tool inside OSE big time:

Literally thousands of sweet, sweet link opportunities are now yours at the click of a button

In the initial launch, Link Intersect used Freshscape (which powers Fresh Web Explorer). Freshscape is great for certain kinds of data – links and mentions that come from newly published pages that are in news sources, blogs, and feeds. But it’s not great for non-news/blogs/feed sources because it’s intentionally avoiding those!

For example, in the screenshot above, I wanted to see all the pages that link to SeriousEats.com and SplendidTable.org but don’t link to SmittenKitchen.com.

That’s 671 more, juicy link opportunities thanks to the hard work of the Moz Big Data and Research Tools teams.

How does the new Link Intersect work?

The tool looks at the top 250,000 links our index has pointing to each of the intersecting targets you enter, and the top 1 million links in our index pointing to the excluded URL.

Link Intersect then runs a differential comparison to determine which of the 250K links to each of the intersecting targets come from the same URL or root domain, and removes any that also appear among the top million links pointing to the excluded URL, root domain, or subdomain.

This means that, for sites and pages with massive quantities of links, we may not show every intersecting link we know about; but since the results are sorted in Page Authority order, you’ll get the highest-quality, most important ones at the top.

You can use Link Intersect to see three unique views on the data:

  • Pages that link to subdomains (particularly useful if you’re interested in shared links to sites on hosted subdomains like blogspot, wordpress, etc or to a specific subdomain section of a competitor’s site)
  • Pages that link to root domains (my personal favorite, as I find the results the most comprehensive)
  • Root domains that link to the root domains (great if you’re trying to get a broad sense of domain-level outreach/marketing targets)

Note that it’s possible the root domains report will actually expose more links than the pages reports, because the domain-level link graph is easier and faster to sort through, so the 250K limit is less of a barrier.

Like most of the reports in Open Site Explorer, Link Intersect comes with a handy CSV Export option:

When it finishes (my most recent one took just under 3 minutes to run and email me), you’ll get a nice email like this one:

Please ignore the grammatical errors. I’m sure our team will fix those up soon 🙂

Why are these such good link/outreach/marketing targets?

Generally speaking, this type of data is invaluable for link outreach because these sites and pages are ones that clearly care about the shared topics or content of the intersecting targets. If you enter two of your primary competitors, you’ll often get news media, blog posts, reference resources, events, trade publications, and more that produce content in your topical niche.

They’re also good targets because they actually link out! This means you can avoid sifting through sites whose policies or practices mean they’re unlikely to ever link to you – if they’ve linked to those other two chaps, why not you, too?!

Basically, you can check the trifecta of link opportunity goodness boxes (which I’ve helpfully illustrated above, because that’s just the kind of SEO dork I am).

Link Intersect is limited only by your own creativity – so long as you can keep finding sites and pages on the web whose links might also be a match for your own site, we can keep digging through trillions of links, finding the intersects, and giving them back to you.

3 examples of Link Intersect in action

Let’s look at some ways we might put this to use in the real world:

#1: I’m trying to figure out who links to my two big competitors in the world of book reviews

First off, remember that Link Intersect works on a root domain or subdomain level, so we wouldn’t want to use something like the NYTimes’ review of books, because we’d be finding all the intersections to NYTimes.com. Instead, we want to pick more topically-focused domains, like these two:

You’ll also note that I’ve used a fake website as my excluded URL – this is a great trick for when you’re simply interested in any sites/pages that link to two domains and don’t need to remove a particular target.

#2: I’ve got a locally-focused website doing plumbing and need a few link sources to help boost my potential to rank in local and organic SERPs

In this instance, I’ll certainly look at pages linking to combinations of the top ranking sites in the local results, e.g. the 15 results for this query:

This is a solid starting point, especially considering how few links local sites often need to perform well. But we can get creative by branching outside of plumbing and exploring related fields like construction:

Focusing on better-linked-to industries and websites will give more results, so we want to try to broaden rather than narrow our categories and look for the most-linked-to sites in given verticals for comparisons.

#3: I’m planning some new content around weather patterns for my air conditioning website and want to know what news and blog sites cover extreme weather content

First, I’m going to start by browsing some search results for content in this field that’s received some serious link activity. By turning on my Mozbar’s SERPs overlay, I can see the sites and pages that have generated loads of links:

Now I can run a few combinations of these through the Link Intersect Tool:

While those domain names make me fear for humanity’s intelligence and future survival, they also expose a great link opportunity tactic I hadn’t previously considered – climate science deniers and the more politically charged universe of climate science overall.


I hope you enjoy the new Link Intersect tool as much as I have been – I think it’s one of the best things we’ve put in Open Site Explorer in the last few months, though what we’re releasing in March might beat even that, so stay tuned!

And, as always, please do give us feedback and feel free to ask questions in the comments below or through the Moz Community Q+A.



A Universal SEO Strategy Audit in 5 Steps – Whiteboard Friday

Posted by randfish

When it comes to building an SEO strategy, many marketers (especially those who don’t spend a significant amount of time with SEO) start off by asking a few key questions. That’s a good start, but only if you’re asking the right questions. In today’s Whiteboard Friday, Rand puts the usual suspects on the chopping block, showing us the five things we should really be looking into when formulating our SEO strategy.

For reference, here’s a still of this week’s whiteboard!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about building an SEO strategy and having a universal set of five questions that can get you there.

So number one: What keywords do you want to rank for?


Number two: How do we get links?

Number three: Site speed. Mobile? Doesn’t even seem like a question.

Number four: What about Penguin and Panda?

Number five: When do I get money?

This is bologna. That’s not a strategy. Some of those get at tactics you might invest in as an SEO, but this is not an SEO strategy. Unfortunately, this is how a lot of conversations about SEO start with teams, with CMOs, with managers, with CEOs, with clients or potential clients, and it’s very frustrating because you can’t truly do a great job with SEO just at the tactical level. If you don’t start with a compelling strategy, doing all of these things is only going to produce a small amount of potential return compared to if you ask the right questions and get your strategy set before you begin an SEO process and start nailing your tactics.

So that’s what I want to go through. I spend a lot of time thinking through these things and analyzing a lot of posts that other people have put up and questions that folks have put in our Q&A system and others, on Quora and other places. I think actually every great SEO strategy that I have ever seen can be distilled down to answers that come from these five questions.

So number one: What does our organization create that helps solve searchers’ questions or problems? That could be, “Or what will we create in the future?” It might be that you haven’t yet created the thing or things that’s going to help solve searchers’ questions or problems. But that thing that you make, that product or service or content that you are making, that expertise that you hold, something about your organization is creating value that if only searchers could access it, they would be immensely thankful.

It is possible, and I have seen plenty of examples of companies that are so new or so much on the cutting edge that they’re producing things that aren’t solving questions people are asking yet. The problem that you’re solving then is not a question. It’s not something that’s being searched for directly. It usually is very indirect. If you’re creating a machine that, let’s say, turns children’s laughter into energy, as they do in the film “Monsters, Inc.”, that is something very new. No one is searching for a machine to turn kids’ laughter into energy. However, many people are searching for alternative energy. They’re searching for broader types of things and concepts. By the way, if you do invent that machine, I think you’ll probably nail a lot of that interest level stuff.

If you have a great answer to this, you can then move on to, “What is the unique value we provide that no one else does?” We talked about unique value previously on Whiteboard Friday. There’s a whole episode you can watch about that. Basically, if everyone else out there is producing X and X+1 and X+2, you’ve either got to be producing X times 10, or you’ve got to be producing Y, something that is highly unique or is unique because it is of such better quality, such greater quality. It does the job so much better than anything else out there. It’s not, “Hey, we’re better than the top ten search results.” It’s, “Why are you ten times better than anything on this list?”

The third question is, “Who’s going to help amplify our message, and why will they do it?” This is essential because SEO has turned from an exercise where we essentially take content that already exists, or create some content that will solve a searcher problem, and then try and acquire links to it, or point links to it, or point ranking signals at it, into one where we have to go out and earn those ranking signals. Because we’ve shifted from link building or ranking signal building to ranking signal earning, we better have people who will help amplify our message, the content that we create, the value that we provide, the service or the product, the message about our brand.

If we don’t have those people who, for some reason, care enough about what we’re doing to help share it with others, we’re going to be shouting into a void. We’re going to get no return on the investment of broadcasting our message or reaching out one to one, or sharing on social media, or distributing. It’s not going to work. We need that amplification. There must be some of it, and because we need amplification in order to earn these ranking signals, we need an answer to who.

That who is going to depend highly on your target audience, your target customers, and who influences your target customers, which may be a very different group than other customers just like them. There are plenty of businesses in industries where your customers will be your worst amplifiers because they love you and they don’t want to share you with anyone else. They love whatever product or service you’re providing, and they want to keep you all to themselves. By the way, they’re not on social media, and they don’t do sharing. So you need another level above them. You need press or bloggers or social media sharers, somebody who influences your target audience.

Number four: What is our process for turning visitors from search into customers? If you have no answer to this, you can’t expect to earn search visits and have a positive return on your investment. You’ve got to be building out that funnel that says, “Aha, people have come to us through channel X, search, social media, e-mail, directly visited, referred from some other website, through business development, through conference or trade show, whatever it is. Then they come back to our website. Then they sign up for an e-mail. Then they make a conversion. How does that work? What does our web-marketing funnel look like? How do we take people that visited our site for the first time from search, from a problem or a question that they had that we answered, and now how do they become a customer?” If you don’t have that process yet, you must build it. That’s part of a great SEO strategy. Then optimization of this is often called conversion rate optimization.

The last question, number five: How do we expose what we do that provides value here in a way that engines can easily crawl, index, understand, and show off? This is getting to much more classic SEO stuff. Many companies have something wonderful that they’ve built, but it’s just a mobile app or a web app that has no physical URL structure that anyone can crawl and be exposed to, or it’s a service-based business.

Let’s say it’s a legal services firm. How are we going to turn the expertise of our legal team into something that engines can perceive? Maybe we have the answers to these questions, but we need to find some way to show it off, and that’s where content creation comes into play. So we don’t just need good quality content that can be crawled and indexed. It also must be understood, and this ties a little bit to things we’ve talked about in the past around Hummingbird, where it’s clear that the content is on the topic and that it really answers the searchers’ underlying question, not just uses the keywords the searcher is using. Although, using the keywords is still important from a classic SEO perspective.

Then showing off that content is, “How do we do a great job of applying rich snippets, of applying schema, of having a very compelling title and description and URL, of getting that ranked highly, of learning what our competitors are doing so that we can uniquely differentiate from them in the search results themselves and improve our click-through rates,” all of those kinds of things.
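
As a small, hypothetical illustration of that last point, using the legal services example from earlier (the firm name, title, and description below are invented, not a template):

<head>
    <title>How Long Does a Personal Injury Claim Take? | Example Law Firm</title>
    <meta name="description" content="A plain-English timeline of the personal injury claim process, from filing to settlement, written by the attorneys at Example Law Firm." />
</head>

The title and description don’t determine rankings on their own, but they’re a big part of earning the click once you’re in the results.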

If you answer these five questions, or if your customer, your client, your team, your boss already has great answers to these five questions, then you can start getting pretty tactical and be very successful. If you don’t have answers to these yet, go get them. Make them explicit, not just implicit. Don’t just assume you know what they are. Have them list them. Make sure everyone on the team, everyone in the SEO process has bought into, “Yes, these are the answers to those five questions that we have. Now, let’s go do our tactics.” I think you’ll find you’re far more successful with any type of SEO project or investment.

All right gang, thanks so much for joining us on Whiteboard Friday, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com



Free SEO Tools: Our Top 11 Tools

http://www.koozai.com – We expose our top 11 free SEO tools that will help you with common tasks from keyword research to Analytics. For more information vis…
