The FT names the dotdigital Group as a driving force behind the European economy

We’re delighted to have made an appearance in the Financial Times’ inaugural report of Europe’s 1,000 fastest-growing companies.

More than 50,000 organizations were evaluated in conjunction with Statista, and those featured in the top 1,000 achieved the highest revenue growth between 2012 and 2015. The dotdigital Group came in at 917, among the likes of UKFAST, Skyscanner and Just Eat.

As the FT puts it, innovative and fast-growing companies play a significant role in the European economy in the 21st century, generating jobs and sustaining competitiveness. This statement rings true as we expand our business and customer base in areas such as the Nordics, Benelux and beyond.

Over the last 12 months we’ve also had a presence at a number of events in countries including Holland and Hungary, with more planned over the coming months.

dotmailer, part of the dotdigital Group, now employs 211 people in Europe, with 197 in the UK and 14 based in Belarus.

You can find out more about the dotdigital Group’s performance here, and the full list of Europe’s fastest-growing companies is available on the FT’s website.


Reblogged 1 year ago from blog.dotmailer.com

Darryl, the man behind dotmailer’s Custom Technical Solutions team

Why did you decide to come to dotmailer?

I first got to know dotmailer when the company was just a bunch of young enthusiastic web developers called Ellipsis Media back in 1999. I was introduced by one of my suppliers and we decided to bring them on board to build a recruitment website for one of our clients. That client was Amnesty International and the job role was Secretary General. Not bad for a Croydon company whose biggest client before that was Scobles the plumber’s merchants. So, I was probably dotmailer’s first ever corporate client! After that, I used dotmailer at each company I worked for and then one day they approached a colleague and me and asked us if we wanted to work for them. That was 2013.  We grabbed the opportunity with both hands and haven’t looked back since.

Tell us a bit about your role

I’m the Global Head of Technical Solutions which actually gives me responsibility for 2 teams. First, Custom Technical Solutions (CTS), who build bespoke applications and tools for customers that allow them to integrate more closely with dotmailer and make life easier. Second, Technical Pre-sales, which spans our 3 territories (EMEA, US and APAC) and works with prospective and existing clients to figure out the best solution and fit within dotmailer.

What accomplishments are you most proud of from your dotmailer time so far?

I would say so far it has to be helping to turn the CTS team from just 2 people into a group of 7 highly skilled and dedicated men and women who have become an intrinsic and valued part of the dotmailer organization. Also I really enjoy being part of the Senior Technical Management team. Here we have the ability to influence the direction and structure of the platform on a daily basis.

Meet Darryl Clark – the cheese and peanut butter sandwich lover

Can you speak a bit about your background and that of your team? What experience and expertise is required to join this team?

My background is quite diverse, from a stint in the Army, through design college, web development and business analysis, to heading up my current teams. I would say the most valuable skill that I have is being highly analytical. I love nothing more than listening to a client’s requirements and digging deep to work out how we can meet them, if not exceed them.

As a team, we love nothing more than brainstorming our ideas. Every member has a valid input and we listen. Everyone has the opportunity to influence what we do and our motto is “there is no such thing as a stupid question.”

To work in my teams you have to be analytical but open-minded to the fact that other people may have a better answer than you. Embrace other people’s input and use it to give our clients the best possible solution. We are hugely detail-conscious, but we also have to be acutely aware of tailoring what we say to our audience, so being able to talk to anyone at any level is hugely valuable.

How much of the dotmailer platform is easily customizable and when does it cross over into something that requires your team’s expertise? How much time is spent on these custom solutions one-time or ongoing?

I’ll let you in on a little secret here. We don’t actually do anything that our customers can’t do with dotmailer given the right knowledge and resources. This is because we build all of our solutions using the dotmailer public API. The API has hundreds of methods in both SOAP and REST versions, which allows you to do a huge amount with the dotmailer platform. We do have a vast amount of experience and knowledge in the team so we may well be able to build a solution quicker than our customers. We are more than happy to help them and their development teams build a solution using us on a consultancy basis to lessen the steepness of the learning curve.
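To give a feel for what building on that public API looks like, here is a minimal Python sketch. The region host, the /v2/address-books resource and the credentials are illustrative assumptions on my part, so check the current dotmailer API reference for the exact endpoints available to your account.

```python
# Minimal sketch of calling the dotmailer v2 REST API with basic auth.
# The host, resource path, and credentials below are illustrative assumptions;
# consult the dotmailer API reference for what your account actually exposes.
import requests

API_BASE = "https://r1-api.dotmailer.com/v2"  # region-specific host (assumed)
AUTH = ("apiuser-xxxx@apiconnector.com", "your-api-password")  # placeholder credentials

def list_address_books():
    """Fetch the account's address books as a simple example call."""
    response = requests.get(f"{API_BASE}/address-books", auth=AUTH, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for book in list_address_books():
        print(book.get("id"), book.get("name"))
```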

Our aim when building a solution for a customer is that it runs silently in the background and does what it should without any fuss.

What are your plans for the Custom Tech Solutions team going forward?

The great thing about Custom Technical Solutions is you never know what is around the corner as our customers have very diverse needs. What we are concentrating on at the moment is refining our processes to ensure that they are as streamlined as possible and allow us to give as much information to the customer as we can. We are also always looking at the technology and coding approaches that we use to make sure that we build the most innovative and robust solutions.

We are also looking at our external marketing and sharing our knowledge through blogs so keep an eye on the website for our insights.

What are the most common questions that you get when speaking to a prospective customer?

Most questions seem to revolve around reassurance such as “Have you done this before?”, “How safe is my data?”, “What about security?”, “Can you talk to my developers?”, “Do I need to do anything?”.  In most instances, we are the ones asking the questions as we need to find out information as soon as possible so that we can analyse it to ensure that we have the right detail to provide the right solution.

Can you tell us about the dotmailer differentiators you highlight when speaking to prospective customers that seem to really resonate?

We talk a lot about working with best of breed so for example a customer can use our Channel Extensions in automation programs to fire out an SMS to a contact using their existing provider. We don’t force customers down one route, we like to let them decide for themselves.

Also, I really like to emphasize the fact that there is always more than one way to do something within the dotmailer platform. This means we can usually find a way to do something that works for a client within the platform. If not, then we call in CTS to work out if there is a way that we can build something that will — whether this is automating uploads for a small client or mass sending from thousands of child accounts for an enterprise level one.

What do you see as the future of marketing automation technology?  Will one size ever fit all? Or more customization going forward?

The 64 million dollar question. One size will never fit all. Companies and their systems are too organic for that. There isn’t one car that suits every driver or one racquet that suits every sport. Working with a top-drawer partner network and building our system to be as open as possible from an integration perspective means that our customers can make dotmailer mold to their business and not the other way round. Add to that the fact that we are building lots of features in the platform that will blow your socks off.

Tell us a bit about yourself – favorite sports team, favorite food, guilty pleasure, favorite band, favorite vacation spot?

I’m a dyed-in-the-wool Gooner (aka Arsenal Football Club fan) thanks to my Grandfather leading me down the right path as a child. If you are still reading this after that bombshell, then food-wise I pretty much like everything apart from coriander, which as far as I’m concerned is the Devil’s own spawn. I don’t really have a favorite band, but am partial to a bit of Level 42 and Kings of Leon, and you will also find me listening to 90s drum and bass and proper old school hip hop. My favorite holiday destination is any decent villa where I can relax and spend time with my family, and I went to Paris recently and loved that. Guilty pleasure – well, that probably has to be confessing to liking Coldplay, or the fact that my favorite sandwich is peanut butter, cheese and salad cream. Go on, try it, you’ll love it.

Want to meet more of the dotmailer team? Say hi to Darren Hockley, Global Head of Support, and Dan Morris, EVP for North America.

Reblogged 2 years ago from blog.dotmailer.com

Distance from Perfect

Posted by wrttnwrd

In spite of all the advice, the strategic discussions and the conference talks, we Internet marketers are still algorithmic thinkers. That’s obvious when you think of SEO.

Even when we talk about content, we’re algorithmic thinkers. Ask yourself: How many times has a client asked you, “How much content do we need?” How often do you still hear “How unique does this page need to be?”

That’s 100% algorithmic thinking: Produce a certain amount of content, move up a certain number of spaces.

But you and I know it’s complete bullshit.

I’m not suggesting you ignore the algorithm. You should definitely chase it. Understanding a little bit about what goes on in Google’s pointy little head helps. But it’s not enough.

A tale of SEO woe that makes you go “whoa”

I have this friend.

He ranked #10 for “flibbergibbet.” He wanted to rank #1.

He compared his site to the #1 site and realized the #1 site had five hundred blog posts.

“That site has five hundred blog posts,” he said, “I must have more.”

So he hired a few writers and cranked out five thousand blog posts that melted Microsoft Word’s grammar check. He didn’t move up in the rankings. I’m shocked.

“That guy’s spamming,” he decided, “I’ll just report him to Google and hope for the best.”

What happened? Why didn’t adding five thousand blog posts work?

It’s pretty obvious: My, uh, friend added nothing but crap content to a site that was already outranked. Bulk is no longer a ranking tactic. Google’s very aware of that tactic. Lots of smart engineers have put time into updates like Panda to compensate.

He started at #10 and, five thousand posts later, ended up with more posts and no rankings.

Alright, yeah, I was Mr. Flood The Site With Content, way back in 2003. Don’t judge me, whippersnappers.

Reality’s never that obvious. You’re scratching and clawing to move up two spots, you’ve got an overtasked IT team pushing back on changes, and you’ve got a boss who needs to know the implications of every recommendation.

Why fix duplication if rel=canonical can address it? Fixing duplication will take more time and cost more money. It’s easier to paste in one line of code. You and I know it’s better to fix the duplication. But it’s a hard sell.

Why deal with 302 versus 404 response codes and home page redirection? The basic user experience remains the same. Again, we just know that a server should return one home page without any redirects and that it should send a ‘not found’ 404 response if a page is missing. If it’s going to take 3 developer hours to reconfigure the server, though, how do we justify it? There’s no flashing sign reading “Your site has a problem!”
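If you need to show someone that the sign really is flashing, a quick script can do it. This is a minimal sketch, assuming Python’s requests library and a site of your own: it simply reports whether the home page answers directly and whether a made-up path returns a hard 404.

```python
# Quick check: does the home page answer directly (200, no redirect),
# and does a made-up path return a proper 404 rather than a 302 or a soft 200?
import requests

SITE = "https://www.example.com"  # replace with the site you're auditing

def check(path, expected_status):
    resp = requests.get(SITE + path, allow_redirects=False, timeout=15)
    verdict = "OK" if resp.status_code == expected_status else "PROBLEM"
    print(f"{path!r:35} -> {resp.status_code} (expected {expected_status}) {verdict}")

check("/", 200)                          # home page: one response, no redirect hop
check("/this-page-does-not-exist", 404)  # missing pages: hard 404, not 302 or 200
```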

Why change this thing and not that thing?

At the same time, our boss/client sees that the site above theirs has five hundred blog posts and thousands of links from sites selling correspondence MBAs. So they want five thousand blog posts and cheap links as quickly as possible.

Cue crazy music.

SEO lacks clarity

SEO is, in some ways, for the insane. It’s an absurd collection of technical tweaks, content thinking, link building and other little tactics that may or may not work. A novice gets exposed to one piece of crappy information after another, with an occasional bit of useful stuff mixed in. They create sites that repel search engines and piss off users. They get more awful advice. The cycle repeats. Every time it does, best practices get more muddled.

SEO lacks clarity. We can’t easily weigh the value of one change or tactic over another. But we can look at our changes and tactics in context. When we examine the potential of several changes or tactics before we flip the switch, we get a closer balance between algorithm-thinking and actual strategy.

Distance from perfect brings clarity to tactics and strategy

At some point you have to turn that knowledge into practice. You have to take action based on recommendations, your knowledge of SEO, and business considerations.

That’s hard when we can’t even agree on subdomains vs. subfolders.

I know subfolders work better. Sorry, couldn’t resist. Let the flaming comments commence.

To get clarity, take a deep breath and ask yourself:

“All other things being equal, will this change, tactic, or strategy move my site closer to perfect than my competitors?”

Breaking it down:

“Change, tactic, or strategy”

A change takes an existing component or policy and makes it something else. Replatforming is a massive change. Adding a new page is a smaller one. Adding ALT attributes to your images is another example. Changing the way your shopping cart works is yet another.

A tactic is a specific, executable practice. In SEO, that might be fixing broken links, optimizing ALT attributes, optimizing title tags or producing a specific piece of content.

A strategy is a broader decision that’ll cause change or drive tactics. A long-term content policy is the easiest example. Shifting away from asynchronous content and moving to server-generated content is another example.

“Perfect”

No one knows exactly what Google considers “perfect,” and “perfect” can’t really exist, but you can bet a perfect web page/site would have all of the following:

  1. Completely visible content that’s perfectly relevant to the audience and query
  2. A flawless user experience
  3. Instant load time
  4. Zero duplicate content
  5. Every page easily indexed and classified
  6. No mistakes, broken links, redirects or anything else generally yucky
  7. Zero reported problems or suggestions in each search engine’s webmaster tools, sorry, “Search Consoles”
  8. Complete authority through immaculate, organically-generated links

These 8 categories (and any of the other bazillion that probably exist) give you a way to break down “perfect” and help you focus on what’s really going to move you forward. These different areas may involve different facets of your organization.

Your IT team can work on load time and creating an error-free front- and back-end. Link building requires the time and effort of content and outreach teams.

Tactics for relevant, visible content and current best practices in UX are going to be more involved, requiring research and real study of your audience.

What you need and what resources you have are going to impact which tactics are most realistic for you.

But there’s a basic rule: If a website would make Googlebot swoon and present zero obstacles to users, it’s close to perfect.

“All other things being equal”

Assume every competing website is optimized exactly as well as yours.

Now ask: Will this [tactic, change or strategy] move you closer to perfect?

That’s the “all other things being equal” rule. And it’s an incredibly powerful rubric for evaluating potential changes before you act. Pretend you’re in a tie with your competitors. Will this one thing be the tiebreaker? Will it put you ahead? Or will it cause you to fall behind?

“Closer to perfect than my competitors”

Perfect is great, but unattainable. What you really need is to be just a little perfect-er.

Chasing perfect can be dangerous. Perfect is the enemy of the good (I love that quote. Hated Voltaire. But I love that quote). If you wait for the opportunity/resources to reach perfection, you’ll never do anything. And the only way to reduce distance from perfect is to execute.

Instead of aiming for pure perfection, aim for more perfect than your competitors. Beat them feature-by-feature, tactic-by-tactic. Implement strategy that supports long-term superiority.

Don’t slack off. But set priorities and measure your effort. If fixing server response codes will take one hour and fixing duplication will take ten, fix the response codes first. Both move you closer to perfect. Fixing response codes may not move the needle as much, but it’s a lot easier to do. Then move on to fixing duplicates.

Do the 60% that gets you a 90% improvement. Then move on to the next thing and do it again. When you’re done, get to work on that last 40%. Repeat as necessary.

Take advantage of quick wins. That gives you more time to focus on your bigger solutions.

Sites that are “fine” are pretty far from perfect

Google has lots of tweaks, tools and workarounds to help us mitigate sub-optimal sites:

  • Rel=canonical lets us guide Google past duplicate content rather than fix it
  • HTML snapshots let us reveal content that’s delivered using asynchronous content and JavaScript frameworks
  • We can use rel=next and prev to guide search bots through outrageously long pagination tunnels
  • And we can use rel=nofollow to hide spammy links and banners

Easy, right? All of these solutions may reduce distance from perfect (the search engines don’t guarantee it). But they don’t reduce it as much as fixing the problems.
Just fine does not equal fixed

The next time you set up rel=canonical, ask yourself:

“All other things being equal, will using rel=canonical to make up for duplication move my site closer to perfect than my competitors?”

Answer: Not if they’re using rel=canonical, too. You’re both using imperfect solutions that force search engines to crawl every page of your site, duplicates included. If you want to pass them on your way to perfect, you need to fix the duplicate content.
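One way to make the “fine versus fixed” distinction concrete is to audit how your duplicate URL variants actually behave: do they 301 to a single canonical URL (fixed), or do they return 200 and lean on a rel=canonical tag (fine, at best)? Here’s a minimal sketch, assuming Python with requests and BeautifulSoup and some made-up variant URLs:

```python
# Classify duplicate-URL variants: "fixed" (redirects to the canonical URL)
# vs. "just fine" (returns 200 and relies on a rel=canonical tag).
import requests
from bs4 import BeautifulSoup

VARIANTS = [
    "http://example.com/widgets",
    "http://www.example.com/widgets",
    "https://example.com/widgets?sort=price",
]

for url in VARIANTS:
    resp = requests.get(url, allow_redirects=False, timeout=15)
    if resp.status_code in (301, 302, 308):
        print(f"{url} -> FIXED (redirects to {resp.headers.get('Location')})")
    elif resp.status_code == 200:
        soup = BeautifulSoup(resp.text, "html.parser")
        tag = soup.find("link", rel="canonical")
        canonical = tag["href"] if tag else "none"
        print(f"{url} -> JUST FINE at best (200, rel=canonical: {canonical})")
    else:
        print(f"{url} -> {resp.status_code}")
```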

When you use Angular.js to deliver regular content pages, ask yourself:

“All other things being equal, will using HTML snapshots instead of actual, visible content move my site closer to perfect than my competitors?”

Answer: No. Just no. Not in your wildest, code-addled dreams. If I’m Google, which site will I prefer? The one that renders for me the same way it renders for users? Or the one that has to deliver two separate versions of every page?

When you spill banner ads all over your site, ask yourself…

You get the idea. Nofollow is better than follow, but banner pollution is still pretty dang far from perfect.

Mitigating SEO issues with search engine-specific tools is “fine.” But it’s far, far from perfect. If search engines are forced to choose, they’ll favor the site that just works.

Not just SEO

By the way, distance from perfect absolutely applies to other channels.

I’m focusing on SEO, but think of other Internet marketing disciplines. I hear stuff like “How fast should my site be?” (Faster than it is right now.) Or “I’ve heard you shouldn’t have any content below the fold.” (Maybe in 2001.) Or “I need background video on my home page!” (Why? Do you have a reason?) Or, my favorite: “What’s a good bounce rate?” (Zero is pretty awesome.)

And Internet marketing venues are working to measure distance from perfect. Pay-per-click marketing has the quality score: A codified financial reward applied for reducing distance from perfect in as many elements as possible of your advertising program.

Social media venues are aggressively building their own forms of graphing, scoring and ranking systems designed to separate the good from the bad.

Really, all marketing includes some measure of distance from perfect. But no channel is more influenced by it than SEO. Instead of arguing one rule at a time, ask yourself and your boss or client: Will this move us closer to perfect?

Hell, you might even please a customer or two.

One last note for all of the SEOs in the crowd. Before you start pointing out edge cases, consider this: We spend our days combing Google for embarrassing rankings issues. Every now and then, we find one, point, and start yelling “SEE! SEE!!!! THE GOOGLES MADE MISTAKES!!!!” Google’s got lots of issues. Screwing up the rankings isn’t one of them.


Reblogged 3 years ago from tracking.feedpress.it

The Colossus Update: Waking The Giant

Posted by Dr-Pete

Yesterday morning, we woke up to a historically massive temperature spike on MozCast, after an unusually quiet weekend. The 10-day weather looked like this:

That’s 101.8°F, one of the hottest verified days on record, second only to a series of unconfirmed spikes in June of 2013. For reference, the first Penguin update clocked in at 93.1°F.

Unfortunately, trying to determine how the algorithm changed from looking at individual keywords (even thousands of them) is more art than science, and even the art is more often Ms. Johnson’s Kindergarten class than Picasso. Sometimes, though, we catch a break and spot something.

The First Clue: HTTPS

When you watch enough SERPs, you start to realize that change is normal. So, the trick is to find the queries that changed a lot on the day in question but are historically quiet. Looking at a few of these, I noticed some apparent shake-ups in HTTP vs. HTTPS (secure) URLs. So, the question becomes: are these anecdotes, or do they represent a pattern?

I dove in and looked at how many URLs for our 10,000 page-1 SERPs were HTTPS over the past few days, and I saw this:

On the morning of June 17, HTTPS URLs on page 1 jumped from 16.9% to 18.4% (a 9.9% day-over-day increase), after trending up for a few days. This represents the total real-estate occupied by HTTPS URLs, but how did rankings fare? Here are the average rankings across all HTTPS results:

HTTPS URLs also seem to have gotten a rankings boost – dropping (with “dropping” being a positive thing) from an average of 2.96 to 2.79 in the space of 24 hours.

Seems pretty convincing, right? Here’s the problem: rankings don’t just change because Google changes the algorithm. We are, collectively, changing the web every minute of the day. Often, those changes are just background noise (and there’s a lot of noise), but sometimes a giant awakens.

The Second Clue: Wikipedia

Anecdotally, I noticed that some Wikipedia URLs seemed to be flipping from HTTP to HTTPS. I ran a quick count, and this wasn’t just a fluke. It turns out that Wikipedia started switching their entire site to HTTPS around June 12 (hat tip to Jan Dunlop). This change is expected to take a couple of weeks.

It’s just one site, though, right? Well, historically, this one site is the #1 largest land-holder across the SERP real-estate we track, with over 5% of the total page-1 URLs in our tracking data (5.19% as of June 17). Wikipedia is a giant, and its movements can shake the entire web.

So, how do we tease this apart? If Wikipedia’s URLs had simply flipped from HTTP to HTTPS, we should see a pretty standard pattern of shake-up. Those URLs would look to have changed, but the SERPs around them would be quiet. So, I ran an analysis of what the temperature would’ve been if we ignored the protocol (treating HTTP/HTTPS as the same). While slightly lower, that temperature was still a scorching 96.6°F.
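For readers who want to see the shape of that analysis, here is a minimal sketch of the idea (not MozCast’s actual code): compare two days of rankings for the same keyword twice, once on the full URLs and once with the protocol stripped, and see how much of the measured change survives.

```python
# Sketch of a protocol-agnostic comparison: how much day-over-day change
# remains once http/https is ignored? Illustrative only, not MozCast's code.
from urllib.parse import urlsplit

def strip_protocol(url):
    parts = urlsplit(url)
    return parts.netloc + parts.path  # drop the scheme so http/https flips don't count

def change_rate(serp_day1, serp_day2, normalize=lambda u: u):
    """Fraction of ranking slots whose URL changed between the two days."""
    moved = sum(1 for a, b in zip(serp_day1, serp_day2) if normalize(a) != normalize(b))
    return moved / len(serp_day1)

day1 = ["http://en.wikipedia.org/wiki/Foo", "https://example.com/foo", "http://other.com/foo"]
day2 = ["https://en.wikipedia.org/wiki/Foo", "https://example.com/foo", "http://third.com/foo"]

print("raw change:        ", change_rate(day1, day2))                  # counts the HTTPS flip
print("protocol-agnostic: ", change_rate(day1, day2, strip_protocol))  # ignores it
```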

Is it possible that Wikipedia moving to HTTPS also made the site eligible for a rankings boost from previous algorithm updates, thus disrupting page 1 without any code changes on Google’s end? Yes, it is possible – even a relatively small rankings boost for Wikipedia from the original HTTPS algorithm update could have a broad impact.

The Third Clue: Google?

So far, Google has only said that this was not a Panda update. There have been rumors that the HTTPS update would get a boost, as recently as SMX Advanced earlier this month, but no timeline was given for when that might happen.

Is it possible that Wikipedia’s publicly announced switch finally gave Google the confidence to boost the HTTPS signal? Again, yes, it’s possible, but we can only speculate at this point.

My gut feeling is that this was more than just a waking giant, even as powerful of a SERP force as Wikipedia has become. We should know more as their HTTPS roll-out continues and their index settles down. In the meantime, I think we can expect Google to become increasingly serious about HTTPS, even if what we saw yesterday turns out not to have been an algorithm update.

In the meantime, I’m going to melodramatically name this “The Colossus Update” because, well, it sounds cool. If this indeed was an algorithm update, I’m sure Google would prefer something sensible, like “HTTPS Update 2” or “Securageddon” (sorry, Gary).

Update from Google: Gary Illyes said via Twitter that he’s not aware of an HTTPS update.

No comment on other updates, or the potential impact of a Wikipedia change. I feel strongly that there is an HTTPS connection in the data, but as I said – that doesn’t necessarily mean the algorithm changed.


Reblogged 3 years ago from tracking.feedpress.it

Should I Use Relative or Absolute URLs? – Whiteboard Friday

Posted by RuthBurrReedy

It was once commonplace for developers to code relative URLs into a site. There are a number of reasons why that might not be the best idea for SEO, and in today’s Whiteboard Friday, Ruth Burr Reedy is here to tell you all about why.


Let’s discuss some non-philosophical absolutes and relatives

Howdy, Moz fans. My name is Ruth Burr Reedy. You may recognize me from such projects as when I used to be the Head of SEO at Moz. I’m now the Senior SEO Manager at BigWing Interactive in Oklahoma City. Today we’re going to talk about relative versus absolute URLs and why they are important.

At any given time, your website can have several different configurations that might be causing duplicate content issues. You could have just a standard http://www.example.com. That’s a pretty standard format for a website.

But the main sources that we see of domain level duplicate content are when the non-www.example.com does not redirect to the www or vice-versa, and when the HTTPS versions of your URLs are not forced to resolve to HTTP versions or, again, vice-versa. What this can mean is if all of these scenarios are true, if all four of these URLs resolve without being forced to resolve to a canonical version, you can, in essence, have four versions of your website out on the Internet. This may or may not be a problem.

It’s not ideal for a couple of reasons. Number one, duplicate content is a problem because some people think that duplicate content is going to give you a penalty. Duplicate content is not going to get your website penalized in the same way that you might see a spammy link penalty from Penguin. There’s no actual penalty involved. You won’t be punished for having duplicate content.

The problem with duplicate content is that you’re basically relying on Google to figure out what the real version of your website is. Google is seeing the URL from all four versions of your website. They’re going to try to figure out which URL is the real URL and just rank that one. The problem with that is you’re basically leaving that decision up to Google when it’s something that you could take control of for yourself.

There are a couple of other reasons that we’ll go into a little bit later for why duplicate content can be a problem. But in short, duplicate content is no good.

However, just having these URLs not resolve to each other may or may not be a huge problem. When it really becomes a serious issue is when that problem is combined with injudicious use of relative URLs in internal links. So let’s talk a little bit about the difference between a relative URL and an absolute URL when it comes to internal linking.

With an absolute URL, you are putting the entire web address of the page that you are linking to in the link. You’re putting your full domain, everything in the link, including /page. That’s an absolute URL.

However, when coding a website, it’s a fairly common web development practice to instead code internal links with what’s called a relative URL. A relative URL is just /page. Basically what that does is it relies on your browser to understand, “Okay, this link is pointing to a page that’s on the same domain that we’re already on. I’m just going to assume that that is the case and go there.”
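You can see the mechanics in a couple of lines of Python: a relative href resolves against whatever version of the domain the browser, or a crawler, happens to be on, while an absolute href always points to one place. A minimal sketch, using example.com as a stand-in domain:

```python
# A relative link resolves against whatever version of the domain you entered on;
# an absolute link always resolves to the same URL.
from urllib.parse import urljoin

entry_points = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

for base in entry_points:
    print(urljoin(base, "/page"))  # relative link: four different URLs for one page

print(urljoin(entry_points[0], "https://www.example.com/page"))  # absolute: always the same
```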

There are a couple of really good reasons to code relative URLs

1) It is much easier and faster to code.

When you are a web developer and you’re building a site and there are thousands of pages, coding relative versus absolute URLs is a way to be more efficient. You’ll see it happen a lot.

2) Staging environments

Another reason why you might see relative versus absolute URLs is some content management systems — and SharePoint is a great example of this — have a staging environment that’s on its own domain. Instead of being example.com, it will be examplestaging.com. The entire website will basically be replicated on that staging domain. Having relative versus absolute URLs means that the same website can exist on staging and on production, or the live accessible version of your website, without having to go back in and recode all of those URLs. Again, it’s more efficient for your web development team. Those are really perfectly valid reasons to do those things. So don’t yell at your web dev team if they’ve coded relative URLS, because from their perspective it is a better solution.

Relative URLs will also cause your page to load slightly faster. However, in my experience, the SEO benefits of having absolute versus relative URLs in your website far outweigh the teeny-tiny bit longer that it will take the page to load. It’s very negligible. If you have a really, really long page load time, there’s going to be a whole boatload of things that you can change that will make a bigger difference than coding your URLs as relative versus absolute.

Page load time, in my opinion, not a concern here. However, it is something that your web dev team may bring up with you when you try to address with them the fact that, from an SEO perspective, coding your website with relative versus absolute URLs, especially in the nav, is not a good solution.

There are even better reasons to use absolute URLs

1) Scrapers

If you have all of your internal links as relative URLs, it would be very, very, very easy for a scraper to simply scrape your whole website and put it up on a new domain, and the whole website would just work. That sucks for you, and it’s great for that scraper. But unless you are out there doing public services for scrapers, for some reason, that’s probably not something that you want happening with your beautiful, hardworking, handcrafted website. That’s one reason. There is a scraper risk.

2) Preventing duplicate content issues

But the other reason why it’s very important to have absolute versus relative URLs is that it really mitigates the duplicate content risk that can be presented when you don’t have all of these versions of your website resolving to one version. Google could potentially enter your site on any one of these four pages, which they’re the same page to you. They’re four different pages to Google. They’re the same domain to you. They are four different domains to Google.

But they could enter your site, and if all of your URLs are relative, they can then crawl and index your entire domain using whatever format these are. Whereas if you have absolute links coded, even if Google enters your site on www. and that resolves, once they crawl to another page, that you’ve got coded without the www., all of that other internal link juice and all of the other pages on your website, Google is not going to assume that those live at the www. version. That really cuts down on different versions of each page of your website. If you have relative URLs throughout, you basically have four different websites if you haven’t fixed this problem.

Again, it’s not always a huge issue. Duplicate content, it’s not ideal. However, Google has gotten pretty good at figuring out what the real version of your website is.

You do want to think about internal linking, when you’re thinking about this. If you have basically four different versions of any URL that anybody could just copy and paste when they want to link to you or when they want to share something that you’ve built, you’re diluting your internal links by four, which is not great. You basically would have to build four times as many links in order to get the same authority. So that’s one reason.

3) Crawl Budget

The other reason why it’s pretty important not to do this is because of crawl budget. I’m going to point it out like this instead.

When we talk about crawl budget, basically what that is, is every time Google crawls your website, there is a finite depth to which they will crawl. There’s a finite number of URLs that they will crawl and then they decide, “Okay, I’m done.” That’s based on a few different things. Your site authority is one of them. Your actual PageRank, not toolbar PageRank, but how good Google actually thinks your website is, is a big part of that. But also how complex your site is, how often it’s updated, things like that are also going to contribute to how often and how deep Google is going to crawl your site.

It’s important to remember when we think about crawl budget that, for Google, crawl budget costs actual dollars. One of Google’s biggest expenditures as a company is the money and the bandwidth it takes to crawl and index the Web. All of that energy that’s going into crawling and indexing the Web lives on servers. That bandwidth comes from servers, and that means that using bandwidth costs Google actual real dollars.

So Google is incentivized to crawl as efficiently as possible, because when they crawl inefficiently, it costs them money. If your site is not efficient to crawl, Google is going to save itself some money by crawling it less frequently and crawling fewer pages per crawl. That can mean that if you have a site that’s updated frequently, your site may not be updating in the index as frequently as you’re updating it. It may also mean that Google, while it’s crawling and indexing, may be crawling and indexing a version of your website that isn’t the version that you really want it to crawl and index.

So having four different versions of your website, all of which are completely crawlable to the last page, because you’ve got relative URLs and you haven’t fixed this duplicate content problem, means that Google has to spend four times as much money in order to really crawl and understand your website. Over time they’re going to do that less and less frequently, especially if you don’t have a really high authority website. If you’re a small website, if you’re just starting out, if you’ve only got a medium number of inbound links, over time you’re going to see your crawl rate and frequency impacted, and that’s bad. We don’t want that. We want Google to come back all the time, see all our pages. They’re beautiful. Put them up in the index. Rank them well. That’s what we want. So that’s what we should do.

There are a couple of ways to fix your relative versus absolute URLs problem

1) Fix what is happening on the server side of your website

You have to make sure that you are forcing all of these different versions of your domain to resolve to one version of your domain. For me, I’m pretty agnostic as to which version you pick. You should probably already have a pretty good idea of which version of your website is the real version, whether that’s www, non-www, HTTPS, or HTTP. From my view, what’s most important is that all four of these versions resolve to one version.

From an SEO standpoint, there is evidence to suggest, and Google has certainly said, that HTTPS is a little bit better than HTTP. From a URL length perspective, I like to not have the www. in there because it doesn’t really do anything. It just makes your URLs four characters longer. If you don’t know which one to pick, I would pick this one: HTTPS, no W’s. But whichever one you pick, what’s really most important is that all of them resolve to one version. You can do that on the server side, and that’s usually pretty easy for your dev team to fix once you tell them that it needs to happen.
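Once the server-side rules are in place, it’s worth verifying that all of the versions really do collapse to one. Here’s a minimal sketch in Python, assuming the preferred version is HTTPS without the www and using example.com as a placeholder:

```python
# Verify that every protocol/www combination ends up at the one preferred version.
import requests

PREFERRED = "https://example.com/"
VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://www.example.com/",
]

for url in VARIANTS:
    resp = requests.get(url, allow_redirects=True, timeout=15)
    status = "OK" if resp.url == PREFERRED else "NOT RESOLVING TO PREFERRED VERSION"
    print(f"{url} -> {resp.url} [{status}]")
```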

2) Fix your internal links

Great. So you fixed it on your server side. Now you need to fix your internal links, and you need to recode them from being relative to being absolute. This is something that your dev team is not going to want to do because it is time consuming and, from a web dev perspective, not that important. However, you should use resources like this Whiteboard Friday to explain to them, from an SEO perspective, both from the scraper risk and from a duplicate content standpoint, that having those absolute URLs is a high priority and something that should get done.

You’ll need to fix those, especially in your navigational elements. But once you’ve got your nav fixed, also pull out your database or run a Screaming Frog crawl or however you want to discover internal links that aren’t part of your nav, and make sure you’re updating those to be absolute as well.
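For that discovery step, a small script can do the same job as pulling the database or running a crawler. Here’s a minimal sketch, assuming Python with requests and BeautifulSoup, that lists the links on a handful of pages that are still relative:

```python
# List internal links that are still relative, so they can be recoded as absolute.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

PAGES = ["https://example.com/", "https://example.com/about", "https://example.com/products"]

for page in PAGES:
    soup = BeautifulSoup(requests.get(page, timeout=15).text, "html.parser")
    for a in soup.find_all("a", href=True):
        href = a["href"]
        # No netloc means the link is relative to whatever domain the visitor is on.
        if not urlparse(href).netloc and not href.startswith(("mailto:", "tel:", "#")):
            print(f"{page}: relative link -> {href}")
```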

Then you’ll do some education with everybody who touches your website saying, “Hey, when you link internally, make sure you’re using the absolute URL and make sure it’s in our preferred format,” because that’s really going to give you the most bang for your buck per internal link. So do some education. Fix your internal links.

Sometimes your dev team is going to say, “No, we can’t do that. We’re not going to recode the whole nav. It’s not a good use of our time,” and sometimes they are right. The dev team has more important things to do. That’s okay.

3) Canonicalize it!

If you can’t get your internal links fixed or if they’re not going to get fixed anytime in the near future, a stopgap or a Band-Aid that you can kind of put on this problem is to canonicalize all of your pages. As you’re changing your server to force all of these different versions of your domain to resolve to one, at the same time you should be implementing the canonical tag on all of the pages of your website to self-canonicalize. On every page, you have a canonical page tag saying, “This page right here that they were already on is the canonical version of this page.” Or if there’s another page that’s the canonical version, then obviously you point to that instead.

But having each page self-canonicalize will mitigate both the risk of duplicate content internally and some of the risk posed by scrapers, because when they scrape your website and slap it up somewhere else, those canonical tags will often stay in place, and that lets Google know this is not the real version of the website.
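A quick way to confirm that the Band-Aid is actually in place is to check that each page carries a canonical tag pointing at its own preferred URL. A minimal sketch, again assuming requests and BeautifulSoup and placeholder URLs:

```python
# Check that each page self-canonicalizes (canonical tag pointing at its own URL).
import requests
from bs4 import BeautifulSoup

PAGES = ["https://example.com/", "https://example.com/about"]

for page in PAGES:
    soup = BeautifulSoup(requests.get(page, timeout=15).text, "html.parser")
    tag = soup.find("link", rel="canonical")
    if tag is None:
        print(f"{page}: MISSING canonical tag")
    elif tag["href"].rstrip("/") != page.rstrip("/"):
        print(f"{page}: canonical points elsewhere -> {tag['href']}")
    else:
        print(f"{page}: self-canonicalizes correctly")
```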

In conclusion, relative links, not as good. Absolute links, those are the way to go. Make sure that you’re fixing these very common domain level duplicate content problems. If your dev team tries to tell you that they don’t want to do this, just tell them I sent you. Thanks guys.

Video transcription by Speechpad.com


Reblogged 3 years ago from tracking.feedpress.it

Support 4.0: Using Snapchat for all of Moz’s Support

Posted by Nick_Sayers

Innovation. Mobile. Community. Social. All words that come to mind when I think of Snapchat. Well, now a new word is creeping in… a word so disruptive to the Snapchat ecosphere that I’m going to bold it, then repeat it.
Support. Yes, support.


Moz has always been a customer-centric company. We innovate, and you enjoy. Moz is ready to take it further than ever.
Support+Snapchat is going to change how you talk to us and learn about the Moz products. Now read the following emotionally driven marketing copy to get a better sense of our new (industry-changing) means of support:

Move the needle on the go. Using Moz on the go with a desktop-based browser and have a question about Local Rankings? Just hold your phone up to your other screen and send us a snap of your issue. Make sure to shout loud enough. We love to hear you.

Why boil the ocean? This is easy. Sleek. And, dare we say, innovative. It’s like chat, but it completely disappears. You just need your phone and a crippling support issue.

A team of unicorns. We’ve “transitioned” the zebras and horses to unemployment. We now only have unicorns. They will be blowing you away while helping with your support needs. Get ready to puke rainbows, folks.

Game-changing privacy. NSA. FBI. CIA. NYPD. Google. Illuminati. They’re all watching. Feel secure that your in-depth support explanations will disappear soon after you receive them. You won’t have to worry about anyone knowing that you couldn’t find an export button without our help.

Don’t open the kimono. Keep it clean. Unicorns are sensitive. Think of Moz’s Snapchat as your sweet old grandmother’s mailbox. The one those old Scholastic books she ordered for you always arrived in. Don’t tell her you didn’t read them.

Now reach out. Feel the disruption in the Support Force. Send a Snapchat to moz_help. And welcome to Support 4.0.


Reblogged 3 years ago from tracking.feedpress.it

For Writers Only: Secrets to Improving Engagement on Your Content Using Word Pictures (and I Don’t Mean Wordle)

Posted by Isla_McKetta

“Picture it.”

If you’re of a certain generation, those two words can only conjure images of tiny, white-haired Sophia from the Golden Girls about to tell one of her engaging (if somewhat long and irrelevant) stories as she holds her elderly roommates hostage in the kitchen or living room of their pastel-hued Miami home.

Even if you have no idea what I’m talking about, those words should become your writing mantra, because what readers do with your words is take all those letters and turn them into mind pictures. And as the writer, you have control over what those pictures look like and how long your readers mull them over.

According to Reading in the Brain by Stanislas Dehaene, reading involves a rich back and forth between the language areas and visual areas of our brains. Although the full extent of that connectivity is not yet known, it’s easy to imagine that the more sensory (interesting) information we can include in our writing, the more fully we can engage our readers.

So if you’re a writer or content marketer, you should be harnessing the illustrative power of words to occupy your readers’ minds and keep them interested until they’re ready to convert. Here’s how to make your words work for you.

Kill clichés

I could have titled this piece “Painting a Picture with Words” but you’ve heard it. Over and over and over. And I’m going to propose that every time you use a cliché, a puppy dies. 

While that’s a bit extreme (at least I hope so, because that’s a lot of dead puppies, and Rocky’s having second thoughts about his choice of parents), I hope it will remind you to read over what you’ve written and see where your attention starts to wander, I mean, where you get bored (wandering attention = cliché = one more tragic, senseless death). Chances are it’s right in the middle of a tired bit of language that used to be a wonderful word picture but has been used and abused to the point that we readers can’t even summon the image anymore.

Make up metaphors (and similes)

Did you know that most clichés used to be metaphors? And that we overused them because metaphors are possibly the most powerful tool we have at our disposal for creating word pictures (and, thus, engaging content)? You do now.

By making unexpected comparisons, metaphors and similes force words to perform like a stage mom on a reality show. These comparisons shake our brains awake and force us to pay attention. So apply a whip to your language. Make it dance like a ballerina in a little pink tutu. Give our brains something interesting to sink our teeth into (poor Rocky!), gnaw on, and share with our friends.

Engage the senses

If the goal of all this attention to language is to turn reading into a full brain experience, why not make it a little easier by including sensory information in whatever you’re writing? Here are a few examples:

  • These tickets are selling so fast we can smell the burning rubber.
  • Next to a crumbling cement pillar, our interview subject sits typing on his pristine MacBook Air.
  • In a sea of (yelp!) never ending horde of black and gray umbrellas, this red cowboy hat will show the world you own your look.
  • Black hat tactics left your SERPs stinking as bad as a garbage strike in late August? Let us help you clear the air by cleaning up those results.

See how those images and experiences continue to unfold and develop in your mind? You have the power to affect your readers the same way—to create an image so powerful it stays with them throughout their busy days. One note of caution, though: sensory information is so strong that you want to be careful when creating potentially negative associations (like that garbage strike stench in the final example).

Leverage superlatives (wisely) and ditch hyperbole

SUPERLATIVES ARE THE MOST EFFECTIVEST TOOL YOU CAN USE EVER (until you wear your reader out or lose their trust). Superlatives (think “best,” “worst,” “hairiest” – any form of the adjective or adverb that is the most exaggerated form of the word) are one of the main problems with clickbait headlines (the other being the failure to deliver on those huge promises).

Speaking of exaggeration, be careful with it in all of its forms. You don’t actually have to stop using it, but think of your reader’s credence in your copy as a grasshopper handed over by a child. They think it’s super special and they want you to as well. If you mistreat that grasshopper by piling exaggerated fact after exaggerated fact on top of it, the grasshopper will be crushed and your reader will not easily forgive you.

So how do you stand out in a crowded field of over-used superlatives and hyperbolic claims? Find the places your products honestly excel and tout those. At Moz we don’t have the largest link index in the world. Instead, we have a really high quality link index. I could have obfuscated there and said we have “the best” link index, but by being specific about what we’re actually awesome at, we end up attracting customers who want better results instead of more results (and they’re happier for it).

Unearth the mystery

One of the keys to piquing your audience’s interest is to tap into (poor puppy!), I mean, to create or find the mystery in what you’re writing. I’m not saying your product description will suddenly feature PIs in fedoras (I can dream, though), but figure out what’s intriguing or new about what you’re talking about. Here are some examples:

  • Remember when shortcuts meant a few extra minutes to yourself after school? How will you spend the 15-30 minutes our email management system will save you? We won’t tell…
  • You don’t need to understand how this toilet saves water while flushing so quietly it won’t wake the baby, just enjoy a restful night’s sleep (and lower water bills)
  • Check out this interactive to see what makes our work boots more comfortable than all the rest.

Secrets, surprises, and inside information make readers hunger for more knowledge. Use that power to get your audience excited about the story you’re about to tell them.

Don’t forget the words around your imagery

Notice how some of these suggestions aren’t about the word picture itself, they’re about the frame around the picture? I firmly believe that a reader comes to a post with a certain amount of energy. You can waste that energy by soothing them to sleep with boring imagery and clichés, while they try to find something to be interested in. Or you can give them energy by giving them word pictures they can get excited about.

So picture it. You’ve captured your reader’s attention with imagery so engaging they’ll remember you after they put down their phone, read their social streams (again), and check their email. They’ll come back to your site to read your content again or to share that story they just can’t shake.

Good writing isn’t easy or fast, but it’s worth the time and effort.

Let me help you make word pictures

Editing writing to make it better is actually one of my great pleasures in life, so I’m going to make you an offer here. Leave a sentence or two in the comments that you’re having trouble activating, and I’ll see what I can do to offer you some suggestions. Pick a cliché you can’t get out of your head or a metaphor that needs a little refresh. Give me a little context for the best possible results.

I’ll do my best to help the first 50 questions or so (I have to stop somewhere or I’ll never write the next blog post in this series), so ask away. I promise no puppies will get hurt in the process. In fact, Rocky’s quite happy to be the poster boy for this post—it’s the first time we’ve let him have beach day, ferry day, and all the other spoilings all at once.


Reblogged 3 years ago from tracking.feedpress.it

10 Predictions for the Marketing World in 2015

Posted by randfish

The beginning of the year marks the traditional week for bloggers to prognosticate about the 12 months ahead, and, over the last decade, I’ve created a tradition of joining in this festive custom to predict the big trends in SEO and web marketing. However, I divine the future by a strict code: I’m only allowed to make predictions IF my predictions from last year were at least moderately accurate (otherwise, why should you listen to me?). So, before I get to my crystal-ball-gazing, let’s have a look at how I did for 2014.

Yes, we’ll get to that, but not until you prove you’re a real Wizard, mustache-man.

You can find my post from January 5th of last year here, but I won’t force you to read through it. Here’s how I do grading:

  • Spot On (+2) – when a prediction hits the nail on the head and the primary criteria are fulfilled
  • Partially Accurate (+1) – predictions that are in the area, but are somewhat different than reality
  • Not Completely Wrong (-1) – those that landed near the truth, but couldn’t be called “correct” in any real sense
  • Off the Mark (-2) – guesses which didn’t come close

If the score is positive, prepare for more predictions, and if it’s negative, I’m clearly losing the pulse of the industry. Let’s tally up the numbers.

In 2014, I made 6 predictions:

#1: Twitter will go Facebook’s route and create insights-style pages for at least some non-advertising accounts

Grade: +2

Twitter rolled out Twitter analytics for all users this year (starting in July for some accounts, and then in August for everyone), and while it’s not nearly as full-featured as Facebook’s “Insights” pages, it’s definitely in line with the spirit of this prediction.

#2: We will see Google test search results with no external, organic listings

Grade: -2

I’m very happy to be wrong about this one. To my knowledge, Google has yet to go this direction and completely eliminate external-pointing links on search results pages. Let’s hope they never do.

That said, there are plenty of SERPs where Google is taking more and more of the traffic away from everyone but themselves, e.g.:

I think many SERPs that have basic, obvious functions like “timer” are going to be less and less valuable as traffic sources over time.

#3: Google will publicly acknowledge algorithmic updates targeting both guest posting and embeddable infographics/badges as manipulative linking practices

Grade: -1

Google most certainly did release an update (possibly several) targeted at guest posts, but they didn’t publicly talk about something specifically algorithmic targeting embedded content/badges. It’s very possible this was included in the rolling Penguin updates, but the prediction said “publicly acknowledge” so I’m giving myself a -1.

#4: One of these 5 marketing automation companies will be purchased in the 9-10 figure $ range: Hubspot, Marketo, Act-On, Silverpop, or Sailthru

Grade: +2

Silverpop was purchased by IBM in April of 2014. While a price wasn’t revealed, the “sources” quoted by the media estimated the deal in the ~$270mm range. I’m actually surprised there wasn’t another sale, but this one was spot-on, so it gets the full +2.

#5: Resumes listing “content marketing” will grow faster than either SEO or “social media marketing”

Grade: +1

As a percentage, this certainly appears to be the case. Here’s some stats:

  • US profiles with “content marketing”
    • June 2013: 30,145
    • January 2015: 68,580
    • Growth: 227.5%
  • US profiles with “SEO”
    • June 2013: 364,119
    • January 2015: 596,050
    • Growth: 163.7%
  • US profiles with “social media marketing”
    • June 2013: 938,951
    • January 2015: 1,990,677
    • Growth: 212%

Granted, content marketing appears on far fewer profiles than SEO or social media marketing, but it has seen greater growth. I’m only giving myself a +1 rather than a +2 on this because, while the prediction was mathematically correct, the numbers of SEO and social still dwarf content marketing as a term. In fact, in LinkedIn’s annual year-end report of which skills got people hired the most, SEO was #5! Clearly, the term and the skillset continue to endure and be in high demand.

#6: There will be more traffic sent by Pinterest than Twitter in Q4 2014 (in the US)

Grade: +1

This is probably accurate, since Pinterest appears to have grown faster in 2014 than Twitter by a good amount AND this was already true in most of 2014 according to SharedCount (though I’m not totally sold on the methodology of coverage for their numbers). However, we won’t know the truth for a few months to come, so I’d be presumptuous in giving a full +2. I am a bit surprised that Pinterest continues to grow at such a rapid pace — certainly a very impressive feat for an established social network.


SOURCE: Global Web Index

With Twitter’s expected moves into embedded video, it’s my guess that we’ll continue to see a lot more Twitter engagement and activity on Twitter itself, and referring traffic outward won’t be as considerable a focus. Pinterest seems to be one of the only social networks that continues that push (as Facebook, Instagram, LinkedIn, and YouTube all seem to be pursuing a “keep them here” strategy).

——————————–

Final Score: +3

That positive number means I’ve passed my bar and can make another set of predictions for 2015. I’m going to be a little more aggressive this year, even though it risks ruining my sterling record, simply because I think it’s more exciting 🙂

Thus, here are my 10 predictions for what the marketing world will bring us in 2015:

#1: We’ll see the first major not-for-profit University in the US offer a degree in Internet Marketing, including classes on SEO.

There are already some private, for-profit offerings from places like Fullsail and Univ. of Phoenix, but I don’t know that these pedigrees carry much weight. Seeing a Stanford, a Wharton, or a University of Washington offer undergraduate or MBA programs in our field would be a boon to those seeking options and an equal boon to the universities.

The biggest reason I think we’re ripe for this in 2015 is the LinkedIn top-25 job skills data showing the immense value of SEO (#5) and digital/online marketing (#16) on a profile when seeking a new job. That should (hopefully) be a direct barometer for what colleges seek to include in their curricula.

#2: Google will continue the trend of providing instant answers in search results with more interactive tools.

Google has been doing instant answers for a long time, but in addition to queries with immediate and direct responses, they’ve also undercut a number of online tool vendors by building their own versions directly into the SERPs, as they currently do for queries like “timer” and “calculator.”

I predict that in 2015 we’ll see more partnerships like the OpenTable integration that lets searchers book reservations directly from the SERPs, possibly with companies like Uber, Flixster (they really need to get back to a better instant answer for movies+city), Zillow, or others that have unique data that could be surfaced directly.

#3: 2015 will be the year Facebook begins including some form of web content (not on Facebook’s site) in their search functionality.

Facebook severed their search relationship with Bing in 2014, and I’m going to make a very risky prediction that in 2015, we’ll see Facebook’s new search emerge and use some form of non-Facebook web data. Whether they’ll actually build their own crawler or merely license certain data from outside their properties is another matter, but I think Facebook’s shown an interest in getting more sophisticated with their ad offerings, and any form of search data/history about their users would provide a powerful addition to what they can do today.

#4: Google’s indexation of Twitter will grow dramatically, and a significantly higher percentage of tweets, hashtags, and profiles will be indexed by the year’s end.

Twitter has been putting more muscle behind their indexation and SEO efforts, and I’ve seen more and more Twitter URLs creeping into the search results over the last six months. I think that trend continues, and in 2015 we’ll see Twitter.com enter the top 5-6 “big domains” in MozCast.
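As a rough illustration of the kind of measurement that claim implies (this is not MozCast’s actual methodology or data, just a sketch over made-up SERPs), you could track how often twitter.com shows up across a fixed keyword set:

```python
from urllib.parse import urlparse

# Illustrative stored SERPs: keyword -> top result URLs (made-up data).
serps = {
    "seo tools": [
        "https://moz.com/free-seo-tools",
        "https://twitter.com/moz",
        "https://searchengineland.com/library/tools",
    ],
    "marketing news": [
        "https://twitter.com/hashtag/marketing",
        "https://www.marketingweek.com/",
    ],
}

def domain_share(serps, domain="twitter.com"):
    """Fraction of all tracked results whose host is the domain or a subdomain of it."""
    urls = [url for results in serps.values() for url in results]
    hits = 0
    for url in urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host == domain or host.endswith("." + domain):
            hits += 1
    return hits / len(urls)

print(f"twitter.com share of tracked results: {domain_share(serps):.0%}")
```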

#5: The EU will take additional regulatory action against Google that will create new, substantive changes to the search results for European searchers.

In 2014, we saw the EU enforce the “right to be forgotten” and settle some antitrust issues that require Google to edit what it displays in the SERPs. I don’t think the EU is done with Google. As the press has noted, there are plenty of calls in the European Parliament to break up the company, and while I think the EU will stop short of that measure, I believe we’ll see additional regulatory action that affects search results.

On a more personal note, I’d add that while I’m not thrilled with how the EU has gone about its regulation of Google, I am impressed by its ability to do so. In the US, with Google becoming the second-largest lobbying spender in the country and a masterful influencer of politicians, I think it’s extremely unlikely that they suffer any antitrust or regulatory action in their home country. That’s not because they haven’t engaged in monopolistic behavior, but because they were smart enough to spend money to manipulate elected officials before that happened (unlike Microsoft, who, in the 1990s, assumed they wouldn’t become a target).

Thus, if there is to be any hedge to Google’s power in search, it will probably come from the EU and the EU alone. There’s no competitor with the teeth or market share to have an impact (at least outside of China, Russia, and South Korea), and no other government is likely to take them on.

#6: Mobile search, mobile devices, SSL/HTTPS referrals, and apps will combine to make traffic source data increasingly hard to come by.

I’ll estimate that by year’s end, many major publishers will see 40%+ of their traffic coming in as “direct,” even though most of that is actually search and social referrals that fail to pass the proper referral string. Hopefully, we’ll be able to verify that through folks like Define Media Group, whose data sharing this year has made them one of the best allies marketers have in understanding the landscape of web traffic patterns.

BTW – I’d already estimate that 30-50% of all “direct” traffic is, in fact, search or social traffic that hasn’t been properly attributed. This is a huge challenge for web marketers — maybe one of the greatest challenges we face, because saying “I brought in a lot more traffic, I just can’t prove it or measure it,” isn’t going to get you nearly the buy-in, raises, or respect that your paid-traffic compatriots can earn by having every last visit they drive perfectly attributed.
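To make that estimate concrete, here’s a back-of-the-envelope sketch of one common heuristic: treat referrer-less visits that land on deep content URLs as probably-misattributed, since few people type long article URLs by hand. The session structure and the path-depth threshold are my own illustrative assumptions, not a measurement standard:

```python
def estimate_dark_traffic(sessions):
    """sessions: iterable of dicts with 'referrer' and 'landing_path' keys."""
    direct = [s for s in sessions if not s["referrer"]]
    # Heuristic: direct visits to the homepage or other short, memorable URLs are
    # plausible; direct visits to deep content URLs probably lost their referrer
    # (HTTPS->HTTP downgrades, mobile apps, redirects, etc.).
    suspicious = [s for s in direct if s["landing_path"].count("/") > 1]
    return len(suspicious), len(direct)

# Made-up sessions standing in for real analytics data:
sessions = [
    {"referrer": "", "landing_path": "/"},
    {"referrer": "", "landing_path": "/blog/2015-predictions-for-marketing"},
    {"referrer": "https://www.google.com/", "landing_path": "/blog/some-post"},
    {"referrer": "", "landing_path": "/blog/another-long-article-url"},
]

dark, direct = estimate_dark_traffic(sessions)
print(f"{dark} of {direct} 'direct' sessions look like misattributed referrals")
```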

#7: The content advertising/recommendation platforms will continue to consolidate, and either Taboola or Outbrain will be acquired or do some heavy acquiring themselves.

We just witnessed the surprising shutdown of nRelate, which I suspect had more to do with IAC politics than with the company’s performance and potential. But given that less than 2% of the web’s largest sites use content recommendation/promotion services, and yet both Outbrain and Taboola are expected to have pulled in north of $200m in 2014, this is a massive area for future growth.

Yahoo!, Facebook, and Google are all potential acquirers here, and I could even see AOL (who already own Gravity) or Buzzfeed making a play. Likewise, there’s a slew of smaller/other players that Taboola or Outbrain themselves could acquire: Zemanta, Adblade, Zegnet, Nativo, Disqus, Gravity, etc. It’s a marketplace as ripe for acquisition as it is for growth.

#8: Promoted pins will make Pinterest an emerging juggernaut in the social media and social advertising world, particularly for e-commerce.

I’d estimate we’ll see figures north of $50m spent on promoted pins in 2015, and that’s with Pinterest having only just opened their ad platform beyond a beta group this January. Thanks to high engagement, lots of traffic, and a consumer base that B2C marketers absolutely love and often struggle to reach, I think Pinterest is going to have a big ad opportunity on their hands.

[Screenshot: a promoted pin from Mad Hippie, shown alongside some very unappetizing recipes]

#9: Foursquare (and/or Swarm) will be bought, merge with someone, or shut down in 2015 (probably one of the first two).

I used to love Foursquare. I used the service multiple times every day, tracked where I went with it, ran into friends in foreign cities thanks to its notifications, and even used it to decide where to go sometimes (in Brazil, for example, I found Foursquare’s business location data far superior to Google Maps’). Then came the split into Foursquare and Swarm. Most of my friends who were using Foursquare stopped, and the few who continued did so less frequently. Swarm itself tried to compete with Yelp, but it looks like neither app is doing well in the rankings these days.

I feel a lot of empathy for Dennis and the Foursquare team. I can totally understand the appeal, from a development and product perspective, of splitting up the two apps to let each concentrate on what it’s best at, and not dilute a single product with multiple primary use cases. Heck, we’re trying to learn that lesson at Moz and refocus our products back on SEO, so I’m hardly one to criticize. That said, I think there’s trouble brewing for the company and probably some pressure to sell while their location and check-in data, which is still hugely valuable, is robust enough and unique enough to command a high price.

#10: Amazon will not take considerable search share from Google, nor will mobile search harm Google’s ad revenue substantively.

The “Google’s-in-trouble” pundits are mostly talking about two trends that could hurt Google’s revenue in the year ahead: first, that mobile searchers are less valuable to Google because they don’t click on ads as often and advertisers won’t pay as much for them; and second, that Amazon is becoming the destination for direct, commercial queries ahead of Google.

In 2015, I don’t see either of these taking a toll on Google. I believe most of Amazon’s impact as a direct navigation destination for e-commerce shoppers has already taken place and while Google would love to get those searchers back, that’s already a lost battle (to the extent it was lost). I also don’t think mobile is a big concern for Google — in fact, I think they’re pivoting it into an opportunity, and taking advantage of their ability to connect mobile to desktop through Google+/Android/Chrome. Desktop search may have flatter growth, and it may even decline 5-10% before reaching a state of equilibrium, but mobile is growing at such a huge clip that Google has plenty of time and even plentier eyeballs and clicks to figure out how to drive more revenue per searcher.


Reblogged 3 years ago from moz.com