
Marketing News Roundup: Week Ending April 25, 2014

Actual News

New How-Tos and Advice


Commentary

Events

  • SMX Advanced is taking place in Seattle really soon, but early-bird registration is still available. MozCon is coming up shortly after that. Quick price comparison: MozCon is $1,495 (Moz Pro subscribers get a $500 discount); SMX Advanced is $2,695, or $1,795 if you register before May 2nd.

by Paul Kriloff

Marketing News Roundup: Week Ending April 18, 2014

After a week off to recover from a totally unexpected bout of walking pneumonia, I’m back with a current news roundup!

Actual News

Google stories dominate this week:

  • It’s hard out here for a search engine! Google ad prices declining, earnings miss target. Lest you feel too bad for poor old Google, they still increased net income by $100 million to $3.45 billion. It’s interesting to note the ongoing impact of mobile on Google’s business model and the potential impact of the Google ad network on prices and performance.
  • Good News for Google, Part I: Google Play is quickly catching up to App Store in total apps and revenue.
  • Good News for Google, Part II (Scary News for Everyone Else): Android has more than 50% of the mobile phone market, while Apple’s share holds steady at 41%. More or less, those numbers are unchanged from the end of 2013, although LG gained a little at HTC’s expense (this was prior to the launch of the HTC One M8). Microsoft has 3.45% of the market, which must mean they’re making each of their US employees buy 100 Nokia phones. 68% of Americans now have a smartphone.
  • Fast Company published an in-depth profile of Google X, the innovation lab best known for Google Glass and changing the world. Interesting insight into how Google is trying to translate its current dominance into future dominance.
  • Facebook now makes it possible for you to find your friends and interact with them in real-time, face to face. You know, sort of like you did before you had Facebook.
  • Amazon is getting closer to offering a complete computing ecosystem: the Amazon 3D phone coming in September will join Fire TV and Kindle. Of all the companies trying to rule my digital content, Amazon is not my favorite, but I have to say, it has done a great job with the Fire TV ads: using Gary Busey was a stroke of genius.


New Advice and How-Tos

New Tools

  • SearchMetrics has released Page Cockpit, which it claims offers “the world’s only universal SEO analysis and optimization on the URL level.” Start a free trial before May 31st and get a free knockwurst.


Marketing News Roundup: Week Ending April 4, 2014

Actual News

New Advice and How-Tos

New Tools

  • I was introduced this week to Sublime Text, a text editor that supports regex find-and-replace. I downloaded it but haven’t played around with it much yet. It looks like a tool for a fairly technical user rather than something designed principally for marketers, but the kind of cleanup it enables is sketched below.
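
To give a flavor of the kind of cleanup regex find-and-replace makes possible, here’s a minimal Python sketch of an operation a marketer might otherwise do by hand in a spreadsheet. The URL list and the utm-stripping pattern are made up for illustration, and Sublime Text’s find-and-replace dialog uses its own regex flavor, though simple patterns like this one carry over.

```python
import re

# Hypothetical export: landing-page URLs with inconsistent tracking
# parameters tacked on by different campaigns.
urls = [
    "http://www.example.com/blog/post-1?utm_source=twitter&utm_medium=social",
    "http://www.example.com/blog/post-2",
    "http://www.example.com/blog/post-3?utm_source=newsletter",
]

# One find-and-replace pass: strip any ?utm_... query string, then force https.
cleaned = [
    re.sub(r"\?utm_.*$", "", url).replace("http://", "https://")
    for url in urls
]

for url in cleaned:
    print(url)
```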

Is Google Sending SEOs a Message?

Lately, there have been several high-profile penalties announced against companies that provide organic search services or call themselves SEO firms. Some people have suggested Google is going too far, but I suspect this is Google’s way of getting across, in a very emphatic and unequivocal way, something they’ve said for a long time: they want sites built for the end user, not the search engines. That’s a lesson worth heeding.

The cases that have surfaced include the following:

  • MyBlogGuest was the first to be hit with a manual penalty, announced by Matt Cutts after a recent string of discussions about the practice of guest blogging. Cutts has been discouraging the practice in his Webmaster Help videos in recent weeks.
  • Internet marketing company Portent was then hit with a manual penalty, which they initially suspected was connected to a post or two on MyBlogGuest. This turned out not to be the case. Portent recovered quickly after addressing boilerplate links on client sites that had been hacked.
  • Then came the scary-sounding news that Doc Sheldon had been subject to a manual penalty for a single link to a site Google deemed off-topic and spammy. This one created enough noise that Matt Cutts himself weighed in via Twitter to argue the penalty was justified.

People in the SEO community find this scary. They have argued that it will have a chilling effect on the web and that MyBlogGuest didn’t violate the quality guidelines. Other voices have suggested that the penalty against Doc Sheldon was overly aggressive: penalized for a single link? How could Google do that? In comments on Ann Smarty’s response to Google on Search Engine Journal, some people suggested it was all part of Google’s ineffectual strategy of creating FUD.

Darn straight. I try to see all this through Google’s eyes. Google’s position in the consumer ecosystem reminds me of Amazon or Walmart. All three depend on an army of suppliers. All three serve billions of consumers who demand instantaneous gratification at low cost. In Amazon’s and Walmart’s cases, low cost is literal, measured in dollars. In Google’s case, it’s measured in time. They are in a mad race to deliver the right result extremely quickly. Google can do that best when it has a large universe of content designed to respond to users’ queries, content it can dig into to find answers, and when its algorithms are free to comb through that content without interference or artificial attempts at influence.

In Google’s eyes, any effort to short-circuit its algorithms, no matter how well-intentioned or how carefully designed not to violate the literal letter of Google’s quality guidelines, is counterproductive, because it takes the determination of quality somewhat out of Google’s hands and could result in Google returning less useful content to the user. In that context, Google is best served by sowing as much FUD as it can among anyone who puts the word “SEO” in their company description and says their purpose is to help customers rank.

As other commenters on the Ann Smarty article suggested, Google is holding SEO firms to a higher standard. And you know what? They should. SEOs shouldn’t be thinking short-term. We need to stop seeing our job as trying to understand the individual factors that might drive rank and rallying client resources for the latest and greatest trick that drives cheap traffic. We need to start seeing our job as helping companies make sound long-term investments based on an understanding of the internet and Google’s role within it. For one thing, the latter is simply sound business: developing steady, value-added competencies instead of resetting the clock every time a new correlation study comes out hinting at some sneaky way to get a bump in the rankings.

I’m not saying we should ignore Google (they’re too important a part of how users experience the internet to ignore), but we need to think of them as one player, not as the only promotional channel that matters. We need to understand what they’re trying to accomplish, instead of focusing on the mechanics of how they get there and on finding shortcuts within those mechanics. We need to do things the right way.

At the heart of SEO (or inbound marketing or whatever we’re calling it to avoid calling it SEO) should be an understanding of the user and really good content that serves the user. Add an additional layer for making sure content is discoverable. Add an additional layer for making sure that directory structure, tags, link elements and rich data reflect a clear vision of who your audience is and what they need from your site. Add a layer for promotion – not link-building, but true promotion: finding people to whom your content is relevant and building relationships with them through multiple channels.

They have a word for that sort of thing. It’s simply called “marketing.”

Riding the Bus is Making Me a Better Marketer

Do you ever find yourself wishing you had more time to invest in training or learning new things? I do – it seems like a constant balancing act to do the stuff that needs to be done now and prepare for the stuff I am going to need to do tomorrow (or six months or a year from now).

I stumbled unwittingly on a simple solution: ride the bus to client meetings.

Until the last few weeks, I had ridden public transportation maybe two or three times in the past four years. I am now the hardened veteran of four bus rides in three weeks.

I used to think that the bus was too slow and that I would find being at the mercy of the bus schedule too restrictive. Then I found myself with a series of client meetings in Pioneer Square, where parking is $3/hour minimum ($3.50/hour on the street). I hunted fruitlessly for cheaper lots, then ran the math on parking, Car2Go, and the bus. The bus was the winner by a whopping 40% over the closest alternative, and Car2Go was a nice discovery: you mean I can park it anywhere with a parking meter and just walk away? Unbelievably awesome.

I hopped on the bus only to find 1.) it doesn’t take that much longer than driving, 2.) there are three or four routes I can take, and 3.) the time away from the office is invaluable. If I’m in my office, I feel the need to “get stuff done.” I’ll work on client presentations. I’ll craft proposals. I’ll check out articles people are recommending on Twitter. I’ll write a blog post (*ahem*).

I could certainly do some of these same things via smartphone while riding the bus, but the extra hassle is just enough of a barrier to dissuade me. So it gives me some quiet time; and in that quiet time, I do something amazing: I read. Right now, I’m running through Lawrence Friedman’s “Go To Market Strategy.” It’s an interesting book – well written, well reviewed, priced like a college textbook, but with the look and feel of a series of 1980s photocopies stitched into hardcover. I like his approach to the discipline, and the examples he cites are really insightful. It makes me want to spend even more time on the bus so I can get through the whole thing.

For now, though, I need to go finish writing a proposal. 🙂

Mystery Solved? How Google +1’s (and Other Social Signals) Might Boost Search Rankings

I have long believed, but been unable to prove, that social signals support organic rank, making the repeatedly cited correlations between social and organic rank in Moz’s 2011 and 2013 ranking factors analyses both intriguing and maddening. In the absence of a clear causal mechanism, the correlation leaves my most important question as a marketing strategist unanswered: how much time, effort and money do I allocate to social and organic respectively, and what result should I expect to get back? Is social a brand and engagement vehicle with some indirect impact on numbers of new visitors, is it an essential part of organic search visibility, or is it both?

I decided to explore a simple explanation that argues it is both, and this post is my attempt to make the case and explain how it should affect social and organic strategy.

I first considered and rejected a number of the more obvious theories discussed in the comments to the earlier Moz Blog post:

  • Direct causal relationship: Google reads social activity and includes it as a ranking factor, at least as it relates to Google+. Google has denied this, so unless you’re a conspiracy theorist, this is a non-starter. Mark Traphagen’s finding that profiles and pages in Google+ have PageRank and pass link equity is compelling, but the bulk of social activity still happens on Facebook, i.e., a registration wall and a whole lot of nofollow tags away from the prying eyes of Google’s crawlers.
  • Common causal relationship: some other independent factor (perhaps a strong brand) drives both social activity and rank, creating correlation but not causation. This is undoubtedly true to some degree, but it is also as vague as it is self-evident. Of course you want a strong brand, and a strong brand no doubt helps with inbound links and other indicators of authority, but the question I’m interested in is how much investment to make, where and when, to accomplish what. This theory does not answer those questions, since you can’t index a brand metric (say, recall) to a specific organic rank.
  • Indirect causal relationship: social activity drives some other action, such as inbound linking, that in turn affects rank.

The third is the most compelling, but I know from personal experience that there does not have to be any additional activity for something to rank. I saw a specific case of this earlier this year when a blog post I contributed to a corporate website with a domain authority of just 37 ended up ranking in third position on a fairly high-profile search term, ahead of Huffington Post and just behind a prestigious print publication with a strong online presence. The post was promoted solely through the company’s social media presence. The article received more than 150 likes, four tweets, just three +1’s (one of which was from the company’s own Google+ page), and not a single external link.

How is that ranking possible, unless social activity directly drove the rank? This brings me to a fourth theory:

  • Social activity creates user behavior that gives Google an understanding of the relative quality of content on third party websites, an understanding it then uses to order search results.

In order for such a connection to exist, there needs to be a mechanism by which Google can observe users interacting with content on websites outside of search. I decided to evaluate the privacy policies of Chrome and Chrome OS to answer this question: could Google use aggregate browsing behavior from these services? I take it as a given that they will if they can – Google says as much in its general privacy policy: “We use the information we collect from all of our services to provide, maintain, protect and improve them, to develop new ones, and to protect Google and our users.” The privacy policy specific to Chrome reiterates the same:

  • “Information that Google receives when you use Chrome is processed in order to operate and improve Chrome and other Google services.”

Google doesn’t collect general browsing data by default. However, it does if you sign in to Chrome:

  • If you sign in to Chrome browser, Chrome OS or an Android device that includes Chrome as a preinstalled application with your Google Account, this will enable the synchronization feature. Google will store certain information, such as bookmarks, history and other settings, on Google’s servers in association with your Google Account. Information stored with your Account is protected by the Google Privacy Policy [which, as noted above, allows Google to use it in improving its other services].

Google has been increasingly touting this feature of Chrome in TV ads that emphasize how you can move from one device to another with a single browser and pick right back up where you left off.

To confirm what is and is not passed back and forth between Chrome and Google, I monitored all HTTP traffic sent to and from Chrome as I browsed the web. When I was logged in, every URL I typed generated two types of entries: multiple autocomplete requests, as Google treated my typing as a query, followed by a Chrome sync operation with details on the URL visited. Clicks on links produced the same sync operation. I then logged out and performed the same actions; as expected, the autocomplete requests continued, but the sync operations did not.
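
If you want to repeat the exercise, here’s a minimal sketch in Python of the kind of tally I ran, working from a HAR file exported from an HTTP proxy or the browser’s developer tools. The file name and the endpoint substrings are assumptions based on what showed up in my own logs, not documented Google endpoints, so adjust them to whatever appears in your capture.

```python
import json
from collections import Counter

# "session.har" is a hypothetical file name; export your own HAR file
# from an HTTP proxy or the browser's developer tools.
with open("session.har") as f:
    har = json.load(f)

# Substrings used to classify requests. These are assumptions based on
# observed traffic, not documented endpoints; adjust to your capture.
SUGGEST_MARKERS = ("complete/search", "suggest")
SYNC_MARKERS = ("chrome-sync", "clients4.google.com")

counts = Counter()
for entry in har["log"]["entries"]:
    url = entry["request"]["url"]
    if any(marker in url for marker in SYNC_MARKERS):
        counts["sync"] += 1
    elif any(marker in url for marker in SUGGEST_MARKERS):
        counts["suggest"] += 1

print(counts)
# Signed in:  both "suggest" and "sync" climb as you type and click.
# Signed out: "suggest" keeps climbing, "sync" stays at zero.
```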

With the number of people using Chrome and Android, this gives Google a significant body of data about time on page, time on site, and bounce rate. Is there any reason to think Google doesn’t mine the resulting aggregate data to understand how long someone spends on a page they were referred to from social media? It might seem intrusive, but Google does far more intrusive things, like serving ads based on the content of emails and using omnibox data (which is not immediately anonymized) to improve its suggestions service.

There is no smoking gun here, short of gaining access to Google’s internal code or getting Matt Cutts to offer a concrete confirmation of the practice (although I’d argue his public statements about making great content and promoting it via social are tacit confirmation). But the fact that Google’s privacy policy gives it the ability to use all the data it receives to improve its services, combined with the fact that Chrome gives it user engagement insights, is enough evidence for me. If they can, they will. Period.

Assuming that Google uses the data as I’m suggesting they do, the implication is obvious: building a community via social media is a critical part of an SEO strategy, because it makes it easier for you to put your content in front of users (users who happen to be predisposed to engaging with your content in a positive way) and give Google insight into the quality of that content. If “all” you do is gather your existing audience into a cohesive community, build good content, and promote that content to your community, rank will – more or less – logically follow.

If you use Google+ as one of those distribution channels, you’re increasing the likelihood of putting the content in front of users who are logged in to Chrome, hence the strong correlation between +1’s and rank. If your audience isn’t already on Google+, consider conducting Hangouts to draw them there. Pay special attention, too, to the engagement metrics for users who are accessing your site via Android; if they’re not strong, figure out why and address it, since those users are another segment likely to be logged in to Chrome.

Even if I’m right, this doesn’t provide any neat, simple tricks for ranking. It merely provides greater confidence in making the investment in the hard work behind these activities. It also means that content is a double-edged sword: if you put out bad content and promote it via social channels, it can harm your rankings just as quickly as it can help them.

Are Social (NoFollow) Links Worth Pursuing? Yes, and Here’s Why

When your organization is obsessed with organic search, the natural tendency is to devalue rel=nofollow links, since they don’t pass “link juice” (my least favorite phrase in all of marketing; can’t we come up with a better term for passing PageRank?). Someone even wrote a blog post (which still shows up on page 1 of Google’s search results for the term “rel=nofollow”) titled “13 Reasons Why NoFollow Tags Suck.” Devaluing these links is a huge mistake, for two very important reasons.

The first is that there is significant traffic to be gained via nofollow referrals. I’ve seen this repeatedly. I’ll share two examples:

  • I contribute frequently to a website that allows outside contributions. In a recent contribution, I added a link to video content related to the post. That link drew a 25% click-through rate. I haven’t been terrifically disciplined about including links in my posts on this particular site, and those posts have drawn north of 10,000 unique page views; at a similar click-through rate, that’s roughly 2,500 referral visits I could have generated just by contributing in this forum. And that’s just one opportunity. Multiply that across the entire internet and the conclusion is inescapable: there’s a lot of traffic to be had out there from referrals. Traffic is traffic; who cares if it passes PageRank?
  • In a past job, one member of my team was tasked with driving social links from sites like Wikipedia and Yahoo! Answers, both of which tag their external links rel=nofollow. When I’d review weekly traffic reports, these links were driving thousands of unique visits per week: a fraction of organic search, but still very meaningful, especially when organic search suffered downturns due to changes in how Google classified certain searches related to the site in question.

This doesn’t even include social media links. Social media has the potential to create virality, which can drive even more traffic.

The other reason rel=nofollow links are valuable is that they very well may indirectly affect organic rankings. There are at least three ways they may do this:

  • The more people see your content, the more likely it is that someone will cite your page somewhere else, employing a standard (followed) link in the process. Boom: you just got some PageRank.
  • If you’re just starting out and don’t have a lot of traffic, you don’t have much of an indication of how good your content really is. The better the quality of your pages, the more likely they are to rank, so just getting some traffic to a page and seeing how users engage with your content gives you feedback. Feedback is golden, because all marketing is a dialogue.
  • Social links expose the quality of your content directly to Google. This is a pet theory of mine, and one I’ll share in a separate blog post.

What do I take away from this?

DON’T: 

  • Ignore a linking opportunity simply because it’s rel=nofollow. There’s traffic in them thar links.
  • Spam. Be respectful when participating off-site, and don’t include a link just to include a link. Add value; it’s the responsible thing to do and it promotes your brand.

DO: 

  • Participate in any and all forums relevant to your site.
  • Include links to relevant content that promotes your web presence. Maybe it’s a social presence (good). Maybe it’s your website.
  • Pay attention to the quality metrics on traffic from these links: bounce rate, time on page. If a thorough read of the page is a meaningful action for you, create an event in analytics so that spending a certain amount of time on the page or scrolling to the bottom counts as an interaction with your site. (A quick sketch of this kind of quality analysis follows this list.)
  • Track the conversion from this traffic, but don’t get too obsessed with it. The goal is awareness and quality signals – if you get too aggressive pushing conversion, you’ll get more and more spammy over time. If you point people to a social presence, look for people who start following you after clicking. If you point people to your website, look for people who convert, perhaps to a newsletter sign-up.
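
Here’s the sketch referenced above: a minimal Python example that rolls bounce rate and average time on page up by referrer from a hypothetical CSV export of visit-level data. The file name and column names are assumptions; map them to whatever your analytics package actually exports.

```python
import csv
from collections import defaultdict

# Hypothetical visit-level export: one row per visit with the referring
# domain, a bounce flag (1 or 0), and seconds spent on the landing page.
stats = defaultdict(lambda: {"visits": 0, "bounces": 0, "seconds": 0.0})

with open("referral_visits.csv") as f:
    for row in csv.DictReader(f):
        s = stats[row["referrer"]]
        s["visits"] += 1
        s["bounces"] += int(row["bounced"])
        s["seconds"] += float(row["time_on_page"])

# Highest-volume referrers first, with their quality metrics alongside.
for referrer, s in sorted(stats.items(), key=lambda kv: -kv[1]["visits"]):
    bounce_rate = s["bounces"] / s["visits"]
    avg_time = s["seconds"] / s["visits"]
    print(f"{referrer:30} visits={s['visits']:5}  "
          f"bounce={bounce_rate:.0%}  avg time={avg_time:5.1f}s")
```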

Of SEO and CEOs: A Senior Executive’s Guide to Resourcing Organic Search

There are three basic, conflicting truths about organic search for CEOs and CMOs: it’s too important to ignore, its opacity requires you to understand not just what it can do but how it does it, and you don’t have the time to wade through the endless debates about what works and what doesn’t. This post is for you: a 60-second guide to strategy and staffing.

1. Organic search should be part of your marketing team’s budget and staffing plan. 

“SEO is dead” has become a popular refrain lately, but if you don’t staff for organic search at all, you’re making a mistake. Google is part of the fabric of people’s lives. Someone in your organization should be focused on understanding the latest in search and helping apply that understanding to your business.

2. Organic search should not be your entire marketing strategy. 

Organic search is not a strategy in and of itself, because the return is uncertain. Other tactics (paid media, social media, email marketing, etc.) need to be part of a balanced marketing portfolio.

One or maybe two people should be dedicated to organic search, depending on the size of your organization. One senior person should be tracking trends, developing plans, and identifying tools and resources. That same person or a more junior person should be directly conducting site audits, updating citations and executing other tasks within the function. Depending on the size of your business (number of locations, breadth of SKUs, number of industries served), the number of junior positions may need to scale.

3. Organic search is inherently cross-functional. 

Whoever heads up organic search should be highly collaborative and comfortable leading through influence. It’s not as simple as sitting behind a desk, building links and editing title tags. As the concept of search has expanded, the implications can touch virtually any part of your business. That person needs to be able to work across marketing, technology and probably other groups as well, and to explain what they’re asking for and substantiate why it matters without dragging other departments through the sea of opinion that is SEO.

4. Create a long-term culture around search. 

You may be tempted to treat organic search like everything else in your business: set specific goals, measure the inputs, measure the outputs, hold people directly accountable for results in the current quarter. If you do this, your organic search leader will try to game the search engines and look for a new job before the chickens come home to roost. You definitely should look at results (increased traffic, lower traffic costs over time), but put an emphasis on getting a sound long-term framework in place and baked into the work of all departments.

Search also changes so fast that a short-term approach may result in a series of disjointed efforts (for example, the recent emphasis on content marketing led one company I know to staff almost exclusively with content positions, making it much harder to diversify their marketing efforts). Play for the long-term.

5. Manage ambiguity, don’t try to eliminate it. 

The inherent ambiguity around organic search can make a lot of senior leaders throw up their hands or try to over-simplify. Look to your organic search leader to manage ambiguity, not dispel it. Task that person with creating a shared framework for how your company will make decisions in the absence of crystal-clear data. The approach this blog is dedicated to is understanding what Google is trying to achieve at a high level, rather than chasing whatever tactic might be powering results right now. Respecting the ambiguity also helps you avoid the danger of arrogance: thinking you can outsmart the search engines. That arrogance can lead to missteps that your company later has to undo at great pain and expense.

Handling Duplicate Content on a Website

In a past job, I spent a great deal of time debating ways to prevent Google from seeing similar pages within the same website. The thinking was that duplicate and thin content was bad (this was post-Panda), so if you published lists that overlapped heavily with other, similar lists on your site, it would hurt your results in search.

If you boarded this train of thought and rode it for all it’s worth, you’d do the following:

  • Allow fear of duplicate content to alter your publishing strategy and lead you to edit or eliminate certain pages.
  • Tag your pages into oblivion, using rel=canonical in a desperate attempt to educate Google about how these pages relate to one another.
  • Use ajax or robots.txt to make faceted versions of the pages invisible to Google.

Intuitively, this never made sense to me. It’s perfectly normal to have multiple views of things on a site, and those views may substantially duplicate other things on your site. Amazon is a great (and oft-cited) example – they have products, and products have sizes, colors, brands and a host of other variables. Amazon also has lists of products based on categories, and some of those categories overlap. Do I really believe that Google – in all its sophistication – would penalize a site for having a page dedicated to listing “digital cameras,” another for listing “digital point and shoot cameras” and another listing “digital slr cameras?”

I’ve also always viewed robots.txt as the nuclear option of search optimization: I want my content to be visible to the maximum extent possible. Call it transparency. Call it a marketer’s paranoia about not being available when someone – anyone – comes a’crawlin’.

(There are other factors that might lead you to go to some of these lengths, specifically crawl efficiency. If similar pages create infinite loops that trap the search bots or if they spend so much time crawling pages with little importance that they don’t make it to pages that are highly relevant to your audience, that’s a problem worth fixing. For now, I’m focusing solely on the perception that similar or duplicate content within the same site is a problem.)

To a great extent, my intuition has been confirmed and re-confirmed by Webmaster Help videos. When I listen to the way Matt Cutts describes duplicate content, the context is never penalty; it’s clarity. The more you do to annotate your duplicate or similar content, the easier it is for the search engines to avoid confusion, but two things seem true:

  • Even if you don’t, they’ll do their best to figure out the right page for the right query (and most of the time, they’ll get it right).
  • Even if you do, they may still figure out something different (which is good, in case you make a mistake in your canonicalization of pages).

The following videos in particular shaped my perception of this:

The video announcing the introduction of rel=canonical provides a really helpful overview of the topic. It makes clear that the value of the tag lies in helping the search engines avoid confusion, and also the extent to which the search engines interpret data independently.


In a later video, Cutts goes even further in explaining the distinction between treatment of normal duplicate content and penalty cases. He urges people not to stress out about it.

In a separate video (for which I haven’t been able to find the citation…yet), Cutts talks about affiliate sites and retail product pages – highly competitive duplicate content. His remedy? Figure out how to stand out, do something unique with it. It’s good advice, and really, in this regard, the search engines are like normal consumers, trying to figure out: why should I care about this provider over that provider? That’s not a question of tactical SEO so much as it is one of strategy, and by and large those are far more important (and interesting) questions than how to set up rel=canonical.

The lessons I take away from this?

DON’T: 

  • Confuse “similar content” with “duplicate content.” Penalties come in when there’s a clear pattern of scraping or re-publishing without adding value. It’s easy to tie yourself in knots thinking about this stuff, and like any factor Google takes into consideration, it’s highly contextual.
  • Change your publishing strategy out of fear of duplicate content. If there’s a valid editorial reason for duplicate or similar content within your site, then let there be duplicate content. I wouldn’t give duplicate content within the same site a second thought, and I wouldn’t worry too much about similar content across the internet. There’s nothing new under the sun (there very rarely is); chances are that whatever you’re talking about, someone else is talking about it, too. Competitive analysis is good, but at the end of the day, put out content about whatever your area of expertise covers and do it as distinctively as you can.

DO: 

  • Have a distinctive voice and identity. In a world where everything is written about and discussed ad infinitum in real time, you can’t avoid duplicating content altogether, so style matters.
  • Use rel=canonical for related content. Even if Google doesn’t strictly need it to understand which page to treat as the master page, it’s a good idea to take advantage of any markup that makes the structure of your content clear. (A quick audit sketch follows this list.)
  • Use 301 redirects. One URL per page is a good principle to live by. Eliminating duplicate URLs only makes your content data cleaner.
  • Be incredibly consistent across your URLs, sitemaps and internal links. The cleaner you can make your data, the better off you are.
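
Here’s the audit sketch referenced above: a minimal Python spot-check of canonical tags and redirect behavior across a handful of URLs. It assumes the requests library is installed, the URL list is hypothetical, and the canonical tag is pulled out with a crude regex rather than a full HTML parser, so treat it as a starting point rather than a finished audit tool.

```python
import re
import requests

# Hypothetical URLs to spot-check; swap in pages from your own site.
urls = [
    "https://www.example.com/cameras/",
    "https://www.example.com/cameras",  # should 301 to the trailing-slash version
    "https://www.example.com/cameras/digital-slr/",
]

link_tag_re = re.compile(r'<link[^>]*rel=["\']canonical["\'][^>]*>', re.I)
href_re = re.compile(r'href=["\']([^"\']+)["\']', re.I)

for url in urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)

    # Duplicate URLs should answer with a 301 to the preferred version.
    if resp.status_code in (301, 302):
        print(f"{url} -> {resp.status_code} redirect to {resp.headers.get('Location')}")
        continue

    # Otherwise the page should declare a canonical that matches its own URL.
    tag = link_tag_re.search(resp.text)
    href = href_re.search(tag.group(0)) if tag else None
    canonical = href.group(1) if href else "(no canonical tag found)"
    flag = "OK" if canonical == url else "CHECK"
    print(f"{url} -> {resp.status_code}, canonical: {canonical} [{flag}]")
```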

Search Milestones: The Historical Data Patent

When I first started managing organic search, one of the pieces of received wisdom I learned was that Google favored sites with greater “longevity.” The older a site, the higher it ranked.

That misconception lived on until I found articles relating to Google’s patent on the use of historical data. It only makes sense that time is a valuable dimension for understanding the context of a given site or link. Ironically, this patent is probably the reason people think Google favors older websites, but only a superficial reading supports that view.

I first stumbled on the historical data patent on SEO by the Sea. Even a quick read made it clear that age alone wasn’t what Google was looking at; it was looking at change over time. Age could work for or against a site, depending on numerous factors. I found a discussion on WebmasterWorld even more helpful, particularly its quotation of sections of the patent, such as this:

The dates that links appear can also be used to detect “spam,” where owners of documents or their colleagues create links to their own document for the purpose of boosting the score assigned by a search engine. A typical, “legitimate” document attracts back links slowly.

A large spike in the quantity of back links may signal a topical phenomenon (e.g., the CDC web site may develop many links quickly after an outbreak, such as SARS), or signal attempts to spam a search engine (to obtain a higher ranking and, thus, better placement in search results) by exchanging links, purchasing links, or gaining links from documents without editorial discretion on making links.

What emerges is a complex view of what Google is trying to do. They’re mining all that historical data for insight, and it’s not a linear process. Google’s not only looking at patterns over time – it’s also looking at patterns across groups of websites.

This is an important point, and one I keep in mind at all times: there are no one-size-fits-all calculations in the algorithm. Having developed much smaller-scale ranking algorithms myself, I know this firsthand: you look at multiple pieces of data, you weight them, you evaluate the results, you re-weight some, you change the formulas, and in doing so you create complex interdependencies that mean a factor may matter greatly in one situation and not at all in another, or be positive in one setting and negative in the next. Given the scale and complexity of what Google is trying to do, it only makes sense that its algorithm contains all kinds of complex interactions between different types of data. Trying to distill that down to older = better is pointless.
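
To illustrate what I mean by interdependencies, here’s a toy Python scoring formula, entirely my own invention and nothing like Google’s actual algorithm, in which the same link-growth factor helps in one context and hurts in another because of a single interaction term.

```python
# A made-up scoring formula (nothing like Google's) just to show how an
# interaction term makes a single factor context-dependent.
def score(age_years, links_per_month, freshness):
    base = 2.0 * freshness + 0.2 * links_per_month
    # Link growth is weighted by site age: the same link velocity that
    # helps an established site drags a brand-new one down.
    interaction = 0.5 * links_per_month * (age_years - 1.0)
    return base + interaction

for age in (0.2, 6.0):
    quiet = score(age, links_per_month=0, freshness=0.9)
    spike = score(age, links_per_month=40, freshness=0.9)
    print(f"age={age:>4} yrs   no links: {quiet:7.1f}   link spike: {spike:7.1f}")
```

The exact numbers are meaningless; the point is that once factors interact, you can’t reason about any one of them in isolation.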

What are the practical implications of looking at historical data from this perspective?

DON’T: 

  • Accelerate the launch of a site on a given domain simply for the purpose of gaining “longevity.” There’s nothing wrong with having a beta site out sooner rather than later, but all it’s going to do is generate a bunch of blank entries in a table somewhere: no back-links, maybe the occasional change to content. Is that helpful or harmful to ranking later? Probably neither. I’d sooner make sure I had a site that was fully formed and likely to produce a positive impression on users than a site I have to qualify with the term “beta.”
  • Purchase a domain name with lots of organic traffic solely for its rank. If you buy a site for its rank and then drastically overhaul it, Google’s going to know. If a specific domain name fits well in an overall content or brand strategy, purchase it. If the content of the pre-existing site is relevant to your audience, then by all means set up your re-directs. If neither of the above is true, save your cash and find a new domain.
  • Manipulate links. There are many other good reasons to avoid this, but the fact that Google can see how links accumulate over time is just another nail in the coffin. If you buy links, not only do you have to make sure the anchor text seems “natural” and that the pattern of sites linking to you reflects a natural semantic context, you also have to pace the acquisition of those links to look natural. It’s not only nearly impossible; it’s a waste of energy.

DO:

  • Invest in your site over time. If a site is simply sitting dormant, the information contained in it will go out of date and become less relevant. Google will see this, and the site will logically lose rank over time. If a given site doesn’t fit into your plans long-term, re-direct it or find a buyer. Don’t let it sit out there accumulating cobwebs.