
Will Google’s EU Woes Ever End?

google and the right to be forgotten ruling

The EU’s “Right to be Forgotten” ruling seems to be one endless headache for Google. The latest reports suggest that Google’s handling of these requests is the latest item up for scrutiny by European data protection authorities. What seems to be the problem? Well, it appears that Google is only removing links from search results in the EU, such as Google.co.uk, but not Google.com.

This effectively defeats the purpose of the ruling, as the offending links can still be found through non-localized search. Google was recently accused of purposely misinterpreting the ruling in order to stir up advocates who claim the ruling itself amounts to censorship.

The issue of censorship seems to be a hazy line drawn in the sand. The intention is that content should, over time, lose relevance as the facts pertaining to an individual become outdated, but the decisions about who and which events should be included are still largely up for debate – and by “debate” I mean up to Google.

Another point of contention is Google’s notification process. Currently the search giant sends notifications to sites whose pages have been removed from search results. The data protection authorities have voiced concerns over the effect this may have on those submitting the removal requests.

Because the right to be forgotten ruling is still in its infancy, there’s no doubt that this will be a hot topic for years to come as data protection authorities and Google hash out regulations and guidelines to cover most removal requests. It has been reported that Google has received some 91,000 removal requests affecting over 328,000 URLs. The countries submitting the largest numbers of requests are reportedly France and Germany, followed by the UK.

SEO news blog post by @ 12:00 pm on July 26, 2014


 

Google Updates Local Search Algorithm

Google Places

Yesterday, Search Engine Roundtable’s Barry Schwartz reported that Google has launched an algorithm update targeted at local search. The aim of the update (as always) is to provide more accurate and relevant local results, more closely tied to traditional ranking signals. It is suggested that the update will affect both Google Maps and standard web search results.

It is unknown at this point what percentage of search results will be affected by this update, but Schwartz has speculated that the changes will be reasonably significant. It is too soon to tell whether this update has hit the mark, as many local search marketers are reporting the pendulum swing in positioning from one extreme to the other that often follows an algorithm update before rankings settle somewhere in the middle.

SEO news blog post by @ 11:11 am on July 25, 2014


 

Canadian Court Orders Google to Remove Company From Global Search Results

Captain Canada

In yet another international ruling, Google has been ordered to remove a website from its global search results. Today, B.C. Supreme Court Justice Lauri Ann Fenlon ruled that Google has 14 days to remove a company by the name of Datalink from its global search results. Datalink is a rival of technology company Equustek, which manufactures networking devices for industrial equipment. Equustek has alleged that Datalink stole product designs by recruiting a former Equustek engineer.

While Equustek has already won the battle in Canadian courts, this case sets a precedent for international rulings. Justice Lauri Ann Fenlon has stated:

“The courts must adapt to the reality of e-commerce with its potential for abuse by those who would take the property of others and sell it through the borderless electronic web of the internet,”

Google has argued that the B.C. court does not have jurisdiction to enforce such a ruling as its headquarters are located in the United States, but Justice Fenlon countered that the company clearly does business in the province by selling ads and providing search results.

For more information read the full article: http://www.cbc.ca/news/canada/british-columbia/google-ordered-to-remove-website-from-global-search-results-1.2679824

SEO news blog post by @ 4:10 pm on June 18, 2014

Categories: Search Engine News

 

Is Penguin 3.0 On The Way?

The folks over at Search Engine Roundtable have reported that there’s been a lot of chatter on some of the webmaster forums about a possible Google algorithm update. Although Google has yet to release any official word, the forums are abuzz with webmasters seeing shifts in search traffic and ranking positions. Suggestions range from a Penguin 3.0 update, which many suspect will be released in late May, to a Panda algorithm refresh, to Google simply taking further action against link networks. Until we receive the official word from Google, we’ll just have to wait and see how it all plays out.

SEO news blog post by @ 12:34 pm on May 20, 2014


 

Link Reduction for Nerds

Let’s face it, even with our best efforts to make navigation clear and accessible, many websites are not as easy to navigate as they could be.

It doesn’t matter if you are a first-page superstar or a mom-and-pop blog with low traffic; most efforts really are no match for the diversity of our visitors.

When I first started blogging on SEO topics for Beanstalk I put a lot of effort into making my posts as accessible as I could, with a bunch of different tricks like <acronym> tags (now <abbr> tags) and hyperlinks to any content that could be explored further.

Like a good SEO I added the rel="nofollow" to any external links, because that totally fixes all problems, right?

“No.. Not really.”

External links, when they actually are relevant to your topic and point to a trusted resource, should not be marked as nofollow. This is especially true for discussions or dynamic resources, where you could be referencing a page that was recently updated with information on your topic; in that case you ‘need’ the crawlers to see that the remote page is relevant now.

Internal links are also a concern when they become redundant or excessive. If all your pages link to all your pages, you’re going to have a bad time.

If you went to a big new building downtown and asked the person at the visitors’ desk for directions, and he stopped every few words to explain what he meant by each one, you might never get through the directions – at least not before you were late for wherever you were headed.

Crawlers, even smart ones like Googlebot, don’t really appreciate 12 different URLs on one page that all go to the same place. It’s a waste of resources to keep feeding the same URL to the spiders as they crawl each of your pages.

In fact, in some cases, if your pages have tons of repeated links to more pages with the same internal link structures, all the bots will see are the same few pages/URLs until they take the time to push past the repeated links and get deeper into your site.

The boy who cried wolf.

The boy who cried wolf would probably be jumping up and down with another analogy, if the wolves hadn’t eaten him, just as your competition will gladly eat your position in the SERPs if your site is sending the crawlers to all the same pages.

Dave Davies has actually spoken about this many times, both on our blog, and on Search Engine Watch: Internal Linking to Promote Keyword Clusters.

“You really only NEED 1 link per page.”

Technically, you don’t actually need any links on your pages; you could just use JavaScript that changes window.location when desired and your pages would still work. But how would the robots get around without a sitemap? How would they understand which pages connect to which? Madness!

But don’t toss JavaScript out the window just yet; there’s a middle ground where everyone can win!

If you use JavaScript to send clicks to actual links on the page, you can mark up more elements of your page without making a spaghetti mess of your navigation and without sending crawlers on repeated visits to duplicate URLs.

“In fact jQuery can do most of the work for you!”

Say I wanted to suggest you look at our Articles section, because we have so many articles there, but I didn’t want our articles page linked too many times.

Just tell jQuery to first find the matching anchor (<a>) element:
jQuery("a[href='/articles/']")

Then tell it to add an ID to that anchor:
.attr('id', '/articles/');

And then tell it to send a click to that ID:
document.getElementById('/articles/').click();

Finally, make sure that your element’s style clearly matches the site’s style for real hyperlinks (i.e. cursor: pointer; text-decoration: underline;).
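
Putting the three pieces together, here’s a minimal sketch of how it could all look (it assumes jQuery is already loaded; the pseudo-link class name and the DOM-ready wrapper are just illustrations for this post, not anything baked into our markup):

    // Find the one real anchor to /articles/ and give it an ID (steps 1 and 2).
    jQuery(function () {
      jQuery("a[href='/articles/']").attr('id', '/articles/');

      // Step 3: any element marked up as a pseudo-link passes its click through
      // to that single real anchor, so crawlers only ever see one URL for it.
      jQuery('.pseudo-link').on('click', function () {
        document.getElementById('/articles/').click();
      });
    });

Styled with the cursor and underline rules above, a plain <span class="pseudo-link">articles</span> reads like a link to a visitor without handing the crawlers yet another copy of the same URL.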

UPDATE: For Chrome you need to either refresh the page or include the following in your page headers (PHP shown): header("X-XSS-Protection: 0");

SEO news blog post by @ 6:07 pm on August 28, 2013


 

SEO Concerns for Mobile Websites

You want to serve your clients’ needs regardless of what device they visit your site with, but how do you do it easily without upsetting your SEO?

Let’s look at the various options for tackling mobile sites and what each means in terms of SEO:

Responsive Design:
 
Visual demonstration of responsive web design

  • Responsive design is growing in popularity, especially as communications technology evolves, and bandwidth/memory use is less of a concern.
  • This method also gives us a single URL to work with which helps to keep the sitemap/structure as simple as possible without redirection nightmares.
  • On top of that, Googlebot won’t need to visit multiple URLs to index your content updates.
  • Less to crawl means Googlebot will have a better chance to index more of your pages/get deeper inside your site.
“Why is/was there a concern about mobile page size?”

Low-end mobiles, like a Nokia C6 from 4+ years ago (which was still an offering from major telcos last year), typically require that total page data be less than 1 MB in order for the phone to handle the memory needs of rendering/displaying the site.

If you go over that memory limit/tipping point you risk causing the browser to crash with an error that the device memory has been exceeded. Re-loading the browser drops you on the device’s default home-page with all your history lost. I think we could all agree that this is not a good remote experience for potential clients.

Higher-end devices are still victims of their real-world connectivity. Most 3rd generation devices can hit really nice peak speeds, but rarely get into a physical location where those speeds are consistent for a reasonable length of time.

Therefore, even with the latest gee-whiz handsets, your ratio of successfully delivering your entire page to mobile users will be impacted by the amount of data you require them to fetch.

In a responsive web design scenario the main HTML content is typically sent along with CSS markup that caters to the layout/screen limitations of a mobile web browser. While this can mean omitting image data and other resources, many sites simply attempt to ‘resize’ and ‘rearrange’ the content, leading to very similar bandwidth/memory needs for mobile sites using responsive design approaches.

The SEO concern with responsive designs is that, since the same written HTML content is served with the mobile styling, it’s crucial that search engines/crawlers understand that the mobile-styled content is not cloaking or some other black-hat technique. Google does a great job of detecting this, and we discuss how a bit later on, with some links to Google’s own pages on the topic.

Mobile Pages:

Visual demonstration of mobile web page design

 
If you’ve ever visited ‘mobile.site.com’ or something like that, you’ve already seen what mobile versions of a site can look like. Typically these versions skip reformatting the main site content and they get right down to the business of catering to the unique needs of mobile visitors.

Not only can it be a LOT easier to build a mobile version of your site/pages, but you can also expect these versions to have more features and be compatible with a wider range of devices.

Tools like jQuery Mobile will have you making pages in a jiffy using modern techniques/HTML5. It’s so easy you could even make a demo image purely for the sake of a blog post! ;)

This also frees up your main site design so you can make changes without worrying what impact it has on mobile.

“What about my content?”

Excellent question!

Mobile versions of sites with lots of useful content (AKA: great websites) can feel like a major hurdle to tackle, but in most cases there are some awesome solutions to making your content work with mobile versions.

The last thing you’d want to do is block content from mobile visitors, and Google’s ranking algorithm updates from June 2013 agree.

Even something as simple as a faulty redirect where your mobile site is serving up:
mobile.site.com/
..when the visitor requested:
www.site.com/articles/how_to_rank.html

.. is a really bad situation, and in Google’s own words:

“If the content doesn’t exist in a smartphone-friendly format, showing the desktop content is better than redirecting to an irrelevant page.”
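
For what it’s worth, the fix is usually just a matter of preserving the requested path when you redirect. Here’s a rough client-side sketch (the hostnames simply mirror the example above; in practice a server-side redirect is usually the better home for this logic):

    // Only redirect small screens, and keep the path + query string intact so
    // www.site.com/articles/how_to_rank.html lands on its mobile equivalent
    // instead of dumping the visitor on mobile.site.com/
    if (window.screen.width <= 640 && window.location.hostname === 'www.site.com') {
      window.location.replace(
        'http://mobile.site.com' + window.location.pathname + window.location.search
      );
    }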

 
You might think the solution to ‘light content’ or ‘duplicate content’ in mobile versions is to block crawlers from indexing the mobile versions of a page, but you’d be a bit off the mark because you actually want to make sure crawlers know you have mobile versions to evaluate and rank.

In fact if you hop on over to Google Analytics, you will see that Google is tracking how well your site is doing for mobile, desktop, and tablet visitors:
Example of Google Analytics for a site with mobile SEO issues.

(Nearly double the bounce rate for Mobile? Low page counts/duration as well!?)

 
Google Analytics will show you even more details, so if you want to know how well you do on Android vs. BlackBerry, they can tell you.

“How do the crawlers/search engines sort it out?”

A canonical URL is always a good idea, but using a canonical between a mobile page and the desktop version just makes sense.

A canonical can cancel out any fears of showing duplicate content and help the crawlers understand the relationship between your URLs with just one line of markup.

On the flip-side a rel=”alternate” link in the desktop version of the page will help ensure the connection between them is understood completely.

The following is straight from the Google Developers help docs:

On the desktop page, add:

<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1" >

and on the mobile page, the required annotation should be:

<link rel="canonical" href="http://www.example.com/page-1" >

This rel=”canonical” tag on the mobile URL pointing to the desktop page is required.

Even with responsive design, Googlebot is pretty smart, and if you aren’t blocking access to resources intended for a mobile browser, Google can/should detect responsive design from the content itself.

Google’s own help pages confirm this and provide the following example of responsive CSS markup:

    @media only screen and (max-width: 640px) {...}

In this example they are showing us a CSS rule that applies when the screen max-width is 640px – a clear sign that the rule is intended for a mobile device rather than a desktop.

Google Webmaster Central takes the information even further, providing tips and examples for implementing responsive design.

Ever wondered how to control what happens when a mobile device rotates and the screen width changes? Click the link above. :)
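
If you’d rather react to rotation from script instead of (or alongside) CSS, the standard matchMedia API gives you a listener for exactly that. A quick sketch:

    // Fires whenever the orientation query starts or stops matching,
    // i.e. when the device rotates between portrait and landscape.
    var portraitQuery = window.matchMedia('(orientation: portrait)');
    portraitQuery.addListener(function (mq) {
      if (mq.matches) {
        console.log('Portrait - viewport is now ' + window.innerWidth + 'px wide');
      } else {
        console.log('Landscape - viewport is now ' + window.innerWidth + 'px wide');
      }
    });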

SEO news blog post by @ 3:51 pm on August 16, 2013


 

Twitter’s New Anti-Abuse Policies and the Dark Side of Social Media

I won’t lie when I say that one of the best parts of my job is managing social media accounts; it can be legitimately fun, but it’s also a very important illustration of how the Internet affects customer/business interactions. My experience mostly comes from being a voracious and active social media user in my private life; I enjoy a following of 400+ people on Twitter, and I have seen what the network is capable of: live-blogging the Vancouver Olympic opening ceremonies, catching cheating politicians in the act, and spreading the word of everything from hot TV shows to full-blown revolutions. While some might resist it, social media is vital for modern reputation management and customer service; the web has democratized marketing in a very drastic way, making it nearly impossible for a company to cover up substantial issues with their products or service. When you do a great job, you might get the occasional positive mention; when you mess up, your customers will definitely air their grievances. And as a social media user myself, I can vouch for the fact that the public has come to respect businesses that address these issues honestly when they’re contacted about them.

Unfortunately, this democratization has led to some inevitable abuses of the system. In some cases it’s a rival company posting fake reviews in an attempt to discredit the competition; in others, a company (or person) may be the subject of a vicious complaint that goes viral online. Part of online reputation management is being able to mitigate these issues, whether by reporting abuse to site moderators or addressing complaints head-on.

I say all of this because some business owners on desktop and Android platforms may see a new feature on Twitter in the coming weeks: an in-tweet ‘Report Abuse’ button. Currently, users who wish to flag threats must visit the online help center and go through several extra steps to report abuse; the new button will make the process far quicker, and (hopefully) hasten the removal of hate speech. Twitter’s announcement wasn’t just a routine update; it was spurred largely by a British woman named Caroline Criado-Perez, and the flood of horrific rape, violence, and bomb threats she received over the weekend. These weren’t mere trolls; the abuse got so serious that at least one man was arrested on Sunday as a result. What did Criado-Perez do to warrant hundreds of 140-character threats of violence? She campaigned—successfully—for the British government to put author Jane Austen’s face on the new £10 banknote. The threats were also sent to a female Member of Parliament who tweeted her support for the campaign.

If it seems absurd, that’s because it is; this wasn’t a case of radical politics or controversial opinion, but a fairly tame move to represent more British women on currency. The horrifying result was a stark reminder of the abusive power of social media, especially against women and other marginalized groups in society. But even if you’re not an active participant in social issues online, it’s intimidating to realize just how quickly the anonymous web can turn against you. While some have applauded Twitter for finally taking a decisive action to make their website safer for all users, the decision has also drawn criticism from people who have seen how ‘Report Abuse’ functions on other websites have actually been used against legitimate accounts as a form of abuse in and of itself; a group of trolls flagging an account they disagree with can result in that account being suspended by the website, even when the owner hasn’t actually violated any rules.

Of course, the gender politics and personal vendettas of social media are quite a bit more intense than what we do as SEOs to help clients. In terms of reputation management online, the Report Abuse button will likely be a helpful way to ensure that a company doesn’t suffer from malicious treatment. However, it also may be far too easy to report a dissatisfied (and vocal) customer out of sheer frustration. Online reputation is a fickle beast; a few damning reviews can take down an entire small business, and the damage can be very difficult to control—it’s easy to feel helpless when it seems like nothing you do can push down a few dissatisfied customers in favor of the happy ones. Business owners on Twitter should still make it a priority to engage with unhappy customers on a personal level, rather than just report an account because of a particularly bad review—even if it makes the problem temporarily disappear, the Internet is not kind to those types of tactics.

The Criado-Perez debacle over the weekend has shown Twitter’s dark side, particularly when it comes to misogyny and online gender violence. The effect of the new reporting feature remains to be seen in that regard. While smaller businesses on social media may not engage in that debate, it’s a prudent reminder that the web’s anonymity can cause a lot of malicious action in the name of free speech. Reputation management isn’t going to get easier as a result of Twitter’s changes; it will still require a human touch and an honest connection, because that’s what garners respect in the social media sphere. But hopefully this small corner of the web will be a little safer for everyone who uses it, giving people more courage to speak their minds without fear of retaliatory attempts to forcibly silence them.

SEO news blog post by @ 3:14 pm on August 6, 2013


 

A Panda Attack

Google today confirmed that there is a Panda update rolling out. I find it odd that, after telling webmasters there would no longer be announcements of Panda updates, they made this announcement, and one has to wonder why.

The official message from Google is that this Panda update is softer than previous ones and that new signals have been added. There are webmasters reporting recoveries from previous updates with this one. I would love to hear some feedback from any of our blog readers as to changes you may have noticed in your rankings with this latest update.

I’ll publish a followup post to this one next week after we’ve had a chance to evaluate the update.

SEO news blog post by @ 10:42 am on July 18, 2013


 

What To Do When Your Site Drops

It’s happened to all of us. You wake up one morning feeling like a million bucks, you stretch, and if you’re like me, you notice the eye-rolling as once again your significant other catches you with a toothbrush dangling from your mouth and a laptop or iPhone in front of you while you check rankings and emails. And then it happens – you start your browser with a search phrase already set to display and you notice that your site no longer holds its previous position, and the move is not in the right direction. We’ve all faced it, and the longer you’ve been an SEO or website owner the more times you’ve seen it happen. But still … what do you do? To quote the immortal Douglas Adams, “Don’t panic.”

Believe me – I know how hard it is sometimes. It’s easy for me to say this to clients when I see an engine fluctuating or a site that has dropped only a position or two while we’re working to react, but it’s a completely different thing when it happens to you and (might I add) a good reminder to SEOs as to what our clients go through. But I still haven’t answered the question, have I? What do you do? What … do … you … do?

There are five basic steps to take when your site drops (I like to keep things simple, and a 5-step checklist is a great way to do that). These steps assume that, to start with, you had a well-optimized website with good SEO practices followed. If you don’t, then the reasons you dropped are pretty clear; but if you’ve got a well-optimized site and it has fallen, then this is for you. You should:

1 – Build Links

It’s very difficult for people not to want to do something proactive when they notice their site drop. I know – I’ve been there. One of the easiest things to do to keep yourself busy while working on the other four steps below is to build links. Building good, solid links to your site will never hurt and will only help you out, so even if one of the later steps shows you other actions you need to take (or not take), you’ll never go wrong with some solid link building. If nothing else, it’ll make you feel like you’re doing something and stop you from doing other things that might do more harm than good.

I’m not going to go into all the different types of links you could build or what the anatomy of a good link is. Many articles, forum threads and blog posts have been written on the subject and are easily found online. I’m sure if you monitor a few good SEO forums you’ll find more being written every day. If you can, find articles by Eric Enge. While he doesn’t give it all away (who does?), you won’t go wrong taking his advice, and even seasoned SEOs are likely to learn a thing or two from reading his work.

2 – Relax For A Couple Days

Before you rush to your favorite site editing tool – relax. Slight tweaks in content are unlikely to make much of a difference (if any) to your rankings. If you’ve got solid, well-optimized content and suddenly your site’s fluctuating – cramming in a few more instances of your targeted phrase will likely do more harm than good.

Now – when I say relax, I basically mean: don’t touch your site. There are steps (such as link building) that you can work on, including the analytical work noted below. Just don’t go editing all your copy to try to chase some tweak in Google’s algorithm. Relax.

3 & 4 – Analyze The Sites That Have Out-Ranked You (Onsite And Offsite)

One of the best things you can do is to take a look at the sites that are out-ranking you to find out what they’ve done. This will tell you two things: One – are there some good tactics that you’re missing, and Two – are these rankings likely to hold or are they flawed? There are two areas you’ll want to look at and those are the onsite optimization and the backlinks.

When you’re looking at the onsite optimization you only need to briefly look at their keyword densities, H1 and title tags, internal linking structure, number of indexed pages and the amount of content on the page. Remember: I’m assuming that (as you were ranking previously) you have a solidly optimized website with good SEO practices and content guidelines followed. If you compare the newly ranking sites with your own site, and with other sites that have either held their positions or dropped, you’ll get a feel for whether there are trends. If there are common traits among the sites that have moved up then you may be on to something. Note the common trends among the sites that have climbed and held, and also note what they have that the sites that dropped do not. Keep in mind there may be no common trends, or nothing you can find with this small a sample. Once this step is complete it’s time to move on to backlink analysis.

Backlink analysis is a good practice to undertake every few months regardless of updates, but it’s definitely necessary now that you’re dropping. What you need to do is analyze the backlinks of the sites that are out-ranking you. Depending on the competition level this can be a brutal task, in that it’s not just about numbers. You should use Yahoo!’s link:www.domain.com command and visit many of the sites in your competitors’ backlinks. What you’re trying to do is get a full view of what their links look like. You’ll also want to download SEO Link Analysis (a Firefox extension you’ll find at https://addons.mozilla.org/en-US/firefox/addon/7505/). When you’re doing a backlink check it automatically displays the PageRank and anchor text of the backlinks, though I’d still HIGHLY recommend visiting a good many of the sites to see what kind of links they are.

Once again you’re going to be looking at the architecture of the backlinks of the sites that are moving up: what tactics they’re using, what their links look like on the page, and what anchor text distribution they’ve got. Then compare that with other sites on the rise, your site and other stable sites to see what is common between those that are climbing and holding their ground versus those that have fallen.

Once you’ve collected this data it’s time to act. Gather all the common traits that the climbing and holding sites share and …

5 – Take Action

You’re done waiting around performing the tedious task of link building. You’ve got your data and you’re ready to launch into action and get some stuff done. But wait (oh no – did he say wait again?), is action really the best thing?

When you’ve pooled your data you need to decide what it means. Let’s take, for example, a situation where the newly ranking sites have very low word counts and tons of footer links (looks paid to me). Do you REALLY want to follow their lead? The question you need to ask yourself in this case is: are the factors that are apparently working RIGHT NOW going to provide better or worse results overall? Is less content more or less likely to result in a satisfied visitor? Do paid footer links help Google deliver quality results over the whole of the Internet? In these cases the answer is easily “no”, but your findings might be more subtle, such as an extremely disproportionate use of targeted anchor text among the ranking sites or sp@mmy copy with keyword densities at 8 or 10%.

What you’re in a position to do now is figure out a moving-forward strategy. If the common trends among the top and improving sites are bad or sp@mmy then you know the algorithm will correct itself eventually and you shouldn’t chase it. If you need to do something – build some additional links and look for new phrases to rank for on other pages to help stabilize your traffic when individual phrases decline.

If you find that the factors that have created the new results are legitimate and will lead to better results overall, you know you need to make some changes to what you’re doing. Fortunately, with the research you’ve just done, you’ve got a great starting spot in that you can probably pull some great resources and tactics from the lists of backlinks and onsite optimization you’ve just collected.

It may take hours or even days to properly perform this research but then – you needed something to do while your rankings are down. It might as well be productive.

SEO news blog post by @ 12:26 pm on April 20, 2013

Categories: SEO Articles

 

301s versus Canonical: Resolving Duplicate Content

Since the Panda updates from Google earlier this year, duplicate content has become an issue that no website owner can afford to overlook. While the update was designed specifically to target low value sites, content farms and scraped content, its paramount imperative was to reduce the amount of duplicate content that resulted in mass amounts of spam-ridden search results. As a direct result of the updates to the Google search algorithm, many thousands of both legitimate and nefarious sites were penalized with a significant drop in rankings and traffic.

Duplicate content can include the textual content of a website, content scraped from other sites, or similar content on multiple domains. Duplicate content issues also arise from dynamically generated product pages that display the same content under different sorting options; Google sees these pages as duplicates.

Of the tactics available, the 301 redirect and the more recent canonical tag are the primary weapons in a web developer’s arsenal for combating the problems associated with duplicate content. Unfortunately, many aspiring webmasters do not have a clear understanding of what they are, or of how and when each method should be employed.

What is a 301 Redirect?

In most cases a 301 redirect is used when you move your site to a new domain. The redirect tells search engines that your site has moved while still allowing you to preserve your rankings. The other common usage of the 301 is to specify the preferred URL of your domain.

Typically you can go to either http://www.exampledomain.com or http://exampledomain.com; they serve the same content, but search engines treat them as different URLs. The 301 redirect allows you to specify the “proper” domain and retain the full strength of the site’s ranking so that it is not split between the two.

The limitation of 301s is that they were only designed to work at the domain level and did not address the duplicate content issues arising from multiple dynamically driven pages. 301s also require that you have access to the web server hosting your site in order to implement them, along with an understanding of the syntax used to describe the parameters.

Introducing the Canonical Tag

Prior to the introduction of the canonical tag, duplicate content was simply ignored, and people used link building practices to game the SERPs in order to determine which version would be listed first. However, this had the negative systemic effect of inundating the SERPs with webspam, which made it increasingly difficult to get quality, relevant results when performing web searches. As a result, Google introduced the canonical tag in early 2009 as a way to resolve some of the major duplicate content issues faced by the search engines.

The canonical tag was designed as a page-level element: you add it to the “head” of the HTML document and set its parameters. The canonical tag is a very simple one-line code string that is treated in much the same way as a permanent 301 redirect. It ensures that the PageRank, backlinks and link juice flow to the “proper” URL and are not split between duplicate URLs. It is fully supported by Google, Bing, Yahoo and other search engines.

Another scenario in which you may want to use a canonical tag is when you have web pages that produce “ugly” URLs (http://www.example.com/product.php?item=bluewidgets&trackingid=1234&sessionid=5678) due to advanced sorting features, tracking options and other dynamically driven user-defined options. You can specify that the clean URL, or the “proper” or “canonical” version of the URL, is at “location B.” Search engines will then index the URL that you have specified and regard it as the correct URL.

<link rel="canonical" href="http://www.example.com/product.php?item=bluewidgets" />

*This example tells the search engine that the “correct” version of the Blue Widgets page is the clean URL, without the tracking and session parameters tacked on.

The main difference between a 301 redirect and the canonical tag is that the latter only works within a single domain or subdomain; that is, you cannot go from domain A to domain B. This has the added benefit of alleviating problems associated with 301 hijacks and similar attacks.

Introduction of The Cross-Domain Canonical Tag

In December of 2009, Google announced a cross-domain rel=”canonical” link element (http://googlewebmastercentral.blogspot.com/2009/12/handling-legitimate-cross-domain.html) that also works across domains, thereby allowing webmasters of multiple sites with similar content to define specific content as fundamentally sourced from a different domain.

A simple scenario in which the cross-domain tag would be used is if you have three related domains, on three separate URLs, all featuring the same article (or product descriptions, etc.). You can use the cross-domain tag to specify the page that is the authority (or preferred page). As a result, the specified page will collect all the associated benefits of PageRank and link juice, and you will not be penalized for duplicate content.

In essence the new tag performs the same function as the 301 redirect but allows for a much more user-friendly method of implementation.

During the release and subsequent promotion of the canonical tag, Matt Cutts stated that “anywhere from 10%-36% of webhosts might be duplicated content.” According to Cutts, there are several effective strategies to combat the problem of duplicate content, including:

  • Using 301 redirects
  • Setting your preference for the www or non-www version in Google’s Webmaster Tools (http://www.google.com/webmasters/)
  • Ensuring that your CMS only generates the correct URLs
  • Submitting a sitemap to Google; they will try to use only the URLs in the sitemap in an effort to pick the “best URL”

301s Versus rel=canonical?

Some people have concerns over how much link juice they will lose if they use a 301 instead of a canonical tag. There is very little difference in the relative amount of PageRank that gets passed between the two methods.

Matt Cutts from Google addressed the problem by stating:

“You do lose some (page rank) but the amount is pretty insignificant. This is used to try and stop people from using 301s exclusively for everything within their own site instead of hyperlinks.”

Watch the full video where Matt discusses the issue:

The canonical tag is most appropriately used when you cannot get at the server’s headers to implement the 301 directly, as a web technician is typically required to implement the 301 for you.

The Hack

In the video above Matt addresses the question of relative strength loss between using a 301 redirect and a rel=canonical tag. In a recent blog post (http://searchenginewatch.com/article/2072455/Hacked-Canonical-Tags-Coming-Soon-To-A-Website-Near-You), Beanstalk SEO’s CEO, Dave Davies, discusses a possible exploit of this “relative strength loss.”

Matt Cutts sent out a Tweet on May 13th stating, “A recent spam trend is hacking websites to insert rel=canonical pointing to hacker’s site. If you suspect hacking, check for it.”

The conclusion is that there is a viable exploit of the rel=canonical tag and that inserting the tag into a hacked page can be a very effective strategy: on par with 301ing the page itself, but even “better” in that it likely won’t be detected by the site owner.

Davies continues by posing the following question: “The next question we need to ask ourselves is, ‘Is this an issue now or just a warning?’” implying that Google is certainly aware of the hack and will be analyzing ways to detect and penalize those who attempt it.

Article Takeaways:

  • The Panda updates have made the issue of duplicate content a priority for site owners to address.
  • Always use 301s whenever possible. They are more widely supported, and every search engine can follow a 301 redirect; any new search engine that comes onto the market will have to support them as well.
  • 301s only work at the domain level (i.e. pointing domainexample.com to www.domainexample.com).
  • 301s also require that you have access to the web server hosting your site in order to implement them.
  • The rel=canonical tag is a more user-friendly method of accomplishing the same task as a 301.
  • The cross-domain canonical tag works almost identically to a 301 redirect.
  • The canonical tag is a user-friendly version designed to work within the site’s HTML head section.

Resources:

Learn About the Canonical Link Element in 5 Minutes:

http://www.mattcutts.com/blog/canonical-link-tag/

Specify Your Canonical:

http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html

The Canonical Tag Can Save You from the Duplicate Content Monster:

http://searchenginewatch.com/article/2066270/The-Canonical-Tag-Can-Save-You-from-the-Duplicate-Content-Monster

 

Canonical Hack:

http://searchenginewatch.com/article/2072455/Hacked-Canonical-Tags-Coming-Soon-To-A-Website-Near-You

SEO news blog post by @ 11:19 am on September 4, 2011

Categories: SEO Articles

 
