
Canadian Court Orders Google to Remove Company From Global Search Results


In yet another international ruling, Google has been ordered to remove a website from its global search results. Today, B.C. Supreme Court Justice Lauri Ann Fenlon ruled that Google has 14 days to remove a company by the name of Datalink from its global search results. Datalink is a rival of technology company Equustek, which manufactures networking devices for industrial equipment. Equustek alleges that Datalink stole its product designs by recruiting a former Equustek engineer.

While Equustek has already won the battle in Canadian courts, this case sets a precedent for international rulings. Justice Lauri Ann Fenlon has stated:

“The courts must adapt to the reality of e-commerce with its potential for abuse by those who would take the property of others and sell it through the borderless electronic web of the internet,”

Google has argued that the B.C. court does not have jurisdiction to enforce such a ruling because its headquarters are located in the United States, but Justice Fenlon countered that the company clearly does business in the province by selling ads and providing search results.

For more information read the full article: http://www.cbc.ca/news/canada/british-columbia/google-ordered-to-remove-website-from-global-search-results-1.2679824

SEO news blog post by @ 4:10 pm on June 18, 2014

Categories: Search Engine News

 

Is Penguin 3.0 On The Way?

The folks over at Search Engine Roundtable have reported that there’s been a lot of chatter on some of the webmaster forums about a possible Google algorithm update. Although Google has yet to release any official word, the forums are abuzz with webmasters seeing shifts in search traffic and ranking positions. Suggestions range from a Penguin 3.0 update, which many suspect will be released in late May, to a Panda algorithm refresh, to Google simply taking further action against link networks. Until we receive official word from Google, we’ll just have to wait and see how it all plays out.

SEO news blog post by @ 12:34 pm on May 20, 2014


 

Link Reduction for Nerds

Let’s face it, even with our best efforts to make navigation clear and accessible, many websites are not as easy to navigate as they could be.

It doesn’t matter if you are a first-page superstar or a mom-and-pop blog with low traffic; most efforts really are no match for the diversity of our visitors.

When I first started blogging on SEO topics for Beanstalk I put a lot of effort into making my posts as accessible as I could, with a bunch of different tricks like <acronym> tags (now <abbr> tags) and hyperlinks to any content that could be explored further.

Like a good SEO I added the rel="nofollow" to any external links, because that totally fixes all problems, right?

“No.. Not really.”

External links, when they are actually relevant to your topic and point to a trusted resource, should not be marked as nofollow. This is especially true for discussions or dynamic resources, where you could be referencing a page that was recently updated with information on your topic; in that case you ‘need’ the crawlers to see that the remote page is relevant now.

Internal links are also a concern when they become redundant or excessive. If all your pages link to all your pages, you’re going to have a bad time.

If you went to a big new building downtown and asked the person at the visitors’ desk for directions, and that person stopped every few words to explain what he meant by each word, you might never get through the directions, at least not before you were late for whatever destination you had in mind.

Crawlers, even smart ones like Googlebot, don’t really appreciate 12 different links on one page that all go to the same place. It’s a waste of resources to keep feeding the same URL to the spiders as they crawl each of your pages.

In fact, in some cases, if your pages have tons of repeated links to more pages with the same internal link structure, all the bots will see are the same few pages/URLs until they take the time to push past the repeated links and get deeper into your site.

The boy who cried wolf.

The boy who cried wolf would probably be jumping up and down with another analogy, if the wolves hadn’t eaten him, just as your competition will gladly eat your position in the SERPs if your site is sending the crawlers to all the same pages.

Dave Davies has actually spoken about this many times, both on our blog, and on Search Engine Watch: Internal Linking to Promote Keyword Clusters.

“You really only NEED 1 link per page.”

Technically, you don’t actually need any links on your pages; you could just use JavaScript that changes the window.location property when desired and your pages would still work. But how would the robots get around without a sitemap? How would they understand which pages connect to which? Madness!

But don’t toss Javascript out the window just yet, there’s a middle ground where everyone can win!

If you use Javascript to send clicks to actual links on the page, you can markup more elements of your page without making a spaghetti mess of your navigation and without sending crawlers on repeated visits to duplicate URLs.

“In fact jQuery can do most of the work for you!”

Say I wanted to suggest you look at our Articles section, because we have so many articles in the Articles section, but I didn’t want our articles page linked too many times on this page.

Just tell jQuery to first find the matching anchor (<a>) element:
jQuery("a[href='/articles/']")

Then tell it to add an ID to that link:
.attr('id', '/articles/');

And then tell it to send a click to that ID:
document.getElementById('/articles/').click();

Finally, make sure that your element’s style clearly matches the site’s style for real hyperlinks (i.e. cursor: pointer; text-decoration: underline;).
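Putting those pieces together, here’s a minimal sketch of the whole idea (assuming jQuery is loaded, the page contains exactly one real anchor pointing at /articles/, and the extra clickable elements are marked with a hypothetical class like .goto-articles – the class and ID names are illustrative only):

// Minimal sketch: one real crawlable link, any number of extra clickable elements.
jQuery(function () {
    // Find the single real anchor and give it an ID we can target later.
    jQuery("a[href='/articles/']").attr('id', 'articles-link');

    // Make the marked elements look and behave like the real link,
    // without adding more <a href="/articles/"> tags to the page.
    jQuery('.goto-articles')
        .css({ cursor: 'pointer', textDecoration: 'underline' })
        .on('click', function () {
            document.getElementById('articles-link').click();
        });
});

The crawlers still see a single URL for the Articles page, while visitors get as many clickable prompts as the layout needs.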

UPDATE: For Chrome browsers you need to either refresh the page or send the following HTTP response header (shown here as a PHP header() call): header("X-XSS-Protection: 0");

SEO news blog post by @ 6:07 pm on August 28, 2013


 

SEO concerns for Mobile Websites

You want to serve your clients’ needs regardless of what device they visit your site with, but how do you do it easily without upsetting your SEO?

Let’s look at the various options for tackling mobile sites and what each means in terms of SEO:

Responsive Design:
 
Visual demonstration of responsive web design

  • Responsive design is growing in popularity, especially as communications technology evolves, and bandwidth/memory use is less of a concern.
  • This method also gives us a single URL to work with which helps to keep the sitemap/structure as simple as possible without redirection nightmares.
  • On top of that, Googlebot won’t need to visit multiple URLs to index your content updates.
  • Less to crawl means Googlebot will have a better chance to index more of your pages/get deeper inside your site.
“Why is/was there a concern about mobile page size?”

Low-end mobiles, like a Nokia C6 from 4+ years ago (which was still an offering from major telcos last year), typically require that total page data be less than 1 MB in order for the phone to handle the memory needs of rendering/displaying the site.

If you go over that memory limit/tipping point you risk causing the browser to crash with an error that the device memory has been exceeded. Re-loading the browser drops you on the device’s default home-page with all your history lost. I think we could all agree that this is not a good remote experience for potential clients.

Higher-end devices are still victims of their real-world connectivity. Most 3rd generation devices can hit really nice peak speeds, but rarely get into a physical location where those speeds are consistent for a reasonable length of time.

Therefore, even with the latest gee-wiz handsets, your ratio of successfully delivering your entire page to mobile users will be impacted by the amount of data you require them to fetch.

In a responsive web design scenario the main HTML content is typically sent along with CSS markup that caters to the layout/screen limitations of a mobile web browser. While this can mean omission of image data and other resources, many sites simply attempt to ‘resize’ and ‘rearrange’ the content leading to very similar bandwidth/memory needs for mobile sites using responsive design approaches.

The SEO concern with responsive designs is that, because the written HTML content is served with mobile styling, it’s very crucial that external search engines/crawlers understand that the mobile-styled content is not cloaking or some other black-hat technique. Google does a great job of detecting this, and we discuss how a bit later on, with some links to Google’s own pages on the topic.

Mobile Pages:

Visual demonstration of mobile web page design

 
If you’ve ever visited ‘mobile.site.com’ or something like that, you’ve already seen what mobile versions of a site can look like. Typically these versions skip reformatting the main site content and they get right down to the business of catering to the unique needs of mobile visitors.

Not only can it be a LOT easier to build a mobile version of your site/pages, you can expect these versions to have more features and be more compatible with a wider range of devices.

Tools like jQuery Mobile will have you making pages in a jiffy using modern techniques/HTML5. It’s so easy you could even make a demo image purely for the sake of a blog post! ;)

This also frees up your main site design so you can make changes without worrying what impact it has on mobile.

“What about my content?”

Excellent question!

Mobile versions of sites with lots of useful content (AKA: great websites) can feel like a major hurdle to tackle, but in most cases there are some awesome solutions for making your content work with mobile versions.

The last thing you’d want to do is block content from mobile visitors, and Google’s ranking algorithm updates in June 2013 agree.

Even something as simple as a faulty redirect where your mobile site is serving up:
mobile.site.com/
..when the visitor requested:
www.site.com/articles/how_to_rank.html

.. is a really bad situation, and in Google’s own words:

“If the content doesn’t exist in a smartphone-friendly format, showing the desktop content is better than redirecting to an irrelevant page.”
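If you do use device-detection redirects, the fix is to preserve the requested path and send the visitor to the equivalent mobile page. Here’s a minimal client-side sketch (assuming your mobile pages mirror the desktop paths under m.site.com; the host names and the isMobileBrowser() helper are illustrative only, and the same path-preserving rule applies to server-side redirects):

// Map a desktop URL to its mobile equivalent, keeping the path intact.
// e.g. http://www.site.com/articles/how_to_rank.html
//   -> http://m.site.com/articles/how_to_rank.html
function mobileUrlFor(desktopUrl) {
    return desktopUrl.replace('://www.site.com', '://m.site.com');
}

// isMobileBrowser() is a hypothetical user-agent check – substitute your own detection.
if (isMobileBrowser() && window.location.hostname === 'www.site.com') {
    // Send the visitor to the equivalent mobile page, not the mobile homepage.
    window.location.replace(mobileUrlFor(window.location.href));
}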

 
You might think the solution to ‘light content’ or ‘duplicate content’ in mobile versions is to block crawlers from indexing the mobile versions of a page, but you’d be a bit off the mark because you actually want to make sure crawlers know you have mobile versions to evaluate and rank.

In fact if you hop on over to Google Analytics, you will see that Google is tracking how well your site is doing for mobile, desktop, and tablet visitors:
Example of Google Analytics for a site with mobile SEO issues.

(Nearly double the bounce rate for Mobile? Low page counts/duration as well!?)

 
Google Analytics will show you even more details, so if you want to know how well you do on Android vs. BlackBerry, they can tell you.

“How do the crawlers/search engines sort it out?”

A canonical URL is always a good idea, but using a canonical between a mobile page and the desktop version just makes sense.

A canonical can cancel out any fears of showing duplicate content and help the crawlers understand the relationship between your URLs with just one line of markup.

On the flip-side a rel=”alternate” link in the desktop version of the page will help ensure the connection between them is understood completely.

The following is straight from the Google Developers help docs:

On the desktop page, add:

<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1" >

and on the mobile page, the required annotation should be:

<link rel="canonical" href="http://www.example.com/page-1" >

This rel=”canonical” tag on the mobile URL pointing to the desktop page is required.

As for responsive design, Googlebot is pretty smart: if you aren’t blocking access to the resources intended for a mobile browser, Google can and should detect responsive design from the content itself.

Google’s own help pages confirm this and provide the following example of responsive CSS markup:

    @media only screen and (max-width: 640px) {...}

In this example they are showing us a CSS rule that applies when the screen width is at most 640px, a clear sign that the rule targets a mobile device rather than a desktop.

Google Webmaster Central takes the information even further, providing tips and examples for implementing responsive design.

Ever wondered how to control what happens when a mobile device rotates and the screen width changes? Click the link above. :)
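As a quick illustration of that idea (this is just a sketch, not from Google’s pages, and the class names are hypothetical), you can watch for width changes caused by rotation from JavaScript using the same 640px breakpoint as the CSS rule above:

// Re-apply layout choices whenever rotation changes the effective width.
var mobileQuery = window.matchMedia('only screen and (max-width: 640px)');

function applyLayout(mq) {
    // 'mobile-layout' and 'desktop-layout' are hypothetical class names.
    document.body.className = mq.matches ? 'mobile-layout' : 'desktop-layout';
}

applyLayout(mobileQuery);              // run once on load
mobileQuery.addListener(applyLayout);  // run again whenever the match flips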

SEO news blog post by @ 3:51 pm on August 16, 2013


 

Twitter’s New Anti-Abuse Policies and the Dark Side of Social Media

I won’t lie when I say that one of the best parts of my job is managing social media accounts; it can be legitimately fun, but it’s also a very important illustration of how the Internet affects customer/business interactions. My experience mostly comes from being a voracious and active social media user in my private life; I enjoy a following of 400+ people on Twitter, and I have seen what the network is capable of: live-blogging the Vancouver Olympic opening ceremonies, catching cheating politicians in the act, and spreading the word of everything from hot TV shows to full-blown revolutions. While some might resist it, social media is vital for modern reputation management and customer service; the web has democratized marketing in a very drastic way, making it nearly impossible for a company to cover up substantial issues with their products or service. When you do a great job, you might get the occasional positive mention; when you mess up, your customers will definitely air their grievances. And as a social media user myself, I can vouch for the fact that the public has come to respect businesses that address these issues honestly when they’re contacted about them.

Unfortunately, this democratization has led to some inevitable abuses of the system. In some cases it’s a rival company posting fake reviews in an attempt to discredit the competition; in others, a company (or person) may be the subject of a vicious complaint that goes viral online. Part of online reputation management is being able to mitigate these issues, whether by reporting abuse to site moderators or addressing complaints head-on.

I say all of this because some business owners on desktop and Android platforms may see a new feature on Twitter in the coming weeks: an in-tweet ‘Report Abuse’ button. Currently, users who wish to flag threats must visit the online help center and go through several extra steps to report abuse; the new button will make the process far quicker, and (hopefully) hasten the removal of hate speech. Twitter’s announcement wasn’t just a routine update; it was spurred largely by a British woman named Caroline Criado-Perez, and the flood of horrific rape, violence, and bomb threats she received over the weekend. These weren’t mere trolls; the abuse got so serious that at least one man was arrested on Sunday as a result. What did Criado-Perez do to warrant hundreds of 140-character threats of violence? She campaigned—successfully—for the British government to put author Jane Austen’s face on the new £10 banknote. The threats were also sent to a female Member of Parliament who tweeted her support for the campaign.

If it seems absurd, that’s because it is; this wasn’t a case of radical politics or controversial opinion, but a fairly tame move to represent more British women on currency. The horrifying result was a stark reminder of the abusive power of social media, especially against women and other marginalized groups in society. But even if you’re not an active participant in social issues online, it’s intimidating to realize just how quickly the anonymous web can turn against you. While some have applauded Twitter for finally taking a decisive action to make their website safer for all users, the decision has also drawn criticism from people who have seen how ‘Report Abuse’ functions on other websites have actually been used against legitimate accounts as a form of abuse in and of itself; a group of trolls flagging an account they disagree with can result in that account being suspended by the website, even when the owner hasn’t actually violated any rules.

Of course, the gender politics and personal vendettas of social media are quite a bit more intense than what we do as SEOs to help clients. In terms of reputation management online, the Report Abuse button will likely be a helpful way to ensure that a company doesn’t suffer from malicious treatment. However, it also may be far too easy to report a dissatisfied (and vocal) customer out of sheer frustration. Online reputation is a fickle beast; a few damning reviews can take down an entire small business, and the damage can be very difficult to control—it’s easy to feel helpless when it seems like nothing you do can push down a few dissatisfied customers in favor of the happy ones. Business owners on Twitter should still make it a priority to engage with unhappy customers on a personal level, rather than just report an account because of a particularly bad review—even if it makes the problem temporarily disappear, the Internet is not kind to those types of tactics.

The Criado-Perez debacle over the weekend has shown Twitter’s dark side, particularly when it comes to misogyny and online gender violence. The effect of the new reporting feature remains to be seen in that regard. While smaller businesses on social media may not engage in that debate, it’s a prudent reminder that the web’s anonymity can cause a lot of malicious action in the name of free speech. Reputation management isn’t going to get easier as a result of Twitter’s changes; it will still require a human touch and an honest connection, because that’s what garners respect in the social media sphere. But hopefully this small corner of the web will be a little safer for everyone who uses it, giving people more courage to speak their minds without fear of retaliatory attempts to forcibly silence them.

SEO news blog post by @ 3:14 pm on August 6, 2013


 

A Panda Attack

Google today confirmed that there is a Panda update rolling out. I find it odd that, after telling webmasters there would no longer be announcements of Panda updates, they made this announcement, and one has to wonder why.

The official message from Google is that this Panda update is softer than previous ones and that new signals have been added. Some webmasters are reporting recoveries from previous updates with this one. I would love to hear some feedback from any of our blog readers as to changes you may have noticed in your rankings with this latest update.

I’ll publish a followup post to this one next week after we’ve had a chance to evaluate the update.

SEO news blog post by @ 10:42 am on July 18, 2013


 

What To Do When Your Site Drops

It’s happened to all of us. You wake up one morning feeling like a million bucks, you stretch and, if you’re like me, you notice the eye-rolling as once again your significant other catches you with a toothbrush dangling from your mouth and a laptop or iPhone in front of you while you check rankings and emails. And then it happens – you start your browser with a search phrase already set to display and you notice that your site no longer holds its previous position, and the move is not in the right direction. We’ve all faced it, and the longer you’ve been an SEO or website owner the more times you’ve seen it happen. But still … what do you do? To quote the immortal Douglas Adams, “Don’t panic.”

Believe me – I know how hard it is sometimes. It’s easy for me to say this to clients when I see an engine fluctuating or a site has dropped only a position or two and we’re working to react, but it’s a completely different thing when it happens to you and (might I add) a good reminder to SEOs of what our clients go through. But I still haven’t answered the question, have I? What do you do? What … do … you … do?

There are five basic steps one must take when their site drops (I like to keep things simple and a 5 step check-list is a great way to do that). These steps assume that to start with you had a well-optimized website with good SEO practices followed. If you don’t then the reasons you dropped are pretty clear but if you’ve got a well-optimized site and your site has fallen – then this is for you. You should:

1 – Build Links

It’s very difficult for people to not want to do something proactive when they notice their site drop. I know – I’ve been there. One of the easiest things to do to keep yourself busy while working on the other 4 steps below is to build links. Building good, solid links to your site will never hurt and will only help you out, so even if one of the later steps shows you other actions you need to take (or not take), you’ll never go wrong with some solid link building. If nothing else, it’ll make you feel like you’re doing something and stop you from doing other things that might do you more harm than good.

I’m not going to go into all the different types of links you could build or what the anatomy of a good link is. Many articles, forums and blog posts have been written in the past and are easily found online. I’m sure if you monitor a few good SEO forums you’ll find more being written every day. If you can – find articles by Eric Enge. While he doesn’t give it all away (who does?) – you won’t go wrong taking his advice and even seasoned SEO’s are likely to learn a thing or two from reading his work.

2 – Relax For A Couple Days

Before you rush to your favorite site editing tool – relax. Slight tweaks in content are unlikely to make much of a difference (if any) to your rankings. If you’ve got solid, well-optimized content and suddenly your site’s fluctuating – cramming in a few more instances of your targeted phrase will likely do more harm than good.

Now – when I say relax I basically mean, don’t touch your site. There are steps (such as link building) that you can work on including the analytical work noted below. Just don’t go editing all your copy to try to chase some tweak in Google’s algorithm. Relax.

3 & 4 – Analyze The Sites That Have Out-Ranked You (Onsite And Offsite)

One of the best things you can do is to take a look at the sites that are out-ranking you to find out what they’ve done. This will tell you two things: One – are there some good tactics that you’re missing, and Two – are these rankings likely to hold or are they flawed? There are two areas you’ll want to look at and those are the onsite optimization and the backlinks.

When you’re looking at the onsite optimization you need to only briefly look at their keyword densities, H1 and title tags, internal linking structure, number of indexed pages and the amount of content on the page. Remember: I’m assuming that (as you were ranking previously) you have a solidly optimized website with some good SEO practices and content guidelines followed. If you compare the newly ranking sites with your site, with other sites that have held their positions, and with sites that have dropped, you’ll get a feel for whether there are trends. If there are common traits among the sites that have moved up then you may be on to something. Remember the common trends among the sites that have climbed and held, and also note what they have that the sites that have dropped do not. Remember: there may be no common trends, or nothing you can find with this small a sample. Once this step is complete it’s time to move on to backlink analysis.

Backlink analysis is a good practice to undertake every few months regardless of updates, but it is definitely necessary now that you’re dropping. What you need to do now is analyze the backlinks of the sites that are out-ranking you. Depending on the competition level this can be a brutal task in that it’s not just about numbers. You should use Yahoo!’s link:www.domain.com command and visit many of the sites in your competitors’ backlinks. What you’re trying to do is get a full view of what their links look like. You’ll also want to download SEO Link Analysis (a Firefox extension you’ll find at https://addons.mozilla.org/en-US/firefox/addon/7505/). When you’re doing a backlink check it automatically displays the PageRank and anchor text of the backlinks, though I’d still HIGHLY recommend visiting a good many of the sites to see what kind of links they are.

Once again you’re going to be looking at the architecture of the backlinks of the sites that are moving up: what tactics they’re using, what their links look like on the page, and what anchor text distribution they’ve got. Once again you’re going to compare that with other sites on the rise, your site and other stable sites to see what is common between those that are climbing and holding their ground versus those that have fallen.

Once we’ve collected this data it’s time to act. Collect all the common traits that the climbing and holding sites have and …

5 – Take Action

You’re done waiting around performing the tedious task of link building. You’ve got your data and you’re ready to launch into action and get some stuff done. But wait (oh no – did he say wait again?) – is action really the best thing?

When you’ve pooled your data you need to decide what it means. Let’s take for example a situation where the newly ranking sites have very low word counts and tons of footer links (looks paid to me). Do you REALLY want to follow their lead? The question you need to ask yourself in this case is whether the factors that are apparently working RIGHT NOW are going to provide better or worse results overall. Is less content more or less likely to result in a satisfied visitor? Do paid footer links help Google deliver quality results over the whole of the Internet? In these cases the answer is easily “no,” but your findings might be more subtle, such as an extremely disproportionate use of targeted anchor text among the ranking sites or sp@mmy copy with keyword densities at 8 or 10%.

What you’re in a position to do now is figure out a moving-forward strategy. If the common trends among the top and improving sites are bad or sp@mmy then you know the algorithm will correct itself eventually and you shouldn’t chase it. If you need to do something – build some additional links and look for new phrases to rank for on other pages to help stabilize your traffic when individual phrases decline.

If you find that the factors that have created the new results are legitimate and will lead to better results overall, you know you need to make some changes to what you’re doing. Fortunately, with the research you’ve just done you’ve got a great starting spot, in that you can probably pull some great resources and tactics from the lists of backlinks and onsite optimization you’ve just collected.

It may take hours or even days to properly perform this research but then – you needed something to do while your rankings are down. It might as well be productive.

SEO news blog post by @ 12:26 pm on April 20, 2013

Categories: SEO Articles

 

301s versus Canonical: Resolving Duplicate Content

Since the Panda updates from Google earlier this year, duplicate content has become an issue that no website owner can afford to overlook. While the update was designed specifically to target low value sites, content farms and scraped content, its paramount imperative was to reduce the amount of duplicate content that resulted in mass amounts of spam-ridden search results. As a direct result of the updates to the Google search algorithm, many thousands of both legitimate and nefarious sites were penalized with a significant drop in rankings and traffic.

Duplicate content can include the textual content of a website, scraped content from other sites, or similar content on multiple domains. Duplicate content issues also arise from dynamically generated product pages that display duplicate content throughout different sorting features. Google sees these pages as duplicate content.

Of the tactics available, the 301 redirect and the more recent canonical tag are the primary weapons in a web developer’s arsenal to help combat the problems associated with duplicate content. Unfortunately, many aspiring webmasters do not have a clear understanding of what they are, or how and when each method should be employed.

What is a 301 Redirect?

In most cases a 301 redirect is used when you move your site to a new domain or move a page to a new URL. The redirect tells search engines that your content has moved but still allows you to preserve your rankings. The other common usage of the 301 is to specify the preferred URL of your domain.

Typically you can go to either http://www.exampledomain.com or http://exampledomain.com; they are the same site, but a search engine treats them as different URLs. The 301 redirect allows you to specify the “proper” domain and retain the full strength of the site’s ranking so that it is not split between the two.

The downside to 301s is that they were only designed to work at the domain level and did not address the duplicate content issues arising from having multiple dynamically driven pages. 301s also require that you have access to the web server hosting your site in order to implement them, and an understanding of the syntax used to describe the parameters.
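To make the mechanics concrete, here is a minimal sketch of a domain-level 301 that points the non-www host at the www host. It’s shown as a small Node.js server purely for illustration; the host names are examples, and in practice most sites implement this in their web server or CMS configuration rather than in application code:

// Minimal sketch: 301 every request on the non-www host to the www host.
var http = require('http');

http.createServer(function (req, res) {
    if (req.headers.host === 'exampledomain.com') {
        // Permanent redirect to the same path on the preferred (www) host.
        res.writeHead(301, { 'Location': 'http://www.exampledomain.com' + req.url });
        res.end();
        return;
    }
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<p>Served from the preferred www host.</p>');
}).listen(8080);

Search engines that follow the redirect consolidate ranking signals on the www version instead of splitting them between the two hosts.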

Introducing the Canonical Tag

Prior to the introduction of the canonical tag, duplicate content was simply ignored, and people used link building practices to game the SERPs and determine which version would be listed first. However, this had the negative systemic effect of inundating the SERPs with webspam, which made it increasingly difficult to get quality, relevant results when performing web searches. As a result, Google introduced the canonical tag in early 2009 as a way to resolve some of the major duplicate content issues faced by the search engines.

The canonical tag is a page-level element that you add to the “head” of the HTML document. It is a very simple one-line code string that is treated in very much the same way as a permanent 301 redirect: it ensures that the PageRank, backlinks and link juice flow to the “proper” URL and are not split between duplicate URLs. It is fully supported by Google, Bing, Yahoo and other search engines.

Another scenario in which you may want to use a canonical tag is when you have web pages that produce “ugly” URLs (http://www.example.com/product.php?item=bluewidgets&trackingid=1234&sessionid=5678) due to advanced sorting features, tracking options and other dynamically driven user-defined options. You can specify that the clean URL is the “proper,” or “canonical,” version. Search engines will then index the URL that you have specified and regard it as the correct URL.

<link rel="canonical" href="http://www.example.com/product.php?item=bluewidgets" />

*This example tells the search engine that the “correct” version of the Blue Widgets page is the clean URL, without the tracking and session parameters.

The main difference between a 301 redirect and the canonical tag is that the latter only works within a single domain or subdomain; that is, you cannot go from domain A to domain B. This has the added benefit of alleviating problems associated with 301 hijacks and similar attacks.

Introduction of The Cross-Domain Canonical Tag

In December of 2009, Google announced a cross-domain rel=”canonical” link element (http://googlewebmastercentral.blogspot.com/2009/12/handling-legitimate-cross-domain.html) that also works across domains, thereby allowing webmasters of multiple sites with similar content to define specific content as fundamentally sourced from a different domain.

A simple scenario in which the cross-domain tag would be used is if you have three related domains, on three separate URLs, that all feature the same article (or product descriptions, etc.). You can use the cross-domain tag to specify the page that is the authority (or preferred page). As a result, the specified page will collect all the associated benefits of PageRank and link juice, and you will not be penalized for duplicate content.

In essence the new tag performs the exact same function as the 301 redirect but allows for a much more user-friendly method of implementation.

During the release and subsequent promotion of the canonical tag, Matt Cutts stated that “anywhere from 10%-36% of webhosts might be duplicated content.” According to Cutts, there are several effective strategies to combat the problem of duplicate content, including:

  • Using 301 redirects
  • Setting your preference for the www or non-www version in Google’s Webmaster Tools (http://www.google.com/webmasters/)
  • Ensuring that your CMS only generates the correct urls
  • Submitting a sitemap to Google. They will try to only use the URLs in the sitemap in an effort to pick the “best URL”

301s Versus rel=canonical?

Some people have concerns over how much link juice they will lose if they use a 301 instead of a canonical tag. There is very little difference in the relative amount of PageRank that gets passed between the two methods.

Matt Cutts from Google addressed the problem by stating:

“You do lose some (page rank) but the amount is pretty insignificant. This is used to try and stop people from using 301s exclusively for everything within their own site instead of hyperlinks.”

Watch the full video where Matt discusses the issue:

The canonical tag is most appropriately used when you cannot get at the server configuration to implement the 301 directly, as a web technician is typically required to implement the 301 for you.

The Hack

In the video above Matt addresses the question of relative strength loss between using a 301 redirect and a rel=canonical tag. In a recent blog post (http://searchenginewatch.com/article/2072455/Hacked-Canonical-Tags-Coming-Soon-To-A-Website-Near-You), Beanstalk SEO’s CEO, Dave Davies, discusses a possible exploit of this “relative strength loss.”

Matt Cutts sent out a Tweet on May 13th stating, “A recent spam trend is hacking websites to insert rel=canonical pointing to hacker’s site. If you suspect hacking, check for it.”

The conclusion is that there is a viable exploit of the rel=canonical tag: inserting the tag into a hacked page can be a very effective strategy for the attacker, on par with 301ing the page itself but even “better” in that it likely won’t be detected by the site owner.

Davies continues by posing the following question: “The next question we need to ask ourselves is, ‘Is this an issue now or just a warning?’” implying that Google is certainly aware of the hack and will be analyzing ways to detect and penalize those who are planning to attempt it.

Article Takeaways:

  • The Panda updates have made the issue of duplicate content a priority for site owners to address.
  • Always use 301s whenever possible. They are more widely supported: every search engine can follow a 301 redirect, and any new search engine that comes onto the market will have to support them as well.
  • 301s only work at the domain level (i.e. pointing domainexample.com to www.domainexample.com)
  • 301s also require that you have access to the web server hosting your site in order to implement them
  • The rel=canonical tag is a more user-friendly method to accomplish the same task as a 301.
  • The cross-domain canonical tag works almost identically to a 301 redirect.
  • The canonical tag is a user-friendly version designed to work within the site’s HTML head section.

Resources:

Learn About the Canonical Link Element in 5 Minutes:

http://www.mattcutts.com/blog/canonical-link-tag/

Specify Your Canonical:

http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html

The Canonical Tag Can Save You from the Duplicate Content Monster

http://searchenginewatch.com/article/2066270/The-Canonical-Tag-Can-Save-You-from-the-Duplicate-Content-Monster

 

Canonical hack:

http://searchenginewatch.com/article/2072455/Hacked-Canonical-Tags-Coming-Soon-To-A-Website-Near-You

SEO news blog post by @ 11:19 am on September 4, 2011

Categories: SEO Articles

 

Google+ and the Potential Impact on SEO

Although you can only join by invitation at this point, you’ve no doubt heard of Google+, Google’s latest attempt to join (or, in time perhaps, completely overtake?) Facebook and Twitter as a must-have social networking tool. In the months before Google+ was launched, Google also began implementing the “+1” button as a usable option for users to signify that they enjoy a particular site or page, in an attempt to gather as much raw data as possible about the popularity and social value of sites and content before Google+ was rolled out for the masses. Preceding the Google+ and +1 button was the introduction of real time search, which was able to incorporate search results from Twitter, blogs and Facebook. Google, it would appear, is realizing the immense value of social media and its impact on web search.

Search will continue to have a social element infused into it as the addition of the +1 button will change search results, as will live feeds from Google+ pages, much like Facebook “likes” and Twitter “tweets” are currently affecting search results by influencing user decisions due to their value as endorsements of certain sites and content.

Google definitely wants websites to implement the +1 button in their pages so that they can track and measure changes in click through rates. The +1 button will also be included on all SERPs as well as all Google+ feeds. What this means is business owners and marketers must ensure that a positive customer experience is, perhaps more than ever before, their primary focus in the hope that as many users as possible will +1 their site, and in doing so, endorse their business (and by association, reputation).

While it is plain to see that the introduction of the +1 button was merely a precursor/trial balloon for Google+, the +1 button could end up being the bridge between the social side of the web and its influence on search results.

Recently, Rand Fishkin, head of SEOmoz, decided to test some theories on the subject of social sites influencing search results. He shared a number of un-indexed URLs via Twitter both before and after Google had unceremoniously aborted the real time search results feature. Fishkin then repeated the process, only this time using Google+. He requested that his followers on Twitter and Google+ share the post, with the only caveat being that they were not to share it outside of the originating site.

What this yielded in terms of hard data was that, even though Google has dropped real time search, tweets and re-tweets are still assisting page indexation. As for Google+, Fishkin’s test page ended up ranking #1 on Google within a few hours. This illustrates the fact that Google+ can also help pages get indexed, if not quite as quickly as Twitter.

But perhaps the most interesting concept presented by Google+, and one that could potentially have a significant impact on SEO, is the “Google Circles” feature.

The “Circles” feature is interesting because it grants users the ability to share whatever they choose with specific groups, or Circles, of people. As Google+ users build their Circles, they will subsequently be able to see the sites that users in their Circles have +1’d in Google’s SERPs. This has enormous potential – users will be far more likely to make a choice or purchase based on the recommendation of people they have invited to their Circles – people who they know and whose opinions they trust. Most users are going to be far more likely to trust the recommendation of someone they know rather than the recommendation or review of a stranger. Over time, Circles will become much more defined as more available user data is integrated into them – using that data to market effectively could be a potentially powerful SEO strategy.

Basically, Google has taken the ideas behind some of their social media competitors’ most influential and successful features in an attempt to make search more about real people. Google+ and the +1 button are enabling users to influence online activity, and, as such, they will have an effect on search results. Many experts are already proclaiming Google+ to have no impact on SEO whatsoever, citing Google Wave and past attempts by Google to get in on the social side of the net as indicators that this new attempt will also fail. While it is far too early to make any kind of definitive statement as to the long term usefulness or impact of Google+ and the +1 button on SEO, citing past failures as the basis for an argument as to why Google+ is going to fail as well is short sighted at best. The fact of the matter is, social factors are already intertwined with search, and this is only going to become more prevalent as these sites are expanded and the way we interact on the internet continues to evolve. Whether Google+ ends up revolutionizing or merely co-existing with established SEO methodology remains to be seen, but the enormous potential of these features and their long term impact is fairly clear – site ranking methods are changing thanks to the +1 button, and this will likely end up creating an altogether new method of SEO in the future.

SEO news blog post by @ 5:02 pm on August 31, 2011


 

Google Instant & SEO

From the moment Google Instant was announced back on September 8 there have been forum chats, blog posts, articles and podcasts discussing the ramifications of this new technology. Some have called it the “Death of SEO,” while others (myself included) have proclaimed it a step forward and an opportunity for SEOs, not a threat. And then of course there are those who don’t even know there’s been a change at all; let’s call them “the vast majority.” In this article we’re going to discuss the pros and cons of Google Instant as it pertains to SEOs and to website owners, as well as cover some of the reasons that this new technology may not have as large an impact on search behavior as some may fear/predict.

But first, let’s cover the basic question …

What Is Google Instant?

Google Instant is a technology that allows Google to predict what you are looking for as you type. They are referring to it as “search-before-you-type” technology (catchy). Essentially, as I type a phrase (let’s say “buy shoes online”), as soon as I get to “buy sh” I start seeing results for “buy shoes.” As soon as I’ve entered “buy shoes ” (with a space after shoes indicating I want more than just the 2 word phrase) I start seeing results for “buy shoes online.”

Technologically this is genius. Google is now serving likely billions of additional search results pages per day as each query has multiplied results that apply to it. Well … I suppose we all wondered what the Caffeine infrastructure update was all about didn’t we? But what does this do in the real world?

Why Google Instant Isn’t A Big Deal

Alright, obviously it is a significant technological enhancement in search, but the way some react you’d think the whole universe was about to be turned on its head. There are two reasons why that’s not the case.

    1. I find it unlikely that many will notice right away that the change has occurred and further I find it even less likely that the majority will use the feature. You see – the major hindrance of this enhancement isn’t in the technology – it’s in the users. Only those who touch type and can do so without looking at their keyboard will be affected. If the user looks at their keyboard while typing then they wouldn’t even notice the results coming in ahead of their actual search.

 

  2. This will only affect users who are searching in instances where the shorter or predicted terms match the user’s end goals. For example, if I am searching for “buy shoes online” and get as far as “buy sh,” the top results shown for “buy shoes” may already suit my needs, and thus this may work to the detriment of sites that rank well for “buy shoes online,” as they may well lose traffic. In the case of a site targeting, oh – I don’t know – “seo consulting,” there will likely be little effect if any. The searcher, looking for an SEO consultant, will find once they’ve entered “seo” that they are presented with Wikipedia and Google – sites that, while informative, don’t offer the services (or results) desired – and thus the searcher would be less affected. Once they proceeded on to enter “seo c” the searcher would be presented with the results for “seo company,” but I’m prone to believe that if the searcher wanted those results they would have searched for them. For this phrase I’m confident we’ll see little in the way of negative effect from Google Instant.

So we’ve discussed why Google Instant isn’t a big deal, now let’s discuss …

Why Google Instant Is A Big Deal

On the other side of the coin lie the reasons why Google Instant brings forth a revolution in search technology. Followers of the Beanstalk blog or my radio show on WebmasterRadio.fm (Webcology) will know I’m not one to love everything Google does, but in this case the immediate and long term effects may well be significant, and at the very least one has to appreciate the brilliance behind the effort. In this section of the article we’re going to cover the three important perspectives involved with the launch of this (or any) Google product. They are:

The Searcher – we’ll look at the pros and cons from a searcher perspective. It’s this aspect that will dictate whether the feature will matter at all.

Google – we’ll look at the positive effect on Google. Of course – this aspect is of paramount importance for this feature to be kept.

SEO’s – I’m of course incredibly interested and have spent much of my analysis time determining the pros and cons to SEO’s (admittedly – there’s more than a bit of self interest here).

So let’s begin …

Google Instant And The Searcher

This is sort of a win-win for Google from a searcher perspective. One of two things will happen for the searcher. Either they won’t notice the change or won’t be affected, and thus Google will be exactly where they are now, OR they will notice the change, select results quicker, and find the feature helpful. As I noted – it’s a win-win. There isn’t much of a scenario from a searcher perspective where the searcher will be negatively impacted, and if they are, they’d simply revert back to past searching patterns. From the perspective of impact on the user, Google has it made with this feature. Their worst-case scenario is that they’re exactly where they are now.

Google Instant From Google’s Perspective

Any feature added to any corporate system must serve a single primary function – it must make its developer money. We’ve already seen that the feature itself can’t really negatively impact the searcher, but can it make Google money? There are two ways that this can happen:

    1. Improved loyalty and marketshare, and

 

  2. Increased revenue directly from the initiative

Fortunately for Google, they’re going to win on both fronts here, and when we see the Q3 earnings, and more so the Q4 earnings, in Google’s reports, we’ll begin to see how significant an impact this change will have for them – mainly via the second of the two monetary reward methods noted above. And here’s why …

We’ve already covered the improved loyalty this can have on the searchers. Anything that makes my life easier and makes my quest for information faster will make me more loyal. At worst – Google will see my behavior stay the same but for many, the search experience will become faster and more effective – especially once the technology is improved by user behavior to a degree that people trust it more. Overall there will be a net gain in the experience – we’ve only to wait to see how large that net gain is and how it translates into marketshare. The big win is in the second point.

For anyone who’s ever bid with AdWords, you’ll know that for the most part bids for generic terms are more expensive than bids for very specific terms. If I’m bidding on “shoes” I’m going to pay more than I would for “shoes online.” So let’s view the world where I start showing the results (and paid ads) for “shoes” while someone is searching for “shoes online.” And what if that person sees the ad that was written and bid on for “shoes,” finds it relates to their query, and clicks on it? Google just made more from the paid ad click. Maybe only pennies, but multiply that by billions of searches per day and you’ve got a significant increase in annual revenue.

The move is a huge win for Google but it does come with a theoretical downside, and that is annoying the businesses that are paying for the ads. The argument I’ve heard is that if businesses find that the cost of their campaigns is increasing faster than the ROI, they might get annoyed. Fair enough, BUT I would argue – what are they going to do about it? As long as Google maintains the first consideration (the searcher) then the advertisers have no choice. They can drop their bids, but at worst they’ll level off to what they were paying for the longtail phrases. Again – worst case scenario, Google will find themselves where they are today.

Google Instant From The SEO’s Perspective

So let’s assume for a moment that Google Instant is here to stay. Based on all the ways Google and the searchers can win, and the limited situational permutations by which they could only come out even, I’d say that’s a safe assumption. Given this, what happens to SEOs and those optimizing their own websites?

For one thing – we can’t assume that research we did up to and before the 8th will be relevant down the road. I have already scheduled a redo of keyword research in a couple months to see which industries and search types have been most (and least) affected by this change. The main reason for this is that I have a strong suspicion that specific industries will be more prone to being affected by the change, based mainly on search types (such as the “buy shoes” vs “seo consulting” example above) and demographics. A Linux developer site is more likely to have a demographic of touch typists who can type without looking at the keyboard than, say, a life insurance site with a more scattered and thus less technically proficient overall demographic.

So in the short term – life is going to be very interesting for the SEO and website owner while we figure out which industries and phrase types are most affected. In a few months, when we see the trends and which phrases are being affected and how, we’ll likely have to make adjustments to many campaigns. The downside for many business owners will be that, for those whose campaigns focus on searches for longtail phrases, they may find the search volumes for their phrases decrease and a shift to more generic (and generally more expensive to attain) phrases is necessary. Only time will tell what the best moves are, and we may not know exactly what will shift and how for a few months yet – and even then, we’ll know the trends, not where things will settle (if anything in online marketing can be referred to as “settling” anymore).

If there is a segment that should be concerned about the situation it is small business owners with limited organic or PPC budgets. Google Instant – because it gives preference to more generic phrases – clearly favors businesses with larger budgets. How much so, we’ll know after we’ve had a chance to see how the search volumes shift. For SEOs this presents two opportunities, and for business owners who do their own SEO it offers one. And here’s the good news for those.

For SEOs you’ll find two new opportunities. The first is that there will be a shift to more generic terms in search volumes. This means that there will be stiffer competition for more competitive phrases. If this sounds like a bad thing, it’s not. If you’re a skilled SEO who knows how to get the job done it means you’ll have access to larger volumes of traffic without the added effort required to rank for a wide array of phrases. Rather than needing to rank for 10 or 20 phrases to get traffic you’ll be able to focus in more and reap the same rewards in the way of traffic. On top of that – SEOs will be able to charge more for the rankings as fewer phrases have a higher value. A win-win for SEOs and a win for business owners who either do their own SEO or have talented SEOs on staff.

The second opportunity will come in the form of improved clickthrough rates, though I’ll admit at this point that’s just a theory (noted with a hint sent to Gord Hotchkiss to run eyetracking tests on it). If I type while looking at my screen and I’m entering “buy shoes online,” and I rank organically or via PPC for both “buy shoes” and “buy shoes online,” I would hypothesize that searchers who complete the phrase “buy shoes online,” having seen the site (or ad) for “buy shoes” appear and then the same site appear for the full query, will have a tendency to click on the familiar. This same principle has been witnessed with sites appearing in both paid and organic results, which see an increase in their organic clickthrough rates. This will present opportunities for both PPC and organic marketers to improve traffic by ranking for specific phrases meant both to attain traffic on their own and to improve traffic for the other. I would suggest that down the road we’ll be hearing of this phenomenon when conducting and discussing keyword research.

Conclusion

There isn’t much to conclude that hasn’t been discussed above. Virtually every party wins or, at worst, breaks even with the introduction of this technology. The only victim appears to be small businesses without the budgets to compete for the more generic phrases, but even they may win with a shift away from these phrases by the larger companies. It may well occur that while the search volume shift heads in favor of large companies with larger budgets, the lower hanging fruit, while reduced in its search volume, may also fall in competition level, making it more affordable. Larger businesses may focus like snipers on the bigger phrases, and smaller businesses may well be presented with the opportunity to go after more of the lesser-searched phrases that aren’t worth targeting for larger companies – at least organically.

But only time will tell and of course – we have much data to collect and many algorithmic updates to come between here and there.

SEO news blog post by @ 4:32 pm on September 21, 2010

Categories: SEO Articles

 
