SEO Articles

Today was a pretty big day for Beanstalk in the category of putting out solid content for our valued readers. Two new articles have been published, one written by the reputable Kyle Krenbrink titled “Updating Your Website’s Content“. The title is just a titch misleading. When he first handed it to me for reading I thought I was about to read a technical piece on how to get new content onto your website. Instead it’s a well-written piece discussing what content you should be looking at adding and how often. I may be biased, but to me it’s a great piece for anyone interested in ranking in a post-Panda world.

The second article is written by yours truly and appears over on the Search Engine Watch website. The article covers ranking your website for image search and discusses everything from the tactics to do so to cautions on when it might not actually be beneficial (and yes, there are times this is the case). With an author’s bias I have to say it’s a solid piece and definitely worth the read. The article is appropriately titled, “Ranking on Image Search“. Enjoy. :)

SEO news blog post by @ 1:31 pm on October 27, 2011

Categories:Articles,Google

 

The Google, The FTC, and The News headlines

If you’ve been looking at the tech headlines today, particularly on the really big sites with very political writers, you may have read something about the FTC having Google in chains over outrageous privacy violations.

Some of that info is based on fact but most of what I’ve read is personal takes on the news with a heavy spin to sidetrack the facts and make a story.

Google behind bars

First, let’s just get the elephant in the room to step into the light so we’re all looking at it:

Google’s bread and butter is handling trust and privacy properly.
If users can’t trust Google, we can’t use them.

This is why Google has repeatedly been its own whistle-blower.

  • The web was programmed by humans.
  • Humans make mistakes.
  • The real measure of things is dealing with the mistakes!

When Google’s engineers came up with a shockingly brilliant method of ‘fingerprinting’ WiFi access points by sampling the data coming to/from the devices it wasn’t anyone outside the company that complained.

The fact is that many homes (and some businesses) have zero wireless security, so what was a brilliant plan to get a ‘fingerprint’ ended up becoming a nightmare of unencrypted data that had to be destroyed properly.

Plus Google had to figure out what it could do to prevent this from happening again, so as part of the punishment Google helped devise for themselves, they set up a fund to create a privacy resource/knowledge base.

At the time many sites tried to make news from the issue and imply that Google was a privacy nightmare, stealing data from unsuspecting users, etc., etc., totally overlooking the fact that anyone could (and probably does) roam around in a vehicle and collect the exact same data Google collected.

The majority of the media coverage was almost insulting to the intellect of the readers, but I saw smart people drinking the Kool-Aid, so don’t feel bad if you saw the headlines and got the wrong idea too.

This latest issue is no different at all in terms of Google acting responsibly and the news makers trying to generate headlines.

So here’s a factual take on the actual settlement, not some poorly considered opinion that I’m hoping will make this a headline:

“Google Inc. has agreed to settle an FTC complaint that it used deceptive tactics and violated its own privacy policy when it launched the Google Buzz social network last year. In addition to alleged FTC privacy violations, this is the first time the FTC has alleged violations of the substantive privacy requirements of the U.S.-EU Safe Harbor Framework, a method for U.S. companies to transfer personal data lawfully from the European Union to the United States.

The settlement agreement bars Google from future privacy misrepresentations, requires it to implement a comprehensive privacy program and includes regular, independent privacy audits for the next 20 years. This is the first time an FTC settlement order has required a company to implement a comprehensive privacy program to protect the privacy of consumers’ information.

According to the FTC complaint, on the day Buzz was launched through the Gmail service, users got a message announcing the new service and were given two options: “Sweet! Check out Buzz,” and “Nah, go to my inbox.” However, some Gmail users who clicked on “Nah…” were enrolled in certain features of the Google Buzz social network anyway. For those Gmail users who clicked on “Sweet!,” the FTC alleges that they were not adequately informed that the identity of individuals they emailed most frequently would be made public by default. Google also offered a “Turn Off Buzz” option that did not fully remove the user from the social network.

When Google launched Buzz, its privacy policy stated that “When you sign up for a particular service that requires registration, we ask you to provide personal information. If we use this information in a manner different than the purpose for which it was collected, then we will ask for your consent prior to such use.” The FTC complaint charges that Google violated its privacy policies by using information provided for Gmail for another purpose – social networking – without obtaining consumers’ permission in advance.

The agency also alleges that by offering options like “Nah, go to my inbox,” and “Turn Off Buzz,” Google misrepresented that consumers who clicked on these options would not be enrolled in Buzz. In fact, they were enrolled in certain features of Buzz.

The complaint further alleges that a screen that asked consumers enrolling in Buzz, “How do you want to appear to others?” indicated that consumers could exercise control over what personal information would be made public. The FTC charged that Google failed to disclose adequately that consumers’ frequent email contacts would become public by default.

Finally, the agency alleges that Google misrepresented that it was treating personal information from the European Union in accordance with the U.S.-EU Safe Harbor privacy framework. The framework is a voluntary program administered by the U.S. Department of Commerce in consultation with the European Commission. To participate, a company must self-certify annually to the Department of Commerce that it complies with a defined set of privacy principles. The complaint alleges that Google’s assertion that it adhered to the Safe Harbor principles was false because the company failed to give consumers notice and choice before using their information for a purpose different from that for which it was collected.”

SEO news blog post by @ 1:46 pm on October 26, 2011


 

Google Searches Minus the Plus Operator

Anyone who has used Google for any length of time is probably familiar with using the "+" operator in search queries in order to refine their results. This older "+" operator has been around for many years and is widely used by searchers. It seems that overnight, Google has decided to remove this functionality from search queries.

Google Plus Operator

In a recent response to a post in the Google Webmasters Forum (Link removed – no longer available), Google employee Kelly F. stated the following regarding the removal of the "+" operator:

Hi everyone,
We’ve made the ways you can tell Google exactly what you want more consistent by expanding the functionality of the quotation marks operator. In addition to using this operator to search for an exact phrase, you can now add quotation marks around a single word to tell Google to match that word precisely. So, if in the past you would have searched for [magazine +latina], you should now search for [magazine "latina"].

We’re constantly making changes to Google Search – adding new features, tweaking the look and feel, running experiments – all to get you the information you need as quickly and as easily as possible. This recent change is another step toward simplifying the search experience to get you to the info you want. Cheers, Kelly.

The new process she outlined will work in most cases, but it does seem to make for more cumbersome searches. I can personally understand that Google needs to remove this in the wake of their Google+ social media platform for obvious reasons, but as a frequent user of an operator that has been in place for some 15 years, I will find it difficult to get used to a less intuitive process, regardless of whether it has the same functionality as the old way of performing the search.
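The substitution Kelly describes is mechanical enough to script. Here's a minimal sketch (my own illustration, not any Google tooling) that rewrites old-style `+term` operators into the new quoted form:

```python
import re

def modernize_query(query):
    """Rewrite legacy +term operators as quoted terms, e.g. +latina -> "latina"."""
    return re.sub(r'\+(\S+)', r'"\1"', query)
```

So `modernize_query('magazine +latina')` yields `magazine "latina"`, matching the example in the forum post.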

There was an interesting postscript from Danny Sullivan:

I can’t believe Google has done this. I use the + command all the time, especially in an age when more and more, Google constantly reshapes a search based on what it guesses a searcher wants, rather than what they entered.

The functionality is still there, which is a relief. But having to do a search like this:

mars +landings +failures

now like this:

mars "landings" "failures"

is more complicated. It also goes against 15 years of how search engines have operated, where quotes are used to find exact phrases. Now all those references across the web have become outdated, for no apparent reason other than maybe Google picked a name for its social network that wasn’t searchable.

I think Danny Sullivan "sums" up the change very well by saying:

Imagine people learned how to symbolize addition by using the + symbol, then 150 years later, one of the big calculator makers declared that the + symbol would now be replaced by using the " symbol. That’s what Google has effectively done, no big blog post, no notice, just yanked the command search engines have used for over a decade. And probably because it named its social network Google+ — making it hard to find.

I think this is an instance where the Google marketers and staff should have realized how the implementation of Google+ was going to affect search results. It also shows a lack of far-sightedness on their part not to anticipate how the coining of the Google+ brand name was going to cause problems for searchers. Removing an operator that has been around much longer than Google, with no press release, shows a profound lack of respect for the subscribers of the Google service.

SEO news blog post by @ 11:59 am on October 24, 2011

Categories:Google

 

Secure search service stirs SEOs slightly

Every once in a while there’s an announcement that makes a huge kerfuffle online only to be yesterday’s news the next week. Yesterday’s news is that Google made the move towards secure searches for Google account holders who are logged in while searching. It was actually announced on the 18th, and I didn’t see anything until Kyle mentioned it on the afternoon of the 19th, so it’s actually worse than yesterday’s news!

Google secure search

Anyone following search engine news would be perfectly normal to feel a bit of déjà vu, since Google has had secure search options going back to early 2010. The latest announcement that is stirring up responses is the fact that they are now dropping the referrer info that would normally be passed along to the destination site, which could then be tracked and analyzed for SEO purposes.

Google has plenty of good reasons to make this move and only a few reasons against it. Here’s a quick breakdown of the pros/cons:

  • Most searchers are not logged in and won’t be affected
  • Estimates are that between 3% and 7% of current search traffic is logged in
  • Tracking the “not provided” searches in Google Analytics will show the missing traffic
  • Mobile users connecting from public WiFi networks can search securely
  • Users of free internet services will have additional privacy
  • HTTPS Everywhere is crucial and backed by Google
  • Webmaster Central still provides search terms to registered owners

Cons:

  • Mobile searchers tend to be logged in
  • Traffic projections for mobile search are growing
  • Google has to make the data accessible to its paid users
  • SSL is now becoming a much larger ranking factor

Amy Chang over on the Google Analytics blog had the following point to make:

“When a signed in user visits your site from an organic Google search, all web analytics services, including Google Analytics, will continue to recognize the visit as Google ‘organic’ search, but will no longer report the query terms that the user searched on to reach your site.”
“Keep in mind that the change will affect only a minority of your traffic. You will continue to see aggregate query data with no change, including visits from users who aren’t signed in and visits from Google ‘cpc’.”

Thom Craver, Web and Database specialist for the Saunders College at Rochester Institute of Technology (RIT) was quoted on Search Engine Watch as noting:

“Analytics can already run over https if you tell it to in the JavaScript Code … There’s no reason why Google couldn’t make this work, if the site owners cooperated by offering their entire site via HTTPS.”

Personally, as you can tell from my lead-in, I feel like this is much ado about nothing. Unless competing search engines are willing to risk user privacy/safety to cater to SEOs in a short term bid for popularity, this isn’t going to be repealed. I don’t like to see the trend of money = access, but in this case I don’t see much choice and I’ll stand behind Google’s move for now.

SEO news blog post by @ 12:12 pm on October 20, 2011


 

Resurrecting Dead Backlinks

I came across a great post today from JR Cooper on the SEOmoz site in which he discusses how to use backlink checkers to find broken links and how to use those broken links to obtain new ones. First off, he recommends a great new Chrome extension called "Check My Links."

dead link grave

I have just installed the extension myself so I cannot comment directly on it. But the great things JR Cooper reports about it sound very compelling.

"Pretty much, it’s the greatest link building browser extension I’ve ever used. First of all, it’s extremely fast. Like almost too fast. It usually checks half the page in under 10 seconds. It also finds the links that are quickest to check, saving the links with long load times for last (I still don’t know how they do this). Best of all, I can check multiple pages at once, which saves some serious time because I usually find 50 pages at a time to check. As a bonus, it even tells you what kind of page error the broken link got (i.e. 404, 500, etc.)."

The description from the Chrome Web Store:

"Check My Links" is an extension developed primarily for web designers, developers and content editors (and SEOs). When you’re editing a web page that has lots of links, wouldn’t it be handy to be able to quickly check that all the links on the page are working ok? That’s where "Check My Links" comes in. "Check My Links" quickly finds all the links on a web page, and checks each one for you. It highlights which ones are valid and which ones are broken, simple as that. HTTP response codes and full URLs of broken links are published in the Console log.
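The core of what such an extension does is straightforward to sketch. The following is my own illustration of the general idea (not the extension's actual code): collect the hrefs on a page, then flag any that come back with a 4xx/5xx status code:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def is_broken(status_code):
    # 4xx and 5xx responses (404, 500, etc.) count as broken
    return status_code >= 400
```

Fetching each extracted URL (with urllib, say) and passing the response code to `is_broken()` gives you the same 404/500 report the extension publishes in the console.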

As most of us in the SEO industry are finding, it is becoming increasingly difficult to build links to your clients’ websites. Tactics that were once widely utilized are now completely ineffective. At the risk of repeating myself again and again: the Panda algorithm has effectively changed everything about how links are obtained. For instance, subsequent updates have rendered posting to forums virtually ineffective for these purposes.

Cooper goes on to detail how this extension can be used for dead link building. The first tactic he describes is Direct Find and Replace. This is where you generate a list of broken links from blogrolls and link pages. You then contact the webmasters of the sites and ask to replace one of the dead links with a link back to your site.

The next method he describes is Content Replacement. He suggests looking at the actual pages that are broken and using the Internet Archive’s "Way Back Machine" to find the original content that was being linked to, then recreating that content on your own site. You can then contact the webmaster to update their links to the new (and improved) content. Subsequently, you can use free tools such as Open Site Explorer or Yahoo Site Explorer to discover other sites that were linking to the original content and ask if they would like to link to the new and improved version as well.

The last technique he describes is Broken Blogger Blogs, where you use the tools to find broken links on blogrolls that point to subdomains on blogspot.com and then check whether you can register the blog yourself. If so, you put up a static page with a desired keyword linking back to the new blog location. Not only does this give you the anchor text of your choice, it gives a link with a higher amount of link juice (depending on how many outbound links are pointing to that page). He does state that this is a fairly "greyhat" tactic and has requested reader feedback on the ethics of such a tactic.

To recap: the Panda updates are forcing all users to generate better content. It is a bold effort by Google to reduce the amount of web-spam that has inundated the SERPs for far too long. As an end-user you should love Google for its efforts; as an SEO it means that the whole game has changed and that we have to continue to evolve with the changes to remain effective in our industry.

SEO news blog post by @ 11:53 am on October 19, 2011


 

Back to the Future – Mid October Tech Announcements

SEO news is often dry, and since the search engines drive the news, there can be some slow days for on-topic discussion. Today was a perfect example of headlines that just don’t make the grade, but at least they had a common theme: Back to the Future.

Quantum Levitation brings the hover board closer to reality:

In this video we see how ‘quantum locking’ (or flux pinning) can work with a superconductor to ‘levitate’. That’s the gist I got from it, yet the whole time I’m thinking about how I’d love one of those hover boards from Back to the Future:

Marty McFly's hoverboard.
(Yeah those are tin pie pans..)

Coming Soon: Electric DeLoreans!
The DeLorean Motor Company “DMC” announced a DMC-12 Electric DeLorean:

The Delorean DMC-12 Electric Sports Car

..okay so that’s where the McFly references ended.

While you could do a web search from the dash of an electric car, it’s not very web related. To tug us a bit closer back on topic, yet not entirely, I’ll close with a reminder about Google’s remote desktop beta extension for Chrome.

Chrome remote desktop beta getting positive feedback:

https://chrome.google.com/webstore/detail/gbchcmhmhahfdphkhkmpfmihenigjmpp?hl=en-GB&hc=search&hcp=main

All you need is a Chromebook or the Chrome browser on both ends: start a sharing session, send the code to the other end, and voilà, you’re connected.

This does away with IP addresses, running services, or trusting one of the 3rd party commercial vendors with secure access to your machines.

Since this is a challenge/response setup in the beta phase, Google’s solution won’t be replacing large IT support offerings, but for 1-on-1 support it’s very handy.

Next time a client asks me about some SEO statistics on their PC, instead of describing what I think they can see over the phone, I’ll give this a whirl so we’re both on the same page. ;)

SEO news blog post by @ 12:17 pm on October 18, 2011


 

A Google Engineer who sees the outsider perspective?

I know that as a stubborn old nerd I can be pretty hard to win over, and as much as this Google engineer claims to have accidentally leaked his rant, I read it as intentionally made public from the get-go, just by the way it was written to ‘everyone’ in a few spots. I could be wrong, but I’m not reading this as a leak, just as a rant.

Ranting google employee

The full post is, amazingly enough, still over on Google+ as a public post (although the original author has pointlessly deleted it). I shouldn’t say it’s really amazing that the post is still public; people duplicated it instantly, so there’s no point in trying to remove it now.

Make no mistake, there are a few good points from Steve Yegge; I find some of the observations to be true, but mostly from an outsider standpoint, which is shocking because it was written by a fellow with almost six years of experience in the company. Google does have platforms, they do use them, and they do share them. True, there’s always been an obvious panic towards security that has affected accessibility, but then Google’s track record probably wouldn’t be as amazing with a more casual approach to giving outsiders access to core tech.

Amazingly, of all the points made, the one that echoes my opinion most is that Google is becoming arrogant and almost needs two versions of projects like Google’s Chrome browser. One version that runs super secure, fast, compatible, and sleek, with no frills or compromises. The other would be as bloated as Firefox/Opera, and it’d run like a buggy mess of poorly considered features that are starkly incompatible with themselves. To quote Steve on arrogance and Chrome development:

“You know how people are always saying Google is arrogant? I’m a Googler, so I get as irritated as you do when people say that. We’re not arrogant, by and large. We’re, like, 99% Arrogance-Free. I did start this post — if you’ll reach back into distant memory — by describing Google as “doing everything right”. We do mean well, and for the most part when people say we’re arrogant it’s because we didn’t hire them, or they’re unhappy with our policies, or something along those lines. They’re inferring arrogance because it makes them feel better.

But when we take the stance that we know how to design the perfect product for everyone, and believe you me, I hear that a lot, then we’re being fools. You can attribute it to arrogance, or naivete, or whatever — it doesn’t matter in the end, because it’s foolishness. There IS no perfect product for everyone.

And so we wind up with a browser that doesn’t let you set the default font size. Talk about an affront to Accessibility. I mean, as I get older I’m actually going blind. For real. I’ve been nearsighted all my life, and once you hit 40 years old you stop being able to see things up close. So font selection becomes this life-or-death thing: it can lock you out of the product completely. But the Chrome team is flat-out arrogant here: they want to build a zero-configuration product, and they’re quite brazen about it, and F*** You if you’re blind or deaf or whatever. Hit Ctrl-+ on every single page visit for the rest of your life.”

When Steve deleted the original post, he put up a good bit on why not to read too much into such things:

“Please realize, though, that even now, after six years, I know astoundingly little about Google. It’s a huge company and they do tons of stuff, and I work off in a little corner of the company (both technically and geographically) that gives me very little insight into anything else going on there. So my opinions, even though they may seem well-formed and accurate, really are just a bunch of opinions from someone who’s nowhere near the center of the action — so I wouldn’t read too much into anything I said.”

I really couldn’t agree more. If this had come from someone working with Google’s engineers on something such as the Go language it would have been a different story, but Steve’s admission of the scope of his role is very honest and worth considering as you read his rant.

TL;DR – Google guy rants about Google’s strategies from an outsider’s perspective and calls out some of the lingering issues with Google’s dev teams/arrogance. Everyone would like to see Google bend more and give more, though nobody can seem to qualify themselves to say if it’s really the wisest strategy.

SEO news blog post by @ 11:07 am on October 13, 2011


 

Panda 2.5 Weather Report: To Panic or Not to Panic?

As most who are actively involved with SEO are aware, an update to the Google Panda algorithm was implemented on September 28th and again on October 5th. This appears to be part of ongoing revisions to the Panda algorithm that continue to cause wild fluctuations in many websites’ rankings. Confirmed on September 30th, Google’s new Panda 2.5 arrived. It is still unclear whether Panda 2.5 has since been reversed or updated.

DaniWeb, which has taken extreme measures to recover from the previous Panda updates, states that the site was hit hard again by this latest iteration of Panda. DaniWeb stated that traffic to the site dropped by as much as 50% on October 5th, the date of the most recent update to the algorithm.

Search Metrics has stated that 10 of the 30 sites hit saw an 80-90% recovery in visibility, but also that many others saw little to no improvement at all.

In a post on Search Engine Watch, Simon Heseltine asked "Was the Google Panda 2.5 Panic Warranted?" I have to respond with an emphatic “yes.” Google continues to erode confidence by continually pulling the rug out from under its multitude of users. Many sites have still not recovered from the original Panda update at the beginning of the year, despite following all the best SEO and content practices and completing site overhauls.

As is usual with major updates to the Google Algorithm, there is much speculation over the full scope or impact of the update. This time is no different. With conflicting reports from Search Metrics and sites like DaniWeb it is difficult to know who is correct. The more likely reality is that they are both right. Even though there appears to be an abundance of information discussing tactics for recovering from Panda and despite the valiant efforts of site owners to recover, many continue to be hit hard, while others seem to weather the updates quite well.

More transparency from Google could help to quell the debates and to restore a measure of confidence in the search-engine giant. Releasing timely information regarding algorithm updates would save an enormous amount of frustration for their users. It is exceedingly difficult to apply a bandage if you cannot see where you are hemorrhaging from. Google is even getting pressure from Danny Sullivan to be more transparent with the Panda updates. This may or may not have prompted Matt Cutts to release a "weather report" regarding Panda:

SEO news blog post by @ 11:56 am on October 12, 2011

Categories:Articles,Google,Rankings

 

What word to use for anchor text?

As a well-connected SEO I digest a lot of publications from the web, and I try to limit my opinion to factual results, either from real-world feedback or from controlled tests. Google is constantly evolving and improving itself to render the best search results possible, or at least better search results than the competition.

Considering where Google was with regards to just hardware in 1999, things certainly keep changing:

Evolution of Google - First server

On Monday SEO Moz published a small test they did to gauge the importance of keywords in the anchor text of links. The test is discussed in detail over on SEO Moz but the result was rather straight forward.

In a nutshell, they took three new, roughly equivalent sites and tried to build controlled links to them using three different approaches:

  1. Build links with just ‘click here’ text
  2. Build links with the same main keyword phrase
  3. Build links with random components of the main keyword phrase

Obviously the test is a bit broken, because if you don’t have existing keyword relevance for a phrase, you should build relevance with keywords in the anchors. When Google is sorting out who will be ranked #1 for a site dealing with candies, the site linked to with relevant keywords should always rank higher than a site with links like “click here” or “this site” which aren’t relevant. The only exception would be in a situation where the links seem excessive or ‘spammy’ and may result in Google not considering any of the similar links for relevance.

Outside of a clean test environment we know the best results would be a blend of all three types, with a bit of brand linking mixed in to avoid losing focus on brand keywords. A well established site with a healthy user base will constantly be establishing brand due to all the time on site and click-through traffic for that brand.

For example, if I search for “Sears” and click on the first link only to find it’s a competitor, I’d hit back and find the right link to click. In most cases Google’s watching/learning from the process, so brand links aren’t going to be a necessity after a site is quite popular, and the percentage of brand links wouldn’t need to be much at all.
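To make the "blend" concrete, here's a small sketch of drawing anchor text from a weighted mix of the tested types plus brand links. This is my own illustration; the weights are assumptions for demonstration, not tested ratios:

```python
import random

# Illustrative weights only; the right mix depends on the site and niche
ANCHOR_MIX = [
    ("exact keyword phrase", 0.35),
    ("partial keyword phrase", 0.30),
    ("generic (e.g. 'click here')", 0.15),
    ("brand name", 0.20),
]

def pick_anchor_type(rng=None):
    """Pick an anchor-text type according to the weighted blend above."""
    rng = rng or random.Random()
    r = rng.random()
    cumulative = 0.0
    for anchor_type, weight in ANCHOR_MIX:
        cumulative += weight
        if r < cumulative:
            return anchor_type
    return ANCHOR_MIX[-1][0]  # guard against floating-point drift
```

Running this over a link-building campaign keeps any one anchor pattern from dominating, which is the 'spammy' signal the test warns about.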

Kudos to SEOMoz for publishing some of their SEO test info regardless of how experimental it was. We’re constantly putting Google’s updates to the test and it’s often very hard to publish the results in such a clinical fashion for all to see. We will always make an attempt to blog on the topics we’re testing but it’s still on the to-do list to publish more of the data.

SEO news blog post by @ 11:56 am on October 11, 2011


 

Beanstalk Offices Closed Today

sesame street thanksgiving
The Beanstalk offices will be closed today in celebration of Thanksgiving in Canada.  We apologize for any inconvenience this may cause our clients and our blog readers.  Our daily SEO news posts and services will be back to normal on Tuesday at 9am PST.

Thanks for your understanding and to our Canadian friends … Happy Thanksgiving. :)

SEO news blog post by @ 12:01 am on October 10, 2011

Categories:beanstalk

 

Copyright© 2004-2014
Beanstalk Search Engine Optimization, Inc.
All rights reserved.