One Year After the Panda Attack

One Year Panda Attack
It has been just over one year since the Panda algorithm produced a prolific amount of pandemonium across the World Wide Web. I came across a great infographic posted on Search Engine Land, created in conjunction with BlueGlass SEO, detailing how Panda works, what it impacted, and the stages of the updates implemented from Panda 1.0 through Panda 3.2.

The Google Panda Update, One Year Later

Here are some of our past blog posts detailing the Panda Updates as they came out, along with strategies and tactics to counteract the effects of the Panda Algorithm updates.


SEO news blog post by @ 11:49 am on February 27, 2012


 

Are multiple rel=authors worth it?

Recently Google+ made it a lot easier for content authors to indicate the sites they publish to. Here’s a short clip showing that new easier process:
[jwplayer mediaid="3346"]

That tells Google+ you are an author for the sites you've listed. It also adds a backlink from your Google+ profile page back to your site.

At this point, once Google has parsed the changes and updated its caches, you'll start to see author credits on articles with the same name and email address. While the Google help docs 'suggest' you have a matching email address published with each post, it's clearly not a requirement.

So after this update you could start to see 'published by you' when doing searches on Google for posts you've made, but what's to stop anyone from claiming they wrote something on-line?

The other half of this process is creating a ‘rel=”author”‘ or ‘rel=”publisher”‘ link on the content pages on your blog/web site.

In the case of Beanstalk's blog, all posts get the same rel="publisher" link; it looks like this (you can see it in 'view-source'):

<link href="https://plus.google.com/115481870286209043075" rel="publisher" />

That makes Google see our blog posts as ‘published’ by our Google+ Profile, which is a bit ‘lazy’, but the process to add that code was very easy (and we blogged about it here) compared to the task of tagging each post with a custom link.
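For per-post author credit the idea is the same, with a rel="author" link pointing at the individual author's Google+ profile instead of the company page. A minimal sketch of what that tag could look like (the profile ID below is a placeholder, not a real profile):

<link href="https://plus.google.com/000000000000000000000" rel="author" />

Tagging each post this way is the extra per-post effort we'll be weighing against the 'lazy' site-wide publisher link.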

The truth is that there has to be some ‘ranking signal’ for multiple authors, and there should be a quality/trust grade based on the profiles of the authors. So what is that ‘factor’ that ‘has’ to be hiding in the ranking code? Great question!

Since we've already spent some time with Google+ and a single author source, we intend to run some tests and prove out the value, or lack of it. Our plan is to report on both the difficulty of applying the right tags to the proper posts and the value of that effort. If anyone reading along has some good suggestions for the test process, please drop us a comment via the main contact form.

Where’s Bart?

Chia Bart is retired for now. I need to find a decent webcam and then I’ll re-do him with some time-lapse for added thrills and joy. In the meantime we’re looking at offering the readers a really unique chance at interacting with the blog:

Each month we will be posting some macro images. Each one will be a challenge to figure out, and we'll take guesses on things like material, location, object, etc., and then we will rate the guesses based on how close they are. Technically, even if we had one guess like "The picture for week 2 looks like glass", that could win!

The best guess will get recognition on our blog and we’ll announce the best guess each month on Twitter as well.

February Macro 2
This is the macro image for week two of February. If you think you know what this is, or where it is, send us your best guess via Twitter or G+.

SEO news blog post by @ 12:14 pm on February 9, 2012


 

Stacking up Google optimization efforts

We keep optimizing our meta tags, keywords, link structure, content densities, markup, etc.. etc.. But how does Google optimize itself for us? If this is any sort of 'relationship', what's Google been doing for us lately?

Comparing work done

Anti-Spam DMARC Efforts

One of the big problems with promoting on-line is the folks who don't care about courtesy or the rules and just spam everyone and anyone. The best way to cope with this is to never buy products we have seen spammed; yet this has been a nerd mantra for so long, and clearly consumers never got the message, because spammers still get paid.

Because of all the abuse, legit advertisers have a bad reputation even before they get started. This is why we have captchas, whitelists, RBLs, and many many other annoying services that some people actually pay to use.

This is why we can't have nice things

Major email providers like Google, Microsoft (Hotmail), and Yahoo! are allying with major online sites like Facebook, LinkedIn, PayPal, and more to work on the DMARC system to cope with not only spam, but phishing, fraud, password scams, ID theft, etc.

In a nutshell DMARC is:

..a technical specification created by a group of organizations that want to help reduce the potential for email-based abuse by solving a couple of long-standing operational, deployment, and reporting issues related to email authentication protocols.

Essentially it's going to make 'authenticated' mail much more commonplace, in hopes of raising the global bar on email authentication and helping eliminate the spam problem. Still too long-winded an explanation?
Here’s an illustration of DMARC:

How DMARC works
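For a concrete sense of what adopting DMARC means on the domain-owner side, here's a minimal sketch of a DMARC policy record published in DNS. The domain, report address, and the quarantine policy are all placeholders/assumptions, not a recommendation:

_dmarc.example.com.   IN   TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

The p= tag tells receiving mail servers what to do with messages that fail SPF/DKIM alignment, and rua= tells them where to send aggregate reports.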

New Privacy Policy

I've witnessed a lot of complaining about this move, and yet I haven't seen one logical complaint I could ally myself with. Personally, I'm a GMail user who has already entrusted Google with about as much private information as I can just by using GMail. Each time Google releases a new product, if I use the same Google account as I do with other Google services, I 'expect' it to be smart and use what Google knows about me to the fullest.

If I wanted a privacy division between Google Maps and GMail, I’d make a separate account and use multiple logins so that if I am hunting for the closest guitar shop I won’t have to deal with Guitar adverts getting special preference when I am logged into GMail. In fact, if I was looking for a gift for someone and I really loved the focus Google has on ‘me’, I might just use a fresh browser instance to keep Google from getting confused.

Fresh browser instance?! I know, that's jargon and we promised to explain ourselves, so a quick demo of this is to load Chrome (sorry Moz lovers) and then right-click on a normal link. In the right-click menu you should see this:
Chrome Incognito Option

This will open a Chrome Incognito window:
Sites in this tab will not see browser history!
Try visiting your popular sites to test!

If all goes well, as long as you use the incognito window, you will be able to use Google services, and others, without them easily tying the info to a particular account.

Keep in mind that the alternative to a unified privacy policy is a system where users have to read the privacy policy for every Google service to make sure they understand each one. Then, if you wanted your data to be shared between services, you'd have to not only go and manually 'share' the information, but you'd also better be praying to find a way to motivate Google to spend the time enabling the link between services, because as we already know, Google doesn't waste many resources on things that aren't going to be popular. When you make something like this automatic it changes the entire functionality of that idea, and what would otherwise be a 'wasted effort' suddenly becomes a 'big win'.

Kicking Keister in Kenya


If you haven't read about the Mocality debacle (link removed), you really aren't missing that much; it's more of a 'How the heck?' than anything.

In a nutshell:

A Google contractor in Kenya, using Google IPs and identifying themselves as a Google entity, had been 'scraping' the sign-ups from Mocality and stealing them away with lies.

When Google first heard of the situation, the response from the powers within Google looking into the issue was "No freaking way, let us investigate and get back to you." As things unfolded it became clear that Mocality was indeed providing honest information and that something very bad was happening over in Kenya under Google's name. Google's own team leads were 'mortified' over the details of how the situation unfolded.

At this point the head of Google's Kenyan offices, Ms. Olga Arara-Kimani, has resigned, stating she personally felt that 'the buck' stopped with her and she wanted to take full responsibility.

While no official statement has come from Google there are signs that the investigation is over and that Google is already implementing measures to prevent something like this from happening again. I expect we’ll hear a few more details as things unfold.

How’s Chia Bart? Well he’s in limbo, and I haven’t started the re-plant. Time for a vacation I think? :)

SEO news blog post by @ 12:23 pm on January 31, 2012


 

Surviving the SOPA Blackout

Tomorrow, January 18th, is SOPA blackout day, and lots of very popular sites are committing to participate in the blackout.
SOPA Blackout cartoon
How can web companies, such as SEOs, and supporters (like us) maintain workflow in the midst of a major blackout?

We’ve got some tips!

I need to find things mid-blackout!

While some sites will be partially blacked out, a lot of the larger sites will be completely offline in terms of content for maximum effect.

This means that during the blackout folks will have to turn to caches to find information on the blacked out sites.

If Google and the Internet Archive both stay on-line during the blackout, you can use them to get cached copies of most sites.

If you’re not sure how you’d still find the information on Google, here’s a short video created by our CEO Dave Davies to help you along. :)

I want to participate without killing my SEO campaign!

If all your back-links suddenly don't work, or they all 301 to the same page for a day, how will that affect your rankings?

Major sites get crawled constantly; even 30 minutes of downtime could get noticed by crawlers.

A smaller site that gets crawled once a week would have a very low risk doing a blackout for the daytime hours of the 18th.

Further to that you could also look at user agent detection and sort out people from crawlers, only blacking out the human traffic.
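If you wanted to experiment with that client-side, here is a rough sketch of the idea (the bot list is illustrative, not exhaustive, and this is not an official sopablackout.org snippet): only pull in the blackout script when the visitor's user agent doesn't look like a well-known crawler.

<script type="text/javascript">
// Rough sketch: skip the blackout for common crawler user agents
var ua = navigator.userAgent.toLowerCase();
var bots = ['googlebot', 'bingbot', 'slurp', 'baiduspider', 'yandex'];
var isBot = false;
for (var i = 0; i < bots.length; i++) {
  if (ua.indexOf(bots[i]) !== -1) { isBot = true; break; }
}
if (!isBot) {
  // Only human-looking visitors get the blackout script
  var s = document.createElement('script');
  s.src = '//js.sopablackout.org/sopablackout.js';
  document.getElementsByTagName('head')[0].appendChild(s);
}
</script>

Keep in mind most crawlers don't execute JavaScript anyway, so a server-side user agent check would be the more reliable route; the snippet above just illustrates the idea.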

If that seems rather complex, there are two automated solutions already offered:

    • sopablackout.org is offering a JS file you can include that will black out visitors to the site and then let them click anywhere to continue.
      Simply putting this code in a main include (like a header or banner) will do the trick:
      <script type="text/javascript" src="//js.sopablackout.org/sopablackout.js"></script>

 

  • Get a SOPA plugin for your WordPress and participate without shutting down your site. It simply invokes the above Javascript on the 18th automagically so that visitors get the message and then they can continue on to the blog.

I’d be a rotten SEO if I suggested you install an external Javascript without also clearly telling folks to REMOVE these when you are done. It might be a bit paranoid, but I live by the better safe than sorry rule. Plus just because you are paranoid, it doesn’t mean people aren’t trying to track your visitors. :)

How’s Chia Bart doing? .. Well I think he’s having a mid-life crisis right now because he looks more like the Hulkster than Bart?

Pastamania!
Chia Bart number 5
To all my little Bartmaniacs, drink your water, get lots of sunlight, and you will never go wrong!

SEO news blog post by @ 11:28 am on January 17, 2012


 

Webcology Year In Review

For those interested in what some of the top minds of SEO, SEM, mobile marketing and social media have to say about 2011, and maybe more importantly what they see coming in 2012, Thursday's Webcology is a must-listen. On WebmasterRadio.fm, Jim Hedger and I will be hosting two separate round-tables, with five guests each over two hours, covering everything from Panda to personalization and mobile growth to patent applications. It's going to be a fast-paced show with something for everyone.

The show will be airing live from 2PM EST until 4PM EST on Thursday, December 22nd. If you catch it live you'll have a chance to join the chat room and ask questions of your own, but if you miss it you still have an opportunity to download the podcast a couple of days later. I don't often focus this blog on promoting the radio show I co-host, but with the lineup we have, including SEOmoz's Rand Fishkin, Search Engine Watch's Jonathan Allen and Mike Grehan, search engine patent guru Bill Slawski, and many more talented and entertaining Internet marketing experts, it's definitely worth letting our valued blog visitors know about it. And if you're worried it might just be a quiet discussion, Terry Van Horne is joining us to ensure that doesn't happen. Perhaps I'll ask him a question or two about his feelings on Schema.org (if you listen to the show, you'll quickly get why this is funny). :)

So tune in tomorrow at 2PM EST at http://www2.webmasterradio.fm/webcology/, be sure to join the chat room to let us know your thoughts and enjoy.

SEO news blog post by @ 3:32 pm on December 21, 2011


 

Panda’s take on Popular vs. Productive

I’ve seen a few SEO blog posts recently on post-panda content concerns that unsurprisingly contradict each other.

The “popular” camp seem to feel the following is true:

- Don’t post anything off topic
- Don’t post anything that won’t be a hit
- If you post something that fails, pull it
- If you can’t pull a post, fake the popularity

So what that means is pulling your punches until you have a post that’s really going to draw attention to your blog.
The SEO logic is that while regular content creates a positive metric, anyone can produce regular content and in fact loads of unpopular content could become a negative ranking factor.

The “productive” camp follow these golden rules:

- Don’t post content that isn’t unique
- Don’t spin content to create unique content
- Keep keyword densities high
- Keep a low ratio of links in proportion to images/text

This group spends all their time creating content and doesn't spend time worrying about how popular every post will be.

The SEO logic with “producers” is that the Panda update wants to see regular fresh content publications without duplication of existing content, only ‘really bad’ content can harm this ranking factor.

Well I hate to be a pacifist, but both sides are correct! A great strategy would be to listen to BOTH sides.

  • If every post on your blog gets 300+ links on the day it’s posted, that’s not going to look organic
  • If your blog gets one post, every single day, and nobody links to them, that’s not organic either

So post regularly, but don't sweat it if you miss one day. If you are having a slow day for topics, try to go find some discussions where you can generate interest and back-links to your existing posts. At worst you'll find some topics that are far more interesting than what you've been blogging about, and you'll get something fresh to discuss.

A post in draft, waiting for perfection, won’t do you much good if it never gets published. :)

Those of you shocked to see us on SEO blog topics right now can rest assured we’re struggling to stay on topic.

Oh the SOPA debate is frightful,
But MAFIAAFire is so delightful,
And since we’ve no position to SEO,
Let It Snow! Let It Snow! Let It Snow!

It doesn’t show signs of shoop’ing,
I’ve got a report showing keywords are ranking,
And the competition’s phrases are way down low,
Let It Snow! Let It Snow! Let It Snow!

When we finally reach page one,
How I’ll hate going on the phone!
But if you’ll order via email,
It will make it to your home without fail.

The lyric is slowly ending,
And, my dear, we’re badly rhym-ing,
But as long as you let me SEO,
Let It Snow! Let It Snow! Let It Snow!

SEO news blog post by @ 12:05 pm on December 20, 2011


 

Couped up with Google Verbatim Searches

Still upset that Google changed the + functionality in searches? Haven’t tried the verbatim search option, or you have but it didn’t match what you were expecting? This is a blog post for you, the dear + lover seeking to restore your lost Google-Fu.

Let's say you were hoping to search for a place to store some chickens; you could search for chicken coop, chicken coup, chicken coupe, and probably a ton of other variants while always getting the results for "chicken coop".

a chicken coop

Great times! Now what if you were searching for a not-so-famous musical group from the deep south, with 'Chicken Coupe' as the only part of the name you can recall? Searching for Chicken Coupe would get you the above results and leave you wishing you could get an exact match.

In Google AdWords an exact match is done by putting square brackets [around] a word. Sadly, putting square brackets around a chicken coupe still doesn't get the result we want?

a chicken coupe

Until Google realizes they passed up a handy way to keep their tools in harmony, the result we want is still two more clicks (seriously) away.
more tools
The first step is to let Google know we mean business by clicking on ‘More search tools’.

Why is this located at the bottom left of everything?
Google is concerned about our neck and spine health?
First person with a theme or script to put these options on the first page gets an honourable mention…

EDIT: Adding '&tbs=li:1' to searches seems to be a quick way to toggle verbatim?
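For example, a hand-built verbatim search URL could look something like this (using the chicken coupe query from above):

https://www.google.com/search?q=chicken+coupe&tbs=li:1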

So if you have custom search engine entries, you could add a 'v' shortcut set to something like this (Chrome syntax):

{google:baseURL}search?q=%s&gl=us&num=50&tbs=li:1

A ‘v’ entry with the above code would look like this:

verbatim search shortcut

(Each time you type ‘v’ the browser will search for the next word using the ‘verbatim’ search option)

verbatim search
The next (and final) step:

Now that you’ve forced Google’s hand into showing you more search options..
.. you should see ‘Verbatim’ at the bottom of the list?

Click on that link and the results should change?

If all went well you should be a lot closer to the music you had in mind when you started this search.

This is also VERY handy if you use Google to spell check exotic/localized words.

Just keep an eye out for the blue ‘learn more’ bar and it will tell you when you are doing a verbatim search.

SEO news blog post by @ 10:46 am on December 6, 2011


 

Welcome to NewTube – HTML5 + Sneak Peek Tip

YouTube and Google have been update crazy this month. Apparently the Google engineers are doing more than growing facial hair and thinking about their tongues.

New YouTube Start Page

The image above is a sneak peek at the new YouTube start page. It wasn’t intended to be public but a single command can enable anyone to use it right now.

This command will give you the cookie you need to see the new layout:
javascript:document.cookie="VISITOR_INFO1_LIVE=ST1Ti53r4fU";

To enter the command in Chrome, you can paste it into the address bar, and if it removes/culls the "javascript:" part, just put it back in and hit enter. Now you'll have the cookie, and going to YouTube's homepage will show the new screen.

Optionally, with other browsers you can open the developer console and run the JavaScript command from there.
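In that case, drop the 'javascript:' prefix and run the cookie assignment on its own:

document.cookie="VISITOR_INFO1_LIVE=ST1Ti53r4fU";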

If that's not enough fun for you, the HTML5 player has almost completely caught up with the Flash version of the YouTube player, and in many ways it's better.

No tricks needed here, just head over to the YouTube HTML5 page and click on the ‘Join’ button on the bottom of the page.

Once that’s done you should notice a much different menu when you right click on videos that support the HTML5 player:

HTML5 Video Playback on YouTube

One other "TIL" was the set of speed test pages linked from the HTML5 page:

Performance tests on YouTube

Performance graphs on YouTube

And even a real-time streaming benchmark:

Performance graphs on YouTube

From the looks of things this could be the year that YouTube drops flash entirely, or at the very least makes it the ‘other option’ with HTML5 as the default. I’d personally love to uninstall flash and that would be one big hurdle down if YouTube switches completely. *fingers crossed*

SEO news blog post by @ 10:53 am on November 22, 2011


 

How Rich-Snippets for Apps Increase CTR

Yesterday, Beanstalk blogger Ryan Morben introduced a list of 10 new changes to the Google algorithm. One of the updates he mentioned was the use of "Better Snippets." I thought I would take this opportunity to elaborate on these further.

In September of this year, Google introduced rich snippets to be used for review, event and music sites. This was an effort to help users determine whether a particular website had the relevant information they were searching for. The snippets allow you to see information about an application, its reviews and its pricing within the actual search results before you download the app.

These rich snippets are becoming increasingly important not just for sites offering mobile apps, but for all software applications available for download. They are becoming critical for software developer sites, software publishers, download portals and review sites that want to stand out from the rest of the SERPs.

These new rich snippets have two additional attributes that help specify which countries the app is currently available in and which it is not. However, at this time there is no formal standardization of the format specifications on schema.org.
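To give a rough idea of the kind of markup involved, here is a hypothetical sketch of software application data marked up with schema.org-style microdata. The type and property names follow schema.org's SoftwareApplication vocabulary, but the values are placeholders, so check the current documentation before relying on any of it:

<div itemscope itemtype="http://schema.org/SoftwareApplication">
  <span itemprop="name">Example App</span> for
  <span itemprop="operatingSystem">Android</span>
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.5</span> stars by
    <span itemprop="ratingCount">1024</span> users
  </div>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    Price: $<span itemprop="price">0.99</span>
    <meta itemprop="priceCurrency" content="USD" />
  </div>
</div>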

Sites that have utilized this new snippet, specifically those with large review sections or downloadable content, show much larger images in the SERPs than the author rich snippets do. These larger images inevitably lead to higher CTRs and ultimately help to increase conversions.

serp pic 1
serp pic 2

Google will also inevitably prefer sites using rich snippets/microformats that have more complete and detailed metadata. For this reason, it is imperative to provide meaningful data in all the available attribute areas and not only fill in the required ones. You should always test new rich snippets and apply to Google to have the new extensions cleared for display in the SERPs to help boost your CTRs.

SEO news blog post by @ 11:12 am on November 16, 2011


 

10 new changes to Google algorithms

New features from Google
Yesterday, over on the Google Inside Search blog, Matt Cutts shared 10 recent changes to the Google search algorithms from the last few weeks.

As always these posts can get a bit technical, and anyone subscribed to the feed can just get it from the horse's mouth. The goal of this post is to put the changes into clearer terms from an SEO perspective:

Translated search titles:
When searching with languages where limited web content is available, Google can translate the English-only results and display the translated titles directly below the English titles in the search results. This also translates the result automatically, thereby increasing the available web content for non-English searchers. If you were selling products that appealed to a global market, but hadn’t yet invested in translations/global site structure, this could drive fresh traffic to your sites/products.

Better Snippets:
Google’s mantra is always ‘content, content, + more content’, and now the snippet code is focusing on the page content vs. header/menu areas. Because of the way sites use keywords in the headers/menus, coding the snippets to seek out body content will result in more relevant text in search snippets.

Improved Google generated page titles:
When a page is lacking a title, Google has code in place to assign a title to the page using various signals. A key signal used is back-link anchor text pointing to the page. If a site has a ton of duplicate anchor text in the back-links, Google has found that putting less emphasis on those links creates a far more relevant title than previously. In this way the titles in the search results should be much less misleading.

Improved Russian auto-complete:
Languages are a constant headache for search engines, and new features like auto-complete can take a very long time to mature in languages outside of English. Recently the prediction system for auto-completed queries was improved to avoid overly long comparisons to the partial query to make auto-complete function much better in Russian, and closer to how well it works for English queries.

More information in application snippets:
Last week Google announced a new method of improved snippets for applications. The feature is pretty technical, and it looks like an entire blog post is coming on just this topic. Here's an example image that hopefully gives you the gist of how the snippets provide details like prices, ratings, and user reviews.

Example of application snippet from Google search results.

The feature has been very popular, and Google recently added even more options; expect a full blog post on it here soon.

Less document relevance in Image searches:
If you look up search engine optimization on Wikipedia and look at the entry for image search optimization, you will note that there's really nothing to say about SEO tactics for images. That hasn't really been true; there are signals that Google looks for when deciding which image to show for a particular keyword.
Previously, an image referenced multiple times in PDFs or other searchable documents would get higher placement in the results. Google has done away with this signal, as it wasn't giving improved results and could easily be abused. *Innocent whistling*

Higher ranking signals on fresh content:
Consider, if you will, how Google would look if they never gave new sites and fresh content a shot at the top, or a moment in the limelight. Most rating systems will show you the 'best of the most recent' by default, just to avoid older content dominating the results. As a person on the phones taking SEO leads, I can tell you there's always been a '10 minutes of fame' situation on Google, where the unexplainable happens in the search results with fresh sites/content, only to return to normal later on when the dust settles. Google claims the recent change impacts roughly 35% of total search traffic, which could be a significant boost for sites that take the time to publish fresh content, or for new sites looking for a chance to be seen.

Improved official page detection:
We've blogged recently about the importance of the rel=author attribute, tying your content to a G+ profile, and completing the circle with a back-link from the profile to your site. Google has added even more methods of establishing 'official' pages and is continuing to give 'official' pages higher rankings on searches where authority is important. If you missed our article on this topic from last week, here's the link.

Better date specific results:
The date a page is discovered may not always be the date the information was published. Google has the difficult task of sorting out 'date' relevance for search results, and they keep improving on this where possible. A good example would be using duplicate matches to avoid showing you a three-year-old article that was re-posted two days ago when you specify that you only want results from, say, 'last week'.

Enhanced prediction for non-Latin characters:
You'd think it's hard enough to get a predictive query straight when the character set is limited to Latin, and you'd be right. When it takes several keystrokes to complete a single character in a non-Latin script, a service like Google's auto-complete would be hard-pressed to know when to start guessing. Prior to this update, predictions in Russian, Arabic, and Hebrew were giving gibberish results as the user was forming characters.

These are 10 changes out of 500+ made so far this year. We try to document the most important changes for you, but there are lots of times when Google can't release info because of exploits/cheating. When that happens you'll see us chime in with experiments and our personal experience when we can. So while I'd normally suggest folks interested in this topic subscribe to the Inside Search blog, we know that you'll only be getting part of the story by doing so. ;)

SEO news blog post by @ 1:16 pm on November 15, 2011


 
