Panda 3.3: Building a Better Panda

This post follows up on my previous blog post from Monday, where I recapped the launch and implementation of the Google algorithm update affectionately known as "Panda." First released in February 2011, Panda has been through several iterations and has had a profound effect on the quality of search results, webspam and SEO.

Panda Terminator

On February 27th, Google confirmed the release of the Panda 3.3 update, in conjunction with roughly forty other search updates either completed in February or currently in progress. On the surface it seems very similar to Google’s January release of Panda 3.2, which Google described as a "data refresh," but Google describes this update as follows: "This launch refreshes data in the Panda system, making it more accurate and more sensitive to recent changes on the web."

In their blog post, Google states that they are retiring a link evaluation signal that has been employed for many years, an act that is sure to cause some heated discussion around SEO water coolers everywhere. Google was reluctant to release much more detail, for fear of revealing too much about its ranking signals.

Link evaluation. We often use characteristics of links to help us figure out the topic of a linked page. We have changed the way in which we evaluate links; in particular, we are turning off a method of link analysis that we used for several years. We often re-architect or turn off parts of our scoring in order to keep our system maintainable, clean and understandable.

Another component of Panda 3.3 focuses on local search rankings. Google revealed that traditional algorithmic ranking factors are now playing a larger part in triggering local search results.

Here are the released details of the Panda 3.3 algorithm update:

  • More coverage for related searches. [launch codename “Fuzhou”] This launch brings in a new data source to help generate the “Searches related to” section, increasing coverage significantly so the feature will appear for more queries. This section contains search queries that can help you refine what you’re searching for.
  • Tweak to categorizer for expanded sitelinks. [launch codename “Snippy”, project codename “Megasitelinks”] This improvement adjusts a signal we use to try and identify duplicate snippets. We were applying a categorizer that wasn’t performing well for our expanded sitelinks, so we’ve stopped applying the categorizer in those cases. The result is more relevant sitelinks.
  • Less duplication in expanded sitelinks. [launch codename “thanksgiving”, project codename “Megasitelinks”] We’ve adjusted signals to reduce duplication in the snippets for expanded sitelinks. Now we generate relevant snippets based more on the page content and less on the query.
  • More consistent thumbnail sizes on results page. We’ve adjusted the thumbnail size for most image content appearing on the results page, providing a more consistent experience across result types, and also across mobile and tablet. The new sizes apply to rich snippet results for recipes and applications, movie posters, shopping results, book results, news results and more.
  • More locally relevant predictions in YouTube. [project codename “Suggest”] We’ve improved the ranking for predictions in YouTube to provide more locally relevant queries. For example, for the query [lady gaga in ] performed on the US version of YouTube, we might predict [lady gaga in times square], but for the same search performed on the Indian version of YouTube, we might predict [lady gaga in India].
  • More accurate detection of official pages. [launch codename “WRE”] We’ve made an adjustment to how we detect official pages to make more accurate identifications. The result is that many pages that were previously misidentified as official will no longer be.
  • Refreshed per-URL country information. [Launch codename “longdew”, project codename “country-id data refresh”] We updated the country associations for URLs to use more recent data.
  • Expand the size of our images index in Universal Search. [launch codename “terra”, project codename “Images Universal”] We launched a change to expand the corpus of results for which we show images in Universal Search. This is especially helpful to give more relevant images on a larger set of searches.
  • Minor tuning of autocomplete policy algorithms. [project codename “Suggest”] We have a narrow set of policies for autocomplete for offensive and inappropriate terms. This improvement continues to refine the algorithms we use to implement these policies.
  • “Site:” query update [launch codename “Semicolon”, project codename “Dice”] This change improves the ranking for queries using the “site:” operator by increasing the diversity of results.
  • Improved detection for SafeSearch in Image Search. [launch codename "Michandro", project codename “SafeSearch”] This change improves our signals for detecting adult content in Image Search, aligning the signals more closely with the signals we use for our other search results.
  • Interval based history tracking for indexing. [project codename “Intervals”] This improvement changes the signals we use in document tracking algorithms. 
  • Improvements to foreign language synonyms. [launch codename “floating context synonyms”, project codename “Synonyms”] This change applies an improvement we previously launched for English to all other languages. The net impact is that you’ll more often find relevant pages that include synonyms for your query terms.
  • Disabling two old fresh query classifiers. [launch codename “Mango”, project codename “Freshness”] As search evolves and new signals and classifiers are applied to rank search results, sometimes old algorithms get outdated. This improvement disables two old classifiers related to query freshness.
  • More organized search results for Google Korea. [launch codename “smoothieking”, project codename “Sokoban4”] This significant improvement to search in Korea better organizes the search results into sections for news, blogs and homepages.
  • Fresher images. [launch codename “tumeric”] We’ve adjusted our signals for surfacing fresh images. Now we can more often surface fresh images when they appear on the web.
  • Update to the Google bar. [project codename “Kennedy”] We continue to iterate in our efforts to deliver a beautifully simple experience across Google products, and as part of that this month we made further adjustments to the Google bar. The biggest change is that we’ve replaced the drop-down Google menu in the November redesign with a consistent and expanded set of links running across the top of the page.
  • Adding three new languages to classifier related to error pages. [launch codename "PNI", project codename "Soft404"] We have signals designed to detect crypto 404 pages (also known as “soft 404s”), pages that return valid text to a browser but the text only contain error messages, such as “Page not found.” It’s rare that a user will be looking for such a page, so it’s important we be able to detect them. This change extends a particular classifier to Portuguese, Dutch and Italian.
  • Improvements to travel-related searches. [launch codename “nesehorn”] We’ve made improvements to triggering for a variety of flight-related search queries. These changes improve the user experience for our Flight Search feature with users getting more accurate flight results.
  • Data refresh for related searches signal. [launch codename “Chicago”, project codename “Related Search”] One of the many signals we look at to generate the “Searches related to” section is the queries users type in succession. If users very often search for [apple] right after [banana], that’s a sign the two might be related. This update refreshes the model we use to generate these refinements, leading to more relevant queries to try.
  • International launch of shopping rich snippets. [project codename “rich snippets”] Shopping rich snippets help you more quickly identify which sites are likely to have the most relevant product for your needs, highlighting product prices, availability, ratings and review counts. This month we expanded shopping rich snippets globally (they were previously only available in the US, Japan and Germany).
  • Improvements to Korean spelling. This launch improves spelling corrections when the user performs a Korean query in the wrong keyboard mode (also known as an “IME”, or input method editor). Specifically, this change helps users who mistakenly enter Hangul queries in Latin mode or vice-versa.
  • Improvements to freshness. [launch codename “iotfreshweb”, project codename “Freshness”] We’ve applied new signals which help us surface fresh content in our results even more quickly than before.
  • Web History in 20 new countries. With Web History, you can browse and search over your search history and webpages you’ve visited. You will also get personalized search results that are more relevant to you, based on what you’ve searched for and which sites you’ve visited in the past. In order to deliver more relevant and personalized search results, we’ve launched Web History in Malaysia, Pakistan, Philippines, Morocco, Belarus, Kazakhstan, Estonia, Kuwait, Iraq, Sri Lanka, Tunisia, Nigeria, Lebanon, Luxembourg, Bosnia and Herzegowina, Azerbaijan, Jamaica, Trinidad and Tobago, Republic of Moldova, and Ghana. Web History is turned on only for people who have a Google Account and previously enabled Web History.
  • Improved snippets for video channels. Some search results are links to channels with many different videos, whether on mtv.com, Hulu or YouTube. We’ve had a feature for a while now that displays snippets for these results including direct links to the videos in the channel, and this improvement increases quality and expands coverage of these rich “decorated” snippets. We’ve also made some improvements to our backends used to generate the snippets.
  • Improvements to ranking for local search results. [launch codename “Venice”] This improvement improves the triggering of Local Universal results by relying more on the ranking of our main search results as a signal. 
  • Improvements to English spell correction. [launch codename “Kamehameha”] This change improves spelling correction quality in English, especially for rare queries, by making one of our scoring functions more accurate.
  • Improvements to coverage of News Universal. [launch codename “final destination”] We’ve fixed a bug that caused News Universal results not to appear in cases when our testing indicates they’d be very useful.
  • Consolidation of signals for spiking topics. [launch codename “news deserving score”, project codename “Freshness”] We use a number of signals to detect when a new topic is spiking in popularity. This change consolidates some of the signals so we can rely on signals we can compute in realtime, rather than signals that need to be processed offline. This eliminates redundancy in our systems and helps to ensure we can continue to detect spiking topics as quickly as possible.
  • Better triggering for Turkish weather search feature. [launch codename “hava”] We’ve tuned the signals we use to decide when to present Turkish users with the weather search feature. The result is that we’re able to provide our users with the weather forecast right on the results page with more frequency and accuracy.
  • Visual refresh to account settings page. We completed a visual refresh of the account settings page, making the page more consistent with the rest of our constantly evolving design.
  • Panda update. This launch refreshes data in the Panda system, making it more accurate and more sensitive to recent changes on the web.
  • Link evaluation. We often use characteristics of links to help us figure out the topic of a linked page. We have changed the way in which we evaluate links; in particular, we are turning off a method of link analysis that we used for several years. We often rearchitect or turn off parts of our scoring in order to keep our system maintainable, clean and understandable.
  • SafeSearch update. We have updated how we deal with adult content, making it more accurate and robust. Now, irrelevant adult content is less likely to show up for many queries.
  • Spam update. In the process of investigating some potential spam, we found and fixed some weaknesses in our spam protections.
  • Improved local results. We launched a new system to find results from a user’s city more reliably. Now we’re better able to detect when both queries and documents are local to the user.

Other details of the updates and changes that Google has made recently can be found here:

SEO news blog post by @ 10:57 am on February 29, 2012


 

One Year After the Panda Attack

It has been just over one year since the Panda algorithm produced a prolific amount of pandemonium across the World Wide Web. I came across a great infographic posted on Search Engine Land, created in conjunction with BlueGlass SEO, detailing how Panda works, what it impacted, and the stages of the updates implemented from Panda 1.0 through to Panda 3.2.

The Google Panda Update, One Year Later

Here are some of our past blog posts detailing the Panda Updates as they came out, along with strategies and tactics to counteract the effects of the Panda Algorithm updates.


SEO news blog post by @ 11:49 am on February 27, 2012


 

Are multiple rel=authors worth it?

Recently Google+ made it a lot easier for content authors to indicate the sites they publish to.

That tells Google+ that you are an author for the sites you’ve listed. It also adds a backlink from your Google+ profile page to your site.

At this point, once Google has parsed the changes and updated its caches, you’ll start to see author credits on articles with the same name and email address. While the Google help docs ‘suggest’ you publish a matching email address with each post, it’s clearly not a requirement.

So after this update you could start to see ‘published by you’ when doing searches on Google for posts you’ve made, but what’s to stop anyone from claiming they wrote something online?

The other half of this process is creating a ‘rel=”author”‘ or ‘rel=”publisher”‘ link on the content pages on your blog/web site.

In the case of Beanstalk’s Blog, all posts get the same rel=”publisher” link; it looks like this (you can see it in ‘view-source’):

<link href="https://plus.google.com/115481870286209043075" rel="publisher" />

That makes Google see our blog posts as ‘published’ by our Google+ Profile, which is a bit ‘lazy’, but the process to add that code was very easy (and we blogged about it here) compared to the task of tagging each post with a custom link.
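For comparison, the per-post approach would tag each article with a link pointing at the individual writer’s profile. A hypothetical sketch of what that markup might look like (the profile URL, author name, and placement are placeholders, not Beanstalk’s actual code):

```html
<!-- Hypothetical per-post authorship markup: a rel="author" link in each
     article's byline, pointing at that writer's Google+ profile.
     The profile URL below is a placeholder. -->
<article>
  <h2>Post title</h2>
  <p>Written by
    <a rel="author" href="https://plus.google.com/000000000000000000000">Author Name</a>
  </p>
  <p>Post content...</p>
</article>
```

The trade-off is exactly the one described above: a single site-wide rel=”publisher” tag is a one-time change, while per-post rel=”author” links require tagging every post with the correct author.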

The truth is that there has to be some ‘ranking signal’ for multiple authors, and there should be a quality/trust grade based on the profiles of the authors. So what is that ‘factor’ that ‘has’ to be hiding in the ranking code? Great question!

Since we’ve already spent some time with Google+ and a single author source, we intend to run some tests and prove out the value, or lack of it. Our plan is to report on both the difficulty of applying the right tags to the proper posts, and the value of that effort. If anyone reading along has some good suggestions for the test process, please drop us a comment via the main contact form.

Where’s Bart?

Chia Bart is retired for now. I need to find a decent webcam and then I’ll re-do him with some time-lapse for added thrills and joy. In the meantime we’re looking at offering the readers a really unique chance at interacting with the blog:

Each month we will be posting some macro images. Each one will be a challenge to figure out, and we’ll take guesses on things like material, location, object, etc., and then we will rate the guesses based on how close they are. Technically, even a guess like “The picture for week 2 looks like glass” could win!

The best guess will get recognition on our blog and we’ll announce the best guess each month on Twitter as well.

This is the macro image for week two of February. If you think you know what this is, or where this is, send us your best guess via Twitter or G+.

SEO news blog post by @ 12:14 pm on February 9, 2012


 

Google’s Arnaque Carte

I was tempted to title this blog post ‘Why it’s cheaper to just block France‘ but that implied a lot and wasn’t as much fun as ‘Google’s Map Scam’ (translation).

Ever since the days of Google suggest, there has been some serious merde tossed at Google by French businesses and organizations. I think we all remember the famous French victories search?

Well apparently that algorithm has caused quite a stir for the French, as they actually sued Google to get them to remove ‘Arnaque’ from showing in a suggest result for ‘cnfdi’ because it looked bad! Seriously, the French are complaining that the suggest results are too ‘honest’, and they took legal action to have Google give a fake result for that suggest query.

Then again in late 2011, Google lost a French court case over a suggest result that added the French version of ‘crook/swindler’ to the end of a search string for a French insurance company. The sum of that settlement was almost $65,000, and again, the problem is that Google isn’t censoring its information enough.

If that wasn’t bad enough, this week a French mapping company that offers similar services to Google Maps won a judgment of over $660,000 against Google for providing its mapping services for free. Yep, once again, Google’s too honest/generous and France wants justice!

From my personal perspective, if I was Google, I’d just give France the same treatment as China, setup some massive IP block restrictions, and then go get some freedom fries with my spare time and money.


$7.50 for a tub of fries? Sweet!

SEO news blog post by @ 2:28 pm on February 2, 2012


 

Focus on the profit

In the first minute of the official ‘hard hitting’ video called ‘Focus on the user’ they stab at the heart of the Google+ social search issue:

  • They do a search for ‘cooking’
  • Then they click on the most relevant cooking result within Google+
  • Afterwards they compare that with a search for ‘Jamie Oliver’ and complain:

‘cooking’ isn’t very relevant to the latest info from ‘Jamie Oliver’

Twitter and Facebook whimper about Google+ social search

Don’t believe me that they did this? Go watch it again, they actually want us to feel outrage that ‘cooking’ doesn’t link us to the most relevant info for ‘Jamie Oliver’.

The authors of the plugin fully admit that they are getting the results info from Google itself, and just don’t want to say the words “Google is simply showcasing its services”; instead they want to make it out to be a matter of ‘evil’ and ‘holding back’. If they didn’t slip up at multiple points in the video and show how you can still get the top results without using their plugin, I’d say they had a case.

As much as there is to roll my eyes at, from an SEO standpoint everything about focusontheuser.org is brilliant. The back-links must be pouring in, and I saw a very clever ‘click here to get your results to show’ link in the video that could be a real profit mill for them (their bookmarklet is broken right now or I’d investigate).

Don’t get me wrong, I know this scripting project was backed by Facebook, Twitter, and MySpace (it’s still going), so it already had some deep pockets, but in my opinion, it looks like the devs had some deeper ‘evil’ ideas?

While we are still on the ‘Google+ Social is Evil’ topic, the changes to support nicknames, pseudonyms, and maiden names are apparently done, and now you can socialize however you wish on Google+. A more ‘evil’ company would have stuck to the original, and far more profitable, design which requires valid names and a serious privacy commitment.

To read more about the new Google+ naming policy put out on Monday just hop on over to Bradley Horowitz’s Google+ page.

I know this is the part where I slap up a picture of Chia Bart’s amazing growth and progress.. but someone decided to help him out and drain his water tray so he’s really wilted right now and I’m trying to get some life back into him. Perhaps I’ll do an update after lunch if he perks up? :)

Bart sprang back a fair bit, had to zoom to see the wilt!

SEO news blog post by @ 11:23 am on January 24, 2012


 

Surviving the SOPA Blackout

Tomorrow, January 18th, is SOPA blackout day, and lots of very popular sites are committing to participate in the blackout.
SOPA Blackout cartoon
How can web companies, such as SEOs, and supporters (like us) maintain workflow in the midst of a major blackout?

We’ve got some tips!

I need to find things mid-blackout!

While some sites will be partially blacked out, a lot of the larger sites will be completely offline in terms of content for maximum effect.

This means that during the blackout folks will have to turn to caches to find information on the blacked out sites.

If Google and the Internet Archive both stay online during the blackout, you can use them to get cached copies of most sites.

If you’re not sure how you’d still find the information on Google, here’s a short video created by our CEO Dave Davies to help you along. :)

I want to participate without killing my SEO campaign!

If all your back-links suddenly don’t work, or they all 301 to the same page for a day, how will that affect your rankings?

Major sites get crawled constantly; even 30 minutes of downtime could be noticed by the crawlers.

A smaller site that gets crawled once a week would run very little risk doing a blackout for the daytime hours of the 18th.

Further to that you could also look at user agent detection and sort out people from crawlers, only blacking out the human traffic.
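That user-agent approach can be sketched in a few lines. This is an illustrative sketch only (the function names are ours, and the crawler pattern list covers just a few major bots and is far from exhaustive):

```javascript
// Illustrative sketch: let known crawlers see the normal site while
// human visitors get the blackout page. The pattern list below is a
// small sample of major bots, not an exhaustive list.
var CRAWLER_PATTERN = /googlebot|bingbot|slurp|baiduspider|yandex/i;

function isCrawler(userAgent) {
  // Treat a missing user-agent header as a human visitor.
  return CRAWLER_PATTERN.test(userAgent || "");
}

function pageFor(userAgent) {
  // Crawlers get the regular page so rankings are unaffected;
  // everyone else gets the blackout notice.
  return isCrawler(userAgent) ? "normal-page" : "blackout-page";
}
```

The same logic works server-side (e.g. in a PHP header include) by checking the User-Agent request header before deciding which template to render.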

If that seems rather complex, there are two automated solutions already on offer:

    • sopablackout.org is offering a JS you can include that will black out visitors to the site and then let them click anywhere to continue.
      Simply putting this code in a main include (like a header or banner) will do the trick:
      <script type="text/javascript" src="//js.sopablackout.org/sopablackout.js"></script>

 

  • Get a SOPA plugin for your WordPress site and participate without shutting it down. It simply invokes the above Javascript on the 18th automagically, so that visitors get the message and can then continue on to the blog.

I’d be a rotten SEO if I suggested you install an external Javascript without also clearly telling folks to REMOVE it when you are done. It might be a bit paranoid, but I live by the better-safe-than-sorry rule. Plus, just because you are paranoid, it doesn’t mean people aren’t trying to track your visitors. :)

How’s Chia Bart doing? .. Well I think he’s having a mid-life crisis right now because he looks more like the Hulkster than Bart?

Pastamania!
Chia Bart number 5
To all my little Bartmaniacs, drink your water, get lots of sunlight, and you will never go wrong!

SEO news blog post by @ 11:28 am on January 17, 2012


 

EPIC FTC Madness

Happy Friday the 13th!

You know that look your pets give you when you are vacuuming?

No not this look:

Scared dog

More like the ‘I will eat you if you get any closer’ look.. ?

That was the look on my face as I read reports today that the Electronic Privacy Information Center has formally requested that the FTC investigate Google’s new social search features for anti-competitive nature and privacy violations.

So what did this prove? In my personal opinion, it proves that someone at EPIC is either a complete fool or funded by Facebook. Here’s why it’s so amazing:

If I want to ‘violate privacy’ in the eyes of EPIC I’d do an image search (on any search engine) for ‘teen mirror facebook’ and I’d get a slew of images teens have taken of themselves in front of a mirror and posted to Facebook. That’s all I’d have to do, and by EPIC’s standards I’ve ‘violated privacy rights’ by getting access to these pictures which are marked ‘public’ on Facebook. This would be no different from me choosing to see search results from my Google+ interests.

If I wanted to make my browser anti-competitive in the eyes of EPIC I’d go into my search settings and I’d add a modifier for my search engine URLs that would add ‘facebook’ as a verbatim keyword that must be in every search result. By clicking those options I’ve now set my browser up for a big fall and stern letters should be written to the FTC immediately to urge them to spend millions of dollars investigating these horrible anti-competitive atrocities. Again, this is no different from me deciding to specifically look at Google+ results when searching.

Heck now that I’ve pointed out that browser software has pre-meditated options to allow anti-competitive behaviour, I guess EPIC will be writing letters to the FTC demanding to have the browser manufacturers investigated to put a stop to people having access to features which allow them to choose a particular service over another.

If my hair wasn’t so short I think I’d be pulling it all out right now in dismay over such examples of non-thought. Perhaps I’ll go trim Chia Bart instead, he’s almost getting ‘shaggy’ now.

If I took even more pictures we could animate Bart!

SEO news blog post by @ 12:39 pm on January 13, 2012


 

SEO Effects of Social Search

Yesterday we covered the hot topic of Google’s social search from a very ‘news’ perspective. If you haven’t watched the tour video take a minute and hit play on the video below.

The truth is that Google is rolling out this new search functionality piecemeal, just as it has with most recent features. So if I try to explore the option from my work account I get no offers, and I’d have to cheat to go play with it right now.

However, on my personal account the option comes right up, and that account has a smaller social circle than my work account, so it seems to me that it’s just a work in progress at the moment.

A visit to the Google Inside Search site gives us a bit more confirmation:

If you aren’t seeing the features of Search plus Your World, don’t worry, we’re rolling them out over the next few days.

.. so if you’re not getting the option to try it out, it should come along soon!

Here’s a ‘hands on’ example of ‘Search plus Your World’ for a phrase I personally talk about a lot, ‘minecraft’:

Demonstration of Search plus Your World using the phrase 'minecraft'.

The first thing that occurs to me is that Danny talks about Minecraft WAY more than anyone else, but the second thing that gets my interest is that there’s nothing in the results that I wouldn’t have read or couldn’t get from poking my head into Google+.

Going back to that video from Google that we linked earlier, I have to admit this looks like a very over-hyped feature, where 90% of the interesting parts of the video aren’t things we can do with the new search feature. This almost feels like a Microsoft product that was invented by marketers as something to market, with zero user interest.

Well that’s my opinion dealt with, but what about SEO factors of this new feature?

A ton of questions come to mind that need to be answered; here are a few:

  • Who stands to gain from these types of searches?
  • What sites will be negatively impacted?
  • What should websites be doing to take advantage of this new feature?

The first one’s easy: Google, and particularly Google+, will gain the most from this new search behaviour. Google has always wanted you to find what you want within their domain/services, and limiting your search to a Google-owned property, while selling it as a great feature, works well for Google’s overall goals. If you don’t believe that Google wants to keep you inside their services, then as you use Google products, challenge yourself to consider ‘What more could Google do to keep me inside their networks?’ and I think you’ll start seeing all the efforts they are making to give you what you want instantly vs. leaving Google to visit an external site.

Social media sites that were getting a lot of commercial traffic/advertising will be the hardest hit by this move. If a client came to me and said “We’re on all the big sites: Twitter, Facebook, LinkedIn, Pinterest, Squidoo, etc., but we haven’t bothered with Google+,” I would be forced to assume they were Australian, with such an opposite approach. The same follows for campaign strategies: a company weighing time spent vs. returns would be silly to start a social media campaign anywhere but on Google+ first.

If you have a website that isn’t already following the guidelines for linking between Google+ and your site, you need to start there and then work on building up followers. Ideally you want people talking about your products/services more than your competition’s, so I’d strongly urge someone within your company to engage in Google+ social media efforts on at least a weekly basis. While it’s pointless to have infinite reach and zero relevance, you also want to be very ‘friendly’, doing whatever it takes to get people interested enough in your company pages to follow, +1, add to circles, etc.

In fact the last bit of advice will be a recurring theme for early 2012 where we will be looking at super organic ways to get your product/services out to relevant sections of the internet.

A good example would be a product that is easy to find on-line, but very technical/tricky to work with. Selling the product puts you in the same group as everyone else selling that product, but offering expertise on that product will raise your profile quickly while generating interest/informing potential clients. If you can get links from grateful recipients the effort will pay for itself, and the people you come in contact with are very likely to draw in more clients due to the way that social media is sharing business leads via friend connections.

As is typical of spring, the sooner you plant this ‘social seed’ the sooner it will grow into something that can support your on-line efforts.

Speaking of growing, Chia Bart is getting a little leafy already!

Chia Bart is sprouting nicely. Bart’s beans are sprouting!

SEO news blog post by @ 3:17 pm on January 11, 2012


 

Webcology Year In Review

For those interested in what some of the top minds in SEO, SEM, mobile marketing and social media have to say about 2011, and maybe more importantly what they see coming in 2012, Thursday’s Webcology is a must-listen. Hosted on WebmasterRadio.fm, Jim Hedger and I will be hosting two separate round-tables with five guests each over two hours, covering everything from Panda to personalization, and mobile growth to patent applications. It’s going to be a fast-paced show with something for everyone.

The show will be airing live from 2PM EST until 4PM EST on Thursday, December 22nd. If you catch it live you’ll have a chance to join the chat room and ask questions of your own, but if you miss it you still have an opportunity to download the podcast a couple of days later. I don’t often focus this blog on promoting the radio show I co-host, but with the lineup we have, including SEOmoz’s Rand Fishkin, Search Engine Watch’s Jonathan Allen and Mike Grehan, search engine patent guru Bill Slawski and many more talented and entertaining Internet Marketing experts, it’s definitely worth letting our valued blog visitors know about it. And if you’re worried it might just be a quiet discussion, Terry Van Horne is joining us to ensure that doesn’t happen. Perhaps I’ll ask him a question or two about his feelings about Schema.org (if you listen to the show … you’ll quickly get why this is funny). :)

So tune in tomorrow at 2PM EST at http://www2.webmasterradio.fm/webcology/, be sure to join the chat room to let us know your thoughts and enjoy.

SEO news blog post by @ 3:32 pm on December 21, 2011


 

Panda’s take on Popular vs. Productive

I’ve seen a few SEO blog posts recently on post-panda content concerns that unsurprisingly contradict each other.

The “popular” camp seem to feel the following is true:

- Don’t post anything off topic
- Don’t post anything that won’t be a hit
- If you post something that fails, pull it
- If you can’t pull a post, fake the popularity

So what that means is pulling your punches until you have a post that’s really going to draw attention to your blog.
The SEO logic is that while regular content creates a positive metric, anyone can produce regular content, and in fact loads of unpopular content could become a negative ranking factor.

The “productive” camp follow these golden rules:

- Don’t post content that isn’t unique
- Don’t spin content to create unique content
- Keep keyword densities high
- Keep a low ratio of links in proportion to images/text

This group spends all their time creating content and doesn’t spend time worrying about how popular every post will be.

The SEO logic with “producers” is that the Panda update wants to see regular fresh content published without duplicating existing content; only ‘really bad’ content can harm this ranking factor.

Well I hate to be a pacifist, but both sides are correct! A great strategy would be to listen to BOTH sides.

  • If every post on your blog gets 300+ links on the day it’s posted, that’s not going to look organic
  • If your blog gets one post, every single day, and nobody links to them, that’s not organic either

So post regularly, but don’t sweat it if you miss one day. If you are having a slow day for topics, try to find some discussions where you can generate interest/back-links to your existing posts. At worst you’ll find some topics that are far more interesting than what you’ve been blogging about, and you’ll get something fresh to discuss.

A post in draft, waiting for perfection, won’t do you much good if it never gets published. :)

Those of you shocked to see us on SEO blog topics right now can rest assured we’re struggling to stay on topic.

Oh the SOPA debate is frightful,
But MAFIAAFire is so delightful,
And since we’ve no position to SEO,
Let It Snow! Let It Snow! Let It Snow!

It doesn’t show signs of shoop’ing,
I’ve got a report showing keywords are ranking,
And the competition’s phrases are way down low,
Let It Snow! Let It Snow! Let It Snow!

When we finally reach page one,
How I’ll hate going on the phone!
But if you’ll order via email,
It will make it to your home without fail.

The lyric is slowly ending,
And, my dear, we’re badly rhym-ing,
But as long as you let me SEO,
Let It Snow! Let It Snow! Let It Snow!

SEO news blog post by @ 12:05 pm on December 20, 2011


 

Copyright© 2004-2014
Beanstalk Search Engine Optimization, Inc.
All rights reserved.