
SEOmoz SPAM Outing

In the wake of the recent Penguin update from Google and the impact it has had on many sites, Rand Fishkin, CEO of SEOmoz, announced on his Google+ page that SEOmoz is currently developing tools to facilitate the "classifying, identifying and removing/limiting link juice passed from sites/pages."

[Image: Feathers McGraw]

SEOmoz wants to add software to the toolset already available to subscribers on their website, to help determine whether their own website or a competitor's website appears to be spammy in nature.

If SEOmoz has developed a method to analyze signals that can be used to determine whether a site is spammy, it is safe to assume that Google is viewing the page or site in question in the same light. Links that are determined to be spammy will pass little link juice and could potentially incur a penalty from Google. Fishkin summed up the process by saying that if SEOmoz classifies a site or page as having spammy backlinks, "we're pretty sure Google would call it webspam."
Some in the SEO community are angered by Rand Fishkin's policy of "outing" SEOs for spamming practices, so this time Rand has enlisted the public to answer whether or not he should do so.

Some of our team members, though, do have concerns about whether SEOs will be angry that we’re “exposing” spam. My feeling is that it’s better to have the knowledge out there (and that anything we can catch, Google/Bing can surely better catch and discount) then to keep it hidden. I’m also hopeful this can help a lot of marketers who are trying to decide whether to acquire certain links or who have to dig themselves out of a penalty (or reverse what might have caused it).


Preliminary results show that most are in favor of Rand's reporting of other SEOs for spammy practices. Certainly the reporting of offenders will help Google combat the unwanted webspam that has permeated search results since the Internet entered mainstream society. It is the new mantra of the modern web: follow the rules and guidelines established by Google or risk serious reprisal, whether or not you agree with them. Ultimately, what benefits the search results benefits the searcher.

On a slightly related note, I would like to suggest Feathers McGraw as the new face for the Penguin algorithm update from Google…

SEO news blog post by @ 10:49 am on May 9, 2012

Categories: Google, Rankings

 

Search Engine Experiment in Spam Surfing

If you took a heavily spam-influenced search engine like Bing, for example, and removed the first 1 million results for a query, how good would the results be?

How about doing the same thing to the best-filtered search engines available?

Well, someone got curious and made the Million Short search engine.

What this new service does is remove a specific number of search results and show you the remainder.

I became immediately curious about a few things:

  • Where are they getting their crawl data from?
  • What are they doing with searches where there are only a few hundred results?
  • Where is the revenue stream? I see no ads.

Given the lack of advertising, I was expecting them to be pulling search data from another site.

There's no way they are pulling from Bing/Yahoo; there are 14+ sites paying for better spots than we've earned on Bing for our terms.

And while the top 10 list looks a bit like DuckDuckGo, we're seemingly banned from their rankings, and not at #6 at all. It's funny when you look at their anti-spam approach and then look at the #1 site for 'seo services' on DDG. It's like a time machine back to the days of keyword link spam. Even more ironic is that we conform to DDG's definition of a good SEO:

“The ones who do in fact make web sites suck less, and apply some common sense to the problem, will make improvements in the search ranking if the site is badly done to start with. Things like meta data, semantical document structure, descriptive urls, and whole heap of other factors can affect your rankings significantly.

The ones who want to subscribe you to massive link farms, cloaked gateway pages, and other black hat type techniques are not worth it, and can hurt your rankings in the end.
Just remember, if it sounds too good to be true, is probably is. There are some good ones, and also a lot selling snake oil.”

We know the data isn't from Google either; we have the #1 seat for 'seo services' on Google and maintain that position regularly.

So what's going on?! This is the same company that gave us the 'Find People on Plus' tool, and clearly they know how to monetize a property.

My guess is that they are blending results from multiple search engines, and likely caching a lot of the data, so it'd be very hard to tell who's done the heavy lifting for them.

All that aside, it's rare to see a search engine that blatantly gives you numbered SERPs, and for now Million Short is showing numbered positions for keywords in the left sidebar. That's sort of handy, I guess. :)

You can also change how many results to remove, so if your search is landing you in the spam bucket, try removing fewer results. If your search always sucks, and the sites you want to see in the results are on the right, you've apparently found a search phrase that isn't spammed! Congrats!

Weak one: Google Drive

Well my enthusiasm for Google Drive just flew out the window on my second week using it.

UPDATE: Turns out the disk was full and Google Drive has no feedback at all. Thanks, Firefox, for telling me WHY the download failed. Oh man.

SEO news blog post by @ 11:01 am on May 1, 2012


 

Don’t drink the link bait..

[Image: Kool-Aid]
Thanks to the recent (March/April) Google updates, 'tread lightly' has never been better advice for anyone in the SEO industry.

Between the extra offers in my inbox to 'exchange links', 'sell links' and 'purchase links' that all seem to be coming from Gmail accounts, and reports of simple JavaScript causing pages to drop from Google's index, I'm about ready to dig a foxhole and hide in it.

First off, let's talk about how dumb it is to even offer to sell/buy/exchange links at this stage of Google's anti-spam efforts.

Even if the offer came from some part of the universe where blatantly spamming services over Gmail, of all things, was not the most painfully obvious way for a person who SHOULD be hiding every effort to get detected, it still doesn't bode well for the ethics of a company trying to sell you 'success' when it can't even afford its own mail account and has to use a free one.

Further, even if the offer came from someone magically smart enough to send out all that spam without it being tracked, any success simply adds your site to a group of sites 'cheating' the system. The more sites in the 'exchange', the more likely it is to get you caught and penalized. So, technically, any success to be had will also be your undoing.

Secondly, let's consider how you would try to catch people buying/selling links if you were Google. It's an invasion of privacy to snoop through someone's Gmail to see if they bought or sold links, but if Google sends you an email asking to purchase a link on your site, is that an invasion of privacy or just a really accurate way to locate the worst spam sites online? The same goes for selling a backlink to your site: just send out an email, wait for positive responses from verified site owners, and start demoting those sites. Talk about making it easy for Google.

Heck, as an SEO trying to do things the right way, if I get enough offers to buy/sell links from a particular spammer, wouldn't it be worth my time to submit a report to Google's quality team? The 'lack of wisdom' of these offers should be very obvious by now, but for some curious reason they still persist; perhaps they are all coming from those relentless Nigerian email scammers?

JavaScript?

The next issue is on-page JavaScript with questionable tactics. I know Google can't put a human in front of every page review, even if they actually do a LOT of human-based site review. So the safe assumption for now is that your site will be audited by 'bots' that have to make some pretty heavy decisions.

When a crawler bot comes across JavaScript, the typical response is to isolate and ignore the information inside the <script></script> tags. Google, however, seems to be adding JavaScript interpreters to their crawler bots in order to properly sort out what the script is doing to the web page.

Obviously, if a script is confusing the crawler, the most likely reaction is to not process the page for consideration in SERPs, and this appears to be what we're seeing a lot of recently, with people claiming they have been 'banished' from Google due to JavaScript that was previously ignored. We even ran some tests on our blog late in 2011 on JavaScript impact, and the results were similar to what I'm hearing from site owners right now in this last update.

So, the bottom line is to re-evaluate your pages and decide: is the JavaScript you've been using worth risking your rankings over?

If you are implementing JavaScript for appearance reasons, using something very common like jQuery, you probably have nothing to fear. Google endorses jQuery and even helps host an online version to make it easier to implement.
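
As a rough illustration, pulling jQuery from Google's hosted libraries is a single script tag along these lines (the version number below is just an example, not a recommendation):

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>

Since it's a common, widely cached script served from a Google domain, there's nothing in it for a crawler to be suspicious about.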

On the flip side, if you are using something obscure or custom, like a click-tracker/traffic script that is inserting links to known 'SEO' services, I'd remove it now to avoid any stray rounds from Google's anti-SEO flak-cannon.
[Image: Google Flak Cannon]

I did toss some Minecraft demo map videos on-line last night/this morning, but they didn’t turn out so swell for a bunch of reasons and I’m just going to re-record them with better software. Stay tuned!

SEO news blog post by @ 12:42 pm on March 22, 2012


 

Newest Panda Attacks Onsite Optimization

Google will be penalizing websites that overuse onsite optimization tactics. Matt Cutts of Google announced the new algorithm update during an SXSW panel discussion named "Dear Google & Bing: Help Me Rank Better!" alongside Danny Sullivan of Search Engine Land and Microsoft's Senior Product Marketing Manager for Bing.

[Image: Panda conspiracy]

Cutts revealed that over the last few months Google has been working on the new update, specifically designed to target sites that are "over-optimized" or "overly SEO'd."

This is the latest effort by Google to reduce the amount of webspam that still permeates the SERPs. Reminiscent of the Panda update, the new update is designed to target and penalize those that utilize black hat SEO tactics and try to manipulate Google's search results through less-than-savory optimization tactics.

Sites that keep to white hat SEO tactics apparently will have nothing to fear (fingers crossed). The new update is designed to address sites that focus only on SEO and not on delivering quality content.

In search results, Google wants to "level the playing field" regarding "all those people doing, for lack of a better word, over optimization or overly SEO–versus those making great content and great sites," Schwartz quotes Cutts as saying, in a rough transcription.

"We are trying to make GoogleBot smarter, make our relevance better, and we are also looking for those who abuse it, like too many keywords on a page, or exchange way too many links or go well beyond what you normally expect," the transcript continues.

The new update is expected to be implemented and to begin affecting search results in the next few weeks to a month, although Google had no official comment on the matter.

The Wall Street Journal reported earlier this week that Google is about to embark on the biggest-ever overhaul of its search system, one that involves semantic search as well as changes to search engine optimization, advertising, and page-ranking results.

SEO news blog post by @ 12:07 pm on March 19, 2012

Categories: Google

 

Panda 3.3: Building a Better Panda

This is a follow-up to my blog post from Monday, where I recapped the launch and implementation of the Google algorithm update affectionately known as "Panda." First released in February of 2011, Panda has been through several iterations and has had a profound effect on the quality of search results, webspam and SEO.

[Image: Panda Terminator]

Google confirmed on February 27th the release of the Panda 3.3 update, in conjunction with forty other search updates occurring in February or currently in progress. Although on the surface it seems very similar to Google's January release of Panda 3.2, which was described by Google as a "data refresh," Google describes this update as follows: "This launch refreshes data in the Panda system, making it more accurate and more sensitive to recent changes on the web."

In their blog post, Google states that they are retiring a link evaluation signal that has been employed for many years, an act that is going to cause some heated discussion around SEO water coolers everywhere. Google was reluctant to release too much information for fear of revealing details about its ranking signals.

Link evaluation. We often use characteristics of links to help us figure out the topic of a linked page. We have changed the way in which we evaluate links; in particular, we are turning off a method of link analysis that we used for several years. We often re-architect or turn off parts of our scoring in order to keep our system maintainable, clean and understandable.

Another part of the Panda 3.3 update will focus on local search rankings. Google revealed that traditional algorithmic ranking factors are now playing a larger part in triggering local search results.

Here are the released details of the Panda 3.3 algorithm update:

  • More coverage for related searches. [launch codename “Fuzhou”] This launch brings in a new data source to help generate the “Searches related to” section, increasing coverage significantly so the feature will appear for more queries. This section contains search queries that can help you refine what you’re searching for.
  • Tweak to categorizer for expanded sitelinks. [launch codename “Snippy”, project codename “Megasitelinks”] This improvement adjusts a signal we use to try and identify duplicate snippets. We were applying a categorizer that wasn’t performing well for our expanded sitelinks, so we’ve stopped applying the categorizer in those cases. The result is more relevant sitelinks.
  • Less duplication in expanded sitelinks. [launch codename “thanksgiving”, project codename “Megasitelinks”] We’ve adjusted signals to reduce duplication in the snippets for expanded sitelinks. Now we generate relevant snippets based more on the page content and less on the query.
  • More consistent thumbnail sizes on results page. We’ve adjusted the thumbnail size for most image content appearing on the results page, providing a more consistent experience across result types, and also across mobile and tablet. The new sizes apply to rich snippet results for recipes and applications, movie posters, shopping results, book results, news results and more.
  • More locally relevant predictions in YouTube. [project codename “Suggest”] We’ve improved the ranking for predictions in YouTube to provide more locally relevant queries. For example, for the query [lady gaga in ] performed on the US version of YouTube, we might predict [lady gaga in times square], but for the same search performed on the Indian version of YouTube, we might predict [lady gaga in India].
  • More accurate detection of official pages. [launch codename “WRE”] We’ve made an adjustment to how we detect official pages to make more accurate identifications. The result is that many pages that were previously misidentified as official will no longer be.
  • Refreshed per-URL country information. [Launch codename “longdew”, project codename “country-id data refresh”] We updated the country associations for URLs to use more recent data.
  • Expand the size of our images index in Universal Search. [launch codename “terra”, project codename “Images Universal”] We launched a change to expand the corpus of results for which we show images in Universal Search. This is especially helpful to give more relevant images on a larger set of searches.
  • Minor tuning of autocomplete policy algorithms. [project codename “Suggest”] We have a narrow set of policies for autocomplete for offensive and inappropriate terms. This improvement continues to refine the algorithms we use to implement these policies.
  • “Site:” query update [launch codename “Semicolon”, project codename “Dice”] This change improves the ranking for queries using the “site:” operator by increasing the diversity of results.
  • Improved detection for SafeSearch in Image Search. [launch codename "Michandro", project codename “SafeSearch”] This change improves our signals for detecting adult content in Image Search, aligning the signals more closely with the signals we use for our other search results.
  • Interval based history tracking for indexing. [project codename “Intervals”] This improvement changes the signals we use in document tracking algorithms. 
  • Improvements to foreign language synonyms. [launch codename “floating context synonyms”, project codename “Synonyms”] This change applies an improvement we previously launched for English to all other languages. The net impact is that you’ll more often find relevant pages that include synonyms for your query terms.
  • Disabling two old fresh query classifiers. [launch codename “Mango”, project codename “Freshness”] As search evolves and new signals and classifiers are applied to rank search results, sometimes old algorithms get outdated. This improvement disables two old classifiers related to query freshness.
  • More organized search results for Google Korea. [launch codename “smoothieking”, project codename “Sokoban4”] This significant improvement to search in Korea better organizes the search results into sections for news, blogs and homepages.
  • Fresher images. [launch codename “tumeric”] We’ve adjusted our signals for surfacing fresh images. Now we can more often surface fresh images when they appear on the web.
  • Update to the Google bar. [project codename “Kennedy”] We continue to iterate in our efforts to deliver a beautifully simple experience across Google products, and as part of that this month we made further adjustments to the Google bar. The biggest change is that we’ve replaced the drop-down Google menu in the November redesign with a consistent and expanded set of links running across the top of the page.
  • Adding three new languages to classifier related to error pages. [launch codename "PNI", project codename "Soft404"] We have signals designed to detect crypto 404 pages (also known as “soft 404s”), pages that return valid text to a browser but the text only contain error messages, such as “Page not found.” It’s rare that a user will be looking for such a page, so it’s important we be able to detect them. This change extends a particular classifier to Portuguese, Dutch and Italian.
  • Improvements to travel-related searches. [launch codename “nesehorn”] We’ve made improvements to triggering for a variety of flight-related search queries. These changes improve the user experience for our Flight Search feature with users getting more accurate flight results.
  • Data refresh for related searches signal. [launch codename “Chicago”, project codename “Related Search”] One of the many signals we look at to generate the “Searches related to” section is the queries users type in succession. If users very often search for [apple] right after [banana], that’s a sign the two might be related. This update refreshes the model we use to generate these refinements, leading to more relevant queries to try.
  • International launch of shopping rich snippets. [project codename “rich snippets”] Shopping rich snippets help you more quickly identify which sites are likely to have the most relevant product for your needs, highlighting product prices, availability, ratings and review counts. This month we expanded shopping rich snippets globally (they were previously only available in the US, Japan and Germany).
  • Improvements to Korean spelling. This launch improves spelling corrections when the user performs a Korean query in the wrong keyboard mode (also known as an “IME”, or input method editor). Specifically, this change helps users who mistakenly enter Hangul queries in Latin mode or vice-versa.
  • Improvements to freshness. [launch codename “iotfreshweb”, project codename “Freshness”] We’ve applied new signals which help us surface fresh content in our results even more quickly than before.
  • Web History in 20 new countries. With Web History, you can browse and search over your search history and webpages you’ve visited. You will also get personalized search results that are more relevant to you, based on what you’ve searched for and which sites you’ve visited in the past. In order to deliver more relevant and personalized search results, we’ve launched Web History in Malaysia, Pakistan, Philippines, Morocco, Belarus, Kazakhstan, Estonia, Kuwait, Iraq, Sri Lanka, Tunisia, Nigeria, Lebanon, Luxembourg, Bosnia and Herzegowina, Azerbaijan, Jamaica, Trinidad and Tobago, Republic of Moldova, and Ghana. Web History is turned on only for people who have a Google Account and previously enabled Web History.
  • Improved snippets for video channels. Some search results are links to channels with many different videos, whether on mtv.com, Hulu or YouTube. We’ve had a feature for a while now that displays snippets for these results including direct links to the videos in the channel, and this improvement increases quality and expands coverage of these rich “decorated” snippets. We’ve also made some improvements to our backends used to generate the snippets.
  • Improvements to ranking for local search results. [launch codename “Venice”] This improvement improves the triggering of Local Universal results by relying more on the ranking of our main search results as a signal. 
  • Improvements to English spell correction. [launch codename “Kamehameha”] This change improves spelling correction quality in English, especially for rare queries, by making one of our scoring functions more accurate.
  • Improvements to coverage of News Universal. [launch codename “final destination”] We’ve fixed a bug that caused News Universal results not to appear in cases when our testing indicates they’d be very useful.
  • Consolidation of signals for spiking topics. [launch codename “news deserving score”, project codename “Freshness”] We use a number of signals to detect when a new topic is spiking in popularity. This change consolidates some of the signals so we can rely on signals we can compute in realtime, rather than signals that need to be processed offline. This eliminates redundancy in our systems and helps to ensure we can continue to detect spiking topics as quickly as possible.
  • Better triggering for Turkish weather search feature. [launch codename “hava”] We’ve tuned the signals we use to decide when to present Turkish users with the weather search feature. The result is that we’re able to provide our users with the weather forecast right on the results page with more frequency and accuracy.
  • Visual refresh to account settings page. We completed a visual refresh of the account settings page, making the page more consistent with the rest of our constantly evolving design.
  • Panda update. This launch refreshes data in the Panda system, making it more accurate and more sensitive to recent changes on the web.
  • Link evaluation. We often use characteristics of links to help us figure out the topic of a linked page. We have changed the way in which we evaluate links; in particular, we are turning off a method of link analysis that we used for several years. We often rearchitect or turn off parts of our scoring in order to keep our system maintainable, clean and understandable.
  • SafeSearch update. We have updated how we deal with adult content, making it more accurate and robust. Now, irrelevant adult content is less likely to show up for many queries.
  • Spam update. In the process of investigating some potential spam, we found and fixed some weaknesses in our spam protections.
  • Improved local results. We launched a new system to find results from a user’s city more reliably. Now we’re better able to detect when both queries and documents are local to the user.

Other details of the updates and changes that Google has made recently can be found here:

SEO news blog post by @ 10:57 am on February 29, 2012


 

One Year After the Panda Attack

It has been just over one year since the Panda algorithm produced a prolific amount of pandemonium across the World Wide Web. I came across this great infographic, posted on Search Engine Land and created in conjunction with BlueGlass SEO, detailing how Panda works, what it impacted, and the stages of the updates implemented from Panda 1.0 through to Panda 3.2.

The Google Panda Update, One Year Later

Here are some of our past blog posts detailing the Panda Updates as they came out, along with strategies and tactics to counteract the effects of the Panda Algorithm updates.


SEO news blog post by @ 11:49 am on February 27, 2012


 

Are multiple rel=authors worth it?

Recently Google+ made it a lot easier for content authors to indicate the sites they publish to. Here’s a short clip showing that new easier process:
[Embedded video clip]

So that has now told Google+ that you are an author for the sites you've listed. It also adds a backlink from your Google+ profile page to your site.

At this point, once Google has parsed the changes and updated its caches, you'll start to see author credits on articles with the same name and email address. While the Google help docs 'suggest' you have a matching email address published with each post, it's clearly not a requirement.

So after this update you could start to see 'published by you' when doing searches on Google for posts you've made. But what's to stop anyone from claiming they wrote something online?

The other half of this process is creating a ‘rel=”author”‘ or ‘rel=”publisher”‘ link on the content pages on your blog/web site.

In the case of Beanstalk's blog, all posts get the same rel="publisher" link; it looks like this (you can see it in 'view-source'):

<link href="https://plus.google.com/115481870286209043075" rel="publisher" />

That makes Google see our blog posts as ‘published’ by our Google+ Profile, which is a bit ‘lazy’, but the process to add that code was very easy (and we blogged about it here) compared to the task of tagging each post with a custom link.
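
For comparison, tagging an individual post would mean giving each article its own author link along these lines (the profile ID below is just a placeholder, not a real profile):

<a href="https://plus.google.com/100000000000000000000" rel="author">Post Author</a>

That per-post link is what ties a specific article to a specific author profile, rather than crediting everything to the site's publisher page.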

The truth is that there has to be some ‘ranking signal’ for multiple authors, and there should be a quality/trust grade based on the profiles of the authors. So what is that ‘factor’ that ‘has’ to be hiding in the ranking code? Great question!

Since we've already spent some time with Google+ and a single author source, we intend to run some tests and prove out the value, or lack of it. Our plan is to report on both the difficulty of applying the right tags to the proper posts and the value of that effort. If anyone reading along has good suggestions for the test process, please drop us a comment via the main contact form.

Where’s Bart?

Chia Bart is retired for now. I need to find a decent webcam and then I’ll re-do him with some time-lapse for added thrills and joy. In the meantime we’re looking at offering the readers a really unique chance at interacting with the blog:

Each month we will be posting some macro images. Each one will be a challenge to figure out, and we'll take guesses on things like 'material', 'location', 'object', etc., and then rate the guesses based on how close they are. Technically, even a single guess like "The picture for week 2 looks like glass" could win!

The best guess will get recognition on our blog and we’ll announce the best guess each month on Twitter as well.

This is the macro image for week two of February. [Image: February Macro 2] If you think you know what this is, or where this is, send us your best guess via Twitter or G+.

SEO news blog post by @ 12:14 pm on February 9, 2012


 

Google’s Arnaque Carte

I was tempted to title this blog post ‘Why it’s cheaper to just block France‘ but that implied a lot and wasn’t as much fun as ‘Google’s Map Scam’ (translation).

Ever since the days of Google suggest, there has been some serious merde tossed at Google by French businesses and organizations. I think we all remember the famous French victories search?

Well, apparently that algorithm has caused quite a stir for the French, as they actually sued Google to get 'Arnaque' removed from the suggest results for 'cnfdi' because it looked bad! Seriously, the French are complaining that the suggest results are too 'honest', and they took legal action to have Google give a fake result for that suggest query.

Then again in late 2011, Google lost a French court case over a suggest result that added the French version of 'crook/swindler' to the end of a search string for a French insurance company. The sum of that settlement was almost $65,000 and again, the problem is that Google isn't censoring its information enough.

If that wasn't bad enough, this week a French mapping company that offers similar services to Google Maps won a settlement of over $660,000 against Google for providing its mapping services for free. Yep, once again, Google's too honest/generous and France wants justice!

From my personal perspective, if I were Google, I'd just give France the same treatment as China: set up some massive IP block restrictions, and then go get some freedom fries with my spare time and money.


$7.50 for a tub of fries? Sweet!

SEO news blog post by @ 2:28 pm on February 2, 2012


 

Focus on the profit

In the first minute of the official 'hard hitting' video called 'Focus on the user', they stab at the heart of the Google+ social search issue:

  • They do a search for 'cooking'
  • Then they click on the most relevant cooking result within Google+
  • Afterwards they compare that with a search for 'Jamie Oliver' and complain:

'cooking' isn't very relevant to the latest info from 'Jamie Oliver'

[Image: Twitter and Facebook whimper about Google+ social search]

Don’t believe me that they did this? Go watch it again, they actually want us to feel outrage that ‘cooking’ doesn’t link us to the most relevant info for ‘Jamie Oliver’.

The authors of the plugin fully admit that they are getting the results info from Google itself; they just don't want to say the words "Google is simply showcasing its services." Instead, they want to make it out to be a matter of 'evil' and 'holding back'. If they didn't, at multiple points in the video, slip up and show how you can still get the top results without using their plugin, I'd say they had a case.

As much as there is to roll my eyes at, from an SEO standpoint everything about focusontheuser.org is brilliant. The backlinks must be pouring in, and I saw a very clever 'click here to get your results to show' link in the video that could be a real profit mill for them (their bookmarklet's broken right now or I'd investigate).

Don't get me wrong, I know this scripting project was backed by Facebook, Twitter, and MySpace (it's still going), so it already had some deep pockets, but in my opinion it looks like the devs had some deeper 'evil' ideas.

While we are still on the 'Google+ Social is Evil' topic, the changes to support nicknames, pseudonyms, and maiden names are apparently done, and now you can socialize however you wish on Google+. A more 'evil' company would have stuck to the original, and far more profitable, design which required valid names and a serious commitment to privacy.

To read more about the new Google+ naming policy put out on Monday just hop on over to Bradley Horowitz’s Google+ page.

I know this is the part where I slap up a picture of Chia Bart’s amazing growth and progress.. but someone decided to help him out and drain his water tray so he’s really wilted right now and I’m trying to get some life back into him. Perhaps I’ll do an update after lunch if he perks up? :)

Bart sprang back a fair bit, had to zoom to see the wilt!

SEO news blog post by @ 11:23 am on January 24, 2012


 

Surviving the SOPA Blackout

Tomorrow, January 18th, is SOPA blackout day, and lots of very popular sites are committing to participate in the blackout.
[Image: SOPA Blackout cartoon]
How can web companies, such as SEOs, and supporters (like us) maintain workflow in the midst of a major blackout?

We’ve got some tips!

I need to find things mid-blackout!

While some sites will be partially blacked out, a lot of the larger sites will be completely offline in terms of content for maximum effect.

This means that during the blackout folks will have to turn to caches to find information on the blacked out sites.

If Google and the Internet Archive both stay online during the blackout, you can use them to get cached copies of most sites.

If you’re not sure how you’d still find the information on Google, here’s a short video created by our CEO Dave Davies to help you along. :)

I want to participate without killing my SEO campaign!

If all your backlinks suddenly don't work, or they all 301 to the same page for a day, how will that affect your rankings?

Major sites get crawled constantly; even 30 minutes of downtime could get noticed by crawlers.

A smaller site that gets crawled once a week would have a very low risk doing a blackout for the daytime hours of the 18th.

Further to that, you could also look at user agent detection to sort out people from crawlers, only blacking out the human traffic.
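
A minimal sketch of that idea, done client-side, might look like the following inline script placed near the end of the page (the bot list and message are purely illustrative, and crawlers that don't execute JavaScript would never see the overlay anyway):

<script type="text/javascript">
// Illustrative only: skip the blackout for anything that looks like a crawler
var bots = /googlebot|bingbot|slurp|baiduspider/i;
if (!bots.test(navigator.userAgent)) {
  var overlay = document.createElement('div');
  overlay.style.cssText = 'position:fixed;top:0;left:0;width:100%;height:100%;' +
    'background:#000;color:#fff;z-index:9999;text-align:center;padding-top:20%;';
  overlay.innerHTML = 'Blacked out today in protest of SOPA. Click anywhere to continue.';
  // One click dismisses the overlay, mimicking the sopablackout.org behaviour
  overlay.onclick = function () { document.body.removeChild(overlay); };
  document.body.appendChild(overlay);
}
</script>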

If that seems rather complex, there are two automated solutions already offered:

    • sopablackout.org is offering a JS file you can include that will black out visitors to the site and then let them click anywhere to continue.
      Simply putting this code in a main include (like a header or banner) will do the trick:
      <script type="text/javascript" src="//js.sopablackout.org/sopablackout.js"></script>

 

  • Get a SOPA plugin for your WordPress and participate without shutting down your site. It simply invokes the above JavaScript on the 18th automagically, so that visitors get the message and can then continue on to the blog.

I'd be a rotten SEO if I suggested you install an external JavaScript without also clearly telling folks to REMOVE it when you are done. It might be a bit paranoid, but I live by the better-safe-than-sorry rule. Plus, just because you are paranoid, it doesn't mean people aren't trying to track your visitors. :)

How’s Chia Bart doing? .. Well I think he’s having a mid-life crisis right now because he looks more like the Hulkster than Bart?

[Image: Chia Bart number 5 – Pastamania!]
To all my little Bartmaniacs, drink your water, get lots of sunlight, and you will never go wrong!

SEO news blog post by @ 11:28 am on January 17, 2012


 
