TECHNOlogy: What is AJAX? Baby Don’t Hurt Me!

Wikipedia defines AJAX (Asynchronous JavaScript And XML) as:

A group of interrelated web development techniques used on the client-side to create asynchronous web applications.

What a mind-numbing description! What you need to know is that AJAX is the combination of several technologies to make better web pages.

If you have no interest in making websites but you like techno music, or you’re curious why I picked that title, this is for you:

This is a good soundtrack for this post. You should hit play and keep reading.

After a bit of time with HTML/CSS I started to build a growing list of issues that I couldn’t solve without some scripting.

I learned some PHP, which wasn’t tricky because it uses very common concepts. Here’s the traditional ‘hello world’ example in PHP:

<?php echo 'Hello World'; ?> = Hello World

.. and if I wanted to be a bit more dynamic:

<?php echo 'Hello World it is '.date('Y'); ?> = Hello World it is 2012

Because PHP runs only when the page is requested, and only on the server side, it's only the server that loads and understands PHP; the browser does nothing with it.

With PHP code only seen by the server, it’s a very safe way to make your pages more intelligent without giving Google or other search engines a reason to be suspicious of your site.

In fact one of the most common applications of PHP for an SEO is something as simple as keeping your Copyright date current:

<?php echo 'Copyright© 2004-'.date('Y'); ?> = Copyright© 2004-2012

Plus, when I need to store or fetch information, PHP alone isn't that easy, so I added MySQL to the mix and suddenly my data nightmares are all data dreams and fairy tales (well, almost). I won't dive into MySQL on top of everything here, but let's just say that when you have a ton of data you want easy access to it, and most 'flat' formats are far from the ease of MySQL.

But I still had a long list of things I couldn’t do that I knew I should be able to do.

The biggest problem I had was that all my pages had to ‘post’ something, figure out what I’d posted, and then re-load the page with updated information based on what was posted.

Picture playing a game of chess where you are drawing the board with pen and paper. Each move would be a fresh sheet of paper with the moved piece drawn over a different square.

PHP can get the job done, but it’s not a very smart way to proceed when you want to make an update to the current page vs. re-drawing the whole page.

So I learned some JavaScript, starting with the basic ‘hello world’ example:
<span onClick="alert('Hello World');">Click</span>

[Screenshot: 'Hello World' JavaScript alert box]
If I wanted to see the date I’d have to add some more JavaScript:
<script language="javascript">
function helloworld()
{
  var d = new Date();
  alert('Hello World it is ' + d.getFullYear());
}
</script>

<span onClick="helloworld();">Click</span>

[Screenshot: 'Hello World it is 2012' alert box]
JavaScript is ONLY run in the browser; the server has no involvement, so the example above won't always work as expected because it's telling you the date on your computer, not on the server. How would we get the server's date?

This is where AJAX comes into play. If we can tell the browser to invisibly fetch a page from a server and process the information that comes back, then we can combine the abilities of JavaScript, PHP, and MySQL.

Let's do the 'hello world' example with AJAX using the examples above.

First you would create the PHP file that does the server work as something witty like ‘ajax-helloworld.php’:
<?php echo 'Hello World it is '.date('Y'); ?>

..next you’d create an AJAX function inside the web page you are working on:
<script language="javascript">
function helloworld()
{
  var ajaxData; // Initialize the 'ajaxData' variable then try to set it to hold the request (on error, assume IE)
  try {
    // Opera 8.0+, Firefox, Safari
    ajaxData = new XMLHttpRequest();
  } catch (e) {
    // Internet Explorer browsers
    try {
      ajaxData = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        ajaxData = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (e) {
        // Something went wrong
        alert("Your browser broke!");
        return false;
      }
    }
  }
  // Create a function that will receive data sent from the server
  ajaxData.onreadystatechange = function() {
    if (ajaxData.readyState == 4) {
      alert(ajaxData.responseText);
    }
  }
  ajaxData.open("GET", "ajax-helloworld.php", true);
  ajaxData.send();
}
</script>

Only the request URL ('ajax-helloworld.php') and the response handling are customized; the rest of the function is a well-established method of running an AJAX request that you should not need to edit.

So we have a function that loads the 'ajax-helloworld.php' page we made and then shows an alert with the page's output. All we have to do is put something on the page to call the function, like that span example with the onClick="helloworld();" attribute.
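
To make the "is the request finished?" logic easier to see, here's that readyState check pulled out as a plain function (my sketch, not part of the original example). One caveat worth knowing: the function above only checks readyState, so a 404 or 500 error page would also get alerted; checking the status code as well avoids that.

```javascript
// readyState 4 means the request has finished; status 200 means the
// server answered with a real page rather than an error.
function requestSucceeded(readyState, status) {
  return readyState === 4 && status === 200;
}

console.log(requestSucceeded(4, 200)); // true: finished and OK
console.log(requestSucceeded(4, 404)); // false: finished, but not found
console.log(requestSucceeded(3, 200)); // false: still loading
```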

Well that’s all neat but what about the ‘X’ in AJAX?

XML is a great thing because it’s a language that helps us with extensible mark-up of our data.

In other words XML is like a segregated serving dish for pickled food that keeps the olives from mixing with the beets.

Going back to our ‘hello world’ example we could look at the ‘date data’ and the ‘message data’ as objects:
<XML>
<message>Hello World it is</message>
<date>2012</date>
</XML>

Now, when the AJAX loads our ‘ajax-helloworld.php’ and gets an XML response we can tell what part of the response is the date, and which part is the message. If we made a new page that just needs to display the server’s date, we could re-use our example and only look at the ‘date’ object.
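
In the browser you would normally read the parsed document from ajaxData.responseXML and call getElementsByTagName to pull out the 'date' or 'message' node. As a rough, runnable sketch of the idea (a toy string extractor for illustration only, not how production code should parse XML):

```javascript
// Toy helper that pulls the text out of one tag, to show how the XML
// reply separates the 'date data' from the 'message data'.
function getTagText(xml, tag) {
  var open = '<' + tag + '>';
  var close = '</' + tag + '>';
  var start = xml.indexOf(open);
  if (start === -1) return null;
  start += open.length;
  var end = xml.indexOf(close, start);
  if (end === -1) return null;
  return xml.substring(start, end);
}

var xmlReply = '<XML><message>Hello World it is</message><date>2012</date></XML>';
console.log(getTagText(xmlReply, 'date'));    // "2012"
console.log(getTagText(xmlReply, 'message')); // "Hello World it is"
```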

For some odd reason, most coders like JSON a lot, which makes it really common to see AJAX using JSON instead of XML to package a data response. Here's our XML example as a JSON string:
{"message":"Hello World it is","date":"2012"}

Not only is JSON really easy to read; because JavaScript and PHP both understand JSON encoding, it's really easy to upgrade our 'hello world' XML example to the JSON format.

Here’s the new PHP command file ‘ajax-helloworld.php’:
<?php
$response = array("message" => "Hello World it is", "date" => date('Y'));
echo json_encode($response);
?>

The output of our AJAX PHP file will now be the same as the JSON example string. All we have to do is tell JavaScript to decode the response.

If you look back at this line from the AJAX JavaScript function example above:

if(ajaxData.readyState == 4){
alert(ajaxData.responseText);
}

This is where we’re handling the response from the AJAX request. So this is where we want to decode the response:

if(ajaxData.readyState == 4){
var reply = JSON.parse(ajaxData.responseText);
alert('The message is : ' + reply.message + ' and the date is : ' + reply.date);
}

Now we are asking for data, getting it back as objects, and updating the page with the response data objects.
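
As a quick sketch you can run outside the browser, here's that same JSON reply being decoded and re-encoded. JSON.parse turns the string our PHP file echoed into an object, and JSON.stringify goes the other way:

```javascript
// The string our 'ajax-helloworld.php' file would echo back:
var json = '{"message":"Hello World it is","date":"2012"}';

// Decode it into an object with named fields...
var reply = JSON.parse(json);
console.log(reply.message + ' ' + reply.date); // "Hello World it is 2012"

// ...and encoding it again gets us back exactly what we started with.
console.log(JSON.stringify(reply) === json);   // true
```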

If this example opened some doors for your website needs, you really should continue learning. While the web is full of examples like this, from personal experience I can honestly tell you that without a solid lesson plan you'll find yourself constantly trying to bridge knowledge gaps.

Educational sites like LearnQuest have excellent tutorials and lessons on AJAX and JavaScript, including advanced topics like external AJAX with sites like Google and Yahoo. Plus, LearnQuest also has jQuery tutorials that will help you tap into advanced JavaScript functionality without getting your hands dirty.

*Savvy readers will note that I gave PHP my blessings for SEO uses but said nothing of JavaScript’s impact on crawlers/search engines.

Kyle recently posted an article on GoogleBot’s handling of AJAX/JavaScript which digs into that topic a bit more.

With any luck I'll get some time soon to share a gem of JavaScripting that allows you to completely sculpt your PageRank and trust flow in a completely non-organic way. The concept would please search engines, but at the same time cannot be viewed as 'white hat' no matter how well it works.

SEO news blog post by @ 11:19 am on June 14, 2012


 

Microsoft sues Google: Rankings on Google are too crucial!

Microsoft knows the pains of anti-trust lawsuits, million dollar fines, and the expensive nature of dividing up a business so it doesn’t look like a monopoly.
[Image: breaking up the monopoly]
So it’s no shock that one of the biggest weapons in Microsoft’s war chest is a handful of small companies that can claim Google services have stymied their opportunities to succeed.

According to this “Google treads carefully in European antitrust case” article posted yesterday in Canada.com, companies with direct links to Microsoft are suing because they cannot compete in EU markets without ranking well on Google:

Google’s competition includes Microsoft but is mostly small, specialist Internet services which argue the Silicon Valley giant is ensuring their names come low or don’t even figure in searches. In Europe, 80 per cent of Web searches are run on Google, according to the most recent figures by comScore, compared with 67 per cent in the United States. Its opponents say that means Google, which makes its money by advertising sales, can make or break a business by its ranking.

… followed by:

Moreover, Google says the small companies claiming to be its victims are linked to Microsoft. The third original complainant, Ciao.de, is a German travel search site owned by Microsoft. Several are also members of I-comp, whose most prominent member is Microsoft, and which produces position papers on subjects such as web market concentration. I-comp lawyer Wood acknowledges the organization is not independent, but says “our palette is much broader than Microsoft’s.”
 
The scary truth is that if actions like this are successful we would have to reorganize or dismantle all companies like Google that offer free services which prevent smaller companies from selling the same services.

Typically such a thing would never happen here in North America, since due diligence requires proof of consumer harm, not just harm to the competition.

No matter how you look at it, Google is the opposite of consumer harm, but in the EU courts this may not matter.

Once Google loses in EU courts it will be ‘game-on’ for all other countries to dog-pile on the remains of Google, allowing greed to kill off one of the best things that’s ever happened to us.

Looking at the history of humanity and greed vs. virtue, we should have seen this coming?

In my opinion it is as if Microsoft woke up one morning, looked into their magical mirror to reflect on how beautiful they are, and came to realize that some poison apples need to be handed out post-haste.

Speaking of humanity vs. greed, I MUST comment on this whole FunnyJunk vs. Oatmeal ‘fiasco’.

Either this is some brilliant promotional scheme or the owners of FunnyJunk painted a bullseye on their own foot. I am really not sure which one, but man is it sad.

Give it a read if you really want to be shocked at how low a business can stoop to make a profit from artists and the community.

It’s also refreshing to see The Oatmeal prove they could shut down FunnyJunk, but instead they used the $20,000 they raised in 64 minutes to fund cancer research and support the World Wildlife Fund.

SEO news blog post by @ 11:08 am on June 12, 2012


 

Google Advisor: Where have you been all my life?

Admittedly, when I read the announcement that Google Advisor was here to help me manage my money, my first thoughts were about privacy and the last bastion of private information Google hasn’t touched yet: banking.

[Image: gloved hand reaching for banking and credit info]

Being wrong never felt so good!

Google Advisor is not (at the moment) a way to suck more private information from you, it’s actually more of a consulting service for comparing bank accounts, credit cards, certificates of deposit, and more.

[Screenshot: Google Advisor]

As someone who’s set up review sites for various services and offerings, I can tell you how handy and popular it is to break down competing services so the consumer can select something that meets their exact needs.

Google Advisor claims that the information it’s showing is based on my data, but a 0% intro rate on transfers for 18 months? If that’s really available to me, I’m going to have to send Google some chocolates.

Google bought QuickOffice

[Image: QuickOffice logo]

Google bought the mobile office suite ‘QuickOffice’, which allows app-level access to office documents on mobile devices running Android/iOS/Symbian.

This move seems redundant given that Google’s ‘Docs’ suite offers even more connectivity to your documents/spreadsheets/presentations, but Docs is just a cloud service, not an ‘App’, and an ‘App’ gives you more offline control of your work.

Plus you can’t argue with the users, they want ‘Apps’ and will pay for them.

Google bought Meebo

[Image: Meebo logo]

I’m not sure if this was related to Yahoo’s ‘Axis’ bar plugin that came and went with zero fanfare, but it’s an interesting purchase for SEO interests.

Meebo is a handy social media tool with some great options for ad placement and on-line marketing. SEOs not already dabbling with the tool should take a look, like yesterday.

If you’ve been managing your Twitter, Google+, Facebook, etc.., profiles without a management tool, aggregation sites like Meebo are really what you’ve been missing out on.

We know that Google owned properties have more relevance and trust on the web than similar services/products. After all, if you can’t trust yourself, who can you trust?

So if you were using some other social aggregation tool, and were doing it solely for SEO awareness, you can safely assume it’s worth the effort to try out Meebo for a potentially improved result/relevance from your efforts.

We will be doing some testing (as we always do) and will blog about our results to further expand on what the service offers over others. This may even warrant an article or two?

SEO news blog post by @ 12:42 pm on June 5, 2012


 

GoogleBot Now Indexing JavaScript, AJAX & CSS

[Image: GoogleBot]

Improving the way that GoogleBot parses and interprets content on the web has always been integral to the Google mandate. It now seems that GoogleBot has reverently been bestowed the ability to parse JavaScript, AJAX and Cascading Style Sheets.

In the past, developers avoided using JavaScript to deliver content or links to content because of the difficulty GoogleBot had correctly indexing that dynamic content. Over the years it has become so good at the task that Google is now asking us to allow GoogleBot to scan the JavaScript used in our websites.

Google did not release specific details of how or what GoogleBot does with the JavaScript code it finds, fearing the knowledge would quickly be incorporated into BlackHat tactics designed to game Search Engine Results Pages (SERPs). The media attention stems from a recent blog post on Tumblr showing server logs in which the bot was accessing JavaScript files.

The ability for the GoogleBot to successfully download and parse dynamic content is a huge leap forward in the indexing of the web and stands to cause many fluctuations in rankings as sites are re-crawled and re-indexed with this dynamic content now factored in to the page’s content.

Previously, Google attempted to get developers to standardize the way dynamic content is handled so that it could be crawled, but the proposal (https://developers.google.com/webmasters/ajax-crawling/) has been more or less ignored.
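
For the curious, the heart of that proposal was the '#!' (hashbang) convention: when a crawler sees a URL like example.com/#!page=2, it requests example.com/?_escaped_fragment_=page=2 instead, and the server returns a crawlable snapshot of the dynamic page. Here's a small sketch of that URL rewrite (my illustration of the scheme, not Google's code):

```javascript
// Rewrite a hashbang URL into its _escaped_fragment_ form, the way a
// crawler following the AJAX crawling proposal would before fetching it.
function escapedFragmentUrl(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url; // no hashbang: nothing to rewrite
  var base = url.substring(0, i);
  var fragment = url.substring(i + 2);
  var joiner = base.indexOf('?') === -1 ? '?' : '&';
  return base + joiner + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(escapedFragmentUrl('http://example.com/#!page=2'));
// "http://example.com/?_escaped_fragment_=page%3D2"
```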

GoogleBot has to download the JavaScript and execute it on the Google servers that run it, leading some to conclude that it may be possible to use the Google cloud to compute data at a large scale.

SEO news blog post by @ 11:22 am on May 28, 2012

Categories: Coding, Google

 

Yahoo Axis – What the Flock?

I had a friend working on the Flock browser team right until it lost momentum and became clear that it was too much, too soon…

[Image: Amy Winehouse - Too soon?]

Here we go again with a new ‘all-in-one’ web browser concept, this time from a very big name?

**Update: Turns out that the leaks were really just rumors. This hype mill is a ‘googol‘ times more intense than it should be considering this is ‘just a plugin’ (unless you count Apple devices).

 

[Image: Paul Rudd doing a double take - "Yahoo..? New?!?"]

Microsoft powers Yahoo’s search, right? So if Yahoo is releasing a new browser plus a suite of browser plugins for people who refuse to switch browsers, what’s going on?

Well, apparently giving people the option to ‘choose’ MSN/Bing/Yahoo wasn’t working out so well. Now you can run a browser or a plugin that removes that annoying hassle of choosing whose search services you are using.

Y’know how Firefox and Chrome let you sign in to your browser so you can seamlessly move from one location to the next? Yeah, Axis is going to break ground and re-invent the web by also doing that same thing.

Y’know how Google is showing you previews of the sites you’re considering visiting within the search results? Yep Axis will finally let you do that, again.

Is this even a new browser or just IE9 with some ‘fluff’ and Yahoo branding? Tonight we will get a chance to try it hands-on and find out, but for now we have a few videos we can watch over on Yahoo Video.

One of the points my Economics teacher used to hammer home is to view each promotion as the promoter relating to their target audience.

If you have a good product with a smart client base, you can sell your product by focusing on real traits and strengths. Just demonstrate the product and avoid all pointless elements that distract the consumer from your product information.

Enjoy those videos and the clever/unique symbolism that hasn’t been copied too many times since Apple used it in 1984. :)

Does this mean Bing/Yahoo rankings will be important?

Whoever said they weren’t important? Okay, expert opinions aside, you should never burn the Bing bridge, especially not with cell phones that default to Bing and new versions of Windows that also default to Bing.

It’s never wise to put all your eggs in one basket, and this is true of search engine placement/rankings as well as eggs.

Even if Yahoo Axis only manages a week of public attention, that’s one week of people around the planet searching Bing for a change.

If you rank really well on Google, we’re not going to suggest you intentionally tank your rankings for a short-term gain on Bing. The cost of recovering from such a move would probably be far more than simply paying for some pay-per-click coverage via Microsoft’s AdCenter.

There’s already folks worried about ‘Yahoo’ impressions vs. Bing impressions and the following advice has been posted in the AdCenter help forum:

1) You are currently bidding on broad match only, add phrase and exact match to your bidding structure.
2) Look at keywords with low quality score and optimize for those specifically.
3) Install the MAI tool and check on expected traffic for adCenter, you can also see what average bids are for specific positions.

Only 7 Days Left!


 

Talk about old news? I mentioned this just 2 days ago?!

We still have 7 days left in our Beanstalk Minecraft Map Competition! Check it out and even if you’re not entering, please let others know it’s coming to a close and we need all submissions by the 31st!

SEO news blog post by @ 10:03 am on May 24, 2012


 

FB stock drops as SpaceX soars to success!

There were so many interesting technology/internet developments between Friday and today that I can’t really pick which one to focus on.

Sliding FB stock prices, Google finally taking over what was the mobility division of Motorola, SpaceX reaching the ISS, Wiki-leaks’ social media platform, the Google Knowledge Graph… and more!

If we looked at them from an SEO standpoint I would still struggle to pick the most interesting story, but it’s a great way to dive in, so let’s take a look at the weekend’s headlines from an SEO aspect.

Facepalm – FB IPO = Uh Oh

 
Dave nailed this one really well on Friday in this post:
Facebook IPO vs Ford (real world) Valuation Comparison

The image of money flushing down the toilet was very ‘apt’ since that’s exactly where I see the stock price going:
https://www.google.ca/finance?q=NASDAQ%3AFB

The current ‘low’ appears to be $31/share at the moment, with the price currently dancing around $32.50/share as I write this.

Google Mobility

Google already makes some cool hardware for their servers and other projects, but most people I know wouldn’t think of them as a manufacturer.

And yet here we are today, watching history unfold, as the mobile division of one of the world’s best handset manufacturers changes hands to the company at the head of the Android software alliance.

Google does a lot of things for free, even at a loss, because they see value in things that others would squander and ignore. Now that they have a hardware division to support this bad habit things are going to get very interesting.

We already know from looking through project glass’s details that Google will be needing a very skilled manufacturer with assets in micro mobility and wireless. HTC has always been very willing to participate with Google’s projects, but they are a vastly successful hardware manufacturer with no visible brand loyalty.

I personally ran Android on an HTC Windows Mobile phone, so why couldn’t I run Windows Mobile on a Google-subsidized Android HTC phone? I probably could, which is why it’d be very silly for Google to subsidize HTC hardware.

If Google can produce the hardware and find ways to keep 90%+ of the owners using Google services, it’s a much safer bet, and it appears to be exactly what they are doing. Heck, if they make the hardware they might not even care what OS you use, as long as they are allowed to sniff the data and still learn about their users.

The only part of the puzzle that’s missing is deployment of Google owned, Motorola equipped, cell-towers so that Google can offer hardware, software, and services on their terms, in a model that makes sense to them, which would likely mean no caps on network use for Google products?

Yeah I could be dreaming but if I was a competitive cellular provider I’d be strongly considering opening my arms to Google before it’s an arms race against Google. ;)

Google Knowledge Graph

While the bearing this news item has on SEO is debatable, the feature itself is incredibly handy and something Google has the unique opportunity to provide.

By taking key points of knowledge and building some hard links relating that knowledge to other data points, Google has developed a Wikipedia of its own design.

Knowing the struggles Wikipedia has faced with moderation and updating content, it’s anyone’s guess how Google will maintain its Knowledge Graph without someone manipulating the results, but kudos to Google for trying?

Right now the coverage on this is going to be all the same because the content in Google KG is still being built up, but you can expect further discussion as the service grows.

FoWL – Wiki-Leaks’ Social Media Service

Since this service claims to be private and encrypted, it would be very foul of me to really spend much of your time discussing it.

As it can’t be officially crawled by Google it’s probably going to have a very low effect on SEO and rankings in general. The only real bearing I could see it having is using it as a traffic tool for sites that are in-line with the Wiki-leaks mantra of public information. So if you can pretend that your services are so good the FBI doesn’t want you talking about them..??

SpaceX reaches ISS

This isn’t search engine related at all. I suppose you could point to the success of Google vs. government run indexes, and then point to the success of SpaceX vs. NASA with a bunch of startling similarities, but that’s some serious reaching.

At the same time, posting this on the same day the first private effort has docked with the International Space Station? I am obligated as a nerd to at least tuck this into the tail of the post. It’s pretty cool!

9 Days Left!

 

 

We still have 9 days left in our Beanstalk Minecraft Map Competition! Check it out and even if you’re not entering, please let others know it’s coming to a close and we need all submissions by the 31st!

Good Luck! :)

SEO news blog post by @ 12:01 pm on May 22, 2012


 

SEOmoz SPAM Outing

In the recent wake of the Penguin update from Google and the impact it has had on many sites, Rand Fishkin, CEO of SEOmoz, announced on his Google+ page that SEOmoz is currently developing tools to facilitate the "classifying, identifying and removing/limiting link juice passed from sites/pages."

[Image: Feathers McGraw]

SEOmoz wants to add software to the existing toolset available to subscribers on their website, to help determine whether their own website or a competitor’s appears to be spammy in nature.

If SEOmoz has developed a method to analyze signals that determine whether a site is spammy, it is safe to assume that Google is viewing the page or site in question in the same light. Links that are determined to be spammy will pass little link juice and could potentially incur a penalty from Google. Fishkin summed up the process by saying that if SEOmoz classifies a site or page as having spammy backlinks, “we’re pretty sure Google would call it webspam.”
Some in the SEO community are angered at Rand Fishkin’s policy of “outing” SEOs for spamming practices, so this time, Rand has enlisted the public to answer whether or not he should do so.

Some of our team members, though, do have concerns about whether SEOs will be angry that we’re “exposing” spam. My feeling is that it’s better to have the knowledge out there (and that anything we can catch, Google/Bing can surely better catch and discount) then to keep it hidden. I’m also hopeful this can help a lot of marketers who are trying to decide whether to acquire certain links or who have to dig themselves out of a penalty (or reverse what might have caused it).


Preliminary results show that most are in favor of Rand’s reporting of other SEOs for spammy practices. Certainly the reporting of offenders will help Google to combat the unwanted webspam that has permeated search results since the inception of the Internet into mainstream society. It is the new mantra of the modern web; you need to follow the rules and guidelines established by Google for fear of serious reprisal – whether or not you agree with it. Ultimately, what benefits the search results, benefits the searcher.

On a slightly related note, I would like to suggest Feathers McGraw as the new face of the Penguin algorithm update from Google…

SEO news blog post by @ 10:49 am on May 9, 2012

Categories: Google, Rankings

 

Search Engine Experiment in Spam Surfing

If you took a very heavily spam-influenced search engine like Bing for example and removed the first 1 million results for a query, how good would the result be?

How about doing the same thing to the best filtered search engines available?

Well, someone got curious and made the Million Short search engine.

What this new service does is remove a specific # of search results and show you the remainder.
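
The core idea is trivial to sketch: drop the first N results and show the rest (a toy illustration of the concept, not Million Short's actual code):

```javascript
// Given a ranked list of results, skip the first n and return the rest.
function millionShort(results, n) {
  return results.slice(n);
}

// With n = 2, the top two results vanish and the 'long tail' remains.
console.log(millionShort(['a', 'b', 'c', 'd'], 2).join(', ')); // "c, d"
```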

I became immediately curious about a few things:

  • Where are they getting their crawl data from?
  • What are they doing to searches where there’s only a few hundred results?
  • Where is the revenue stream? I see no ads?

Given the lack of advertising I was expecting them to be pulling search data from another site?

There’s no way they are pulling from Bing/Yahoo, there are 14+ sites paying for better spots than we’ve earned on Bing for our terms..

And while the top 10 list looks a bit like DuckDuckGo, we’re seemingly banned from their rankings, and not at #6 at all. It’s funny when you look at their anti-spam approach and then look at the #1 site for ‘seo services’ on DDG. It’s like a time machine back to the days of keyword link spam. Even more ironic is that we conform to DDG’s definition of a good SEO:

“The ones who do in fact make web sites suck less, and apply some common sense to the problem, will make improvements in the search ranking if the site is badly done to start with. Things like meta data, semantical document structure, descriptive urls, and whole heap of other factors can affect your rankings significantly.

The ones who want to subscribe you to massive link farms, cloaked gateway pages, and other black hat type techniques are not worth it, and can hurt your rankings in the end.
Just remember, if it sounds too good to be true, it probably is. There are some good ones, and also a lot selling snake oil.”

We know the data isn’t from Google either, we have the #1 seat for ‘seo services’ on Google and maintain that position regularly.

So what’s going on?! This is the same company that gave us the ‘Find People on Plus‘ tool and clearly they know how to monetize a property?

My guess is that they are blending results from multiple search engines, and likely caching a lot of the data so it’d be very hard to tell who’s done the heavy lifting for them?

All that aside, it’s rare to see a search engine that blatantly gives you numbered SERPs and for now MillionShort is, on the left side-bar, showing numbered positions for keywords. That’s sort of handy I guess. :)

You can also change how many results to remove, so if your search is landing you in the spam bucket, try removing fewer results. If your search always sucks, and the sites you want to see in the results are on the right, you’ve apparently found a search phrase that isn’t spammed! Congrats!

Weak one: Google Drive

Well my enthusiasm for Google Drive just flew out the window on my second week using it.

UPDATE: Turns out the disk was full and Google Drive has no feedback at all. Thanks FireFox for telling me WHY the download failed. Oh man.

SEO news blog post by @ 11:01 am on May 1, 2012


 

Don’t drink the link bait..

[Image: Kool-Aid]
Thanks to the recent (April/March) Google updates, ‘tread lightly’ has never been better advice to anyone in the SEO industry.

Between extra offers in my inbox to ‘exchange links’, ‘sell links’, and ‘purchase links’, all seemingly coming from GMail accounts, and reports of simple JavaScript causing pages to drop from Google’s index, I’m about ready to dig a fox hole and hide in it.

First off, let’s talk about how dumb it is to even offer to sell/buy/exchange links at this stage of Google’s anti-spam efforts.

Even if the offer came from some part of the universe where blatantly spamming services over GMail, of all things, was not the most painfully obvious way to get detected for someone who SHOULD be hiding every effort, it still doesn’t bode well for the ethics of a company selling you ‘success’ when they can’t even afford their own mail account and have to use a free one.

Further, if the offer came from someone magically smart enough to send out all that spam without it being tracked, then any success they have simply adds your site to a group of sites ‘cheating’ the system. The more sites in the ‘exchange’, the more likely it is to get you caught and penalized. So technically, any success there is to be had will also be your successful undoing.

Secondly, let’s consider how you would catch people buying and selling links if you were Google. It’s an invasion of privacy to snoop through someone’s GMail to see if they bought or sold links, but if Google sends you an email asking to purchase a link on your site, is that an invasion of privacy or just a really accurate way to locate the worst spam sites on-line? The same goes for selling a backlink to your site: just send out an email, wait for positive responses from the verified site owner, and start demoting the site. Talk about making it easy for Google.

Heck, as an SEO trying to do things the right way, if I get enough offers to sell or buy links from a particular spammer, wouldn’t it be worth my time to submit a report to Google’s quality team? The ‘lack of wisdom’ in these offers should be obvious by now, yet they persist for some curious reason; perhaps they all come from those relentless Nigerian email scammers?

JavaScript?

The next issue is on-page JavaScript with questionable tactics. I know Google can’t put a human in front of every page review, even though they actually do a LOT of human-based site review. So the safe assumption for now is that your site will be audited by ‘bots’ that have to make some pretty heavy decisions.

When a crawler bot comes across JavaScript, the typical response is to isolate and ignore the information inside the <script></script> tags. Google, however, seems to be adding JavaScript interpreters to its crawler bots in order to properly sort out what the script is doing to the web page.
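A minimal sketch of the difference (the page markup and the ‘SEO service’ URL are hypothetical): a bot that strips out <script> blocks never sees the content those scripts inject, while a bot that executes them does.

```javascript
// Hypothetical page source: static content plus a script that injects a link.
const html = `
<p>Visible to every crawler.</p>
<script>
document.write('<a href="http://example.com/seo-service">Injected link</a>');
</script>`;

// What a non-executing bot indexes: everything outside the <script> tags.
const naiveView = html.replace(/<script[\s\S]*?<\/script>/gi, '');

// The injected link is invisible to the naive bot...
console.log(naiveView.includes('Injected link')); // false
// ...but a bot that runs the script would see it, and judge the page accordingly.
```

This is why script that was previously ignored can suddenly start affecting how a page is evaluated.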

Obviously, if a script confuses the crawler, the most likely reaction is to not process the page for consideration in SERPs, and this appears to be what we’re seeing a lot of recently, with people claiming they have been ‘banished’ from Google due to JavaScript that was previously ignored. We even ran some tests on our blog late in 2011 to measure JavaScript’s impact, and the results were similar to what I’m hearing from site owners in this latest update.

So, the bottom line is to re-evaluate your pages and decide: is the JavaScript you’ve been using worth risking your rankings over?

If you are implementing JavaScript for appearance reasons, using something very common like jQuery, you probably have nothing to fear. Google endorses jQuery and even hosts an online copy to make it easier to implement.
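For example, loading jQuery from Google’s Hosted Libraries CDN is a one-line swap for a locally hosted copy (1.7.2 was a current jQuery release around the time of this post; substitute whatever version you use):

```html
<!-- Load jQuery from Google's Hosted Libraries CDN instead of a local file -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
```

Because so many sites reference the same URL, visitors often already have the file cached, and Google certainly has no trouble recognizing the script.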

On the flip side, if you are using something obscure or custom, like a click-tracking/traffic script that inserts links to known ‘SEO’ services, I’d remove it now to avoid any stray rounds from Google’s anti-SEO flak cannon.
Google Flak Cannon

I did toss some Minecraft demo map videos online last night/this morning, but they didn’t turn out so swell for a bunch of reasons, and I’m just going to re-record them with better software. Stay tuned!

SEO news blog post by @ 12:42 pm on March 22, 2012


 

Newest Panda Attacks Onsite Optimization

Google will be penalizing websites that overuse onsite optimization tactics. Matt Cutts of Google announced the new algorithm update during a SXSW panel discussion named "Dear Google & Bing: Help Me Rank Better!", alongside Danny Sullivan of Search Engine Land and Microsoft’s Senior Product Marketing Manager for Bing.

panda conspiracy

Cutts revealed that over the last few months Google has been working on a new update specifically designed to target sites that are "over-optimized" or "overly SEO’d."

This is the latest effort by Google to reduce the amount of webspam that still permeates the SERPs. Reminiscent of the Panda update, the new update is designed to target and penalize those who utilize black hat SEO tactics and try to manipulate Google’s search results through less-than-savory optimization methods.

Sites that keep to white hat SEO tactics apparently will have nothing to fear (fingers crossed). The new update is designed to address sites that focus only on SEO and not on delivering quality content.

In search results, Google wants to "level the playing field" regarding "all those people doing, for lack of a better word, over optimization or overly SEO, versus those making great content and great sites," Schwartz quotes Cutts as saying, in a rough transcription.

"We are trying to make GoogleBot smarter, make our relevance better, and we are also looking for those who abuse it, like too many keywords on a page, or exchange way too many links or go well beyond what you normally expect," the transcript continues.

The new update is expected to be implemented and to begin affecting search results within the next few weeks to a month, although Google had no official comment on the matter.

The Wall Street Journal reported earlier this week that Google is about to embark on the biggest-ever overhaul of its search system, one that involves semantic search as well as changes to search engine optimization, advertising, and page-ranking results.

SEO news blog post by @ 12:07 pm on March 19, 2012

Categories: Google

 

Copyright© 2004-2014
Beanstalk Search Engine Optimization, Inc.
All rights reserved.