
Particle Physics and Search Engines

If you’ve been hiding under a rock then you may not have heard the news of the ‘God Particle’ discovery.

As someone who is fairly scientific, I look at this as more of a proof of concept than a discovery, and ‘God’ really needs to give Peter Higgs some credit for his theories.

 
I won't dwell on the news surrounding the Higgs boson confirmation, but there's a parallel worth exploring: when objects collide, previously unseen matter can be revealed.

When Search Engines Collide

It's been some time since Bing and Yahoo joined forces on search, so the data sets should be the same, right?

No. That would really be a wasted opportunity, and Microsoft is clearly smarter than that.





 
By not merging the search data or algorithms of Bing and Yahoo, Microsoft can now experiment with different updates and ranking philosophies without putting all its eggs in one basket.

An active, healthy SEO will be watching the updates to search algorithms from as many perspectives as possible, which means a variety of sites, on a variety of topics, tracked on a variety of search engines.

Say a site gets a ton of extra 301 links from partner sites, and this improves traffic and rankings on Bing, leaves rankings stable on Yahoo, and causes a drop in traffic on Google?

It's possible to say that the drop on Google was related to a ton of different factors: untrusted links, link spam, dilution of keyword relevance, anchor text spamming, you name it. This is because Google is always updating and always keeping us on our toes.

Bring on the data…

Let's now take the data from Bing and Yahoo into consideration and look at what we know of recent algo changes on those search engines. This 'collision' of data still leaves us with unseen factors but gives us more to go on.

Since Bing has followed Google on some of the recent updates, the upswing in keyword positions on Bing would hint that it's neither a dilution of relevance nor spamming of the keywords/anchor text.

Stability on Yahoo is largely unremarkable if you check the crawl info and cache dates. It’s likely just late to the game and you can’t bet the farm on this info.

What about the other engines? Without paying a penny for the data we can fetch Blekko and DuckDuckGo (DDG) ranking history to see what changes have occurred to rankings on these engines.

Since Blekko is currently well known to be on the warpath against duplicate content, and starving for fresh crawl data, a rankings drop on that service can be very informative, especially if the data from the other search engines helps to eliminate key ranking factors.

In the case of our current example, I'd narrow down the list of ranking factors that changed with the last 'Penguin' update, contrast those with the data from the other engines, and probably suspect (in this example) that Google is seeing duplication from the 301s: something Bing wouldn't yet exhibit, but Blekko would immediately punish as badly as or worse than Google.

The next step would be to check for issues of authority for the page content. Is there authorship mark-up and a reciprocal setup on the author’s end that helps establish the trust of the main site content? Does the site have the proper verified entries in Google WMT to pass authority? Barring WMT flags, what about a dynamic canonical tag in the header, even as a test if it’s not already setup?
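For reference, both of those checks boil down to single tags in the page's <head>. Here's a minimal sketch (the profile ID and URLs are placeholders, not real values):

<link rel="author" href="https://plus.google.com/your-profile-id/"/>
<link rel="canonical" href="http://www.example.com/preferred-page-url/"/>

To complete the reciprocal setup, the author's Google+ profile also needs to list the site under its 'Contributor to' section.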

Start making small changes, watch the results, and be patient. If you’re not gaming Google and you’ve done something accidental to cause a drop in rankings, you need to think your way through the repairs step by step.

It's not easy to evaluate, but the more data you can mash up, and the better you understand that data, the quicker you can troubleshoot ranking issues and ensure that your efforts turn into gains.

SEO news blog post by @ 12:12 pm on July 5, 2012


 

Apple: On the Charge!

[Image: Apple controller]

Over at Apple, things are changing to give the company even more power, profit, and exclusive control over its customers than ever before.

The good news is that Apple has been charged and found guilty of misleading Australian consumers who purchased Apple’s advertised “iPad with WiFi + 4G” only to find it’s not compatible with the 4G networks in Australia.

This resulted in a $2.25 million fine plus $300,000 in costs for Apple: a fine that seems light given the gross disregard for Australian consumer law Apple showed by selling a product that cannot deliver on its advertised specifications.

Indeed, a small price to pay to purchase Australian tablet buyers without investing in efforts to make the hardware work with the country's carriers.

Protecting you from yourself:

Apple also made headlines by patenting an anti-surveillance technology that endeavours to mask a user’s on-line activity with fake information.

[Image: Clone troopers]

In a nutshell, the service would hide your real activities behind a wall of fake information. If you 'like' a Mars Bar™, then your clone would 'like' a brand of chocolate bar that directly competes with your choice. In essence it's like an electro-acoustic muffler that covers your on-line activity with white noise.

There is some implication that Apple has a technique to confuse actions of the clone with your actions, but I’d have to see that in action to honestly discuss it.

At the end of the day this means that instead of Apple and ‘others’ knowing about your interests/habits, only Apple will have accurate information, and they can claim that all other ‘targeted advertisers’ are second to them in accurately promoting to someone’s interests.

To me, this reinforces that Apple customers are the sole property of Apple, including their information.

Soul’d Out?

Apple has some great changes coming for loyal consumers. They are spending the time to remove the excellent Google Maps application, which is a free service, and replacing it with TomTom maps, which they likely had to purchase/invest in.

It’s also rumoured that the next update to Apple’s Siri app will focus on data from Apple partners like Yelp, Rotten Tomatoes, and OpenTable, instead of Google.

This was a brave move to protect Apple from Google’s growing competition in hardware markets. If Apple doesn’t limit Google growth with every effort they can muster, Apple consumers will start to see why so many people are switching to Android.

From an SEO perspective, the fact that Apple and its users are moving away from Google is worth noting. When I am optimizing a site, I'm doing it for the good of the site/company, not my personal preference in search engines.

So if I had a client who sold flower arrangements, or something else very likely to be searched for with Siri, I'd seriously be considering the competition and rankings over on Yelp as part of their external ranking strategy for the coming months.

Spending your money for you…

These changes from free services to paid options won't cost consumers too much more, at least not compared to the new 19-pin iPhone connector that Apple is switching to starting with the iPhone 5.
[Image: The old iPad and iPhone adapters]
You read that correctly: all those accessories you have purchased over the years with iPad/iPhone connectors are going to be junk. Not to fret, however; Apple's authorized partners will sell you all-new devices, and are already working on a new line of must-have add-ons featuring the new connector.

This way, all the cheap knock-off adapters/accessories that aren’t making Apple any money are going to be worthless and Apple will be climbing back into your pockets to kick those imposters out.

And thus the walls of the garden appear to be growing taller, thicker, and electrified on both sides.

Speaking of Power & Charging…

In more promising news the process of pulling solar power from infrared light is closer to ‘practical application’ with recent progress in the field of carbon nanotube research over at MIT.

If you look at how a typical solar panel converts light energy into electrical power, you'll note that infrared (non-visible) light energy is largely wasted.

This is especially troublesome when you realize that ~40% of the sun's light energy that reaches our planet's surface is actually in the infrared spectrum and isn't being converted to electricity by traditional solar panel technology.

Plus, this new research points to a compatible technology that can be added to existing installations rather than replacing them.

Here’s the relevant section from the original article:

The carbon-based cell is most effective at capturing sunlight in the near-infrared region.

Because the material is transparent to visible light, such cells could be overlaid on conventional solar cells, creating a tandem device that could harness most of the energy of sunlight.

The carbon cells will need refining, Strano and his colleagues say: So far, the early proof-of-concept devices have an energy-conversion efficiency of only about 0.1 percent.

So while the recent announcement is exciting, and very promising, we won’t see the results for some time to come due to efficiency/cost issues which need to be resolved first.

The real news is that folks worried about investing in current solar tech need not worry as much about the future if the next improvements are going to be complementary to existing solutions.

SEO news blog post by @ 1:10 pm on June 21, 2012


 

TECHNOlogy: What is AJAX? Baby Don’t Hurt Me!

Wikipedia defines AJAX (Asynchronous JavaScript And XML) as:

A group of interrelated web development techniques used on the client-side to create asynchronous web applications.

What a mind-numbing description! What you need to know is that AJAX is a combination of several technologies used to make better web pages.

If you have no interest in making websites but you like techno music, or you’re curious why I picked that title, this is for you:

This is a good soundtrack for this post. You should hit play and keep reading.

After a bit of time with HTML/CSS I started to build a growing list of issues that I couldn’t solve without some scripting.

I learned some PHP, which wasn’t tricky because it uses very common concepts. Here’s the traditional ‘hello world’ example in PHP:

<?php echo 'Hello World'; ?> = Hello World

.. and if I wanted to be a bit more dynamic:

<?php echo 'Hello World it is '.date('Y'); ?> = Hello World it is 2012

Because PHP is only run when the page is requested, and only runs on the server side, it's only the server that loads/understands PHP; the browser does nothing with PHP.

With PHP code only seen by the server, it’s a very safe way to make your pages more intelligent without giving Google or other search engines a reason to be suspicious of your site.

In fact one of the most common applications of PHP for an SEO is something as simple as keeping your Copyright date current:

<?php echo 'Copyright© 2004-'.date('Y'); ?> = Copyright© 2004-2012

Plus, when I need to store some information, or fetch that information, plain PHP isn't that easy, so I added MySQL to the mix and suddenly my data nightmares became data dreams and fairy tales (well, almost). I won't dive into MySQL on top of everything else here, but let's just say that when you have a ton of data you want easy access to it, and most 'flat' formats are far from the ease of MySQL.
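Just to give a taste of that ease, here's a minimal sketch using the mysql_* functions of the day (the credentials and the 'messages' table are made up for illustration):

<?php
// Connect to the database server and pick a database (hypothetical credentials)
mysql_connect('localhost', 'db_user', 'db_pass');
mysql_select_db('example_db');
// Fetch one row from a made-up 'messages' table and print one column from it
$result = mysql_query("SELECT message FROM messages WHERE id = 1");
$row = mysql_fetch_assoc($result);
echo $row['message'];
?>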

But I still had a long list of things I couldn’t do that I knew I should be able to do.

The biggest problem I had was that all my pages had to ‘post’ something, figure out what I’d posted, and then re-load the page with updated information based on what was posted.

Picture playing a game of chess where you are drawing the board with pen and paper. Each move would be a fresh sheet of paper with the moved piece drawn over a different square.

PHP can get the job done, but it’s not a very smart way to proceed when you want to make an update to the current page vs. re-drawing the whole page.

So I learned some JavaScript, starting with the basic ‘hello world’ example:
<span onClick="alert('Hello World');">Click</span>

[Image: 'Hello World' JavaScript alert box]

 
If I wanted to see the date I’d have to add some more JavaScript:
<script language="javascript">
function helloworld()
{
  var d = new Date();
  alert('Hello World it is ' + d.getFullYear());
}
</script>

<span onClick="helloworld();">Click</span>

[Image: 'Hello World it's 2012' alert box]

 
JavaScript is ONLY run in the browser; the server has no bearing on JavaScript. So the example above won't always work as expected, because it's telling you the date on your computer, not on the server. How would we see the date of the server?

This is where AJAX comes into play. If we can tell the browser to invisibly fetch a page from a server and process the information that comes back, then we can combine the abilities of JavaScript, PHP, and MySQL.

Let's do the 'hello world' example with AJAX using the examples above.

First you would create the PHP file that does the server work as something witty like ‘ajax-helloworld.php’:
<?php echo 'Hello World it is '.date('Y'); ?>

…next you'd create an AJAX function inside the web page you are working on:
<script language="javascript">
function helloworld()
{
  var ajaxData; // Initialize the 'ajaxData' variable then try to set it to hold the request (on error, assume IE)
  try {
    // Opera 8.0+, Firefox, Safari
    ajaxData = new XMLHttpRequest();
  } catch (e) {
    // Internet Explorer browsers
    try {
      ajaxData = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        ajaxData = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (e) {
        // Something went wrong
        alert("Your browser broke!");
        return false;
      }
    }
  }
  // Create a function that will receive data sent from the server
  ajaxData.onreadystatechange = function(){
    if(ajaxData.readyState == 4){
      alert(ajaxData.responseText);
    }
  }
  ajaxData.open("GET", "ajax-helloworld.php", true);
  ajaxData.send();
}
</script>

Only the file name ('ajax-helloworld.php') is specific to our example; the rest of the function is a well-established method of running an AJAX request that you should not need to edit.

So we have a function that loads the 'ajax-helloworld.php' page we made and then does an alert with the output of the page. All we have to do is put something on the page to call the function, like that span example with the onClick="helloworld();" property.
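Putting the pieces together, a minimal page sketch (assuming the script block above is pasted in as-is) would look like:

<html>
<head>
<script language="javascript">
// ... the helloworld() AJAX function from above goes here ...
</script>
</head>
<body>
<span onClick="helloworld();">Click for the server date</span>
</body>
</html>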

Well that’s all neat but what about the ‘X’ in AJAX?

XML is a great thing because it’s a language that helps us with extensible mark-up of our data.

In other words XML is like a segregated serving dish for pickled food that keeps the olives from mixing with the beets.

Going back to our ‘hello world’ example we could look at the ‘date data’ and the ‘message data’ as objects:
<XML>
<message>Hello World it is</message>
<date>2012</date>
</XML>

Now, when the AJAX loads our ‘ajax-helloworld.php’ and gets an XML response we can tell what part of the response is the date, and which part is the message. If we made a new page that just needs to display the server’s date, we could re-use our example and only look at the ‘date’ object.
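As a rough sketch of the JavaScript side (assuming 'ajax-helloworld.php' now outputs the XML above with a text/xml content type), the response handler could read just the 'date' object like so:

if(ajaxData.readyState == 4){
  var xmlDoc = ajaxData.responseXML;
  var date = xmlDoc.getElementsByTagName('date')[0].firstChild.nodeValue;
  alert('The server says the year is ' + date);
}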

For some odd reason, most coders like JSON a lot, and this makes it really common to see AJAX using JSON vs. XML to package a data response. Here’s our XML example as a JSON string:
{"message":"Hello World it is","date":"2012"}

Not only is JSON really easy to read; because JavaScript and PHP both understand JSON encoding, it's also really easy to upgrade our 'hello world' XML example over to the JSON format.

Here’s the new PHP command file ‘ajax-helloworld.php’:
<?php
$response = array("message" => "Hello World it is", "date" => date('Y'));
echo json_encode($response);
?>

The output of our AJAX PHP file will now be the same as the JSON example string. All we have to do is tell JavaScript to decode the response.

If you look back at this line from the AJAX JavaScript function example above:

if(ajaxData.readyState == 4){
alert(ajaxData.responseText);
}

This is where we’re handling the response from the AJAX request. So this is where we want to decode the response:

if(ajaxData.readyState == 4){
var reply = JSON.parse(ajaxData.responseText);
alert('The message is: ' + reply.message + ' and the date is: ' + reply.date);
}

Now we are asking for data, getting it back as objects, and updating the page with the response data objects.
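If you'd rather update the page than pop an alert (usually the whole point of AJAX), here's a minimal sketch, assuming an empty <span id="hello"></span> sits somewhere in the page:

if(ajaxData.readyState == 4){
  var reply = JSON.parse(ajaxData.responseText);
  // Write the response into the page instead of alerting it
  document.getElementById('hello').innerHTML = reply.message + ' ' + reply.date;
}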

If this example opened some doors for your website needs, you really should continue learning. While the web is full of examples like this, I can honestly tell you from personal experience that without a solid lesson plan you'll find yourself forever trying to bridge knowledge gaps.

Educational sites like LearnQuest have excellent tutorials and lessons on AJAX and JavaScript, including advanced topics like external AJAX with sites like Google and Yahoo. Plus, LearnQuest also has jQuery tutorials that will help you tap into advanced JavaScript functionality without getting your hands dirty.

*Savvy readers will note that I gave PHP my blessing for SEO uses but said nothing of JavaScript's impact on crawlers/search engines.

Kyle recently posted an article on GoogleBot’s handling of AJAX/JavaScript which digs into that topic a bit more.

With any luck I'll get some time soon to share a gem of JavaScripting that allows you to sculpt your PageRank and trust flow in a completely non-organic way. The concept would please search engines, but at the same time cannot be viewed as 'white hat' no matter how well it works.

SEO news blog post by @ 11:19 am on June 14, 2012


 

Microsoft sues Google: Rankings on Google are too crucial!

Microsoft knows the pains of anti-trust lawsuits, million-dollar fines, and the expensive business of dividing up a company so it doesn't look like a monopoly.
[Image: Breaking up the monopoly]
So it’s no shock that one of the biggest weapons in Microsoft’s war chest is a handful of small companies that can claim Google services have stymied their opportunities to succeed.

According to this “Google treads carefully in European antitrust case” (Link removed – no longer available) article posted yesterday on Canada.com, companies with direct links to Microsoft are suing because they cannot compete in EU markets without ranking well on Google:

Google’s competition includes Microsoft but is mostly small, specialist Internet services which argue the Silicon Valley giant is ensuring their names come low or don’t even figure in searches. In Europe, 80 per cent of Web searches are run on Google, according to the most recent figures by comScore, compared with 67 per cent in the United States. Its opponents say that means Google, which makes its money by advertising sales, can make or break a business by its ranking.

… followed by:

Moreover, Google says the small companies claiming to be its victims are linked to Microsoft. The third original complainant, Ciao.de, is a German travel search site owned by Microsoft. Several are also members of I-comp, whose most prominent member is Microsoft, and which produces position papers on subjects such as web market concentration. I-comp lawyer Wood acknowledges the organization is not independent, but says “our palette is much broader than Microsoft’s.”
 
The scary truth is that if actions like this are successful, we would have to reorganize or dismantle any company like Google that offers free services which prevent smaller companies from selling the same services.

Typically such a thing would never happen here in North America, since due diligence requires proof of consumer harm, not just harm to the competition.

No matter how you look at it, Google is the opposite of consumer harm, but in the EU courts this may not matter.

Once Google loses in EU courts it will be ‘game-on’ for all other countries to dog-pile on the remains of Google, allowing greed to kill off one of the best things that’s ever happened to us.

Looking at the history of humanity, and greed vs. virtue, we should have seen this coming.

In my opinion it is as if Microsoft woke up one morning, looked into their magical mirror to reflect on how beautiful they are, and came to realize that some poison apples need to be handed out post-haste.

Speaking of humanity vs. greed, I MUST comment on this whole FunnyJunk vs. Oatmeal ‘fiasco’.

Either this is some brilliant promotional scheme or the owners of FunnyJunk painted a bullseye on their own foot. I am really not sure which one, but man is it sad.

Give it a read if you really want to be shocked at how low a business can stoop to make a profit from artists and the community.

It's also refreshing to see the Oatmeal prove they could shut down FunnyJunk, but instead use the $20,000 they raised in 64 minutes to fund cancer research and support the World Wildlife Fund.

SEO news blog post by @ 11:08 am on June 12, 2012


 

Google Advisor: Where have you been all my life?

Admittedly, when I read the announcement that Google Advisor (Link removed – no longer available) was here to help me manage my money, my first thoughts were about privacy and that last bastion of private information Google hasn't touched yet: banking.

[Image: Gloved hand reaching for banking and credit info]

Being wrong never felt so good!

Google Advisor is not (at the moment) a way to suck more private information from you; it's actually more of a consulting service for comparing bank accounts, credit cards, certificates of deposit, and more.

[Image: Google Advisor]

As someone who's set up review sites for various services/offerings, I can tell you how handy/popular it is to break down competing services so the consumer can select something that meets their exact needs.

Google Advisor claims that the information it's showing is based on my data, but a 0% intro rate on transfers for 18 months? If that's really available to me I'm going to have to send Google some chocolates.

Google bought QuickOffice

[Image: QuickOffice logo]

Google bought the mobile office suite ‘QuickOffice‘ which allows ‘App-Level’ access to office documents for mobile devices based on Android/iOS/Symbian.

This move seems redundant, with Google's 'Docs' suite offering even more connectivity to your documents/spreadsheets/presentations; but Docs is just a cloud service, not an 'App', and an 'App' gives you more offline control of your work than a cloud service does.

Plus you can’t argue with the users, they want ‘Apps’ and will pay for them.

Google bought Meebo

[Image: Meebo logo]

I’m not sure if this was related to Yahoo’s ‘Axis’ bar plugin that came and went with zero fanfare, but it’s an interesting purchase for SEO interests.

Meebo is a handy social media tool with some great options for ad placement and on-line marketing. SEOs not already dabbling with the tool should take a look, like yesterday.

If you’ve been managing your Twitter, Google+, Facebook, etc.., profiles without a management tool, aggregation sites like Meebo are really what you’ve been missing out on.

We know that Google owned properties have more relevance and trust on the web than similar services/products. After all, if you can’t trust yourself, who can you trust?

So if you were using some other social aggregation tool, and were doing it solely for SEO awareness, you can safely assume it’s worth the effort to try out Meebo for a potentially improved result/relevance from your efforts.

We will be doing some testing (as we always do) and will blog about our results to further expand on what the service offers over others. This may even warrant an article or two?

SEO news blog post by @ 12:42 pm on June 5, 2012


 

GoogleBot Now Indexing JavaScript, AJAX & CSS

[Image: GoogleBot]

Improving the way that GoogleBot parses and interprets content on the web has always been integral to the Google mandate. It now seems that GoogleBot has been granted the ability to parse JavaScript, AJAX and Cascading Style Sheets.

In the past, developers avoided using JavaScript to deliver content or links to content due to GoogleBot's inherent difficulty in correctly indexing this dynamic content. Over the years it has become so good at the task that Google is now asking us to allow GoogleBot to scan the JavaScript used in our websites.

Google did not release specific details of how or what GoogleBot does with the JavaScript code it finds, due to fears that the knowledge would quickly be incorporated into BlackHat tactics designed to game Search Engine Results Pages (SERPs). A recent blog post on Tumblr is responsible for the media attention: the post showed server logs in which the bot could be seen accessing JavaScript files.

The ability of GoogleBot to successfully download and parse dynamic content is a huge leap forward in the indexing of the web, and stands to cause many fluctuations in rankings as sites are re-crawled and re-indexed with this dynamic content now factored into the page's content.

Previously, Google attempted to get developers to standardize the way dynamic content was handled so that it could be crawled, but the proposal (https://developers.google.com/webmasters/ajax-crawling/) has been more or less ignored.
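For the curious, that proposal was the 'hashbang' AJAX crawling scheme: a page marks a crawlable AJAX state with #! in the URL, and the crawler then asks the server for a static HTML snapshot of that state via an _escaped_fragment_ query parameter. Roughly (example.com is a placeholder):

What the user's browser sees: http://www.example.com/page#!state=1
What the crawler requests: http://www.example.com/page?_escaped_fragment_=state=1

The server is expected to answer the second URL with the fully rendered HTML for that state.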

GoogleBot has to download the JavaScript and execute it on the Google servers running the bot, leading some to conclude that it may be possible to use Google's cloud to compute data at a large scale.

SEO news blog post by @ 11:22 am on May 28, 2012

Categories: Coding, Google

 

Yahoo Axis – What the Flock?

I had a friend working on the Flock browser team right until it lost momentum and became clear that it was too much, too soon…

[Image: Amy Winehouse - too soon?]

Here we go again with a new ‘all-in-one’ web browser concept, this time from a very big name?

**Update: Turns out that the leaks were really just rumors. This hype mill is a ‘googol‘ times more intense than it should be considering this is ‘just a plugin’ (unless you count Apple devices).

 

[Image: Paul Rudd doing a double take]
Yahoo..? New?!?

Microsoft powers Yahoo search, right? So if Yahoo is releasing a new browser plus a suite of browser plugins for people who refuse to switch browsers, what's going on?

Well, apparently giving people the option to 'choose' MSN/Bing/Yahoo wasn't working out so well. Now you can run a browser or a plugin that removes that annoying hassle of choosing whose search services you are using.

Y'know how Firefox and Chrome allow you to sign in to your browser, letting you seamlessly move from one location to the next? Yeah, Axis is going to break ground and re-invent the web by also doing that same thing.

Y’know how Google is showing you previews of the sites you’re considering visiting within the search results? Yep Axis will finally let you do that, again.

Is this even a new browser or just IE9 with some ‘fluff’ and Yahoo branding? Tonight we will get a chance to try it hands-on and find out, but for now we have a few videos we can watch over on Yahoo Video.

One of the points my Economics teacher used to hammer home is to view each promotion as the promoter relating to their target audience.

If you have a good product with a smart client base, you can sell your product by focusing on real traits and strengths. Just demonstrate the product and avoid all pointless elements that distract the consumer from your product information.

Enjoy those videos and the clever/unique symbolism that hasn’t been copied too many times since Apple used it in 1984. :)

Does this mean Bing/Yahoo rankings will be important?

Who ever said they weren’t important? Okay, well expert opinions aside, you should never burn the Bing bridge, especially not with cell phones that default to Bing and new versions of Windows that also default to Bing.

It’s never wise to put all your eggs in one basket, and this is true of search engine placement/rankings as well as eggs.

Even if Yahoo Axis only manages a week of public attention, that’s one week of people around the planet searching Bing for a change.

If you rank really well on Google, we’re not going to suggest you intentionally tank your rankings for a short-term gain on Bing. The cost of recovering from such a move would probably be far more than simply paying for some pay-per-click coverage via Microsoft’s AdCenter (Link removed – no longer available).

There are already folks worried about 'Yahoo' impressions vs. Bing impressions, and the following advice has been posted in the adCenter help forum:

1) You are currently bidding on broad match only, add phrase and exact match to your bidding structure.
2) Look at keywords with low quality score and optimize for those specifically.
3) Install the MAI tool and check on expected traffic for adCenter, you can also see what average bids are for specific positions.

Only 7 Days Left!


 

Talk about old news? I mentioned this just 2 days ago?!

We still have 7 days left in our Beanstalk Minecraft Map Competition! Check it out and even if you’re not entering, please let others know it’s coming to a close and we need all submissions by the 31st!

SEO news blog post by @ 10:03 am on May 24, 2012


 

FB stock drops as SpaceX soars to success!

There were so many interesting technology/internet developments between Friday and today that I can't really pick just one to focus on.

Sliding FB stock prices, Google finally taking over what was the mobility division of Motorola, SpaceX reaching the ISS, WikiLeaks' social media platform, the Google Knowledge Graph… and more!

If we looked at them from an SEO standpoint I would still struggle a bit to pick the most interesting story, but it's a great way to dive in, so let's take a look at the weekend's headlines from an SEO angle.

Facepalm – FB IPO = Uh Oh

 
Dave nailed this one really well on Friday in this post:
Facebook IPO vs Ford (real world) Valuation Comparison

The image of money flushing down the toilet was very ‘apt’ since that’s exactly where I see the stock price going:
https://www.google.ca/finance?q=NASDAQ%3AFB

The current ‘low’ appears to be $31/share at the moment, with the price currently dancing around $32.50/share as I write this.

Google Mobility

Google already makes some cool hardware for their servers and other projects, but most people I know wouldn’t think of them as a manufacturer.

And yet here we are today, watching history unfold, as the mobile division of one of the world's best handset manufacturers changes hands to the company at the head of the Android software alliance.

Google does a lot of things for free, even at a loss, because they see value in things that others would squander and ignore. Now that they have a hardware division to support this bad habit, things are going to get very interesting.

We already know from looking through Project Glass's details that Google will be needing a very skilled manufacturer with assets in micro-mobility and wireless. HTC has always been very willing to participate in Google's projects, but they are a vastly successful hardware manufacturer with no visible brand loyalty.

I personally had Android running on an HTC Windows Mobile phone, so why couldn't I run Windows Mobile on a Google-subsidized Android HTC phone? I probably could, which is why it'd be very silly for Google to subsidize HTC hardware.

If Google can produce the hardware and find ways to keep 90%+ of the owners using Google services, it's a much safer bet, and it appears to be exactly what they are doing. Heck, if they make the hardware they might not even care what OS you use, so long as they are allowed to sniff the traffic and still learn about users from it.

The only part of the puzzle that's missing is deployment of Google-owned, Motorola-equipped cell towers, so that Google can offer hardware, software, and services on their own terms, in a model that makes sense to them, which would likely mean no caps on network use for Google products.

Yeah I could be dreaming but if I was a competitive cellular provider I’d be strongly considering opening my arms to Google before it’s an arms race against Google. ;)

Google Knowledge Graph

While the bearing on SEO for this news item is rather debatable, the feature itself is incredibly handy and something Google is uniquely positioned to provide.

By taking key points of knowledge and building some hard links to relate that knowledge to other data points, Google has developed a Wikipedia of its own design.

Knowing the struggles that Wikipedia has faced in terms of moderation and updating content, it's anyone's guess how Google is going to maintain its Knowledge Graph without someone manipulating the results, but kudos to Google for trying.

Right now the coverage on this is going to be all the same because the content in Google KG is still being built up, but you can expect further discussion as the service grows.

FoWL – Wiki-Leaks’ Social Media Service

Since this service claims to be private and encrypted, it would be very foul of me to really spend much of your time discussing it.

As it can't be officially crawled by Google, it's probably going to have very little effect on SEO and rankings in general. The only real bearing I could see it having is as a traffic tool for sites that are in line with the WikiLeaks mantra of public information. So, if you can pretend that your services are so good the FBI doesn't want you talking about them…??

SpaceX reaches ISS

This isn't search engine related at all. I suppose you could point to the success of Google vs. government-run indexes, and then point to the success of SpaceX vs. NASA with a bunch of startling similarities, but that's some serious reaching.

At the same time, posting this on the same day the first private effort has docked with the International Space Station? I am obligated as a nerd to at least tuck this into the tail of the post. It’s pretty cool!

9 Days Left!

 

 

We still have 9 days left in our Beanstalk Minecraft Map Competition! Check it out and even if you’re not entering, please let others know it’s coming to a close and we need all submissions by the 31st!

Good Luck! :)

SEO news blog post by @ 12:01 pm on May 22, 2012


 

SEOmoz SPAM Outing

In the recent wake of the Penguin update from Google and the impact it has had on many sites, Rand Fishkin, CEO of SEOmoz, announced on his Google+ page that SEOmoz is currently developing tools to facilitate the "classifying, identifying and removing/limiting link juice passed from sites/pages."

[Image: Feathers McGraw]

SEOmoz wants to add software to the existing toolset available to subscribers on their website, to aid in determining whether their own website or a competitor's website appears to be spammy in nature.

If SEOmoz has developed a method to analyze signals that can be used to determine if a site is spammy, it is safe to assume that Google is viewing the page or site in question in the same light. Links that are determined to be spammy will pass little link juice and could potentially incur a penalty from Google. Fishkin summed up the process by saying that if they (SEOmoz) classify a site or page as having spammy backlinks, "we're pretty sure Google would call it webspam."
Some in the SEO community are angered at Rand Fishkin’s policy of “outing” SEOs for spamming practices, so this time, Rand has enlisted the public to answer whether or not he should do so.

Some of our team members, though, do have concerns about whether SEOs will be angry that we’re “exposing” spam. My feeling is that it’s better to have the knowledge out there (and that anything we can catch, Google/Bing can surely better catch and discount) then to keep it hidden. I’m also hopeful this can help a lot of marketers who are trying to decide whether to acquire certain links or who have to dig themselves out of a penalty (or reverse what might have caused it).


Preliminary results show that most are in favor of Rand's reporting of other SEOs for spammy practices. Certainly the reporting of offenders will help Google combat the unwanted webspam that has permeated search results since the Internet entered mainstream society. It is the new mantra of the modern web: you need to follow the rules and guidelines established by Google, whether or not you agree with them, or fear serious reprisal. Ultimately, what benefits the search results benefits the searcher.

On a slightly related note, I would like to suggest Feathers McGraw as the new face of the Penguin algorithm update from Google…

SEO news blog post by @ 10:49 am on May 9, 2012

Categories: Google, Rankings

 

Search Engine Experiment in Spam Surfing

If you took a very heavily spam-influenced search engine, Bing for example, and removed the first 1 million results for a query, how good would the results be?

How about doing the same thing to the best filtered search engines available?

Well, someone got curious and made the Million Short search engine.

What this new service does is remove a specific number of search results and show you the remainder.

I became immediately curious about a few things:

  • Where are they getting their crawl data from?
  • What are they doing to searches where there’s only a few hundred results?
  • Where is the revenue stream? I see no ads?

Given the lack of advertising I was expecting them to be pulling search data from another site?

There's no way they are pulling from Bing/Yahoo: there are 14+ sites paying for better spots than we've earned on Bing for our terms…

And while the top 10 list looks a bit like DuckDuckGo's, we're seemingly banned from their rankings, and not at #6 at all. It's funny when you look at their anti-spam approach and then look at the #1 site for 'seo services' on DDG; it's like a time machine back to the days of keyword link spam. Even more ironic is that we conform to DDG's definition of a good SEO:

“The ones who do in fact make web sites suck less, and apply some common sense to the problem, will make improvements in the search ranking if the site is badly done to start with. Things like meta data, semantical document structure, descriptive urls, and whole heap of other factors can affect your rankings significantly.

The ones who want to subscribe you to massive link farms, cloaked gateway pages, and other black hat type techniques are not worth it, and can hurt your rankings in the end.
Just remember, if it sounds too good to be true, it probably is. There are some good ones, and also a lot selling snake oil.”

We know the data isn't from Google either; we have the #1 seat for 'seo services' on Google and maintain that position regularly.

So what’s going on?! This is the same company that gave us the ‘Find People on Plus‘ tool and clearly they know how to monetize a property?

My guess is that they are blending results from multiple search engines, and likely caching a lot of the data so it’d be very hard to tell who’s done the heavy lifting for them?

All that aside, it's rare to see a search engine that blatantly gives you numbered SERPs, and for now Million Short is showing numbered positions for keywords in the left sidebar. That's sort of handy, I guess. :)

You can also change how many results to remove, so if your search is landing you in the spam bucket, try removing fewer results. If your search always sucks, and the sites you want to see in the results are on the right, you've apparently found a search phrase that isn't spammed! Congrats!

Weak one: Google Drive

Well, my enthusiasm for Google Drive just flew out the window in my second week of using it.

UPDATE: Turns out the disk was full and Google Drive gives no feedback at all. Thanks, Firefox, for telling me WHY the download failed. Oh man.

SEO news blog post by @ 11:01 am on May 1, 2012


 
