Red-Handed Face-Palm

Facebook is making headlines again, but not the kind that Mark Zuckerberg would like.
Mark Zuckerberg looking unhappy
Earlier this week ‘Limited Run’, an e-commerce developer that used Facebook as part of its start-up media campaign, posted a report on the click-through data from their Facebook ads.

The data that Limited Run shared was a bit startling. In their own words:
Facebook was charging us for clicks, yet we could only verify about 20% of them actually showing up on our site.

Since data is all about who’s looking at it and how they’re looking at it, the folks at Limited Run signed into a ‘handful’ of other tracking services and found exactly the same thing.

At this point you have a web developer who is very curious about something going on with their web traffic, so naturally they built an analytics system for their own site:
Here’s what we found: on about 80% of the clicks Facebook was charging us for, JavaScript wasn’t on … in all of our years of experience, only about 1-2% of people coming to us have JavaScript disabled, not 80% like these clicks coming from Facebook.

Limited Run is a start-up company, and the publicity from being the first to catch Facebook with its hand in the proverbial cookie jar of advertising money would certainly help ensure the company’s run isn’t so limited.

Even so, Limited Run was VERY careful to point out that there is little to no way of proving that Facebook is behind the bot-driven ad traffic.

They are, however, dropping Facebook’s advertising and their company page on FB because of a claim that FB was unwilling to assist them with a name change “because they weren’t actively paying for $2k or more in campaigns”.

Plus, if 80% of the traffic from an advertising source is fake and you have to pay for 100% of it, there are better ways to promote your company.

As this was a smaller advertiser, not someone spending millions on Facebook ads, we took it as a one-off issue, until this morning when Forbes posted a link to an article on Macleans.ca about “blank” image advertising tests on Facebook.

The gist of the piece is that a blank image test actually netted double the clicks of a static banner-style image (think a logo or some non-promotion/non-offer), and only one click in ten thousand less than the average banner ad.

Web Trends even jumped in to do some testing on the clicks to see if there was some sort of curious appeal to clicking on a blank image, and by using heat maps and quizzes they confirmed that the traffic is not human.

Facebook makes 85% of its ~$2.2 billion revenue from advertising, and 14%-19% of FB revenue comes from Zynga, a company that is suddenly involved in a stock-crash scandal.
Mark Pincus - Founder of Zynga Games
If you hadn’t heard: just prior to some ugly profit reports, company founder Mark Pincus and key members of the company cashed out over $516 million in shares!

Zynga share prices are currently at $2.83 each, way down from the $10 initial share price, and miles away from the $14.69 peak price of the company’s stock.

It would appear, for now, that both companies have some explaining to do and some problems to solve. For users/subscribers this should be a wake-up call on where you are spending your time and your advertising budgets.

SEO news blog post by @ 10:28 am on August 1, 2012


 

Particle Physics and Search Engines

If you’ve been hiding under a rock then you may not have heard the news of the ‘God Particle’ discovery.

As someone who is fairly scientific, I look at this as more of a proof of concept than a discovery, and ‘God’ really needs to give Peter Higgs some credit for his theories.

 
I won’t dwell on the news surrounding the Higgs boson particle confirmation, but there are parallels between objects colliding and revealing previously unseen matters.

When Search Engines Collide

It’s been some time since Bing and Yahoo merged, so the data sets should be the same, right?

No. That would really be a wasted opportunity, and Microsoft is clearly smarter than that.

 
By not merging the search data or algorithms of Bing and Yahoo, Microsoft can now experiment with different updates and ranking philosophies without putting all its eggs in one basket.

An active/healthy SEO will be watching the updates to search algorithms from as many perspectives as possible which means a variety of sites on a variety of topics tracked on a variety of search engines.

Say a site gets a ton of extra 301 links from partner sites, and this improves traffic and rankings on Bing, holds steady on Yahoo, and causes a drop in traffic on Google?

It’s possible to say that the drop on Google was related to a ton of different factors: untrusted links, link spam, dilution of keyword relevance, keyword anchor text spamming, you name it. This is because Google is always updating and always keeping us on our toes.

Bring on the data..

Let’s now take the data from Bing and Yahoo into consideration and look at what we know of recent algo changes on those search engines. This ‘collision’ of data still leaves us with unseen factors, but it gives us more to go on.

Since Bing has followed Google on some of the recent updates, the upswing in keyword positions on Bing would hint that it’s neither a dilution of relevance nor spamming on the keywords/anchor text.

Stability on Yahoo is largely unremarkable if you check the crawl info and cache dates. It’s likely just late to the game and you can’t bet the farm on this info.

What about the other engines? Without paying a penny for the data, we can fetch Blekko and DDG (DuckDuckGo) ranking history to see what changes have occurred to rankings on these engines.

Since Blekko is currently well known to be on the warpath against duplicate content, and they are starving for fresh crawl data, a rankings drop on that service can be very informative, especially if the data from the other search engines helps to eliminate key ranking factors.

In the case of our current example, I’d narrow down the list of ranking factors that changed on the last ‘Penguin’ update and contrast those with the data from the other engines. In this example I’d probably suspect that Google is seeing duplicate content from the 301s, something Bing wouldn’t yet exhibit, but Blekko would immediately punish as badly as or worse than Google.

The next step would be to check for issues of authority for the page content. Is there authorship mark-up and a reciprocal setup on the author’s end that helps establish the trust of the main site content? Does the site have the proper verified entries in Google WMT to pass authority? Barring WMT flags, what about a dynamic canonical tag in the header, even as a test if it’s not already set up?
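For reference, a canonical tag is a single line in the page’s <head>; the URL below is just a hypothetical stand-in:

<link rel="canonical" href="http://www.example.com/current-page/" />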

Start making small changes, watch the results, and be patient. If you’re not gaming Google and you’ve done something accidental to cause a drop in rankings, you need to think your way through the repairs step by step.

It’s not easy to evaluate, but the more data you can mash up, and the better you understand that data, the quicker you can troubleshoot ranking issues and ensure that your efforts are going to be gains.
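As a toy illustration of the mash-up idea (every position below is made up), combining the data is really just lining the engines up side by side and flagging which ones moved together:

// Hypothetical weekly positions for one keyword, per engine
var rankings = {
  google: [3, 3, 9],  // dropped on the last update
  bing:   [5, 4, 3],  // improved
  yahoo:  [6, 6, 6],  // flat; likely just late to the game
  blekko: [8, 8, 15]  // dropped too, which hints at duplicate content
};

// Flag any engine whose latest position moved by 3 or more spots
for (var engine in rankings) {
  var history = rankings[engine];
  var delta = history[history.length - 1] - history[history.length - 2];
  if (Math.abs(delta) >= 3) {
    console.log(engine + ' moved ' + delta + ' positions');
  }
}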

SEO news blog post by @ 12:12 pm on July 5, 2012


 

Chromecraft? Build With Chrome!

I’ve always said that Minecraft is like digital LEGO® that you can save and share with friends. Sure, Minecraft is increasingly fun to play and actually ‘collect’ the bricks, but at its core it’s a lot like LEGO®.

The problem with Minecraft is that we don’t all share the same map. Some servers try to accommodate everyone, but I don’t think there’s any way that a single map could support everyone playing on it. This means that you could build something incredible, like Castle Black from Game of Thrones, that nobody ever comes across. Bummer.

Enter the new Build With Chrome website from Google and LEGO®! That’s right! My favourite browser mixed with my favourite game!

Right now the ‘world map’ is limited to Australia and New Zealand, but each tile of the map becomes ‘owned’ by the first person to build on it, so they will have to make the map bigger soon!

I gave it a go and started to get used to the controls pretty quickly, but really found some polish lacking, at least on my work PC, which isn’t rigged up for 3D graphics.
Build with Lego

What’s this got to do with SEO opportunities? Well web presence is all about putting your company on-line, and when the whole world map is available to build on, you can guess what’s going to be built on our square? :)

Already this morning there’s a land rush and the tiles are all getting claimed. So if you wanted to plant your flag in Australia, you better hurry up before all the shrimp are gone from the BBQ.

Heck you can just sit back and watch as people’s published ‘builds’ are approved and start popping up on the map. Really neat work from Google!

As the name suggests, it’s a lot of HTML5 web content that’s been designed to work well with Chrome. So far I’ve tried it on Opera and Firefox with errors both times, leading me to suggest that ‘buildwithchrome.com’ is a ‘Chrome only’ site for now. :)

Other news..

Yep it’s been a bit slow on our blog lately, but there’s lots of buzz from Google IO, and the latest services like Google Now that we’ll be talking about very soon!

I’ve also been working behind the scenes on the programming posts, so if you enjoyed our last one there’s more to come, and they all touch on SEO implementation so there will be something for everyone.

SEO news blog post by @ 10:26 am on July 3, 2012


 

Microsoft Surface – Not a table, a tablet

All these years of spies telling us about the ‘table’ that the nerds in Redmond are calling the ‘Microsoft Surface’, and the whole time we didn’t know they were silenced before they could finish saying ‘tablet’.

The official video from Microsoft. A bit skimpy with the details.

We know Microsoft actually wanted to make a table called Surface, if you haven’t seen enough of it on Hawaii Five-O, there was a demonstration of D&D on it:

Yep, the link in the video description from 2010 takes you to the right spot..

The tablet was ready before the table, so the name ‘Surface’ was on the table for the new tablet. What?!

So confusion over names aside, what’s under the ‘Surface’ of this new gizmo?

- Uses rare materials
- Built-in kickstand
- Cover acts as a magnetic keyboard/trackpad

Two versions:

Windows 8 Professional
- Intel 22nm Core i5 Ivy Bridge
- 13.5 mm thick
- 903 grams
- 10.6″ Full HD touch display (1080p?)
- Magnetic stylus w/ digital ink support
- USB 3.0
- Mini DisplayPort
- MicroSDXC slot
- Up to 128GB of storage
- Larger 42Wh battery
- Will arrive about three months after Windows 8

Windows 8 RT
- NVIDIA Tegra ARM processor
- 9.3 mm thick
- 676 grams
- 10.6″ HD touch display (1366 x 768?)
- USB 2.0
- Micro HDMI video port
- MicroSD slot
- Up to 64GB of storage
- 31.5Wh battery
- Will be available with Windows 8 (this fall)
 

So the full tablet will be for people who run or create Windows applications, want full compatibility with existing apps, and will trade a lighter/more portable tablet for more options. If the stylus is included, or is a very inexpensive accessory, this version may appeal to students and business types who hate flipping through hand-written notes searching for something that could be found instantly if it were digital.

The RT version will be for the minimalist who only needs to run core applications that are compatible with the RT version of Windows 8 and its ARM processor. This version should not only be lighter but also have stronger battery life, making it ideal as a reader or for watching DVD-quality movies.

Unless you are developing ARM-based Windows 8 applications, no programmer will want the RT version, since it cannot run applications that haven’t been ported to ARM. That means if you code up solutions for yourself, you’ll either have to re-compile for ARM or avoid that platform.

Not having tried the RT version of Windows 8, I can only assume the browser choices will be anything you can possibly think of, as long as you always think of “Internet Explorer 10”. This was discussed in our post on browser options for Windows 8 ARM edition.

Design Details – First impressions are everything!

Rumour has it that the ‘kick stand’ was a really hard design choice because it ruined the ‘flow’ of the product shape, regardless of how essential it is in practice. To make up for this design shortfall they apparently over-engineered the hinge system to have the ‘feel’ of a luxury car door.

If that wasn’t exotic enough, the case of the Surface is made with a special magnesium process called ‘vapor-depositing’, which results in an amazingly thin/strong material that is still cost-conscious enough for mass production.

Personally, if I were Microsoft, and I wanted to kick some Apple hiney all over Silicon Valley, I would have dipped into my XBox parts bin and made a “Pro 360” version of the Surface, including:
- Built in Kinect motion controller
- Bluetooth controller support
- Embedded 3D Graphics hardware
- Special ‘kit’ for XBox owners that allows a ‘mobile’ mini-game to be saved to an ‘authorized’ client on the owner’s Surface for each game in the XBox owner’s game library

If you give your tablet perks that work exclusively with your console, you might actually make your most loyal consumers feel like they made some savvy choices. It’s what Apple does all the time. ;)

SEO news blog post by @ 12:02 pm on June 19, 2012


 

TECHNOlogy: What is AJAX? Baby Don’t Hurt Me!

Wikipedia defines AJAX (Asynchronous JavaScript And XML) as:

A group of interrelated web development techniques used on the client-side to create asynchronous web applications.

What a mind-numbing description! What you need to know is that AJAX is the combination of several technologies to make better web pages.

If you have no interest in making websites but you like techno music, or you’re curious why I picked that title, this is for you:

This is a good soundtrack for this post. You should hit play and keep reading.

After a bit of time with HTML/CSS I started to build a growing list of issues that I couldn’t solve without some scripting.

I learned some PHP, which wasn’t tricky because it uses very common concepts. Here’s the traditional ‘hello world’ example in PHP:

<?php echo 'Hello World'; ?> = Hello World

.. and if I wanted to be a bit more dynamic:

<?php echo 'Hello World it is '.date('Y'); ?> = Hello World it is 2012

Because PHP is only run when the page is requested, and only runs on the server side, it’s only the server that loads/understands PHP; the browser does nothing with PHP.

With PHP code only seen by the server, it’s a very safe way to make your pages more intelligent without giving Google or other search engines a reason to be suspicious of your site.

In fact one of the most common applications of PHP for an SEO is something as simple as keeping your Copyright date current:

<?php echo 'Copyright© 2004-'.date('Y'); ?> = Copyright© 2004-2012

Plus, when I need to store some information, or fetch that information, plain PHP isn’t that easy, so I added MySQL to the mix and suddenly my data nightmares are all data dreams and fairy tales (well, almost). I won’t dive into MySQL on top of everything here, but let’s just say that when you have a ton of data you want easy access to it, and most ‘flat’ formats are far from the ease of MySQL.

But I still had a long list of things I couldn’t do that I knew I should be able to do.

The biggest problem I had was that all my pages had to ‘post’ something, figure out what I’d posted, and then re-load the page with updated information based on what was posted.

Picture playing a game of chess where you are drawing the board with pen and paper. Each move would be a fresh sheet of paper with the moved piece drawn over a different square.

PHP can get the job done, but it’s not a very smart way to proceed when you want to make an update to the current page vs. re-drawing the whole page.
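Here’s a minimal sketch of that pattern (a hypothetical self-posting page, not code from a real project); notice the server re-draws the entire page for every single ‘move’:

<?php
// Hypothetical example: the whole page is rebuilt on every form post
$move = isset($_POST['move']) ? $_POST['move'] : 'none yet';
?>
<html>
<body>
<p>Last move: <?php echo htmlspecialchars($move); ?></p>
<form method="post">
<input type="text" name="move" />
<input type="submit" value="Make move" />
</form>
</body>
</html>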

So I learned some JavaScript, starting with the basic ‘hello world’ example:
<span onClick="alert('Hello World');">Click</span>

hello world javascript alert box

 
If I wanted to see the date I’d have to add some more JavaScript:
<script language="javascript">
function helloworld()
{
  var d = new Date();
  alert('Hello World it is ' + d.getFullYear());
}
</script>

<span onClick="helloworld();">Click</span>

Hello World it's 2012 alert box example

 
JavaScript is ONLY run in the browser; the server has no bearing on JavaScript. So the example above won’t always work as expected, because it’s telling you the date on your computer, not on the server. How would we see the date of the server?

This is where AJAX comes into play. If we can tell the browser to invisibly fetch a page from a server and process the information that comes back, then we can combine the abilities of JavaScript, PHP, and MySQL.

Let’s do the ‘hello world’ example with AJAX using the examples above.

First you would create the PHP file that does the server work, named something witty like ‘ajax-helloworld.php’:
<?php echo 'Hello World it is '.date('Y'); ?>

..next you’d create an AJAX function inside the web page you are working on:
<script language="javascript">
function helloworld()
{
  var ajaxData; // Initialize the 'ajaxData' variable then try to set it to hold the request (on error, assume IE)
  try {
    // Opera 8.0+, Firefox, Safari
    ajaxData = new XMLHttpRequest();
  } catch (e) {
    // Internet Explorer browsers
    try {
      ajaxData = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        ajaxData = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (e) {
        // Something went wrong
        alert("Your browser broke!");
        return false;
      }
    }
  }
  // Create a function that will receive data sent from the server
  ajaxData.onreadystatechange = function(){
    if(ajaxData.readyState == 4){
      alert(ajaxData.responseText);
    }
  }
  ajaxData.open("GET", "ajax-helloworld.php", true);
  ajaxData.send();
}
</script>

Only the request URL (‘ajax-helloworld.php’) and the alert in the response handler are customized; the rest of the function is a well-established method of running an AJAX request that you should not need to edit.

So we have a function that loads the ‘ajax-helloworld.php’ page we made and then does an alert with the output of the page. All we have to do is put something on the page to call the function, like that span example with the onClick="helloworld();" property.
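For completeness, the trigger is the same kind of span we used earlier:

<span onClick="helloworld();">Click</span>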

Well that’s all neat but what about the ‘X’ in AJAX?

XML is a great thing because it’s a language that helps us with extensible mark-up of our data.

In other words XML is like a segregated serving dish for pickled food that keeps the olives from mixing with the beets.

Going back to our ‘hello world’ example we could look at the ‘date data’ and the ‘message data’ as objects:
<XML>
<message>Hello World it is</message>
<date>2012</date>
</XML>

Now, when the AJAX loads our ‘ajax-helloworld.php’ and gets an XML response, we can tell what part of the response is the date and which part is the message. If we made a new page that just needs to display the server’s date, we could re-use our example and only look at the ‘date’ object.
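Here’s a rough sketch of that (it assumes the PHP file also sends a ‘Content-Type: text/xml’ header so the browser parses the response into responseXML); the handler reads only the ‘date’ object:

if(ajaxData.readyState == 4){
  // responseXML is only populated when the server declares an XML content type
  var xml = ajaxData.responseXML;
  var date = xml.getElementsByTagName('date')[0];
  alert('The server says the year is ' + date.firstChild.nodeValue);
}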

For some odd reason, most coders like JSON a lot, and this makes it really common to see AJAX using JSON vs. XML to package a data response. Here’s our XML example as a JSON string:
{"message":"Hello World it is","date":"2012"}

Not only is it really easy to read JSON, but because JavaScript and PHP both understand JSON encoding, it’s really easy to upgrade our ‘hello world’ XML example over to JSON format.

Here’s the new PHP command file ‘ajax-helloworld.php’:
<?php
$response = array("message" => "Hello World it is", "date" => date('Y'));
echo json_encode($response);
?>

The output of our AJAX PHP file will now be the same as the JSON example string. All we have to do is tell JavaScript to decode the response.

If you look back at this line from the AJAX JavaScript function example above:

if(ajaxData.readyState == 4){
alert(ajaxData.responseText);
}

This is where we’re handling the response from the AJAX request. So this is where we want to decode the response:

if(ajaxData.readyState == 4){
  var reply = JSON.parse(ajaxData.responseText);
  alert('The message is: ' + reply.message + ' and the date is: ' + reply.date);
}

Now we are asking for data, getting it back as objects, and updating the page with the response data objects.

If this example opened some doors for your website needs you really should continue to learn more. While the web is full of examples like this, from my personal experience I can honestly tell you that you’ll find yourself always trying to bridge knowledge gaps without a solid lesson plan.

Educational sites like LearnQuest have excellent tutorials and lessons on AJAX and JavaScript, including advanced topics like external AJAX with sites like Google and Yahoo. Plus, LearnQuest also has jQuery tutorials that will help you tap into advanced JavaScript functionality without getting your hands dirty.

*Savvy readers will note that I gave PHP my blessings for SEO uses but said nothing of JavaScript’s impact on crawlers/search engines.

Kyle recently posted an article on GoogleBot’s handling of AJAX/JavaScript which digs into that topic a bit more.

With any luck I’ll get some time soon to share a gem of JavaScripting that allows you to sculpt your page-rank and trust flow in a completely non-organic way. The concept would please search engines, but at the same time it cannot be viewed as ‘white hat’ no matter how well it works.

SEO news blog post by @ 11:19 am on June 14, 2012


 

GoogleBot Now Indexing JavaScript, AJAX & CSS

Google Bot

Improving the way that GoogleBot parses and interprets content on the web has always been integral to the Google mandate. It now seems that GoogleBot has recently been bestowed the ability to parse JavaScript, AJAX and Cascading Style Sheets.

In the past, developers avoided using JavaScript to deliver content, or links to content, due to the difficulty GoogleBot had in correctly indexing this dynamic content. Over the years GoogleBot has become so good at the task that Google is now asking us to allow it to scan the JavaScript used in our websites.

Google did not release specific details of how or what GoogleBot does with the JavaScript code it finds, due to fears that the knowledge would quickly be incorporated into BlackHat tactics designed to game Search Engine Results Pages (SERPs). A recent blog post on Tumblr is responsible for the media attention; the post showed server logs where the bot was seen accessing JavaScript files.

The ability of GoogleBot to successfully download and parse dynamic content is a huge leap forward in the indexing of the web, and it stands to cause many fluctuations in rankings as sites are re-crawled and re-indexed with this dynamic content now factored into the page’s content.

Previously, Google attempted to get developers to standardize the way dynamic content was handled so that it could be crawled, but the proposal (https://developers.google.com/webmasters/ajax-crawling/) has been more or less ignored.
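For reference, that proposal had sites serve their AJAX states behind ‘#!’ URLs, which the crawler would translate into a special query parameter so it could fetch a static snapshot of the page (example.com is a stand-in, of course):

What the browser shows:   http://example.com/page#!state=2
What GoogleBot requests:  http://example.com/page?_escaped_fragment_=state=2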

GoogleBot has to download the JavaScript and execute it on the Google servers, leading some to the conclusion that it may be possible to use the Google cloud to compute data at a large scale.

SEO news blog post by @ 11:22 am on May 28, 2012

Categories: Coding, Google

 

Salespeople are evil, even at Google

If you use a Google product or service to call someone instead of sending them some GMail, that conversation isn’t feeding Google’s relevance data, at least not yet.

I can just picture the sales team at Google sitting around thinking about how knowing their users, via analysis of email/search/etc., drives their product, and how people using their services via video/audio are escaping that analysis.

And yet, doesn’t Google own the most sophisticated voice analysis system on the planet? Wouldn’t it be really easy to compress audio/video data, upload it to a Google server, and process it for relevance?
Let’s say you kept the NKOTB concert a complete secret because it’s your anniversary gift to your wife, and Google realizes you’re at the concert from the audio in the background of a phone call plus your general location. If that means Google now includes ‘Download NKOTB live at xyz concert’ adverts in your ad stream for a few days following, wouldn’t that be great?

Well, those salespeople managed to convince someone at Google that it’d be wise to at least patent such a method, so that in the coming years they aren’t licensing it from their competition. Seems smart, right?

Not with all the FUD (Fear, Uncertainty, and Doubt) lingering on-line. No sir, this is war with the tin-foil beanie brigade.

Even Google Trends shows us how trust is at an all time low:
trust - Google Trends

I love the regional breakdown on that search…

First of all, patenting a technology doesn’t guarantee it will happen; how long have we had flying car patents, and still nothing feasible?

Secondly, what are the odds Google is going to force nervous users to flee to competing products by snooping on conversations without consent?

And finally, in several key locations around the planet, it’s technically illegal to record someone without their consent. Since a cell phone could pick up a background conversation, it would be legal suicide to try to implement ‘eavesdropping’ technology without a boatload of safeguards, warnings, and disclaimers.

Nerds are Still Cool However..

We’ll need to talk about this more ‘in depth’ at a later stage in its development, but Google’s Knowledge Graph is very exciting.

Have a look at the Knowledge Graph video released yesterday by Google:

http://www.youtube.com/watch?v=mmQl6VGvX-c

I’m sure Bing and competing search engines will just claim Google is evil and trying to keep you on their pages by giving you the answers you need instantly, but if that’s their idea of evil then slap on the horns and poke me with a trident. :)

SEO news blog post by @ 10:45 am on May 17, 2012


 

Google IO is a sellout

I know we’ve been anti-Google the last few weeks, but Google’s upcoming IO conference really did sell out, in 20 minutes no less!
GoogleIO 2012 Sold Out
With only 5,500 seats, the 20-minute sell-out wasn’t too shocking, but the $2,000 eBay auction for a Google IO ticket took me by surprise. I tried to go find it for a confirmation picture, but it was already nuked. Even at the full price of $900 a pop, the scalping price was over double! Heck, educational admission tickets are only $300 each!?

If you’re wondering ‘what the heck is Google IO?’ that could be our fault, because our post about it last year, Ooh Shiny! ChromeOS & ChromeBook, was totally about the new ChromeBook and not the conference.. Oh man!

Each year Google hosts its Input/Output conference to not only share a vision of what’s ahead for Google, but also to get some feedback from the developers and users that work with Google’s solutions.

As is the case each year, the team of nerds over at Google has put together a ‘chrome experiment’ for anyone with a Google account.

The splash page for the Google IO event experiment teases us with the following:

“Brush up on your geometry, dust off your protractor, and architect a machine only you could have dreamt of. Join developers tackling our latest Chrome Experiment for a chance to have your machine featured at Google I/O.”

… yet the site seems a wee bit too popular at the moment, refusing to proceed into the actual site no matter how many times you click it. I’ll have to keep trying, but right now it looks like I’ll have to come back and update after lunch.

If you REALLY wanted to click something to fiddle with in your browser, and it has to work right this second, well try Browser Quest from Mozilla Labs! While the game is currently still up and running I expect it will completely flat-line as it reaches peak popularity. I am running around as DobbieBobkins if you get in.

Browser Quest is an HTML5 site, with everything using the latest web-tech available. Don’t let those 8-bit graphics fool you; this is a modern technical demonstration. I’ve seen the game work just fine with the latest versions of Chrome, Safari, Firefox, and Opera, though Opera was loading like dirt because of some broken plugins.

Speaking of coming back to things. I keep saying that we will have more on the Beanstalk Minecraft map contest, including some videos to inspire folks with ideas.. Sadly I am SO out of date with video capture that it boggles the mind.

Apparently my problem with recording is missing codecs, so I installed the FFdshow package, which supposedly contains the right codecs to maintain the correct color space and gamma values in my source videos. If that sounded like Spanish: in a nutshell, I’m fixing some dark video issues. :)

Here’s my last upload fresh off the preview screen, and it’s STILL TOO DARK?

http://vimeo.com/39291926

So, for now, today’s post is more of a bookmark, with some Google IO teasing, to be visited again after lunch when things are less popular. ;)

SEO news blog post by @ 1:38 pm on March 27, 2012


 

Are multiple rel=authors worth it?

Recently Google+ made it a lot easier for content authors to indicate the sites they publish to. Here’s a short clip showing that new easier process:
[jwplayer mediaid="3346"]

So that’s now told Google+ that you are an author for the sites you’ve listed. It also adds a backlink on your Google+ Profile page back to your site.

At this point, once Google has parsed the changes and updated its caches, you’ll start to see author credits on articles with the same name and email address. While the Google help docs ‘suggest’ you have a matching email address published with each post, it’s clearly not a requirement.

So after this update you could start to see ‘published by you’ when doing searches on Google for posts you’ve made. But what’s to stop anyone from claiming they wrote something on-line?

The other half of this process is creating a rel="author" or rel="publisher" link on the content pages of your blog/web site.

In the case of Beanstalk’s blog, all posts get the same rel="publisher" link; it looks like this (you can see it in ‘view-source’):

<link href="https://plus.google.com/115481870286209043075" rel="publisher" />

That makes Google see our blog posts as ‘published’ by our Google+ Profile, which is a bit ‘lazy’, but the process to add that code was very easy (and we blogged about it here) compared to the task of tagging each post with a custom link.
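For comparison, tagging an individual post with its author would look something like this (the profile URL here is a placeholder, not a real ID):

<a href="https://plus.google.com/[author-profile-id]" rel="author">Post Author</a>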

The truth is that there has to be some ‘ranking signal’ for multiple authors, and there should be a quality/trust grade based on the profiles of the authors. So what is that factor hiding in the ranking code? Great question!

Since we’ve already spent some time with Google+ and a single author source, we intend to run some tests and prove out the value, or lack of it. Our plan is to report on both the difficulty of applying the right tags to the proper posts, and the value of that effort. If anyone reading along has some good suggestions for the test process, please drop us a comment via the main contact form.

Where’s Bart?

Chia Bart is retired for now. I need to find a decent webcam and then I’ll re-do him with some time-lapse for added thrills and joy. In the meantime we’re looking at offering the readers a really unique chance at interacting with the blog:

Each month we will be posting some macro images. Each one will be a challenge to figure out, and we’ll take guesses on things like ‘material’, ‘location’, ‘object’, etc., and then we will rate the guesses based on how close they are. Technically, even if we had one guess like “The picture for week 2 looks like glass”, that could win!

The best guess will get recognition on our blog and we’ll announce the best guess each month on Twitter as well.

This is the macro image for week two of February (February Macro 2). If you think you know what this is, or where this is, send us your best guess via Twitter or G+.

SEO news blog post by @ 12:14 pm on February 9, 2012


 

Another iFrames test

Back on November 7th, Beanstalk’s Ryan Morben decided to run a test on iFrames to see how they get crawled (and to answer the question … do they?). I had to support the test, as we’d been receiving mixed signals. It was a good clean test: an iFrame of our own domain containing only text. The notion was … if a search for that unique string of text produced our page as a result, then we’d know Google crawled it.
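The embed itself was about as simple as it gets; something along these lines (the URL and file name here are hypothetical stand-ins, not the actual test):

<iframe src="http://www.example.com/iframe-test.html" width="400" height="60"></iframe>

… where the framed page contains nothing but a unique string of text that appears nowhere else on the web.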

The result was interesting … the content got crawled, but the ranking page wasn’t our blog post but rather the URL of the frame source itself. It appears that Google treated the iFrame call as a link more than as content on the page.

When we published the results I got an interesting email from Stefano: “I noticed the results of your iframe test showed that google did indeed index the unique phrase. Can you do another test where you load the phrase from a different domain? Thanks!”

My answer to this question is “yes” based on the initial results; however, it’s definitely worth a test. To that end we’re running two separate tests on this. The first we will be running here on the Beanstalk site with the following frame:

The second test location isn’t being released in this post, just to make sure the process isn’t gamed. I’d rather have slow or even no results than false positives.

Stay tuned – as soon as we have conclusive answers as to how it turns out, we’ll let you know.

And until then … enjoy the weekend !

EDIT: We will still be doing a follow-up post with more code examples, but we have results of the first test of iframe text crawling:

Service       Crawled?   Indexed?
Bing          no         no
Blekko        no         no
DuckDuckGo    no         no
Google        yes        no
MajesticSEO   no         no
Yahoo         no         no
Yandex        yes        no

Essentially this result is what we should have expected.

A search engine needs a crawler that understands the iframe syntax, and since a lot of iframe data is secure or private, there’s little motivation to go that extra mile. It’s also no surprise that the engines that do crawl the frame don’t publicly index the result, since that’s just asking for privacy and other issues.

SEO news blog post by @ 4:02 pm on November 25, 2011

Categories: Code Tests

 
