Beanstalk's Internet Marketing Blog

At Beanstalk Search Engine Optimization we know that knowledge is power. That's the reason we started this Internet marketing blog back in 2005. We know that the better informed our visitors are, the better the decisions they will make for their websites and their online businesses. We hope you enjoy your stay and find the news, tips and ideas contained within this blog useful.


October 10, 2012

New Webmaster Guidelines Part 1 – Design and Content

Google recently updated their Webmaster Guidelines following the latest algorithm update. It is easy to feel inundated with the amount of information regarding web design dos & don'ts and best practices for the internet. As an SEO I am frequently asked, "How can I get my site to rank?" The fact of the matter is that we follow Google's Webmaster Guidelines, which establish the best practices for websites to follow. Many are concerned about the Panda/Penguin updates and are worried that their site will be hit, or they have a site that has already been hit. Our advice remains consistent: "Drink the Google Kool-Aid".


At one time, it was exceedingly difficult to get a straight answer from Google in regards to what was considered best practice. This led to a wild-west frontier attitude, and many designers and SEOs adopted bad practices. The result was an inundation of webspam in the Google SERPs that made it very difficult to get quality search results.

The Panda and Penguin algorithms and their subsequent updates were a very concerted effort to rid the SERPs of webspam. In the wake of these substantial updates, my advice to customers remains consistent: follow Google's established guidelines. The mantra I repeat to my customers is: "Would I do this if search engines didn't exist?"

For many of us this is old news, but I still find myself learning new things to try and better practices to adopt. Much of the messaging from Google has been very consistent regarding what makes good content. This post will look specifically at Google's recommended Design and Content Guidelines to help Google find, crawl and index your site.

Site Hierarchy

  • Give your site a clear hierarchical structure and make it as easy to navigate as possible. Every page should be reachable from at least one static text link.
  • Think of your website as a book with logical sections and headings, each with their own unique and relevant content (see the sketch after this list).
    • The title of your book is your domain URL (e.g., www.booktitle.com).
    • Your title tag <title> can be your topic for the page. It defines what content will be on this page (e.g., <title>Book Characters</title>).
    • Your heading tag is your chapter title, e.g., <h1>Book Characters</h1>. Typically this is the same or very close to the page title and must be directly relevant.
    • Have only one topic per page and only one H1 tag on any page.
    • Use subsequent heading tags (h2, h3, h4) to define further related divisions of the chapter.
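
To make the book analogy concrete, here is a minimal sketch of how those pieces might fit together on a single page. The domain, titles and headings are made-up examples, not a template you must copy:

<!-- A minimal sketch of the book analogy on a hypothetical page at
     www.booktitle.com (all names are made-up examples). -->
<html>
<head>
  <title>Book Characters</title>  <!-- the page's single topic: one "chapter" -->
</head>
<body>
  <h1>Book Characters</h1>        <!-- one H1 per page, matching the title -->
  <h2>Protagonists</h2>           <!-- h2/h3/h4 divide the chapter further -->
  <p>Unique, relevant content about the protagonists...</p>
  <h2>Antagonists</h2>
  <p>Unique, relevant content about the antagonists...</p>
</body>
</html>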

Site Map

  • Offer a sitemap for your visitors. Not only does this provide a valuable service to your customers, but it can help improve the indexing of your site by bots.
  • If you have an extensive number of links on your site, you may need to break your sitemap into multiple pages.
  • Remember that a website sitemap is different from the sitemap.xml that you should submit to Google's Webmaster Tools (a minimal example follows below).
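
The visitor-facing sitemap is just an ordinary HTML page of links; the sitemap.xml file follows the sitemaps.org protocol. Here is a minimal sketch of the latter (the URL is a made-up example):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page; the location below is a made-up example -->
  <url>
    <loc>http://www.booktitle.com/characters</loc>
  </url>
</urlset>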

Internal Linking

  • Keep the number of links on any page to the bare minimum. The guidelines used to state ‘around 100’ but this is one area where less is more.
  • In the most recent iteration of the Webmaster Guidelines, Google states only to 'keep it to a reasonable amount'. Too many links leading to other internal pages or offsite is distracting to the visitor; it lowers conversion rates as people get lost, and it creates frustration.

Textual Content

  • Google has always stated that ‘content is king’. It is absolutely imperative that you create rich, useful and dynamic content that engages your audience. All textual content needs to be well written and grammatically correct. It should clearly and accurately describe your content and it must be relevant to the page that it is found on.
  • Do not write for what you think Google wants to see. Think about what searchers would type into a search engine to find your page and ensure that your content actually includes those terms.
  • Do not concern yourself with keyword densities. Inevitably the content comes across as spammy and does not read well. Google may regard this as keyword stuffing and see broken/confused grammar as potential spam or scraped content…exactly what the Panda/Penguin updates are designed to target, and penalize for.

Page Coding

  • Use a crawler on your site such as XENU's Link Sleuth, or Google's Webmaster Tools, to check your site for broken links.
  • Check your site with the W3C to ensure that your site has valid HTML.
  • Avoid the use of dynamic pages with cryptic URLs (e.g., the URL contains a "?" character). Try to use keyword focused URLs that reflect the page you are building. If you must use a dynamic URL structure, keep them few and the parameters short.

Images

  • You can give Google additional details about your images, and provide the URLs of images it might not otherwise discover, by adding the information to a sitemap.
  • Do not embed important content into images; always use text links instead of images for links, important names, etc., where possible. Google crawlers cannot determine the text displayed in an image. If you must use an image for textual content, ensure that you make use of the image ALT attribute to describe the image with a few words.
  • Ensure that all image title and ALT attributes are descriptive (but not spammy) and accurate. Follow these guidelines for creating great ALT text for your images.
  • Give your images detailed and informative filenames (see the example after this list).
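
For instance, a descriptive filename plus accurate ALT and title text might look like this (a hypothetical image; the filename and text are made-up examples):

<!-- A hypothetical example: descriptive filename, ALT and title attributes -->
<img src="/images/red-leather-book-cover.jpg"
     alt="Red leather book cover with gold lettering"
     title="Red leather book cover">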

The following areas (video and rich snippets) and their usage are best described by Google themselves:

Video

View the full post here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156442

Rich Snippets

View the full post here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1093493

Coming next time, I will review the newly updated Technical Guidelines and then conclude with Google’s Quality Guidelines.

SEO news blog post by @ 1:15 pm

October 9, 2012

EMD Insanity and Coining Phrases

It’s clearly time for Beanstalk to officially list ourselves as a sitedefibrillation solution provider.

Why? Because apparently the secret to SERP dominance with an EMD is to coin your own phrase!

Do a search for ‘coinflation’ + ‘gold’ or really, almost any other keyword to see what Google considers an ‘improved’ result following the EMD update.

Google Search results for Coinflation

If you didn't get something like the results above, please let us know!

Okay, so that seems slightly silly, but how the heck did they pull that off? There's clearly PPC/AdWords competition for the phrase, and an EMD should either be a penalty or moot, shouldn't it?

Well apparently not! In fact an EMD can still clearly be an asset if the 'quality' scores are all above par!

This means that if you have an organic campaign, with ongoing back links/references from trusted sources, and you aren’t hitting other penalties, you really should be feeling no loss at all from the EMD update.

Indeed, if your competition was using non-organic approaches to EMDs they should have taken a trust hit, and you may see an improvement in position due to their failings!

So while I can show you some examples of the EMD update apparently failing to work, we can assure you it's working, and overall it seems like a positive step for Google.

10″ Google Nexus from Samsung?

Last night CNET announced some ‘highly’ probable info that Samsung is manufacturing a new 10.1″ Nexus tablet for Google.

The article is more of a stub of hearsay, but it had some rather 'exact' details, including the resolution of the display:

The 2,560×1,600 display will have a PPI (pixels per inch) of about 299, said Shim. That tops the 264 PPI on the 9.7-inch 2,048×1,536 Retina iPad.

Clearly this will be the ‘high end’ model for the Nexus line (currently manufactured by Asus), especially when you consider that Google will be releasing a 7″ Nexus subsidized down to a $99 price this December!

In fact, since we're pondering things to come more than talking facts, I'd have to assume this will be a dual- or quad-core device with GPU acceleration of some sort to assist with up-scaling video content and 3D games to that eye-popping resolution.

So if this high-end Nexus tablet is anything less than $399 I’d be really shocked and very worried for Apple.

Okay, perhaps 'more worried for Apple' would be more accurate, given its current public affairs issues…

In case you're wondering 'who cares?': Tim Pool goes to the streets and broadcasts unedited footage of protests/events.

I'd like to think Apple is patenting this to prevent other companies from doing it, but in actual fact this is very creepy stuff from the overly litigious makers of the most expensive walled gardens on the planet.

It seems almost like Apple is testing how well their brand/product can weather a bad public image at this point.

SEO news blog post by @ 11:53 am


October 2, 2012

You may need an EMT after the EMD Update!

Last Friday Matt Cutts tweeted about Google’s latest update, which focuses on penalties for ‘low-quality’ Exact Match Domain names, hence the EMD TLA.

Twitter posts from Matt Cutts on the latest EMD Update

While Google is never big on giving us the details, let's digest this together!

Using a relevant keyword in a domain has been a very long-standing ranking signal.
i.e., a consulting site for financial companies using 'financial-consulting.com' as a domain would be seen as relevant.

Over the years this has led to people grabbing up domains with keywords in them for SEO purposes.

JACOBS BY MARC JACOBS FOR MARC BY MARC JACOBS ETC..

Having your keywords in your domain name didn’t mean overnight dominance of the web, thankfully. Indeed, there was usually some trade-off between desirable keywords and a reasonably short domain name.

In fact, no organic/white-hat SEO would suggest you use something like:

‘best-value-online-financial-consulting-company-with-proven-results.com’

Why? Because the gains in SEO wouldn’t match the losses in user trust/conversions.

Would a good organic SEO/White Hat tell you NOT to purchase those types of domains for 301s to your main site?

I’d like to think so, but this was clearly a strategy for a lot of sites competing for top rankings.

Regardless of your SEO ethics, the practice of domain parking/selling because of search ranking signals is clearly an unnecessary burden on the internet.

While the ‘domains for sale’ issue would still exist without search engines, search engines honestly should be making your choice of domain name MUCH less relevant.

Ideally, fresh internet traffic should occur as a match between the searcher's needs and the services/information that your site provides.

And with this latest update, it'd appear that Google agrees with the idea that a book should be found by more than what's on the cover.

As of this last update, you can expect sites with nothing but some keyword-dense 301'd domains to now face a penalty instead of a positive ranking signal.
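
For anyone unsure what a 301'd domain is: a 301 is a permanent redirect from one URL to another, and in PHP it can be as simple as the sketch below (the destination URL is a made-up example borrowed from earlier in this post):

<?php
// A minimal sketch of a 301 (permanent) redirect in PHP;
// the destination URL is a made-up example.
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.financial-consulting.com/");
exit;
?>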

We didn’t see this coming!

EMD Update Results

I’m already seeing people post sad tales of the deep impact this update is having on certain sites, and I’ve had a laugh at a few ‘professionals’ claiming they never felt this day would come.

Personally, while I’ve watched some very good presentations on SEO and web ranking strategies, the one thing that helps me most as an SEO is Matt Cutts’ breakdown of the real philosophy behind ‘good SEO’ which boils down to:

Never do something for the sake of search engine rankings alone.

If you like ‘Lord of the Rings’ then look at this as:

‘One Rule to Lead them all, one Rule to be found by…’

..and you should never have to fear a Google update!

In fact you should look at each Google update as a chance for your rankings to improve as other sites are punished for their ‘clever’ attempts to game the system.

Another Google Easter Egg?

And finally, to end the post with a chuckle, here’s a Google search phrase for you to test out:

I was hoping this was more than just an ‘Easter Egg‘ in Google’s search, but alas Google hasn’t yet licked mathematical artificial intelligence. :p

SEO news blog post by @ 12:01 pm


September 20, 2012

Dublin the Airports: iOS 6 Maps is Rotten


Apple’s extra Airport..

Was anyone expecting Apple to replace Google’s Maps application with something superior? Apparently, the iPhone user base and Apple actually expected this to happen.

If you look at even the most biased sites reviewing the new 'Apple' Maps app for iOS 6, you will see guarded optimism and lots of 'reasoning' clash with angry rants from amazed and disappointed users.

One thing I don't see is anyone calling it 'the Maps app that Apple bought from TomTom'; the best I've seen is a mention that they relied heavily on TomTom and OpenStreetMap for data alone.

Instead I see a very consistent collection of sympathetic remarks like: ‘this is beta, it can only get better’, ‘for a first attempt this is outstanding’, ‘people will question anyone who takes their own path..’

But Apple isn’t taking their own path, they are merely attempting (badly) to replace something that wasn’t really broken.

Sure, Google wasn't toiling endlessly to bring the iOS app all the updates it was adding to the Android version of Google Maps.

I’m guessing Apple really expected Google to beta test ideas on the Android and then polish them up and finalize them on the iPhone?

So sure, Google put Android development first, and there were things that Google Maps did better on the Android, but that still doesn’t mean it ‘had to go’.

Apple could have offered both solutions in a 'use what you like' approach to pleasing its user base, but this is a company making headlines for outrageous profits and the working conditions of its manufacturing partners.

Removing the choice to pick another company's solution would clearly explain why Apple didn't take a settlement from Samsung and wanted to ban their phones. Apple wants profits, and if Apple wants really happy customers it could lower prices and focus on better apps vs. removing the best ones for inferior versions.

And in other News

Google has blessed a new meta tag:

<meta name="news_keywords" content="Apple Maps, iOS 6, Google Maps, Android, TomTom, Google news meta tag">

Do you publish content that you would call ‘news’?
Would you like Google to better understand the topic of your posts?
Would you like the freedom to ignore keyword use in a topic for style reasons?

Then brothers and sisters, this new meta-tag is what you’ve been waiting for!

The format is very simple, and it belongs near the top of your page, inside the <head> … </head> section.

Here’s an example:

<meta name="news_keywords" content="10 keywords, separated, by commas, just like, meta keywords, etc..">

That's some 'easy breezy' SEO optimization, and it's great if you are indeed publishing 'news'; not just ranting about Apple. :)

SEO news blog post by @ 11:51 am


August 27, 2012

Gmail Rank & the Moon Landing

Gmail Rank

Bill Slawski had an interesting blog post the other day speaking about the rise of Gmail Rank and the importance of good subject lines. It seems that Google is experimenting with the possibility of including your own emails in your search results. Users will have to opt in, and only the emails that have been received via Gmail will be used.

It is thought that the "rankings" used to decide which emails to show will be similar to the existing colored "importance rankings" currently used to display the relative importance of your emails. Gmail does allow the user to sort and filter messages by their importance markers and offers some other advanced search filters; whether or not this functionality will be carried over to an integrated web search remains speculative.

…In other news:

Neil Armstrong's footprint on the moon

Beanstalk would like to say a fond farewell to Neil Armstrong. Armstrong died Saturday, Aug. 25, 2012, at age 82. Armstrong commanded the Apollo 11 spacecraft that landed on the moon July 20, 1969. As the first man to walk on the moon, his passing truly marks the end of an era. The moon landing happened a bit before my time, but those who witnessed it remember where they were and what they were doing when they heard those famous lines: “That’s one small step for man, one giant leap for mankind.”

SEO news blog post by @ 12:16 pm

Categories: Google, Rankings


August 16, 2012

You don’t want the next Penguin update…

Scary Matt Cutts

Is Matt Cutts just goofing around or is he really trying to scare us?

The statement in the title of this article, from Matt Cutts, has the SEO world looking for further information as to just how bad the next Penguin update will be.

During SES San Francisco this week, Matt Cutts got a chance to speak about updates and how they will affect SEOs. One of the things he was quoted as saying really caught my eye:

You don’t want the next Penguin update, the engineers have been working hard…

Mr. Cutts has recently eaten some words, retracting his statement that too much SEO is a bad thing, and explaining that good SEO is still good.

Even with attendees saying that he spoke the words with no signs of ominous intent, how do you expect the SEO world to take follow-up statements like:

The updates are going to be jarring and jolting for a while.

That's just not positive-sounding at all, and it almost has the tone of an admission that the next updates are perhaps going to be 'too much' even in Matt's opinion, and he's one of Google's top engineers!

My take is that if you are doing anything even slightly shady, you’re about to see some massive ranking spanking.

Reciprocal links, excessive directories, participating in back-link cliques/neighborhoods, pointless press releases, redundant article syndication, duplicate content without authorship markup, poorly configured CMS parameters, etc.. These are all likely to be things, in my opinion, that will burn overly SEO’d sites in the next update.

The discussion also made its way to the issues with Twitter data feeds. Essentially, since Google and Twitter no longer have an agreement, Google is effectively 'blocked' from crawling Twitter.

Dead twitter bird

On the topic of Twitter crawling Matt Cutts was quoted as saying:

..we can do it relatively well, but if we could crawl Twitter in the full way we can, their infastructure[sic] wouldn't be able to handle it

Which to me seems odd, since I don't see any other sites complaining about how much load Google is placing on their infrastructure.

Clearly the issue is still political/strategic and neither side is looking to point fingers.

With Twitter's social media relevance diminished, you'd think +1's would be a focus point, but Matt Cutts also commented on the situation, stating that we shouldn't place much value on +1 stats for now.

A final point was made about Knowledge Graph, the new information panel that’s appearing on certain search terms.

Since the Google Search Quality team is now the Google Knowledge Graph team Matt Cutts had some great answers on the topic of Knowledge Graph, including the data sources and harm to Wikipedia.

There had been a lot of cursing about Google simply abusing Wikipedia’s bandwidth/resources but it was made clear during the session that Wikipedia is not traffic dependent because they don’t use ads for revenue.

Essentially, if Wikipedia’s data is getting better utilized, and they haven’t had to do anything to make it happen, they are happy.

If you want more details, there are lots of #SESSF-tagged posts on Twitter and plenty of articles coming from the attendees.

I’m personally going to go start working on a moat for this Penguin problem..

SEO news blog post by @ 11:56 am


July 11, 2012

Google Puts Smack-Down on Infographics

Whether you know what they are called or not, most of us have seen those wonderful images that depict information in a pleasing graphical format and usually span 20 pages vertically. Infographics are visual representations that display information, data or knowledge. For some time now, these infographics have been used as link bait and are all the rage because they offer content in an easily digestible format.


In a recent interview with Eric Enge, Matt Cutts stated that Google feels they are being abused as a link building tactic and will soon be discounted. Mr. Cutts went on to state:

"This is similar to what people do with widgets as you and I have talked about in the past. I would not be surprised if at some point in the future we did not start to discount these infographic-type links to a degree. The link is often embedded in the infographic in a way that people don’t realize, vs. a true endorsement of your site."

"In principle, there’s nothing wrong with the concept of an infographic." Cutts told Enge. "What concerns me is the types of things that people are doing with them. They get far off topic, or the fact checking is really poor. The infographic may be neat, but if the information it’s based on is simply wrong, then its misleading people."

Of course this is indicative of a much larger problem of trying to obtain accurate information and statistics from the internet. While it is unlikely that the value of infographics will be completely abolished, the same rules apply as to any content on your website; if you expect people to link back to your site based on your infographic, you will need to ensure that it:

  • Is relevant to your industry and to your visitors.
  • Offers accurate sources for acquired information/statistics.
  • Gives the viewer new information, tells them how to do something, or describes a process.
  • Is free of spammy content and meta information.

"Any infographics you create will do better if they’re closely related to your business and it needs to be fully disclosed what you are doing," Cutts advised.
Similar to what happened with Squidoo lenses, we are seeing another web-trend that has been over-used and abused by online marketers and now we are seeing the resulting smack-down from Google.

Like all other web trends, it is not so much a question of the usefulness of the trend, but how long it will take Google to devalue the tactic once it becomes abused. Any tactic that attempts to garner backlinks must always be relevant to the user, rich in content, and free of nefarious ploys to abuse the tactic.

By employing only white-hat tactics, any strategies you employ will allow you to weather the storms of any Google updates. It is this practice that has allowed Beanstalk SEO Inc. to pass through the barrage of Panda & Penguin updates unscathed and consistently maintain our rankings.

SEO news blog post by @ 12:03 pm


July 5, 2012

Particle Physics and Search Engines

If you’ve been hiding under a rock then you may not have heard the news of the ‘God Particle’ discovery.

As someone who is fairly scientific, I look at this as more of a proof of concept than a discovery, and 'God' really needs to give Peter Higgs some credit for his theories.

I won't dwell on the news surrounding the Higgs boson particle confirmation, but there are parallels between objects colliding and revealing previously unseen matters.

When Search Engines Collide

It's been some time since Bing and Yahoo merged, so the data sets should be the same, right?

No. That would really be a wasted opportunity, and Microsoft is clearly smarter than that.

By not merging the search data or algorithms of Bing and Yahoo, Microsoft can now experiment with different updates and ranking philosophies without putting all its eggs in one basket.

An active/healthy SEO will be watching the updates to search algorithms from as many perspectives as possible which means a variety of sites on a variety of topics tracked on a variety of search engines.

Say a site gets a ton of extra 301 links from partner sites, and this improves traffic and rankings on Bing, causes stability on Yahoo, and a drop in traffic on Google.

It’s possible to say that the drop on Google was related to a ton of different factors, untrusted links, link spam, dilution of keyword relevance, keyword anchor text spamming, you name it. This is because Google is always updating and always keeping us on our toes.

Bring on the data..

Let's now take the data from Bing and Yahoo into consideration and look at what we know of recent algo changes on those search engines. This 'collision' of data still leaves us with unseen factors but gives us more to go on.

Since Bing has followed Google on some of the recent updates, the upswing on Bing for keyword positions would hint that it's neither a dilution of relevance nor spamming on the keywords/anchor text.

Stability on Yahoo is largely unremarkable if you check the crawl info and cache dates. It’s likely just late to the game and you can’t bet the farm on this info.

What about the other engines? Without paying a penny for the data we can fetch Blekko and DDG(DuckDuckGo) ranking history to see what changes have occurred to rankings on these engines.

Since Blekko is currently well known to be on the warpath for duplicate content, and they are starving for fresh crawl data, a rankings drop on that service can be very informative especially if the data from the other search engines helps to eliminate key ranking factors.

In the case of our current example, I'd narrow down the list of ranking factors that changed in the last 'Penguin' update, contrast those with the data from the other engines, and probably suspect (in this example) that Google is seeing duplication from the 301s; something Bing wouldn't yet exhibit, but Blekko would immediately punish as badly or worse than Google.

The next step would be to check for issues of authority for the page content. Is there authorship mark-up and a reciprocal setup on the author's end that helps establish the trust of the main site content? Does the site have the proper verified entries in Google WMT to pass authority? Barring WMT flags, what about a dynamic canonical tag in the header, even as a test if it's not already set up?
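
As a rough illustration, those authorship and canonical checks boil down to tags like these in the page's <head> (a sketch; both URLs are made-up examples):

<!-- A sketch of authorship and canonical markup in the <head>;
     both URLs are made-up examples. -->
<link rel="author" href="https://plus.google.com/112345678901234567890">
<link rel="canonical" href="http://www.example.com/this-page/">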

Start making small changes, watch the results, and be patient. If you’re not gaming Google and you’ve done something accidental to cause a drop in rankings, you need to think your way through the repairs step by step.

It’s not easy to evaluate but the more data you can mash-up, and the better you understand that data, the closer/quicker you can troubleshoot ranking issues and ensure that your efforts are going to be gains.

SEO news blog post by @ 12:12 pm


June 21, 2012

Apple: On the Charge!

apple controller

Over at Apple, things are changing to give the company even more power, profit, and exclusive control over its customers than ever before.

The good news is that Apple has been charged and found guilty of misleading Australian consumers who purchased Apple’s advertised “iPad with WiFi + 4G” only to find it’s not compatible with the 4G networks in Australia.

This resulted in a $2.25 million fine plus $300,000 in costs for Apple; a fine that seems light given the gross disregard for Australian consumer laws that Apple showed by selling a product that cannot deliver on its advertised specifications.

Indeed a small price to pay to purchase Australian tablet buyers without investing in efforts to make the hardware work with the country’s ISPs.

Protecting you from yourself:

Apple also made headlines by patenting an anti-surveillance technology that endeavours to mask a user’s on-line activity with fake information.

Clone Troopers

In a nutshell the service would hide your real activities behind a wall of fake information. If you ‘like’ a Mars Bar™ then your clone would like a brand of chocolate bar that directly competes with your choices. In essence it’s like an electro-acoustic muffler that covers your on-line activity with white-noise.

There is some implication that Apple has a technique to confuse actions of the clone with your actions, but I’d have to see that in action to honestly discuss it.

At the end of the day this means that instead of Apple and ‘others’ knowing about your interests/habits, only Apple will have accurate information, and they can claim that all other ‘targeted advertisers’ are second to them in accurately promoting to someone’s interests.

To me, this reinforces that Apple customers are the sole property of Apple, including their information.

Soul’d Out?

Apple has some great changes coming for loyal consumers. They are spending the time to remove the excellent Google Maps application, which is a free service, and replacing it with TomTom maps, which they likely had to purchase/invest in.

It’s also rumoured that the next update to Apple’s Siri app will focus on data from Apple partners like Yelp, Rotten Tomatoes, and OpenTable, instead of Google.

This was a brave move to protect Apple from Google's growing competition in hardware markets. If Apple doesn't limit Google's growth with every effort it can muster, Apple consumers will start to see why so many people are switching to Android.

From an SEO perspective, the fact that Apple and its users are getting away from Google is worth noting. When I am optimizing a site, I'm doing it for the good of the site/company, not my preferences in search engines.

So if I had a client who sold flower arrangements, or something else that is very likely to be searched for with Siri, I'd seriously be considering the competition and rankings over on Yelp as part of their external ranking strategy for the coming months.

Spending your money for you…

These changes from free services to paid options won't cost consumers too much more; at least not compared to the new 19-pin iPhone interface that Apple is switching to, starting with the iPhone 5.
The old iPad and iPhone adapters
You heard that correctly, all those accessories you have purchased over the years with iPad/iPhone connections are all going to be junk. Not to fret however, Apple’s authorized partners will sell you all new devices, and are already working on a new line of must-have add-ons featuring the new connectors.

This way, all the cheap knock-off adapters/accessories that aren’t making Apple any money are going to be worthless and Apple will be climbing back into your pockets to kick those imposters out.

And thus the walls of the garden appear to be growing taller, thicker, and electrified on both sides.

Speaking of Power & Charging…

In more promising news the process of pulling solar power from infrared light is closer to ‘practical application’ with recent progress in the field of carbon nanotube research over at MIT.

If you look at a typical solar panel and its light-to-power conversion, you'll note that infrared (non-visible) light energy is largely wasted.

This is especially troublesome when you realize that ~40% of the sun's light energy reaching our planet's surface is actually in the infrared spectrum and isn't being converted to electricity by traditional solar panel technology.

Plus this new research is pointing to a compatible technology that can be added to existing installations vs. replacing existing solar panel installations.

Here’s the relevant section from the original article:

The carbon-based cell is most effective at capturing sunlight in the near-infrared region.

Because the material is transparent to visible light, such cells could be overlaid on conventional solar cells, creating a tandem device that could harness most of the energy of sunlight.

The carbon cells will need refining, Strano and his colleagues say: So far, the early proof-of-concept devices have an energy-conversion efficiency of only about 0.1 percent.

So while the recent announcement is exciting, and very promising, we won’t see the results for some time to come due to efficiency/cost issues which need to be resolved first.

The real news is that folks worried about investing in current solar tech need not worry as much about the future if the next improvements are going to be complementary to existing solutions.

SEO news blog post by @ 1:10 pm


June 14, 2012

TECHNOlogy: What is AJAX? Baby Don’t Hurt Me!

Wikipedia defines AJAX (Asynchronous JavaScript And XML) as:

A group of interrelated web development techniques used on the client-side to create asynchronous web applications.

What a mind-numbing description! What you need to know is that AJAX is the combination of several technologies to make better web pages.

If you have no interest in making websites but you like techno music, or you’re curious why I picked that title, this is for you:

This is a good soundtrack for this post. You should hit play and keep reading.

After a bit of time with HTML/CSS I started to build a growing list of issues that I couldn’t solve without some scripting.

I learned some PHP, which wasn’t tricky because it uses very common concepts. Here’s the traditional ‘hello world’ example in PHP:

<?php echo 'Hello World'; ?> = Hello World

.. and if I wanted to be a bit more dynamic:

<?php echo 'Hello World it is '.date('Y'); ?> = Hello World it is 2012

Because PHP is only run when the page is requested, and only runs on the server side, it's only the server that loads/understands PHP; the browser does nothing with PHP.

With PHP code only seen by the server, it’s a very safe way to make your pages more intelligent without giving Google or other search engines a reason to be suspicious of your site.

In fact one of the most common applications of PHP for an SEO is something as simple as keeping your Copyright date current:

<?php echo 'Copyright© 2004-'.date('Y'); ?> = Copyright© 2004-2012

Plus, when I need to store or fetch information, PHP alone isn't that easy, so I added MySQL to the mix, and suddenly my data nightmares are all data dreams and fairy tales (well, almost). I won't dive into MySQL on top of everything here, but let's just say that when you have a ton of data, you want easy access to it, and most 'flat' formats are far from the ease of MySQL.

But I still had a long list of things I couldn’t do that I knew I should be able to do.

The biggest problem I had was that all my pages had to ‘post’ something, figure out what I’d posted, and then re-load the page with updated information based on what was posted.

Picture playing a game of chess where you are drawing the board with pen and paper. Each move would be a fresh sheet of paper with the moved piece drawn over a different square.

PHP can get the job done, but it’s not a very smart way to proceed when you want to make an update to the current page vs. re-drawing the whole page.

So I learned some JavaScript, starting with the basic ‘hello world’ example:
<span onClick="alert('Hello World');">Click</span>

hello world javascript alert box

If I wanted to see the date I'd have to add some more JavaScript:
<script language="javascript">
function helloworld()
{
    var d = new Date();
    alert('Hello World it is ' + d.getFullYear());
}
</script>

<span onClick="helloworld();">Click</span>

Hello World it's 2012 alert box example

JavaScript is ONLY run on the browser; the server has no bearing on JavaScript, so the example above won't always work as expected because it's telling you the date on your computer, not on the server. How would we see the date of the server?

This is where AJAX comes into play. If we can tell the browser to invisibly fetch a page from a server and process the information that comes back, then we can combine the abilities of JavaScript, PHP, and MySQL.

Let's do the 'hello world' example with AJAX using the examples above.

First you would create the PHP file that does the server work, named something witty like 'ajax-helloworld.php':
<?php echo 'Hello World it is '.date('Y'); ?>

..next you’d create an AJAX function inside the web page you are working on:
<script language="javascript">
function helloworld()
{
    var ajaxData; // Initialize the 'ajaxData' variable, then try to set it to hold the request (on error, assume IE)
    try {
        // Opera 8.0+, Firefox, Safari
        ajaxData = new XMLHttpRequest();
    } catch (e) {
        // Internet Explorer browsers
        try {
            ajaxData = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
            try {
                ajaxData = new ActiveXObject("Microsoft.XMLHTTP");
            } catch (e) {
                // Something went wrong
                alert("Your browser broke!");
                return false;
            }
        }
    }
    // Create a function that will receive data sent from the server
    ajaxData.onreadystatechange = function(){
        if(ajaxData.readyState == 4){
            alert(ajaxData.responseText);
        }
    };
    ajaxData.open("GET", "ajax-helloworld.php", true);
    ajaxData.send();
}
</script>

Only the request URL ('ajax-helloworld.php') and the response handling are customized; the rest of the function is a well-established method of running an AJAX request that you should not need to edit.

So we have a function that loads the 'ajax-helloworld.php' page we made and then does an alert with the output of the page; all we have to do is put something on the page to call the function, like that span example with the onClick="helloworld();" property.

Well that’s all neat but what about the ‘X’ in AJAX?

XML is a great thing because it’s a language that helps us with extensible mark-up of our data.

In other words XML is like a segregated serving dish for pickled food that keeps the olives from mixing with the beets.

Going back to our ‘hello world’ example we could look at the ‘date data’ and the ‘message data’ as objects:
<XML>
<message>Hello World it is</message>
<date>2012</date>
</XML>

Now, when the AJAX loads our 'ajax-helloworld.php' and gets an XML response, we can tell what part of the response is the date, and which part is the message. If we made a new page that just needs to display the server's date, we could re-use our example and only look at the 'date' object.
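
As a sketch, reading those objects out of the XML response could look like this (it assumes the PHP file now returns the XML above with a 'text/xml' content type, so responseXML is populated):

// A sketch: reading the <message> and <date> objects out of an XML response
// (assumes the server sends the XML above with a 'text/xml' content type).
if(ajaxData.readyState == 4){
    var xml = ajaxData.responseXML;
    var message = xml.getElementsByTagName('message')[0].firstChild.nodeValue;
    var date = xml.getElementsByTagName('date')[0].firstChild.nodeValue;
    alert(message + ' ' + date);
}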

For some odd reason, most coders like JSON a lot, and this makes it really common to see AJAX using JSON vs. XML to package a data response. Here’s our XML example as a JSON string:
{"message":"Hello World it is","date":"2012"}

Not only is it really easy to read JSON; because JavaScript and PHP both understand JSON encoding, it's really easy to upgrade our 'hello world' XML example over to JSON format.

Here's the new PHP command file 'ajax-helloworld.php':
<?php
$response = array("message" => "Hello World it is", "date" => date('Y'));
echo json_encode($response);
?>

The output of our AJAX PHP file will now be the same as the JSON example string. All we have to do is tell JavaScript to decode the response.

If you look back at this line from the AJAX JavaScript function example above:

if(ajaxData.readyState == 4){
    alert(ajaxData.responseText);
}

This is where we’re handling the response from the AJAX request. So this is where we want to decode the response:

if(ajaxData.readyState == 4){
    var reply = JSON.parse(ajaxData.responseText);
    alert('The message is : ' + reply.message + ' and the date is : ' + reply.date);
}

Now we are asking for data, getting it back as objects, and updating the page with the response data objects.
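
For comparison, here's roughly how the same request could look using jQuery; this is a sketch, and it assumes the jQuery library is already loaded on the page:

// A sketch of the same 'hello world' request using jQuery's $.getJSON
// (assumes the jQuery library is already loaded on the page).
$.getJSON('ajax-helloworld.php', function(reply){
    alert('The message is : ' + reply.message + ' and the date is : ' + reply.date);
});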

If this example opened some doors for your website needs you really should continue to learn more. While the web is full of examples like this, from my personal experience I can honestly tell you that you’ll find yourself always trying to bridge knowledge gaps without a solid lesson plan.

Educational sites like LearnQuest, have excellent tutorials and lessons on AJAX and JavaScript including advanced topics like external AJAX with sites like Google and Yahoo. Plus LearnQuest also has jQuery tutorials that will help you tap into advanced JavaScript functionality without getting your hands dirty.

*Savvy readers will note that I gave PHP my blessings for SEO uses but said nothing of JavaScript’s impact on crawlers/search engines.

Kyle recently posted an article on GoogleBot’s handling of AJAX/JavaScript which digs into that topic a bit more.

With any luck I'll get some time soon to share a gem of JavaScripting that allows you to completely sculpt your page-rank and trust flow in a completely non-organic way. The concept would please search engines, but at the same time cannot be viewed as 'white hat' no matter how well it works.

SEO news blog post by @ 11:19 am

