Much Ado About (not provided)

Our motto is (not provided).


As many of our readers may already know, earlier this week Google changed the way its search URLs work so that, for those of you who monitor your analytics (which should be all of you), you'll now only see (not provided) where once you would have seen your keyword. The move was met with disappointment and more than a bit of annoyance on the part of SEOs and website owners.

The reason (so they say) is to protect the privacy of their users. The logic is that if keyword data is passed along, it can be picked up in the log files of the site being visited along with data such as the IP address, which would allow the user to be pinpointed with some degree of accuracy. So, to make sure that the owner of the custom t-shirt site I visited last week can't figure out it was me who searched "custom t-shirts canada", that data is now kept from the receiving site.

Now, here's the annoyance: the privacy argument would hold up UNTIL we realize that the same can't be said for paid traffic. If you purchase traffic through AdWords, the keyword data is still tracked. Of course it has to be, or we'd all just be paying for AdWords and trusting that we were getting the traffic we paid for and that the bids made sense, but the hypocrisy is pretty obvious: why is a user who clicks on an organic result more deserving of privacy than one who clicks on a paid result? They're not, obviously, and we're not being told the whole truth. BUT that's not really the discussion to be had, is it? The fact of the matter is, it's Google and they can do what they want with their own website. I believe I should get to do what I want with my site (within the confines of the law, of course), so I won't take that away from others. So what is the real discussion …

What Do We Do Now?

While we're all spending time arguing about the hypocrisy and crying foul, the fact of the matter is that it is what it is and now we have to figure out what to do.  We no longer have keyword data from Google.  There are two routes forward: the short-term patch and the long-term changes.

Short Term

In the short term we can use Advanced Segments to at least get a good idea of which keywords are producing which effects.  Essentially, we can use them to filter traffic that follows patterns similar to how specific keywords or keyword groups behaved.  This tends to only work well with large traffic groupings, so unless you get huge traffic for single phrases that behave uniquely, you'll probably have to group your traffic together: branded vs. non-branded, for example.  I'm not going to get into how this is done in this blog post simply because I wrote a lengthy piece on it for Search Engine Watch back when (not provided) was first becoming an issue.  You can read about it at http://searchenginewatch.com/article/2143123/How-to-Understand-Your-Google-Not-Provided-Traffic.
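
If you prefer to do that grouping outside of Analytics, the same branded vs. non-branded split is easy to reproduce on an exported keyword report (for the keyword data you do still have, from other engines or from history). Below is a minimal sketch in Python; the brand terms and the CSV column names are assumptions you would swap for your own data, not a finished tool:

    import csv
    import re

    # Assumed brand terms; replace with your own brand variations.
    BRAND_PATTERN = re.compile(r"beanstalk|bean stalk", re.IGNORECASE)

    def split_keywords(csv_path):
        """Split an exported keyword report into branded and non-branded buckets.

        Assumes a CSV with 'Keyword' and 'Visits' columns (adjust to your export).
        """
        buckets = {"branded": 0, "non-branded": 0}
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                bucket = "branded" if BRAND_PATTERN.search(row["Keyword"]) else "non-branded"
                buckets[bucket] += int(row.get("Visits", 0) or 0)
        return buckets

    if __name__ == "__main__":
        print(split_keywords("keyword_report.csv"))

Either way you slice it, you're leaning on grouped behaviour rather than individual phrases.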

This will only work for a while, however.  You'll see new traffic coming in and won't know how its behavior impacts the results.  Essentially, this will give you a decent idea until your traffic sources change, your site changes, or time passes.  So what do we do …

Long Term

In the long run we have no option but to make massive adjustments to the way we look at our sites.  We can no longer determine which keywords perform best and craft the user experience for them.  Instead we have to look at our search traffic in one big bucket.  Or do we?

While this may be true for some traffic, we can still segment by the landing page (which will give you a good idea of the phrases that drove the visit) as well as look at groups of pages (all the pages in a single directory, for example).  I know, for example, that this change comes right when we ourselves are redesigning our website, and in light of it I will be changing the way our directory structure and page naming system work to allow for better grouping of landing pages by common URL elements.  I imagine I won't be the last to consider this factor when building or redeveloping a website.
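
To illustrate what grouping by common URL elements looks like in practice, here's a small Python sketch that rolls an exported landing-page report up by top-level directory. The column names are assumptions based on a typical Analytics export, so adjust them to whatever your download actually contains:

    import csv
    from collections import Counter
    from urllib.parse import urlparse

    def group_by_directory(csv_path):
        """Roll landing-page entrances up by top-level directory.

        Assumes a CSV export with 'Landing Page' and 'Entrances' columns.
        """
        totals = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                path = urlparse(row["Landing Page"]).path
                parts = path.strip("/").split("/")
                # '/services/seo/page.html' -> '/services/', and '/' stays '/'
                directory = "/" + parts[0] + "/" if parts[0] else "/"
                totals[directory] += int(row.get("Entrances", 0) or 0)
        return totals

    if __name__ == "__main__":
        for directory, entrances in group_by_directory("landing_pages.csv").most_common():
            print(directory, entrances)

The better your directory structure maps to your keyword themes, the more that roll-up tells you.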

What will need to change is our reliance on specific pieces of data.  I know I like to see that phrase A produced X result and work to improve on that.  We'll now have to look at larger groupings of data.  A downside to this (and Google will have to address it, or we as SEOs will) is that it's going to be a lot easier to mask bad practices, since specific phrase data won't be available.  I know, for example, that in an audit I was part of we found bot traffic based in part on common phrase elements.  Today we wouldn't be able to do that and the violations would continue.

We’re All Still Learning

Over the next couple of months we'll all be adjusting our reporting practices to accommodate this change.  I know that some innovative techniques will likely be developed to report as accurately as possible which traffic is doing what.  I'll be staying on top of it and we'll keep you posted here on our blog and on our Facebook page.

SEO news blog post by @ 10:48 am on September 26, 2013

Categories: Analytics, Google

 

SEO concerns for Mobile Websites

You want to serve your clients' needs regardless of what device they visit your site with, but how do you do it easily without upsetting your SEO?

Let's look at the various options for tackling mobile sites and what each one means in terms of SEO:

Responsive Design:
 
Visual demonstration of responsive web design

  • Responsive design is growing in popularity, especially as communications technology evolves, and bandwidth/memory use is less of a concern.
  • This method also gives us a single URL to work with which helps to keep the sitemap/structure as simple as possible without redirection nightmares.
  • On top of that, Googlebot won’t need to visit multiple URLs to index your content updates.
  • Less to crawl means Googlebot will have a better chance to index more of your pages/get deeper inside your site.
“Why is/was there a concern about mobile page size?”

Low-end mobiles, like a Nokia C6 from 4+ years ago (which was still an offering from major telcos last year), typically require that total page data be less than 1 MB in order for the phone to handle the memory needs of rendering/displaying the site.

If you go over that memory limit/tipping point you risk causing the browser to crash with an error that the device memory has been exceeded. Re-loading the browser drops you on the device’s default home-page with all your history lost. I think we could all agree that this is not a good remote experience for potential clients.

Higher-end devices are still victims of their real-world connectivity. Most 3rd generation devices can hit really nice peak speeds, but rarely get into a physical location where those speeds are consistent for a reasonable length of time.

Therefore, even with the latest gee-whiz handsets, your success rate in delivering your entire page to mobile users will be impacted by the amount of data you require them to fetch.

In a responsive web design scenario the main HTML content is typically sent along with CSS markup that caters to the layout/screen limitations of a mobile web browser. While this can mean omission of image data and other resources, many sites simply attempt to ‘resize’ and ‘rearrange’ the content leading to very similar bandwidth/memory needs for mobile sites using responsive design approaches.

The SEO concern with responsive designs is that, since the same written HTML content is served with mobile styling, it's crucial that external search engines/crawlers understand the mobile-styled content isn't cloaking or some other black-hat technique. Google does a great job of detecting this, and we discuss how a bit later on with some links to Google's own pages on the topic.

Mobile Pages:

Visual demonstration of mobile web page design

 
If you’ve ever visited ‘mobile.site.com’ or something like that, you’ve already seen what mobile versions of a site can look like. Typically these versions skip reformatting the main site content and they get right down to the business of catering to the unique needs of mobile visitors.

Not only can it be a LOT easier to build a mobile version of your site/pages, you can expect these versions to have more features and be more compatible with a wider range of devices.

Tools like jQuery Mobile will have you making pages in a jiffy using modern techniques/HTML5. It's so easy you could even make a demo image purely for the sake of a blog post! ;)

This also frees up your main site design so you can make changes without worrying what impact it has on mobile.

“What about my content?”

Excellent question!

Mobile versions of sites with lots of useful content (AKA: great websites) can feel like a major hurdle to tackle, but in most cases there are some awesome solutions for making your content work with mobile versions.

The last thing you'd want to do is block content from mobile visitors, and Google's ranking algorithm updates from June 2013 agree.

Even something as simple as a faulty redirect where your mobile site is serving up:
mobile.site.com/
..when the visitor requested:
www.site.com/articles/how_to_rank.html

.. is a really bad situation, and in Google’s own words:

“If the content doesn’t exist in a smartphone-friendly format, showing the desktop content is better than redirecting to an irrelevant page.”
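
If you do serve a separate mobile URL, the fix for that faulty redirect is simply to preserve the path the visitor asked for rather than dumping everyone on the mobile homepage. Here's a rough sketch of the idea in Python (Flask is used purely for illustration and the hostnames are made up):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    # Hypothetical hostnames, for illustration only.
    MOBILE_HOST = "mobile.site.com"
    MOBILE_HINTS = ("iphone", "android", "blackberry", "mobile")

    @app.route("/<path:page>")
    def maybe_redirect(page):
        ua = request.headers.get("User-Agent", "").lower()
        if any(hint in ua for hint in MOBILE_HINTS):
            # Keep the requested path intact instead of sending every mobile
            # visitor to the homepage (the "faulty redirect" Google warns about).
            return redirect("http://" + MOBILE_HOST + "/" + page, code=302)
        return "Desktop version of /" + page

And if a mobile equivalent of a page genuinely doesn't exist, showing the desktop version beats redirecting to something irrelevant, as the quote above says.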

 
You might think the solution to ‘light content’ or ‘duplicate content’ in mobile versions is to block crawlers from indexing the mobile versions of a page, but you’d be a bit off the mark because you actually want to make sure crawlers know you have mobile versions to evaluate and rank.

In fact if you hop on over to Google Analytics, you will see that Google is tracking how well your site is doing for mobile, desktop, and tablet visitors:
Example of Google Analytics for a site with mobile SEO issues.

(Nearly double the bounce rate for Mobile? Low page counts/duration as well!?)

 
Google Analytics will show you even more details, so if you want to know how well you do on Android vs. BlackBerry, they can tell you.

“How do the crawlers/search engines sort it out?”

A canonical URL is always a good idea, but using a canonical between a mobile page and the desktop version just makes sense.

A canonical can cancel out any fears of showing duplicate content and help the crawlers understand the relationship between your URLs with just one line of markup.

On the flip-side a rel=”alternate” link in the desktop version of the page will help ensure the connection between them is understood completely.

The following is straight from the Google Developers help docs:

On the desktop page, add:

<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1" >

and on the mobile page, the required annotation should be:

<link rel="canonical" href="http://www.example.com/page-1" >

This rel=”canonical” tag on the mobile URL pointing to the desktop page is required.

Even with responsive design, Googlebot is pretty smart, and if you aren’t blocking access to resources intended for a mobile browser, Google can/should detect responsive design from the content itself.

Google’s own help pages confirm this and provide the following example of responsive CSS markup:

    @media only screen and (max-width: 640px) {...}

In this example they are showing us a CSS rule that applies when the screen's max-width is 640px, a clear sign that the rules would apply to a mobile device rather than a desktop.

Google Webmaster Central takes the information even further, providing tips and examples for implementing responsive design.

Ever wondered how to control what happens when a mobile device rotates and the screen width changes? Click the link above. :)

SEO news blog post by @ 3:51 pm on August 16, 2013


 

Free Ranking Reports on Google!

I keep seeing people ask for their rank, asking what the best free ranking tools are, etc., like it's so darn hard to ask Google where your website stands for its keywords.

First of all, Google Webmaster Tools has an 'Average Position' column for popular search queries that tells you a lot of great info about your site's keywords.

Google WMT Search Queries chart
This is an example of Search Queries sorted by Average Position

 
The link to this area is:
https://www.google.com/webmasters/tools/top-search-queries?hl=en&siteUrl=
+ your URL.

Our website link would look like this:
…earch-queries?hl=en&siteUrl=http://www.beanstalk-inc.com/

You can also click at the top of the position column to sort it, or tack this onto the end of the URL:
&grid.sortBy=8&grid.currentOrder=2d

If you aren’t getting enough data from this, first try out the download options, and load them up in a spreadsheet so you can sort/filter the data.

Most folks are surprised what a little bit of filtering and grouping can accomplish to provide you with a fresh perspective on data.

Still not enough? Well, there are a zillion free tools that will gladly trade your URL and keyword targets for a limited ranking report.

This is valuable data, so why not trade something free for it? Google does!

Indeed, there are enough free tools out there that I won't even bother mentioning one. Why don't we just make our own?

It's not really 'hard' to get your rank; let's break it down:

  • Make a list of phrases you are trying to rank for
  • Do a Google search for your first phrase
  • Keep searching until you find your site
  • Take note of the position
  • Repeat

So how does the average person do this? It gets pretty technical, but all the resources are out there, and they're free!

To break that down in simple terms:

  • Setup a server or install XAMPP
  • Setup a database/table to store your rankings by date
  • Make a page that CURLs for your keywords
  • Setup a schedule to execute the php page regularly

Bingo, you now have your own ranking reports tool, and nobody is the wiser, besides Google, and they are usually too busy to care that you’re extra curious about your rankings.
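
For the scripting-inclined, here's what that flow might look like sketched in Python instead of PHP. It stores positions by date in SQLite and leaves the actual fetching/parsing of the results page as a stub, since that's the part where you need to respect Google's terms of service (more on that below). All the names here are illustrative:

    import sqlite3
    from datetime import date

    PHRASES = ["custom t-shirts canada", "seo services victoria"]  # your target phrases
    MY_SITE = "www.example.com"

    def fetch_position(phrase, site):
        """Return the position of `site` for `phrase`, or None if not found.

        Stub: fill in your own fetching/parsing here, keeping Google's TOS
        and sensible rate limits in mind.
        """
        raise NotImplementedError

    def record_rankings(db_path="rankings.db"):
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS rankings (day TEXT, phrase TEXT, position INTEGER)"
        )
        for phrase in PHRASES:
            con.execute(
                "INSERT INTO rankings VALUES (?, ?, ?)",
                (date.today().isoformat(), phrase, fetch_position(phrase, MY_SITE)),
            )
        con.commit()
        con.close()

    if __name__ == "__main__":
        record_rankings()

Run it on a schedule (cron, Task Scheduler, whatever you like) and the table slowly becomes a ranking history you can chart or export.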

Nerd reading a book

Don't get me wrong, there are a lot of fine details to explain and not everyone is comfortable installing programs like this or scripting, but I am going to look at getting permission to make this a step-by-step how-to guide with full downloads so even novices can give it a try.

A final point to make is that excessive/automated queries on Google are a breach of their TOS, and could result in annoying blocks/complaints from Google if you were to attempt to use this method for a large set of keyword phrases, or wanted the reports updated constantly.

If you are a ‘power user’ who needs a lot of data, you’ll end up paying someone, and either you pay to use someone’s API key at a premium, or you get your own API key from Google and only pay for what you use.

Seems like an easy decision to me!

SEO news blog post by @ 1:03 pm on January 24, 2013


 

Google’s new Offline Conversion API

Happy 2013!

It may look like we’ve been loyal to the Mayan calendar, but we’ve just been busy internally over the holidays and didn’t blog.

Google has also been busy in 2013, retiring the old Offline Conversions APIs (both the Javascript and Python versions were retired in November 2012), and beginning a new Offline Conversions import service within the DoubleClick Search brand.

This announcement has been subject to both good and bad press, typically depending on the technical skills/depth of knowledge of the story writer.

Most writers looking for the worst possible scenario chose to doubt Google’s privacy controls, and boldly suggest there will be problems due to data aggregation.

Google’s DoubleClick service explicitly states:

“Advertisers are prohibited from sending personally identifiable information using this feature, as outlined by the Terms of Service for the API.”

Further to that, there are lots of assumptions being made about who can supply data, who has access, and what data is relevant. In one article they just tossed in a mention that the data could be 'decrypted' by 3rd parties and/or government agencies, with nothing to back that claim up.

To help understand the role of this service, let's look at a typical use case:

  • You sell widgets.
  • Your website has online ordering.
  • You also have a physical store.
  • Clients are finding items online, but buying them in person.

So if you are basing your promotion efforts on web-based analytics, you will be in the dark as to which promotions drove clients to come to the store and make a purchase.

Unless Google gave you an interface with which to send them transaction info on offline sales?

Let's see how that would work:

  • A Google user is searching for widgets.
  • Google puts a PPC Ad on the page promoting your widgets.
  • The user clicks on the ad and looks up 'Blue Widget #42'.
  • 2 hours later, your in-store till sells 2 'Blue Widget #42's.
  • The till sends "2 x Blue Widget #42" to Google as 'sold'.

That's it: Google can now relate the pay-per-click advertisement to the sale of the widget, and you have more info on how well that advertisement worked.
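
The exact upload format lives in Google's DoubleClick Search documentation, but conceptually all the till has to do is hand Google a record that ties the original ad click back to the offline sale. A hypothetical sketch (the field names are ours, not Google's):

    import csv
    from datetime import datetime, timezone

    def offline_sale_record(click_id, item, quantity, value):
        """Build a hypothetical offline-conversion row tying an ad click to a till sale.

        Field names are illustrative only; the real import format is defined by
        the DoubleClick Search documentation.
        """
        return {
            "click_id": click_id,  # captured when the original ad click hit the site
            "item": item,
            "quantity": quantity,
            "value": value,
            "conversion_time": datetime.now(timezone.utc).isoformat(),
        }

    if __name__ == "__main__":
        row = offline_sale_record("abc123", "Blue Widget #42", 2, 39.98)
        with open("offline_conversions.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=row.keys())
            writer.writeheader()
            writer.writerow(row)

Note that nothing in a record like that identifies the shopper, which is exactly the point of the Terms of Service quote above.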

This also works very well with telephone based sales, especially if you are in a position to use specific phone numbers, or extensions, to narrow down how the call came about.

So while some folks are very concerned about how much companies will know about them when companies start comparing notes, that’s not the situation here at all.

Companies have been comparing notes for years without the help of Google. Just think about the shopping trends you reveal when you use an Air Miles card.

Google only wants to help reduce unwanted/ineffective advertising and reduce the amount of money businesses spend to reach potential clients.

SEO news blog post by @ 12:46 pm on January 3, 2013


 

Time to look at your Google Calendars (Again)

October is a trade-off between birthdays (New Year's babies unite!), feasting, and parties vs. bearing witness to the lament caused by waking up in the dark, low energy, and the changing seasons.

Google can't change the position of the sun, but it could improve your mood by helping you quickly add events to your calendar.

Example of a Google calendar with more calendars added to it.
I tried to get a screenshot of the weather feature but only so much fits in 550px

 
To get more events on your calendar without importing or adding them one at a time, you need to 'subscribe' to additional calendars.

The first step, after getting logged into a Google account, is to click on the Other Calendars menu and choose the "Browse Interesting calendars" option:

The Other Calendars menu in Google Calendars.

 
On the next page you should see three tabs, “Holidays”, “Sports”, and “More”.

I'd say everyone should add their national holidays. Even if you've done this before, take a moment to preview the official calendar for your country, as the official version is likely a lot better than whatever you've been subscribing to.

The sports tab is pointless, since we’re nerds, and there’s no WRC/Drifting events in the list. (I kid, I kid.. No, not really.)

Finally the ‘More’ tab is where the magic happens.

Under the ‘More’ tab you want to seek out: “Contacts’ birthdays and events”

Subscribing to this calendar and allowing it to show on your main calendar will help you track all those birthday parties that will help get you through this dreary fall season.

Keep in mind however that subscribing to a calendar does not modify your calendar, nor does it add notifications or alerts to your calendar.

If you want to be reminded a week ahead of your best friend’s birthday, you should go make that event manually.

If you just want to know on the day of his birthday that you forgot, then you can simply click on the birthday’s calendar item and then click on “copy to my calendar” to get that event on your personal calendar.

All my friends use FB not G+ so who cares?

Well, at least in New Zealand, G+ user interest is actually passing Twitter/LinkedIn for new users, and making up ground quickly on Facebook.

Roy Morgan's analysis of Social Media trends in NZ is a bit hard to look at (even upside down), but the data is very telling of the growth that G+ is getting from the adoption of Android phones and other Google products.

I’d love to say that G+ is just more social/edgy/trendy than FB but that’s never what it’s been for/about.

If you’ve read any of my rants about comparing the two social networks you’ll know I look at it like replacing a banana (FB) with an orange (G+).

On one hand, a banana can be fun, especially if you're care-free about discarding the peel, but an orange has some serious potential that a banana lacks, especially in clean presentation.

Ultimately, as SEOs we would advise paying respects to both networks as each has its perks, though G+ hasn't made news this week for app developers selling 1 million user profiles for $5 US.

TL;DR: Man buys 1 million user data records (mainly first/last name, gender, age, email, phone #, etc.) for $5 and FB thanks him by telling him not to talk about it.

So really, enjoy your access to private data while it lasts, build those calendars while it’s easy, because if we have app developers selling a million user data records for $5, you can be sure people won’t want to share valid info with insecure sites. In fact due to this, it’s better to put in intentionally incorrect info and only trust services with solid security reputations.

SEO news blog post by @ 11:47 am on October 25, 2012


 

Red-Handed Face-Palm

Facebook is making headlines again, but not the kind that Mark Zuckerberg would like.
Mark Zuckerberg looking unhappy
Earlier this week 'Limited Run', an e-commerce developer that used Facebook as part of its start-up media campaign, posted a report on their findings of click-through data from their Facebook ads.

The data that Limited Run shared was a bit startling. In their own words:
"Facebook was charging us for clicks, yet we could only verify about 20% of them actually showing up on our site."

Since data is all about who’s looking at it or how someone looks at it, the folks at Limited Run signed into a ‘handful’ of other tracking services and found the exact same thing.

At this point you have a web developer who is very curious about something going on with their web traffic, so naturally they built an analytics system for their own site:
"Here's what we found: on about 80% of the clicks Facebook was charging us for, JavaScript wasn't on … in all of our years of experience, only about 1-2% of people coming to us have JavaScript disabled, not 80% like these clicks coming from Facebook."
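
Limited Run didn't publish their code, but the general technique they describe is easy to picture: log every raw page hit server-side, have the page fire a tiny JavaScript beacon, and treat hits that never send the beacon as non-JS (and likely bot) traffic. Here's a minimal sketch of that idea in Python using Flask, with made-up endpoint names rather than anything Limited Run actually built:

    from flask import Flask, request

    app = Flask(__name__)
    raw_hits = set()     # visits we served HTML to
    beacon_hits = set()  # visits whose JavaScript phoned home

    PAGE = """<html><body>Widgets for sale!
    <script>
      // Runs only if JavaScript is enabled: report back with the visit id.
      fetch('/beacon?visit=' + encodeURIComponent('{visit_id}'));
    </script>
    </body></html>"""

    @app.route("/")
    def landing():
        visit_id = request.args.get("visit", request.remote_addr)
        raw_hits.add(visit_id)
        return PAGE.format(visit_id=visit_id)

    @app.route("/beacon")
    def beacon():
        beacon_hits.add(request.args.get("visit", request.remote_addr))
        return ("", 204)

    @app.route("/stats")
    def stats():
        return {
            "raw_hits": len(raw_hits),
            "js_hits": len(beacon_hits),
            "no_js_hits": len(raw_hits) - len(beacon_hits),
        }

Compare those counts against what the ad network bills you for and you get exactly the 20% vs. 80% gap Limited Run is describing.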

Limited Run is a start-up company, and the publicity from being the first to catch Facebook with its hand in the proverbial cookie jar of advertising money would certainly help ensure the company's run isn't so limited.

Even still Limited Run was VERY careful to point out that there is little to no way of proving that Facebook is behind the bot -> ad traffic.

They are however dropping Facebook’s advertising and their company page on FB because of a claim that FB was unwilling to assist them with a name change, “because they weren’t actively paying for $2k or more in campaigns”.

Plus, if 80% of the traffic from an advertising source is fake and you have to pay for 100% of it, there are better ways to promote your company.

So as this was a smaller advertiser, not someone spending millions of ad revenue on Facebook, we took it as a one-off issue, until this morning when Forbes posted a link to an article on Macleans.ca about “blank” image advertising tests on Facebook.

The gist of the piece is that a blank image test actually netted double the clicks of a static banner style image (think a logo or some non-promotion/non-offer) and only one click in ten thousand less than the average banner ad.

Web Trends even jumped in to do some testing on the clicks to see if there was some sort of curious appeal to clicking on a blank image, and by using heat maps and quizzes they confirmed that the traffic was not human.

Facebook makes 85% of its ~$2.2 billion revenue from advertising traffic, and 14%-19% of FB revenue is from Zynga, a company that is suddenly involved in a stock crash scandal.
Mark Pincus - Founder of Zynga Games
If you hadn't heard, just prior to some ugly profit reports, the company's founder Mark Pincus and key members of the company cashed out over $516 million in shares!

Zynga share prices are currently at $2.83 each, way down from the $10 initial share price, and miles away from the $14.69 peak price of the company’s stock.

It would appear for now that both companies have some explaining to do, and some problems to solve. For the users/subscribers this should be a wake up call on where you are spending your time and your advertising budgets.

SEO news blog post by @ 10:28 am on August 1, 2012


 

Webcology Year In Review

For those interested in what some of the top minds in SEO, SEM, Mobile Marketing and Social Media have to say about 2011 and, maybe more importantly, what they see coming in 2012, Thursday's Webcology is a must-listen.  Hosted on WebmasterRadio.fm, Jim Hedger and I will be moderating 2 separate round-tables with 5 guests each over 2 hours, covering everything from Panda to personalization and mobile growth to patent applications.  It's going to be a fast-paced show with something for everyone.

The show will be airing live from 2PM EST until 4PM EST on Thursday, December 22nd.  If you catch it live you'll have a chance to join the chat room and ask questions of your own, but if you miss it you still have an opportunity to download the podcast a couple of days later.  I don't often focus this blog on promoting the radio show I co-host, but with the lineup we have, including SEOmoz's Rand Fishkin, Search Engine Watch's Jonathan Allen and Mike Grehan, search engine patent guru Bill Slawski and many more talented and entertaining Internet Marketing experts, it's definitely worth letting our valued blog visitors know about it. And if you're worried it might just be a quiet discussion, Terry Van Horne is joining us to ensure that doesn't happen.  Perhaps I'll ask him a question or two about his feelings on Schema.org (if you listen to the show … you'll quickly get why this is funny). :)

So tune in tomorrow at 2PM EST at http://www2.webmasterradio.fm/webcology/, be sure to join the chat room to let us know your thoughts and enjoy.

SEO news blog post by @ 3:32 pm on December 21, 2011


 

Do iframes count for SEO?

Great question!

I swear I've seen iframes crawled before, but even if I haven't seen iframe data in search indexes, it's not something that we should just count on and forget about, especially with the growing competition in the search engine market. I'm looking at you, Blekko.

So how do you test such a thing without wasting time waiting eons for the results to appear in the SERPs? Here’s how!

The text below is just an iframe:

Seems like a unique phrase that very few, if any search engine optimization companies would use, so it should work well.

After a few days, if we're never seen for the phrase above but we are seen for the phrase below, the question is answered. We'll run the query across the gamut of search engines and see if we can't report back on who crawls it and how quickly. ;)

May many Russian rockets sail past the Earthling moon and delve into many Martian delights with a souvenir to show for it.

SEO news blog post by @ 3:54 pm on November 7, 2011


 

Secure search service stirs SEOs slightly

Every once in a while there's an announcement that makes a huge kerfuffle online only to be yesterday's news the next week. Yesterday's news is that Google made the move towards secure searches for Google account holders who are logged in while searching. It was actually announced on the 18th, and I didn't see anything until Kyle mentioned it on the afternoon of the 19th, so it's actually worse than yesterday's news!

Google secure search

Anyone following search engine news would be perfectly normal in feeling a bit of déjà vu, since Google has had secure search options since way back in early 2010. What's stirring up responses in the latest announcement is the fact that they are now dropping the header info that would normally be passed along to the destination site, where it could be tracked and analyzed for SEO purposes.

Google has plenty of good reasons to make this move and only a few reasons against it. Here’s a quick breakdown of the pros/cons:

  • Most searchers are not logged in and won't be affected
  • Estimates suggest only 3%-7% of current search traffic is logged in
  • Tracking the “not provided” searches in Google Analytics will show the missing traffic
  • Mobile users connecting from public WiFi networks can search securely
  • Users of free internet services will have additional privacy
  • HTTPS Everywhere is crucial and backed by Google
  • Webmaster Central still provides search terms to registered owners

Cons:

  • Mobile searchers tend to be logged in
  • Traffic projections for mobile search are growing
  • Google has to make the data accessible to its paid users
  • SSL is now becoming a much larger ranking factor

Amy Chang over on the Google Analytics blog had the following point to make:

“When a signed in user visits your site from an organic Google search, all web analytics services, including Google Analytics, will continue to recognize the visit as Google ‘organic’ search, but will no longer report the query terms that the user searched on to reach your site..”
“Keep in mind that the change will affect only a minority of your traffic. You will continue to see aggregate query data with no change, including visits from users who aren’t signed in and visits from Google ‘cpc’.”

Thom Craver, Web and Database specialist for the Saunders College at Rochester Institute of Technology (RIT) was quoted on Search Engine Watch as noting:

“Analytics can already run over https if you tell it to in the JavaScript Code … There’s no reason why Google couldn’t make this work, if the site owners cooperated by offering their entire site via HTTPS.”

Personally, as you can tell from my lead-in, I feel like this is much ado about nothing. Unless competing search engines are willing to risk user privacy/safety to cater to SEOs in a short term bid for popularity, this isn’t going to be repealed. I don’t like to see the trend of money = access, but in this case I don’t see much choice and I’ll stand behind Google’s move for now.

SEO news blog post by @ 12:12 pm on October 20, 2011


 

Google Analytics Features SEO Reports

Google has just announced a new service feature to use in your Google Analytics. The popular analytics utility now offers a set of reports called "Search Engine Optimization." This feature is now out of beta testing and is available for public consumption.

The Webmaster Tools section contains three reports based on the Webmaster Tools data that Google feels will offer a better understanding of how your site performs within search results.

Google has created a new section for these reports called Search Engine Optimization that will live under the Traffic Sources section.
You will need to have a Google Webmaster Tools account before you can use this feature and will have to connect your Google Analytics and Webmaster Tools accounts. Once you are set up, the data is displayed almost immediately, although metrics from the last two days are not available. Following is a brief summary of some of the features offered in the new reports.


The reports you’ll find there are:

Queries: impressions, clicks, position, and CTR info for the top 1,000 daily queries. One point to keep in mind here is that Google is showing the "average position" and is not actually displaying your true "rank"; instead, they record the position of each impression and average those.
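
To illustrate with made-up numbers: if a query generated 60 impressions at position 3 and 40 impressions at position 5, the report would show an average position of (60×3 + 40×5)/100 = 3.8, even though your listing never actually sat at position 3.8 for anyone.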

Landing Pages: impressions, clicks, position, and CTR info for the top 1,000 daily landing pages. The Landing Pages report shows how many times your top landing pages were shown in search results, again along with average position and CTR.

Geographical Summary: impressions, clicks, and CTR by country. This is useful when you are targeting other countries in your SEO strategy. There is also a Google Property Report which is useful for seeing how your site performs among different search results like image, mobile and video searches.

It seems in this instance that Google is trying to play catch-up to Blekko, but both companies have some fine tuning to do on their SEO report features before they are fully functional. The new SEO reports do make Google a little more transparent, something many feel they have been severely lacking for some time.

Once the rough spots are smoothed out, the new reports will undoubtedly offer not only some very good insight into how your site is performing but also some insight into how the Google SERPs actually function in reality.

SEO news blog post by @ 11:29 am on October 5, 2011

Categories: Analytics, Google

 
