
Beanstalk's Internet Marketing Blog

At Beanstalk Search Engine Optimization we know that knowledge is power. That's the reason we started this Internet marketing blog back in 2005. We know that the better informed our visitors are, the better the decisions they will make for their websites and their online businesses. We hope you enjoy your stay and find the news, tips and ideas contained within this blog useful.


November 19, 2013

The Curious Case of the DNS Error

While reviewing a client’s Webmaster Tools data yesterday, we came across something rather odd: WMT reported a DNS error on November 14th. A quick manual check of keyword rankings revealed a significant loss, yet the analytics data showed only a slight drop in organic search traffic between Nov 15th and 16th, with a full recovery by the 17th; keyword rankings, however, are still slow to recover. We resolved to investigate the issue further and found that we weren’t the only ones seeking answers.

Fast forward to today and there’s a slight buzz in the SEO community from those who have noticed similar occurrences among their own sites. Barry Schwartz over at Search Engine Roundtable wrote an article today bringing attention to the issue. Although there has been little official word yet, Barry did receive a comment from Google stating that they were not seeing anything unusual.

Could it be a bug on Google’s end?

Dr. Pete over at MOZ.com has suggested it may very well be. In April 2012 there was a Google bug that treated some domains as parked domains – which resulted in devaluation. There is speculation that this may be a similar case.

In the discussion at Search Engine Roundtable yesterday over whether or not there was an algorithm update on Nov 14th that could be related to the occurrence of these DNS errors, Dr. Pete wrote:

“A DNS issue at large scale could absolutely affect the index. If Google had a technical problem that caused them to fail to resolve host records, they could interpret that as a site outage and potentially de-index sites temporarily. That’s speculative, but it’s possible. The fact that many of these warnings seem to be false alarms also indicates that something failed on Google’s end.”

At this point there are no solid answers about the cause of these mysterious DNS errors or what, if anything, they have affected. So, if you too have noticed a DNS error on your site from Nov 14th/15th, hold tight and we will report more information on the issue as it becomes available.

SEO news blog post by @ 4:52 pm

Categories:Articles,Google

 

 

October 21, 2013

Google Q3, Mobile Ads & Hummingbird

Google announced their Q3 earnings last Thursday (October 17th) with higher-than-expected results, up 12% over the previous year at $14.89 billion. This resulted in Google shares crossing the $1000-per-share mark for the first time in the company’s history. Before we get into how that’s being accomplished, let me first insert my brief rant:

<rant>

THERE IS NO REASON GOOGLE SHOULD BE VALUED AT WHAT IT IS !!  IT’S LIKE WE’VE FORGOTTEN THE DOT COM BUBBLE BURSTING WITH THE CRAZY VALUATIONS WE GIVE TECH COMPANIES !!!

</rant>

Alright, feeling better …

At the end of the day, the higher-than-expected earnings came on the back of an average 8% drop in cost per click.  That drop was due mainly to growth in mobile (where rates are cheaper) and, rather than indicating a decline in search, it indicates the contrary.  Growth in the mobile realm is high enough to drag the overall averages down dramatically, yet desktop search did not decline.  This is a case of Google winning in mobile while not losing in desktop – a net gain despite a drop in average cost per click.

If we don’t think this growth in mobile was the key to the Hummingbird changes, we’d be kidding ourselves.  Hummingbird has very little to do with desktop search and everything to do with mobile and mobile devices.  With growth in the sector being what it is, and the enormous revenue opportunities that exist there, it looks as though Google is adjusting their entire algorithm to accommodate it.  And that makes sense, as users demand more from mobile and from technology in general.  The contest is on to feed more data faster and monetize it better.  Tell us what we want before we know we want it.

Will Google be able to keep up?  Only time will tell.  It’s theirs to lose at this point but not that long ago it was Microsoft’s to lose.  Of course, Microsoft could buy Google if they wanted so …

And now, on a lighter note (albeit only slightly relevant), let’s take a moment to remember what we have, what mobile is doing, what we take for granted – and maybe even chuckle a bit …

SEO news blog post by @ 11:10 pm

Categories:Google,Update

 

 

October 4, 2013

Adobe Hacked

A siege on Adobe.

Adobe has been hacked, with the credit card information of almost 3 million accounts compromised. This is a huge blow for the company and for the trust users place in them, as well as a solemn reminder for all of us of the fragile nature of our data. We often discuss the privacy concerns around Facebook and Google, but it takes an event like this to remind us that the systems we take for granted every day, like eCommerce – mandatory now for the smooth functioning of our society – are vulnerable at even the highest level.

Admittedly, the belief currently is that the credit card data pulled was encrypted, but anyone familiar with encryption knows that with enough time and computing power it can be cracked.  You can simply ask the NSA for verification on that point, and sophisticated hackers (say, for example, the ones who could break through Adobe’s security) will have access to the knowledge and resources to get it done.

I personally got my email notification from Adobe at 11:01 PM yesterday, hours after the event occurred.  Fortunately, I’ve paid for everything via PayPal (admittedly more to avoid currency conversion fees), so it’s not a sizable issue for Beanstalk, but for many of my friends and clients this is a huge issue. On their blog, Adobe reported the following actions being taken:

As a precaution, we are resetting relevant customer passwords to help prevent unauthorized access to Adobe ID accounts. If your user ID and password were involved, you will receive an email notification from us with information on how to change your password. We also recommend that you change your passwords on any website where you may have used the same user ID and password.

We are in the process of notifying customers whose credit or debit card information we believe to be involved in the incident. If your information was involved, you will receive a notification letter from us with additional information on steps you can take to help protect yourself against potential misuse of personal information about you. Adobe is also offering customers, whose credit or debit card information was involved, the option of enrolling in a one-year complimentary credit monitoring membership where available.

We have notified the banks processing customer payments for Adobe, so that they can work with the payment card companies and card-issuing banks to help protect customers’ accounts.

We have contacted federal law enforcement and are assisting in their investigation.

I’ll give them kudos for doing what needs to be done, and I can’t even blame them for it happening.  For those affected, make the appropriate arrangements; for those unaffected, take this as a serious reminder about what can happen to your credit card information, your other private information and your website.

Image source: http://runawayjuno.com/2012/07/21/taos-pueblo-adobe-architecture-new-mexico/

SEO news blog post by @ 10:30 am


 

 

October 2, 2013

Google Changes Property Links – Removes Video

For those of you who haven’t yet noticed, Google has changed the way they display the links to their other properties and search functions. As opposed to the typical row of links across the top of the page, Google has replaced this row with an “Apps” button to the right, beside the Sign In link.  For comparison, here’s a link to the archives showing what it looked like just a few hours ago – Google homepage on archives.org.

The page currently looks like this:

Google homepage on search.

Which, when you hover over the Apps link to the right, becomes:

Google homepage on hover.

The push here is clearly to provide a clear path to their most popular search functions, but one might notice (maybe) that on the homepage the option to search videos outside of YouTube has been removed (though it does appear to be currently available in the internal search options).  I suppose I can’t begrudge them – it’s their site and they have the right to point people to their other properties – but do you remember the days when Google was a search engine?  Wasn’t that neat?

SEO news blog post by @ 4:48 pm

Categories:Google

 

 

September 26, 2013

Much Ado About (not provided)

Our motto is (not provided).


As many of our readers may already know, earlier this week Google changed the way their search results pass referral data: those who monitor their analytics (which should be all of you) will now see only (not provided) where once you would have seen your keyword. This move was met with disappointment and more than a bit of annoyance on the part of SEOs and website owners.

The reason (so they say) is to protect the privacy of their users. The logic is that if keyword data passes, it can be picked up in the log files of the site being visited along with data such as the IP address, which would allow the user to be pinpointed with some degree of accuracy. So, to make sure that the owner of the custom t-shirt site I visited last week can’t figure out it was me who searched “custom t-shirts canada”, that data is now kept from the receiving site.

Now, here’s the annoyance: to say that it’s a case of protecting privacy would work UNTIL we realize that the same can’t be said for paid traffic. If you purchase traffic through AdWords, the data is tracked. Of course it has to be, or we’d all just be paying for AdWords and trusting that we were getting the traffic we paid for and that the bids made sense, but the hypocrisy is pretty obvious – why is a user who clicks on an organic result more deserving of privacy than one who clicks on a paid result? They’re not, obviously, and we’re not being told the truth. BUT that’s not really the discussion to be had, is it? The fact of the matter is, it’s Google and they can do what they want with their own website. I believe I should get to do with my site what I want (within the confines of the law, of course) and so I won’t take that away from others. So what is the real discussion …

What Do We Do Now?

While we’re all spending time arguing about the hypocrisy and crying foul, the fact of the matter is that it is what it is and now we have to figure out what to do.  We no longer have keyword data from Google.  There are two routes forward, the short term patch and the long term changes.

Short Term

In the short term we can use Advanced Segments to at least get a good idea of which keywords are producing what effect.  Essentially, we can use them to filter traffic that follows patterns similar to how specific keywords or keyword groups behaved.  This tends to work well only with large traffic groupings, so unless you get huge traffic for single phrases that behave uniquely, you’ll probably have to group your traffic together – branded vs. non-branded, for example.  I’m not going to get into how this is done here in this blog post, simply because I wrote a lengthy piece on it for Search Engine Watch back when (not provided) was first becoming an issue.  You can read about it at http://searchenginewatch.com/article/2143123/How-to-Understand-Your-Google-Not-Provided-Traffic.

This will only work for a while, however.  You’ll see new traffic coming in and won’t know how its behavior impacts the results.  Essentially, this will give you a decent idea until your traffic sources change, your site changes, or time passes.  So what do we do …

Long Term

In the long run we have no option but to make massive adjustments to the way we look at our sites.  We can no longer determine which keywords perform the best and craft the user experience for them.  Instead we have to look at our search traffic as one big bucket.  Or do we?

While this may be true for some traffic, we can still segment by the landing page (which will give you a good idea of the phrases) as well as look at groups of pages (all in a single directory, for example).  I know, for example, that this change comes right when we ourselves are redesigning our website, and in light of it I will be changing the way our directory structure and page naming system work to allow for better grouping of landing pages by common URL elements.  I imagine I won’t be the last to consider this factor when building or redeveloping a website.

What will need to change is our reliance on specific pieces of data.  I know I like to see that phrase A produced X result and work to improve that.  We’ll now have to look at larger groupings of data.  A downside to this (and Google will have to address it, or we as SEOs will) is that it’s going to be a lot easier to mask bad practices, since specific phrase data won’t be available.  I know, for example, that in an audit I was part of, we found bot traffic partly based on common phrase elements.  Today we wouldn’t be able to do this and the violations would continue.

We’re All Still Learning

Over the next couple of months we’ll all be adjusting our reporting practices to accommodate this change.  Some innovative techniques will likely be developed to report as accurately as possible which traffic is doing what.  I’ll be staying on top of it, and we’ll keep you posted here in our blog and on our Facebook page.

SEO news blog post by @ 10:48 am

Categories:Analytics,Google

 

 


September 9, 2013

Liquid Galaxy: Science Fiction Becomes Fact

Google Earth is definitely one of the most fascinating playthings in the company’s toybox; it was impressive when it launched in 2001 (under the name ‘Keyhole Earthviewer’) and it remains impressive to this day. I remember logging on as a teenager at home and finding the Eiffel Tower in Paris; back then, the only option was a top-down view, and I was disappointed when I tried to change the angles so I could “stand” next to France’s most iconic building. But Google Earth has taken care of that problem; thanks to Street View being integrated into the program, you can zoom into practically anywhere on Earth and roam the streets, exploring cities you’ve never seen from the comfort of your desk.

That’s not all; Google Earth has added data to allow users to zoom in under the oceans, see the Lunar Lander on the surface of the Moon, and even view high-resolution images of Martian terrain scooped from the Mars Orbiter and Exploration Rovers. Google Earth users can even view historical images, traveling back in time to view what certain areas looked like many years ago. You can explore the Wieliczka Salt Mine in Poland and the Prado Museum in Madrid.

But one of Google Earth’s most incredible features is one you probably won’t have heard of; it’s an open-source, DIY-capable piece of code that takes us one step closer to bringing science fiction tech to life. It’s called Liquid Galaxy, and its description – an ‘immersive Google Earth’ – doesn’t do nearly enough justice to the possibilities it can create. You won’t find Liquid Galaxy as a major Google release; its official project page is full of technobabble and source code modifications from engineers all over the world. Part of the beauty of the product is that it can be whatever you want it to be. But when it comes down to it, Liquid Galaxy is a design concept that allows you to project Google Earth onto several screens at once, creating a unified surround view of the world. It was originally developed by some Google employees as an independent project; they wanted to recreate the experience of seeing their geo-product imagery in a more seamless way. Using a few extra Linux workstations, they built a big gazebo-style case that housed eight 55-inch LCD screens, and used a cluster of computers to project Google Earth seamlessly and simultaneously – a combination of the Holodeck and a huge flight simulator.

Liquid Galaxy presents an endless amount of potential for teaching everything from geography to climate change and urban planning; after taking Liquid Galaxy on the road and being met with overwhelming praise, in 2010 Google made their configuration, codes, and schematics public so that anyone could rig up their own version. This makes Liquid Galaxy a fascinatingly unique Google product; while it’s been available to the public for three years, very few people have had firsthand experience with one. Georgia State University has a 48-screen display wall using four Windows 7 machines; NASA has one at the Johnson Space Center. Some can be controlled using Xbox Kinect; others use head-tracking software. Liquid Galaxy has been used to run the virtual reality game Second Life, allowing players to truly feel as if they’re stepping into Linden Labs’ simulated universe. One civilian user has even rigged a five-screen Liquid Galaxy to run a Quake 3 mod.

If you’re computer-savvy and itching for a new project, you can find the Liquid Galaxy project here. The site contains how-tos and a guide for where to buy pre-built components, and encourages users to post their enhancements, any defects they find, and what they’ve built with the technology. Liquid Galaxy’s open-source nature means that the possibilities really are endless; with a few high-quality computers and a creative imagination you could end up making your wildest science fiction dreams come true.

SEO news blog post by @ 9:35 am


 

 

August 16, 2013

SEO concerns for Mobile Websites

You want to serve your clients’ needs regardless of what device they use to visit your site, but how do you do it easily without upsetting your SEO?

Let’s look at the various options for tackling mobile sites and what each means in terms of SEO:

Responsive Design :
 
Visual demonstration of responsive web design

  • Responsive design is growing in popularity, especially as communications technology evolves, and bandwidth/memory use is less of a concern.
  • This method also gives us a single URL to work with which helps to keep the sitemap/structure as simple as possible without redirection nightmares.
  • On top of that, Googlebot won’t need to visit multiple URLs to index your content updates.
  • Less to crawl means Googlebot will have a better chance to index more of your pages/get deeper inside your site.
“Why is/was there a concern about mobile page size?”

Low-end mobiles, like a Nokia C6 from 4+ years ago (which was still offered by major telcos last year), typically require that total page data be less than 1 MB in order for the phone to handle the memory needs of rendering/displaying the site.

If you go over that memory limit/tipping point you risk causing the browser to crash with an error that the device memory has been exceeded. Re-loading the browser drops you on the device’s default home-page with all your history lost. I think we could all agree that this is not a good remote experience for potential clients.

Higher-end devices are still victims of their real-world connectivity. Most 3rd generation devices can hit really nice peak speeds, but rarely get into a physical location where those speeds are consistent for a reasonable length of time.

Therefore, even with the latest gee-whiz handsets, your ratio of successfully delivering your entire page to mobile users will be impacted by the amount of data you require them to fetch.

In a responsive web design scenario the main HTML content is typically sent along with CSS markup that caters to the layout/screen limitations of a mobile web browser. While this can mean omission of image data and other resources, many sites simply attempt to ‘resize’ and ‘rearrange’ the content leading to very similar bandwidth/memory needs for mobile sites using responsive design approaches.
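To illustrate the ‘resize and rearrange’ point above, here’s a minimal sketch of the responsive approach (the class name is hypothetical, not from any real site): the same HTML goes to everyone, and a media query trims the presentation for small screens. Note that hiding an element with CSS doesn’t necessarily stop the browser from downloading resources referenced in the HTML, which is exactly why responsive design alone doesn’t always reduce bandwidth/memory needs:

```css
/* Default (desktop) styling for a heavy sidebar block */
.sidebar-promo {
  display: block;
  width: 300px;
}

/* Small screens: suppress the block from the layout entirely.
   Caution: images referenced in the HTML may still be fetched
   even though they are never displayed. */
@media only screen and (max-width: 640px) {
  .sidebar-promo {
    display: none;
  }
}
```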

The SEO concern with responsive designs is that, since the written HTML content is included in the mobile styling, it is crucial that external search engines/crawlers understand that the mobile-styled content is not cloaking or some other black-hat technique. Google does a great job of detecting this, and we discuss how a bit later on, with some links to Google’s own pages on the topic.

Mobile Pages :

Visual demonstration of mobile web page design

 
If you’ve ever visited ‘mobile.site.com’ or something like that, you’ve already seen what mobile versions of a site can look like. Typically these versions skip reformatting the main site content and they get right down to the business of catering to the unique needs of mobile visitors.

Not only can it be a LOT easier to build a mobile version of your site/pages, you can expect these versions to have more features and be more compatible with a wider range of devices.

Tools like jQuery Mobile will have you making pages in a jiffy using modern techniques/HTML5. It’s so easy you could even make a demo image purely for the sake of a blog post! ;)
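As a rough sketch of how little markup a jQuery Mobile page needs (the CDN version numbers and page text here are illustrative assumptions, not from the original post):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- Tell mobile browsers to render at device width -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- jQuery Mobile stylesheet and scripts (example CDN versions) -->
  <link rel="stylesheet" href="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.css">
  <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
  <script src="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.js"></script>
</head>
<body>
  <!-- data-role attributes are all jQuery Mobile needs to build a themed, touch-friendly page -->
  <div data-role="page">
    <div data-role="header"><h1>Example Mobile Page</h1></div>
    <div data-role="content"><p>Your mobile-friendly content here.</p></div>
    <div data-role="footer"><h4>Footer</h4></div>
  </div>
</body>
</html>
```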

This also frees up your main site design so you can make changes without worrying what impact it has on mobile.

“What about my content?”

Excellent question!

Mobile versions of sites with lots of useful content (AKA: great websites) can feel like a major hurdle to tackle, but in most cases there are some awesome solutions for making your content work with a mobile version.

The last thing you’d want to do is block content from mobile visitors, and Google’s ranking algorithm updates from June 2013 agree.

Even something as simple as a faulty redirect where your mobile site is serving up:
mobile.site.com/
..when the visitor requested:
www.site.com/articles/how_to_rank.html

.. is a really bad situation, and in Google’s own words:

“If the content doesn’t exist in a smartphone-friendly format, showing the desktop content is better than redirecting to an irrelevant page.”

 
You might think the solution to ‘light content’ or ‘duplicate content’ in mobile versions is to block crawlers from indexing the mobile versions of a page, but you’d be a bit off the mark because you actually want to make sure crawlers know you have mobile versions to evaluate and rank.

In fact if you hop on over to Google Analytics, you will see that Google is tracking how well your site is doing for mobile, desktop, and tablet visitors:
Example of Google Analytics for a site with mobile SEO issues.

(Nearly double the bounce rate for Mobile? Low page counts/duration as well!?)

 
Google Analytics will show you even more details, so if you want to know how well you do on Android vs. BlackBerry, they can tell you.

“How do the crawlers/search engines sort it out?”

A canonical URL is always a good idea, but using a canonical between a mobile page and the desktop version just makes sense.

A canonical can cancel out any fears of showing duplicate content and help the crawlers understand the relationship between your URLs with just one line of markup.

On the flip-side a rel=”alternate” link in the desktop version of the page will help ensure the connection between them is understood completely.

The following is straight from the Google Developers help docs:

On the desktop page, add:

<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1" >

and on the mobile page, the required annotation should be:

<link rel="canonical" href="http://www.example.com/page-1" >

This rel=”canonical” tag on the mobile URL pointing to the desktop page is required.

Even with responsive design, Googlebot is pretty smart, and if you aren’t blocking access to resources intended for a mobile browser, Google can/should detect responsive design from the content itself.

Google’s own help pages confirm this and provide the following example of responsive CSS markup:

    @media only screen and (max-width: 640px) {...}

In this example they are showing us a CSS rule that applies when the screen’s max-width is 640px – a clear sign that the rule would apply to a mobile device rather than a desktop.

Google Webmaster Central takes the information even further, providing tips and examples for implementing responsive design.

Ever wondered how to control what happens when a mobile device rotates and the screen width changes? Click the link above. :)
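As a quick sketch of the rotation case just mentioned, media queries can also key off orientation, so the same page can re-flow when the device turns and the width changes (the selector here is hypothetical):

```css
/* Portrait: stack gallery items in a single full-width column */
@media only screen and (orientation: portrait) {
  .gallery-item { width: 100%; }
}

/* Landscape: the extra width lets two items sit side by side */
@media only screen and (orientation: landscape) {
  .gallery-item { width: 50%; float: left; }
}
```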

SEO news blog post by @ 3:51 pm


 

 

August 6, 2013

Twitter’s New Anti-Abuse Policies and the Dark Side of Social Media

I won’t lie when I say that one of the best parts of my job is managing social media accounts; it can be legitimately fun, but it’s also a very important illustration of how the Internet affects customer/business interactions. My experience mostly comes from being a voracious and active social media user in my private life; I enjoy a following of 400+ people on Twitter, and I have seen what the network is capable of: live-blogging the Vancouver Olympic opening ceremonies, catching cheating politicians in the act, and spreading the word of everything from hot TV shows to full-blown revolutions. While some might resist it, social media is vital for modern reputation management and customer service; the web has democratized marketing in a very drastic way, making it nearly impossible for a company to cover up substantial issues with their products or service. When you do a great job, you might get the occasional positive mention; when you mess up, your customers will definitely air their grievances. And as a social media user myself, I can vouch for the fact that the public has come to respect businesses that address these issues honestly when they’re contacted about them.

Unfortunately, this democratization has led to some inevitable abuses of the system. In some cases it’s a rival company posting fake reviews in an attempt to discredit the competition; in others, a company (or person) may be the subject of a vicious complaint that goes viral online. Part of online reputation management is being able to mitigate these issues, whether by reporting abuse to site moderators or addressing complaints head-on.

I say all of this because some business owners on desktop and Android platforms may see a new feature on Twitter in the coming weeks: an in-tweet ‘Report Abuse’ button. Currently, users who wish to flag threats must visit the online help center and go through several extra steps to report abuse; the new button will make the process far quicker, and (hopefully) hasten the removal of hate speech. Twitter’s announcement wasn’t just a routine update; it was spurred largely by a British woman named Caroline Criado-Perez, and the flood of horrific rape, violence, and bomb threats she received over the weekend. These weren’t mere trolls; the abuse got so serious that at least one man was arrested on Sunday as a result. What did Criado-Perez do to warrant hundreds of 140-character threats of violence? She campaigned—successfully—for the British government to put author Jane Austen’s face on the new £10 banknote. The threats were also sent to a female Member of Parliament who tweeted her support for the campaign.

If it seems absurd, that’s because it is; this wasn’t a case of radical politics or controversial opinion, but a fairly tame move to represent more British women on currency. The horrifying result was a stark reminder of the abusive power of social media, especially against women and other marginalized groups in society. But even if you’re not an active participant in social issues online, it’s intimidating to realize just how quickly the anonymous web can turn against you. While some have applauded Twitter for finally taking a decisive action to make their website safer for all users, the decision has also drawn criticism from people who have seen how ‘Report Abuse’ functions on other websites have actually been used against legitimate accounts as a form of abuse in and of itself; a group of trolls flagging an account they disagree with can result in that account being suspended by the website, even when the owner hasn’t actually violated any rules.

Of course, the gender politics and personal vendettas of social media are quite a bit more intense than what we do as SEOs to help clients. In terms of reputation management online, the Report Abuse button will likely be a helpful way to ensure that a company doesn’t suffer from malicious treatment. However, it also may be far too easy to report a dissatisfied (and vocal) customer out of sheer frustration. Online reputation is a fickle beast; a few damning reviews can take down an entire small business, and the damage can be very difficult to control—it’s easy to feel helpless when it seems like nothing you do can push down a few dissatisfied customers in favor of the happy ones. Business owners on Twitter should still make it a priority to engage with unhappy customers on a personal level, rather than just report an account because of a particularly bad review—even if it makes the problem temporarily disappear, the Internet is not kind to those types of tactics.

The Criado-Perez debacle over the weekend has shown Twitter’s dark side, particularly when it comes to misogyny and online gender violence. The effect of the new reporting feature remains to be seen in that regard. While smaller businesses on social media may not engage in that debate, it’s a prudent reminder that the web’s anonymity can cause a lot of malicious action in the name of free speech. Reputation management isn’t going to get easier as a result of Twitter’s changes; it will still require a human touch and an honest connection, because that’s what garners respect in the social media sphere. But hopefully this small corner of the web will be a little safer for everyone who uses it, giving people more courage to speak their minds without fear of retaliatory attempts to forcibly silence them.

SEO news blog post by @ 3:14 pm


 

 

March 6, 2013

Google+ Cover

Today we’ve got just a very quick blog post for you, to let everyone know of a couple of changes to Google+. Now you may be saying, “Google+? Why should I care?” I’ll leave that debate to your own mind, save to say: if Google asks you to drink some Kool-Aid, just hope it’s a flavor you like. It’s become very clear over the past couple of years that not only is Google not going to let Google+ go the way of Google Wave or the litany of other failed tests, they’re making moves to ensure that it thrives – or at the very least becomes the control mechanism for your other activities, to the point where it doesn’t matter whether you use Google+ … your information is being stored there regardless.

But today I’m not discussing the benefits of Google+ specifically, just covering a few key updates. So let’s get to that.

Changes To Google+

As of this morning, Google has announced that they’re rolling out some changes to how your profile functions/appears.  They are:

  • The size of cover photos has increased to 2120px by 1192px.  To me this doesn’t make a ton of sense, as it pushes the actual information down the page, requiring more scrolling on all but the largest monitors, but I can see applications for photographers and designers.  While I may not entirely believe this max resolution is ideal, I highly recommend toying with different images, and this definitely provides a wide range of options.
  • A tab for reviews.  They’ve added a tab where users can see all the reviews you’ve done.  You may want to scan through your reviews and make sure they match the image you want to present publicly.  One might argue you should have been doing this all along, but I know I looked as soon as the announcement came.
  • Editing your info gets easier.  They’ve made the interface for editing your information a bit clearer and easier to use.

They did note that things are rolling out gradually, so if you don’t see the changes yet, check back soon.  This writer doesn’t expect it to be a long rollout, as it’s a Google+ change and they don’t want people to check, see they can’t play around, and forget to come back.

SEO news blog post by @ 7:56 am

Categories:Google,Google+

 

 

Copyright© 2004-2014
Beanstalk Search Engine Optimization, Inc.
All rights reserved.