Beantin

James Royal-Lawson

SEO

6 Articles worth reading… (Spotted: Week 34, 2010)

The business case for social intranets

To quote Oscar: “Most people will come to understand that a social intranet is not just about adding features such as blogs, wikis, activity feeds & micro-blogging on top of a traditional intranet; it’s about rethinking the purpose of intranets with the intention of bringing the paradigm shift in how we communicate & collaborate that is taking place on the web to the very core of how enterprises are operated & managed.”

Enterprise Microlearning

This article covers the significance of enterprise microblogging (or “microlearning”). Not only does it state the importance of status updates in the workplace, it also gives a number of practical examples of their use.

Does news add any value to an intranet?

Time after time when we look at intranet stats and surveys we see the evidence that employees just aren’t that interested in news articles – they want things (especially on the start page) that help them get their jobs done.

User behavior in SERPs. Eye tracking study July 2010

This translation of a Spanish eye tracking study shows how people’s intentions (the tasks they are trying to complete) affect their behaviour when viewing search engine result pages.

Santa Barbara Zoo launches smartphone technology

Using QR codes is a cost-effective and straightforward way to improve visitor interaction at zoos and museums. Hunt down relevant content (perhaps it’s already on your site?) and print some new signs, plus some guides for visitors explaining how to scan the codes.

Halfords: mobile site review

Lots of things here that Halfords could improve and tweak. Interesting to see the start of a trend for “collect in store” (rather than “buy via mobile”). It’s a mistake though to prevent mobile users from accessing the regular “desktop” site.

10 Articles worth reading… (Spotted: Week 29-31, 2010)

BBC News website’s content management and publishing systems

The BBC Internet blog often produces some really good in-depth insights into how things work at the BBC. This time, in connection with the recent redesign of BBC News, they take a look at web management, web standards & their in-house CMS.

Why QR Codes Are Poised to Hit the Mainstream

SL have just started testing Quick Response codes at a number of bus stops here in Stockholm. Coincidentally, Mashable wrote “Why QR Codes Are Poised to Hit the Mainstream” around the same time. You shouldn’t produce any printed matter now without a QR code, in my opinion.

Half of site searches are unsuccessful: report

Half of site searches are unsuccessful. Yet studies have shown that site-searches result in a significantly higher conversion rate than the average. Clear opportunity for improvement there…

The Internet Generation Prefers the Real World

This article from Spiegel Online describes how the (German) internet generation would appear to prefer the real world – or rather, that the internet is integrated into their lives but that they are not necessarily “internet experts”. What you can conclude is that there is no global norm when it comes to the internet generation and their internet usage. Read my earlier post about internet usage and young Swedes in Sweden.

18 Simple SEO Items Commonly Missed in Web Development

Not a bad list. Your next site won’t be worse if you follow all of these 18 points! There are, of course, more than 18 things that are normally missed.

Blog – Subdomain or Subfolder? Which Is Best?

More on the eternal question of on-domain, off-domain, or sub-folder blog – which is best? Well, they all can be in my opinion. Really does depend on each individual case.

Will the Real Browser Stats Please Stand Up?

Global visitor stats are always misleading. Make sure to always study the stats for your own site (and therefore visitor groups), especially browser version and viewport size before a redesign.

How long are intranet projects?

Steptwo write about how long an intranet project should be (they are often not given enough time). As is often the case, similar things apply to internet projects too.

Employees demand a clean intranet home page, no scrolling

Keep it above the fold! Employees demand a clean Intranet home page, no scrolling, fewer links.

Enterprise 2.0 Roll-up: Google Kills Wave, Collaboration Ain’t Easy

In this CMSWire article, they take the example of how a global collaboration product (Google Wave) failed to be adopted and liken it to the difficulty of implementing collaborative tools within the firewall. A fair few pearls of wisdom from some clued-up people.

Search-engine friendly country site select boxes

Design often has the final say in a redesign project – or at best, a very powerful voice – which isn’t always a good, or acceptable, situation.

Recently I was part of a project where I needed to preserve an aspect of the old design for SEO reasons. The new design had included a select box, but I needed those “options” to be real links that would pass link-love. So, I offered this search-engine friendly solution.

Country links

Country links on the original website

To give a bit more background, the old site had a footer that contained links to every single country site within the organisation – about 26 links, on every page of the site. Most of those country sites had a similar footer, making most of the links reciprocal. That’s quite a network of inter-linked international top level domains.

I obviously wanted to maintain that network of links after the redesign. Removing them clearly wasn’t going to help the position of any of the sites in the SERPs.

Country select box

Choose your country site select box

The design that was produced had “simplified” the list of countries in the footer to be a select box drop down menu. Although this is not unusual for companies with multiple country sites, it’s not always a good thing for usability (I like populating the select box using geolocation as a solution – but that would be another blog post!) and it’s really not a good thing for search engines.
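Since I mentioned geolocation as a usability fix for these select boxes, here is a minimal sketch of the idea. True IP-based geolocation needs a lookup service, so in this sketch the visitor’s browser locale stands in as a rough proxy; the function names and the assumption that option values are ISO country codes are mine, not from any particular library.

```javascript
// Sketch only: pick a default country option from the visitor's locale.
// Assumes the select box's option values are lower-case ISO country
// codes ("se", "gb", ...) - adjust to your own markup.
function countryFromLocale(locale) {
  // "sv-SE" -> "se"; a bare language code like "de" falls back to "de"
  var parts = locale.split('-');
  return (parts[1] || parts[0]).toLowerCase();
}

function preselectCountry(selectEl, locale) {
  var code = countryFromLocale(locale);
  for (var i = 0; i < selectEl.options.length; i++) {
    if (selectEl.options[i].value === code) {
      selectEl.selectedIndex = i; // pre-select the matching country
      return code;
    }
  }
  return null; // no match: leave the default selection alone
}
```

In the page you would call something like `preselectCountry(document.getElementById('country-select'), navigator.language)` once the DOM is ready.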

No link-love for select boxes

Although Google has indexed text in select boxes for a number of years, and also indexes (new) URLs that it discovers within those lists, it doesn’t pass any PageRank to those links. Nor does it attribute the anchor text (or more correctly in this case: option text) to the destination link.
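To make the difference concrete, here is roughly what the two variants look like in markup (the URLs, country names and onchange handler are placeholders of my own, not from the actual site):

```html
<!-- Select box: the option text and URLs may be indexed,
     but no PageRank or anchor text is passed to them -->
<select onchange="window.location = this.value;">
  <option value="http://www.example.se/">Sverige</option>
  <option value="http://www.example.co.uk/">United Kingdom</option>
</select>

<!-- Link list: ordinary <a> links that pass both PageRank
     and anchor (link) text to the destination sites -->
<ul>
  <li><a href="http://www.example.se/">Sverige</a></li>
  <li><a href="http://www.example.co.uk/">United Kingdom</a></li>
</ul>
```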

This obviously meant that the international network of inter-linked top level domains would come crashing down to the ground. Not really something that was on the list of requirements…

Country sites as a linked list

So, in order to preserve the link network and still honour the design decision, I decided to re-introduce the <a> link list of countries and, in order not to make it visible to (most) end-users, set it to “display:none”.

Now hold on, I can hear you say. Doesn’t Google (and other search engines) consider adding “display:none” to things to be cloaking? Well, not necessarily. The key is whether there is a mechanism for making the content visible to visitors or not.

So, in order to keep both the search engines and users happy, what I did was add the “display:none” only if javascript is enabled. That way we are always serving the same HTML content to all visitors and search engines, but making parts of it invisible when viewed in the browser by most visitors. Importantly, we are letting the search engines see and index all the links to the countries.

Adding a class

Add a js class to the country link list, or whatever element of the page you want to be hidden when javascript is enabled.

<div class="js">

Include an external js file

Add a link to an external script directly after your CSS styles (you may already have such a file):

<script type="text/javascript" src="/script/functions.js"></script>

Document.write

In that script, add a document.write call to write out the additional CSS stylesheet link:

document.write('<link rel="stylesheet" href="/styles/js-enabled.css" type="text/css" media="screen" />');

Display:none

Finally, in your js-enabled CSS file (/styles/js-enabled.css), add the display:none styling for the “js” class:

.js {
display: none;
}

There are other ways you could achieve the same result, especially if you’re already using an Ajax library such as jQuery, but I thought it was good to share an example that didn’t force the introduction of that overhead.
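For completeness, here is a sketch of one such alternative (the class name and helper are my own): instead of document.write-ing an extra stylesheet, let the script add a class to the <html> element and scope the hiding rule in your main CSS, e.g. `.js-enabled .js { display: none; }`.

```javascript
// Sketch: add a "js-enabled" class from script, so the CSS rule
// .js-enabled .js { display: none; } only applies when JS runs.
function addClass(existing, name) {
  // avoid adding the class twice if the script runs more than once
  var classes = existing ? existing.split(/\s+/) : [];
  for (var i = 0; i < classes.length; i++) {
    if (classes[i] === name) return classes.join(' ');
  }
  classes.push(name);
  return classes.join(' ');
}

// In the page (guarded so the sketch also runs outside a browser):
if (typeof document !== 'undefined') {
  document.documentElement.className =
    addClass(document.documentElement.className, 'js-enabled');
}
```

As before, visitors without javascript (and search engine crawlers) never get the hiding rule applied, so the link list stays visible to them.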

Cloaking

Yes, you could argue that this is technically cloaking, but it is better to say that we are offering enhanced content to those with javascript disabled. By doing this we are cloaking in a way that helps both Google and the visitors who find long lists within select boxes difficult to use.

Scribe versus SEO Guidelines

As an experiment, Jon Buscall wrote a blog post following my SEO guidelines for copywriters. He then tested it with Scribe.

Scribe is a software service that analyzes your content and tells you how to tweak it for better search engine rankings. As a pleasant surprise, Jon’s article didn’t just get a high rating – Scribe gave it a top score of 100%. It was as good as it could get, according to Scribe. No tweaks needed.

OK, guidelines are only as good as the person following them. Jon is (amongst other things) a professional copywriter and has clearly taken the time to put into practice what I recommended in my guidelines for writers.

It has nevertheless shown that you don’t need magic tricks or software to produce search engine friendly content – you just need to focus on writing good content and following a few simple rules.

Google’s broken date recognition

I don’t know exactly when it happened (probably an effect of the “May update”; Michael Gray spotted the date problem during April), but Google clearly has some problems with how they are currently deciding when a page was published.

Trick Google

Simon Sundén pointed out two weeks ago, in this article on his Swedish blog, that it was easy to trick Google into showing any date you wanted in search result pages. Simon suggested that Google was giving extra weight to dates in titles and main headings. But Google’s problems appear to be even more widespread.

Google’s algorithm is currently making some really poor guesses as to the published dates of certain articles. Hans Kullin has today spotted that Google is changing correct dates in their search results for old articles from Swedish newspaper Aftonbladet to incorrect dates based on the date they happen to re-index the page.

Aftonbladet example

Let’s take this Aftonbladet article from March 2008 – Bojkotta inte Kina-OS!.

Screenshot of Aftonbladet

You can see from the date in the above picture that Aftonbladet are clearly saying that the article was published on the 20th March 2008.

Screenshot of Google SERP

When we search for that article, Google is telling us that it was published on the 27th of May 2010 (yesterday at the time of writing this).

Screenshot of source code

Why though? Well, the first date that Google reaches when indexing the HTML of that article is indeed the 27th of May (as you can see in the above image). The date the article was actually published comes further down in the code. In addition, today’s date is repeated a second time towards the bottom of the page.
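Google’s real date-extraction logic isn’t public, but a naive “first date in the markup wins” heuristic reproduces the failure mode. The sketch below uses a made-up page in the same shape as the Aftonbladet one (today’s date in the navigation, the real publish date further down):

```javascript
// Illustration only: a naive heuristic that takes the first
// date-looking string in the page text, as Google appears to.
function firstDateInHtml(html) {
  // strip tags so we only scan the visible text
  var text = html.replace(/<[^>]*>/g, ' ');
  var match = text.match(/\b\d{1,2}\s+\w+\s+\d{4}\b/); // e.g. "27 May 2010"
  return match ? match[0] : null;
}

var page =
  '<div id="nav">Idag: 27 May 2010</div>' +   // today's date in the menu
  '<h1>Bojkotta inte Kina-OS!</h1>' +
  '<p class="published">20 March 2008</p>';   // the real publish date

firstDateInHtml(page); // -> "27 May 2010", not the real publish date
```

Any date that happens to sit earlier in the markup – here the one in the navigation – wins over the genuine publication date.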

The most reliable date?

Screenshot of the trigger date in the Aftonbladet menu

Aftonbladet are showing today’s date at the very top of their left-hand navigation (and beside their search box in the page footer). Google’s current broken way of establishing the date an article was published sees this date and decides that it is the most reliable date on the page.

Exploiting the problem

Hopefully Google will fix this. Given the importance and weight given to recently published content, we’re going to see a lot of people exploiting this problem with Google’s date calculation in order to push their old content back up the search result pages.
