The web today

Google has posted its research on how sites are coded. Browsers' tolerance for broken HTML has wreaked havoc on the quality of the web today. Common sense hasn't prevailed either. Time to take a look at my own templates.

Yes, I suck at coding. After years of learning and practising web standards I still get it wrong. This site is already a few years old and shows my coding skills to be mediocre. I'm aware of some of the problems. I also wonder whether or not to continue with XHTML. As I've stated before, I see no compelling reason to switch. Being superfluous doesn't mean it has no place.

Having said that, I must not be complacent about the browser's ability to handle the code I present it with. The following ought to be dropped altogether:

When pages are not rendered as XML, it's pointless to use the "xml:lang" attribute:
<html xml:lang="en">
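
When the page is served as text/html, the plain lang attribute is the one browsers actually act on, so the opening tag can simply become something like this (assuming I otherwise keep the XHTML syntax):

<html xmlns="http://www.w3.org/1999/xhtml" lang="en">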

This doesn’t do anything either:
<meta http-equiv="Content-Style-Type">

Pointless meta tags might as well join them:
<meta name="keywords">
<meta name="MSSmartTagsPreventParsing">
<meta name="generator">
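
Putting it all together, the top of every page here could be trimmed down to something like this. It's a rough sketch only, assuming I stay with XHTML 1.0 Strict served as text/html, and the stylesheet path is made up for the example:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>The web today</title>
<link rel="stylesheet" type="text/css" href="/css/screen.css" />
</head>

No xml:lang, no Content-Style-Type, and none of the meta tags listed above.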

I've always been aware of the 'MSSmartTagsPreventParsing' attribute being a complete waste of time, right up to the point of reading about it in Google's writeup. It was a protest against the way Microsoft approached the web. Google has since been caught doing some similar naughtiness, but back then developers voiced their opposition by adding this attribute. Kinda cool, in a geeky kind of way of course.

The most interesting part of Google's research is of course how the semantic web is shaping up, at least when we look at class names and IDs. I'll be taking a closer look at these when rewriting the templates for this site, and I'll set up a list of the most suitable names and identifiers based on what seems to be used most frequently. Should be fun, in a geeky kind of way of course.
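
To give an idea of the direction, a skeleton built from the kind of class names and IDs that keep turning up in Google's data might look roughly like this. The names are purely illustrative; the real shortlist comes once I've actually gone through the numbers:

<body>
<div id="header">...</div>
<div id="menu">...</div>
<div id="content">
<div class="post">...</div>
</div>
<div id="footer">...</div>
</body>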
