CSS fanatics

I am, by my own admission, a bit of a grumpy and unconventional web standards advocate. It’s been said that web standards advocates are, simply put, painful and downright unpleasant to work with. However, I’m not too concerned with strict adherence to W3C recommendations, never mind validation. I work towards sustainability, accessibility and usability, and in those terms the W3C is not very helpful; indeed, it can be downright annoying.

In 2000 I was looked upon as a total and utter nut job for trying to work in a fashion that gave me the easiest way to minimize my HTML while staying extremely flexible with the design. This, in the end, resulted in the separation of content structure from presentation. Apparently the same thing was happening in isolation across the globe, mainly in the United States, which held the bigger chunk of the web market at the time. A year later Zeldman’s article To hell with bad browsers started making waves. Blogging was the main impetus of an emerging community that labelled itself the champion of web standards. The amazing thing was that there seemed to be an unusually unified front, because we had all individually been thinking along the same lines. Those days are long gone.

In the next couple of years we will see a remarkable change in the web standards landscape. That change has already started, and when change is upon us, fanaticism rears its ugly head. The trouble is that many of these fanatics end up howling at the moon, because web standards are not what they think they are.

I myself have always been fanatical about at least two things. I’m fanatical about other things as well, but these two usually get me into trouble with somebody or other.
1. Minimal HTML: less is nearly always better. However, writing less takes longer.
2. If somebody else breaks a site a year or so after I’ve made it, it’s my fault. No ifs or buts: I made a mistake.
Most web heads understand the first rule; the benefits are obvious. Almost nobody seems to get the second rule, and those who do have usually been doing web design work for years.

To understand the second rule you must understand the client’s perception of your work, or rather the perception held by the client’s web/marketing department, which contracts the work out. If a browser update messes things up, like IE7 did, then this department will have to explain to their bosses what went wrong. For political reasons they will often point the finger at any external party that might fit the bill, even if it wasn’t that party’s fault. I’ve seen this in each and every company I’ve worked in or for. No exceptions.
Mainly for this reason I keep my front-end work almost free of any ‘anomalies’ that might cause a hiccup down the road. That means avoiding not only hacks but also complexity that might trip up other web teams when they need to change something in the front-end. The initial front-end must be obvious and straightforward.
Before adding a CSS library, or part of one, to your own CSS you must take into account the impact that library may have, because the front-end could suffer major problems in the long term. I’ve encountered exactly this with the CSS Reset. Now, I love the reset; it’s fantastic. All browsers instantly become almost equal. In the real world, however, it can be disastrous for the long-term maintainability of a web site. In the CSS Reset everything gets zeroed, and if you don’t define all of the elements back to something that at least resembles a non-CSS HTML page, you’re in for trouble. Other developers will start adding bits of CSS, and when they find some elements are still zeroed they’ll write additional CSS with high specificity to overcome the aberration. The cascade becomes unpredictable and thus unusable, and in the end each and every change becomes a major hassle. I allow the CSS Reset as long as nothing is left zeroed and every element is patterned back in.
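A minimal sketch of what I mean (the values here are illustrative, not a prescribed baseline): zero everything, then pattern every element back in so later authors find a predictable starting point.

    /* Zero everything, the way the typical reset does. */
    * { margin: 0; padding: 0; }

    /* Then pattern the elements back in to something resembling a
       non-CSS HTML page, so nothing is left mysteriously zeroed. */
    p, ul, ol, blockquote { margin: 1em 0; }
    ul, ol                { padding-left: 2.5em; }
    h1                    { font-size: 2em;   margin: 0.67em 0; }
    h2                    { font-size: 1.5em; margin: 0.83em 0; }
    em                    { font-style: italic; }
    strong                { font-weight: bold; }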
You must author the initial CSS in such a way that disparate teams can alter it with the least amount of hassle whilst retaining maintainability. This increases a site’s durability and value (ROI) for your client.
Other problems come from having too many CSS files that have to operate alongside some or all of the other CSS files. Keep the number of files down by grouping and merging them as much as possible, and avoid dependencies across CSS files. When you’re working with multiple teams and multiple systems this merging can become rather complicated.
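To illustrate the kind of dependency I mean (the file names are hypothetical): a chain of imports where each file silently assumes the rules of the files before it.

    /* main.css -- every file below quietly assumes reset.css has
       run first; pull one file out or reorder them and the rest
       break in subtle ways. */
    @import url("reset.css");
    @import url("typography.css");
    @import url("layout.css");
    @import url("campaign-overrides.css");

Merged into a single file with the rules in one explicit order, there’s far less for the next team to get wrong.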

The other thing I do to minimize future page rendering errors is to avoid hacks whenever possible. Hacks are avoidable more often than most seem to realise, and luckily they are becoming less common than they were five years ago. When you do use a hack, there are a number of things to look out for. It must not rely on a bug! It must be simple (minimal impact on maintainability). It should comply with the W3C recommendations. Believe it or not, validation is not a concern: validation tools do not reflect compliance with web standards; they merely perform a best guess of possible authoring errors.
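As an example of a hack that passes those tests (the selector and declaration are only an illustration): it relies on missing support rather than a bug, since IE6 simply never implemented the perfectly valid child combinator.

    /* Valid CSS that IE6 skips: it never implemented the child
       combinator, so this rule reaches every browser except IE6
       without relying on any bug. */
    html > body .masthead { position: fixed; top: 0; }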
The ‘must not rely on a bug’ rule gets me into some scrapes with fanatical web heads, usually because I’ve stepped on their toes by bashing something they consider to be the bee’s knees. For example, some of ‘them’ love the holly hack (* html), about which the IE development team have said, “To be very clear the root node selector was a bug.”
The holly hack is very nice; it does the job elegantly by targeting only IE6, and IE7 in quirks mode. That it relies on a bug doesn’t seem to faze some fanatics. Furthermore, the root node is part of the browser, not part of the HTML or CSS specification. Generally speaking, relying on a bug in such a poorly documented element is asking for problems down the road.
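For the record, this is all the hack amounts to (the .content selector is hypothetical): the universal selector before html matches a phantom node above the root that only old IE believes exists, and the 1% height nudges IE’s proprietary hasLayout state to fix various float bugs.

    /* Only old IE thinks something sits above the root node, so
       only IE6 (and IE7 in quirks mode) ever matches "* html". */
    * html .content { height: 1%; }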

I’ve allowed teams to use CSS frameworks to speed up prototypes, and also for production, on the condition that they write their own classnames and identifiers in English. Using another language is kind of pointless, and using two languages is just stupid. Usually I see them stripping other junk out while they rewrite the IDs, which is just as important. The problem with cut and paste is that it often means fire and forget; by cleaning up the code it becomes obvious that you can’t just cut and paste and walk away. If you use someone else’s code, you’re also responsible for its impact, and there is no telling how long the site will remain online.
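A sketch of the kind of rewrite I mean, with made-up classnames in the style of the grid frameworks of the day:

    /* As pasted from a framework: the name describes the grid,
       not the site. */
    .span-6 { float: left; width: 470px; margin-right: 10px; }

    /* After the rewrite: the same rule with an English name that
       says what it is, and the unused framework junk stripped out. */
    .sidebar { float: left; width: 470px; margin-right: 10px; }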
