Thursday, March 29, 2007

JavaScript syntax highlighters

I'm really into writing documentation, and very often, this documentation must include code samples.

I wanted a JavaScript syntax highlighter that:

  1. involved very little hassle in setting up,
  2. worked like a real parser using parse trees to ensure accurate highlighting,
  3. and provided decent coverage of the most common programming languages in use today.
Obviously, by opting for a JavaScript syntax highlighter, I chose to forgo doing my syntax highlighting on the server side. Server-side highlighting still interests me, but for immediate and practical reasons, the quickest way to get highlighting into my documentation was to do it on the client side.

I went searching around for JavaScript syntax highlighters, and these are the top three that met my criteria.
  1. Javascript code prettifier. This was the first decent one that I found. It's pretty basic, but I got it up and running very quickly. The documentation is sparse and the tone of the README quite terse, but that meant there was less for me to sift through. The neat thing about it is that it has automatic language detection.
  2. SHJS. This is more full-featured than the Google tool and supports a wider range of languages. It appears to be more mature than the other two. Unlike the Google solution, it doesn't offer automatic detection of programming languages.
  3. JUSH. I'd like to check it out in more detail later on, but the website says it only supports HTML, CSS, JavaScript, PHP and SQL. I needed highlighting of Java and Ruby, not to mention C and C++.
For now, I'm going with the Google code prettifier, just because it was the first one I found, integrated, and liked. One that I did not like was dp.syntaxHighlighter, mostly because it required that my code live in a TEXTAREA instead of a PRE (preformatted text) element.
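
Getting it wired up is simple. Here's roughly what the integration looks like; treat this as a minimal sketch based on the project's README, assuming prettify.css and prettify.js have been copied alongside the page:

    <html>
      <head>
        <link href="prettify.css" type="text/css" rel="stylesheet" />
        <script type="text/javascript" src="prettify.js"></script>
      </head>
      <body onload="prettyPrint()">
        <!-- The language of the snippet is detected automatically. -->
        <pre class="prettyprint">
    function greet(name) {
      return 'Hello, ' + name;
    }
        </pre>
      </body>
    </html>

The code just sits in a plain PRE with a class on it, which is exactly what I wanted.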

Wednesday, March 28, 2007

Where the rubber meets the road

Back when I was writing software that ran on HP LaserJet printers, I hated having to wait 45 minutes for one-line changes to compile (in a ClearCase build system) and then burn to flash memory. Sometimes a problem with the printer's behavior wasn't even caused by my software; it would turn out to be an obscure bug in the hardware.

Because of time-to-market pressures, the team had to work with prototype hardware whose design wasn't even finalized yet. The guys who wrote the low-level code had to write directly to hardware registers and hack their way around hardware defects.

Somehow, it seemed wrong to me that I should constantly be worrying about the integrity of the underlying hardware or the operating system on which my software ran. I should have been able to take it for granted.

I longed for the day when I could instead work on web applications, where I could safely assume that the hardware was fine and that there were no major defects with the platform on which I was building. The platform on which my software ran would come already debugged, I figured.

Since diving more deeply into web application programming, I've found the experience to be as good as I had imagined, for the most part. There's always a lot of documentation out there, and it's usually just a matter of searching through Google results to find the one that best matches the problem at hand. I rarely get stuck for long.

If you ask most web developers, the worst and most frustrating thing about writing web applications is probably cross-browser compatibility. Different browsers handle CSS and the JavaScript DOM a little differently. It's the bane of many a web developer's existence.
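
To make that concrete, here's the sort of workaround I mean; a minimal sketch of a cross-browser event helper (addEventListener is the W3C DOM method, attachEvent is Internet Explorer's proprietary one, and the helper itself is just for illustration):

    // Register an event handler in both IE and W3C-compliant browsers.
    function addListener(element, type, handler) {
      if (element.addEventListener) {
        // Firefox, Safari, Opera
        element.addEventListener(type, handler, false);
      } else if (element.attachEvent) {
        // Internet Explorer
        element.attachEvent('on' + type, handler);
      }
    }

    // e.g. addListener(document.getElementById('save'), 'click', handleSave);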

When I'm getting a web application to work in different browsers, the thought crosses my mind that checking for compatibility with browsers that should just work is an enormous and unnecessary waste of time.

After thinking about it, though, I realized that every software industry has its area of "inefficiency" for developers, and that's usually where the rubber meets the road. It's where the software, however elegant it is, has to face the real world in all its ugliness and imperfection.

In web applications, cross-browser compatibility is needed because that's where the application faces the real world, where people use all sorts of different browsers.

With Unix software, there are portability issues between different kinds of Unix. There's the worry of having to work with different thread models and libraries. There's the process of having to build with different compilers, not just gcc. There's GNU make and BSD make. The #ifdefs and #defines aren't any prettier than JavaScript workarounds.

Network software has to know how to deal with extensions to a standard that may not be universal. The RFCs are great: they've helped avert major compatibility issues in the past, but technology companies inevitably shoehorn their own little extensions into network protocols. Clients need to know what servers are capable of, and vice versa.

The point is, when I think about it, having to check for cross-browser compatibility doesn't seem all that bad. Yes, it does mean that hours and hours will be gone from the day hunting down a compatibility problem. Yes, it would have been better if Microsoft had followed the standards in its implementation. Yes, many things could still be done to improve the current state of affairs: there are libraries and frameworks out there that abstract away much of the pain, just as people have figured out how to create portable Unix programs (grab a copy of the autotools and run a configure script). But at the end of the day, the reality is that people out there are using different browsers to view the same site, and it's my responsibility to make sure my application works in them. It often seems like a huge waste of time, but it's that final push that really makes the difference.

Saturday, March 17, 2007

Physical media and bit rot

Last night, I played some of my old classical music CDs — good music to program to. The songs skipped every other second because of the scratches on the discs. It appeared that my careless handling of CDs over the years had finally caught up with me.

Tonight, I ripped some of these to Ogg Vorbis (an open and royalty-free file format similar to MP3) to see if I could listen to my songs without the skipping. Sure enough, they were all smooth!

Now, none of this terribly surprises me, but it served as a reminder that relying only on physical media to store data is a little dangerous, especially when the medium goes bad or acts flaky. Any single copy of information on the network may be short-lived, but as long as it's out there, stored and accessed frequently, it's constantly being duplicated and refreshed.

Information that's alive and circulating is healthy information.

Thursday, March 15, 2007

HBR: "Leading Clever People"

This month's Harvard Business Review contains an apt and timely article, "Leading Clever People." Rob Goffee and Gareth Jones define clever people as "the handful of employees whose ideas, knowledge, and skills give them the potential to produce disproportionate value from the resources their organizations make available to them." Still, this doesn't mean that they're better off working on their own. One of the people they quoted, the head of development for a global accounting firm, stated that clever people "can be sources of great ideas, but unless they have systems and discipline they may deliver very little."

One good point they made about managing clever folks is the importance of demonstrating that you're an expert in your own right, which establishes credibility and respect. At the same time, one mustn't be so above-and-beyond or in-your-face as to discourage the real talent.

Read "Leading Clever People" at HBR. You may have to view an ad.