Nik's Technology Blog

Travels through programming, networks, and computers

Learning jQuery 1.3 - Book Review

My first exposure to jQuery was using other developers' plugins to create animation effects such as sliders and accordion menus.
The highly refactored and compressed production code isn't the easiest to read and understand, especially if you want to alter the code to any great extent.
After reading a few tutorials, I thought I'd buy a book and get more involved with the jQuery library.

As an ASP.NET developer used to coding with IntelliSense, I was pleased that jQuery has been incorporated into Visual Studio to ease development.
I browsed through the jQuery books on Amazon and opted to buy "Learning jQuery 1.3" by Jonathan Chaffer and Karl Swedberg after reading the user reviews.

I've now read most of the book and can highly recommend it.  The book assumes the reader has good HTML and CSS knowledge, as well as familiarity with JavaScript and the DOM, but this enables it to move quickly on to useful, everyday tasks with jQuery.

The first six chapters of the book explore the jQuery library in a series of tutorials and examples focusing on core jQuery components.  Chapters 7 to 9 look at real-world problems and show how jQuery can provide solutions to them, and the final two chapters cover using and developing jQuery plugins.

Web developers should be aware of the web accessibility and SEO issues that come with client-side scripting, and it is good to see the book highlight the concepts of progressive enhancement and graceful degradation where appropriate.

"the inherent danger in making certain functionality, visual appeal, or textual information available only to those with web browsers capable of (and enabled for) using JavaScript.  Important information should be accessible to all, not just people who happen to be using the right software." - Learning jQuery 1.3,  page 94

After a brief introduction to the world of jQuery, what it does and how it came about, the book moves quickly on to selectors, which are a fundamental part of how jQuery selects elements from the DOM.  It also covers jQuery's chaining capability, which looks odd at the outset if you're coming from other programming languages, but quickly proves to be a very powerful technique.
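Chaining works because nearly every jQuery method returns the jQuery object it was called on. The idea is easy to sketch in plain JavaScript (the counter object below is purely illustrative, not part of jQuery):

```javascript
// Each method returns "this", so calls can be strung together,
// much like $('#menu').addClass('open').fadeIn() in jQuery.
var counter = {
    value: 0,
    add: function (n) {
        this.value += n;
        return this; // returning the object is what enables chaining
    },
    double: function () {
        this.value *= 2;
        return this;
    }
};

counter.add(3).double().add(1); // one chained statement
console.log(counter.value); // 7
```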

The authors then move on to talk about events.  What I particularly like about the way jQuery handles events is that the behavioural code can be cleanly separated from the HTML mark-up, without having to litter tags with onclick and onload attributes.

The examples show how to add functionality on top of your HTML by binding events to elements on the page, which when triggered cause jQuery to modify the HTML to bring the page to life.  Techniques are introduced by example, then slowly refactored and improved while introducing new jQuery methods along the way, which is a breeze to follow and learn.

The fourth chapter covers effects such as fading in and out and custom animations, and jumps straight in with a useful example of how text size can be increased on-the-fly for ease of reading.  The chapter's introduction also makes an important usability point about effects.

jQuery effects "can also provide important usability enhancements that help orient the user when there is some change on a page (especially common in AJAX applications)."- Learning jQuery 1.3,  page 67

Chapter 5 is all about DOM manipulation and covers jQuery's many insertion methods, such as copying and cloning parts of the page, which it demonstrates with another useful example: dynamically creating CSS-styled pull quotes from a page of text to attract a reader's attention.

AJAX is the next topic, which interested me enough to create a little tool to load in an XML RSS feed and create a blog category list from the data.
The chapter covers the various options for loading partial data from the server, including appending a snippet of HTML into the page, JSON and XML, and how to choose which method is the most appropriate.

Table manipulation is next on the agenda, and the book discusses how to sort table data without a page refresh using AJAX, as well as client-side sorting, filtering and pagination.

Chapter 8 delves into forms, using progressive enhancement to improve their appearance and behaviour.  It also covers AJAX auto-completion as well as an in-depth look at shopping carts.

Shufflers and rotators are next, and the book starts out by building a headline news feed rotator which gets its headlines from an RSS feed, as typically published by blogs.  It also covers carousels, image shufflers and image enlargement.

Chapters 10 and 11 examine the plugin architecture of jQuery and demonstrate how to use plugins and build your own.  I successfully produced my first jQuery plugin from reading this book.  You can check out my tag cloud plugin and read about how I originally built it before turning it into a plugin that other developers can use.

Create a jQuery Tag Cloud from RSS XML Feed

I previously created a jQuery Blogger Template Category List Widget to retrieve blog categories from an RSS feed and create a list of links which click through to Blogger label pages.

I've now taken this code a step further and modified it to calculate the number of times each category/tag occurs, enabling me to create a tag cloud from the data, like the one below.


Before I explain the code I wrote to make the tag cloud, I'll go through the solution to a bug I found in the original categories code.

You may recall this snippet of code, where I iterate through each post and then each category of each post; finally, when all the categories have been added to the array, I sort them prior to de-duping them.

$.get('/blog/rss.xml', function(data) {
    //Find each post
    $(data).find('item').each(function() {
        //Get all the associated categories/tags for the post
        $(this).find('category').each(function() {
            categories[categories.length] = $(this).text();
        });
    });
    //Sort the categories prior to de-duping
    categories.sort();
});

I later refactored the code, removing the $(data).find('item').each iteration, which wasn't required since find('category') will find all the category nodes anyway.

I then discovered that the JavaScript .sort() function is case-sensitive, which resulted in lower-case categories being placed at the end of the list, causing problems when I de-duped them.
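The behaviour is easy to reproduce in isolation. With some example tag names (chosen purely for illustration), the default sort compares character codes, so every capital letter sorts before every lower-case one:

```javascript
var tags = ["jQuery", "accessibility", "Blogging"];

// "B" is character code 66, "a" is 97 and "j" is 106,
// so the capitalised tag jumps ahead of the lower-case ones
tags.sort();
console.log(tags); // ["Blogging", "accessibility", "jQuery"]
```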

So the rewritten snippet of code became:

$.get('blog/rss.xml', function(data) {
     //Find each tag and add to an array
     $(data).find('category').each(function() {
         categories[categories.length] = $(this).text();
     });
     //Sort the tags case-insensitively
     categories.sort(caseInsensitiveCompare);
});

where caseInsensitiveCompare refers to a JavaScript compare function:

function caseInsensitiveCompare(a, b) {
    var anew = a.toLowerCase();
    var bnew = b.toLowerCase();
    if (anew < bnew) return -1;
    if (anew > bnew) return 1;
    return 0;
}

Creating the Tag Cloud jQuery Code

I start off as before fetching the XML, adding all the categories/tags from the RSS feed to a JavaScript array, then sorting them.

But I needed a way to store not only the tag name, but also the number of times that tag is used on the blog (the number of times the category appears in the feed).  For this I decided to use a multi-dimensional array, which essentially stores the data in a grid fashion e.g.

Tag Name        Count
Accessibility   2
Blogging        15
jQuery          2


The de-dup loop from my previous categories script now performs two jobs: it removes the tag duplicates and creates a count of each tag occurrence.

Once the multi-dimensional array has been populated, all that's left to do is iterate through the array creating the HTML necessary to build the tag cloud, followed by appending it to a DIV tag with an ID="bloggerCloud" on the page.

Note the calculation I perform to get the tags appearing at a reasonable pixel size ((tagCount * 3) + 12).
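That formula can be sanity-checked on its own. Using the counts from the example table above, a tag used twice renders at 18px and one used 15 times at 57px (the helper function name is mine, not part of the final script):

```javascript
// Map a tag's usage count to a font size in pixels
function tagFontSize(tagCount) {
    return (tagCount * 3) + 12;
}

console.log(tagFontSize(2));  // 18 - e.g. Accessibility
console.log(tagFontSize(15)); // 57 - e.g. Blogging
```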

$(document).ready(function() {
    var categories = new Array();
    var dedupedCategories = [];
    $.get('blog/rss.xml', function(data) {
        //Find each tag and add to an array
        $(data).find('category').each(function() {
            categories[categories.length] = $(this).text();
        });
        //Sort the tags case-insensitively before de-duping
        categories.sort(caseInsensitiveCompare);
        //Dedup tag list and create a multi-dimensional array to store 'tag' and 'tag count'
        var oldCategory = '';
        var x = 0;
        $(categories).each(function() {
            if (this.toString() != oldCategory) {
                //Create a new array to put inside the array row
                dedupedCategories[x] = [];
                //Store the tag name first
                dedupedCategories[x][0] = this.toString();
                //Start the tag count
                dedupedCategories[x][1] = 1;
                x++;
            } else {
                //Increment tag count
                dedupedCategories[x - 1][1] = dedupedCategories[x - 1][1] + 1;
            }
            oldCategory = this.toString();
        });
        // Loop through all unique tags and write the cloud
        var cloudHtml = "";
        $(dedupedCategories).each(function(i) {
            cloudHtml += "<a href=\"/blog/labels/";
            cloudHtml += dedupedCategories[i][0] + ".html\"><span style=\"font-size:" + ((dedupedCategories[i][1] * 3) + 12) + "px;\">";
            cloudHtml += dedupedCategories[i][0] + "</span></a> \n";
        });
        //Append the finished cloud to the page
        $('#bloggerCloud').append(cloudHtml);
    });
    return false;
});

Since building this script I've now gone one step further and created a jQuery plug-in based on this code.  For more details and the source code see my jQuery Tag Cloud Plugin page.

jQuery Blogger Template Category List Widget

Blogger is a hosted blogging service which allows you to publish your blog to your own URL and create your own custom HTML templates to match your website design. 
I have been using Blogger for this blog for several years, and have been trying to find a good way of displaying a list of categories on each blog page.

As yet I haven't found an official way of creating a category list using the Blogger mark-up code, so I decided to write my own widget to do the job for me.

When I say category list I mean a list of all the blog tags/labels in your blog, each linking to a page with posts categorised using that particular tag, just like the examples below.

Blog Categories

Because Blogger is a hosted blogging service you can't use a server-side language to create the category list for your HTML template, instead you must rely on client-side JavaScript.

Thankfully the Blogger service publishes XML files to your website along with the post, archive and category HTML pages.  These are in ATOM and RSS formats and are there primarily for syndication, but XML files are also fairly straightforward to parse using most programming languages, and they contain all the category data we need to build a categories list.

I chose to use the jQuery library because it makes the process even easier.

The Blogger XML Format

From the Blogger RSS XML snippet below you can see that each blog item can have multiple category nodes.  This means that the code must loop through each blog post, then loop through each category of each post, to create our category list; it also means that we will have duplicate categories, because more than one post can have the same category.

<item>
  <guid isPermaLink='false'></guid>
  <pubDate>Thu, 14 May 2009 18:30:00 +0000</pubDate>
  <category domain=''>C Sharp</category>
  <category domain=''>ASP.NET</category>
  <category domain=''>Visual Studio</category>
  <title>Language Interoperability in the .NET Framework</title>
  <atom:summary type='text'>.NET is a powerful framework which was built to allow cross-language support...</atom:summary>
  <thr:total xmlns:thr=''>0</thr:total>
</item>

The jQuery Code

The jQuery code is fairly easy to follow, but here is a quick explanation.  After the DOM is available for use, I create two JavaScript arrays, one to hold the categories and one to hold our de-duped category list.  Then I load in the Blogger RSS feed and iterate through each blog post adding each category to the categories array.
Once it reaches the end of the RSS feed, I need to sort the array into alphabetical order so that I can de-duplicate the categories list I just populated, which is what the next jQuery .each() function does.
All I have left to do is loop through the de-duped categories list, create the HTML link for each category and then append the HTML unordered list to the page.

$(document).ready(function() {
    var categories = new Array();
    var dedupedCategories = new Array();
    $.get('/blog/rss.xml', function(data) {
        //Find each post
        $(data).find('item').each(function() {
            //Get all the associated categories/tags for the post
            $(this).find('category').each(function() {
                categories[categories.length] = $(this).text();
            });
        });
        //Sort the categories alphabetically ready for de-duping
        categories.sort();
        //Dedup category/tag list
        var oldCategory = '';
        $(categories).each(function() {
            if (this.toString() != oldCategory) {
                //Add new category/tag
                dedupedCategories[dedupedCategories.length] = this.toString();
            }
            oldCategory = this.toString();
        });
        // Loop through all unique categories/tags and write a link for each
        var html = "<h3>Categories</h3>";
        html += "<ul class=\"niceList\">";
        $(dedupedCategories).each(function() {
            html += "<li><a href=\"/blog/labels/";
            html += this.toString() + ".html\">";
            html += this.toString() + "</a></li>\n";
        });
        html += "</ul>";
        //Append the finished list to the page
        $('#bloggerCategories').append(html);
    });
    return false;
});


Update your Blogger Template HTML to Show Categories

The only HTML you need to add to your Blogger template is a reference to jQuery and this script in the head of your page, plus an empty HTML DIV tag in the place where you want your categories list to appear.

<script type="text/javascript" src="/scripts/jquery.js"></script>
<script type="text/javascript" src="/scripts/blogcategories.js"></script>

<div id="bloggerCategories"></div>

You can see the script in action on my blog, or see this code rewritten to create a tag cloud.

ASP.NET Content Disposition Problem in IE7

I've just spent quite a while debugging a content-disposition problem I was having with Internet Explorer 7; the code works fine in Firefox but causes this error message in IE7.

"Internet Explorer cannot download xxx from xxx."

"Internet Explorer was not able to open this Internet site.  The requested site is either unavailable or cannot be found.  Please try again later."


This was my original snippet of C# code:

Response.Buffer = true;
Response.ContentType = docToDisplay.Type.ContentType.ToString();
Response.AddHeader("Content-Disposition", "attachment;filename=" + Server.UrlEncode(docToDisplay.FileName));



I eventually figured out that the following line of code was causing the issue:

Response.Cache.SetCacheability(HttpCacheability.NoCache);

I then did a quick search for "Response.Cache.SetCacheability(HttpCacheability.NoCache);" and discovered another developer who had run into the same Content-Disposition issue.  Unfortunately for me, that page didn't get returned when I was searching for the Internet Explorer error message.

This was the response to the post by Microsoft Online Support:

"Yes, the exporting code you provided is standard and after some further testing, I think the problem is just caused by the HTTP header set by Response.Cache.SetCacheability(HttpCacheability.NoCache).

I just captured the HTTP messages when setting and not setting the above "NoCache" option and found that the HTTP response returned the Cache-Control: no-cache header. So we can also reproduce the problem when using the following code: Response.CacheControl = "no-cache";

IMO, this should be the client-side browser's behavior against a "no-cache" response with stream content other than the original text/html content. So would you try to avoid setting the CacheAbility or the "Cache-Control" header to "no-cache" when you'd like to output a custom binary file stream?

Steven Cheng
Microsoft Online Support"

After removing the Response.Cache.SetCacheability line the file downloads correctly in Internet Explorer.

Time to start testing your websites in Safari on Windows?

Apple recently added their Safari web browser to the Apple Software Update and pre-checked the box by default. This effectively means that a lot of Windows users will now, possibly without knowing it, have installed Safari.
I'm not going to discuss the ethics of this practice here; instead read the blog of John Lilly, CEO of Mozilla.

But what it means for the humble web designer or developer is that we should really be installing Safari on our Windows machines and adding it to the list of browsers we test our sites against, as the number of Safari users is bound to increase as a consequence.

Apple pushed Safari web browser through their Apple Updates service

Competition in the browser business is good, and over the last few years Firefox has begun to gain ground on Microsoft's Internet Explorer dominance. Competition has also forced browsers to become more standards-compliant, helping web developers and designers build cross-browser, cross-platform web pages.

According to Apple, Safari is a standards compliant browser built on the open source WebKit project, so hopefully if your pages have been built to W3C standards they will require minimal checking, but it is always wise to test. Apple have a range of web developer resources for the Safari browser, including the Safari CSS support, Safari developer FAQ, and a general web development best practices guide.

Request a web page using HTTP and a Telnet session

Ever wanted to be a real web geek?
Well, you can get one step closer by following these steps to browse a website using a Telnet session from the Windows command prompt.
Believe it or not, you can actually use this method to diagnose HTTP issues, and it also provides an insight into how the HyperText Transfer Protocol (HTTP) works.

HTTP Request using Telnet

  1. Open a DOS prompt by clicking Start > Run and typing CMD and hitting Enter.
  2. Clear your screen of commands by typing CLS and pressing Enter.
  3. Start a Telnet session by typing telnet and pressing Enter.
  4. Configure the Telnet session to echo typed characters to the screen by typing set localecho.
  5. Instruct Telnet how you want to handle the Enter key by typing set crlf.
  6. Open up a connection to the site you want over HTTP port 80 by typing o followed by the hostname and 80.
  7. Press Enter several times until the cursor lands on an empty line and then request a page from the site.
  8. Type the following carefully without making errors (strictly speaking, HTTP/1.1 also requires a Host: header line naming the site, so add one if the server complains):

GET / HTTP/1.1

  9. Then press Enter twice and you should receive the HTML response for the page you just requested from the web server, delivered to you by HTTP!

Here's what you should have typed, and the response from the command prompt and Telnet session. I've omitted the verbose HTML response from the web server.

Welcome to Microsoft Telnet Client

Escape Character is 'CTRL+]'

Microsoft Telnet> set localecho
Local echo on
Microsoft Telnet> set crlf
New line mode - Causes return key to send CR & LF
Microsoft Telnet> o 80
Connecting To
GET / HTTP/1.1

Insert a Blogger Atom Feed into an ASP.NET Web Page

I've been busy recently migrating my homepage (and several others) from classic ASP to ASP.NET. My homepage displays the latest five posts, each with a summary and a link to the full blog post.
I eventually found a tutorial explaining how to achieve this using XSLT, after discovering that XmlDataSource XPath doesn't support namespaces!
I've tinkered with the XSLT that Arnaud Weil posted in his blog to achieve the following objectives:

  1. Limit the number of posts returned by the transformation.
  2. Show a summary of the post.
  3. Show a summary that tries hard not to cut words in half when generating a snippet.
  4. Produce XHTML valid code.

Here's the source of my XSLT...

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">

<!--<xsl:output method="html"/>-->
<xsl:output method="xml" indent="yes" omit-xml-declaration="yes"/>

<xsl:template match="/atom:feed">
<div id="FeedSnippets">
<xsl:apply-templates select="atom:entry" />
</div>
</xsl:template>

<xsl:template match="atom:entry" name="feed">
<xsl:if test="position()&lt;6">
<h4><xsl:value-of select="atom:title"/></h4>
<p>
<xsl:choose>
<xsl:when test="string-length(substring-before(atom:summary,'. ')) &gt; 0">
<xsl:value-of select="substring-before(atom:summary,'. ')" />...<br />
</xsl:when>
<xsl:when test="string-length(substring-before(atom:summary,'.')) &gt; 0">
<xsl:value-of select="substring-before(atom:summary,'.')" />...<br />
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="substring(atom:summary,0,200)" />...<br />
</xsl:otherwise>
</xsl:choose>
<strong>Read full post: </strong><a href="{atom:link[@rel='alternate']/@href}"><xsl:value-of select="atom:title"/></a></p>
<hr />
</xsl:if>
</xsl:template>

</xsl:stylesheet>

Font and Text Styles for Websites

People find reading text on screen much more difficult than reading on paper; by following some simple guidelines, web designers can help make it as painless as possible.

Because web browsers render textual content using the fonts installed on the client machine, web designers are severely limited in the font variation they have at their disposal.
Most browsers will be able to render the following fonts without reverting to a default alternative:

  • Arial
  • Georgia
  • Verdana
  • Arial Black
  • Times New Roman
  • Trebuchet MS
  • Courier New
  • Impact
  • Comic Sans MS

I personally stick to the top three fonts in this list, as they are very readable on-screen and look the most professional; avoid Comic Sans MS and Courier New.
It is also possible to specify alternative fonts in your CSS style sheets for the browser to use if your font of choice isn't available.
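For example, a rule like the following (a sketch; substitute your own font preferences) asks the browser to try Verdana first, fall back to Arial, and finally to the system's default sans-serif face:

```css
body {
    font-family: Verdana, Arial, sans-serif;
}
```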

Once you've chosen a font for a website use that font throughout to maintain consistency. Websites that use too many fonts sprinkled through their pages look unprofessional. Verdana is the most readable font even at low point sizes.

Maintaining Readability

To increase a site's readability you should focus your attention on three things:

  1. Choose a good font and a decent readable default font size.
  2. Make sure the contrast between text and background colours is high.
  3. Avoid using all capitals in blocks of text and headings.

Text in Images

Some web designers get around the font support problems mentioned above by creating a bitmap image of headings, titles and so on. Although this method does allow you to use any font your graphics program supports, including anti-aliased (smooth-edged) fonts, it causes big accessibility and SEO problems. You should only use this technique for a company logo; all other textual information should be actual text in your HTML.

Hyperlink Usability

How often do you visit a website and are unsure where to click and what is clickable?
In the past all links were blue and underlined, but web designers have increasingly found that modern stylesheet functionality (CSS) allows them to change the look and feel of links and experiment with unconventional navigation. Sometimes so much so that users have to mine pages for links by hovering over the page to find out what is clickable, all for the sake of design.

User Expectations

Hyperlinks need to stand out to users as links, a clickable entity that will cause a new page to be requested when clicked.
Users should not have to spend their precious time learning what style the site uses to render hyperlinks by watching for their mouse cursor to turn into a pointing hand while scanning the site.
Web designers need to understand users' expectations.

All links should be underlined at the very least. Any other use of underlined text or blue words should be avoided as these can be confused with links.

The Rel Attribute in HTML

The rel attribute is available for use in a few HTML tags, namely the <link> and <a> anchor tags, but until recently it has been fairly pointless to use, because web browsers did not support the intended functionality of most of the values you could assign to it.

The rel attribute has been around since the HTML 3 specification and defines the relationship between the current document and the document specified in the href attribute of the same tag. If the href attribute is missing from the tag, the rel attribute is ignored.

For example:
<link rel="stylesheet" href="styles.css">

In this example the rel attribute specifies that the href attribute contains the stylesheet for the current document.
This is probably the only recognised and supported use of the rel attribute by modern web browsers and by far the most common use for it to date.
There are other semantic uses for the rel attribute beyond those a browser might find useful; examples include social networking and describing relationships between people. The other use, which has been talked about a lot recently, concerns search engine spiders.

Search Engines and the rel Attribute

Recently Google has played a big part in finding another use for the rel attribute, this time on the humble anchor tag.
Google and the other major search engines (MSN and Yahoo!) fight a constant battle with SERP spam, which clutters their results and makes them less useful. These pages make their way into the top results by using black-hat SEO methods such as automated comment spam, link farms and so on.
Rather than adopt a complex algorithm to identify these spam links, which increase a target page's search-engine vote (sometimes called "PageRank" or "Web Rank"), the search engines have collectively decided that if blogging software, big directories, general links pages and so on use anchor tags with a rel="nofollow" attribute, those links will simply be ignored by search engine spiders, yet still be fully functional for end users.
Of course, using rel="nofollow" does not mean the links are deemed bad in any way; every link in a blog comment is treated in the same fashion. The webmaster is essentially saying

"this link was not put here by me, so ignore it and do not pass any "link juice" on to it".
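In mark-up terms that's a single attribute on the anchor tag; a comment link like this (the URL is a placeholder) still works for visitors but passes no vote:

```html
<a href="http://www.example.com/" rel="nofollow">commenter's website</a>
```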

More on nofollow by Search Engine Watch.

Putting Webmasters in Control

Putting this kind of control in webmasters' hands hasn't been without controversy. People will always experiment with ways of manipulating the intended outcome to favour their own goals, such as using nofollow internally in their own site. Others have welcomed the move as a way of reducing the spam problem.