There seems to be so much fuss surrounding support for aging Microsoft browser Internet Explorer 6 lately, both from the web developer community and big corporations such as Google and Facebook. There are many websites dedicated to eradicating the browser, a Twitter petition, a joke campaign to save IE6 and a whole lot more…
While I don’t particularly enjoy spending a considerable amount of time per project making sure websites I build are IE6 compatible, I do see the benefit of supporting the browser.
I was in Google Analytics recently and looked at the browser statistics for this site. Visitors to my site are fairly IT literate, but Internet Explorer 6 still has a larger user base than Safari, Chrome and Opera, with almost 9% share. Looking at the W3Schools browser statistics, 12.1% of their users browsed the web with IE6 in September 2009.
[Chart: NikMakris.com web browser market share, September 2009]
[Chart: NikMakris.com Internet Explorer version share, September 2009]
I could make the decision not to support IE6 for my personal site and about 9% of my visitors would be affected, but if I made that decision on a commercial website, I could end up losing out on business, especially since many of the people still actively using IE6 are businesses or public sector organisations who can’t easily upgrade or install an alternative web browser.
Many organisations also have legacy applications that do not work with new versions of Internet Explorer and during a recession many organisations will avoid spending money on upgrades and new software if they can afford not to.
Whilst it might be okay for Google and Facebook to block support for the browser when you visit their own web properties, would a client of yours be happy if you did the same with a website you built, potentially losing them business?
Internet Explorer 6 may be a dog of a browser in 2009, and if you're a web developer it probably causes you hours of pain creating dedicated style sheets and conditional comments. You may even have had to make major template changes to deal with the many quirks of its rendering engine, but hopefully in the not-too-distant future it will hold such a small share of the browser market that we can all forget about it and start concentrating on new technologies such as HTML 5!
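For anyone who hasn't had the pleasure, the conditional comments mentioned above look something like this: a stylesheet served only to IE6 and below (the `ie6.css` filename is just an example, not something from a specific project):

```html
<!-- Standard stylesheet for all browsers -->
<link rel="stylesheet" type="text/css" href="styles.css" />

<!-- Only IE6 and earlier parse this block; every other browser
     treats it as an ordinary HTML comment and ignores it. -->
<!--[if lte IE 6]>
  <link rel="stylesheet" type="text/css" href="ie6.css" />
<![endif]-->
```

The IE6-only stylesheet then overrides whatever rules the broken rendering engine gets wrong, keeping the hacks out of the main stylesheet.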
I've just spent quite a while debugging a Content-Disposition problem with Internet Explorer 7. The code works fine in Firefox but causes these error messages to appear in IE7:
"Internet Explorer cannot download xxx from xxx."
"Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later."
This was my original snippet of C# code:
Response.Buffer = true;
Response.ContentType = docToDisplay.Type.ContentType.ToString();
Response.AddHeader("Content-Disposition", "attachment;filename=" + Server.UrlEncode(docToDisplay.FileName));
Response.Cache.SetCacheability(HttpCacheability.NoCache);
I eventually figured out that the following line of code was causing the issue:

Response.Cache.SetCacheability(HttpCacheability.NoCache);
I then did a quick search for "Response.Cache.SetCacheability(HttpCacheability.NoCache);" and discovered another developer who had had the same Content-Disposition issue. Unfortunately for me, that page didn't get returned when I was searching for the Internet Explorer error message.
This was the response to the post by Microsoft Online Support:
"Yes, the exporting code you provided is a standard one and after some further testing, I think the problem is just caused by the HTTP header set by Response.Cache.SetCacheability(HttpCacheability.NoCache);

I just captured the HTTP messages when setting and not setting the above "NoCache" option and found that the problem occurs when the HTTP response returns the "Cache-Control: no-cache" header. So we can also reproduce the problem when using the following code:

Response.CacheControl = "no-cache";

IMO, this should be the client-side browser's behavior against a "no-cache" response with stream content other than the original text/html content. So would you try to avoid setting the CacheAbility or the "Cache-Control" header to "no-cache" when you'd like to output a custom binary file stream?

Microsoft Online Support"
After removing the Response.Cache.SetCacheability line the file downloads correctly in Internet Explorer.
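For completeness, here is a sketch of the working version, using the same docToDisplay object as the original snippet. Everything beyond the original three lines (the commented-out "private" cacheability alternative in particular) is my assumption rather than something from the original code:

```csharp
// Sketch: the download headers with the problematic line removed.
// docToDisplay is the same object used in the original snippet.
Response.Buffer = true;
Response.ContentType = docToDisplay.Type.ContentType.ToString();
Response.AddHeader("Content-Disposition",
    "attachment;filename=" + Server.UrlEncode(docToDisplay.FileName));

// Removed: Response.Cache.SetCacheability(HttpCacheability.NoCache);
// If some cache-control header is still needed, "private" is a value
// that does not send "Cache-Control: no-cache" and so should avoid
// this IE bug (assumption, not verified in the original post):
// Response.Cache.SetCacheability(HttpCacheability.Private);
```

The key point is simply that nothing in the response sets Cache-Control to "no-cache" before the attachment is streamed.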
I've just read a post over at Search Engine Journal about statistics from Hitwise UK suggesting that British users are increasingly using browser search toolbars to find domains they already know, such as tesco.com, rather than typing them directly into the browser address bar.
I use this technique a lot because I frequently misspell a domain name or get the wrong domain extension for a website. When that happens, more often than not you get a holding page, a cyber-squatter site or, worse still, a site that attempts to mimic the intended destination in order to "phish" log-in details.
When you use a search toolbar to navigate to a domain, the top search result is most likely going to be the real domain.