I've been looking at the ASP.NET Cache object, which makes the older Application object effectively redundant. A useful analogy is a leaky bucket: a bucket in which you can store data that is "expensive" to retrieve from its source every time. Since databases and files are slow compared with in-memory data, it makes sense to keep a copy of frequently used data in the cache, which is held in memory.
The bucket is leaky because you can only put so much data in the cache before it fills up and has to drop some. By default it uses a simple method called LRU (Least Recently Used) to decide which data to discard. I say by default because, if you wish, you can make data persist for a set period of time or make it dependent on a file, a database, or other cached data.
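As a rough illustration, here is a minimal C# sketch of putting data into the cache with an absolute expiration and a file dependency. The key "ProductData", the file path and the LoadProductsFromFile helper are made-up names, and the snippet assumes it sits inside a Page (so the Cache and Server properties are available) with the System.Data and System.Web.Caching namespaces imported.

// Sketch only: cache a DataSet under a made-up key for ten minutes,
// and have it evicted automatically if the source file changes.
DataSet products = LoadProductsFromFile(Server.MapPath("~/App_Data/products.xml"));

Cache.Insert(
    "ProductData",                      // cache key
    products,                           // the "expensive" data
    new CacheDependency(Server.MapPath("~/App_Data/products.xml")), // drop it if the file changes
    DateTime.UtcNow.AddMinutes(10),     // absolute expiration
    Cache.NoSlidingExpiration);         // no sliding expiration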
The cache, however, is shared across the whole application and therefore accessible to every user, so you have to be careful what kind of data you store in it. You probably don't want to store user-specific data here!
You also need to check that the data you want is actually in the cache before you use it. This sounds obvious, but if you don't code it correctly, by assigning the cached object to a local variable and then checking that variable, you could introduce bugs that you can't replicate in a test environment, because an item can be evicted between the check and the use.
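Here is a minimal sketch of that pattern, again assuming a hypothetical "ProductData" key and a made-up LoadProductsFromDatabase helper. The point is to copy the item into a local variable once, rather than reading Cache["ProductData"] a second time after the check.

// Copy the cached item to a local variable first, then test the variable.
// Testing Cache["ProductData"] and then reading it again leaves a window
// in which the item can be evicted between the check and the use.
DataSet products = Cache["ProductData"] as DataSet;

if (products == null)
{
    // Cache miss: rebuild from the slow source and put it back in the cache.
    products = LoadProductsFromDatabase();
    Cache.Insert("ProductData", products);
}

// Work with the local variable from here on, never the cache indexer.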
Deciding when to move business logic to the data layer in a multi-tiered application is a tricky one and depends a lot on the RDBMS you are using.
I've been looking at an ASP.NET application that rebuilds a database when a button on a web form is clicked, creating seven rows for each "musical performance".
To begin with, a SQL Server 2005 stored procedure written in VB.NET (it could just as easily have been C#, with the same results, since all .NET languages compile to IL) was called from a loop in the business layer seven times to create the seven rows. Run this way, the application took about 90 seconds to rebuild a test database.
This logic was then moved into a single stored procedure, still written in VB.NET. Instead of making seven calls to the database each time, we now make one call and execute the loop inside the stored procedure, so the business logic runs at the data layer. The database rebuild took about 30 seconds this time.
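To make the comparison concrete, here is a rough C# sketch of the two approaches using ADO.NET. The stored procedure names, parameters and variable names are invented for illustration; this is not the actual application's code. It assumes the System.Data and System.Data.SqlClient namespaces.

// Before: seven round trips, one per row, driven from the business layer.
void RebuildPerformanceRowByRow(string connectionString, int performanceId)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        for (int i = 1; i <= 7; i++)
        {
            using (SqlCommand cmd = new SqlCommand("CreatePerformanceRow", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@PerformanceId", performanceId);
                cmd.Parameters.AddWithValue("@RowNumber", i);
                cmd.ExecuteNonQuery();   // one database call per row
            }
        }
    }
}

// After: one round trip; the stored procedure loops and inserts all seven rows itself.
void RebuildPerformanceInOneCall(string connectionString, int performanceId)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlCommand cmd = new SqlCommand("CreatePerformanceRows", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PerformanceId", performanceId);
            cmd.ExecuteNonQuery();       // a single database call
        }
    }
}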
Stored Procedures in Native T-SQL
To try to make execution faster, we decided to rewrite the stored procedure in T-SQL, which is still the native language of SQL Server. This shaved another 15 seconds off the execution time, which gives some idea of the overhead that writing your stored procedures in anything other than T-SQL introduces.
This exercise provides a glimpse into how the decisions you make when designing an application radically affect the responsiveness of the system.
Business Logic at the Data Layer
If you are pulling lots of data up to the business layer, performing some processing and then discarding most of it, performance may improve if that operation is pushed down to the data layer, as in the sketch below. It really depends on the data and the application.
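As a hypothetical illustration (table, column and variable names are made up, and an open SqlConnection called conn is assumed), the first approach drags every row up to the business layer and filters in memory, while the second lets the database discard the unwanted rows before they ever leave the data layer.

// Business-layer filtering: fetch everything, then discard most of it in memory.
DataTable allPerformances = new DataTable();
new SqlDataAdapter("SELECT * FROM Performances", conn).Fill(allPerformances);
DataRow[] recent = allPerformances.Select("PerformanceDate >= #1/1/2007#");

// Data-layer filtering: only the rows you actually need cross the wire.
SqlCommand cmd = new SqlCommand(
    "SELECT * FROM Performances WHERE PerformanceDate >= @Since", conn);
cmd.Parameters.AddWithValue("@Since", new DateTime(2007, 1, 1));
SqlDataAdapter adapter = new SqlDataAdapter(cmd);
DataTable recentPerformances = new DataTable();
adapter.Fill(recentPerformances);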
One of the problems with storing too much application logic in the data layer is version control. It can become a minefield managing lots of complex stored procedures between test and live environments and keeping them in sync with the application.
On the last Learning Tree ASP.NET course I took, I never thought to ask why we were made to work in pairs in front of one PC workstation. It turns out this is not a cost-cutting exercise, which was my first thought. Apparently Learning Tree bases this on research and programming methodologies (namely Extreme Programming, which I may have to look into further at some point). According to the lecturer, the ideal number of programmers crafting a piece of code is 1.8 (some pieces of code are too trivial for two developers to work on simultaneously), but in a learning environment it is 2.2, with the extra help coming from the teacher and other peers in the classroom.
Essentially what he was trying to say was that developers can't think and type at the same time, and the product of pair programming is fewer errors and better-quality, better-structured code.
When people use a search engine to solve a problem, they will most probably enter words that describe the symptoms of that problem. They may even type a question into the query box. Most of the time, unless the user already knows of a particular product that will do the job, they won't search for an xyz machine from Acme Inc.
Traditional Marketing Versus Online Marketing
This poses a problem for manufacturers who describe their product with sexy slogans and marketing speak. Search engines index the textual content on your web pages. If there is not much plain English describing what the product does and what problems it solves, you are potentially not doing your product website justice.
Keyword Research - Speak Your Customer's Language
This is where you need to do a little homework and research the words users are likely to use to find your product and your competitors' products. There are many ways you can do this: Google AdWords has a tool for it, and so does WordTracker.
You can use the results of this research to carefully write your product text, tailoring and optimising it for your audience. This is known as linguistic Search Engine Optimisation (SEO).
Another day, another ludicrous allegation about cyberspace. Apparently... "The vast majority of blogs on top social websites contain potentially offensive material."
This was the conclusion of a ScanSafe-commissioned report, which claims that sites such as MySpace, YouTube and Blogger, which are a "hit" among children, can host porn or adult language. According to the report, 1 in 20 blogs contains a virus or some sort of malicious spyware.
User-generated content is to blame, of course; the way the content is built and edited makes it very difficult to control and regulate.
Even if you were to monitor every post on a website as part of your process, how would you verify whether a particular piece of text or Photoshopped image violated anyone's copyright or intellectual property?
This is a problem the big search engines have as well: countless SPAM sites scrape content from other sites and republish the resulting mashed-up content as their own work in order to cash in on affiliate income generated from SERPs. Is Google working on a solution to stem this SPAM?
EU Intellectual Property Ruling
Another potential blow to websites that rely on user-generated content is the European Union ruling on intellectual property which is making its way through the ratification process. This could see ISPs and website owners being charged for copyright infringements even if the material was posted by users of the site.
The rel attribute is available in a few HTML tags, namely the <link> and <a> (anchor) tags, but until recently it has been fairly pointless to use because web browsers did not support the intended functionality of most of the values you could assign to it.
The rel attribute has been around since the HTML 3 specification and defines the relationship between the current document and the document specified in the href attribute of the same tag. If the href attribute is missing from the tag, the rel attribute is ignored.
<link rel="stylesheet" href="styles.css">
In this example the rel attribute specifies that the href attribute contains the stylesheet for the current document.
This is probably the only use of the rel attribute that modern web browsers recognise and support, and by far its most common use to date.
There are other semantic uses for the rel attribute beyond those a browser might find useful; examples include social networking and describing relationships between people (see http://gmpg.org/xfn/intro). The other use that has been talked about a lot recently concerns search engine spiders.
Search Engines and the rel Attribute
Recently Google has played a big part in finding another use for the rel attribute. This time the HTML tag in question was the humble anchor tag.
Google and the other major search engines (MSN and Yahoo!) have a constant battle with SERP SPAM, which clutters their results and makes them less useful. These pages make their way into the top results pages using black-hat SEO methods such as automated comment SPAM, link farms and so on.
Rather than adopt a complex algorithm to detect these SPAM links, which boost a target page's search engine vote (sometimes called "Page Rank" or "Web Rank"), the search engines (Google, MSN and Yahoo!) have collectively decided that if blogging software, big directories, general links pages and so on add a rel="nofollow" attribute to their anchor tags, those links will simply be ignored by search engine spiders, yet remain fully functional for end users.
Of course, using rel="nofollow" does not mean the links are deemed bad in any way; every link in a blog comment will be treated in the same fashion. The webmaster is essentially saying:
"this link was not put here by me, so ignore it and do not pass any "link juice" on to it".
More on nofollow by Search Engine Watch.
Putting Webmasters in Control
Putting this kind of control in webmasters' hands hasn't been without controversy. People will always experiment with ways of manipulating the intended outcome to favour their own goals, such as using nofollow on internal links within their own site. Others have welcomed the move as a way of reducing the problem of spamming.
The British Pound broke through the psychological barrier of $2 yesterday due to the relative strength of the British economy. For us Brits this has some advantages, like cheap shopping trips to New York, and some disadvantages, such as companies that export goods to the US suffering because their goods become more expensive to American importers.
It also affects British web publishers who earn money from American companies. Affiliate programs like Google's AdSense, Amazon Associates and so on all pay in US dollars. Some schemes have the option of holding payments, but with the weakening economy in the US this exchange rate might be with us for some time.
For the first time in the UK, two people have been cautioned by police for accessing wireless broadband connections without permission. Both cases were detected because of suspicious behaviour in cars parked in the vicinity, not through electronic means.
Both people were warned for dishonestly obtaining electronic telecoms with intent to avoid payment.
Most wireless routers come without Wi-Fi encryption turned on by default, leaving unsavvy users open to this kind of abuse.
Most broadband ISPs' terms and conditions state that you cannot share your broadband connection with your neighbours and so on; therefore all activity on your connection is associated with you.
Due to recent laws, ISPs have to keep records of your Internet activity for a number of years. If unauthorised people are accessing your connection and using it for illegal activities, how would you prove your innocence?
Recently news has come out that anti-piracy companies are monitoring P2P traffic. Using a modified version of Shareaza, they automatically send your IP address to your ISP and demand your details if pirated material is detected being downloaded. Some people have questioned whether an IP address is enough evidence to connect a person with a crime, especially considering these cases of drive-by Wi-Fi hacking.
More and more sites are adopting XML syndication technologies such as RSS and Atom, which users can subscribe to.
Rather than being a push technology like email newsletters, RSS is a pull technology. The subscriber is in full control of the subscription; the publisher has no relationship with the subscriber and does not need to know their email address. This makes unsubscribing very easy, and because you don't need to supply an email address you don't need to worry about whether your details will be sold on by unscrupulous companies.
XML syndication has been around for five years or so, but in the early days the RSS readers available weren't up to scratch, so it took a while for the technology to gather momentum. Nowadays there are plenty of good readers, such as Bloglines and Google Reader, which are very polished products that support all the major formats.
The last nail in the email newsletter's coffin will be the adoption of RSS advertising into the mainstream. Google and Yahoo! are currently testing advertising in these syndication formats. As soon as these are released, the already strong relationship Google has with publishers will allow it to rapidly make RSS very lucrative for website publishers.
Until recently, publishers syndicating their content via RSS had a hard time analysing their circulation. That's where companies such as FeedBurner have found a niche, and they continue to provide publishers with additional services on top of basic subscription tracking.
Of course, syndicating your content is just another method of publishing. First you had paper, then HTML, now XML. You can't eradicate SPAM with RSS; people can set up SPAM blogs and so on, but it's the subscribers who are in control of their subscriptions. So as a publisher you know that the 500 subscribers reported by your RSS analytics product of choice are actively reading your content, or else they would simply click to unsubscribe from within their RSS reader application. Compare that to a database of registered newsletter subscribers dating back several years; are those users viewing your newsletter in their preview pane and pressing delete rather than unsubscribing via an unsubscribe link?
Content is King
The old adage that 'content is King' is truer than ever with RSS syndication. The problem with giving this much power to the subscriber is that your content needs to be top-notch to keep your subscribers subscribed. Even though there are guidelines on opt-out and unsubscribe methods and practices which newsletter senders must adhere to, the fact is that unsubscribing from RSS is far easier and does not rely on differing geographic data protection laws.