WEB Advent 2008 / Coping with the Holiday Shopping Spree

Now that the holidays are closing in, a lot of people are realizing that they have not yet bothered to buy gifts for their loved ones, which is why oodles of people can be seen running around for most of December in an attempt to find the perfect gift… well, heck, anything cheap and funny!

Given how much of our buying has moved online over the past decade or so, this is prime time for online shops and, in general, for web sites that have anything to do with the holidays. Now, imagine the surge of people that are suddenly filling the shops, but instead of envisioning a busy shopping mall, think of them flocking to your web site.

It’s overcrowded, people are yelling—or in this case, they’re sending uppercase emails to your support staff—things are not going as fast as they’d like… all the joys that come with holiday shopping. Yay!

You might be thinking that it doesn’t affect your site, since you are all pimped out with memcache, APC, Varnish and other neat caching solutions that should keep you afloat in case of a sudden surge in activity on your precious site. Well, think again!

Your servers may be able to handle the load, but what about your users? What about their response time? Did you know that 80-90% of a user’s response time is spent downloading the page’s components? The remaining 10-20% goes into generating the HTML and doing all the background crunching with memcache et al. Backend caching still needs to work properly, of course, or that 10-20% can quickly balloon to 60%.

This is better known as the Pareto principle, the 80-20 rule, or—as Vilfredo Pareto put it in the 1900s—“80% of the wealth of the nation is owned by 20% of its citizens.” In other words, 80% of consequences come from 20% of causes.

Today’s reality is that there are still people out there on DSL connections with merely 1 Mbps down and less than 100 kbps up, or even on dial-up connections.

One important thing to think about is user perception. If the user thinks the site is slow, then it is slow, no matter what unload-to-onload time your performance tools report. If it looks like a duck and sounds like a duck, then it is a duck. And even once the HTML has loaded, the browser still needs to fetch the other components: images, JavaScript, CSS, and so on.

There are several ways to improve the response time across the board; really easy ways:

Put Yourself in Your Users’ Shoes

Too many development teams never really consider the view from their users’ browsers. Sure, many developers bite the bullet and test on Internet Explorer 5, but that’s not really walking in the users’ shoes.

One thing that’s often overlooked is testing the site over a slow connection. Development teams should really set up testing proxies that sit on slow DSL lines, or throttle their upstream and downstream speeds; make those developers suffer.

Testing sites only 10 meters from the actual development/production server via a local network or over a 10 Gigabit pipe can hide some sneaky issues that only happen on those slower connections.
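
If you don’t have a throttling proxy handy, you can fake one well enough for day-to-day testing. Below is a minimal sketch assuming Node.js and its built-in http module; the target host, port, and speed are made up, and a dedicated tool or your operating system’s traffic shaping will do a more faithful job:

    // A reverse proxy that throttles responses to DSL-like speeds, assuming
    // Node.js and its built-in http module. Browse http://localhost:8080/
    // instead of hitting the development server directly.
    var http = require('http');

    var TARGET_HOST = 'dev.example.org'; // hypothetical development server
    var BYTES_PER_SECOND = 12500;        // roughly a 100 kbps line

    http.createServer(function (clientReq, clientRes) {
      var headers = clientReq.headers;
      headers.host = TARGET_HOST; // keep name-based virtual hosts happy

      var proxyReq = http.request({
        host: TARGET_HOST,
        port: 80,
        path: clientReq.url,
        method: clientReq.method,
        headers: headers
      }, function (proxyRes) {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.on('data', function (chunk) {
          // Hold each chunk back for as long as it would take to squeeze
          // it through the slow line, so pages "feel" like they do on DSL.
          proxyRes.pause();
          setTimeout(function () {
            clientRes.write(chunk);
            proxyRes.resume();
          }, (chunk.length / BYTES_PER_SECOND) * 1000);
        });
        proxyRes.on('end', function () {
          clientRes.end();
        });
      });

      clientReq.pipe(proxyReq);
    }).listen(8080);

It is crude, but it makes every extra kilobyte and every blocked download painfully obvious to the person writing the code.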

HTTP Requests

That’s right, HTTP requests. Reducing the number of requests your site makes will speed it up; that’s not surprising, since fewer requests usually mean less data to download and fewer round trips. But you’re wrong if you think I’m going to suggest that you should lose valuable content for better response times. I do advise people to trim off the fat if possible, as long as the site doesn’t lose value as a result. However, there are other ways to achieve better response times without actually getting rid of any content.

Contrary to what most people believe (those I have spoken to, at least), around 40-60% of your users arrive at your site with an empty browser cache! Imagine that! Too many people rely on the fact that browsers will cache images, scripts, etc., and think that sending a lot of content is fine, since it’s just a one-time download or a request every other blue moon.

So, reducing the number of HTTP requests will not only benefit your first-time users, since they have fewer components to download, but it will also improve the experience of long-term users that have lost their cache for one reason or another. First impressions are important—it’s all about perception.

Cookies

Cookies, cookies, cookies! Om nom nom.

Do you know how much damage badly thought out cookies can do to a site’s response time? If not, then you are about to find out.

If a cookie is set for example.org and the site serves its components (images, JavaScript, CSS, etc.) from that domain, then the cookie is sent along with every request for those components. In other words, add the cookie size to the component size and you get (close to) the total data transferred for that request. So, a 1 KB image with a 5 KB cookie becomes roughly a 6 KB transfer.

Also, the user’s upload speed actually matters. Surprised? When requesting anything from a site that has set a cookie, the browser uploads its local copy of that cookie; otherwise the web site wouldn’t be able to access the stored information. So, if the user has a slow upstream and the cookie is large, every single request gets slower. For example, a 3 KB cookie pushed up a 100 kbps line adds roughly a quarter of a second to each request before the server has even begun to respond.

The Yahoo! performance team experimented with cookies some months back to see how cookie size affects response time. The table below shows the median response time for an empty HTML document served with cookies of various sizes over a DSL connection with a downstream speed of around 800 kbps. The delay grows with the size of the cookie:

  Cookie Size    Median Response Time (Delta)
  0 bytes        78 ms  (0 ms)
  500 bytes      79 ms  (+1 ms)
  1000 bytes     94 ms  (+16 ms)
  1500 bytes     109 ms (+31 ms)
  2000 bytes     125 ms (+47 ms)
  2500 bytes     141 ms (+63 ms)
  3000 bytes     156 ms (+78 ms)

So, what to do? Make sure you set the correct domains on your cookies and put only the required data in them. Let’s imagine that we have helgi.example.org, www.example.org, and advent.example.org, and that logging in to one of them logs you in to the rest (single sign-on). In that case you put the authentication cookie on *.example.org, so all of the subdomains can read it. However, take care not to add data to this cookie that is only relevant to helgi.example.org; instead, create a separate cookie, scoped to that subdomain, for such data.
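
As a rough illustration, with invented cookie names and values (in practice the authentication cookie would be set server-side via a Set-Cookie header, but the domain rules are identical):

    // Shared single sign-on cookie, scoped to .example.org: helgi.,
    // www. and advent.example.org can all read it, and it is uploaded
    // with every request to any of them.
    document.cookie = 'auth=1a79a4d60de6718e8e5b326e338ae533; domain=.example.org; path=/';

    // Data that only helgi.example.org cares about: set it on that
    // subdomain with no domain attribute, so it never tags along on
    // requests to www or advent.
    document.cookie = 'editor_prefs=compact; path=/';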

Another idea is to simply stop putting crap into the cookies; keep them as clean as possible. A common problem is sites that put all of a user’s details into a cookie, thinking they are saving a database call or two. Well, stop that! It actually slows things down for your users.
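
For example, again with invented values, compare a cookie that drags the whole user record along on every request with one that only carries an identifier and lets the server look the rest up, where it can be cached in memcache anyway:

    // Fat cookie: several hundred bytes of user details uploaded with
    // every request for every page, image, script, and stylesheet.
    document.cookie = 'user=' + encodeURIComponent(
      '{"id":42,"name":"Helgi","email":"helgi@example.org","theme":"red","lang":"is"}'
    ) + '; path=/';

    // Lean cookie: a few dozen bytes; the server looks the details up
    // by ID and caches them, saving that database call after all.
    document.cookie = 'sid=42a3f9c1; path=/';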

Maximize Parallel Downloads

So, who hasn’t gone to one of the bigger web sites out there and seen things like img1.example.org, img2.example.org, img3.example.org and so on? Ever wondered why that is done? It has to do with how many parallel downloads browsers can handle.

By default, Internet Explorer and Firefox allow around eight connections in total (across all tabs) and two per host over an HTTP/1.1 connection. This can be configured if you’re a proper geek. Recent Firefox builds (Firefox 3) raise this to six connections per host, but we still have plenty of people using older browsers anyway.

So, this means that the browser can only download two components at a time from any one host. The rest of the components are simply… blocked. Now, think about that for a second and you can see how this can be a bit of a bottleneck, especially for people with faster connections. So, why not make a fast site even faster?

With a little thought on the matter, observant people will notice where those extra domains come into play… They do indeed allow you to load more components in parallel, since each subdomain counts as a separate host.

  • 1 domain = 2 parallel downloads
  • 2 domains = 4 parallel downloads
  • 3 domains = 6 parallel downloads
  • And so on…

If we consider the default behavior, then our loading process would look something like this:

Chart showing component files downloading in batches of two.

Whereas, if we added a second domain into the mix, it could become something like this, with four parallel downloads instead of just two:

Chart showing component files downloading in batches of four.

Some people might see the dark side of this and end up with 20 subdomains just to allow for more parallel downloading, thinking it will speed up the web site by orders of magnitude. Unfortunately, that’s not the case. What needs to be considered here is the strain on the end user and their hardware: the more components that download in parallel, the more connections the user’s machine and network have to juggle at once, and every connection carries its own overhead. So be thoughtful.

Also, adding more subdomains in this way means more DNS look-ups, and the look-up time may vary between DNS servers. One has to take the user’s geographic location into account as well.

A good average seems to be between two and four subdomains. So, in summary:

  • Point the CNAMEs to the same IP as the main server.
  • If you serve example.jpg from img1.example.org, make sure that you always serve it from that domain, otherwise you won’t get the benefit of using the browser cache (one way to guarantee this is sketched after this list).
  • Keep a decent balance of subdomains.
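
To make the browser-cache point above concrete, here is a minimal sketch of deterministic domain sharding. The hostnames, shard count, and hash below are arbitrary; anything stable will do, as long as the same path always maps to the same host:

    // Hypothetical helper: hash the image path and pick one of a fixed
    // set of subdomains, so example.jpg is always served from the same
    // host and the browser cache keeps working.
    var IMAGE_HOSTS = ['img1.example.org', 'img2.example.org', 'img3.example.org'];

    function imageUrl(path) {
      var hash = 0;
      for (var i = 0; i < path.length; i++) {
        hash = (hash * 31 + path.charCodeAt(i)) % 997;
      }
      return 'http://' + IMAGE_HOSTS[hash % IMAGE_HOSTS.length] + path;
    }

    // imageUrl('/images/example.jpg') returns the same URL on every page
    // view, while different images spread out across the three hosts.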

JavaScript & CSS

Most sites have multiple JavaScript and CSS files, each containing a bunch of whitespace and new lines, which means extra size. This is a good opportunity for improvement.

One thing that can be done is to combine all of your JavaScript files into one, and do the same for your CSS. The result is fewer HTTP requests, but it also presents us with some challenges. Obviously, things should still be developed separately, in their own modules, and then combined at build time. However, this raises another issue: what to combine, and when?

Why would you want to combine five scripts into one on a page that only uses three of them? It depends on your site, the size of the files, and so on, so it’s up to you to decide. Ideally, you want as few files per site as possible; if you serve a couple of different combinations of scripts, your users end up with a couple of files to cache instead of one. You need to research how it will affect your users.
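
As a rough sketch of the build-time approach (assuming Node.js purely for illustration; the file names are invented, and a shell or PHP build script would work just as well):

    // Concatenate the modules a page actually needs into a single file,
    // so visitors pay one HTTP request for scripts instead of three.
    var fs = require('fs');

    var modules = ['menu.js', 'tracking.js', 'checkout.js'];

    var combined = modules.map(function (name) {
      return fs.readFileSync('js/src/' + name, 'utf8');
    }).join(';\n'); // stray semicolons guard against files that omit their own

    fs.writeFileSync('js/build/site.js', combined, 'utf8');
    // Run the result through a minifier afterwards and serve js/build/site.js.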

Now, combining files is not the only action we can take. We can also “minify” them: stripping out comments and whitespace, shortening local variable names (turning var fooBarIsMagic = 1 into var a = 1), and other such black magic.
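
To give a feel for what that looks like, here is an illustrative before and after; it is not the exact output of any particular minifier:

    // Before: readable source with comments, whitespace, and long names.
    function totalPrice(itemPrice, itemCount) {
      // Give a small holiday discount on larger orders.
      var fooBarIsMagic = itemCount > 10 ? 0.9 : 1;
      return itemPrice * itemCount * fooBarIsMagic;
    }

    // After: roughly what a minifier produces. Comments and whitespace are
    // gone and local names are shortened, but the public name survives.
    function totalPrice(b,c){var a=c>10?0.9:1;return b*c*a}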

There are a few programs/scripts that can help:

  • Dojo ShrinkSafe
  • YUI Compressor
  • JSMin
  • Packer

Dojo ShrinkSafe and YUI Compressor are Java implementations using Rhino, while the other two are JavaScript implementations. JSMin is maintained by Douglas Crockford, and Packer is by Dean Edwards, author of the ‘IE7’ JavaScript library. Some of these tools can also handle combining many files into one and even compress (gzip) the result.

I recommend taking a deeper look into this and figuring out which of these tools best fits your needs. If you go to The JavaScript CompressorRater, you can submit your JavaScript to see how each library performs on your code and find what suits you best.

While on this topic, I think it is important to note how JavaScript files relate to the parallel downloading mentioned above. Scripts simply do not take part in parallel downloading: while a script is being downloaded and executed, the browser holds off on fetching anything else. So move as much JavaScript as close to the bottom of the HTML document as possible, otherwise you will block the downloading of other components:

Chart showing JavaScript downloading before other types of component files.

Images

Images have always been something that we, as developers, really don’t want to touch, given how we think. We tend to use the left side of our brain, but designers and creative people make better use of the right side. Now, given how I’ve been talking about minimizing JavaScript and CSS, reducing HTTP requests and such, I think we should stick to that theme for a bit.

Many web sites contain a fair number of images: rounded corners, small icons, and other visual cues that most likely amount to a third of your HTTP requests. So, what can we do to improve on this? We can use CSS sprites or HTML image maps here. Personally, I favor the CSS sprite approach.

The idea behind both approaches is to move smaller images into one big image and use positioning to pick only the parts we need. That allows us to take ten small images and replace them with one, reducing the overhead by nine HTTP requests. In turn, this improves our response time because of the parallel download limit discussed earlier. Even if you have gotten past that limitation by using multiple subdomains, it’s generally a bit better to download one decently sized object instead of ten smaller objects. This is just the nature of the network stack.

Now, while using things like CSS sprites or image maps gets us a long way, there are a couple of other things we can do with images. We can scale them with ImageMagick instead of using HTML width and height attributes to shrink oversized files; I’m amazed how often I see people use the attributes! We can also reduce image quality: the human eye can only detect so much detail at a time, so dropping JPEG quality from 100% down to 70% on thumbnails is fine and saves us some bytes. In fact, many PNG images out there can be reduced in size by 50% without actually losing any visual quality! Hot damn!

There are tools out there to help with this—just hit up Google or Yahoo!—but one such tool is OptiPNG. So, do reduce the quality of images if you can—compress them. Your budget will thank you when it comes to paying the bandwidth bill.

One final thing that few people realize with regard to images is the favicon.ico. A lot of web sites neglect to put even an empty ICO file in place, causing a 404 in the server log on every single request for that missing file. This especially hurts sites that employ a special 404 page: every favicon request then renders that pretty 404 page, and that can amount to a lot of wasted resources.

So put an ICO file in place, be it empty or not. The same rules apply to it as to other images: keep it small and optimized, since it is requested on practically every page view.

Conclusion

And that’s how the cookie crumbles!

I have only touched briefly on each of these issues, and I recommend that everyone go and read more on these topics. I did not cover things such as CDNs (Content Delivery Networks), Expires headers, ETags, mobile-specific improvements, and there are a couple of other things that I’m probably forgetting.

Employ these techniques wisely, and only use what is required—don’t go all-out and throw everything in. I mean, has anyone seen those crazy decorated houses in the U.S. over Christmas? It looks like a damned light show rather than something to delight the eyes! Remember: K.I.S.S. But also remember, if a site needs to be visually rich, then it needs to be visually rich. Don’t start stripping things out and make the site lose its value—be smart about it.

Until next time, happy holidays!

P.S.

For those who don’t get why I’m comparing this to Christmas shopping: online shopping this past Black Friday increased quite a lot according to PriceGrabber.com, so it is now more important than ever to make the online shopping and browsing experience as quick and as smooth as possible.
