I am currently working on a C# application that minifies and combines JavaScript/CSS asynchronously in the background, so the result can be loaded on the page once it is complete. Once the combined, minified file(s) are created, they are saved to disk, and all subsequent requests to the page load this file.
The reason for this is to improve page performance. I have a concern, though: what if the combined file is large, e.g. 200 KB? Would it be better in that case to combine into 2 files and make 2 separate HTTP requests? The file will be gzipped and cached.
Well, there are two main schools of thought.
The first is to reduce the number of HTTP requests as much as possible. That means combining ALL CSS files into one monster file. It's better to download 400 KB once than multiple 50 KB files (and the same goes for JS).
The other is to combine where necessary, but no further. If you have 100 KB of CSS that's only needed on one section of the site, there's no reason to slow the rest of the site down for your users. This is especially true for JS, since there are lots of sites that include jQuery (for example) on every page because 10% of the site uses it.
My take on it is a combination of the two. If code is used on about 50% of the site or more, I include it in the "master" file. If the code is small (less than 5 or 10 KB), I also include it in the master file. Otherwise I split it into separate files.
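As a sketch, that heuristic might look like the following (the 50% usage share and 10 KB cutoff are the rough numbers from above, not magic constants, and the asset metadata is hypothetical):

```javascript
// Sketch of the bundling heuristic above. The thresholds (used on >= 50%
// of pages, or under 10 KB) are rules of thumb, not hard limits.
function belongsInMasterFile(asset, totalPages) {
  const usageShare = asset.pagesUsedOn / totalPages;
  const isSmall = asset.sizeBytes < 10 * 1024;
  return usageShare >= 0.5 || isSmall;
}

// Hypothetical assets for a 20-page site.
const assets = [
  { name: 'site.css',   pagesUsedOn: 20, sizeBytes: 40 * 1024 }, // everywhere -> master
  { name: 'widget.js',  pagesUsedOn: 2,  sizeBytes: 4 * 1024 },  // tiny -> master
  { name: 'gallery.js', pagesUsedOn: 1,  sizeBytes: 90 * 1024 }, // rare and big -> own file
];
for (const a of assets) {
  console.log(a.name, belongsInMasterFile(a, 20) ? 'master' : 'separate');
}
```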
The whole reason for any of this is to make the user experience better. You could do a giant brute force and load all CSS and JS in 2 respective files on every page load (sure, it would be cached). But if the landing page doesn't need 50% of that code, you're needlessly slowing down the page with the biggest impact.
And that's why I believe that the best solution to this problem is to have a human analyze the situation. They can look for duplicates and abstractions. They can look at the needs of the page/site and determine the best scenario. Unless you want to make your program do that (which would be difficult), it's not going to give the best result (but then again, there is a difference between good and good-enough)...
That's my $0.02 anyway...
Google's documentation about page speed recommends the following:
Combine external JavaScript
Recommendations
Partition files optimally. Here are some rules of thumb for combining your JavaScript files in production:
- Partition the JavaScript into 2 files: one JS file containing the minimal code needed to render the page at startup; and one JS file containing the code that isn't needed until the page load has completed.
- Serve as few JavaScript files from the document <head> as possible, and keep the size of those files to a minimum.
- Serve JavaScript of a rarely visited component in its own file. Serve the file only when that component is requested by a user.
- For small bits of JavaScript code that shouldn't be cached, consider inlining that JavaScript in the HTML page itself.
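Google's first rule amounts to a simple partition over per-script metadata. A sketch (the `neededAtStartup` flag is an assumption; in practice you'd tag each script yourself when configuring the build):

```javascript
// Sketch of the two-file partition: scripts flagged as needed at startup
// go into the bundle served from <head>; everything else goes into a
// second bundle loaded after page load. The flags are hypothetical metadata.
function partitionScripts(scripts) {
  const startup = scripts.filter((s) => s.neededAtStartup);
  const deferred = scripts.filter((s) => !s.neededAtStartup);
  return {
    startupBundle: startup.map((s) => s.name),
    deferredBundle: deferred.map((s) => s.name),
  };
}

const { startupBundle, deferredBundle } = partitionScripts([
  { name: 'render.js',    neededAtStartup: true },
  { name: 'analytics.js', neededAtStartup: false },
  { name: 'comments.js',  neededAtStartup: false },
]);
console.log(startupBundle);  // names for the <head> bundle
console.log(deferredBundle); // names for the after-load bundle
```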
Combine external CSS
Recommendations
- Partition the CSS into 2 files: one CSS file containing the minimal code needed to render the page at startup; and one CSS file containing the code that isn't needed until the page load has completed.
- Serve CSS of a rarely visited component in its own file. Serve the file only when that component is requested by a user.
- For CSS that shouldn't be cached, consider inlining it.
- Don't use CSS @import from a CSS file.
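The last rule exists because @import serializes downloads: the browser can't fetch the imported file until the importing file has arrived. A build step can flatten the imports instead. A rough sketch, using a hypothetical in-memory map of file contents (a real tool would read from disk and handle `url(...)` forms, media queries, and import cycles):

```javascript
// Sketch: flatten CSS @import statements at build time instead of letting
// the browser chase them at runtime. `files` is a hypothetical in-memory
// map of name -> CSS source.
function inlineImports(name, files) {
  return files[name].replace(
    /@import\s+["']([^"']+)["'];/g,
    (_, imported) => inlineImports(imported, files)
  );
}

const files = {
  'main.css':  '@import "reset.css";\nbody { margin: 0; }',
  'reset.css': '* { box-sizing: border-box; }',
};
const flattened = inlineImports('main.css', files);
console.log(flattened); // reset.css's rules spliced in place of the @import
```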
Your best bet is to reduce the number of files, because browsers only allow a limited number of simultaneous downloads per domain name. So if you break it into two requests, you are using more of that allotment than you need.
Overall, you really want to reduce the total number of requests.