In the past, I used Microsoft Web Application Stress Tool and Pylot to stress test web applications. I'd written a simple home-page script, a login script, and a site walkthrough (on an ecommerce site: adding a few items to a cart and checking out).
Just hitting the homepage hard with a handful of developers would almost always locate a major problem. More scalability problems would surface at the second stage, and still more after launch.
The tools I used were Microsoft Homer (aka Microsoft Web Application Stress Tool) and Pylot.
The reports generated by these tools never made much sense to me, and I would spend many hours trying to figure out what kind of concurrent load the site would be able to support. It was always worth it because the stupidest bugs and bottlenecks would always come up (for instance, web server misconfigurations).
What have you done, what tools have you used, and what success have you had with your approach? The part that is most interesting to me is coming up with some kind of a meaningful formula for calculating the number of concurrent users an app can support from the numbers reported by the stress test application.
Take a look at TestComplete.
For simple usage, I prefer ab (Apache Benchmark) and siege; the latter is needed because ab doesn't support cookies and will create endless sessions on a dynamic site.
Both are simple to get started with; siege can also run against a whole list of URLs.
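To give an idea of the kind of invocations I mean (example.com, urls.txt, and the request/concurrency numbers are placeholders, not tuned values):

    ab -n 1000 -c 50 http://example.com/
    siege -c 25 -t 1M http://example.com/
    siege -c 25 -r 10 -f urls.txt

Here -n/-r are request counts, -c is the concurrency level, -t is a time limit, and -f points siege at a file of URLs; check each tool's man page, since options vary a little between versions.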
The latest siege version turns verbose mode on in siegerc, which is annoying; you can only disable it by editing that file (/usr/local/etc/siegerc).
Trying everything mentioned here, I found curl-loader best for my purposes: a very easy interface, real-time monitoring, and useful statistics, from which I build performance graphs. All features of libcurl are included.
I've used The Grinder. It's open source, pretty easy to use, and very configurable. It is Java based and uses Jython for the scripts. We ran it against a .NET web application, so don't think it's a Java-only tool (by their nature, web stress tools shouldn't be tied to the platform of the application they test).
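To give a flavour of the scripting model, here is a minimal sketch of a Grinder 3 Jython script (the test number, test name, and localhost URL are placeholder assumptions, not from our setup):

    # Minimal Grinder 3 Jython script: wrap an HTTPRequest in a Test so the
    # framework records timing statistics, then have each worker GET one page.
    from net.grinder.script import Test
    from net.grinder.plugin.http import HTTPRequest

    test1 = Test(1, "Home page")
    request1 = test1.wrap(HTTPRequest())

    class TestRunner:
        # The Grinder creates one TestRunner per worker thread
        # and calls it once per run.
        def __call__(self):
            request1.GET("http://localhost:8080/")

From there, the thread and process counts are cranked up via grinder.properties.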
We did some neat stuff with it... ours was a web-based telecom application, so one cool use I set up was mimicking dialing a number through our web application, then using an auto-answer tool we had (basically a tutorial app from Microsoft for connecting to their RTC LCS server... which is what Microsoft Office Communicator connects to on a local network... modified to just pick up calls automatically). This let us use The Grinder instead of an expensive telephony tool called The Hammer (or something like that).
Anyway, we also used the tool to see how our application held up under high load, and it was very effective at finding bottlenecks. It has built-in reporting to show how long requests take, but we never used it. The logs can also store all the responses and whatnot, and you can add custom logging.
I highly recommend this tool; it's very useful for the price... but expect to do some custom setup with it (it has a built-in proxy to record a script, but it may need customization for capturing things like sessions... I know I had to customize it to use a unique session per thread).
At the risk of being accused of shameless self-promotion, I would like to point out that in my quest for a free load-testing tool I went to this article: http://www.devcurry.com/2010/07/10-free-tools-to-loadstress-test-your.html
I tried out every tool on the list and, to my frustration, found that none of them quite did what I wanted: either I couldn't get the throughput I wanted, or I couldn't get the flexibility I wanted. I also wanted to easily aggregate the results of multiple load-generation hosts in post-test analysis. So I built one and am sharing it.
Here it is: http://sourceforge.net/projects/loadmonger
PS: No snide comments on the name from folks who are familiar with urban slang. I wasn't, but I am slightly more worldly now.
One more note: for our web application, I found that we had huge performance issues due to contention between threads over locks... so the moral is to think the locking scheme over very carefully. We ended up using worker threads behind an asynchronous HTTP handler to throttle how many requests were in flight at once; otherwise the application would just get overwhelmed and crash and burn. It meant a huge backlog could pile up, but at least the site would stay up.
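Here is a rough, language-neutral sketch of that throttling pattern (in Python with asyncio rather than the ASP.NET asynchronous handler we actually used; the worker limit and timings are made-up numbers): requests queue up behind a semaphore so only a bounded number hit the lock-heavy code at once.

    # Sketch of the idea only, not our actual handler: accept requests
    # immediately, but let only a bounded number of workers run the
    # contended work at a time, so excess load queues instead of crashing the site.
    import asyncio

    MAX_WORKERS = 8                      # assumed limit; tune to what the backend can take
    worker_slots = asyncio.Semaphore(MAX_WORKERS)

    async def do_expensive_work(request_id: int) -> str:
        await asyncio.sleep(0.1)         # stands in for the real lock-contended work
        return "done %d" % request_id

    async def handle_request(request_id: int) -> str:
        # Requests beyond MAX_WORKERS wait here (the backlog) rather than
        # piling onto the backend and bringing the whole site down.
        async with worker_slots:
            return await do_expensive_work(request_id)

    async def main():
        # Simulate a burst of 100 concurrent requests hitting the handler.
        results = await asyncio.gather(*(handle_request(i) for i in range(100)))
        print(len(results), "requests served")

    if __name__ == "__main__":
        asyncio.run(main())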