April 17, 2006 - by jason
In the debunking-lighttpd post he wrote a while back, Paul Q.'s main point was that benchmarking a web server over localhost ignores, and tells you nothing about, the ability of the web server (and the underlying OS) to saturate its network connection.
So the benchmarks over at weblog.textdrive.com were all pumping out about 1000 requests/second over a gigabit connection. Now to put that in some context.
The page was our main and initial page at textdrive.com: it's 140 KB uncompressed, and because most of that is images, it comes out to 122 KB compressed, with 20 objects requested per page visit.
Let’s for argument’s sake say it’s 125KB.
A 100 Mbps connection has a theoretical maximum of 12.5 MB/sec and a 1000 Mbps connection 125 MB/sec, which makes the math easy.
So for a 125 KB page, that works out to 100-1,000 unique page views per second, and with 20 objects per page, 2,000-20,000 requests per second that could pump out of that system.
And this is assuming you have all your ducks in a row and your system can flood a 100Mbps or 1000 Mbps connection.
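The back-of-the-envelope math above can be sanity-checked in a few lines of Python (the page weight and object count are the figures from this post; link speeds are theoretical maxima):

```python
# Theoretical ceiling for the ~125 KB, 20-object page described above.
PAGE_MB = 0.125          # compressed page weight, ~125 KB
OBJECTS_PER_PAGE = 20    # requests per page visit

for mbps in (100, 1000):
    link_mb_per_sec = mbps / 8                       # Mbps -> MB/sec
    pages_per_sec = link_mb_per_sec / PAGE_MB        # full page views/sec
    requests_per_sec = pages_per_sec * OBJECTS_PER_PAGE
    print(f"{mbps} Mbps: {pages_per_sec:.0f} pages/sec, "
          f"{requests_per_sec:.0f} requests/sec")
```

Which prints 100 pages/sec (2,000 requests/sec) for the 100 Mbps link and 1,000 pages/sec (20,000 requests/sec) for gigabit, matching the range above.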
What does the ability to do 1000 requests/second mean, then?
For the 125 KB page composed of 20 objects, it means ((1000 requests/sec) / (20 requests per page)) * 0.125 MB per page = 6.25 MB/sec.
Or 50 Mbps, constant.
That’s actually a lot. There are 86,400 seconds in a day, so sustaining 1000 requests/second around the clock would be 86,400,000 hits in a day, or 4,320,000 page views at 20 requests per page.
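The sustained-load numbers can be worked out the same way (note that a day has 86,400 seconds, and the page weight and object count are the figures from this post):

```python
# What 1000 requests/second sustained all day works out to.
REQS_PER_SEC = 1000
OBJECTS_PER_PAGE = 20     # requests per page visit
PAGE_MB = 0.125           # ~125 KB compressed page
SECONDS_PER_DAY = 86_400

pages_per_sec = REQS_PER_SEC / OBJECTS_PER_PAGE    # 50 full page views/sec
mb_per_sec = pages_per_sec * PAGE_MB               # 6.25 MB/sec
mbps = mb_per_sec * 8                              # 50 Mbps sustained
hits_per_day = REQS_PER_SEC * SECONDS_PER_DAY
pages_per_day = pages_per_sec * SECONDS_PER_DAY

print(f"{mb_per_sec} MB/sec = {mbps:.0f} Mbps sustained")
print(f"{hits_per_day:,} hits/day, {pages_per_day:,.0f} page views/day")
```

That comes out to 50 Mbps sustained, 86,400,000 hits a day, and 4,320,000 page views a day.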
Granted, this is just “web”: a simple page served by simple Rails from a simple database.