Benchmark1

posted 06 Jul 2022

Performance usually comes with tradeoffs. These can take many forms, from skipping security checks to only optimizing the small part of the program that is being benchmarked. If you let performance guide a design, other aspects of the program can suffer. In other words, it doesn't matter how fast a broken program runs.

Despite optimization not being a priority, some benchmarks are interesting enough to be worth sharing. The condensing patch referenced below condenses multiple network write requests into a single one.
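
I won't reproduce the patch here, but as a rough sketch of the idea (this is not the actual hinsightd code, the names and buffers are made up), condensing writes can be as simple as gathering the pieces of a response and handing them to the kernel with writev(2), instead of calling write(2) once per piece:

    /* Illustration only: coalescing two response buffers into one syscall.
     * Error handling and partial-write handling are omitted for brevity. */
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* one syscall per buffer: header, then body */
    static void respond_separate(int fd, const char *hdr, const char *body)
    {
        (void)write(fd, hdr, strlen(hdr));   /* syscall #1 */
        (void)write(fd, body, strlen(body)); /* syscall #2 */
    }

    /* both buffers handed to the kernel in a single syscall */
    static void respond_condensed(int fd, const char *hdr, const char *body)
    {
        struct iovec iov[2] = {
            { .iov_base = (void *)hdr,  .iov_len = strlen(hdr)  },
            { .iov_base = (void *)body, .iov_len = strlen(body) },
        };
        (void)writev(fd, iov, 2);            /* one syscall for both buffers */
    }

Fewer write calls per response means fewer user/kernel transitions, which is presumably where most of the gain in the numbers below comes from.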

For 500 concurrency: ab -k -c 500 -n 10000 http://localhost:<port>/

server and version           requests/sec (mean)
before patches               22790.46
-O2 alone                    23167.02
after the condensing patch   26393.86
after patch + -O2            30257.64
nginx/1.21.6                 26683.67
lighttpd/1.4.64              fails at 500 concurrency
Apache/2.4.54                fails at 500 concurrency

Bonus, for 100 concurrency, since the other servers gave errors at 500: ab -k -c 100 -n 10000 http://localhost:<port>/

server and version   requests/sec (mean)
after patch + -O2    32136.88
nginx/1.21.6         26788.39
lighttpd/1.4.64      56971.62
Apache/2.4.54        fails at 100 concurrency

Switching from -Os to -O2 has a small performance impact, but reducing write requests adds around 4k req/s (about 20% faster). Looking at the other servers' defaults, I have to say I'm confused: I thought nginx would be faster, though it does handle concurrency better than either Apache or lighttpd. It also seems lighttpd is really fast at static requests.

While testing consumes time and I can't say I'm a big fan of optimization, free performance with minimal tradeoffs is the best kind of performance.

Now on to figuring out why file transfers are so slow.

Related topics: benchmark, hinsightd
posted 06 Jul 2022 📝 by tiotags