Tuning Nginx for Best Performance
This article is part 2 of a series about building a high-performance web cluster powerful enough to handle 3 million requests per second. For this part of the project, you can use any web server you like. I decided to use Nginx, because it’s lightweight, reliable, and fast.
Generally, a properly tuned Nginx server on Linux can handle 500,000 – 600,000 requests per second. My Nginx servers consistently handle 904k req/sec, and have sustained high loads like these for the ~12 hours that I tested them.
It’s important to know that everything listed here was used in a testing environment, and that you might actually want very different settings for your production servers.
Install the Nginx package from the EPEL repository.
yum -y install nginx
Back up the original config, and start hacking away at a config of your own.
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
vim /etc/nginx/nginx.conf
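The article doesn't reproduce a full config here, but a high-throughput nginx.conf typically touches the directives below. Treat this as a sketch: the values are illustrative assumptions, not benchmarked settings, and should be tuned against your own hardware.

```nginx
# Illustrative high-throughput settings; tune for your own hardware.
worker_processes  auto;             # one worker per CPU core
worker_rlimit_nofile  200000;       # raise the per-worker file-descriptor limit

events {
    worker_connections  10240;      # connections per worker
    multi_accept        on;         # accept as many new connections as possible at once
}

http {
    sendfile            on;         # zero-copy file transfers
    tcp_nopush          on;         # send response headers and file start in one packet
    tcp_nodelay         on;
    keepalive_timeout   30;
    keepalive_requests  100000;     # reuse connections heavily under benchmark load
    access_log          off;        # per-request disk I/O hurts at these request rates
}
```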
service nginx start
chkconfig nginx on
Now point Tsung at this server and let it go. It’ll run for ~10 minutes before it hits the server’s peak capabilities, depending on your Tsung config.
vim ~/.tsung/tsung.xml
<server host="YOURWEBSERVER" port="80" type="tcp"/>
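If you're building tsung.xml from scratch rather than reusing the one from part 1, a minimal config wrapping that `<server>` line looks roughly like this. The phase duration, arrival rate, and request URL are placeholders, not values from my test runs:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <server host="YOURWEBSERVER" port="80" type="tcp"/>
  </servers>
  <load>
    <!-- Placeholder load phase: 10 minutes, 100 new users/second -->
    <arrivalphase phase="1" duration="10" unit="minute">
      <users arrivalrate="100" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <session name="root" probability="100" type="ts_http">
      <request><http url="/" method="GET"/></request>
    </session>
  </sessions>
</tsung>
```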
Hit ctrl+C after you’re satisfied with the test results, otherwise it’ll run for hours. Use the alias “treport” that we set up earlier to view the results.
Web server tuning, part 2: TCP stack tuning
This section applies to any web server, not just Nginx. Tuning the kernel’s TCP settings will help you make the most of your bandwidth. These settings worked best for me on a 10GBASE-T network: with them, my network’s throughput went from ~8 Gbps on the default system settings to 9.3 Gbps. As always, your mileage may vary.
When tuning these options, I recommend changing just one at a time. Then run a network benchmark tool like netperf or iperf — or something like my script, cluster-netbench.pl, which tests more than one pair of nodes at a time.
yum -y install netperf iperf
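iperf’s human-readable summary format varies between versions, but once you capture it you can pull out the throughput figure and log it next to whichever sysctl you just changed. A small sketch — the summary line below is a made-up example, not real benchmark output:

```shell
# Hypothetical captured iperf summary line (format varies by version):
line='[  3]  0.0-10.0 sec  10.8 GBytes  9.30 Gbits/sec'

# The throughput figure is the second-to-last field; log it per tested setting:
gbps=$(echo "$line" | awk '{print $(NF-1)}')
echo "somaxconn=3240000 -> ${gbps} Gbits/sec"
```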
Add the following to /etc/sysctl.conf:

# Increase system IP port limits to allow for more connections
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_window_scaling = 1

# Number of packets to keep in backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000

# Increase socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000

# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic
Apply the new settings after each change.
sysctl -p /etc/sysctl.conf
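To confirm a setting actually took effect, read the live value back from the kernel. Each dotted sysctl key maps to a path under /proc/sys, with the dots replaced by slashes:

```shell
# Read the live kernel values back; dots in the sysctl key become slashes here.
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/ip_local_port_range
```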
Don’t forget to run your network benchmark program between each change! It’s important to keep track of what settings work for you. You’ll save yourself a lot of time by being methodical in your testing.