Adds more benchmarks

James Roberts
2021-12-12 18:39:03 +02:00
parent d3ebf48b12
commit ce3f83114a
25 changed files with 189 additions and 63 deletions

View File

@@ -34,7 +34,7 @@ pip install fastwsgi
## Performance
FastWSGI is one of the fastest general use WSGI servers out there!
For a comparison between other popular WSGI servers, see [PERFORMANCE.md](./performance_benchmarks/PERFORMANCE.md)

View File

@@ -1,7 +1,5 @@
# Performance Benchmarks
## Flask based benchmarks
A set of "Hello World" benchmarks comparing FastWSGI's performance to other popular WSGI servers.
All benchmarks were performed with [wrk](https://github.com/wg/wrk).
@@ -10,6 +8,8 @@ All benchmarks were performed with [wrk](https://github.com/wg/wrk).
- 100 concurrent connections
- 60 second load duration
All servers were set up to use just a single worker (1 process).
```bash
wrk -t8 -c100 -d60 http://localhost:5000 --latency
```
@@ -25,11 +25,35 @@ cd performance_benchmarks/
./benchmark_all.sh
```
## Simple WSGI application benchmarks
This group of benchmarks compares how a simple "Hello World" WSGI application performs across different underlying WSGI servers.
For a simple WSGI application, if you're looking for speed, FastWSGI is simply unmatched!
On a **single worker**, it handled over 70k requests per second and served over 4 million requests in 60 seconds!
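The server under test here is a bare WSGI callable run directly by FastWSGI. A minimal sketch of such a single-worker setup, mirroring the `fastwsgi_wsgi.py` benchmark server added in this commit:
```python
import fastwsgi

def app(environ, start_response):
    # Bare-bones WSGI callable: fixed status, one header, static body
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, World!"]

if __name__ == "__main__":
    # One process, one worker -- the configuration used for these numbers
    fastwsgi.run(wsgi_app=app, host="127.0.0.1", port=5000)
```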
### Requests per second
![requests-per-seond](./requests_per_second.jpg)
![wsgi-requests-per-second](./graphs/wsgi_requests_per_second.jpg)
### Requests served in 60 seconds
![requests-served](./requests_served.jpg)
![wsgi-requests-served](./graphs/wsgi_requests_served.jpg)
## Simple Flask application benchmarks
This group of benchmarks compares how a simple "Hello World" Flask application performs across different underlying WSGI servers.
FastWSGI performs on par with [bjoern](https://github.com/jonashaag/bjoern), another ultra-fast WSGI server written in C. Flask itself, rather than the WSGI server, appears to be the bottleneck here; pushing a simple Flask application significantly beyond 9k requests per second on a single worker seems unlikely.
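For the Flask group, each server wraps the same minimal Flask app. A sketch of the FastWSGI variant (assumed; the `fastwsgi_flask.py` source is not part of this diff, but `fastwsgi.run` accepts the Flask app as its WSGI callable):
```python
import fastwsgi
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    # Flask's WSGI interface handed straight to FastWSGI, single worker
    fastwsgi.run(wsgi_app=app, host="127.0.0.1", port=5000)
```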
### Requests per second
![flask-requests-per-second](./graphs/flask_requests_per_second.jpg)
### Requests served in 60 seconds
![flask-requests-served](./graphs/flask_requests_served.jpg)

View File

@@ -1,16 +1,16 @@
pip install -r requirements.txt
echo "Benchmarking Flask"
./benchmark_basic_flask.sh
./benchmarks/benchmark_basic_flask.sh
echo "Benchmarking Flask + Gunicorn"
./benchmark_gunicorn_flask.sh
./benchmarks/benchmark_gunicorn_flask.sh
echo "Benchmarking Flask + FastWSGI"
./benchmark_fastwsgi_flask.sh
./benchmarks/benchmark_fastwsgi_flask.sh
echo "Benchmarking Flask + Bjoern"
./benchmark_bjoern_flask.sh
./benchmarks/benchmark_bjoern_flask.sh
echo "Benchmarking CherryPy"
./benchmark_cherrypy.sh
./benchmarks/benchmark_cherrypy.sh

View File

@@ -1,6 +0,0 @@
fuser -k 5000/tcp;
rm -rf nohup.out
nohup python3 servers/basic_flask.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > results/basic_flask_results.txt
fuser -k 5000/tcp;

View File

@@ -0,0 +1,6 @@
fuser -k 5000/tcp;
rm -rf nohup.out
nohup python3 ../servers/basic_flask.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > ../results/basic_flask_results.txt
fuser -k 5000/tcp;
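The `basic_flask.py` server this script launches isn't included in the diff; a plausible minimal sketch (hypothetical reconstruction), using Flask's built-in development server on the benchmarked port:
```python
# Hypothetical sketch of servers/basic_flask.py (not shown in this commit)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    # Flask's built-in development server, single process
    app.run(host="127.0.0.1", port=5000)
```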

View File

@@ -1,6 +1,6 @@
fuser -k 5000/tcp;
rm -rf nohup.out
nohup python3 servers/bjoern_flask.py &
nohup python3 ../servers/bjoern_flask.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > results/bjoern_flask_results.txt
fuser -k 5000/tcp;

View File

@@ -0,0 +1,6 @@
fuser -k 5000/tcp;
rm -rf nohup.out
nohup python3 ../servers/bjoern_wsgi.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > results/bjoern_wsgi_results.txt
fuser -k 5000/tcp;

View File

@@ -1,7 +1,7 @@
fuser -k 5000/tcp;
fuser -k 8080/tcp;
rm -rf nohup.out
nohup python3 servers/cherry_py.py &
nohup python3 ../servers/cherry_py.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:8080 --latency > results/cherrypy_results.txt
fuser -k 8080/tcp;

View File

@@ -1,6 +1,6 @@
fuser -k 5000/tcp;
rm -rf nohup.out
nohup python3 servers/fastwsgi_flask.py &
nohup python3 ../servers/fastwsgi_flask.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > results/fastwsgi_flask_results.txt
fuser -k 5000/tcp;

View File

@@ -0,0 +1,6 @@
fuser -k 5000/tcp;
rm -rf nohup.out
nohup python3 ../servers/fastwsgi_wsgi.py &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > results/fastwsgi_wsgi_results.txt
fuser -k 5000/tcp;

View File

@@ -1,6 +1,6 @@
fuser -k 5000/tcp;
rm -rf nohup.out
cd servers/
cd ../servers/
nohup gunicorn gunicorn_flask:app --bind 127.0.0.1:5000 &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > ../results/gunicorn_flask_results.txt

View File

@@ -0,0 +1,7 @@
fuser -k 5000/tcp;
rm -rf nohup.out
cd ../servers/
nohup gunicorn gunicorn_wsgi:application --bind 127.0.0.1:5000 &
sleep 3
wrk -t8 -c100 -d60 http://localhost:5000 --latency > ../results/gunicorn_wsgi_results.txt
fuser -k 5000/tcp;

View File

@@ -1,8 +1,53 @@
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FuncFormatter
benchmarks = {
def y_fmt(x, y):
    return "{:,.0f}".format(x)
def requests_per_second_graph(save_name, data):
    fig = plt.figure()
    fig.set_size_inches(8, 6)
    ax = fig.add_axes([0, 0, 1, 1])
    labels = [dp[0] for dp in data]
    rps = [dp[1] for dp in data]
    ax.set_ylabel("Requests per second")
    ax.set_title("Requests per second per WSGI server")
    ax.yaxis.set_major_formatter(FuncFormatter(y_fmt))
    ax.bar(labels, rps)
    plt.savefig(f"graphs/{save_name}", bbox_inches="tight", pad_inches=0.3, dpi=200)
def requests_served_graph(save_name, data):
    fig = plt.figure()
    fig.set_size_inches(8, 6)
    ax = fig.add_axes([0, 0, 1, 1])
    labels = [dp[0] for dp in data]
    rs = [dp[1] for dp in data]
    ax.set_ylabel("Requests served")
    ax.set_title("Requests served in 60 seconds")
    ax.yaxis.set_major_formatter(FuncFormatter(y_fmt))
    ax.bar(labels, rs)
    plt.savefig(f"graphs/{save_name}", bbox_inches="tight", pad_inches=0.3, dpi=200)
def extract_data(req_ps, req_served, benchmarks):
    for key, file in benchmarks.items():
        with open(file, "r") as f:
            for line in f:
                if "Requests/sec:" in line:
                    data = int(float(line.split("Requests/sec:")[1].strip()))
                    req_ps.append((key, data))
                if "requests in" in line:
                    datapoint = (key, int(line.split("requests in")[0].strip()))
                    req_served.append(datapoint)
flask_benchmarks = {
    "CherryPy": "results/cherrypy_results.txt",
    "Flask": "results/basic_flask_results.txt",
    "Flask+\nGunicorn": "results/gunicorn_flask_results.txt",
@@ -10,46 +55,19 @@ benchmarks = {
"Flask+\nFastWSGI": "results/fastwsgi_flask_results.txt",
}
requests_per_second = []
requests_served = []
wsgi_benchmarks = {
    "Gunicorn": "results/gunicorn_wsgi_results.txt",
    "Bjoern": "results/bjoern_wsgi_results.txt",
    "FastWSGI": "results/fastwsgi_wsgi_results.txt",
}
for key, file in benchmarks.items():
    with open(file, "r") as f:
        for line in f:
            if "Requests/sec:" in line:
                datapoint = (key, int(float(line.split("Requests/sec:")[1].strip())))
                requests_per_second.append(datapoint)
            if "requests in" in line:
                datapoint = (key, int(line.split("requests in")[0].strip()))
                requests_served.append(datapoint)
flask_requests_per_second, flask_requests_served = [], []
extract_data(flask_requests_per_second, flask_requests_served, flask_benchmarks)
requests_per_second_graph("flask_requests_per_second.jpg", flask_requests_per_second)
requests_served_graph("flask_requests_served.jpg", flask_requests_served)
print(requests_per_second)
print(requests_served)
import matplotlib.pyplot as plt
fig1 = plt.figure()
fig1.set_size_inches(8, 6)
ax1 = fig1.add_axes([0, 0, 1, 1])
labels = [dp[0] for dp in requests_per_second]
rps = [dp[1] for dp in requests_per_second]
ax1.set_ylabel("Requests per second")
ax1.set_title("Requests per second per WSGI server")
ax1.set_yticks([x for x in range(0, 10000, 1000)])
ax1.bar(labels, rps)
plt.savefig(f"requests_per_second.jpg", bbox_inches="tight", pad_inches=0.3, dpi=200)
fig2 = plt.figure()
fig2.set_size_inches(8, 6)
ax2 = fig2.add_axes([0, 0, 1, 1])
labels = [dp[0] for dp in requests_served]
rps = [dp[1] for dp in requests_served]
ax2.set_ylabel("Requests served")
ax2.set_title("Requests serverd in 60 seconds")
ax2.set_yticks([x for x in range(0, 1000000, 30000)])
ax2.bar(labels, rps)
plt.savefig(f"requests_served.jpg", bbox_inches="tight", pad_inches=0.3, dpi=200)
wsgi_requests_per_second, wsgi_requests_served = [], []
extract_data(wsgi_requests_per_second, wsgi_requests_served, wsgi_benchmarks)
requests_per_second_graph("wsgi_requests_per_second.jpg", wsgi_requests_per_second)
requests_served_graph("wsgi_requests_served.jpg", wsgi_requests_served)

Binary file not shown (added: 128 KiB)

Binary file not shown (added: 129 KiB)

Binary file not shown (added: 118 KiB)

Binary file not shown (added: 125 KiB)

Binary file not shown (removed: 138 KiB)

Binary file not shown (removed: 166 KiB)

View File

@@ -0,0 +1,13 @@
Running 1m test @ http://localhost:5000
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    51.37ms    3.79ms  61.18ms   85.61%
    Req/Sec    234.45     14.99   363.00    87.81%
  Latency Distribution
     50%   50.00ms
     75%   50.07ms
     90%   59.97ms
     99%   60.09ms
  112130 requests in 1.00m, 12.84MB read
Requests/sec:   1867.38
Transfer/sec:    219.01KB

View File

@@ -0,0 +1,13 @@
Running 1m test @ http://localhost:5000
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency      1.35ms  196.56us   3.76ms   78.03%
    Req/Sec      8.92k   509.23    12.89k    65.79%
  Latency Distribution
     50%    1.30ms
     75%    1.42ms
     90%    1.62ms
     99%    2.00ms
  4264150 requests in 1.00m, 414.79MB read
Requests/sec:  71030.10
Transfer/sec:      6.91MB

View File

@@ -0,0 +1,13 @@
Running 1m test @ http://localhost:5000
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     14.96ms  479.07us  21.88ms   87.38%
    Req/Sec     804.85    49.50    848.00    73.98%
  Latency Distribution
     50%   14.85ms
     75%   15.16ms
     90%   15.48ms
     99%   16.46ms
  384737 requests in 1.00m, 62.38MB read
Requests/sec:   6407.66
Transfer/sec:      1.04MB

View File

@@ -0,0 +1,11 @@
import bjoern


def application(environ, start_response):
    headers = [("Content-Type", "text/plain")]
    start_response("200 OK", headers)
    return [b"Hello, World!"]


if __name__ == "__main__":
    bjoern.run(application, "127.0.0.1", 5000)

View File

@@ -0,0 +1,11 @@
import fastwsgi
def application(environ, start_response):
headers = [("Content-Type", "text/plain")]
start_response("200 OK", headers)
return [b"Hello, World!"]
if __name__ == "__main__":
fastwsgi.run(wsgi_app=application, host="127.0.0.1", port=5000)

View File

@@ -0,0 +1,4 @@
def application(environ, start_response):
headers = [("Content-Type", "text/plain")]
start_response("200 OK", headers)
return [b"Hello, World!"]