Replies: 2 comments
-
Hi @mike667, I'm sorry to hear that. Now, given this, here are a few considerations:
-
@gi0baro Thanks for the advice!
-
Hello, I was very happy when I saw the benchmarks.
I tried it on a real project of mine: a Django application served over WSGI. My benchmark simulates user behavior on the website with a k6 script, and I ran the test with the same number of virtual users against both gunicorn and granian.
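To keep the comparison fair, the load generation is identical for both servers: the same k6 scenario is simply pointed at each deployment in turn, roughly like this (the script name, virtual-user count, duration, and hostnames here are illustrative placeholders, not my exact setup):

```sh
# Same load profile run against each deployment in turn.
# Script name, VU count, duration, and hostnames are placeholders.
k6 run --vus 200 --duration 5m -e BASE_URL=https://gunicorn.myapp.internal loadtest.js
k6 run --vus 200 --duration 5m -e BASE_URL=https://granian.myapp.internal loadtest.js
```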
It turned out that requests per second are the same as or worse than with gunicorn, and the p95 response time is better with gunicorn. I tried different thread/worker configurations per CPU and tried both --opt and --no-opt, but the result is always worse than or close to gunicorn. The best result I currently get is by running one worker process per CPU for both gunicorn and granian. (The project runs on Kubernetes across three nodes with 8 CPUs each.)
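For reference, the two setups I'm comparing look roughly like the commands below; the module path, bind address, and worker/thread counts are placeholders rather than my exact production values:

```sh
# gunicorn: one worker process per CPU of the node, default sync workers.
gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --workers 8

# granian: WSGI interface with the same worker count; I also toggled --opt / --no-opt here.
granian --interface wsgi --host 0.0.0.0 --port 8000 --workers 8 --blocking-threads 1 myproject.wsgi:application
```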
My goal was to get at least a 0.5% improvement or to use fewer resources. I understand that this is just the web server and that it's not the bottleneck of the system.
Do you have any ideas on what I might be doing wrong?
Thank you