Support for configuring logic around worker exit to support Prometheus client multiprocessing #43
Dominick-Peluso-Bose started this conversation in Ideas
Replies: 1 comment
-
That sounds like something you can already do by overriding the spawning method, thanks to the `spawn_target` parameter:

```python
import os

from granian import Granian
from prometheus_client import multiprocess


def custom_spawn(*args, **kwargs):
    # Run the regular WSGI worker; this call blocks until the worker exits.
    Granian._spawn_wsgi_worker(*args, **kwargs)
    # Then clean up this worker's prometheus_client multiprocess files.
    multiprocess.mark_process_dead(os.getpid())


if __name__ == "__main__":
    Granian("mymodule:app", interface="wsgi").serve(spawn_target=custom_spawn)
```

Of course you can swap wsgi for the asgi and rsgi interfaces. But my question would be: since granian doesn't re-spawn processes (at least right now), is this needed? I mean, if a worker is shutting down, that also implicitly means the entire server/app is shutting down, so you won't produce any further metrics for Prometheus anyway.
-
If you are using granian with multiple workers, I think that to make use of the Prometheus client we would need a way to hook into the point where a worker exits, so we can clean up the Prometheus client's multiprocess state.
In gunicorn, you add something like this to your gunicorn configuration file:
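(The following is a sketch based on the `child_exit` hook recommended in the prometheus_client documentation.)

```python
from prometheus_client import multiprocess

def child_exit(server, worker):
    # gunicorn calls this in the master after a worker process exits;
    # marking the worker's pid dead removes its stale multiprocess files.
    multiprocess.mark_process_dead(worker.pid)
```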
It would be great if granian added a way to hook into events like that. Another helpful hook would be one that runs when the server starts.