I use the multiprocessing library in a Flask-based web application to start long-running processes. The function that does this is the following:
def execute(self, process_id):
    self.__process_id = process_id
    process_dir = self.__dependencies["process_dir"]
    process = Process(target=self.function_wrapper,
                      name=process_id,
                      args=(self.__parameters, self.__config, process_dir))
    process.start()
When I want to deploy some code to this web application, I restart a service that restarts gunicorn, which is served by nginx. My problem is that this restart kills all child processes started by the application, as if a SIGINT signal were sent to each child. How can I avoid this?
EDIT: After reading this post, it appears that this behavior is normal. The answer suggests using the subprocess library instead. So I reformulate my question: how should I proceed if I want to start long-running tasks (which are Python functions) from a Python script and make sure they survive the parent process, OR make sure the parent process (which is a gunicorn instance) survives a deployment?
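For reference, one way to make a subprocess survive signals aimed at the parent is to start it in its own session. This is a minimal, self-contained sketch (the sleeping child stands in for a real long-running task); `start_new_session=True` makes the child call `setsid()`, so it no longer belongs to the parent's process group and does not receive group-wide SIGINT/SIGTERM on restart:

```python
import os
import subprocess
import sys

# Stand-in for the real long-running function; here it just sleeps.
child_code = "import time; time.sleep(1)"

# start_new_session=True runs setsid() in the child before exec,
# placing it in a new session (and a new process group), so signals
# sent to the parent's process group on restart do not reach it.
proc = subprocess.Popen([sys.executable, "-c", child_code],
                        start_new_session=True)

# The child now lives in a session distinct from the parent's.
assert os.getsid(proc.pid) != os.getsid(os.getpid())
```

The limitation, as noted below, is that subprocess launches a separate program rather than an in-process Python function.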
FINAL EDIT: I chose @noxdafox's answer since it is the most complete one. First, using a process queuing system is probably the best practice here. Then, as a workaround, I can still use multiprocessing but with the python-daemon context (see here and here) inside the function wrapper. Lastly, @Rippr suggests using subprocess with a different process group, which is cleaner than forking with multiprocessing but requires standalone programs to launch (in my case I start specific functions from imported libraries).
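To illustrate the workaround, the detachment that python-daemon's DaemonContext performs is essentially a double fork plus setsid(). This is a minimal stdlib-only sketch (the helper name is hypothetical, not from any library) showing how a Python function can be made to outlive its caller:

```python
import os

def run_detached(task, *args):
    """Run task(*args) in a fully detached process (double-fork sketch,
    roughly what python-daemon's DaemonContext does under the hood)."""
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)  # reap the intermediate child, then return
        return
    os.setsid()             # start a new session, detach from the caller's group
    if os.fork() > 0:
        os._exit(0)         # intermediate child exits; grandchild is re-parented
    try:
        task(*args)         # the grandchild runs the long-lived function
    finally:
        os._exit(0)         # never fall back into the caller's code
```

Inside the real function wrapper, the same forking/setsid steps would be handled by entering the DaemonContext before running the task.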