Avoid killing children when parent process is killed


I use the multiprocessing library in a Flask-based web application to start long-running processes. The function that does this is the following:

from multiprocessing import Process

def execute(self, process_id):
    self.__process_id = process_id
    process_dir = self.__dependencies["process_dir"]
    self.fit_dependencies()
    # Spawn the long-running task in a separate process
    process = Process(
        target=self.function_wrapper,
        name=process_id,
        args=(self.__parameters, self.__config, process_dir),
    )
    process.start()

When I want to deploy code to this web application, I restart a service that restarts gunicorn, which is served behind nginx. My problem is that this restart kills all child processes started by the application, as if a SIGINT signal had been sent to each of them. How can I avoid that?

EDIT: After reading this post, it appears that this behavior is normal. The answer suggests using the subprocess library instead. So let me reformulate my question: how should I proceed if I want to start long-running tasks (which are Python functions) from a Python script and either make sure they survive the parent process, OR make sure the parent process (which is a gunicorn instance) survives a deployment?

FINAL EDIT: I chose @noxdafox's answer since it is the most complete one. First, using a process queuing system is probably the best practice here. Then, as a workaround, I can still use multiprocessing, but with a python-daemon context (see here and here) inside the function wrapper. Last, @Rippr suggests using subprocess with a different process group, which is cleaner than forking with multiprocessing but requires standalone functions to launch (in my case I start specific functions from imported libraries).
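For reference, @Rippr's subprocess approach can be sketched roughly as follows. It assumes the task lives in a standalone script (not shown here; the sleep command below is a stand-in). Passing start_new_session=True makes the child call setsid(), so it gets its own session and process group, and group-wide signals delivered on a gunicorn restart no longer reach it (POSIX only):

```python
import os
import subprocess
import sys

def launch_detached(argv):
    # start_new_session=True detaches the child into its own session and
    # process group, so signals sent to the parent's group don't reach it.
    return subprocess.Popen(
        argv,
        start_new_session=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

# Placeholder for a long-running task: a short sleep in a child interpreter
proc = launch_detached([sys.executable, "-c", "import time; time.sleep(2)"])

# The child now belongs to a different process group than the parent
in_own_group = os.getpgid(proc.pid) != os.getpgid(os.getpid())

proc.terminate()  # clean up the demo child
proc.wait()
```

The trade-off, as noted above, is that the task must be launchable as its own entry point rather than as an in-process function.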

Sep 7, 2018 in Python by bug_seeker
• 15,350 points

1 answer to this question.


I would recommend against your design, as it is quite error-prone. A better solution would decouple the workers from the server using some sort of queuing system (RabbitMQ, Celery, Redis, ...).
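A minimal sketch of that queuing approach, assuming Celery with a Redis broker at redis://localhost:6379/0 (both the broker URL and the task names are assumptions, not part of the question). The web app only enqueues work; a separate worker process, managed independently of gunicorn, executes the task and therefore survives web deployments:

```python
# tasks.py -- hypothetical task module; requires `pip install celery redis`
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def long_running_task(parameters, config, process_dir):
    ...  # the body of function_wrapper would go here

# In the Flask view, instead of Process(...).start():
#     long_running_task.delay(parameters, config, process_dir)
```

The worker is then run as its own service (e.g. `celery -A tasks worker`), which a gunicorn restart does not touch. This is a configuration-dependent sketch, not runnable without a broker.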

Nevertheless, here are a couple of "hacks" you could try out.

  1. Turn your child processes into UNIX daemons. The python-daemon module could be a starting point.
  2. Instruct your child processes to ignore the SIGINT signal. The service orchestrator might work around that by issuing SIGTERM or SIGKILL if the children refuse to die, so you might need to disable that feature.

    To do so, just add the following lines at the beginning of the function_wrapper function:

        import signal
        signal.signal(signal.SIGINT, signal.SIG_IGN)
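As a sketch of hack 2, here is a minimal demonstration that a child which installs SIG_IGN for SIGINT survives an interrupt aimed at it (a sleep stands in for the real work; this assumes a POSIX system with the fork start method):

```python
import os
import signal
import time
from multiprocessing import Process

def function_wrapper():
    # First line of the child, as suggested above: ignore SIGINT
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    time.sleep(5)  # placeholder for the real long-running work

p = Process(target=function_wrapper)
p.start()
time.sleep(0.5)                # give the child time to install its handler
os.kill(p.pid, signal.SIGINT)  # the signal a restart would deliver
time.sleep(0.5)
survived = p.is_alive()        # the child ignored the interrupt

p.terminate()                  # clean up: SIGTERM is not ignored
p.join()
```

Note that this only shields the child from SIGINT; a supervisor that escalates to SIGTERM or SIGKILL will still end it, which is why the queuing approach above is the more robust option.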
answered Sep 7, 2018 by Priyaj
• 56,900 points
