[Django]-Error: can't start new thread

46👍

The “can’t start new thread” error is almost certainly due to the fact that you already have too many threads running within your Python process, and a resource limit of some kind is causing the request to create a new thread to be refused.

You should probably look at the number of threads you’re creating; the maximum number you will be able to create is determined by your environment, but it should be on the order of hundreds at least.

It would probably be a good idea to re-think your architecture here; seeing as this is running asynchronously anyhow, perhaps you could use a pool of threads to fetch resources from another site instead of always starting up a thread for every request.
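For instance, here is a minimal sketch of that idea using Python 3’s concurrent.futures and http.client (the question’s httplib is named http.client in Python 3); the fetch function, host, and paths are hypothetical placeholders, not the asker’s code:

import concurrent.futures
import http.client

# Hypothetical worker: one request per call, with its own connection and timeout.
def fetch(host, path):
    conn = http.client.HTTPSConnection(host, timeout=10)
    try:
        conn.request("GET", path)
        return conn.getresponse().read()
    finally:
        conn.close()

# A fixed-size pool caps the number of live threads no matter how many requests arrive.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    pages = list(pool.map(fetch, ["example.com"] * 10, ["/item/%d" % i for i in range(10)]))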

Another improvement to consider is your use of Thread.join and Thread.stop; this would probably be better accomplished by providing a timeout value to the constructor of HTTPSConnection.

13👍

You are starting more threads than can be handled by your system. There is a limit to the number of threads that can be active for one process.

Your application is starting threads faster than they run to completion. If you need to start many threads, you need to do it in a more controlled manner; I would suggest using a thread pool.
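One way to do that, sketched here with a fixed set of worker threads draining a shared queue (handle_request is a hypothetical placeholder for your own per-request work):

import queue
import threading

NUM_WORKERS = 20          # keep this well below the OS thread limit
tasks = queue.Queue()

def handle_request(item):
    pass                  # hypothetical: replace with your per-request work

def worker():
    # Each worker thread loops forever, pulling work items off the queue.
    while True:
        item = tasks.get()
        try:
            handle_request(item)
        finally:
            tasks.task_done()

# Only NUM_WORKERS threads are ever created, no matter how many tasks arrive.
for _ in range(NUM_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

for i in range(1000):
    tasks.put(i)
tasks.join()              # wait until every queued task has been processed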

12👍

I ran into a similar situation, but my process needed a lot of threads running to take care of a lot of connections.

I counted the number of threads with the command:

ps -fLu user | wc -l

It displayed 4098.

I switched to the user and looked at the system limits:

sudo -u myuser -s /bin/bash

ulimit -u

Got 4096 as the response.
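The same limit can also be read from Python on Linux via the standard resource module; a small sketch:

import resource

# RLIMIT_NPROC is the limit that `ulimit -u` reports: the maximum number of
# processes/threads the current user may have.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("soft limit:", soft, "hard limit:", hard)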

So, I edited /etc/security/limits.d/30-myuser.conf and added the lines:

myuser hard nproc 16384

myuser soft nproc 16384

Restarted the service and now it’s running with 7017 threads.

PS: I have a 32-core server and I’m handling 18k simultaneous connections with this configuration.

6👍

I think the best way in your case is to set a socket timeout instead of spawning a thread:

h = httplib.HTTPSConnection(self.config['server'], 
                            timeout=self.config['timeout'])

You can also set a global default timeout with the socket.setdefaulttimeout() function.
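For example, a small sketch (the 10-second value is arbitrary):

import socket

# Every socket created afterwards without an explicit timeout (including the
# ones opened by httplib and urllib2) will use this 10-second default.
socket.setdefaulttimeout(10)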

Update: See the answers to the question Is there any way to kill a Thread in Python? (there are several quite informative ones) to understand why: Thread.__stop() doesn’t terminate the thread, but rather sets an internal flag so that it is considered already stopped.

5👍

I completely rewrote the code from httplib to pycurl.

import pycurl
import StringIO

c = pycurl.Curl()
c.setopt(pycurl.FOLLOWLOCATION, 1)
c.setopt(pycurl.MAXREDIRS, 5)
c.setopt(pycurl.CONNECTTIMEOUT, CONNECTION_TIMEOUT)
c.setopt(pycurl.TIMEOUT, COOPERATION_TIMEOUT)
c.setopt(pycurl.NOSIGNAL, 1)
c.setopt(pycurl.POST, 1)
c.setopt(pycurl.SSL_VERIFYHOST, 0)
c.setopt(pycurl.SSL_VERIFYPEER, 0)
c.setopt(pycurl.URL, "https://" + server + path)
c.setopt(pycurl.POSTFIELDS, sended_data)

# Collect the response body in an in-memory buffer.
b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)

c.perform()

something like that.

And I’m testing it now. Thanks to all of you for the help.

👤Oduvan

4👍

If you are trying to set a timeout, why don’t you use urllib2?
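A minimal sketch of that suggestion (Python 2, since urllib2 only exists there; the url, data, and timeout values are placeholders):

import urllib2

url = "https://example.com/path"   # placeholder
data = "key=value"                 # placeholder POST body

try:
    # urllib2.urlopen accepts a timeout in seconds (since Python 2.6).
    response = urllib2.urlopen(url, data, timeout=10)
    body = response.read()
except urllib2.URLError as e:
    # Connection failures and timeouts surface here.
    print(e)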

👤piyer

4👍

I found this question because pip failed to install packages while inside a Docker container. A related issue on the pip repo suggests this is a poorly worded exception coming from rich, thrown when the system reaches the maximum number of threads for some reason. The following fixes are given:

  • Upgrading Docker to a version > 20.10.7
  • Running pip with -q to suppress the rich output

3👍

I’m running a Python script on my machine only to copy and convert some files from one format to another; I want to maximize the number of running threads to finish as quickly as possible.

Note: this is not a good workaround from an architecture perspective if you aren’t using it for a quick script on a specific machine.

In my case, I checked the maximum number of running threads my machine could handle before I got the error; it was 150.

I added this code before starting a new thread. It checks whether the limit of running threads has been reached; if so, the app waits until some of the running threads finish, then starts the new thread:

import threading, time

# Block until fewer than 150 threads are alive before starting the new one.
while threading.active_count() > 150:
    time.sleep(5)
mythread.start()

1👍

If you are using a ThreadPoolExecutor, the problem may be that your max_workers is higher than the number of threads allowed by your OS.

It seems that the executor keeps the information about the last executed threads in the process table, even if the threads are already done. This means that when your application has been running for a long time, it will eventually register as many threads in the process table as ThreadPoolExecutor.max_workers.

0👍

As far as I can tell, it’s not a Python problem. Your system somehow cannot create another thread (I had the same problem and couldn’t even start htop in another CLI via ssh).

The answer from Fernando Ulisses dos Santos is really good. I just want to add that there are other tools limiting the number of processes and memory usage "from the outside"; this is pretty common for virtual servers. A starting point is your vendor’s interface, or you might have luck finding some information in files like
/proc/user_beancounters

👤Uschi
