How to time out, kill, and retry a Python function using threading (Windows)

I am making API calls to the Alpaca brokerage and sometimes I never receive a response. I would like to wait up to a minute for the response and then cancel and retry. Something like this:

import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):

        self.working = True

        # watchdog thread that flips self.working to False after `timeout` seconds
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,), daemon=True)
        timeout_work.start()

        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost unused
            if not self.working:  # the watchdog flipped the flag, so stop
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # watchdog: sleep for the given time, then, if the main loop still
        # reports itself as working, flip the flag so it stops
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # state change guarded by the lock
        with self.lock_state:
            self.working = state

lw = LongFunctionInside()
lw.long_function(10)

does not work, because the API call is not something short like a print that can be followed by an if not self.working: check. I have spent hours trying to find and adapt solutions and I am lost. It seems like a simple problem: two parallel threads, one keeping track of time, killing the other when the time is up, and then retrying the whole thing. I just don't know how to do this. I have seen solutions for UNIX, but I am on Windows. Any help appreciated. Thanks!

I wonder if you’re hit by the fact that Python’s threading module can’t run Python code in parallel? So if the task actually stays busy inside Python (not, e.g., waiting for I/O), the canceling code won’t get a chance to run. In that case you need multiple processes, e.g. with the multiprocessing module or a ProcessPoolExecutor (that’s what I’d prefer, possibly with asyncio managing the timeout).

Thanks for the response! Could you briefly demonstrate in code how that would look please?

Something like this:

from multiprocessing import Process
from time import sleep

def keep_sleeping():
    while True:
        sleep(1)
        print('still sleeping')

if __name__ == '__main__':
    p = Process(target=keep_sleeping)
    p.start()
    sleep(3)
    print('terminating process')
    p.terminate()
    p.join()

Forget about the executor though, you can’t easily cancel a task once it’s started running. :sweat_smile:
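
Just to illustrate that last point, here is a minimal sketch (not tied to your code, and with a made-up slow_task function) of what happens if you try to cancel a future whose task has already started:

from concurrent.futures import ProcessPoolExecutor
import time

def slow_task():
    time.sleep(10)

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        future = executor.submit(slow_task)
        time.sleep(1)            # give the task a moment to start running
        print(future.cancel())   # prints False: a running task can't be cancelled
    # the with-block still waits for slow_task to finish before exiting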

Thanks so much for helping me! When I run this code, I only get “terminating process” printed. Why is “still sleeping” not printing? I am looking for something along these lines, but as actual working code:

def apicall():
    return api.get_data("AAPL")

def getdata():
    data = None
    gotdata = False
    while not gotdata:
        # start data = apicall()
        # keep track of time while apicall() is running; this is the hard part
        # when 20 seconds have passed:
        if data is not None:
            gotdata = True
            return data
        else:
            print("Retrying API call.")
            # kill the call that has not returned anything and is still waiting for the API response

getdata()

Thanks!

That’s really odd! I can’t try on Windows specifically, but according to the docs all the methods I used in the example work on all supported platforms. I can’t really imagine the process would take 2 seconds or more to start even on a slow system. :thinking:

To transfer your code to this pattern, you’d use apicall instead of keep_sleeping in my example, and add a pipe or queue to transfer the result (see “Exchanging objects between processes” in the multiprocessing docs).
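
Roughly, a sketch of how that could look, assuming api.get_data("AAPL") is your actual Alpaca call and that api is created at module level (the child process re-imports the module on Windows, so it has to be reachable there); the 20-second timeout and the getdata name just mirror your pseudocode:

from multiprocessing import Process, Queue
from queue import Empty

def apicall(result_queue):
    # runs in the child process: make the (possibly hanging) API call
    # and send the result back through the queue
    result_queue.put(api.get_data("AAPL"))  # assumes `api` is set up at module level

def getdata(timeout=20):
    while True:
        q = Queue()
        p = Process(target=apicall, args=(q,))
        p.start()
        try:
            return q.get(timeout=timeout)  # wait up to `timeout` seconds for a result
        except Empty:
            print("Retrying API call.")    # no result in time: kill the child and retry
        finally:
            p.terminate()
            p.join()

if __name__ == '__main__':
    data = getdata()

The if __name__ == '__main__': guard matters on Windows, since multiprocessing has to re-import the module to start the child process.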