Python requests / urllib3 / requests with an HTTPAdapter: how do I disable retries completely?

No matter what I do, I cannot get retries to stop. I've tried the HTTPAdapter approach explained here: https://scrapeops.io/python-web-scraping-playbook/python-requests-retry-failed-requests/#retry-failed-requests-using-sessions--httpadapter

I've tried urllib3 using retries=False as explained here: https://urllib3.readthedocs.io/en/stable/user-guide.html#retrying-requests

No matter what, when I simulate a bad connection to my URL, I want the request to fail on the first attempt after its timeout (i.e., no retries). Instead, judging by the timing (3x the timeout value), my program still appears to try three times.

Does anyone know what I am misunderstanding, or is there some deeper, more wired-in default at work here?

I tried it this way:

import requests

from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry  # requests.packages.urllib3 is a deprecated alias

http = requests.Session()
retries = Retry(total=0, status_forcelist=[502, 503, 504])
adapter = HTTPAdapter(max_retries=retries)
http.mount("http://", adapter)
http.mount("https://", adapter)  # also mount for https, or the adapter never applies to https URLs
r = http.get(url, timeout=mytimeout)
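
One diagnostic that should settle whether urllib3's Retry machinery is actually firing (a minimal sketch, nothing here is specific to my setup): urllib3 logs a warning for every retry it performs, so turning on debug logging should make real retries visible.

import logging

# urllib3 emits a "Retrying (Retry(total=..., ...)) after connection broken"
# warning each time its Retry machinery fires, so if nothing like that shows
# up in the debug output, the extra attempts are happening somewhere else.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)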

I've tried it this way:

http = urllib3.PoolManager(retries=False)
r = http.request('GET', url, timeout=mytimeout)

and this variation at the request level:

http = urllib3.PoolManager()
r = http.request('GET', url, retries=False, timeout=mytimeout)
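
For what it's worth, here is a minimal sketch of how the elapsed time can be measured (the non-routable 10.255.255.1 address is just an example way to simulate a bad connection; my real URL differs):

import time
import urllib3

url = "http://10.255.255.1/"  # example non-routable address to simulate a bad connection

http = urllib3.PoolManager()
start = time.monotonic()
try:
    # retries=False should mean a single attempt, bounded by the timeout
    r = http.request('GET', url, retries=False, timeout=2.0)
except urllib3.exceptions.HTTPError as exc:
    print(f"failed after {time.monotonic() - start:.1f}s: {exc}")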

Still, with any of these configurations and, say, a 2-second timeout, the request finally fails after 6 seconds. If I use a 3-second timeout I get 12 seconds, etc., so it appears a retry is still happening.
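
One possibility I have not been able to rule out (an assumption on my part, not something I have confirmed): if the hostname resolves to multiple IP addresses, the socket layer tries each address in turn, each bounded by the connect timeout, and that happens below urllib3's Retry logic entirely. A quick check of how many addresses are in play (example.com is a placeholder for my real host):

import socket

# If this returns N address tuples, a single "attempt" can take up to
# N * connect_timeout even with retries disabled, because each address
# is tried in turn by socket.create_connection().
addrs = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
print(len(addrs), "addresses:", [a[4][0] for a in addrs])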
