I am using a combination of the tweepy library and requests library in python to scrape the users who like certain tweets and their locations (for those who disclose it publicly). To avoid rate limits, I've been setting wait_on_rate_limit=True for my tweepy API and that has been working.
However, to get the location of users I have to use the requests library directly, and those requests often exceed the rate limit in the middle of scraping the likers of a post, so the API returns an error before my function can return a value. Here is the code:
    import requests

    def get_user_location(id):
        BEARER_TOKEN = "my_bearer_token"
        endpoint = f'https://api.twitter.com/2/users/{id}'
        headers = {'authorization': f'Bearer {BEARER_TOKEN}'}
        params = {
            'user.fields': 'location',
        }
        response = requests.get(endpoint,
                                params=params,
                                headers=headers)  # send the request
        # parse the body once and return the location field if it's present
        data = response.json()["data"]
        return data.get("location")
I've looked through the documentation and couldn't find any parameter I can pass into my requests.get call that would make my program pause until the rate limit cooldown is over. Does anyone know how I can make my application wait until the rate limit resets so it doesn't return a rate limit error?
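For reference, here's the kind of manual retry loop I've considered writing myself, but I'd prefer a built-in option if one exists. This is just a sketch: it assumes the response carries Twitter's `x-rate-limit-reset` header (epoch seconds when the window resets) and falls back to a fixed 60-second wait when that header is missing. The function names are my own, not part of requests.

```python
import time
import requests

def seconds_until_reset(headers, now=None):
    """Compute how long to sleep from the x-rate-limit-reset header
    (epoch seconds); fall back to 60s if the header is absent."""
    now = time.time() if now is None else now
    reset = headers.get("x-rate-limit-reset")
    if reset is None:
        return 60.0
    # never sleep a negative amount; add a 1s safety margin
    return max(float(reset) - now, 0.0) + 1.0

def get_with_rate_limit_retry(url, headers=None, params=None, max_retries=3):
    """GET the URL; on HTTP 429, sleep until the window resets and retry."""
    for _ in range(max_retries + 1):
        response = requests.get(url, headers=headers, params=params)
        if response.status_code != 429:  # 429 = Too Many Requests
            return response
        time.sleep(seconds_until_reset(response.headers))
    return response  # give up after max_retries, return the last response
```

My get_user_location function would then call get_with_rate_limit_retry in place of requests.get, but I'm hoping there's something cleaner along the lines of tweepy's wait_on_rate_limit.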