Scraping a CP website with requests: timeout / max retries errors when rotating proxies


I am trying to scrape a CP (competitive programming) website, but because of the number of requests I made, the site blocked my IP. To work around this I am building a rotating proxy: I collected some free proxies and am checking whether they actually work, but unfortunately none of them do. Every request fails with a timeout error or "max retries exceeded". Am I missing something, and if so, how do I fix it? Thanks in advance.

here is my code snippet in python:

import requests
from itertools import cycle

# Define a list of proxy IP addresses (Replace with actual proxy IPs)
proxy_list = [
    '103.179.182.37:8080',
    '191.240.153.144:8080',
    '45.81.146.7:8080',
    '36.66.171.243:8080',
    '177.207.208.35:8080',
    '85.117.60.162:8080',
    '183.89.42.196:8080',
]

# Create an iterator to cycle through the proxy list
proxy_pool = cycle(proxy_list)

# Define the URL you want to scrape
url = 'https://codeforces.com/contest/1879/submission/225031392'

# Set the number of requests you want to make
num_requests = 100

for _ in range(num_requests):
    # Get the next proxy from the pool
    proxy = next(proxy_pool)

    # Define proxy settings for the request.
    # requests expects full proxy URLs including a scheme prefix;
    # a bare "host:port" string can cause connection errors.
    proxy_settings = {
        "http": f"http://{proxy}",
        "https": f"http://{proxy}",
    }

    try:
        # Send a GET request using the proxy
        response = requests.get(url, proxies=proxy_settings, timeout=10)

        # Check if the request was successful
        if response.status_code == 200:
            print(response.text)
        else:
            print(f"Request failed with status code: {response.status_code}")

    except requests.exceptions.RequestException as e:
        # Covers timeouts, connection errors, and max-retries failures
        print(f"Error: {e}")
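Free proxy lists go stale quickly, so it can help to filter out dead proxies before cycling through them. A minimal sketch of such a pre-check, assuming `https://httpbin.org/ip` as an arbitrary test endpoint (any stable URL would do):

```python
import requests

def check_proxy(proxy: str, timeout: int = 5) -> bool:
    """Return True if the proxy can complete a simple GET within the timeout."""
    proxies = {
        "http": f"http://{proxy}",
        "https": f"http://{proxy}",
    }
    try:
        # httpbin.org/ip echoes the IP the request arrived from,
        # so a 200 here means the proxy actually relayed the request
        r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=timeout)
        return r.status_code == 200
    except requests.exceptions.RequestException:
        return False

proxy_list = [
    '103.179.182.37:8080',
    '191.240.153.144:8080',
]

# Keep only proxies that pass the check before building the cycle
live_proxies = [p for p in proxy_list if check_proxy(p)]
print(f"{len(live_proxies)} of {len(proxy_list)} proxies are alive")
```

If `live_proxies` comes back empty, the problem is the proxies themselves rather than your scraping code, which would explain the timeouts you are seeing.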
