Scrapy: (400 Bad Request) HTTP status code is not handled or not allowed


I am new to Python and Scrapy. I have been trying to scrape a website that uses URL fragments. I am making a POST request to get the response, but unfortunately it is not returning the result.

    def start_requests(self):
        try:
            form = {
                'menu': '6',
                'browseby': '8',
                'sortby': '2',
                'media': '3',
                'ce_id': '1428',
                'ot_id': '19999',
                'marker': '354',
                'getpage': '1',
            }

            head = {
                'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
                # 'Content-Length': '78',
                # 'Host': 'onlinelibrary.ectrims-congress.eu',
                # 'Accept-Encoding': 'gzip, deflate, br',
                # 'Connection': 'keep-alive',
                'XMLHttpRequest': 'XMLHttpRequest',
            }

            urls = [
                'https://onlinelibrary.ectrims-congress.eu/ectrims/listing/conferences',
            ]

            request_body = urllib.parse.urlencode(form)
            print(request_body)
            print(type(request_body))

            for url in urls:
                req = Request(url=url, body=request_body, method='POST', headers=head, callback=self.parse)
                req.headers['Cookie'] = 'js_enabled=true; is_cookie_active=true;'

                yield req

        except Exception as e:
            print('the error is {}'.format(e))

I keep getting this error:

[scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <POST https://onlinelibrary.ectrims-congress.eu/ectrims/listing/conferences> (failed 4 times): 400 Bad Request

When I tried the same request in Postman, I got the expected output. Can somebody help me with this?


If you want to use Request to send a POST request, you will have to use json.dumps() to convert the dictionary to a string.
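To see the difference between the two body formats, here is a stdlib-only comparison using the same form dictionary as the question (json.dumps() produces a JSON string, while urllib.parse.urlencode() produces the classic key=value&... form body):

```python
import json
import urllib.parse

form = {'menu': '6', 'browseby': '8', 'sortby': '2', 'media': '3',
        'ce_id': '1428', 'ot_id': '19999', 'marker': '354', 'getpage': '1'}

# JSON body: a '{"key": "value"}' string
json_body = json.dumps(form)
print(json_body)  # {"menu": "6", "browseby": "8", ...}

# Form-encoded body: a 'key=value&key=value' string
form_body = urllib.parse.urlencode(form)
print(form_body)  # menu=6&browseby=8&sortby=2&...
```

Which one the server accepts depends on the endpoint; a mismatch between the body format and the Content-Type header is a common cause of 400 responses.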

Here is a working solution:

import json
import scrapy

class EventsSpider(scrapy.Spider):
    name = 'events'

    def start_requests(self):
        form = {'menu': '6', 'browseby': '8', 'sortby': '2', 'media': '3', 'ce_id': '1428', 'ot_id': '19999', 'marker': '354', 'getpage': '1'}

        head = {
            'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
            'XMLHttpRequest': 'XMLHttpRequest',
        }

        url = 'https://onlinelibrary.ectrims-congress.eu/ectrims/listing/conferences'
        request_body = json.dumps(form)
        req = scrapy.Request(url=url, body=request_body, method='POST', headers=head, callback=self.parse)
        yield req

    def parse(self, response):
        print(response.json().keys())

Output:

dict_keys(['html', 'type', 'debug', 'total_pages', 'current_page', 'total_items', 'login'])

Bonus tip: if you can get the request working in Postman, click the Code button (the </> icon in the right-hand side panel). If you select Python, Postman will generate equivalent code using the requests library.
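As a rough sketch of what that generated requests code looks like for this request (the header set here is an assumption based on the question; note that the standard AJAX header name is X-Requested-With, whereas the question used the value 'XMLHttpRequest' as the header name). Building the request with .prepare() lets you inspect the body without actually sending anything:

```python
import requests

url = 'https://onlinelibrary.ectrims-congress.eu/ectrims/listing/conferences'
form = {'menu': '6', 'browseby': '8', 'sortby': '2', 'media': '3',
        'ce_id': '1428', 'ot_id': '19999', 'marker': '354', 'getpage': '1'}
head = {
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    # Assumption: standard AJAX header; the question used 'XMLHttpRequest' as the name.
    'X-Requested-With': 'XMLHttpRequest',
}

# Passing a dict as `data` makes requests urlencode the body automatically.
# .prepare() builds the request without sending it, so we can inspect it offline.
prepared = requests.Request('POST', url, data=form, headers=head).prepare()
print(prepared.body)  # menu=6&browseby=8&...

# To actually send it:
# response = requests.post(url, data=form, headers=head)
```

Comparing the prepared body and headers against what Postman sends is a quick way to spot the difference that causes the 400.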