I use Crawlera as an IP rotating service to crawl a specific website which bans my IP quickly, but I have this problem with only one website out of a dozen.
As it is possible to register multiple middlewares for a Scrapy project, I wanted to know if it was possible to define the downloader middleware to use PER REQUEST.
That way I could use my Crawlera quota only for the problematic website and not for all my requests.
One possible solution is to use the `custom_settings` spider attribute (and to remove `CrawleraMiddleware` from the project-wide settings), assuming that you have one spider per website. In that case `CrawleraMiddleware` will be used only in the spiders that define it in their `custom_settings` attribute, as in the sketch below.
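A minimal sketch of this setup, assuming the `scrapy-crawlera` plugin is installed; the spider names, start URLs, and the API key placeholder are hypothetical, and the middleware priority of 610 follows the plugin's documented example:

```python
import scrapy


class ProblematicSiteSpider(scrapy.Spider):
    """Spider for the site that bans IPs quickly; its requests go through Crawlera."""
    name = "problematic_site"
    start_urls = ["https://example.com"]  # placeholder target

    # Per-spider settings: Crawlera is enabled only for this spider.
    custom_settings = {
        "DOWNLOADER_MIDDLEWARES": {
            "scrapy_crawlera.CrawleraMiddleware": 610,
        },
        "CRAWLERA_ENABLED": True,
        "CRAWLERA_APIKEY": "<your Crawlera API key>",
    }

    def parse(self, response):
        # Parse as usual; every request from this spider uses your Crawlera quota.
        yield {"url": response.url, "title": response.css("title::text").get()}


class RegularSiteSpider(scrapy.Spider):
    """Spider for any other site; no custom_settings, so Crawlera is never used."""
    name = "regular_site"
    start_urls = ["https://example.org"]  # placeholder target

    def parse(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}
```

Since `custom_settings` overrides the project settings only for the spider that declares it, the other spiders keep the default downloader middleware stack and never touch your Crawlera quota.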