I have been trying to automate logging in to Stack Overflow in order to learn web scraping. First I tried Scrapy, with no luck, using the following code.

import scrapy
from scrapy.utils.response import open_in_browser

class QuoteSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['https://stackoverflow.com/users/login']


    def parse(self, response):
        token = response.xpath('.//*[@name="fkey"]/@value').extract_first()
        yield scrapy.FormRequest(
            'https://stackoverflow.com/users/login?ssrc=head&returnurl=https://stackoverflow.com/',
            formdata={
                'fkey': token,
                'ssrc': 'head',
                'username': '[email protected]',
                'password': 'example123',
                'oauth_version': '',
                'oauth_server': ''
            },
            callback=self.startscraper)
    
    def startscraper(self, response):
        yield scrapy.Request('https://stackoverflow.com/users/12454709/gopal-kisi', callback=self.verifylogin)

    def verifylogin(self, response):
        open_in_browser(response)

So I tried Selenium next, and successfully logged in to Stack Overflow with the following code.

from selenium import webdriver
import pandas as pd
import time

driver = webdriver.Chrome("./chromedriver.exe")
driver.get("https://stackoverflow.com/users/login?ssrc=head&returnurl=https%3a%2f%2fstackoverflow.com%2f")
time.sleep(2)
username = driver.find_element_by_xpath("//*[@id='email']")
username.clear()
username.send_keys("[email protected]")
time.sleep(5)
password = driver.find_element_by_xpath("//*[@id='password']")
password.clear()
password.send_keys("example123")
time.sleep(0.5)
driver.find_element_by_xpath("//*[@id='submit-button']").click()
driver.close()

I know Selenium and Scrapy are two different tools. For scraping, though, I find Scrapy much easier for processing and saving data than Selenium, and it also supports headless crawling, which is just what I need.

So, is there any way to solve the login problem in Scrapy? Or can I combine Selenium with Scrapy, so that I could log in with Selenium and the remaining work could be done by Scrapy?


BEST ANSWER
  • It seems the URL https://stackoverflow.com/users/login is disallowed by robots.txt, so I'm not sure automating this is permitted by Stack Overflow.
  • You don't need Selenium to log in; you can do it with Scrapy alone. I based this on the login example in the official documentation. You can use FormRequest.from_response to populate most of the hidden fields needed to log in, and just add the correct email and password. The below works for me in scrapy shell:
from scrapy import FormRequest

url = "https://stackoverflow.com/users/login"
fetch(url)
req = FormRequest.from_response(
    response,
    formid='login-form',
    formdata={'email': '[email protected]',
              'password': 'testpw'},
    clickdata={'id': 'submit-button'},
)
fetch(req)

In addition to the approach from Wim Hermans, you can also POST to https://stackoverflow.com/users/login with the following parameters:

  • email: your email
  • password: your password
  • fkey: the hidden token embedded in the login page

Here's an example:

import requests
import getpass
from pyquery import PyQuery

# Fetch the fkey
login_page = requests.get('https://stackoverflow.com/users/login').text
pq = PyQuery(login_page)
fkey = pq('input[name="fkey"]').val()

# Prompt for email and password
email = input("Email: ")
password = getpass.getpass()

# Login
requests.post(
    'https://stackoverflow.com/users/login',
    data={
        'email': email,
        'password': password,
        'fkey': fkey,
    })