Web scraping script for Anki


I'm very new to programming. I'm learning Python to speed up my language learning with Anki, and I wanted to create a web scraping script to create cards quicker. Here is my code. (It's not the final product; I eventually want to learn how to send the output to a CSV file so I can then import it into Anki.)

from bs4 import BeautifulSoup
import requests

#get data from user
word = input("Type word ")

#get page
page = requests.get("https://fr.wiktionary.org/wiki/", params=word)

#make bs4 object
soup = BeautifulSoup(page.content, 'html.parser')

#find data from soup
IPA=soup.find(class_='API')
partofspeech=soup.find(class_='ligne-de-forme')

#open file
f=open("french.txt", "a")

#print text
print (IPA.text)
print (partofspeech.text)

#write to file
f.write(IPA.text)
f.write(partofspeech.text)

#close file
f.close()

It only returns the "word of the day" from Wiktionnaire, not the entry for the user's input. Any ideas?

1 Answer

You can take the following approach:

(1) Read something in French and note the words or sentences you want to learn on paper.

(2) Write these words/sentences down in a {text, json, markdown, ...} file.

(3) Read these words with Python using file I/O.

(4) Use anki-connect, which runs a web server to interface with your local Anki instance.

(5) Write a Python script to HTTP POST your input word and scrape the translation from, for example, deepl.com.

(6) Combine these tools to add a learning session to Anki in one command.

(7) Happy learning!
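Steps (2) and (3) can be sketched as a small helper that reads one word or sentence per line from a plain-text file (the filename `words.txt` below is an example, not a requirement):

```python
# A minimal sketch of steps (2)-(3): read the words/sentences to learn,
# one per line, from a plain-text file such as "words.txt".
def read_words(filepath):
    with open(filepath, "r", encoding="utf-8") as f:
        # drop surrounding whitespace and skip blank lines
        return [line.strip() for line in f if line.strip()]
```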

Some code

Anki-connect

# https://github.com/FooSoft/anki-connect
# https://github.com/FooSoft/anki-connect/blob/master/actions/decks.md

import json
import urllib.request

def request(action, **params):
    return {'action': action, 'params': params, 'version': 6}

def invoke(action, **params):
    requestJson = json.dumps(request(action, **params)).encode('utf-8')
    response = json.load(urllib.request.urlopen(urllib.request.Request('http://localhost:8765', requestJson)))
    if len(response) != 2:
        raise Exception('response has an unexpected number of fields')
    if 'error' not in response:
        raise Exception('response is missing required error field')
    if 'result' not in response:
        raise Exception('response is missing required result field')
    if response['error'] is not None:
        raise Exception(response['error'])
    return response['result']

invoke('createDeck', deck='english-to-french')
result = invoke('deckNames')
print(f'got list of decks: {result}')

invoke('deleteDecks', decks=['english-to-french'], cardsToo=True)
result = invoke('deckNames')
print(f'got list of decks: {result}')
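Building on the `invoke()` helper above, a card could be added through anki-connect's `addNote` action (documented in the anki-connect README). The deck name, the "Basic" model, and its Front/Back fields are assumptions about your Anki setup:

```python
# Sketch: build the params for anki-connect's addNote action.
# Deck name, "Basic" model, and Front/Back fields are assumptions.
def make_note(deck, front, back):
    """Build the params expected by anki-connect's addNote action."""
    return {
        'note': {
            'deckName': deck,
            'modelName': 'Basic',
            'fields': {'Front': front, 'Back': back},
            'tags': ['auto-generated'],
        }
    }

# With Anki running and the invoke() helper above:
# invoke('addNote', **make_note('english-to-french', 'lascive', 'lascivious'))
```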

Web scraping with Scrapy

import scrapy


CODES = {
    'fr': 'french',
    'en': 'english'
}


URL_BASE = "https://www.linguee.com/%s-%s/translation/%s.html"

# these urls can come from another data file
# def get_data_from_file(filepath: str):
#     with open(filepath, 'r') as f:
#         lines = [line.strip() for line in f if line.strip()]
#
#     return [URL_BASE % (CODES['fr'], CODES['en'], line) for line in lines]
URLS = [
    URL_BASE % (CODES['fr'], CODES['en'], 'lascive')
]


class BlogSpider(scrapy.Spider):
    name = 'linguee_spider'

    start_urls = URLS

    def parse(self, response):
        for span in response.css('span.tag_lemma'):
            yield {'word': span.css('a.dictLink ::text').get()}

        for div in response.css('div.translation'):
            for span in div.css('span.tag_trans'):
                yield {'translation': span.css('a.dictLink ::text').get()}

Shell script, wrapping it all up

#!/bin/bash

# setup variables
DATE=$(date +"%Y-%m-%d-%H-%M")
SCRIPT_FILE="/path/to/folder/script.py"
OUTPUT_FILE="/path/to/folder/data/${DATE}.json"
echo "Running --- ${SCRIPT_FILE} --- at --- ${DATE} ---"

# activate virtualenv and run scrapy
source /path/to/folder/venv/bin/activate
scrapy runspider "${SCRIPT_FILE}" -o "${OUTPUT_FILE}"
echo "Saved results into --- ${OUTPUT_FILE} ---"

# reading data from scrapy output and creating an Anki card using anki-connect
python create_anki_card.py
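The `create_anki_card.py` the script runs is not shown in the answer; here is one hypothetical sketch of it, which reads the spider's JSON output (assuming items keyed `word` and `translation`, in the order the spider yields them) and pairs them up for anki-connect:

```python
import json

def load_pairs(filepath):
    """Pair each scraped word with the translations that follow it.

    Assumes the spider's output order: a {'word': ...} item followed
    by its {'translation': ...} items.
    """
    with open(filepath, "r", encoding="utf-8") as f:
        items = json.load(f)  # `scrapy runspider -o out.json` writes a JSON array
    pairs, current = [], None
    for item in items:
        if 'word' in item:
            current = item['word']
        elif 'translation' in item and current is not None:
            pairs.append((current, item['translation']))
    return pairs

# Each (word, translation) pair could then be sent to Anki via the
# anki-connect invoke() helper shown earlier, e.g. with its addNote action.
```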