from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
req = Request("https://www.indiegogo.com/individuals/23489031")
html_page = urlopen(req)
soup = BeautifulSoup(html_page, "lxml")
links = []
for link in soup.find_all('a'):
    links.append(link.get('href'))
print(links)
This code works if I use only one URL, but it does not work with multiple URLs. How do I do the same thing with multiple URLs?
Here is a small piece of code from a scraping task I did at some point; you can adapt it to what you want to achieve. I hope it helps. You can loop through url_list and call getlinks(url) for each entry.
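A minimal sketch of that idea, wrapping the same BeautifulSoup logic from the question in a function. The getlinks and url_list names come from the answer above; the function body and the example.com entry are illustrative placeholders you would replace with your own URLs:

from urllib.request import Request, urlopen

from bs4 import BeautifulSoup

def getlinks(url):
    # Fetch one page and return every href found in an <a> tag.
    req = Request(url)
    html_page = urlopen(req)
    soup = BeautifulSoup(html_page, "lxml")
    return [link.get('href') for link in soup.find_all('a')]

# Placeholder list: put whatever URLs you want to scrape here.
url_list = [
    "https://www.indiegogo.com/individuals/23489031",
    "https://example.com/another-page",
]

for url in url_list:
    print(getlinks(url))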