I'm testing Selenium for web scraping on a website, but I have a question:
The website contains multiple pages, and the information I need is always within an element with an ID. For example, on page one, I have IDs ranging from "card0" to "card50". However, this pattern repeats on page two, starting again from "card0" and going up to "card50".
I'm trying to locate these elements using find_element with By.XPATH, but I'm having trouble repeating this in a way that works correctly. Here's a snippet of the code:
element = driver.find_element(By.XPATH,"//*[text()[contains(.,id='card')]]")
Thank you all for the support.
Assuming your HTML looks something like this (a series of div elements with IDs card0 through card50), and you want to get all of these div elements using XPATH, you need to use driver.find_elements (with an "s"), since you are expecting multiple elements. You can also do this more easily using CSS_SELECTOR.
Across multiple pages, you would typically repeat the same lookup after each navigation, since the IDs start over from card0 on every page.
Note that you cannot store these elements in a variable and use them after the loop: once you have navigated to another page, accessing them raises a StaleElementReferenceException, because the WebElement references point at a DOM that no longer exists.