Can't get "WebDriver" element data if not "eye-visible" in browser using Selenium and Python


I'm doing some scraping with Selenium in Python. My problem is that after I find all the WebElements, I can't read their data (id, text, etc.) unless the element is actually VISIBLE in the browser window that Selenium opened.

What I mean is:

First image

Second image

As you can see from the first and second images, the first 4 "tables" are "visible" both to me and to the code. However, there are 2 more tables (5 & 6, Gettho lucky dip & Sue Specs) that are not "visible" until I drag the right scrollbar down.

Here's what I get when I try to read an element's info without "seeing it" in the page:

Third image

Manually scrolling the page to the bottom, and therefore making the element "visible" to the human eye (and apparently to the code too?), is the only way I can get the data from the WebElement I need:

Fourth image

What am I missing? Why can't Selenium do this in the background? Is there a way to solve this problem without scrolling up and down the page?

PS: the page could be any dog-race page on http://greyhoundbet.racingpost.com/. Just click City, Time, and then FORM.


Here's part of my code:

from selenium import webdriver

# I call this function with the URL and it returns the driver object
def open_main_page(url):
    chrome_path = r"c:\chromedriver.exe"
    driver = webdriver.Chrome(chrome_path)
    driver.get(url)

    # Wait for the landing page to load
    loading(driver, "//*[@id='showLandingLADB']/h4/p", 0)
    element = driver.find_element_by_xpath("//*[@id='showLandingLADB']/h4/p")
    element.click()

    # Wait for the second element to load, after the click
    loading(driver, "//*[@id='landingLADBStart']", 0)
    element = driver.find_element_by_xpath("//*[@id='landingLADBStart']")
    element.click()

    # Wait for the main page to load
    loading(driver, "//*[@id='whRadio']", 0)
    return driver

Now I have the browser "driver", which I can use to find the elements I want:

url = "http://greyhoundbet.racingpost.com/#card/race_id=1640848&r_date=2018-09-21&tab=form"
browser = open_main_page(url)

# Find the dog names
names = []
tags = browser.find_elements_by_xpath("//strong")

Now "tags" is a list of WebElements, as shown in the figures.

I'm pretty new to this area.


UPDATE: I've solved the problem with a workaround in code.

tags = driver.find_elements_by_tag_name("strong")

for tag in tags:
    driver.execute_script("arguments[0].scrollIntoView();", tag)
    print(tag.text)

This way the browser scrolls to each element's position and is then able to read its information.

However, I still have no idea why, on this page in particular, I can't read elements that are outside the visible browser area until I scroll down and literally see them.
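A possible explanation, and an alternative to scrolling (an assumption on my part, not tested against this site): a WebElement's .text follows the WebDriver "get element text" semantics and returns only text as *rendered* on screen, so hidden or not-yet-rendered nodes can yield an empty string. Reading the underlying DOM property directly is independent of visibility:

```python
# Sketch: read an element's raw DOM text instead of its rendered text.
# Works on any object exposing Selenium's get_attribute() method.
def dom_text(element):
    """Return the element's DOM textContent, even if it is not visible."""
    return element.get_attribute("textContent")

# Hypothetical usage with the `browser` driver from above:
# for tag in browser.find_elements_by_tag_name("strong"):
#     print(dom_text(tag))
```

If this works on the page, it would avoid the scrollIntoView loop entirely, at the cost of also picking up text that the site never displays.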
