How to restart the execution of a test case in Selenium IDE

I have a test case which stops and throws an "Element not found" error.
What I want is for Selenium IDE to refresh the page and run the script from the beginning.
Is there any way to make this happen in Selenium IDE? If yes, how?

Yes, of course. Wrap the work in a `while True` loop with a try/except/else structure. In the except block for NoSuchElementException, first call driver.refresh() and then continue, so the loop restarts from the beginning. You should have two except blocks: one for NoSuchElementException and another for all other exceptions; the second one should probably break the loop. In the else block, break the loop once everything succeeded. Here's a sample code:
import sys

import selenium.common.exceptions as SeleniumExceptions
from selenium import webdriver

driver = webdriver.Firefox()
while True:
    try:
        element = driver.find_element_by_id('some_id')
        # do whatever you want
    except SeleniumExceptions.NoSuchElementException:
        driver.refresh()
        continue
    except Exception:
        print(sys.exc_info())
        break
    else:
        break
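Stripped of the browser specifics, the retry pattern in that loop can be sketched as a small helper. Everything here is hypothetical naming, not a Selenium API; with Selenium, `recoverable` would be NoSuchElementException and `refresh` would be driver.refresh:

```python
def run_with_retry(action, refresh, recoverable, max_attempts=5):
    """Run `action`; on a recoverable error call `refresh` and start over.

    `recoverable` is the exception type that triggers a retry; any other
    exception propagates to the caller unchanged.
    """
    for _ in range(max_attempts):
        try:
            return action()
        except recoverable:
            refresh()
            continue
    raise RuntimeError("gave up after %d attempts" % max_attempts)
```

Bounding the number of attempts avoids looping forever when the element simply never appears.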
EDIT1:
Note that if you have to work with a specific frame or iframe, you should switch back into it with driver.switch_to.frame(driver.find_element_by_id('frame_id')) and continue your work; or, if you need to wait for the frame to load, you can use this code:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Passing a locator tuple lets the wait cover finding the frame element too;
# calling find_element_by_id up front would fail before the wait even starts
WebDriverWait(driver, timeout=10).until(
    EC.frame_to_be_available_and_switch_to_it((By.ID, 'frame_id'))
)

Related

There is an element, so why am I getting a NoSuchElementException?

problem:
Given the following code:
from selenium import webdriver
browser = webdriver.Chrome()
browser.get('https://navercomp.wisereport.co.kr/v2/company/c1010001.aspx?cmp_cd=004000')
# move to my goal
browser.find_element_by_link_text("재무분석").click()
browser.find_element_by_link_text("재무상태표").click()
# extract the data
elem = browser.find_element_by_xpath("//*[@id='faRVArcVR1a2']/table[2]/tbody/tr[2]/td[6]")
print(elem.text)
I wrote this code to extract finance data.
First I navigate to the page that has the data I want.
I copied the XPath using Chrome's copy-XPath feature.
But although the text is there, I get a NoSuchElementException.
Why does this happen?
try to fix:
At first I thought: is this happening because of a delay?
Although there is almost no delay on my computer, I tried to fix it anyway.
I added some imports and changed the 'elem' part:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Chrome()
browser.get('https://navercomp.wisereport.co.kr/v2/company/c1010001.aspx?cmp_cd=004000')
# move to my goal
browser.find_element_by_link_text("재무분석").click()
browser.find_element_by_link_text("재무상태표").click()
# extract the data
elem = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="faRVArcVR1a2"]/table[2]/tbody/tr[2]/td[6]')))
print(elem.text)
But as a result, I only get a TimeoutException..
I don't know why this problem happens. Help pls! Thank u..
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
browser = webdriver.Chrome()
browser.get('https://navercomp.wisereport.co.kr/v2/company/c1010001.aspx?cmp_cd=004000')
# move to my goal
browser.find_element_by_link_text("재무분석").click()
browser.find_element_by_link_text("재무상태표").click()
elementXpath = '//table[@summary="IFRS연결 연간 재무 정보를 제공합니다."][2]/tbody/tr[2]/td[6]'
# extract the data
WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH, elementXpath)))
# Wait for the table to load
time.sleep(1)
elem = browser.find_element(by=By.XPATH, value=elementXpath)
print(elem.text)
There were several problems:
The ID of the div which wraps the table ("faRVArcVR1a2") changes every time you load the page, so it is not a reliable way of finding the element. I changed the XPath so the table is found by its summary attribute instead.
Strictly speaking, WebDriverWait's until() does return the element that presence_of_element_located finds; the code above simply waits first and then fetches the element with find_element once it is known to be present.
Even after you have waited for the table to appear, you have to wait an additional second so that all cells of the table load. Otherwise you would get an empty string.
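Instead of an extra fixed sleep, another option is to poll until the cell's text becomes non-empty. Below is a browser-free sketch of that idea; `wait_for_text` is a hypothetical helper, and with Selenium you would pass something like `lambda: elem.text` as `get_text`:

```python
import time

def wait_for_text(get_text, timeout=10.0, poll=0.25):
    """Poll `get_text` until it returns a non-empty string, or time out.

    With Selenium, `get_text` would be a callable reading the cell's
    `.text`; empty/whitespace results mean the cell hasn't loaded yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        text = get_text()
        if text and text.strip():
            return text
        time.sleep(poll)
    raise TimeoutError("text never became non-empty")
```

This returns as soon as the data is there, rather than always paying the full one-second pause.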

Trying to find the correct xpath

I have written this code, see the following lines:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from selenium.webdriver.common.by import By
PATH = r"C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://www.flashscore.com/match/jBvNMej6/#match-summary")
print(driver.title)
driver.maximize_window() # For maximizing window
driver.implicitly_wait(10) # gives an implicit wait for 10 seconds
driver.find_element_by_id('onetrust-reject-all-handler').click()
time.sleep(2)
driver.find_element(By.CLASS_NAME,'previewShowMore.showMore').click()
main = driver.find_element(By.CLASS_NAME,'previewLine'[b[text()="Hot stat:"]]/text)
print(main.text)
time.sleep(2)
driver.close()
However, I get the following error.
main = driver.find_element(By.CLASS_NAME,'previewLine'[b[text()="Hot stat:"]]/text)
^
SyntaxError: invalid syntax
What can I do to avoid this?
thx! : )
Well, in this line
main = driver.find_element(By.CLASS_NAME,'previewLine'[b[text()="Hot stat:"]]/text)
You have made a great mix :)
Your locator is absolutely invalid.
Also, if you want to print the paragraph text without the "Hot streak" you will need to remove that string from the entire div (paragraph) text.
This should do what you are trying to achieve:
main = driver.find_element(By.XPATH,"//div[@class='previewLine' and ./b[text()='Hot streak']]").text
main = main.replace('Hot streak','')
print(main)
I'm not finding any text 'Hot stat:'. You'll have to attach the html code where you found that.
I assume that you want to retrieve the text of a specific previewLine?
main = driver.find_element(By.XPATH, '//div[@class="previewLine"]/b[contains(text(),"Hot streak")]/..')
print(main.text)

how to use time sleep to make selenium output consistent

This might be the stupidest question I've asked yet, but this is driving me nuts...
Basically I want to get all links from the profiles, but for some reason Selenium returns different numbers of links most of the time (sometimes all of them, sometimes only a tenth).
I experimented with time.sleep and I know it affects the output somehow, but I don't understand where the problem is (that's just my hypothesis; maybe it's wrong).
I have no other explanation for the inconsistent output. Since I do get all profile links from time to time, the program is clearly able to find all relevant profiles.
Here's what the output should be (for different GUI inputs):
input: anlagenbau, output: 3070
input: Fahrzeugbau, output: 4065
input: laserschneiden, output: 1311
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from urllib.request import urlopen
from datetime import date
from datetime import datetime
import easygui
import re
import time

# input window for the search term
suchbegriff = easygui.enterbox("Suchbegriff eingeben | Hinweis: suchbegriff sollte kein '/' enthalten")

# get date and time
now = datetime.now()
current_time = now.strftime("%H-%M-%S")
today = date.today()
date = today.strftime("%Y-%m-%d")

def get_profile_url(label_element):
    # get the url from a result element
    onclick = label_element.get_attribute("onclick")
    # some regex magic
    return re.search(r"(?<=open\(\')(.*?)(?=\')", onclick).group()

def load_more_results():
    # load more results if needed // use only on the search page!
    button_wrapper = wd.find_element_by_class_name("loadNextBtn")
    button_wrapper.find_element_by_tag_name("span").click()

#### Script starts here ####

# Set some Selenium Options
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
# Webdriver
wd = webdriver.Chrome(options=options)
# Load URL
wd.get("https://www.techpilot.de/zulieferer-suchen?" + str(suchbegriff))
# let's first wait for the iframe
iframe = WebDriverWait(wd, 5).until(
    EC.frame_to_be_available_and_switch_to_it("efficientSearchIframe")
)
# the result parent
result_pane = WebDriverWait(wd, 5).until(
    EC.presence_of_element_located((By.ID, "resultPane"))
)

# get all profile links as a list
time.sleep(5)
href_list = []
wait = WebDriverWait(wd, 15)
while True:
    try:
        # time.sleep(1)
        wd.execute_script("loadFollowing();")
        # time.sleep(1)
        try:
            wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".fancyCompLabel")))
        except TimeoutException:
            break
        # time.sleep(1)  # somehow influences which results are found
        result_elements = wd.find_elements_by_class_name("fancyCompLabel")
        # time.sleep(1)
        for element in result_elements:
            url = get_profile_url(element)
            href_list.append(url)
        # time.sleep(2)
        while True:
            try:
                element = wd.find_element_by_class_name('fancyNewProfile')
                wd.execute_script("""var element = arguments[0];element.parentNode.removeChild(element);""", element)
            except NoSuchElementException:
                break
    except NoSuchElementException:
        break

wd.close()  # was "wd.close" without parentheses, which never actually called close()
print("####links secured: " + str(len(href_list)))
Since you say that the sleep is affecting the number of results, it sounds like they're loading asynchronously and populating as they're loaded, instead of all at once.
The first question is whether you can ask the web site developers to change this, to only show them when they're all loaded at once.
Assuming you don't work for the same company as them, consider:
Is there something else on the page that shows up when they're all loaded? It could be a button or a status message, for instance. Can you wait for that item to appear, and then get the list?
How frequently do new items appear? You could poll for the number of results relatively infrequently, such as only every 2 or 3 seconds, and then consider the results all present when you get the same number of results twice in a row.
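The second suggestion, polling the count infrequently and treating the results as complete once you get the same number twice in a row, can be sketched as a small helper. This is a browser-free sketch, not a Selenium API; `get_count` stands in for something like `lambda: len(wd.find_elements_by_class_name("fancyCompLabel"))`:

```python
import time

def wait_for_stable_count(get_count, interval=2.0, timeout=60.0):
    """Poll `get_count` until it returns the same value twice in a row.

    Treats the result set as fully loaded once the count stops changing
    between two consecutive polls; raises TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout
    last = get_count()
    while time.monotonic() < deadline:
        time.sleep(interval)
        current = get_count()
        if current == last:
            return current
        last = current
    raise TimeoutError("result count never stabilized")
```

The interval should be longer than the typical gap between batches of results arriving, otherwise two polls can land inside the same pause and report a false "stable" count.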
The issue is that presence_of_all_elements_located doesn't wait for all elements matching the passed locator. It waits for the presence of at least one element matching the locator and then returns a list of the elements found on the page at that moment.
In Java we have
wait.until(ExpectedConditions.numberOfElementsToBeMoreThan(element, expectedElementsAmount));
and
wait.until(ExpectedConditions.numberOfElementsToBe(element, expectedElementsAmount));
With these methods you can wait for a predefined number of elements to appear.
Selenium with Python doesn't ship these conditions, so the only option is to build a custom wait that does the same thing.
If you are expecting some number of elements/links to be present on the page, such a method will make your test stable and avoid hardcoded sleeps.
UPD
I have found a solution for the methods mentioned above.
This seems to be the Python equivalent of wait.until(ExpectedConditions.numberOfElementsToBeMoreThan(element, expectedElementsAmount)):
myLength = 9
WebDriverWait(browser, 20).until(lambda browser: len(browser.find_elements_by_xpath("//img[@data-blabla]")) > int(myLength))
And this is the equivalent of wait.until(ExpectedConditions.numberOfElementsToBe(element, expectedElementsAmount)):
myLength = 10
WebDriverWait(browser, 20).until(lambda browser: len(browser.find_elements_by_xpath("//img[@data-blabla]")) == int(myLength))

How to handle unexpected ad popup in hdfc site?

I am trying to automate the HDFC website. When entering the site there is an ad popup, which sometimes appears and sometimes doesn't. I want to handle it using try/catch. Please help with this.
The trick is to first check whether the element is present, act accordingly if it is, and otherwise continue as normal. You could do the following and evaluate the True/False returned by the function :)
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

wait = WebDriverWait(driver, 10)

def check_exists_by_xpath(xpath, wait):
    try:
        # wait.until raises TimeoutException (not NoSuchElementException)
        # when the element never becomes visible
        wait.until(EC.visibility_of_element_located((By.XPATH, xpath)))
    except TimeoutException:
        return False
    return True
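A browser-free sketch of the same pattern: attempt the lookup, swallow only the expected "not found" exception, and branch on the boolean. All names here are hypothetical; with Selenium, `find` would wrap the wait.until(...) call and `not_found_exc` would be TimeoutException:

```python
def element_present(find, not_found_exc):
    """Return True if `find()` succeeds, False if it raises `not_found_exc`.

    Any other exception propagates, so real failures aren't silently
    mistaken for "the popup just didn't show up this time".
    """
    try:
        find()
    except not_found_exc:
        return False
    return True

def dismiss_popup_if_present(find_popup, close_popup, not_found_exc):
    # Close the ad popup only when it actually appeared.
    if element_present(find_popup, not_found_exc):
        close_popup()
        return True
    return False
```

Catching only the specific exception type is the important part; a bare except would also hide genuine errors such as a crashed session.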

selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element:

I'm trying to automatically generate lots of users on the webpage kahoot.it using Selenium, to make them appear in front of the class. However, I get this error message when trying to access the inputSession element (where you type the game ID to enter the game):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.kahoot.it")
gameID = driver.find_element_by_id("inputSession")
username = driver.find_element_by_id("username")
gameID.send_keys("53384")
This is the error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element:
{"method":"id","selector":"inputSession"}
Looks like it takes time to load the webpage, so the web element hadn't appeared yet when Selenium looked for it. You can either use @shri's code above or just add these two statements right below the line driver = webdriver.Firefox():
driver.maximize_window() # For maximizing window
driver.implicitly_wait(20) # gives an implicit wait for 20 seconds
Could be a race condition where the find element executes before the element is present on the page. Take a look at the wait timeout documentation. Here is an example from the docs:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement"))
    )
finally:
    driver.quit()
In my case, the error was caused by the element I was looking for being inside an iframe. This meant I had to change frame before looking for the element:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://www.google.co.uk/maps")
frame_0 = driver.find_element_by_class_name('widget-consent-frame')
driver.switch_to.frame(frame_0)
agree_btn_0 = driver.find_element_by_id('introAgreeButton')
agree_btn_0.click()
Reddit source
You can also use below as an alternative to the above two solutions:
import time
time.sleep(30)
This worked for me (the try/finally version didn't; it kept hitting the finally and browser.close()):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get('mywebsite.com')
username = None
while username is None:
    username = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "username"))
    )
username.send_keys('myusername@email.com')
It seems that your browser did not get the proper HTML text/tags yet; use a delay so the page can load first, and then grab the tags from the page.
driver = webdriver.Chrome('./chromedriver.exe')
# load the page
driver.get('https://www.instagram.com/accounts/login/')
# use delay function to get all tags
driver.implicitly_wait(20)
# access tag
driver.find_element_by_name('username').send_keys(self.username)
Also, for some people it may be due to a new tab opening when clicking the button (not in this question particularly). You can then switch tabs with this command:
driver.switch_to.window(driver.window_handles[1]) #for switching to second tab
I had the same problem as you and this solution saved me:
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))
It just means the function is executing before the button can be clicked. Example solution:
from time import sleep  # note: sleep comes from the time module, not selenium
# load the page first and then pause
sleep(3)
# pauses executing the next line for 3 seconds