For some unknown reason, my browser opens test pages from my remote server very slowly. So I am wondering whether I can reconnect to the browser after quitting the script: if I don't execute webdriver.quit(), the browser is left open. It would probably be some kind of hook or webdriver handle.
I have looked through the selenium API docs but didn't find any such function.
I'm using Chrome 62 (x64), Windows 7, and Selenium 3.8.0.
I'd appreciate any input, whether or not the question can be solved.
No, you can't reconnect to the previous Web Browsing Session after you quit the script. Even if you are able to extract the Session ID, Cookies and other session attributes from the previous Browsing Context, you still won't be able to pass those attributes as a hook to the WebDriver.
A cleaner way is to call webdriver.quit() and then spawn a new Browsing Context.
Deep Dive
There have been a lot of discussions and attempts around reconnecting WebDriver to an existing, running Browsing Context. In the discussion Allow webdriver to attach to a running browser, Simon Stewart [creator of WebDriver] clearly mentioned:
Reconnecting to an existing Browsing Context is a browser-specific feature, hence it can't be implemented in a generic way.
With internet-explorer, it's possible to iterate over the open windows in the OS and find the right IE process to attach to.
firefox and google-chrome need to be started in a specific mode and configuration, which effectively means that just attaching to a running instance isn't technically possible.
tl;dr
webdriver.firefox.useExisting not implemented
Yes, that's actually quite easy to do.
A selenium <-> webdriver session is represented by a connection url and a session_id; you just reconnect to an existing one.
Disclaimer - the approach uses selenium internal properties ("private", in a way), which may change in new releases; you'd better not use it for production code. It's also better not to use it against a remote SE (your own hub, or a provider like BrowserStack/Sauce Labs), because of a caveat/resource drainage explained at the end.
When a webdriver instance is initiated, you need to get the aforementioned properties; sample:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
# now Google is opened, the browser is fully functional; print the two properties
# command_executor._url (it's "private", not for a direct usage), and session_id
print(f'driver.command_executor._url: {driver.command_executor._url}')
print(f'driver.session_id: {driver.session_id}')
With those two properties now known, another instance can connect; the "trick" is to initiate a Remote driver and provide the _url above - thus it will connect to that running selenium process:
driver2 = webdriver.Remote(command_executor=the_known_url)
# when the started selenium is a local one, the url is in the form 'http://127.0.0.1:62526'
When that is run, you'll see a new browser window being opened.
That's because upon initiating the driver, the selenium library automatically starts a new session for it - and now you have 1 webdriver process with 2 sessions (browser instances).
If you navigate to a url, you'll see it is executed on that new browser instance, not the one left over from the previous start - which is not the desired behavior.
At this point, two things need to be done - a) close the current SE session ("the new one"), and b) switch this instance to the previous session:
if driver2.session_id != the_known_session_id:   # this is pretty much guaranteed to be the case
    driver2.close()   # this closes the session's window - it is currently the only one, thus the session itself will be auto-killed, yet:
    driver2.quit()    # for remote connections (like ours), this deletes the session, but does not stop the SE server

# take the session that's already running
driver2.session_id = the_known_session_id

# do something with the now hijacked session:
driver2.get('https://www.bing.com/')
And, that is it - you're now connected to the previous/already existing session, with all its properties (cookies, LocalStorage, etc).
By the way, you do not have to provide desired_capabilities when initiating the new remote driver - those are stored and inherited from the existing session you took over.
Caveat - having a SE process running can lead to some resource drainage in the system.
Whenever one is started and then not closed - like in the first piece of code - it will stay there until you manually kill it. By this I mean - on Windows, for example - you'll see a "chromedriver.exe" process that you have to terminate manually once you are done with it. It cannot be closed by a driver that has connected to it as a remote selenium process.
The reason - whenever you initiate a local browser instance and then call its quit() method, there are 2 parts to it: the first is to delete the session from the Selenium instance (what's done in the second code piece up there), and the other is to stop the local service (the chromedriver/geckodriver) - which generally works ok.
The thing is, for Remote sessions the second piece is missing - your local machine cannot control a remote process; that's the work of that remote hub. So that 2nd part is literally a pass python statement - a no-op.
If you start too many selenium services on a remote hub and don't have control over it, that will lead to resource drainage on that server. Cloud providers like BrowserStack take measures against this - they close services with no activity for the last 60s, etc. - yet this is something you don't want to cause.
And as for local SE services - just don't forget to occasionally clean up the OS from orphaned selenium drivers you forgot about :)
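If local drivers do pile up, a small helper can sweep them away. This is a minimal sketch of my own, assuming the psutil package is installed and Windows-style process names (adjust the names on other platforms):

import psutil

# kill chromedriver/geckodriver processes left behind by earlier runs
for proc in psutil.process_iter(['name']):
    if proc.info['name'] in ('chromedriver.exe', 'geckodriver.exe'):
        proc.kill()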
OK, after mixing the various solutions shared here and some tweaking, I have this working as below. The script will reuse a previously left-open Chrome window if one is present - the remote connection is perfectly able to kill the browser if needed, and the code functions just fine.
I would love a way to automate getting the session_id and url of the previously active session without having to write them out to a file during the previous session for pick-up...
This is my first post on here so apologies for breaking any norms
# Imports needed by the snippet below
from selenium import webdriver
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Set manually - read/write from a file for automation
session_id = "e0137cd71ab49b111f0151c756625d31"
executor_url = "http://localhost:50491"

def attach_to_session(executor_url, session_id):
    original_execute = WebDriver.execute

    def new_command_execute(self, command, params=None):
        if command == "newSession":
            # Mock the response so no new session gets created
            return {'success': 0, 'value': None, 'sessionId': session_id}
        else:
            return original_execute(self, command, params)

    # Patch the function before creating the driver object
    WebDriver.execute = new_command_execute
    driver = webdriver.Remote(command_executor=executor_url, desired_capabilities={})
    driver.session_id = session_id

    # Replace the patched function with the original function
    WebDriver.execute = original_execute
    return driver

remote_session = 0

# Try to connect to the last opened session - if that fails, open a new window
try:
    driver = attach_to_session(executor_url, session_id)
    driver.current_url
    print(" Driver has an active window, we have connected to it and are running here now:")
    print(" Chrome session ID", session_id)
    print(" executor_url", executor_url)
except Exception:
    print("No driver window open - making a new one")
    # myoptions is assumed to be defined elsewhere (e.g. a ChromeOptions instance)
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=myoptions)
    session_id = driver.session_id
    executor_url = driver.command_executor._url
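As for picking the session up automatically: the comment at the top of the snippet hints at reading/writing the two values from a file. A minimal sketch of that persistence could look like the following (the file name and helper names are my own, not part of the snippet above):

import json

SESSION_FILE = "selenium_session.json"

def save_session(driver, path=SESSION_FILE):
    # call this right after creating a fresh driver
    with open(path, "w") as f:
        json.dump({"session_id": driver.session_id,
                   "executor_url": driver.command_executor._url}, f)

def load_session(path=SESSION_FILE):
    # returns (executor_url, session_id) saved by a previous run
    with open(path) as f:
        data = json.load(f)
    return data["executor_url"], data["session_id"]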
Without getting into why you think that leaving a browser window open will solve the slowness problem: you don't really need a handle to do that. Just keep running the tests without closing the session - or, in other words, without calling driver.quit(), as you have mentioned yourself. The question here, though, is: are you using a framework that comes with its own runner, like Cucumber?
In any case, you must have some "setup" and "cleanup" code. So what you need to do is to ensure during the "cleanup" phase that the browser is back to its initial state. That means:
Blank page is displayed
Cookies are erased for the session
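A minimal sketch of that cleanup step (a hypothetical helper of mine, assuming a plain selenium driver object) could be:

def cleanup(driver):
    # reset the still-open browser instead of quitting it
    driver.get("about:blank")    # blank page is displayed
    driver.delete_all_cookies()  # cookies are erased for the session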
I'd like to benchmark my web application. Specifically I'd like to measure the load time of a particular DOM element.
I can use webdriver's wait-for-visible to measure how long an element took to load and save the result somewhere. However, I'd also like to measure concurrency and other factors.
Is there a standard way to do this?
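For reference, a rough sketch of the wait-for-visible timing idea described above (written here with Python selenium rather than WebdriverIO; the URL and selector are placeholders):

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
start = time.monotonic()
driver.get("https://example.com/")
# block until the element of interest is visible, then record the elapsed time
WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "#my-element")))
print(f"Element became visible after {time.monotonic() - start:.2f}s")
driver.quit()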
While I love WebdriverIO, I would recommend using a different tool for benchmarking. WebdriverIO sends its commands to Selenium over HTTP requests, and because of that it isn't the most performant thing. Your stats could be way off simply because the request from WebdriverIO to Selenium takes longer than usual.
I'd suggest using a tool a little "closer to the metal", possibly CasperJS
Is there a way to get a realtime view of what PhantomJS (or similar) is rendering?
I would like to develop my automation script while interacting with (or at least seeing a screencap of) the page it's targeted to.
No, there is no such thing. SlimerJS has the same API as PhantomJS, but runs the Gecko engine. You can see directly what is going on and run it headlessly with xvfb-run.
You will not be able to interact with it. You may want to use a screengrabber to record a video of the interaction when the tests are long and you don't want to run the test suite again if you didn't catch the problem in the test case.
The obvious way to debug PhantomJS scripts is to render many screenshots using page.render() and logging some objects to the console with
console.log(JSON.stringify(yourObj, undefined, 4));
with nice formatting.
The solution we use is automatic screenshotting in case of exceptions: PhantomJS will render the current page into a file that you can examine later.
That's for the test execution phase.
When you are writing the tests, just keep an additional window open (a "normal" browser) with the application you are trying to test, and design the test against it.
When the design is done, execute the test with phantomJS.
My suggestion is to use logging alongside.
http://casperjs.org/
CasperJS is an open source navigation scripting & testing utility written in Javascript for the PhantomJS WebKit headless browser and SlimerJS (Gecko). It eases the process of defining a full navigation scenario and provides useful high-level functions, methods & syntactic sugar for doing common tasks such as:
defining & ordering browsing navigation steps
filling & submitting forms
clicking & following links
capturing screenshots of a page (or part of it)
testing remote DOM
logging events
downloading resources, including binary ones
writing functional test suites, saving results as JUnit XML
scraping Web contents
The solution to this problem is to use the remote debugger:
--remote-debugger-port=9000
Using slimerjs for testing scripts with a browser is not advisable, since it is based on Gecko, which means the script might work on slimerjs and not on phantomjs, or vice versa.
Take a look at this guide for more info...
https://drupalize.me/blog/201410/using-remote-debugger-casperjs-and-phantomjs
I am using the Selenium Firefox Web Driver to run automated NUnit tests and, seemingly at random, it is not able to locate a particular button.
The function it is using is this
driver.FindElement(By.Id("createUser")).Click();
I can't figure out a pattern for why it does and doesn't work at different times. Does anyone have any ideas as to what is going on, or what a possible workaround might be? I am not using dynamic ids, and I have
driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(15));
in place to cause it to wait up to 15 seconds for the page to fully render.
In a continuous integration build environment, when running several Selenium tests in parallel (using the Firefox driver) for different applications, with each test recording screenshots after every "action" (e.g. navigating to a page, submitting a form, etc.), it seems that whichever application window pops up last ends up on top of the z-order and gets the focus.
So using the method getScreenshotAs() from the Selenium API to record images results in mixed up screenshots sometimes showing one application and sometimes the other application.
Recording the HTML responses with getPageSource() on the other hand seems to work correctly using the Firefox driver instance "bound" to the test.
Is there any solution for dealing with the mixed-up image screenshots? Is there a way to ensure that getScreenshotAs() only considers its own Firefox driver instance? Thanks for any hints!
Peter
I don't know what flavor of Selenium you are using, but here is a reference to the API that looks like it would solve your problem - though I have never tested it.
http://selenium.googlecode.com/svn/trunk/docs/api/dotnet/index.html
What that link shows is the IWrapDriver interface which, according to the documentation, "Gets the IWebDriver used to find this element."
So, from my understanding, you could take the IWebDriver in your method, wrap it with IWrapDriver, and then use that as the reference for your getScreenshotAs().