Selenium Grid Proxy: how to get the response after a command gets executed

I created a Selenium Grid proxy and I want to log every command that gets executed. The problem is I can't find a way to get the response of a command: for example, after a "GetTitle" command I want to get the "Title" that was returned.

Where do you want this logging to be done? If you log this at the custom proxy, then these logs will be available only on the machine that runs the hub. Is that what you want? If yes, here's how you can do it:
Within an overridden variant of org.openqa.grid.internal.listeners.CommandListener#afterCommand (this method is available to the DefaultRemoteProxy extension that you are building), extract this information from the javax.servlet.http.HttpServletResponse by reading its content and translating that into a proper payload.
Here's what the afterCommand() (or beforeCommand()) method of your customized org.openqa.grid.selenium.proxy.DefaultRemoteProxy could look like:
// inside afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response)
org.openqa.grid.web.servlet.handler.SeleniumBasedResponse wrapped =
        new org.openqa.grid.web.servlet.handler.SeleniumBasedResponse(response);
if (wrapped.getForwardedContent() != null) {
    System.err.println("Content: " + wrapped.getForwardedContent());
}
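For context, here is a minimal sketch of where that snippet could sit (the class name LoggingRemoteProxy is illustrative; this assumes the Selenium 3.x Grid API, where the exact constructor signature varies slightly across 3.x releases):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;
import org.openqa.grid.web.servlet.handler.SeleniumBasedResponse;

public class LoggingRemoteProxy extends DefaultRemoteProxy {

    public LoggingRemoteProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        super.afterCommand(session, request, response);
        SeleniumBasedResponse wrapped = new SeleniumBasedResponse(response);
        if (wrapped.getForwardedContent() != null) {
            // for a GetTitle command this would be the payload containing the title
            System.err.println("Content: " + wrapped.getForwardedContent());
        }
    }
}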
If that's not what you want, then you should look at leveraging the EventFiringWebDriver. Take a look at the blogs below to learn how to work with it. This approach does not require any customization on the Grid side: you just wrap an existing RemoteWebDriver object in an EventFiringWebDriver, and the listeners you inject into it give you this information.
http://darrellgrainger.blogspot.in/2011/02/generating-screen-capture-on-exception.html
https://rationaleemotions.wordpress.com/2015/04/18/eavesdropping-into-webdriver/ (this is my blog) - here I talk about not even using EventFiringWebDriver but instead working with a decorated CommandExecutor, which logs all of this information for you.
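For the decorated CommandExecutor route, a minimal sketch could look like the following (Selenium 3.x client assumed; the class name LoggingCommandExecutor is illustrative, not something from the blog post):

import java.io.IOException;
import java.net.URL;
import org.openqa.selenium.remote.Command;
import org.openqa.selenium.remote.CommandExecutor;
import org.openqa.selenium.remote.HttpCommandExecutor;
import org.openqa.selenium.remote.Response;

public class LoggingCommandExecutor implements CommandExecutor {

    private final CommandExecutor delegate;

    public LoggingCommandExecutor(URL remoteAddress) {
        this.delegate = new HttpCommandExecutor(remoteAddress);
    }

    @Override
    public Response execute(Command command) throws IOException {
        Response response = delegate.execute(command);
        // e.g. "getTitle -> My Page Title"
        System.err.println(command.getName() + " -> " + response.getValue());
        return response;
    }
}

You would then pass it into the driver, e.g. new RemoteWebDriver(new LoggingCommandExecutor(new URL("http://hub-host:4444/wd/hub")), capabilities), and every command plus its returned value gets logged on the client side.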

Related

Browser not quitting when called from another scenario using Karate

I have a scenario which is a series of REST API calls, but in the middle is a section that executes a few steps in a Chrome browser. The browser steps are common to another scenario, so I tried to extract them into a separate feature that could then be called from multiple scenarios.
When the main scenario executes, it runs the browser feature but fails to auto-close the browser afterwards. I read in the documentation that "Karate will close the browser automatically after a Scenario unless the driver instance was created before entering the Scenario". The configure driver code is in the callable scenario.
I also tried calling quit(), but this resulted in the error: "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?"
Does anyone know how I can ensure the browser closes in this circumstance?
UPDATE: As suggested by @PeterThomas I started to craft a full example to replicate this, when I discovered that replicating it is actually quite simple.
If the UI feature is called like this then the browser is closed after execution:
* call read('classpath:/ui/callable/GoogleSearch.feature')
If called like this then the browser remains open:
* def result = call read('classpath:/ui/callable/GoogleSearch.feature')
My UI scenario scrapes a value from a web page, which I store in a '* def ticket' within the called feature. I was hoping to access it via result.ticket. As I am unable to do this, I am successfully using the following workaround:
* def extractedTicket = { value: '' }
* call read('classpath:/ui/callable/GoogleSearch.feature')
* def ticket = extractedTicket.value
And within the called feature:
* set extractedTicket.value = karate.extract(val, '.ticket=(.*?)&', 1)
First, I think you should provide a way to replicate this, so that we can investigate and fix this for everyone. Please follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
That said, maybe for just getting Chrome to perform a few steps you should use the Java API - you can call it from wherever you want, even within a feature file using Java interop: https://github.com/karatelabs/karate#java-api
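For reference, a minimal sketch of driving Chrome through Karate's Java API could look like this (the exact package of the Chrome class has moved between Karate versions, so treat the import as an assumption and check the README linked above):

import com.intuit.karate.driver.chrome.Chrome;

public class ChromeStepsRunner {

    public static void main(String[] args) {
        // start a headless Chrome session managed by Karate
        Chrome chrome = Chrome.startHeadless();
        chrome.setLocation("https://google.com");
        // ... perform the shared browser steps here ...
        // closing is now fully under your control, independent of any Scenario
        chrome.quit();
    }
}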
Also see if this answer gives you any pointers: https://stackoverflow.com/a/60387907/143475

How to interact with network tab in chrome using karate DSL when doing web automation

I am writing a UI automation script using Karate DSL. At a certain point I need to get a value from a network call in Chrome. I want to interact with one of the web service calls shown in the Chrome DevTools network tab and get the JSON response of that web service.
I need this because I have to extract the value from that particular call and pass it on to the next step in my automation script.
I have seen the question related to sessionStorage (Is there a way of getting a sessionStorage using Karate DSL?), but I wonder how to do the same for a network call, using the script command or any other way?
The first thing I would recommend is: don't forget that Karate is an API testing tool at its core. Maybe all you need to do is make that call manually and get the response. You should be able to scrape the HTML and get the host and parameters needed.
That said - there's a new feature (only for Chrome) which is documented here: https://github.com/intuit/karate/tree/develop/karate-core#intercepting-http-requests - and is available in 0.9.6.RC2
It may not directly solve what you want, but in a Karate mock you should be able to set a value for use later, e.g. by using a Java singleton or writing to a temp file.
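To illustrate the "Java singleton" idea (the class below is purely hypothetical, not part of Karate), you could stash the captured value in a static holder and read it back later via Java interop:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CapturedValues {

    // shared between the mock and the test, as long as both run in the same JVM
    private static final Map<String, String> VALUES = new ConcurrentHashMap<>();

    public static void put(String key, String value) {
        VALUES.put(key, value);
    }

    public static String get(String key) {
        return VALUES.get(key);
    }
}

From a feature file you would then reach it with something like * def CapturedValues = Java.type('com.example.CapturedValues') and * def ticket = CapturedValues.get('ticket'), where the package and key names are whatever you choose.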
If there is something odd or more specific you need, please consider contributing code to Karate. Finally, there is an experimental way in which you can make raw requests to the Chrome DevTools session: https://github.com/intuit/karate/tree/develop/examples/ui-test#devtools-protocol-tips - it is for advanced users, but maybe you are one :)

Selenium Grid: Node API?

The problem:
I want to run Selenium Grid on AWS and would like to use their dynamic scaling. On scale-down, it will just terminate an instance... which means that a node can disappear just like that. Not the behaviour I would like, but using scripts or lifecycle hooks I can try to make sure that no sessions on the node are active before it is terminated.
Seems like I can hit this API to disconnect the node from the hub: http://NODE-IP:5555/selenium-server/driver/?cmd=shutDownSeleniumServer
Ideally, I need to find an API on the node itself to gather data on session activity.
Alternatives? Session logs?
Note:
This answer is valid only for the Selenium 3.x series (3.14.1, which is as of today the last of the builds in the Selenium 3 series). The Selenium 4 grid architecture is a completely different one, and as such this answer will not necessarily be relevant for the Selenium 4 grid (it is yet to be released).
A couple of things. What you are asking for sounds like you need a sort of self-healing mechanism. This is not available in the plain vanilla Selenium Grid flavor.
A Selenium node doesn't have the capability to track the sessions that are running within it.
You need to build all of this at the Selenium Hub (which is where all this information resides).
On a high level, you would need to do the following:
Build a custom proxy by extending org.openqa.grid.selenium.proxy.DefaultRemoteProxy which would have the following capabilities:
Add an API which when used would mark the proxy as quiesced (meaning the node has been marked for maintenance and will no longer accept any new session requests)
Override getNewSession(Map<String, Object> requestedCapability) such that it first checks whether the node is quiesced and only then facilitates a new session (a sketch of such a proxy follows after this list).
Build a custom servlet which when invoked can do the following:
Given a node, it can use the API built via 1.1 and mark that node as quiesced
Return the list of nodes that don't have any sessions running in them. If you build your servlet by extending org.openqa.grid.web.servlet.RegistryBasedServlet, within your servlet you should be able to get the list of free node URLs by doing something like the below:
// inside your RegistryBasedServlet subclass
List<RemoteProxy> freeProxies =
    StreamSupport.stream(getRegistry().getAllProxies().spliterator(), false)
        .filter(remoteProxy -> !remoteProxy.isBusy())
        .collect(Collectors.toList());
List<URL> urls =
    freeProxies.stream().map(RemoteProxy::getRemoteHost).collect(Collectors.toList());
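Here is a minimal sketch of the quiesce-able proxy from steps 1.1 and 1.2 (class and method names such as MaintenanceAwareProxy and quiesce() are illustrative; the constructor assumes Selenium 3.14.x, where it takes a GridRegistry, and differs slightly in earlier 3.x releases):

import java.util.Map;
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

public class MaintenanceAwareProxy extends DefaultRemoteProxy {

    private volatile boolean quiesced = false;

    public MaintenanceAwareProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    // step 1.1: invoked (for example) by the custom servlet to mark the node for maintenance
    public void quiesce() {
        this.quiesced = true;
    }

    public boolean isQuiesced() {
        return quiesced;
    }

    // step 1.2: refuse new sessions once the node has been marked for maintenance
    @Override
    public TestSession getNewSession(Map<String, Object> requestedCapability) {
        if (quiesced) {
            return null;
        }
        return super.getNewSession(requestedCapability);
    }
}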
Now that the custom hub is enabled with the functionality to do this cleanup, you could first invoke the 2.1 end-point to mark nodes for shutdown, then keep polling the 2.2 end-point to retrieve the IP and port combinations of the nodes that are no longer supporting any test session, and then invoke http://NODE-IP:5555/selenium-server/driver/?cmd=shutDownSeleniumServer on them.
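Stitched together, the drain-and-shutdown flow could be scripted roughly like this (the /grid/admin/NodeServlet paths and query parameters are placeholders for whatever your custom servlet actually exposes; Java 9+ assumed for readAllBytes()):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NodeDrainer {

    public static void drainAndShutdown(String hubHost, String nodeHostPort) throws IOException, InterruptedException {
        // step 2.1: mark the node as quiesced so it stops accepting new sessions
        httpGet("http://" + hubHost + ":4444/grid/admin/NodeServlet?action=quiesce&node=" + nodeHostPort);

        // step 2.2: poll until the node appears in the "free" list, then shut it down
        while (!httpGet("http://" + hubHost + ":4444/grid/admin/NodeServlet?action=free").contains(nodeHostPort)) {
            Thread.sleep(5000);
        }
        httpGet("http://" + nodeHostPort + "/selenium-server/driver/?cmd=shutDownSeleniumServer");
    }

    private static String httpGet(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}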
That on a high level can do what you are looking for.
Some useful links which can help you get oriented on this (all of the provided links are blog posts that I wrote at various points in time):
Self healing grid - https://rationaleemotions.wordpress.com/2013/01/28/building-a-self-maintaining-grid-environment/
Building a custom proxy - https://rationaleemotions.github.io/gridopadesham/CUSTOM_PROXY.html
Building a custom servlet for the hub - https://rationaleemotions.github.io/gridopadesham/CUSTOM_SERVLETS.html

Is it possible to combine multiple commands in single webdriver http call?

I'm using Selenium from Java with a remote grid. When I find an element on a page I would like to retrieve its text, multiple attributes from this element, check whether it is displayed and whether it is enabled.
As far as I can see, each thing I retrieve triggers a new remote call (to the HTTP endpoint of the WebDriver). Since I know beforehand which values I'm interested in, I would like to combine them in a single HTTP call (as each call can be quite slow). Is this possible in Selenium with Java? Or even with the WebDriver protocol?
To be clear: my problem is not finding an element based on multiple criteria in one go, I know how to do that. But after I find the element I want to know the values of multiple properties, and I want to gather these efficiently.
As far as I can see, the protocol requires a separate call for each attribute value, the text, whether the element is displayed and whether it is enabled. For me this means, for instance, 6 round trips to the server, where one could suffice if I were able to 'multiplex' all the data I would like to retrieve into a single call.
Is there a way to optimize retrieving multiple details/properties of an element once I found it?
One solution to have fewer calls between the driver and the server could be to use some JavaScript executed in the context of the client side/window.
You can write something like:
Object combinedObject = ((JavascriptExecutor) driver).executeScript(
    "var el = arguments[0];"
        + "return { abc: el.getAttribute('abc'), efg: el.getAttribute('efg'), hij: el.getAttribute('hij') };",
    foundElement);
This can reduce the number of calls between the driver and the server.
Whether it makes sense to mix some JavaScript functions with your Java code is your decision.
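Since the WebDriver JSON wire protocol maps a returned JavaScript object literal to a java.util.Map on the Java side, a sketch of a helper that gathers several details in one round trip could look like this (the 'displayed' check below is only a rough JS approximation of WebDriver's isDisplayed(), and the attribute names are placeholders):

import java.util.Map;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public final class ElementDetails {

    // fetches text, attributes and basic state in a single executeScript round trip
    @SuppressWarnings("unchecked")
    public static Map<String, Object> of(WebDriver driver, WebElement element) {
        return (Map<String, Object>) ((JavascriptExecutor) driver).executeScript(
            "var el = arguments[0];"
                + "return { text: el.textContent,"
                + "  abc: el.getAttribute('abc'),"
                + "  displayed: !!(el.offsetWidth || el.offsetHeight || el.getClientRects().length),"
                + "  enabled: !el.disabled };",
            element);
    }
}

Usage would then be something like Map<String, Object> details = ElementDetails.of(driver, foundElement); boolean displayed = (Boolean) details.get("displayed");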

How can I set different driver for one step in behat?

By default I run tests with Goutte. How can I set a different driver for one step? For example, to take a screenshot after a failed step I need the Selenium driver. And I don't know in advance which step will fail.
Have a look at the Mink docs, specifically the managing sessions chapter, to learn how to change the default driver. If you're not familiar with Behat hooks, it's also good to catch up with the Hooking into the Test Process docs.
Here's an example of how you could access Mink and change the default session. Once this method is executed, all the following operations on the session object will be performed through the selected driver.
use Behat\Behat\Hook\Scope\BeforeStepScope;
use Behat\Behat\Hook\Scope\AfterStepScope;
use Behat\MinkExtension\Context\RawMinkContext;

class MyContext extends RawMinkContext
{
    /**
     * @BeforeStep
     */
    public function before(BeforeStepScope $scope)
    {
        // note that this will be called before EVERY step
        // add logic here if you want to perform it before SOME steps
        // You can't really know if your step will fail though ;)
        $mink = $this->getMink();
        $mink->setDefaultSessionName('selenium');
    }

    /**
     * @AfterStep
     */
    public function after(AfterStepScope $scope)
    {
        // here you can inspect $scope to see if your step failed
    }
}
This is not a complete solution, but should point you into the right direction if you really want to pursue it.
However, I strongly discourage you from doing so.
If your step failed, it was already executed. To take a screenshot you would need to execute the step again with a different driver. The state of the app would most likely be different at this point. You'd also need to fight with differences between the drivers, try to share the cookies etc. It's just not worth the effort.
Instead, simply dump the HTML. You can always display it in a browser.