Selenium Grid: Node API?

The problem:
I want to run Selenium Grid on AWS and would like to use its dynamic scaling. On scale-down, AWS will simply terminate an instance... which means a node can disappear just like that. Not the behaviour I would like, but using scripts or lifecycle hooks I can try to make sure that no sessions on the node are active before it is terminated.
It seems I can hit this API to disconnect the node from the hub: http://NODE-IP:5555/selenium-server/driver/?cmd=shutDownSeleniumServer
Ideally, I need an API on the node itself that I can query for session activity.
Alternatives? Session logs?

Note:
This answer is valid only for the Selenium 3.x series (3.14.1, which as of today is the last build in the Selenium 3 series). The Selenium 4 grid architecture is a completely different one, so this answer will not necessarily be relevant for the Selenium 4 grid (it's yet to be released).
A couple of things. What you are asking for sounds like a self-healing mechanism of sorts. This is not available in the plain-vanilla Selenium Grid.
A Selenium node doesn't have the capability to track the sessions running within it.
You need to build all of this at the Selenium Hub (which is where all this information resides).
On a high level, you would need to do the following:
1. Build a custom proxy by extending org.openqa.grid.selenium.proxy.DefaultRemoteProxy, with the following capabilities:
1.1. Add an API which, when invoked, marks the proxy as quiesced (meaning the node has been marked for maintenance and will no longer accept any new session requests).
1.2. Override getNewSession(Map<String, Object> requestedCapability) so that it first checks whether the node is quiesced and only then facilitates a new session.
2. Build a custom servlet which, when invoked, can do the following:
2.1. Given a node, use the API built in 1.1 to mark that node as quiesced.
2.2. Return the list of nodes that don't have any sessions running on them. If you build your servlet by extending org.openqa.grid.web.servlet.RegistryBasedServlet, you should be able to get the list of free node URLs with something like the following:
List<RemoteProxy> freeProxies =
        StreamSupport.stream(getRegistry().getAllProxies().spliterator(), false)
                .filter(remoteProxy -> !remoteProxy.isBusy())
                .collect(Collectors.toList());
List<URL> urls =
        freeProxies.stream().map(RemoteProxy::getRemoteHost).collect(Collectors.toList());
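For completeness, here is a sketch of where that snippet could live; the servlet class name and the plain-text response format are my own illustrations, and error handling is omitted:

import java.io.IOException;
import java.net.URL;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.RemoteProxy;
import org.openqa.grid.web.servlet.RegistryBasedServlet;

public class FreeNodesServlet extends RegistryBasedServlet {
    public FreeNodesServlet() {
        this(null);
    }

    public FreeNodesServlet(GridRegistry registry) {
        super(registry);
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Collect the remote-host URLs of all proxies that currently have no running sessions.
        List<URL> urls = StreamSupport.stream(getRegistry().getAllProxies().spliterator(), false)
                .filter(proxy -> !proxy.isBusy())
                .map(RemoteProxy::getRemoteHost)
                .collect(Collectors.toList());
        // A real implementation would emit proper JSON; this just dumps the list.
        response.setContentType("text/plain");
        response.getWriter().println(urls);
    }
}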
Now that we have a custom hub enabled with this cleanup functionality, you could first invoke the 2.1 endpoint to mark nodes for shutdown, then keep polling the 2.2 endpoint to retrieve the IP-and-port combinations of the nodes that are no longer serving any test sessions, and finally invoke http://NODE-IP:5555/selenium-server/driver/?cmd=shutDownSeleniumServer on them.
That, on a high level, should do what you are looking for.
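For the proxy piece (1.1 and 1.2), here is a minimal sketch, assuming the Selenium 3.14.x grid APIs; the class name, the quiesced flag, and the servlet wiring that would call quiesce() are my own illustrations, not part of the grid:

import java.util.Map;
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

public class QuiescingProxy extends DefaultRemoteProxy {
    private volatile boolean quiesced = false;

    public QuiescingProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    // 1.1: invoked (e.g. from a custom servlet) to mark this node for maintenance.
    public void quiesce() {
        this.quiesced = true;
    }

    // 1.2: refuse new sessions once the node has been marked for maintenance.
    @Override
    public TestSession getNewSession(Map<String, Object> requestedCapability) {
        if (quiesced) {
            return null;
        }
        return super.getNewSession(requestedCapability);
    }
}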
Some useful links to help you get oriented (all of these are blog posts I wrote at various points in time):
Self healing grid - https://rationaleemotions.wordpress.com/2013/01/28/building-a-self-maintaining-grid-environment/
Building a custom proxy - https://rationaleemotions.github.io/gridopadesham/CUSTOM_PROXY.html
Building a custom servlet for the hub - https://rationaleemotions.github.io/gridopadesham/CUSTOM_SERVLETS.html

Related

How to interact with network tab in chrome using karate DSL when doing web automation

I am writing a UI automation script using Karate DSL. At a certain point I need to get a value from a network call in Chrome. I want to interact with one of the web-service calls in the Chrome DevTools network tab and get the JSON response of that web service.
I need this because I have to extract a value from that particular call and pass it on to the next step in my automation script.
I have seen the question about sessionStorage (Is there a way of getting a sessionStorage using Karate DSL?), but I wonder how to do the same for a network call using the script command or some other way.
The first thing I would recommend is: don't forget that Karate is an API testing tool at its core. Maybe all you need to do is manually make that call and get the response. You should be able to scrape the HTML and get the host and parameters needed.
That said - there's a new feature (only for Chrome) which is documented here: https://github.com/intuit/karate/tree/develop/karate-core#intercepting-http-requests - and is available in 0.9.6.RC2
It may not directly solve what you want, but in a Karate mock you should be able to set a value for later use, e.g. by using a Java singleton or writing to a temp file.
If there is something oddly more specific you need, please contribute code to Karate. Finally, there is an experimental way in which you can actually make raw requests to the Chrome DevTools session: https://github.com/intuit/karate/tree/develop/examples/ui-test#devtools-protocol-tips - it is for advanced users, but maybe you are one :)
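To make the singleton idea concrete, here is a minimal sketch (the class name and package are hypothetical, not part of Karate):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A tiny shared store: the mock writes a captured value, a later test step reads it.
public class CaptureStore {
    private static final Map<String, Object> DATA = new ConcurrentHashMap<>();

    public static void put(String key, Object value) {
        DATA.put(key, value);
    }

    public static Object get(String key) {
        return DATA.get(key);
    }
}

From a feature file this could be reached via Java interop, e.g. * def CaptureStore = Java.type('com.example.CaptureStore') followed by * eval CaptureStore.put('token', response.token).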

Karate Standalone as Mock Server with multiple Feature Files

I am trying to set up an integration/API test suite with Karate and am considering Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely. The Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services, which need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure, that the right mock file is used for the associated test file?
(Clearly I could reconfigure the running standalone instance and advise it to use xyz-mock.feature next,
but that would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id that I send in with each test and then matching against it in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot two on different ports if you wanted, but there is no way to "merge" them into one port, if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. By keeping variables in memory and using some clever JsonPath, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one at a time, by design.
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and resets the Background variables to things like * def cats = []. Now in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is to maintain two or three different variables and use some logic to "route" based on a header; again, very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
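Putting the reset endpoint and the header-based routing together, a rough sketch of such a mock feature could look like this (the paths, the mode header, and the variable names are illustrative, not prescribed by Karate):

Feature: stateful mock with a reset endpoint

Background:
* def data = { cats1: {}, cats2: {}, cats3: {} }

Scenario: pathMatches('/__test/reset')
* def data = { cats1: {}, cats2: {}, cats3: {} }
* def response = { reset: true }

Scenario: pathMatches('/cats') && methodIs('get')
* def mode = karate.get('requestHeaders.mode[0]')
* def response = data[mode]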
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch-all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible from 1.1.0 onwards: https://github.com/intuit/karate/issues/1566

Nifi API - Update parameter context

We've created a parameter context within NiFi which is allocated to several process groups. We would like to update the value of one parameter within the parameter context. Is there any option to do this via the API?
The NiFi CLI from nifi-toolkit has commands for interacting with parameters; there is one for set-param:
https://github.com/apache/nifi/tree/master/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/params
You could use that, or look at the code to see how it uses the API.
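For example, an invocation could look roughly like this (a sketch from memory; verify the exact option names against the CLI's built-in help for set-param):

./bin/cli.sh nifi set-param -pcid <parameter-context-id> -pn myParam -pv newValue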
Also, anything you can do from the NiFi UI has to go through the REST API. So you can always open Chrome DevTools, take some action in the UI like updating a parameter, and then look at which calls are made.

Selenium Grid Proxy: how to get the response after a command get executed

I created a Selenium Grid proxy and I want to log every command that is executed. The problem is I can't find a way to get the response of a command; for example, after a "GetTitle" command I want to get the "Title" that was returned.
Where do you want this logging to be done? If you log this at the custom proxy, the logs will be available only on the machine that runs the hub. Is that what you want? If yes, then here's how you should do it:
Within an overridden variant of org.openqa.grid.internal.listeners.CommandListener#afterCommand (this method should be available in the DefaultRemoteProxy extension you are building), extract this information from the javax.servlet.http.HttpServletRequest by reading its entity value and translating that into a proper payload.
Here's how the afterCommand() (or beforeCommand()) method in your customized version of org.openqa.grid.selenium.proxy.DefaultRemoteProxy can look:
@Override
public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
    super.afterCommand(session, request, response);
    org.openqa.grid.web.servlet.handler.SeleniumBasedResponse ar = new org.openqa.grid.web.servlet.handler.SeleniumBasedResponse(response);
    if (ar.getForwardedContent() != null) {
        System.err.println("Content: " + ar.getForwardedContent());
    }
}
If that's not what you want, then you should look at leveraging the EventFiringWebDriver. Take a look at the blogs below to learn how to work with it. The EventFiringWebDriver does not require any customization on the grid side; you just wrap an existing RemoteWebDriver object in an EventFiringWebDriver, and the listeners you register with it will get you this information.
http://darrellgrainger.blogspot.in/2011/02/generating-screen-capture-on-exception.html
https://rationaleemotions.wordpress.com/2015/04/18/eavesdropping-into-webdriver/ (This is my blog.) Here I talk about not even using the EventFiringWebDriver but instead working with a decorated CommandExecutor, which would log all this information for you.
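For the simpler listener route, a minimal sketch of the EventFiringWebDriver approach (Selenium 3.x; the hub URL is a placeholder, and the listener only logs navigations as an example):

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
import org.openqa.selenium.support.events.EventFiringWebDriver;

public class EventLoggingExample {
    public static void main(String[] args) throws Exception {
        WebDriver delegate = new RemoteWebDriver(new URL("http://HUB-IP:4444/wd/hub"), new ChromeOptions());
        // Wrap the remote driver; no grid-side changes needed.
        EventFiringWebDriver driver = new EventFiringWebDriver(delegate);
        driver.register(new AbstractWebDriverEventListener() {
            @Override
            public void afterNavigateTo(String url, WebDriver driver) {
                // Other callbacks cover clicks, element finds, script execution, etc.
                System.out.println("Navigated to: " + url);
            }
        });
        driver.get("https://example.com");
        System.out.println("Title: " + driver.getTitle());
        driver.quit();
    }
}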

How to efficiently deploy content types to a Content Type Hub

I have set up a Content Type Hub and tested the syndication is working correctly by creating a test content type and watching it be published to the client site.
Then I deployed the content types I am actually interested in publishing to the hub (by way of a feature) along with the site columns they depend on.
I get the error
Content type '...' cannot be published to this site because feature '...' is not enabled.
I want to deploy content types with features for upgradability and ease of porting between dev, qual and prod environments, but I am left not understanding what the benefit of the Hub is.
If I have to activate the deploying feature, the content types will already be on the site before publishing takes place. If I have to manually create the content types on the Hub site with the web UI (yuck!), I have the issue of trying to keep three landscapes manually synchronized.
Is there a way to efficiently manage content type deployment to the Hub while still using the Hub to publish the content types?
The advantage of using the Content Type Hub is that it allows you to use and reuse your content types over multiple site collections and web applications throughout your farm.
Because all of your site collections are now using instances of the same syndicated content types, if you need to add/remove/rename columns within the content types in the future, this is done as easily as updating the content type and resubscribing (then allowing SharePoint to run its timer jobs, and double-checking that the changes propagated, because you're a careful SharePoint administrator).
I am not sure which error you are receiving; there simply isn't enough context in your post. However, I think you may be slightly confused about how syndicated content types are published. First, you turn on the content syndication hub publishing feature on the site collection that holds all of the content types you are going to reuse throughout your farm. Next, you configure the managed metadata service so that SharePoint loads each of your content types "into memory", more or less.
After this step, you get to choose which site collections you want to subscribe to the syndication hub. To do this, you need to turn on the content type publishing site collection feature. Note: If you use blank templates for your sites you may receive a feature error like you've described, due to a "flaw" with blank templates. See my post at: http://www.thesharepointblog.net/Lists/Posts/Post.aspx?ID=109
Only AFTER you've turned on the subscribing feature, AND the Content Type Hub timer job has run, AND the subscribing timer job has run, will your site collection see the available content types.
As for manually creating content types on the hub site, the only OOB way of doing this is to use the UI. Personally, I wrote a utility that does everything I just described for me, from creating the initial content types, to creating the syndication hub, publishing them to all of the site collections, and, most time-consumingly, associating them with all of the lists and libraries on the subscribing site collections. I had intended for my employer to sell it, but as they don't seem interested, I could open-source it if there is enough interest.
Hope this was helpful.
This looks like a shortcoming of the hub, indeed.
I've witnessed it before.
If you've deployed your content type to the hub, please check whether the Inherits attribute of the ContentType element is set to TRUE. Otherwise it won't work in a hub.
<ContentType ID="xxxxx"
             Name="xxxx"
             Group="xxxx"
             Description="xxxx"
             Inherits="TRUE"
             Version="0">
</ContentType>
Don't forget that you can actually synchronize the content types BETWEEN farms. This is especially valuable when you are developing on a separate farm and don't want to hassle with the PnP Framework for managing your content types. In some cases, the content type may already exist on the production farm and you need a copy of it on dev and/or test.