Set parameter to JUnit5 extension

I wonder how to configure an extension from a test.
Scenario:
The test provides a value that should be used inside the extension.
Current Solution:
The value is defined as a field inside the test and used by the extension by scanning all declared fields and picking the correct one (reflection).
Problem:
This solution is very indirect. People using this extension must know about its internals and declare the correct fields (type and value). Error messages can point the developer in the right direction, but it is a painful trial-and-error process.
Is there a way to use, e.g., an annotation with a value to configure a JUnit 5 extension?

You may either provide configuration parameters and access them via the ExtensionContext at runtime, or, since 5.1, there is a programmatic way to register customized extension instances via @RegisterExtension.
Example copied from the JUnit 5 User-Guide:
@RegisterExtension
static WebServerExtension server = WebServerExtension.builder()
    .enableSecurity(false)
    .build();

@Test
void getProductList() {
    WebClient webClient = new WebClient();
    String serverUrl = server.getServerUrl();
    // Use WebClient to connect to web server using serverUrl and verify response
    assertEquals(200, webClient.get(serverUrl + "/products").getResponseStatus());
}
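For the annotation-based configuration the question asks about, the extension can read a custom annotation through the ExtensionContext instead of scanning fields. A minimal sketch, assuming a made-up @ServerUrl annotation and extension name:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hypothetical annotation; registering the extension as a meta-annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@ExtendWith(ServerUrlExtension.class)
@interface ServerUrl {
    String value();
}

class ServerUrlExtension implements BeforeEachCallback {
    @Override
    public void beforeEach(ExtensionContext context) {
        // Read the value directly from the annotation on the test method
        // instead of scanning the test instance's fields via reflection.
        String url = context.getRequiredTestMethod()
                            .getAnnotation(ServerUrl.class)
                            .value();
        // ... configure whatever the extension needs with 'url'
    }
}

A test would then just carry @ServerUrl("http://localhost:8080") on the method, with no magic fields to declare.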

Related

Selenium Grid Proxy: how to get the response after a command get executed

I created a Selenium Grid proxy and I want to log every command executed. The problem is that I can't find a way to get the response of a command; for example, after a "GetTitle" command I want to get the "Title" that was returned.
Where do you want this logging to be done? If you do the logging in the custom proxy, these logs will be available only on the machine that runs the hub. Is that what you want? If yes, then here's how you should be doing it:
Within an overridden variant of org.openqa.grid.internal.listeners.CommandListener#afterCommand (this method should be available in the DefaultRemoteProxy extension that you are building), extract this information from the javax.servlet.http.HttpServletRequest by reading its entity value and then translating that into a proper payload.
Here's what the afterCommand() (or beforeCommand()) method of your customized version of org.openqa.grid.selenium.proxy.DefaultRemoteProxy can look like:
org.openqa.grid.web.servlet.handler.SeleniumBasedResponse ar = new org.openqa.grid.web.servlet.handler.SeleniumBasedResponse(response);
if (ar.getForwardedContent() != null) {
System.err.println("Content" + ar.getForwardedContent());
}
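Put in context, a customized proxy might look roughly like the sketch below. The class name is illustrative, and the constructor signature (Registry vs. GridRegistry) depends on your Selenium version:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.Registry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;
import org.openqa.grid.web.servlet.handler.SeleniumBasedResponse;

public class LoggingRemoteProxy extends DefaultRemoteProxy {

    public LoggingRemoteProxy(RegistrationRequest request, Registry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        super.afterCommand(session, request, response);
        // Wrap the response so the forwarded JSON payload (e.g. the title
        // returned by a GetTitle command) can be read and logged on the hub.
        SeleniumBasedResponse wrapped = new SeleniumBasedResponse(response);
        if (wrapped.getForwardedContent() != null) {
            System.err.println("Content: " + wrapped.getForwardedContent());
        }
    }
}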
If that's not what you want, then you should look at leveraging EventFiringWebDriver. Take a look at the blogs below to learn how to work with it. EventFiringWebDriver does not require any customization on the Grid side; it just wraps an existing RemoteWebDriver object, and the listeners you register with it give you this information.
http://darrellgrainger.blogspot.in/2011/02/generating-screen-capture-on-exception.html
https://rationaleemotions.wordpress.com/2015/04/18/eavesdropping-into-webdriver/ (this is my blog). Here I talk about not even using EventFiringWebDriver but instead working with a decorated CommandExecutor that logs all this information for you.
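For the client-side alternative, here is a minimal sketch of wrapping a RemoteWebDriver in an EventFiringWebDriver; the hub URL and the listener body are placeholders:

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
import org.openqa.selenium.support.events.EventFiringWebDriver;

public class CommandLoggingExample {
    public static void main(String[] args) throws Exception {
        // Plain RemoteWebDriver pointed at the grid hub.
        WebDriver remote = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"), DesiredCapabilities.firefox());

        // Wrap it; no customization on the grid side is required.
        EventFiringWebDriver driver = new EventFiringWebDriver(remote);
        driver.register(new AbstractWebDriverEventListener() {
            @Override
            public void afterNavigateTo(String url, WebDriver d) {
                System.out.println("Navigated to " + url + ", title: " + d.getTitle());
            }
        });

        driver.get("http://example.com");
        driver.quit();
    }
}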

"NotSupportedException" when WebRequest is unable to find a creator for that prefix

I have a really strange problem with WebRequest in a ServiceStack web application (hosted by XSP on Mono).
It seems that the registration of request modules works in a very strange way; I am using WebRequest to create an HTTP request, and it is failing because it was not able to find a creator for that "prefix" (HTTP).
The exception I am seeing is NotSupportedException, and I was able to track it to the fact that no creator is registered for the HTTP prefix (I am hitting https://github.com/mono/mono/blob/master/mcs/class/System/System.Net/WebRequest.cs, around line 479)
EDIT: more details: NotSupportedException is thrown by WebRequest.GetCreator, which uses the URL prefix as a key to choose which creator to return; in my case, an HttpRequestCreator. The exception is thrown because there is no creator registered for the "HTTP" prefix (actually, there are no creators at all).
So I searched around a little bit, dug into Mono sources, and found that modules are (or should be) added to the webRequestModules section of system.web in one of the various *.config files.
I looked at my machine.config file, and there it is:
System.Net.HttpRequestCreator, System, Version=4.0.0.0
Looking at the WebRequest Mono sources, it seems that prefixes are added from the configuration(s) inside the class's static constructor (not a good choice, IMHO, but still... it should work).
To test it, I tried to add an HttpRequestCreator to system.net/webRequestModules in my web.config; this is loaded by XSP/Mono and results in a duplicate key exception (which is expected since HttpRequestCreator should be already loaded, as it is already present in machine.config).
Even stranger: if I add a mock handler for Http, like this:
bool res = System.Net.WebRequest.RegisterPrefix ("http", new MyHttpRequestCreator ());
Debug.Assert (res == false);
The assertion sometimes passes... sometimes not!
(RegisterPrefix returns "false" if a creator for the same prefix is already registered; I expect it always to return false, but this is not the case! Again, it is completely random)
When the registration "fails" (i.e., returns false because an "HTTP" prefix is already registered), then WebRequest can create requests for HTTP. It is as if calling RegisterPrefix "wakes up" the static constructor and lets it run.
I am perplexed: it seems like a race condition in the execution of the static constructor of WebRequest, but this does not make sense (the runtime protects static constructors with a lock, IIRC).
What am I missing?
How could I solve or work around this problem?
Is it my fault (misunderstanding or missing something), or does it look like a Mono bug, so should I submit it?
Details:
mono --version
Mono JIT compiler version 3.0.6 (Debian 3.0.6+dfsg-1~exp1~pre1)
Possibly related, unanswered question:
HTTP protocol not supported in WebRequest under mono
Try this hacky workaround for this issue:
private static HttpWebRequest CreateWebRequest(Uri uri)
{
    // Instantiate the internal HttpRequestCreator directly and use it to build the request,
    // bypassing the prefix lookup that fails when no creators are registered.
    var type = Type.GetType("System.Net.HttpRequestCreator, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");
    var creator = Activator.CreateInstance(type, nonPublic: true) as IWebRequestCreate;
    return creator.Create(uri) as HttpWebRequest;
}
It sounds like you are experiencing a race condition in the execution of the static constructor of the WebRequest class. This could be caused by a variety of factors, including incorrect synchronization of access to shared resources, or the use of uninitialized variables.
One potential solution to this problem could be to ensure that the WebRequest class is initialized before attempting to use it. This can be done by accessing a static field or method of the WebRequest class before making any requests. For example:
int dummy = WebRequest.DefaultMaximumErrorResponseLength;
This will force the static constructor to run, ensuring that the WebRequest class is properly initialized before you attempt to make any requests.
Alternatively, you could try using the Lazy class to initialize the WebRequest class in a thread-safe manner. This can be done as follows:
private static readonly Lazy<int> DefaultMaximumErrorResponseLength =
new Lazy<int>(() => WebRequest.DefaultMaximumErrorResponseLength);
// ...
int dummy = DefaultMaximumErrorResponseLength.Value;
This will ensure that the WebRequest class is initialized in a thread-safe manner, without the risk of race conditions.

@ValidateConnection method is failing to be called when using "@Category component"

I have an issue in a new DevKit project where the following @ValidateConnection method is failing to be called (but my @Processor methods are called fine when requested in the flows):
@ValidateConnection
public boolean isConnected() {
    return isConnected;
}
I thought that the above should be called to check whether to call the @Connect method.
I think it is because I am using a non-default category (Components) for the connector:
@Category(name = "org.mule.tooling.category.core", description = "Components")
The resulting behaviour is different from what I am used to with DevKit in Cloud Connector mode.
I guess I will need to do checks in each @Processor for now to see if the initialization logic is done, as there doesn't seem to be an easy way to run one-time configuration.
EDIT:
I actually tried porting it back to a Cloud Connector @Category and got the same behaviour. Maybe it's an issue with DevKit -DarchetypeVersion=3.4.0; I used 3.2.x-something before and things worked a bit better.
The @ValidateConnection-annotated method in the @Connector is called at the end of the makeObject() method of the generated *ConnectionFactory class. If you look for references to who is calling your isConnected(), you should be able to confirm this.
So no, you should not need to perform the checks; it should be done automatically for you.
There must be something else missing... do you have a @ConnectionIdentifier-annotated method?
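For reference, a connection-managed connector in the DevKit 3.x style normally pairs @ValidateConnection with @Connect, @Disconnect and @ConnectionIdentifier, roughly as in the sketch below (names, and the exact annotation packages, are from memory and should be treated as an approximation rather than a verified template):

import org.mule.api.ConnectionException;
import org.mule.api.annotations.Connect;
import org.mule.api.annotations.ConnectionIdentifier;
import org.mule.api.annotations.Connector;
import org.mule.api.annotations.Disconnect;
import org.mule.api.annotations.ValidateConnection;
import org.mule.api.annotations.param.ConnectionKey;

@Connector(name = "my-connector")
public class MyConnector {

    private boolean connected;

    @Connect
    public void connect(@ConnectionKey String username) throws ConnectionException {
        // one-time initialization / connection logic
        connected = true;
    }

    @Disconnect
    public void disconnect() {
        connected = false;
    }

    @ValidateConnection
    public boolean isConnected() {
        return connected;
    }

    @ConnectionIdentifier
    public String connectionId() {
        // any string that identifies this connection instance
        return "001";
    }
}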
PS: the @Category annotation is purely for cosmetic purposes in Studio.

Sharing code between Worklight Adapters

In most cases I've dealt with so far the Worklight Adapter implementation has been pretty trivial, just a few lines of JavaScript.
On the current project, using WL 5.0.6, we have several adapters, each with several procedures. Our particular backends require some common logic to set up requests and interpret responses. It seems ideal for refactoring the common code into a shared library, except that, as far as I can see, there's no "library" concept in the adapter environment unless we want to drop down into Java.
Are there any patterns for code-reuse between adapters?
I think you are right. There is currently no way of importing custom JavaScript libraries.
There is a way to include/load JavaScript files in the Mozilla Rhino engine by using the load(xyz.js) function, but I've noticed that this will make your Worklight adapter undeployable. If you deploy a second *.js file within an adapter, you'll get the following error message:
Adapter deployment failed: Procedure 'getStories' is not implemented in the adapter's JavaScript file.
It seems like Worklight Server can only handle one JavaScript file per adapter.
I have shared some common functionality between adapters by implementing the functionality in Java code and including the jar file in the Worklight war file. This came in handy for invoking stored procedures via JDBC that handle multiple OUT parameters, and also for retrieving PDF content from internal backend services. The jar needs to be in the lib dir of the worklight.war web app that the adapter is deployed to.
Example of creating a Java object in the adapter:
var parm = new org.apache.http.message.BasicNameValuePair("QBS_Reference_ID",refId);
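As an illustration of that approach, a small utility class packaged into the shared jar might look like the sketch below; the package, class, and method names are invented for the example:

package com.example.shared;

public class AdapterUtils {

    // Callable from any adapter's JavaScript as:
    //   var header = com.example.shared.AdapterUtils.buildBasicAuthHeader(user, pass);
    public static String buildBasicAuthHeader(String user, String password) {
        String token = user + ":" + password;
        // javax.xml.bind is available on the Java 6/7/8 runtimes of that Worklight era.
        return "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(token.getBytes());
    }
}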
One way to share JavaScript between adapters is to follow a pattern somewhat like this:
CommonAdapter-impl.js:
var commonObject = {
    invokeBackend: function(input, options) {
        // Do stuff to check/modify input & options
        var response = WL.Server.invokeHttp(input);
        // Do stuff to check/modify response
        return response;
    }
};

function getCommonObject() {
    return commonObject;
}
NormalAdapter-impl.js:
function getSomeData(dataNumber) {
    var input = {
        method: 'get',
        returnedContentType: 'json',
        path: '/restservices/getsomedata'
    };
    return _getCommonObject().invokeBackend(input);
}
function _getCommonObject() {
    var invocationData = {
        adapter: 'CommonAdapter',
        procedure: 'getCommonObject',
        parameters: []
    };
    return WL.Server.invokeProcedure(invocationData);
}
In this particular case, the common adapter is being used to provide a "wrapper" function around WL.Server.invokeHttp, but it can be used to provide any other utility functions as well.
The reason for this pattern in particular is that it allows the WL.Server.invokeHttp to run in the context of the "calling" adapter, which means the endpoint (hostname, port number, etc.) specified in the calling adapter's .xml file will be used. It does mean that the short _getCommonObject() function has to be repeated in each "normal" adapter, but this is a fairly small piece of boilerplate.
Note that this pattern has been used with Worklight 6.1 - there is no guarantee it will work in future or past versions.

How to use webdriver using the dart library?

I am having trouble getting started with the webdriver dart library.
I was hoping for some simple examples.
I do have the Selenium standalone server running in the background.
I am very new to dart and very experienced with ruby and watir-webdriver.
I was expecting something similar to the code below
import 'package:webdriver/webdriver.dart';
main() {
var url = "http://google.com";
var driver = new WebDriver();
b = driver.newSession(browser:'firefox');
b.getUrl(url);
}
But the error I am getting is
Unhandled exception:
No constructor 'WebDriver' declared in class 'WebDriver'.
Looking at the source
class WebDriver extends WebDriverBase {
WebDriver(host, port, path) : super(host, port, path);
So it seems like the constructor is there, and the defaults in WebDriverBase point to the remote server. What am I doing wrong? I have scoured the internet trying to find simple examples, with no luck.
Currently, there are known issues with local and session storage, script execution, and log access.
To use these bindings, the Selenium standalone server must be running. You can download it at http://code.google.com/p/selenium/downloads/list.
There are a number of commands that use ids to access page elements. These ids are not the HTML ids; they are opaque ids internal to WebDriver. To get the id for an element, you would first need to do a search, get the results, and extract the WebDriver id from the returned Map using the 'ELEMENT' key. See http://commondatastorage.googleapis.com/dartlang-api-docs/13991/webdriver.html