JBehave as a Data-Driven Testing Framework

I have some scenarios written in JBehave and I would like to run them against 1000+ data rows. The problem is that I cannot list all the data items under 'Examples' because, firstly, it is not maintainable and, secondly, I get this data file every day from an external service.
Is there a way to write a scenario that can take its data from the file?

Parameters can be loaded from an external file.
Details with an example are here: http://jbehave.org/reference/stable/parametrised-scenarios.html
Loading parameters from an external resource
The parameters table can also be loaded from an external resource, be it a classpath resource or a URL.
Given a stock of <symbol> and a <threshold>
When the stock is traded at <price>
Then the alert status should be <status>
Examples:
org/jbehave/examples/trader/stories/trades.table
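For reference, the external resource is just an ordinary pipe-delimited ExamplesTable. A trades.table matching the parameters above might look like this (the values are made up for illustration):
|symbol|threshold|price|status|
|STK1|15.0|5.0|OFF|
|STK1|15.0|16.0|ON|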
We need to enable the parser to find the resource with the appropriate resource loader configured via the ExamplesTableFactory:
new MostUsefulConfiguration()
    .useStoryParser(new RegexStoryParser(
        new ExamplesTableFactory(new LoadFromClasspath(this.getClass())))
    );
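For context, here is a minimal sketch of where that configuration could live, assuming a JUnit-based story runner (the class name and story-path pattern below are illustrative, not part of the original answer):

import java.util.List;

import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.LoadFromClasspath;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.model.ExamplesTableFactory;
import org.jbehave.core.parsers.RegexStoryParser;

public class TraderStories extends JUnitStories {

    @Override
    public Configuration configuration() {
        // The ExamplesTableFactory is given a resource loader so that the
        // Examples: section can reference an external .table file.
        return new MostUsefulConfiguration()
                .useStoryParser(new RegexStoryParser(
                        new ExamplesTableFactory(new LoadFromClasspath(this.getClass()))));
    }

    @Override
    protected List<String> storyPaths() {
        // Illustrative pattern; point this at your own story files.
        return new StoryFinder().findPaths(
                CodeLocations.codeLocationFromClass(this.getClass()), "**/*.story", "");
    }
}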

I too have the same requirement, and I think the approach below could be a possible solution.
Implement a method that reads the Excel sheet and prepares the testData.table file before the scenario starts executing, using the @BeforeScenario JBehave annotation in the steps Java file.
Refer to this link on how to load data from an external resource: http://jbehave.org/reference/stable/parametrised-scenarios.html
@BeforeScenario
public void prepareTestData() {
    // Java code to read the Excel sheet and write it out as a *.table file
    // before the scenario starts
}
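Since the data file arrives daily from an external service, the preparation step could simply convert it into the pipe-delimited .table format before the stories run. A minimal sketch, assuming the export is a plain CSV (file names and paths below are placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class TestDataTableWriter {

    // Converts a comma-separated export into the pipe-delimited ExamplesTable format.
    public static void writeTable(Path csvFile, Path tableFile) throws IOException {
        List<String> rows = Files.readAllLines(csvFile).stream()
                .map(line -> "|" + line.replace(",", "|") + "|")
                .collect(Collectors.toList());
        Files.write(tableFile, rows);
    }

    public static void main(String[] args) throws IOException {
        writeTable(Paths.get("data/daily-export.csv"),
                   Paths.get("src/test/resources/testData.table"));
    }
}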

Related

RestAssured response stored in one step definition and used in another step definition is giving NPE

So I have feature A and step definition A, and similarly feature B and step definition B. In both of these step definitions there is a step "THEN validate that response is 200", and its implementation points to a common step definition (which is in step definition A).
The issue is that when I run class B, "THEN validate that response is 200" fails, because its implementation is in class A and the response there is NULL. How should I handle this?
When you define a step which is common across features, you are sharing the same step (and its associated step definition). The variable that you are using in step definition A to store the response is not shared with step definition B.
The best way to address this is to define a separate class for calling the REST API and use that as a common class for both features, as shown in the gif below:
https://nocodebdd.live/nocodebdd-demo-npe-issue
I have also attached a gif on how this could be done in NoCodeBDD. I am the creator of NoCodeBDD. I created this product to speed up automation of BDDs without having to write any code. I would love to get some feedback on the product from the community. You can download a free version from https://www.nocodedd.com/download
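Without NoCodeBDD, the same idea can be hand-rolled: below is a minimal sketch of a shared class that both step definition classes delegate to, so the response captured while running feature A is also visible to feature B (the class and method names are assumptions, not your actual code). Cucumber's dependency injection modules (e.g. cucumber-picocontainer) are the more idiomatic way to share such state between step definition classes.

import io.restassured.RestAssured;
import io.restassured.response.Response;

// Shared holder for the last response, used by both step definition classes.
public class ApiClient {

    private static Response lastResponse;

    public static void get(String path) {
        lastResponse = RestAssured.get(path);
    }

    public static Response lastResponse() {
        return lastResponse;
    }
}

// In either step definition class:
// @Then("validate that response is 200")
// public void validateResponseIs200() {
//     org.junit.Assert.assertEquals(200, ApiClient.lastResponse().getStatusCode());
// }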
It seems like you want to share your glue code across multiple features, for which Cucumber already provides an option under @CucumberOptions.
What I meant by this is: just provide the glue option details within your runner file, which triggers the features. For example:
@CucumberOptions(features = "<Feature Files Path (eg: src/test/resources/features)>", glue = {"<stepdef package (eg: com.demo.stepdefs)>"})
This should make your glue code shareable among all the features present within the Feature Files Path.
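For completeness, a minimal sketch of a JUnit 4 runner using that option (the package name and features path are placeholders, and the imports assume a recent Cucumber version):

import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = {"com.demo.stepdefs"})
public class TestRunner {
}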

How to use one scenario output to another scenario without using properties files

I am working on an API testing project. My requirement is to use the response of one API as the input of another. I need different feature files for each API, so the challenge is to use the output of one API as input to another, which in my case means the output of one feature file as the input of another.
Also, I don't want to call one feature file from another. So to achieve this, we currently use a Runner class to initiate the test and a properties file to store the responses. In the same run we read this properties file, which acts as the input to the other API (feature file).
Is there any other, better way to do this, since we are not willing to use properties files in the framework?
Thanks
I think you are over-complicating your tests. My advice is to combine the two calls into one scenario. Otherwise there is no way, unless you call a second feature file.

Play: Automating test data setup

I have a Play Framework project that has reached beta/user testing.
For this testing we require test data to exist in the environment.
I am looking for a way to automate this via scripts.
The best way would be via calls to the API, passing correctly shaped data based on the models in the project (thus dependent on the project, not external).
Are there any existing SBT plugins that I could utilise to create the appropriate JSON and pass it to the API to set up the environment?
Why do you need a plugin for this? I think what you want to do is to have a set of JSON, then call the endpoints and see what the response from the back-end is. In the case of "setting up" based on a call that has a JSON body, you could use FakeRequest in your tests:
val application = new GuiceApplicationBuilder().build()
val response = route(application, FakeRequest(POST, "/end-point")).get
contentAsString(response) must include("where is Json")
In your test you can also check the response from the back-end against the JSON you are feeding it:
Create a set of JSON using Writes, based on a case class you are using in the back-end. You could also purposely create an invalid JSON, one that misses a field, for example, or has an invalid structure.
Use table-driven testing and send FakeRequest with the body/header containing your JSON, then check it against the expected results.
I'm on the move; when I get home, I can write some example code here.

How to mock http:request-config that has oauth2

I'm writing a functional test and having difficulty mocking an http:request-config with OAuth2. It fails when requesting the token. I tried moving the config to a separate file, creating a different config in src/test/resources, and including only the test config when testing. Now it complains about "name must be unique". How do I get around this?
Be sure that your getConfigFiles() override does not include the configuration file that contains the original http:request-config. This means it will need to be in a separate file from the one containing the flow you are testing.
Another method is to use a mock HTTP server such as sham-http.
In order to test a Mule application you can use MUnit:
http://developer.mulesoft.com/docs/display/current/MUnit
It will allow you to mock message processors.
Now, config elements are top-level elements; those cannot be mocked.
I would suggest you take a look at the documentation to see if the tool fits your needs.
HTH

How to design a REStful API for a media analysis engine

I am new to the RESTful concept and have to design a simple API for a media analysis service I need to set up, to perform various tasks (e.g. face analysis, region detection) on uploaded images and video.
The outline of my initial design is as follows:
Client POSTs a configuration XML file to http://manalysis.com/facerecognition. This creates a profile that can be used for multiple analysis sessions. The response XML includes a ProfileID to refer to this profile. Clients can skip this step to use the default config parameters.
Client POSTs video data to be analyzed to http://manalysis.com/facerecognition (with ProfileID as a parameter, if it's set up). This creates an analysis session. The returned XML has the SessionID.
Client can send a GET to http://manalysis.com/facerecognition/SessionID to receive the status of the session.
Am I on the right track? Specifically, I have the following questions:
Should I include facerecognition in the URL? Roy Fielding says that "a REST API must not define fixed resource names or hierarchies". Is this an instance of that mistake?
The analysis results can either be returned to the client in one large XML file or when each event is detected. How should I tell the analysis engine where to return the results?
Should I explicitly delete a profile when analysis is done, through a DELETE call?
Thanks,
C
You can fix the entry point URL:
GET /facerecognition
<FaceRecognitionService>
<Profiles href="/facerecognition/profiles"/>
<AnalysisRequests href="/facerecognition/analysisrequests"/>
</FaceRecognitionService>
Create a new profile by POSTing the XML profile to the URL in the href attribute of the Profiles element:
POST /facerecognition/profiles
201 - Created
Location: /facerecognition/profile/33
Initiate the analysis by creating a new Analysis Request. I would avoid using the term session as it is too generic and has lots of negative associations in the REST world.
POST /facerecognition/analysisrequests?profileId=33
201 - Created
Location: /facerecognition/analysisrequest/2103
Check the status of the process
GET /facerecognition/analysisrequest/2103
<AnalysisRequest>
<Status>Processing</Status>
<Cancel Method="DELETE" href="/facerecognition/analysisrequest/2103" />
</AnalysisRequest>
When the processing has finished, the same GET could return:
<AnalysisRequest>
<Status>Completed</Status>
<Results href="/facerecognition/analysisrequest/2103/results" />
</AnalysisRequest>
The specific URLs that I have chosen are relatively arbitrary; you can use whatever is clearest to you.
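To make the flow concrete from the client side, here is a rough sketch of the three exchanges using Java's built-in HttpClient; the host, paths, and XML payloads are illustrative only and would follow whatever URL scheme you settle on:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FaceRecognitionClient {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://manalysis.com/facerecognition";

        // 1. Create a profile by POSTing a configuration XML document.
        HttpRequest createProfile = HttpRequest.newBuilder(URI.create(base + "/profiles"))
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString("<Profile>...</Profile>"))
                .build();
        HttpResponse<String> profileResponse =
                client.send(createProfile, HttpResponse.BodyHandlers.ofString());
        System.out.println("Profile created at "
                + profileResponse.headers().firstValue("Location").orElse("?"));

        // 2. Create an analysis request referencing the profile (33 stands in
        //    for the id returned in the Location header above).
        HttpRequest createAnalysis = HttpRequest.newBuilder(
                        URI.create(base + "/analysisrequests?profileId=33"))
                .POST(HttpRequest.BodyPublishers.ofString("<VideoReference>...</VideoReference>"))
                .build();
        String analysisUrl = client.send(createAnalysis, HttpResponse.BodyHandlers.ofString())
                .headers().firstValue("Location").orElseThrow();

        // 3. Check the status; once Completed, the returned XML links to the results.
        HttpRequest status = HttpRequest.newBuilder(
                        URI.create("http://manalysis.com" + analysisUrl))
                .GET()
                .build();
        System.out.println(client.send(status, HttpResponse.BodyHandlers.ofString()).body());
    }
}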