My requirement is: I want parallel execution with, say, 5 threads, where every thread creates an entity. I want more threads so that test execution time goes down. But as the number of threads increases, I get an error from the db saying it is unable to acquire a lock, because all threads use the same user to create an entity. Is it possible in Karate to use multiple user credentials, so that threads pick users randomly when creating an entity?
Simple solution: write the logic in Java to do this and make it a singleton or a static method. Then make a call to it from your script, something like this:
* var MyCode = Java.type('com.myco.MyCode')
* var entity = MyCode.getEntity()
So you can keep track of the entities created (maybe in a Set or Map) and re-use them as you wish.
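For illustration, a minimal sketch of what that helper could look like; only the class name matches the snippet above, while the user list and the entity-creation logic are placeholder assumptions:

package com.myco;

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class MyCode {

    // pool of db users so concurrent threads stop colliding on one account
    private static final List<String> USERS = Arrays.asList("user1", "user2", "user3");

    // entities created so far, keyed by user, shared across threads
    private static final Map<String, String> ENTITIES = new ConcurrentHashMap<>();

    public static String getEntity() {
        // each call picks a random user, creating that user's entity only once
        String user = USERS.get(ThreadLocalRandom.current().nextInt(USERS.size()));
        return ENTITIES.computeIfAbsent(user, MyCode::createEntityAs);
    }

    private static String createEntityAs(String user) {
        // placeholder: call your API / db as this user and return the entity id
        return "entity-created-by-" + user;
    }
}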
Sorry Karate does not have built-in support for this kind of thing.
I am currently working on testing a small API in our company, and I need to randomly distribute a number of calls across all methods of that API. I am using JMeter 5.3 (company security policy, so we do not have newer versions, if that matters).
Because the number of methods is around 15, my idea right now is to supply a .properties file to JMeter which contains the overall number of calls to the API; then, through a JSR223 Sampler in a setUp Thread Group, I set properties with a random number of users for each thread. However, I encountered a problem in doing so: I successfully set all the properties, but I cannot access them when calling the __property function in other Thread Groups.
Is there a way to set those properties by script and access them through a JMeter function?
Edit: adding the code I am using in the setUp Thread Group to add the properties:
def jmeter_properties = new Properties()
// load the overall user count from the external .properties file
jmeter_properties.load(new FileInputStream(new File('env.properties')));
def allUsers = jmeter_properties.get('number.of.users') as Integer;
def random = new Random();
def thisUsers = random.nextInt(allUsers);
allUsers = allUsers - thisUsers;
props.put('getProjectById.users', thisUsers);
I'm not sure whether it's expected behaviour or a bug in JMeter which should be reported via JMeter Bugzilla, but I do confirm that:
Your code is correct
The generated value cannot be referenced by either __P() or __property() function calls
However, if you use the __groovy() function, the property is resolved just fine, so if you use something like:
${__groovy(props.get('getProjectById.users'),)}
in the second (or any other) Thread Group, you will get the outcome you're looking for.
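A possible explanation (my assumption, not confirmed in the thread): props is a java.util.Properties object, and Properties.getProperty() returns null for values that are not Strings, which is what the __P() and __property() functions ultimately rely on. Since the script stores thisUsers as an Integer, storing it as a String instead, e.g. props.put('getProjectById.users', thisUsers as String), may make the value resolvable by __P() as well; __groovy(props.get(...)) works either way because get() returns the raw Object.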
I am performing a load test with Karate Gatling. As per my requirement, I need to create a booking, take the bookingId from the response, and pass it to the update/cancel booking request.
I have tried the below process:
In the test.feature file:
* def createBooking = call read('createBooking.feature')
* def updateBooking = call read('updateBooking.feature') { bookingid: '#(createBooking.response.bookingId)' }
I am trying to apply 1000 ramp users at a time.
In the Gatling simulation file:
val testReq = scenario("testing").exec(karateFeature("classpath:test.feature"))
setUp(
testReq.inject(rampUsers(1000).during(1 seconds))
)
This process is unable to give me the required throughput, and I am unable to find the bottleneck: whether the problem is with Karate or with the API server. Each scenario has both create and update bookings, so I am trying to capture all 1000 booking ids from the responses during the load test and pass them to the update/cancel booking requests. I plan to save them to a file and use the saved booking responses when updating a booking. As I am new to Karate, can anyone suggest a way to store all the load-test API responses to a file?
The 1.0 RC version has better support for passing data across feature files; refer to this: https://github.com/intuit/karate/issues/1368
So in the Scala code you should be able to do something like this:
session("myVarName").as[String]
And to get the RC version, see: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
That said, please be aware that getting complex data-driven tests to work as a performance test is not easy, so yes, you will need to do some research. My suggestion is to read and understand the info in the first link in this answer.
Writing to a file is absolutely NOT recommended during a performance test. If you really want to go down that route, please read this: https://stackoverflow.com/a/54593057/143475
Finally if you are still stuck, please follow the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
I have tried a few approaches to solve my problem but with no success (I do need to improve my Java :)), so I am hoping that I am missing something or that someone can point me in the right direction.
I have multiple microservices that I need to test. I should be able to test all at once or only the ones I want. Each service has its own DB and different feature files. Note that these services may not all be up and running.
I can run the tests by manually setting the config for each service. Ideally I would like to pass a variable with the service name on the command line and have the tests start.
In the current setup I use callSingle to run DBInit.feature, which runs SQL scripts to populate my DB. I have also set global variables that are used in the feature files. And this works fine.
Problems start when I add more feature files that test a service which is not running, and when I have to use callSingle for a specified service to populate its DB.
The first idea was to use different envs, but I could need 5 envs executed in a single run and with one report. Then I was thinking of implementing a runner for each service, but I am not sure whether these runners run in parallel, and not sure how I could populate the DB in that case.
Is it possible to use a custom variable that is passed to the main test class?
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

import org.junit.BeforeClass;
import org.junit.Test;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class DemoTestSelected {

    @BeforeClass
    public static void beforeClass() throws Exception {
        TestBase.beforeClass();
    }

    @Test
    public void testSelected() {
        List<String> tags = Arrays.asList("~@ignore");
        List<String> features = Arrays.asList("classpath:demo/cats");
        String karateOutputPath = "target/surefire-reports";
        Results results = Runner.path(features)
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir(karateOutputPath).parallel(5);
        DemoTestParallel.generateReport(karateOutputPath);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
For example, could tags and features be set in config?
I re-read your question a few times and gave up trying to understand it. But I'll lay down a couple of principles:
you should use tags to decide which features to run / not run. Try to fit everything you need into this model and don't complicate things
for more control, you can set some "system property" on the command line, and before you use the Runner you can write some Java logic along the lines of: "if karate.env (or some other system property) is foo, then select tags one, two and three", etc. (see the sketch after this list)
yes, the Karate 1.0 series can technically run multiple Runner instances in parallel, but that is left to you and we don't have an example; it would require you to manage threads or a Java Executor manually
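A minimal sketch of the system-property idea (the service property name and the tag names are hypothetical, not from the question):

import java.util.ArrayList;
import java.util.List;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class SelectedServiceRunner {

    public static void main(String[] args) {
        // run with e.g.: mvn test -Dservice=cats
        String service = System.getProperty("service", "all");
        List<String> tags = new ArrayList<>();
        tags.add("~@ignore");
        if (!"all".equals(service)) {
            // features for each service are assumed to carry a matching tag
            tags.add("@" + service);
        }
        Results results = Runner.path("classpath:demo")
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir("target/surefire-reports")
                .parallel(5);
        System.exit(results.getFailCount() == 0 ? 0 : 1);
    }
}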
In a Mule app (using Mule 4), I am trying to invoke a single API multiple times for an input array of Strings like this:
"input_arr": [
"val1", "val2", "val3"
]
All the invocations can run in parallel as they are independent, but I want to wait and collate the results once they all complete. Also, if one or more result in errors, I want to obtain that as well.
I tried a couple of different ways:
1. A simple foreach -- not efficient, since it is sequential.
2. Batch -- but it is async, and the main flow does not wait.
What would be the best way to achieve this efficiently in MuleSoft?
If you are using Mule 4.2+ then parallel-foreach might achieve what you are looking for.
The Parallel For Each scope enables you to process a collection of messages by splitting the collection into parts that are simultaneously processed in separate routes within the scope of any limitation configured for concurrent-processing.
NOTE: However, because this feature is not available in the Anypoint Studio Mule Palette view, you must manually configure Parallel For Each scope in the XML.
Also there are some differences other than concurrency with the new scope, so make sure to read the documentation:
https://docs.mulesoft.com/mule-runtime/4.2/parallel-foreach-scope
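A minimal sketch of that XML (the flow name, config-ref, and path are hypothetical; adjust to your API and include the http namespace in your config):

<flow name="parallel-calls-flow">
    <!-- each element of the array is processed concurrently in its own route -->
    <parallel-foreach collection="#[payload.input_arr]">
        <http:request method="POST" config-ref="Target_API_Config" path="/process" />
    </parallel-foreach>
    <!-- after the scope completes, the payload is a collection of the results
         of all routes; failing routes surface as an aggregated error -->
</flow>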
Probably the solution is to use the Parallel Foreach scope from Mule 4.2.
Using Ncqrs, is there a way to replay every single event that ever happened (all aggregate types) and feed these through my denormalizers in order to recreate the whole read model from scratch?
Edit:
I thought it'd be good to provide a more specific use case. I'm building this inside an ASP.NET MVC application and using Entity Framework (Code First) for working with the read models. In order to speed up development (and because I'm lazy), I want to use a database initializer that recreates the database schema whenever any read model changes, and then use the initializer's Seed method to repopulate them.
There is unfortunately nothing built in to do this for you (though I haven't updated the version of Ncqrs I use in quite a while, so perhaps that has changed). It is also somewhat non-trivial, since it depends on exactly what you want to do.
The way I would do it (up to this point I have not had the need) would be to:
Call the event store to get all relevant events
Depending on what you are doing, this could be all events, just the events for one aggregate root, or a subset of events for one or more aggregate roots.
Re-create the read model in memory from scratch (to avoid slow and unnecessary writes)
Store the re-created read model in place of the existing one
Call the event store one more time to get any events that may have been missed
Repeat until no new events are returned (sketched below)
One thing to note: if you are recreating the entire read-model database from scratch, I would take the service offline temporarily or queue up new events until you finish.
Again, there are different ways you could approach this problem; your architecture and scenarios will probably dictate how best to do it.
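A hedged sketch of that catch-up loop in C#: it leans on the MsSqlServerEventStore.GetEventsAfter method used in the answer below, while the batch size, the nullable start id, and the EventIdentifier property are my assumptions:

var eventStore = NcqrsEnvironment.Get<IEventStore>() as MsSqlServerEventStore;
var eventBus = NcqrsEnvironment.Get<IEventBus>();

Guid? lastEventId = null; // assumed to mean "start from the beginning"
while (true)
{
    // fetch the next batch of events after the last one we processed
    var events = eventStore.GetEventsAfter(lastEventId, 1000).ToList();
    if (events.Count == 0) break; // caught up: nothing was missed

    eventBus.Publish(events); // the denormalizers rebuild the read model
    lastEventId = events.Last().EventIdentifier;
}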
We use an MsSqlServerEventStore; to replay all the events, I implemented the following code:
var myEventBus = NcqrsEnvironment.Get<IEventBus>();
if (myEventBus == null) throw new Exception("EventBus is not found in NcqrsEnvironment");
var myEventStore = NcqrsEnvironment.Get<IEventStore>() as MsSqlServerEventStore;
if (myEventStore == null) throw new Exception("MsSqlServerEventStore is not found in NcqrsEnvironment");
var myEvents = myEventStore.GetEventsAfter(GetFirstEventIdFromEventStore(), int.MaxValue);
myEventBus.Publish(myEvents);
This will push all the events onto the event bus, and the denormalizers will process them all. The function GetFirstEventIdFromEventStore just queries the event store and returns the first Id (where SequentialId = 1).
What I ended up doing is the following: at service startup, before any commands are processed, if the read model has changed, I throw it away and recreate it from scratch by processing all past events in my denormalizers. This is done in the database initializer's Seed method, as sketched below.
This was a trivial task using the MS SQL event store, as there was a method for retrieving all events. However, I'm not sure about other event storages.
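For illustration, a hedged sketch of such an initializer (EF Code First; the context type is hypothetical, and the replay code is reused from the answer above):

public class ReadModelInitializer : DropCreateDatabaseIfModelChanges<ReadModelContext>
{
    protected override void Seed(ReadModelContext context)
    {
        // replay every stored event through the denormalizers, as in the
        // answer above, so the freshly created schema is repopulated;
        // GetFirstEventIdFromEventStore is the helper described there
        var eventBus = NcqrsEnvironment.Get<IEventBus>();
        var eventStore = NcqrsEnvironment.Get<IEventStore>() as MsSqlServerEventStore;
        var events = eventStore.GetEventsAfter(GetFirstEventIdFromEventStore(), int.MaxValue);
        eventBus.Publish(events);
    }
}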