What's the best way to configure karate to stop execution when any of the scenario fails?
I saw karate.abort() however I think it will just abort that specific scenario.
I want to abort the whole feature file execution.
This is not supported as of now. It has never been requested before; if anything, people demand the opposite ("soft assertions"). Feel free to raise a feature request and explain why this is important.
Note that you have the option of running a chosen Scenario by name (IntelliJ) or tag from the IDE.
Related
Currently I have a workflow.feature for a Gatling performance test that calls all existing functional tests in the given order. If one of the tests breaks, the whole workflow is stopped.
How to force the execution of all steps even if one step fails?
Feature: A workflow of all functional tests to be executed for performance/loading tests.
Scenario: Test all functional scenarios in the given order.
* call read('classpath:foo1/bar1.feature')
* call read('classpath:foo2/bar2.feature')
* call read('classpath:foo3/bar3.feature')
...
* call read('classpath:fooX/barX.feature')
This is a manually managed list of calls, but maybe there is a way to grab all existing feature files from all subfolders dynamically?
If one of the tests breaks, the whole workflow is stopped.
If you use a Scenario Outline: it processes all rows even if one fails. So maybe:
Scenario Outline:
* call read('classpath:' + file)
Examples:
| file            |
| foo/bar.feature |
| baz/ban.feature |
maybe there is a dynamic way to grab all existing feature files from all subfolders
You should be able to write Scala code to do this if you insist, but that has nothing to do with Karate. Or the dynamic approach above may give you some ideas. Hint: you can mix Java into Karate feature files very easily.
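For illustration, here is a hedged Java sketch of that hint; FeatureFinder is an invented name, not part of Karate. It walks a folder tree and collects every *.feature file, so a feature could obtain the list via Karate's Java interop and loop over it:

import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical helper (not Karate API): collect all *.feature files
// under a root folder, sorted for a stable execution order.
public class FeatureFinder {
    public static List<String> find(String root) throws IOException {
        try (Stream<Path> paths = Files.walk(Paths.get(root))) {
            return paths
                .filter(p -> p.toString().endsWith(".feature"))
                .map(p -> p.toString().replace('\\', '/'))
                .sorted()
                .collect(Collectors.toList());
        }
    }
}

From a feature you could then do something like * def files = Java.type('FeatureFinder').find('src/test/java') and call each entry; whether you can feed the list straight into a dynamic Scenario Outline depends on your Karate version.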
Is there a way to force the execution of a list of features in any order so, that the next feature file is executed if the previous one is failed.
See above. Also, please don't ask so many questions in one; keep it simple.
Is there a way to get Karate to automatically print the name of each scenario into the logs as it is executed? We have a reasonably large suite that generates ~25 MB of log data in our Jenkins pipeline console output, and it is sometimes a little tricky to match a line where com.intuit.karate logs an ERROR to the failure summary at the end of the run. It is most likely possible to obtain the scenario name and print() it, but that would mean adding code to many hundreds of scenarios, which I'd like to avoid.
As part of the fix for this issue Karate will log the Scenario name (if not empty) along with any failure trace.
A beta version with this fix is available (0.6.1.2); it would be great if you could try it and confirm.
If you feel more has to be done, do open a ticket and we'll do our best to get this into the upcoming 0.6.2 release.
I found a bug in an open source project on GitHub, and wrote a failing test for it, but haven't suggested a fix due to insufficient familiarity with the code.
How does one usually contribute such tests? Shall I create a pull request? Note that continuous integration would fail for my commit, as it adds a (currently) failing test.
(For reference here's the actual test)
You can use the "Issues" functionality of GitHub: create an issue as a bug report instead of creating a pull request.
I am new to software testing. Currently I need to test a middle-sized web application. We have just refactored our codebase and added a lot of event-logging logic to the existing code. The event-logging code writes to both the Windows Event Log and a SQL database table.
There are about 200 events. What approach should I take to test/verify this code refactoring effectively and efficiently?
Thanks.
I would be tempted to implement unit tests for each of the events to make sure that when an event occurs the correct information is passed into your event-logging logic.
This would mean that you can trigger one event on the deployed site, verify the data is written to the database and event log, and then have an acceptable level of confidence that the remaining events will be recorded correctly.
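As a hedged sketch of that idea (EventLogger and OrderService are assumed names, not from the application under discussion), a JUnit + Mockito test could verify the hand-off like this:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class OrderServiceEventTest {

    // assumed logging seam
    interface EventLogger {
        void log(String eventId, String details);
    }

    // assumed class under test: raises an event when an order is cancelled
    static class OrderService {
        private final EventLogger logger;
        OrderService(EventLogger logger) { this.logger = logger; }
        void cancelOrder(String orderId) {
            // ... business logic would go here ...
            logger.log("ORDER_CANCELLED", orderId);
        }
    }

    @Test
    public void cancellingAnOrderLogsTheCorrectEvent() {
        EventLogger logger = mock(EventLogger.class);
        new OrderService(logger).cancelOrder("42");
        // the correct event id and payload reached the logging layer
        verify(logger).log("ORDER_CANCELLED", "42");
    }
}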
If unit testing isn't an option then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of failures in this area; that way you would have 200 tests rather than 400.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps
I would have added tests before you did the refactoring; you don't know where you have broken it already :).
You say that it logs to the Event Viewer and the DB; I hope you have exposed the logging feature as an interface (see the sketch after this list) so that you can:
Extend it to log to some other device if needed
Mock it a lot more easily
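A minimal sketch of that seam, with assumed names (EventLog, plus one sink per destination and a composite that fans out to both):

public interface EventLog {
    void write(String eventId, String message);
}

class WindowsEventLog implements EventLog {
    public void write(String eventId, String message) {
        // call into the Windows Event Log here
    }
}

class SqlEventLog implements EventLog {
    public void write(String eventId, String message) {
        // INSERT into the audit table here
    }
}

// fans every event out to all configured sinks
class CompositeEventLog implements EventLog {
    private final EventLog[] sinks;
    CompositeEventLog(EventLog... sinks) { this.sinks = sinks; }
    public void write(String eventId, String message) {
        for (EventLog sink : sinks) {
            sink.write(eventId, message);
        }
    }
}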
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape from creating an equal number of tests for your 200 events.
I would do it this way:
I would search for all the places where my logging interface is used, note all the calling classes, and
start with the critical paths first (that way you at least cover the critical ones),
or you could start from the end, i.e. note down all possible combinations of logs you are getting, and maybe point the application at fixed data, so that if the input is the same the output should be the same too. Then regression-test each new set of binaries against this data; you should get a similar number and level of logs.
This shouldn't be too difficult.
Pick a free automated web-test tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application you expect/need to fire events. Examine the SQL DB after each test to see which events fired.
If those event streams are correct for the test, add a step to the test to verify that exactly that event stream was created in the DB.
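As a hedged Java sketch of that flow (the URL, element id, connection string, and table/column names are all assumptions), using Selenium WebDriver to fire the event and JDBC to check the database:

import java.sql.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class EventStreamCheck {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            // drive the app to the point where the event should fire
            driver.get("http://localhost:8080/app/orders");
            driver.findElement(By.id("cancel-order-42")).click();

            // then assert the expected row landed in the event table
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=AppDb", "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT COUNT(*) FROM EventLog WHERE EventId = ?")) {
                ps.setString(1, "ORDER_CANCELLED");
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    if (rs.getInt(1) != 1) {
                        throw new AssertionError("expected exactly one ORDER_CANCELLED row");
                    }
                }
            }
        } finally {
            driver.quit();
        }
    }
}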
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
The efficient & effective part of this approach is that it allows you to create only as many tests as you need to verify the app. Also, you do not need to recreate a unit-test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent & repeatable results. Re-testing will also take nearly as long as the original pass, since the testing uncovers defects that need to be fixed.
Note: while this will be the most effective & efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris
I'd like to run tests that simulate users modifying certain data at the same time for a grails application.
Are there any plug-ins / tools / mechanisms I can use to do this efficiently? They don't have to be Grails-specific. It should be possible to fire multiple actions in parallel.
I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing, if you'd recommend running concurrent-modification tests at the integration level as well.
I have used Geb (http://grails.org/plugin/geb/) for this recently. It is a layer on top of WebDriver, Selenium, etc. It's very easy to write a Grails script that acts as a user of your app and then just run several instances in different consoles. Geb uses a jQuery-style syntax for locating things in the DOM, which is very cool:
import geb.Browser
import geb.Configuration

includeTargets << grailsScript("_GrailsInit")

target(main: "Do stuff as fast as possible") {
    Configuration cfg = new Configuration(baseUrl: "http://localhost:8080/your_app/")
    Browser.drive(cfg) {
        go "user/login"
        $("#login form").with {
            email = "someone@somewhere.com"
            password = "secret"
            _action_Login().click()
        }
        ...
    }
}

setDefaultTarget(main)
Just put your script in scripts/YourScript.groovy and then you can run "grails YourScript". I tracked down some concurrency issues by just running several of these at full speed. You do need to build a WAR and deploy it properly, as Grails in dev mode is very slow and runs out of PermGen space quite quickly.
Just an idea: it seems difficult to make the clients start at exactly the same time, but can they wait for each other just before modifying the data?
For example, each client logs its progress ("Client x accessing DATA", "Client x editing DATA") to a file, and all clients watch that log file to see the others' progress. Then don't allow a client to complete editing a piece of DATA until another client has come in to edit that same DATA.
I've found Grinder to be an excellent tool for heavy load testing. Running multiple instances performing the same tests at one time can often uncover concurrency issues in your app that you wouldn't find with normal tests.
If you want to do this within Unit Tests or in-code Integration Tests, you could always spin up multiple threads in code and have them perform the task you're trying to test.
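A minimal sketch of that approach in plain Java (updateRecord() is a placeholder for whatever modification you are testing): a latch lines all the threads up so they hit the same data as close to simultaneously as the JVM allows.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentUpdateTest {
    public static void main(String[] args) throws Exception {
        int clients = 10;
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(clients);

        for (int i = 0; i < clients; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    start.await();        // everyone waits at the gate...
                    updateRecord(id);     // ...then races on the same data
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            });
        }
        start.countDown();                // open the gate for all threads at once
        done.await();
        pool.shutdown();
        // assert on the final state of the shared record here
    }

    static void updateRecord(int clientId) {
        // placeholder for the concurrent modification under test
    }
}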
Are you primarily interested in load-testing multiple active users, as opposed to users who merely have an HttpSession? Solid load testing is predicated on really good functional testing, however. How are your functional tests organized and executed today? Grails has a plug-in* for that, too, and it appears to be in the Top of the Pops at the plug-in portal.
Are you attempting to test out how the optimistic locking mechanism performs under load?
If the former use case is the one that matters more, it sounds like you may be looking for JUnitPerf (see the project's download page).
*functional-test <1.2.7> -- Functional Testing
WebTest is built on Ant, which provides the parallel task. You might be able to use this in conjunction with the WebTest plugin to run some actions in parallel. I've never tried it, though.
Have a look at MultithreadedTC. It looks like it could be used to exercise certain interleaving cases where multiple threads are executing your code in ways you consider potentially risky.
I doubt you'll find a convenient way to test specific multithreaded interleaving cases with Selenium because Selenium controls a browser which sends requests to your server. I haven't heard of a way to instrument code for multithreaded interleaving tests when the threads are started as real web requests to a running web server.
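For a flavour of MultithreadedTC (a hedged sketch; the shared counter stands in for your own code, and the tick-based API is taken from the library's documented usage):

import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

public class InterleavingTest extends MultithreadedTestCase {
    private int counter;

    @Override
    public void initialize() {
        counter = 0;
    }

    // every method named thread* runs in its own thread
    public void thread1() {
        counter++;                        // runs at tick 0
    }

    public void thread2() {
        waitForTick(1);                   // blocks until thread1 has finished
        counter++;                        // then runs at tick 1
    }

    @Override
    public void finish() {
        if (counter != 2) {
            throw new AssertionError("expected 2 increments, got " + counter);
        }
    }

    public static void main(String[] args) throws Throwable {
        TestFramework.runOnce(new InterleavingTest());
    }
}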