So, I have say 3 TestMethods. I attach the same datasource to each, then run with 5 rows of data. I get 5 iterations of TM #1 followed by 5 of TM #2 followed by 5 of TM #3.
11111 22222 33333
What I'd prefer is an iteration of all 3 TMs, followed by another, etc.
123 123 123 123 123
I know you're not supposed to have dependencies among test methods, but the fact of the matter is that it's a workflow-driven application, and there are dependencies among operations. Can't do 2 until you've done 1, can't do 3 until you've done 2. Once you've done 1, you can't do it again. Etc.
11111, 22222, 33333 works when all goes well, and is appropriate for some test cases, but doesn't realistically reflect the way the app is used. And when it doesn't work, it can burn up a lot of data that we can't re-use, so we end up having to generate new data.
Diligently reading help and Googling like a fool has not produced any useful guidance on how... or even whether... this can be done.
Thoughts?
You can try wrapping them into one test method and adding the data source to it.
[TestMethod]
[DataSource XXXXXXX]
public void OuterTest()
{
    Scenario1();
    Scenario2();
    Scenario3();
}

private void Scenario1()
{
    // Do your stuff
}

private void Scenario2()
{
    // Do your stuff
}

private void Scenario3()
{
    // Do your stuff
}
Then you will get iterations like 123 123 123 123 123.
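If each scenario needs values from the current data row, one sketch of how to do that (assuming the MSTest `DataSource` attribute as above; the `Id` column name is hypothetical) is to read the row once via `TestContext` and pass it down:

```csharp
public TestContext TestContext { get; set; }

[TestMethod]
[DataSource XXXXXXX] // same data source placeholder as above
public void OuterTest()
{
    // TestContext.DataRow exposes the current row of a data-driven MSTest test
    var id = TestContext.DataRow["Id"].ToString(); // "Id" column is hypothetical
    Scenario1(id);
    Scenario2(id);
    Scenario3(id);
}
```

That way each 123 iteration operates on the same row of workflow data, which matches the "can't do 2 until you've done 1" constraint.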
Is it possible to continue the execution of test step even if one of the assert/match fails?
Ex:
Scenario: Testing
* def detail = {"a":{"data":[{"message":["push","dash"]},{"message":["data","Test"]}]}}
* match detail contains {"a":{"data":[{"message":["push","dash"]}]}}
* print detail
Here the match will fail and execution stops at that point.
Is there a way to do a soft assertion so that the next step still gets executed?
EDIT in 2021 - a PR introducing a continueOnStepFailure flag was contributed by Joel Pramos here and is available in Karate 1.0 onwards. You can find more details here: https://stackoverflow.com/a/66733353/143475
If you use a Scenario Outline each "row" is executed even if one fails.
Scenario Outline: Testing
* def detail = { a: 1, b: 2, c: 3 }
* match detail contains <expected>
Examples:
| expected |
| { a: 1 } |
| { b: 2 } |
| { c: 3 } |
Note that the concept of "soft assertions" is controversial and some consider it a bad practice:
a) https://softwareengineering.stackexchange.com/q/7823
b) https://martinfowler.com/articles/nonDeterminism.html
For those looking for a way to show all mis-matches between 2 JSON objects, see this: https://stackoverflow.com/a/61349887/143475
And finally, since some people want to do "conditional" match logic in JS, see this answer also: https://stackoverflow.com/a/50350442/143475
I want to run the function
{`Security$x}
over a list
order`KDB_SEC_ID
and return the list of values that failed. I have the below, which works, but I'm wondering if there is a neater way to write this without the use of a do loop.
Example Code:
idx:0;
fails:();
do[count (order`KDB_SEC_ID);
error:@[{`Security$x};(order`KDB_SEC_ID)[idx];0Nj];
if[error=0Nj;fails:fails,(order`KDB_SEC_ID)[idx]];
idx:idx+1;
];
missingData:select from order where KDB_SEC_ID in distinct fails;
I agree that Terry's answer is the simplest method, but here is a cleaner way to do what you were attempting, to help you see how to achieve it without using do loops.
q)SECURITY
`AAPL`GOOG`MSFT
q)order
KDB_SEC_ID val
--------------
AAPL 1
GOOG 2
AAPL 3
MSFT 4
IBM 5
q)order where @[{`SECURITY$x;0b};;1b] each order`KDB_SEC_ID
KDB_SEC_ID val
--------------
IBM 5
It outputs 0b if the enumeration succeeds and 1b if it fails, resulting in a boolean list. Using where on a boolean list returns the indices where the 1b's occur, which you can use to index into order to return the failing rows.
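For instance, where applied directly to a boolean list returns the indices of the 1b's:

```q
q)where 01001b
1 4
```

Indexing a table with those indices then yields exactly the failing rows.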
If your test is to check which of the KDB_SEC_ID's can be enumerated against the Security list, couldn't you do
q)select from order where not KDB_SEC_ID in Security
Or am I missing something?
To answer your question in a more general case, you could achieve a try-catch over a list to return the list of fails using something like
q){x where @[{upper x;0b};;1b] each x}(2;`ab;"Er";1)
2 1
I'm using late acceptance as local search algorithm and here is how it actually picks moves:
If my forager limit is 5, it'll gather 5 moves and then pick 1 of them at random to be applied for each step.
At every step it only picks moves that increase the score, i.e. greedy picking across steps.
Forager.pickMove()
public LocalSearchMoveScope pickMove(LocalSearchStepScope stepScope) {
    stepScope.setSelectedMoveCount(selectedMoveCount);
    stepScope.setAcceptedMoveCount(acceptedMoveCount);
    if (earlyPickedMoveScope != null) {
        return earlyPickedMoveScope;
    }
    List<LocalSearchMoveScope> finalistList = finalistPodium.getFinalistList();
    if (finalistList.isEmpty()) {
        return null;
    }
    if (finalistList.size() == 1 || !breakTieRandomly) {
        return finalistList.get(0);
    }
    int randomIndex = stepScope.getWorkingRandom().nextInt(finalistList.size()); // should have checked for best here
    return finalistList.get(randomIndex);
}
I have two questions:
First, can we make the forager pick the best of the 5 instead of picking 1 randomly?
Second, can we allow picking a move that degrades the score but may lead to a better score later (with no way to know that in advance)?
Look for acceptedCountLimit and selectedCountLimit in the docs. Those do exactly that.
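For reference, a minimal local search config sketch (element names come from the OptaPlanner XML solver config; the lateAcceptanceSize value is purely illustrative):

```xml
<localSearch>
  <acceptor>
    <lateAcceptanceSize>400</lateAcceptanceSize>
  </acceptor>
  <forager>
    <!-- evaluate up to 5 accepted moves per step before the forager picks a winner -->
    <acceptedCountLimit>5</acceptedCountLimit>
  </forager>
</localSearch>
```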
That's already the case (especially with Late Acceptance and Simulated Annealing). In the DEBUG log, just look at the step score vs the best score. Or ask for the step score statistic in optaplanner-benchmark.
My team have settled on MSpec for our BDD testing framework, which from their usage so far looks really good, but I'm struggling with the documentation/Google for finding any implementation similar to SpecFlow's 'Scenario Outline'. I've shown an example of this below; basically it allows you to write one 'test' and run it multiple times from a table (example) of inputs/expected outputs. I'll be embarrassed if the answer turns out to be a LMGTFY, but I've not been able to find anything myself. I don't want to say to the team it's not possible if I've just not found how to do it in MSpec (or understood MSpec properly). I wonder if this is why, in some of the pros/cons for MSpec, I see the number of classes you can end up with listed as a negative.
Example of SpecFlow Scenario Outline
Scenario Outline: Successfully Convert Seconds to Minutes Table
When I navigate to Seconds to Minutes Page
And type seconds for <seconds>
Then assert that <minutes> minutes are displayed as answer
Examples:
| seconds | minutes |
| 1 day, 1 hour, 1 second | 1500 |
| 5 days, 3 minutes | 7203 |
| 4 hours | 240 |
| 180 seconds | 3 |
From: https://gist.github.com/angelovstanton/615da65a8f821d7a43c92ef9e2fd0b01#file-energyandpowerconvertcalculator-feature
Short answer: this is currently not supported by MSpec. We planned this several years back, but the contribution never made it back into master.
If you want scenario outlines, either use a different framework or create parameterized static methods in a helper class and call these from your context classes, which will leave you with one class per scenario.
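A minimal sketch of that helper approach (the converter, the helper names, and the `ShouldEqual` assertion extension are all assumptions, not MSpec-prescribed API):

```csharp
// Shared, parameterized test logic lives in one static helper.
public static class ConversionSpec
{
    public static void AssertConversion(string input, int expectedMinutes)
    {
        // SecondsToMinutesConverter is a hypothetical system-under-test
        SecondsToMinutesConverter.Convert(input).ShouldEqual(expectedMinutes);
    }
}

// One small MSpec context class per example row.
[Subject("Seconds to minutes conversion")]
public class when_converting_1_day_1_hour_1_second
{
    It returns_1500_minutes = () =>
        ConversionSpec.AssertConversion("1 day, 1 hour, 1 second", 1500);
}
```

Each "row" of the SpecFlow table becomes one tiny context class, which keeps the duplication down to a single one-liner per example.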
I'm using the StackExchange Miniprofiler with ASP.NET MVC 4. I'm currently trying to profile an assignment to a member variable of a class with an expensive expression that generates the value to be assigned. Miniprofiler doesn't seem to want to profile assignment statements. I've simplified my code to highlight the error:
public ActionResult TestProfiling()
{
    var profiler = MiniProfiler.Current;

    using (profiler.Step("Test 1"))
        Thread.Sleep(50);

    int sue;
    using (profiler.Step("Test 2"))
    {
        sue = 1;
    }
    if (sue == 1)
        sue = 2;

    using (profiler.Step("Test 3"))
    {
        Thread.Sleep(50);
        int bob;
        using (profiler.Step("Inner Test"))
        {
            bob = 1;
        }
        if (bob == 1)
            bob = 2;
    }
    return View();
}
N.B. the if statements are simply to avoid compiler warnings.
Test 1 and Test 3 get displayed in the Miniprofiler section on the resulting page. Test 2 and Inner Test do not. However if I replace the contents of either Test 2 or Inner Test with a sleep statement they get output to the resulting page.
What is going on here? Even if I replace the simple assignment statement inside one of the non appearing tests i.e.
using (profiler.Step("Test 2"))
{
    ViewModel.ComplexData = MyAmazingService.LongRunningMethodToGenerateComplexData();
}
with a more complex one, the Test 2 step still doesn't get output to the rendered Miniprofiler section. Why isn't Miniprofiler profiling assignment statements?
Edit: code example now corresponds to text.
Edit2: After further digging around it seems that the problem isn't with assignment statements. It seems that whether something gets displayed in the output results is dependent on how long it takes to execute. i.e.
using (profiler.Step("Test 2"))
{
    sue = 1;
    Thread.Sleep(0);
}
Using the above code, Test 2 is not displayed in the Miniprofiler results.
using (profiler.Step("Test 2"))
{
    sue = 1;
    Thread.Sleep(10);
}
Using the above code Test 2 is now displayed in the Miniprofiler results.
So it seems my LongRunningMethodToGenerateComplexData turns out to be quite quick... but is it expected behaviour of Miniprofiler to not show steps that take a really small amount of time?
Just click on "show trivial" at the bottom right of the profiler results.
This should show all steps whose duration falls below the trivial threshold.
It seems the problem was that Miniprofiler isn't displaying results for steps where the execution time is less than 3ms.
Edit: From the Miniprofiler documentation.
TrivialDurationThresholdMilliseconds Any Timing step with a duration less than or equal to this will be hidden by default in the UI; defaults to 2.0 ms.
http://community.miniprofiler.com/permalinks/20/various-miniprofiler-settings
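If you'd rather have short steps shown by default than click "show trivial" each time, the threshold quoted above can be lowered at startup. A sketch using the older MiniProfiler.Settings API (verify the property against your MiniProfiler version):

```csharp
// Steps at or below this duration are treated as "trivial" and hidden by default.
// Lowering it makes sub-millisecond steps like "Test 2" visible without clicking.
MiniProfiler.Settings.TrivialDurationThresholdMilliseconds = 0.5m;
```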