I'm trying to create a CI workflow which requires me to do both code coverage and ordinary testing. We run both at the moment, because the coverage run instrumented for grcov can fail while the normal test run passes. I want to automate comparing the tests that failed under coverage instrumentation with the same tests run without it. So for each test that failed under grcov instrumentation, I want to run the same test without instrumentation.
Currently the way to do it is to hardcode the test paths and/or use unreliable text-based test detection. It's both labour-intensive and not very reliable once people start adding more tests (which they will).
So my questions are:
Is there a way to get a list of all tests that are to be run from a multi-crate project?
Is there a way to attach actions to cargo test when a test fails, without stopping cargo test?
Is there a way to ask cargo test to only report the names of failed tests (if any)?
Your best bet for acting on cargo test results would be to use the unstable --format json option. It will output events in JSON Lines format:
cargo test -- -Zunstable-options --format json
{ "type": "suite", "event": "started", "test_count": 2 }
{ "type": "test", "event": "started", "name": "tests::my_other_test" }
{ "type": "test", "event": "started", "name": "tests::my_test" }
{ "type": "test", "name": "tests::my_test", "event": "ok" }
{ "type": "test", "name": "tests::my_other_test", "event": "failed", "stdout": "thread 'tests::my_other_test' panicked at 'assertion failed: false', src/lib.rs:20:9\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\n" }
{ "type": "suite", "event": "failed", "passed": 1, "failed": 1, "ignored": 0, "measured": 0, "filtered_out": 0, "exec_time": 0.003150667 }
You can save that to a file and use another utility like jq to parse out the relevant data. Here are some examples that might be helpful to you.
Get all the tests that were run:
jq -c 'select(.type=="test" and .event=="started") | .name' <test-output.json
"tests::my_other_test"
"tests::my_test"
Get all the tests that failed:
jq -c 'select(.type=="test" and .event=="failed") | .name' <test-output.json
"tests::my_other_test"
Related
I installed the Mochawesome results reporting add-on for Cypress. The problem is that every time all the tests finish, the reports folder only contains the HTML file for the last spec. Do you know how to make a report from the whole set of tests?
However, only one file is created each time, and it is the last one.
In my cypress.json I have this:
{
  "projectId": "fi4fhz",
  "viewportHeight": 1080,
  "viewportWidth": 1920,
  "testFiles": [
    "settings.js",
    "test1.js",
    "test2.js",
    "test3.js",
    "test4.js",
    "test5.js",
    "test6.js",
    "test7.js",
    "test8.js",
    "test9.js",
    "test10.js"
  ],
  "env": {
    "numTestsKeptInMemory": 0,
    "projectUrl": "https://testlocal:6001/",
    "settings": {
      "SP": {
        "tenant": "k.online",
        "clientId": "3a15528c",
        "clientSecret": ".u4L",
        "administrationUrl": ""
      }
    }
  },
  "reporter": "mochawesome",
  "reporterOptions": {
    "charts": true,
    "overwrite": false,
    "html": false,
    "json": true,
    "reportDir": "cypress/report/mochawesome-report"
  }
}
Your configuration seems correct, but try changing reportDir to cypress/report and follow these steps:
After running all tests, you should find multiple mochawesome .json files, one for each spec. To create a single HTML report, you need one more package.
Install the mochawesome-merge package (npm i mochawesome-merge).
Run mochawesome-merge "cypress/report/*.json" > mochawesome.json; this will create a single JSON file that contains all the tests.
Now run marge mochawesome.json (marge stands for MochAwesome Report GEnerator) and a mochawesome-report directory will be created, containing an HTML report with all your tests.
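For reference, here is the whole sequence as a shell sketch. It assumes reportDir is changed to cypress/report as suggested above; mochawesome-report-generator is only needed if the marge command is not already available in your project.
npm i --save-dev mochawesome-merge mochawesome-report-generator

# Each spec writes its own mochawesome JSON file into cypress/report
npx cypress run

# Merge the per-spec JSON files into one, then build a single HTML report from it
npx mochawesome-merge "cypress/report/*.json" > mochawesome.json
npx marge mochawesome.json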
I'm using the Selenium IDE for Chrome on macOS Big Sur. I want to add a pause in between commands to see why something isn't executing properly. This is in my ".side" file:
}, {
  "id": "32f35ed7-1a28-4540-a93d-3cb8ba0e012a",
  "comment": "",
  "command": "pause",
  "target": "",
  "targets": [],
  "value": "100000"
}, {
I have put in a really high value, but when I play back my test it just breezes through without pausing at all, although it tells me the command was run successfully.
What's the right way to pause my test?
Use Target instead of Value, like below:
Target: 100000
instead of
Value: 100000
Also, set the speed to Fastest (Actions --> Fastest), otherwise it won't work.
My fixtures are set up like so
{
  "fixtures": [
    {
      "name": "login",
      "pageUrl": "http:\/\/localhost:3000\/",
      "tests": [
        {
          "name": "type name",
          "commands": [
            {
              "type": "type-text",
              "studio": {},
              "callsite": "0",
              "selector": {
                "type": "js-expr",
                "value": "input[type=email]"
              },
              "options": {},
              "text": "example#email.com"
            }
          ]
        }
      ]
    }
  ]
}
with one simple test to find the input and type some text, but when I run the command I get:
testcafe chrome login.testcafe
ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
Type "testcafe -h" for help.
I've seen this issue a couple of times on their issues board: one relating to CI integration on a Linux server, and another which seems like a similar issue of trying to connect to localhost:
https://github.com/DevExpress/testcafe-browser-provider-electron/issues/20
https://github.com/DevExpress/testcafe/issues/1133
I'm new to TestCafe, so any help would be appreciated!
I've found the solution: some network policies don't allow access to your machine on certain ports; in my example it's 57501.
testcafe chrome login.testcafe --hostname localhost
Adding --hostname resolves the issue.
Documentation:
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--hostname-name
I still don't know how to do this when launching from the IDE, but it resolves my main issue.
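If the network policy only opens specific ports on the machine, it may also help to pin the two ports TestCafe serves on instead of letting it pick random ones. The port numbers below are just examples; use whichever ports are open in your environment:
testcafe chrome login.testcafe --hostname localhost --ports 1337,1338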
TestCafe Studio Preview does not support setting command line options (hostname in your case). The TestCafe team is going to implement this functionality in the official release.
So, for now, it is only possible to run tests via a command line.
UPDATE:
You can set the hostname option in the TestCafe Studio Settings dialog.
In JMeter I have an automation test plan with several assertions. In my Assertion Results listener I can see the results of all assertions in a handy overview. So far so good.
At the end of the test plan, I'm calling JIRA to post a new issue with the test results. I want the description of that issue to contain the overview from the assertion result listener.
How can I define the assertion results as a variable, so that I can reference them later in my JIRA call?
How can I map this view to a variable?
My JIRA call should look like this:
POST /rest/api/2/issue
{
  "fields": {
    "project": {
      "key": "Blah"
    },
    "assignee": {
      "name": "Joe"
    },
    "priority": {
      "name": "Major"
    },
    "summary": "Jmeter Test Result",
    "description": "${assertionresults}",
    "issuetype": {
      "name": "Test Execution"
    }
  }
}
After the Sampler with the assertion, you can add a Test Action and, inside it, a JSR223 PreProcessor with the following code, which uses the AssertionResult.getFailureMessage method:
vars.put("assertionresults", prev.getAssertionResults()[0].getFailureMessage());
It will save the first assertion's failure message in the assertionresults variable.
I'm building several base images for our infrastructure and would like to mimic the Docker Hub nomenclature for image tags. For example, the Java image on Docker Hub includes several aliases for the same image, e.g. 8 and latest are the same image.
If I were to replicate this system in ImageStreams, I would need to create a BuildConfig with an output specification like this:
"output": {
"to": {
"kind": "ImageStreamTag"
"name": "jdk:8"
}
}
Obviously, this only includes one tag, so even if I were to write
"output": {
"to": {
"kind": "ImageStreamTag"
"name": "jdk:8"
},
"to": {
"kind": "ImageStreamTag"
"name": "jdk:latest"
}
}
only the last to definition would actually take effect.
Is there any proper way to push the same image into different tags apart from creating a different BuildConfig (which would probably "build" from Docker image to Docker image)?
There is a card on the Trello board to do this: https://trello.com/c/nOX8FTRq/686-5-support-multiple-tags-for-a-build-output
You should also be able to do this using oc tag to avoid having to run the same build twice.
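For example, after the build has pushed jdk:8, something like this should add the extra tag without running another build (the image stream and tag names match the example above):
# point jdk:latest at the image currently referenced by jdk:8
oc tag jdk:8 jdk:latest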