Running a test in golang that only works for some versions [duplicate]

This question already has answers here:
How do I skip a test file if it is run on systems with go 1.4 and below?
(2 answers)
Closed 6 years ago.
I have a library here (https://github.com/turtlemonvh/altscanner) that includes a test comparing functionality of a custom scanner to bufio.Scanner. In particular, I am comparing my approach to the Buffer method which wasn't added until go1.6.
My actual code works with versions of go back to 1.4, but I wanted to include this test (and I'd like to add a benchmark as well) that uses the Buffer function of the bufio.Scanner object.
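For context, here is a simplified sketch (not the actual test code) of the go1.6-only call the test depends on:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(strings.NewReader("a single long line of input\n"))
	// Scanner.Buffer was added in go1.6; on go1.4/1.5 this line does not
	// compile, which is why the test itself has to be version-gated.
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
```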
How can I include these tests that use features of go1.6+ while still allowing code to run for go1.4 and 1.5?
I imagine the answer is using a build flag to trigger the execution of these tests only if explicitly requested (and I do have access to the go version in my CI pipeline via a Travis environment variable). I could also abuse the short flag here.
Is there a cleaner approach?

A few minutes after posting this I remembered build constraints. Go has a built-in constraint that handles this exact case, i.e. "version of go must be >= X".
Moving that test into a separate file and adding // +build go1.6 at the top fixed it.
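For illustration, here is a minimal sketch of what such a version-gated test file can look like (the package name, file contents, and test body are simplified placeholders, not the actual code from the repo). The constraint lines must come before the package clause and be followed by a blank line; the //go:build line is the newer (Go 1.17+) spelling of the same constraint and is treated as an ordinary comment by older toolchains.

```go
//go:build go1.6
// +build go1.6

package altscanner // assumed package name for the example

import (
	"bufio"
	"strings"
	"testing"
)

// TestScannerBuffer is a hypothetical test that only builds on go1.6+,
// because bufio.Scanner.Buffer does not exist in go1.4/1.5. The build
// constraint above excludes this file from older toolchains entirely.
func TestScannerBuffer(t *testing.T) {
	scanner := bufio.NewScanner(strings.NewReader("one line\nanother line\n"))
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // go1.6+ API
	lines := 0
	for scanner.Scan() {
		lines++
	}
	if err := scanner.Err(); err != nil {
		t.Fatal(err)
	}
	if lines != 2 {
		t.Fatalf("expected 2 lines, got %d", lines)
	}
}
```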

How to check if a pre-compiled libtcmalloc.so is compiled without libunwind?

I do not know where to even start; apologies for the noob question, but there seems to be nothing on this specific case on SO, unless it goes by more generic terms that I do not know.
Recent versions of gperftools (from the last 5+ years or so) support an environment variable that dumps the backtracing method in use. Set TCMALLOC_STACKTRACE_METHOD_VERBOSE to 1 and it will show what it uses by default. You can also override the method via the TCMALLOC_STACKTRACE_METHOD environment variable.
Another option is via the package manager. On my Debian system I see that libtcmalloc-minimal4 is linked without libunwind (as expected, since it doesn't capture any backtraces), while libgoogle-perftools4 does depend on libunwind8. On Debian and Ubuntu you can see that by running apt-cache show on the package.

Karate - Starting Mock Server with multiple feature files [duplicate]

This question already has an answer here:
Karate Standalone as Mock Server with multiple Feature Files
(1 answer)
Closed 1 year ago.
My feature files are structured so that each module has common, mock, and test feature files.
For example, the category module has category-common.feature, category-mock.feature and category-test.feature, which contain the common definitions, mock API definitions, and tests, respectively, for the category APIs.
We are using the java -jar karate.jar -m <feature_file> command to run the mock server.
This approach is good when we are testing the APIs module by module. The question is: how can we deploy all the mocks together on a single port?
As per this answer, it is not possible. If that is indeed the case, what other approaches can we follow?
Someone contributed a PR to add this after the 1.0 release, so you should read this thread: https://github.com/intuit/karate/issues/1566
And you should be able to test and provide feedback on 1.1.0.RC2.
Of course if you can contribute code, nothing like it :)

ExecutionHook with parallel runner [duplicate]

This question already has answers here:
Dynamic scenario freezes when called using afterFeature hook
(2 answers)
Closed 1 year ago.
I am using the parallel runner to run one of my feature files. It has 8 scenarios as of now. I wanted to integrate a third-party reporting plugin (Extent Reports) to build out the reports. I planned to use the ExecutionHook interface to try and achieve this. Below are the issues I faced and have not found answers to, even after looking at the documentation.
My issues:
I am creating a new test entry in the afterFeature method. This gives me two handles, Feature and ExecutionContext. However, since the tests are running in parallel, the reporting steps are getting mixed up with each other. How do I handle this? Is there any out-of-the-box method I can use?
To counter the above, I decided to build the whole report at the end, in the overridden afterAll method, but there I am missing the execution context data, so I cannot use context.getRequestBuilder() to get the URLs and paths.
Any help would be great.
Please focus on the 1.0 release: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
Reasons:
it gives you a way to build the whole report at the end, and the Results object lets you iterate over all ScenarioResult instances
ExecutionHook has been changed to RuntimeHook, see example
yes, since tests can run in parallel, it is up to you to synchronize, as the framework has to be high-performance; but building reports at the end using the Results object is recommended over using the RuntimeHook

Post-process xcodebuild test output in bamboo

I am looking for a way to export the test results from a test run with xcodebuild test ... to the Atlassian Bamboo CI server. This is what I have found so far:
ocunit2junit: Consumes the raw output and produces a set of JUnit *.xml files that can be read by the Bamboo JUnit reporter. Unfortunately, it's not working well with Xcode 11 (it doesn't pick up all the test results). It hasn't been updated in the past eight years, which makes it likely that the xcodebuild output has changed enough to render the parser fragile.
trainer: This appears to be a smaller project that uses the xcresults file. The last update was 13 months ago. My concern is that this might end up being even more fragile in case Apple decides to change some internals.
xcpretty: The top dog that is widely recommended and referenced. Unfortunately, it hasn't been updated in more than two years, and this issue suggests that won't change in the future. I have also had trouble exporting the test results in JUnit format, and error reporting isn't working properly.
All of these export to JUnit format, which is then picked up by Bamboo; maybe that's not the best choice? Apart from these options, are there any alternatives I have missed that export xcodebuild test results to Bamboo?

FluentAssertions without exceptions? [duplicate]

This question already has an answer here:
Customize failure handling in FluentAssertions
(1 answer)
Closed 2 years ago.
This seems like a long shot...
I am building a test harness for manual testing (for my QA Team). It runs in a console application and can output some level of smart data, but nothing so automatic as a fully automated test (not my rules).
I would love to use FluentAssertions to generate the text to show, but I don't want to throw an exception.
Is there a way to have FluentAssertions just output a string with its fluent message? (Without throwing an exception.)
NOTE: I am aware of a possible workaround (try/catch statements around an AssertionScope around my fluent assertion checks), but I am hoping to keep the extra code to a minimum so as not to confuse the non-programmer QA person who has to use the test harness.
You could replace the Services.ThrowException property with custom behavior or you could use AssertionScope's Discard method.