How to specify which binary IntelliJ uses when running tests with the JUnit plugin

I use IntelliJ to run JUnit tests.
I would like to specify the name/path of the command it uses to execute them. Specifically, rather than the configured JDK's bin/java, I'd like to use a custom command (e.g. my_java).
My particular reason is that I'd like my_java to be a small script that launches "java" at a lower priority. If there is an alternate approach, that would be just as useful.

I reached out to JetBrains and asked them directly. According to them, specifying a custom binary is "not possible". They did suggest I look at writing a custom external tool.

Related

Can we do variable substitution on YAML files in IntelliJ?

I am using IntelliJ to develop Java applications which use YAML files for the app properties. These YAML files have some placeholder/template params like:
credentials:
  clientId: ${client.id}
  secretKey: ${secret.key}
My CI/CD pipeline takes care of substituting the actual values for these params (client.id and secret.key) based on the environment it is being deployed to.
I'm looking for something similar in IntelliJ: I configure some static/fixed values for the params (e.g. client.id and secret.key) within the IDE, and when I run the app locally from the IDE those values are substituted into the YAML files before running.
That would save me from having to restore the placeholder params in the YAML files each time I check in other changes to my version control system.
There is no such feature in IDEA, because IDEA cannot auto-detect every possible expression language or template macro that you could use in a YAML file. Furthermore, IDEA would have to create a context for those template files.
To IDEA it's just a normal YAML file.
IDEA does have a language injection feature. It can be used to inject SQL into a Java string, for instance, or to inject any language into a YAML field.
This is a really nice feature and can help you rename SQL column names and so on, but it won't solve your particular problem, because you want to make that template "runnable" within a certain context where you define your variables.
My suggestion would be to write a small, simple program that does roughly the same thing as the template engine.
If you only need simple string replacements and no macro execution, this can be done with a regular expression.
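For the simple-replacement case, a small Java sketch along these lines could do it (the file paths and property values below are made up for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Reads a YAML template, replaces ${...} placeholders with fixed local values,
// and writes the result to a new file that the local run can use.
public class YamlPlaceholderFiller {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    public static void main(String[] args) throws IOException {
        Map<String, String> localValues = Map.of(
                "client.id", "local-client-id",
                "secret.key", "local-secret-key");

        String text = Files.readString(Path.of("src/main/resources/application.yaml"));

        Matcher m = PLACEHOLDER.matcher(text);
        StringBuilder resolved = new StringBuilder();
        while (m.find()) {
            // unknown placeholders are left untouched
            String value = localValues.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(resolved, Matcher.quoteReplacement(value));
        }
        m.appendTail(resolved);

        Files.writeString(Path.of("target/application-local.yaml"), resolved.toString());
    }
}

You could run this as a small pre-run step in the IDE so the original template checked into version control never changes.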
If it's more complicated, I would use the same template engine as the "real" processor does.
If you want further help, it would be good to know what your YAML processing pipeline looks like.

Folder specific cucumber-reporting without parallel run?

I was wondering if I could set up cucumber-reporting for specific folders?
For example, following https://github.com/intuit/karate#naming-conventions, could I set up the third-party cucumber-reporting in CatsRunner.java, without parallel execution? Please advise or direct me on how to set it up.
Rationale: it's easier to read and helps me with debugging.
Using the 3rd-party Cucumber Reporting is always recommended when you want HTML reports. You can create as many Java classes similar to DemoTestParallel in different packages as you like. The first argument to CucumberRunner.parallel() needs to be a Java class - and by default, the same package and sub-folders will be scanned for *.feature files.
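For reference, a minimal parallel runner modeled on that example (package names and the exact parallel() signature may differ between Karate versions, so treat this as a sketch):

import com.intuit.karate.cucumber.CucumberRunner;
import com.intuit.karate.cucumber.KarateStats;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Scans this class's package (and sub-packages) for *.feature files and
// writes the JSON output that the 3rd-party reporting library consumes.
public class MyTestParallel {

    @Test
    public void testParallel() {
        KarateStats stats = CucumberRunner.parallel(getClass(), 5, "target/surefire-reports");
        assertTrue("there are scenario failures", stats.getFailCount() == 0);
    }
}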
BUT I think your question is simply how to easily debug a single feature in dev-mode. Easy, just use the @RunWith(Karate.class) JUnit runner: video | documentation
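A minimal sketch of such a runner (the class name comes from the question; the import path assumes the JUnit 4 runner and may differ between Karate versions):

import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

// Picks up *.feature files from this class's package by convention;
// run or debug it like any other JUnit test.
@RunWith(Karate.class)
public class CatsRunner {
}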

PMD-Eclipse: How to set suppressmarker?

As the PMD docs say, you can tell PMD to ignore a specific line by using the "NOPMD" marker, but you can use whatever text string you want to suppress warnings by using the command-line option -suppressmarker.
How can you set -suppressmarker when using the PMD-Eclipse plugin?
The PMD plugin for Eclipse doesn't contain all the options that the command line does. The easiest thing is to use //NOPMD and ignore the problem completely.
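For example, with the default marker a suppression looks like this (illustrative code):

public class Example {
    public void demo() {
        int unused = 0; // NOPMD - suppresses any PMD warning reported on this line
    }
}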
Alternatively, you could use an Ant script to run PMD. You can set an Ant script to run as a builder for an Eclipse project if you want it to run automatically. You lose the Eclipse integration with this approach though. Looking at an HTML report rather than being able to click on the error seems like a big tradeoff just to use a different marker.

How can I create a single Clojure source file which can be safely used as a script and a library without AOT compilation?

I’ve spent some time researching this and though I’ve found some relevant info, none of it has answered the question satisfactorily. Here’s what I’ve found:
SO question: “What is the clojure equivalent of the Python idiom if __name__ == '__main__'?”
Some techniques at RosettaCode
A few discussions in the Clojure Google Group, most from 2009
My Clojure source code file defines a namespace and a bunch of functions. There’s also a function which I want to be invoked when the source file is run as a script, but never when it’s imported as a library.
So: now that it’s 2012, is there a way to do this yet, without AOT compilation? If so, please enlighten me!
I'm assuming by run as a script you mean via clojure.main as follows:
java -cp clojure.jar clojure.main /path/to/myscript.clj
If so then there is a simple technique: put all the library functions in a separate namespace like mylibrary.clj. Then myscript.clj can use/require this library, as can your other code. But the specific functions in myscript.clj will only get called when it is run as a script.
As a bonus, this also gives you a good project structure, as you don't want script-specific code mixed in with your general library functions.
EDIT:
I don't think there is a robust way within Clojure itself to determine whether a single file was launched as a script or loaded as a library - from Clojure's perspective there is no difference between the two (it all gets loaded the same way, via Compiler.load(...) in the Clojure source, for anyone interested).
Options if you really want to detect the manner of the launch:
Write a main class in Java which sets a static flag and then launches the Clojure script. You can easily test this flag from Clojure (see the sketch after this list).
Use AOT compilation to implement a Clojure main class which sets a flag
Use *command-line-args* to indicate script usage. You'll need to pass an extra parameter like "script" on the command line.
Use a platform-specific method to determine the command line (e.g. from the environment variables in Windows)
Use the --eval option in the clojure.main command line to load your clj file and launch a specific function that represents your script. This function can then set a script-specific flag if needed
Use one of the methods for detecting the Java main class at runtime
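To make the first option concrete, here is a minimal sketch (the class and flag names are made up; it assumes clojure.jar is on the classpath and clojure.main is used as the underlying launcher):

// Launches a Clojure script via clojure.main, after setting a flag that the
// script can inspect through Java interop.
public class ScriptLauncher {

    public static volatile boolean launchedAsScript = false;

    public static void main(String[] args) {
        launchedAsScript = true;   // record that we were started as a script
        clojure.main.main(args);   // delegate to the standard Clojure launcher
    }
}

The script could then check the flag via interop, e.g. (ScriptLauncher/launchedAsScript); both the class and the flag name here are only illustrative.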
I’ve come up with an approach which, while deeply flawed, seems to work.
I identify which namespaces are known when my program is running as a script, then compare that count to the number of namespaces present at runtime. The idea is that if the file is being used as a lib, there should be at least one more namespace present than in the script case.
Of course, this is extremely hacky and brittle, but it does seem to work:
(defn running-as-script
  "This is hacky and brittle but it seems to work. I’d love a better
  way to do this; see http://stackoverflow.com/q/9027265"
  []
  (let [known-namespaces #{"clojure.set"
                           "user"
                           "clojure.main"
                           "clj-time.format"
                           "clojure.core"
                           "rollup"
                           "clj-time.core"
                           "clojure.java.io"
                           "clojure.string"
                           "clojure.core.protocols"}]
    (= (count (all-ns)) (count known-namespaces))))
This might be helpful: the github project lein-oneoff describes itself as "dependency management for one-off, single-file clojure programs."
This lets you define everything in one file, but you do need the oneoff plugin installed in order to run it from the command line.

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools into CDash, such as KWStyle, CppCheck and Visual Leak Detector.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml to submit?
The extreme solution I see is that I'd need to write a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer from google and even the XML Schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to valgrind, for instance. I'm not certain that I should use dynamic analysis at all. Maybe a custom build type (alongside Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
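For example, a minimal ctest -S script along those lines might look like this (all paths and names below are placeholders):

# dashboard.cmake - run with: ctest -S dashboard.cmake
set(CTEST_SITE               "my-machine")
set(CTEST_BUILD_NAME         "linux-gcc-memcheck")
set(CTEST_SOURCE_DIRECTORY   "/path/to/source")
set(CTEST_BINARY_DIRECTORY   "/path/to/build")
set(CTEST_CMAKE_GENERATOR    "Unix Makefiles")

set(CTEST_MEMORYCHECK_COMMAND         "/usr/bin/valgrind")
set(CTEST_MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full")

ctest_start(Experimental)
ctest_configure()
ctest_build()
ctest_memcheck()   # writes Testing/<tag>/DynamicAnalysis.xml from the valgrind output
ctest_submit()     # uploads the results, including DynamicAnalysis.xml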
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far, for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files from the ctest_submit command by using the FILES parameter.
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others, and that you can set the builds from certain machines to be automatically placed under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.