NUnit 3.0 console doesn't work with "include" parameter

Before NUnit 3.0 I used the following parameter to include a category of tests to execute: /include:"name"
Now in NUnit 3.0 the syntax seems to be different. According to https://github.com/nunit/dev/wiki/Command-Line-Options I have to use something like -include=name, but it seems to me that the correct form would be --include=name, since I write other parameters like --workers that way and they work.
The problem is that when I use the --include parameter I get the error "Invalid argument: --include=Selenium" (the Jenkins console shows me this error).
What am I doing wrong?

I dug deeper and found that, for now, there is no --include parameter!
What we should use instead is: --where "cat==name"
cat = category. If we want to take priority into consideration, we can do something like this: --where "cat==name && Priority==High"
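For example, a full console invocation might look like the following (a sketch; the assembly name is a placeholder, while the Selenium category and Priority property come from the question above):

# Run only the tests in the Selenium category; MyTests.dll is a placeholder.
nunit3-console.exe MyTests.dll --where "cat == Selenium"

# Combine filters: a category plus a custom Priority property.
nunit3-console.exe MyTests.dll --where "cat == Selenium && Priority == High"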

What does #+SKIP_ENV=true mean in this Makefile?

I'm tinkering with this project where Step 6 requires me to run a command like make db-prepare-artix7. This command corresponds to this section of the Makefile. I am confused by the #+SKIP_ENV=true in the recipe. What is #+SKIP_ENV here, and what does it do? I couldn't find anything referring to SKIP_ENV.
Thanks!
Explaining every part:
- The # means the command will not be echoed by Make during recipe execution.
- The + means the command will be executed even during dry runs: make --dry-run ...
- The SKIP_ENV=true is sh(ell) syntax for setting the environment variable SKIP_ENV to the string true for the duration of the command that follows, in your case the source ... command.
- The effect of SKIP_ENV depends on the command - dig deeper to find out.
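As a quick illustration of that VAR=value prefix (a minimal sketch; printenv is only a stand-in for the real source ... command):

SKIP_ENV=true printenv SKIP_ENV   # prints "true": the variable is set for this one command
printenv SKIP_ENV                 # prints nothing: SKIP_ENV was never exported to the shell itself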

How to use Bamboo plan variables in an inline script task?

When defining a Bamboo plan variable, the page says this:
For task configuration fields, use the syntax ${bamboo.myvariablename}. For inline scripts, variables are exposed as shell environment variables which can be accessed using the syntax $BAMBOO_MY_VARIABLE_NAME (Linux/Mac OS X) or %BAMBOO_MY_VARIABLE_NAME% (Windows).
However, that doesn't work in my Linux inline script. For example, I have the following defined as a plan variable:
name: my_plan_var
value: some_string
My inline script is simply...
PLAN_VAR=$BAMBOO_MY_PLAN_VAR
echo "Plan var: $PLAN_VAR"
and I just get a blank string.
I've tried this
PLAN_VAR=${bamboo.my_plan_var}
But I get
${bamboo.my_plan_var}: bad substitution
in the log viewer window.
Any pointers?
I tried the following and it works:
On the plan, I set my_plan_var to "it works" (w/o quotes)
In the inline script (don't forget the first line):
#!/bin/sh
PLAN_VAR=$bamboo_my_plan_var
echo "testing: $PLAN_VAR"
And I got the expected result:
testing: it works
So the working form is $bamboo_my_plan_var - the bamboo_ prefix plus the variable name in its original case - rather than the uppercased $BAMBOO_MY_PLAN_VAR form the documentation suggests.
I also wanted to create a Bamboo variable, and the only way I've found to share it between scripts is with inject-variables, as follows.
Add this to your bamboo-spec.yaml after the script that creates the variable:
Build:
  tasks:
    - script: create-bamboo-var.sh
    - inject-variables:
        file: bamboo-specs/vars.yaml
        scope: RESULT
        # namespace: plan
    - script: echo ${bamboo.inject.GIT_VERSION} # just for testing
Note: Namespace defaults to inject.
In create-bamboo-var.sh create the file bamboo-specs/vars.yaml:
#!/bin/bash
versionStr=$(git describe --tags --always --dirty --abbrev=4)
echo "GIT_VERSION: ${versionStr}" > ./bamboo-specs/vars.yaml
Or for multiple lines you can use:
SW_NUMBER_DIGITS=${1} # Passed as first parameter to build script
cat <<EOT > ./bamboo-specs/vars.yaml
GIT_VERSION: ${versionStr}
SW_NUMBER_APP: ${SW_NUMBER_DIGITS}
EOT
Scope can be local or result. Local means the variable is only available to the current job; result means it can be used in subsequent stages of this plan and in releases that are created from the result.
Namespace is just used to avoid naming collisions with other variables.
With the above you can use that variable in later scripts with ${bamboo.inject.GIT_VERSION}. The last script task is just there to show that it works in other scripts. You can also see the variables in the web app as build metadata.
I'm using the above script before the build (in my case compiling C-Code) takes place so I can also create a version.h file that can be used by the source code.
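A minimal sketch of that version.h generation (the header and macro names here are my own choice, not from the original setup):

# Hypothetical follow-up: turn the git describe output into a C header.
versionStr=$(git describe --tags --always --dirty --abbrev=4)
cat <<EOT > version.h
#ifndef VERSION_H
#define VERSION_H
#define GIT_VERSION "${versionStr}"
#endif
EOT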
This is still a bit cumbersome, but I'm happy with it and I hope it helps others configure Bamboo. The Bamboo documentation could be better. (Still a lot of trial and error.)

liquibase tagExist command exit code

I am using liquibase 3.4.1 from the command line.
My command looks like this:
D:\Work>java -cp ".\*" liquibase.integration.commandline.Main --defaultsFile=liquibase_methods.properties tagExists 4.5
It works pretty well:
The tag 4.5 does not exist in user@jdbc:oracle:thin:@url:port:SID
Liquibase 'tagExists' Successful
When I do echo %errorlevel%, the OS tells me 0, as if the previous command completed successfully.
Is there a 'quite easy' way to get an exit code != 0 when the tagExists command reports that the tag doesn't exist?
By 'quite easy' I also mean something cleaner than parsing the output text for keywords.
Regards,
Guillaume
This would require a change in the liquibase source code. Looking at the class src/main/java/liquibase/integration/commandline/Main.java, you can see that whether there is an error or not, liquibase just does a return. This would need to be changed so that it called System.exit(int), and the commands themselves would need to return some sort of success code.
I think Nathan is working on improvements for 4.0, but for the 3.x line it seems like a fairly straightforward change. The issue with a change like this, though, is what unintended consequences it would have on other systems. I would suggest forking the project on GitHub, making the change for yourself, and then creating a pull request to see if it can be added to the mainline code.
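Until something like that lands, the only command-line workaround seems to be the text-parsing approach the question hoped to avoid; a minimal sketch, assuming a POSIX shell is available and reusing the command and output from the question:

# Liquibase 3.x always exits 0, so inspect the tagExists output instead.
out=$(java -cp "./*" liquibase.integration.commandline.Main --defaultsFile=liquibase_methods.properties tagExists 4.5 2>&1)
echo "$out"
echo "$out" | grep -q "does not exist" && exit 1
exit 0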

WebHCat & Pig - how to pass a parameter file to the job?

I am using HCatalog's WebHCat API to run Pig jobs, such as documented here:
https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference+Pig
I have no problem running a simple job, but I would like to attach a parameters file to the job, as one can do with the pig command line's --param_file parameter.
I assume this is possible through the request's arg parameter, so I tried multiple things, such as passing:
'arg': '-param_file /path/to/param.file'
or:
'arg': {'param_file': '/path/to/param.file'}
Neither seems to work, and the error stacks don't say much.
I would love to know if this is possible, and if so, how to correctly achieve this.
Many thanks
Correct usage:
'arg': ['-param_file', '/path/to/param.file']
Explanation:
When the value is passed in arg as a dict,
'arg': {'-param_file': '/path/to/param.file'}
webhcat generates only "-param_file" for the command prompt, and Pig throws the following error:
ERROR org.apache.pig.Main - ERROR 2999: Unexpected internal error. Can not create a Path from a null string
Using a comma instead of the colon operator (i.e., a list instead of a dict) passes the path to the file as a second argument, so webhcat generates "-param_file" "/path/to/param.file".
P.S.: I am using the Requests library in Python to make the REST calls.
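For reference, a minimal sketch of the same call as a raw REST request with curl (the host, port, user name, and file paths are all placeholders; each repeated arg field becomes one command-line argument for Pig):

# Hypothetical WebHCat Pig submission; adjust host, user, and paths.
curl -s -d user.name=someuser \
     -d file=/path/to/script.pig \
     -d arg=-param_file \
     -d arg=/path/to/param.file \
     'http://webhcat-host:50111/templeton/v1/pig'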

Rspec output format

This question is more out of curiosity than purpose. Can we change the output of the RSpec command, where it shows dots and Fs? For example, here is the output from one of my projects:
.F.F.F.F
.....
........
Finished in 0.27137 seconds
8 examples, 4 failures
Can we get Pass Failed Pass Failed Pass Failed Pass Failed instead of .F.F.F.F?
You indeed can - check out the rspec wiki or google 'rspec progressformatter'; here's one that does something very close to what you want.
Color might help a bit - add alias spec="spec --color --format specdoc" to your ~/.bashrc file.
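If a built-in formatter is enough, note that in newer RSpec versions (the rspec command rather than spec) the closest equivalent is the documentation formatter, which prints one named pass/fail line per example:

rspec --color --format documentation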