How do I pass gitlab-ci variables to the Karate Netty jar?

I'm trying to use the Karate Netty jar in a gitlab-ci pipeline. I pull in an image that contains the jar as a step in the pipeline, and I can execute tests just fine for unsecured services.
Like so:
karate-test:
  stage: acceptance-test
  image:
    name: registry.gitlab.opr.business.org/karate-universe:0.0.3
    entrypoint: [ "" ]
  script:
    - java -jar /karate.jar -e dev src/test/karate/acceptance-test.feature -o /target/karate
  environment:
    name: Test
  artifacts:
    paths:
      - /target/karate
Now I'm trying to pass credentials into a Karate feature for a secured service, but I cannot find this capability in the jar's command-line interface.
I've tried passing the credentials like so:
- java -jar /karate.jar -e dev src/test/karate/acceptance-test.feature -o /target/karate -Duser.password ${REQUEST_PASSWORD} -Duser.id ${REQUEST_USER}
REQUEST_PASSWORD and REQUEST_USER are gitlab variables that are available to me in gitlab-ci.
When I run the pipeline, I get:
Unmatched arguments [-Duser.password, -Duser.id]
Does Karate Netty support passing variables for karate-config use the way regular Karate does? I cannot keep secrets in the karate-config file itself.

Make sure the -Dfoo=bar part comes before the -jar option, because everything after -jar is passed to Karate, not to the JVM.
java -Dfoo=bar -Dbaz=ban -jar /karate.jar
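Applied to the job above, the script line would become something like this (note that -D options also need an = between key and value):
- java -Duser.id=${REQUEST_USER} -Duser.password=${REQUEST_PASSWORD} -jar /karate.jar -e dev src/test/karate/acceptance-test.feature -o /target/karate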
Note that you can also get environment variables easily:
java.lang.System.getenv('PATH')
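So, as an alternative to JVM properties, a karate-config.js could read the GitLab variables straight from the environment; a minimal sketch, assuming the variable names from the question:
function fn() {
  var config = {};
  // GitLab CI exposes pipeline variables as environment variables in the job
  config.userId = java.lang.System.getenv('REQUEST_USER');
  config.userPassword = java.lang.System.getenv('REQUEST_PASSWORD');
  return config;
}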
Normally people pass values as -D JVM options. If you have more advanced needs for the standalone JAR, see this: https://stackoverflow.com/a/56458094/143475

Related

Automated Security Test in GitLab

I'm trying to implement automation inside my GitLab project.
In order to perform a security scan, I would like to use ZAP to go through all the URLs present in the project and scan them. It's clearly not possible to pass all the URLs manually, so I'm trying to find a way to make the tests as automated as possible.
The problem is: how do I reach all the URLs present in the application?
I thought one way could be to pass them as a "variable" in the YML file and use them as parameters in the ZAP command, something like the snippet below.
Is this a reasonable solution? Is there any other way to perform an automated scan inside a repository (without passing the URLs manually)?
Thanks
variables:
  OWASP_CONTAINER: $APP_NAME-$BUILD_ID-OWASP
  OWASP_IMAGE: "owasp/zap2docker-stable"
  OWASP_REPORT_DIR: "owasp-data"
  ZAP_API_PORT: "8090"
  PENTEST_IP: 'application:8080'

run penetration tests:
  stage: pen-tests
  image: docker:stable
  script:
    - docker exec $OWASP_CONTAINER zap-cli -v -p $ZAP_API_PORT active-scan http://$PENTEST_IP/html
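For illustration, one way to avoid hard-coding a single URL in that job would be to keep the list in a CI variable and loop over it in the scan step (a sketch; the variable name and URLs are made up):
variables:
  SCAN_URLS: "http://application:8080/html http://application:8080/api"

script:
  - for url in $SCAN_URLS; do docker exec $OWASP_CONTAINER zap-cli -v -p $ZAP_API_PORT active-scan $url; done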
You need to turn on a feature flag (FF_NETWORK_PER_BUILD) to enable a network per build; then services can also reach each other (available since GitLab Runner 12.9). For more information see: https://docs.gitlab.com/runner/executors/docker.html#networking
A working OWASP ZAP job in GitLab CI:
owasp-zap:
  variables:
    FF_NETWORK_PER_BUILD: 1
  image: maven
  services:
    - selenium/standalone-chrome
    - name: owasp/zap2docker-weekly
      entrypoint: ['zap.sh', '-daemon', '-host', '0.0.0.0', '-port', '8080',
        '-config', 'api.addrs.addr.name=.*', '-config', 'api.addrs.addr.regex=true', '-config', 'api.key=1234567890']
  script:
    - sleep 5
    - mvn clean test -Dbrowser=chrome -Dgrid_url=http://selenium-standalone-chrome:4444/wd/hub -Dproxy=http://owasp-zap2docker-weekly:8080
    - curl http://owasp-zap2docker-weekly:8080/OTHER/core/other/htmlreport/?apikey=1234567890 -o report.html
  artifacts:
    paths:
      - report.html

karate argLine arguments not picked up with 'mvn gatling:test' command

I have an existing suite of karate tests which can run on different environments (dev / qa) using the approach below:
mvn test -DargLine="-DauthUser=*** -DauthPassword=*** -Dkarate.env=qa"
Now I have added some Gatling tests, and when I try to run them on 'qa' with the following command, the tests still run on my default environment, 'dev', instead of 'qa'.
mvn gatling:test -DargLine="-DauthUser=*** -DauthPassword=*** -Dkarate.env=qa"
It seems the argLine approach does not work with the Maven Gatling plugin. If so, is there any other way of passing these arguments for Gatling tests?
I came across a previous post where it's suggested not to use -DargLine when specifying arguments: I want to pass multiple arguments in karate-config.js through mvn command
Just pass the arguments directly on the command line, without argLine:
mvn gatling:test -DauthUser=*** -DauthPassword=*** -Dkarate.env=qa
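These arrive as JVM system properties, which karate-config.js can read; a minimal sketch using the property names from the command above:
function fn() {
  var config = {};
  // read the values passed on the mvn command line as -D system properties
  config.authUser = karate.properties['authUser'];
  config.authPassword = karate.properties['authPassword'];
  return config;
}
karate.env itself is picked up by Karate automatically.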

Execute java command as build configuration in IntelliJ

I use the MultiRun plugin to build the backend services and the UI in the same run. I need to run a Java command before all this, and I want to include it in the MultiRun configuration I'm using. The command is like:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -port 8001
Which option in the configurations list should I use to set this?
Thanks.

Running Jenkins tests in Docker containers build from dockerfile in codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP / Symfony, Node, Angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I’m aiming for is :
A merge request is opened on Github / Gitlab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes them.
I do not want Jenkins to be in a container.
With this kind of process, I'm hoping to be able to run the tests very easily on each developer machine with something like a docker-compose up and then, in one of the containers, ./tests all.
I'm not very familiar with Jenkins. I've read a lot of documentation, but most of it suggests defining Jenkins slaves for each kind of project beforehand. I would like everything to be as dynamic as possible and to require as little configuration on Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I’m aiming for is impossible, I would also appreciate if you could explain to me why.
A setup I suggest is Docker in Docker.
The base is a derived Docker image, which extends the jenkins:2.x image by adding a Docker command-line client.
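A minimal sketch of such a derived image (the base tag and the install method are assumptions; any approach that puts a docker client on the image works):
FROM jenkins:2.60.3
USER root
# install the Docker CLI so build jobs can talk to the host daemon through the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins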
Jenkins is started as a container with its home folder (e.g. /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that it is able to start Docker containers from Jenkins build jobs.
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check whether this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the "docker version" output does NOT show:
Is the docker daemon running on this host?
Everythin is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from jenkins, which means that the cloned repository will be available under $WORKSPACE. So if you run "bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh", your project will be built within a container of "yourImageToBuildAndTestTheProject". After that, you could start other containers for integration tests, or combine this with "docker-compose" by installing it on the derived Jenkins image.
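The build.sh itself lives in the repository and can stay trivial; an illustrative sketch for a PHP project (the actual steps depend on the project type):
#!/bin/bash
set -e
cd "$(dirname "$0")"
# project-specific build and test steps
composer install
./tests all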
The advantage is the minimal configuration effort you have within Jenkins: only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you can use one or more Docker images per project to build and/or test, WITHOUT further Jenkins configuration.
If you need additional configuration e.g. SSH keys or Maven settings, just put them on the Docker host and start the Jenkins container with the additional volumes, which contain those configuration files.
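For example (the host paths here are assumptions):
docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /host/ssh:/var/jenkins_home/.ssh:ro \
  -v /host/maven/settings.xml:/var/jenkins_home/.m2/settings.xml:ro \
  <yourDerivedJenkinsImage>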
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
automatically makes the workspace and those configuration files available in each of your build containers.

How to run selenium 3.x with chrome driver through terminal

Maybe it's an easy question, but I can't find any info about it.
I used to run Selenium 2.x this way. I start the server:
java -jar selenium-server-standalone-2.53.1.jar -Dwebdriver.chrome.driver=chromedriver -browserSideLog -debug -timeout 60
And then I run my tests. I use Dart, so I do:
pub run test test/selenium/custom_component_test.dart
But now I'm trying to use Selenium 3. I have downloaded it and substituted the new jar into my old terminal call, but it seems I can't do it that way. Selenium tells me it doesn't know the parameter "-Dwebdriver.chrome.driver", and in the help output I can't see any option for specifying the driver.
So, how do I run Selenium 3 with the Chrome driver?
Your options are out of order. -D... is a Java runtime option; it needs to come before the -jar directive.
Change your command to
java -Dwebdriver.chrome.driver=chromedriver -jar selenium-server-standalone-2.53.1.jar -browserSideLog -debug -timeout 60
I used to run Selenium 2.x this way.
Yes, we changed the source to use JCommander in 3.0 to parse options passed into the jar. -D directives are now parsed as options you are trying to pass into the jar, just like -debug and -timeout. For your command to be well formed, you really should be using -D... before the -jar directive.
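With a Selenium 3 jar the shape is the same (the jar filename below is a placeholder for whatever 3.x version you downloaded, and some 2.x-era flags such as -browserSideLog may no longer be accepted):
java -Dwebdriver.chrome.driver=chromedriver -jar selenium-server-standalone-3.x.jar -debug -timeout 60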