Extracting ZAP report running in container on Jenkins agent (docker based) - zap

My setup is as follows:
A Jenkins pipeline script triggers a Jenkins job which runs inside a Docker container.
ZAP is run in containerized mode.
Commands used:
echo DEBUG - mkdir -p $PWD/out
mkdir -p $PWD/out
echo DEBUG - chmod 777 $PWD/out
chmod 777 $PWD/out
test -d ${PWD}/out \
&& docker run -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
Also tried: docker run --user $(id -u):$(id -g) -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
The scan works fine but the report is not in the "out" directory.
This works fine in a VM environment.
Any suggestions? I suspect the volume mount is not working because the job itself runs in a Docker container.
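One common cause, assuming the Jenkins agent is itself a container that talks to the host's Docker daemon (e.g. via a mounted Docker socket, so the ZAP container runs as a "sibling"): the -v source path is resolved by the daemon on the host, so $(pwd)/out from inside the agent container usually does not exist there and Docker just creates an empty directory on the host. A hedged sketch of the usual workaround, with an illustrative volume and helper-container name, is to use a named volume and copy the report out with docker cp:
# The bind-mount source is interpreted by the Docker daemon on the HOST,
# not inside the Jenkins agent container, so use a named volume instead.
docker volume create zap-out
docker run -v zap-out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
# Copy the report from the volume into the agent's workspace via a throwaway container,
# so no host path needs to exist:
docker create --name zap-out-helper -v zap-out:/zap/wrk alpine true
docker cp zap-out-helper:/zap/wrk/zap_scan_report.html $PWD/out/
docker rm zap-out-helper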

Related

PDI - Microsoft Excel Writer - Permission denied

I'm using PDI to generate an Excel (.xlsx) file in a folder using the Microsoft Excel Writer component, and I'm trying to read this file from a microservice. The problem is I can't read it because the file is created with permissions -rw-r-----. How can I write the file with permissions for everyone, or how can I change these permissions in PDI?
I created a user "pentaho" and set the service to run in the same Docker container under the same user.
Dockerfile Pentaho:
...
RUN cd /pentaho && \
rm /pentaho/*server*/promptuser.sh; \
sed -i -e 's/\(exec ".*"\) start/\1 run/' /pentaho/*server*/tomcat/bin/startup.sh; \
mkdir /home/pentaho && groupadd -r pentaho && useradd -r -g pentaho -p $(perl -e'print crypt("pentaho", "aa")' ) -G sudo pentaho && \
chown -R pentaho.pentaho /pentaho && \
chown -R pentaho.pentaho /home/pentaho
WORKDIR /pentaho
USER pentaho
EXPOSE 8080
Dockerfile App:
FROM company/pentaho:1.0.0
MAINTAINER Company
ADD start_scripts/run.sh /pentaho/
...
RUN sudo chown -R pentaho.pentaho /pentaho/pentaho-server
WORKDIR /pentaho
USER pentaho
EXPOSE 8080
# 1. Run
ENTRYPOINT ["bash", "/pentaho/run.sh"]
Run.sh:
if [ -z "$DEBUG" ]; then
  echo Starting Sheet Formatting service and Pentaho in normal mode
  cd /pentaho/
  java -jar sheet-service.jar &
  cd *server*
  ./start-pentaho.sh;
else
  echo Starting Sheet Formatting service and Pentaho in DEBUG mode
  cd /pentaho/
  java -jar sheet-service.jar &
  cd *server*
  ./start-pentaho-debug.sh;
fi
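If the restrictive -rw-r----- mode simply comes from the default umask of the Pentaho process (an assumption, not something the Excel Writer step documents), one low-effort sketch is to relax the umask at the top of run.sh, before the services are launched:
# At the top of run.sh, before sheet-service.jar and Pentaho are started:
umask 022   # new files are created 0644 (world-readable); use 0002 if group access is enough
# Alternatively, fix the file up after it has been written, e.g. as a last step of the job:
# chmod 644 /path/to/output/Excel.xlsx
The path in the chmod example is illustrative; point it at wherever the Excel Writer step writes the file.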

rsync not finding local directory when sending through SSH on pipeline

We're using Bitbucket Pipelines to push the build output from the pipeline to our remote server.
This is a snippet of the bitbucket-pipelines.yml file
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and the command does get executed once it has connected to the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's resolving the source directory on the server rather than in the Docker container from Bitbucket (or, when run locally, the files my $PWD points at).
If I try to run the command without the surrounding quotes it fails because it defaults to port 22. I've also tried moving the command into a bash script and using MODE: 'Script', which is an accepted pattern for the pipe, but I can't use my environment variables in the .sh file.
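A small reproduction of what the error message suggests is happening (hedged, since it depends on how the pipe quotes COMMAND): the extra layer of quotes makes the remote shell treat the whole rsync command line as a single command name, and because that name contains slashes bash reports "No such file or directory". It also means rsync runs on the remote host, so its source path is resolved there rather than in the pipeline container. The user@host below is a placeholder:
# The remote shell receives the string wrapped in its own quotes and parses it
# as ONE command name (which contains slashes), hence the error:
ssh user@host "'rsync -a /opt/atlassian/pipelines/agent/build/ user@host:home/user'"
# bash: rsync -a /opt/atlassian/pipelines/agent/build/ user@host:home/user: No such file or directory
# Without the inner quotes the words are parsed normally -- but the command still
# executes on the remote machine, so the source path must exist THERE:
ssh user@host 'rsync -a /opt/atlassian/pipelines/agent/build/ user@host:home/user'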
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration is going to look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in pipelines properly (here is a link to our docs for configuring SSH keys https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
I've found another way around this that doesn't need a pipe: instead, I'm running rsync as a script step
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image, since it fails to find the rsync binary otherwise. I found this info in a post on Atlassian Community.
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: Avoid using sudo commands in pipeline scripts.
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync

Gitlab CI/CD: Deploy to ubuntu server using ssh keys (using a windows shell runner)

Hello everyone, I need your help please. I'm using GitLab CI/CD and trying to deploy my .jar application to an Ubuntu server. I configured my GitLab project with a Windows runner with a shell executor, and I set up key-based access on the runner to avoid being prompted for a password.
The following command runs successfully when I log in to the runner machine and use its PowerShell:
scp -i C:\Users\Administrators\ssh\id_rsa myapp-0.0.1-SNAPSHOT.jar username@myubuntuserver:/
But when I use the above command in my .yml file to copy the .jar to the server, it gives no response until the job fails due to a timeout.
I also tried the solution proposed here https://docs.gitlab.com/ee/ci/ssh_keys/ by setting an SSH_PRIVATE_KEY variable on my project, but I'm unable to adapt the given before_script to my Windows runner.
This is the before_script proposed in the documentation (link above):
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
When the previous scp command is replaced by this:
ssh -iv C:\Users\Administrators\ssh\id_rsa username@myubuntuserver
I get the verbose ssh output shown in the screenshot attached to the question.
Thanks in advance
It works after doing the following steps:
1) Configure the runner (shell executor) on Ubuntu 18.04
2) Then, from the terminal, log in as the gitlab-runner user: sudo su - gitlab-runner
3) Run ssh-keygen -t rsa
4) Run ssh -i ~/.ssh/id_rsa username@myubuntuserver
5) Run cat ~/.ssh/id_rsa.pub | ssh username@myubuntuserver "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
6) Now you can add the following to your job script (yml file) and it should work:
- scp -i ~/.ssh/id_rsa fileToCopy username@myubuntuserver:/mydirectory
#you can execute multiple commands at a time, for ex:
- ssh username@myubuntuserver " mv /mydirectory/myapp-0.0.1-SNAPSHOT.jar /mydirectory/myapp.jar "
Hope it will help
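A quick sanity check before wiring this into the .yml, sketched under the assumption that the timeout in the original job came from ssh waiting for an interactive prompt: a key-based login from the runner must succeed with no prompt at all, which BatchMode makes explicit by failing instead of asking.
# Run on the runner machine; username/host are the same placeholders used above.
sudo su - gitlab-runner -c 'ssh -o BatchMode=yes -i ~/.ssh/id_rsa username@myubuntuserver true && echo "key login OK"'
If this prints "key login OK", the same scp/ssh commands should work from the job without hanging.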
If ssh -iv C:\Users\Administrators\ssh\id_rsa username@myubuntuserver does not work, that may be because of the C: part, which confuses ssh into thinking C is the name of the server!
A Unix-like path would work:
ssh -iv /C/Users/Administrators/ssh/id_rsa username@myubuntuserver
But, as the OP Medmahmoud comments, this supposes the public key has been published on the server:
Configure the runner on Ubuntu 18.04.
Then from the terminal log in as the gitlab-runner user and generate a key:
sudo su - gitlab-runner
ssh-keygen -t rsa
ssh -i ~/.ssh/id_rsa username@myubuntuserver
cat ~/.ssh/id_rsa.pub | ssh username@myubuntuserver \
"mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
Now from your yml file the following should work:
- scp -i ~/.ssh/id_rsa pom.xml username@myubuntuserver:/mydirectory

Can't get a Docker image with Apache to display the test webpage

I have a Docker image where I have installed Apache. I want it so that when the container starts, Apache starts and I can visit the test page. However, the page does not appear when I try.
This is my current dockerfile:
FROM centos:7
MAINTAINER me <me@me.com>
RUN yum update -y && yum install -y httpd php
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80
EXPOSE 443
CMD ["/usr/sbin/init"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I am running the container with the command docker run -d -P <container_name>, and when I do docker ps, I see the ports section being populated correctly, with 0.0.0.0:32784->80/tcp, 0.0.0.0:32783->443/tcp as the output.
The URL I'm trying to use to access it is 172.17.0.2:32784.
Any ideas?
Turns out the issue was that I was trying to connect with the Docker container's IP, when I should have been connecting with the IP of the server it was hosted on.
Derp.
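For reference, a sketch of the two ways the page can be reached in this setup (the image name and fixed host port below are illustrative). The container's 172.17.x.x address only answers on the container port (80), while the randomly published port belongs to the Docker host:
# With -P, EXPOSEd ports are published on random host ports; reach them via the host:
docker run -d -P my-httpd-image
docker port $(docker ps -lq) 80      # prints e.g. 0.0.0.0:32784 -> browse http://<docker-host-ip>:32784/
# Or publish a fixed port explicitly and browse the host on that port:
docker run -d -p 8080:80 my-httpd-image
curl http://localhost:8080/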

Docker HTTPS access - ONLYOFFICE3

I'm following the ONLYOFFICE Docker documentation (GITHUB ONLYOFFICE docker HTTPS access) to get the ONLYOFFICE documentserver and communityserver running with HTTPS.
What I've tried:
1.
I've created the cert files (.crt, .key, .pem) as mentioned in the documentation. After that I created a file named env.list in my home dir /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
2.
After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3.
On the same shell I started the docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4.
The documentserver is running fine. After that I've started the communityserver with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5.
With the command docker ps -a I see both Docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I'm trying to access https://localhost there is a "Secure Connection Failed" error in Firefox.
Did I miss something?
Okay, got it: the certificate paths in env.list have to be the paths as seen inside the container (the host directory /opt/onlyoffice/Data is mounted at /var/www/onlyoffice/Data in the container), so I've changed the environment variables in env.list to:
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
After that I used the following command to run ONLY the documentserver:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
If you want to use CommunityServer with HTTPS just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
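A quick way to verify the TLS setup from the host, sketched with standard tools (-k skips certificate verification, which is useful if the certificate is self-signed):
# Check that the document server answers over HTTPS:
curl -vk https://localhost/OfficeWeb/apps/api/documents/api.js >/dev/null
# Inspect the certificate the container is actually serving:
openssl s_client -connect localhost:443 -servername localhost </dev/null 2>/dev/null | openssl x509 -noout -subject -dates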
Thank you anyway!