Unable to login to Olympus/Restcomm - restcomm

I am "brand new" to both RestComm and Docker. After some learnings I was able to run Docker (running on an Ubuntu VM - host Windows 10 PRO). Everything appears to be fine but I am unable to login to Olympus (though RestComm console is ok).
The command I am using is:
docker run -i -d --name=restcomm \
  -e RCBCONF_STATIC_ADDRESS=192.168.110.162 \
  -e ENVCONFURL="https://raw.githubusercontent.com/RestComm/Restcomm-Docker/master/env_files/restcomm_env_locally.sh" \
  -p 80:80 -p 443:443 -p 9990:9990 \
  -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 -p 5060:5060/udp \
  -p 65000-65050:65000-65050/udp \
  -p 8080:8080 -p 8443:8443 \
  restcomm/restcomm:latest
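For reference, a quick way to sanity-check the container, assuming it started as above, is to look at its logs and the ports that were actually published:
docker logs -f restcomm   # watch startup output for errors
docker port restcomm      # confirm the port mappings took effect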
Can you guys help me?
Thanks
Cassio

Systemd SSH tunnel service failing while the same command works on the command line

I've been trying to set up an SSH reverse tunnel systemd service to automatically expose my machine, which has no public IP, to the internet for SSH access. I have two different services: one pointing to my own server (another computer with a public IP), which works fine, and one using serveo.net (a free TCP tunnelling service via the ssh client). My service works fine for my own server but fails for Serveo.
My service definition is as follows:
[Unit]
Description=Setup a remote tunnel to serveo.net
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=0
[Service]
User=ssh-tunnel
Group=ssh-tunnel
ExecStart=/usr/bin/ssh -T -o ServerAliveInterval=60 -i /var/lib/ssh-tunnel/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ExitOnForwardFailure=yes -R myalias:58227:localhost:22 serveo.net
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
When executing the same command in the terminal, it works correctly:
sudo -u ssh-tunnel -g ssh-tunnel /usr/bin/ssh -T -o ServerAliveInterval=60 -i /var/lib/ssh-tunnel/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ExitOnForwardFailure=yes -R myalias:58227:localhost:22 serveo.net
The only difference in behaviour I can see between my own server and Serveo is that Serveo presents a sort of interactive command interface.
Has anyone had a similar issue with systemd services?
Serveo needs an interactive shell. You want to add
[Service]
StandardInput=tty-force
to force the service's stdin onto a TTY, which satisfies Serveo's interactive interface.
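Applied to the unit above, the amended section would look roughly like this (a sketch; everything except the added StandardInput line is taken from the original unit):
[Service]
User=ssh-tunnel
Group=ssh-tunnel
# Allocate a TTY for stdin so Serveo's interactive interface works under systemd
StandardInput=tty-force
ExecStart=/usr/bin/ssh -T -o ServerAliveInterval=60 -i /var/lib/ssh-tunnel/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ExitOnForwardFailure=yes -R myalias:58227:localhost:22 serveo.net
Restart=always
RestartSec=60
Remember to run systemctl daemon-reload and restart the unit after editing it.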

Extracting a ZAP report from a container running on a Docker-based Jenkins agent

My setup is as follows:
A Jenkins pipeline script triggers a Jenkins job which runs inside a Docker container.
ZAP runs in containerized mode.
Commands used:
echo DEBUG - mkdir -p $PWD/out
mkdir -p $PWD/out
echo DEBUG - chmod 777 $PWD/out
chmod 777 $PWD/out
test -d ${PWD}/out \
&& docker run -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
Also tried:
docker run --user $(id -u):$(id -g) -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
The scan works fine, but the report does not appear in the "out" directory. The same setup works fine in a VM environment.
Any suggestions? I suspect the volume mount is not working because the job itself runs in a Docker container.
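One hedged explanation, assuming the agent container talks to the host's Docker daemon (e.g. via a mounted docker.sock): -v $(pwd)/out then binds a path on the host, not inside the agent container, so the report lands somewhere invisible to the job. A sketch of a workaround using a named volume and a helper container to copy the report out:
# Create a named volume that both containers can reach through the daemon
docker volume create zap-out
# Run the scan writing into the named volume instead of a bind mount
docker run -v zap-out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py \
  -t $TARGET_URL -f openapi -d -r zap_scan_report.html
# Stream the volume's contents back into the agent's working directory
docker run --rm -v zap-out:/data alpine tar -C /data -cf - . | tar -C "$PWD/out" -xf -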

rsync not finding local directory when sending through SSH on pipeline

Using Bitbucket Pipelines to push the build that the pipeline produces to our remote server.
This is a snippet of the bitbucket-pipelines.yml file:
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and it does run the command correctly once it has connected to the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine:
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server and not looking at the docker container from bitbucket (or the files on my local machine with pwd).
If I try to run the command without the quotes, it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an acceptable pattern for the pipe, but I can't use my environment variables in the .sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration will look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in Pipelines properly (here is a link to our docs for configuring SSH keys: https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html).
I've found another way around this that doesn't need a pipe; instead I'm running rsync as a script step:
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image; otherwise it fails to find the rsync command. I found this info in a post on Atlassian Community.
This seems to work pretty well for me:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using sudo commands in pipeline scripts.
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync
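In context, the install line just goes before the rsync call in the step's script, along these lines (a sketch following the same pattern as the answer above; paths are placeholders):
pipelines:
  default:
    - step:
        script:
          # rsync is not preinstalled in this image, so install it first
          - apt-get update && apt-get install -y rsync
          - rsync -azv ./project_path/ $SSH_USER@$SSH_HOST:/<project-path>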

Localstack Service "s3" not yet available, retrying

Service "S3" not yet available, retrying.
I am using localstack docker image.
When I am hitting the command:
docker run -it -p 4567-4578:4567-4578 -p 8080:8080 localstack/localstack
I am getting errors: S3 is not yet available
I am using MacOS.
I got the solution: run only the services that you need.
docker run -p 4569:4569 -p 4572:4572 -p 4575:4575 -p 4576:4576 -e SERVICES=dynamodb,s3,sns,sqs -p 8080:8080 localstack/localstack
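To check that S3 is really up once the container settles, something like this should work, assuming the AWS CLI is installed locally (in this per-service-port Localstack setup S3 listens on 4572, and dummy credentials are accepted):
export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1
aws --endpoint-url=http://localhost:4572 s3 mb s3://test-bucket
aws --endpoint-url=http://localhost:4572 s3 ls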

Docker HTTPS access - ONLYOFFICE

I'm following the ONLYOFFICE Docker documentation (GitHub: ONLYOFFICE Docker HTTPS access) to get the ONLYOFFICE document server and community server running with HTTPS.
What I've tried:
1. I've created the cert files (.crt, .key, .pem) as mentioned in the documentation (a sketch of the openssl commands follows the listing below). After that I created a file named env.list in my home dir /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
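For reference, the documentation generates the self-signed certificates roughly like this (a sketch; the file names match the paths above, and the 2048-bit sizes are an assumption):
# Self-signed certificate and key (the CSR is an intermediate step)
openssl genrsa -out onlyoffice.key 2048
openssl req -new -key onlyoffice.key -out onlyoffice.csr
openssl x509 -req -days 365 -in onlyoffice.csr -signkey onlyoffice.key -out onlyoffice.crt
# Diffie-Hellman parameters referenced by SSL_DHPARAM_PATH
openssl dhparam -out dhparam.pem 2048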
2. After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3. On the same shell I started the Docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4. The document server is running fine. After that I started the community server with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5. With the command docker ps -a I see both Docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I try to access https://localhost, I get a "Secure Connection Failed" error in Firefox.
Did I miss something?
Okay, got it. The environment variables are read inside the container, where the host directory /opt/onlyoffice/Data is mounted at /var/www/onlyoffice/Data, so they have to point at container paths. I've changed the environment variables in env.list to:
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
After that I used the following command to run ONLY the document server:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
If you want to use CommunityServer with HTTPS, just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
Thank you anyway!