LocalStack: Service "s3" not yet available, retrying

Service "S3" not yet available, retrying.
I am using the LocalStack Docker image. When I run the command:
docker run -it -p 4567-4578:4567-4578 -p 8080:8080 localstack/localstack
I get the error "S3 is not yet available".
I am using macOS.

I found the solution: start only the services you need by passing the SERVICES environment variable.
docker run -p 4569:4569 -p 4572:4572 -p 4575:4575 -p 4576:4576 -e SERVICES=dynamodb,s3,sns,sqs -p 8080:8080 localstack/localstack
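Once the container is up, a quick way to verify that S3 actually started is to hit it with the AWS CLI against the legacy per-service port (4572 for S3 in this older LocalStack setup); this assumes the AWS CLI is installed and configured with dummy credentials and a default region, and the bucket name is only an example:
# Create and list a test bucket against the local S3 endpoint mapped above
aws --endpoint-url=http://localhost:4572 s3 mb s3://test-bucket
aws --endpoint-url=http://localhost:4572 s3 ls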

Related

rsync not finding local directory when sending through SSH on pipeline

I am using Bitbucket Pipelines to push the build output from the pipeline to our remote server.
This is a snippet of the bitbucket-pipelines.yml file:
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and it does run the command correctly once it has connected to the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server rather than in the Docker container from Bitbucket (or at the files on my local machine with $PWD).
If I try to run the command without the quotes, it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an acceptable pattern for the pipe, but I can't use my environment variables in the sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration will look roughly like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in pipelines properly (here is a link to our docs for configuring SSH keys https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
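As a reminder of what that entails on the server side (this is general SSH practice rather than part of the pipe itself), the public half of the key pair you configure in Pipelines must be present in the deploy user's authorized_keys on the production server; the file name pipelines_key.pub below is only an example:
# Run on the production server as the deploy user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat pipelines_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys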
I've found another way around this that doesn't need a pipe: instead I'm running rsync as a script step.
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image, because rsync is not found otherwise. I found this in a post on Atlassian Community.
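Placed in context, the step might look roughly like the sketch below; only the image line and the rsync command come from the answer above, the rest of the skeleton (including the step name) is illustrative:
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        name: Deploy via rsync
        script:
          - rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER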
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using the sudo command in pipeline scripts.
I had the same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync

How to make Zalenium work with AWS Fargate?

The issue
I would like to use Zalenium in a container on AWS Fargate. However, to do so, we have to pull two images: Zalenium and Selenium. During its operation, Zalenium creates containers from the Selenium image, so it needs to find that image somewhere.
A possible solution
I was thinking of creating an Ubuntu container with Docker installed which would run the following commands:
It would first pull the images
docker pull elgalu/selenium
docker pull dosel/zalenium
and then create a Zalenium container with the Docker socket mounted so that it can create further containers:
docker run --rm -ti --name zalenium -p 4444:4444 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/videos:/home/seluser/videos \
--privileged dosel/zalenium start
That would mean creating a container that runs inside another container, which itself runs inside yet another container, and that doesn't sound straightforward.
So before doing that, I wanted to check whether someone has a better solution. Being new to AWS, I might have missed something.
You can pull the image implicitly through the Zalenium container; just check https://opensource.zalando.com/zalenium/#tryit, section "Or without pulling elgalu/selenium explicitly:"
Example:
# Pull Zalenium
docker pull dosel/zalenium
# Run it!
docker run --rm -ti --name zalenium -p 4444:4444 \
-e PULL_SELENIUM_IMAGE=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/videos:/home/seluser/videos \
--privileged dosel/zalenium start
# Point your tests to http://localhost:4444/wd/hub and run them
# Stop
docker stop zalenium
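Once the container is up, a quick sanity check (not part of the original answer) is to query the hub's status endpoint before pointing your tests at it:
# Should return a JSON status payload once the grid is up
curl http://localhost:4444/wd/hub/status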

BitBucket Pipeline ssh to Digital Ocean Permission denied (publickey)

I'm sure this is not the first question for BitBucket Pipeline and Digital Ocean, but I have gone through several similar posts without any luck.
pipelines:
  default:
    - step:
        name: SSH to Digital Ocean and update docker image
        script:
          - ssh -i ~/.ssh/config root@xxx.xxx.xxx.xxx
          - docker rm -f mycontainer
          - docker image rm -f myrepo/imagename:tag
          - docker pull myrepo/imagename:tag
          - docker run --name mycontainer -p 12345:80 -d=true --restart=always myrepo/imagename:tag
        services:
          - docker
Here is the SSH key configured in my Bitbucket repository (screenshot omitted).
Here is what the Bitbucket Pipeline shows me (screenshot omitted): Permission denied (publickey).
How can I resolve this?
This is not a key problem - it's that the Pipelines container does not act as a normal terminal, while ssh expects a terminal under normal operation. You should be able to pass the command(s) to be run as arguments to the ssh command: ssh -i /path/to/key user@host "docker rm -f mycontainer && docker image rm -f myrepo/imagename:tag" etc.
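Applied to the pipeline above, that would look roughly like the following sketch; the key path is illustrative (point it at the private key configured for Pipelines), and the commands are simply the ones from the question chained together:
pipelines:
  default:
    - step:
        name: SSH to Digital Ocean and update docker image
        script:
          - ssh -i ~/.ssh/pipelines_key root@xxx.xxx.xxx.xxx "docker rm -f mycontainer && docker image rm -f myrepo/imagename:tag && docker pull myrepo/imagename:tag && docker run --name mycontainer -p 12345:80 -d=true --restart=always myrepo/imagename:tag"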

Unable to login to Olympus/Restcomm

I am "brand new" to both RestComm and Docker. After some learnings I was able to run Docker (running on an Ubuntu VM - host Windows 10 PRO). Everything appears to be fine but I am unable to login to Olympus (though RestComm console is ok).
The command I am using is: docker run -i -d --name=restcomm -e RCBCONF_STATIC_ADDRESS=192.168.110.162 -e ENVCONFURL="https://raw.githubusercontent.com/RestComm/Restcomm-Docker/master/env_files/restcomm_env_locally.sh" -p 80:80 -p 443:443 -p 9990:9990 -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 -p 5060:5060/udp -p 65000-65050:65000-65050/udp -p 8080:8080 -p 8443:8443 restcomm/restcomm:latest
Can you guys help me?
Thanks
Cassio

Docker HTTPS access - ONLYOFFICE

I'm following the ONLYOFFICE Docker documentation (GitHub: ONLYOFFICE Docker HTTPS access) to get the ONLYOFFICE documentserver and communityserver running with HTTPS.
What I've tried:
1.
I've created the cert files (.crt, .key, .pem) as mentioned in the documentation. After that I created a file named env.list in my home dir /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
2.
After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3.
On the same shell I started the docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4.
The documentserver is running fine. After that I've started the communityserver with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5.
With the command docker ps -a I see both Docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I'm trying to access https://localhost there is an error "Secure Connection Failed" in Firefox.
Did I miss something?
Okay got it:
I've changed the environment variables in env.list to:
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
After that I used the following command to run ONLY the documentserver:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
If you want to use CommunityServer with HTTPS just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
Thank you anyway!
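For completeness, the certificate files referenced in step 1 can be generated with openssl along these lines; the file names and the 365-day validity are only an example, and a CA-signed certificate works the same way:
# Self-signed certificate and key (example values; fill in your own details when prompted)
openssl genrsa -out onlyoffice.key 2048
openssl req -new -key onlyoffice.key -out onlyoffice.csr
openssl x509 -req -days 365 -in onlyoffice.csr -signkey onlyoffice.key -out onlyoffice.crt
# Diffie-Hellman parameters for the SSL_DHPARAM_PATH file
openssl dhparam -out dhparam.pem 2048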