I just included a Redis Store in my Express application and got it to work.
I wanted to include this Redis store in Travis CI so that my code keeps working there. I read in the Travis documentation that it is possible to start Redis with the factory settings.
In my project I don't use the factory settings; I wrote my own redis.conf file, which specifies the port and the password.
So I added the following line to my .travis.yml file:
services:
- redis-server --port 6380 --requirepass 'secret'
But this returns the following on Travis CI:
$ sudo service redis-server\ --port\ 6380\ --requirepass\ \'secret\' start
redis-server --port 6380 --requirepass 'secret': unrecognized service
Is there any way to fix this?
If you want to customize the options for Redis on Travis CI, I'd suggest not using the services section, but rather doing this:
before_script: sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'
The services section runs services using their init/upstart scripts, which may not support the options you've added there. The command is also escaped for security reasons, which is why the documentation only hints that you can list plain service names in that section.
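For context, a minimal .travis.yml using this approach might look like the sketch below. The config file path, port, and password are carried over from the question; the language line assumes a Node.js project (as implied by Express), and the environment variables are just one way to hand the settings to your tests:

language: node_js
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'
env:
  global:
    - REDIS_PORT=6380
    - REDIS_PASSWORD=secret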
Since I need to use a self-hosted runner, the existing marketplace SSH actions are not a viable option: they tend to rely on Docker and build an image on the runner, which fails because of a lack of permissions for an unknown user. Existing GitHub SSH actions such as appleboy therefore fail for me due to their reliance on Docker.
Please note: appleboy and other similar actions work great when using GitHub-hosted runners.
Since it's a self-hosted runner, I have access to it at all times, so setting up an SSH profile and then running native ssh commands works pretty well.
To create an SSH profile & use it in GitHub Actions:
1. Create a "config" file in "~/.ssh/" (if it doesn't exist already).
2. Add a profile to "~/.ssh/config". For example:
Host dev
    HostName qa.bluerelay.com
    User ec2-user
    Port 22
    IdentityFile /home/ec2-user/mykey/something.pem
3. Now to test it, run the following on the self-hosted runner:
ssh dev ls
This should run the ls command inside the dev server.
4. Now that the SSH profile is set up, you can easily run remote SSH commands from GitHub Actions. For example:
name: Test Workflow
on:
  push:
    branches: ["main"]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Download Artifacts into Dev
        run: |
          ssh dev 'cd GADeploy && pwd && sudo wget https://somelink.amazon.com/web.zip'
      - name: Deploy Web Artifact to Dev
        run: |
          ssh dev 'cd GADeploy && sudo ./deploy-web.sh web.zip'
This works like a charm!!
Independently of the runner itself, you would first need to check that SSH works from your server:
curl -v telnet://github.com:22
# Assuming your ~/.ssh/id_rsa.pub is copied to your GitHub profile page
git ls-remote git@github.com:you/YourRepository
Then you can install your runner, without Docker.
Finally, your runner script can execute jobs.<job_id>.steps[*].run commands, using git with SSH URLs.
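As an illustration, a step that relies on the SSH setup above with a git SSH URL might look like this (the repository URL and job layout are hypothetical):

name: Clone over SSH
on: push
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Clone the repository over SSH
        run: git clone git@github.com:you/YourRepository.git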
I am looking at options to install the Confluent Schema Registry. Is it possible to download and install the registry alone and make it work with an existing Kafka setup?
Thanks
Assuming you have Zookeeper/Kafka running already, you can easily run the Confluent Schema Registry using Docker by running the following command:
docker run -p 8081:8081 -e \
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:5.3.2
Parameters:
-p 8081:8081 - opens port 8081 between the container and your machine
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL - your Zookeeper host and port; I'm using host.docker.internal to resolve the local machine that is hosting Zookeeper (outside of the container)
SCHEMA_REGISTRY_HOST_NAME - the hostname advertised in Zookeeper. This is required if you are running Schema Registry with multiple nodes. It is needed because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment.
SCHEMA_REGISTRY_LISTENERS - the host and port the Schema Registry listens on
SCHEMA_REGISTRY_DEBUG - run in debug mode
Note: the command above uses version 5.3.2; make sure this version is aligned with your Kafka version.
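Once the container is up, you can check that the registry responds through its REST API; the /subjects endpoint returns an empty list on a fresh install:

curl http://localhost:8081/subjects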
Yes, you can use your existing Kafka setup; just match it to the compatible version of the Confluent Platform. Here are the docs on getting started:
https://docs.confluent.io/current/schema-registry/docs/intro.html#installation
tl;dr: download the platform to pull out the pieces you need, or get the Docker image and point it at your Kafka cluster.
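If you go the download route instead of Docker, a rough sketch of running only the registry against an existing cluster looks like this (the broker address and file paths are assumptions; the property names come from the standard Schema Registry configuration):

# etc/schema-registry/schema-registry.properties
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://your-kafka-broker:9092
kafkastore.topic=_schemas

# start only the registry
bin/schema-registry-start etc/schema-registry/schema-registry.properties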
I have installed Redis Cluster 3.0.0, but I want to upgrade it to 3.0.7. Can somebody tell me the steps to do it?
I don't want to lose any data, and I don't want any downtime either.
These are the steps I did when upgrading from 2.9.101 to the 3.0 release; I hope they also work for upgrading to 3.0.7. A rough redis-cli sketch follows the list.
Compile 3.0.7 from source and start several instances with cluster mode enabled.
Let the 3.0.7 instances replicate the 3.0.0 instances as slaves.
Connect to each 3.0.7 instance and do a manual failover; the 3.0.0 masters will become slaves after several seconds.
Wait for your application to connect to the new masters; also check the configuration files and modify the entries to point at the new masters as needed.
Remove the old 3.0.0 slaves.
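A minimal redis-cli sketch of the replication, failover, and removal steps (the ports and node IDs are placeholders; your cluster layout will differ):

# make a new 3.0.7 instance join the cluster and replicate an old 3.0.0 master
redis-cli -p 7006 cluster meet 127.0.0.1 7000
redis-cli -p 7006 cluster replicate <node-id-of-3.0.0-master>
# once replication has caught up, promote the 3.0.7 instance without downtime
redis-cli -p 7006 cluster failover
# after every master has been replaced, have each remaining node forget the old ones
redis-cli -p 7006 cluster forget <node-id-of-old-3.0.0-node>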
UPDATE : Docker approach
As it's probably not possible to replace the binary executable while the process is still alive, you could do it by running the new Redis in Docker.
First, install Docker on your machine and pull the Redis image, or pull a basic OS image and build Redis in it manually, whatever you prefer.
Based on this image, you are supposed to do the following (a Dockerfile sketch of these steps follows the list):
copy your current redis.conf into it
make sure the dir exists in the image (cluster-config-file could be the same for all the containers as they are saved individually in their own fs)
make sure the directory for logfile exists and is not the same as dir (we will later map this directory to the host)
leave port and logfile as anything you like, as they are overridden when a container is started
commit the image as redis-3.0.7
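If you prefer a reproducible build over committing a modified container, a small Dockerfile captures the same preparation (the base image tag and paths are assumptions):

FROM redis:3.0.7
COPY redis.conf /etc/redis/redis.conf
# directories referenced by dir and logfile in redis.conf
RUN mkdir -p /var/lib/redis /var/log/redis

Build it with: docker build -t redis-3.0.7 .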
Now launch a containerized Redis. Suppose your logfile is located in /var/log/redis/, this Redis binds to port 8000, and your config file in the image is /etc/redis/redis.conf:
docker run -d --net=host -v /var/log/redis:/var/log/redis \
-p 8000:8000 -t redis-3.0.7 \
/usr/bin/redis-server /etc/redis/redis.conf \
--port 8000 \
--logfile /var/log/redis/redis_8000.log
Now you have a Redis 3.0.7 instance and are ready to finish the remaining steps from the previous part.
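To confirm the containerized instance is up and has joined the cluster (assuming it binds port 8000 as above):

redis-cli -p 8000 ping
redis-cli -p 8000 cluster nodes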
I created a Django application which runs inside a Docker container. I needed to create a thread inside the Django application so I used Celery and Redis as the Celery Database.
If I install redis in the docker image (Ubuntu 14.04):
RUN apt-get update && apt-get -y install redis-server
RUN pip install redis
The Redis server is not launched: the Django application throws an exception because the connection is refused on port 6379. If I manually start Redis, it works.
If I start the Redis server with the following command, it hangs:
RUN redis-server
If I try to tweak the previous line, it does not work either:
RUN nohup redis-server &
So my question is: is there a way to start Redis in the background and make it restart when the Docker container is restarted?
The Docker "last command" (CMD) is already taken by:
CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
RUN commands only add new image layers; they are executed at build time of the image, not at runtime.
Use CMD instead. You can combine multiple commands by externalizing them into a shell script which is invoked by CMD:
CMD start.sh
In the start.sh script you write the following:
#!/bin/bash
nohup redis-server &
uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
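For completeness, the Dockerfile would then need to ship the script and use it as the container's command (the file location is an assumption):

COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]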
When you run a Docker container, there is always a single top level process. When you fire up your laptop, that top level process is an "init" script, systemd or the like. A docker image has an ENTRYPOINT directive. This is the top level process that runs in your docker container, with anything else you want to run being a child of that. In order to run Django, a Celery Worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisor as your parent process.
Another option is to actually boot the init system. This will get you very close to what you want since it will basically run things as though you had a full scale virtual machine. However, you lose many of the benefits of containerization by doing that :)
The simplest way altogether is to run several containers using Docker-compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example...
# docker-compose.yml
web:
  image: myapp
  command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
  links:
    - redis
    - mysql
celeryd:
  image: myapp
  command: celery worker -A myapp.celery
  links:
    - redis
    - mysql
redis:
  image: redis
mysql:
  image: mysql
This would give you four containers for your four top-level processes. redis and mysql would be exposed under the DNS names "redis" and "mysql" inside your app containers, so instead of pointing at "localhost" you'd point at "redis".
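Starting (and later tearing down) the whole stack is then a single command:

docker-compose up -d    # build/pull the images and start all four containers in the background
docker-compose down     # stop and remove them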
There is a lot of good info in the Docker Compose docs.
Use supervisord, which would control both processes. The conf file might look like this:
...
[program:redis]
command= /usr/bin/redis-server /srv/redis/redis.conf
stdout_logfile=/var/log/supervisor/redis-server.log
stderr_logfile=/var/log/supervisor/redis-server_err.log
autorestart=true
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
I have containers with Python apps and I need them to automatically start and expose SSH when I run them. I know it's against Docker's best practices, but right now I don't have any other solution. I'd be interested to know the best way to automatically run an additional service in a Docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)
Option 1 seems overkill to me. Also I suppose I'll have to run the container with a cmd calling the process manager instead of bash or my python app: not exactly what I want.
I don't know how to use Option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also, it won't work if I run the container with a command other than /bin/bash.
What's the best solution and how to set it up ?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.
First, write a supervisor config file that starts your Python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file to the container as one of the container build steps, expose the ports for SSH and your app (if needed) and set the CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
If you don't want to use a process manager, you can wrap your actual container command inside a shell script that runs sudo service ssh start and then executes your actual command.
#!/bin/bash
# start the ssh daemon, then run the app in the foreground
sudo service ssh start
python myapp.py -x args blah blah
This will start up ssh as a daemon, and then your python app will start up after.
Yes, we can configure supervisord for multiple processes in a container. If you want to use openssh-server, we can configure Supervisor like below:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
in the supervisord.conf file.
We can add the supervisord.conf file to the Docker image by updating a few lines in the Dockerfile:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference: Gotechnies