SaltStack job detection - redis

Intro
Lately we've been noticing some weird behaviour in our production environment. Apparently there's a task pulling data from the Prod Redis into the Staging Redis, and the process itself is managed by Salt.
What I'm trying to achieve
Bottom line: I want to understand the trigger for this action (there's no schedule command for this task; the command is being launched from the Salt master in a different manner).
Some code
This is the .sls which is running this task:
redis-server:
  service.dead:
    - enable: True

fetchredis:
  cmd.run:
    - names:
      - /usr/bin/redis-cli -h {{grains['shost']}} --rdb /etc/redis-cluster/dump.rdb
      - gsutil cp /etc/redis-cluster/dump.rdb gs://redis-rtp-bkp/{{salt['cmd.run']('date +"%Y-%m-%d-%H-%M"')}}-{{grains['shost']}}.rdb
    - prereq:
      - service: redis-server

chown:
  cmd.run:
    - name: chown -R redis /etc/redis-cluster/*
    - cwd: /
    - user: root
    - require:
      - cmd: fetchredis

start_redis:
  service.running:
    - name: redis-server
    - require:
      - cmd: chown
What I've tried so far
I have used all sorts of salt-run queries, either on specific jids that showed nothing or that returned errors.
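For reference, these are the kinds of salt-run job queries I mean (a minimal sketch; the jid shown is illustrative):
# list the jobs recorded in the master's job cache
salt-run jobs.list_jobs

# show jobs that are currently running
salt-run jobs.active

# look up the details and return data of a single job by its jid
salt-run jobs.lookup_jid 20230101120000123456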
Any suggestions on finding the trigger?
Thank you.

Found it; next time I'll know where to look. There was an .sls in the pillar directory, with the following content:
schedule:
  bkp:
    function: state.sls
    seconds: 600
    args:
      - redis.bkp
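For anyone chasing a similar ghost job, the schedule can also be inspected directly from the master (a minimal sketch; the minion target is illustrative):
# show the schedule a minion is actually running, including pillar-defined entries
salt 'redis*' schedule.list

# confirm where it comes from by checking the pillar data
salt 'redis*' pillar.get schedule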
Thank you all for your kind help.

Related

gitlab CI dependency availability between stages

I have 7 stages in my pipeline. I need ruby for 3 of the stages.
I have tried two different options:
Install ruby in each of the stages that require it,
Install ruby as part of the before_script section
Using before_script takes up too much time trying to install ruby in the 4 other stages that don't require it.
Is there a way to install dependencies as part of one stage and carry them forward for the rest of the stages?
Example yml:
image: ubuntu:21.10

before_script:
  - apt update
  - apt install ruby-full
  - apt install python3.8

stages:
  - s1
  - s2
  - s3
  - s4

s1:
  stage: s1
  script: ruby s1.rb

s2:
  stage: s2
  script: ruby s2.rb

s3:
  stage: s3
  script: python3 s3.py

s4:
  stage: s4
  script: python3 s4.py
There are a few elements here to understand. Generally, every job starts with the same fresh environment. The only exceptions are files passed through artifacts: or files restored from cache: configurations. Otherwise, actions performed in one job generally have no effect on any other job.
Using before_script takes up too much time trying to install ruby in the 4 other stages that don't require it.
It's also important to know that before_script can be set for each job independently. If one job doesn't need it, just override the before_script: key in that job.
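For example, a minimal sketch of overriding before_script for a single job (job names are illustrative):
ruby_job:
  script:
    - ruby s1.rb          # inherits the global before_script

python_job:
  before_script: []       # override: skip the global before_script entirely
  script:
    - python3 s3.py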
Anyhow. There are a few ways you might optimize your build speed with respect to dependencies:
Docker image containing your dependencies
Typically, you would just use a ruby image as your image: for jobs requiring ruby. Usually an official image from dockerhub will work, like ruby:3.1-alpine.
some_ruby_job:
  image: "ruby:3.1-alpine"
  script: # ruby is already available by default
    - echo "hello ruby"
    - ruby -v

some_other_job:
  image: alpine:latest
  script:
    - echo "this job does not need ruby"
Making a custom docker image
If your dependencies are very complex, you may even choose to create your own docker images and push them to the project's container registry so you can use the custom image with all your dependencies as your image:.
You could even build an image in one stage and use it as the image: in subsequent stages. This example uses docker caching with --cache-from to further speed up that process.
build:
  image: docker:19.03.12
  stage: .pre
  services:
    - docker:19.03.12-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME || true
    - docker build --cache-from $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME -t $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME .
    - docker push $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME

some_ruby_job:
  stage: test
  # This is the image that was built in the previous stage!
  image: $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME
  script:
    - echo "all my dependencies are here!"
    - ruby -v
Caching
To further speed things along, you may also choose to cache your ruby dependencies (say, if you install gems as part of your job).
Something like:
some_ruby_job:
  stage: one
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
  # ...
That way the vendor/ruby directory is cached, which avoids the need to download the gems again in every stage.
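Note that caching vendor/ruby only pays off if bundler actually installs gems into that path; a minimal sketch, assuming Bundler 2.1+ and a Gemfile at the repository root:
some_ruby_job:
  stage: one
  image: "ruby:3.1-alpine"
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
  script:
    # point bundler at the cached directory, then install and run
    - bundle config set --local path 'vendor/ruby'
    - bundle install
    - ruby s1.rb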
Cache policy
You can also speed up caching behavior in subsequent stages by setting the cache policy to pull (to avoid time spent uploading the cache after the job). In other words, only one job is responsible for generating the cache, and the other jobs reuse the same cache.
ruby_jobs_in_future_stages:
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
    policy: pull # only download the cache, don't upload it

How to run a command when Docker container restarts

I'm new to using Docker and docker-compose so apologies if I have some of the terminology wrong.
I've been provided with a Dockerfile and docker-compose.yml and have successfully built the images and got the container up and running (by running docker-compose up -d), but I would like to update things to make my process a bit easier, as occasionally I need to restart Apache on the container (WordPress) by accessing it using:
docker exec -it 89a145b5ea3e /bin/bash
Then typing:
service apache2 restart
My first problem is that there are two other services that I need to run for my project to work correctly and these don't automatically restart when I run the above service apache2 restart command.
The two commands I need to run are:
service memcached start
service cron start
I would like to know how to always run these commands when apache2 is restarted.
Secondly, I would like to configure my Dockerfile or docker-compose.yml (not sure where I'm supposed to be adding this) so that this behaviour is baked into the container/image when it is built.
I've managed to install the services by adding them to my Dockerfile but can't figure out how to get these services to run when the container is restarted.
Below are the contents for relevant files:
Dockerfile:
FROM wordpress:5.1-php7.3-apache
RUN yes | apt-get update -y \
&& apt-get install -y vim \
&& apt-get install -y net-tools \
&& apt-get install -y memcached \
&& apt-get install -y cron
docker-compose.yml:
version: "3.3"

services:
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql:consistent
    ports:
      - "3303:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: vagrant
      MYSQL_DATABASE: wp_database
      MYSQL_USER: root
      MYSQL_PASSWORD: vagrant

  wordpress:
    container_name: my-site
    build: .
    depends_on:
      - db
    volumes:
      - ./my-site-wp:/var/www/html/:consistent
    ports:
      - "8001:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: vagrant
      WORDPRESS_DB_NAME: wp_database

volumes:
  db_data:
  my-site-wp:
...occasionally I need to restart Apache on the container (WordPress)...
Don't do that. It's a really, really bad habit. You're treating the container like a server where you go in and fix things that break. Think of it like it's a single application -- if it breaks, restart the whole dang thing.
docker-compose restart wordpress
Or restart the whole stack, even.
docker-compose restart
Treat your containers like cattle not pets:
Simply put, the “cattle not pets” mantra suggests that work shouldn’t grind to a halt when a piece of infrastructure breaks, nor should it take a full team of people (or one specialized owner) to nurse it back to health. Unlike a pet that requires love, attention and more money than you ever wanted to spend, your infrastructure should be made up of components you can treat like cattle – self-sufficient, easily replaced and manageable by the hundreds or thousands. Unlike VMs or physical servers that require special attention, containers can be spun up, replicated, destroyed and managed with much greater flexibility.
For each container in the compose file, you can add a command: key in the YAML, which replaces the image's default command and runs every time the container starts up. On the other hand, commands in the Dockerfile will only run when the image is being built. Ex:
db:
  image: mysql:5.7
  volumes:
    - ./db_data:/var/lib/mysql:consistent
  command: # bash command goes here
  ports:
    - "3303:3306"
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: vagrant
    MYSQL_DATABASE: wp_database
    MYSQL_USER: root
    MYSQL_PASSWORD: vagrant
However, this is not what you are after. Why would you mess with one container from another container? The depends_on flag expresses the startup dependency between services. It seems your memcached instance isn't running as its own container, and hence you are trying to fit it into the application-level logic, which is the antithesis of Docker. This concern belongs at the infrastructure level, with the machine or the orchestrator (e.g. Kubernetes).
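If you do want memcached handled by Docker rather than started inside the WordPress container, a minimal sketch of the relevant services section, running it as its own compose service with the official memcached image (the tag is illustrative):
services:
  memcached:
    image: memcached:1.6-alpine   # official image; tag is illustrative
    restart: always
  wordpress:
    build: .
    depends_on:
      - memcached
    # WordPress then reaches it over the compose network at host "memcached", port 11211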

Suggested Configuration for executing K6 scripts on GitLab CI

I have been playing around with K6 performance tests on GitLab CI and I am wondering what the best and recommended approach for the setup is.
According to the K6 docs and sample project, the .gitlab-ci.yml is defined as follows:
before_script:
  - mkdir -p .k6-bin
  - |
    if [[ ! -f .k6-bin/k6 ]]; then
      curl -O -L https://github.com/loadimpact/k6/releases/download/v0.21.1/k6-v0.21.1-linux64.tar.gz;
      tar -xvzf k6-v0.21.1-linux64.tar.gz;
      mv k6-v0.21.1-linux64/k6 .k6-bin/k6;
    fi

cache:
  key: k6-bin
  paths:
    - .k6-bin

loadtest:
  stage: test
  script: .k6-bin/k6 run -o cloud loadtests/main.js
I found this to be quite verbose, especially when you consider that a prebuilt Docker image is made available. This approach would require manual updates when new versions are released and doesn’t seem as clean as the following configuration I am currently using:
loadtest:
  stage: test
  image:
    name: loadimpact/k6:latest
    entrypoint: [""]
  script: k6 run ./loadtests/main.js
Both work exactly as expected, which is why I’m wondering: does the K6 team know something I don't, and is that why they don't recommend the use of their Docker image?
Ah, I am one of the people on the k6 team and in this case you are absolutely right - the docker approach is the better one. We'll fix the documentation and the example repo - https://github.com/loadimpact/k6/issues/1196. I don't know why they advocated the other approach - it was probably an old copy-paste from another CI system that doesn't work as well with containers as GitLab CI does. Case in point, the actual k6 version used is super old - v0.21.1 was released on Jun 4 2018. Thanks for pointing this out, we'll fix the docs and example in the upcoming days, so for now stick with your gut instead of our obsolete docs!
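As a small follow-up for anyone copying this: if you would rather not track :latest, pinning the image to a specific release keeps upgrades an explicit change in the pipeline file (a minimal sketch; the tag shown is illustrative):
loadtest:
  stage: test
  image:
    name: loadimpact/k6:0.26.0   # pinned release; the tag shown is illustrative
    entrypoint: [""]
  script: k6 run ./loadtests/main.js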

BitBucket deployment using SSH keys to remote server

I am trying to write a YAML pipeline script to deploy files that have been altered from my bitbucket repository to my remote server using ssh keys. The document that I have in place at the moment was copied from bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My ssh public and private keys are setup in bitbucket along with the fingerprint and host. The variables have also been setup.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
    - USER: $USER
    - SERVER: $SERVER
    - REMOTE_PATH: $REMOTE_PATH
    - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the formal YAML structure and won't let you commit an invalid file.
Even if you check the file in some other YAML editor it may look fine, but not necessarily according to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian Community, as it's very active and their staff members sometimes provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml keeps getting bigger and bigger).
Maybe there is a nicely prepared Docker image for this job.
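On that last point, a minimal sketch of swapping the base image for one that already ships PHP and Composer (the image tag is illustrative), which trims the apt-get and curl steps:
image: composer:2   # official Composer image with PHP preinstalled; tag is illustrative

pipelines:
  default:
    - step:
        script:
          - composer install
          - cp .env.example .env
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'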

How do we use the 'variables' keyword in gitlab-ci.yml?

I am trying to make use of the variables: keyword documented in the Gitlab CI Documentation here:
FROM: https://docs.gitlab.com/ce/ci/yaml/README.html
variables
This feature requires gitlab-runner with version equal or greater than
0.5.0.
GitLab CI allows you to add to .gitlab-ci.yml variables that are set
in build environment. The variables are stored in repository and are
meant to store non-sensitive project configuration, ie. RAILS_ENV or
DATABASE_URL.
variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"
These variables can be later used in all executed commands and
scripts.
The YAML-defined variables are also set to all created service
containers, thus allowing to fine tune them.
When I attempt to use it, my builds do not run any stages and are marked successful anyway, a good sign of bad YAML. I pasted my gitlab-ci.yml contents into the LINT tool in the settings area and the output error is:
Status: syntax is incorrect
Error: variables job: unknown parameter PACKAGE_NAME
I'm using the same YAML syntax as the docs; however, it will not work. I'm unable to find any open bugs related to this. Below are my current versions and a sanitized version of my gitlab-ci.yml.
Gitlab Version: 7.13.2 Omnibus
Gitlab Runner Version: 0.5.2
gitlab-ci.yml (Sanitized)
types:
  - test
  - build

variables:
  PACKAGE_NAME: "awesome-django-app"
  PACKAGE_SUMMARY: "Awesome webapp backend."
  MAJOR_RELEASE: "1"
  MINOR_RELEASE: "0"
  PATCH_LEVEL: "0dev"
  DEV_DB_URL: "db"
  DEV_SERVER: "pydev.example.com"
  PROD_SERVER: "pyprod.example.com"
  TEST_SERVER: "pytest.example.com"

envtest:
  type: test
  script:
    - ". ./testbuild.sh"
  tags:
    - python2.7
    - postgres
    - linux
  except:
    - tags

buildrpm:
  type: build
  script:
    - mkdir -p ~/rpmbuild/SOURCES
    - mkdir -p ~/rpmbuild/SPECS
    - mkdir -p ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL
    - cp $PACKAGE_NAME.spec ~/rpmbuild/SPECS/.
    - cp -r * ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL/.
    - cd ~/tarbuild
    - tar -zcf ~/rpmbuild/SOURCES/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL.tar.gz *
    - cd ~
    - rm -Rf ~/tarbuild
    - rpmlint -i ~/rpmbuild/SPECS/$PACKAGE_NAME.spec
    - echo $CI_BUILD_ID
    - 'rpmbuild -ba ~/rpmbuild/SPECS/$PACKAGE_NAME.spec \
      --define="_build_number $CI_BUILD_ID" \
      --define="_python_version_min 2.7" \
      --define="_version $MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL" \
      --define="_package_name $PACKAGE_NAME" \
      --define="_summary $SUMMARY"'
    - scp rpmbuild/RPMS/noarch/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL-$CI_BUILD_ID.noarch.rpm $DEV_SERVER:~/.
  tags:
    - python2.7
    - postgres
    - linux
    - rpm
  except:
    - tags
Question:
How do I use this value properly?
Additional Info:
Removing this section from the YAML file causes everything to work so the rest of the file is in working order. (Of course undefined variables lead to script errors...)
Even just reducing the variables for testing down to just PACKAGE_NAME causes the same break.
The original answer is no longer correct.
The documentation quoted above now stands, and there are more options as well: variables can be created from the GUI, the API, or by being defined in .gitlab-ci.yml.
https://docs.gitlab.com/ce/ci/variables/README.html
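On a current GitLab, a minimal sketch of YAML-defined variables at both the global and the job level (job name is illustrative; variable names are taken from the question):
variables:
  PACKAGE_NAME: "awesome-django-app"   # available to every job

build_job:
  variables:
    PATCH_LEVEL: "0dev"                # job-level variables are merged with the global ones
  script:
    - echo "building $PACKAGE_NAME at patch level $PATCH_LEVEL"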
While it is in the documentation, I do not believe that variables were included in the latest version of gitlab (7.13). The functionality to read variables out of the yaml files was brought in by a commit by ayufan 9 days ago.
Looking at the parser on the 7.13 stable branch, you can see that his contribution did not make it in. So assuming you're on 7.13 or earlier, I'm afraid we are out of luck. Since it is on master, I am fairly certain that we'll see it in the next release. Until then, we could either monkey patch, do a git pull if you're using the source directly, or just rely on the project variables until the next release.