Run kubectl in Apache

I have this bash script:
#!/bin/bash
USERNAME=$1
WORKDIR="dir_$USERNAME"
mkdir "deployment/$WORKDIR"
cat deployment/deploy.yml > "deployment/$WORKDIR/deploy.yml"
sed -i "s/alopezfu/$USERNAME/g" "deployment/$WORKDIR/deploy.yml"
kubectl apply -f "deployment/$WORKDIR/deploy.yml"
rm -rf "deployment/$WORKDIR/"
I use the exec() function in PHP to run it.
And I get this message in /var/log/apache/error.log:
To view or setup config directly use the 'config' command.
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
Via the command-line flag --kubeconfig
Via the KUBECONFIG environment variable
In your home directory as ~/.kube/config
I need help 🙏

Since you are running the script as a different user, you need to tell kubectl where the configuration file is.
This can be done by setting the KUBECONFIG variable in the environment.
Assuming the kubeconfig file is in /var/www/ with permissions that make it readable, you can set up your PHP script like this:
<?php
$kubeconfig = "/var/www/config"; // path to the kubeconfig file
putenv("KUBECONFIG=$kubeconfig"); // set the KUBECONFIG environment variable for child processes
$output = shell_exec("KUBECONFIG=$kubeconfig kubectl get pods -A"); // run the command (the inline assignment applies to kubectl)
echo "<pre>$output</pre>"; // and print the output
?>
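To verify the setup independently of PHP, you can run the same command as the web server user (www-data is an assumption here; Apache may run under a different account on your system):
# run kubectl as the Apache user, pointing it at the same config file
sudo -u www-data kubectl get pods -A --kubeconfig=/var/www/config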
Please be aware that:
Setting certain environment variables can be a security risk.
Some actions that should mitigate the impact:
Make sure your config file is safe and not reachable from the browser;
Consider creating a ServiceAccount with limited permissions;
Here you can find some useful commands and kubectl tips.
How to create a service account for kubectl
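As a rough sketch of that approach (all names here are hypothetical, and the edit ClusterRole is broader than strictly needed; scope it down to what the script really requires):
# create a dedicated ServiceAccount for the script
kubectl create serviceaccount deploy-bot -n default
# grant it limited permissions inside the namespace
kubectl create rolebinding deploy-bot-edit --clusterrole=edit --serviceaccount=default:deploy-bot -n default
# request a token for it (Kubernetes 1.24+) and reference that token in the kubeconfig used by Apache
kubectl create token deploy-bot -n default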


Kubernetes rolling update with updating value in deployment file

I wanted to share a solution I came up with in Kubernetes and get your opinion on the best practice for such a case. I'm still new to Kubernetes.
My problem: I wanted to be able to update my application by restarting my deployment's pod, which already executes all the necessary actions in its start command.
I'm using microk8s and I wanted to just go to the right folder, run microk8s kubectl apply -f myfilename, and let Kubernetes handle the rest with a rolling update.
My issue was how to set a dynamic value inside my .yaml file so that the command would detect a change and start the process.
I planned to write a bash script that does the job, like the following:
file="my-file-deployment.yaml"
oldstr=`grep 'my' $file | xargs`
timestamp="$(date +"%Y-%m-%d-%H:%M:%S")"
newstr="value: my-version-$timestamp"
sed -i "s/$oldstr/$newstr/g" $file
echo "old version : $oldstr"
echo "Replaced String : $newstr"
sudo microk8s kubectl apply -f $file
In my deployment.yaml file I set the following env:
env:
  - name: version
    value: my-version-2022-09-27-00:57:15
I switch the value to a new timestamp, then I launch the command:
microk8s kubectl apply -f myfilename
It is working great for the moment. I still have to configure a startupProbe to get a smoother rolling update, because I currently have a few seconds of downtime, which isn't great.
Is there a better way to handle rolling updates with microk8s?
If you are trying to trigger a rolling update on your deployment (assuming it is a deployment), you can patch the deployment and let the cluster handle the rollout. Here's a trick I use and it's literally a one-liner:
kubectl -n {namespace} patch deployment {name-of-your-deployment} \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
This will patch your deployment, adding an annotation to the template block. In this way, the cluster thinks there is a change requiring an update to the deployment's pods, and will cycle them while following the rollingUpdate clause.
The date +'%s' will resolve to a different number each time so every time you run this, it will cause the cluster to cycle the deployment's pods.
We use this trick to force a rolling update when we have done an update that requires our pods to be restarted.
You can accompany this with the rollout status command to wait for the update to complete:
kubectl rollout status deployment/{name-of-your-deployment} -n {namespace}
So a complete line would be something like this if I wanted to rolling update my nginx deployment and wait for it to complete:
kubectl -n nginx patch deployment nginx \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
&& kubectl rollout status deployment/nginx -n nginx
One caveat, though: kubectl patch does not change the YAML files on disk, so if you want a copy of the change recorded locally, for example for auditing purposes (similar to what you are doing at the moment), you can adapt this to run as a dry-run and redirect the output to a file:
kubectl -n nginx patch deployment nginx \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
--dry-run=client \
-o yaml >patched-nginx.yaml

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
from airflow.providers.ssh.hooks.ssh import SSHHook
from airflow.providers.ssh.operators.ssh import SSHOperator

sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 = """
cd /files/232-065/Rans
bash run.sh
"""
Where run.sh is the following shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in run.sh works when I am logged into the machine myself (it does not require a GUI). This makes me think the SSH session Apache Airflow opens is not the same as the one I log into, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging by the error message). The issue is with the starccm+ installation path.
Check the installation path of starccm+.
Check whether that path is part of the $PATH environment variable:
$ echo $PATH
If not, either install it in a standard location like /bin or /usr/bin (provided those are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by giving its full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
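For example, run.sh itself could prepend the installation directory to PATH before calling the solver (a sketch; /opt/starccm/bin is only a placeholder for wherever starccm+ is actually installed):
#!/bin/sh
# assumed installation directory -- replace with the real location of starccm+
export PATH="$PATH:/opt/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim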

One-liner ssh gets different environment variables than normal ssh

I am using AWS Beanstalk, in case it may be relevant to the question.
The issue I have is that when I run, from my local terminal:
ssh mozart-api printenv
most of the environment variables are missing. Instead, if I do:
ssh mozart-api
..wait to open..
printenv
I get all the environment variables I was expecting.
At first I thought it could be an ssh configuration issue on the server, but I can't find anything strange.
If I do:
ssh mozart-api "export hello=123 && echo $hello"
then it outputs 123, which means variables can be set and queried; however, I just cannot get the existing variables from the server.
This is causing an issue because I am preparing a script that will run a command over ssh on this server, and because the variables are not loaded, the project fails to open the database.
I tried re-importing them in a one-liner:
ssh mozart-api "sudo chmod +777 /etc/profile.d/sh.local && (/opt/elasticbeanstalk/bin/get-config environment | jq -r 'keys[] as \$k | \"echo export \(\$k)=\(.[\$k])\"') > /etc/profile.d/sh.local && printenv"
But I still can't see the newly added variables.
ssh mozart-api executes a login shell, which probably sources one or more files that define your environment variables.
ssh mozart-api printenv executes printenv instead of a login shell, so the only variables you see are the ones you inherit from the parent process, not any of the variables defined in your shell configuration files.
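A common workaround (a sketch, assuming the remote shell is bash and the variables are defined in its login profile files) is to wrap the remote command in a login shell so those files get sourced:
# run printenv inside a login shell so /etc/profile and ~/.bash_profile are sourced
ssh mozart-api 'bash -lc printenv'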

Is it possible to tell ansible not to use ~/.ssh/config?

My ~/.ssh/config file is interfering with Ansible; I use a lot of abbreviations in there to make my life easier when logging onto servers.
For example:
Host te*
    HostName %h.example.com
    User test
In my ansible hosts file I have:
[servers]
te1.example.com
te2.example.com
This means that when I run Ansible, the connection will fail because it will use my ssh config file and try to connect to te1.example.com.example.com.
I know I could modify the Ansible hosts file to just contain te1 and let ssh config add the rest of the domain, but other members of my team don't have their .ssh/config set up like mine, so this isn't really an option, and tbh it's the easy route which would end up causing problems for others.
Is there a way to tell Ansible not to use my (or anyone else's) .ssh/config file?
You can use the ssh_args setting (environment variable ANSIBLE_SSH_ARGS) in ansible.cfg for that. The relevant ssh parameter is -F configfile, which has the following meaning:
-F configfile
Specifies an alternative per-user configuration file. If a
configuration file is given on the command line, the system-wide
configuration file (/etc/ssh/ssh_config) will be ignored. The default
for the per-user configuration file is ~/.ssh/config.
So ssh_args with the defaults in ansible.cfg would then look like this:
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -F /dev/null
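If you prefer not to touch ansible.cfg, the same arguments can also be passed per run through the environment variable (a sketch; the all group and the ping module are just placeholders for whatever you actually run):
# ignore everyone's ~/.ssh/config for this run only
ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -F /dev/null' ansible all -m ping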
ssh -F allows you to specify "an alternative per-user configuration file".
In Ansible you can configure this via ANSIBLE_SSH_ARGS.
For example, in ansible.cfg you can set it to any file that fits your needs:
[ssh_connection]
ssh_args = -F ...
Or you might want to create a separate user (say, ansible-admin), set up that user's ~/.ssh/config, and use that account to run Ansible.
This is what worked for me in the end. I added this to my inventory (hosts) file:
[all:vars]
ansible_ssh_common_args = '-F /dev/null'
Thanks to all who answered :)

Subversion export/checkout in Dockerfile without printing the password on screen

I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password to the screen or to any logfile where stdout is directed, as in: Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log, which is also accessible to other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
So far I have not found a way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand so it does not have to be printed any more? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (i.e. that can be set up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
A Dockerfile RUN command is always executed and cached when the image is built, so the credentials svn needs to authenticate would have to be provided at build time. To avoid this kind of problem you can move the svn export call to the moment docker run is executed. To do that, create a bash script, declare it as the Docker entrypoint, and pass the username and password as environment variables. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make the script executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# optionally validate here that $REPO_USER and $REPO_PASSWORD are set
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
Maybe using SVN over SSH is a solution for you? You could generate a public/private key pair. The private key would be added to the image, while the public key gets added to the server.
For more details you could have a look at this Stack Overflow question.
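A rough sketch of that approach (the key name, paths, and svn+ssh URL are assumptions; the server must allow svn+ssh access and the public key has to be installed there):
# generate a key pair on the build host (no passphrase, so the build stays non-interactive)
ssh-keygen -t ed25519 -N "" -f ./svn_deploy_key
# install svn_deploy_key.pub on the Subversion server, then in the Dockerfile:
#   COPY svn_deploy_key /root/.ssh/id_ed25519
#   RUN chmod 600 /root/.ssh/id_ed25519 && svn export svn+ssh://subversion.myserver.com/path/to/directory
# note: this bakes the private key into an image layer, so treat the image as sensitive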
One solution is to ADD the entire SVN directory that you previously checked out on your builder's file system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' . followed by svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details client-side (unless this is disabled in its configuration) and reuses the stored username/password for subsequent operations against the same URL.
Thus you only have to run one (successful) svn export in the Dockerfile with the username/password options, and let SVN use the cached credentials afterwards (remove the auth options from later command lines).