Exploring Puppet filebuckets: Error: Could not run: File not found - backup

I am exploring Puppet filebuckets with a manifest that contains the following excerpt:
file { '/tmp/test':
  backup => 'puppet',
  # ...
}
When I apply this manifest, Puppet reports that it backed up the old version of /tmp/test into the (local) filebucket puppet:
Info: /Stage[main]/<module>/File[/tmp/test]:
Filebucketed /tmp/test to puppet with sum <hash>
This matches the following description in the documentation:
Default value: puppet, which backs up to a filebucket of the same
name. (Puppet automatically creates a local filebucket named puppet if
one doesn’t already exist.)
When I now try to inspect the contents of the filebucket with puppet filebucket --local list (or puppet filebucket --local --bucket puppet list) I get this error message:
Error: Could not run: File not found
What can explain this behavior and how can I successfully inspect the contents of the (local) filebucket? This is for Puppet version 4.10.5.

This seems to be related to a bug in Puppet 4. This workaround applies:
puppet filebucket --local \
--bucket /opt/puppetlabs/puppet/cache/clientbucket \
list
UPDATE: Piping the output of this command into sort -k 2 sorts the entries by date, oldest first (add -r for newest first).
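Once the bucket path workaround is in place, the same --bucket flag works with the other filebucket actions, so a specific backup can be inspected or restored. A minimal sketch, assuming the checksum (shown as <hash> above) is taken from the list output and that /tmp/test.restored is just an arbitrary target path:
# Print the backed-up contents for a given checksum
puppet filebucket --local \
  --bucket /opt/puppetlabs/puppet/cache/clientbucket \
  get <hash>
# Restore that backup to a path of your choice
puppet filebucket --local \
  --bucket /opt/puppetlabs/puppet/cache/clientbucket \
  restore /tmp/test.restored <hash>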

%files section of Singularity recipe non-intuitively copies files to wrong bind location

I am working on CentOS 8 and am using Singularity 3.6.2. I have a Singularity recipe file:
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
/gpfs0/home1/group/user/path/to/some.rpm /tmp
%post
ls /tmp
echo "Hello from inside the container"
When I run:
$ sudo singularity build test.simg tmp
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0
INFO: Copying /gpfs0/home1/group/user/path/to/some.rpm to /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0/tmp
INFO: Running post scriptlet
+ ls /tmp
qtsingleapp-RStudi-c679-6387e228-lockfile
rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0
rootfs-b10ad12c-229a-11eb-85a3-34800d2d90f0
+ echo 'Hello from inside the container'
Hello from inside the container
INFO: Creating SIF file...
According to the Singularity documentation:
In the default configuration, the system default bind points are $HOME, /sys:/sys, /proc:/proc, /tmp:/tmp, ...
Question:
Why is the %files section putting my rpm in /tmp/rootfs-4db1e756-22a8-11eb-bb20-34800d2d90f0/tmp and not in /tmp? That seems to contradict the documentation, and it is also different from the behavior observed with Singularity v2.5.1.
Also, how would I access said file? The long 'hash-like' part of the path seems to change with each build.
I don't have an answer reconciling the documentation with where the %files section is actually putting the files; however, I do have an answer for how to access the copied files. You need to use ${SINGULARITY_CONTAINER} in the %post section.
E.g.
$ cat Singularity
BootStrap: yum
OSVersion: 8
MirrorURL: http://mirror.centos.org/centos-8/8/BaseOS/x86_64/os/
Include: yum
%files
# Will need to use an environment variable in %post to reach the file copied here
/gpfs0/home/group/user/path/to/some.rpm /tmp
%post
ls ${SINGULARITY_CONTAINER}/tmp
echo "Hello from inside the container"
Building this yields:
$ sudo singularity build tmp.simg tmp
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0
INFO: Copying /gpfs0/home/group/user/path/to/some.rpm to /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0/tmp
INFO: Running post scriptlet
+ ls /tmp/rootfs-e2a3fbb4-242b-11eb-a267-34800d2d90f0/tmp
some.rpm
+ echo 'Hello from inside the container'
Hello from inside the container
INFO: Creating SIF file...
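As a follow-up sketch, assuming the same build environment as above: once the copied rpm is reachable via ${SINGULARITY_CONTAINER}, it can be moved to a predictable location inside the image from %post (the /opt/pkgs destination and the rpm -qip check are illustrative, not part of the original recipe):
%post
    # Hypothetical: copy the staged rpm to a fixed path inside the image
    mkdir -p /opt/pkgs
    cp ${SINGULARITY_CONTAINER}/tmp/some.rpm /opt/pkgs/
    # Inspect the package metadata (assumes rpm is present in the base image)
    rpm -qip /opt/pkgs/some.rpm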

How to dynamically pass parameter to the RPM during installation

We need to dynamically pass a variable during RPM installation and capture it in the spec file to trigger a script in %post.
Following is the RPM install command:
sudo rpm -Uvh --force abc.noarch.rpm --define '_ip 10.1.2.4' --define 'version 3'
abc.spec:
Name: abc
Version: 1
Release: 1.0
Summary: Test
%{!?_ip: %define _ip 0.0.0.0 }
%{!?_version: %define _version 0 }
%post
echo "ip:::: %{_ip}"
echo "VESION:::: %{_version}"
So when I run the above command, I get the following output.
[root@test solution]$ sudo rpm -Uvh --force abc.noarch.rpm --define '_ip 10.1.2.4' --define 'version 3'
Preparing... ################################# [100%]
Updating / installing...
1:abc ################################# [ 50%]
ip:::: 0.0.0.0
VESION:::: 0
Though I pass different values on the CLI, the arguments I pass are not being captured in the spec file.
I need input on how to capture the values I am passing on the CLI.
The option --define defines a macro. Macros are evaluated when building an RPM from a SRC.RPM using rpmbuild. The binary package (it does not matter whether it is arch or noarch) has every macro already expanded, even %{_bindir} and the like.
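One way to see this for yourself (a sketch; it assumes the built abc.noarch.rpm from the question is at hand) is to query the scriptlets stored in the finished package, where the macros have already been expanded:
# Show the %post scriptlet exactly as it is stored in the binary package;
# %{_ip} and %{_version} were already expanded at rpmbuild time.
rpm -qp --scripts abc.noarch.rpm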
The RPM ecosystem was designed to be non-interactive. This is a big difference from the DEB ecosystem, where questions can be asked using debconf.
You cannot work around it. You cannot even ask by reading STDIN directly, as rpm closes that descriptor before executing scriptlets.
The best practice is to use configuration files, e.g. /etc/abc/ip.conf. And:
either instruct the user to alter that file manually (or with Ansible) and store the correct data there,
or do NOT distribute /etc/abc/ip.conf in the main abc package and instead require abc-config. Then create one or more config packages along these lines:
Name:     abc-testing-config
Provides: abc-config
...
%files
/etc/abc/ip.conf
And you then instruct users to install abc abc-testing-config. Or it can be abc abc-EMEA-config, etc.
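A minimal sketch of how the main package's %post could then consume the value at install time; the key=value layout of /etc/abc/ip.conf is an assumption, not something defined in the original spec:
# Inside %post of the abc package: read the value from the config file
# instead of expecting a macro to be defined at install time.
IP=$(grep -E '^ip=' /etc/abc/ip.conf 2>/dev/null | cut -d= -f2)
echo "ip:::: ${IP:-0.0.0.0}"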

Run Kubectl in apache

I have this bash script:
#!/bin/bash
USERNAME=$1
WORKDIR='dir_'$USERNAME
mkdir deployment/$WORKDIR
cat deployment/deploy.yml > deployment/$WORKDIR/deploy.yml
sed -i 's/alopezfu/'$USERNAME'/g' deployment/$WORKDIR/deploy.yml
kubectl apply -f deployment/$WORKDIR/deploy.yml
rm -rf deployment/$WORKDIR/
And I use the exec() function in PHP to run it.
And I get this message in /var/log/apache/error.log:
To view or setup config directly use the 'config' command.
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
Via the command-line flag --kubeconfig
Via the KUBECONFIG environment variable
In your home directory as ~/.kube/config
I need help 🙏
Since you are running the script as a different user, you need to tell kubectl where the configuration file is.
This can be done by setting the KUBECONFIG variable in the environment.
Supposing the Kubernetes config file is in /var/www/ with permissions that make it readable, you can configure your PHP script like this:
<?php
$kubeconfig = "/var/www/config";                                    // path to the config file
putenv("KUBECONFIG=$kubeconfig");                                   // export KUBECONFIG for child processes
$output = shell_exec("KUBECONFIG=$kubeconfig kubectl get pods -A"); // run the command with KUBECONFIG set
echo "<pre>$output</pre>";                                          // and print the output
?>
Please be aware that:
Setting certain environment variables may be a security risk.
Some actions that should mitigate the impact:
Make sure your config file is safe and not reachable from the browser;
Consider creating a serviceAccount with limited permissions (see the sketch below);
Here you can find some useful commands and kubectl tips.
How to create a service account for kubectl
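A rough sketch of the serviceAccount suggestion above, with illustrative names (web-reader, pod-reader) and a read-only Role limited to pods in the default namespace:
# Create a service account with read-only access to pods in "default"
kubectl create serviceaccount web-reader -n default
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n default
kubectl create rolebinding web-reader-binding \
  --role=pod-reader --serviceaccount=default:web-reader -n default
# A kubeconfig bound to this account can then be placed at /var/www/config
# for the web server user, instead of an admin kubeconfig.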

BitBucket deployment using SSH keys to remote server

I am trying to write a YAML pipeline script to deploy files that have been altered in my Bitbucket repository to my remote server using SSH keys. The file I have in place at the moment was copied from Bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error:
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My ssh public and private keys are setup in bitbucket along with the fingerprint and host. The variables have also been setup.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
    - USER: $USER
    - SERVER: $SERVER
    - REMOTE_PATH: $REMOTE_PATH
    - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be properly indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm
pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the formal YAML structure and you can't commit an invalid file.
Even if you check the file in some other YAML editor, it may look fine, but not necessarily according to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian community, as it's very active and their staff members sometimes provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml keeps growing).
Maybe there is some nicely prepared Docker image for this job.

docker run cannot find name flag argument

I have recently set up an RStudio application on Google Compute Engine using Docker and the rocker/rstudio image. Now I want to start my saved container with a name, using the following SSH command line:
sudo docker -d -p 8787:8787 --name samplename user/laatste
which returns the following error
flag provided but not defined: --name
I have tried with and without quotes, equal signs, double and single hyphens, before, between and after the other flags and arguments, but the same error keeps returning.
version information:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
The reason I want to name the container is that I want to run standard (static) startup and shutdown scripts with the Google compute instance to automatically save and load changes made in R. The container name is used for identifying the container to be saved. Any other solution for this is also very welcome.
I guess you wanted to do:
sudo docker run -d -p 8787:8787 --name samplename user/laatste
You forgot to specify the command (run) here.
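As a follow-up for the startup/shutdown use case mentioned in the question, here is a rough sketch of how the fixed name can be used from those scripts; the exact save/restore strategy is illustrative only:
# Shutdown script: persist changes made in R, then stop the container
docker commit samplename user/laatste
docker stop samplename
# Startup script: start the existing container again by name
docker start samplename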