I use BorgBackup for my backups. Here is what I get when I list my archives:
borg list borg@[SERVER_IP]:/home/backups/$(hostname)
jenkins_data_2018-06-16 Sat, 2018-06-16 09:28:08
redmine_data_2018-06-16 Sat, 2018-06-16 09:31:38
Now, I would like to add a "borg prune" command and check what it would delete:
borg prune -v --list --dry-run borg@[SERVER_IP]:/home/backups/$(hostname) --keep-daily=7 --keep-weekly=4 --keep-monthly=3
Keeping archive: redmine_db_2018-06-16 Sat, 2018-06-16 09:31:38
Would prune: jenkins_data_2018-06-16 Sat, 2018-06-16 09:28:08
So, Borg would prune an archive that was created today.
Do you know why?
You told borg prune to keep the latest backup per day for the last 7 days.
As the redmine backup is newer (later) than the jenkins one, borg keeps it for that day and deletes the other one.
Of course this is not what you wanted, as the two backups are not of the same input data. But for borg to be able to "see" that, you need to run:
borg prune --prefix redmine_db_ ...
borg prune --prefix jenkins_data_ ...
The official borg prune manual page (https://borgbackup.readthedocs.io/en/stable/usage/prune.html) states:
If a prefix is set with -P, then only archives that start with the
prefix are considered for deletion and only those archives count
towards the totals specified by the rules. Otherwise, all archives in
the repository are candidates for deletion! There is no automatic
distinction between archives representing different contents. These
need to be distinguished by specifying matching prefixes.
So, borg considers ALL archives in the repository as one backup set, and all of them are candidates for pruning. You need to specify which backup set to prune with the --prefix argument, together with whichever retention options you want (-d, -w, -m, etc.), so that those rules are applied only to the archives matching that prefix.
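With the retention rules from the question, the two dry runs would then look roughly like this (same repository URL as above):
borg prune -v --list --dry-run --prefix redmine_db_ --keep-daily=7 --keep-weekly=4 --keep-monthly=3 borg@[SERVER_IP]:/home/backups/$(hostname)
borg prune -v --list --dry-run --prefix jenkins_data_ --keep-daily=7 --keep-weekly=4 --keep-monthly=3 borg@[SERVER_IP]:/home/backups/$(hostname)
Each run then only sees (and only counts) the archives of its own backup set.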
I would like to use the Flyte APIs to fetch the latest launch plan for a deployment environment without specifying the SHA.
Users are encouraged to specify the SHA when referencing Launch Plans or any other Flyte entity. However, there is one exception: Flyte has the notion of an active launch plan. For a given project/domain/name combination, a Launch Plan can have any number of versions; all four fields combined identify one specific Launch Plan and form its primary key. At most one of those launch plans can also be what we call 'active'.
To see which ones are active, you can use the list-active-launch-plans command in flyte-cli:
(flyte) captain@captain-mbp151:~ [k8s: flytemain] $ flyte-cli -p skunkworks -d production list-active-launch-plans -l 200 | grep TestFluidDynamics
NONE 248935c0f189c9286f0fe13d120645ddf003f339 lp:skunkworks:production:TestFluidDynamics:248935c0f189c9286f0fe13d120645ddf003f339
However, please be aware that if a launch plan is active, and has a schedule, that schedule will run. There is no way to make a launch plan "active" but disable its schedule (if it has one).
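If you want to pick that identifier up from a script (i.e. without hard-coding the SHA, as asked), one rough sketch is to cut the URN out of that listing; this assumes the three-column output shown above:
flyte-cli -p skunkworks -d production list-active-launch-plans -l 200 | grep TestFluidDynamics | awk '{print $3}'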
If you would like to set a launch plan as active, you can do so with the update-launch-plan command.
First find the version you want (results truncated):
(flyte) captain@captain-mbp151:~ [k8s: flytemain] $ flyte-cli -p skunkworks -d staging list-launch-plan-versions -n TestFluidDynamics
Using default config file at /Users/captain/.flyte/config
Welcome to Flyte CLI! Version: 0.7.0b2
Launch Plan Versions Found for skunkworks:staging:TestFluidDynamics
Version Urn Schedule Schedule State
d4cf71c20ce987a4899545ae01286f42297a8f3b lp:skunkworks:staging:TestFluidDynamics:d4cf71c20ce987a4899545ae01286f42297a8f3b
9d3e8d156f7ba0c9ac338b5d09949e88eed1f6c2 lp:skunkworks:staging:TestFluidDynamics:9d3e8d156f7ba0c9ac338b5d09949e88eed1f6c2
248935c0f189c928b6ffe13d120645ddf003f339 lp:skunkworks:staging:TestFluidDynamics:248935c0f189c928b6ffe13d120645ddf003f339
...
Then
flyte-cli update-launch-plan --state active -u lp:skunkworks:staging:TestFluidDynamics:d4cf71c20ce987a4899545ae01286f42297a8f3b
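If you want to double-check that the switch took effect, you can list the active launch plans again (same command as earlier, now against staging) and confirm the new version shows up:
flyte-cli -p skunkworks -d staging list-active-launch-plans -l 200 | grep TestFluidDynamics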
I am currently working on a CI/CD script to deploy a complex environment into another environment. We have multiple technologies involved, and I want to optimize this script because it takes too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment of each application in a specific project. I have tried to find a quick way to do so, but so far this is the only solution I have found:
oc rollout history dc -n <Project_name>
This will give me the following output:
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete".
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
Then, for each application and its latest complete revision, I run:
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest revision that is complete (neither still running nor failed).
This gives me the output with the information I need: the version of the EAR and the version of the configuration that were used to build the image for this successful deployment.
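Put together, the current process is roughly the loop below (a sketch; the project name is a placeholder):
for app in $(oc get dc -n <Project_name> --no-headers | awk '{print $1}'); do
  # last revision whose STATUS column is "Complete"
  rev=$(oc rollout history dc/$app -n <Project_name> | awk '$2 == "Complete" {r=$1} END {print r}')
  oc rollout history dc/$app -n <Project_name> --revision=$rev
done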
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I need?
Unless I am mistaken, there does not seem to be a one-liner that can get this information for the currently running and accessible applications.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets the list of deployments, feeds it to awk to extract the name ($1) and revision ($2), builds your command to extract the details, and finally sources it from standard input to execute it. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
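For the record, an xargs variant of the same idea (one line per deployment: name plus revision flag) would look something like this:
oc get dc -a --no-headers | awk '{print $1, "--revision="$2}' | xargs -L1 oc rollout history dc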
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks@sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift@sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle@sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea: with expressions like .spec.template.spec.containers[0].env you can reach specific variables, labels, and so on. Unfortunately, jsonpath output is not available for oc rollout history.
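As a trimmed-down variant of the same idea, here is a sketch that prints only the name, the latest revision number and the image; the .status.latestVersion field name is assumed from the DeploymentConfig schema:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.latestVersion}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'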
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks
I'm looking at modifying a backup script that has been set up for me on my server. The script currently runs each morning to back up all of my domains under the /var/www/vhosts/ directory, and I'd like it to run only four times per week (Sun, Tue, Thu, Sat) instead of daily, if possible. I'm relatively new to the scripting language/commands and was wondering if someone might be able to help me with this. Here is the current script:
umask 0077
BPATH="/disk2/backups/vhosts_backups/`date +%w`"   # one directory per weekday (0-6)
LOG="backup.log"
# clear out the previous backup for this weekday
/bin/rm -rf $BPATH/*
# archive each vhost into its own tarball
for i in `ls /var/www/vhosts`
do
tar czf $BPATH/$i.tgz -C /var/www/vhosts $i 2>>$BPATH/$LOG
done
Thank you,
Jason
To answer my own question (in case it could benefit anyone else), it turns out that the backup script was scheduled through crontab, and that's what needed the adjustment. I did crontab -e and changed the 5th field (day of week) below from an * to "0,2,4,6" (for Sun, Tue, Thu, Sat).
5 1 * * 0,2,4,6 /root/scripts/vhosts_backup.sh
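For completeness, the same effect could be achieved inside the script itself with a small guard at the top (a sketch, reusing the date +%w convention the script already relies on):
# run only on Sun (0), Tue (2), Thu (4) and Sat (6); exit quietly otherwise
case "$(date +%w)" in
    0|2|4|6) ;;
    *) exit 0 ;;
esac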
Currently, we have the following in our CVS repository:
Module1
|
|
+-----A
|
+-----B
We want to restructure this module so that the subdirectories A and B appear as top-level modules. What I could do is check out Module1, pull A and B out, and then do a fresh cvs add for A and B individually, thus making them new CVS modules. But I am sure that if I do this, I will lose the history, and I would also have to remove all the internal CVS folders under A and B.
Q1: So is there a way to restructure this and retain the history?
What I am essentially trying to do is control access to A and B separately.
So -
Q2: Is there a way to set up security so that certain users can check out Module1/A only and not Module1/B, and vice versa?
Q1: So is there a way to restructure this and retain the history?
Like you wrote in your comment, if you have sys privs you can mv modules around the repository and keep the history of all the files below A and B, but in doing so you lose the history that /A used to be Module1/A and /B used to be Module1/B (not to mention that build scripts will probably break). Subversion resolves this for you by offering a move (or rename) command that remembers the move/rename history of a module.
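For illustration, that repository-side move (done as root, with paths assumed from the layout above) would be something like:
# move the RCS directories so A and B become top-level modules;
# file-level history is kept, the fact that they lived under Module1 is not
mv $CVSROOT/Module1/A $CVSROOT/A
mv $CVSROOT/Module1/B $CVSROOT/B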
Q2: Is there a way to set up security so that certain users can check out Module1/A only and not Module1/B, and vice versa?
There sure is: use group permissions. From this page:
http://www.linux.ie/articles/tutorials/managingaccesswithcvs.php
Here's the snippet I'm referring to, in case that page ever goes away:
To every module its group
We have seen earlier how creating a cvsusers group helped with the coordination of the work of several developers. We can extend this approach to permit directory-level check-in restrictions.
In our example, let's say that the module "cujo" is to be r/w for jack and john, and the module "carrie" is r/w for john and jill. We will create two groups, g_cujo and g_carrie, and add the appropriate users to each - in /etc/group we add
g_cujo:x:3200:john,jack
g_carrie:x:3201:john,jill
Now in the repository (as root), run
find $CVSROOT/cujo -exec chgrp g_cujo {} \;
find $CVSROOT/carrie -exec chgrp g_carrie {} \;
ensuring, as before, that all directories have the gid bit set.
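For reference, setting that gid bit would look something like this:
find $CVSROOT/cujo -type d -exec chmod g+s {} \;
find $CVSROOT/carrie -type d -exec chmod g+s {} \;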
Now if we have a look in the repository...
john@bolsh:~/cvs$ ls -l
total 16
drwxrwsr-x 3 john cvsadmin 4096 Dec 28 19:42 CVSROOT
drwxrwsr-x 2 john g_carrie 4096 Dec 28 19:35 carrie
drwxrwsr-x 2 john g_cujo 4096 Dec 28 19:40 cujo
and if Jack tries to commit a change to carrie...
jack@bolsh:~/carrie$ cvs update
cvs server: Updating .
M test
jack@bolsh:~/carrie$ cvs commit -m "test"
cvs commit: Examining .
Checking in test;
/home/john/cvs/carrie/test,v <-- test
new revision: 1.2; previous revision: 1.1
cvs [server aborted]: could not open lock file
`/home/john/cvs/carrie/,test,': Permission denied
jack@bolsh:~/carrie$
But in cujo, there is no problem.
jack@bolsh:~/cujo$ cvs update
cvs server: Updating .
M test
jack@bolsh:~/cujo$ cvs commit -m "Updating test"
cvs commit: Examining .
Checking in test;
/home/john/cvs/cujo/test,v <-- test
new revision: 1.2; previous revision: 1.1
done
jack@bolsh:~/cujo$
The procedure for adding a user is now a little more complicated than it might be. To create a new CVS user, we have to create a system user, add them to the groups corresponding to the modules they may write to, and (if you're using the pserver method) generate a password for them and add an entry to CVSROOT/passwd.
To add a project, we need to create a group, import the sources, change the groups on all the files in the repository, make sure the set-gid-on-execution bit is set on all directories inside the module, and add the relevant users to the group.
There is undoubtedly more administration needed to do all this than when we jab people with a pointy stick. In that method, we never have to add a system user or a group or change the groups on directories - all that is taken care of once we set up the repository. This means that an unprivileged user can be the CVS admin without ever having root privileges on the machine.
Is it possible to use AIX's mksysb and savevg to create a bootable tape with the rootvg and then append all the other VGs?
Answering my own question:
To backup, use a script similar to this one:
tctl -f/dev/rmt0 rewind
/usr/bin/mksysb -p -v /dev/rmt0.1
/usr/bin/savevg -p -v -f/dev/rmt0.1 vg01
/usr/bin/savevg -p -v -f/dev/rmt0.1 vg02
/usr/bin/savevg -p -v -f/dev/rmt0.1 vg03
...etc...
tctl -f/dev/rmt0 rewind
Notes:
- mksysb backs up rootvg and creates a bootable tape.
- using "rmt0.1" prevents auto-rewind after operations.
Also, mkszfile and mkvgdata were used beforehand to create the "image.data" file and the various "vgdata" and map files. I did this because my system runs all disks mirrored and I wanted the option of restoring with only half the number of disks present. All my image.data, vgdata and map files were written for an unmirrored layout to allow more flexibility during restore.
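For reference, those data files are generated with commands along these lines (run once, then edited by hand for the unmirrored layout; VG names as in the script above):
mkszfile          # writes /image.data for rootvg
mkvgdata vg01     # writes the vgdata files for vg01
mkvgdata vg02
mkvgdata vg03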
To restore, the procedures are:
For rootvg, boot from tape and follow the on-screen prompt (a normal mksysb restore).
For the other volume groups, it goes like this:
tctl -f/dev/rmt0.1 rewind
tctl -f/dev/rmt0.1 fsf 4
restvg -f/dev/rmt0.1 hdisk[n]
"fsf 4" will place the tape at the first saved VG following the mksysb backup. Use "fsf 5" for the 2nd, "fsf 6" for the 3rd, and so on.
If restvg complains about missing disks, you can add the "-n" flag to forego the "exact map" default parameter.
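For example (disk name as in the restore step above):
restvg -n -f/dev/rmt0.1 hdisk[n]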
If you need to recover single files, you can do it like this:
tctl -f/dev/rmt0 rewind
restore -x -d -v -s4 -f/dev/rmt0.1 ./[path]/[file]
"-s4" is rootvg, replace with "-s5" for 1st VG following, "-s6" for 2nd, etc. The files are restored in your current folder.
This technique gives you a single tape that can be used to restore any single file or folder, and also to completely rebuild your system from scratch.
First, use savevg to backup any extra volume groups to a file system on the rootvg:
savevg -f /tmp/vgname vgname
Compress it if it will be too large, or use the -e option to exclude files. The easiest way is to exclude all files on the volume group and restore those from the regular backup device. Once that is done, create your normal mksysb.
For DR purposes, restore the system using the mksysb, then use restvg to restore the volume groups out of your backup files. Restore any extra files that may have been excluded, and you're running again.
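A sketch of that DR restore for one of the extra volume groups, assuming the savevg file from above is available again after the mksysb restore (disk name is a placeholder):
restvg -f /tmp/vgname hdisk[n]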