We have a job that takes a backup of LDAP (using slapcat) and uploads the backup file (.ldif) to S3. We want to verify whether the backed-up file is complete or incomplete, and push the data to S3 only if it is complete.
I have tried counting the lines of the file (wc -l and the find command), but this only counts lines; we also want to cover the case where the line count is greater than 0 yet the backup is still incomplete. Basically, does LDAP have any feature to validate a backup?
- name: Check whether the backup file is empty; if so, exit
  shell: "wc -l /tmp/ldap-backup.ldif | awk '{print $1}'"
  register: pl_checker
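The follow-up task that consumes pl_checker is not shown in the question; a minimal sketch of how the registered count could gate the run (the task name and message are illustrative) might look like:
- name: Fail if the backup file is empty
  fail:
    msg: "ldap-backup.ldif is empty, aborting upload"
  when: pl_checker.stdout == "0"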
I couldn't update earlier, as I was not sure whether this is a proper answer or just a hack.
What I did is an ldapsearch for objectClass=*, then grep for objectClass and count the lines with wc -l. I compared that value with the one I get by running the same grep on the LDAP backup file (the .ldif file) and counting its lines. If the two numbers match, the number of objectClass entries matches, and so the backup file has the complete data.
This was written as an Ansible playbook.
- name: Count number of objectClass entries via ldapsearch
  shell: "ldapsearch -LLL -x -D 'cn=<cn>,dc=ldap,dc=<env>,dc=<dc>,dc=<dc>' -w <password> -b 'dc=ldap,dc=<env>,dc=<dc>,dc=<dc>' 'objectClass=*' | grep objectClass | wc -l"
  register: pl_ldapsearch

- name: Count number of objectClass entries in the backup (.ldif) file
  shell: "grep objectClass /tmp/ldap-backup.ldif | wc -l"
  register: pl_ldapfile

- name: Delete the backup file if it is incomplete
  file:
    path: /tmp/ldap-backup.ldif
    state: absent
  when: pl_ldapsearch.stdout != pl_ldapfile.stdout

- name: Fail if the backup is incomplete
  fail:
    msg: "ldap-backup.ldif file incomplete, intentionally failing the playbook"
  when: pl_ldapsearch.stdout != pl_ldapfile.stdout
Not sure about the YAML syntax, as the indentation changed while posting the answer.
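For reference, the same comparison can be sketched outside Ansible as a plain shell check; this is a minimal sketch that reuses the placeholder bind options and paths from the playbook above:
#!/bin/sh
# Count objectClass lines on the live server and in the backup file;
# the <...> values are placeholders, as in the playbook above.
live=$(ldapsearch -LLL -x -D 'cn=<cn>,dc=ldap,dc=<env>,dc=<dc>,dc=<dc>' -w '<password>' \
       -b 'dc=ldap,dc=<env>,dc=<dc>,dc=<dc>' 'objectClass=*' | grep -c objectClass)
backup=$(grep -c objectClass /tmp/ldap-backup.ldif)
if [ "$live" != "$backup" ]; then
    rm -f /tmp/ldap-backup.ldif
    echo "ldap-backup.ldif incomplete ($backup of $live objectClass lines)" >&2
    exit 1
fi
# Safe to upload to S3 from here.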
I have an LDAP database that has been used for some time; naturally it is full of entries.
I recently tried to set up an index on uid to help searching. I added the following to my slapd.conf file:
include /etc/openldap/schema/core.schema
database bdb
suffix "dc=domain,dc=net"
directory /var/lib/ldap
index uid eq,pres
I then ran slapindex:
slapindex -f /etc/openldap/slapd.conf -b "dc=domain,dc=net" uid
But this didn't seem to do it. I don't know if this part is correct, but the only thing that made any progress was adding the following line to a database LDIF file in /etc/openldap/slapd.d/cn=config/:
olcDbIndex: uid pres,eq
I then ran slapindex again and started LDAP. Searching for a uid is now much faster, but it doesn't give me a result for entries that were already in the DB; only new entries show up when I do an ldapsearch and filter on the uid. For reference, the search is below, with the details of my LDAP server taken out:
ldapsearch -D "cn=admin,dc=domain,dc=net" -b "cn=users,dc=domain,dc=net" "(uid=newuser)"
What am I missing to get entries that already exist to be indexed?
For anyone with this issue, the solution is essentially to migrate your DB and add the new index attribute to the config.ldif file. For me this was done by running:
slapcat -n 0 -l config.ldif
slapcat -n 2 -l data.ldif
Then remove the files in /etc/openldap/slapd.d and /usr/local/openldap/.
Edit the config.ldif file and add your index value; if you have other index values, just copy their format. For me it looked like this:
olcDbIndex: uid eq
The final step is to add your DB back from your two LDIF files:
slapadd -c -F /etc/openldap/slapd.d -n 0 -l config.ldif
slapadd -c -F /etc/openldap/slapd.d -n 2 -l data.ldif
You should be able to start LDAP now. Make sure your ldap user is the owner of the openldap folders and their contents.
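For the ownership step, something along these lines should work; the ldap:ldap user and group are an assumption and vary by distribution (some use openldap):
# Assumes slapd runs as user/group "ldap"; adjust to your distribution
chown -R ldap:ldap /etc/openldap/slapd.d /var/lib/ldap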
Is it possible to remove all entries from LDAP with a one-line command?
I tried:
ldapdelete -r 'cn=*,dc=domain,dc=com' -w
but it's not working, and I have no better ideas.
ldapdelete removes a specific DN; you can't use a wildcard.
There is no native one-liner. You can execute an ldapsearch and provide the list of DNs resulting from that search to ldapdelete.
Something like:
ldapsearch -LLL -s one -b "dc=domain,dc=com" "(cn=*)" dn | awk -F": " '$1~/^\s*dn/{print $2}' > listOfDNtoRemove.txt && ldapdelete -r -f listOfDNtoRemove.txt
-s one : this ldapsearch option retrieves only the first-level children under the branch dc=domain,dc=com
-LLL : this option produces LDIF-format output
-r : this option recursively deletes the previously found first-level branches and their children
awk -F": " '$1~/^\s*dn/{print $2}' : this awk prints only the lines starting with dn: and extracts the value of the DN
NOTE: ldapdelete also reads the list of DNs from standard input, so you can pipe the ldapsearch results directly into ldapdelete if you want to avoid the temporary file, as sketched below.
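A minimal sketch of that piped variant; the -D/-W bind options are assumptions, so substitute whatever credentials your server requires:
ldapsearch -LLL -s one -b "dc=domain,dc=com" "(cn=*)" dn \
  | awk -F": " '$1~/^\s*dn/{print $2}' \
  | ldapdelete -r -D "cn=admin,dc=domain,dc=com" -W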
With the HDB backend
You can try this approach: go to the /var/lib/ldap directory and run this command:
sudo rm __db.* *.bdb log.*
The slapd server should preferably be shut down before running this command. Make sure you have a backup of the files before executing this.
With the MDB backend
Similar as the above, but the file names are different:
sudo rm *.mdb
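Put together for the MDB case, the whole sequence might look like this; the systemd unit name slapd and the backup path are assumptions:
# Stop slapd, keep a copy of the database files, wipe them, then restart
sudo systemctl stop slapd
sudo tar czf /root/ldap-mdb-backup.tgz -C /var/lib/ldap .
sudo rm /var/lib/ldap/*.mdb
sudo systemctl start slapd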
Using zsh 5.2 on Fedora 24 workstation.
I want to be programmatically able to:
move an image file (can have jpg/ jpeg/ png/ JPG/ PNG extensions)
from /tmp/folder1 to ~/Pictures
This file will have the same few initial characters, e.g. prefix111.jpg or prefix222.png
rename the file such that samefilename.JPG becomes 20161013.jpg
20161013 is today's date in yyyymmdd format
Note that the extension becomes lowercase
And JPEG or jpeg becomes jpg
change the permissions of the moved file to 644
All at one go.
If there are multiple prefix* files, the command should just fail silently.
I would initially like to do it at the command prompt, with an option to add a cron job later. I mean, will the same zsh command/script work in cron?
I am sure this is doable. However, with my limited shell knowledge, I could only achieve:
mv /tmp/folder1/prefix-*.JPG ~/Pictures/$(date +'%Y%m%d').jpg
Problems with my approach are many: it does not handle capitalization, does not take care of different extensions, and does not address the permission issue.
How about this:
#!/bin/sh
# Candidate source files; with /bin/sh, non-matching globs stay as literal text,
# so filter through ls to keep only the files that actually exist
FILES=$(ls /tmp/folder1/prefix*.jpg /tmp/folder1/prefix*.jpeg /tmp/folder1/prefix*.png \
        /tmp/folder1/prefix*.JPG /tmp/folder1/prefix*.JPEG /tmp/folder1/prefix*.PNG 2>/dev/null)
# Fail silently unless exactly one file matches
if [ "$(echo "$FILES" | grep -c .)" -ne 1 ]; then
    exit 1
fi
# Pick the normalized extension: png stays png, all jpg/jpeg variants become jpg
if echo "$FILES" | grep -qi '\.png$'; then
    SUFF=png
else
    SUFF=jpg
fi
DEST="$HOME/Pictures/$(date +'%Y%m%d').$SUFF"
mv "$FILES" "$DEST"
chmod 644 "$DEST"
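Since the question asks about zsh specifically, here is a minimal zsh sketch under the same assumptions (the /tmp/folder1 location and prefix naming come from the question). Case-insensitive globbing removes the need to spell out every extension variant, and nothing in it is interactive, so it should also work from a crontab that invokes zsh:
#!/usr/bin/env zsh
# Match prefix files with any capitalization of jpg/jpeg/png; (N) = no error if none match
setopt extended_glob
files=( /tmp/folder1/prefix*.(#i)(jpg|jpeg|png)(N) )
# Fail silently unless exactly one file matched
(( ${#files} == 1 )) || exit 1
ext=${files[1]:e}              # extension of the matched file
ext=${(L)ext}                  # lower-case it
[[ $ext == jpeg ]] && ext=jpg  # normalize jpeg -> jpg
dest=~/Pictures/$(date +%Y%m%d).$ext
mv -- $files[1] $dest
chmod 644 $dest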
I'm new to Apache Subversion (SVN) and I need some help using a pre-commit script to filter which files are being uploaded to my repository.
I searched a lot and found this script in another question, but it didn't work for me.
#!/bin/bash
REPOS=$1
TXN=$2
AWK=/usr/bin/awk
SVNLOOK="/usr/bin/svnlook";
#Put all the restricted formats in variable FILTER
FILTER=".(sh|xls|xlsx|exe|xlsm|XLSM|vsd|VSD|bak|BAK|class|CLASS)$"
# Figure out what directories have changed using svnlook.
FILES=`${SVNLOOK} changed -t ${REPOS} ${TXN} | ${AWK} '{ print $2 }'` > /dev/null
for FILE in $FILES; do
    # Get the base filename to extract its extension
    NAME=`basename "$FILE"`
    # Get the extension of the current file
    EXTENSION=`echo "$NAME" | cut -d'.' -f2-`
    # Check if it contains a restricted format
    if [[ "$FILTER" == *"$EXTENSION"* ]]; then
        echo "Your commit has been blocked because you are trying to commit a restricted file." 1>&2
        echo "Please contact the SVN admin. -- Thank you" 1>&2
        exit 1
    fi
done
exit 0
If I try to use svnlook changed -t repodirectory, it doesn't work because of a missing argument.
I overwrote my pre-commit.tmpl, but it didn't work. Can someone help me?
First: it seems you are using svnlook incorrectly. It should have these parameters:
svnlook changed ${REPOS} -t ${TXN}
-t means 'read from transaction', and TXN is the transaction name itself.
Second: I'm not sure if I understand correctly, but the hook file should be named pre-commit, not pre-commit.tmpl.
Third: pre-commit should have the correct permissions. For testing, try a+rwx.
Update: it is not easy to obtain a transaction object for tests, but you can use svnlook changed -r <revision> <repository_path> and experiment on already committed revisions.
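A small sketch of that testing approach; the repository path and revision number here are hypothetical:
# Make the hook executable (path is an example)
chmod a+rwx /var/svn/myrepo/hooks/pre-commit
# Dry-run the same svnlook pipeline the hook uses, against an already committed revision
/usr/bin/svnlook changed -r 42 /var/svn/myrepo | /usr/bin/awk '{ print $2 }'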
I'd like to compare a few files from the bazaar branch lp:ubuntu/nvidia-graphics-drivers. I'm mainly interested in the debian subdirectory inside that branch, but due to the binary blob in http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-graphics-drivers/oneiric/files, it takes ages to get just the text files. I've already downloaded 555MB and it's still counting.
Is it possible to retrieve a bazaar branch, including or excluding certain files by one of the following properties:
file size
file extension
file name (include only debian/ for example)
I do not need to push back any changes, nor do I need to view the history of a file. I just want to compare two files in the debian/ directory, files with the .in extension and files without.
As far as I'm aware, no. You're downloading the branch history, not just the individual files. And each file is an integral part of the branch's history.
On the bright side, you only have to check it out once. Unless those binary files change, they'll be skipped the next time you pull from Launchpad.
Depending on the branch's history, you may be able to cut down on the download size if you use a lightweight checkout (bzr checkout --lightweight). But of course, that may come back and bite you later, as it means you won't get a local copy of the branch, only the checked-out files. So it'll work much like SVN, where every operation has to go through the server. And as long as you don't need to look at the branch history, or commit your changes, that should serve you just fine, I believe.
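A sketch of what the lightweight checkout would look like for this branch; the target directory name is arbitrary:
# Fetches only the working tree, not the full branch history
bzr checkout --lightweight lp:ubuntu/nvidia-graphics-drivers nvidia-lw
cd nvidia-lw/debian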
I ended up doing some dirty grepping on the HTTP response, since bzr info "$branch" and bzr ls -d "$branch" "$directory" did not provide enough information to me.
The Bash script below relies on the workings of Launchpad's front end, Loggerhead. It recursively downloads from a given URL. Currently, it ignores *.run files. Save it as bzrdl in a directory available from $PATH and run it with bzrdl http://launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-graphics-drivers/oneiric/files/head:/debian/. All files will be saved in the current directory; be sure that it's empty to avoid conflicts.
#!/bin/bash
max_retries=5
rooturl="$1"
if ! [[ $rooturl =~ /$ ]]; then
    echo "Usage: ${0##*/} URL"
    echo "URL must end with a slash. Example URL:"
    echo "http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-graphics-drivers/oneiric/files/head:/"
    exit 1
fi
tmpdir="$(mktemp -d)"
target="$(pwd)"
# used for holding the HTTP response before extracting data
tmp="$(mktemp)"
# url_filter reads download URLs from stdin (piped)
url_filter() {
    grep -v '\.run$'
}
get_files_from_dir() {
    local slash=/
    local dir="$1"
    # to avoid name collision: a/b/c/ -> a.d/b.d/c.d/
    local storedir="${dir//$slash/.d${slash}}"
    mkdir -p "$tmpdir/$storedir" "$target/$dir"
    local i subdir
    for ((i=0; i<$max_retries; i++)); do
        if wget -O "$tmp" "$rooturl$dir"; then
            # store file list
            grep -F -B 1 '<img src="/static/images/ico_file_download.gif" alt="Download File" />' "$tmp" |\
                grep '^<a' | cut -d '"' -f 2 | url_filter \
                > "$tmpdir/$storedir/files"
            IFS=$'\n'
            for subdir in $(grep -F -B 1 '<img src="/static/images/ico_folder.gif" ' "$tmp" | \
                    grep -F '<a ' | rev | cut -d / -f 2 | rev); do
                IFS=$' \t\n'
                get_files_from_dir "$dir$subdir/"
            done
            return
        fi
    done
    echo "Failed to download directory listing of: $dir" >> "$tmpdir/errors"
}
download_files() {
    local slash=/
    local dir="$1"
    # to avoid name collision: a/b/c/ -> a.d/b.d/c.d/
    local storedir="${dir//$slash/.d${slash}}"
    local done=false
    local i subdir
    cd "$tmpdir/$storedir"
    for ((i=0; i<$max_retries; i++)); do
        if wget -B "$rooturl$dir" -nc -i files -P "$target/$dir"; then
            done=true
            break
        fi
    done
    $done || echo "Failed to download all files from $dir" >> "$tmpdir/errors"
    for subdir in *.d; do
        download_files "$dir${subdir%%.d}/"
    done
}
get_files_from_dir ''
# make *.d expand to nothing if no directories are found
shopt -s nullglob
download_files ''
echo "TMP dir: $tmpdir"
echo "Errors : $(wc -l 2>/dev/null < "$tmpdir/errors" || echo 0)"
The temporary directory and file are not removed afterwards; that must be done manually. Any errors (failures to download) will be written to $tmpdir/errors.
It's confirmed to work with:
bzrdl http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-settings/oneiric/files/head:/debian/
Feel free to correct any mistakes or add improvements.
There is no way to selectively check out a specific directory from a Bazaar branch at the moment, although we do have plans to add such support in the future.
There is definitely too much traffic for the clone you are doing, considering the size of the branch. It's probably a bug in the client implementation.
Here on bzr 2.4 it is still quite slow but not too bad (60s):
localhost:/tmp% bzr branch http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-settings/oneiric
Most recent Ubuntu Oneiric version: 275.09.07-0ubuntu1
Packaging branch status: CURRENT
Branched 37 revision(s).
From the log:
[11866] 2011-07-31 00:56:57.007 INFO: Branched 37 revision(s).
56.786 Transferred: 5335kB (95.8kB/s r:5314kB w:21kB)