openldap index existing values

I have an LDAP database that has been used for some time, so naturally it is full of entries.
I recently tried to set up an index for uid to help searching, so I added the following to my slapd.conf file:
include /etc/openldap/schema/core.schema
database bdb
suffix "dc=domain,dc=net"
directory /var/lib/ldap
index uid eq,pres
I then ran slapindex
slapindex -f /etc/openldap/slapd.conf -b "dc=domain,dc=net" uid
But this didn't seem to do it. I don't know if this part is correct, but the only thing that made any progress was adding the following line to a db ldif file in /etc/openldap/slapd.d/cn=config/:
olcDbIndex: uid pres,eq
I then ran slapindex again and started LDAP. Searching for a uid is now much faster, but it doesn't return entries that were already in the db; only new entries show up when I do an ldapsearch and filter on the uid. For reference, the search is below, but I have taken out details of my ldap server:
ldapsearch "cn=admin,dc=domain,dc=net" -b "cn=users,dc=domain,dc=net" "(uid=newuser)"
What am I missing to get entries that already exist to be indexed?

For anyone with this issue, the solution is essentially to migrate your DB and add the new index attribute to the config.ldif file. For me this was done by running:
slapcat -n 0 -l config.ldif
slapcat -n 2 -l data.ldif
Then remove the files in /etc/openldap/slapd.d and /usr/local/openldap/
Edit the config.ldif file and add in your index value; if you have other index values, just copy them. For me it looked like this:
olcDbIndex: uid eq
The final step is to add your DB back with your two ldif files:
slapadd -c -F /etc/openldap/slapd.d -n 0 -l config.ldif
slapadd -c -F /etc/openldap/slapd.d -n 2 -l data.ldif
You should be able to start LDAP now. Make sure your ldap user is the owner of the openldap folders and their contents.
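If slapadd was run as root, ownership is easy to fix afterwards. A minimal sketch, assuming the service account and group are both named ldap (on Debian/Ubuntu they are usually openldap) and the default directory layout used above:
# give the LDAP service account the config and data directories back
chown -R ldap:ldap /etc/openldap/slapd.d /var/lib/ldap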

How to copy file from server to local using ssh and sudo su?

Somewhat related to: Copying files from server to local computer using SSH
When debugging on the DEV server I can see logs with:
# Bash for Windows
ssh username@ip
# On server as username
sudo su
# On server as su
cat path/to/log.file
The problem is that while every line of the file is indeed printed out, the terminal seems to have a scrollback limit, and I can only see the last so-many lines after the printing is done.
If there is a better solution, please bring it forward; otherwise, how do I copy the "log.file" to my computer?
Note: I don't have a password for my username, because the user is created with echo "$USER ALL=(ALL:ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/$USER.
After sudo su, copy the file to the /tmp folder on the server with:
cp path/to/log.file /tmp/log.file
After that, the standard command should work (run it from your local machine):
scp username@ip:/tmp/log.file log.file
log.file is now in the current directory (echo $PWD).
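Since the question notes passwordless sudo, a one-step alternative (a sketch, untested against your setup) is to stream the file over ssh from your local machine and skip the /tmp copy entirely:
# run locally; sudo needs no password per the sudoers rule in the question
ssh username@ip "sudo cat path/to/log.file" > log.file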

Query runs successfully and fetches empty result from user defined bucket, scope, and collection

I have set up a local Couchbase one-node cluster environment on Ubuntu.
A query runs and fetches results from the default bucket after importing all the JSON documents in the zip folder into the default bucket using the cbdocloader command.
Command:
/opt/couchbase/bin/cbdocloader -c localhost:8091 -u Administrator -p 10i-0113 -b mybucket -m 100 -d Downloads/JSONs_List20211229-20220123T140145Z-001.zip
A query against my user-defined bucket, scope, and collection runs but fetches an empty result, and I can't find the reason for this, although I have successfully imported the JSON documents using the command below:
/opt/couchbase/bin/cbimport json -c localhost:8091 -u Administrator -p 10i-0113 -b One_bucket -f lines -d file://'fileset__e53c883b-bc30-42cb-b4f7-969998c91e3d.json' -t 2 -g %type%::%id% --scope-collection-exp Raw.%type%
My guess is that when I try to create the index, it creates an index on the default bucket, and I cannot find a way to create an index on my custom bucket.
Please assist.
I have fixed it :). Yes, I was not getting any results when I tried to query the collection because there was no index created on it.
Creating the index fixed the issue.
CREATE PRIMARY INDEX ON default:onebucket.rawscope.fileset
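To confirm the index works, a simple query against the same keyspace path (assuming the bucket, scope, and collection names from the statement above) should now return documents:
SELECT * FROM default:onebucket.rawscope.fileset LIMIT 10;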

Verify size of ldap backup file

We have a job that takes a backup of LDAP (using slapcat). We upload the backup file (.ldif) to S3, and we want to verify whether the backed-up file is complete or incomplete, and only push the data to S3 if it is complete.
I have tried counting the lines of the file (wc -l and the find command). But this only counts lines; we want to cover the case where the number of lines is greater than 0 yet the backup is still incomplete. Basically, does LDAP have any feature to validate the backup?
- name: check whether file is empty or not, if yes, exit
  shell: "wc -l /tmp/ldap-backup.ldif | awk '{print $1}'"
  register: pl_checker
I couldn't update earlier as I am not sure whether this is a proper answer or just a hack.
What I did is an ldapsearch for objectClass=*, then grep for objectClass and count the lines with wc -l. I compared that value with the count I get by grepping the LDAP backup file (the .ldif file) and counting the lines there. If the two numbers match, the number of objectClass lines matches, and so the backup file has the complete data.
This was written in ansible playbook.
- name: Count number of objectClass by ldapsearch
  shell: "ldapsearch -LLL -xD 'cn=<cn>,dc=ldap,dc=<env>,dc=<dc>,dc=<dc>' -w <password> -b 'dc=ldap,dc=<env>,dc=<dc>,dc=<dc>' objectClass=* | grep objectClass | wc -l"
  register: pl_ldapsearch

- name: Count number of objectClass in backup (.ldif) file
  shell: "cat /tmp/ldap-backup.ldif | grep objectClass | wc -l"
  register: pl_ldapfile

- name: Delete backup file if it is incomplete
  file:
    path: /tmp/ldap-backup.ldif
    state: absent
  when: pl_ldapsearch.stdout != pl_ldapfile.stdout

- name: condition check
  fail: msg="ldap-backup.ldif file incomplete, intentionally failing the playbook"
  when: pl_ldapsearch.stdout != pl_ldapfile.stdout
Not sure about the YAML syntax, as it changed while posting the answer.
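A complementary check (my own suggestion, not part of the original answer): slapadd has a dry-run flag that parses an LDIF without writing anything, which catches a truncated or malformed backup:
# -u = dry-run: parse and verify the LDIF only
slapadd -u -f /etc/openldap/slapd.conf -l /tmp/ldap-backup.ldif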

How to remove all records from LDAP?

Is it possible to remove all entries from LDAP with a one-line command?
I tried:
ldapdelete -r 'cn=*,dc=domain,dc=com' -w
but it's not working. I have no better ideas :/
ldapdelete is for removing a specific DN; you can't use a wildcard.
There is no native "one-liner". You can execute an ldapsearch and feed the list of DNs resulting from this search to ldapdelete.
Something like:
ldapsearch -LLL -s one -b "dc=domain,dc=com" "(cn=*)" dn | awk -F": " '$1~/^\s*dn/{print $2}' > listOfDNtoRemove.txt && ldapdelete -r -f listOfDNtoRemove.txt
-s one : this ldapsearch option retrieves only the first-level children under the branch dc=domain,dc=com
-LLL : this option produces LDIF-format output
-r : this option recursively deletes the previously found first-level branches and their children
awk -F": " '$1~/^\s*dn/{print $2}' : this awk prints only the lines starting with dn: and outputs the value of each dn
NOTE: ldapdelete also reads the list of DNs from standard input, so you can pipe the ldapsearch results directly into ldapdelete if you want to avoid the temporary file.
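For example, the piped variant of the same command (same options and assumptions as above) would be:
ldapsearch -LLL -s one -b "dc=domain,dc=com" "(cn=*)" dn | awk -F": " '$1~/^\s*dn/{print $2}' | ldapdelete -r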
With the HDB backend
You can try this approach: go to the /var/lib/ldap directory and run this command:
sudo rm __db.* *.bdb log.*
The slapd server should preferably be shut down before running this command.
Make sure you have a backup of the files before executing this.
With the MDB backend
Similar to the above, but the file names are different:
sudo rm *.mdb

scp files in a certain order using ls

Whenever I try to SCP files (in bash), they end up in a seemingly random(?) order.
I've found a simple but not-very-elegant way of keeping a desired order, described below. Is there a clever way of doing it?
Edit: deleted my early solution from here, cleaned, adapted using other suggestions, and added as an answer below.
To send files from a local machine (e.g. your laptop) to a remote (e.g. your calculation server), you can use Merlin2011's clever solution:
Go into the folder on your local machine that you want to copy files from.
Execute the scp command, assuming you have an access key for the remote server:
scp -r $(ls -rt) user@foo.bar:/where/you/want/them/.
Note: if you don't have a public access key it may be better to do something similar using tar: pack the files in order with tar -zcvf files.tar.gz $(ls -rt), and then send that tar file on its own using scp.
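For completeness, the full tar round trip could look like this (a sketch, reusing the placeholder host and path from above; tar stores files in argument order, so the order survives extraction):
tar -zcvf files.tar.gz $(ls -rt)
scp files.tar.gz user@foo.bar:/where/you/want/them/
ssh user@foo.bar 'cd /where/you/want/them && tar -zxvf files.tar.gz'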
But to do it the other way around, you might not be able to run the scp command directly from the remote server to send files to, say, your laptop. Instead, you may need to pull the files onto your laptop. My brute-force solution is:
On the remote server, cd into the folder you want to copy files from.
Create a list of the files in the order you want. For example, for reverse order of creation (most recent copied last):
ls -rt > ../filenames.txt
Now you need to prepend the path to each file name. Before you go up from the directory the list points at, print the path using pwd, then go up: cd ..
There are many ways to prepend this path to each file name in the list; here's one using awk:
cat filenames.txt | awk '{print "path/to/files/" $0}' > delete_me.txt
You need the filenames to be on the same line, separated by spaces, so change newlines to spaces:
tr '\n' ' ' < delete_me.txt > filenames.txt
Get filenames.txt to your local machine, and put it in the folder you want to copy the files into.
The scp run would be:
scp -r user@foo.bar:"$(cat filenames.txt)" .
Similarly, this assumes you have a private access key; otherwise it's much simpler to tar the files on the remote and bring the archive over.
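The whole remote-to-local procedure can be condensed into two commands run from the local machine. A sketch, assuming key-based auth, no spaces in the filenames, and the classic OpenSSH scp, which lets the remote shell split the space-separated list:
# build the ordered, path-prefixed list on the remote, then fetch everything in one scp
files=$(ssh user@foo.bar 'cd /path/to/files && ls -rt | sed "s|^|/path/to/files/|" | tr "\n" " "')
scp user@foo.bar:"$files" .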
One can achieve file transfer in alphabetical order using rsync:
rsync -P -e ssh -r user@remote_host:/some_path local_path
-P keeps partially transferred files and shows progress, -e sets the remote shell to ssh, and -r downloads recursively.
You can do it in one line without an intermediate file using xargs:
ls -r <directory> | xargs -I {} scp <directory>/{} user@foo.bar:folder/
Of course, this would require you to type your password multiple times if you do not have public key authentication.
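One way around the repeated password prompts (my own addition, not from the original answer) is OpenSSH connection multiplexing, so every scp call reuses a single authenticated connection:
# open a master connection once; this is the only password prompt
ssh -Nf -o ControlMaster=yes -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=10m user@foo.bar
# subsequent copies ride on the master connection without re-authenticating
ls -r <directory> | xargs -I {} scp -o ControlPath=~/.ssh/cm-%r@%h:%p <directory>/{} user@foo.bar:folder/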
You can also use cd and still skip the intermediate file.
cd <directory>
scp $(ls -r) user@foo.bar:folder/