Import a whitespace value to LDAP using LDIF

LDAP doesn't allow empty field values. Once, when I needed an empty field, I inserted a single space instead (using Ruby code). Now I've exported the data to LDIF, and since whitespace is not significant in LDIF, my single-space values are not preserved.
I want to import that data.ldif into another LDAP instance, but the server complains about the empty fields, because the single-space values were not preserved in any special way.
Is there a way to preserve my single-space values in LDIF? (Should I put quotes around them, or something like that?)

Solved! One can encode the space in Base64.
Find and replace:
middleName:␣␣
with
middleName:: IA==
(the ␣ marks a literal space). The double colon means the value that follows is Base64-encoded; IA== is a single UTF-8 space (0x20), encoded in Base64.
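You can sanity-check the encoding from a shell (GNU coreutils base64; this is just a verification, not part of the fix):
printf ' ' | base64            # prints IA==
printf 'IA==' | base64 -d      # prints the single space back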


skipLeadingRows=1 in external table definition

In the example below, how can I set the skip-leading-rows option?
bq --location=US query --external_table_definition=sales::Region:STRING,Quarter:STRING,Total_sales:INTEGER#CSV=gs://mybucket/sales.csv 'SELECT Region,Total_sales FROM sales;'
The available flag options can be found under the installation home folder (the one you are looking for is --skip_leading_rows):
/google-cloud-sdk/platform/bq/bq.py:
--[no]allow_jagged_rows: Whether to allow missing trailing optional columns in
CSV import data.
--[no]allow_quoted_newlines: Whether to allow quoted newlines in CSV import
data.
-E,--encoding: The character encoding used by the input file. Options include:
ISO-8859-1 (also known as Latin-1)
UTF-8
-F,--field_delimiter: The character that indicates the boundary between
columns in the input file. "\t" and "tab" are accepted names for tab.
--[no]ignore_unknown_values: Whether to allow and ignore extra, unrecognized
values in CSV or JSON import data.
--max_bad_records: Maximum number of bad records allowed before the entire job
fails.
(default: '0')
(an integer)
--quote: Quote character to use to enclose records. Default is ". To indicate
no quote character at all, use an empty string.
--[no]replace: If true erase existing contents before loading new data.
(default: 'false')
--schema: Either a filename or a comma-separated list of fields in the form
name[:type].
--skip_leading_rows: The number of rows at the beginning of the source file to
skip.
(an integer)
--source_format: Format of source data. Options include:
CSV
NEWLINE_DELIMITED_JSON
DATASTORE_BACKUP
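As far as I know, the inline definition form used in the question cannot carry CSV options such as skipLeadingRows, but a JSON table definition file can. A sketch of that route, reusing the bucket and schema from the question (the file name sales_def.json is made up):
# Generate a JSON definition file for the external table
bq mkdef --source_format=CSV \
  "gs://mybucket/sales.csv" \
  Region:STRING,Quarter:STRING,Total_sales:INTEGER > sales_def.json
# Edit sales_def.json so that csvOptions contains "skipLeadingRows": 1,
# then point the query at the file instead of the inline definition:
bq --location=US query \
  --external_table_definition=sales::sales_def.json \
  'SELECT Region,Total_sales FROM sales;'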

How to prevent a plus sign from creating a line feed in an RDLC textbox

I need to print an encrypted string as-is in an RDLC report. My problem is that if the string contains a plus sign, it creates a new line in the Textbox. How can I avoid this?
Encryption produces output that is binary and contains many bytes that have no displayable representation.
Because of this, if encrypted data needs to be displayed, it is generally encoded as either Base64 (best for computers) or hexadecimal (best for people).
It seems that you may have Base64-encoded encrypted data, which is generally composed of the upper- and lowercase letters, the 10 digits, "+", "/" and "=". You cannot delete these characters and expect to recover the encrypted data.
If these characters present a problem, they can often be escaped in some manner, or another encoding can be chosen, such as hexadecimal or an alternate Base64 character set (see Base64). If you choose an alternate Base64 character set, interoperability will most likely be impaired.
Note: More information would produce a better answer.
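As a concrete illustration of the alternate-character-set option, the URL-safe Base64 alphabet replaces "+" and "/" with "-" and "_". A shell sketch (the two input bytes are chosen purely to force "+" and "/" into the output):
printf '\xfb\xff' | base64                  # +/8=
printf '\xfb\xff' | base64 | tr '+/' '-_'   # -_8=
Run tr '-_' '+/' on the value to restore the standard alphabet before decoding.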
I had to replace the "+" with "÷".
Users don't notice it, since the PDF is just a visual representation of the CFDI; I haven't had any issues with it.

Redis: acceptable characters for values

I am aware of the naming conventions for Redis keys (there is a great link here: Naming Convention and Valid Characters for a Redis Key), but what about the values? Will I have an issue if my values include characters such as &^*$#+{ ?
From http://redis.io/topics/data-types:
Redis Strings are binary safe, this means that a Redis string can contain any kind of data, for instance a JPEG image or a serialized Ruby object.
A String value can be at max 512 Megabytes in length.
So those chars you've specified will be fine, as will any other data.
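A quick way to convince yourself with redis-cli (the key name is made up; single quotes keep the shell from touching the value):
redis-cli set test:chars '&^*$#+{'
redis-cli get test:chars
# "&^*$#+{"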
@Ruan's answer doesn't quite cover the whole story. I have looked closely at that section of the Redis docs and it doesn't cover special characters.
For example, you will need to escape double quotes (") with a preceding backslash (\") in your key.
Also, if you do have special characters in your key (e.g. spaces, single or double quotes), you will need to wrap the whole key in double quotes.
The following keys are valid and you can use them to start understanding how special characters are handled.
The following allows spaces in your key.
set "users:100:John Doe" 1234
The following allows special characters by escaping them.
set "metadata:2:moniker\"#\"" 1234

Postgres 9.3 end-of-copy marker corrupt - Any way to change this setting?

I am trying to stream data through an AWK program to a Postgres COPY command. This usually works great. However, recently my data has contained long text strings that include '\.' sequences.
The Postgres documentation mentions that this combination of characters represents the end-of-data marker (http://www.postgresql.org/docs/9.2/static/sql-copy.html), and I am getting the associated errors when trying to insert with COPY.
My question is: is there a way to turn this off? Perhaps change the end-of-data marker to a different combination of characters? Or do I have to alter/remove these strings before trying to insert using the COPY command?
You can try to filter your data through sed 's:\\:\\\\:g' - this changes every \ in your data to \\, which is the correct escape sequence for a single backslash in COPY data.
But I think backslash is not the only problematic character: newlines should also be encoded as \n, carriage returns as \r, and tabs as \t (tab is the default field delimiter in COPY).
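A sketch of the full pipeline with that filter in place, assuming GNU sed and placeholder names (mydb, mytable, and the trivial awk program stand in for yours):
awk '{ print }' input.txt \
  | sed -e 's:\\:\\\\:g' \
  | psql -d mydb -c "COPY mytable FROM STDIN"
If your data can also contain literal tabs, add -e 's:\t:\\t:g' (GNU sed) to the filter.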

Approximate search with openldap

I am trying to write a search that queries our directory server running openldap.
The users are going to be searching using the first or last name of the person they're interested in.
I found a problem with accented characters (like áéíóú), because first and last names are written in Spanish: while the proper spelling is Pérez, it may be typed for the search as Perez, without the accent.
If I use '(cn=*Perez*)' I get only the non-accented results.
If I use '(cn=*Pérez*)' I get only accented results.
If I use '(cn=~Perez)' I get weird results (or at least nothing I can use: while the results contain both Perez and Pérez occurrences, I also get some results that apparently have nothing to do with the query...).
In Spanish this happens quite a lot... be it laziness or whatever you want to call it, the fact is that people tend NOT to write the accents, because it's assumed these searches work with both options (I guess since Google allows it, everybody assumes it's supposed to work that way).
Other than updating the database and removing all accents and trimming them on the query... can you think of another solution?
You have your ~ and = swapped above: it should be (cn~=Perez). I still don't know how well that will work, though; soundex has always been strange.
Since many attributes, including cn, are multi-valued, you could store a second value on the attribute that has the extended characters converted to their base versions. You would at least still have the original value to go off of when you needed it. You could also get really fancy: prefix the converted value with something and use the valuesReturnFilter control to filter it out of your results.
#Sample object
dn:cn=Pérez,ou=x,dc=y
cn:Pérez
cn:{stripped}Perez
sn:Pérez
#etc.
Then modify your query to use an OR expression.
(|(cn=Pérez)(cn={stripped}Perez))
And you would include a valuesReturnFilter that looked like
(!(cn={stripped}*))
See RFC3876 http://www.networksorcery.com/enp/rfc/rfc3876.txt for details. The method for adding a request control varies by what platform/library you are using to access the directory.
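With the second cn value in place, the search itself could be run like this with the OpenLDAP command-line tools (the base DN comes from the sample object above; adding the valuesReturnFilter control is client-specific, as noted):
ldapsearch -x -b "ou=x,dc=y" '(|(cn=Pérez)(cn={stripped}Perez))' cn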
Search filters ("queries") are specified by RFC2254.
Encoding: RFC2254 actually requires filters (indirectly defined) to be an OCTET STRING, i.e. a string of 8-bit bytes: AttributeValue is an OCTET STRING, and MatchingRuleId and AttributeDescription are LDAPString, which is itself an OCTET STRING.
The standard on escaping: replace special characters with a backslash followed by the two-digit ASCII hex code of the character (https://www.rfc-editor.org/rfc/rfc4515#page-4, examples https://www.rfc-editor.org/rfc/rfc4515#page-5).
Quote:
The <valueencoding> rule ensures that the entire filter string is a valid UTF-8 string and provides that the octets that represent the ASCII characters "*" (ASCII 0x2a), "(" (ASCII 0x28), ")" (ASCII 0x29), "\" (ASCII 0x5c), and NUL (ASCII 0x00) are represented as a backslash "\" (ASCII 0x5c) followed by the two hexadecimal digits representing the value of the encoded octet.
Additionally, you should probably replace all characters that semantically modify the filter (RFC 4515's grammar gives a list), and do a Regex replace of non-ASCII characters with wildcards (*) to be sure. This will also help you with characters like "é".
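For example, "é" is the two octets 0xC3 0xA9 in UTF-8, so a filter that spells it out in escaped form looks like this (the base DN is a placeholder):
ldapsearch -x -b "dc=example,dc=com" '(cn=*P\c3\a9rez*)'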