I can't manage to set and retrieve strings with accents in my Redis DB.
Accented characters come back encoded; how can I retrieve them as they were set?
redis> set test téléphone
OK
redis> get test
"t\xc3\xa9l\xc3\xa9phone"
I know this has already been asked
(http://stackoverflow.com/questions/6731450/redis-problem-with-accents-utf-8-encoding) but there is no detailed answer.
The Redis server itself stores all data as binary objects, so it does not depend on any encoding. The server just stores what is sent by the client (including UTF-8 characters).
Here are a few experiments:
$ echo téléphone | hexdump -C
00000000 74 c3 a9 6c c3 a9 70 68 6f 6e 65 0a |t..l..phone.|
c3 a9 is the UTF-8 representation of the 'é' character.
$ redis-cli
> set t téléphone
OK
> get t
"t\xc3\xa9l\xc3\xa9phone"
Actually, the data is correctly stored in the Redis server. However, when it runs in a terminal, the Redis client interprets the output and applies the sdscatrepr function to escape non-printable characters (whose definition is locale dependent, and may be unreliable for multibyte characters anyway).
A simple workaround is to launch redis-cli with the --raw option:
$ redis-cli --raw
> get t
téléphone
Your own application will probably use one of the client libraries rather than redis-cli, so it should not be a problem in practice.
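For example, here is a minimal sketch with the Python redis-py client (just one client among many; the connection details are assumptions):

import redis

r = redis.Redis()                # assumes a Redis server on localhost:6379
r.set("test", "téléphone")       # the client sends the UTF-8 bytes as-is
raw = r.get("test")              # raw bytes back: b't\xc3\xa9l\xc3\xa9phone'
print(raw.decode("utf-8"))       # téléphone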
I am trying to take the contents of a file that has a Hex number and convert that number to Binary and output to a file.
This is what I am trying but not getting the binary value:
xxd -r -p Hex.txt > Binary.txt
The contents of Hex.txt is: ff
I have also tried FF and 0xFF, but would like to just use ff since the device I am pulling the info from has it in that format.
Instead of 11111111, which is what it should be, I get a y with 2 dots above it.
If I change it to ee, I get an i with 2 dots. It seems to be reading the value just fine, but according to what I have read about the xxd -r -p command, it is not outputting it in the correct format.
The other ways I have found to convert hex to binary have either also not worked or involve a pretty big Bash script that seems unnecessary for what I thought would be a simple task.
This also gives me the y with 2 dots.
$ for i in $(cat Hex.txt) ; do printf "\x$i" ; done > Binary.txt
For some reason almost every solution I find gives me this format instead of a human readable Binary value with 1s and 0s.
Any help is appreciated. I am planning on using this in a script to pull the Relay values from Digital Loggers devices using curl and giving Home Assistant a readable file to record the Relay State. Digital Loggers curl cmd gives the state of all 8 relays at once using Hex instead of being able to pull the status of a specific relay.
If "file.txt" contains:
fe
0a
and you run this:
perl -ane 'printf("%08b\n",hex($_))' file.txt
You'll get this:
11111110
00001010
If you use it a lot, you might want to make a bash function of it in your login profile along these lines - being extremely respectful of spaces and semi-colons that might look unnecessary:
bin(){ perl -ane 'printf("%08b\n",hex($_))' $1 ; }
Then you'll be able to do:
bin file.txt
If you dislike Perl for some reason, you can achieve something similar without it as follows (note that bc does not zero-pad its output to eight digits):
tr '[:lower:]' '[:upper:]' < file.txt |
while read h ; do
    echo "obase=2; ibase=16; $h" | bc
done
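If neither Perl nor bc appeals, a short Python sketch does the same zero-padded conversion (reusing the Hex.txt name from the question):

# print each whitespace-separated hex byte as eight binary digits
with open("Hex.txt") as f:
    for token in f.read().split():
        print(format(int(token, 16), "08b"))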
I have a gzip file that has multiple blocks. Every block starts with
1F 8B 08
And ends with
00 00 FF FF
I tried to decompress the file using 7-Zip and the gzip tool on Linux, but I always get an error saying that the file is invalid.
So I wrote this Python script:
import zlib

CHUNKSIZE = 1
f = open("file.gz", "rb")
buffer = f.read(CHUNKSIZE)
r = CHUNKSIZE
d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # expect a gzip wrapper
while buffer:
    outstr = d.decompress(buffer)
    print(r)  # byte position reached so far
    buffer = f.read(CHUNKSIZE)
    r = r + CHUNKSIZE
outstr = d.flush()
I have noticed that when it reaches the header of the second block
00 00 00 FF FF 1F 8B 08
at the point between FF and 1F, the script raises
zlib.error: Error -3 while decompressing data: invalid block type
I made the chunk size 1 so that I would know exactly where the problem occurs.
I know that the problem is not in the file because I have multiple files constructed the same way and they show exactly the same error.
I know that the problem is not in the file because I have multiple files constructed the same way and they show exactly the same error.
The conclusion is not that the problem is not in the file, but rather that the problem is in all of your files. Someone either inadvertently or deliberately constructed invalid gzip files. It looks like they did that by using Z_SYNC_FLUSH or Z_FULL_FLUSH instead of Z_FINISH to end each stream before starting another faux gzip stream. A gzip stream ends with a last block followed by an eight-byte gzip trailer containing two check values on the integrity of the uncompressed data.
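For what it's worth, the layout is easy to reproduce. Here is a small Python sketch showing how a stream ends up with 00 00 ff ff and no trailer when it is only sync-flushed (zlib's compressobj stands in for whatever actually produced your files):

import zlib

# wbits = 16 + MAX_WBITS selects a gzip wrapper around the deflate stream
c = zlib.compressobj(9, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
chunk = c.compress(b"some data") + c.flush(zlib.Z_SYNC_FLUSH)
# chunk ends with the 00 00 ff ff sync-flush marker; since
# c.flush(zlib.Z_FINISH) was never called, there is no final
# block and no eight-byte gzip trailer
print(chunk[-4:].hex())  # 0000ffff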
You can nevertheless continue with decompression, though without the comfort of any integrity checking of the data, by simply picking up with a new instance of decompressobj when you get an error and see a new gzip header, 1f 8b 08.
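Here is a sketch of that recovery in Python, splitting the buffer on the 1f 8b 08 header bytes; those bytes could in principle also occur inside compressed data, so treat this as a heuristic (the file name is taken from your script):

import zlib

MAGIC = b"\x1f\x8b\x08"

with open("file.gz", "rb") as f:
    data = f.read()

# collect every offset where a gzip header appears to start
starts = []
pos = data.find(MAGIC)
while pos != -1:
    starts.append(pos)
    pos = data.find(MAGIC, pos + 1)

out = b""
for start, end in zip(starts, starts[1:] + [len(data)]):
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # gzip wrapper
    try:
        out += d.decompress(data[start:end])
        out += d.flush()
    except zlib.error:
        # a stream ended with a sync flush has no final block and no
        # trailer, so decompression stops without any integrity check
        pass

print(len(out), "bytes recovered")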
More importantly, you should locate and contact the source of these files and say "Hey, WTF?"
I am trying to perform decryption with RNDecryptor. What I have done is to take the output from an openssl encryption operation and try to decode it using RNDecryptor.
The following command encrypts the string "This is good" with aes-256-cbc using the passphrase abc.123, then converts the output to base64.
$ echo "This is good" | openssl enc -e -aes-256-cbc -k abc.123 -md md5 -base64
U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=
I then take the base64 string "U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=", and put it into my Objective C program...
NSError *decryptionError = nil;
NSString *b64Encrypted = @"U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=";
NSData *notB64 = [b64Encrypted base64DecodedData];
NSData *decryptedData = [RNDecryptor decryptData:notB64 withPassword:@"abc.123" error:&decryptionError];
if (decryptionError != nil) {
    NSLog(@"%@", [decryptionError debugDescription]);
}
Result is
Error Domain=net.robnapier.RNCryptManager Code=2 "Unknown header" UserInfo=0x102505ab0 {NSLocalizedDescription=Unknown header}
When I take a close look at the data, here are some things I notice...
From openssl, the data from hexdump looks like the following (note I did not convert to base64):
~ $ echo "This is good" | openssl enc -e -aes-256-cbc -k abc.123 -md md5 -out g.1
~ $ hexdump g.1
00000 53 61 6c 74 65 64 5f 5f 19 dd cc 48 19 9e c3 2c Salted__...H...,
00010 16 1c 71 c5 c7 56 3b 97 c8 48 fc ae 7c 56 a1 91 ..q..V;..H..|V..
What I notice is that the data starts with "Salted__", and the next 8 bytes are the salt.
When I use the RNEncryptor method, the resulting data never starts with the "Salted__" marker seen when using openssl. It does, however, always start with the hex bytes 02 01.
NSData *encryptedData = [RNEncryptor encryptData:data
                                    withSettings:kRNCryptorAES256Settings
                                        password:password
                                           error:&error];
So my question is... Is RNEncryptor/RNDecryptor doing the right thing, and is it compatible with openssl?
So I found out the problem. Basically, to be compatible with openssl, use the RNOpenSSLEncryptor class.
For reference, the RNDecryptor class expects a header in the data. The first two bytes make up the header: the first byte indicates the presence of v1hmac or RNCryptorFileVersion, and the second byte is an option to go along with the first byte.
So if you want to be compatible with openssl, use the RNOpenSSLEncryptor/RNOpenSSLDecryptor classes.
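For illustration, here is a tiny Python sketch (a hypothetical helper, not part of RNCryptor) that tells the two layouts apart by the leading bytes described above:

def classify_blob(blob: bytes) -> str:
    # openssl enc writes a "Salted__" marker followed by 8 salt bytes;
    # RNCryptor writes a version byte then an options byte (e.g. 02 01)
    if blob[:8] == b"Salted__":
        return "openssl enc format (next 8 bytes are the salt)"
    if blob[:2] == b"\x02\x01":
        return "RNCryptor v2 format, password-based"
    return "unknown"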
OpenSSL adds its own salt. You can try the -nosalt option of the enc command.
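For reference, here is a rough Python sketch of the derivation an openssl-compatible decryptor has to replicate: EVP_BytesToKey with MD5 (which is why the command above passes -md md5), applied to the passphrase and the salt that follows "Salted__". It assumes the third-party pycryptodome package and is not RNCryptor code:

import hashlib
from base64 import b64decode
from Crypto.Cipher import AES  # pycryptodome, assumed to be installed

def evp_bytes_to_key(password, salt, key_len=32, iv_len=16):
    # OpenSSL's EVP_BytesToKey with MD5 and a single iteration
    d, prev = b"", b""
    while len(d) < key_len + iv_len:
        prev = hashlib.md5(prev + password + salt).digest()
        d += prev
    return d[:key_len], d[key_len:key_len + iv_len]

raw = b64decode("U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=")
salt = raw[8:16]                  # the 8 bytes after the "Salted__" marker
key, iv = evp_bytes_to_key(b"abc.123", salt)
plain = AES.new(key, AES.MODE_CBC, iv).decrypt(raw[16:])
print(plain)                      # b'This is good\n' plus PKCS#7 padding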
I have a PCAP file that was created on a Mac with mergecap; it can be parsed on a Mac with Apple's libpcap, but cannot be parsed on a Linux system. The combined file has an extra 16-byte header that contains 0a 0d 0d 0a 78 00 00 00 before the 4d 3c 2b 1a magic number. Here is a hex dump:
0000000: 0a0d 0d0a 7800 0000 4d3c 2b1a 0100 0000 ....x...M<+.....
0000010: ffff ffff ffff ffff 0100 4700 4669 6c65 ..........G.File
0000020: 2063 7265 6174 6564 2062 7920 6d65 7267 created by merg
0000030: 696e 673a 200a 4669 6c65 313a 2037 2e70 ing: .File1: 7.p
0000040: 6361 7020 0a46 696c 6532 3a20 362e 7063 cap .File2: 6.pc
0000050: 6170 200a 4669 6c65 333a 2034 2e70 6361 ap .File3: 4.pca
0000060: 7020 0a00 0400 0800 6d65 7267 6563 6170 p ......mergecap
Does anybody know what this is? or how I can read it on a Linux system with libpcap?
I have a PCAP file
No, you don't. You have a pcap-ng file.
that can be parsed on a Mac with Apple's libpcap
libpcap 1.1.0 and later can also read some pcap-ng files (the pcap API only allows a file to have one link-layer header type, one snapshot length, and one byte order, so only pcap-ng files where all sections have the same byte order and all interfaces have the same link-layer header type and snapshot length are supported), and OS X Snow Leopard and later have a libpcap based on 1.1.x, so they can read those files.
(OS X Mountain Lion and later have tweaked libpcap to allow it to write pcap-ng files as well; the -P flag makes tcpdump write out pcap-ng files, with text comments attached to some outgoing packets indicating the process ID and process name of the process that sent them - pcap-ng allows text comments to be attached to packets.)
but cannot be parsed on a Linux system
Your Linux system probably has an older libpcap version. (Note: do not be confused by Debian and Debian derivatives calling the libpcap package "libpcap0.8" - they're not still using libpcap 0.8.)
combined file has an extra 16-byte header that contains 0a 0d 0d 0a 78 00 00 00
A pcap-ng file is a sequence of "blocks" that start with a 4-byte block type and a 4-byte length, both in the byte order of the host that wrote them.
They're divided into "sections", each one beginning with a "Section Header Block" (SHB). The block type for the SHB is 0x0a0d0d0a, which is byte-order-independent (so that you don't have to know the byte order to read the SHB) and contains carriage returns and line feeds. That way, if the file is, for example, transferred between a UN*X system and a Windows system by a tool that thinks it's transferring a text file and "helpfully" tries to fix line endings, the SHB magic number will be damaged and it will be obvious that the file was corrupted by being transferred in that fashion; think of it as the equivalent of a shock indicator.
The 78 00 00 00 is the block length (0x78 = 120, little-endian); what follows it is the "byte-order magic" field, which is 0x1A2B3C4D (not the same as the 0xA1B2C3D4 magic number for pcap files), and which serves the same purposes as the pcap magic number, namely:
it lets code identify that the file is a pcap-ng file
it lets code determine the byte order of the section.
(No, you don't need to know the length before looking for the byte-order magic; once you've found the magic number, you then check the length to make sure it's at least 28, and if it's less than 28, you reject the block as not being valid.)
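To make that concrete, here is a small Python sketch (independent of libpcap; the file name is hypothetical) that applies those rules to the start of a file:

import struct

def sniff_capture_format(path):
    # read enough for a pcap-ng SHB prefix: block type (4 bytes),
    # block length (4 bytes), byte-order magic (4 bytes)
    with open(path, "rb") as f:
        head = f.read(12)
    if head[:4] == b"\x0a\x0d\x0d\x0a" and len(head) == 12:
        if struct.unpack("<I", head[8:12])[0] == 0x1A2B3C4D:
            return "pcap-ng, little-endian"
        if struct.unpack(">I", head[8:12])[0] == 0x1A2B3C4D:
            return "pcap-ng, big-endian"
        return "unknown"
    if head[:4] == b"\xd4\xc3\xb2\xa1":
        return "pcap, little-endian"
    if head[:4] == b"\xa1\xb2\xc3\xd4":
        return "pcap, big-endian"
    return "unknown"

print(sniff_capture_format("merged.pcap"))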
Does anybody know what this is?
A (little-endian) pcap-ng file.
or how I can read it on a Linux system with libpcap?
Either read it on a Linux system with a newer version of libpcap (which may mean a newer version of whatever distribution you're using, or may just mean doing an update if that will get you a 1.1.0 or later version of libpcap); read it with Wireshark or TShark (which have their own library for reading capture files, supporting the native pcap and pcap-ng formats as well as a lot of other formats); or download a newer version of libpcap from tcpdump.org, build it, install it, and then build whatever other tools need to read pcap-ng files with that version of libpcap rather than the one that comes with the system.
Newer versions of Wireshark write pcap-ng files by default, including in tools such as mergecap; you can get them to write pcap files with a flag argument of -F pcap.
I've been tasked with moving an old dynamic website from a Windows server to Linux. The site was initially written with no regard to character case. Some filenames were all upper-case, some lower-case, and some mixed. This was never a problem in Windows, of course, but now we're moving to a case-sensitive file system.
A quick find/rename command (thanks to another tutorial) got the filenames all to lowercase.
However, many of the URL references in the code still point to these mixed-case filenames, so I enabled mod_speling to overcome this issue. It seems to work OK for the most part, with the exception of one page: I have a file named haematobium.html, and every time a link points to .../haematobium.html, it gets rewritten as .../hæmatobium.html in the browser.
I don't know how this strange character made its way into the filename in the first place, but I've corrected the code in the HTML document to now link to haematobium.html, then renamed the haematobium.html file itself to match.
When requesting .../haematobium.html in Chrome, it "corrects" to .../hæmatobium.html in the address bar, and shows an error saying "The requested URL .../hæmatobium.html was not found on this server."
In IE9, I'm prompted for the login (this is a .htaccess-protected page); I enter it, and then it forwards the URL to .../h%C3%A6matobium.html, which again doesn't load.
In my frustration I even copied haematobium.html to both hæmatobium.html and hæmatobium.html; still, none of the three pages actually loads.
So my question: I read somewhere that mod_speling tries to "learn" misspelled URLs. Does it actually rename files (is that where the odd character might have come from)? Does it keep a cache of what's been called for, and what it was forwarded to (a cache I could clear)?
PS. there are also many mixed-case references to MySQL database tables and fields, but that's a whole 'nother nightmare.
[Cannot comment yet, therefore answering.]
Your question doesn't make it entirely clear which of the two names (two ASCII characters ae, or the single Unicode ligature character æ) for haematobium.html actually exists in your Apache server's file system.
Try the following in your shell:
$ echo -n h*matobium.html | hd
The output should be one of the following two alternatives. This is ASCII, with 61 and 65 for a and e, respectively:
00000000 68 61 65 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |haematobium.html|
00000010
And this is Unicode, with c3 a6 for the single character æ:
00000000 68 c3 a6 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |h..matobium.html|
00000010
I would recommend using the ASCII version; it makes life considerably easier.
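If you prefer, an equivalent check in Python lists the raw bytes of every matching file name:

import os

# listing with a bytes argument returns names as raw bytes, so the
# ASCII "ae" and the UTF-8 ligature æ (c3 a6) are clearly distinguishable
for name in os.listdir(b"."):
    if name.endswith(b"matobium.html"):
        print(name)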
Now to your actual question. mod_speling neither "learns", nor renames files, nor caches its data. Any caching is done either by your browsers, or by proxies in between your browsers and the server.
It's actually best practice to test these cases with command-line tools like wget or curl, which should already be available or easily installable on any Linux distribution.
Use wget -S or curl -i to actually see the response headers sent by your web server.