Error sending command to Beanstalkd from telnet

When I send the following sequence via telnet, I get EXPECTED_CRLF:
$ telnet localhost 11300
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
put 0 0 1 4
68 6f 6c 61
EXPECTED_CRLF
UNKNOWN_COMMAND
I thought that when I press "Enter" inside telnet, it would send a "CR LF" (https://www.freesoft.org/CIE/RFC/1123/31.htm).
Beanstalkd Protocol here: https://github.com/beanstalkd/beanstalkd/blob/master/doc/protocol.txt
I tried toggling crlf as @Alister Bulman suggested, but it didn't work:
$ telnet
telnet> toggle crlf
Will send carriage returns as telnet <CR><LF>.
telnet> open localhost 11300
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
put 0 0 1 4
68 6f 6c 61
EXPECTED_CRLF
UNKNOWN_COMMAND

The issue here was that the <bytes> count must match the raw body that is actually sent, not some hex encoding of it. For the text "hola", 4 is the correct byte count; for the literal string "68 6f 6c 61" it would have to be 11, since that string is 11 characters long.
I had misunderstood the protocol, which describes <data> as a raw sequence of bytes; after all, TCP itself delivers a stream of bytes:
- <data> is the job body -- a sequence of bytes of length <bytes> from the previous line.
So the correct commands are:
$ telnet
telnet> open localhost 11300
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
put 0 0 1 4
hola
INSERTED 1
put 0 0 1 11
68 6f 6c 61
INSERTED 2
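The framing above can be sketched in Python. This is a minimal illustration: build_put shows the byte accounting, and send_put (which assumes a beanstalkd instance listening on localhost:11300) does the actual round trip.

```python
import socket

def build_put(body: bytes, pri: int = 0, delay: int = 0, ttr: int = 1) -> bytes:
    """Frame a beanstalkd `put`: <bytes> is the length of the raw body,
    and both the command line and the body line end in CR LF."""
    header = "put %d %d %d %d\r\n" % (pri, delay, ttr, len(body))
    return header.encode("ascii") + body + b"\r\n"

def send_put(body: bytes, host: str = "localhost", port: int = 11300) -> bytes:
    """Send one job and return the server's reply, e.g. b'INSERTED 1\\r\\n'.
    (Requires a running beanstalkd instance.)"""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_put(body))
        return s.recv(1024)

# "hola" is 4 bytes; the hex-spelled text "68 6f 6c 61" is 11 bytes.
assert build_put(b"hola") == b"put 0 0 1 4\r\nhola\r\n"
assert build_put(b"68 6f 6c 61") == b"put 0 0 1 11\r\n68 6f 6c 61\r\n"
```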

Related

How to use ExtFilterDefine for png files in perl as one liner?

Because of md5-hash scanning tools like wpscan, I want to prevent script kiddies from detecting as much information as possible about my WordPress site. With the following Perl snippet, I am trying to add some extra characters to all requested png files. But it does not work and I don't know why. Can somebody help me out?
My goal is not to change it right inside the files - just for requested output on screen.
ExtFilterDefine pngfilter mode=output intype=image/png cmd="/usr/bin/perl -pe 'END { unless (-f q{/tmp/md5_filter.tmp}) { print qq(\/*) . time() . qq(\*/) } }'"
I use the same snippet logic for css and js files. Here it works as expected.
It does work.
$ perl -pe 'END { print qq(/*) . time() . qq(*/) }' derpkin.png >derpkin_.png
$ diff <( hexdump -C derpkin.png ) <( hexdump -C derpkin_.png )
3023,3024c3023,3025
< 0000bce0 00 00 00 00 49 45 4e 44 ae 42 60 82 |....IEND.B`.|
< 0000bcec
---
> 0000bce0 00 00 00 00 49 45 4e 44 ae 42 60 82 2f 2a 31 36 |....IEND.B`./*16|
> 0000bcf0 35 36 33 35 30 37 37 36 2a 2f |56350776*/|
> 0000bcfa
At least, it works in the sense that it does exactly what you wanted it to do. But does it make sense to add arbitrary text to the end of a PNG? I'm not familiar enough with the PNG file format to answer that.
Caveat: It will not work on Windows because of CRLF ⇔ LF translation.
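The same effect can be reproduced offline; here is a small Python sketch (no Apache involved, purely illustrative) of appending a /*timestamp*/ marker after the image data, which most decoders ignore as trailing bytes:

```python
import time

def append_marker(png_bytes: bytes) -> bytes:
    """Append /*<unix time>*/ after the image data. Decoders generally stop
    reading at the IEND chunk, so the trailing bytes change the file's md5
    without changing what is rendered (strict validators may still object)."""
    marker = ("/*%d*/" % int(time.time())).encode("ascii")
    return png_bytes + marker

# Stand-in bytes ending the way a real PNG does (IEND chunk + CRC):
fake_png = b"\x89PNG\r\n\x1a\n...IEND\xaeB`\x82"
out = append_marker(fake_png)
assert out.startswith(fake_png)               # original bytes untouched
assert out[len(fake_png):].startswith(b"/*")  # only trailing data added
assert out.endswith(b"*/")
```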

How to send APDU to Mifare Classic 1k card?

What I am trying to achieve is to send APDU command to MIFARE Classic 1K card to change its A and B keys.
I was able to establish a connection with the card and use a default key (FFFFFFFFFFFF) to read block 0 and block 1. I used HID MifareSamples application for it.
Now, I would like to change A key from default to something else. I found a solution here, at stackoverflow (Mifare Change KEY A and B) which suggests that I have to send this APDU:
New key A = 00 11 22 33 44 55 Access bits not overwritten Key B not
used (so FF FF FF FF FF FF)
=> Write to Sector Trailer 00 11 22 33 44 55 FF 0F 00 FF FF FF FF FF FF FF
I found a good tool JSmartCard Explorer which allows you to send APDUs to cards. Then I read PCSC specifications 3.2.2.1.4 Load Keys Command chapter and understood that the command should probably look like this:
FF 82 00 00 18 00 11 22 33 44 55 FF 0F 00 FF FF FF FF FF FF FF
But unfortunately JSmartCard tool fails with "Command not allowed (no current EF)".
What am I doing wrong? How can I change the key?
First of all, MIFARE Classic cards do not use APDU commands. Hence, you do not send APDUs to the card but to the card reader (which translates them into MIFARE Classic commands). APDU commands to be processed by the reader typically start with the class byte FF.
In MIFARE Classic cards, the keys (A and B) and the access conditions for each sector are stored in the sector trailer (the last block of each sector). A MIFARE Classic 1K card has 16 sectors with 4 blocks each.
So if you want to set the keys & access conditions for sector 0, you would need to write them to block 3 (the last block of sector 0). The PC/SC standard defines the write command (UPDATE BINARY) for storage cards as:
FF D6 XXYY 10 ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
Where XXYY is the block address and ZZ... is the data to be written to the block.
The format of the sector trailer is (see this answer for further details):
<key A> | access bits | general purpose byte | <key B>
So in order to set
key A = 00 11 22 33 44 55
key B = 66 77 88 99 AA BB
access bits = 787788 (sector trailer is writable using key B only; access bits/GPB can be read with key A or B; data blocks are writable using key B only; data blocks can be read with key A or B)
GPB is set to 69
for sector 0, you would use the following write command:
FF D6 0003 10 001122334455 787788 69 66778899AABB
Note that you cannot partially update the sector trailer, you always have to construct and write the whole sector trailer.
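The assembly of that command can be sketched in Python (illustration only; actually transmitting the APDU through a PC/SC stack is out of scope here):

```python
def update_binary_apdu(block: int, data: bytes) -> bytes:
    """Build the PC/SC UPDATE BINARY APDU: FF D6 <P1 P2 = block> <Lc> <data>."""
    if len(data) != 16:
        raise ValueError("MIFARE Classic blocks are 16 bytes")
    return bytes([0xFF, 0xD6, block >> 8, block & 0xFF, len(data)]) + data

def sector_trailer(key_a: bytes, access_bits: bytes, gpb: int, key_b: bytes) -> bytes:
    """Layout: <key A> | access bits | general purpose byte | <key B>"""
    assert len(key_a) == 6 and len(access_bits) == 3 and len(key_b) == 6
    return key_a + access_bits + bytes([gpb]) + key_b

trailer = sector_trailer(bytes.fromhex("001122334455"),   # key A
                         bytes.fromhex("787788"), 0x69,   # access bits, GPB
                         bytes.fromhex("66778899AABB"))   # key B
apdu = update_binary_apdu(3, trailer)  # block 3 = sector 0's trailer
assert apdu.hex().upper() == "FFD60003" "10" "001122334455" "787788" "69" "66778899AABB"
```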

openssl incompatible with RNDecryptor?

I am trying to perform decryption with RNDecryptor. What I have done is to take the output from an openssl encryption operation and try to decode it using RNDecryptor.
This command encrypts the string with aes-256-cbc using the passcode abc.123, then converts the output to base64.
$ echo "This is good" | openssl enc -e -aes-256-cbc -k abc.123 -md md5 -base64
U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=
I then take the base64 string "U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=", and put it into my Objective C program...
NSError *decryptionError = nil;
NSString *b64Encrypted = @"U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=";
NSData *notB64 = [b64Encrypted base64DecodedData];
NSData *decryptedData = [RNDecryptor decryptData:notB64 withPassword:@"abc.123" error:&decryptionError];
if (decryptionError != nil) {
    NSLog(@"%@", [decryptionError debugDescription]);
}
Result is
Error Domain=net.robnapier.RNCryptManager Code=2 "Unknown header" UserInfo=0x102505ab0 {NSLocalizedDescription=Unknown header}
When I take a close look at the data, here are some things I notice...
From openssl, the data from hexdump looks like the following... (Note I did not convert to base64)
~ $ echo "This is good" | openssl enc -e -aes-256-cbc -k abc.123 -md md5 -out g.1
~ $ hexdump g.1
00000 53 61 6c 74 65 64 5f 5f 19 dd cc 48 19 9e c3 2c Salted__...H...,
00010 16 1c 71 c5 c7 56 3b 97 c8 48 fc ae 7c 56 a1 91 ..q..V;..H..|V..
What I notice is that the data starts with "Salted__" then the next 8 bytes is the salt.
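That header can be confirmed from the base64 string alone; a quick Python sanity check (illustrative only):

```python
import base64

raw = base64.b64decode("U2FsdGVkX1+mgp+PlVPeyjiEJzkN6jWwN9z5CynnHu4=")
assert raw.startswith(b"Salted__")  # openssl's magic prefix
salt = raw[8:16]                    # the 8 random salt bytes
assert len(raw) == 32               # 8 magic + 8 salt + one 16-byte AES block
```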
When I use the RNEncryptor method, the resulting data never starts with the "Salted__" seen when using openssl. It always starts with the hex bytes 02 01.
NSData *encryptedData = [RNEncryptor encryptData:data
withSettings:kRNCryptorAES256Settings
password:password
error:&error];
So my question is... Is RNEncryptor/RNDecryptor doing the right thing, and is it compatible with openssl?
So I found out the problem. Basically, to be compatible with openssl, use the RNOpenSSLEncryptor class.
For reference, the RNDecryptor class expects a header in the data. The first two bytes make up that header: the first byte indicates the presence of v1HMAC or RNCryptorFileVersion, and the second byte is an option to go along with the first byte.
So if you want to be compatible with openssl, use the RNOpenSSLEncryptor/RNOpenSSLDecryptor classes.
OpenSSL adds its own salt. You can try the -nosalt option of the enc command.
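For reference, the key and IV that openssl derives from the passphrase and that salt follow the EVP_BytesToKey scheme (with -md md5, one MD5 iteration per block). A minimal Python sketch of the derivation, using the salt bytes 19 dd cc 48 19 9e c3 2c from the hexdump above:

```python
import hashlib

def evp_bytes_to_key(password: bytes, salt: bytes,
                     key_len: int = 32, iv_len: int = 16):
    """Re-derive the AES-256-CBC key and IV the way `openssl enc -md md5` does:
    D_1 = MD5(password + salt), D_i = MD5(D_{i-1} + password + salt), ...
    concatenated until key_len + iv_len bytes are available."""
    d = b""
    material = b""
    while len(material) < key_len + iv_len:
        d = hashlib.md5(d + password + salt).digest()
        material += d
    return material[:key_len], material[key_len:key_len + iv_len]

key, iv = evp_bytes_to_key(b"abc.123", bytes.fromhex("19ddcc48199ec32c"))
assert len(key) == 32 and len(iv) == 16
```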

Apache mod_speling falsely "correcting" URLs?

I've been tasked with moving an old dynamic website from a Windows server to Linux. The site was initially written with no regard to character case. Some filenames were all upper-case, some lower-case, and some mixed. This was never a problem in Windows, of course, but now we're moving to a case-sensitive file system.
A quick find/rename command (thanks to another tutorial) got the filenames to all lowercase.
However, many of the URL references in the code still point to these mixed-case filenames, so I enabled mod_speling to overcome this issue. It seems to work OK for the most part, with the exception of one page: I have a file named haematobium.html, and every time a link points to .../haematobium.html, it gets rewritten as .../hæmatobium.html in the browser.
I don't know how this strange character made its way into the filename in the first place, but I've corrected the code in the HTML document to now link to haematobium.html, then renamed the haematobium.html file itself to match.
When requesting .../haematobium.html in Chrome, it "corrects" to .../hæmatobium.html in the address bar, and shows an error saying "The requested URL .../hæmatobium.html was not found on this server."
In IE9, I'm prompted for the login (this is a .htaccess-protected page), I enter it, and then it forwards the URL to .../h%C3%A6matobium.html, which again doesn't load.
In my frustration I even copied haematobium.html to both hæmatobium.html and hæmatobium.html, still, none of the three pages actually load.
So my question: I read somewhere that mod_speling tries to "learn" misspelled URLs. Does it actually rename files (is that where the odd character might have come from)? Does it keep a cache of what's been called for, and what it was forwarded to (a cache I could clear)?
PS. there are also many mixed-case references to MySQL database tables and fields, but that's a whole 'nother nightmare.
[Cannot comment yet, therefore answering.]
Your question doesn't make it entirely clear which of the two names (two characters ae [ASCII], or one ligature character æ [Unicode]) for haematobium.html actually exists in your Apache's file system.
Try the following in your shell:
$ echo -n h*matobium.html | hd
The output should be either one of the following two alternatives. This is ASCII, with 61 and 65 for a and e, respectively:
00000000 68 61 65 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |haematobium.html|
00000010
And this is Unicode, with c3 a6 for the single character æ:
00000000 68 c3 a6 6d 61 74 6f 62 69 75 6d 2e 68 74 6d 6c |h..matobium.html|
00000010
I would recommend using the ASCII version, it makes life considerably easier.
Now to your actual question. mod_speling neither "learns" nor renames files nor caches its data. Any caching is done either by your browsers or by proxies between your browsers and the server.
It's actually best practice to test these cases with command line tools like wget or curl, which should be already available or easily installable on any Linux.
Use wget -S or curl -i to actually see the response headers sent by your web server.
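The two spellings can also be compared in Python, matching the hexdumps above:

```python
from urllib.parse import quote

ascii_name = "haematobium.html"          # plain a + e
ligature_name = "h\u00e6matobium.html"   # single U+00E6 ligature character

# The ASCII spelling uses one byte per character (61 65 for 'a', 'e'):
assert ascii_name.encode("utf-8") == b"haematobium.html"
# The ligature becomes the two UTF-8 bytes c3 a6, as in the second hexdump:
assert ligature_name.encode("utf-8") == b"h\xc3\xa6matobium.html"
# ...which is exactly where IE9's h%C3%A6matobium.html comes from:
assert quote(ligature_name) == "h%C3%A6matobium.html"
```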

How to save and retrieve string with accents in redis?

I do not manage to set and retrieve string with accents in my redis db.
Chars with accents are encoded; how can I retrieve them back as they were set?
redis> set test téléphone
OK
redis> get test
"t\xc3\xa9l\xc3\xa9phone"
I know this has already been asked
(http://stackoverflow.com/questions/6731450/redis-problem-with-accents-utf-8-encoding) but there is no detailed answer.
The Redis server itself stores all data as binary objects, so it is not dependent on the encoding. The server just stores what is sent by the client (including UTF-8 chars).
Here are a few experiments:
$ echo téléphone | hexdump -C
00000000 74 c3 a9 6c c3 a9 70 68 6f 6e 65 0a |t..l..phone.|
c3 a9 is the UTF-8 representation of the 'é' char.
$ redis-cli
> set t téléphone
OK
> get t
"t\xc3\xa9l\xc3\xa9phone"
Actually the data is correctly stored in the Redis server. However, when it is launched in a terminal, the Redis client interprets the output and applies the sdscatrepr function to transform non printable chars (whose definition is locale dependent, and may be broken for multibyte chars anyway).
A simple workaround is to launch redis-cli with the 'raw' option:
$ redis-cli --raw
> get t
téléphone
Your own application will probably use one of the client libraries rather than redis-cli, so it should not be a problem in practice.
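The escaping shown by redis-cli is only a display artifact; a quick Python check (standard library only, no Redis server needed) confirms the stored bytes round-trip cleanly:

```python
stored = b"t\xc3\xa9l\xc3\xa9phone"  # the bytes redis-cli prints escaped

# Decoding the raw bytes as UTF-8 recovers the original string...
assert stored.decode("utf-8") == "t\u00e9l\u00e9phone"  # "téléphone"
# ...and encoding the string gives back exactly what the server stored.
assert "téléphone".encode("utf-8") == stored
```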