Reversing XOR Encryption not working? - vb.net

So I have the following function to XOR a string, and it works, but when I pass the encrypted string back into it the output is not decrypted. My understanding is that to undo an XOR you simply XOR it again, but this seems to fail. Any ideas on why this is? Also, I'm storing the XOR result as hex.
Public Shared Function StringToHash(ByVal str As String) As Integer
    ' FNV-style hash: every character is folded into a single 32-bit value,
    ' so information is lost and the input cannot be recovered by XORing again
    Dim hash As Integer = (AscW(Char.ToLower(str(0))) Xor &H4B9ACE2F) * &H1000193
    For i As Integer = 1 To str.Length - 1
        hash = (AscW(Char.ToLower(str(i))) Xor hash) * &H1000193
    Next
    hash = hash * &H1000193
    Return hash
End Function

As already indicated, this is about hashing, not encryption - two related but separate notions.
Although secure hashes are one-way, the above function is certainly not a secure hash. So you may be able to reverse it for very small input sizes.
Besides that, although hashes are usually not reversible, it may be possible to brute force the hash value.
This would only work if the input alphabet is limited (a few characters, or a limited set of words, etc.), and it may return too many candidate inputs if the hash output size is small (which in this case it certainly is). But otherwise you just toss in some input and test whether it produces the same hash value.
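To make that concrete, here is a minimal brute-force sketch in Python. The port of StringToHash is an assumption (it masks to 32 bits after each step to emulate VB's Integer wraparound, assuming overflow checks are off), and the alphabet and length bounds are placeholders:

from itertools import product
import string

MASK = 0xFFFFFFFF  # emulate a 32-bit VB.NET Integer by masking after each step

def string_to_hash(s):
    # Assumed Python port of the StringToHash function above
    h = ((ord(s[0].lower()) ^ 0x4B9ACE2F) * 0x1000193) & MASK
    for ch in s[1:]:
        h = ((ord(ch.lower()) ^ h) * 0x1000193) & MASK
    return (h * 0x1000193) & MASK

def brute_force(target, alphabet=string.ascii_lowercase, max_len=3):
    # Enumerate every candidate string; only feasible for tiny alphabets and lengths
    for n in range(1, max_len + 1):
        for combo in product(alphabet, repeat=n):
            candidate = "".join(combo)
            if string_to_hash(candidate) == target:
                yield candidate

# Example: find 1-3 letter inputs that hash to the same value as "abc"
print(list(brute_force(string_to_hash("abc"))))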

Related

how to represent message as an integer between 1 and n-1?

I am trying to implement a simple ElGamal cryptosystem,
and I can't understand how to represent the message as an integer between 1 and n-1.
The only thing that comes to my mind is this:
if the bit length of n is k, then divide the input message m into pieces of t bits (t < k) and use each piece as an integer.
I think that is wrong.
So how do I represent the message as an integer between 1 and n-1?
You could do that, which is essentially the equivalent of using ECB mode in block ciphers, but there are attacks on this. An attacker may reorder the different blocks of the ciphertext and you would decrypt it without problem, but the resulting plaintext would be broken without you knowing it. This may also open the door for replay attacks, since the blocks are all encrypted independently. You would need some kind of authenticated encryption.
Back to your original question. Such a problem is usually solved by using hybrid encryption. A block cipher like AES is used to encrypt the whole plaintext with a random key. This random key is in turn encrypted through ElGamal since the key is small enough to be represented in < k bits.
Now depending on the mode of operation of the block cipher this could still be malleable. You would either need to put a hash of the ciphertext/plaintext next to the random key as an integrity check. Or otherwise use an authenticated mode of operation like GCM and add the resulting tag next to the random key. Depending on k, this should fit.
Note that you should use some kind of padding for random key ∥ hash/tag if it doesn't reach k bits.
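Here is a minimal sketch of that hybrid approach in Python, using the cryptography package's AES-GCM. elgamal_encrypt_int is a hypothetical stand-in for your own ElGamal routine, which must accept an integer in [1, n-1]:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hybrid_encrypt(plaintext: bytes, elgamal_encrypt_int):
    key = AESGCM.generate_key(bit_length=128)         # random session key
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext, None)  # ciphertext with GCM tag appended
    # A 128-bit key easily fits below a typical k-bit ElGamal modulus
    wrapped_key = elgamal_encrypt_int(int.from_bytes(key, "big"))
    return wrapped_key, nonce, ct

The receiver decrypts the key with ElGamal, then decrypts the ciphertext with AES-GCM; a tampered or reordered ciphertext fails the tag check instead of silently decrypting to garbage.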

RSA-OAEP: How do cryptographic hash functions expand a number of bits?

First off, this question is not really code related, but I am trying to understand what happens behind the code. I hope someone knows the answer to this one, because it has been troubling me for some time.
I am writing a program in C# which uses the RSA crypto service provider.
From what I can understand, the class uses SHA1 by default in its padding.
I have been trying to understand what actually happens during the padding, but can't seem to get my head around a single step in the process.
The algorithm for OAEP that I am currently looking at is simply the wiki one:
http://en.wikipedia.org/wiki/OAEP
The step that is troubling me is 3). I thought hash functions always returned a fixed number of bits (SHA1 - 160 bits), so how can it simply expand the number of bits to n-k0, which with a standard 1024-bit key strength would be 864 bits?
I've never done anything with OAEP, but crypto hash functions (as used in step 3) can be expanded with a procedure like the one spelled out in http://en.wikipedia.org/wiki/PBKDF. Basically, to expand the number of output bits, you first repeat the hash with an incremented counter concatenated to the argument being hashed, then concatenate those results until you have enough bits. This technique doesn't add entropy to the result, but it does allow you to create a longer output bitstream.
From wikipedia:
If you want a key that's dklen long, and your crypto hash function U only outputs hlen bits:

DK = T_1 || T_2 || ... || T_(dklen/hlen)
T_i = F(Password, Salt, Iterations, i)
F(Password, Salt, Iterations, i) = U_1 ^ U_2 ^ ... ^ U_c
U_1 = PRF(Password, Salt || INT_msb(i))
U_2 = PRF(Password, U_1)
...
U_c = PRF(Password, U_(c-1))
(If you only need one iteration of the cryptographic hash function, then c = 1, so you don't need the XOR operator ^, and for each i you only need to calculate U_1.)
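As an aside, Python's standard library exposes this construction directly, which makes the expansion easy to see. A quick illustration (not part of OAEP itself; the password and salt values are placeholders):

import hashlib

# Derive 64 bytes from SHA-1, whose raw output is only 20 bytes; the
# counter-and-concatenate expansion supplies the extra length
dk = hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 1000, dklen=64)
print(len(dk))  # 64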
Specifically for OAEP, the recommendation is to use an algorithm called MGF1, which operates by repeatedly hashing a seed and a counter and concatenating the results together. The specification comes from RFC 2437.
From the RFC text, where Z is the seed and l is the length of the output:
3. For counter from 0 to ⌈l / hLen⌉ - 1, do the following:
   a. Convert counter to an octet string C of length 4 with the primitive I2OSP: C = I2OSP(counter, 4)
   b. Concatenate the hash of the seed Z and C to the octet string T: T = T || Hash(Z || C)
4. Output the leading l octets of T as the octet string mask.
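A short Python sketch of those two steps, read straight off the RFC (SHA-1 as the default hash, matching OAEP's usual choice):

import hashlib

def mgf1(seed: bytes, length: int, hash_func=hashlib.sha1) -> bytes:
    # Step 3: hash Z || C for counter = 0, 1, ... and concatenate into T
    t = b""
    counter = 0
    while len(t) < length:
        c = counter.to_bytes(4, "big")       # I2OSP(counter, 4)
        t += hash_func(seed + c).digest()    # T = T || Hash(Z || C)
        counter += 1
    return t[:length]                        # Step 4: leading l octets of T

print(len(mgf1(b"seed", 108)) * 8)  # 864 bits, as in the question's 1024-bit example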

vb xor checksum

This question may already have been asked, but nothing on SO actually gave me the answer I need.
I am trying to reverse engineer someone else's VB.NET code and I am stuck on what the Xor is doing here. Here is one line of the body of a SOAP request that gets parsed (some values have been obscured, so the checksum may not work in this case):
<HD>CHANGEDTHIS01,W-A,0,7753.2018E,1122.6674N, 0.00,1,CID_V_01*3B</HD>
and this is the snippet of vb code that checks it
LastStar = strValues(CheckLoop).IndexOf("*")
StrLen = strValues(CheckLoop).Length
TransCheckSum = Val("&h" + strValues(CheckLoop).Substring(LastStar + 1, (StrLen - (LastStar + 1))))
CheckSum = 0
For CheckString = 0 To LastStar - 1
    CheckSum = CheckSum Xor Asc(strValues(CheckLoop)(CheckString))
Next
If CheckSum <> TransCheckSum Then
    'error with the checksum
    ...
OK, I get it up to the For loop. I just need an explanation of what the Xor is doing and how that is used for the checksum.
Thanks.
PS: As a bonus, if anyone can provide a C# translation I would be most grateful.
Using Xor is a simple way to calculate a checksum. The idea is the same as when calculating a parity bit, except that eight parity bits are calculated in parallel, one for each bit position across the bytes. More advanced algorithms like CRC and MD5 are often used to calculate checksums for more demanding applications.
The C# code would look like this:
string value = strValues[checkLoop];
int lastStar = value.IndexOf("*");
int transCheckSum = Convert.ToByte(value.Substring(lastStar + 1, 2), 16);
int checkSum = 0;
for (int checkString = 4; checkString < lastStar; checkString++) {
    checkSum ^= (int)value[checkString];
}
if (checkSum != transCheckSum) {
    // error with the checksum
}
I made some adjustments to the code to accommodate the translation to C#, plus some changes that simply make sense. I declared the variables used, and used camel case rather than Pascal case for local variables. I used a local variable for the string instead of getting it from the collection each time.
The VB Val method stops parsing when it finds a character that it doesn't recognise, so to use the framework methods I assumed that the checksum is two characters long, so that it parses the string "3B" rather than "3B</HD>".
The loop starts at index 4, to skip the leading "<HD>", which should logically not be part of the data that the checksum is calculated over.
In C# you don't need the Asc function to get the character code; you can just cast the char to an int.
The code is basically taking the character values and XORing them together in order to check the integrity. There is a very nice explanation of the operation on this page, in the Parity Check section: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html
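For a quick sanity check, the same calculation is a couple of lines of Python (note that the sample line above was altered by the poster, so the result may well not come out to 3B):

def xor_checksum(line: str) -> int:
    # XOR together the character codes of everything before the '*'
    checksum = 0
    for ch in line[: line.index("*")]:
        checksum ^= ord(ch)
    return checksum

line = "CHANGEDTHIS01,W-A,0,7753.2018E,1122.6674N, 0.00,1,CID_V_01*3B"
print(format(xor_checksum(line), "02X"))  # compare against the "3B" after '*'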

MD5 Hashing in objective-c (iOS), based on a shared key

I'm currently developing an app which needs to send authentication data along with its requests to a provided API. Basically it needs to generate a hash of the data you want to send, keyed with a shared key.
The problem is that while I have been able to track down functions that will do MD5 hashing, they are not based on a key, which is absolutely crucial.
Is there any way this can be done in objective-c for the iOS platform?
The API is usually used with PHP, which provides something like this handy function:
$key = hash_hmac('md5', $postdata , $sharedkey);
Is there any chance of implementing an equivalent in Objective-C?
The MD5 algorithm only uses one string as input. The convention is that you append your key (aka "salt" value) to the string you are hashing. My guess is that the PHP MD5 function has a second parameter for the key to make life easier, but you should get the same result if you just do this:
NSString *value = [data stringByAppendingString:key];
NSString *hashed = MD5HASH(value); //pseudocode
UPDATE:
Okay, I checked Wikipedia and it looks like you need to do a bit of extra work to implement HMAC-style hashing. So you have two options.
Implement the HMAC algorithm on top of the MD5 hash you're already using (it doesn't look too hard - I've pasted the pseudocode below).
Don't bother with HMAC - just generate the hash at both ends using a regular MD5 by concatenating the message and the key; that should be pretty secure, and it's what most people do.
HMAC algorithm
function hmac(key, message)
    if (length(key) > blocksize) then
        key = hash(key)  // keys longer than blocksize are shortened
    end if
    if (length(key) < blocksize) then
        key = key ∥ [0x00 * (blocksize - length(key))]  // keys shorter than blocksize are zero-padded ('∥' is concatenation)
    end if
    o_key_pad = [0x5c * blocksize] ⊕ key  // where blocksize is that of the underlying hash function
    i_key_pad = [0x36 * blocksize] ⊕ key  // where ⊕ is exclusive or (XOR)
    return hash(o_key_pad ∥ hash(i_key_pad ∥ message))  // where '∥' is concatenation
end function
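Whichever route you take, you can verify your output against the PHP hash_hmac result using Python's standard hmac module (the payload and key below are placeholders):

import hashlib
import hmac

postdata = b"some=data&to=send"  # placeholder payload
shared_key = b"sharedkey"        # placeholder key
digest = hmac.new(shared_key, postdata, hashlib.md5).hexdigest()
print(digest)  # should match PHP's hash_hmac('md5', $postdata, $sharedkey)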
Typically you just append the key to the bytes that you are hashing.
So if the shared secret is "12345" and you are passing username=jsd and password=test you would construct your string like "username=jsd&password=test&secret=12345". Then the receiving end would construct its own version from the username & password + the secret, run the same md5, and receive the same value.
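In Python, that concatenate-then-hash scheme looks like this, using the exact values from the example above:

import hashlib

msg = "username=jsd&password=test&secret=12345"
print(hashlib.md5(msg.encode()).hexdigest())  # both ends must build the same string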

Storing integers in a redis ordered set?

I have a system which deals with keys that have been turned into unsigned long integers (by packing short sequences into byte strings). I want to try storing these in Redis, and I want to do it in the best way possible. My concern is mainly memory efficiency.
From playing with the online REPL I notice that the following two commands are identical:
zadd myset 1.0 "123"
zadd myset 1.0 123
This means that even if I know I want to store an integer, it has to be set as a string. I notice from the documentation that keys are just stored as char*s and that commands like SETBIT indicate that Redis is not averse to treating strings as bytestrings in the client. This hints at a slightly more efficient way of storing unsigned longs than as their string representation.
What is the best way to store unsigned longs in sorted sets?
Thanks to Andre for his answer. Here are my findings.
Storing ints directly
Redis keys must be strings. If you want to pass an integer, it has to take the form of a string. For small, well-defined sets of values, Redis will parse the string into an integer if it is one. My guess is that it uses this int to tailor its hash function (or even to statically dimension a hash table based on the value). This works for small values (the documented defaults being 512 entries with values of up to 64 bytes). I will test for larger values during my investigation.
http://redis.io/topics/memory-optimization
Storing as strings
The alternative is squashing the integer so it looks like a string.
It looks like it is possible to use any byte string as a key.
For my application's case it actually didn't make much difference whether I stored strings or integers. I imagine that the structures in Redis undergo some kind of alignment anyway, so there may be some pre-wasted bytes regardless. The value is hashed in either case.
I was using Python for my testing, so I was able to create the values using struct.pack. Long longs weigh in at 8 bytes, which is quite large. Given the distribution of my integer values, I discovered that it could actually be advantageous to store the strings, especially when coded in hex.
As Redis strings are "Pascal-style":

struct sdshdr {
    long len;
    long free;
    char buf[];
};
and given that we can store anything in there, I did a bit of extra Python to pack the value into the shortest possible type:
from struct import pack

def do_pack(prefix, number):
    """
    Pack the number into the shortest possible string, with a prefix char.
    """
    if number < (1 << 8 * 1):      # unsigned char
        return pack("!cB", prefix, number)
    elif number < (1 << 8 * 2):    # unsigned short
        return pack("!cH", prefix, number)
    elif number < (1 << 8 * 4):    # unsigned int
        return pack("!cI", prefix, number)
    elif number < (1 << 8 * 8):    # unsigned long long
        return pack("!cQ", prefix, number)
    else:
        raise ValueError("number too large to pack into 8 bytes")
This appears to make an insignificant saving (or none at all), probably due to struct padding in Redis. It also drives Python CPU usage through the roof, making it somewhat unattractive.
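For reference, a rough illustration of the sizes involved (the prefix byte b"k" is arbitrary, and in Python 3 it must be a one-byte bytes object):

n = 123456789012                # a sample 37-bit value
print(len(str(n)))              # 12 bytes as a decimal string
print(len(format(n, "x")))      # 10 bytes as a hex string
print(len(do_pack(b"k", n)))    # 9 bytes packed (1 prefix + 8 for "Q")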
The data I was working with was 200000 zsets of consecutive integer => (weight, random integer) × 100, plus some inverted index (based on random data). dbsize yields 1,200,001 keys.
Final memory use of the server: 1.28 GB resident, 1.32 GB virtual. Various tweaks made a difference of no more than 10 megabytes either way.
So my conclusion:
Don't bother encoding into fixed-size data types. Just store the integer as a string, in hex if you want. It won't make all that much difference.
References:
http://docs.python.org/library/struct.html
http://redis.io/topics/internals-sds
I'm not sure of this answer, it's more of a suggestion than anything else. I'd have to give it a try and see if it works.
As far as I can tell, Redis only supports UTF-8 strings.
I would suggest grabbing the bit representation of your long integer and padding it to fill up the nearest byte. Encode each set of 8 bits as a UTF-8 character (ending up with an 8-character string) and store that in Redis. The fact that they're unsigned means that you don't care about the first bit, but if you did, you could add a flag to the string.
Upon retrieving the data, you have to remember to pad each character back out to 8 bits, as UTF-8 will use fewer bytes for the representation if the character can be stored with fewer.
The end result is that you store at most 8 characters instead of (possibly) up to 64.