Padding in the OpenSSL Heartbleed heartbeat

There is something I don't understand about the padding part of the heartbeat in OpenSSL.
In the code of OpenSSL 1.0.1g, it reads as follows:
n2s(p, payload);
if (1 + 2 + payload + 16 > s->s3->rrec.length)
return 0; /* silently discard per RFC 6520 sec. 4 */
pl = p;
It looks as though the padding length is 16; however, RFC 6520 says the padding length is at least 16 bytes. So if the client sends a heartbeat with larger padding (32 bytes or more), does the OpenSSL code still have the vulnerability?

1 + 2 + payload + 16 is the minimum length of a well-formed message; the actual message may be longer than that (more padding), but it can never be shorter. Thus the test says: if that calculated minimum is greater than the actual record length, the message is inconsistent with a well-constructed heartbeat and is discarded, preventing the bug. Padding of 32 bytes or more only makes the record longer, so a well-formed message still passes the check and the fix is unaffected.
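The logic of the fixed check can be sketched in Python (the function and field handling here are an illustration of the bounds test, not OpenSSL's actual code):

```python
def process_heartbeat(record):
    """Sketch of the 1.0.1g bounds check. RFC 6520 layout:
    1 byte type, 2 bytes payload length, payload, >= 16 bytes padding."""
    MIN_PADDING = 16
    if len(record) < 3:
        return None  # not even a full header
    payload_len = int.from_bytes(record[1:3], "big")  # n2s(p, payload)
    # Minimum length of a well-formed message; real padding may be longer,
    # so this value can be <= the true record length, never greater.
    if 1 + 2 + payload_len + MIN_PADDING > len(record):
        return None  # silently discard per RFC 6520 sec. 4
    return record[3:3 + payload_len]  # payload echoed back in the response
```

A record with 32 bytes of padding still satisfies the inequality, so honest messages pass regardless of padding size; only a payload length that overstates the record's contents is rejected.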

AES-128 What padding method is used in this cipher example?

I have just implemented an AES-128 encryption algorithm, with the following message and key.
Message: "Two One Nine Two" (128 bits)
Key: "Thats my Kung Fu" (128 bits)
The cipher output for this is:
29c3505f571420f6402299b31a02d73a
which is correct when cross-checked with online generators.
However, the online generator output is usually longer:
29c3505f571420f6402299b31a02d73ab3e46f11ba8d2b97c18769449a89e868
I tried several padding methods (bit, zero-length, CMS, null, space) but nothing seems to produce exactly the b3e46f11ba8d2b97c18769449a89e868 part of the ciphertext.
Could anyone explain what padding method (in binary) is used to produce those numbers, please?
Thank you @Topaco, the padding is indeed PKCS7. In this case, since the input is exactly 128 bits, an extra 128-bit padding block must be appended, consisting of 16 bytes each of the value 16:
00010000 00010000 00010000 00010000 ... (×16)
This gives the correct ciphertext
b3e46f11ba8d2b97c18769449a89e868
for the key in this example.
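The PKCS#7 rule can be checked with a small sketch (pure Python, no crypto library needed):

```python
def pkcs7_pad(data, block_size=16):
    # PKCS#7: append n bytes of value n, where n = block_size - len % block_size.
    # When the input is already a multiple of the block size, a FULL extra
    # block of value block_size (0x10 = 16) is appended -- never zero bytes.
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

msg = b"Two One Nine Two"  # exactly 16 bytes (128 bits)
padded = pkcs7_pad(msg)
# padded is msg followed by b"\x10" * 16: two full blocks. Encrypting that
# second all-0x10 block is what produces the extra ciphertext block seen
# in the online generator's output.
```

This also explains why unpadding is unambiguous: the last byte always states how many padding bytes to strip.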

Perl6 IO::Socket::Async truncates data

I'm rewriting my P5 socket server in P6 using IO::Socket::Async, but the received data gets truncated by one character at the end, and that one character arrives on the next connection. Someone from the Perl6 Facebook group (Jonathan Worthington) pointed out that this might be due to the way strings and bytes are handled very differently in P6. Quoted:
In Perl 6, strings and bytes are handled very differently. Of note, strings work at grapheme level. When receiving Unicode data, it's not only possible that a multi-byte sequence will be split over packets, but also a multi-codepoint sequence. For example, one packet might have the letter "a" at the end, and the next one would be a combining acute accent. Therefore, it can't safely pass on the "a" until it's seen how the next packet starts.
My P6 is running on MoarVM
https://pastebin.com/Vr8wqyVu
use Data::Dump;
use experimental :pack;
my $socket = IO::Socket::Async.listen('0.0.0.0', 7000);
react {
whenever $socket -> $conn {
my $line = '';
whenever $conn {
say "Received --> "~$_;
$conn.print: &translate($_) if $_.chars ge 100;
$conn.close;
}
}
CATCH {
default {
say .^name, ': ', .Str;
say "handled in $?LINE";
}
}
}
sub translate($raw is copy) {
my $rawdata = $raw;
$raw ~~ s/^\s+|\s+$//; # remove leading/trailing whitespace
my $minus_checksum = substr($raw, 0, *-2);
my $our_checksum = generateChecksum($minus_checksum);
my $data_checksum = substr($raw, *-2);
# say $our_checksum;
return $our_checksum;
}
sub generateChecksum($minus_checksum) {
# turn string into Blob
my Blob $blob = $minus_checksum.encode('utf-8');
# unpack Blob into ascii list
my @array = $blob.unpack("C*");
# perform bitwise operation for each ascii in the list
my $dec +^= $_ for $blob.unpack("C*");
# only take 2 digits
$dec = sprintf("%02d", $dec) if $dec ~~ /^\d$/;
$dec = '0' ~ $dec if $dec ~~ /^<[a..fA..F]>$/;
$dec = uc $dec;
# convert it to hex
my $hex = sprintf '%02x', $dec;
return uc $hex;
}
Result
Received --> $$0116AA861013034151986|10001000181123062657411200000000000010235444112500000000.600000000345.4335N10058.8249E00015
Received --> 0
Received --> $$0116AA861013037849727|1080100018112114435541120000000000000FBA00D5122500000000.600000000623.9080N10007.8627E00075
Received --> D
Received --> $$0108AA863835028447675|18804000181121183810421100002A300000100900000000.700000000314.8717N10125.6499E00022
Received --> 7
Received --> $$0108AA863835028447675|18804000181121183810421100002A300000100900000000.700000000314.8717N10125.6499E00022
Received --> 7
Received --> $$0108AA863835028447675|18804000181121183810421100002A300000100900000000.700000000314.8717N10125.6499E00022
Received --> 7
Received --> $$0108AA863835028447675|18804000181121183810421100002A300000100900000000.700000000314.8717N10125.6499E00022
Received --> 7
First of all, TCP connections are streams, so there's no promises that the "messages" that are sent will be received as equivalent "messages" on the receiving end. Things that are sent can be split up or merged as part of normal TCP behavior, even before Perl 6 behavior is considered. Anything that wants a "messages" abstraction needs to build it on top of the TCP stream (for example, by sending data as lines, or by sending a size in bytes, followed by the data).
In Perl 6, the data arriving over the socket is exposed as a Supply. A whenever $conn { } is short for whenever $conn.Supply { } (the whenever will coerce whatever it is given into a Supply). The default Supply is a character one, decoded as UTF-8 into a stream of Perl 6 Str. As noted in the answer you already received, strings in Perl 6 work at grapheme level, so it will keep back a character in case the next thing that arrives over the network is a combining character. This is the "truncation" that you are experiencing. (There are some things which can never be combined. For example, \n can never have a combining character placed on it. This means that line-oriented protocols won't encounter this kind of behavior, and can be implemented as simply whenever $conn.Supply.lines { }.)
There are a couple of options available:
Do whenever $conn.Supply(:bin) { }, which will deliver binary Blob objects, which will correspond to what the OS passed to the VM. That can then be .decode'd as wanted. This is probably your best bet.
Specify an encoding that does not support combining characters, for example whenever $conn.Supply(:enc('latin-1')) { }. (However, note that since \r\n is 1 grapheme, then if the message were to end in \r then that would be held back in case the next packet came along with a \n).
In both cases, it's still possible for messages to be split up during transmission, but these will (entirely and mostly, respectively) avoid the keep-one-back requirement that grapheme normalization entails.
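The "build messages on top of the stream" advice from the first paragraph can be sketched in Python; the 4-byte big-endian length prefix used here is just an illustrative framing choice, not something the Perl 6 API mandates:

```python
import struct

def frame(message):
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack(">I", len(message)) + message

def deframe(buffer):
    """Extract complete messages from a bytearray; leftovers stay buffered."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack_from(">I", buffer)
        if len(buffer) < 4 + length:
            break  # message split across packets: wait for more data
        messages.append(bytes(buffer[4:4 + length]))
        del buffer[:4 + length]
    return messages
```

Feeding a deframer like this from the binary supply (`whenever $conn.Supply(:bin)`) sidesteps the grapheme hold-back entirely, since no decoding happens until a whole message is assembled.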

computing the exchange hash for ecdsa-sha2-nistp256

I am writing code for an SSH server and cannot get past the Elliptic Curve Diffie-Hellman Key Exchange Reply part of the connection. The client closes the connection and says "Host Key does not match the signature supplied".
I am using PuTTY as the client, and a PIC microcontroller is running the server code.
From RFC 5656 [SSH ECC Algorithm Integration] :
"The hash H is formed by applying the algorithm HASH on a
concatenation of the following:
string V_C, client's identification string (CR and LF excluded)
string V_S, server's identification string (CR and LF excluded)
string I_C, payload of the client's SSH_MSG_KEXINIT
string I_S, payload of the server's SSH_MSG_KEXINIT
string K_S, server's public host key
string Q_C, client's ephemeral public key octet string
string Q_S, server's ephemeral public key octet string
mpint K, shared secret
"
The host key algorithm and key exchange algorithm are ecdsa-sha2-nistp256 and ecdh-sha2-nistp256, respectively.
Referring to RFC 4251 for data type representations, as well as the source code of OpenSSH (OpenBSD), this is what I have concatenated:
4 bytes for the length of V_C followed by V_C
4 bytes for the length of V_S followed by V_S
4 bytes for length of I_C followed by I_C (payload is from Message Code to the start of Random Padding)
4 bytes for length of I_S followed by I_S (payload is from Message Code to the start of Random Padding)
4 bytes for the length of K_S followed by K_S (for K_S I used the same group of bytes that is used to calculate the fingerprint)
4 bytes for the length of Q_C followed by Q_C (I used the uncompressed point, which has a length of 65: 04 || X-coordinate || Y-coordinate)
4 bytes for the length of Q_S followed by Q_S
4 bytes for the length of K followed by K (length is 32 or 33 depending on whether the leading bit is set; if it is, K is preceded by a 00 byte)
Once concatenated I hash it with SHA256 because I'm using NISTP256. SHA256 outputs 32 bytes which is the size of the curve, so I take the whole SHA256 output and perform the signature algorithm on it.
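The RFC 4251 encodings can be sanity-checked with a Python sketch; the field values fed into it are placeholders, not a real handshake:

```python
import hashlib

def ssh_string(b):
    # RFC 4251 "string": 4-byte big-endian length, then the raw bytes
    return len(b).to_bytes(4, "big") + b

def ssh_mpint(n):
    # RFC 4251 "mpint": minimal big-endian encoding; a leading 0x00 is
    # prepended when the high bit of the first byte is set (n >= 0 assumed)
    if n == 0:
        return ssh_string(b"")
    b = n.to_bytes((n.bit_length() + 7) // 8, "big")
    if b[0] & 0x80:
        b = b"\x00" + b
    return ssh_string(b)

def exchange_hash(v_c, v_s, i_c, i_s, k_s, q_c, q_s, k):
    """H = SHA-256 over the RFC 5656 concatenation; k is the shared secret
    as an integer, everything else as raw bytes."""
    blob = (ssh_string(v_c) + ssh_string(v_s) + ssh_string(i_c) +
            ssh_string(i_s) + ssh_string(k_s) + ssh_string(q_c) +
            ssh_string(q_s) + ssh_mpint(k))
    return hashlib.sha256(blob).digest()
```

Note that K is the one field encoded as an mpint rather than a string: getting the conditional leading 0x00 wrong (omitting it, or always adding it) is a classic cause of intermittent failures, since it only bites for roughly half of all shared secrets.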
I can never get the correct signature from my message concatenation.
I know my signature algorithm is correct because given the message hash output I can get the correct signature.
I know my shared secret is correct because I get the same output as online shared secret calculators.
I know the SHA256 is correct because I get the same result using online calculators.
This leads me to assume the error is in the concatenation of the exchange hash.
Any help is greatly appreciated, thanks.
ECDSA signature generation is non-deterministic, i.e. part of the input is the hash and part of the input consists of random bytes. So whatever you do, you will always get a different signature. This is all right because signature verification will still work.
The only way to get a repeated signature is to mess with the random number generator (during testing, you don't want to sign two values using the same random number: you'd expose the private key!).

WEP (Shared Key Authentication), how is the 136 byte challenge response formed?

I am playing around with WEP(Shared key authentication) challenge/response mechanism at the moment and I hope someone could help me out.
The AP sends a challenge text to the STA. The challenge text is 128 bytes.
The STA encrypts the challenge and sends it back to the AP. This shows as 136 bytes of data in Wireshark.
My Question:
Can someone tell me the make-up of the 136-byte challenge response data, and why it is this size?
Why is it not Enc([challengetext (128)] + [icv(4)]) = 132 bytes?
Thanks.
You forgot the 4 bytes of the IV in the beginning.
I'm not an expert and I'm using personal experience to confirm the answer to the question. Feel free to edit any incorrect terms.
TL;DR
The encrypted frame sent by the STA contains:
802.11 parameters (24 bytes)
WEP parameters (clear IV + key index) (4 bytes)
management headers (encrypted) (8 bytes)
data (encrypted) (128 bytes)
ICV (encrypted) (4 bytes)
The total is 168 bytes; the "data" (the encrypted portion without the ICV) is 136 bytes.
The encrypted data shown by Wireshark and the like is 8 bytes longer than the clear-text challenge because it also carries the management headers (encrypted but predictable).
What does the AP send?
The clear-text challenge frame sent by the AP is 160 bytes long, and the encrypted challenge-response frame is 168 bytes long. That is not the question, but let's make things clear.
In the clear text AP messages, the management headers are also clear text:
Authentication algorithm (2 bytes)
Authentication SEQ (2 bytes)
status code (2 bytes)
tag number (1 byte)
tag length (1 byte)
(challenge) ('tag length' bytes)
The management headers are 8 bytes long.
What does the STA send?
In the STA's encrypted message, everything above the 802.11 layer is considered "data", as it is encrypted gibberish. Before this data, as part of the 802.11 layer, you can find the WEP parameters: IV (3 bytes) and key index (1 byte). These are clear text. You also have the ICV, the very last 4 bytes of the frame. Those are the 8 bytes that appear in all WEP-encrypted frames.
The "data" section contains the encrypted challenge and the encrypted management headers (that's what answers your question).
Your question
The WEP frame has 8 extra bytes (IV, key index and ICV) that seem to account for the larger frame, so why is your encrypted challenge data still 8 bytes longer than the clear-text challenge?
It is not because of the IV or ICV, which, as we saw before, are not part of the challenge data. Those 8 bytes actually come from the management headers, which are encrypted within the "data" section. The frame containing the encrypted challenge is also a management frame, but you can't see its headers because they are encrypted. Those are your 8 mysterious bytes (see the simplified frame skeletons below).
I will finish on the fact that shared key authentication lets you run offline dictionary or brute-force attacks against captured WEP authentications without needing any other IVs (except the one used to encrypt the challenge, of course). The fact that the first 8 encrypted bytes are management headers makes them predictable (they are always the same), so a brute-force implementation can RC4 just the first 4 or 8 bytes of the frame instead of the whole 136 bytes, which gives much better performance on huge dictionary/full brute-force attacks.
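That "decrypt only the first bytes" optimization can be sketched with a plain RC4 implementation; the IV, key and header bytes below are made-up illustrative values (real WEP prepends the 3-byte IV to a 5- or 13-byte root key):

```python
def rc4(key, n):
    """Generate the first n RC4 keystream bytes for the given key."""
    S = list(range(256))
    j = 0
    for i in range(256):                 # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                   # keystream generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def candidate_matches(iv, wep_key, first_encrypted_bytes, known_header):
    # Decrypt only as many bytes as the predictable management header:
    # far cheaper than decrypting all 136 bytes for every candidate key.
    ks = rc4(iv + wep_key, len(known_header))
    plain = bytes(c ^ k for c, k in zip(first_encrypted_bytes, ks))
    return plain == known_header
```

Since RC4 keystream bytes are produced sequentially, stopping after 4 or 8 bytes skips almost all of the per-candidate work.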
Authentication frames skeleton
Management frame with cleartext challenge
--------------------------------------------------------------
(ieee 802.11 headers) -> 24 bytes
--------------------------------------------------------------
---------------- 8 bytes management headers ------------------
ieee 802.11 Wireless Management:
[0][1] == Authentication algo (int16) == 0x0100 (Shared Key)
[2][3] == Authentication SEQ (int16) == 0x0002
[4][5] == Status code (int16) == 0x0000 (Successful)
[6] == Tag Number (int8) == 0x10 (Challenge text)
[7] == Tag length (int8) == 0x80 (128 bytes long challenge)
--------------------------------------------------------------
---------------------- 128 bytes data ------------------------
[0:128]== Challenge text
--------------------------------------------------------------
24 + 8 + 128 = 160 bytes frame
Encrypted frame with predictable encrypted management headers
--------------------------------------------------------------
(ieee 802.11 headers) -> 24 bytes
--------------------------------------------------------------
------------------ 4 bytes WEP parameters --------------------
[0][1][2] == IV (3 bytes, clear text)
[3] == key index (int8) (should be 0)
--------------------------------------------------------------
---------------- 8 bytes management headers ------------------
From here, everything is encrypted
[0][1] == Authentication algo (int16) == 0x0100 (Shared Key)
[2][3] == Authentication SEQ (int16) == 0x0003 (incremented since last frame)
[4][5] == Status code (int16) == 0x0000 (Successful)
[6] == Tag Number (int8) == 0x10 (Challenge text)
[7] == Tag length (int8) == 0x80 (128 bytes long challenge)
--------------------------------------------------------------
---------------------- 128 bytes data ------------------------
[0:128]== Encrypted data challenge
--------------------------------------------------------------
---------------------- 4 bytes ICV ---------------------------
[0:4] == WEP ICV
--------------------------------------------------------------
24 + 4 + 8 + 128 + 4 = 168 bytes frame
8 + 128 = 136 bytes "data" (as wireshark interprets it)
The only thing that changes from the previous one in the management headers of the encrypted frame is the SEQ number.
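The byte accounting in the skeletons above can be double-checked mechanically:

```python
# Sizes (in bytes) of each piece of the encrypted challenge-response frame
frame_parts = {
    "802.11 headers": 24,
    "WEP parameters (IV + key index)": 4,
    "management headers (encrypted)": 8,
    "challenge data (encrypted)": 128,
    "ICV (encrypted)": 4,
}
total = sum(frame_parts.values())  # full frame on the air: 168 bytes
# What Wireshark labels "data": everything encrypted except the ICV
wireshark_data = (frame_parts["management headers (encrypted)"]
                  + frame_parts["challenge data (encrypted)"])  # 136 bytes
```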

GMP variable's bit size

How do I know the size of a declared variable in GMP? Or, how is the size of an integer decided in GMP?
mpz_random(temp, 1);
The manual says this function allocates 1 limb (= 32 bits on my machine) to temp,
yet temp only holds a 9-digit number.
I don't think a 32-bit number can hold only 9 decimal digits,
so please help me understand the size of an integer variable in GMP.
Thanks in advance.
mpz_sizeinbase(num, 2) will give you the size in 'used' bits.
32 bits (4 bytes) really can only store 9 full decimal digits:
2^32 = 4 294 967 296
so there are only 9 full decimal digits here (the 10th ranges only from 0 up to 4, so it is not full).
You can recompute this via logarithms:
log_10(2^32)
Let's ask Google:
log base 10(2^32) = 9.63295986
Everything is correct.
You can check the number of limbs in a debugger. A GMP integer has the internal field '_mp_size' which is the count of the limbs used to hold the current value of the variable (0 is a special case: it's represented with _mp_size = 0). Here's an example I ran in Visual C++ (see my article How to Install and Run GMP on Windows Using MPIR):
mpz_set_ui(temp, 1073741824); //2^30, (_mp_size = 1)
mpz_mul(temp,temp,temp); //2^60 (_mp_size = 2)
mpz_mul(temp,temp,temp); //2^120 (_mp_size = 4)
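The same bookkeeping can be mirrored in Python, whose integers expose bit_length() much like mpz_sizeinbase(num, 2); the 32-bit limb size below is an assumption matching the poster's machine:

```python
import math

LIMB_BITS = 32  # assumed limb size, as on the poster's machine

def limbs_needed(n):
    """Limbs GMP would use to hold |n| (0 is the special empty case)."""
    if n == 0:
        return 0
    return -(-n.bit_length() // LIMB_BITS)  # ceiling division

# 2^32 holds just over 9 full decimal digits:
digits = math.log10(2 ** 32)  # ~9.63
# and the limb counts match the _mp_size values seen in the debugger:
# 2^30 needs 1 limb, 2^60 needs 2, 2^120 needs 4
```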