Getting TPM's public EK: meaning of leading/trailing bytes

I have been trying to get a TPM's EK public key using two methods.
Using Hyper-V's Get-PlatformIdentifier I get the following result:
3082010a0282010100<EKPUBLICKEY>0203010001
Using Urchin's C library I get:
<EKPUBLICKEY>
Can anyone explain what 3082010a0282010100 and 0203010001 mean/encode?

It is the DER encoding of ASN.1 types. Taking 3082010A0282010100<KEY>0203010001 apart:
30: tag of a SEQUENCE
82010A: length of the SEQUENCE; the first length byte 0x82 has its high bit set, so its low 7 bits say the length itself occupies the next 2 bytes, giving 0x010A = 266 bytes
02: tag of an INTEGER
820101: length of the INTEGER, 0x0101 = 257 bytes
00<KEY>: the INTEGER itself, which is the modulus; DER integers are signed, so the leading 00 is needed to mark the value as positive (the modulus's top bit is set), and after dropping it the modulus is 256 bytes
Finally the exponent:
0203010001: 02 is an INTEGER tag, 03 its length, and 010001 its value, i.e. the public exponent 65537
So the surrounding bytes simply wrap the raw modulus in the standard RSAPublicKey structure: SEQUENCE { modulus INTEGER, publicExponent INTEGER }.
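As a rough illustration, here is a minimal sketch that walks exactly this layout by hand (Python used just for illustration; the helper names are mine, and real code should use a proper ASN.1 library):
def read_len(buf, i):
    """Return (length, next_index) for a DER length field starting at buf[i]."""
    first = buf[i]
    if first < 0x80:                     # short form: the byte is the length
        return first, i + 1
    n = first & 0x7F                     # long form: next n bytes hold the length
    return int.from_bytes(buf[i + 1:i + 1 + n], "big"), i + 1 + n

def parse_rsa_public_key(hex_str):
    buf = bytes.fromhex(hex_str)
    assert buf[0] == 0x30                # SEQUENCE tag
    _, i = read_len(buf, 1)
    assert buf[i] == 0x02                # INTEGER tag (modulus)
    mod_len, i = read_len(buf, i + 1)
    modulus = int.from_bytes(buf[i:i + mod_len], "big")
    i += mod_len
    assert buf[i] == 0x02                # INTEGER tag (exponent)
    exp_len, i = read_len(buf, i + 1)
    exponent = int.from_bytes(buf[i:i + exp_len], "big")
    return modulus, exponent

# Dummy 256-byte modulus of all 0xFF, framed exactly as in the question:
key = "3082010a0282010100" + "ff" * 256 + "0203010001"
m, e = parse_rsa_public_key(key)
print(e)                                 # 65537
print(m.bit_length())                    # 2048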


Integer part bit growth for fixed point numbers of the 0.xyz kind

First of all we should agree on the definition of the QM.N format. I will follow this resource and its conventions.
For the purposes of this paper the notion of a Q-point for a fixed-point number is introduced.
This labeling convention is as follows:
Q[QI].[QF]
Where QI = # of integer bits & QF = # of fractional bits
For signed integer variable types we will include the sign bit in QI as it does have integer
weight albeit negative in sign.
Based on this convention, if I had to represent the number -0.123 in the format Q1.7 I would write it as: 1.1110001
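A quick sanity check of that representation (Python used just for illustration):
# Q1.7: scale by 2**7, truncate toward zero, show 8-bit two's complement.
fixed = int(-0.123 * 2**7)               # -15
print(format(fixed & 0xFF, "08b"))       # 11110001, i.e. 1.1110001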
The theory says that:
When performing an integer multiplication the product is 2×WL if both the multiplier and
multiplicand are WL long. If the integer multiplication is on fixed-point variables, the number of
integer and fractional bits in the product is the sum of the corresponding multiplier and
multiplicand Q-points, as described by the following equations:
QI(product) = QI(multiplier) + QI(multiplicand)
QF(product) = QF(multiplier) + QF(multiplicand)
Knowing this is useful because after multiplication we have double precision, and we need to rescale the output to our input precision. Knowing where the integer part sits lets us prevent overflow and pick the relevant bits out of the double-width result; in the tests below, the long bit string on each C line is the result of the multiplication.
However, when performing the multiplication between two Q1.7 numbers of the format 0.xyz I have noticed that the integer part never grows, allowing me to pick only one bit from the integer part. I have written a piece of code that picks only the fractional part after multiplication, and here are the results.
Test 0
Testing +0.5158*+0.0596
A:real_val:+0.5156 fixed: 66 int: 0 frac: 1000010
B:real_val:+0.0547 fixed: 7 int: 0 frac: 0000111
C: real_val:+0.0282 fixed: 462 int: 00 frac: 00000111001110
Floating multiplication: +0.0307
Test 1
Testing +0.4842*-0.9558
A:real_val:+0.4766 fixed: 61 int: 0 frac: 0111101
B:real_val:-0.9531 fixed: -122 int: 1 frac: 0000110
C: real_val:-0.4542 fixed: -7442 int: 11 frac: 10001011101110
Floating multiplication: -0.4628
Test 2
Testing +0.2812*+0.2433
A:real_val:+0.2734 fixed: 35 int: 0 frac: 0100011
B:real_val:+0.2422 fixed: 31 int: 0 frac: 0011111
C: real_val:+0.0662 fixed: 1085 int: 00 frac: 00010000111101
Floating multiplication: +0.0684
Test 3
Testing -0.7235*-0.9037
A:real_val:-0.7188 fixed: -92 int: 1 frac: 0100100
B:real_val:-0.8984 fixed: -115 int: 1 frac: 0001101
C: real_val:+0.6458 fixed: 10580 int: 00 frac: 10100101010100
Floating multiplication: +0.6538
My question is whether I am overlooking anything here, or whether this is normal and expected behaviour for fixed-point numbers. If so, I will be happy with my numbers never overflowing during multiplication.
Basically, what I mean is that after multiplying two Q1.X numbers of the form 0.xyz, the integer part will always be 0 (if the result is positive) or all ones, 1111..., by sign extension (if the result is negative).
So my accumulator register will be filled with only 2*X meaningful bits, and I can take only those, plus the sign.
No, the number of bits in the result is still the sum of the bits in the inputs.
Summary:
Signed Q1.31 times signed Q1.31 equals signed Q2.62.
Unsigned Q1.31 times unsigned Q1.31 equals unsigned Q2.62.
Explanation:
Unsigned Q1.n numbers can represent from zero (inclusive) to two (exclusive). If you multiply two such numbers together the range of results is from zero (inclusive) to 4 (exclusive). Just less than four is three point something, and three fits in the two bits above the point.
Signed Q1.n numbers can represent from negative one (inclusive) to one (exclusive). If you multiply two such numbers together the range of results is negative one (exclusive) to one (inclusive). Signed Q1.31 times signed Q1.31 would fit in Q1.62 except for the single case -1.0 times -1.0 equals +1.0, which requires the extra bit above the point.
The equations in your question apply equally in both these cases.
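As a small illustration of both points in Q1.7 (Python used just for illustration; the first pair of values is taken from Test 3 above):
def q_mul(a_fixed, b_fixed):
    """a_fixed, b_fixed: signed Q1.7 integers in [-128, 127]; result is Q2.14."""
    return a_fixed * b_fixed

# Inputs strictly inside (-1, 1): the product fits in Q1.14, so bit 14
# (the second integer bit) is just sign extension.
p = q_mul(-92, -115)                     # -0.71875 * -0.8984375
print(p, p / 2**14)                      # 10580 0.645751953125

# The single exception: -1.0 * -1.0 = +1.0 needs the extra integer bit.
p = q_mul(-128, -128)
print(p, p / 2**14)                      # 16384 1.0 -> does not fit in Q1.14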

SAS TO COBOL conversion variable declaration

Friends,
I am doing a SAS to COBOL conversion. I am stuck with the declaration and conversion below, and I am getting a S0C7 abend in the COBOL run. Please provide some solution.
IP in SAS - PD3.5
OP in SAS - z6.5
My COBOL declaration below.
IP s9.9(5);
OP .9(5);
Please suggest some solution..
Thanks a lot!!
Packed Decimal is stored one digit per nibble, which is two digits per byte, with the last nibble storing the sign. The sign nibbles C, A, F, and E are treated as positive; the sign nibbles B and D are treated as negative. Sign nibbles C and D are referred to as "preferred sign". A sign nibble of F is considered "unsigned," meaning it is neither positive nor negative, though pragmatically you can think of it as positive for arithmetic purposes. +123 is stored in two bytes as x'123C', -456 is stored as x'456D'.
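A quick sketch of that nibble layout (Python used just for illustration; the helper name is mine):
def unpack_pd(raw):
    """Decode a packed-decimal field: one digit per nibble, sign in the last nibble."""
    digits = 0
    for byte in raw[:-1]:
        digits = digits * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    digits = digits * 10 + (raw[-1] >> 4)
    return -digits if (raw[-1] & 0x0F) in (0x0B, 0x0D) else digits

print(unpack_pd(bytes.fromhex("123C")))  # 123
print(unpack_pd(bytes.fromhex("456D")))  # -456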
The SAS PD informat specifies PDw.d where w is the width of the field in bytes and d is the number of decimal places to the right within the field. So PD3.5 is a 3 byte field (which would store 5 digits and a sign) with all 5 digits to the right of the decimal point.
To obtain the COBOL declaration for a SAS PDw.d informat:
a = (w * 2) - 1 (total number of digits)
b = a - d (digits to the left of the implied decimal point)
if b = 0
PIC SV9(d) PACKED-DECIMAL
else
PIC S9(b)V9(d) PACKED-DECIMAL
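The rule is mechanical enough to script; a sketch (Python used just for illustration; the function name is mine):
def pic_for_pd(w, d):
    a = w * 2 - 1                        # total digits in a w-byte packed field
    b = a - d                            # digits left of the implied decimal point
    if b == 0:
        return "PIC SV9({}) PACKED-DECIMAL".format(d)
    return "PIC S9({})V9({}) PACKED-DECIMAL".format(b, d)

print(pic_for_pd(3, 5))                  # PIC SV9(5) PACKED-DECIMAL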
The SAS Z format specifies Zw.d where w is the total width of the output field in characters and d is the number of decimal places to the right within the field. The field will be padded with zeroes on the left to make it w characters wide. So Z6.5 specifies a 6-character output field with 5 digits to the right of the decimal point. One character is taken by the decimal point itself, and unfortunately there is no room for the sign, which may be a bug or may be intentional (perhaps all the data is known to be positive).
IP PIC SV9(5) PACKED-DECIMAL.
OP PIC .9(5).
When you MOVE IP TO OP the conversion from Packed Decimal to Zoned Decimal will be done for you by COBOL.

TLS/PSK key length

Is there a maximum length for the key? I am using GnuTLS's psktool to create keys, and I need to create a key of size 128 bits, but the maximum value it lets me use as the key length is 64. Is this impossible?
GnuTLS's psktool takes the key size in bytes, not in bits, so the maximum length is 64 bytes = 512 bits. For a 128-bit key, pass 16 as the key size.

Bitwise operators: How do I clear the most significant bit?

I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into bytes and then clear the most significant bit. I am using bitwise operators, but I am unsure how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can
convert an integer into a special text encoding
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4 character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383]
2. Pack that value into two bytes such that the most significant bit of each is cleared
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
// get the input value
int decValue = [_inputValue.text intValue];
char* bytes = (char*)&decValue;
NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
NSLog(@"%hu", shortNumber);
NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
return returnString;
}
I'm using the shift bitwise operator to clear the MSB and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(#"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (eg. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (eg. 0x8000)
~(mask) gives you the bitwise NOT of the mask (eg. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
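For reference, here is a sketch of the packing the task actually asks for, following the numbered steps in the problem statement (Python used just for illustration; the function name is mine):
def encode(value):
    assert -8192 <= value <= 8191
    v = value + 8192                     # translate to [0..16383], 14 bits
    high = (v >> 7) & 0x7F               # upper 7 bits -> byte with MSB clear
    low = v & 0x7F                       # lower 7 bits -> byte with MSB clear
    return "{:02x}{:02x}".format(high, low)

for raw in (0, -8192, 8191, 2048, -4096):
    print(raw, encode(raw))              # matches the sample table above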

first 16 bit of a 32 bit hex

Please excuse my lack of knowledge here, but could someone let me know how I can get the first 16 bits of a 32-bit hex number?
That depends on what you mean by "first". Given a number, such as 0xdeadbeef, would you consider 0xdead or 0xbeef to be "first"?
If the former, divide the number by 65536 (as an integer). If the latter, compute the modulus to 65536.
This is of course also doable with binary operators such as shift/and; I'm just not sure how to express that in your desired language. I'm sure there will be other answers with more precise details.
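For example, both readings of "first" for 0xdeadbeef (Python used just for illustration):
n = 0xDEADBEEF
print(hex(n // 65536))                   # 0xdead (high half via integer division)
print(hex(n % 65536))                    # 0xbeef (low half via modulus)
print(hex((n >> 16) & 0xFFFF))           # 0xdead (same, with shift/and)
print(hex(n & 0xFFFF))                   # 0xbeef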
Assuming by "first" you mean the least significant half?
If My32BitNumber is an Integer:
Dim f16 As Integer = &HFFFF And My32BitNumber
If you're actually asking "what are the first 16 bits of DEADBEEF?", then in this sense that would be the last four hex digits: BEEF.
AND it with 0xffff:
int input = 0xabcd;
int first2Bytes = input & 0xffff;
Dim i As Integer = &HDEADBEEF
Dim s16 As UShort
s16 = i And &HFFFF 'BEEF
'or
s16 = (i >> 16) And &HFFFF 'DEAD
This will get you the 32 bit number as a four byte array:
Dim bytes As Byte() = BitConverter.GetBytes(number)
To get the first two bytes as a signed 16 bit number:
Dim first As Int16 = BitConverter.ToInt16(bytes, 0)
To get the first two bytes as an unsigned 16 bit number:
Dim first As UInt16 = BitConverter.ToUInt16(bytes, 0)
This is of course a bit slower than using bit shifts or division, but it handles the sign bit (the most significant bit) correctly, which you may have problems with using bit shift or division.
You can also get the first two bytes as a 16 bit unsigned number and assign it to an Integer:
Dim first As Integer = BitConverter.ToUInt16(bytes, 0)
(Getting a signed 16 bit number and assign to an Integer means that the sign bit would also be copied to the top 16 bits of the Integer, which is probably not desired.)
If you want the other 16 bits, just change the index in the ToUInt16/ToInt16 call from 0 to 2. Note that BitConverter follows the machine's byte order (see BitConverter.IsLittleEndian): on a little-endian system, index 0 gives you the least significant 16 bits and index 2 gives you the most significant 16 bits.
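To see why the index maps that way on a little-endian machine, here is the same byte layout mimicked with Python's struct module (illustration only; on such systems BitConverter lays out bytes like the "<" little-endian format below):
import struct
raw = struct.pack("<I", 0xDEADBEEF)              # byte array: ef be ad de
print(hex(struct.unpack_from("<H", raw, 0)[0]))  # 0xbeef (index 0 -> low half)
print(hex(struct.unpack_from("<H", raw, 2)[0]))  # 0xdead (index 2 -> high half)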