How to verify a signature made by trezor wallet - cryptography

I want to verify a message signed by my trezor hardware wallet.
Basically I have this information:
$ .venv/bin/trezorctl btc get-public-node -n 0
Passphrase required:
Confirm your passphrase:
node.depth: 1
node.fingerprint: ea66f037
node.child_num: 0
node.chain_code: e02030f2a7dfb474d53a96cb26febbbe3bd3b9756f4e0a820146ff1fb4e0bd99
node.public_key: 026b4cc594c849a0d9a124725997604bc6a0ec8f100b621b1eaed4c6094619fc46
xpub: xpub69cRfCiJ5BVzesfFdsTgEb29SskY74wYfjTRw5kdctGN2xp1HF4udTP21t68PAQ4CBq1Rn3wAsWr84wiDiRmmSZLwkEkv4qK5T5Y7EXebyQ
$ .venv/bin/trezorctl btc sign-message 'aaa' -n 0
Please confirm action on your Trezor device
Passphrase required:
Confirm your passphrase:
message: aaa
address: 17DB2Q3oZVkQAffkpFvF4cwsXggu39iKdQ
signature: IHQ7FDJy6zjwMImIsFcHGdhVxAH7ozoEoelN2EfgKZZ0JVAbvnGN/w8zxiMivqkO8ijw8fXeCMDt0K2OW7q2GF0=
I wanted to use python3-ecdsa. When I try to verify the signature with any valid public key, I get an AssertionError: (65, 64), because the base64.b64decode of the signature is 65 bytes, but it should be 64.
When I try to load node.public_key into an ecdsa.VerifyingKey, I get an AssertionError: (32, 64), because bytes.fromhex returns 32 bytes, but every example I found uses 64 bytes for the public key.
Probably I need to convert the BIP32 xpub to a public key, but I really don't know how.
Solution
python-ecdsa needs to be at version 0.14 or greater to handle the compressed format of the public key.
import ecdsa
import base64
import hashlib


class DoubleSha256:
    # Mimics a hashlib hash object but returns sha256(sha256(data)),
    # which is the hash used for Bitcoin signed messages.
    def __init__(self, *args, **kwargs):
        self._m = hashlib.sha256(*args, **kwargs)

    def __getattr__(self, attr):
        if attr == 'digest':
            return self.double_digest
        return getattr(self._m, attr)

    def double_digest(self):
        m = hashlib.sha256()
        m.update(self._m.digest())
        return m.digest()


def pad_message(message):
    # Bitcoin "signed message" prefix; the single length byte is enough for
    # messages shorter than 253 bytes.
    return "\x18Bitcoin Signed Message:\n".encode('UTF-8') + bytes([len(message)]) + message.encode('UTF-8')


public_key_hex = '026b4cc594c849a0d9a124725997604bc6a0ec8f100b621b1eaed4c6094619fc46'
public_key = bytes.fromhex(public_key_hex)   # 33-byte compressed point
message = pad_message('aaa')
sig = base64.b64decode('IHQ7FDJy6zjwMImIsFcHGdhVxAH7ozoEoelN2EfgKZZ0JVAbvnGN/w8zxiMivqkO8ijw8fXeCMDt0K2OW7q2GF0=')
# Drop the leading recovery byte: python-ecdsa expects the plain 64-byte r||s form.
vk = ecdsa.VerifyingKey.from_string(public_key, curve=ecdsa.SECP256k1)
print(vk.verify(sig[1:], message, hashfunc=DoubleSha256))

Public-key. Mathematically an elliptic curve public key is a point on the curve. For the elliptic curve used by Bitcoin, secp256k1, as well as other X9-style (Weierstrass form) curves, there are (in practice) two standard representations originally established by X9.62 and reused by many others:
uncompressed format: consists of one octet with value 0x04, followed by two blocks of size equal to the curve order size containing the (affine) X and Y coordinates. For secp256k1 this is 1 + 2*32 = 65 octets
compressed format: consists of one octet with value 0x02 or 0x03 indicating the parity of the Y coordinate, followed by a block of size equal to the curve order containing the X coordinate. For secp256k1 this is 1 + 32 = 33 octets
The public key output by your trezor is the second form, 0x02 + 32 octets = 33 octets. Not 32.
I've never seen an X9EC library (ECDSA and/or ECDH) that doesn't accept at least the standard uncompressed form, and usually both. It is conceivable your python library expects only the uncompressed form without the leading 0x04, but if so this gratuitous and rather risky nonstandardness, unless a very good explanation is provided in the doc or code, would make me suspicious of its quality. If you do need to convert the compressed form to uncompressed you must implement the curve equation, which for secp256k1 can be found in standard references, not to mention many implementations. Compute x^3 + a*x + b, take the square root in F_p, and choose either the positive or negative value that has the correct parity (agreeing with the leading byte here 0x02).
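If you do need that conversion yourself, here is a minimal standard-library sketch (assuming the published secp256k1 parameters; the field prime p is congruent to 3 mod 4, so the square root is a single modular exponentiation):
p = 2**256 - 2**32 - 977       # secp256k1 field prime
b = 7                          # curve equation: y^2 = x^3 + 7  (a = 0)

def decompress(pub33):
    # pub33: 33-byte compressed SEC point, prefix 0x02 (even y) or 0x03 (odd y)
    assert len(pub33) == 33 and pub33[0] in (2, 3)
    x = int.from_bytes(pub33[1:], 'big')
    y_sq = (pow(x, 3, p) + b) % p
    y = pow(y_sq, (p + 1) // 4, p)     # square root mod p, valid because p % 4 == 3
    if y % 2 != pub33[0] % 2:          # pick the root whose parity matches the prefix
        y = p - y
    return b'\x04' + pub33[1:] + y.to_bytes(32, 'big')

print(decompress(bytes.fromhex(
    '026b4cc594c849a0d9a124725997604bc6a0ec8f100b621b1eaed4c6094619fc46')).hex())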
The 'xpub' is a base58check encoding of a hierarchical deterministic key, which is not just an EC(DSA) key but adds metadata for the key derivation process. If you base58 decode it and remove the check, you get (in hex):
0488B21E01EA66F03700000000E02030F2A7DFB474D53A96CB26FEBBBE3BD3B9756F4E0A820146FF1FB4E0BD99026B4CC594C849A0D9A124725997604BC6A0EC8F100B621B1EAED4C6094619FC46
which breaks down exactly as your display showed:
0488B21E fixed prefix
01 .depth
EA66F037 .fingerprint
00000000 .child_num
E02030F2A7DFB474D53A96CB26FEBBBE3BD3B9756F4E0A820146FF1FB4E0BD99 .chain_code
026B4CC594C849A0D9A124725997604BC6A0EC8F100B621B1EAED4C6094619FC46 .public_key
Confirming this, the ripemd160 of sha256 of (the bytes that are shown in hex as) 026B4CC594C849A0D9A124725997604BC6A0EC8F100B621B1EAED4C6094619FC46 is (the bytes shown in hex as) 441e1d2adf9ff2a6075d71d0d8782228e0df47f8, and prefixing the version byte 00 for mainnet to that and base58check encoding gives the address 17DB2Q3oZVkQAffkpFvF4cwsXggu39iKdQ as shown.
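For illustration, a sketch of that check using only the Python standard library (the base58check encoder is hand-rolled here for clarity; hashlib's ripemd160 support depends on the local OpenSSL build):
import hashlib

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58check(payload):
    check = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    full = payload + check
    n = int.from_bytes(full, 'big')
    out = ''
    while n:
        n, rem = divmod(n, 58)
        out = B58[rem] + out
    pad = len(full) - len(full.lstrip(b'\x00'))   # each leading zero byte becomes '1'
    return '1' * pad + out

pub = bytes.fromhex('026b4cc594c849a0d9a124725997604bc6a0ec8f100b621b1eaed4c6094619fc46')
h160 = hashlib.new('ripemd160', hashlib.sha256(pub).digest()).digest()
print(h160.hex())                    # 441e1d2adf9ff2a6075d71d0d8782228e0df47f8
print(base58check(b'\x00' + h160))   # 17DB2Q3oZVkQAffkpFvF4cwsXggu39iKdQ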
Signature. Mathematically an X9.62-type ECDSA signature is two integers, called r and s. There are two different standards for representing them, and Bitcoin uses both with variations:
ASN.1 DER format. DER is a general purpose encoding that contains 'tag' and 'length' metadata and variable length data depending on the numeric values, here r and s; for secp256k1 in general this encoding is usually 70 to 72 octets but occasionally less. However, to avoid certain 'malleability' attacks current Bitcoin requires use of 's' values less than half the curve order, commonly called 'low-s', which reduces the maximum length of the ASN.1 DER encoding to 71 octets. Bitcoin uses this for transaction signatures, and adds a 'sighash' byte immediately following it (in the 'scriptsig' aka redeem script) indicating certain options on how the signature was computed (and thus should be verified).
'plain' or P1363 format. This is fixed length and consists simply of the r and s values as fixed-length blocks; for secp256k1 this is 64 octets. Bitcoin uses this for message signatures, but it adds a 'recovery byte' to the beginning that allows recovering the public key from the message and signature if necessary, making the total 65 octets.
See https://bitcoin.stackexchange.com/questions/38351/ecdsa-v-r-s-what-is-v/38909 and https://bitcoin.stackexchange.com/questions/12554/why-the-signature-is-always-65-13232-bytes-long .
If your python library is designed for general purpose ECDSA, not Bitcoin, and wants a 64-byte signature, that almost certainly is the 'plain' format which corresponds to the Bitcoin message signature (here decoded from base64) with the first byte removed.
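For example, a quick sketch of taking the message signature from the question apart:
import base64

sig = base64.b64decode('IHQ7FDJy6zjwMImIsFcHGdhVxAH7ozoEoelN2EfgKZZ0JVAbvnGN/w8zxiMivqkO8ijw8fXeCMDt0K2OW7q2GF0=')
header, r, s = sig[0], sig[1:33], sig[33:]
print(len(sig), len(r), len(s))   # 65 32 32: one header/recovery byte, then r and s
# sig[1:] is the 64-byte 'plain' r||s signature a general-purpose ECDSA library expects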

Related

Homomorphic encryption using Palisade library

To all homomorphic encryption experts out there:
I'm using the PALISADE library:
int plaintextModulus = 65537;
float sigma = 3.2;
SecurityLevel securityLevel = HEStd_128_classic;
uint32_t depth = 2;
//Instantiate the crypto context
CryptoContext<DCRTPoly> cc = CryptoContextFactory<DCRTPoly>::genCryptoContextBFVrns(
    plaintextModulus, securityLevel, sigma, 0, depth, 0, OPTIMIZED);
Could you please explain (all) the parameters? I am especially interested in ptm, depth and sigma.
Secondly I am trying to make a Packed Plaintext with the cc above.
cc->MakePackedPlaintext(array);
What is the maximum size of the array? On my local machine (8 GB RAM), when the array is larger than ~8000 int64 values I get a free(): invalid next size (normal) error.
Thank you for asking the question.
The plaintext modulus (denoted t here) is a critical parameter for BFV, as all operations are performed mod t. In other words, when you choose t, you have to make sure that all computations do not wrap around, i.e., do not exceed t. Otherwise you will get an incorrect answer, unless your goal is to compute something mod t.
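A toy illustration of that wrap-around in plain Python (no PALISADE involved):
t = 65537
a, b = 300, 400
print((a * b) % t)   # 54463, not the true product 120000 -- the result wrapped around mod t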
sigma is the distribution parameter (used for the underlying Learning With Errors problem). You can just set it to 3.2; there is no need to change it.
Depth is the multiplicative depth of the circuit you are trying to compute. It has nothing to do with the size of vectors. Basically, if you have AxBxCxD, you have a depth of 3 with a naive approach. BFV also supports more efficient binary-tree evaluation, i.e., (AxB)x(CxD) - this option reduces the depth to 2.
BFV is a scheme that supports packing. By default, the size of packed ciphertext is equal to the ring dimension (something like 8192 for the example you mentioned). This means you can pack up to 8192 integers in your case. To support larger arrays/vectors, you would need to break them into batches of 8192 each and encrypt each one separately.
Regarding your application, the CKKS scheme would probably be a much better option (I will respond on the application in more detail in the other thread).
I have some experience with the SEAL library which also uses the BFV encryption scheme. The BFV scheme uses modular arithmetic and is able to encrypt integers (not real numbers).
For the parameters you're asking about:
The Plaintext Modulus is an upper bound for the input integers. If this parameter is too low, it might cause your integers to overflow (depending on how large they are, of course).
The Sigma is the distribution parameter for Gaussian noise generation
The Depth is the circuit depth which is the maximum number of multiplications on a path
Also, for the packed plaintext you should use vectors, not arrays. Maybe that will fix your problem. If not, try lowering the size and making several vectors if necessary.
You can determine the ring dimension (generated by the crypto context based on your parameter settings) by using cc->GetRingDimension() as shown in line 113 of https://gitlab.com/palisade/palisade-development/blob/master/src/pke/examples/simple-real-numbers.cpp

CRC of input data shorter than poly width

I'm in the process of writing a paper during my studies on implementing CRC in Excel with VBA.
I've created a fairly straightforward, modular algorithm that uses Ross's parametrized model.
It works flawlessly for any length of polynomial and any combination of parameters except for one case: when the length of the input data is shorter than the width of the polynomial and an initial value ("INIT") is chosen that has any bits set which are "past" the length of the input data.
Example:
Input Data: 0x4C
Poly: 0x1021
Xorout: 0x0000
Refin: False
Refout: False
If I choose no INIT or any INIT like 0x##00, I get the same checksum as any of the online CRC generators. If any bit of the last two hex characters is set - like 0x0001 - my result is invalid.
I believe the question boils down to "How is the register initialized if only one byte of input data is present for a two byte INIT parameter?"
It turns out I was misled by (or may very well have misinterpreted) the explanation of how to use the INIT parameter on the sunshine2k website.
The INIT value must not be XORed with the first n input bits per se (n being the width of the register / cropped poly / checksum), but must only be XORed in after the n zero bits have been appended to the input data.
This distinction does not matter when the input data is n bits long or longer, but it does matter when the input data is too short.
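To make that concrete, here is a small Python sketch of the non-reflected case from the question (refin/refout false, xorout 0); INIT is XORed over the first WIDTH bits only after the WIDTH zero bits have been appended:
def crc_bitwise(data, width, poly, init):
    # Append 'width' zero bits to the message, XOR INIT over the first 'width'
    # bits of the padded bit string, then do plain mod-2 polynomial division.
    bits = len(data) * 8 + width
    value = int.from_bytes(data, 'big') << width   # message followed by zero bits
    value ^= init << (bits - width)                # INIT over the first 'width' bits
    for i in range(bits - width):
        if value & (1 << (bits - 1 - i)):
            value ^= ((1 << width) | poly) << (bits - width - 1 - i)
    return value & ((1 << width) - 1)

# Parameters from the question: data 0x4C, poly 0x1021, refin/refout false, xorout 0
print(hex(crc_bitwise(b'\x4c', 16, 0x1021, 0x0000)))
print(hex(crc_bitwise(b'\x4c', 16, 0x1021, 0x0001)))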

What is the entropy of XORed CSPRNG bytes with low entropy hash?

Let’s say I take 256 bits from a CSPRNG and assume it is perfectly 256 bits of entropy. Call this rand.
Then let’s say I take the sha256 of the ASCII text “password”. Call this hash.
Now we XOR rand and hash. Call this mixed.
Is the entropy of mixed less than that of rand?
If so, is there a formula for calculating its entropy?
Example below: What is the entropy of mixed as a function of rand and weak_hash
#!/usr/bin/python3
import hashlib, os

def main():
    rand = int(os.urandom(32).hex(), 16)
    weak_hash = int(hashlib.sha256(b'password').digest().hex(), 16)
    mixed = "%064x" % (rand ^ weak_hash)
    print(mixed)

main()
You are describing a one-time pad. If the key stream (the output of the CSPRNG) is fully random, then the ciphertext will be indistinguishable from random as well.
Of course the output of a CSPRNG is not fully random. However, if the CSPRNG is well seeded with enough entropy, then you'd have the same security as a stream cipher, which mimics a one-time pad.
So the output (mixed) will be as random as the CSPRNG output, as long as the CSPRNG doesn't get into a previously encountered state. That should basically only happen if the entropy source fails.

What's the proper way to get a fixed-length bytes representation of an ECDSA Signature?

I'm using python and cryptography.io to sign and verify messages. I can get a DER-encoded bytes representation of a signature with:
cryptography_priv_key.sign(message, hash_function)
...per this document: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/
A DER-encoded ECDSA Signature from a 256-bit curve is, at most, 72 bytes; see: ECDSA signature length
However, depending on the values of r and s, it can also be 70 or 71 bytes. Indeed, if I examine the length of the output of this function, it varies from 70 to 72. Do I have that right so far?
I can decode the signature to ints r and s. These are both apparently 32 bytes, but it's not clear to me whether that will always be so.
Is it safe to cast these two ints to bytes and send them over the wire, with the intention of encoding them again on the other side?
The simple answer is, yes, they will always be 32 bytes.
The more complete answer is that it depends on the curve. For example, a 256-bit curve has an order of 256 bits. Similarly, a 128-bit curve only has an order of 128 bits.
You can divide this number by eight to find the byte size of r and s.
It gets more complicated when the order's bit length isn't divisible by eight, as with secp521r1, where the order is a 521-bit number.
In this case, we round up: 521 / 8 is 65.125, so 66 bytes are needed to hold each value.
It is safe to send them over the wire and encode them again, as long as you keep track of which is r and which is s.
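A sketch of that round trip using cryptography.io's DER helpers (assuming a 256-bit curve, hence 32 bytes per value):
from cryptography.hazmat.primitives.asymmetric.utils import (
    decode_dss_signature, encode_dss_signature)

def der_to_fixed(der_sig, size=32):
    # DER -> fixed-length r||s (2*size bytes)
    r, s = decode_dss_signature(der_sig)
    return r.to_bytes(size, 'big') + s.to_bytes(size, 'big')

def fixed_to_der(fixed_sig, size=32):
    # fixed-length r||s -> DER, for handing back to a verify() call
    r = int.from_bytes(fixed_sig[:size], 'big')
    s = int.from_bytes(fixed_sig[size:], 'big')
    return encode_dss_signature(r, s)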

Does Camellia have a 256 bit block size?

Wikipedia says Camellia comes with a block size of 128 and a variable key size (128, 192, 256). Another site lists it as a 256-bit cipher.
The OpenSSL API has a function named EVP_camellia_256_cbc. Does this refer to the key size or the block size? And does Camellia support 256-bit block sizes at all?
The information on the Wikipedia page is correct: Camellia has a fixed block size of 128 bit and a variable key size of 128, 192 and 256 bit. You can compare that with other authoritative sources like its specification, e.g. found in RFC 3713.
The "256 bit" in "256 bit cipher" usually refers to its security level and that is determined by its key size (and potential attack vectors that might decrease it).
Therefore, EVP_camellia_256_cbc means Camellia with a 256 bit key size, so you should supply keys of that size. Supplying keys of the correct key size is important, because some implementations may behave differently than others and you will lose a lot of time debugging when trying to connect different implementations.
For example, if you define that you want to use Camellia-256 but you pass a 192-bit key, it may happen that
one implementation pads the passed key with 0x00 bytes up to the specified key size,
another implementation ignores the specification, only looks at the actual supplied key and then runs Camellia-192, or
a broken implementation (for non-standard key sizes) calculates the number of rounds to use (12 or 14 for Camellia), arrives at a non-standard number of rounds, and produces results that are incompatible with all other implementations.