I've been looking to change the arm64 chars to 92x, but I don't know where to start.
The current arm64 code is:
x93kxke | i23kx931
I want to convert it to 92x.
You should use SHA-256.
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA) and first published in 2001.[3][4] They are built using the Merkle–Damgård construction, from a one-way compression function itself built using the Davies–Meyer structure from a specialized block cipher.
SHA-2 includes significant changes from its predecessor, SHA-1. The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256. SHA-256 and SHA-512 are novel hash functions computed with eight 32-bit and 64-bit words, respectively. They use different shift amounts and additive constants, but their structures are otherwise virtually identical, differing only in the number of rounds. SHA-224 and SHA-384 are truncated versions of SHA-256 and SHA-512 respectively, computed with different initial values. SHA-512/224 and SHA-512/256 are also truncated versions of SHA-512, but the initial values are generated using the method described in Federal Information Processing Standards (FIPS) PUB 180-4.
SHA-2 was first published by the National Institute of Standards and Technology (NIST) as a U.S. federal standard (FIPS). The SHA-2 family of algorithms are patented in US patent 6829355.[5] The United States has released the patent under a royalty-free license.[6]
Currently, the best public attacks break preimage resistance for 52 out of 64 rounds of SHA-256 or 57 out of 80 rounds of SHA-512, and collision resistance for 46 out of 64 rounds of SHA-256.[1][2]
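For completeness, computing a SHA-256 digest in Python takes only a couple of lines with the standard hashlib module; the input below is just the placeholder string from the question, so substitute your actual data:
import hashlib
data = b"x93kxke | i23kx931"  # placeholder value copied from the question
digest = hashlib.sha256(data).hexdigest()
print(digest)  # always 64 hex characters (256 bits), regardless of the input length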
To all homomorphic encryption experts out there:
I'm using the PALISADE library:
int plaintextModulus = 65537;
float sigma = 3.2;
SecurityLevel securityLevel = HEStd_128_classic;
uint32_t depth = 2;
//Instantiate the crypto context
CryptoContext<DCRTPoly> cc = CryptoContextFactory<DCRTPoly>::genCryptoContextBFVrns(
plaintextModulus, securityLevel, sigma, 0, depth, 0, OPTIMIZED);
Could you please explain (all) the parameters? I'm especially interested in the plaintext modulus, depth, and sigma.
Secondly, I am trying to make a packed plaintext with the cc above.
cc->MakePackedPlaintext(array);
What is the maximum size of the array? On my local machine (8 GB RAM), when the array is larger than ~8000 int64 values I get a free(): invalid next size (normal) error.
Thank you for asking the question.
The plaintext modulus t is a critical parameter for BFV, as all operations are performed mod t. In other words, when you choose t, you have to make sure that all computations do not wrap around, i.e., do not exceed t. Otherwise you will get an incorrect answer, unless your goal is to compute something mod t.
Sigma is the distribution parameter (used for the underlying Learning With Errors problem). You can just set it to 3.2. No need to change it.
Depth is the multiplicative depth of the circuit you are trying to compute. It has nothing to do with the size of vectors. Basically, if you have AxBxCxD, you have a depth of 3 with a naive approach. BFV also supports more efficient binary tree evaluation, i.e., (AxB)x(CxD) - this option will reduce the depth to 2.
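As a generic illustration of that point (plain Python, not PALISADE code), multiplying n values left to right costs one level per extra factor, while a balanced pairwise order only needs ceil(log2(n)) levels:
import math

def chain_depth(n):
    # ((A*B)*C)*D style evaluation: each additional factor adds one level
    return n - 1

def tree_depth(n):
    # (A*B)*(C*D) style evaluation: pairwise multiplication in a binary tree
    return math.ceil(math.log2(n))

print(chain_depth(4), tree_depth(4))  # prints: 3 2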
BFV is a scheme that supports packing. By default, the size of a packed ciphertext is equal to the ring dimension (something like 8192 for the example you mentioned). This means you can pack up to 8192 integers in your case. To support larger arrays/vectors, you would need to break them into batches of 8192 each and encrypt each batch separately.
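Here is a rough sketch of that batching idea in plain Python (not PALISADE code); the ring dimension of 8192 is an assumption for illustration, and in practice you would query it from the crypto context:
RING_DIM = 8192  # assumed ring dimension; query it from the crypto context in practice

def split_into_batches(values, batch_size=RING_DIM):
    # Split a long list of integers into chunks that each fit into one packed plaintext
    return [values[i:i + batch_size] for i in range(0, len(values), batch_size)]

batches = split_into_batches(list(range(20000)))
print([len(b) for b in batches])  # [8192, 8192, 3616]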
Regarding your application, the CKKS scheme would probably be a much better option (I will respond on the application in more detail in the other thread).
I have some experience with the SEAL library which also uses the BFV encryption scheme. The BFV scheme uses modular arithmetic and is able to encrypt integers (not real numbers).
For the parameters you're asking about:
The plaintext modulus is an upper bound for the input integers. If this parameter is too low, it might cause your integers to overflow (depending on how large they are, of course).
Sigma is the distribution parameter for Gaussian noise generation.
The depth is the circuit depth, which is the maximum number of multiplications along a path.
Also, for the packed plaintext, you should use vectors, not arrays. Maybe that will fix your problem. If not, try lowering the size and making several vectors if necessary.
You can determine the ring dimension (generated by the crypto context based on your parameter settings) by using cc->GetRingDimension() as shown in line 113 of https://gitlab.com/palisade/palisade-development/blob/master/src/pke/examples/simple-real-numbers.cpp
I'm using python and cryptography.io to sign and verify messages. I can get a DER-encoded bytes representation of a signature with:
cryptography_priv_key.sign(message, hash_function)
...per this document: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/
A DER-encoded ECDSA Signature from a 256-bit curve is, at most, 72 bytes; see: ECDSA signature length
However, depending on the values of r and s, it can also be 70 or 71 bytes. Indeed, if I examine the length of the output of this function, it varies from 70 to 72 bytes. Do I have that right so far?
I can decode the signature to ints r and s. These are both apparently 32 bytes, but it's not clear to me whether that will always be so.
Is it safe to cast these two ints to bytes and send them over the wire, with the intention of encoding them again on the other side?
The simple answer is, yes, they will always be 32 bytes.
The more complete answer is that it depends on the curve. For example, a 256-bit curve has an order of 256 bits. Similarly, a 128-bit curve only has an order of 128 bits.
You can divide this number by eight to find the size of r and s in bytes.
It gets more complicated when the order's bit length isn't divisible by eight, as with secp521r1, whose order is a 521-bit number.
In this case, we round up: 521 / 8 is 65.125, so it takes 66 bytes to fit this number.
It is safe to send them over the wire and encode them again, as long as you keep track of which is r and which is s.
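A minimal Python sketch of that round trip with the cryptography library, assuming a P-256 key (so r and s each fit in 32 bytes); the key and message here are only for illustration:
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature, encode_dss_signature

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"example message"

# DER-encoded signature: 70-72 bytes depending on r and s
der_sig = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Extract r and s and pack them as fixed-width 32-byte big-endian values (64 bytes on the wire)
r, s = decode_dss_signature(der_sig)
wire = r.to_bytes(32, "big") + s.to_bytes(32, "big")

# On the receiving side, rebuild the DER signature and verify
r2 = int.from_bytes(wire[:32], "big")
s2 = int.from_bytes(wire[32:], "big")
private_key.public_key().verify(encode_dss_signature(r2, s2), message, ec.ECDSA(hashes.SHA256()))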
Wikipedia says Camellia comes with a block size of 128 bits and a variable key size (128, 192 or 256 bits). Another site lists it as a 256-bit cipher.
The OpenSSL API has a function named EVP_camellia_256_cbc. Does this refer to the key size or the block size? And does Camellia support 256-bit block sizes at all?
The information on the Wikipedia page is correct: Camellia has a fixed block size of 128 bit and a variable key size of 128, 192 and 256 bit. You can compare that with other authoritative sources like its specification, e.g. found in RFC 3713.
The "256 bit" in "256 bit cipher" usually refers to its security level and that is determined by its key size (and potential attack vectors that might decrease it).
Therefore, EVP_camellia_256_cbc means Camellia with a 256 bit key size, so you should supply keys of that size. Supplying keys of the correct key size is important, because some implementations may behave differently than others and you will lose a lot of time debugging when trying to connect different implementations.
For example, if you define that you want to use Camellia-256 but you're passing a 192-bit key, it may happen that
one implementation pads the supplied key with 0x00 bytes up to the specified key size,
another implementation doesn't care about the specification, only looks at the actual supplied key and then runs Camellia-192, or
a broken implementation (for non-standard key sizes) calculates the number of rounds that need to be used (18 or 24 for Camellia) and arrives at a non-standard number of rounds, which makes the result incompatible with all other implementations.
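As a sketch of supplying a key of the correct size (shown with Python's cryptography library rather than the OpenSSL C API; the key and IV are random placeholders), Camellia-256 in CBC mode expects a 32-byte key and a 16-byte IV matching the fixed 128-bit block size:
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # 256-bit key selects Camellia-256
iv = os.urandom(16)   # IV length equals the fixed 128-bit block size

encryptor = Cipher(algorithms.Camellia(key), modes.CBC(iv)).encryptor()
# CBC needs input in multiples of the 16-byte block size (padding omitted here)
ciphertext = encryptor.update(b"sixteen byte msg") + encryptor.finalize()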
I'm a bit confused about the difference between SHA-2 and SHA-256, and I often hear them used interchangeably. I think SHA-2 is a "family" of hash algorithms and SHA-256 is a specific algorithm in that family. Can anyone clear up the confusion?
The SHA-2 family consists of multiple closely related hash functions. It is essentially a single algorithm in which a few minor parameters are different among the variants.
The initial spec only covered 224, 256, 384 and 512 bit variants.
The most significant difference between the variants is that some are 32 bit variants and some are 64 bit variants. In terms of performance this is the only difference that matters.
On a 32 bit CPU SHA-224 and SHA-256 will be a lot faster than the other variants because they are the only 32 bit variants in the SHA-2 family. Executing the 64 bit variants on a 32 bit CPU will be slow due to the added complexity of performing 64 bit operations on a 32 bit CPU.
On a 64 bit CPU SHA-224 and SHA-256 will be a little slower than the other variants. This is because, processing only 32 bits at a time, they have to perform more operations in order to make it through the same number of bytes. You do not get quite a doubling in speed from switching to a 64 bit variant, because the 64 bit variants have a larger number of rounds than the 32 bit variants.
The internal state is 256 bits in size for the two 32 bit variants and 512 bits in size for all four 64 bit variants. So the number of possible sizes for the internal state is less than the number of possible sizes for the final output. Going from a large internal state to a smaller output can be good or bad depending on your point of view.
If you keep the output size fixed it can in general be expected that increasing the size of the internal state will improve security. If you keep the size of the internal state fixed and decrease the size of the output, collisions become more likely, but length extension attacks may become easier. Making the output size larger than the internal state would be pointless.
Due to the 64 bit variants being both faster (on 64 bit CPUs) and likely to be more secure (due to larger internal state), two new variants were introduced using 64 bit words but shorter outputs. Those are the ones known as 512/224 and 512/256.
The reason for wanting variants with output that much shorter than the internal state is usually either that for some usages it is impractical to use such a long output, or that the output needs to be used as a key for some algorithm that takes an input of a certain size.
Simply truncating the final output to your desired length is also possible. For example, the HMAC construction allows truncating the final hash output to the desired MAC length. Because HMAC feeds the output of one invocation of the hash as input to another invocation, using a hash with a shorter output results in an HMAC with less internal state. For this reason it is likely to be slightly more secure to use HMAC-SHA-512 and truncate the output to 384 bits than to use HMAC-SHA-384.
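A small Python illustration of that last point; both tags below are 48 bytes (384 bits), but the truncated HMAC-SHA-512 keeps the full 512-bit hash output between the inner and outer invocation:
import hmac, hashlib

key = b"secret key"
msg = b"message to authenticate"

# HMAC-SHA-512 truncated to 384 bits
tag_truncated = hmac.new(key, msg, hashlib.sha512).digest()[:48]

# HMAC-SHA-384 computed directly
tag_direct = hmac.new(key, msg, hashlib.sha384).digest()

print(len(tag_truncated), len(tag_direct))  # 48 48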
The final output of SHA-2 is simply the internal state (after processing the length-padded input) truncated to the desired number of output bits. The reason SHA-384 and SHA-512 on the same input look so different is that a different IV is specified for each of the variants.
Wikipedia:
The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256.
I am attempting to create the properly DER-encoded ECC parameters for the custom Microsoft P160 PlayReady curve to feed into an HSM. I have found a few sources that specify the definition of the P160 curve, since it is non-standard and custom. Below is a link to one source. In particular, the PlayReady curve values are discussed in Section 6.4.2 of the book Elementary Number Theory, A Computational Approach by William Stein.
Below is an excerpt from another source concerning the P160 PlayReady curve parameters.
For ECC, Microsoft is using an elliptic curve over Zp, where p is a 160 bit prime number (given below). The curve consists of the points that lie on the curve y^2 = x^3 + ax + b, where the operations are done over the field Zp and a and b are coefficients that are given below.
All values are represented as packed binary values: in other words, a single value over Zp is encoded simply as 20 bytes, stored in little endian order. A point on the elliptic curve is therefore a 40 byte block, which consists of two 20 byte little endian values (the x coordinate followed by the y coordinate). Here are the parameters for the elliptic curve used in MS-DRM:
p (modulus): 89abcdef012345672718281831415926141424f7
coefficient a: 37a5abccd277bce87632ff3d4780c009ebe41497
coefficient b: 0dd8dabf725e2f3228e85f1ad78fdedf9328239e
generator x: 8723947fd6a3a1e53510c07dba38daf0109fa120
generator y: 445744911075522d8c3c5856d4ed7acda379936f
Order of curve: 89abcdef012345672716b26eec14904428c2a675
These constants are fixed, and used by all parties in the MS-DRM system. The "nerd appeal" of the modulus is high when you see this number in hexadecimal: it includes counting in hexadecimal, as well as the digits of the fundamental constants e, pi, and sqrt(2).
Based on this information I have created the following hex-encoding of the DER encoded curve parameters for the P160 curve using BouncyCastle as my base ASN.1 library. Note that no seed value is specified in these curve parameters.
308195020101302006072a8648ce3d010102150089abcdef012345672718281831415926141424f7302c041437a5abccd277bce87632ff3d4780c009ebe4149704140dd8dabf725e2f3228e85f1ad78fdedf9328239e0429048723947fd6a3a1e53510c07dba38daf0109fa120445744911075522d8c3c5856d4ed7acda379936f02150089abcdef012345672716b26eec14904428c2a675
Although mathematically these curve parameters are accepted by the HSM and OpenSSL, the P160 curve points produced are not acceptable to PlayReady. I am able to use the same process to produce valid P256 curve points that are acceptable to PlayReady, so I do not believe my methods are flawed. Does anyone have any experience with the PlayReady P160 curve parameters?
After researching with Microsoft, the solution was found. Apparently one must byte-swap the public key x/y points and the private key to get the PlayReady toolkit to accept the EC key pair, but only for the P160 curve. This odd byte swapping was not required for the P256 EC key pair.
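For illustration, a small Python sketch of the kind of byte swap described, reversing each 20-byte little-endian value; the inputs below are the generator coordinates quoted earlier, used purely as placeholders:
def swap_endianness(hex_value):
    # Reverse the byte order of a hex-encoded value, e.g. a 20-byte P160 coordinate
    return bytes(reversed(bytes.fromhex(hex_value))).hex()

gx = "8723947fd6a3a1e53510c07dba38daf0109fa120"
gy = "445744911075522d8c3c5856d4ed7acda379936f"
print(swap_endianness(gx))
print(swap_endianness(gy))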