DataType equivalent in Protobuf - serialization

I know that the data types supported by protobuf-c are restricted to the ones mentioned here, but what would be a good protobuf-c equivalent to the following data types in C?
time_t,
int8_t,
int16_t,
uint8_t,
uint16_t,
ushort

For time_t, use uint64.
For all the others, use sint32 (often negative), int32 (rarely negative), or uint32 (never negative). Protobuf uses a variable-width encoding for integers that avoids using more space on the wire than is really needed. For instance, numbers less than 128 will be encoded in 1 byte by int32.

Related

float32 between (0, 1) optimization for wireless transmission

I'm looking for a way to optimize a float32 value that only ever holds values in (0, 1) for wireless transfer.
Since all my values are positive, I've already trimmed the sign bit, but now I need some help.
I think my next step is to trim the fraction's trailing zeros, and then somehow optimize the exponent part.
But I'm not quite sure; maybe some bit-level optimization already exists for cases like this, since using float32 to store values between (0, 1) is a very common practice.
Data integrity and error correction are not relevant in my case.
So I've decided to just use uint16_t, where 0 = 0f and 65535 = 1f, which is enough for me. At the memory level I'm not actually using float at all. This integer-based logic not only gave me a shorter byte sequence for wireless transfer, but also uses less memory.
Another option is to use uint32_t, which would use the same number of bits as the original float.
Conversion to float, if you need it, is something like this:
float to_float = (float) value / (float) UINT16_MAX;

Homomorphic encryption using Palisade library

To all homomorphic encryption experts out there:
I'm using the PALISADE library:
int plaintextModulus = 65537;
float sigma = 3.2;
SecurityLevel securityLevel = HEStd_128_classic;
uint32_t depth = 2;
//Instantiate the crypto context
CryptoContext<DCRTPoly> cc = CryptoContextFactory<DCRTPoly>::genCryptoContextBFVrns(
plaintextModulus, securityLevel, sigma, 0, depth, 0, OPTIMIZED);
Could you please explain (all) the parameters? I'm especially interested in the plaintext modulus (ptm), depth, and sigma.
Secondly I am trying to make a Packed Plaintext with the cc above.
cc->MakePackedPlaintext(array);
What is the maximum size of the array? On my local machine (8 GB RAM), when the array is larger than ~8000 int64 values I get a free(): invalid next size (normal) error.
Thank you for asking the question.
The plaintext modulus (denoted t here) is a critical parameter for BFV, as all operations are performed mod t. In other words, when you choose t, you have to make sure that no computation wraps around, i.e., exceeds t. Otherwise you will get an incorrect answer, unless your goal is to compute something mod t.
sigma is the distribution parameter (used for the underlying Learning With Errors problem). You can just set it to 3.2; there is no need to change it.
Depth is the multiplicative depth of the circuit you are trying to compute. It has nothing to do with the size of vectors. Basically, if you have AxBxCxD, you have depth 3 with a naive approach. BFV also supports more efficient binary-tree evaluation, i.e., (AxB)x(CxD) - this option reduces the depth to 2.
BFV is a scheme that supports packing. By default, the size of packed ciphertext is equal to the ring dimension (something like 8192 for the example you mentioned). This means you can pack up to 8192 integers in your case. To support larger arrays/vectors, you would need to break them into batches of 8192 each and encrypt each one separately.
Regarding your application, the CKKS scheme would probably be a much better option (I will respond on the application in more detail in the other thread).
I have some experience with the SEAL library which also uses the BFV encryption scheme. The BFV scheme uses modular arithmetic and is able to encrypt integers (not real numbers).
For the parameters you're asking about:
The Plaintext Modulus is an upper bound on the input integers. If this parameter is too low, it might cause your integers to overflow (depending on how large they are, of course).
The Sigma is the distribution parameter for Gaussian noise generation
The Depth is the circuit depth which is the maximum number of multiplications on a path
Also, for the Packed Plaintext you should use vectors, not arrays. Maybe that will fix your problem. If not, try lowering the size and making several vectors if necessary.
You can determine the ring dimension (generated by the crypto context based on your parameter settings) by using cc->GetRingDimension() as shown in line 113 of https://gitlab.com/palisade/palisade-development/blob/master/src/pke/examples/simple-real-numbers.cpp

Standard text representation for floating-point numbers

Is there a standard text representation for the floating-point numbers that is supported by the most popular languages?
What is the standard for representing infinities and NaNs?
There isn't a general consensus, unfortunately.
However, there seems to be some convergence on hexadecimal notation for floats. See pg. 57/58 of http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
The advantage of this notation is that you can precisely represent the value of the float as represented by the machine without worrying about any loss of precision. See this page for examples: https://www.exploringbinary.com/hexadecimal-floating-point-constants/
Note that NaN and Infinity values are not supported by hexadecimal-floats. There seems to be no general consensus on how to write these. Most languages actually don't even allow writing these as constants, so you resort to expressions such as 0/0 or 1/0 etc. instead.
Since you tagged this question with serialization, I'd recommend simply serializing the bit pattern you have for the float value. This will cost you 8 characters for single precision and 16 characters for double precision (64 bits and 128 bits respectively, assuming 8 bits per character). Perhaps not the most efficient, but it ensures you can encode all possible values, including NaNs and infinities, and transmit them precisely.

Handling 512 bit numbers

I need to convert SHA-512 hex strings to integers and perform arithmetic functions without my program crashing. I have only found ways to handle up to 64-bit numbers so far. How can I handle larger numbers?

What does alignment to 16-byte boundary mean in x86

Intel's official optimization guide has a chapter on converting from MMX instructions to SSE, where they state the following:
Computation instructions which use a memory operand that may not be aligned to a 16-byte boundary must be replaced with an unaligned 128-bit load (MOVDQU) followed by the same computation operation that uses instead register operands.
(chapter 5.8 Converting from 64-bit to 128-bit SIMD Integers, pg. 5-43)
I can't understand what they mean by "may not be aligned to a 16-byte boundary", could you please clarify it and give some examples?
Certain SIMD instructions, which perform the same instruction on multiple data, require that the memory address of this data is aligned to a certain byte boundary. This effectively means that the address of the memory your data resides in needs to be divisible by the number of bytes required by the instruction.
So in your case the alignment is 16 bytes (128 bits), which means the memory address of your data needs to be a multiple of 16. E.g. 0x00010 would be 16 byte aligned, while 0x00011 would not be.
How to get your data to be aligned depends on the programming language (and sometimes compiler) you are using. Most languages that have the notion of a memory address will also provide you with means to specify the alignment.
I'm guessing here, but could it be that "may not be aligned to a 16-byte boundary" means that this memory location has been aligned to a smaller value (4 or 8 bytes) before for some other purposes and now to execute SSE instructions on this memory you need to load it into a register explicitly?
Data that's aligned on a 16-byte boundary will have a memory address that's a multiple of 16, which means the lowest four bits of the address are zero.
Similarly, memory aligned on a 32-bit (4-byte) boundary has a memory address that's a multiple of four, because you group four bytes together to form a 32-bit word.