I am trying to convert a benchmark to the homomorphic domain, and I'm using the PALISADE library for that because it supports multiple schemes such as BFV, BGV, and CKKS.
I am currently using the BFV scheme.
There is a line in the code:
y = x >> 6
In the encrypted domain, given Enc(x), this line should look something like:
Enc(y) = Enc(x) >> 6
I see that division is not supported in PALISADE, and I think this is also the case with the Microsoft SEAL library.
Can someone please guide me on how to go about this operation in PALISADE?
I tried the following, but it assumes that the bits of x are known, which is not the case since we only know Enc(x):
If the breakdown of the input integer into binary bits is known:
Say we are given the integer x = 233 and its binary representation as vector<int64_t> v = {1, 1, 1, 0, 1, 0, 0, 1}. We can encrypt this vector into the HE domain and get ciphertext1.
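For reference, packing and encrypting that bit vector might look roughly like this (a sketch; it assumes a BFV cryptoContext and keyPair have already been generated, and uses PALISADE's MakePackedPlaintext and Encrypt):
vector<int64_t> v = {1, 1, 1, 0, 1, 0, 0, 1};             // bits of x = 233, most significant bit first
Plaintext packed = cryptoContext->MakePackedPlaintext(v); // pack the bits into plaintext slots
auto ciphertext1 = cryptoContext->Encrypt(keyPair.publicKey, packed); // Enc of the bit vector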
Then, say we want to do x >> 6. For this, we can do:
Plaintext plaintext1;
cryptoContext->Decrypt(keyPair.secretKey, ciphertext1, &plaintext1); // recover the packed bit vector
plaintext1->SetLength(v.size() - 6); // keep only the top two bit slots; the 6 low (shifted-out) bits at the end are dropped
vector<int64_t> v_final = plaintext1->GetPackedValue(); // gives {1, 1}
Just wondering if anyone knows why Perl6's log function returns a Num type and not a Rat type.
say (e*e).log.WHAT;
> (Num)
say (2/3).WHAT;
> (Rat)
In mathematics, log is a continuous function, so its values are real numbers. The Num type describes real numbers in Perl 6; the Rat type describes rational numbers.
It's because no one has done the work to make it do anything else yet. It's a situation that the language could handle (not that it's special to Perl 6) but also a situation that you might not want it to handle.
There's no object that represents the natural base e and maintains it as such until it can't any longer (just as Rats don't turn into Nums unless they have to). That's possible and would also allow us to decide how to treat it. Maybe we want a Rat, or FatRat, or even a certain number of decimal places in a Num. But it doesn't do that.
It's not that e is special though. It doesn't work with powers of 10 either:
> 100.log10
2
> 100.log10.^name
Num
The code behind .log10 could check that the operand is a power of 10 and decide to return an Int in that case. But it would have to check every number for that and most numbers aren't a power of 10. Checking all of those would slow down the process. It's easier to make it a little "incorrect".
But you can use .narrow to possibly get a more constrained type:
> 100.log10.narrow.^name
Int
This is different from asking for a particular type and maybe getting a different number:
> (10/3).Int
3
> (10/3).narrow.^name
Rat
And for fun:
> i*i
-1+0i
> (i*i).^name
Complex
> (i*i).narrow.^name
Int
Perl 6 is not a computer algebra system, so it treats e*e like any other Num, and once you've got a floating-point number, only explicit operations such as rounding should change the type to something like Int or Rat: the computer cannot know whether the return value 2e0 of (e*e).log actually represents 2 or some 2+ε.
I am currently working on a file compressor based on Huffman coding, so I have a decoding tree, and I have to encode this tree to an output file following a certain criterion:
"for each leaf, write out a 0 bit, followed by the 8 bits of
the corresponding character. Write out the bits in the order bit 7, bit 6, . . ., bit 0, that is high bit first. As a special case, if the byte is 0, write out bit 8, which will be a 0 for a byte value of 0, and 1 for a byte value of 256 (the EOF marker)." For an internal node, just write a bit 1.
So what I plan to do is create a bit array and add the corresponding bits to it in the specified format. The problem is that I don't know how to convert a number to binary in Smalltalk.
For example, if I want to encode the first leaf, I would want to produce something like 01101011, i.e. a 0 followed by the bit representation of 'k', and then add every bit one by one into the array.
I don't know which dialect you are using exactly, but generally, you can access the bits of an Integer. They are modelled as if the representation were two's complement, with an infinite sequence of bits:
2 is ....0000000000010
1 is ....0000000000001
0 is ....0000000000000 with infinitely many 0 on the left
-1 is ....1111111111111 with infinitely many 1 on the left
-2 is ....1111111111110
This is also true for LargeIntegers: even though they are generally implemented as sign-magnitude (the class encodes the sign), two's complement will be emulated.
Then you can operate with bitAnd:, bitOr:, bitXor:, bitInvert, bitShift:, and in some flavours bitAt:put:.
You can access the bits with (2 bitAt: index), where the index starts at 1 for the least significant bit and grows from there. If it's missing, implement it with bitAnd: and bitShift:.
For positive integers, you can ask for the rank of the high bit (2 highBit).
All these operations create a new integer (no in-place modification is possible).
Conceptually, a ByteArray is a collection of 8-bit unsigned integers (between 0 and 255), so you can implement a bit array with them (if one does not already exist in the dialect). Or you can use an Integer, but then you won't be able to control the size (which will be infinite) or modify it in place; every operation will cost a copy.
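For concreteness, the leaf-encoding step from the question (a 0 bit, then the byte's bits from bit 7 down to bit 0) is just a loop of shifts and masks; here is a rough sketch in C++, where each step maps directly onto bitShift: and bitAnd: in Smalltalk:
#include <vector>

// Append one leaf to the bit stream: a 0 marker bit, then the byte's
// bits from bit 7 down to bit 0 (high bit first).
void writeLeaf(std::vector<int> &bits, unsigned char byte)
{
    bits.push_back(0);                   // leaf marker
    for (int i = 7; i >= 0; --i)
        bits.push_back((byte >> i) & 1); // in Smalltalk: (byte bitShift: i negated) bitAnd: 1
}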
I am still learning GNU Radio and I have trouble understanding something about signal processing block types. I understand that if I create a block taking, say, 2 samples at the input and outputting 4 samples, it will be an interpolator by 2.
But now, I would like to create a block which will be a framer. So, it will have two inputs and one output. The block will receive n samples from the first input, then take m samples from the second input, append them to the samples received from input one, and then output them. In this case, my samples are supposed to be bytes.
How should I proceed in this case? Am I on the right path? Does anyone know how to handle this type of scenario?
Your case (input 0 and input 1 having different relative rates to the output) is not covered by the sync_block/interpolator/decimator "templates" that GNU Radio has, so you have to use the general block approach.
Assuming you're familiar with gr_modtool¹, you can use it to add things like interpolators (relative rate > 1), decimators (< 1) and sync (= 1) blocks:
-t BLOCK_TYPE, --block-type=BLOCK_TYPE
One of sink, source, sync, decimator, interpolator,
general, tagged_stream, hier, noblock.
But also note the general type. Using that, you can implement a block that doesn't have any restrictions on the relation between in- and output. That means that
you will have to manually consume() items from the inputs, because the number of items you took from the input can no longer be derived from the number of output items, and
you will have to implement a forecast method to tell the GNU Radio scheduler how many items you'll need for a given output (see the sketch below).
gr_modtool will give you a stub where you'll only have to add the right code!
¹ If you're not: it's introduced in the GNU Radio Guided Tutorials, part 3 or so, something that I think will be a quick and fun read for you.
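For illustration, a stripped-down general block implementing the framer from the question might look roughly like this in C++ (a rough sketch; the class and member names framer_impl, d_n and d_m are made up, while forecast, general_work and consume are the standard gr::block methods):
#include <gnuradio/block.h>
#include <gnuradio/io_signature.h>
#include <algorithm>
#include <cstring>

class framer_impl : public gr::block
{
public:
    framer_impl(int n, int m)
      : gr::block("framer",
                  gr::io_signature::make(2, 2, sizeof(unsigned char)),
                  gr::io_signature::make(1, 1, sizeof(unsigned char))),
        d_n(n), d_m(m) {}

    // Tell the scheduler how many items each input port needs
    // for a requested number of output items.
    void forecast(int noutput_items, gr_vector_int &ninput_items_required) override
    {
        int frames = noutput_items / (d_n + d_m) + 1;
        ninput_items_required[0] = frames * d_n;
        ninput_items_required[1] = frames * d_m;
    }

    int general_work(int noutput_items,
                     gr_vector_int &ninput_items,
                     gr_vector_const_void_star &input_items,
                     gr_vector_void_star &output_items) override
    {
        const unsigned char *in0 = (const unsigned char *)input_items[0];
        const unsigned char *in1 = (const unsigned char *)input_items[1];
        unsigned char *out = (unsigned char *)output_items[0];

        // How many complete frames fit into the buffers we were given?
        int frames = std::min({noutput_items / (d_n + d_m),
                               ninput_items[0] / d_n,
                               ninput_items[1] / d_m});

        for (int f = 0; f < frames; f++) {
            std::memcpy(out, in0, d_n); out += d_n; in0 += d_n; // n bytes from input 0
            std::memcpy(out, in1, d_m); out += d_m; in1 += d_m; // m bytes from input 1
        }

        // Consume from each input separately; this is exactly what the
        // fixed-rate block types cannot express.
        consume(0, frames * d_n);
        consume(1, frames * d_m);
        return frames * (d_n + d_m);
    }

private:
    int d_n, d_m;
};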
Considering that the question was asked 4 years ago and that there have been many changes in GNU Radio since then, I want to add to the answer that this is now possible to do with the Patterned Interleaver block.
[image: Patterned Interleaver block]
This block works the following way: it receives inputs on the ports to the left and outputs a single interleaved stream on the port to the right. So let's imagine a block with 2 inputs, V1 and V2:
V1 = [0,1,0,0,1,1]
V2 = [1,1,1,0,1,0]
Suppose we want the output to be the first 2 bits of V1 followed by the first 2 bits of V2 followed by the next 2 bits of V1 and then the next 2 bits of V2 and so on...that is, we want the output to be
Vo = [0,1,1,1,0,0,1,0,1,1,1,0].
In order to accomplish this we go to the properties of the Patterned Interleaver block which looks like this:
[image: Patterned Interleaver block properties]
The Pattern field allows us to control the order in which the bits from the input ports will be interleaved. By default it is [0,0,1,1], meaning that the block will take 2 bits from input port 0 followed by 2 bits from input port 1. The corresponding output will be
[0,1,1,1,0,0,1,0,1,1,1,0],
that is, the first 2 bits of V1 followed by the first 2 bits of V2 and then the next 2 bits of V1, etc.
Let's see another example. In case the Pattern field is set to [0,0,1,1,1,0], the output will be 2 bits from input port 0, followed by 3 bits from input port 1, and then 1 bit from input port 0. In the output we will obtain [0,1,1,1,1,0,0,1,0,1,0,1].
Lastly, the Pattern field is also used to determine the number of input ports. If the Pattern field is [0,0,1,2] we will see that another input port is added to the block.
[image: Patterned Interleaver block with 3 input ports]
I'm looking at linked data in MS Access.
The "Yes/No" fields contain the value -1 for YES and 0 for NO. Can someone explain why such a counter-intuitive value is used for "Yes"? (Obviously, it should be 1 and 0)
I imagine there must be a good reason, and I would like to know it.
The binary representation of False is 0000000000000000 (how many bits are used depends on the implementation). If you perform a bitwise NOT operation on it, it changes to 1111111111111111, i.e. True, but this is the binary representation of the signed integer -1.
A 1 in the most significant bit position signals a negative number for signed integers. Negating a number is done by inverting all the bits and adding 1. This is called two's complement.
Let us change the sign of 1111111111111111. First invert; we get:
0000000000000000
Then add one:
0000000000000001, this is 1.
This is the proof that 1111111111111111 was the binary representation of -1.
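A quick way to see the same thing in any language with fixed-width signed integers (a small C++ illustration):
#include <cstdint>
#include <cstdio>

int main()
{
    int16_t f = 0;     // "False": all 16 bits are 0
    int16_t t = ~f;    // bitwise NOT: all 16 bits become 1
    printf("%d\n", t); // prints -1, because all-ones is -1 in two's complement
    return 0;
}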
UPDATE
Also, when comparing these values, do not compare
x = -1
or
x = 1
instead, compare
x <> 0
This always gives the correct result, independent of the convention used. Most implementations treat any nonzero value as True.
"Yes" is -1 because it isn't anything else.
When dealing with Microsoft products, especially one as old as Access, don't assume that there is a good reason for any design choice.
I know what they do; I just don't understand when you'd have a use for them.
When you need to manipulate individual bits of a chunk of data (like a byte or an int). This happens frequently, for example, in algorithms dealing with:
encryption
compression
audio / video processing
networking (protocols)
persistence (file formats)
etc.
I've used them for bit masks before. Say you have an item with a set of options that can each be either yes or no (options on a car, for instance). You can use a single integer column to store a value for every option by assigning each option to a binary digit in the number.
Example: 5 = 101 in binary
that would mean:
option 1 - yes
option 2 - no
option 3 - yes
If you were to query on this, you would use the bitwise & or | operators to select the correct items.
Here is a good article that goes over it in more detail.
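For concreteness, here is a small sketch in C++ (the option names are made up) of how such a mask is tested with & and updated with |:
#include <cstdio>

// Each option gets its own bit in the stored integer.
enum CarOption : unsigned {
    OPTION_1 = 1 << 0,  // binary 001
    OPTION_2 = 1 << 1,  // binary 010
    OPTION_3 = 1 << 2   // binary 100
};

int main()
{
    unsigned options = 5; // binary 101: options 1 and 3 are "yes"

    // & isolates one flag; a non-zero result means the option is set.
    printf("option 2: %s\n", (options & OPTION_2) ? "yes" : "no"); // no

    // | turns an option on without disturbing the others.
    options |= OPTION_2;  // now binary 111
    printf("option 2: %s\n", (options & OPTION_2) ? "yes" : "no"); // yes
    return 0;
}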
One example is if you have an (A)RGB color stored as a 32-bit integer and you want to extract the individual color components:
red = (rgb >> 16) & 0x000000ff;  // bits 16-23
green = (rgb >> 8) & 0x000000ff; // bits 8-15
blue = rgb & 0x000000ff;         // bits 0-7
Of course, as a high-level programmer you would normally prefer to use a library function to do this rather than fiddling with bits yourself, but the library might be implemented using bitwise operations.
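Going the other way, packing components back into a single 32-bit value is the mirror image, using shifts and ORs (a quick illustration, assuming each component already fits in 8 bits):
#include <cstdint>

// Combine 8-bit components into one 32-bit ARGB value.
uint32_t pack_argb(uint32_t alpha, uint32_t red, uint32_t green, uint32_t blue)
{
    return (alpha << 24) | (red << 16) | (green << 8) | blue;
}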