Infer 64-bit key from 48-bit round key in DES - cryptography

I have the 48-bit round key for the last round in DES. I would like to recover the 64-bit master key from it.
Is it possible? If so, can anyone provide an algorithm to do it?

Related

Key generation that is random, unique DB-wide and bounded

I have three main constraints for int8 key generation:
Keys need to be unpredictably random
    Number of rows created should not be calculable
    Order of row creation should not be calculable
Keys need to be unique across the entire db
    Future table merges should not require changing keys
    Future Table-Per-Class superclass additions should not require changing keys
Keys need to be bounded
    These keys will be base58 encoded in various places such as URLs
    Range will be narrower at first but should be adjustable in future to meet demand
I have heard of a few approaches to tackling at least some of these constraints:
DB-wide serial8
Does not meet the first constraint, but meets the second and third constraints.
DB-wide serial8 combined with a cipher
If the cipher key is leaked or discovered at any point then this becomes just DB-wide serial8, since the cipher key can't be changed without changing all the existing keys.
UUIDs
Does not meet the last constraint due to their 128-bit nature, but meets the first and second constraints.
Set default to floor(random() * n)
Does not meet the second constraint due to collision risk, but meets the first and last constraints.
Using random combined with a keys base table to keep track of all the keys
This does sort of meet all the constraints. However if a duplicate key is generated the insertion will fail. I am also concerned about any performance / locking issues.
If there is a nice way to make this re-roll until a non-colliding key is found, with good performance, then this would be an acceptable solution.
What is the best way to generate keys in PostgreSQL that meets the three criteria I listed?
If you expect to have no more than a billion identifiers, then generating random 64-bit numbers (so 2^64 possible values) may suit your purposes, as long as you have a way to check those numbers for uniqueness. They should be generated using a cryptographic random number generator (such as secrets.SystemRandom in Python or random_int in PHP).
With random numbers this long, the expected chance of producing a duplicate will be less than 50% after generating a billion random numbers (see "Birthday problem" for more precise formulas).
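A minimal sketch of that generate-and-re-roll idea, in Python since the answer mentions secrets.SystemRandom. The in-memory set here is only a stand-in for the keys base table (in PostgreSQL the same check would be a UNIQUE constraint with a retry on conflict), and new_key / existing_keys are hypothetical names, not an established API:

```python
import secrets

# Hypothetical stand-in for the "keys base table"; in PostgreSQL the same check
# would be a UNIQUE constraint, with the INSERT retried on conflict.
existing_keys = set()

def new_key(bits: int = 63, max_attempts: int = 10) -> int:
    """Draw a cryptographically random id and re-roll on collision.

    63 bits keeps the value inside PostgreSQL's signed int8 range."""
    for _ in range(max_attempts):
        candidate = secrets.randbits(bits)   # uniform in [0, 2**bits)
        if candidate not in existing_keys:
            existing_keys.add(candidate)
            return candidate
    raise RuntimeError("too many collisions; widen the key range")

# Birthday approximation: after a billion 64-bit keys, the chance that *any*
# collision has occurred is roughly (1e9)**2 / 2**65, i.e. under 3%,
# so re-rolls stay rare.
```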

How many rows can you have when using checksum as primary key?

How many rows can you have when using checksum as primary key?
CHECKSUM returns an int, so theoretically you can have 2^32 = 4294967296 unique values.
But in real life you'll never reach that number, as CHECKSUM can return the same result for different arguments.
For this reason you should never use a checksum result as a PK.
Don't use a checksum as the primary key, since checksums are not unique. Create a normal auto-increment PK plus a checksum column, and add an index on that column if you need it.
Here is why: hashes are subject to collisions. A collision is when two different inputs produce the same hash. It is unlikely to happen, but it can. For example, the CRC32 of the text "plumless" is exactly the same as that of "buckeroo". The same goes for "coding" vs "gnu".
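A quick way to check collision claims like these, using Python's standard zlib.crc32:

```python
import zlib

# Compare the CRC32 of each pair cited in the answer above.
pairs = [(b"plumless", b"buckeroo"), (b"coding", b"gnu")]
for a, b in pairs:
    ca, cb = zlib.crc32(a), zlib.crc32(b)
    print(f"{a!r}: {ca:#010x}  {b!r}: {cb:#010x}  collide: {ca == cb}")
```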
By the time you reach around 250,000 rows, the chance of a collision, and therefore of a duplicate PK, becomes considerably high.
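"Considerably high" can be quantified with the standard birthday approximation, assuming the checksum behaves like a uniform 32-bit hash:

```python
import math

def collision_probability(n_rows: int, hash_bits: int = 32) -> float:
    """Birthday approximation: P ≈ 1 - exp(-n^2 / 2^(bits+1))."""
    return 1.0 - math.exp(-n_rows ** 2 / 2 ** (hash_bits + 1))

print(collision_probability(77_000))    # ≈ 0.50 — even odds of a duplicate
print(collision_probability(250_000))   # ≈ 0.999 — a duplicate is near-certain
```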
Sources
http://preshing.com/20110504/hash-collision-probabilities/
http://www.lammertbies.nl/comm/info/crc-calculation.html
https://softwareengineering.stackexchange.com/questions/49550/which-hashing-algorithm-is-best-for-uniqueness-and-speed/145633#145633

How to use java object as a KEY in Redis?

Which is the best way to use a Java object as a key in Redis?
I was thinking of serializing the object (through FST or KRYO).
Is this the best way?
Should I use a hash?
Thanks
I don't think it's a good idea to use the object's serialized byte array as the key. Redis limits key size, and it hashes the key's bytes internally to look up the entry, so a large key only adds overhead. Use a small unique key that stands for the object instead; a smaller key size improves performance.
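One way to read that advice, sketched in Python rather than Java: derive a short, stable key from the object's identity, or, failing that, from a digest of its serialized form, and store the serialized object as the value. The redis_key_* helpers below are hypothetical naming conventions, not a Redis API:

```python
import hashlib
import pickle

def redis_key_for(obj_type: str, obj_id) -> str:
    # Preferred: a short, human-readable key built from the object's natural id,
    # e.g. "user:42". The serialized object goes in the *value*, not the key.
    return f"{obj_type}:{obj_id}"

def redis_key_from_object(obj_type: str, obj) -> str:
    # Fallback when there is no natural id: hash the serialized form so the key
    # stays small and fixed-length (assumes the serialization is deterministic).
    digest = hashlib.sha1(pickle.dumps(obj)).hexdigest()
    return f"{obj_type}:{digest}"
```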

SSL: If you use a 2048-bit RSA key, will the symmetric key that is negotiated also be larger?

I am using OpenSSL. I need to use a bigger RSA key (2048 bits). Is there any relation between the size of the RSA key and the size of the symmetric key (say, DES)? SSL doesn't appear to put any restriction on this.
The size of the symmetric key depends on the symmetric algorithm and is not directly related to the asymmetric key size. E.g., no matter what the length of the RSA key is, a DES key stays at 56 bits.
In SSL, that which is encrypted with RSA is the pre-master secret, a random string generated by the client which always has a length of 48 bytes. Then, the pre-master secret is derived (with the key derivation function known as the "PRF" in the SSL/TLS standard) into exactly as many bits as are required for whatever symmetric encryption algorithms will be used. Thus, there is no direct relation between the RSA key size and the symmetric encryption key size.
A 48-byte pre-master can be encrypted with any RSA key of length 472 bits or more, so no problem here.
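The 472-bit figure follows from the 11-byte minimum overhead of RSA PKCS#1 v1.5 encryption padding; a quick arithmetic check:

```python
PREMASTER_BYTES = 48      # fixed length of the SSL/TLS pre-master secret
PKCS1_V15_OVERHEAD = 11   # minimum padding overhead for RSA PKCS#1 v1.5 encryption

min_modulus_bytes = PREMASTER_BYTES + PKCS1_V15_OVERHEAD   # 59 bytes
print(min_modulus_bytes * 8)                               # 472 bits
```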
No, there's no restriction - you can select both the session key and the RSA key separately depending on what level of protection you need. Of course there will be some "recommended" relation between keys lengths, but the choice is up to you. That relation might change if at some point a minor weakness is found in RSA and you decide you need a considerably longer key - that weakness will likely not affect the symmetric algorithm and so the keylength for the latter may be kept unchanged.
The asymmetric key is used to send the symmetric key by encrypting it. The two operations are thus independent of each other, and an increase in the bit size of one will not impact the size of the other.

For a primary key of an integral type, why is it important to avoid gaps?

I am generating a surrogate key for a table and, due to my hi/lo algorithm, gaps may appear every time you reboot/restart the machine.
T1: current hi = 10000000
(sequence being dished out .. 1 to 100)
Assume that current sequence is 10000050
T2: restart system.
T3: System gives out the next_hi as 10000100
(sequence being dished out now ranges from 101 to 200)
T4: Next request for a key returns 10000101
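To make the gap concrete, here is a toy sketch of the hi/lo allocation in the timeline above, assuming a block size of 100 and that only next_hi survives a restart; the HiLoGenerator class is hypothetical:

```python
class HiLoGenerator:
    """Toy hi/lo key generator: each restart grabs a fresh block of 100 ids,
    so any ids left unused in the previous block become a permanent gap."""

    BLOCK_SIZE = 100

    def __init__(self, persisted_next_hi: int):
        self.hi = persisted_next_hi   # e.g. 10000000, then 10000100 after restart
        self.lo = 0

    def next_key(self) -> int:
        if self.lo >= self.BLOCK_SIZE:
            self.hi += self.BLOCK_SIZE   # reserve (and persist) the next block
            self.lo = 0
        self.lo += 1
        return self.hi + self.lo

# Before the restart: keys 10000001 .. 10000050 were handed out.
gen = HiLoGenerator(10000000)
for _ in range(50):
    gen.next_key()

# Restart: the persisted next_hi is 10000100, so 10000051 .. 10000100 are skipped.
gen = HiLoGenerator(10000100)
print(gen.next_key())   # 10000101 — the gap is permanent
```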
From a primary key or indexing internals perspective, why is it important that there be no gaps in the sequences ? I'm asking this for a deeper understanding of mysql specifically.
From a primary key or indexing internals perspective, why is it important that there be no gaps in the sequences?
It's not important - what led you to believe it was?
All that matters with the primary key is that it is unique across all the data in the table. It doesn't matter what the value is, or whether the records before and after have sequential values.
why is it important that there be no gaps in the sequences
It is not important. Gaps are fine. For performance reasons, gaps are tolerated.
What would be useful is a guarantee of a strictly increasing sequence (i.e. the sequence has the same ordering as the row creation time). But even that is not guaranteed in a clustered configuration (with local counter caches).
Who told you this? A surrogate key has no meaning at all, so there can't be any gap. What is a gap in something that has no meaning? We use UUIDs for our keys, something like this:
6ba7b812-9dad-11d1-80b4-00c04fd430c8. What would be the "next" key? Nobody knows, nobody cares. As long as it is unique, it's fine.