Why are RSA keys in GnuPG limited to 4096 bits?
Would it be illegal for me to modify the source to increase the max size?
ssh-keygen does not have this limitation (e.g., I can create a key that's 32768 bits long). Why is that?
At keylength.com there is this:
To protect a 256-bit symmetric key (e.g. AES-256), you may consider using at the minimum a 17120-bit asymmetric system (e.g. RSA).
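For context, figures like that come from RFC 3766-style extrapolations of the GNFS factoring work factor. Below is a rough, uncalibrated Ruby sketch of that calculation; it won't reproduce keylength.com's exact 17120-bit figure, which relies on RFC 3766's fitted constants, but it shows how steeply the required modulus grows with the target symmetric strength.
# Rough GNFS work-factor estimate of the kind behind RFC 3766 /
# keylength.com tables. Uncalibrated: RFC 3766 fits its constants to
# factoring records, so its published thresholds differ from this
# back-of-the-envelope version.
def gnfs_strength_bits(modulus_bits)
  ln_n = modulus_bits * Math.log(2)          # ln of the modulus
  c = (64.0 / 9.0) ** (1.0 / 3.0)            # ~1.923
  work = c * ln_n ** (1.0 / 3.0) * Math.log(ln_n) ** (2.0 / 3.0)
  work / Math.log(2)                         # express the work factor in bits
end

[4096, 8192, 15360].each do |bits|
  printf("%5d-bit RSA ~ %.0f-bit symmetric\n", bits, gnfs_strength_bits(bits))
end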
The 4096-bit limit can be raised as described in a short article entitled "Generate large keys with GnuPG", reproduced below. This was done for the gnupg package in Homebrew to allow for 8192-bit keys: PR 4201. A word of caution about memory allocation for the larger keys: comp.security.pgp.tech.
Generate large keys with GnuPG | David Norman
If you'd like to generate keys larger than 4096 bits with GnuPG, you can compile
a new version that raises the 4096-bit upper limit. You'll probably find
yourself generating such a key as RSA. Download the patch to your untarred
gnupg-1.4.19 directory and apply it with:
usbdrive@sandisk-extreme64:~/gnupg-1.4.19$ patch -p0 < gnupg_1.4.19_large_keygen.patch
patching file g10/keygen.c
usbdrive@sandisk-extreme64:~/gnupg-1.4.19$ ./configure --enable-large-secmem
[...]
checking whether to allocate extra secure memory... yes
[...]
usbdrive@sandisk-extreme64:~/gnupg-1.4.19$ make -j2
usbdrive@sandisk-extreme64:~/gnupg-1.4.19$ make check
usbdrive@sandisk-extreme64:~/gnupg-1.4.19$ sudo make install
usbdrive@sandisk-extreme64:~/gnupg-1.4.19$ gpg --gen-key --enable-large-rsa
Without the --enable-large-rsa flag, the key generation process will
automatically downgrade the key to 4096.
To compile on a Mac, you'll need to download Xcode from the App Store first. The
patch increases the upper limit of the key size to 15489 bits. Without
increasing the secure memory limit, generating a key larger than about 7680 bits
will fail because the process won't be able to allocate enough secure memory.
Generating keys larger than around 7680 bits (192-bit symmetric equivalent) can
also make it impossible to decrypt messages when the standard secure memory
limits are set at compile time, because the gpg binary won't be able to allocate
enough secure memory, even for small messages.
gnupg_1.4.19_xlarge_key_gen.patch
--- g10/keygen.c 2015-02-26 12:24:21.000000000 -0500
+++ g10/keygen.c 2015-03-02 22:12:09.028419377 -0500
@@ -1041,8 +1041,9 @@
nbits = 2048;
log_info(_("keysize invalid; using %u bits\n"), nbits );
}
- else if (nbits > 4096) {
- nbits = 4096;
+ else if (nbits > 15489) {
+ /* fallback to RFC3766 256-bit symmetric equivalency */
+ nbits = 15489;
log_info(_("keysize invalid; using %u bits\n"), nbits );
}
@@ -1251,7 +1252,8 @@
PKT_public_key *pk;
MPI skey[6];
MPI *factors;
- const unsigned maxsize = (opt.flags.large_rsa ? 8192 : 4096);
+ /* New large key limit RFC3766 256-bit symmetric equivalency */
+ const unsigned maxsize = (opt.flags.large_rsa ? 15489 : 4096);
assert( is_RSA(algo) );
@@ -1578,7 +1580,7 @@
static unsigned int
ask_keysize (int algo, unsigned int primary_keysize)
{
- unsigned nbits, min, def=2048, max=4096;
+ unsigned nbits, min, def=2048, max=15489;
int for_subkey = !!primary_keysize;
int autocomp = 0;
gnupg_1.4.19_xlarge_secmem.patch
--- configure 2015-02-27 03:37:52.000000000 -0500
+++ configure 2015-03-02 22:28:31.488401783 -0500
@@ -5076,7 +5076,7 @@
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $large_secmem" >&5
$as_echo "$large_secmem" >&6; }
if test "$large_secmem" = yes ; then
- SECMEM_BUFFER_SIZE=65536
+ SECMEM_BUFFER_SIZE=131072
else
SECMEM_BUFFER_SIZE=32768
fi
Article ends. Retrieved on 2016-02-26 from an archived copy of the original.
There is a pretty sensible explanation (to a similar question) by Fire Ant at Security Forums:
http://www.security-forums.com/viewtopic.php?p=317962#317962
All rights reserved there, but a fair-use citation of a short excerpt shouldn't be inappropriate, methinks:
Key sizes over 4096 are not currently supported in GPG. The reason for this is that 8192-bit keys are very slow. If you require a key greater than 4096 bits, then you should really think about what you are using that key for.
Why does it not support anything larger? Perhaps at the time GnuPG was developed (and RSA key sizes along with it), given the computing limits of the day, both for client-side use cases and for the potential of government agencies to break a 4096-bit key, developers and cryptographers felt a 4096-bit key was large enough. Really, 4096 bits is a very large key and would take a very long time to break with current technology. If a government agency were after you and really wanted your messages, they would (in the US) get a court order to put a rootkit on your machines rather than worry about breaking encryption.
Now, I haven't looked at the source code specifically, but if you change the key size to anything larger than 4096, you might have problems with other users using your key if their software doesn't support a larger key size. For example, I have a 4096 key, a friend of mine cannot send me messages from an Android device because he cannot find an application that supports anything larger than 3072! Keep that in mind.
If you want to modify the source code, you can do so easily on Debian Linux with this script:
Raising GnuPG key size limits and making ideal .conf files.
Here is a link to a bash script that increases the GnuPG key size limit beyond 4096 bits. The page also provides an ideal GnuPG .conf file.
https://gist.github.com/anonymous/3d928a0bcbb3ed92c454
Please provide input and recommended changes.
Related
I'm new to Redis. How can I get the memory footprint of a specific key in redis?
db0
1) "unacked_mutex"
2) "_kombu.binding.celery"
3) "_kombu.binding.celery.pidbox"
4) "_kombu.binding.celeryev"
I just want to get the memory footprint of one specific key like "_kombu.binding.celery", or of one specific db like db0. How can I get it?
redis_version: 2.8.17
Any commentary is very welcome. Thanks a lot.
You are running a very old version of redis. The MEMORY command is not available in that version, so there is no precise way of getting at this information. However, you can approximate this information using the DUMP command.
Simply call DUMP _kombu.binding.celery and save the results to a file. The result is some characters and escape sequences. When you load this file into an environment like node, you can look at the length of the string and multiply by 2 to get the number of bytes. This is not precise, but it will give you a generally close approximation.
Here's what you could do:
in Redis:
$ redis-cli
127.0.0.1:6379> hset c 123 456
(integer) 0
127.0.0.1:6379> dump c
"\r\x12\x12\x00\x00\x00\r\x00\x00\x00\x02\x00\x00\xfe{\x03\xc0\xc8\x01\xff\t\x00\x10\xd4L \x908\x8b2"
in Node:
$ node
> a="\r\x12\x12\x00\x00\x00\r\x00\x00\x00\x02\x00\x00\xfe{\x03\xc0\xc8\x01\xff\t\x00\x10\xd4L \x908\x8b2"
'\r\u0012\u0012\u0000\u0000\u0000\r\u0000\u0000\u0000\u0002\u0000\u0000þ{\u0003ÀÈ\u0001ÿ\t\u0000\u0010ÔL 82'
> a.length
30
This is close to half of the actual amount that redis provides with MEMORY USAGE:
127.0.0.1:6379> MEMORY USAGE c
(integer) 63
MEMORY USAGE _kombu.binding.celery would give you the number of bytes that a key and value require to be stored in RAM.
Here is the doc for the command.
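If you'd rather script the approximation than eyeball string lengths in node, here is a minimal Ruby sketch along the same lines (assuming the redis-rb gem; the key name is taken from the question). Remember that DUMP's serialized form undercounts real RAM usage, as shown above.
require 'redis'

# Approximate a key's footprint from the length of its serialized form.
# DUMP output excludes key-name overhead, pointers and allocator padding,
# so treat the result as a rough lower-bound estimate.
redis = Redis.new(host: '127.0.0.1', port: 6379)
key = '_kombu.binding.celery'

serialized = redis.dump(key)
if serialized
  puts "#{key}: ~#{serialized.bytesize} bytes (serialized)"
else
  puts "#{key}: no such key"
end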
I have a dataset on server1 that I want to back up to a second server, server2.
Server1 (original):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused storage/iscsi/webhost-old produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
storage/iscsi/webhost-old 67,8G 1,87T 67,8G Út kvě 31 6:54 2016 67,8G 16K - lz4 1.00x 1.00x - - 67,4G
Sending volume to the 2nd server:
zfs send storage/iscsi/webhost-old | pv | ssh -c arcfour,aes128-gcm@openssh.com root@10.0.0.2 zfs receive -Fduv pool/bkp-storage
received 69,6GB stream in 378 seconds (189MB/sec)
Server2 zfs list produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
pool/bkp-storage/iscsi/webhost-old 36,1G 3,01T 36,1G Pá pro 29 10:25 2017 36,1G 0 - lz4 1.15x 1.15x - - 28,4G
Why is there such a difference in sizes? Thanks.
From what you posted, I noticed 3 things that seemed odd:
the compressratio is 1.15x on system 2, but 1.00x on system 1
on system 2, used is 1.27x higher than logicalused
the logicalused and the number zfs receive reports are ~2.3x higher on system 1 than on system 2
These terms are all defined in the man page, but are still confusing to reverse-engineer explanations for in practice.
(1) could happen if you enabled compression on the source dataset after you wrote all the data to it, since ZFS doesn't rewrite the data to compress it when you enable that setting. The data sent by zfs send is uncompressed unless you use -c, but system 2 will try to compress it as it runs zfs receive if the setting is enabled on the destination dataset. If both system 1 and system 2 had the same compression settings before the data was written, they would have the same compressratio as well.
(2) can happen due to metadata written along with your data, but in this case it's too high for "normal" metadata, which accounts for 1-2% of most pools. It's probably caused by a pool-wide setting, like configuring RAID-Z, or a weird combination of striping and mirroring (like 4 stripes, but with one of them being a mirror).
For (3), I re-read the man page to try to figure it out:
logicalused
The amount of space that is "logically" consumed by this dataset and
all its descendents. See the used property. The logical space
ignores the effect of the compression and copies properties, giving a
quantity closer to the amount of data that applications see.
If you were sending a dataset (instead of a single iSCSI volume) and the send size matched system 2's logicalused value (instead of system 1's), I would guess you forgot to send some child datasets (i.e. by using zfs send -R). However, neither of those are true in this case.
I had to do some additional digging -- this blog post from 2005 might contain the explanation. If system 1 didn't have compression enabled when the data was written (like I guessed above for (1)), the function responsible for not writing zeroed-out blocks (zio_compress_data) would not be run, so you probably have a bunch of empty blocks written to disk, and accounted for in the logicalused size. However, since lz4 is configured on system 2, it would run there, and those blocks would not be counted.
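ZFS uses lz4 here rather than zlib, and it additionally detects all-zero blocks outright, but the underlying effect, that zeroed blocks compress to almost nothing while ordinary data doesn't, is easy to demonstrate as an analogy with Ruby's stdlib zlib:
require 'zlib'

# A 128 KiB block of zeros vs. 128 KiB of incompressible noise.
zeros = "\x00" * (128 * 1024)
noise = Random.new(42).bytes(128 * 1024)

puts Zlib::Deflate.deflate(zeros).bytesize   # a few hundred bytes at most
puts Zlib::Deflate.deflate(noise).bytesize   # stays close to 131072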
For my task I need to load a bulk of data into Redis as soon as possible. It looks like this article is right about my case: https://redis.io/topics/mass-insert
The article starts from giving an example of using multiple inline SET commands with redis-cli. Then they proceed to generating Redis protocol and again use it with redis-cli. They don't explain the reasons or benefits of using Redis protocol.
Using the Redis protocol is a bit harder, and it generates a bit more traffic. I wonder: what are the reasons to use the Redis protocol rather than simple one-line commands? Probably, despite the data being larger, it is easier (and faster) for Redis to parse?
Good point.
Only a small percentage of clients support non-blocking I/O, and not
all the clients are able to parse the replies in an efficient way in
order to maximize throughput. For all this reasons the preferred way
to mass import data into Redis is to generate a text file containing
the Redis protocol, in raw format, in order to call the commands
needed to insert the required data.
What I understood is that you emulate a client when you use Redis protocol directly, which would benefit from the highlighted points.
Based on the docs you provided, I tried these scripts:
test.rb
# Emit each command in RESP (Redis serialization protocol) framing:
# "*<arg count>\r\n", then "$<byte length>\r\n<arg>\r\n" per argument.
def gen_redis_proto(*cmd)
  proto = ""
  proto << "*" + cmd.length.to_s + "\r\n"
  cmd.each { |arg|
    proto << "$" + arg.to_s.bytesize.to_s + "\r\n"
    proto << arg.to_s + "\r\n"
  }
  proto
end

(0...100000).each { |n|
  STDOUT.write(gen_redis_proto("SET", "Key#{n}", "Value#{n}"))
}
test_no_protocol.rb
(0...100000).each { |n|
  STDOUT.write("SET Key#{n} Value#{n}\r\n")
}
ruby test.rb > 100k.txt
ruby test_no_protocol.rb > 100k_no_prot.txt
time cat 100k.txt | redis-cli --pipe
time cat 100k_no_prot.txt | redis-cli --pipe
I've got these results:
teixeira: ~/stackoverflow $ time cat 100k.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.168s
user 0m0.025s
sys 0m0.015s
(5 file(s), 6.6 MB)
teixeira: ~/stackoverflow $ time cat 100k_no_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.433s
user 0m0.026s
sys 0m0.012s
I am reworking the programmer for the Olimex iCE40HX1K board (targeted towards an STM32F103 MCU), where I would also like to implement the "SPI Slave" mode to configure an image directly into RAM without using the serial flash.
Looking at the Lattice "Programming and Configuration Guide" (page 11), table 8 notes that an EPROM for an iCE40-LP/HX1K must be at least 34112 bytes (which, I guess, means that configuration files can be up to that size).
However, all images I have (so far) created with the icestorm tools are 32220 octets.
I am a bit puzzled here.
Can somebody explain the difference between these two figures?
Does the HX1K need a configuration-file of 32220 or 34112 bytes?
I don't know how Lattice arrived at this number. A complete HX1K bin file with BRAM initialization but without comment and without multiboot header is 32220 bytes in size. The (optional) multiboot header would add another 160 bytes (32220 + 160 = 32380). The Lattice tools usually add about 80 bytes to the comment field (32220 + 80 = 32300). Whatever I do, all numbers I have are more than 1000 short of 34112.
I don't know if there is a maximum length for the comment. Maybe there is and 34112 is the size of a bit stream with a comment of maximum length?
34112 - 32220 = 1892. Maybe someone decided to add 8kB (8192 bytes) just in case, but that person accidentally swapped the first two digits? Idk..
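Laying those candidate sums out (all numbers from the paragraphs above):
base = 32220            # bare HX1K bitstream with BRAM init, no comment
puts base + 160         # plus multiboot header    => 32380
puts base + 160 + 80    # plus a typical comment   => 32460
puts 34112 - base       # gap to Lattice's figure  => 1892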
If you don't care about comments or multiboot headers, then iCE40 1K bit-streams have a fixed size, and that size is 32220 bytes.
If I run the openssl command line in HMAC mode (as below), is the key used directly, or is it hashed before being used as the HMAC key?
echo "foo" | openssl dgst -sha256 -binary -hmac "test" | openssl base64
Similarly, when encrypting a file with openssl (as below), is the passphrase hashed with the salt? (If so, how is it done? A pointer to the right source file would be even better.)
openssl enc -salt
The hmac option does not use salting or hashing; it just uses the passphrase directly as the key. See apps/dgst.c in the source distribution:
else if (!strcmp(*argv,"-hmac"))
    {
    if (--argc < 1)
        break;
    hmac_key=*++argv;
    }
...
if (hmac_key)
    {
    sigkey = EVP_PKEY_new_mac_key(EVP_PKEY_HMAC, e,
                                  (unsigned char *)hmac_key, -1);
    if (!sigkey)
        goto end;
    }
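You can convince yourself of this by reproducing the CLI output with HMAC-SHA256 keyed directly by the string "test". A short Ruby sketch using the stdlib openssl bindings (note that echo appends a newline to "foo"):
require 'openssl'
require 'base64'

data = "foo\n"   # echo "foo" emits a trailing newline
mac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA256'), 'test', data)
puts Base64.strict_encode64(mac)   # same value the openssl pipeline prints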
The enc command does seem to use some form of salting, at least in some cases. The relevant source file is apps/enc.c, but seems to come with some caveats:
/* Note that str is NULL if a key was passed on the command
* line, so we get no salt in that case. Is this a bug?
*/
if (str != NULL)
{
/* Salt handling: if encrypting generate a salt and
* write to output BIO. If decrypting read salt from
* input BIO.
*/
It then uses the function EVP_BytesToKey (in crypto/evp/evp_key.c) to derive the key from the passphrase and salt. This function implements a non-standard algorithm, which looked perhaps plausibly OK at a very brief glance, but I couldn't attest to it beyond that.
Source snippets and comments are all from the OpenSSL 1.0.0 release.
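For the curious, the derivation in EVP_BytesToKey is short enough to sketch. Here is a hedged Ruby re-implementation of the count=1, MD5 variant that openssl enc used by default in the 1.0.x era (Ruby's OpenSSL::Cipher#pkcs5_keyivgen wraps the same C function); the passphrase and salt are just example values:
require 'openssl'

# EVP_BytesToKey with count = 1 and MD5:
#   D_1 = MD5(password || salt), D_i = MD5(D_{i-1} || password || salt),
# and key || iv is read off the concatenation D_1 || D_2 || ...
def evp_bytes_to_key(password, salt, key_len, iv_len)
  bytes = ''.b
  prev = ''.b
  while bytes.bytesize < key_len + iv_len
    prev = OpenSSL::Digest::MD5.digest(prev + password + salt)
    bytes << prev
  end
  [bytes.byteslice(0, key_len), bytes.byteslice(key_len, iv_len)]
end

key, iv = evp_bytes_to_key('secret', "\x00\x01\x02\x03\x04\x05\x06\x07".b, 32, 16)
puts "key: #{key.unpack1('H*')}"   # 32 bytes, e.g. for aes-256-cbc
puts "iv:  #{iv.unpack1('H*')}"    # 16 bytes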