Where can I find an industry-accepted secure pseudo-random number generator for Objective-C? Is there one built in to the OS X SDK?
My question is basically the same as this one, except that I'm looking for a secure PRNG.
EDIT:
Thanks everyone for the help. Here's a simple one-liner to implement the /dev/random method:
- (NSData *)getRandomBytes:(NSUInteger)length {
    return [[NSFileHandle fileHandleForReadingAtPath:@"/dev/random"] readDataOfLength:length];
}
Security.framework has a facility for doing this, called SecRandomCopyBytes(), although it's really just copying bytes from /dev/random.
You can use
int SecRandomCopyBytes(
    SecRandomRef rnd,
    size_t count,
    uint8_t *bytes
);
This function is declared in the Security framework's Security/Security.h header.
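For example, a minimal sketch of filling a buffer with secure random bytes (the function returns 0 on success and -1 on failure with errno set; error handling is up to you):

#include <Security/Security.h> // link against Security.framework

uint8_t buffer[16];
if (SecRandomCopyBytes(kSecRandomDefault, sizeof(buffer), buffer) != 0) {
    // handle the (rare) failure, e.g. by inspecting errno
}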
This function reads from /dev/random to obtain an array of cryptographically-secure random bytes.
/dev/random is a blocking interface that only returns as much random data as the system possesses at any particular time. Once the system runs out of randomness, no more can be fetched until more is generated (by listening to the user bang on the keyboard, or however the OS gathers true randomness). /dev/urandom is a non-blocking interface that always returns the requested amount of data, by using a pseudorandom generator even when true randomness is exhausted.

OS X has both of these as well; however, they both act like /dev/urandom does on Linux.
See also: Random Numbers (Apple documentation), the random(4) Mac OS X manual page, and How good is SecRandomCopyBytes?
For fun, I'm writing a bignum library in Rust. My goal (as with most bignum libraries) is to make it as efficient as I can. I'd like it to be efficient even on unusual architectures.
It seems intuitive to me that a CPU will perform arithmetic faster on integers with the native number of bits for the architecture (i.e., u64 for 64-bit machines, u16 for 16-bit machines, etc.). As such, since I want to create a library that is efficient on all architectures, I need to take the target architecture's native integer size into account. The obvious way to do this would be to use the cfg attribute target_pointer_width. For instance, to define the smallest type which will always be able to hold more than the maximum native int size:
#[cfg(target_pointer_width = "16")]
type LargeInt = u32;
#[cfg(target_pointer_width = "32")]
type LargeInt = u64;
#[cfg(target_pointer_width = "64")]
type LargeInt = u128;
However, while looking into this, I came across this comment. It gives an example of an architecture where the native int size is different from the pointer width. Thus, my solution will not work for all architectures. Another potential solution would be to write a build script which codegens a small module defining LargeInt based on the size of a usize (which we can acquire with std::mem::size_of::<usize>()). However, this has the same problem as above, since usize is based on the pointer width as well. A final obvious solution is to simply keep a map of native int sizes for each architecture. However, this solution is inelegant and doesn't scale well, so I'd like to avoid it.
So, my questions: is there a way to find the target's native int size, preferably before compilation, in order to reduce runtime overhead? Is this effort even worth it? That is, is there likely to be a significant difference between using the native int size as opposed to the pointer width?
It's generally hard (or impossible) to get compilers to emit optimal code for BigNum stuff, which is why https://gmplib.org/ has its low-level primitive functions (mpn_... docs) hand-written in assembly for various target architectures, with tuning for different micro-architectures, e.g. https://gmplib.org/repo/gmp/file/tip/mpn/x86_64/core2/mul_basecase.asm for the general case of multi-limb * multi-limb numbers, and https://gmplib.org/repo/gmp/file/tip/mpn/x86_64/coreisbr/aors_n.asm for mpn_add_n and mpn_sub_n (Add OR Sub = aors), tuned for the Sandy Bridge family, which doesn't have partial-flag stalls and can therefore loop with dec/jnz.
Understanding what kind of asm is optimal may be helpful when writing code in a higher-level language, although in practice you can't even get close to that, so it sometimes makes sense to use a different technique, like only using values up to 2^30 in 32-bit integers (like CPython does internally, getting the carry-out via a right shift; see the section about Python in this). In Rust you do have access to overflowing_add to get the carry-out, but using it well is still hard.
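For illustration, here is a minimal C sketch of that CPython-style representation: 30 value bits per 32-bit limb, so the carry-out falls out of a plain right shift. The names and signature are mine, not CPython's:

#include <stddef.h>
#include <stdint.h>

#define LIMB_BITS 30
#define LIMB_MASK ((1u << LIMB_BITS) - 1)

// Add two little-endian arrays of 30-bit limbs; returns the final carry.
static uint32_t add_limbs_30(uint32_t *dst, const uint32_t *a,
                             const uint32_t *b, size_t n) {
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t sum = a[i] + b[i] + carry; // can't overflow 32 bits
        dst[i] = sum & LIMB_MASK;           // low 30 bits are the result limb
        carry = sum >> LIMB_BITS;           // carry-out is 0 or 1, no flags needed
    }
    return carry;
}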
For practical use, writing Rust bindings for GMP is probably your best bet, unless that already exists.
Using the largest chunks possible is very good; on all current CPUs, add reg64, reg64 has the same throughput and latency as add reg32, reg32 or add reg8, reg8. So you get twice as much work done per instruction, and carry propagation through 64 bits of result still takes only 1 cycle of latency.
(There are alternate ways to store BigInteger data that can make SIMD useful; @Mysticial explains in Can long integer routines benefit from SSE?, e.g. 30 value bits per 32-bit int, allowing you to defer normalization until after a few addition steps. But every use of such numbers has to be aware of these issues, so it's not an easy drop-in replacement.)
In Rust, you probably want to just use u64 regardless of the target, unless you really care about small-number (single-limb) performance on 32-bit targets. Let the compiler build u64 operations for you out of add / adc (add with carry).
The only thing that might need to be ISA-specific is if u128 is not available on some targets. You want to use 64 * 64 => 128-bit full multiply as your building block for multiplication; if the compiler can do that for you with u128 then that's great, especially if it inlines efficiently.
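As a C analogue of what u128 buys you in Rust, here is a minimal sketch of that 64 * 64 => 128-bit building block, using the unsigned __int128 extension available in GCC/Clang (the helper name is mine):

#include <stdint.h>

// One widening multiply; the compiler lowers this to mul/umulh or equivalent.
static void mul_full_64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
    unsigned __int128 p = (unsigned __int128)a * b;
    *lo = (uint64_t)p;          // low 64 bits of the product
    *hi = (uint64_t)(p >> 64);  // high 64 bits of the product
}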
See also discussion in comments under the question.
One stumbling block for getting compilers to emit efficient BigInt addition loops (even inside the body of one unrolled loop) is writing an add that takes a carry input and produces a carry output. Note that x += 0xff..ff with carry=1 needs to produce a carry-out even though 0xff..ff + 1 wraps to zero. So in C or Rust, x += y + carry has to check for a carry-out in both the y + carry and the x += parts.

It's really hard (probably impossible) to convince compiler back-ends like LLVM to emit a chain of adc instructions. An add/adc pair is doable when you don't need the carry-out from the adc, or probably when the compiler is doing it for you for u128::overflowing_add.

Often compilers will turn the carry flag into a 0 / 1 in a register instead of using adc. You can hopefully avoid that for at least pairs of u64 in an addition by combining the input u64 values into a u128 for u128::overflowing_add. That will hopefully not cost any asm instructions, because a u128 already has to be stored across two separate 64-bit registers, just like two separate u64 values.
So combining up to u128 could just be a local optimization for a function that adds arrays of u64 elements, to get the compiler to suck less.
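To make that concrete, here is a minimal C sketch (names are mine) of adding two equal-length little-endian limb arrays through a 128-bit intermediate, the same trick as pairing u64 values into a u128:

#include <stddef.h>
#include <stdint.h>

// Returns the final carry-out (0 or 1) of dst = a + b over n limbs.
static uint64_t add_limbs(uint64_t *dst, const uint64_t *a,
                          const uint64_t *b, size_t n) {
    unsigned __int128 carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned __int128 sum = (unsigned __int128)a[i] + b[i] + carry;
        dst[i] = (uint64_t)sum; // low 64 bits become the result limb
        carry = sum >> 64;      // carry-out of this limb, 0 or 1
    }
    return (uint64_t)carry;
}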
In my library ibig, what I do is:

1. Select an architecture-specific size based on target_arch.
2. If I don't have a value for an architecture, select 16, 32 or 64 based on target_pointer_width.
3. If target_pointer_width is not one of these values, use 64.
I'm looking for a function that can quickly convert an array of uint8 values to int32 values (keeping the same number of elements).
There is already such a function to convert uint8 to double in vDSP library:
vDSP_vfltu8D
How can an analogous function be implemented in Objective-C (iOS, ARM architecture)? Pure C solutions are accepted too.
In that case, based on the comments above:
ARM's NEON SIMD/vector instruction set is what you're looking for, but I'm not 100% sure it's supported on iOS. Even if it were, I wouldn't recommend it. You've got a 64-bit architecture on iOS, meaning you would only be able to DOUBLE the speed of your process (because you're converting to int32s).

Now, that is if there were a single instruction that could do this. There isn't. There are a few instructions that, when used in succession, would allow you to load the uint8s into a 64-bit register, shift them and zero out the other bytes, and then store those as int32s. They will have more overhead because it takes several operations to do it.

If you really want to look into the instructions available, check them out here (again, not sure if they're supported on iOS): http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0489e/CJAJIIGG.html
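If you did want to try it anyway, a hedged sketch of that load/widen/store sequence with NEON intrinsics might look like this (assuming arm_neon.h is available on your target and that length is a multiple of 8; the function name is mine):

#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

static void widen_u8_to_s32(const uint8_t *src, int32_t *dst, size_t length) {
    for (size_t i = 0; i < length; i += 8) {
        uint8x8_t  bytes  = vld1_u8(src + i); // load 8 x u8
        uint16x8_t halves = vmovl_u8(bytes);  // zero-extend to 8 x u16
        uint32x4_t lo = vmovl_u16(vget_low_u16(halves));  // first 4 x u32
        uint32x4_t hi = vmovl_u16(vget_high_u16(halves)); // last 4 x u32
        vst1q_s32(dst + i,     vreinterpretq_s32_u32(lo));
        vst1q_s32(dst + i + 4, vreinterpretq_s32_u32(hi));
    }
}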
The iOS architecture isn't really built for this kind of processing. Vector instructions in most cases only become useful when a computer has 256-bit registers, allowing you to load in 32 bytes at a time and operate on them simultaneously. I would recommend you go with the normal approach of converting one at a time in a loop (or maybe unroll the loop to remove a bit of overhead), like so:
// assumes lengthOfArray is a multiple of 4
for (int i = 0; i < lengthOfArray; i += 4) {
    int32Array[i]     = (int32_t)uint8Array[i];
    int32Array[i + 1] = (int32_t)uint8Array[i + 1];
    int32Array[i + 2] = (int32_t)uint8Array[i + 2];
    int32Array[i + 3] = (int32_t)uint8Array[i + 3];
}
While it's a small optimization, it removes three quarters of the looping overhead. It won't do much, but hey, it's something.
Source: I worked on Intel's SIMD/Vector team, converting C functions to optimize on 256-bit registers. Some things just couldn't be done efficiently, unfortunately.
I use AES128 crypto in CTR mode for encryption, implemented for different clients (Android/Java and iOS/ObjC). The 16-byte IV used when encrypting a packet is formatted like this:
<11 byte nonce> | <4 byte packet counter> | 0
The packet counter (included in a sent packet) is increased by one for every packet sent. The last byte is used as a block counter, so that packets with fewer than 256 blocks always get a unique counter value. I was under the assumption that CTR mode specified that the counter should be increased by 1 for each block, using the last 8 bytes as a big-endian counter, or that this was at least a de facto standard. This also seems to be the case in the Sun crypto implementation.
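For concreteness, a minimal C sketch of building that IV layout (the helper name is mine; the packet counter is written big-endian per the description above):

#include <stdint.h>
#include <string.h>

static void build_iv(uint8_t iv[16], const uint8_t nonce[11],
                     uint32_t packetCounter) {
    memcpy(iv, nonce, 11);                   // 11-byte nonce
    iv[11] = (uint8_t)(packetCounter >> 24); // 4-byte big-endian packet counter
    iv[12] = (uint8_t)(packetCounter >> 16);
    iv[13] = (uint8_t)(packetCounter >> 8);
    iv[14] = (uint8_t)packetCounter;
    iv[15] = 0;                              // per-packet block counter
}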
I was a bit surprised when the corresponding iOS implementation (using CommonCryptor, iOS 5.1) failed to decode every block except the first when decoding a packet. It seems that CommonCryptor defines the counter in some other way. A CommonCryptor can be created in both big-endian and little-endian mode, but some vague comments in the CommonCryptor code indicate that this is not (or at least has not been) fully supported:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-60026/Source/API/CommonCryptor.c
/* corecrypto only implements CTR_BE. No use of CTR_LE was found so we're marking
this as unimplemented for now. Also in Lion this was defined in reverse order.
See <rdar://problem/10306112> */
By decoding block by block, each time setting the IV as specified above, it works nicely.
My question: is there a "right" way of implementing the CTR/IV mode when decoding multiple blocks in a single go, or can I expect interoperability problems when using different crypto libs? Is CommonCrypto bugged in this regard, or is it just a question of implementing the CTR mode differently?
The definition of the counter is (loosely) specified in NIST recommendation sp800-38a Appendix B. Note that NIST only specifies how to use CTR mode with regards to security; it does not define one standard algorithm for the counter.
To answer your question directly, whatever you do you should expect the counter to be incremented by one each time. The counter should represent a 128 bit big endian integer according to the NIST specifications. It may be that only the least significant (rightmost) bits are incremented, but that will usually not make a difference unless you pass the 2^32 - 1 or 2^64 - 1 value.
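A minimal sketch of that big-endian increment over the full 16-byte block; how far the carry is allowed to ripple is exactly the implementation-specific part:

#include <stdint.h>

static void ctr_increment_be(uint8_t counter[16]) {
    for (int i = 15; i >= 0; i--) {
        if (++counter[i] != 0) // stop once a byte doesn't wrap to zero
            break;             // a wrapped byte carries into the next one
    }
}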
For the sake of compatibility you could decide to use the first (leftmost) 12 bytes as a random nonce and set the remaining bytes to zero, then let the CTR implementation do the increments. In that case you simply use a 96-bit / 12-byte random value at the start, and there is no need for a packet counter.
You are however limited to 2^32 * 16 bytes of plaintext until the counter uses up all the available bits. It is implementation-specific whether the counter returns to zero or the nonce itself is included in the counter, so you may want to limit yourself to messages of 68,719,476,736 = ~68 GB (yes, that's base 10; Giga means 1,000,000,000).
Because of the birthday problem, you run a significant chance of creating a collision for the nonce (required for each message, not each block) after about 2^48 messages (48 = 96 / 2), so you should limit the number of messages. And if some attacker tricks you into decrypting 2^32 packets for the same nonce, you run out of counter.
In case this is still incompatible (test!) then use the initial 8 bytes as nonce. Unfortunately that does mean that you need to limit the number of messages because of the birthday problem.
Further investigation sheds some light on the CommonCrypto problem:
In iOS 6.0.1 the little-endian option is now unimplemented. Also, I have verified that CommonCrypto is bugged in that the CCCryptorReset method does not in fact change the IV as it should, instead keeping the pre-existing IV. The behaviour in 6.0.1 is different from 5.x.
This is potentially a security risk: if you initialize CommonCrypto with a nulled IV and reset it to the actual IV right before encrypting, all your data would be encrypted with the same (nulled) IV, and multiple streams (that perhaps should have different IVs but use the same key) would leak data via a simple XOR of packets with corresponding counter values.
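Given that, the safer pattern is to pass the real IV at creation time instead of creating the cryptor with a zero IV and resetting it later. A hedged sketch (error handling omitted; iv and key are placeholders):

#include <CommonCrypto/CommonCryptor.h>
#include <stdint.h>

uint8_t iv[16];                 // the actual 16-byte IV, never a nulled one
uint8_t key[kCCKeySizeAES128];  // your AES key
CCCryptorRef cryptor = NULL;
CCCryptorStatus status = CCCryptorCreateWithMode(
    kCCDecrypt, kCCModeCTR, kCCAlgorithmAES128, ccNoPadding,
    iv, key, kCCKeySizeAES128,
    NULL, 0,             // no tweak
    0,                   // default number of rounds
    kCCModeOptionCTR_BE, // big-endian counter
    &cryptor);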
Part of what I'm developing is a random company name generator. It draws from several arrays of name parts. I use the rand() function to draw the random name parts. However, the same "random" numbers are always generated in the same sequence every time I launch the app, so the same names always appear.
So I searched around SO, and in C there is an srand() function to "seed" the random function with something like the current time to make it more random - like srand(time(NULL)). Is there something like that for Objective-C that I can use for iOS development?
Why don't you use arc4random which doesn't require a seed? You use it like this:
int r = arc4random();
Here's an article comparing it to rand(). The arc4random() man page says this about it in comparison to rand():
The arc4random() function uses the key stream generator employed by the arc4 cipher, which uses 8*8 8-bit S-Boxes. The S-Boxes can be in about (2**1700) states. The arc4random() function returns pseudo-random numbers in the range of 0 to (2**32)-1, and therefore has twice the range of rand(3) and random(3).
If you want a random number within a range, you can use the arc4random_uniform() function. For example, to generate a random number between 0 and 10, you would do this:
int i = arc4random_uniform(11);
Here's some info from the man page:
arc4random_uniform(upper_bound) will return a uniformly distributed random number less than upper_bound. arc4random_uniform() is recommended over constructions like ``arc4random() % upper_bound'' as it avoids "modulo bias" when the upper bound is not a power of two.
The functions rand() and srand() are part of the Standard C Library and, like the rest of the C library, are fully available for you to use in iOS development with Objective-C. Note that these routines have been superseded by random() and srandom(), which have almost identical calling conventions to rand() and srand() but produce much better results with a larger period. There is also an srandomdev() routine which initializes the state of the random number generator using the random number device. These are also part of the Standard C Library and available for use on iOS in Objective-C.
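For example, a minimal sketch of seeding and drawing from the better generator (seed once at startup, not before every call):

#include <stdlib.h>
#include <time.h>

srandomdev();                          // seed from the random number device
// or: srandom((unsigned)time(NULL)); // seed from the clock instead
long r = random();                     // pseudo-random value in [0, 2^31 - 1]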
I have read here -- without understanding much -- that it's bad to use the mod (%) operator to map a random number into a range. So this typical recommendation for Objective-C
int r = arc4random() % 45;
might be a bad idea for getting a number from 0 to 44 (something about the distribution, and this formula having a preference for low bits). What should one use in Objective-C?
<sarcasm>
I am so glad to be able to finally learn this stuff after using only high-level languages (Java et al.) all this time. Tomorrow I will try to make fire with two twigs. </sarcasm>
Java is just as high-level as Objective-C here; in this case, Java's Random.nextInt() is the same as arc4random in that they both return a 32-bit pseudo-random number.
The issue raised in the URL (and I have seen elsewhere) is that rand() could be repeating itself every 32768 values.

Whilst OS X's arc4random could have (2**1700) states.
But as with all uses of pseudo-random generators, you need to be aware of their weaknesses before using them, e.g. a preference for low bits in some generators, and also the comment in the OpenBSD arc4random man page where it says:

arc4random_uniform() is recommended over constructions like ``arc4random() % upper_bound'' as it avoids "modulo bias" when the upper bound is not a power of two.
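So, for the example in the question, the bias-free version is simply:

uint32_t r = arc4random_uniform(45); // uniform over 0..44, no modulo bias

(Use arc4random_uniform(46) if you actually want 0 to 45 inclusive.)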