D std.random different behavior between integer and decimal uniform random number generation - objective-c

I'm generating some seeded random numbers in D using the default random number generation engine, and I'm getting some "slightly off" values depending on whether I request integer or floating point/double random number generation. For example this code:
import std.stdio;  // for writefln
import std.random; // From docs: alias Random = MersenneTwisterEngine!(uint, 32, 624, 397, 31, 2567483615u, 11, 7, 2636928640u, 15, 4022730752u, 18).MersenneTwisterEngine;
int seed = 123;
Random genint = Random(seed);
Random gendouble = Random(seed);
foreach (i; 0..4) {
    writefln("rand[%d]: %d %f", i, uniform(0, 1000000, genint), uniform(0.0, 1000000.0, gendouble));
}
generates these results:
rand[0]: 696626 696469.187433
rand[1]: 713115 712955.321584
rand[2]: 286203 286139.338810
rand[3]: 428567 428470.925062
For some reason they're close but off by a noticeable amount. Is this expected behavior? I'm not completely familiar with the method being used to generate the numbers so I'm wondering what might be causing this to occur. Specifically, the reason I'm curious about this is I'm trying to port some D code to Objective-C, using the MTRandom library:
int seed = 123;
MTRandom *genint = [[MTRandom alloc] initWithSeed:seed];
MTRandom *gendouble = [[MTRandom alloc] initWithSeed:seed];
for (int i = 0; i < 4; i++) {
    NSLog(@"rand[%d]: %d %f", i, [genint randomUInt32From:0 to:1000000], [gendouble randomDoubleFrom:0.0 to:1000000.0]);
}
and I'm getting these results:
rand[0]: 696469 696469.187433
rand[1]: 712956 712955.321584
rand[2]: 286139 286139.338810
rand[3]: 428471 428470.925062
Identical results for the random doubles, but here the integer results are much closer, though sometimes off by one! Is 712955.32 supposed to be rounded up to 712956 using this method of random generation? I just don't know...
What's the best way I can ensure that my seeded random integer generation is identical between D and Objective-C?

The following is based on a very quick review of the docs page:
I don't see where it's indicated that the results of uniform!int and uniform!double should even be remotely close, nor any specification on how the bits from the random number generator get converted to the requested type.
In short, I think the ints and doubles being close is an implementation detail and you cannot depend on it. Furthermore, I'd not depend on the obj-c and D implementations returning the same sequence of doubles, even from the same seed. (I expect you can count on the MT construct producing the same bit stream, however.)
If you need the same sequence, then you will need to crack open the obj-c version and port it to D as well.
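Since both Mersenne Twister implementations should produce the same raw 32-bit word stream for a given seed, another option is to consume those raw words yourself and apply one explicitly defined mapping to a bounded integer on both sides. The sketch below is plain C; uniform_below and next32 are illustrative names, not part of either library, and this is not what std.random or MTRandom actually do internally. It uses rejection to avoid modulo bias; whichever mapping you pick, implementing the identical one in D and Objective-C keeps the integer sequences in lock-step:
#include <stdint.h>

/* Map raw 32-bit generator output to [0, bound) without modulo bias.
   next32() stands in for "one raw draw from the Mersenne Twister". */
uint32_t uniform_below(uint32_t (*next32)(void), uint32_t bound)
{
    uint32_t limit = UINT32_MAX - (UINT32_MAX % bound); /* largest usable multiple of bound */
    uint32_t r;
    do {
        r = next32();       /* reject draws from the short final bucket */
    } while (r >= limit);
    return r % bound;
}
The divergence in the question comes from the two libraries mapping the raw bits to a bounded integer differently, not from the generators themselves.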

Related

Generate random Float number in Xcode [duplicate]

This is the first time I'm trying random numbers with C (I miss C#). Here is my code:
int i, j = 0;
for (i = 0; i <= 10; i++) {
    j = rand();
    printf("j = %d\n", j);
}
with this code, I get the same sequence every time I run the code. But it generates different random sequences if I add srand(/*somevalue*/) before the for loop. Can anyone explain why?
You have to seed it with srand(). Seeding it with the time is a good idea:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    srand(time(NULL));
    printf("Random Number: %d\n", rand() % 100);
    return 0;
}
You get the same sequence because rand() is automatically seeded with a value of 1 if you do not call srand().
Edit (due to comments):
rand() will return a number between 0 and RAND_MAX (defined in the standard library). Using the modulo operator (%) gives the remainder of the division rand() / 100. This will force the random number to be within the range 0-99. For example, to get a random number in the range of 0-999 we would apply rand() % 1000.
rand() returns pseudo-random numbers. It generates numbers based on a given algorithm.
The starting point of that algorithm is always the same, so you'll see the same sequence generated for each invocation. This is handy when you need to verify the behavior and consistency of your program.
You can set the "seed" of the random generator with the srand function (only call srand once in a program). One common way to get different sequences from the rand() generator is to set the seed to the current time or the id of the process:
srand(time(NULL)); or srand(getpid()); at the start of the program.
Generating real randomness is very very hard for a computer, but for practical non-crypto related work, an algorithm that tries to evenly distribute the generated sequences works fine.
To quote from man rand:
The srand() function sets its argument as the seed for a new sequence of pseudo-random integers to be returned by rand(). These sequences are repeatable by calling srand() with the same seed value.
If no seed value is provided, the rand() function is automatically seeded with a value of 1.
So, with no seed value, rand() assumes the seed as 1 (every time in your case) and with the same seed value, rand() will produce the same sequence of numbers.
There's a lot of answers here, but no-one seems to have really explained why it is that rand() always generates the same sequence given the same seed - or even what the seed is really doing. So here goes.
The rand() function maintains an internal state. Conceptually, you could think of this as a global variable of some type called rand_state. Each time you call rand(), it does two things. It uses the existing state to calculate a new state, and it uses the new state to calculate a number to return to you:
state_t rand_state = INITIAL_STATE;
state_t calculate_next_state(state_t s);
int calculate_return_value(state_t s);
int rand(void)
{
rand_state = calculate_next_state(rand_state);
return calculate_return_value(rand_state);
}
Now you can see that each time you call rand(), it's going to make rand_state move one step along a pre-determined path. The random values you see are just based on where you are along that path, so they're going to follow a pre-determined sequence too.
Now here's where srand() comes in. It lets you jump to a different point on the path:
state_t generate_random_state(unsigned int seed);
void srand(unsigned int seed)
{
rand_state = generate_random_state(seed);
}
The exact details of state_t, calculate_next_state(), calculate_return_value() and generate_random_state() can vary from platform to platform, but they're usually quite simple.
You can see from this that every time your program starts, rand_state is going to start off at INITIAL_STATE (which is equivalent to generate_random_state(1)) - which is why you always get the same sequence if you don't use srand().
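To see that equivalence concretely, this small sketch prints the same three numbers twice (the exact values depend on your platform's rand()):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* With no srand() call, rand() behaves as if srand(1) had been called. */
    for (int i = 0; i < 3; ++i)
        printf("unseeded: %d\n", rand());

    srand(1);   /* explicitly rewind to the default seed */
    for (int i = 0; i < 3; ++i)
        printf("srand(1): %d\n", rand());
    return 0;
}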
If I remember the quote from Knuth's seminal work "The Art of Computer Programming" at the beginning of the chapter on Random Number Generation, it goes like this:
"Anyone who attempts to generate random numbers by mathematical means is, technically speaking, in a state of sin".
Simply put, the stock random number generators are algorithms, mathematical and 100% predictable. This is actually a good thing in a lot of situations, where a repeatable sequence of "random" numbers is desirable - for example for certain statistical exercises, where you don't want the "wobble" in results that truly random data introduces thanks to clustering effects.
Although grabbing bits of "random" data from the computer's hardware is a popular second alternative, it's not truly random either - although the more complex the operating environment, the more possibilities for randomness - or at least unpredictability.
Truly random data generators tend to look to outside sources. Radioactive decay is a favorite, as is the behavior of quasars. Anything whose roots are in quantum effects is effectively random - much to Einstein's annoyance.
Random number generators are not actually random; like most software, they are completely predictable. What rand does is create a different pseudo-random number each time it is called, one which appears to be random. In order to use it properly you need to give it a different starting point.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
    /* initialize random seed: */
    srand(time(NULL));
    printf("random number %d\n", rand());
    printf("random number %d\n", rand());
    printf("random number %d\n", rand());
    printf("random number %d\n", rand());
    return 0;
}
This is from http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.13.html#rand:
Declaration:
void srand(unsigned int seed);
This function seeds the random number generator used by the function rand. Seeding srand with the same seed will cause rand to return the same sequence of pseudo-random numbers. If srand is not called, rand acts as if srand(1) has been called.
rand() returns the next (pseudo) random number in a series. What's happening is that you get the same series each time it's run (seeded with the default of 1). To seed a new series, you have to call srand() before you start calling rand().
If you want something random every time, you might try:
srand (time (0));
Rand does not get you a random number. It gives you the next number in a sequence generated by a pseudorandom number generator. To get a different sequence every time you start your program, you have to seed the algorithm by calling srand.
A (very bad) way to do it is by passing it the current time:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main() {
    srand(time(NULL));
    int i, j = 0;
    for (i = 0; i <= 10; i++) {
        j = rand();
        printf("j = %d\n", j);
    }
    return 0;
}
Why is this a bad way? Because a pseudorandom number generator is only as good as its seed, and the seed must be unpredictable. That is why you may need a better source of entropy, like reading from /dev/urandom.
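As a sketch of that suggestion, you can read the seed from /dev/urandom (present on Linux and macOS) and fall back to the time if the read fails:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    unsigned int seed;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL || fread(&seed, sizeof seed, 1, f) != 1)
        seed = (unsigned int)time(NULL);   /* weaker fallback */
    if (f != NULL)
        fclose(f);

    srand(seed);
    for (int i = 0; i <= 10; i++)
        printf("j = %d\n", rand());
    return 0;
}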
call srand(sameSeed) before calling rand(). More details here.
Seeding the rand()
void srand (unsigned int seed)
This function establishes seed as the seed for a new series of pseudo-random numbers. If you call rand before a seed has been established with srand, it uses the value 1 as a default seed.
To produce a different pseudo-random series each time your program is run, do srand (time (0))
None of you guys are answering his question.
with this code I get the same sequence every time I run it, but it generates different sequences if I add srand(/*somevalue*/) before the for loop. Can someone explain why?
From what my professor has told me, a fixed seed is used when you want to make sure your code is running properly, to see whether something is wrong or whether a change behaves as expected.

Efficient random permutation of n-set-bits

For the problem of producing a bit-pattern with exactly n set bits, I know of two practical methods, but they both have limitations I'm not happy with.
First, you can enumerate all of the possible word values which have that many bits set in a pre-computed table, and then generate a random index into that table to pick out a possible result. This has the problem that as the output size grows the list of candidate outputs eventually becomes impractically large.
Alternatively, you can pick n non-overlapping bit positions at random (for example, by using a partial Fisher-Yates shuffle) and set those bits only. This approach, however, computes a random state in a much larger space than the number of possible results. For example, it may choose the first and second bits out of three, or it might, separately, choose the second and first bits.
This second approach must consume more bits from the random number source than are strictly required. Since it is choosing n bits in a specific order when their order is unimportant, this means that it is making an arbitrary distinction between n! different ways of producing the same result, and consuming at least floor(log_2(n!)) more bits than are necessary.
Can this be avoided?
There is obviously a third approach of iteratively computing and counting off the legal permutations until a random index is reached, but that's simply a space-for-time trade-off on the first approach, and isn't directly helpful unless there is an efficient way to count off those n permutations.
clarification
The first approach requires picking a single random number between zero and w! / (n! * (w-n)!) (where w is the output size), as this is the number of possible solutions.
The second approach requires picking n random values between zero and w-1, zero and w-2, etc., and these have a product of w * (w-1) * ... * (w-n+1) = w! / (w-n)!, which is n! times larger than the first approach.
This means that the random number source has been forced to produce roughly log_2(n!) extra bits to distinguish n! different results which are all equivalent. I'd like to know if there's an efficient method to avoid relying on this superfluous randomness. Perhaps by using an algorithm which produces an un-ordered list of bit positions, or by directly computing the nth unique permutation of bits.
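To make the accounting concrete, here is a throwaway sketch (using lgamma() for log-factorials) that prints the numbers for a 32-bit word with 16 set bits:
#include <math.h>
#include <stdio.h>

/* log2(n!) via the log-gamma function */
static double log2_factorial(double n)
{
    return lgamma(n + 1.0) / log(2.0);
}

int main(void)
{
    double w = 32.0, n = 16.0;
    double needed   = log2_factorial(w) - log2_factorial(n) - log2_factorial(w - n); /* log2 C(w,n)    */
    double consumed = log2_factorial(w) - log2_factorial(w - n);                     /* log2 w!/(w-n)! */
    printf("bits strictly needed : %.2f\n", needed);              /* about 29.2 */
    printf("bits consumed (order): %.2f\n", consumed);            /* about 73.4 */
    printf("excess = log2(n!)    : %.2f\n", consumed - needed);   /* about 44.2 */
    return 0;
}
So about 29 bits are strictly required, while the ordered choice consumes about 73, wasting roughly log_2(16!) ~ 44 bits.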
Seems like you want a variant of Floyd's algorithm:
Algorithm to select a single, random combination of values?
Should be especially useful in your case, because the containment test is a simple bitmask operation. This will require only k calls to the RNG. In the code below, I assume you have randint(limit) which produces a uniform random from 0 to limit-1, and that you want k bits set in a 32-bit int:
unsigned mask = 0;
for (int j = 32 - k; j < 32; ++j) {
    int r = randint(j + 1);            /* uniform in 0..j */
    unsigned b = 1u << r;
    if (mask & b) mask |= (1u << j);   /* r already taken: use position j instead */
    else mask |= b;
}
How many bits of entropy you need here depends on how randint() is implemented. If k > 16, set it to 32 - k and negate the result.
Your alternative suggestion of generating a single random number representing one combination among the set (mathematicians would call this a rank of the combination) is simpler if you use colex order rather than lexicographic rank. This code, for example:
/* Fill buf[0..k-1] with the bit positions (0..n-1) of the k-combination
   at colex rank r. */
for (int i = k; i >= 1; --i) {
    long b;
    while ((b = binomial(n, i)) > r) --n;   /* largest n with C(n, i) <= r */
    buf[i-1] = n;
    r -= b;
}
will fill the array buf[] with indices from 0 to n-1 for the k-combination at colex rank r. In your case, you'd replace buf[i-1] = n with mask |= (1 << n). The binomial() function is the binomial coefficient, which I do with a lookup table (see this). That would make the most efficient use of entropy, but I still think Floyd's algorithm would be a better compromise.
[Expanding my comment:] If you only have a little raw entropy available, then use a PRNG to stretch it further. You only need enough raw entropy to seed a PRNG. Use the PRNG to do the actual shuffle, not the raw entropy. For the next shuffle reseed the PRNG with some more raw entropy. That spreads out the raw entropy and makes less of a demand on your entropy source.
If you know exactly the range of numbers you need out of the PRNG, then you can, carefully, set up your own LCG PRNG to cover the appropriate range while needing the minimum entropy to seed it.
ETA: In C++ there is a std::next_permutation() function. Try using that. See std::next_permutation Implementation Explanation for more.
Is this a theory problem or a practical problem?
You could still do the partial shuffle, but keep track of the order of the ones and forget the zeroes. There are log_2(k!) bits of unused entropy in their final order for your future consumption.
You could also just use the recurrence (n choose k) = (n-1 choose k-1) + (n-1 choose k) directly. Generate a random number between 0 and (n choose k)-1. Call it r. Iterate over the bits from the nth down to the first. If we still have to set j of the i remaining bits, set the ith bit if r < (i-1 choose j-1); otherwise clear it and subtract (i-1 choose j-1) from r.
Practically, I wouldn't worry about the couple of words of wasted entropy from the partial shuffle; generating a random 32-bit word with 16 bits set costs somewhere between 64 and 80 bits of entropy, and that's entirely acceptable. The growth rate of the required entropy is asymptotically worse than the theoretical bound, so I'd do something different for really big words.
For really big words, you might generate n independent bits that are 1 with probability k/n. This immediately blows your entropy budget (and then some), but it only uses linearly many bits. The number of set bits is tightly concentrated around k, though. For a further expected linear entropy cost, I can fix it up. This approach has much better memory locality than the partial shuffle approach, so I'd probably prefer it in practice.
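Here is a minimal C sketch of the bit-by-bit recurrence described above (binomial() is a small helper included for completeness; in practice you'd use a lookup table):
#include <stdint.h>

/* Binomial coefficient, exact for the small values needed here. */
static uint64_t binomial(int n, int k)
{
    if (k < 0 || k > n) return 0;
    uint64_t c = 1;
    for (int i = 1; i <= k; ++i)
        c = c * (uint64_t)(n - k + i) / i;   /* division is exact at each step */
    return c;
}

/* Return a w-bit mask with exactly k bits set: the combination at rank r,
   0 <= r < C(w, k), decided bit by bit using C(i, j) = C(i-1, j-1) + C(i-1, j). */
uint32_t mask_from_rank(int w, int k, uint64_t r)
{
    uint32_t mask = 0;
    int j = k;                                       /* bits still to set */
    for (int i = w; i >= 1 && j > 0; --i) {
        uint64_t with_top = binomial(i - 1, j - 1);  /* combinations that set bit i-1 */
        if (r < with_top) {
            mask |= (uint32_t)1 << (i - 1);
            --j;
        } else {
            r -= with_top;                           /* skip past those combinations */
        }
    }
    return mask;
}
It consumes exactly one uniform rank r in [0, C(w, k)), so no entropy is spent on the ordering of the chosen positions.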
I would use solution number 3, generate the i-th permutation.
But do you need to generate the first i-1 ones?
You can do it a bit faster than that with a kind of divide-and-conquer method proposed here: Returning i-th combination of a bit array, and maybe you can improve the solution a bit.
Background
From the formula you have given, w! / ((w-n)! * n!), it looks like your problem has to do with the binomial coefficient, which counts unique combinations, not permutations, where the same elements in different orders are counted separately.
You said:
"There is obviously a third approach of iteratively computing and counting off the legal permutations until a random index is reached, but that's simply a space-for-time trade-off on the first approach, and isn't directly helpful unless there is an efficient way to count off those n permutations.
...
This means that the random number source has been forced to produce bits to distinguish n! different results which are all equivalent. I'd like to know if there's an efficient method to avoid relying on this superfluous randomness. Perhaps by using an algorithm which produces an un-ordered list of bit positions, or by directly computing the nth unique permutation of bits."
So, there is a way to efficiently compute the nth unique combination, or rank, from the k-indexes. The k-indexes refers to a unique combination. For example, let's say that the n choose k case of 4 choose 3 is taken. This means that there are a total of 4 numbers that can be selected (0, 1, 2, 3), which is represented by n, and they are taken in groups of 3, which is represented by k. The total number of unique combinations can be calculated as n! / (k! * (n-k)!). The rank of zero corresponds to the k-index of (2, 1, 0). Rank one is represented by the k-index group of (3, 1, 0), and so forth.
Solution
There is a formula that can be used to very efficiently translate between a k-index group and the corresponding rank without iteration. Likewise, there is a formula for translating between the rank and corresponding k-index group.
I have written a paper on this formula and how it can be seen from Pascal's Triangle. The paper is called Tablizing the Binomial Coefficient.
I have written a C# class which is in the public domain that implements the formula described in the paper. It uses very little memory and can be downloaded from the site. It performs the following tasks:
Outputs all the k-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters.
Converts the k-index to the proper lexicographic index or rank of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle and is very efficient compared to iterating over the entire set.
Converts the index in a sorted binomial coefficient table to the corresponding k-index. The technique used is also much faster than older iterative solutions.
Uses Mark Dominus's method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers. This version returns a long value. There is at least one other method that returns an int. Make sure that you use the method that returns a long value.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to use the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with at least 2 cases and there are no known bugs.
The following tested example code demonstrates how to use the class and will iterate through each unique combination:
public void Test10Choose5()
{
    String S;
    int Loop;
    int N = 10; // Total number of elements in the set.
    int K = 5;  // Total number of elements in each group.
    // Create the bin coeff object required to get all
    // the combos for this N choose K combination.
    BinCoeff<int> BC = new BinCoeff<int>(N, K, false);
    int NumCombos = BinCoeff<int>.GetBinCoeff(N, K);
    // The KIndexes array specifies the indexes for a lexicographic element.
    int[] KIndexes = new int[K];
    StringBuilder SB = new StringBuilder();
    // Loop through all the combinations for this N choose K case.
    for (int Combo = 0; Combo < NumCombos; Combo++)
    {
        // Get the k-indexes for this combination.
        BC.GetKIndexes(Combo, KIndexes);
        // Verify that the KIndexes returned can be used to retrieve the
        // rank or lexicographic order of the KIndexes in the table.
        int Val = BC.GetIndex(true, KIndexes);
        if (Val != Combo)
        {
            S = "Val of " + Val.ToString() + " != Combo Value of " + Combo.ToString();
            Console.WriteLine(S);
        }
        SB.Remove(0, SB.Length);
        for (Loop = 0; Loop < K; Loop++)
        {
            SB.Append(KIndexes[Loop].ToString());
            if (Loop < K - 1)
                SB.Append(" ");
        }
        S = "KIndexes = " + SB.ToString();
        Console.WriteLine(S);
    }
}
So, the way to apply the class to your problem is by considering the number of bits in the word as the total number of items. This would be n in the n! / (k! * (n - k)!) formula. To obtain k, or the group size, simply count the number of bits set to 1. You would have to create a list or array of the class objects for each possible k, which in this case would be 32. Note that the class does not handle N choose N, N choose 0, or N choose 1, so the code would have to check for those cases and return 1 for both the 32 choose 0 case and the 32 choose 32 case. For 32 choose 1, it would need to return 32.
If you need to use values not much larger than 32 choose 16 (the worst case for 32 items - yields 601,080,390 unique combinations), then you can use 32 bit integers, which is how the class is currently implemented. If you need to use 64 bit integers, then you will have to convert the class to use 64 bit longs. The largest value a 64 bit unsigned integer can hold is 18,446,744,073,709,551,615 (2^64 - 1), and a signed long tops out at 9,223,372,036,854,775,807 (2^63 - 1). The worst case for n choose k when n is 64 is 64 choose 32, which is 1,832,624,140,942,590,534 - so a long value will work for all 64 choose k cases. If you need numbers bigger than that, then you will probably want to look into using some sort of big integer class. In C#, the .NET framework has a BigInteger class. If you are working in a different language, it should not be hard to port.
If you are looking for a very good PRNG, one of the fastest, lightweight, and highest-quality options is the Tiny Mersenne Twister, or TinyMT for short. I ported the code over to C++ and C#; it can be found here, along with a link to the original author's C code.
Rather than using a shuffling algorithm like Fisher-Yates, you might consider doing something like the following example instead:
// Get 7 random cards.
ulong Card;
ulong SevenCardHand = 0;
for (int CardLoop = 0; CardLoop < 7; CardLoop++)
{
    do
    {
        // The card has a value of between 0 and 51. So, get a random value and
        // left shift it into the proper bit position.
        Card = (1UL << RandObj.Next(CardsInDeck));
    } while ((SevenCardHand & Card) != 0);
    SevenCardHand |= Card;
}
The above code is faster than any shuffling algorithm (at least for obtaining a subset of random cards) since it only works on 7 cards instead of 52. It also packs the cards into individual bits within a single 64 bit word. It makes evaluating poker hands much more efficient as well.
As a side note, the best binomial coefficient calculator I have found that works with very large numbers (it accurately calculated a case that yielded over 15,000 digits in the result) can be found here.

Why I keep getting the same random values? [duplicate]

Possible Duplicate:
why do i always get the same sequence of random numbers with rand()?
I am confounded by the fact that even using different programs (on the same machine) to compile and run, and after nilling the values (before and after) the function call, NO MATTER WHAT, I keep getting the SAME "random" numbers each and every time I run it. I swear this is NOT how it's supposed to work. I'm going to illustrate as simply as possible:
#import <Foundation/Foundation.h>
int main(int argc, char *argv[]) {
    int rPrimitive = 0; rPrimitive = 1 + rand() % 50;
    NSNumber *rObject = nil; rObject = [NSNumber numberWithInt:rand() % 10];
    NSLog(@"%i %@", rPrimitive, rObject);
    rPrimitive = 0; rObject = nil;
    NSLog(@"%i %@", rPrimitive, rObject);
    return 0;
}
Run it in TextMate:
i686-apple-darwin11-llvm-gcc-4.2
8 9
0 (null)
Run it in CodeRunner:
i686-apple-darwin11-llvm-gcc-4.2
8 9
0 (null)
Run it a million times, if you'd like. You can guess what it will always be. Why does this happen? Why oh why is this "how it is"?
This is why (from the rand man page):
If no seed value is provided, the rand() function is automatically seeded with a value of 1.
Since it is always seeded with the same number it will always produce the same sequence of numbers. To get it to produce a different sequence each time it runs, you need to use a different seed each time it runs. You can use srand() to set the seed.
Because the numbers aren't random, they're pseudorandom. They're generated according to an algorithm which will always produce the same output, given the same initial seed. You're not seeding the PRNG, so it uses a default, constant seed.
If you seed the PRNG using something less predictable (such as the current time and/or PID), you will get different results each time. In the case of rand(3), you need to seed it with srand(3).
The reason it is like that is because rand is a pseudo-random-number-generator, meaning it doesn't generate true random numbers (which is actually a very difficult thing to do). It generates the next number in the sequence using the “seed”, and at the start of execution the seed is always set to the same value (1 or so), so if you don't change the seed, you'll always get the same sequence of random numbers. You can use something like srand(time(NULL)); to seed the random number generator based on the time, or you can use a random number generator that is considered strong enough for cryptographic purposes, arc4random.
You might think "why is it like this?", but there are some cases where you want to generate the same series of "random numbers" multiple times.
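For the 1-50 range in the question, a single call to arc4random_uniform() (available on macOS and iOS) sidesteps seeding entirely; this is just an illustrative replacement for the rand()-based line in the question:
#include <stdlib.h>

// 1..50 inclusive: arc4random_uniform(50) returns 0..49 with no modulo bias.
int rPrimitive = (int)arc4random_uniform(50) + 1;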

How to make rand() more likely to select certain numbers?

Is it possible to use rand() or any other pseudo-random generator to pick out random numbers, but have it be more likely that it will pick certain numbers that the user feeds it? In other words, is there a way, with rand() or something else, to pick a pseudo random number, but be able to adjust the odds of getting certain outcomes, and how do you do that if it is possible.
BTW, I'm just asking how to change the numbers that rand() outputs, not how to get the user input.
Well, your question is a bit vague... but if you wanted to pick a number from 0-100 but with a bias for (say) 43 and 27, you could pick a number in the range [0, 102] and map 101 to 43 and 102 to 27. It will really depend on how much bias you want to put in, what your range is etc.
You want a mapping function between uniform density of rand() and the probability density that you desire. The mapping function can be done lots of different ways.
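One common shape for such a mapping is a table of weights: draw a uniform value below the weight total and walk to the bucket it falls in. A minimal sketch with made-up weights; weighted_pick and the weights array are illustrative, not a standard API:
#include <stdlib.h>

/* Return an index 0..n-1 with probability weights[i] / total.
   rand() % total has a tiny modulo bias, which is fine for non-critical use. */
int weighted_pick(const int *weights, int n)
{
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += weights[i];

    int r = rand() % total;
    for (int i = 0; i < n; ++i) {
        if (r < weights[i])
            return i;
        r -= weights[i];
    }
    return n - 1;   /* unreachable when all weights are positive */
}
For example, int weights[6] = {1, 1, 1, 1, 1, 2}; makes index 5 (a die roll of 6) come up twice as often as each of the other faces.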
You can certainly use any random number generator to skew the results. Example in C#, since I don't know Objective-C syntax. I assume that rand() returns a number between 0 and 1, 0 inclusive and 1 exclusive. It should be quite easy to understand the idea and convert the code to any other language.
/// <summary>
/// Dice roll with a double chance of rolling a 6.
/// </summary>
int SkewedDiceRoll()
{
    // Set diceRoll to a value from 1 to 7.
    int diceRoll = (int)Math.Floor(7 * rand()) + 1;
    // Treat a value of 7 as a 6.
    if (diceRoll == 7)
    {
        diceRoll = 6;
    }
    return diceRoll;
}
This is not too difficult.
Simply create an array of all possible numbers, then pad the array with extra copies of the numbers you want to come up more often.
i.e.:
array('1','1','1','1','2','3','4','4');
Obviously when you pick randomly from that array, it will return "1" the most, followed by "4".
In other words, is there a way, with rand() or something else, to pick a pseudo random number, but be able to adjust the odds of getting certain outcomes, and how do you do that if it is possible.
For simplicity's sake, let's use drand48(), which returns "values uniformly distributed over the interval [0.0,1.0)".
To make the values close to one more likely to appear, apply the skew function log2():
log2( drand48() + 1.0 ); // +1 since log2() is in [0.0, 1.0) for arguments in [1.0, 2.0)
To make the values close to zero more likely to appear, use e.g. exp():
(exp(drand48()) - 1.0) * (1/(M_E-1.0)); // exp(0)=1, exp(1)=e
Generally you need to create a function which maps the uniformly distributed values from the random function into values which are distributed differently, non-uniformly.
You can use the following trick. This example has a 50 percent chance of producing one of your 'favourite' numbers:
int[] highlyProbable = new int[]{...};
public int biasedRand() {
    double r = rand();   // assume rand() returns a uniform double in [0, 1)
    if (r < 0.5) {
        // Half the time, return one of the favoured numbers.
        return highlyProbable[(int)(highlyProbable.length * rand())];
    } else {
        // Otherwise return a uniform value from the whole range.
        return (int)(YOUR_RANGE * rand());
    }
}
In addition to what Kevin suggested, you could have your regular group of numbers (the wide range) chopped into a number of smaller ranges, and have the RNG pick from the ranges you find favorable. You could access these ranges in a particular order, or, you can access them in some random order (but I can assume this wouldn't be what you want.) Since you're using manually specified ranges to be accessed within the wide range of elements, you're likely to see the numbers you want pop up more than others. Of course, this is just how I'd approach it, and it may not seem all that rational.
Good luck.
By definition the output of a (uniform) random number generator is random, which means that each number is equally likely to occur next (e.g. a 1/10 chance if there are ten possible outcomes), and you should not be able to affect the outcome.
Of course, a pseudo-random generator creates an output that will always follow the same pattern for a given input seed. So if you know the seed, then you may have some idea of the output sequence. You can, of course, use the modulus operator to play around with the set of numbers being output from the generator (e.g. %6 + 2 to generate numbers from 2 to 7).

Generating random numbers in Objective-C

I'm a Java head mainly, and I want a way to generate a pseudo-random number between 0 and 74. In Java I would use the method:
Random.nextInt(74)
I'm not interested in a discussion about seeds or true randomness, just how you accomplish the same task in Objective-C. I've scoured Google, and there just seems to be lots of different and conflicting bits of information.
You should use the arc4random_uniform() function. It uses a superior algorithm to rand. You don't even need to set a seed.
#include <stdlib.h>
// ...
// ...
int r = arc4random_uniform(74);
The arc4random man page:
NAME
arc4random, arc4random_stir, arc4random_addrandom -- arc4 random number generator
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <stdlib.h>
u_int32_t
arc4random(void);
void
arc4random_stir(void);
void
arc4random_addrandom(unsigned char *dat, int datlen);
DESCRIPTION
The arc4random() function uses the key stream generator employed by the arc4 cipher, which uses 8*8 8-bit S-Boxes. The S-Boxes can be in about (2**1700) states. The arc4random() function returns pseudo-random numbers in the range of 0 to (2**32)-1, and therefore has twice the range of rand(3) and random(3).
The arc4random_stir() function reads data from /dev/urandom and uses it to permute the S-Boxes via arc4random_addrandom().
There is no need to call arc4random_stir() before using arc4random(), since arc4random() automatically initializes itself.
EXAMPLES
The following produces a drop-in replacement for the traditional rand() and random() functions using arc4random():
#define foo4random() (arc4random() % ((unsigned)RAND_MAX + 1))
Use the arc4random_uniform(upper_bound) function to generate a random number within a range. The following will generate a number between 0 and 73 inclusive.
arc4random_uniform(74)
arc4random_uniform(upper_bound) avoids modulo bias as described in the man page:
arc4random_uniform() will return a uniformly distributed random number less than upper_bound. arc4random_uniform() is recommended over constructions like ``arc4random() % upper_bound'' as it avoids "modulo bias" when the upper bound is not a power of two.
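Conceptually, the bias is avoided by rejection: discard the raw draws that would land in the one short bucket at the top of the 32-bit range. A sketch of the idea (not Apple's actual implementation; bounded_rand is just an illustrative name):
#include <stdint.h>
#include <stdlib.h>

uint32_t bounded_rand(uint32_t upper_bound)
{
    if (upper_bound < 2)
        return 0;
    /* 2^32 % upper_bound, computed without needing a 64-bit type. */
    uint32_t min = (uint32_t)(-upper_bound) % upper_bound;
    uint32_t r;
    do {
        r = arc4random();   /* retry until r falls in a full-sized bucket */
    } while (r < min);
    return r % upper_bound;
}
The loop almost always exits on the first iteration; the expected number of retries stays below 2 even in the worst case.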
Same as C, you would do
#include <time.h>
#include <stdlib.h>
...
srand(time(NULL));
int r = rand() % 74;
(assuming you meant including 0 but excluding 74, which is what your Java example does)
Edit: Feel free to substitute random() or arc4random() for rand() (which is, as others have pointed out, quite sucky).
I thought I could add a method I use in many projects.
- (NSInteger)randomValueBetween:(NSInteger)min and:(NSInteger)max {
return (NSInteger)(min + arc4random_uniform(max - min + 1));
}
If I end up using it in many files I usually declare a macro as
#define RAND_FROM_TO(min, max) (min + arc4random_uniform(max - min + 1))
E.g.
NSInteger myInteger = RAND_FROM_TO(0, 74) // 0, 1, 2,..., 73, 74
Note: Only for iOS 4.3/OS X v10.7 (Lion) and later
This will give you a floating point number between 0 and 47
float low_bound = 0;
float high_bound = 47;
float rndValue = (((float)arc4random()/0x100000000)*(high_bound-low_bound)+low_bound);
Or just simply
float rndValue = (((float)arc4random()/0x100000000)*47);
Both lower and upper bound can be negative as well. The example code below gives you a random number between -35.76 and +12.09
float low_bound = -35.76;
float high_bound = 12.09;
float rndValue = (((float)arc4random()/0x100000000)*(high_bound-low_bound)+low_bound);
To convert the result to a rounded integer value:
int intRndValue = (int)(rndValue + 0.5);
According to the manual page for rand(3), the rand family of functions have been obsoleted by random(3). This is due to the fact that the lower 12 bits of rand() go through a cyclic pattern. To get a random number, just seed the generator by calling srandom() with an unsigned seed, and then call random(). So, the equivalent of the code above would be
#import <stdlib.h>
#import <time.h>
srandom(time(NULL));
random() % 74;
You'll only need to call srandom() once in your program unless you want to change your seed. Although you said you didn't want a discussion of truly random values, rand() is a pretty bad random number generator, and taking random() % 74 still suffers from modulo bias, since random() generates a number in a fixed range (0 to 2^31 - 1). So, e.g. if that maximum were 3 and you wanted a random number between 0 and 2, you'd be twice as likely to get a 0 as a 1 or a 2.
Better to use arc4random_uniform. However, this isn't available below iOS 4.3. Luckily iOS will bind this symbol at runtime, not at compile time (so don't use the #if preprocessor directive to check if it's available).
The best way to determine if arc4random_uniform is available is to do something like this:
#include <stdlib.h>
int r = 0;
if (arc4random_uniform != NULL)
r = arc4random_uniform (74);
else
r = (arc4random() % 74);
I wrote my own random number utility class just so that I would have something that functioned a bit more like Math.random() in Java. It has just two functions, and it's all made in C.
Header file:
//Random.h
void initRandomSeed(long firstSeed);
float nextRandomFloat();
Implementation file:
//Random.m
static unsigned long seed;
void initRandomSeed(long firstSeed)
{
seed = firstSeed;
}
float nextRandomFloat()
{
return (((seed= 1664525*seed + 1013904223)>>16) / (float)0x10000);
}
It's a pretty classic way of generating pseudo-randoms. In my app delegate I call:
#import "Random.h"
- (void)applicationDidFinishLaunching:(UIApplication *)application
{
initRandomSeed( (long) [[NSDate date] timeIntervalSince1970] );
//Do other initialization junk.
}
Then later I just say:
float myRandomNumber = nextRandomFloat() * 74;
Note that this method returns a random number between 0.0f (inclusive) and 1.0f (exclusive).
There are some great, articulate answers already, but the question asks for a random number between 0 and 74. Use:
arc4random_uniform(75)
Generate random number between 0 to 99:
int x = arc4random()%100;
Generate random number between 500 and 1000:
int x = (arc4random()%501) + 500;
As of iOS 9 and OS X 10.11, you can use the new GameplayKit classes to generate random numbers in a variety of ways.
You have four source types to choose from: a general random source (unnamed, down to the system to choose what it does), linear congruential, ARC4 and Mersenne Twister. These can generate random ints, floats and bools.
At the simplest level, you can generate a random number from the system's built-in random source like this:
NSInteger rand = [[GKRandomSource sharedRandom] nextInt];
That generates a number between -2,147,483,648 and 2,147,483,647. If you want a number between 0 and an upper bound (exclusive) you'd use this:
NSInteger rand6 = [[GKRandomSource sharedRandom] nextIntWithUpperBound:6];
GameplayKit has some convenience constructors built in to work with dice. For example, you can roll a six-sided die like this:
GKRandomDistribution *d6 = [GKRandomDistribution d6];
[d6 nextInt];
Plus you can shape the random distribution by using things like GKShuffledDistribution.
// The following example is going to generate a number between 0 and 73.
int value;
value = (arc4random() % 74);
NSLog(@"random number: %i ", value);
// In order to generate 1 to 73, do the following:
int value1;
value1 = (arc4random() % 73) + 1;
NSLog(@"random number step 2: %i ", value1);
Output:
random number: 72
random number step 2: 52
For game dev, use random() to generate randoms. It is probably at least 5x faster than using arc4random(). Modulo bias is not an issue, especially for games, when generating randoms using the full range of random(). Be sure to seed first: call srandomdev() in your AppDelegate. Here are some helper functions:
static inline int random_range(int low, int high){ return (random()%(high-low+1))+low;}
static inline CGFloat frandom(){ return (CGFloat)random()/UINT32_C(0x7FFFFFFF);}
static inline CGFloat frandom_range(CGFloat low, CGFloat high){ return (high-low)*frandom()+low;}