How does rand() work? Does it have certain tendencies? Is there something better to use? - objective-c

I have read that it has something to do with time, which you also get from including time.h, so I assumed that much, but how does it work exactly? Also, does it have any tendencies towards odd or even numbers or something like that? And finally, is there something with better distribution in the C standard library or the Foundation framework?

Briefly:
You use time.h to get a seed, which is an initial random number. C then does a bunch of operations on this number to get the next random number, then operations on that one to get the next, then... you get the picture.
rand() is able to produce every value in its range. Happily, it will not prefer even or odd numbers regardless of the input seed. Still, it has limits: it repeats itself relatively quickly, and many implementations only give numbers up to 32767 (the minimum RAND_MAX the standard requires).
C does not have another built-in random number generator. If you need a real tough one, there are many packages available online, but the Mersenne Twister algorithm is probably the most popular pick.
Now, if you are interested in the reasons why the above is true, here are the gory details on how rand() works:
rand() is what's called a "linear congruential generator." This means that it employs an equation of the form:
x_(n+1) = (a * x_n + b) mod m
where x_n is the nth random number, and a and b are some predetermined integers. The arithmetic is performed modulo m, with m usually 2^32 depending on the machine, so that only the lowest 32 bits are kept in the calculation of x_(n+1).
In English, then, the idea is this: To get the next random number, multiply the last random number by something, add a number to it, and then take the last few digits.
A few limitations are quickly apparent:
First, you need a starting random number. This is the "seed" of your random number generator, and this is where you've heard of time.h being used. Since we want a really random number, it is common practice to ask the system what time it is (in integer form) and use this as the first "random number." Also, this explains why using the same seed twice will always give exactly the same sequence of random numbers. This sounds bad, but is actually useful, since debugging is a lot easier when you control the inputs to your program.
Second, a and b have to be chosen very, very carefully or you'll get some disastrous results. Fortunately, the equation for a linear congruential generator is simple enough that the math has been worked out in some detail. It turns out that choosing an a which satisfies a mod 8 = 5 together with b = 1 will ensure that all m integers are equally likely, independent of the choice of seed. You also want a value of a that is really big, so that every time you multiply it by x_n you trigger the modulo and chop off a lot of digits, or else many numbers in a row will just be multiples of each other. As a result, two common values of a (for example) are 1566083941 and 1812433253, according to Knuth. The GNU C library happens to use a=1103515245 and b=12345. A list of values for lots of implementations is available at the Wikipedia page for LCGs.
Third, the linear congruential generator will actually repeat itself because of that modulo. This gets to be some pretty heady math, but the result of it all is happily very simple: the sequence will repeat itself after m numbers have been generated. In most cases, this means that your random number generator will repeat every 2^32 cycles. That sounds like a lot, but it really isn't for many applications. If you are doing serious numerical work with Monte Carlo simulations, this number is hopelessly inadequate.
A fourth, much less obvious problem is that the numbers are not actually random; they have a funny sort of correlation. If you take three consecutive integers, (x, y, z), from an LCG with some value of a and m, those three points will always fall on the lattice of points generated by all linear combinations of the three points (1, a, a^2), (0, m, 0), (0, 0, m). This is known as Marsaglia's Theorem, and if you don't understand it, that's okay. All it means is this: triplets of random numbers from an LCG will show correlations at some deep, deep level. Usually it's too deep for you or me to notice, but it's there. It's even possible to reconstruct the first number in a "random" sequence of three numbers if you are given the second and third! This is not good for cryptography at all.
The good part is that LCGs like rand() have a very, very low footprint. They typically require only 32 bits to retain state, which is really nice. They're also very fast, requiring very few operations. This makes them good for noncritical embedded systems, video games, casual applications, stuff like that.
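To make the recurrence concrete, here is a minimal sketch of an LCG in Python, using the glibc-style constants quoted above. The modulus and constants are illustrative assumptions; real rand() implementations differ in detail (glibc's default rand(), for instance, is not a plain LCG).

# A minimal LCG sketch. A, B and M are illustrative (glibc-style) values,
# not a claim about any particular platform's rand().
A = 1103515245
B = 12345
M = 2**31  # modulus; 2^31 or 2^32 are common choices

def lcg(seed, count):
    """Yield `count` pseudo-random numbers from x_(n+1) = (A*x_n + B) mod M."""
    x = seed
    for _ in range(count):
        x = (A * x + B) % M
        yield x

print(list(lcg(seed=42, count=5)))
print(list(lcg(seed=42, count=5)))  # same seed, same sequence

Note that the whole state is the single integer x, which is exactly the low-footprint property mentioned above.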
PRNGs are a fascinating topic. Wikipedia is always a good place to go if you are hungry to learn more on the history or the various implementations that are around today.

rand returns numbers generated by a pseudo-random number generator (PRNG). The sequence of numbers it returns is deterministic, based on the value with which the PRNG was initialized (by calling srand).
The numbers should be distributed such that they appear somewhat random, so, for example, odd and even numbers should be returned at roughly the same frequency. The actual implementation of the random number generator is left unspecified, so the actual behavior is specific to the implementation.
The important thing to remember is that rand does not return random numbers; it returns pseudo-random numbers, and the values it returns are determined by the seed value and the number of times rand has been called. This behavior is fine for many use cases, but is not appropriate for others (for example, rand would not be appropriate for use in many cryptographic applications).

How does rand() work?
http://en.wikipedia.org/wiki/Pseudorandom_number_generator
I have read that it has something to
do with time, also you get from
including time.h
rand() has nothing at all to do with the time. However, it's very common to use time() to obtain the "seed" for the PRNG so that you get different "random" numbers each time your program is run.
Also, does it have any tendencies
towards odd or even numbers or
something like that?
Depends on the exact method used. There's one popular implementation of rand() that alternates between odd and even numbers. So avoid writing code like rand() % 2 that depends on the lowest bit being random.
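As an illustration of that advice, here is a small Python sketch (the constants are the glibc-style LCG values used earlier on this page, an assumption rather than a statement about your platform's rand()) showing that the lowest bit of a power-of-two-modulus LCG simply alternates:

# With an odd multiplier and odd increment modulo a power of two, the parity
# of successive outputs strictly alternates, so `value % 2` is not random.
A, B, M = 1103515245, 12345, 2**31

x = 1
low_bits = []
for _ in range(10):
    x = (A * x + B) % M
    low_bits.append(x % 2)

print(low_bits)  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]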

Related

What does the number in parentheses in `np.random.seed(number)` mean?

What is the difference between np.random.seed(0), np.random.seed(42), and np.random.seed(..any number)? What is the function of the number in parentheses?
Python uses the iterative Mersenne Twister algorithm to generate pseudo-random numbers [1]. The seed is simply where we start iterating.
To be clear, most computers do not have a "true" source of randomness. It is kind of an interesting thing that "randomness" is so valuable to so many applications, and is quite hard to come by (you can buy a specialized device devoted to this purpose). Since it is difficult to make random numbers, but they are nevertheless necessary, many, many, many, many algorithms have been developed to generate numbers that are not random, but nevertheless look as though they are. Algorithms that generate numbers that "look randomish" are called pseudo-random number generators (PRNGs). Since PRNGs are actually deterministic, they can't simply create a number from the aether and have it look randomish. They need an input. It turns out that using some complex operations and modular arithmetic, we can take in an input, and get another number that seems to have little or no relation to the input. Using this intuition, we can simply use the previous output of the PRNG as the next input. We then get a sequence of numbers which, if our PRNG is good, will seem to have no relation to each other.
In order to get our iterative PRNG started, we need an initial input. This initial input is called a "seed". Since the PRNG is deterministic, for a given seed, it will generate an identical sequence of numbers. Usually, there is a default seed that is, itself, sort of randomish. The most common one is the current time. However, the current time isn't a very good random number, so this behavior is known to cause problems sometimes. If you want your program to run in an identical manner each time you run it, you can provide a seed (0 is a popular option, but is entirely arbitrary). Then, you get a sequence of randomish numbers, but if you give your code to someone they can actually entirely recreate the runtime of the program as you witnessed it when you ran it.
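As a quick illustration of that determinism, using Python's standard random module (not any specific library discussed above):

import random
import time

# Two generators with the same seed produce identical sequences.
a = random.Random(12345)
b = random.Random(12345)
print([a.randint(0, 99) for _ in range(5)])
print([b.randint(0, 99) for _ in range(5)])  # same five numbers as above

# Seeding from the clock gives a different-looking sequence on each run.
c = random.Random(int(time.time()))
print([c.randint(0, 99) for _ in range(5)])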
That would be the starting key of the generator. Typically if you want to get reproducible results you'll use the same seed over and over again throughout your simulations.
You are setting the seed of the random number generator so you can get reproducible results. Example.
np.random.seed(0)
np.random.randint(0,100,10)
Output:
array([44, 47, 64, 67, 67, 9, 83, 21, 36, 87])
Now, if you run the same code on your computer, you should get the same 10-number output of random integers from 0 to 100.

Testing whether some code works for ALL input numbers?

I've got an algorithm using a single (positive integer) number as an input to produce an output. And I've got the reverse function which should do the exact opposite, going back from the output to the same integer number. This should be a unique one-to-one reversible mapping.
I've tested this for some integers, but I want to be 100% sure that it works for all of them, up to a known limit.
The problem is that if I just test every integer, it takes an unreasonably long time to run. If I use 64-bit integers, that's a lot of numbers to check if I want to check them all. On the other hand, if I only test every 10th or 100th number, I'm not going to be 100% sure at the end. There might be some awkward weird constellation in one of the 90% or 99% which I didn't test.
Are there any general ways to identify edge cases so that just those "interesting" or "risky" numbers are checked? Or should I just pick numbers at random? Or test in increasing increments?
Or to put the question another way, how can I approach this so that I gain 100% confidence that every case will be properly handled?
The approach for this is generally checking every step of the computation for potential flaws. Concerning integer math, that is overflows, underflows and rounding errors from division, basically that the mathematical result can't be represented accurately. In addition, all operations derived from this suffer similar problems.
The process of auditing then looks at single steps in turn. For example, if you want to allocate memory for N integers, you need N times the size of an integer in bytes, and this multiplication can overflow. You then determine the values for which the multiplication overflows and create tests that exercise them. Note that for the example of allocating memory, proper handling typically means that the function does not allocate memory but fails.
The principle behind this is that you determine the ranges for every operation where the outcome is somehow different (like e.g. where it overflows) and then make sure via tests that both variants work. This reduces the number of tests from all possible input values to just those where you expect a significant difference.
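As a sketch of that principle, the test below checks a round-trip property at the boundary values where integer behaviour could plausibly change, plus a reproducible random sample, instead of every possible input. The encode/decode pair here is a hypothetical placeholder for the real functions.

import random

def encode(n: int) -> int:       # placeholder for the real forward function
    return n ^ 0x5DEECE66D

def decode(m: int) -> int:       # placeholder for the real reverse function
    return m ^ 0x5DEECE66D

U64_MAX = 2**64 - 1

# Edge cases: boundaries where overflow or rounding behaviour could flip.
edges = [0, 1, 2**31 - 1, 2**31, 2**32 - 1, 2**32, 2**63 - 1, 2**63, U64_MAX]

# Plus a reproducible random sample of the remaining space.
rng = random.Random(0)
samples = [rng.randrange(U64_MAX + 1) for _ in range(10_000)]

for n in edges + samples:
    assert decode(encode(n)) == n, f"round-trip failed for {n}"
print("all edge and sampled values round-trip correctly")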

How many iterations of Rabin-Miller should I use for cryptographic safe primes?

I am generating a 2048-bit safe prime for a Diffie-Hellman-type key, p such that p and (p-1)/2 are both prime.
How few iterations of Rabin-Miller can I use on both p and (p-1)/2 and still be confident of a cryptographically strong key? In the research I've done I've heard everything from 6 to 64 iterations for 1024-bit ordinary primes, so I'm a little confused at this point. And once that's established, does the number change if you are generating a safe prime rather than an ordinary one?
Computation time is at a premium, so this is a practical question - I'm basically wondering how to find out the lowest possible number of tests I can get away with while at the same time maintaining pretty much guaranteed security.
Let's assume that you select a prime p by selecting random values until you hit one for which Miller-Rabin says: that one looks like a prime. You use n rounds at most for the Miller-Rabin test. (For a so-called "safe prime", things are not changed, except that you run two nested tests.)
The probability that a random 1024-bit integer is prime is about 1/900. Now, you do not want to do anything stupid so you generate only odd values (an even 1024-bit integer is guaranteed non-prime), and, more generally, you run the Miller-Rabin test only if the value is not "obviously" non-prime, i.e. can be divided by a small prime. So you end up with trying about 300 values with Miller-Rabin before hitting a prime (on average). When the value is non-prime, Miller-Rabin will detect it with probability 3/4 at each round, so the number of Miller-Rabin rounds you will run on average for a single non-prime value is 1+(1/4)+(1/16)+... = 4/3. For the 300 values, this means about 400 rounds of Miller-Rabin, regardless of what you choose for n.
So if you select n to be, e.g., 40, then the cost implied by n is less than 10% of the total computational cost. The random prime selection process is dominated by the test on non-primes, which are not impacted by the value of n you choose. I talked here about 1024-bit integers; for bigger numbers the choice of n is even less important since primes become sparser as size increases (for 2048-bit integers, the "10%" above become "5%").
Hence you can choose n=40 and be happy with it (or at least know that reducing n will not buy you much anyway). On the other hand, using an n greater than 40 is meaningless, because this would get you to probabilities lower than the risk of a simple miscomputation. Computers are hardware; they can have random failures. For instance, a primality test function could return "true" for a non-prime value because a cosmic ray (a high-energy particle hurtling through the Universe at high speed) happens to hit just the right transistor at the right time, flipping the return value from 0 ("false") to 1 ("true"). This is very unlikely -- but no less likely than a probability of 2^-80. See this stackoverflow answer for a few more details. The bottom line is that regardless of how you make sure that an integer is prime, you still have an unavoidable probabilistic element, and 40 rounds of Miller-Rabin already give you the best that you can hope for.
To sum up, use 40 rounds.
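To make the selection loop concrete, here is a rough Python sketch (not production code) of the process described above: reject candidates with small factors cheaply, and spend Miller-Rabin rounds only on the survivors. The constants, the 40-round default, and the function names are illustrative.

import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Return True if n is probably prime (composites pass with prob. < 4**-rounds)."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: definitely composite
    return True

def random_probable_prime(bits: int = 1024, rounds: int = 40) -> int:
    # NOTE: real key generation should use a cryptographically secure RNG
    # (e.g. the secrets module), not random.getrandbits.
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-size
        if miller_rabin(candidate, rounds):
            return candidate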
The paper Average case error estimates for the strong probable prime test by Damgard-Landrock-Pomerance points out that, if you randomly select a k-bit odd number n and apply t independent Rabin-Miller tests in succession, the probability that n is composite satisfies much stronger bounds.
In fact, for 3 <= t <= k/9 and k >= 21, the error probability is bounded by p(k,t) <= k^(3/2) * 2^t * t^(-1/2) * 4^(2 - sqrt(t*k)).
For a k=1024 bit prime, t=6 iterations give you an error rate less than 10^(-40).
Each iteration of Rabin-Miller reduces the odds that the number is composite by a factor of 1/4.
So after 64 iterations, there is only 1 chance in 2^128 that the number is composite.
Assuming you are using these for a public key algorithm (e.g. RSA), and assuming you are combining that with a symmetric algorithm using (say) 128-bit keys, an adversary can guess your key with that probability.
The bottom line is to choose the number of iterations to put that probability within the ballpark of the other sizes you are choosing for your algorithm.
[update, to elaborate]
The answer depends entirely on what algorithms you are going to use the numbers for, and what the best known attacks are against those algorithms.
For example, according to Wikipedia:
As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys.
So, if you are planning to use these primes to generate (say) a 1024-bit RSA key, then there is no reason to run more than 40 iterations or so of Rabin-Miller. Why? Because by the time you hit a failure, an attacker could crack one of your keys anyway.
Of course, there is no reason not to perform more iterations, time permitting. There just isn't much point to doing so.
On the other hand, if you are generating 2048-bit RSA keys, then 56 (or so) iterations of Rabin-Miller is more appropriate.
Cryptography is typically built as a composition of primitives, like prime generation, RSA, SHA-2, and AES. If you want to make one of those primitives 2^900 times stronger than the others, you can, but it is a little like putting a 10-foot-steel vault door on a log cabin.
There is no fixed answer to your question. It depends on the strength of the other pieces going into your cryptographic system.
All that said, 2^-128 is a ludicrously tiny probability, so I would probably just use 64 iterations :-).
From the libgcrypt source:
/* We use 64 Rabin-Miller rounds which is better and thus
sufficient. We do not have a Lucas test implementaion thus we
can't do it in the X9.31 preferred way of running a few
Rabin-Miller followed by one Lucas test. */
cipher/primegen.c line# 1295
I would run two or three iterations of Miller-Rabin (i.e., strong Fermat probable prime) tests, making sure that one of the bases is 2.
Then I would run a strong Lucas probable prime test, choosing D, P, and Q with the method described here:
https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW_primality_test
There are no known composites that pass this combination of Fermat and Lucas tests.
This is much faster than doing 40 Rabin-Miller iterations. In addition, as was pointed out by Pomerance, Selfridge, and Wagstaff in https://math.dartmouth.edu/~carlp/PDF/paper25.pdf, there are diminishing returns with multiple Fermat tests: if N is a pseudoprime to one base, then it is more likely than the average number to be a pseudoprime to other bases. That's why, for example, we see so many psp's base 2 are also psp's base 3.
A smaller probability is usually better, but I would take the actual probability value with a grain of salt. Albrecht et al Prime and Prejudice: Primality Testing Under Adversarial Conditions break a number of prime-testing routines in cryptographic libraries. In one example, the published probability is 1/2^80, but the number they construct is declared prime 1 time out of 16.
In several other examples, their number passes 100% of the time.
Only 2 iterations, assuming 2^-80 as a negligible probability.
From Alfred J. Menezes et al., Handbook of Applied Cryptography (1996), §4.4, p. 148.
Does it matter? Why not run for 1000 iterations? When searching for primes, you stop applying the Rabin-Miller test anyway the first time it fails, so for the time it takes to find a prime it doesn't really matter what the upper bound on the number of iterations is. You could even run a deterministic primality checking algorithm after those 1000 iterations to be completely sure.
That said, the probability that a composite number passes n iterations is at most 4^(-n).

How can I test that my hash function is good in terms of max-load?

I have read through various papers on the 'Balls and Bins' problem and it seems that if a hash function is working right (ie. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins):
Probability that a bin is empty, for large n is 1/e.
Expected number of empty bins is n/e.
Probability that a bin has exactly k balls is <= 1/(e * k!) (corrected).
Probability that a bin has at least k collisions is <= ((e/k)**k)/e (corrected).
These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely.
Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ).
Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log - usually without defining them, or state that log is log base e and then use ln elsewhere.
Is ln the log to base e or 2 and is this max-load formula right and how big should n be to run a test?
This lecture seems to cover it best, but I am no mathematician.
http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf
BTW, with high probability seems to mean 1 - 1/n.
That is a fascinating paper/lecture-- makes me wish I had taken some formal algorithms class.
I'm going to take a stab at some answers here, based on what I've just read from that, and feel free to vote me down. I'd appreciate a correction, though, rather than just a downvote :) I'm also going to use n and N interchangeably here, which is a big no-no in some circles, but since I'm just copy-pasting your formulae, I hope you'll forgive me.
First, the base of the logs. These numbers are given as big-O notation, not as absolute formulae. That means that you're looking for something 'on the order of ln(n) / ln(ln(n))', not with an expectation of an absolute answer, but more that as n gets bigger, the relationship of n to the maximum number of collisions should follow that formula. The details of the actual curve you can graph will vary by implementation (and I don't know enough about the practical implementations to tell you what's a 'good' curve, except that it should follow that big-O relationship). Those two formulae that you posted are actually equivalent in big-O notation. The 3 in the second formula is just a constant, and is related to a particular implementation. A less efficient implementation would have a bigger constant.
With that in mind, I would run empirical tests, because I'm a biologist at heart and I was trained to avoid hard-and-fast proofs as indications of how the world actually works. Start with N as some number, say 100, and find the bin with the largest number of collisions in it. That's your max-load for that run. Now, your examples should be as close as possible to what you expect actual users to use, so maybe you want to randomly pull words from a dictionary or something similar as your input.
Run that test many times, at least 30 or 40. Since you're using random numbers, you'll need to satisfy yourself that the average max-load you're getting is close to the theoretical 'expectation' of your algorithm. Expectation is just the average, but you'll still need to find it, and the tighter your std dev/std err about that average, the more you can say that your empirical average matches the theoretical expectation. One run is not enough, because a second run will (most likely) give a different answer.
Then, increase N, to say, 1000, 10000, etc. Increase it logarithmically, because your formula is logarithmic. As your N increases, your max-load should increase on the order of ln(n) / ln(ln(n)). If it increases at a rate of 3*ln(n) / ln(ln(n)), that means that you're following the theory that they put forth in that lecture.
This kind of empirical test will also show you where your approach breaks down. It may be that your algorithm works well for N < 10 million (or some other number), but above that, it starts to collapse. Why could that be? Maybe you have some limitation to 32 bits in your code without realizing it (ie, using a 'float' instead of a 'double'), or some other implementation detail. These kinds of details let you know where your code will work well in practice, and then as your practical needs change, you can modify your algorithm. Maybe making the algorithm work for very large datasets makes it very inefficient for very small ones, or vice versa, so pinpointing that tradeoff will help you further characterize how you could adapt your algorithm to particular situations. Always a useful skill to have.
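Here is one way such an empirical test might look in Python. The hash function (SHA-256 truncated to 64 bits), the key format, and the trial count are arbitrary choices for illustration; substitute your own hash function to test it.

import hashlib
import math
import random
from collections import Counter

def avg_max_load(n: int, trials: int = 10) -> float:
    """Average maximum bin occupancy when hashing n random keys into n bins."""
    results = []
    for t in range(trials):
        bins = Counter()
        for i in range(n):
            key = f"trial{t}-key{i}-{random.random()}".encode()
            h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
            bins[h % n] += 1
        results.append(max(bins.values()))
    return sum(results) / trials

for n in (100, 1_000, 10_000, 100_000):
    theory = math.log(n) / math.log(math.log(n))
    print(f"n={n:>7}  avg max-load={avg_max_load(n):5.2f}  ln(n)/ln(ln(n))={theory:5.2f}")

The observed max-load should grow roughly at the ln(n)/ln(ln(n)) rate as n increases (up to a constant factor), which is the big-O relationship described above.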
EDIT: a proof of why the base of the log function doesn't matter with big-O notation:
log N = log_10(N) = log_b(N) / log_b(10) = (1/log_b(10)) * log_b(N)
1/log_b(10) is a constant, and in big-O notation, constants are ignored. Base changes are free, which is why you're encountering such variation in the papers.
Here is a rough start to the solution of this problem involving uniform distributions and maximum load.
Instead of bins and balls or urns or boxes or buckets or m and n, people (p) and doors (d) will be used as designations.
There is an exact expected value for each of the doors given a certain number of people. For example, with 5 people and 5 doors, the expected maximum door is exactly 1.2864 {(1429-625) / 625} above the mean (p/d) and the minimum door is exactly -0.9616 {(24-625) / 625} below the mean. The absolute value of the highest door's distance from the mean is a little larger than the smallest door's because all of the people could go through one door, but no less than zero can go through one of the doors. With large numbers of people (p/d > 3000), the difference between the absolute value of the highest door's distance from the mean and the lowest door's becomes negligible.
For an odd number of doors, the center door is essentially zero and is not scalable, but all of the other doors are scalable from certain values representing p=d. These rounded values for d=5 are:
-1.163 -0.495 0* 0.495 1.163
* slowly approaching zero from -0.12
From these values, you can compute the expected number of people for any count of people going through each of the 5 doors, including the maximum door. Except for the middle ordered door, the difference from the mean is scalable by sqrt(p/d).
So, for p=50,000 and d=5:
Expected number of people going through the maximum door, which could be any of the 5 doors, = 1.163 * sqrt(p/d) + p/d.
= 1.163 * sqrt(10,000) + 10,000 = 10,116.3
For p/d < 3,000, the result from this equation must be slightly increased.
With more people, the middle door slowly becomes closer and closer to zero from -0.11968 at p=100 and d=5. It can always be rounded up to zero and like the other 4 doors has quite a variance.
The values for 6 doors are:
-1.272 -0.643 -0.202 0.202 0.643 1.272
For 1000 doors, the approximate values are:
-3.25, -2.95, -2.79 … 2.79, 2.95, 3.25
For any d and p, there is an exact expected value for each of the ordered doors. Hopefully, a good approximation (with a relative error < 1%) exists. Some professor or mathematician somewhere must know.
For testing uniform distribution, you will need a number of averaged ordered sessions (750-1000 works well) rather than a greater number of people. No matter what, the variances between valid sessions are great. That's the nature of randomness. Collisions are unavoidable. *
The expected values for 5 and 6 doors were obtained by sheer brute force computation using 640 bit integers and averaging the convergence of the absolute values of corresponding opposite doors.
For d=5 and p=170:
-6.63901 -2.95905 -0.119342 2.81054 6.90686
(27.36099 31.04095 33.880658 36.81054 40.90686)
For d=6 and p=108:
-5.19024 -2.7711 -0.973979 0.734434 2.66716 5.53372
(12.80976 15.2289 17.026021 18.734434 20.66716 23.53372)
I hope that you may evenly distribute your data.
It's almost guaranteed that all of George Foreman's sons or some similar situation will fight against your hash function. And proper contingent planning is the work of all good programmers.
After some more research and trial and error, I think I can provide something partway to an answer.
To start off, ln and log seem to refer to log base-e if you look into the maths behind the theory. But as mmr indicated, for the O(...) estimates, it doesn't matter.
max-load can be defined for any probability you like. The typical formula used is
1-1/n**c
Most papers on the topic use
1-1/n
An example might be easiest.
Say you have a hash table of 1000 slots and you want to hash 1000 things. Say you also want to know the max-load with a probability of 1-1/1000 or 0.999.
The max-load is the maximum number of hash values that end up being the same - ie. collisions (assuming that your hash function is good).
Using the formula for the probability of getting exactly k identical hash values
Pr[ exactly k ] = 1/(e * k!)
then accumulating the probabilities for exactly 0, 1, ..., k items until the total equals or exceeds 0.999 tells you the max-load k.
eg.
Pr[0] = 0.37
Pr[1] = 0.37
Pr[2] = 0.18
Pr[3] = 0.061
Pr[4] = 0.015
Pr[5] = 0.003 // here, the cumulative total is 0.999
Pr[6] = 0.0005
Pr[7] = 0.00007
So, in this case, the max-load is 5.
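The cumulative calculation above can be scripted. This small Python sketch reproduces the max-load of 5 at the 0.999 level, using the exactly-k probability 1/(e * k!):

import math

def max_load_for(confidence: float) -> int:
    """Smallest k whose cumulative Pr[exactly 0..k] reaches the given confidence."""
    total, k = 0.0, 0
    while total < confidence:
        total += 1.0 / (math.e * math.factorial(k))
        k += 1
    return k - 1

print(max_load_for(0.999))  # prints 5, matching the worked example above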
So if my hash function is working well on my set of data then I should expect the maximum number of identical hash values (or collisions) to be 5.
If it isn't then this could be due to the following reasons:
Your data has small values (like short strings) that hash to the same value. Any hash of a single ASCII character will pick 1 of 128 hash values. (There are ways around this; for example, you could use multiple hash functions, but that slows down hashing and I don't know much about this.)
Your hash function doesn't work well with your data - try it with random data.
Your hash function doesn't work well.
The other tests I mentioned in my question also are helpful to see that your hash function is running as expected.
Incidentally, my hash function worked nicely - except on short (1..4 character) strings.
I also implemented a simple split-table version which places the hash value into the least used slot from a choice of 2 locations. This more than halves the number of collisions and means that adding and searching the hash table is a little slower.
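For reference, here is a rough Python sketch of that two-choice idea; the hash functions (two halves of a SHA-256 digest) and the key format are illustrative assumptions:

import hashlib
from collections import Counter

def two_hashes(key: bytes, n: int):
    """Derive two bin indices for a key from one digest."""
    digest = hashlib.sha256(key).digest()
    h1 = int.from_bytes(digest[:8], "big") % n
    h2 = int.from_bytes(digest[8:16], "big") % n
    return h1, h2

def two_choice_max_load(n: int) -> int:
    bins = Counter()
    for i in range(n):
        h1, h2 = two_hashes(f"key-{i}".encode(), n)
        bins[min((h1, h2), key=lambda b: bins[b])] += 1  # place in the emptier bin
    return max(bins.values())

print(two_choice_max_load(100_000))  # noticeably below the single-choice max-load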
I hope this helps.

Storage algorithm question - verify sequential data with little memory

I found this on an "interview questions" site and have been pondering it for a couple of days. I will keep churning, but am interested in what you guys think.
"10 Gbytes of 32-bit numbers on a magnetic tape, all there from 0 to 10G in random order. You have 64 32 bit words of memory available: design an algorithm to check that each number from 0 to 10G occurs once and only once on the tape, with minimum passes of the tape by a read head connected to your algorithm."
32-bit numbers can take 4G = 2^32 different values. There are 2.5*2^32 numbers on the tape in total (reading "10G" as the count of numbers). So after reading 2^32 + 1 of them, at least one number is guaranteed to have repeated. If there were <= 2^32 numbers on the tape, then two different cases would be possible: either all numbers are different, or at least one repeats.
It's a trick question, as Michael Anderson and I have figured out. You can't store 10G 32b numbers on a 10G tape. The interviewer (a) is messing with you and (b) is trying to find out how much you think about a problem before you start solving it.
The utterly naive algorithm, which takes as many passes as there are numbers to check, would be to walk through and verify that the lowest number is there. Then do it again checking that the next lowest is there. And so on.
This requires one word of storage to keep track of where you are - you could cut down the number of passes by a factor of 64 by using all 64 words to keep track of where you're up to in several different locations in the search space - checking all of your current ones on each pass. Still O(n) passes, of course.
You could probably cut it down even more by using portions of the words - given that your search space for each segment is smaller, you won't need to keep track of the full 32-bit range.
Perform an in-place mergesort or quicksort, using tape for storage? Then iterate through the numbers in sequence, tracking to see that each number = previous+1.
Requires cleverly implemented sort, and is fairly slow, but achieves the goal I believe.
Edit: oh bugger, it's never specified you can write.
Here's a second approach: scan through trying to build up to 30-ish ranges of contiguous numbers. E.g. 1,2,3,4,5 would be one range, 8,9,10,11,12 would be another, etc. If ranges overlap with existing ones, they are merged. I think you only need to make a limited number of passes to either get the complete range or prove there are gaps... much less than just scanning through in blocks of a couple thousand to see if all digits are present.
It'll take me a bit to prove or disprove the limits for this though.
Do 2 reduces on the numbers, a sum and a bitwise XOR.
The sum should be (10G + 1) * 10G / 2
The XOR should be ... something
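As a sketch of that checksum idea in Python: the tape is modeled as any iterable of integers, and N is a tiny stand-in for the problem's upper bound. Note that matching both checks is necessary but not sufficient on its own to prove every value appears exactly once.

from functools import reduce
from operator import xor

N = 10  # stand-in for the real upper bound

def expected_sum(n: int) -> int:
    return n * (n + 1) // 2           # sum of 0..n

def expected_xor(n: int) -> int:
    return reduce(xor, range(n + 1))  # XOR of 0..n

def check(tape) -> bool:
    """Single pass, constant memory: accumulate a sum and a bitwise XOR."""
    total, acc = 0, 0
    for value in tape:
        total += value
        acc ^= value
    # Not sufficient alone: e.g. swapping {1, 2} for {0, 3} preserves both totals.
    return total == expected_sum(N) and acc == expected_xor(N)

print(check(range(N + 1)))              # True: 0..N each exactly once
print(check(list(range(N + 1)) + [5]))  # False: the extra 5 breaks the sum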
It looks like there is a catch in the question that no one has talked about so far; the interviewer has only asked the interviewee to write a program that CHECKS
(i) if each number that makes up the 10G is present once and only once. What should the interviewee do if a number in the given list is present multiple times? Should he assume that he should stop executing the program and throw an exception, or should he assume that he should correct the mistake by removing the repeated number and replacing it with another (this may actually be a costly exercise, as it involves a complete reshuffle of the number set)? Correcting this is required to perform the second step in the question, i.e. to verify that the data is stored in the best possible way, requiring the least possible passes.
(ii) when the interviewee is asked only to check whether the 10G data set of numbers is stored in such a way that it requires the least passes to access any of those numbers,
what should the interviewee do? Should he stop and throw an exception the moment he finds an issue in the algorithm they were stored with, or correct the mistake and continue until all the elements are sorted in the order of least possible passes?
If the intention of the interviewer is to ask the interviewee to write an algorithm that finds the best combination of numbers that can be stored in 10GB, given 64 32-bit registers, and also to write an algorithm to save this chosen set of numbers in the best possible way, requiring the least number of passes to access each, he should have asked this directly, wouldn't he?
I suppose the intention of the interviewer may be only to see how the interviewee approaches the problem rather than to actually extract a working solution; would anyone buy this notion?
Regards,
Samba