Recently I implemented a Mersenne Twister for 64-bit integers (or longs). Is there a guide or are there examples of how to test a PRNG, so that I can tell whether my implementation is a good-enough solution? I'm especially interested in how to verify that my implementation has a sufficiently uniform distribution.
The more specifically this is tied to the Mersenne Twister, the better.
You do not need to test the Mersenne Twister algorithm -- that's been done over and over by people who really know what they're doing -- you only have to test whether you've correctly implemented the algorithm.
You can go to the Mersenne Twister web site and grab their test output. If you produce the same sequence of outputs that they do, you've probably implemented the algorithm correctly.
Note that the MT site has a link specifically for 64-bit machines, and different test outputs for the 32- and 64-bit versions.
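For a concrete picture of what that comparison looks like, here is a minimal sketch in Python. It assumes you've downloaded the 64-bit reference output file from the MT site (e.g. mt19937-64.out.txt) and that your generator exposes something like next_uint64() and an array-seeding method; those names, and the usage shown in the final comment, are placeholders to adapt to your own code.

    # Sketch: compare your generator's first outputs against the published
    # 64-bit reference output (the file on the MT site produced by mt19937-64.c).
    def load_reference_ints(path, count=1000):
        """Read the first `count` unsigned integers from the reference output file,
        skipping any header/label lines that contain text."""
        ints = []
        with open(path) as f:
            for line in f:
                if any(c.isalpha() for c in line):
                    continue  # skip lines like "1000 outputs of genrand64_int64()"
                ints.extend(int(tok) for tok in line.split())
                if len(ints) >= count:
                    break
        return ints[:count]

    def check_against_reference(rng, expected):
        """`rng` is assumed to expose next_uint64(); rename to match your code."""
        for i, want in enumerate(expected):
            got = rng.next_uint64()
            if got != want:
                return f"mismatch at output #{i}: got {got}, expected {want}"
        return "OK"

    # Hypothetical usage -- seed your generator exactly the way the reference
    # program does (it seeds from the array {0x12345, 0x23456, 0x34567, 0x45678}):
    #   rng = MyMT19937_64()
    #   rng.seed_by_array([0x12345, 0x23456, 0x34567, 0x45678])
    #   print(check_against_reference(rng, load_reference_ints("mt19937-64.out.txt")))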
The standard battery of tests for a PRNG is the Diehard Tests.
The easiest approach (if it's truly a generic MT) would be to compare it with a known-good MT library using the same seed.
Aloha!
As someone else said, use the known-answer test vectors for the algorithm. If you match the test vectors, you can be reasonably sure that your generator works.
If you really want to test the generator itself, use the DIEHARD tests (and more) as implemented by the Dieharder tool:
http://www.phy.duke.edu/~rgb/General/dieharder.php
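If you want a quicker sanity check on uniformity (the thing the question specifically asks about) before running the full Dieharder battery, a chi-square test on bucketed outputs will catch gross bugs. A rough sketch, using Python's random.getrandbits(64) as a stand-in for your generator's 64-bit output:

    # Quick sanity check, not a substitute for Dieharder/TestU01: bucket each
    # 64-bit output by its top 8 bits and compute the chi-square statistic.
    import math
    import random

    def chi_square_top_byte(next_uint64, n_samples=1_000_000):
        counts = [0] * 256
        for _ in range(n_samples):
            counts[next_uint64() >> 56] += 1
        expected = n_samples / 256
        return sum((c - expected) ** 2 / expected for c in counts)

    if __name__ == "__main__":
        chi2 = chi_square_top_byte(lambda: random.getrandbits(64))
        dof = 255  # 256 buckets - 1
        # A healthy generator lands near dof; being many sigma out on repeated
        # runs (sigma ~ sqrt(2*dof) ~ 22.6) is a red flag. Repeat for other bit
        # ranges (low bits, middle bits) for better coverage.
        print(f"chi2 = {chi2:.1f}, expected ~{dof} +/- {math.sqrt(2 * dof):.1f}")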
There are so many HyperLogLog implementations out there, but how do you verify/test a HyperLogLog implementation? How do you check its accuracy and its error-bound behavior? Just throwing some static test cases at it looks very ineffective.
More concretely: if someone changes the random number routine, how do I know that is not a disastrous choice, and how can I show it with automated, repeatable tests?
Can anyone point me to known-good tests on GitHub or elsewhere, and maybe some explanations?
Good question. First, note that while HyperLogLog's theoretical foundation offers some indication of accuracy, it is critical to test the implementation you are using.
Testing should use random datasets (additional static datasets are also possible), and should be applied across varying set cardinalities. If you have a test automation framework in place, that would be a natural place to guard against regressions, as you suggested above. However, note that measuring accuracy at large cardinalities can make test runtime prohibitive.
You can use the implementation below for reference. It includes unit tests which draw large numbers of random numbers, and check the accuracy at fixed intervals.
https://github.com/Microsoft/CardinalityEstimation
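If that implementation isn't a good fit, the shape of such a test is easy to reproduce. A hedged sketch in Python: the make_hll() factory, its add()/count() methods, and the register count m are placeholders for whatever implementation you're testing, and 1.04/sqrt(m) is the standard error of the plain HyperLogLog estimator.

    import random

    def check_hll_accuracy(make_hll, m, cardinalities=(1_000, 10_000, 100_000, 1_000_000)):
        """make_hll() returns a fresh sketch exposing add() and count() -- placeholders."""
        expected_stderr = 1.04 / m ** 0.5   # theoretical standard error for plain HLL
        rng = random.Random(42)             # fixed seed => repeatable test
        for target in cardinalities:
            items = {rng.getrandbits(64) for _ in range(target)}
            hll = make_hll()
            for item in items:
                hll.add(item)
            rel_error = abs(hll.count() - len(items)) / len(items)
            # Allow several sigma of slack so the test isn't flaky, while still
            # catching a broken estimator or a bad hash function.
            assert rel_error < 6 * expected_stderr, (target, rel_error)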
I am currently working with an existing implementation of Perlin noise, which came bundled with a bunch of code I am trying to clean up. The code in question is severely under-tested, and I would like to make sure that each of its components receives proper testing in case there are any hidden bugs.
However, I am not sure how I would go about testing the correctness of the Perlin noise implementation in this case. I welcome all suggestions.
This is a tough problem & probably doesn't have a single best solution.
For some image properties, you might be able to perform automated tests using computer vision techniques. E.g., if your Perlin noise output is supposed to be tileable, an edge detection filter might be able to catch problems. I've also had some good results using FFT filters when I was working on an image classifier for Perlin-noise-based wood-grain textures. In my experience, implementing such tests can easily take more time than building the code being tested. To minimize that, I'd stick with libraries like OpenCV, Octave, etc. Also, this approach depends on having known good output in order to build your tests.
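For tileability specifically, you may not even need OpenCV; a direct check of the periodicity property can work. A small Python sketch, where perlin(x, y), the period, and the tolerance are placeholders to adapt to your implementation:

    def check_noise_tiles(perlin, period=256, samples=64, tol=1e-6):
        """Opposite edges of a supposedly tileable noise field should (nearly) match."""
        step = period / samples
        for i in range(samples):
            t = i * step
            assert abs(perlin(0.0, t) - perlin(float(period), t)) < tol
            assert abs(perlin(t, 0.0) - perlin(t, float(period))) < tol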
From a certain perspective, Perlin noise is a type of random number generator. To that end, you might be able to use RNG test suites like the NIST Statistical Test Suite or the Diehard tests.
Finally, you could build tests that output the results to a file & then perform a manual confirmation of each against expected results. For convenience, you could load collections of images via a web page & maybe even integrate reporting checkboxes to collect pass/fail responses from your tester. This solution is the best I've come up with for testing properties that are difficult, impossible, or impractical to quantify; e.g., I only know my particle effect is correct when I see it.
Hypothetically speaking, if my scientific work was leading toward the development of functions/modules/subroutines (on a desktop), what would I need to know to incorporate it into a large-scale simulation to be run on a supercomputer (which might simulate molecules, fluids, reactions, and so on)?
My impression is that it has to do with taking advantage of certain libraries (e.g., BLAS, LAPACK) where possible, revising algorithms (reducing iteration), profiling, parallelizing, considering memory/disk/processor use and access... I am aware of the adage, "want to optimize your code? don't do it", but if one were interested in learning about writing efficient code, what references might be available?
I think this question is language agnostic, but since many number-crunching packages for biomolecular simulation, climate modeling, etc. are written in some version of Fortran, this language would probably be my target of interest (and I have programmed rather extensively in Fortran 77).
Profiling is a must at any level of machinery. In common usage, I've found that scaling to larger and larger grids requires a better understanding of the grid software and the topology of the grid. In that sense, everything you learn about optimizing for one machine is still applicable, but understanding the grid software gets you additional mileage. Hadoop is one of the most popular and widespread grid systems, so learning about the scheduler options, interfaces (APIs and web interfaces), and other aspects of usage will help. Although you may not use Hadoop for a given supercomputer, it is one of the less painful methods for learning about distributed computing. For parallel computing, you may pursue MPI and other systems.
Additionally, learning to parallelize code on a single machine, across multiple cores or processors, is something you can begin learning on a desktop machine.
Recommendations:
Learn to optimize code on a single machine:
Learn profiling
Learn to use optimized libraries (after profiling: so that you see the speedup)
Be sure you know algorithms and data structures very well (*)
Learn to do embarrassingly parallel programming on multi-core machines (see the sketch at the end of this answer).
Later: consider multithreaded programming. It's harder and may not pay off for your problem.
Learn about basic grid software for distributed processing
Learn about tools for parallel processing on a grid
Learn to program for alternative hardware, e.g. GPUs, various specialized computing systems.
This is language agnostic. I have had to learn the same sequence in multiple languages and multiple HPC systems. At each step, take a simpler route to learn some of the infrastructure and tools; e.g. learn multicore before multithreaded, distributed before parallel, so that you can see what fits for the hardware and problem, and what doesn't.
Some of the steps may be reordered depending on local computing practices, established codebases, and mentors. If you have a large GPU or MPI library in place, then, by all means, learn that rather than foist Hadoop onto your collaborators.
(*) The reason to know algorithms very well is that as soon as your code is running on a grid, others will see it. When it is hogging the system, they will want to know what you're doing. If you are running a process in polynomial time that should run in constant time, you may find yourself mocked. Others with more domain expertise may help you find good approximations for NP-hard problems, but you should know that the concept exists.
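As promised above, here is a minimal sketch of the embarrassingly parallel step, in Python for brevity (the same shape applies in any language): independent work items are farmed out to a process pool. simulate_one() is a placeholder for whatever per-parameter computation your problem decomposes into.

    from multiprocessing import Pool

    def simulate_one(param):
        # Placeholder: one independent unit of work (one parameter set, one grid cell, ...).
        return sum(i * param for i in range(100_000))

    if __name__ == "__main__":
        params = range(64)
        with Pool() as pool:                  # defaults to one worker per core
            results = pool.map(simulate_one, params)
        print(len(results), "results computed")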
Parallelization would be the key.
Since the problems you cited (e.g. CFD, multiphysics, mass transfer) are generally expressed as large-scale linear algebra problems, you need matrix routines that parallelize well. MPI is a standard for those types of problems.
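For flavor, this is what the MPI pattern looks like in its smallest form, sketched with mpi4py (the Fortran and C bindings follow the same structure): each rank holds a slice of the data, computes a local partial result, and a reduction combines the pieces.

    # Run with e.g.: mpiexec -n 4 python dot.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n_local = 1_000_000                     # each rank owns a slice of the vectors
    rng = np.random.default_rng(seed=rank)  # different data on each rank
    x = rng.random(n_local)
    y = rng.random(n_local)

    local_dot = float(np.dot(x, y))
    total = comm.allreduce(local_dot, op=MPI.SUM)   # combine the partial results

    if rank == 0:
        print("distributed dot product:", total)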
Physics can influence the approach as well. For example, it's possible to solve some elliptic problems efficiently using explicit dynamics and artificial mass and damping matrices.
3D multiphysics means coupled differential equations with varying time scales. You'll want a fine mesh to resolve details in both space and time, so the number of degrees of freedom will rise rapidly; time steps will be governed by the stability requirements of your problem.
If someone ever figures out how to run linear algebra as a map-reduce problem, they'll have it knocked.
First, you would need to understand the problem. Not all problems can be solved in parallel (and I'm using the term "parallel" in the widest sense it can take). So, look at how the problem is solved now. Can it be solved more quickly with some other method? Can it be divided into independent parts? And so on...
Fortran is a language specialized for scientific computing, and in recent years, alongside the development of new language features in general, there has also been some very interesting development of features aimed at this "market". The term "coarrays" could be an interesting read.
But for now, I would suggest reading a book like Using OpenMP first: OpenMP is a simpler model, and the book (with Fortran examples inside) explains the fundamentals nicely. The Message Passing Interface (MPI to its friends :) is a larger model, and one of the most often used; your next step from OpenMP should probably go in this direction. Books on MPI programming are not rare.
You also mentioned libraries: yes, some of those you mentioned are widely used, and others are available as well. A person who does not know exactly where the performance problem lies should, IMHO, never undertake the task of rewriting library routines.
There are also books on parallel algorithms that you might want to check out.
In short, it comes down to understanding the problem, learning where the performance problem lies, re-solving the whole problem with a different approach, and iterating a few times; by that time you'll already know what you're doing and where you're stuck.
We're in a position similar to yours.
I'm mostly in agreement with @Iterator's answer, but I think there's more to say.
First of all, I believe in "profiling" by the random-pausing method, because I'm not really interested in measuring things (it's easy enough to do that) but in pinpointing code that is causing time waste, so I can fix it. It's like the difference between a floodlight and a laser.
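For what the method amounts to in practice, here's a rough in-process illustration in Python: a background thread grabs the main thread's stack a few times per second and tallies where it lands. (For compiled Fortran/C code the equivalent is interrupting the program in a debugger and reading the backtrace by hand.) The busy_work() function is just a stand-in workload.

    import collections
    import sys
    import threading
    import time
    import traceback

    samples = collections.Counter()
    keep_sampling = True

    def sampler(main_ident, interval=0.05):
        """Periodically capture the main thread's innermost frame and tally it."""
        while keep_sampling:
            frame = sys._current_frames().get(main_ident)
            if frame is not None:
                top = traceback.extract_stack(frame)[-1]
                samples[(top.filename, top.lineno, top.name)] += 1
            time.sleep(interval)

    def busy_work():
        total = 0
        for i in range(20_000_000):   # stand-in for the real computation
            total += i * i
        return total

    if __name__ == "__main__":
        t = threading.Thread(target=sampler, args=(threading.main_thread().ident,))
        t.start()
        busy_work()
        keep_sampling = False
        t.join()
        # The locations that keep showing up are where the time is going.
        for loc, count in samples.most_common(5):
            print(count, loc)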
For one example, we use LAPACK and BLAS. Now, in taking my stack samples, I saw that a lot of the samples were in the routine that compares characters. This was called from a general routine that multiplies and scales matrices, which in turn was called from our code. The matrix-manipulating routine, in order to be flexible, has character arguments that tell it things like whether a matrix is lower-triangular. In fact, if the matrices are not very large, the routine can spend more than 50% of its time just classifying the problem. Of course, the next time it is called from the same place, it does the same thing all over again. In a case like that, a special routine should be written. When it is optimized by the compiler, it will be as fast as it reasonably can be, and it will save all that classifying time.
For another example, we use a variety of ODE solvers. These are, of course, optimized to the nth degree. They work by calling user-provided routines to calculate derivatives and possibly a Jacobian matrix. If those user-provided routines don't actually do much, samples will indeed show the program counter in the ODE solver itself. However, if the user-provided routines do much more, samples will mostly find the lower end of the stack in those routines, because they take longer, while the ODE code takes roughly the same time. So, optimization should be concentrated in the user-provided routines, not the ODE code.
Once you've done several of the kind of optimization that is pinpointed by stack sampling, which can speed things up by 1-2 orders of magnitude, then by all means exploit parallelism, MPI, etc. if the problem allows it.
I'm planning some experiments in symbolic execution of C code, using an off-the-shelf SMT solver, and wondering which solver to use; looking at, e.g., the SMT contest entrants and taking only the open-source systems narrows it down to Beaver, Boolector, CVC3, OpenSMT, Sateen, Sonolar, STP, and Verit, which is still a long list.
Trying to narrow it down a little further, I notice that some of the systems advertise the ability to handle bit vector arithmetic, whereas others only advertise the ability to handle general integer arithmetic. In principle, the former is correct for C, where variables are machine words, not unbounded integers. How much difference does it make in practice? What happens if you try to use a general integer system for this kind of job? Does one of the following scenarios apply?
A bit vector system is slightly more efficient, but you can use either, no problem.
You can use a general integer system with a bit of tweaking.
A general integer system is fine for signed int (because the result of overflow is undefined) but will give the wrong answer for unsigned.
A general integer system just isn't correct for machine word arithmetic, and I can reduce my short list to only those systems that provide bit vector arithmetic.
Something else...?
I've tried to ask as specific a question as possible, but if anyone can suggest any other criteria for narrowing down the list, that would be great!
I've had good experience using STP for symbolic execution. STP was designed precisely for this task. Also, there have been a number of symbolic execution tools that have successfully used STP for this purpose, so there is reason to believe that STP doesn't suck. I would definitely recommend STP to others as a default choice for this sort of experimentation.
However, I haven't tried the other systems, so I don't know how STP compares to them.
Personally, I see STP as the baseline and the default choice for this kind of application. So, if you only have time to try one solver, trying STP seems like a pretty reasonable choice.
If I had to guess, my guess would be that bit-vector arithmetic is important to support, because any large body of systems code is going to have a non-trivial amount of code that performs bitwise operations. Also, I'd suspect/worry that some systems code may rely upon the behavior of unsigned arithmetic wrapping modulo 2^n, and if you try to model it with integers, you will not get the semantics of C right (because, as you say, integers just aren't correct for machine-word arithmetic); consequently, if you try to use an integer-only solver, you may experience some difficulties. However, I have no hard evidence for either of these suspicions.
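As a concrete illustration of that semantic gap, using Z3's Python bindings as a convenient example (Z3 comes up in the P.S. below; any bit-vector-capable solver shows the same thing): ask whether x + 1 == 0 can hold for an unsigned 32-bit x. The bit-vector model says yes (wraparound at 0xFFFFFFFF); a naive unbounded-integer model says no unless you encode the modulo-2^32 behaviour by hand.

    from z3 import BitVec, Int, Solver

    # Machine-word model: 32-bit bit-vector arithmetic wraps modulo 2^32.
    x = BitVec('x', 32)
    s = Solver()
    s.add(x + 1 == 0)
    print(s.check())    # sat
    print(s.model())    # e.g. [x = 4294967295]

    # Naive unbounded-integer model of the same unsigned variable: no wraparound.
    y = Int('y')
    s2 = Solver()
    s2.add(y >= 0, y < 2**32, y + 1 == 0)
    print(s2.check())   # unsat -- the wrapping semantics have to be added manually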
P.S. Z3 might also be a contender to add to your list to consider. (Do you really need your solver to be open-source, as long as it is free? I'd expect that a symbolic execution tool would use it only as a blackbox, without modification.)
According to the Wikipedia article on SMT as of 2011-08, we have:
Based on these measures, it appears that the most vibrant, well-organized projects are OpenSMT, STP and CVC4.
I'm just checking this stuff out; so far, all three seem reasonable, plus the older CVC -> CVC3.
When working on a large software project I often use fuzz-testing as part of my test cases to help smoke out bugs that may only show up when the input reaches a certain size or shape. I've done this most commonly by just using the standard random number facilities that are bundled with the programming language I happen to be using.
Recently I've started wondering, ignoring the advantages or disadvantages of fuzz testing in general, whether or not it's a good idea to use a non-cryptographically secure pseudorandom number generator when doing fuzz testing. Weak random number generators often exhibit patterns that distinguish them from true random sequences, even if those patterns are not readily obvious. It seems like a fuzz test using a weak PRNG might always fail to trigger certain latent bugs that only show up in certain circumstances, because the pseudorandom numbers could be related to one another in a way that never triggers those circumstances.
Is it inherently unwise to use a weak PRNG for fuzz testing? If it is theoretically unsound to do so, is it still reasonable in practice?
You're confusing two very different grades of "weakness":
statistical weakness means the output of the PRNG exhibits statistical patterns, such as having certain sequences occur more often than others. This could actually lead to ineffective fuzz testing in some rare cases. Statistically strong PRNGs are performant and widely available though (most prominently the Mersenne Twister).
cryptographic weakness means that the output of the RNG is in some way predictable given knowledge other than the seed (such as the output itself). It makes absolutely no sense to require a PRNG used for fuzz testing to be cryptographically strong, because the "patterns" exhibited by statistically-strong-but-cryptographically-weak PRNGs are pretty much only an issue if you need to prevent a cryptographically versed attacker from predicting the output.
I don't think it really matters, but I can't prove it.
Fuzz-testing will only try some inputs, in most cases a minuscule proportion of the possibilities. No matter how good the RNG you use, it may or may not find one of the inputs that breaks your code, depending on what proportion of all possible inputs break your code. Unless the pattern in the PRNG is very simple, it seems to me unlikely that it will correspond in any way to a pattern in the "bad" inputs you're looking for, so it will hit it neither more nor less than true random.
In fact, if you knew how to pick an RNG to maximise the probability of it finding bad inputs, you could probably use that knowledge to help find the bug more directly...
I don't think you should use a really bad PRNG. rand for example is permitted to exhibit very simple patterns, such as the LSB alternating. And if your code uses a PRNG internally, you probably want to avoid using the same PRNG in a similar way in the test, just to be sure you don't accidentally only test cases where the input data matches the internally-generated number stream! Small risk, of course, since you'd hope they'll be using different seeds, but still.
It's usually not that hard in a given language to find crypto or at least secure hash libraries. SHA-1 is everywhere and easy to use to generate a stream, or failing that RC4 is trivial to implement yourself. Both provide a pretty good PRNG, if not quite as secure as Blum Blum Shub. I'd have thought the main concern is speed: if, for example, a Mersenne Twister can generate fuzz test cases 10 times as fast, and the code under test is reasonably fast, then it might have a better chance of finding bad inputs in a given time, regardless of the fact that given 624 outputs you can deduce the complete state of the RNG...
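If you do want a hash-driven stream, it's only a few lines. A sketch with hashlib's SHA-256 in counter mode (same idea as the SHA-1 suggestion; substitute sha1 if you prefer), which is seeded and therefore reproducible:

    import hashlib

    def hash_stream(seed: bytes):
        """Yield an endless stream of pseudorandom bytes derived from `seed`."""
        counter = 0
        while True:
            yield from hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1

    def fuzz_input(seed: bytes, length: int) -> bytes:
        stream = hash_stream(seed)
        return bytes(next(stream) for _ in range(length))

    print(fuzz_input(b"test-case-42", 16).hex())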
You don't need an unpredictable source (that's exactly what a cryptographically secure generator gives you); you only need a source with good statistical properties.
So using a general-purpose generator is enough: it is fast and usually reproducible (which means problems are also reproducible).
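In practice that reproducibility is just a matter of logging the seed per test case. A small sketch, where fuzz_target() is a placeholder for the code under test:

    import random

    def random_case(seed: int) -> bytes:
        rng = random.Random(seed)   # Mersenne Twister under the hood
        return bytes(rng.getrandbits(8) for _ in range(rng.randint(1, 4096)))

    def fuzz_target(data: bytes) -> None:
        pass   # placeholder for the code under test

    for seed in range(1_000):
        try:
            fuzz_target(random_case(seed))
        except Exception:
            print(f"reproduce with seed={seed}")   # the seed alone recreates the input
            raise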