Generate a series of pseudo-random sequences to remove sequential bias

I have n things that are going to be tested by t testers. Each tester is subject to order bias (that is, they may give higher ratings to things they test earlier - or the reverse), so I want to eliminate that. One way to do that would be to generate a random testing sequence for each tester.
For example, with n=5, t=2:
Tester 1 Tester 2
4 2
2 5
5 4
1 3
3 1
Notice, however, that in this generated set of sequences both testers test thing 5 immediately after thing 2. Also, things 3 and 1 both appear towards the end of the testing for both testers. This is sub-optimal.
My question: how can I generate an optimal set of sequences, maximising the chance of each thing appearing at each different position, and minimising repetition between the individual sequences?
Harder question: how much better would that be than naive (pseudo-)random generation? Can that be quantified?
This isn't a homework question, although it may sound like it :) (It's for a wine tasting...)
I really wasn't sure whether this should go in math.stackexchange, cs.stackexchange, or here. Ultimately I actually want to implement this, so...
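For what it's worth, one classical construction that targets both goals at once is a Williams-style Latin square: each thing appears exactly once in each position across a block of testers, and (for even n) each ordered adjacency occurs exactly once, so no two testers share a "5 right after 2" pattern more than the design forces. A minimal sketch in Python; numbering the things 1..n and recycling rows when t exceeds the number of rows are my assumptions, not part of the question:

import random

def williams_rows(n):
    # First row of a Williams design: 0, 1, n-1, 2, n-2, ...
    first, lo, hi = [0], 1, n - 1
    for k in range(1, n):
        if k % 2 == 1:
            first.append(lo); lo += 1
        else:
            first.append(hi); hi -= 1
    rows = [[(x + i) % n for x in first] for i in range(n)]
    if n % 2 == 1:
        # For odd n a single square cannot balance adjacencies;
        # adding the mirrored rows restores the balance.
        rows += [list(reversed(r)) for r in rows]
    return rows

def tester_sequences(n, t, seed=None):
    rng = random.Random(seed)
    rows = williams_rows(n)
    rng.shuffle(rows)  # random assignment of rows to testers
    # Recycle rows if there are more testers than rows.
    return [[x + 1 for x in rows[i % len(rows)]] for i in range(t)]

As for quantifying the gain over naive random generation: with independent random permutations, each (thing, position) pair is uniform only in expectation, while a design like the above makes the position counts exactly uniform within each block of testers. The difference can be measured with, say, a chi-squared statistic on the n-by-n position-count matrix, which the design drives to its minimum.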

Related

What is the most conclusive way to evaluate an n-way split test where n > 2?

I have plenty of experience designing, running and evaluating two-way split tests (A/B Tests). Those are by far the most common in digital marketing, where I do most of my work.
However, I'm wondering if anything about the methodology needs to change when more variants are introduced into an experiment (creating, say, a 3-way test (A/B/C Test)).
My instinct tells me I should just run n-1 evaluations against the control group.
If I run a 3-way split test, for example, instinct says I should find significance and power twice:
Treatment A vs Control
Treatment B vs Control
So, in that case, I'm finding out which, if any, treatment performed better than the control (1-tailed test, alt: treatment - control > 0, the basic marketing hypothesis).
But, I'm doubting my instinct. It's occurred to me that running a third test contrasting Treatment A with Treatment B could yield confusing results.
For example, what if there's not enough evidence to reject a null that treatment B = treatment A?
That would lead to a goofy conclusion like this:
Treatment A = Control
Treatment B > Control
Treatment B = Treatment A
If treatments A and B are likely only different due to random chance, how could only one of them outperform the control?
And that's making me wonder if there's a more statistically sound way to evaluate split tests with more than one treatment variable. Is there?
Your instincts are correct, and you can feel less goofy by rewording your statements:
We could find no statistically significant difference between Treatment A and Control.
Treatment B is significantly better than Control.
However, it remains inconclusive whether Treatment B is better than Treatment A.
This would be enough to declare Treatment B a winner, with the possible followup of retesting A vs B. But depending on your specific situation, you may have a business need to actually make sure Treatment B is better than Treatment A before moving forward, and you can make no such decision with your data; you must gather more data and/or run a new test.
A far more common scenario, I've found, is that Treatment A and Treatment B both soundly beat control (they're often closely related and have related hypotheses), but there is no statistically significant difference between Treatment A and Treatment B. This is an interesting scenario where, if you are required to pick a winner, it's okay to throw significance out the window and pick the one with the strongest effect. The reason is that the significance level (e.g. 95%) is set to avoid false positives and unnecessary changes; there's an assumption that there are switching costs. In this case you must pick A or B and throw out control anyway, so in my opinion it's okay to pick the best one until you have more data.
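To make the two comparisons concrete, here is a minimal sketch of the one-tailed two-proportion z-tests described above. The conversion counts are hypothetical, and the Bonferroni split of alpha is one simple (if conservative) way to control the overall false-positive rate across the two comparisons:

from math import erf, sqrt

def one_tailed_z(conv_t, n_t, conv_c, n_c):
    # H1: treatment conversion rate > control conversion rate
    p_t, p_c = conv_t / n_t, conv_c / n_c
    pooled = (conv_t + conv_c) / (n_t + n_c)
    se = sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail p-value

# Hypothetical counts: (conversions, visitors) per arm
p_a = one_tailed_z(115, 1000, 100, 1000)  # Treatment A vs Control
p_b = one_tailed_z(140, 1000, 100, 1000)  # Treatment B vs Control
alpha = 0.05 / 2  # Bonferroni: two comparisons against one control
print(p_a < alpha, p_b < alpha)  # False True: only B beats control here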

VB.Net - Recursion or Iteration

First post here, because I've got a problem that's got me stumped. I am creating a calculation tool for a project at uni; I'm not in an IT degree, but until now this project hasn't thrown a problem at me that was too difficult.
Basically I am doing design work for a building, where each floor of the building can be allocated one of 9 possible designs. By allocating a design to each floor and then calculating costs, I am trying to find the most efficient combination.
Normally I would just use some nested loops to find the best design, no problem, but the calculation tool must let the number of floors change, and therefore the number of nested loops, which is something I don't know how to do.
The general structure is this:
1 to X floors.
9 possible designs for each floor.
For each combination a cost must be calculated, and if it is within the best 5 results it is stored.
There are 9^X total combinations.
So
Floor = 1
For Solution = 1 To 9
    Floor = 2
    For Solution = 1 To 9
        CalculateCost()
        If CalculateCost < Best Then
            Write Floor1 Solution Value, Floor2 Solution Value To Output
        End If
    Next
Next
' Etc...
Now, I am using VB.net and do not really know how to do recursion. If someone could simply point me toward a resource that may help me with this issue, I would be very grateful.
Edit: While I have tried to simplify, the cost of implementing a design changes based on various other factors, so I can't simply take the cheapest design for every floor. I have tried to solve this analytically and so far found no solution, so the brute-force method is required.
You're already using For loops to try each solution for each given floor, but you're manually iterating through your Floor variable. It seems you have 9 independently declared variables with the schema FloorX Solution Value. Given all of these things, I think what you really need is a dynamic array. Here's some rough code using this approach:
Dim NumberOfFloors As Integer   ' Get this from the user
Dim BestSolution() As Byte      ' Best design (1-9) found for each floor
Dim BestCost() As Double        ' Cost of that best design
ReDim BestSolution(NumberOfFloors - 1)
ReDim BestCost(NumberOfFloors - 1)
For CurrentFloor As Integer = 0 To NumberOfFloors - 1
    BestCost(CurrentFloor) = Double.MaxValue
    For CurrentSolution As Byte = 1 To 9
        ' Assumes CalculateCost can price a single floor/design pair
        Dim Cost As Double = CalculateCost(CurrentFloor, CurrentSolution)
        If Cost < BestCost(CurrentFloor) Then
            BestCost(CurrentFloor) = Cost
            BestSolution(CurrentFloor) = CurrentSolution
        End If
    Next
Next
I'm making some assumptions that may or may not be true, but this should get you on the right path.
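Since the question's edit says the floor costs interact, so choosing per floor isn't enough, here is a sketch of the recursion the asker is reaching for: one level of recursion per floor replaces one nested loop, enumerating all 9^x combinations and keeping the best 5. It's in Python for brevity (the shape translates directly to VB.Net), and calculate_cost is a stand-in for the asker's costing routine:

import heapq

def search(num_floors, calculate_cost, keep=5):
    # best is a max-heap (via negated costs) holding the `keep`
    # cheapest complete combinations seen so far.
    best = []
    def recurse(assignment):
        if len(assignment) == num_floors:       # one full combination
            cost = calculate_cost(assignment)
            if len(best) < keep:
                heapq.heappush(best, (-cost, assignment))
            elif cost < -best[0][0]:            # cheaper than worst kept
                heapq.heapreplace(best, (-cost, assignment))
            return
        for design in range(1, 10):             # 9 possible designs
            recurse(assignment + (design,))
    recurse(())
    return sorted((-neg, a) for neg, a in best)  # cheapest first

Because the recursion depth is just num_floors, the "variable number of nested loops" problem disappears; the run time, of course, still grows as 9^x.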

Rough estimate of test cases

I'm curious how many test cases others have for a site similar to mine. It's your basic CRUD with business workflow website. 3 user roles, a couple input pages, a couple search pages, a business rule engine, etc. Maybe 50k lines of .NET code (workflow and persistence altogether). DB with about 10 main tables plus about 100 supporting tables (lookups, logs, etc.). The main UI for entering data is quite big, around 100 data fields, multiple grids, about 5 action/submit type buttons.
I know this is vague and I'm only hoping for order-of-magnitude figures. I'm also thinking of basic test cases, not code-coverage-type cases. But if I told you we had 25 test cases, I'm sure you'd say that's way, WAY too few. So I'm just looking for ballpark figures.
TIA
I would have as many test cases as it takes to ensure a high level of confidence in the system.
The number of tables, rules, lines of code, etc. is actually immaterial.
You should have the appropriate unit tests to ensure your domain objects and business rules are firing correctly. You should have tests to ensure your queries execute appropriately (this is a harder one).
You might even want to have test cases for paths through the software. In other words: click here, get this page, click there, edit a field, save the page, go back... This type is the most difficult, as the tests are usually recorded and have to be re-recorded when the pages change (i.e. a field is added or removed).
Generally speaking, it's more about coverage than the number of tests. You want your tests to cover as much of the application's functionality as is feasible. Note that I didn't say possible. You can cover an entire application (100%) with test cases, but then for every little change, bug fix, etc. you'll have to recode those tests. That is more desirable for a mature app. For newer apps you don't want to hamstring your developers and QA team that way, as they'll spend inordinate amounts of time fixing/changing unit tests...
For any system, you could easily spend as much time developing your automated tests as you do the system itself. In some cases, even more.
As for our group, we tend to have lots of unit tests. However, for testing paths through the system we only record those once a particular area has moved into a "maintenance" type of mode. Meaning we expect little change for quite a while in that area and the path test is simply to ensure no one jacked it up.
UPDATE: the comments here led me to the following:
Going a little further, let's examine one small piece of code:
Int32 AddNumbers(Int32 a, Int32 b) {
return a+b;
}
On the face of it you could get away with a single test:
Int32 result = AddNumbers(1, 2);
Assert.AreEqual(3, result);
However, that probably isn't enough. What happens if you do this:
Int32 result = AddNumbers(Int32.MaxValue, 1);
// Fails: the addition overflows and wraps to Int32.MinValue; the expected
// value has to be computed in a wider type even to express it.
Assert.AreEqual((Int64)Int32.MaxValue + 1, (Int64)result);
Now we have a failure. Here's another one:
Int32 result = AddNumbers(Int32.MinValue, -1);
// Fails: wraps around to Int32.MaxValue.
Assert.AreEqual((Int64)Int32.MinValue - 1, (Int64)result);
So, we have an extremely simple method that requires at least 3 tests: the initial one to see that it gives a correct result at all, then 2 for bounds checking. That's 3 tests for essentially 2 lines of code (the method definition and the one-line computation).
As your code becomes more complex, things get really dicey:
Decimal DivideThis(Decimal a, Decimal b) {
    return Decimal.Divide(a, b); // throws DivideByZeroException when b is zero
}
This slight change introduces yet another exception condition beyond bounds: DivideByZero. So now we are up to 4 tests required for 2 lines of code.
Now, let's simplify it a bit:
String AppendData(String data, String toAppend) {
return String.Format("{0}{1}", data, toAppend);
}
Our test case here is:
String result = AppendData("Hello", "World");
Assert.AreEqual("HelloWorld", result);
That's just one test case for the code block, with no others really needed.
What does this tell us? For starters, 2 lines of code might require between 1 and 4 test cases. You mentioned 50k lines... at that rate, you would need between 25,000 and 100,000 test cases...
Of course, life is rarely so simple. In those 50k lines of code you have, there are going to be large blocks of code that have very limited inputs. For example, a mortgage interest calculator might take 3 parameters and return 1 value (the APR). The code itself might run 100 lines or so (it's been a while, just work with me). The number of test cases for this is going to be determined by edge cases, along the lines of making sure you properly handle rounding.
So let's say it's 5 cases, which brings us to 20 lines of code = 1 case. Calculating that out, your 50k lines might result in 2,500 test cases. Obviously much smaller than what we estimated above.
Finally, I'm going to throw another wrinkle into the mix. Some test systems can handle inputs and your assertions coming from a data file. Considering our first one we could have a data file that has a line for each parameter combination we want to test. In this scenario, we only need 1 test case to cover 3 (or more..) possible conditions.
The test case might look like (pseudo code):
read input file.
parse expected result, parameter 1, parameter 2
run method
assert method result = parsed result
repeat for each line of the file
With that capability, we are down to 1 test case per scenario. I would say 1 per method, but the reality is that most methods are rarely standalone and it's entirely possible that numerous methods are implicitly tested through explicit testing of others; therefore not requiring their own individual tests.
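As a concrete version of that pseudo code, a file-driven test might look like the following sketch; add_numbers, add_cases.csv, and its expected,a,b column layout are all hypothetical:

import csv

def add_numbers(a, b):
    return a + b

def test_add_numbers_from_file():
    # One test case covering every (expected, a, b) row in the file.
    with open("add_cases.csv") as f:
        for row in csv.reader(f):
            expected, a, b = (int(x) for x in row)
            assert add_numbers(a, b) == expected, f"failed on row {row}"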
This leads me to this: It is impossible to determine the right number of test cases without a full understanding of your code base. 5 cases that are at the UI level might be enough for complete coverage depending on the complexity of the tests; or it might take thousands. Therefore it's much better to base it on code coverage. What percentage of the code, and branching logic, are you testing?
If I asked a car salesman for a rough price of a car and he simply gave me a number, I wouldn't buy my car there, because he forgot to ask me some important questions. What kind of car do you want? Which extras do you want on the car? Etc.
The same goes for the number of test cases. If a hiring manager asked me that question, I would probably give the following answer:
#test cases = between #Requirements*2 and #Requirements*infinite (some requirements can lead to billions of possibilities)
I would also say that, based on my experience, the number is realistically around #Requirements*5 (the number I use in the initial phase, for projects with new, changed and removed functionality),
where the following error margin has to be applied, depending on the phase in which I make the estimate:
Initiation phase: error margin = 400%
...
Testing phase: error margin = 10%
By the time the testing phase starts, detailed requirements/specs are available, the volatility of the requirements has stabilized, requirements creep is almost zero, etc.
At that point I will also be able to give better estimates...

Developing Rainbow Tables

I am currently working on a parallel computing project where I am trying to crack passwords using rainbow tables.
The first step I have thought of is to implement a very small version that cracks passwords of length 5 or 6 (only numeric passwords to begin with). To start, I have some questions about the configuration settings.
1. What size should I start with? My first guess is a table of 1000 (initial, final) pairs. Is this a good size to start with?
2. Number of chains: I really found no information online about what the length of a chain should be.
3. Reduction function: can someone give me any information about how I should go about building one?
Also, if anyone has any other information or an example, it would be really helpful.
There is already a wealth of rainbow tables available online. Calculating your own simply moves the computational burden from the time the attack is run to pre-computation time.
http://www.freerainbowtables.com/en/tables/
http://www.renderlab.net/projects/WPA-tables/
http://ophcrack.sourceforge.net/tables.php
http://www.codinghorror.com/blog/2007/09/rainbow-hash-cracking.html
It's a time-space tradeoff. The longer the chains are, the fewer of them you need, so the less space the table will take up, but the longer cracking each password will take.
So, the answer is always to build the biggest table you can in the space that you have available. This will determine your chain length and number of chains.
As for choosing the reduction function, it should be fast and behave pseudo-randomly. For your proposed plaintext set, you could just pick 20 bits from the hash and interpret them as a decimal number (choosing a different set of 20 bits at each step in the chain).
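To sketch that suggestion (assuming MD5 and 6-digit numeric plaintexts purely for illustration): slide a 20-bit window across the digest, with the window position driven by the chain step, and map the result into the plaintext space. Mixing the step index into the reduction is what makes this a rainbow table rather than a set of identical chains:

import hashlib

def reduce_hash(digest: bytes, step: int) -> str:
    # Take a different 20-bit window of the 128-bit digest at each chain
    # step, then map it into the 6-digit plaintext space (2^20 is slightly
    # larger than 10^6, so the modulo introduces a small, tolerable bias).
    as_int = int.from_bytes(digest, "big")
    bits = (as_int >> (step % 100)) & 0xFFFFF
    return str(bits % 10**6).zfill(6)

def build_chain(start: str, chain_length: int):
    pw = start
    for step in range(chain_length):
        pw = reduce_hash(hashlib.md5(pw.encode()).digest(), step)
    return start, pw  # only the (start, endpoint) pair is stored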

Storage algorithm question - verify sequential data with little memory

I found this on an "interview questions" site and have been pondering it for a couple of days. I will keep churning, but I am interested in what you guys think.
"10 Gbytes of 32-bit numbers on a magnetic tape, all there from 0 to 10G in random order. You have 64 32 bit words of memory available: design an algorithm to check that each number from 0 to 10G occurs once and only once on the tape, with minimum passes of the tape by a read head connected to your algorithm."
32-bit numbers can take 4G = 2^32 different values. There are 10G = 2.5 * 2^32 numbers on the tape in total, so by the pigeonhole principle at least one value is guaranteed to repeat. If there were <= 2^32 numbers on the tape, both cases would still be possible: all numbers distinct, or at least one repeated.
It's a trick question, as Michael Anderson and I have figured out. You can't store 10G 32-bit numbers on a 10 GB tape. The interviewer (a) is messing with you and (b) is trying to find out how much you think about a problem before you start solving it.
The utterly naive algorithm, which takes as many passes as there are numbers to check, would be to walk through and verify that the lowest number is there. Then do it again checking that the next lowest is there. And so on.
This requires one word of storage to keep track of where you are. You could cut the number of passes down by a factor of 64 by using all 64 words to keep track of where you're up to in several different locations in the search space, checking all of your current candidates on each pass. Still O(n) passes, of course.
You could probably cut it down even more by using portions of the words - given that your search space for each segment is smaller, you won't need to keep track of the full 32-bit range.
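A sketch of that batched scheme, with read_tape standing in for one sequential pass over the tape and the counts list standing in for the 64-word budget:

def verify_in_batches(read_tape, total, words=64):
    # Check that each value in [0, total) appears exactly once, testing
    # `words` candidate values per pass to stay within the memory budget.
    passes = 0
    for base in range(0, total, words):
        width = min(words, total - base)
        counts = [0] * width
        for x in read_tape():  # one full sequential pass of the tape
            if base <= x < base + width:
                counts[x - base] += 1
        passes += 1
        if any(c != 1 for c in counts):
            return False, passes
    return True, passes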
Perform an in-place mergesort or quicksort, using tape for storage? Then iterate through the numbers in sequence, tracking to see that each number = previous+1.
Requires a cleverly implemented sort and is fairly slow, but it achieves the goal, I believe.
Edit: oh bugger, it's never specified you can write.
Here's a second approach: scan through, trying to build up to 30-ish ranges of contiguous numbers. E.g. 1,2,3,4,5 would be one range, 8,9,10,11,12 would be another, etc. If a range overlaps an existing one, they are merged. I think you only need a limited number of passes to either get the complete range or prove there are gaps... much fewer than just scanning through in blocks of a couple thousand to see if all the numbers are present.
It'll take me a bit to prove or disprove the limits for this though.
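For illustration, one pass of that range-building idea might look like this sketch; read_tape again stands in for a sequential pass, and each [lo, hi] pair costs two of the 64 available words:

def scan_pass(read_tape, ranges, limit=30):
    # Fold each value into a set of [lo, hi] ranges, extending a range
    # whenever a value touches it; cap how many ranges are kept per pass.
    for x in read_tape():
        for r in ranges:
            if r[0] - 1 <= x <= r[1] + 1:
                r[0], r[1] = min(r[0], x), max(r[1], x)
                break
        else:
            if len(ranges) < limit:
                ranges.append([x, x])
    # Coalesce ranges that now overlap or touch.
    ranges.sort()
    merged = ranges[:1]
    for lo, hi in ranges[1:]:
        if lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return merged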
Do 2 reduces on the numbers, a sum and a bitwise XOR.
The sum should be (10G + 1) * 10G / 2
The XOR should be ... something
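The expected XOR is actually computable up front, since the XOR of 0..N follows a period-4 pattern. A sketch of the whole check, with the caveat that matching sum and XOR is necessary but not sufficient to prove the tape holds exactly 0..N once each:

def xor_0_to_n(n):
    # XOR of 0..n: n if n%4==0, 1 if n%4==1, n+1 if n%4==2, 0 if n%4==3
    return [n, 1, n + 1, 0][n % 4]

def check(read_tape, n):
    expected_sum = n * (n + 1) // 2
    expected_xor = xor_0_to_n(n)
    total, acc = 0, 0
    for x in read_tape():  # a single pass, two words of running state
        total += x
        acc ^= x
    return total == expected_sum and acc == expected_xor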
It looks like there is a catch in the question that no one has talked about so far: the interviewer has only asked the interviewee to write a program that CHECKS
(i) whether each number that makes up the 10G is present once and only once. What should the interviewee do if a number in the given list is present multiple times? Should he assume he should stop executing the program and throw an exception, or should he correct the mistake by removing the repeated number and replacing it with another (which may actually be a costly exercise, as it involves a complete reshuffle of the number set)? Correcting this would be required to perform the second step in the question, i.e. to verify that the data is stored in the best possible way, requiring the fewest possible passes.
(ii) When the interviewee is asked only to check whether the 10G data set is stored in such a way that any of the numbers can be accessed with the fewest passes, what should he do? Stop and throw an exception the moment he finds a problem with the order in which they were stored, or correct the mistake and continue until all the elements are arranged for the fewest possible passes?
If the intention of the interviewer was to ask the interviewee to write an algorithm that finds the best combination of numbers that can be stored in 10 GB given 64 32-bit registers, and also to write an algorithm that stores this chosen set in the best possible way, requiring the fewest passes to access each number, he would have asked that directly, wouldn't he?
I suppose the intention of the interviewer may simply be to see how the interviewee approaches the problem, rather than to actually extract a working solution; would anyone buy this notion?
Regards,
Samba