Shared Hash Table for parallel chess search - chess

This is a conceptual question about the shared hashtable algorithm for parallel chess search.
I have implemented an alpha-beta search that spawns 4 threads, each of which conducts a search and returns the best move/evaluation. However, I am observing search instabilities, in which the threads return different results. I am using the lockless hashtable described in the link, so some entries might be overwritten or corrupted, although corrupt data will never actually be used.
Why might search threads return different results? Is this an expected outcome of parallel search, or is it a problem? If expected, how do I know which move to select?
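For context, the lockless scheme I mean is the usual XOR-validation trick (store key XOR data; on probe, re-check the XOR and discard entries that fail). Here is a rough Python sketch of the idea, not my actual engine code, just to make the question concrete:

```python
# Rough sketch of the XOR-validation ("lockless hashing") idea.
# The entry layout and table size are illustrative assumptions only.

TABLE_SIZE = 1 << 20           # number of entries (power of two)
table = [(0, 0)] * TABLE_SIZE  # each slot holds (check_word, data_word)

def tt_store(zobrist_key: int, data_word: int) -> None:
    """Store without locks: the check word is key XOR data."""
    idx = zobrist_key & (TABLE_SIZE - 1)
    table[idx] = (zobrist_key ^ data_word, data_word)

def tt_probe(zobrist_key: int):
    """Return the stored data only if the XOR check still matches.
    A torn or overwritten entry fails the check and is simply ignored."""
    idx = zobrist_key & (TABLE_SIZE - 1)
    check_word, data_word = table[idx]
    if check_word ^ data_word == zobrist_key:
        return data_word
    return None
```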

A parallel search is expected to be non-deterministic. Once you share knowledge via the transposition table (aka hashtable), you will have this effect.
If you run parallel searches, there is no easy way to decide what the correct result is. So, what can you do? If you have four threads, you could try a majority vote, but I'm not aware of engines that do that; in practice, engines typically just report the result of a designated main thread, or of the thread that completed the deepest search.
Note that search instability is an issue you will have even in a sequential search algorithm once you start using transposition tables.
If you run a sequential search algorithm with a transposition table and later repeat the same search (without resetting the transposition table), you are not guaranteed to get the same results. The same goes for a parallel search, except that a parallel search is not even deterministic.
Search stability and determinism are not the same thing:
In a sequential search, you can achieve determinism (useful for debugging). In a parallel search, you cannot achieve it in practice.
Neither a sequential nor a parallel search can rule out search instability in practice. In a good search algorithm it should be relatively rare, however.
For an explanation why transposition tables lead to search instability, take a look at this question.


What are the advantages of knowing that a corpus of text follows Zipf's law?

I have the frequency count of all the words from a file (that I am using to analyze and index data: Elasticsearch), and the frequency of words follows Zipf's law. How can I use this knowledge to improve my search over it? Rather, how can I use it to get anything done to my benefit?
I think this is a very interesting question, and I'm sad that it's gone without answer or comment for so long. Zipfian distribution is a phenomenon that occurs not only in language, but far beyond that.
Zipf and Pareto
Zipfian distribution, or Zipf's law, is in this case a rank-frequency distribution of words. But perhaps more importantly, the Pareto distribution implies that approximately 20% of words (cause) account for roughly 80% of word occurrences (outcome) in any given body, or bodies, of text. Lucene, the engine behind Elasticsearch, accounts for this in multiple ways, and often goes beyond Zipf's law. It's common that your results will contain a Zipfian distribution.
Word frequency: least is best (usually)
One of the problems here is that in most bodies of text the most common words actually bear the least context, usually being articles or having very limited context. The top 3 most common words in English are "the", "of", and "to". Elasticsearch actually comes with a list of stop words, which optimizes indexing by ignoring articles.
Elasticsearch stop words:
a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no,
not, of, on, or, such, that, the, their, then, there, these, they,
this, to, was, will, with
It's actually a common occurrence that the words that appear least frequently bear the most context. So you're likely going to look for the least frequent words when doing text search.
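If you want to check this against your own corpus, a quick rank-frequency count is enough. Here's a rough Python sketch (the file path is a placeholder) that drops the stop words listed above and prints rank times frequency, which stays roughly constant for a Zipfian corpus:

```python
from collections import Counter
import re

# The Elasticsearch default stop-word list quoted above.
STOP_WORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
    "that", "the", "their", "then", "there", "these", "they", "this",
    "to", "was", "will", "with",
}

def rank_frequency(path: str, top_n: int = 20) -> None:
    """Count word frequencies, skip stop words, and show rank * frequency,
    which should be roughly constant if the corpus is Zipfian."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    for rank, (word, freq) in enumerate(counts.most_common(top_n), start=1):
        print(f"{rank:>4}  {word:<15} {freq:>8}  rank*freq={rank * freq}")

# rank_frequency("corpus.txt")   # "corpus.txt" is a placeholder path
```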
80:20 phenomenon
The thing is, Elasticsearch and Lucene are both built with these things in mind and well optimized for them. A simple LRU eviction policy for caching indices actually works very well, as 80% of your searches will likely use 20% of your actual indices, making cache pollution both infrequent and low-impact thanks to a predictable workload. So if you allocate a cache size larger than 20% of your total index size, you should be fine. In the event that the index is not in cache, it will be read off disk (usually via mmap), and you can improve performance by using a drive with fast random reads (like an SSD).
More Reading
There is an interesting article on this. It's likely that the overall word ranking in your data set looks very similar to the word ranking of most other data sets. So optimizing for performance as well as relevance comes down to those few words which are likely to occur least often but are likely to be searched for the most. This may be jargon specific to the demographic/profession your application is targeting.
Conclusion
These optimizations, however, could be premature. As I stated, Lucene and Elasticsearch both do their part to increase the effectiveness and efficiency of search with these principles in mind, and a simple LRU cache works very well in this case; LRU is both common (already part of ES) and relatively simple. Cases where further tuning might be worthwhile are usually ones where you have a lot of jargon, very specific language, or perhaps multilingual content. For something like a news site you'll likely want a broader solution, as you cover a huge spectrum of topics with many different words and subjects. These are usually things to consider when you're configuring Elasticsearch; tinkering with the analyzer can be complicated and may be hard to do effectively, especially if you have a large range of subjects with different terminologies to index, but this is where you'll likely have the largest effect on search relevance.

Where can I find several significant sorting algorithms tests cases?

I want to develop a very efficient sorting algorithm based on some ideas that I have. The problem is that I want to test my algorithm's efficiency against the most highly regarded sorting algorithms that already exist.
Ideally I would like to find:
a large set of sorting test cases that are SIGNIFICANT for measuring my algorithm's efficiency
a large set of already existing and strongly-optimized sorting algorithms (with their code - no matter the language)
even better, software that provides an adequate environment for developers of sorting algorithms
Here's a post that I found earlier which contains 2 tables with comparisons between timsort, quicksort, dual-pivot quicksort and java 6 sort: http://blog.quibb.org/2009/10/sorting-algorithm-shootout/
I can see in those tables that those TXT files (starting from 1245.repeat.1000.txt on to sequential.10000000.txt) contain the test cases for those algorithms, but I can't find the original TXT's anywhere!
Can anyone point me to any link with many sorting test-cases AND/OR many HIGHLY EFFICIENT sorting algorithms? (it's the test cases I am interested in the most, sorting algorithms are all over the internet)
Thank you very much in advance!
A few things:
Quicksort goes quadratic on forward- and reverse-sorted lists (at least with naive pivot selection), so it will need other list types besides random data.
Testing on random data is fine, but if you want to compare the performance of different algorithms, you cannot generate new random data every time or your results won't be reliable. I think you should come up with a pseudo-"random" generator that writes data in an order determined by the number of entries, so that the data generated for lists of size n, 10n and 100n is similar (there's a rough sketch of this, together with comparison counting, after these points).
Testing of sorting is not primarily about speed (until an algorithm has been finalized) but about the ratio of comparisons to entries. If one sort requires 15 comparisons per entry on a list and another requires 12 on the same list, the second is more efficient even if it executes in twice the time. For the more trivial sorting concepts, the number of exchanges required will also come into play.
For testing, use a vector of integers in RAM. If the algorithm works well, the vector of integers can be translated into a vector of indices into a buffer containing the data to be compared. Such an algorithm would sort the vector of indices based on the data they point to.
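To tie these points together, here is a rough Python sketch of what I mean: a generator seeded by the list size and distribution name (so the same data comes back every run) and a wrapper that counts comparisons per entry. The distribution names and sizes are arbitrary choices, not a standard benchmark:

```python
import random

def make_test_data(n: int, kind: str):
    """Deterministic test inputs: seeding with (n, kind) means the same
    list is produced every run, so every algorithm sees identical data."""
    rng = random.Random(f"{kind}-{n}")
    if kind == "random":
        return [rng.randrange(n) for _ in range(n)]
    if kind == "sorted":
        return list(range(n))
    if kind == "reversed":
        return list(range(n, 0, -1))
    if kind == "few_unique":          # lots of repeated values
        return [rng.randrange(16) for _ in range(n)]
    if kind == "nearly_sorted":       # sorted with a few random swaps
        data = list(range(n))
        for _ in range(max(1, n // 100)):
            i, j = rng.randrange(n), rng.randrange(n)
            data[i], data[j] = data[j], data[i]
        return data
    raise ValueError(kind)

class CountingKey:
    """Wrap each element so every comparison is counted."""
    comparisons = 0
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):
        CountingKey.comparisons += 1
        return self.value < other.value

def comparisons_per_entry(sort_fn, data):
    """Machine-independent efficiency measure: comparisons per entry."""
    CountingKey.comparisons = 0
    sort_fn([CountingKey(x) for x in data])
    return CountingKey.comparisons / len(data)

# Example: comparisons_per_entry(sorted, make_test_data(10_000, "nearly_sorted"))
```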

What model best suits optimizing for a real-time strategy game?

An article has been making the rounds lately discussing the use of genetic algorithms to optimize "build orders" in StarCraft II.
http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-starcraft-2-build-orders/
The initial state of a StarCraft match is pre-determined and constant. And like chess, decisions made in this early stage of the match have long-standing consequences for a player's ability to perform in the mid and late game. So the various opening possibilities, or "build orders", are under heavy study and scrutiny. Until the circulation of the above article, computer-assisted build order creation probably wasn't as popular as it has become recently.
My question is... Is a genetic algorithm really the best way to model optimizing build orders?
A build order is a sequence of actions. Some actions have prerequisites like, "You need building B before you can create building C, but you can have building A at any time." So a chromosome may look like AABAC.
I'm wondering if a genetic algorithm really is the best way to tackle this problem. Although I'm not too familiar with the field, I'm having a difficult time shoe-horning the concept of genes into a data structure that is a sequence of actions. These aren't independent choices that can be mixed and matched like a head and a foot. So what value is there to things like reproduction and crossing?
I'm thinking whatever chess AIs use would be more appropriate since the array of choices at any given time could be viewed as tree-like in a way.
Although I'm not too familiar with the field, I'm having a difficult time shoe-horning the concept of genes into a data structure that is a sequence of actions. These aren't independent choices that can be mixed and matched like a head and a foot. So what value is there to things like reproduction and crossing?
Hmm, that's a very good question. Perhaps the first few moves in Starcraft can indeed be performed in pretty much any order, since contact with the enemy is not as immediate as it can be in Chess, and therefore it is not as important to remember the order of the first few moves as it is to know which of the many moves are included in those first few. But the link seems to imply otherwise, which means the 'genes' are indeed not all that amenable to being swapped around, unless there's something cunning in the encoding that I'm missing.
On the whole, and looking at the link you supplied, I'd say that genetic algorithms are a poor choice for this situation, which could be accurately mathematically modelled in some parts and the search tree expanded out in others. They may well be better than an exhaustive search of the possibility space, but may not be - especially given that there are multiple populations and poorer ones are just wasting processing time.
However, what I mean by "a poor choice" is that it is inefficient relative to a more appropriate approach; that's not to say that it couldn't still produce 98% optimal results in under a second or whatever. In situations such as this where the brute force of the computer is useful, it is usually more important that you have modelled the search space correctly than to have used the most effective algorithm.
As TaslemGuy pointed out, genetic algorithms aren't guaranteed to find the optimal solution, even though they usually give good results.
To get optimal results you would have to search through every possible combination of actions until you find the optimal path through the tree-like representation. However, doing this for StarCraft is difficult, since there are so many different paths to reach a goal. In chess you move a pawn from e2 to e4 and then the opponent moves. In StarCraft you can move a unit at instant x or x+1 or x+10 or ...
A chess engine can look at many different aspects of the board (e.g. how many pieces it has and how many the opponent has) to guide its search. It can ignore most of the available actions if it knows that they are strictly worse than others.
For a build-order creator only time really matters. Is it better to build another drone to get minerals faster, or is it faster to start that spawning pool right away? Not as straightforward as with chess.
These kinds of decisions happen pretty early on, so you will have to search each alternative to conclusion before you can decide on the better one, which will take a long time.
If I were to write a build-order optimizer myself, I would probably try to formulate a heuristic that estimates how good (how close to the goal state) the current state is, just as chess engines do:
Score = a*(Buildings_and_units_done/Buildings_and_units_required) - b*Time_elapsed - c*Minerals - d*Gas + e*Drone_count - f*Supply_left
This tries to keep the score tied to the completion percentage as well as to StarCraft common knowledge (keep your resources low, build drones, don't build more supply than you need). The variables a to f would need tweaking, of course.
After you've got a heuristic that can somewhat estimate the worth of a situation, I would use Best-first search or maybe IDDFS to search through the tree of possibilities.
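To make that concrete, here is a minimal Python sketch of such a heuristic plus a greedy best-first search skeleton. The state fields, the weights, and the successors function are placeholders you would have to fill in with real game data:

```python
import heapq
from itertools import count

# Placeholder weights corresponding to a..f in the formula above.
A, B, C, D, E, F = 10.0, 0.5, 0.01, 0.01, 1.0, 0.1

def score(state) -> float:
    """The heuristic from above: completion ratio minus penalties."""
    return (A * state.done / state.required
            - B * state.time_elapsed
            - C * state.minerals
            - D * state.gas
            + E * state.drone_count
            - F * state.supply_left)

def best_first_search(start, successors, is_goal, max_nodes=100_000):
    """Greedy best-first search: always expand the highest-scoring state.
    successors(state) must yield the states reachable by one action."""
    tie = count()  # tie-breaker so heapq never has to compare states
    frontier = [(-score(start), next(tie), start)]
    for _ in range(max_nodes):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in successors(state):
            heapq.heappush(frontier, (-score(nxt), next(tie), nxt))
    return None
```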
Edit:
I recently found a paper that actually describes build order optimization in StarCraft, in real time even. The authors use depth-first search with branch and bound and heuristics that estimate the minimum amount of effort required to reach the goal based on the tech tree (e.g. zerglings need a spawning pool) and the time needed to gather the required minerals.
A genetic algorithm may or may not find an optimal solution, depending on the complexity of the encoding, how much mutation there is, the forms of recombination used, and how the chromosomes are interpreted.
So, depending on how your AI is implemented, Genetic Algorithms can be the best.
You are looking at a SINGLE way to implement genetic algorithms, while forgetting about genetic programming, the use of math, higher-order functions, etc. Genetic algorithms can be EXTREMELY sophisticated, and by using clever combining systems for crossbreeding, extremely intelligent.
For instance, neural networks are optimized by genetic algorithms quite often.
Look up "Genetic Programming." It's similar, but uses tree-structures instead of lines of characters, which allows for more complex interactions that breed better. For more complex stuff, they typically work out better.
There's been some research done using hierarchical reinforcement learning to build a layered ordering of actions that efficiently maximizes a reward. I haven't found much code implementing the idea, but there are a few papers describing MAXQ-based algorithms that have been used to explicitly tackle real-time strategy game domains, such as this and this.
This Genetic algorithm only optimizes the strategy for one very specific part of the game: The order of the first few build actions of the game. And it has a very specific goal as well: To have as many roaches as quickly as possible.
The only aspects influencing this system seem to be (I'm no starcraft player):
the build time of the various units and buildings,
the units and buildings allowed given the units and buildings already available, and
the larva regeneration rate.
This is a relatively limited, relatively well-defined problem with a large search space. As such it is very well suited for genetic algorithms (and quite a few other optimization algorithms, for that matter). A full gene is a specific build order that ends with the 7th roach. From what I understand, you can just "play" this specific gene to see how fast it finishes, so you have a very clear fitness test.
You also have a few nice constraints on the build order, so you can combine different genes slightly smarter than just randomly.
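As a stripped-down illustration of those last two points (a playable fitness test and constraint-aware recombination), a sketch might look like this; the action names, prerequisite table, and simulate() function are hypothetical stand-ins for real game data:

```python
import random

# Hypothetical prerequisite table: action -> buildings/units it needs first.
PREREQS = {"C": {"B"}}            # e.g. "you need B before C"
ACTIONS = ["A", "B", "C"]

def is_legal(order):
    """A build order is legal if every prerequisite appears earlier in it."""
    seen = set()
    for action in order:
        if not PREREQS.get(action, set()) <= seen:
            return False
        seen.add(action)
    return True

def crossover(parent1, parent2, rng):
    """One-point crossover that retries until the child is legal,
    i.e. it respects the prerequisites instead of mixing blindly."""
    if min(len(parent1), len(parent2)) < 2:
        return parent1[:]
    for _ in range(50):
        cut = rng.randrange(1, min(len(parent1), len(parent2)))
        child = parent1[:cut] + parent2[cut:]
        if is_legal(child):
            return child
    return parent1[:]                 # fall back to a copy of a parent

def fitness(order, simulate):
    """Lower is better: simulate() would 'play' the order and return the
    completion time (the clear fitness test mentioned above)."""
    return simulate(order)

def evolve(population, simulate, generations=200, seed=0):
    """Very plain GA: keep the faster half, refill with constrained crossover."""
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=lambda o: fitness(o, simulate))
        survivors = population[: len(population) // 2]
        children = [crossover(rng.choice(survivors), rng.choice(survivors), rng)
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return min(population, key=lambda o: fitness(o, simulate))
```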
A genetic algorithm used in this way is a very good tool to find a more optimal build order for the first stage of a game of starcraft. Due to its random nature it is also good at finding a surprising strategy, which might have been an additional goal of the author.
To use a genetic algorithm as the algorithm in an RTS game you'd have to find a way to encode reactions to situations rather than just plain old build orders. This also involves correctly identifying situations which can be a difficult task in itself. Then you'd have to let these genes play thousands of games of starcraft, against each other and (possibly) against humans, selecting and combining winners (or longer-lasting losers). This is also a good application of genetic algorithms, but it involves solving quite a few very hard problems before you even get to the genetic algorithm part.

Sphinx/Solr/Lucene/Elastic Relevancy

We have an extremely large database of 30+ Million products, and need to query them to create search results and ad displays thousands of times a second. We have been looking into Sphinx, Solr, Lucene, and Elastic as options to perform these constant massive searches.
Here's what we need to do. Take keywords and run them through the database to find products that match the closest. We're going to be using our OWN algorithm to decide which products are most related to target our advertisements, but we know that these engines already have their own relevancy algorithms.
So, our question is how can we use our own algorithms on top of the engine's, efficiently. Is it possible to add them to the engines themselves as a module of some sort? Or would we have to rewrite the engine's relevancy code? I suppose we could implement the algorithm from the application by executing multiple queries, but this would really kill efficiency.
Also, we'd like to know which search solution would work best for us. Right now we're leaning towards Sphinx, but we're really not sure.
Also, would you recommend running these engines over MySQL, or would it be better to run them over some type of key-value store like Cassandra? Keep in mind there are 30 Million records, and likely to double as we move along.
Thanks for your responses!
I can't give you an entire answer, as I haven't used all the products, but I can say some things which might help.
Lucene/Solr uses a vector space model. I'm not certain what you mean when you say you're using your "own" algorithm, but if it gets too far away from the notion of tf/idf (say, by using a neural net), you're going to have difficulty fitting it into Lucene. If by your own algorithm you just mean you want to weight certain terms more heavily than others, that will fit in fine. Basically, Lucene stores information about how important a term is to a document. If you want to redefine the calculation of how important a term is, that's easy to do. If you want to get away from the whole notion of a term's importance to a document, that's going to be a pain.
Lucene (and as a result Solr) stores things in its own custom format. You don't need to use a database. 30 million records is not a remarkably large Lucene index (depending, of course, on how big each record is). If you do want to use a db, use Hadoop.
In general, you will want to use Solr instead of Lucene.
I have found it very easy to modify Lucene. But as my first bullet point said, if you want to use an algorithm that's not based on some notion of a term's importance to a document, I don't think Lucene will be the way to go.
I actually did something similar with Solr. I can't comment on the details, but basically the proprietary analysis/relevance step generated a series of search terms with associated boosts and fed them to Solr. I think this can be done with any search engine (they all support some sort of boosting).
Ultimately it comes down to what your particular analysis requires.
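For what it's worth, the "terms with associated boosts" approach maps directly onto the standard Lucene/Solr query syntax (term^boost). A rough Python sketch, where the Solr URL, the core name, and the boost values are all assumptions about your setup:

```python
import requests  # third-party HTTP client

def boosted_query(terms_with_boosts):
    """Turn {'red': 3.2, 'shoes': 1.5} into Lucene syntax: 'red^3.2 shoes^1.5'."""
    return " ".join(f"{term}^{boost}" for term, boost in terms_with_boosts.items())

def search(terms_with_boosts, rows=10):
    # The URL and core name ("products") are placeholders for your own setup.
    params = {"q": boosted_query(terms_with_boosts), "rows": rows, "wt": "json"}
    resp = requests.get("http://localhost:8983/solr/products/select", params=params)
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

# Your own relevance algorithm decides the boosts; the engine does the heavy lifting:
# search({"red": 3.2, "running": 2.0, "shoes": 1.5})
```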

How to evaluate a search engine?

I am a student carrying out a study to enhance a search engine's existing algorithm.
I want to know how I can evaluate the search engine - which I have improved - to quantify how much the algorithm was improved.
How should I go about comparing the old and new algorithm?
Thanks
This is normally done by creating a test suite of questions and then evaluating how well the search response answers those questions. In some cases the responses should be unambiguous (if you type slashdot into a search engine you expect to get slashdot.org as your top hit), so you can think of these as a class of hard queries with 'correct' answers.
Most other queries are inherently subjective. To minimise bias you should ask multiple users to try your search engine and rate the results for comparison with the original. Here is an example of a computer science paper that does something similar:
http://www.cs.uic.edu/~liub/searchEval/SearchEngineEvaluation.htm
Regarding specific comparison of the algorithms, although obvious, what you measure depends on what you're interested in knowing. For example, you can compare efficiency in computation, memory usage, crawling overhead or time to return results. If you are trying to produce very specific behaviour, such as running specialist searches (e.g. a literature search) for certain parameters, then you need to explicitly test this.
Heuristics for relevance are also a useful check. For example, when someone uses search terms that are probably 'programming-related', do you tend to get more results from stackoverflow.com? Would your search results be better if you did? If you are providing a set of trust weightings for specific sites or domains (e.g. rating .edu or .ac.uk domains as more trustworthy for technical results), then you need to test the effectiveness of these weightings.
First, let me start out by saying: kudos to you for attempting to apply traditional research methods to search engine results. Many SEOs have done this before you, and they generally keep it to themselves, as sharing "amazing findings" usually means you lose your edge and can't exploit them anymore. That said, I will share some pointers and things to look for as best I can.
Identify what part of the algorithm you are trying to improve.
Different searches execute different algorithms.
Broad Searches
For instance, in a broad-term search, engines tend to return a variety of results. Common parts of these results include:
News Feeds
Products
Images
Blog Posts
Local Results (this is based off of a Geo IP lookup).
Which of these result types are thrown into the mix can vary based on the word.
Example: "cats" returns images of cats and news; "shoes" returns local shopping for shoes. (This is based on my IP in Chicago on October 6th.)
The goal in returning results for a broad term is to provide a little bit of everything for everyone so that everyone is happy.
Regional Modifiers
Generally, any time a regional term is attached to a search, it will modify the results greatly. If you search for "Chicago web design", then because the word Chicago is attached, the results will start with the top 10 regional results (these are the one-liners to the right of the map); after that, 10 listings will display in the general result fashion.
The results in the "top ten local" tend to be drastically different from those in the organic listings below, because the local results (from Google Maps) rely on entirely different data for ranking.
Example: Having a phone number on your website with a Chicago area code will help in local results... but NOT in the general results. The same goes for your address, Yellow Book listing, and so forth.
Results Speed
Currently (as of 10/06/09) Google is beta testing "Caffeine". The main highlight of this engine build is that it returns results in almost half the time. Although you may not consider Google to be slow now, speeding up an algorithm matters when millions of searches happen every hour.
Reducing Spam Listings
We have all experienced a search that was riddled with spam. The new release of Google Caffeine http://www2.sandbox.google.com/ is a good example. Over the last 10+ years, one of the largest battles online has been between search engine optimizers and search engines. Gaming Google (and other engines) is highly profitable and is what Google spends most of its time combating.
A good example is again the new release of Google Caffeine. So far my research, and that of a few others in the SEO field, finds this to be the first build in over 5 years to put more weight on on-site elements (such as keywords, internal site linking, etc.) than prior builds. Before this, each "release" seemed to favor inbound links more and more; this is the first to take a step back towards "content".
Ways to test an algorithm
Compare two builds of the same engine. This is currently possible by comparing Caffeine (see the link above, or search for "google caffeine") and the current Google.
Compare local results in different regions. Try finding search terms, like "web design", that return local results without a local keyword modifier. Then use a proxy (found via Google) to search from various locations. You will want to make sure you know the proxy's location (find a site on Google that will tell you your IP address's GeoIP zip code or city). Then you can see how different regions return different results.
Warning: DON'T pick the term "locksmith", and be wary of any terms whose results have LOTS of spammy listings. Google local is fairly easy to spam, especially in competitive markets.
Do as mentioned in a prior answer: compare how many "click backs" users require to find a result. You should know that, currently, no major engines use "bounce rates" as indicators of a site's accuracy. This is PROBABLY because it would be EASY to make it look like your result has a bounce rate in the 4-8% range without actually having one that low; in other words, it would be easy to game.
Track how many search variations users need on average for a given term in order to find the result they want. This is a good indicator of how well an engine is at guessing the query type (as mentioned WAY up in this answer).
Disclaimer: These views are based on my industry experience as of October 6th, 2009. One thing about SEO and engines is that they change EVERY DAY. Google could release Caffeine tomorrow, and that would change a lot... that said, this is the fun of SEO research!
Cheers
In order to evaluate something, you have to define what you expect from it. This will help to define how to measure it.
Then, you'll be able to measure the improvement.
Concerning a search engine, I guess that you might be able to measure its ability to find things and its accuracy in returning what is relevant.
It's an interesting challenge.
I don't think you will find a final mathematical solution if that is your goal. In order to rate a given algorithm, you require standards and goals that must be accomplished.
What is your baseline to compare against?
What do you classify as "improved"?
What do you consider a "successful search"?
How large is your test group?
What are your tests?
For example, if your goal is to improve the process of page ranking, then decide whether you are judging the efficiency of the algorithm or its accuracy. Judging efficiency means that you time your code against a consistent, large data set and record the results. You would then work with your algorithm to improve the time.
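A minimal timing harness along those lines might look like the sketch below; the query set and the search functions are placeholders for your old and new algorithms:

```python
import time

def time_algorithm(search_fn, queries, repeats=3):
    """Run the same fixed query set several times and keep the best wall-clock
    time, so old and new algorithms are measured on identical data."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for q in queries:
            search_fn(q)
        best = min(best, time.perf_counter() - start)
    return best

# Compare: time_algorithm(old_search, queries) vs. time_algorithm(new_search, queries)
```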
If your goal is to improve accuracy then you need to define what is "inaccurate". If you search for "Cup" you can only say that the first site provided is the "best" if you yourself can accurately define what is the best answer for "Cup".
My suggestion for you would be to narrow the scope of your experiment. Define one or two qualities of a search engine that you feel need refinement and work towards improving them.
In the comments you've said "I have heard about a way to measure the quality of search engines by counting how many times a user needs to click the back button before finding the link he wants, but I can't use this technique because you need users to test your search engine and that is a headache in itself". Well, if you put your engine on the web for free for a few days and advertise a little, you will probably get at least a couple dozen tries. Provide these users with the old or new version at random, and measure those clicks.
Another possibility: assume Google is by definition perfect, and compare your answers to its answers for certain queries. (For example, the sum of the distances of your top ten links to their counterparts at Google: if your second link is Google's twelfth link, that's a distance of 10.) That's a huge assumption, but far easier to implement.
Information scientists commonly use precision and recall as two competing measures of quality for an information retrieval system (like a search engine).
So you could measure your search engine's performance relative to Google's by, for example, counting the number of relevant results in the top 10 (call that precision) and the number of important pages for that query that you think should have been in the top 10 but weren't (call that recall).
You'll still need to compare the results from each search engine by hand on some set of queries, but at least you'll have one metric to evaluate them on. And the balance of these two is important too: otherwise you can trivially get perfect precision by not returning any results or perfect recall by returning every page on the web as a result.
The Wikipedia article on precision and recall is quite good (and defines the F-measure which takes into account both).
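Once you have hand-labelled relevant pages for each query, these measures are simple to compute. A small sketch (the relevance judgments themselves still have to come from people):

```python
def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results that are relevant."""
    top_k = results[:k]
    return sum(1 for r in top_k if r in relevant) / max(len(top_k), 1)

def recall_at_k(results, relevant, k=10):
    """Fraction of all relevant pages that made it into the top-k."""
    top_k = set(results[:k])
    return len(top_k & relevant) / max(len(relevant), 1)

def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example with made-up URLs:
# results  = ["a.com", "b.com", "c.com"]
# relevant = {"a.com", "d.com"}
# p, r = precision_at_k(results, relevant), recall_at_k(results, relevant)
# print(p, r, f_measure(p, r))
```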
I have had to test a search engine professionally. This is what I did.
The search included fuzzy matching. The user would type "Kari Trigger" into a web page, and the search engine would retrieve entries like "Gary Trager", "Trager, C", "Corey Trager", etc., each with a score from 0 to 100, so that I could rank them from most likely to least likely.
First, I re-architected the code so that it could be executed outside the web page, in a batch mode that takes a big file of search queries as input. For each line in the input file, the batch mode would write out the top search result and its score. I harvested thousands of actual search queries from our production system and ran them through the batch setup in order to establish a baseline.
From then on, each time I modified the search logic, I would run the batch again and then diff the new results against the baseline. I also wrote tools to make it easier to see the interesting parts of the diff. For example, I didn't really care if the old logic returned "Corey Trager" as an 82 and the new logic returned it as an 83, so my tools would filter those out.
I could not have accomplished as much by hand-crafting test cases. I just wouldn't have had the imagination and insight to have created good test data. The real world data was so much richer.
So, to recap:
1) Create a mechanism that lets you diff the results of running new logic versus the results of prior logic.
2) Test with lots of realistic data.
3) Create tools that help you work with the diff, filtering out the noise, enhancing the signal.
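A bare-bones sketch of steps 1 and 3 in Python (the file formats and the score-difference threshold here are illustrative, not the real tooling):

```python
import csv

def run_batch(search_fn, queries_path, out_path):
    """Write 'query, top_result, score' for every query in the input file."""
    with open(queries_path, encoding="utf-8") as qf, \
         open(out_path, "w", newline="", encoding="utf-8") as of:
        writer = csv.writer(of)
        for line in qf:
            query = line.strip()
            if query:
                result, score = search_fn(query)   # your engine in batch mode
                writer.writerow([query, result, score])

def diff_baselines(old_path, new_path, min_score_delta=5):
    """Report only interesting changes: a different top result, or a score
    shift bigger than the threshold (82-vs-83 wobbles are filtered out)."""
    def load(path):
        with open(path, newline="", encoding="utf-8") as f:
            return {q: (r, float(s)) for q, r, s in csv.reader(f)}
    old, new = load(old_path), load(new_path)
    for query in sorted(old.keys() & new.keys()):
        (old_r, old_s), (new_r, new_s) = old[query], new[query]
        if old_r != new_r or abs(old_s - new_s) >= min_score_delta:
            print(f"{query!r}: {old_r} ({old_s}) -> {new_r} ({new_s})")
```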
You have to clearly identify positive and negative qualities such as how fast one gets the answer they are seeking or how many "wrong" answers they get on the way there. Is it an improvement if the right answer is #5 but the results are returned 20 times faster? Things like that will be different for each application. The correct answer may be more important in a corporate knowledge base search but a fast answer may be needed for a phone support application.
Without parameters no test can be claimed to be a victory.
Embrace the fact that the quality of search results is ultimately subjective. You should have multiple scoring algorithms for your comparison: the old one, the new one, and a few control groups (e.g. scoring by URI length or page size or some similarly intentionally broken concept). Now pick a bunch of queries that exercise your algorithms, say a hundred or so. Let's say you end up with 4 algorithms total. Make a 4x5 table, displaying the first 5 results of a query for each algorithm. (You could do the top ten, but the first five are way more important.) Be sure to randomize which algorithm appears in each column. Then plop a human in front of this thing and have them pick which of the 4 result sets they like best. Repeat across your entire query set. Repeat for as many more humans as you can stand. This should give you a fair comparison based on total wins for each algorithm.
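A small helper for the randomization and tallying described above might look like this; the algorithm names and the way results are displayed are placeholders:

```python
import random
from collections import Counter

# Placeholder algorithm names: old, new, and two intentionally broken controls.
ALGORITHMS = ["old", "new", "control_uri_length", "control_page_size"]

def present_query(results_by_algorithm, rng=random.Random()):
    """Shuffle the columns so the judge can't tell which algorithm is which.
    Returns the hidden column -> algorithm mapping for scoring later."""
    order = ALGORITHMS[:]
    rng.shuffle(order)
    for col, name in enumerate(order, start=1):
        print(f"Column {col}: {results_by_algorithm[name][:5]}")  # top 5 only
    return dict(enumerate(order, start=1))

def tally(judgements):
    """judgements: list of (column_to_algorithm_mapping, chosen_column) pairs.
    Returns total wins per algorithm, best first."""
    wins = Counter(mapping[chosen] for mapping, chosen in judgements)
    return wins.most_common()
```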
http://www.bingandgoogle.com/
Create an app like this that compares and extracts the data. Then run a test with 50 different things you need to look for and then compare with the results you want.