Lucene SpellChecker Prefer Permutations or special scoring

I'm using Lucene.NET 3.0.3
How can I modify the scoring of the SpellChecker (or queries in general) using a given function?
Specifically, I want the SpellChecker to score any results that are permutations of the searched word higher than the rest of the suggestions, but I don't know where this should be done.
I would also accept an answer explaining how to do this with a normal query. I have the function, but I don't know if it would be better to make it a query or a filter or something else.

I think the best way to go about this would be to use a customized Comparator in the SpellChecker object.
Check out the source code of the default comparator here:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.lucene/lucene-spellchecker/3.6.0/org/apache/lucene/search/spell/SuggestWordScoreComparator.java?av=f
Pretty simple stuff, should be easy to extend if you already have the algorithm you want to use to compare the two Strings.
Then you can set it up to use your comparator with SpellChecker.SetComparator.
I think I mentioned the possibility of using a Filter for this in a previous question of yours, but looking at it a bit more, I don't think that's really the right way to go.
EDIT---
Indeed, no Comparator is available in 3.0.3, so I believe you'll need to access the scoring through a StringDistance object. The Comparator would be nicer, since the scoring has already been applied and is passed into it to do what you please with it. Extending a StringDistance may be a bit less concrete, since you will have to apply your rules as part of the score.
You'll probably want to extend LevensteinDistance (source code), which is the default StringDistance implementation, but of course, feel free to try JaroWinklerDistance as well; I'm not really that familiar with that algorithm.
Primarily, you'll want to override getDistance and apply your scoring rules there, after getting a baseline distance from the standard (parent) implementation's getDistance call.
I would probably implement something like the following (assuming you have a helper method boolean isPermutation(String, String)):
class CustomDistance extends LevensteinDistance {
    @Override
    public float getDistance(String target, String other) {
        float distance = super.getDistance(target, other);
        if (isPermutation(target, other)) {
            distance = distance + (1 - distance) / 2;
        }
        return distance;
    }
}
To calculate a score half again closer to 1 for a result that is a permutation (that is, if the default algorithm gave distance = .6, this would return distance = .8, etc.). Distances returned must be between 0 and 1. My example is just one idea of a possible scoring for it, but you will likely need to tune your algorithm somewhat. I'd be cautious about simply returning 1.0 for all permutations, since that would be certain to prefer 'isews' over 'weis' when searching for 'weiss', and it would also lose the ability to sort the closeness of different permutations ('isews' and 'wiess' would be equal matches to 'weiss' in that case).
Once you have your custom StringDistance, it can be passed to SpellChecker either through the constructor or with SpellChecker.setStringDistance.
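On the Java side, wiring that in looks roughly like this (a minimal sketch; spellIndexDirectory and the searched word are placeholders, and the Lucene.NET method names may differ slightly):
SpellChecker spellChecker = new SpellChecker(spellIndexDirectory, new CustomDistance());
// or, on an existing instance:
spellChecker.setStringDistance(new CustomDistance());
String[] suggestions = spellChecker.suggestSimilar("weiss", 10);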

From femtoRgon's advice, here's what I ended up doing:
public class PermutationDistance : SpellChecker.Net.Search.Spell.StringDistance
{
    public PermutationDistance()
    {
    }

    public float GetDistance(string target, string other)
    {
        LevenshteinDistance l = new LevenshteinDistance();
        float distance = l.GetDistance(target, other);
        distance = distance + ((1 - distance) * PermutationScore(target, other));
        return distance;
    }

    public bool IsPermutation(string a, string b)
    {
        char[] ac = a.ToLower().ToCharArray();
        char[] bc = b.ToLower().ToCharArray();
        Array.Sort(ac);
        Array.Sort(bc);
        a = new string(ac);
        b = new string(bc);
        return a == b;
    }

    public float PermutationScore(string a, string b)
    {
        char[] ac = a.ToLower().ToCharArray();
        char[] bc = b.ToLower().ToCharArray();
        Array.Sort(ac);
        Array.Sort(bc);
        a = new string(ac);
        b = new string(bc);
        LevenshteinDistance l = new LevenshteinDistance();
        return l.GetDistance(a, b);
    }
}
Then:
_spellChecker.setStringDistance(new PermutationDistance());
List<string> suggestions = _spellChecker.SuggestSimilar(word, 10).ToList();

Related

Lucene ignores / overwrites fuzzy edit distance in QueryParser

Given the following QueryParser with a FuzzySearch term in the query string:
fun fuzzyquery() {
    val query = QueryParser("term", GermanAnalyzer()).parse("field:search~4")
    println(query)
}
The resulting Query will actually have this representation:
field:search~2
So, the ~4 gets rewritten to ~2. I traced the code down to the following implementation:
QueryParserBase
protected Query newFuzzyQuery(Term term, float minimumSimilarity, int prefixLength) {
    String text = term.text();
    int numEdits = FuzzyQuery.floatToEdits(minimumSimilarity, text.codePointCount(0, text.length()));
    return new FuzzyQuery(term, numEdits, prefixLength);
}
FuzzyQuery
public static int floatToEdits(float minimumSimilarity, int termLen) {
    if (minimumSimilarity >= 1.0F) {
        return (int)Math.min(minimumSimilarity, 2.0F);
    } else {
        return minimumSimilarity == 0.0F ? 0 : Math.min((int)((1.0D - (double)minimumSimilarity) * (double)termLen), 2);
    }
}
As is clearly visible, any value higher than 2 will always get reset to 2. Why is this and how can I correctly get the fuzzy edit distance I want into the query parser?
This may cross the border into "not an answer" - but it is too long for a comment (or a few comments):
Why is this?
That was a design decision, it would seem. It's mentioned in the documentation here.
"The value is between 0 and 2"
There is an old article here which gives an explanation:
"Larger differences are far more expensive to compute efficiently and are not processed by Lucene.".
I don't know how official that is, however.
More officially, from the JavaDoc for the FuzzyQuery class, it states:
"At most, this query will match terms up to 2 edits. Higher distances (especially with transpositions enabled), are generally not useful and will match a significant amount of the term dictionary."
How can I correctly get the fuzzy edit distance I want into the query parser?
You cannot, unless you customize the source code.
The best (least worst?) alternative, I think, is probably the one mentioned in the above referenced FuzzyQuery Javadoc:
"If you really want this, consider using an n-gram indexing technique (such as the SpellChecker in the suggest module) instead."
In this case, one price to be paid will be a potentially much larger index - and even then, n-grams are not really equivalent to edit distances. I don't know if this would meet your needs.
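For reference, the SpellChecker route from the lucene-suggest module would look roughly like the sketch below (a sketch only; the paths, field name and analyzer are assumptions, and the separate spell index has to be rebuilt whenever the main index changes):
Directory spellDir = FSDirectory.open(Paths.get("/tmp/spell-index"));
SpellChecker spellChecker = new SpellChecker(spellDir);
IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/tmp/main-index")));
// Build the n-gram spell index from the terms of one field of the main index
spellChecker.indexDictionary(new LuceneDictionary(reader, "field"),
        new IndexWriterConfig(new GermanAnalyzer()), true);
// Suggestions are ranked by the configured StringDistance, so they are not limited to 2 edits
String[] suggestions = spellChecker.suggestSimilar("search", 10);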

How do you read the values of individual features from a FeatureField in Lucene?

I'm using Lucene 7.6.0 and I've indexed a series of documents with a FeatureField named "features", that stores query-independent evidence (e.g., "indegree", "pagerank"). If I'm not mistaken, the theory is that these are stored as a term vector, where "indegree" and "pagerank" are stored as terms and their values are stored as the corresponding term frequencies.
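To illustrate, the indexing side is roughly of this shape (a simplified sketch; doc and writer are the usual Document and IndexWriter, and the field/feature names match the ones above):
Document doc = new Document();
doc.add(new FeatureField("features", "indegree", 10f));
doc.add(new FeatureField("features", "pagerank", 0.3f));
writer.addDocument(doc);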
I've tested some queries where I combined BM25 and each individual feature, and some return a different ranking, when compared to BM25 alone, but some others seem to have no effect. This might just be a coincidence, which is fine, but I would like to check whether the values were correctly indexed. How do I do this?
I've tried using Luke to inspect the index, but there is no term vector associated with the "features" field. The active flags for "features" are only "Idf", but I honestly can't find a way to access the frequencies for each document. The best I was able to do, in order to check whether the field had any value, was something like:
IndexReader reader = DirectoryReader.open(
        FSDirectory.open(Paths.get("/tmp/lucene-index")));
System.out.println(reader.totalTermFreq(new Term("features", "indegree")));
This printed the number 33344, which does not match the value I indexed (a single document with indegree 10); however, I suspect this might be encoded somehow.
I know this API is still experimental, but I was wondering if anyone knew if it would be possible to retrieve the feature values, either for each document or globally somehow (maybe an anonymous vector, without a link to the corresponding documents).
I was able to verify that the ranking by each feature matches the order of the data that I have. I also believe I was able to fairly accurately reverse the provided relevance score to obtain the original feature value (I say "fairly", because I found what seem to be slight rounding errors; let me know if it's an actual error instead). The code I used was the following:
IndexReader reader = DirectoryReader.open(
        FSDirectory.open(Paths.get("/tmp/lucene-index")));

IndexSearcher searcher = new IndexSearcher(reader);
searcher.setSimilarity(new BM25Similarity(1.2f, 0.75f));

float w = 1.8f;
float k = 1f;
float a = 0.6f;

Query query = FeatureField.newSigmoidQuery("features", "indegree", w, k, a);
TopDocs hits = searcher.search(query, 5);

for (int i = 0; i < hits.scoreDocs.length; i++) {
    Document doc = searcher.doc(hits.scoreDocs[i].doc);
    float featureValue = (float) Math.pow(
        (hits.scoreDocs[i].score / w * Math.pow(k, a))
            / (1 - hits.scoreDocs[i].score / w),
        1 / a);
    System.out.println(featureValue + "\t" + doc.get("doc_id"));
}

reader.close();
The equation for featureValue is simply the sigmoid scaling of the static feature S (the "indegree" in this case) solved for S, based on the relevance score. You can find the original equation in the paper cited in Lucene's JavaDoc for FeatureField: https://dl.acm.org/citation.cfm?doid=1076034.1076106
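To make the inversion explicit (based on the sigmoid form described in that paper, with S the static feature value and w, k, a the parameters above):
score = w \cdot \frac{S^a}{S^a + k^a}
\quad\Longrightarrow\quad
S = \left( \frac{(score / w)\, k^a}{1 - score / w} \right)^{1/a}
which is what the featureValue expression above computes.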
Please let me know if you find any error with this solution, or if there is an easier way to inspect the index.

Getting Term Frequencies For Query

In Lucene, a query can be composed of many sub-queries. (such as TermQuery objects)
I'd like a way to iterate over the documents returned by a search, and for each document, to then iterate over the sub-queries.
For each sub-query, I'd like to get the number of times it matched. (I'm also interested in the fieldNorm, etc.)
I can get access to that data by using indexSearcher.explain, but that feels quite hacky because I would then need to parse the "description" member of each nested Explanation object to try and find the term frequency, etc. (also, calling "explain" is very slow, so I'm hoping for a faster approach)
The context here is that I'd like to experiment with re-ranking Lucene's top N search results, and to do that it's obviously helpful to extract as many "features" as possible about the matches.
Via looking at the source code for classes like TermQuery, the following appears to be a basic approach:
// For each document... (scoreDoc.doc is an integer)
Weight weight = weightCache.get(query);
if (weight == null)
{
    weight = query.createWeight(indexSearcher, true);
    weightCache.put(query, weight);
}

IndexReaderContext context = indexReader.getContext();
List<LeafReaderContext> leafContexts = context.leaves();
int n = ReaderUtil.subIndex(scoreDoc.doc, leafContexts);
LeafReaderContext leafReaderContext = leafContexts.get(n);

Scorer scorer = weight.scorer(leafReaderContext);
int deBasedDoc = scoreDoc.doc - leafReaderContext.docBase;
int thisDoc = scorer.iterator().advance(deBasedDoc);

float freq = 0;
if (thisDoc == deBasedDoc)
{
    freq = scorer.freq();
}
The 'weightCache' is of type Map<Query, Weight> and is useful so that you don't have to re-create the Weight object for every document you process (otherwise, the code runs about 10x slower).
Is this approximately what I should be doing? Are there any obvious ways to make this run faster? (it takes approx 2 ms for 280 documents, as compared to about 1 ms to perform the query itself)
Another challenge with this approach is that it requires code to navigate through your Query object to try and find the sub-queries. For example, if it's a BooleanQuery, you call query.clauses() and recurse on them to look for all leaf TermQuery objects, etc. Not sure if there is a more elegant / less brittle way to do that.
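For what it's worth, that traversal can be written as a small recursive helper; a rough sketch (assuming recent Lucene query classes, with BoostQuery handled because the parser often wraps clauses in it):
void collectTermQueries(Query query, List<TermQuery> out) {
    if (query instanceof TermQuery) {
        out.add((TermQuery) query);
    } else if (query instanceof BoostQuery) {
        collectTermQueries(((BoostQuery) query).getQuery(), out);
    } else if (query instanceof BooleanQuery) {
        for (BooleanClause clause : ((BooleanQuery) query).clauses()) {
            collectTermQueries(clause.getQuery(), out);
        }
    }
    // other composite types (e.g. DisjunctionMaxQuery) would need their own branches
}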

How can I ask Lucene to do simple, flat scoring?

Let me preface by saying that I'm not using Lucene in a very common way and explain how my question makes sense. I'm using Lucene to do searches in structured records. That is, each document, that is indexed, is a set of fields with short values from a given set. Each field is analysed and stored, the analysis producing usually no more than 3 and in most cases just 1 normalised token. As an example, imagine files for each of which we store two fields: the path to the file and a user rating in 1-5. The path is tokenized with a PathHierarchyTokenizer and the rating is just stored as-is. So, if we have a document like
path: "/a/b/file.txt"
rating: 3
This document will have for its path field the tokens "/a", "/a/b" and "/a/b/file.txt", and for rating the token "3".
I wish to score this document against a query like "path:/a path:/a/b path:/a/b/different.txt rating:1" and get a value of 2, the number of terms that match.
My understanding and observation is that the score of the document depends on various term metrics and with many documents with many fields each, I most definitely am not getting simple integer scores.
Is there some way to make Lucene score documents in the outlined fashion? The queries that are run against the index are not generated by the users, but are built by the system and have an optional filter attached, meaning they all have a fixed form of several TermQuerys joined in a BooleanQuery with nothing like any fuzzy textual searches. Currently I don't have the option of replacing Lucene with something else, but suggestions are welcome for a future development.
I doubt there's something ready to use, so most probably you will need to implement your own scorer and use it when searching. For complicated cases you may want to play around with queries, but for a simple case like yours it should be enough to override DefaultSimilarity, setting the tf factor to the raw frequency (the number of the specified terms in the document in question) and all other components to 1. Something like this:
public class MySimilarity extends DefaultSimilarity {

    @Override
    public float computeNorm(String field, FieldInvertState state) {
        return 1;
    }

    @Override
    public float queryNorm(float sumOfSquaredWeights) {
        return 1;
    }

    @Override
    public float tf(float freq) {
        return freq;
    }

    @Override
    public float idf(int docFreq, int numDocs) {
        return 1;
    }

    @Override
    public float coord(int overlap, int maxOverlap) {
        return 1;
    }
}
(Note that tf() is the only method that returns something other than 1.)
Then just set this similarity on the IndexSearcher.
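For example (a minimal sketch; note that computeNorm() is applied at index time, so the same similarity should also be set on the IndexWriter configuration when building the index):
IndexSearcher searcher = new IndexSearcher(reader);
searcher.setSimilarity(new MySimilarity());
TopDocs hits = searcher.search(query, 10);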

How can I write like "x == either 1 or 2" in a programming language? [duplicate]

Possible Duplicate:
Why do most programming languages only have binary equality comparison operators?
I have had a simple question for a fairly long time--since I started learning programming languages.
I'd like to write like "if x is either 1 or 2 => TRUE (otherwise FALSE)."
But when I write it in a programming language, say in C,
( x == 1 || x == 2 )
it really works but looks awkward and hard to read. I guess it should be possible to simplify such an or operation, and so if you have any idea, please tell me. Thanks, Nathan
Python allows test for membership in a sequence:
if x in (1, 2):
An extension version in C#
step 1: create an extension method
public static class ObjectExtensions
{
    public static bool Either(this object value, params object[] array)
    {
        return array.Any(p => Equals(value, p));
    }
}
step 2: use the extension method
if (x.Either(1, 2, 3, 4, 5, 6))
{
}
else
{
}
While there are a number of quite interesting answers in this thread, I would like to point out that they may have performance implications if you're doing this kind of logic inside of a loop, depending on the language. As far as the computer is concerned, if (x == 1 || x == 2) is by far the easiest form to understand and optimize when it's compiled into machine code.
When I started programming it seemed weird to me as well that instead of something like:
(1 < x < 10)
I had to write:
(1 < x && x < 10)
But this is how most programming languages work, and after a while you will get used to it.
So I believe it is perfectly fine to write
( x == 1 || x == 2 )
Writing it this way also has the advantage that other programmers will understand easily what you wrote. Using a function to encapsulate it might just make things more complicated because the other programmers would need to find that function and see what it does.
Only more recent programming languages like Python, Ruby etc. allow you to write it in a simpler, nicer way. That is mostly because these programming languages are designed to increase the programmers productivity, while the older programming languages' main goal was application performance and not so much programmer productivity.
It's Natural, but Language-Dependent
Your approach would indeed seem more natural but that really depends on the language you use for the implementation.
Rationale for the Mess
C being a systems programming language, and fairly close to the hardware (funny though, as we used to consider it a "high-level" language, as opposed to writing machine code), it's not exactly expressive.
Modern higher-level languages (again, arguably; Lisp is not that modern, historically speaking, but would allow you to do this nicely) allow you to do such things by using built-in constructs or library support (for instance, using Ranges, Tuples or equivalents in languages like Python, Ruby, Groovy, ML-languages, Haskell...).
Possible Solutions
Option 1
One option for you would be to implement a function or subroutine taking an array of values and checking them.
Here's a basic prototype, and I leave the implementation as an exercise to you:
/* returns non-zero value if check is in values */
int is_in(int check, int *values, int size);
However, as you will quickly see, this is very basic and not very flexible:
it works only on integers,
it works only to compare identical values.
Option 2
One step higher on the complexity ladder (in terms of languages), an alternative would be to use pre-processor macros in C (or C++) to achieve a similar behavior, but beware of side effects.
Other Options
A next step could be to pass a function pointer as an extra parameter to define the behavior at call-point, define several variants and aliases for this, and build yourself a small library of comparators.
The next step then would be to implement a similar thing in C++ using templates to do this on different types with a single implementation.
And then keep going from there to higher-level languages.
Pick the Right Language (or learn to let go!)
Typically, languages favoring functional programming will have built-in support for this sort of thing, for obvious reasons.
Or just learn to accept that some languages can do things that others cannot, and that depending on the job and environment, that's just the way it is. It mostly is syntactic sugar, and there's not much you can do. Also, some languages will address their shortcomings over time by updating their specifications, while others will just stall.
Maybe a library implements such a thing already and that I am not aware of.
That was a lot of interesting alternatives. I'm surprised nobody mentioned switch...case, so here goes:
switch(x) {
    case 1:
    case 2:
        // do your work
        break;
    default:
        // the else part
}
It is more readable than having a bunch of x == 1 || x == 2 || ..., and more optimal than having an array/set/list for doing a membership check.
I doubt I'd ever do this, but to answer your question, here's one way to achieve it in C# involving a little generic type inference and some abuse of operator overloading. You could write code like this:
if (x == Any.Of(1, 2)) {
    Console.WriteLine("In the set.");
}
Where the Any class is defined as:
public static class Any {
    public static Any2<T> Of<T>(T item1, T item2) {
        return new Any2<T>(item1, item2);
    }

    public struct Any2<T> {
        T item1;
        T item2;

        public Any2(T item1, T item2) {
            this.item1 = item1;
            this.item2 = item2;
        }

        public static bool operator ==(T item, Any2<T> set) {
            return item.Equals(set.item1) || item.Equals(set.item2);
        }

        // Defining the operator== requires these three methods to be defined as well:
        public static bool operator !=(T item, Any2<T> set) {
            return !(item == set);
        }

        public override bool Equals(object obj) { throw new NotImplementedException(); }
        public override int GetHashCode() { throw new NotImplementedException(); }
    }
}
You could conceivably have a number of overloads of the Any.Of method to work with 3, 4, or even more arguments. Other operators could be provided as well, and a companion All class could do something very similar but with && in place of ||.
Looking at the disassembly, a fair bit of boxing happens because of the need to call Equals, so this ends up being slower than the obvious (x == 1) || (x == 2) construct. However, if you change all the <T>'s to int and replace the Equals with ==, you get something which appears to inline nicely to be about the same speed as (x == 1) || (x == 2).
Err, what's wrong with it? Oh well, if you really use it a lot and hate the looks, do something like this in C#:
#region minimizethisandneveropen
public bool either(int value, int x, int y)
{
    return (value == x || value == y);
}
#endregion
and in places where you use it:
if (either(value, 1, 2))
    //yaddayadda
Or something like that in another language :).
In php you can use
$ret = in_array($x, array(1, 2));
As far as I know, there is no built-in way of doing this in C. You could add your own inline function for scanning an array of ints for values equal to x....
Like so:
inline int contains(int set[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (set[i] == x)
            return 1;
    return 0;
}
// To implement the check, you declare the set
int mySet[2] = {1,2};
// And evaluate like this:
contains(mySet,2,x) // returns non-zero if 'x' is contained in 'mySet'
In T-SQL
where x in (1,2)
In COBOL (it's been a long time since I've even glanced briefly at COBOL, so I may have a detail or two wrong here):
IF X EQUALS 1 OR 2
...
So the syntax is definitely possible. The question then boils down to "why is it not used more often?"
Well, the thing is, parsing expressions like that is a bit of a bitch. Not when standing alone like that, mind, but more when in compound expressions. The syntax starts to become opaque (from the compiler implementer's perspective) and the semantics downright hairy. IIRC, a lot of COBOL compilers will even warn you if you use syntax like that because of the potential problems.
In .Net you can use Linq:
int[] wanted = new int[] { 1, 2 };
// you can use Any to return true for the first item in the list that passes
bool result = wanted.Any( i => i == x );
// or use Contains
bool result = wanted.Contains( x );
Although personally I think the basic || is simple enough:
bool result = ( x == 1 || x == 2 );
Thanks Ignacio! I translate it into Ruby:
[ 1, 2 ].include?( x )
and it also works, but I'm not sure whether it'd look clear & normal. If you know about Ruby, please advise. Also if anybody knows how to write this in C, please tell me. Thanks. -Nathan
Perl 5 with Perl6::Junction:
use Perl6::Junction 'any';
say 'yes' if 2 == any(qw/1 2 3/);
Perl 6:
say 'yes' if 2 == 1|2|3;
This version is so readable and concise I’d use it instead of the || operator.
Pascal has a (limited) notion of sets, so you could do:
if x in [1, 2] then
(haven't touched a Pascal compiler in decades so the syntax may be off)
A try with only one non-bitwise boolean operator (not advised, not tested):
if ( ((x&3) ^ x ^ ((x>>1)&1) ^ (x&1) ^ 1) == 0 )
The (x&3) ^ x part should be equal to 0, this ensures that x is between 0 and 3. Other operands will only have the last bit set.
The ((x>>1)&1) ^ (x&1) ^ 1 part ensures last and second to last bits are different. This will apply to 1 and 2, but not 0 and 3.
You say the notation (x==1 || x==2) is "awkward and hard to read". I beg to differ. It's different than natural language, but is very clear and easy to understand. You just need to think like a computer.
Also, the notations mentioned in this thread like x in (1,2) are semantically different from what you are really asking: they ask whether x is a member of the set (1,2). What you are asking is whether x equals 1 or 2, which is logically (and semantically) equivalent to "x equals 1 or x equals 2", which translates to (x==1 || x==2).
In java:
List<Integer> list = Arrays.asList(1, 2);
Set<Integer> set = new HashSet<>(list);
set.contains(1);
I have a macro that I use a lot that's somewhat close to what you want.
#define ISBETWEEN(Var, Low, High) ((Var) >= (Low) && (Var) <= (High))
ISBETWEEN(x, 1, 2) will return true if x is 1 or 2.
Neither C, C++, VB.net, C#.net, nor any other such language I know of has an efficient way to test for something being one of several choices. Although (x==1 || x==2) is often the most natural way to code such a construct, that approach sometimes requires the creation of an extra temporary variable:
tempvar = somefunction(); // tempvar only needed for 'if' test:
if (tempvar == 1 || tempvar == 2)
...
Certainly an optimizer should be able to effectively get rid of the temporary variable (shove it in a register for the brief time it's used) but I still think that code is ugly. Further, on some embedded processors, the most compact and possibly fastest way to write (x == const1 || x==const2 || x==const3) is:
movf _x,w ; Load variable X into accumulator
xorlw const1 ; XOR with const1
btfss STATUS,ZERO ; Skip next instruction if zero
xorlw const1 ^ const2 ; XOR with (const1 ^ const2)
btfss STATUS,ZERO ; Skip next instruction if zero
xorlw const2 ^ const3 ; XOR with (const2 ^ const3)
btfss STATUS,ZERO ; Skip next instruction if zero
goto NOPE
That approach requires two more instructions for each constant; all instructions will execute. Early-exit tests will save time if the branch is taken, and waste time otherwise. Coding using a literal interpretation of the separate comparisons would require four instructions for each constant.
If a language had an "if variable is one of several constants" construct, I would expect a compiler to use the above code pattern. Too bad no such construct exists in common languages.
(note: Pascal does have such a construct, but run-time implementations are often very wasteful of both time and code space).
In JavaScript: return x === 1 || x === 2;