Does Lucene use Extended Boolean Model retrieval? - lucene

Some time ago I came across the extended Boolean model, which combines Boolean retrieval logic with the ability to rank documents in a way similar to the Vector Space Model.
As far as I understand, this is exactly how Lucene ranks documents. Am I right?

It is a combination of the Vector Space Model and the Boolean Model. Check out the Scoring docs page:
Lucene scoring uses a combination of the Vector Space Model (VSM) of Information Retrieval and the Boolean model to determine how relevant a given Document is to a User's query. In general, the idea behind the VSM is the more times a query term appears in a document relative to the number of times the term appears in all the documents in the collection, the more relevant that document is to the query. It uses the Boolean model to first narrow down the documents that need to be scored based on the use of boolean logic in the Query specification.
If you compare the formulas at Similarity with the classic VSM formula you'll note that they are similar (though not equal).
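For reference, the classic VSM formula referred to here is the cosine similarity between the tf-idf weighted query and document vectors:

cosine-similarity(q,d) = ( V(q) · V(d) ) / ( |V(q)| · |V(d)| )

Lucene's practical scoring function starts from this but normalizes differently (an index-time length norm rather than the exact document-vector length, plus factors such as coord and boosts), which is why the two formulas end up similar but not equal.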

Related

Implement Simple TF-IDF Scoring in Lucene

This is the Lucene practical scoring function (https://lucene.apache.org/core/7_5_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html):
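The formula given there, in its classic (pre-7.x) form that still includes the coord and queryNorm factors discussed below, is:

score(q,d) = coord(q,d) · queryNorm(q) · Σ over t in q of ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )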
The documentation says:
idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give higher contribution to the total score. idf(t) appears for t in both the query and the document, hence it is squared in the equation.
The last line is not clear to me (i.e., idf(t) appears for t in both the query and the document). How do I calculate idf(t) for t in the query?
What I am trying to do is to implement the simple TF-IDF scoring formula as a baseline approach for an experiment.
(TF-IDF Score(q,d) = Σ over each term t in q of tf(t,d) · idf(t))
Lucene's scoring function is different from what I am trying to do. I can override the DefaultSimilarity class to ignore the effect of coord(q,d), queryNorm(q), t.getBoost(), and norm(t,d). My only concern is the idf(t) part. Why is it squared? How can I implement the simple TF-IDF scoring?
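A minimal sketch of the kind of override described above (Lucene 4.x/5.x DefaultSimilarity API assumed; the class name is just an example): neutralise coord, queryNorm and the length norm, and take the square root of idf so that the squared idf(t) in the product collapses back to a single idf(t) factor.

```java
import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.search.similarities.DefaultSimilarity;

// Hypothetical example class, not part of Lucene itself.
public class SimpleTfIdfSimilarity extends DefaultSimilarity {

    @Override
    public float coord(int overlap, int maxOverlap) {
        return 1f;                                   // drop coord(q,d)
    }

    @Override
    public float queryNorm(float sumOfSquaredWeights) {
        return 1f;                                   // drop queryNorm(q)
    }

    @Override
    public float lengthNorm(FieldInvertState state) {
        return 1f;                                   // drop norm(t,d); reindex for this to take effect
    }

    @Override
    public float idf(long docFreq, long numDocs) {
        // idf(t) enters the product once via the query weight and once via the
        // document weight, so sqrt here leaves a single idf(t) in the final score.
        return (float) Math.sqrt(super.idf(docFreq, numDocs));
    }
}
```

You would then set an instance of this class on both the IndexWriterConfig and the IndexSearcher (via setSimilarity) so that index-time norms and search-time scoring agree.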

Machine Learning text comparison model

I am creating a machine learning model that essentially returns a measure of the correctness of one text relative to another.
For example: “the cat and a dog”, “a dog and the cat”. The model needs to be able to identify that some words (“cat”/“dog”) are more important/significant than others (“a”/“the”). I am not interested in conjunction words etc. I would like to be able to tell the model which words are the most “significant” and have it determine how correct text 1 is to text 2, with the “significant” words bearing more weight than others.
It also needs to be able to recognise that phrases don’t necessarily have to be in the same order. The two above sentences should be an extremely high match.
What is the basic algorithm I should use to go about this? Is there an alternative to just creating a dataset with thousands of example texts and a score of correctness?
I am only after a broad overview/flowchart/process/algorithm.
I think TF-IDF might be a good fit to your problem, because:
Emphasis on words that occur in many documents (say, 90% of your sentences/documents contain the conjunction 'and') is much smaller, essentially giving more weight to the more document-specific phrasing (this is the IDF part).
Ordering in Term Frequency (TF) does not matter, as opposed to methods using sliding windows etc.
It is very lightweight when compared to representation-oriented methods like the one mentioned above.
Big drawback: your data, depending on the size of the corpus, may have too many dimensions (as many dimensions as there are unique words); you could use stemming/lemmatization to mitigate this problem to some degree.
You may calculate the similarity between two TF-IDF vectors using cosine similarity, for example.
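A small, self-contained sketch of that pipeline (plain Java, no libraries; all names are illustrative): build TF-IDF vectors for the two example sentences over a tiny corpus and compare them with cosine similarity.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative demo only; not a Lucene API.
public class TfIdfCosineDemo {

    // term-frequency * inverse-document-frequency weights for one document
    static Map<String, Double> tfIdfVector(List<String> doc, List<List<String>> corpus) {
        Map<String, Double> vec = new HashMap<>();
        for (String term : doc) {
            vec.merge(term, 1.0, Double::sum);                                  // raw term frequency
        }
        for (Map.Entry<String, Double> e : vec.entrySet()) {
            long docFreq = corpus.stream().filter(d -> d.contains(e.getKey())).count();
            double idf = Math.log((double) corpus.size() / (1 + docFreq)) + 1;  // smoothed idf
            e.setValue(e.getValue() * idf);
        }
        return vec;
    }

    // cosine similarity between two sparse vectors
    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
            normA += e.getValue() * e.getValue();
        }
        for (double v : b.values()) {
            normB += v * v;
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        List<List<String>> corpus = Arrays.asList(
                Arrays.asList("the", "cat", "and", "a", "dog"),
                Arrays.asList("a", "dog", "and", "the", "cat"),
                Arrays.asList("the", "weather", "and", "the", "news"));
        // word order is ignored, so the two reordered sentences score 1.0
        System.out.println(cosine(tfIdfVector(corpus.get(0), corpus),
                                  tfIdfVector(corpus.get(1), corpus)));
    }
}
```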
EDIT: Woops, this question is 8 months old, sorry for the bump, maybe it will be of use to someone else though.

SOLR and Ratio of Matching Words

Using SOLR version 4.3, it appears that SOLR is valuing the percentage of matching terms more than the number of matching terms.
For example, we do a search for Dog and a document with just the word dog and three other words is returned. We have another article with hundreds of words that has the word dog in it 27 times.
I would expect the second article to be returned first. However, the one with one word out of three is returned first. I was hoping to find out what in SOLR controls this so I can make the appropriate modifications. I have looked at the SOLR documentation and have seen COORD mentioned, but it seems to indicate that the article with 27 references should be returned first. Any help would be appreciated.
For 4.x Solr still used regular TF/IDF as its scoring formula, and you can see the Lucene implementation detailed in the documentation for TFIDFSimilarity.
For your question, the two factors that affect the score are:
The length of the field, as given in norm():
norm(t,d) encapsulates a few (indexing time) boost and length factors:
Field boost - set by calling field.setBoost() before adding the field to a document.
lengthNorm - computed when the document is added to the index in accordance with the number of tokens of this field in the document, so that shorter fields contribute more to the score. LengthNorm is computed by the Similarity class in effect at indexing.
.. while the number of matching terms (not their frequency) is given by coord():
coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This is a search time factor computed in coord(q,d) by the Similarity in effect at search time.
There are a few settings in your schema that can affect how Solr scores the documents in your example:
omitNorms
If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory)
.. this will remove the norm() part of the score.
omitTermFreqAndPositions
If true, omits term frequency, positions, and payloads from postings for this field.
.. and this will remove the boost from multiple occurrences of the same term. Be aware that this will remove positions as well, making phrase queries impossible.
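As a hypothetical example of where these attributes go (field name and type are placeholders, not taken from your schema), the field definition in schema.xml might look like this; note that changing either attribute requires reindexing:

```xml
<field name="body" type="text_general" indexed="true" stored="true"
       omitNorms="true" omitTermFreqAndPositions="true"/>
```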
But you should also consider upgrading Solr, as the BM25 similarity that's the default from 6.x usually performs better. I can't remember whether it is available for 4.3.
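If a BM25 similarity is available in your version, the usual way to enable it globally in Solr is a similarity factory in schema.xml (check the reference guide for 4.3 before relying on this exact class name):

```xml
<similarity class="solr.BM25SimilarityFactory"/>
```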

Is it possible to obtain, alter and replace the tfidf document representations in Lucene?

Hi guys,
I'm working on some ranking related research. I would like to index a collection of documents with Lucene, take the tfidf representations (of each document) it generates, alter them, put them back into place and observe how the ranking over a fixed set of queries changes accordingly.
Is there any non-hacky way to do this?
Your question is too vague to have a clear answer, especially regarding what you plan to do with:
take the tfidf representations (of each document) it generates, alter them
Lucene stores raw values for scoring:
CollectionStatistics
TermStatistics
Per term/doc pair stats: PostingsEnum
Per field/doc pair: norms
All this data is managed by Lucene and will be used to compute a score for a given query term. A custom Similarity class can be used to change the formula that generates this score.
But you have to consider that a search query is made of multiple terms, and the way the scores of individual terms are combined can be changed as well. You could use existing Query classes (e.g. BooleanQuery, DisjunctionMax) but you could also write your own.
So it really depends on what you want to do with all of this, but note that if you want to change the raw values stored by Lucene, this is going to be rather hard. You'll have to write a custom Lucene codec and probably most of the query stack to take advantage of your new data.
One nice thing you should consider is the possibility of storing arbitrary byte[] payloads. This way you could store a value that was computed outside of Lucene and use it in a custom Similarity or Query.
Please see the following tutorials: Getting Started with Payloads and Custom Scoring with Lucene Payloads; they may give you some ideas.
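As a rough sketch of that idea (Lucene 4.x/5.x payload API assumed; the class name is illustrative): if each token was indexed with a float payload encoded via PayloadHelper, a similarity can turn it into a score factor, which payload-aware queries such as PayloadTermQuery then multiply into the term score.

```java
import org.apache.lucene.analysis.payloads.PayloadHelper;
import org.apache.lucene.search.similarities.DefaultSimilarity;
import org.apache.lucene.util.BytesRef;

// Hypothetical example: use a per-position payload, written at index time,
// as an externally computed boost for the matching term.
public class PayloadBoostSimilarity extends DefaultSimilarity {

    @Override
    public float scorePayload(int doc, int start, int end, BytesRef payload) {
        if (payload == null) {
            return 1f;                // position carries no payload: neutral factor
        }
        // decode the float that was attached to the token at index time
        return PayloadHelper.decodeFloat(payload.bytes, payload.offset);
    }
}
```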

Lucene dynamic field weighting

Lucene supports setting fixed relative field weights at query-creation time. This means that, for all matching documents, the similarity of the content of each searchable field to the query is weighted (and summed) based on those fixed pre-defined weights. My question is whether it is possible to set the document field weights dynamically during the search, based on each document's attributes.
For example, if all indexed documents have a numeric field, I would like to set the relative weights of each document's textual fields based on its numeric field value.
Thanks
David
Yes, it's possible. To do that you can use CustomScoreQuery. You can find a good example in the book Lucene in Action, where CustomScoreQuery is extended to implement recency boosting (boosting documents based on a custom calculation over a date field). In particular, you want to provide your own CustomScoreProvider implementing the calculation that you need.
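A hedged sketch of that approach (Lucene 5.x/6.x API from the queries module assumed; the "priority" NumericDocValuesField and the class name are hypothetical): wrap the text query in a CustomScoreQuery whose provider re-weights each document's score by that document's own numeric field.

```java
import java.io.IOException;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.queries.CustomScoreProvider;
import org.apache.lucene.queries.CustomScoreQuery;
import org.apache.lucene.search.Query;

// Hypothetical example class; "priority" is an assumed NumericDocValuesField.
public class PriorityWeightedQuery extends CustomScoreQuery {

    public PriorityWeightedQuery(Query subQuery) {
        super(subQuery);
    }

    @Override
    protected CustomScoreProvider getCustomScoreProvider(LeafReaderContext context) throws IOException {
        final NumericDocValues priority = context.reader().getNumericDocValues("priority");
        return new CustomScoreProvider(context) {
            @Override
            public float customScore(int doc, float subQueryScore, float valSrcScore) throws IOException {
                // 5.x/6.x random-access doc values; 7.x+ uses an iterator API instead
                long p = (priority == null) ? 0L : priority.get(doc);
                // per-document weight derived from the document's own attribute
                return subQueryScore * (1f + p / 10f);
            }
        };
    }
}
```

In newer Lucene versions CustomScoreQuery has been replaced by FunctionScoreQuery, but the idea of deriving a per-document weight from a doc-values field is the same.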