I have written code to find the similarity between two documents by computing their term frequencies and then the cosine similarity between them. But when I was looking at the standard Lucene examples, every program made use of an index.
My process involves a comparison between one reference document and other documents from a folder.
Do you think I should use indexing?
Check out the MoreLikeThis class.
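For example, a minimal sketch (Lucene 3.x, where MoreLikeThis lives in the contrib queries module as org.apache.lucene.search.similar.MoreLikeThis; the field name "contents" and the index path are hypothetical):

IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
IndexSearcher searcher = new IndexSearcher(reader);

MoreLikeThis mlt = new MoreLikeThis(reader);
mlt.setFieldNames(new String[] { "contents" });
mlt.setMinTermFreq(1); // relax the defaults for a small corpus
mlt.setMinDocFreq(1);

// Build a tf-idf-weighted query from the reference document (doc id 0 here)
// and rank the rest of the index against it.
Query query = mlt.like(0);
TopDocs similar = searcher.search(query, 10);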
Hi guys,
I'm working on some ranking related research. I would like to index a collection of documents with Lucene, take the tfidf representations (of each document) it generates, alter them, put them back into place and observe how the ranking over a fixed set of queries changes accordingly.
Is there any non-hacky way to do this?
Your question is too vague to have a clear answer, especially regarding what you plan to do with:
take the tfidf representations (of each document) it generates, alter them
Lucene stores raw values for scoring:
CollectionStatistics
TermStatistics
Per term/doc pair stats: PostingsEnum
Per field/doc pair: norms
All of this data is managed by Lucene and is used to compute a score for a given query term. A custom Similarity class can be used to change the formula that generates this score.
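As a rough sketch of the kind of hook this gives you (using SimilarityBase; its score() signature took floats up to Lucene 7 and doubles from Lucene 8 on, and the formula here is purely illustrative):

import org.apache.lucene.search.similarities.BasicStats;
import org.apache.lucene.search.similarities.SimilarityBase;

public class PlainTfIdfSimilarity extends SimilarityBase {
    @Override
    protected float score(BasicStats stats, float freq, float docLen) {
        // Plain tf * log(N / df), ignoring length normalization.
        float idf = (float) Math.log(
                (double) stats.getNumberOfDocuments() / (stats.getDocFreq() + 1));
        return freq * idf;
    }

    @Override
    public String toString() {
        return "PlainTfIdfSimilarity";
    }
}

Set it on the IndexSearcher at query time (and on the IndexWriterConfig at index time, so norms are written consistently).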
But you have to consider that a search query is made of multiple terms, and the way the scores of individual terms are combined can be changed as well. You could use existing Query classes (e.g. BooleanQuery, DisjunctionMaxQuery), but you could also write your own.
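For instance, a DisjunctionMaxQuery takes the maximum of its clauses' scores rather than their sum (sketch using the mutable pre-6.0 API; the field names are made up):

// Score by the best-matching field instead of summing both matches.
DisjunctionMaxQuery dmq = new DisjunctionMaxQuery(0.1f); // tie-breaker multiplier
dmq.add(new TermQuery(new Term("title", "computer")));
dmq.add(new TermQuery(new Term("body", "computer")));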
So it really depends on what you want to do with all of this, but note that if you want to change the raw values stored by Lucene, it is going to be rather hard. You would have to write a custom Lucene codec, and probably reimplement most of the query stack to take advantage of your new data.
One nice thing you should consider is the possibility to store arbitrary byte[] payloads. This way you could store a value computed outside of Lucene and use it in a custom Similarity or Query.
Please see the following tutorials: Getting Started with Payloads and Custom Scoring with Lucene Payloads; they may give you some ideas.
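For a flavor of the payload route, here is a compact sketch using the Lucene 3.x-era classes those tutorials cover (newer versions replace PayloadTermQuery with PayloadScoreQuery; field names and weights are made up). Tokens are indexed as "term|weight", e.g. "computer|0.85":

// Index time: DelimitedPayloadTokenFilter strips the "|weight" suffix
// and stores the encoded float as the term's payload.
Analyzer analyzer = new Analyzer() {
    @Override
    public TokenStream tokenStream(String field, Reader reader) {
        TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_36, reader);
        return new DelimitedPayloadTokenFilter(ts, '|', new FloatEncoder());
    }
};

// Query time: PayloadTermQuery feeds each match's payload into
// Similarity.scorePayload(), overridden here to decode the float.
PayloadTermQuery query = new PayloadTermQuery(
        new Term("contents", "computer"), new AveragePayloadFunction());
searcher.setSimilarity(new DefaultSimilarity() {
    @Override
    public float scorePayload(int docId, String field, int start, int end,
                              byte[] payload, int offset, int length) {
        return PayloadHelper.decodeFloat(payload, offset);
    }
});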
I have a Lucene indexed corpus of more than 1 million documents.
I am searching for named entities such as "Susan Witting" by using the Lucene Java API for queries.
I would like to expand my queries by also searching for "Sue Witting", for example, but would like that term to have a lower weight than the main query term.
How can I go about doing that?
I found information about the boosting option in the Lucene manual, but it seems to be set at indexing time and to require fields.
You can boost each query clause independently. See the Query Javadoc.
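For example, with the pre-6.0 API where a query carries its boost directly (the field name "text" is made up; newer versions wrap clauses in BoostQuery instead):

BooleanQuery query = new BooleanQuery();

PhraseQuery main = new PhraseQuery();
main.add(new Term("text", "susan"));
main.add(new Term("text", "witting"));

PhraseQuery variant = new PhraseQuery();
variant.add(new Term("text", "sue"));
variant.add(new Term("text", "witting"));
variant.setBoost(0.5f); // the alias counts for half as much

query.add(main, BooleanClause.Occur.SHOULD);
query.add(variant, BooleanClause.Occur.SHOULD);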
If you want to give different weights to the individual words within a query, then
Query#setBoost(float)
is not useful, because it boosts the query as a whole. A better way is to use the QueryParser boost syntax:
Query query = new QueryParser(Version.LUCENE_36, "some_key", analyzer).parse("stand^3 firm^2 always");
This gives different weights to words within the same query: here, stand is boosted by 3, firm by 2, and always keeps the default boost of 1.
I need to process a database in order to add meta-information such as tf-idf weights to the document terms.
Subsequently, I need to create document pairs with similarity measures such as tf-idf cosine similarity, etc.
I'm planning to use Apache Lucene for this task. I'm actually not interested in retrieval or running queries, but in indexing the data and processing it in order to generate an output file with the above-mentioned document pairs and similarity scores. The next step would be to pass these results to a Weka classifier.
Can I easily do it with Lucene ?
Thanks
Try Integrating Apache Mahout with Apache Lucene and Solr. Replace the places that say "Mahout" with "Weka". Good Luck.
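If you would rather compute the pairwise scores yourself, here is a rough sketch against a Lucene 3.x index (it assumes the field was indexed with TermVector.YES; all names are illustrative):

// Build a tf-idf vector for one document from its stored term vector.
Map<String, Double> tfidf(IndexReader reader, int doc, String field) throws IOException {
    TermFreqVector tv = reader.getTermFreqVector(doc, field);
    Map<String, Double> weights = new HashMap<String, Double>();
    String[] terms = tv.getTerms();
    int[] freqs = tv.getTermFrequencies();
    for (int i = 0; i < terms.length; i++) {
        double idf = Math.log((double) reader.numDocs()
                / (reader.docFreq(new Term(field, terms[i])) + 1));
        weights.put(terms[i], freqs[i] * idf);
    }
    return weights;
}

// Cosine similarity between two such vectors.
double cosine(Map<String, Double> a, Map<String, Double> b) {
    double dot = 0, na = 0, nb = 0;
    for (Map.Entry<String, Double> e : a.entrySet()) {
        Double w = b.get(e.getKey());
        if (w != null) dot += e.getValue() * w;
        na += e.getValue() * e.getValue();
    }
    for (double w : b.values()) nb += w * w;
    return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
}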
I'm searching articles in PubMed via Lucene.
Each of the 20,000,000 articles has an abstract with ~250 words and an ID.
At the moment I store my search results, each of which takes multiple seconds to obtain, in a TopDocs object.
Searches can find thousands of articles.
I'm just interested in the ID of the article.
Does Lucene load the abstracts internally into the TopDocs?
If so, can I prevent that behavior through FieldSelectors, or do FieldSelectors only work with IndexReader and not with IndexSearcher?
No, Lucene does not load the values of fields into TopDocs. TopDocs only contains the doc number and score for each one of the matching documents.
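A quick sketch of what you actually get back:

TopDocs top = searcher.search(query, 1000);
for (ScoreDoc sd : top.scoreDocs) {
    int docId = sd.doc;     // internal Lucene doc number; no stored fields loaded
    float score = sd.score; // relevance score
}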
If you're having performance issues, here's another SO question that can help you:
Optimizing Lucene performance
Lucene, by default, does not load any stored fields. If you want to retrieve only the ID field, and if you can afford to load up all the IDs in memory, then you can load all values as follows and reuse them.
String[] allIDs = FieldCache.DEFAULT.getStrings(indexReader, "IDFieldName");
Please check this answer about FieldCache: Best way to retrieve certain field of all documents returned by a Lucene search
You're on the right lines.
Try using a SetBasedFieldSelector when you retrieve the document from the index.
As another poster noted, iterating through the hits will return ScoreDoc objects. These give you the document id, which can be used to retrieve the document using the IndexReader associated with the IndexSearcher.
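Putting those two pieces together, something along these lines (Lucene 3.x; "id" is a hypothetical field name):

FieldSelector onlyId = new SetBasedFieldSelector(
        Collections.singleton("id"), Collections.<String>emptySet());
for (ScoreDoc sd : topDocs.scoreDocs) {
    Document doc = searcher.getIndexReader().document(sd.doc, onlyId);
    String id = doc.get("id");
}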
If IO is a problem because of loading fields you aren't interested in, you should be in for a pleasant surprise.
Hope this helps,
Summary: I collect the doc ids of all hits for a given search by using a custom Collector (it populates a BitSet with the ids). Searching and getting the doc ids is quite fast for my needs, but when it comes to actually fetching the documents from disk, things get very slow. Is there a way to optimize Lucene for faster document retrieval?
Details: I'm working on a processed corpus of Wikipedia and I keep each sentence as a separate document. When I search for "computer", I get all sentences containing that term. Currently, searching the corpus and getting all document ids works in under a second, but fetching the first 1000 documents takes around 20 seconds. Fetching all documents takes proportionally longer (i.e. another 20 seconds per 1000-document batch).
Subsequent searches and document fetching take much less time (though I don't know who is doing the caching, the OS or Lucene?), but I'll be searching for many diverse terms and I don't want to rely on caching; the performance of the very first search is crucial for me.
I'm looking for suggestions/tricks that will improve the document-fetching performance (if it's possible at all). Thanks in advance!
Addendum:
I use Lucene 3.0.0 and I drive the Lucene classes from Jython. That means I call the get_doc method of the following Jython class for every doc id retrieved during the search:
# Jython: imports for the Lucene 3.0 classes used below
import java.io
from org.apache.lucene.store import FSDirectory
from org.apache.lucene.index import IndexReader

class DocumentFetcher(object):
    def __init__(self, index_name):
        self._directory = FSDirectory.open(java.io.File(index_name))
        self._index_reader = IndexReader.open(self._directory, True)  # read-only

    def get_doc(self, doc_id):
        return self._index_reader.document(doc_id)
I have 50M documents in my index.
You are probably storing a lot of information in each document. Reduce the stored fields as much as you can.
Secondly, while retrieving fields, select only those you need. You can use the following method of IndexReader to load only a subset of the stored fields:
public abstract Document document(int n, FieldSelector fieldSelector)
This way you don't load fields that are not used. You can use the following code sample:
FieldSelector idFieldSelector =
        new SetBasedFieldSelector(Collections.singleton("idFieldName"),
                                  Collections.<String>emptySet());
for (int i : resultDocIDs) {
    String id = reader.document(i, idFieldSelector).get("idFieldName");
}
Scaling Lucene and Solr discusses many ways to improve Lucene performance.
As you are working on Lucene search within Wikipedia, you may be interested in Rainman's Lucene Search of Wikipedia. He mostly discusses algorithms rather than performance, but this may still be relevant.