Is there any way to extract all the tokens from Solr?

How can one extract all the tokens from Solr? Not from one document, but from all the documents indexed in Solr?
Thanks!

You can do something like this (this sample is confirmed to work on a Lucene 4.x index):
IndexReader reader = DirectoryReader.open(dir); // dir is the index Directory
Fields fields = MultiFields.getFields(reader);
for (String field : fields) { // Fields iterates over the field names
    Terms terms = fields.terms(field);
    if (terms != null) {
        TermsEnum termsEnum = terms.iterator(null);
        while (termsEnum.next() != null) {
            // do something with the current term
            String token = termsEnum.term().utf8ToString();
        }
    }
}
This iterates over all fields and over every term in each field. Have a look at the other methods TermsEnum provides, such as docFreq() and totalTermFreq().
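If you need the tokens as plain strings (for example, to dump them to a file), the same loop can collect everything into a map keyed by field name. A minimal sketch, assuming a Lucene 4.x index in an already opened Directory named dir (classes come from org.apache.lucene.index, org.apache.lucene.store and org.apache.lucene.util):
Map<String, List<String>> tokensByField = new HashMap<String, List<String>>();
IndexReader reader = DirectoryReader.open(dir);
try {
    Fields fields = MultiFields.getFields(reader);
    for (String field : fields) {
        Terms terms = fields.terms(field);
        if (terms == null) continue; // field has no postings
        List<String> tokens = new ArrayList<String>();
        TermsEnum termsEnum = terms.iterator(null);
        BytesRef text;
        while ((text = termsEnum.next()) != null) {
            tokens.add(text.utf8ToString());
        }
        tokensByField.put(field, tokens);
    }
} finally {
    reader.close();
}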

Related

Can we use SpanNearQuery in phonetic index?

I've implemented lucene-based software to index more than 10 million people's names, and these names can be written in different ways, like "Luíz" and "Luis". The index was created using the phonetic values of the respective tokens (a custom analyzer was created).
Currently, I'm using QueryParser to query for a given name, with good results. But the book "Lucene in Action" mentions that SpanNearQuery can improve my queries by using the proximity of tokens. I've played with SpanNearQuery against a non-phonetic index of names, and the results were superior to QueryParser's.
Since we should query using the same analyzer used for indexing, I couldn't figure out how to use my custom phonetic analyzer and SpanNearQuery at the same time. Rephrasing:
how can I use SpanNearQuery on the phonetic index?
Thanks in advance.
My first thought is: Wouldn't a phrase query with slop do the job? That would certainly be the easiest way:
"term1 term2"~5
This will use your phonetic analyzer, and produce a proximity query with the resulting tokens.
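For instance, a minimal sketch (the field name "name", the analyzer variable phoneticAnalyzer, the open IndexSearcher named searcher, and the example name are all assumptions; the two-argument QueryParser constructor is the Lucene 5+ one):
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

// the parser runs the phonetic analyzer over the quoted phrase,
// so the proximity query is built from the phonetic tokens
QueryParser parser = new QueryParser("name", phoneticAnalyzer);
Query proximity = parser.parse("\"luiz silva\"~5");
TopDocs hits = searcher.search(proximity, 10);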
So, if you really do need to use SpanQueries here (perhaps you are using fuzzy queries or wildcards or some such, or PhraseQuery has been leering menacingly at you and you want nothing more to do with it), you'll need to do the analysis yourself. You can do this by getting a TokenStream from Analyzer.tokenStream, and iterating through the analyzed tokens.
If you are using a phonetic algorithm that produces a single code per term (soundex, for example):
SpanNearQuery.Builder nearBuilder = new SpanNearQuery.Builder("text", true);
nearBuilder.setSlop(4);
TokenStream stream = analyzer.tokenStream("text", queryStringToParse);
// grab the attribute once; it is updated in place on each incrementToken()
CharTermAttribute token = stream.addAttribute(CharTermAttribute.class);
stream.reset();
while (stream.incrementToken()) {
    nearBuilder.addClause(new SpanTermQuery(new Term("text", token.toString())));
}
Query finalQuery = nearBuilder.build();
stream.end();
stream.close();
If you are using a double metaphone, where you can have 1-2 terms at the same position, it's a bit more complex, as you'll need to consider those position increments:
SpanNearQuery.Builder nearBuilder = new SpanNearQuery.Builder("text", true);
nearBuilder.setSlop(4);
TokenStream stream = analyzer.tokenStream("text", "through and through");
CharTermAttribute token = stream.addAttribute(CharTermAttribute.class);
PositionIncrementAttribute increment = stream.addAttribute(PositionIncrementAttribute.class);
stream.reset();
String queuedToken = null;
while (stream.incrementToken()) {
    if (increment.getPositionIncrement() == 0) {
        // this token occupies the same position as the queued one:
        // emit both as alternatives at that position
        nearBuilder.addClause(new SpanOrQuery(
            new SpanTermQuery(new Term("text", queuedToken)),
            new SpanTermQuery(new Term("text", token.toString()))
        ));
        queuedToken = null;
    }
    else if (increment.getPositionIncrement() >= 1 && queuedToken != null) {
        // a new position starts: flush the queued token on its own,
        // then queue the current one
        nearBuilder.addClause(new SpanTermQuery(new Term("text", queuedToken)));
        queuedToken = token.toString();
    }
    else {
        // nothing queued yet: just queue the current token
        queuedToken = token.toString();
    }
}
if (queuedToken != null) {
    nearBuilder.addClause(new SpanTermQuery(new Term("text", queuedToken)));
}
Query finalQuery = nearBuilder.build();
stream.end();
stream.close();
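In both cases, the resulting finalQuery can then be run like any other query, for example (assuming an already open IndexSearcher named searcher):
TopDocs hits = searcher.search(finalQuery, 10);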

Neo4j\Lucene multiterm wildcard at the end of query

I'm trying to create an auto suggestion based on a Lucene full-text index.
The main issue is how to create the autosuggestion (autocomplete) from multi-term phrases, for example:
nosql dat*
results can be
nosql database
nosql data
but not
perfect nosql database
What is the correct syntax for a Lucene query to create an auto suggestion based on the first words of a multi-term query, with a wildcard at the end?
I had a similar requirement.
Lucene has span queries that allow you to use the location of words in the text in queries.
I've implemented it using SpanFirstQuery (read about it in the docs).
Here I use SpanNearQuery to force all the words to be next to each other, and SpanFirstQuery to force all of them to be at the start of the text:
if (querystr.contains(" ")) // more than one word?
{
    String[] words = querystr.split(" ");
    SpanQuery[] clausesWildCard = new SpanQuery[words.length];
    for (int i = 0; i < words.length; i++) {
        if (i == words.length - 1) // last word: add a wildcard clause
        {
            PrefixQuery pq = new PrefixQuery(new Term(VALUE, words[i]));
            clausesWildCard[i] = new SpanMultiTermQueryWrapper<PrefixQuery>(pq);
        }
        else
        {
            Term clause = new Term(VALUE, words[i]);
            clausesWildCard[i] = new SpanTermQuery(clause);
        }
    }
    // slop 0 + inOrder=true: the words must be adjacent and in order
    SpanQuery allTheWordsNear = new SpanNearQuery(clausesWildCard, 0, true);
    // every match must start within the first words.length positions
    prefixquery = new SpanFirstQuery(allTheWordsNear, words.length);
}
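For completeness, the single-word case could be handled the same way, with just the wrapped prefix clause (a sketch, reusing the VALUE field constant from above):
else // a single word: an anchored prefix match
{
    PrefixQuery pq = new PrefixQuery(new Term(VALUE, querystr));
    SpanQuery prefixAsSpan = new SpanMultiTermQueryWrapper<PrefixQuery>(pq);
    prefixquery = new SpanFirstQuery(prefixAsSpan, 1); // must start at position 0
}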

Lucene - Iterating through TermsEnum for docfreq

I am trying to get the document frequency for each term in a TermsEnum, but I get only "1" as the document frequency for every term. Any hint as to what the problem could be? This is my code:
Terms terms = reader.getTermVector(docId, field);
TermsEnum termsEnum = terms.iterator(null);
while (termsEnum.next() != null) {
    int docNumbersWithTerm = termsEnum.docFreq();
    System.out.println(docNumbersWithTerm);
}
The Terms instance from IndexReader.getTermVector acts as if you had a single-document index, comprised entirely of the specified document. Since there is only one document to consider in this context, you will always get docFreq() = 1. You can instead get the docFreq from the full index using the IndexReader.docFreq method:
int docNumbersWithTerm = reader.docFreq(new Term(field, termsEnum.term()));
System.out.println(docNumbersWithTerm);
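Putting it together, a corrected version of the loop might look like this (still walking the term vector's terms, but asking the whole index for each frequency):
Terms terms = reader.getTermVector(docId, field);
TermsEnum termsEnum = terms.iterator(null);
BytesRef termText;
while ((termText = termsEnum.next()) != null) {
    // docFreq over the whole index, not just this document's term vector
    int docNumbersWithTerm = reader.docFreq(new Term(field, termText));
    System.out.println(termText.utf8ToString() + ": " + docNumbersWithTerm);
}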

Find list of terms indexed by Lucene

Is it possible to extract the list of all the terms in a Lucene index as a list of strings? I couldn't find that functionality in the docs. Thanks!
In Lucene 4 (and 5):
Terms terms = SlowCompositeReaderWrapper.wrap(directoryReader).terms("field");
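To turn that Terms instance into the list of strings the question asks for, iterate it (a sketch; "field" is a placeholder for your field name):
List<String> termList = new ArrayList<String>();
if (terms != null) {
    TermsEnum te = terms.iterator(null); // in Lucene 5 this is terms.iterator()
    BytesRef term;
    while ((term = te.next()) != null) {
        termList.add(term.utf8ToString());
    }
}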
Edit:
This seems to be the 'correct' way now (Lucene 6 and up):
LuceneDictionary ld = new LuceneDictionary( indexReader, "field" );
BytesRefIterator iterator = ld.getWordsIterator();
BytesRef byteRef = null;
while ( ( byteRef = iterator.next() ) != null )
{
    String term = byteRef.utf8ToString();
}
Lucene 3:
C#: C# Lucene get all the index
Java:
IndexReader indexReader = IndexReader.open(path);
TermEnum termEnum = indexReader.terms();
while (termEnum.next()) {
    Term term = termEnum.term();
    System.out.println(term.text());
}
termEnum.close();
indexReader.close();
Java (all terms for a specific field): How can I get the list of unique terms from a specific field in Lucene?
Python: Finding a single fields terms with Lucene (PyLucene)

Is it possible to iterate through documents stored in Lucene Index?

I have some documents stored in a Lucene index with a docId field.
I want to get all the docIds stored in the index. There is also a catch: the number of documents is about 300,000, so I would prefer to get these docIds in chunks of size 500. Is it possible to do so?
Lucene 3
IndexReader reader = // create IndexReader
for (int i = 0; i < reader.maxDoc(); i++) {
    if (reader.isDeleted(i))
        continue;
    Document doc = reader.document(i);
    String docId = doc.get("docId");
    // do something with docId here...
}
Lucene 4
Bits liveDocs = MultiFields.getLiveDocs(reader);
for (int i = 0; i < reader.maxDoc(); i++) {
    if (liveDocs != null && !liveDocs.get(i))
        continue;
    Document doc = reader.document(i);
}
See LUCENE-2600 on this page for details: https://lucene.apache.org/core/4_0_0/MIGRATE.html
There is a query class named MatchAllDocsQuery; I think it can be used in this case:
Query query = new MatchAllDocsQuery();
TopDocs topDocs = indexSearcher.search(query, RESULT_LIMIT);
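Since the question asks for chunks of 500, here is a sketch using IndexSearcher.searchAfter (available since Lucene 4) to page through all documents without re-collecting earlier hits; the indexSearcher variable and the docId field are assumed as above:
Query query = new MatchAllDocsQuery();
ScoreDoc last = null;
while (true) {
    // fetch the next chunk of 500, starting after the last hit of the previous chunk
    TopDocs page = (last == null)
            ? indexSearcher.search(query, 500)
            : indexSearcher.searchAfter(last, query, 500);
    if (page.scoreDocs.length == 0)
        break; // no more documents
    for (ScoreDoc sd : page.scoreDocs) {
        String docId = indexSearcher.doc(sd.doc).get("docId");
        // do something with docId here...
    }
    last = page.scoreDocs[page.scoreDocs.length - 1];
}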
Document numbers (or ids) are consecutive numbers from 0 to IndexReader.maxDoc()-1. These numbers are not persistent and are valid only for the open IndexReader. You can check whether a document is deleted with the IndexReader.isDeleted(int documentNumber) method.
If you use .document(i) as in the examples above and skip deleted documents, be careful when using this method to paginate results.
That is: you have a list of 10 docs per page and you need the docs for page 6. Your input might be something like this: offset=60, count=10 (documents 60 to 70).
IndexReader reader = // create IndexReader
for (int i = offset; i < offset + 10; i++) {
    if (reader.isDeleted(i))
        continue;
    Document doc = reader.document(i);
    String docId = doc.get("docId");
}
You will have problems with the deleted ones, because you should not start from offset=60 but from offset=60 plus the number of deleted documents that appear before 60.
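One way around this (a sketch, using the Lucene 4 live-docs API shown above) is to count only live documents toward the offset:
Bits liveDocs = MultiFields.getLiveDocs(reader);
int seen = 0, collected = 0;
for (int i = 0; i < reader.maxDoc() && collected < count; i++) {
    if (liveDocs != null && !liveDocs.get(i))
        continue; // deleted: does not count toward the offset
    if (seen++ < offset)
        continue; // still before the requested page
    Document doc = reader.document(i);
    String docId = doc.get("docId");
    collected++;
}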
An alternative I found is something like this:
IndexSearcher is = getIndexSearcher(); // new IndexSearcher(indexReader)
// get all results without any conditions attached
Term term = new Term([[any mandatory field name]], "*");
Query query = new WildcardQuery(term);
TopScoreDocCollector topCollector = TopScoreDocCollector.create([[int max hits to get]], true);
is.search(query, topCollector);
TopDocs topDocs = topCollector.topDocs(offset, count);
Note: replace the text between [[ ]] with your own values.
I ran this on a large index with 1.5 million entries and got 10 random results in less than a second.
Agreed, it is slower, but at least you can ignore deleted documents if you need pagination.