I have worked out how to use Lucene's Porter Stemmer but would like to also retrieve the original, un-stemmed word. So, to this end, I added a CharTermAttribute to the TokenStream before creating the PorterStemFilter, as follows:
Analyzer analyzer = new StandardAnalyzer();
TokenStream original = analyzer.tokenStream("StandardTokenStream", new StringReader(inputText));
TokenStream stemmed = new PorterStemFilter(original);
CharTermAttribute originalWordAttribute = original.addAttribute(CharTermAttribute.class);
CharTermAttribute stemmedWordAttribute = stemmed.addAttribute(CharTermAttribute.class);
stemmed.reset();
while (stemmed.incrementToken()) {
    System.out.println(stemmedWordAttribute + " " + originalWordAttribute);
}
Unfortunately, both attributes return the stemmed word.
Is there a way to get the original word as well?
Lucene's PorterStemFilter can be combined with Lucene's KeywordRepeatFilter. KeywordRepeatFilter emits each incoming token twice at the same position - once marked as a keyword, once not - and PorterStemFilter leaves keyword-marked tokens unstemmed. The stream therefore ends up containing both the original and the stemmed form of each word.
Modifying your approach:
Analyzer analyzer = new StandardAnalyzer();
TokenStream original = analyzer.tokenStream("StandardTokenStream", new StringReader(inputText));
TokenStream repeated = new KeywordRepeatFilter(original);
TokenStream stemmed = new PorterStemFilter(repeated);
CharTermAttribute stemmedWordAttribute = stemmed.addAttribute(CharTermAttribute.class);
stemmed.reset();
while (stemmed.incrementToken()) {
    // The keyword-marked (original) token is emitted first...
    String originalWord = stemmedWordAttribute.toString();
    // ...followed by its stemmed counterpart at the same position
    stemmed.incrementToken();
    String stemmedWord = stemmedWordAttribute.toString();
    System.out.println(originalWord + " " + stemmedWord);
}
This is fairly crude, but shows the approach.
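If you would rather not rely on the tokens arriving in strict original/stemmed pairs, you can inspect the KeywordAttribute instead - KeywordRepeatFilter marks the preserved, unstemmed copy as a keyword. A minimal sketch of that variant:

CharTermAttribute termAttribute = stemmed.addAttribute(CharTermAttribute.class);
KeywordAttribute keywordAttribute = stemmed.addAttribute(KeywordAttribute.class);
stemmed.reset();
while (stemmed.incrementToken()) {
    // isKeyword() is true for the unstemmed copy, false for the stemmed one
    String label = keywordAttribute.isKeyword() ? "original" : "stemmed";
    System.out.println(label + ": " + termAttribute);
}
stemmed.end();
stemmed.close();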
Example input:
testing giraffe book passing
Resulting output:
testing test
giraffe giraff
book book
passing pass
For each pair of tokens, if the second matches the first (book book), then there was no stemming.
Normally, you would use this with RemoveDuplicatesTokenFilter to remove the duplicate book term - but if you do that I think it becomes much harder to track the stemmed/unstemmed pairs - so for your specific scenario, I did not use that de-duplication filter.
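For reference, the de-duplicating chain would look something like the following sketch - note that for a word like book only one token survives, which is exactly why pairing breaks down:

TokenStream repeated = new KeywordRepeatFilter(original);
TokenStream stemmed = new PorterStemFilter(repeated);
// Removes the second token when stemming produced no change (e.g. "book book" -> "book")
TokenStream deduplicated = new RemoveDuplicatesTokenFilter(stemmed);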
Related
For fun and learning I am trying to build a part-of-speech (POS) tagger with OpenNLP and Lucene 7.4. The goal is that, once indexed, I can search for a sequence of POS tags and find all sentences that match the sequence. I already have the indexing part working, but I am stuck on the query part. I am aware that Solr might have some functionality for this, and I already checked the code (which was not so self-explanatory after all). But my goal is to understand and implement this in Lucene 7, not in Solr, as I want to be independent of any search engine on top.
Idea
Input sentence 1: The quick brown fox jumped over the lazy dogs.
Applying the Lucene OpenNLP tokenizer results in: [The][quick][brown][fox][jumped][over][the][lazy][dogs][.]
Next, applying Lucene OpenNLP POS tagging results in: [DT][JJ][JJ][NN][VBD][IN][DT][JJ][NNS][.]
Input sentence 2: Give it to me, baby!
Applying the Lucene OpenNLP tokenizer results in: [Give][it][to][me][,][baby][!]
Next, applying Lucene OpenNLP POS tagging results in: [VB][PRP][TO][PRP][,][UH][.]
Query: JJ NN VBD matches part of sentence 1, so sentence 1 should be returned. (At this point I am only interested in exact matches, i.e. let's leave aside partial matches, wildcards etc.)
Indexing
First, I created my own class com.example.OpenNLPAnalyzer:
public class OpenNLPAnalyzer extends Analyzer {
    protected TokenStreamComponents createComponents(String fieldName) {
        try {
            ResourceLoader resourceLoader = new ClasspathResourceLoader(ClassLoader.getSystemClassLoader());
            TokenizerModel tokenizerModel = OpenNLPOpsFactory.getTokenizerModel("en-token.bin", resourceLoader);
            NLPTokenizerOp tokenizerOp = new NLPTokenizerOp(tokenizerModel);
            SentenceModel sentenceModel = OpenNLPOpsFactory.getSentenceModel("en-sent.bin", resourceLoader);
            NLPSentenceDetectorOp sentenceDetectorOp = new NLPSentenceDetectorOp(sentenceModel);
            Tokenizer source = new OpenNLPTokenizer(
                    AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, sentenceDetectorOp, tokenizerOp);
            POSModel posModel = OpenNLPOpsFactory.getPOSTaggerModel("en-pos-maxent.bin", resourceLoader);
            NLPPOSTaggerOp posTaggerOp = new NLPPOSTaggerOp(posModel);
            // Perhaps we should also use a lower-case filter here?
            TokenFilter posFilter = new OpenNLPPOSFilter(source, posTaggerOp);
            // Very important: the POS tags are not indexed as terms, so we need to store them
            // as payloads - otherwise we cannot search on them
            TypeAsPayloadTokenFilter payloadFilter = new TypeAsPayloadTokenFilter(posFilter);
            return new TokenStreamComponents(source, payloadFilter);
        }
        catch (IOException e) {
            throw new RuntimeException(e.getMessage());
        }
    }
}
Note that we are using a TypeAsPayloadTokenFilter wrapped around OpenNLPPOSFilter. This means our POS tags will be indexed as payloads, and our query - whatever it ends up looking like - will have to search on payloads as well.
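(Before tackling the query, it can help to verify that the payloads actually made it into the index. A minimal sketch, assuming a DirectoryReader reader opened on the index and the field name "body" used further below, that dumps every term together with its payloads:)

// Iterate all terms of the "body" field and print their payloads
LeafReader leafReader = reader.leaves().get(0).reader();
TermsEnum termsEnum = leafReader.terms("body").iterator();
BytesRef term;
while ((term = termsEnum.next()) != null) {
    PostingsEnum postings = termsEnum.postings(null, PostingsEnum.PAYLOADS);
    while (postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
        for (int i = 0; i < postings.freq(); i++) {
            postings.nextPosition();
            BytesRef payload = postings.getPayload();
            System.out.println(term.utf8ToString() + " -> "
                    + (payload == null ? "(no payload)" : payload.utf8ToString()));
        }
    }
}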
Querying
This is where I am stuck. I have no clue how to query on payloads, and whatever I try does not work. Note that I am using Lucene 7; it seems that payload querying has changed several times across older versions. Documentation is extremely scarce. It's not even clear what the proper field name to query is now - is it "word" or "type" or anything else? For example, I tried this code, which does not return any search results:
// Step 1: Indexing
final String body = "The quick brown fox jumped over the lazy dogs.";
Directory index = new RAMDirectory();
OpenNLPAnalyzer analyzer = new OpenNLPAnalyzer();
IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
IndexWriter writer = new IndexWriter(index, indexWriterConfig);
Document document = new Document();
document.add(new TextField("body", body, Field.Store.YES));
writer.addDocument(document);
writer.close();
// Step 2: Querying
final int topN = 10;
DirectoryReader reader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(reader);
final String fieldName = "body"; // What is the correct field name here? "body", or "type", or "word" or anything else?
final String queryText = "JJ";
Term term = new Term(fieldName, queryText);
SpanQuery match = new SpanTermQuery(term);
BytesRef pay = new BytesRef("type"); // Don't understand what to put here as an argument
SpanPayloadCheckQuery query = new SpanPayloadCheckQuery(match, Collections.singletonList(pay));
System.out.println(query.toString());
TopDocs topDocs = searcher.search(query, topN);
Any help is very much appreciated here.
Why don't you use TypeAsSynonymFilter instead of TypeAsPayloadTokenFilter and just make a normal query? TypeAsSynonymFilter injects each token's type (here, the POS tag) as an extra token at the same position, so the tags become ordinary searchable terms - whereas with TypeAsPayloadTokenFilter the tags live only in payloads and a SpanTermQuery for "JJ" can never match. So in your Analyzer:
:
TokenFilter posFilter = new OpenNLPPOSFilter(source, posTaggerOp);
TypeAsSynonymFilter typeAsSynonymFilter = new TypeAsSynonymFilter(posFilter);
return new TokenStreamComponents(source, typeAsSynonymFilter);
And on the indexing side:
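(The snippets below reference a FIELD constant that the original answer elides; going by the query output further down, which prints body:"JJ NN VBD", it is presumably:)

static final String FIELD = "body"; // inferred from the "body:" prefix in the query output below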
static Directory index() throws Exception {
    Directory index = new RAMDirectory();
    OpenNLPAnalyzer analyzer = new OpenNLPAnalyzer();
    IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
    IndexWriter writer = new IndexWriter(index, indexWriterConfig);
    writer.addDocument(doc("The quick brown fox jumped over the lazy dogs."));
    writer.addDocument(doc("Give it to me, baby!"));
    writer.close();
    return index;
}

static Document doc(String body) {
    Document document = new Document();
    document.add(new TextField(FIELD, body, Field.Store.YES));
    return document;
}
And on the searching side - note that the query-time analyzer is a plain WhitespaceAnalyzer, since running the OpenNLP analysis chain over a query string like "JJ NN VBD" would make no sense:
static void search(Directory index, String searchPhrase) throws Exception {
    final int topN = 10;
    DirectoryReader reader = DirectoryReader.open(index);
    IndexSearcher searcher = new IndexSearcher(reader);
    QueryParser parser = new QueryParser(FIELD, new WhitespaceAnalyzer());
    Query query = parser.parse(searchPhrase);
    System.out.println(query);
    TopDocs topDocs = searcher.search(query, topN);
    System.out.printf("%s => %d hits\n", searchPhrase, topDocs.totalHits);
    for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
        Document doc = searcher.doc(scoreDoc.doc);
        System.out.printf("\t%s\n", doc.get(FIELD));
    }
}
And then use them like this:
public static void main(String[] args) throws Exception {
    Directory index = index();
    search(index, "\"JJ NN VBD\""); // search the sequence of POS tags
    search(index, "\"brown fox\""); // search a phrase
    search(index, "\"fox brown\""); // search a phrase (no hits)
    search(index, "baby");          // search a word
    search(index, "\"TO PRP\"");    // search the sequence of POS tags
}
The result looks like this:
body:"JJ NN VBD"
"JJ NN VBD" => 1 hits
The quick brown fox jumped over the lazy dogs.
body:"brown fox"
"brown fox" => 1 hits
The quick brown fox jumped over the lazy dogs.
body:"fox brown"
"fox brown" => 0 hits
body:baby
baby => 1 hits
Give it to me, baby!
body:"TO PRP"
"TO PRP" => 1 hits
Give it to me, baby!
In Lucene 4.6.0 there was a method extractTerms that provided the extraction of terms from a query (Query 4.6.0). However, as of Lucene 6.2.1 it no longer exists (Query Lucene 6.2.1). Is there a valid alternative for it?
What I'd need is to parse the terms (and corresponding fields) of a Query built by QueryParser.
Maybe not the best answer, but one way is to use the same analyzer and tokenize the query string (note that this gives you the raw tokens only, not the per-field mapping):
Analyzer anal = new StandardAnalyzer();
TokenStream ts = anal.tokenStream("title", query); // string query
CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
ts.reset();
while (ts.incrementToken()) {
    System.out.println(termAtt.toString());
}
// release resources held by the stream, then the analyzer
ts.end();
ts.close();
anal.close();
I have temporarily solved my problem with the following code. Smarter alternatives will be well accepted:
QueryParser qp = new QueryParser("title", a);
Query q = qp.parse(query);
Set<Term> termQuerySet = new HashSet<Term>();
Weight w = searcher.createWeight(q, true, 3.4f);
w.extractTerms(termQuerySet);
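A note on the same approach (a sketch; the exact createWeight signature differs between Lucene 6.x and 7.x): rewriting the query first makes this also work for wildcard or fuzzy queries, which only expand into concrete terms at rewrite time:

// Rewrite first so multi-term queries (wildcard, fuzzy, ...) expand to real terms
Query rewritten = searcher.rewrite(q);
Set<Term> terms = new HashSet<>();
// Lucene 7.x signature: createWeight(Query, boolean needsScores, float boost)
Weight weight = searcher.createWeight(rewritten, false, 1.0f);
weight.extractTerms(terms);
for (Term t : terms) {
    System.out.println(t.field() + ":" + t.text());
}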
I am adding the document to the Lucene index as follows:
Document doc = new Document();
String stringObj = (String)field.get(obj);
doc.add(new TextField(fieldName, stringObj.toLowerCase(), org.apache.lucene.document.Field.Store.YES));
indexWriter.addDocument(doc);
I am doing a wildcard search as follows:
searchTerm = "*" + searchTerm + "*";
term = new Term(field, searchTerm.toLowerCase());
Query query = new WildcardQuery(term);
TotalHitCountCollector collector = new TotalHitCountCollector();
indexSearcher.search(query, collector);
if (collector.getTotalHits() > 0) {
    TopDocs hits = indexSearcher.search(query, collector.getTotalHits());
}
When I have a string with the value "this", it does not get added to the index, hence I get no result when searching for "this". I am using a StandardAnalyzer.
Common English terms such as prepositions and pronouns are marked as stop words and omitted before indexing - StandardAnalyzer uses a default English stop-word list that includes "this". You can define a custom analyzer, or pass a custom stop-word list to your analyzer. That way you can omit words that you don't want indexed and keep the stop words that you need.
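For example, StandardAnalyzer has a constructor that takes the stop-word set to use - a minimal sketch (passing an empty set disables stop-word removal entirely):

// Keep every token: no stop words at all
Analyzer keepEverything = new StandardAnalyzer(CharArraySet.EMPTY_SET);

// Or supply your own, smaller stop list ("this" is deliberately not in it)
CharArraySet myStopWords = new CharArraySet(Arrays.asList("a", "an", "the"), true);
Analyzer customStops = new StandardAnalyzer(myStopWords);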
I've implemented Lucene-based software to index more than 10 million person names, and these names can be written in different ways like "Luíz" and "Luis". The index was created using the phonetic values of the respective tokens (a custom analyzer was created).
Currently, I'm using QueryParser to query for a given name with good results. But, in the book "Lucene in Action" is mentioned that SpanNearQuery can improve my queries using the proximity of tokens. I've played with the SpanNearQuery against a non-phonetic index of name and the results were superior compared to QueryParser.
As we should query using the same analyzer that was used for indexing, I couldn't find out how I can use my custom phonetic analyzer and SpanNearQuery at the same time - or, rephrasing:
how can I use SpanNearQuery on the phonetic index?
Thanks in advance.
My first thought is: Wouldn't a phrase query with slop do the job? That would certainly be the easiest way:
"term1 term2"~5
This will use your phonetic analyzer, and produce a proximity query with the resulting tokens.
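Something like this sketch, assuming your field is named "name", that phoneticAnalyzer is your custom analyzer, and that the names are just examples:

QueryParser parser = new QueryParser("name", phoneticAnalyzer);
// Proximity query: the two (phonetically analyzed) terms within 5 positions of each other
Query query = parser.parse("\"luiz silva\"~5");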
So, if you really do need to use SpanQueries here (perhaps you are using fuzzy queries or wildcards or some such, or PhraseQuery has been leering menacingly at you and you want nothing more to do with it), you'll need to do the analysis yourself. You can do this by getting a TokenStream from Analyzer.tokenStream, and iterating through the analyzed tokens.
If you are using a phonetic algorithm that produces a single code per term (soundex, for example):
SpanNearQuery.Builder nearBuilder = new SpanNearQuery.Builder("text", true);
nearBuilder.setSlop(4);
TokenStream stream = analyzer.tokenStream("text", queryStringToParse);
stream.addAttribute(CharTermAttribute.class);
stream.reset();
while (stream.incrementToken()) {
    CharTermAttribute token = stream.getAttribute(CharTermAttribute.class);
    nearBuilder.addClause(new SpanTermQuery(new Term("text", token.toString())));
}
Query finalQuery = nearBuilder.build();
stream.close();
If you are using a double metaphone, where you can have 1-2 terms at the same position, it's a bit more complex, as you'll need to consider those position increments:
SpanNearQuery.Builder nearBuilder = new SpanNearQuery.Builder("text", true);
nearBuilder.setSlop(4);
TokenStream stream = analyzer.tokenStream("text", "through and through");
stream.addAttribute(CharTermAttribute.class);
stream.addAttribute(PositionIncrementAttribute.class);
stream.reset();
String queuedToken = null;
while (stream.incrementToken()) {
    CharTermAttribute token = stream.getAttribute(CharTermAttribute.class);
    PositionIncrementAttribute increment = stream.getAttribute(PositionIncrementAttribute.class);
    if (increment.getPositionIncrement() == 0) {
        nearBuilder.addClause(new SpanOrQuery(
                new SpanTermQuery(new Term("text", queuedToken)),
                new SpanTermQuery(new Term("text", token.toString()))
        ));
        queuedToken = null;
    }
    else if (increment.getPositionIncrement() >= 1 && queuedToken != null) {
        nearBuilder.addClause(new SpanTermQuery(new Term("text", queuedToken)));
        queuedToken = token.toString();
    }
    else {
        queuedToken = token.toString();
    }
}
if (queuedToken != null) {
    nearBuilder.addClause(new SpanTermQuery(new Term("text", queuedToken)));
}
Query finalQuery = nearBuilder.build();
stream.close();
I can't figure out how to make a phrase query work. It returns exact matches, but the slop option doesn't seem to make a difference.
Here's my code:
static void Main(string[] args)
{
    using (Directory directory = new RAMDirectory())
    {
        Analyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
        using (IndexWriter writer = new IndexWriter(directory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED))
        {
            // index a few documents
            writer.AddDocument(createDocument("1", "henry morgan"));
            writer.AddDocument(createDocument("2", "henry junior morgan"));
            writer.AddDocument(createDocument("3", "henry immortal jr morgan"));
            writer.AddDocument(createDocument("4", "morgan henry"));
        }
        // search for documents that have "henry morgan" in them
        String sentence = "henry morgan";
        IndexSearcher searcher = new IndexSearcher(directory, true);
        PhraseQuery query = new PhraseQuery()
        {
            //allow inverse order
            Slop = 3
        };
        query.Add(new Term("contents", sentence));
        // display search results
        List<string> results = new List<string>();
        Console.WriteLine("Looking for \"{0}\"...", sentence);
        TopDocs topDocs = searcher.Search(query, 100);
        foreach (ScoreDoc scoreDoc in topDocs.ScoreDocs)
        {
            var matchedContents = searcher.Doc(scoreDoc.Doc).Get("contents");
            results.Add(matchedContents);
            Console.WriteLine("Found: {0}", matchedContents);
        }
    }
}

private static Document createDocument(string id, string content)
{
    Document doc = new Document();
    doc.Add(new Field("id", id, Field.Store.YES, Field.Index.NOT_ANALYZED));
    doc.Add(new Field("contents", content, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
    return doc;
}
I thought that all options except document with id=3 are supposed to match, but only the first one does. Did I miss something?
From Lucene in Action, 2nd edition, section 3.4.6, "Searching by phrase: PhraseQuery":
PhraseQuery uses this information to locate documents where terms are within a certain distance of one another
Sure, a plain TermQuery would do the trick to locate this document
knowing either of those words, but in this case we only want documents
that have phrases where the words are either exactly side by side
(quick fox) or have one word in between (quick [irrelevant] fox)
So a PhraseQuery actually operates on individual terms, and the sample code in that chapter proves it as well. Since you use StandardAnalyzer, "henry morgan" will become the two tokens henry and morgan after analysis. Therefore, you cannot add "henry morgan" as a single Term.
/*
 * Sets the number of other words permitted between words
 * in the query phrase. If zero, then this is an exact phrase search.
 */
public void setSlop(int s) { slop = s; }
The definition of setSlop may further explain the case.
After a little change to your code, I got it nailed:
// code in Scala
val query = new PhraseQuery();
query.setSlop(3)
List("henry", "morgan").foreach { word =>
query.add(new Term("contents", word))
}
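The same fix expressed in Java (a sketch mirroring the Scala above; in your C# code the equivalent is calling query.Add once per word):

PhraseQuery query = new PhraseQuery();
query.setSlop(3);
// one Term per analyzed token, not one Term for the whole phrase
query.add(new Term("contents", "henry"));
query.add(new Term("contents", "morgan"));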
In this case, all four documents will be matched. If you have any further problems, I suggest you read that chapter in Lucene in Action, 2nd edition. That might help.