Simple synonyms (wordA = wordB) work fine. When there are two or more synonyms (wordA = wordB = wordC ...), phrase matching only works for the first, unless the phrases have proximity modifiers.
I have a simple test case (it's delivered as an Ant project) which illustrates the problem.
Materials
You can download the test case here: mydemo.with.libs.zip (5MB)
That archive includes the Lucene 9.2 libraries which my test uses; if you prefer a copy without the JAR files, you can download that here: mydemo.zip (9KB)
You can run the test case by unzipping the archive into an empty directory and running the Ant command: ant rnsearch
Input
When indexing the documents, the following synonym list is used (permuted as necessary):
note,notes,notice,notification
subtree,sub tree,sub-tree
I have three documents, each containing a single sentence. The three sentences are:
These release notes describe a document sub tree in a simple way.
This release note describes a document subtree in a simple way.
This release notice describes a document sub-tree in a simple way.
Problem
I believe that any of the following searches should match all three documents:
release note
release notes
release notice
release notification
"release note"
"release notes"
"release notice"
"release notification"
As it happens, the first four searches are fine, but the quoted phrases demonstrate a problem.
The searches for "release note" and "release notes" match all three records, but "release notice" only matches one, and "release notification" does not match any.
However if I change the last two searches like so:
"release notice"~1
"release notification"~2
then all three documents match.
What appears to be happening is that the first synonym is being given the same index position as the term, the second synonym has the position offset by 1, the third offset by 2, etc.
I believe that all the synonyms should be given the same position so that all four phrases match without the need for proximity modifiers at all.
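One way to see exactly where the analyzer places each synonym is to dump the position increments of the token stream. The sketch below is an assumption-laden illustration, not part of the original test case: it assumes a Lucene 9.x classpath, and the field name "body" is arbitrary. A position increment of 0 means the token shares a position with the previous token; increments of 1 are what produce the offsets described above.

```java
// Sketch: list each token as "position:term" so you can see whether synonyms
// share a position (increment 0) or are offset by 1, 2, ...
// Assumes a Lucene 9.x classpath; pass whatever Analyzer you use at index time.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public class ShowPositions {
    public static List<String> positions(Analyzer analyzer, String text) throws IOException {
        List<String> out = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream("body", text)) { // "body" is an arbitrary field name
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            PositionIncrementAttribute incr = ts.addAttribute(PositionIncrementAttribute.class);
            ts.reset();
            int pos = -1;
            while (ts.incrementToken()) {
                pos += incr.getPositionIncrement(); // increment 0 = same position as previous token
                out.add(pos + ":" + term.toString());
            }
            ts.end();
        }
        return out;
    }
}
```

If all synonyms are being placed correctly, every synonym of a term should appear with the same position number as the term itself.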
Edit: here's the source of my analyzer:
public class MyAnalyzer extends Analyzer {

    public MyAnalyzer(String synlist) {
        this.synlist = synlist;
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        WhitespaceTokenizer src = new WhitespaceTokenizer();
        TokenStream result = new LowerCaseFilter(src);
        if (synlist != null) {
            result = new SynonymGraphFilter(result, getSynonyms(synlist), Boolean.TRUE);
            result = new FlattenGraphFilter(result);
        }
        return new TokenStreamComponents(src, result);
    }

    private static SynonymMap getSynonyms(String synlist) {
        boolean dedup = Boolean.TRUE;
        SynonymMap synMap = null;
        SynonymMap.Builder builder = new SynonymMap.Builder(dedup);
        int cnt = 0;
        try {
            BufferedReader br = new BufferedReader(new FileReader(synlist));
            String line;
            try {
                while ((line = br.readLine()) != null) {
                    processLine(builder, line);
                    cnt++;
                }
            } catch (IOException e) {
                System.err.println(" caught " + e.getClass() + " while reading synonym list,\n with message " + e.getMessage());
            }
            System.out.println("Synonym load processed " + cnt + " lines");
            br.close();
        } catch (Exception e) {
            System.err.println(" caught " + e.getClass() + " while loading synonym map,\n with message " + e.getMessage());
        }
        if (cnt > 0) {
            try {
                synMap = builder.build();
            } catch (IOException e) {
                System.err.println(e);
            }
        }
        return synMap;
    }

    private static void processLine(SynonymMap.Builder builder, String line) {
        boolean keepOrig = Boolean.TRUE;
        String[] terms = line.split(",");
        if (terms.length < 2) {
            System.err.println("Synonym input must have at least two terms on a line: " + line);
        } else {
            String word = terms[0];
            String[] synonymsOfWord = Arrays.copyOfRange(terms, 1, terms.length);
            addSyns(builder, word, synonymsOfWord, keepOrig);
        }
    }

    private static void addSyns(SynonymMap.Builder builder, String word, String[] syns, boolean keepOrig) {
        CharsRefBuilder synset = new CharsRefBuilder();
        SynonymMap.Builder.join(syns, synset);
        CharsRef wordp = SynonymMap.Builder.join(word.split("\\s+"), new CharsRefBuilder());
        builder.add(wordp, synset.get(), keepOrig);
    }

    private String synlist;
}
The analyzer includes synonyms when it builds the index, and does not add synonyms when it is used to process a query.
For the "note", "notes", "notice", "notification" list of synonyms:
It is possible to build an index of the above synonyms so that every query listed in the question will find all three documents - including the phrase searches without the need for any ~n proximity searches.
I see there is a separate question for the other list of synonyms "subtree", "sub tree", "sub-tree" - so I will skip those here (I expect the below approach will not work for those, but I would have to take a closer look).
The solution is straightforward, and it is based on realizing that an assumption I made (in an earlier question) about how to build the synonyms was completely incorrect:
You can place multiple synonyms of a given word at the same position as the word when building your indexed data. I incorrectly thought you needed to provide the synonyms as a list - but you can provide them one at a time as individual words.
Here is the approach:
My analyzer:
Analyzer analyzer = new Analyzer() {
    @Override
    protected Analyzer.TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        TokenStream tokenStream = source;
        tokenStream = new LowerCaseFilter(tokenStream);
        tokenStream = new ASCIIFoldingFilter(tokenStream);
        tokenStream = new SynonymGraphFilter(tokenStream, getSynonyms(), ignoreSynonymCase);
        tokenStream = new FlattenGraphFilter(tokenStream);
        return new Analyzer.TokenStreamComponents(source, tokenStream);
    }
};
The getSynonyms() method used by the above analyzer, using the note,notes,notice,notification list:
private SynonymMap getSynonyms() {
    // de-duplicate rules when loading:
    boolean dedup = Boolean.TRUE;
    // include original word in index:
    boolean includeOrig = Boolean.TRUE;
    String[] synonyms = {"note", "notes", "notice", "notification"};
    // build a synonym map where every word in the list is a synonym
    // of every other word in the list:
    SynonymMap.Builder synMapBuilder = new SynonymMap.Builder(dedup);
    for (String word : synonyms) {
        for (String synonym : synonyms) {
            if (!synonym.equals(word)) {
                synMapBuilder.add(new CharsRef(word), new CharsRef(synonym), includeOrig);
            }
        }
    }
    SynonymMap synonymMap = null;
    try {
        synonymMap = synMapBuilder.build();
    } catch (IOException ex) {
        System.err.print(ex);
    }
    return synonymMap;
}
I looked at the indexed data by using org.apache.lucene.codecs.simpletext.SimpleTextCodec, to generate human-readable indexes (just for testing purposes):
IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
iwc.setOpenMode(OpenMode.CREATE);
iwc.setCodec(new SimpleTextCodec());
This allowed me to see where the synonyms were inserted into the indexed data. So, for example, taking the word note, we see the following indexed entries:
term note
  doc 0
    freq 1
    pos 2
  doc 1
    freq 1
    pos 2
  doc 2
    freq 1
    pos 2
So, that tells us that all three documents contain note at token position 2 (the 3rd word).
And for notification we see exactly the same data:
term notification
  doc 0
    freq 1
    pos 2
  doc 1
    freq 1
    pos 2
  doc 2
    freq 1
    pos 2
We see this for all the words in the synonym list, which is why all 8 queries return all 3 documents.
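To double-check that end to end, a zero-slop phrase query can be run against the index. This is a sketch under assumptions: it presumes an open IndexSearcher over an index built as above, and the field name ("body") is illustrative, not from the original test case.

```java
// Sketch: count hits for an exact (zero-slop) phrase query.
// Assumes an open IndexSearcher; the field name passed in is illustrative.
import java.io.IOException;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PhraseQuery;

public class PhraseCheck {
    public static long countPhraseHits(IndexSearcher searcher, String field, String... words)
            throws IOException {
        PhraseQuery.Builder b = new PhraseQuery.Builder();
        int pos = 0;
        for (String w : words) {
            b.add(new Term(field, w), pos++); // consecutive positions = exact phrase, no slop
        }
        return searcher.search(b.build(), 10).totalHits.value;
    }
}
```

With the same-position synonyms indexed as shown, countPhraseHits(searcher, field, "release", "notification") should report all three documents without any ~n modifier.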
Related
I am indexing technical documentation and incorporating synonyms at index time, so that users can search with a number of alternative patterns. But only some synonyms seem to be getting into the map.
I have a text file synonyms.list which contains a series of lines like so:
note,notes,notice
subtree,sub-tree,sub tree
My analyzer and synonym map builder (I've removed try and catch wrappers to save space, but they aren't the problem):
public class TechAnalyzer extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        WhitespaceTokenizer src = new WhitespaceTokenizer();
        TokenStream result = new TechTokenFilter(new LowerCaseFilter(src));
        result = new SynonymGraphFilter(result, getSynonyms(getSynonymsList()), Boolean.TRUE);
        result = new FlattenGraphFilter(result);
        return new TokenStreamComponents(src, result);
    }

    private static SynonymMap getSynonyms(String synlist) {
        boolean dedup = Boolean.TRUE;
        SynonymMap synMap = null;
        SynonymMap.Builder builder = new SynonymMap.Builder(dedup);
        int cnt = 0;
        BufferedReader br = new BufferedReader(new FileReader(synlist));
        String line;
        while ((line = br.readLine()) != null) {
            processLine(builder, line);
            cnt++;
        }
        br.close();
        if (cnt > 0) {
            synMap = builder.build();
        }
        return synMap;
    }

    private static void processLine(SynonymMap.Builder builder, String line) {
        boolean keepOrig = Boolean.TRUE;
        String[] terms = line.split(",");
        if (terms.length > 1) {
            String word = terms[0];
            String[] synonymsOfWord = Arrays.copyOfRange(terms, 1, terms.length);
            for (String syn : synonymsOfWord) {
                addPair(builder, word, syn, keepOrig);
            }
        }
    }

    private static void addPair(SynonymMap.Builder builder, String word, String syn, boolean keepOrig) {
        CharsRef synp = SynonymMap.Builder.join(syn.split("\\s+"), new CharsRefBuilder());
        CharsRef wordp = new CharsRef(word);
        builder.add(wordp, synp, keepOrig);
        // builder.add(synp, wordp, keepOrig); // ? do I need this??
    }
}
I'm not splitting word in addPair() because (at the moment, anyway) the first term in every line of synonyms.list must be a word not a phrase.
My first question relates to that comment at the bottom of addPair(): if I am adding (word,synonym) to the map, do I also need to add (synonym,word)? Or is the map commutative? I can't tell, because of the problem I'm having which is the basis of the next question.
So... the technical documentation being indexed contains some documents which refer to "release notes", and some which refer to "release notices". There are also points described as a "release note". So I would like a search for any of "release note", "release notes", or "release notice" to match all three alternatives.
My code doesn't seem to enable this. If I index a single file which refers to "release notes", I can inspect the generated index with Luke, and I can see that the index only ever contains one synonym, not two. The same position in the index might have "note" and "notes", or "notes" and "notice", depending on the order of the words in the synonyms.list text file, but it will never have "note", "notes" and "notice".
Obviously I'm not building the map correctly, but the documentation hasn't helped me see what I am doing wrong.
If you've read this far, and can see the flaw in my code, please help me see it too!
Thanks, etc.
Our database contains thousands of numbers in various formats and what I am attempting to do is remove all punctuation at index time and store only the digits and then when a user types digits into a keyword field, only match on those digits. I thought that a custom analyzer was the way to go but I think I am missing an important step...
@Override
protected TokenStreamComponents createComponents(String fieldName) {
    log.debug("Creating Components for Analyzer...");
    final Tokenizer source = new KeywordTokenizer();
    LowerCaseFilter lcFilter = new LowerCaseFilter(source);
    PatternReplaceFilter prFilter = new PatternReplaceFilter(lcFilter,
            Pattern.compile("[^0-9]"), "", true);
    TrimFilter trimFilter = new TrimFilter(prFilter);
    return new TokenStreamComponents(source, trimFilter);
}
...
@KeywordSearch
@Analyzer(impl = com.jjkane.common.search.analyzer.PhoneNumberAnalyzer.class)
@Field(name = "phone", index = org.hibernate.search.annotations.Index.YES, analyze = Analyze.YES, store = Store.YES)
public String getPhone() {
    return this.phone;
}
This may just be ignorance on my part in attempting to do this... From all the documentation, it seems like I am on the right track, but the query never matches unless I submit (555)555-5555, an exact match for what is in my db. If I put in 5555555555, I get nothing...
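One thing worth checking is that the user's input gets the same digits-only normalization before matching: if "(555)555-5555" reaches the index lookup unmodified, it can never equal the indexed term "5555555555". A minimal sketch of applying the same rule on the query side (the class name and helper are hypothetical, mirroring the PatternReplaceFilter above):

```java
// Sketch: normalize user input with the same rule the analyzer applies at
// index time, so "(555)555-5555" and "5555555555" both become "5555555555".
// PhoneQueryUtil is a hypothetical helper, not part of the original code.
import java.util.regex.Pattern;

public final class PhoneQueryUtil {
    private static final Pattern NON_DIGITS = Pattern.compile("[^0-9]");

    public static String normalize(String userInput) {
        // lower-case, strip everything except digits, trim - same as the filter chain above
        return NON_DIGITS.matcher(userInput.toLowerCase()).replaceAll("").trim();
    }
}
```

For example, PhoneQueryUtil.normalize("(555)555-5555") returns "5555555555", which would then match the indexed term.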
I have a file containing some phrases. Using Lucene's JaroWinkler distance, it is supposed to return the phrases from that file that are most similar to my input.
Here is an example of my problem.
We have a file containing:
//phrases.txt
this is goodd
this is good
this is god
If my input is "this is good", it should return "this is good" from the file first, since its similarity score is the highest (1). But for some reason it returns only "this is goodd" and "this is god"!
Here is my code:
try {
    SpellChecker spellChecker = new SpellChecker(new RAMDirectory(), new JaroWinklerDistance());
    Dictionary dictionary = new PlainTextDictionary(new File("src/main/resources/words.txt").toPath());
    IndexWriterConfig iwc = new IndexWriterConfig(new ShingleAnalyzerWrapper());
    spellChecker.indexDictionary(dictionary, iwc, false);

    String wordForSuggestions = "this is good";
    int suggestionsNumber = 5;
    String[] suggestions = spellChecker.suggestSimilar(wordForSuggestions, suggestionsNumber, 0.8f);

    if (suggestions != null && suggestions.length > 0) {
        for (String word : suggestions) {
            System.out.println("Did you mean:" + word);
        }
    } else {
        System.out.println("No suggestions found for word:" + wordForSuggestions);
    }
} catch (IOException e) {
    e.printStackTrace();
}
suggestSimilar won't provide suggestions which are identical to the input. To quote the source code:
// don't suggest a word for itself, that would be silly
If you want to know whether wordForSuggestions is in the dictionary, use the exist method:
if (spellChecker.exist(wordForSuggestions)) {
//do what you want for an, apparently, correctly spelled word
}
Just for the sake of learning, I've created an index from one file and want to search it. I am using Lucene 4.4. I believe the indexing part is correct.
tempFileName is the name of file which contains tokens and this file has the following words :
"odd plus odd is even ## even plus even is even ## odd plus even is odd ##"
However, when I provide a query it returns nothing. I can't see what the problem would be. Any help is greatly appreciated.
Indexing part :
public void startIndexingDocument(String indexPath) throws IOException {
    Analyzer analyzer = new WhitespaceAnalyzer(Version.LUCENE_44);
    SimpleFSDirectory directory = new SimpleFSDirectory(new File(indexPath));
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_44, analyzer);
    IndexWriter writer = new IndexWriter(directory, config);
    indexDocs(writer);
    writer.close();
}

private void indexDocs(IndexWriter w) throws IOException {
    Document doc = new Document();
    File file = new File(tempFileName);
    BufferedReader br = new BufferedReader(new FileReader(tempFileName));
    Field field = new StringField(fieldName, br.readLine().toString(), Field.Store.YES);
    doc.add(field);
    w.addDocument(doc);
}
Searching part :
public void readFromIndex(String indexPath) throws IOException, ParseException {
    Analyzer anal = new WhitespaceAnalyzer(Version.LUCENE_44);
    QueryParser parser = new QueryParser(Version.LUCENE_44, fieldName, anal);
    Query query = parser.parse("odd");
    IndexReader reader = IndexReader.open(NIOFSDirectory.open(new File(indexPath)));
    IndexSearcher searcher = new IndexSearcher(reader);
    TopScoreDocCollector collector = TopScoreDocCollector.create(10, true);
    searcher.search(query, collector);
    ScoreDoc[] hits = collector.topDocs().scoreDocs;
    // display
    System.out.println("fieldName =" + fieldName);
    System.out.println("Found : " + hits.length + " hits.");
    for (int i = 0; i < hits.length; i++) {
        int docId = hits[i].doc;
        Document d = searcher.doc(docId);
        System.out.println((i + 1) + ". " + d.get(fieldName));
    }
    reader.close();
}
The problem is that you are using a StringField. StringField indexes the entire input as a single token. Good for atomic strings, like keywords, identifiers, stuff like that. Not good for full text searching.
Use a TextField.
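A minimal sketch of the difference, using a current Lucene API (ByteBuffersDirectory replaces the old RAMDirectory; field names here are illustrative): the same sentence is indexed both ways, and a single-word query only matches the TextField, because StringField stores the whole line as one term.

```java
// Sketch: index one sentence as both StringField (one token) and TextField
// (tokenized), then query a single word from it.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;

public class StringVsTextField {
    public static void main(String[] args) throws Exception {
        ByteBuffersDirectory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("asString", "odd plus odd is even", Field.Store.NO)); // one token
            doc.add(new TextField("asText", "odd plus odd is even", Field.Store.NO));     // tokenized
            w.addDocument(doc);
        }
        try (DirectoryReader r = DirectoryReader.open(dir)) {
            IndexSearcher s = new IndexSearcher(r);
            System.out.println(s.count(new TermQuery(new Term("asString", "odd")))); // 0 hits
            System.out.println(s.count(new TermQuery(new Term("asText", "odd"))));   // 1 hit
        }
    }
}
```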
A StringField is indexed as a single token, so I tried to verify this with some simple test code.
For example, suppose you have a file whose contents are a single string:
e.g. file name: data03.scd, contents: parktaeha
If you search with the query string "parktaeha", you get the result:
field name : acet, queryString parktaeha
======== start search!! ========== q=acet:parktaeha Found 1 hits. result array length :1 search result=> parktaeha
======== end search!! ==========
Here is the test code:
while ((target = in.readLine()) != null) {
    System.out.println("target:" + target);
    doc.add(new TextField("acet", target, Field.Store.YES)); // use TextField
    // TEST : doc.add(new StringField("acet", target.toString(), Field.Store.YES));
}
ref url
I can't get this to work with Lucene 4.0 and its new features... Could somebody please help me?
I have crawled a bunch of HTML documents from the web. Now I would like to count the number of distinct words in every document.
This is how I did it with Lucene 3.5 (for a single document; to process them all, I loop over all documents, each time with a new RAMDirectory containing only one doc):
Analyzer analyzer = some Lucene Analyzer;
RAMDirectory index;
index = new RAMDirectory();
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_35, analyzer);
String _words = new String();

// get somehow the String containing a certain text:
_words = doc.getPageDescription();

try {
    IndexWriter w = new IndexWriter(index, config);
    addDoc(w, _words);
    w.close();
} catch (IOException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}

try {
    // System.out.print(", count Terms... ");
    IndexReader reader = IndexReader.open(index);
    TermFreqVector[] freqVector = reader.getTermFreqVectors(0);
    if (freqVector == null) {
        System.out.println("Count words: 0");
    }
    for (TermFreqVector vector : freqVector) {
        String[] terms = vector.getTerms();
        int[] freq = vector.getTermFrequencies();
        int n = terms.length;
        System.out.println("Count words: " + n);
        ....
How can I do this with Lucene 4.0?
I'd prefer to do this using a FSDirectory instead of RAMDirectory however; I guess this is more performant if I have a quite high number of documents?
Thanks and regards
C.
Use the Fields/Terms apis.
See especially the example 'access term vector fields for a specific document'
Seeing as you are looping over all documents, if your end goal is really something like the average number of unique terms across all documents, keep reading to the 'index statistics section'. For example in that case, you can compute that efficiently with #postings / #documents: getSumDocFreq()/maxDoc()
http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/index/package-summary.html#package_description
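Following that advice, here is a sketch of counting distinct terms for one document via its term vector using the Lucene 4.x Fields/Terms APIs. It assumes the field was indexed with term vectors enabled (e.g. FieldType.setStoreTermVectors(true)); note that the TermsEnum iterator signature changed in later Lucene versions.

```java
// Sketch: count distinct terms in one document's term vector (Lucene 4.x API).
// Assumes the field was indexed with term vectors enabled.
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;

public class DistinctTermCounter {
    public static long countDistinctTerms(IndexReader reader, int docId, String field)
            throws IOException {
        Terms vector = reader.getTermVector(docId, field);
        if (vector == null) {
            return 0; // no term vector stored for this field/document
        }
        long n = vector.size(); // may be -1 if the codec cannot report it cheaply
        if (n >= 0) {
            return n;
        }
        // fall back to iterating the terms
        TermsEnum te = vector.iterator(null); // Lucene 4.x signature; later versions use iterator()
        n = 0;
        while (te.next() != null) {
            n++;
        }
        return n;
    }
}
```

Since a term vector lists each distinct term of the document once, its size is exactly the distinct-word count the question asks for.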