Lucene: Wildcards are missing from index

I am building a search index that contains special names containing characters like !, ?, & and +, and I have to treat the following searches differently:
me & you
me + you
But whatever I do (I tried QueryParser escaping before indexing, escaped it manually, tried different indexers...), when I check the search index with Luke, those characters do not show up (question marks, #-symbols and the like do show up).
The logic behind it is that I am doing partial searches for live suggestions (and the fields are not that large), so I split the text up into "m", "me", "+", "y", "yo" and "you" and then index it. That way it is much faster than a wildcard query search, and the index size is not a big problem.
So what I would need is for these special characters to also be inserted into the index.
This is my code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using Lucene.Net.Analysis;
using Lucene.Net.Util;

namespace AnalyzerSpike
{
    public class CustomAnalyzer : Analyzer
    {
        public override TokenStream TokenStream(string fieldName, TextReader reader)
        {
            return new ASCIIFoldingFilter(new LowerCaseFilter(new CustomCharTokenizer(reader)));
        }
    }

    public class CustomCharTokenizer : CharTokenizer
    {
        public CustomCharTokenizer(TextReader input) : base(input)
        {
        }

        public CustomCharTokenizer(AttributeSource source, TextReader input) : base(source, input)
        {
        }

        public CustomCharTokenizer(AttributeFactory factory, TextReader input) : base(factory, input)
        {
        }

        protected override bool IsTokenChar(char c)
        {
            return c != ' ';
        }
    }
}
The code to create the index:
private void InitIndex(string path, Analyzer analyzer)
{
    var writer = new IndexWriter(path, analyzer, true);
    // some multiline textbox that contains one item per line:
    var all = new List<string>(txtAllAvailable.Text.Replace("\r", "").Split('\n'));
    foreach (var item in all)
    {
        writer.AddDocument(GetDocument(item));
    }
    writer.Optimize();
    writer.Close();
}

private static Document GetDocument(string name)
{
    var doc = new Document();
    doc.Add(new Field(
        "name",
        DeNormalizeName(name),
        Field.Store.YES,
        Field.Index.ANALYZED));
    doc.Add(new Field(
        "raw_name",
        name,
        Field.Store.YES,
        Field.Index.NOT_ANALYZED));
    return doc;
}
(Code is for Lucene.Net version 2.9.x, but it is compatible with Lucene for Java.)
Thx

Finally had the time to look into it again, and it was a silly mistake in my DeNormalizeName method: it filtered out single-character parts (as it did in the beginning), and thus it dropped the plus sign whenever it was surrounded by spaces :-/
Thanks for your help though, Moleski!
private static string DeNormalizeName(string name)
{
    string answer = string.Empty;
    var wordsOnly = Regex.Replace(name, "[^\\w0-9 ]+", string.Empty);
    var filterText = (name != wordsOnly) ? name + " " + wordsOnly : name;
    foreach (var subName in filterText.Split(' '))
    {
        if (subName.Length >= 1)
        {
            for (var j = 1; j <= subName.Length; j++)
            {
                answer += subName.Substring(0, j) + " ";
            }
        }
    }
    return answer.TrimEnd();
}
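For readers following along in Java, here is a sketch of the same (fixed) prefix-expansion logic in plain Java. The class and method names are mine, not from the original code:

```java
public class DeNormalize {
    // Expands "me + you" into every prefix of each whitespace-separated part,
    // keeping single-character parts such as "+" - the very parts the buggy
    // version described above filtered out.
    public static String deNormalizeName(String name) {
        String wordsOnly = name.replaceAll("[^\\w ]+", "");
        String filterText = name.equals(wordsOnly) ? name : name + " " + wordsOnly;
        StringBuilder answer = new StringBuilder();
        for (String subName : filterText.split(" ")) {
            for (int j = 1; j <= subName.length(); j++) {
                answer.append(subName, 0, j).append(' ');
            }
        }
        return answer.toString().trim();
    }
}
```

Indexing these expanded prefixes trades index size for query speed, which matches the reasoning in the question: a plain TermQuery per keystroke instead of a wildcard query.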

Related

Searching for product codes, phone numbers in Lucene

I'm looking for general advice on how to search identifiers, product codes or phone numbers in Apache Lucene 8.x. Let's say I'm trying to search lists of product codes (like an ISBN, for example 978-3-86680-192-9). If somebody enters 9783 or 978 3 or 978-3, then 978-3-86680-192-9 should appear. The same should happen if an identifier uses any combination of letters, spaces, digits and punctuation (examples: TS 123, 123.abc). How would I do this?
I thought I could solve this with a custom analyzer that removes all the punctuation and whitespace, but the results are mixed:
public class IdentifierAnalyzer extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new KeywordTokenizer();
        TokenStream tokenStream = new LowerCaseFilter(tokenizer);
        tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("[^0-9a-z]"), "", true);
        tokenStream = new TrimFilter(tokenStream);
        return new TokenStreamComponents(tokenizer, tokenStream);
    }

    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        TokenStream tokenStream = new LowerCaseFilter(in);
        tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("[^0-9a-z]"), "", true);
        tokenStream = new TrimFilter(tokenStream);
        return tokenStream;
    }
}
So while I get the desired results when performing a PrefixQuery with TS1*, TS 1* (with whitespace) does not yield satisfactory results. When I look into the parsed query, I see that Lucene splits TS 1* into two queries: myField:TS myField:1*. WordDelimiterGraphFilter looks interesting, but I couldn't figure out how to apply it here.
This is not a comprehensive answer - but I agree that WordDelimiterGraphFilter may be helpful for this type of data. However, there could still be test cases which need additional handling.
Here is my custom analyzer, using a WordDelimiterGraphFilter:
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilterFactory;

import java.util.Map;
import java.util.HashMap;

public class IdentifierAnalyzer extends Analyzer {

    private WordDelimiterGraphFilterFactory getWordDelimiter() {
        Map<String, String> settings = new HashMap<>();
        settings.put("generateWordParts", "1");   // e.g. "PowerShot" => "Power" "Shot"
        settings.put("generateNumberParts", "1"); // e.g. "500-42" => "500" "42"
        settings.put("catenateAll", "1");         // e.g. "wi-fi" => "wifi" and "500-42" => "50042"
        settings.put("preserveOriginal", "1");    // e.g. "500-42" => "500" "42" "500-42"
        settings.put("splitOnCaseChange", "1");   // e.g. "fooBar" => "foo" "Bar"
        return new WordDelimiterGraphFilterFactory(settings);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new KeywordTokenizer();
        TokenStream tokenStream = new LowerCaseFilter(tokenizer);
        tokenStream = getWordDelimiter().create(tokenStream);
        return new TokenStreamComponents(tokenizer, tokenStream);
    }

    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        TokenStream tokenStream = new LowerCaseFilter(in);
        return tokenStream;
    }
}
It uses the WordDelimiterGraphFilterFactory helper, together with a map of parameters, to control which settings are applied.
You can see the complete list of available settings in the WordDelimiterGraphFilterFactory JavaDoc. You may want to experiment with setting/unsetting different ones.
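When experimenting with those flags, it helps to print the tokens an analyzer actually emits. Here is a small helper sketch (the class and method names are mine) using the standard TokenStream consumption contract:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class TokenDebug {
    // Runs the given text through the analyzer and collects the emitted terms,
    // following the reset() / incrementToken() / end() / close() contract.
    public static List<String> tokens(Analyzer analyzer, String field, String text) throws IOException {
        List<String> result = new ArrayList<>();
        try (TokenStream stream = analyzer.tokenStream(field, text)) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                result.add(term.toString());
            }
            stream.end();
        }
        return result;
    }
}
```

For example, `TokenDebug.tokens(new IdentifierAnalyzer(), "identifiers", "978-3-86680-192-9")` shows exactly which terms end up in the index after each settings change.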
Here is a test index builder for the following 3 input values:
978-3-86680-192-9
TS 123
123.abc
public static void buildIndex() throws IOException, FileNotFoundException, ParseException {
    final Directory dir = FSDirectory.open(Paths.get(INDEX_PATH));
    Analyzer analyzer = new IdentifierAnalyzer();
    IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
    iwc.setOpenMode(OpenMode.CREATE);
    Document doc;

    List<String> identifiers = Arrays.asList("978-3-86680-192-9", "TS 123", "123.abc");

    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
        for (String identifier : identifiers) {
            doc = new Document();
            doc.add(new TextField("identifiers", identifier, Field.Store.YES));
            writer.addDocument(doc);
        }
    }
}
This creates the tokens to be indexed for each identifier.
For querying the above indexed data I used this:
public static void doSearch() throws IOException, ParseException {
    Analyzer analyzer = new IdentifierAnalyzer();
    QueryParser parser = new QueryParser("identifiers", analyzer);
    List<String> searches = Arrays.asList("9783", "9783*", "978 3", "978-3", "TS1*", "TS 1*");
    for (String search : searches) {
        Query query = parser.parse(search);
        printHits(query, search);
    }
}

private static void printHits(Query query, String search) throws IOException {
    System.out.println("search term: " + search);
    System.out.println("parsed query: " + query.toString());
    IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(INDEX_PATH)));
    IndexSearcher searcher = new IndexSearcher(reader);
    TopDocs results = searcher.search(query, 100);
    ScoreDoc[] hits = results.scoreDocs;
    System.out.println("hits: " + hits.length);
    for (ScoreDoc hit : hits) {
        System.out.println("");
        System.out.println("  doc id: " + hit.doc + "; score: " + hit.score);
        Document doc = searcher.doc(hit.doc);
        System.out.println("  identifier: " + doc.get("identifiers"));
    }
    System.out.println("-----------------------------------------");
}
This uses the following search terms - all of which I pass into the classic query parser (though you could, of course, use more sophisticated query types via the API):
9783
9783*
978 3
978-3
TS1*
TS 1*
The only query which failed to find any matching documents was the first one:
search term: 9783
parsed query: identifiers:9783
hits: 0
This should not be a surprise, since this is a partial token, without a wildcard. The second query (with the wildcard added) found one document, as expected.
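If the user's input is known to be a prefix, an alternative is to skip the classic parser and build the query programmatically, so no wildcard syntax or escaping is involved. A sketch, reusing the `searcher` from printHits above:

```java
// Equivalent to the parsed query identifiers:9783* - matches every term in
// the "identifiers" field that starts with the given text, with no need for
// the user to type (or the code to escape) wildcard characters.
Query query = new PrefixQuery(new Term("identifiers", "9783"));
TopDocs results = searcher.search(query, 100);
```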
The final query I tested, TS 1*, found three hits - but the one we want has the best matching score:
search term: TS 1*
parsed query: identifiers:ts identifiers:1*
hits: 3
doc id: 1; score: 1.590861
identifier: TS 123
doc id: 0; score: 1.0
identifier: 978-3-86680-192-9
doc id: 2; score: 1.0
identifier: 123.abc

How to prevent hard line feed in XAML Editor.Text?

I've searched the Web for a solution but could not find one. I have a construct like the following:
<Editor IsReadOnly="True" >
<Editor.Text>
This is a lot of text.
So much text that I want to break it onto two or more lines in my XAML.
</Editor.Text>
</Editor>
The problem is that Xamarin keeps the line feed between "text." and "So" as a hard line feed in the Editor. Is there any way to prevent that and have all the text flow as though it were specified on one line? It's not the end of the world -- I can live with one long line of XAML text, but it is inconvenient.
Thanks.
You could process the Text content to remove the \n characters and any excess spaces. Here is a sample for Android (using a custom renderer); a similar approach works for iOS.
in your Android project:
[assembly: ExportRenderer(typeof(Editor), typeof(AndroidEditorRenderer))]
namespace xxxx
{
    class AndroidEditorRenderer : EditorRenderer
    {
        public AndroidEditorRenderer(Context context) : base(context)
        {
        }

        protected override void OnElementChanged(ElementChangedEventArgs<Editor> e)
        {
            base.OnElementChanged(e);
            var str = Control.Text.Replace("\n", "", StringComparison.OrdinalIgnoreCase);
            string[] ss = str.Split(' ');
            StringBuilder stringBuilder = new StringBuilder();
            foreach (var item in ss)
            {
                if (!string.IsNullOrEmpty(item))
                {
                    stringBuilder.Append(item + " ");
                }
            }
            string content = stringBuilder.ToString().Substring(0, stringBuilder.Length - 1);
            Control.Text = content;
        }
    }
}
Update: alternatively, do this in your page constructor.
public partial class YourPage : ContentPage
{
    public YourPage()
    {
        InitializeComponent();
        refill();
    }

    private void refill()
    {
        var str2 = editor.Text.Replace("\n", "");
        string[] ss = str2.Split(' ');
        StringBuilder stringBuilder = new StringBuilder();
        foreach (var item in ss)
        {
            if (!string.IsNullOrEmpty(item))
            {
                stringBuilder.Append(item + " ");
            }
        }
        string content = stringBuilder.ToString().Substring(0, stringBuilder.Length - 1);
        editor.Text = content;
    }
}

Index field is empty

I'm working with the Lucene library. I want to index some documents and generate TermVectors for them. I've written an Indexer class to create the fields of the index, but this code returns an empty field.
My index class is:
public class Indexer {

    private static File sourceDirectory;
    private static File indexDirectory;
    private String fieldtitle, fieldbody;

    public Indexer() {
        this.sourceDirectory = new File(LuceneConstants.dataDir);
        this.indexDirectory = new File(LuceneConstants.indexDir);
        fieldtitle = LuceneConstants.CONTENTS1;
        fieldbody = LuceneConstants.CONTENTS2;
    }

    public void index() throws CorruptIndexException, LockObtainFailedException, IOException {
        Directory dir = FSDirectory.open(indexDirectory.toPath());
        Analyzer analyzer = new StandardAnalyzer(StandardAnalyzer.STOP_WORDS_SET); // using stop words
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        if (indexDirectory.exists()) {
            iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
        } else {
            // Add new documents to an existing index:
            iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
        }
        IndexWriter writer = new IndexWriter(dir, iwc);
        for (File f : sourceDirectory.listFiles()) {
            Document doc = new Document();
            String[] linetext = getAllText(f);
            String title = linetext[1];
            String body = linetext[2];
            doc.add(new Field(fieldtitle, title, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
            doc.add(new Field(fieldbody, body, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
            writer.addDocument(doc);
        }
        writer.close();
    }

    public String[] getAllText(File f) throws FileNotFoundException, IOException {
        String textFileContent = "";
        String[] ar = null;
        try {
            BufferedReader in = new BufferedReader(new FileReader(f));
            for (String str : Files.readAllLines(Paths.get(f.getAbsolutePath()))) {
                textFileContent += str;
                ar = textFileContent.split("--");
            }
            in.close();
        } catch (IOException e) {
            System.out.println("File Read Error");
        }
        return ar;
    }
}
and the result of debugging is:
doc Document #534
fields ArrayList "size=0"
Static
linetext String[] #535(length=4)
title String "how ...."
body String "I created ...."
I also get another error while debugging:
Non-static method "toString" cannot be referenced from a static context.
This error happens for the file path.
Sounds like you've got an empty file, or are running into an IOException. See this part of your code:
String[] ar = null;
try {
    // Do Stuff
} catch (IOException e) {
    System.out.println("File Read Error");
}
return ar;
On an IOException, you print a message but otherwise fail to handle it, and since ar is still null you effectively guarantee you'll run into another exception (a NullPointerException) immediately afterwards. You need to decide how to handle the case where you hit an IOException, or where getAllText returns an array of length 1 or 2.
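One way to do that is to read the file once, split it once, and fail loudly when the format is unexpected. A sketch (the class name and the "at least 3 parts" threshold are my assumptions, based on the linetext[1]/linetext[2] accesses above):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class SafeReader {
    // Reads the whole file, joins the lines, splits on "--" and validates that
    // enough parts exist for linetext[1] (title) and linetext[2] (body).
    public static String[] getAllText(File f) throws IOException {
        String content = String.join("", Files.readAllLines(f.toPath()));
        String[] parts = content.split("--");
        if (parts.length < 3) {
            throw new IOException("Unexpected format in " + f
                    + ": expected at least 3 '--'-separated parts, got " + parts.length);
        }
        return parts;
    }
}
```

Letting the IOException propagate (instead of swallowing it and returning null) means the caller sees the real failure rather than a later NullPointerException.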
Also, not the issue you are currently running into, but this is almost certainly backwards:
if (indexDirectory.exists()) {
    iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
} else {
    // Add new documents to an existing index:
    iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
}
And there really isn't a need for it at all, anyway. That's what CREATE_OR_APPEND is for: it writes to an existing index, or creates the index if it isn't there. Just replace that whole bit with:
iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);

Lucene updateDocument does not work

I am using Lucene 3.6. I want to know why update does not work. Is there anything wrong?
public class TokenTest
{
    private static String IndexPath = "D:\\update\\index";
    private static Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_33);

    public static void main(String[] args) throws Exception
    {
        try
        {
            update();
            display("content", "content");
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }

    @SuppressWarnings("deprecation")
    public static void display(String keyField, String words) throws Exception
    {
        IndexSearcher searcher = new IndexSearcher(FSDirectory.open(new File(IndexPath)));
        Term term = new Term(keyField, words);
        Query query = new TermQuery(term);
        TopDocs results = searcher.search(query, 100);
        ScoreDoc[] hits = results.scoreDocs;
        for (ScoreDoc hit : hits)
        {
            Document doc = searcher.doc(hit.doc);
            System.out.println("doc_id = " + hit.doc);
            System.out.println("content: " + doc.get("content"));
            System.out.println("path: " + doc.get("path"));
        }
    }

    public static String update() throws Exception
    {
        IndexWriterConfig writeConfig = new IndexWriterConfig(Version.LUCENE_33, analyzer);
        IndexWriter writer = new IndexWriter(FSDirectory.open(new File(IndexPath)), writeConfig);

        Document document = new Document();
        Field field_name2 = new Field("path", "update_path", Field.Store.YES, Field.Index.ANALYZED);
        Field field_content2 = new Field("content", "content update", Field.Store.YES, Field.Index.ANALYZED);
        document.add(field_name2);
        document.add(field_content2);

        Term term = new Term("path", "qqqqq");
        writer.updateDocument(term, document);
        writer.optimize();
        writer.close();
        return "update_path";
    }
}
I assume you want to update your document such that field "path" = "qqqq". You have this exactly backwards (please read the documentation).
updateDocument performs two steps:
1. Find and delete any documents containing the term. In this case, none are found, because your indexed documents do not contain path:qqqq.
2. Add the new document to the index.
You appear to be doing the opposite: trying to look the document up, then add the term to it, and it doesn't work that way. What you are looking for, I believe, is something like:
Term term = new Term("content", "update");
document.removeField("path");
document.add(new Field("path", "qqqq", Field.Store.YES, Field.Index.ANALYZED));
writer.updateDocument(term, document);
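To make the delete-then-add semantics concrete, here is a hedged sketch in the Lucene 3.x API (the "id" field and its values are illustrative, not from the question). The key term must match a term actually stored in the index, which is why a NOT_ANALYZED key field is the usual pattern:

```java
// updateDocument(term, doc) atomically deletes every document containing
// `term` and then adds `doc` - it never modifies a document in place.
Term key = new Term("id", "42"); // must match an indexed term exactly
Document replacement = new Document();
replacement.add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
replacement.add(new Field("content", "new content", Field.Store.YES, Field.Index.ANALYZED));
writer.updateDocument(key, replacement);
writer.close();
```

If the key field is ANALYZED (as "path" is in the question's code), the analyzer may split or lowercase the stored value, and an exact Term lookup on the original string will silently match nothing.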

How to use Lucene 4.0 Spatial API?

I cannot find any complete examples of how to use this API. The code below is not giving any results. Any idea why?
static String spatialPrefix = "_point";
static String latField = spatialPrefix + "lat";
static String lngField = spatialPrefix + "lon";
public static void main(String[] args) throws IOException {
    SpatialLuceneExample spatial = new SpatialLuceneExample();
    spatial.addData();
    IndexReader reader = DirectoryReader.open(modules.getDirectory());
    IndexSearcher searcher = new IndexSearcher(reader);
    searchAndUpdateDocument(38.9510000, -77.4107000, 100.0, searcher, modules);
}

private void addLocation(IndexWriter writer, String name, double lat, double lng) throws IOException {
    Document doc = new Document();
    doc.add(new org.apache.lucene.document.TextField("name", name, Field.Store.YES));
    doc.add(new org.apache.lucene.document.DoubleField(latField, lat, Field.Store.YES));
    doc.add(new org.apache.lucene.document.DoubleField(lngField, lng, Field.Store.YES));
    doc.add(new org.apache.lucene.document.TextField("metafile", "doc", Field.Store.YES));
    writer.addDocument(doc);
    System.out.println("===== Added Doc to index ====");
}

private void addData() throws IOException {
    IndexWriter writer = modules.getWriter();
    addLocation(writer, "McCormick & Schmick's Seafood Restaurant", 38.9579000, -77.3572000);
    addLocation(writer, "Jimmy's Old Town Tavern", 38.9690000, -77.3862000);
    addLocation(writer, "Ned Devine's", 38.9510000, -77.4107000);
    addLocation(writer, "Old Brogue Irish Pub", 38.9955000, -77.2884000);
    //...
    writer.close();
}

private final static Logger logger = LogManager.getLogger(SpatialTools.class);

public static void searchAndUpdateDocument(double lo, double la, double dist,
        IndexSearcher searcher, LuceneModules modules) {
    SpatialContext ctx = SpatialContext.GEO;
    SpatialArgs args = new SpatialArgs(SpatialOperation.IsWithin,
            ctx.makeCircle(lo, la, DistanceUtils.dist2Degrees(dist, DistanceUtils.EARTH_MEAN_RADIUS_KM)));
    PointVectorStrategy strategy = new PointVectorStrategy(ctx, "_point");
    // RecursivePrefixTreeStrategy recursivePrefixTreeStrategy = new RecursivePrefixTreeStrategy(grid, fieldName);
    // How to use it?

    Query makeQueryDistanceScore = strategy.makeQueryDistanceScore(args);
    LuceneSearcher instance = LuceneSearcher.getInstance(modules);
    instance.getTopResults(makeQueryDistanceScore);
    // no results

    Filter geoFilter = strategy.makeFilter(args);
    try {
        Sort chainedSort = new Sort().rewrite(searcher);
        TopDocs docs = searcher.search(new MatchAllDocsQuery(), geoFilter, 10000, chainedSort);
        logger.debug("search finished, num: " + docs.totalHits);
        // no results
        for (ScoreDoc scoreDoc : docs.scoreDocs) {
            Document doc = searcher.doc(scoreDoc.doc);
            double la1 = Double.parseDouble(doc.get(latField));
            double lo1 = Double.parseDouble(doc.get(lngField)); // was latField - copy/paste bug
            double distDEG = ctx.getDistCalc().distance(args.getShape().getCenter(), lo1, la1);
            logger.debug("dist deg: " + distDEG);
            double distKM = DistanceUtils.degrees2Dist(distDEG, DistanceUtils.EARTH_MEAN_RADIUS_KM);
            logger.debug("dist km: " + distKM);
        }
    } catch (IOException e) {
        logger.error("fail to get the search result!", e);
    }
}
Did you see the javadocs? These docs in turn point to SpatialExample.java which is what you're looking for. What could I do to make them more obvious?
If you're set on using a pair of doubles as the internal index approach, then use PointVectorStrategy. However, you'll get superior filter performance, scalability-wise, if you instead use RecursivePrefixTreeStrategy. Presently, PVS does better distance sorting, though. You could use both for their respective benefits.
Just looking quickly at your example, I see you didn't use SpatialStrategy.createIndexableFields(). The intention is that you use that.
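For illustration, here is a sketch of what indexing through the strategy looks like (the variable names are mine, and the writer is assumed to exist as in addLocation above). createIndexableFields produces exactly the fields that the strategy's own makeFilter/makeQuery later expect to find:

```java
SpatialContext ctx = SpatialContext.GEO;
PointVectorStrategy strategy = new PointVectorStrategy(ctx, "_point");

Document doc = new Document();
doc.add(new TextField("name", "Ned Devine's", Field.Store.YES));
// createIndexableFields encodes the shape (x = longitude, y = latitude)
// into the lat/lon fields the strategy will later filter and sort on.
for (Field f : strategy.createIndexableFields(ctx.makePoint(-77.4107000, 38.9510000))) {
    doc.add(f);
}
writer.addDocument(doc);
```

Note the argument order: spatial4j's makePoint and makeCircle take x (longitude) first, then y (latitude), which is an easy place for queries to silently go wrong.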
See the following link for an example: http://mad4search.blogspot.in/2013/06/implementing-geospatial-search-using.html