Lucene index update problem

I am using this function to update the index:
private static void insert_index(String url) throws Exception {
    System.out.println(url);
    IndexWriter writer = new IndexWriter(
            FSDirectory.open(new File(INDEX_DIR)),
            new StandardAnalyzer(Version.LUCENE_CURRENT),
            true,
            IndexWriter.MaxFieldLength.UNLIMITED);
    Document doc;
    String field;
    String text;
    doc = new Document();
    field = "url";
    text = url;
    doc.add(new Field(field, text, Field.Store.YES, Field.Index.ANALYZED));
    field = "tags";
    text = "url";
    doc.add(new Field(field, text, Field.Store.YES, Field.Index.ANALYZED));
    writer.addDocument(doc);
    writer.commit();
    writer.close();
}
It indexes multiple URLs, but when I search the url field it only shows the last indexed URL.

When creating a new index for the first time, the create parameter of the IndexWriter constructor has to be set to true. From then on it must be set to false, otherwise the previously saved index content is overwritten. I'd change the code to detect existing index files before creating a new IndexWriter instance.
This code can be used to work out whether the index files exist:
private static boolean indexExists(String indexPath) throws IOException {
    return IndexReader.indexExists(FSDirectory.open(new File(indexPath)));
}
Then create the IndexWriter instance like this:
IndexWriter writer = new IndexWriter(
        FSDirectory.open(new File(INDEX_DIR)),
        new StandardAnalyzer(Version.LUCENE_CURRENT),
        indexExists(INDEX_DIR) == false, // <-- This is what I mean
        IndexWriter.MaxFieldLength.UNLIMITED);
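Putting the two pieces together, a sketch of what insert_index could look like with that check in place (same Lucene 3.x API as in the question, using the indexExists helper above):
private static void insert_index(String url) throws Exception {
    IndexWriter writer = new IndexWriter(
            FSDirectory.open(new File(INDEX_DIR)),
            new StandardAnalyzer(Version.LUCENE_CURRENT),
            !indexExists(INDEX_DIR), // create only if no index exists yet
            IndexWriter.MaxFieldLength.UNLIMITED);
    Document doc = new Document();
    doc.add(new Field("url", url, Field.Store.YES, Field.Index.ANALYZED));
    doc.add(new Field("tags", "url", Field.Store.YES, Field.Index.ANALYZED));
    writer.addDocument(doc);
    writer.commit();
    writer.close();
}
With this change each call appends a new document instead of recreating the index, so all indexed URLs stay searchable, not just the last one.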

Related

Query for whether a document contains a word has a low score

I created a Lucene index and want to find all documents that contain a certain word or phrase.
When I do that, I noticed that the longer the text containing that word is, the lower the score gets.
How can I create a query that only checks for the existence of a word in my documents/fields?
This is how I created the index:
public static Directory CreateIndex(IEnumerable<WorkItemDto> workItems)
{
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);
    Directory index = new RAMDirectory();
    IndexWriter writer = new IndexWriter(index, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);
    foreach (WorkItemDto workItemDto in workItems)
    {
        Document doc = new Document();
        doc.Add(new Field("Title", workItemDto.Title, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
        //doc.Add(new NumericField("ID", Field.Store.YES, true).SetIntValue(workItemDto.Id));
        writer.AddDocument(doc);
    }
    writer.Dispose();
    return index;
}
And this is how I created the query:
StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);
Query query = new QueryParser(Version.LUCENE_30, "Title", analyzer).Parse("Some");
IndexSearcher searcher = new IndexSearcher(indexDir);
TopDocs docs = searcher.Search(query, 10);
ScoreDoc[] hits = docs.ScoreDocs;

Lucene search only works for lowercase letters

I am adding my Lucene document like the following:
final Document document = new Document();
document.add(new Field("login", user.getLogin(), Field.Store.YES, Field.Index.NO));
document.add(new Field("email", user.getEmail(), Field.Store.YES, Field.Index.ANALYZED));
document.add(new Field("firstName", user.getFirstName(), Field.Store.YES, Field.Index.ANALYZED));
document.add(new Field("lastName", user.getLastName(), Field.Store.YES, Field.Index.ANALYZED));
userIndexWriter.addDocument(document);
If I search with lowercase letters the search is successful, but if I search with capital letters the search returns nothing.
Does anybody have a clue what I am missing?
analyzer = new StandardAnalyzer(Version.LUCENE_36);
final IndexWriterConfig indexWriterConfig = new IndexWriterConfig(Version.LUCENE_36, analyzer);
final IndexWriter indexWriter = new IndexWriter(directory, indexWriterConfig);
and this is my searcher manager:
final SearcherManager searcherManager = new SearcherManager(indexWriter, true, null);
and I am searching like the following:
final BooleanQuery booleanQuery = new BooleanQuery();
final Query query1 = new PrefixQuery(new Term("email", prefix));
final Query query2 = new PrefixQuery(new Term("firstName", prefix));
final Query query3 = new PrefixQuery(new Term("lastName", prefix));
booleanQuery.add(query1, BooleanClause.Occur.SHOULD);
booleanQuery.add(query2, BooleanClause.Occur.SHOULD);
booleanQuery.add(query3, BooleanClause.Occur.SHOULD);
final SortField sortField = new SortField("firstName", SortField.STRING, true);
final Sort sort = new Sort(sortField);
final TopDocs topDocs = searcherManager.search(booleanQuery, DEFAULT_TOP_N_SEARCH_USER, sort);
Make sure you apply the same analysis to both the documents and the query. For instance, if you set the indexing analyzer to be StandardAnalyzer, then you also need to apply it to your query, like this:
QueryParser queryParser = new QueryParser(Version.LUCENE_CURRENT, "firstName", new StandardAnalyzer(Version.LUCENE_CURRENT));
try {
    Query q = queryParser.parse("Ameer");
} catch (ParseException e) {
    e.printStackTrace();
}
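If you keep the hand-built PrefixQuery instead of switching to a QueryParser, note that PrefixQuery does not run the analyzer, so the prefix has to be normalized by hand to match the lowercased terms in the index. A small sketch of that, assuming prefix is the user input from the question:
// PrefixQuery bypasses the analyzer, so lowercase the prefix the same way
// StandardAnalyzer lowercased the terms at index time
final String normalizedPrefix = prefix.toLowerCase();
final Query query1 = new PrefixQuery(new Term("email", normalizedPrefix));
final Query query2 = new PrefixQuery(new Term("firstName", normalizedPrefix));
final Query query3 = new PrefixQuery(new Term("lastName", normalizedPrefix));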

Lucene updateDocument does not work

I am using Lucene 3.6. I want to know why update does not work. Is there anything wrong?
public class TokenTest
{
    private static String IndexPath = "D:\\update\\index";
    private static Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_33);

    public static void main(String[] args) throws Exception
    {
        try
        {
            update();
            display("content", "content");
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }

    @SuppressWarnings("deprecation")
    public static void display(String keyField, String words) throws Exception
    {
        IndexSearcher searcher = new IndexSearcher(FSDirectory.open(new File(IndexPath)));
        Term term = new Term(keyField, words);
        Query query = new TermQuery(term);
        TopDocs results = searcher.search(query, 100);
        ScoreDoc[] hits = results.scoreDocs;
        for (ScoreDoc hit : hits)
        {
            Document doc = searcher.doc(hit.doc);
            System.out.println("doc_id = " + hit.doc);
            System.out.println("content: " + doc.get("content"));
            System.out.println("path: " + doc.get("path"));
        }
    }

    public static String update() throws Exception
    {
        IndexWriterConfig writeConfig = new IndexWriterConfig(Version.LUCENE_33, analyzer);
        IndexWriter writer = new IndexWriter(FSDirectory.open(new File(IndexPath)), writeConfig);
        Document document = new Document();
        Field field_name2 = new Field("path", "update_path", Field.Store.YES, Field.Index.ANALYZED);
        Field field_content2 = new Field("content", "content update", Field.Store.YES, Field.Index.ANALYZED);
        document.add(field_name2);
        document.add(field_content2);
        Term term = new Term("path", "qqqqq");
        writer.updateDocument(term, document);
        writer.optimize();
        writer.close();
        return "update_path";
    }
}
I assume you want to update your document so that field "path" = "qqqq". You have this exactly backwards (please read the documentation).
updateDocument performs two steps:
1. Find and delete any documents containing the term. In this case none are found, because your indexed documents do not contain path:qqqqq.
2. Add the new document to the index.
You appear to be doing the opposite: trying to look the document up, then add the term to it, and it doesn't work that way. What you are looking for, I believe, is something like:
Term term = new Term("content", "update");
document.removeField("path");
document.add(new Field("path", "qqqq", Field.Store.YES, Field.Index.ANALYZED));
writer.updateDocument(term, document);
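For comparison, here is a minimal sketch of the usual updateDocument pattern (my own illustration using the Lucene 3.6 APIs, assuming an open IndexWriter named writer as in update(); the "id" field and "doc-1" value are hypothetical): index each document with a term that uniquely identifies it, then pass that same term together with the replacement document.
// first indexing pass: give the document a unique, non-analyzed id term
Document original = new Document();
original.add(new Field("id", "doc-1", Field.Store.YES, Field.Index.NOT_ANALYZED));
original.add(new Field("content", "original content", Field.Store.YES, Field.Index.ANALYZED));
writer.addDocument(original);
writer.commit();

// later: delete whatever matches id:doc-1 and add the replacement in one call
Document replacement = new Document();
replacement.add(new Field("id", "doc-1", Field.Store.YES, Field.Index.NOT_ANALYZED));
replacement.add(new Field("content", "content update", Field.Store.YES, Field.Index.ANALYZED));
writer.updateDocument(new Term("id", "doc-1"), replacement);
writer.commit();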

Searching sentences in a PDF using a Lucene phrase query and PDFBox

I have used the following code for searching text in a PDF. It works fine with a single word, but for sentences such as the one in the code it reports that the text is not present, even though it is in the document. Can anyone help me resolve this?
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);
// Store the index in memory:
Directory directory = new RAMDirectory();
// To store an index on disk, use this instead:
//Directory directory = FSDirectory.open("/tmp/testindex");
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_CURRENT, analyzer);
IndexWriter iwriter = new IndexWriter(directory, config);
Document doc = new Document();
PDDocument document = null;
try {
    document = PDDocument.load(strFilepath);
} catch (IOException ex) {
    System.out.println("Exception occurred while loading the document: " + ex);
}
try {
    int i = 1;
    String name = null;
    String output = new PDFTextStripper().getText(document);
    //String text = "This is the text to be indexed";
    doc.add(new Field("contents", output, TextField.TYPE_STORED));
    iwriter.addDocument(doc);
    iwriter.close();
    // Now search the index
    DirectoryReader ireader = DirectoryReader.open(directory);
    IndexSearcher isearcher = new IndexSearcher(ireader);
    // Parse a simple query that searches for "text":
    QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "contents", analyzer);
    String sentence = "Following are the";
    PhraseQuery query = new PhraseQuery();
    String[] words = sentence.split(" ");
    for (String word : words) {
        query.add(new Term("contents", word));
    }
    ScoreDoc[] hits = isearcher.search(query, null, 1000).scoreDocs;
    if (hits.length > 0) {
        System.out.println("Searched text exists in the PDF.");
    }
    ireader.close();
    directory.close();
} catch (Exception e) {
    System.out.println("Exception: " + e.getMessage());
}
You should use the query parser to create a query from your sentence instead of building the PhraseQuery yourself. Your hand-built query contains the term "Following", which is not in the index, because the StandardAnalyzer lowercases terms during indexing, so only "following" is indexed.
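For example, something along these lines (a sketch reusing the parser, sentence, and isearcher variables already defined in the question); putting the sentence in quotes makes the parser build a phrase query from analyzed terms:
// the analyzer-aware parser lowercases the terms exactly as they were indexed
Query phrase = parser.parse("\"" + sentence + "\"");
ScoreDoc[] hits = isearcher.search(phrase, null, 1000).scoreDocs;
if (hits.length > 0) {
    System.out.println("Searched text exists in the PDF.");
}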

Lucene 4.2 analyzer while indexing fields

I am trying to index a set of documents using Lucene 4.2. I've created a custom analyzer that doesn't tokenize but does lowercase the terms, with the following code:
public class NoTokenAnalyzer extends Analyzer {

    public Version matchVersion;

    public NoTokenAnalyzer(Version matchVersion) {
        this.matchVersion = matchVersion;
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        //final Tokenizer source = new NoTokenTokenizer(matchVersion, reader);
        final KeywordTokenizer source = new KeywordTokenizer(reader);
        TokenStream result = new LowerCaseFilter(matchVersion, source);
        return new TokenStreamComponents(source, result);
    }
}
I use the analyzer to construct the index (inspired by the code provided in the Lucene documentation):
public static void IndexFile(Analyzer analyzer) throws IOException {
    boolean create = true;
    String directoryPath = "path";
    File folderToIndex = new File(directoryPath);
    File[] filesToIndex = folderToIndex.listFiles();
    Directory directory = FSDirectory.open(new File("index path"));
    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42, analyzer);
    if (create) {
        // Create a new index in the directory, removing any
        // previously indexed documents:
        iwc.setOpenMode(OpenMode.CREATE);
    } else {
        // Add new documents to an existing index:
        iwc.setOpenMode(OpenMode.CREATE_OR_APPEND);
    }
    IndexWriter writer = new IndexWriter(directory, iwc);
    for (final File singleFile : filesToIndex) {
        // process files in the directory and extract strings to index
        //..........
        String field1;
        String field2;
        // index fields
        Document doc = new Document();
        Field f1Field = new Field("f1", field1, TextField.TYPE_STORED);
        doc.add(f1Field);
        doc.add(new Field("f2", field2, TextField.TYPE_STORED));
        // add the document to the index
        writer.addDocument(doc);
    }
    writer.close();
}
The problem is that the indexed fields are not tokenized, but they are also not lowercased, i.e., it seems the analyzer is not being applied during indexing.
I can't figure out what's wrong. How can I make the analyzer work?
The code works correctly, so it might serve someone as a reference for creating a custom analyzer in Lucene 4.2 and using it for indexing and searching.
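If someone wants to verify what such an analyzer actually produces, here is a small sketch (my own, assuming Lucene 4.2 and the field name "f1" used above) that prints the tokens the analyzer emits:
Analyzer analyzer = new NoTokenAnalyzer(Version.LUCENE_42);
TokenStream stream = analyzer.tokenStream("f1", new StringReader("Some Example Text"));
CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
stream.reset();
while (stream.incrementToken()) {
    // with KeywordTokenizer + LowerCaseFilter this should print a single,
    // lowercased token: "some example text"
    System.out.println(term.toString());
}
stream.end();
stream.close();
If the output is not lowercased, then the analyzer passed to the IndexWriterConfig is not the one actually being used.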