FastVectorHighlighter with External Database - Lucene

I am using Lucene.NET 2.9 in one of my projects. I use Lucene to create indexes for documents and to search those documents. One field in my documents is text heavy, and I have stored it in my MS SQL database. So basically I search via Lucene on its indexes and then fetch the complete documents from the MS SQL database.
The problem I am facing is that I want to highlight my search query terms in the results. For that I am using FastVectorHighlighter. This particular highlighter requires the Lucene doc ID and field name in order to highlight fields. The problem is that, since this text-heavy field is not stored in the Lucene index, it is not highlighted in my search results.
Any suggestions on how to accomplish this? I could add the field to my Lucene index as well; that would resolve the problem but would make the index very heavy. Alternatively, if there is some other method to highlight the text, it would give me much more flexibility.
Thank you for reading my question,
Naveen

If you don't want to store the text in the Lucene index, you should use the Highlighter contrib.
Latest sources for it can be grabbed at https://svn.apache.org/repos/asf/incubator/lucene.net/trunk/src/contrib/Highlighter/
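For what it's worth, here is a minimal sketch of using the contrib Highlighter against text that lives outside the index. It assumes Lucene.NET 2.9 and the contrib namespace Lucene.Net.Highlight (the exact namespace may differ between contrib builds); the field name "body" and the helper class are only illustrative. The text comes back from SQL Server through your own data access code and is re-analyzed on the fly, so nothing extra has to be stored in the index:

    using System.IO;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Highlight;   // contrib Highlighter; namespace may vary by build
    using Lucene.Net.Search;

    public static class SnippetBuilder
    {
        // Builds highlighted fragments for a field that is NOT stored in the index.
        // 'query' is the query you already searched with; 'bodyText' is fetched from SQL.
        public static string Highlight(Query query, string bodyText)
        {
            var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
            var highlighter = new Highlighter(new SimpleHTMLFormatter("<b>", "</b>"),
                                              new QueryScorer(query));

            // Re-analyze the database text on the fly; no term vectors are needed.
            TokenStream tokens = analyzer.TokenStream("body", new StringReader(bodyText));
            return highlighter.GetBestFragments(tokens, bodyText, 3, " ... ");
        }
    }

Unlike FastVectorHighlighter, this approach needs neither term vectors nor stored fields, at the cost of re-analyzing the text for every hit you display.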

Related

What's the storage solution used by search engines to store indexes to enable efficient querying and scalability?

There are lots of articles on how search engines perform indexing, but I couldn't find any information on how they store these indexed records in a way that enables fast querying with scalability. Could someone explain the index storage mechanisms used in search engines, or point me to an article?
Solr is able to achieve fast search responses because, instead of searching the text directly, it searches an index. This is like retrieving pages in a book related to a keyword by scanning the index at the back of the book, as opposed to searching every word of every page of the book.
This type of index is called an inverted index, because it inverts a page-centric data structure (page->words) to a keyword-centric data structure (word->pages).
The inverted index is a central concept in Information Retrieval and Natural Language Processing. Take a document and note down all the unique words appearing in it, along with the frequency of each word; you now have a basic inverted index. Solr creates a similar inverted index of the documents posted to its core using a defined schema. The schema is a blueprint that helps Solr build the inverted index by defining a set of fields in the schema.xml file.
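To make the idea concrete (this is a toy illustration of the data structure, not how Solr or Lucene lay it out on disk), an inverted index is essentially a map from each word to the documents, and counts, in which it occurs:

    using System;
    using System.Collections.Generic;

    public class ToyInvertedIndex
    {
        // word -> (document id -> term frequency)
        private readonly Dictionary<string, Dictionary<int, int>> postings =
            new Dictionary<string, Dictionary<int, int>>();

        public void Add(int docId, string text)
        {
            var separators = new[] { ' ', ',', '.', ';', ':', '\n', '\t' };
            foreach (var word in text.ToLowerInvariant()
                                     .Split(separators, StringSplitOptions.RemoveEmptyEntries))
            {
                if (!postings.TryGetValue(word, out var docs))
                    postings[word] = docs = new Dictionary<int, int>();
                docs.TryGetValue(docId, out var tf);
                docs[docId] = tf + 1;   // bump the term frequency for this document
            }
        }

        // Which documents contain the word, and how often? (word -> pages, inverted)
        public IReadOnlyDictionary<int, int> Lookup(string word)
        {
            return postings.TryGetValue(word.ToLowerInvariant(), out var docs)
                ? docs
                : new Dictionary<int, int>();
        }
    }

Answering a query is then one lookup per query term followed by an intersection or union of the resulting document sets, which is what makes the search fast.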

search a database

Let's say I have a large database with product information. I want to create a search engine for that database, preferably with indexing and autocorrect features. How do I go about doing this? Are there any good libraries I could use, so that I don't have to start from scratch with basic SQL? Some basic recommendations and links would be much appreciated.
I am familiar with PHP, C#, VB, and Java, but I know very little about databases.
If your product database creates web pages, you would be best served using Lucene or htdig. Those will do really good text searching based on your content.
Otherwise you will want to search the large fields of your database using the full-text search capabilities in MySQL.
To do the autocomplete you will need an offline indexing process that works similarly to Google's. Create another table called wordIndex; it contains words and the number of occurrences in your product DB.
When a user starts to type, you do an AJAX lookup on this table and autocomplete based on that.
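A rough sketch of both halves in C# (the class and method names are made up; in practice the counts would be written to the wordIndex table during the offline pass, and the AJAX endpoint would query that table):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class WordIndex
    {
        // word -> number of occurrences across all product descriptions
        private readonly Dictionary<string, int> counts = new Dictionary<string, int>();

        // Offline step: feed every product description through this once,
        // then persist 'counts' into the wordIndex table.
        public void AddDocument(string text)
        {
            var separators = new[] { ' ', ',', '.', ';', ':', '(', ')', '\n', '\t' };
            foreach (var word in text.ToLowerInvariant()
                                     .Split(separators, StringSplitOptions.RemoveEmptyEntries))
            {
                counts.TryGetValue(word, out var n);
                counts[word] = n + 1;
            }
        }

        // Online step: the AJAX endpoint calls this with whatever the user has typed,
        // returning the most common words that start with that prefix.
        public IEnumerable<string> Suggest(string prefix, int max = 10)
        {
            var p = prefix.ToLowerInvariant();
            return counts.Where(kv => kv.Key.StartsWith(p))
                         .OrderByDescending(kv => kv.Value)
                         .Take(max)
                         .Select(kv => kv.Key);
        }
    }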
If MySQL FULLTEXT searching doesn't do all you need it to (databases have indexes of their own you can set up), two good choices are Solr (based on Lucene) and Sphinx. Both are often used to provide a full-featured search index on top of a MySQL database. Here's a comparison of the two.

Index strategy for tagged documents where tags can change often

In addition to text content my documents have tags which can be searched too. The problem now is that the tags change quite often and every time a tag gets added or removed I have to call UpdateDocument which is quite slow when done for hundreds of documents.
Are there any well performing strategies for storing tags that change often and need to be searched with Lucene? I have been thinking about keeping the tags in separate documents to keep them smaller but I can't figure out how to quickly search for tags AND content.
Store [tag, UID] pairs in a relational database. Every time a tag is added or updated, the corresponding row is added or updated in this table.
When performing a Lucene search that includes both tag data (stored in a database) and content (indexed in Lucene) you will need to merge the results together. One way you can do this is to:
1. Make a database query to pull up all the UIDs for the tag in question.
2. Translate those UIDs to Lucene doc IDs and set a bit in a BitSet for every matching Lucene doc ID.
3. Create a Filter that wraps the BitSet, and pass that filter in to your search.
We implemented this approach in our system, and it works well. You might need to put a cache in front of the database for performance reasons, though. The particulars of step (3) will vary depending on which version of Lucene you're using.
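Here is a sketch of step (3), assuming Lucene.NET 2.9's Filter/DocIdSet API; the "uid" field name and the shape of the UID list coming back from the database are assumptions:

    using System.Collections.Generic;
    using Lucene.Net.Index;
    using Lucene.Net.Search;
    using Lucene.Net.Util;

    // Restricts a search to documents whose "uid" field matches one of the
    // UIDs returned by the tag query against the relational database.
    public class TagFilter : Filter
    {
        private readonly IEnumerable<string> uids;

        public TagFilter(IEnumerable<string> uidsFromDatabase)
        {
            uids = uidsFromDatabase;
        }

        public override DocIdSet GetDocIdSet(IndexReader reader)
        {
            var bits = new OpenBitSet(reader.MaxDoc());
            foreach (var uid in uids)
            {
                // Translate the external UID into Lucene doc IDs and set their bits.
                TermDocs termDocs = reader.TermDocs(new Term("uid", uid));
                while (termDocs.Next())
                    bits.Set(termDocs.Doc());
                termDocs.Close();
            }
            return bits;   // OpenBitSet is itself a DocIdSet
        }
    }

    // Usage: TopDocs hits = searcher.Search(contentQuery, new TagFilter(uidsFromDb), 20);

The caching mentioned above would typically sit around the database query that produces the UID list, or around a CachingWrapperFilter if the same tag is searched repeatedly.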

SQL Server 2005 full text index query to help find noise words in content

Is there a way to query a full text index to help determine additional noise words? I would like to add some custom noise words and wondered if there's a way to analyse the index to help suggest them.
It's as simple as explained in
http://arcanecode.com/2008/05/29/creating-and-customizing-noise-words-in-sql-server-2005-full-text-search/
which shows how to do it. Coming up with the proper words, though, is hard.
I decided to look into Lucene.NET because I wasn't happy with the relevance calculations in SQL Server full text indexing.
I managed to figure out how to index all the content pretty quickly and then used Luke to find noise words. I have now edited the SQL Server noise files based on this analysis. Now I have a search solution that works reasonably well using SQL Server full text indexing, but I plan to move to Lucene.NET in the future.
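If you would rather not click through Luke by hand, a small sketch like the following (assuming a Lucene.NET 2.9 index built from the same content; the threshold is arbitrary) lists the terms that appear in a large share of documents, which are usually good noise-word candidates:

    using System;
    using Lucene.Net.Index;

    public static class NoiseWordCandidates
    {
        // Prints every term that occurs in more than 'threshold' of all documents.
        public static void Print(string indexPath, double threshold = 0.5)
        {
            IndexReader reader = IndexReader.Open(indexPath);
            try
            {
                int numDocs = reader.NumDocs();
                TermEnum terms = reader.Terms();
                while (terms.Next())
                {
                    Term term = terms.Term();
                    int docFreq = terms.DocFreq();
                    if (docFreq > numDocs * threshold)
                        Console.WriteLine("{0}\t{1}/{2}", term.Text(), docFreq, numDocs);
                }
                terms.Close();
            }
            finally
            {
                reader.Close();
            }
        }
    }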
Using SQL Server full text indexing as a base, I developed a domain-centric approach to finding relevant content using tools I understood. After some serious thinking and testing, I ended up using many measures other than term frequency and word distance in the text to determine the relevance of a search result. SQL Server full text indexing gave me a great start, and now I have a strategy I can express using Lucene that will work very well.
It would have taken me a whole lot longer to understand Lucene and develop a strategy for the search. If anyone out there is still reading this, use full text indexing to test your idea and then move to Lucene once you have a strategy you know will work for your domain.

Relevant Search Results Across Multiple Databases

I have three databases that all have the contents of several web pages in them. What would be the best way to go about searching all three and having the most relevant web page at the top of the search results?
The only way I can think of is to break down content by word count and/or create a complex set of search rules to give one piece of content priority over another. This might be more trouble than it's worth, but I was wondering if anybody knows of a way or a product out there that would be able to help me.
To further support Ivan's answer above, Lucene is the way to go. You haven't mentioned what platform you're on, so I'll point out that you can use a .NET port of it too.
If you do use Lucene, there is a very good book from Manning on the subject, which I recommend you look at.
When it comes to populating your index, you have a couple of choices. For starters you can just dump all of your text into the index and allow the engine to search on it. However, I'd recommend adding fixed fields to your index, which will allow you to support things such as partitioned searches or searches against those fields only.
To explain, let's say you have a field for the website. Then you can partition your index by restricting the search to those documents that have that website in that field.
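As a rough sketch of that (hedged: the field names are made up, and I am assuming a Lucene.NET-style API since no platform was specified), you index a website field alongside the content and then AND a term on that field into the query:

    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Search;

    public static class PartitionedSearch
    {
        // At index time: record which site/database each page came from.
        public static Document BuildDocument(string website, string pageText)
        {
            var doc = new Document();
            doc.Add(new Field("website", website, Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("content", pageText, Field.Store.NO, Field.Index.ANALYZED));
            return doc;
        }

        // At query time: combine the user's content query with a website restriction.
        public static Query Partition(Query contentQuery, string website)
        {
            var combined = new BooleanQuery();
            combined.Add(contentQuery, BooleanClause.Occur.MUST);
            combined.Add(new TermQuery(new Term("website", website)), BooleanClause.Occur.MUST);
            return combined;
        }
    }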
The other approach is to extract points of interest from your documents and allow searches on those without searching the entire index entry. Your mileage may vary with this, as the Lucene engine is very well written; it may simply be that this lets you collect your searches into more logical units, which helps with your solution.
I've done this myself and it helps when answering management questions about what exactly is searched and indexed.
HTH!
If you're using MS SQL Server, then its full-text search can return a ranking for you. I haven't used it, so you'll need to check the documentation or search online for specifics.