What techniques does Solr use to index files?

As you know, there are different techniques to index documents for search engines, such as the inverted index, distributed dynamic indexing, semantic indexing, n-gram indexing, context indexing, big data, and multilingual indexing.
I am working with Solr now. I wonder which techniques Solr uses to index documents, and how does Solr (or Lucene) use them?

First - this is a very wide area, and most of the terms you're listing aren't index types. They describe product features (or buzzwords) that could be supported regardless of how the index is built behind the scenes.
Solr uses Lucene - which at the core is an inverted index.
The index stores statistics about terms in order to make term-based search more efficient. Lucene's index falls into the family of indexes known as an inverted index. This is because it can list, for a term, the documents that contain it. This is the inverse of the natural relationship, in which documents list terms.
There are also many support structures in place to make Lucene even more efficient for certain queries and features. One such feature is DocValues support, which can be described as a column-oriented store of document -> term mappings to speed up things like faceting.
You can see most of these support features in the Codecs API doc for Lucene 6.3.0. As it's quite a large list, I'll leave it out of the answer itself.
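To make the two structures concrete, here is a minimal sketch against the Lucene Java API (class names as in the Lucene 6.x line referenced above; the index path and field names are made up for illustration). The TextField feeds the inverted index, while the SortedSetDocValuesField populates the column-oriented DocValues structure:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.SortedSetDocValuesField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.BytesRef;

    import java.nio.file.Paths;

    public class IndexSketch {
        public static void main(String[] args) throws Exception {
            try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/demo-index"));
                 IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                Document doc = new Document();
                // TextField is analyzed into terms that feed the inverted index (term -> documents).
                doc.add(new TextField("body", "Lucene stores terms in an inverted index", Field.Store.YES));
                // DocValues is the column-oriented sidecar (document -> value), used for faceting and sorting.
                doc.add(new SortedSetDocValuesField("category", new BytesRef("search")));
                writer.addDocument(doc);
            }
        }
    }

At search time, the inverted index answers "which documents contain this term?", while DocValues answers "which value does this document carry?", which is why faceting and sorting lean on the latter.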

To answer "which techniques": under the hood, Solr uses the Lucene APIs, and Lucene's indexing technique is inverted indexing. Solr is essentially a complete application with an infrastructure wrapper, but the underlying document-indexing technique is the one provided by the Lucene APIs.
How does Solr (or Lucene) use these techniques?
Here is a nice overview of Lucene indexing for beginners. It's a very simplistic overview, but it explains the basics.
Since Solr is a product, most of its available documentation is functional (it does not explain the actual indexing techniques), and since raw usage of Lucene is minimal, the Lucene documentation is not up to the mark; most of the time, one needs to dig into the Lucene code or API documentation to understand how Lucene works.
Hope it helps!

Related

What's the storage solution used by search engines to store indexes to enable efficient querying and scalability?

There are lots of articles on how search engines perform indexing, but I couldn't find any information on how they store these indexed records in a way that enables fast querying with scalability. Could someone explain the index storage mechanisms used in search engines, or point to any article?
Solr is able to achieve fast search responses because, instead of searching the text directly, it searches an index instead. This is like retrieving pages in a book related to a keyword by scanning the index at the back of a book, as opposed to searching every word of every page of the book.
This type of index is called an inverted index, because it inverts a page-centric data structure (page->words) to a keyword-centric data structure (word->pages).
The inverted index is a major term in the domains of information retrieval and natural language processing. Take a document and note down all the unique words appearing in it, as well as the frequency of each word; you now have your own inverted index. Solr creates a similar inverted index of the documents posted to its core using a defined schema. The schema is a blueprint that helps Solr create the inverted index of the documents by giving a set of predefined fields in the schema.xml file.
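As a toy illustration of that recipe (plain Java, deliberately simplistic; Lucene's real index adds positions, skip lists, compression, and on-disk segment files), an inverted index is just a map from each term to the documents containing it, with per-document frequencies:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class ToyInvertedIndex {
        // term -> (docId -> term frequency): the keyword-centric mapping described above
        private final Map<String, Map<Integer, Integer>> index = new HashMap<>();

        public void add(int docId, String text) {
            for (String term : text.toLowerCase().split("\\W+")) {
                if (term.isEmpty()) continue;
                index.computeIfAbsent(term, t -> new HashMap<>())
                     .merge(docId, 1, Integer::sum);
            }
        }

        // Look up the postings for a term: which documents contain it, and how often.
        public Map<Integer, Integer> postings(String term) {
            return index.getOrDefault(term.toLowerCase(), Collections.emptyMap());
        }

        public static void main(String[] args) {
            ToyInvertedIndex idx = new ToyInvertedIndex();
            idx.add(1, "Solr creates an inverted index");
            idx.add(2, "the index inverts the document to term mapping");
            System.out.println(idx.postings("index")); // {1=1, 2=1}
        }
    }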

Index data structure for RavenDB

Both MongoDB and CouchDB use B-trees as the underlying data structure for storing indexes. Does anyone know what the equivalent is for RavenDB? There is nothing mentioned about this in the documentation. Thanks!
RavenDB uses a Lucene index.
In order to allow fast queries over your indexes, RavenDB processes them in the background, executing the queries against the stored documents and persisting the results to a Lucene index. Lucene is a full text search engine library (Raven uses the .NET version) which allows us to perform lightning fast full text searches.
You can read more about indexing in the documentation: How the indexes work

Solr on a .NET site

I've got an ASP.NET site backed by a SQL Server database. I've been using Lucene.NET to index and search the database. I'm adding faceted search navigation to the results page (the facets are a hierarchical category tree). I asked yesterday to make sure I was using the right technique for faceting. All I've gotten so far is a suggestion to use Solr, but Solr does a lot of things I don't need.
I would really like to know from anyone who is familiar with Solr's source code whether Solr's facet processing is terribly different from the one described here by Bert Willems. Basically, you have a Lucene filter for each facet, you get the bit array from it, and you count the set bits in the array.
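In sketch form, that counting step boils down to intersecting per-document bit arrays and counting the set bits. A rough illustration with java.util.BitSet (in real code the bits would come from each facet's Lucene filter rather than being set by hand):

    import java.util.BitSet;

    public class FacetCount {
        // 'queryBits' and 'facetBits' are per-document bit arrays: bit i set = document i matches.
        static int countFacet(BitSet queryBits, BitSet facetBits) {
            BitSet intersection = (BitSet) queryBits.clone();
            intersection.and(facetBits);        // documents matching both the query and the facet value
            return intersection.cardinality();  // number of set bits = facet count
        }

        public static void main(String[] args) {
            BitSet query = new BitSet();
            query.set(0); query.set(2); query.set(5);   // docs 0, 2, 5 match the query
            BitSet facet = new BitSet();
            facet.set(2); facet.set(3); facet.set(5);   // docs 2, 3, 5 carry the facet value
            System.out.println(countFacet(query, facet)); // prints 2
        }
    }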
I'm thinking that since mine is hierarchical to begin with, I should be able to optimize this pretty well, but I'm afraid I might be grossly underestimating the impact of this design on search performance. If Solr is no quicker, I'm not going to gain anything by using it.
I'd recommend creating a prototype project modeling your faceting needs with Solr and benchmark it against Lucene.net.
Even though faceting in Solr is very optimized (and gets new optimizations all the time, like the parallel per-segment faceting method), when using Solr there is some overhead, for example network roundtrips and response parsing.
If your code already implements Lucene.NET, performs adequately and you don't need any of Solr's additional features, then there is no need to switch to Solr. But also consider that if you choose Solr you will get faceting performance boosts for free with each new version.
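For the prototype, a minimal faceting query in SolrJ (Solr's Java client; from ASP.NET a port such as SolrNet would play the same role) might look like the following. The core URL and the category field are hypothetical:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FacetPrototype {
        public static void main(String[] args) throws Exception {
            // Hypothetical local core "products"; "category" is a made-up facet field.
            try (HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/products").build()) {
                SolrQuery query = new SolrQuery("*:*");
                query.setFacet(true);
                query.addFacetField("category"); // ask Solr to count documents per category value
                query.setRows(0);                // we only want facet counts, not document results
                QueryResponse response = solr.query(query);
                for (FacetField.Count c : response.getFacetField("category").getValues()) {
                    System.out.println(c.getName() + ": " + c.getCount());
                }
            }
        }
    }

Benchmarking this round trip against your in-process Lucene.NET counting loop would show whether the network and parsing overhead matters for your workload.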

Why are document stores like Lucene / Solr not included in NoSQL conversations?

All of us have come across the hype around NoSQL solutions lately. MongoDB, CouchDB, BigTable, Cassandra, and others have been listed as NoSQL options. Here's an example:
http://architects.dzone.com/articles/what-nosql-store-should-i-use
However, three years ago a co-worker and I were using Lucene.NET as what seems to fit the description of NoSQL. We did not use it just for user-entered search queries; we used it to make a few indexed RDBMS tables extremely performant. We implemented our own .NET sort-of-equivalent-to-Solr service to manage these indexes and make them callable. When I left the company, the team switched to Solr itself. (For those not in the know, Solr is a web service that wraps Lucene with REST-callable queries and index dumps.)
What I don't understand is why Solr is not counted in the typical lists of NoSQL solution options. Am I missing something here? I assume there are technical reasons why Solr is not comparable to the likes of CouchDB, etc., and in fact I understand that CouchDB uses Lucene as its data store (yes?), but what disqualifies Solr?
I'm not asking as some kind of Solr fanboy; I just don't understand why Solr and the like don't fit the definition of NoSQL, and if Solr technically does fit the definition, what about it likely makes people pooh-pooh it? I'm asking because I'm having difficulty determining whether I should continue using Lucene-based solutions (like Solr) for the solutions I build, or if I should really do more research into these other options.
I once listened to an interview with author Ursula K. Le Guin about fiction writing. The interviewer asked her about authors who work in different genres of writing. What makes one author a romance writer, another a mystery writer, and another a science fiction writer? Le Guin responded by explaining:
Genre is about marketing, not about content.
It was an eye-opening statement.
I think the same applies to technology solutions. The NoSQL movement is attracting attention because it's full of marketing energy right now. NoSQL data stores like Hadoop, CouchDB, and MongoDB have commercial ventures backing them, pushing their solutions as new, innovative, and exciting so they can grow their business. The term "NoSQL" is a marketing brand that helps them explain their value.
You're right that Lucene/Solr is technically very similar to a NoSQL document store: it's a denormalized bag of documents (their term) with fields that aren't necessarily consistent across the collection of documents. It's indexed in a sophisticated way to allow you to search across all fields or by specific fields.
But that's not the genre Lucene uses to explain its value. Its developers don't have the same mission to grow a market and a business, since the project is managed by the Apache Foundation. They're happy to focus on the use case of full-text search, even though the technology could be used in other ways. They're following a tenet of software success: do one thing, and do it well.
After doing more Google-searching, I think this document sums it up pretty well:
https://web.archive.org/web/20100504055638/http://www.lucidimagination.com/blog/2010/04/30/nosql-lucene-and-solr/
Case in point: Lucene/Solr is NoSQL and could be considered one of NoSQL's more mature "forefathers". It just does not get the NoSQL hype it deserves because it didn't invent the term "NoSQL", and its users don't use the term, so the hype machine overlooked it.
I think the most relevant characteristic of Solr/Lucene that drops it from the NoSQL list is that, until recently, making Lucene work as a real-time system was a pain. The usual workflow for any performant application was to index incremental updates in batches, updating the index every 5 minutes, for example.
I think stimpy77 is partly right that NoSQL is a branding thing. But NoSQL also means a data storage platform that is simpler/easier than SQL-based solutions. And while Solr/Lucene shares some aspects (it stores data), I think it misses the mark to treat Solr/Lucene as primary data storage for anything that has relationships. Sure, lots of documents can be thrown into it, and powerful searches can pull them back. But as soon as you want relationships, stores such as CouchDB, which have a query syntax of some kind, do much better. Search is a band-aid solution in that case.

Think about the use case "find all documents tagged with the word 'car'". If I have some structure in my data, it's easy for me to get the documents for the tag car and pull everything back, versus relying on a search query that includes fq=tag:'car'. Search is more and more powerful the fewer relationships you have, but the more relationships you have, the better a datastore like CouchDB and its brethren serves you. That's why you still see CouchDB and friends paired with Solr, and vice versa! Let each one do what it does best.
Of course, that isn't to say you can't leverage storing your source data in Solr; that can be a powerful tool to use!
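To make the fq example above concrete, here's a minimal SolrJ sketch (the host, core name, and tag field are hypothetical):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class TagQuery {
        public static void main(String[] args) throws Exception {
            // Hypothetical local Solr core named "docs"
            try (HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/docs").build()) {
                SolrQuery query = new SolrQuery("*:*");
                query.addFilterQuery("tag:car"); // the fq=tag:car filter from the answer above
                QueryResponse response = solr.query(query);
                for (SolrDocument doc : response.getResults()) {
                    System.out.println(doc.getFieldValue("id"));
                }
            }
        }
    }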
In my opinion, the main operational differences between NoSQL stores and Solr are the following:
Solr requires an intermediate data store (a database or XML files), whereas a NoSQL store is itself a straight data store.
You cannot do constant writes to Solr (Solr 4.0 seems to bring that support); you can only index, at most, every 2 minutes and 200 records at a time (which is very slow for high-throughput writes, so you are forced to use intermediate storage).
You are required to change/define the schema when you alter what is stored in a document. NoSQL has no such definitions.
Solr indexes have performance implications as the index size grows, whereas NoSQL stores are optimized for that (or claim to be :) ).
Solr comes with the underlying Lucene search algorithms bundled, but with NoSQL you need to build them yourself. This applies to the magnificent faceted search and the blazing-fast document search that Solr provides.
Last but not least, these points are about the actual operational differences, not the marketing-strategy angle mentioned here as the reason Solr falls outside NoSQL.
Lucene/Solr: I'm going to use Solr, since Solr uses Lucene internally and has additional features. So Solr is basically an upgrade to Lucene in a new costume.
Solr is mainly used to create facets and to index plain text for a search engine.
Solr can use most databases to store its data. Keeping data only in Solr is inconsistent, since it uses disks directly.
NoSQL databases are easy to learn compared to Solr. Solr has quite a lot of configuration and concepts (for example, fields).
Performance is something we have to consider between the two. Solr provides high performance compared to other NoSQL databases.
Note: combining Solr with a database provides the best performance.
Summary: Solr is also a NoSQL datastore, a predecessor of many NoSQL databases, which didn't get the hype the others did but is still in the field due to its performance and power.

Is there a set of best practices for building a Lucene index from a relational DB?

I'm looking into using Lucene and/or Solr to provide search in an RDBMS-powered web application. Unfortunately for me, all the documentation I've skimmed deals with how to get the data out of the index; I'm more concerned with how to build a useful index. Are there any "best practices" for doing this?
Will multiple applications be writing to the database? If so, it's a bit tricky; you have to have some mechanism to identify new records to feed to the Lucene indexer.
Another point to consider is do you want one index that covers all of your tables, or one index per table. In general, I recommend one index, with a field in that index to indicate which table the record came from.
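A minimal sketch of that single-index approach using the Lucene document API (the field and table names here are invented for illustration):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;

    public class RowIndexer {
        // One Lucene Document per database row. The "table" field records the
        // source table so a single index can cover every table and still be
        // filtered per table at query time (e.g. table:products).
        static Document toDocument(String table, String primaryKey, String text) {
            Document doc = new Document();
            doc.add(new StringField("table", table, Field.Store.YES));   // not analyzed: exact filtering
            doc.add(new StringField("pk", primaryKey, Field.Store.YES)); // key to fetch the row back from the DB
            doc.add(new TextField("content", text, Field.Store.NO));     // analyzed full-text field
            return doc;
        }
    }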
Hibernate has support for full text search, if you want to search persistent objects rather than unstructured documents.
There's an OpenSymphony project called Compass of which you should be aware. I have stayed away from it myself, primarily because it seems way more complicated than search needs to be. Also, as far as I can tell from the documentation (I confess I haven't found the time necessary to read it all), it stores Lucene segments as blobs in the database. If you're familiar with the Lucene architecture, Compass implements a Lucene Directory on top of the database. I think this is the wrong approach; I would leverage the database's built-in support for indexing and implement a Lucene IndexReader instead. The same criticism applies to distributed cache implementations, etc.
I haven't explored this at all, but take a look at LuSql.
Using Solr would be straightforward as well, but there will be some DRY violations between the Solr schema.xml and your actual database schema. (FYI, Solr does support wildcards, though.)
We are rolling out our first application that uses Solr tonight. With Solr 1.3, they've included the DataImportHandler, which allows you to specify your database tables (they call them entities) along with their relationships. Once defined, a simple HTTP request will trigger an import of your data.
Take a look at the Solr wiki page for DataImportHandler for details.
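The triggering request is just an HTTP GET; a quick Java sketch (the host and core name are hypothetical, and the /dataimport path assumes the DataImportHandler is registered under that handler name, as in the wiki examples):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class TriggerImport {
        public static void main(String[] args) throws Exception {
            // Hypothetical core "mycore"; DataImportHandler is assumed to be mapped to /dataimport
            URL url = new URL("http://localhost:8983/solr/mycore/dataimport?command=full-import");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // Solr responds with the import status
                }
            }
        }
    }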
As an introduction:
Brian McCallister wrote a nice blog post: Using Lucene with OJB.