lucene: space requirements for optimising the database?

It is my understanding that there are several options when it comes to database optimisation in Lucene:
- optimise the whole thing into one segment, which is space hungry (by at least 2×?)
- optimise into several segments
- remove deleted entries with expungeDeletes(), without changing the number of segments
Consider that a database is not held on a platter disc (mfs is in use). Does each of these operations have some bound on its space requirements?
I noticed that expungeDeletes() is no longer documented for Lucene 4.6.0; has it been removed? I'm coming from Lucene 3.0.2 / December 2011, although I'm open to upgrading to 4.6 at some point.

Manual optimization methods have now been removed in favour of the TieredMergePolicy. You can read about this in a blog post by one of the Lucene authors. In short, merging happens automatically, as it is believed that the algorithm (which knows the internal state of the index) will do a better job than the user.
p.s. I think you need to get the nomenclature right. There's no such thing as "database" in Lucene (you probably meant index?)
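For completeness, the explicit calls didn't disappear entirely; they were renamed. Here's a minimal sketch, assuming Lucene 4.6 (forceMerge() and forceMergeDeletes() replaced optimize() and expungeDeletes(); the index path and segment-size cap are illustrative):

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeDemo {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/path/to/index"));
        IndexWriterConfig cfg = new IndexWriterConfig(
                Version.LUCENE_46, new StandardAnalyzer(Version.LUCENE_46));

        // Preferred route: tune TieredMergePolicy and let it merge on its own.
        TieredMergePolicy policy = new TieredMergePolicy();
        policy.setMaxMergedSegmentMB(5 * 1024); // cap merged segments at ~5 GB
        cfg.setMergePolicy(policy);

        IndexWriter writer = new IndexWriter(dir, cfg);
        try {
            // optimize() is gone; forceMerge(1) is the explicit
            // "merge everything into one segment" call (still space hungry).
            writer.forceMerge(1);
            // expungeDeletes() became forceMergeDeletes(): it reclaims
            // deleted documents by merging only segments that contain them.
            writer.forceMergeDeletes();
        } finally {
            writer.close();
        }
    }
}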

Indexing engine

I'm developing a context-discovery system, which is a mix of searching and suggestions.
Currently I'm looking for a library for indexing.
After some investigation I settled on Lucene and Terrier, and found Indri uncomfortable to work with.
What are the downsides of each? What problems might I run into while using them?
Is it true that Terrier doesn't have incremental indexing (i.e. every time a new document is added, I need to rebuild and reindex everything)?
My requirements are:
- easy addition of new documents
- easy injection of scoring methods
- a reasonably well-defined model
And one more thing: is Terrier still active? I haven't seen any update since 10/03/2010 (Terrier changelog).
What sort of database are you going to be using? Lucene, in my experience, is much better documented than Terrier.
Here's an article comparing Lucene and Terrier:
http://text-analytics.blogspot.com/2011/05/java-based-retrieval-toolkits.html

Why are document stores like Lucene / Solr not included in NoSQL conversations?

All of us have come across the hype of no-SQL solutions lately. MongoDB, CouchDB, BigTable, Cassandra, and others have been listed as no-SQL options. Here's an example:
http://architects.dzone.com/articles/what-nosql-store-should-i-use
However, three years ago a co-worker and I were using Lucene.NET as something that seems to fit the description of no-SQL. We did not use it just for user-entered search queries; we used it to make searches over a few reindexed RDBMS tables extremely performant. We implemented our own .NET sort-of-equivalent-to-Solr service to manage these indexes and make them callable. When I left the company, the team switched to Solr itself. (For those not in the know, Solr is a web service that wraps Lucene with REST-callable queries and index dumps.)
What I don't understand is, why is Solr not counted in the typical lists of no-SQL solution options? Am I missing something here? I assume that there are technical reasons why Solr is not comparable to the likes of CouchDB, etc., and in fact I understand that CouchDB uses Lucene as its data store (yes?), but what disqualifies Solr?
I'm not asking as some kind of Solr fanboy or anything, I just don't understand why Solr and the like don't fit the definition of no-SQL, and if Solr technically does fit the definition then what about it likely makes people pooh-pooh it? I'm asking because I'm having difficulty determining whether I should continue using Lucene-based solutions (like Solr) for solutions that I build or if I should really do more research with these other options.
I once listened to an interview with the author Ursula K. LeGuin about fiction writing. The interviewer asked her about authors who work in different genres of writing. What makes one author a romance writer, another a mystery writer, and another a science fiction writer? LeGuin responded by explaining:
Genre is about marketing, not about content.
It was an eye-opening statement.
I think the same applies to technology solutions. The NoSQL movement is attracting attention because it's full of marketing energy right now. NoSQL data stores like Hadoop, CouchDB, and MongoDB have commercial ventures backing them, pushing their solutions as new, innovative, and exciting so they can grow their business. The term "NoSQL" is a marketing brand that helps them explain their value.
You're right that Lucene/Solr is technically very similar to a NoSQL document store: it's a denormalized bag of documents (their term) with fields that aren't necessarily consistent across the collection of documents. It's indexed in a sophisticated way to allow you to search across all fields or by specific fields.
But that's not the genre Lucene uses to explain its value. They don't have the same mission to grow a market and a business, since they're managed by the Apache Foundation. They're happy to focus on the use case of fulltext search, even though the technology could be used in other ways. They're following a tenet of software success: do one thing, and do it well.
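To make the "bag of documents" point concrete, here's a minimal Java sketch (assuming Lucene 4.x field classes; the field names are made up): two documents in the same index need not share any fields, and there is no schema to migrate.

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;

public class SchemalessDemo {
    public static void main(String[] args) {
        // A "book" document with its own set of fields...
        Document book = new Document();
        book.add(new StringField("isbn", "978-0441007318", Field.Store.YES));
        book.add(new TextField("title", "The Left Hand of Darkness", Field.Store.YES));

        // ...and a "post" document with entirely different fields.
        Document post = new Document();
        post.add(new TextField("body", "NoSQL is a marketing brand", Field.Store.YES));
        post.add(new StringField("author", "stimpy77", Field.Store.YES));

        // Both can go into the same IndexWriter and be searched together,
        // across all fields or by a specific field.
    }
}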
After doing more Google-searching, I think this document sums it up pretty well:
https://web.archive.org/web/20100504055638/http://www.lucidimagination.com/blog/2010/04/30/nosql-lucene-and-solr/
Case in point, Lucene/Solr is NoSql and could be considered one of NoSql's more mature "forefathers". It just does not get the NoSql hype it deserves because it didn't invent the term "no-SQL" and its users don't use the term, so the hype machine overlooked it.
I think the most relevant characteristic of Solr/Lucene that drops it from the NoSQL list is that, until recently, making Lucene work as a real-time system was a pain. The usual workflow for any performant application was to index incremental updates in batches, updating the index every 5 minutes, for example.
I think that stimpy77 is partly right about NoSQL being a branding thing. But NoSQL also means a data storage platform that is simpler/easier than SQL-based solutions. And while Solr/Lucene share some aspects (they store data), I think it misses the mark to treat Solr/Lucene as primary data storage for anything that has relationships. Sure, lots of documents can be thrown into it, and powerful search pulls them back. But as soon as you want relationships, others such as CouchDB, which have a query syntax of some kind, do much better. Search is a band-aid solution in that case. Think about the use case "find all documents tagged with the word 'car'". If I have some structure in my data, then it's easy for me to get the documents for the tag 'car' and pull everything back. Versus relying on a search query that includes fq=tag:'car'. Search is more and more powerful the fewer relationships you have; the more relationships, the better a datastore like CouchDB and its brethren is. That's why you still see CouchDB and friends paired with Solr, and vice versa! Let each one do what it does best.
Of course, that isn't to say you can't leverage storing your source data in Solr, that can be a powerful tool to use!
The main operational differences between a NoSQL store and Solr are, in my opinion, the following.
Solr requires an intermediate data store (a database or XML files), whereas a NoSQL store is itself the primary data store.
You cannot do constant writes to Solr (Solr 4.0 seems to bring that support); you can only index at most every 2 minutes and 200 records at a time, which is very slow for high-throughput writes and forces you into intermediate storage.
You are required to change/define the schema when you alter what is stored in a document; NoSQL stores have no such definitions.
Solr's performance suffers as its index size grows, whereas NoSQL stores are optimized for it (or claim to be :) ).
Solr comes with the underlying Lucene search algorithms bundled, but with a NoSQL store you would need to build them yourself. This applies to the magnificent faceted search and blazing-fast document search that Solr provides.
Last but not least, it is these differences, not the marketing strategy mentioned elsewhere here, that put Solr outside of NoSQL.
Lucene/Solr: I'm going to use Solr, since Solr uses Lucene internally and has additional features. So Solr is basically an upgrade to Lucene in a new costume.
Solr is mainly used to create facets and to index plain text for a search engine.
Solr can use most databases to store its data. It is inconsistent to keep data only in Solr, since it uses disks directly.
NoSQL databases are easy to learn compared to Solr. Solr has rather a lot of configuration and concepts (for example: fields).
Performance is something we have to consider between the two. Solr provides high performance compared to other NoSQL databases.
Note: combining Solr with a database provides the best performance.
Summary: Solr is also a NoSQL datastore, a predecessor of the NoSQL databases, which didn't get the hype of the others. But it is still in the field due to its performance and power.

Fulltext Search with InnoDB

I'm developing a high-volume web application, where part of it is a MySQL database of discussion posts that will need to grow to 20M+ rows, smoothly.
I was originally planning on using MyISAM for the tables (for the built-in fulltext search capabilities), but the thought of the entire table being locked due to a single write operation makes me shudder. Row-level locks make so much more sense (not to mention InnoDB's other speed advantages when dealing with huge tables). So, for this reason, I'm pretty determined to use InnoDB.
The problem is... InnoDB doesn't have built-in fulltext search capabilities.
Should I go with a third-party search system like Lucene (C++) / Sphinx? Do any of you database ninjas have any suggestions/guidance? LinkedIn's zoie (based on Lucene) looks like the best option at the moment, having been built around realtime capabilities (which is pretty critical for my application). I'm a little hesitant to commit yet without some insight...
(FYI: going to be on EC2 with high-memory rigs, using PHP to serve the frontend)
Along with the general phasing out of MyISAM, InnoDB full-text search (FTS) is finally available in the MySQL 5.6.4 release.
Lots of juicy details at https://dev.mysql.com/doc/refman/5.6/en/innodb-fulltext-index.html.
While other engines have lots of different features, this one is InnoDB, so it's native (which means there's an upgrade path), and that makes it a worthwhile option.
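For the curious, a minimal JDBC sketch of what 5.6.4+ enables (connection details, table, and search terms are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class InnoDbFtsDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/forum", "user", "password");
             Statement st = conn.createStatement()) {
            // FULLTEXT indexes work on InnoDB tables from MySQL 5.6.4 onward.
            st.execute("CREATE TABLE IF NOT EXISTS posts ("
                    + " id BIGINT AUTO_INCREMENT PRIMARY KEY,"
                    + " body TEXT,"
                    + " FULLTEXT KEY ft_body (body)"
                    + ") ENGINE=InnoDB");
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id FROM posts"
                    + " WHERE MATCH(body) AGAINST (? IN NATURAL LANGUAGE MODE)")) {
                ps.setString(1, "lucene sphinx");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id"));
                    }
                }
            }
        }
    }
}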
I can vouch for MyISAM fulltext being a bad option - even leaving aside the various problems with MyISAM tables in general, I've seen the fulltext stuff go off the rails and start corrupting itself and crashing MySQL regularly.
A dedicated search engine is definitely going to be the most flexible option here - store the post data in MySQL/innodb, and then export the text to your search engine. You can set up a periodic full index build/publish pretty easily, and add real-time index updates if you feel the need and want to spend the time.
Lucene and Sphinx are good options, as is Xapian, which is nice and lightweight. If you go the Lucene route, don't assume that CLucene will be better, even if you'd prefer not to wrestle with Java, although I'm not really qualified to discuss the pros and cons of either.
You should spend an hour and go through installation and test-drive of Sphinx and Lucene. See if either meets your needs, with respect to data updates.
One of the things that disappointed me about Sphinx is that it doesn't support incremental inserts very well. That is, it's very expensive to reindex after an insert, so expensive that their recommended solution is to split your data into older, unchanging rows and newer, volatile rows. So every search your app does would have to search twice: once on the larger index for old rows and again on the smaller index for recent rows. If that doesn't fit your usage patterns, then Sphinx is not a good solution (at least not in its current implementation).
I'd like to point out another possible solution you could consider: Google Custom Search. If you can apply some SEO to your web application, then outsource the indexing and search function to Google, and embed a Google search textfield into your site. It could be the most economical and scalable way to make your site searchable.
Perhaps you shouldn't dismiss MySQL's FT so quickly. Craigslist used to use it.
"MySQL’s speed and Full Text Search has enabled craigslist to serve their users .. craigslist uses MySQL to serve approximately 50 million searches per month at a rate of up to 60 searches per second."
edit
As commented below, Craigslist seems to have switched to Sphinx some time in early 2009.
Sphinx, as you point out, is quite nice for this stuff. All the work is in the configuration file. Just make sure that whichever table holds the strings has a unique integer id key, and you should be fine.
Try this; it counts the occurrences of the search string in the text column (the table name is just an example):
SELECT * FROM posts WHERE ROUND((LENGTH(text) - LENGTH(REPLACE(text, 'searchtext', ''))) / LENGTH('searchtext'), 0) != 0
You should take a look at Sphinx. It is worth a try. Its indexing is super fast, and it is distributed. You should take a look at this webinar (http://www.percona.com/webinars/2012-08-22-full-text-search-throwdown). It talks about searching and has some neat benchmarks. You may find it helpful.
If everything else fails, there's always soundex_match, which sadly isn't really fast or accurate.
For anyone stuck on an older version of MySQL / MariaDB (e.g. CentOS users) where InnoDB doesn't support fulltext searches, my solution when using InnoDB tables was to create a separate MyISAM table for the thing I wanted to search.
For example, my main InnoDB table was products with various keys and referential integrity. I then created a simple MyISAM table called product_search containing two fields, product_id and product_name where the latter was set to a FULLTEXT index. Both fields are effectively a copy of what's in the main product table.
I then search on the MyISAM table using fulltext, and do an inner join back to the InnoDB table.
The contents of the MyISAM table can be kept up-to-date via either triggers or the application's model.
I wouldn't recommend this if you have multiple tables that require fulltext, but for a single table it seems like an adequate workaround until you can upgrade.
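Roughly, the moving parts look like this (a sketch only: the trigger covers inserts, not updates/deletes, and the column names on the products side are assumed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShadowTableSync {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/shop", "user", "password");
             Statement st = conn.createStatement()) {
            // Keep the MyISAM copy in step with the InnoDB source table.
            st.execute("CREATE TRIGGER products_ai AFTER INSERT ON products"
                    + " FOR EACH ROW INSERT INTO product_search (product_id, product_name)"
                    + " VALUES (NEW.id, NEW.name)");
            // Fulltext-search the shadow table, then join back for the real rows.
            try (ResultSet rs = st.executeQuery(
                    "SELECT p.id FROM products p"
                    + " JOIN product_search s ON s.product_id = p.id"
                    + " WHERE MATCH(s.product_name) AGAINST ('widget')")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }
}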

Tips/recommendations for using Lucene

I'm working on a job portal using asp.net 3.5
I've used Lucene for job and resume search functionality.
I'd like to know your tips/recommendations, if any, with respect to Lucene performance optimization, scalability, etc.
Thanks a ton!
I've documented how I used Lucene.NET (in BugTracker.NET) here:
http://www.ifdefined.com/blog/post/2009/02/Full-Text-Search-in-ASPNET-using-LuceneNET.aspx
One thing you should keep in mind is that it is very hard to cluster or replicate Lucene indexes in large installations, such as failover scenarios or distributed systems. So you should have a good way to replicate either your indexing jobs or the whole index.
If you use a sort, watch out for the size of the comparators. When sorts are used, for each document returned by the searcher there will be a comparator object stored for each SortField in the Sort object. Depending on the size of the documents and the number of fields you want to sort on, this can become a big headache.
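To illustrate, a minimal sketch in Java Lucene terms (Lucene.NET mirrors this API; the field names are invented for a job portal): every SortField added to the Sort increases that per-document cost, so sort on as few fields as you can get away with.

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;

public class SortedJobSearch {
    // Two SortFields means two per-document comparator entries while sorting.
    static TopDocs newestBestPaid(IndexSearcher searcher, Query query) throws Exception {
        Sort sort = new Sort(
                new SortField("postedDate", SortField.Type.LONG, true), // newest first
                new SortField("salary", SortField.Type.INT, true));     // highest first
        return searcher.search(query, 50, sort);
    }
}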

PostgreSQL performance monitoring tool

I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for database performance optimization tools/techniques.
Database optimization is usually a combination of two things:
- Reduce the number of queries to the database
- Reduce the amount of data that needs to be looked at to answer queries
Reducing the number of queries is usually done by caching non-volatile/less important data (e.g. "Which users are online?" or "What are the latest posts by this user?") inside the application (if possible) or in an external, more efficient datastore (memcached, redis, etc.). If you've got information which is very write-heavy (e.g. hit counters) and doesn't need ACID semantics, you can also think about moving it out of the Postgres database to a more efficient data store.
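For example, a minimal cache-aside sketch using Jedis (the key name and 30-second TTL are illustrative):

import redis.clients.jedis.Jedis;

public class OnlineUsersCache {
    private final Jedis redis = new Jedis("localhost");

    // Serve volatile, non-critical data from Redis; only hit Postgres on a miss.
    public String onlineUsers() {
        String cached = redis.get("online_users");
        if (cached != null) {
            return cached;
        }
        String fresh = queryPostgresForOnlineUsers(); // the expensive path
        redis.setex("online_users", 30, fresh);       // acceptable to be 30s stale
        return fresh;
    }

    private String queryPostgresForOnlineUsers() {
        // ... the real SELECT against the database goes here ...
        return "alice,bob";
    }
}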
Optimizing query runtime is trickier: it can amount to creating special indexes (or indexes in the first place), changing (possibly denormalizing) the data model, or changing the fundamental approach the application takes when it comes to working with the database. See for example the "Pagination done the Postgres way" talk by Markus Winand on how to rethink the concept of pagination to make it more efficient for the database.
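To give a flavour of that rethinking: keyset pagination, the technique advocated in that talk, replaces OFFSET with a seek condition. A sketch against an assumed posts(created, id) index:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class KeysetPagination {
    // OFFSET has to read and throw away every skipped row; a seek condition
    // lets the index jump straight to where the next page starts.
    static ResultSet nextPage(Connection conn, Timestamp lastCreated, long lastId)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, created, title FROM posts"
                + " WHERE (created, id) < (?, ?)" // values from the previous page's last row
                + " ORDER BY created DESC, id DESC LIMIT 20");
        ps.setTimestamp(1, lastCreated);
        ps.setLong(2, lastId);
        return ps.executeQuery();
    }
}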
Measuring queries the slow way
But to understand which queries should be looked at first you need to know how often they are executed and how long they run on average.
One approach to this is logging all (or all "slow") queries, including their runtime, and then parsing the query log. A good tool for this is pgfouine, which has already been mentioned earlier in this discussion; it has since been replaced by pgbadger, which is written in a friendlier language, is much faster, and is more actively maintained.
Both pgfouine and pgbadger suffer from the fact that they need query logging enabled, which can cause a noticeable performance hit on the database or get you into disk space trouble, on top of the fact that parsing the log with the tool can take quite some time and won't give you up-to-date insights on what is going on in the database.
Speeding it up with extensions
To address these shortcomings there are now two extensions which track query performance directly in the database: pg_stat_statements (which is only helpful in version 9.2 or newer) and pg_stat_plans. Both extensions offer the same basic functionality: tracking how often a given "normalized query" (the query string minus all expression literals) has been run and how long it took in total. Because this is done while the query is actually running, it is very efficient; the measurable overhead was less than 5% in synthetic benchmarks.
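Once one of them is installed, the worst offenders are a single query away. A sketch for pg_stat_statements (column names as in the 9.2 version of the extension; connection details are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TopQueries {
    public static void main(String[] args) throws Exception {
        // Requires shared_preload_libraries = 'pg_stat_statements'
        // and CREATE EXTENSION pg_stat_statements; on the target database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT query, calls, total_time, rows"
                 + " FROM pg_stat_statements ORDER BY total_time DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.printf("%10.1f ms %8d calls  %s%n",
                        rs.getDouble("total_time"), rs.getLong("calls"),
                        rs.getString("query"));
            }
        }
    }
}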
Making sense of the data
The list of queries itself is very "dry" from an information perspective. There has been work on a third extension trying to address this and offer a nicer representation of the data, called pg_statsinfo (along with pg_stats_reporter), but it's a bit of an undertaking to get it up and running.
To offer a more convenient solution to this problem I started working on a commercial project which is focussed around pg_stat_statements and pg_stat_plans and augments the information collected by lots of other data pulled out of the database. It's called pganalyze and you can find it at https://pganalyze.com/.
To offer a concise overview of interesting tools and projects in the Postgres monitoring area, I also started compiling a list at the Postgres Wiki which is updated regularly.
pgfouine works fairly well for me. And it looks like there's a FreeBSD port for it.
I've used pgtop a little. It is quite crude, but at least I can see which query is running for each process ID.
I tried pgfouine, but if I remember correctly, it's an offline tool.
I also tail the psql.log file and set the logging criteria down to a level where I can see the problem queries.
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this time.
I also use EMS Postgres Manager to do general admin work. It doesn't do anything for you, but it does make most tasks easier and makes reviewing and setting up your schema simpler. I find that when using a GUI, it is much easier for me to spot inconsistencies (like a missing index, field criteria, etc.). It's one of only two programs for which I'm willing to run VMware on my Mac.
Munin is quite simple yet effective for getting trends of how the database is evolving and performing over time. In the standard Munin kit you can, among other things, monitor the size of the database, the number of locks, the number of connections, sequential scans, the size of the transaction log, and long-running queries.
It's easy to set up and get started with, and if needed you can write your own plugins quite easily.
Check out the latest postgresql plugins that are shipped with Munin here:
http://munin-monitoring.org/browser/branches/1.4-stable/plugins/node.d/
Well, the first thing to do is try all your queries from psql using "explain" and see if there are sequential scans that can be converted to index scans by adding indexes or rewriting the query.
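For instance (a sketch against a hypothetical posts table; a "Seq Scan" node in the output is the hint that an index might help):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
             Statement st = conn.createStatement();
             // EXPLAIN returns the plan as ordinary result rows.
             ResultSet rs = st.executeQuery(
                 "EXPLAIN SELECT * FROM posts WHERE author_id = 42")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}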
Other than that, I'm as interested in the answers to this question as you are.
Check out Lightning Admin; it has a GUI for capturing log statements. Not perfect, but it works great for most needs. http://www.amsoftwaredesign.com
DBTuna http://www.dbtuna.com/postgresql_monitor.php has recently started supporting PostgreSQL monitoring. We use it extensively for MySQL monitoring, so if it provides the same for Postgres then it should be a good fit for you too.