Solr tlog extremely large, not merging with index after commits - indexing

I am in the middle of a bulk indexing operation into a Solr 5.0 collection that now holds approximately 200M documents. I have noticed that the tlog keeps building up and is not being deleted; additionally, indexing performance has become really slow. I am wondering why the tlog is not being removed. This is what the data directory looks like:
du -sh *
4.0K data
69G index
109G tlog
I've tried multiple variations of:
update?commit=true&expungeDeletes=true&openSearcher=true
I see in the log file that Solr is picking it up, but there are no changes.
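For reference, the same hard commit can be issued from SolrJ instead of the raw update URL. A minimal sketch, assuming a standalone Solr 5.x instance; the URL and core name are hypothetical:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;

public class ExplicitCommit {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and core name - adjust to your setup.
        try (SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycollection")) {
            UpdateRequest req = new UpdateRequest();
            // Hard commit: wait for the flush and open a new searcher.
            req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
            // Same switch as expungeDeletes=true on the update URL.
            req.setParam("expungeDeletes", "true");
            req.process(solr);
        }
    }
}

As far as I understand, Solr only drops old tlog files after a hard commit completes, so if the tlog keeps growing it is worth confirming that hard commits are actually going through.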
The commit settings in solrconfig are:
<autoCommit>
<maxTime>15000</maxTime>
<maxDocs>1500000</maxDocs>
<openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
<maxTime>900000</maxTime>
<maxDocs>2000000</maxDocs>
</autoSoftCommit>
One thing to keep in mind is that I had soft commits commented out during the indexing process. Also, these values are fairly high because this is a relatively index-heavy collection with fairly controlled querying, so the commit strategy is deliberately relaxed.
I restarted Solr, and naturally it is taking forever to start because it is replaying the tlog; I am not sure whether it will clear this up once it has fully started. I am under the impression that Solr keeps some tlogs around in case it needs to replicate the data to another collection, but this is a standalone instance, so that should not be necessary. Additionally, since the tlog directory is larger than the index folder, I am assuming there are items not yet committed to the main index. Is that right?
Any idea what's happening here?

So I thought I'd pass along an update, even though it's a bit late.
I restarted the Solr instance; naturally, it took about 4 hours to start up since the tlogs had to be replayed. They were then purged after a commit.

Related

How do I keep Particular ServiceControl Audit db (Raven5) from getting larger and larger in size?

We recently upgraded Particular.ServiceControl.Audit to v4.26 and recreated our Audit instance, since from v4.26 onward new audit instances bump RavenDB from 3.5 to 5.4.
We did this hoping to remedy a problem in 4.25.x where compacting the audit database would not free up any disk space.
As it is, the database is still growing really large (300-400 GB). To test whether the data it contains should actually use up all this disk space, we tried tampering with the ServiceControl.Audit/AuditRetentionPeriod config parameter, setting it to a small value like 1 day (the previous value was 30 days) and then - naively, maybe - waiting for the db to shrink at some point, expecting that the retention period would somehow influence the disk space used. The db file remained the same size (and kept growing).
The docs for compacting the database in ServiceControl only cover the ESENT-based Raven 3.5, but Raven 5+ uses Raven's own Voron storage engine. There do not seem to be Particular docs describing how to compact a Voron db.
So we tried to follow the RavenDB docs for compacting the db through Raven Studio after putting the service in maintenance mode. As the screenshot shows, this operation stopped or stalled very shortly after being initiated (disregard the elapsed time; we actually left it running for several hours).
We tried this several times with no luck. We can see that the disk containing the specific db has zero reads or writes, so the operation is definitely stalled in some way. Pressing the "Abort" button also got stuck every time, after which we resorted to simply restarting the entire service (which then seemed to bring the db back to normal operation).
So the question is: how do we keep the audit db from growing indefinitely? At no point do we see it shrinking or even staying the same size, which keeps driving up our SSD storage costs.

Firebird backup/restore is frustrating, is there a way to avoid it?

I am using Firebird, but lately the database has been growing very large.
There are a lot of delete statements running, as well as updates/inserts, and the database file size grows really fast.
After deleting tons of records the database size doesn't decrease, and even worse, I have the feeling that queries are actually getting a bit slower.
To fix this, a daily backup/restore process has been put in place, but because of the time it takes to complete, I have to say it is really frustrating to use Firebird.
Any ideas on workarounds or solutions for this are welcome.
I am also considering switching to InterBase because a friend told me it does not have this issue - is that so?
We have a lot of huge databases on Firebird in production but have never had an issue with database growth. Yes, every time a record is deleted or updated, an old version of it is kept in the file. But sooner or later the garbage collector will sweep it away. Once both processes balance each other out, the database file will grow only by the size of new data and indices.
As a general precaution against enormous database growth, try to keep your transactions as short as possible. In our applications we use one READ ONLY transaction for reading all the data; this transaction stays open for the whole application lifetime. For every batch of insert/update/delete statements we use short, separate transactions.
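A minimal JDBC sketch of that pattern, assuming the Jaybird driver; the connection URL, credentials, and the orders table are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ShortWriteTransactions {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:firebirdsql://localhost:3050/mydb"; // hypothetical database
        // Long-lived connection used only for reads, kept READ ONLY as recommended above.
        Connection readConn = DriverManager.getConnection(url, "SYSDBA", "masterkey");
        readConn.setReadOnly(true);
        readConn.setAutoCommit(false);

        // Short, separate transaction for each batch of writes: commit quickly and close.
        try (Connection writeConn = DriverManager.getConnection(url, "SYSDBA", "masterkey")) {
            writeConn.setAutoCommit(false);
            try (PreparedStatement ps =
                     writeConn.prepareStatement("DELETE FROM orders WHERE created < ?")) {
                ps.setDate(1, java.sql.Date.valueOf("2015-01-01"));
                ps.executeUpdate();
            }
            writeConn.commit(); // end the write transaction as soon as the batch is done
        }

        // Reads keep using readConn; in a real application it stays open until shutdown.
        try (PreparedStatement ps = readConn.prepareStatement("SELECT COUNT(*) FROM orders");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
        }
        readConn.close();
    }
}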
Slow database operations can also result from stale index statistics. Here you can find an example of how to recalculate the statistics for all indices: http://www.firebirdfaq.org/faq167/
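Essentially that comes down to running SET STATISTICS INDEX for every index. A sketch over JDBC, with the same hypothetical connection details as in the previous sketch:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class RecalcIndexStats {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:firebirdsql://localhost:3050/mydb"; // hypothetical database
        try (Connection conn = DriverManager.getConnection(url, "SYSDBA", "masterkey")) {
            // Collect the names of all user-defined indices first.
            List<String> indices = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT TRIM(RDB$INDEX_NAME) FROM RDB$INDICES"
                     + " WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0")) {
                while (rs.next()) {
                    indices.add(rs.getString(1));
                }
            }
            // Then recalculate the selectivity statistics for each one.
            try (Statement st = conn.createStatement()) {
                for (String index : indices) {
                    st.execute("SET STATISTICS INDEX \"" + index + "\"");
                }
            }
        }
    }
}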
Also check whether you have unfinished transactions in your applications. If a transaction is started but never committed or rolled back, the database has to keep its own record versions for every transaction after the oldest active one.
You can check the database statistics (gstat or an external tool); they show the oldest transaction and the next transaction. If the difference between those numbers keeps growing, you have a stuck-transaction problem.
There are also monitoring tools that check this situation; one I've used is Sinatica Monitor for Firebird.
Edit: Also, the database file never shrinks automatically. Parts of it get marked as unused (after a sweep operation) and will be reused. http://www.firebirdfaq.org/faq41/
The space occupied by deleted records will be re-used as soon as it is garbage collected by Firebird.
If GC is not happening (transaction problems?), the DB will keep growing until GC can do its job.
Also, there is a problem when you do a massive delete in a table (e.g. millions of records): the next select on that table will "trigger" garbage collection, and performance will drop until GC finishes. The only way to work around this is to do the massive deletes at a time when the server is not heavily used, and to run a sweep after that, making sure there are no stuck transactions.
Also, keep in mind that if you are using "standard" tables to hold temporary data (i.e. info that is inserted and deleted several times), you can end up with a corrupted database in some circumstances. I strongly suggest you start using the Global Temporary Tables feature.

Solr Index slow after a while

I use SolrJ to send data to my Solr server.
When I start my program off, it indexes documents at a rate of about 1000 docs per sec (I commit every 250,000 docs).
I have noticed that when my index has filled up with about 5 million docs, it starts crawling, not just at commit time but at add time too.
My Solr server and indexing program run on the same machine.
Here are some of the relevant portions from my solrconfig:
<useCompoundFile>false</useCompoundFile>
<ramBufferSizeMB>1024</ramBufferSizeMB>
<mergeFactor>150</mergeFactor>
Any suggestions about how to fix this?
That merge factor seems really, really (really) high.
Do you really want that?
If you aren't using compound files, that could easily lead to a ulimit problem (too many open file handles) if you are on Linux.
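For context, the mergeFactor in solrconfig.xml maps to the merge policy's merge factor at the Lucene level; a minimal sketch of a more conventional setting (around 10) using the modern Lucene API, with a hypothetical index path and buffer size:

import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogByteSizeMergePolicy;
import org.apache.lucene.store.FSDirectory;

public class ConventionalMergeSettings {
    public static void main(String[] args) throws Exception {
        LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();
        // Roughly what <mergeFactor>10</mergeFactor> means: merge once about
        // 10 segments of similar size have accumulated, keeping the file count down.
        mergePolicy.setMergeFactor(10);

        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        config.setRAMBufferSizeMB(256); // hypothetical value
        config.setMergePolicy(mergePolicy);

        try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/index"));
             IndexWriter writer = new IndexWriter(dir, config)) {
            // writer.addDocument(...) calls go here
            writer.commit();
        }
    }
}

With a merge factor of 150 and compound files disabled, an index can easily hold thousands of open segment files, which is where the ulimit concern comes from.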

Best way to keep index real time?

I have a Solr/Lucene index of approximately 700 GB. The documents that I need to index are read in real time; roughly 1000 docs are submitted every 30 minutes and need to be indexed. In my scenario, a script runs every 30 minutes and indexes the documents that are not yet indexed, since it is a requirement that new documents be searchable as soon as possible, but this process slows down searching.
Is this the best way to index the latest documents, or is there a better way?
First, remember that Solr is not a real-time search engine (yet). There is still work to be done.
You can use a master/slave setup, where indexing is done on the master and searching on the slave. With this, indexing does not affect search performance. After a commit is done on the master, force the slave to fetch the latest index from the master. While the new index is being replicated to the slave, the slave is still serving queries from the previous index.
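Forcing the slave to pull the latest index is a call to the replication handler (command=fetchindex); a sketch via SolrJ, with a hypothetical slave URL and core name (a plain HTTP GET to /replication works just as well):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ForceFetchIndex {
    public static void main(String[] args) throws Exception {
        // Hypothetical slave URL and core name.
        try (SolrClient slave = new HttpSolrClient("http://slave-host:8983/solr/mycore")) {
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("command", "fetchindex"); // ask the slave to pull the master's latest index
            QueryRequest request = new QueryRequest(params);
            request.setPath("/replication");
            slave.request(request);
        }
    }
}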
Also, check your cache-warming settings. Remember that these might slow down searches if they are too aggressive. Also check the queries launched on the newSearcher event.
You can do this with Lucene easily. Split the index into multiple parts (or, to be precise, while building the index, create "smaller" parts). Create a searcher for each part and store references to them. You can create a MultiSearcher on top of these individual parts.
Now, only one index will receive the new documents. At regular intervals, add documents to this index, commit, and re-open only that searcher.
After the last index is updated, you can create a new multi-searcher again, using the previously opened searchers.
Thus, at any point, you will be re-opening only one searcher and that will be quite fast.
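A rough sketch of that idea; since MultiSearcher is gone from recent Lucene versions, this uses a MultiReader over per-part readers instead, and the directory paths are hypothetical:

import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

public class PartitionedSearcher {
    private final DirectoryReader[] staticReaders;    // older, sealed parts: never re-opened
    private DirectoryReader liveReader;                // the one part that receives new documents
    private IndexSearcher searcher;

    public PartitionedSearcher(String[] staticPaths, String livePath) throws Exception {
        staticReaders = new DirectoryReader[staticPaths.length];
        for (int i = 0; i < staticPaths.length; i++) {
            staticReaders[i] = DirectoryReader.open(FSDirectory.open(Paths.get(staticPaths[i])));
        }
        liveReader = DirectoryReader.open(FSDirectory.open(Paths.get(livePath)));
        rebuildSearcher();
    }

    /** Call after each commit on the live part: only the live reader is re-opened. */
    public synchronized void refresh() throws Exception {
        DirectoryReader newer = DirectoryReader.openIfChanged(liveReader);
        if (newer != null) {
            liveReader.close();
            liveReader = newer;
            rebuildSearcher();
        }
    }

    private void rebuildSearcher() throws Exception {
        DirectoryReader[] all = new DirectoryReader[staticReaders.length + 1];
        System.arraycopy(staticReaders, 0, all, 0, staticReaders.length);
        all[staticReaders.length] = liveReader;
        // closeSubReaders=false so the individual parts stay open across rebuilds.
        searcher = new IndexSearcher(new MultiReader(all, false));
    }

    public synchronized IndexSearcher searcher() {
        return searcher;
    }
}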
Check out http://code.google.com/p/zoie/, a wrapper around Lucene that makes it real-time; the code was donated by LinkedIn.
^^ I do this with plain Lucene (not Solr), and it works really nicely; however, I'm not sure there is a Solr way to do it at the moment. Twitter recently went with Lucene for searching and gets effectively real-time search by simply writing to its index on every update. Its index resides completely in memory, so updating/reading the index is of no consequence and happens instantly, and a Lucene index can always be read while being written to, as long as there is only one writer at a time.
Check out this wiki page

What's the easiest way to add an index on a live myISAM table?

I have a MyISAM table running in production on MySQL, and by doing a few tests, we've found we can tremendously speed up a query by adding a certain compound index. So far so good. However, I am not really sure about the best way to add this index in a production environment without locking the table for a long time (it's got 27 GB of data, so not that much, but it does take a while).
Do you have any tips? If this were a more sophisticated setup, of course we'd have a live replica of all the data on another machine and we could safely switch over. Unfortunately, we're not there yet, and I would like to speed up this query as soon as possible (it's causing big customer headaches). Is there some simple way to replicate the data and then do a swap-out trick? Are there other tricks that I am missing?
UPDATE: Reading about "Online Index Operations" in SQL Server makes me very jealous http://msdn.microsoft.com/en-us/library/ms191261.aspx :)
Thanks!
You can use replication to get downtime on the order of a couple of minutes, instead of the hours it might take to create an index on that table.
To set up the slave, see http://dev.mysql.com/doc/refman/5.0/en/replication-howto-existingdata.html
A recommendation I can make to help speed up the process: in step 2, follow the "Creating a Data Snapshot Using Raw Data Files" method, but instead of copying over the wire to the slave, copy to a different location on the master, and bring the master back up as soon as the copy is done and you've made the necessary changes to the config file (set server-id and enable binary logging). This will minimize your downtime to just a minute or two. Once the server is back up, you can copy the copied files to the slave box.
Once you have the slave up and running and you have verified everything is replicating properly, you can pause the slave. Create the index on the slave. When the index creation is complete, resume the slave; this will catch the slave up to the master. On the master, use FLUSH TABLES WITH READ LOCK. Check the slave status to make sure the log positions on the master and the slave match. If they do, shut down the slave and copy the files for that table back to the master.
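The slave-side steps reduce to a few statements; a sketch over JDBC, where the host, credentials, and table/index names are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BuildIndexOnSlave {
    public static void main(String[] args) throws Exception {
        // Hypothetical slave connection details.
        try (Connection slave = DriverManager.getConnection(
                 "jdbc:mysql://slave-host:3306/mydb", "root", "secret");
             Statement st = slave.createStatement()) {
            // Pause replication so the index build is not interleaved with incoming changes.
            st.execute("STOP SLAVE");
            // Build the compound index; on MyISAM this locks the table, but only on the slave.
            st.execute("ALTER TABLE mytable ADD INDEX idx_col1_col2 (col1, col2)");
            // Resume replication; the slave now catches up to the master.
            st.execute("START SLAVE");
        }
    }
}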
I'm with Randy. We've been in a similar situation, and there are two ways in MySQL to accomplish something like this:
Take the server down while the index is built. This is what you'll probably do. It's simple, it's easy, it works. Time to do it? Maybe half an hour to 45 minutes, depending on disk bandwidth. See below.
Make a new table with the new index, copy all the data over, pause the server, delete the first table, rename the new one to the old name, and start the server. Downtime? 10 minutes, maybe, but it's really complicated.
Option two works and saves you the downtime of creating the index (if it takes a long time). But it takes more space, and it's more complicated (you have to deal with the new records inserted into the main table in the meantime, and the copy will probably lock the MyISAM table while reading the data out). Deleting the old table will take some time, and renaming the new table will take some time. It's just really complicated. If you had a 2 TB table this might be useful, but for 27 GB it's probably overkill.
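A sketch of option two as plain MySQL statements issued over JDBC; the connection details, table, and column names are hypothetical, and it deliberately ignores catching up the rows written during the copy:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ShadowTableSwap {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/mydb", "root", "secret");
             Statement st = conn.createStatement()) {
            // Empty copy of the table, then add the new compound index while it has no rows.
            st.execute("CREATE TABLE mytable_new LIKE mytable");
            st.execute("ALTER TABLE mytable_new ADD INDEX idx_col1_col2 (col1, col2)");
            // Bulk copy; this is the long-running part and it reads the live table.
            st.execute("INSERT INTO mytable_new SELECT * FROM mytable");
            // Atomic swap: the old table is kept around as mytable_old until you drop it.
            st.execute("RENAME TABLE mytable TO mytable_old, mytable_new TO mytable");
        }
    }
}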
Do you have a second server that is close in specifications to your production server? Load up your most recent backup and do the index there, so you know about how long it will take to add. Then plan for downtime.
InnoDB is better about many things, but new indexes still lock the table. The abilities that MSSQL (and I think PostgreSQL) have to do these kinds of things without locking would be great.
Find your low-usage window and take your application offline during the index build. Since you don't have replication or a multi-master setup or whatever, you're just going to have to bite the bullet on this one. See you at 1am. :-)
Not much you can do with one server here.
If you copy the table and do a dry run, at least you'll find out how long it's going to take without locking the live table, so you can schedule some maintenance time if necessary, or decide whether you can just push the button and leave users hanging for a couple of minutes :)
Or schedule it for a quiet time...
at 04:00 /usr/bin/mysql -uXXX -pXXX -e 'alter table mytable add key(col1, col2)'