I've noticed that if no new write commands arrive after an AOF rewrite and an RDB save, the generated AOF file contains the same data as the RDB file. Is that correct?
And why does Redis rewrite the AOF by iterating over the database? It means the program has to iterate the whole database every time an AOF rewrite starts, so the previous AOF rewrite is of no help to the current one. When the database is big, this rewrite can take a long time.
In my opinion, a true incremental rewrite of the AOF file might be better: start from the position where the last AOF rewrite ended and rewrite only the commands appended since then. I know this approach may cost some performance when rebuilding the Redis database. Are there other problems with this approach?
I've noticed that if no new write commands arrive after an AOF rewrite and an RDB save, the generated AOF file contains the same data as the RDB file. Is that correct?
YES
Start from the position where the last AOF rewrite ended and rewrite only the commands appended since then
Say a user issues 1 million SET commands to create 1 million keys before the first AOF rewrite, then issues 1 million DEL commands to delete those keys before the second rewrite. If you start from the end of the last AOF rewrite, your AOF file will contain 2 million records that are now useless. The AOF will also grow bigger and bigger until the disk is full, and reloading such a big AOF file will be very slow.
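The same point can be demonstrated on a live instance. A minimal sketch, assuming a local Redis started with appendonly yes (key names and values are made up):
redis-cli SET user:1 "alice"   # appended to the AOF
redis-cli DEL user:1           # also appended, so the AOF now holds two records for a key that no longer exists
redis-cli BGREWRITEAOF         # rebuilds the AOF from the current dataset; neither record survives the rewrite
redis-cli INFO persistence | grep aof_rewrite_in_progress   # 1 while the rewrite is running, 0 once it is done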
I'm trying to reduce the DAT file size of a Mnesia table (disc_copies), but I haven't found a solution yet. I can't truncate the file or create a new one; I have to keep working with this table while the system is running.
The approach I considered was to delete the oldest records, but this isn't enough because the DAT file size remains the same until I restart the node.
So is there a way to force the sync to disk without restarting the entire node?
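For what it's worth, mnesia:dump_log/0 is the documented way to force Mnesia to dump its transaction log to disk without restarting the node; whether that actually reclaims the space freed by the deleted records would need to be tested. A minimal sketch, with the node names made up:
erl -name probe@127.0.0.1 -remsh mynode@127.0.0.1
% then, in the shell attached to the live node:
1> mnesia:dump_log().
ok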
The Issue
I've been running a particularly large query that generates millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced set of results generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full, not Express).
The database being used is set to SIMPLE recovery, and there is space available (around 100 GB) on the drive the file sits on.
The transaction log file is set to grow in 250 MB increments, up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4,729 MB.
When the issue first appeared the file grew to a lower value; however, I've since reduced the size of other log files on the same server, and this appears to let this transaction log file grow further by roughly the amount freed from the other files.
I've now run out of ideas for how to solve this. If anyone has any suggestions or insight into what to do, it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
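Pre-sizing the log is a one-liner. A sketch, where "MyDb" and "MyDb_log" stand in for your own database and logical log file names:
sqlcmd -d MyDb -Q "SELECT name, size FROM sys.database_files"   # find the logical log file name first
sqlcmd -Q "ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 20480MB, FILEGROWTH = 250MB)"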
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of returning unused space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
right-click the DB > go to Tasks > Shrink > Files.
change the type to "Log"
This will help you understand how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink File" to 0. Moving forward, you can also release unused space using CHECKPOINT; this may be something to include as a first step before your query runs.
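The same checks can be scripted instead of going through the GUI; these are standard T-SQL commands, again with "MyDb" and "MyDb_log" as placeholder names:
sqlcmd -Q "DBCC SQLPERF(LOGSPACE)"                  # shows the percentage of each database's log that is in use
sqlcmd -d MyDb -Q "CHECKPOINT"                      # under SIMPLE recovery this lets the inactive part of the log be reused
sqlcmd -d MyDb -Q "DBCC SHRINKFILE (MyDb_log, 0)"   # script equivalent of the Shrink > Files dialog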
I've been running Redis on a Windows server for the last 10 months without any issue, but this morning I checked my website and saw that it was completely empty!
After a few minutes of investigation I realised that the Redis database was empty.
Luckily I use Redis as a caching solution, so I still have all the data in an MS SQL database and I've managed to recover the content of my website.
But I realised that Redis had stopped saving data into dump.rdb. The file was last updated on 20.11.2015 at 11:35.
The Redis config file has set
save 900 1
save 300 10
save 60 10000
and just by reloading everything from MS SQL this morning I generated more than 15,000 writes. So the file should have been updated, right?
I ran redis-check-dump dump.rdb and got this result:
Processed 7924 valid opcodes
I even ran the SAVE command manually and got this result:
OK <2.12>
But the file size and modification date of dump.rdb are still from 20.11.2015.
I just want to highlight that between 20.11.2015 and today I haven't changed anything in the Redis configuration or restarted the server.
Any idea?
It's not a real answer, but at least I've managed to get Redis to start dumping data to disk again.
Using the console I set a new dbfilename, and now Redis is dumping data to disk again.
It would be great if someone had a clue as to why it stopped dumping data to the original dump file.
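For reference, the workaround boils down to something like the following (the new file name is just an example):
redis-cli CONFIG SET dbfilename dump-new.rdb   # point Redis at a fresh RDB file at runtime
redis-cli BGSAVE                               # trigger a background save into the new file
redis-cli LASTSAVE                             # unix timestamp of the last successful save, to confirm it worked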
I have 5 SSIS jobs running in the SQL Server Agent, and some of them pull transactional data into our database every 4 hours. The problem is that the log file of our database is growing rapidly: it eats up 160 GB of disk space in a day. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, but even so the log consumes more than 160 GB a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using the DETACH approach to clean up the log.
FYI: all the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
I want a permanent solution that keeps the log file within a particular size limit, and as I said earlier I don't need the log data for point-in-time recovery, so there's no need to take log backups at all.
And one more problem: the transactional table in our database has 10 million records and some master tables have over 1,000 records, but our MDF file is now about 50 GB. I don't believe 10 million records should amount to 50 GB. What's the problem here?
Help me with these issues. Thanks in advance.
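In other words, what I'm looking for is roughly the following, if it even makes sense ("MyDb" and "MyDb_log" are placeholder names):
sqlcmd -Q "ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, MAXSIZE = 20480MB)"   # hard upper limit on log growth
sqlcmd -d MyDb -Q "EXEC sp_spaceused"                                               # reserved vs. actually used space inside the MDF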
I have a 5-core Solr 1.4 master that is replicated to another 5-core Solr instance using Solr replication as described here. All writes are done against the master and replicated to the slave intermittently, using the following sequence:
Commit on each master core
Replicate on each slave core
Optimize on each slave core
Commit on each slave core
The problem I am having is that the slave seems to keep old index files around and takes up ever more disk space. For example, after 3 replications, the master core's data directory looks like this:
$ du -sh *
145M index
But the data directory of the same core on the slave looks like this:
$ du -sh *
300M index
144M index.20100621042048
145M index.20100629035801
4.0K index.properties
4.0K replication.properties
Here's the contents of index.properties:
#index properties
#Tue Jun 29 15:58:13 CDT 2010
index=index.20100629035801
And replication.properties:
#Replication details
#Tue Jun 29 15:58:13 CDT 2010
replicationFailedAtList=1277155032914
previousCycleTimeInSeconds=12
timesFailed=1
indexReplicatedAtList=1277845093709,1277155253911,1277155032914
indexReplicatedAt=1277845093709
replicationFailedAt=1277155032914
lastCycleBytesDownloaded=150616512
timesIndexReplicated=3
The solrconfig.xml for this slave contains the default deletion policy:
[...]
<mainIndex>
<unlockOnStartup>false</unlockOnStartup>
<reopenReaders>true</reopenReaders>
<deletionPolicy class="solr.SolrDeletionPolicy">
<str name="maxCommitsToKeep">1</str>
<str name="maxOptimizedCommitsToKeep">0</str>
</deletionPolicy>
</mainIndex>
[...]
What am I missing?
It is useless to commit and optimize on the slaves. Since all write operations are done on the master, that is the only place where those operations should occur.
This may be the cause of the problem: since you do an additional commit and optimize on the slaves, more commit points are kept on the slaves. But this is only a guess; it would be easier to understand what is happening with your full solrconfig.xml from both the master and the slaves.
The optimize that's done on the slave is causing the index to double in size. On optimize, separate index segments are created to rewrite the original index into the number of segments requested by the optimize (the default is 1).
Best practice is to optimize only once in a while rather than on every event (run it from a cron job or something), and to optimize only on the master, never on the slave. The slaves will get the new segments through replication.
You shouldn't commit on the slave either; the index reload after replication will take care of making new docs visible on the slave.
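Put together, the suggested flow looks roughly like this (host names, ports and core names are just examples):
curl "http://master:8983/solr/core0/update?commit=true"             # commit on the master core only
curl "http://master:8983/solr/core0/update?optimize=true"           # optimize occasionally, and only on the master
curl "http://slave:8983/solr/core0/replication?command=fetchindex"  # the slave pulls the new segments; no commit or optimize needed there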
I determined that the extra index.* directories seem to be left behind when I replicate after completely reloading the master. By "completely reloading" I mean stopping the master, deleting everything under [core]/data/*, restarting (at which point Solr creates a new index), indexing all of our docs, and then replicating.
Based on some additional testing, I have found that it seems to be safe to remove the other index* directories (other than the one specified in [core]/data/index.properties). If I'm not comfortable with that workaround, I may decide to empty the slave index (stop; delete data/*; start) before replicating for the first time after completely reloading the master.
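The cleanup itself is just a matter of keeping the directory named in index.properties and deleting the rest. A rough sketch, with the data path made up and the directory names taken from the listing above:
cd /path/to/core/data                  # the slave core's data directory
grep '^index=' index.properties        # index=index.20100629035801, the live directory, keep it
rm -rf index index.20100621042048      # remove only the index* directories NOT referenced by index.properties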