Redis doesn't update dump.rdb any more

I've been using Redis on a Windows server for the last 10 months without any issues, but this morning I checked my website and saw that it was completely empty!
After a few minutes of investigation I realised that the Redis database was empty.
Luckily I use Redis as a caching solution, so I still have all the data in an MS SQL database and I've managed to recover the content of my website.
But I realised that Redis has stopped saving data into dump.rdb. The file was last updated on 20.11.2015 at 11:35.
The Redis config file has:
save 900 1
save 300 10
save 60 10000
and just by reloading everything from MS SQL this morning I generated more than 15,000 writes, so the file should have been updated, right?
I ran redis-check-dump dump.rdb and got:
Processed 7924 valid opcodes
I even ran the SAVE command manually and got:
OK <2.12>
But the file size and modification date of dump.rdb are unchanged: still 20.11.2015.
I just want to highlight that between 20.11.2015 and today I haven't changed anything in the Redis configuration or restarted the server.
Any idea?
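For anyone diagnosing the same symptom, a quick sketch of checks using redis-cli (standard commands, run against the default local instance) that show when, where and with what result Redis last wrote its RDB file:
redis-cli INFO persistence        # look at rdb_last_bgsave_status and rdb_last_save_time
redis-cli LASTSAVE                # Unix timestamp of the last successful save
redis-cli CONFIG GET dir          # directory Redis is actually writing to
redis-cli CONFIG GET dbfilename   # RDB file name Redis is actually using
If rdb_last_bgsave_status reports err, or the dir/dbfilename pair points somewhere other than the file you are watching, that usually explains a stale dump.rdb.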

It's not the answer, but at least I've managed to make Redis start dumping data to disk again.
Using the console I set a new dbfilename and now Redis is dumping data to disk again.
It would be great if someone had a clue why it stopped dumping data to the original dump file.
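For reference, the workaround described above boils down to something like the following from the redis-cli console (dump2.rdb is just an illustrative name):
CONFIG SET dbfilename dump2.rdb
BGSAVE
INFO persistence    # confirm rdb_last_bgsave_status:ok and a fresh rdb_last_save_time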

Related

Reducing file size for disk-backed Aerospike

How can we reduce the file size of the Aerospike .dat file?
Our current config is:
namespace test {
    memory-size 20G                    # Maximum memory allocation for data and
                                       # primary and secondary indexes.
    storage-engine device {            # Configure the storage-engine to use
        file /opt/aerospike/test.dat   # Location of data file on server.
        filesize 100G                  # Max size of each file in GiB.
    }
}
The current file size of test.dat is 90GB as per ls -ltrh, but the AMC UI shows only 50GB used.
I want to reduce the file size to 80GB. I tried following this doc:
Decrease filesize: Decreasing the size of the files with an Aerospike service restart will potentially end up deleting random data, which can result in unexpected behavior on the Aerospike cluster due to the truncation, maybe even landing in a low available percentage on the node. Thus, you would need to delete the file itself and let the data be migrated from the other nodes in the cluster:
1. Stop the Aerospike server.
2. Delete the file and update the configuration with the new filesize.
3. Start the Aerospike server.
But when I start the server after deleting the file, the startup fails with this error:
Jan 20 2022 03:44:50 GMT: WARNING (drv_ssd): (drv_ssd.c:3784) unable to open file /opt/aerospike/test.dat: No such file or directory
I have a few questions about this:
Is there a way to restart the process with no initial data and let it take data from the other nodes in the cluster?
If I wanted to reduce the size from 100G to 95G, would I still have to do the same thing, considering the current file size is only 90GB? Is there still a risk of losing data?
Stopping the Aerospike server, deleting the file and restarting it is the way to go. There are other ways (like cold starting empty -- cold-start-empty) but the way you have done it is the recommended one. It seems there are some permission issues preventing the server from creating that file in that directory.
Yes, you would have to do the same thing for reducing the file size, as mentioned in that document you referred to.
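Roughly, the recommended sequence looks like this (a sketch only, assuming a systemd-managed install and that the service user can write to /opt/aerospike; adjust paths, unit name and ownership to your environment):
sudo systemctl stop aerospike
sudo rm /opt/aerospike/test.dat
# edit /etc/aerospike/aerospike.conf and set the new value, e.g. filesize 80G
ls -ld /opt/aerospike              # verify the service user can create files here
sudo systemctl start aerospike
# the node starts empty and migrations repopulate it from the rest of the cluster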

How to shrink a very big log file in SQL Server 2016

I have a warehouse database with an 800 GB log file. Now I want to shrink it (or find another solution) to reduce the disk space occupied by the Log.ldf file.
I tried shrinking the file in several ways: I took a full backup and a transaction log backup, changed the recovery model, and ran the DBCC command, but none of them affected the log file's size on disk. I also detached the database and deleted the log file, but because of the memory-optimized file container I got an error when I attempted to attach it again (I read that SQL Server will automatically add a log file, but apparently not when the database has a memory-optimized filegroup).
After all these attempts my log file is still 800 GB and I don't know what to do to free the disk space it uses.
Any suggestions? Or did I forget to do something in my approaches?
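For the common case (leaving the memory-optimized complication aside), the usual sequence is to check what is keeping the log active and then shrink it. A sketch, where MyWarehouse and MyWarehouse_log are placeholders for your database and its logical log file name:
-- Why can't the log be reused yet? (LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, ...)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyWarehouse';

-- Logical file names and current sizes (in 8 KB pages)
USE MyWarehouse;
SELECT name, type_desc, size FROM sys.database_files;

-- If point-in-time recovery is not required:
ALTER DATABASE MyWarehouse SET RECOVERY SIMPLE;
CHECKPOINT;
DBCC SHRINKFILE (MyWarehouse_log, 1024);   -- target size in MB
The shrink only succeeds once log_reuse_wait_desc reports NOTHING, which is why the check comes first.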

Aerospike space occupying

Initially I had 2 sets (tables), each containing 45 GB of data, which is 90 GB total in 1 namespace (database). I decided to remove 1 set to free up RAM, but after deleting the set it still shows 90 GB; the RAM usage hasn't changed at all. Without restarting the Aerospike server, is there a way to flush the deleted data to free up my RAM?
Thanks in advance !!
From Aerospike CE 3.12 onwards you should use the truncate command to truncate the data in a namespace, or in a set of a namespace.
The aerospike/delete-set repo is an ancient workaround (it hasn't been updated in 2 years). In the Java client, simply use the AerospikeClient.truncate() command.
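From the command line the same thing can be done through the info protocol, roughly like this (a sketch; test and demoset are placeholder namespace/set names):
asinfo -v "truncate:namespace=test;set=demoset"
This marks the set's records as deleted so their index memory can be reclaimed without restarting the node.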

SQL Server database log file increasing enormously

I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database at a 4-hour interval. The problem is that our database's log file is growing rapidly, eating up 160 GB of disk space in a day. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, but even with SIMPLE the log consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using the DETACH approach to clean up the log.
FYI: all the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
I want a permanent solution that keeps the log file within a particular size limit, and as I said earlier I don't need the log data for point-in-time recovery, so there is no need to take log backups at all.
One more problem: in our database the transactional table has 10 million records and some master tables have over 1,000 records, but our .mdf file is now about 50 GB. I don't believe 10 million records should consume 50 GB. What's the problem here?
Help me on these issues. Thanks in advance.
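As a starting point (a sketch; MyDatabase and MyDatabase_log are placeholder names), it helps to check what is blocking log truncation — under SIMPLE the log only truncates at checkpoints, and an open SSIS transaction keeps it pinned — and then to cap the file's growth:
-- What is holding the log? An open transaction from an SSIS Sequence Container
-- shows up here as ACTIVE_TRANSACTION.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- Cap growth and shrink back once the wait reason is NOTHING
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, MAXSIZE = 20GB, FILEGROWTH = 512MB);
DBCC SHRINKFILE (MyDatabase_log, 10240);   -- target size in MB

-- For the 50 GB .mdf question: see where the space actually goes
EXEC sp_spaceused;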

Empty SQL Server 2008 db backup file is very big

I'm deploying my db. I more or less emptied the db (data) and then created a backup.
The .bak file is over 100 MB.
Why is this?
How do I get it down?
I'm using SQL Server 2008.
When you back up, please note that SQL Server backup files can contain multiple backups. It does not overwrite by default: if you choose the same backup file and do not choose the overwrite option, it simply appends another backup to the same file, so your file just keeps getting larger.
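If that is what is happening, overwriting the existing backup sets instead of appending keeps the file small, e.g. (the path is just an illustrative placeholder):
backup database mydb to disk = 'c:\backups\mydb.bak' with init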
Run this and all will be revealed:
-- approximate size of each table/index in KB (8 KB per data page)
select dpages * 8 as [size in kbs]
from sysindexes
where indid <= 1
order by 1 desc
You can also:
Do two backups in a row so that the second backup contains minimal log data. The first backup will contain the logged activity so as to be able to recover; the second one will no longer contain it.
There is also an issue with leaked Service Broker handles if you use Service Broker in your database with improper code, but if this is the case, the query above will reveal it.
To get the size down, you can use WITH COMPRESSION, e.g.:
backup database mydb to disk = 'c:\tempdb.bak' with compression
It will normally bring it down to about 20% of the size. As Martin has commented above, also run:
exec sp_spaceused
to view the distribution of data/log space. From what you are saying (1.5 MB for the first table ... down to 8 kB at the 45th row), that accounts for maybe tens of MB, so the rest could be in the log file.