How do I keep the Particular ServiceControl Audit db (RavenDB 5) from getting larger and larger?

We recently upgraded Particular.ServiceControl.Audit to v4.26 and recreated our Audit instance, since from v4.26 onward new audit instances run on RavenDB 5.4 instead of 3.5.
We did this hoping to remedy a problem in 4.25.x where compacting the audit database would not free up any disk space.
As it is, the database is still growing very large (300-400 GB). To test whether the data it contains should actually use up all this disk space, we tried adjusting the ServiceControl.Audit/AuditRetentionPeriod config parameter, setting it to a small value like 1 day (the previous value was 30 days) - naively, maybe, waiting for the db to shrink at some point, expecting that retention would somehow influence the disk space used. The db file remained the same size (and kept growing).
The ServiceControl docs for compacting the database only mention the ESENT-based RavenDB 3.5, but RavenDB 5+ uses Raven's own Voron storage engine. There do not seem to be any Particular docs describing how to compact a Voron db.
So we tried to follow the RavenDB docs for compacting the db through Raven Studio after putting the service in maintenance mode. As the screenshot shows, the operation stopped or stalled very shortly after initializing (disregard the elapsed time; we actually left it running for several hours).
We tried this several times with no luck. We can see that the disk containing the db has zero reads or writes, so it is definitely stalled in some way. Pressing the "Abort" button also got stuck every time, after which we resorted to simply restarting the entire service (which then seemed to bring the db back to normal operation).
So the question is: how do we keep the audit db from growing indefinitely? At no point do we see it shrinking or even holding steady, and the SSD storage costs keep growing.

Related

Transaction log filling the drive if mirroring fails

I have two mirrored SQL Server 2016 machines with database mirroring set up. This is supposed to be a hot-standby scheme.
HDD size is 1 TB, and database size is around 600 GB (just one DB). This is 90 days' worth of archived data; everything older than 90 days gets deleted every night (automatically, through an external application which is using/filling the database in the first place). So 600 GB is the peak DB size; it will not go beyond that, as it is cleaned up regularly.
The problem is with the transaction log if one server fails, or if mirroring gets suspended for any other reason. If I understand the principle correctly, the healthy server will retain the transaction log records as long as it doesn't get confirmation from its partner that everything is OK. So if mirroring fails, the HDD will fill up within several hours.
Is there any suitable technique to prevent this? I take log backups every 15 minutes and everything works fine, but if mirroring gets suspended, the backups are not worth much, as the log will keep growing in spite of them. And the situation on site is a bit specific: there are no engineers, only operators who access this data once or twice per day, so it's impossible to react straight away. It can take more than 24 hours for someone to attend to the problem.
The only thing I could think of is some sort of trigger that would remove the mirroring completely once it has been suspended for some time (or if it is suspended and HDD space is too low) - something like the sketch below. This would prevent the healthy server from crashing completely, but someone would again have to come to the site and set the mirroring up from scratch. And due to bad design from the start, the DB is bigger than half of the HDD, so I can't even do a local backup/restore; I would have to do everything via the client's 100 Mbps NAS, and that would take more time than it would take the transaction log to fill the drive again.
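A rough sketch of what I have in mind, assuming a SQL Server Agent job that runs every few minutes ('YourDatabase' and the 10 GB free-space threshold are placeholders I made up for illustration):

    -- Rough sketch of a scheduled check (e.g. a SQL Server Agent job step).
    -- 'YourDatabase' and the 10 GB free-space threshold are placeholders.
    DECLARE @state nvarchar(60);
    DECLARE @free_mb bigint;

    -- Current mirroring state of the database (NULL if not mirrored).
    SELECT @state = m.mirroring_state_desc
    FROM sys.database_mirroring AS m
    WHERE m.database_id = DB_ID(N'YourDatabase');

    -- Free space on the volume(s) holding the log file(s).
    SELECT @free_mb = MIN(vs.available_bytes / 1048576)
    FROM sys.master_files AS f
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs
    WHERE f.database_id = DB_ID(N'YourDatabase')
      AND f.type_desc = N'LOG';

    IF @state IN (N'SUSPENDED', N'DISCONNECTED') AND @free_mb < 10240
    BEGIN
        -- Break the partnership so log records no longer accumulate; the
        -- next log backup can then truncate the log. Mirroring has to be
        -- set up again from scratch afterwards.
        ALTER DATABASE YourDatabase SET PARTNER OFF;
    END

Breaking the partnership this way means mirroring has to be re-established from scratch, but it would keep the principal from filling its drive in the meantime.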

RavenDB taking forever to show updates

I'm starting to assess our company using RavenDB for storing some stuff that doesn't really belong in a relational database (we're traditionally a SQL Server shop). I installed RavenDB locally on my machine, created a database, added a document. Nice!
Being a DBA, I decided to see how backups/restores work. I backed up my database, deleted it, then restored it from the backup. After refreshing my admin screen, I saw my database. I clicked on it, and got a message that the database doesn't exist.
After a couple of hours, I tried again. Still doesn't exist. A full day later, I walk into work and try again. This time the database works. I've had similar situations with updating documents: an update seems to take anywhere between 1 second and several hours to show up...
Is this normal for RavenDB?? Am I completely misconfigured?? I run SQL Server on my local machine and it's lightning-fast, so I can't imagine updating a single document could take that long. As-is, I can't imagine recommending we use RavenDB for anything.
Are you querying using indexes, or loading documents by ID? Documents should be updated immediately (ACID). If indexes are slow to update (check their status using RavenDB Studio), it could be a configuration problem, or something external like anti-virus software could be causing them to update slowly.
Apparently, at least for the document-update latency, query caching is enabled by default, so I was getting cached results.
Jeffery,
No, that isn't normal by a long shot. You should be able to see what was changed immediately.
Note that certain AV products will interfere with the HTTP pipeline and can affect RavenDB's usage. The studio will also auto update things only every 5 seconds (to reduce UI jitter), but that is about it.
Restoring a database (on the same machine) should take only as long as it takes to copy the files (a pure I/O-bound operation).
If this is from another machine running a different version of Windows, we might need to run a check on the file, which can take a bit of time, but that doesn't sound like your scenario.

SQL Log File Not Shrinking in SQL Server 2012

I am dealing with someone else's backup maintenance plan and have an issue with the log file. I have a database that sits on one drive with a size of 31 GB, and a log file that sits on another drive with a size of 20 GB; the database is in the Full recovery model. There is a maintenance plan that runs once a day to do a complete backup, and a second plan that backs up the log file every 15 minutes. I have checked the drive that the log file gets backed up to, and there is still plenty of room, but the log file never gets smaller after the backup. Is there something missing from the maintenance plan?
Thanks in advance
The situation as you describe it seems fine.
A transaction log backup does not shrink the log file. However, it does truncate the log, which means that space can be reused:
From Books Online (Transaction Log Truncation):
Log truncation automatically frees space in the logical log for reuse by the transaction log.
Also, from Managing the Transaction Log:
Log truncation, which is automatic under the simple recovery model, is essential to keep the log from filling. The truncation process reduces the size of the logical log file by marking as inactive the virtual log files that do not hold any part of the logical log.
This means that each time the transaction log backup occurs in your scenario, it's creating free space in the file which can be used by subsequent transactions.
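You can watch this happen yourself. A minimal sketch, assuming a database named 'YourDatabase' and a placeholder backup path:

    -- Check what (if anything) is holding up log truncation.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';

    -- "Log Space Used (%)" before the log backup.
    DBCC SQLPERF(LOGSPACE);

    BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase.trn';

    -- After the backup, the used percentage drops while "Log Size (MB)"
    -- stays the same: the file is not smaller, but the space inside it
    -- is free for reuse.
    DBCC SQLPERF(LOGSPACE);

If the used percentage does not drop, log_reuse_wait_desc will tell you what is still pinning the log (an open transaction, replication, and so on).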
Leading on from this, should you shrink the file as well? Generally speaking, the answer is no. Assuming your database does not have sudden, massive one-off spikes in usage, the transaction log will have grown to a size that accommodates the typical workload.
This means that if you start shrinking the log, SQL Server will just need to grow it again... This is a resource-intensive operation that affects server performance, and no transactions can complete while the log is growing.
The current plan and file sizes all seem reasonable to me.
I don't know if this applies to your situation, but earlier builds of SQL Server 2012 have a bug that crops up when model is set to the Simple recovery model. For any database created while model is set to Simple, the log file will continue to grow in an attempt to reach the 2,097,152 MB limit. This still applies if you alter the database to Full afterwards. KB article 2830400 states that altering to Full, then back to Simple, is a workaround -- that was not my experience; running CU 7 for SP1 was the only trick that worked for me.
The article provides links to the first updates that resolved this bug: "Cumulative Update 4 for SQL Server 2012 SP1", as well as "Cumulative Update 7 for SQL Server 2012" (if you haven't installed SP1).
If you change the recovery model to Full and then back to Simple, the shrink will work successfully.
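For completeness, that sequence looks roughly like this (a sketch only: 'YourDatabase', the logical log file name 'YourDatabase_log', and the 1024 MB target are placeholders). If the database normally runs in Full recovery, switch back and take a full backup afterwards, since the trip through Simple breaks the log backup chain.

    -- Sketch of the recovery-model flip and shrink. 'YourDatabase',
    -- 'YourDatabase_log' (the logical file name from sys.database_files)
    -- and the 1024 MB target size are placeholders.
    ALTER DATABASE YourDatabase SET RECOVERY FULL;
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

    DBCC SHRINKFILE (YourDatabase_log, 1024);

    -- If the database belongs in Full recovery, switch back and take a
    -- full backup so that log backups have a valid chain to build on.
    ALTER DATABASE YourDatabase SET RECOVERY FULL;
    BACKUP DATABASE YourDatabase TO DISK = N'X:\Backups\YourDatabase.bak';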

Shrink data base SQL Server 2008

Hi
I have made a maintenance package that uses a Shrink Database task for a specific database. It ran successfully, but I found a slight increase over the previous db size.
Initial size: 129 GB; after running the package: 130 GB.
I was expecting it to shrink after the shrink task ran. What might have happened? I am sure the package was scheduled to run, and checking the history shows it ran successfully.
Any help or advice would be appreciated; is any special care required? Thanks in advance.
Do not shrink the database during maintenance. There is probably no more damaging action you can take. Read more at Auto-shrink – turn it OFF. If a database has grown to a certain size, it will likely grow back after you shrink it. Shrinking the database is tremendously damaging to index fragmentation and will slow down your reporting and analytics workloads. Once shrunk, when the database grows back during normal operations, the auto-growth events will interrupt and freeze the database while it grows.
It is one thing to shrink a database that has gotten out of control due to some rogue action that increased its size. But having the shrink in a maintenance task means you will do it constantly, on a scheduled interval, and that is very bad.
There are a couple of things you can do to check on this. In SQL Server Management Studio (SSMS) Object Explorer, right-click on the database name and select Properties. On the General tab you'll find the Space Available value. Is there any available space?
Note that the space available includes space in the transaction log file. You need that space, so you don't want to shrink the database too much.
Also, keep in mind that your database is probably in the Full recovery model. What this means is that, as data is inserted, updated, and deleted, SQL Server records it in the transaction log. This log can become quite large on a busy database. You can keep its size in check by performing regular log backups (a full backup alone does not truncate the log). Remember, the point of the log is so that you can take log backups and do point-in-time restores. If you're not doing this, or don't need to, you might consider switching the database to the Simple recovery model.
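If you prefer a query to the Properties dialog, a sketch like this (run in the database in question) shows the same information per file:

    -- Allocated vs. used space per file for the current database.
    -- size and FILEPROPERTY(..., 'SpaceUsed') are in 8 KB pages, hence / 128.0.
    SELECT
        name AS logical_name,
        type_desc,
        size / 128.0 AS allocated_mb,
        FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
        (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sys.database_files;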

When should one use auto shrink on log files in SQL Server?

I have had a few problems with log files growing too big on my SQL Servers (2000). Microsoft doesn't recommend using auto-shrink for log files, but since it is a feature, it must be useful in some scenarios. Does anyone know when it is proper to use the auto-shrink property?
Your problem is not that you need to auto-shrink periodically, but that you need to back up the log files periodically. (We back ours up every 15 minutes.) Backing up the database itself is not sufficient; you must back up the log as well. If you do not back up the transaction log, it will grow until it takes up all the space on the drive. If you do back it up, the space is freed for reuse (you will still probably need one shrink after the first backup to get the log down to a more reasonable size). If you don't need to be able to recover transactions (which you should need, unless your entire database consists of tables that are loaded from another source and can easily be re-loaded), then set your database to the Simple recovery model.
One reason why auto-shrinking isn't a good idea is that you will be growing the transaction log frequently, which slows down performance. If you back up the log, then once it reaches a relatively stable size (the amount of space normally used by the transaction log in the period between backups), the log will only need to grow occasionally when there is an unusually heavy amount of transactions.
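The backup itself is simple to script and schedule, e.g. as a SQL Server Agent job step every 15 minutes. A sketch, with 'YourDatabase' and the backup path as placeholders:

    -- Scheduled log backup step; run e.g. every 15 minutes from an Agent job.
    -- 'YourDatabase' and the backup path are placeholders.
    DECLARE @file nvarchar(260);
    SET @file = N'X:\Backups\YourDatabase_'
        + CONVERT(nvarchar(8), GETDATE(), 112) + N'_'              -- yyyymmdd
        + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '')  -- hhmmss
        + N'.trn';

    BACKUP LOG YourDatabase TO DISK = @file;

    -- After the *first* log backup, a one-off shrink brings the file back
    -- to a reasonable size; after that, leave it alone and let it stabilise.
    -- DBCC SHRINKFILE (YourDatabase_log, 2048);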
My take on this is that auto-shrink is useful when you have many fairly small databases that frequently get larger due to added data, and then have a lot of empty space afterwards. You also need to not mind that the files will be fragmented on the disk when they frequently grow and shrink. I'd never use auto-shrink on a critical database or one larger than 2 GB, as you never know when the shrink operation will kick in, and access to the database will be blocked until the shrink has completed.
You should never have auto-shrink turned on. It causes performance degradation in several ways: the file system and indexes become fragmented, and it is very resource intensive. It is also not necessary if you manage your backups correctly.
Read this answer from Paul Randal on Server Fault and Just Say No To Auto-Shrink!!
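Checking for the setting and turning it off is a one-liner each; 'YourDatabase' is a placeholder:

    -- Is auto-shrink enabled? (1 = on, 0 = off)
    SELECT DATABASEPROPERTYEX(N'YourDatabase', 'IsAutoShrink') AS is_auto_shrink;

    -- Turn it off.
    ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;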
I used to use it when we had a demo version of a huge database that took up a lot of space on the laptop, so we used it to keep the size down.
The key is to use it only when the data is basically throw away.
You should truncate the logs periodically as a part of your backup strategy.