Tens of thousands of htm files in c:\Temp - visual-studio-2022

VS 2022 creates tens of thousands of htm files (obviously log files) in c:\Temp\Default and c:\Temp\NativeImage.
After a couple of weeks of working with VS they add up to several GB. This slows down work because the virus scanner takes all the CPU for several minutes after starting VS.
Deleting those folders gets me back to normal for a week or so, but that isn't good for my SSD either.
What can be done to prevent this?
I haven't seen this with VS 2019 or earlier versions of VS 2022.

From your comment, this sounds like logs from the Fusion Log Viewer. Double-check your settings by running fuslogvw.exe from an elevated command prompt (it needs to be elevated in order to change settings).
If you have enabled either of the last 2 options (Log bind failures to disk or Log all binds to disk), it will create a log file for each applicable event. This can end up consuming lots of disk space, as you're observing.
If you set it to the second option (Log in exception text), you'll be able to view these details in the debugger if you're encountering assembly bind failures, but it doesn't write to disk anywhere (just a slight memory increase in the exception objects).
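For reference, the fuslogvw.exe UI just toggles a handful of registry values under HKLM\SOFTWARE\Microsoft\Fusion. A sketch of what disabling disk logging from an elevated prompt would look like, assuming the standard Fusion logging switches (the UI remains the safer way to change these if you're unsure):

    rem Stop writing bind logs to disk (equivalent to picking one of the first two UI options)
    reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v ForceLog /t REG_DWORD /d 0 /f
    reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v LogFailures /t REG_DWORD /d 0 /f
    reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v LogResourceBinds /t REG_DWORD /d 0 /f
    rem Optional: keep bind failure details in exception text only
    reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v EnableLog /t REG_DWORD /d 1 /f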

How do I keep Particular ServiceControl Audit db (Raven5) from getting larger and larger in size?

We recently upgraded Particular.ServiceControl.Audit to v4.26 and recreated our Audit instance, since from v4.26 onwards new audit instances bump RavenDB up from 3.5 to 5.4.
We did this hoping to remedy a problem in 4.25.x where compacting the audit database would not free up any disk space.
As it is, the database is still growing really large (300-400 GB). To test whether the data it contains should actually use up all this disk space, we tried tampering with the ServiceControl.Audit/AuditRetentionPeriod config parameter, setting it to a small value like 1 day (the previous value was 30 days) and - naively, maybe - waiting for the db to shrink at some point, expecting that retention would somehow influence the disk space used. The db file remained the same size (and kept growing).
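For reference, the setting we changed sits in the audit instance's app config, roughly like this (the exact file name and the TimeSpan format shown here are assumptions on our part; check the Particular docs for your version):

    <!-- ServiceControl.Audit.exe.config (file name assumed) -->
    <appSettings>
      <!-- 1.00:00:00 = 1 day as a .NET TimeSpan; the previous value was 30 days -->
      <add key="ServiceControl.Audit/AuditRetentionPeriod" value="1.00:00:00" />
    </appSettings>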
The docs for compacting the database in ServiceControl only mention the ESENT-based Raven 3.5, but Raven 5+ uses Raven's own Voron storage engine. There do not seem to be any Particular docs describing how to compact a Voron db.
So we tried to follow the RavenDB docs for compacting the db through Raven Studio after putting the service in maintenance mode. As the screenshot shows, this operation stopped or stalled very shortly after initializing (disregard time elapsed, as we actually left it running for several hours).
We tried this several times with no luck. We can see that the disk containing the specific db has zero reads or writes, so it is definitely stalled in some way. Pressing the "Abort" button also got stuck every time, after which we resorted to simply restarting the entire service (which then seemed to bring the db back to normal operation).
So the question is: how do we keep the audit db from growing indefinitely? At no point do we see it stop growing or stay the same size, and our SSD disk costs keep rising as a result.

WAL log files fill up quickly - how to prevent this?

Currently the WAL logs in the folder “/engine-rocksdb/journals” are filling up the disk.
When does ArangoDB do a cleaning run of these logs and delete them automatically and how to trigger this cleaning run earlier? My ArangoDB 3.10 runs in single mode and in a virtual environment (cloud with a network storage).
The log files are growing very fast for me because there are many writes to the DB. What is the best way to handle this, any ideas?
What I have done so far:
If I set the value “rocksdb.wal-archive-size-limit” it does delete the logs when the set limit is reached, but it shows errors in the logfile:
2022-09-27T17:53:04Z [898948] WARNING [d9793] {engines} forcing removal of RocksDB WAL file '/archive/813371.log' with start sequence 5387062892 because of overflowing archive. configured maximum archive size is 1073741824, actual archive size is: 75401520
However, I still don't understand the meaning of the log output: "configured maximum archive size is 1073741824, actual archive size is: 75401520". The "actual archive size" is smaller?
But what are the consequences of lowering the "wal-archive-size-limit" value? Is it possible to switch off the WAL archive completely? What exactly is it for? As I understand it, ArangoDB needs it for transaction safety (i.e. in case of power loss), right?
In general, yes, this is a good thing, but how can I get ArangoDB to a) limit this WAL archive (without error messages) and b) do its cleaning runs sooner?
thx :-)
When does ArangoDB do a cleaning run of these logs and delete them automatically and how to trigger this cleaning run earlier?
ArangoDB uses RocksDB underneath, and RocksDB moves WAL files (.log files) into its archive as soon as possible. In order to do so, all data from the WAL file needs to be safely stored in the column families' .sst files and flushed to disk.
ArangoDB will delete files from the WAL archive (and only from there) once it can assure that an archived WAL file is not used anymore. It will not remove files from the archive that are or may be in current use.
There are a few reasons why ArangoDB may keep archived WAL files for some time:
when server-to-server replication is used: while a follower replicates data, it may read from the leader's WAL. Deleting the WAL file on the leader may make the replication fail
when arangodump is used to create a database dump, it will create a snapshot of data on the server, and the WAL files for that snapshot will be kept around until the snapshot isn't needed anymore (i.e. arangodump finishes).
the first 180 seconds after server start, all WAL files are intentionally kept, for forensic reasons, and to allow followers to replay events from a leader's WAL when it is restarted. The value of 180 seconds can be changed by adjusting the startup option --rocksdb.wal-file-timeout-initial.
there can be some background processing of changes that may refer to data from WAL files. For example, each insert into a collection will need to increase the collection's count() value by 1. To save an extra write into RocksDB on each insert, the count() value is only written to the storage engine by a background thread, ideally only once every X insert operations. However, this may lead to WAL files being around for a bit longer, especially if the background thread cannot keep up with the insert workload.
There is the startup option --rocksdb.wal-archive-size-limit to put a hard limit on the cumulative size of the WAL files in the archive. From your question, it appears that you are currently using ArangoDB version 3.10.
From the warning message you posted, it seems that the WAL archive cleanup somehow applies the wrong limit values.
It turns out that there has been a recent bugfix, released in ArangoDB version 3.10.1, 3.9.4, and 3.8.8, that should rectify this behavior. So upgrading to one of these or later versions may actually help when using the WAL archive size limit.
We shared your question in the Speedb hive on Discord, and here is what we got for you:
"By default, ArangoDB sets max_wal_size to 1 GB; the value of rocksdb.wal-archive-size-limit must be set to at least twice this number (otherwise you may end up with a single WAL file and the delete will fail)."
Hope this helps. If it doesn't, or you have follow-up questions, please join the Speedb Discord and we will be happy to help.
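For reference, these options can be set on the command line or in the [rocksdb] section of arangod.conf; a sketch (the 2 GiB value simply follows the "at least twice max_wal_size" advice above and is only an example; tune it for your workload):

    # Command line (archive limit in bytes, timeout in seconds)
    arangod --rocksdb.wal-archive-size-limit=2147483648 --rocksdb.wal-file-timeout-initial=60

    # Or in arangod.conf
    [rocksdb]
    wal-archive-size-limit = 2147483648
    # only affects the retention window right after server start (default 180 seconds)
    wal-file-timeout-initial = 60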

azure virtual machine disk errors

I've been using 3 identical VMs on Azure for a month or more without problem.
Today I couldn't Remote Desktop to one of them, and restarted it from the Azure Portal. That took a long time. It eventually came back up, and the Event log has numerous entries such as:
"The IO operation at logical block address 70 for Disk 0 ..... was retried"
"Windows cannot access the file C:\windows\Microsoft.Net\v4.0.30319\clrjit.dll for one of the following reasons, network, disk etc.
There are lots of errors like this. To me they seem symptomatic that the underlying disk system is having serious problems. Given the VHD is stored in a triple replicated Azure blob, I would have thought there was some immunity to this kind of thing?
Many hours later it's still doing the same thing. It works fine for a few hours, then slows to a crawl with the Event log containing lots of disk problems. I can upload screen shots of the event log if people are interested.
This is a pretty vanilla VM, I'm only using the one OS disk it came with.
The other two identical VMs in the same region are fine.
Just wondering if anybody has seen this before with Azure VMs and how to safeguard against it, or recover from it.
Thanks.
Thank you for providing all the details and we apologize for the inconvenience. We have investigated the failures and determined that they were caused by a platform issue. Your virtual machine’s disk does not have any problems and therefore you should be able to continue using it as is.

SQL Log File Not Shrinking in SQL Server 2012

I am dealing with someone else's backup maintenance plan and have an issue with the log file. I have a database that sits on one drive with a size of 31 GB, and a log file that sits on another server with a size of 20 GB; the database is in the Full recovery model. There is a maintenance plan that runs once a day to do a complete backup, and a second plan that backs up the log file every 15 minutes. I have checked the drive that the log file gets backed up to and there is still plenty of room, but the log file never gets smaller after the backup. Is there something missing from the maintenance plan?
Thanks in advance
The situation as you describe it seems fine.
A transaction log backup does not shrink the log file. However, it does truncate the log, which means that the space inside the file can be reused:
From Books Online (Transaction Log Truncation):
Log truncation automatically frees space in the logical log for reuse by the transaction log.
Also, from Managing the Transaction Log:
Log truncation, which is automatic under the simple recovery model, is essential to keep the log from filling. The truncation process reduces the size of the logical log file by marking as inactive the virtual log files that do not hold any part of the logical log.
This means that each time the transaction log backup occurs in your scenario, it's creating free space in the file which can be used by subsequent transactions.
Leading on from this, should you shrink the file as well? Generally speaking, the answer is no. Assuming your database does not suddenly have massive one-off spikes in usage, the transaction log will have grown to a size to accommodate the typical workload.
This means if you start shrinking the log, SQL Server will just need to grow it again... This is a resource intensive operation, affecting server performance, and no transactions can complete while the log is growing.
The current plan and file sizes all seem reasonable to me.
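If you want to confirm that the log backups really are freeing internal space, or do a one-off shrink after an unusual spike, a quick T-SQL sketch (database and file names are placeholders):

    -- How much of each log file is actually in use, and what (if anything) is blocking reuse?
    DBCC SQLPERF(LOGSPACE);
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'YourDb';

    -- One-off shrink, only after an atypical growth event:
    BACKUP LOG [YourDb] TO DISK = N'E:\LogBackups\YourDb_log.trn';
    DBCC SHRINKFILE (N'YourDb_log', 4096);  -- target size in MB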
I don't know if this applies to your situation, but earlier versions of SQL Server 2012 have a bug that crops up when the model database is set to the Simple recovery model. For any database created while model is set to Simple, the log file will continue to grow in an attempt to reach the 2,097,152 MB limit. This still applies if you alter the database to Full afterwards. KB article 2830400 states that altering to Full, then altering back to Simple, is a workaround -- that was not my experience; running CU 7 for SP1 was the only fix that worked for me.
The article provides links for the first updates that resolved this bug: "Cumulative Update 4 for SQL Server 2012 SP1", as well as, "Cumulative Update 7 for SQL Server 2012" (if you haven't installed SP1).
If you change the recovery to full and then back to simple, the shrink will work successfully.

Using a SQL Server for application logging. Pros/Cons?

I have a multi-user application that keeps a centralized logfile for activity. Right now, that logging is going into text files to the tune of about 10MB-50MB / day. The text files are rotated daily by the logger, and we keep the past 4 or 5 days worth. Older than that is of no interest to us.
They're read rarely: either when developing the application for error messages, diagnostic messages, or when the application is in production to do triage on a user-reported problem or a bug.
(This is strictly an application log. Security logging is kept elsewhere.)
But when they are read, they're a pain in the ass. Grepping 10 MB text files is no fun even with Perl: the fields (transaction ID, user ID, etc.) in the file are useful, but they're just text. Messages are written sequentially, one line at a time, so interleaved activity is all mixed up when you try to follow a particular transaction or user.
I'm looking for thoughts on the topic. Anyone done application-level logging with an SQL database and liked it? Hated it?
I think that logging directly to a database is usually a bad idea, and I would avoid it.
The main reason is this: a good log will be most useful when you can use it to debug your application post-mortem, once the error has already occurred and you can't reproduce it. To be able to do that, you need to make sure that the logging itself is reliable. And to make any system reliable, a good start is to keep it simple.
So having a simple file-based log with just a few lines of code (open file, append line, close file or keep it open, repeat...) will usually be more reliable and useful in the future, when you really need it to work.
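A minimal sketch of what that looks like (Python purely as an illustration; the file name and line format are arbitrary):

    from datetime import datetime, timezone

    def log_line(message, path="app.log"):
        # Open, append one timestamped line, close: very little can go wrong here.
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        with open(path, "a", encoding="utf-8") as f:
            f.write(f"{stamp} {message}\n")

    log_line("txn=T-1001 user=alice order submitted")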
On the other hand, logging successfully to an SQL server will require that a lot more components work correctly, and there will be a lot more possible error situations where you won't be able to log the information you need, simply because the log infrastructure itself won't be working. And worse still: a failure in the logging procedure (like a database corruption or a deadlock) will probably affect the performance of the application, and then you'll have a situation where a secondary component prevents the application from performing its primary function.
If you need to do a lot of analysis of the logs and you are not comfortable using text-based tools like grep, then keep the logs in text files and periodically import them into an SQL database. If the SQL import fails you won't lose any log information, and it won't even affect the application's ability to function. Then you can do all the data analysis in the DB.
I think those are the main reasons why I don't do logging to a database, although I have done it in the past. Hope it helps.
We used a Log Database at my last job, and it was great.
We had stored procedures that would spit out overviews of general system health for different metrics that I could load from a web page. We could also quickly spit out a trace for a given app over a given period, and if I wanted it was easy to get that as a text file, if you really just like grep-ing files.
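To give an idea, the kind of query such an overview procedure might wrap looks something like this (table and column names are invented for the example):

    -- Errors and warnings per application over the last 24 hours
    SELECT app_name, severity, COUNT(*) AS entries
    FROM app_log
    WHERE logged_at >= DATEADD(HOUR, -24, GETDATE())
      AND severity IN ('ERROR', 'WARN')
    GROUP BY app_name, severity
    ORDER BY app_name, severity;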
To ensure the logging system does not itself become a problem, there was of course a common code framework used among the different apps that handled writing to the log table. Part of that framework also logged to a file, in case the problem was with the database itself, and part of it handled cycling the logs. As for the space issues, the log database is on a different backup schedule, and it's really not an issue. Space (not backed up) is cheap.
I think that addresses most of the concerns expressed elsewhere. It's all a matter of implementation. But if I stopped here it would still be a case of "not much worse", and that's a bad reason to go to the trouble of setting up DB logging. What I liked about this is that it allowed us to do some new things that would be much harder to do with flat files.
There were four main improvements over files. The first is the system overviews I've already mentioned. The second, and imo the most important, was a check to see if any app was missing messages where we would normally expect to find them. That kind of thing is near-impossible to spot in traditional file logging unless you spend a lot of time each day reviewing mind-numbing logs for apps that just tell you everything's okay 99% of the time. It's amazing how freeing a view of missing log entries is. Most days we didn't need to look at most of the log files at all... something that would be dangerous and irresponsible without the database. (A sketch of what that check boils down to is below.)
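Roughly, the check is just a query over a table of expected applications; the table and column names here are made up:

    -- Applications that have written nothing in the last 24 hours
    SELECT e.app_name
    FROM expected_apps AS e
    WHERE NOT EXISTS (
        SELECT 1
        FROM app_log AS l
        WHERE l.app_name = e.app_name
          AND l.logged_at >= DATEADD(HOUR, -24, GETDATE())
    );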
That brings up the third improvement. We generated a single daily status e-mail, and it was the only thing we needed to review on days when everything ran normally. The e-mail showed errors and warnings, missing logs were re-logged as warnings by the same db job that sent the e-mail, and the e-mail failing to arrive was itself a big deal. We could forward a particular log message to our bug tracker with one click, right from within the daily e-mail (it was html-formatted and pulled data from a web app).
The final improvement was that if we did want to follow a specific app more closely, say after making a change, we could subscribe to an RSS feed for that specific application until we were satisfied. It's harder to do that from a text file.
Where I'm at now, we rely a lot more on third-party tools and their logging abilities, and that means going back to a lot more manual review. I really miss the DB, and I'm contemplating writing a tool to read those logs and re-log them into a DB to get these abilities back.
Again, we did this with text files as a fallback, and it's the new abilities that really make the database worthwhile. If all you're gonna do is write to a DB and try to use it the same way you did the old text files, it adds unnecessary complexity and you may as well just use the old text files. It's the ability to build out the system for new features that makes it worthwhile.
Yeah, we do it here, and I can't stand it. One problem we have is that if there is a problem with the db (connection issues, corruption, etc.), all logging stops. My other big problem with it is that it's difficult to look through when tracing problems. We also have issues with the log tables taking up too much space, and we have to worry about truncating them when we move databases because our logs are so large.
I think it's clunky compared to log files. I find it difficult to see the "big picture" with everything stored in the database. I'll admit I'm a log-file person; I like being able to open a text file and look through it (with regex) instead of using SQL to try and search for something.
The last place I worked we had log files of 100 MB plus. They're a little difficult to open, but if you have the right tool it's not that bad. We had a system for our log messages too, so you could quickly look at the file and determine which set of log entries belonged to which process.
We've used SQL Server centralized logging before, and as the previous poster mentioned, the biggest problem was that interrupted connectivity to the database meant interrupted logging. I actually ended up adding a queuing routine to the logging that would try the DB first and write to a physical file if that failed. You'd just have to add code to that routine so that, on a successful log to the db, it checks whether any other entries are queued locally and writes those too.
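A rough sketch of that routine's shape (Python for illustration; write_to_db and write_to_file stand in for whatever your logging code actually does):

    import queue

    pending = queue.Queue()  # entries that could not reach the database yet

    def log(entry, write_to_db, write_to_file):
        # Try the database first; on failure, never lose the line - queue it and write to file.
        try:
            write_to_db(entry)
        except Exception:
            pending.put(entry)
            write_to_file(entry)
            return
        # The DB is reachable again: replay anything queued while it was down.
        while not pending.empty():
            queued = pending.get_nowait()
            try:
                write_to_db(queued)
            except Exception:
                pending.put(queued)  # put it back and stop; retry on the next successful call
                break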
I like having everything in a DB, as opposed to physical log files, but just because I like parsing it with reports I've written.
I think the problem you have with logging could be solved by logging to SQL, provided that you are able to split out the fields you are interested in into different columns. You can't treat the SQL database like one big text field and expect it to be better; it won't be.
Once you get everything you're interested in logged to the columns you want, it's much easier to track the sequential actions of something by being able to isolate it by column. For example, if you had an "entry" process, you would log everything normally, with the text "entry process" going into the "logtype" or "process" column. Then when you have problems with the "entry" process, a WHERE clause on that column isolates all entry processes.
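A small sketch of what that looks like (column names are just examples):

    CREATE TABLE app_log (
        logged_at DATETIME2     NOT NULL,
        process   VARCHAR(50)   NOT NULL,  -- e.g. 'entry process'
        txn_id    VARCHAR(50)   NULL,
        user_id   VARCHAR(50)   NULL,
        message   NVARCHAR(MAX) NOT NULL
    );

    -- Isolate one process, in order, without any grepping
    SELECT logged_at, txn_id, user_id, message
    FROM app_log
    WHERE process = 'entry process'
    ORDER BY logged_at;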
We do it in our organization in large volumes with SQL Server. In my opinion, writing to a database is better because of the search and filter capability. Performance-wise, 10 to 50 MB of data per day, kept for only 5 days, will not affect your application. Tracking transactions and users will be very easy compared to tracking them in a text file, since you can filter by transaction or user.
You mention that the files are read rarely. So decide whether it is worth putting in the development effort to build the logging framework: compare the time you spend searching text log files in a year against the time it will take to code and test. If you spend an hour or more a day searching logs, it is better to dump the logs into a database, which can drastically reduce the time spent solving issues.
If you spend less than an hour, you can use a text search tool like "SRSearch", a great tool that I have used: it searches multiple files in a folder and gives you the results in small snippets (like a Google search result), and you click to open the file containing the hit. There are other text search tools available too. If the environment is Windows, there is also Microsoft LogParser, a good free tool that lets you query your files like a database if they are written in a specific format.
Here are some additional pros and cons and the reason why I prefer log files instead of databases:
Space is not that cheap when using VPSs. Recovering space on live database systems is often a huge hassle, and you might have to shut down services while recovering it. If your logs are so important that you have to keep them for years (like we do), then this is a real problem. Remember that most databases do not return space to the OS when you delete data; they simply reuse it - not much help if you are actually running out of space.
If you access the logs frequently and have to pull daily reports from a database with one huge log table and millions and millions of records, you will impact the performance of your database services while querying the data.
Log files can be created and older logs archived daily. Depending on the type of logs, massive amounts of space can be recovered by archiving them. We save around 6x the space when we compress our logs, and in most cases you'll probably save much more.
Individual smaller log files can be compressed and transferred easily without impacting the server. Previously we had hundreds of GB worth of log data in a database. Moving such large databases between servers becomes a major hassle, especially because you have to shut down the database server while doing so. What I'm saying is that maintenance becomes a real pain the day you have to start moving large databases around.
Writing to log files is in general a lot faster than writing to a DB. Don't underestimate the speed of your operating system's file IO.
Log files only suck if you don't structure your logs properly. You may have to use additional tools and you may even have to develop your own to help process them, but in the end it will be worth it.
You could log to a comma- or tab-delimited text format, or enable your logs to be exported to a CSV format. When you need to read from a log, export your CSV file to a table on your SQL Server; then you can query it with standard SQL statements. To automate the process you could use SQL Server Integration Services.
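If you'd rather not build a full Integration Services package, a plain BULK INSERT can handle the import; a sketch (path, table name, and delimiters are placeholders):

    -- Load a day's CSV export into a staging table, then query it like any other table
    BULK INSERT dbo.app_log_staging
    FROM 'C:\logs\app-log.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);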
I've been reading all the answers and they're great. But in a company I worked for, due to several restrictions and audits, it was mandatory to log to a database. Anyway, we had several ways to log, and the solution was to install a pipeline where our programmers could connect to the pipeline and log to a database, a file, the console, or even forward the log to a port to be consumed by other applications.
This pipeline doesn't interrupt the normal process, and keeping a log file at the same time as you log to the database ensures you rarely lose a line.
I suggest you look into log4net; it's great for this.
http://logging.apache.org/log4net/
I could see it working well, provided you had the capability to filter what needs to be logged and when it needs to be logged. A log file (or table, such as it is) is useless if you can't find what you're looking for or contains unnecessary information.
Since your logs are read rarely, I would write them to a file (better performance and reliability).
Then, if and only if you need to read them, I would import the log file into a database (better analysis).
Doing so, you get the advantages of both methods.