(Windows) Exception Handling: to Event Log or to Database?

I've worked in shops where I've implemented Exception Handling into the event log, and into a table in the database.
Each has its merits, of which I can highlight a few based on my experience:
Event Log
Industry standard location for exceptions (+)
Ease of logging (+)
Can log database connection problems here (+)
Can build report and viewing apps on top of the event log (+)
Needs to be flushed every so often if a lot is reported there (-)
Not as extensible as SQL logging [add custom fields like method name in SQL] (-)
SQL/Database
Can handle large volumes of data (+)
Can handle rapid volume inserts of exceptions (+)
Single storage location for exceptions in a load-balanced environment (+)
Very customizable (+)
A little easier to build reporting/notification off of SQL storage (+)
Different from where typical exceptions are stored (-)
Am I missing any major considerations?
I'm sure that a few of these points are debatable, but I'm curious what has worked best for other teams, and why you feel strongly about the choice.

You need to differentiate between logging and tracing. While the lines are a bit fuzzy, I tend to think of logging as "non developer stuff". Things like unhandled exceptions, corrupt files, etc. These are definitely not normal, and should be a very infrequent problem.
Tracing is what a developer is interested in. The stack traces, method parameters, that the web server returned an HTTP Status of 401.3, etc. These are really noisy, and can produce a lot of data in a short amount of time. Normally we have different levels of tracing, to cut back the noise.
For logging in a client app, I think that Event Logs are the way to go (I'd have to double check, but I think ASP.NET Health Monitoring can write to the Event Log as well). Normal users have permission to write to the event log, as long as your setup program (which is run by an admin anyway) creates the event source.
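As a minimal sketch of that split (the "MyApp" source name is hypothetical): the source is registered once by the setup program, and the running application only writes entries.

    using System;
    using System.Diagnostics;

    static class AppEventLog
    {
        const string Source = "MyApp";            // hypothetical event source name
        const string LogName = "Application";

        // Run once from the setup program, under an admin account:
        // creating an event source requires elevated rights.
        public static void RegisterSource()
        {
            if (!EventLog.SourceExists(Source))
                EventLog.CreateEventSource(Source, LogName);
        }

        // At runtime a normal user can write entries,
        // because the source already exists.
        public static void LogUnhandled(Exception ex)
        {
            EventLog.WriteEntry(Source, ex.ToString(), EventLogEntryType.Error);
        }
    }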
Most of your advantages for Sql logging, while true, aren't applicable to event logging:
Can handle large volumes of data: Do you really have large volumes of unhandled exceptions or other high-level failures?
Can handle rapid volume inserts of exceptions: A single unhandled exception should bring your app down - it's inherently rate limited. Other events interesting to non-developers should be similarly aggregated.
Very customizable: The message in an Event Log is pretty much free text. If you need more info, just point to a text or structured XML or binary file log
A little easier to build reporting/notification off of SQL storage: Reporting is built in with the Event Log Viewer, and notification systems are either inherent - due to an application crash - or mixed in with other really critical notifications - there's little excuse for missing an Event Log message. For corporate or other networked apps, there are a thousand and one different apps that already cull errors from Event Logs... chances are your sysadmin is already using one.
For tracing, which the specific details of an exception or error are a part of, I like flat files - they're easy to maintain, easy to grep, and can be imported into Sql for analysis if I like.
90% of the time, you don't need them and they're set to WARN or ERROR. But, when you do set them to INFO or DEBUG, you'll generate a ton of data. An RDBMS has a lot of overhead - for performance (ACID, concurrency, etc.), storage (transaction logs, SCSI RAID-5 drives, etc.), and administration (backups, server maintenance, etc.) - all of which are unnecessary for trace logs.

I wouldn't log straight to the database. As you say, database issues become tricky to log :)
I would log to the filesystem, and then have a job which bulk-inserts from files to the database. Personally I like having the logs in the database in the long run, primarily for the scaling situation - I pretty much assume I'll have more than one machine running, and it's handy to be able to effectively have a combined log. (Each entry should state the machine it comes from, of course.)
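A sketch of such a bulk-insert job, assuming a hypothetical Logs table (MachineName, LoggedAt, Message) and tab-delimited "timestamp<TAB>message" lines:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    static class LogShipper
    {
        public static void BulkInsert(string logFile, string connectionString)
        {
            var table = new DataTable();
            table.Columns.Add("MachineName", typeof(string));
            table.Columns.Add("LoggedAt", typeof(DateTime));
            table.Columns.Add("Message", typeof(string));

            foreach (var line in File.ReadLines(logFile))
            {
                var parts = line.Split('\t');            // "timestamp<TAB>message"
                if (parts.Length < 2) continue;
                table.Rows.Add(Environment.MachineName,
                               DateTime.Parse(parts[0]), parts[1]);
            }

            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "Logs";       // hypothetical table
                bulk.WriteToServer(table);
            }
        }
    }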
Report and viewing apps can be done very easily from a database - there may be fewer log-specialized reporting tools out there at the moment, but pretty much all databases have generalised reporting functionality.
For ease of logging, I'd use a framework like log4net which takes a lot of the effort out of it, and is a tried and tested solution. Aside from anything else, that means you can change your output strategy with no code changes. You could even log to both the event log and the database if necessary, or send some logs to one place and some to the other. (I've assumed .NET here, but there are similar logging frameworks for many platforms.)
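As a sketch of what that looks like in application code (assuming log4net here; the appenders - file, event log, database - are declared in the config file, so the output strategy can be swapped without recompiling):

    using System;
    using log4net;
    using log4net.Config;

    class Program
    {
        // One logger per class is the usual log4net pattern.
        static readonly ILog Log = LogManager.GetLogger(typeof(Program));

        static void Main()
        {
            // Reads appender definitions from the application's config file,
            // so where the output goes can change with no code changes.
            XmlConfigurator.Configure();

            try
            {
                // ... application work ...
            }
            catch (Exception ex)
            {
                Log.Error("Unhandled failure", ex);
            }
        }
    }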

One thing that needs considering about event logging is that there are products out there which can monitor your servers' event logs (like Microsoft Operations Manager) and intelligently do notification, and gather statistics on their contents.
A "minus" of SQL-based logging is that it adds another layer of dependencies to your application, which may or may not always be acceptable. I've done both in my career. I once or twice even used a MSMQ based message queue to queue log events and empty the queue into a MSSQL database to eliminate the need for my client software to have a connection to the DB.

One note about writing to the event log: it requires certain permissions for your application's users, which in some environments may be restricted by default.
Where I'm at we do most of our logging to a database, with flat files as backup. It's pretty nice, we can do things like get an RSS feed for an app to watch for a few days when we make a change.

Related

Reliability of Windows Azure Storage Logging

We are in the process of creating a piece of software to back up a storage account (blobs & tables, no queues), and while researching how to do this we came across the possibility of using storage logging. We would like to use this feature to do smart incremental backups after an initial full backup. However, in the introductory post for this feature here, the following caveat is mentioned:
During normal operation all requests are logged; but it is important to note that logging is provided on a best effort basis. This means we do not guarantee that every message will be logged due to the fact that the log data is buffered in memory at the storage front-ends before being written out, and if a role is restarted then its buffer of logs would be lost.
As this is a backup solution, this behavior makes the feature unusable; we can't afford to miss a file. However, I wonder if this has changed in the meantime, as Microsoft has built a number of features on top of it, like blob function triggers and, very recently, their new Azure Event Grid.
My question is whether this behavior has changed in the meantime, or whether the logs are still on a best-effort basis and we should stick to our 'scanning' strategy?
The behavior for Azure Storage logs is still the same. For your case, you might be better off using the Event Grid notifications for Blob storage: https://azure.microsoft.com/en-us/blog/introducing-azure-event-grid-an-event-service-for-modern-applications/

Multiple applications on a network with the same SQL database

I will have multiple computers on the same network with the same C# application running, connecting to a SQL database.
I am wondering if I need to use the Service Broker to ensure that if I update record A in table B on Machine 1, the change is pushed to Machine 2. I have seen applications that need to use messaging servers to accomplish this before, but I was wondering why this is necessary; surely if they connect to the same database, any changes from one machine will be reflected on the other?
Thanks :)
This is mostly about consistency and latency.
If your applications always perform atomic operations on the database, and they always read whatever they need with no caching, everything will be consistent.
In practice, this is seldom the case. There are plenty of hidden opportunities for caching, like when you have an edit form - it has the values the entity had before you started the edit process, but what if someone modified those in the meantime? You'd just overwrite their changes with your data.
Solving this is a bunch of architectural decisions. Different scenarios require different approaches.
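One common way to avoid that lost update - sketched below against a hypothetical Customers table with a rowversion column - is optimistic concurrency: only apply the update if the row hasn't changed since it was read.

    using System.Data.SqlClient;

    static class CustomerRepository
    {
        // Returns false if someone else changed the row since we read it,
        // so the caller can reload the entity and re-apply the edit.
        public static bool TryUpdateName(string connectionString, int id,
                                         string newName, byte[] originalRowVersion)
        {
            const string sql =
                @"UPDATE Customers
                     SET Name = @name
                   WHERE Id = @id AND RowVersion = @originalVersion";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@name", newName);
                command.Parameters.AddWithValue("@id", id);
                command.Parameters.AddWithValue("@originalVersion", originalRowVersion);
                connection.Open();
                return command.ExecuteNonQuery() == 1;   // 0 rows: a concurrent edit won
            }
        }
    }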
Once data is committed in the database, everyone reading it will see the same thing - but only if they actually get around to reading it, and the two reads aren't separated by another commit.
Update notifications are mostly concerned with invalidating caches, and perhaps some push-style processing (e.g. IM client might show you a popup saying you got a new message). However, SQL Server notifications are not reliable - there is no guarantee that you'll get the notification, and even less so that you'll get it in time. This means that to ensure consistency, you must not depend on the cached data, and you have to force an invalidation once in a while anyway, even if you didn't get a change notification.
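For the cache-invalidation case, here is a rough sketch using SqlDependency (SQL Server query notifications). The table and query are hypothetical, and the notification is treated strictly as a hint to refresh - never as a guaranteed signal.

    using System.Data.SqlClient;

    static class CustomerCache
    {
        static volatile bool _stale = true;

        // Callers must still refresh periodically even if no notification ever arrives.
        public static bool IsStale => _stale;

        public static void LoadAndWatch(string connectionString)
        {
            SqlDependency.Start(connectionString);   // requires Service Broker on the database

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Id, Name FROM dbo.Customers", connection))  // two-part name, explicit columns
            {
                // One-shot subscription: it fires at most once, then you must re-subscribe.
                var dependency = new SqlDependency(command);
                dependency.OnChange += (sender, e) => _stale = true;   // hint only; may never fire

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // ... populate the cache from the reader ...
                }
                _stale = false;
            }
        }
    }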
Remember, even if you're actually using a database that's close enough to ACID, it's usually not the default setting (for performance and availability, mostly). You need to understand what kind of guarantees you're getting, and how to write code to handle this. Even the most perfect ACID database isn't going to help your consistency if your application introduces those inconsistencies :)

log4net reconnectonerror - is connect timeout necessary?

We have a number of .NET applications that make use of log4net for logging to a SQL Server database. For various reasons (unrelated to log4net... I think) logging occasionally stops. The application might continue working, but logging will not continue until the IIS application pool is recycled. The obvious solution would be to add reconnectonerror to the log4net appender. However, as I understand it, it is always suggested that "connect timeout=1" be added to the appender's connection string. Why?
What I mean is...
If log4net logging worked without "connect timeout=1", why would adding "connect timeout=1" matter now?
According to the documentation, the act of reconnecting may block the calling thread.
More specifically, if a connection is not available, Log4Net will attempt to reconnect once there are enough messages for a batch. If there is a chronic problem with the database, this could result in performance degradation -- especially if you have configured lots of logging or a small batch size.
One of Apache's design goals is to allow log statements to remain in production code without incurring high performance cost. That's where the connect timeout suggestion comes from. If you have to pay the cost of reconnecting, at least make it quick so you don't take too much of a performance hit.
Sources :
http://logging.apache.org/log4net/release/sdk/log4net.Appender.AdoNetAppender.ReconnectOnError.html
http://mail-archives.apache.org/mod_mbox/logging-log4net-user/200506.mbox/%3CDDEB64C8619AC64DBC074208B046611C7692DA#kronos.neoworks.co.uk%3E
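To make the suggestion concrete, here is a sketch of the relevant part of an AdoNetAppender configuration (server, database, and buffer size are placeholders; the commandText and parameter mappings are omitted):

    <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
      <!-- Try to re-open the connection the next time a batch of events is written. -->
      <reconnectOnError value="true" />
      <!-- Keep the reconnect attempt cheap so a dead database
           doesn't block the calling thread for long. -->
      <connectionString value="Data Source=.;Initial Catalog=Logs;Integrated Security=True;Connect Timeout=1" />
      <bufferSize value="100" />
      <!-- commandText and parameter mappings omitted -->
    </appender>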

How important is Audit Logging in a Standalone Application

My experience is mostly in developing web applications, and we do a lot of audit trails there. Literally every table is audited. I believe this is because user transactions are centralized on a server and they share the same tables, so it is important to know who did what.
But now I am assigned to a project developing a standalone application (specifically a mobile application with occasional server transactions). Some are suggesting adding audit logging, but I am not sure what the norm is for standalone applications. For those who have experience, kindly share whether you think it is mandatory or not. I'm leaning towards NO (that it is not that important) because it will only increase resource consumption (and mobile resources are limited). It may affect performance, stability and usability.
Well, for the app itself it may not be so important; however, having automated error logs can be very useful when you are faced with angry customers and need to quickly debug the app. You can even have a special 'debug mode' to collect more info.
You should also log your server transactions; adding an extra query to the request won't really affect performance.

Using a SQL Server for application logging. Pros/Cons?

I have a multi-user application that keeps a centralized logfile for activity. Right now, that logging is going into text files to the tune of about 10MB-50MB / day. The text files are rotated daily by the logger, and we keep the past 4 or 5 days worth. Older than that is of no interest to us.
They're read rarely: either when developing the application for error messages, diagnostic messages, or when the application is in production to do triage on a user-reported problem or a bug.
(This is strictly an application log. Security logging is kept elsewhere.)
But when they are read, they're a pain in the ass. Grepping 10MB text files is no fun even with Perl: the fields (transaction ID, user ID, etc.) in the file are useful, but just text. Messages are written sequentially, one line at a time, so interleaved activity is all mixed up when trying to follow a particular transaction or user.
I'm looking for thoughts on the topic. Anyone done application-level logging with an SQL database and liked it? Hated it?
I think that logging directly to a database is usually a bad idea, and I would avoid it.
The main reason is this: a good log will be most useful when you can use it to debug your application post-mortem, once the error has already occurred and you can't reproduce it. To be able to do that, you need to make sure that the logging itself is reliable. And to make any system reliable, a good start is to keep it simple.
So having a simple file-based log with just a few lines of code (open file, append line, close file or keep it opened, repeat...) will usually be more reliable and useful in the future, when you really need it to work.
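For illustration, a minimal sketch of that kind of file-based log (the path and line format are just placeholders):

    using System;
    using System.IO;

    static class SimpleLog
    {
        static readonly object Gate = new object();
        const string Path = @"C:\Logs\app.log";   // hypothetical location

        public static void Write(string message)
        {
            var line = $"{DateTime.UtcNow:O}\t{message}{Environment.NewLine}";
            lock (Gate)                            // one process, many threads
                File.AppendAllText(Path, line);
        }
    }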
On the other hand, logging successfully to an SQL server will require that a lot more components work correctly, and there will be a lot more possible error situations where you won't be able to log the information you need, simply because the log infrastructure itself won't be working. And something worse: a failure in the logging procedure (like database corruption or a deadlock) will probably affect the performance of the application, and then you'll have a situation where a secondary component prevents the application from performing its primary function.
If you need to do a lot of analysis of the logs and you are not comfortable using text-based tools like grep, then keep the logs in text files and periodically import them into an SQL database. If the SQL import fails, you won't lose any log information, and it won't even affect the application's ability to function. Then you can do all the data analysis in the DB.
I think those are the main reasons why I don't do logging to a database, although I have done it in the past. Hope it helps.
We used a Log Database at my last job, and it was great.
We had stored procedures that would spit out overviews of general system health for different metrics, which I could load from a web page. We could also quickly spit out a trace for a given app over a given period, and if I wanted, it was easy to get that as a text file, if you really just like grep-ing files.
To ensure the logging system did not itself become a problem, there was of course a common code framework we used among different apps that handled writing to the log table. Part of that framework also included logging to a file, in case the problem was with the database itself, and part of it involved cycling the logs. As for the space issues, the log database is on a different backup schedule, and it's really not an issue. Space (not-backed-up) is cheap.
I think that addresses most of the concerns expressed elsewhere. It's all a matter of implementation. But if I stopped here it would still be a case of "not much worse", and that's a bad reason to go to the trouble of setting up DB logging. What I liked about this is that it allowed us to do some new things that would be much harder to do with flat files.
There were four main improvements over files. The first is the system overviews I've already mentioned. The second, and imo most important, was a check to see if any app was missing messages where we would normally expect to find them. That kind of thing is near-impossible to spot in traditional file logging unless you spend a lot of time each day reviewing mind-numbing logs for apps that just tell you everything's okay 99% of the time. It's amazing how freeing a view that shows missing log entries is. Most days we didn't need to look at most of the log files at all... something that would be dangerous and irresponsible without the database.
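A sketch of that kind of missing-message check, assuming a hypothetical Log table with AppName and LogDate columns and a list of apps that are expected to log at least once a day:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class LogHealthCheck
    {
        public static IEnumerable<string> AppsWithNoLogToday(
            string connectionString, IEnumerable<string> expectedApps)
        {
            const string sql =
                @"SELECT COUNT(*) FROM Log
                   WHERE AppName = @app AND LogDate >= @since";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (var app in expectedApps)
                {
                    using (var command = new SqlCommand(sql, connection))
                    {
                        command.Parameters.AddWithValue("@app", app);
                        command.Parameters.AddWithValue("@since", DateTime.Today);
                        if ((int)command.ExecuteScalar() == 0)
                            yield return app;   // silence where we expected a heartbeat
                    }
                }
            }
        }
    }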
That brings up the third improvement. We generated a single daily status e-mail, and it was the only thing we needed to review on days when everything ran normally. The e-mail showed errors and warnings. Missing logs were re-logged as warnings by the same db job that sent the e-mail, and missing the e-mail was a big deal. We could forward a particular log message to our bug tracker with one click, right from within the daily e-mail (it was html-formatted and pulled data from a web app).
The final improvement was that if we did want to follow a specific app more closely, say after making a change, we could subscribe to an RSS feed for that specific application until we were satisfied. It's harder to do that from a text file.
Where I'm at now, we rely a lot more on third-party tools and their logging abilities, and that means going back to a lot more manual review. I really miss the DB, and I'm contemplating writing a tool to read those logs and re-log them into a DB to get these abilities back.
Again, we did this with text files as a fallback, and it's the new abilities that really make the database worthwhile. If all you're gonna do is write to a DB and try to use it the same way you did the old text files, it adds unnecessary complexity and you may as well just use the old text files. It's the ability to build out the system for new features that makes it worthwhile.
Yeah, we do it here, and I can't stand it. One problem we have is that if there is a problem with the db (connection issues, corruption, etc.), all logging stops. My other big problem with it is that it's difficult to look through to trace problems. We also have problems with the log tables taking up too much space, and having to worry about truncating them when we move databases because our logs are so large.
I think it's clunky compared to log files. I find it difficult to see the "big picture" with it being stored in the database. I'll admit I'm a log file person; I like being able to open a text file and look through (regex) it instead of using sql to try and search for something.
The last place I worked we had log files of 100 meg plus. They're a little difficult to open, but if you have the right tool it's not that bad. We had a system for logging messages too: you could quickly look at the file and determine which set of log entries belonged to which process.
We've used SQL Server centralized logging before, and as the previous poster mentioned, the biggest problem was that interrupted connectivity to the database meant interrupted logging. I actually ended up adding a queuing routine to the logging that would try the DB first and write to a physical file if that failed. You'd just have to add code to that routine so that, on a successful log to the db, it checks whether any other entries are queued locally and writes those too.
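A rough sketch of that try-the-DB-first routine (the table, columns, and spool file path are hypothetical):

    using System;
    using System.Data.SqlClient;
    using System.IO;

    static class ResilientLogger
    {
        const string FallbackFile = @"C:\Logs\pending.log";   // hypothetical local spool

        public static void Log(string connectionString, string message)
        {
            try
            {
                WriteToDatabase(connectionString, message);
            }
            catch (SqlException)
            {
                File.AppendAllText(FallbackFile, message + Environment.NewLine);
                return;
            }
            FlushPending(connectionString);   // the DB is reachable, so drain the local spool
        }

        static void WriteToDatabase(string connectionString, string message)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "INSERT INTO Log (LogDate, Message) VALUES (@date, @message)", connection))
            {
                command.Parameters.AddWithValue("@date", DateTime.UtcNow);
                command.Parameters.AddWithValue("@message", message);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        static void FlushPending(string connectionString)
        {
            if (!File.Exists(FallbackFile)) return;
            try
            {
                foreach (var line in File.ReadAllLines(FallbackFile))
                    WriteToDatabase(connectionString, line);
                File.Delete(FallbackFile);
            }
            catch (SqlException)
            {
                // The database dropped out again; leave the remaining lines for next time.
            }
        }
    }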
I like having everything in a DB, as opposed to physical log files, but just because I like parsing it with reports I've written.
I think the problem you have with logging could be solved with logging to SQL, provided that you are able to split out the fields you are interested in into different columns. You can't treat the SQL database like a text field and expect it to be better; it won't be.
Once you get everything you're interested in logged into the columns you want, it's much easier to track the sequential actions of something by being able to isolate it by column. For instance, if you had an "entry" process, you'd log everything normally with the text "entry process" put into the "logtype" or "process" column. Then when you have problems with the entry process, a WHERE clause on that column isolates all the entry-process entries.
We do it in our organization in large volumes with SQL Server. In my opinion, writing to a database is better because of the search and filter capability. Performance-wise, 10 to 50 MB worth of data, kept for only 5 days, does not affect your application. Tracking transactions and users will be much easier compared to tracking them in a text file, since you can filter by transaction or user.
You mention that the files are read rarely. So decide whether it is worth putting in the development effort to build the logging framework. Compare the time you spend in a year searching the logs in log files against the time it will take to code and test. If you spend an hour or more a day searching logs, it is better to dump the logs into a database, which can drastically reduce the time spent solving issues.
If you spend less than an hour, you can use a text search tool like "SRSearch", a great tool I have used that searches multiple files in a folder and gives you the results as small snippets (like Google search results), where you click to open the file containing the hit. There are other text search tools available too. If the environment is Windows, Microsoft LogParser is also a good free tool that lets you query your files like a database, provided the files are written in a specific format.
Here are some additional pros and cons and the reason why I prefer log files instead of databases:
Space is not that cheap when using VPSs. Recovering space on live database systems is often a huge hassle, and you might have to shut down services while recovering space. If your logs are so important that you have to keep them for years (like we do), then this is a real problem. Remember that most databases do not recover space when you delete data; they simply re-use it - not much help if you are actually running out of space.
If you access the logs frequently and you have to pull daily reports from a database with one huge log table and millions and millions of records, then you will impact the performance of your database services while querying the data.
Log files can be created and older logs archived daily. Depending on the type of logs, massive amounts of space can be recovered by archiving. We save around 6x the space when we compress our logs, and in most cases you'll probably save much more.
Individual smaller log files can be compressed and transferred easily without impacting the server. Previously we had logs amounting to hundreds of GBs of data in a database. Moving such large databases between servers becomes a major hassle, especially because you have to shut down the database server while doing so. What I'm saying is that maintenance becomes a real pain the day you have to start moving large databases around.
Writing to log files is in general a lot faster than writing to a DB. Don't underestimate the speed of your operating system's file IO.
Log files only suck if you don't structure your logs properly. You may have to use additional tools and you may even have to develop your own to help process them, but in the end it will be worth it.
You could log to a comma- or tab-delimited text format, or enable your logs to be exported to CSV. When you need to read a log, import the CSV file into a table on your SQL server; then you can query it with standard SQL statements. To automate the process you could use SQL Server Integration Services.
I've been reading all the answers and they're great. But at a company I worked for, due to several restrictions and audit requirements, it was mandatory to log to a database. Anyway, we had several ways to log, and the solution was to install a pipeline where our programmers could connect and log to the database, a file, the console, or even forward logs to a port to be consumed by other applications.
This pipeline doesn't interrupt the normal process, and keeping a log file at the same time as you log to the database ensures you rarely lose a line.
I suggest you investigate log4net further; it's great for this.
http://logging.apache.org/log4net/
I could see it working well, provided you had the capability to filter what needs to be logged and when it needs to be logged. A log file (or table, such as it is) is useless if you can't find what you're looking for or contains unnecessary information.
Since your logs are read rarely, I would write them to files (better performance and reliability).
Then, if and only if you need to read them, I would import the log files into a database (better analysis).
Doing so, you get the advantages of both methods.