Theoretically, the end user should never see internal errors. But in practice, theory and practice differ, so the question is what to show the end user. For the totally non-technical user you want to show as little as possible ("click here to submit a bug report" kind of thing), but more advanced users will want to know whether there is a workaround, whether the bug has been known for a while, etc. So you want to include some sort of information about what's wrong as well.
The classic way to do this is either an assert with a filename:line-number or a stack trace with the same. This is good for the developer because it points them right at the problem; however, it has significant downsides for the user: it's very cryptic (i.e. unfriendly), and code changes change the error message (so Googling for the error only works for that exact version).
I have a program that I'm planning on writing where I want to address these issues. What I want is a way to attach a unique identity to every assert in such a way that editing the code around the assert won't alter it. (For example, if I cut/paste it into another file, I want the same information to be displayed.) Any ideas?
One tack I'm thinking of is to have an enumeration of the errors, but how do I make sure that each one is never used in more than one place?
(Note: For this question, I'm only looking at errors that are caused by coding errors. Not things that could legitimately happen like bad input. OTOH those errors may be of some interest to the community at large.)
(Note 2: The program in question would be a command line app running on the user's system. But again, that's just my situation.)
(Note 3: the target language is D and I'm very willing to dive into meta-programming. Answers for other languages more than welcome!)
(Note 4: I explicitly want to NOT use actual code locations but rather some kind of symbolic names for the errors. This is because if the code is altered in practically any way, code locations change.)
Interesting question. A solution I have used several times is this: if it's a fatal error (non-fatal errors should give the user a chance to correct the input, for example), we generate a file with a lot of relevant information: the request variables, headers, internal configuration, and a full backtrace for later debugging. We give the file a generated unique filename, with the time as a prefix.
For the user, we present a page which explains that an unrecoverable error has occurred, and ask that they include the filename as a reference if they would like to report the bug. It's a lot easier to debug with all this information from the context of the offending request.
In PHP the debug_backtrace() function is very useful for this. I'm sure there's an equivalent for your platform.
Also remember to send the relevant HTTP status header: probably HTTP/1.1 500 Internal Server Error
Given a sensible format of the error report file, it's also possible to analyze the errors that users have not reported.
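The same idea ports readily outside PHP. Here is a minimal Python sketch, assuming it is called from inside an except block with whatever context dict the caller has gathered (the function name and report layout are invented for illustration):

import os
import time
import traceback
import uuid

def write_error_report(context, report_dir="error_reports"):
    """Dump the current exception plus caller-supplied context to a
    uniquely named file; return the filename for the user to quote."""
    os.makedirs(report_dir, exist_ok=True)
    # Time prefix keeps reports sorted; the uuid suffix guarantees uniqueness.
    name = time.strftime("%Y%m%d-%H%M%S-") + uuid.uuid4().hex + ".txt"
    path = os.path.join(report_dir, name)
    with open(path, "w") as f:
        for key, value in context.items():
            f.write("%s: %r\n" % (key, value))
        f.write("\n--- backtrace ---\n")
        f.write(traceback.format_exc())
    return path

The returned filename is exactly what the error page asks the user to quote when reporting the bug.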
Write a script to grep your entire source tree for uses of these error codes, and then complain if there are duplicates. Run that script as part of your unit tests.
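For instance, a Python sketch of such a script. The ERR_* naming convention and the *.d glob are assumptions; substitute whatever convention your codebase uses. The nonzero exit code makes it easy to wire into a test runner:

import re
import sys
from collections import defaultdict
from pathlib import Path

# Assumed convention: error IDs look like ERR_SOMETHING.
CODE_PATTERN = re.compile(r"\bERR_[A-Z0-9_]+\b")

def find_duplicates(source_root):
    locations = defaultdict(list)
    for path in Path(source_root).rglob("*.d"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for code in CODE_PATTERN.findall(line):
                locations[code].append("%s:%d" % (path, lineno))
    return {c: locs for c, locs in locations.items() if len(locs) > 1}

if __name__ == "__main__":
    duplicates = find_duplicates(sys.argv[1] if len(sys.argv) > 1 else ".")
    for code, locs in sorted(duplicates.items()):
        print("%s used %d times: %s" % (code, len(locs), ", ".join(locs)))
    sys.exit(1 if duplicates else 0)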
I know nothing about your target language, but this is an interesting question that I have given some thought to and I wanted to add my two cents.
My feeling has always been that messages for hard errors and internal errors should be as useful as possible for the developer to identify the problem & fix it quickly. Most users won't even look at this error message, but the highly sophisticated end users (tech support people perhaps) will often get a pretty good idea what the problem is and even come up with novel workarounds by looking at highly detailed error messages. The key is to make those error messages detailed without being cryptic, and this is more an art than a science.
An example from a Windows program that uses an out-of-proc COM server: the main program tries to instantiate an object from the COM server and fails with the error message:
"WARNING: Unable to Instantiate
UtilityObject: Error 'Class Not
Registered' in 'CoCreateInstance'"
99% of users will see this and think it is written in Greek. A tech support person may quickly realize that they need to re-register the COM server. And the developer will know exactly what went wrong.
In order to associate some contextual information with the assertion, in my C++ code I will often use a simple string with the name of the method, or something else that makes it clear where the error occurred (I apologize for answering in a language you didn't ask about):
int someFunction()
{
    static const std::string loc = "someFunction";
    // ...
    if( somethingWentWrong )
    {
        WarningMessage(loc.c_str(), "Unable to Instantiate UtilityObject: "
                                    "Error 'Class Not Registered' in 'CoCreateInstance'");
    }
    // ...
}
...which generates:
WARNING [someFunction] : Unable to Instantiate UtilityObject: Error 'Class Not Registered' in 'CoCreateInstance'
If there is a problem, usually a programmer goes through the log manually and tries to identify the corresponding place in the source code. But is it possible to automate this process? Can a tool point to the lines in the source code that were potentially responsible for generating the fault?
So, for example:
If some problem shows up in the log file, then this automation tool should say that the problem occurred due to lines 30, 31, 32, 35, 38 in source file ABC.
Thanks!!
It depends on the language you are using.
In Java (and probably other JVM languages) this feature is built-in: Every exception that is thrown has a reference to the stack trace, including class, method and line number of every method involved. All you need to do is something like
exception.printStackTrace();
In C and C++, you can use the predefined macros/identifiers __LINE__ and __FUNCTION__ when throwing an exception or writing a log message, for example:

throw std::runtime_error(std::string("Error in ") + __FUNCTION__ + ", line " + std::to_string(__LINE__));

These expand to the current function name and the current line number.
If you are looking for a method that works with any language and any type of logging, there is no good solution. You could run a tool like grep over all source files to try to find matches. However, this will only work if the log messages appear as string literals in the source code at the position where the message is written. This is unlikely, because the messages are likely to contain variable values or constants defined somewhere else.
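To illustrate, a Python sketch of that grep-style search, matching a constant fragment of the message (the suffix list is arbitrary); it is subject to exactly the limitation just described:

import sys
from pathlib import Path

SOURCE_SUFFIXES = {".c", ".cpp", ".h", ".java", ".py"}

def locate_fragment(fragment, source_root="."):
    """Print file:line for every source line containing the fragment.
    Only useful if you pick a piece of the message with no variable parts."""
    for path in Path(source_root).rglob("*"):
        if path.suffix not in SOURCE_SUFFIXES:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, 1):
            if fragment in line:
                print("%s:%d: %s" % (path, lineno, line.strip()))

if __name__ == "__main__":
    locate_fragment(sys.argv[1])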
Assuming we are not talking about (unit) tests, because that is exactly what they do: show you where the problem is.
Then this automation tool should say that this problem occurred due to lines 30, 31, 32, 35, 38 in source file ABC
In my team we had a similar discussion, and what we came up with is a Top-5 most-likely-issues document (a playbook). From reading the logs on every failure, we noticed that most of the time there is a recurring pattern; in 8 out of 10 cases the issue followed one of those patterns. It is also possible to trace the latest changes (with help from Git), as sketched below. If your changes are small and frequent, this approach works quite well.
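A minimal sketch of that "trace the latest changes" step (the file paths would come from whichever module the failing log lines point at):

import subprocess

def recent_changes(paths, since="2.weeks"):
    """List recent commits touching the given files, newest first.
    Must be run from inside the git repository."""
    cmd = ["git", "log", "--oneline", "--since=" + since, "--"] + list(paths)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# e.g. print(recent_changes(["src/parser.d"])) after a parser failure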
Options:
1) When there is bad input, the app crashes and prints a message to the console saying what happened
2) When there is bad input, the app throws away the input and continues on as if nothing happened (though noting the problem in a separate log file).
While 2 may seem like the obvious solution, the app is an engine and framework for game development, so if a user is writing something and does something wrong, it may be beneficial for that problem to be immediately obvious (the app crashing) rather than ignored, with the user potentially forgetting to check the log to see if there were any problems (they may forget if the programmed behavior isn't very noticeable on screen, so they don't catch that it's missing).
There is no one-size-fits-all solution. It really depends on the situation and how bad the input is.
However, since you specifically mentioned this is for an engine or framework, then I would say it should never crash. It should raise exceptions or provide notable return codes or whatever is relevant for your environment, and then the application developer using your framework can decide how to handle. The framework itself should not make this decision for all apps that utilize the framework.
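A sketch of that division of responsibility (TextureError and load_texture are invented names standing in for whatever the engine actually exposes):

import logging

class TextureError(Exception):
    """Raised by the (hypothetical) engine when an asset can't be loaded."""

def load_texture(path):
    # Framework code: report the problem, never decide the policy.
    raise TextureError("could not load texture: %s" % path)

# Application code decides what a missing texture means for *this* game:
try:
    texture = load_texture("player.png")
except TextureError as err:
    logging.warning("%s; using placeholder", err)
    texture = "placeholder"   # or crash, or prompt the user: the app's call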
I would use exceptions if the language you are using allows them. Since your framework will be used by other developers, you shouldn't really constrain their approach; you should let them catch your exceptions (or errors) and manage what to do.
Generally speaking nothing should crash on user input. Whether the app can continue with the error logged or stop right there is something that is useful to be able to configure.
If it's too easy to ignore errors, people will just do so, instead of fixing them. On the other hand, sometimes an error is not something you can fix, or it's totally unrelated to what you're working on, and it's holding up your current task. So it depends a bit on who the user is.
Logging libraries often let you switch logs on and off by module and severity. It might be that you want something similar, to let users configure the "stop on error" behaviour for certain modules or only when above a certain level of severity.
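One possible shape for that, using Python's stock logging machinery (the module names and thresholds are made up; the point is only that the policy lives in one configurable table):

import logging

STOP_ON = {                       # per-module "stop on error" thresholds;
    "assets":  logging.CRITICAL,  # a real engine would read these from config
    "scripts": logging.ERROR,
}

class StopOnSeverity(logging.Handler):
    """Turn a log record into an exception when it meets the
    configured threshold for its module."""
    def emit(self, record):
        limit = STOP_ON.get(record.name.split(".")[0], logging.CRITICAL)
        if record.levelno >= limit:
            raise RuntimeError("fatal (%s): %s" % (record.name, record.getMessage()))

logging.basicConfig()                          # normal console logging
logging.getLogger().addHandler(StopOnSeverity())
logging.getLogger("assets").error("bad texture")   # logged, execution continues
logging.getLogger("scripts").error("bad script")   # raises, stops right there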
Personally I would avoid the crash approach and opt for (2). That said, make sure the error is detected and logged, and above all avoid any swallowing of errors (e.g. an empty catch block).
It is always helpful to have some kind of tracing/logging module, for instance later when you are doing performance tuning or general troubleshooting.
It depends on what the problem is. When I'm programming and writing error handling I use this as my mantra:
Is this exception really exceptional?
Meaning, is the error in input or whatever condition is "not normal" recoverable? In the case of a game, a File not Found exception on a texture could be recoverable and you could show a default texture so you know something broke.
However, if you have textures in a compressed file and you keep getting checksum errors, that would be an exceptional exception and I would crash the game with the details.
It really boils down to: can the application keep running without issue?
The one exception to this rule though (ha ha) is, if something is corrupted you can no longer trust your validation methods and you should crash as quickly as you can to prevent the corruption from spreading.
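Both halves of that rule in one Python sketch (the placeholder bytes and the checksum scheme are stand-ins for whatever the engine really uses):

import hashlib
import logging
import sys

PLACEHOLDER_TEXTURE = b"\x00" * 64   # stand-in pixel data

def load_texture(path, expected_sha256=None):
    try:
        with open(path, "rb") as f:
            data = f.read()
    except FileNotFoundError:
        # Recoverable: log it and fall back, so the breakage stays visible.
        logging.warning("missing texture %s, using placeholder", path)
        return PLACEHOLDER_TEXTURE
    if expected_sha256 and hashlib.sha256(data).hexdigest() != expected_sha256:
        # Exceptional: corruption means downstream data can't be trusted.
        sys.exit("corrupt texture %s: checksum mismatch" % path)
    return data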
What error checking do you do? What error checking is actually necessary? Do we really need to check if a file has saved successfully? Shouldn't it always work if it's tested and works ok from day one?
I find myself error checking for every little thing, and most of the time it feels like overkill. Things like checking to see if a file has been written to the file system successfully, or checking to see if a database statement failed... shouldn't these be things that either work or don't?
How much error checking do you do? Are there elements of error checking that you leave out because you trust that it'll just work?
I'm sure I remember reading somewhere something along the lines of "don't test for things that'll never really happen".....can't remember the source though.
So should everything that could possibly fail be checked for failure? Or should we just trust those simpler operations? For example, if we can open a file, should we check to see if reading each line failed or not? Perhaps it depends on the context within the application or the application itself.
It'd be interesting to hear what others do.
UPDATE: As a quick example. I save an object that represents an image in a gallery. I then save the image to disc. If the saving of the file fails I'll have no image to display even though the object thinks there is an image. I could check for failure of the image saving to disc and then delete the object, or alternatively wrap the image save in a transaction (unit of work) - but that can get expensive when using a db engine that uses table locking.
Thanks,
James.
If you run out of free space and try to write a file without checking for errors, your application will fail silently or with unhelpful messages. I hate it when I see this in other apps.
I'm not addressing the entire question, just this part:
So should everything that could possibly fail be checked for failure? Or should we just trust those simpler operations?
It seems to me that error checking is most important when the NEXT step matters. If failure to open a file will allow error messages to get permanently lost, then that is a problem. If the application will simply die and give the user an error, then I would consider that a different kind of problem. But silently dying, or silently hanging, is a problem that you should really do your best to code against. So whether something is a "simple operation" or not is irrelevant to me; it depends on what happens next, or what would be the result if it failed.
I generally follow these rules.
Excessively validate user input.
Validate public APIs.
Use Asserts that get compiled out of production code for everything else.
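A Python sketch of the second and third rules (parse_config is a hypothetical example): the public entry point raises real exceptions, while the internal helper uses asserts that python -O compiles out.

def parse_config(text):
    """Public API: validate explicitly, even in production builds."""
    if not isinstance(text, str):
        raise TypeError("parse_config expects a string")
    return _parse(text)

def _parse(text):
    # Internal helper: plain asserts, stripped when run with `python -O`.
    assert text is not None
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)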
Regarding your example...
I save an object that represents an image in a gallery. I then save the image to disc. If the saving of the file fails I'll have no image to display even though the object thinks there is an image. I could check for failure of the image saving to disc and then delete the object, or alternatively wrap the image save in a transaction (unit of work) - but that can get expensive when using a db engine that uses table locking.
In this case, I would recommend saving the image to disk first before saving the object. That way, if the image can't be saved, you don't have to try to roll back the gallery. In general, dependencies should get written to disk (or put in a database) first.
As for error checking... check for errors that make sense. If fopen() gives you a file ID and you don't get an error, then you don't generally need to check for fclose() on that file ID returning "invalid file ID". If, however, file opening and closing are disjoint tasks, it might be a good idea to check for that error.
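Applied to the gallery example, that ordering looks like this (gallery.save_record is a hypothetical persistence call); if the file write raises, the record is simply never created, so no transaction or table lock is needed:

def add_to_gallery(gallery, image_bytes, path):
    # Write the dependency (the file) first. If this raises, the object
    # is never recorded, and there is nothing to roll back.
    with open(path, "wb") as f:
        f.write(image_bytes)
    gallery.save_record(path)   # hypothetical persistence call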
This may not be the answer you are looking for, but there is only ever a 'right' answer when looked at in the full context of what you're trying to do.
If you're writing a prototype for internal use, and it doesn't matter if you get the odd error, then you're wasting time and company money by adding in the extra checking.
On the other hand, if you're writing production software for air traffic control, then the extra time to handle every conceivable error may be well spent.
I see it as a trade-off: extra time spent writing the error code versus the benefits of having handled that error if and when it occurs. Religiously handling every error is not necessarily optimal IMO.
Brian Kernighan was asked this question in a recent interview. I'll quote his reply:
Brian: I'm torn on this. Error-handling code tends to be bulky and very uninteresting and uninstructive, so it often gets in the way of learning and understanding the basic language constructs. At the same time, it's important to remind programmers that errors do happen and that their code has to be able to cope with errors.
My personal preference is to pretty much ignore error handling in the earlier parts of a tutorial, other than to mention that errors can happen, and similarly to ignore errors in most examples in reference manuals unless the point of some section is errors. But this can reinforce the unconscious belief that it's safe to ignore errors, which is always a bad idea.
I often leave off error handling in code examples here and on my own blog, and I've noticed that this is the general trend on Stack Overflow. Are we reinforcing bad habits? Should we spend more time polishing examples with error handling, or does it just get in the way of illustrating the point?
I think it might be an improvement if when posting example code we at least put comments in that say you should put error handling code in at certain spots. This might at least help somebody using that code to remember that they need to have error handling. This will keep the extra code for error handling out but will still reinforce the idea that there needs to be error handling code.
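Something along these lines (fetch_report is a made-up example); naming the specific exception in each comment is what makes the reminder actionable:

import json
import urllib.request

def fetch_report(url):
    body = urllib.request.urlopen(url).read()   # TODO: handle URLError and timeouts here
    data = json.loads(body)                     # TODO: handle JSONDecodeError here
    with open("report.txt", "w") as f:          # TODO: handle OSError (disk full) here
        f.write(str(data))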
Any provided example code will be copy-pasted into production code at least once, so be at your best when writing it.
Beyond the question of cluttering the code when you're demonstrating a coding point, I think the question becomes, how do you choose to handle the error in your example code?
That is to say, what do you do? What's fatal for one application is non-fatal for another. E.g. if I can't retrieve some info from a webserver (be it a 404 error or a non-responsive server), that may be fatal if you can't do anything without that data. But if that data is supplementary to what you're doing, then perhaps you can live without it.
So the above may point to simply logging the error. That's better than ignoring the error completely. But I think often the difficulty is in knowing how/when (and when not) to recover from an error. Perhaps that's a whole new tutorial in itself.
Examples should be illustrative. They should always show the point being made clearly with as little distraction as possible. Here's a meta-example:
Say we want to read a number from a file, add 3, and print it to the console. We'll need to demonstrate a few things.
infile = file("example.txt")
content = infile.read()
infile.close()
num = int(content)
print (3 + num)
Wordy, but correct. Except that there are a few things that could go wrong: first, what if the file doesn't exist? And what if it does exist but doesn't contain a number?
So we show how the errors would be handled.
try:
    infile = file("example.txt")
    content = infile.read()
    infile.close()
    num = int(content)
    print (3 + num)
except ValueError:
    print "Oops, the file didn't have a number."
except IOError:
    print "Oops, couldn't open the file for some reason."
After a few iterations we've shown how to handle the errors raised by, in this case, file handling and parsing. Of course we'd also like to show a more Pythonic way of expressing it, so now we drop the error handling, because that's not what we're demonstrating.
First, let's eliminate the unneeded extra variables.
infile = file("example.txt")
print (3 + int(infile.read()))
infile.close()
Since we're not writing to it, nor is it an expensive resource in a long-running process, it's actually safe to leave it open. It will close when the program terminates.
print ( 3 + int(file("example.txt").read()))
However, some might argue that's a bad habit and there's a nicer way to handle that issue. We can use a context manager to make it a little clearer. Of course we would explain that the file will close automatically at the end of a with block.
with file("example.txt") as infile:
    print (3 + int(infile.read()))
And then, now that we've expressed everything we wanted to, we show a complete example at the very end of the section. Also, we'll add some documentation.
# Open a file "example.txt", read a number out of it, add 3 to it and print
# it to the console.
try:
    with file("example.txt") as infile:
        print (3 + int(infile.read()))
except ValueError:   # in case int() can't understand what's in the file
    print "Oops, the file didn't have a number."
except IOError:      # in case the file didn't exist
    print "Oops, couldn't open the file for some reason."
This is actually the way I usually see guides expressed, and it works very well. I usually get frustrated when any part is missing.
I think the solution is somewhere in the middle. If you are defining a function to find element 'x' in list 'y', you do something like this:
function a(x, y)
{
    assert(isvalid(x))
    assert(isvalid(y))
    logic()
}
There's no need to be explicit about what makes an input valid, just that the reader should know that the logic assumes valid inputs.
Not often I disagree with BWK, but I think beginner examples especially should show error handling code, as this is something that beginners have great difficulty with. More experienced programmers can take the error handling as read.
One idea I had would be to include a line like the following in your example code somewhere:
DONT_FORGET_TO_ADD_ERROR_CHECKING(); // You have been warned!
All this does is prevent the code compiling "off the bat" for anyone who just blindly copies and pastes it (since obviously DONT_FORGET_TO_ADD_ERROR_CHECKING() is not defined anywhere). But it's also a hassle, and might be deemed rude.
I would say that it depends on the context. In a blog entry or text book, I would focus on the code to perform or demonstrate the desired functionality. I would probably give the obligatory nod to error handling, perhaps, even put in a check but stub the code with an ellipsis. In teaching, you can introduce a lot of confusion by including too much code that doesn't focus directly on the subject at hand. In SO, in particular, shorter (but complete) answers seem to be preferred so handling errors with "a wave of the hand" may be more appropriate in this context as well.
That said, if I made a code sample available for download, I would generally make it as complete as possible and include reasonable error handling. The idea here is that for learning the person can always go back to the tutorial/blog and use that to help understand the code as actually implemented.
In my personal experience, this is one of the issues that I have with how TDD is typically presented -- usually you only see the tests developed to check that the code succeeds in the main path of execution. I would like to see more TDD tutorials include developing tests for alternate (error) paths. This aspect of testing, I think, is the hardest to get a handle on since it requires you to think, not of what should happen, but of all the things that could go wrong.
Error handling is a paradigm by itself; it normally shouldn't be included in examples, since it seriously obscures the point the author is trying to get across.
If the author wants to pass knowledge about error handling in a specific domain or language then I would prefer as a reader to have a different chapter that outlines all the dominant paradigms of error handling and how this affects the rest of the chapters.
I don't think error handling should be in the example if it obscures the logic. But some error handling is just part of the idiom for doing certain things, and in those cases, include it.
Also, if you point out that error handling needs to be added, then for the love of deity also point out which errors need to be handled.
This is the most frustrating part of reading some examples. If you don't know what you are doing (which we have to assume of the reader of the example...), you don't know what errors to look for either. That turns the "add error handling" suggestion into "this example is useless".
One approach I've seen, notably in Advanced Programming in the UNIX Environment and UNIX Network Programming is to wrap calls with error checking code and then use the wrappers in the example code. For instance:
ssize_t Recv(...)
{
    ssize_t result;
    result = recv(...);
    /* error checking in full */
    return result;
}
then, in calling code:
Recv(...);
That way you get to show error handling while allowing the flow of calling code to be clear and concise.
No, unless the purpose of the example is to demonstrate an aspect of exception handling. This is a pet peeve of mine -- many examples try to demonstrate best practices and end up obscuring and complicating the example. I see this all the time in code examples that start by defining a bunch of interfaces and inheritance chains that aren't necessary for the example. A prime example of overcomplicating was a hands-on lab I did at TechEd last year. The lab was on Linq, but the sample code I was directed to write created a multi-tier application for no purpose.
Examples should start with the simplest possible code that demonstrates the point, then progress into real-world usage and best practices.
As an aside, when I've asked for code samples from job candidates almost all of them are careful to demonstrate their knowledge of exception handling:
public void DoSomethingCool()
{
    try
    {
        // do something cool
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
I've received hundreds of lines of code with every method like this. I've started to award bonus points for those that use throw; instead of throw ex; (a bare throw; re-throws and preserves the original stack trace, while throw ex; resets it).
Sample code need not include error handling but it should otherwise demonstrate proper secure coding techniques. Many web code snippets violate the OWASP Top ten.
I've been mostly working with VB.Net for over a year and just noticed this
Am I going insane, or does VB.Net NOT have an "Unreachable code" warning?
The following compiles quite happily with nary a warning or error, even though there is a Return between the two WriteLine calls.
Sub Main()
    Console.WriteLine("Hello World")
    Return
    Console.WriteLine("Unreachable code, will never run")
End Sub
Am I missing something? Is there some way to switch this on that I can't find.
If not, is there a good reason for its omission? (Or am I right in thinking this is a woeful state of affairs?)
Forgive the air of rant about this question, it's not a rant, I would like an answer.
Thanks
I've raised this on MS Connect, as bug# 428529
Update
I received the following from the VB Team's program manager:
Thanks for taking the time to report this issue. The compiler has limited support for this scenario, and as you point out we don't have warnings for unreachable code. There are some scenarios that our flow analysis algorithm does handle, such as the following:

Sub Main()
    Dim x As Integer
    Return
    x = 4
End Sub

In this case you'll get a warning that x has never been assigned. For the case you mentioned, however, we'll have to look at implementing that for a future release.
My guess is that it's an oversight in the compiler. Flow control is a very difficult problem to get correct in any language, but especially in a language like VB which has so many different flow control mechanisms. For instance,
Exceptions
Goto
On Error (Resume, Goto, etc ...)
Exit calls
If you feel strongly about this issue, please file a bug on Connect. We do take bugs filed via Connect very seriously and do our best to fix as many as possible.
They mention this in the following post:
https://stackoverflow.com/questions/210187/usage-statistics-c-versus-vb-net
See the last post.
I guess you could use FXCop to check your code instead or get a copy of Resharper from:
http://www.jetbrains.com/resharper/
I'd like to address Jared's answer.
Most of the issues he brings up are not problematic for data flow analysis. The one exception is "On Error / Resume"; those mess up data flow analysis pretty badly.
However, it's a pretty simple problem to mitigate:
If more than one "On Error" statement is used in a method, or the "Resume next" statement is used, you can just turn off data flow analysis and report a generic warning. A good one might be something like "On Error / Resume are deprecated, use exceptions instead." :)
In the common case of only one "On Error" statement and no "Resume" statement, you can pretty much do normal data flow analysis, and should get reasonable results from it.
The big problem is with the way the existing DFA code is implemented. It doesn't use a control flow graph, and so changing it ends up being really expensive. I think if you want to address these kinds of issues you really need to rip out the existing DFA code and replace it with something that uses a control flow graph.
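To make the control-flow-graph point concrete: once you have the graph, the reachability check itself is tiny. A toy Python sketch (building the CFG from VB source, with On Error and friends, is of course the expensive part the answer alludes to):

def reachable_blocks(cfg, entry=0):
    """cfg maps each basic-block id to a list of successor ids.
    Any block never reached from the entry is unreachable code."""
    seen, stack = set(), [entry]
    while stack:
        block = stack.pop()
        if block not in seen:
            seen.add(block)
            stack.extend(cfg[block])
    return seen

# Toy CFG for the Sub Main above: block 0 holds the first WriteLine and
# the Return (so it has no successors); block 1 holds the second WriteLine.
cfg = {0: [], 1: []}
print("unreachable:", set(cfg) - reachable_blocks(cfg))   # -> {1}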
AFAIK, you are correct that VB.NET does not give you a warning. C# does though.