Does libsodium have error codes?

Or any other mechanism to get the reason why a function is failing?
I'm calling crypto_pwhash and it's failing (returning -1) for no obvious reason; in fact, the exact same code worked fine with an older version of libsodium. Having an error code would make debugging this a lot easier.

Most functions do not return specific error codes when they fail.
History has shown that using different error codes for cryptographic operations can help attackers conduct devastating attacks; padding-oracle attacks are the classic example.
crypto_pwhash allocates memory. So one reason it may fail is that you are running out of memory. In that case, errno will be set appropriately.
Also check that the parameters are within the allowed ranges. Functions such as crypto_pwhash_bytes_min(), crypto_pwhash_opslimit_min() and crypto_pwhash_memlimit_min() can come in handy. For example, given the purpose of that function, the output must be at least 128 bits (16 bytes).
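By way of illustration, here is a minimal C++ sketch (the 32-byte output and the interactive limits are just example choices, not the only valid ones) that queries the documented ranges and checks errno before blaming the function:

#include <sodium.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main()
{
    if (sodium_init() < 0)
        return 1;                       // library could not initialize

    // Query the documented ranges first; a bare -1 from crypto_pwhash is
    // often just an out-of-range argument.
    std::printf("min output bytes: %zu\n", crypto_pwhash_bytes_min());
    std::printf("min opslimit:     %zu\n", crypto_pwhash_opslimit_min());
    std::printf("min memlimit:     %zu\n", crypto_pwhash_memlimit_min());

    unsigned char key[32];              // >= crypto_pwhash_bytes_min() (16)
    unsigned char salt[crypto_pwhash_SALTBYTES];
    randombytes_buf(salt, sizeof salt);
    const char *password = "correct horse battery staple";

    errno = 0;
    if (crypto_pwhash(key, sizeof key, password, std::strlen(password), salt,
                      crypto_pwhash_OPSLIMIT_INTERACTIVE,
                      crypto_pwhash_MEMLIMIT_INTERACTIVE,
                      crypto_pwhash_ALG_DEFAULT) != 0) {
        // An allocation failure shows up in errno; anything else is most
        // likely a parameter outside the ranges printed above.
        std::printf("crypto_pwhash failed: %s\n",
                    errno ? std::strerror(errno) : "check parameter ranges");
        return 1;
    }
    return 0;
}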

Related

Can't eliminate Access corruption

My firm's Access database has been having some serious problems recently. The errors we're getting seem like they indicate corruption -- here are the most common:
Error accessing file. Network connection may have been lost.
There was an error compiling this function.
No error, Access just crashes completely.
I've noticed that these errors only happen with a compiled database. If I decompile it, it works fine. If I take an uncompiled database and compile it, it works fine -- until the next time I try to open it. It appears that compiling the database into a .ACCDE file solves the problem, which is what I've been doing, but one person has reported that the issue returned for her, which has me very nervous.
I've tried exporting all of the objects in the database to text, starting with a brand new database, and importing them all again, but that doesn't solve the problem. Once I import all of the objects into the clean database, the problem comes back.
One last point that seems to be related, but I don't understand how. The problem started right around the time that I added some class modules to the database. These class modules use the VBA Implements keyword, in an effort to clean up my code by introducing some polymorphism. I don't know why this would cause the problem, but the timing seems to indicate a relationship.
I've been searching for an explanation, but haven't found one yet. Does anyone have any suggestions?
EDIT: The database includes a few references in addition to the standard ones:
Microsoft ActiveX Data Objects 2.8
Microsoft Office 12.0
Microsoft Scripting Runtime
Microsoft VBScript Regular Expressions 5.5
Some of the things I do and use when debugging Access:
Test my app in a number of VMs. You can use Hyper-V on Win8, VMWare or VirtualBox to set up various controlled test environments, testing on WinXP, Win7, Win8, 32-bit or 64-bit; anything that matches the range of OSes and bitness of your users.
I use vbWatchDog, a clever utility that only adds a few classes to your application (no external dependency) and allows you to trap errors at high level, and show you exactly where they happen. This is invaluable to catch and record strange errors, especially in the field.
If the issue appears isolated to one or a few users only, I would try to find out what is special about their config. If nothing seems out of place, I would completely uninstall all Office components and re-install them after scrubbing the registry for dangling keys and removing all traces of folders from the old install.
If your users do not need a complete version of Access, just use the free Access Runtime on their machine.
Make sure that you are using consistent versions of Access throughout: if you are using Access 2007, make sure your dev machine uses that same version, that all other users are on that version only, and that no components from Access 2010/2013 are present.
Try to ascertain whether the crash always happens around the same user actions. I use a simple log file that I write to when a debugging flag is set. The log file is a plain text file that I open, write to and close every time I log something (I don't keep it open, to make sure the data is flushed to the file; otherwise, when Access crashes, the log may only contain old data because the newest entries are still in the buffer). Things I log are, for instance, sensitive function entry/exit, SQL queries that I execute from code, form open/close, etc. (A minimal sketch of this open/write/close pattern follows this list.)
As a generality, make sure your app compiles without issue (I mean when doing Debug > Compile from the IDE). Any issue at this stage must be solved.
Make absolutely sure you close all open recordsets, preferably followed by setting their variables to Nothing. VBA is not as sensitive as it used to be about dangling references, but I find it good practice, especially when you cannot be absolutely sure that these references will be freed (for instance when doing things at module or class level, where the scope may be longer-lived than expected).
Similarly, make sure you properly destroy any COM object you create in your classes (and subs/functions). The Class_Terminate destructor must explicitly clean them all up. This also applies when closing forms if you created COM objects (you mentioned using ADO, scripting objects and regex). In general, keeping track of created objects is paramount: make sure you explicitly free all your objects by resetting them (for instance calling RemoveAll on a dictionary) and then assigning their references to Nothing.
Do not over-use On Error Resume Next or On Error Goto. I almost never use these except when absolutely necessary to recover from otherwise undetectable errors. These error-trapping constructs can hide a lot of errors that would otherwise show you that something is wrong with your code. I prefer programming defensively to handling exceptions.
For testing, disable your error trapping to see if it isn't hiding the cause of your crashes.
Make sure that the front-end is local to the user's machine. You mention they get their individual front-end from the network, but I'm not sure if they run it from there or if it is copied to their local machine. At any rate, it should be local, not in a remote folder.
You mention using SQL Server as a backend. Try to trace all the queries being executed. It's possible that the issue comes from communication with SQL Server, a corrupt driver, a security issue that prevents some queries from being run, a query returning unexpected data, etc. Watch the log files and event log on the server closely for strange errors, especially if they involve security.
Speaking of the event log, look for traces of the crash in your users' event logs. There may be information there, however cryptic.
If you use custom ribbon actions, make sure they are not causing issues. I have had strange problems with the ribbon over time. Log all function calls made by the ribbon.
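The open/write/close logging pattern mentioned above is language-agnostic; as a minimal sketch, here is the shape of it in C++ (a VBA version would follow the same open-append-close structure, and the file name is just an example):

#include <ctime>
#include <fstream>
#include <iomanip>
#include <string>

// Append one line to the log, opening and closing the file on every call
// so the message reaches disk before any crash can occur.
void LogLine(const std::string& msg)
{
    std::ofstream log("debug.log", std::ios::app);
    std::time_t t = std::time(nullptr);
    log << std::put_time(std::localtime(&t), "%Y-%m-%d %H:%M:%S")
        << '\t' << msg << '\n';
}   // the ofstream destructor closes (and flushes) the file here

Call it at function entry/exit, before each SQL statement, on form open/close, and so on; the last line in the file then brackets the crash site.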

gfortran optimization causes fortran do-variable loop error during runtime

I have written a Fortran routine that uses some legacy Fortran 77 code for finite elements. However, with a particular mesh, when the -O optimization flag is turned on, an important do-loop iterator is somehow being modified, even though Fortran prohibits this. I have compiled this code using gfortran 4.5 with -fcheck=do run-time checking enabled, and it verifies what I've noted above: a runtime error occurs only when optimizations are turned on, and it points directly to the do-iterator.
Using gdb on the optimized code (erratic as that is, with lines bouncing back and forth) seems to clearly indicate that the do-iterator somehow gets set back to zero, essentially giving a nice infinite loop.
Any suggestions as to how to hunt down and fix whatever is causing this bug would be greatly appreciated, as I'd like to make sure the whole project can be consistently compiled with the same flags.
You say that you use fcheck=do; why not go all the way and use fcheck=all? What you're seeing sounds like a typical case of memory corruption due to an array bounds violation, which fcheck=all can in some cases catch. Where the array bounds checking doesn't work that well is with implicit interfaces and incorrect bounds being passed; a solution here is to put your procedures into modules, allowing the compiler to check interfaces.
And, like Jonathan Dursi said, consider using a tool like valgrind.
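For intuition about what fcheck=all and valgrind are catching, here is a deliberately broken C++ analog (the mechanism, not the syntax, is what carries over to Fortran): an out-of-bounds store that, depending on stack layout and optimization level, can land on the loop counter itself:

#include <cstdio>

int main()
{
    int a[5];
    // Off-by-one: the final store lands one element past the end of a.
    // That is undefined behavior; depending on stack layout and optimization
    // level, the stray write can clobber i itself, resetting the loop --
    // the same "do-variable set back to zero" symptom described above.
    for (int i = 0; i <= 5; ++i)   // bug: should be i < 5
        a[i] = 0;
    std::printf("done\n");
    return 0;
}

Compiling this with g++ -fsanitize=address (or running it under valgrind) flags the stray store immediately; fcheck=all plays the analogous role on the Fortran side.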

Portland group FORTRAN pgf90 program fails when compiled with -fast, succeeds with -fast -Mnounroll

This code hummed along merrily for a long time, until we recently discovered an edge case where it fails silently-- no errors returned.
The failure is apparently pretty subtle. We can get the code to run uneventfully in the edge case by:
1) compiling with any set of options that includes -traceback or debug (-g or -gopt);
2) compiling with -fast -Mnounroll;
3) compiling with optimization <2;
4) adding WRITE statements to the code to determine the location of the failure.
In other words, most of the tools useful for debugging the failure actually make the failure disappear.
I am probing for any information on failures related to loop unrolling or other optimization, and their resolution.
Thank you all in advance.
I'm not familiar with pgf90 (heck, it's been 10 years since I used any Fortran), but here are some general suggestions for tracking down (potential) compiler bugs:
Simplify a reproducible case. I.e. try to reproduce the problem with a similar looking piece of code that has all the superfluous details removed. This is helpful because a) you'll be less hesitant to release the code publicly, and b) if someone attempts to diagnose the problem, it will be easier for them with less surrounding material.
Talk to the experts: if you have a support contract for pgf90, use it! There's a support request form on their site. If not, there's a User Forums section where you might be able to post your information; someone else may have a better workaround, or an employee there may be able to log your problem.
Double-check your code. Is it possible that you're relying on some sort of unspecified behavior? That is exactly the sort of thing that would cause your program to change behavior between optimization levels. I'm not saying compiler bugs are impossible, but it could just as well be a bug in your code.
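To make that last point concrete, here is a tiny C++ example of the kind of latent bug that only surfaces at higher optimization levels; the same principle applies to Fortran code that, say, reads an uninitialized variable or overruns an array:

#include <cstdio>

int main()
{
    int sum;                        // bug: never initialized
    for (int i = 0; i < 10; ++i)
        sum += i;                   // reads an indeterminate value
    // At -O0 the stack slot often happens to hold zero, so this "works";
    // at higher optimization levels the compiler may assume the
    // uninitialized read never happens and produce anything at all.
    std::printf("%d\n", sum);
    return 0;
}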
Hope that's helpful.

Oracle - Avoiding invalidation errors

I've noticed that whenever I have package-level constants (or any variables for that matter), whenever I recompile the package, persistent connections to the database get an error that "existing state of package body has been invalidated".
Is there a way to avoid this? Perhaps through synonyms? What are best-practices in this case?
In general, you should avoid replacing code in a live production instance.
If you really, really have to be live 24/7 and cannot schedule ANY (even tiny) downtime, you will have to avoid package-level variables, since recompiling such a package triggers the aforementioned error.
You could also catch the error in your client application and decide what to do. Maybe you have sufficient information to restart whatever the client was doing.
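As a hedged illustration of that client-side handling (execQuery and DbError are hypothetical stand-ins for your actual database layer; the relevant Oracle errors are ORA-04061/ORA-04068), a retry in C++ might look like this:

#include <cstdio>
#include <stdexcept>
#include <string>

// Hypothetical database layer: DbError carries the Oracle error code.
struct DbError : std::runtime_error {
    int code;
    DbError(int c, const std::string& m) : std::runtime_error(m), code(c) {}
};

// Stub that fails once with ORA-04068 to demonstrate the retry path.
void execQuery(const std::string& sql)
{
    static bool first = true;
    if (first) {
        first = false;
        throw DbError(4068, "existing state of packages has been discarded");
    }
    std::printf("executed: %s\n", sql.c_str());
}

// ORA-04068 means the package state is already gone, so a single retry
// (which re-initializes the package) is usually safe.
void execWithRetry(const std::string& sql)
{
    try {
        execQuery(sql);
    } catch (const DbError& e) {
        if (e.code != 4068) throw;  // a different error: propagate it
        execQuery(sql);             // state was reset; retry once
    }
}

int main()
{
    execWithRetry("select * from dual");
    return 0;
}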
See also
This thread on AskTom covers the same topic.
The problem is that if a session starts at 10:00am with the constant set to 'A', and at 11:00am you change it to 'B', the session 'blows up' in confusion.
The Serially_Reusable pragma may work for you. Basically, it won't preserve state between calls, so at 11:00am the session will simply start using 'B'. If you can be 100% sure that won't break your code, it can work, though re-initializing the constants whenever they are needed may add overhead.
Also look at calling DBMS_SESSION.MODIFY_PACKAGE_STATE or DBMS_SESSION.RESET_PACKAGE at appropriate intervals. That may reduce the number of errors you get.
You should also look at Edition-based redefinition in the new 11gR2. That's a more comprehensive solution, but I guess you'd need to upgrade for that.
In the past I've gotten around this by moving all state-related stuff into separate packages.
For example, if I had a package "CUSTOMER_PKG", I'd move all the global variables into a spec-only package called "CUSTOMER_GLOBALS_PKG".
Unfortunately this means exposing all private globals that were defined in the package body. We had to enforce a development standard so that CUSTOMER_GLOBALS_PKG was only allowed to be referred to by CUSTOMER_PKG.

internal error markers

Theoretically, the end user should never see internal errors. But in practice, theory and practice differ. So the question is what to show the end user. Now, for the totally non-technical user, you want to show as little as possible ("click here to submit a bug report" kind of things), but more advanced users will want to know if there is a workaround, if it's been known for a while, etc. So you want to include some sort of info about what's wrong as well.
The classic way to do this is either an assert with a filename:line-number, or a stack trace with the same. Now, this is good for the developer because it points him right at the problem; however, it has some significant downsides for the user, particularly that it's very cryptic (i.e. unfriendly) and that code changes change the error message (googling for the error only works for one specific version).
I have a program that I'm planning on writing where I want to address these issues. What I want is a way to attach a unique identity to every assert in such a way that editing the code around the assert won't alter it. (For example, if I cut/paste it to another file, I want the same information to be displayed.) Any ideas?
One tack I'm thinking of is to have an enumeration for the errors, but how do I make sure that each value is never used in more than one place?
(Note: For this question, I'm only looking at errors that are caused by coding errors. Not things that could legitimately happen like bad input. OTOH those errors may be of some interest to the community at large.)
(Note 2: The program in question would be a command line app running on the user's system. But again, that's just my situation.)
(Note 3: the target language is D and I'm very willing to dive into meta-programming. Answers for other languages more than welcome!)
(Note 4: I explicitly want to NOT use actual code locations but rather some kind of symbolic names for the errors. This is because if code is altered in practically any way, code locations change.)
Interesting question. A solution I have used several times is this: If it's a fatal error (non-fatal errors should give the user a chance to correct the input, for example), we generate a file with a lot of relevant information: The request variables, headers, internal configuration information and a full backtrace for later debugging. We store this in a file with a generated unique filename (and with the time as a prefix).
For the user, we present a page which explains that an unrecoverable error has occurred, and ask that they include the filename as a reference if they would like to report the bug. A lot easier to debug with all this information from the context of the offending request.
In PHP the debug_backtrace() function is very useful for this. I'm sure there's an equivalent for your platform.
Also remember to send the relevant HTTP headers, probably: HTTP/1.1 500 Internal Server Error
Given a sensible format of the error report file, it's also possible to analyze the errors that users have not reported.
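As a sketch of the same idea outside PHP (all names here are illustrative), generating the time-prefixed, uniquely named report file might look like this in C++:

#include <ctime>
#include <fstream>
#include <random>
#include <sstream>
#include <string>

// Write the report to a uniquely named, time-prefixed file and return the
// filename so it can be shown to the user as a reference number.
std::string WriteErrorReport(const std::string& details)
{
    std::ostringstream name;
    name << "error-" << std::time(nullptr) << '-'
         << std::random_device{}() << ".txt";

    std::ofstream out(name.str());
    out << details << '\n';   // request variables, config, backtrace, ...
    return name.str();
}

int main()
{
    const std::string file = WriteErrorReport("context and backtrace here");
    // Tell the user: "please quote this filename when reporting the bug".
    return 0;
}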
Write a script to grep your entire source tree for uses of these error codes, and then complain if there are duplicates. Run that script as part of your unit tests.
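A minimal sketch of such a checker in C++ (the ERR_ naming convention, and the policy that an identifier may appear only once, are assumptions to adapt; in practice you would exclude the file that defines the enum):

#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <regex>
#include <string>

// Scan a source tree for error identifiers of the (assumed) form ERR_xxx
// and report any identifier that appears more than once.  Run this from a
// unit-test target so duplicates fail the build.
int main(int argc, char** argv)
{
    const std::regex id_re("ERR_[A-Z0-9_]+");
    std::map<std::string, int> counts;

    const std::string root = argc > 1 ? argv[1] : ".";
    for (const auto& entry : std::filesystem::recursive_directory_iterator(root))
    {
        if (!entry.is_regular_file()) continue;
        std::ifstream in(entry.path());
        std::string line;
        while (std::getline(in, line)) {
            for (std::sregex_iterator it(line.begin(), line.end(), id_re), end;
                 it != end; ++it)
                ++counts[it->str()];
        }
    }

    int duplicates = 0;
    for (const auto& [id, n] : counts)
        if (n > 1) { std::cerr << id << " used " << n << " times\n"; ++duplicates; }
    return duplicates ? 1 : 0;   // non-zero exit fails the test run
}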
I know nothing about your target language, but this is an interesting question that I have given some thought to and I wanted to add my two cents.
My feeling has always been that messages for hard errors and internal errors should be as useful as possible for the developer to identify the problem & fix it quickly. Most users won't even look at this error message, but the highly sophisticated end users (tech support people perhaps) will often get a pretty good idea what the problem is and even come up with novel workarounds by looking at highly detailed error messages. The key is to make those error messages detailed without being cryptic, and this is more an art than a science.
An example from a Windows program that uses an out-of-proc COM server. If the main program tries to instantiate an object from the COM server and fails with the error message:
"WARNING: Unable to Instantiate
UtilityObject: Error 'Class Not
Registered' in 'CoCreateInstance'"
99% of users will see this and think it is written in Greek. A tech support person may quickly realize that they need ro re-register the COM server. And the developer will know exactly what went wrong.
In order to associate some contextual information with the assertion, in my C++ code I will often use a simple string with the name of the method, or something else that makes it clear where the error occurred (I apologize for answering in a language you didn't ask about):
int someFunction()
{
    static const std::string loc = "someFunction";
    // ...
    if( somethingWentWrong )
    {
        WarningMessage(loc.c_str(), "Unable to Instantiate UtilityObject: "
                                    "Error 'Class Not Registered' in 'CoCreateInstance'");
    }
    // ...
    return 0;
}
...which generates:
WARNING [someFunction]: Unable to Instantiate UtilityObject: Error 'Class Not Registered' in 'CoCreateInstance'