Proprietary handling/collecting of user defined errors - error-handling

I do not know how to implement a custom procedure for handling user-defined errors (which stop the routine/algorithm) or warning messages (which let the routine/algorithm proceed) without using exceptions (i.e. failwith ... standard system exceptions).
Example: I have a module with a series of functions that take a lot of input data, which must be checked and then used to calculate the thickness of a pressure vessel component.
The calculation procedure is complex and iterative, and there are a lot of checks to perform before getting a result. These checks can generate "user-defined errors" that stop the procedure/routine/algorithm, or "warning messages" that let it proceed.
I need to collect these errors and messages so they can be shown to the user later, at the end, in a dedicated form (WPF or Windows Forms).
Note: every time I read a book on F#, C#, or Visual Basic, or an article on the internet, I find the same philosophy/warning: raising system or user-defined exceptions should be limited as much as possible, because exceptions are for unmanageable, unpredictable exceptional events and put extra load on the system.
I do not know which handling philosophy to implement. I'm confused, and there are few sources available on the internet on this particular topic.
Currently I'm planning to adopt the approach described at "https://fsharpforfunandprofit.com/posts/recipe-part2/". It sounds good to me: complex, but good. I was not able to find any other references on this topic.
Question: are there other philosophies I can consider for creating this custom handling/collecting of user-defined errors? Any books or articles to read?
My decision will have a big impact on how I design and write my code: how to split the problem into several functions, whether to build an "engine" that runs the functions in sequence or composes them in different ways depending on the results, where to check for errors/warnings, and how to store the error and warning messages so I can understand what is going on, where the errors/warnings were generated, and by which function.
Many thanks in advance.

The F# way is to encode the errors in the types as much as possible. The easiest example is an option type, where you return None if the operation failed and Some value when it succeeded. Surprisingly, very often this is enough! If not, then you can encode the different types of errors AND a success "state" in a discriminated union, e.g.
[<Measure>]
type psi

type VesselPressureResult =
    | PressureOk
    | WarningApproachingLimit
    | ErrorOverLimitBy of int<psi>
and then you use pattern matching to "decide" what to do in each case. If you need to add more variants, e.g. ErrorTooLow, you add them to the DU and the compiler will "tell" you about all the places where you need to fix the logic.
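For example, here is a minimal sketch of that pattern matching, plus collecting the outcome of several checks so errors and warnings can be shown to the user at the end (it assumes the VesselPressureResult type above; the literal results stand in for real check functions):

// Assumes the psi measure and VesselPressureResult type defined above.

// Turn a single result into a message for the user.
let describe result =
    match result with
    | PressureOk -> "Pressure is within limits."
    | WarningApproachingLimit -> "Warning: pressure is approaching the design limit."
    | ErrorOverLimitBy excess -> sprintf "Error: pressure exceeds the limit by %d psi." (int excess)

// Collect the outcome of several checks, then separate blocking errors from warnings.
let partitionResults results =
    let errors   = results |> List.filter (function ErrorOverLimitBy _ -> true | _ -> false)
    let warnings = results |> List.filter (function WarningApproachingLimit -> true | _ -> false)
    errors, warnings

// Hypothetical usage: in real code each value would come from one check function.
let results = [ PressureOk; WarningApproachingLimit; ErrorOverLimitBy 25<psi> ]
let errors, warnings = partitionResults results
results |> List.iter (describe >> printfn "%s")

When the error details need to travel between composed functions, the same idea extends to the Result-based, "railway oriented" style from the fsharpforfunandprofit article mentioned in the question.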
Here is the perfect source with detailed information: https://fsharpforfunandprofit.com/series/designing-with-types.html

DynamicRecord - what is it?

When I run the following query:
match (n) return distinct labels(n);
I am seeing the following error:
DynamicRecord[396379,used=false,(0),type=-1,data=byte[],start=true,next=-1] not in use
Other people have asked how to deal with this situation. I am asking a different set of questions: what is a DynamicRecord in Neo4j? And, what can be done to avoid this type of error?
What is DynamicRecord?
The source for DynamicRecord is here. This is largely useless.
Anyhow, all I can gather is that:
It is a very low-level construct in the store kernel.
A multitude of tests use it in relation to consistency checking.
It appears to be a record that is dynamically created (meaning, at run time - not stored on disk), and it can represent different types of data (property blocks, schema, etc.)
This is also largely useless. I know.
What can be done to avoid this type of error?
This seems to be a very generic error, but most online resources (GitHub issues / SO questions) seem to relate to DB upgrades. Some point to changes in some constants used by DynamicRecord that yield data corruption after upgrades.
Based on that, I guess that the following steps could prevent such an error:
Backup your data.
Migrate your data properly when upgrading.
Do not use different versions of Neo4j against the same data.
You've guessed it: this is also rather useless, but I hope it is better than nothing.

How to quickly analyse the impact of a program change?

Lately I have needed to do an impact analysis on changing a DB column definition in a widely used table (like PRODUCT, USER, etc.). I find it a very time-consuming, boring, and difficult task. I would like to ask if there is any known methodology for doing this.
The question also applies to changes to the application, file system, search engine, etc. At first I thought this kind of functional relationship should be pre-documented or somehow kept track of, but then I realized that since everything can change, it would be impossible to do so.
I don't even know what tags this question should have; please help.
Sorry for my poor English.
Sure. One can technically at least know what code touches the DB column (reads or writes it), by determining program slices.
Methodology:
1: Find all SQL code elements in your sources.
2: Determine which ones touch the column in question. (Careful: a SELECT of all columns may touch your column, so you need to know the schema.)
3: Determine which variables read or write that column.
4: Follow those variables wherever they go, and determine the code and variables they affect; follow all of those variables too. (This amounts to computing a forward slice.)
5: Likewise, find the sources of the variables used to fill the column; follow them back to their code and sources, and follow those variables too. (This amounts to computing a backward slice.)
All the elements of the slices are potentially affecting/affected by the change. There may be conditions in the slice-selected code that are clearly outside the conditions expected by your new use case, and you can eliminate that code from consideration. Everything else in the slices you may have to inspect/modify to make your change.
Now, your change may affect some other code (e.g., a new place that uses the DB column, or that combines the value from the DB column with some other value). You'll want to inspect upstream and downstream slices of the code you change, too.
You can apply this process for any change you might make to the code base, not just DB columns.
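To make the idea of a slice concrete, here is a minimal hypothetical sketch in F# (the table, column, and function names are all invented, and the stubs stand in for real SQL access):

// Stub standing in for a real SQL read of PRODUCT.UNIT_PRICE.
let readUnitPrice () : decimal = 10.0m

// Stub standing in for a real SQL write of PRODUCT.UNIT_PRICE.
let writeUnitPrice (value: decimal) =
    printfn "UPDATE PRODUCT SET UNIT_PRICE = %M" value

// Forward slice: everything whose value depends on the column.
let unitPrice = readUnitPrice ()
let discounted = unitPrice * 0.9m                    // depends on UNIT_PRICE directly
let lineTotal (qty: int) = discounted * decimal qty  // depends on it transitively

// Backward slice: everything that feeds a value into the column.
let parseSupplierPrice (feedLine: string) = decimal feedLine
writeUnitPrice (parseSupplierPrice "12.50")

Changing the definition of UNIT_PRICE means reviewing every binding in both slices; anything outside them can be ignored.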
Manually this is not easy to do in a big code base, and it certainly isn't quick. There is some automation to do this for C and C++ code, but not much for other languages.
You can get a rough approximation by running test cases that involve your desired variable or action and inspecting the test coverage. (Your approximation gets better if you also run test cases that you are sure do NOT cover your desired variable or action, and eliminate all the code they cover.)
Ultimately this task cannot be fully automated or reduced to an algorithm; otherwise there would be a tool to preview refactored changes. The better you wrote the code in the beginning, the easier the task.
Let me explain how to reach the answer: isolation is the key. Mapping everything to object properties can help you automate your review.
I can give you an example. If you can manage to map your specific case to the below, it will save your life.
The OR/M change pattern
Like Hibernate or Entity Framework...
A change to a database column can simply be previewed by analysing what code uses a certain object's property. Since all DB columns are mapped to object properties, and assuming no code uses raw SQL, you are good to go for your estimates.
This is a very simple pattern for change management.
In order to reduce a file system/network or data file issue to the above pattern, you need other software patterns in place. I mean, if you can reduce a complex scenario to a change in your objects' properties, you can leverage your IDE to detect the changes for you, including code that needs a slight modification to compile or needs to be rewritten entirely.
If you want to manage a change in a remote service, wrap that service in an interface when you initially write your software, so you will only have to modify its implementation (see the F# sketch below)
If you want to manage a possible change in a data file format (e.g. a field length change in a positional format, or column reordering), write a service that maps that file to an object (e.g. using the BeanIO parser)
If you want to manage a possible change in file system paths, design your application to use more runtime variables
If you want to manage a possible change in cryptography algorithms, wrap them in services (e.g. HashService, CryptoService, SignService)
If you do the above, your manual requirements review will be easier; the overall task is still manual, but it can be aided by automated tools. You can try changing the name of a class's property and seeing its side effects in the compiler.
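As a concrete illustration of the interface-wrapping advice above, here is a minimal sketch in F# (the service name and its single operation are hypothetical): calling code depends only on the interface, so a change to the remote service stays inside the implementing type, and renaming a member makes the compiler list every call site to review.

type IRemoteTaxService =
    abstract member GetTaxRate : countryCode:string -> decimal

type HttpTaxService() =
    interface IRemoteTaxService with
        member this.GetTaxRate countryCode =
            // The real HTTP call is omitted; a placeholder rate is returned.
            if countryCode = "US" then 0.07m else 0.20m

// Callers depend only on the interface, so swapping the implementation
// (or absorbing a change in the remote API) is a local change.
let priceWithTax (service: IRemoteTaxService) country netPrice =
    netPrice * (1.0m + service.GetTaxRate country)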
Worst case
Obviously, if you need to change the name, type, and length of a specific column in a database, in software with plain SQL hardcoded and scattered in multiple places around the code, where (even worse) many tables have similar column names, and with no project documentation for a total of 10,000+ classes (I did say worst case, right?), you have no other way than manually exploring your project, using find tools but not relying on them.
And if you don't have a test plan, which is the document from which you can hope to originate a software test suite, it will be time to make one.
Just adding my 2 cents. I'm assuming you're working in a production environment so there's got to be some form of unit tests, integration tests and system tests already written.
If yes, then a good way to validate your changes is to run all these tests again and create any new tests which might be necessary.
And to state the obvious, do not integrate your code changes into the main production code base without running these tests.
Then again, changes which worked fine in a test environment may not work in a production environment.
Have some form of source code configuration management system like Subversion, GitHub, CVS, etc.
This enables you to roll back your changes.

Practice for handling errors in Objective C client code

I've seen Objective-C code using a collection of practices, ranging from passing an NSError pointer to report the execution's finish status, to using NSAssert, to using @throw, to relying on a delegate for a status code returned in a callback, to the old C approach of returning a boolean/int where 1 indicates success, and so on.
I can't identify a consistent pattern for how I should handle errors happening in my app running on client devices. For example, what would you recommend for the following cases:
The client attempts to access a network resource, and it times out / returns a 500?
An unexpected state that should never have happened is reached in a logical code section?
An attempt to write to disk fails? (Out of disk space, no permission, and so on)
Coming from Java, where server-side practice makes exceptions the weapon of choice, it seems that in Objective-C and C exceptions exist but are not encouraged. NSAssert seems harsh, as it will crash the application, which in most cases is not the optimal solution. So, I'd appreciate some best-practice advice.
Exceptions are used to indicate programmer error and/or non-recoverable errors only. Exceptions should not be used for flow control. NSAssert is more of a development tool. Use NSError for recoverable, user-addressable (or user-caused) errors.
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/ErrorHandlingCocoa/ErrorHandling/ErrorHandling.html
First we have to agree upon the terminology. I think there are two types of bad situations in a program: Errors and Exceptions.
Errors: An unwanted situation that can happen, but one you can recover from, with the program's memory in a consistent state, so that further execution can continue.
eg1. The client attempts to access a network resource, and it times out / returns a 500.
eg2. An attempt to write to disk fails (out of disk space, no permission, and so on).
Approach: Handle the error the way you want it handled. Show the user a message, log it to a file/console, and report it to your back end if you think it needs to be reported.
Exceptions: A state that should never have been reached and that may corrupt the memory state of the app in a way that prevents further execution.
This could happen at two levels.
Level 1: A logic error that you have overlooked and that is now causing the exception.
eg1. Accessing invalid array elements, i.e. an index-out-of-bounds exception.
In the above example, you unconsciously made a programming mistake. You couldn't have handled it because you overlooked it. This will crash the application.
Approach: Your approach here should be to identify the bug and fix it. In the process some end users will be affected, which is the case even with the most meticulously built software.
Level 2
As you are programming, you come across a situation that your business logic doesn’t permit.
eg1. An unexpected state that should never have happened is reached in a logical code section.
eg2. A database entry that you always expect to be present as per your business logic. For example, let's say you are making an app that requires a currency table with currency names, Unicode symbols for the currencies, and country names. The values from this table are to be shown as a drop-down in your UI. You may create the table upon first launch of the app and insert the values, or maybe you ship the table in the bundle, but you always expect it to contain the values you inserted. You make an SQLite SELECT query -> the SQLite execution succeeds -> but no values are returned. Business logic demands that values should be present, but for some mysterious reason they are absent.
This is where most confusion stems from, because sometimes you can recover from some of these situations by treating them as errors and displaying a message to the end user. This approach is wrong. This is an exception at the business-logic level, one that you should prevent from happening.
Approach: You should force-crash the app here as well, using NSAssert in both development and production mode. This is an aggressive approach and some end users will be affected, but I have read experts advising an aggressive approach to finding bugs and remedying them, rather than pretending they don't exist or thinking you have cleverly covered them all when all you have done is mask exceptions as errors through your "handling tactics". I believe the experts are right. Coming from Java, you may find it a very Objective-C-ish way of doing things.
Nils and Bools: You should understand that these are just facilitators for reaching the conclusion of whether you treat something as an error or an exception; they are not ends in themselves. While writing your methods with return values, you should define what a nil or a BOOL (NO) signifies and document your intent on top of the method. Sometimes it will be an error, sometimes an exception, sometimes a normal expected result. While dealing with Apple frameworks/third-party libraries, you should understand what their intent is when they return such values and decide how you want to treat them in your code.
Some other tips
Don't use @try/@catch, because experts say so. If you wish to challenge them, assuming that Apple put it there for a reason, Apple is always right, and the experts (including Apple employees themselves) have got it wrong all these years, you can go ahead and do it.
Formalize your approach to this -> stick to it -> document your intent. Don't rethink it over and over, because at some stage errors/exceptions/nils/bools stop being a matter of strict rules and become a matter of taste. You will be better off settling this once and for all and freeing up your time.

How to nest rules in HP Exstream?

I am using HP Exstream (formerly Dialogue from Exstream Software) version 5.0.x. It has a feature to define and save boolean expressions as "Rules".
It has been about 6 years since I used this, but does anybody know if you can define a rule in terms of another rule? There is a "VB-like" language in a popup window, so you are not forced to use the and/or, variable-relational expression form, but I don't have documentation handy. :-(
I would like to define a rule, "NotFoo", in terms of "Foo", instead of repeating the inverse of the whole thing. (Yes, that would be absurd, but that's probably what I will be forced to do, as in other examples of what I am maintaining.) Actually, nested rules would have many uses, if I can figure out how to do it.
I later found that what one needs to do in this case is create user defined "functions", which can reference each other (so long as you avoid indirect recursion). Then, use the functions to define the "rules" (and, don't even bother with "library" rules instead of "inline" rules, most of the time).
I'm late to the question, but since you had to answer it yourself: there is a better way to handle this.
The issue with using functions and testing the result is that there's a good chance that you're going to be adding unnecessary processing because the engine will run through the function every time it's called. Not a big issue with a simple function but it can easily become a problem if the function is complex, especially if it's called in several places.
Depending on the timing of the function (you didn't say whether it runs at the run level, at the customer level, or for particular documents), it's often better to have the function set a user Boolean variable to store the result; then, in your library rules, you can just check the value of the variable without having to run through the function every time.

Should stored procedures check for potential issues when executing?

Let me explain this question a bit :)
I'm writing a bunch of stored procedures for a new product.
They will only ever be called by the c# application, written by the developers who are following the same tech spec I've been given.
I can't go into the real tech spec, so I'll give a close-enough example:
In the tech spec, we have to store file data in a couple of proprietary zip files, with a database storing the names and locations of each file within a zip (e.g. one database for each zip file).
Now, let's say that this tech spec states that, to perform "Operation A", the following steps must be done:
1: Calculate the space requirements of the file to be added.
2: Get a list of zip files and their database connection strings (call stored proc "GetZips").
3: Find a suitable location within the zip file to store the file (call stored proc "GetSuitableFileLocation" against each database connection, until a suitable one is found).
4: Step 3 will provide you with a start/end point within the zip to add your file.
Call the "AllocateLocationToFile" stored proc, passing in these values, then add your file to the zip.
OK, so the question is: should "AllocateLocationToFile" re-check that the specified start/end points are still "free", and if not, raise an exception?
There was a bit of a discussion about this in the office, and whilst I believe it should check and raise, others believe that it should not, as there is no need due to the developer calling "GetSuitableFileLocation" immediately beforehand.
Can I ask for some valued opinions?
Generally, it is better to be as safe as possible. Calling code should never rely on external code (the SPs are kind of external). The idea is that you cannot predict what will happen in the future: new people join the company, the SPs are handed over to another team, and so on.
Personally, I think the fact that B() is called right after A() doesn't guarantee anything. Changing this, for whatever reason, is not something to be considered impossible.
A team should never make decisions based on "we are going to maintain this, no problem at all", because they might get fired, the company may sell the product, and so on.
My suggestion is to do the checking, profile the code, and if it really is a bottleneck, remove it, but write down somewhere that THIS CAN BREAK!
Given that you're manipulating files, with all the potential havoc this can create, I'd say in this scenario the risk (damage component) is high enough to be cautious.
And Svetlozar's right: what if great success does cause re-use, or other added-on applications? Not everyone may be as well-behaved as your team is right now.
One reason why it might be a good idea involves race conditions: is it possible that two users could call the process at the same time and get the same values? Please at least test this scenario with the currently designed process.
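To illustrate why the defensive check pays off for the caller, here is a hedged sketch (all names are hypothetical, and it is written in F# rather than the C# the spec mentions, with stubs standing in for the stored-procedure calls): if "AllocateLocationToFile" reports that the range was taken in the meantime, the caller can simply ask for a new location and retry instead of corrupting the zip.

// Hypothetical outcome of the AllocateLocationToFile stored procedure:
// either the range is confirmed, or another caller claimed it first.
type AllocationResult =
    | Allocated
    | RangeAlreadyTaken

// Stubs standing in for the real stored-procedure calls.
let getSuitableFileLocation (size: int) : int64 * int64 = 0L, int64 size
let allocateLocationToFile (startPos: int64, endPos: int64) : AllocationResult = Allocated

// Retry loop: a failed check caused by a race with another caller just
// triggers another round of GetSuitableFileLocation + AllocateLocationToFile.
let rec allocateWithRetry size attemptsLeft =
    if attemptsLeft = 0 then failwith "Could not allocate a location in the zip"
    else
        let startPos, endPos = getSuitableFileLocation size
        match allocateLocationToFile (startPos, endPos) with
        | Allocated -> startPos, endPos
        | RangeAlreadyTaken -> allocateWithRetry size (attemptsLeft - 1)

let startPos, endPos = allocateWithRetry 1024 3
printfn "File goes at bytes %d-%d" startPos endPos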