I'm working on a project, and we've been having trouble with a bug that seems to randomly update all records of a model in the database when it shouldn't. Looking at the logs, we can see SQL UPDATE statements being issued without WHERE clauses, which are obviously the culprit, but we don't know why it's happening. We don't call update_all anywhere.
Our best guess is that a single model is being passed somewhere that expects a collection, causing the unexpected behaviour, but so far we've been unable to find where this happens.
Are there any other common mistakes that may be causing this behavior? Any help would be greatly appreciated.
For some reason I only recently found out about unique constraints for Core Data. It looks way cleaner than the alternative (doing a fetch first, then inserting the missing entities in the designated context), so I decided to refactor all my existing persistence code.
If I got it right, the gist of it is to always insert a new entity and, as long as I have a proper merge policy, saving the context will take care of uniqueness, and in a more efficient way. The problem is that every time I save a context with the inserted entity I get an NSCoreDataConstraintViolationException, though no error. When I do a fetch to make sure that:
there is indeed only one instance with the unique field, and
other changes to this entity were applied,
everything seems to be okay. But I'm still concerned about this exception, since I save often and therefore hit it quite often, a few times per second in some cases.
My project is in Objective-C, and I know exceptions are expensive there, so I'm wondering whether I'm missing something.
Here is a sample project with this issue (just a few lines of code, be sure to add an exception breakpoint)
NSMergeByPropertyObjectTrumpMergePolicy and constraints are not useful tools and should never be used. The correct way to manage uniqueness is with a fetch before the insert, as it appears you have already been doing.
Let's start with why the only correct merge policy is NSErrorMergePolicy. You should only be writing to Core Data in one synchronous way (performBackgroundTask is not enough; you also need an operation queue). If you have two performBackgroundTask blocks running at the same time and they contradict each other, then you will lose data. A merge policy answers the question "which data would you like to lose?" The correct answer is "don't lose my data!", which is NSErrorMergePolicy.
The same issue happens when you have a constraint. Let's say you have an entity with a unique constraint on the phone number, and you try to insert another entity with the same phone number. What would you like to happen? It depends on what exactly the data is. It might be two different people, and the phone numbers should be made different (perhaps one was lacking an area code); or it might be one person, and the data should be merged; or you might have a constraint on a uniqueID, and the number should just be incremented. But at the database level it doesn't know. It always just does a merge. It will silently lose data.
You can create a custom NSMergePolicy and inspect NSConstraintConflict to decide what to do. But in practice you'd have to think about every edit to the database and what each change means, which can be very hard outside the context of writing that change. In other words, the problem with constraints and merge policies is that they run at the wrong level of your application to deal with the problem effectively.
Using constraints with a merge policy of NSErrorMergePolicy is OK, as it is a way to find problems in your app (as long as you are monitoring crashes and fixing them). But you still need to do the fetch before the insert to make sure the error doesn't happen.
If you want to clean up the code, just have one place where you create your objects, something like objectWithId:createIfNeed:inContext:, which does the fetch and create.
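A minimal sketch of that helper might look like the following. The entity name "Person", the attribute "uid", and the method name itself are assumptions for illustration; adapt them to your model, and call it only from the context's own queue (inside performBlock: / performBlockAndWait:).

```objc
// Hypothetical fetch-then-create helper; entity and attribute names are
// placeholders. Must be called on the context's queue.
- (NSManagedObject *)objectWithId:(NSString *)uid
                     createIfNeed:(BOOL)create
                        inContext:(NSManagedObjectContext *)context
{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Person"];
    request.predicate = [NSPredicate predicateWithFormat:@"uid == %@", uid];
    request.fetchLimit = 1;

    NSError *error = nil;
    NSManagedObject *object = [[context executeFetchRequest:request
                                                      error:&error] firstObject];
    if (object || !create) {
        return object; // existing object, or nil if we were not asked to create
    }

    object = [NSEntityDescription insertNewObjectForEntityForName:@"Person"
                                           inManagedObjectContext:context];
    [object setValue:uid forKey:@"uid"];
    return object;
}
```

Because every creation goes through this one method, the fetch-before-insert rule is enforced in a single place instead of at every call site.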
When I run the following query:
match (n) return distinct labels(n);
I am seeing the following error:
DynamicRecord[396379,used=false,(0),type=-1,data=byte[],start=true,next=-1] not in use
Other people have asked how to deal with this situation. I am asking a different set of questions: what is a DynamicRecord in Neo4j? And, what can be done to avoid this type of error?
What is a DynamicRecord?
The source for DynamicRecord is here. This is largely useless.
Anyhow, all I can gather is that:
It is a very low-level construct in the store kernel.
A multitude of tests use it in relation to consistency checking.
It appears to be a record that is dynamically created (meaning, at run time, not stored on disk), and it can represent different types of data (property blocks, schema, etc.).
This is also largely useless. I know.
What can be done to avoid this type of error?
This seems to be a very generic error, but most online resources (GitHub issues / SO questions) relate it to DB upgrades. Some point to changes in constants used by DynamicRecord that yield data corruption after upgrades.
Based on that, I guess that the following steps could prevent such errors:
Back up your data.
Migrate your data properly when upgrading.
Do not use different versions of Neo4j against the same data.
You've guessed it: this is also rather useless, but I hope it is better than nothing.
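The backup and verification steps above can be sketched with neo4j-admin. These invocations assume a recent 3.x/4.x install; the database name and paths are placeholders, and online backup (as opposed to the offline dump shown here) requires Enterprise Edition:

```shell
# Stop the database first: dump works on an offline store (Community Edition).
neo4j stop
neo4j-admin dump --database=graph.db --to=/backups/graph.dump

# Verify the store before and after an upgrade.
neo4j-admin check-consistency --database=graph.db
```

Running the consistency check before pointing a new Neo4j version at an old store is the cheapest way to catch exactly this class of corruption early.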
We have several large and small Access databases that support users across the state. Recently, I've been trying to add error checking to these databases, since many were originally created without it. However, some of these databases have a lot of functions, and adding error checking to each one seems overly tedious.
I basically just want the database to send me an email with a description of the error and object/function on which it occurred. Is there an easier way to do this than to add error checking to each function?
While not exactly an answer (or, should I say, not the answer you want), my response to this question holds true in this case as well. Either way, I don't think you can just throw some generic function into your code, or avoid editing every pre-existing function. If proper error trapping is built into your application, you never have to worry about it.
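What you can do is keep the per-procedure handlers tiny and centralize the reporting. This is a sketch, not a drop-in solution: the recipient address, the LogError name, and the use of DoCmd.SendObject (which goes through the default mail client) are all assumptions to adapt to your environment.

```vba
' Hypothetical central reporter: every procedure's handler calls this.
Public Sub LogError(procName As String, errNumber As Long, errDescription As String)
    Dim body As String
    body = "Error " & errNumber & " in " & procName & ": " & errDescription
    ' Sends a plain-text email via the default mail client; False = don't
    ' open the message for editing first.
    DoCmd.SendObject acSendNoObject, , , "admin@example.com", , , _
        "Access error report", body, False
End Sub

' Each procedure still needs its own (now one-line) handler:
Public Sub SomeProcedure()
    On Error GoTo ErrHandler
    ' ... actual work ...
    Exit Sub
ErrHandler:
    LogError "SomeProcedure", Err.Number, Err.Description
End Sub
```

The tedious part (adding `On Error GoTo` to every procedure) remains, but the handler body shrinks to a single call, and the email/logging logic lives in exactly one place.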
I wonder if there's a way to prevent the creation of objects that contain the old ANSI join syntax, maybe with server triggers. Can anyone help me?
You can create a DDL trigger and mine the EVENTDATA() XML for the content of the proc. If you can detect the old syntax using some fancy string parsing (maybe looking for commas between known table names, or for *= or =*), then you can roll back the creation of the proc or function.
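A rough sketch of that trigger, assuming SQL Server 2008+ (for the inline DECLARE). The pattern match is deliberately crude: it will also flag *= or =* inside string literals or comments, so refine it for your code base before rolling back anything in production.

```sql
CREATE TRIGGER trg_BlockLegacyJoins ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, CREATE_FUNCTION, ALTER_FUNCTION
AS
BEGIN
    -- Pull the full statement text out of the EVENTDATA() XML.
    DECLARE @sql NVARCHAR(MAX) =
        EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
                          'NVARCHAR(MAX)');

    IF @sql LIKE '%*=%' OR @sql LIKE '%=*%'
    BEGIN
        RAISERROR ('Old-style outer join syntax (*= / =*) is not allowed.', 16, 1);
        ROLLBACK;  -- undo the CREATE/ALTER
    END
END;
```

The trigger fires after the DDL statement but inside its transaction, so ROLLBACK discards the offending object entirely.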
First reaction - code reviews and a decent QA process!
I've had some success looking at sys.syscomments.text. A simple WHERE text LIKE '%*=%' should do. Be aware that long SQL strings may be split across multiple rows. I realise this won't prevent objects getting in there in the first place, but then DDL triggers won't tell you how big your current problem is.
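On SQL Server 2005 and later, sys.sql_modules stores each object's full definition in a single nvarchar(max) column, which sidesteps the row-splitting caveat with syscomments. A sketch of the audit query (same crude pattern match as before):

```sql
-- Find objects whose definition still contains old-style outer joins.
-- sys.sql_modules.definition is never split across rows.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%*=%'
   OR m.definition LIKE '%=*%';
```

Run this once to size the existing problem, then (optionally) add the DDL trigger to stop new occurrences.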
Although I fully understand your effort, I believe this type of action is the wrong way to get where you want. First of all, you might get into serious trouble with your boss and, depending on where you work, get fired.
Second, as stated before, do code reviews and explain why the old syntax is bad. You have to have a decent reason why one should avoid the *= stuff; 'because you don't like it' is not a feasible argument. In fact, there are quite a few articles around showing that certain problems are just not solvable using this type of syntax.
Third, you might want to point out that separating join conditions (JOIN ... ON ...) from filtering conditions (WHERE ...) increases readability and might therefore be an option.
Collect your arguments and convince your colleagues rather than punishing them in quite an arrogant way.
Has anyone had, or does anyone know of, any issues with iBATIS submitting several duplicate queries?
We have been seeing (intermittently) the same SQL statement being executed up to 5 times. Originally we thought we were dealing with overzealous, click-happy users, but we freeze the submit buttons to prevent multiple clicks and we still get this.
I seem to remember reading somewhere that this is a bug in iBATIS, but I can't find it again (or maybe I dreamt it; my dreams are often weird).
Thanks
Are you talking about this?
https://issues.apache.org/jira/browse/IBATIS-369
They say the bug is fixed.