NHibernate SQLite on Mono concurrency problem: Database file is locked

I have an application I'm porting from MSSQL and .NET to SQLite and Mono. I'm using NHibernate, FluentNHibernate, NHibernateLINQ and SQLite.
When I test the application with only one person connected everything works OK, but the moment somebody else starts using the app it breaks and throws an SQLite Exception saying "Database File is Locked".
I know that SQLite locks the database when a write is being made and returns a busy status. I'm guessing maybe I haven't configured NHibernate correctly to handle this, but I can't find any information online that has helped so far. It's like I'm the only person with this problem. Am I? Any ideas?
Thanks

I suspect your problem is not FNH per se.
I had a similar problem in my FNH / SQLite project (.NET, not Mono). It mysteriously fixed itself after I refactored some of the session management code for other reasons. (The main changes were to use Transactions for ALL DB access, and to ensure all Transaction and Session objects were properly Disposed).
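For what it's worth, the pattern I ended up with looks roughly like this (just a sketch; sessionFactory, Widget and widgetId are stand-ins for your own factory and entities):

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // every read and write goes through a transaction, even simple queries
    var widget = session.Get<Widget>(widgetId);
    widget.Name = "updated";
    session.SaveOrUpdate(widget);
    tx.Commit();   // commit (or roll back) before the session is disposed
}

The using blocks guarantee that both the transaction and the session are disposed even when an exception is thrown, which is exactly what was missing in my earlier code.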
This link discusses a similar problem that was caused by a missing Dispose. I suspect that may have been my problem, but am not sure. (Just keeping my fingers crossed that the problem does not reappear!).
Another good source of things to try is Database file is inexplicably locked during SQLite commit

Related

Database options in Mac application

I am currently using a MySQL database on the server and SQLite locally in my application. I am facing lots of problems with the local database: sometimes the database locks, sometimes updates fail, etc.
Is there any option other than Core Data and SQLite for storing data locally in a Mac application?
Just replacing one database with another is unlikely to fix locking problems. SQLite and CoreData (which often uses SQLite) are solid technologies that are used by many, if not most, Mac applications.
Without more information about the locks you're experiencing, I'd suggest that it's more likely that you're using the database incorrectly. Are you trying to access the database from multiple threads? Are you correctly closing prepared statements?
You could continue to use Core Data but use a different Storage Backend, e.g., the Binary one (which should be less problematic concerning locking and transaction safety in general). See the Core Data Programming Guide for the different kinds of Persistent Store Coordinators.
Regarding Stephen Darlington's answer: I don't quite agree. Depending on the concurrency control in SQLite (most probably optimistic concurrency control), a transaction may be aborted because it modifies data that is currently "in use". This may happen at the granularity of a database or even a single relation. Using a "less transaction safe" backend like the binary store may already be sufficient in this case. This puts the burden of managing consistency on you, but if you are sure your transactions don't conflict you should be fine.
I'd say that Core Data is the best one. Why don't you use this?
You can also save things in XML; this is quite handy for small amounts of data.
You can save a dictionary, for example, like this:
[dict writeToFile:@"YOUR_PATH" atomically:NO];
You can read the dictionary back with:
NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:@"YOUR_PATH"];

Getting Started with Fluent NHibernate

I'm trying to get into using Fluent NHibernate, and I have a couple of questions. I'm finding the documentation to be lacking.
I understand that Fluent NHibernate / NHibernate allows you to auto-generate a database schema. Do people usually only do this for Test/Dev databases? Or is that OK to do for a production database? If it's ok for production, how do you make sure that you're not blowing away production data every time you run your app?
Once the database schema is already created, and you have production data, when new tables/columns/etc. need to be added to the Test and/or Production database, do people allow NHibernate to do this, or should this be done manually?
Is there any REALLY GOOD documentation on Fluent NHibernate? (Please don't point me to the wiki, because when I followed along with the "Your first project" code and built it myself, I got run-time errors because they forgot to tell you to add a reference. Not cool.)
Thanks,
Andy
I've been using Fluent NHibernate Automapping for a few months now. I'm by no means an expert, but can take a stab at your questions...
FNH Automapping does indeed create DB schemas from POCO classes, including lists of other objects (this was the reason I chose NHibernate in the first place).
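To give a feel for what that looks like, here's a rough sketch of the automapping plus schema-generation setup I mean (the SQLite file name and the Product entity are placeholders, not code from any particular project):

using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Tool.hbm2ddl;

// Plain POCO; FNH automapping derives the table and columns from it.
public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        return Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.UsingFile("app.sqlite"))
            .Mappings(m => m.AutoMappings.Add(
                AutoMap.AssemblyOf<Product>()
                       .Where(t => t.Namespace == typeof(Product).Namespace)))
            // Careful: Create() drops and recreates the schema, so only point this
            // at test/dev databases, never at one holding production data.
            .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(false, true))
            .BuildSessionFactory();
    }
}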
When you change schemas, you have to rerun the automapping, which unfortunately drops and recreates the whole database. In my case that's not a big problem, because I'm importing existing binary data files, so I just re-import my data every time the schema changes. I've read that there's some data migration support available with NHibernate, but I have no experience with it. (BTW, SubSonic will do data migration, but its automapping functionality is far more rudimentary - at least it was when I evaluated it a few months ago.)
FNH documentation is one of my pet peeves - they have not even added IntelliSense help on the method names, etc. (But they get really huffy when you point that out - ask me how I know!) I've made a couple of edits to the wiki when I could, but there's so much more that could be done there. The best approach is to start with a working example (e.g. this one from Nikola Malovic) and post questions to the support forum if (when!) you run into trouble. In general, I've found the FNH community pretty helpful, and have been able to work through all my difficulties. They've also fixed a couple of bugs I've found.
Overall, using FNH has been a huge win for my project - highly recommended!
I don't use Fluent, but I can help with classic NHibernate.
Yes, creating the schema this way is recommended even for production use (SchemaExport). When you do this is up to you; for instance, you could create the database from an installer. You shouldn't drop existing databases, but that is a decision for your application.
I don't understand this question. Do you mean you need to upgrade an existing database to a new schema? Unfortunately, that is something you need to implement yourself; NH can't do much about it, because it is very specific to your data and the changes you made. There is also a Schema Update feature, but it's not recommended for production use.
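For reference, the "Schema Update" feature mentioned above is NHibernate's SchemaUpdate tool. A minimal sketch (assuming you already have a built Configuration with mappings and a connection string) would be:

using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public static class SchemaUpgrader
{
    public static void Upgrade(Configuration cfg)
    {
        // First argument: echo the generated script to the console; second: actually
        // apply it to the database. SchemaUpdate only adds missing tables and columns;
        // it never drops or alters existing ones, which is part of why it isn't
        // trusted for production upgrades.
        new SchemaUpdate(cfg).Execute(false, true);
    }
}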
I don't use Fluent, so I can't help here.

Is there something like a "long running offline transaction" for NHibernate or any other ORM?

In essence this is a followup of this question. I'm beginning to feel that I should give up the whole idea, but I'll give it one more shot.
What I want is pretty much like a DB transaction. It should track my changes to the DB and then in the end allow me to either commit or rollback them. If I insert an object, I should get it back in my next (appropriate) SELECT query. If I delete it, future SELECT queries should not return it. Etc.
But there is one catch - this transaction would be very long running. It would start when the user opened a form (I'm talking about Windows Forms here), and the commit/rollback would happen when the user closed it (with OK/Cancel). So it could take anywhere between seconds and days. This requirement rules out a standard DB transaction, because that would lock the tables/rows it touched and other users wouldn't be able to use the system. Also, the transaction should not commit ANY changes to the DB until it is really committed, so if one user makes some changes, others don't see them until the OK button is hit. This prevents errors in case the computer crashes or is disconnected from the network.
I'm quite OK if the solution puts constraints on my model (I'm using MSSQL 2008, btw). I can design the DB/code any way I like. I'm also fine with the idea that a commit could fail because someone already modified one of the objects my transaction touched.
Is there anything like this? I looked at NHibernate.Burrow, but I'm not sure that that's the thing I want.
Added: It's the very beginning of the project so I'm not tied to NHibernate. I started out with it but I can still change easily.
As far as I can judge, DataObjects.Net supports exactly this concept via DisconnectedState. The feature is very new (released just a few weeks ago); its preliminary documentation is here. The WPF sample for DataObjects.Net uses it for UI transactions.
I'm not sure if it is mentioned there, but DisconnectedState, as well as its OperationLog, can be serialized, so its cached state can survive even application restarts.
I don't think anyone will implement this in the NHibernate core, because nobody would use it. The view model is not the same model as the domain model.
This is not a direct answer to your question, but this is the sort of thing that WWF (gotta love the name) set out to solve (not that it did, at least as of v3.5).
If you're still following this, Ayende Rahien has an article in MSDN magazine http://msdn.microsoft.com/en-us/magazine/ee819139.aspx about the session per form/presenter approach. Also take a look at chapter 5 of the NHibernate book http://manning.com/kuate/ (sample chapter available), the one on transactions and conversations.
As long as you delay the flush/transaction until the OK button is pressed, it should work (depending on the flush mode). But complete isolation is a difficult ask, because when dealing with multiple entities your session will still see data that has been committed by other sessions in the meantime. You will have to think about handling such issues.
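To make the "delay the flush until OK" idea concrete, here is a rough sketch of a session-per-form conversation (the Customer entity and the form shape are just illustrative; the important parts are the flush mode and where Flush/Commit happen):

using System;
using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class EditCustomerConversation : IDisposable
{
    private readonly ISession _session;
    public Customer Customer { get; private set; }

    public EditCustomerConversation(ISessionFactory factory, int customerId)
    {
        _session = factory.OpenSession();
        // Nothing is written to the database until we flush explicitly.
        // (FlushMode.Never is called FlushMode.Manual in later NHibernate versions.)
        _session.FlushMode = FlushMode.Never;
        Customer = _session.Get<Customer>(customerId);
    }

    public void Ok()
    {
        // Only now do we open a (short) database transaction and push all pending changes.
        using (ITransaction tx = _session.BeginTransaction())
        {
            _session.Flush();
            tx.Commit();
        }
    }

    public void Cancel()
    {
        // Never flushing means the pending changes are simply discarded with the session.
    }

    public void Dispose()
    {
        _session.Dispose();
    }
}

Other users don't see anything until Ok() runs, but as noted above this gives you no protection against someone else changing the same rows in the meantime; you'd typically map a version column and let NHibernate's optimistic concurrency check fail the commit in that case.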
As an aside, how would you deal with this situation if you don't use NHibernate?
EclipseLink has limited support for such a beast. They call it "Conforming", and they implemented it in the "unit of work" context.

How can we migrate to using VS2005's Database Projects?

At my company, our current method of updating the database is to connect using the Server Explorer in VS2005, then modify the stored procedures by opening them and editing. The devs here seem to enjoy that "write and save it like it's code" mentality. It is pretty convenient, how it automatically turns Create into Alter and runs the scripts against the existing database when we need to tweak something.
Recently, this bit us pretty hard during a server crash when we lost a lot of changes that hadn't been backed up. I'm pushing to move our SQL development where it belongs: into DB projects, so we can put them into SVN along with the other code. The alternative is nightly backups of the database.
I don't know much about DB projects, though, or what the workflow with them looks like. I'm afraid that if I can't offer something of similar utility to their current model, they just won't switch. Any thoughts on maintaining our current working model while switching over to DB projects?
If the developers make the rules (and your post sounds like they do), you can only proceed if the new workflow is "better" to them. Being a developer myself, I think that's the way it should be. I've seen some non-developers think up pretty nonsensical development processes, and force them on the developers to everyone's detriment.
If you're thinking about VS DB projects, you'd first test whether VS DB actually works with your database. If it does, you'd have to make a big change in process: the "true" copy of the database now lives in the VS DB project instead of on the database server.
Another way out is to back up the development server regularly. If you take a full backup daily and a transaction log backup every hour, it becomes very hard to lose a significant amount of work.
Or create a scheduled job that writes the entire database definition to a text file. (Script all objects in database.) These files are usually very small, so you can keep a long backlog.
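If you go that route, SQL Server's SMO library is one way to script everything out on a schedule. A rough sketch (server name, database name and output path are placeholders; error handling omitted):

using System.IO;
using System.Text;
using Microsoft.SqlServer.Management.Smo;

public static class SchemaDump
{
    public static void Run()
    {
        var server = new Server("localhost");
        var db = server.Databases["MyDatabase"];
        var scripter = new Scripter(server);
        scripter.Options.ScriptDrops = false;
        scripter.Options.IncludeHeaders = true;

        var sb = new StringBuilder();
        foreach (Table table in db.Tables)
        {
            if (table.IsSystemObject) continue;   // skip built-in tables
            foreach (string line in scripter.Script(new SqlSmoObject[] { table }))
                sb.AppendLine(line);
        }
        foreach (StoredProcedure proc in db.StoredProcedures)
        {
            if (proc.IsSystemObject) continue;    // skip built-in procedures
            foreach (string line in scripter.Script(new SqlSmoObject[] { proc }))
                sb.AppendLine(line);
        }

        File.WriteAllText(@"C:\backups\MyDatabase-schema.sql", sb.ToString());
    }
}

Run it from a scheduled task (or a SQL Agent job) and commit the resulting file to SVN, and you get a cheap, diffable history of the schema.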
Many respected bloggers seem to think storing database definitions in SVN is a good idea. See this Coding Horror post, or the related Stack Overflow question How do I version my MS SQL database in SVN.
Talk it over with the developers and see what you can agree on.

Getting "database is locked" error messages from Trac

Wondering if anyone has gotten the infamous "database is locked" error from Trac and how you solved it. It is starting to occur more and more often for us. Will we really have to bite the bullet and migrate to a different DB backend, or is there another way?
See these two Trac bug entries for more info:
http://trac.edgewall.org/ticket/3446
http://trac.edgewall.org/ticket/3503
Edit 1: Thanks for the answer and the recommendation, which seems to confirm our suspicion that migrating to PostgreSQL is the best option. The SQLite to PostgreSQL script is here: http://trac-hacks.org/wiki/SqliteToPgScript Here goes nothing...
Edit 2 (solved): The migration went pretty smoothly and I expect we won't be seeing the locks any more. The speed isn't noticeably better as far as I can tell, but at least the locks are gone. Thanks!
That's a problem with the current SQLite adapter. There are scripts to migrate to PostgreSQL, and I can really recommend that; PostgreSQL is a lot speedier for Trac.
They just fixed this on Sept 10, and the fix will be in 0.11.6.
http://trac.edgewall.org/ticket/3446#comment:39
I don't think this is 100% fixed just yet. We experience this error a couple dozen times a day. In our case, we have 30+ people updating Trac constantly as we use it for tracking pretty much everything, and not just bugs. From ticket #3446:
Quite obviously, this is [...] due to our database access patterns... which currently limit our concurrency to at most one write access each few seconds