What is the preferred way to lock a Metakit database from TCL?
Basically I have an application that reads from and writes to a Metakit database file, and I'm worried that if the user has two instances of my application running, they could corrupt the database (by doing two writes at the same time).
I know I could use sockets to communicate between instances, but I'd rather not, as that could conflict with existing software on the PC. I also thought about using a lock file, but if the process crashes the database would be permanently locked. I know on UNIX it's common to write the PID to a lock file, but I don't know how to tell if a process is still running in a cross-platform way. My primary target is Windows.
I'm not totally opposed to adding some native code (compiled C binary), but thought there might be a better pure-TCL way first.
Thanks!
Using a lock file is not so unusual, even though a crash can leave the database looking permanently locked. There are some simple workarounds for this issue:
Place the lock file in a location that gets cleaned up after a reboot, such as /tmp on Unix.
If the application starts up and finds that the lock is still present, tell the user what's going on and suggest how to fix it: offer to delete the lock file (after issuing a sufficient warning), or tell the user where it is so that they can take the risk of deleting it themselves.
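A more robust variant writes the owning PID into the lock file and checks whether that process is still alive before honoring the lock. The idea is language-independent; here is a hedged sketch in C# for concreteness (in Tcl, `file exists` and `pid` cover the file side, and on Windows you could shell out to `tasklist` to check liveness). The path and names are illustrative:

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class InstanceLock
{
    // Illustrative location; any per-user temp path works.
    static readonly string LockPath =
        Path.Combine(Path.GetTempPath(), "myapp.lock");

    // True if we now own the lock; false if a live instance already does.
    public static bool TryAcquire()
    {
        if (File.Exists(LockPath))
        {
            int pid;
            if (int.TryParse(File.ReadAllText(LockPath).Trim(), out pid))
            {
                try
                {
                    Process.GetProcessById(pid); // throws if no such process
                    return false;                // owner still alive
                }
                catch (ArgumentException)
                {
                    // Owner is gone: the lock is stale and can be reclaimed.
                    // (Caveat: PIDs can be reused, so this is a heuristic.)
                }
            }
            File.Delete(LockPath);
        }
        // Caveat: there is a small race between the check above and this write;
        // holding the file open with FileShare.None would close that gap.
        File.WriteAllText(LockPath, Process.GetCurrentProcess().Id.ToString());
        return true;
    }

    public static void Release()
    {
        if (File.Exists(LockPath)) File.Delete(LockPath);
    }
}
```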
The description from the Metakit page on commits says that there are a number of access modes that can be used to allow multiple readers concurrent with a single writer (probably using locking under the hood). Standard Metakit is reasonably careful about not leaving its files in an inconsistent state, so I expect that it will handle all that side of things fairly well. What I don't know is how the features discussed on that page are exposed to Tcl scripts.
We have an Access database, split front end / back end, that frequently corrupts (for various reasons: bad architecture, bad code, too many users, a slow network, and so on; we're currently rewriting it in SQL Server). Usually when this happens, admins email all staff and ask them to exit the front end and close any other files that link to the back end (e.g. some Excel reports have connections to it) so we can open the DB and have it auto-compact/repair when it detects it's in a corrupted state.
Getting users out of the system is like herding cats, and we're not always able to do it in good time. I've implemented a form timer event that checks a third DB for a flag indicating whether the front end should remain open; the idea is that we set the flag to false when we need the front ends closed. This seems to be effective, but I can't say for sure that it works on 100% of installs, as sometimes we still find that the file is locked. That may be because of the Excel reports, though those are rarely viewed.
Lately, rather than waiting for people to exit, I've been making a copy of the corrupted DB before opening it, repairing the copy, and then overwriting the original with the copied file when the repair is finished. This seems to work well.
My question is: what are the issues, if any, around overwriting the back end? Could it cause any problems that aren't immediately apparent? I've been doing this for a few weeks now and haven't noticed any issues, but it just feels like bad practice. For example, what happens to the lock file? Does that get updated automatically?
Not much, because the worst case already happened.
When copying an open Access database, there's a risk that open transactions and half-finished writes corrupt the database, fail to get committed, or trash the VBA project part of the database.
But the file is already corrupted, and if you have an open transaction when closing it, you will receive an error message (that's also a plausible reason why your form timer isn't always effective).
I don't have statistics, but I think cleanly closing a corrupt database, which writes transactions to it, is likely more dangerous than just copying it with transactions open, as those writes might overwrite things they shouldn't.
Of course, never do this when your database is not corrupted, since copying it with open transactions can cause corruption in a database that wasn't already corrupted.
Of course, if you have intermittent corruption, the real issue is preventing it from occurring in the first place, and the bug Gord Thompson referred to in a comment (this one) is very common and likely the culprit. Things can go fine 20 times in a row, until it goes wrong once and you have to revert to a backup, possibly losing data (or worse, you have no backup and lose much more data).
I will have multiple computers on the same network with the same C# application running, connecting to a SQL database.
I am wondering if I need to use Service Broker to ensure that if I update record A in table B on machine 1, the change is pushed to machine 2. I have seen applications use messaging servers to accomplish this before, but I was wondering why it is necessary; surely if they connect to the same database, any changes made from one machine will be reflected on the other?
Thanks :)
This is mostly about consistency and latency.
If your applications always perform atomic operations on the database, and they always read whatever they need with no caching, everything will be consistent.
In practice, this is seldom the case. There are plenty of hidden opportunities for caching, like when you have an edit form: it holds the values the entity had before you started the edit process, but what if someone modified those in the meantime? You'd just overwrite their changes with your data.
Solving this involves a bunch of architectural decisions. Different scenarios require different approaches; one common one for the edit-form case is sketched below.
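A minimal sketch of optimistic concurrency, assuming a SQL Server table with a rowversion column (the table, column, and connection-string names here are hypothetical): the UPDATE only succeeds if the row is still the version the form originally read.

```csharp
using System.Data.SqlClient;

static class CustomerStore
{
    // 'originalVersion' is the rowversion value read when the edit form opened.
    public static bool SaveName(string connStr, int id, string name,
                                byte[] originalVersion)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            @"UPDATE dbo.Customers
                 SET Name = @name
               WHERE Id = @id AND RowVersion = @version", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@version", originalVersion);
            conn.Open();
            // Zero rows affected means someone changed the row since we read it.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}
```

If this returns false, reload the row and let the user merge instead of silently clobbering the other edit.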
Once data is committed in the database, everyone reading it will see the same thing - but only if they actually get around to reading it, and the two reads aren't separated by another commit.
Update notifications are mostly concerned with invalidating caches, and perhaps some push-style processing (e.g. IM client might show you a popup saying you got a new message). However, SQL Server notifications are not reliable - there is no guarantee that you'll get the notification, and even less so that you'll get it in time. This means that to ensure consistency, you must not depend on the cached data, and you have to force an invalidation once in a while anyway, even if you didn't get a change notification.
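For completeness, this is roughly what SqlDependency (which rides on Service Broker) looks like in ADO.NET; treat the notification strictly as a cache-invalidation hint. A hedged sketch, with a hypothetical table and connection string (the query must follow the notification rules, e.g. two-part table names and no SELECT *):

```csharp
using System;
using System.Data.SqlClient;

static class MessageWatcher
{
    public static void Watch(string connStr)
    {
        SqlDependency.Start(connStr); // requires Service Broker enabled on the DB

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Id, Body FROM dbo.Messages", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += (s, e) =>
            {
                // Fires at most once per subscription; re-subscribe and
                // re-read to stay current. Treat it as a hint, not a guarantee.
                Console.WriteLine("Change hint: {0}/{1}", e.Type, e.Info);
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* refresh the local cache */ }
            }
        }
    }
}
```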
Remember, even if you're using a database that's close enough to ACID, the strictest guarantees are usually not the default settings (mostly for performance and availability reasons). You need to understand what kind of guarantees you're actually getting, and how to write code to handle that. Even the most perfect ACID database isn't going to help your consistency if your application introduces the inconsistencies itself :)
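For example, in ADO.NET the isolation level is something you opt into per transaction; if you rely on repeatable reads, you have to ask for them. A hedged sketch with a hypothetical table and connection string:

```csharp
using System.Data;
using System.Data.SqlClient;

static class AccountStore
{
    public static object ReadBalance(string connStr, int accountId)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // The default is ReadCommitted; stronger guarantees must be requested.
            using (var tx = conn.BeginTransaction(IsolationLevel.RepeatableRead))
            using (var cmd = new SqlCommand(
                "SELECT Balance FROM dbo.Accounts WHERE Id = @id", conn, tx))
            {
                cmd.Parameters.AddWithValue("@id", accountId);
                object balance = cmd.ExecuteScalar();
                tx.Commit();
                return balance;
            }
        }
    }
}
```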
I am working on a VB.NET application.
Due to the nature of the application, one module has to monitor the database (a SQLite DB) every second. This monitoring is done by a simple SELECT statement that checks data against some condition.
Other modules perform SELECT, INSERT, and UPDATE statements on the same SQLite DB.
Concurrent SELECT statements work fine on SQLite, but I'm having a hard time figuring out why it is not allowing the INSERTs and UPDATEs.
I understand it's a file-based lock, but is there any way to get this done?
Each module (in fact, each statement) opens and closes its own connection to the DB.
I've restricted the user to running a single statement at a time through the GUI design.
Any help will be appreciated.
If your database file is not on a network, you could allow a certain amount of read/write concurrency by enabling WAL mode.
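A hedged sketch of turning WAL on, assuming the System.Data.SQLite ADO.NET provider (shown in C#; the VB.NET equivalent is direct). journal_mode=WAL is persistent, so it only needs to be run once per database file, while busy_timeout is per-connection:

```csharp
using System.Data.SQLite;

static class Wal
{
    public static void Enable(string dbPath)
    {
        using (var conn = new SQLiteConnection("Data Source=" + dbPath))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // WAL lets readers proceed concurrently with a single writer.
                cmd.CommandText = "PRAGMA journal_mode=WAL;";
                cmd.ExecuteNonQuery();
                // Per-connection: make writers wait up to 5s for a lock instead
                // of failing immediately with 'database is locked'.
                cmd.CommandText = "PRAGMA busy_timeout=5000;";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```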
But perhaps you should use only a single connection and do your own synchronization for all DB accesses.
You can use a locking mechanism to make sure the database works correctly in a multithreaded situation. Since your application is read-intensive, according to what you said, you can consider using a ReaderWriterLock or ReaderWriterLockSlim (refer to here and here for more details).
If you have only one database, creating just one instance of the lock is fine; if you have more than one database, assign each of them its own lock. Every time you do a read or write, enter the lock (EnterReadLock()/EnterWriteLock() for ReaderWriterLockSlim, AcquireReaderLock()/AcquireWriterLock() for ReaderWriterLock) before the operation, and exit the lock once you're done. Place the lock release in a finally clause so it happens even if the operation throws.
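A minimal sketch with ReaderWriterLockSlim (the wrapper class and delegate shapes are hypothetical; note this coordinates threads within a single process only):

```csharp
using System;
using System.Threading;

static class DbGate
{
    // One lock per database file.
    static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();

    // Wrap every SELECT: many readers may run concurrently.
    public static T Read<T>(Func<T> query)
    {
        Lock.EnterReadLock();
        try { return query(); }
        finally { Lock.ExitReadLock(); }
    }

    // Wrap every INSERT/UPDATE: writers get exclusive access.
    public static void Write(Action statement)
    {
        Lock.EnterWriteLock();
        try { statement(); }
        finally { Lock.ExitWriteLock(); }
    }
}
```

The monitoring module would then call DbGate.Read(...) each second, while the other modules wrap their INSERTs and UPDATEs in DbGate.Write(...).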
The strategy above is what we use in our production applications. Using a single connection on a single thread is not as good an option in your case, because you have to take performance into account.
I am working on a VB.NET project that grabs data from an Access DB file. All the code snippets I have come across open the DB, do stuff, and close it for each operation. I currently have the DB open for the entire time the application is running and only close it when the application exits.
My question is: Is there a benefit to opening the connection to the DB file for each operation instead of keeping it open for the duration the application is running?
In many database systems it is good practice to keep connections open only when they are in use, since an open connection consumes resources in the database. It is also considered good practice for your code to have as little knowledge as possible about the concrete database in use (for instance, by programming against interfaces such as IDbConnection rather than concrete types such as OleDbConnection).
For this reason, it could be a good idea to follow the practice of keeping the connection open as little as possible, regardless of whether it makes a difference for the particular database that you use. It simply makes your code more portable, and it increases your chance of not getting it wrong if your next project happens to target a system where keeping connections open is a bad thing to do.
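A hedged sketch of both habits at once: code against the abstract ADO.NET interfaces, open the connection as late as possible, and let using close it immediately (the factory, query, and table name are hypothetical):

```csharp
using System;
using System.Data;

static class OrderQueries
{
    // The factory hides the concrete provider (OleDb, SqlClient, ...).
    public static int CountOrders(Func<IDbConnection> connectionFactory)
    {
        using (IDbConnection conn = connectionFactory())
        using (IDbCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM Orders";
            conn.Open(); // open as late as possible...
            return Convert.ToInt32(cmd.ExecuteScalar());
        } // ...and close as early as possible
    }
}
```

A caller against Access would pass () => new OleDbConnection(connStr); swapping providers later doesn't touch this code.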
So, your question should really be reversed: is there anything to gain by keeping the connection open?
There is no benefit with the Jet/ACE database engine. The cost of creating the LDB file (the record-locking file) is very high. You could perhaps avoid that by opening the file exclusively (if it's a single user), but my sense is that opening exclusively is slower than opening multi-user.
The advice about opening and closing connections is based on the assumption of a database server being on the other end of the connection. If you consider how that works, the cost of opening and closing the connection is very slight, as the database daemon already has the data files open and handles locking on the fly via structures in memory (and perhaps on disk; I really don't know how it's implemented in any particular server database) that already exist once the server is up and running.
With Jet/ACE, all users are contending for two files, the data file and the locking file, and setting that up is much more expensive than the incremental cost of creating a new connection to a server database.
Now, in situations where you're aiming for high concurrency with a Jet/ACE data store, there might have to be a trade-off among these factors, and you might get higher concurrency by being much more miserly with your connections. But I would say that if you're into that realm with Jet/ACE, you should probably be contemplating upsizing to a server-based back end in the first place, rather than wasting time on optimizing Jet/ACE for an environment it was not designed for.
Is there a way to ensure that only a single instance of the desktop version of a trusted Silverlight 4 Out Of Browser app will run?
Or do I need to manually enforce this through the creation of a crude mutex of some sort?
If I must enforce this myself, I'd look at creating a file in isolated storage as a lock and then deleting it on exit. I'd check for this file's existence on launch to prevent opening a subsequent instance.
Obviously I'd need a way to handle the app crashing, or exiting in some other way that prevents the lock file from being deleted. My instinct would be to put a timeout on the file and ignore it if it's older than a certain age. Unfortunately, the app plays movies, so it's likely to run for several hours under normal circumstances; a lock timeout of a few hours isn't likely to be popular with users in that situation. Are there any better solutions?
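One hedged refinement of the timeout idea that avoids the hours-long window: have the running instance rewrite a timestamp into the lock file every few seconds, so the lock can be considered stale after, say, 30 seconds of silence even if the app crashed mid-movie. A sketch using only Silverlight isolated-storage APIs (the file name and intervals are arbitrary):

```csharp
using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Windows.Threading;

public class SingleInstanceLock
{
    const string LockPath = "instance.lock";
    static readonly TimeSpan Stale = TimeSpan.FromSeconds(30);

    readonly IsolatedStorageFile store =
        IsolatedStorageFile.GetUserStoreForApplication();
    DispatcherTimer heartbeat;

    // Returns false if another live instance appears to hold the lock.
    public bool TryAcquire()
    {
        if (store.FileExists(LockPath) &&
            DateTime.UtcNow - ReadStamp() < Stale)
        {
            return false; // recent heartbeat: a live instance owns the lock
        }
        Touch(); // stale or absent: take the lock ourselves
        heartbeat = new DispatcherTimer { Interval = TimeSpan.FromSeconds(10) };
        heartbeat.Tick += (s, e) => Touch();
        heartbeat.Start();
        return true;
    }

    public void Release()
    {
        if (heartbeat != null) heartbeat.Stop();
        if (store.FileExists(LockPath)) store.DeleteFile(LockPath);
    }

    DateTime ReadStamp()
    {
        using (var reader = new StreamReader(
            store.OpenFile(LockPath, FileMode.Open, FileAccess.Read)))
        {
            long ticks;
            return long.TryParse(reader.ReadToEnd(), out ticks)
                ? new DateTime(ticks, DateTimeKind.Utc)
                : DateTime.MinValue;
        }
    }

    void Touch()
    {
        using (var writer = new StreamWriter(
            store.OpenFile(LockPath, FileMode.Create, FileAccess.Write)))
        {
            writer.Write(DateTime.UtcNow.Ticks);
        }
    }
}
```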