What is the difference between a single-user database and a multi-user database? Why would I need to use a single-user database? Why was it made?
If I have multiple threads accessing the same database at the same time using the same log-in credentials, do I need a multi-user database?
What is the difference between a single-user database and a multi-user database? Why would I need to use a single-user database? Why was it made?
Single-user mode allows only one connection to a database at a time.
Since you've tagged this question as SQLite I'll base my answer on that specific engine.
SQLite's concurrency protection model is based on direct file reads and writes to the database file on disk, and two different programs obviously can't write to the same file on disk at the exact same time. SQLite therefore forces a second writer to wait (or fail with SQLITE_BUSY) while changes are being made to the database file.
SQLite protects against file/database corruption by using operating system file locks while INSERTs and UPDATEs are being done. But it does allow multiple read-only handles at the same time (just as you can open the same file multiple times read-only on your desktop, but won't be able to save changes).
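As a concrete illustration of that locking behavior, here is a minimal SQLite sketch (the accounts table is purely hypothetical):

    -- Hypothetical table, just for illustration.
    CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER);

    -- BEGIN IMMEDIATE takes the write lock up front; until COMMIT releases
    -- it, a second connection trying to write gets SQLITE_BUSY, while
    -- read-only connections can keep reading.
    BEGIN IMMEDIATE;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;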
In other databases like SQL Server, single-user mode is often only used when the database needs maintenance - like restoring a backup file, changing the database structure, or changing global database settings.
Single-user mode is useful because you can't have clients connected and trying to read/write data while you are making these kinds of changes.
But under normal conditions you don't want to run SQL Server in single-user mode, for performance reasons among others. SQLite, however, will probably be able to handle your performance requirements despite applying changes one at a time on disk, unless you have a high-volume web service.
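For reference, here is a minimal T-SQL sketch of how single-user mode is toggled in SQL Server for this kind of maintenance (MyDb is a placeholder name):

    -- Kick out all other connections, rolling back their open
    -- transactions immediately, and take exclusive control.
    ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- ...perform the maintenance: restore, schema changes, etc....

    -- Return the database to normal multi-user operation.
    ALTER DATABASE MyDb SET MULTI_USER;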
If I have multiple threads accessing the same database at the same time using the same log-in credentials, do I need a multi-user database?
No, it will still work. And SQLite databases don't require authentication at all; there are no credentials for them other than file/directory permissions.
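If your threads do collide on writes, you can tell SQLite to wait rather than fail immediately. A small sketch using two standard pragmas:

    -- Wait up to 5000 ms for a competing writer's lock instead of
    -- failing immediately with SQLITE_BUSY.
    PRAGMA busy_timeout = 5000;

    -- Optional: WAL journal mode lets readers keep reading while a
    -- single writer is writing.
    PRAGMA journal_mode = WAL;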
Hello, I have a database created with only Microsoft Access (meaning no SQL has been used). Is it possible to let multiple users use it from different computers, so that the data they input gets updated on all the computers?
Can someone briefly tell me how, if the answer is yes?
Much appreciated
I have on several occasions used the following technique with success:
(1) Split the Access database into two:
The Back End: This database should contain the shared tables.
The Front End: A database for forms, queries and basically everything except tables.
Instead of actual tables, this database should contain "linked" versions of those tables which are held in the "back end".
(2) There is a central copy of the front-end database, but no one opens it directly. Instead, each user runs a batch file which creates a local copy of that central front end, and then opens the copy.
This setup has the advantage that the central "front-end" remains unused, and therefore isn't locked, and so the developer can edit it. The users will get the updates whenever they next launch the database using the batch file.
A second advantage is that the back end can be upsized to a "proper" database while the front end remains largely unchanged; the linked tables would simply no longer live in another Access database.
I am working on a desktop application using VB.net which stores data in an SQLite database. The client says that they want the app to be accessed over the LAN by different departments. Is it possible for SQLite to work in this setup?
No. SQLite isn't meant to be used by multiple clients at the same time (unless it is strictly read-only).
You should use a server style database (SQL Server, PostgreSQL, MySQL, etc.).
If you really want a file-based database, Access is the only one I know of that will work over a LAN with multiple users.
Yes. SQLite can access database files on a network file system, and it does handle concurrent accesses correctly, provided the OS and the network file system implement locking correctly (many don't; see How To Corrupt An SQLite Database File).
However, SQLite is a file-based database system; like Access, it is not really designed for network operation (see Appropriate Uses For SQLite).
Consider using a client/server database, if possible.
There are some features in our LOB application that allow users to define their own queries to retrieve data for reports and listings within the app. The problem we are encountering is that the queries they write are sometimes really heavy (and sometimes erroneous) and cause massive load on the server.
Removing these features is out of the question, but I want to know if there is a way to create some type of sandbox within SQL Server, so that the queries they execute are only allotted a certain amount of resources, therefore not giving them the chance to cause any damage to anyone else using the system. Any ideas?
The Resource Governor has already been mentioned in the comments above (a minimal sketch of it appears at the end of this answer). One other solution I can think of is using SQL Server Always On Availability Groups.
The last place I worked had this kind of setup. There is a primary server which takes in all the transactions that write to the database, with a secondary in case the primary fails. Added to this, we also had read-only replicas in the availability group.
The main purpose of this is in the event that your main server goes down you are automatically transferred to another replica. When you connect your application to the database, you connect it to the Availability Group rather than a specific server. Then if a server goes down you are automatically transferred to a secondary server instead. However, it can also be used to optimise application functionality that just needs read-only access by taking load off the primary server.
For any functionality that we knew needed only read-only access, we could connect to the availability group and add ApplicationIntent=READONLY to the connection string, which means we use the read-only replica rather than the primary, leaving the primary for regular transactions. (IIRC, by default the primary will accept any read/write connection, so you have to configure the primary not to accept read-only connections.)
Anyway, the starting point for reading up on this is here: https://msdn.microsoft.com/en-us/library/ms190202.aspx
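And to expand on the Resource Governor idea mentioned at the top, here is a minimal T-SQL sketch that caps what a reporting login can consume; all the names (ReportPool, ReportGroup, report_user) are illustrative, not from any real setup:

    -- Pool limiting CPU and memory for ad-hoc report queries.
    CREATE RESOURCE POOL ReportPool
        WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);

    -- Workload group bound to that pool.
    CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
    GO

    -- Classifier function (created in master): sessions from the
    -- reporting login land in the limited group, everything else
    -- stays in the default group.
    CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @grp sysname = N'default';
        IF SUSER_SNAME() = N'report_user'
            SET @grp = N'ReportGroup';
        RETURN @grp;
    END;
    GO

    -- Register the classifier and apply the configuration.
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;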
The Windows 10 1903 update has a built-in Windows Sandbox feature, in which you can run SQL Server inside its own sandbox. I don't think SQL Server itself has a built-in sandbox environment; it would be practically impossible to manage within a normal Windows server that isn't using a sandbox, if you know what I mean.
For reasons I'm not about to explain, we keep an Access database that is meant to be a copy of a subset of a larger Oracle database. It is not feasible to read the data directly from the Oracle database due to speed issues (don't ask).
Every time a specific application is opened, the local Access database is updated with the newest data found up to the time of opening the application. First of all, this does not capture changes to existing records. Secondly, it does not take into account changes made in the source database after the application is opened.
For this reason, several checks may be needed when carrying out certain operations in the application. So: is it possible to update the local Access database with only the changes in the Oracle database, in a smarter and faster way than the hard way I am imagining (I'm not a PL/SQL / SQL expert)? Possibly it might be sufficient to look for changes made only after a certain date (stored in one of the fields of the recordset retrieved).
Any suggestions?
You might want to look into data replication between Oracle and MS Access databases, for example through an ODBC driver or a SQL Server database. Just google "ms access oracle replication" and see if this solves your problem.
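If full replication is more than you need, the date-based idea from the question can be sketched on the Oracle side roughly like this (the orders table and column names are hypothetical, and note that a plain timestamp column won't capture deletes):

    -- Give the source table a last-modified column, kept current by a trigger.
    ALTER TABLE orders ADD (last_modified DATE DEFAULT SYSDATE);

    CREATE OR REPLACE TRIGGER orders_touch
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW
    BEGIN
        :NEW.last_modified := SYSDATE;
    END;
    /

    -- On startup (and periodically), the application pulls only the rows
    -- changed since its last successful sync.
    SELECT * FROM orders WHERE last_modified > :last_sync_time;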
I am developing an Adobe AIR application which stores data locally using a SQLite database.
At any time, I want the end user to synchronize his/her local data to a central MySQL database.
Any tips, advice for getting this right?
Performance and stability are key (besides security ;))
I can think of a couple of ways:
Periodically dump your MySQL database and create a new SQLite database from the dump. You can then serve the SQLite database (SQLite databases are contained in a single file) for your users' clients to download and replace the current database.
Create a diff script that generates the necessary statements to bring the current database up to speed (various INSERT, UPDATE and DELETE statements). To do this, you must record the time of each change in your database (the time of creation and update for each row, plus a history of deleted rows); see the sketch at the end of this answer.
The user's client then downloads the diff file (a text file of the various statements) and applies it to the local database.
Both approaches have their own pros and cons: by dumping the entire database, you make sure all the data gets through. It is also much easier than creating the diff; however, it might put more load on the server, depending on how often the database gets updated between dumps.
On the other hand, diffing between the databases will give you just the data that changed (hopefully), but it is more open to logical errors. It will also incur additional overhead on the client, since the client has to create/update all the necessary records instead of just copying a file.
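As a concrete sketch of the bookkeeping the diff approach needs, assuming a hypothetical items table on the MySQL side:

    -- Track when each row was last changed.
    ALTER TABLE items
        ADD COLUMN updated_at TIMESTAMP
            DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

    -- Tombstone table so deletes can be replayed on the client.
    CREATE TABLE items_deleted (
        id         INT NOT NULL,
        deleted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (id, deleted_at)
    );

    CREATE TRIGGER items_tombstone
    AFTER DELETE ON items
    FOR EACH ROW
        INSERT INTO items_deleted (id) VALUES (OLD.id);

    -- Everything a client that last synced at @since needs:
    SELECT * FROM items WHERE updated_at > @since;           -- inserts/updates
    SELECT id FROM items_deleted WHERE deleted_at > @since;  -- deletes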
If you're just sync'ing from the server to client, Eran's solution should work.
If you're just sync'ing from the client to the server, just reverse it.
If you're sync'ing both ways, have fun. You'll at minimum probably want to keep change logs, and you'll need to figure out how to deal with conflicts.