Multiple users accessing a linked table occasionally see a message "Cannot update. Database or object is read-only" - vba

We have a split MS Access database. When users log on, they are connected/linked to two separate Access databases: one for the specific project they are working on, and one for record locking (and other global settings). The "locking" database is the one I need to find a solution for.
One of the tables, "tblTSS_RecordLocking", simply stores a list of user names and the record ID of the record each user is editing. It never has more than 50 records, usually closer to 5-10. But before a user can do anything on a record, the code opens "tblTSS_RecordLocking" to see if the record is in use (so it happens a lot):
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.ProjectID_Lock)=1111) AND ((tblTSS_RecordLocking.RecordID_Lock)=123456));", , dbReadOnly)
If it's in use, the user simply gets a message and the form/record remains locked. If not, the recordset is closed and re-opened so that the user's row is updated with the record ID:
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.UserName_Lock)='John Smith'));")
If recIOC.EOF = True Then
    recIOC.AddNew
    recIOC.Fields![UserName_Lock] = "John Smith"
Else
    recIOC.Edit
End If
recIOC.Fields![RecordID_Lock] = 123456
recIOC.Fields![ProjectID_Lock] = 1111
recIOC.Update
recIOC.Close: Set recIOC = Nothing
As soon as this finishes, everything relating to the second database is closed down (and the .laccdb file disappears).
So here's the problem. On very rare occasions, a user can get a message:
3027 - Cannot update. Database or object is read-only.
On even rarer occasions, it can flag the db as corrupt, needing to be compressed and re-indexed.
I really want to know the most reliable way to do the check/update. The process may run several hundred times a day, and while I only see the issue once every few weeks and for the most part handle it cleanly on the front end, I'm sure there is a better, more reliable way.
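For what it's worth, the failure window is the gap between the read-only check and the writable re-open. One way to narrow it is to claim the lock with a single action query instead of a second recordset. This is only a sketch, not the poster's code; it assumes the same table and field names, and uses DAO's `Execute` with `dbFailOnError` so a failure raises a trappable error:

```vba
' Sketch: claim the lock in one action query rather than
' open-check-close-reopen-update. dbFailOnError raises a trappable
' error instead of failing silently.
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "UPDATE tblTSS_RecordLocking " & _
           "SET RecordID_Lock = 123456, ProjectID_Lock = 1111 " & _
           "WHERE UserName_Lock = 'John Smith'", dbFailOnError
If db.RecordsAffected = 0 Then
    ' No row for this user yet - create one.
    db.Execute "INSERT INTO tblTSS_RecordLocking " & _
               "(UserName_Lock, RecordID_Lock, ProjectID_Lock) " & _
               "VALUES ('John Smith', 123456, 1111)", dbFailOnError
End If
```

This keeps the whole write in one statement per case, so the .laccdb churn between check and update is reduced, though it does not remove Access's underlying file-share fragility.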

I agree with mamadsp that moving to SQL is the best option, and I am already in the process of doing this. However, while I was not able to create a fix for this issue, I did find a workaround that has yet to fail.
Instead of having a single lock table in the global database, I found that creating a lock table in the project database solved the problem. The main benefit is that there is far less activity on the table. So, not perfect - but it is stable.

Related

Cosmos DB where condition by external document

I have the following document structure (omitting all fields with an underscore prefix, like _self):
{
  "id": "c5055e2b-efb2-4c86-907d-a0beb1dca4dc",
  "Name": "John Johnson",
  "partitionKey": "0ecdb989-01c6-4f11-9fd2-3e1dcc1c8cb9",
  "FKToBeDeleted": "FK_c5055e2b-efb2-4c86-907d-a0beb1dca4dc_ToBeDeleted"
}
As you can see, there is a field named FKToBeDeleted that I use to mark the document, but it has to be a reference. My app can run into a kind of database concurrency problem: the 1st app can GET a document and process it, the 2nd app can update the document during that processing, and the 1st app will not see the changes. Downloading the huge document again and re-updating it is RU-consuming, so I wanted to reduce the cost. Going further, I created a separate document for this:
{
  "id": "FK_c5055e2b-efb2-4c86-907d-a0beb1dca4dc_ToBeDeleted",
  "partitionKey": "0ecdb989-01c6-4f11-9fd2-3e1dcc1c8cb9",
  "ToBeDeleted": false
}
And now there is a problem, because my front-end app should not display any documents marked ToBeDeleted. (This is a slight cheat on the user: I only mark the document as deleted and actually delete it later.)
So how should the SQL query look? Previously it was like the following, because r.ToBeDeleted was a boolean:
SELECT r.id, r.Name, r.AddedAt, r._ts
FROM ROOT r
WHERE NOT r.ToBeDeleted
ORDER BY r.AddedAt DESC
Now FKToBeDeleted is only a reference to another document whose ID is in r.FKToBeDeleted, so I tried some nested SELECTs, but they didn't work.
Any suggestions on the right way to achieve this?
EDIT (clarification)
Let's consider the following situation.
There are two apps (you can also treat them as threads) which use the same Cosmos DB instance.
STEP 1 - the moment processing of some data starts. The database document is needed, so the app GETs it (at this point only ToBeDeleted is interesting here).
STEP 2 - the moment the user wants to remove the processed item, because he is no longer interested in its results. The database document is also required here, so again there is a GET.
STEP 3 - the moment the soft-delete job is done and the database document needs updating: the field is set to true.
STEP 4 - the moment the common-flow processing is over and the document is updated at the end. BUT Application 2 downloaded it before STEP 3, so it overrides what Application 1 did, which is bad.
So I made a solution for that.
As you can see, the steps are the same, but instead of updating the same document, I update a referenced document, so I don't have a problem with overriding data.
Now, the problem is how to write a SQL query that joins the two documents, so that the FK_1 id is replaced by the value of the ToBeDeleted field in the referenced document.
According to this article, there is no possibility to join two documents, which of course does not help me at all, yet it closes the topic:
JOIN exists in the language, but it is used to "unfold" nested containers; there is no way to join different documents.
Perhaps you can use a subquery instead of a JOIN:
https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-subquery#mimic-join-with-external-reference-data
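The pattern in that article only works when the reference data is supplied inline in the query (e.g. fetched by the application in a first query), since Cosmos DB cannot join across documents. A sketch using the question's field names, with the FK document's values embedded as a literal array (in practice this array would be built by the app, or passed as a parameter):

```sql
-- Sketch: mimic a join by embedding the FK documents' data inline.
-- The array below stands in for data the app fetched beforehand
-- from the FK_* documents.
SELECT r.id, r.Name, r.AddedAt, r._ts
FROM ROOT r
JOIN fk IN (
    SELECT VALUE t
    FROM t IN [
        { "id": "FK_c5055e2b-efb2-4c86-907d-a0beb1dca4dc_ToBeDeleted",
          "ToBeDeleted": false }
    ]
    WHERE t.id = r.FKToBeDeleted
)
WHERE NOT fk.ToBeDeleted
ORDER BY r.AddedAt DESC
```

If the FK documents live in the same partition, the simpler alternative is two app-side queries: first read the FK documents, then filter the main documents by the surviving IDs.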

Users updating same row at the same time SQL Server

I want to create a SQL Server table that has a Department and a MaximumCapacity column (assume 10 for this scenario). When users add themselves to a department, the system will check the department's current assignment count (assume 9 for this scenario) and compare it to the maximum. If it is below the maximum, they will be added.
The issue is this: what if two users submit at the same time, and when the code retrieves the current assignment count, it is 9 for both? One user updates the row sooner, so now it's 10, but the other user already retrieved the previous value (9) before the update, so both pass the check and we end up with 11 users in the department.
Is this even possible and how can one solve it?
The answer to your problem lies in understanding "database concurrency" and then choosing the correct solution for your specific scenario.
It is too large a topic to cover in a single SO answer, so I would recommend doing some reading and coming back with specific questions.
However, in simple terms: you either block the assignments out to the first person who tries to obtain them (pessimistic locking), or you throw an error when someone tries to assign over the limit (optimistic locking).
In the pessimistic case you then need ways to unblock them if the user fails to complete the transaction, e.g. a timeout. A bit like a ticket-booking website that says "These tickets are being held for you for the next 10 minutes; you must complete your booking within that time or you may lose them".
And when you're down to the last few positions you are going to be turning everyone after the first away... no other way around it if you require this level of locking. (Well you could then create a waiting list, but that's another issue in itself).
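In the optimistic style, the usual trick is to make the check and the increment a single atomic statement, so the race described in the question cannot occur. A sketch with illustrative table and column names:

```sql
-- Sketch: check capacity and claim a slot in one atomic UPDATE.
-- Two concurrent callers cannot both see 9 and both succeed:
-- the slower UPDATE finds the guard condition false and affects 0 rows.
UPDATE dbo.Department
SET AssignmentCount = AssignmentCount + 1
WHERE DepartmentID = @DepartmentID
  AND AssignmentCount < MaximumCapacity;

IF @@ROWCOUNT = 0
    RAISERROR('Department is full.', 16, 1);
```

Because the WHERE clause is evaluated under the row lock the UPDATE itself takes, no separate SELECT-then-compare window exists for another session to slip through.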

SQL Server: Avoid simultaneous updates increasing a column value over its target

I have a SQL Server table called AD where the ads to be viewed are stored:
create table Sponsors.AD
(
    ADID varchar(40) primary key,
    SponsorID varchar(30),
    PurchasedViews int,   -- how many views the ad must reach before it is disabled
    CurrentViewCount int, -- keeps track of how many views the ad has gotten
    {...}
    Active bit            -- for easier checking of whether the ad still has views to give
)
This feeds into a webpage where, to access a feature, users first need to view an ad. Users pick one ad from a menu that displays three options (they pick one, the ad's media is displayed, and the feature is unlocked at the end).
After they view the ad, its CurrentViewCount should be increased by 1.
This is handled by a stored procedure that includes an update call for the table - separate from the stored procedure that fetches 3 ads at random for the option menu - but I'm looking for suggestions on how to synchronize all concurrent ad views, as it could happen that:
1. two or more users have the same ad in their 3-choice menu
2. two or more users view the same ad at the same time
1 and 2 are not a problem on their own, but they could be if the ad is one view away from its set maximum.
One way I've thought of is to set the Active flag to false when the ad is one view away from its target as it is displayed in the 3-option menu, and to reset the flag to true if the user does not click it - but then I'd need to handle cases where the user exits the option dialogue, disconnects, times out, etc. I feel like there must be a better way.
Another suggestion I've heard is to increase the counter automatically when the ads are fetched for the 3-option menu, but that's even more overhead than the other idea and suffers from the same issues.
Locking the table is absolutely infeasible unless we wanted to only serve one ad view at a time - so I'm not even considering it.
I'm sure something like this has been discussed before but don't know what keywords/etc to search to find more on this.
I would not count the clicks within the same table... that could avoid your locking issues.
But, to get to your question: maybe you could handle this in a "fuzzy" way. Not a tight active=yes/no, but rather something like an InactivityLevel together with a timeout.
As long as the flag is true, everything is fine. If the counter exceeds the target, you switch to "no new visitors" and set a timestamp, so your ad won't be displayed in any new context. You then set it to "inactive" after a given timeout.
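Whichever flagging scheme is chosen, the over-count itself can be prevented by making the increment and its guard one atomic statement against the table from the question (a sketch; @ADID is the viewed ad's key):

```sql
-- Sketch: increment and deactivate in one atomic statement, so two
-- simultaneous views can never push CurrentViewCount past PurchasedViews.
-- On the right-hand side, CurrentViewCount is the pre-update value.
UPDATE Sponsors.AD
SET CurrentViewCount = CurrentViewCount + 1,
    Active = CASE WHEN CurrentViewCount + 1 >= PurchasedViews
                  THEN 0 ELSE 1 END
WHERE ADID = @ADID
  AND Active = 1
  AND CurrentViewCount < PurchasedViews;

-- @@ROWCOUNT = 0 means the ad was already at its target (or inactive),
-- so this view should not be credited.
```

This makes it harmless for the same ad to appear in several users' menus at once: the last permitted view wins the row lock, and everyone after it is turned away by the guard.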

sql next entered record

I have the following table, "users" (simplified for the purposes of this question), with non-incremental IDs (not my design, but I have to live with it):
userUUID email
----------------------------------------------------
44384582-11B1-4BB4-A41A-004DFF1C2C3 dabac#sdfsd.com
3C0036C2-04D8-40EE-BBFE-00A9A50A9D81 sddf#dgfg.com
20EBFAFE-47C5-4BF5-8505-013DA80FC979 sdfsd#ssdfs.com
...
We are running a program that loops through the table and sends emails to the registered users. Using a try/catch block, I record the UUID of a failed record in a text file. The program then stops and needs restarting.
When the program is restarted, I don't want to resend emails to the users that were successful, but to begin at the failed record. I assume this means I need to process records that were created AFTER the failed record.
How do I do this?
Why not keep track somewhere (e.g. another table, or even a BIT column on the original table called "WelcomeEmailSent" or similar) of which UUIDs have already been mailed? Then no matter where your program dies or what state it was in, it can start up again and always know where it left off.
Alternatively:
- sort by a column (in this case I would recommend userUUID)
- do a WHERE userUUID > 'your-last-uuid'
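The flag approach might look like this (a sketch; the column name WelcomeEmailSent is illustrative). Note that since the UUIDs are non-incremental, sorting by userUUID does not reflect insertion order, which arguably makes the flag the safer of the two options:

```sql
-- One-time schema change: track which users have been mailed.
ALTER TABLE users ADD WelcomeEmailSent BIT NOT NULL DEFAULT 0;

-- After each successful send:
UPDATE users SET WelcomeEmailSent = 1 WHERE userUUID = @userUUID;

-- On (re)start, process only the unsent rows:
SELECT userUUID, email
FROM users
WHERE WelcomeEmailSent = 0;
```

Updating the flag row-by-row as each email succeeds means a crash at any point leaves the table itself as the restart checkpoint, and the text file of failed UUIDs becomes unnecessary.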

Optimal way to add / update EF entities if added items may or may not already exist

I need some guidance on adding/updating SQL records using EF. Let's say I am writing an application that stores info about files on a hard disk in an EF4 database. When you press a button, it will scan all the files in a specified path (maybe the whole drive) and store information in the database, like the file size, change date, etc. Sometimes the file will already be recorded from a previous run, so its properties should be updated; sometimes a batch of files will be detected for the first time and will need to be added.
I am using EF4, and I am seeking the most efficient way of adding new file information and updating existing records. As I understand it, when I press the search button and files are detected, I will have to check for the presence of a file entity, retrieve its ID field, and use that to add or update related information; if it does not exist already, I will need to create a tree that represents it and its related objects (e.g. its folder path) and add that. I will also have to handle merging the folder path object.
It occurs to me that if there are many millions of files, as there might be on a server, loading the whole database into the context is neither ideal nor practical. So for every file, I might conceivably have to make a round trip to the database to detect whether the entry already exists and retrieve its ID, then another trip to update it. Is there a more efficient way to insert/update multiple file object trees in one trip to the DB? If there were an entity context method like "insert if it doesn't exist, update if it does", for example, then I could wrap up multiple calls in a transaction.
I imagine this is a fairly common requirement; how is it best done in EF? Any thoughts would be appreciated. (Oh, my DB is SQLite, if that makes a difference.)
You can check whether the record already exists in the DB and, if not, create and add it. You can then set the fields that are common to both insert and update, as in the sample code below.
var strategy_property_in_db = _dbContext.ParameterValues()
    .Where(r => r.Name == strategy_property.Name)
    .FirstOrDefault();

if (strategy_property_in_db == null)
{
    strategy_property_in_db = new ParameterValue() { Name = strategy_property.Name };
    _dbContext.AddObject("ParameterValues", strategy_property_in_db);
}

// Common to both the insert and the update path:
strategy_property_in_db.Value = strategy_property.Value;