I am facing a lock table overflow issue. Below is the error displayed, and as soon as it appears the program crashes:
Lock table overflow, increase -L on server (915)
I have checked the error number, and it says we need to modify the -L value before the server starts; it is set to 500 by default. But I don't imagine I have been given the privilege to change that value unless I am a database administrator of the company.
What I was trying to do was wipe out roughly 11k member records along with all the linked table records (more than 25 tables are linked to each member record), while backing each table up into a separate file. So it roughly acquires an EXCLUSIVE-LOCK when entering the member FOR EACH loop, as below:
for each member exclusive-lock:
    /*
    Then find each linked record, in order.
    Extract them.
    Delete them.
    */
    Finally, extract the member.
    Delete the member.
end.
When it hits a certain number of member records the program crashes. So I had to run it in batches, like:
for each member exclusive-lock:
    Increment a member count.
    When count = 1k, then RETURN.
    /*
    Then find each linked record, in order.
    Extract them.
    Delete them.
    */
    Finally, extract the member.
    Delete the member.
end.
So literally I've ended up running the same code more than 11 times to get the work done. I hope someone has come across this issue before; it would be a great help if you could share a long-term solution rather than my temporary one.
You need a lock for each record that is part of a transaction. Otherwise other users could make conflicting changes before your transaction commits.
In your code you have a transaction that is scoped to the outer FOR EACH. Thus you need 1 lock for the "member" record and another lock for each linked record associated with that member.
(Since you are not showing real code it is also possible that your actual code has a transaction scope that is even broader...)
The lock table must be large enough to hold all of these locks. The lock table is also shared by all users -- so not only must it hold your locks but there has to be room for whatever other people are doing as well.
FWIW -- 500 is very, very low. The default is 8192. There are two startup parameters using the letter "l", one is upper case, -L, and that is the lock table and it is a server startup parameter. Lower case, -l, is the "local buffer size" and that is a client parameter. (It controls how much memory is available for local variables.)
"Batching", as you have sort of done, is the typical way to ensure that no one process uses too many locks. But if your -L is really only 500 a batch size of 1,000 makes no sense. 100 is more typical.
A better way to batch:
define variable b as integer no-undo.

define buffer delete_member for member.
define buffer delete_memberLink for memberLink. /* for clarity I'll just do a single linked table... */

for each member no-lock: /* do NOT get a lock */

    batch_loop: do for delete_member, delete_memberLink while true transaction:

        b = 0.
        for each delete_memberLink exclusive-lock where delete_memberLink.id = member.id:
            b = b + 1.
            delete delete_memberLink.
            if b >= 100 then next batch_loop. /* commit this batch and start another */
        end.

        find delete_member exclusive-lock where recid( delete_member ) = recid( member ).
        delete delete_member.
        leave batch_loop. /* this will only happen if we did NOT execute the NEXT */

    end.

end.
You could also increase your -L database startup parameter to take into account your one off query / delete.
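If the DBA agrees, raising -L is a one-line change at database startup. A sketch, assuming an OpenEdge database served with proserve (the database name mydb and the value 50000 are placeholders, not values from the question):

```shell
# Stop the database broker, then restart it with a larger lock table.
# -L (upper case) is the lock table size in entries, a *server* parameter;
# don't confuse it with the client-side -l (local buffer size).
proshut mydb -by
proserve mydb -L 50000
```

Remember the lock table is shared by all users, so size it for the one-off delete plus whatever everyone else is doing at the same time.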
Related
We have a split MS Access database. When users log on, they are connected/linked to two separate Access databases (one for the specific project they are working on and one for record locking (and other global settings)). The "locking" database is the one I need to find a solution for.
One of the tables, "tblTSS_RecordLocking", simply stores a list of user names and the recordID of the record they are editing. It never has more than 50 records, usually closer to 5-10. But before a user can do anything on a record, the code opens "tblTSS_RecordLocking" to see if the record is in use (so this happens a lot):
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.ProjectID_Lock)=1111) AND ((tblTSS_RecordLocking.RecordID_Lock)=123456));", , dbReadOnly)
If it's in use, the user simply gets a message and the form/record remains locked. If not in use, it will close the recordset and re-open it so that the user name is updated with the Record ID:
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.UserName_Lock)='John Smith'));")
If recIOC.EOF = True Then
    recIOC.AddNew
    recIOC.Fields![UserName_Lock] = "John Smith"
Else
    recIOC.Edit
End If
recIOC.Fields![RecordID_Lock] = 123456
recIOC.Fields![ProjectID_Lock] = 111
recIOC.Update
recIOC.Close: Set recIOC = Nothing
As soon as this finishes, everything relating to the second database is closed down (and the .laccdb file disappears).
So here's the problem. On very rare occasions, a user can get a message:
3027 - Cannot update. Database or object is read-only.
On even rarer occasions, it can flag the db as corrupt, needing to be compacted and re-indexed.
I really want to know the most reliable way to do the check/update. The process may run several hundred times in a day, and whilst I only see the issue once every few weeks, and for the most part handle it cleanly (on the front-end), I'm sure there is a better, more reliable way.
I agree with mamadsp that moving to SQL is the best option and am already in the process of doing this. However, whilst I was not able to create a fix for this issue, I was able to find a work-around that has yet to fail.
Instead of having a single lock table in the global database, I found that creating a lock table in the project database solved the problem. The main benefit of this is that there is much less activity on the table. So, not perfect, but it is stable.
I want to create a SQL Server table that has a Department column and a Maximum Capacity column (assume 10 for this scenario). When users add themselves to a department, the system checks the department's current assignment count (assume 9 for this scenario) and compares it to the maximum value. If it is below the maximum, they are added.
The issue is this: what if two users submit at the same time? When the code retrieves the current assignment count, it is 9 for both. One user's update lands first, making the count 10, but the other user already retrieved the previous value (9) before that update, so both pass the check and we end up with 11 users in the department.
Is this even possible, and how can one solve it?
The answer to your problem lies in understanding "Database Concurrency" and then choosing the correct solution to your specific scenario.
It is too large a topic to cover in a single SO answer, so I would recommend doing some reading and coming back with specific questions.
However in simple form you either block the assignments out to the first person who tries to obtain them (pessimistic locking), or you throw an error after someone tries to assign over the limit (optimistic locking).
In the pessimistic case you then need ways to unblock them if the user fails to complete the transaction e.g. a timeout. A bit like on a ticket booking website it says "These tickets are being held for you for the next 10 minutes, you must complete your booking within that time else you may lose them".
And when you're down to the last few positions you are going to be turning everyone after the first away... no other way around it if you require this level of locking. (Well you could then create a waiting list, but that's another issue in itself).
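In simple form, the optimistic variant can be collapsed into a single atomic statement, so the capacity check and the insert can never see different states. A minimal runnable sketch, using SQLite for self-containment (table and column names are invented); on SQL Server the same single INSERT ... SELECT ... WHERE shape applies, typically with WITH (UPDLOCK, HOLDLOCK) hints inside a transaction, since its default isolation level does not serialize the read and write on its own:

```python
import sqlite3

MAX_CAPACITY = 10  # the per-department cap from the question

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assignment (dept TEXT, user_name TEXT)")

def try_assign(conn, dept, user):
    """Atomically add `user` to `dept` only if the cap is not yet reached.

    The count check and the insert are one statement, so a count read
    in application code beforehand can never be stale by insert time.
    """
    cur = conn.execute(
        "INSERT INTO assignment (dept, user_name) "
        "SELECT ?, ? "
        "WHERE (SELECT COUNT(*) FROM assignment WHERE dept = ?) < ?",
        (dept, user, dept, MAX_CAPACITY),
    )
    conn.commit()
    return cur.rowcount == 1  # 1 row inserted => assignment succeeded
```

With this shape, the eleventh concurrent caller simply gets back False instead of overfilling the department, which is exactly the "throw an error after someone tries to assign over the limit" behavior described above.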
Good evening!
At this moment I'm working on a page in an Oracle APEX application, which works the following way. The page contains a big and complex report with data differentiated by one feature (call it feature A). On the left side of the page there is a catalog menu through which the user can browse the data matching feature A; on the right side the data is shown; and above there is a search bar which helps users find specific data by other features (feature B, feature C, etc.).
I had a view (call it V_REPORT_BIG_DATA) for showing the report, but it was so big and loaded so slowly that I decided to back the page with a table having the same fields as V_REPORT_BIG_DATA (call it T_REPORT_BIG_DATA_TEMP). It also has an additional field for a process identifier (call it PID), and it is temporary not physically but by purpose. I intended it to work this way: the user enters the page and receives his own PID for the session (it is assigned only if PID is null, otherwise it stays unchanged), and then a procedure (P_REPORT_BIG_DATA_RELOAD) deletes the "old" data and uploads the "new" data, with both actions executed under that one PID for the current user.
But my idea didn't work correctly. The procedure P_REPORT_BIG_DATA_RELOAD itself works fine and is executed from a page process, and PID is a global Application Item (generated from a database sequence). But I was stunned to see that my table contains duplicates of the data for one user and one PID! By adding a log table (filled, in the code of P_REPORT_BIG_DATA_RELOAD, with how many rows were deleted and inserted), I saw a very strange thing: some users "loaded" duplicates as if the upload procedure had been executed several times simultaneously!
Taking all of the above into account, my question is: what am I doing wrong? What should I do so that I don't have to use DISTINCT when querying T_REPORT_BIG_DATA_TEMP?
UPD: some additional facts. Excuse my inattention; I thought I could not edit my first post. :-/
Well, I'll explain my problem further. :) Firstly, I did my best to make my view V_REPORT_BIG_DATA load faster, but it simply involves many, many rows. Secondly, the code executed from the page process (during the loading of my page) is this:
begin
    if :PID is null then
        :PID := NEW_PID;
    end if;
    P_REPORT_BIG_DATA_RELOAD(AUTH => :SUSER, PID => :PID);
end;
NEW_PID is a function which generates a new PID, and P_REPORT_BIG_DATA_RELOAD is my procedure which refreshes the data for the given user and PID.
and the code of my procedure is this:
procedure P_REPORT_BIG_DATA_RELOAD
    (AUTH in varchar2, PID in number)
is
    NCOUNT_DELETED  number;
    NCOUNT_INSERTED number;
begin
    --first of all I check that both parameters are not null - let me omit this part

    --I find the count of data to be deleted (for debug only)
    select count(*)
      into NCOUNT_DELETED
      from T_REPORT_BIG_DATA_TEMP T
     where T.AUTHID = AUTH
       and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;

    --I delete "old" data
    delete from T_REPORT_BIG_DATA_TEMP T
     where T.AUTHID = AUTH
       and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;

    --I upload "new" one
    insert into T_REPORT_BIG_DATA_TEMP
    select V.*, P_REPORT_BIG_DATA_RELOAD.PID
      from (select S.*
              from V_REPORT_BIG_DATA S
             where S.AUTHID = AUTH) V;

    --I find the count of uploaded data (for debug only)
    NCOUNT_INSERTED := SQL%ROWCOUNT;

    --I write the logs (for debug only)
    insert into T_REPORT_BIG_DATA_TEMP_LG(AUTHID,PID,INS_CNT,DLD_CNT,WHEN)
    values(AUTH,PID,NCOUNT_INSERTED,NCOUNT_DELETED,sysdate);
end P_REPORT_BIG_DATA_RELOAD;
And one more fact: I tried to turn :PID into a Page Item, but it was cleared after every refresh even though the "Maintain session state" option was set to "Per session", so I couldn't even count on each user keeping the same PID within a session.
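The symptoms described are consistent with two overlapping executions of the delete-then-insert sequence: both runs delete (finding little or nothing yet), then both insert, leaving two copies under the same user and PID. A toy sketch of that interleaving, using SQLite and plain Python in place of APEX and PL/SQL (all names and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_report (authid TEXT, pid INTEGER, val TEXT)")

source_rows = [("user1", "row-a"), ("user1", "row-b")]  # stand-in for the view

def delete_step(conn, authid, pid):
    # Mirrors the DELETE in P_REPORT_BIG_DATA_RELOAD
    conn.execute("DELETE FROM t_report WHERE authid = ? AND pid = ?", (authid, pid))

def insert_step(conn, authid, pid):
    # Mirrors the INSERT ... SELECT from the view
    conn.executemany(
        "INSERT INTO t_report VALUES (?, ?, ?)",
        [(a, pid, v) for a, v in source_rows if a == authid],
    )

# Two page processes for the same user/PID interleave:
delete_step(conn, "user1", 42)   # process 1 deletes (nothing there yet)
delete_step(conn, "user1", 42)   # process 2 deletes too (still nothing)
insert_step(conn, "user1", 42)   # process 1 inserts the report rows
insert_step(conn, "user1", 42)   # process 2 inserts them again -> duplicates

count = conn.execute(
    "SELECT COUNT(*) FROM t_report WHERE authid = 'user1' AND pid = 42"
).fetchone()[0]
# count is now twice the number of source rows
```

If this is what is happening, serializing the reload per user (for example via DBMS_LOCK, or a unique constraint on the identifying columns) would stop the duplicates at the source.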
If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with Rules) does not update itself on recalculating the record. You either have to launch full recalculation or wait for a considerable amount of time for the changes to take place.
I am facing this issue where based on 10 different field values in the record, I have to give read/edit access to 10 different groups respectively.
For instance:
if rule 1 is true, give edit access to the 1st group of users
if rules 1 and 2 are true, give edit access to the 1st AND 2nd groups of users
I have selected 'No Minimum' and 'No Maximum' in the Auto RP field.
How to make the Automatic Record Permission field to update itself as quickly as possible? Am I missing something important here?
If you are working with access control, you must have faced the issue
where the Automatic Record Permission field (with Rules) does not
update itself on recalculating the record. You either have to launch
full recalculation or wait for a considerable amount of time for the
changes to take place.
Tanveer, in general this is not a correct statement. You should not face this issue with [a] a well-designed architecture (relationships between your applications) and [b] a correct calculation order within the application.
About the case you described. I suggest you check and review the following possibilities:
1. Calculation order. Automatic Record Permissions [ARP from here] are treated by the Archer platform the same way as calculated fields. This means you can modify the calculation order in which calculated fields and automatic record permissions are updated when you save the record. So it is possible that your ARP field is calculated before certain calculated fields used by the rules in the ARP. For example, say you have two rules in the ARP field:
if A>0 then group AAA
if B>0 then group BBB
Now you will have a problem if the calculation order is "ARP", "A", "B": ARP will not be updated after you click "Save" or "Apply" once, but it will be updated after you click "Save" or "Apply" twice on the same record. With calculation order "A", "B", "ARP", your ARP will get recalculated right away.
2. Full recalculation queue.
Since ARPs are treated as calculated fields, every time an ARP needs to be updated, recalculation job(s) are created on the application server on the back end. If for some reason the calculation queue is full, record permissions will not be updated right away. The job engine recalculation queue can fill up if a data feed is running or if a massive number of recalculations is triggered via manual data imports. The recalculation job for the ARP update is created, added to the queue, and processed based on the priorities defined for the job queue. You can monitor the job queue and alter the default processing priorities in Archer v5.5 via the Archer Control Panel interface. I suggest you check the job queue state next time you see delays in ARP recalculations.
3. "Avalanche" of recalculations
It is important to design relationships and security inheritance between your applications so recalculation impact is minimal.
For example, let's say we have a Contacts application and a Department application:
- A record in the Contacts application inherits access using Inherited Record Permissions from the Department record.
- The Department record has an automatic record permission, and the Contacts record inherits it.
- Now the best part: Department D1 has 60,000 Contacts records linked to it, and Department D2 has 30,000.
The problem you described is reproducible in this configuration. I go to Department record D1 and update it in a way that forces the ARP in the Department record to recalculate. This adds 60,000 jobs to the job engine queue to recalculate the 60k Contacts linked to D1. Without waiting, I go to D2 and make a change forcing the ARP in that record to recalculate. After I save record D2, a new job to recalculate D2 and its 30,000 Contacts records is created in the queue. But record D2 will not be recalculated instantly, because the first set of 60k records has not been recalculated yet and the D2 recalculation is still sitting in the queue.
Unfortunately, there is not a good solution available at this point. However, this is what you can do:
- review and minimize inheritance
- review and minimize relationships between records where 1 record references 1000+ records.
- modify architecture and break inheritance and relationships and replace them with Archer to Archer data feeds if possible.
- add more "recalculation" power to your application server(s). You can configure your web servers to process recalculation jobs as well if they are not already utilized past a certain point. Add more job slots.
Tanveer, I hope this helps. Good luck!
I have the following table "users" (simplified for the purposes of this question) with non-incremental ids (not my design, but I have to live with it):
userUUID email
----------------------------------------------------
44384582-11B1-4BB4-A41A-004DFF1C2C3     dabac@sdfsd.com
3C0036C2-04D8-40EE-BBFE-00A9A50A9D81    sddf@dgfg.com
20EBFAFE-47C5-4BF5-8505-013DA80FC979    sdfsd@ssdfs.com
...
We are running a program that loops through the table and sends emails to registered users. Using a try/catch block, I record the UUID of a failed record in a text file. The program then stops and needs restarting.
When the program is restarted, I don't want to resend emails to those that were successful, but begin at the failed record. I assume this means, I need to process records that were created AFTER the failed record.
How do I do this?
Why not keep track somewhere (e.g. another table, or even in a BIT column of the original table, called "WelcomeEmailSent" or something) which UUIDs have already been mailed? Then no matter where your program dies or what state it was in, it can start up again and always know where it left off.
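A runnable sketch of that tracking-column idea, using SQLite as a stand-in for the real database (the column name WelcomeEmailSent and the sample data are assumptions, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        userUUID TEXT PRIMARY KEY,
        email TEXT,
        WelcomeEmailSent INTEGER NOT NULL DEFAULT 0
    )
""")
conn.executemany(
    "INSERT INTO users (userUUID, email) VALUES (?, ?)",
    [("u1", "a@example.com"), ("u2", "b@example.com"), ("u3", "c@example.com")],
)

def send_pending(conn, send):
    """Email every unsent user; mark each row only after a successful send."""
    rows = conn.execute(
        "SELECT userUUID, email FROM users "
        "WHERE WelcomeEmailSent = 0 ORDER BY userUUID"
    ).fetchall()
    for uuid, email in rows:
        send(email)  # may raise; the row stays unmarked and is retried next run
        conn.execute(
            "UPDATE users SET WelcomeEmailSent = 1 WHERE userUUID = ?", (uuid,)
        )
        conn.commit()
```

If the program dies mid-run, restarting it picks up every row still flagged as unsent, so no one is emailed twice and the failed record itself is retried.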
- sort by a column (in this case I would recommend userUUID)
- do a WHERE userUUID > 'your-last-uuid'
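A sketch of that resume query, using SQLite for a self-contained demo (the UUIDs and emails are made up). One caveat: since the UUIDs carry no creation-time information, this only resumes correctly if the send loop also processes rows in userUUID order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userUUID TEXT PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    ("11111111-0000-0000-0000-000000000001", "a@example.com"),
    ("22222222-0000-0000-0000-000000000002", "b@example.com"),
    ("33333333-0000-0000-0000-000000000003", "c@example.com"),
])

def pending_from(conn, last_failed_uuid):
    # Process in userUUID order; on restart, resume at the recorded
    # failure point (inclusive, so the failed record is retried).
    return conn.execute(
        "SELECT userUUID, email FROM users "
        "WHERE userUUID >= ? ORDER BY userUUID",
        (last_failed_uuid,),
    ).fetchall()
```

Here `pending_from` (a hypothetical helper) takes the UUID recorded in the failure log and returns the failed row plus everything after it in sort order.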