In my code, I need to assign the pallet number to the selected carton boxes.
Once the user selects the boxes (15-30 boxes) and presses OK, I run the following code:
//UPDATE THE PALLET NO FOR ALL THE SELECTED CARTONS
foreach (DataGridViewRow item in dgvCartonDetails.Rows)
{
    dbLayer.tblCartonUpdatePalletid(item.Cells["CM_ID"].Value.ToString(), Pno, _Settings.Line.ToString());
    //STORED PROCEDURE: tblCartonUpdatePalletid
    //update tblCarton set CM_palletid = @palletid, cm_cartoncompletetime = getdate() where cm_id = @cm_id
}

//PRINT ALL THE BOXES IN THE PALLET
dbLayer.tblPrintAllCartonsOfthePallet(PalletID);
//STORED PROCEDURE: tblPrintAllCartonsOfthePallet
//select * from tblCarton where cm_palletid = @PalletID
Sometimes I get the lock error (see the attached picture).
I have included the stored procedure code for reference. The carton table grows at a rate of about 5,000 records/day.
I don't know what I am missing. Where should I look? Thanks in advance.
There must be another process running at the same time, and you need to identify what it is. The best thing to do, if possible, is to capture a deadlock in Profiler, as it will show you exactly which processes deadlocked and on which resources.
I assume you run those two queries in one transaction.
If two processes run the code above, what may happen is that they update tblCarton at the same time. The first one updates a record on page 1 and the second updates a record on page 2. Then each needs to touch the other's page: the first needs page 2 and the second needs page 1. That results in a deadlock.
Without a deadlock report from Profiler, though, it's hard to say for sure whether this is what is happening.
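If running Profiler isn't practical, a sketch like the one below (assuming SQL Server 2008 or later) reads recent deadlock graphs from the built-in system_health Extended Events session; the graph names the deadlocked sessions, the statements, and the resources involved.

-- Pull xml_deadlock_report events captured by the system_health session
SELECT
    xed.value('(@timestamp)[1]', 'datetime2') AS deadlock_time,
    xed.query('.')                            AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions        AS s ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);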
I am working on a server that accesses a database. It is possible for multiple people to access the same record. Will each request wait in line, or will they all try to modify that record at the same time?
Example:
I have an image, and the database will keep track of how many "likes" that image has.
UPDATE `images` SET `image_likes` = `image_likes` + 1 WHERE `image_id` = 0;
Assuming that specific image has 0 "likes" and 3 people "like" that image at the same time, would those 3 requests be processed properly, resulting in the image having 3 likes, or is there a chance that the record could be corrupted, or at the very least end up incorrect, perhaps showing only 2 "likes"?
My database uses the MyISAM engine and I am using it through GoDaddy.
Thank you
PHP by itself is not thread-safe, but MySQL is. In this case MySQL will handle the concurrency and you will get 3 likes. Unless there is some other operation involved, this should not be a problem.
You can try it out by calling that script from the console multiple times to see what happens.
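To see why the statement in the question is safe, here is a small sketch contrasting it with a read-modify-write pattern (the second block is a hypothetical application pattern, not anything from the question):

-- Safe: the increment happens inside a single UPDATE, so even with MyISAM's
-- table-level locking each of the 3 concurrent requests adds exactly 1.
UPDATE images SET image_likes = image_likes + 1 WHERE image_id = 0;

-- Risky: read the value, add 1 in application code, then write it back.
-- Two overlapping requests can both read 0 and both write back 1, losing a like.
-- SELECT image_likes FROM images WHERE image_id = 0;      -- both requests read 0
-- UPDATE images SET image_likes = 1 WHERE image_id = 0;   -- both requests write 1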
Good evening!
At the moment I'm working on a page in an Oracle APEX application that works the following way. The page contains a rather big and complex report whose data is differentiated by one feature (let's call it feature A). On the left side of the page there is a "catalog menu" through which the user chooses the data matching feature A, on the right side the data is shown, and above there is a search bar that helps users find specific data by other features (feature B, feature C, etc.).
I had a view (let's call it V_REPORT_BIG_DATA) for showing the report, but it was so big and loaded so slowly that I decided to switch the page to a table with the same fields as V_REPORT_BIG_DATA (call it T_REPORT_BIG_DATA_TEMP). In addition it has an extra field for a process identifier (call it PID), and it is temporary not physically but by its purpose. I expected it to work this way: the user enters the page and receives a PID tied to the session (it is assigned only if PID is null, otherwise it stays unchanged), and then a procedure (P_REPORT_BIG_DATA_RELOAD) deletes the "old" data and loads the "new" data; both actions are executed with that one PID and concern only the current user.
But my idea doesn't seem to work correctly. The procedure P_REPORT_BIG_DATA_RELOAD itself works fine and is executed from a page process, and PID is a global Application Item (it is generated from a database sequence). But I was stunned to see that my table contains duplicate data for a single user and a single PID! By adding a log table (filled from the code of P_REPORT_BIG_DATA_RELOAD with the number of rows deleted and inserted on each run) I saw a very strange thing: for some users the data was duplicated, as if the loading procedure had been executed several times simultaneously!
Taking all of this into account, my question is: what am I doing wrong? What should I do so that I don't have to add DISTINCT to the query against T_REPORT_BIG_DATA_TEMP?
UPD: some additional facts for my question. Excuse my inattention; I thought I couldn't edit my first post. :-/
Well, let me explain my problem further. :) Firstly, I did my best to make the view V_REPORT_BIG_DATA load faster, but it simply involves a great many rows. Secondly, the code executed from the page process (i.e. during the loading of my page) is this:
begin
  if :PID is null then
    :PID := NEW_PID;
  end if;
  P_REPORT_BIG_DATA_RELOAD(AUTH => :SUSER, PID => :PID);
end;
NEW_PID is a function that generates a new PID, and P_REPORT_BIG_DATA_RELOAD is my procedure that refreshes the data for the given user and PID.
And the code of my procedure is this:
procedure P_REPORT_BIG_DATA_RELOAD
  (AUTH in varchar2, PID in number)
is
  NCOUNT_DELETED  number;
  NCOUNT_INSERTED number;
begin
  --first of all I check that both parameters are not null - let me omit this part

  --I find the count of data to be deleted (for debug only)
  select count(*)
    into NCOUNT_DELETED
    from T_REPORT_BIG_DATA_TEMP T
   where T.AUTHID = AUTH
     and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;

  --I delete the "old" data
  delete from T_REPORT_BIG_DATA_TEMP T
   where T.AUTHID = AUTH
     and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;

  --I load the "new" data
  insert into T_REPORT_BIG_DATA_TEMP
  select V.*, P_REPORT_BIG_DATA_RELOAD.PID
    from (select S.* from V_REPORT_BIG_DATA S
           where S.AUTHID = AUTH) V;

  --I find the count of loaded data (for debug only)
  NCOUNT_INSERTED := SQL%ROWCOUNT;

  --I write the logs (for debug only)
  insert into T_REPORT_BIG_DATA_TEMP_LG(AUTHID, PID, INS_CNT, DLD_CNT, WHEN)
  values (AUTH, PID, NCOUNT_INSERTED, NCOUNT_DELETED, sysdate);
end P_REPORT_BIG_DATA_RELOAD;
And one more fact: I tried to turn :PID into a Page Item, but it was cleared after every refresh even though the "Maintain session state" option is set to "Per session", so I couldn't even hope to reuse the same PID for a given user within a given session.
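For illustration, a minimal sketch of one way the reload could be serialised per user and PID, so that a double page load cannot run it twice at once (it assumes EXECUTE on DBMS_LOCK has been granted, which usually needs a DBA; :SUSER and :PID are the same items used above):

declare
  L_HANDLE varchar2(128);
  L_RESULT integer;
begin
  -- one named lock per user/PID combination
  dbms_lock.allocate_unique('RELOAD_' || :SUSER || '_' || :PID, L_HANDLE);
  L_RESULT := dbms_lock.request(lockhandle        => L_HANDLE,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 0,
                                release_on_commit => true);
  if L_RESULT = 0 then
    -- we got the lock: only this call performs the reload
    P_REPORT_BIG_DATA_RELOAD(AUTH => :SUSER, PID => :PID);
    commit;  -- releases the lock
  end if;    -- any other result means a reload for this PID is already running
end;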
I am coding an application that deals with files. I have a table that contains information about all the files registered in the application.
My "files" table looks like this: ID, Path and LastScanTime.
The algorithm that I use in my application is simple:
Take the oldest row (LastScanTime is the oldest)
Extract the file path
Do some magic on this file (takes exactly 5 minutes)
Update the LastScanTime to the current time (now)
Go to step "1"
So far the task is pretty simple. For this, I am going to use the following SQL statement to get the oldest item:
SELECT TOP 1 * FROM files ORDER BY [LastScanTime] ASC
and at the end of processing the item (to prevent it from being selected again immediately):
UPDATE Files SET [LastScanTime] = GETDATE() WHERE Id = @ItemID
Now I am going to add some complexity to the algorithm:
Take the 3 oldest rows (LastScanTime is the oldest)
For each row, do:
A. Extract the file path
B. Do some magic on this file (takes exactly 5 minutes)
C. Update the LastScanTime to the current time (now)
D. Go to step "1"
The problem I am now facing is that the whole thing is going to be processed in parallel (no more serial processing). So changing my SQL statement to the following is not enough!
SELECT TOP 3 * FROM files ORDER BY [LastScanTime] ASC
Why isn't this SQL statement enough?
Let's say I run my code and start processing the first 3 items. A minute later I want to start processing another 3 items. This SQL statement will retrieve exactly the same "oldest" items that are already being processed.
Possible solution
Implement a combined SELECT & UPDATE that gets the 3 oldest items and immediately updates their last scan time. If the SELECT and the UPDATE are separate statements, what happens when another SELECT comes in between them? Both callers will get the same results, which is a problem... Another problem is that we mark the item as "scanned recently" before the scan has really finished. What happens if the scan is terminated by an error?
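For illustration, a sketch of what such a combined claim could look like (assuming SQL Server; it adds a hypothetical nullable ScanStartTime column and an arbitrary 30-minute window for reclaiming scans that died with an error):

-- Claim the 3 oldest unclaimed rows and return them in the same statement.
-- READPAST lets a concurrent caller skip rows that are already locked instead
-- of blocking on them or claiming the same three.
WITH oldest AS (
    SELECT TOP (3) *
    FROM   Files WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE  ScanStartTime IS NULL
           OR ScanStartTime < DATEADD(MINUTE, -30, GETDATE())   -- reclaim stalled scans
    ORDER  BY LastScanTime ASC
)
UPDATE oldest
SET    ScanStartTime = GETDATE()
OUTPUT inserted.Id, inserted.[Path];

When the 5-minute job finishes, set LastScanTime = GETDATE() and clear ScanStartTime; if the job crashes, the row becomes eligible again once the reclaim window passes.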
I'm looking for tips and tricks to solve this problem. The solutions can add columns as needed.
I'll appreciate your help.
Well, I usually have the habit of keeping two different fields in the database: one is AddedDate and the other is ModifiedDate.
So the algorithm in your terms would be:
Take the oldest row (AddedDate is the oldest)
Extract the file path
Do some processing on this file
Update the ModifiedDate to the current time (now)
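A literal T-SQL sketch of this two-column idea (column names as in this answer; note that on its own it does not address the concurrent-claim problem from the question):

-- pick work by when it arrived
SELECT TOP (3) Id, [Path]
FROM   Files
ORDER  BY AddedDate ASC;

-- after processing each file
UPDATE Files SET ModifiedDate = GETDATE() WHERE Id = @ItemID;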
It seems that you are about to reinvent an event queue in SQL. Standard approaches like RabbitMQ or ActiveMQ may solve your problem.
I am facing a lock table overflow issue. Below is the error it displays, and as soon as it appears the code crashes:
Lock table overflow, increase -L on server (915)
I have looked up the error number, and it says the -L value needs to be increased before the server starts; it is set to 500 by default. But I don't imagine I have the privilege to change that value, since I am not a database administrator at the company.
What I was trying to do was wipe out roughly 11k member records together with all the linked table records (more than 25 tables are linked to each member record), while backing each table up into a separate file. Roughly, it takes an EXCLUSIVE-LOCK when entering the member FOR EACH loop, as below:
for each member exclusive-lock:
    /*
    Then find each linked record in order.
    Extract it.
    Delete it.

    Finally, extract the member.
    Delete the member.
    */
end.
When it hits a certain number of member records, the program crashes. So I had to run it in batches, like:
for each member exclusive-lock:
    /*
    Increment a member count.
    When count = 1k then RETURN.

    Then find each linked record in order.
    Extract it.
    Delete it.

    Finally, extract the member.
    Delete the member.
    */
end.
So I literally ended up running the same code more than 11 times to get the work done. I hope someone has come across this issue before; it would be a great help if you could share a long-term solution rather than my temporary one.
You need a lock for each record that is part of a transaction. Otherwise other users could make conflicting changes before your transaction commits.
In your code you have a transaction that is scoped to the outer FOR EACH. Thus you need 1 lock for the "member" record and another lock for each linked record associated with that member.
(Since you are not showing real code it is also possible that your actual code has a transaction scope that is even broader...)
The lock table must be large enough to hold all of these locks. The lock table is also shared by all users -- so not only must it hold your locks but there has to be room for whatever other people are doing as well.
FWIW -- 500 is very, very low. The default is 8192. There are two startup parameters using the letter "l", one is upper case, -L, and that is the lock table and it is a server startup parameter. Lower case, -l, is the "local buffer size" and that is a client parameter. (It controls how much memory is available for local variables.)
"Batching", as you have sort of done, is the typical way to ensure that no one process uses too many locks. But if your -L is really only 500 a batch size of 1,000 makes no sense. 100 is more typical.
A better way to batch:
define buffer delete_member     for member.
define buffer delete_memberLink for memberLink.  /* for clarity I'll just do a single linked table... */

define variable b as integer no-undo.

for each member no-lock:    /* do NOT get a lock */

    batch_loop: do for delete_member, delete_memberLink while true transaction:

        b = 0.

        for each delete_memberLink exclusive-lock where delete_memberLink.id = member.id:
            b = b + 1.
            delete delete_memberLink.
            if b >= 100 then next batch_loop.
        end.

        find delete_member exclusive-lock where recid( delete_member ) = recid( member ).
        delete delete_member.
        leave batch_loop.    /* this will only happen if we did NOT execute the NEXT */

    end.

end.
You could also increase your -L database startup parameter to account for your one-off query / delete.
I am a developer on a web app that uses an Oracle database. However, often the UI will trigger database operations that take a while to process. As a result, the client would like a progress bar when these situations occur.
I recently discovered that I can query V$SESSION_LONGOPS from a second connection, and this is great, but it only works on operations that take longer than 6 seconds. This means that I can't update the progress bar in the UI until 6 seconds has passed.
I've done research on wait times in V$SESSION, but as far as I've seen that doesn't include the wait for the query.
Is there a way to get the progress of the currently running query of a session? Or should I just hide the progress bar until 6 seconds has passed?
Are these operations PL/SQL calls or just long-running SQL?
For PL/SQL operations we can write messages with SET_SESSION_LONGOPS() in the DBMS_APPLICATION_INFO package and monitor them in V$SESSION_LONGOPS. See the DBMS_APPLICATION_INFO documentation for more details.
For this to work you need to be able to quantify the operation in units of work. These must be iterations of something concrete and numeric, not time. So if the operation is inserting 10,000 rows, you could split that into 10 batches. The totalwork parameter is the number of batches (i.e. 10) and you call SET_SESSION_LONGOPS() after every 1,000 rows to increment the sofar parameter. This lets you render a thermometer of ten blocks.
These messages are session-based but there's no automatic way of distinguishing the current message from previous messages from the same session & SID. However if you assign a UID to the context parameter you can then use that value to filter the view.
This won't work for a single long running query, because there's no way for us to divide it into chunks.
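A minimal sketch of the pattern just described (assuming the 10,000-row / 10-batch example above; the actual 1,000-row insert in the loop body is omitted):

declare
  l_rindex    binary_integer := dbms_application_info.set_session_longops_nohint;
  l_slno      binary_integer;
  l_totalwork number := 10;   -- 10 batches of 1,000 rows
begin
  for i in 1 .. l_totalwork loop
    -- ... insert the next 1,000 rows here ...
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'Loading report rows',
      sofar     => i,
      totalwork => l_totalwork,
      units     => 'batches');
  end loop;
end;

A second session can then poll SELECT opname, sofar, totalwork FROM v$session_longops for that SID to drive the thermometer.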
I found this very useful:
dbms_application_info.set_module('MY Program', 'Kicking off ...');
..
dbms_application_info.set_action('Extracting data ...');
..
dbms_application_info.set_action('Transforming data ...');
..
You can monitor the progress using:
select module, action from v$session where sid = :yoursessionid
I've done quite a lot of web development with Oracle over the years and found that most users prefer an indeterminate progress bar over a determinate bar that is inaccurate (a la pretty much any of Microsoft's progress bars, which annoy me no end), and unfortunately there is no infallible way of accurately determining query progress.
While your research into the long ops capability is admirable and would definitely help make the reported progress of the database query more reliable, it can't take into account the myriad other variables that may/will affect the web operation's transactional progress (network load, database load, application server load, client-side data parsing, the user clicking a submit button 1,000 times, and so on).
I'd stick to the indeterminate progress method using JavaScript callbacks. It's much easier to implement and it will manage your users' expectations appropriately.
Using V$SESSION_LONGOPS requires setting TIMED_STATISTICS=true or SQL_TRACE=true. Your database schema must be granted the ALTER SESSION system privilege to do so.
I once tried using V$SESSION_LONGOPS with a complex, long-running query, but it turned out that V$SESSION_LONGOPS only shows the progress of individual parts of the query, such as full table scans, join operations, and the like.
See also: http://www.dba-oracle.com/t_v_dollar_session_longops.htm
What you can do is simply show the user "the query is still running". I implemented a <DIV> nested inside a <TD> that gets longer with every status request sent by the browser. Status requests are initiated by window.setTimeout (every 3 seconds) and are AJAX calls to a server-side procedure. The status report returned by the server-side procedure simply says "we are still running". The progress bar's width (i.e. the <DIV>'s width) increments by 5% of the <TD>'s width every time and is reset to 5% after reaching 100%.
For long-running queries you might track the time they took in a separate table, possibly with individual entries for varying WHERE clauses. You could use this to display the average time, plus the time elapsed so far, in the client-side dialog.
If you have a long-running PL/SQL procedure or the like on the server side doing several steps, try this (a sketch of the status-writer procedure follows the list):
create a table for status messages
use a unique key for any process the user starts. Suggestion: the client side's JavaScript date in milliseconds plus the session ID.
in case the long-running procedure is started from a link in a browser window, create a job using DBMS_JOB.SUBMIT to run the procedure instead of running it directly
write a short procedure that updates the status table, using PRAGMA AUTONOMOUS_TRANSACTION. This pragma allows you to commit updates to the status table without committing your main procedure's updates. Each major step of your main procedure should have an entry of its own in this status table.
write a procedure to query the status table to be called by the browser
write a procedure that is called via AJAX if the user clicks "Cancel" or closes the window
write a procedure that is called by the main procedure after completion of each step: it queries the status table and raises an exception with a number in the 20,000s (a user-defined error) if the cancel flag was set or the browser has not queried the status for, say, 60 seconds. In the main procedure's exception handler look for this error, do a rollback, and update the status table.
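A minimal sketch of the status table and the autonomous-transaction writer described above (all table, column, and procedure names here are illustrative, not from the original answer):

-- status table, one row per user-started process
create table long_proc_status (
  process_key  varchar2(100) primary key,   -- e.g. JS milliseconds + session ID
  step_name    varchar2(200),
  cancel_flag  char(1) default 'N',
  last_polled  date,
  updated_at   date
);

-- called by the main procedure after each major step; commits only the status
-- row, never the main procedure's own work
create or replace procedure set_proc_status (
  p_process_key in varchar2,
  p_step_name   in varchar2
) as
  pragma autonomous_transaction;
begin
  update long_proc_status
     set step_name  = p_step_name,
         updated_at = sysdate
   where process_key = p_process_key;
  if sql%rowcount = 0 then
    insert into long_proc_status (process_key, step_name, updated_at)
    values (p_process_key, p_step_name, sysdate);
  end if;
  commit;
end set_proc_status;
/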