SQL - is updating a record thread safe?

I am working on a server that accesses a database. It is possible for multiple people to access the same record. Will each request wait in line, or will they all try to modify that record at the same time?
Example:
I have an image, and the database will keep track of how many "likes" that image has.
UPDATE `images` SET `image_likes` = `image_likes` + 1 WHERE `image_id` = 0;
Assuming that specific image has 0 "likes" and 3 people "like" that image at the same time, would those 3 requests be processed properly, resulting in the image having 3 likes? Or is there a chance that the record could be corrupted, or at the very least be incorrect, maybe only showing 2 "likes"?
My database uses the MyISAM engine and I am using it through GoDaddy.
Thank you

PHP by itself is not thread safe, but MySQL is; in this case MySQL will handle the concurrency and you will get 3 likes. Unless there is some other operation involved, this should not be a problem.
You can try it out by calling that script from the console multiple times to see what happens.
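To illustrate why the single-statement form is safe, here is a minimal sketch contrasting it with a read-then-write pattern that would be vulnerable to the race (table and column names are from the question; the engine serializes writes to the row, via a table lock in MyISAM's case):
-- Safe: the read and the write happen inside one atomic UPDATE, so three
-- concurrent requests each apply their increment and the result is 3.
UPDATE `images` SET `image_likes` = `image_likes` + 1 WHERE `image_id` = 0;
-- Unsafe: two requests can both read 0 before either writes, and both
-- then write 1, losing one of the likes.
SELECT `image_likes` FROM `images` WHERE `image_id` = 0;    -- both read 0
UPDATE `images` SET `image_likes` = 1 WHERE `image_id` = 0; -- both write 1
If the application ever needs to read the value and update it in separate statements, it would need explicit locking (e.g. LOCK TABLES on MyISAM) to stay correct.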

Related

Multiple users accessing a linked table occasionally see a message "Cannot update. Database or object is read-only"

We have a split MS Access database. When users log on, they are connected/linked to two separate Access databases (one for the specific project they are working on and one for record locking and other global settings). The "locking" database is the one I need to find a solution for.
One of the tables, "tblTSS_RecordLocking", simply stores a list of user names and the recordID of the record they are editing. It never has more than 50 records - usually closer to 5-10. But before a user can do anything on a record, the code opens "tblTSS_RecordLocking" to see if the record is in use (so this happens a lot):
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.ProjectID_Lock)=1111) AND ((tblTSS_RecordLocking.RecordID_Lock)=123456));", , dbReadOnly)
If it's in use, the user simply gets a message and the form/record remains locked. If not in use, it will close the recordset and re-open it so that the user name is updated with the Record ID:
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.UserName_Lock)='John Smith'));")
' Add a row for this user if none exists yet; otherwise edit the existing one.
If recIOC.EOF = True Then
    recIOC.AddNew
    recIOC.Fields![UserName_Lock] = "John Smith"
Else
    recIOC.Edit
End If
recIOC.Fields![RecordID_Lock] = 123456
recIOC.Fields![ProjectID_Lock] = 111
recIOC.Update
recIOC.Close: Set recIOC = Nothing
As soon as this finishes, everything relating to the second database is closed down (and the .laccdb file disappears).
So here's the problem. On very rare occasions, a user can get a message:
3027 - Cannot update. Database or object is read-only.
On even rarer occasions, it can flag the database as corrupt, needing to be compressed and re-indexed.
I really want to know the most reliable way to do the check/update. The process may run several hundred times a day, and whilst I only see the issue once every few weeks and for the most part handle it cleanly on the front-end, I'm sure there is a better, more reliable way.
I agree with mamadsp that moving to SQL is the best option and am already in the process of doing this. However, whilst I was not able to create a fix for this issue, I was able to find a work-around that has yet to fail.
Instead of having a single lock table in the global database, I found that creating a lock table in the project database solved the problem. The main benefit of this is that there is much less activity on the table. So, not perfect - but it is stable.

I want multiple servers processing data from a single database table

I have a single database table on a relational database. Data will be loaded into it. I then want to have multiple servers processing that data concurrently (I don't want to have only one server running at a time). E.g. each server will:
Query for a fixed number of rows
Do some work for each row retrieved
Update each row to show it has been processed
How do I ensure that each row is only processed once? Note I don't want to pre-assign a row of data to a server; I'm designing for high availability, so the solution should keep running if one or more servers go down.
The solution I've gone for so far is as follows:
The table has three columns: LOCKED_BY (VARCHAR), LOCKED_AT (TIMESTAMP) and PROCESSED (CHAR)
Each server starts by attempting to "pseudo-lock" some rows by doing:
UPDATE THE_TABLE
SET LOCKED_BY = $servername,
    LOCKED_AT = CURRENT_TIMESTAMP
WHERE (LOCKED_BY IS NULL OR (CURRENT_TIMESTAMP - LOCKED_AT > $timeout))
AND PROCESSED = 'N'
i.e. try to "pseudo-lock" rows that aren't locked already, or where the pseudo-lock has expired, and only do this for unprocessed rows.
More than one server may have attempted this at the same time. The current server needs to query to find out if it was successful in the "pseudo-lock":
SELECT * FROM THE_TABLE
WHERE LOCKED_BY = $server_name
AND PROCESSED = 'N'
If any rows are returned the server can process them.
Once the processing has been done, the row is updated:
UPDATE THE_TABLE SET PROCESSED = 'Y' WHERE PRIMARYKEYCOL = $pk
Note: the update statement should ideally limit the number of rows updated.
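Since row-limiting syntax is dialect-specific, here is a hedged sketch of how the pseudo-lock update could cap the batch size in MySQL ($servername and $timeout are the same placeholders as above):
-- Claim at most 10 unprocessed rows whose lock is absent or expired.
UPDATE THE_TABLE
SET LOCKED_BY = $servername,
    LOCKED_AT = CURRENT_TIMESTAMP
WHERE (LOCKED_BY IS NULL
       OR TIMESTAMPDIFF(SECOND, LOCKED_AT, CURRENT_TIMESTAMP) > $timeout)
  AND PROCESSED = 'N'
LIMIT 10;
SQL Server would express the same idea with UPDATE TOP (10), and Oracle with a ROWNUM filter in the WHERE clause.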
If you are open to changing platform then I would suggest moving to a modern, cloud-based solution like Snowflake. This will do what you want but in the background and by default - so you don't need to know what it's doing or how it's doing it (unless you want to).
This may come across as patronising, which is not my intention, but what you are attempting (in the way you are attempting it) is very complex; if you don't already know how to do it, then someone telling you how is not going to give you the skills and experience you need to implement it successfully.

Users updating same row at the same time SQL Server

I want to create a SQL Server table that has a Department column and a Maximum Capacity column (assume 10 for this scenario). When users add themselves to a department, the system will check the department's current assignment count (assume 9 for this scenario) and compare it to the maximum value. If it is below the maximum, they will be added.
The issue is this: what if two users submit at the same time? When the code retrieves the current assignment count, it will be 9 for both. One user updates the row sooner, so now it's 10, but the other user has already retrieved the previous value (9) before the update, so both comparisons pass and we end up with 11 users in the department.
Is this even possible and how can one solve it?
The answer to your problem lies in understanding "Database Concurrency" and then choosing the correct solution to your specific scenario.
It is too large a topic to cover in a single SO answer, so I would recommend doing some reading and coming back with specific questions.
However, in simple form, you either block the assignments out to the first person who tries to obtain them (pessimistic locking), or you throw an error when someone tries to assign over the limit (optimistic locking).
In the pessimistic case you then need ways to unblock them if the user fails to complete the transaction e.g. a timeout. A bit like on a ticket booking website it says "These tickets are being held for you for the next 10 minutes, you must complete your booking within that time else you may lose them".
And when you're down to the last few positions you are going to be turning away everyone after the first... there is no other way around it if you require this level of locking. (Well, you could then create a waiting list, but that's another issue in itself.)
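As a minimal sketch of the optimistic route in T-SQL (the Department table and its columns here are assumed for illustration, not taken from the question), the check and the increment can be collapsed into one atomic statement, and the application inspects the row count to see whether the assignment succeeded:
-- Assumed schema: Department(DepartmentId, AssignmentCount, MaxCapacity)
UPDATE Department
SET AssignmentCount = AssignmentCount + 1
WHERE DepartmentId = @DepartmentId
  AND AssignmentCount < MaxCapacity;
-- One row updated means the user got a place; zero means it was full.
IF @@ROWCOUNT = 0
    RAISERROR('Department is full', 16, 1);
Because the comparison and the update happen in a single statement, two simultaneous submissions cannot both see 9 and both succeed; the second one finds AssignmentCount already at 10 and updates nothing.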

Oracle Apex page based on loading data after entering the one or its refreshing

Good evening!
At the moment I'm working on a page in an Oracle Apex application which works the following way. The page contains a big and complex report with data differentiated by one feature (call it feature A). On the left side of the page there is a "catalog menu" through which the user can see the data matching feature A; on the right side the data is shown; and above there is a search bar which helps users find specific data by other features (feature B, feature C, etc.).
I had a view (call it V_REPORT_BIG_DATA) for showing the report, but it was so big and loaded so slowly that I decided to base the page on a table with the same fields as V_REPORT_BIG_DATA (call it T_REPORT_BIG_DATA_TEMP). It also has an additional field for a process identifier (call it PID) and is temporary not physically, but by purpose. I thought it would work this way: the user enters the page and receives his own PID for the session (it is assigned only if PID is null, otherwise it doesn't change), and then a procedure (P_REPORT_BIG_DATA_RELOAD) deletes the "old" data and uploads the "new" data, with both actions executed under the one PID and scoped to the definite (say, current) user.
But my idea didn't work correctly. The procedure P_REPORT_BIG_DATA_RELOAD itself works fine and is executed from a page process, and PID is a global application item (generated from a database sequence). But I was baffled to see that my table has duplicates of the data for one user and one PID! By creating a log table (filled, in the code of P_REPORT_BIG_DATA_RELOAD, with how many rows had been deleted and inserted) I saw a very strange thing: for some users the upload produced duplicates, as if the procedure had been executed several times simultaneously!
Taking into account all I've said, my question is: what am I doing wrong? What should I do so that I don't have to use DISTINCT in queries against T_REPORT_BIG_DATA_TEMP?
UPD: Some additional facts for my question. Excuse my inattention; I thought I could not edit my earlier post. :-/
Well, I'll explain my problem further. :) Firstly, I did my best to make my view V_REPORT_BIG_DATA load faster, but it involves a great many rows. Secondly, the code executed from the page process (i.e. during the loading of my page) is this:
begin
  if :PID is null then
    :PID := NEW_PID;
  end if;
  P_REPORT_BIG_DATA_RELOAD(AUTH => :SUSER, PID => :PID);
end;
NEW_PID is a function which generates a new PID, and P_REPORT_BIG_DATA_RELOAD is my procedure which refreshes the data for the given user and PID.
and the code of my procedure is this:
procedure P_REPORT_BIG_DATA_RELOAD
  (AUTH in varchar2, PID in number)
is
  NCOUNT_DELETED  number;
  NCOUNT_INSERTED number;
begin
  --first of all I check that both parameters are not null - let me omit this part
  --I find the count of data to be deleted (for debug only)
  select count(*)
    into NCOUNT_DELETED
    from T_REPORT_BIG_DATA_TEMP T
   where T.AUTHID = AUTH
     and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;
  --I delete the "old" data
  delete from T_REPORT_BIG_DATA_TEMP T
   where T.AUTHID = AUTH
     and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;
  --I upload the "new" data (the inline view needs the V alias for V.* to resolve)
  insert into T_REPORT_BIG_DATA_TEMP
  select V.*, P_REPORT_BIG_DATA_RELOAD.PID
    from (select S.* from V_REPORT_BIG_DATA S
           where S.AUTHID = AUTH) V;
  --I find the count of uploaded data (for debug only)
  NCOUNT_INSERTED := SQL%ROWCOUNT;
  --I write the logs (for debug only)
  insert into T_REPORT_BIG_DATA_TEMP_LG(AUTHID, PID, INS_CNT, DLD_CNT, WHEN)
  values (AUTH, PID, NCOUNT_INSERTED, NCOUNT_DELETED, sysdate);
end P_REPORT_BIG_DATA_RELOAD;
And one more fact: I tried to turn :PID into a page item, but it was cleared after every refresh even though the Maintain session state option was set to Per session, so I couldn't even hope to reuse the same PID for a given user within a given session.

Determining query's progress (Oracle PL/SQL)

I am a developer on a web app that uses an Oracle database. However, often the UI will trigger database operations that take a while to process. As a result, the client would like a progress bar when these situations occur.
I recently discovered that I can query V$SESSION_LONGOPS from a second connection, and this is great, but it only works on operations that take longer than 6 seconds. This means that I can't update the progress bar in the UI until 6 seconds has passed.
I've done research on wait times in V$SESSION but as far as I've seen, that doesn't include the waiting for the query.
Is there a way to get the progress of the currently running query of a session? Or should I just hide the progress bar until 6 seconds has passed?
Are these operations PL/SQL calls or just long-running SQL?
With PL/SQL operations we can write messages with SET_SESSION_LONGOPS() in the DBMS_APPLICATION_INFO package. We can monitor these messages in V$SESSION_LONGOPS. Find out more.
For this to work you need to be able to quantify the operation in units of work. These must be iterations of something concrete and numeric, not time. So if the operation is to insert 10000 rows, you could split that up into 10 batches. The totalwork parameter is the number of batches (i.e. 10) and you call SET_SESSION_LONGOPS() after every 1000 rows to increment the sofar parameter. This will allow you to render a thermometer of ten blocks.
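Putting that batch example into code, a minimal sketch (the procedure insert_next_batch is hypothetical; the DBMS_APPLICATION_INFO calls are the real API):
declare
  l_rindex binary_integer := dbms_application_info.set_session_longops_nohint;
  l_slno   binary_integer;
begin
  for i in 1 .. 10 loop
    insert_next_batch;  -- hypothetical: inserts the next 1000 rows
    -- Publish progress: sofar/totalwork becomes visible in V$SESSION_LONGOPS.
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'Insert 10000 rows',
      sofar     => i,
      totalwork => 10,
      units     => 'batches');
  end loop;
end;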
These messages are session-based but there's no automatic way of distinguishing the current message from previous messages from the same session & SID. However if you assign a UID to the context parameter you can then use that value to filter the view.
This won't work for a single long running query, because there's no way for us to divide it into chunks.
I found this very useful (note: set_module and set_action live in the DBMS_APPLICATION_INFO package):
dbms_application_info.set_module('My Program', 'Kicking off ...');
..
dbms_application_info.set_action('Extracting data ...');
..
dbms_application_info.set_action('Transforming data ...');
..
You can monitor the progress using:
select module , action from v$session where sid = :yoursessionid
I've done quite a lot of web development with Oracle over the years and found that most users prefer an indeterminate progress bar to a determinate bar that is inaccurate (a la pretty much any of Microsoft's progress bars, which annoy me no end), and unfortunately there is no infallible way of accurately determining query progress.
Whilst your research into the long ops capability is admirable and would definitely help make the progress of the database query more reliable, it can't take into account the myriad other variables that may/will affect the web operation's progress (network load, database load, application server load, client-side data parsing, the user clicking a submit button 1,000 times, and so on).
I'd stick to the indeterminate progress method using JavaScript callbacks. It's much easier to implement and it will manage your users' expectations appropriately.
Using V$SESSION_LONGOPS requires TIMED_STATISTICS=true or SQL_TRACE=true to be set. Your database schema must be granted the ALTER SESSION system privilege to do so.
I once tried using V$SESSION_LONGOPS with a complex, long-running query. But it turned out that V$SESSION_LONGOPS may show the progress only of parts of the query, such as full table scans, join operations, and the like.
See also: http://www.dba-oracle.com/t_v_dollar_session_longops.htm
What you can do is simply show the user that the query is still running. I implemented a <DIV> nested in a <TD> that gets longer with every status request sent by the browser. Status requests are initiated by window.setTimeout (every 3 seconds) and are AJAX calls to a server-side procedure. The status report returned by the server-side procedure simply says "we are still running". The progress bar's width (i.e. the <DIV>'s width) increases by 5% of the <TD>'s width each time and is reset to 5% after reaching 100%.
For long-running queries you might track the time they took in a separate table, possibly with individual entries for varying WHERE clauses. You could use this to display the average time, plus the time elapsed so far, in the client-side dialog.
If you have a long running PL/SQL procedure or the like on the server side doing several steps, try this:
create a table for status messages
use a unique key for any process the user starts. Suggestion: the client side's JavaScript date in milliseconds plus the session ID.
in case the long running procedure is to be started by a link in a browser window, create a job using DBMS_JOB.SUBMIT to run the procedure instead of running the procedure directly
write a short procedure that updates the status table, using PRAGMA AUTONOMOUS_TRANSACTION (see the sketch after this list). This pragma allows you to commit updates to the status table without committing your main procedure's updates. Each major step of your main procedure should have an entry of its own in this status table.
write a procedure to query the status table to be called by the browser
write a procedure that is called by an AJAX call if the user clicks "Cancel" or closes the window
write a procedure that is called by the main procedure after completion of each step: it queries the status table and raises an exception with a number in the 20,000s if the cancel flag was set or the browser has not queried the status for, say, 60 seconds. In the main procedure's exception handler look for this error, do a rollback, and update the status table.
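As a minimal sketch of the status-update step (the table t_process_status and its columns are assumed for illustration, not from the original):
create or replace procedure p_set_status(
  p_process_key in varchar2,  -- e.g. client JS date in ms + session ID
  p_message     in varchar2)
is
  pragma autonomous_transaction;  -- commits independently of the caller
begin
  update t_process_status
     set message    = p_message,
         updated_at = sysdate
   where process_key = p_process_key;
  if sql%rowcount = 0 then
    insert into t_process_status (process_key, message, updated_at)
    values (p_process_key, p_message, sysdate);
  end if;
  commit;  -- commits only this autonomous transaction
end p_set_status;
The main procedure would call p_set_status at the start of each major step; the browser-facing status procedure then just reads t_process_status by process key.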