Suppose we have a table A with the fields time: date, status: int, playerId: int, serverid: int.
We added a unique constraint on time, playerid and serverid (UNQ_TIME_PLAYERID_SERVERID).
At some point we try to update all matching rows in table A with a new status and date:
UPDATE A SET status = 1, time = sysdate WHERE serverid = XXX AND status != 1 AND time > sysdate
The problem is that there are two separate processes on separate machines that can execute the same update at the same sysdate.
And a UNQ_TIME_PLAYERID_SERVERID violation occurs!
Is there any way to force Oracle to re-check the WHERE clause immediately before the actual update (once the row lock is acquired)?
I do not want to use any SELECT FOR UPDATE approach.
If it's really the same update 100% of the time, then just catch the exception and ignore it.
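For example, a minimal sketch of that approach in PL/SQL, assuming the update runs inside an anonymous block or stored procedure; :server_id stands in for the XXX value, and DUP_VAL_ON_INDEX is Oracle's predefined exception for unique constraint violations:
BEGIN
  UPDATE A
     SET status = 1,
         time   = SYSDATE
   WHERE serverid = :server_id
     AND status  != 1
     AND time     > SYSDATE;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    NULL;  -- the other process already applied the identical update
END;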
If you want to prevent the error from occurring in the first place, you need to implement some logic that prevents the second update statement from ever executing.
I could think of a "lock table" just for this purpose. Create a table TABLE_A_LOCK_TB (add columns based on whatever information you want stored there for administrative reasons, e.g. the user who set the lock, a timestamp, ...).
Before you execute an update statement on table A, insert a row into TABLE_A_LOCK_TB. Once the update has completed successfully, delete that row.
Before executing any update statement on table A, check whether TABLE_A_LOCK_TB contains a row. If it doesn't, your update is good to go; if it does, don't execute the update.
To make this process easier you could write a package for "locking" and "unlocking" table A by inserting / deleting a row from TABLE_A_LOCK_TB. Also implement a function to check the "lock status".
If you need this logic for several tables, you can make it generic by adding a column holding the table name to TABLE_A_LOCK_TB and checking against that.
In your application logic you can then handle every update like this (pseudocode):
IF your_lock_package.lock_status(table_name) = false THEN
your_lock_package.set_lock(table_name);
-- update statement(s)
your_lock_package.release_lock(table_name);
ELSE
-- "error" handling / information to user + exit
Related
I have one table and two different SQL procedures (AS400) that insert/update records in that same table. Both procedures use an IF EXISTS condition to handle the data.
IF EXISTS (SELECT 1 FROM TABLE WHERE FIELD001 = 'test') THEN
Update table....
ELSE
INSERT INTO TABLE VALUES ('test')...
ENDIF;
But I am still getting duplicate records in my table, with only a few milliseconds' difference between them.
Ex.1st record is --> 2017-07-24-04.21.47.485832
2nd record is --> 2017-07-24-04.21.47.487468
These tables can be inserted/updated interactively as well as in batch. How is it possible to get duplicate records? Please, experts, give some possibilities of where/when/how duplicate records could be inserted.
I also don't want to fix this with a UNIQUE INDEX, PRIMARY KEY, etc.
Sorry, I didn't attach any code with this.
Thanks,
Loganathan.
Edit: adding the code here.
The table I mentioned earlier is inserted into/updated from various places, but we confirmed that these records were inserted interactively from a single session using the single procedure below.
Original Records in table.
9243548 CUSTYPE 2017-07-10-16.53.09.825860 2017-07-10-16.53.09.825860
9243548 ROYALTY 2017-07-10-16.53.09.485832 2017-07-10-16.53.09.485832
9243548 ROYALTY 2017-07-10-16.53.09.487468 2017-07-10-16.53.09.487468
Calling program:
if v_res_spec_sts <> '' then
if (v_res_spec_sts <> v_current_res_spec_sts
or v_current_res_spec_sts IS NULL) then
call SPCASPECSV (p_resvnum, c_Royalty, v_res_spec_sts,
p_updateUser, p_updateProgram) ;
end if;
end if;
Procedure:
IF EXISTS (SELECT 1 FROM CASPECLPF WHERE RSRES# = P_RSRES#
AND RSCOND = P_RSCOND) THEN
UPDATE CASPECLPF SET
RSSSTS = COALESCE(P_RSSSTS, RSSSTS)
,RSSLCM = TODAYMONTH
,RSSLCD = TODAYDAY
,RSSLCY = TODAYYEAR
,RSSLCU = COALESCE(P_RSSLCU, RSSLCU)
,RSSLCP = COALESCE(P_RSSLCP, RSSLCP)
WHERE RSRES# = P_RSRES# AND RSCOND = P_RSCOND;
ELSE
INSERT INTO CASPECLPF
(
RSRES#
,RSCOND
,RSSSTS
,RSSLCM
,RSSLCD
,RSSLCY
,RSSLCU
,RSSLCP
)
VALUES
(
COALESCE(P_RSRES#, 0)
,COALESCE(P_RSCOND, ' ')
,COALESCE(P_RSSSTS, ' ')
,TODAYMONTH
,TODAYDAY
,TODAYYEAR
,COALESCE(P_RSSLCU, ' ')
,COALESCE(P_RSSLCP, ' ')
);
END IF;
Make sure they are not in different sessions, i.e. one session inserting and not committing; the second session would then obviously not find the record inserted by the first session.
If that is not the case, please provide the code.
Because it's not a duplicate.
If you had a primary or unique key defined, the system would have prevented the second process from writing a record at 2017-07-24-04.21.47.487468.
As it is, when the second process checked for a record at 2017-07-24-04.21.47.485500, one didn't exist. But by the time the second process inserted its record, the first process had also inserted one.
Even with a primary key, the existence check and insert are two separate operations. You'd still have to monitor for a duplicate key on the insert and handle appropriately.
The MERGE statement is usually preferred for such "upsert" (UPDATE or INSERT) operations (a rough sketch follows below). However, even with an atomic MERGE, a second process could insert a record between the existence check and the insert. You would have to use a locking level of *RR (repeatable read), which basically locks the entire table, to ensure that no process can add a record between the existence check and the insert.
With processes inserting microseconds apart, locking the entire table is going to hurt.
You really need to define a primary key, or at least a unique one.
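For reference, a rough sketch of how the procedure's existence check, UPDATE and INSERT could be collapsed into a single MERGE, assuming the same P_ parameters and TODAY* variables are in scope (untested):
MERGE INTO CASPECLPF T
USING (VALUES (P_RSRES#, P_RSCOND, P_RSSSTS, P_RSSLCU, P_RSSLCP))
      AS S (RSRES#, RSCOND, RSSSTS, RSSLCU, RSSLCP)
ON T.RSRES# = S.RSRES# AND T.RSCOND = S.RSCOND
WHEN MATCHED THEN
  UPDATE SET RSSSTS = COALESCE(S.RSSSTS, T.RSSSTS)
            ,RSSLCM = TODAYMONTH
            ,RSSLCD = TODAYDAY
            ,RSSLCY = TODAYYEAR
            ,RSSLCU = COALESCE(S.RSSLCU, T.RSSLCU)
            ,RSSLCP = COALESCE(S.RSSLCP, T.RSSLCP)
WHEN NOT MATCHED THEN
  INSERT (RSRES#, RSCOND, RSSSTS, RSSLCM, RSSLCD, RSSLCY, RSSLCU, RSSLCP)
  VALUES (COALESCE(S.RSRES#, 0), COALESCE(S.RSCOND, ' '), COALESCE(S.RSSSTS, ' ')
         ,TODAYMONTH, TODAYDAY, TODAYYEAR
         ,COALESCE(S.RSSLCU, ' '), COALESCE(S.RSSLCP, ' '));
Even then, as noted above, two sessions can still collide unless you add a unique key or run under a *RR isolation level.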
I have a table. In this table I have two columns, 'insert_name' and 'modified_name'. I need to store in these columns who inserted data into the table ('insert_name') and who changed that data in the table ('modified_name'). How can this be done?
You are looking for basic DML statements.
If your record is already in the table, then you need to UPDATE it. Otherwise, when you are about to add a record that doesn't already exist in the destination table, you are looking for the INSERT INTO statement.
Example of updating the information for the record with id 1:
UPDATE yourtable SET insert_name = 'value1', modified_name = 'value2' WHERE id = 1
Example of inserting new record:
INSERT INTO yourtable(id, company_name, product_name, insert_name)
VALUES (1, 'Google', 'PC', 'value1')
If you are looking for automatic changes to those columns then you need to look into triggers.
Remember that more often than not the application connecting to the database uses a single database user, in which case you probably already know the context within the application itself (who inserts, who updates). That eliminates triggers and puts the task straight on the simple insert/update commands issued from your application layer.
You might be able to use the CURRENT_USER function to find the name of the user making the change.
The value from this function could then be used to update the appropriate column. This update could be done as part of the INSERT or UPDATE statement. Alternatively use an INSERT or UPDATE trigger.
Personally I avoid triggers if I can.
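For example, a minimal sketch without a trigger, reusing the yourtable columns from the answer above (the id value is made up):
INSERT INTO yourtable (id, company_name, product_name, insert_name)
VALUES (2, 'Google', 'Laptop', CURRENT_USER);

UPDATE yourtable
SET product_name = 'Tablet',
    modified_name = CURRENT_USER
WHERE id = 2;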
For those 2 columns, add CURRENT_USER as a DEFAULT constraint.
The INSERT statement will then store the current login user name the first time around. For updates, write an UPDATE trigger that applies the same CURRENT_USER value to the Modified_Name column.
Go for the trigger only if your application business logic cannot update the modified_name column itself.
See the use of CURRENT_USER:
https://msdn.microsoft.com/en-us/library/ms176050.aspx
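A rough sketch of that approach in SQL Server syntax, again assuming the yourtable and id names from the earlier answer:
ALTER TABLE yourtable
  ADD CONSTRAINT DF_yourtable_insert_name
  DEFAULT CURRENT_USER FOR insert_name;

CREATE TRIGGER trg_yourtable_modified_name
ON yourtable
AFTER UPDATE
AS
BEGIN
  SET NOCOUNT ON;
  -- stamp the user who performed the update on every affected row
  UPDATE t
     SET modified_name = CURRENT_USER
    FROM yourtable t
    JOIN Inserted i ON i.id = t.id;   -- id assumed to be the key column
END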
I have a table as below that contains dealer codes and statuses. Every night between 1 and 6 am the status column may change for each dealer code. For example, today the status of 00141.00062 is operational, but tomorrow it will be deactivated if the store was closed.
Briefly, I would like to track the changes using stored procedures and send a notification email to myself just for the updated values.
Lastly, I would prefer not to create a trigger because, based on my previous experience, it will affect my main app. Therefore, I would appreciate it if you could explain how I can do this via stored procedures.
DEALER_CODE STATUS
----------------------------
00141.00062 OPERASYONEL
01033.00061 DEACTIVE
00070.00002 DEACTIVE
00524.00002 DEACTIVE
00387.00020 DEACTIVE
00543.00001 DEACTIVE
00310.00061 DEACTIVE
00247.00062 OPERATIONAL
If your UPDATE statement affects multiple rows at once, you'll get the trigger fired once, but with multiple rows in the Deleted (old values before UPDATE) and Inserted (new values after UPDATE) pseudo tables. Therefore, it's the easiest to just compare those pseudo tables to figure out which rows have changed.
Also: I would strongly recommend to NOT send the e-mail directly from the trigger, since the trigger executes in the context of the UPDATE statement that caused it to fire and thus any delay in sending the e-mail just slows down your main app.
Instead, just add a row into a table, and then periodically (once every night, once every 4 hours or whatever suits your needs) have a separate process grab the new rows from that table and put those into an e-mail.
So the trigger should look something like this:
CREATE TRIGGER trgUpdateStatus
ON dbo.YourTableName
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- insert a row into a "changed" table that will then be
-- used to asynchronously send out e-mails
INSERT INTO dbo.ChangedDealerStatuses (DealerCode, OldStatus, NewStatus)
SELECT
old.Dealer_Code, old.Status, new.Status
FROM
Deleted old
INNER JOIN
Inserted new ON old.Dealer_Code = new.Dealer_Code
WHERE
old.Status <> new.Status
END
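To complete the picture, here is a rough sketch of the change table and of the periodic step that mails the collected rows; the extra columns (ChangedAt, Notified), the mail profile and the recipient address are just placeholders:
CREATE TABLE dbo.ChangedDealerStatuses
(
    ChangeId   INT IDENTITY(1,1) PRIMARY KEY,
    DealerCode VARCHAR(20) NOT NULL,
    OldStatus  VARCHAR(20) NOT NULL,
    NewStatus  VARCHAR(20) NOT NULL,
    ChangedAt  DATETIME    NOT NULL DEFAULT GETDATE(),
    Notified   BIT         NOT NULL DEFAULT 0
);

CREATE PROCEDURE dbo.usp_SendDealerStatusChanges
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @body NVARCHAR(MAX);

    -- build one line per pending change
    SELECT @body = COALESCE(@body + CHAR(13) + CHAR(10), '')
                 + DealerCode + ': ' + OldStatus + ' -> ' + NewStatus
    FROM dbo.ChangedDealerStatuses
    WHERE Notified = 0;

    IF @body IS NOT NULL
    BEGIN
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'YourMailProfile',
             @recipients   = 'you@example.com',
             @subject      = 'Dealer status changes',
             @body         = @body;

        -- mark the rows as sent so they are not mailed again
        UPDATE dbo.ChangedDealerStatuses
        SET Notified = 1
        WHERE Notified = 0;
    END
END
Schedule the procedure with a SQL Server Agent job at whatever interval suits you, so the e-mail work never runs inside the trigger.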
You can use an after update trigger for this purpose. This is sample code:
CREATE TRIGGER TRIGGERNAME -- NAME OF TRIGGER
ON TABLENAME -- NAME OF YOUR TABLE
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF UPDATE([STATUS])
BEGIN
PRINT 'STATUS COLUMN IS UPDATED'
---------------------TODO------------
-- INSERT INTO TABLE OR SEND EMAIL-------
END
END
Note: the IF UPDATE([STATUS]) branch runs whenever the STATUS column appears in the SET list of the UPDATE statement; it never checks whether the value actually changed, so an update that writes the same value still fires it.
I am making some tweaks to a legacy application built on SQL Server 2000; needless to say, I only want to do the absolute minimum for fear that it may all fall apart.
I have a large table of users, tbUsers, with a BIT flag IsDeleted. I want to archive off all current and future IsDeleted = 1 user records into my archive table tbDeletedUsers.
Moving the currently deleted users is straightforward; however, I want a way to move any future users where the IsDeleted flag is set. I could use a standard AFTER trigger on the column, but I plan to add some constraints to the tbUsers table that would violate this. What I'd like is for my INSTEAD OF UPDATE trigger to fire and move the record to the archive table instead.
I guess my question is: is it possible to fire an INSTEAD OF UPDATE trigger on the update of an individual column? This is what I have so far:
CREATE TRIGGER trg_ArchiveUsers
INSTEAD OF UPDATE ON tbUsers
AS
BEGIN
...
END
GO
If so an example (SQL 2000 compatible) would be much appreciated!
Using the UPDATE(columnname) test, you can check in a trigger whether a specific column was updated (and then take specific actions), but you can't have a trigger fire only on the update of a specific column. It will fire as soon as the update is performed, regardless of which column was the target of the update.
So, if you think you have to use an INSTEAD OF UPDATE trigger, you'll need to implement two kinds of actions in it:
1) insert into tbDeletedUsers + delete from tbUsers – when IsDeleted is updated (or, more exactly, updated and set to 1);
2) update tbUsers normally – when IsDeleted is not updated (or updated but not set to 1).
Because more than one row can be updated with a single UPDATE instruction, you might also need to take into account that some rows might have IsDeleted set to 1 and others not.
I'm not a big fan of INSTEAD OF triggers, but if I really had to use one for a task like yours, I might omit the UPDATE() test and implement the trigger like this:
CREATE TRIGGER trg_ArchiveUsers
ON tbUsers
INSTEAD OF UPDATE
AS
BEGIN
UPDATE tbUsers
SET
column = INSERTED.column,
…
FROM INSERTED
WHERE INSERTED.key = tbUsers.key
AND INSERTED.IsDeleted = 0
;
DELETE FROM tbUsers
FROM INSERTED
WHERE INSERTED.key = tbUsers.key
AND INSERTED.IsDeleted = 1
;
INSERT INTO tbDeletedUsers (columns)
SELECT columns
FROM INSERTED
WHERE IsDeleted = 1
;
END
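Spelled out with concrete (hypothetical) column names, assuming tbUsers has a key column UserId and, for illustration, only UserName and IsDeleted besides it:
CREATE TRIGGER trg_ArchiveUsers
ON tbUsers
INSTEAD OF UPDATE
AS
BEGIN
    -- apply the update normally for rows that are not being soft-deleted
    UPDATE u
    SET UserName  = i.UserName,
        IsDeleted = i.IsDeleted
    FROM tbUsers u
    JOIN Inserted i ON i.UserId = u.UserId
    WHERE i.IsDeleted = 0;

    -- copy soft-deleted rows to the archive table ...
    INSERT INTO tbDeletedUsers (UserId, UserName, IsDeleted)
    SELECT UserId, UserName, IsDeleted
    FROM Inserted
    WHERE IsDeleted = 1;

    -- ... and remove them from the live table
    DELETE u
    FROM tbUsers u
    JOIN Inserted i ON i.UserId = u.UserId
    WHERE i.IsDeleted = 1;
END
GO
The UPDATE and DELETE against tbUsers inside the trigger do not fire the INSTEAD OF trigger again; SQL Server processes them as if the table had no INSTEAD OF trigger.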
I have two tables
moduleprogress which contains fields:
studentid
modulecode
moduleyear
modules which contains fields:
modulecode
credits
I need a trigger to run when the user is attempting to insert or update data in the moduleprogress table.
The trigger needs to:
look at the studentid that the user has input and find all modules that they have taken in moduleyear "1".
take the modulecode the user has input, look at the modules table, and find the sum of the credits field for all these modules (each module is worth 10 or 20 credits).
if the value is above 120 (yearly credit limit) then it needs to error; if not, input is ok.
Does this make sense? Is this possible?
#a_horse_with_no_name
This looks like it will work, but I will only be using the database to input data manually, so it needs to error on input. I'm trying to get a trigger similar to the one below to solve the problem (the trigger doesn't work); ignore the "UOS_" prefix on everything, it just helps me with my database and other functions.
CREATE OR REPLACE TRIGGER "UOS_TESTINGS"
BEFORE UPDATE OR INSERT ON UOS_MODULE_PROGRESS
REFERENCING NEW AS NEW OLD AS OLD
DECLARE
MODULECREDITS INTEGER;
BEGIN
SELECT
m.UOS_CREDITS,
mp.UOS_MODULE_YEAR,
SUM(m.UOS_CREDITS)
INTO MODULECREDITS
FROM UOS_MODULE_PROGRESS mp JOIN UOS_MODULES m
ON m.UOS_MODULE_CODE = mp.UOS_MODULE_CODE
WHERE mp.UOS_MODULE_YEAR = 1;
IF MODULECREDITS >= 120 THEN
RAISE_APPLICATION_ERROR(-20000, 'Students are only allowed to take upto 120 credits per year');
END IF;
END;
I get the error message :
8 23 PL/SQL: ORA-00947: not enough values
4 1 PL/SQL: SQL Statement ignored
I'm not sure I understand your description, but the way I understand it, this can be solved using a materialized view, which might give better transactional behaviour than the trigger:
CREATE MATERIALIZED VIEW LOG
ON moduleprogress WITH ROWID (modulecode, studentid, moduleyear)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG
ON modules WITH ROWID (modulecode, credits)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_module_credits
REFRESH FAST ON COMMIT WITH ROWID
AS
SELECT pr.studentid,
SUM(m.credits) AS total_credits
FROM moduleprogress pr
JOIN modules m ON pr.modulecode = m.modulecode
WHERE pr.moduleyear = 1
GROUP BY pr.studentid;
ALTER TABLE mv_module_credits
ADD CONSTRAINT check_total_credits CHECK (total_credits <= 120)
But: depending on the size of the table, this might be slower than a pure trigger-based solution.
The only drawback of this solution is that the error will be thrown at commit time, not when the insert happens (because the MV is only refreshed on commit, and the check constraint is evaluated then).
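With this in place, an insert that pushes a student over 120 first-year credits fails at COMMIT, roughly like this (student and module values are made up):
INSERT INTO moduleprogress (studentid, modulecode, moduleyear)
VALUES (12345, 'MOD130', 1);

COMMIT;
-- fails here: the fast refresh of mv_module_credits runs on commit
-- and the check_total_credits constraint rejects the new total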