Restrict Insert based on previous insertion date - sql

I want to restrict insertion in my table based on some condition.
My table is like
col1 col2 Date Create
A 1 04/05/2016
B 2 04/06/2016
A 3 04/08/2016 -- Do not allow insert
A 4 04/10/2016 -- Allow insert
So I want to restrict the insert based on the number of days since the same record was last inserted.
As shown in the table example, A can be inserted again only 4 days after its previous insertion, not before that.
Any pointers on how I can do this in SQL/Oracle?

You only want to insert when no record with the same col1 and a too-recent date already exists:
insert into mytable (col1, col2, date_create)
select col1, col2, date_create
from (
      select 'B' as col1, 4 as col2, trunc(sysdate) as date_create from dual
     ) ins
where not exists
      (
        select *
        from mytable other
        where other.col1 = ins.col1
        and other.date_create > ins.date_create - 4
      );
An undesired record would thus simply not be inserted; however, no exception would be raised. If you want one, I'd suggest a PL/SQL block or a before insert trigger.
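For illustration, a minimal PL/SQL sketch of that idea, reusing the names from the example above (adjust values and the error message to taste):

declare
  l_rows pls_integer;
begin
  -- only inserts when no row for the same col1 exists within the last 4 days
  insert into mytable (col1, col2, date_create)
  select col1, col2, date_create
  from (select 'A' as col1, 5 as col2, trunc(sysdate) as date_create from dual) ins
  where not exists (select *
                    from mytable other
                    where other.col1 = ins.col1
                    and other.date_create > ins.date_create - 4);

  l_rows := sql%rowcount;
  if l_rows = 0 then
    raise_application_error(-20001,
      'Insert rejected: same col1 was inserted less than 4 days ago');
  end if;
end;
/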

If several processes write to your table simultaneously with possibly conflicting data, then the Oracle database itself should do the job of enforcing the rule.
This can be solved by defining a constraint that checks whether an entry with the same col1 value younger than four days already exists.
As far as I know, it is not possible to define such a constraint directly. Instead, define a materialized view and add a constraint on this view.
create materialized view mytable_mv refresh on commit as
select f2.col1, f2.date_create, f1.date_create as date_create_conflict
from mytable f2, mytable f1
where f2.col1 = f1.col1
and f2.date_create > f1.date_create
and f2.date_create - f1.date_create < 4;
This materialized view will contain an entry if and only if a conflict exists.
Now define a constraint on this view:
alter table mytable_mv add constraint check_date_create
  check (date_create = date_create_conflict) deferrable;
The check is executed when the current transaction is committed (because the materialized view is then refreshed, as declared above with refresh on commit).
This works fine if you insert into your table mytable in an autonomous transaction, e.g. for a logging table.
In other cases, you can force the refresh on the materialized view by dbms_mview.refresh('mytable_mv') or use another option than refresh on commit.
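For example, a manual refresh can be triggered from PL/SQL like this (the view name matches the one created above):

begin
  dbms_mview.refresh('MYTABLE_MV');
end;
/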

Related

Which delete statement is better for deleting millions of rows?

I have table which contains millions of rows.
I want to delete all the data which is over a week old based on the value of column last_updated.
so here are my two queries,
Approach 1:
Delete from A where to_date(last_updated, 'yyyy-mm-dd') < sysdate-7;
Approach 2:
l_lastupdated varchar2(255) := to_char(sysdate-nvl(p_days,7),'YYYY-MM-DD');
insert into B(ID) select ID from A where LASTUPDATED < l_lastupdated;
delete from A where id in (select id from B);
Which one is better considering performance, safety and locking?
Assuming the delete removes a significant fraction of the data & millions of rows, approach three:
create table tmp as
  select * from A
  where to_date(last_updated, 'yyyy-mm-dd') >= sysdate - 7;
drop table a;
rename tmp to a;
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2345591157689
Obviously you'll need to copy over all the indexes, grants, etc. But online redefinition can help with this https://oracle-base.com/articles/11g/online-table-redefinition-enhancements-11gr1
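As a rough, hedged sketch of driving DBMS_REDEFINITION, which takes care of copying indexes, grants, triggers and so on (the interim table name A_INTERIM is an assumption and must be created beforehand; see the linked articles for the prerequisites):

declare
  l_errors pls_integer;
begin
  dbms_redefinition.start_redef_table(user, 'A', 'A_INTERIM');
  -- copies dependent objects such as indexes, grants, triggers and constraints
  dbms_redefinition.copy_table_dependents(user, 'A', 'A_INTERIM',
                                          num_errors => l_errors);
  dbms_redefinition.finish_redef_table(user, 'A', 'A_INTERIM');
end;
/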
When you get to 12.2, there's another simpler option: a filtered move.
This is an alter table move operation, with an extra clause stating which rows you want to keep:
create table t (
c1 int
);
insert into t values ( 1 );
insert into t values ( 2 );
commit;
alter table t
move including rows where c1 > 1;
select * from t;
C1
2
While you're waiting to upgrade to 12.2+ and if you don't want to use the create-as-select method for some reason then approach 1 is superior:
Both methods delete the same rows from A* => it's the same amount of work to do the delete
Option 1 has one statement; Option 2 has two statements; 2 > 1 => option 2 is more work
*Statement level consistency means you might get different results running the processes. Say another session tries to update an old row that your process will remove.
With just the delete, the update will be blocked until the delete finishes. At which point the row's gone, so the update does nothing.
Whereas if you do the insert first, the other session can update & commit the row before the insert completes. So the update "succeeds". But the delete will then remove it! Which can lead to some unhappy customers...
Your stored dateformat seems suitable for proper sorting, so you could go the other way round and convert sysdate to string:
--this is false today
select * from dual where '2019-06-05' < to_char(sysdate-7, 'YYYY-MM-DD');
--this is true today
select * from dual where '2019-05-05' < to_char(sysdate-7, 'YYYY-MM-DD');
So it would be:
Delete from A where last_updated < to_char(sysdate-7, 'yyyy-mm-dd');
It has the benefit that your default index (if there is any) will be used.
It has the disadvantage of relying on the string/varchar ordering, which might change, e.g. with NLS changes (if I remember right), so in any case you should do a little testing first...
In the long term, you should of course alter the column to a proper date datatype, but I guess that doesn't help you right now ;)
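Should you get the chance, a hedged sketch of that migration could look like this (table and column names assumed):

-- add a real DATE column, backfill it, then swap the names
alter table A add (last_updated_dt date);

update A
set last_updated_dt = to_date(last_updated, 'YYYY-MM-DD');

alter table A drop column last_updated;
alter table A rename column last_updated_dt to last_updated;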
If you are trying to delete most of the rows in the table, I would advise you go with a different approach, namely:
create <new table name> as
select *
from <old table name>
where <predicates for the data you want to keep>;
then
drop table <old table name>;
and finally you can rename the new table back to the old table.
You could always partition the new table (i.e. create the new table with a separate statement containing the partitioning clauses, and then have an insert as select into the new table from the old table).
That way, when you need to delete rows, it's a simple matter of dropping the relevant partition(s).
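For example, a hedged sketch of the partitioned variant (column names and partition boundaries are assumptions):

create table A_new (
  id           number,
  last_updated date
)
partition by range (last_updated) (
  partition p_w1   values less than (date '2019-06-03'),
  partition p_w2   values less than (date '2019-06-10'),
  partition p_rest values less than (maxvalue)
);

insert into A_new
select id, to_date(last_updated, 'YYYY-MM-DD') from A;

-- purging a week of old data then becomes:
alter table A_new drop partition p_w1;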

How to use multiple identity numbers in one table?

I have a web application that creates printable forms; these forms have a unique number on them. The problem is that I have 2 forms for which separate number ranges need to be created.
ie)
Form1- Numbered 2000000-2999999
Form2- Numbered 3000000-3999999
dbo.test2 - is my form information table
Tsel - is my autoinc table for the 3000000 series numbers
Tadv - is my autoinc table for the 2000000 series numbers
What I have done is create 2 tables with just an autoinc row (one for the 2000000 series numbers and one for the 3000000 series numbers). I then created a trigger that adds a record to the corresponding table, reads back the autoinc number, and adds it to the table that stores the form information, including the just-created autoinc number for the right series of forms.
Although it does work, I'm concerned that the numbers will get messed up under load.
I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates and I need to use the numbering scheme shown above.)
See code below.
**** TRIGGER ****
CREATE TRIGGER MAKEANID2 ON dbo.test2
AFTER INSERT
AS
SET NOCOUNT ON
declare @someid int
declare @someid2 int
declare @startfrom int
declare @test1 varchar(10)
select @someid = @@IDENTITY
select @test1 = (Select name1 from test2 where sysid = @someid)
if @test1 = 'select'
begin
insert into Tsel Default values
select @someid2 = @@IDENTITY
end
if @test1 = 'adv'
begin
insert into Tadv Default values
select @someid2 = @@IDENTITY
end
update test2
set name2 = (@someid2) where sysid = @someid
SET NOCOUNT OFF
The best way to keep the two IDs in sync is to create a persisted computed column based on the actual identity column, where Col1 is the identity column and Col2 is the persisted computed column derived from some formula based on Col1. You can then even create indexes on computed columns.
test this out:
CREATE TABLE YourTable
(Col1 int not null identity(2000000,1)
,Col2 AS (Col1-2000000+3000000) PERSISTED
,Col3 varchar(5)
)
GO
insert into YourTable (col3) values ('a')
insert into YourTable (col3) SELECT 'b' UNION SELECT 'c'
SELECT * FROM YourTable
OUTPUT:
Col1 Col2 Col3
----------- ----------- -----
2000000 3000000 a
2000001 3000001 b
2000002 3000002 c
(3 row(s) affected)
EDIT: After the OP's comments, I'm still not 100% sure what you are after.
I never used SQL Server 2000 (we skipped that version), and I don't really want to look up how to do everything in that version; it is so limited without the OUTPUT clause, ROW_NUMBER(), CTEs, etc.
I can think of three methods to do this:
1) You could just create a sequence table where you have 2 rows, one for A and one for B. Each time you need to insert a row, look up, increment, and save the value of the sequence type you need, then insert with that value. For example, if you are inserting a type "A" row, do this:
INSERT INTO test2
(col1, col2, col3,...)
SELECT
ISNULL(MAX(NextSeq),0)+1, col2, col3,...
FROM YourSequenceTable WITH (UPDLOCK, HOLDLOCK)
WHERE SequenceType='A'
UPDATE YourSequenceTable
SET NextSeq=ISNULL(NextSeq,0)+1
WHERE SequenceType='A'
2) Change your table structure to just save the data in Tsel or Tadv and have a trigger insert into a third common table where you can have your additional "common" identity. The common table would be like:
CREATE TABLE CommonTable
(ID     int not null identity(1,1) primary key
,TselID int null  -- FK to Tsel.PK
,TadvID int null  -- FK to Tadv.PK
)
3) If you need a single table, try this, which is a real hack. Change your Tsel and Tadv tables to contain all the necessary columns. From the application, INSERT INTO Tsel when the value is 'select' and have a trigger grab that identity value, INSERT it into test2, and then remove the data from Tsel. Likewise, when the value is 'adv', INSERT INTO Tadv and have a trigger on that table insert the data into test2 and remove the data from Tadv. You need to have all data columns in Tsel and Tadv so the trigger can copy the values to test2, but the trigger will remove the rows from there (the identity will stay sequential even if the original rows are removed).
your Tsel trigger would look like:
CREATE Trigger MAKEANID2_Tsel ON dbo.Tsel
AFTER INSERT
AS
--copy data from Tsel into test2., test2 can still have its own identity value
INSERT INTO test2
(PK, col1, col2, col3,...)
SELECT
col0, col1, col2, col3,....
FROM INSERTED
--remove rows from Tsel, which were just copied and not needed anymore.
DELETE Tsel
WHERE PK IN (SELECT PK FROM INSERTED)
GO
You are right to worry about @@IDENTITY; it is not a recommended piece of code. If someone else adds a different trigger that inserts an identity and that trigger fires first, that is the value you will get.
But you have much bigger problems. Your trigger is designed to work on only one record at a time. This is a very, very bad thing to do with a trigger. Triggers operate on sets of data and must ALWAYS (even if you think there will never be more than one record inserted at a time) be set up to handle sets of data, not one record. Further, you don't need to ask for the identity; you have the identities of all records inserted in the batch in a pseudotable available in triggers called inserted.
Now, reading one of your comments, you say you can't have any missing values at all. In that case you cannot under any circumstances use an identity column, as it will have gaps if any transaction is rolled back. You will have to write your own process to create the numbers based on the last number, and look out for race conditions.
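A minimal sketch of such a process, using a key table and explicit locking to serialize the number handout (the FormNumbers table and its columns are assumptions, not from the original post):

BEGIN TRAN

DECLARE @next int

-- grab and increment the next number, holding the lock until commit
UPDATE FormNumbers WITH (UPDLOCK, HOLDLOCK)
SET @next = NextVal = NextVal + 1
WHERE FormType = 'select'

INSERT INTO test2 (name1, name2)
VALUES ('select', @next)

COMMIT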

Foreign Key reference in mysql innodb

Simply put: I have a table with 2 columns: one is username and the other is nickname.
I have another table with 2 columns: one is username and the other is countNicknames.
I want countNicknames to contain the number of nicknames each user has (and whenever I insert a new nickname into table1, the value under table2.countNicknames should automatically update).
Can you please write down how to construct the second table to reference the first one?
Why not just count when you need the value?
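For instance, a plain aggregate query returns the count on demand (table name assumed):

SELECT username, COUNT(*) AS countNicknames
FROM table1
GROUP BY username;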
I want countNicknames to contain the number of nicknames each user has (and whenever I insert a new nickname into table1, the value under table2.countNicknames should automatically update).
This is effectively what a view will do.
CREATE OR REPLACE VIEW user_nickname_count AS
SELECT t.username,
COUNT(*) 'countNicknames'
FROM TABLE t
GROUP BY t.username
A view looks like a table, so you can join to it if needed. And the only overhead is that it effectively is just a SELECT query being run - the values are calculated on demand, rather than having to set up triggers.
For more info on views, see the documentation.
Well, @Lasse's suggestion is better, but ... there are two other options...
Does MySql have triggers? If it does, then you could add an Insert, Update, Delete trigger on the first table that updates (or inserts or deletes as necessary) the second table's CountNickNames attribute every time a record is inserted, updated or deleted in the first table...
Create Trigger NickNameCountTrig On NickNameCountTable
For Insert, Update, Delete
As
Update nct Set
CountNickNames =
(Select Count(*) From FirstTable
Where Name = nct.Name)
From NickNameCountTable nct
Where Name In (Select Distinct Name from inserted
Union
Select Distinct Name From deleted)
-- -----------------------------------------------
Insert NickNameCountTable (Name, CountNickNames)
Select name, count(*) from FirstTable ft
Where Not Exists
(Select * From NickNameCountTable
Where Name = ft.Name)
-- ------ This is optional -----------------------
Delete NickNameCountTable
Where CountNickNames = 0
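If your MySQL version does support triggers, a hedged MySQL-dialect sketch of the same idea could look like this (table and column names are assumptions, and table2.username is assumed to have a unique/primary key):
DELIMITER //
CREATE TRIGGER nickname_count_ai
AFTER INSERT ON table1
FOR EACH ROW
BEGIN
  -- relies on a unique/primary key on table2.username
  INSERT INTO table2 (username, countNicknames)
  VALUES (NEW.username, 1)
  ON DUPLICATE KEY UPDATE countNicknames = countNicknames + 1;
END//
DELIMITER ;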
Does MySql have indexed views (or some equivalent)? Apparently not - so never mind this option ....

Row number in Sybase tables

Sybase db tables do not have a concept of self-updating row numbers. However, for one of the modules, I require a row number corresponding to each row in the database such that max(Column) would always tell me the number of rows in the table.
I thought I'd introduce an int column and keep updating it to keep track of the row number. However, I'm having problems updating this column in the case of deletes. What SQL should I use in the delete trigger to update this column?
You can easily assign a unique number to each row by using an identity column. The identity can be a numeric or an integer (in ASE12+).
This will almost do what you require. There are certain circumstances in which you will get a gap in the identity sequence. (These are called "identity gaps", the best discussion on them is here). Also deletes will cause gaps in the sequence as you've identified.
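A minimal sketch of such an identity column in ASE (column names are illustrative):
create table myTable
(
    rownum numeric(10,0) identity,
    col1   varchar(30)   not null
)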
Why do you need to use max(col) to get the number of rows in the table, when you could just use count(*)? If you're trying to get the last row from the table, then you can do
select * from table where column = (select max(column) from table).
Regarding the delete trigger to update a manually managed column, I think this would be a potential source of deadlocks, and many performance issues. Imagine you have 1 million rows in your table, and you delete row 1, that's 999999 rows you now have to update to subtract 1 from the id.
Delete trigger
CREATE TRIGGER tigger ON myTable FOR DELETE
AS
update myTable
set id = id - (select count(*) from deleted d where d.id < t.id)
from myTable t
To avoid locking problems
You could add an extra table (which joins to your primary table) like this:
CREATE TABLE rowCounter
(id int, -- foreign key to main table
rownum int)
... and use the rownum field from this table.
If you put the delete trigger on this table then you would hugely reduce the potential for locking problems.
Approximate solution?
Does the table need to keep its rownumbers up to date all the time?
If not, you could have a job which runs every minute or so, which checks for gaps in the rownum, and does an update.
Question: do the rownumbers have to reflect the order in which rows were inserted?
If not, you could do far fewer updates, but only updating the most recent rows, "moving" them into gaps.
Leave a comment if you would like me to post any SQL for these ideas.
I'm not sure why you would want to do this. You could experiment with using temporary tables and "select into" with an Identity column like below.
create table test
(
col1 int,
col2 varchar(3)
)
insert into test values (100, "abc")
insert into test values (111, "def")
insert into test values (222, "ghi")
insert into test values (300, "jkl")
insert into test values (400, "mno")
select rank = identity(10), col1 into #t1 from Test
select * from #t1
delete from test where col2="ghi"
select rank = identity(10), col1 into #t2 from Test
select * from #t2
drop table test
drop table #t1
drop table #t2
This would give you a dynamic id (of sorts)

How to call a PL SQL function within a CHECK statement?

I would like to add a CHECK statement that calls a function when inserting new entries into a table.
I have used the following sample code to implement such functionality:
CREATE TABLE customers(
id NUMBER NOT NULL,
PRIMARY KEY(id));
CREATE OR REPLACE FUNCTION totalCustomers
RETURN NUMBER IS
total NUMBER := 0;
BEGIN
SELECT count(*) into total
FROM customers;
RETURN total;
END;
/
ALTER TABLE customers
ADD CHECK (totalCustomers() < 10);
When I run this query in livesql.oracle.com, I get the following error:
ORA-00904: "TOTALCUSTOMERS": invalid identifier.
What is the right way of calling this function in the check statement?
P.S. Please ignore the contents of the function. I will replace it with the desired contents later.
There isn't one.
Straight from the Oracle documentation:
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/constraint.html#GUID-1055EA97-BA6F-4764-A15F-1024FD5B6DFE
Conditions of check constraints cannot contain the following constructs:
...
Calls to user-defined functions
...
Now: you said "disregard the actual content of the function". That is not a healthy attitude; the content matters too. For one thing, the function would have to be deterministic anyway (yours is not) - this is a problem quite apart from it being a user-defined function. Moreover, conditions in constraints can only refer to values in a single row - they can't be "table" constraints, like yours is.
You may wonder, then - how would one implement a "constraint" like yours? One somewhat common method is to create a materialized view based on "select count(*)....." and put a constraint on the MV. The MV should refresh on commit. Whenever you modify the base table and you commit, the MV is refreshed - and if the count increases above 10, the changes are rolled back.
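A hedged sketch of what that could look like for the 10-customer rule, here using fast refresh with a materialized view log (object names are assumptions; the exact refresh requirements depend on your version):

create materialized view log on customers
  with rowid including new values;

create materialized view customers_count_mv
  refresh fast on commit
as
  select count(*) cnt from customers;

-- any commit that pushes the count to 10 or more fails
alter table customers_count_mv
  add constraint max_ten_customers check (cnt < 10) deferrable;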
In your comment on mathguy's answer, you say "I am also trying to make sure the time period of new entries don't overlap with existing ones." I have done this with "refresh fast on commit" materialized views. Warning: "fast" refreshes can be slow if you are not careful; please refer to this blog http://www.adellera.it/ , especially concerning statistics on the materialized view log.
I am assuming exclusive end dates. If an end date is null, that means the datetime range goes on indefinitely. Many overlaps will be caught immediately by the primary key and unique constraints. The others will be caught at commit time by the constraint on the materialized view. Note that at the end of the transaction the MV will never have any rows.
SQL> create table date_ranges (
2 key1, start_date,
3 primary key(key1, start_date),
4 end_date,
5 unique(key1, end_date),
6 check(start_date < end_date)
7 )
8 as
9 with a as (select date '2000-01-01' dte from dual)
10 select 1, dte, dte+1 from a
11 union all
12 select 1, dte+1, dte+2 from a
13 union all
14 select 1, dte-1, dte from a
15 union all
16 select 2, dte+10, dte+11 from a
17 union all
18 select 2, dte+12, dte+13 from a
19 union all
20 select 2, dte+8, dte+9 from a
21 /
Table DATE_RANGES created.
SQL> create materialized view log on date_ranges
2 with sequence, rowid, primary key, commit scn (end_date) including new values
3 /
Materialized view log DATE_RANGES created.
SQL> create materialized view overlapping_ranges refresh fast on commit
2 as
3 select a.rowid arid, b.rowid brid
4 from date_ranges a, date_ranges b
5 where a.key1 = b.key1
6 and a.rowid != b.rowid
7 and a.start_date < b.end_date
8 and a.end_date > b.start_date;
Materialized view OVERLAPPING_RANGES created.
SQL>
SQL> alter materialized view overlapping_ranges
2 add constraint overlaps_not_allowed check (1=0) deferrable initially deferred
3 /
Materialized view OVERLAPPING_RANGES altered.
SQL> insert into date_ranges select 1, date '1999-12-30', date '2000-01-4' from dual;
1 row inserted.
SQL> commit;
Error starting at line : 42 in command -
commit
Error report -
ORA-02091: transaction rolled back
ORA-02290: check constraint (STEW.OVERLAPS_NOT_ALLOWED) violated
I would suggest a trigger for such a requirement.
CREATE OR REPLACE TRIGGER AI_customers
AFTER INSERT ON customers
DECLARE
total NUMBER;
BEGIN
SELECT count(*) into total
FROM customers;
IF total > 10 THEN
RAISE_APPLICATION_ERROR(-20001, 'Total number of customers must not exceed 10');
END IF;
END;
/
Note, this is a STATEMENT LEVEL trigger (no FOR EACH ROW clause), thus you cannot get the famous "ORA-04091: table is mutating, trigger/function may not see it" error.
However, this trigger has some limitations in a multi-user environment. If user_1 inserts records into customers table and user_2 also inserts some records (before user_1 has done a COMMIT) then you may get more than 10 records in your customers table.