INSERT ALL with ID using DEFAULT ON NULL fails PK Constraint

Using Oracle 12, I have a table defined similar to this:
CREATE TABLE example
( "ID" NUMBER(*,0) DEFAULT ON NULL ex_seq.nextval NOT NULL ENABLE,
"SIG_BOOK" NUMBER(10,0),
"SIG_LINE" NUMBER(10,0),
"TRANSFER" NUMBER(10,0) DEFAULT NULL,
CONSTRAINT "PK_EXAMPLE_ID" PRIMARY KEY ("ID")
-- snipped
)
When I do standard individual row inserts and leave off the ID, the sequence.nextval is called and the row is properly inserted. But we have to insert up to several thousand rows, so I am trying to use code like this:
INSERT ALL
INTO example (sig_book, sig_line, transfer) VALUES (1,22000006,3436440)
INTO example (sig_book, sig_line, transfer) VALUES (1,22000006,3184718)
SELECT * FROM dual
When using INSERT ALL then the Primary Key constraint is violated.
We can switch back to the standard Trigger/Sequence pair, but were hoping to gain additional performance from using INSERT ALL.
Is there something special I have to do to get this bulk insert to work on a table with the key defined using DEFAULT ON NULL, or do I need to return to the old Trigger/Sequence pair?

It gets a bit clearer when you look at the query plan:
----------------------------------------------------------------------
| Id  | Operation           | Name   | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------
|   0 | INSERT STATEMENT    |        |     1 |     2   (0)| 00:00:01 |
|   1 |  MULTI-TABLE INSERT |        |       |            |          |
|   2 |   SEQUENCE          | EX_SEQ |       |            |          |
|   3 |    FAST DUAL        |        |     1 |     2   (0)| 00:00:01 |
|   4 |  INTO               | EXAMPLE|       |            |          |
|   5 |  INTO               | EXAMPLE|       |            |          |
----------------------------------------------------------------------
Oracle first runs the query (SELECT * FROM dual), drawing a single id from the sequence for the one row it returns, and only afterwards walks through the results, inserting into the target tables. The sequence default is evaluated once per row of the source query, not once per INTO clause, so both INTO clauses receive the same id and the primary key is violated. This approach will not work.
You might try to use an interim table without the id:
CREATE TABLE example_without_id ("SIG_BOOK" NUMBER(10,0), ... );
INSERT ALL
INTO example_without_id (sig_book, sig_line, transfer) VALUES (1,22000006,3436440)
INTO example_without_id (sig_book, sig_line, transfer) VALUES (1,22000006,3184718)
SELECT * FROM dual;
INSERT INTO example (sig_book, sig_line, transfer) SELECT * FROM example_without_id;
Something as simple as putting these records in a file and loading them via SQL*Loader could work as well.
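For illustration, a minimal SQL*Loader sketch; the control-file and data-file names and the comma delimiter are assumptions. With a conventional-path load the id column is simply omitted, so its DEFAULT ON NULL sequence default should fire for every row:
-- example.ctl (hypothetical control file)
LOAD DATA
INFILE 'example.dat'
APPEND
INTO TABLE example
FIELDS TERMINATED BY ','
(sig_book, sig_line, transfer)
You would then run it with something like sqlldr <user>/<password> control=example.ctl.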

How about using INSERT INTO ... SELECT instead?
INSERT INTO example (sig_book, sig_line, transfer)
SELECT 1,22000006,3436440 FROM DUAL UNION ALL
SELECT 1,22000006,3184718 FROM DUAL;

Related

MariaDB - insert record or update timestamp if record exists

Objective: cronjob runs a task; when completed successfully, insert new host record. If record exists, update timestamp to reflect this status.
# Table layout
> describe hosts_completed;
+-----------+---------------------+------+-----+-------------------+-----------------------------+
| Field     | Type                | Null | Key | Default           | Extra                       |
+-----------+---------------------+------+-----+-------------------+-----------------------------+
| id        | bigint(20) unsigned | NO   | PRI | NULL              | auto_increment              |
| timestamp | timestamp           | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| hostname  | varchar(32)         | YES  | MUL | NULL              |                             |
+-----------+---------------------+------+-----+-------------------+-----------------------------+
# Current inventory
> select * from hosts_completed;
+----+---------------------+----------+
| id | timestamp           | hostname |
+----+---------------------+----------+
| 10 | 2020-11-02 12:51:08 | myHost1  |
| 11 | 2020-11-02 14:32:16 | MyHost2  |
+----+---------------------+----------+
I want to update the status for myHost1 and my best shot would be like
> insert into hosts_completed(hostname) values("myHost1") ON DUPLICATE KEY UPDATE timestamp=now();
and it runs but adds a new record, it does not update the myHost1 record.
Where is the glitch?
The on duplicate key syntax requires a unique constraint on the column that is used to detect the conflict. Create it first:
alter table hosts_completed
add constraint unique_hostname
unique (hostname);
Note that this requires the column to contain no duplicates already (otherwise you need to clean up your data before you can create the constraint).
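A quick way to check for offenders, as a sketch against the table above:
select hostname, count(*) as cnt
from hosts_completed
group by hostname
having count(*) > 1;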
Then you can use your current query:
insert into hosts_completed(hostname)
values('myHost1')
on duplicate key update timestamp = now();

How can I insert into a table depending on whether a column value combination already exists in PostgreSQL

I am inserting data into a table that looks like this:
| num | name  | value |
-----------------------
| 1   | name1 | 1     |
| 2   | name2 | 1     |
| 3   | name3 | 1     |
| 4   | name4 | 2     |
| 5   | name5 | 3     |
I want to insert with something like a WHERE clause: insert into table (num, name, value) values (6, name, 1) only when (num and value together) do not already exist together in any row.
I tried selecting first and inserting based on that result, but I don't think that is the best way, and I want it in a single query.
What I tried: select * from the table where name = $name and value = $value; if I got a result then don't insert, otherwise insert. That works, but with two queries, and I don't want that.
Any help will be appreciated.
Use a unique constraint to enforce uniqueness for (num, value):
alter table t add constraint unq_t_num_value unique (num, value);
Then the database ensures the integrity of the table -- that these value pairs are unique -- and you don't have to check it explicitly.
Note that if the unique constraint is violated, you get an error and the insert is aborted (along with any other rows that would have been inserted by the same statement). If you want to skip the conflicting rows instead, you can use ON CONFLICT DO NOTHING.
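A minimal sketch against the constraint above (the row values are made up for illustration):
insert into t (num, name, value)
values (6, 'name6', 1)
on conflict (num, value) do nothing;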

Is it possible to use sequences in MariaDB Server?

A sequence is an alternative way to generate, for example, the primary key for every record in your tables.
This is the common syntax I'm currently using:
CREATE TABLE users(
id BIGINT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100) UNIQUE NOT NULL
);
As you can see, the common solution I implemented is the PRIMARY KEY clause with AUTO_INCREMENT.
But in RDBMSs such as Oracle or PostgreSQL it is possible to use sequences in place of the previous code.
For example in Oracle you declare
CREATE SEQUENCE id
START WITH 1
INCREMENT BY 1;
Or in PostgreSQL in this way
CREATE SEQUENCE id
START WITH 1
INCREMENT BY 1;
But is this possible in MariaDB Server?
MariaDB has supported natively created sequences since version 10.3; the following example shows how it works.
First let's create the sequence:
CREATE SEQUENCE id
START WITH 1
INCREMENT BY 1;
If, for example, we want to see the structure of the newly created sequence id (MariaDB exposes it like a table), just execute the following in the console:
MariaDB [blog]> describe id;
+-----------------------+---------------------+------+-----+---------+-------+
| Field                 | Type                | Null | Key | Default | Extra |
+-----------------------+---------------------+------+-----+---------+-------+
| next_not_cached_value | bigint(21)          | NO   |     | NULL    |       |
| minimum_value         | bigint(21)          | NO   |     | NULL    |       |
| maximum_value         | bigint(21)          | NO   |     | NULL    |       |
| start_value           | bigint(21)          | NO   |     | NULL    |       |
| increment             | bigint(21)          | NO   |     | NULL    |       |
| cache_size            | bigint(21) unsigned | NO   |     | NULL    |       |
| cycle_option          | tinyint(1) unsigned | NO   |     | NULL    |       |
| cycle_count           | bigint(21)          | NO   |     | NULL    |       |
+-----------------------+---------------------+------+-----+---------+-------+
From the above table you can notice important details, such as the fact that the default numeric type is BIGINT.
As we configured it, the sequence starts at 1 and increases by one each time, generating the progressive numbers that can be associated, like a primary key, with a table.
Let's create a new example table in the MariaDB client:
MariaDB [blog]> CREATE TABLE demo (
-> id BIGINT NOT NULL,
-> name VARCHAR(30),
-> PRIMARY KEY (id));
Finally, with an AUTO_INCREMENT primary key it is not necessary to name the column in the INSERT statement, but when we use a sequence we do have to write the name of the column, as in the following example:
MariaDB [blog]> INSERT INTO demo (id, name)
-> VALUES
-> (NEXT VALUE FOR id, 'alpha');
As the previous statement shows, to insert the dynamic value generated by the sequence we invoke NEXT VALUE FOR followed by the name of the sequence, id.
Finally, to see the result of our previous statement, we execute a regular SELECT on the table and obtain the following:
MariaDB [blog]> SELECT * FROM demo;
+----+-------+
| id | name  |
+----+-------+
|  1 | alpha |
+----+-------+
Extra configurations:
Optionally, you can configure the following parameters for a sequence in MariaDB:
MINVALUE: the lower bound of the sequence; you can set it to 1, for example.
MAXVALUE: the upper bound; depending on whether you choose INT or BIGINT, make sure the cap respects the limits of that data type.
CYCLE: sequences default to NOCYCLE; with CYCLE, once the maximum value is reached the counter restarts from the minimum value and the numbering begins again.
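Putting these options together, a hypothetical sequence declaration (the name order_id and the bounds are made up for illustration) could look like this:
CREATE SEQUENCE order_id
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 999999999
CYCLE;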
In a word - yes. That feature is available since version 10.3.
See the documentation for the full details.

Define a unique constraint between combinations in 2 columns and a third column

In the table below, (colA, colB, colC) is the primary key. In colD, I have defined a new type of id which is unique per (colA,colB) combination, that is:
colD = f(colA,colB)
so that (A,1) should give me id_1, (B,2) corresponds to id_2 etc where the ids are integer values. The table below shows a mistake where for (A,1) there are 2 ids - id_1 and id_2.
I would like to enforce a constraint that each pair of values (colA,colB) maps to one and only one value in colD. Of course, I can add a unique constraint to (colA,colB,colC,colD) because (colA,colB,colC) is the primary key, but this won't detect colC and colD changing simultaneously.
I'm not sure what the best way is here.
+------+------+--------+------+
| colA | colB | colC   | colD |
+------+------+--------+------+
| A    | 1    | 180901 | id_1 |
| A    | 1    | 180902 | id_1 |
| A    | 1    | 180903 | id_1 |
| A    | 1    | 180904 | id_2 |
| ...  | ...  | ...    | ...  |
+------+------+--------+------+
You can enforce this constraint using an indexed view:
create table dbo.T (colA char(1) not null, colB int not null,
colC int not null, colD varchar(6) not null,
constraint PK_T PRIMARY KEY (colA,colB,colC))
go
create view dbo.DRI_T
with schemabinding
as
select colA,colB,colD,COUNT_BIG(*) as Cnt
from dbo.T
group by colA,colB,colD
go
create unique clustered index IX_DRI_T on dbo.DRI_T (colA,colB)
go
insert into T(colA,colB,colC,colD)
values ('A',1,180901,'id_1')
go
insert into T(colA,colB,colC,colD)
values ('A',1,180902,'id_1')
go
insert into T(colA,colB,colC,colD)
values ('A',1,180903,'id_1')
go
insert into T(colA,colB,colC,colD)
values ('A',1,180904,'id_2')
go
The error generated by this fourth insert statement is:
Msg 2601, Level 14, State 1, Line 23
Cannot insert duplicate key row in object 'dbo.DRI_T' with unique index 'IX_DRI_T'. The duplicate key value is (A, 1).
The statement has been terminated.
Hopefully you can see how it's working to achieve this error. Of course, it's not reported exactly the same as a direct constraint violation but I think it contains enough information that I'd be happy using this in one of my DBs.
And you can of course pick far better names than DRI_T and IX_DRI_T if you want to make the error more obvious, e.g. IX_DRI_T_colD_mismatch_colA_colB.
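The indexed view also catches the update case the question worries about, where colD changes under an existing (colA, colB) pair. For example, this hypothetical update fails with the same Msg 2601 error, because the view would then contain two rows for (A, 1):
update dbo.T
set colD = 'id_2'
where colA = 'A' and colB = 1 and colC = 180902;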

Are there problems with this 'Soft Delete' solution using EAV tables?

I've read some information about the ugly side of just setting a deleted_at field in your tables to signify a row has been deleted.
Namely
http://richarddingwall.name/2009/11/20/the-trouble-with-soft-delete/
Are there any potential problems with taking a row from a table you want to delete and pivoting it into some EAV tables?
For instance.
Let's say I have two tables, deleted and deleted_rows, described as follows.
mysql> describe deleted;
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| tablename  | varchar(255) | YES  |     | NULL    |                |
| deleted_at | timestamp    | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+
mysql> describe deleted_rows;
+--------+--------------+------+-----+---------+----------------+
| Field  | Type         | Null | Key | Default | Extra          |
+--------+--------------+------+-----+---------+----------------+
| id     | int(11)      | NO   | PRI | NULL    | auto_increment |
| entity | int(11)      | YES  | MUL | NULL    |                |
| name   | varchar(255) | YES  |     | NULL    |                |
| value  | blob         | YES  |     | NULL    |                |
+--------+--------------+------+-----+---------+----------------+
Now, when you wanted to delete a row from any table, you would delete it from that table and then insert it into these two tables, like so:
deleted
+----+-----------+---------------------+
| id | tablename | deleted_at          |
+----+-----------+---------------------+
|  1 | products  | 2011-03-23 00:00:00 |
+----+-----------+---------------------+
deleted_rows
+----+--------+-------------+-------------------------------+
| id | entity | name        | value                         |
+----+--------+-------------+-------------------------------+
|  1 |      1 | Title       | A Great Product               |
|  2 |      1 | Price       | 55.00                         |
|  3 |      1 | Description | You guessed it... it's great. |
+----+--------+-------------+-------------------------------+
A few things I see off the bat:
- You'll need to use application logic to do the pivot (Ruby, PHP, Python, etc.)
- The table could grow pretty big because I'm using blob to handle the unknown size of the row value
Do you see any other glaring problems with this type of soft delete?
Why not mirror your tables with archive tables?
create table mytable(
col_1 int
,col_2 varchar(100)
,col_3 date
,primary key(col_1)
)
create table mytable_deleted(
delete_id int not null auto_increment
,delete_dtm datetime not null
-- All of the original columns
,col_1 int
,col_2 varchar(100)
,col_3 date
,index(col_1)
,primary key(delete_id)
)
And then simply add on-delete triggers on your tables that insert the current row into the mirrored table before the deletion. That would provide you with a dead-simple and very performant solution.
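A minimal sketch of such a trigger in MySQL/MariaDB syntax, using the tables above (the trigger name is made up):
DELIMITER $$
CREATE TRIGGER mytable_before_delete
BEFORE DELETE ON mytable
FOR EACH ROW
BEGIN
  -- Archive the row that is about to be removed; delete_id is auto-generated
  INSERT INTO mytable_deleted (delete_dtm, col_1, col_2, col_3)
  VALUES (NOW(), OLD.col_1, OLD.col_2, OLD.col_3);
END$$
DELIMITER ;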
You could actually generate the tables and trigger code using the data dictionary.
Note that I might not want to have a unique index on the original primary key (col_1) in the archive table, because you may actually end up deleting the same row twice over time if you are using natural keys. Unless you plan to hook up the archive tables in your application (for undo purposes) you can drop the index entirely. Also, I added the time of delete (delete_dtm) and a surrogate key that can be used to delete the deleted (hehe) rows.
You may also consider range partitioning the archive table on delete_dtm. That makes it pretty much effortless to purge data from the tables.
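As a sketch of what that could look like in MySQL/MariaDB (note that the partitioning column must be part of every unique key, so the primary key is widened to include delete_dtm; the partition names and boundaries are made up):
ALTER TABLE mytable_deleted
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (delete_id, delete_dtm);

ALTER TABLE mytable_deleted
PARTITION BY RANGE (TO_DAYS(delete_dtm)) (
  PARTITION p2011 VALUES LESS THAN (TO_DAYS('2012-01-01')),
  PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

Purging a year of deleted rows then becomes a cheap metadata operation:
ALTER TABLE mytable_deleted DROP PARTITION p2011;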