Get just-inserted/modified record based on updated_timestamp - SQL

I want to get just the inserted/modified record based on updated_timestamp.
I have the following scenario for a DB2 database:
An insert or update query is fired against the DB. The table contains an updated_timestamp column which captures the insert or update time.
I want to get only my previously inserted/updated record, using a select query.
Example:
insert into table_name values (x, y, CURRENT TIMESTAMP);
I want to get the record inserted above using a select such as:
select * from table_name where updated_timestamp > ?
With what value should I replace the ?, so that the above query returns me the latest inserted record as x, y, <time_stamp>?

If I understand what you're asking, couldn't you use a subquery pulling the max(updated_timestamp) and the other values from the table, and use that to filter to only the most recently updated record for each one?
Something like this:
insert into table_name (x, y, updated_timestamp)
select table_name.x, table_name.y, CURRENT TIMESTAMP
from table_name join (select x, y, max(updated_timestamp) as updated_timestamp
                      from table_name
                      group by x, y) table_name2
on table_name.x = table_name2.x and table_name.y = table_name2.y
and table_name.updated_timestamp = table_name2.updated_timestamp
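If the goal is just to read back the most recent row per (x, y) rather than insert it, the same idea as a plain SELECT might look like this; only a sketch, reusing the column names from the question:
select t.x, t.y, t.updated_timestamp
from table_name t
join (select x, y, max(updated_timestamp) as updated_timestamp
      from table_name
      group by x, y) latest
on t.x = latest.x and t.y = latest.y
and t.updated_timestamp = latest.updated_timestamp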

If your DB2 version has this option, you can use FINAL TABLE like this:
SELECT updated_timestamp
FROM FINAL TABLE (INSERT INTO table_name (X, Y, updated_timestamp)
VALUES(valueforX, valueforY, CURRENT TIMESTAMP));
See the IBM Doc.
You can use a variable too:
CREATE OR REPLACE VARIABLE YOURLIB.MYTIMESTAMP TIMESTAMP DEFAULT CURRENT TIMESTAMP;
INSERT INTO table_name (X, Y, updated_timestamp)
VALUES(valueforX, valueforY, YOURLIB.MYTIMESTAMP);
But the best solution is to add a primary key to your table and then retrieve the timestamp by that primary key afterwards, as sketched below.
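A minimal sketch of that primary-key approach, assuming an id column is the key (the column name id and the value 42 are just illustrative):
INSERT INTO table_name (id, x, y, updated_timestamp)
VALUES (42, valueforX, valueforY, CURRENT TIMESTAMP);
SELECT updated_timestamp
FROM table_name
WHERE id = 42;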
A suggestion: you currently use a trigger to update the last timestamp. Maybe you can use an auto-maintained row change timestamp instead, like this:
CREATE TABLE table_name
(
X VARCHAR(36),
Y VARCHAR(36),
CHANGE_TS TIMESTAMP FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP NOT NULL
)
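With such a column, inserts and updates don't supply the timestamp at all; DB2 maintains CHANGE_TS itself. A hedged usage sketch:
INSERT INTO table_name (X, Y) VALUES ('valueforX', 'valueforY');
-- CHANGE_TS was generated by DB2; filter on it to pick up recent changes
SELECT X, Y, CHANGE_TS
FROM table_name
WHERE CHANGE_TS > ?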

Related

Replace value by fkey after moving data to related table

A numeric column should be extended to hold multiple values, i.e. it should reference a separate entity instead. SQL only (Postgres specifically, if no standard solution is available).
Schema now:
Table X with columns ID, VAL, STUFF
Table Y with columns ID, VAL1, VAL2
What I want to achieve:
Table X with columns ID, YID, STUFF
Table Y won't be altered (neither existing data touched)
Table Y gets inserts for all rows of table X where X.VAL should be inserted as Y.VAL1. Y.ID auto-incremented, Y.VAL2 may remain NULL. Table X should then be updated to hold Y's ID as foreign key X.YID instead of the actual value X.VAL that is now stored in Y.VAL1.
Somehow I think it has to be possible to achieve that with a clean SQL-only solution. What I've found so far:
create some PL/pgSQL script: just loop over table X, insert the stuff into table Y row by row returning the ID, and update table X
plain SQL: get the number of entries in table Y, INSERT INTO Y with SELECT FROM X ... ORDER BY ID, then INSERT INTO X with SELECT FROM Y ... skipping the number of entries that were there before, so the order should remain stable. I really don't like that solution. It sounds dirty to me.
Any suggestions? Or is it better to go with PL/pgSQL?
TIA
There is a third option: a single SQL statement. Postgres allows DML within a CTE. So create a CTE that performs the insert and returns the generated id. Use the returned id in the main query which updates the original table. This then does what you are looking for in a single SQL statement.
with cte as
  ( insert into y(val1)
    select val
    from x
    returning y.id, y.val1
  )
update x
set val = cte.id
from cte
where x.val = cte.val1;
Then assuming you want to maintain referential integrity:
alter table x
add constraint x2y_fk
foreign key (val)
references y(id) ;
See the demo. Note: the demo copies both val and stuff from table x into table y. This was strictly for demonstration purposes and is not necessary.
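If you also want the column name from the question (YID instead of VAL), a rename afterwards should get you there; this is just a suggestion, not part of the demo:
alter table x rename column val to yid;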

SQL Insert existing/duplicate row into table but change only one column value?

I have an Audit table with more than 50 columns and need to insert a desired row (duplicate it) while changing just one column value (column CreatedDate, set to GETDATE()). I know this can be achieved by INSERT INTO ... SELECT * FROM, but as there are more than 50 columns, the code would get messy:
INSERT INTO Audit_Table (Col1, Col2, .....CreatedDate, ......, Col50, ...ColN)
SELECT Col1, Col2, .....GETDATE(), ......, Col50, ...ColN FROM Audit_Table
WHERE Audit_Table.Id = Desired_Id
If I didn't have to change the CreatedDate column value, it would be very simple:
INSERT INTO Audit_Table SELECT * FROM Audit_Table WHERE Audit_Table.ID = Desired_Id
Is there any other way to duplicate row and change only one/desired column value?
You can insert the record into a temporary table, update the CreatedDate column to GETDATE(), then insert it into the Audit_Table.
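A minimal T-SQL sketch of that temp-table approach; @DesiredId is a placeholder, and it assumes Audit_Table has no IDENTITY column (otherwise the final INSERT needs an explicit column list):
SELECT *
INTO #AuditCopy
FROM Audit_Table
WHERE Id = @DesiredId;
UPDATE #AuditCopy
SET CreatedDate = GETDATE();
INSERT INTO Audit_Table
SELECT * FROM #AuditCopy;
DROP TABLE #AuditCopy;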
No. There is no way to say * except column_foo in SQL.
The workaround would be to generate the
SELECT
col1
, col2
, [...]
, coln
FROM foo;
statement (or parts of it) by querying the database's system catalogue for the column names in their order. There is always a table with all tables and a table with all columns.
Then make sure you put the necessary commas in the right place (or remove them where you don't need them, or generate the comma in every row of the report except the first, by using the ROW_NUMBER() OLAP function and checking whether it returns 1). Finally, edit the right date column by replacing it with CURRENT_DATE or whatever your database uses for the current date.
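For example, in SQL Server (which the GETDATE() in the question suggests) the column list can be generated from INFORMATION_SCHEMA; this is only a sketch, and STRING_AGG needs SQL Server 2017 or later:
SELECT STRING_AGG(
           CASE WHEN COLUMN_NAME = 'CreatedDate'
                THEN 'GETDATE() AS CreatedDate'
                ELSE QUOTENAME(COLUMN_NAME)
           END, ', ')
       WITHIN GROUP (ORDER BY ORDINAL_POSITION) AS select_list
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Audit_Table';
The resulting string can then be pasted in (or executed dynamically) as the SELECT list of the INSERT ... SELECT.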
Good luck -
Marco
You can build upon your existing idea: just duplicate the row (I assume you have an auto-incrementing primary key column) and then update the time in a separate query, i.e.
Do this :
INSERT INTO Audit_Table SELECT * FROM Audit_Table WHERE Audit_Table.ID = Desired_Id
And then :
UPDATE Audit_Table SET CreatedDate = GETDATE() WHERE primaryKeyID = newPrimaryKeyID
Hope this helps!!!
Try the below as a reference.
You can use the below statement to copy all rows:
insert into newstudent select * from students;
You can use the below statement to copy a specific row from the table:
insert into newstudent
select id, name, age, address
from students
where id = 1248;
You can use the below statement to copy either of two rows from the table:
insert into newstudent
select id, name, age, address
from students
where id = 1248 or id = 1249;
You can also use a LIMIT clause along with this.

Shorthand for inserting row into history table with SQL

I've got a trigger which copies a row into a history table whenever it is updated or deleted.
As of now I'm doing:
INSERT INTO history (column_x, column_y, column_z) VALUES (X, Y, Z);
Is it possible to shorthand it with:
INSERT INTO history VALUES (OLD)
The above does not work, but it gives an idea of what I'm looking for.
The columns match exactly as I've created the history table with:
CREATE TABLE history (LIKE original)
You should have some primary key defined on the table. Then you can do an insert ... select statement:
INSERT INTO history
SELECT * FROM notHistory WHERE ID = #ID
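For what it's worth, if this is Postgres (which the CREATE TABLE history (LIKE original) syntax suggests), a PL/pgSQL trigger can usually copy the whole old row without listing columns. A hedged sketch, with made-up function and trigger names:
CREATE OR REPLACE FUNCTION archive_row() RETURNS trigger AS $$
BEGIN
    INSERT INTO history SELECT OLD.*;  -- expands to every column of the old row
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER original_history
AFTER UPDATE OR DELETE ON original
FOR EACH ROW EXECUTE PROCEDURE archive_row();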

How do I add an auto incrementing column to an existing vertica table?

I have a table that currently has the following structure
id, row1
(null), 232
(null), 4455
(null), 16
I'd like for id to be an auto incrementing primary key, as follows:
id, row1
1, 232
2, 4455
3, 16
I've read the documentation and it looks like the function that I need is AUTO_INCREMENT and that I can edit the table using an ALTER TABLE statement. However, I can't seem to get the syntax quite right. How do I go about doing this? Is it even possible with a pre-existing table?
What you need to do is the following:
create a new sequence:
CREATE SEQUENCE sequence_auto_increment START 1;
create a new table:
create table tab2 as select * from tab1 limit 0;
insert the data:
insert /*+ direct */ into tab2
select NEXTVAL('sequence_auto_increment'),row1 from tab1;
As @Kermit mentioned, the best way to do it in Vertica is to recreate the table once instead of altering it multiple times; use the direct hint so you skip the WOS storage (much faster).
As for the column constraint that @Nazmul created, I wouldn't use it: Vertica doesn't care too much about constraints, you need to explicitly insert what you want, and default constraints are not the way to backfill existing rows.
You need to update your existing data with something like the below:
UPDATE t1
SET id = t2.id
FROM
(
    SELECT primaryKey, RANK() OVER (ORDER BY row1) AS id
    FROM t1
) AS t2
WHERE t1.primaryKey = t2.primaryKey;
Then you alter your table using the syntax below:
-- get the value to start the sequence at
SELECT MAX(id) FROM t1;
-- create the sequence
CREATE SEQUENCE seq1 START 5;
-- syntax as of Vertica 6.1
-- modify the column to add the next value for future rows
ALTER TABLE t1 ALTER COLUMN id SET DEFAULT NEXTVAL('seq1');
If you want to use the AUTO_INCREMENT feature:
1) Copy the data to a temp table
2) Recreate the base table with the id column using AUTO_INCREMENT
3) Copy the data back for the other columns (see the sketch below)
If you just want the numbers filled in, refer to the other answer by Nazmul.
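A hedged sketch of that recreate approach, reusing tab1 from the earlier answer (AUTO_INCREMENT details may vary by Vertica version):
-- 1) copy the existing data aside
CREATE TABLE tab1_backup AS SELECT row1 FROM tab1;
-- 2) recreate the base table with an auto-incrementing id column
DROP TABLE tab1;
CREATE TABLE tab1 (
    id   AUTO_INCREMENT,
    row1 INT
);
-- 3) copy the data back; id is generated automatically
INSERT /*+ direct */ INTO tab1 (row1) SELECT row1 FROM tab1_backup;
DROP TABLE tab1_backup;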

INSERT new row if value does not exist and get id either way

I would like to insert a record into a table and if the record is already present get its id, otherwise run the insert and get the new record's id.
I will be inserting millions of records and have no idea how to do this in an efficient manner. What I am doing now is to run a select to check if the record is already present, and if not, insert it and get the inserted record's id. As the table is growing I imagine that SELECT is going to kill me.
What I am doing now in python with psycopg2 looks like this:
select = ("SELECT id FROM ... WHERE ...", [...])
cur.execute(*select)
if not cur.rowcount:
insert = ("INSERT INTO ... VALUES ... RETURNING id", [...])
cur.execute(*insert)
rid = cur.fetchone()[0]
Is it maybe possible to do something in a stored procedure like this:
BEGIN
    EXECUTE sql_insert;
    RETURN id;
EXCEPTION WHEN unique_violation THEN
    -- return id of already existing record
    -- from the exception info ?
END;
Any ideas on how to optimize a case like this?
First off, this is obviously not an UPSERT as UPDATE was never mentioned. Similar concurrency issues apply, though.
There will always be a race condition for this kind of task, but you can minimize it to an extremely tiny time slot, while at the same time querying for the ID only once with a data-modifying CTE (introduced with PostgreSQL 9.1):
Given a table tbl:
CREATE TABLE tbl(tbl_id serial PRIMARY KEY, some_col text UNIQUE);
Use this query:
WITH x AS (SELECT 'baz'::text AS some_col) -- enter value(s) once
, y AS (
SELECT x.some_col
, (SELECT t.tbl_id FROM tbl t WHERE t.some_col = x.some_col) AS tbl_id
FROM x
)
, z AS (
INSERT INTO tbl(some_col)
SELECT y.some_col
FROM y
WHERE y.tbl_id IS NULL
RETURNING tbl_id
)
SELECT COALESCE(
(SELECT tbl_id FROM z)
,(SELECT tbl_id FROM y)
);
CTE x is only for convenience: enter values once.
CTE y retrieves tbl_id - if it already exists.
CTE z inserts the new row - if it doesn't.
The final SELECT avoids running another query on the table with the COALESCE construct.
Now, this can still fail if a concurrent transaction commits a new row with some_col = 'baz' exactly between CTE y and z, but that's extremely unlikely. If it happens you get a duplicate key violation and have to retry. Nothing lost. If you don't face concurrent writes, you can just forget about this.
You can put this into a plpgsql function and rerun the query on duplicate key error automatically.
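A hedged sketch of such a function (the name get_or_create is made up; it follows the retry-loop pattern from the PostgreSQL documentation rather than reusing the CTE verbatim):
CREATE OR REPLACE FUNCTION get_or_create(_some_col text)
  RETURNS int AS
$func$
DECLARE
   _tbl_id int;
BEGIN
   LOOP
      SELECT tbl_id INTO _tbl_id FROM tbl WHERE some_col = _some_col;
      EXIT WHEN FOUND;                  -- row already there, done
      BEGIN
         INSERT INTO tbl (some_col) VALUES (_some_col)
         RETURNING tbl_id INTO _tbl_id;
         EXIT;                          -- insert succeeded, done
      EXCEPTION WHEN unique_violation THEN
         -- a concurrent insert won the race; loop back and select again
      END;
   END LOOP;
   RETURN _tbl_id;
END
$func$ LANGUAGE plpgsql;
Usage would then simply be SELECT get_or_create('baz');.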
Goes without saying that you need two indexes in this setup (as displayed in my CREATE TABLE statement above):
a UNIQUE or PRIMARY KEY constraint on tbl_id (which is of serial type!)
another UNIQUE or PRIMARY KEY constraint on some_col
Both implement an index automatically.