How to insert multiple default rows into a table in PostgreSQL - sql

I have a table with columns taking default values:
create table indexing_table
(
id SERIAL PRIMARY KEY,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
How do I insert multiple default rows into this table? Do I have to repeat the command:
insert into indexing_table default values;
as many times as I want it to be inserted?

When you have a default value, you can tell the database to use this default value:
INSERT INTO indexing_table(id, created_at)
VALUES(default, default),
(default, default),
(default, default);
If you need hundreds of default records and one of your defaults is "now()", use generate_series():
INSERT INTO indexing_table(created_at)
SELECT NOW() FROM generate_series(1,100);
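If every column has a default (as in the question's table), a variant that avoids naming any column at all is an INSERT ... SELECT with an empty select list, which PostgreSQL accepts from 9.4 on; a sketch against the question's indexing_table:

```sql
-- Insert 100 rows consisting entirely of default values:
-- the SELECT produces 100 zero-column rows, so every column
-- of indexing_table falls back to its DEFAULT.
INSERT INTO indexing_table
SELECT FROM generate_series(1, 100);
```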

I have the same problem. I have a header table that contains only an id generated by a sequence. I need to insert rows into a snapshot table, and of course I must fill the header first. So I have a temporary table with many rows and want to insert quickly into both tables.
A loop is easy to write, but it is not an optimized approach (in our update process it runs several thousand times on different databases/schemas).
One option is to run the insert with explicit values:
INSERT INTO csh33 (id)
SELECT (SELECT last_value FROM csh33_id_seq) + row_number() OVER ()
FROM temp_tss11;
-- The primary key "id" is SERIAL, so don't name it
INSERT INTO css33 (header_id, time_from, time_to, code, name)
SELECT (SELECT last_value FROM csh33_id_seq) + row_number() OVER (), now(), null, code, name FROM temp_tss11;
SELECT setval('csh33_id_seq', (SELECT max(id) FROM csh33) + 1);
Or I could avoid naming the columns that have default values:
INSERT INTO csh33 SELECT FROM temp_tss11;
-- But then I must account for the already-advanced sequence when filling the snapshot table
-- (ordering doesn't matter here, so just subtract)
INSERT INTO css33 (header_id, time_from, time_to, code, name)
SELECT (SELECT last_value FROM csh33_id_seq) - row_number() OVER (), now(), null, code, name FROM temp_tss11;
But for your question:
INSERT INTO yourTableName SELECT generate_series(1,100);
Note that I use PG 9.4.

If you do not want to name any column and want to keep the DEFAULT VALUES syntax, an option is to build the statement dynamically:
do $$
begin
execute (
select string_agg('insert into indexing_table default values',';')
from generate_series(1,10)
);
end; $$;

The simplest way is to insert now() into the "created_at" column, like so:
insert into indexing_table (created_at)
select now()
;

Related

Recreate table from a select and add an extra datetime default column (Snowflake)

I'm having problems creating a table that should be pretty straightforward. The SQL code (Snowflake) is:
create or replace table bank_raw as
select
*,
created_at datetime default current_timestamp()
from bank_raw;
My error is: Syntax error: unexpected 'DEFAULT'. (line 12).
I don't know how I can recreate this table and add this default timestamp column. By the way, I have already created multiple tables from scratch with created_at DateTime default current_timestamp().
Any ideas?
It is possible to define the column list when using CTAS:
Sample data:
CREATE TABLE bank_raw(id INT, col TEXT);
INSERT INTO bank_raw(id, col) VALUES (1, 'a'), (2,'b');
Query:
CREATE OR REPLACE TABLE bank_raw(id INT,
col TEXT,
created_at datetime default current_timestamp())
AS
SELECT
id, col, CURRENT_TIMESTAMP()
FROM bank_raw;
Output:
SELECT * FROM bank_raw;
DESCRIBE TABLE bank_raw;
The SELECT list of a CTAS is a query expression, not a column definition, so the default keyword does not apply there. You can simply remove it and instead project the column and name it:
create or replace table bank_raw as
select
*,
current_timestamp() as created_at
from bank_raw;
Edit: To enforce a default, you cannot alter a table to add a column with a default value except for sequences. So you'd need to do something like this:
select get_ddl('table','BLANK_RAW');
-- Copy and paste the DDL. Rename the new table,
-- and add the default timestamp:
create or replace table A
(
-- Existing columns here then:
created_at timestamp default current_timestamp
);
You can then do an insert from a select on the table BLANK_RAW. You'll need to specify a column list and omit the CREATED_AT column.
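That final insert-from-select might look like the following sketch (the column names id and col stand in for whatever get_ddl returns for your table):

```sql
-- created_at is omitted from the column list,
-- so its DEFAULT current_timestamp fires for every row
INSERT INTO A (id, col)
SELECT id, col
FROM BLANK_RAW;
```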

How to add comparing to ON CONFLICT () DO UPDATE

I need to check whether the table contains any operations for the current user today.
Usually I compare time in this way: timestamp > CURRENT_TIMESTAMP::date
Could you please help: how do I do this in an INSERT with ON CONFLICT () DO UPDATE?
INSERT INTO table (login, smth, timestamp)
VALUES ('username', 'smth', CURRENT_TIMESTAMP)
ON CONFLICT (login, timestamp) DO UPDATE
SET smth = 'smth',
timestamp = CURRENT_TIMESTAMP
This compares the timestamp exactly, but I need to check whether it is for today, like above: timestamp > CURRENT_TIMESTAMP::date
Thanks!
If you want to store the timestamp but have a unique constraint on the date, then you can do that easily in recent versions of Postgres (12+) using a generated column. This requires adding a new date column to the table:
create table t (
login text,
smth text,
ts timestamp,
ts_date date generated always as (ts::date) stored
);
And then creating a unique constraint:
create unique index unq_t_login_timestamp on t(login, ts_date);
Now you can use on conflict:
INSERT INTO t (login, smth, ts)
VALUES ('username', 'smth', CURRENT_TIMESTAMP)
ON CONFLICT (login, ts_date) DO UPDATE
SET smth = 'smth',
ts = CURRENT_TIMESTAMP;
Here is the code in a db<>fiddle.
EDIT:
It is better to eschew the generated column and just use a unique index on an expression:
create unique index unq_t_login_timestamp on t(login, (ts::date));
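With that expression index in place, the conflict target has to name the same expression; a sketch, assuming the table t and column ts from above:

```sql
INSERT INTO t (login, smth, ts)
VALUES ('username', 'smth', CURRENT_TIMESTAMP)
-- the conflict target (login, (ts::date)) must match
-- the unique index expression exactly
ON CONFLICT (login, (ts::date)) DO UPDATE
SET smth = EXCLUDED.smth,
    ts = EXCLUDED.ts;
```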
If you can use a CTE, see here.
For your question, the query is like the one below.
(However, I'm not clear what "timestamp > CURRENT_TIMESTAMP::date" is supposed to mean.)
with
"data"("w_login","w_smth","w_timestamp") as (
select 'username2'::text, 'smth'::text, CURRENT_TIMESTAMP
),
"update" as (
update "table" set ("smth","timestamp")=("w_smth","w_timestamp") from "data"
where "login"="w_login" and "w_timestamp">CURRENT_TIMESTAMP::date
returning *
)
insert into "table"
select * from "data"
where not exists (select * from "update");
DB Fiddle

Date in SELECT statement is always the same

I'm trying to insert data from one table into a table that has a two-column key. The source table does not share the destination's keys. Both key columns in the destination are varchars.
I have an insert statement:
INSERT INTO Table1 (Invoice, DetailLine, SomeData1, SomeData2)
SELECT '1' + RIGHT('00000000000000' + CONVERT(varchar, DATEPART(ns, SYSDATETIME())), 14), 0, 'STARTING_VALUE_1407', [ActualValue]
FROM Table2;
When I execute the above, my milliseconds for my DateTime2 object are all the same, as if it's only evaluating that value once. This is preventing me from using this as a temporary unique key. Is it possible to use SYSDATETIME(), or any other date function, in a SELECT statement and have the value reevaluated for each row? If not, is there a way to generate a unique value when doing an INSERT INTO SELECT when selecting data that doesn't normally share the destination table's key?
The SYSDATETIME() function is evaluated once in your query, because it is considered a runtime constant.
You can try a windowing function such as ROW_NUMBER():
INSERT INTO table1
(invoice,
detailline,
somedata1,
somedata2)
SELECT ROW_NUMBER()
OVER (ORDER BY actualvalue),
0,
'STARTING_VALUE_1407',
[actualvalue]
FROM table2;
I'm not sure if you are required to be tied to milliseconds, or if the values need to be the same within a row, but the following could be used to get unique values:
SELECT NEWID() AS GuidNo1,
NEWID() AS GuidNo2
Or (though note that RAND() without a seed is, like SYSDATETIME(), evaluated once per query, so it will repeat across rows):
SELECT CAST(RAND() * 1000000 AS INT) AS [RandomNumber1],
CAST(RAND() * 1000000 AS INT) AS [RandomNumber2]

Oracle Referencing Primary Key

The primary key of the parent got its value from a sequence (customerNo = customerSeq.nextval). How do I insert that value into the child table as a foreign key?
insert into account values (accountSeq.nextval,'500',customerSeq.nextval,'S','O');
doesn't work and gives me an error.
You can use currval to get the last generated value.
insert into account
(account_id, some_col, customer_id, col3, col4)
values
(accountSeq.nextval,'500',customerSeq.currval,'S','O');
It's good coding style to explicitly list the columns of the table in the INSERT statement. You also didn't show your table definition, but do not use string literals for numbers: '500' is a string, 500 is a number.
More details are in the manual: http://docs.oracle.com/cd/E11882_01/server.112/e26088/pseudocolumns002.htm#i1009336
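Note that currval is session-scoped: it raises an error unless nextval has been called on that sequence in the same session. A sketch of the full flow (the customer table and its column names are made up for illustration):

```sql
-- Parent row: nextval generates the key and primes currval
INSERT INTO customer (customer_no, name)
VALUES (customerSeq.nextval, 'Alice');

-- Child row: currval returns the value nextval just produced,
-- in this session, regardless of other sessions' inserts
INSERT INTO account (account_id, some_col, customer_id, col3, col4)
VALUES (accountSeq.nextval, 500, customerSeq.currval, 'S', 'O');
```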
When you insert a record into a table with a sequence, you can use the RETURNING clause to capture the generated value into a PL/SQL variable, and then use that variable for the child records.
insert into
my_table (
id,
col1,
...)
values (
my_table_seq.nextval,
'A',
...)
returning id
into my_package.my_table_id;
insert into
child_table (
id,
my_table_id,
...)
values (
child_table_seq.nextval,
my_package.my_table_id,
'B',
...)

SQLite auto-increment non-primary key field

Is it possible to have a non-primary key to be auto-incremented with every insertion?
For example, I want to have a log where every log entry has a primary key (for internal use) and a revision number (an INT value that I want to be auto-incremented).
As a workaround, this could be done with a sequence, yet I believe that sequences are not supported in SQLite.
You can do select max(id)+1 when you do the insertion.
For example:
INSERT INTO Log (id, rev_no, description)
VALUES ((SELECT MAX(id) + 1 FROM log), 'rev_Id', 'some description')
Note that this will fail on an empty table, since MAX(id) will be NULL when there are no records; you can either add a first dummy entry or change the SQL statement to this:
INSERT INTO Log (id, rev_no, description)
VALUES ((SELECT IFNULL(MAX(id), 0) + 1 FROM Log), 'rev_Id', 'some description')
SQLite creates a unique row id (rowid) automatically. This field is usually left out when you use "select * ...", but you can fetch this id by using "select rowid,* ...". Be aware that according to the SQLite documentation, they discourage the use of autoincrement.
create table myTable ( code text, description text );
insert into myTable values ( 'X', 'some descr.' );
select rowid, * from myTable;
The result will be:
1|X|some descr.
If you use this id as a foreign key, you can export rowid - AND import the correct value in order to keep data integrity:
insert into myTable (rowid, code, description)
values ( 1894, 'X', 'some descr.' );
You could use a trigger (http://www.sqlite.org/lang_createtrigger.html) that checks the previous highest value and then increments it, or if you are doing your inserts through in a stored procedure, put that same logic in there.
My answer is very similar to Icarus's, so I won't repeat it. You can, however, use Icarus's solution in a more advanced way if needed. Below is an example of a seat-availability table for a train reservation system.
insert into Availiability (date,trainid,stationid,coach,seatno)
values (
'11-NOV-2013',
12076,
'SRR',
1,
(select max(seatno)+1
from Availiability
where date='11-NOV-2013'
and trainid=12076
and stationid='SRR'
and coach=1)
);
You can use an AFTER INSERT trigger to emulate a sequence in SQLite (but note that numbers might be reused if rows are deleted). This will make your INSERT INTO statement a lot easier.
In the following example, the revision column will be auto-incremented (unless the INSERT INTO statement explicitly provides a value for it, of course):
CREATE TABLE test (
id INTEGER PRIMARY KEY NOT NULL,
revision INTEGER,
description TEXT NOT NULL
);
CREATE TRIGGER auto_increment_trigger
AFTER INSERT ON test
WHEN new.revision IS NULL
BEGIN
UPDATE test
SET revision = (SELECT IFNULL(MAX(revision), 0) + 1 FROM test)
WHERE id = new.id;
END;
Now you can simply insert a new row like this, and the revision column will be auto-incremented:
INSERT INTO test (description) VALUES ('some description');