Bulk inserting data into a table which has a default current timestamp column - sql

I have a table on Redshift with the following structure:
CREATE TABLE schemaName.tableName (
some_id INTEGER,
current_time TIMESTAMP DEFAULT GETDATE()
);
If I bulk insert data from another table, for example:
INSERT INTO schemaName.tableName (some_id) SELECT id FROM otherSchema.otherTable;
Will the value for the current_time column be the same for all bulk-inserted rows, or will it depend on the insertion time of each record, given that the column data type is TIMESTAMP?
I am considering this for Amazon Redshift only.
So far I have tested by changing the default value of the current_time column to SYSDATE and bulk inserting 10 rows into the target table. The current_time values per row look like 2016-11-16 06:38:52.339208 and are the same for every row, whereas GETDATE() yields results like 2016-11-16 06:43:56. I haven't found any documentation about this behaviour and need confirmation.
To be precise, all rows get the same timestamp value after executing the following statement:
INSERT INTO schemaName.tableName (some_id) SELECT id FROM otherSchema.otherTable;
But if I change the table structure to the following:
CREATE TABLE schemaName.tableName (
some_id INTEGER,
current_time DOUBLE PRECISION DEFAULT RANDOM()
);
the rows get different random values for current_time.

Yes. Redshift will use the same default value for all rows in the case of a bulk insert. The Redshift documentation has the following content:
Because the evaluated DEFAULT expression for a given column is the same for
all loaded rows, a DEFAULT expression that uses a RANDOM() function
will assign the same value to all the rows.
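A quick way to confirm this behaviour on your own cluster is to count the distinct default values after the bulk insert. A minimal sketch against the tables from the question (the column is quoted defensively, since CURRENT_TIME is a reserved word):
INSERT INTO schemaName.tableName (some_id)
SELECT id FROM otherSchema.otherTable;
-- Expect a single distinct value: the DEFAULT expression is evaluated
-- once per statement, not once per row.
SELECT COUNT(DISTINCT "current_time") AS distinct_defaults
FROM schemaName.tableName;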

Related

How to add a column to a database with a default

I have a database that I'm trying to add a column to. This column should hold information of the type timestamp, and I want every row to have the same timestamp (the current time) when I'm done.
I currently have tried:
cursor.execute('''ALTER TABLE my_table ADD COLUMN time timestamp DEFAULT ?''', (datetime.datetime.utcnow(),))
Which results in sqlite3.OperationalError: near "?": syntax error.
So then I tried:
cursor.execute(f'''ALTER TABLE my_table ADD COLUMN time timestamp DEFAULT {datetime.datetime.utcnow()}''')
Which results in sqlite3.OperationalError: near "-": syntax error.
Also, doing
cursor.execute(f'''ALTER TABLE my_table ADD COLUMN time timestamp DEFAULT CURRENT_TIMESTAMP''')
results in sqlite3.OperationalError: Cannot add a column with non-constant default.
How can I add the new column and set the values in that column? (Either through DEFAULT, or some other mechanism.)
SQLite does not allow adding a new column with a non-constant default value. So this:
alter table my_table add column my_time timestamp default current_timestamp;
... generates the error:
Cannot add a column with non-constant default
A simple option would be to recreate the table. Assuming that you have a single column called id, that would look like:
create table my_table_new(
id int primary key,
my_time timestamp default current_timestamp
);
insert into my_table_new(id) select id from my_table;
drop table my_table; -- back it up first !
alter table my_table_new rename to my_table;
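Because the recreate approach includes a DROP TABLE, it is safer to run the copy and swap inside one transaction, so a failure part-way through leaves the original table untouched. A sketch, reusing the names from the statements above:
BEGIN;
CREATE TABLE my_table_new(
    id int primary key,
    my_time timestamp default current_timestamp
);
INSERT INTO my_table_new(id) SELECT id FROM my_table;
DROP TABLE my_table;
ALTER TABLE my_table_new RENAME TO my_table;
COMMIT;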
You can first add the new column and then update every existing row in the table to the desired value:
ALTER TABLE my_table ADD COLUMN time;
UPDATE my_table SET time = CURRENT_TIMESTAMP;

SQL - Multiple fields are updated instead of one

I have four columns: ID, STARTTIME, ENDINGTIME and DURATION.
The table is created with:
CREATE TABLE tableName (
ID INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
STARTTIME TIMESTAMP,
ENDINGTIME TIMESTAMP,
DURATION TIME);
The ID is an auto_increment column. Then I have the code for inserting a new STARTTIME:
INSERT INTO tableName(STARTTIME) VALUES(CURRENT_TIMESTAMP);
Secondly, I have the code for updating the row with the biggest ID to set the ENDINGTIME:
SET @latestInsertID = (SELECT MAX(ID) FROM tableName);
UPDATE tableName SET ENDINGTIME=(CURRENT_TIMESTAMP) WHERE ID=@latestInsertID;
Now I can execute both (all three) queries without getting an exception, and the first query works totally fine (as I expected). But the last query updates, in the row I wanted to update, the STARTTIME as well as the ENDINGTIME. Why doesn't it just update the ENDINGTIME?
Thank you for every solution!
Use DATETIME instead of TIMESTAMP.
Here's why:
The timestamp field is generally used to define at which moment in time a row was added or updated and by default will automatically be assigned the current datetime when a record is inserted or updated. The automatic properties only apply to the first TIMESTAMP in the record; subsequent TIMESTAMP columns will not be changed.
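A minimal sketch of that fix, assuming MySQL 5.6.5 or later (where a DATETIME column may have a CURRENT_TIMESTAMP default):
CREATE TABLE tableName (
    ID INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
    -- DATETIME never gets TIMESTAMP's implicit auto-initialize/auto-update behaviour;
    -- the DEFAULT only fills STARTTIME when the INSERT does not supply a value.
    STARTTIME DATETIME DEFAULT CURRENT_TIMESTAMP,
    ENDINGTIME DATETIME,
    DURATION TIME
);
With this definition, the INSERT and UPDATE statements from the question change only the columns they name.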
Educated guess: the column is defined as:
CREATE TABLE tablename(
-- ...
STARTTIME TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
Or there is an underlying trigger that performs the same logic.

SQL Pivot table - filter timestamp

I have a logger table with timestamp, tagname and tagvalue fields.
Every time a tag value changes, the control system writes a record to the table with those 3 parameters.
The timestamps of the records are not synchronized.
I want to run a pivot table query to get all the data for 3 different tags and show the values of those 3 tags only.
When I run the query below, I get back a dataset with every timestamp record in the table and lots of null values in the value fields (SQL returns all timestamp values to me).
I use the query:
SELECT *
FROM (
SELECT [timestamp],
[_VAL] AS '_VAL',
[point_id]
FROM DATA_LOG) p
PIVOT(SUM([_VAL]) FOR point_id in ([GG02.PV_CURNT],
[GG02.PV_JACKT],
[GG02.PV_SPEED],
[GG02.PV_TEMP])
) as tagvalue
ORDER BY timestamp ASC
Here's an example of the values I get back from SQL Server (results example screenshot):
Can anybody help me limit the timestamps SQL returns to only those relevant to these tags, rather than every timestamp value in the table? (The returned list should include a record when at least one of the tag values is not null.)
If anybody has other ideas that don't use a PIVOT query to get the data in the format shown above, any idea is welcome.
I think you simply want:
WHERE [GG02.PV_CURNT] IS NOT NULL OR
[GG02.PV_JACKT] IS NOT NULL OR
[GG02.PV_SPEED] IS NOT NULL OR
[GG02.PV_TEMP] IS NOT NULL
in the outer query, after the PIVOT (the pivoted columns only exist once the PIVOT has been applied) and before the ORDER BY.
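Putting it together, a sketch of the full statement with the filter applied to the pivoted result (table and column names as in the question):
SELECT *
FROM (
    SELECT [timestamp],
           [_VAL],
           [point_id]
    FROM DATA_LOG) p
PIVOT(SUM([_VAL]) FOR point_id IN ([GG02.PV_CURNT],
                                   [GG02.PV_JACKT],
                                   [GG02.PV_SPEED],
                                   [GG02.PV_TEMP])
) AS tagvalue
WHERE [GG02.PV_CURNT] IS NOT NULL
   OR [GG02.PV_JACKT] IS NOT NULL
   OR [GG02.PV_SPEED] IS NOT NULL
   OR [GG02.PV_TEMP] IS NOT NULL
ORDER BY [timestamp] ASC;
-- Only timestamps where at least one of the four tags pivoted to a non-NULL value remain.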

postgresql INSERTs NULL values from SELECT COS(another field) query

If I run
SELECT
(cos(radians(spa.spa_zenithangle)))
FROM generic.spa;
I get a sensible range of results from -1 to 1. But if I run this INSERT statement, all the resulting values in the spa.spa_cos_zenith field are NULL:
INSERT INTO generic.spa
(spa_cos_zenith)
SELECT
(cos(radians(spa.spa_zenithangle)))
FROM generic.spa;
The table definition is:
CREATE TABLE generic.spa (
spaid INTEGER DEFAULT nextval('generic.spa_id_seq'::regclass) NOT NULL,
measurementdatetime TIMESTAMP WITHOUT TIME ZONE,
spa_zenithangle NUMERIC(7,3),
spa_cos_zenith DOUBLE PRECISION,
CONSTRAINT spa_pk PRIMARY KEY(spaid)
)
WITH (oids = false);
Does anyone know why the COS function returns results fine, but they can't be inserted into another field?
I suspect you want update, not insert:
UPDATE generic.spa
SET spa_cos_zenith = cos(radians(spa.spa_zenithangle));
INSERT inserts new rows, so you are duplicating the rows; the only column you are filling in the new rows is the COS() value, and nothing changes in the old rows.
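If the goal is to keep spa_cos_zenith permanently in sync with spa_zenithangle, a generated column is an alternative worth considering on PostgreSQL 12 or later. A sketch, reusing the column definitions from the question:
ALTER TABLE generic.spa DROP COLUMN spa_cos_zenith;
ALTER TABLE generic.spa
    ADD COLUMN spa_cos_zenith DOUBLE PRECISION
    GENERATED ALWAYS AS (cos(radians(spa_zenithangle))) STORED;
-- Existing and future rows get the computed value automatically; no separate UPDATE is needed.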

archive one table's data in another table with an archive date in Oracle

I have one table, test; it has 10 columns and 20 rows.
I need to move this data to the archive_test table, which has 11 columns (the same 10 as the test table, plus one column for the archive date).
When I tried to insert like below, it shows an error because the number of columns doesn't match.
insert into archive_test
select * from test;
Please suggest a better way to do this. Thanks!
Well, obviously you need to supply values for all the columns, and although you can avoid doing so, you should explicitly state which value is going to be inserted into which column. If you have an extra column in the target table, you can do one of the following:
Do not mention it
Specify a default value as part of its column definition in the table (a sketch of this follows the example below)
Have a trigger to populate it
Specify a value for that column.
e.g.:
insert into archive_test (col1, col2, col3 ... col11)
select col1,
col2,
col3,
...
sysdate
from test;
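A sketch of the "default value" option from the list above, assuming the extra column is called archive_date (the question does not name it) and is a DATE:
ALTER TABLE archive_test MODIFY (archive_date DATE DEFAULT SYSDATE);
INSERT INTO archive_test (col1, col2, col3 /* ... the ten columns shared with test */)
SELECT col1, col2, col3 /* ... */
FROM test;
Because archive_date is omitted from the column list, it picks up the SYSDATE default for every archived row.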
Assuming that archive_date is the last column:
INSERT INTO archive_test
SELECT test.*, sysdate
FROM test;