I created a table in BigQuery and specified one of its columns, column4, as a REPEATED (array) TIMESTAMP column.
CREATE OR REPLACE TABLE `project.dataset.table` (
  column1 STRING,
  column2 TIMESTAMP,
  column3 ARRAY<INT64>,
  column4 ARRAY<TIMESTAMP>
);
When I insert data into that table, column4 ends up displaying the CURRENT_TIMESTAMP() value in a different format.
INSERT INTO `project.dataset.table` (column1, column2, column3, column4)
VALUES ("rowtest1", CURRENT_TIMESTAMP(), [5], [CURRENT_TIMESTAMP()]);
In the same query I used CURRENT_TIMESTAMP() for both column2 and column4, but column4 shows the value as 1660318705383274 instead of 2022-08-12 15:38:25.383274 UTC.
I want to keep the format shown in column2 (2022-08-12 15:38:25.383274 UTC) for both columns. Is that possible?
I want to keep column4 as REPEATED because I will use it as an updated_at field, to avoid redundancy in the table.
The BigQuery UI seems to just display a TIMESTAMP value inside an array as a Unix timestamp (in microseconds). Internally, it still holds a proper TIMESTAMP.
See the query result rendered as JSON:
SELECT [CURRENT_TIMESTAMP] ts;
And when you UNNEST an ARRAY<TIMESTAMP>, each element shows up in the normal timestamp format, like below.
SELECT ts FROM UNNEST([CURRENT_TIMESTAMP]) ts;
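If you need the array rendered in the human-readable format everywhere, one workaround (a sketch; the FORMAT_TIMESTAMP call and format string are my own choice, not from the question) is to format each element as a STRING:
-- Render the ARRAY<TIMESTAMP> elements as formatted strings
SELECT ARRAY(
  SELECT FORMAT_TIMESTAMP('%F %H:%M:%E6S %Z', ts)
  FROM UNNEST(column4) AS ts
) AS column4_readable
FROM `project.dataset.table`;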
Related
I have inserted a couple of rows into a Snowflake table; however, it returns only a date format.
INSERT INTO usage (Customer_ID,
Movie_Name,
Movie_Genre,
Movie_Length,
Start_Time,
End_Time)
values (1234,
'Shrek',
'Kids',
2.52,
to_timestamp('12-31-2013 13:33','mm-dd-yyyy HH24:MI'),
to_timestamp('12-31-2013 16:04','mm-dd-yyyy HH24:MI')
);
Can someone tell me what's wrong?
SELECT * from values (1234,
'Shrek',
'Kids',
2.52,
to_timestamp('12-31-2013 13:33','mm-dd-yyyy HH24:MI'),
to_timestamp('12-31-2013 16:04','mm-dd-yyyy HH24:MI')
);
gives:
COLUMN1 | COLUMN2 | COLUMN3 | COLUMN4 | COLUMN5                 | COLUMN6
1,234   | Shrek   | Kids    | 2.52    | 2013-12-31 13:33:00.000 | 2013-12-31 16:04:00.000
So the VALUES block is valid. This implies that if you are only getting a date, either your column is of type DATE, or your default display format for timestamps no longer includes the time part.
select *, system$typeof(Start_Time) from usage;
should show TIMESTAMP
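If the display format turns out to be the culprit, resetting the session's timestamp output format brings the time part back (a sketch; the format string is just an example):
-- Restore a display format that includes the time component
ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF3';
SELECT Start_Time, End_Time FROM usage;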
I set the column's data type to DATE; should I change it to TIMESTAMP?
If the column data type is DATE the time component is lost during insertion:
Snowflake supports a single DATE data type for storing dates (with no time elements).
In addition, all accepted TIMESTAMP values are valid inputs for dates; however, the TIME information is truncated.
In order to preserve both date and time the column data type has to be changed to TIMESTAMP.
Sample:
CREATE OR REPLACE TABLE usage(col DATE, col2 TIMESTAMP);
INSERT INTO usage(col, col2)
VALUES (to_timestamp('12-31-2013 13:33','mm-dd-yyyy HH24:MI'),
to_timestamp('12-31-2013 13:33','mm-dd-yyyy HH24:MI'));
SELECT * FROM usage;
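Note that Snowflake does not allow changing a column from DATE to TIMESTAMP in place, so an existing column has to be migrated, along these lines (a sketch against the usage sample above; the time component already truncated on insert cannot be recovered):
-- Add a TIMESTAMP column, copy the data, then swap the names
ALTER TABLE usage ADD COLUMN col_ts TIMESTAMP;
UPDATE usage SET col_ts = col::TIMESTAMP;
ALTER TABLE usage DROP COLUMN col;
ALTER TABLE usage RENAME COLUMN col_ts TO col;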
How can I create a view that works for all the dates? Right now, if I remove the date suffix from table1_20210713, it throws an incorrect-table error.
Select column1, column2 from table1_20210713
Union All
Select column1, column2 from table2_20210713
You should be able to achieve this using a wildcard table: https://cloud.google.com/bigquery/docs/querying-wildcard-tables
For yours, you would do something along the lines of:
select column1
     , column2
from `project.dataset.table1_*`
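To wrap that in a view, something like this should work (a sketch; the view name is mine, and _TABLE_SUFFIX is the pseudo-column that carries the shard date):
-- A date-agnostic view over every table1_* shard
CREATE OR REPLACE VIEW `project.dataset.table1_all` AS
SELECT column1
     , column2
     , _TABLE_SUFFIX AS shard_date
FROM `project.dataset.table1_*`;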
I want to add multiple entries, with 1 column staying the same, and 1 column increasing by 1 every time.
To do this manually, I would have to run:
INSERT INTO table (column1, column2) VALUES ('1'::integer, '1'::integer) RETURNING column1, column2;
INSERT INTO table (column1, column2) VALUES ('1'::integer, '2'::integer) RETURNING column1, column2;
INSERT INTO table (column1, column2) VALUES ('1'::integer, '3'::integer) RETURNING column1, column2;
etc.
Is there a way I could do the numbers 1 to 34000 in 1 query?
Use generate_series():
INSERT INTO table (column1, column2)
SELECT 1, gs.n
FROM generate_series(1, 34000) gs(n);
Note: there is no need to cast a string to an integer; a plain number literal is fine. In any case, Postgres converts the literal to the column's type when needed (int, numeric, float, and so on).
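If you still want the inserted rows back, RETURNING works with INSERT ... SELECT as well (a sketch; table stands in for your actual table name, as in the question):
-- Insert all 34000 rows and return them in one statement
INSERT INTO table (column1, column2)
SELECT 1, gs.n
FROM generate_series(1, 34000) AS gs(n)
RETURNING column1, column2;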
I want to select data from one table and insert it into another. I know how to achieve this; I have used:
INSERT INTO TableA(column1, column2, column3, ...)
SELECT column1, column2, column3, ...
FROM TableB
WHERE condition;
But the challenge is the data types. The target table TableA defines the column as
column1 decimal(16,3) NULL
whereas the source table TableB defines it as
column1 decimal(15,4) NULL
Please help me here!
It should be fine. SQL Server will automatically convert the values. If you have a very large value in B that doesn't fit in A, then you could get an error.
To solve that, use try_convert():
INSERT INTO TableA(column1, column2, column3, ...)
SELECT TRY_CONVERT(decimal(16,3), column1), column2, column3, ...
FROM TableB
WHERE condition;
This avoids a conversion error: values that don't fit simply come back as NULL.
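A quick standalone illustration of that NULL-instead-of-error behavior (the values here are made up for the example):
-- TRY_CONVERT returns NULL when the value does not fit the target type
SELECT TRY_CONVERT(decimal(5,2), 123.45)  AS fits,     -- 123.45
       TRY_CONVERT(decimal(5,2), 12345.6) AS too_big;  -- NULL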
In Teradata, is there any way to get the column name along with the error message? For example, I have a table:
tablename (
  column1 INT,
  column2 TIMESTAMP,
  column3 TIMESTAMP,
  column4 TIMESTAMP,
  column5 CHAR(20)
)
When I insert a wrong value into a column, it does not return the column name. For example, if I insert a wrong timestamp it just says 6760: invalid timestamp,
but we don't know which column has the problem.
Is there any method to find out?
No, a SQL INSERT will not return that info.
But when you use a MERGE with error LOGGING instead, you'll get a row in the error table indicating which column caused it (IIRC).
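A rough sketch of that approach (from memory, so double-check against the Teradata docs; the source query is made up):
-- Create the error table for the target, then MERGE with error logging;
-- rejected rows land in the error table with details about the failure.
CREATE ERROR TABLE FOR tablename;
MERGE INTO tablename AS t
USING (SELECT 1 AS column1, TIMESTAMP '2022-08-12 15:38:25' AS column2) AS s
ON t.column1 = s.column1
WHEN NOT MATCHED THEN
  INSERT (column1, column2) VALUES (s.column1, s.column2)
LOGGING ERRORS WITH NO LIMIT;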