I have a value in my CSV file for a timestamp as '1522865628160'. When I load the data into BigQuery, where this field's type is TIMESTAMP, it saves the timestamp as '1522865628160000'. So when I query like
select * from <tablename> limit 1
it gives me this error:
Cannot return an invalid timestamp value of 1522865628160000000 microseconds relative to the Unix epoch. The range of valid timestamp values is [0001-01-1 00:00:00, 9999-12-31 23:59:59.999999]; error in writing field timestamp
please help
I think the issue here is that you tried to load your UNIX timestamp data into a TIMESTAMP column in BigQuery. A BigQuery TIMESTAMP column is not the same thing as a UNIX timestamp. The latter is just a numeric value representing the number of seconds (or, in your case, milliseconds) since the start of the UNIX epoch in 1970.
So the fix here would be to load your data into an INT64 (or INTEGER if you are using legacy SQL) column. From there, you may convert your UNIX timestamp to a bona fide date or timestamp.
In legacy SQL there is an MSEC_TO_TIMESTAMP() function which can convert an integer number of milliseconds since the UNIX epoch to a bona fide timestamp, e.g.
SELECT MSEC_TO_TIMESTAMP(1522865628160)
2018-04-04 18:13:48 UTC
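If you are on standard SQL instead, TIMESTAMP_MILLIS() does the equivalent conversion. A minimal sketch, using the value from your file:
#standardSQL
SELECT TIMESTAMP_MILLIS(1522865628160) AS ts
-- returns 2018-04-04 18:13:48.160 UTC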
Related
I have a column eventtime that only stores the time of day as a string, e.g.
0445AM means 04:45 AM. I am using the query below to convert it to a UNIX timestamp.
select unix_timestamp(eventtime,'hhmmaa'),eventtime from data_raw limit 10;
This seems to work fine for test data. I always thought a UNIX timestamp is a combination of date and time, while here I only have the time. My question is: what date does it assume while executing the above function? The timestamps it produces seem to be quite small.
A UNIX timestamp is the BIGINT number of seconds since the UNIX epoch (1970-01-01 00:00:00 UTC); it is a way to track time as a running total of seconds.
select unix_timestamp('0445AM','hhmmaa') as unixtimestamp
Returns
17100
And this is exactly 4 hours 45 minutes converted to seconds:
select 4*60*60 + 45*60
returns 17100
And to convert it back, use the from_unixtime function:
select from_unixtime(17100,'hhmmaa')
returns:
0445AM
If you convert using a format that includes the date, you will see it assumes the date is 1970-01-01:
select from_unixtime(17100,'yyyy-MM-dd hhmmaa')
returns:
1970-01-01 0445AM
See the Hive functions docs here.
There is also a very useful site about UNIX timestamps.
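If you ever need the timestamp anchored to an actual date rather than 1970-01-01, one possible approach is to prepend a date before parsing. A sketch, assuming Hive 1.2+ (for current_date) and the column/table names from your question:
-- prepend today's date so the resulting UNIX timestamp falls on the current day
select unix_timestamp(concat(cast(current_date as string), ' ', eventtime), 'yyyy-MM-dd hhmmaa'), eventtime
from data_raw limit 10;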
I want to create a table in Redshift that stores incrementally incoming data from the source. The date field in the MySQL source is not stored as UTC. Is it possible to convert and store the new record as UTC upon record creation?
I was thinking doing something like that:
CREATE TABLE test(
my_dt_field datetime without timezone NOT NULL ...)
Any help would be very appreciated!
Redshift provides the following datatype options for storing dates and times:
1. DATE
Use the DATE data type to store simple calendar dates without time stamps.
2. TIMESTAMP
TIMESTAMP is an alias of TIMESTAMP WITHOUT TIME ZONE.
Use the TIMESTAMP data type to store complete timestamp values that include the date and the time of day.
TIMESTAMP columns store values with up to a maximum of 6 digits of precision for fractional seconds.
If you insert a date into a TIMESTAMP column, or a date with a partial time stamp value, the value is implicitly converted into a full time stamp value with default values (00) for missing hours, minutes, and seconds. Time zone values in input strings are ignored.
By default, TIMESTAMP values are Coordinated Universal Time (UTC) in both user tables and Amazon Redshift system tables.
3. TIMESTAMPTZ
TIMESTAMPTZ is an alias of TIMESTAMP WITH TIME ZONE.
Use the TIMESTAMPTZ data type to input complete time stamp values that include the date, the time of day, and a time zone. When an input value includes a time zone, Amazon Redshift uses the time zone to convert the value to Coordinated Universal Time (UTC) and stores the UTC value.
To view a list of supported time zone names, execute the following command.
select pg_timezone_names();
To answer your question: declare your column's data type as TIMESTAMP; by default it stores values in UTC.
You can also refer to the AWS documentation here: https://docs.aws.amazon.com/redshift/latest/dg/r_Datetime_types.html
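For example, a minimal sketch along those lines (the table and column names follow the CREATE TABLE attempt in the question):
CREATE TABLE test (
    my_dt_field TIMESTAMP NOT NULL  -- TIMESTAMP WITHOUT TIME ZONE; values are treated as UTC by default
);
If the incoming values carry a time zone and you want Redshift to convert them to UTC on insert, TIMESTAMPTZ would be the alternative, as described above.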
I am new to PostgreSQL but not to SQL in general. I have a table that I need to read values from; one of the columns is a UNIX timestamp that I want to convert into a more human-readable format, so I found this:
SELECT lt,dw,up,to_char(uxts, 'YYYY-MM-DD HH24:MI:SS')
from products;
But that produces an error:
ERROR: multiple decimal points
I am lost here. I am sure someone can show me how to do it. The documentation isn't that clear to me. PostgreSQL 9.5 is the database.
to_char() converts a number, date, or timestamp to a string, not the other way around.
You want to_timestamp():
Convert Unix epoch (seconds since 1970-01-01 00:00:00+00) to timestamp
So just apply that function to your column:
SELECT lt,dw,up,to_timestamp(uxts) as uxts
from products;
This assumes that uxts is some kind of numeric data type (integer, bigint, or double precision).
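If you specifically want the formatted string your original query was aiming for, you can feed the result of to_timestamp() into to_char(). A sketch using the same columns:
SELECT lt, dw, up,
       to_char(to_timestamp(uxts), 'YYYY-MM-DD HH24:MI:SS') as uxts_readable
from products;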
I'm playing with some tables in BigQuery and I receive this error:
Cannot return an invalid timestamp value of -62169990264000000 microseconds relative to the Unix epoch.
The range of valid timestamp values is [0001-01-1 00:00:00, 9999-12-31 23:59:59.999999]
Doing the query in legacy SQL and sorting ascending, it displays as 0001-11-29 22:15:36 UTC.
How does it get transformed into microseconds?
This is the query:
#standardSQL
SELECT
birthdate
FROM
X
WHERE
birthdate IS NOT NULL
ORDER BY
birthdate ASC
Confirming that in BigQuery Legacy SQL
SELECT USEC_TO_TIMESTAMP(-62169990264000000)
produces the timestamp 0001-11-29 22:15:36 UTC
whereas in BigQuery Standard SQL
SELECT TIMESTAMP_MICROS(-62169990264000000)
produces an error:
TIMESTAMP value is out of allowed range: from 0001-01-01 00:00:00.000000+00 to 9999-12-31 23:59:59.999999+00.
How does it get transformed into microseconds?
TIMESTAMP
You can describe TIMESTAMP data types as either UNIX timestamps or calendar datetimes. BigQuery stores TIMESTAMP data internally as a UNIX timestamp with microsecond precision.
See more about the TIMESTAMP type.
Midnight of January 1 of the year 0001 (the minimum possible timestamp value in standard SQL) is -62135596800000000 in microseconds relative to the UNIX epoch, which is greater than -62169990264000000. I don't have a good explanation for legacy SQL's behavior with that timestamp value, but you can read about some suggestions for dealing with it in standard SQL in this item on the issue tracker. We plan to add some content to the migration guide about this timestamp behavior in the future as well.
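To see this boundary in standard SQL, and to avoid the error when a value is out of range, something like the following sketch works (SAFE.-prefixed calls return NULL instead of raising an error):
#standardSQL
SELECT
  UNIX_MICROS(TIMESTAMP '0001-01-01 00:00:00+00') AS min_ts_micros,  -- -62135596800000000
  SAFE.TIMESTAMP_MICROS(-62169990264000000) AS out_of_range          -- NULL instead of an error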
So I've been given a lovely little database. One of the tables in the database (several million rows large) has this column:
time_in character varying(255)
Stored in there is an epoch timestamp. What is the most sane way I can convert this to a proper epoch timestamp column without losing data?
First off, there is no separate epoch timestamp datatype, so the type you want to convert to is just a regular timestamp. In the PostgreSQL documentation for ALTER TABLE there's an example that fits your case almost perfectly (I just added a cast to integer):
ALTER TABLE foo
ALTER COLUMN time_in SET DATA TYPE timestamp with time zone
USING
timestamp with time zone 'epoch' + time_in::integer * interval '1 second';
Note that the conversion might take some time and will produce an error if any of the rows do not contain valid epoch times.
Or, quoting the manual here:
A single-argument to_timestamp function is also available; it accepts a double precision argument and converts from Unix epoch (seconds since 1970-01-01 00:00:00+00) to timestamp with time zone. (Integer Unix epochs are implicitly cast to double precision.)
ALTER TABLE foo ALTER COLUMN time_in
SET DATA TYPE timestamptz USING to_timestamp(time_in::float8);
But first, decide whether timestamp (timestamp without time zone) or timestamptz (timestamp with time zone) is the better choice for you:
Ignoring timezones altogether in Rails and PostgreSQL
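Either way, it can be worth spot-checking the conversion on a few rows before running the ALTER. A sketch, assuming the table is named foo as in the examples above:
SELECT time_in, to_timestamp(time_in::float8) AS converted
FROM foo
LIMIT 10;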