Milliseconds from GETUTCDATE not stored in datetime field - sql

I have a stored procedure that inserts data into a table. One column in the table is a datetime and is used for storing the timestamp of the row insert:
INSERT INTO myTable (Field1, Field2, Field3) VALUES (1, 2, GETUTCDATE());
Field3 is a datetime column. When I select data from that table with a simple SELECT * FROM myTable query, all datetime values are shown with .000 for the milliseconds.
If I execute SELECT GETUTCDATE(), the milliseconds are displayed: 2013-10-16 18:02:55.793
Why are the milliseconds not stored/displayed in the datetime column on SELECT?

You have to be doing something somewhere to change this to smalldatetime or something, because it works fine. I just created a new table, inserted data as you showed, queried the table, and I have the milliseconds.
I have been unable to find anything where you can set the precision at the server level, so it must be in your code.
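For reference, here is a minimal sketch of that test, reusing the table layout from the question:
CREATE TABLE myTable (Field1 int, Field2 int, Field3 datetime);
INSERT INTO myTable (Field1, Field2, Field3) VALUES (1, 2, GETUTCDATE());
SELECT Field3 FROM myTable;  -- e.g. 2013-10-16 18:02:55.793, milliseconds intact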

Which date/time type are you using? To store values with true millisecond precision, you need to use DATETIME2, available in SQL Server 2008 and higher.
DATETIME has an accuracy of about 1/300th of a second: values are rounded to increments of .000, .003, or .007 seconds.
SMALLDATETIME has an accuracy of 1 minute.
Source: http://msdn.microsoft.com/en-us/library/ff848733.aspx
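A quick sketch illustrating the difference (SQL Server 2008+; the literal is arbitrary):
DECLARE @d  datetime     = '2013-10-16 18:02:55.795';
DECLARE @d2 datetime2(3) = '2013-10-16 18:02:55.795';
SELECT @d AS dt, @d2 AS dt2;
-- dt:  2013-10-16 18:02:55.797  (datetime rounds to .000/.003/.007 increments)
-- dt2: 2013-10-16 18:02:55.795  (datetime2 keeps the value exactly)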

As Steve suggested, the issue was not related to the server. There is a trigger on this table, and that trigger rounds off the milliseconds on insert.
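For reference, a trigger along these lines would produce the behaviour described. This is a hypothetical sketch, not the actual trigger from the question; the join on Field1 is an assumption:
CREATE TRIGGER trg_myTable_RoundMs
ON myTable
AFTER INSERT
AS
BEGIN
    -- Round-trip through a seconds-precision string (style 120) to drop the milliseconds
    UPDATE t
    SET Field3 = CONVERT(datetime, CONVERT(varchar(19), t.Field3, 120))
    FROM myTable AS t
    INNER JOIN inserted AS i ON i.Field1 = t.Field1;  -- assumes Field1 identifies the row
END;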

Related

Timestamp decreases the hour in INSERT OVERWRITE

I have been working with Sqoop, Hive and Impala.
My Sqoop job gets a field from SQL Server in the datetime format and writes it to TABLE1, stored as a text file. The field in TABLE1 has the timestamp format.
After this, I created an HQL script using INSERT OVERWRITE TABLE2 ... SELECT TABLE1.
The field in TABLE2 has the timestamp format too, but the time is increased by 1 hour, and I don't know why.
All the tables were created previously. How can I fix this?
The difference might come from different time zones (local time, server time, UTC, ...).
However, you can fix the wrong values in the database with
DATEADD(interval, number, date);
Also see: https://www.w3schools.com/sql/func_sqlserver_dateadd.asp for more
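For example, if the stored values are consistently one hour ahead, an update along these lines would shift them back; time_field is a placeholder for the actual column, and DATEADD is the SQL Server syntax from the linked reference (Hive/Impala have their own date functions):
UPDATE TABLE2
SET time_field = DATEADD(hour, -1, time_field);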

How to convert nvarchar(254) to decimal(7,2)

I have a table with 10M+ rows and want to change the data type of one of the columns from nvarchar(254) to decimal(7,2). What is the most efficient and effective query to make this change?
I have tried using ALTER in order to make this change, but I get an error in SSMS:
Error converting data type nvarchar to numeric.
I have also tried using CAST, but this results in errors as well. Admittedly, I'm not a DBA, so I have struggled to understand the following:
How to properly write a CAST query that does not yield errors
Whether the CAST and CONVERT functions change the design of the data at the database level (meaning that in the Object Explorer, when I right-click the table and then click 'Design', I see the data type of the column has changed), or whether the changes only last until the next query is run or the program is exited.
This table was initially created over a month ago as the result of a workflow that was run a few months ago; this workflow has since been scheduled to push new data to the table on an hourly cadence, so deleting the job/table and starting over is not an option.
SET STATISTICS TIME ON
ALTER TABLE Clone3
ALTER COLUMN Price decimal(7,2)
The ultimate goal is to store this data correctly so that arithmetic operations can be performed when it is ingested into other visualization programs (e.g., Tableau, Power BI). That said, the expected result here is for the data type to be changed to decimal(7,2), but the actual result is nvarchar(254).
UPDATE
After running
SELECT Price FROM Clone3 WHERE TRY_CONVERT(decimal(7,2), Price) IS NULL
there are 239 records that come back in scientific notation, for example -5.0000000000000003E-2.
FINAL UPDATE
I ran the following query to update the records that were causing the conversion error (these were negative numbers like '-0.05' being converted to scientific notation for some strange reason).
UPDATE Clone3
SET Price = CAST(Price AS Float)
WHERE TRY_CONVERT(decimal(7,2), Price) IS NULL
Because all of the records are now in a numeric format, I can convert the entire column to decimal(7,2) using this query.
ALTER TABLE Clone3
ALTER COLUMN Price decimal(7,2)
I think I can call this solved, so many thanks to everyone for their responses, especially @Larnu for the code snippet that eventually helped me figure this out.
A value like 5.9999999999999998E-2 cannot be converted directly to decimal(7,2), although it can be converted to a float, which can then be converted to a decimal(7,2). E.g.:
select cast(cast('5.9999999999999998E-2' as float) as decimal(7,2))
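This returns 0.06: the inner cast parses the scientific notation as a float, and the outer cast then rounds it to two decimal places.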
While not the most efficient, nor a general solution for this kind of thing, you could alter the table twice, e.g.:
use tempdb
drop table if exists t
create table t(s varchar(200))
insert into t(s) values ('5.9999999999999998E-2')
go
alter table t alter column s float
alter table t alter column s decimal(7,2)
go
select * from t
The most efficient way is probably to empty the table and reload it:
select *
into temp_t
from t;
truncate table t;
alter table t alter column price decimal(7, 2);
insert into t
select *
from temp_t;
There is more overhead to updating the records in place.

Bulk inserting data into a table which has a default current timestamp column

I have a table on Redshift with the following structure:
CREATE TABLE schemaName.tableName (
some_id INTEGER,
current_time TIMESTAMP DEFAULT GETDATE()
);
If I bulk insert data from another table, for example:
INSERT INTO schemaName.tableName (some_id) SELECT id FROM otherSchema.otherTable;
Will the value for the current_time column be the same for all bulk-inserted rows, or will it depend on the insertion time of each record, given that the column data type is TIMESTAMP?
I am considering this for Amazon Redshift only.
So far I have tested changing the default value of the current_time column to SYSDATE and bulk inserting 10 rows into the target table. The current_time values look like 2016-11-16 06:38:52.339208 and are the same for every row, whereas GETDATE() yields results like 2016-11-16 06:43:56. I haven't found any documentation regarding this and need confirmation.
To be precise, all rows get the same timestamp value after executing the following statement:
INSERT INTO schemaName.tableName (some_id) SELECT id FROM otherSchema.otherTable;
But if I change the table structure to the following:
CREATE TABLE schemaName.tableName (
some_id INTEGER,
current_time DOUBLE PRECISION DEFAULT RANDOM()
);
the rows get different random values for current_time.
Yes, Redshift will use the same default value in the case of a bulk insert. The Redshift documentation has the following content:
the evaluated DEFAULT expression for a given column is the same for
all loaded rows; a DEFAULT expression that uses a RANDOM() function
will assign the same value to all the rows.
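One way to verify this yourself (a sketch using the names from the question) is to count the distinct defaulted values after the bulk insert:
INSERT INTO schemaName.tableName (some_id)
SELECT id FROM otherSchema.otherTable;

SELECT COUNT(DISTINCT current_time) FROM schemaName.tableName;
-- returns 1 when every row received the same default timestamp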

SQL: data type as trunc(sysdate)

I was trying to create a table with a column's data type as trunc(sysdate).
Is that possible?
When I tried it, I got the below error:
SQL Error: ORA-00902: invalid datatype
I am trying this because I want to make sure data inserted into that column doesn't include a time-of-day portion.
Just create a trigger:
CREATE OR REPLACE TRIGGER schema.trigger_name
BEFORE INSERT OR UPDATE
ON schema.table_name
FOR EACH ROW
BEGIN
    :new.column_name := TRUNC(:new.column_name);
END;
No, that is not possible.
TRUNC() is a function that truncates a date to a specific unit of measure.
The DATE datatype stores point-in-time values (dates and times) in a
table. The DATE datatype stores the year (including the century), the
month, the day, the hours, the minutes, and the seconds (after
midnight).
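If the goal is only to keep the time portion out of the stored values, you can also apply TRUNC at insert time instead of in the column definition (using the placeholder names from the trigger above):
INSERT INTO schema.table_name (column_name)
VALUES (TRUNC(SYSDATE));  -- time portion becomes 00:00:00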

Postgres Data type conversion

I have this dataset that's in a SQL format. However, the date values need to be converted into a different format, because I get the following error:
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
ERROR: date/time field value out of range: "28-10-96"
LINE 58: ...040','2','10','','P13-00206','','','','','1-3-95','28-10-96'...
^
HINT: Perhaps you need a different "datestyle" setting.
I've definitely read the documentation on date formats:
http://www.postgresql.org/docs/current/static/datatype-datetime.html
But my question is: how do I convert all of the dates into a proper format without going through all 500 or so data rows and checking each one before inserting into the DB? The backend is handled by Rails, but I figured going through SQL to clean it up would be best here.
I have a CREATE TABLE statement above this dataset, and mind you, the dataset was given to me via a DBF converter/external source.
Here's part of my dataset:
INSERT INTO winery_attributes
(ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST)
VALUES
(1,'ACACIA WINERY','PROD','8000','34436','','0','50000','250000','APPT','75','525','27375','3612','63','30987','22','97','x','001_02169-MOD_AcaciaWinery','','','','','1-11-79','1-9-82','34436','x','125000','Los Carneros','1');
INSERT INTO winery_attributes
(ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST)
VALUES
('2','AETNA SPRING CELLARS','PROD','2500','2500','','0','2000','20000','TST APPT','0','3','156','0','0','156','1','10','x','','','','','x','1-4-86','1-6-86','2500','','0','Napa Valley','3');
INSERT INTO winery_attributes
(ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST)
VALUES
('3','ALTA VINEYARD CELLAR','PROD','480','480','','0','5000','5000','NO','0','4','208','0','0','208','4','6','x','003_U-387879','','','','','2-5-79','1-9-80','480','','0','Diamond Mountain District','3');
INSERT INTO winery_attributes
(ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST)
VALUES
('4','BLACK STALLION','PROD','43600','43600','','0','100000','100000','PUB','50','350','18200','0','0','18200','2','45','x','P13-00391','','','','','1-5-80','1-9-85','43600','','0','Oak Knoll District of Napa Valley','3');
INSERT INTO winery_attributes
(ID,NAME,STATUS,BLDSZ_ORIG,BLDSZ_CURR,HAS_CAVE,CAVESIZE,PROD_ORIG,PROD_CURR,TOUR_TASTG,VISIT_DAY,VISIT_WEEK,VISIT_YR,VISIT_MKTG,VISIT_NMEV,VISIT_ALL,EMPLYEENUM,PARKINGNUM,WDO,LAST_UP,IN_CITYBDY,IN_AIASP,NOTES,SMLWNRYEXM,APPRV_DATE,ESTAB_DATE,TOTAL_SIZE,SUBJ_TO_75,GPY_AT_75,AVA,SUP_DIST)
VALUES
('5','ALTAMURA WINERY','PROD','11800','11800','x','3115','50000','50000','APPT','0','20','1040','0','0','1040','2','10','','P13-00206','','','','','1-3-95','28-10-96','14915','x','50000','Napa Valley','4');
The dates in your data set are in the form of strings. Since they are not in the default datestyle (which is YYYY-MM-DD), you should explicitly convert them to dates as follows:
to_date('1-5-80', 'DD-MM-YY')
If you store the data in a timestamp instead, use
to_timestamp('1-5-80', 'DD-MM-YY')
If you are given the data set in the form of the INSERT statements that you show, then first load all the data as simple strings into varchar columns, then add date columns and do an UPDATE (and similarly for integer and boolean columns):
UPDATE my_table
SET estab = to_date(ESTAB_DATE, 'DD-MM-YY'),  -- column estab of type date
    apprv = to_date(APPRV_DATE, 'DD-MM-YY'),  -- etc.
    ...
When the update is done, you can use ALTER TABLE to drop the text columns holding the dates (integers, booleans).
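A minimal sketch of that approach against the table above; the staging column names come from the INSERT statements, the new column names estab/apprv are made up, and NULLIF guards any empty strings:
ALTER TABLE winery_attributes
    ADD COLUMN apprv date,
    ADD COLUMN estab date;

UPDATE winery_attributes
SET apprv = to_date(NULLIF(APPRV_DATE, ''), 'DD-MM-YY'),
    estab = to_date(NULLIF(ESTAB_DATE, ''), 'DD-MM-YY');

ALTER TABLE winery_attributes
    DROP COLUMN APPRV_DATE,
    DROP COLUMN ESTAB_DATE;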