Colon (:) and dot (.) as millisecond separator in datetime2 - SQL

I have migrated a Sybase database to SQL Server 2008.
The main application that uses the database is trying to set some datetime2 columns with data like 1986-12-24 16:56:57:81000, which gives this error:
Conversion failed when converting date and/or time from character string.
Running the same query using a dot (.) instead of a colon (:) as the millisecond separator, like 1986-12-24 16:56:57.81000, or limiting the milliseconds to 3 digits, like 1986-12-24 16:56:57:810, solves the problem.
NOTE:
1. I don't have access to the application's source code to fix this issue, and there are lots of tables with the same problem.
2. The application connects to the database using an ODBC connection.
Is there any quick solution, or should I write lots of triggers on all the tables to fix it using the above workarounds?
Thanks in advance

As Gordon Linoff said:
A trigger on the current table is not going to help because the type
conversion happens before the trigger is called. Think of how the
trigger works: the data is available in a "protorow".
But there is a simple answer!
Using a SQL Server Native Client connection instead of the basic SQL Server ODBC driver handles everything.
Note:
1. Since I am on SQL Server 2008, version 10 of the SQL Server Native Client works fine, but not version 11 (that one is for SQL Server 2012).
2. The "Use regional settings" option causes other conversion problems, so don't enable it unless you need it.
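For reference, the switch is just a matter of which ODBC driver the DSN or connection string points at. A sketch of a connection string using the Native Client driver (server and database names are placeholders):
Driver={SQL Server Native Client 10.0};Server=myServer;Database=myDb;Trusted_Connection=yes;
The legacy driver is the one simply named "SQL Server"; pointing the application's DSN at the Native Client driver requires no application changes.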

SELECT REPLACE(GETDATE(), ':', '.')
But this gives you the datetime as a string, which is not converted back into a DateTime value.

Why would you need triggers? You can use update to change the last ':' to '.':
update t
set col = stuff(col, 20, 1, '.');
You also mistakenly describe the column as datetime2. That uses an internal date/time format. Your column is clearly a string.
EDIT:
I think I misinterpreted the question (assuming the data is already in a table). Bring the data into staging tables and do the conversion in another step.
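For example, a minimal sketch of that staging step, assuming a staging table dbo.Staging that holds the raw string in MyDateRaw and a target table dbo.Target with a real datetime2 column (all names are placeholders):
-- Swap the colon before the milliseconds for a dot while moving rows out of staging
-- (STUFF position 20 assumes strings shaped exactly like 1986-12-24 16:56:57:81000)
INSERT INTO dbo.Target (MyDate)
SELECT CONVERT(datetime2, STUFF(MyDateRaw, 20, 1, '.'))
FROM dbo.Staging;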
A trigger on the current table is not going to help because the type conversion happens before the trigger is called. Think of how the trigger works: the data is available in a "protorow".
You could get a trigger to work by creating views and building a trigger on a view, but that is even worse. Perhaps the simplest solution would be:
1. Change the name and data type of the column so it contains a string.
2. Add a computed column that converts the value to datetime2 (see the sketch below).
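One way to read those two steps, as a minimal sketch: keep the column the application writes to as a plain string, and expose the parsed value under a new name. Here dbo.MyTable, MyDate and MyDateConverted are placeholder names, and STUFF(..., 20, 1, '.') assumes the incoming strings look exactly like 1986-12-24 16:56:57:81000:
-- Let the application keep writing its colon-separated strings...
ALTER TABLE dbo.MyTable ALTER COLUMN MyDate varchar(30);
-- ...and read a proper datetime2 from a computed column instead
ALTER TABLE dbo.MyTable ADD MyDateConverted AS CONVERT(datetime2, STUFF(MyDate, 20, 1, '.'));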

Related

Pulling a SQL Server date to be used in Oracle query within SSIS environment is ignored in the Oracle query

I am having trouble with my Oracle query, which uses an SSIS variable holding a date pulled from SQL Server.
I am using an Execute SQL Task that simply gets a max date from a SQL Server table and stores it in a variable, e.g.
SELECT MAX(t.Date) FROM table t;
I then want to use that variable in my Oracle query, which is an ADO.NET source connection. I noticed you can't parameterize queries in those connections and found the workaround where you build the SQL as an expression with your user variable in it. So now my Oracle source query looks something like this:
"SELECT DISTINCT t.* FROM table t WHERE TO_CHAR(t.LastUpdateDate, 'YYYY-MM-DD') > " + "'#[User::LastUpdateDate]'"
The query syntax itself is fine, but when I run it, it is pulling all rows and seems to be completely ignoring the where clause of the date.
I've tried removing the TO_CHAR from LastUpdateDate.
I've tried adding a TO_CHAR to my user variable #[User::LastUpdateDate].
I've tried using the CONVERT() function from SQL Server on #[User::LastUpdateDate].
Nothing seems to work and the query just runs and pulls in all data as if I don't have the WHERE clause on the query.
Does anyone know how to rectify this issue or point out what I might be doing wrong?
Thank you for any and all help!
EDIT:
My date being pulled from SQL Server is in this format: 2022-09-01 20:17:58.0000000
This is not an answer, just troubleshooting advice
You do not say what data type #[User::LastUpdateDate] is; I'll assume it's a datetime.
Ideally all datetime data should be kept in datetime data types; then format becomes completely irrelevant. However, since it's difficult to parameterise Oracle queries in SSIS, you have to concoct a string to be submitted. Now date format does become important.
On to something a little different: it is a very good habit, performance-wise, to not put functions around columns that you are searching on. This is called sargability - look it up.
Given these things, I suggest that you concoct your required SQL query bit by bit and troubleshoot.
First, format your date parameter as a date string you can embed in the Oracle query. Remember, this is normally a bad and unnecessary thing. We are only doing it because we have to concoct a SQL string.
So create another SSIS variable called strLastUpdateDate and put this hideous expression in it:
RIGHT("0" + (DT_STR,2,1252)DATEPART( "dd" , #[User::LastUpdateDate] ), 2) + '-' +
(DT_STR,3,1252)DATEPART( "mmm" , #[User::LastUpdateDate] ) + '-' +
(DT_STR,4,1252)DATEPART("yyyy" , #[User::LastUpdateDate] )
Yes this is ludicrously long code but it will turn your date variable into a Oracle string literal. You could simplify this by putting it into your original max query but lets not go there. Use whatever debugging technique you have to confirm that it works as expected.
Now you should be able to use this:
"SELECT t.*, '"+#[User::LastUpdateDate]+"' As MyStrDate FROM table t WHERE
t.LastUpdateDate > '" #[User::strLastUpdateDate] + "'"
You can try running that and see if it makes any difference. Make sure you use this https://dba.stackexchange.com/questions/8828/how-do-you-show-sql-executing-on-an-oracle-database to monitor what is actually being submitted to Oracle.
This is all from memory and googling - I haven't done SSIS for many years now
I suspect that after all this you may still have the same problem, because I recall having the same mysterious issue many years ago.
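If the plain string comparison is still ignored, a more explicit variant (my own sketch, using the DD-MM-YYYY string built above and the same placeholder table and column names) is to wrap the literal in Oracle's TO_DATE with a matching format mask, so nothing depends on implicit conversion or NLS settings:
"SELECT t.* FROM table t WHERE t.LastUpdateDate > TO_DATE('" + #[User::strLastUpdateDate] + "', 'DD-MM-YYYY')"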

Is there a way to set a nullable timestamp back to null?

I have a nullable timestamp column in a table that tracks when the entry was last fetched by a client. Sometimes something goes wrong on the client side and I need to set the timestamp back to null. I tried to execute this query directly in SQL Server Management Studio:
USE [MyDB]
GO
UPDATE [dbo].[MyTable]
SET [MyTimestamp]=null
WHERE ID=SomeInt;
I get the message that one row was affected, but when I refresh my SELECT * on the table there is no change to the timestamp.
PS: The whole DB runs on an Azure server, but I also cannot get it to work on my test DB on localhost in SQL Server 2014.
Would be grateful for input 
The answer is that you cannot set a timestamp column to NULL; it behaves like a row version number.
Also
The timestamp data type is just an incrementing number and does not
preserve a date or a time.
There are some workarounds you can use, like the one in the related thread, but these days the timestamp data type is rarely used for this kind of tracking.
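If the goal is only to record and reset when a client last fetched the row, a common alternative (a sketch with placeholder names, not taken from the linked thread) is a plain nullable datetime2 column, which, unlike timestamp/rowversion, can be set and cleared freely:
-- Track the fetch time in an ordinary nullable column
ALTER TABLE dbo.MyTable ADD CalledAt datetime2 NULL;

DECLARE @SomeId int = 42;
UPDATE dbo.MyTable SET CalledAt = SYSUTCDATETIME() WHERE ID = @SomeId; -- mark as fetched
UPDATE dbo.MyTable SET CalledAt = NULL WHERE ID = @SomeId;             -- reset after a client-side failure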

Select GetDate() via VB.net different than Select GetDate() via SSMS

When I use the below code:
Dim cmd As New OdbcCommand("SELECT GETDATE()", oConn)
retVal = cmd.ExecuteScalar()
The resulting output is:
8/1/2013 10:10:39 AM
When I run the exact same query directly in Management Studio I get:
2013-08-01 10:10:39.317
When I check my computer settings versus the SQL Server settings they match.
Anyone know what I need to do to ensure it matches?
Specifically I am talking about the Date format difference.
If you want the date output with a specific string format, then you can use CONVERT() with a style number. For example:
SELECT CONVERT(CHAR(20), GETDATE(), 22),
CONVERT(CHAR(23), GETDATE(), 21);
Results:
-------------------- -----------------------
08/01/13 10:53:54 AM 2013-08-01 10:53:54.943
However, if you are using the date for things other than direct display, only apply that formatting when you are displaying it. For all other purposes it should remain a datetime type and should not be converted to a string.
As for the differences in the actual time value, it's not clear what problem you're talking about, but I suspect you simply ran these queries half an hour apart. If those were run at or around the same time, it looks like the server is half an hour fast - maybe it's in a different time zone or maybe it's just a lot of drift or someone not bothering to use a time service. Your application should never use the time / time zone of the client, especially if it's distributed - always use the time on the server.
Dates have no format. Format comes into play only when you convert dates to a string. The format used depends on who does the conversion: the server or the client?
Your VB.NET query returns a date from the server and converts it to a string when you write it to the console, a form or whatever. VB.NET uses your program's CurrentCulture, whose defaults come from the current user's regional settings.
When you display data in SSMS, an ISO format is used so there is no ambiguity when you edit the data.
When you compare date and string values in a query, either explicitly by converting a date to a string or implicitly because you just typed MyDate = '13/1/2013', the string is interpreted using the session's language and DATEFORMAT settings, which are inherited from the login's default language.
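A quick way to see that it's the session settings doing the work: the same literal parses to two different dates under different DATEFORMAT settings.
SET DATEFORMAT mdy;
SELECT CONVERT(datetime, '03/01/2013'); -- March 1, 2013
SET DATEFORMAT dmy;
SELECT CONVERT(datetime, '03/01/2013'); -- January 3, 2013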
Try this:
net time \\SERVER_NAME
Note: Obviously SERVER_NAME is the name of your SQL Server machine.
Do you see a 30 minute difference in the result of that call?
I looked deeper into the code and found that some enterprising fellow had added code to a line of SQL later in the process which forces DMY format on that query.
So the code in the VB app is returning the proper date on the app machine, which means that there must be a difference between my computer and the app machine.
Another coder ran into the same issue, and their solution was to add the below code to the SQL that was pulling from the DB.
SET DATEFORMAT dmy
This forces SQL Server to use DMY format. I removed this code, compiled, and ran the EXE from the server machine, and my issue dried up!
Thanks for everyone's help.
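A side note, not part of the original fix: date literals in the unseparated yyyymmdd form (or ISO 8601 with a T) are interpreted the same way regardless of SET DATEFORMAT or language settings, so SQL built with those formats avoids this class of problem entirely. For example:
SET DATEFORMAT dmy;
SELECT CONVERT(datetime, '20130801');            -- always August 1, 2013
SELECT CONVERT(datetime, '2013-08-01T10:53:54'); -- always August 1, 2013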

SSIS getdate into DateTimeOffset column - data value overflowed the type

I have an SSIS package. The source is a SQL query. The destination is a table. The package worked until I changed a column in a destination table from datetime to datetimeoffset(0).
Now, all records fail with a "Conversion failed because the data value overflowed the type used by the provider" error on this particular column.
The value in the source query is getdate(). I tried TODATETIMEOFFSET(getdate(),'-05:00') without success.
In fact, the only thing that has worked so far is to hard code the following into the source query:
cast('3/14/12' as datetime)
The only other interesting piece of information is that the package worked fine when running the source query against another server implying that maybe a setting is involved - but I see no obvious differences between the two servers.
I was going to suggest adding a Data Conversion component to deal with it, but since you changed only the destination, you can simply change your source query to:
select cast(YOUR_DATE_COLUMN as datetimeoffset(0))
In case anyone else is looking, we found an alternative solution that works if the source is in SQL 2005 (no support for datetimeoffset).
select dateAdd(minute,datediff(minute,0,getutcdate()),0)
The intent is to reduce the precision. Granted, I also lose seconds, but if I try the above line with second precision I get an overflow error.
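For what it's worth, that overflow happens because DATEDIFF(second, 0, GETUTCDATE()) counts seconds from 1900-01-01 and exceeds the int range. Counting from a more recent anchor date (a sketch, not from the original answer) keeps second precision on SQL Server 2005:
SELECT DATEADD(second, DATEDIFF(second, '20000101', GETUTCDATE()), '20000101');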

FileMaker TimeStamp field UPDATE using SQL

I need to update a FileMaker timestamp field with a timestamp taken from PHP, passed into a script using the PHP API and the ExecuteSQL plugin,
so
UPDATE table SET time ='2011-05-27 11:28:57'
My question is as follows: how do I use the available scripting functions within FileMaker Pro 11 to convert the string supplied in the SQL statement to an acceptable timestamp format for FileMaker? Or is it possible to do the conversion within the ExecuteSQL() function of the Execute SQL plugin?
I haven't tried it out, but it should work using CAST:
CAST( expression AS type [ (length) ] )
so, it should read:
UPDATE table SET time = CAST ('2011-05-27 11:28:57' AS TIMESTAMP)
However, please be aware that FileMaker's own ExecuteSQL() function doesn't support UPDATE or INSERT INTO statements. You need to get a free extension from Dracoventions called epSQLExecute() in order to do this.
Hope this helps (someone).
Gary
You haven't given us much to go on, but my guess would be that you are updating a timestamp column with a string that does not match the required format.
You should convert your string to the appropriate object and then the update should work.
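If CAST is rejected by the SQL parser, another thing worth trying (an untested sketch; check the FileMaker SQL reference for your version) is the SQL-standard timestamp literal syntax that FileMaker's SQL dialect documents:
UPDATE table SET time = TIMESTAMP '2011-05-27 11:28:57'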