How to insert a cast conversion into a database - SQL

I've been trying to insert a varchar value into a table in SQL using a cast.
The varchar input value has a datetime string format like this:
'08/25/2022 03:34:59 PM'
The fechaInicio column is originally filled with NULL, and the purpose of the stored procedure is to update that column with the @strDateTime value sent.
Example of my table [Table_Input]:
fechaInicio | ID
NULL        | 2
If I just do a
SELECT CAST('08/25/2022 03:34:59 PM' AS DATETIME)
it works and shows me the correctly cast value in the results window. But the problem is when I try to update the table.
I removed my TRY...CATCH blocks to see the error.
If I call the stored procedure like this
[SP_Table_Input_Get_Series] '08/25/2022 03:34:59 PM', 2
I get the following error:
Msg 241, Level 16, State 1, Procedure SP_Table_Input_Get_Series, Line 34 [Batch Start Line 13]
Conversion failed when converting date and/or time from character string
My stored procedure is something like this:
CREATE PROCEDURE [SP_Table_Input_Get_Series]
    @strDateTime NVARCHAR(50),
    @cId INT
AS
BEGIN TRANSACTION
    UPDATE [Table_Input]
    SET
        --fechaInicio = convert(datetime, @strDateTime, 5),
        --fechaInicio = N'select cast(@strDateTime as datetime)',
        fechaInicio = CAST(@strDateTime AS datetime)
    WHERE id = @cId -- the WHERE clause works fine
COMMIT TRANSACTION
None of the three options (including the commented ones in the stored procedure) worked.
Another constraint: I cannot change the column type to varchar or any other type.
I would really appreciate it if someone could help me find a solution.
I'm running the stored procedure directly in Microsoft SQL Server Management Studio.

Please try the following solution.
As @AlwaysLearning pointed out, I changed the seconds from 89 to 59.
-- DDL and sample data population, start
DECLARE #tbl TABLE (ID INT IDENTITY PRIMARY KEY, fechaInicio DATETIME2(0));
INSERT #tbl (fechaInicio) VALUES
(GETDATE());
-- DDL and sample data population, end
DECLARE #strDateTime VARCHAR(50) = '08/25/2022 03:34:59 PM';
-- before
SELECT * FROM #tbl;
UPDATE #tbl
SET fechaInicio = convert(DATETIME2(0), #strDateTime, 101)
where ID = 1;
-- after
SELECT * FROM #tbl;
Output:

ID | fechaInicio
1  | 2022-08-25 15:34:59
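If the procedure has to keep accepting the value as NVARCHAR, a defensive variant of the original procedure (a sketch reusing the table, column, and parameter names from the question; CREATE OR ALTER needs SQL Server 2016 SP1 or later) can parse the string with an explicit style:

-- TRY_CONVERT with style 101 (mm/dd/yyyy) fixes the month/day order explicitly,
-- independent of the session's language/DATEFORMAT settings, and returns NULL
-- instead of raising Msg 241 when the string cannot be parsed.
CREATE OR ALTER PROCEDURE [SP_Table_Input_Get_Series]
    @strDateTime NVARCHAR(50),
    @cId INT
AS
BEGIN
    UPDATE [Table_Input]
    SET fechaInicio = TRY_CONVERT(datetime, @strDateTime, 101)
    WHERE id = @cId;
END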

Related

Count all files in a directory using datetime

I'm almost there with a script to count the files from the last 18 hours in a specific directory. We expect 12 a day, so it's useful for the person running the script to see how many files are there and to investigate based on the result.
My current script is as follows:
DECLARE @Now DATETIME = GETDATE()  -- Get now
DECLARE @TimeD INT = -18           -- Number of hours to look into the past

CREATE TABLE #regop (
    [date] datetime,
    depth  int,
    [file] int)

INSERT INTO #regop
EXECUTE master.dbo.xp_dirtree N'\\location...\', 1, 1

SELECT COUNT([file]) FROM #regop
WHERE TRY_CONVERT(date, LEFT([date], 8), 1) = CONVERT(date, GETDATE())
  AND [file] = 1 AND [date] >= DATEADD(HH, @TimeD, @Now)

DROP TABLE #regop
I'm currently getting
Msg 8114, Level 16, State 1, Procedure xp_dirtree, Line 1 [Batch Start Line 0]
Error converting data type nvarchar to datetime.
Can anyone assist where I'm going wrong?
To be honest, I don't think it is possible to retrieve the file creation date with xp_dirtree. The first output column that procedure returns is the file name, which is nvarchar, not a date; that is what causes the error. You can fix the error by changing the data type in the temporary table:
create table #regop(
--[date] dateTime,
[file_name] nvarchar(512),
depth int,
[file] int)
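Building on that fix, a minimal corrected sketch of the counting script might look like the following. Note that xp_dirtree only returns the entry name, its depth, and a file flag, so the "last 18 hours" filter cannot be derived from its output and has to come from another source:

-- xp_dirtree with the third argument = 1 returns (subdirectory, depth, file);
-- the first column is the nvarchar name of the entry, so it must not be declared as datetime.
CREATE TABLE #regop (
    [file_name] NVARCHAR(512),
    depth       INT,
    [file]      INT)

INSERT INTO #regop
EXECUTE master.dbo.xp_dirtree N'\\location...\', 1, 1

-- Count the files (file = 1) found in the directory; creation times are not
-- available from xp_dirtree, so any time-based filter needs a different mechanism.
SELECT COUNT(*) AS file_count
FROM #regop
WHERE [file] = 1

DROP TABLE #regop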

User-Defined Table Type insertion causing conversion error

Not a duplicate of User-Defined Table Type insertion sometimes causing conversion error
I have a user defined table type:
CREATE TYPE [dbo].[udtImport] AS TABLE(
    Name varchar(256) null,
    Code varchar(32) null,
    StartDate varchar(256) null,
    EndDate varchar(256) null,
    DateCreated datetime null
)
The DateCreated field is populated in my DB layer using DateTime.Now(), while all other fields are from an imported table.
When I import a file with the date fields populated I get a SQL error:
Conversion failed when converting date and/or time from character string.
Intercepting the generated code using SQL Profiler shows this:
DECLARE @p1 dbo.udtImport;
INSERT INTO @p1
VALUES
( N'Kit A1',
  N'A002',
  '2016-04-02 00:00:00.000',
  '2016-10-10 00:01:00.000',
  '2018-10-22 16:08:28.6823468' );
EXEC impSaveImport @ImportList = @p1
impSaveImport is a stored procedure that has just one parameter (the table type variable) and does a straight insert into the table [Import]. No logic, no triggers, no references to other tables.
Executing this code in SSMS shows the same error as expected.
Trimming the last 4 digits off the last DateTime field causes the insert query to succeed.
So far so good.
When I import a file with the StartDate and EndDate fields empty, I get no error, and data is successfully inserted into the Import table.
When I intercept the successful insert using profiler I get this:
DECLARE @p1 dbo.udtImport;
INSERT INTO @p1
VALUES
( N'Kit A1',
  N'A002',
  null,
  null,
  '2018-10-22 16:34:11.5243245' );
EXEC impSaveImport @ImportList = @p1
Keep in mind this query SUCCESSFULLY inserts one row into the Import table.
When I run this latest query in SSMS I get the same conversion error as before, but it ran without error from within my MVC app!
This last part has me stumped.
How can it be?
Project is using MVC with SQL2016.
You could use DATETIME2:
CREATE TYPE [dbo].[udtImport] AS TABLE(
Name varchar(256) null,
Code varchar(32) null,
StartDate varchar(256) null, -- should be datetime2 format
EndDate varchar(256) null, -- should be datetime2 format
DateCreated datetime2 null);
DECLARE @p1 dbo.udtImport;
INSERT INTO @p1 (Name, Code, StartDate, EndDate, DateCreated)
VALUES
( N'Kit A1',
  N'A002',
  '2016-04-02 00:00:00.000',
  '2016-10-10 00:01:00.000',
  '2018-10-22 16:08:28.6823468' );
db<>fiddle demo
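A quick way to see why the original datetime-typed column rejects the profiler-captured literal while DATETIME2 accepts it (a small sketch using the value from the question):

-- datetime accepts at most 3 fractional-second digits in a string literal, so the
-- first conversion fails and TRY_CONVERT returns NULL (a plain CAST would raise
-- the "Conversion failed when converting date and/or time..." error);
-- datetime2 supports up to 7 fractional digits, so the second one succeeds.
SELECT TRY_CONVERT(datetime,  '2018-10-22 16:08:28.6823468') AS as_datetime,
       TRY_CONVERT(datetime2, '2018-10-22 16:08:28.6823468') AS as_datetime2;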

Converting Oracle TIMESTAMP(6) TO SQL SERVER 2008 DATETIME2(6)

I am bulk importing a CSV file into SQL Server 2008; the CSV file was generated by exporting the table data from Oracle SQL Developer.
One column in that CSV file contains TIMESTAMP(6) data, and the corresponding column in SQL Server 2008 uses the DATETIME2(6) data type.
I am importing the CSV file using the statement below:
USE H_CLAIMS
GO
BULK INSERT H_CLAIMS.dbo.APPLICATION_QUEUES
FROM 'D:\MyWork\HC DB Work\HCAIDDB_CSV_EXPORTS\APPLICATION_QUEUES_export.CSV'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
GO
While doing the above I get the error below:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 5 (CREATED_DATE).
The sample data in the column mentioned in the error is like
21-NOV-14 08.57.51.565214000 AM
So I am looking for an answer that can overcome this issue, either with other options on the BULK INSERT statement or with a convert function that can properly convert the datetime in the sample data to SQL Server 2008 DATETIME2 format.
SQL Server doesn't know how to convert the text value '21-NOV-14 08.57.51.565214000 AM' to a DATETIME2 column. Try it in a query window:
SELECT CAST('21-NOV-14 08.57.51.565214000 AM' AS DATETIME2(6))
Note that if you're using DATETIME2(6) you'll be losing precision compared to what you're trying to import. Have a look at http://msdn.microsoft.com/en-GB/library/bb677335.aspx.
When I've had to do this coming from DB2 text files, I've done it two different ways:
1. Import the datetime field into a varchar column, then write a bit of SQL to manipulate the string into a format SQL Server can recognise. A bit slow and clunky, especially if you have a lot of data.
2. Use SSIS and create a transformation to do the string manipulation there. This has the advantage of still being able to bulk insert into the destination table, but it does mean you need access to Integration Services.
As I couldn't find any bulk insert option that would do the work for me, I went with a different approach. After many trials with CAST and CONVERT, I followed the approach below, which works as expected.
I created a function that converts the Oracle TIMESTAMP(6) string into an nvarchar value that can be inserted directly into a DATETIME2(6) column in SQL Server 2008. Below is the function.
Then I used a stored procedure that accepts the file path as an input parameter, plus a temp table to hold the nvarchar-based datetime value. In the stored procedure I used a dynamic BULK INSERT statement to load the data into the temp table and then insert it into the required table. The procedure follows the function.
CREATE FUNCTION DATETIMECONVERTER
(
    @ORACLETIMESTAMP NVARCHAR(100)
) RETURNS NVARCHAR(100)
AS
BEGIN
    DECLARE @convertedString NVARCHAR(100);
    SELECT @convertedString = REPLACE(@ORACLETIMESTAMP, '.', ':');
    RETURN STUFF(@convertedString, CHARINDEX(':', @convertedString, 18), 1, '.')
END
GO
CREATE PROCEDURE IMPORT_APPLICATION_ROLES @PATH VARCHAR(1000)
AS
IF OBJECT_ID('H_CLAIMS.DBO.TEMP_APPLICATION_ROLES', 'U') IS NOT NULL
    DROP TABLE H_CLAIMS.DBO.TEMP_APPLICATION_ROLES

CREATE TABLE H_CLAIMS.DBO.TEMP_APPLICATION_ROLES
(
    ROLE_ID INT NOT NULL,
    ROLE_NAME NVARCHAR(255),
    ROLE_DESC NVARCHAR(255),
    CREATED_BY NVARCHAR(100),
    CREATED_DATE NVARCHAR(100),
    UPDATED_BY NVARCHAR(100),
    UPDATED_DATE NVARCHAR(100)
)

DECLARE @bulkInsert NVARCHAR(4000) = 'BULK INSERT TEMP_APPLICATION_ROLES FROM ''' + @PATH + ''' WITH ( FIELDTERMINATOR ='','', ROWTERMINATOR =''\n'' )';
EXEC(@bulkInsert)

INSERT INTO APPLICATION_ROLES
    (ROLE_ID, ROLE_NAME, ROLE_DESC, CREATED_BY, CREATED_DATE, UPDATED_BY, UPDATED_DATE)
SELECT ROLE_ID, ROLE_NAME, ROLE_DESC, CREATED_BY,
       dbo.DATETIMECONVERTER(CREATED_DATE) AS CREATED_DATE,
       UPDATED_BY, dbo.DATETIMECONVERTER(UPDATED_DATE) AS UPDATED_DATE
FROM H_CLAIMS.dbo.TEMP_APPLICATION_ROLES

DROP TABLE H_CLAIMS.DBO.TEMP_APPLICATION_ROLES
GO
To execute it I used the following statement:
EXEC H_CLAIMS.DBO.IMPORT_APPLICATION_ROLES @PATH = 'D:\my_export.CSV';
Make sure the .csv files are on a drive of the server machine when executing the stored procedure.
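For reference, this is roughly what the helper function does to the sample value from the question (a sketch; the expected output is shown as a comment):

-- The dots inside the time are turned into colons, then the separator in front of
-- the fractional seconds is turned back into a dot.
SELECT dbo.DATETIMECONVERTER(N'21-NOV-14 08.57.51.565214000 AM') AS Converted;
-- Expected result: 21-NOV-14 08:57:51.565214000 AM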
I may be late to answer this, but allow me to give you my workaround (if the precision doesn't really matter).
I import the timestamp from the Oracle table into a SQL Server 2008 varchar column, then update the varchar into a format that fits DATETIME2, and finally alter the SQL table column to the DATETIME2 data type.
E.g. in case you have a timestamp like '01-JAN-15 12.00.00.000000000 AM +05:30':
update My_Table
set MyTimeStamp =
substring(MyTimeStamp, 1,10)+
REPLACE(substring(MyTimeStamp, 11, 8),'.',':')+
substring(MyTimeStamp, 19, 13)
where MyTimeStamp like '%.%.%.%';
alter table [My_Table] alter column MyTimeStamp DATETIME2;
GO
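Tracing that string surgery on the sample value (a sketch; the result is shown as a comment):

DECLARE @MyTimeStamp VARCHAR(50) = '01-JAN-15 12.00.00.000000000 AM +05:30';
SELECT SUBSTRING(@MyTimeStamp, 1, 10)
     + REPLACE(SUBSTRING(@MyTimeStamp, 11, 8), '.', ':')
     + SUBSTRING(@MyTimeStamp, 19, 13) AS Reformatted;
-- Returns: 01-JAN-15 12:00:00.000000000 AM (the time-zone offset is dropped)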

Getting error when inserting into temptable

I have a #temptable which I'm trying to populate, but it's not working.
DECLARE
    @nBranchId int
    ,@tmStartDate datetime
    ,@tmEndDate datetime

SELECT @nBranchId = 3483
    ,@tmStartDate = DATEADD(DAY, -10, GETDATE())
    ,@tmEndDate = GETDATE()
CREATE table #temptable (
nResultsId int
,nInstrId int
,nBranchId int
,nFoldersId int
,strPaperId varchar(50)
,strPosName varchar(50)
,fQuantity float
,fRevaluationPrice float
,fHistRevaluationPrice float
,tmDate datetime
,nPrevResultsId int
)
INSERT INTO #temptable
SELECT
xpr.nResultsId
,xpr.nInstrId
,xpr.nBranchId
,xpr.nFoldersId
,xpr.strPaperId
,xpr.strPosName
,xpr.fQuantity
,xpr.fRevaluationPrice
,xpr.fHistRevaluationPrice
,xpr.tmDate
,nPrevResultsId = dbo.fnGetPrevTradeResultId(xpr.nBranchId, xpr.nInstrId, xpr.strPaperId, xpr.strPosName,xpr.tmDate, xpr.nFoldersId)
FROM dbo.XP_Results AS xpr WITH(READUNCOMMITTED)
WHERE 1 = 1
AND xpr.nBranchId = ISNULL(@nBranchId, xpr.nBranchId)
AND xpr.tmDate BETWEEN @tmStartDate AND @tmEndDate
AND xpr.nInstrId <> 18
DROP table #temptable
Getting this error:
Msg 8152, Level 16, State 14, Line 28
String or binary data would be truncated.
The statement has been terminated.
Where am I going wrong? I have looked and looked but can't solve it.
You have data types of different lengths.
To avoid this problem, use a SELECT INTO statement: #temptable will be created automatically with the correct data types (extra benefit: you don't have to script the CREATE statement).
DECLARE
    @nBranchId int
    ,@tmStartDate datetime
    ,@tmEndDate datetime

SELECT @nBranchId = 3483
    ,@tmStartDate = DATEADD(DAY, -10, GETDATE())
    ,@tmEndDate = GETDATE()
SELECT xpr.nResultsId
,xpr.nInstrId
,xpr.nBranchId
,xpr.nFoldersId
,xpr.strPaperId
,xpr.strPosName
,xpr.fQuantity
,xpr.fRevaluationPrice
,xpr.fHistRevaluationPrice
,xpr.tmDate
,nPrevResultsId = dbo.fnGetPrevTradeResultId(xpr.nBranchId, xpr.nInstrId, xpr.strPaperId, xpr.strPosName,xpr.tmDate, xpr.nFoldersId)
INTO #temptable
FROM dbo.XP_Results AS xpr WITH(READUNCOMMITTED)
WHERE 1 = 1
AND xpr.nBranchId = ISNULL(@nBranchId, xpr.nBranchId)
AND xpr.tmDate BETWEEN @tmStartDate AND @tmEndDate
AND xpr.nInstrId <> 18
DROP table #temptable
This should be fixed by changing these two columns to look like the following. Most likely you are trying to insert varchar values longer than 50 characters into a varchar(50) column.
strPaperId varchar(max),
strPosName varchar(max)
That means that one of your columns contains data that is larger than the data type size you declared for the temp table column.
For example, if you have a temp table column of varchar(2) and then try to insert the value '123', you get that error message because the value being inserted is longer than the size of the column you are inserting into. Note that this message can apply to any data type.
Find the temp table column with the problem and increase the size to the size in the actual table.
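One way to find it, in the context of the question above, is to compare the longest values in the source table with the 50-character limits declared for the temp table (a sketch using the column names from the question; any MAX above 50 is the culprit):

SELECT MAX(LEN(strPaperId)) AS max_strPaperId,
       MAX(LEN(strPosName)) AS max_strPosName
FROM dbo.XP_Results;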

Column name or number of supplied values does not match table definition

In SQL Server, I am trying to insert values from one table into another using the query below:
delete from tblTable1
insert into tblTable1 select * from tblTable1_Link
I am getting the following error:
Column name or number of supplied values does not match table definition.
I am sure that both the tables have the same structure, same column names and same data types.
They don't have the same structure... I can guarantee they are different.
I know you've already created it... otherwise you would get "There is already an object named 'tblTable1' in the database".
What you may want is this (which also fixes your other issue):
Drop table tblTable1
select * into tblTable1 from tblTable1_Link
I also want to mention that if you have something like
insert into blah
select * from blah2
and blah and blah2 are identical, keep in mind that a computed column will throw this same error...
I realized this when the above failed and the following worked:
insert into blah (cola, colb, colc)
select cola, colb, colc from blah2
In my example it was a fullname field (computed from first and last name, etc.).
For inserts it is always better to specify the column names; see the following:
DECLARE @Table TABLE(
    Val1 VARCHAR(MAX)
)
INSERT INTO @Table SELECT '1'
works fine; changing the table definition to the following causes the error:
DECLARE @Table TABLE(
    Val1 VARCHAR(MAX),
    Val2 VARCHAR(MAX)
)
INSERT INTO @Table SELECT '1'
Msg 213, Level 16, State 1, Line 6
Insert Error: Column name or number of supplied values does not match table definition.
But changing the above to
DECLARE @Table TABLE(
    Val1 VARCHAR(MAX),
    Val2 VARCHAR(MAX)
)
INSERT INTO @Table (Val1) SELECT '1'
works. You need to be specific about the columns you supply.
Supply the structures and we can have a look.
The problem is that you are trying to insert data without listing the columns, and SQL Server gives you that error message.
Error-prone: insert into users values ('1', '2', '3') -- works only as long as the table has exactly three columns
If you have 4 columns but only want to insert into 3 of them:
Correct: insert into users (firstName, lastName, city) values ('Tom', 'Jones', 'Miami')
Beware of triggers. Maybe the issue is with some operation in the trigger for inserted rows.
Dropping the table was not an option for me, since I'm keeping a running log. If every time I needed to insert I had to drop, the table would be meaningless.
My error occurred because I had a couple of columns in the CREATE TABLE statement that were computed from other columns; changing these fixed my problem. E.g.:
create table foo (
    field1 int
    ,field2 int
    ,field12 as field1 + field2 )

create table copyOfFoo (
    field1 int
    ,field2 int
    ,field12 as field1 + field2 ) -- this is the problem, it should just be 'int'

insert into copyOfFoo
SELECT * FROM foo
The computed columns cause the problem. Do not use SELECT *; you must list each field after SELECT, except the computed fields.
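A fixed version of that insert lists only the base columns and lets each table compute field12 for itself (a small sketch):

-- field12 is computed in both tables, so only the base columns are inserted
insert into copyOfFoo (field1, field2)
select field1, field2
from foo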
Some sources of this issue are:
1. Identity column
2. Computed column
3. Different structure
So check those three; I found my issue was the second one.
For me the culprit was the int value assigned to Salary:
Insert into Employees(ID,FirstName,LastName,Gender,Salary) values(3,'Canada', 'pa', 'm',15,000)
When we write 15,000 in the Salary position, the parser reads it as two values, 15 and 000, so the number of supplied values no longer matches the column list.
This correction works fine for me:
Insert into Employees(ID,FirstName,LastName,Gender,Salary) values(4,'US', 'sam', 'm',15000)
Update for SQL Server 2016/2017/…
We have some stored procedures in place to import and export databases. In the SPs we use (amongst other things) RESTORE FILELISTONLY FROM DISK, for which we create a temp table #restoretemp to receive the result of the restore-from-file command.
With SQL Server 2016, Microsoft added a field SnapshotURL nvarchar(360) (the Azure restore URL), which caused the error message. After I added the extra field, the restore worked again.
Code snippet (see the last field):
SET @query = 'RESTORE FILELISTONLY FROM DISK = ' + QUOTENAME(@BackupFile, '''')
CREATE TABLE #restoretemp
(
LogicalName nvarchar(128)
,PhysicalName nvarchar(128)
,[Type] char(1)
,FileGroupName nvarchar(128)
,[Size] numeric(20,0)
,[MaxSize] numeric(20,0)
,FileID bigint
,CreateLSN numeric(25,0)
,DropLSN numeric(25,0) NULL
,UniqueID uniqueidentifier
,ReadOnlyLSN numeric(25,0)
,ReadWriteLSN numeric(25,0)
,BackupSizeInByte bigint
,SourceBlockSize int
,FilegroupID int
,LogGroupGUID uniqueidentifier NULL
,DifferentialBaseLSN numeric(25,0)
,DifferentialbaseGUID uniqueidentifier
,IsReadOnly bit
,IsPresent bit
,TDEThumbprint varbinary(32)
-- Added field 01.10.2018 needed from SQL Server 2016 (Azure URL)
,SnapshotURL nvarchar(360)
)
INSERT #restoretemp EXEC (@query)
SET @errorstat = @@ERROR
IF @errorstat <> 0
BEGIN
    IF @Rueckgabe = 0 SET @Rueckgabe = 6
END
PRINT @Rueckgabe
Check your ID column. Is it an identity? If it is, make sure it is declared as ID NOT NULL IDENTITY(1,1).
And before creating your table, drop the existing table and then create it again.
The problem I had that caused this error was that I was trying to insert null values into a NOT NULL column.
I had the same problem, and the way I worked around it is probably not ideal, but it is working now.
It involves creating a linked server and using dynamic SQL; if anyone can suggest something better, please comment/answer.
declare @sql nvarchar(max)
DECLARE @DB_SPACE TABLE (
[DatabaseName] NVARCHAR(128) NOT NULL,
[FILEID] [smallint] NOT NULL,
[FILE_SIZE_MB] INT NOT NULL DEFAULT (0),
[SPACE_USED_MB] INT NULL DEFAULT (0),
[FREE_SPACE_MB] INT NULL DEFAULT (0),
[LOGICALNAME] SYSNAME NOT NULL,
[DRIVE] NCHAR(1) NOT NULL,
[FILENAME] NVARCHAR(260) NOT NULL,
[FILE_TYPE] NVARCHAR(260) NOT NULL,
[THE_AUTOGROWTH_IN_KB] INT NOT NULL DEFAULT(0)
,filegroup VARCHAR(128)
,maxsize VARCHAR(25)
PRIMARY KEY CLUSTERED ([DatabaseName] ,[FILEID] )
)
SELECT @SQL = 'SELECT [DatabaseName],
[FILEID],
[FILE_SIZE_MB],
[SPACE_USED_MB],
[FREE_SPACE_MB],
[LOGICALNAME],
[DRIVE],
[FILENAME],
[FILE_TYPE],
[THE_AUTOGROWTH_IN_KB]
,filegroup
,maxsize FROM OPENQUERY('+ QUOTENAME('THE_MONITOR') + ','''+ ' EXEC MASTER.DBO.monitoring_database_details ' +''')'
exec sp_executesql @sql
INSERT INTO @DB_SPACE(
[DatabaseName],
[FILEID],
[FILE_SIZE_MB],
[SPACE_USED_MB],
[FREE_SPACE_MB],
[LOGICALNAME],
[DRIVE],
[FILENAME],
[FILE_TYPE],
THE_AUTOGROWTH_IN_KB,
[filegroup],
maxsize
)
EXEC SP_EXECUTESQL @SQL
This is working for me now.
I can guarantee the number of columns and type of columns returned by the stored procedure are the same as in this table, simply because I return the same table from the stored procedure.
In my case, I had:
insert into table1 one
select * from same_schema_as_table1 same_schema
left join...
and I had to change select * to select same_schema.*.
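Spelled out with a placeholder join (the joined table and the join condition here are hypothetical), the difference looks like this:

-- select * would also return the joined table's columns;
-- same_schema.* restricts the column list to the table that matches table1
insert into table1
select same_schema.*
from same_schema_as_table1 same_schema
left join other_table o on o.id = same_schema.id  -- hypothetical join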
You're missing the column names after TableName in the insert query:
INSERT INTO TableName (Col_1, Col_2, Col_3) VALUES (val_1, val_2, val_3)
In my case, the problem was that the stored procedure I was executing returned two result sets, and only the second result set matched the table definition.
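A minimal illustration of that failure mode (the procedure and table here are hypothetical): INSERT ... EXEC tries to insert every result set the procedure returns, so each one has to match the target table definition.

-- Hypothetical procedure returning two result sets of different shapes
CREATE PROCEDURE dbo.TwoResultSets
AS
BEGIN
    SELECT 'only one column' AS a;            -- does not match the target table
    SELECT 1 AS id, 'two columns' AS name;    -- matches the target table
END
GO

CREATE TABLE #target (id INT, name VARCHAR(50));
-- Fails because the first result set has only one column
INSERT INTO #target EXEC dbo.TwoResultSets;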