I have a #temptable which I'm trying to populate, but it's not working.
DECLARE
@nBranchId int
,@tmStartDate datetime
,@tmEndDate datetime
SELECT @nBranchId = 3483
,@tmStartDate = DATEADD(DAY, -10, GETDATE())
,@tmEndDate = GETDATE()
CREATE table #temptable (
nResultsId int
,nInstrId int
,nBranchId int
,nFoldersId int
,strPaperId varchar(50)
,strPosName varchar(50)
,fQuantity float
,fRevaluationPrice float
,fHistRevaluationPrice float
,tmDate datetime
,nPrevResultsId int
)
INSERT INTO #temptable
SELECT
xpr.nResultsId
,xpr.nInstrId
,xpr.nBranchId
,xpr.nFoldersId
,xpr.strPaperId
,xpr.strPosName
,xpr.fQuantity
,xpr.fRevaluationPrice
,xpr.fHistRevaluationPrice
,xpr.tmDate
,nPrevResultsId = dbo.fnGetPrevTradeResultId(xpr.nBranchId, xpr.nInstrId, xpr.strPaperId, xpr.strPosName,xpr.tmDate, xpr.nFoldersId)
FROM dbo.XP_Results AS xpr WITH(READUNCOMMITTED)
WHERE 1 = 1
AND xpr.nBranchId = ISNULL(@nBranchId, xpr.nBranchId)
AND xpr.tmDate BETWEEN @tmStartDate AND @tmEndDate
AND xpr.nInstrId <> 18
DROP table #temptable
Getting this error:
Msg 8152, Level 16, State 14, Line 28
String or binary data would be truncated.
The statement has been terminated.
Where am I going wrong? I've looked and looked but can't solve it.
You have different-length data types.
To avoid this problem, use a SELECT ... INTO statement.
#temptable will be created automatically with the correct data types (extra benefit: you don't have to script the CREATE statement).
DECLARE
@nBranchId int
,@tmStartDate datetime
,@tmEndDate datetime
SELECT @nBranchId = 3483
,@tmStartDate = DATEADD(DAY, -10, GETDATE())
,@tmEndDate = GETDATE()
SELECT xpr.nResultsId
,xpr.nInstrId
,xpr.nBranchId
,xpr.nFoldersId
,xpr.strPaperId
,xpr.strPosName
,xpr.fQuantity
,xpr.fRevaluationPrice
,xpr.fHistRevaluationPrice
,xpr.tmDate
,nPrevResultsId = dbo.fnGetPrevTradeResultId(xpr.nBranchId, xpr.nInstrId, xpr.strPaperId, xpr.strPosName,xpr.tmDate, xpr.nFoldersId)
INTO #temptable
FROM dbo.XP_Results AS xpr WITH(READUNCOMMITTED)
WHERE 1 = 1
AND xpr.nBranchId = ISNULL(@nBranchId, xpr.nBranchId)
AND xpr.tmDate BETWEEN @tmStartDate AND @tmEndDate
AND xpr.nInstrId <> 18
DROP table #temptable
Likely what is going on is that you are trying to insert varchar values longer than 50 characters into a varchar(50) column. It should be fixed by changing these two columns to look like this:
strPaperId varchar(max),
strPosName varchar(max)
That error means that one of your columns has data that is larger than the size you declared for the corresponding temp table column.
For example, if you have a temp table column of varchar(2) and then try to insert the value '123', you would get that error message because the value being inserted is longer than the column it is being inserted into. Note that this error can be raised by any of the string or binary columns.
Find the temp table column with the problem and increase its size to match the size in the actual table.
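A quick way to find the culprit is to compare the longest values in the source against the sizes declared in #temptable. A minimal sketch, using the two varchar columns from the posted query (adjust the list if other columns are suspect):
SELECT
MAX(LEN(strPaperId)) AS max_len_strPaperId -- declared as varchar(50) in #temptable
,MAX(LEN(strPosName)) AS max_len_strPosName -- declared as varchar(50) in #temptable
FROM dbo.XP_Results;
Whichever result comes back greater than 50 points to the column whose declared size needs to be increased.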
I've been trying to insert a varchar value into a table in SQL using a cast.
The varchar input value has a string datetime format like this:
'08/25/2022 03:34:59 PM'
The fechaInicio column is originally filled with NULL, and the purpose of the stored procedure is to update that column with the @strDateTime value sent.
Example of my table [Table_Input]:
fechaInicio | ID
----------------
NULL        | 2
If I just do a
SELECT CAST('08/25/2022 03:34:59 PM' AS DATETIME)
it actually works and shows me the correct casting in the message window. But the problem is when I try to update into the table.
I removed my try-except commands to see the error.
If I call the stored procedure like this
[SP_Table_Input_Get_Series] '08/25/2022 03:34:59 PM', 2
I get the following error:
Msg 241, Level 16, State 1, Procedure SP_Table_Input_Get_Series, Line 34 [Batch Start Line 13]
Conversion failed when converting date and/or time from character string
My stored procedure is something like this:
PROCEDURE [SP_Table_Input_Get_Series]
#strDateTime NVARCHAR(50),
#cId int
AS
BEGIN TRANSACTION
UPDATE [Table_Input]
SET
---fechaInicio =convert(datetime, #strDateTime, 5),
---fechaInicio = N'select cast(#strDateTime as datetime)'
fechaInicio = CAST(#strDateTime AS datetime)
WHERE id = #cId -- the where clause works fine
COMMIT TRANSACTION
None of the three options (including the commented ones in the stored procedure) worked.
Also, a constraint is that I cannot modify the column type to varchar or any other type.
I would really appreciate it if someone could help me find a solution.
I'm running the stored procedure directly in Microsoft SQL Server Management Studio.
Please try the following solution.
As @AlwaysLearning pointed out, I changed 89 to 59 seconds.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, fechaInicio DATETIME2(0));
INSERT @tbl (fechaInicio) VALUES
(GETDATE());
-- DDL and sample data population, end
DECLARE @strDateTime VARCHAR(50) = '08/25/2022 03:34:59 PM';
-- before
SELECT * FROM @tbl;
UPDATE @tbl
SET fechaInicio = convert(DATETIME2(0), @strDateTime, 101)
where ID = 1;
-- after
SELECT * FROM @tbl;
Output
ID | fechaInicio
------------------------
1  | 2022-08-25 15:34:59
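If you want to check whether a given string will convert before running the UPDATE, TRY_CONVERT (available from SQL Server 2012) returns NULL instead of raising error 241. A small sketch with the same parameter value:
DECLARE @strDateTime NVARCHAR(50) = '08/25/2022 03:34:59 PM';
-- NULL here would mean the string cannot be converted with style 101
SELECT TRY_CONVERT(DATETIME2(0), @strDateTime, 101) AS parsed_value;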
I'm trying to convert this data in my database to int, based on the code below from another thread. As I'm pretty new to SQL, I just wanted to try converting the column "Web ID" to understand the code and not have too many errors.
I'm using the following code, but still get an error:
select *
into #tmp
from [db].[desktop-order-data]
truncate table [db].[desktop-order-data]
alter table [db].[desktop-order-data]
alter column ["Web ID"] int
insert [db].[desktop-order-data]
select cast(["Web ID"] as int)
from #tmp
drop table #tmp
When I run this, I get this error:
Msg 213, Level 16, State 1, Line 4
Column name or number of supplied values does not match table definition.
I'm a bit confused as I'm still referencing the same table as in the query. What should I be referencing here?
You should simply be able to do:
alter table [db].[desktop-order-data] alter column ["Web ID"] int;
For your code, you want an update, not an insert. You don't need a temporary table. Just do:
update [db].[desktop-order-data]
set ["Web ID"] = cast(["Web ID"] as int);
If this causes a problem, then find the rows that don't convert. In SQL Server, you can do:
select d.*
from [db].[desktop-order-data] d
where try_cast(["Web ID"] as int) is null and ["Web ID"] is not null;
If you actually have double quotes around the values, then you need to remove them. I would recommend:
-- remove double quotes
update [db].[desktop-order-data]
set ["Web ID"] = replace(["Web ID"], '"', '');
-- check that the resulting values are convertible
select *
from [db].[desktop-order-data] d
where try_cast(["Web ID"] as int) is null and ["Web ID"] is not null;
-- if the above query returns no rows, then
update [db].[desktop-order-data]
set ["Web ID"] = cast(["Web ID"] as int);
The range for int is:
-2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647)
Your Web ID numbers are 10 digits long and start with 4 - they're not going to fit.
You might try bigint instead.
You can't fit a number on the order of 4363155167 into int, which goes from -2147483648 to 2147483647. Can you use bigint here?
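For example (the int version is left commented out because it fails with an overflow during conversion):
-- SELECT CAST('4363155167' AS INT); -- fails: the value is outside the int range
SELECT CAST('4363155167' AS BIGINT) AS as_bigint; -- 4363155167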
You're getting that error because there is more than one column in [desktop-order-data] and you are trying to insert only 1 column, without specifying the target column.
Just change your insert statement to:
INSERT INTO [db].[desktop-order-data] ([Web ID])
SELECT CAST(["Web ID"] as int)
FROM #tmp
I'm dealing with a table in which a bunch of arbitrary settings are stored as VARCHAR(255) values. The particular one I'm tasked with dealing with is a sequence number that needs to be incremented and returned to the caller. (Again, note that the sequence "number" is stored as VARCHAR, which is something I don't have any control over).
Because it's a sequence number, I don't really want to select and update in separate steps. When I've dealt with this sort of thing in the past with actual numeric fields, my method has been something like
UPDATE TABLE SET @SEQ_NUM = VALUE = VALUE + 1
which increments the value and gives me the updated value in one swell foop. I thought in this situation, I'd try the same basic thing with casts:
DECLARE @SEQ_NUM VARCHAR(255)
UPDATE SOME_TABLE
SET @SEQ_NUM = VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
WHERE NAME = 'SOME_NAME'
The actual update works fine so long as I don't try to assign the result to the variable; as soon as I do, I receive the following error:
Msg 549, Level 16, State 1, Line 4 The collation
'SQL_Latin1_General_CP1_CI_AS' of receiving variable is not equal to
the collation 'Latin1_General_BIN' of column 'VALUE'.
I understand what that means, but I don't understand why it's happening, or by extension, how to remedy the issue.
As an aside to fixing the specific error, I'd welcome suggestions for alternative approaches to incrementing a char sequence "number".
From one of the comments, sounds like you may have already hit on this, but here's what I would recommend:
UPDATE TABLE
SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
OUTPUT inserted.VALUE
WHERE NAME = 'SOME_NAME'
This will output the new value like a SELECT statement does. You can also cast inserted.VALUE to an int if you wanted to do that in the SQL.
If you wanted to put the value into @SEQ_NUM instead of outputting the value from the statement/stored procedure, you can't use a scalar variable, but you can pump it into a table variable, like so:
DECLARE @SEQ_NUM AS TABLE ( VALUE VARCHAR(255) );
UPDATE TABLE
SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
OUTPUT inserted.VALUE INTO @SEQ_NUM ( VALUE )
WHERE NAME = 'SOME_NAME'
SELECT VALUE FROM @SEQ_NUM
Maintaining a sequential number manually is by no means a solution I'd like to work with, but I can understand there might be constraints around this.
If you break it down into two steps, then you can work around the issue. Note I've replaced your WHERE clause so this example code works:
CREATE TABLE #SOME_TABLE ( [VALUE] VARCHAR(255) )
INSERT INTO #SOME_TABLE
( VALUE )
VALUES ( '12345' )
DECLARE @SEQ_NUM VARCHAR(255)
UPDATE #SOME_TABLE
SET [VALUE] = CAST(( CAST([VALUE] AS INT) + 1 ) AS VARCHAR(255))
WHERE 1 = 1
SELECT *
FROM #SOME_TABLE
SELECT @SEQ_NUM = [VALUE]
FROM #SOME_TABLE
WHERE 1 = 1
SELECT @SEQ_NUM
DROP TABLE #SOME_TABLE
You can continue using the quirky update from the OP, but you have to split the triple assignment @Variable = Column = Expression in the UPDATE statement into two simple assignments, @Variable = Expression and Column = @Variable, like this:
CREATE TABLE #SOME_TABLE (
NAME VARCHAR(255)
, VALUE VARCHAR(255) COLLATE Latin1_General_BIN
)
INSERT #SOME_TABLE SELECT 'SOME_NAME', '42'
DECLARE @SEQ_NUM VARCHAR(255)
/*
-- this quirky update fails on COLLATION mismatch or data-type mismatch
UPDATE #SOME_TABLE
SET @SEQ_NUM = VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
WHERE NAME = 'SOME_NAME'
*/
-- this quirky update works in all cases
UPDATE #SOME_TABLE
SET @SEQ_NUM = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
, VALUE = @SEQ_NUM
WHERE NAME = 'SOME_NAME'
SELECT *, @SEQ_NUM FROM #SOME_TABLE
This simple rewrite also prevents the db engine from complaining about a difference in data type between @Variable and the column (e.g. VARCHAR vs NVARCHAR), and it seems like a more "portable" way of doing quirky updates (if there is such a thing).
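As for the asker's side question about alternative approaches: if the way the counter is stored can ever be changed, a SEQUENCE object (SQL Server 2012 and later) avoids the read-modify-write on the settings row altogether. A minimal sketch, using a hypothetical sequence name to stand in for the 'SOME_NAME' row:
-- one-time setup; dbo.seq_some_name is a hypothetical name
CREATE SEQUENCE dbo.seq_some_name START WITH 43 INCREMENT BY 1;
-- each caller gets the next value atomically, formatted as varchar to match the existing column
DECLARE @SEQ_NUM VARCHAR(255);
SET @SEQ_NUM = CAST(NEXT VALUE FOR dbo.seq_some_name AS VARCHAR(255));
SELECT @SEQ_NUM;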
I have a table that looks like this:
memberno(int)|member_mouth (varchar)|Inspected_Date (varchar)
-----------------------------------------------------------------------------
12 |'1;2;3;4;5;6;7' |'12-01-01;12-02-02;12-03-03' [7 members]
So, looking at how this table has been structured (poorly, yes):
The values in the member_mouth field form a string delimited by ";".
The values in the Inspected_Date field form a string delimited by ";".
So, for each delimited value in member_mouth there is a corresponding Inspected_Date value delimited inside the string.
This table has about 4 million records. We have an application written in C# that normalizes the data and stores it in a separate table. The problem is that, because of the size of the table, it takes a long time to process. (The example above is nothing compared to the actual table; it's much larger and has a couple of those string "array" fields.)
My question is this: what would be the best and fastest way to normalize this data in an MSSQL proc, letting MSSQL do the work rather than a C# app?
The best way will be SQL itself. The approach in the code below is something which worked well for me with 2-3 lakh (200,000-300,000) rows of data.
I am not sure how the code below behaves with 4 million rows, but it may help.
Declare @table table
(memberno int, member_mouth varchar(100),Inspected_Date varchar(400))
Insert into @table Values
(12,'1;2;3;4;5;6;7','12-01-01;12-02-02;12-03-03;12-04-04;12-05-05;12-07-07;12-08-08'),
(14,'1','12-01-01'),
(19,'1;5;8;9;10;11;19','12-01-01;12-02-02;12-03-03;12-04-04;12-07-07;12-10-10;12-12-12')
Declare @tableDest table
(memberno int, member_mouth varchar(100),Inspected_Date varchar(400))
The table will look like this:
Select * from @table
See the code from here.
------------------------------------------
Declare @max_len int,
@count int = 1
Set @max_len = (Select max(Len(member_mouth) - len(Replace(member_mouth,';','')) + 1)
From @table)
While @count <= @max_len
begin
Insert into @tableDest
Select memberno,
SUBSTRING(member_mouth,1,charindex(';',member_mouth)-1),
SUBSTRING(Inspected_Date,1,charindex(';',Inspected_Date)-1)
from @table
Where charindex(';',member_mouth) > 0
union
Select memberno,
member_mouth,
Inspected_Date
from @table
Where charindex(';',member_mouth) = 0
Delete from @table
Where charindex(';',member_mouth) = 0
Update @table
Set member_mouth = SUBSTRING(member_mouth,charindex(';',member_mouth)+1,len(member_mouth)),
Inspected_Date = SUBSTRING(Inspected_Date,charindex(';',Inspected_Date)+1,len(Inspected_Date))
Where charindex(';',member_mouth) > 0
Set @count = @count + 1
End
------------------------------------------
Select *
from @tableDest
Order By memberno
------------------------------------------
Result.
You can take this as a reference:
Splitting delimited values in a SQL column into multiple rows
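On SQL Server 2016 and later, a set-based alternative to the loop is to turn each delimited string into a JSON array with OPENJSON and join the two splits on their array index, which keeps every member_mouth value lined up with its Inspected_Date. A sketch against the same @table variable (OPENJSON's [key] column holds the zero-based element position):
SELECT t.memberno,
m.[value] AS member_mouth,
d.[value] AS Inspected_Date
FROM @table AS t
CROSS APPLY OPENJSON('["' + REPLACE(t.member_mouth, ';', '","') + '"]') AS m
CROSS APPLY OPENJSON('["' + REPLACE(t.Inspected_Date, ';', '","') + '"]') AS d
WHERE m.[key] = d.[key]
ORDER BY t.memberno, CAST(m.[key] AS INT);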
Do it on the SQL Server side; if possible, an SSIS package would be great.
First, thanks for all your help! You really make a difference, and I GREATLY appreciate it.
So I have a varchar column and it holds a 16-digit number, for example: 1000550152872026
select *
FROM Orders
where isnumeric([ord_no]) = 0
returns: 0 rows
select cast([ord_no] as bigint)
FROM Progression_PreCall_Orders o
order by [ord_no]
returns: Error converting data type varchar to bigint.
How do I get this 16 digit number into a math datatype so I can add and subtract another column from it?
UPDATE: I found scientific notation stored as varchar, e.g. 1.00054E+15.
How do I convert that back into a number then?
DECIMAL datatype seems to work fine:
DECLARE @myVarchar AS VARCHAR(32)
SET @myVarchar = '1000550152872026'
DECLARE @myDecimal AS DECIMAL(38,0)
SET @myDecimal = CAST(@myVarchar AS DECIMAL(38,0))
SELECT @myDecimal + 1
Also, here's a quick example where IsNumeric returns 1 but converting to DECIMAL fails:
DECLARE @myVarchar AS VARCHAR(32)
SET @myVarchar = '1000550152872026E10'
SELECT ISNUMERIC(@myVarchar)
DECLARE @myDecimal AS DECIMAL(38,0)
SET @myDecimal = CAST(@myVarchar AS DECIMAL(38,0)) --This statement will fail
EDIT
You could try to CONVERT to float if you're dealing with values written in scientific notation:
DECLARE @Orders AS TABLE(OrderNum NVARCHAR(64), [Date] DATETIME)
INSERT INTO @Orders VALUES('100055015287202', GETDATE())
INSERT INTO @Orders VALUES('100055015287203', GETDATE())
INSERT INTO @Orders VALUES('1.00055015287E+15', GETDATE()) --sci notation
SELECT
CONVERT(FLOAT, OrderNum, 2) +
CAST(REPLACE(CONVERT(VARCHAR(10), GETDATE(), 120), '-', '') AS FLOAT)
FROM @Orders
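If you also need to rescue the values that have already degraded to scientific notation, the best you can get back is the rounded number; the dropped digits are gone. A short sketch with the value from the question:
-- '1.00054E+15' comes back as 1000540000000000; the original 16-digit order number
-- cannot be recovered from the scientific-notation string
SELECT CAST(CAST('1.00054E+15' AS FLOAT) AS DECIMAL(38, 0)) AS recovered_value;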
WITH validOrds AS
(
SELECT ord_no
FROM Orders
WHERE ord_no NOT LIKE '%[^0-9]%'
)
SELECT cast(validOrds.ord_no as bigint) as ord_no
FROM validOrds
LEFT JOIN Orders ords
ON ords.ord_no = validOrds.ord_no
WHERE ords.ord_no is null
Take a look at this link for an explanation of why isnumeric isn't functioning the way you are assuming it would: http://www.sqlservercentral.com/articles/IsNumeric/71512/
Take a look at this link for an SO post where a user has a similar problem to yours:
Error converting data type varchar
Hence, you should always use the correct datatype for each column unless you have a very specific reason to do otherwise. Even then, you'll need to be extra careful when saving values to the column to ensure that they are indeed valid values.
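Once any non-numeric values are cleaned up, the most durable fix is to change the column type itself. A minimal sketch, assuming the Orders table and ord_no column from the question (TRY_CAST needs SQL Server 2012 or later):
-- 1) anything that will not convert cleanly (scientific notation, stray text) shows up here
SELECT ord_no
FROM Orders
WHERE TRY_CAST(ord_no AS BIGINT) IS NULL
AND ord_no IS NOT NULL;
-- 2) once that query returns no rows, change the column type in place
ALTER TABLE Orders ALTER COLUMN ord_no BIGINT;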