Convert INT to String when using STUFF Function in SQL Server 2016

I'm trying to convert an INT to a string, but the query fails with a conversion error:
Conversion failed when converting the nvarchar value '245428,246425' to data type int.
The query I am using:
SELECT STUFF(
    (
        SELECT DISTINCT ',' + CONVERT(VARCHAR(20), NumField)
        FROM TableA
        WHERE ID = 218554
        FOR XML PATH('')
    ), 1, 1, '')
I use this as a subquery in a larger table like so:
SELECT
    Field1,
    Field2,
    CASE WHEN criteria = '1'
         THEN (SELECT STUFF(
                   (
                       SELECT DISTINCT ',' + CONVERT(VARCHAR(20), NumField)
                       FROM TableA
                       WHERE ID = 218554
                       FOR XML PATH('')
                   ), 1, 1, ''))
    END
FROM
    TableB
The STUFF query runs fine when it's executed on its own, but when I run it in the full query it produces the conversion error.

I don't think you are showing the full query -- or at least the full case expression. A case expression returns a single value with a single type.
When there are type conflicts, SQL Server has to determine the single overall type, according to its data type precedence rules. If one branch returns an integer and another returns a string, then the case expression is an integer (not a string). So, the string is converted to an integer.
You can see this problem with much simpler logic:
select (case when 1=1 then 'a' else 0 end)
Even though the else branch is never executed, the type of the expression is determined at compile time -- and 'a' cannot be converted to an integer.
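If the real query has another branch that returns a number, the usual fix is to convert that branch explicitly so every branch of the CASE agrees on a string type. A minimal sketch, assuming a hypothetical numeric column OtherNumField in the ELSE branch:
SELECT
    Field1,
    Field2,
    CASE WHEN criteria = '1'
         THEN (SELECT STUFF(
                   (
                       SELECT DISTINCT ',' + CONVERT(VARCHAR(20), NumField)
                       FROM TableA
                       WHERE ID = 218554
                       FOR XML PATH('')
                   ), 1, 1, ''))
         -- convert the non-string branch too, so the CASE resolves to VARCHAR
         ELSE CONVERT(VARCHAR(20), OtherNumField)
    END
FROM
    TableB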

Related

TSQL CTE error "Types don't match between the anchor and the recursive part"

Would someone help me understand the details of the error below? This is for SQL Server 2008.
I did fix it myself, and found many search hits which show the same fix, but none explain WHY this happens in a CTE.
Types don't match between the anchor and the recursive part in column "txt" of recursive query "CTE".
Here is an example where I resolved the issue with CAST, but why does it work?
WITH CTE(n, txt) AS
(
    --SELECT 1, '1'                        --This does not work.
    --SELECT 1, CAST('1' AS varchar)       --This does not work.
    --SELECT 1, CAST('1' AS varchar(1000)) --This does not work.
    SELECT
        1,
        CAST('1' AS varchar(max)) --This works. Why?
    UNION ALL
    SELECT
        n + 1,
        txt + ', ' + CAST(n + 1 AS varchar) --Why is (max) NOT needed?
    FROM
        CTE
    WHERE
        n < 10
)
SELECT *
FROM CTE
I assume there are default variable types at play which I do not understand, such as:
what is the type for something like SELECT 'Hello world!' ?
what is the type for the string concatenation operator SELECT 'A' + 'B' ?
what is the type for math such as SELECT n+1 ?
The info you want is all in the documentation:
When concatenating two char, varchar, binary, or varbinary expressions, the length of the resulting expression is the sum of the lengths of the two source expressions, up to 8,000 bytes.
snip ...
When comparing two expressions of the same data type but different lengths by using UNION, EXCEPT, or INTERSECT, the resulting length is the longer of the two expressions.
The precision and scale of the numeric data types besides decimal are fixed. When an arithmetic operator has two expressions of the same type, the result has the same data type with the precision and scale defined for that type.
However, a recursive CTE is not the same as a normal UNION ALL:
The data type of a column in the recursive member must be the same as the data type of the corresponding column in the anchor member.
So in answer to your questions:
'Hello world!' has the data type varchar(12) by default.
'A' + 'B' has the data type varchar(2) because that is the sum of the lengths of the two expressions being concatenated (the actual values are not relevant).
n+1 is still an int
In a recursive CTE, the data type must match exactly, so '1' is a varchar(1). If you specify varchar without a length in a CAST then you get varchar(30), so txt + ', ' + CAST(n+1 AS varchar) is varchar(33).
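You can verify the type SQL Server infers for an expression with SQL_VARIANT_PROPERTY, for example:
-- BaseType/MaxLength of a string literal: varchar, 12
SELECT SQL_VARIANT_PROPERTY('Hello world!', 'BaseType')  AS BaseType,
       SQL_VARIANT_PROPERTY('Hello world!', 'MaxLength') AS MaxLength;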
When you cast the anchor part to varchar(max), that automatically means the recursive part will be varchar(max) also. You don't need to cast to max; you could instead cast the recursive part directly to varchar(30), for example:
WITH CTE(n, txt) AS
(
    --SELECT 1, '1'                        --This does not work.
    SELECT 1, CAST('1' AS varchar(30))     --This does work.
    --SELECT 1, CAST('1' AS varchar(1000)) --This does not work.
    UNION ALL
    SELECT
        n + 1,
        CAST(CONCAT(txt, ', ', n + 1) AS varchar(30))
    FROM
        CTE
    WHERE
        n < 10
)
SELECT *
FROM CTE
db<>fiddle
If you place the query into a string, you can get the result set's data types with a query like this:
DECLARE @query nvarchar(max) = 'SELECT * FROM table_name';
EXEC sp_describe_first_result_set @query, NULL, 0;

Checking if a field contains multiple strings in SQL Server

I am working on a SQL database which will feed data to a grid. The grid supports filtering, sorting and paging, but there is also a strict requirement: users can enter free text into a text input above the grid, for example
'Engine 1001 Requi', and the result must contain only rows whose columns, between them, contain all the pieces of the text. So one column may contain Engine, another column may contain 1001, and some other column may contain Requi.
I created a technical column (let's call it myTechnicalColumn) in the table (let's call it myTable); it is updated each time someone inserts or updates a row, and it contains the values of all the columns combined and separated with spaces.
Now, to use it with Entity Framework, I decided to use a table-valued function which accepts one parameter @searchText and handles it like this:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS @Result TABLE
( ... here come columns )
AS
BEGIN
    DECLARE @searchToken TokenType
    INSERT INTO @searchToken(token) SELECT value FROM STRING_SPLIT(@searchText, ' ')

    DECLARE @searchTextLength INT
    SET @searchTextLength = (SELECT COUNT(*) FROM @searchToken)

    INSERT INTO @Result
    SELECT
        ... here come columns
    FROM myTable
    WHERE (SELECT COUNT(*) FROM @searchToken WHERE CHARINDEX(token, myTechnicalColumn) > 0) = @searchTextLength

    RETURN;
END
Of course the solution works fine, but it's kinda slow. Any hints on how to improve its efficiency?
You can use an inline table-valued function, which should be quite a lot faster.
This would be a direct translation of your current code:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT value AS token
        FROM STRING_SPLIT(@searchText, ' ')
    )
    SELECT
        ... here come columns
    FROM myTable t
    WHERE (
        SELECT COUNT(*)
        FROM searchText s
        WHERE CHARINDEX(s.token, t.myTechnicalColumn) > 0
    ) = (SELECT COUNT(*) FROM searchText)
);
GO
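A minimal usage sketch (once the column list is filled in); you can query the inline TVF directly, or apply it per row with CROSS APPLY:
SELECT *
FROM myFunctionName(N'Engine 1001 Requi');

-- hypothetical table searches(searchText), just to illustrate composition
SELECT r.*
FROM searches s
CROSS APPLY myFunctionName(s.searchText) r;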
You are using a form of query called Relational Division Without Remainder and there are other ways to cut this cake:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT value AS token
        FROM STRING_SPLIT(@searchText, ' ')
    )
    SELECT
        ... here come columns
    FROM myTable t
    WHERE NOT EXISTS (
        SELECT 1
        FROM searchText s
        WHERE CHARINDEX(s.token, t.myTechnicalColumn) = 0
    )
);
GO
This may be faster or slower depending on a number of factors; you need to test.
Since there is no data to test with, I am not sure if the following will solve your issue:
-- Replace the last INSERT portion
INSERT INTO @Result
SELECT
    ... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
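As written, though, the JOIN keeps any row that matches at least one token, and returns one row per matching token. A hedged sketch of how the all-tokens requirement could be restored with grouping (assuming the split tokens are distinct):
INSERT INTO @Result
SELECT
    ... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
GROUP BY
    ... here come the same columns
HAVING COUNT(*) = @searchTextLength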

PostgreSQL - check if column exists and nest condition statement

The transaction column names in the code below are dynamically generated (meaning a particular name/column sometimes doesn't exist). This SELECT finishes successfully only when every one of those names exists; otherwise I get an error like this (example):
Error(s), warning(s): 42703: column "TransactionA" does not exist
SELECT
    *,
    ((CASE WHEN "TransactionA" IS NULL THEN 0 ELSE "TransactionA" END) -
     (CASE WHEN "TransactionB" IS NULL THEN 0 ELSE "TransactionB" END) +
     (CASE WHEN "TransactionC" IS NULL THEN 0 ELSE "TransactionC" END)) AS "Account_balance"
FROM Summary
ORDER BY id;
Could you please tell me how I can first check whether a column exists, and then how to nest another CASE (or other conditional) statement so the query works correctly?
You can build any query dynamically with information from the Postgres catalog tables. pg_attribute in your case. Alternatively, use the information schema. See:
Query to return output column names and data types of a query, table or view
How to check if a table exists in a given schema
Basic query to see which of the given columns exist in a given table:
SELECT attname
FROM   pg_attribute a
WHERE  attrelid = 'public.summary'::regclass  -- tbl here
AND    NOT attisdropped
AND    attnum > 0
AND    attname IN ('TransactionA', 'TransactionB', 'TransactionC');  -- columns here
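The information-schema variant mentioned above would look like this; it is more portable, if a bit slower:
SELECT column_name
FROM   information_schema.columns
WHERE  table_schema = 'public'
AND    table_name   = 'summary'
AND    column_name  IN ('TransactionA', 'TransactionB', 'TransactionC');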
Building on this, you can have Postgres generate your whole query. While you're at it, look up whether columns are defined NOT NULL, in which case they don't need COALESCE:
CREATE OR REPLACE FUNCTION f_build_query(_tbl regclass, _columns json)
  RETURNS text AS
$func$
DECLARE
   _expr text;
BEGIN
   SELECT INTO _expr
          string_agg(op || CASE WHEN attnotnull
                              THEN quote_ident(attname)
                              ELSE format('COALESCE(%I, 0)', attname) END
                   , '')
   FROM  (
      SELECT j->>'name' AS attname
           , CASE WHEN j->>'op' = '-' THEN ' - ' ELSE ' + ' END AS op
      FROM   json_array_elements(_columns) j
      ) j
   JOIN   pg_attribute a USING (attname)
   WHERE  attrelid = _tbl
   AND    NOT attisdropped
   AND    attnum > 0;

   IF NOT FOUND THEN
      RAISE EXCEPTION 'No column found!';  -- or more info
   END IF;

   RETURN
  'SELECT *,' || _expr || ' AS "Account_balance"
   FROM ' || _tbl || '
   ORDER BY id;';
END
$func$ LANGUAGE plpgsql;
The table itself is parameterized, too. May or may not be useful for you. The only assumption is that every table has an id column for the ORDER BY. Related:
Table name as a PostgreSQL function parameter
I pass column names and the associated operator as a JSON document for flexibility. Only + or - are expected as operators. Input is safely concatenated to make SQL injection impossible. About json_array_elements():
Query for element of array in JSON column
Example call:
SELECT f_build_query('summary', '[{"name":"TransactionA"}
, {"name":"TransactionB", "op": "-"}
, {"name":"TransactionC"}]');
Returns a valid query string accordingly, like:
SELECT *, + COALESCE("TransactionA", 0) - COALESCE("TransactionB", 0) AS "Account_balance"
FROM summary
ORDER BY id;
"TransactionC" isn't there in this case. If both existing columns happen to be NOT NULL, you get instead:
SELECT *, + "TransactionA" - "TransactionB" AS "Account_balance"
FROM summary
ORDER BY id;
db<>fiddle here
You could execute the generated query in the function immediately and return the result rows directly. But that's hard, as your return type is a combination of table rows (unknown until execution time?) plus an additional column, and SQL demands to know the return type in advance. For just id and the sum (a stable return type), it would be easy ...
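For illustration, a minimal sketch of that easy case, assuming a hypothetical helper f_build_expr(_tbl, _columns) that returns just the balance expression (the string_agg part of f_build_query above); RETURN QUERY EXECUTE runs the dynamic SQL against the fixed return type:
CREATE OR REPLACE FUNCTION f_account_balance(_tbl regclass, _columns json)
  RETURNS TABLE (id int, account_balance numeric) AS
$func$
BEGIN
   -- f_build_expr() is hypothetical: it returns only the generated expression
   RETURN QUERY EXECUTE
     'SELECT id, (' || f_build_expr(_tbl, _columns) || ')::numeric
      FROM ' || _tbl || '
      ORDER BY id';
END
$func$ LANGUAGE plpgsql;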
It's odd that your CaMeL-case column names are double-quoted, but the CaMeL-case table name is not. By mistake? See:
Are PostgreSQL column names case-sensitive?
How to pass column names containing single quotes?
Addressing additional question from comment.
If someone used column names containing single quotes by mistake:
CREATE TABLE madness (
   id int PRIMARY KEY
 , "'TransactionA'" numeric NOT NULL  -- you wouldn't do that ...
 , "'TransactionC'" numeric NOT NULL
);
For the above function, the JSON value is passed as quoted string literal. If that string is enclosed in single-quotes, escape contained single-quotes by doubling them up. This is required on top of valid JSON format:
SELECT f_build_query('madness', '[{"name":"''TransactionA''"}
                                , {"name":"TransactionB", "op": "-"}
                                , {"name":"TransactionC"}]');
("''TransactionA''" finds a match, "TransactionC" does not.)
Or use dollar quoting instead:
SELECT f_build_query('madness', $$[{"name":"'TransactionA'"}
                                 , {"name":"TransactionB", "op": "-"}
                                 , {"name":"TransactionC"}]$$);
db<>fiddle here with added examples
See:
Insert text with single quotes in PostgreSQL
Assuming that id is a unique id in summary, then you can use the following trick:
SELECT s.*,
       (COALESCE("TransactionA", 0) -
        COALESCE("TransactionB", 0) +
        COALESCE("TransactionC", 0)
       ) AS Account_balance
FROM (SELECT s.*,
             (SELECT "TransactionA" FROM summary s2 WHERE s2.id = s.id) AS "TransactionA",
             (SELECT "TransactionB" FROM summary s2 WHERE s2.id = s.id) AS "TransactionB",
             (SELECT "TransactionC" FROM summary s2 WHERE s2.id = s.id) AS "TransactionC"
      FROM (SELECT id, . . .  -- All columns except the TransactionX columns
            FROM Summary
           ) s CROSS JOIN
           (VALUES (NULL::numeric, NULL::numeric, NULL::numeric)) v("TransactionA", "TransactionB", "TransactionC")
     ) s
ORDER BY s.id;
The trick here is that the correlated subqueries do not qualify the column name "TransactionA". If the column is defined in summary, then that value is used. If not, the name resolves to the corresponding column from the VALUES () clause in the outer query.
This is a bit of a hack, but it can be handy under certain circumstances.
Check this example:
UPDATE yourtable1
SET yourcolumn = (
    CASE
        WHEN setting.value IS NOT NULL
        THEN CASE WHEN replace(setting.value, '"', '') <> ''
                  THEN replace(setting.value, '"', '')
                  ELSE NULL END
        ELSE NULL
    END
)::TIME
FROM (SELECT value FROM yourtable2 WHERE key = 'ABC') AS setting;

SQL XML Parsing Query for Element Hierarchy

I am attempting to write a SQL query that takes in an XML object of undefined schema (YAY!) and transforms it into a two-column table of ElementName, Value pairs. I was able to get a simple query down after some time (I am not a SQL person by any means).
DECLARE @strXml XML
SET @strXml = '<xml>
    <FirstName>TEST</FirstName>
    <LastName>PERSON</LastName>
    <DOB>1/1/2000</DOB>
    <TestObject>
        <SomeProperty>CHECKED</SomeProperty>
        <EmbeddedObject>
            <SomeOtherProperty>NOT CHECKED</SomeOtherProperty>
        </EmbeddedObject>
    </TestObject>
</xml>'

DECLARE @XmlMappings TABLE
(
    NodeName VARCHAR(64),
    Value VARCHAR(128)
)

INSERT INTO @XmlMappings
SELECT doc.col.value('fn:local-name(.)[1]', 'varchar(64)') AS ElementName,
       doc.col.value('.', 'varchar(128)') AS Value
FROM @strXml.nodes('/xml/*') doc(col)

SELECT * FROM @XmlMappings
This query handles the simple case of the specified XML with only first-level elements. However, elements such as TestObject and EmbeddedObject end up flattened. What I am looking for is some type of mapping like:
ElementName | Value
=====================================================
FirstName | TEST
LastName | PERSON
DOB | 1/1/2000
TestObject.SomeProperty | CHECKED
TestObject.EmbeddedObject.SomeOtherProperty | NOT CHECKED
The hard part for me is the hierarchical structure with the . separator. I don't care if the output uses some delimiter other than .; it is more about just getting the output done, and I don't know enough about XML in SQL to know even what to query.
Please note that I also cannot use OPENXML, since this is to be deployed on SQL Azure, which does not support that feature at this time.
With a recursive CTE and CROSS APPLY:
;with cte as
(
    select
        convert(varchar(100), x.n.value('fn:local-name(.)', 'varchar(100)')) as path,
        convert(varchar(100), x.n.value('fn:local-name(.)', 'varchar(100)')) as name,
        x.n.query('*') as children,
        x.n.value('.', 'varchar(1000)') as value
    from @strXml.nodes('/xml/*') as x(n)
    union all
    select
        convert(varchar(100), x.path + '.' + c.n.value('fn:local-name(.)', 'varchar(100)')),
        convert(varchar(100), c.n.value('fn:local-name(.)', 'varchar(100)')),
        c.n.query('*'),
        c.n.value('.', 'varchar(1000)')
    from cte x
    cross apply x.children.nodes('*') as c(n)
)
-- keep only leaf elements: an empty XML fragment is stored in 5 bytes,
-- so datalength(children) = 5 means the node has no child elements
select path, value from cte where datalength(children) = 5

SQL for nvarchar 0 = '' & = 0?

I was searching for integers in an nvarchar column. I noticed that if a row contains '' or 0, it is picked up if I search using just 0.
I'm assuming there is some implicit conversion happening which says that 0 is equal to ''. Why does 0 match both values?
Here is a test:
--0 Test
create table #0Test (Test nvarchar(20))
GO
insert into #0Test (Test)
select ''
union all
select 0
union all
select ''

select *
from #0Test

select *
from #0Test
where test = 0

select *
from #0Test
where test = '0'

select *
from #0Test
where test = ''

drop table #0Test
The behavior you see is the one described in the product documentation. The rules of Data Type Precedence specify that int has higher precedence than nvarchar, therefore the operation has to occur as an int type:
When an operator combines two expressions of different data types, the
rules for data type precedence specify that the data type with the
lower precedence is converted to the data type with the higher
precedence
Therefore your query is actually as follows:
Select *
from #0Test
Where cast(test as int) = 0;
and the empty string N'' yields the value 0 when cast to int:
select cast(N'' as int)
-----------
0
(1 row(s) affected)
Therefore the expected result is the one you see: the rows with an empty string qualify for the predicate test = 0. This is further proof that you should never mix types freely. For a more detailed discussion of the topic, see How Data Access Code Affects Database Performance.
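A minimal sketch of the usual workarounds (against the #0Test table from the question): compare as strings so the column is never converted, or make the conversion explicit with TRY_CAST (SQL Server 2012+):
-- Compare as a string: the nvarchar column keeps its type, '' no longer matches
SELECT * FROM #0Test WHERE Test = N'0';

-- Make the conversion explicit; TRY_CAST returns NULL where CAST would error.
-- Note that N'' still converts to 0, hence the extra filter.
SELECT * FROM #0Test WHERE TRY_CAST(Test AS int) = 0 AND Test <> N'';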
You are implicitly converting the values to int with your UNION statement.
Two empty strings unioned with the integer 0 produce an int result. This happens BEFORE you insert into the nvarchar field, so the data type of the temp table column is irrelevant.
Try changing the second select in the UNION to:
SELECT '0'
and you will get the expected result.