We have a table with GUID primary keys. When I search for a specific key, I can use either:
SELECT * FROM products WHERE productID='34594289-16B9-4EEF-9A1E-B35066531DE6'
SELECT * FROM products WHERE productID LIKE '34594289-16B9-4EEF-9A1E-B35066531DE6'
RESULT (for both):
product_ID Prd_Model
------------------------------------ --------------------------------------------------
34594289-16B9-4EEF-9A1E-B35066531DE6 LW-100
(1 row affected)
We have a customer who uses our ID but adds more text to it to create some kind of compound field in their own system. They sent me one of these values to look up and I had an unexpected result. I meant to trim the suffix but forgot, so I ran this:
SELECT * FROM products WHERE productID='34594289-16B9-4EEF-9A1E-B35066531DE6_GBR_USD'
When I ran it, I unexpectedly got the same result:
product_ID Prd_Model
------------------------------------ --------------------------------------------------
34594289-16B9-4EEF-9A1E-B35066531DE6 LW-100
(1 row affected)
Now if I trim a value off the end of the GUID when searching I get nothing (GUID is 1 digit short):
SELECT * FROM products WHERE productID='34594289-16B9-4EEF-9A1E-B35066531DE'
Result:
product_ID Prd_Model
------------------------------------ --------------------------------------------------
(0 rows affected)
When I use LIKE instead of '=' with the suffix added to the end, the statement returns zero results, which is what I would expect.
So why does the longer string with the suffix added to the end return a result when using '=' in the statement? It's obviously ignoring anything beyond the 36-character GUID length, but I'm not sure why.
This behaviour is documented:
Converting uniqueidentifier Data
The uniqueidentifier type is considered a character type for the purposes of conversion from a character expression, and therefore is subject to the truncation rules for converting to a character type. That is, when character expressions are converted to a character data type of a different size, values that are too long for the new data type are truncated. See the Examples section.
So, the string value '34594289-16B9-4EEF-9A1E-B35066531DE6_GBR_USD' is truncated to '34594289-16B9-4EEF-9A1E-B35066531DE6' when it is implicitly cast (due to Data Type Precedence) to a uniqueidentifier and, unsurprisingly, '34594289-16B9-4EEF-9A1E-B35066531DE6' equals itself so the row is returned.
And the documentation does indeed give an example:
The following example demonstrates the truncation of data when the value is too long for the data type being converted to. Because the uniqueidentifier type is limited to 36 characters, the characters that exceed that length are truncated.
DECLARE @ID NVARCHAR(max) = N'0E984725-C51C-4BF4-9960-E1C80E27ABA0wrong';
SELECT @ID, CONVERT(uniqueidentifier, @ID) AS TruncatedValue;
Here is the result set.
String TruncatedValue
-------------------------------------------- ------------------------------------
0E984725-C51C-4BF4-9960-E1C80E27ABA0wrong 0E984725-C51C-4BF4-9960-E1C80E27ABA0
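You can reproduce the same truncation with the value from the question (a quick sketch):
DECLARE @ID nvarchar(max) = N'34594289-16B9-4EEF-9A1E-B35066531DE6_GBR_USD';
SELECT CONVERT(uniqueidentifier, @ID) AS TruncatedValue;
--Returns 34594289-16B9-4EEF-9A1E-B35066531DE6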
I, however, find it odd that you say that the statement below returns no rows:
SELECT *
FROM products
WHERE productID='34594289-16B9-4EEF-9A1E-B35066531DE'
Though it's true that it won't return rows, it will also raise an error:
Conversion failed when converting from a character string to uniqueidentifier.
The fact it doesn't error implies your column isn't a uniqueidentifier, which would mean that your first statement isn't true, as the longer string would not be truncated. This means that one of the statements in the question is likely wrong: either your column is a uniqueidentifier, in which case the first query returns a row but the shortened one errors, or it isn't, in which case neither statement returns a result set. You can see this in the following demonstration:
CREATE TABLE dbo.YourTable (UID uniqueidentifier, String varchar(36));
INSERT INTO dbo.YourTable (UID,String)
VALUES('34594289-16B9-4EEF-9A1E-B35066531DE6','34594289-16B9-4EEF-9A1E-B35066531DE6');
GO
--Returns data
SELECT *
FROM dbo.YourTable
WHERE UID = '34594289-16B9-4EEF-9A1E-B35066531DE6_GBR_USD'
GO
--Errors
SELECT *
FROM dbo.YourTable
WHERE UID = '34594289-16B9-4EEF-9A1E-B35066531DE';
GO
--Returns no data
SELECT *
FROM dbo.YourTable
WHERE String = '34594289-16B9-4EEF-9A1E-B35066531DE6_GBR_USD'
GO
--Returns no data
SELECT *
FROM dbo.YourTable
WHERE String = '34594289-16B9-4EEF-9A1E-B35066531DE'
GO
DROP TABLE dbo.YourTable;
In SQL HANA, I need to find how many times a given word is repeated in a string column whose values are delimited by "," and output it as a separate column.
Example, the string column contains:
ZN,ZN,ZS,ZQ
Expected result for "ZN":
2
You might find it acceptable to search only for the string ZN, ignoring the fact that there's a comma.
You may count the number of occurrences of any substring by using the string function OCCURRENCES_REGEXPR:
SELECT OCCURRENCES_REGEXPR('(ZN)' IN STRINGCOLUMN) "occurrences_zn" FROM TABLE;
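For the sample value from the question, this returns 2 (a quick check against a string literal, using HANA's DUMMY table):
SELECT OCCURRENCES_REGEXPR('(ZN)' IN 'ZN,ZN,ZS,ZQ') "occurrences_zn" FROM DUMMY;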
If you really want to specify that ZN is to be matched as an entire word between commas or at the edges of the string, you may use a more specific regular expression (the question then becomes one about regular expressions rather than SQL HANA, and you may find existing answers on Stack Overflow).
I can't remember where I found the trick, but in SQL Server, the following works like a charm:
DECLARE @myStringToSearch nvarchar(250) = 'ZN,ZN,ZS,ZQ'
DECLARE @searchValue nvarchar(5) = 'ZN'
SELECT (LEN(@myStringToSearch) - LEN(REPLACE(@myStringToSearch, @searchValue, ''))) / LEN(@searchValue)
The last line compares the length of the original string with the length of the same string, but this time replacing your search value (ZN) with a blank string. In our case, this would result in 4, because ZN is 2 characters, and it was removed twice. However, we're not interested in how many characters were removed, but in how many times the value was encountered, so we divide that result by the length of your search string (2).
Output of the query:
2
You could easily implement this as a computed column in your table, provided your search string is the same across every row.
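If you need the counts per row of a table rather than for a single variable, the same expression can be applied to a column. This is only a sketch; dbo.Codes and CodeList are hypothetical names standing in for your own table and its nvarchar column:
DECLARE @searchValue nvarchar(5) = N'ZN';
SELECT CodeList,
       (LEN(CodeList) - LEN(REPLACE(CodeList, @searchValue, ''))) / LEN(@searchValue) AS occurrences
FROM dbo.Codes;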
I wrote an anonymous block in SQL, which can be converted into a HANA table function and used to achieve the expected result.
DO
BEGIN
DECLARE FULL_STRING VARCHAR(100);
DECLARE TRIM_STRING VARCHAR(100);
DECLARE VAL_STRING VARCHAR(100);
FULL_STRING ='ZN,ZN,ZS,ZQ';
FULL_STRING=CONCAT(FULL_STRING,',');
--SELECT :FULL_STRING FROM DUMMY;
VAL_STRING=SUBSTRING(:FULL_STRING,1,LOCATE(:FULL_STRING,',',1)-1);
VAR_TABLE=SELECT :VAL_STRING STRINGVAL FROM DUMMY;
TRIM_STRING=SUBSTRING(:FULL_STRING,LOCATE(:FULL_STRING,',',1)+1 ,LENGTH(:FULL_STRING));
--SELECT * FROM :VAR_TABLE;
--SELECT :TRIM_STRING FROM DUMMY;
WHILE :TRIM_STRING IS NOT NULL AND LENGTH(:TRIM_STRING)>0
DO
VAL_STRING=SUBSTRING(:TRIM_STRING,1,LOCATE(:TRIM_STRING,',',1)-1);
--SELECT :VAL_STRING FROM DUMMY;
VAR_TABLE=SELECT STRINGVAL FROM :VAR_TABLE
UNION ALL
SELECT :VAL_STRING FROM DUMMY;
TRIM_STRING=SUBSTRING(:TRIM_STRING,LOCATE(:TRIM_STRING,',',1)+1 ,LENGTH(:TRIM_STRING));
--i=i+1;
--SELECT :TRIM_STRING FROM DUMMY;
END WHILE ;
SELECT STRINGVAL,COUNT(STRINGVAL) FROM :VAR_TABLE GROUP BY STRINGVAL;
--SELECT :TRIM_STRING FROM DUMMY;
END;
Just need your help here.
I have a table T
A (nvarchar) B()
--------------------------
'abcd'
'xyzxcz'
B should contain the length of the entries in A, for which I did:
UPDATE T
SET B = LEN(A) -- I know LEN function returns int
But when I checked out the datatype of B using sp_help T, it showed column B as nvarchar.
What's going on?
select A
from T
where B > 100
also returned the correct output.
Why does nvarchar work with comparison operators?
Please help.
Check https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-type-conversion-database-engine?view=sql-server-2017, where it says that data types are converted explicitly or implicitly when you move, compare, or store a variable. In your case, you are comparing column B with 100, forcing SQL Server to implicitly convert it to an integer type (check the chart of conversions on the same page). As proof, try altering a row to put some text in column B; after repeating your query with B > 100, SQL Server will throw a conversion error trying to obtain an integer out of your text.
It works because of implicit conversion between types.
Data type precedence
When an operator combines expressions of different data types, the data type with the lower precedence is first converted to the data type with the higher precedence. If the conversion isn't a supported implicit conversion, an error is returned.
Types precedence:
16. int
...
25. nvarchar (including nvarchar(max) )
In your example:
select A
from T
where B > 100
--nvarchar compared with int: B is implicitly cast to int
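For example (a sketch that assumes a table shaped like the T in the question, with B created as nvarchar):
CREATE TABLE T (A nvarchar(50), B nvarchar(10));
INSERT INTO T (A, B) VALUES (N'abcd', N'4'), (N'xyzxcz', N'6');
GO
--Runs without error (no rows returned here, since 4 and 6 are not greater than 100): B is implicitly cast to int
SELECT A FROM T WHERE B > 100;
GO
--Once B holds non-numeric text, the same query fails with a conversion error
UPDATE T SET B = N'six' WHERE A = N'xyzxcz';
SELECT A FROM T WHERE B > 100;
GO
DROP TABLE T;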
When adding a column to a table in SSMS without specifying a data type, a "default" data type is chosen; for me, on 2017 Developer, it's nchar(10). If you want it to be int, define the column with a data type of int. In T-SQL it'd be:
create table T (
A nvarchar --nvarchar without a size is nvarchar(1); sp_help reports its length as 2 (bytes)
,B int
);
sp_help T
--to make a specific size: the largest for nvarchar is 4000, or max; max is the replacement for the old ntext type
create table Tmax (
A nvarchar(max)
,B int
);
--understanding nvarchar and varchar for len() and datalength()
select
datalength(N'wibble') datalength_nvarchar -- nvarchar is unicode and uses 2 bytes per char, so 12
,datalength('wibble') datalength_varchar -- varchar uses 1 byte per so 6
,len(N'wibble') len_nvarchar -- count of chars, so 6
,len('wibble') len_varchar -- count of char so still 6
See also nvarchar(max) and varchar(max).
Hope this helps; the question is a bit discombobulated.
How do I change the data type float to nvarchar in order to remove the scientific notation and still keep precision? Consider the following:
CREATE TABLE ConversionDataType (ColumnData FLOAT);
INSERT INTO ConversionDataType VALUES (25566685456126);
INSERT INTO ConversionDataType VALUES (12345545546845);
INSERT INTO ConversionDataType VALUES (12345545545257);
When I do a simple read I get the following data, as expected:
select * from ConversionDataType
ColumnData
------------------------------------
25566685456126
12345545546845
12345545545257
Now when I try to update the data type to an nvarchar, it gets stored in scientific notation, which is something I don't want:
update ConversionDataType
set ColumnData = CAST(ColumnData AS NVARCHAR)
The result set is as follows:
25566700000000
12345500000000
12345500000000
It replaces some digits and adds zeros after the sixth digit. How can I go about this? I had a look at the CONVERT function, but that only seemed useful for converting date/time data types.
While what others said in the comments remains valid, if you just want to convert float to varchar without scientific notation, you need to convert to numeric first. You can try this:
SELECT CAST(CAST(CAST(25566685456126291 AS FLOAT) AS NUMERIC) AS NVARCHAR)
Output:
C1
------------------------------
25566685456126292
Whereas
SELECT CAST(CAST(25566685456126291 AS FLOAT) AS NVARCHAR) AS C1
gives:
C1
------------------------------
2.55667e+016
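Note that NUMERIC with no arguments defaults to NUMERIC(18,0), and NVARCHAR with no length defaults to 30 characters inside CAST; if your values can be wider, spell both out (a sketch):
SELECT CAST(CAST(CAST(25566685456126291 AS FLOAT) AS NUMERIC(38, 0)) AS NVARCHAR(40)) AS C1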
If you need to change the data type, I think you should add a new column, update it, and (if you want) drop the old column and rename the new one at the end.
CREATE TABLE TEST1 (C1 FLOAT)
INSERT INTO TEST1 VALUES (25566685456126291);
ALTER TABLE TEST1 ADD C2 VARCHAR(18)
UPDATE TEST1 SET C2=CAST(CAST(C1 AS NUMERIC) AS VARCHAR)
SELECT * FROM TEST1
Output:
C1 C2
---------------------- ------------------
2.55666854561263E+16 25566685456126292
FLOAT was a very bad decision as this is not a precise data type. If you wanted to store the phone numbers as numbers, you'd have to go for DECIMAL instead.
But you'll have to use NVARCHAR instead. And this is the only reasonable design, as phone numbers can have leading zeros or start with a plus sign. So the first thing is to introduce an NVARCHAR column:
ALTER TABLE ConversionDataType ADD ColumnDataNew NVARCHAR(30);
The function to convert a number into a string in SQL Server is FORMAT. It lets you state the format you want to use for the conversion, which is integer in your case (a simple '0'):
update ConversionDataType set ColumnDataNew = format(ColumnData, '0');
At last remove the old column and then rename the new one with the same name. SQL Server lacks an ALTER TABLE syntax to rename a column, so we must call sp_RENAME instead (at least this is what I have read on the Internet; here is a link to the docs: https://msdn.microsoft.com/de-de/library/ms188351.aspx).
ALTER TABLE ConversionDataType DROP COLUMN ColumnData;
EXEC sp_RENAME 'ConversionDataType.ColumnDataNew', 'ColumnData', 'COLUMN';
Here you can see the results: http://rextester.com/GLLB27702
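As a quick check against one of the sample values (a sketch; FORMAT requires SQL Server 2012 or later):
SELECT FORMAT(CAST(25566685456126 AS float), '0') AS AsText;
--Returns 25566685456126, with no scientific notation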
SELECT CONVERT(NVARCHAR(250), StudentID) FROM TableA
where StudentID is the float column in your database,
or simply use:
SELECT CONVERT(NVARCHAR(250), yourFloatVariable)
Please suppose that in SQL Server 2005, if you launch the following query:
SELECT CHICKEN_CODE FROM ALL_CHICKENS WHERE MY_PARAMETER = 'N123123123';
you obtain:
31
as result.
Now, I would like to write a function that, given a value for MY_PARAMETER, yields the corresponding value of CHICKEN_CODE, found in the table ALL_CHICKENS.
I have written the following stored function in SQL Server 2005:
ALTER FUNCTION [dbo].[determines_chicken_code]
(
@input_parameter VARCHAR
)
RETURNS varchar
AS
BEGIN
DECLARE @myresult varchar
SELECT @myresult = CHICKEN_CODE
FROM dbo.ALL_CHICKENS
WHERE MY_PARAMETER = @input_parameter
RETURN @myresult
END
But if I launch the following query:
SELECT DBO.determines_chicken_code('N123123123')
it yields:
NULL
Why?
Thank you in advance for your kind cooperation.
Define the length of your varchar variables, like this:
varchar(100)
Without the 100 (or whatever length you choose), its length is 1 and the WHERE clause will filter out the rows you actually want.
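For example, a corrected version of the function with explicit lengths (a sketch; the length 100 is an assumption, so size it to your actual data):
ALTER FUNCTION [dbo].[determines_chicken_code]
(
@input_parameter VARCHAR(100)
)
RETURNS varchar(100)
AS
BEGIN
DECLARE @myresult varchar(100)
SELECT @myresult = CHICKEN_CODE
FROM dbo.ALL_CHICKENS
WHERE MY_PARAMETER = @input_parameter
RETURN @myresult
END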
Specify a length for your varchar (ex.: varchar(100)). Without length, varchar = 1 char.
As the other posts said, you can store only one character in @myresult because you have not specified any length; a length of 1 is the default for the varchar data type.
Why you are getting NULL rather than the first character:
If multiple records are filtered by the WHERE clause on the ALL_CHICKENS table, the value of the CHICKEN_CODE column is picked up from the last row in ALL_CHICKENS.
It seems that the last row has a NULL value in the CHICKEN_CODE column.
Specify a length for @input_parameter and @myresult, as the default varchar length is 1.
I was searching for integers in a nvarchar column. I noticed that if the row contains '' or 0 it is picked up if I search using just 0.
I'm assuming there is some implicit conversion happening which says that 0 is equal to ''. Why does it treat the two values as the same?
Here is a test:
--0 Test
create table #0Test (Test nvarchar(20))
GO
insert INTO #0Test (Test)
SELECT ''
UNION ALL
SELECT 0
UNION ALL
SELECT ''
Select *
from #0Test
Select *
from #0Test
Where test = 0
SELECT *
from #0Test
Where test = '0'
SELECT *
from #0Test
Where test = ''
drop table #0Test
The behavior you see is the one described in the product documentation. The rules of Data Type Precedence specify that int has higher precedence than nvarchar, therefore the comparison has to occur as the int type:
When an operator combines two expressions of different data types, the
rules for data type precedence specify that the data type with the
lower precedence is converted to the data type with the higher
precedence
Therefore your query is actually as follow:
Select *
from #0Test
Where cast(test as int) = 0;
and the empty string N'' yields the value 0 when cast to int:
select cast(N'' as int)
-----------
0
(1 row(s) affected)
Therefore the expected result is the one you see: the rows with an empty string qualify for the predicate test = 0. This is further proof that you should never mix types freely. For a more detailed discussion of the topic, see How Data Access Code Affects Database Performance.
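If the intent is to match only literal text, keep the comparison on the string side by quoting the value, as the later queries in the question already do (a sketch):
SELECT * from #0Test Where test = N'0'  --only rows whose value is the character 0
SELECT * from #0Test Where test = N''   --only rows whose value is an empty string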
You are implicitly converting the values to int with your UNION statement.
Two empty strings combined with the integer 0 produce an int result column. This happens BEFORE you insert into the nvarchar column, so the data type in the temp table is irrelevant.
Try changing the second select in the UNION to:
SELECT '0'
And you will get the expected result.
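That is, the corrected insert would be (a sketch):
insert INTO #0Test (Test)
SELECT ''
UNION ALL
SELECT '0'
UNION ALL
SELECT ''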