After the SQL team migrated the server from SQL Server 2012 to SQL Server 2019 CU8, an issue started occurring during user maintenance, specifically when a password is inserted or saved. There is a stored procedure that contains the following:
UPDATE TableOfCreds
SET Password = EncryptByPassPhrase('ThisPassword', @Password)
WHERE User_ID = @UserID
The INSERT uses the same expression for the password. In SQL Server 2012 it functions as expected; in SQL Server 2019 the following error occurs:
Msg 8152, Level 16, State 30, Line 7
String or binary data would be truncated
The Microsoft documentation states that SQL Server 2017 onward uses AES-256 for the passphrase encryption, while earlier versions use TRIPLE DES with a 128-bit key length. Not sure if this is the cause and, if so, how to fix it. Some other details are:
Password column in table is of type varchar(50)
Decryption seems to work as expected
Executing the above script outside the procedure results in the same error.
Some research also revealed issues with inlining in UDFs; however, we are not using functions, and converting the procedures to functions may take longer than we want.
What is the best way to encrypt the string and save it to the table without receiving the error?
The return type is varbinary with a maximum size of 8,000 bytes, so the ciphertext will not fit in a varchar(50) column.
https://learn.microsoft.com/en-us/sql/t-sql/functions/encryptbypassphrase-transact-sql?view=sql-server-ver16
select len(EncryptByPassPhrase('ThisPassword','Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industrys standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.'))
Returns
612
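A minimal sketch of the fix, reusing the table, column and parameter names from the question (how you migrate the existing varchar(50) values is up to you; clearing them first or adding a new varbinary column and re-encrypting are both options):

-- EncryptByPassPhrase returns varbinary (up to 8,000 bytes), so the target
-- column has to be varbinary and wide enough for the ciphertext.
-- (If the existing varchar values block the conversion, clear them or add a
-- new varbinary column instead; the old ciphertext has to be re-created anyway.)
ALTER TABLE TableOfCreds ALTER COLUMN Password varbinary(8000);

-- The UPDATE inside the procedure then fits without truncation.
UPDATE TableOfCreds
SET Password = EncryptByPassPhrase('ThisPassword', @Password)
WHERE User_ID = @UserID;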
Related
We have one column in our table whose name is "House£1000", but after deploying the code from the Azure Build Pipeline we could see that the pound sign got converted to "?" in the Azure Build Artifacts. Can anyone suggest something that can resolve this issue?
The possible cause is the use of non-Unicode data types such as char and varchar when defining the columns.
Characters from different languages can require a different number of bytes, so a single-byte code page may not be able to represent the symbol at all.
Unicode data types such as nvarchar and nchar store the value in an encoding that covers these characters, but the column still needs enough bytes for the symbol of that particular language to appear, so size it accordingly, e.g. VARCHAR(270), and use a wider type such as decimal(19, 9) for currency, to avoid data loss through truncation.
Enable UTF-8 encoding when preparing the columns. If it is not enabled, you can sometimes work around the problem by escaping the character, e.g. [%] or [^] to stand for % and ^.
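As a concrete illustration of the data-type advice above, here is a minimal sketch (the table name, the second column and the inserted value are hypothetical) of declaring the column with a Unicode type and inserting text with an N prefix so the £ survives:

-- nvarchar stores the text as Unicode; the length is sized generously for
-- multi-byte characters, and decimal(19, 9) avoids truncating currency values.
CREATE TABLE dbo.Houses
(
    [House£1000] nvarchar(270) NULL,   -- Unicode column name and Unicode data type
    Price        decimal(19, 9) NULL
);

INSERT INTO dbo.Houses ([House£1000], Price)
VALUES (N'House £1000', 1000.0);       -- the N'' prefix keeps the £ as Unicode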
Please go through this: Collation and Unicode support - SQL Server | Microsoft Docs
Reference:
Introducing UTF-8 support for Azure SQL Database | Azure updates | Microsoft Azure
storing-uk-pound-sterling-in-a-database
I have a subject table whose theme field contains the following rows:
theme
-----
pays
économie
associée
And I have this basic query:
SELECT * FROM SUBJECT WHERE THEME='associée';
The query runs fine in SQL Developer and returns the expected row.
Under SQL*Plus, on the other hand, it returns 0 rows, which is not expected.
I have the impression that accented characters are not recognized under SQL*Plus. I suspect an NLS_LANG problem, but I do not know much about it. Please help.
Thank you in advance.
Set your OS session's NLS_LANG environment variable to, e.g., ENGLISH_AMERICA.AL32UTF8, restart your client session, and retry.
If that didn't help, try also running your query as follows:
SELECT * FROM SUBJECT WHERE THEME = n'associée';
Notice the n before the string literal. That's an nvarchar2 string-literal modifier. Depending on your DB charset/national charset settings, you may need to state explicitly that the value you are querying for is in the "national charset", not just the "regular charset".
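As a quick check of what the database itself expects, you can query the NLS views. This is a minimal sketch; NLS_DATABASE_PARAMETERS is a standard Oracle data-dictionary view, but note that the client-side character-set half of NLS_LANG is not visible from inside the database:

-- Compare these values with the character-set part of your NLS_LANG setting.
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');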
If that didn't help, there's actually a multitude of additional variables that come into play when working with accented characters against an Oracle DB.
Explanation:
Your SQL Developer does recognize accents... provided that your Oracle DB session uses a character set compatible with your database character set. The session's character set can be set either at OS level (via an OS environment variable) or, possibly(!), directly in SQL Developer's options. Alas, the aforementioned multitude of other factors may include (though not exclusively):
your OS regional settings,
your OS Unicode support,
your Oracle client software's (SQL Developer) Unicode support,
your Java JDK/JRE's Unicode support,
your JDBC driver's Unicode support,
your other *DBC drivers' Unicode support, if there are any more in chain.
The sad thing is that the more interfaces you have between your keyboard and your Oracle database, the more likely it is that one of them fiddles with your charset conversions badly.
So, let's just hope that the first two hints work for you, otherwise I can't help you (that easily).
I have a db2 database where I store names containing special characters. When I retrieve them with an internal piece of software, I get proper results. However, when I try to do the same with my own queries or look into the db, the characters appear to be stored strangely.
The documentation says that the encoding is utf-8 latin1.
My query looks something like this:
SELECT firstn, lastn
FROM unams
WHERE unamid = 12345
The user with the given ID has some special characters in his/her name: é and ó, but the query returns it as Ă© and Ăł.
Is there a way to convert the characters back to their original form using some simple SQL function? I am new to databases and encoding; I am trying to understand the latter by reading this, but I'm quite lost.
EDIT: I am currently sending queries via SPSS Modeler with a proper ODBC driver; the database lies on a Windows Server 2016.
Per the comments, the solution was to create a Windows environment variable DB2CODEPAGE=1208, then restart, then drop and re-populate the tables.
If the application runs locally on the Db2-server (i.e. only one hostname is involved), the same variable can be set there. This will impact all local applications that use the UTF-8 encoded database.
If the application runs remotely from the Db2-server (i.e. two hostnames are involved) then set the variable on the workstation and on the Windows Db2-server.
Current versions of IBM-supplied Db2 clients on Windows derive their code page from the regional settings, which might not always render Unicode characters correctly; setting DB2CODEPAGE=1208 forces the Db2 client CLI drivers to use a Unicode application code page and overrides this.
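As a sanity check of the server side before touching the client, you can confirm that the database itself really is UTF-8. This is a hedged sketch: it assumes the SYSIBMADM.DBCFG administrative view is available on your Db2 LUW server and exposes the informational codeset/codepage parameters.

-- For a UTF-8 database this should report codeset UTF-8 and codepage 1208.
SELECT name, value
FROM   SYSIBMADM.DBCFG
WHERE  name IN ('codeset', 'codepage');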
with t (firstn) as (
values ('éó')
--SELECT firstn
--FROM unams
--WHERE unamid = 12345
)
select x.c, hex(x.c) c_hex
from
t
, xmltable('for $id in (1 to string-length($s)) return <i>{substring($s, $id, 1)}</i>'
passing t.firstn as "s" columns tok varchar(6) path '.') x(c);
C C_HEX
- -----
é C3A9
ó C3B3
The query above converts the string of characters to a table with each character (C) and its hex representation (C_HEX) in each row.
You can run it as is to check if you get the same output. It must be as described for a UTF-8 database.
Now try to comment out the line with values ('éó') and uncomment the select statement returning some row with these special characters.
If you see the same hex representation of these characters stored in the firstn column, then this means that the string is stored appropriately, but your client tool (SPSS Modeler) can't show these characters correctly for some reason (a wrong font, for example).
How do I report errors or bugs found in SQL Server to the Microsoft SQL Server team?
The error message is correct.
You have two operands.
The literal 'a' is treated as varchar(1) and the second one is varbinary(30) (30 is the default if no length is specified in a CAST). It is invalid to concatenate these mixed datatypes.
[var]char + [var]char works fine
[var]binary + [var]binary is also fine
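A short illustration of the rule (the literals are hypothetical, chosen only to reproduce the operand types described above):

-- Fails: varchar and varbinary are incompatible in the add operator.
SELECT 'a' + CAST(0x41 AS varbinary);

-- Fine: both operands are character data.
SELECT 'a' + CAST(0x41 AS varchar);

-- Fine: both operands are binary data.
SELECT CAST('a' AS varbinary) + CAST(0x41 AS varbinary);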
To answer your question, though: bug reports should go on the Connect site, but it is always worth sanity-checking that it is a genuine bug first!
Three questions with the following scenario:
SQL Server 2005 production db with a Latin1 codepage and showing "?" for invalid chars in Management Studio.
SomeCompanyApp client as a service that populates the data from servers and workstations.
SomeCompanyApp management console that shows "?" for Asian characters.
Since this is a prod db I will not write to it.
I don't know whether the client app that stores the data in the database actually stores it correctly as Unicode and it simply doesn't show because they are using Latin1 for the console.
Q1: As I understand it, SQL Server stores nvarchar text as Unicode regardless of the code page. Or am I completely wrong, and if the code page is Latin1, then everything that is not in that code page gets converted to "?"?
Q2: Is it the same with a text column?
Q3: Is there a way using SQL Server Management Studio or Visual Studio and some code (don't care which language :)) to query the db and show me if the chars really do show up as Japanese, Chinese, Korean, etc.?
My final goal is to extract data from the db and store it in another db using UTF-8, to show Japanese and other Asian chars as what they are in my own client webapp. I will settle for an answer to Q3. I can code in several languages and at the very least understand some others, but I'm just not knowledgeable enough about Unicode. In case you want to know, my webapp will be using pyodbc and Cassandra, but for these questions that doesn't matter.
When inserting into an NVARCHAR column in SSMS, you need to make absolutely sure you're prefixing your string with an N:
This will NOT work:
INSERT INTO dbo.MyTable(NVarcharColumn) VALUES('Some Text with Special Char')
SQL Server will interpret your string in the VALUES(..) as VARCHAR and thus convert any special characters that are not in the code page to "?".
You need this:
INSERT INTO dbo.MyTable(NVarcharColumn) VALUES(N'Some Text with Special Char')
Prefixing your text literal with an N'..' tells SQL Server to treat this as NVARCHAR all the way.
Does this help you solve your Q3?
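For Q3 itself, here is a hedged sketch of a check you can run from SSMS; the table and column names are the hypothetical ones from the INSERT above. UNICODE() returns the code point of the first character, so a stored "?" shows up as 63, while genuine Japanese/Chinese/Korean characters come back with much larger code points:

SELECT NVarcharColumn,
       UNICODE(NVarcharColumn)                 AS first_code_point,  -- 63 means the stored character is '?'
       CONVERT(varbinary(200), NVarcharColumn) AS utf16_bytes        -- raw UTF-16 bytes actually stored
FROM dbo.MyTable;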