EncryptByPassPhrase returns special characters - sql

I'm trying to encrypt with EncryptByPassPhrase in SQL Server 2012, but when I execute this function I get values like "öK{8+¨´¡¿"... maybe someone can help me?
This is the code that I'm using:
IF (@MODE = 1)
BEGIN
    SET @RESUL = CONVERT(varchar(100), ENCRYPTBYPASSPHRASE('Prueba', '200000'))
    PRINT 'ENCRYPT' + CAST(@RESUL AS varchar(20))
END

Let's break it down. According to the documentation, the output of ENCRYPTBYPASSPHRASE() is varbinary. You're CONVERTing that to varchar. According to the documentation for CONVERT, if you don't provide a style, it "Translates ASCII characters to binary bytes or binary bytes to ASCII characters. Each character or byte is converted 1:1." If you're looking for something more like 0x123abc, pass an additional style parameter (1) to CONVERT to make it do that.
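For illustration, a minimal sketch of what that style parameter changes (the variable name here is just an example):
DECLARE @cipher varbinary(8000) = ENCRYPTBYPASSPHRASE('Prueba', '200000');

-- Style 0 (the default) reinterprets the raw cipher bytes as characters: the unreadable "special characters"
SELECT CONVERT(varchar(100), @cipher) AS style_0;

-- Style 1 produces a hex string such as 0x01000000..., which is safe to display or transcribe
SELECT CONVERT(varchar(max), @cipher, 1) AS style_1;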
All that said, unless you need a human to be able to transcribe the encrypted content (or otherwise interpret it), I'd leave it in its varbinary representation. Less room for error on the decryption side. Specifically:
DECLARE @resul VARBINARY(8000);
SET @resul = ENCRYPTBYPASSPHRASE('Prueba', '200000');
SELECT CAST(DECRYPTBYPASSPHRASE('Prueba', @resul) AS VARCHAR(50));

Related

In a Firebird stored procedure, how do I insert non-UTF text into a UTF field?

In my stored procedure I'm doing this:
declare variable my_text varchar(512);
...
select non_utf_notes from table1 where unique_field = :some_value
into :my_text;
....
insert into table2(unique_field, utf_text)
values(:some_value, :my_text);
Table1 has no character set defined, but table2 is defined with a character set of UTF8.
If you haven't explicitly set a default character set in your database (or the field in question was created before the default was set), then a (VAR)CHAR field has character set NONE unless you explicitly specified a character set for the field.
Character set NONE is a bit of an annoyance (although it can be powerful), as it is essentially binary data. The absence of a specific real character set means that upon conversion to a real character set, the bytes are simply reinterpreted in that character set. This is fine for most character sets (although you might get the wrong characters), but UTF-8 has a variable-length encoding where certain combinations of bytes are invalid.
To handle conversion from NONE to UTF8, you either need to be sure that the data in NONE really is UTF-8, or you first need to cast to the 'right' character set (e.g. WIN1252) before casting to UTF8, as this will perform the proper conversion.
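A hedged sketch of that double cast, using the table and variable names from the question and assuming the NONE data really is WIN1252:
insert into table2 (unique_field, utf_text)
values (:some_value,
        cast(cast(:my_text as varchar(512) character set WIN1252)
             as varchar(512) character set UTF8));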

Inserting UTF-32 characters

I'm testing UTF-32 characters (specifically emojis) with SQL Server (2008 R2, 10.5), and at this stage I'm checking whether the server supports the given code point.
For this case I'm using the rose emoji (U+1F339) with the following query
SELECT '' + nchar(0x1F339) + 'test'
which comes back in Management Studio as (NULL).
What format do I need to encode the character in so it doesn't return NULL in SQL Server?
SQL Server only supports UCS-2, which is currently (almost) the same as UTF-16. So exactly 2 bytes per character and all that.
An idea, if I may. You can store the data in a BINARY or VARBINARY field, which doesn't care about encoding. You can then use a mapping table or external script to parse the binary into a text field, replacing 0x1F339 with :rose: or your own custom format, for example.
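If that route appeals, a rough sketch of the idea (the table and column names are made up for illustration):
-- Hypothetical table: store the raw bytes, encoding-agnostic
CREATE TABLE dbo.EmojiNotes (Id int IDENTITY(1,1) PRIMARY KEY, Body varbinary(max));

-- U+1F339 stored as its UTF-32 code point bytes, followed by 'test' as ASCII (0x74657374)
INSERT INTO dbo.EmojiNotes (Body) VALUES (0x0001F33974657374);

-- A mapping table or external script can later translate known byte patterns back to text such as ':rose:'
SELECT Id, CONVERT(varchar(max), Body, 1) AS hex_bytes FROM dbo.EmojiNotes;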
Since it's a supplementary (UTF-32-range) character, it has to be written as two UTF-16 code units (a surrogate pair):
-- Returns: 🌹test
SELECT '' + nchar(0xD83C) + nchar(0xDF39) + 'test'
You can find this code under the "UTF-16 Hex (C Syntax)" heading at the link you posted.
I also have to recommend this article, because it was very helpful during my investigation: Unicode Escape Sequences Across Various Languages and Platforms (including Supplementary Characters)
Couple of options for those who are looking for answers:
SQL Server technically does not have character escape sequences, but you can still create characters using either byte sequences or Code Points via the CHAR() and NCHAR() functions. We are only concerned with Unicode here, so we will only be using NCHAR().
All versions:
NCHAR(0 - 65535) for BMP Code Points (using an int/decimal value)
NCHAR(0x0 - 0xFFFF) for BMP Code Points (using a binary/hex value)
NCHAR(0 - 65535) + NCHAR(0 - 65535) for a Surrogate Pair / Two UTF-16 Code Units
NCHAR(0x0 - 0xFFFF) + NCHAR(0x0 - 0xFFFF) for a Surrogate Pair / Two UTF-16 Code Units
CONVERT(NVARCHAR(size), 0xHHHH) for one or more characters in UTF-16 Little Endian (“HHHH” is 1 or more sets of 4 hex digits)
Starting in SQL Server 2012:
If the database’s default collation supports Supplementary Characters (collation name ends in _SC, or starting in SQL Server 2017 contains 140 but does not end in _BIN*, or starting in SQL Server 2019 ends in _UTF8 but does not contain _BIN2), then NCHAR() can be given Supplementary Character Code Points:
decimal value can go up to 1114111
hex value can go up to 0x10FFFF
Starting in SQL Server 2019:
“_UTF8” collations enable CHAR and VARCHAR data to use the UTF-8 encoding:
CONVERT(VARCHAR(size), 0xHH) for one or more characters in UTF-8 (“HH” is 1 or more sets of 2 hex digits)
NOTE: The CHAR() function does not work for this purpose. It can only produce a single byte, and UTF-8 is only a single byte for values 0 – 127 / 0x00 – 0x7F.
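Putting those options together for the rose from the question (a sketch; the last line assumes a supplementary-character-aware (_SC) collation is in effect):
-- Surrogate pair: works on any version/collation
SELECT NCHAR(0xD83C) + NCHAR(0xDF39) + N'test';

-- Same character from its UTF-16 Little Endian bytes
SELECT CONVERT(NVARCHAR(10), 0x3CD839DF) + N'test';

-- SQL Server 2012+ with an _SC collation only
SELECT NCHAR(0x1F339) + N'test';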

Convert a file to Binary or Hexadecimal

So I have a file that I need to have in either binary or hex format. Everything that I've been able to find basically says to store the text in a string and convert it to binary or hex from there, but I can't do it this way. The file was written using its own private character set that uses null and system hex codes, so Notepad doesn't know what to do with these characters and replaces them with wrong characters and spaces. This distorts the information, so it won't be correct if I try to convert it to binary/hex.
I really just need to have the binary/hex information stored in a string or text box so I can work with it. I don't really need it to be saved as a file.
Never mind, I finally figured it out. I used a file stream to read the data byte by byte. I didn't understand how to convert this at first, as the first byte in the array was showing as 80 when I knew the binary data should've been "1010000" (I didn't realize at the time that 80 was the decimal format).
Anyway, I used BitConverter.ToString and it put everything together and converted it to hexadecimal format. So I'm all good now.

How to declare a SQL INSERT Statement with a Unicode letter [duplicate]

This question already has an answer here:
Can not insert German characters in Postgres
I have a SQL statement which contains a Unicode-specific character: the ę in the Polish word Przesunięcie. Please look at the following SQL INSERT statement:
INSERT INTO res_bundle_props (res_bundle_id, value, name)
VALUES(2, 'Przesunięcie przystanku', 'category.test');
I work with a Postgres database. How can I insert the Polish word with the Unicode letter?
Find out what the server and client encodings are:
show server_encoding;
server_encoding
-----------------
UTF8
show client_encoding;
client_encoding
-----------------
UTF8
Then set the client to the same encoding as the server:
set client_encoding = 'UTF8';
SET
No special syntax is required so long as:
Your server_encoding includes those characters (if it's utf-8 it does);
Your client_encoding includes those characters;
Your client_encoding correctly matches the encoding of the bytes you're actually sending
The latter is the one that often trips people up. They think they can just change client_encoding with a SET client_encoding statement and it'll do some kind of magical conversion. That is not the case. client_encoding tells PostgreSQL "this is the encoding of the data you will receive from the client, and the encoding that the client expects to receive from you".
Setting client_encoding to utf-8 doesn't make the client actually send UTF-8. That depends on the client. Nor do you have to send utf-8; that string can also be represented in iso-8859-2, iso-8859-4 and iso-8859-10 among other encodings.
What's crucial is that you tell the server the encoding of the data you're actually sending. As it happens, that string is the same in all three of the encodings mentioned, with the ę encoded as 0xea... but in UTF-8 it'd be the two bytes 0xc4 0x99. If you send UTF-8 to the server and tell it that it's iso-8859-2, the server can't tell you're wrong and will interpret those two bytes as two separate iso-8859-2 characters (Ä followed by another), i.e. mojibake.
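If you want to see those byte representations for yourself, PostgreSQL's convert_to() will show them (LATIN2 is PostgreSQL's name for iso-8859-2):
SELECT convert_to('ę', 'UTF8')   AS utf8_bytes,   -- \xc499
       convert_to('ę', 'LATIN2') AS latin2_bytes; -- \xea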
So... really, it depends on things like the system's default encoding, the encoding of any files/streams you're reading data from, etc. You have two options:
Set client_encoding appropriately for the data you're working with and the default display locale of the system. This is easiest for simple cases, but harder when dealing with multiple different encodings in input or output.
Set client_encoding to utf-8 (or the same as server_encoding) and make sure that you always convert all input data into the encoding you set client_encoding to before sending it. You must also convert all data you receive from Pg back.
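For the statement in the question, a minimal sketch of the first option (assuming the text you're sending really is UTF-8 encoded):
SET client_encoding = 'UTF8';

INSERT INTO res_bundle_props (res_bundle_id, value, name)
VALUES (2, 'Przesunięcie przystanku', 'category.test');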

Replace character in SQL results

This is from an Oracle SQL query. It has these weird skinny rectangle shapes in the database in places where apostrophes should be. (I wish we could paste screenshots in here.)
It looks like this when I copy and paste the results:
spouse�s
Is there a way to write a SQL SELECT statement that searches for this character in the field and replaces it with an apostrophe in the results?
Edit: I need to change only the results in a SELECT statement for reporting purposes; I can't change the database.
I ran this
select dump('�') from dual;
which returned
Typ=96 Len=3: 239,191,189
This seems to work so far
select translate('What is your spouse�s first name?', '�', '''') from dual;
but this doesn't work
select translate(Fieldname, '�', '''') from TableName
Select FN from TN
What is your spouse�s first name?
SELECT DUMP(FN, 1016) from TN
Typ=1 Len=33 CharacterSet=US7ASCII: 57,68,61,74,20,69,73,20,79,6f,75,72,20,73,70,6f,75,73,65,92,73,20,66,69,72,73,74,20,6e,61,6d,65,3f
EDIT:
So I have established that it is the backquote character. I can't get the DB updated, so I'm trying this code
SELECT REGEX_REPLACE(FN,"\0092","\0027") FROM TN
and I'm getting ORA-00904: "REGEX_REPLACE": invalid identifier
This seems to be a problem with your charset configuration. Check your NLS_LANG and other NLS_xxx environment/registry values. You have to check the Oracle server, your client, and the client used by whoever inserted that data.
Try to DUMP the value. You can do it with a select as simple as:
SELECT DUMP(the_column)
FROM xxx
WHERE xxx
UPDATE: I think that before trying to replace, you should look for the root of the problem. If this happens because of a charset issue, you can end up with badly corrupted data.
UPDATE 2: Answering the comments. The problem may not be on the database server side; it may be on the client side. The problem (if this is the problem) can be a translation in the server-to/from-client communication, i.e. a bad server-client configuration. For instance, if the server is defined with a UTF8 charset and your client uses US7ASCII, then all accented characters will appear as ?.
Another possibility: if the server is defined with a UTF8 charset and your client is also UTF8, but the application is not able to show UTF8 chars, then the problem is on the application side.
UPDATE 3: On your examples:
select translate('What is your spouse�s first name?', ...) works because the � is exactly the same character on both sides: you pasted it into both places.
select translate(Fieldname, ...) does not work because the � is not what is stored in the database; it is the character your client receives, probably because some translation occurs between the data in the table and what is shown to you.
Next step: look at the DUMP syntax and try to extract the codes for the mysterious char (from the table, not by pasting �!).
I would say there's a good chance the character is a single-tick "smart quote" (I hate the name). The smart quotes are characters 0x91–0x94 (145–148) in the Windows-1252 encoding, or Unicode U+2018, U+2019, U+201C, and U+201D.
I'm going to propose a front-end application-based, client-side approach to the problem:
I suspect that this problem has more to do with a mismatch between the font you are trying to display the word spouse�s with, and the character �. That icon appears when you are trying to display a character in a Unicode font that doesn't have the glyph for the character's code.
The Oracle database will dutifully return whatever characters were INSERTed into its column. It's up to you and your application to interpret what they will look like given the font you are using to display the data, so I suggest investigating what this mysterious � character is that is replacing your apostrophes. Start by using FerranB's recommended DUMP().
Try running the following query to get the character code:
SELECT DUMP(<column with weird character>, 1016)
FROM <your table>
WHERE <column with weird character> like '%spouse%';
If that doesn't grab your actual text from the database, you'll need to modify the WHERE clause to actually grab the offending column.
Once you've found the code for the character, you could replace it using the REGEXP_REPLACE() built-in function: determine the raw hex code of the character and supply the ASCII / C0 Controls and Basic Latin character 0x0027 (') as the replacement, using code similar to this:
UPDATE <table>
SET <column with offending character>
    = REGEXP_REPLACE(<column with offending character>,
                     '<character code of �>',
                     '''')
WHERE REGEXP_LIKE(<column with offending character>, '<character code of �>');
If you aren't familiar with Unicode and different ways of character encoding, I recommend reading Joel's article The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!). I wasn't until I read that article.
EDIT: If you're seeing 0x92, there's likely a charset mismatch here:
0x92 in CP-1252 (the default Windows code page) is a right single quotation mark (a "smart" apostrophe), which looks a lot like a plain apostrophe. This code isn't a valid ASCII character, and it isn't valid in ISO-8859-1 either. So probably either the database is in CP-1252 encoding (I don't find that likely), or a database connection which spoke CP-1252 inserted it, or somehow the apostrophe got converted to 0x92. The database is returning values that are valid in CP-1252 (or some other charset where 0x92 is valid), but your db client connection isn't expecting CP-1252. Hence the weird question mark.
And FerranB is likely right. I would talk with your DBA or some other admin about this to get the issue straightened out. If you can't, I would try either doing the update above (seems like you can't), or doing this:
INSERT INTO <table> (<normal table columns>, ..., <column with offending character>)
SELECT <all normal columns>,
       REGEXP_REPLACE(<column with offending character>,
                      CHR(146),    -- the 0x92 character
                      '''')        -- a plain apostrophe
FROM <table>
WHERE REGEXP_LIKE(<column with offending character>, CHR(146));
DELETE FROM <table> WHERE REGEXP_LIKE(<column with offending character>, CHR(146));
Before you do this, you need to understand what actually happened. It looks to me like someone inserted non-ASCII strings into the database, for example Unicode or UTF-8. Before you fix this, be very sure that this is actually a bug. The apostrophe comes in many forms, not just the plain "'".
TRANSLATE() is a useful function for replacing or eliminating known single character codes.
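Since the question only needs the fix in the SELECT results (not in the stored data), a minimal sketch along those lines, assuming the offending byte really is 0x92 (decimal 146) as the DUMP output above suggests:
SELECT TRANSLATE(FN, CHR(146), '''') AS fn_cleaned
FROM TN;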