SQL function
cast(expression as type):
It is ANSI standard. Is the type standardized? What types are allowed? Do they differ from database to database?
I looked at MySQL and others. MySQL has SIGNED/UNSIGNED, while others have INT.
CAST() is ANSI standard. Off the top of my head, ANSI data types are things like:
DECIMAL/NUMERIC(precision, scale)
VARCHAR()/CHAR()
DATE/TIME/DATETIME/INTERVAL
DOUBLE PRECISION/FLOAT
BIGINT/INT/SMALLINT
MySQL changes the syntax a bit: SIGNED and UNSIGNED are used instead of INT, and CHAR is used for all the character types. Other databases have their own modifications to CAST(); for instance, Google BigQuery uses STRING instead of the character types.
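As a rough sketch of the dialect differences (the table and column names here are made up):

-- ANSI SQL and most databases
SELECT CAST(col AS INT) FROM t;
SELECT CAST(col AS VARCHAR(20)) FROM t;
-- MySQL: SIGNED/UNSIGNED instead of INT, CHAR for all character types
SELECT CAST(col AS SIGNED) FROM t;
SELECT CAST(col AS CHAR(20)) FROM t;
-- Google BigQuery: STRING instead of the character types
SELECT CAST(col AS STRING) FROM t;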
Related
PostgreSQL cannot automatically convert floating-point data that comes from a remote table in the format "1,1"
I am trying to connect DB2 and PostgreSQL using some FDW extensions. Right now I am using odbc_fdw, but ODBC always returns float types in the format "1,1", and PostgreSQL can only use a point as the decimal separator. Are there any PostgreSQL settings or ODBC configs for this?
SELECT CAST('1,01000000E+1' as real);
Error code 22P02: invalid input syntax for type real.
I expect to automatically convert strings like "1,1" to float using a cast. I think without this I won't be able to use foreign tables with float data types.
You could replace the comma with a point before casting:
SELECT replace('1,01000000E+1', ',', '.')::real;
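If the remote table exposes several such columns, one option (just a sketch; remote_data and its columns are hypothetical names) is to hide the conversion behind a view, so the rest of the application sees proper real values:

-- The foreign table delivers comma-decimal text; the view normalizes it
CREATE VIEW remote_data_fixed AS
SELECT id,
       replace(price_text, ',', '.')::real AS price
FROM remote_data;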
I'm trying to convert an MSSQL query to a PostgreSQL query.
The MSSQL query is:
CONVERT(VARCHAR, column)
I know PostgreSQL casts can be written in two ways:
1. CAST(column AS VARCHAR)
2. column::VARCHAR
What's the difference?
Is it ok to use the second method?
Quote from the manual
PostgreSQL accepts two equivalent syntaxes for type casts:
CAST ( expression AS type )
expression::type
The CAST syntax conforms to SQL; the syntax with :: is historical PostgreSQL usage.
(emphasis mine)
So both do the same thing: CAST() is standard SQL, while :: is Postgres-specific.
Note that there is a third way of casting (as explained in the manual)
It is also possible to specify a type cast using a function-like syntax:
typename ( expression )
But it's not recommended; as the manual says, "Obviously, this is not something that a portable application should rely on".
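To make the three forms concrete, all of the following return the integer 2 in PostgreSQL (I use int4 in the last one because the function-like form only works for type names that are also valid function names):

SELECT CAST(1.5 AS int);    -- standard SQL
SELECT 1.5::int;            -- historical PostgreSQL
SELECT int4(1.5);           -- function-like syntax, not portable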
What is the meaning and difference between these queries?
SELECT U'String' FROM dual;
and
SELECT N'String' FROM dual;
In this answer I will try to provide information from official sources.
(1) The N'' Text Literal
N'' is used to convert a string to the NCHAR or NVARCHAR2 datatype.
According to this Oracle documentation Oracle - Literals
The syntax of text literals is given there as a diagram, where N or n specifies the literal using the national character set (NCHAR or NVARCHAR2 data).
Also in this second article Oracle - Datatypes
N'String' is used to convert a string to the NCHAR datatype.
From the article listed above:
The following example compares the translated_description column of the pm.product_descriptions table with a national character set string:
SELECT translated_description FROM product_descriptions
WHERE translated_name = N'LCD Monitor 11/PM';
(2) The U'' Literal
U'' is used to handle SQL NCHAR string literals in the Oracle Call Interface (OCI).
Based on this Oracle documentation Programming with Unicode
The Oracle Call Interface (OCI) is the lowest level API that the rest of the client-side database access products use. It provides a flexible way for C/C++ programs to access Unicode data stored in SQL CHAR and NCHAR datatypes. Using OCI, you can programmatically specify the character set (UTF-8, UTF-16, and others) for the data to be inserted or retrieved. It accesses the database through Oracle Net.
OCI is the lowest-level API for accessing a database, so it offers the best possible performance.
Handling SQL NCHAR String Literals in OCI
You can switch it on by setting the environment variable ORA_NCHAR_LITERAL_REPLACE to TRUE. You can also achieve this behavior programmatically by using the OCI_NCHAR_LITERAL_REPLACE_ON and OCI_NCHAR_LITERAL_REPLACE_OFF modes in OCIEnvCreate() and OCIEnvNlsCreate(). So, for example, OCIEnvCreate(OCI_NCHAR_LITERAL_REPLACE_ON) turns on NCHAR literal replacement, while OCIEnvCreate(OCI_NCHAR_LITERAL_REPLACE_OFF) turns it off.
[...] Note that, when the NCHAR literal replacement is turned on, OCIStmtPrepare and OCIStmtPrepare2 will transform N' literals with U' literals in the SQL text and store the resulting SQL text in the statement handle. Thus, if the application uses OCI_ATTR_STATEMENT to retrieve the SQL text from the OCI statement handle, the SQL text will return U' instead of N' as specified in the original text.
(3) Answer to your question
From a datatype perspective, there is no difference between the two queries provided.
N'string' just returns the string as NCHAR type.
U'string' also returns the NCHAR type, but it does additional processing to the string: it replaces \\ with \ and \xxxx with the Unicode code point U+xxxx, where xxxx is 4 hexadecimal digits. This is similar to UNISTR('string'); the difference is that the latter returns NVARCHAR2.
U'' literals are useful when you want a Unicode string independent of encoding and NLS settings.
Example:
select n'\€', u'\\\20ac', n'\\\20ac' from dual;
N'\€' U'\\\20AC' N'\\\20AC'
----- ---------- ----------
\€ \€ \\\20ac
When using N'', we denote that the given datatype is NCHAR or NVARCHAR2.
U'' is used to denote Unicode.
The documented N'' literals are the same as standard character literals ('') except that their data type is NVARCHAR2 and not VARCHAR2. It is important to note that the characters in these literals, together with the entire SQL statement, are converted from the client character set to the database character set when transmitted to the server. All characters from the literals that are not supported by the database character set are lost.
The data type of the undocumented U'' literals is also NVARCHAR2. The content of a U'' literal is interpreted like the input to the SQL UNISTR function. That is, each character sequence \xxxx, where each x is one hex digit, is interpreted as a UTF-16 code point U+xxxx. I am not sure why the U'' literals are undocumented; I can only guess. They are used internally by the NCHAR literal replacement feature, which, when enabled on a client, automatically translates N'' literals to U'' literals. This prevents the mentioned data loss due to character set conversion and enables literal Unicode data to be provided for NVARCHAR2 columns even if the database character set is not Unicode.
The two queries in this thread's question are generally not equivalent because the literal text would be interpreted differently. However, if no backslash is present in the literals, no difference can be observed.
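A minimal illustration of that difference (the column aliases are mine): N'' keeps the backslash sequence as literal text, while U'' interprets \20ac as the code point U+20AC, the euro sign.

SELECT N'\20ac' AS n_literal, U'\20ac' AS u_literal FROM dual;

N_LITERAL U_LITERAL
--------- ---------
\20ac     €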
I want to save the Bangla language in SQL Server. Which data type can I use to do it in SQL Server 2005 or SQL Server 2008?
I tried the varchar and varbinary types, but they cannot save Bangla.
How can this be done?
You're using SQL_Latin1_General_CP1_CI_AS for your collation, which is suited for the Latin character set (ISO-8859-1). To store characters from other character sets, you can use NVARCHAR(), which can store the full Unicode range irrespective of collation. This does mean it will need to be treated as NVARCHAR() all the way through: as quoted constants (e.g. N'বাংলা Bangla'), as the data types for parameters to stored procedures, and so on.
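As a sketch (the table and column names are made up), the two key points are an NVARCHAR column and the N prefix on string literals:

CREATE TABLE Posts (
    Id INT IDENTITY PRIMARY KEY,
    Body NVARCHAR(4000)
);

-- The N prefix keeps the literal in Unicode; without it the text is
-- converted through the database code page and the Bangla characters are lost
INSERT INTO Posts (Body) VALUES (N'বাংলা Bangla');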
I have a function which maps Java types to SQL types.
As I want to store binary data, is there any type defined by the SQL standard which I can use both in PostgreSQL and hsqldb?
The SQL-92 standard does not define a binary type. PostgreSQL has a bytea type; HSQLDB has a binary type.
For a very portable (if not efficient) solution, convert the binary to base64, and store it in a string.
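In PostgreSQL, for example, the round trip could use the built-in encode()/decode() functions (other databases spell this differently, which is the portability trade-off):

SELECT encode('\x48656c6c6f'::bytea, 'base64');  -- 'SGVsbG8='
SELECT decode('SGVsbG8=', 'base64');             -- back to the original bytea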
BINARY and VARBINARY are defined by the SQL Standard. The Standard is currently at SQL:2011 (after 92, 1999, 2003, and 2008). HSQLDB supports all the core data types defined by the Standard.
The PostgreSQL BYTEA is similar to VARBINARY. You can define the BYTEA type in HSQLDB as a VARBINARY type with a large maximum size:
CREATE TYPE BYTEA AS VARBINARY(1000000)
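Once defined, the type can be used like any built-in type (a sketch; the files table is hypothetical, and X'...' is the standard hexadecimal literal for binary data):

CREATE TABLE files (
    id INT PRIMARY KEY,
    content BYTEA
);

INSERT INTO files VALUES (1, X'DEADBEEF');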
SQL has supported binary types since 1999: https://mariadb.com/kb/en/sql-data-types/. Vendors have had over a decade to add support for binary types, and most SQL databases do.