Writing a column name as [My Column] avoids the error that the bare name My Column causes.
Writing a string value as '25,00' avoids the error that the bare 25,00 causes.
However, I get an error when I use single quotes to enclose values if the column's data type is numeric. Is there any other way to enclose values safely that works for both string and numeric data types?
Numeric values don't take any enclosing characters, and they don't contain commas.
For strings, depending on your settings and the database, the enclosing character can be a single or a double quote.
In SQL Server the CAST / CONVERT functions are regionally aware. Therefore use CONVERT in your query, passing the number as a quoted string, to convert it to the required decimal type, e.g.:
SELECT CONVERT(decimal(5,2),'1234,56')
You are probably getting an error when you use quotes because the string '25,00' is not a valid decimal number. Check your RDBMS documentation to see how strings are implicitly converted to number types.
Without the quotes, 25,00 is also invalid, I believe, regardless of your location. The SQL standard does not permit literal numbers to be specified using comma as the decimal separator.
A column name like My Column causes an error because of the space in it. [My Column] removes the ambiguity.
A value such as '25,00' is valid because the quotes make it a string, while 25,00 isn't a valid number (at least not in your part of the world) because of the comma.
If you were to insert 25,00 as a number, how would the DB distinguish it from two separate numbers, 25 and 00?
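To make the distinction concrete, here is a small sketch (MyTable, TextCol and NumCol are made-up names; the bracket syntax is SQL Server's, while standard SQL would use double quotes around the name):
SELECT [My Column]                 -- bracketed name: the space no longer causes an error
FROM MyTable
WHERE TextCol = '25,00'            -- string literal: quotes required, the comma is just a character
  AND NumCol = 25.00;              -- numeric literal: no quotes, dot as the decimal separator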
Does a regular INSERT INTO statement even work with TypeORM? I've tried formatting the string and the quotes every which way and have lost all patience.
await queryRunner.query('INSERT INTO "table"(column1,column2) VALUES ("Hi","Bye")');
Error: column "Hi" does not exist.
i.e. it treats the first value as a column reference.
I also tried
await queryRunner.query('INSERT INTO "table"(column1,column2) VALUES ($1,$2) --PARAMETERS["Hi", "Bye"]');
Error: There is no parameter $1
Your problem comes from the fact that you use double quotes for your string values. As defined in the PostgreSQL documentation, a string constant is an arbitrary sequence of characters bounded by single quotes ('). Further on in the documentation you can find that double quotes (") are used to delimit identifiers (such as table or column names).
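So, as a minimal sketch, the raw statement should use single quotes around the values and double quotes only around identifiers; alternatively, queryRunner.query also takes an optional array of parameter values as a second argument, which avoids manual quoting altogether:
INSERT INTO "table"(column1, column2) VALUES ('Hi', 'Bye');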
Can you help me with this Teradata (TD) error?
I just do cast(array_type as varchar(200)) as col1 and it works, but when I use this col1 in a comparison against another column, I get:
The arguments of the CAST function must be of the same character data type
What is going on?
Check this reference from the Teradata documentation. It appears that you are casting a character-type column with one character set to a character column with a different character set. To rephrase: you are casting a character type to a character type, and only the character sets change rather than the data type, but this is not the intended usage of the CAST operation.
In order to change character sets, you will need to use TRANSLATE rather than CAST. Remember that TRANSLATE can raise errors for non-convertible characters, so you may want to play with its arguments to ignore such errors. Check this Teradata documentation reference for TRANSLATE.
Remember to check the WITH ERROR argument available with TRANSLATE if you run into issues with non-convertible characters. Depending on your use case, you can then either replace the placeholder character with an empty string or take some other action on rows containing the placeholders.
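As a rough sketch of what that could look like (my_table, array_col and other_col are placeholder names, and LATIN_TO_UNICODE is only an assumed translation name; pick the one that matches your source and target character sets):
SELECT *
FROM my_table
WHERE TRANSLATE(CAST(array_col AS VARCHAR(200))        -- the cast you already have
                USING LATIN_TO_UNICODE WITH ERROR)      -- WITH ERROR substitutes non-convertible characters instead of failing
      = other_col;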
I want to store a comma (,) instead of a dot (.) in a decimal(12,2) column in SQL Server 2014, but I am unable to achieve this.
I need the following behavior:
When I save the decimal value 2.56 to a database table, it should automatically be stored as 2,56.
What setting should I apply in SQL Server so that it directly converts and saves the decimal dot (.) as a comma (,)?
Is there any SQL Server collation or locale setting to store a comma in a decimal(12,2) column?
SQL Server stores decimal values in an internal binary structure which does not include a decimal separator character. The separator used for displaying data is controlled entirely by the client application. Consequently, there is no SQL Server setting to control this.
Although you could convert the decimal value to a string containing the desired separator using T-SQL, the best practice is to do that in the presentation layer where you have more robust functions that can honor the client language and locale.
The SQL syntax uses Dot (.) as the decimal separator. You can't change that.
You are getting 2,56 because of your locale settings. All your queries should always use Dot (.)
If I'm understanding correctly, you don't care what SQL Server stores internally; what you want is a comma instead of a dot when querying the table.
If so: assume your column Average is of datatype decimal.
You can do this:
SELECT REPLACE(CAST(Average AS NVARCHAR), '.', ',')
FROM your_table
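If you are on SQL Server 2012 or later, FORMAT with an explicit culture is another presentation-side option; a sketch (your_table and Average as above, and note that FORMAT is noticeably slower than REPLACE on large result sets):
SELECT FORMAT(Average, '0.00', 'de-de') AS AverageWithComma   -- e.g. 2.56 becomes '2,56'; the custom pattern avoids thousands separators
FROM your_table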
I have a string with value
'MAX DATE QUERY: SELECT iso_timestamp(MAX(time_stamp)) AS MAXTIME FROM observation WHERE offering_id = 'HOBART''
But when inserting it into a PostgreSQL table I am getting the error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "HOBART".
This is probably because my string contains single quotes. I don't know the string value in advance; it changes every time and may contain special characters such as \ since I am reading it from a file and saving it into a Postgres database.
Please give me a general solution for escaping such characters.
As per the SQL standard, quotes inside a string literal are escaped by doubling them, i.e.:
insert into table (column) values ('I''m OK')
If you replace every single quote in your text with two single quotes, it will work.
Normally, a backslash escapes the following character, but literal backslashes are similarly escaped by using two backslashes:
insert into table (column) values ('Look in C:\\Temp')
You can use double-dollar quoting to handle the special characters in your string.
The query mentioned above, insert into table (column) values ('I''m OK'),
changes to insert into table (column) values ($$I'm OK$$).
To make the delimiter unique so that it doesn't get mixed up with the value, you can put any characters between the two dollar signs, such as
insert into table (column) values ($aesc6$I'm OK$aesc6$).
Here $aesc6$ is the unique delimiter string, so that even if $$ appears inside the value it is treated as part of the value and not as a delimiter.
You appear to be using Java and JDBC. Please read the JDBC tutorial, which describes how to use parameterized queries to safely insert data without risking SQL injection problems.
Please read the prepared statements section of the JDBC tutorial and these simple examples in various languages including Java.
Since you're having issues with backslashes, not just single quotes, I'd say you're running PostgreSQL 9.0 or older, which defaults to standard_conforming_strings = off. In newer versions backslashes are only special if you use the PostgreSQL extension E'escape strings'. (This is why you should always include your PostgreSQL version in questions.)
You might also want to examine:
Why you should use prepared statements.
The PostgreSQL documentation on the lexical structure of SQL queries.
While it is possible to explicitly quote values, doing so is error-prone, slow and inefficient. You should use parameterized queries (prepared statements) to safely insert data.
In future, please include a code snippet that you're having a problem with and details of the language you're using, the PostgreSQL version, etc.
If you really must manually escape strings, you'll need to make sure that standard_conforming_strings is on and double the quotes, e.g. 'don''t manually escape text'; or use PostgreSQL-specific E'escape strings where you \'backslash escape\' quotes'. But really, use prepared statements; it's way easier. A quick sketch of both styles follows.
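Assuming the target is the observation table from the question with a text column called note (the column name is made up), the two manual styles look like this:
-- standard SQL style: double every embedded single quote
INSERT INTO observation (note)
VALUES ('MAX DATE QUERY: ... WHERE offering_id = ''HOBART''');
-- PostgreSQL escape-string style: the E prefix makes backslash escapes explicit
INSERT INTO observation (note)
VALUES (E'Look in C:\\Temp, offering_id = \'HOBART\'');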
Some possible approaches are:
use prepared statements
convert all special characters to their equivalent HTML entities.
use base64 encoding while storing the string, and base64 decoding while reading the string from the db table.
Approach 1 (prepared statements) can be combined with approaches 2 and 3.
Approach 3 (base64 encoding) converts all characters into a small set of safe ASCII characters without losing any information. But you may not be able to do full-text search using this approach (see the sketch just below).
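For approach 3, the encoding would normally be done in the application before the value reaches SQL at all; purely as a PostgreSQL-side sketch of the round trip (the note_b64 column is hypothetical, and the literal here still needs its quote doubled because it is typed directly into SQL):
-- store: encode the text as base64 (plain ASCII letters, digits, +, / and =)
INSERT INTO observation (note_b64)
VALUES (encode(convert_to('I''m OK', 'UTF8'), 'base64'));
-- read: decode back to the original text
SELECT convert_from(decode(note_b64, 'base64'), 'UTF8') AS note
FROM observation;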
Unicode (nvarchar) string literals in SQL Server start with N, like this:
update table set stringField = N'/;l;sldl;''mess'
Does anyone know a good way to count characters in a text (nvarchar) column in SQL Server?
The values there can be text, symbols and/or numbers.
So far I have used sum(datalength(column))/2, but this only works for text. (It's a method based on DATALENGTH, and that can vary from one type to another.)
You can find the number of characters using system function LEN.
i.e.
SELECT LEN(Column) FROM TABLE
Use
SELECT length(yourfield) FROM table;
Use the LEN function:
Returns the number of characters of the specified string expression, excluding trailing blanks.
Doesn't SELECT LEN(column_name) work?
The text data type doesn't work with the LEN function.
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead. For more information, see Using Large-Value Data Types.
Source
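If the column really is the legacy text type, one workaround (a sketch, with mytable and textcolumn as placeholder names) is to cast it to varchar(max) first, which LEN accepts:
SELECT LEN(CAST(textcolumn AS varchar(max))) AS char_count
FROM mytable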
I had a similar problem recently, and here's what I did:
SELECT
columnname as 'Original_Value',
LEN(LTRIM(columnname)) as 'Orig_Val_Char_Count',
N'['+columnname+']' as 'UnicodeStr_Value',
LEN(N'['+columnname+']')-2 as 'True_Char_Count'
FROM mytable
The first two columns look at the original value and count the characters (minus leading/trailing spaces).
I needed to compare that with the true count of characters, which is why I used the second LEN function. It sets the column value to a string, forces that string to Unicode, and then counts the characters.
By using the brackets, you ensure that any leading or trailing spaces are also counted as characters; of course, you don't want to count the brackets themselves, so you subtract 2 at the end.
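A quick way to see why the bracket trick matters, and how LEN differs from the DATALENGTH/2 approach mentioned in the question (the literal is chosen just for illustration):
SELECT LEN(N'abc   ')                     AS len_chars,          -- 3: LEN ignores trailing blanks
       DATALENGTH(N'abc   ') / 2          AS datalength_chars,   -- 6: DATALENGTH counts bytes, 2 per nvarchar character
       LEN(N'[' + N'abc   ' + N']') - 2   AS bracket_trick       -- 6: the brackets preserve the trailing blanks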