SQL RAISERROR: specifying parameters

When I read the MSDN example for RAISERROR:
RAISERROR (N'This is message %s %d.', -- Message text.
10, -- Severity,
1, -- State,
N'number', -- First argument.
5); -- Second argument.
-- The message text returned is: This is message number 5.
GO
Why does the doc use %s for the N'number' argument and %d for the 5 (the second argument)?
The MSDN doc puts it like this:
For example, in the following RAISERROR statement, the first argument of N'number' replaces the first conversion specification of %s; and the second argument of 5 replaces the second conversion specification of %d.
My question is: how is this explained? Why not use something else, like %a or %b - couldn't any other %+alpha work in its place? I just want a meaningful understanding.

This represents the parameter datatype.
+--------------------+----------------------+
| Type specification | Represents |
+--------------------+----------------------+
| d or i | Signed integer |
| o | Unsigned octal |
| s | String |
| u | Unsigned integer |
| x or X | Unsigned hexadecimal |
+--------------------+----------------------+
N'number' is an nvarchar string literal, so it gets the %s specifier. And the literal 5 is a signed integer, so it is represented by %d.
As for the reason for these specifiers: this is documented in the RAISERROR topic:
These type specifications are based on the ones originally defined for
the printf function in the C standard library
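As a hedged illustration of the other specifiers in the table above (assuming SQL Server's documented printf-style behavior), the same integer value can be rendered in several bases:

```sql
-- Sketch: one integer argument rendered per specifier.
-- %d = signed decimal, %x = lowercase hexadecimal, %o = unsigned octal.
RAISERROR (N'decimal %d, hex %x, octal %o.', 10, 1, 255, 255, 255);
-- Should print: decimal 255, hex ff, octal 377.
```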


Extras zeroes getting appended during round to two decimal point in postgres

I am writing a postgres procedure. In that I want to round a floating point number to two decimal point and then insert that into a table. I have written the below code.
vara float8;
SELECT round( float8 '3.1415927', 2 ) into vara;
insert into dummyTable(columnA) values (vara);
I want the columnA to contain the value 3.14. However the value which is getting inserted is 3.1400.
The data type of columnA is float8 with precision and scale 17.
There is only one round function with two arguments, and it takes numeric arguments:
test=> \df round
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+-------+------------------+---------------------+------
pg_catalog | round | double precision | double precision | func
pg_catalog | round | numeric | numeric | func
pg_catalog | round | numeric | numeric, integer | func
(3 rows)
So what happens is that your float8 is converted to numeric (without scale), and the result is of the same type. This will not produce trailing zeros:
test=> SELECT round(3.14159265, 2);
round
-------
3.14
(1 row)
But if you store the result in a float8 column, you may get rounding errors:
test=> SET extra_float_digits = 3;
SET
test=> SELECT round(3.14159265, 2)::float8;
round
---------------------
3.14000000000000012
(1 row)
My recommendation is to use numeric(10,2) or something similar as the data type of the table column, then the rounding will happen automatically, and the value can never have more than two decimal places.
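A minimal sketch of that recommendation, reusing the hypothetical dummyTable from the question:

```sql
-- Sketch: a numeric(10,2) column rounds on assignment,
-- so no explicit round() call is needed at insert time.
CREATE TABLE dummyTable (columnA numeric(10,2));
INSERT INTO dummyTable (columnA) VALUES (3.1415927);
SELECT columnA FROM dummyTable;  -- stored and returned as 3.14
```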

How to do a count of fields in SQL with wrong datatype

I am trying to import legacy data from another system into our system. The problem I am having is that the legacy data is dirty- very dirty! We have a field which should be an integer, but sometimes is a varchar, and the field is defined as a varchar...
In SQL Server, how can I do a select to show those records where the data is varchar instead of int?
Thanks
If you want to find rows¹ where a column contains any non-digit characters or is longer than 9 characters (either condition means we cannot assume it would fit in an int), use something like:
SELECT * FROM Table WHERE LEN(ColumnName) > 9 or ColumnName LIKE '%[^0-9]%'
Note that there's a negation in the LIKE condition - we're trying to find a string that contains at least one non-digit character.
A more modern approach would be to use TRY_CAST or TRY_CONVERT. But note that a failed conversion returns NULL and NULL is perfectly valid for an int!
SELECT * FROM Table WHERE ColumnName is not null and try_cast(ColumnName as int) is null
ISNUMERIC isn't appropriate. It answers a question nobody has ever wanted to ask (IMO) - "Can this string be converted to any of the numeric data types (I don't care which ones and I don't want you to tell me which ones either)?"
ISNUMERIC('$,,,,,,,.') is 1. That should tell you all you need to know about this function.
¹ If you just want a count, as per the title of the question, then substitute COUNT(*) for *.
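The ISNUMERIC quirk and the TRY_CAST alternative described above can be seen side by side - a sketch, using the example string from the answer:

```sql
-- Sketch: ISNUMERIC accepts currency/sign noise; TRY_CAST rejects it.
SELECT ISNUMERIC('$,,,,,,,.')       AS isnumeric_says,  -- 1
       TRY_CAST('$,,,,,,,.' AS int) AS try_cast_says;   -- NULL
```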
In SQL Server, how can I do a select to show those records where the data is varchar instead of int?
I would do it like
CREATE TABLE T
(
Data VARCHAR(50)
);
INSERT INTO T VALUES
('102'),
(NULL),
('11Blah'),
('5'),
('Unknown'),
('1ThinkPad123'),
('-11');
SELECT Data -- Per the title: COUNT(Data)
FROM
(
    SELECT Data,
           cast('' as xml).value('sql:column("Data") cast as xs:int ?', 'int') AS Result
    FROM T -- You can add WHERE Data IS NOT NULL to exclude NULLs
) TT
WHERE Result IS NULL;
Returns:
+----+--------------+
| | Data |
+----+--------------+
| 1 | NULL |
| 2 | 11Blah |
| 3 | Unknown |
| 4 | 1ThinkPad123 |
+----+--------------+
That is if you can't use the TRY_CAST() function. If you are working on a 2012+ version, I'd recommend that you use TRY_CAST() like
SELECT Data
FROM T
WHERE Data IS NOT NULL
AND
TRY_CAST(Data AS INT) IS NULL;
Finally, I would say do not use the ISNUMERIC() function, because (from the docs) ...
Note
ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($). For a complete list of currency symbols, see money and smallmoney (Transact-SQL).

Group terminals into set

What does this warning mean ?
How do I solve it ?
Here is the code I am referring to
expression : expression operator=DIV expression
| expression operator=MUL expression
| expression operator=ADD expression
| expression operator=SUB expression
| INT
| FLOAT
| BOOLEAN
| NULL
| ID
;
The ANTLR 4 parser generator can combine groups of transitions to form a single "set transition" in certain cases, reducing static and dynamic memory overhead as well as improving runtime performance. This can only occur if all alternatives of a block contain a single element or set. For example, the following code allows INT and FLOAT to be combined into a single transition:
// example 1
number
: INT
| FLOAT
;
// example 2, elements grouped into a set
primary
: '(' expression ')'
| (INT | FLOAT)
;
However, in the following situation the elements cannot be combined by the compiler so they'll be treated separately:
primary
: '(' expression ')'
| INT
| FLOAT
;
The hint suggests places where the simple addition of ( ... ) is enough to allow the compiler to collapse a set that it would otherwise not be able to. Altering your code to the following would address the warning.
expression
: expression operator=DIV expression
| expression operator=MUL expression
| expression operator=ADD expression
| expression operator=SUB expression
| ( INT
| FLOAT
| BOOLEAN
| NULL
| ID
)
;

PostgreSQL ERROR: function to_tsvector(character varying, unknown) does not exist

This psql session snippet should be self-explanatory:
psql (9.1.7)
Type "help" for help.
=> CREATE TABLE languages(language VARCHAR NOT NULL);
CREATE TABLE
=> INSERT INTO languages VALUES ('english'),('french'),('turkish');
INSERT 0 3
=> SELECT language, to_tsvector('english', 'hello world') FROM languages;
language| to_tsvector
---------+---------------------
english | 'hello':1 'world':2
french | 'hello':1 'world':2
turkish | 'hello':1 'world':2
(3 rows)
=> SELECT language, to_tsvector(language, 'hello world') FROM languages;
ERROR: function to_tsvector(character varying, unknown) does not exist
LINE 1: select language, to_tsvector(language, 'hello world')...
^
HINT: No function matches the given name and argument types.
You might need to add explicit type casts.
The problem is that the Postgres function to_tsvector doesn't like the varchar field type - but this call should be perfectly correct according to the documentation, shouldn't it?
Use an explicit type cast:
SELECT language, to_tsvector(language::regconfig, 'hello world') FROM languages;
Or change the column languages.language to type regconfig. See @Swav's answer.
Why?
Postgres allows function overloading. Function signatures are defined by their (optionally schema-qualified) name plus (the list of) input parameter type(s). The 2-parameter form of to_tsvector() expects type regconfig as first parameter:
SELECT proname, pg_get_function_arguments(oid)
FROM pg_catalog.pg_proc
WHERE proname = 'to_tsvector';
proname | pg_get_function_arguments
-------------+---------------------------
to_tsvector | text
to_tsvector | regconfig, text -- you are here
If no existing function matches exactly, the rules of Function Type Resolution decide the best match - if any. This is successful for to_tsvector('english', 'hello world'), with 'english' being an untyped string literal. But fails with a parameter typed varchar, because there is no registered implicit cast from varchar to regconfig. The manual:
Discard candidate functions for which the input types do not match and
cannot be converted (using an implicit conversion) to match. unknown
literals are assumed to be convertible to anything for this purpose.
Bold emphasis mine.
The registered casts for regconfig:
SELECT castsource::regtype, casttarget::regtype, castcontext
FROM pg_catalog.pg_cast
WHERE casttarget = 'regconfig'::regtype;
castsource | casttarget | castcontext
------------+------------+-------------
oid | regconfig | i
bigint | regconfig | i
smallint | regconfig | i
integer | regconfig | i
Explanation for castcontext:
castcontext char
Indicates what contexts the cast can be invoked
in. e means only as an explicit cast (using CAST or :: syntax). a
means implicitly in assignment to a target column, as well as
explicitly. i means implicitly in expressions, as well as the other cases.
Read more about the three cast contexts in the chapter "CREATE CAST".
Alternative approach to Erwin Brandstetter's answer
You could define your language column to be of type regconfig which would make your query a bit less verbose i.e.:
CREATE TABLE languages(language regconfig NOT NULL DEFAULT 'english'::regconfig)
I have set english as default above, but that's not required. Afterwards your original query
SELECT language, to_tsvector(language, 'hello world') FROM languages;
would work just fine.
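To see which names a regconfig value can actually hold - and thus which strings are safe to store in such a column - one can query the system catalog. A hedged sketch:

```sql
-- Sketch: text search configurations known to this cluster;
-- 'english', 'french', and 'turkish' should all appear in a default install.
SELECT cfgname FROM pg_ts_config ORDER BY cfgname;
```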

Finding MySQL errors from LOAD DATA INFILE

I am running a LOAD DATA INFILE command in MySQL and one of the files is showing errors at the mysql prompt.
How do I check the warnings and errors? Right now the only thing I have to go by is the fact that the prompt reports 65,535 warnings on import.
mysql> use dbname;
Database changed
mysql> LOAD DATA LOCAL INFILE '/dump.txt'
-> INTO TABLE table
-> (id, title, name, accuracy);
Query OK, 897306 rows affected, 65535 warnings (16.09 sec)
Records: 897306 Deleted: 0 Skipped: 0 Warnings: 0
How do I get mysql to show me what those warnings are? I looked in the error log but I couldn't find them. Running the "SHOW WARNINGS" command only returned 64 results which means that the remaining 65,000 warnings must be somewhere else.
| Warning | 1366 | Incorrect integer value: '' for column 'accuracy' at row 20382 |
| Warning | 1366 | Incorrect integer value: '' for column 'accuracy' at row 20383 |
| Warning | 1366 | Incorrect integer value: '' for column 'accuracy' at row 20384 |
| Warning | 1366 | Incorrect integer value: '' for column 'accuracy' at row 20386 |
| Warning | 1366 | Incorrect integer value: '' for column 'accuracy' at row 20387 |
+---------+------+----------------------------------------------------------------+
64 rows in set (0.00 sec)
How do I find these errors?
The MySQL SHOW WARNINGS command only shows you a subset of the warnings. You can change the limit of warnings shown by modifying the parameter max_error_count.
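A sketch of that adjustment (max_error_count defaulted to 64 in MySQL versions of that era, which would explain seeing exactly 64 rows; the warnings buffer is repopulated on each statement, so the load must be re-run after raising it):

```sql
-- Sketch: raise the warning cap, re-run the load, then page through the messages.
SET SESSION max_error_count = 65535;
LOAD DATA LOCAL INFILE '/dump.txt'
INTO TABLE table
(id, title, name, accuracy);
SHOW WARNINGS LIMIT 100;        -- first 100 stored warnings
SHOW WARNINGS LIMIT 100, 100;   -- next 100
```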
Getting that many errors suggests that you have the wrong delimiter or extraneous quote marks that are making MySQL read the wrong columns from your input.
You can probably fix that by adding
[{FIELDS | COLUMNS}
[TERMINATED BY 'string']
[[OPTIONALLY] ENCLOSED BY 'char']
[ESCAPED BY 'char']
]
[LINES
[STARTING BY 'string']
[TERMINATED BY 'string']
]
after the tablename and before the column list.
Something like:
LOAD DATA LOCAL INFILE '/dump.txt'
INTO TABLE table
fields terminated by ' ' optionally enclosed by '"'
(id, title, name, accuracy);
By default, if you don't specify this, MySQL expects the tab character to terminate fields.
There could be a blank entry in the data file, and the target table doesn't allow null values, or doesn't have a valid default value for the field in question.
I'd check that the table has a default for accuracy - and if it doesn't, set it to zero and see if that clears up the errors.
Or you could pre-process the file with 'awk' or similar and ensure there is a valid numeric value for the accuracy field in all rows.
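For the blank-entry case described above, LOAD DATA's user-variable form can normalize empty strings during the import itself, avoiding the pre-processing step - a sketch using the question's columns:

```sql
-- Sketch: read accuracy into a variable, then convert '' to NULL
-- (or substitute 0) on the fly via the SET clause.
LOAD DATA LOCAL INFILE '/dump.txt'
INTO TABLE table
(id, title, name, @acc)
SET accuracy = NULLIF(@acc, '');
```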