ORA-01722: invalid number in column with numbers only? - sql

I created a rather simple view to return only rows where there is a number in the CONTRACT_ID column. CONTRACT_ID has data type NUMBER(8).
CREATE OR REPLACE VIEW cid AS
SELECT *
FROM transactions
WHERE contract_id IS NOT NULL
AND LENGTH(contract_id) > 0;
The view works just fine until I scroll down to row ~2950, where I get ORA-01722. The same thing happens if I try to export the data to Excel: my file gets only ~2950 rows instead of the expected ~20k.
Any idea what might be causing this and how to resolve this issue?
Many thanks!

You wrote too much SQL. The following will provide all the results you require:
CREATE OR REPLACE VIEW cid AS
SELECT *
FROM transactions
WHERE contract_id IS NOT NULL
You can't LENGTH() a number - a number is either null or it's a value, so you don't need this kind of check.
Passing a number to LENGTH() will turn it into a string first, i.e. LENGTH(TO_CHAR(numbercolumn)). You don't even need a LENGTH() check for null strings, because to Oracle a NULL string and a zero-length string are equivalent, and calling LENGTH() on an empty string or a NULL returns NULL, not 0. So LENGTH(myNullStr) = 0 doesn't work out: it isn't comparing 0 = 0, it's comparing NULL = 0, and a comparison with NULL is never true.
The only time this seems to cause confusion is when the string columns in the table are CHAR types rather than VARCHAR types, and people forget that assigning an empty string to a CHAR causes it to become space-padded out to the CHAR length, hence no longer a zero-length string.
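A minimal sketch (Oracle) of the LENGTH() behaviour described above; the DUAL query is purely illustrative:
SELECT LENGTH(NULL)     AS len_null,     -- NULL, not 0
LENGTH('')              AS len_empty,    -- also NULL: '' and NULL are the same to Oracle
LENGTH(12345678)        AS len_number,   -- 8: the number is implicitly converted via TO_CHAR first
CASE WHEN LENGTH('') = 0 THEN 'yes' ELSE 'no' END AS zero_len_check  -- never 'yes': NULL = 0 is not true
FROM dual;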

First of all, you should remove the redundant LENGTH() condition; it is pointless. I'm not sure how it could produce such an error, but check whether the error disappears after removing it.
If not, replace the star (*) with explicit column names, say contract_id. If that fixes the error, it points to the error source being in one of the removed columns (for example, a generated column), as in the sketch below.
I cannot imagine the error still being alive after that, but if it is, I would try moving the table into another tablespace and adding to the column list a call to a logging function that stores the ROWIDs of the rows read, to check which row produces the error.
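A minimal sketch of that narrowing step, keeping only the contract_id column from the question (add the other columns back one at a time to find the culprit):
CREATE OR REPLACE VIEW cid AS
SELECT contract_id
FROM transactions
WHERE contract_id IS NOT NULL;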

Related

Select case returning an error when both elements are not varchar in some cases

I wanted to return a value formatted with commas at every thousand if it is a number, or just the value if it isn't a number.
I used the following statement which returned the error:
Conversion failed when converting the nvarchar value '1,000' to data type int.
Declare @QuantityToDelete int = 1000
SELECT CASE
WHEN ISNUMERIC(@QuantityToDelete)=1
THEN format(cast(@QuantityToDelete as int),'N0')
ELSE @QuantityToDelete
END [Result]
I can get it to work by using the following
SELECT CASE
WHEN ISNUMERIC(@QuantityToDelete)=1
THEN format(cast(@QuantityToDelete as int),'N0')
ELSE cast(@QuantityToDelete as varchar)
END [Result]
Result=1,000
Why doesn't the first example work, when the ELSE @QuantityToDelete part of the statement isn't returned?
If I use the following, switching the logic condition:
SELECT CASE
WHEN ISNUMERIC(@QuantityToDelete)=0
THEN format(cast(@QuantityToDelete as int),'N0')
ELSE @QuantityToDelete
END [Result]
Result=1000
That is expected, but why is there no error? The case statement still has mismatched return types, an nvarchar and an int, as in the first example, just with different logic.
The important point to note is that a case expression returns a single scalar value, and that value has a single data type.
A case expression is fixed, it must evaluate the same and work the same for that query at runtime no matter what data flows through the query - in other words, the result of the case expression cannot be an int for some rows and a string for others.
Remember that the result of a query can be thought of, and used as, a table - so just like a table, where you define a column as being a specific data type, you cannot have a column where the data type differs between rows of data.
Therefore with a case expression, SQL Server must determine at compile time what the resulting data type will be, which it does (if necessary) using data type precedence. If the case expression has different data types returned in different execution paths then it will attempt to implicitly cast them to the type with the highest precedence.
Hence your case expression that attempts to return two different data types fails: it is trying to return both an nvarchar and an int, and SQL Server is implicitly casting the nvarchar value to an int - and failing.
The second one works because you are controlling the casting and both paths result in the same varchar data type which works fine.
Also note that when defining a varchar it's good practice to define its length as well. It's easy to get complacent, as it works here because the default length is 30 when casting; however, the default is 1 otherwise.
See the relevant part of the documentation
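A minimal sketch of the working approach, reusing the variable from the question: FORMAT() returns nvarchar, so the ELSE branch is also cast to a string with an explicit length instead of staying an int:
DECLARE @QuantityToDelete int = 1000
SELECT CASE
WHEN ISNUMERIC(@QuantityToDelete) = 1
THEN FORMAT(@QuantityToDelete, 'N0')            -- nvarchar: '1,000'
ELSE CAST(@QuantityToDelete AS varchar(20))     -- a string too, with an explicit length
END [Result]                                    -- both branches are strings, so no int conversion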

array_length() and cardinality() of an empty array return one as the array size

I have a table like :
CREATE TABLE psdc_psr
(
id bigint not null,
available_region_code character varying(255)[],
is_valid bigint
)
I want to use array_length(), cardinality() or available_region_code = '{}' to select rows with an empty available_region_code, but it fails; the length returned is one.
Why does this happen and how can I solve this problem?
I don't know what causes this problem. Maybe, as jjanes says in the comments, the array contains a non-printing character.
In the end, I used (CAST(ARRAY_TO_JSON(available_region_code) AS VARCHAR) IN ('[null]', '[""]')) to check for an empty array, which I learned from Zone's answer to the question "How to check if an array is empty in Postgres".
For a similar issue I had in Presto-based SQL, I created a case statement like:
case when available_region_code[1] = '' then 0 else cardinality(available_region_code) end as cardinality
Not sure if this helps in your use case, but it resulted in 0's for my empty arrays, and the correct counts for all others.
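For what it's worth, a minimal sketch (PostgreSQL) of the likely situation: a truly empty array has cardinality 0, while an array holding a single blank element, which prints as if it were empty, has cardinality 1 and is exactly the case the ARRAY_TO_JSON check above catches:
SELECT cardinality('{}'::varchar[])                      AS truly_empty,  -- 0
cardinality('{""}'::varchar[])                           AS one_blank,    -- 1
CAST(array_to_json('{""}'::varchar[]) AS varchar)        AS as_json;      -- '[""]'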

Is ISNULL() specific to integers?

This has been bothering me continuously in my coding and I can't seem to google a good workaround.
I have a number of columns which are data type nvarchar(255). Pretty standard I would assume.
Anyway, I want to run:
DELETE FROM Ranks WHERE ISNULL(INST,0) = 0
where INST is nvarchar(255). I am thrown the error:
Conversion failed when converting the nvarchar value 'Un' to data type int.
which is the first non-null value in the column. However, I don't care about that; the error showing me a value means it's not null, right? I just want to delete the nulls!
Is there something simple I'm missing?
Any help would be fab!
An expression may only be of one type.
Expression ISNULL(INST,0) involves two source types, nvarchar(255) and int. However, no type change happens at this point, because ISNULL is documented to return the type of its first argument (nvarchar), and will convert the second argument to that type if needed, so the entire original expression is equivalent to ISNULL(INST, '0').
Next step is the comparison expression, ISNULL(INST, '0') = 0. It again has nvarchar(255) and int as the source data types, but this time nothing can stop the conversion - in fact, it must happen for the comparison operator, =, to even work. According to the data type precedence list, the int wins, and is chosen as the resulting type of the comparison expression. Hence all values from column INST must be converted to int before the comparison = 0 is made.
If you "just want to delete the nulls", then just delete the nulls:
DELETE FROM Ranks WHERE INST IS NULL
If for some reason you absolutely have to use ISNULL in this fashion (there is no real reason for it), then you should at least stay in the realm of strings:
DELETE FROM Ranks WHERE ISNULL(INST, '') = ''
That would have deleted null entries and entries with empty strings (''), just like the WHERE ISNULL(INST, 0) = 0 would have deleted null entries and entries with '0's if all values in INST could have been converted to int.
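A minimal sketch (T-SQL) of the two steps described above; the @INST variable is a hypothetical stand-in for a value from the INST column:
DECLARE @INST nvarchar(255) = N'Un'    -- the value from the error message
SELECT ISNULL(@INST, 0)                -- fine: returns N'Un'; the int 0 becomes nvarchar '0'
SELECT 1 WHERE ISNULL(@INST, 0) = 0    -- fails: int wins the comparison, so N'Un' must be converted to int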
With ISNULL(INST,0) you are saying: If the string INST is null, replace it with the string 0. But 0 isn't a string, so this makes no sense.
With WHERE ISNULL(INST,0) = 0 you'd access all rows where INST is either NULL or 0 (but as mentioned a string is not an integer).
So what do you want to achieve? Delete all rows where INST is null? That would be
DELETE FROM ranks WHERE inst IS NULL;

STUFF function in SQL returns null?

I have a specific column in a table; it should contain only numbers stored as nvarchar with a length of 3. Unfortunately, some users wrote '12' when they should have written '012'. There was not enough validation at the time.
I need to fix that. Here is the logic I used:
UPDATE [Mandats_Approvisionnement].[dbo].[ARTICLE_ECOLE]
SET [UNIT_ADM] = STUFF(UNIT_ADM, 0, 0, '0')
WHERE LEN(UNIT_ADM) = 2;
The error goes like:
Cannot insert the value NULL into column 'UNIT_ADM', table
'Mandats_Approvisionnement.dbo.ARTICLE_ECOLE'; column does not allow
nulls. UPDATE fails.
I can't see where the problem is. I verified that all the records contain at least 2 characters, so the STUFF function cannot return null, and there are no NULL records in that table column [unit_adm]... How do I make it work?
It should be STUFF(UNIT_ADM, 1, 0, '0'), as STUFF returns null if the start position is 0.
Citing the documentation:
If the start position or the length is negative, or if the starting
position is larger than length of the first string, a null string is
returned. If the start position is 0, a null value is returned.
You could make this simpler by using
right('0' + UNIT_ADM, 3)
instead of stuff.
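A minimal sketch of that simpler variant applied to the UPDATE from the question; it left-pads anything shorter than three characters:
UPDATE [Mandats_Approvisionnement].[dbo].[ARTICLE_ECOLE]
SET [UNIT_ADM] = RIGHT('00' + [UNIT_ADM], 3)   -- '12' becomes '012', '1' becomes '001'
WHERE LEN([UNIT_ADM]) < 3;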

Converting char to integer in INSERT using IIF and SIMILAR TO

I am using an INSERT statement to convert a BDE table (source) to a Firebird table (destination) using IB DataPump, so the INSERT statement is fed by source table values via parameters. One of the source field parameters is alphanumeric (SOURCECHAR10 char(10)); it holds mostly integers and needs to be converted to an integer in the (integer type) destination column NEWINTFLD. If SOURCECHAR10 is not numeric, I want to assign 0 to NEWINTFLD.
I use IIF and SIMILAR TO to test whether the string is numeric, and assign 0 if it is not numeric, as follows:
INSERT INTO "DEST_TABLE" (......, "NEWINTFLD",.....)
VALUES(..., IIF( :"SOURCECHAR10" SIMILAR TO '[[:DIGIT:]]*', :"SOURCECHAR10", 0),..)
For every non-numeric string, however, I still get conversion errors (DSQL error code = -303).
I tested with only constants in the IIF result fields, like IIF(:"SOURCECHAR10" SIMILAR TO '[[:DIGIT:]]*', 1, 0), and that works fine, so somehow the :"SOURCECHAR10" in the true result field of the IIF generates the error.
Any ideas how to get around this?
When your query is executed, the parser will notice that the second use of :"SOURCECHAR10" is in a place where an integer is expected. Therefore it will always convert the contents of :"SOURCECHAR10" into an integer for that position, even though it is not used if the string is non-integer.
In reality Firebird does not see :"SOURCECHAR10" as a named parameter; your connection library converts it into two separate parameter placeholders (?), and the type of the second placeholder will be INTEGER. So the conversion happens before the actual query is executed.
The solution is probably (I didn't test it, might contain syntax errors) to use something like (NOTE: see second example for correct solution):
CASE
WHEN :"SOURCECHAR10" SIMILAR TO '[[:DIGIT:]]*'
THEN CAST(:"SOURCECHAR10" AS INTEGER)
ELSE 0
END
This doesn't work, as it is interpreted as a cast of the parameter itself; see the CAST() item 'Casting input fields'.
If this does not work, you could also attempt to add an explicit cast to VARCHAR around :"SOURCECHAR10" to make sure the parameter is correctly identified as being VARCHAR:
CASE
WHEN :"SOURCECHAR10" SIMILAR TO '[[:DIGIT:]]*'
THEN CAST(CAST(:"SOURCECHAR10" AS VARCHAR(10) AS INTEGER)
ELSE 0
END
Here the inner cast is applied to the parameter itself; the outer cast is applied when the CASE expression evaluates to true.
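As a quick standalone check of that shape, here is a sketch with a literal in place of the :"SOURCECHAR10" parameter, so no parameter typing is involved:
SELECT CASE
WHEN '123' SIMILAR TO '[[:DIGIT:]]*'
THEN CAST(CAST('123' AS VARCHAR(10)) AS INTEGER)
ELSE 0
END AS newintfld
FROM RDB$DATABASE;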