PostgreSQL - casting varchar to int?

I'm trying to execute the following query:
DELETE FROM table_name WHERE ID>9;
But I can't, since the ID column is of type varchar.
How can I cast it to int so the comparison behaves numerically (deleting all rows whose ID is greater than 9, rather than comparing the values as strings)?

I don't know why you would store a numeric value as a string. You might want something more like:
DELETE FROM table_name
WHERE ID ~ '^[1-9][0-9]';
This deletes every row whose id starts with two digits, the first of which is not 0. (The `~` operator is PostgreSQL's boolean regex match, which is what a WHERE clause needs; regexp_matches() returns a set of text arrays, not a boolean.) If you attempt a conversion instead, you might get a conversion error if not all ids are numbers. This approach also works for long numbers that would overflow an int (although casting to numeric would fix that problem).
EDIT:
For a general solution, I think I would write it as:
where (case when id ~ '^[0-9]+$'
            then id::numeric
       end) > 70000
The case should prevent any error on non-numeric ids.
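Putting the pieces together, a minimal sketch of the original DELETE using this guarded cast (same table_name and ID column as in the question):

```sql
-- Delete rows whose ID is numerically greater than 9; non-numeric ids
-- fall through the CASE as NULL instead of raising a conversion error.
DELETE FROM table_name
WHERE (CASE WHEN ID ~ '^[0-9]+$'   -- only all-digit values are cast
            THEN ID::numeric
       END) > 9;
```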

Related

Case when to transform string to int if it is not already an int

So, I have the problem that I have Teradata SQL code which is quite flexible through macro variables, so the input can be different kinds of tables. In those tables there is always the information s_id, but sometimes the s_id is a string and sometimes an int. So I want to transform the strings to an int, and if it is already an int it should just copy the value.
SELECT DISTINCT &i as F_ID,
/*In different Tables the Sid can appear as an string or integer. This CASE WHEN transforms a string into a Integer in case of a not numeric ID*/
CASE WHEN TYPE(data.&columnSId.) not like '%CHAR%' and to_number(data.&columnSId.) is NULL
THEN data.&columnSId.
WHEN TYPE(data.&columnSId.) like '%CHAR%'
THEN CAST(to_number(data.&columnSId.) as FLOAT)
ELSE 0
END as s_id
FROM database.table
However, somehow in cases when the s_id is already an int, it nevertheless tries to use the function to_number, which fails with this error:
Teradata execute: Function 'to_number' called with an invalid number
or type of parameters.
Do you know how I can prevent this error? Why does it even evaluate the other branch when the first condition is already true?
It's a parse-time error: Teradata tries to resolve To_Number against the numeric column before any CASE branch is chosen. I don't think you can do what you want without dynamic SQL.
But this returns a similar result by casting a numeric value to a string first:
Coalesce(Cast(To_Number(Trim(data.&columnSId.)) AS INT), 0)
Of course it's not very efficient for larger tables.
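A sketch of the full statement with that workaround, assuming the same macro variables, table, and alias as in the question, and assuming Teradata's Trim implicitly casts a numeric argument to character first:

```sql
SELECT DISTINCT &i AS F_ID,
       -- Trim forces the value to character whether the source column is
       -- CHAR or numeric, so To_Number never sees a numeric argument;
       -- Coalesce maps non-numeric strings (To_Number yields NULL) to 0.
       Coalesce(Cast(To_Number(Trim(data.&columnSId.)) AS INT), 0) AS s_id
FROM database.table data;
```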

Conditional casting of column datatype

I have a subquery that returns a varchar column. In some cases this column contains only numeric values, and in those cases I need to cast the column to bigint. I've tried using a CAST(CASE ...) construction, but CASE is an expression that returns a single result, and regardless of the path taken it always needs to produce the same data type (or one implicitly convertible to it). Is there any tricky way to change a column's datatype depending on a condition in PostgreSQL, or not? Google can't help me.
SELECT
prefix,
module,
postfix,
id,
created_date
FROM
(SELECT
s."prefix",
coalesce(m."replica", to_char(CAST((m."id_type" * 10 ^ 12) AS bigint) + m."id", 'FM0000000000000000')) "module",
s."postfix",
s."id",
s."created_date"
FROM some_subquery) s
There is really no way to do what you want.
A SQL query returns a fixed set of columns, with the names and types being fixed. So, a priori what you want to do does not fit well within SQL.
You could work around this, by inventing your own type, that is either a big integer or a string. You could store the value as JSON. But those are work-arounds. The SQL query itself is really returning one "type" for each column; that is how SQL works.
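One such workaround, sketched below with hypothetical column names (`id`, `val`): normalize the column to text in the subquery so every row has one type, and derive a bigint column only for the all-digit values:

```sql
SELECT id,
       -- one uniform type for the column: text
       val::text AS val_text,
       -- cast back only where the value is all digits; others become NULL
       CASE WHEN val ~ '^[0-9]+$' THEN val::bigint END AS val_bigint
FROM some_subquery;
```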

Error unable to convert data type nvarchar to float

I have searched both this great forum and googled around but unable to resolve this.
We have two tables (and trust me I have nothing to do with these tables). Both tables have a column called eventId.
However, in one table, data type for eventId is float and in the other table, it is nvarchar.
We are selecting from table1, where eventId is defined as float, and saving that id into table2, where eventId is defined as nvarchar(50).
As a result of the discrepancy in data types, we are getting an error converting data type nvarchar to float.
Without fooling around with the database, I would like to cast the eventId to get rid of this error.
Any ideas what I am doing wrong with the code below?
SELECT
CAST(CAST(a.event_id AS NVARCHAR(50)) AS FLOAT) event_id_vre,
The problem is most likely because some of the rows have event_id that is empty. There are two ways to go about solving this:
Convert your float to nvarchar, rather than the other way around. This conversion will always succeed; the only problem is if the textual representations differ (say, the table with float-as-nvarchar uses fewer decimal digits).
Add a condition to check for empty IDs before the conversion. This may not work if some of the event IDs are non-empty strings that are not float-convertible either (e.g. there's a word in the field instead of a number).
The second solution would look like this:
SELECT
case when a.event_id <> ''
then cast(cast(a.event_id as nvarchar(50)) as float)
else 0.0
end as event_id_vre,
Convert float to nvarchar instead of nvarchar to float. Of course!
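The first solution, sketched (column name as in the question; `table1` and `table2` are assumed names):

```sql
-- float -> nvarchar always succeeds, whereas nvarchar -> float fails
-- on empty or non-numeric rows, so do the conversion in this direction.
INSERT INTO table2 (eventId)
SELECT CAST(a.event_id AS NVARCHAR(50))
FROM table1 a;
```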

UNION causes "Conversion failed when converting the varchar value to int"

I tried to search for previous articles related to this, but I can't find one specific to my situation. And because I'm brand new to StackOverflow, I can't post pictures so I'll try to describe it.
I have two datasets. One is 34 rows, 1 column of all NULLs. The other 13 rows, 1 column of varchars.
When I try to UNION ALL these two together, I get the following error:
Conversion failed when converting the varchar value to data type int.
I don't understand why I'm getting this error. I've UNIONed many NULL columns and varchar columns before, among many other types and I don't get this conversion error.
Can anyone offer suggestions why this error occurs?
The error occurs because you have corresponding columns in the two of the subqueries where the type of one is an integer and the type of the other is a character. Then, the character value has -- in at least one row -- a value that cannot be automatically converted to an integer.
This is easy to replicate:
select t.*
from (select 'A' as col union all
select 1
) t;
SQL Server uses pretty sophisticated type precedence rules for determining the destination type in a union. In practice, though, it is best to avoid using implicit type conversions. Instead, explicitly cast the columns to the type you intend.
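Following that advice, a minimal sketch of the explicit-cast fix for the failing example above:

```sql
-- Explicitly cast both branches to the same type so SQL Server's
-- type-precedence rules never have to convert 'A' to int.
SELECT t.*
FROM (SELECT CAST('A' AS varchar(10)) AS col
      UNION ALL
      SELECT CAST(1 AS varchar(10))
     ) t;
```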
EDIT:
The situation with NULL values is complicated. By itself, the NULL value has no type. So, the following works fine:
select NULL as col
union all
select 'A';
If you type the NULL, then the query will fail:
select cast(NULL as int) as col
union all
select 'A';
Also, if you put SQL Server in a position where it has to assign a type, then SQL Server will make the NULL an integer. Every column in a table or result set needs a type, so this will also fail:
select (select NULL) as col
union all
select 'A';
Perhaps your queries are doing something like this.
I have also encountered this error when I accidentally had the fields out of sequence in the two SELECT queries that I was unioning. Adjusting the fields' sequence fixed the problem.

Can I pass a number for varchar2 in Oracle?

I have an Oracle table and a column (col1) of type varchar2(12 byte). It has one row, and the value of col1 is 1234.
When I say
select * from table where col1 = 1234
Oracle says invalid number. Why is that? Why can I not pass a number when the column is varchar2?
EDIT: All the responses are great. Thank you. But I am not able to understand why it does not take 1234 when 1234 is a valid varchar2 datatype.
The problem is that you expect that Oracle will implicitly cast 1234 to a character type. To the contrary, Oracle is implicitly casting the column to a number. There is a non-numeric value in the column, so Oracle throws an error. The Oracle documentation warns against implicit casts just before it explains how they will be resolved. The rule which explains the behaviour you're seeing is:
When comparing a character value with a numeric value, Oracle converts the character data to a numeric value.
So it is much better to convert the number to char rather than the column to a number:
select *
from table
where col1 = to_char(1234)
When a col1 value does not look like a number, the implicit to_number raises an error and stops the query; to_char on the constant avoids that conversion entirely.
Oracle says invalid number. Why is that? Why I cannot pass a number when it is varchar2?
Oracle does an implicit conversion from character type of col1 to number, since you're comparing it as a number.
Also, you assume that 1234 is the only row that's being fetched. In reality, Oracle has to fetch all rows from the table, and then filter out as per the where clause. Now there's a character value in col1 that's being fetched before it encounters your 1234 row & that's causing the error, since the character cannot be converted to a number.
This fiddle shows the behaviour. Since abc cannot be converted to a number, you get that error message.
Now if the only records in the table have numeric characters in col1, you'll see that the statement works fine.
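A minimal sketch reproducing both cases (hypothetical table name `t`):

```sql
CREATE TABLE t (col1 VARCHAR2(12 BYTE));
INSERT INTO t VALUES ('abc');
INSERT INTO t VALUES ('1234');

-- Implicit conversion: Oracle applies TO_NUMBER to every fetched col1
-- value, and 'abc' raises ORA-01722: invalid number.
SELECT * FROM t WHERE col1 = 1234;

-- Safe form: compare as strings; the column is never converted.
SELECT * FROM t WHERE col1 = TO_CHAR(1234);
```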