Oracle: add column with value based on condition - SQL

I would like to add a column "tag" based on the value of "LEASE_ID_count" in Oracle.
But I get this error:
value too large for column "CUSTOM_LIFETIME_VALUE_TAG"."tag" (actual: 7, maximum: 3), caused by: OracleDatabaseException: ORA-12899: value too large for column "CUSTOM_LIFETIME_VALUE_TAG"."tag" (actual: 7, maximum: 3)
select "COMPANY_CODE", "LEASE_ID_count",
(CASE WHEN "LEASE_ID_count" IN ('3','4', '5') THEN '3 à 5vh' WHEN "LEASE_ID_count" ='1' THEN '1vh' WHEN "LEASE_ID_count" ='2' THEN '2vh' END) "tag"
from "CUSTOM_LIFETIME_VALUE_TESR"
Any idea how to fix this, please? Thanks.

This is too long for a comment. The error message is referring to "CUSTOM_LIFETIME_VALUE_TAG"."tag". This is from a table that has no obvious reference in the query. Okay, perhaps CUSTOM_LIFETIME_VALUE_TESR is a view that references that table. That is possible.
However, the error message is about storing data into that column, not referencing it. So, my best guess is that you have a query like this:
INSERT INTO CUSTOM_LIFETIME_VALUE_TAG (COMPANY_CODE, LEASE_ID_count, tag)
<your select here>;
And the column tag in this table is defined as 3 characters. Clearly, '3 à 5vh' has 7 characters, which is more than 3. Hence the error.
Oracle does have a lot of functionality lurking around. Even so, it is hard for me to think of how a SELECT could cause this error with no DML involved.
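If the failing statement really is an INSERT like the one above, a minimal fix (a sketch, assuming you control the table definition and that CUSTOM_LIFETIME_VALUE_TAG is indeed the insert target) is to widen the column so the longest tag fits:
-- Widen "tag" so that '3 à 5vh' (7 characters, 8 bytes in AL32UTF8) fits.
-- CHAR length semantics guard against multi-byte characters such as 'à'.
ALTER TABLE "CUSTOM_LIFETIME_VALUE_TAG" MODIFY ("tag" VARCHAR2(20 CHAR));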
As Alex Poole very correctly notes: write the queries without double quotes. Quoted identifiers just make queries harder to write and read.

Related

How to get average of multiple variables

Table name: products,
column names: rice_price, sugar_price
I would like to get the average of both columns separately. For example;
SELECT
    AVG(rice_price) avg_rice,
    AVG(sugar_price) avg_sugar
FROM
    products
If I run this query on SQL Server, I get the message below:
Msg 8117, Level 16, State 1, Line 4
Operand data type nvarchar is invalid for avg operator.
What could be the solution?
If most of the values look like numbers, you could use the following, which excludes the ones that don't convert cleanly by treating them as null:
SELECT
    AVG(try_convert(numeric(18,4), rice_price)) avg_rice,
    AVG(try_convert(numeric(18,4), sugar_price)) avg_sugar
FROM
    products
But you should change your datatypes, as has been pointed out in the comments. This kind of query will help you discover the values that aren't good:
SELECT *
FROM products
WHERE rice_price IS NOT NULL
  AND try_convert(numeric(18,4), rice_price) IS NULL

SELECT *
FROM products
WHERE sugar_price IS NOT NULL
  AND try_convert(numeric(18,4), sugar_price) IS NULL
The ISNUMERIC function can work for this too, but I find I have switched to using TRY_CONVERT in this situation because it feels more flexible - I can use whatever data type I need.
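Once the offending rows are fixed, the datatype change itself could look like this (a sketch, assuming the columns should really be numeric(18,4) and all remaining values convert cleanly):
-- change the column types after the non-numeric values have been cleaned up
ALTER TABLE products ALTER COLUMN rice_price numeric(18,4);
ALTER TABLE products ALTER COLUMN sugar_price numeric(18,4);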

Query to ignore rows which have non hex values within field

Initial situation
I have a relatively large table (approximately 700,000 records) where an nvarchar field "MediaID" mostly contains media IDs in proper hexadecimal notation (as they should).
Within my "sequential" query (each query depends on the output of the query before, this is all in pure T-SQL) I have to convert these hexadecimal values into decimal bigint values in order to do further calculations and filtering on these calculated values for the subsequent queries.
--> So far, no problem. The "sequential" query works fine.
Problem
Unfortunately, some of these media IDs contain non-hex characters - most probably because of typing errors by the people who added them, or because of import errors from the previous business system.
Because of these non-hex chars, the whole query fails (of course) because the conversion hits an error.
For my current purpose, such rows must be skipped/ignored as they are clearly wrong and cannot be used (there are no media / data carriers in use with the current business system that can have non-hex character IDs).
Manual editing of the data is not an option as there are too many errors and it is not clear with what the data must be replaced.
Challenge
To create a query which only returns records which have valid hex values within the media ID field.
(Unfortunately, my SQL skills are not enough to create the above query. Your help is highly appreciated.)
The relevant section of the larger query looks like this (xxxx is where your help comes in :-))
select
    pureMediaID
    , mediaID
    , CUSTOMERID
    , CONTRACT_CUSTOMERID
from
(
    select concat('0x', Replace(Ltrim(Replace(mediaID, '0', ' ')), ' ', '0')) AS pureMediaID
    --, CUSTOMERID
    , *
    from M_T_CONTRACT_CUSTOMERS
    where mediaID is not null
    and mediaID like '0%'
    and xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as inner1
EDIT: As requested, I have added some good and some bad data here:
Good:
4335463357
4335459809
1426427996
4335463509
4335515039
4335465134
4427370396
4335415661
4427369036
4335419089
004BB03433
004e7cf9c6
00BD23133
00EE13D8C1
00CCB5522C
00C46522C
00dbbe3433
Bad:
4564589+
AB6B8BFC.8
7B498DFCnm
DB218DFChb
d<tgfh8CFC
CB9E8AFCzj
B458DFCjhl
rytzju8DFC
BFCtdsjshj
DB9888FCgf
9BC08CFCyx
EB198DFCzj
4B628CFChj
7B2B8DFCgg
After I upgraded the compatibility level of the SQL instance to SQL 2016 (it was below 2012 before), I could use try_convert with the same syntax as the original convert function, as donPablo pointed out. With that, the query runs all the way through, and every MediaID which is not a correct hex value gets nicely converted into a null value - really, really nice.
Exactly what I needed.
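For reference, a minimal sketch of that try_convert approach (the varbinary(8) size and the odd-length padding are my assumptions; conversion style 1 expects a '0x' prefix and an even number of hex digits):
-- style 1 parses '0x...' hex literals; TRY_CONVERT returns NULL for invalid hex
-- odd-length IDs (like '00BD23133' above) get a '0' prepended so the digit count is even
select mediaID,
       cast(try_convert(varbinary(8),
            '0x' + case when len(mediaID) % 2 = 1 then '0' + mediaID else mediaID end,
            1) as bigint) as mediaID_dec
from M_T_CONTRACT_CUSTOMERS
where mediaID is not null;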
Unfortunately, the solution of ALICE... didn't work out for me, as it was also (strangely) returning records that had the "+" character in them.
Edit: The added comment of Alice..., where you create a calculated field like this:
CASE WHEN "KEY" LIKE '%[^0-9A-F]%' THEN 0 ELSE 1 end as xyz
and then filter in the next query like this:
where xyz = 1
also works with SQL instances with compatibility level < SQL 2012.
A great addition for people who still have to work with older SQL instances.
An option (although not ideal in terms of performance) is to check the characters in the MediaID through a CASE expression and a LIKE pattern.
Hexadecimal values cannot contain characters other than the digits 0-9 and the letters A-F:
CASE WHEN MediaID NOT LIKE '%[^0-9A-F]%' THEN 1 ELSE 0 END
Note the negation: the pattern [^0-9A-F] matches any non-hex character, so the expression returns 1 only when no such character is present. (A pattern without the negation, LIKE '%[0-9A-F]%', only tests that at least one hex character exists, which is why an earlier version of this check also matched values containing "+".)
I would recommend writing a function that evaluates MediaID first and checks whether it is hexadecimal, and then running the query for the conversion.
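Such a function could be a thin wrapper around the corrected pattern (a sketch; the name and parameter length are arbitrary, and lowercase hex like '00dbbe3433' is allowed explicitly):
CREATE FUNCTION dbo.IsHexString (@value nvarchar(100))
RETURNS bit
AS
BEGIN
    -- returns 1 only when the string is non-empty and contains no non-hex character
    RETURN CASE WHEN @value IS NOT NULL
                 AND @value <> N''
                 AND @value NOT LIKE '%[^0-9A-Fa-f]%'
                THEN 1 ELSE 0 END;
END;
The xxxx placeholder in the inner query would then become: and dbo.IsHexString(mediaID) = 1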

Invalid digits on Redshift

I'm trying to load some data from a staging environment to a relational environment, and something is happening that I can't figure out.
I'm trying to run the following query:
SELECT
    CAST(SPLIT_PART(some_field, '_', 2) AS BIGINT) cmt_par
FROM
    public.some_table;
The some_field is a column that has data with two numbers joined by an underscore like this:
some_field -> 38972691802309_48937927428392
And I'm trying to get the second part.
That said, here is the error I'm getting:
[Amazon](500310) Invalid operation: Invalid digit, Value '1', Pos 0,
Type: Long
Details:
-----------------------------------------------
error: Invalid digit, Value '1', Pos 0, Type: Long
code: 1207
context:
query: 1097254
location: :0
process: query0_99 [pid=0]
-----------------------------------------------;
Execution time: 2.61s
Statement 1 of 1 finished
1 statement failed.
It's literally saying some numbers are not valid digits. I've already tried to get the exact data which is throwing the error, and it appears to be a normal field, just as I was expecting. It happens even if I throw out NULL fields.
I thought it would be an encoding error, but I've not found any references to solve that.
Does anyone have any idea?
Thanks, everybody.
I just ran into this problem and did some digging. It seems like the error Value '1' is the misleading part; the problem is actually that these fields are just not valid as numerics.
In my case they were empty strings. I found the solution to my problem in a blog post, which is essentially to find any fields that aren't numeric and fill them with null before casting:
select cast(colname as integer)
from (
    select case when colname ~ '^[0-9]+$' then colname
                else null
           end as colname
    from tablename
) t;
Bottom line: this Redshift error is completely confusing and really needs to be fixed.
When you are using a Glue job to upsert data from a data source to Redshift:
Glue may rearrange the columns before copying, which can cause this issue. It happened to me even after using ApplyMapping.
In my case, the datatypes were not an issue at all; in the source they were typecast to exactly match the fields in Redshift.
Glue was rearranging the columns by the alphabetical order of their names and then copying the data into the Redshift table (which obviously throws an error, because my first column is an ID key, not a string column like the others).
To fix the issue, I used a SQL query within Glue to run a SELECT with the columns in the correct order for the table.
It's weird that Glue did this even after using ApplyMapping, but the workaround helped.
For example: the source table has fields ID|EMAIL|NAME with values 1|abcd@gmail.com|abcd, and the target table also has fields ID|EMAIL|NAME. But when Glue upserts the data, it rearranges the columns by name before writing, so it tries to write abcd@gmail.com|1|abcd into ID|EMAIL|NAME. This throws an error because ID expects an int value and EMAIL expects a string. I did a SQL query transform using the query "SELECT ID, EMAIL, NAME FROM data" to rearrange the columns before writing the data.
Hmmm. I would start by investigating the problem. Are there any non-digit characters?
SELECT some_field
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) ~ '[^0-9]';
Is the value too long for a bigint?
SELECT some_field
FROM public.some_table
WHERE LEN(SPLIT_PART(some_field, '_', 2)) > 18;
A bigint holds at most 19 decimal digits (values up to 9,223,372,036,854,775,807). If you need more digits of precision, consider a decimal rather than a bigint.
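If that is the case, a guarded cast to a wider type might look like this (a sketch combining the two checks above; decimal(38,0) is the widest integer-like precision Redshift offers):
-- keep only all-digit values of a castable length and use a 38-digit decimal
SELECT CAST(SPLIT_PART(some_field, '_', 2) AS DECIMAL(38,0)) AS cmt_par
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) ~ '^[0-9]{1,38}$';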
If you get an error message like "Invalid digit, Value 'O', Pos 0, Type: Integer", try executing your COPY command with the header row excluded. Use the IGNOREHEADER parameter in your COPY command to ignore the first line of the data file.
So the COPY command will look like the below:
COPY orders
FROM 's3://sourcedatainorig/order.txt'
CREDENTIALS 'aws_access_key_id=<your access key id>;aws_secret_access_key=<your secret key>'
DELIMITER '\t'
IGNOREHEADER 1;
For my Redshift SQL, I had to wrap my columns with Cast(col As Datatype) to make this error go away.
For example, setting my columns' datatype to Char with a specific length worked:
Cast(COLUMN1 As Char(xx)) = Cast(COLUMN2 As Char(xxx))

Adding a value to a 'datetime' column caused an overflow

I have cross applied a table-valued function in a DML statement; it returns two columns, one of which is RiskValue (an integer denoting a scan period).
Now when I select RiskValue together with the DATEADD function like this (y is the function alias and am is another table):
select cast(y.RiskValue as int), dateadd(m, cast(y.RiskValue as int), @RunningDate)
from .....
it gives me proper values as
6 | 'Some Date'
but when I use it in a WHERE clause as
where am.DateOpen >= dateadd(m, cast(y.RiskValue as int), @RunningDate)
I get the error:
Adding a value to a 'datetime' column caused an overflow
Note that passing a hard-coded value, as in
where am.DateOpen >= dateadd(m, 6, @RunningDate)
works fine. (Obviously it will.)
Any suggestions what might be wrong?
Posting Aaron Bertrand's comment as an answer so that people reaching this question will find it helpful:
RiskValue is 6 for that row, but you have to understand that SQL Server may not evaluate the statement in the same order you wrote it. It can often error out while performing calculations on values that should have been excluded by a filter, because the calculations were attempted first. We can try
DATEADD(MONTH, CASE WHEN y.RiskValue < 20000 THEN y.RiskValue END, @RunningDate)
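In the failing WHERE clause, the guard would sit inline like this (a sketch; the 20000 bound is only Aaron's illustrative sanity limit, and out-of-range rows are filtered out because the CASE yields NULL, which fails the comparison):
where am.DateOpen >= dateadd(month,
                             case when y.RiskValue between 0 and 20000
                                  then cast(y.RiskValue as int) end,
                             @RunningDate)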

Manipulating a record data

I am looking for a way to take data from one table, manipulate it, and bring it into another table using a SQL query.
I have a Column called NumberStuff that has data like this in it:
INC000000315482
I need to cut off the INC portion of the number, convert the rest into an integer, and store it in a column in another table so that it ends up looking like this:
315482
Any help would be much appreciated!
Another approach is to use the REPLACE function, either in T-SQL or as a derived column expression in SSIS.
TSQL
SELECT REPLACE(T.MyColumn, 'INC', '') AS ReplacedINC
SSIS
REPLACE([MyColumn], "INC", "")
This removes the character-based data. It then becomes an optional exercise whether to convert to a numeric type before storing the value in the target table, or to let the implicit conversion happen.
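Put together, the move into the other table could look like this (a sketch; the table names and the target column name are hypothetical, only NumberStuff comes from the question):
-- hypothetical source and target names; the CAST makes the conversion explicit
INSERT INTO dbo.TargetTable (NumberValue)
SELECT CAST(REPLACE(s.NumberStuff, 'INC', '') AS int)
FROM dbo.SourceTable AS s;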
The simplest version of what you need:
select cast(right(NumberStuff, 6) as int) from YourTable
Are you doing this in an SSIS statement, or...? Is it always the last 6 digits, or...?
This is a little less dependent on your formatting... it can handle any length (it trims the first 3 chars, and the cast to int removes the leading 0's):
select cast(SUBSTRING('INC000000315482',4,LEN('INC000000315482') - 3) as int)