double data type returns 'Decimal overflow' error in PrestoSQL - error-handling

I tried to debug my query in PrestoSQL.
I got a "Decimal overflow" error even though my datatype is already double. It's a big table with more than 300 columns, and there are about 250 calculations with CAST AS DOUBLE spread across many CTEs.
Is there any suggestion to prevent this from happening again? And which datatype is more robust: double or DECIMAL(38,38)? Or are they the same?
Any suggestions?
This is the only information I got :(
PS: I can't really share the code since it's a very big query and it's confidential.
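In case it helps others hitting this: a minimal sketch, assuming the error comes from an intermediate DECIMAL computation that runs before an outer CAST AS DOUBLE is applied (the literals below are made up). Presto caps DECIMAL precision at 38 digits, so arithmetic on wide DECIMAL operands can overflow even when the final result is cast to DOUBLE. Note also that DECIMAL(38,38) has no integer digits at all, so it can only hold values strictly between -1 and 1.

-- Overflows at runtime: the product of a and b needs more than 38
-- digits, so "Decimal overflow" is raised before the outer CAST runs.
SELECT CAST(a * b AS DOUBLE)
FROM (VALUES (DECIMAL '99999999999999999999.99',
              DECIMAL '99999999999999999999.99')) AS t(a, b);

-- Works: casting the operands to DOUBLE first keeps the whole
-- computation in floating point.
SELECT CAST(a AS DOUBLE) * CAST(b AS DOUBLE)
FROM (VALUES (DECIMAL '99999999999999999999.99',
              DECIMAL '99999999999999999999.99')) AS t(a, b);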

Related

Why does an SDI flowgraph fail when a filter in a projection is introduced?

I am developing a flowgraph in native HANA and I am receiving error ORA-00972 after introducing a filter that contains a single-quote sign to the projection node.
The filter is as follows:
"VALID_FROM" >= to_timestamp(to_nvarchar($$MaxDT$$),'yyyymmddhh24miss')
When I change the filter to e.g:
"ID" IN (1,5,6,7,34)
it's working just fine.
I had the same error previously while I was querying a virtual table. The solution there was to make the namespace much smaller, so that namespace + table name + field name did not exceed 30 characters. But I am not sure what the solution is when this error occurs in the flowgraph.
Any help appreciated!
Cheers
The error message is not from HANA but from an Oracle DB.
ORA-00972 means “Identifier too long” - so it may well be that the single-quoted string from the filter condition is mis-interpreted as an identifier in the remote Oracle DB.
Try to escape the single quote by using two consecutive single quotes ''.
"VALID_FROM" >= to_timestamp(to_nvarchar($$MaxDT$$),''yyyymmddhh24miss'')
Also, reconsider the actual data type of VALID_FROM - it looks as if a character input gets converted to nvarchar and then to timestamp.
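For reference, a hypothetical sketch of what the remote Oracle side should end up executing once the doubled quotes are unwrapped again (the table name and the timestamp value are made up; the real pushed-down SQL is generated by the SDI adapter):

SELECT *
FROM some_remote_table
WHERE "VALID_FROM" >= TO_TIMESTAMP('20240101000000', 'yyyymmddhh24miss');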

query where '0'

I need to query a varchar field in SQL for 0's.
When I query WHERE field = '0', I get the following error message:
Conversion failed when converting the varchar value 'N' to data type int.
I'm having trouble figuring out where the issue is coming from. My Googling is failing me on this one; could someone point me in the right direction?
EDIT:
Thanks for the help on this one, guys. So there were 'N's in the data, just very few of them, so they weren't showing up in my TOP 100 query until I limited the search results further.
Apparently SQL didn't have any issue comparing ints to varchar(1) as long as they were ints as well. I didn't even realize I was using an int in the WHERE clause farther up in my query.
Oh, and sorry for not sharing my query; it was long and complicated, and I was trying to share what I thought was the relevant part of it. I'll write a simplified query in future questions.
Anyone know how to mark this as solved?
If your field is a varchar(), then this expression:
where field = '0'
cannot return a type conversion error.
This version can:
where field = 0
It would return an error if field has the value of 'N'. I am guessing that is the situation.
Otherwise, you have another expression in your code causing the problem by doing conversions from strings to numbers.
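To make that concrete, a minimal sketch in SQL Server syntax (the temp table and its values are made up) that reproduces both behaviours:

-- A varchar(1) column holding mostly digits plus a stray 'N':
CREATE TABLE #t (field VARCHAR(1));
INSERT INTO #t (field) VALUES ('0'), ('1'), ('N');

-- Fine: varchar compared to varchar, no conversion happens.
SELECT * FROM #t WHERE field = '0';

-- Fails: int has higher type precedence, so every value of field is
-- cast to int, and the row holding 'N' raises the conversion error.
SELECT * FROM #t WHERE field = 0;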

Null check on a decimal in Crystal Reports using CDBL({value})

I am using a decimal value in a formula, which gives an error when there is no data.
I tried using CDBL({value}), i.e. creating a formula for value = CDBL({value}).
Then I use {#value} in the formula. This used to take care of null values, but now I keep getting an error on IF NOT ISNULL({#Value}) THEN: "A number, or currency amount is required here. Details: errorKind".
Any suggestions on how to fix this, please?
I will try to answer this and see if I get any sort of indication that it worked... maybe even a correct-answer indication :)
You can't have mixed field types returned in Crystal. If one part of the IF statement returns a numeric type, then the rest has to be a numeric type. If you post your entire formula, I (or someone else who is willing to give up valuable time) can show you how it needs to look.
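In the meantime, a hypothetical sketch of the rule being described (0 is just an assumed numeric fallback; substitute whatever default fits the report):

// Accepted: both branches of the IF return a number.
IF ISNULL({#Value}) THEN 0 ELSE CDBL({#Value})

// Rejected with "A number, or currency amount is required here":
// one branch would return a string, the other a number.
// IF ISNULL({#Value}) THEN '' ELSE CDBL({#Value})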

Oracle Error, moving data between databases

I am moving some data between two databases and have had much success, but then I encountered a problem doing the same kind of query that I've been doing.
The query:
INSERT INTO INTERNET.WEBSECURITY#crmtest SELECT * FROM INTERNET.WEBSECURITY;
The Error:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
Any ideas on what this might be?
You are trying to assign a value to a PL/SQL variable which is not big enough, or which has a greater size than the column's data type.
In addition: assigning/inserting a non-numeric value to a numeric variable/column can raise the same error.
Probably your table columns differ a bit in datatypes and sizes. I do not see any variables in your example.
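For illustration, a minimal PL/SQL sketch that raises this exact error (the variable name and sizes are made up): the buffer is declared smaller than the value assigned to it.

DECLARE
  v_buf VARCHAR2(5);  -- buffer declared too small
BEGIN
  v_buf := 'a string longer than five characters';  -- ORA-06502 raised here
END;
/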

MySQL Type Conversion: Why is float the lowest common denominator type?

I recently ran into an issue where a query was causing a full table scan, and it came down to a column having a different definition than I thought: it was a VARCHAR, not an INT. When queried with "string_column = 17" the query ran, it just couldn't use the index. That really threw me for a loop.
So I went searching and found out what happened; the behavior I was seeing is consistent with what MySQL's documentation says:
In all other cases, the arguments are compared as floating-point (real) numbers.
So my question is... why a float?
I could see trying to convert numbers to strings (although the points in the MySQL page linked above are good reasons not to). I could also understand throwing some sort of error, or generating a warning (my preference). Instead it happily runs.
So why convert everything to a float? Is that from the SQL standard, or based on some other reason? Can anyone shed some light on this choice for me?
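To illustrate the "happily runs" part, a quick sketch (not from the original question) of comparisons MySQL silently evaluates as floats:

SELECT '17' = 17;     -- 1: both sides become 17.0
SELECT '17abc' = 17;  -- 1, plus a truncation warning: '17abc' parses as 17
SELECT 'abc' = 0;     -- 1: a non-numeric string parses as 0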
I feel your pain. We have a column in our DB that holds what is well known in the company as an "order number". But it's not always a number; in certain circumstances it can contain other characters too, so we keep it in a varchar. With SQL Server 2000, this means that selecting on "order_number = 123456" is bad. SQL Server effectively rewrites the predicate as "CAST(order_number AS INT) = 123456", which has two undesirable effects (see the sketch after this list):
the index is on order_number as a varchar, so it starts a full scan
those non-numeric order numbers eventually cause a conversion error to be thrown to the user, with a rather unhelpful message.
In a way it's good that we do have those non-numeric "numbers", since at least badly-written queries that pass the parameter as a number get trapped rather than just sucking up resources.
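Here is the promised sketch of those two effects, using a hypothetical orders table with the order_number varchar column:

-- The column side gets converted, so the index on order_number
-- cannot be used and every row is scanned...
SELECT * FROM orders WHERE order_number = 123456;

-- ...and any non-numeric row reached during that scan throws
-- "Conversion failed when converting the varchar value ... to data type int."
-- Quoting the literal keeps it a varchar-to-varchar comparison:
SELECT * FROM orders WHERE order_number = '123456';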
I don't think there is a standard. I seem to remember PostgreSQL 8.3 dropped some of the default casts between number and text types so that this kind of situation would throw an error when the query was being planned.
Presumably "float" is considered to be the widest-ranging numeric type and therefore the one that all numbers can be silently promoted to?
Oh, and there are similar problems (but no conversion errors) when you have varchar columns and a Java application that passes all string literals as nvarchar: suddenly your varchar indexes are no longer used, and good luck finding the occurrences of that happening. Of course you can tell the Java app to send strings as varchar, but now we're stuck with only using characters in windows-1252, because that's what the DB was created with 5-6 years ago when it was just a "stopgap solution", ah-ha.
Well, it's easily understandable: float can hold the greatest range of numbers.
If the underlying datatype is datetime, for instance, it can simply be converted to a float number that has the same intrinsic value.
If the datatype is a string, it is easy to parse it into a float, the performance degradation notwithstanding.
So float is the safest datatype to fall back on.
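As a rough illustration of that "everything can become a number" idea in MySQL (output values are made up):

SELECT NOW() + 0;          -- a DATETIME in numeric context, e.g. 20240101123045
SELECT '3.14 is pi' + 0;   -- a string in numeric context: 3.14, plus a warning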