Using TRY_CAST in Snowflake to deal with very long numbers - sql

I'm using TRY_CAST in Snowflake to convert any overly long values in SQL to NULL.
I'm flattening a JSON array, and I'm using TRY_CAST to turn any large values into NULL because I was getting the error Failed to cast variant value {numberLong: -8301085358432}.
Here is my code:
SELECT try_cast(item.value:price) as item_price,
       try_cast(item.value:total_price_bill) as items_total_price
FROM table, LATERAL FLATTEN(input => products) item
When I try running the above code, I get the error below:
SQL compilation error: error at line 1 at position ')'.
I don't understand what I'm doing wrong.

You are using the wrong syntax for TRY_CAST. According to the Snowflake documentation, the syntax is:
TRY_CAST( <source_string_expr> AS <target_data_type> )
Also note:
Only works for string expressions.
target_data_type must be one of the following:
VARCHAR (or any of its synonyms)
NUMBER (or any of its synonyms)
DOUBLE
BOOLEAN
DATE
TIME
TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ, or TIMESTAMP_TZ
So, for example, if item.value:price is a string, you need something like this:
select try_cast(item.value:price as NUMBER) as item_price,
....
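Note that a value extracted from a VARIANT with the : operator is still a VARIANT, not a string, so in practice you may also need a ::varchar cast before TRY_CAST will accept it. A minimal sketch, reusing the table and column names from the question:
select try_cast(item.value:price::varchar as NUMBER) as item_price,
       try_cast(item.value:total_price_bill::varchar as NUMBER) as items_total_price
from table, LATERAL FLATTEN(input => products) item;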

Related

SQL overlap statement

I'm trying to do an overlap in SQL (using Postgres) but receive a syntax error, and I do not know why I'm getting the error or what's wrong with my SQL statement.
Error text
[42601] ERROR: syntax error at or near "[" Position: 296
SQL Statement
SELECT *,
       ARRAY_AGG("reserve"."state_id") AS "states"
FROM "orders_order"
JOIN "reserve" ON ("order"."id" = "reserve"."order_id")
GROUP BY "order"."id"
HAVING (NOT (ARRAY_AGG("reserve"."state_id") && ['test']::varchar(255)[]))
As documented in the manual, there are two ways to specify an array:
First with the array keyword:
array['test']
or as a string enclosed with curly braces:
'{test}'::text[]
In the second case the cast might or might not be needed, depending on the context where it's used.
There is no need to cast it to a varchar(255)[] array; text[] will do just fine, especially when comparing it to the result of an array_agg(), which returns text[] as well.
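Applied to the statement from the question, the offending HAVING clause would then become something like this (a sketch, keeping the question's table and column names):
HAVING NOT (ARRAY_AGG("reserve"."state_id") && array['test'])
or, with the string form:
HAVING NOT (ARRAY_AGG("reserve"."state_id") && '{test}'::text[])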

Search with inequalities on string property

We are using NHibernate over SQL Server and SQLite.
The database stores records in rows rather than columns -- each row having Path|Value as columns. Path and Value are both string properties.
For certain values of Path we would like to query for inequalities -- greater-than, less-than, etc.
The trouble we are having is that because the properties are strings, the inequalities are using string comparisons -- for example, searching for Value >= 18 returns rows where Value = 5. Unfortunately we are having trouble working around this.
1) This restriction produces incorrect results (saying 18 < 5):
Restrictions.Ge("Value", item.Value);
2) We tried casting the value to an integer, but this code produces a SqlException from NHibernate -- Error converting data type nvarchar to bigint.
Restrictions.Le(Projections.Cast(NHibernateUtil.Int64, Projections.Property("SearchString")), item.Value)
3) We were looking for a way to pad the property value with zeros (so that we would get 018 > 005), but could not find a way to do this in NHibernate.
Does anyone have any advice?
Thank you in advance!
Assuming that you want to compare on integer value, with IQueryOver:
1) This restriction produces incorrect results (saying 18 < 5):
Restrictions.Ge("Value", item.Value);
Restrictions.Ge(
    Projections.Cast(NHibernateUtil.Int32, Projections.Property<YourEntity>(x => x.YourProperty)),
    intValue
)
Convert your data types accordingly: if your C# value (intValue) is already numeric, there is no need to convert it, and likewise if x.YourProperty is already a numeric type. Adjust the code above accordingly.
2) We tried casting the value to an integer, but this code produces a SqlException from NHibernate -- Error converting data type nvarchar to bigint.
Restrictions.Le(Projections.Cast(NHibernateUtil.Int64, Projections.Property("SearchString")), item.Value)
Refer to the above and check the data type of item.Value.
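For point 2, the cast-based criterion translates to SQL along the lines of the sketch below (the table name and Path filter are hypothetical). SQL Server is free to evaluate the CAST on every row before applying the other filters, so the query fails with the nvarchar-to-bigint error as soon as any scanned row holds a non-numeric Value, even a row the rest of the WHERE clause would exclude:
-- hypothetical table standing in for the Path|Value rows
SELECT *
FROM PathValueRows
WHERE Path = 'Age'                   -- hypothetical Path filter
  AND CAST(Value AS bigint) >= 18;   -- throws if any scanned Value is non-numeric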

Invalid digits on Redshift

I'm trying to load some data from stage to relational environment and something is happening I can't figure out.
I'm trying to run the following query:
SELECT
CAST(SPLIT_PART(some_field,'_',2) AS BIGINT) cmt_par
FROM
public.some_table;
The some_field is a column that has data with two numbers joined by an underscore like this:
some_field -> 38972691802309_48937927428392
And I'm trying to get the second part.
That said, here is the error I'm getting:
[Amazon](500310) Invalid operation: Invalid digit, Value '1', Pos 0,
Type: Long
Details:
-----------------------------------------------
error: Invalid digit, Value '1', Pos 0, Type: Long
code: 1207
context:
query: 1097254
location: :0
process: query0_99 [pid=0]
-----------------------------------------------;
Execution time: 2.61s
Statement 1 of 1 finished
1 statement failed.
It's literally saying some numbers are not valid digits. I've already tried to find the exact data that throws the error, and it appears to be a normal field, just like I was expecting. It happens even if I throw out NULL fields.
I thought it might be an encoding error, but I haven't found any references to that.
Does anyone have any idea?
Thanks, everybody.
I just ran into this problem and did some digging. Seems like the error Value '1' is the misleading part, and the problem is actually that these fields are just not valid as numeric.
In my case they were empty strings. I found the solution to my problem in this blog post, which is essentially to find any fields that aren't numeric and fill them with null before casting.
select cast(colname as integer)
from (
    select
        case
            when colname ~ '^[0-9]+$' then colname
            else null
        end as colname
    from tablename
) t;  -- the derived table needs an alias in Redshift
Bottom line: this Redshift error is completely confusing and really needs to be fixed.
When you are using a Glue job to upsert data from any data source to Redshift:
Glue will rearrange the data and then copy it, which can cause this issue. This happened to me even after using ApplyMapping.
In my case, the data types were not an issue at all; in the source they were typecast to exactly match the fields in Redshift.
Glue was rearranging the columns into alphabetical order of their names and then copying the data into the Redshift table (which obviously throws an error, because my first column is an ID key, unlike the other string columns).
To fix the issue, I used a SQL query within Glue to run a SELECT with the correct order of the columns in the table.
It's weird that Glue did that even after using ApplyMapping, but the workaround helped.
For example: the source table has fields ID|EMAIL|NAME with values 1|abcd#gmail.com|abcd, and the target table also has fields ID|EMAIL|NAME. But when Glue upserts the data, it rearranges the columns by name before writing, so it tries to write abcd#gmail.com|1|abcd into ID|EMAIL|NAME. This throws an error because ID expects an int value and EMAIL expects a string. I used a SQL query transform with the query "SELECT ID, EMAIL, NAME FROM data" to rearrange the columns before writing the data.
Hmmm. I would start by investigating the problem. Are there any non-digit characters?
SELECT some_field
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) ~ '[^0-9]';
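Note that this regex check will not flag empty strings, since there is no character for [^0-9] to match, and empty strings were exactly the culprit in the answer above. It may be worth checking for them separately:
SELECT some_field
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) = '';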
Is the value too long for a bigint? A bigint holds at most 19 digits, so anything longer than 18 digits is at risk of overflowing:
SELECT some_field
FROM public.some_table
WHERE LEN(SPLIT_PART(some_field, '_', 2)) > 18;
If you need more than 18 digits of precision, consider a decimal rather than a bigint.
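If length does turn out to be the problem, a DECIMAL cast keeps the full value; Redshift's DECIMAL supports up to 38 digits of precision. A sketch reusing the query from the question:
SELECT
    CAST(SPLIT_PART(some_field, '_', 2) AS DECIMAL(38, 0)) AS cmt_par
FROM
    public.some_table;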
If you get an error message like "Invalid digit, Value 'O', Pos 0, Type: Integer", try executing your COPY command without the header row. Use the IGNOREHEADER parameter in your COPY command to ignore the first line of the data file.
So the COPY command will look like below:
COPY orders FROM 's3://sourcedatainorig/order.txt' credentials 'aws_access_key_id=<your access key id>;aws_secret_access_key=<your secret key>' delimiter '\t' IGNOREHEADER 1;
For my Redshift SQL, I had to wrap my columns with Cast(col As Datatype) to make this error go away.
For example, setting my columns' data type to Char with a specific length worked:
Cast(COLUMN1 As Char(xx)) = Cast(COLUMN2 As Char(xxx))

PostgreSQL Concat 2 numbers gives me error

I'm trying to concatenate two numbers from two different columns, chain_code and shop_code.
I tried this:
SELECT CONCAT(`shop_code`, `shop_code`) AS myid FROM {table}
But I get an error:
ERROR: operator does not exist: ` integer
LINE 1: SELECT CONCAT(`shop_code`, `shop_code`) AS myid FROM...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
I even tried to convert it to a string ... CONCAT(to_char(chain_code, '999')...) ... but it says there is no such function called 'to_sting' (found in the PostgreSQL documentation).
First: do not use those dreaded backticks (`); they are invalid in (standard) SQL.
To quote an identifier, use double quotes: "shop_code", not `shop_code`.
But as those identifiers don't need any quoting, just leave the quotes out completely. In general you should avoid quoted identifiers; they cause much more trouble than they are worth.
For details on specifying SQL identifiers see the manual: http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
But concat() only works on text/varchar values, so you first need to convert/cast the integer values to varchar:
SELECT CONCAT(chain_code::text, shop_code::text) AS myid FROM...
but it says there is no such a function called 'to_sting'
Well, the function is to_char(), not to_sting().
Using to_char() is an alternative to the cast operator ::text but is a bit more complicated in this case:
SELECT CONCAT(to_char(chain_code,'999999'), to_char(shop_code, '999999')) AS myid FROM...
The problem with to_char() in this context is that you need to specify a format mask that can deal with all possible values in that column. Using the cast operator is easier and just as good.
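Also note that to_char() with a plain '999999' mask left-pads the output with spaces; if you do go the to_char() route, the FM prefix suppresses that padding. A small sketch:
SELECT CONCAT(to_char(chain_code, 'FM999999'), to_char(shop_code, 'FM999999')) AS myid FROM...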
Update (thanks mu)
As mu is too short pointed out, concat doesn't actually need any cast or conversion:
SELECT CONCAT(chain_code, shop_code) AS myid FROM...
will work just fine.
Here is an SQLFiddle showing all possible solutions: http://sqlfiddle.com/#!15/2ab82/3

SQL convert error on view tables

SELECT logicalTime, traceValue, unitType, entName
FROM vwSimProjAgentTrace
WHERE valueType = 10
AND agentName ='AtisMesafesi'
AND ( entName = 'Hawk-1')
AND simName IN ('TipSenaryo1_0')
AND logicalTime IN (
SELECT logicalTime
FROM vwSimProjAgentTrace
WHERE valueType = 10 AND agentName ='AtisIrtifasi'
AND ( entName = 'Hawk-1')
AND simName IN ('TipSenaryo1_0')
AND CONVERT(FLOAT , traceValue) > 123
) ORDER BY simName, logicalTime
This is my SQL command, and the table is a view...
Each time I include the CONVERT(FLOAT, ...) part, I get this error:
Msg 8114, Level 16, State 5, Line 1
Error converting data type nvarchar to float.
One (or more) of the rows has data in the traceValue field that cannot be converted to a float.
Make sure you've used the right combination of dots and commas to signal floating-point values, and also make sure you don't have plainly invalid data (text, for instance) in that field.
You can try this SQL to find the invalid rows, but there might be cases it won't handle:
SELECT * FROM vwSimProjAgentTrace WHERE ISNUMERIC(traceValue) = 0
You can find the documentation of ISNUMERIC here.
If you look in BoL (Books Online) at the CONVERT command, you will see that nvarchar to float is an implicit conversion. This means that only "float"-able values can be converted into a float: every numeric value (within the float range) converts, while a non-numeric value does not, which is quite logical.
You probably have some non-numeric values in your column. You might spot them when you run your query without the CONVERT. Look for something like a comma vs. a dot: in a test scenario, a comma instead of a dot gave me some problems.
For an example of isnumeric, look at this sqlfiddle
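On SQL Server 2012 and later, TRY_CONVERT is another option: it returns NULL instead of raising an error for values that cannot be converted, so non-numeric rows simply drop out of the comparison. A sketch against the inner query from the question:
SELECT logicalTime
FROM vwSimProjAgentTrace
WHERE valueType = 10
  AND agentName = 'AtisIrtifasi'
  AND entName = 'Hawk-1'
  AND simName IN ('TipSenaryo1_0')
  AND TRY_CONVERT(FLOAT, traceValue) > 123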