Hoping someone can advise me on this. I have two tables in BigQuery: the first is called master, the second is called daily_transfer.
In the master table, there is a column named impression_share, the data type is float and all working correctly.
However, my problem is with the daily_transfer table. The idea is that on a daily basis I'll transfer this data into master. The schema and column names are exactly the same in both tables. The problem, however, is that in the daily_transfer table my float column (impression_share) contains a string value: '< 0.1'.
This string isn't flagged as an issue initially because the table is loaded from a Google Sheet, so the error is only highlighted when I try to query the data.
In summary, the column type is float, but a recurring value is a string. I've tried a couple of things. Firstly, replacing '< 0.1' with '0.1', but I get an error that REPLACE can only be used with arguments of type STRING, STRING, STRING. Which makes sense to me.
So I've tried to cast the column instead from float to string, and then replace the value. When I try to cast though I'm getting an error right away:
"Error while reading table: data-studio-reporting.analytics.daily_transfer, error message: Could not convert value to float. Row 3; Col 6."
Column 6 being "impression_share", row 3 value being < 0.1.
The query I was trying is:
SELECT
SAFE_CAST(mydata.impression_share AS STRING)
FROM `data-studio-reporting.analytics.daily_transfer` mydata
I just don't know if what I'm trying to do is possible, or whether I would be better off recreating the daily_transfer table with column 6 (impression_share) as STRING, to make it easier to replace and then cast before I transfer to the main table?
Any help greatly appreciated!
Thanks,
Mark
Thanks for the help on this: changing the column type in my daily_transfer table from FLOAT to STRING, then replacing and casting, has worked.
SELECT
mydata.Date,
CAST(REPLACE(mydata.Impression_share, '<', '') AS FLOAT64) AS impression_share_final,
mydata.Available_impressions
FROM `data-studio-reporting.google_analytics.daily_transfer_temp_test` mydata
It's been great for my knowledge to learn this one. Thanks!
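For completeness, a slightly more defensive variant of the same query (same table and column names as above) uses SAFE_CAST, so any other unexpected non-numeric value would come through as NULL instead of failing the whole query:
SELECT
mydata.Date,
SAFE_CAST(REPLACE(mydata.Impression_share, '<', '') AS FLOAT64) AS impression_share_final,
mydata.Available_impressions
FROM `data-studio-reporting.google_analytics.daily_transfer_temp_test` mydata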
I am trying to insert data from a staging table into the master table. The table has nearly 300 columns and is a mix of data types: varchars, integers, decimals, dates, etc.
Snowflake gives the unhelpful error message of "Numeric value '' is not recognized"
I have gone through and cut out various parts of the query to try and isolate where it is coming from. After several hours and cutting every column, it is still happening.
Does anyone know of a Snowflake diagnostic query (like Redshift has) which can tell me a specific column where the issue is occurring?
Unfortunately not at the point you're at. If you went back to the COPY INTO that loaded the data, you'd be able to use the VALIDATE() function to get better information down to the record and byte-offset level.
I would query your staging table for just the numeric fields and look for blanks, or you can wrap all of your fields destined for numeric columns in TRY_TO_NUMBER() functions. A bit tedious, but it might not be too bad if you don't have a lot of numeric columns.
https://docs.snowflake.com/en/sql-reference/functions/try_to_decimal.html
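As a sketch of that check against a single column (the staging table STG and column AMOUNT below are hypothetical names, not from the question):
select *
from STG
where AMOUNT is not null
  and try_to_number(AMOUNT) is null;  -- catches blanks ('') as well as strings like 'N/A'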
As a note, when you stage, you should try to use the NULL_IF options to get rid of bad characters, and/or try to load into the stage table using its actual datatypes, so you can leverage the VALIDATE() function to make sure the data types are correct before loading into Snowflake.
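A sketch of that staging-time approach (the stage, file, and table names here are placeholders, not from the question):
copy into staging_table
  from @my_stage/daily_load.csv
  file_format = (type = csv skip_header = 1 null_if = ('', 'NULL', 'N/A'));

-- VALIDATE() then reports any records the last load rejected, down to row and byte offset
select * from table(validate(staging_table, job_id => '_last'));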
Query your staging table using TRY_TO_NUMBER() and/or TRY_TO_DECIMAL() for the number and decimal fields of the table, and then use MINUS to get the difference:
Select $1, $2, ...$300 from #stage
minus
Select $1, try_to_number($2), ...$300 from #stage
If any number field has a string that cannot be converted, it will be NULL, and the MINUS should return the rows which have a problem. Once you get the rows, analyze the columns in the result set for errors.
I need to query a varchar field in SQL for '0' values.
When I query with where field = '0' I get the following error message:
Conversion failed when converting the varchar value 'N' to data type int.
I'm having trouble figuring out where the issue is coming from. My Googling is failing me on this one; could someone point me in the right direction?
EDIT:
Thanks for the help on this one, guys. There were 'N's in the data, just very few of them, so they weren't showing up in my top-100 query until I limited the search results further.
Apparently SQL didn't have any issue comparing ints to varchar(1) values so long as those values were ints as well. I didn't even realize I was using an int in the WHERE clause farther up in my query.
Oh, and sorry for not sharing my query; it was long and complicated, and I was trying to share what I thought was the relevant part of it. I'll write a simplified query in future questions.
Anyone know how to mark this as solved?
If your field is a varchar(), then this expression:
where field = '0'
cannot return a type conversion error.
This version can:
where field = 0
It would return an error if field has the value of 'N'. I am guessing that is the situation.
Otherwise, you have another expression in your code causing the problem by doing conversions from strings to numbers.
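To illustrate the difference with a minimal sketch (the table name is made up, and TRY_CAST needs SQL Server 2012 or later):
-- compares as strings: never tries to convert 'N' to a number
select * from my_table where field = '0';

-- forces a numeric comparison, but tolerates 'N' by turning it into NULL
select * from my_table where try_cast(field as int) = 0;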
Let me explain why I want to do this... I have built a Tableau dashboard that allows a user to browse/search all of the tables and columns in our warehouse by schema, object type (table, view, materialized view), etc. I want to add a column that pulls a sample of the data from each column in each table. This is also done, but with this problem:
The resulting column is composed of data of different types (VARCHAR2, LONG, etc.). I can get basically every type of data to conform to a single data type except for LONG; it will not let me convert it to anything compatible with everything else (if that makes sense...). I simply need all data types to coexist in a single column. I've tried many different things and have been reading up on the subject for about a week now, and it sounds like it just can't be done, but in my experience there is always a way... I figured I'd check with the gurus here before admitting defeat.
One of the things I've tried:
--Here, from two different tables, I'm pulling a single piece of data from a single column and attempting to merge into a single column called SAMPLE_DATA
--OTHER is LONG data type
--ORGN_NME is VARCHAR2 data type
select 'PLAN','OTHER', cast(substr(OTHER,1,2) as varchar2(4000)) as SAMPLE_DATA from sde.PLAN union all
select 'BUS_ORGN','ORGN_NME', cast(substr(ORGN_NME,1,2) as varchar2(4000)) as SAMPLE_DATA from sde.BUS_ORGN;
Resulting error:
Lookup Error
ORA-00932: inconsistent datatypes: expected CHAR got LONG
How can I achieve this?
Thanks in advance
LONG datatypes are basically unusable by most applications. I made something similar where I wanted to search the contents of packages. The solution is to convert the LONG into a CLOB using a pipelined function. Adrian Billington's source code can be found here:
https://github.com/oracle-developer/dla
You end up with a view that you can query. I did not see any performance hit even when looking at large packages so it should work for you.
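If a one-off copy of the data is acceptable, another option (separate from the pipelined-function approach above, and sketched here with illustrative table and column names) is Oracle's TO_LOB, which is allowed in an INSERT ... SELECT and turns the LONG into a CLOB you can then substring:
create table sample_data_stage (src_table varchar2(30), src_column varchar2(30), sample_clob clob);

insert into sample_data_stage (src_table, src_column, sample_clob)
select 'PLAN', 'OTHER', to_lob(OTHER) from sde.PLAN;

-- once it is a CLOB, the usual functions work and the UNION ALL no longer mixes types
select src_table, src_column,
       cast(dbms_lob.substr(sample_clob, 2, 1) as varchar2(4000)) as SAMPLE_DATA
from sample_data_stage;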
I have to create a second header line and am using the first record of the query to do this. I am using a UNION ALL to create this header record, with the second part of the UNION extracting the data required.
I have one issue on one column.
,'Active Energy kWh'
UNION ALL
,SUM(cast(invc.UNITS as Decimal (15,0)))
Each side of the UNION is 11 lines, and I have tried all sorts of combinations, but it always results in an error message.
The above gives me "Error converting data type varchar to numeric."
Any help would be much appreciated.
The error message indicates that one of the values in the INVC table's UNITS column is non-numeric. I would hazard a guess that it's a string (VARCHAR or similar) column and one of the values has ended up in a state where it cannot be parsed.
Unfortunately, there is no way other than checking small ranges of the table to gradually locate the 'bad' row (i.e. try running the query for a few million rows at a time, then reducing the number until you home in on the bad data). SQL Server 2014, if you can get a database restored to it, has the TRY_CONVERT function, which permits conversions to fail and so enables a more direct check - but you'll need to play with this on another system
(I'm assuming that an upgrade to 2014 for this feature is out of the question - your best bet is likely just looking for the bad row).
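For reference, a sketch of that TRY_CONVERT check (the function is available from SQL Server 2012 onwards), using the table and column named in the question:
select *
from invc
where invc.UNITS is not null
  and try_convert(decimal(15,0), invc.UNITS) is null;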
The problem is that you are trying to mix header information with data information in a single query.
Obviously, all your header columns will be strings. But not all your data columns will be strings, and SQL Server is unhappy when you mix data types this way.
What you are doing is equivalent to this:
select 'header1' as col1 -- string
union all
select 123.5 -- decimal
The above query produces the following error:
Error converting data type varchar to numeric.
...which makes sense, because you are trying to mix a string (the header) with a decimal field.
So you have 2 options:
Remove the header columns from your query, and deal with header information outside your query.
Accept the fact that you'll need to convert the data type of every column to a string type. So when you have numeric data, you'll need to cast the column to varchar(n) explicitly.
In your case, it would mean adding the cast like this:
,'Active Energy kWh'
UNION ALL
,CAST(SUM(cast(invc.UNITS as Decimal (15,0))) AS VARCHAR(50)) -- Change 50 to appropriate value for your case
EDIT: Based on comment feedback, changed the cast to varchar to have an explicit length (varchar(n)) to avoid relying on the default length, which may or may not be long enough. OP knows the data, so OP needs to pick the right length.
I was trying to insert a value into a SQL Server column of type numeric(18, 0).
I am unable to insert a zero at the beginning. For example, 022223 gets inserted as 22223.
I think changing the column type to varchar would work, but I don't want to alter the table structure.
Is there any way to do this without changing the table structure? Please help.
There is no point in having this IN the database. You will, after all, select the data, won't you? So do something like this while selecting. I searched Google for "mssql numeric leading zeros"; all of the solutions are like the one in the link I have mentioned :) Or, obviously, use varchar if you, for some reason, must have data like that in a table :)
You can apply a transformation when reading and inserting the value, like this:
When inserting the value:
string s = "00056";
double val = double.Parse("0." + s);
And when querying the value, use:
double val = 0.00056; // stored value in your db field
string s = val.ToString().Remove(0, 2);
I think it will work in any case - whether you have a leading zero in your value or not.
Well, I don't think it's possible, since it's stored as a number, so simply add leading zeros when printing it...
There are multiple approaches.
http://sqlusa.com/bestpractices2005/padleadingzeros/
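One common pattern from those approaches, sketched here with made-up table/column names and an assumed display width of six digits, is to keep the column numeric and pad only when selecting:
select right(replicate('0', 6) + cast(acct_no as varchar(18)), 6) as acct_no_padded
from accounts;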