How can I find a null value within a number column? - sql

I'm using the BI tool Domo, which uses Amazon Redshift. I have a dataset that runs nightly using Zendesk data.
I'm getting this error:
OnboardFlowExecution(2794) data flow execution id
(724670342c4c48a9a61e7a617e6462c1) failed:
java.lang.NumberFormatException: For input string: "null"
I've researched the error, and I'm under the impression that somewhere in the dataset a number column contains data it doesn't like, which is wreaking havoc with my downstream process.
How do I find the offending column/row?

Amazon Redshift is expecting a numeric value in that column but is receiving the literal string 'null' instead. Hence, it's throwing an exception (a kind of error).
Add a transform to the input dataset to handle the null exception. We've previously used the NVL function in Redshift to replace null values with something else.
E.g., your transform could be:
Select employee_id, NVL(emp_first_name, 'No Name') from employees;
The NVL function will replace all the null values in the 'emp_first_name' column with 'No Name'.
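If you also need to locate the offending rows themselves, a pattern-match query can flag any value that isn't purely numeric. This is only a sketch, assuming the problem column arrives as text (which the error suggests, since the literal string "null" is being parsed as a number); the table and column names here (employees, emp_salary) are hypothetical stand-ins for whichever column is failing, and the pattern would need extending if the column can legitimately hold decimals or negatives.

select *
from employees
where emp_salary is null
   or emp_salary !~ '^[0-9]+$';   -- Redshift POSIX "does not match": flags the literal string 'null', blanks, and anything else non-numeric

This returns every row whose value is not a plain string of digits, which should point you at the offending records.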

Related

SQL Query in Azure Dataflow does not work when using parameter value in where clause

I use an Azure Data Factory pipeline.
Within that pipeline I use 2 activities:
A Lookup activity to get a date value
This is the output:
"firstRow": {
"Date": "2022-10-26T00:00:00Z"
A dataflow which gets the date from the Lookup in step 1 and uses it in the source options' SQL query, in the where clause:
This is the query:
"SELECT ProductID ,ProductName ,SupplierID,CategoryID ,QuantityPerUnit ,UnitPrice ,UnitsInStock,UnitsOnOrder,ReorderLevel,Discontinued,LastModifiedDate FROM Noordwind.Products where LastModifiedDate >= '{$DS_LastPipeLineRunDate}'"
When I fill the parameter by hand with, for example, '2022-10-26', it works great, but when I let the parameter get its value from the Lookup in step 1, the dataflow fails.
Error message:
{"message":"Job failed due to reason: Converting to a date or time failed due to an invalid character. Details:null","failureType":"UserError","target":"Products","errorCode":"DF-Executor-Conversion"}
This is the parameter as shown in the pipeline view, with the dataflow activity selected:
I have tried casting the date in all kinds of ways, but haven't found the right one.
Can you help me?
UPDATE:
After a question from Rakesh:
This is the activity parameter:
#activity('LookupLastPipelineRunDate').output.firstRow
I have reproduced the above and got the below results.
My source sample data is from a SQL database.
For the demo, I have used a Set variable activity for the date and given it a sample date like below.
I created a string parameter and passed this variable's value to it.
In your case, pass the lookup firstRow output date.
I have used the below dataflow expression in the query of the dataflow source and got the desired result:
concat('select * from dbo.table1 where d1 >=','\'',$date_value,'\'')
Result in a target SQL table.
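For reference, once $date_value is substituted, the concat() expression above effectively sends a query like the following to the source (dbo.table1 and d1 are the sample names used in this answer; the date is the example value from the question):

select * from dbo.table1 where d1 >= '2022-10-26'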
I have created a Set variable activity:
The first pipeline still returns the right date.
I even converted it to datetime, just to be sure.
I can create a variable with type string.
Code:
#activity('LookupLastPipelineRunDate').output.firstRow
Regardless of the Set variable activity failing, it looks like the date comes in nicely as an input to the Set variable activity.
And I still get an error:
When I read this error message, it says that you can't put a date in a string variable. But I can only choose string, boolean, and array, so there is no better option for this.
I also reviewed a website about this issue.
Therefore, I have altered the table which contains the source data that I use in the dataflow.
I deleted the column LastModifiedDate because it had datatype datetime.
Then I created the same column with datatype datetime2.
I did this because I read that datetime2 has fewer problems with conversions.
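For reference, the same change can be made in place with T-SQL instead of dropping and recreating the column. This is just a sketch, assuming the Noordwind.Products table from the query above and that the column should stay nullable:

alter table Noordwind.Products
    alter column LastModifiedDate datetime2 null;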

Insert records into Spark SQL table

I have created a Spark SQL table like the one below through Azure Databricks:
create table sample1(price double)
The actual file has data like 'abc' instead of a double value.
While inserting the string value 'abc' into the double column, it is accepted as NULL without any failure. My concern is: why are we not getting any error? I want a failure message in this case.
Please let me know if I'm missing something. I want to disable the implicit conversion of datatypes.
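This question has no answer in the thread, but one option to try (an assumption on my part, and it requires Spark 3.0 or later, where ANSI mode is available) is to enable ANSI SQL mode, which makes invalid casts fail instead of silently producing NULL:

-- Default behaviour, which is why 'abc' ends up as NULL in the double column:
select cast('abc' as double);        -- returns NULL, no error

-- Assumed option: turn on ANSI mode for the session (Spark 3.0+), then the same cast fails
set spark.sql.ansi.enabled = true;
select cast('abc' as double);        -- now raises a cast error instead of returning NULL

With ANSI mode enabled, inserts that rely on that implicit string-to-double conversion should raise an error rather than storing NULL.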

How to check null values in a JSON property in Stream Analytics?

I am passing the following JSON input from Event Hub to Stream Analytics.
{"meter_totalcycleenergy":null,"Test2": 20}, {"meter_totalcycleenergy":40,"Test2":20}
But the job is failing with the following error:
Encountered error trying to write 1 event(s): Cannot convert from property 'meter_totalcycleenergy' of type 'System.String' to column 'meter_totalcycleenergy' of type 'System.Single'.
How do I handle such conditions?
I think JSON nulls are not exactly SQL NULLs, so what would be the proper way to check for null values in a query?
The datatype of meter_totalcycleenergy is float in my database.
You can use IS NOT NULL. For example:
select *
from input
where meter_totalcycleenergy is not null
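If you would rather keep the event than drop it, another option is to substitute a default value with COALESCE. This is only a sketch: it assumes 0.0 is an acceptable placeholder for your float column and that 'output' is the name of your configured sink.

select
    coalesce(meter_totalcycleenergy, 0.0) as meter_totalcycleenergy,
    Test2
into output
from input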

VS 2005 SSIS Error value origin

I have an SSIS package created in VS 2005 that has started to give me the following error:
[Lawson Staging Table [4046]] Error: There was an error with input column "JOB_CODE" (4200) on input "OLE DB Destination Input" (4059). The column status returned was: "The value violated
the integrity constraints for the column.".
My first question is: what are the 4046, 4200 & 4059 values following my table, column and destination?
My second question is about the integrity constraint message. The destination table is a heap (no keys or indexes) with no constraints. The destination column is defined as varchar(10). The input column is from Oracle, is defined as char(9), and is called job_code. So, where is there an integrity constraint defined?
The final question is about the select statement; it looks like the following:
Select ...
,lpad(trim(e.job_code),10,'0') as job_code ...
If I take the lpad and trim functions out, it works, but I need these functions in place because my spec calls for a fixed-length column padded with leading zeros. This column returns data as expected in TOAD but fails in the SSIS package. Does anyone see an issue with how the functions are being used?
Since this package worked in the past but suddenly started to throw this error, I'm assuming that new invalid data has come into play. However, recently added rows don't seem to be any different than historical records.
Those numbers are most likely the IDs assigned to each task/table/column, etc.
You could probably go to the advanced editor of the data flow task and look at the input and output properties. You can see that there is an ID assigned to each input and to each column.
Next: the error that you are getting usually occurs when the "Allow Nulls" option is unchecked.
Try this (a T-SQL version of the same change is sketched after these steps):
Look at the name of the column for this error/warning.
Go to SSMS and find the table
Allow Nulls for that Column
Save the table
Rerun the SSIS package
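If you'd rather make that change in T-SQL than through the SSMS table designer, a sketch would look like this (the staging table name dbo.LawsonStaging is hypothetical; the column type matches the varchar(10) described in the question):

alter table dbo.LawsonStaging
    alter column JOB_CODE varchar(10) null;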

Error while Querying : The value of a host variable in the EXECUTE or OPEN statement is too large for its corresponding use

On trying to use a select query statement, I get this error. The input variable has 8 characters, just as expected.
I don't know why this error comes up for a select query, because a select query should simply run and return matching rows if data is available, or no rows otherwise.
Hibernate is used. Even in the mapping it is correctly mapped as length 8.
This is what I found in the log file:
Cause = com.ibm.db2.jcc.a.SqlException: The value of a host variable in the EXECUTE or OPEN statement is too large for its corresponding use.
Has anybody come across this error before? Please suggest some solutions or explain why this error occurs.
One possibility: this issue can occur even in a SELECT statement. When the parameter passed into the query is larger than the size of the column's datatype, this error pops up (see the example and sketch below).
Example:
Datatype - CHAR(12)
Search Param: "123456789012345"
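To make that example concrete, here is a hedged illustration (the table and column names are hypothetical; the point is that the bound host variable is longer than the column's declared length):

create table accounts (account_no char(12));    -- column declared as CHAR(12)
select * from accounts where account_no = ?;    -- fails when the bound value is
                                                -- '123456789012345' (15 characters > 12)

The statement is rejected before it can run and return zero rows, so validating or truncating the search parameter to the column's declared length before binding should avoid the error.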