Sending XML with a tag that is not always empty - wso2-esb

How can I insert a null value into a table column of type datetime using Data Services in WSO2 EI v6.6.0? I have tried xsi:nil="true", but even when the tag is not empty, it still stores NULL in the database. The column is set to allow null values. Thank you!
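For reference, a payload along these lines is roughly what a data service operation receives; the operation and element names here (insertRecord, eventDate) are hypothetical placeholders. One thing to check: if xsi:nil="true" remains on the element even when it carries a value, the value may still be mapped to NULL, so the attribute should only be generated when the value is actually absent.

```xml
<!-- Hypothetical request payload: the xsi:nil attribute should only
     be present when the datetime value is actually NULL -->
<p:insertRecord xmlns:p="http://ws.wso2.org/dataservice"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <p:name>sample</p:name>
   <!-- NULL datetime: empty element flagged as nil -->
   <p:eventDate xsi:nil="true"/>
   <!-- Non-NULL datetime: value present, no nil attribute
   <p:eventDate>2020-05-01T00:00:00</p:eventDate> -->
</p:insertRecord>
```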

Related

Why is DataStage writing NULL string values as empty strings, while other data types correctly have NULL values

I have a DataStage parallel job that writes to Hive as the final stage in a long job. I can view the data that is about to be written and there are many NULL strings that I want to see in the Hive table.
However, when I view the table that is created, there are no NULL strings; they have all been converted into empty strings '' instead. Other data types, such as DECIMAL(5,0), do have NULL values and I can select them, e.g.
SELECT * FROM mytable WHERE decimal_column IS NULL;
The process for writing to Hive is to store the data in a staging table in a delimited text format. This is then pushed through a generic CDC process and results in data being written to a new partition in an ORC format table.
The only option I can see for handling NULL values is "Null Value" in the HDFS File Connector Stage. If I leave this blank then I get empty strings and if I type in 'NULL' then 'NULL' is what I get, i.e. not a NULL, but the string 'NULL'.
I can't change the process as it's in place for literally thousands of jobs already. Is there any way to get my string values to be NULL or am I stuck with empty strings?
According to the IBM documentation, specifying an empty string in double quotation marks ("") should help:
Null value
Specify the character or string that represents null values in the data. For a source stage, input data that has the value that you specify is set to null on the output link. For a target stage, in the output file that is written to the file system, null values are represented by the value that is specified for this property. To specify that an empty string represents a null value, specify "" (two double quotation marks).
Source: https://www.ibm.com/docs/en/iis/11.7?topic=reference-properties-file-connector

How to extract null values from Bigquery table as a TableRow object

I am trying to extract data from a BigQuery table using Google Cloud Dataflow.
My BigQuery table has a few empty values (for the String data type) and nulls (for numeric data types).
When I try to extract the data in Dataflow using BigQueryIO.readTableRows().fromQuery("select * from table_name"), I don't see the columns with null values.
How can I achieve this to get all the columns as part of the TableRow object?
Any help is appreciated
I believe this is the current behavior of the BigQueryIO connector: null values are omitted from the resulting element, but empty string values should be available. Can you simply treat values that are missing from the resulting element as null?
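That suggestion can be sketched as a small normalization step. This is an illustrative Python sketch, not the connector's API: rows are modeled as dicts (mirroring how TableRow drops null-valued keys), and EXPECTED_COLUMNS is a made-up schema list.

```python
# Sketch: fill in None for columns the connector dropped because
# their value was NULL. EXPECTED_COLUMNS is a hypothetical schema.
EXPECTED_COLUMNS = ["id", "name", "amount"]

def normalize_row(row):
    """Return a dict with every expected column, using None for
    keys that are absent from the incoming row."""
    return {col: row.get(col) for col in EXPECTED_COLUMNS}

row = {"id": 1, "name": ""}   # "amount" was NULL, so the key is absent
print(normalize_row(row))     # → {'id': 1, 'name': '', 'amount': None}
```

Note that the empty string for "name" survives as-is, which matches the observation that only nulls are dropped.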

IS NULL not working in where clause when this data is imported from text file into SQL Server database

I converted an Excel file into a text file, then imported this text file into a SQL Server database using the SQL Server import/export wizard tool. I found that IS NULL does not work in the WHERE clause on one column (see the following):
WHERE ID IS NULL;
BTW, ID column's data type is varchar(50) with null as the default.
Does anybody have any idea why IS NULL does not work here?
The import probably loaded the values as empty strings instead of as NULL. To handle this, change WHERE ID IS NULL to WHERE ID = ''.
If you want them to be NULL, you can change them:
UPDATE Your_Table SET ID = NULL WHERE ID = ''
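The behavior and the fix can be demonstrated end-to-end. This sketch uses Python's sqlite3 for illustration (the question is about SQL Server, but the NULL-vs-empty-string distinction is the same); table and column names are made up:

```python
import sqlite3

# Demonstrate why IS NULL misses rows that were imported as empty
# strings, and how the UPDATE converts them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("",), (None,), ("A1",)])

# Only the genuinely NULL row matches IS NULL; the imported empty
# string does not.
assert con.execute("SELECT COUNT(*) FROM t WHERE ID IS NULL").fetchone()[0] == 1

# Convert empty strings to NULL, as in the UPDATE above.
con.execute("UPDATE t SET ID = NULL WHERE ID = ''")
assert con.execute("SELECT COUNT(*) FROM t WHERE ID IS NULL").fetchone()[0] == 2
```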
David's answer is most likely what you need, but for completeness... Are the values truly NULL or are they the string 'null'?
Test:
SELECT TOP 5 * FROM TABLE WHERE ID IS NULL
Test:
SELECT TOP 5 * FROM TABLE WHERE ID='null'
If the latter returns results, then the string 'null' was stored, not an actual absent/NULL value.
You can set them to NULL as follows:
UPDATE T
SET ID=NULL
FROM TABLE T
WHERE ID='null'
If neither returned data, then find what is there... maybe they're all legit values.
SELECT COUNT(*), ID
FROM TABLE
GROUP BY ID
--HAVING COUNT(*) > 1 /* uncomment this line if too much comes back... */
ORDER BY ID
Then decide how to proceed...

Why does appending a NULL into a Date column result in the value 1900-01-01 being displayed?

I'm using an SSIS package to import a basic text file. It has 3 date fields, and sometimes some of them are empty.
The imported table shows empty fields, I suppose because they are varchar(50) datatypes. But then I need to insert those records from that table into another table, where those columns are defined as Date datatypes.
When I run the insert statement, the resulting values in the destination table all show 1900-01-01 for the date, rather than NULL or blank. I tried forcing the value to be null, but it didn't work:
CASE WHEN refresh_date IS NULL THEN NULL ELSE refresh_date END AS RefreshDate
How can I make a Date column just accept a blank or null value?
The varchar field should not be cast or converted to a date if it is empty. When a blank, or empty, string is cast to a date it equals '1900-01-01'. You can test this yourself:
SELECT CAST('' as date)
Using SSIS you are better off checking whether the varchar(50) field equals '', and if so setting it to NULL. Here is an example SQL query:
SELECT CASE WHEN importedfield = '' THEN NULL
ELSE CAST(importedfield as date)
END AS [NewFieldname]
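The same guard can also be applied before the value ever reaches SQL Server. This is a minimal Python sketch of the idea; the function name is made up for illustration:

```python
# Convert blank/whitespace strings to None (NULL) up front, instead
# of letting the database cast '' to 1900-01-01.
def to_date_or_null(value):
    """Return None for empty or whitespace-only input, otherwise the
    trimmed string, ready to be bound as a date parameter."""
    if value is None or value.strip() == "":
        return None
    return value.strip()

print(to_date_or_null(""))            # None, not 1900-01-01
print(to_date_or_null(" 2020-05-01")) # '2020-05-01'
```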
Try adding a Derived Column transformation in the Data Flow Task with one of the following expressions; the issue may be caused by empty strings (not NULL values).
If the input is of type Date:
ISNULL([DateColumn]) ? NULL(DT_DATE) : (DT_DATE)[DateColumn]
If the input column is of type string:
(ISNULL([DateColumn]) || [DateColumn] == "") ? NULL(DT_DATE) : (DT_DATE)[DateColumn]
And map this column to the destination column
Perhaps it's not a null value you're entering, but rather an empty string. You mentioned the values could be of VARCHAR(50) data type, so you might need to add some more logic. Try this:
NULLIF(LTRIM(RTRIM(refresh_date)),'')
You may also want to check the data type of this column and confirm that it accepts null values.

Pentaho SQL Generator assigning UNKNOWN data type to table column

Pentaho Data Integration (CE) 5.0.1-stable is generating SQL with a column whose data type is UNKNOWN.
example:
, ad_unit_type VARCHAR(255)
, creation_time UNKNOWN
, title VARCHAR(255)
Original Table Input column is DATETIME
There are no empty/null field values
There are no transformations on the field
Is there a way to force Pentaho to recognize the field as DATETIME in the transformation stream?
1. Insert a "Select values" step
2. Click to open/edit the step
3. Tab over to "Meta-data"
4. Create a new field with the stream field name (e.g. creation_time)
5. Set the type to Date
6. Press OK
You can also define the data type directly in the Table Input step (using SQL's CAST/CONVERT).
Now, if you go to Action > View SQL, you can see that the UNKNOWN data type has been replaced with DATETIME.