Using BLOB in DBUnit

I have to test a class where we retrieve data from Oracle, from an XMLTYPE column. We are using BLOB for the cast, because the system is prepared to run on MySQL too:
BLOB salePlanXmlType = (BLOB) jdsLoad.getValueCell(0, "SALEPLAN");
In DBUnit, we first create the tables and then load the data. The loading part was fun, and I managed to load two XMLs using the tips found here.
However, I can't manage to create a table with a BLOB column in DBUnit. Here's the script I try to execute:
CREATE TABLE TSHT_SALEPLAN
(
SALEPLANCODE INTEGER,
VENDORCODE VARCHAR(10),
HOTELCODE INTEGER,
SALEPLAN BLOB
);
When I run the test with this script, I get the following error:
java.sql.SQLException: Wrong data type: BLOB in statement
[CREATE TABLE TSHT_SALEPLAN
(
SALEPLANCODE INTEGER,
VENDORCODE VARCHAR(10),
HOTELCODE INTEGER,
SALEPLAN BLOB]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.executeUpdate(Unknown Source)
I don't understand this, because BLOB seems to be supported by HSQLDB.
If I change the BLOB column definition to VARBINARY, it works, but then the cast to Blob in my code throws an exception.
Has anybody used BLOB in a create table statement with DBUnit?

The exception comes from an old version of HSQLDB (probably 1.8) that does not support BLOB. Use the latest 2.3.x jar instead.
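As a quick sanity check, you can ask the database your tests actually connect to which HSQLDB version it is running; HSQLDB 2.x also offers an Oracle compatibility mode that can make the same DDL behave more like production. This is only a sketch, and whether you need the syntax switch depends on your setup:
-- Should report 2.3.x or later once the new jar is on the test classpath
CALL DATABASE_VERSION();
-- Optional in HSQLDB 2.x: emulate Oracle syntax and types
SET DATABASE SQL SYNTAX ORA TRUE;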

Related

U-SQL External table error: 'Unable to cast object of type 'System.DBNull' to type 'System.Type'.'

I'm failing to create external tables for two specific tables from an Azure SQL DB;
I have already created a few external tables with no issues.
The only difference I can see between the failed and the successful external tables is that the failing tables contain geography type columns, so I think this is the issue, but I'm not sure.
CREATE EXTERNAL TABLE IF NOT EXISTS [Data].[Devices]
(
[Id] int
)
FROM SqlDbSource LOCATION "[Data].[Devices]";
Failed to connect to data source: 'SqlDbSource', with error(s): 'Unable to cast object of type 'System.DBNull' to type 'System.Type'.'
I solved it with a workaround instead of the external table:
I created a view that selects from an external rowset using EXECUTE:
CREATE VIEW IF NOT EXISTS [Data].[Devices]
AS
SELECT Id FROM EXTERNAL SqlDbSource
EXECUTE "SELECT Id FROM [Data].[Devices]";
This makes the script completely ignore the geography type column, which is currently not supported as a REMOTEABLE_TYPE for data sources by U-SQL.
Please have a look at my answer on the other thread you opened. To add to that, I would also recommend looking at how to create a table using a query; in the query you should be able to use extractors to create the tables. To read more about extractors, please have a look at this doc.
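For illustration, here is a minimal sketch of that query-based approach, assuming the device data is also available as a CSV file in the data lake; the file path, schema and table name below are assumptions, not taken from the question:
// Extract the data from a file instead of the remote SQL DB table
@devices =
    EXTRACT Id int
    FROM "/input/devices.csv"
    USING Extractors.Csv(skipFirstNRows : 1);

// Create a managed U-SQL table from the extracted rowset
CREATE TABLE [Data].[DevicesManaged]
(
    INDEX idx_id CLUSTERED (Id)
    DISTRIBUTED BY HASH (Id)
) AS SELECT Id FROM @devices;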
Hope this helps.

How to load data to Hive table and make it also accessible in Impala

I have a table in Hive:
CREATE EXTERNAL TABLE sr2015(
creation_date STRING,
status STRING,
first_3_chars_of_postal_code STRING,
intersection_street_1 STRING,
intersection_street_2 STRING,
ward STRING,
service_request_type STRING,
division STRING,
section STRING )
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde' WITH SERDEPROPERTIES (
'collection.delim'='\u0002',
'field.delim'=',',
'mapkey.delim'='\u0003',
'serialization.format'=',', 'skip.header.line.count'='1',
'quoteChar'= "\"")
Data is loaded into the table this way:
LOAD DATA INPATH "hdfs:///user/rxie/SR2015.csv" INTO TABLE sr2015;
Why is the table only accessible in Hive? When I attempt to access it in the HUE/Impala editor I get the following error:
AnalysisException: Could not resolve table reference: 'sr2015'
which seems to say there is no such table, but the table does show up in the left panel.
In impala-shell, the error is different:
ERROR: AnalysisException: Failed to load metadata for table: 'sr2015'
CAUSED BY: TableLoadingException: Failed to load metadata for table:
sr2015 CAUSED BY: InvalidStorageDescriptorException: Impala does not
support tables of this type. REASON: SerDe library
'org.apache.hadoop.hive.serde2.OpenCSVSerde' is not supported.
I have always thought that Hive tables and Impala tables are essentially the same, and that the only difference is that Impala is a more efficient query engine.
Can anyone help sort it out? Thank you very much.
Assuming that sr2015 is located in a DB called db, in order to make the table visible in Impala you need to issue either
invalidate metadata db;
or
invalidate metadata db.sr2015;
in impala-shell.
However, in your case the reason is probably the version of Impala you're using, since it doesn't support this table format at all.
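If the table format is indeed the blocker, a common workaround (a sketch only; the target table name and storage format are my own choices, not from the thread) is to copy the data in Hive into a format Impala can read and then refresh Impala's metadata:
-- In Hive: materialize the CSV-backed table in a format Impala supports
CREATE TABLE sr2015_parquet STORED AS PARQUET
AS SELECT * FROM sr2015;

-- In impala-shell: make the new table visible
INVALIDATE METADATA sr2015_parquet;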

Liquibase: Unable to recognize data type CLOB

I used Liquibase to reverse engineer a MySQL database, where I see the changeset for a CLOB data type generated as VARCHAR.
When I execute the changeset against a new environment, as expected the Profile column is created as VARCHAR instead of CLOB.
Is this a known issue, or is there a workaround provided by the API?
Liquibase version: 3.6.2
You have two options:
You can use updateSQL to generate a SQL file in which you can manually change the data type from VARCHAR to CLOB (see the sketch below).
You can use <sql> tags in your changelog file to let Liquibase generate it as you want, for example:
<sql>
CREATE TABLE (ID NUMBER, QUERY CLOB);
</sql>
In this case you would have to take care of the rollback by yourself.
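To illustrate the first (updateSQL) option, this is roughly what the manual edit could look like; the table and column names are hypothetical, not taken from the question:
-- Line produced by updateSQL (hypothetical schema):
-- CREATE TABLE USER_DATA (ID NUMBER, PROFILE VARCHAR(4000));
-- Edited by hand before running the script against the target database:
CREATE TABLE USER_DATA (ID NUMBER, PROFILE CLOB);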

Hive create table for json data

I am trying to create a Hive table that can read JSON data, but when I execute the create statement it throws an error.
Create statement:
CREATE TABLE employee_exp_json
( id INT,
fname STRING,
lname STRING,
profession STRING,
experience INT,
exp_service STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serede2.Jsonserede'
STORED AS TEXTFILE;
Error:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde:
org.apache.hadoop.hive.contrib.serede2.Jsonserede
I have also added the hive-json-serde.jar jar, but I'm still facing the same issue. I am creating this table on Cloudera, and the Hive version is 1.1.0.
The correct class name is
org.apache.hive.hcatalog.data.JsonSerDe
Refer: Hive SerDes
As for the other JAR you added, check its documentation; it uses a different class:
org.openx.data.jsonserde.JsonSerDe
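For the built-in HCatalog SerDe mentioned above, a minimal sketch of the corrected statement; the jar path is an assumption for a typical Cloudera parcel layout and may differ on your cluster:
-- Path is an assumption; locate hive-hcatalog-core.jar on your own install
ADD JAR /opt/cloudera/parcels/CDH/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core.jar;

CREATE TABLE employee_exp_json (
  id INT,
  fname STRING,
  lname STRING,
  profession STRING,
  experience INT,
  exp_service STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE;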
Try adding the json-serde-with-dependencies.jar.
You can download it from Download Hive Serde.
Also try the class
'org.openx.data.jsonserde.JsonSerDe'

SAP Vora dealing with decimal type

So I'm trying to create and load a Vora table from an ORC file created by the SAP BW archiving process on HDFS.
The Hive table automatically generated on top of that file by BW has, among other things, this column:
archreqtsn decimal(23,0)
An attempt to create a Vora table using that datatype fails with the error "Unsupported type (DecimalType(23,0)}) on column archreqtsn".
So, the biggest decimal supported seems to be decimal(18,0)?
The next thing I tried was to use either decimal(18,0) or string as the type for that column. But when attempting to load data from the file:
APPEND TABLE F002_5_F
OPTIONS (
files "/sap/bw/hb3/nldata/o_1ebic_1ef002__5/act/archpartid=p20170611052758000009000/000000_0",
format "orc" )
I'm getting another error:
com.sap.spark.vora.client.VoraClientException: Could not load table F002_5_F: [Vora [<REDACTED>.com.au:30932.1639407]] sap.hanavora.jdbc.VoraException: HL(9): Runtime error. (decimal 128 unsupported (c++ exception)).
An unsuccessful attempt to load a table might lead to an inconsistent table state. Please drop the table and re-create it if necessary. with error code 0, status ERROR_STATUS
What could be the workarounds for this issue of unsupported decimal types? In fact, I might not need that column in the Vora table at all, but I can't get rid of it in the ORC file.
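One possible direction, along the lines the question already hints at (a sketch only; everything except the archreqtsn column is a hypothetical placeholder, and this is not a confirmed SAP-supported path), is to rewrite the archive data through Hive, casting or dropping the unsupported column, and then load the Vora table from the rewritten ORC files instead:
-- In Hive: produce new ORC files without the decimal(23,0) column
CREATE TABLE f002_5_f_for_vora STORED AS ORC AS
SELECT CAST(archreqtsn AS STRING) AS archreqtsn,  -- or leave this column out entirely
       col1, col2                                  -- hypothetical remaining columns
FROM f002_5_f_bw;                                  -- hypothetical name of the BW-generated Hive table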