NiFi PutSQL processor raises an exception on Hive on a simple insert

I'm feeding the PutSQL processor with flow files like this one:
insert into test_nifi values ( '1476781027812');
I also tried the version without the final ';'; the results are the same.
2016-10-18 10:49:58,858 ERROR [Timer-Driven Process Thread-4] o.apache.nifi.processors.standard.PutSQL PutSQL[id=d3103678-0157-1000-0000-000036cdfdbc] Failed to update database for [StandardFlowFileRecord[uuid=b0e562b4-e974-4262-9b0b-c968f3488da4,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1476666309626-4, container=default, section=4], offset=332174, length=48],offset=0,name=2479999684241677.avro,size=48]] due to java.sql.SQLException: Method not supported; it is possible that retrying the operation will succeed, so routing to retry: java.sql.SQLException: Method not supported
2016-10-18 10:49:58,860 ERROR [Timer-Driven Process Thread-4] o.apache.nifi.processors.standard.PutSQL
java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveConnection.commit(HiveConnection.java:614) ~[na:na]
at org.apache.commons.dbcp.DelegatingConnection.commit(DelegatingConnection.java:334) ~[na:na]
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.commit(PoolingDataSource.java:211) ~[na:na]
at org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:371) ~[nifi-standard-processors-1.0.0.jar:1.0.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
The SelectHiveQL processor works with no problem.
If PutSQL is broken (at least on Hive), how can I execute SQL that does not return data?
I'm connecting to Apache Hive (version 1.1.0-cdh5.5.2).
The table where I'm inserting the data is defined as:
CREATE TABLE `test_nifi`(
`value` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://hdfs-prod/user/hive/warehouse/unifieddata.db/test_nifi';
Inserts from Beeline are working.

I solved it by myself... the correct processor is PutHiveQL :P
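For anyone landing here: PutHiveQL executes the HiveQL it finds in the flow file content through a Hive connection pool controller service, and it never issues the JDBC commit() call that PutSQL makes and that Hive's driver leaves unimplemented. A sketch of a parameterized flow file for PutHiveQL, assuming the hiveql.args.N.type / hiveql.args.N.value attribute convention described in the processor's documentation (the values here are just the ones from this question):
flow file content:    INSERT INTO test_nifi VALUES (?)
flow file attributes: hiveql.args.1.type  = 12    (java.sql.Types.VARCHAR)
                      hiveql.args.1.value = 1476781027812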

Related

Serialization issue with Kafka Connect and Redis sink

I have created a table in ksqlDB with both the key and value formats explicitly set to Avro, like below.
create table region_counts ( -- table name assumed; it was omitted in the original statement
  region varchar(10) primary key,
  male_count integer,
  female_count integer
) with (
  kafka_topic='test',
  key_format='avro',
  value_format='avro',
  partitions=1,
  replicas=1
);
Now I want the data from ksqlDB to be sinked to Redis using Kafka Connect, but I am encountering the record conversion issue below:
org.apache.kafka.connect.errors.ConnectException: failed to convert record
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:101)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kafka.connect.errors.ConnectException: unsupported command schema io.confluent.ksql.avro_schemas.KsqlDataSourceSchema
at io.github.jaredpetersen.kafkaconnectredis.sink.writer.RecordConverter.convert(RecordConverter.java:57)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:98)
... 12 more
{"type":"log", "host":"ckaf-kc-csf-kafka-connect-redis-55cb89b7bb-v2rtz", "level":"ERROR", "neid":"kafka-connect-455e22730a28462c969de25d9f54451e", "system":"kafka-connect", "time":"2022-10-18T10:16:57.224Z", "timezone":"UTC", "log":{"message":"task-thread-kafka-connect-redis-18-0 - org.apache.kafka.connect.runtime.WorkerSinkTask - WorkerSinkTask{id=kafka-connect-redis-18-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: failed to convert record"}}
org.apache.kafka.connect.errors.ConnectException: failed to convert record
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:101)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kafka.connect.errors.ConnectException: unsupported command schema io.confluent.ksql.avro_schemas.KsqlDataSourceSchema
at io.github.jaredpetersen.kafkaconnectredis.sink.writer.RecordConverter.convert(RecordConverter.java:57)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:98)
... 12 more
{"type":"log", "host":"ckaf-kc-csf-kafka-connect-redis-55cb89b7bb-v2rtz", "level":"ERROR", "neid":"kafka-connect-455e22730a28462c969de25d9f54451e", "system":"kafka-connect", "time":"2022-10-18T10:16:57.224Z", "timezone":"UTC", "log":{"message":"task-thread-kafka-connect-redis-18-0 - org.apache.kafka.connect.runtime.WorkerTask - WorkerSinkTask{id=kafka-connect-redis-18-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted"}}
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:631)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kafka.connect.errors.ConnectException: failed to convert record
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:101)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
... 10 more
Caused by: org.apache.kafka.connect.errors.ConnectException: unsupported command schema io.confluent.ksql.avro_schemas.KsqlDataSourceSchema
at io.github.jaredpetersen.kafkaconnectredis.sink.writer.RecordConverter.convert(RecordConverter.java:57)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:98)
============================
The Redis sink connector properties are shown below:
"name": "kafka-connect-redis-103",
"config": {
"connector.class": "io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkConnector",
"tasks.max": "1",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "our service url",
"key.converter.enhanced.avro.schema.support": "true",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "our service url",
"value.converter.enhanced.avro.schema.support": "true",
"topics": "test",
"redis.client.mode": "Cluster",
"redis.uri": "our service url",
"redis.operation.timeout.ms": 10000,
"redis.password": "ciredis",
"redis.database":3
}
What is wrong here? I have the Avro converter properties specified, yet I am still getting a record conversion error.
Help here would be appreciated.

Spark SQL query `SHOW VIEWS IN` through Hive metastore fails with `missing 'FUNCTIONS' at 'IN'`

I have Spark (2.4.4) with a Hive metastore running. When accessing it through JDBC/ODBC with a query like
SHOW VIEWS IN space1
I get the following error:
[2020-03-18T10:54:57,722][DEBUG][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.execution.SparkSqlParser][][] Parsing command: SHOW VIEWS IN `space1`
[2020-03-18T10:54:57,733][ERROR][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation][][] Error executing query, currentState RUNNING,
org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'IN'(line 1, pos 11)
== SQL ==
SHOW VIEWS IN `space1`
-----------^^^
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:232) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) [hadoop-common-2.8.5.jar:?]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2020-03-18T10:54:57,765][ERROR][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation][][] Error running hive query:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'IN'(line 1, pos 11)
== SQL ==
SHOW VIEWS IN `space1`
-----------^^^
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:269) ~[spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) [hadoop-common-2.8.5.jar:?]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
For example, I get it when I connect Tableau to my Spark instance, or when I fire the query explicitly via a JDBC-connected SQL tool.
Any ideas?
Note that a query like
SELECT * FROM `employer` WHERE `Name` IN ('John','Alex');
finishes without a problem!
Somebody else had this problem before but got no response: https://community.powerbi.com/t5/Desktop/Spark-connector-issue/td-p/952481
The SHOW VIEWS command only works since Spark 3; that is why you are seeing this error.
See: https://issues.apache.org/jira/browse/SPARK-31113
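If you are stuck on Spark 2.4, a possible workaround (assuming your views are registered in the metastore, where Spark lists them alongside tables) is:
SHOW TABLES IN space1
The output includes an isTemporary column but does not flag views as such, so to tell a view from a table you would have to inspect each entry, e.g. with DESCRIBE EXTENDED.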

Hive throws an error for an aggregate function

When I try to execute a non-aggregate query using Hive, it works fine, something like below:
select * from airlines_analysis.airline;
select airline.month from airlines_analysis.airline;
But when I execute an aggregate query, something like below:
select count(distinct uniquecarrier) from airlines_analysis.airline;
I face the issue below:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Which version of Hive is this?
Are you using the Tez engine? Tez sometimes behaves weirdly when dealing with huge data sets. Try this:
set hive.execution.engine=mr;
select count(distinct uniquecarrier) from airlines_analysis.airline;
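If you would rather stay on Tez, giving its containers more memory sometimes clears this kind of return-code-1 failure; a sketch with an illustrative size (pick a value that fits your cluster):
set hive.execution.engine=tez;
set hive.tez.container.size=4096;
Either way, the "return code 1" message hides the root cause; the Tez application logs in YARN usually show the real error.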

Beeline unable to create external HBase table but Hive CLI can

I have an HBase 1.2.3 cluster and installed Hive 2.1.1. When I try to create an external HBase table through Beeline/HiveServer2, I get an exception, but if I use the Hive CLI it works. The create statement is as follows:
create external table hbase_xing(id int, name string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping" = ":key,f:name")
tblproperties("hbase.table.name" = "xing");
The exception is:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri Jan 06 19:38:24 CST 2017, null, java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:205)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:742)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
at com.sun.proxy.$Proxy24.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:830)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:845)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3992)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:332)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1166)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:347)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
... 3 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to c3.estarspace.com/192.168.0.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1239)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1037)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:844)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:572)
) (state=08S01,code=1)
I put hbase.zookeeper.quorum into hive-site.xml.
Tracking the source code, it looks like something goes wrong when Hive calls the HBase client, which in turn calls the region server.
This problem seems to exist only in HBase 1.2.3. After upgrading to HBase 1.2.4, Hive functions correctly.

Unable to produce data to Hazelcast in Apache Camel

I have the following route configured in Apache Camel:
from("direct:hazelCast")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.PUT_OPERATION))
.toF("hazelcast:map:testHazel", HazelcastConstants.MAP_PREFIX);
But when the above route is invoked, I'm getting the following error:
java.lang.NullPointerException: Null key is not allowed!
at com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:95)
at com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:89)
at org.apache.camel.component.hazelcast.map.HazelcastMapProducer.put(HazelcastMapProducer.java:125)
at org.apache.camel.component.hazelcast.map.HazelcastMapProducer.process(HazelcastMapProducer.java:60)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:62)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.MulticastProcessor.doProcessParallel(MulticastProcessor.java:814)
at org.apache.camel.processor.MulticastProcessor.access$200(MulticastProcessor.java:84)
at org.apache.camel.processor.MulticastProcessor$1.call(MulticastProcessor.java:314)
at org.apache.camel.processor.MulticastProcessor$1.call(MulticastProcessor.java:299)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The code that I used was almost identical to what is in the Camel docs: http://camel.apache.org/hazelcast-component.html
The following is the code snippet that I used to produce the data to Hazelcast in Camel:
from("direct:hazelCast")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.PUT_OPERATION))
.setHeader(HazelcastConstants.OBJECT_ID, constant("SOME BLA BLA"))
.split()
.tokenizeXML(<SOMEValidTag>).streaming()
.unmarshal(jaxb)
.convertBodyTo(<Valid>.class)
.marshal().json(JsonLibrary.Jackson)
.toF("hazelcast:%stestHazel", HazelcastConstants.MAP_PREFIX);
Note: we need to convert the body to a class which should be serializable.
You need to set the object id (the HazelcastConstants.OBJECT_ID header); it is missing.
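For clarity, a minimal sketch of that fix applied to the original route (the header value is illustrative; any non-null, serializable value works as the map key):
from("direct:hazelCast")
    // OBJECT_ID becomes the Hazelcast map key; without it the put fails with "Null key is not allowed!"
    .setHeader(HazelcastConstants.OBJECT_ID, simple("${exchangeId}"))
    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastConstants.PUT_OPERATION))
    .toF("hazelcast:%stestHazel", HazelcastConstants.MAP_PREFIX);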