HDFS Kafka Connect - Hive integration: create table exception

I am trying to sink data into HDFS, and that part works fine, but when I configure the Hive integration so the data is visible in Hive, I get a URI-related error.
I have already tried using store.url in place of hdfs.url, but that failed with a NullPointerException.
My hdfs-sink.json config:
"connector.class": "io.confluent.connect.hdfs3.Hdfs3SinkConnector",
"tasks.max": "1",
"topics": "users",
"hdfs.url": "hdfs://192.168.1.221:9000",
"flush.size": "5",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"confluent.topic.bootstrap.servers": "localhost:9092",
"confluent.topic.replication.factor": "1",
"key.converter.schema.registry.url":"http://localhost:8081" ,
"value.converter.schema.registry.url":"http://localhost:8081",
"hive.integration":"true",
"hive.metastore.uris":"thrift://192.168.1.221:9083",
"schema.compatibility":"BACKWARD"
I am getting the error below:
[2019-09-12 15:12:33,533] ERROR Creating Hive table threw unexpected error (io.confluent.connect.hdfs3.TopicPartitionWriter)
io.confluent.connect.storage.errors.HiveMetaStoreException: Hive MetaStore exception
at io.confluent.connect.storage.hive.HiveMetaStore.doAction(HiveMetaStore.java:99)
at io.confluent.connect.storage.hive.HiveMetaStore.createTable(HiveMetaStore.java:223)
at io.confluent.connect.hdfs3.avro.AvroHiveUtil.createTable(AvroHiveUtil.java:52)
at io.confluent.connect.hdfs3.DataWriter$3.createTable(DataWriter.java:285)
at io.confluent.connect.hdfs3.TopicPartitionWriter$1.call(TopicPartitionWriter.java:796)
at io.confluent.connect.hdfs3.TopicPartitionWriter$1.call(TopicPartitionWriter.java:792)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: hdfs://192.168.1.221:9000./null/topics/users)
and also:
ERROR Altering Hive schema threw unexpected error (io.confluent.connect.hdfs3.TopicPartitionWriter)
io.confluent.connect.storage.errors.HiveMetaStoreException: Hive table not found: default.users
at io.confluent.connect.storage.hive.HiveMetaStore$9.call(HiveMetaStore.java:297)
at io.confluent.connect.storage.hive.HiveMetaStore$9.call(HiveMetaStore.java:290)
at io.confluent.connect.storage.hive.HiveMetaStore.doAction(HiveMetaStore.java:97)
at io.confluent.connect.storage.hive.HiveMetaStore.getTable(HiveMetaStore.java:303)
at io.confluent.connect.hdfs3.avro.AvroHiveUtil.alterSchema(AvroHiveUtil.java:61)
at io.confluent.connect.hdfs3.DataWriter$3.alterSchema(DataWriter.java:290)
at io.confluent.connect.hdfs3.TopicPartitionWriter$2.call(TopicPartitionWriter.java:811)
at io.confluent.connect.hdfs3.TopicPartitionWriter$2.call(TopicPartitionWriter.java:807)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Related

java.io.IOException: Error closing multipart upload

I am working on PySpark code which processes terabytes of data and writes it to S3.
After processing the data, I am getting the error below:
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.IOException: Error closing multipart upload
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.doMultiPartUpload(MultipartUploadOutputStream.java:441)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:421)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:74)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:108)
at org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
I tried setting the configurations below, but I am still getting the same error:
self._spark_session.conf.set("spark.hadoop.fs.s3a.multipart.threshold", 2097152000)
self._spark_session.conf.set("spark.hadoop.fs.s3a.multipart.size", 104857600)
self._spark_session.conf.set("spark.hadoop.fs.s3a.connection.maximum", 500)
self._spark_session.conf.set("spark.hadoop.fs.s3a.connection.timeout", 600000)
self._spark_session.conf.set("spark.hadoop.fs.s3.maxRetries", 50)
Can someone please help me resolve this issue?

Serialization issue with Kafka Connect and Redis sink

I have created a table in ksqlDB with both the key and value formats explicitly set to Avro, like below.
create table (
  region varchar(10) primary key,
  male_count integer,
  female_count integer
) with (
  kafka_topic='test',
  key_format='avro',
  value_format='avro',
  partitions=1,
  replicas=1
);
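To confirm what ksqlDB actually writes to the backing topic (and which key/value formats it detects), the topic can be printed from the ksqlDB CLI; a small sketch, with an arbitrary limit:
PRINT 'test' FROM BEGINNING LIMIT 5;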
Now I want to sink the data from ksqlDB to Redis using Kafka Connect, but I am encountering the record conversion issue below.
org.apache.kafka.connect.errors.ConnectException: failed to convert record
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:101)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kafka.connect.errors.ConnectException: unsupported command schema io.confluent.ksql.avro_schemas.KsqlDataSourceSchema
at io.github.jaredpetersen.kafkaconnectredis.sink.writer.RecordConverter.convert(RecordConverter.java:57)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:98)
... 12 more
{"type":"log", "host":"ckaf-kc-csf-kafka-connect-redis-55cb89b7bb-v2rtz", "level":"ERROR", "neid":"kafka-connect-455e22730a28462c969de25d9f54451e", "system":"kafka-connect", "time":"2022-10-18T10:16:57.224Z", "timezone":"UTC", "log":{"message":"task-thread-kafka-connect-redis-18-0 - org.apache.kafka.connect.runtime.WorkerSinkTask - WorkerSinkTask{id=kafka-connect-redis-18-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: failed to convert record"}}
org.apache.kafka.connect.errors.ConnectException: failed to convert record
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:101)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kafka.connect.errors.ConnectException: unsupported command schema io.confluent.ksql.avro_schemas.KsqlDataSourceSchema
at io.github.jaredpetersen.kafkaconnectredis.sink.writer.RecordConverter.convert(RecordConverter.java:57)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:98)
... 12 more
{"type":"log", "host":"ckaf-kc-csf-kafka-connect-redis-55cb89b7bb-v2rtz", "level":"ERROR", "neid":"kafka-connect-455e22730a28462c969de25d9f54451e", "system":"kafka-connect", "time":"2022-10-18T10:16:57.224Z", "timezone":"UTC", "log":{"message":"task-thread-kafka-connect-redis-18-0 - org.apache.kafka.connect.runtime.WorkerTask - WorkerSinkTask{id=kafka-connect-redis-18-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted"}}
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:631)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.kafka.connect.errors.ConnectException: failed to convert record
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:101)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
... 10 more
Caused by: org.apache.kafka.connect.errors.ConnectException: unsupported command schema io.confluent.ksql.avro_schemas.KsqlDataSourceSchema
at io.github.jaredpetersen.kafkaconnectredis.sink.writer.RecordConverter.convert(RecordConverter.java:57)
at io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkTask.put(RedisSinkTask.java:98)
============================
The Redis sink connector config is shown below:
"name": "kafka-connect-redis-103",
"config": {
"connector.class": "io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkConnector",
"tasks.max": "1",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "our service url",
"key.converter.enhanced.avro.schema.support": "true",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "our service url",
"value.converter.enhanced.avro.schema.support": "true",
"topics": "test",
"redis.client.mode": "Cluster",
"redis.uri": "our service url",
"redis.operation.timeout.ms": 10000,
"redis.password": "ciredis",
"redis.database":3
}
What is wrong here? I have specified the Avro converter properties, yet I am still getting the record conversion error.
Any help would be appreciated.

Spark SQL query `SHOW VIEWS IN` through Hive metastore fails with `missing 'FUNCTIONS' at 'IN'`

I have Spark (2.4.4) running with a Hive metastore. When accessing it through JDBC/ODBC with a query like
SHOW VIEWS IN space1
I get the following error:
[2020-03-18T10:54:57,722][DEBUG][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.execution.SparkSqlParser][][] Parsing command: SHOW VIEWS IN `space1`
[2020-03-18T10:54:57,733][ERROR][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation][][] Error executing query, currentState RUNNING,
org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'IN'(line 1, pos 11)
== SQL ==
SHOW VIEWS IN `space1`
-----------^^^
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:232) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) [hadoop-common-2.8.5.jar:?]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2020-03-18T10:54:57,765][ERROR][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation][][] Error running hive query:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'IN'(line 1, pos 11)
== SQL ==
SHOW VIEWS IN `space1`
-----------^^^
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:269) ~[spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) [hadoop-common-2.8.5.jar:?]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
For example, I get it when I connect Tableau to my Spark instance, or when I fire the query explicitly via a JDBC-connected SQL tool.
Any ideas?
Note that a query like
SELECT * FROM `employer` WHERE `Name` IN ('John','Alex');
finishes without a problem!
Also, somebody else hit this problem before but got no response: https://community.powerbi.com/t5/Desktop/Spark-connector-issue/td-p/952481
The SHOW VIEWS command only works since Spark 3, which is why you are seeing this error.
See: https://issues.apache.org/jira/browse/SPARK-31113
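On Spark 2.4, one possible workaround (sketched below; whether your BI tool lets you substitute the statement is another matter) is to list the tables of the database instead, since persistent views appear in that output as well:
SHOW TABLES IN space1;
From code, spark.catalog.listTables("space1") additionally reports each entry's tableType, which distinguishes views from tables.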

Beeline unable to create external HBase table but Hive CLI can

I have an HBase 1.2.3 cluster and have installed Hive 2.1.1. When I try to create an external HBase table through Beeline/HiveServer2, I get an exception, but it works from the Hive CLI. The create statement is as follows:
create external table hbase_xing(id int, name string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping" = ":key,f:name")
tblproperties("hbase.table.name" = "xing");
Exception is:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri Jan 06 19:38:24 CST 2017, null, java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:205)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:742)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
at com.sun.proxy.$Proxy24.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:830)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:845)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3992)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:332)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1166)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:347)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
... 3 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to c3.estarspace.com/192.168.0.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1239)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1037)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:844)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:572)
) (state=08S01,code=1)
I have put hbase.zookeeper.quorum into hive-site.xml.
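For reference, that entry in hive-site.xml has the shape sketched below; the ZooKeeper host names are placeholders, not from my setup:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>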
Tracing the source code, it looks like something goes wrong when Hive calls the HBase client, which in turn calls the region server.
This problem seems to exist only in HBase 1.2.3. After upgrading to HBase 1.2.4, Hive works correctly.

NiFi PutSQL processor raises an exception on Hive on a simple insert

I'm feeding the PutSQL processor with flowfiles like this one:
insert into test_nifi values ('1476781027812');
I also tried the version without the final ';'; the results are the same.
2016-10-18 10:49:58,858 ERROR [Timer-Driven Process Thread-4] o.apache.nifi.processors.standard.PutSQL PutSQL[id=d3103678-0157-1000-0000-000036cdfdbc] Failed to update database for [StandardFlowFileRecord[uuid=b0e562b4-e974-4262-9b0b-c968f3488da4,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1476666309626-4, container=default, section=4], offset=332174, length=48], offset=0,name=2479999684241677.avro,size=48]] due to java.sql.SQLException: Method not supported; it is possible that retrying the operation will succeed, so routing to retry: java.sql.SQLException: Method not supported
2016-10-18 10:49:58,860 ERROR [Timer-Driven Process Thread-4] o.apache.nifi.processors.standard.PutSQL
java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveConnection.commit(HiveConnection.java:614) ~[na:na]
at org.apache.commons.dbcp.DelegatingConnection.commit(DelegatingConnection.java:334) ~[na:na]
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.commit(PoolingDataSource.java:211) ~[na:na]
at org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:371) ~[nifi-standard-processors-1.0.0.jar:1.0.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.jar:1.0.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.jar:1.0.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
The SelectHiveQL processor works with no problem.
If PutSQL is broken (at least on Hive), how can I execute SQL code that does not return data?
I'm connecting to Apache Hive (version 1.1.0-cdh5.5.2).
The table I'm inserting the data into is defined as:
CREATE TABLE `test_nifi`(
`value` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://hdfs-prod/user/hive/warehouse/unifieddata.db/test_nifi';
Inserts from Beeline are working.
I solved it myself: the correct processor is PutHiveQL :P (The stack trace shows the failure inside HiveConnection.commit, which the Hive JDBC driver does not support, so PutSQL's commit step cannot succeed against Hive; PutHiveQL is designed for Hive and avoids it.)