We have a Presto (version 323-E.8) connector to a Ranger-enabled CDP Hive 3 cluster. I can run SELECT queries on the existing Hive ORC-formatted tables, but I cannot create or drop any views in the Hive metastore; every attempt fails with a permissions error. My admin has granted the user all permissions in Ranger and AD, and I can perform all of these operations from Beeline with the same user on the server.
Hive Properties:
connector.name=hive-hadoop2
hive.metastore.uri=thrift://XXXXX
hive.views-execution.enabled=true
hive.metastore.authentication.type=KERBEROS
hive.metastore.service.principal=hive/_HOST@XXXX
hive.metastore.client.principal=XXXXX
hive.metastore.client.keytab=/abc/xxxx.keytab
hive.hdfs.wire-encryption.enabled=false
hive.metastore.thrift.impersonation.enabled=true
hive.config.resources=/etc/cdp/core-site.xml,/etc/cdp/hdfs-site.xml,/etc/cdp/hive-site.xml
hive.hdfs.authentication.type=KERBEROS
hive.hdfs.presto.principal=hdfs/_HOST@XXXXX
hive.hdfs.presto.keytab=/abc/xxxx.keytab
hive.security=ranger
ranger.policy-rest-url=https://XXXXX:6182
ranger.service-name=cm_hive
ranger.authentication-type=KERBEROS
ranger.kerberos-principal=XXXX
ranger.kerberos-keytab=/abc/xxxx.keytab
ranger.plugin-policy-ssl-config-file=/abc/ssl-client.xml
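One thing worth checking, since hive.metastore.thrift.impersonation.enabled=true is set: in impersonation mode the metastore client principal must be allowed to act as a Hadoop proxy user for the end users it impersonates. A minimal core-site.xml sketch on the metastore side, assuming (as a placeholder) that the client principal's short name is presto:
<property>
  <name>hadoop.proxyuser.presto.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.presto.groups</name>
  <value>*</value>
</property>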
Error:
io.prestosql.spi.PrestoException: Operation type CREATE_VIEW not allowed for user:XXXXX
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.createTable(ThriftHiveMetastore.java:1036)
at io.prestosql.plugin.hive.metastore.thrift.BridgingHiveMetastore.createTable(BridgingHiveMetastore.java:184)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.createTable(CachingHiveMetastore.java:524)
at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.createTable(CachingHiveMetastore.java:524)
at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore$CreateTableOperation.run(SemiTransactionalHiveMetastore.java:2692)
at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore$Committer.executeAddTableOperations(SemiTransactionalHiveMetastore.java:1668)
at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore$Committer.access$1000(SemiTransactionalHiveMetastore.java:1282)
at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore.commitShared(SemiTransactionalHiveMetastore.java:1225)
at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore.commit(SemiTransactionalHiveMetastore.java:991)
at io.prestosql.plugin.hive.HiveMetadata.commit(HiveMetadata.java:2408)
at io.prestosql.plugin.hive.HiveConnector.commit(HiveConnector.java:202)
at io.prestosql.transaction.InMemoryTransactionManager$TransactionMetadata$ConnectorTransactionMetadata.commit(InMemoryTransactionManager.java:595)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at io.airlift.concurrent.BoundedExecutor.drainQueue(BoundedExecutor.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Operation type CREATE_VIEW not allowed for user:XXXXX
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_result$create_table_resultStandardScheme.read(ThriftHiveMetastore.java:52658)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_result$create_table_resultStandardScheme.read(ThriftHiveMetastore.java:52626)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_result.read(ThriftHiveMetastore.java:52552)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_table(ThriftHiveMetastore.java:1490)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_table(ThriftHiveMetastore.java:1477)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at io.prestosql.plugin.base.util.LoggingInvocationHandler.handleInvocation(LoggingInvocationHandler.java:60)
at com.google.common.reflect.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:86)
at com.sun.proxy.$Proxy370.create_table(Unknown Source)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastoreClient.createTable(ThriftHiveMetastoreClient.java:161)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.lambda$createTable$51(ThriftHiveMetastore.java:1024)
at io.prestosql.plugin.hive.metastore.thrift.ThriftMetastoreApiStats.lambda$wrap$0(ThriftMetastoreApiStats.java:42)
at io.prestosql.plugin.hive.util.RetryDriver.run(RetryDriver.java:130)
at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.createTable(ThriftHiveMetastore.java:1022)
... 19 more
Related
I use MinIO as the Hive storage system, and there is no problem when I execute query statements like 'select * from table'.
But when I execute an aggregate query like 'select max(age) from student', I get an error:
java.nio.file.AccessDeniedException: hive: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Unable to load credentials from service endpoint
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:187)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:375)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:311)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.hive.ql.exec.Utilities.isEmptyPath(Utilities.java:2610)
at org.apache.hadoop.hive.ql.exec.Utilities.isEmptyPath(Utilities.java:2606)
at org.apache.hadoop.hive.ql.exec.Utilities$GetInputPathsCallable.call(Utilities.java:3432)
at org.apache.hadoop.hive.ql.exec.Utilities.getInputPaths(Utilities.java:3370)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:359)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Unable to load credentials from service endpoint
at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:159)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1166)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:762)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:724)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4368)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4315)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1344)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1284)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:376)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 39 more
Should I add some configuration to my filesystem setup?
Yes, we are facing the same issue. You can open a tunnel from your local machine to the Amazon instance to check the access.
A Medium article describes a solution based on a custom credentials-provider class:
https://medium.com/expedia-group-tech/service-slow-to-retrieve-aws-credentials-ebc02a38e95b
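For context: a plain 'select * from table' can often be served by a simple fetch task, while the aggregation launches a MapReduce job (visible in the stack trace) whose tasks must obtain S3 credentials themselves, which is why only the second query fails. For MinIO, the usual fix is to put static S3A credentials and the endpoint into core-site.xml; a minimal sketch, with the endpoint and keys as placeholders:
<property>
  <name>fs.s3a.endpoint</name>
  <value>http://minio-host:9000</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>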
I have Spark (2.4.4) with a Hive metastore running. When accessing it through JDBC/ODBC with a query like
SHOW VIEWS IN space1
I get the following error:
[2020-03-18T10:54:57,722][DEBUG][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.execution.SparkSqlParser][][] Parsing command: SHOW VIEWS IN `space1`
[2020-03-18T10:54:57,733][ERROR][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation][][] Error executing query, currentState RUNNING,
org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'IN'(line 1, pos 11)
== SQL ==
SHOW VIEWS IN `space1`
-----------^^^
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69) ~[spark-catalyst_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) ~[spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:232) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) [hadoop-common-2.8.5.jar:?]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2020-03-18T10:54:57,765][ERROR][HiveServer2-Background-Pool: Thread-203][org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation][][] Error running hive query:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'IN'(line 1, pos 11)
== SQL ==
SHOW VIEWS IN `space1`
-----------^^^
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:269) ~[spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:175) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844) [hadoop-common-2.8.5.jar:?]
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:185) [spark-hive-thriftserver_2.11-2.4.4.jar:2.4.4]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
I get it, for example, when I connect Tableau to my Spark instance, or when I fire the query explicitly via a JDBC-connected SQL tool.
Any ideas?
Note that a query like
SELECT * FROM `employer` WHERE `Name` IN ('John','Alex');
finishes without a problem!
Somebody else also had this problem before but got no response: https://community.powerbi.com/t5/Desktop/Spark-connector-issue/td-p/952481
The SHOW VIEWS command is only supported since Spark 3.0. That is why you are seeing this error.
See: https://issues.apache.org/jira/browse/SPARK-31113
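As a rough workaround on Spark 2.4 you can list tables instead; in Spark 2.x, SHOW TABLES includes views in its output (this is a general observation, not taken from the linked ticket):
SHOW TABLES IN space1;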
I created an external table in Hive with partitions and then tried to populate it from an existing table; however, I get the following exceptions:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hive/warehouse/pavel.db/browserdatapart/.hive-staging_hive_2018-12-28_13-22-45_751_6056004898772238481-1/_task_tmp.-ext-10000/cityid=1/_tmp.000001_3 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:814)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:133)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:170)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
... 18 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hive/warehouse/pavel.db/browserdatapart/.hive-staging_hive_2018-12-28_13-22-45_751_6056004898772238481-1/_task_tmp.-ext-10000/cityid=1/_tmp.000001_3 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
at com.sun.proxy.$Proxy12.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1580)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1375)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:
According to what I have found online, these exceptions occur when the datanode can't communicate with the namenode or when you are running low on memory, but in my case everything is fine. I have already tried reformatting my namenode and datanode. I have also read https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo, and it didn't help me. What else could be the issue?
I am running on Tez. This works:
insert into table browserdatapart partition(cityid) select UserAgent,cityid from browserdata limit 100;
And this fails with the exception I provided:
insert into table browserdatapart partition(cityid) select UserAgent,cityid from browserdata;
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;
Setting the above parameters solved it for me. I guess Hive was not able to replicate data to the partitions that show up in the exception because there were more partitions than the configured maximum (my dataset has 224 partitions).
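For reference, the full sequence would look like the following; the first two toggles are the standard prerequisites for dynamic-partition inserts and are an assumption here, not part of the original answer:
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;
insert into table browserdatapart partition(cityid) select UserAgent, cityid from browserdata;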
I have an HBase 1.2.3 cluster and installed Hive 2.1.1. When I try to create an external HBase table through Beeline/HiveServer2, I get an exception, but it works fine from the Hive CLI. The create statement is as follows:
create external table hbase_xing(id int, name string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping" = ":key,f:name")
tblproperties("hbase.table.name" = "xing");
Exception is:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri Jan 06 19:38:24 CST 2017, null, java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:205)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:742)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:735)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
at com.sun.proxy.$Proxy24.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:830)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:845)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3992)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:332)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1166)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:347)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: callTimeout=120000, callDuration=128483: row 'xing,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=c3.estarspace.com,16020,1483694415877, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
... 3 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to c3.estarspace.com/192.168.0.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1239)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to c3.estarspace.com/192.168.0.13:16020 is closing. Call id=12, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1037)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:844)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:572)
) (state=08S01,code=1)
I put hbase.zookeeper.quorum into hive-site.xml.
Tracking the source code, it looks like something goes wrong when Hive calls the HBase client, which in turn calls the region server.
This problem seems to exist only in HBase 1.2.3. After upgrading to HBase 1.2.4, Hive functions correctly.
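For reference, the hbase.zookeeper.quorum entry in hive-site.xml looks like this (hostnames are placeholders):
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>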
I'm getting the following error when trying to sync the database tables in 0xDBE:
Method org.postgresql.jdbc4.Jdbc4Array.free() is not yet implemented.
The SQL statement:
with languages as (select oid as lang_oid, lanname as lang
from pg_catalog.pg_language),
routines as (select proname as r_name,
prolang as lang_oid,
oid as r_id,
xmin as r_state_number,
proargnames as arg_names,
proargmodes as arg_modes,
proargtypes::int[] as in_arg_types,
proallargtypes::int[] as all_arg_types,
proargdefaults as arg_defaults,
provariadic as arg_variadic_id,
prorettype as ret_type_id,
proisagg as is_aggregate,
proiswindow as is_window,
provolatile as volatile_kind
from pg_catalog.pg_proc
where pronamespace = oid(?)
and xmin::varchar::bigint > ?)
select *
from routines natural join languages
at org.jetbrains.jdba.jdbc.BaseExceptionRecognizer.recognizeException(BaseExceptionRecognizer.java:48)
at org.jetbrains.jdba.jdbc.JdbcIntermediateSession.recognizeException(JdbcIntermediateSession.java:347)
at org.jetbrains.jdba.jdbc.JdbcIntermediateCursor.fetch(JdbcIntermediateCursor.java:249)
at com.intellij.database.remote.jdba.impl.RemoteCursorImpl.fetch(RemoteCursorImpl.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$2.run(Transport.java:202)
at sun.rmi.transport.Transport$2.run(Transport.java:199)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:275)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:252)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:217)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:171)
at com.sun.proxy.$Proxy93.fetch(Unknown Source)
at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.execution.rmi.RemoteUtil.invokeRemote(RemoteUtil.java:124)
at com.intellij.execution.rmi.RemoteUtil.access$100(RemoteUtil.java:36)
at com.intellij.execution.rmi.RemoteUtil$RemoteInvocationHandler.invoke(RemoteUtil.java:229)
at com.sun.proxy.$Proxy94.fetch(Unknown Source)
at org.jetbrains.jdba.intermediate.AdaptIntermediateStructCollectingCursor.fetch(AdaptIntermediateStructCollectingCursor.java:107)
at org.jetbrains.jdba.core.BaseQueryRunner.fetchPack(BaseQueryRunner.java:88)
at org.jetbrains.jdba.core.BaseQueryRunner.run(BaseQueryRunner.java:70)
at com.intellij.dbm.postgre.PostgreIntrospector$SchemaRetriever.h(PostgreIntrospector.java:459)
at com.intellij.dbm.postgre.PostgreIntrospector$SchemaRetriever.access$600(PostgreIntrospector.java:177)
at com.intellij.dbm.postgre.PostgreIntrospector$SchemaRetriever$4.run(PostgreIntrospector.java:246)
at com.intellij.dbm.postgre.PostgreIntrospector$SchemaRetriever.a(PostgreIntrospector.java:203)
at com.intellij.dbm.postgre.PostgreIntrospector$SchemaRetriever.retrieve(PostgreIntrospector.java:242)
at com.intellij.dbm.postgre.PostgreIntrospector$2.run(PostgreIntrospector.java:169)
at org.jetbrains.jdba.core.BaseSession.inTransaction(BaseSession.java:88)
at org.jetbrains.jdba.core.BaseFacade$2.run(BaseFacade.java:93)
at org.jetbrains.jdba.core.BaseFacade.inSession(BaseFacade.java:125)
at org.jetbrains.jdba.core.BaseFacade.inTransaction(BaseFacade.java:89)
at com.intellij.dbm.postgre.PostgreIntrospector.a(PostgreIntrospector.java:165)
at com.intellij.dbm.postgre.PostgreIntrospector.b(PostgreIntrospector.java:153)
at com.intellij.dbm.postgre.PostgreIntrospector.introspect(PostgreIntrospector.java:86)
at com.intellij.database.dataSource.NativeSchemaLoader.a(NativeSchemaLoader.java:113)
at com.intellij.database.dataSource.NativeSchemaLoader.introspectAndAdapt(NativeSchemaLoader.java:67)
at com.intellij.database.dataSource.DatabaseSchemaLoader.loadDataSourceState(DatabaseSchemaLoader.java:107)
at com.intellij.database.dataSource.AbstractDataSource.refreshMetaData(AbstractDataSource.java:59)
at com.intellij.database.dataSource.AbstractDataSource$1.perform(AbstractDataSource.java:34)
at com.intellij.database.dataSource.AbstractDataSource$1.perform(AbstractDataSource.java:32)
at com.intellij.database.dataSource.AbstractDataSource.performJdbcOperation(AbstractDataSource.java:110)
at com.intellij.database.dataSource.AbstractDataSource.refreshMetaData(AbstractDataSource.java:32)
at com.intellij.database.dataSource.DataSourceUiUtil$2.run(DataSourceUiUtil.java:167)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:563)
at com.intellij.openapi.progress.impl.CoreProgressManager$2.run(CoreProgressManager.java:142)
at com.intellij.openapi.progress.impl.CoreProgressManager.a(CoreProgressManager.java:446)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:392)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:54)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:127)
at com.intellij.openapi.progress.impl.ProgressManagerImpl$1.run(ProgressManagerImpl.java:126)
at com.intellij.openapi.application.impl.ApplicationImpl$8.run(ApplicationImpl.java:367)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at org.jetbrains.ide.PooledThreadExecutor$1$1.run(PooledThreadExecutor.java:55)
Caused by: java.sql.SQLFeatureNotSupportedException: Method org.postgresql.jdbc4.Jdbc4Array.free() is not yet implemented.
at org.postgresql.Driver.notImplemented(Driver.java:727)
at org.postgresql.jdbc4.Jdbc4Array.free(Jdbc4Array.java:57)
at org.jetbrains.jdba.jdbc.JdbcValueGetters$AbstractArrayGetter.getValue(JdbcValueGetters.java:473)
at org.jetbrains.jdba.jdbc.JdbcRowFetchers$TupleFetcher.fetchRow(JdbcRowFetchers.java:163)
at org.jetbrains.jdba.jdbc.JdbcRowFetchers$TupleFetcher.fetchRow(JdbcRowFetchers.java:145)
at org.jetbrains.jdba.jdbc.JdbcRowsCollectors$ListCollector.collectRows(JdbcRowsCollectors.java:211)
at org.jetbrains.jdba.jdbc.JdbcRowsCollectors$ListCollector.collectRows(JdbcRowsCollectors.java:198)
at org.jetbrains.jdba.jdbc.JdbcIntermediateCursor.fetch(JdbcIntermediateCursor.java:245)
at com.intellij.database.remote.jdba.impl.RemoteCursorImpl.fetch(RemoteCursorImpl.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$2.run(Transport.java:202)
at sun.rmi.transport.Transport$2.run(Transport.java:199)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
However, when executing queries from the console, the data I get is correct. I'm not sure what's failing here, as I don't have any problems when using pgAdmin.
Has this happened to anyone else?
I solved the issue by re-downloading the JDBC driver 0xDBE was using. Right-click on the data source, go to its preferences, erase the driver files currently being used, and download them again.
You have to restart the IDE after re-downloading the driver for the changes to take effect.
I was able to fix this issue by opening the data source settings page (where you enter the DB credentials), switching to the Options tab, and enabling 'Introspect using JDBC metadata'.
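For context, 'Introspect using JDBC metadata' makes the tool enumerate schema objects through the standard java.sql.DatabaseMetaData API instead of running vendor-specific catalog queries like the one above, which sidesteps the unimplemented Jdbc4Array.free() call. A minimal sketch of that approach (connection URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcIntrospectionDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection URL and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret")) {
            DatabaseMetaData md = conn.getMetaData();
            // List tables and views in the public schema via portable JDBC metadata calls.
            try (ResultSet rs = md.getTables(null, "public", "%",
                    new String[] {"TABLE", "VIEW"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_TYPE") + "\t" + rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}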