I have installed HDP 3.1.5 with Kerberos security and the Ranger plugin. After setting Hive authorization to Ranger, I am getting the error below.
Please suggest a resolution.
create role userRole;
Error: Error while compiling statement: FAILED: LockException [Error 10280]: Error communicating with the metastore (state=42000,code=10280)
I changed the value: hive.security.authorization.manager = org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
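For reference, a quick way to confirm which authorizer HiveServer2 actually picked up (a sketch; /etc/hive/conf is the usual HDP client-config location and is an assumption for your nodes):
# Show the effective authorizer entry in hive-site.xml:
grep -A1 'hive.security.authorization.manager' /etc/hive/conf/hive-site.xml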
Error in the Hive log:
2020-03-30T10:38:08,555 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Unable to rename temp file /tmp/hmetrics4237869098581745036json to /tmp/report.json
2020-03-30T10:38:08,555 ERROR [json-metric-reporter]: metrics2.JsonFileMetricsReporter (:()) - Exception during rename
java.nio.file.FileSystemException: /tmp/hmetrics4237869098581745036json -> /tmp/report.json: Operation not permitted
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:1.8.0_212]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_212]
at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396) ~[?:1.8.0_212]
at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262) ~[?:1.8.0_212]
at java.nio.file.Files.move(Files.java:1395) ~[?:1.8.0_212]
at org.apache.hadoop.hive.common.metrics.metrics2.JsonFileMetricsReporter.run(JsonFileMetricsReporter.java:175) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
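As a side note, the metrics error is an EPERM on a rename into /tmp. With the sticky bit set on /tmp, that typically means /tmp/report.json already exists and is owned by a different user than the one running HiveServer2. A minimal check, assuming that is the cause:
# See who owns the stale metrics file; if it isn't the hive user, remove it as
# that owner (or root), or point hive.service.metrics.file.location elsewhere:
ls -l /tmp/report.json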
I'm working on a project that loads data into an Amazon S3 file target (imported with the AmazonS3 plugin) from Informatica PowerCenter using the Amazon_S3 connector, but the workflow fails every time I execute it (it has never succeeded so far).
The error says the directory can't be created, even though I have create and write permissions on the Temp folder.
Message Code: Amazon_S3 Writer_30031
Message: [ERROR] File has not been created in the specified directory: [F:\DEV0PWC\PWC\Temp\InfaS3Staging0006041652299037943744_0]
Message Code: GENERIC_WRITER_5
Message: [ERROR] Error while initializing the writer : [Failed to create the file in the specified location F:\DEV0PWC\PWC\Temp\InfaS3Staging11060412211690186912208_0:The system cannot find the path specified]
Message Code: JAVA PLUGIN_1762
Message: [ERROR] com.informatica.powercenter.sdk.SDKException: com.informatica.cloud.api.adapter.runtime.exception.InitializationException: Failed to create the file in the specified location F:\DEV0PWC\PWC\Temp\InfaS3Staging11060412211690186912208_0:The system cannot find the path specified
at com.informatica.cloud.adapter.amazons3.write.AmazonS3Write.initializeAndValidate(Unknown Source)
at com.informatica.cloud.api.adapter.writer.runtime.WriterWrapper.initializeAndValidate(Unknown Source)
at com.informatica.cloud.api.adapter.writer.runtime.GenericWriterPartitionDriver.init(Unknown Source)
Do you have any information about this, please?
I'm running Flink and MinIO locally via Docker Compose.
When I try to connect to MinIO, I always get the following error:
caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
It seems that the plugin isn't loaded correctly.
My Flink config (flink-conf.yaml):
state.backend: filesystem
s3.endpoint: http://minio:9000
s3.path.style.access: true
s3.access-key: minio
s3.secret-key: minio123
presto.s3.access-key: minio
presto.s3.secret-key: minio123
presto.s3.endpoint: http://minio:9000
presto.s3.path-style-access: true
I've copied the required plugin as follows:
mkdir -p plugins/s3-fs-presto
cp opt/flink-s3-fs-presto-*.jar plugins/s3-fs-presto
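One thing to verify (a sketch, assuming the official flink Docker image where FLINK_HOME is /opt/flink): the plugins directory must exist inside the jobmanager and taskmanager containers, not just on the host, and the plugins/ mechanism only exists from Flink 1.9 onwards; on older versions the jar goes into lib/ instead. The service names below are assumptions from a typical docker-compose.yml:
# Check that the jar is actually visible inside the running containers:
docker-compose exec jobmanager ls /opt/flink/plugins/s3-fs-presto
docker-compose exec taskmanager ls /opt/flink/plugins/s3-fs-presto
# If it is missing, bind-mount the directory in docker-compose.yml or copy it in:
docker-compose exec taskmanager bash -c \
  'mkdir -p /opt/flink/plugins/s3-fs-presto &&
   cp /opt/flink/opt/flink-s3-fs-presto-*.jar /opt/flink/plugins/s3-fs-presto/'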
Any suggestions?
Stack trace:
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: 7ae6657256719d8c32d76ba113fb35f0)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:820)
at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1008)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1081)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1081)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259)
... 21 more
Caused by: java.io.IOException: Error opening the Input Split s3://test/test.txt [0,3243]: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.api.common.io.FileInputFormat.open(FileInputFormat.java:824)
at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:470)
at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:47)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:450)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
at org.apache.flink.api.common.io.FileInputFormat$InputSplitOpenThread.run(FileInputFormat.java:995)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:446)
... 2 more
I'm receiving this exception when executing a Hive query on Tez with Hive 2.3.6 and Tez 0.9.2.
I know Tez is configured correctly because I can manually run map-reduce jobs via Hadoop.
Dag submit failed due to java.io.FileNotFoundException: Path is not a file: /tmp/hive/root/_tez_session_dir/f4f4b17c-0657-41fa-8674-df83fa3ad362/lib
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:62)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:150)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1829)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:709)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
This error is seen on Hive 2.2 and later when either:
hive.aux.jars.path in hive-site.xml is configured with an invalid path, or
the HIVE_AUX_JARS_PATH environment variable is configured improperly (usually in hive-env.sh).
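A minimal way to check both (a sketch; the /etc/hive/conf paths are the usual defaults and an assumption here). Note that the "Path is not a file: .../lib" symptom fits a value pointing at a bare directory, which then gets localized into the Tez session directory and opened as if it were a file; hive.aux.jars.path expects a comma-separated list of jar file paths.
# Look for an invalid or directory-valued aux-jars setting:
grep -A1 'hive.aux.jars.path' /etc/hive/conf/hive-site.xml
grep 'HIVE_AUX_JARS_PATH' /etc/hive/conf/hive-env.sh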
I'm trying to initialize a database in Hive before each run.
The code is:
command="hive -e \"drop database if exists some_db cascade; create database some_db\"";
eval $command;
Execution fails with this error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to clean up java.sql.SQLSyntaxErrorException: Table 'hive.COMPLETED_COMPACTIONS' doesn't exist
It passes on rerun:
eval $command;
I also can't explain why, when I run the command multiple times, it keeps failing on alternate runs.
I'd appreciate advice on what is happening in Hive and how to make it work on the first attempt. Thanks.
Details:
Hive 2.1 running on AWS EMR 5.7; I didn't see this behavior on Hive 1.0.
Full Error Stack
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to clean up java.sql.SQLSyntaxErrorException: Table 'hive.COMPLETED_COMPACTIONS' doesn't exist
at org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:125)
at org.mariadb.jdbc.internal.util.ExceptionMapper.throwException(ExceptionMapper.java:69)
at org.mariadb.jdbc.MariaDbStatement.executeQueryEpilog(MariaDbStatement.java:259)
at org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:287)
at org.mariadb.jdbc.MariaDbStatement.executeUpdate(MariaDbStatement.java:470)
at org.mariadb.jdbc.MariaDbStatement.executeUpdate(MariaDbStatement.java:486)
at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497)
at org.apache.hadoop.hive.metastore.txn.TxnHandler.cleanupRecords(TxnHandler.java:1721)
at org.apache.hadoop.hive.metastore.AcidEventListener.onDropDatabase(AcidEventListener.java:51)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_database_core(HiveMetaStore.java:1098)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_database(HiveMetaStore.java:1130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy19.drop_database(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_database.getResult(ThriftHiveMetastore.java:10518)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_database.getResult(ThriftHiveMetastore.java:10502)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.mariadb.jdbc.internal.util.dao.QueryException: Table 'hive.COMPLETED_COMPACTIONS' doesn't exist
at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.getResult(AbstractQueryProtocol.java:479)
at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.result(AbstractQueryProtocol.java:400)
at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:381)
at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:337)
at org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:277)
... 27 more
Looks like your Hive metastore schema needs to be upgraded. You can check the schema version with the schematool command.
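For example (a sketch; schematool lives in $HIVE_HOME/bin, and -dbType mysql matches the MariaDB metastore visible in the stack trace):
# Print the metastore connection info plus schema and distribution versions:
schematool -dbType mysql -info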
Example output:
Metastore connection URL: jdbc:mysql://XXXXXXXXXXXXXXXX:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : org.mariadb.jdbc.Driver
Metastore connection User: hive
Hive distribution version: 2.1.0
Metastore schema version: 2.1.0
If your metastore schema and Hive distribution versions are not the same, upgrade the schema.
Make sure you have a backup first, just in case. Also remember that schema changes can break functionality with older Hive distributions, so you may not be able to share a metastore between Hive distribution versions.
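The upgrade itself is a single command, under the same assumptions as above; take a database backup before running it:
# Apply the upgrade scripts to bring the schema up to the installed Hive version:
schematool -dbType mysql -upgradeSchema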
I am creating an Oozie workflow for a Hive create table command.
I have added hive-site.xml to an HDFS location.
I am getting the error below:
Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.HiveMain], main() threw exception, com/facebook/fb303/FacebookService$Iface
java.lang.NoClassDefFoundError: com/facebook/fb303/FacebookService$Iface
at java.lang.ClassLoader.defineClass1(Native Method)
This might be because you are missing the Thrift jar, or because of a version mismatch.
Refer to the following:
Error while executing program with Hive JDBC
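As a concrete starting point (a sketch; the jar location and HDFS paths below are assumptions for your cluster): FacebookService$Iface ships in the libfb303 jar from Thrift, so staging it in the workflow's lib/ directory in HDFS lets the Hive action pick it up.
# Locate the jar on a node with the Hive client installed (path is an example):
ls /usr/hdp/current/hive-client/lib/libfb303-*.jar
# Stage it next to the workflow so Oozie adds it to the action classpath
# (workflow path is an example):
hdfs dfs -put /usr/hdp/current/hive-client/lib/libfb303-*.jar /user/oozie/workflows/hive-wf/lib/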