HRegionServer shows "error telling master we are up" with a SocketException: Invalid argument

I am trying to create an HBase cluster on 3 CentOS machines. Hadoop (v2.8.0) is up and running, and on top of it I configured HBase (v1.2.5). HBase startup is fine: it starts the HMaster and the region servers, but the region servers keep logging the following error, and the HMaster log says no region servers have checked in.
2017-04-20 19:30:33,950 WARN [regionserver/localhost/127.0.0.1:16020] regionserver.HRegionServer: error telling master we are up
com.google.protobuf.ServiceException: java.net.SocketException: Invalid argument
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2316)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:907)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:416)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:722)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
jps output on my master node:
[hadoop@localhost bin]$ jps
20624 SecondaryNameNode
20800 ResourceManager
20401 NameNode
18061 Jps
17839 HMaster
jps output on my region nodes:
[hadoop@localhost bin]$ jps
11168 Jps
482 DataNode
10840 HQuorumPeer
10974 HRegionServer
hbase-site.xml of all nodes
<configuration>
<property>
<name>hbase.master.hostname</name>
<value>NameNode</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://NameNode:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>hdfs://NameNode:8020/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>DataNode1,DataNode2</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
The regionservers file contains:
DataNode1
DataNode2
The /etc/hosts file on all nodes contains the actual IPs rather than loopback IPs:
192.168.00.00 NameNode
192.168.00.00 DataNode1
192.168.00.00 DataNode2
Note: the configuration is the same on all nodes. Any help will be appreciated.
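The log line "regionserver/localhost/127.0.0.1:16020" above suggests the region servers resolve their own hostname to the loopback address. A quick way to verify this on each node is the following diagnostic sketch (hostnames are the ones used in this question):
hostname -f                     # should print the node's own name, e.g. DataNode1, not localhost
getent hosts "$(hostname -f)"   # should resolve to the node's LAN IP (192.168.x.x), not 127.0.0.1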

I put the following properties in every region server's hbase-site.xml and that solved my problem:
<property>
<name>hbase.regionserver.hostname</name>
<value>DataNode1</value>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>16020</value>
</property>

I was facing the same problem, and changing the hostname resolved it:
sudo hostnamectl set-hostname new_hostname
I had a master and a node called node1. There is a link to a wiki that has the configs.

Related

When initializing Hive for the first time, an error points to hive-site.xml

I am unable to find the cause of the error below; it points to hive-site.xml, and as far as I can tell what I have configured is correct.
FYI, I am using Hadoop 3.1.1, Hive 3.1.1, and MySQL for the Hive metastore.
adminn@master:~$ schematool -initSchema -dbType mysql
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3210,96,"file:/home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3003)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2931)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1460)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4990)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5063)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5150)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5098)
at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3210,96,"file:/home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml"]
at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
at com.ctc.wstx.sr.StreamScanner.reportIllegalChar(StreamScanner.java:2456)
at com.ctc.wstx.sr.StreamScanner.validateChar(StreamScanner.java:2403)
at com.ctc.wstx.sr.StreamScanner.resolveCharEnt(StreamScanner.java:2369)
at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1515)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2828)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3257)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3063)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2986)
... 15 more
Given below are the important parts of the hive-site.xml file where I made the required changes:
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>Hivehadoop#123$</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.metastore.ds.connection.url.hook</name>
<value/>
<description>Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used</description>
</property>
<property>
<name>javax.jdo.option.Multithreaded</name>
<value>true</value>
<description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/${user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/${user.name}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
This is solved: I removed the special character from the line number specified in the error, and then it works fine.
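For anyone hitting the same parse error, a sketch for locating such an invisible control character (assuming GNU grep; the file path is the one from the stack trace) is:
grep -nP '[\x00-\x08\x0B\x0C\x0E-\x1F]' /home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml   # list lines containing control characters
sed -n '3210p' /home/adminn/apache-hive-3.1.1-bin/conf/hive-site.xml | cat -A                   # print line 3210 with non-printing characters made visible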

HiveServer2 not listening on ports 10000 and 10001

When I run:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
It shows
Starting HiveServer2
and nothing listens on ports 10000 or 10001.
The HiveServer2 service does not output error information, which makes the problem hard to diagnose. You can try to start the metastore service provided by Hive, which listens on port 9083 and might give some information when your configuration is not set properly:
hive --service metastore   # do not detach from the terminal, so you can see the logs
In my case, this service cannot be started, with error message:
MetaException(message:Hive Schema version 3.1.0 does not match metastore's schema
version 1.2.0 Metastore is not upgraded or corrupt)
One direct way to resolve this error is to ignore the version difference in hive-site.xml, provided there is only one Hive version on your machine (another solution is to modify the metastore_db version):
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
After this problem is resolved, the HiveServer2 service runs and listens on port 10000:
hive --service hiveserver2 > /dev/null 2>&1 &
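If the two versions genuinely differ (for example, an old metastore_db created by an earlier Hive release), an alternative to disabling verification is to check and upgrade the metastore schema with schematool. A minimal sketch, assuming a MySQL-backed metastore as in this setup:
schematool -dbType mysql -info            # report the Hive distribution and metastore schema versions
schematool -dbType mysql -upgradeSchema   # upgrade the metastore schema to match the installed Hive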
If your HiveServer2 accesses the metastore via the Derby or MySQL JDBC driver, then the aforementioned metastore service is not needed for HiveServer2. However, if HiveServer2 accesses the metastore via the Thrift protocol, as configured in conf/hive-site.xml like this:
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop-master:9083</value>
<description>
Thrift URI for the remote metastore.
Used by metastore client to connect to remote metastore.
</description>
</property>
then the metastore service must be started first.
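Either way, a quick check that the services actually came up is to look at the listening ports. A small sketch (on older systems netstat -ltnp can replace ss):
ss -ltn | grep -E ':(9083|10000)'   # 9083 = metastore, 10000 = HiveServer2 binary Thrift port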
I had a hard time setting up Hive 3.1.2, so I am writing this in case it helps someone out. To diagnose the problem, first try to launch the metastore and HiveServer2 like this:
metastore:
hive --service metastore --hiveconf hive.root.logger=INFO,console
hiveserver2:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
Then carefully read the exceptions that were thrown.
My problem was "user hive is not allowed to perform this api call",
and to solve it I added the following property to hive-site.xml:
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
<description>
Should metastore do authorization against database notification related APIs such as get_next_notification.
If set to true, then only the superusers in proxy settings have the permission
</description>
</property>
I am also adding my full hive-site.xml as a sample:
<configuration>
<property>
<name>datanucleus.schema.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://server-2:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>mysql_username</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mysql_password</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://server-2:9083</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>server-2</value>
</property>
<property>
<name>hive.server2.transport.mode</name>
<value>binary</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
</configuration>
Thanks. Note there is a typo to watch for: the property should be hive.metastore.event.db.notification.api.auth (value false), not metastore.metastore.event.db.notification.api.auth.

HiveMetaStoreClient fails to connect to a Kerberized cluster

Kerberized HDP-2.6.3.0.
I have test code running on my local Windows 7 machine. Note the commented-out code as well; enabling or disabling it makes no difference.
private static void connectHiveMetastore() throws MetaException, MalformedURLException {
System.setProperty("hadoop.home.dir", "E:\\Development\\Software\\Virtualization");
/*Start : Commented or un-commented, immaterial ...*/
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
System.setProperty("java.security.auth.login.config","E:\\Development\\lib\\hdp\\loginconf.ini");
System.setProperty("java.security.krb5.conf","E:\\Development\\lib\\hdp\\krb5.conf");
/*End : Commented or un-commented, immaterial ...*/
Configuration configuration = new Configuration();
/*Start : Commented or un-commented, immaterial ...*/
//configuration.addResource("E:\\Development\\lib\\hdp\\client_config\\HDFS_CLIENT\\core-site.xml");
//configuration.addResource("E:\\Development\\lib\\hdp\\client_config\\HDFS_CLIENT\\hdfs-site.xml");
//configuration.addResource("E:\\Development\\lib\\hdp\\client_config\\HIVE_CLIENT\\hive-site.xml");
//configuration.set("hive.server2.authentication","KERBEROS");
//configuration.set("hadoop.security.authentication", "Kerberos");
/*End : Commented or un-commented, immaterial ...*/
HiveConf hiveConf = new HiveConf(configuration,Configuration.class);
/*Start : Commented or un-commented, immaterial ...*/
/*URL url = new File("E:\\Development\\lib\\hdp\\client_config\\HIVE_CLIENT\\hive-site.xml").toURI().toURL();
HiveConf.setHiveSiteLocation(url);*/
//hiveConf.setVar(HiveConf.ConfVars.HIVE_SERVER2_AUTHENTICATION,"KERBEROS");
/*End : Commented or un-commented, immaterial ...*/
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS,"thrift://l4283t.sss.se.com:9083,thrift://l4284t.sss.se.com:9083");
HiveMetaStoreClient hiveMetaStoreClient = new HiveMetaStoreClient(hiveConf);
System.out.println("Metastore client : "+hiveMetaStoreClient);
System.out.println("Is local metastore ? "+hiveMetaStoreClient.isLocalMetaStore());
System.out.println(hiveMetaStoreClient.getAllDatabases());
hiveMetaStoreClient.close();
}
The output I receive on the local machine (note "Is local metastore ? false"):
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/E:/Development/lib/hdp/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/E:/Development/lib/hdp/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2017-11-28 14:20:14,631 INFO [main] hive.metastore (HiveMetaStoreClient.java:open(443)) - Trying to connect to metastore with URI thrift://l4284t.sss.se.com:9083
2017-11-28 14:20:14,873 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-11-28 14:20:15,005 WARN [main] security.ShellBasedUnixGroupsMapping (ShellBasedUnixGroupsMapping.java:getUnixGroups(87)) - got exception trying to get groups for user ojoqcu: Incorrect command line arguments.
2017-11-28 14:20:15,042 WARN [main] hive.metastore (HiveMetaStoreClient.java:open(511)) - set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:3802)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:3788)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:503)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:188)
at MetaStoreClient.connectHiveMetastore(MetaStoreClient.java:47)
at MetaStoreClient.main(MetaStoreClient.java:14)
2017-11-28 14:20:15,045 INFO [main] hive.metastore (HiveMetaStoreClient.java:open(539)) - Connected to metastore.
Metastore client : org.apache.hadoop.hive.metastore.HiveMetaStoreClient@5fdcaa40
Is local metastore ? false
Exception in thread "main" MetaException(message:Got exception: org.apache.thrift.transport.TTransportException null)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1256)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1129)
at MetaStoreClient.connectHiveMetastore(MetaStoreClient.java:51)
at MetaStoreClient.main(MetaStoreClient.java:14)
2017-11-28 14:20:15,057 ERROR [main] hive.log (MetaStoreUtils.java:logAndThrowMetaException(1254)) - Got exception: org.apache.thrift.transport.TTransportException null
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_all_databases(ThriftHiveMetastore.java:771)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_all_databases(ThriftHiveMetastore.java:759)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1127)
at MetaStoreClient.connectHiveMetastore(MetaStoreClient.java:51)
at MetaStoreClient.main(MetaStoreClient.java:14)
2017-11-28 14:20:15,058 ERROR [main] hive.log (MetaStoreUtils.java:logAndThrowMetaException(1255)) - Converting exception to MetaException
Process finished with exit code 1
The metastore log throws:
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:609)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:606)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1846)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:606)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
The client configs (which I downloaded from the cluster via Ambari), as well as /etc/hive2/conf/hive-site.xml, /etc/hive/conf/conf.server/hive-site.xml, and hive-metastore/conf/hive-site.xml,
all have the same following properties:
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.server2.thrift.sasl.qop</name>
<value>auth</value>
</property>
For reference, here are the relevant parts of the client-config hive-site.xml:
<configuration>
<property>
<name>ambari.hive.db.schema.name</name>
<value>hive_metastore</value>
</property>
<property>
<name>atlas.rest.address</name>
<value>https://l4284t.sss.se.com:21443</value>
</property>
<property>
<name>hive.server2.thrift.http.path</name>
<value>cliservice</value>
</property>
<property>
<name>hive.server2.thrift.http.port</name>
<value>10001</value>
</property>
<property>
<name>hive.server2.thrift.max.worker.threads</name>
<value>500</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.sasl.qop</name>
<value>auth</value>
</property>
<property>
<name>hive.server2.transport.mode</name>
<value>http</value>
</property>
<property>
<name>hive.server2.use.SSL</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.kerberos.keytab.file</name>
<value>/etc/security/keytabs/hive.service.keytab</value>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive/_HOST@GLOBAL.SCD.COM</value>
</property>
<property>
<name>hive.metastore.pre.event.listeners</name>
<value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
</property>
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://l4283t.sss.se.com:9083,thrift://l4284t.sss.se.com:9083</value>
</property>
<property>
<name>hive.security.metastore.authenticator.manager</name>
<value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
</property>
<property>
<name>hive.server2.authentication</name>
<value>KERBEROS</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.keytab</name>
<value>/etc/security/keytabs/hive.service.keytab</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.principal</name>
<value>hive/_HOST@GLOBAL.SCD.COM</value>
</property>
<property>
<name>hive.server2.authentication.spnego.keytab</name>
<value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
<property>
<name>hive.server2.authentication.spnego.principal</name>
<value>HTTP/_HOST@GLOBAL.SCD.COM</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
<property>
<name>hive.server2.keystore.path</name>
<value>/etc/security/ssl/default-key.jks</value>
</property>
<property>
<name>hive.server2.logging.operation.enabled</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://l4373t.sss.se.com/hive_metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
</configuration>
Hive database connection via JDBC works when using the following lines of code:
System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
System.setProperty("java.security.auth.login.config","E:\\Development\\lib\\hdp\\loginconf.ini");
System.setProperty("java.security.krb5.conf","E:\\Development\\lib\\hdp\\krb5.conf");
Edit 1:
I created an 'uber' jar and executed the above class directly on one of the metastore servers (Linux machines), yet the error persists.
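Two quick checks worth running from the machine where the client executes, as a diagnostic sketch (hostname and port taken from hive.metastore.uris above), are whether a Kerberos ticket is present and whether the metastore port is reachable:
klist                           # confirm a valid Kerberos ticket (TGT) exists for the user running the client
nc -vz l4283t.sss.se.com 9083   # confirm the metastore Thrift port is reachable from this machine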

When YARN is running, the submitted Hadoop job gets stuck in the ACCEPTED state

I am using VirtualBox to run an Ubuntu 14 VM on a Windows laptop. I have configured Apache-distribution HDFS and YARN for a single node. When I run DFS and YARN, all required daemons are running. When I don't configure YARN and run DFS only, I can execute a MapReduce job successfully, but when I run YARN as well the job gets stuck in the ACCEPTED state. I tried many changes to the node's memory settings, but no luck.
I followed the following link to set up a single node:
https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/SingleCluster.html
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
settings of hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/shaileshraj/hadoop/name/data</value>
</property>
</configuration>
settings of mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
settings of yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2200</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>500</value>
</property>
Here is the Application Master screen of the RM Web UI (screenshot omitted). From what I can see, the AM container is not allocated; maybe that is the problem.
If the job is not getting enough resources, it will stay in the ACCEPTED state; whenever it gets resources, it will change to the RUNNING state.
In your case, open the ResourceManager Web UI and check how many resources are available to run jobs.
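Besides the Web UI, the same information can be read from the ResourceManager CLI and REST API. A small sketch, assuming the default ResourceManager address localhost:8088 for this single-node setup:
yarn node -list -all                               # list NodeManagers and their state
curl http://localhost:8088/ws/v1/cluster/metrics   # availableMB and availableVirtualCores show the free resources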

NoClassDefFoundError HBase with YARN

I know this is a frequently asked topic. Still, after digging into all the topics I could find (most of them about CLASSPATH), I can't solve mine.
Examples of the topics I found and tried:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
java.lang.NoClassDefFoundError with HBase Scan
I'm using Hadoop 2.5.1 with HBase 0.98.11 on Ubuntu 14.04
I set up pseudo-distributed mode and ran Hadoop with HBase successfully. After that I wanted to set up fully-distributed mode, but jobs fail with a NoClassDefFoundError. I tried adding export HADOOP_CLASSPATH=$(/usr/local/hbase-0.98.11-hadoop2/bin/hbase classpath) to hadoop-env (and also yarn-env); it still doesn't work.
One thing I noticed is that if I comment out
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
I can run the jobs SUCCESSFULLY, BUT it seems they run on a single node rather than multiple nodes.
Here are some of the configs:
mapred-site
<property>
<name>mapred.job.tracker</name>
<value>hadoop1:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
hdfs-site
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
yarn-site
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run
</description>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
In yarn-env and hadoop-env everything is left as default except HADOOP_CLASSPATH (which doesn't change anything whether I add it or not).
Here is the error trace:
2015-04-25 23:29:25,143 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
at apriori2$FrequentItemsReduce.reduce(apriori2.java:550)
at apriori2$FrequentItemsReduce.reduce(apriori2.java:532)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1651)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
at org.apache.hadoop.mapred.MapTask.closeQuietly(MapTask.java:1990)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:774)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Many thanks for any help.
With YARN, you need to set the "yarn.application.classpath" property (in yarn-site.xml) to the classpath for your MapReduce job. "export HADOOP_CLASSPATH" will not work with YARN.
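As a concrete illustration (a sketch only; the paths come from the install locations mentioned in the question), the value for yarn.application.classpath can be generated on each node from the Hadoop and HBase classpath commands and then pasted into yarn-site.xml:
# print the combined classpath to use as the value of yarn.application.classpath on every node
echo "$(hadoop classpath):$(/usr/local/hbase-0.98.11-hadoop2/bin/hbase classpath)"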