I am trying to set up HDP 3.1.0 on Oracle Linux 7.
The Ambari, HDFS and Hive Metastore services are already running, but HiveServer2 is not starting.
When I try to start it manually:
# hive --service hiveserver2
I get this after several minutes of waiting:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/phoenix/phoenix-5.0.0.3.1.0.0-78-server.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-12-15 14:15:28: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 99822aa9-957a-439e-904e-d9adce9a7893
Hive Session ID = fa34c442-4598-4f85-9493-daaf93804164
Hive Session ID = ed8de700-4ebf-4985-ae13-830b306be0e7
Hive Session ID = 6093d16b-53f0-4e21-9429-8046d3f3917a
Hive Session ID = a4fc572d-d56f-4c8a-97a0-8fc8bc115233
Hive Session ID = 02fdb753-45f7-4009-8283-bf3d5eef00b2
Hive Session ID = 47be06ad-42d2-4281-83f3-7e9b4cac1690
Hive Session ID = dae77692-3296-464f-995b-cb45a98d2e09
Hive Session ID = c4d49aa0-f829-4765-adbc-9afd5414775b
Hive Session ID = 8e26f8d8-bb01-4384-bfa2-8cb5ea66d1e8
This is what netstat is reporting:
# netstat -ntpl | egrep "10000|10001|10002"
tcp 0 0 192.168.1.100:10001 0.0.0.0:* LISTEN 422/java
tcp 0 0 192.168.1.100:10002 0.0.0.0:* LISTEN 26918/java
Nobody is listening on port 10000 :(
This is what I have in hive-site.xml:
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.http.port</name>
<value>10001</value>
</property>
<property>
<name>hive.server2.webui.port</name>
<value>10002</value>
</property>
I assume I can ignore SLF4J warnings, correct? What else should I check?
It turned out that this service was trying to grab more memory than the 1024 MB allowed by the default /etc/hadoop/conf/yarn-site.xml setting:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
I increased that limit to 1792 and the issue was resolved.
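For reference, the adjusted setting in the same /etc/hadoop/conf/yarn-site.xml looks like this (the YARN NodeManagers have to be restarted for the new limit to take effect):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1792</value>
</property>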
I am receiving this error with the HBase Storage Handler in Hive when I run a query in a Kerberized environment, on HBase 1.5:
Caused by: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException:
No registered coprocessor service found for name AuthenticationService in region hbase:meta,,1
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8499)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2282)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2264)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36808)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
The important part being:
No registered coprocessor service found for name AuthenticationService
in region hbase:meta,,1
I did some reading and learned that AuthenticationService is provided by the TokenProvider coprocessor.
In hbase-site.xml, ensure these options are configured:
hadoop.security.authentication
hbase.coprocessor.master.classes
hbase.coprocessor.region.classes
Ensure values are configured as follows:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
Note:
In older versions of HBase the setting hbase.coprocessor.regionserver.classes was used; make sure you are using the correct one, hbase.coprocessor.region.classes.
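A quick way to double-check which of these values are actually in effect on a node is to grep the active config; the path below assumes a typical /etc/hbase/conf layout and may differ on your installation:
# path is an assumption; point it at whatever hbase-site.xml your services actually load
grep -A1 "hbase.coprocessor" /etc/hbase/conf/hbase-site.xml
grep -A1 "hadoop.security.authentication" /etc/hbase/conf/hbase-site.xml
The HBase Master and RegionServers need to be restarted after changing these properties for the coprocessors to be loaded.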
When I run:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
It shows
Starting HiveServer2
and nothing listens on ports 10000 and 10001.
The HiveServer2 service does not output error information, which makes the problem hard to diagnose. You can try starting the metastore service provided by Hive, which listens on port 9083 and might give some information when your configuration is not set properly:
hive --service metastore    # do not detach from the terminal, so you can see the logs
In my case, this service could not be started, with the error message:
MetaException(message:Hive Schema version 3.1.0 does not match metastore's schema
version 1.2.0 Metastore is not upgraded or corrupt)
One direct way to resolve this error, if there is only one Hive version on your machine, is to tell Hive to ignore the version difference in hive-site.xml (another solution is to bring the metastore schema itself up to date; see the schematool sketch after the snippet below):
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
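Alternatively, instead of disabling verification, the metastore schema itself can usually be brought up to the Hive version with the schematool utility that ships with Hive; a sketch, assuming a MySQL-backed metastore (use the -dbType that matches your setup, and back up the metastore database first):
# upgrade an existing metastore schema in place
schematool -dbType mysql -upgradeSchema
# or, for a brand-new metastore database
schematool -dbType mysql -initSchema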
After this problem is resolved, the HiveServer2 service can run and listen on port 10000:
hive --service hiveserver2 > /dev/null 2>&1 &
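To confirm it is really up, the same netstat check from the first question works; a listener on 10000 should appear once HiveServer2 finishes starting, which can take a minute or two:
netstat -ntpl | grep 10000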
If your HiveServer2 accesses the metastore directly via the Derby or MySQL JDBC driver, then the aforementioned metastore service is not needed for HiveServer2. However, if HiveServer2 accesses the metastore via the thrift protocol, as configured in conf/hive-site.xml like
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop-master:9083</value>
<description>
Thrift URI for the remote metastore.
Used by metastore client to connect to remote metastore.
</description>
</property>
then the metastore service must be started first.
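A minimal start sequence for that setup looks like this (commands and ports as used above; the log file names are just examples):
# the metastore must be up first -- HiveServer2 sessions will fail if nothing listens on 9083
hive --service metastore > metastore.log 2>&1 &
# then HiveServer2, which should end up listening on port 10000
hive --service hiveserver2 > hiveserver2.log 2>&1 &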
I had a hard time setting up hive-3.1.2; I am writing this in case it helps someone out. In order to diagnose the problem, first try to launch the metastore and hiveserver2 like this:
metastore:
hive --service metastore --hiveconf hive.root.logger=INFO,console
hiveserver2:
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
then carefully read the exceptions that are thrown.
My problem was: user hive is not allowed to perform this api call
To solve that, I added the following property to hive-site.xml:
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
<description>
Should metastore do authorization against database notification related APIs such as get_next_notification.
If set to true, then only the superusers in proxy settings have the permission
</description>
</property>
I am also adding my full hive-site.xml as a sample:
<configuration>
<property>
<name>datanucleus.schema.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://server-2:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>mysql_username</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mysql_password</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://server-2:9083</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>server-2</value>
</property>
<property>
<name>hive.server2.transport.mode</name>
<value>binary</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
</configuration>
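With a configuration like this sample, a quick end-to-end check is to connect with beeline and run a trivial query (the host server-2 and port 10000 are taken from the sample above; adjust them to your own setup):
beeline -u jdbc:hive2://server-2:10000 -e "show databases;"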
Thanks. There is a typo: the property name should be hive.metastore.event.db.notification.api.auth, not this:
metastore.metastore.event.db.notification.api.auth
false
I am trying to connect Beeline to HiveServer2 and I am getting the alert below.
Need help connecting Beeline to HiveServer2.
[hdpsysuser@hdpmaster bin]$ beeline
which: no hbase in (/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/hdpuser/.local/bin:/home/hdpuser/bin:/home/hdpsysuser/.local/bin:/home/hdpsysuser/bin:/usr/hadoopsw/hadoop-2.7.3/sbin:/usr/hadoopsw/hadoop-2.7.3/bin:/usr/hadoopsw/hive/bin:/usr/hadoopsw/db-derby-10.13.1.1-bin/bin)
Beeline version 2.1.1 by Apache Hive
beeline> show tables;
No current connection
beeline> !connect jdbc:hive2://hdpmaster:10000
Connecting to jdbc:hive2://hdpmaster:10000
Enter username for jdbc:hive2://hdpmaster:10000: hdpsysuser
Enter password for jdbc:hive2://hdpmaster:10000: **********
17/05/09 01:51:20 [main]: WARN jdbc.HiveConnection: Failed to connect to
hdpmaster:10000
Error: Could not open client transport with JDBC Uri:
jdbc:hive2://hdpmaster:10000: Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hdpsysuser is not allowed to impersonate hdpsysuser (state=08S01,code=0)
Add the below property to hive-site.xml in the Hive conf directory:
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
Also, if you want user ABC to impersonate all users (*), add the below properties to your core-site.xml:
<property>
<name>hadoop.proxyuser.ABC.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.ABC.hosts</name>
<value>*</value>
</property>
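After editing the proxyuser entries in core-site.xml, HDFS and YARN have to pick up the change; on a plain Hadoop install this can usually be done without a full restart using the standard refresh commands (a sketch, assuming you run them as a cluster admin):
# re-read the hadoop.proxyuser.* settings from core-site.xml
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration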
I am trying to create an HBase cluster on 3 CentOS machines. Hadoop (v2.8.0) is up and running, and on top of it I configured HBase (v1.2.5). HBase startup is fine; it started the HMaster and region servers, but the region servers still show the following error, and the HMaster log says that no region servers are checked in.
2017-04-20 19:30:33,950 WARN [regionserver/localhost/127.0.0.1:16020] regionserver.HRegionServer: error telling master we are up
com.google.protobuf.ServiceException: java.net.SocketException: Invalid argument
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2316)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:907)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:416)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:722)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
jps output on my master node:
[hadoop@localhost bin]$ jps
20624 SecondaryNameNode
20800 ResourceManager
20401 NameNode
18061 Jps
17839 HMaster
jps output on my region nodes:
[hadoop@localhost bin]$ jps
11168 Jps
482 DataNode
10840 HQuorumPeer
10974 HRegionServer
hbase-site.xml on all nodes:
<configuration>
<property>
<name>hbase.master.hostname</name>
<value>NameNode</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://NameNode:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>hdfs://NameNode:8020/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>DataNode1,DataNode2</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
The regionservers file contains:
DataNode1
DataNode2
The /etc/hosts file on all nodes contains actual IPs rather than loopback IPs:
192.168.00.00 NameNode
192.168.00.00 DataNode1
192.168.00.00 DataNode2
Note: the configuration is the same on all nodes. Any help will be appreciated.
Putting the following properties in every region server's hbase-site.xml solved my problem:
<property>
<name>hbase.regionserver.hostname</name>
<value>DataNode1</value>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>16020</value>
</property>
I was facing the same problem, but changing the hostname resolved it:
sudo hostnamectl set-hostname new_hostname
I had a master and a node called node1.
(link to a wiki that has the configs)
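If you go the rename route, remember that the new hostname also has to resolve to the node's real IP in /etc/hosts on every machine, in the same style as the entries shown in the question, for example:
192.168.00.00 new_hostname   # placeholder IP, as in the question above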
I am getting an error when listing tables; please help me out.
sqoop list-tables --connect "jdbc:sqlserver://serverName=YYYYYYYY;database=Operational_Standards;integratedSecurity=true;authenticationScheme=JavaKerberos"
Warning: /usr/hdp/2.2.4.12-1/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
INFO sqoop.Sqoop: Running Sqoop version: 1.4.5.2.2.4.12-1
INFO manager.SqlManager: Using default fetchSize of 1000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.12-1/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.12-1/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.4.12-1/hive/lib/hive-jdbc-0.14.0.2.2.4.12-1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR manager.CatalogQueryManager: Failed to list tables
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host serverName=YYYYY, port 1433 has failed. Error: "null. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".