Spinnaker gate error: LDAP error: Could not get a resource from the pool - spinnaker

Any suggestions? Does this LDAP config for Gate look OK? I am having some issues with the LDAP config for Gate:
ldap:
enabled: true
url: ldap://10.19.11.12:389/dc=xxxx,dc=corp
userDnPattern: uid={0},ou=abc,ou=service accounts
managerDn: uid=testuser
managerPassword: abc123
I am getting the errors below in the Gate error logs:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.util.Pool.getResource(Pool.java:53) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:226) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:16) ~[jedis-2.9.0.jar:na]
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:194) ~[spring-data-redis-1.8.4.RELEASE.jar:na]

Jedis is the Redis client library; the error means Gate is not able to connect to your Redis server.
Your LDAP config for Gate looks correct.
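For reference, Gate reads its Redis endpoint from its Spring configuration. A minimal sketch of pointing it at a reachable Redis instance (the file name, host and port below are assumptions, adjust them to your deployment):
# gate-local.yml (hypothetical values)
redis:
  connection: redis://10.19.11.13:6379
If Redis is supposedly running, also confirm the port is open from the Gate host, e.g. with telnet 10.19.11.13 6379.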

Related

Apache Kafka doesn't start after SSL configuration

I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS conf file using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
This is the SSL config in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL rows, I receive the error:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException: org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
Kafka 3.0 removed SimpleAclAuthorizer.
Commit - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog post seems to be using version 2.2.0.
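For reference, with that change the ACL section of server.properties from the question would read (a minimal sketch; the super.users value is kept from the question):
# ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin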

WMQ (IBM MQ) Connection timeout

I'm able to connect to the IBM queue directly, but when I try to connect from Mule I get the error below and am not able to deploy:
ERROR 2017-04-25 06:45:13,582 [main] org.mule.retry.notifiers.ConnectNotifier: Failed to connect/reconnect:
WebSphereMQConnector
{
name=WMQ2
lifecycle=initialise
this=5e7abaf7
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=false
supportedProtocols=[wmq]
serviceOverrides=<none>
}
. Root Exception was: Connection timed out: connect. Type: class java.net.ConnectException
ERROR 2017-04-25 06:50:23,943 [main] org.mule.module.launcher.application.DefaultMuleApplication:
************************************************
Message : JMSWMQ0018: Failed to connect to queue manager 'RQACBRKB' with connection mode 'Client' and host name '172.11.11.11(6912)'.
JMS Code : JMSWMQ0018
Element : /WMQ2 # app:config.xml:14 (WMQ)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.ConnectException: Connection timed out: connect
at java.net.TwoStacksPlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;AMQ9213: A communications error for occurred [1=java.net.ConnectException[Connection timed out: connect],3=rbitbrka.apl.com] at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.connnectUsingLocalAddress(RemoteTCPConnection.java:810) ~[?:?]
Please find below the connector details:
<wmq:connector name="WMQ5" hostName="${mq.host}" port="${mq.port}" queueManager="${mq.queue.manager}" channel="CLIENTS.SALES.CRM" username="${mq.user}" password="${mq.password}" transportType="CLIENT_MQ_TCPIP" specification="1.1" targetClient="JMS_COMPLIANT" validateConnections="false" doc:name="WMQ" maxRedelivery="-1">
<reconnect frequency="${mq.reconnection.period.ms}" count="${mq.reconnection.attempt}"/>
</wmq:connector>
When I telnet to the IP and port I get the error below:
C:\Users\111>telnet 172.11.11.11 6912
Connecting To 172.11.11.11...Could not open connection to the host, on port 6912: Connect failed
But when I ping I get a response:
C:\Users\111>ping 172.11.11.11
The pertinent pieces of information from your provided error are:-
JMSWMQ0018: Failed to connect to queue manager 'RQACBRKB'
with connection mode 'Client' and host name '172.11.11.11(6912)'.
com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;
MQRC 2538 is MQRC_HOST_NOT_AVAILABLE which is explained in Knowledge Center. In there it mentions the most common reasons for this error:-
The listener has not been started on the remote system. (please check that your listener is running on port 6912 on the machine at IP address 172.11.11.11)
The connection name in the client channel definition is incorrect. (the connection name your client is using is '172.11.11.11(6912)' - is this correct?)
The network is currently unavailable.
A firewall blocking the port, or protocol-specific traffic.
The security call initializing the IBM MQ client is blocked by a security exit on the SVRCONN channel at the server.
"java.net.ConnectException: Connection timed out: connect" usually occurs when you have a config issue or you are unable to connect to the remote server. As mentioned above do you have an error on the MQ end and if not have you checked the connection properties within config. If these are correct, are you able to reach MQ from another client, such as SOAPUI?
Can you also post the connector and flow details?
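If you (or the MQ administrator) have access to the queue manager host, one way to confirm a listener is actually running on that port is runmqsc (a sketch; the queue manager name is taken from the error message above):
runmqsc RQACBRKB
DISPLAY LSSTATUS(*) PORT STATUS
END
A running listener should show STATUS(RUNNING) with PORT(6912); if nothing is listed, the listener needs to be started on the queue manager.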

Error while integrating SonarQube with LDAP

sonar.security.realm=LDAP
ldap.url=ldap://ldap-company.com
ldap.bindDn=CN=xxxxx,OU=Restricted,OU=xxxx,DC=company,DC=com
ldap.bindPassword=none
# User Configuration
ldap.user.baseDn=ou=Users,dc=mycompany,dc=com
ldap.user.request=(&(objectClass=inetOrgPerson)(uid={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
ldap.group.baseDn=OU=Groups,OU=companyname,DC=comapany,DC=com
ldap.group.request=(&(objectClass=posixGroup)(memberUid={uid}))
These are my configurations for sonar.properties (SonarQube version 6.2, embedded database).
Do you have any idea how to integrate LDAP with SonarQube? I tried different ways but couldn't succeed.
I got an error:
2017.03.15 15:57:25 ERROR web[AVrTij8L9uoXNT8qAAAK][o.s.s.a.RealmAuthenticator] Error during authentication
org.sonar.plugins.ldap.LdapException: Unable to retrieve details for user xxx in <default>
and also:
Caused by: javax.naming.NamingException: [LDAP: error code 1 - 000004DC: LdapErr: DSID-0C090752, comment: In order to perform this operation a successful bind must be completed on the connection., dat
2017.03.15 15:55:05 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2017.03.15 15:55:49 ERROR web[AVrTij8L9uoXNT8qAAAJ][o.s.s.a.RealmAuthenticator] Error during authentication
org.sonar.plugins.ldap.LdapException: Unable to retrieve details for user xxxxx in <default>
at org.sonar.plugins.ldap.LdapUsersProvider.getUserDetails(LdapUsersProvider.java:84)
at org.sonar.plugins.ldap.LdapUsersProvider.doGetUserDetails(LdapUsersProvider.java:58)
at org.sonar.server.authentication.RealmAuthenticator.doAuthenticate(RealmAuthenticator.java:89)
at org.sonar.server.authentication.RealmAuthenticator.authenticate(RealmAuthenticator.java:83)
at org.sonar.server.authentication.CredentialsAuthenticator.authenticate(CredentialsAuthenticator.java:56)
at org.sonar.server.authentication.CredentialsAuthenticator.authenticate(CredentialsAuthenticator.java:45)
at org.sonar.server.authentication.ws.LoginAction.authenticate(LoginAction.java:91)
This is my web.log
2017.03.16 13:10:09 INFO web[][o.s.s.p.UpdateCenterClient] Update center: https://update.sonarsource.org/update-center.properties (no proxy)
2017.03.16 13:10:09 INFO web[][org.sonar.INFO] Security realm: LDAP
2017.03.16 13:10:09 INFO web[][o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=DC=company,DC=com, request=(&(objectClass=inetOrgPerson)(uid={0})), realNameAttribute=cn, emailAttribute=mail}
2017.03.16 13:10:09 INFO web[][o.s.p.l.LdapSettingsManager] Group mapping: LdapGroupMapping{baseDn=OU=Groups,OU=comapny,Dc=company,DC=com, idAttribute=cn, requiredUserAttributes=[uid], request=(&(objectClass=posixGroup)(memberUid={0}))}
2017.03.16 13:10:09 INFO web[][o.s.p.l.LdapContextFactory] Test LDAP connection: FAIL
2017.03.16 13:10:09 INFO web[][o.s.s.p.d.EmbeddedDatabase] Embedded database stopped
2017.03.16 13:10:09 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.plugins.ldap.LdapException: Unable to open LDAP connection
at org.sonar.plugins.ldap.LdapContextFactory.testConnection(LdapContextFactory.java:206)
at org.sonar.plugins.ldap.LdapRealm.init(LdapRealm.java:63)
at org.sonar.server.user.SecurityRealmFactory.start(SecurityRealmFactory.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)​
Your bind is failing. You need to test with an external LDAP tool like Apache Directory Studio or Softerra's LDAP Browser.
It could be a firewall issue from your server to the LDAP server, or the password could be incorrect. It does look like your Sonar server is able to talk to the LDAP server (which looks like Active Directory), since you get an AD-style error message about needing to bind before searching.
If you can capture the error from the failing bind, it will return error code 49 with a subcode that is of interest: 525, 52e, 777 or the like, which refer to the different reasons Active Directory will not let you connect.
Note: your bind password is 'none', and it is hard to tell whether that is you trying to hide the password or an actual literal password.
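One quick way to test the bind outside SonarQube is ldapsearch (a sketch; substitute your real bind DN, password and a known login):
ldapsearch -x -H ldap://ldap-company.com -D "CN=xxxxx,OU=Restricted,OU=xxxx,DC=company,DC=com" -W -b "ou=Users,dc=mycompany,dc=com" "(uid=someuser)"
If the bind credentials are rejected, this typically fails with "ldap_bind: Invalid credentials (49)" plus one of the subcodes mentioned above; if the search works here, the problem is more likely in the SonarQube-side configuration or the network path from the Sonar server.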

Apache Drill Impersonation

I'm trying to build in security on our Drill (1.6.0) system. I managed to get security user authentication to work (JPam, as explained in the documentation), but impersonation does not seem to work. It seems to execute and fetch via the admin user regardless of who has logged in via ODBC.
My drill-override.conf file is configured as follows:
drill.exec: {
cluster-id: "drillbits1",
zk.connect: "localhost:2181",
impersonation: {
enabled: true,
max_chained_user_hops: 3
},
security.user.auth {
enabled: true,
packages += "org.apache.drill.exec.rpc.user.security",
impl: "pam",
pam_profiles: [ "sudo", "login" ]
}
}
We are also only using Drill on one server, so I'm running drill-embedded to start things up. Troubleshooting:
root#srv001:/opt/apache-drill-1.6.0# bin/sqlline -u "jdbc:drill:schema=dfs;zk=localhost:2181;impersonation_target=dUser001" -n entryUser -p entryUserPassword
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init> (DrillConnectionImpl.java:159)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:64)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:200)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:151)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:198)
... 19 more
Any ideas on this?
I have also looked at doing my own built-in security, but I'm not able to retrieve the username from a SQL query. I have tried the following without any luck:
CURRENT_USER()
USER()
SESSION_USER()
Any ideas on this approach?
I suggest creating a different PAM profile (say drill) rather than using login and sudo.
Then create a drill file under the /etc/pam.d/ directory with the content:
#%PAM-1.0
auth include password-auth
account include password-auth
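After creating that profile, point Drill at it in drill-override.conf (a sketch based on the security block already shown in the question) and restart the drillbit:
security.user.auth {
  enabled: true,
  packages += "org.apache.drill.exec.rpc.user.security",
  impl: "pam",
  pam_profiles: [ "drill" ]
}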
To get connections run:
select * from sys.connections;

HBase Master and Region servers could not be started

Hadoop is successfully running in distributed mode.
I am getting the following error while starting HBase in distributed mode.
I have tried everything in the hbase-site.xml configuration and have no idea how to proceed with the problem.
2014-03-10 13:55:42,493 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server ip-112-11-1-111.ec2.internal/112.11.1.111:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2014-03-10 13:55:42,494 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2014-03-10 13:55:42,594 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
2014-03-10 13:55:42,594 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
2014-03-10 13:55:42,595 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2104)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2118)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1069)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:199)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1109)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1099)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:1083)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:162)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:345)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2099)
Make sure that ZooKeeper is provisioned and running as expected.
Check zoo.cfg and /etc/hosts to make sure that all ZooKeeper servers are reachable by the HBase master.
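A quick sanity check from the HBase master host (a sketch; the host and port are taken from the log above):
echo ruok | nc ip-112-11-1-111.ec2.internal 2181
A healthy ZooKeeper replies with imok. The matching hbase-site.xml entries would then look like:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>ip-112-11-1-111.ec2.internal</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>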