Cloudera CDH 4.6.0 - Hive metastore service not starting

I installed Cloudera CDH 4.6.0 on my CentOS 6.2 Linux server (Cloudera Manager 4.8). I am able to start a few services, but I am not able to start the Hive metastore service.
Cloudera is using PostgreSQL as the remote metastore DB. My host name is delvmpll2, but when starting the Hive service it throws java.net.UnknownHostException: localhost.localdomain.
I edited the hostname in hive-site.xml and restarted all the services, but the same exception still occurs. I could not find the place where Cloudera picks up this hostname.
Could someone please let me know what might have gone wrong?
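For reference, the metastore DB hostname in hive-site.xml normally appears in the JDBC connection URL; a sketch of the property I edited (the port is a placeholder and may differ in your deployment):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- hostname of the remote PostgreSQL metastore DB; port is a placeholder -->
  <value>jdbc:postgresql://delvmpll2:PORT/hive</value>
</property>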
Here is the exception:
Caused by: java.net.UnknownHostException: localhost.localdomain
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:195)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:189)
at org.postgresql.core.PGStream.<init>(PGStream.java:62)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:76)
... 58 more
2014-07-04 07:16:06,354 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: Shutting down hive metastore.
Thanks in advance

Finally, I solved it.
I changed the server_host value in the config.ini file under /etc/cloudera-scm-agent to my host name, and after restarting the services, all of them are running well.
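For reference, a minimal sketch of the relevant entry (delvmpll2 is the hostname from the question; other settings in your config.ini will differ):
# /etc/cloudera-scm-agent/config.ini
[General]
# host name of the Cloudera Manager server this agent reports to
server_host=delvmpll2
After editing, restart the agent (service name assumed from a standard package install): sudo service cloudera-scm-agent restart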

Related

Apache Ranger Audit log connect with Solr Cloud Mode with SSL

I have three nodes running Solr and ZooKeeper with TLS/SSL enabled, where ZooKeeper listens only on its securePort and Solr is served over HTTPS.
Now I want to connect Solr to Apache Ranger for audit logs, where I am setting:
ranger.audit.solr.urls = https://HOST1:8983/solr/ranger_audits
and
ranger_admin_solr_zookeepers = HOST1:2281,HOST2:2281,HOST3:2281
Apache Ranger itself is not in SSL mode and listens only on HTTP.
For Solr, I have successfully created the ranger_audits configset and a collection with the same name.
The ZooKeeper election is also successful, with 1 leader and 2 followers.
So everything works as expected except the Apache Ranger audit communication.
Apache Ranger version: 2.0
ZooKeeper version: 3.6.3
Solr version: 8.11.1
With the current settings I get the following exception when I open the Audit tab in the Ranger UI:
2022-03-22 06:54:08,189 [http-bio-6080-exec-2] INFO org.apache.ranger.common.RESTErrorUtil (RESTErrorUtil.java:326) - Operation error. response=VXResponse={org.apache.ranger.view.VXResponse#7ef95c52statusCode={1} msgDesc={Error running solr query, please check solr configs. java.util.concurrent.TimeoutException: Could not connect to ZooKeeper HOST1:2281,HOST2:2281,HOST3:2281 within 15000 ms} messageList={[VXMessage={org.apache.ranger.view.VXMessage#3bd495a3name={ERROR_SYSTEM} rbKey={xa.error.system} message={System Error. Please try later.} objectId={null} fieldName={null} }]} }
javax.ws.rs.WebApplicationException
UPDATE:
The solution is to provide a jaas.conf file and the following Java system properties, which fixed the problem:
-Dzookeeper.client.secure=true
-Djava.security.auth.login.config=/etc/ranger/admin/conf/jaas.conf
A sample jaas.conf is:
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="admin-pass";
};
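One way to pass these properties to the Ranger admin JVM is through its startup environment; a sketch, assuming the admin startup script honors JAVA_OPTS (the exact script or environment file depends on your installation):
# appended to the environment used to start ranger-admin (location is an assumption)
export JAVA_OPTS="$JAVA_OPTS -Dzookeeper.client.secure=true -Djava.security.auth.login.config=/etc/ranger/admin/conf/jaas.conf"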
Please note that this is not a complete solution, and the connection from Ranger through the TLS-enabled ZooKeepers is still problematic.

Kafka-s3-connect killed instantly after start

I want to connect AWS Kafka (MSK) with S3 using the Confluent connector on my EC2 server. I tried to configure everything as in the tutorials. When I run connect-standalone or connect-distributed, at first everything goes well and I don't get any errors in the logs, but right after the information about the connection starting, my connector dies instantly without any further output. Has anybody had the same problem?
config/connect-standalone.properties
bootstrap.servers=msk-connection-string
plugin.path=/home/ubuntu/connectors/confluentinc-kafka-connect-s3
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
offset.storage.file.filename=/tmp/connect.offsets
connector.properties
connector.class=io.confluent.connect.s3.S3SinkConnector
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
flush.size=1
topics=SomeTopic
s3.bucket.name=bucket-name-here
s3.region=us-west-2
s3.part.size=5242880
aws.access.key.id=****
aws.secret.access.key=****
behavior.on.null.values=ignore
storage.class=io.confluent.connect.s3.storage.S3Storage
topics.dir=../topics
store.url=http://bucket-name.s3-website-Region.amazonaws.com
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
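For reference, both property files above are passed to the standalone worker roughly like this (a sketch; the script path assumes a stock Apache Kafka installation):
bin/connect-standalone.sh config/connect-standalone.properties connector.properties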
logs:
[2021-08-20 06:32:35,954] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser:119)
[2021-08-20 06:32:35,954] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser:120)
[2021-08-20 06:32:35,954] INFO Kafka startTimeMs: 1629441155953 (org.apache.kafka.common.utils.AppInfoParser:121)
Killed
Please help!
MSK requires a TLS connection.
After adding a few lines of SSL configuration to config/connect-standalone.properties:
producer.security.protocol=SSL
consumer.security.protocol=SSL
security.protocol=SSL
ssl.protocol=TLS
ssl.truststore.location=/your/path/to/truststore/kafka.client.truststore.jks
It starts working properly!
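If you do not have a client truststore yet, a common approach (an assumption here, not part of the original answer) is to copy the JVM's default CA store, which already trusts the Amazon-issued broker certificates; the JDK path is a placeholder for your system:
cp /usr/lib/jvm/<your-jdk>/jre/lib/security/cacerts /your/path/to/truststore/kafka.client.truststore.jks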

ActiveMQ Master/Slave on Weblogic - vm transport issue

I am trying to configure an ActiveMQ master/slave setup on a single WebLogic machine. When I start Managed Server1 it successfully connects to the vm transport and everything works perfectly, but when I start Managed Server2 I receive the following errors in the broker logs:
INFO 2016-September-27 10:08:00,227 ActiveMQEndpointWorker:124 - Connection attempt already in progress, ignoring connection exception
INFO 2016-September-27 10:08:01,161 TransportConnector:260 - Connector vm://localhost started
INFO 2016-September-27 10:08:30,228 TransportConnector:291 - Connector vm://localhost stopped
INFO 2016-September-27 10:08:30,229 TransportConnector:260 - Connector vm://localhost started
WARN 2016-September-27 10:08:30,228 ActiveMQManagedConnection:385 - Connection failed: javax.jms.JMSException: peer (vm://localhost#61) stopped.
WARN 2016-September-27 10:08:30,231 TransportConnection:823 - Failed to add Connection ID:ndl-wls-300.mydomain.com-52251-1474966937425-65:1 due to java.lang.NullPointerException
ERROR 2016-September-27 10:08:30,233 ActiveMQEndpointWorker:183 - Failed to connect to broker [vm://localhost?create=false]: java.lang.NullPointerException
javax.jms.JMSException: java.lang.NullPointerException
Please help, I am stuck with this.
I still don't see the reason for the slave within the same VM. I suggest you reach out to an ActiveMQ expert consultant to validate your architecture.
However, I think I can help you move a little closer to resolving this issue.
There is a fundamental misunderstanding here. The vm URL is broken down like this:
vm://${brokerName}?option=value,etc
The first time, you create vm://localhost?create=true, which creates a broker.
The second time, you reference vm://localhost?create=false, which creates a client connection to the first broker.
To get two brokers, you'd need two different vm://${brokerName}?create=true URIs, i.e. two distinct broker names.
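For illustration, a sketch of what two separate in-VM brokers would look like (the broker names are made up):
vm://brokerA?create=true
vm://brokerB?create=true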

Timeout while waiting for the management service to start up.120 secs

I am using the following on Linux
MFP 6.3
WAS Liberty 8.5.5.6 (core trial)
Tried with JDK 1.7 and JDK 1.6, but nothing worked out
MySQL
I could not see any other error/exception in messages.log except this one, and I am not sure where to change the 'timeout' value in the WAS Liberty profile.
http://pastebin.com/7uuVtjHL (server.xml)
http://pastebin.com/2ScrUQLa (messages.log)
Exception thrown by application class 'com.worklight.core.auth.impl.AuthenticationFilter.isWaitingForSynchronization:598'
javax.servlet.ServletException: java.lang.RuntimeException: Timeout while waiting for the management service to start up.120 secs.
at com.worklight.core.auth.impl.AuthenticationFilter.isWaitingForSynchronization(AuthenticationFilter.java:598)
at com.worklight.core.auth.impl.AuthenticationFilter.doFilter(AuthenticationFilter.java:141)
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:207)
at [internal classes]
Caused by: java.lang.RuntimeException: Timeout while waiting for the management service to start up.120 secs.
at com.worklight.core.init.WorklightServletInitializer$1.run(WorklightServletInitializer.java:121)
at java.lang.Thread.run(Thread.java:798)
Right now it does seem like you are experiencing the same issue as mentioned here: How to solve management service not starting up in Worklight 6.2
You are currently using IBM Java 1.7 per the messages.log file:
java.home = /usr/lib/jvm/java-1.7.0-ibm-1.7.0.9.0.x86_64/jre
Download Oracle Java 1.7 and make sure your java.home points to it. Start the server and see if there are any differences.
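One way to point Liberty at a different JDK is a server.env file (a sketch assuming a default Liberty layout; the JDK path is a placeholder):
# wlp/etc/server.env (or usr/servers/<serverName>/server.env)
JAVA_HOME=/opt/jdk1.7.0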
Instead, or in addition, you can try this: https://developer.ibm.com/answers/questions/184195/no-runtime-can-be-found-and-failed-to-obtain-jmx-c.html
In server.xml find the following:
<jndiEntry jndiName="ibm.worklight.admin.jmx.host" value="localhost"/>
Replace "localhost" with the Public IP address of the host machine and start the server.

How can I change which address Datastax agent will try to connect to?

My Cassandra instances are not listening on 127.0.0.1. When I start the datastax-agent I find this in the logs:
# tail -n 100 /var/log/datastax-agent/agent.log
...
ERROR [Initialization] 2015-05-19 22:35:04,064 Can't connect to Cassandra, retrying soon.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:246)
at clojurewerkz.cassaforte.client$connect_or_close.doInvoke(client.clj:149)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at clojurewerkz.cassaforte.client$connect.invoke(client.clj:165)
at opsagent.cassandra$setup_cassandra$fn__8157.invoke(cassandra.clj:344)
at again.core$with_retries_STAR_$fn__8013.invoke(core.clj:98)
at again.core$with_retries_STAR_.invoke(core.clj:97)
at opsagent.cassandra$setup_cassandra.invoke(cassandra.clj:339)
at opsagent.opsagent$setup_cassandra.invoke(opsagent.clj:153)
at opsagent.jmx$determine_ip.invoke(jmx.clj:276)
at opsagent.jmx$setup_jmx$fn__8438.invoke(jmx.clj:293)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
How can I change which address the DataStax Agent connects to? I have tried setting local_interface in the agent's address.yaml (and restarting the agent), but that doesn't seem to work.
The secret was to set rpc_address to 0.0.0.0 in cassandra.yaml. Credit to LHWizard for pointing this out.
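A minimal sketch of the change (the broadcast address is a placeholder; on newer Cassandra versions, broadcast_rpc_address must be set to a routable address when rpc_address is 0.0.0.0):
# cassandra.yaml
rpc_address: 0.0.0.0
broadcast_rpc_address: 192.0.2.10
Restart Cassandra and then the datastax-agent after making the change.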