hive on tez, cannot monitor timeline in Tez UI

I configured Hive on Tez as follows.
I can see my queries in the HiveServer2 Web UI.
However, I cannot see any DAGs in the Tez UI after I submit Hive queries.
What configuration am I missing?
$HADOOP/etc/hadoop/mapred-site.xml
mapreduce.framework.name=yarn-tez
$YARN/conf/yarn-site.xml
yarn.timeline-service.enabled=true
yarn.timeline-service.hostname=localhost
yarn.timeline-service.http-cross-origin.enabled=true
yarn.resourcemanager.system-metrics-publisher.enabled=true
$HIVE/conf/hive-site.xml
hive.execution.engine=tez
$TEZ/conf/tez-site.xml
tez.history.logging.service.class=org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService
tez.tez-ui.history-url.base=http://localhost/tez-ui/
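In XML property form, the tez-site.xml entries are:
<property>
  <name>tez.history.logging.service.class</name>
  <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
</property>
<property>
  <name>tez.tez-ui.history-url.base</name>
  <value>http://localhost/tez-ui/</value>
</property>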
I tried a simple SELECT query and a JOIN query.
The JOIN query failed with this error:
java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:296)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:581)
at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:692)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:97)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:498)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Related

Apache Beam pipeline running on Dataflow failed to read from KafkaIO: SSL handshake failed

I'm building an Apache Beam pipeline to read from Kafka as an unbounded source.
I was able to run it locally using the direct runner.
However, the pipeline fails with the attached exception stack trace when run in the cloud using the Google Cloud Dataflow runner.
It seems it's ultimately the Conscrypt Java library that's throwing javax.net.ssl.SSLException: Unable to parse TLS packet header. I'm not really sure how to address this issue.
java.io.IOException: Failed to start reading from source: org.apache.beam.sdk.io.kafka.KafkaUnboundedSource@33b5ff70
com.google.cloud.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:783)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:360)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:135)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:126)
com.google.cloud.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:778)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:360)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:135)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:206)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:112)
com.google.cloud.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:778)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:360)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:135)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLException: Unable to parse TLS packet header
org.conscrypt.ConscryptEngine.unwrap(ConscryptEngine.java:782)
org.conscrypt.ConscryptEngine.unwrap(ConscryptEngine.java:723)
org.conscrypt.ConscryptEngine.unwrap(ConscryptEngine.java:688)
org.conscrypt.Java8EngineWrapper.unwrap(Java8EngineWrapper.java:236)
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:464)
org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:328)
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:255)
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:79)
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:460)
org.apache.kafka.common.network.Selector.poll(Selector.java:398)
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:238)
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:214)
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:190)
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:219)
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205)
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.fetchCommittedOffsets(ConsumerCoordinator.java:468)
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.refreshCommittedOffsetsIfNeeded(ConsumerCoordinator.java:450)
org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1772)
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1411)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.setupInitialOffset(KafkaUnboundedReader.java:641)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.lambda$start$0(KafkaUnboundedReader.java:106)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
It looks like Conscrypt causes SSL errors in many contexts like this. The Dataflow worker in Beam 2.9.0 has an option to disable it: try passing --experiment=disable_conscrypt_security_provider. Alternatively, you can try Beam 2.4.x, which does not enable Conscrypt.
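As a minimal sketch (not the poster's code), the same experiment can be set programmatically when building the pipeline options; this assumes Beam 2.9.0+ with the Dataflow runner dependency on the classpath:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.ExperimentalOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class DisableConscryptExample {
  public static void main(String[] args) {
    DataflowPipelineOptions options = PipelineOptionsFactory
        .fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);

    // Same effect as passing the experiment flag on the command line.
    ExperimentalOptions.addExperiment(options, "disable_conscrypt_security_provider");

    Pipeline pipeline = Pipeline.create(options);
    // ... apply KafkaIO.read() and the rest of the pipeline here ...
    pipeline.run();
  }
}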

hive shell is not working

When I try to access the Hive shell, it shows the error log below. I am using CDH version 5.12.
[cloudera@quickstart ~]$ hive
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException:
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
at org.apache.hadoop.hive.metast...
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:388)
at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:810)
at org.apache.hadoop.hive.ql.session.SessionState.getAuthorizationMode(SessionState.java:1679)
at org.apache.hadoop.hive.ql.session.SessionState.isAuthorizationModeV2(SessionState.java:1690)
at org.apache.hadoop.hive.ql.processors.CommandUtil.authorizeCommand(CommandUtil.java:55)
at org.apache.hadoop.hive.ql.processors.AddResourceProcessor.run(AddResourceProcessor.java:66)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:275)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:416)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:432)
at org.apache.hadoop.hive.cli.CliDriver.processInitFiles(CliDriver.java:466)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:717)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
... 45 more
)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:512)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:244)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 42 more
[cloudera@quickstart ~]$
Try
beeline -u jdbc:hive2://localhost:10000
Also make sure your hive-site.xml has the configuration below:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true&amp;autoReconnect=true&amp;useSSL=false</value>
</property>
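The "Connection refused" at the bottom of the trace means nothing was listening where the client expected the metastore (Thrift, port 9083 by default), so it is also worth confirming the metastore service is up. On the CDH quickstart VM the check would look roughly like this (service name assumed from CDH packaging):
sudo service hive-metastore status
sudo service hive-metastore restart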

org.infinispan.util.concurrent.TimeoutException: Timed out applying state

We are running Infinispan 7.2.5 with 3 instances in a REPL (replicated) cluster.
A Spark client is connected to the cluster using HotRod.
Suddenly, the cluster view was updated and one instance was removed from the cluster, and from the client view as well.
The instance was up and running but could not connect to the other instances and kept getting timeout exceptions.
What could have caused the instance to leave the cluster, and what prevented it from joining back? Any insights would be appreciated.
Meanwhile, CPU usage was very high on that instance. Could this be due to re-join attempts (if those are actually CPU-intensive), or could something else be the reason?
The client connected to Infinispan via HotRod observed the exception below:
WARN (ClientListenerNotifier.java:266) - ISPN004039: Unable to complete reading event from server null
java.nio.channels.IllegalBlockingModeException
at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.readByte(TcpTransport.java:179)
at org.infinispan.client.hotrod.impl.protocol.Codec20.readMagic(Codec20.java:282)
at org.infinispan.client.hotrod.impl.protocol.Codec20.readEvent(Codec20.java:126)
at org.infinispan.client.hotrod.event.ClientListenerNotifier$EventDispatcher.run(ClientListenerNotifier.java:237)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Around the same time, the Infinispan server instance logged the following:
WARN [org.infinispan.remoting.inboundhandler.NonTotalOrderPerCacheInboundInvocationHandler] (remote-thread--p3-t13) ISPN000071: Caught exception when handling command StateResponseCommand{cache=AsrlEnbTopologyCache, origin=asr-1-asrltopologyservice-24247, topologyId=9}: org.infinispan.util.concurrent.TimeoutException: Timed out applying state
at org.infinispan.statetransfer.StateConsumerImpl.applyState(StateConsumerImpl.java:542) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.StateResponseCommand.perform(StateResponseCommand.java:62) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:85) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:32) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_131]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_131]
whereas the other instance, which appears to have been separated from the cluster, logged the following:
ERROR [org.infinispan.statetransfer.OutboundTransferTask] (transport-thread--p2-t12) Failed to send entries to node asr-2-asrltopologyservice-2286 : Node asr-2-asrltopologyservice-2286 timed out: org.infinispan.util.concurrent.TimeoutException: Node asr-2-asrltopologyservice-2286 timed out
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:248) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:561) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.OutboundTransferTask.sendEntries(OutboundTransferTask.java:239) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.OutboundTransferTask.sendEntry(OutboundTransferTask.java:195) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:149) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_131]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_131]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_131]
Caused by: org.jgroups.TimeoutException: timeout waiting for response from asr-2-asrltopologyservice-2286, request: org.jgroups.blocks.UnicastRequest@6ac2f8a3, req_id=1067, mode=GET_ALL, target=asr-2-asrltopologyservice-2286
at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:427) [jgroups-3.6.2.Final.jar:3.6.2.Final]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:433) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:241) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
... 12 more
ERROR [org.infinispan.statetransfer.OutboundTransferTask] (transport-thread--p2-t14) Failed to send entries to node asr-2-asrltopologyservice-2286 : Node asr-2-asrltopologyservice-2286 timed out: org.infinispan.util.concurrent.TimeoutException: Node asr-2-asrltopologyservice-2286 timed out
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:248) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:561) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.OutboundTransferTask.sendEntries(OutboundTransferTask.java:239) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.OutboundTransferTask.sendEntry(OutboundTransferTask.java:195) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:149) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_131]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_131]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_131]
Caused by: org.jgroups.TimeoutException: timeout waiting for response from asr-2-asrltopologyservice-2286, request: org.jgroups.blocks.UnicastRequest@1984efac, req_id=1069, mode=GET_ALL, target=asr-2-asrltopologyservice-2286
at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:427) [jgroups-3.6.2.Final.jar:3.6.2.Final]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:433) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:241) [infinispan-core-7.2.5.Final.jar:7.2.5.Final]
... 12 more

RPC response exceeds maximum data length in Hadoop

I am trying to configure Hadoop following https://www.tutorialspoint.com/hadoop/hadoop_mapreduce.htm.
I have configured everything. Now, when executing the command $HADOOP_HOME/bin/hadoop jar /home/abp/unit/unit.jar com.hadoop.ProcessUnits input_dir output_dir,
I get the exception below:
abp@abp-Precision-T1700:/usr/local/hadoop/bin$ $HADOOP_HOME/bin/hadoop jar /home/abp/unit/unit.jar com.hadoop.ProcessUnits input_dir output_dir
17/04/26 15:09:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/26 15:09:10 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/04/26 15:09:10 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
Exception in thread "main" java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "abp-Precision-T1700/127.0.1.1"; destination host is: "localhost":9000;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1485)
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1337)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:787)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1700)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1436)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1433)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1433)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1436)
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:130)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:270)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:141)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:870)
at com.hadoop.ProcessUnits.main(ProcessUnits.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1800)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1155)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1052)
Please help me in resolving this.
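For reference, the NameNode address being contacted here ("localhost":9000) comes from core-site.xml; in the tutorialspoint-style setup it is typically configured as below (property name as used in that tutorial; newer configurations use fs.defaultFS), so it is worth double-checking that this matches the port the NameNode actually serves RPC on:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>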

wso2 das 3.0.1 with am 1.10.0: Cannot borrow client for ssl://localhost:7711

I tried setting the port offset to 3 and to 0; statistics work fine with the REST setup for now. However, the DAS wso2carbon.log keeps logging the following error messages:
TID: [-1] [] [2016-09-15 16:27:30,727] ERROR {org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker} -
Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7711 {org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker}
org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://localhost:7711
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:100)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:43)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7711
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:61)
at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:92)
... 6 more
Caused by: org.apache.thrift.transport.TTransportException: Could not connect to localhost on port 7711
at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:212)
at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:166)
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:56)
... 9 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:637)
at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:425)
at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:208)
... 11 more
I am wondering what caused this, and how to fix it.
What I did was remove port 7711 from the data receiver config in the APIM admin-dashboard/Analytics, so it now only lists tcp://localhost:7611. That seems to fix it, but I don't know why nothing responded at 7711, since it is actually configured in the DAS conf/data-bridge/data-bridge-config.xml.
If you have installed any CAR files, you may need to update the receiver URL ports in the scripts inside them to match your offset.
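As a concrete illustration (field names as used in the databridge receiver configuration; ports assumed from the 7611/7711 defaults, which shift with the carbon port offset like the other server ports):
offset 0: receiverURL tcp://localhost:7611, authURL ssl://localhost:7711
offset 3: receiverURL tcp://localhost:7614, authURL ssl://localhost:7714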