Using Redis in JMeter GUI and SSL Exception: Connection Reset - redis

I'm trying to use a Redis Data Set in JMeter. I'm on a VPN and bypassing the proxy; the log is included below.
In the HTTP Request sampler I entered the address of the Amazon web service I'm trying to reach. I was able to run other tests successfully before adding Redis; now, with Redis, I can't get past this error.
I found some SSL certificate guidance on the internet, but it mostly says to install the certificate in Chrome or Firefox.
Thanks
Redis localhost settings
Bypass proxy
2022-03-23 09:58:11,976 INFO o.a.j.u.JsseSSLManager: Using default SSL protocol: TLS
2022-03-23 09:58:11,976 INFO o.a.j.u.JsseSSLManager: SSL session context: per-thread
2022-03-23 09:58:11,976 INFO o.a.j.u.SSLManager: JmeterKeyStore Location: type JKS
2022-03-23 09:58:11,976 INFO o.a.j.u.SSLManager: KeyStore created OK
2022-03-23 09:58:11,976 WARN o.a.j.u.SSLManager: Keystore file not found, loading empty keystore
2022-03-23 09:58:13,104 INFO o.a.j.g.a.Start: Stopping test
2022-03-23 09:58:13,120 INFO o.a.j.t.JMeterThread: Stopping: Thread Group 1-1
2022-03-23 09:58:13,120 WARN o.a.j.t.JMeterThread: Interrupting: Thread Group 1-1 sampler: HTTP Request
2022-03-23 09:58:13,120 INFO o.a.j.t.JMeterThread: Thread finished: Thread Group 1-1
2022-03-23 09:58:13,129 INFO o.a.j.e.StandardJMeterEngine: Notifying test listeners of end of test
2022-03-23 09:58:13,131 WARN r.c.j.JedisFactory: Error while close
redis.clients.jedis.exceptions.JedisException: Could not return the broken resource to the pool
at redis.clients.jedis.util.Pool.returnBrokenResourceObject(Pool.java:116) ~[jedis-3.6.3.jar:?]
at redis.clients.jedis.util.Pool.returnBrokenResource(Pool.java:98) ~[jedis-3.6.3.jar:?]
at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:382) ~[jedis-3.6.3.jar:?]
at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:15) ~[jedis-3.6.3.jar:?]
at redis.clients.jedis.Jedis.close(Jedis.java:3957) ~[jedis-3.6.3.jar:?]
at redis.clients.jedis.JedisFactory.destroyObject(JedisFactory.java:166) [jedis-3.6.3.jar:?]
at org.apache.commons.pool2.PooledObjectFactory.destroyObject(PooledObjectFactory.java:126) [commons-pool2-2.9.0.jar:2.9.0]
at org.apache.commons.pool2.impl.GenericObjectPool.destroy(GenericObjectPool.java:958) [commons-pool2-2.9.0.jar:2.9.0]
at org.apache.commons.pool2.impl.GenericObjectPool.clear(GenericObjectPool.java:677) [commons-pool2-2.9.0.jar:2.9.0]
at org.apache.commons.pool2.impl.GenericObjectPool.close(GenericObjectPool.java:721) [commons-pool2-2.9.0.jar:2.9.0]
at redis.clients.jedis.util.Pool.closeInternalPool(Pool.java:122) [jedis-3.6.3.jar:?]
at redis.clients.jedis.util.Pool.destroy(Pool.java:109) [jedis-3.6.3.jar:?]
at kg.apc.jmeter.config.redis.RedisDataSet.testEnded(RedisDataSet.java:200) [jmeter-plugins-redis-0.5.jar:?]
at kg.apc.jmeter.config.redis.RedisDataSet.testEnded(RedisDataSet.java:195) [jmeter-plugins-redis-0.5.jar:?]
at org.apache.jmeter.engine.StandardJMeterEngine.notifyTestListenersOfEnd(StandardJMeterEngine.java:218) [ApacheJMeter_core.jar:5.4.3]
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:493) [ApacheJMeter_core.jar:5.4.3]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
Caused by: java.lang.IllegalStateException: Invalidated object not currently part of this pool
at org.apache.commons.pool2.impl.GenericObjectPool.invalidateObject(GenericObjectPool.java:642) ~[commons-pool2-2.9.0.jar:2.9.0]
at org.apache.commons.pool2.impl.GenericObjectPool.invalidateObject(GenericObjectPool.java:620) ~[commons-pool2-2.9.0.jar:2.9.0]
at redis.clients.jedis.util.Pool.returnBrokenResourceObject(Pool.java:114) ~[jedis-3.6.3.jar:?]
... 16 more
2022-03-23 09:58:13,136 INFO o.a.j.g.u.JMeterMenuBar: setRunning(false, *local*)
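Two separate things are visible here: the HTTPS sampler's TLS handshake fails during the test, while the Jedis stack trace above only fires at test shutdown (RedisDataSet.testEnded), when the broken pooled connection is returned. A minimal sketch for checking the Redis side in isolation, assuming Redis on localhost:6379 without TLS; the key name is a placeholder for whatever the Redis Data Set reads:

import redis.clients.jedis.Jedis;

public class RedisSmokeTest {
    public static void main(String[] args) {
        // Host/port are assumptions matching a default local Redis
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            System.out.println("PING -> " + jedis.ping());                    // expect PONG
            System.out.println("LLEN -> " + jedis.llen("your-dataset-key")); // placeholder key
        }
    }
}

If this succeeds, the Redis setup itself is fine and the SSL error is purely on the HTTP sampler side.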

Related

Securing communication between Kafka client and Zookeeper server

I've configured both the Kafka server and the Zookeeper server to use SSL/TLS with JKS keystores, and I've confirmed this using openssl. I'm using the Bitnami Helm charts for Kafka and Zookeeper. Below is the log output from Kafka. Judging by the Zookeeper logs, I'm fairly sure the Kafka client isn't sending requests to the Zookeeper server securely. How do I ensure the Kafka client uses SSL/TLS? I think the Kafka client needs to use a client.properties file when executing config commands with args, but I don't know how to pass this file in during configuration. The logs show the Kafka client trying to add a user called zookeeperUser to Zookeeper, and this communication is not secure.
Kafka Logs
09:56:31.43
09:56:31.43 Welcome to the Bitnami kafka container
09:56:31.44 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
09:56:31.44 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
09:56:31.44
09:56:31.44 INFO ==> ** Starting Kafka setup **
09:56:31.56 DEBUG ==> Validating settings in KAFKA_* env vars...
09:56:31.65 INFO ==> Initializing Kafka...
09:56:31.66 INFO ==> No injected configuration files found, creating default config files
09:56:32.96 INFO ==> Configuring Kafka for inter-broker communications with SASL_SSL authentication.
09:56:33.13 INFO ==> Configuring Kafka for client communications with SASL_SSL authentication.
09:56:33.43 INFO ==> Custom JAAS authentication file detected. Skipping generation.
09:56:33.43 WARN ==> The following environment variables will be ignored: KAFKA_CLIENT_USERS, KAFKA_CLIENT_PASSWORDS, KAFKA_INTER_BROKER_USER, KAFKA_INTER_BROKER_PASSWORD, KAFKA_ZOOKEEPER_USER and KAFKA_ZOOKEEPER_PASSWORD
09:56:33.44 INFO ==> Creating users in Zookeeper
09:56:33.44 DEBUG ==> Creating user zookeeperUser in zookeeper
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Error while executing config command with args '--zookeeper zookeeper.default.svc.cluster.local:3181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=zookeeperPassword],SCRAM-SHA-512=[password=zookeeperPassword] --entity-type users --entity-name zookeeperUser'
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1881)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:116)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:94)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
client.properties
cat > client.properties <<EOF
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=/tmp/kafka.truststore.jks
ssl.truststore.password=******
EOF
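For what it's worth, when the config tool is pointed at a broker with --bootstrap-server, it accepts --command-config client.properties to pick up the settings above; the deprecated --zookeeper path used in the log talks to Zookeeper directly and ignores these client settings. As a hedged connectivity check from Java (the bootstrap address and the JAAS credentials below are assumptions, not values from this deployment), the same properties file can be loaded into an AdminClient:

import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class SaslSslCheck {
    public static void main(String[] args) throws Exception {
        // Load the client.properties shown above
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("client.properties")) {
            props.load(in);
        }
        // Placeholders: adjust to the real broker address and SCRAM user
        props.put("bootstrap.servers", "kafka.default.svc.cluster.local:9092");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"user\" password=\"password\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Succeeds only if the SASL_SSL handshake with the broker works
            System.out.println("Cluster id: " + admin.describeCluster().clusterId().get());
        }
    }
}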
Zookeeper Logs
2021-02-11 09:56:43,055 [myid:1] - ERROR [nioEventLoopGroup-7-1:NettyServerCnxnFactory$CertificateVerifier@434] - Unsuccessful handshake with session 0x0
2021-02-11 09:56:43,055 [myid:1] - WARN [nioEventLoopGroup-7-1:NettyServerCnxnFactory$CnxnChannelHandler@273] - Exception caught
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
... 17 more

Erlang fails to resolve IPv6 addresses using parameters from RabbitMQ

I'm running a RabbitMQ cluster in Kubernetes that has only pure IPv6 addresses. inet returns an nxdomain error when resolving the Kubernetes service name.
The parameters passed to Erlang from RabbitMQ are:
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+A 128 -kernel inetrc '/etc/rabbitmq/erl_inetrc' -proto_dist inet6_tcp"
RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
erl_inetrc: |-
{inet6, true}.
When RabbitMQ uses its rabbit_peer_discovery_k8s plugin to invoke the Kubernetes API:
2019-10-15 07:33:55.000 [info] <0.238.0> Peer discovery backend does not support locking, falling back to randomized delay
2019-10-15 07:33:55.000 [info] <0.238.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2019-10-15 07:33:55.000 [debug] <0.238.0> GET https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/tazou/endpoints/zt4-crmq
2019-10-15 07:33:55.015 [debug] <0.238.0> Response: {error,{failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}}
2019-10-15 07:33:55.015 [debug] <0.238.0> HTTP Error {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}
2019-10-15 07:33:55.015 [info] <0.238.0> Failed to get nodes from k8s - {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}
2019-10-15 07:33:55.016 [error] <0.237.0> CRASH REPORT Process <0.237.0> with 0 neighbours exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 167 in application_master:init/4 line 138
2019-10-15 07:33:55.016 [info] <0.43.0> Application rabbit exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n
In the Kubernetes console, the address can be resolved:
[rabbitmq]# nslookup -type=AAAA kubernetes.default.svc.cluster.local
Server: 2019:282:4000:2001::6
Address: 2019:282:4000:2001::6#53
kubernetes.default.svc.cluster.local has AAAA address fd01:abcd::1
And inet can return the IPv6 address:
kubectl exec -ti zt4-crmq-0 rabbitmqctl eval 'inet:gethostbyname("kubernetes.default.svc.cluster.local").'
{ok,{hostent,"kubernetes.default.svc.cluster.local",[],inet6,16,
[{64769,43981,0,0,0,0,0,1}]}}
As far as I know, the plugin calls httpc:request to invoke the Kubernetes API. I don't know what the gap is between httpc:request and inet:gethostbyname, or what httpc:request uses to resolve the hostname.
I asked about this on the RabbitMQ plugin's tracker, and was told the plugin is not aware of how Erlang resolves addresses: https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/issues/55.
Is there anything else I can set in erl_inetrc so that Erlang resolves the IPv6 address? What did I miss in the configuration, and how can I debug this from the Erlang side? I'm new to Erlang.
B.R.,
Tao
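One detail worth noting (an observation, not a confirmed fix): the error tuple shows the lookup being attempted with the inet family ({inet,[inet],nxdomain}), while inet:gethostbyname/1 above returned an inet6 result. httpc has its own ipfamily option, which defaults to inet; what the running node actually uses can be checked the same way as the gethostbyname test above:

kubectl exec -ti zt4-crmq-0 rabbitmqctl eval 'httpc:get_options([ipfamily]).'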

Zookeeper and ActiveMQ LevelDB replication not reliable

In my current project we are trying to set up an ActiveMQ cluster with LevelDB replication. Our configuration has a ZooKeeper ensemble of three nodes and an ActiveMQ cluster of three nodes.
The following is the configuration used for ActiveMQ (the hostname is, of course, different for each node in the cluster):
<persistenceAdapter>
<replicatedLevelDB
replicas="3"
bind="tcp://0.0.0.0:0"
hostname="activemq1"
zkAddress="zk1:2181,zk2:2181,zk3:2181"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
We start up three instances of ZooKeeper and three instances of ActiveMQ. We observe that the ZooKeeper leader is elected correctly, but master election is not happening in the ActiveMQ cluster. Going through the log, we found that there is an authentication problem with ZooKeeper (as far as we can tell from the log; our knowledge of ZooKeeper/ActiveMQ is limited). The logs are pasted below for reference.
INFO: Loading '/opt/activemq//bin/env'
INFO: Using java '/usr/bin/java'
INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
INFO: Creating pidfile /data/activemq/activemq.pid
Java Runtime: Oracle Corporation 1.8.0_91 /usr/lib/jvm/java-8-openjdk-amd64/jre
Heap sizes: current=62976k free=59998k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/activemq/conf.tmp/login.config -Dcom.sun.management.jmxremote -Djava.awt.headless=true -Djava.io.tmpdir=/opt/activemq//tmp -
Dactivemq.classpath=/opt/activemq/conf.tmp:/opt/activemq//../lib/: -Dactivemq.home=/opt/activemq/ -
Dactivemq.base=/opt/activemq/ -Dactivemq.conf=/opt/activemq/conf.tmp -Dactivemq.data=/data/activemq
Extensions classpath:[/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
ACTIVEMQ_HOME: /opt/activemq
ACTIVEMQ_BASE: /opt/activemq
ACTIVEMQ_CONF: /opt/activemq/conf.tmp
ACTIVEMQ_DATA: /data/activemq
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@7823a2f9: startup date [Sat Jun 17 09:15:51 UTC 2017]; root of context hierarchy
INFO | JobScheduler using directory: /data/activemq/localhost/scheduler
INFO | Using Persistence Adapter: Replicated LevelDB[/data/activemq/leveldb, ip-172-20-44-97.ec2.internal:2181,ip-172-20-45-105.ec2.internal:2181,ip-172-20-48-226.ec2.internal:2181//activemq/leveldb-stores]
INFO | Starting StateChangeDispatcher
INFO | Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
INFO | Client environment:host.name=activemq-m1n59
INFO | Client environment:java.version=1.8.0_91
INFO | Client environment:java.vendor=Oracle Corporation
INFO | Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
INFO | Client environment:java.class.path=/opt/activemq//bin/activemq.jar
INFO | Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
INFO | Client environment:java.io.tmpdir=/opt/activemq//tmp
INFO | Client environment:java.compiler=<NA>
INFO | Client environment:os.name=Linux
INFO | Client environment:os.arch=amd64
INFO | Client environment:os.version=4.4.65-k8s
INFO | Client environment:user.name=root
INFO | Client environment:user.home=/root
INFO | Client environment:user.dir=/tmp
INFO | Initiating client connection, connectString=ip-172-20-44-97.ec2.internal:2181,ip-172-20-45-105.ec2.internal:2181,ip-172-20-48-226.ec2.internal:2181 sessionTimeout=2000 watcher=org.apache.activemq.leveldb.replicated.groups.ZKClient@4b41dd5c
WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/activemq/conf.tmp/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
WARN | unprocessed event state: AuthFailed
INFO | Opening socket connection to server ip-172-20-45-105.ec2.internal/172.20.45.105:2181
WARN | Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.8.0_91]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)[:1.8.0_91]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/activemq/conf.tmp/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
INFO | Opening socket connection to server ip-172-20-48-226.ec2.internal/172.20.48.226:2181
WARN | unprocessed event state: AuthFailed
WARN | Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.8.0_91]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)[:1.8.0_91]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/activemq/conf.tmp/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
INFO | Opening socket connection to server ip-172-20-44-97.ec2.internal/172.20.44.97:2181
WARN | unprocessed event state: AuthFailed
WARN | Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.8.0_91]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)[:1.8.0_91]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
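The first warning above says the JAAS file passed via -Djava.security.auth.login.config has no section named 'Client', which is the section name the ZooKeeper client library looks for. A minimal sketch of what that section could look like, assuming ZooKeeper digest authentication; the credentials are placeholders, and the subsequent 'Connection refused' errors would still need to be resolved separately:

// /opt/activemq/conf.tmp/login.config -- hypothetical 'Client' section
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="activemq"
    password="change-me";
};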
Please help us get past this problem.
If anyone has experience deploying ZooKeeper with an ActiveMQ cluster in Kubernetes, please share your ideas, since that is the environment we are targeting.

Websockets in HiveMQ not working - nothing in the log

Problem solved: see the bottom of this post.
I can't connect via websockets; port 1883 works fine.
This is the output from MQTT.fx:
2017-01-21 07:46:26,293 INFO --- BrokerConnectorController : onConnect
2017-01-21 07:46:26,294 INFO --- ScriptsController : Clear console.
2017-01-21 07:46:26,295 INFO --- MqttFX ClientModel : MqttClient with ID MQTT_FX_Client_Websocket assigned.
2017-01-21 07:46:36,314 ERROR --- MqttFX ClientModel : Error when connecting
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:146) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_112]
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(Unknown Source) ~[?:1.8.0_112]
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:65) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:107) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
... 1 more
2017-01-21 07:46:36,315 ERROR --- MqttFX ClientModel : Please verify your Settings (e.g. Broker Address, Broker Port & Client ID) and the user credentials!
org.eclipse.paho.client.mqttv3.MqttException: Connection lost
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:146) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_112]
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(Unknown Source) ~[?:1.8.0_112]
at org.eclipse.paho.client.mqttv3.internal.wire.MqttInputStream.readMqttWireMessage(MqttInputStream.java:65) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
at org.eclipse.paho.client.mqttv3.internal.CommsReceiver.run(CommsReceiver.java:107) ~[org.eclipse.paho.client.mqttv3-1.1.0.jar:?]
... 1 more
2017-01-21 07:46:36,321 INFO --- ScriptsController : Clear console.
2017-01-21 07:46:36,322 ERROR --- BrokerConnectService : Connection lost
I ran a telnet test against the server and port and got a blank terminal.
I guess that means there is a connection, because otherwise I would get a "connect failed". The message log plugin shows nothing, and there is nothing in the log file.
It's Debian with HiveMQ 3.2.2.
JAVA_OPTS: -Djava.net.preferIPv4Stack=true -XX:-UseSplitVerifier -noverify -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/hivemq/heap-dump.hprof
-------------------------------------------------------------------------
2017-01-21 07:37:02,159 INFO - Starting HiveMQ Server
2017-01-21 07:37:02,176 INFO - HiveMQ version: 3.2.2
2017-01-21 07:37:02,178 INFO - HiveMQ home directory: /opt/hivemq
2017-01-21 07:37:02,226 INFO - Log Configuration was overridden by /opt/hivemq/conf/logback.xml
2017-01-21 07:37:12,013 INFO - Loaded Plugin HiveMQ JMX Metrics Reporting Plugin - v3.0.0
2017-01-21 07:37:12,014 INFO - Loaded Plugin HiveMQ MQTT Message Log Plugin - v3.0.0
2017-01-21 07:37:12,014 INFO - Loaded Plugin HiveMQ Sys Topic Plugin - v3.0.0
2017-01-21 07:37:12,014 INFO - Loaded Plugin HiveMQ JVM Metrics Plugin - v3.1.0
2017-01-21 07:37:12,038 INFO - JMX Metrics Reporting started.
2017-01-21 07:37:12,099 INFO - JMX Metrics Reporting started.
2017-01-21 07:37:12,109 INFO - Starting TCP listener on address 192.168.0.12 and port 1883
2017-01-21 07:37:12,131 INFO - Starting Websocket listener on address 192.168.0.12 and port 8000
2017-01-21 07:37:12,139 INFO - Started TCP Listener on address 192.168.0.12 and on port 1883
2017-01-21 07:37:12,139 INFO - Started Websocket Listener on address 192.168.0.12 and on port 8000
2017-01-21 07:37:12,140 INFO - Started HiveMQ in 9967ms
2017-01-21 07:37:12,142 INFO - No valid license file found. Using evaluation license, restricted to 25 connections.
EDIT
OK, it's an Nginx problem, because my JavaScript site works without SSL.
It should be possible to use SSL for HTTP and non-secure websockets, as shown here:
Nginx MQTT websocket proxy 1
Nginx MQTT websocket proxy 2
I tried their configuration but had no luck.
Solution:
There is no need for an Nginx proxy with HiveMQ.
The problem is that Firefox does not accept the self-signed certificate for websockets. Setting "network.websocket.allowInsecureFromHTTPS" to true makes it work. In Chrome you get a message about JavaScript security and can accept it; since I only use Firefox, where there is no such message, it took hours to find out what was wrong. The Paho onFailure callback also never fired.
Unfortunately, websocket support has not been implemented in MQTT.fx yet.
You will have to use a different MQTT client if you want to connect via websocket.
Cheers,
Florian, from the HiveMQ Team.
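For anyone following that suggestion, a minimal sketch of a websocket connection with the Eclipse Paho Java client (the library whose classes appear in the MQTT.fx log above); the ws:// address mirrors the websocket listener from the HiveMQ log, and the client id is arbitrary:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class WsSmokeTest {
    public static void main(String[] args) throws Exception {
        // ws:// URIs are handled by the Paho Java client (1.1.0+)
        MqttClient client = new MqttClient(
                "ws://192.168.0.12:8000", "ws-smoke-test", new MemoryPersistence());
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setConnectionTimeout(10); // seconds
        client.connect(opts);
        System.out.println("Connected over websocket: " + client.isConnected());
        client.disconnect();
    }
}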

JDBC client to Hive - No data or no sasl data in the stream Exception

We have a Kerberised cluster, and I'm trying to run a Java action in Oozie that makes a JDBC connection to Hive. This JDBC connection works fine on the sandbox without Kerberos.
The connection string is as simple as the following, with the username and password provided in it:
Connection con = DriverManager.getConnection("jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET","user123","passw123");
The Oozie action (strangely) completes successfully, and the Java action log does not show any error:
1742 [main] INFO org.apache.hive.jdbc.Utils - Supplied authorities: W12345:10000
1742 [main] INFO org.apache.hive.jdbc.Utils - Resolved authority: W12345:10000
1766 [main] INFO org.apache.hive.jdbc.HiveConnection - Will try to open client transport with JDBC Uri: jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET
<<< Invocation of Main class completed <<<
Oozie Launcher ends
1785 [main] INFO org.apache.hadoop.mapred.Task - Task:attempt_1464245290012_0129_m_000000_0 is done. And is in the process of committing
1847 [main] INFO org.apache.hadoop.mapred.Task - Task attempt_1464245290012_0129_m_000000_0 is allowed to commit now
1854 [main] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_1464245290012_0129_m_000000_0' to hdfs://danskehadoop/user/user123/oozie-oozi/0000013-160527101253015-oozie-oozi-W/JavaAction--java/output/_temporary/1/task_1464245290012_0129_m_000000
1909 [main] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1464245290012_0129_m_000000_0' done.
But in reality the Java main does not complete its execution correctly (and does not execute the needed queries), because the JDBC connection fails with an exception that I can see only in the Hive log:
ERROR [HiveServer2-Handler-Pool: Thread-78363]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I'm actually connected to the cluster and have already done a further kinit for my username.
Does anybody know what the cause of this exception could be?
Thanks in advance for the help!
Antonio
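For comparison, a hedged sketch of how a Kerberised HiveServer2 connection is typically established from standalone Java code: Kerberos credentials are obtained first (here from a keytab; the principal and path are placeholders), and the username/password arguments are dropped, since with Kerberos it is the server principal in the URL that drives authentication:

import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosHiveJdbc {
    public static void main(String[] args) throws Exception {
        // Switch Hadoop security to Kerberos and log in from a keytab
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "user123@COMPANYNET.NET",   // placeholder principal
                "/path/to/user123.keytab"); // placeholder keytab

        // Server principal goes in the URL; no username/password arguments
        String url = "jdbc:hive2://W12345:10000/control;"
                + "principal=hive/W12345.companynet.net@COMPANYNET.NET";
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}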
This happened to me on the MapR Hadoop distribution platform.
In my case it was Keepalived checking the Hive port every 5 seconds and producing this error: I was simply using the "nc" command to check whether the Hive port was in use, with no authentication method. Later I switched to the "maprcli" command, which uses SASL authentication, and the error was gone.