Redis version: 3.2.0
Jedis version: 2.8.1
Below is my Java code for connecting to Redis:
public class TestRedis {
    public static void main(String[] args) {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        try (Jedis jedis = new Jedis(host, port)) {
            System.out.println("Connected to jedis " + jedis.ping());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I am running this program on the machine where Redis is installed. The machine's IP address is 192.168.1.57.
If I pass host="localhost" and port="6379" as arguments, the connection to Redis is established successfully.
However, if I pass host="192.168.1.57" and port="6379", I end up with the exception below:
redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused
at redis.clients.jedis.Connection.connect(Connection.java:164)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:80)
at redis.clients.jedis.Connection.sendCommand(Connection.java:100)
at redis.clients.jedis.Connection.sendCommand(Connection.java:95)
at redis.clients.jedis.BinaryClient.ping(BinaryClient.java:93)
at redis.clients.jedis.BinaryJedis.ping(BinaryJedis.java:105)
at TestRedis.main(TestRedis.java:14)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at redis.clients.jedis.Connection.connect(Connection.java:158)
... 6 more
Please help...
There are a few settings that would affect this: bind and protected-mode. They work together to provide a baseline of security with new installs.
Find the following in your redis.conf file and comment it out:
bind 127.0.0.1
By adding a # in front of it:
# bind 127.0.0.1
Or, if you would rather not comment it out, you can also add the IP of your eth0/em1 interface to it, like this:
bind 127.0.0.1 192.168.1.57
Also, unless you're using password security, you'll have to turn off protected mode by changing:
protected-mode yes
To:
protected-mode no
Make sure that you read the relevant documentation and understand the security implications of both of these changes.
After making these changes, restart redis.
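Before re-testing with Jedis, it can help to confirm the server is actually listening on the LAN address with a plain-socket check. This is a minimal sketch using only the JDK; the address and port are the ones from the question:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // After the bind/protected-mode change and a restart, this should
        // print true for the LAN address as well as for localhost.
        System.out.println(isReachable("192.168.1.57", 6379, 2000));
    }
}
```

If this still prints false after the restart, the problem is at the bind/firewall level rather than in Jedis.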
Related
I have recently developed a small client-server application for a customer. A Windows executable adapter provides a number of TCP sockets to interact with an external host, and my Java (Kotlin) client software listens on those sockets and sends commands when necessary. Nothing fancy, and I tested the application thoroughly on my Windows 10 developer system.
Now I have tried to migrate the software to an Ubuntu 22 host. The Windows executable runs under Wine emulation, and I have checked with netstat that it listens on the expected ports; I have also tested the primary port with telnet and can receive the feed. But when I try to access the ports from the Java client, my code throws ClosedChannelException whenever it tries to open the connection, and the retry logic (spring-retry) repeats this 10 times before giving up:
Caused by: java.nio.channels.ClosedChannelException: null
at java.base/sun.nio.ch.UnixAsynchronousSocketChannelImpl.implConnect(UnixAsynchronousSocketChannelImpl.java:301) ~[na:na]
at java.base/sun.nio.ch.AsynchronousSocketChannelImpl.connect(AsynchronousSocketChannelImpl.java:200) ~[na:na]
Here is an excerpt from the code I execute:
class RequestHandler(private val hostAddress: InetSocketAddress) : Runnable {

    private val client: AsynchronousSocketChannel = AsynchronousSocketChannel.open()

    @Retryable(value = [ConnectException::class, ClosedChannelException::class], maxAttempts = 10)
    fun init() {
        connect()
        executor.submit(this)
    }

    private fun connect() {
        try {
            client.connect(hostAddress).get()
            client.setOption(StandardSocketOptions.TCP_NODELAY, true)
            client.setOption(StandardSocketOptions.SO_KEEPALIVE, true)
        } catch (e: RuntimeException) {
            logger.error("Failed to connect to $hostAddress due to ${e.localizedMessage}")
        }
    }
}
Do you see anything here that requires special attention on a Linux host?
I can establish a port forwarding session in Ubuntu as follows:
ssh -L 8000:dev.mycompany.com:443 jump.mycompany.com
Now I'd like to emulate this with JSch:
public static void openTunnel() {
    JSch jsch = new JSch();
    String privateKey = "~/.ssh/id_rsa";
    try {
        jsch.addIdentity(privateKey);
        log.info("Connecting to {}@{}", getJumpUser(), getJumpServer());
        Session session = jsch.getSession(getJumpUser(), getJumpServer(), 22);
        session.connect();
        session.setPortForwardingL(8000, getHost(), 443);
    } catch (JSchException e) {
        log.error("", e);
    }
}
However, I get the following exception after the tunnel is set up, when I try to connect through it with RestTemplate (Spring's HTTP client; curl gives an error as well):
ssl.SSLHandshakeException: Remote host closed connection during handshake
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:744)
What do I have to configure in JSch so that it does exactly the same as the OpenSSH client?
I suppose you're trying to get a connection to an HTTPS web server that's not publicly available.
The port forwarding works, BUT HTTPS will not work on localhost:8000, because the certificate is for dev.mycompany.com and not localhost.
You can cheat by adding an entry to your hosts file:
127.0.0.1 dev.mycompany.com
But it's probably easier to use SOCKS5 instead:
ssh jump.mycompany.com -D 8005
And then set it in your browser (example for Firefox):
Select: Manual proxy configuration
SOCKS Host: localhost
Port: 8005
SOCKS v5
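The same SOCKS5 tunnel can also be used from Java code, not only a browser. This is a minimal sketch using only the JDK, assuming the `ssh -D 8005` tunnel above is running:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class SocksExample {

    // Builds a java.net.Proxy pointing at the local SOCKS tunnel opened by
    // "ssh jump.mycompany.com -D 8005".
    static Proxy localSocks(int port) {
        return new Proxy(Proxy.Type.SOCKS, new InetSocketAddress("localhost", port));
    }

    public static void main(String[] args) {
        Proxy socks = localSocks(8005);
        // A URLConnection routed through the tunnel keeps the original
        // hostname, so the dev.mycompany.com certificate still matches:
        //   new URL("https://dev.mycompany.com/").openConnection(socks);
        System.out.println(socks.type()); // prints SOCKS
    }
}
```

Because the original hostname travels through the proxy, the TLS certificate check succeeds without any hosts-file tricks.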
I am trying the basic set operation on a Redis server installed on Red Hat Linux.
JedisPool pool = new JedisPool(new JedisPoolConfig(), HOST, PORT);
Jedis jedis = null;
try {
    jedis = pool.getResource();
    System.out.println(jedis.isConnected()); // prints true
    jedis.set("status", "online");           // throws the exception
} finally {
    if (jedis != null) {
        jedis.close();
    }
}
pool.destroy();
Getting the following exception:
Exception in thread "main" redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Connection reset
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:201)
at redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
at redis.clients.jedis.Protocol.process(Protocol.java:132)
at redis.clients.jedis.Protocol.read(Protocol.java:196)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:288)
at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:187)
at redis.clients.jedis.Jedis.set(Jedis.java:66)
at com.revechat.spring.redis_test.App.main(App.java:28)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:195)
... 7 more
How can I resolve this issue?
I had a similar issue. Our production Redis required an encrypted connection over TLS, whereas our test system did not. In production therefore the java.net.SocketException: Connection reset appeared when we tried to use the Jedis connection.
To fix it, use
JedisPool pool = new JedisPool(new JedisPoolConfig(), HOST, PORT, true);
for connections that require TLS.
I have a RabbitMQ instance deployed on a Google Compute Engine instance. I also have a Hadoop instance deployed on a different Compute Engine instance, still in the same application. I am trying to connect to the RabbitMQ queue from the Hadoop cluster, but with no success.
I have a Java application that should push items onto the RabbitMQ queue and then receive them in the same application. The following is the connection code:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("130.211.112.37:5672");
try {
    connection = factory.newConnection();
    channel = connection.createChannel();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
but I get the following result:
java.net.UnknownHostException: 130.211.112.37:5672
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at com.rabbitmq.client.impl.FrameHandlerFactory.create(FrameHandlerFactory.java:32)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:615)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:639)
at de.unibonn.iai.eis.luzzu.io.impl.SparkStreamProcessorObserver.<clinit>(SparkStreamProcessorObserver.java:157)
at de.unibonn.iai.eis.luzzu.evaluation.Main.main(Main.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried opening port 5672 in the Google Cloud firewall. Does anyone have some pointers to the solution, please?
Best,
Jeremy
As written in the comment:
ConnectionFactory factory = new ConnectionFactory();
// factory.setHost("130.211.112.37:5672"); <----- setHost accepts only the host!
factory.setHost("130.211.112.37");
factory.setPort(5672);
try {
    connection = factory.newConnection();
    channel = connection.createChannel();
} catch (IOException e) {
    e.printStackTrace();
}
By default the port is 5672, so calling setPort is not necessary.
You only have to use setPort if you change the default port.
As explained here: https://www.rabbitmq.com/api-guide.html, you need to call setHost and setPort separately to create a connection. In your app you are passing the host and port together in the same string.
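The failure mode is easy to see: the whole string "130.211.112.37:5672" is handed to DNS as a single hostname, which cannot resolve, hence the UnknownHostException. A small stdlib sketch of splitting such a string (a hypothetical helper for illustration, not part of the RabbitMQ client):

```java
public class HostPort {

    // Splits a "host:port" string. Passing the whole thing to setHost makes
    // the client try to resolve "130.211.112.37:5672" as one hostname,
    // which fails with UnknownHostException.
    static String[] split(String hostPort) {
        int colon = hostPort.lastIndexOf(':');
        if (colon < 0) {
            return new String[] { hostPort, "5672" }; // default AMQP port
        }
        return new String[] { hostPort.substring(0, colon),
                              hostPort.substring(colon + 1) };
    }

    public static void main(String[] args) {
        String[] parts = split("130.211.112.37:5672");
        System.out.println(parts[0]); // prints 130.211.112.37
        System.out.println(parts[1]); // prints 5672
    }
}
```

The two parts are exactly what belongs in setHost and setPort respectively.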
I've been playing with Apache Kafka for a few days, and here is my problem:
If I set up the local test described in the "quick start" section on the website, everything is fine; the Kafka producer/consumer, ZooKeeper server, and Kafka broker all work perfectly.
Now if I run on a remote server (let's call it node2) :
- Zookeeper - port 2181
- Kafka Broker - port 9092
- kafka consumer
And then if I run from my local computer :
- kafka producer
Assuming that there is no firewall on node2, the connection ends up with a timeout.
Here is the error log :
/etc/java/jdk1.6.0_41/bin/java -Didea.launcher.port=7533 -Didea.launcher.bin.path=/home/kevin/Documents/idea-IU-123.169/bin -Dfile.encoding=UTF-8 -classpath /etc/java/jdk1.6.0_41/lib/dt.jar:/etc/java/jdk1.6.0_41/lib/tools.jar:/etc/java/jdk1.6.0_41/lib/jconsole.jar:/etc/java/jdk1.6.0_41/lib/htmlconverter.jar:/etc/java/jdk1.6.0_41/lib/sa-jdi.jar:/home/kevin/Desktop/kafka-0.7.2/examples/target/scala_2.8.0/classes:/home/kevin/Desktop/kafka-0.7.2/project/boot/scala-2.8.0/lib/scala-compiler.jar:/home/kevin/Desktop/kafka-0.7.2/project/boot/scala-2.8.0/lib/scala-library.jar:/home/kevin/Desktop/kafka-0.7.2/core/target/scala_2.8.0/classes:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/jopt-simple-3.2.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/log4j-1.2.15.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/zookeeper-3.3.4.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/zkclient-0.1.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/snappy-java-1.0.4.1.jar:/home/kevin/Desktop/kafka-0.7.2/examples/lib_managed/scala_2.8.0/compile/jopt-simple-3.2.jar:/home/kevin/Desktop/kafka-0.7.2/examples/lib_managed/scala_2.8.0/compile/log4j-1.2.15.jar:/home/kevin/Documents/idea-IU-123.169/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain kafka.examples.KafkaConsumerProducerDemo
log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkConnection).
log4j:WARN Please initialize the log4j system properly.
Exception in thread "Thread-0" java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect(Native Method)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:532)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:173)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:196)
at kafka.producer.SyncProducer.send(SyncProducer.scala:92)
at kafka.producer.SyncProducer.send(SyncProducer.scala:125)
at kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(ProducerPool.scala:114)
at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:100)
at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:100)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
at kafka.producer.ProducerPool.send(ProducerPool.scala:100)
at kafka.producer.Producer.zkSend(Producer.scala:137)
at kafka.producer.Producer.send(Producer.scala:99)
at kafka.javaapi.producer.Producer.send(Producer.scala:103)
at kafka.examples.Producer.run(Producer.java:53)
Process finished with exit code 0
And here is my Producer's code :
import java.util.Properties;

import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class Producer extends Thread {
    private final kafka.javaapi.producer.Producer<String, String> producer;
    private final String topic;
    private final Properties props = new Properties();

    public Producer(String topic) {
        props.put("zk.connect", "node2:2181");
        props.put("connect.timeout.ms", "5000");
        props.put("socket.timeout.ms", "30000");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "sync");
        props.put("compression.codec", "0");
        producer = new kafka.javaapi.producer.Producer<String, String>(new ProducerConfig(props));
        this.topic = topic;
    }

    public void run() {
        String messageStr = "Message_test";
        producer.send(new ProducerData<String, String>(topic, messageStr));
    }
}
So I also tried switching
props.put("zk.connect", "node2:2181");
to
props.put("broker.list", "0:node2:9082");
and in that case I can connect successfully.
See item #3 in http://kafka.apache.org/faq.html
The workaround is to explicitly set the hostname property in server.properties of Kafka.
You can verify this by using ZooKeeper. If you are using Kafka 0.7*, open the ZkCli console and do get /brokers/ids/0; you should get all the broker metadata. Make sure the IP address/hostname there matches the ZK connect string you are using in the producer code:
props.put("zk.connect", "node2:2181");
In my case, I was using a producer running on my local machine connecting to an Ubuntu VM (same box, different IP), and this workaround helped.
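For reference, the workaround from the FAQ is a one-line change in the broker's server.properties (this sketch assumes Kafka 0.7.x as in the question, and that the name node2 resolves from the producer machine):

```properties
# server.properties on node2: advertise a hostname the producer can
# resolve, instead of the name the broker auto-detects.
hostname=node2
```

After changing it, restart the broker and re-check get /brokers/ids/0 in the ZK console to confirm the advertised name.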