Redis Client not connecting to Redis server via HAProxy - redis

I'm facing an issue while connecting to a Redis server via HAProxy, using Jedis as the Redis client.
Everything works fine when I connect to the Redis server directly, but the same does not work via HAProxy. Both HAProxy and Redis are running on their respective ports: HAProxy on port 80 and the Redis server on 6379.
We are running this setup on EC2 instances and all the necessary ports are open.
HAProxy configuration is:

frontend http-in
    bind *:80
    default_backend redis-server

backend redis-server
    stats enable
    server redis-server1 x.x.x.x:6379 check inter 1000 fall 3 rise 2
    server redis-server2 x.x.x.x:6379 check inter 1000 fall 3 rise 2
The Jedis client code is:

try {
    Jedis jedis = new Jedis(hostname, port);
    jedis.set("key", "Redis-Test-Value");
    System.out.println("value from redis node is: " + jedis.get("key"));
} catch (Exception e) {
    System.out.println("exception is " + e);
}
Exception message that gets thrown is - redis.clients.jedis.exceptions.JedisConnectionException: Unknown reply: H
Can someone point out what I am missing and point me in the right direction?

frontend http-in
Redis isn't an HTTP server, so serving it up as one won't work. Since this is a server administration issue, not a programming issue, try the list or serverfault.com for any additional questions.
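The Unknown reply: H exception is consistent with this: HAProxy, presumably running in mode http (e.g. via a defaults section not shown in the question), answers Jedis with an HTTP response ("HTTP/1.0 ..."), and H is not a valid first byte for a Redis protocol reply. A minimal sketch of a corrected configuration that proxies Redis as raw TCP (same backend addresses as in the question; the second server is marked backup here because only one Redis node should normally accept writes):

```
frontend redis-in
    mode tcp
    bind *:6379
    default_backend redis-server

backend redis-server
    mode tcp
    server redis-server1 x.x.x.x:6379 check inter 1000 fall 3 rise 2
    server redis-server2 x.x.x.x:6379 backup check inter 1000 fall 3 rise 2
```

Note that stats enable was dropped because the stats page is an HTTP feature; if you want it, put it in a separate mode http listen section. Jedis then connects to the HAProxy host on the bound port (6379 in this sketch) instead of 80.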

Related

RabbitMQ HA & Failover

I've read both the clustering and HA chapters and got a fair understanding of RabbitMQ clustering. One thing I did not understand is: having 2+ nodes in the cluster and a set of HA queues, how can client connections be made so that if one node fails, they automatically and seamlessly connect to the remaining node(s)? Can this be achieved by a load balancer such as, say, Amazon ELB for deployments made in AWS?
Using a load balancer like Amazon ELB or HAProxy is exactly how you should route traffic to the available nodes in the Rabbit cluster.
I'd recommend HAProxy. Here's a sample HAProxy config:
global
    log 127.0.0.1 local1
    maxconn 4096
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log global
    mode tcp
    option tcplog
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen stats :1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /

listen aqmp_front :5672
    mode tcp
    balance roundrobin
    timeout client 3h
    timeout server 3h
    option clitcpka
    server aqmp-1 rabbitmq1.domain:5672 check inter 5s rise 2 fall 3
    server aqmp-2 rabbitmq2.domain:5672 backup check inter 5s rise 2 fall 3
Note the last two lines: you'll need to substitute rabbitmq1.domain and rabbitmq2.domain with the addresses of your two nodes. Since the second server is set up with the backup option, HAProxy will balance requests only onto the first node; if that node fails, requests will be routed to the second node.
I would use the simple keepalived daemon on all Rabbit nodes. It just adds a virtual IP address shared between the nodes, which you can use for client access. Configuration is very simple; check Hollenback's page.
Sample config:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    virtual_ipaddress {
        192.168.1.1/24 brd 192.168.1.255 dev eth0
    }
}
You have to configure queue mirroring between the RabbitMQ servers:
rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
In this example, RabbitMQ will mirror every queue except those whose names start with the amq. prefix. When server A fails, those queues still exist on server B. You also need HA in your client code (if the connection to server A fails, connect to server B) or an HA RabbitMQ endpoint using keepalived.
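The client-side failover described above (try node A, fall back to node B) is independent of any particular AMQP library. A toy sketch of that loop in Python, with a pluggable connect function standing in for the real client (all names here are illustrative):

```python
def connect_with_failover(addresses, connect, attempts_per_node=1):
    """Try each (host, port) in order and return the first successful
    connection; re-raise the last error if every node is down."""
    last_error = None
    for host, port in addresses:
        for _ in range(attempts_per_node):
            try:
                return connect(host, port)
            except ConnectionError as exc:
                last_error = exc  # node unreachable; try the next one
    raise last_error if last_error else ConnectionError("no addresses given")

# Usage with a fake connect function standing in for a real AMQP client:
def fake_connect(host, port):
    if host == "rabbitmq1.domain":  # simulate the first node being down
        raise ConnectionError("node down")
    return (host, port)             # stand-in for a live connection object

conn = connect_with_failover(
    [("rabbitmq1.domain", 5672), ("rabbitmq2.domain", 5672)],
    fake_connect,
)
print(conn)  # the client fell back to the second node
```

A load balancer such as HAProxy or keepalived moves this retry logic out of the application, but the failure mode it handles is the same.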

WebSockets: wss from client to Amazon AWS EC2 instance through ELB

How can I connect over ssl to a websocket served by GlassFish on an Amazon AWS EC2 instance through an ELB?
I am using Tyrus 1.8.1 in GlassFish 4.1 b13 pre-release as my websocket implementation.
Port 8080 is unsecured, and port 8181 is secured with ssl.
ELB dns name: elb.xyz.com
EC2 dns name: ec2.xyz.com
websocket path: /web/socket
I have successfully used both ws & wss to connect directly to my EC2 instance (bypassing my ELB). i.e. both of the following urls work:
ws://ec2.xyz.com:8080/web/socket
wss://ec2.xyz.com:8181/web/socket
I have successfully used ws (non-ssl) over my ELB by using a tcp 80 > tcp 8080 listener. i.e. the following url works:
ws://elb.xyz.com:80/web/socket
I have not, however, been able to find a way to use wss though my ELB.
I have tried many things.
I assume that the most likely way of getting wss to work through my ELB would be to create a tcp 8181 > tcp 8181 listener on my ELB with proxy protocol enabled and use the following url:
wss://elb.xyz.com:8181/web/socket
Unfortunately, that does not work. I guess that I might have to enable the proxy protocol on glassfish, but I haven't been able to find out how to do that (or if it's possible, or if it's necessary for wss to work over my ELB).
Another option might be to somehow have ws or wss run over an ssl connection that's terminated on the ELB, and have it continue unsecured to glassfish, by using an ssl > tcp 8080 listener. That didn't work for me, either, but maybe some setting was incorrect.
Does anyone have any modifications to my two aforementioned trials, or some other suggestions?
Thanks.
I had a similar setup and originally configured my ELB listeners as follows:
HTTP 80 → HTTP 80
HTTPS 443 → HTTPS 443
Although this worked fine for the website itself, the websocket connection failed. In the listener, you need to allow all secure TCP connections, as opposed to SSL (HTTPS) only, so that wss passes through as well:
HTTP 80 → HTTP 80
SSL (Secure TCP) 443 → SSL (Secure TCP) 443
I would also recommend raising the Idle timeout of the ELB.
I recently enabled wss between my browser and an EC2 Node.js instance.
There were 2 things to consider:
in the ELB listeners tab, add a row for the wss port with SSL as load balancer protocol.
in the ELB description tab, set a higher idle timeout (connection settings), which is 60 seconds by default. The ELB was killing the websocket connections after 1 minute; setting the idle timeout to 3600 (the max value) enables much longer communication.
It is obviously not the ultimate solution since the timeout is still there, but 1 hour is probably good enough for what we usually do.
Hope this helps.
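For reference, the idle timeout can also be raised without the console, via the AWS CLI (a sketch for a classic ELB; my-elb is a placeholder name):

```
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-elb \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":3600}}'
```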

SailsJS on production - Error: listen EADDRINUSE

I have a VPS server with CentOS and Apache server.
But I want to run my node.js applications too. I am using sails.js
This sails application is trying to listen to port 80 of specified host.
Here is error (after sails lift running):
debug: Starting server in /var/www/user/data/nodeprojects/projectname...
info - socket.io started
debug: Restricting access to host: projectname.com
warn - error raised: Error: listen EADDRINUSE
warn:
warn: Server doesn't seem to be starting.
warn: Perhaps something else is already running on port 80 with hostname projectname.com?
What is the problem? Can I run both apache and nodejs servers on one server with one port (80)?
No, you cannot.
When a server process opens a TCP port to answer requests, it has exclusive use of that port. So, you cannot run both SailsJS and Apache servers on the same port.
Having said that, you can do lots of interesting things with Apache, such as proxying specific requests to other servers running on different ports.
A typical setup would have Apache on port 80 and SailsJS on port 8000 (or some other available port) where Apache would forward requests to certain URLs to SailsJS and then forward the reply from SailsJS back to the browser.
See either configuring Apache on Mountain Lion proxying to Node.js or http://thatextramile.be/blog/2012/01/hosting-a-node-js-site-through-apache for example implementations of this approach.
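A minimal sketch of what such an Apache virtual host could look like, assuming SailsJS listens on port 8000 and mod_proxy / mod_proxy_http are enabled (the domain is a placeholder):

```
<VirtualHost *:80>
    ServerName projectname.com
    # Hand every request to the Sails app and relay its replies back
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```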
You cannot use the same port for different applications. Node.js can use any open port; what you need to do is set up port forwarding (proxying) to your app. :)

Can someone explain exactly how Booksleeve and Redis work together and it's application in a SignalR app?

We are implementing scale-out for our SignalR app and trying to avoid a single point of failure in our cluster. Thus, more than one Redis message bus server is required.
The problem with implementing Redis Sentinel is that upon fail-over, the client needs to connect to a new endpoint (address), which would require the SignalR application to be restarted (the Redis endpoint is defined in Application_Start()).
Not an option.
I'm trying to understand if/how Booksleeve will help, and would like someone to explain this.
The issue is that we can only have one single endpoint defined for message bus. A hardware solution is not currently an option.
Would the SignalR application connect to a Booksleeve wrapper, which maintains the list of master/slaves?
Another option is using Azure Service Bus. However, the instructions for Wiring Up the Windows Azure Service Bus Provider indicate there are still problems with this:
Note, this web site is an ASP.NET site that runs in an Azure web role.
As of 1.0alpha2 there are some bugs in AzureWebSites due to which
ServiceBus Scale out scenarios do not work well. We are working on
resolving this for the future
I don't know the specifics of how SignalR does the connect, but: BookSleeve already offers some concessions towards failover nodes. In particular, the ConnectionUtils.Connect method takes a string (ideal for web.config configuration values etc.), which can include multiple redis nodes; BookSleeve will then try to locate the most appropriate node to connect to. If the nodes mentioned in the string are regular redis nodes, it will attempt to connect to a master, otherwise falling back to a slave (optionally promoting the slave in the process). If the nodes mentioned are sentinel nodes, it will ask sentinel to nominate a server to connect to.
What BookSleeve doesn't offer at the moment is a redundant connection wrapper that will automatically reconnect. That is on the road-map, but isn't difficult to do in the calling code. I plan to add more support for this at the same time as implementing redis-cluster support.
But: all that is from a BookSleeve perspective - I can't comment on SignalR specifically.
BookSleeve 1.3.41.0 supports Redis sentinel. Deployment configuration we use: 1 master redis, 1 slave redis. Each box has sentinel (one for master, one for slave). Clients connect to sentinel first, sentinel then redirects them to active master.
This is how it is implemented in client code:
public class OwinStartup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new WebClientRedisScaleoutConfiguration();
        GlobalHost.DependencyResolver.UseRedis(config);
        app.MapSignalR();
    }
}

public class WebClientRedisScaleoutConfiguration : RedisScaleoutConfiguration
{
    public WebClientRedisScaleoutConfiguration()
        : base(() => getRedisConnection(), WebUIConfiguration.Default.Redis.EventKey)
    { }

    private static BookSleeve.RedisConnection _recentConnection;

    private static BookSleeve.RedisConnection getRedisConnection()
    {
        var log = new TraceTextWriter();
        var connection = BookSleeve.ConnectionUtils.Connect("sentinel1:26379,sentinel2:26379,serviceName=WebClient", log);
        if (connection != null)
        {
            _recentConnection = connection;
            return connection;
        }
        if (_recentConnection != null)
        {
            return _recentConnection;
        }
        // Cannot return null nor throw an exception -- that would break the reconnection cycle.
        return new BookSleeve.RedisConnection(string.Empty);
    }
}
How to configure Redis:
Common steps
Download Redis for windows http://redis.io/download
Unzip to c:\redis
Master (only the very first Redis box; there is only one such config)
Create Redis service: execute command within redis directory redis-server --service-install redis.conf --service-name redis
Start Redis service
Ensure Redis is listening on port 6379
Slave (other boxes)
Update redis.conf: add the line slaveof masterAddr 6379, where masterAddr is the address where Redis is running in master mode and 6379 is the default Redis port.
Create Redis service: execute command within redis directory redis-server --service-install redis.conf --service-name redis
Start Redis service
Ensure Redis is listening on port 6379
Sentinel (common for master and slave)
Create file redis-sentinel.conf with content:
port 26379
logfile "redis-sentinel1.log"
sentinel monitor WebClient masterAddr 6379 1
where masterAddr is the address where Redis is running in master mode, 6379 is the default Redis port, and 1 is the quorum (the number of sentinels that must agree that the master is down). WebClient is the group name; you specify it in the client code: ConnectionUtils.Connect("...,serviceName=WebClient...")
Create the redis sentinel service: execute command within the redis directory redis-server --service-install redis-sentinel.conf --service-name redis-sentinel --sentinel
Start redis-sentinel service
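To sanity-check the setup, you can ask a sentinel which master it currently nominates, and check replication state on the data nodes (standard redis-cli commands; WebClient is the group name from the config above):

```
redis-cli -p 26379 sentinel get-master-addr-by-name WebClient
redis-cli -p 6379 info replication
```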

JMeter with remote servers

I'm trying to setup JMeter in a distributed mode.
I have a server running on an ec2 intance, and I want the master to run on my local computer.
I had to jump through some hoops to get RMI working correctly on the server, but that was solved by setting "java.rmi.server.hostname" to the IP of the EC2 instance.
The next (and hopefully last) problem is the server communicating back to the master.
The problem is that because I am doing this from an internal network, the master is sending its local/internal ip address (192.168.1.XXX) when it should be sending back the IP of my external connection (92.XXX.XXX.XXX).
I can see this in the jmeter-server.log:
ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: 192.168.1.50; nested exception is:
That host IP is wrong. It should be the 92.XXX.XXX.XX address. I assume this is because in the master logs I see the following:
2012/07/29 20:45:25 INFO - jmeter.JMeter: IP: 192.168.1.50 Name: XXXXXX.local FullName: 192.168.1.50
And this IP is sent to the server during RMI setup.
So I think I have two options:
Tell the master to send the external IP
Tell the server to connect on the external IP of the master.
But I can't see where to set these commands.
Any help would be useful.
For the benefit of future readers, don't take no for an answer. It is possible! Plus you can keep your firewall in place.
In this case, I did everything over port 4000.
How to connect a JMeter client and server for distributed testing with Amazon EC2 instance and local dev machine across different networks.
Setup:
JMeter 2.13 Client: local dev computer (different network)
JMeter 2.13 Server: Amazon EC2 instance
I configured distributed client / server JMeter connectivity as follows:
1. Added a port forwarding rule on my firewall/router:
Port: 4000
Destination: JMeter client private IP address on the LAN.
2. Configured the "Security Group" settings on the EC2 instance:
Type: Allow: Inbound
Port: 4000
Source: JMeter client public IP address (my dev computer/network public IP)
Update: If you already have SSH connectivity, you could use an SSH tunnel for the connection, that will avoid needing to add the firewall rules.
$ ssh -i ~/.ssh/54-179-XXX-XXX.pem -o ServerAliveInterval=60 -R 4000:localhost:4000 jmeter@54.179.XXX.XXX
3. Configured client $JMETER_HOME/bin/jmeter.properties file RMI section:
note only the non-default values that I changed are included here:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# Remote Hosts - comma delimited
# Add EC2 JMeter server public IP address:Port combo
remote_hosts=127.0.0.1,54.179.XXX.XXX:4000
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controler)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To change the default port (1099) used to access the server:
server.rmi.port=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
4. Configured remote server $JMETER_HOME/bin/jmeter.properties file RMI section as follows:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controler)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
5. Started the JMeter server/slave with:
jmeter-server -Djava.rmi.server.hostname=54.179.XXX.XXX
where 54.179.XXX.XXX is the public IP address of the EC2 server
6. Started the JMeter client/master with:
jmeter -Djava.rmi.server.hostname=121.73.XXX.XXX
where 121.73.XXX.XXX is the public IP address of my client computer.
7. Ran a JMeter test suite.
JMeter GUI log output
Success!
I had a similar problem: the JMeter server tried to connect to the wrong address for sending the results of the test (it tried to connect to localhost).
I solved this by setting the following parameter when starting the JMeter master:
-Djava.rmi.server.hostname=xx.xx.xx.xx
It looks as though this won't work. Distributed JMeter Testing explains the requirements for load testing in a distributed environment; numbers 2 and 3 are particular to your use case, I believe.
The firewalls on the systems are turned off.
All the clients are on the same subnet.
The server is in the same subnet, if 192.x.x.x or 10.x.x.x ip addresses are used.
Make sure JMeter can access the server.
Make sure you use the same version of JMeter on all the systems. Mixing versions may not work correctly.
Might be very late in the game, but still: I'm running this with JMeter 5.3.
To get it working, set up the slaves in AWS and the controller on your local machine.
Make sure your slave has the proper local ports and hostname. The hostname on the slave should be the EC2 instance's public DNS name.
Make sure AWS has the proper security policies.
For the controller (which is your local machine), make sure you run with the parameter -Djava.rmi.server.hostname=<your public IP>. You can get the IP by googling "my public ip address"; definitely not one of the 192.168.x.x or 172.xx.x.x private addresses.
Then you have to configure your router to port-forward to the machine used as the controller. The port can be obtained from the slave log (the lines with FINE: RMI RenewClean...; yes, you have to set the log level to verbose). Alternatively, set your controller machine as the DMZ host. That is dangerous, but convenient just for the duration of the test; don't forget to turn it off afterwards.
Then it should work.