This question may be a duplicate.
I am building an Ignite cluster.
I have observed that communication between nodes requires multiple ports to be opened.
So far, I have opened ports such as 47100 and 11211.
It is tedious to raise an open request with the IT department for every newly discovered port.
I need a list of all the ports used by Ignite so I can have them opened at once.
It's all configurable, so this list is only accurate if you haven't changed any of the defaults and only have one node running per machine, but: 10800 (JDBC/ODBC), 11211 (TCP connector), 47100 (communication listener), 47500 (discovery). As Denis notes, you only really need the last two, but that depends on your application.
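If the defaults ever need to change, the same ports can also be pinned explicitly in code so the firewall request covers a known, fixed set. Here is a minimal sketch, assuming programmatic (rather than XML) configuration; the values simply restate the defaults above:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class FixedPortsNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Discovery port (47500), plus a small range for extra nodes on the same host.
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setLocalPort(47500);
        discovery.setLocalPortRange(10);
        cfg.setDiscoverySpi(discovery);

        // Communication ("listener") port (47100).
        TcpCommunicationSpi communication = new TcpCommunicationSpi();
        communication.setLocalPort(47100);
        cfg.setCommunicationSpi(communication);

        // Thin client / JDBC / ODBC connector port (10800).
        ClientConnectorConfiguration clientCfg = new ClientConnectorConfiguration();
        clientCfg.setPort(10800);
        cfg.setClientConnectorConfiguration(clientCfg);

        // TCP connector port (11211).
        ConnectorConfiguration connCfg = new ConnectorConfiguration();
        connCfg.setPort(11211);
        cfg.setConnectorConfiguration(connCfg);

        Ignition.start(cfg);
    }
}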
I have a Redis Cluster (3 leaders and 3 followers). When I restart all the cluster nodes, I would like the application to automatically detect that the IP addresses have changed.
In the application I'm using Spring with the following settings:
spring.redis.cluster.nodes: redis:6379
spring.lettuce.cluster.refresh.adaptive: true
It's as if the application were caching the old IP addresses; I need to somehow get this list of nodes refreshed. I'm connecting through a DNS name.
The "refresh adaptive" setting in my case was misspelled: it was missing the "redis" segment.
The correct setting is: spring.redis.lettuce.cluster.refresh.adaptive
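With the key fixed, the relevant settings read (same values as above, only the property name corrected):
spring.redis.cluster.nodes: redis:6379
spring.redis.lettuce.cluster.refresh.adaptive: true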
I am using Apache Ignite with the default configuration. I have two development servers, A and B, where each server runs the same code. I have started 3 Ignite nodes on each server: 3 on A and 3 on B.
I have created an Ignite cache named "ignite-bridg". Since the nodes on each server create the cache and partition the data among themselves, and the two servers are supposed to be isolated, I expected nothing to go wrong.
However, I see that both servers form a single cluster and all 6 nodes get connected. This is highly problematic for me. I think this is happening because both servers are accidentally in the same multicast group.
How can I resolve this problem? I need to rectify it quickly.
By default Ignite uses the multicast IP finder (TcpDiscoveryMulticastIpFinder) for node discovery; in your case you should use the static IP finder (TcpDiscoveryVmIpFinder) instead. With it you can specify a different list of IP addresses for each server and form two clusters instead of one.
Here is more information regarding Static IP Finder configuration:
https://www.gridgain.com/docs/latest/developers-guide/clustering/tcp-ip-discovery#static-ip-finder
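For reference, a minimal sketch of the static IP finder configured in code; the 10.0.1.x address and port range are placeholders, and the important point is that server A's nodes list only server A's addresses (and likewise for server B):

import java.util.Arrays;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IsolatedClusterNode {
    public static void main(String[] args) {
        // List only the addresses that belong to THIS server's cluster.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("10.0.1.10:47500..47509")); // placeholder for server A

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);

        Ignition.start(cfg);
    }
}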
I have just been investigating DC/OS and found that it has three roles: master, slave, and slave_public. I want to deploy a cluster in which one host can hold the master, slave, or slave_public roles, but currently I can't do that.
I want to know why the design doesn't allow them to be placed on one host. If I do it anyway, could I get some suggestions?
This is just an idea. If it can't be done, I'll quit using DC/OS and use Mesos and Marathon instead.
Has anyone else had the same idea? I look forward to a reply.
This is by design, and work is actually underway to enforce that a machine is installed with only one role, because things break with more than one.
If you're trying to demo or experiment with DC/OS and you only have one machine, you can use virtual machines or Docker to partition that one machine into multiple machines/parts on which you can install DC/OS. dcos-vagrant and dcos-docker can help you there.
As far as installing goes, the configuration for each of the three roles is incompatible with the others. The "master" role causes a whole bunch of pieces of software to be started/installed on a host (Mesos-DNS, Mesos master, marathon, exhibitor, zookeeper, 3dt, adminrouter, rexray, spartan, and navstar, among others) which listen on various ports. The "slave" role causes a machine to have a mesos-agent (Mesos renamed mesos-slave to mesos-agent, hence the disconnect) configured and started on it. The mesos-agent is configured to control and hand out most ports greater than 1024 to tasks which are launched by Mesos frameworks on the agent. Several of those ports are used by services which run on masters, resulting in odd conflicts and hard-to-fix bad behavior.
In the case of running the "slave" and "slave_public" roles on the same host, the two conflict more directly, because both of them cause a mesos-agent to be run on the host, with slightly different configuration. Both mesos-agents (the one configured with the "slave" role and the one with the "slave_public" role) are configured to listen on port 5051. Only one of them can use it, though, so you end up with one of the agents being non-functional.
DC/OS only supports running a node as either a master or an agent (slave). You are correct that Mesos does not have this limitation, but DC/OS is more than just Mesos/Marathon. To enable all the additional features of DC/OS, various components are built around Mesos and Marathon. At times these components behave differently depending on whether they are running on a master or an agent, and at other times the components that exist on a master may or may not exist on an agent, or vice versa. So running a master and an agent on the same node would lead to conflicts/issues.
If you are looking to run a small development setup before scaling the solution out to a bigger distributed system, DC/OS Vagrant might be a good starting point.
I have a twemproxy_sentinel setup that uses the default port 22122 as the entry point and forwards requests to the underlying Redis servers running on ports 6380 and 6381.
Every now and then, port 22122 becomes unavailable, so clients using Redis cannot connect; a telnet to it closes instantly. All I need to do is /etc/init.d/nutcracker restart and things go back to normal. All along, the sentinel and Redis services keep working; only the twemproxy seems to get cut off. Before the restart, the nutcracker service is still running (ps shows it running), and the logs give no indication of anything failing.
I'm not sure why this happens. I've dug through the logs of the Redis servers, the Redis sentinel, and twemproxy, looked into /var/log/messages, and tried to make sure file-max isn't limiting the number of ports being opened.
I wonder where I can start looking into why things go down.
I realized I had overlooked that file-max doesn't necessarily allow nutcracker itself to open that many ports; it merely allows the system as a whole to do so. Things are back to normal after actually allowing nutcracker to open more ports.
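For anyone hitting the same symptom, this is roughly what that change can look like on Linux; the user name and limit value below are assumptions, so adjust them to whatever account actually runs nutcracker:

# /etc/security/limits.conf: raise the per-process open-file limit for the (assumed) nutcracker user
nutcracker  soft  nofile  65536
nutcracker  hard  nofile  65536

# alternatively, raise the limit in the init script just before the daemon starts
ulimit -n 65536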
I've been using ServiceStack's PooledRedisClientManager with success. I'm now adding Twemproxy into the mix and have 4 Redis instances fronted by Twemproxy running on a single Ubuntu server.
This has caused problems with light load tests (100 users) connecting to Redis through ServiceStack. I've tried the original PooledRedisClientManager and BasicRedisClientManager; both give the error "No connection could be made because the target machine actively refused it".
Is there something I need to do to get these two to play nicely together? This is the Twemproxy config:
alpha:
  listen: 0.0.0.0:12112
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  timeout: 400
  server_retry_timeout: 30000
  server_failure_limit: 3
  server_connections: 1000
  servers:
   - 0.0.0.0:6379:1
   - 0.0.0.0:6380:1
   - 0.0.0.0:6381:1
   - 0.0.0.0:6382:1
I can connect to each of the Redis server instances individually; it just fails when going through Twemproxy.
I haven't used twemproxy before, but I would say your list of servers is wrong. I don't think you are using 0.0.0.0 correctly.
Your servers would need to be (for your local testing):
servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
   - 127.0.0.1:6381:1
   - 127.0.0.1:6382:1
You use 0.0.0.0 in the listen directive to tell twemproxy to listen on all available network interfaces on the server. This means twemproxy will try to listen on:
the loopback address 127.0.0.1 (localhost),
your private IP (e.g. 192.168.0.1), and
your public IP (e.g. 134.xxx.50.34).
When you are specifying servers, however, the server config needs to know the actual address it should connect to; 0.0.0.0 doesn't make sense there. It needs a real value. So when you come to use separate Redis machines, you will want to use the private IPs of each machine, like this:
servers:
   - 192.168.0.10:6379:1
   - 192.168.0.13:6379:1
   - 192.168.0.14:6379:1
   - 192.168.0.27:6379:1
Obviously your IP addresses will be different. You can use ifconfig to determine the IP on each machine, though it may be worth using hostnames if your IPs are not statically assigned.
Update:
As you have said you are still having issues, I would make these recommendations:
Remove auto_eject_hosts: true. If you were getting some connectivity at first and then ended up with none after a while, it's because something caused twemproxy to decide there was a problem with the Redis hosts and eject them (the adjusted pool config is sketched after these recommendations).
So eventually, when your ServiceStack client connects to twemproxy, there are no hosts left to pass the request on to, and you get the error "No connection could be made because the target machine actively refused it".
Do you actually have enough RAM to stress test your local machine this way? You are running at least 4 instances of Redis, which need real memory to store the values. Twemproxy consumes a large amount of memory to buffer the requests it passes to Redis, and this memory pool is never released; see here for more information. Your ServiceStack app will consume memory, more so in Debug mode. You'll probably also have Visual Studio or another IDE open, the stress-test application, and your operating system. On top of all that there will likely be background processes and other applications you haven't closed.
A good practice is to try to run tests on isolated hardware as far as possible. If it is not possible, then the system must be monitored to check the benchmark is not impacted by some external activity.
You should read the Redis article here about benchmarking.
As you are using this in a localhost situation, use the BasicRedisClientManager, not the PooledRedisClientManager.
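Putting the address fix and the first recommendation together, the pool definition for local testing would look roughly like this (a sketch based on the config posted above, with auto_eject_hosts removed and the servers pointed at 127.0.0.1; everything else is unchanged):

alpha:
  listen: 0.0.0.0:12112
  hash: fnv1a_64
  distribution: ketama
  redis: true
  timeout: 400
  server_retry_timeout: 30000
  server_failure_limit: 3
  server_connections: 1000
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
   - 127.0.0.1:6381:1
   - 127.0.0.1:6382:1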