HAProxy setup on a system which does not host any RabbitMQ node

I want to set up HAProxy for a RabbitMQ cluster. I have the following queries:
(1) Suppose I have a scenario where my RabbitMQ servers, client, and HAProxy are all on different machines.
RabbitMQ node1 -> Machine1
RabbitMQ node2 -> Machine2
HAProxy -> Machine3
RabbitMQ client -> Machine4
node1 and node2 have been clustered. Is this a correct configuration? The rationale behind this question is: can HAProxy be set up on a machine which does not host any node, or does HAProxy have to be set up on a machine which hosts at least one RabbitMQ server node?
(2) If the above setup is valid, then my RabbitMQ client needs to know only the HAProxy machine; in that case, how should I connect my client to HAProxy? The client code that works when the RabbitMQ client connects directly to a machine hosting a RabbitMQ server node will not work here.

I investigated and found the answers to my questions:
1. This setup is valid; it is a possible scenario.
2. The client connects to the HAProxy server just as it would to a RabbitMQ node, and HAProxy forwards the traffic to the clustered nodes.
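To illustrate point 2: the client code does not change, only the host (and the port, if HAProxy listens on a non-default one). A minimal sketch with the RabbitMQ Java client, assuming a hypothetical host name haproxy-host for Machine3 with an HAProxy frontend on the standard AMQP port 5672 forwarding to node1 and node2:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HaproxyClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("haproxy-host"); // hypothetical name for Machine3; hosts no RabbitMQ node
        factory.setPort(5672);           // HAProxy frontend port, proxying to node1/node2

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // "demo-queue" is a hypothetical queue name used only for this sketch
            channel.queueDeclare("demo-queue", false, false, false, null);
            channel.basicPublish("", "demo-queue", null, "hello via HAProxy".getBytes());
        }
    }
}
```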

Related

How to receive a broker's data with an MQTT.fx client?

I have both a client and a broker running on a remote Linux machine, within a LoRa network server that has a Mosquitto connector.
The client can listen for the broker's broadcast on a certain address and port (127.0.0.1:1883).
I would like to open an SSH tunnel between this remote machine and my machine (Windows 10) to 'eavesdrop' on the communication between the client and the broker, using MQTT.fx to run a Mosquitto client.
So far I have tried to:
Run ssh -L 22883:remoteMachineAddress:1883 usern.ame@gatewayAddress -p 222 in [MobaXterm](https://mobaxterm.mobatek.net/)
Then I launch a client in MQTT.fx that connects to the broker at 127.0.0.1, port 22883.
This establishes a connection to the broker. However, I am not receiving any of the messages that the original client (the one on the remote machine) receives.
Can anyone tell me what I am doing wrong?
And are there any tutorials about this?
I appreciate all the help I can get, thank you in advance!
This configuration is correct; it turned out that the connector on the server was sending the data to a different application.
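For anyone reproducing this kind of tunnel-based eavesdropping: merely connecting is not enough; the client must also subscribe to the topics it wants to see. A minimal sketch with the Eclipse Paho Java client, assuming the SSH tunnel above is up on local port 22883 and using the # wildcard to match every topic:

```java
import org.eclipse.paho.client.mqttv3.*;

public class TunnelListener {
    public static void main(String[] args) throws MqttException {
        // Connect to the local end of the SSH tunnel, which forwards to the remote broker's port 1883
        MqttClient client = new MqttClient("tcp://127.0.0.1:22883", MqttClient.generateClientId());
        client.setCallback(new MqttCallback() {
            @Override public void connectionLost(Throwable cause) {
                System.err.println("Connection lost: " + cause);
            }
            @Override public void messageArrived(String topic, MqttMessage message) {
                System.out.println(topic + " -> " + new String(message.getPayload()));
            }
            @Override public void deliveryComplete(IMqttDeliveryToken token) {
                // Only relevant when publishing; this client just listens
            }
        });
        client.connect();
        client.subscribe("#"); // wildcard subscription: receive everything the broker relays
    }
}
```

Note that an MQTT broker only delivers messages for topics a client has subscribed to, so a plain connection with no subscription shows nothing.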

Apache Active MQ connect exception using Java program from remote machine

I was trying to run Apache ActiveMQ; the broker ran successfully at localhost. On the same machine, the JMS producer and consumer Java applications also ran successfully. BUT then I changed the URI to tcp://192.168.1.1:61616 in activemq.xml and ran the broker on machine 1 (192.168.1.1). I ran the consumer on machine 1 and the producer from machine 2 in the LAN, but the producer threw a JMS exception: ConnectException, Connection refused. As a result, the producer and consumer cannot communicate over the LAN. Please guide.
If I understand this correctly you have this setup:
Machine1: ActiveMQ Broker and Consumer
Machine2: Producer
Then you need to set up your configurations like this:
ActiveMQ Broker: in activemq.xml, set the transport connector to tcp://192.168.1.1:61616
Consumer: tcp://192.168.1.1:61616
Producer: tcp://192.168.1.1:61616
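As a quick cross-check that both sides use the same broker URL, here is a minimal JMS producer sketch with the ActiveMQ Java client, assuming the broker address above and a hypothetical queue name TEST.QUEUE:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ProducerExample {
    public static void main(String[] args) throws JMSException {
        // The same URL is used by producer and consumer; the broker must bind to it in activemq.xml
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://192.168.1.1:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("TEST.QUEUE"); // hypothetical queue name
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello from machine 2"));

        connection.close();
    }
}
```

If the URL is consistent and the connection is still refused, check that port 61616 is reachable through the firewall on machine 1 (which turned out to be the problem here).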
Thank you very much. It was the firewall that was preventing the connection. I disabled the firewall and things are running fine. Regards.

How to connect to redis-ha cluster in Kubernetes cluster?

So I recently installed the stable/redis-ha chart (https://github.com/helm/charts/tree/master/stable/redis-ha) on my Google Cloud based Kubernetes cluster. The cluster was installed as a "headless service" without a ClusterIP. There are 3 pods that make up this cluster, one of which is elected master.
The cluster installed with no issues and can be accessed via redis-cli from my local PC (after port-forwarding with kubectl).
The output from the cluster install provided me with a DNS name for the cluster. Because the service is headless, I am using the following DNS name:
port_name.port_protocol.svc.namespace.svc.cluster.local (as specified by the documentation)
When attempting to connect I get the following error:
"redis.exceptions.ConnectionError: Error -2 connecting to
port_name.port_protocol.svc.namespace.svc.cluster.local :6379. Name does not
resolve."
This is not working, and I am not sure what to do here. Any help would be greatly appreciated.
The DNS name appears to be incorrect. It should be in the following format:
<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis service name is redis and the namespace is default; then it would be
redis.default.svc.cluster.local:6379
You can also use the pod DNS name, like below:
<redis-pod-name>.<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis pod name is redis-0, the service name is redis, and the namespace is default; then it would be
redis-0.redis.default.svc.cluster.local:6379
This assumes the service port is the same as the container port, namely 6379.
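The error in the question came from a Python client, but the fix is the same for any client: use the corrected host name. For illustration, a minimal connection check with the Jedis Java client, assuming a service named redis in the default namespace and that the code runs inside the cluster (cluster DNS does not resolve from a local PC without port-forwarding):

```java
import redis.clients.jedis.Jedis;

public class RedisCheck {
    public static void main(String[] args) {
        // The service DNS name resolves only inside the Kubernetes cluster
        try (Jedis jedis = new Jedis("redis.default.svc.cluster.local", 6379)) {
            System.out.println("PING -> " + jedis.ping()); // expect "PONG"
        }
    }
}
```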
Not sure if this is still relevant. You can also enhance the chart to support NodePort, similar to other charts such as rabbitmq-ha, so that you can use any node IP and the configured node port if you want to access Redis from outside the cluster.

Apache Ignite | TCP discovery SPI

What is the TCP discovery SPI in Apache Ignite, and what does it do?
What is 127.0.0.1:47500..47509 in example-cache.xml of Apache Ignite?
Apache Ignite nodes are all the same and join together automatically to form a cluster. The "discovery" mechanism is how they find each other: a node uses the IP address and port of at least one other Ignite node to join the rest of the cluster.
TCP Discovery uses a TCP connection to the addresses and ports specified.
127.0.0.1:47500..47509 is a shortened notation that says contact IP 127.0.0.1 (also known as your localhost) and use ports from 47500 to 47509.
This is used in the example code so you can easily run a few instances of Ignite for testing on your computer and they all will connect to each other as a cluster, even though they are all running on the same machine.
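For reference, the same address range from example-cache.xml can also be set programmatically. A minimal sketch using Ignite's Java API with the static IP finder:

```java
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class DiscoveryExample {
    public static void main(String[] args) {
        // Static IP finder: try localhost on ports 47500-47509 to locate other nodes
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        // Every JVM started with this configuration on the same machine joins one cluster
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Nodes in cluster: " + ignite.cluster().nodes().size());
    }
}
```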
This is the component responsible for node discovery. Please refer to this page for details: https://apacheignite.readme.io/docs/tcpip-discovery

How to set up a Puppet master as a node

Currently I have a master and an agent working on separate CentOS 6.5 VMs. I would like to be able to configure my own master, as I will be tearing down and making a new master every time.
How can I get puppet agent --test --noop to work on my master machine as well?
Currently I receive an error:
Error: Could not request certificate: 502 "Proxy Error ( The specified Secure Sockets Layer (SSL) port is not allowed. ISA Server is not configured to allow SSL requests from this port. Most Web browsers use port 443 for SSL requests. )"
SSL requests seem to be set up for port 443. Any thoughts?
Thank you very much!
Jason
Credit to Felix Frank and mr_tron.
The issue appears to have been solved by removing the http_proxy declaration from the .bashrc file (and anywhere else it was set).
The Puppet master is now able to act as an agent.
Thank you,
Jason