I have 4 network interfaces on the server, and I want to use 40,000 sockets per virtual host and bind each virtual host to a specific network interface in RabbitMQ on CentOS.
RabbitMQ does not support binding virtual hosts to network interfaces.
You can specify multiple network interfaces that the RabbitMQ server listens on (see the documentation), but as long as a client connects and authenticates properly, it can access any virtual host it is authorized to use.
The only possible solution would be to deploy 4 separate RabbitMQ server instances, each with its own virtual host and listening on a different interface, with the Shovel plugin mirroring between brokers (if you need messages to be transferred between virtual hosts).
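As the documentation mentioned above describes, the interfaces each broker instance listens on can be restricted. A minimal sketch of the listener setting for one such instance, assuming the modern `rabbitmq.conf` format (the address is a placeholder):

```ini
# rabbitmq.conf for one broker instance: accept AMQP connections only on
# one interface (address and port below are placeholders)
listeners.tcp.local = 192.168.10.1:5672
```

Each of the four instances would get its own copy of this file with a different interface address, along with its own node name and data directory.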
Is it possible to have multiple apache-server instances running when using "host" networking? Just as it is possible with "bridged" networking & port mapping?
Or do other instances running alongside the "host" networking instance have to be "bridged" in order to map a port other than 80, which might already be in use?
Anything that runs using host networking, well, uses the host networking. There is no isolation between your container, other host-networked containers, and processes running directly on the host. If you are running Apache on your host, and two --net host Apache containers, and they all try to bind to 0.0.0.0 port 80, they will conflict. You need to resolve this using application-specific configuration; there is no concept of port mapping in host networking mode.
Particularly for straightforward HTTP/TCP services, host networking is almost never necessary. If you use standard bridged networking then applications in containers won’t conflict with each other or host processes. You can remap the ports to whatever is convenient for you, without worrying about reconfiguring the application.
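For example, with bridged networking two identical web servers can coexist by mapping each to a different host port; a hypothetical Compose sketch (the image and port numbers are illustrative):

```yaml
# docker-compose.yml – two Apache containers on the default bridge
# network, each mapped to a different host port
services:
  web1:
    image: httpd:2.4
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  web2:
    image: httpd:2.4
    ports:
      - "8081:80"   # host port 8081 -> container port 80
```

Both containers still bind port 80 internally; only the host-side port differs, so nothing in the application needs to be reconfigured.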
I have a requirement to host multiple applications on the same public IP and port. I'm new to this area, and I figured out that SNI can be used to achieve this. I decided to use the Microsoft Application Gateway as the load balancer, and I can configure 2 apps with 2 SSL certificates. My question is: how can I access them via browser? For example, if the server FQDN is www.example.com and there are 2 applications running on it, how can I specify which application to load?
Each certificate is associated with a specific FQDN for one application. Since you have 2 applications on the same IP address and TCP port, you need to create two FQDNs (i.e. www.my1stappli.mydomain.com and www.my2ndappli.mydomain.com), generate two certificates (one for each FQDN), and configure the Azure Application Gateway to handle each application with its own certificate. If you have only one virtual machine to handle those 2 applications, configure the Azure Application Gateway to redirect one application to port 80 of your virtual machine and the other application to port 81 of the same virtual machine.
Thus,
https://www.my1stappli.mydomain.com will be redirected to port 80 of your virtual machine
and https://www.my2ndappli.mydomain.com to port 81 of the same virtual machine
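Conceptually, the gateway's routing is a lookup from the TLS SNI hostname the browser sends to a backend port; a toy Python sketch of that mapping (hostnames and ports are taken from the example above, and the function name is made up):

```python
# The SNI hostname the browser sends during the TLS handshake selects
# both the certificate and the backend, so the "which application?"
# question is answered by the FQDN the user types.
ROUTES = {
    "www.my1stappli.mydomain.com": 80,
    "www.my2ndappli.mydomain.com": 81,
}

def backend_port(sni_hostname: str) -> int:
    """Return the backend port the gateway would forward this host to."""
    return ROUTES[sni_hostname]
```

The real gateway does this via listeners and routing rules, but the effect is the same: the hostname, not the path, chooses the application.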
I have created a Virtual Machine in Bluemix and installed the NGINX web server on it. To access the web application deployed on NGINX, should I use a public IP address (e.g. http://123.456.78.9), or is there a domain name associated with the instance (something like http://abcxyz.bluemix.net)?
When you launch a VM in the cloud, it is not accessible from the public internet by default unless you attach a public IP address to it.
Once you have attached a public IP address to your instance and configured the firewalls to allow incoming connections (HTTP, SSH, etc.), you can update your DNS records to direct traffic there.
AWS creates a public FQDN when launching EC2 instances, of the form ec2-nn-nn-nn-nn.region.compute.amazonaws.com (where the nn parts are the octets of the public IP address). I'm not sure Bluemix does anything similar, but it is unlikely to give you something like abcxyz.bluemix.net.
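To illustrate that EC2 naming pattern, the public IP can be read straight out of such a hostname; a small Python sketch (the helper name is made up):

```python
import re

def parse_ec2_hostname(fqdn: str) -> str:
    """Extract the dotted public IP from an EC2-style public DNS name."""
    m = re.match(r"ec2-(\d+)-(\d+)-(\d+)-(\d+)\.", fqdn)
    if not m:
        raise ValueError("not an EC2 public DNS name")
    return ".".join(m.groups())
```

For example, `parse_ec2_hostname("ec2-52-14-3-7.us-east-2.compute.amazonaws.com")` yields `"52.14.3.7"`.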
I am planning to create a very simple home/office monitoring system, where I will be able to connect sensors using a Mosquitto broker on a Raspberry Pi. The sensors will publish data to the broker and I will be able to see the data as a subscriber.
I have a publisher that periodically publishes messages to the Raspberry Pi (Mosquitto broker) in the house, on the same LAN. But I want to connect a sensor located at my office to the same broker, so the connection has to go over the Internet.
The problem I am facing at the moment is that I want to connect a sensor to the Raspberry Pi, but I need to do it over the Internet given the limitations of the hardware. How do I connect the sensor to publish to the broker on the Raspberry Pi from outside the LAN? I just checked, and my public IP address could have thousands of devices behind it; how do I know which one is mine and connect to it?
This somewhat depends on your home network. If your ISP uses so-called "Carrier-Grade NAT", which is increasingly common now that few IPv4 addresses are available, you cannot make a connection from the Internet to your local network.
Otherwise you can make an inbound connection, and your remote sensor just needs a TCP/IP connection; we would need more information about the sensor to show you how to do that. To find your public IP address, visit https://www.whatismyip.com/ from a computer on your local network. (To find out whether you are stuck behind carrier-grade NAT, ask your ISP or do a reverse lookup on your public IP address; the result may tell you.) You will also need to configure your router's firewall to allow inbound connections on a port you choose (on the outside) and map it to the internal IP address of the Pi and the port Mosquitto uses for MQTT traffic.
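Once the router forwards the chosen external port to the Pi, the remote sensor (or any machine with the Mosquitto clients installed) can publish through your public address; a sketch with placeholder addresses, topic and credentials:

```shell
# From the office: publish to the home broker through the forwarded port
# (<your-public-ip>, the credentials and the topic are placeholders)
mosquitto_pub -h <your-public-ip> -p 1883 \
  -u sensor -P secret \
  -t sensors/office/temperature -m "21.5"

# From inside the home LAN: subscribe to verify the message arrives
# (192.168.1.50 is a placeholder for the Pi's internal address)
mosquitto_sub -h 192.168.1.50 -t "sensors/#" -v
```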
If you are behind carrier-grade NAT, or cannot configure your router's firewall to allow incoming connections, use another MQTT broker hosted somewhere on the Internet instead. Then configure your Pi broker to bridge to the Internet broker.
The Owntracks documentation has a quick tutorial. There is also information in the Mosquitto documentation on how to do this.
In this case, your Pi broker makes an outbound connection to the Internet, which works fine on any WAN as long as outgoing traffic is not excessively blocked.
In either case, do not forget that any traffic over the Internet is insecure by default. You will need to set up SSL/TLS certificates along with a username/password combination to secure the traffic.
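A bridge like the one described above is configured on the Pi's broker; a minimal `mosquitto.conf` sketch with placeholder hostname, credentials and topic:

```conf
# mosquitto.conf on the Pi – bridge local sensor traffic to a broker on
# the Internet (hostname, credentials and topic are placeholders)
connection office-bridge
address broker.example.com:8883
bridge_cafile /etc/mosquitto/certs/ca.crt
remote_username pi-bridge
remote_password change-me
# Mirror the sensors/ hierarchy in both directions at QoS 0
topic sensors/# both 0
```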
Easy!
Just add a port forwarding rule (in your local router, 192.168.x.x) to your Raspberry Pi's IP and the MQTT port (usually 1883).
Then your sensor, connected to the Internet, can send a topic/payload to your public IP address at home...
I do this for Android apps:
- OwnTracks
- juiceSSH
- raspicheck
- myMQTT
- openHAB
- Yatse (For Kodi remote)
But do not forget that when you open a port you will need to secure its access somehow...
Also, your public IP can be changed automatically by your ISP (Internet Service Provider).
I have a Google Cloud Container Engine cluster with 2 Pods, master and slave. Each of them runs a RabbitMQ instance, and the two are supposed to be joined into one cluster.
Ports exposed from the containers aren't reachable from other machines; they can only be accessed through a Service. That's not a problem: I could establish a Service for each instance (one-to-one, Service-to-Pod) and point each Pod at the opposite Service IP.
The problem is that RabbitMQ uses more than one port for communication. That means the Service should expose all of those ports from the underlying Pod. But I cannot specify a list of shared ports for a Service, and if I create a new Service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately has the same answer: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a Service. There is an open issue on GitHub to address this use case.
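Note that the limitation is specifically about port ranges; a Service can list several discrete ports on one cluster IP. A sketch covering RabbitMQ's fixed, well-known ports (the Service name, port names and selector label are assumptions):

```yaml
# One Service, one cluster IP, several named ports. What cannot be
# expressed here is a dynamic *range* of ports.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      port: 5672        # client connections
    - name: epmd
      port: 4369        # peer discovery (epmd)
    - name: clustering
      port: 25672       # inter-node and CLI communication
```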