I use RabbitMQ with its MQTT plugin, and there is a guest user who can reach multiple virtual hosts. I want to publish an MQTT message directly to a specific virtual host (/cse-id-1), but the message is sent to the default one (/) instead. What should I do to send the message to the specified virtual host while using MQTT?
There are several options for specifying the vhost when connecting the client. One is prepending the name of the vhost, followed by a colon, to the username (format vhost:username); in your case the username would be cse-id-1:guest.
See details and other options in the official documentation: https://www.rabbitmq.com/mqtt.html#virtual-hosts
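For example, with the mosquitto_pub command line client (the broker host, topic, and payload below are placeholders; the MQTT plugin listens on port 1883 by default), a quick test might look like this:
mosquitto_pub -h rabbitmq.example.com -p 1883 -u 'cse-id-1:guest' -P guest -t 'some/topic' -m 'hello'
Assuming the guest user has permissions on /cse-id-1, the message should then be published through that vhost rather than the default one.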
I have a RabbitMQ server which receives messages on an exchange within a virtual host called "ce_func"; this exchange is bound to a queue called "azure_trigger".
I'd like to use Azure Functions' new RabbitMQ binding to collect from Rabbit. Unfortunately, this is limited to collecting only from virtual host '/'. I was hoping that I could use Rabbit's federation functionality to automatically route to an "azure_trigger" queue within the "/" virtual host of the same server, but so far I've failed.
I created a Rabbit "upstream" and a "policy" applied to that upstream, but I can't figure out the configuration. I have a Federation Status of "Running", but it's only checking the "ce_func" virtual host; I can't see where I can set the target exchange in the "/" virtual host.
Does anyone have any pointers please?
If I understand correctly, you want to deliver messages between queues in different vhosts.
The RabbitMQ community recommends using the Shovel plugin to handle this situation:
The source and destination can be on the same broker (typically in different vhosts) or distinct brokers.
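As a rough sketch (the shovel name is mine, and I am assuming an azure_trigger queue already exists in both vhosts and that the default guest account can reach them via localhost), you could enable the plugin and declare a dynamic shovel that moves messages from ce_func into the default vhost:
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
rabbitmqctl set_parameter shovel azure-trigger-shovel '{"src-uri": "amqp://localhost/ce_func", "src-queue": "azure_trigger", "dest-uri": "amqp://localhost/%2f", "dest-queue": "azure_trigger"}'
The Azure Functions binding could then consume the azure_trigger queue in '/' as it expects.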
It is possible to reference any virtual host (vhost) in the uri field of the federation-upstream's configuration in the form:
"amqp://" [ username [ ":" password ] "#" ] host [ ":" port ] [ "/" vhost ]
So in simple terms you can tack the vhost onto the end of the URI, e.g. amqp://localhost:5672/myvhost. If your vhost name is blank, then just make sure you include the trailing slash '/', e.g. amqp://localhost:5672/.
A note specific to the blank vhost from the RabbitMQ docs (https://www.rabbitmq.com/uri-spec.html):
The vhost component may be absent; this is indicated by the lack of a
"/" character following the amqp_authority. An absent vhost component
is not equivalent to an empty (i.e. zero-length) vhost name.
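For example, if the federated queue or exchange lives in the default vhost and should pull from ce_func on the same broker, the upstream could be defined along these lines (the upstream name is a placeholder, and you would still need a policy in the downstream vhost that applies it):
rabbitmqctl set_parameter -p / federation-upstream ce-func-upstream '{"uri": "amqp://localhost:5672/ce_func"}'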
When I set permissions for a RabbitMQ user, the output mentions a vhost:
[root@ha-node1 my.cnf.d]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
What is the meaning of the vhost when I set permission, and what function does it have?
In RabbitMQ, virtual hosts are logical groups of entities; they are similar to virtual hosts in Apache or server blocks in Nginx.
Virtual hosts are created using rabbitmqctl or HTTP API and they provide logical grouping and separation of resources.
Every virtual host has a name. When an AMQP 0-9-1 client connects to RabbitMQ, it specifies a vhost name to connect to.
If authentication succeeds and the username provided was granted permissions to the vhost, connection is established.
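Permissions are therefore always granted per vhost; the -p flag of rabbitmqctl selects which one, and without it the default "/" vhost is used, which is why your output mentions it. A minimal sketch (the vhost name my_vhost is a placeholder):
rabbitmqctl add_vhost my_vhost
rabbitmqctl set_permissions -p my_vhost openstack ".*" ".*" ".*"
rabbitmqctl list_permissions -p my_vhost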
Let me explain this with an analogy.
Vhosts are to Rabbit what virtual machines are to physical servers: vhosts allow you to run data for multiple applications safely and securely by providing logical separation between instances.
This is useful for anything from separating multiple customers on the same Rabbit to avoiding naming collisions on queues and exchanges, where you might otherwise have to run multiple Rabbits.
Every RabbitMQ server has the ability to create virtual message brokers called virtual hosts (vhosts). Each one is essentially a mini-RabbitMQ server with its own queues, exchanges, bindings and so on and, more importantly, its own permissions.
For more details, see: https://livebook.manning.com/book/rabbitmq-in-action/chapter-2/
I'm trying to set up JMeter in distributed mode.
I have a server running on an EC2 instance, and I want the master to run on my local computer.
I had to jump through some hoops to get RMI working correctly on the server, but that was solved by setting "java.rmi.server.hostname" to the IP of the EC2 instance.
The next (and hopefully last) problem is the server communicating back to the master.
The problem is that because I am doing this from an internal network, the master is sending its local/internal IP address (192.168.1.XXX) when it should be sending back the IP of my external connection (92.XXX.XXX.XXX).
I can see this in the jmeter-server.log:
ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: 192.168.1.50; nested exception is:
That host IP is wrong. It should be the 92.XXX.XXX.XX address. I assume this is because in the master logs I see the following:
2012/07/29 20:45:25 INFO - jmeter.JMeter: IP: 192.168.1.50 Name: XXXXXX.local FullName: 192.168.1.50
And this IP is sent to the server during RMI setup.
So I think I have two options:
Tell the master to send the external IP
Tell the server to connect on the external IP of the master.
But I can't see where to set these commands.
Any help would be useful.
For the benefit of future readers, don't take no for an answer. It is possible! Plus you can keep your firewall in place.
In this case, I did everything over port 4000.
How to connect a JMeter client and server for distributed testing with Amazon EC2 instance and local dev machine across different networks.
Setup:
JMeter 2.13 Client: local dev computer (different network)
JMeter 2.13 Server: Amazon EC2 instance
I configured distributed client / server JMeter connectivity as follows:
1. Added a port forwarding rule on my firewall/router:
Port: 4000
Destination: JMeter client private IP address on the LAN.
2. Configured the "Security Group" settings on the EC2 instance:
Type: Allow: Inbound
Port: 4000
Source: JMeter client public IP address (my dev computer/network public IP)
Update: If you already have SSH connectivity, you can use an SSH tunnel for the connection instead, which avoids having to add the firewall rules.
$ ssh -i ~/.ssh/54-179-XXX-XXX.pem -o ServerAliveInterval=60 -R 4000:localhost:4000 jmeter@54.179.XXX.XXX
3. Configured client $JMETER_HOME/bin/jmeter.properties file RMI section:
note only the non-default values that I changed are included here:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# Remote Hosts - comma delimited
# Add EC2 JMeter server public IP address:Port combo
remote_hosts=127.0.0.1,54.179.XXX.XXX:4000
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controler)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To change the default port (1099) used to access the server:
server.rmi.port=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
4. Configured remote server $JMETER_HOME/bin/jmeter.properties file RMI section as follows:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controler)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
5. Started the JMeter server/slave with:
jmeter-server -Djava.rmi.server.hostname=54.179.XXX.XXX
where 54.179.XXX.XXX is the public IP address of the EC2 server
6. Started the JMeter client/master with:
jmeter -Djava.rmi.server.hostname=121.73.XXX.XXX
where 121.73.XXX.XXX is the public IP address of my client computer.
7. Ran a JMeter test suite.
JMeter GUI log output
Success!
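If you prefer to run the master without the GUI, a non-GUI equivalent of steps 6 and 7 would look something like this (the test plan and results file names are placeholders):
jmeter -n -t my_test_plan.jmx -Djava.rmi.server.hostname=121.73.XXX.XXX -R 54.179.XXX.XXX:4000 -l results.jtl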
I had a similar problem: the JMeter server tried to connect to the wrong address for sending the results of the test (it tried to connect to localhost).
I solved this by setting the following parameter when starting the JMeter master:
-Djava.rmi.server.hostname=xx.xx.xx.xx
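For example, in a non-GUI run (the test plan name is just an illustration; -r tells JMeter to use the remote hosts configured in jmeter.properties):
jmeter -n -t test_plan.jmx -Djava.rmi.server.hostname=xx.xx.xx.xx -r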
It looks as though this won't work. Distributed JMeter Testing explains the requirements for load testing in a distributed environment; numbers 2 and 3 are particular to your use case, I believe.
The firewalls on the systems are turned off.
All the clients are on the same subnet.
The server is in the same subnet, if 192.x.x.x or 10.x.x.x ip addresses are used.
Make sure JMeter can access the server.
Make sure you use the same version of JMeter on all the systems. Mixing versions may not work correctly.
Might be very late in the game, but still. I'm running this with JMeter 5.3.
So, to get it to work, set up the slaves in AWS and the controller on your local machine.
Make sure your slave has the proper local ports and hostname. The hostname on the slave should be the EC2 instance's public DNS.
Make sure AWS has the proper security policies.
For the controller (which is your local machine), make sure you run with the parameter -Djava.rmi.server.hostname=<your public IP>. You can get that IP by googling "my public ip address"; it is definitely not one of the 192.xxx.xxx.x or 172.xx.xxx addresses.
Then you have to configure your router to port-forward to the machine used as the controller. The port can be obtained from the slave log (the lines that say FINE: RMI RenewClean...; yes, you have to set the log level to verbose). Alternatively, set your controller machine as the DMZ host. That is dangerous but convenient just for the duration of the test; don't forget to turn it off afterwards.
Then it should work.
Is there a way to disable SSL on WAS, so you can just log on using a username and password, without it being tied to disabling Global Security?
Cheers
First of all, the login part of the application has nothing to do with the protocol you use to reach your application; e.g. you can use a login dialog (forms, HTTP auth, etc.) with plain HTTP or HTTP over SSL (SSL being preferred if the app is not for in-house use only, and even if it is, I would think about using SSL).
In WebSphere you deploy your application on a virtual host. A virtual host is a collection of host names and ports (called host aliases) from which your application should be reachable. So to get the behaviour you want, I would create a new virtual host (the description below is for the WebSphere Admin Console in 6.1):
Environment > Virtual Host > New
Give it a descriptive name, like http_only. Afterwards you do:
Environment > Virtual Host > http_only > Host Aliases > New
There you add a host name or an asterisk ('*', without the quotes) and a port number (in this case the port for HTTP). Next is changing the virtual host your application is bound to:
Applications > Enterprise Applications > app_name > Virtual hosts
There are dropdown boxes from which you can choose the virtual hosts. After that, and a save, the app should be reachable only over HTTP.