Prevent RabbitMQ web-stomp client from sending

I have RabbitMQ + Web STOMP. I would like to completely restrict the ability to send messages to a queue from JavaScript code; only the server side should be able to publish.
In other words, I would like to allow the following code:
...
client.subscribe("/queue/My-One-Way-Queue", function(m) {
...
client.onreceive = function(message) {
console.log(message);
}
And prevent malicious code from doing the following:
client.send('/queue/My-One-Way-Queue',
{'reply-to': '/temp-queue/My-One-Way-Queue'}, text);

You need to create a user for the JavaScript client that has read permissions only. See: https://www.rabbitmq.com/access-control.html
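For example, something along these lines (a minimal sketch; the user name, password and vhost are assumptions, and the exact patterns depend on which destinations the client declares):

rabbitmqctl add_user webclient some-password
# configure/read are limited to the one queue (the Web STOMP plugin declares the
# queue when the client subscribes); write is "^$", which matches nothing, so the
# client cannot publish to any exchange
rabbitmqctl set_permissions -p / webclient "^My-One-Way-Queue$" "^$" "^My-One-Way-Queue$"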

Function running locally but not when deployed to Azure

I have an HTTP trigger function; when I send a message to the URL, it logs data to the queue storage as required:
http://localhost:7071/api/xxxx?message=89000
However, when I do the same in Azure on the function URL
https://yyyyy.azurewebsites.net/api/xxxx?message=89000 nothing is logged.
How can I resolve this?
Another question: the underlying code is
import logging
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    input_msg = req.params.get('message')
    logging.info(input_msg)
    msg.set(input_msg)
    return func.HttpResponse(
        "This is a test.",
        status_code=200
    )
It is expected to receive the following payload:
{
    "layerId": 0,
    "serviceName": "myService",
    "changeType": "FeaturesCreated",
    "orgId": "myorgId",
    "changesUrl": "https://olserver/services/myService/FeatureService/extractChanges?serverGens=[1122, 1124]"
}
When I call http://localhost:7071/api/xxxx?message=89000 it logs to the queue storage just fine, but not when this payload is delivered. How can I configure this?
An Azure function app does not get environment variables from local.settings.json once deployed; that file is only used locally and is not published to the portal. You need to add the same settings (for example the storage connection string used by the queue output binding) to the function app's Application settings in the portal.
And in order to rule out problems caused by certain settings of the existing queue preventing messages from being saved, you can try creating a new queue.
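For reference, the queue output binding named msg comes from function.json; a minimal sketch might look like the following (the authLevel and queue name here are assumptions, and the connection setting it references must also exist in the portal's Application settings):

{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": ["get", "post"]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        },
        {
            "type": "queue",
            "direction": "out",
            "name": "msg",
            "queueName": "outqueue",
            "connection": "AzureWebJobsStorage"
        }
    ]
}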

User destinations in a multi-server environment? (Spring WebSocket and RabbitMQ)

The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js this results in the first user getting the queue name "/queue/position-updates-user0", the next gets "/queue/position-updates-user1" and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
But this only makes it possible to communicate with a user from a different server than the one where the WebSocket is established.
Am I missing something? Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can be used safely in a multi-server environment?
If the users need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user id in the destination they subscribe to.
What I do is, when a user logs in, they are automatically subscribed to two topics: an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userId> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user ids are unique to each user, you shouldn't have an issue in a clustered environment as long as each server is hitting the same database.
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String destination = topic;
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(destination, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with id 1, the destination looks something like /topic/account|1.
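On the client side, subscribing to such a destination with stomp.js might look like this (a sketch; userId is assumed to be the database id made available to the page after login):

client.subscribe('/topic/account|' + userId, function (message) {
    // handle a broadcast addressed specifically to this user
    console.log(JSON.parse(message.body));
});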
I've created a ping-pong controller that tests WebSockets for users who connect, to see if their environment allows WebSockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value = "/queue/pong", broadcast = false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}

Accessing Meteor application as another user

I've recently updated some parts of the code and want to check that they play well with the production database, which has different data sets for different users. But I can only access the application as my own user.
How to see the Meteor application through the eyes of another user?
UPDATE: The best way to do this is to use a method
Server side
Meteor.methods({
    logmein: function(user_id_to_log_in_as) {
        this.setUserId(user_id_to_log_in_as);
    }
});
Client side
Meteor.call("logmein", "<some user_id of who you want to be>");
This is kept simple for sake of clarity, feel free to place in your own security measures.
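For example, one way to lock the method down (a sketch; the isAdmin flag is an assumption about your user documents, not part of the original answer):

Meteor.methods({
    logmein: function (user_id_to_log_in_as) {
        check(user_id_to_log_in_as, String);
        var caller = Meteor.users.findOne(this.userId);
        // only allow admins to impersonate another user
        if (!caller || !caller.isAdmin) {
            throw new Meteor.Error("not-authorized");
        }
        this.setUserId(user_id_to_log_in_as);
    }
});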
I wrote a blog post about it. But here are the details:
On the server, add a method that only an admin can call, which changes the currently logged-in user programmatically:
Meteor.methods(
  "switchUser": (username) ->
    user = Meteor.users.findOne("username": username)
    if user
      idUser = user["_id"]
      this.setUserId(idUser)
      return idUser
)
On the client, call this method with the desired username and override the user on the client as well:
Meteor.call("switchUser", "usernameNew", function(error, idUser) {
    Meteor.userId = function() { return idUser; };
});
Refresh client to undo.
This may not be a very elegant solution but it does the trick.
A slightly updated version of the accepted answer that logs in as the new user on the client as well as on the server:
logmein: function(user_id_to_log_in_as) {
    if (Meteor.isServer) {
        this.setUserId(user_id_to_log_in_as);
    }
    if (Meteor.isClient) {
        Meteor.connection.setUserId(user_id_to_log_in_as);
    }
},
More info here: http://docs.meteor.com/api/methods.html#DDPCommon-MethodInvocation-setUserId

ActiveMQ: multiple consumers connected to one queue, but only one consumer receives all the messages

I am currently using NMS to develop an application based on ActiveMQ (5.6).
We have several consumers (separate executables) trying to receive messages from the same queue (not a topic), but all the messages go to just one consumer, even though I make that consumer sleep for a couple of seconds after receiving each message. By the way, we don't want consumers to receive messages that other consumers have already received.
It is mentioned on the official website that we should set the Prefetch Limit to decide how many messages can be streamed to a consumer at any point in time, and that it can be set either in configuration or in code.
One way I tried is to set it in code using the PrefetchPolicy class bound to the ConnectionFactory, like below.
PrefetchPolicy poli = new PrefetchPolicy();
poli.QueuePrefetch = 0;
ConnectionFactory fac = new ConnectionFactory("activemq:tcp://Localhost:61616?jms.prefetchPolicy.queuePrefetch=1");
fac.PrefetchPolicy = poli;
using (IConnection con = fac.CreateConnection())
{
    using (ISession se = con.CreateSession())
    {
        IDestination destination = SessionUtil.GetDestination(se, queue, DestinationType.Queue);
        using (IMessageConsumer consumer = se.CreateConsumer(destination))
        {
            con.Start();
            while (true)
            {
                ITextMessage message = consumer.Receive() as ITextMessage;
                Thread.Sleep(2000);
                if (message != null)
                {
                    Task.Factory.StartNew(() => extractAndSend(message.Text)); // do something
                }
                else
                {
                    Console.WriteLine("No message received~");
                }
            }
        }
    }
}
But no matter what prefetch value I set, the behavior of the consumers stays the same as before.
I've also tried the second way, namely configuring the server's conf file. I changed the server's activemq.xml like below.
" producerFlowControl="true" memoryLimit="5mb" />
" producerFlowControl="true" memoryLimit="5mb">
But even though I've set the dispatch policy, the messages still go to one consumer.
What I want to know:
Can this behavior be achieved just by configuring the server XML file, so that all consumers receive messages from the one queue? If so, how should it be configured, and what is wrong with my configuration? If not, how can I achieve the goal in code?
Thanks.
Take a look at "Message Groups" feature.
I had the same problem: only one consumer processed all the messages. I found that in my code I had set the group header during send:
request.Properties["NMSXGroupID"] = "cheese";
According to official docs:
Standard JMS header JMSXGroupID is used to define which message group the message belongs to. The Message Group feature then ensures that all messages for the same message group will be sent to the same JMS consumer - while that consumer stays alive. As soon as the consumer dies another will be chosen.
See full details at http://activemq.apache.org/message-groups.html
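In other words, the fix is either to drop that header or to vary the group id so messages spread across consumers. A minimal NMS sketch (the session, producer and per-order id here are assumptions):

ITextMessage request = session.CreateTextMessage(text);
// request.Properties["NMSXGroupID"] = "cheese";        // same group for every message: everything goes to one consumer
request.Properties["NMSXGroupID"] = "order-" + orderId; // hypothetical per-group id spreads the load
producer.Send(request);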

Testing WCF with SoapUI

I need your help on one practical issue. I have created a WCF service with basic binding and two operation contracts.
1. void StartRegistration - An anonymous member can fill in the basic registration form and press submit. All the information is stored in the database and a link containing a random token is sent to the user's email address.
2. void CompleteRegistration - This method validates the token sent to the email address, and if the token is valid, the user account is activated.
Now I have an issue. Using SoapUI I can call the StartRegistration method and the email is sent to the destination, but I want to pass the token on to the CompleteRegistration method.
Since it is a WCF service, I cannot use dependency injection to make the SoapUI tests pass :).
Please help.
If I understand your question correctly, you have two WCF methods, one for creating a token and another for confirming it.
What I would do in this case is have the first method, StartRegistration, return the token. Then you could pass that token into the CompleteRegistration method quite easily in SoapUI.
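If you go that route, a small Groovy script step can pull the token out of the StartRegistration response and store it for the next request (a sketch; the step name and the Token element are assumptions about your WSDL):

// read the raw response of the StartRegistration test step
def response = context.expand('${StartRegistration#Response}')
// pull the token element out of the SOAP response
def token = new XmlSlurper().parseText(response)
        .depthFirst().find { it.name() == 'Token' }?.text()
// store it as a test case property for the CompleteRegistration request
testRunner.testCase.setPropertyValue('token', token ?: '')

The CompleteRegistration request can then reference it as ${#TestCase#token}.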
Another, quite messy, solution would be to have a Groovy script test step in SoapUI that actually connects to the mail account, reads the link and parses the contents.
Edited:
Here is part of the script you'll need. Place it in a Groovy step; it will then return the token from your mail.
Note: This code assumes that the mail is plain text, not multipart. It also assumes that the mailbox contains only a single mail. The API for JavaMail is pretty extensive, so if you want to do any magic with it, Google is your friend :) At least, this is somewhere to start.
import javax.mail.*;
import javax.mail.internet.*;

// setup connection
Properties props = new Properties();
def host = "pop3.live.com";
def username = "mymailadress@live.com";
def password = "myPassword";
def provider = "pop3s";

// Connect to the POP3 server
Session session = Session.getDefaultInstance props, null
Store store = session.getStore provider
Folder inbox = null
String content
try
{
    store.connect host, username, password

    // Open the folder
    inbox = store.getFolder 'INBOX'
    if (!inbox) {
        println 'No INBOX'
        System.exit 1
    }
    inbox.open(Folder.READ_ONLY)
    Message[] messages = inbox.getMessages()
    content = messages[0].getContent()
    // Do some parsing of the content here, to find your token.
    // Place the result in content
}
finally
{
    inbox.close false
    store.close()
}
return content; // return the parsed token