Is it possible to disallow topic subscriptions to wildcards in ActiveMQ 5 (classic)?
E.g. subscribing to > will deliver all messages to a consumer, even if that consumer should not be able to subscribe to all topics (and isn't even aware of all of them).
I already tried to create <authorizationEntries> with the <authorizationPlugin>, but wasn't able to prohibit wildcard subscriptions.
Do you have any ideas how to completely disable wildcard subscriptions, or message forwarding to wildcard subscribers, for specific or all users?
As hashed out in the comments, you are approaching this problem from the wrong direction. Rather than trying to disable wildcard subscriptions, the correct approach is to limit which topics will actually be delivered, regardless of what is subscribed to.
This is done by setting an appropriate ACL for each user (reusing the same credentials for 500k clients is a REALLY bad idea).
ActiveMQ uses a plugin-based system to supply authentication and authorisation control based on the Java standard JAAS. With JAAS you can plug nearly any storage mechanism (e.g. database, LDAP, ...) into ActiveMQ to store your user/password details and the ACL of what topics they can access. Details of how to use JAAS with ActiveMQ can be found in the ActiveMQ security documentation.
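To make that concrete, here is a minimal sketch of what such a per-group ACL could look like inside the <plugins> section of activemq.xml. The group names and topic namespaces (consumersA, sensors.siteA.> and so on) are made up for illustration, and the JAAS configuration name "activemq" is assumed to match an entry in your login.config:

<plugins>
    <!-- Authentication via JAAS; "activemq" must name an entry in login.config -->
    <jaasAuthenticationPlugin configuration="activemq"/>

    <!-- Authorization: each consumer group is only granted read on its own topic tree -->
    <authorizationPlugin>
        <map>
            <authorizationMap>
                <authorizationEntries>
                    <!-- hypothetical topic namespaces and groups -->
                    <authorizationEntry topic="sensors.siteA.>" read="consumersA" write="publishers" admin="admins"/>
                    <authorizationEntry topic="sensors.siteB.>" read="consumersB" write="publishers" admin="admins"/>
                    <!-- advisory topics are normally left open; "everyone" is assumed to be a group all users belong to -->
                    <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
                </authorizationEntries>
            </authorizationMap>
        </map>
    </authorizationPlugin>
</plugins>

Which messages a login can actually receive is then a question of which groups it belongs to, which is exactly the per-user ACL described above.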
Related
In Google Cloud Messaging, one of the options available on the push API key was to restrict senders to specific IP addresses (whitelist). This allowed you to dedicate a group of push servers, and prevent machines with other IPs from sending pushes to GCM.
Does Firebase Cloud Messaging have the same or a similar option? I'm not able to find it, and it seems to have gone down the memory hole. If it has the option, how would you configure it?
The Firebase Cloud Messaging REST API to send messages is open to all callers. The authorization options are described here. As far as I know there is no direct way to limit usage of the API to specific IP addresses.
What is the best way to secure a connection between an Elasticsearch cluster hosted on Elastic Cloud and a backend, given that we have hundreds of thousands of users and that I want to handle the authorization logic on the backend itself, not in Elasticsearch?
Is it better to create a "system" user in the native realm with all the read and write accesses (it looks like the user feature is intended for real end-users) or to use other types of authentication (but SAML, PKI or Kerberos are also end-user oriented)? Or should I use other security means, like IP-based restrictions?
I'm used to the Elasticsearch service on AWS, where authorization is based on IAM roles, so I'm a bit lost here.
edit: 18 months later, there's still no definitive answer on this. If I had to do it again, I would probably end up using JWT.
I'm new to ActiveMQ so please bear with me if my question seems dumb :D
I have installed ActiveMQ on a CentOS machine and I'm connecting to it for writing to the queue and consuming from the queue through the admin user (which I don't think is the ideal way). I'm wondering if I can create a user for read only to read (consume) from the queue and another user for write only, or just a single user who has read/write privileges only, so this user won't be able to delete the queue or do anything that it's not supposed to do.
I tried YouTube and checked out the ActiveMQ security documentation, which talks about the simple authentication plugin, and tried it, but I'm not sure if I'm doing the right thing or reading the right resource.
Thanks in advance!
ActiveMQ works with different login and authorization modules; by default it picks up the PropertiesLoginModule in the karaf realm. This is the admin user you are talking about. The /etc/users.properties file contains these users and groups.
For authorization you have plugins in activemq.xml which can provide fine-grained control over queues, topics, advisories and temporary queues.
The idea is to group users and provide them with read/write/admin access to queues. You can specify all the queues your application has one by one, or group them with wildcards (as per the AMQ documentation).
You can edit the users.properties file to add a few more users and tie these users into the authentication and authorization sections.
There are also LDAP and SSL modules available for authentication and authorization.
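As a rough, untested sketch of that setup on a standalone broker, the simple authentication plugin combined with an authorization map in activemq.xml could look like the following; the user names, passwords, groups and the ORDERS.> queue namespace are placeholders:

<plugins>
    <!-- simple authentication: users and their groups defined inline -->
    <simpleAuthenticationPlugin>
        <users>
            <authenticationUser username="reader" password="readerSecret" groups="readers"/>
            <authenticationUser username="writer" password="writerSecret" groups="writers"/>
            <authenticationUser username="admin"  password="adminSecret"  groups="admins"/>
        </users>
    </simpleAuthenticationPlugin>

    <!-- authorization: readers may only consume, writers may only produce -->
    <authorizationPlugin>
        <map>
            <authorizationMap>
                <authorizationEntries>
                    <authorizationEntry queue="ORDERS.>" read="readers" write="writers" admin="admins"/>
                    <!-- advisory topics stay open so ordinary clients keep working -->
                    <authorizationEntry topic="ActiveMQ.Advisory.>" read="readers,writers,admins" write="readers,writers,admins" admin="readers,writers,admins"/>
                </authorizationEntries>
            </authorizationMap>
        </map>
    </authorizationPlugin>
</plugins>

The consuming application would then connect as reader, the producing one as writer, and only members of admins could create queues under ORDERS.> (the admin permission governs destination creation).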
I have an LDAP server with POSIX users set up; they can log in using LDAP etc., but now I've come across a serious problem... I hope it's feasible to implement.
I need to implement a mechanism like this:
When someone logs in or tries to change their password, the consumer LDAP should usually redirect to the producer, but if the producer is offline then the consumer's user DB should be used, and there's no way to change the password at all. How do I make this work?
Maybe there are producer on/off switches or something similar?
I have a service hosted in a Worker Role in Azure. Clients connect over NetTcp bindings using certificates for mutual client/service authentication and with a custom username password validation.
Clients also receive event notifications that are broadcast through the Azure service bus using shared secret authentication.
I want this to be secure and not allow one person to share his/her login information with friends or anyone else - their login is for their use only. Similarly, a user that forgets to log off at one machine and then logs in to the service from another machine (i.e. tablet, work computer etc.) should trigger an automatic shutdown of the application that was not logged off from.
I am using a per-call service, and implementing a solution using sessions would require a lot of rewiring.
I figure I need to keep track of the users' context when they make an operation call and track which IPs are currently using that login/credential. I would like to be able to have some kind of "death touch" whereby the service can send a kill command to a client when multiple logins are detected.
Any suggestions or pointers to patterns that deal with this issue would be appreciated.
Thanks.
Even if you did go with PerSession, you would still need to determine whether the same user was in more than one session, and you would have the overhead of sessions.
I have only tested this over WSHttpBinding and not hosted in an Azure role, so please don't vote it down if it does not work on NetTcp in an Azure role; comment and I will delete it. Even with PerCall the SessionID is durable and is available on both the client and server. More than one user could have the same IP address, but the SessionID is unique to the session. Clearly you would need to record the userID and SessionID, but table storage is cheap.
Maybe update the license model for concurrent usage. By recording userID and SessionID you could write an algorithm to calculate maximum concurrent usage.