Is there any open-source security plugin for ActiveMQ that can provide dynamically reconfigurable authentication and authorization (A&A) services based on a relational database (DB)? Basically, I have a large number of users and topics, which I cannot manage with a simple .xml file. Moreover, the access rights change continuously at runtime (the users themselves grant other users permission to subscribe to their topics), so I cannot manually intervene in the system to apply the new changes.
I'm no expert in ActiveMQ, but I'm one of the developers of the HiveMQ MQTT broker, which is also written in Java. We have an open-source Plugin SDK, which allows you to customize the authentication of clients and their authorization to publish/subscribe on the broker. You can use a relational database or any other kind of service that is accessible from within Java to determine whether a certain client is allowed to publish or subscribe to a topic. Clients can be restricted by topic, activity (publish/subscribe) and QoS.
More information on how it works can be found in the HiveMQ plugin developer's guide [1] [2].
Cheers,
Chris
[1] http://www.hivemq.com/docs/plugins/1.4.0/#auth-permission-chapter
[2] http://www.hivemq.com/docs/plugins/1.4.0/#client-authorization-chapter
Related
I need some advice about architectural design or best practice approaches.
I have a service that needs some credentials for third-party services.
My service is used by a web app, which currently keeps these credentials encrypted in a DB.
The web app and my service are going to communicate over a message queue (RabbitMQ).
How can I provide these credentials to my service from the web app? Or should I completely change the design, and if so, how?
Thanks in Advance
KR
Timur
This is a complicated area, and different people have different ideas about how to do this; the problem with your design is that an attacker who can sniff the traffic between your web app and your services can get access to your keys.
You also have tight coupling between your apps and your services, as well as all the entertainment of managing credentials across dev, QA and prod environments.
Many hosting strategies include a "key management server" for this purpose - AWS has https://aws.amazon.com/kms/, for instance. I'd suggest reading up on their use cases.
Another popular solution is to store the keys in environment variables, and manage them as part of your build/deploy pipelines.
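For instance, here is a minimal sketch of the environment-variable approach, assuming a .NET service and an illustrative variable name (THIRD_PARTY_API_KEY); your pipeline and variable names will differ:

using System;

class CredentialLookup
{
    static void Main()
    {
        // The deploy pipeline injects the secret; nothing is hard-coded or stored in the repo.
        var apiKey = Environment.GetEnvironmentVariable("THIRD_PARTY_API_KEY");
        if (string.IsNullOrEmpty(apiKey))
            throw new InvalidOperationException("THIRD_PARTY_API_KEY is not configured.");

        // Use apiKey when calling the third-party service.
        Console.WriteLine("Credential loaded (length: " + apiKey.Length + ")");
    }
}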
Finally, some frameworks (e.g. Ruby on Rails) store these details in a credentials file and have workflows for managing them outside the source control process.
I am experimenting with Mule API management these days. What I have come to know is that we can deploy our API to one of these:
A Mule Runtime
An API Gateway
In the documentation, it is said that we should go with option 1 when we want to separate the implementation of the API from the orchestration. What does that mean?
Can anyone please explain in detail?
Policy management from API Platform and analytics generation can be achieved only by using a correctly configured API Gateway, which is a superset of Mule EE (current version is API Gateway 2.1.0 which contains Mule EE 3.7.2).
Depending on your architecture you may have different solutions.
For example:
Proxy running on the API Gateway, implementation API running somewhere else (e.g. Mule EE/CE, Tomcat, a COBOL server, etc.)
Proxy and implementation API running on the same API Gateway
Implementation API managed directly from API Platform without using the autogenerated proxies.
HTH :-)
Not exactly sure what they mean there, because on this page: https://developer.mulesoft.com/docs/display/current/API+Gateway they also mention this:
Note that the API Gateway, because it acts as an orchestration layer for services and APIs implemented elsewhere, is technology-agnostic. You can proxy non-Mule services or APIs of any kind, as long as they expose HTTP/HTTPS, VM, Jetty, or APIkit Router endpoints. You can also proxy APIs that you design and build with API Designer and APIkit to the API Gateway to separate the orchestration from the implementation of those APIs.
So both methods technically allow you to separate the API from the orchestration, as your API Gateway application could simply proxy another Mule application elsewhere that performs the orchestration. But my understanding of the two options is:
The API Gateway is a limited offering that allows you to use a subset of Mule's connectors, transports and modules, such as APIkit and HTTP. It lets you expose an API and then use HTTP to connect to whatever backend systems you want, acting as a proxy and performing the orchestration in the API layer.
The Mule runtime option gives you much more flexibility: it allows you to compose as many applications as you want using the full range of connectors etc., and to separate the different aspects of your applications into as many layers as you want, as separately deployable entities that you can deploy to on-premises standalone instances, CloudHub, etc.
@Ryan's answer is more or less on the mark; however, if you do choose the Mule ESB offering you will lose out on the API management and governance functionality that the API Gateway provides OOTB.
These include:
Lets you enforce runtime policies and collect data for analytics
Applies policies to APIs or endpoints around security, throttling, rate limiting, and more
Extends PingFederate to serve as identity management and OAuth provider for your APIs
Lets you require or restrict certain behaviors in a few simple steps
Lets you add or remove policies at runtime with no API downtime
Manages access to your API by issuing contract keys
Monitors the API to confirm it is meeting all contract terms
Ensures compliance with service level agreements (SLAs)
In my opinion, go with API Gateway/Manager if your API will be consumed by third-party developers with whom you might not have too many interactions (think public APIs); otherwise Mule ESB should be good.
You should also be able to migrate from Mule ESB to API Manager (and vice versa) easily if you need to, so I do not think you will get locked into your decision.
PS: Content copied from here
Secure webservices in WCF
Background
We want to create a secure WCF service that does encryption/decryption of data. The nature of the data that will be encrypted and decrypted requires the highest level of security possible.
Consumers of this service will be applications within our network. They will be ASP.NET websites, other WCF services, console applications and possibly Java-based applications running on Linux.
Consumers will be running on local computer accounts that don't have any domain membership.
I have done a lot of reading about WCF security and understand the concepts to a large extent. I am looking for a reference architecture that has worked well for others with similar needs.
Question
What authentication method should I use, given that the new WCF service cannot depend on any database etc. to store credentials, and also cannot depend on consumers being members of a Windows domain? I should be able to identify the consumer correctly within the service, because the functionality will change slightly depending on who the consumer is.
What type of transfer security should I use: transport, message or mixed? Do any of these have performance considerations?
What else should I be thinking about?
Use client certificates for authentication. To identify a consumer, use message contracts with a custom header. Each client should put some unique value into the header.
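As a minimal sketch of that idea (the EncryptRequest contract and the ConsumerKey header name are made up for illustration):

using System.ServiceModel;

// Request whose body carries the payload and whose SOAP header identifies the caller.
[MessageContract]
public class EncryptRequest
{
    // Custom header: each consumer puts its own agreed value here so the service
    // can tell consumers apart and vary its behaviour accordingly.
    [MessageHeader(MustUnderstand = true)]
    public string ConsumerKey { get; set; }

    [MessageBodyMember]
    public byte[] Payload { get; set; }
}

The service then inspects ConsumerKey on each incoming request to decide which consumer it is talking to.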
I suggest using transport security in your case. There are two main drawbacks to message security: performance and, more importantly for you, interoperability, since you said you may need to support Java clients. You said you have already read a lot about WCF security, but just in case you missed it, here is a good article on transport and message security.
Pay attention to your service binding. I suggest using basicHttpBinding, taking into account the possible Java clients.
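A hedged sketch of what that combination might look like when configured in code (transport security over basicHttpBinding with client certificates; setting the same thing up in the .config file works equally well):

using System.ServiceModel;

static class BindingFactory
{
    // basicHttpBinding over HTTPS with client-certificate authentication.
    public static BasicHttpBinding CreateSecureBinding()
    {
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;
        return binding;
    }
}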
Hope it helps!
EDITED:
The header value should be a private one; only you and your client should know about it. It's like a password: if I know your Gmail password, it will not take long to find out your login as well.
If you don't think that is secure enough, you may skip the custom header and map each client to an IP or a set of IPs. For example, IP 12.32.456.10 corresponds to client A. Then you can store these mappings in a custom config file section, and you can encrypt this section so that even people who have access to your service files can't get the mappings.
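A rough sketch of that mapping, assuming for illustration that the mappings live under appSettings (a dedicated custom config section, optionally encrypted with the built-in protected configuration providers, would work the same way):

using System.Configuration;
using System.ServiceModel;
using System.ServiceModel.Channels;

public static class CallerIdentification
{
    // Resolve the calling client from its IP address using mappings kept in config,
    // e.g. <add key="12.32.456.10" value="ClientA" /> under <appSettings>.
    public static string ResolveClient()
    {
        var endpoint = (RemoteEndpointMessageProperty)
            OperationContext.Current.IncomingMessageProperties[RemoteEndpointMessageProperty.Name];
        return ConfigurationManager.AppSettings[endpoint.Address]; // null if the IP is unknown
    }
}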
Don't forget to mark the answer as helpful if it is ;)
I am going to run a web app on JBoss App Server 7. Does JBoss have some sort of inbuilt user management module/API which I can use rather than code my own? Or do I have to build this module myself? I know about the default JAAS pieces providing authentication AND authorisation; however, I am looking to manage (add, edit, delete) users in the datasource as well.
I'm not being lazy or anything, just want to know if JBoss has an easy inbuilt way before I start :)
Google implies no, so I want to make sure by asking here.
As far as I know, they don't provide any easy-to-manage identity provider; they "only" provide ways to connect to an identity provider using standard protocols like LDAP, SAML, WS-Trust and OpenID to provide container-managed authentication.
They have an IDM project, but it seems to provide standard-protocol SSO identity backed by some identity store and doesn't provide a way to manage the users.
PicketBox and PicketLink are the two JBoss projects you should look at for more information.
These elements can be used whether you want a global identity system, an existing one, a new product deployment or a custom build.
(Disclaimer: I have spent some time on the Picket* projects' documentation and I still don't think I have a good grasp of how it all works...)
There is a web interface and a command line interface for management operations. See the Management Clients section of the documentation.
The security realms could be what you're after. I'm not really a security expert though.
Maybe a security domain could be helpful too.
My employer is a software vendor for a specific market. Our customers integrate our system with others using web services. We use Microsoft technology, and our web services are implemented in ASP.NET and WCF.
The time has come to review our current set of services, and come up with company standards for future integrations. I am reading "Enterprise Integration Patterns," and I've also been looking a little bit at nServiceBus and Mass Transit. These may simplify issues like contract versioning and unit testing, but they seem to be most useful for providing an internal service bus, not for exposing services to external clients.
Our customers are on many different platforms, and require our services to be standards compliant. That may mean different things to different people, but I think it is safe to assume that they want to access web services described with WSDL.
In this scenario, is WCF the way to go?
WCF is by far the most standards-compliant stack on the Microsoft platform. The nice thing is that it's very flexible for different clients "out of the box", and if there are things that cause you grief, most of them can be changed via custom behaviors without too much trouble.
An alternative that I normally recommend is integration over AMQP between your message brokers. That way you can use the push paradigm instead of the polling one (which is very powerful and scalable in comparison)!
You'd set up your own broker, such as RabbitMQ, locally. Then you'd let your integration partner set one up. (Easy: just download it.)
If your partner is integrating from the same data center, you'd be safe to assume few network splits, meaning you could share the broker. On the other hand, if you are on different networks, you can set up the broker in federation mode. (Run rabbitmq-plugins enable rabbitmq_federation and point it at the other broker.)
Now you can use e.g. MassTransit:
var bus = ServiceBusFactory.New(sbc =>
{
    // Route messages via RabbitMQ exchanges and queues.
    sbc.UseRabbitMqRouting();
    // The queue this application consumes from.
    sbc.ReceiveFrom("rabbitmq://rabbitmq.mydomain.local/myvhost/myapplication");
    // sbc.Subscribe( s => s ... );  // register subscriptions here
});
That is, you use it just as you would when not doing any integration.
If you look at http://rabbitmq.mydomain.local:55672/ now you will find the administration interface for RabbitMQ. MassTransit creates an exchange for each message type (sending such a message to that exchange will fan out to all subscribers), which you can put authorization rules on.
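To make that concrete, here is a hedged sketch of filling in the commented-out Subscribe line and publishing across the integration, assuming MassTransit 2.x with the RabbitMQ transport and a made-up OrderPlaced message type:

using System;
using MassTransit;

// Hypothetical message contract shared between the integration partners.
public class OrderPlaced
{
    public string OrderId { get; set; }
}

class Program
{
    static void Main()
    {
        var bus = ServiceBusFactory.New(sbc =>
        {
            sbc.UseRabbitMqRouting();
            sbc.ReceiveFrom("rabbitmq://rabbitmq.mydomain.local/myvhost/myapplication");
            // Handle OrderPlaced messages arriving on this application's queue.
            sbc.Subscribe(s => s.Handler<OrderPlaced>(msg =>
                Console.WriteLine("Received order " + msg.OrderId)));
        });

        // Publishing goes to the OrderPlaced exchange, which fans out to every subscriber.
        bus.Publish(new OrderPlaced { OrderId = "42" });
    }
}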
Authorization rules can take the form of a regex per user, or authorization can be integrated with LDAP. Consult the documentation for this.
You'd also need SSL in the case that you're going over the WAN and you don't have an IPSec tunnel - that documentation is here: http://www.rabbitmq.com/ssl.html and you enable it like this.
That's it! Enjoy!
Post scriptum: if you are feeling up for an adventure that will help you manage all of your infrastructure as a side effect, you can have a look at Puppet. Puppet is a provisioner and configuration manager for servers; in this case you'd be interested in setting up SSL with Puppet. First, order a wildcard subdomain certificate for your domain, then use that cert to sign other certificates (you can delegate that; see the RabbitMQ guide where it states "Now we can generate the key and certificates that our test Certificate Authority will use", but generate a certificate signing request for your certificate instead of creating a new authority), and let RMQ use this for SSL; it will then be valid on the internet.