I'm new to Apache Ignite. Does Ignite support client authentication, such as SASL (Simple Authentication and Security Layer), for use with a memcached client?
If there is no SASL support, please point me to how client authentication can be achieved.
Can someone please assist me with this?
No, there is no support for memcached client authentication. If you want to secure the connection to Ignite, you should switch to the native Ignite API and implement the authentication mechanism in a plugin (see [1], for example).
There are also some vendors, like GridGain [2], that already provide such implementations in their paid offerings.
[1] http://smartkey.co.uk/development/securing-an-apache-ignite-cluster/
[2] https://gridgain.readme.io/docs/security-and-audit
My question is about MQTT support in RabbitMQ. After enabling the required plugins, RabbitMQ supports both MQTT and MQTT over WebSockets. For server-side connections, MQTT username/password authentication is fine, because it works under the hood and we are able to secure these credentials with enterprise-wide tools. However, when it comes to using the WebSockets support and creating connections from front-end JavaScript, we need to include the username/password in our front end. It would be as easy as opening the browser's developer console to get these credentials.
What is the best practice for securing these connections? What alternatives do we have here? Any help would be greatly appreciated.
Generate short-lived credentials for each session and fetch them via a REST request over HTTPS, combined with tight ACLs that only allow access to the topics the web app needs.
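To illustrate the idea, here is a minimal sketch of the credential-generation side in Java. Everything here is an assumption for illustration: the `web-` username prefix is hypothetical, and the comment only gestures at the follow-up steps (creating the user, e.g. via RabbitMQ's management HTTP API, attaching topic ACLs, and expiring the user) rather than implementing them.

```java
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Base64;

public class SessionCredentials {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Returns a random, URL-safe password for a single browser session.
    static String newSessionPassword() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        // Hypothetical per-session username; any unique scheme works.
        String user = "web-" + Instant.now().getEpochSecond();
        String pass = newSessionPassword();
        // The backend would now create this user in RabbitMQ (for example via
        // the management HTTP API), attach topic ACLs limited to the web
        // app's topics, schedule its deletion, and return the pair to the
        // browser over HTTPS.
        System.out.println(user + " / " + pass);
    }
}
```

The front end then connects over MQTT-over-WebSockets with these throwaway credentials, so nothing long-lived ever ships in the JavaScript.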
We need to protect our reserved instance of IBM API Connect in the cloud with our own company's WAF. We don't know whether this is possible and, if so, what the steps are, or whether it is only possible with a WAF from IBM's own cloud.
Thanks in advance.
For this answer, I'm going to assume you're asking primarily about the DataPower API Gateway.
You can either deploy your own gateway in an environment of your choosing (i.e. you're managing it) or leverage the one that IBM provides to you by default.
If you deploy your own, then you control the networking and adding your own WAF is relatively straightforward.
If you use an IBM-managed gateway, then a little creativity is required. You would likely need to set up a Mutual TLS contract between your WAF and the Gateway. You'd terminate the incoming TLS connection at the WAF (e.g. Cloudflare) and then re-encrypt the traffic from the WAF to the Gateway using the client certificate exchange. You'd potentially need to apply a Mutual TLS-enforcing profile to each deployed API on the Gateway. In this scenario, no client can call an API on your gateway without the proper TLS client key/certificate in hand.
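The WAF-to-Gateway leg of that setup boils down to presenting a client certificate on the outbound TLS connection. As a rough sketch (not tied to any particular WAF product), this is what building such a client-side context looks like with the JDK's SSL APIs; the keystore contents and passwords are placeholders you'd replace with your real client key/certificate:

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import java.security.KeyStore;

public class MtlsClientContext {
    // Builds an SSLContext that presents the client certificate held in
    // the given keystore, so the Gateway can verify the caller's identity.
    static SSLContext build(KeyStore keyStore, char[] keyPassword) throws Exception {
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyPassword);
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        // null trust managers / secure random fall back to the JDK defaults.
        ctx.init(kmf.getKeyManagers(), null, null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        // In practice: load your WAF's client cert + key from a .p12 file.
        ks.load(null, null);
        SSLContext ctx = build(ks, new char[0]);
        System.out.println(ctx.getProtocol());
    }
}
```

A managed WAF (Cloudflare and similar) exposes the same concept as an "origin client certificate" setting rather than code, but the handshake it performs is the one modeled above.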
You may want to open a support ticket for further/deeper assistance on this topic.
Is there built-in support for enabling SSL on Azure Container Instances? If not, can we hook up to SSL providers like Let's Encrypt?
There is nothing built-in today. You need to load the certs into the container and terminate SSL there. Soon, we will enable support for ACI containers to join an Azure virtual network, at which point you could front your containers with Azure Application Gateway and terminate SSL there.
As said above, there is no built-in SSL support today when using ACI. I'm using Azure Application Gateway to publish my container endpoint via an HTTP-to-HTTPS bridge. This way, App Gateway needs a regular HTTPS cert (and you can use whichever model works best for you, as long as you can introduce a .PFX file during provisioning or later during configuration), and it will then use HTTP to talk to your (internally facing) ACI-based container. This approach becomes more secure if you bind your ACI-based container to a VNET and restrict traffic from elsewhere.
To use SSL within the ACI container, you'd need to introduce your certificate while provisioning the container, and then somehow automate certificate expiration and renewal. As this is not supported in a reasonable way, I chose to use App Gateway instead. You could also use API Management, but that is obviously slightly more expensive and introduces a lot more moving parts.
I blogged about this configuration here and the repo with provisioning scripts is here.
You can add SSL support at the API Gateway and simply configure the underlying API over HTTP.
You will need the secret key to execute the above API method.
You can access the underlying API hosted on the Azure Container Instance. This method does not require a JWT token, as this is a demo API.
I am looking for ways to authorize each individual client request made through the rest proxy. Is there a mechanism to integrate the proxy with existing Kafka ACL's?
I have already configured HTTPS authentication with client certificates, so I have a unique client identity I can include with every request for authorization purposes. My preferred approach would be to introduce a custom servlet filter that integrates with the Kafka ACL system using something like SimpleAclAuthorizer. Unfortunately, the REST Proxy is not a standard web application but runs on embedded Jetty, so configuration is a bit more convoluted.
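Whatever the filter ends up looking like, its core step is mapping the client certificate's subject to a Kafka principal that an authorizer can check. A minimal sketch of that mapping, assuming the usual `User:<CN>` principal convention (the class and method names here are hypothetical, not part of the REST Proxy):

```java
import javax.security.auth.x500.X500Principal;

public class PrincipalMapper {
    // Extracts the CN from a certificate subject and formats it the way
    // Kafka authorizers expect principal names, e.g. "User:alice".
    static String toKafkaPrincipal(X500Principal subject) {
        // getName() returns the RFC 2253 form, e.g. "CN=alice,O=Example".
        for (String part : subject.getName().split(",")) {
            String p = part.trim();
            if (p.startsWith("CN=")) {
                return "User:" + p.substring(3);
            }
        }
        throw new IllegalArgumentException("No CN in " + subject.getName());
    }

    public static void main(String[] args) {
        X500Principal subject = new X500Principal("CN=alice, O=Example");
        System.out.println(toKafkaPrincipal(subject)); // prints User:alice
    }
}
```

In a servlet filter you would pull the `X509Certificate[]` chain from the `javax.servlet.request.X509Certificate` request attribute, feed the first certificate's subject through a mapping like this, and then consult the ACL authorizer before letting the request proceed.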
My question is: what is the least intrusive way to accomplish this?
Thank you in advance.
You can configure a single set of Kafka client credentials for the REST Proxy to use when connecting to Kafka, but today you cannot pass through the credentials of each HTTP(S) client separately. That feature is being worked on and will likely come out in a future release.
RBAC is now available in Confluent Kafka, but it is still in preview; here is the link.
Flink enthusiasts,
I'm consuming from a Kafka broker on a remote server that has SSL authentication enabled. How do I configure my FlinkKafkaConsumer to comply with SSL authentication? Which properties and values need to be set in the Properties object for the consumer? Any pointers to documentation or sample code are welcome.
Disclosure: this is not Kerberos SSL authentication.
Many thanks
Alex
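For reference, FlinkKafkaConsumer passes its Properties straight through to the underlying Kafka client, so the standard Kafka `security.protocol` and `ssl.*` client settings apply. A sketch, where the broker address, group id, store paths, and passwords are all placeholders:

```java
import java.util.Properties;

public class KafkaSslProps {
    // Assembles the SSL settings the Kafka client (and therefore
    // FlinkKafkaConsumer, which forwards them) expects.
    static Properties kafkaSslProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker.example.com:9093"); // placeholder
        props.setProperty("group.id", "my-flink-group");                   // placeholder
        props.setProperty("security.protocol", "SSL");
        // Truststore: lets the client verify the broker's certificate.
        props.setProperty("ssl.truststore.location", "/path/to/truststore.jks");
        props.setProperty("ssl.truststore.password", "changeit");
        // Keystore: only needed if the broker requires client (mutual) auth.
        props.setProperty("ssl.keystore.location", "/path/to/keystore.jks");
        props.setProperty("ssl.keystore.password", "changeit");
        props.setProperty("ssl.key.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        Properties props = kafkaSslProperties();
        // With the Flink Kafka connector on the classpath, these properties
        // would be used as:
        // new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        System.out.println(props.getProperty("security.protocol"));
    }
}
```

If the broker only does one-way SSL (server authentication), the three keystore-related properties can be dropped.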