Apache Ignite: Client and Server Authentication

I want to secure the communication between the client and server nodes of Ignite instances.
How can I achieve this? As far as I can tell, there is no out-of-the-box implementation for it.
Please guide!

I would actually recommend turning SSL on and issuing trusted certificates for the client and server nodes. This way, nobody can eavesdrop on your nodes' communication without the private key.
This is supported out of the box by Apache Ignite, without any plugins.
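For reference, a minimal sketch of what this could look like in code, assuming JKS key and trust stores already populated with the trusted certificates (all paths and passwords below are placeholders):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class SecureNodeStartup {
    public static void main(String[] args) {
        // Key store holds this node's certificate; trust store holds the CA
        // that signed the other nodes' certificates.
        SslContextFactory sslFactory = new SslContextFactory();
        sslFactory.setKeyStoreFilePath("/opt/ignite/config/node.jks");
        sslFactory.setKeyStorePassword("changeit".toCharArray());
        sslFactory.setTrustStoreFilePath("/opt/ignite/config/trust.jks");
        sslFactory.setTrustStorePassword("changeit".toCharArray());

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setSslContextFactory(sslFactory); // encrypts all node-to-node communication

        Ignition.start(cfg);
    }
}
```

The same SslContextFactory settings can equally be expressed in the Spring XML configuration if you start nodes that way.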

Related

Can I use Kafka over the Internet?

Is Kafka suitable for use over the Internet?
More precisely, what I want is to expose Kafka topics as a "public interface" so that external consumers (or producers) can connect to them. Is that possible?
I hear there are problems if I want to use the cluster on both internal and external networks, because it is then hard to configure advertised.host.name. Is that true?
And do I have to expose ZooKeeper as well? I think the new consumer/producer API no longer needs it.
Kafka's wire protocol is TCP-based and works fine over the public internet. In the latest versions of Kafka you can configure multiple interfaces for both internal and external traffic. Examples of Kafka over the internet in production include several Kafka-as-a-Service offerings from Heroku, IBM MessageHub, and Confluent Cloud.
You do not need to expose zookeeper if the Kafka clients use the new consumer API.
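To illustrate the multiple-interface point, here is a hedged sketch of the relevant broker settings in recent Kafka versions (listener names, hostnames, and ports are placeholders):

```
# server.properties -- listener names, hostnames and ports are placeholders
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
```

Internal clients and inter-broker traffic use the INTERNAL listener, while external clients are given the public EXTERNAL address, which sidesteps the old single advertised.host.name limitation.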
You may also choose to expose a REST proxy, such as the open-source Confluent REST Proxy, as a more firewall-friendly interface for clients, since it runs over HTTP(S) and will not be blocked by most corporate or personal firewalls.
I would personally not expose the Kafka server directly to clients via TCP, for these reasons, to name only a few:
If a badly behaved client opens too many connections, this may affect the stability of the Kafka platform and may affect other clients too
Too many open files on the Kafka server; HW/SW settings and OS tuning are needed to keep uncontrolled clients in check
If you need to add a Kafka server to increase scalability, you may need to go through a lot of low-level configuration (firewall, IP visibility, certificates, etc.) on both the client and server side. Other products address these problems using gateways or proxies: Coherence uses Extend proxy clients, TIBCO EMS uses routed destinations, other software (many JMS servers) uses store-and-forward mechanisms, etc.
Maintenance of the Kafka nodes, when clients are attached directly to the Kafka servers, will also have to consider the needs of those clients and the SLA (service level agreement) that has been defined with them (e.g. 24x7x365)
If you also use Kafka as a back-end service, a multi-layered architecture should be taken into consideration: FE gateways and BE services, etc.
Other considerations require understanding what exactly you consider to be an external (over-the-Internet) consumer/producer in your system. Is it a component of your system that needs to access the Kafka servers? Is it internal or external to your organization, etc.?
...
Naturally, all these considerations can also be addressed correctly using a direct TCP connection to the Kafka servers, but I would personally use a different solution:
HTTP proxies
Or, at the very least, a dedicated FE Kafka server (or a couple of servers for HA) for each client, which forwards the messages to the main Kafka group of servers
It is possible to expose Kafka over the internet (in fact, that's how managed Kafka providers such as Aiven and Instaclustr make their money) but you have to ensure that it is adequately secured. At minimum:
ZooKeeper nodes should reside in a private subnet and not be routable from outside. ZK's security is inadequate and, at any rate, it is no longer required to bootstrap Kafka clients with ZK address(es).
Limit access to the brokers at the network level. If all your clients connect from a trusted network, then set appropriate firewall rules. If in AWS, use VPC peering or Direct Connect if you are connecting cloud-to-cloud or cloud-to-ground. If most of your clients are on a trusted network but a relative minority are not, force the latter to go via a VPN tunnel. Finally, if you want to allow connectivity from arbitrary locations, you'll just have to allow * on port 9092 (or whichever port you configure the brokers to listen on); just make sure that the other ports are closed.
Enable TLS (SSL) for client-broker connections; a hedged client configuration sketch covering this and the SASL step follows this list. This is easily configured with a self-signed CA. Depending on how you expose your listeners, you may need to disable SSL hostname verification on the client. (The certificate chain of trust breaks if the advertised host names don't match the certificate's common name.) The clients will need the CA certificate installed. (The same CA that signed the brokers' certs.)
Optionally, you may enable mutual TLS authentication; however, this is logistically more taxing, as it requires each client to have its own private key that is signed by a CA trusted by the broker.
Use SASL to authenticate the client to the broker and create individual users for each application and each person that is expected to access the cluster.
Issue minimally-sufficient cluster- and topic-level access privileges in the ACLs for each user, following the Principle of Least Privilege (PoLP).
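As promised above, here is a minimal sketch of a client configured for TLS plus SASL authentication. The broker address, store paths, and credentials are placeholders, and SASL/PLAIN is just one of several available mechanisms:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SecureClientExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9092");
        // TLS for encryption plus SASL for authentication
        props.put("security.protocol", "SASL_SSL");
        // Trust store containing the CA that signed the brokers' certs
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Only if the advertised host names don't match the broker certificate:
        props.put("ssl.endpoint.identification.algorithm", "");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"app1\" password=\"app1-secret\";");

        KafkaProducer<String, String> producer =
                new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
        producer.close();
    }
}
```

The consumer side takes the same security properties; only the per-user ACLs on the broker decide what each principal may actually do.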
One other thing to bear in mind: not all tooling supports SASL/SSL connectivity, and some tools actually require a connection to the ZooKeeper nodes (which will not be reachable in the above setup). Make sure any tooling you rely on uses the 'new' style of connectivity directly to the Kafka brokers and does not require a ZooKeeper connection.
Beyond configuring client TLS, the brokers have to have public IPs, which we try to avoid. Normally, for other services, we hide everything behind load balancers. Would this be possible with Kafka?
I'm not sure the Confluent REST Proxy hosted on a public server is a real option when you need the high-performance batching of the Java producer client.

Is WebLogic Node Manager SSL setup required while implementing SSL for an application?

In WebLogic, I have more than one machine created using Node Manager. We have been told to set up SSL for our application, which is deployed across the created machines under a single WebLogic Admin Console.
So for the application we configured a certificate using a .jks file and enabled an SSL listen port.
However, we have also been told to secure the Node Manager machines across which the application is deployed. When I change the Node Manager type from Plain to SSL, I get an SSLException. As far as I know, we do not need to secure the machines created using Node Manager; securing the application should be sufficient. Am I right, or is it required to secure Machines -> Node Manager as well?
When I turn on SSL in Machines -> Node Manager, what do I have to consider to avoid the SSLException? Is a WebLogic restart required after configuring this? For now I do not have UNIX access, so I couldn't do that at the moment.
Please advise on this situation. Without securing Machines -> Node Manager I am able to run the application, but I am not able to access it using https; only http is working for the application.
SSL for Node Manager is optional, as no application-related sensitive data flows through this layer.
You mention that even after configuring the JKS you can't get the server, and hence the application, listening on https. Could you elaborate on the steps you followed? Note that this has nothing to do with Node Manager.
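As a first debugging step, it may help to confirm that the identity keystore actually contains the expected private key entry before blaming the listener configuration. The path, alias, and password below are placeholders:

```
keytool -list -v -keystore identity.jks -storepass changeit -alias server
```

The output should show an entry of type PrivateKeyEntry whose certificate chain matches the host names the SSL listen port is served on.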

Is there a real need to adopt an SSL transport layer in a microservice architecture for internal, LAN-only service-to-service communication?

In a scenario with thousands of web services, are there reasons to also use a signed cert for each microservice, or is it just going to add overhead? Services communicate via a VPC sitting behind a firewall, while public endpoints are behind a public-facing nginx with a valid CA cert.
The services run on multiple servers on AWS.
From my limited experience, I believe that it is overkill. If an attacker has access to listen in on or interact with your internal network, then there are most likely other issues you should be contending with.
This article on auth0.com explains the use of SSL only on connections to the external client. I share this view and believe that implementing SSL at the individual-service level would get extremely difficult unless you were running some form of proxy, such as HAProxy or Nginx, on each individual host, which is sub-optimal, especially if you're using a form of managed cluster like Kubernetes or Docker Swarm.
My current thinking is that it's fine to run SSL just for your edge services, ensure you lock down your AWS network using something like Scout2, and run inter-service communication on your LAN unencrypted.
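For illustration, the edge-only setup boils down to something like the following nginx sketch, where TLS terminates at the public-facing proxy and the upstream hop stays plain HTTP (the server name, certificate paths, and upstream address are placeholders):

```
# TLS terminates at the edge; traffic to internal services stays plain HTTP.
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://10.0.1.10:8080;  # internal service, no TLS
    }
}
```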
Unless all intranets in the cloud are fully VLAN-configured and isolated, is it possible for other hosts on the same LAN that you don't own to steal your password by running a simple tcpdump? If that's the case, we need SSL or other encryption internally in the cloud too.

RavenDB connections over HTTPS

We are setting up replication between RavenDB instances running in server mode. The instances are in different availability zones, so we need a secure connection between the servers. According to this post, SSL is not supported in server mode but
should be easy to add
Is there an extensibility point in the API where SSL support can be plugged in?
The API doesn't currently have a place for this, but I'm sure it would be a welcome contribution if you were inclined to write it and submit a pull request. The underlying server is just a System.Net.HttpListener, which can be wired up for SSL.
Your entry point would be at Raven.Database.Server.HttpServer.StartListening()
You would want the SSL certificate to be as easy to configure as the hostname or port. The cert itself should probably be pulled in from the Windows certificate store.
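Worth noting in that regard: HttpListener does not take a certificate in code at all. On Windows, the certificate is bound to the listening port at the OS level, roughly along these lines (the port, thumbprint, and appid below are placeholders):

```
rem Bind an SSL certificate from the Windows certificate store to the
rem port the HttpListener will use. Thumbprint and appid are placeholders.
netsh http add sslcert ipport=0.0.0.0:8443 certhash=0123456789abcdef0123456789abcdef01234567 appid={12345678-90ab-cdef-1234-567890abcdef}
```

After that binding exists, the listener only needs an https:// prefix, which fits the suggestion above of pulling the cert from the Windows certificate store and making only the hostname and port configurable.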

Connect to a third-party two-way HTTPS WS from Glassfish behind an SSL termination point

Context
I developed an application deployed on Glassfish 3.1. This application is accessed only over https, and sometimes it must connect to third-party web services located outside the customers' networks. The customer has other applications inside his network; mine is only a new "service".
Topology approximation
A Big-IP F5 is the SSL endpoint. The customer keeps the valid certificate on this device
IIS redirects by domain to the respective service
glassfish is the machine with the application (running, of course, Glassfish 3.1)
How it works
When a user tries to connect to _https://somedomain, the request arrives at the F5, where the SSL encryption ends; now we have a request to _http://somedomain. In the next step, the F5 redirects this request to IIS, which finally redirects it to Glassfish. These requests are processed successfully.
Points of interest
I have full control over the Glassfish server and the OS of the VM where it is located. No other applications are or will be deployed on this server; it's a dedicated server for the app and some services it needs. Glassfish runs on a VM with a Debian distribution as the OS. This VM is provided by a VM server, but I don't know the brand, model, etc. Glassfish has the default HTTP listener configuration.
I don't have any more information about the network and the other devices, and I can't access the configuration files of any other device. I can't modify any part of the network myself, but I can ask for, suggest, or advise a change. The network's behavior should not change.
Currently, users reach the application without problems.
The certificate used is a simple domain certificate trusted by Verisign.
The customer has no idea how to solve this.
The problem
All the third-party web services the application must access are reachable only over https and, in some cases, require mutual (two-way) authentication, and here we find the problem. When the application connects to a WS with mutual SSL authentication, it sends the certificate configured in the local Glassfish keystore. The customer wants, if possible, to use the same cert for incoming and outgoing secure connections. This cert is on the F5, and I can't add it to the Glassfish keystore, because doing so would break the Verisign contract requirements. I've been searching for a solution on Google, here (Stack Overflow), Jira, ... but I've only found solutions for incoming traffic. I understand that an SSL proxy may be required, but I haven't found any example or alternative solution for outgoing SSL connections.
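For context, this is standard JVM behavior: outbound mutual TLS presents whatever client certificate the JVM's configured keystore holds, e.g. via the JSSE system properties. A minimal sketch (paths, passwords, and the URL are placeholders):

```java
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class OutboundMutualTls {
    public static void main(String[] args) throws Exception {
        // The JVM presents the client certificate found in this keystore
        // during the TLS handshake with the third-party web service.
        System.setProperty("javax.net.ssl.keyStore", "/opt/app/client-identity.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
        System.setProperty("javax.net.ssl.trustStore", "/opt/app/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        URL url = new URL("https://ws.example.com/service");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```

Since the required cert lives only on the F5, the keystore-based approach above is exactly what is not available here, which is why a proxy-style solution is needed.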
What I'm asking for
I'm not a native English speaker (isn't it obvious?), and maybe I'm not using the correct search terms. I understand that this context can be a nightmare and hard to solve, but I will persist... The first thing I need to know is whether a solution (or solutions) to this problem exists, and if so, where or how I can find it. I've prepared different alternatives to negotiate with the customer, but I need to know the truth. I've spent tons of hours on this.
There are a couple of solutions.
1) Pay Verisign more money for a second "license/cert". They will be happy to take your money for the "privilege". :)
2) Create a different virtual server listening on 443 that points to a pool whose member is your client's server address. Then, on the virtual server, attach a server-SSL profile configured to use the same cert you are using for the incoming connections. The F5 would then authenticate with the same cert, and your app server would not need a client cert installed. Also, if they need to initiate a session to you, you would have to set up a virtual server with a client-SSL profile that uses the same cert and requires a client cert to connect.
If your destinations may not be static addresses, then an iRule (or iRules) would have to be created to deal with that. This can be handled in version 10 or later code with a DNS call in the iRule, setting a node for the session to go to.
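A rough sketch of option 2 in tmsh terms, assuming a recent TMOS version (all object names, certificate/key names, and addresses below are placeholders):

```
# All object names, certificate/key names and addresses are placeholders.
# Pool pointing at the third-party web service.
tmsh create ltm pool third_party_ws_pool members add { 203.0.113.10:443 }
# Server-SSL profile re-encrypting outbound traffic with the customer's existing cert.
tmsh create ltm profile server-ssl outbound_serverssl cert customer.crt key customer.key
# Internal virtual server the Glassfish box talks to; the F5 does the mutual TLS.
tmsh create ltm virtual outbound_ws_vs destination 10.0.0.50:443 pool third_party_ws_pool profiles add { outbound_serverssl }
```

The Glassfish application then targets the internal virtual server address instead of the third-party endpoint, and the F5 presents the customer's cert on its behalf.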