Is ssh port forwarding an acceptable way to communicate with internal API services?

If you're building a distributed architecture with various services, is it acceptable to have those services communicate via ssh port forwarding, so that to a client a service looks like it's being served on a local port?

The only person who can answer "is it acceptable" is you, or your client.
Is it wise? Probably not, because SSL with certificates at both ends will deliver the same capability with a much less troublesome intermediate layer, but that is an engineering decision you have to make.
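For concreteness, the kind of forwarding the question describes looks roughly like this; the hostnames and ports are placeholders, not taken from the question:

    # Forward local port 8080 through a bastion host to the internal API
    # (all names here are illustrative):
    ssh -N -L 8080:internal-api.example.internal:80 user@bastion.example.com

    # The client then talks to the service as if it were local:
    curl http://localhost:8080/health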

Related

Can I use kafka over Internet?

Is kafka suitable for Internet-use?
More precisely, what I want is to expose kafka topics as "public interface", then external consumers (or producers) can connect to it. Is it possible?
I hear there are problems if I want to use the cluster in both internal and external networks, because it is then hard to configure advertised.host.name. Is that true?
And do I have to expose zookeeper as well? I think the new consumer/producer APIs no longer need that.
Kafka's wire protocol is TCP-based and works fine over the public internet. In the latest versions of Kafka you can configure multiple interfaces for both internal and external traffic. Examples of Kafka over the internet in production include several Kafka-as-a-Service offerings from Heroku, IBM MessageHub, and Confluent Cloud.
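As a minimal sketch of that dual-listener setup, the broker-side configuration might look like the following; the listener names, hosts, and ports are assumptions for illustration:

    # server.properties (illustrative values)
    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
    inter.broker.listener.name=INTERNAL

Internal clients and inter-broker traffic use the INTERNAL listener, while internet-facing clients are given the EXTERNAL advertised address, which avoids the single advertised.host.name problem mentioned in the question.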
You do not need to expose zookeeper if the Kafka clients use the new consumer API.
You may also choose to expose a REST Proxy such as the open source Confluent REST Proxy as a more firewall-friendly interface for clients, since it runs over HTTP(S) and will not be blocked by most corporate or personal firewalls.
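As a rough illustration of that approach, producing a message through the Confluent REST Proxy is an ordinary HTTP call; the proxy address and topic name below are placeholders:

    curl -X POST http://rest-proxy.example.com:8082/topics/my-topic \
      -H "Content-Type: application/vnd.kafka.json.v2+json" \
      -d '{"records":[{"value":{"greeting":"hello"}}]}'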
I would personally not expose the Kafka server directly to clients via TCP, for these reasons (to name only a few):
If a misbehaving client opens too many connections, this may affect the stability of the Kafka platform and may affect other clients too.
Too many open files on the Kafka server; HW/SW settings and OS tuning are needed to limit uncontrolled clients.
If you need to add a Kafka server to increase scalability, you may need to go through a lot of low-level configuration (firewall, IP visibility, certificates, etc.) on both the client and server side. Other products address these problems using gateways or proxies: Coherence uses extend proxy clients, TIBCO EMS uses routed destinations, other software (many JMS servers) uses store-and-forward mechanisms, etc.
Maintenance of the Kafka nodes, if clients are attached directly to the Kafka servers, will also have to consider the needs of those clients and the SLA (service level agreement) that has been defined with them (e.g. 24/7/365).
If you also use Kafka as a back-end service, a multi-layered architecture should be taken into consideration: FE gateways and BE services, etc.
Other considerations require understanding what exactly you consider to be an external (over-the-internet) consumer/producer in your system. Is it a component of your own system that needs to access the Kafka servers? Is it internal or external to your organization, etc.?
...
Naturally all these considerations can also be correctly addressed using a direct TCP connection to the Kafka servers, but I would personally use a different solution:
HTTP proxies
Or at least I would use a dedicated FE Kafka server (or a couple of servers for HA) for each client, forwarding the messages to the main Kafka cluster.
It is possible to expose Kafka over the internet (in fact, that's how managed Kafka providers such as Aiven and Instaclustr make their money) but you have to ensure that it is adequately secured. At minimum:
ZooKeeper nodes should reside in a private subnet and not be routable from outside. ZK's security is inadequate and, at any rate, it is no longer required to bootstrap Kafka clients with ZK address(es).
Limit access to the brokers at the network level. If all your clients connect from a trusted network, then set appropriate firewall rules. If in AWS, use VPC peering or Direct Connect if you are connecting cloud-to-cloud or cloud-to-ground. If most of your clients are on a trusted network but a relative minority are not, force the latter to go via a VPN tunnel. Finally, if you want to allow connectivity from arbitrary locations, you'll just have to allow * on port 9092 (or whichever port you configure the brokers to listen on); just make sure that the other ports are closed.
Enable TLS (SSL) for client-broker connections. This is easily configured with a self-signed CA. Depending on how you expose your listeners, you may need to disable SSL hostname verification on the client. (The certificate chain of trust breaks if the advertised host names don't match the certificate's common name.) The clients will need the CA certificate installed. (Same CA that signed the brokers' certs.)
Optionally, you may enable mutual TLS authentication; however, this is logistically more taxing, as it requires each client to have its own private key that is signed by a CA trusted by the broker.
Use SASL to authenticate the client to the broker, and create individual users for each application and each person that is expected to access the cluster. (A client-side configuration sketch combining TLS and SASL follows below.)
Issue minimally-sufficient cluster- and topic-level access privileges in the ACLs for each user, following the Principle of Least Privilege (PoLP).
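A client-side sketch of the TLS and SASL settings described above; the file paths, credentials, and the choice of SCRAM as the SASL mechanism are assumptions for illustration:

    # client.properties (illustrative values)
    bootstrap.servers=broker1.example.com:9092
    security.protocol=SASL_SSL
    # Truststore containing the CA that signed the brokers' certificates
    ssl.truststore.location=/etc/kafka/client.truststore.jks
    ssl.truststore.password=changeit
    # Disable hostname verification only if the advertised names don't match the certs:
    # ssl.endpoint.identification.algorithm=
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
      username="app1" password="app1-secret";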
One other thing to bear in mind: Not all tooling supports SASL/SSL connectivity and some tools actually require a connection to ZooKeeper nodes (which will not be reachable in the above setup). Make sure any tooling you rely on uses the 'new' style of connectivity directly to the Kafka brokers and does not require a Zookeeper connection.
Beyond configuring client TLS, the brokers have to have public IPs, which we try to avoid. Normally for other services we hide everything behind load balancers. Would this be possible with Kafka?
I'm not sure the Confluent REST Proxy hosted on a public server is a real option when you need the high-performance batching of the Java producer client.

Is there a real need to adopt an SSL transport layer in a microservice architecture for internal, LAN-only service-to-service communication?

In a scenario with thousands of web services, are there reasons to also use a signed cert for each microservice, or is it just going to add overhead? Services communicate within a VPC sitting behind a firewall, while public endpoints are behind a public-facing nginx with a valid CA cert.
Services run on multiple servers on AWS.
From my limited experience, I believe that it is overkill. If an attacker has access to listen in or interact with your internal network then there are most likely other issues which you should be contending with.
This article on auth0.com explains the use of SSL only on connections to the external client. I also share this view and believe implementing SSL at an individual service level would get extremely difficult unless you were running some form of proxy such as HAProxy or Nginx on each individual host, which is sub-optimal, especially if you're using a form of managed cluster like Kubernetes or Docker Swarm.
My current thought is that it's fine to run SSL just for your edge services; ensure you lock down your AWS network using something like Scout2, and run unencrypted for inter-service communication on your LAN.
Unless all intranets in the cloud are fully VLAN-configured and isolated, is it possible for other hosts that you don't own on the same LAN to steal your password by running a simple tcpdump? If that's the case, we need SSL or other encryption internally in the cloud too.

Publicly exposing a WCF service which is behind a firewall

Environment:
Consider the following production environment setup for a web application:
End user --Internet--> web server in DMZ --Firewall--> WCF hosted on app server --> DB Server
Constraint:
Also consider that we cannot change anything from the infrastructure point of view; for example, we cannot open ports, change any firewall settings, etc.
Problem:
We want to expose the WCF, which is hosted on the app server, to external clients. We are trying to solve this as follows:
End user --Internet--> Router WCF in DMZ --Firewall--> WCF hosted on app server --> DB Server
Please note that we cannot establish a db connection from the DMZ environment where the WCF needs to be hosted so that the external clients can consume it. We have developed a "Router WCF" which passes through all messages to the internal WCF and vice-versa.
This solution adds the unnecessary overhead of serializing and de-serializing data. There must be a better and more proper way of doing this. We are looking to the community for guidance. Thank you.
For a DMZ, the literature tells you: always create an intermediate layer. This means another machine exposed to the internet will be the point of connection, and it will proxy the connection back to the WCF service.
That machine is the web server you seem to mention: it is dumb, holds no data, and (to be a proper DMZ) has a firewall between it and all the machines it serves (the WCF host and the others) that permits only the IP:port combinations actually used on those machines.
In this scenario, Apache on the public web server with a URL-rewrite rule (e.g. if it is /x/y, send it to servera.internal.com:9900; if it is /x/z, send it to serverb.internal.com:9901, etc.) is usually enough, but there are plenty of solutions of course.
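One way to express that kind of rule is with Apache's mod_proxy; a minimal sketch reusing the hostnames from the example above (the exact directives and modules will depend on your setup):

    # On the DMZ web server (requires mod_proxy and mod_proxy_http)
    ProxyPass        /x/y http://servera.internal.com:9900/x/y
    ProxyPassReverse /x/y http://servera.internal.com:9900/x/y
    ProxyPass        /x/z http://serverb.internal.com:9901/x/z
    ProxyPassReverse /x/z http://serverb.internal.com:9901/x/z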
It seems you are doing exactly this, why do you say it is not the proper solution?
DMZs can seem a bit dated as a protection mechanism (I agree), but think back to when servers like your WCF machine had dozens of ports open and you wanted to reduce the giant attack surface of random ports on web-facing machines. Nowadays everything can work with a couple of open ports, so it can seem "silly" to do all of this just to forward a TCP port. But it is still valuable: for example, if the servers behind the DMZ web server have no internet access, then even when the WCF host is compromised, the attacker cannot use a reverse shell to deploy what is nowadays called an APT (yesterday's backdoor). The attacker "won't see" his own machine from the WCF host, since only the DMZ provides the connection to the external world.

Using a TCP Tunnel for Duplex WCF connections through Proxies in enterprise scenarios

We're using a duplex contract for one feature in our yet to be released enterprise level LOB application that utilizes a thick client built with WPF and a server built with WCF.
During development so far we've been using the net.tcp binding for best performance. Now that deployment is coming up and issues such as internet access through a web proxy come to light, net.tcp isn't suitable anymore.
I've started using wsHttp and wsDualHttp but have realised in the meantime that duplex connections through a web proxy (and with NAT traversal) isn't really possible.
Now I'm thinking: why can't I set up a TCP tunnel (using proprietary software that supports web proxies, using HTTP CONNECT) and get the best of both worlds: proxy support, speed and security?
It would seem this is a common requirement.
Your options are not expansive. Microsoft's Service Bus is probably your best bet, if it works for your needs. The other options are:
VPN: Pretty self-explanatory.
SSH: SSH has tunneling functionality built-in. If you only have a small number of connections, you might be able to use an off-the-shelf SSH client and server, but with a larger number of connections it's hard to ensure that they all stay connected reliably. Several companies make SSH components you might be able to build upon (they didn't work for our needs), but they are more oriented toward the remote-execution use of SSH than tunneling.
A DIY TCP tunnel, which is a big job. Not impossible, but a big job, and it will require a tremendous amount of testing to make sure you've got it right. (The HTTP CONNECT handshake such a tunnel would build on is sketched below.)
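For orientation only, here is a minimal Java sketch of the HTTP CONNECT handshake that a DIY tunnel (or an off-the-shelf one) drives against a web proxy; the proxy and target addresses are placeholders, and a real implementation would also need proxy authentication, error handling, and careful stream management:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Minimal illustration of the HTTP CONNECT handshake that a TCP tunnel
    // through a web proxy is built on. Proxy and target addresses are placeholders.
    public class ConnectTunnelSketch {
        public static void main(String[] args) throws Exception {
            String proxyHost = "proxy.corp.example";        // hypothetical web proxy
            int proxyPort = 8080;
            String target = "appserver.example.com:808";    // hypothetical net.tcp endpoint

            try (Socket socket = new Socket(proxyHost, proxyPort)) {
                OutputStream out = socket.getOutputStream();
                // Ask the proxy to open a raw TCP tunnel to the target host:port.
                String request = "CONNECT " + target + " HTTP/1.1\r\n"
                        + "Host: " + target + "\r\n"
                        + "\r\n";
                out.write(request.getBytes(StandardCharsets.US_ASCII));
                out.flush();

                // Read the proxy's status line; a "200" response means the tunnel is up.
                // (A production implementation would parse headers byte by byte so it
                // does not buffer past the end of the response and swallow tunnel data.)
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
                String statusLine = in.readLine();
                System.out.println("Proxy replied: " + statusLine);
                String line;
                while ((line = in.readLine()) != null && !line.isEmpty()) {
                    // skip the remaining response headers up to the blank line
                }

                // From this point on, bytes written to the socket flow straight to the
                // target, so it can carry an arbitrary protocol (e.g. net.tcp framing).
            }
        }
    }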
As far as running WCF over a tunneled connection goes, if you go that route you won't have any problems. All the bindings and features work: callbacks, reliability, message security, transport security, transactions all work just fine.

TCP connection and firewalls

"On the Internet, if you need fast, secure server-to-server communication, and you can specify which firewall ports are open, NetTcpBinding can prove very valuable."
a) Is the text implying that with some other connection protocols, such as HTTP, we don't need to check for open firewall ports?
b) Why would NetTcpBinding only be useful for server-to-server communications, but not for client-to-server communications?
thank you
a) When you deploy to an enterprise, you usually don't have control over their external firewall. The setup of their firewall is made by network administrators following an enterprise wide policy. External firewalls almost always allow outgoing HTTP requests (otherwise you couldn't browse the web). Some corporate firewalls block outgoing TCP requests, which means you cannot use netTcpBinding. As an example, see this question for somebody trying to deal with that issue. If you KNOW that the firewall of every one of your customers will allow outgoing TCP, then NetTcpBinding is an appropriate choice.
b) Who says NetTcpBinding is only useful for server-to-server communications? You can deploy a client-server application into an enterprise, and providing the clients and the servers are all within the intranet, then NetTcpBinding is an appropriate choice of binding.
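For reference, exposing a service endpoint over netTcpBinding is a small configuration change; a minimal sketch, where the service name, contract, and address are placeholders:

    <!-- app.config / web.config sketch; names and addresses are illustrative -->
    <system.serviceModel>
      <services>
        <service name="MyCompany.OrderService">
          <endpoint address="net.tcp://appserver.example.com:808/OrderService"
                    binding="netTcpBinding"
                    contract="MyCompany.IOrderService" />
        </service>
      </services>
    </system.serviceModel>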