Centralized solution to manage iptables rules on multiple machines

Is there any open-source solution to manage iptables rules on multiple machines across a network from one single centralized management point?

These seem nice:
Puppet iptables module:
https://github.com/camptocamp/puppet-iptables
Firewall Builder:
http://www.fwbuilder.org/
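For a sense of what declarative, centrally managed rules look like, here is the declaration style of the widely used puppetlabs-firewall module; this is a sketch, not necessarily the camptocamp module's exact API, and the rule values are assumptions:

firewall { '100 allow ssh from admin network':
  proto  => 'tcp',
  dport  => 22,
  source => '10.0.0.0/24',
  action => 'accept',
}

Puppet then enforces the same rule set on every agent node, which is the single-point-of-management property asked about.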

Related

WireGuard with dynamic setup for IoT

At the moment we have multiple Raspberry Pis placed at different locations on different networks.
Our current solution for reaching them if something goes wrong is autossh with a jump host.
Recently I stumbled on WireGuard, which could be a leaner way to solve the calling-home problem.
The problem is that we would like the setup phase to be more dynamic: we don't want to do special configuration per node we have out there, we just want them to call home with a key and then be a part of the network.
Two questions:
Is WireGuard for us, or are there other problems that I can't foresee here?
Is there a way to set it up dynamically with one key and let the clients get random IPs?
WireGuard always needs a unique keypair per host, so it is not what you are looking for.
If you just want a phone-home option with IP connectivity, I would suggest an OpenVPN server and client. If you use a username/password config (not using certificates), you can reuse the config on multiple clients. OpenVPN will act as a DHCP server.
A how-to:
https://openvpn.net/community-resources/how-to/
Search for:
client-cert-not-required
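A minimal sketch of the server side of that setup; the file paths and the auth script are assumptions, and note that OpenVPN 2.4+ replaces client-cert-not-required with verify-client-cert none:

# /etc/openvpn/server.conf (sketch)
port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh.pem
server 10.8.0.0 255.255.255.0         # hands out client addresses, DHCP-style
verify-client-cert none               # successor to client-cert-not-required
auth-user-pass-verify /etc/openvpn/checkpass.sh via-env
script-security 3                     # required for via-env auth scripts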
The option that Maxim Sagaydachny suggests is also valid for command access; an alternative to Salt could be Puppet with mco/Bolt.
Whichever option you choose, be sure that the daemon restarts when it crashes, reboots, fails...
For systemd services this would be an override with:
[Service]
Restart=always
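A hedged example of putting that override in place (the unit name openvpn-client@site1 is an assumption; substitute your actual service):

# writes /etc/systemd/system/openvpn-client@site1.service.d/override.conf
sudo systemctl edit openvpn-client@site1
# then add:
[Service]
Restart=always
RestartSec=10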

How should I deploy Traefik in my environment?

I have a set of applications that we're currently transitioning into a more "cloud-native" architecture. We're beginning by moving them into containers (Docker on Windows), and as part of this, we're hoping to use a load-balancing proxy to handle traffic to the containers.
Having looked at several options, we're hoping to use Traefik as a load-balancing proxy in this iteration of our architecture. It may or may not be important to note that all traffic through Traefik in this setup will be internal; it will not be serving any external traffic. I am also working in a self-hosted situation; because of contractual concerns, cloud providers such as AWS and Azure are not currently available to me.
My question, now, is how Traefik might best be deployed. I can see at least two options:
First, Traefik could be deployed on separate "load-balancer" hosts. I could use round-robin DNS for these hosts, and pass traffic through them.
Second, Traefik could be deployed on each of my application hosts. In this setup, Traefik would be more of a "side-car", and applications on each host would use the local Traefik instance as a proxy to the other hosts' services.
An issue I see is that neither of these setups achieves true high availability. In either case, a Traefik instance crashing would result in unavailability for at least some services. In the first case, round-robin DNS and a short TTL might mitigate this?
Is lack of high-availability avoidable? Is there an alternative way to architect this solution? Does Traefik itself offer guidance on how this solution should be structured?
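For reference, both layouts end up serving the same kind of routing rules. A minimal sketch of a dynamic configuration, assuming Traefik v2's file provider; the host names, ports, and the app.internal rule are hypothetical:

http:
  routers:
    app-router:
      rule: "Host(`app.internal`)"
      service: app-service
  services:
    app-service:
      loadBalancer:
        servers:
          - url: "http://apphost1:8080"
          - url: "http://apphost2:8080"

In the sidecar layout every host would carry an identical copy of this file; in the dedicated layout only the load-balancer hosts would.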

JMeter - RMI vs SSH tunneling

I am using Apache-JMeter for distributed performance testing.
The master & slave communicate via Java RMI. It works fine so far. I do not see any issues.
But in some forums/blogs, I see people use SSH tunneling/port forwarding for communication between master & slave.
I tried to google for the advantages of SSH tunneling over RMI, but I could not find any.
Is communication via SSH faster than RMI? Could someone please clarify?
NOTE:
I am trying to find the advantages of using SSH tunneling for JMeter distributed testing over RMI. In which cases would we prefer SSH tunneling?
The standard arrangement is based on RMI and works fine if all the systems are in the same network.
If you need to put the systems on different networks, then you have to set up some kind of VPN between them, and in that case SSH tunneling can do the trick.
It makes sense only when the JMeter master and slave cannot communicate directly and the connection needs to be made via an intermediate jump node, or in locked-down environments where only the SSH port is open and you need to establish connectivity.
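A commonly cited sketch of that arrangement, run from the master; the port numbers are arbitrary choices that must match jmeter.properties, and user@slave.example.com is a placeholder:

# jmeter.properties on the master:
#   remote_hosts=127.0.0.1
#   server_port=1099
#   server.rmi.localport=50000
#   client.rmi.localport=60000
# Start both JVMs with -Djava.rmi.server.hostname=127.0.0.1 so the RMI stubs
# point at the tunnel endpoints instead of the real addresses.
ssh -N -f user@slave.example.com \
    -L 1099:127.0.0.1:1099 \
    -L 50000:127.0.0.1:50000 \
    -R 60000:127.0.0.1:60000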
You're comparing apples and oranges. RMI is an application protocol and API. SSH tunneling is a system utility. They're not interchangeable or comparable.

Managing Multiple Reverse SSH Tunnels

I want to install a number of Raspberry Pis at remote locations and be able to log in to them remotely. (We will begin with 30-40 boxes and hopefully grow to 1000 individual Raspberry Pis soon.)
I need to be able to remotely manage these boxes. The easier route, forwarding a port on the router and setting a DHCP reservation, requires either IT support from the company we'll be doing the install for (many of which don't have IT), or one of our IT people physically installing each box.
My tentative solution is to have each box create a reverse SSH tunnel to our server. My question is: how feasible would this be? How easy would it be to manage that many connections? Would it be an issue for a small local server to have 1000+ concurrent SSH connections? Is there an easier solution to this problem?
My end goal is to be able to ship someone a box, have them plug it in, and be able to access it.
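For concreteness, the reverse-tunnel approach under consideration usually amounts to a persistent autossh unit on each Pi; a minimal sketch, where the server name, the tunnel user, and the per-box remote port 20001 are assumptions:

# /etc/systemd/system/reverse-tunnel.service (sketch)
[Unit]
Description=Reverse SSH tunnel to central server
After=network-online.target
Wants=network-online.target

[Service]
User=tunnel
Environment=AUTOSSH_GATETIME=0
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 20001:localhost:22 tunnel@central.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Each box needs a distinct remote port (or you let sshd pick one with -R 0:localhost:22 and record the assignment), which is where managing 1000+ tunnels gets awkward.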
An alternate solution would be to:
Install an OpenVPN server on your server machine. How to install OpenVPN Server on the Pi. Additionally, add firewall rules that block everything except traffic from the administrating machine(s) to the clients' SSH and other service ports (if desired).
Run OpenVPN clients on your Raspberry Pi client machines; they will connect back to your VPN server. On a side note, the VPN server and the administrating machine(s) need not be the same machine if resources are limited on the VPN server. How to install OpenVPN on the client Raspberry Pis.
SSH from the administrating machine(s) to each client machine. Optionally, you could use RSA key authentication to simplify logging in.
Benefits include encryption for the tunnel (on top of SSH's own encryption for administration), as well as being able to monitor other services on their respective ports.
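A hedged sketch of the firewall rules mentioned in the first step, applied on the VPN server; the 10.8.0.0/24 subnet and the admin address 10.8.0.2 are assumptions, and it presumes client-to-client traffic is routed through the kernel (no client-to-client directive in the OpenVPN config):

# Let the admin machine reach any client's SSH over the VPN; drop everything else
iptables -A FORWARD -i tun0 -o tun0 -s 10.8.0.2 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -i tun0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i tun0 -o tun0 -j DROP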
I made a web app to manage this exact same setup in about 60 minutes with my Java web template. All I can share are some scripts that I use to list the connections and info about them. You can use those to build your own app; it is really simple to display this in some fancy way in a quick web front end.
Take a look at my scripts: https://unix.stackexchange.com/a/625771/332669
Those will allow you to get the listening port, as well as the public IPs they're bound from. With that you can easily plan a system where everything is identifiable with a simple database.
You might find this Docker container useful: https://hub.docker.com/r/logicethos/revssh/

Is ssh port forwarding an acceptable way to communicate with internal API services?

If you're building a distributed architecture with various services, is it acceptable to have those services communicate via SSH port forwarding, so that to a client a service looks like it's being served on a local port?
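Concretely, the pattern being asked about looks something like this; the host, port, and health endpoint are hypothetical:

# Make a remote API appear on a local port, then consume it as if it were local
ssh -N -f -L 127.0.0.1:8500:localhost:8500 svc@api-host.internal
curl http://127.0.0.1:8500/health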
The only person who can answer "is it acceptable" is you, or your client.
Is it wise? Probably not, because TLS with certificates at both ends (mutual TLS) will deliver the same capability with a much less troublesome intermediate layer, but that is an engineering decision you have to make.