SSH Tunnel to FusionAuth on EKS

I installed FusionAuth on EKS from the bastion host, following this architecture (helm install from the bastion host).
Everything works: I can access the FusionAuth admin UI from the bastion host (a Linux machine) after a port-forward.
Question: is it possible to access the FusionAuth admin UI from my laptop, e.g. through an SSH tunnel? I don't want to expose it to the public.

You don't need an SSH tunnel for this. You can run a reverse proxy on the bastion; FusionAuth offers ready-made configurations here.
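That said, if you do want the SSH tunnel from the question, a minimal sketch would be the following, assuming a hypothetical bastion address bastion.example.com and that kubectl port-forward is already serving FusionAuth on the bastion's 127.0.0.1:9011 (FusionAuth's default port):
ssh -N -L 9011:127.0.0.1:9011 ec2-user@bastion.example.com
Then browse to http://127.0.0.1:9011/ on the laptop; nothing is exposed to the public.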

Related

Can we create multiple Cloudflare Argo tunnels with different domains from the same machine?

I have multiple domains that point to a single IP, but since I don't want to expose my IP, I want to use Argo Tunnel to achieve the same functionality (point all domains to the same server).
The problem is that with Argo Tunnel I am unable to add multiple domains. I can't create multiple tunnels with different domains to the same machine, because one machine has one certificate installed, and to initiate a new Argo tunnel the previous certificate needs to be deleted.
How can I create tunnels for abc.com, xyz.com, and qrs.com with a single server?
I have done this on my Ubuntu cloud server. Follow these steps.
Step 1:
I moved ~/.cloudflared/cert.pem to ~/.cloudflared/cert.pem.abc.com
Step 2 (authenticate the new domain xyz.com):
Run in a terminal: cloudflared login
Once authenticated, run the following command to start the new tunnel:
sudo cloudflared tunnel --hostname xyz.com --url http://127.0.0.1
You can also run this command in the background to keep the tunnel alive.
This will do what you need, but it has a problem.
The problem is that whenever you restart or create a tunnel, you first need to put that domain's cert.pem at ~/.cloudflared/cert.pem before starting it. Once the tunnel is running, the file is no longer required.
So this process requires replacing the cert.pem file every time you start a new tunnel or restart an existing one; a sketch of the swap is shown below.
This is the only way I know of to support multiple tunnels at the same time. Alternatively, you can use Cloudflare's CNAME Setup feature, but that requires a Business plan or higher.
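A minimal sketch of the cert-swap workflow described above, assuming the per-domain copies live next to the default path that cloudflared reads (~/.cloudflared/cert.pem):
# after `cloudflared login` for xyz.com, stash its cert under a per-domain name
mv ~/.cloudflared/cert.pem ~/.cloudflared/cert.pem.xyz.com
# to (re)start the tunnel for abc.com, put its cert back in place first
cp ~/.cloudflared/cert.pem.abc.com ~/.cloudflared/cert.pem
sudo cloudflared tunnel --hostname abc.com --url http://127.0.0.1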

Unable to SSH between guest VMs which are on different hosts in a cluster

I'm having problems SSHing between ESXi guests that are on different hosts within the cluster. I have one guest on the routable cluster virtual network that I am using as a bastion server to access guests on a private network; the distributed port group spans all hosts.
I'm using SSH ProxyJump to route through the bastion host to the other guest VMs. When the guests on the private network are on the same cluster host as the bastion, there is no problem. When the guests are on a different host, I get a "connection refused" error from the remote server. If I manually migrate the VM to the same host as the bastion, the error goes away.
I found this answer, which relates to SSHing between ESXi hosts rather than guests on hosts, and suggests that SSH Client needs to be allowed on the outgoing firewall of each host. It seems like it could be relevant, but my vSphere knowledge is limited and I don't have sufficient admin rights to make this change myself.
I'd be grateful if anyone could confirm whether my inability to SSH between guests on different hosts is a result of not having SSH Client enabled in the outbound firewall, or whether there is some other reason why I can't get an SSH connection.
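For reference, the ProxyJump setup described above looks roughly like this in ~/.ssh/config (hostnames, addresses, and users here are placeholders):
Host bastion
    HostName bastion.cluster.example
    User admin
Host private-vm
    HostName 10.0.0.5
    User admin
    ProxyJump bastion
With that in place, a plain ssh private-vm routes through the bastion.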
From the link you posted:
You need to open the required ssh ports in the ESXi firewall.
In the vSphere Client check the host -> Configuration -> Security Profile -> Firewall -> Properties
and enable "SSH Client" if you need outgoing scp connections resp. "SSH server" if you want to enable incoming scp connections.
Instead of opening SSH Client on the outgoing firewall of each host, configure it this way:
Outgoing server: SSH Client -> outgoing firewall
Receiving server: incoming firewall -> SSH Server
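If you do get shell access to the hosts, I believe the same toggles are available from the command line via esxcli (a sketch; ruleset names as found in recent ESXi versions):
esxcli network firewall ruleset set --ruleset-id sshClient --enabled true
esxcli network firewall ruleset set --ruleset-id sshServer --enabled true
The first enables outgoing SSH from the host, the second incoming SSH to it.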
It turned out to be an underlying network issue: the physical switch was dropping my VLAN-tagged packets because the VLAN ID wasn't configured on it.

What is the Bitnami ActiveMQ URL to connect to?

I have created ActiveMQ through a Bitnami Google Cloud VM. I do not know what URL to use, i.e. what URL to send messages to.
Bitnami developer here,
You can connect to the ActiveMQ admin panel by browsing to http://YOUR_DOMAIN:8161/. You must use the username and password obtained from the server dashboard. Note that if you want to connect to ActiveMQ from a different machine, you must have ports 61616 and 8161 open for remote access.
You could also use an SSH tunnel like the one below...
ssh -i YOUR_KEY_FILE -N -L 8161:127.0.0.1:8161 bitnami@YOUR_DOMAIN
...and then browse to http://127.0.0.1:8161/
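Similarly, if you want to send messages from your machine without opening the broker port for remote access, you could tunnel it the same way (a sketch; 61616 is the default OpenWire transport port mentioned above):
ssh -i YOUR_KEY_FILE -N -L 61616:127.0.0.1:61616 bitnami@YOUR_DOMAIN
...and then point your client at tcp://127.0.0.1:61616.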
By default, all ActiveMQ transport connectors are enabled.
If you want to debug errors, you can find the main ActiveMQ log file at /opt/bitnami/activemq/data/activemq.log.
I hope it helps.

What's the best way to reverse SSH tunnel to access a system behind a corp firewall?

I am trying to access a Linux server through SSH. Typically this is accessed through a Win2012 jump server using PuTTY.
I was able to set up a reverse SSH connection in PuTTY from the jump server to an AWS VM through an HTTP proxy, and this was supposed to forward it to my Linux server. But when I connect to my AWS VM and initiate SSH over my remote port, the whole thing just hangs. What am I doing wrong, and is there a better/easier way? No malicious intent; I have physical access to both the jump server and the Linux server. Just bypassing a shitty corp firewall.
Can you explain what you did in detail?
Typically, on Unix systems, for a reverse SSH tunnel you can do this on your server behind the firewall:
ssh -NR ssh_port_AWS:localhost:ssh_port_local_server user@ip_AWS
You need to replace:
ssh_port_AWS with the port on the remote server that you want to use to access the local server;
ssh_port_local_server with the port of the SSH server on your local server (22 if you haven't changed anything);
user@ip_AWS with your AWS connection details (user@IP).
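For example, with placeholder values (remote port 2222 on the AWS VM, local sshd on port 22, documentation IP):
ssh -NR 2222:localhost:22 ubuntu@203.0.113.10
Then, once logged in to the AWS VM, you reach the firewalled machine with:
ssh -p 2222 your_local_user@localhost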

how docker-machine uses docker api to copy certificates

My question is: as I understand it, docker-machine uses the Docker Remote API to do whatever it does, for example to regenerate certificates. I have checked the Docker API but couldn't find how it is possible to send certificates to that machine using only the Docker API. Can someone help, please?
The TLS files are hosted locally on the Docker client. For this reason you should protect the files as if they were a root password.
This page will walk you through generating the files needed to negotiate a connection over TLS. Note that the remote daemon must be running TLS.
https://docs.docker.com/engine/security/https/
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:2376 version
Note: Docker over TLS should run on TCP port 2376.
Warning: As shown in the example above, you don't have to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!
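For comparison, docker-machine wires up these same flags for you through environment variables; a sketch, assuming a machine named my-machine:
eval $(docker-machine env my-machine)
docker version
The first command sets DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH for the client; the second then talks TLS to the remote daemon. As far as I know, the certificate files themselves are copied to the machine over SSH during provisioning, not through the Docker API.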