GitLab ssh over cloudflare and proxy - ssh

I have installed GitLab in an LXC container on Proxmox.
The setup is gitlab <-> proxy <-> cloudflare.
Everything works fine except SSH clone/push/pull. However, if I add an entry to /etc/hosts (on the local machine or any other server where I'm using GitLab) with the public IP of the proxy and the domain name of my GitLab, it works.
The proxy VM is an LXC container too. There I'm just redirecting port 22 to the GitLab VM with this rule:
-A PREROUTING -d AAA.AAA.AAA.AAA/32 -p tcp -m tcp --dport 22 -j DNAT --to-destination 192.168.10.150:22
ssh -T git@git.MYHOST
This works with the entry in the hosts file, but if I remove it, it stops working.
ERRORS:
# git pull
ssh: connect to host git.peacedata.su port 22: Network is unreachable
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE on 24.04.2020
I found out that Cloudflare blocks port 22.
I have a workaround, but I'm looking for a more "beautiful" solution.

So I just added the direct IP address to /etc/hosts and everything works like a charm.
More about which ports Cloudflare keeps open, and why, is explained here: https://blog.cloudflare.com/cloudflare-now-supporting-more-ports/
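For reference, the workaround is just a hosts entry like the following (a sketch; AAA.AAA.AAA.AAA and git.MYHOST stand in for the proxy's public IP and the GitLab hostname used above):
# /etc/hosts on the machine doing the clone/push/pull
# resolve the GitLab host to the proxy's public IP, bypassing Cloudflare for SSH
AAA.AAA.AAA.AAA git.MYHOST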

Related

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
User idA
Host serverB
User idB
ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
name = "ssh-runner-1"
url = "http://my-cicd-server"
token = "xxxxxxxxxxxxxxxx"
executor = "ssh"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.ssh]
user = "idB"
host = "serverB"
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a « connection refused » error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not have the same configurability as the standard ssh (usually OpenSSH) packages you typically find installed in operating system distributions or similar packages.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
Open the tunnel -- local port 2222 forwards to port 22 on ServerB via ssh connection through ServerA
ssh -L 2222:ServerB:22 -N ServerA
Configure runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager; alternatively, use a tool like autossh.
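For instance, a minimal autossh invocation to keep the tunnel alive might look like this (a sketch, assuming autossh is installed and using the same ServerA/ServerB names and port 2222 as above):
# keep the local 2222 -> ServerB:22 tunnel alive; -M 0 relies on ssh's own keepalives
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 2222:ServerB:22 ServerA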
This is basically how the ProxyJump configuration works under the hood.
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. using iptables routing rules). That way the forwarding occurs transparently. Then simply tell the runner to target ServerA and the traffic will be transparently moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10, the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
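Note that for the DNAT rule to forward traffic on to ServerB, kernel IP forwarding generally has to be enabled on ServerA as well, for example:
# enable IPv4 forwarding for the running system
sysctl -w net.ipv4.ip_forward=1
# persist across reboots
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf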
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner as well, in the event you need this.
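If you do need it, it would sit under the same [runners.ssh] section of config.toml (a sketch combining it with the forwarding setup above):
...
[runners.ssh]
user = "idB"
host = "ServerA"
port = 2222
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
disable_strict_host_key_checking = true
...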

How to configure ssh to listen to private network IP address?

I have a system with CentOS 7 installed, and on a second system I have Windows 10. Both machines are connected to a private network. Now I want to access the CentOS machine remotely over SSH.
I checked the IP address of my Windows machine, and then I edited the
/etc/ssh/sshd_config
file on the CentOS system with the following entry:
ListenAddress <Ip_address_of_window_machine>
But when I restart the ssh service using the following command
systemctl restart sshd.service
I get the following error
bind to port 22 on <ip-address> failed. cannot assign requested address
But when I configure entries like this
ListenAddress 0.0.0.0
ListenAddress [::]
it works fine. But I want to bind my SSH daemon to just one particular IP address.
The ListenAddress option tells the sshd process which local address to bind to; it has to be an address assigned to one of the server's own interfaces, which is why binding to the Windows machine's IP fails. If you want to restrict who may access a CentOS host, you need to use a firewall instead. Though firewalld is the proper way to go (with zones and so on), good old iptables will do the job:
sudo iptables -A INPUT -p tcp -s a.b.c.d --dport ssh -j ACCEPT
sudo iptables -A INPUT -p tcp --dport ssh -j REJECT
Where a.b.c.d is the IP address of the Windows host.
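If firewalld is active on the CentOS 7 host, the same restriction could be expressed with a rich rule instead (a sketch, again with a.b.c.d standing for the Windows host's address):
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="a.b.c.d" service name="ssh" accept'
sudo firewall-cmd --reload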
NOTICE: When configuring a firewall over the network, you can easily lock yourself out!

LXD / Container / Apache2 / Iptables - Unable to load external sources in website

I have a container set up with LXD running several WordPress sites (Apache2).
All is working fine.
I added port forwarding with:
lxc config device add CONTAINER lxd_proxy_port80 proxy listen=tcp:0.0.0.0:80 connect=tcp:INTERNALIP:80
…and the same for port 443. That's all working correctly.
Unfortunately I cannot see the originating IPs in my Apache2 logs (/var/log/apache2/access.log), only the local IP.
By using iptables I wanted to change this. I did:
iptables -A FORWARD -p tcp -d LOCALIP --dport 443 -j ACCEPT
iptables -A FORWARD -p tcp -d LOCALIP --dport 80 -j ACCEPT
and deleted my proxy devices with
lxc config device remove CONTAINER lxd_proxy_port80
lxc config device remove CONTAINER lxd_proxy_port443
After that, I can access files on my server correctly, and I now also see the external IP in the Apache2 access logs.
However, WordPress can no longer reach its (external) update servers and seems to have trouble reaching the outside world, and one of my WordPress pages can no longer load its index.php (it hangs while loading). I suppose the latter is because some external content is not being loaded correctly.
Could you help me understand what is going on?
This conversation answers the question:
https://discuss.linuxcontainers.org/t/iptables-apache-in-lxd-container/6143
A good video on this:
https://www.youtube.com/watch?v=1p-fbS_OYTg
My solution ended up working by adding -d MYIP/32 to the iptables rule so that it only applies to incoming traffic.
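For completeness, the resulting NAT rules might look roughly like this (a sketch; MYIP stands for the host's public address and INTERNALIP for the container's address, as in the question):
iptables -t nat -A PREROUTING -d MYIP/32 -p tcp --dport 80 -j DNAT --to-destination INTERNALIP:80
iptables -t nat -A PREROUTING -d MYIP/32 -p tcp --dport 443 -j DNAT --to-destination INTERNALIP:443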

SSH reverse tunnel not working for webserver

I have a webapp running on a Raspberry Pi, which is behind NAT, and I'm trying to make a tunnel to the company's server so that I can access it from the web. Right now I've been able to establish a tunnel using ssh -fN -R 192.168.0.28:54321:localhost:443 username@192.168.0.28 (both the server and the RPi are in the same LAN at the moment), and doing curl -k https://192.168.0.28:54321 returns the contents of the webpage hosted on the RPi, but only if I do it from the server. I have set GatewayPorts yes and AllowTcpForwarding yes (which is the default anyway).
It was the firewall on the server blocking the port. ¬¬
To open said port, the command is sudo iptables -I INPUT -p tcp --dport 54321 -j ACCEPT, which says that any connection coming in on TCP port 54321 must be accepted.
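A quick way to verify might be (a sketch; SERVER_PUBLIC_IP is a placeholder for the server's public address):
# on the server: confirm sshd is listening on the forwarded port on all interfaces
sudo ss -tlnp | grep 54321
# from an outside machine: the webapp on the RPi should now answer
curl -k https://SERVER_PUBLIC_IP:54321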

Accessing a CentOS 7 (minimal) server running on VirtualBox from outside

Is it possible to access my Apache server from outside the VirtualBox in the Google Chrome browser? It's running on CentOS 7 in VirtualBox.
I have tried connecting to the IP address of the CentOS virtual machine but it didn't work. It's using 'Bridged Adapter' networking in the VM settings, and I checked the IP address using the 'ip addr' command. Thanks.
Of course you can, though you need to add a port-forwarding rule to allow access to your CentOS 7 machine's web service from the host machine.
For example, my VM's bridge IP address (the interface that connects to the world) is 192.168.1.38 and its interface is enp0s3. Let's say I'm running the web service on my second interface, enp0s8, with IP 192.168.100.101 on port 8000. Here's how you create the forwarding rule:
iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 80 -j DNAT --to-destination 192.168.100.101:8000
service iptables save
That's it. You should be able to go to your host's Chrome browser, type in the URL 192.168.1.38, and be presented with your web service. If it's still not working, I'd suggest looking at your iptables rules to see if any are blocking this traffic.
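Note that service iptables save assumes the iptables-services package is in use; on a stock CentOS 7 install, firewalld is the default, in which case an equivalent forward could be set up roughly like this (a sketch using the same addresses and ports):
# forwarding to another address requires masquerading in the zone
firewall-cmd --permanent --add-masquerade
firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8000:toaddr=192.168.100.101
firewall-cmd --reload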