gitea: Built-in ssh server not starting when sshd server running

I have a problem with my gitea version 1.15.5 running on my Raspberry Pi. It appears that the built-in ssh server is not starting:
ssh -p 2222 git@myaddress.com
ssh: connect to host myaddress.com port 2222: Connection refused
I have already made sure that "myaddress.com" points to the correct machine and that the firewall rules have been adjusted accordingly. The web interface works just fine.
When I checked whether the port is actually used by gitea, I realized the built-in ssh server is not running:
sudo lsof -i -P -n | grep LISTEN
sshd [...] root [...] TCP *:22 (LISTEN)
sshd [...] root [...] TCP *:22 (LISTEN)
[...]
gitea [...] git [...] TCP *:3000 (LISTEN)
As you can see, there is no process listening on port 2222.
I have an internal sshd server running on that machine on port 22 and I would like to keep those two separate, if possible. Or does the problem lie there, and the built-in gitea ssh server can't be used alongside an sshd server?
Here is an excerpt of my app.ini configuration:
APP_NAME = gitea
RUN_USER = git
RUN_MODE = prod
[server]
SSH_DOMAIN = myaddress.com
DOMAIN = myaddress.com
HTTP_PORT = 3000
ROOT_URL = https://myaddress.com/
DISABLE_SSH = false
SSH_PORT = 2222

After some more googling, I found the solution myself:
If there is an sshd server running, gitea does not automatically start its built-in ssh server. Instead, you have to force it by adding this line under [server] in the app.ini configuration:
[server]
START_SSH_SERVER = true
This matches the gitea config cheat sheet, which says:
START_SSH_SERVER: false: When enabled, use the built-in SSH server.
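After adding the option, restart gitea and repeat the earlier check to confirm that the built-in server is now listening; a minimal sketch, assuming gitea runs as a systemd service named gitea:
sudo systemctl restart gitea
sudo lsof -i -P -n | grep 2222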
I've posted this in case anyone ever runs into the same problem.

Related

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
    User idA
Host serverB
    User idB
    ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
name = "ssh-runner-1"
url = "http://my-cicd-server"
token = "xxxxxxxxxxxxxxxx"
executor = "ssh"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.ssh]
user = "idB"
host = "serverB"
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a « connection refused » error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not offer the same configurability as the standard ssh client (usually OpenSSH) that typically ships with operating system distributions.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
Open the tunnel -- local port 2222 forwards to port 22 on ServerB via ssh connection through ServerA
ssh -L 2222:ServerB:22 -N ServerA
Configure runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager, or you can use a tool like autossh.
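As a rough sketch (not part of the original answer), an autossh invocation keeping the same tunnel alive might look like this, assuming autossh is installed and key-based login to ServerA works non-interactively:
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 2222:ServerB:22 ServerA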
This is basically how the ProxyJump configuration works under the hood.
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. using iptables routing rules). That way the forwarding occurs transparently. Then simply tell the runner to target ServerA and the traffic will be transparently moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10 the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
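Note that for a DNAT rule to forward traffic on to another host, IP forwarding must also be enabled on ServerA; a minimal sketch (the sysctl.d file name is arbitrary):
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
Depending on the FORWARD chain policy, an explicit rule accepting the forwarded traffic may also be required.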
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner as well, in the event you need this.
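If you do need it, it goes in the runner's config.toml; a hedged sketch of the placement, assuming it sits under the [runners.ssh] section alongside the other ssh options:
  [runners.ssh]
    user = "idB"
    host = "serverB"
    identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
    disable_strict_host_key_checking = true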

GitLab ssh over cloudflare and proxy

I have installed gitlab in an lxc container on a Proxmox host.
The setup is gitlab <-> proxy <-> cloudflare.
Everything works fine except SSH clone/push/pull. BUT, if I add an entry to /etc/hosts (on the local machine or any other server where I'm using gitlab) with the public IP of the proxy and the domain name of my gitlab, it works.
The proxy VM is an lxc container too. There I'm just redirecting port 22 to the gitlab VM with this rule:
-A PREROUTING -d AAA.AAA.AAA.AAA/32 -p tcp -m tcp --dport 22 -j DNAT --to-destination 192.168.10.150:22
ssh -T git@git.MYHOST
This works with the entry in the hosts file, but if I remove it, it stops working.
ERRORS:
# git pull
ssh: connect to host git.peacedata.su port 22: Network is unreachable
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 24.04.2020:
I found out that Cloudflare blocks port 22.
I have a workaround, but I would like a more "beautiful" solution.
For now, I just added the direct IP address to /etc/hosts and everything works like a charm.
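For example, a hedged /etc/hosts entry, reusing the proxy's public IP placeholder from the PREROUTING rule above:
AAA.AAA.AAA.AAA  git.MYHOST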
More details about which ports Cloudflare opens, and why, are explained here: https://blog.cloudflare.com/cloudflare-now-supporting-more-ports/

How to configure ssh to listen to private network IP address?

I have a system with CentOS 7 installed, and on a second system I have Windows 10. Both machines are connected to a private network. Now I want to access the CentOS machine remotely over ssh.
I checked the IP address of my Windows machine, and then I edited the
/etc/ssh/sshd_config
file on the CentOS system with the following entry:
ListenAddress <Ip_address_of_window_machine>
But when I restart the ssh service using the following command
systemctl restart sshd.service
I get the following error
bind to port 22 on <ip-address> failed. cannot assign requested address
But when I configure entries like this
ListenAddress 0.0.0.0
ListenAddress [::]
it works fine. But I want to bind my ssh to just one particular IP address.
The ListenAddress configuration option tells the sshd process which local address on the server to bind to, so it must be an address assigned to the CentOS machine itself, not the address of the Windows client (which is why the bind fails). If you want to restrict access to the CentOS host to a particular client, you need to use a firewall. Though firewalld is the proper way to go (with zones and so on), good old iptables will do the job:
sudo iptables -A INPUT -p tcp -s a.b.c.d --dport ssh -j ACCEPT
sudo iptables -A INPUT -p tcp --dport ssh -j REJECT
Where a.b.c.d is the IP address of the Windows host.
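Since firewalld is mentioned as the proper way, a hedged firewalld equivalent of the same restriction could look like this (best run from a local console in case you lock yourself out):
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="a.b.c.d" service name="ssh" accept'
sudo firewall-cmd --reload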
NOTICE: When configuring a firewall over the network, you can easily lock yourself out!

SSH port forwarding: bind: Address already in use

I have a remote server that I want to ssh into and forward my local port to.
My application is listening on port 443 on my local machine. I'm connecting to the server with gcloud. This is the command:
gcloud compute --project "**^" ssh --zone "***" "***"
The goal is to allow others to communicate with the server on port 9000 and have this traffic redirected to my local machine on port 443. In other words, accessing the server on port 9000 is equivalent to accessing my computer on port 443.
So I do ssh port forwarding
gcloud compute --project "**^" ssh --zone "***" "***" -- -L 443:127.0.0.1:9000 -N
and get back this error:
bind: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 443
Could not request local forwarding.
What am I doing wrong?
The correct option is -R (remote forwarding): the -L command fails because it tries to listen on your local port 443, which your application already occupies, whereas you want the server to listen on port 9000 and forward back to your local 443:
... -R 9000:localhost:443
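Applied to the gcloud command from the question, a hedged version would be:
gcloud compute --project "**^" ssh --zone "***" "***" -- -R 9000:localhost:443 -N
Note that by default sshd binds remote-forwarded ports to the server's loopback interface only; for other machines to reach port 9000 on the server, GatewayPorts has to be enabled in the server's sshd_config.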
On Linux, if you have no other ssh connection that you need to keep, you can fix it with:
killall ssh

Running ssh on Amazon EC2 instance on port other than 22

I am not able to access an Amazon EC2 instance via ssh as I am behind a firewall.
So I thought of running ssh on a port other than 22, like 80 or 443.
I tried starting the Amazon EC2 instance via the Web Management Console with the following 'user data':
#!/bin/bash -ex
perl -pi -e 's/^#?Port 22$/Port 80/' /etc/ssh/sshd_config
service sshd restart || service ssh restart
The idea being that the above script would execute on instance startup and switch ssh from port 22 to port 80. (Ref: http://alestic.com/2010/12/ec2-ssh-port-80)
But ssh is still not accessible on port 80.
Apparently the 'user data' script is not being executed on startup?
I can 'only' start/stop instances via the Web Management Console, not from the command line (being behind a firewall).
Any ideas?
To connect to an AWS instance through ssh on a port different from the default 22:
Open the security group of your instance so that it allows connections to that port from the source that you choose (0.0.0.0/0 for any source).
On your instance:
If it is a new instance, you could use a user-data script like this one:
#!/bin/bash -ex
perl -pi -e 's/^#?Port 22$/Port 443/' /etc/ssh/sshd_config
service sshd restart || service ssh restart
Please note that this only works if you are launching a new instance:
User data scripts and cloud-init directives only run during the first boot cycle when an instance is launched.
If it is not a new instance, edit the /etc/ssh/sshd_config file, adding or changing Port 22 to the port you want to use for ssh (e.g. Port 443), then do service ssh restart and you should be done.
Note: I did this with an Ubuntu instance; with other Linux instances it may be slightly different.
The Amazon firewall blocks all ports other than 22. You first have to enable port 80/443/whatever you need.
HOWTO:
Go to "security groups" -> click on the group you chose for your instance, then on the "Inbound" tab.
There you can add your ports.
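If you can reach the AWS API from your network, the same change can be made from the command line; a hedged AWS CLI sketch, where the security group ID is a placeholder:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0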
EDIT: If by chance you also installed Apache or some other web server, port 80 will already be in use and cannot be used by sshd. I do not know which operating system is installed on your server, but maybe some web server is already included?
EDIT 2: As per the last comment, it seems nowadays all ports are blocked by default. So you will have to open port 22 if you need it. Wasn't the case eight years ago, but configurations change ;)
Here is what I came up with to run sshd on both 443 and 22 with RHEL 8 on EC2.
Make sure your security groups allow connections from your network/IP to the desired ports (in my case 22 and 443):
tcp 443 1.2.3.4/32 #allow access to 443 from IP 1.2.3.4
tcp 22 1.2.3.4/32 #allow access to 22 from IP 1.2.3.4
Log in to the EC2 instance and run:
#install semanage with
sudo yum install -y policycoreutils-python-utils
#delete 443 from http ports
sudo semanage port -d -t http_port_t -p tcp 443
#add 443 to ssh ports
sudo semanage port -m -t ssh_port_t -p tcp 443
Edit /etc/ssh/sshd_config
Port 22
Port 443
Restart sshd
sudo service sshd restart
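To double-check the result, a short hedged verification that SELinux now allows ssh on 443 and that sshd is listening on both ports:
sudo semanage port -l | grep ssh_port_t
sudo ss -tlnp | grep sshd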