NFS server and client installation in Ubuntu

I have installed an NFS server and client on separate machines.
The problem is that when I mount the shared directory on the client side
using the command
mount server_addr:server_dir client_dir
it fails with an error saying that I
don't have permission to share this directory (can't access the server_dir)
even though I have given read-write (rw) permissions to the client by updating the
/etc/exports file on the server as
server_dir client_ip(rw,sync,no_root_squash)
It still reports "access denied" on the client side.
Any help/suggestions are appreciated.

Set permissive iptables policies (firewall-related) and flush the rules:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
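If the firewall is not the culprit, it may also help to re-read the exports on the server and verify them from the client; a sketch using the question's placeholder names (server_addr, server_dir, client_dir):
# On the server: re-export everything defined in /etc/exports
sudo exportfs -ra
# From the client: list what the server currently exports
showmount -e server_addr
# Retry the mount as root
sudo mount -t nfs server_addr:server_dir client_dir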

Related

How to open port 3306 on Google Colab with UFW, IPTables

I have installed MySQL in Google Colab and have been able to access it from the command line using
!mysql -e " --- any valid database command ---- "
My next task is to make this MySQL server visible to a remote client. This requires making a change in the mysqld.cnf file, which I have done, and then opening up port 3306 for remote access. Once that is done, I will use an ngrok tunnel to expose port 3306 on the public internet and connect with a MySQL client (like HeidiSQL).
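For reference, the mysqld.cnf change typically means adjusting bind-address so MySQL listens on all interfaces rather than only on localhost; a sketch, assuming the stock Ubuntu config path:
!sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
!service mysql restart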
I am unable to open port 3306.
I have installed ufw using
!apt install ufw
But when I try
!ufw status
I get
ERROR: problem running iptables: iptables v1.6.1: can't initialize iptables table `filter': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
However, when I try to install iptables, I get
iptables is already the newest version (1.6.1-2ubuntu2).
iptables set to manually installed.
Finally, I try
!whoami
I get
root
So I am root and iptables is at the latest version, and yet ufw is not running. Obviously I am doing something wrong. I would be grateful if someone could help me sort out this problem.
Strangely enough, it works WITHOUT having to open the ports, so this question has become moot!
I just tried accessing the MySQL server running in Google Colab from an external MySQL client (HeidiSQL) and it worked perfectly well.
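That behaviour makes sense because ngrok opens an outbound tunnel from Colab, so no inbound port has to be opened on the Colab side. A minimal sketch, assuming ngrok is already installed and an authtoken is configured:
!ngrok tcp 3306 &
ngrok then prints a forwarding address (something like tcp://0.tcp.ngrok.io:12345), which is what the external client such as HeidiSQL should connect to.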

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
    User idA

Host serverB
    User idB
    ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
  name = "ssh-runner-1"
  url = "http://my-cicd-server"
  token = "xxxxxxxxxxxxxxxx"
  executor = "ssh"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.ssh]
    user = "idB"
    host = "serverB"
    identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a « connection refused » error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not have the same configurability as the standard ssh (usually OpenSSH) packages you typically find installed in operating system distributions or similar packages.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
# Open the tunnel: local port 2222 forwards to port 22 on ServerB via an ssh connection through ServerA
ssh -L 2222:ServerB:22 -N ServerA
Configure the runner to use the tunnel:
...
  [runners.ssh]
    host = "localhost"
    port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager, or you can use a tool like autossh.
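For example, autossh can keep the tunnel up and re-establish it if it drops; a sketch, where -M 0 disables autossh's own monitoring port and relies on SSH keepalives instead:
autossh -M 0 -f -N -o "ServerAliveInterval 30" -L 2222:ServerB:22 ServerA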
This is basically how the ProxyJump configuration works under the hood.
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. using iptables routing rules). That way the forwarding occurs transparently. Then simply tell the runner to target ServerA and the traffic will be transparently moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10 the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
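For the DNAT rule to actually forward traffic to another host, IP forwarding generally needs to be enabled on ServerA as well (a note based on typical defaults; many distributions ship with it turned off):
sysctl -w net.ipv4.ip_forward=1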
Then the GitLab runner configuration:
...
  [runners.ssh]
    host = "ServerA"
    port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner, in case you need it.
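Presumably it goes in the [runners.ssh] section alongside the other SSH settings (an assumption, since the option is undocumented):
...
  [runners.ssh]
    ...
    disable_strict_host_key_checking = true
...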

Unable to use Webmin outside my LAN: Ubuntu Server on VMware

I have a Windows PC.
I have installed Ubuntu Server on VMware and switched to a bridged network.
Then I installed Webmin and started it with
sudo service webmin start
with ssl=1.
I have also done
sudo iptables -A INPUT -p tcp -d 0/0 -s 0/0 --dport 10000 -j ACCEPT
and
sudo ufw allow 10000
I can access Webmin from my computer and from any device on my LAN/wifi via the browser at https://192.168.187.129:10000/
But I cannot access it from outside my LAN, and SSH also only works from inside my LAN.
There is no answer on this related question:
https://superuser.com/questions/1122496/cant-acces-webmin-outside-the-virtual-machine-running-it-virtualbox-ubuntu-s
https://superuser.com/questions/1122496/cant-acces-webmin-outside-the-virtual-machine-running-it-virtualbox-ubuntu-s
Enable port forwarding on your router. 192.168.x.x addresses are reserved for private networks and cannot be routed across the Internet. Your router will have its own external IP address, and you will need to enable port forwarding so that when you hit externalIP:10000 it gets forwarded to 192.168.187.129:10000.
Of course, this means that Webmin is exposed to anyone on the Internet who wants to try to log in, so make sure you set strong passwords. You may also want to consider locking it down so that only a subset of external IPs can connect.
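If you do expose it, one way to lock it down on the server itself is to accept port 10000 only from trusted source addresses and drop everything else (a sketch; 203.0.113.5 is a hypothetical trusted IP, and note that iptables evaluates rules in order, so the earlier allow-all rule for port 10000 would have to be removed first):
sudo iptables -A INPUT -p tcp -s 203.0.113.5 --dport 10000 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10000 -j DROP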

How to specify RemoteForward in the ssh config file?

I'm trying to set up an SSH tunnel with remote port forwarding. The idea is to have a VPS act as a means to SSH into remotely deployed systems (which currently include a Raspberry Pi). Everything seems to work, but I run into issues when trying to move all the arguments into the ~/.ssh/config file.
Setting the HostName, User, Port, and IdentityFile does work. However, setting the RemoteForward parameter does not seem to work.
The following works:
ssh -R 5555:localhost:22 ssh-tunnel
However, when using the following lines in the config file:
Host ssh-tunnel
    ...
    RemoteForward 5555 localhost:22
The following command returns the message "Bad remote forwarding specification 'ssh-tunnel'"
ssh -R ssh-tunnel
Obviously, I found the answer almost immediately after posting the question. The -R flag requires you to give the remote forwarding specification on the command line; since remote forwarding is already set in the config file, you shouldn't pass the flag at all. One confusing thing remains: aside from setting up the tunnel, a plain ssh call also opens a shell on the remote server. To avoid this, add the -f and -N flags. This results in the following command:
ssh -f -N ssh-tunnel
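For completeness, the ~/.ssh/config entry would then look something like this (a sketch; the HostName, User, Port, and IdentityFile values are placeholders):
Host ssh-tunnel
    HostName vps.example.com
    User tunneluser
    Port 22
    IdentityFile ~/.ssh/id_ed25519
    RemoteForward 5555 localhost:22
With this in place, ssh -f -N ssh-tunnel keeps the tunnel in the background, and the Raspberry Pi can then be reached from the VPS with ssh -p 5555 pi-user@localhost (where pi-user is whatever local account exists on the Pi).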

iptables on CentOS 7 rejects SSH and WHM connections

I installed CentOS 7 and cPanel, disabled/masked firewalld, and installed and enabled iptables. As soon as I enabled iptables, I was disconnected from WHM and SSH. When I disable iptables in rescue mode, I can connect to the server via SSH and WHM again.
I checked the rules in /etc/sysconfig/iptables, but there is no rule that rejects access to the SSH or WHM ports.
My next step was to install CSF.
Any idea how to fix it?
The quick way to get rid of the issue is to flush all the iptables rules with
iptables -F
However, since you want to keep iptables running, you will have to configure it to open the required ports, for example
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
Port 22 is for SSH; you will have to open the other ports the same way.
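For a cPanel/WHM server, the same pattern would also cover the WHM and cPanel web ports; a sketch, assuming the standard ports (2086/2087 for WHM, 2082/2083 for cPanel) and that the iptables-services package is used to persist the rules:
iptables -A INPUT -p tcp -m multiport --dports 2082,2083,2086,2087 -j ACCEPT
service iptables save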