What is the FULL TUTORIAL to set up X11 forwarding with the latest CentOS CLEAN install? - ssh

Can anyone give me the FULL process to set up X11 forwarding from a fresh and clean CentOS install on a dedicated server?
I only have access to the server via ssh.
The problem is simple: I think I have already tried every solution I could find on Google to make X11 forwarding work:
set in /etc/ssh/sshd_config
X11Forwarding yes
and
X11UseLocalhost no or X11UseLocalhost yes
and
XAuthLocation /usr/bin/xauth (and xauth is in this path)
and
AddressFamily inet or AddressFamily any
restarting sshd after each change with /etc/init.d/sshd restart (and it reports that it stops and starts)
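To sum up, the sshd_config I ended up with looks something like this (one of the variants; I flipped X11UseLocalhost and AddressFamily back and forth):
X11Forwarding yes
X11UseLocalhost yes
XAuthLocation /usr/bin/xauth
AddressFamily inet
# applied after each change with
/etc/init.d/sshd restart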
I tried to install many, many things (restarting sshd after each install), such as:
yum groupinstall 'X Window System' (it works well)
xorg-x11-utils (it works)
xorg-x11-fonts-* (it works)
xorg-x11-xauth (already installed)
yum install xorg-x11-xauth.x86_64 (it works)
When I run "strings /usr/sbin/sshd | grep xauth" I get:
/usr/bin/xauth
xauthlocation
maxauthtries
No xauth program; cannot forward with spoofing.
but running /usr/bin/xauth gives me:
Using authority file /root/.Xauthority
xauth>
so xauth is in the right place...
I tried all the ssh options: -X, -x, -Y, -XY... nothing worked.
I tried to set DISPLAY myself, but nothing worked:
"DISPLAY is not set", "Can't open display" and other errors like that.
And right after ssh login, $DISPLAY is always empty.
And I'm not sure that I haven't forgotten some solution I already tried...
Can anyone help me get X11 forwarding working?
I have
CentOS release 6.5 (Final)
and my hosting provider is OVH
PS: sorry for my bad English

I encountered this same issue, due to an ~/.Xauthority file not being generated for new users upon connecting via ssh. I'd made all the appropriate changes to /etc/ssh/sshd_config and /etc/ssh/ssh_config and restarted the service via
/etc/init.d/sshd restart
But I never had any luck until I changed my SELinux settings after finding this - ssh X11 forwarding won't work
Of course, you only want to make changes to SELinux if that's acceptable for your use case. But for me, the fix was setting SELinux to permissive with
setenforce 0
and setting the following in /etc/selinux/config, so that the change persists after reboot:
SELINUX=permissive
I would like to emphasize that my situation is a non-critical operation within a (hopefully!) securely-managed intranet. I would NOT suggest turning off SELinux at work, or at home if you're hoping to open ports or configure VPN for your home network. Please consider: http://securityblog.org/2006/05/21/software-not-working-disable-selinux/
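For reference, a minimal sketch of that change (assuming a stock CentOS 6 layout with the standard SELinux userland tools):
getenforce                     # show the current mode (Enforcing/Permissive/Disabled)
setenforce 0                   # switch to permissive until the next reboot
vi /etc/selinux/config         # set SELINUX=permissive so the change persists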

Related

WSL2 Ubuntu 22.04 guest lagging when resolving domain name

I noticed lately (probably after upgrading the WSL kernel to 5.15.79.1) that almost every terminal command involving domain resolution is delayed by about 5 seconds. The same behaviour is shared by ssh, telnet, ping, wget, curl, etc. What is interesting is that dig and nslookup are free from the issue. The delay in resolving is easily visible with the verbose versions of wget or ssh; it looks like the examples below.
SSH example
WGET example
I must add that I previously disabled automatic resolv.conf generation with /etc/wsl.conf.
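For context, a sketch of what that looks like (the nameserver below is just one of the values I tried; the distro has to be restarted, e.g. with wsl --shutdown, for wsl.conf changes to take effect):
# /etc/wsl.conf
[network]
generateResolvConf = false
# /etc/resolv.conf, created manually afterwards
nameserver 1.1.1.1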
I already tried several values for nameserver in resolv.conf (8.8.8.8, 208.67.222.222, 1.1.1.1, etc.), redirecting the DNS queries to my local dnsmasq, and as a last resort I also disabled IPv6 for the Ubuntu guest via /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
None of those seem to help. Do you have any suggestions?

SSH -X "Warning: untrusted X11 forwarding setup failed: xauth key data not generated"

Hey, I'm having an issue getting ssh X forwarding to work. The setup is that I'm sshing into my Ubuntu VM from an OS X Yosemite host machine.
I already installed XQuartz on OS X and xauth on Ubuntu, and I believe I have all the correct options set in the ssh_config files.
I get the
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
X11 forwarding request failed on channel 0
message when opening a connection with ssh -X, and when I tried to run an X application:
xterm: Xt error: Can't open display:
xterm: DISPLAY is not set
I have an identical setup on my other machine, except it runs Mavericks, and it works fine. Is there something specific to Yosemite I have to worry about?
Note that some incomplete answers might lead to security flaws.
Using ssh -Y here means having fake xauth information, which is bad!
ssh -X should work since XQuartz, once enabled, uses xauth. The only problem is that ssh looks for xauth in /usr/X11R6/bin, while on macOS with XQuartz it lives in /opt/X11/bin.
Secure solution:
Enable the first option in the Security tab of preferences (Cmd-,) which enables authenticated connections.
Edit ~/.ssh/config and add XAuthLocation /opt/X11/bin/xauth to the host config (a sketch of the resulting entry follows below).
ssh -X your_server works in a secure manner.
Ensure xauth is installed on the destination host.
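A sketch of the resulting client-side ~/.ssh/config entry (the host name is a placeholder):
Host your_server
ForwardX11 yes
XAuthLocation /opt/X11/bin/xauth
With that in place, ssh your_server (or an explicit ssh -X) uses untrusted, xauth-backed forwarding, so -Y is not needed.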
On macOS Sierra, I now have to do ssh -Y instead of ssh -X to get a display from a linux machine to work on my Mac.
I received the same warning as you after upgrading to Yosemite.
After I added ForwardX11Trusted yes in my ~/.ssh/config file, the warning disappeared.
Do you have the following lines in your ~/.ssh/config file for enabling Trusted X11 forwarding?
Host APPROPRIATE_HOSTNAME
ForwardX11Trusted yes
ForwardX11 yes
OTHER_OPTIONS
Gilles Gouaillardet has the answer that solved this for me. Edit ~/.ssh/config to contain
Host *
XAuthLocation /opt/X11/bin/xauth
and ssh -X hostname now works (XQuartz 2.7.11, macOS 10.14 Mojave)
I already had the latest XQuartz 2.7.11 installed, but I think I've also updated the OS a few times since then. I reinstalled XQuartz 2.7.11, and now it is working fine.
ForwardX11Trusted is required even for connections you think are untrusted when your X server doesn't have the SECURITY extension. (Apple's X servers have a ton of visuals that take up over 100 lines, so I suggest xdpyinfo | grep SECURITY to check; if that returns no output, you don't have it.) There may be other reasons and exceptions, but this worked for me.
I've just downloaded the latest X11 version and it worked again
I just hit this issue using Mac OS X 10.6.8 to Linux Debian 9.
None of the solutions provided worked.
Root cause was: loopback interface was "DOWN" on the target Linux host.
I had to type the following on the target host to fix the issue
ip link set lo up
Same as the answer by user Xvalidated above, but there was no ssh_config file in my .ssh directory.
1. Copy /etc/ssh_config to ~/.ssh/ (if the file is not there)
2. edit
Host hostname
ForwardX11Trusted yes
ForwardX11 yes
As answered before, I would like to add one more thing that worked for me: reinstalling the X-supporting software.
When you log in to the cluster, do not use the -X or -Y options.
Example:
ssh -Y remotelogin: gives me an X11-related warning.
ssh remotelogin: no warning, works fine.

Write failed: broken pipe

I have a headless Ubuntu server. I ran a command on the server (snapraid sync) over SSH from my Mac. The command said it would take about 6 hrs, so I left it over night.
When I came down this morning, the Terminal on the Mac said: "Write failed: broken pipe"
I'm not sure if the command executed fully. Is this a timeout issue? If so, how can I keep the SSH connection alive overnight?
This should resolve the problem for Mac OS X 10.8.2.
add:
ServerAliveInterval 120
TCPKeepAlive no
to this file:
~/.ssh/config
Or, if you want it to be a global change in the SSH client, to this file
/private/etc/ssh_config
"ServerAliveInterval 120" basically says to "ping" the server with a NULL packet every 120s, and "TCPKeepAlive no" means to not set the SO_KEEPALIVE socket option (since you shouldn't need it with ServerAliveInterval already set, and apparently it's "spoofable" or some odd).
The servers similarly have something they could set for the same effect (ClientKeepAliveInterval) but typically you don't have control over those settings as much.
You can use "screen" util for that. Just connect to the server over SSH, start screen session by "screen" command execution, start your command there and disconnect (don't exit screen session). When you think your command already done you can connect to the server and attach to your screen session where you can see the command execution result/progress (in case one should be).
See "man screen" for more details.
This should resolve the problem for Ubuntu and Linux Mint.
add:
ServerAliveInterval 120
TCPKeepAlive yes
to
/etc/ssh/ssh_config file
Instead of screen I'd recommend tmux, an (arguably) better competitor to screen
tmux new-session -s {name}
That command creates a session. Any time after that you want to connect:
tmux a -t {name}
There are two solutions:
To update the server and restart sshd:
echo "ClientAliveInterval 60" | sudo tee -a /etc/ssh/sshd_config
To update the client:
echo "ServerAliveInterval 60" >> ~/.ssh/config
After having tried to change many of the above parameters in sshd_config (ClientAliveInterval, ClientAliveCountMax, TCPKeepAlive...), nothing changed. I spent hours and days looking for a solution on forums and blogs...
It turned out that the broken-pipe problem, which prevented connecting with ssh/sftp, came from the permission settings on the ChrootDirectory.
The ChrootDirectory has to be owned by root:root with 755 permissions.
More permissive modes (765/766/775...) won't work, but stricter ones do (e.g. 700).
If you need to give write permission to the connected user, you can grant it in subdirectories.
If the chroot is owned by sftpUser:sftpGroup, it won't work either...
chroot -> root:root 755
|
--- subdirectories -> sftpUser:sftpGroup 700 up to 770
Hope this helps.
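A sketch of setting that up from the shell (the path, user and group names are placeholders):
chown root:root /home/sftpchroot
chmod 755 /home/sftpchroot                 # the chroot itself: root-owned, not group/other writable
mkdir -p /home/sftpchroot/upload
chown sftpUser:sftpGroup /home/sftpchroot/upload
chmod 770 /home/sftpchroot/upload          # write access only inside the subdirectory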
If you're still having problems after editing /etc/ssh/sshd_config, or if ~/.ssh/config
simply does not exist on your machine, then I highly recommend reinstalling ssh. This solution took about a minute to fix both the "Broken pipe" errors and the "closed by remote host" errors.
sudo apt-get purge openssh-server
sudo apt update
sudo apt install openssh-server
jeremyforan's answer is correct, however I've found that if you are trying to use scp it is necessary to explicitly point it to a config file configured as described; it seems not to obey the normal config hierarchy. For example:
scp -F ~/.ssh/config myfile joe@myserver.com:~
works, while omitting the -F still results in the broken pipe error.
Ubuntu:
ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=1 user@x.x.x.x
I use an ASUS router with two internet input lines. I pinned my IP to a particular line, and it works.

ssh server connect to host xxx port 22: Connection timed out on linux-ubuntu [closed]

I am trying to connect to a remote server via ssh but I am getting a connection timeout.
I ran the following command
ssh testkamer@test.dommainname.com
and got the following result
ssh: connect to host testkamer@test.dommainname.com port 22: Connection timed out
but if I try to connect to another remote server, I can log in successfully.
So I think there is no problem with ssh itself, and when another person tries to log in with the same login and password, they can log in to the server successfully.
Please help me.
Thanks.
Here are a couple of things that could be preventing you from connecting to your Linode instance:
DNS problem: if the computer that you're using to connect to your remote server isn't resolving test.kameronderdehamer.nl properly, then you won't be able to reach your host. Try to connect using the public IP address assigned to your Linode and see if it works (e.g. ssh user@123.123.123.123). If you can connect using the public IP but not using the hostname, that would confirm that you're having some problem with domain name resolution.
Network issues: there might be some network issues preventing you from establishing a connection to your server. For example, there may be a misconfigured router in the path between you and your host, or you may be experiencing packet loss. While this is not frequent, it has happened to me several times with Linode and can be very annoying. It could be a good idea to check this just in case. You can have a look at Diagnosing network issues with MTR (from the Linode library).
That error message means the server to which you are connecting does not reply to SSH connection attempts on port 22. There are three possible reasons for that:
You're not running an SSH server on the machine. You'll need to install it to be able to ssh to it.
You are running an SSH server on that machine, but on a different port. You need to figure out on which port it is running; say it's on port 1234, you then run ssh -p 1234 hostname.
You are running an SSH server on that machine, and it does use the port on which you are trying to connect, but the machine has a firewall that does not allow you to connect to it. You'll need to figure out how to change the firewall, or maybe you need to ssh from a different host to be allowed in.
EDIT: as (correctly) pointed out in the comments, the third is certainly the case; the other two would result in the server sending a TCP "reset" packet back upon the client's connection attempt, resulting in a "connection refused" error message rather than the timeout you're getting. The other two might also be the case, but you need to fix the third first before you can move on.
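A quick way to tell these cases apart from the client side is a plain TCP probe (assuming netcat is installed; the host below is a placeholder): a timeout points to a firewall silently dropping packets, "connection refused" means nothing is listening on that port, and "succeeded" means something is answering.
nc -vz -w 5 test.dommainname.com 22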
I got this error and found that I didn't have my SSH port (a non-standard number) whitelisted in ConfigServer Firewall.
Just adding this here because it worked for me. Without changing any settings (to my knowledge), I was no longer able to access my AWS EC2 instance with: ssh -i /path/to/key/key_name.pem admin@ecx-x-x-xxx-xx.eu-west-2.compute.amazonaws.com
It turned out I needed to add a rule for inbound SSH traffic, as explained here by AWS. For Port range 22, I added 0.0.0.0/0, which allows all IPv4 addresses to access the instance using SSH.
Note that making the instance accessible to all IPv4 addresses is a security risk; it is acceptable for a short time in a test environment, but you'll likely need a longer term solution.
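If you prefer the AWS CLI over the console, the equivalent rule can be added roughly like this (the group ID and CIDR are placeholders; restricting the CIDR to your own address is the longer-term fix mentioned above):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.7/32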
If you are on a public network, the firewall will block all incoming connections by default. Check your firewall settings, or use a private network to SSH in.
The possibility could be that SSH is not enabled on your server/system.
Check whether sudo systemctl status ssh reports the service as active or not.
If it's not active, try installing with the help of these commands
sudo apt update
sudo apt install openssh-server
Now try to access the server/system with following command
ssh username@ip_address
This happens because of the firewall configuration.
Reset your firewall connection from your hosting website.
It will start working.
After connecting to the server again, add this to your (ufw) firewall rules:
sudo ufw allow 22/tcp
There can be many possible reasons for this failure.
Some are listed above. I faced the same issue; it is very hard to find the root cause of the failure.
I would recommend you check the session timeout for ssh in the ssh_config file.
Try increasing the session timeout and see if it fails again.
My VPN connection was not enabled. I was trying every possible way to open up the firewall and ports until I realized I was working from home and my VPN connection was down.
But yes, Firewall and ssh configurations can be a reason.
Try connecting to a VPN, if possible. That was the reason I was facing the problem.
Tip: if you're using an ec2 machine, try rebooting it. This worked for me the other day :)
I had this issue while trying to ssh into a local nextcloud server from my Mac.
I had no issues ssh-ing in once, but if I tried to have more than one concurrent connection, it would hang until it timed out.
Note, I was sshing to my user@public-ip-address.
I realized the second connection only failed when I tried to ssh in while on the same network, i.e. my home network.
Furthermore, when I tried ssh user@server-domain it worked!
The end fix was to use ssh user@server-domain rather than ssh user@public-ip.
I have experienced a couple of nasty issues that lead to these errors, and these are different from everyone else's answer here:
Wrong folder access rights. You need to have specific directory permissions on your ssh folders and files (see the sketch after this list).
a. The .ssh directory permissions should be 700 (drwx------).
b. The public key (.pub file) should be 644 (-rw-r--r--).
c. The private key (id_rsa) on the client host, and the authorized_keys file on the server, should be 600 (-rw-------).
Nasty docker network configuration. This just happened to me on an AWS EC2 instance. It turned out that I had a docker network with an ip range that interfered with the ssh access granted by the security group and VPC. The docker network's range was e.g. 192.168.176.0/20 (i.e. a range from 192.168.176.1->192.168.191.254), whereas the security group had a range of 192.168.179.0/24; interfering with the SSH access.
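For point 1, a sketch of the usual permission fix (the file names assume a default id_rsa key pair):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa              # private key, on the client
chmod 644 ~/.ssh/id_rsa.pub          # public key
chmod 600 ~/.ssh/authorized_keys     # on the server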
I had this error when trying to SSH into my Raspberry Pi from my MBP via a bash terminal. My RPi was connected to the network via wifi/wlan0, and its IP had been changed on restart by my router's DHCP.
Check that the IP being used to log in via SSH is correct. Re-check the IP of the device being SSH'd into (in my case the RPi), which can be checked using hostname -I.
Confirm/amend the SSH login credentials on the "guest" device (in my case the MBP), and it worked fine on my next attempt.
I faced a similar issue. I checked for the below:
If ssh is not installed on your machine, you will have to install it first. (You will get a message saying ssh is not recognized as a command.)
Check whether port 22 is open on the server you are trying to ssh to.
If the remote server is under your control and you have permissions, try disabling the firewall on it.
Try to ssh again.
If the port is not the issue, then you will have to check the firewall settings, as that is what is blocking your connection.
For me too it was a firewall issue between my machine and the remote server. I disabled the firewall on the remote server and I was able to make a connection using ssh.
My main machine is Windows 10 and I have a CentOS 7 VBox.
Search your main machine for "known_hosts".
Usually, the known_hosts location on Windows is user/.ssh/known_hosts.
Open it using Notepad and delete the line with your CentOS VBox IP,
then try to connect in your terminal.
For macOS users, you can find known_hosts at ~/.ssh/known_hosts.
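Instead of editing the file by hand, OpenSSH's ssh-keygen can remove the stale entry (the IP below is a placeholder for the VBox guest's address):
ssh-keygen -R 192.168.56.101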
Make sure to ask the admin to authorize your device.
On Linux run:
sudo zerotier-cli listnetworks
if it returns status ACCESS DENIED ask the admin to authorize your node. This is mentioned here.
https://discuss.zerotier.com/t/solved-cant-join-network/1919
This issue is also caused if the Dynamic Host Configuration Protocol (DHCP) is not set up properly.
To solve this, first check if your IP address is configured using
ping ipaddress
If there is no packet loss and the IP address is working fine, try another solution. If there is no response and you have 100% packet loss, it means that your IP address is not working and not configured.
Now configure your IP address using
sudo dhclient -v devicename
To check your device name you can use the ip a command.
For example, my device was usb0 since I had connected the device through USB.
This will configure an IP address automatically and you can even see which one was assigned. You can check with ip a again to confirm.
This may be very case-specific and only work in some cases, but
check whether you were previously connecting through some VPN software/application.
Try connecting to the VPN again. That worked in my case.
This happened to me after enabling port 22 with "sudo ufw allow ssh". Before that, I was getting a refusal from my machine when entering with ssh from another one. After enabling it, I thought it would work, but instead it showed the message "connection timed out". As I had just installed Ubuntu with the option of installing basic functions alongside, I checked whether I had openssh-server with the command sudo apt list --installed | grep openssh-server. It turned out that Ubuntu had installed openssh-client by default instead. I uninstalled it and installed openssh-server with the basic commands:
sudo apt-get purge openssh-client
sudo apt update
sudo apt install openssh-server
After that, a simple "sudo ufw allow ssh" worked perfectly and I was finally able to access the machine with an ssh command.
What worked for me: I went to my security group and reset my IP, and it worked.
Here are some considerations I went through to resolve a similar issue I had:
Port 22
IGW (Internet Gateway)
VPC
Scene 1: This is for port 22 not being enabled with the right configuration. If the rule is set to Custom or My IP, the probable scenario is that this won't work.
Scene 2: When you delete the internet gateway, the network is created and the instance will be functional too, but routing from the internet will not work. Hence make sure that if there is a VPC, it has an Internet Gateway attached.
Scene 3: Check the VPC for the subnet associations and routing table entries. This might tell you the cause; I found one in this kind of troubleshooting. The route ended up in a "blackhole" (it shows up in the route table section of the console). To fix this I had to check my internet gateway and found the issue with the IGW.
Moral of the story: always trace backward through the network!
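If you want to script that backward trace, the VPC's route tables can be listed with the AWS CLI (the VPC ID is a placeholder); a route whose state shows "blackhole" instead of an active igw- target is the symptom described above:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0123456789abcdef0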
In my case I'm on Windows; I reset my firewall settings and that fixed it.
If you get any error, first do a basic check of the version with ssh -V, and if it is not installed, install it with the sudo apt-get install openssh-server command.
Check your virtual machine's ssh service with sudo service ssh status at the console.
Check the "Active" row; if it says inactive (dead), run sudo service ssh start.
Result: now you can verify your connection with the sudo service ssh status command and send an ssh connection request.
Reset the firewall and reboot your VPS from your hosting service; it will start working perfectly fine.
Check whether you have accidentally deleted the default VPC or default subnets while creating your own VPC and subnets.
I made this mistake while creating a VPC, hence got this error while connecting via ssh.
Also check whether you have attached an IGW to the public subnets.
It's not complicated.
First, check that openssh is active, then go and disable your firewall (use your control panel).
To disable the firewall, use PuTTY or any alternative and run sudo ufw disable.
Try now.
Update the security group of that instance. Your local IP must have changed; it flips every so often. You will have to go and update the security group.

Why does running "apachectl -k start" not work, but "sudo apachectl -k start" does?

I'm working on OS X with the default installation of Apache. For some reason, when I run the apachectl command without sudo I get "no listening sockets available / unable to open logs". I'm guessing this is a permissions thing, so can someone help me out? I'm using Apache 2.2.
Also, a side question: where is the Apache binary, i.e. the "exe" that Linux executes? I'm trying to integrate my server with Aptana Studio, and it requires the path to the Apache install. I know in Windows this would be C:\path\to\httpd.exe, but I don't know how this works on Linux.
Is your server listening on port 80? (Usually) only root is allowed to open ports below 1024. Hence the need for sudo.
As you can see, lots of people wonder how to get around this. One possible solution is to perform port forwarding on your router (I'm assuming here that you are behind a router...). Then incoming connections on port 80 can be forwarded to e.g. port 8080, so locally you only need to listen on port 8080. (There may be more elegant solutions... somebody else will post them.)
I think generally (on both OS X and Linux - I'm not sure which one you're referring to) the httpd binary is located at: /usr/sbin/httpd
If you need to be able to restart Apache, and you can't do so as root (for whatever reason..), then you may have to settle for a non 'well known' port.
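A sketch of that workaround, assuming the stock OS X Apache layout (the config path may vary; check your install):
# /etc/apache2/httpd.conf
Listen 8080
# the log directives may also need to point somewhere your user can write, e.g.:
ErrorLog /tmp/apache_error_log
# with these changes, apachectl -k start can run unprivileged, since 8080 is above 1024
apachectl -k start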
Try this
(with PHP)
$a = shell_exec('sudo -u root -S /etc/init.d/apache2 restart < /home/$user/passfile'); // -S makes sudo read the password from standard input, here redirected from the file
The password should be stored in passfile (replace $user with the actual username; with single quotes PHP will not expand $user).