Can't run "ssh -X" on MacOS Sierra - ssh

I just upgraded to macOS Sierra, and I realized that I can't seem to run the "ssh -X" command in the Terminal anymore. It used to launch xterm windows, but now it behaves as if I hadn't passed the -X option at all. It was working absolutely fine right before I updated. Other than going from OS X Yosemite to macOS Sierra, I didn't change anything else in the setup.
EDIT:
As suggested, this is what I found in the debug logs that appears to be causing the problem:
debug1: No xauth program.
Warning: untrusted X11 forwarding setup failed: xauth key data not generated

I didn't need to reinstall XQuartz, but, based on Matt Widjaja's answer, I came up with a refinement.
sudo vi /etc/ssh/ssh_config (this is the ssh client config, not sshd_config)
Under the Host * entry, add the following (or add them per host where appropriate):
XAuthLocation /usr/X11/bin/xauth (the location of xauth changed in Sierra)
ServerAliveInterval 60 (pings the server every 60 seconds to keep your ssh connection alive)
ForwardX11Timeout 596h (allows untrusted X11 connections beyond the 20-minute default)
There is no need to restart ssh; existing ssh client connections, of course, need to be reopened.
It sounds like -Y (trusted X11) would be preferable to untrusted. If you switch over to trusted, the ForwardX11Timeout line can probably be removed.
The ServerAliveInterval line is also an optional preference.
It may also be possible to make these changes in ~/.ssh/config (the user's config file), but the permissions have to be correct. A consolidated sketch of the resulting stanza appears below.
EDIT: I removed ForwardX11 and ForwardX11Trusted. They aren't needed and ForwardX11 is less secure and causes problems for git (or other tools using ssh).
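For reference, a minimal sketch of the resulting Host * stanza, assuming the Sierra xauth path used above (on some installs the path is /opt/X11/bin/xauth instead):
Host *
    XAuthLocation /usr/X11/bin/xauth
    ServerAliveInterval 60
    ForwardX11Timeout 596h
The same stanza can go into ~/.ssh/config instead, provided the file's permissions are strict enough (e.g. chmod 600 ~/.ssh/config).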

I noticed macOS Sierra reset my X11 settings and disabled my xauth program. To re-enable xauth on macOS Sierra:
Reinstall X11/XQuartz, which presumably undoes whatever changes macOS Sierra made. I also made the changes below, although it sounds like reinstalling alone might be enough.
Open a Terminal
sudo <text editor of your choice> /etc/ssh/sshd_config
In that file, uncomment the following lines and set them to these values:
X11Forwarding yes
X11DisplayOffset 10
[Update 10/07/2017] When you reinstall X11/XQuartz, most importantly, it should add a line XAuthLocation <path_to_your_xauth>; mine was /opt/X11/bin/xauth. This was probably the key step that explains why reinstalling worked. (A consolidated sketch of the resulting sshd_config lines follows after the restart step below.)
Restart ssh via the terminal. I did this by running:
sudo launchctl unload /System/Library/LaunchDaemons/ssh.plist
sudo launchctl load -w /System/Library/LaunchDaemons/ssh.plist
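Putting the pieces together, a sketch of the relevant /etc/ssh/sshd_config lines after these edits; the xauth path is whatever your XQuartz install provides (mine was /opt/X11/bin/xauth, older installs may use /usr/X11/bin/xauth):
X11Forwarding yes
X11DisplayOffset 10
XAuthLocation /opt/X11/bin/xauth
Then restart sshd with the launchctl unload/load commands above.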

I'm having the same issues with X11 forwarding with the -X option after upgrading to macOS Sierra.
Have a look at the ssh option -Y (trusted X11 forwarding). Using ssh -Y <host>, things work for me.

It's an old question, but I recently ran into the same issue on my Mac running 10.12.6. The DISPLAY variable was not set in the terminal and ssh -X didn't work. This is what solved the problem for me:
Reinstall XQuartz using Homebrew:
brew cask install xquartz (the option --force may be necessary)
Add the XQuartz launch agent to the system defaults (following the solution in a Reddit post):
launchctl load -w /Library/LaunchAgents/org.macosforge.xquartz.startx.plist
Restart the system.
After doing these, my DISPLAY variable is set properly:
$ echo $DISPLAY
/private/tmp/com.apple.launchd.mfXFpzZ0gC/org.macosforge.xquartz:0
And X11 forwarding in ssh works as well.

Just adding the one line XAuthLocation /usr/X11/bin/xauth to /etc/ssh/ssh_config works on my Mac running macOS Sierra: I can ssh into a Linux host, run X Window programs remotely, and have them display under XQuartz on my Mac.

My solution to this was the following.
(1) Launch XQuartz before trying ssh -X. In the XQuartz options, I just enabled 'Open at login', so it is always running in the background.
(2) Go to the XQuartz Preferences menu and, on the Security window, check the box that says "Allow connections from clients".
After doing these things, everything works fine.
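If you prefer scripting that preference change instead of clicking through the GUI, something like the following may work; the defaults domain and key are assumptions based on older XQuartz releases and may differ on yours:
# Assumed domain/key; check what exists first with: defaults read org.macosforge.xquartz.X11
defaults write org.macosforge.xquartz.X11 nolisten_tcp -bool false
Restart XQuartz afterwards for the change to take effect.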

I just upgraded my MacBook from El Capitan to Sierra. Simply reinstalling XQuartz did the trick for me, using ssh -X [linux server].

I spent the whole day looking for a solution, only to realize that recent Sierra does not ship with XQuartz installed (https://support.apple.com/en-gb/HT201341). After installing it (https://www.xquartz.org/), everything works.

If XQuartz is installed, all that is needed is to add the line "ForwardX11Trusted yes" under "Host *" in the /etc/ssh/ssh_config file.

Restarting XQuartz worked for me.

In my case, adding XAuthLocation /opt/X11/bin/xauth to /etc/ssh/sshd_config (note that it's not /etc/ssh/ssh_config) on the macOS host worked, after installing XQuartz via brew install --cask xquartz, since XQuartz provides the xauth binary.
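For context, this is the sshd (incoming) side on the Mac, so after adding the line you also need to reload sshd; a sketch reusing the launchctl commands shown earlier in this thread:
# /etc/ssh/sshd_config on the Mac you ssh *into*
XAuthLocation /opt/X11/bin/xauth
# then reload sshd
sudo launchctl unload /System/Library/LaunchDaemons/ssh.plist
sudo launchctl load -w /System/Library/LaunchDaemons/ssh.plist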

Related

WSL: after trying all the steps to upgrade to version 2, I get a failure: WslRegisterDistribution failed with error: 0x80370102

Launching any WSL distro from within VS Code gives "No WSL distros found. New distros can be installed from the Microsoft Store."
Source: Remote - WSL (Extensions)
BUTTON: "Add Distro" --> takes you to the Store. Install a version of Linux, try to start it, and you get the error below. I had WSL working before and now cannot get it back to its original working state.
I tried some things to update to version 2; I could list the version with the command "wsl -l -v" and it showed up, but not anymore. I've got the features set properly and the WSL feature is enabled. I did try a BIOS "Virtualization Technology" setting; this is on an HP laptop with a 64-bit AMD CPU. That setting is off now after finding it doesn't help.
Features on: "Virtual Machine Platform", "Windows Hypervisor Platform", "Windows Subsystem for Linux". There are others, but these seem relevant.
Installing, this may take a few minutes...
WslRegisterDistribution failed with error: 0x80370102
Error: 0x80370102 The virtual machine could not be started because a required feature is not installed.
Press any key to continue...
This did need the BIOS setting. Also, being on Windows 10 Home, I didn't have the Hyper-V feature, but there is a batch script that can add it. There is an MSI to install WSL 2, so wsl.exe was upgraded. This got "wsl -l -v" to show WSL version 2 for the Ubuntu 20.04 install. Some resets were also needed to get "wsl -l -v" to show the distro as Running.
I'm not 100% certain that these steps are the cure, but they may help someone who cannot get Linux to start under the Windows Subsystem for Linux without various errors. It took me the better part of a day to fix.
Find the following by googling "del hyper-v.txt"; you will hit a page telling you "How to activate Hyper-V in Windows 10 Home".
Source of the batch command:
https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/915
pushd "%~dp0"
dir /b %SystemRoot%\servicing\Packages\*Hyper-V*.mum >hyper-v.txt
for /f %%i in ('findstr /i . hyper-v.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i"
del hyper-v.txt
Dism /online /enable-feature /featurename:Microsoft-Hyper-V -All /LimitAccess /ALL
pause
Reboot; still not running, but the following sequence was needed.
wsl --shutdown
netsh winsock reset
netsh int ip reset
ipconfig /release
ipconfig /renew
ipconfig /flushdns
wsl -l -v
NAME STATE VERSION
* Ubuntu-20.04 Running 2
Initially there was only a root user. I set a new default user with the following command, run from the %userprofile% folder (omit the dash and dot from the distro name):
Ubuntu2004 config --default-user addedUsername
WSL is now seen by the VS Code extensions and no longer flags an update to version 2. Now on to the RxJS tutorial under VS Code.
Make sure all related settings in the BIOS are enabled.
Make sure all related features in Windows Features are turned on.
If there is still an error:
Turn off all related features.
Restart.
Turn them back on.
Restart again.
Related features = (virtualization).
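For reference, the two Windows features that WSL 2 requires can also be enabled from an elevated command prompt; a sketch of that route (a reboot is still needed, and setting the default version to 2 assumes the WSL 2 kernel update MSI is already installed):
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2
wsl -l -v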

Why does the ssh connection time out in VS Code?

I installed Git instead of OpenSSL to use Remote-SSH in VS Code. However, after I completed the config file and tried to connect to the remote host, I failed. The error info is shown below.
error info:
[11:27:12.631] remote-ssh#0.48.0
[11:27:12.632] win32 x64
[11:27:12.656] SSH Resolver called for "ssh-remote+23321", attempt 1
[11:27:12.659] SSH Resolver called for host: 23321
[11:27:12.659] Setting up SSH remote "23321"
[11:27:12.790] Using commit id "26076a4de974ead31f97692a0d32f90d735645c0" and quality "stable" for server
[11:27:12.798] Testing ssh with ssh -V
[11:27:13.099] ssh exited with code: 0
[11:27:13.100] Got stderr from ssh: OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019
[11:27:13.128] Running script with connection command: ssh -T -D 49485 23321 bash
[11:27:13.132] Install and start server if needed
[11:27:13.151] Terminal shell path: C:\Windows\System32\cmd.exe
[11:27:30.151] Resolver error: Connecting with SSH timed out
[11:27:30.178] ------
I had the same problem, and the above solutions didn't work with my setup,
but the following setting did work:
"remote.SSH.useLocalServer": false
I got this solution from a GitHub issue report and fix.
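For clarity, that setting lives in the VS Code user settings JSON (open it via the Preferences: Open Settings (JSON) command); a minimal sketch:
{
    "remote.SSH.useLocalServer": false
}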
In my case, the problem was caused by an authentication process that took too long on the server side.
I solved it by extending the Connect Timeout from 15 to 30 seconds.
Instructions:
Open the VS Code Command Palette (via keyboard shortcut or from the View menu).
Search for Remote-SSH: Settings.
Scroll until you find Connect Timeout.
Change it to a duration longer than 15 seconds.
Press F1
Remote-SSH: Settings
Connect Timeout: changing it from 15 seconds to 60 seconds solved my connection issue.
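If you would rather edit settings.json directly, the same change can be made there; the setting ID below is my assumption of how the Connect Timeout option is stored, so verify it against the Settings UI:
{
    "remote.SSH.connectTimeout": 60
}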
You can try the following approaches:
1. SSH to your remote server, then run the following commands to clean the data and bin folders under the .vscode-server folder on the server:
cd ~/.vscode-server
rm data/* -rf
rm bin/* -rf
2. If step 1 does not work, SSH to your remote server and delete the entire .vscode-server folder with the following command:
rm -rf ~/.vscode-server
Please note that this will also remove the extensions that you installed on the server.
3. Downgrade the Remote-SSH extension in VS Code. Find the extension in the Extensions view, right-click it, and you will find the option "Install Another Version...". Install the version previous to the current one; if that does not work, keep downgrading.
I had the same problem before; I solved it by deleting "terminal.integrated.inheritEnv": false from ~/.config/Code/User/settings.json.
I found the solution in this thread from user oreilm49:
https://github.com/microsoft/vscode-remote-release/issues/1137
In the VS Code settings:
search for conpty and uncheck it.
I had the same issue; my problem was solved after changing settings in the JSON file:
I removed "terminal.integrated.inheritEnv": false from ~/.config/Code/User/settings.json
I added "remote.SSH.useLocalServer": true to ~/.config/Code/User/settings.json
It worked for me after many different attempts.
This might be a very foolish solution, but it actually worked for me, so I will write it down in case other people run into the same problem.
I made modifications to the SSH config file, and then every connection attempt ran into the 'Connecting SSH timed out' error. I tried many possible solutions, but none of them solved my problem.
Then I just closed VS Code and restarted it. After that, everything worked.
I had a case of this. My client (local computer) is a Mac, and I was connecting to a Linux host. I just went to the "Remote Platform" setting under the Remote.SSH settings and explicitly told it that I am connecting to a Linux remote. After this, it started to work.
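In settings.json that option is stored as a host-to-platform map; a sketch, where my-linux-host is a placeholder for the Host alias from your SSH config:
{
    "remote.SSH.remotePlatform": {
        "my-linux-host": "linux"
    }
}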
I had this issue because of a version mismatch between client and server. After updating both to the same version, it worked for me.
For me the issue looked like a timeout at first. I tried increasing the timeout in settings, but later found the issue was with "tar".
The vscode-server.tar.gz (the file name may differ slightly) could not be installed because tar was not present on my host.
So I installed tar on the host with "yum install tar",
then tried reconnecting to the server, and it worked.

Vagrant can't connect to the VM

EDIT6: submitted an official path bug: https://github.com/mitchellh/vagrant/issues/7512
EDIT5: When I do vagrant destroy and vagrant up, everything works easily. But when I turn off the VM and turn it back on (you have to restart your PC some day), it won't work again. Either the vagrant up sequence when the VM is created is bugged or VirtualBox is bugged. Destroying and rebuilding the VM is not an option, because the DB migration and everything takes at least ~30 minutes. Either way, DON'T USE VAGRANT ON WINDOWS 10.
EDIT4: I downgraded to VirtualBox 5.0.0.10; that fixed the wrong-path problem, but the error Command not in installer persists.
EDIT3: When I ran vagrant up --debug, I found out that it cycles. It gets to the line
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "showvminfo", "8aaee3a3-806f-48ad-9928-91e2b7baba5d", "--machinereadable"]
and then it does
INFO subprocess: Command not in installer, restoring original environment...
The path to the VM uses forward slashes instead of backslashes. Is this a bug? Is there a way to manually set the path to the VM? I have put C:\Program Files\Oracle\VirtualBox in my PATH.
EDIT2: DON'T USE VAGRANT ON WINDOWS 10; it's bugged in many ways, and the VM tooling is not optimized for Windows 10 yet, so you'll get a bunch of issues that you won't be able to solve. Also tried Otto from HashiCorp; not working either. Rip.
EDIT: Okay, so when I do vagrant destroy and vagrant up, after 10 minutes of installation it works like a charm. But after I restart my PC or log out in any way, Vagrant is unable to connect to the VM, neither with a private key nor with login/password. Is that a bug?
When I do vagrant up, the VM starts properly, but Vagrant is unable to connect. All it says is Warning: Remote connection disconnect. Retrying...
When I try to connect via vagrant ssh, I get only ssh_exchange_identification: read: Connection reset by peer. When I check the GUI of the VM, it is waiting for login, and when I log in with the default login/password, it works as intended, so the problem must be Vagrant not being able to connect to the VM.
I tried:
checking that my PC supports virtualization and that it is turned on
trying to connect with a password instead of a key
configuring networking adapters
turning off the firewall
a clean reinstall
I am using Vagrant 1.8.1 and VirtualBox 5.0.20 on Windows 10.
This is my Vagrantfile:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.provider :virtualbox do |vb|
vb.memory = 2048
vb.gui = true
vb.cpus = 2
end
config.vm.network :private_network, type: "dhcp"
config.vbguest.auto_update = false
config.ssh.insert_key = false
config.vm.provision :shell, path: "bootstrap.sh"
end
[Edit 17/06/2016]
The problem should be resolved with VirtualBox 5.0.22.
https://www.virtualbox.org/wiki/Changelog
https://www.virtualbox.org/ticket/15412
[Original answer below]
In contrast to my earlier answer, I now don't think that I encounter the same problem you have described here. However, I still think that you are encountering a different variation of the problem.
From feedback received from VirtualBox development (https://www.virtualbox.org/ticket/15412) I learned that VirtualBox 5.0.20 includes changes to the NAT forwarding rules to address other bugs. When a VM is saved and started again, VirtualBox now disconnects the virtual network cable for 5 seconds. This is supposed to trigger the DHCP client to request a new lease; that information in turn is used by VirtualBox to infer the IP address, and NAT should then work.
In my particular case I encounter this problem with Ubuntu 16.04 as the guest VM, whereas with Ubuntu 14.04 it works. This indicates to me that the DHCP client on Ubuntu 14.04 does request a new lease after the cable is disconnected by VirtualBox, whereas this is not the case with Ubuntu 16.04.
To verify that you encounter the same problem, I wonder if you could run the test below and let me know.
Log in to the Trusty VM console (i.e. the one displayed when you run the VM in the foreground)
Install 'arping' (sudo apt-get -y install arping)
Create the script 'sendARP.sh' below:
#!/bin/bash
# Send one ARP for this guest's own address (gratuitous ARP) so VirtualBox can re-learn it.
IFACE=$(ifconfig | grep 'Link encap:Ethernet' | awk '{print $1}')   # first Ethernet interface
IP=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')   # its IPv4 address, excluding loopback
arping -c 1 -i $IFACE $IP   # send a single ARP on that interface
Make it executable: chmod +x sendARP.sh
Save the Trusty VM (vagrant suspend)
Start your Trusty VM from the saved state (vagrant up)
Log in to the Trusty VM console (i.e. the one displayed when you run the VM in the foreground)
Run the script: sudo ./sendARP.sh
Test whether you can connect via SSH from the remote location / VirtualBox host
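If that test restores SSH connectivity, the likely cause is that the guest's DHCP client did not react to the simulated cable pull, so forcing a lease renewal inside the guest after each resume may also work as a rough workaround (the interface name eth0 is an assumption and may differ):
sudo dhclient -r eth0   # release the current lease
sudo dhclient eth0      # request a fresh lease so VirtualBox can re-learn the guest's address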
Bugs:
https://github.com/mitchellh/vagrant/issues/7306
https://www.virtualbox.org/ticket/15412

Found unknown Linux distribution on /dev/sdb2: grub configuration dual boot Arch Linux and NetBSD-7.0

I have Arch Linux on /dev/sdb1 and NetBSD-7.0 on /dev/sdb2.
On Arch Linux, when I run sudo grub-mkconfig -o /boot/grub/grub.cfg I get a message like Found unknown Linux distribution on /dev/sdb2, but when I reboot there is no grub option for that unknown Linux distribution, which I know is NetBSD-7.0.
How can I add NetBSD-7.0 to my grub menu?
There is a similar post; I am currently looking into it.
UPDATE: I mounted the NetBSD partition with sudo mount -t ufs -o ro,ufstype=ufs2 /dev/sdb2 /mnt/ (ufstype=44bsd did not work) and then ran grub-mkconfig -o /boot/grub/grub.cfg, but the issue persists.
UPDATE: I rebooted and pressed c to get the grub command line. The following commands booted NetBSD-7.0:
ls
I ran ls to see the correct names of the disks and partitions; /dev/sdb2 on Linux was (hd0,gpt2) in grub. Then I ran the following:
insmod ufs2
set root=(hd0,gpt2)
knetbsd /netbsd
boot
And NetBSD-7.0 booted.
To add a NetBSD option to the grub menu, I modified the file /etc/grub/40_custom on Arch Linux as below:
menuentry "NetBSD-7.0"{
insmod ufs2
set root=(hd0,gpt2)
knetbsd /netbsd
}
However, after modifying 40_custom like this, the NetBSD option does not appear in the grub menu. I don't know why.
Unless that is a typo, it looks like the 40_custom file is in the wrong directory. It should be located at /etc/grub.d/40_custom; notice the .d.
If your /boot is located on a separate partition, make sure that it is mounted with mount /boot before generating the grub.cfg. Otherwise your new grub.cfg won't be used.
Check which partition grub is loading the configuration from by running echo ${prefix} within the grub command line. It's possible that grub is loading the configuration from a partition that you don't expect.
Verify that NetBSD was added to the config with grep -i netbsd /boot/grub/grub.cfg before rebooting, to avoid some frustration after generating grub.cfg.
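Tying those answers together, a sketch of the fix under the assumption that the custom entry belongs in the stock /etc/grub.d/40_custom (which already starts with the required header lines):
sudo <text editor of your choice> /etc/grub.d/40_custom   # append the menuentry below the existing header
sudo mount /boot                                          # only if /boot is on a separate partition
sudo grub-mkconfig -o /boot/grub/grub.cfg
grep -i netbsd /boot/grub/grub.cfg                        # confirm the entry made it into the config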

Apache doesn't start on Snow Leopard using Terminal but works using Web Sharing (System Preferences)

I am using the default Apache installation that comes with Snow Leopard, and I have some other things installed, like MySQL, Rudix (Unix ports and packages), and Xcode.
When I type:
$ sudo apachectl start
I receive this output:
dyld: Symbol not found: _apr_dir_open$INODE64
Referenced from: /usr/local/sbin/httpd
Expected in: /usr/local/lib/libapr-1.0.dylib
in /usr/local/sbin/httpd
/usr/local/sbin/apachectl: line 78: 2023 Trace/BPT trap $HTTPD -k $ARGV
I don't know if it's related, but my .bash_profile has this line (I added it because import MySQLdb was not working in Python):
export DYLD_LIBRARY_PATH="/usr/local/mysql/lib/:$DYLD_LIBRARY_PATH"
If I tick Web Sharing in System Preferences, Apache starts and works, but I want to start it using the terminal; maybe I forgot to pass important arguments to the apachectl command.
The Web Sharing option in System Preferences enables the Apple-supplied Apache. Its apachectl is /usr/sbin/apachectl. You appear to have installed another version of Apache in /usr/local; note the /usr/local/sbin/apachectl path. So you are not using the Apple-supplied Apache installation when you run from the terminal, and the version you are using appears not to have been installed correctly. One way to ensure you are using the Apple-supplied Apache is to specify the full path:
$ sudo /usr/sbin/apachectl start
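To confirm which apachectl and httpd the shell picks up first (and that the /usr/local copies are the ones shadowing the system binaries), something like this may help:
type -a apachectl httpd   # lists every match on PATH in lookup order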