Emacs Tramp smartcard config - ssh

I use SSH-key based authentication, with the keys held on a smartcard. I am migrating to a new machine; on my previous machine I had Emacs+Tramp set up nicely with this workflow.
However, now I am having issues. I found a solution, but I am wondering if there is a better way.
The setup
If I have an .ssh/config with the following entry:
Host remote
    HostName 1.2.3.4
    User root
remote has my SSH keys authorised, and if I run ssh remote in a normal shell, I am prompted for my smartcard PIN and can SSH with no issues.
However, in Emacs using Tramp, I would normally connect by entering the file path ssh:remote:. In my fresh installation it instead prompts me for a username, and then a password.
First attempts
Following the suggestion of this answer, I increased the log level of tramp.
It showed me that tramp was running the following command: exec ssh -o ControlMaster=auto -o ControlPath='tramp.%C' -o ControlPersist=no -e none remote. Running this in a normal shell worked as expected.
I found that running ssh remote in eshell had the same problem.
I thought that maybe Emacs didn't have access to my ~/.bashrc config, where I configure my smartcard details:
export GPG_TTY="$(tty)"
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpgconf --launch gpg-agent
The solution (is there a better one?)
This answer suggested launching Emacs with bash -c emacs.
This ended up solving the problem; however, I wonder if there is a more robust solution, i.e. one encoded in my config.el file or similar.

As you can see, the problem is caused by a mismatch between the environment variables Emacs sees and the ones your shell sets. You can use exec-path-from-shell, which is especially useful on macOS, or you can just call setenv manually. Finally, Spacemacs and Doom Emacs have their own ways of handling this; since you mention config.el you may be using Doom, in which case you can refer to its documentation as well.
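If you would rather not put the fix in Emacs Lisp, the bash -c emacs trick can also be made permanent with a small wrapper script that sets the same variables your ~/.bashrc does before handing over to Emacs. A rough sketch (the wrapper name and location are hypothetical):
#!/usr/bin/env bash
# Hypothetical wrapper (~/bin/emacs-gpg): set the smartcard-related variables
# the same way ~/.bashrc does, then hand over to Emacs so Tramp inherits them.
export GPG_TTY="$(tty)"
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --launch gpg-agent
exec emacs "$@"
Pointing your desktop launcher or keybinding at this wrapper gives the same effect as bash -c emacs regardless of how Emacs is started; the in-Emacs alternative is to set SSH_AUTH_SOCK with setenv in config.el, which exec-path-from-shell can automate.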

Related

SSH connection command to embedded OS QNX Neutrino via paramiko [duplicate]

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I get:
sh: sesu: not found
When I run a simple ls command it gives me the desired output; only the sesu command is not working.
This is how my code looks:
import paramiko
host = host
username = username
password = password
port = port
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin,stdout,stderr=ssh.exec_command('sesu test')
stdin.write('Password')
stdin.flush()
outlines=stdout.readlines()
resp=''.join(outlines)
print(resp)
The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced than in your regular interactive SSH session (in particular, for non-interactive sessions, .bash_profile is not sourced), and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command. E.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems, you can use the which sesu command in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions (see the sketch after this list).
Try running the script explicitly via login shell (use --login switch with common *nix shells):
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
PATH="$PATH:/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using the pseudo terminal to automate a command execution can bring you nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
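For the second option above (fixing the startup scripts), the usual culprit is that the PATH export sits after the early return that many default ~/.bashrc files contain for non-interactive shells. This only helps on systems such as Debian/Ubuntu, where bash sources ~/.bashrc for non-interactive commands started by sshd; a rough sketch, with /opt/sesu/bin standing in for the real location:
# Top of ~/.bashrc: set PATH before the non-interactive guard.
export PATH="$PATH:/opt/sesu/bin"   # hypothetical sesu location

# Typical guard found in default .bashrc files; everything below it is
# skipped for non-interactive sessions such as Paramiko's exec_command.
case $- in
    *i*) ;;
      *) return ;;
esac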
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch

Copying ssh key from windows machine to windows server 2019

I've been trying to get access to Windows Server 2019 without password through OpenSSH protocol.
So I've created new key which I need it to be copied to the Windows Server, I've tried this:
ssh-copy-id -i ~/.ssh/id_rsa user@server
But I get this after entering correct password:
'exec' is not recognized as an internal or external command,
operable program or batch file.
The system cannot find the path specified.
The system cannot find the path specified.
My issue is how to transfer the key from one Windows machine (using Git Bash, WSL, PowerShell or whatever)
to the Windows Server 2019 location of authorized keys, if I am not mistaken.
I am desperate enough to do it manually, but the location of those keys is a mystery to me. Do I need to set something on Windows Server first so that it can accept keys for authentication?
What is the alternative to ssh-copy-id from a Windows machine to Windows Server 2019?
Found solution:
Followed this helpful YouTube guide, props to the author:
https://www.youtube.com/watch?v=Cs3wBl_mMH0&ab_channel=IT%2FOpsTalk-Deprecated-SeeChannelDescription
Also, installing OpenSSHUtils worked with:
Install-Module -Name OpenSSHUtils -RequiredVersion 0.0.2.0 -Scope AllUsers
Also this guide helped:
https://www.cloudsma.com/2018/03/installing-powershell-modules-on/
My server didn't have access, so I manually copied the module from:
C:\Program Files\WindowsPowerShell\Modules to the server's:
Server:\Program Files\WindowsPowerShell\Modules
First, this error message is tracked in microsoft/vscode-remote-release issue 25.
Current workaround (the context is VSCode, but it should also apply to a regular SSH connection):
Also, for anyone else here that loves their bash on Windows but still wants to be able to use VSCode remote, the workaround I have currently set up is to use an autorun.cmd deployed on the servers that detects when an SSH connection is coming in and has a terminal allocated:
@echo off
if defined SSH_CLIENT (
:: check if we've got a terminal hooked up; if not, don't run bash.exe
C:\cygwin\bin\bash.exe -c "if [ -t 1 ]; then exit 1; fi"
if errorlevel 1 (
C:\cygwin\bin\bash.exe --login
exit
)
)
This is known to work with Cygwin bash, unsure about the bash that ships with Windows; I imagine it's very sensitive to how the TTY code works internally.
This way, launching cmd.exe works normally, using VSCode (because it does not allocate a PTY) works normally, but SSH'ing into the machine launches bash.exe.
I suspect it would also work using the bash.exe which comes with Git for Windows, should it be installed on the target server.
The destination file should be on the server:
%USERPROFILE%\.ssh\authorized_keys
If you can do it manually, simply try to scp it instead of using ssh-copy-id:
scp user@server:C:/Users/<user>/.ssh/authorized_keys authorized_keys
# manual and local edit to add the public key
scp authorized_keys user@server:C:/Users/<user>/.ssh/authorized_keys
(again, I would use the scp.exe coming with Git For Windows, installed this time locally)
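To make the "manual and local edit" step concrete, here is a rough sketch from Git Bash on the local machine, assuming the public key to add is ~/.ssh/id_rsa.pub (adjust the name if you generated a different key):
# Fetch the server's current file (or start from an empty local file if it doesn't exist yet).
scp user@server:C:/Users/<user>/.ssh/authorized_keys .
# Append the local public key to the fetched copy.
cat ~/.ssh/id_rsa.pub >> authorized_keys
# Push the merged file back to the server.
scp authorized_keys user@server:C:/Users/<user>/.ssh/authorized_keys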

How to enable X11 forwarding in PyCharm SSH session?

The Question
I'm trying to enable X11 forwarding through the PyCharm SSH Terminal which can be executed via
"Tools -> Start SSH session..."
Unfortunately, it seems there is no way of specifying the flags like I would do in my shell for enabling X11 forwarding:
ssh -X user@remotehost
Do you know some clever way of achieving this?
Current dirty solution
The only dirty hack I found is to open an external ssh connection with X11 forwarding and then manually update the environment variable DISPLAY.
For example I can run on my external ssh session:
vincenzo@remotehost:$ echo $DISPLAY
localhost:10.0
And then set in my PyCharm terminal:
export DISPLAY=localhost:10.0
or update the DISPLAY variable in the Run/Debug Configuration, if I want to run the program from the GUI.
However, I really don't like this solution of using an external ssh terminal and manually updating the DISPLAY variable, and I'm sure there's a better way of achieving this!
Any help would be much appreciated.
P.s. Making an alias like:
alias ssh='ssh -X'
in my .bashrc doesn't force PyCharm to enable X11 forwarding.
So I was able to patch up jsch and test this out and it worked great.
Using X11 forwarding
You will need to do the following to use X11 forwarding in PyCharm:
- Install an X Server if you don't already have one. On Windows this might be the VcXsrv project, on Mac OS X the XQuartz project.
- Download or compile the jsch package. See instructions for compilation below.
- Back up jsch-0.1.54.jar in your PyCharm lib folder and replace it with the patched version. Start PyCharm with a remote environment and make sure to remove any instances of the DISPLAY environment variable you might have set in the run/debug configuration.
Compilation
Here is what you need to do on a Mac OS or Linux system with Maven installed.
wget http://sourceforge.net/projects/jsch/files/jsch/0.1.54/jsch-0.1.54.zip/download
unzip download
cd jsch-0.1.54
sed -e 's|x11_forwarding=false|x11_forwarding=true|g' -e 's|xforwading=false|xforwading=true|g' -i src/main/java/com/jcraft/jsch/*.java
sed -e 's|<version>0.1.53</version>|<version>0.1.54</version>|g' -i pom.xml
mvn clean package
This will create jsch-0.1.54.jar in target folder.
Update 2020:
I found a very easy solution. It may be due to the updated PyCharm version (2020.1).
Ensure that X11Forwarding is enabled on server: In /etc/ssh/sshd_config set
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
On the client (macOS for me): in ~/.ssh/config set
ForwardX11 yes
In PyCharm, deselect Include system environment variables. This resolves the issue where the DISPLAY variable gets set to the system value.
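Before blaming PyCharm, it can help to confirm that plain command-line forwarding works with this configuration. A quick check from the local machine (user@remotehost as in the question; xeyes is assumed to be installed on the server):
# Should print something like localhost:10.0 and pop up a test window locally.
ssh -X user@remotehost 'echo $DISPLAY; xeyes'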
EDIT: This works; for example, I used the PyTorch implementation of DeepLab and visualized sample images from PASCAL VOC.
X11 forwarding was implemented in 2021.1 for all IntelliJ-based IDEs. If it still doesn't work, please consider creating a new issue at youtrack.jetbrains.com.
By the way, the piece of advice about patching jsch won't work for any IDE newer than 2019.1.
In parallel, open MobaXTerm and connect with the X11 forwarding checkbox enabled. Now PyCharm will forward the display through the MobaXTerm X11 server.
This is a workaround until PyCharm adds this 'simple' feature.
Also, set DISPLAY environment variable in PyCharm run configuration like this:
DISPLAY=localhost:10.0
(the right-hand side should be obtained with the command echo $DISPLAY on the server side)
Update 2022: for PyCharm newer than 2022.1, plotting in SciView works by setting only ForwardX11 yes in .ssh/config (my laptop OS is Ubuntu 22.04). I did not set any other parameters on either the server or the local side.

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most if not all vagrant "Waiting for VM to Boot" troubleshooting threads:
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as Vagrantfile inside. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the VirtualBox "preview", and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix: https://superuser.com/a/343775/298915 of "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0." did nothing to fix the issue.
Conclusion:
Having tried to reinstall everything, thinking I had messed up a file, it did not seem to ameliorate the issue. I am unable to get past this. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is a Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub, you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
Boot your VM with the VirtualBox GUI (either through the Vagrantfile GUI-boot option, or by starting your VM directly in VirtualBox). Log in with vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do this manually every time?
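The steps above can indeed be collected into a script to run inside the guest; a rough sketch, assuming the Debian guest and default vagrant user described above (the public-key path is hypothetical, copy or wget the key into the guest first):
#!/bin/bash
# Run inside the guest after logging in as vagrant/vagrant via the GUI.
set -e

# Bring up networking (off by default on this box).
sudo dhclient eth0 -v

# Make sure the key files exist with the right permissions.
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys

# Append the public key that matches config.ssh.private_key_path
# (/tmp/id_rsa.pub is a placeholder path).
cat /tmp/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
After that, vagrant reload from the host should get past "Waiting for VM to boot."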
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?

Is there a way I can have a VM gain access to my computer?

I would like to have a VM to look at how applications appear and to develop OS-specific applications; however, I want to keep all my code on my Windows machine so that if I decide to nuke a VM or anything like that, it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
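For VirtualBox specifically, a shared folder can be configured from the host's command line; a minimal sketch, where "dev-vm", the share name code, and the host path are all placeholders:
# On the Windows host (cmd, PowerShell, or Git Bash): share a code directory with the VM.
VBoxManage sharedfolder add "dev-vm" --name code --hostpath "C:\Users\me\code" --automount

# In a Linux guest with Guest Additions installed, automounted shares
# appear under /media/sf_<name> (your user must be in the vboxsf group).
ls /media/sf_code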
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the hosting OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
I have a directory in a Windows drive that I mount in my host ubuntu 12.04.
I run virtualbox ubuntu 13.04 as a guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for methods to automatically mount virtualbox shared folders,
reliable and correct methods are hard to distinguish from those that fail.
Failures include getting and setting permissions, as well as other problems.
Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be used,
but no methods I have tried worked.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, using a bash terminal (opened from Nautilus) on an Ubuntu 13.04 guest,
below is a working method to mount Common (share name)
on /home/$USER/Desktop/Common (mount point) with full permissions.
(Note the '\' command continuation character in the find command.)
First time only: create your mountpoint, modify your .bashrc file, and run it.
Respond with password when requested.
These are the four command-lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
mount -t vboxsf -o \
uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.