regenerating certificates hangs on windows 7 - ssl

I'm a total docker newbie and tried to get it working on my windows 7 64-bit machine.
The installation went okay, but the "Docker Quickstart Terminal" will not start up as expected. It seems to hang when trying to create the SSH key:
(default) Downloading https://github.com/boot2docker/boot2docker/releases/download/v
(default) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
(default) Creating VirtualBox VM...
(default) Creating SSH key...
Error creating machine: Error in driver during machine creation: exit status 1
Looks like something went wrong... Press any key to continue...
So I tried to regenerate the certificates in a cmd window, but that does not work either:
>docker-machine regenerate-certs default
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Detecting the provisioner...
OS type not recognized
I've tried deactivating my virus scanner and running the cmd window as admin, without success.
Any ideas what to check? Are there any interesting log files?
Here's the docker version output:
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.3
Git commit: a34a1d5
Built: Fri Nov 20 17:56:04 UTC 2015
OS/Arch: windows/amd64
An error occurred trying to connect: Get http://localhost:2375/v1.21/version: dial tcp 127.0.0.1:2375:
ConnectEx tcp: No connection could be made because the target machine actively refused it.

If Hyper-V is not activated (that is more a Windows 10 issue), and VT-x/AMD-V is enabled in your BIOS, then something else went wrong.
If docker-machine ls still lists the default machine, delete it: docker-machine rm default.
If you already had VirtualBox installed before your docker-toolbox installation, try the following:
completely uninstall VirtualBox
in C:\Windows\system32\drivers\, find and delete these five files (there may be fewer left; that is OK, delete whatever remains):
vboxdrv.sys,
vboxnetadp.sys,
vboxnetflt.sys,
vboxusbmon.sys,
vboxusb.sys.
in regedit, under the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\, delete these five keys (there may be fewer left; that is OK, delete whatever remains):
VBoxDrv,
VBoxNetAdp,
VBoxNetFlt,
VBoxUSBMon,
VBoxUSB.
Then reinstall the latest VirtualBox.
Make sure:
you have the latest docker-machine copied somewhere in your PATH (0.5.3 was released 22 hours ago: releases/download/v0.5.3/docker-machine_windows-amd64.exe).
%HOME% is defined (typically as %HOMEDRIVE%%HOMEPATH%)
From there, try to manually recreate the default machine the way the quick-start script does:
docker-machine create -d virtualbox --virtualbox-memory 2048 --virtualbox-disk-size 204800 default
eval $($DOCKER_MACHINE env default --shell=bash)
docker-machine ssh default
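If the create succeeds, a quick sanity check (a sketch, assuming the machine is named default) confirms the client can actually reach the daemon:

```shell
# Verify the new machine and point the docker client at it
docker-machine ls                                 # "default" should show as Running
eval "$(docker-machine env default --shell=bash)" # export DOCKER_HOST & friends
docker version                                    # should now print Client AND Server sections
```

If docker version still only prints the Client section, the certificates or the VM itself are still broken.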

I've now tried to create a Linux VM directly in VirtualBox and start it from there: that also runs into a time-out. So I think it's not related to docker.
I've found a VirtualBox bug report saying this can happen when Avira is installed.
Here's a discussion of the issue on the Avira forum - unfortunately mostly in German.
One paragraph indicates that it may help to deactivate "Advanced process protection":
Configuration -> General -> Security and disable the option "Advanced
process protection". Click "Apply" and restart the device. You should
be able to run your VM in VirtualBox after that.
In my case this does not help, so I'll need to wait for a fix or uninstall Avira completely.

(defualt) DBG | Getting to WaitForSSH function...
(defualt) DBG | Using SSH client type: external
(defualt) DBG | &{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\Ming\.docker\machine\machines\defualt\id_rsa -p 58549] C:\Program Files\OpenSSH\bin\ssh.exe }
(defualt) DBG | About to run SSH command:
(defualt) DBG | exit 0
(defualt) DBG | SSH cmd err, output: exit status 255:
(defualt) DBG | Error getting ssh command 'exit 0' : Something went wrong running an SSH command!
(defualt) DBG | command : exit 0
(defualt) DBG | err : exit status 255
(defualt) DBG | output :

Related

Unable to install nvm on WSL due to network issues

I'm trying to install nvm using curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash on WSL2, but I'm getting different errors. Initially, the curl command would return the following:
> $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0
curl: (6) Could not resolve host: raw.githubusercontent.com
After running netsh int ip reset in Windows, which was suggested in another question, the same command started timing out instead:
> $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:04:59 --:--:-- 0
curl: (28) Connection timed out after 300000 milliseconds
I've also tried manually saving the install.sh to my machine and running it locally (after setting its permissions with chmod +x install.sh), but that returns a similar error:
> $ ./install.sh
=> Downloading nvm from git to '/home/mparra/.nvm'
=> Cloning into '/home/mparra/.nvm'...
fatal: unable to access 'https://github.com/nvm-sh/nvm.git/': Failed to connect to github.com port 443: Connection timed out
Failed to clone nvm repo. Please report this!
I can successfully ping github.com. ping -c 100 github.com returns the following:
--- github.com ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99156ms
rtt min/avg/max/mdev = 15.280/20.739/85.205/9.141 ms
This issue suggests that a Windows update resolved the issue, but that's not an option for me since it's a work machine and I can't update beyond build 18363.2039. I've also checked that my VPN is not enabled and I set my DNS to 8.8.8.8 and 8.8.4.4, which had no effect.
Please try the following in your WSL:
sudo rm /etc/resolv.conf
sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
sudo bash -c 'echo "[network]" > /etc/wsl.conf'
sudo bash -c 'echo "generateResolvConf = false" >> /etc/wsl.conf'
sudo chattr +i /etc/resolv.conf
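The same steps can be sketched with printf and tee instead of repeated bash -c calls; this is equivalent under the assumption that you want Google's 8.8.8.8 resolver (swap in your own if needed):

```shell
# Pin a static resolver and stop WSL from regenerating /etc/resolv.conf
sudo rm -f /etc/resolv.conf
printf 'nameserver 8.8.8.8\n' | sudo tee /etc/resolv.conf >/dev/null
printf '[network]\ngenerateResolvConf = false\n' | sudo tee /etc/wsl.conf >/dev/null
sudo chattr +i /etc/resolv.conf   # immutable, so nothing rewrites it on restart
```

Note that changes to /etc/wsl.conf typically only take effect after a wsl --shutdown from Windows and a restart of the distribution.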
After that, I can install with curl.
I have a feeling you are probably correct about this being the same issue mentioned on Github that was resolved in a Windows update.
If that's truly the case, you are probably going to continue to run into issues even after getting nvm installed. For instance, nvm probably will have trouble downloading Node releases.
The easiest solution that I can propose, if it works for you, is to simply convert to WSL1 instead of WSL2. WSL1 will handle most (but not all) Node use-cases just as well as WSL2, and WSL1 handles networking very differently than WSL2. If the Windows networking stack is working fine for you, then WSL1's should as well.
As noted in that Github issue, this seemed to be a problem that occurred only in Hyper-V instances. WSL2 runs in Hyper-V, but WSL1 does not.
If you go this route, you can either:
create a copy of your existing WSL2 distribution and convert that copy to WSL1. From PowerShell:
wsl --shutdown
wsl -l -v # Confirm <distroname>
wsl --export <distroname> path\to\backup.tar
mkdir .\path\for\new\instance
wsl --import WSL1 .\path\for\new\instance path\to\backup.tar --version 1 # WSL1 can be whatever name you choose
wsl -d WSL1
Note that you'll be root, by default. To change the default user, follow this answer.
Or, just convert the WSL2 instance to WSL1:
wsl --shutdown
wsl -l -v # Confirm <distroname>
wsl --export <distroname> path\to\backup.tar # Just in case
wsl --set-version <distroname> 1
If WSL1 doesn't work for you (at least in the short term, until your company pushes that update), then there may be another option similar to the one mentioned in this comment on that Github issue. Let me know if you need to go that route, and I'll see if I can simplify that a bit.

SSH protocol v.1 is no longer supported

Trying to scp files to my server like I've done every day for years... got this weird error today:
client$ scp filename.file server:/path/to/somewhere/
SSH protocol v.1 is no longer supported
client$ echo $?
255
The file does not show up on my server like it would normally after running this command.
This error only appears on scp commands. Using ssh to get into my server works fine.
Has anyone seen this before? How do I go about debugging this? Here's some version info:
client$ ssh -V
OpenSSH_8.2p1 Ubuntu-4ubuntu0.1, OpenSSL 1.1.1f 31 Mar 2020
client$ apt show openssl
Package: openssl
Version: 1.1.1f-1ubuntu2
server$ apt show openssh-server
Package: openssh-server
Version: 1:7.2p2-4ubuntu2.10
server$ sshd -V
unknown option -- V
OpenSSH_7.2p2 Ubuntu-4ubuntu2.10, OpenSSL 1.0.2g 1 Mar 2016
(note that I've added hostnames "client" and "server" for clarity)
In my sshd_config, it shows Protocol 2
server$ cat /etc/ssh/sshd_config | grep Protocol
Protocol 2
I'm running Ubuntu 16.04 on my server, which should have maintenance updates through today.
Let me know if I should run any other operations. Server is local network only, but I still want to make sure it's hardened.
Ugh, it was a typo... Keeping the post up for others who bang their head against the wall on this, as I couldn't find any info on this error message from googling.
It doesn't show in the scp command above (I removed various parts for privacy), but I was supplying a port:
scp -p3122 file server:/path/
But it really should be:
scp -P3122 file server:/path/
(Use a capital 'P')
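Why this produces a protocol error rather than a bad-port error: lowercase -p means "preserve times" and takes no argument, so the option parser treats each following character as another single-letter flag, and -1 was the old "use SSH protocol 1" flag. A small getopts sketch (hypothetical parse() helper, with an optstring mirroring scp's p/3/1/2 flags) shows how "-p3122" gets split:

```shell
# Mimic how scp's option parser sees "-p3122" when -p takes no argument
parse() {
  OPTIND=1                       # reset so the function can be called repeatedly
  while getopts "p312" opt; do
    printf -- '-%s ' "$opt"      # print each flag the parser extracts
  done
}
parse -p3122
# prints: -p -3 -1 -2 -2
```

So scp silently ignored the intended port and instead requested protocol 1, which modern OpenSSH refuses with exactly this message.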
I got the same message with the ssh command.
I fixed the problem: it works ONLY if you pass the three elements separately: ssh server -l user -p port.
And the party continues...

VSCode v1.43 remote ssh cannot connect. v1.42 works

I am connecting to a CentOS 7.4 machine from my Mac using the Remote - SSH extension. Everything was working fine in v1.42. I updated to v1.43 yesterday and now VSCode cannot connect. I get the following error, and it 'hangs' until I select Close Remote Connection. I switched back to v1.42 and it works. Anyone else seen this?
[11:48:35.614] stderr> Authenticated to 172.18.116.204 ([172.18.116.204]:22).
[11:48:35.704] > Warning: no access to tty (Bad file descriptor).
[11:48:35.707] > Thus no job control in this shell.
[11:48:36.308] stderr> stty:
[11:48:36.308] stderr> standard input: Inappropriate ioctl for device
[11:48:36.309] stderr>
[11:48:38.151] stderr> stty:
[11:48:38.152] stderr> standard input: Inappropriate ioctl for device
[11:48:38.152] > ready: 552eb5fb743e
[11:48:38.180] > Linux 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017
[11:48:38.180] Platform: linux
[11:48:38.246] stderr> bash: line 1: syntax error near unexpected token `then'
[11:48:38.246] stderr> bash: line 1: `then'
[11:48:38.247] stderr> function: Command not found.
[11:48:38.247] > 552eb5fb743e: running
[11:48:38.248] stderr> COMMIT_ID=78a4c91400152c0f27ba4d363eb56d2835f9903a: Command not found.
[11:48:38.248] stderr> EXTENSIONS=: Command not found.
[11:48:38.249] stderr> TELEMETRY=: Command not found.
[11:48:38.263] stderr> export: Permission denied.
[11:48:38.282] stderr> ALLOW_CLIENT_DOWNLOAD=1: Command not found.
[11:48:38.282] stderr> VSCODE_AGENT_FOLDER: Undefined variable.
[11:48:38.283] stderr> _lock: Command not found.
This is an open issue in VSCode Remote-SSH version 0.50.0. See Issue #2527.
The way I resolved it was to downgrade to version 0.49.0.
In VSCode, Ctrl+Shift+X to open extensions
Click the Manage icon next to the Remote - SSH extension
Click Install Another Version... from the options
Select the version to install (0.49.0)
I also recommend disabling the Extensions Auto Update setting, so this type of thing doesn't happen again with this or any other extension.
I think this is NOT an issue of VSCode itself (v1.42 or v1.43). You can try downgrading the Remote-SSH extension to version 0.49.
I solved it by cleaning up the VS Code Server on the remote. No uninstalling, no downgrading...
Simply:
Close VS Code
SSH into the remote using any other way, and run the commands from the link:
kill -9 `ps ax | grep "remoteExtensionHostAgent.js" | grep -v grep | awk '{print $1}'`
kill -9 `ps ax | grep "watcherService" | grep -v grep | awk '{print $1}'`
rm -rf ~/.vscode-server # Or ~/.vscode-server-insiders
Open VS Code again (it will re-install the remote server).
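For reference, the same cleanup can be sketched with pkill instead of the ps/grep/awk pipelines, assuming pkill is available on the remote (-f matches against the full command line):

```shell
# Kill the remote VS Code server processes, then remove the server install
pkill -9 -f remoteExtensionHostAgent.js || true   # || true: ok if nothing matched
pkill -9 -f watcherService || true
rm -rf ~/.vscode-server                           # or ~/.vscode-server-insiders
```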
EDIT:
Running VS Code v1.43, Remote-SSH extension v0.50, on Windows 10.
Remote machine is CentOS 7

xclip gives `Error: Can't open display: localhost:10.0` in tmux session in Ubuntu VirtualBox VM

I'm attempting to use xclip in a tmux session in my Ubuntu VirtualBox VM for some copy/paste keybindings, but keep getting the same error message.
I have XQuartz installed on my host machine:
ysim:~$ which xquartz
/opt/X11/bin/xquartz
ysim:~$ echo $DISPLAY
/tmp/launch-N0023n/org.macosforge.xquartz:0
I have ForwardX11 yes set in ~/.ssh/config:
Host vm
ForwardX11 yes
In my VM too, in /etc/ssh/sshd_config:
X11Forwarding yes
When I'm ssh'ed in my VM, xclip works fine when I'm not in a tmux session:
$ echo hello | xclip
$ xclip -o
hello
But errors when I'm in one:
$ echo hello | xclip
Error: Can't open display: localhost:10.0
Any ideas why this might be the case?
Update: Now it seems to only happen if I exit a tmux session, then create a new one.
I got the same error. I fixed it by exiting my tmux session, disconnecting my ssh session and reconnecting (opening another terminal window).
I was ssh'd into a server, but if you're just using a local VM, I think exiting your tmux session and reopening terminal should have the same effect.
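The usual cause is a stale DISPLAY: each new SSH connection gets a fresh X11 forward (localhost:10.0, then localhost:11.0, and so on), but shells inside an existing tmux session keep the DISPLAY value from when the session was created. As a sketch (assuming a default tmux configuration, where DISPLAY is in update-environment), you can refresh it inside tmux instead of restarting everything:

```shell
# Inside an existing tmux pane: re-import DISPLAY from the tmux server,
# which tracks the value from the most recent client connection
eval "$(tmux show-environment -s DISPLAY)"
echo "$DISPLAY"   # should now match the fresh SSH session's forward
```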

ssh client (dropbear on a router) does no output when put in background

I'm trying to automate some things on remote Linux machines with bash scripting on a Linux machine, and have a working command (the braces are a relic from command concatenations):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"')
But if an ampersand is appended to run it in the background, it seems to execute, but no output is printed, neither on stdout nor on stderr, and even a redirection to a file (inside the braces) does not work:
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"') &
By the way, I'm running the ssh client dropbear v0.52 in BusyBox v1.17.4 on Linux 2.4.37.10 (TomatoUSB build on a WRT54G).
Is there a way to get the output either? What's the reason for this behaviour?
EDIT:
For convenience, here's the plain ssh help output (on my TomatoUSB):
Dropbear client v0.52
Usage: ssh [options] [user@]host[/port][,[user@]host/port],...] [command]
Options are:
-p <remoteport>
-l <username>
-t Allocate a pty
-T Don't allocate a pty
-N Don't run a remote command
-f Run in background after auth
-y Always accept remote host key if unknown
-s Request a subsystem (use for sftp)
-i <identityfile> (multiple allowed)
-L <listenport:remotehost:remoteport> Local port forwarding
-g Allow remote hosts to connect to forwarded ports
-R <listenport:remotehost:remoteport> Remote port forwarding
-W <receive_window_buffer> (default 12288, larger may be faster, max 1MB)
-K <keepalive> (0 is never, default 0)
-I <idle_timeout> (0 is never, default 0)
-B <endhost:endport> Netcat-alike forwarding
-J <proxy_program> Use program pipe rather than TCP connection
Amendment after 1 day:
The braces do not hurt; with and without, it's the same result. I wanted to put the ssh authentication in the background too, so the -f option is not a solution. Interesting side note: if an unknown option is specified (like -v), the error message WARNING: Ignoring unknown argument '-v' is displayed even when the command is put in the background, so getting output from background processes generally works in my environment.
I tried the regular ssh client on x86 Ubuntu: it works. I also tried dbclient on x86 Ubuntu: it works too. So this problem seems to be specific to the TomatoUSB build - or there is an unnoticed fix in "dropbear v0.52" between the TomatoUSB build and the one Ubuntu provides (the only difference in the help output is the doubled default receive window buffer on Ubuntu)... How can a process even know it was put in the background? Is there a solution to the problem?
I had a similar problem on my OpenWrt router. The Dropbear SSH client does not write anything to output if there is no stdin, e.g. when run by cron. I presume that & has the same effect on the process's stdin (no input).
I found a workaround on the author's bug tracker: try redirecting input from /dev/zero.
Like:
ssh -i yourkey user@remotehost "echo 123" </dev/zero &
It worked for me, as I described on my blog page.
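Putting it together for the original command, a sketch (yourkey and user@remotehost are placeholders, as above): give the backgrounded client a readable stdin and capture its output in a file, then collect it once the job finishes:

```shell
# Background the dropbear ssh client with a usable stdin; collect output in a file
ssh -i yourkey user@remotehost 'sh -c "echo 1; echo 2; echo 3; uname -a"' \
    </dev/zero >remote.log 2>&1 &
wait $!          # block until the background ssh finishes
cat remote.log   # output is available even though ssh ran in the background
```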