Ubuntu cloud images using qemu/kvm are not properly starting ssh service - ssh

Today I have been experiencing a weird problem when starting a cloud-image-based VM using qemu/kvm. Apparently the SSH service is not starting properly in the guest, and I cannot access the VM. This is what I see in the logs:
See 'systemctl status ssh.service' for details.
[ OK ] Started Dispatcher daemon for systemd-networkd.
[ OK ] Stopped OpenBSD Secure Shell server.
Starting OpenBSD Secure Shell server...
[FAILED] Failed to start OpenBSD Secure Shell server.
See 'systemctl status ssh.service' for details.
[ OK ] Stopped OpenBSD Secure Shell server.
Starting OpenBSD Secure Shell server...
[FAILED] Failed to start OpenBSD Secure Shell server.
See 'systemctl status ssh.service' for details.
[ OK ] Started Snap Daemon.
Starting Wait until snapd is fully seeded...
[ OK ] Stopped OpenBSD Secure Shell server.
Starting OpenBSD Secure Shell server...
[FAILED] Failed to start OpenBSD Secure Shell server.
See 'systemctl status ssh.service' for details.
[ OK ] Stopped OpenBSD Secure Shell server.
Starting OpenBSD Secure Shell server...
[FAILED] Failed to start OpenBSD Secure Shell server.
See 'systemctl status ssh.service' for details.
[ OK ] Stopped OpenBSD Secure Shell server.
[FAILED] Failed to start OpenBSD Secure Shell server.
Also, if I try to log in on the serial TTY, I am not able to log in with the credentials I previously specified using cloud-localds.
It is something I have never experienced in years of using these images, but it came up today. I tried different Ubuntu cloud images and different host machines... But nothing, the problem persists.
Does anyone have a clue about what is going on, and perhaps shed some light on it?
Thanks a lot

Related

Appium Error: Cannot find any free port in range 8200..8299

I'm running about 90 tests in Appium (Android emulator on iMac) and it was all fine until suddenly I started observing this error:
Starting logs capture with command: /Users/username/Library/Android/sdk/platform-tools/adb -P 5037 -s emulator-5554 logcat -v threadtime
E selenium.common.exceptions.WebDriverException: Message: An unknown server-side
error occurred while processing the command. Original error: Cannot find any free port in range
8200..8299}. Please set the available port number by providing the systemPort capability or
double check the processes that are locking ports within this range and terminate these which
are not needed anymore
I did a few things to fix this problem, but nothing worked:
1. Restarting adb:
adb kill-server
adb reconnect
2. Cleaning the emulator and restarting it.
Apart from this, I couldn't find any port in the range 8200-8299 that was already in use on the system.
I added the systemPort capability as well, but I still see the same error.
I have no idea how to fix this.
UPDATE:
Found some more logs and figured out that the port forwarding isn't being cleared by UiAutomator2 (or adb); that's why I don't have issues on iOS but only on Android. Here are the logs from the end of the Appium server output:
[debug] [WD Proxy] Proxying [DELETE /] to [DELETE http://127.0.0.1:8200/wd/hub/session/d1f94433-2c44-4dac-a836-461ab7f41130] with no body
[debug] [UiAutomator2] Deleting UiAutomator2 server session
[debug] [WD Proxy] Matched '/' to command name 'deleteSession'
[debug] [WD Proxy] Proxying [DELETE /] to [DELETE http://127.0.0.1:8201/wd/hub/session/37137b29-a9a6-4d83-b2d9-ce510f601a2d] with no body
[debug] [UiAutomator2] Deleting UiAutomator2 server session
where 127.0.0.1:8201 goes up to 127.0.0.1:8299, deleting 100 active sessions, which I don't expect.
Also, in the netstat output I do see that TCP ports 127.0.0.1:8200 - 127.0.0.1:8299 are open (LISTEN).
Execute:
adb -s $UDID forward --remove-all
just before launching Appium, and again afterwards, to ensure that the ports used by adb are free.
See How do I stop an adb port forward?
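To double-check that the forwards really are gone, here is a small Python sketch that probes the range Appium complains about (the helper name and the 0.2 s probe timeout are my own choices, not part of Appium or adb):

```python
import socket

def listening_ports(start, end, host="127.0.0.1"):
    """Return the ports in [start, end] that accept a TCP connection,
    i.e. ports something (such as a stale adb forward) is listening on."""
    busy = []
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
            probe.settimeout(0.2)
            # connect_ex returns 0 when the connection succeeds
            if probe.connect_ex((host, port)) == 0:
                busy.append(port)
    return busy
```

Running `listening_ports(8200, 8299)` before launching Appium should come back empty once the adb forwards have been removed.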

Docker login error with Get Started tutorial

I'm trying to follow the beginner tutorial on Docker's website and I'm stuck on an error at login.
The OS is Ubuntu 14.04; I'm not using VirtualBox, I'm not behind any proxy, and I want to push to the regular Docker repository (not a private one).
All the threads I've found mention proxies and private repositories, but that isn't my case; I'm just trying to do the simple beginner tutorial.
Here is my attempt:
$ sudo docker login
[sudo] password for myuname:
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: myDHuname
Password:
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My docker info:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 5
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.19.0-58-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.686 GiB
Name: myuname-ThinkPad-T420
ID: 6RW3:X3FC:T62N:CWKI:JQW5:YIPY:RAHO:ZHF4:DFZ6:ZL7X:JPOD:V7EC
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Epilogue
Now docker login is passing. I have not touched anything since yesterday when it was broken...
I can't reproduce the behavior anymore.
I encountered this issue when I first used Docker. I had a Shadowsocks proxy on, configured in PAC mode. When I tried to run docker run hello-world, I got this timeout error. When I set the proxy mode to global, the error was also there.
But when I disabled the proxy, Docker ran well: it pulled the remote image successfully.
Docker for Windows:
Note: Some users reported problems connecting to Docker Hub on Docker
for Windows stable version. This would manifest as an error when
trying to run docker commands that pull images from Docker Hub that
are not already downloaded, such as a first time run of docker run
hello-world. If you encounter this, reset the DNS server to use the
Google DNS fixed address: 8.8.8.8. For more information, see
Networking issues in Troubleshooting.
The error Client.Timeout exceeded while awaiting headers indicates one of:
The GET request to the registry https://registry-1.docker.io/v2/ timed out
The HTTP client (Go's net/http, as the error itself shows) gave up before a response was received
The connection never formed (a proxy/firewall gobbled it up)
If you see the result below, you can rule out a timeout and network connectivity problems:
$ curl https://registry-1.docker.io/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
If you get the above response, the next step would be to check whether your user environment has some proxy configuration:
env | grep -i proxy
Note: the Docker daemon runs as root, so it may not share your user's environment; maybe you have http_proxy in your env. Most likely I am wrong. Anyhow, see what happens with the curl GET request.
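For inspecting an environment mapping programmatically (your shell's, or one read from elsewhere), a tiny Python sketch; the set of variable names is my assumption of the commonly honoured ones, and it matches both upper- and lower-case forms:

```python
import os

def proxy_env(environ=None):
    """Return any proxy-related variables found in the given environment
    mapping (defaults to this process's environment)."""
    if environ is None:
        environ = dict(os.environ)
    names = {"http_proxy", "https_proxy", "ftp_proxy", "no_proxy", "all_proxy"}
    return {k: v for k, v in environ.items() if k.lower() in names}
```

An empty result from `proxy_env()` means the process you checked has no proxy variables set, which leaves the daemon's own environment or an external firewall as the remaining suspects.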
Change the proxy settings in Firefox. Maybe you are in access-restricted mode. Just add the server address in Firefox under Settings -> Preferences -> Advanced -> Network -> Configuration (Settings). Adding the server IP to "No proxy for" can resolve the issue.

Error when deploying app with Play framework on Apache 2.2 on port 443

I configured Apache with SSL.
Server version: Apache/2.2.15 (Unix)
When I need to deploy my app on port 443, I get this error:
...
Play server process ID is 5941
[info] application - Application v2.8 - started on date 2016-03-30 11:51:08.332
[info] play - Application started (Prod)
Oops, cannot start the server.
If I start the app on a different port, it works fine. For example:
sudo nohup app_path -Dhttp.port=9000 -Dconfig.file=config_file 2> /dev/null
But I get an error when I start the app on 443:
sudo nohup app_path -Dhttps.port=443 -Dconfig.file=config_file 2> /dev/null
My questions are:
- Am I missing something? Is there an easy fix for this?
- How can I see a log of the error? The output above is not descriptive at all.
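Not a definitive answer, but since Apache on this server is already configured with SSL, one thing worth ruling out is that port 443 is simply not bindable for the Play process (already taken by Apache, or requiring root). A quick Python sketch (my own helper, nothing Play-specific):

```python
import socket

def can_bind(port, host="0.0.0.0"):
    """Report whether this process could bind `port` right now.
    Fails both when the port is already taken (e.g. Apache on 443)
    and when a port below 1024 is attempted without root."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True, None
        except OSError as exc:
            return False, exc.strerror
```

Typically, `can_bind(443)` returning (False, 'Address already in use') would point at Apache holding the port, while (False, 'Permission denied') would point at missing privileges.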

salt-minion tunnel initial connection

I have successfully set up a master and a minion using the Salt tutorial, between two hosted VPS (Debian 7).
I am trying to set up a second minion on my laptop (Ubuntu 14.04), but following the same steps fails.
I am suspecting that my ISP is blocking some ports used by Salt.
I may be wrong, but that wouldn't be the first time my problems are related to that (I have a wireless connection included in my housing contract and live in some kind of young workers' residence).
Is there a way to tell which ports my ISP is blocking?
Can I tunnel my Salt minion connection through ssh?
Note: ssh runs fine, if that helps, and I have access to the remote servers (the other master and minion).
Anonymised command output below :
$ salt-minion -l debug
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: nha-Ub
[DEBUG ] Configuration file path: /etc/salt/minion
[INFO ] Setting up the Salt Minion "my_machine_name"
[DEBUG ] Created pidfile: /var/run/salt-minion.pid
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Attempting to authenticate with the Salt Master at X.X.X.X
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[ERROR ] Attempt to authenticate with the salt master failed
I'd run a tcpdump on each side to see what packets are being sent and received. An example tcpdump command:
tcpdump -i $INTERFACE -s 15000 -w testing.pcap
where $INTERFACE is eth0 etc. (check with ifconfig).
Then you can look at the capture in Wireshark.
Another thing to look at is the firewall: I'd put an allow rule in iptables for each WAN IP, just to make sure that's not causing any issues either.
By default, Salt needs ports 4505 and 4506 open (TCP).
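Before reaching for tcpdump, a quick way to test from the laptop whether those two ports are reachable at all is a plain TCP connect. A Python sketch (the helper is mine, not part of Salt):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within `timeout`;
    False on refusal or timeout (e.g. a port filtered by the ISP)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Checking `port_reachable(master_ip, 4505)` and `port_reachable(master_ip, 4506)` from the laptop, and comparing with the same check run from the working VPS minion, should show whether the connection is being blocked on the laptop's side.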

Bash Script: Wait for apache2 graceful-stop

On an Ubuntu 12.04 server, I have a bash script to gracefully stop my apache2 server, remove the content of /var/www, unzip the new content and then start Apache again. (Everything is executed as root.)
echo "Test";
cd /var;
service apache2 graceful-stop;
rm -R www/ && echo "Flush...";
unzip transfer.zip > /dev/null && echo "Flushed.";
service apache2 start;
The error I get is when apache starts again:
Test
Flush...
Flushed.
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
Action 'start' failed.
The Apache error log may have more information.
So the script doesn't wait for apache to stop.
Here what I tried so far:
I tried to wait with wait (Same error.)
service apache2 graceful-stop;
wait $!;
I tried to get the PID of apache and wait for this one (Same error)
pid=$(cat /var/run/apache2.pid)
apache2ctl graceful-stop;
wait $pid;
I tried to use apache2ctl graceful-stop instead of service apache2 graceful-stop (Same error)
What am I missing? When I use service apache2 stop, everything works fine:
* Stopping web server apache2
... waiting [ OK ]
Flush...
Flushed.
* Starting web server apache2 [ OK ]
Thanks
Edit
Here is the output with the exit code of wait:
* Stopping web server apache2 [ OK ]
0
Flush...
Flushed.
* Starting web server apache2
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
Action 'start' failed.
The Apache error log may have more information.
[fail]
It seems that Apache itself recommends waiting a couple of seconds between restarts:
http://wiki.apache.org/httpd/CouldNotBindToAddress
It is actually relatively common that releasing and binding to a port is not immediate: it may take some time (up to several minutes) for the kernel to free up the closed socket. This is called the linger time. It is also briefly discussed in the Apache documentation; see http://httpd.apache.org/docs/2.2/misc/perf-tuning.html and search for "Lingering Close".
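Given that, rather than sleeping for a fixed couple of seconds, the script could poll until the port can actually be bound again before calling service apache2 start. A Python sketch of the idea (my own helper; binding port 80 itself needs root, so run the real check with the same privileges as Apache):

```python
import socket
import time

def wait_until_port_free(port, host="0.0.0.0", timeout=60.0):
    """Poll until `port` can actually be bound again, i.e. the old server
    has fully released it; True on success, False once `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind((host, port))
                return True   # the port is free; safe to start Apache now
            except OSError:
                time.sleep(1)  # still held (possibly in TIME_WAIT); retry
    return False
```

Calling `wait_until_port_free(80)` between the graceful-stop and the start, and only starting when it returns True, avoids guessing how long the kernel will hold the socket.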
There is a very detailed answer about this issue in the question: Socket options SO_REUSEADDR and SO_REUSEPORT, how do they differ? Do they mean the same across all major operating systems?
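For illustration, here is a minimal Python sketch (my own, not taken from the linked answer) of the TIME_WAIT effect that question discusses: when the server side closes a connection first, its end of the port lingers in TIME_WAIT, and on Linux only a new socket that sets SO_REUSEADDR can rebind the port right away.

```python
import socket
import time

def bind_listener(port, reuse):
    """Bind a listening socket on 127.0.0.1:port, optionally setting
    SO_REUSEADDR so a port whose old connection lingers in TIME_WAIT
    can be taken over immediately."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if reuse:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(1)
    return s

def leave_port_in_time_wait(port):
    """Open and tear down one connection so the server side of `port`
    ends up in TIME_WAIT (the server closes first, like Apache does)."""
    srv = bind_listener(port, reuse=True)
    cli = socket.create_connection(("127.0.0.1", port))
    conn, _ = srv.accept()
    conn.close()     # server closes first -> its end sits in TIME_WAIT
    srv.close()
    cli.close()
    time.sleep(0.2)  # give the kernel a moment to finish the FIN handshake
```

After `leave_port_in_time_wait(port)`, a `bind_listener(port, reuse=False)` fails with EADDRINUSE for the duration of the linger time, while `bind_listener(port, reuse=True)` succeeds immediately.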