OpenSUSE snapshots not allowing ssh - ssh

I can't seem to ssh into any instances created from a snapshot of an openSUSE instance that was created within Google Cloud (i.e. not from a snapshot created locally and then uploaded). I've tested this with three different openSUSE instances, two that I had been working on and one that I created only to test this, and none have been able to produce snapshots that yield instances that allow ssh. To be clear, the instances created from the snapshots start up perfectly fine and show no issues from the console, but neither the console's built-in ssh nor any other ssh client (PuTTY, MobaXterm) gets anything more than a timeout error. I have successfully created instances from both Windows and Debian snapshots that I created myself, so I'm confident it's an issue with this specific OS.
Steps to reproduce (see the gcloud sketch after this list):
Create an instance based off of the openSUSE image
Create a snapshot based off of the instance you just created
Create an instance based off of the snapshot you just created
Attempt, and fail, to connect to the instance via ssh
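For reference, the same steps expressed as gcloud commands might look roughly like this; the instance, disk, and snapshot names are placeholders, and the image name and image project are assumptions, not commands I actually ran:
gcloud compute instances create opensuse-test --image <opensuse-image> --image-project opensuse-cloud
gcloud compute disks snapshot opensuse-test --snapshot-names opensuse-snap
gcloud compute disks create opensuse-from-snap --source-snapshot opensuse-snap
gcloud compute instances create opensuse-copy --disk name=opensuse-from-snap,boot=yes
gcloud compute ssh opensuse-copy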
Any help with this would be much appreciated, and thank you very much in advance.

I was able to reproduce your issue. I'll report it to Google. If you run the command
gcloud compute instances get-serial-port-output <your-new-instance>
you will notice an error indicating that the disk could not be found.

SUSE fixed the issue yesterday on the SLES distros. The following new images, which are not affected by the bug, are now available:
sles-11-sp3-v20150310
sles-12-v20150310
We are still working on a fix for openSUSE, and we don't yet have a fix for existing instances.

A procedure to address running instances has been posted:
https://forums.suse.com/showthread.php?6142-Image-from-snapshot-will-not-boot&p=26957#post26957
The above post contains all the details; the procedure below addresses the question of what to do with running instances.
SUSE Linux Enterprise Server 11 SP3 (sles-11-sp3)
1.) Edit /etc/sysconfig/bootloader
In the "DEFAULT_APPEND" assignment replace "root=/dev/disk/by-id.." with "root=/dev/sda1". Reform the same substitution for the "FAILSAFE_APPEND" assignment.
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line, after "quiet"
2.) Edit /etc/fstab
Replace "/dev/disk/by-id..." with "/dev/sda1"
3.) Edit /boot/menu.lst
Replace "root=/dev/disk/by-id.." with "root=/dev/sda1" and "disk=/dev/disk/by-id/..." with "disk=/dev/sda" in both options.
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line starting with "kernel" (see the example line after this list)
4.) Reboot the instance
5.) Execute mkinitrd
6.) Edit /etc/udev/rules.d/70-persistent-net.rules (if it exists)
Remove the mac address condition, "ATTR{address}==.....", from the rules.
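For step 3 above, a sketch of what the edited kernel line in /boot/menu.lst might end up looking like; the kernel version and console options here are illustrative, not taken from a real instance, so keep whatever other parameters your instance already has:
kernel /boot/vmlinuz-3.0.101-0.47-default root=/dev/sda1 disk=/dev/sda console=ttyS0,38400n8 splash=silent quiet NON_PERSISTENT_DEVICE_NAMES=1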
SUSE Linux Enterprise Server 12 (sles-12)
1.) Edit /etc/sysconfig/bootloader
In the "DEFAULT_APPEND" assignment replace "root=/dev/disk/by-id.." with "root=/dev/sda1" and "disk=/dev/disk/by-id/..." with "disk=/dev/sda". Perform the same substitution for the "FAILSAFE_APPEND" assignment.
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line, after "quiet"
2.) Edit /etc/fstab
Replace "/dev/disk/by-id..." with "/dev/sda1"
3.) Edit /etc/default/grub
In the "GRUB_CMDLINE_LINUX_DEFAULT" assignment replace "root=/dev/disk/by-id.." with "root=/dev/sda1" and "disk=/dev/disk/by-id/..." with "disk=/dev/sda".
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line, after "quiet" (see the example line after this list)
4.) Create a new grub configuration (SLES 12)
export GRUB_DISABLE_LINUX_UUID=true
grub2-mkconfig > /boot/grub2/grub.cfg
5.) Execute mkinitrd
6.) Edit /etc/udev/rules.d/70-persistent-net.rules (if it exists)
Remove the mac address condition, "ATTR{address}==.....", from the rules.
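For step 3 above, a sketch of what the edited line in /etc/default/grub might end up looking like; the console option is illustrative, so keep whatever other parameters your instance already has:
GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/sda1 disk=/dev/sda console=ttyS0,38400n8 quiet NON_PERSISTENT_DEVICE_NAMES=1"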

A new openSUSE 13.2 image has been published that addresses the issue as well. New instances started from opensuse-13-2-v20150315 will work with no issues with the snapshot feature in GCE. For running instances, use the process outlined for SUSE Linux Enterprise 12; that should work, although I did not test the procedure on openSUSE.

Related

Oracle VM -- Metasploitable Issue "Can't Overwrite Medium..."

Thank you for taking the time to read this. I am trying to complete a lab for one of my classes and I am having an issue creating a new Metasploitable VM. I introduced an error when editing a file in the last one and had to delete it.
I am using a VMDK, obtained from SourceForge. The first machine installed with no issues. When trying to create a second machine, I continually get a "Can't overwrite medium..." error when finishing the set-up.
So far I have tried:
Checking the destination folder for any existing files and folders (hidden too).
Redownloading Metasploitable
Assigning a new name to the VM, using different specs, etc.
Restarting my PC.
What I don't understand is that the error is stating, if I am not mistaken, that it is trying to overwrite the VMDK in my Downloads folder... when I am trying to save it in c:\Users\natsu\VirtualBox VMs. I have triple checked all fields before running the installer. I do not understand why this is happening, considering the first Metasploitable machine I installed with absolutely no issues.

Docker build fails always with error hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1) Windows Containers

Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus in the source of the problem.
That said, these two lines alone (with nothing more) reproduce the problem.
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Función incorrecta (Incorrect function). (0x1).
Windows 10 Pro 1909 (but it happened too in 1903)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 issue, some users have found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
Disabling that driver let me run a docker build with Windows containers for the first time in almost a year.
In my case Box Sync was the one using that driver.
EDIT: #GustavoTM found that pCloud causes the same problem.
EDIT2: #VonC noticed that some people in the GitHub issue have solved it by deleting another file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box, but only rename that file.
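From an elevated Command Prompt, the rename itself is just the line below; the .bak suffix is arbitrary, and you may need to take ownership of the file first:
ren C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak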
This is still an issue (still open) with Win10.
It looks like uninstalling cloud storage providers that use file system filters (Dropbox, Box, etc.) is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
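If you are not sure whether one of these drivers is present on your machine, a quick check from an elevated prompt is to list the installed drivers and filter for cbfs; driverquery and findstr are standard Windows tools, and the cbfs pattern simply matches the file names mentioned above:
driverquery /v | findstr /i cbfs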
In my case the problem was similar, but the cbfs6.sys file had been left behind by an uninstalled application, Jungle Disk, somewhere in the folder c:\Program files\Jungle disk .... It's part of the Callback File System signed by EldoS Corporation.
The folder could only be renamed, not deleted directly, so I deleted it immediately after the PC restart, before running Docker. It could probably also be deleted during a Docker service restart.

Creating a snapshot for a proxmox VM is not possible either in the GUI or CLI

I'm trying to make a snapshot of one of my VMs via the GUI, but the button to create the snapshot is greyed out, so I wanted to try doing it from the CLI, where I could see some helpful output, and I got this:
pct snapshot 106 "testing"
Configuration file 'nodes/pve01/lxc/106.conf' does not exist
The list of my VMs:
qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
106 TestingServer running 1024 32.00 23131
I'm not sure what this is about, so I was hoping somebody here could give me a hand; I would appreciate it.
I have the same issue on some of the volumes I've attached. Basically, there is a very specific requirement on the storage type in order to take a snapshot of a VM. The storage types table at https://pve.proxmox.com/wiki/Storage#_storage_types lists which types support snapshots and has more information.
Hope this helps.
You can check the storage type by going to Datacenter > Storage
Once storage is created you cannot change the type of that storage.
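If you prefer the CLI, pvesm status on the node prints each configured storage together with its type, which you can then compare against the table on the wiki page linked above:
pvesm status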
The command 'pct snapshot' is a command to snapshot a container (not a QEMU VM). The error is indicating that it can't find a container (LXC) with VM ID 106:
Configuration file 'nodes/pve01/lxc/106.conf' does not exist
The LXC in the path here indicates that it is looking for an LXC container. Your command 'qm list' lists QEMU VMs (not containers). So you are using the wrong command.
You need 'qm snapshot' instead of 'pct snapshot'.
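Assuming the VM's disks sit on a snapshot-capable storage type, the equivalent of what you were trying would be something like:
qm snapshot 106 testing
qm listsnapshot 106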

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

So I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server. I have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from my desktop PC, but that did not get my CrashPlan Pro app to connect to the CrashPlan service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour has changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server for the easiest method (without an ssh tunnel) is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect (because the token will change after every CrashPlan restart) or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it; you can start CrashPlan on your remote machine and it will connect properly, with no other changes necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
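As a rough sketch of that copy-and-edit step, assuming an OS X desktop, a server reachable as freenas.local, and the default client location under /Library/Application Support/CrashPlan (all of these are assumptions, so adjust paths and addresses to your setup):
sudo scp root@freenas.local:/var/lib/crashplan/.ui_info "/Library/Application Support/CrashPlan/.ui_info"
sudo nano "/Library/Application Support/CrashPlan/.ui_info"   # change the address at the end to the server's IP, e.g. 192.168.1.20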
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.

The local psql command could not be located

I'm following the instructions found here.
When I try to run $ heroku pg:psql or $ heroku pg:psql HEROKU_POSTGRESQL_BROWN I receive the following error message:
! The local psql command could not be located ! For help
installing psql, see local-postgresql
I can't find anything useful on the link it gives me (it just links to the instructions I was already using, but further down the page) nor can I find this error anywhere else.
If I've missed anything you need to know to answer this, just let me know. I'm rather new to all this and teaching myself as I go.
I had the same error even after installing Postgres locally.
But after seeing this,
I saw that "psql" was not in the PATH, so I then did
PATH=%PATH%;C:\Program Files\PostgreSQL\9.2\bin
which worked for me.
I have since solved this myself. When I ran heroku pg:info it said the version number was 9.1.8, while I was running 9.2 locally.
Installing 9.1.8 and ensuring PATH pointed to the appropriate folder solved the problem.
After you change the path, make sure to restart the terminal!
Set the PATH. To find out the path of your psql script (on Mac), open the SQL shell script from Finder in your Applications/PostgreSQL installation folder. This will give you a hint as to where it is installed. For me it opened a window which showed it is located here: /Library/PostgreSQL/8.4/scripts/runpsql.sh
Then, I set the PATH variable from the terminal window by typing:
$ PATH="/Library/PostgreSQL/8.4/bin:$PATH"
(this depends on the location of your PostgreSQL installation; find your bin path first. Another example: /usr/local/Cellar/postgresql@9.6/9.6.8/bin)
OR.....
You can also connect to the shell by opening the shell directly from your postgres installation folder. Then enter the credentials. If you don't know the credentials, here is how to find them out:
$ heroku pg:info
=== HEROKU_POSTGRESQL_RED_URL (DATABASE_URL)
$ heroku pg:credentials HEROKU_POSTGRESQL_RED_URL
Oddly, the top answer wouldn't work for me; my system would not add the path via cmd, even with administrator access (not sure why).
So try this instead: Windows key > environment variables > system variables
and add your PostgreSQL bin path as the last entry (your version may differ in the path).
Make sure you've installed the toolbelt, as psql is installed by default.
However, you also need to ensure you've installed a local copy of PostgreSQL; if you don't, the toolbelt will be unable to find the native psql client.
Assuming you have installed a local copy of PostgreSQL, make sure you can execute psql from the command line directly (i.e. make sure your PATH is set correctly). If the command does not execute, check your PATH; if it does execute, see if you can connect via the psql connection string provided in the Heroku control panel. If you can connect, reinstall the toolbelt; if you are unable to connect, provision another dev database and try again.
If there are still issues, I would suggest contacting Heroku support for assistance after verifying no API issues are listed on the status page located here.
I got rid of this annoying message on Windows by adding a path element without the spaces, i.e.
C:\Progra~1\PostgreSQL\9.4\data
instead of
"C:\Program Files\PostgreSQL\9.4\data"
I followed the instructions here: http://www.computerhope.com/issues/ch000549.htm, which worked for me, if you prefer point-and-click configuration of the PATH variable.
This type of error usually appears in the Windows environment, because if you do not update the PATH after installing PostgreSQL, the heroku pg:psql command does not work.
So you need to update your PATH environment variable to add the bin directory of your Postgres installation. The directory will look like this:
C:\Program Files\PostgreSQL\<VERSION>\bin.
For more information, see the Heroku local setup documentation:
heroku-postgresql: Local setup
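A hedged one-liner for doing that from a Command Prompt, in case you prefer not to click through the GUI; setx rewrites the user PATH permanently and truncates values longer than 1024 characters, and the version folder is a placeholder you need to adjust:
setx PATH "%PATH%;C:\Program Files\PostgreSQL\<VERSION>\bin"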
I had the same problem and discovered that Heroku doesn't seem to provision the latest version of PostgreSQL by default. Where the Heroku Getting Started instructions said
heroku addons:create heroku-postgresql:hobby-dev
That provisioned a v10 database for some reason (which you can check by clicking on Heroku Postgres in the Add-ons tab of your dashboard). I deleted that database and provisioned a new database using the --version flag:
heroku addons:create heroku-postgresql:hobby-dev --version 11
As of now, at least, you can find the latest version of Postgres supported by Heroku at this link: https://devcenter.heroku.com/articles/heroku-postgresql#version-support-and-legacy-infrastructure
I'm writing this in early 2019, but according to the PostgreSQL website the next version (12) is "tentatively scheduled" for the third quarter of 2019, so if you're reading this in late 2019 the same problem may potentially come up for v12 instead.
On Mac you can use the following:
export PATH="/Library/PostgreSQL/12/bin/:$PATH"
The only solution that I found on Windows:
go to advanced system settings
go to environment variables
select Path variable and click Edit
add a new line and enter your bin directory path (C:\Program Files\PostgreSQL\<version>\bin) and click OK
restart your terminal
enter your psql command (heroku pg:psql)
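Once the terminal has been restarted, a quick sanity check is to confirm psql resolves on its own before going through Heroku again; the version printed should match your local installation:
psql --version
heroku pg:psql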