Google VM additional disk storage - ssh

I created a VM with Google Compute Engine and I'm running out of space on it, so I created another disk and attached it to the VM through the Google Console menu. However, when I log in to the VM through SSH, it still shows only the original disk space.
The original VM disk is 10GB and the new disk is 100GB. When I log in to console.developers.google.com and click on VM instances, I see my VM, and under its "disk" tab I see "VM1, disk-1".
Through SSH I still see Usage of /: 94.8% of 9.81GB. Do I need to run a command through SSH to make the VM use both disks?

Here's how to add another disk to a Google Compute Engine VM:
1. Create a new disk
2. Attach the disk to the VM
3. Format and mount the disk, e.g.,
$ sudo mkdir MOUNT_POINT
$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" \
DISK_LOCATION MOUNT_POINT
Looks like you did steps 1 and 2, so you just need to do step 3 to complete the process.
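If the safe_format_and_mount helper is not present on your image (it has been deprecated on newer Compute Engine images), here is a minimal manual sketch of step 3, assuming the new disk shows up as /dev/sdb (check with lsblk first):
$ lsblk                                   # identify the new, empty disk
$ sudo mkfs.ext4 -F /dev/sdb              # format it (destroys any data on that disk)
$ sudo mkdir -p /mnt/disk-1
$ sudo mount /dev/sdb /mnt/disk-1
$ echo '/dev/sdb /mnt/disk-1 ext4 defaults 0 2' | sudo tee -a /etc/fstab   # persist across reboots
After this, df -h should show the extra 100GB mounted at /mnt/disk-1; note that it does not grow /, it adds a separate mount point.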

Google cloud platform, vm instance's ssh permission

In my Google Cloud Platform VM instance, I accidentally changed the permissions of /etc/ssh, and now I can't access it using SSH or FileZilla.
The log is as below:
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0660 for '/etc/ssh/ssh_host_ed25519_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
key_load_private: bad permissions
The only things I can access are the gcloud command and the serial console.
I know I need to change the permissions back to 644 or 400, but I have no idea how, as I can't access the machine over SSH.
How do I change the permissions without SSH access?
Any help would be much appreciated!
This problem can be solved by attaching the boot disk to another instance.
STEP 1:
Shut down the instance with the SSH problem. Log in to the Google Cloud Console and go to Compute Engine -> VM instances. Click on your instance and make a note of the "Boot disk" name. This will be the first disk under "Boot disk and local disks".
STEP 2:
Create a snapshot of the boot disk before doing anything further.
While still in Compute Engine, go to Disks, click on your boot disk, and click "CREATE SNAPSHOT".
STEP 3:
Create a new instance in the same zone. A micro instance will work.
STEP 4:
Open a Cloud Shell prompt (this also works from your desktop if gcloud is set up). Execute this command, replacing NAME with your instance name (the broken SSH system), DISK with the boot disk name, and ZONE with the zone that the system is in:
gcloud compute instances detach-disk NAME --disk=DISK --zone=ZONE
Make sure that the previous command did not report an error.
STEP 5:
Now we will attach this disk to the new instance that you created.
Make sure that the repair instance is running before you attach the disk; an instance can get confused about which disk to boot from if more than one attached disk is bootable.
Go to Compute Engine -> VM instances. Click on your instance. Click Edit. Under "Additional disks" click "Add item". For name enter/select the disk that you detached from your broken instance. Click Save.
STEP 6:
SSH into your new instance with both disks attached.
STEP 7:
Follow these steps carefully. We will mount the second disk at /mnt/repair and then fix the permissions on /mnt/repair/etc/ssh and its contents (a consolidated sketch follows these steps).
Become superuser. Execute sudo -s
Execute df. Make sure that /dev/sdb1 is not mounted.
Create a directory for the mountpoint: mkdir /mnt/repair
Mount the second disk: mount /dev/sdb1 /mnt/repair
Change directories: cd /mnt/repair/etc
Set permissions for /etc/ssh (notice relative paths here): chmod 755 ssh
Change directories: cd ssh
Execute: chmod 644 *.pub
Execute: chmod 400 *key
ssh_config and sshd_config should still be 644. If not fix them too.
Shutdown the repair system: halt
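For reference, here is the whole repair sequence as one block, assuming the broken boot disk shows up as /dev/sdb1 on the repair instance (verify with df or lsblk before mounting):
sudo -s
df                                   # confirm /dev/sdb1 is not already mounted
mkdir /mnt/repair
mount /dev/sdb1 /mnt/repair
chmod 755 /mnt/repair/etc/ssh
chmod 644 /mnt/repair/etc/ssh/*.pub
chmod 400 /mnt/repair/etc/ssh/*key
halt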
STEP 8:
Now reverse the procedure: detach the disk from the repair instance and reattach it to your original instance as its boot disk. Then start your instance and connect via SSH.
Note: To reattach the boot disk you have to use gcloud with the --boot option.
gcloud beta compute instances attach-disk NAME --disk=DISK --zone=ZONE --boot

Proposal to Migrate OpenNebula Datastore from Local FS to NFS

I have an instance of OpenNebula with 2 nodes running KVM and a local file store. VM images are scp'd to each node, so there is no option for failover or live migration.
I would like to implement NFS shared storage and move the VMs from the local FS datastore to the NFS shared-storage datastore. OpenNebula supports migrating VMs between datastores, but only datastores of the same type, i.e. 'ssh' to 'ssh', not 'ssh' to 'shared'.
I am working on a method of achieving this, and would love some feedback as to why this is a good or a bad idea.
Thanks
OpenNebula doesn't currently support migrating VMs from one type of datastore to a different type of datastore. I have been working on a method that works, and I want to document it here to get some feedback and opinions on it.
A datastore type is identified primarily by the Transfer Manager driver (TM_MAD) setting. This setting cannot be changed through Sunstone or the CLI, so we need a method to do just that. This is what I did. I started with a fresh install of OpenNebula 5.4.13 in one VM, plus 2 node VMs, all running Debian 9 inside VMware virtual machines (don't forget to enable virtualisation in the VM CPU options).
NOTE: This is an experimental process so make sure you Backup everything first!
Steps
To migrate to a different store, there are a few steps we need to do. They are as follows:
Setup the NFS share exports,
Move the VM images to the NFS share and mount the datastore,
Change the datastore types,
Configure the nodes for NFS share.
Setup NFS Server
The first thing we want to do is set up the NFS share we want to use. I'm using a single share for the base datastore folder, but you could use separate shares for each datastore ID from different NFS servers (a minimal example follows this list).
On the NFS server create the datastore folder, e.g. mkdir /share/one_datastore,
Add the datastore path to /etc/exports and export the new share: exportfs -rav,
Confirm the share is available: showmount -e localhost
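For example, the exports entry might look like this (the 192.168.1.0/24 subnet is only a placeholder for your node network; adjust the options to your needs):
echo '/share/one_datastore 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -rav
showmount -e localhost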
Prepare to Migrate
Before we modify the datastores there are a few things to do first:
Shut down any running VMs and undeploy them (see the CLI sketch below). This saves the machine state and copies the images back to the image store,
Stop the Sunstone and OpenNebula services: systemctl stop opennebula && systemctl stop opennebula-sunstone.
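From the CLI, that looks roughly like this (the VM ID 7 is only an example; undeploy each running VM):
onevm list
onevm undeploy 7
systemctl stop opennebula && systemctl stop opennebula-sunstone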
Migrate Data
Shared storage shares the VM disk images so all the nodes can access the same data. So copy the VM data to the NFS share ready for mounting.
From the Sunstone frontend server confirm the NFS shares: showmount -e [nfs-server],
Create a temp folder to mount the share in: mkdir /mnt/datastore,
Temporarily mount the NFS share: mount [nfs-server]:/share/one_datastore /mnt/datastore,
Move the datastore folders to the share: mv /var/lib/one/datastores/* /mnt/datastore/
The OpenNebula datastore folders now live on the NFS server: ls /mnt/datastore should list folders 0, 1 and 2,
Mount the NFS share over the OpenNebula datastore folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the folders are available: ls /var/lib/one/datastores should list our 3 folders 0, 1 and 2,
Add the mount to /etc/fstab to persist it on boot (example below).
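The fstab entry might look like this ([nfs-server] is the placeholder used above; the _netdev option simply delays mounting until the network is up):
[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults,_netdev 0 0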
The OpenNebula frontend is now configured to access the datastore folders from the NFS share. Next we want to change the datastore types from ssh to shared.
Change Datastore Types
The datastore configuration is stored in the OpenNebula database /var/lib/one/one.db. We can change the driver type by editing the datastore configuration data, which tells OpenNebula which drivers to use and how to handle the datastore data. By default OpenNebula uses an SQLite database, with MySQL as an option. I'm using SQLite, but the same works for MySQL.
Open the OpenNebula database: sqlite3 /var/lib/one/one.db,
View all tables with .tables. datastore_pool is the table we want to modify,
List all the records in the table: select * from datastore_pool; will result in a screen-full of configuration data. Each record has an identifier oid which matches the datastore ID, like this (the first 0 is the datastore ID of the default SYSTEM datastore):
0|system|<DATASTORE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>system</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><DS_MAD><![CDATA[-]]></DS_MAD><TM_MAD><![CDATA[ssh]]></TM_MAD><BASE_PATH><![CDATA[/var/lib/one//datastores/0]]></BASE_PATH><TYPE>1</TYPE><DISK_TYPE>0</DISK_TYPE><STATE>0</STATE><CLUSTERS><ID>0</ID></CLUSTERS><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><IMAGES></IMAGES><TEMPLATE><ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS><DISK_TYPE><![CDATA[FILE]]></DISK_TYPE><DS_MIGRATE><![CDATA[YES]]></DS_MIGRATE><RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS><SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS><SHARED><![CDATA[NO]]></SHARED><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[SYSTEM_DS]]></TYPE></TEMPLATE></DATASTORE>|0|0|1|1|0
Now to change the datastore type. Grab the data from the 3rd column, body (you can run select body from datastore_pool where oid=0;), and copy it into your favourite text editor (that's the chunk starting with <DATASTORE> and ending with </DATASTORE>). Find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[shared]]></TM_MAD>
Find: <SHARED><![CDATA[NO]]></SHARED>
Replace with: <SHARED><![CDATA[YES]]></SHARED>
Now update the SYSTEM datastore record. Run the following command on the database, replacing [datastore-config] with the text block you just modified: update datastore_pool set body='[datastore-config]' where oid=0;
Updating the IMAGE datastore is a little different. There is no SHARED option, but we want to use either the shared or qcow2 drivers. I used qcow2. So: select body from datastore_pool where oid=1;:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace: <TM_MAD><![CDATA[qcow2]]></TM_MAD>
Update the record: update datastore_pool set body='[datastore-config]' where oid=1;
Update the FILES datastore (oid=2) by replacing <TM_MAD><![CDATA[ssh]]></TM_MAD> with <TM_MAD><![CDATA[shared]]></TM_MAD> and update it using the method above (or the replace() shortcut below).
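As an alternative to hand-editing the XML body, SQLite's built-in replace() function can make the same substitutions in place. An untested sketch, run inside the sqlite3 shell (back up one.db first; adjust the oid and strings for the other datastores):
update datastore_pool set body = replace(body, '<TM_MAD><![CDATA[ssh]]></TM_MAD>', '<TM_MAD><![CDATA[shared]]></TM_MAD>') where oid=0;
update datastore_pool set body = replace(body, '<SHARED><![CDATA[NO]]></SHARED>', '<SHARED><![CDATA[YES]]></SHARED>') where oid=0;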
Now that the datastores have been updated to use the shared driver, let's start Sunstone and check that the datastores show up.
systemctl start opennebula && systemctl start opennebula-sunstone
Jump into the Sunstone web UI and go to Datastores. Open each datastore to check that SHARED is enabled and that the correct drivers (shared or qcow2) are shown.
DON'T DO ANYTHING YET! We still need to configure the nodes.
Configure the Nodes
Because we stopped and undeployed the VMs, there shouldn't be any data left in the node datastores, so we can simply mount the NFS share over the datastores folder. Confirm the folders are empty first and make sure to take backups! This is an experimental process, so be warned. Right, let's get on with it:
Check the contents of /var/lib/one/datastores. If you are mounting each datastore-ID folder to its own NFS share, you can mount those individually instead of the entire datastores folder; in that case empty the 0, 1 and 2 folders. Otherwise remove all folders from the datastores folder,
If not already installed: apt-get install nfs-common,
Check for NFS shares: showmount -e [nfs-server],
Mount the nfs share to the datastore folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the mount, e.g. with df,
Edit /etc/fstab, adding the mount so it's mounted on the next boot (same entry as on the frontend; see below),
Restart the node to confirm the NFS datastore mount persists, and to give it a fresh start!
Repeat with all host nodes.
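The node-side fstab entry is the same as on the frontend (again, [nfs-server] is a placeholder):
# /etc/fstab entry on each node
[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults,_netdev 0 0
Then run mount -a to re-read fstab and confirm the share mounts cleanly before rebooting.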
Test it Out
In Sunstone go to the Hosts tab and check that they are up and running. Next grab a VM and deploy it. It should deploy without any issues and start booting.
Once it's up and running I like to continuously ping the VM while testing live migration. So start a ping (ping [vm-ip] -t on Windows) and then in Sunstone open the VM and do a 'Live Migrate' to another node. Watch the ping and check the logs to make sure it succeeded. I found I had to refresh the display and go to the Hosts tab to check that the VM had migrated; after that it showed correctly, but I think it's a caching issue in my browser. After the live migration you should still see the ping rolling along, with maybe one failed ping in the results.
Conclusion
So that's the process I used to migrate from ssh local storage to shared storage. I've tested it and it is working without any issues. However, if you do have any issues or have an opinion on this process, please let me know. If there are any pitfalls I have overlooked, please also let me know.
OK, have fun with it. I'm off to try moving the shared storage over to some kind of storage cluster like Ceph or GlusterFS!

Accessing external hard drive after logging into a remote machine using ssh command

I am doing an intensive computing project with a very old C program. The program requires the Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I run the program by logging onto a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save intermediate results, and the space on my local Mac fills up quickly (50 GB per user, as set by the administrator). These results are necessary for the next stage of computing, and I cannot delete any of them before the program finally produces the output data. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it, because the remote machine will respond with
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about this), the following command (run from your Mac) will do what you want: dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being backed by your local external drive. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions at How to make bidirectional pipe between two programs?. Also, if your SFTP server binary is somewhere else, substitute its path.
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and output its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data

Docker image push over SSH (distributed)

TL;DR Basically, I am looking for this:
docker push myimage ssh://myvps01.vpsprovider.net/
I am failing to grasp the rationale behind the whole Docker Hub / Registry thing. I know I can run a private registry, but for that I have to set up the infrastructure of actually running a server.
I took a sneak peek inside the inner workings of Docker (well, the filesystem at least), and it looks like Docker image layers are just a bunch of tarballs, more or less, with some elaborate file naming. I naïvely think it would not be impossible to whip up a simple Python script to do distributed push/pull, but of course I did not try, so that is why I am asking this question.
Are there any technical reasons why Docker could not just do distributed (server-less) push/pull, like Git or Mercurial?
I think this would be a tremendous help, since I could just push the images that I built on my laptop right onto the app servers, instead of first pushing to a repo server somewhere and then pulling from the app servers. Or maybe I have just misunderstood the concept and the Registry is a really essential feature that I absolutely need?
EDIT: Some context that hopefully explains why I want this; consider the following scenario:
Development, testing done on my laptop (OSX, running Docker machine, using docker-compose for defining services and dependencies)
Deploy to a live environment by means of a script (self-written, bash, few dependencies on dev machine, basically just Docker machine)
Deploy to a new VPS with very few dependencies except SSH access and Docker daemon.
No "permanent" services running anywhere, i.e. I specifically don't want to host a permanently running registry (especially not accessible to all the VPS instances, though that could probably be solved with some clever SSH tunneling)
The current best solution is to use Docker Machine to point at the VPS and rebuild the images there, but that slows down deployment, as I have to build the containers from source each time.
If you want to push docker images to a given host, there is already everything in Docker to allow this. The following example shows how to push a docker image through ssh:
docker save <my_image> | ssh -C user@my.remote.host.com docker load
docker save will produce a tar archive of one of your docker images (including its layers)
-C is for ssh to compress the data stream
docker load creates a docker image from a tar archive
Note that the combination of a docker registry + docker pull command has the advantage of only downloading missing layers. So if you frequently update a docker image (adding new layers, or modifying a few last layers) then the docker pull command would generate less network traffic than pushing complete docker images through ssh.
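If the image is large, you can also compress the stream explicitly instead of relying on ssh -C; a sketch of the same idea (the image name and host are placeholders):
docker save my_image:latest | gzip | ssh user@my.remote.host.com 'gunzip | docker load'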
I made a command line utility just for this scenario.
It sets up a temporary private docker registry on the server, establishes an SSH Tunnel from your localhost, pushes your image, then cleans up after itself.
The benefit of this approach over docker save is that only the new layers are pushed to the server, resulting in a quicker upload.
Oftentimes using an intermediate registry like dockerhub is undesirable, and cumbersome.
https://github.com/brthor/docker-push-ssh
Install:
pip install docker-push-ssh
Example:
docker-push-ssh -i ~/my_ssh_key username@myserver.com my-docker-image
The biggest caveat is that you have to manually add your local IP to Docker's insecure-registries config.
https://stackoverflow.com/questions/32808215/where-to-set-the-insecure-registry-flag-on-mac-os
Saving/loading an image on to a Docker host and pushing to a registry (private or Hub) are two different things.
The former has already been addressed by @Thomasleveil.
The latter actually does have the "smarts" to only push required layers.
You can easily test this yourself with a private registry and a couple of derived images.
If we have two images and one is derived from the other, then doing:
docker tag baseimage myregistry:5000/baseimage
docker push myregistry:5000/baseimage
will push all layers that aren't already found in the registry. However, when you then push the derived image next:
docker tag derivedimage myregistry:5000/derivedimage
docker push myregistry:5000/derivedimage
you may notice that only a single layer gets pushed - provided your Dockerfile was built such that it only adds one layer (e.g. by chaining RUN commands, as per Dockerfile Best Practices).
On your Docker host, you can also run a Dockerised private registry.
See Containerized Docker registry
To the best of my knowledge and as of the time of writing this, the registry push/pull/query mechanism does not support SSH, but only HTTP/HTTPS. That's unlike Git and friends.
See Insecure Registry on how to run a private registry through HTTP, especially be aware that you need to change the Docker engine options and restart it:
Open the /etc/default/docker file or /etc/sysconfig/docker for editing. Depending on your operating system, this is where your Engine daemon start options live.
Edit (or add) the DOCKER_OPTS line and add the --insecure-registry flag.
This flag takes the URL of your registry, for example:
DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"
Close and save the configuration file.
Restart your Docker daemon
You will also find instruction to use self-signed certificates, allowing you to use HTTPS.
Using self-signed certificates
[...]
This is more secure than the insecure registry solution. You must configure every Docker daemon that wants to access your registry.
Generate your own certificate:
mkdir -p certs && openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt
Be sure to use the name myregistrydomain.com as a CN.
Use the result to start your registry with TLS enabled
Instruct every docker daemon to trust that certificate.
This is done by copying the domain.crt file to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt.
Don’t forget to restart the Engine daemon.
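For reference, a registry started with those certificates might look roughly like this (a sketch using the official registry:2 image; the paths are assumptions):
docker run -d -p 5000:5000 --name registry \
  -v "$(pwd)"/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2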
Expanding on the idea of @brthornbury.
I did not want to dabble with running Python, so I came up with a bash script that does the same.
#!/usr/bin/env bash
SOCKET_NAME=my-tunnel-socket
REMOTE_USER=user
REMOTE_HOST=my.remote.host.com
# open ssh tunnel to the remote host, with a socket name so that we can close it later
ssh -M -S $SOCKET_NAME -fnNT -L 5000:$REMOTE_HOST:5000 $REMOTE_USER@$REMOTE_HOST
if [ $? -eq 0 ]; then
    echo "SSH tunnel established, we can push the image"
    # push the image to the remote host via the tunnel
    docker push localhost:5000/image:latest
fi
# close the ssh tunnel using the socket name
ssh -S $SOCKET_NAME -O exit $REMOTE_USER@$REMOTE_HOST
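To use it, tag the image you want to push for the tunnelled registry first (the image name below is a placeholder), then run the script (saved here as push-image-over-ssh.sh, a name of your choosing); this assumes a registry is already listening on port 5000 on the remote host:
docker tag my-docker-image localhost:5000/image:latest
./push-image-over-ssh.sh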

Detach Disk from GCE VM to mount and edit SSH

I am trying to detach a disk from a temp instance so I can mount it and edit ssh_config, but when I run gcloud compute instances detach-disk INSTANCE --disk mydisk it shows
ERROR: (gcloud.compute.instances.detach-disk) There was a problem fetching the resource:
- Insufficient Permission
Any suggestions? I'm new to Google Cloud.
You need to authorize your account before you can use gcloud commands. You can run the following command to authorize:
$ gcloud auth login
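If you are already logged in, it is also worth checking which account gcloud is using; the account (or, when running gcloud from inside a VM, the instance's access scopes) needs Compute Engine permissions. For example:
$ gcloud auth list
$ gcloud config set account YOUR_ACCOUNT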