Why can't I use the persistent disk's storage I just bought on Google Cloud Compute Engine?

I have just set up my first Google Cloud Compute Engine instance so I can run some Python scripts on large files. As part of the setup I added a 1 TB persistent disk.
When I SSH into the virtual machine I don't see the added storage, which means I can't download my dataset.
How do I access the persistent disk?
Thanks.

Adding an additional persistent disk makes the disk available to your Compute Engine instance, but you must then format it and mount it before use. This is similar to adding an extra physical disk to your desktop: the disk is there from a hardware perspective, but it must still be made known to the operating system.
There is documentation on the recipe here (Adding or resizing zonal persistent disks)
In summary (example commands follow this list):
Use sudo lsblk to find the device name.
Format the disk using sudo mkfs.ext4.
Use sudo mkdir to create a mount point.
Use sudo mount to mount the file system.
You can also edit /etc/fstab to mount the file system at boot time.
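Putting that together, here is a hedged sketch of the whole sequence, assuming the new disk shows up as /dev/sdb and you want it mounted at /mnt/disks/data (both names are assumptions; substitute whatever lsblk actually reports):

# Identify the new disk (e.g. /dev/sdb with no partitions and no mount point)
sudo lsblk

# Format it with ext4 (this destroys any existing data on the disk)
sudo mkfs.ext4 -m 0 -F /dev/sdb

# Create a mount point and mount the file system
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/data

# Optionally mount it at boot by adding a line to /etc/fstab,
# ideally keyed by UUID (sudo blkid /dev/sdb shows the UUID):
# UUID=<uuid-of-disk> /mnt/disks/data ext4 discard,defaults,nofail 0 2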


How to mount a Windows volume without a letter in "Linux for Windows"

I have the following task:
I installed Linux for Windows on a Windows 10 Pro computer;
I installed Ubuntu 18.04 LTS;
I have a separate volume on the Windows computer which doesn't have a drive letter assigned to it;
I need to find a way to mount this letterless Windows volume in WSL Ubuntu.
I know the volume ID in case it is required.
Any ideas how to achieve this?
Thx, Vlad.
First of all, my question wasn't completely right: I wrote "Linux for Windows", but in fact I was talking about the "Windows Subsystem for Linux" (WSL).
The idea is to have one disk drive configured as hardware RAID 0 storage, built from two 1 TB Samsung SSDs. But to protect the data on the RAID 0 array, I want to use an HDD which will sync the data via rsync or some cloud service. I selected ownCloud.
Finally, I want to hide the HDD from the system and configure WSL to use it.
Here is how it works for me:
1) I created a folder here: c:\Users\Public\wsl
2) I mounted the HDD in the folder created above.
3) After the HDD is mounted, I created a subfolder for my favorite Linux distribution: c:\Users\Public\wsl\ubuntu
4) I installed Ubuntu 18.04 in this folder as described here: Installing WSL on Windows 10 without MS Store
5) The step above makes it possible to install the ownCloud server on the hidden HDD. Now, in order to get it running at system boot, one can create scripts as described here: how to autoload apache2 and mysql in WSL at Windows boot (a sketch of such a script follows this list).
6) And finally, to get the ownCloud server running at system boot, even before any user logs in, one needs to do the following:
*) Open the Windows Task Scheduler;
*) Add a task which runs autostart.sh (see the link above for how to make this script) on system boot;
*) Use wscript.exe (from Windows system32) as the command to run and the .vbs script as its parameter. Check this link if you need more details;
7) Finally, we need to set up the ownCloud client on the computer and connect it to the server by using http://localhost as the server URL.
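For reference, here is a minimal sketch of what the autostart.sh mentioned in steps 5 and 6 might look like; the exact services started and the passwordless-sudo assumption are mine, based on the ownCloud/apache2/mysql stack described above, not taken from the linked answer:

#!/bin/bash
# Hypothetical autostart.sh: start the services the ownCloud server needs inside WSL.
# Assumes passwordless sudo has been configured for these service commands,
# since the script runs unattended at Windows boot.
sudo service mysql start
sudo service apache2 start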
So, as a result of this setup, one gets a faster disk system based on 2x SSD configured in RAID 0, and to protect the data, one uses a local cloud server running in WSL to keep personal content synchronized to a standard HDD.
If the system is actively using the SSDs, the cloud won't get time to sync data. But as soon as resources are available, the system will sync data in the background to the HDD, which needs more time to write the same data.
This setup lets applications use the SSD array at full speed and does not dramatically limit the performance of the SSD subsystem, while data keeps syncing to the slower HDD whenever computer and SSD resources are available.

Is there any relation between VMware vMotion and VMFS?

I was studying VMware's vSphere suite, a cloud computing virtualization platform.
I could not figure out whether there is any relation between vMotion and VMFS in the suite.
vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime.
VMFS is a clustered file system that leverages shared storage to allow multiple physical hosts to read and write to the same storage simultaneously.
Is there any relation between them?
No.
As you mention, VMFS is the file system we use by default on "block" shared storage (i.e. LUNs). This allows us to have the same LUN mounted read/write on multiple ESXi hosts, which is not something most other file systems allow.
vMotion is when we move a running VM from one ESXi host to another. We do this by copying the running memory state from one host to the other. We then "stun" the VM for a short period of time and quickly move its virtual NIC to the new server. The VM "starts" on the far side in the same state, so it appears as if the VM has always been running. That is to say, we "move" the running VM even though we are actually just creating a new VM with exactly the same memory state and disk.
The only relationship is that if you have a VM whose VMDKs live in a datastore which is shared across multiple ESXi hosts, the vMotion process doesn't have to copy the VMDKs, which makes the process much simpler and faster. Since VMFS is one way we can support shared storage, it is common to have VMDKs on VMFS-based datastores (in this case one datastore = one VMFS-formatted LUN). Since VMFS is our oldest shared storage technology, it's the most common and usually the best understood by our customers.
However, any shared storage will work just fine for vMotion, including vSAN, VVOL, and NFS-based shared storage.

How to utilize disk space of Amazon EBS attached to a DCOS Agent machine

We have EBS volumes attached to our CentOS machines, which are used as DC/OS agent machines. However, when a DC/OS cluster is created, the mounted EBS storage is not counted toward the total DC/OS disk capacity.
Please let me know if there is any way to include it. DC/OS is otherwise working properly and we are able to run applications (ArangoDB, Spark) on it.
I've checked this link: https://dcos.io/docs/1.8/usage/storage/external-storage/ . But it doesn't seem to serve my purpose.
Mount disk resources are probably what you are looking for.
You can learn more about Mount and Path disks in the Mesos documentation.
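As a hedged sketch of what that typically involves, assuming the EBS device shows up as /dev/xvdf (an assumption; check lsblk) and following the /dcos/volume<n> convention that DC/OS uses to discover mount volumes, run something like this on each agent, with the agent service stopped:

# Format the EBS device and mount it where DC/OS looks for mount volumes
sudo mkfs.xfs /dev/xvdf
sudo mkdir -p /dcos/volume0
echo '/dev/xvdf /dcos/volume0 xfs defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /dcos/volume0
# Restart the agent so it re-registers and advertises the new Mount disk resource.

Depending on your DC/OS version, the agent may also need its old resource state cleared before it picks up the new volume; the DC/OS mount-volume documentation covers the exact procedure.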

How to extend storage in the Swift (OpenStack) service?

I installed a SAIO (Swift All In One) on my server; it's an Ubuntu 14.04 system.
I created a loopback device for storage with the following commands:
sudo mkdir /srv
sudo truncate -s 20GB /srv/swift-disk
sudo mkfs.xfs /srv/swift-disk
(http://docs.openstack.org/developer/swift/development_saio.html)
Now I don't have enough disk space for storage and I want to extend the Swift storage. What can I do?
Do you have disk space to create another virtual disk?
If so, you can create another virtual disk (follow the steps in the link you provided; they are the same steps you took to create the first virtual disk) and add the new virtual disk to the ring using the swift-ring-builder command.
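As a hedged sketch, assuming the defaults from the SAIO guide (the backing-file name, mount point, IP, ports, and weight below are assumptions; adapt them to match your remakerings script and existing ring layout, and run the ring commands as the user that owns /etc/swift):

# Create and mount a second loopback disk
sudo truncate -s 20GB /srv/swift-disk2
sudo mkfs.xfs /srv/swift-disk2
sudo mkdir -p /srv/node/sdb2
sudo mount -o loop,noatime /srv/swift-disk2 /srv/node/sdb2

# Add the new device to each ring and rebalance
cd /etc/swift
swift-ring-builder object.builder add r1z1-127.0.0.1:6010/sdb2 100
swift-ring-builder container.builder add r1z1-127.0.0.1:6011/sdb2 100
swift-ring-builder account.builder add r1z1-127.0.0.1:6012/sdb2 100
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder rebalance

After rebalancing, make sure the updated .ring.gz files are in /etc/swift (on an all-in-one they already are) and restart the Swift services so they pick up the new device.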

Accessing backing storage file from both Host machine and emulating machine when using USB Gadget

I have an embedded machine that is running g_file_storage. I would like to be able to access the backing store in read-only mode from the machine itself while g_file_storage is running and a host machine is dropping files into this backing store.
Any idea how one can achieve that? I know that it is not advisable, but I would like to try it anyway, and I simply need read access. I won't need to modify the backing store while it is connected.
I was able to achieve this by creating a script that listens for specific events in dmesg that signal a user has dropped a file onto the USB drive. When that happens, we unmount the drive, mount it locally on the machine, grab the file dropped by the user, and then re-export the drive as a mass storage device.
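A minimal sketch of such a script is below. The backing-file path, destination directory, and dmesg pattern are all assumptions you would replace with values from your own setup, and unloading/reloading g_file_storage is just one way to implement the "unmount, mount locally, re-export" step described above:

#!/bin/sh
BACKING=/data/usb_backing.img    # assumed path of the g_file_storage backing file
MNT=/mnt/backing                 # local mount point on the embedded machine
DEST=/home/user/incoming         # assumed destination for grabbed files
PATTERN="file-storage"           # assumed dmesg marker; adjust to the messages your kernel logs

mkdir -p "$MNT" "$DEST"
# dmesg -w (util-linux) follows the kernel log; on busybox you may need to poll dmesg instead
dmesg -w | while read -r line; do
    case "$line" in
        *"$PATTERN"*)
            modprobe -r g_file_storage                          # stop exporting the backing store to the host
            mount -o loop,ro "$BACKING" "$MNT"                  # mount it read-only locally
            cp "$MNT"/* "$DEST"/                                # grab the dropped file(s)
            umount "$MNT"
            modprobe g_file_storage file="$BACKING" removable=1 # re-export as mass storage
            ;;
    esac
done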