I have the following task:
I installed Linux for Windows on a Windows 10 Pro computer;
I installed Ubuntu 18.04 LTS;
I have a separate volume on the Windows computer which doesn't have a drive letter assigned to it;
I need to find a way to mount this letterless Windows volume in WSL Ubuntu.
I know the volume ID, in case it is required.
Any ideas on how to achieve this?
Thx, Vlad.
First of all, my question wasn't completely right: I wrote "Linux for Windows", but in fact I was talking about the "Windows Subsystem for Linux" (WSL).
The idea is to have one disk drive as hardware-configured RAID 0 storage, built from 2x 1 TB Samsung SSDs. But to protect the data on the RAID 0 array, I want to use an HDD which syncs the data via rsync or a cloud service; I selected ownCloud.
Finally, I want to hide the HDD from the system and configure WSL to use it.
Here is how it works for me:
1) I created a folder here: c:\Users\Public\wsl
2) I mounted the HDD into the folder created above (see the sketch after this list).
3) After the HDD was mounted, I created a subfolder for my favorite Linux distribution: c:\Users\Public\wsl\ubuntu
4) I installed Ubuntu 18.04 in this folder as described here: Installing WSL on Windows 10 without MS Store
5) The step above makes it possible to install the ownCloud server on the hidden HDD. Now, in order to get it running at system boot, one can create scripts as described here: how to autoload apache2 and mysql in WSL at Windows boot
6) And then, to get the ownCloud server running at system boot, even before any user logs in, do the following:
*) Open Windows Task Scheduler;
*) add a task which runs autostart.sh (see the link above for how to make this script) at system boot;
*) use wscript.exe (from Windows\System32) as the command to run and the VBS script as its parameter. Check this link if you need more details.
7) Finally, set up the ownCloud client on the computer and connect it to the server using http://localhost as the server URL.
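For steps 2 and 6, a minimal sketch of the Windows-side commands, assuming the volume GUID, the task name, and the .vbs path below are placeholders you replace with your own (running mountvol with no arguments lists the available volume GUIDs); run from an elevated Command Prompt:

:: step 2: mount the letterless HDD volume into the NTFS folder
mountvol c:\Users\Public\wsl \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

:: step 6: run the autostart wrapper at boot, before any user logs in
schtasks /create /tn "WSL ownCloud" /tr "wscript.exe C:\Users\Public\wsl\autostart.vbs" /sc onstart /ru SYSTEM

Once mounted this way, the HDD's contents are visible from WSL under /mnt/c/Users/Public/wsl without any extra mount.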
So, as a result of this setup, one gets a faster disk system based on 2x SSD configured in RAID 0, and, to protect the data, a local cloud server in a virtual machine keeps personal content synchronized with a standard HDD.
While the system is actively using the SSDs, the cloud won't get time to sync data. But as soon as resources are available, the system syncs the data in the background to the HDD, which needs more time to write the same data.
This setup lets applications use the SSD system at the full speed they require; it does not dramatically limit the performance of the SSD subsystem, while the data keeps syncing to the slower HDD whenever computer and SSD resources are available.
Related
I'm new to WSL2 and wondering if it's possible to run the same WSL2 Ubuntu instance on both my desktop and laptop.
Right now I am able to use the wsl --export and wsl --import method to save and load the system to/from my portable hard drive, but these methods take a long time.
I notice that wsl --import loads a file named ext4.vhdx. Is there a way to load directly from this file?
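For reference, that round trip looks something like this (the distro name and paths are placeholders):

wsl --export Ubuntu-20.04 E:\wsl\ubuntu.tar
wsl --import Ubuntu-20.04 E:\wsl\ubuntu E:\wsl\ubuntu.tar --version 2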
Update v2.0:
I was able to get a workaround and it works great.
Thanks to Booting from vhdx here, I was able to load directly from my vhdx file on my portable hard disk. Windows tracks its subsystems in the registry, so we can write our own entry (P.S.: make sure to get BasePath right; in the .reg file it starts with "\\\\?\\", which unescapes to \\?\, or you will not be able to access the subsystem's filesystem on your host system):
Windows Registry Editor Version 5.00
[HKEY_USERS\<your SID here>\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{<UUID here>}]
"State"=dword:00000001
"DistributionName"="distribution name"
"Version"=dword:00000002
"BasePath"="vhdx folder path" 【 e.g. "\\\\?\\E:\\S061\\WSL\\ubuntu-20"】
"Flags"=dword:0000000f
"DefaultUid"=dword:000003e8
I suppose the best way to do this would be to store ext4.vhd on a network storage device accessible to both devices.
I have previously mentioned how to move ext4.vhd; you can check that out here.
Basically, you need to export from one machine and import it on the other, while making sure the vhd file is configured so WSL can access it from the network storage.
Since this is *officially* not supported, expect some performance hits.
Another way would be to run WSL on one computer and ssh/remote-desktop to it from another device on the network.
I'm of the strong belief that sharing the same ext4 vhd between two VMs simultaneously would be a bad idea. See this and this Unix & Linux StackExchange answer, including the part about ...
note that sharing LVs/partitions on a single disk between the servers at the same time is NOT very safe. You should only access whole disks from any of the servers at one time.
However, as dopewind's answer mentioned, you can access the WSL instance on one computer (probably the desktop) from another (e.g. the laptop). There are several techniques you can use:
If you have Windows 10 Professional or Enterprise on one of the computers, you can enable Remote Desktop, which allows you to access pretty much everything on one computer from another. RDP ("Remote Desktop Protocol") even works from other devices such as an iPad or Android tablet (or even a phone, although that's a bit of a small screen for a "desktop"). That said, there are better, more idiomatic solutions for WSL ...
You could enable the SSH server on the Windows 10 computer with the WSL instance (instructions). This may sound counterintuitive to some people, since Linux itself running in the WSL instance also includes an SSH server (by default). But by SSH'ing from (for example) your laptop into your desktop's Windows 10, you can then launch any WSL instance you have installed (if you choose to install more than one) via wsl -d <distroName>. You also avoid a lot of the network unpleasantness in the next option ...
You could, as mentioned above, enable SSH on the WSL instance (usually something like sudo service ssh start) and then ssh directly into it. However, note that WSL2 instances are NAT'd, so there's a whole lot more hackery you have to do to get access to the network interface. There's a whole huge thread on the WSL GitHub about it. Personally, I'd recommend the "Windows SSH server" option mentioned above to start out with; you can worry about direct SSH access later if you need it.
Side note: Even though I have SSH enabled on my WSL instances, I still use Windows SSH to proxy to them, to avoid these networking issues.
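A minimal sketch of that proxying, assuming the Windows OpenSSH server is enabled on the desktop and that the hostname, user, and distro name below are placeholders:

# from the laptop: SSH into the Windows host, then drop straight into a WSL distro
ssh -t winuser@desktop.example.com "wsl -d Ubuntu"

The -t flag requests a terminal so the WSL shell behaves interactively.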
I have an Apache server running on Linux with the dav module enabled. When I am on my Windows computer, I mount it as a network drive. However, it shows the filesystem as FAT. Is there a way to change that to NTFS or something else that will display the disk size correctly and speed up the file transfers?
If Windows shows the filesystem as FAT, it simply means that there's a bug in Windows. When Windows talks to the WebDAV server, the connection is neither FAT nor NTFS: it's talking 'WebDAV', and the underlying storage mechanism is hidden and irrelevant.
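For reference, a WebDAV share is typically mapped through Windows' built-in WebClient redirector; a minimal sketch, with a placeholder URL:

:: map the WebDAV share to W: via the WebClient redirector
net use W: https://server.example.com/dav /persistent:yes

The reported filesystem type comes from that redirector, not from the Linux server's actual disk.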
In a VMware virtual machine I have installed CentOS on Windows 7. Now I want to reinstall my Windows 7, but I do not want to lose my CentOS virtual machine. I have Googled this topic many times but did not find any helpful information.
Any help?
Thanks
Your virtual machine is saved in the form of multiple files, which you can easily back up to an external hard drive or to the cloud. If you are using VMware, your machine will be split into .vmdk, .vmx, .vmxf, .vmsd and .nvram files, depending on your VM configuration.
Just check where you store the VM files, and back them up before reinstalling the host system. Afterwards, import the .vmx file back into VMware.
In VMware Player, right-click your VM, go to Settings, then Options, and under Working Directory you should see where your VM files are stored. Just back up that entire folder before reinstalling.
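A minimal sketch of the backup step, assuming the working directory and backup target below are placeholders for your own paths:

:: copy the whole VM folder, subfolders included, to an external drive
robocopy "C:\Users\you\Documents\Virtual Machines\CentOS" "E:\VM-backup\CentOS" /E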
How about running a Linux application on a Windows platform without any OS virtualization?
Let's say we have Linux software installed on a Windows machine which could run successfully on Windows with the approach described below:
A normal Windows application runs on Windows inside a virtual address space, as on any operating system. The program loader loads the libraries the application requires from the physical drive into the virtual address space; all the libraries related to the application get loaded on demand through file-system APIs.
Now let's go a different way: instead of creating a virtual address space on the local system, we create a process address space on a different machine which is capable of running the application. In our case, we create the address space for the Linux application on a remote Linux machine instead of the local Windows machine. All file-system access can be intercepted on the remote machine and transferred to the local Windows machine. In this way, a Linux application located on the local Windows machine creates its process address space on a remote Linux machine while accessing the file system on the local Windows machine; all file-system APIs can be remoted and routed to the local machine. The Linux application's UI can be captured on the Linux machine and sent for display on the local Windows machine.
In this way, applications from one platform could run on another platform as well, without the need for OS virtualization. What is your opinion of this approach, and how feasible is it? Is there any big flaw that makes it non-feasible?
That little word "API" that you have used there means translating the entire set of system calls of one operating system to another. Calls that go into creating a socket connection, creating a directory, file locking, etc.: EVERYTHING changes. You've discussed just memory here; the GUI has its own calls, and so do drivers and networking.
By the end of six years, that little million lines of code you would have written to achieve all this, when packaged and bundled, will be called, surprise, surprise: a hypervisor.
My IDE is Eclipse, running in Ubuntu 12.10 inside a VirtualBox VM. I currently work in two locations - one office has a Windows 7 PC, the other has a Mac. It seemed most efficient to move my VM onto a high-speed USB flash drive, then carry it between offices. It hasn't worked out.
I used the PC to copy the VM to the flash drive, and tested it there. It worked. I took it to the other office, plugged it into the Mac, started VirtualBox and tried to boot the VM. It said 'can't find drive at E:...' It expected a Windows drive location. So, I tried removing the disk from the VM and re-mounting it on the Mac. That resulted in a 'UUID already in use' error.
Is this transport method possible? I don't want to have to run sethduuid every time I change offices.
The VirtualBox configuration files contain paths to the virtual hard disks, so copying them to another host is problematic. The simplest solution would be to create two similar configurations, one on each host, and copy only the disk file to the external flash drive. Configure the path to the disk file on each host independently so it fits that platform.
The drawback is that you have to maintain two configurations, but they shouldn't change that often anyway.
The UUID error happens if you try to add a disk image to the Virtual Media Manager with a UUID that matches an already existing disk image. This might be because you copied a disk image in the past without replacing the UUID. Check your disk files for duplicate UUIDs.
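If you do find a duplicate, a minimal sketch of assigning a fresh UUID to the copy (the path is a placeholder):

# give the copied disk image a new random UUID so both copies can be registered
VBoxManage internalcommands sethduuid /path/to/copied-disk.vmdk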