Is it bad to mount WSL paths into a container running under Docker for Windows? - windows-subsystem-for-linux

I'm aware that it's not a good idea to access WSL Linux files (located in %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\) directly from Windows, but does that recommendation also apply to mounting a WSL path as a volume in a container running under Docker for Windows?
For example, if I first do this on Windows:
mklink /j %USERPROFILE%\wsl %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs
Then do this in WSL with Docker already configured:
$ docker run --rm -v /c/Users/$USER/wsl/home/$USER/myapp:/myapp -ti ubuntu:18.04 bash
The above assumes the requisite "root=/" in "/etc/wsl.conf" and that the user has the same name in both environments.
I can see my files inside the container under "/myapp" just fine, but I'm not sure whether it's safe to write to that path. If both WSL and the container are running Ubuntu, is it any safer?
I really prefer to work full-time from WSL with my home directory containing the familiar Linux dot files.
And just for kicks, what if in WSL "$HOME/myapp" is a symlink to "/c/myapp"? Yes, I should then just use -v /c/myapp:/myapp for simplicity, but is traversing through the rootfs paths really bad?

Accessing the file paths through Docker on Windows still uses Windows file semantics, so writing through them risks corrupting your WSL distro instance. However, the newest Windows Insider builds include a Plan 9 server embedded into the proprietary /init that allows access to Linux files from Windows, essentially via a network share (\\wsl$). See https://blogs.msdn.microsoft.com/commandline/2019/02/15/whats-new-for-wsl-in-windows-10-version-1903/
An alternative would be to use ssh/scp with the Win32-OpenSSH client on the same Windows host (or another one), or from a Linux host.
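On 1903+ builds, that share is exposed as \\wsl$\<distro>, so the mount can point at the share instead of a rootfs junction. A minimal sketch, assuming a distro named "Ubuntu" and that Docker for Windows accepts forward-slash host paths (the helper name is made up):

```shell
# Hypothetical helper: turn an absolute WSL path into its \\wsl$ share
# form (assumes Windows 10 1903+ and a distro named "Ubuntu").
wsl_share_path() {
  printf '//wsl$/Ubuntu%s\n' "$1"
}

wsl_share_path /home/me/myapp   # prints //wsl$/Ubuntu/home/me/myapp
```

The container could then be started from the Windows side with something like docker run --rm -v "$(wsl_share_path /home/me/myapp):/myapp" -ti ubuntu:18.04 bash; unlike going through the rootfs directory, writes travel through the Plan 9 server, which is the supported path for touching Linux files from Windows.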

Related

Why doesn't WSL create /run/user/${uid} directory?

I wonder why there is no directory like /run/user/1000 when using WSL 2 with the Ubuntu 18.04 image?
How can I fix this (if possible)?
The user runtime directory /run/user/$UID is a tmpfs created and mounted by systemd on user login.
Since WSL instances do not support systemd, there is no daemon that creates this directory. This also means that systemd/systemctl commands do not work in your WSL instances. Please refer to "Why is systemd disabled in WSL?" for more details and discussion of how to hack systemd into your WSL box.
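Until systemd support exists, you can recreate the directory by hand at session start. A minimal sketch (it uses /tmp because /run/user requires root; the details that matter are user ownership and mode 0700):

```shell
# Recreate what systemd-logind would normally provide: a private
# per-user runtime directory, exported as XDG_RUNTIME_DIR.
uid=$(id -u)
dir="/tmp/run-user-$uid"   # a real setup would use /run/user/$uid (needs root)
mkdir -p "$dir"
chmod 700 "$dir"           # must be private to the user
export XDG_RUNTIME_DIR="$dir"
stat -c '%a' "$dir"        # prints 700
```

Dropping something like this into ~/.profile makes tools that expect XDG_RUNTIME_DIR (e.g. some D-Bus clients) behave better inside WSL.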

Mount VHDX within WSL Ubuntu 18.04 to edit Linux files

I'm looking to change a process I have (currently an elevated PowerShell script running on Windows 10, and I want to keep it close to that) that uses the Paragon Linux Filesystem for Windows tool. While it does work, it doesn't work consistently. What I'd like to do instead is to use WSL on Windows 10, 1909 currently (will go to 2004 when available), to mount a VHDX which contains two partitions: /dev/sda1 for /boot and /dev/sda2 for a Linux LVM. The OS within this VHDX is CentOS 7.5, and the filesystem I want to modify is formatted as ext4. I need to edit some files within a logical volume in the group.
Currently, I'm running into an issue where qemu-nbd doesn't help, as there doesn't appear to be an NBD kernel-mode driver in the Microsoft Linux kernel shipped with the Ubuntu 18.04 image from the Windows Store. I've tried guestfish (using guestmount), but it is unable to find an operating system and fails to mount any of the volumes.
Is this possible? Am I going down the wrong path, and is this not possible?
As I understand your question, you want offline access to a .vhdx containing Linux, using PowerShell to manipulate some files. (I think the issue here is ext4 and file permissions.)
1. Mount the .vhdx you want to work on in a Linux virtual machine as a second disk
2. Install PowerShell 7 in the Linux VM
3. Configure PowerShell remoting in the Linux VM (via SSH)
4. Access the Linux VM from Windows PowerShell 7 and execute your scripts.
There are other ways using VMs + NBD, or using WSL and mounted drives, but this seems to be the most practical and efficient. As you surely know, you can start/stop the VMs from PowerShell.
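Step 3 above boils down to a one-line sshd configuration in the VM. A sketch, assuming PowerShell 7 is installed at /usr/bin/pwsh and using a placeholder hostname:

```shell
# In the Linux VM, register PowerShell as an SSH subsystem by adding
# this line to /etc/ssh/sshd_config, then restarting sshd:
#
#   Subsystem powershell /usr/bin/pwsh -sshs -NoLogo
#
# From Windows PowerShell 7 you can then open a remote session:
#
#   Enter-PSSession -HostName linux-vm.example -UserName me -SSHTransport
```

Because the session runs as a Linux user inside the VM, file edits land on the ext4 volume with native permissions, which sidesteps the rights problem.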

View files in docker container

I created a docker container. I can ssh into it. How can I view the files in my docker container with a GUI (specifically the WebStorm IDE)?
I'm running a Mac on OS X Yosemite 10.10.5.
The usual pattern is to mount your source code into the container as a volume. Your IDE can work with the files on your host machine and the processes running in the container can see the files. See the Docker volumes docs.
There might be a way to set up remote file access with WebStorm, but I'd recommend trying the other approach first.
docker run -d -v /mycodedir:/mydockerdir {libraryname}/{imagename}
If you mount your work directory and map it to the directory the container runs files from, you will be able to edit the files and see the changes in WebStorm.

Vagrant and / or Docker workflow with full OS X filesystem integration for seamless local feel?

Recently I've been dabbling with Vagrant and Docker. These are quite interesting tools, but I haven't been able to convince myself that they're the way to go quite yet on my OS X machine. Being an old Unix hand, I have to say that I like having a consolidated and sandboxed environment for development purposes.
I've seen a lot of chatter and a number of friends have been using vagrant with just stock vim for editing. I'm not really a fan of that approach and would probably prefer to use the vm provider's sharing mechanism OR, more likely, NFS.
Personally I'd like to be able to edit directly in TextMate, SublimeText, Emacs (on OS X), or even perhaps use RubyMine and its various IDE features, etc.
Is there any way to really get the workflow down so that such an environment will be essentially like working on a local environment without having to pull a lot of additional background strings to make things work out?
I suppose a few well placed scripts could go a long way, but I've not found any solid answers on really making this a seamless environment.
What actually worked for me was to use boot2docker, which makes it easy to install a lightweight virtual machine (with VirtualBox) that hosts your docker daemon and images. The only thing you need in order to run docker commands is to run eval "$(boot2docker shellinit)" when you open a new Terminal.
If you need to also have your files on an OS X folder and share them with a running docker image, you need some additional setup, but once you do it, you won't have to do it again.
Have a look here for a nice walkthrough on how to do it. The steps in short are:
Get a special boot2docker image that allows you to use shared folders for VirtualBox
Configure VirtualBox to share a folder:
VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
This will share your /Users folder with the boot2docker VM that hosts docker.
From your Mac, share the folder you need with a folder in a docker container, like:
docker run -it -v /Users/me/dev/my-project:/root/src:rw ubuntu /bin/bash
One small annoyance that I haven't found a way around is that you no longer access your software through localhost, because it actually runs on the boot2docker instance. You have to run boot2docker ip and use that IP instead.
Hope that helps!

How to Run a Program (which is compiled on Local Machine) on a Remote Machine through ssh?

I want to run a program, whose files are located and compiled on my local machine, on a remote machine through ssh. So far I scp (copy through ssh) its compiled files to the remote machine and then run it. Is it possible to skip the scp step and run it from the local machine?
rekire's suggestion is reasonable.
Alternatively, you could mount (part of) the development machine's file system on the target machine with sshfs and execute the remote binary as if it were stored locally on the target.
For more information on sshfs see e.g. https://help.ubuntu.com/community/SSHFS
I expect that both computers have the same architecture, or that you cross-compiled the program to fit the target.
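The sshfs route might look like the sketch below, wrapped in a function so the steps read as one unit (the "devhost" name and all paths are hypothetical; the target needs the sshfs/fuse packages installed):

```shell
# Sketch: run a locally built binary on the target via sshfs instead of
# copying it over with scp. Hostname and paths are placeholders.
run_remote_via_sshfs() {
  mkdir -p ~/mnt/dev
  sshfs devhost:/home/me/project ~/mnt/dev   # mount the dev tree over SSH
  ~/mnt/dev/build/myapp                      # execute as if stored locally
  fusermount -u ~/mnt/dev                    # unmount when done
}
```

A nice property of this approach is that a rebuild on the development machine is immediately visible on the target, with no copy step in between.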
I would use these two commands:
scp yourapp target:/path/to/store/yourapp
ssh target /path/to/store/yourapp
That requires a working ssh setup which authenticates you on the target system.
There are several ways to set this up; you could try this page: http://www.rebol.com/docs/ssh-auto-login.html
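The auto-login described on that page amounts to key-based authentication. A minimal sketch (the key lives in a throwaway directory purely for illustration; in practice you'd use ~/.ssh, and "target" is a placeholder host):

```shell
# Generate a passphrase-less key pair, then install the public key on
# the target; after that, scp/ssh stop prompting for a password.
keydir=$(mktemp -d)                              # demo dir; normally ~/.ssh
ssh-keygen -t ed25519 -N '' -f "$keydir/id_ed25519" -q
# Install it on the target (commented out; needs the real host):
#   ssh-copy-id -i "$keydir/id_ed25519.pub" target
ls "$keydir"
```

With the key installed, the two commands from the answer above run unattended, which also makes them easy to script.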