Mount VHDX within WSL Ubuntu 18.04 to edit Linux files

I'm looking to change a process I have that currently uses the Paragon Linux File Systems for Windows tool (it runs as an elevated PowerShell script on Windows 10, and I want to keep it close to that). While it does work, it doesn't work consistently. What I'd like to do instead is to use WSL on Windows 10 (1909 currently; I'll move to 2004 when available) to mount a VHDX that contains two partitions: /dev/sda1 for /boot and /dev/sda2 for a Linux LVM. The OS within this VHDX is CentOS 7.5, and the filesystem I want to modify is formatted as ext4. I need to edit some files within a logical volume in that volume group.
Currently, I'm running into an issue where qemu-nbd doesn't help, as there doesn't appear to be an NBD kernel-mode driver in the Microsoft Linux kernel that ships with the Ubuntu 18.04 image from the Microsoft Store. I've also tried guestfish (using guestmount), but it is unable to find an operating system and fails to mount any of the volumes.
Is this possible, or am I going down the wrong path?

As I understand your question, you want offline access to a .vhdx containing Linux so you can manipulate some files with PowerShell (I think the issue here is ext4 and file permissions). My suggestion:
1. Mount the .vhdx you want to work on in a Linux virtual machine as a second disk.
2. Install PowerShell 7 in the Linux VM.
3. Configure PowerShell remoting in the Linux VM (via SSH).
4. Access the Linux VM from Windows PowerShell 7 and execute your scripts (see the sketch after this list).
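A minimal sketch of the Linux-side setup (steps 3-4, plus mounting the attached disk's LVM volume inside the VM). Assumptions: the CentOS volume group is named "centos" with a logical volume "root", and PowerShell 7 is already installed as /usr/bin/pwsh; adjust the names for your system.
# Activate and mount the LVM data volume from the attached .vhdx
sudo vgscan
sudo vgchange -ay centos
sudo mkdir -p /mnt/vhdx
sudo mount /dev/centos/root /mnt/vhdx
# Expose PowerShell as an SSH subsystem so Windows can remote in
echo 'Subsystem powershell /usr/bin/pwsh -sshs -NoLogo' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd
From Windows PowerShell 7 you would then connect with Enter-PSSession -HostName <vm-address> -UserName <user> and run your file edits in that session.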
There are other ways, using VMs + NBD or using WSL and mounted drives, but this seems to be the most practical and efficient approach. As you surely know, you can start and stop the VMs from PowerShell.

Related

Access File System of a WSL as a separate Drive

I have installed Ubuntu 20.04 from the Microsoft Store as a Windows Subsystem for Linux (WSL) distribution, and I need to access its system files from Windows Explorer as a separate partition (like C: or D:).
Does anyone know how to do this? Thanks.
I think what you are searching for is the Linux file system. In WSL, since you have installed Ubuntu, you can find it at:
C:\Users\Username\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows\LocalState\rootfs

Is it bad to mount WSL paths into a container running under Docker for Windows?

I'm aware that it's not a good idea to access WSL Linux files (located in %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\) directly from Windows, but does that recommendation also apply to mounting a WSL path as a volume in a container running under Docker for Windows?
For example, if I first do this on Windows:
mklink /j %USERPROFILE%\wsl %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs
Then do this in WSL with Docker already configured:
$ docker run --rm -v /c/Users/$USER/wsl/home/$USER/myapp:/myapp -ti ubuntu:18.04 bash
The above assumes the requisite "root=/" in "/etc/wsl.conf" and that the user has the same name in both environments.
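For reference, a rough sketch of the /etc/wsl.conf setting referred to above, written from inside WSL (the "metadata" option is an extra assumption, not required for the root change):
sudo tee /etc/wsl.conf <<'EOF'
[automount]
root = /
options = "metadata"
EOF
With root = /, the C: drive is mounted at /c instead of /mnt/c; restart the WSL instance for the change to take effect.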
I can see my files inside the container under "/myapp" just fine, but I'm not sure whether it's safe to write to that path. If both WSL and the container are running Ubuntu, is it any safer?
I really prefer to work full-time from WSL with my home directory containing the familiar Linux dot files.
And just for kicks, what if in WSL "$HOME/myapp" is a symlink to "/c/myapp"? Yes, I should then just use -v /c/myapp:/myapp for simplicity, but is traversing through the rootfs paths really bad?
Accessing the file paths through Docker on Windows still uses Windows semantics to access the files, so you risk corrupting your WSL distro instance. However, the newest Windows Insider builds include a Plan 9 server embedded into the proprietary /init that essentially allows access to Linux files from Windows via a network share. See https://blogs.msdn.microsoft.com/commandline/2019/02/15/whats-new-for-wsl-in-windows-10-version-1903/
An alternative would be to use ssh/scp with the Win32 OpenSSH client on the same Windows host (or another one), or from a Linux host.
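As a rough illustration of the scp route (this assumes an sshd is already running inside the WSL distro on port 2222; host, port, user, and paths are placeholders):
# Pull a file out of the WSL distro over SSH instead of writing to rootfs from Windows
scp -P 2222 user@localhost:/home/user/myapp/config.yml .
# Push the edited copy back the same way
scp -P 2222 ./config.yml user@localhost:/home/user/myapp/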

How to get Sikuli working in headless mode

If we have a headless test server running Sikuli (both Ubuntu and Windows configurations are needed), how do we get it to work without a physical monitor, preferably supporting as many screen resolutions as possible?
I successfully got Sikuli running in headless mode (no physical monitor connected):
Ubuntu: use Xvfb (see the sketch below).
Windows: install a display driver on the (to-be-headless) machine from the VirtualBox Guest Additions display drivers, and use TightVNC to set the resolution remotely from another machine.
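A minimal sketch for the Ubuntu/Xvfb case (the display number, resolution, and script path are placeholders; the SikuliX jar name matches the one used later in this thread):
# Start a virtual framebuffer and point SikuliX at it
Xvfb :1 -screen 0 1280x1024x24 &
export DISPLAY=:1
java -jar sikulixide-2.0.5.jar -r /path/to/test.sikuli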
Detailed steps for Windows 7
Assume that:
Machine A: the machine to be made headless, running Windows 7, with a VNC server ready (e.g. TightVNC server installed and waiting for connections).
Machine B: will be used to remotely set up the virtual display driver on Machine A.
Steps:
Download the VirtualBox Guest Additions ISO file (VBoxGuestAdditions_x.y.z.iso; check for the latest version) on Machine A.
Extract the ISO file (e.g. with WinRAR) into a directory (let us call it folder D).
Using a command prompt, cd into folder D.
Driver extraction:
To extract the 32-bit drivers to "C:\Drivers":
VBoxWindowsAdditions-x86 /extract /D=C:\Drivers
For the 64-bit drivers:
VBoxWindowsAdditions-amd64 /extract /D=C:\Drivers
Go to Device Manager, choose "Add hardware", and install the extracted display driver from C:\Drivers.
Restart and connect with a VNC viewer; you should now be able to change the screen resolution.
There is other valuable info on Launchpad.
I got SikuliX working in a true headless mode in GCE with a Windows 2016 client system. It takes some duct tape and other Rube Goldberg contraptions to work, but it can be done.
The issue is that, for GCE (and probably AWS and other cloud environment Windows clients), you don't have a virtual video adapter and display, so, unless there's an open RDP connection to the client, it doesn't have a screen, and SikuliX/OpenCV will get a 1024x768 black desktop, and fail.
So, the question is how to create an RDP connection without having an actual screen anywhere. I did this using Xvfb (X Windows virtual frame buffer). This does require a second VM, though. Xvfb runs on Linux. The other piece of the puzzle is xfreerdp 2.0. The 2.x version is required for compatibility with recent versions of Windows; 1.x is included with some Linux distros, while 2.x may need to be built from sources, depending on what flavor of Linux you're using. I'm using CentOS, which did require me to build my own.
The commands to establish the headless RDP session, once the pieces are in place, look something like this:
/usr/bin/Xvfb :0 -screen 0 1920x1080x24 &
export DISPLAY=:0.0
/usr/local/bin/xfreerdp /size:1920x1080 /u:[WindowsUser] /p:"[WindowsPassword]" /v:[WindowsTarget]
In our environment we automated this as part of the build job kicked off by Jenkins. For this to work under the Jenkins slave, it was also necessary to run the Jenkins slave as a user process, rather than a service... this can be accomplished by enabling auto admin login and setting the slave launch script as a run (on logon) command.
For those looking to automate on EC2 Windows machines, this worked for me: http://www.allianceglobalservices.com/blog/executing-automation-suite-on-disconnectedlocked-machines
In summary, I used RDC to connect, put the following code in a batch file on the remote desktop, double-clicked it, and SikuliX started working remotely (kicking me out of RDC at the same time). Note that EC2 Windows machines default to 1024x768 when tscon takes over, which may be too small, so TightVNC can be used to increase the resolution to 1280x1024 before running.
tscon.exe 0 /dest:console
tscon.exe 1 /dest:console
tscon.exe 2 /dest:console
tscon.exe 3 /dest:console
START /DC:\Sikulix /WAIT /B C:\Sikulix\runsikulix.cmd -d 3 -r C:\test.sikuli -f C:\Sikulix\log.txt -d C:\Sikulix\userlog.txt
I just figured out a way to resolve a similar issue.
My environment:
Local: Windows PC
Remote (running SikuliX + the app I want to test): Windows EC2 instance
My approach:
1. Create a .bat file with these contents:
:: wait ~15 seconds so you can bring the app under test to the front
ping 127.0.0.1 -n 15 > nul
:: find this user's session ID and redirect the session to the console;
:: this disconnects RDP but keeps the desktop session alive for SikuliX
for /f "skip=1 tokens=3" %%s in ('query user %USERNAME%') do (
%windir%\System32\tscon.exe %%s /dest:console
)
:: run the SikuliX script (paths are placeholders)
cd "\path\to\sikulix"
java -jar sikulixide-2.0.5.jar -r /path/to/sikulix -c > logfile.log
2. Prepare your app.
3. Run the .bat file (right click > Run as administrator).
The ping gives you about 15 seconds to bring your app back to the front.
You will then be disconnected from the RDP connection.
Explanation:
ping acts as a "sleep".
The for loop finds the current user's session ID and uses tscon to redirect that session to the console, kicking you out of RDP while keeping the session (and its desktop) alive.

NFS unavailable until NFS server restart

I have quite a strange problem with NFS. I have two systems. One is my workstation, Ubuntu 13.04 with Linux kernel 3.8.0; here I've got a directory with the code I am working on: /home/user/source. The other is a virtual machine running on a remote server; it runs CentOS 6.3 and mounts the directory at /opt/source. The point is that the whole development environment needed to run my code lives on the VM, but I want to store the code itself on my local machine for easier access from Eclipse and other development tools.
Unfortunately, when I reboot my local machine, the NFS filesystem is unavailable on the virtual machine until I run /etc/init.d/nfs-kernel-server restart. I cannot figure out why. Here's the only line in /etc/exports on my local machine:
/home/user/source 10.0.19.192(rw,sync,subtree_check)
And here's the line from /etc/fstab on virtual machine, where the NFS is described:
10.10.1.205:/opt/source /opt/WP nfs defaults,nofail 0 0
It looks like your NFS server wasn't started at boot time. I think, even on Ubuntu 13.04, you can still manage this with the "rcconf" program:
sudo apt-get install rcconf dialog
sudo rcconf
Then, check off nfs-kernel-server.
If for some reason this isn't working, try a similar process with the sysv-rc-conf package.
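Alternatively, a minimal sketch without rcconf (assuming the init script is named nfs-kernel-server, as on stock Ubuntu):
# Enable the NFS server at boot
sudo update-rc.d nfs-kernel-server enable
# After the next reboot, confirm the export is live
sudo exportfs -v
showmount -e localhost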

Automate CentOS installation with VMware for testing

Is it possible to automate the installation of an OS using VMware or any other virtualization product?
One of our products consists of a customized version of CentOS that installs the OS and our application on a server. It's much like any CentOS/RHEL installation where you choose a mode that corresponds to different kickstart options, and then you choose your keyboard type. The rest of the installation is automatic.
What I'd like to have is an automated system that will create a new guest VM, boot it with the ISO image of our product, start the installation (including choosing the keyboard), wait for the reboot, and then launch a set of automated tests.
I know that there are plenty of ways to automate the creation of new VM guests from existing templates/images, and I know you can use the VIX API to interact with virtual machines, but the VIX API seems to require that VMware tools is already running (which won't be the case when you're booting from the CentOS install disk).
This answer (Automating VMWare or VirtualPC) indicates that you can script VMware to boot from an ISO that does an unattended installation, but I would really like to test the same process that our customers will be using.
Another option might be to use Xen's fully-virtualized mode and see if scripting it over the serial port will work.
TIA,
Jason
I have a very similar question; it is on Super User:
https://superuser.com/questions/36047/moving-vmware-os-image-as-primary-os-on-a-system
You can also use VirtualBox instead of VMware. The VirtualBox SDK allows you to directly control the keyboard, the mouse, the serial port, and the parallel port of the guest without the VirtualBox guest tools installed.
Unfortunately it doesn't offer a text console interface, but the serial port can be connected to a local pipe file, and that can probably be worked with just as well (see the sketch below).
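A rough sketch of wiring the guest serial port to a local pipe with VBoxManage (the VM name and pipe path are placeholders):
# Attach COM1 of the guest to a host-side pipe/socket
VBoxManage modifyvm "centos-test" --uart1 0x3F8 4 --uartmode1 server /tmp/centos-test-serial
# Read and write the guest console from the host, e.g. with socat
socat - UNIX-CONNECT:/tmp/centos-test-serial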
This may not be exactly what you need:
I have done something similar with an Ubuntu-based install. We used preseeding (Debian's equivalent of kickstart) to answer all the questions during the install, providing the preseed file and the installer via TFTP.
In addition to the official Ubuntu mirror, we added our own apt server with our own packages in the preseed file. We put a .deb version of VMware Tools on that apt server and added it to the list of packages to be installed.
The .deb of VMware Tools just contained the .tar.gz and a postinstall script that would extract it to /tmp and run the VMware install script (which has a switch to run unattended, so it does not ask any questions).
So after the reboot VMware Tools was up and running, and we could use VIX to script the rest (which was not very reliable). A rough sketch of such a postinstall script follows.
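This is only an illustration of the postinstall approach described above; the payload path is an assumption, and -d is the classic VMware Tools installer's accept-defaults switch:
#!/bin/sh
# postinst for a wrapper .deb that ships the VMware Tools tarball
set -e
# Unpack the tarball bundled in the package payload (path is an example)
tar -xzf /usr/share/vmware-tools-pkg/VMwareTools.tar.gz -C /tmp
# Run the installer unattended, accepting all defaults
/tmp/vmware-tools-distrib/vmware-install.pl -d
exit 0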
If you encounter problems running vmware-config.pl during boot, you could make a custom package that just extracts the tools, plus an init script that installs them on first boot, disables itself, and reboots.
Maybe you can use this strategy (replacing apt with yum, preseed with kickstart, and TFTP with a remastered ISO). If you really need to test the exact installer flow your users go through, including choosing a keyboard (preseed is not very different from kickstart here), this would obviously not work for you.