I have set up GraalVM by downloading it from the Oracle page and extracting the tar file to drive E: ("E:\Programs\Java\graalvm-ee-java17-22.0.0.2\bin").
Then I logged into the WSL2 (Ubuntu) bash shell and set up the environment variables in .bashrc.
Now I can execute the VM from the command line...
However, IDEA Community could not load this JVM into my project. When I try to add the JDK manually, it does not let me open the /mnt folder to specify the path.
I can't expand the e directory. How can I overcome this issue and get IDEA to recognize the JDK inside the WSL instance?
It's a Windows limitation: if you open \\wsl$\Ubuntu-20.04\mnt in File Explorer, you will not be able to browse the mounted drives. I'd recommend installing the JDK in a browsable location inside WSL instead.
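A minimal sketch of that, keeping the directory name from your question; the download location and archive name below are just examples, adjust them to what you actually downloaded:

# run inside the WSL2 Ubuntu shell; archive name is an example
mkdir -p ~/java
tar -xzf /mnt/e/Downloads/graalvm-ee-java17-linux-amd64-22.0.0.2.tar.gz -C ~/java
# point JAVA_HOME at the extracted directory
echo 'export JAVA_HOME=$HOME/java/graalvm-ee-java17-22.0.0.2' >> ~/.bashrc
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc

IDEA can then be pointed at \\wsl$\Ubuntu-20.04\home\<your user>\java\graalvm-ee-java17-22.0.0.2, which should be browsable from Windows.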
WSL2 stopped working suddenly. If I do a new installation of a Linux distro, it throws the following error when I click the Launch button for the distro in the Microsoft Store:
Installing, this may take a few minutes...
WslRegisterDistribution failed with error: 0x80070003
Error: 0x80070003 The system cannot find the path specified.
The wsl --help command works properly, but all other wsl commands either hang or throw an error as shown below.
For example, wsl -l throws this error:
The system cannot find the path specified.
I had the same thing happen to me after I moved the directory of my distro.
You have to unregister the distro from WSL:
wslconfig /u Ubuntu-20.04
and then just execute the installed exe and install the whole distro into WSL again.
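On newer Windows builds the wsl command itself can do the same thing; this is simply the equivalent of the wslconfig call above, with the same distro name:

wsl --unregister Ubuntu-20.04
wsl -l -v

The second command lists the registered distros and their state, so after reinstalling you can confirm the distro is back.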
I had to reinstall Windows to fix the issue; something got corrupted in the OS. However, before reinstalling the OS, since I had a lot of work stored in WSL2, I took a backup of the entire WSL2 image, the big .vhdx file. This file is the virtual hard disk of the WSL2 Linux installation. The files inside cannot currently be explored directly from Windows.
If one has not moved the file anywhere else, it is stored here: %LOCALAPPDATA%\Packages\<PackageFamilyName>\LocalState\ext4.vhdx
Before reinstalling the OS, after taking the backup, I wanted to test whether this backup would run fine on a fresh install of WSL2. I tested it on another machine by installing the same Ubuntu WSL2 distro and replacing the newly created .vhdx file with the backup file. It ran fine.
So it felt safe to do the full OS reinstall, then reinstall WSL2 Ubuntu and finally replace the .vhdx file with the old backup .vhdx file. I did lose some time, but my data and all of the applications/programs on WSL2 were intact.
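For reference, taking that backup boils down to something like the following from PowerShell; the destination path is only an example and the <PackageFamilyName> folder name differs per distro:

# make sure the VHDX is not in use before copying it
wsl --shutdown
New-Item -ItemType Directory -Force D:\wsl-backup | Out-Null
Copy-Item "$env:LOCALAPPDATA\Packages\<PackageFamilyName>\LocalState\ext4.vhdx" D:\wsl-backup\ext4.vhdx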
I know this is old, but I had the same problem after deleting a driver associated with Hyper-V. I fixed it by uninstalling the Virtual Machine Platform and Windows Hypervisor Platform features along with WSL, rebooting, and reinstalling all three; then I could install Ubuntu again.
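A rough sketch of that sequence from an elevated PowerShell prompt, assuming the standard optional-feature names (verify them on your machine with Get-WindowsOptionalFeature -Online):

# remove the three features (WSL, Virtual Machine Platform, Windows Hypervisor Platform)
Disable-WindowsOptionalFeature -Online -NoRestart -FeatureName Microsoft-Windows-Subsystem-Linux,VirtualMachinePlatform,HypervisorPlatform
Restart-Computer
# after the reboot, add them back and reboot once more before reinstalling Ubuntu
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux,VirtualMachinePlatform,HypervisorPlatform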
This is my first answer on Stack Overflow and English is not my first language,
so I will answer this question with images. My solution did not delete the data in any existing installed Linux distribution, at least for me.
Hope you can solve this problem successfully.
(screenshots showing the steps were attached here)
"Enable" Virtualization from your bios settings.
Settings may differ from bios to bios (search for your machine options)
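To check from Windows whether virtualization is already enabled in the firmware, one option (the exact wording varies by Windows build and language, and the line only appears when Hyper-V itself is not yet running) is:

systeminfo | findstr /C:"Virtualization"

which should report something like "Virtualization Enabled In Firmware: Yes".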
I'm looking to change a process I have that currently uses the Paragon Linux Filesystem for Windows tool (it is an elevated PowerShell script running on Windows 10, and I want to keep it close to that). While it does work, it doesn't work consistently. What I'd like to do instead is use WSL on Windows 10, currently 1909 (moving to 2004 when available), to mount a VHDX which contains two partitions: /dev/sda1 for /boot and /dev/sda2 for a Linux LVM. The OS inside this VHDX is CentOS 7.5, and the filesystem I want to modify is formatted as ext4. I need to edit some files within a logical volume in that volume group.
Currently, I'm running into an issue where qemu-nbd doesn't help, as there doesn't appear to be an NBD kernel-mode driver in the Microsoft Linux kernel used by the Ubuntu 18.04 image from the Windows Store. I've tried guestfish (using guestmount), but it is unable to find an operating system and fails to mount any of the volumes.
Is this possible, or am I going down the wrong path?
As I understand your question, you want to access a .vhdx containing Linux offline and use PowerShell to manipulate some files (I think the issue here is ext4 and file permissions). My suggestion:
1. Mount the .vhdx you want to work on in a Linux virtual machine as a second disk.
2. Install PowerShell 7 in the Linux VM.
3. Configure PowerShell remoting in the Linux VM (via SSH).
4. Access the Linux VM from Windows PowerShell 7 and execute your scripts.
There are other ways, using VMs + NBD or using WSL and mounted drives, but this seems the most practical and efficient! As you surely know, you can start/stop the VMs from PowerShell.
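For steps 3 and 4, a minimal sketch of the remoting part; the host name, user name, and the example file path are placeholders, and it assumes OpenSSH and PowerShell 7 are already set up in the VM:

# interactive session into the Linux VM over SSH
Enter-PSSession -HostName linux-vm -UserName admin
# or run commands non-interactively from a Windows-side script
Invoke-Command -HostName linux-vm -UserName admin -ScriptBlock { Get-Content /mnt/vhdx-root/etc/fstab }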
I'm aware that it's not a good idea to access WSL Linux files (located in %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\) directly from Windows, but does that recommendation also apply to mounting a WSL path as a volume in a container running under Docker for Windows?
For example, if I first do this on Windows:
mklink /j %USERPROFILE%\wsl %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs
Then do this in WSL with Docker already configured:
$ docker run --rm -v /c/Users/$USER/wsl/home/$USER/myapp:/myapp -ti ubuntu:18.04 bash
The above assumes the requisite "root=/" in "/etc/wsl.conf" and that the user has the same name in both environments.
I can see my files inside the container under "/myapp" just fine, but I'm not sure whether it's safe to write to that path. If both WSL and the container are running Ubuntu, is it any safer?
I really prefer to work full-time from WSL with my home directory containing the familiar Linux dot files.
And just for kicks, what if in WSL "$HOME/myapp" is a symlink to "/c/myapp"? Yes, I should then just use -v /c/myapp:/myapp for simplicity, but is traversing through the rootfs paths really bad?
Accessing the file paths through Docker on Windows still uses Windows semantics to access the files, therefore you will bork your WSL distro instance. However, the newest Windows Insider builds include a Plan 9 server embedded into the proprietary /init that essentially allows access to Linux files from Windows via a network share. See https://blogs.msdn.microsoft.com/commandline/2019/02/15/whats-new-for-wsl-in-windows-10-version-1903/
An alternative would be to use ssh/scp with the Win32 OpenSSH client on the same Windows host (or another one), or from a Linux host.
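For illustration only (distro name, user, and paths are placeholders): on 1903+ the network-share route looks like \\wsl$\Ubuntu-18.04\home\<user>\myapp from Windows, while the scp route from the Windows side, assuming an SSH server is running inside the WSL distro, would be roughly:

scp -r <user>@localhost:/home/<user>/myapp .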
I've recently moved from the old WSL, which was called Bash, to Ubuntu from the Microsoft Store. I'm using it along with the ConEmu terminal emulator. To configure these two together, I need to specify the ubuntu.exe path in ConEmu, but I can't find it. Do you know where it's installed? For instance, before, it was C:\Windows\System32\bash.exe.
Execute the where ubuntu command in cmd to find the correct path.
For example:
C:\Users\USERNAME\AppData\Local\Microsoft\WindowsApps\ubuntu.exe
Please help me figure out how to connect JProfiler on a Windows machine to a remote Virgo Jetty Server that is running on a Linux server.
Below are the steps I am following:
From the integration wizard I select Eclipse Virgo (Next).
Then I select the option "On a remote computer" with the Linux platform (Next).
Then I select the JVM vendor, version, etc. (Next).
I select the option "Wait for a connection from the JProfiler GUI" (Next).
I provide the remote hostname:port (Next).
I am stuck at specifying the remote installation directory.
We did not install JProfiler in our remote Linux environment, but we have the server running there. I have seen an option along the lines of "If JProfiler is not installed, you can create an archive that contains the profiling agent and extract it in the above directory", which asks for the folder where the archive should be created.
Can you please explain what exactly this means and what I need to do to create the archive? The only thing I have done so far is install the JProfiler evaluation version on my local machine and profile a local server.
Please help, and let me know if any additional information is required. Thanks in advance.
If you select the option to create an archive in the integration wizard, JProfiler will create a .tar.gz file that contains the libraries for the profiling agent. Transfer that archive to the Linux server and extract it somewhere, e.g. to /home/myname/jprofiler, by calling:
mkdir /home/myname/jprofiler
cd /home/myname/jprofiler
tar xzvf /path/to/jprofiler_agent_linux-x64.tar.gz
In the integration wizard, specify /home/myname/jprofiler as the remote installation directory.
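The wizard will then show the exact VM parameter that has to be added to the Virgo server's startup on the Linux machine; with the directory above it typically looks something like the line below (the port is whatever you chose in the wizard, 8849 being the default):

-agentpath:/home/myname/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849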