VMware Virtual Disk Development Kit - virtual-machine

I am using the VMware Virtual Disk Development Kit v5.0.
The command I am using is:
C:\Program Files\VMware\VMware Virtual Disk Development Kit\bin>vmware-mount I: C:\Users\Rushil\Documents\Virtual Machines\Ubuntu\Ubuntu-s001.vmdk
Unfortunately, the above command does nothing except that every time I run it, a (sort of) vmware-mount man page is displayed on the command prompt describing the other command options. Any solutions?

You have a space in the path to the VMDK, so you need to quote it:
vmware-mount I: "C:\Users\Rushil\Documents\Virtual Machines\Ubuntu\Ubuntu-s001.vmdk"

Related

Which WSL distro is using AppData\Local\Docker\wsl\data\ext4.vhdx after docker-desktop-data was exported and unregistered

Due to the increasing space consumption of WSL, I had to move my WSL distros to another disk.
Ubuntu
docker-desktop
docker-desktop-data
I used these commands.
wsl --shutdown
wsl --export (on all three of those distros)
wsl --import (already on another disk)
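For reference, the sequence for one distro looked roughly like this (run from PowerShell; the backup file name and target paths are just illustrative):
# export, unregister and re-import one distro on the other disk
wsl --shutdown
wsl --export Ubuntu E:\wsl-backup\Ubuntu.tar
wsl --unregister Ubuntu
wsl --import Ubuntu E:\wsl\Ubuntu E:\wsl-backup\Ubuntu.tar
The same was repeated for docker-desktop and docker-desktop-data.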
Now my environment is running fine, but the ext4.vhdx in AppData\Local\Docker\wsl\data is still present and I can't remove it because it is still in use.
When I look at the process handles, it is still being used by System, which doesn't tell me much.
If I run wsl --shutdown, all virtual disks on disk E: lose their handles, but the one on disk C: is still being used.
Would you know how to find out which part of WSL (if it even is WSL) is using it?
Since shutting down WSL does not release that handle, it might be used by something else.
It's not docker-for-desktop; that one uses a different disk.
Thanks for your suggestions.
Docker Desktop for Windows, which uses WSL2, stores all image and container files in a separate virtual volume (vhdx). This virtual hard disk file can automatically grow when it needs more space (up to a certain limit). Unfortunately, if you reclaim some space, e.g. by removing unused images, the vhdx doesn't shrink automatically. Luckily, you can reduce its size manually by running this command in PowerShell (as Administrator):
Optimize-VHD -Path $Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx -Mode Full
If the above command fails with
The system failed to compact 'C:\Users\Maxx\AppData\Local\Docker\wsl\data\ext4.vhdx':
The process cannot access the file because it is being used by another process. (0x80070020).
exit from Docker Desktop or stop the services and tasks using that file:
net stop com.docker.service
taskkill /IM "docker.exe" /F
taskkill /IM "Docker Desktop.exe" /F
wsl --shutdown
I reclaimed 15 GB of 40 GB.
Origin of the solution.
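Putting those steps together, a rough PowerShell sketch run as Administrator (the taskkill lines simply fail harmlessly if Docker isn't running; Optimize-VHD comes from the Hyper-V PowerShell module):
# stop everything that may hold a handle on the vhdx
net stop com.docker.service
taskkill /IM "docker.exe" /F
taskkill /IM "Docker Desktop.exe" /F
wsl --shutdown
# then compact the virtual disk
Optimize-VHD -Path $Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx -Mode Full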
You can just clean the data from the interface: Troubleshooting -> Clean / Purge data.
Upgrading from WSL1 to WSL2 made it a bit messy, but resetting docker-desktop to its default settings and then purging the data from WSL (using the docker-desktop Troubleshoot page) cleared it for me.

Docker build always fails with error hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1) (Windows Containers)

Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus on the source of the problem.
That said, these two lines alone (with nothing else) reproduce the problem.
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Función incorrecta (Incorrect function). (0x1).
Windows 10 Pro 1909 (but it happened too in 1903)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 issue, some users found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
Disabling that driver let me run a docker build with Windows containers for the first time in almost a year.
In my case Box Sync was the one using that driver.
EDIT: @GustavoTM found that pCloud causes the same problem.
EDIT 2: @VonC noticed that some people in the GitHub issue have solved it by deleting this other file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box, only rename that file.
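If you want to try the same workaround, a minimal sketch from an elevated command prompt (the .bak name is arbitrary; renaming a file under System32\drivers may additionally require taking ownership, and a reboot is needed so the already-loaded driver is released):
REM rename the filter driver so it is not loaded on the next boot
ren C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak
REM reboot so the currently loaded copy is released
shutdown /r /t 0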
This is still an issue (still open) with Win10.
It looks like uninstalling cloud storage providers that install file system filters, such as Dropbox or Box, is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
In my case the problem was similar, but the cbfs6.sys file was located among the leftovers of an uninstalled application, Jungle Disk, somewhere in the folder c:\Program files\Jungle disk .... It's part of the Callback File System, signed by EldoS Corporation.
The folder could only be renamed, not deleted directly. So I deleted it right after a PC restart, before running Docker; it could presumably also be deleted during a Docker service restart.

Using WinSCP script for SFTP access from SSIS

I am new to WinSCP and am attempting to create a script file that will eventually be used with SSIS to download files from an SFTP site. A lot of the literature WinSCP includes explains the file downloading or uploading portions. For the time being, I just want to create a script to test the connection first and will build from there.
So far I have saved the connection in WinSCP and have the following. The code below does not seem to do anything at all, and I am not sure where else to go, as I am still reading about WinSCP scripting. Is there a way, or can someone point me in a direction, to check whether I am in fact connecting through the script?
option batch on
option confirm off
open username@address
exit
I'm not sure what SSIS is (sorry), but I can tell you how I'd set it up from a Windows batch file, if that helps:
If you are open to using different software, consider Cygwin. It mimics a Linux shell, so Linux users on Windows have a lot of Linux utilities handy. That said, some of its commands can run on Windows straight from the command prompt (and are thus batchable). What you'd need to do:
1) Install Cygwin.
2) Create a "passwordless" login (using ssh-rsa authentication). To do this, start your Cygwin terminal and use the commands "ssh-keygen" and "ssh-copy-id" (more on that later).
3) Now you can run "sftp" from the DOS command prompt (does not require the Cygwin terminal) and sftp to your account. No password is required because of step 2).
A few follow up info:
What can run from the DOS command prompt and what must be run from the Cygwin terminal?
If you go to the "bin" directory of Cygwin (for me it's c:\cygwin\bin) you can see all the Cygwin utilities. Anything with an "exe" extension can be run from the DOS command prompt; if there is no "exe" extension, you must start the Cygwin terminal first.
How to set up ssh-rsa authentication?
You can pretty much google "ssh login without password" and pull up a lot of results. This is common for setting up a login from one Linux system to another, and you would use the same steps with Cygwin on Windows. My instructions are here:
http://geekswing.com/geek/unix/how-to-ssh-login-without-a-password-using-ssh-keygen-quick-tutorial/
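For step 2, a rough sketch from a Cygwin terminal (the user name and host are placeholders):
# generate an RSA key pair (accept the defaults; an empty passphrase allows unattended logins)
ssh-keygen -t rsa
# copy the public key to the SFTP server so it stops asking for a password
ssh-copy-id user@sftp.example.com
# afterwards this also works from a plain Windows command prompt
sftp user@sftp.example.com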
Storing session settings in WinSCP GUI and trying to access them from WinSCP script running in SSIS is generally a bad idea. I believe there's no example or guide on WinSCP site that would suggest doing that.
WinSCP stores its configuration in the registry in the HKEY_CURRENT_USER hive. SSIS typically runs under a dedicated system account, which has its own HKEY_CURRENT_USER hive and won't see the GUI configuration.
For details see WinSCP FAQ about your problem:
https://winscp.net/eng/docs/faq_scheduler
The best you can do is isolate your script from the configuration by using a session URL with the open command, instead of the stored site name.
See also https://winscp.net/eng/docs/scripting#configuration
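A minimal sketch of such a script, with the host name, credentials and host key fingerprint as placeholders (the -hostkey switch pins the expected server key so the script never has to prompt):
option batch abort
option confirm off
open sftp://username:password@sftp.example.com/ -hostkey="ssh-rsa 2048 xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
# if we get this far the connection worked; list the remote directory as a quick check
ls
exit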
Your actual problem can be completely different though. But that's hard to guess as you have not shared any details, such as error message, log file, etc.

Running a batch file on a Parallels VM with a command issued from the host Mac

I am trying to start a selenium grid node on a local vm (running Windows 7) by using a call from the command line on the host Mac.
The call merely tries to run a batch file on the vm.
When I run the batch file from within the vm, it executes correctly and the node starts, so I know that batch file works correctly.
The path I am using is correct, as I can run it from anywhere on the vm.
It is just that I can't seem to call it from the host Mac.
This worked at one point, but I wonder whether a Windows security update might have screwed things up?
I've tried to clear every firewall I could find. I am running Parallels 8 on a MacBook Air.
Here is the syntax I am using.
prlctl exec {parallels_vm_name} 'C:\Users\{user_name}\Documents\selenium\startIeNode.bat {IP_address_here}'
The quotes around your
'C:\Users\{user_name}\Documents\selenium\startIeNode.bat {IP_address_here}'
should end after .bat.
The only reason for those quotes is for the path, not for the command itself. It should look more like:
'C:\Users\{user_name}\Documents\selenium\startIeNode.bat' {IP_address_here}
Otherwise the IP address is treated as part of the pathname instead of as a parameter.
I have almost the same setup/use case that you describe: Win 7 VM on Parallels 8. I just set my system up to do exactly what you want.
Create the .bat file and verify it works on the VM.
Create a Windows shortcut to the batch file.
Drag the shortcut onto the Mac desktop, a folder, the Dock, etc.
Launch the batch file from the Mac shortcut.
This assumes Coherence mode, the VM setting that enables launching Windows apps from the Mac, and Parallels Tools installed.
Because of the way arguments are passed to prlctl exec, commands need to be executed with doubled backslashes, so it would be:
prlctl exec {parallels_vm_name} "C:\\Scripts\\myScript.cmd"
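Combining that with the quoting advice above, the original call would then look roughly like this (the placeholders stay as in the question):
prlctl exec {parallels_vm_name} "C:\\Users\\{user_name}\\Documents\\selenium\\startIeNode.bat" {IP_address_here}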

Debugging Solaris OS crash

I have access to a remote Solaris terminal which crashes occasionally, and I have to ask someone with physical access to boot the machine up, which it does successfully. I would like to know which tools/files should I look at to find out the cause of the crash so that I can make the necessary configuration changes and avoid it in the future.
What tools you can use will depend on what version of Solaris you are running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
Next, you can look for a system core file. If a core file is created, it would be in /var/crash/hostname where "hostname" is the name of the machine.
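For example, roughly (replace "hostname" with the name of your machine):
# tail -50 /var/adm/messages
This shows the most recent system messages.
# ls -lt /var/crash/hostname
This lists any saved crash dumps, newest first.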
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent version.
> ::status
This should give you a panic message; cut and paste that into Google and see what you can find.
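If ::status alone is not enough, a couple of other dcmds are usually available in the same mdb session (assuming a standard Solaris kernel mdb):
> ::msgbuf
This dumps the kernel message buffer leading up to the panic.
> ::stack
This prints the stack trace of the panicking thread.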
For more core file analysis read this:
http://cuddletech.com/blog/pivot/entry.php?id=965