I am taking part in the development of a disk filter driver.
Windows 8 has a special recovery mode: the Advanced Startup Command Prompt.
It is similar to the Safe Mode Command Prompt in previous versions of Windows, but it works differently. For example, not all commands are available, not all drivers are loaded, and our driver is not loaded either.
Our driver must be loaded because it encrypts/decrypts the disk contents; without it the disk contents are unavailable.
How can we solve this problem? How can the driver force Windows to load it in the Advanced Startup Command Prompt? Do we perhaps need to develop a special type of driver for this mode?
I cannot find detailed documentation about how the Advanced Startup Command Prompt works. Does such documentation exist?
Just add a small unencrypted boot/BCD/swap partition when building your install image. Windows creates its own ~300 MB partition anyway, so this is simple to build.
Three partitions after a diskpart clean:
1. Windows partition (installed on the first reboot of the Windows 8 setup)
2. Unencrypted NTFS partition that holds the BCD, the swap file, and some other items
3. A single VHD file (or more), on a huge encrypted partition
Use EasyBCD to choose the VHD of your choice. If you go from disk to disk, all you need is the VHD and the original BCD that Windows was installed into. Basically, a Windows installation is tied to its BCD; you cannot move a VHD from one disk to another without the BCD.
Pro, Ultimate, Server, et al. are VHD-bootable.
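If it helps, here is a rough sketch of that layout using the Windows 8 Storage cmdlets rather than raw diskpart; the disk number, sizes, labels and drive letters are just examples to adapt to your image:

# Sketch only: the three-partition layout described above, built with the
# Windows 8+ Storage cmdlets. Disk number, sizes and labels are examples.
Clear-Disk -Number 0 -RemoveData -Confirm:$false
Initialize-Disk -Number 0 -PartitionStyle MBR
# 1. Windows partition (Windows setup finishes installing here on its first reboot)
New-Partition -DiskNumber 0 -Size 60GB -DriveLetter W |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Windows"
# 2. Unencrypted NTFS partition that holds the BCD store, swap and other boot files
New-Partition -DiskNumber 0 -Size 2GB -DriveLetter B -IsActive |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Boot"
# 3. Large partition that gets encrypted and carries the bootable VHD file(s)
New-Partition -DiskNumber 0 -UseMaximumSize -DriveLetter V |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VHDStore"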
I'm looking to change a process I have that currently uses the Paragon Linux File Systems for Windows tool (the process is an elevated PowerShell script running on Windows 10, and I want to keep it close to that). While it does work, it doesn't work consistently. What I'd like to do instead is to use WSL on Windows 10 (1909 currently, moving to 2004 when available) to mount a VHDX that contains two partitions: /dev/sda1 for /boot and /dev/sda2 for a Linux LVM. The OS within this VHDX is CentOS 7.5, and the filesystem I want to modify is ext4. I need to edit some files within a logical volume of that volume group.
Currently, I'm running into an issue where qemu-nbd doesn't help, as there doesn't appear to be an NBD kernel-mode driver provided by the Microsoft Linux kernel in the Ubuntu 18.04 image from the Windows Store. I've tried guestfish (using guestmount), but it is unable to find an operating system and fails to mount any of the volumes.
Is this possible, or am I going down the wrong path?
As I understand your question, you want to access a .vhdx containing Linux offline and use PowerShell to manipulate some files (I think the issue here is ext4 and file permissions).
1. Mount the .vhdx you want to work on in a Linux virtual machine as a second disk
2. Install PowerShell 7 in the Linux VM
3. Configure PowerShell remoting in the Linux VM (via SSH)
4. Access the Linux VM from Windows PowerShell 7 and execute your scripts.
There are other ways, using VMs plus NBD or using WSL and mounted drives, but this seems to be the most practical and efficient!
As you surely know, you can start and stop the VMs from PowerShell.
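A minimal sketch of step 4, assuming SSH-based PowerShell remoting is already configured in the VM; the host name, user, device path and edited file below are all placeholders:

# Sketch only: drive the file edit inside the Linux VM from Windows PowerShell 7.
$session = New-PSSession -HostName "centos-vm.local" -UserName "admin"
Invoke-Command -Session $session -ScriptBlock {
    # The .vhdx is attached to the VM as a second disk, so its LVM volume can be
    # mounted natively and edited with normal tools (assumes passwordless sudo).
    sudo mount /dev/mapper/vg_data-lv_root /mnt
    sudo sed -i 's/old-value/new-value/' /mnt/etc/example.conf
    sudo umount /mnt
}
Remove-PSSession $session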
I support a group of engineers who use dual-boot systems with Windows 10 and Ubuntu 18.04, each on a separate SSD. With everyone working from home, if an engineer needs Ubuntu reinstalled, they will need to do it themselves. The problem is that, to do this, the person will need to determine which SSD to reinstall Ubuntu on. What I need is a way to tell, from Windows, which OS is on which SSD. I have tried:
diskpart
wmic
PowerShell
System Information
I have found ways to list the SSDs and their sizes, but none of them shows me the OS. In Linux I know several commands to get this information very easily, but Windows has me stumped. Can someone please help me with this?
I found the solution to my problem using PowerShell. I used the following command to reveal the system disk for Windows:
Get-Disk | Where-Object IsSystem -eq $True | fl
(Screenshot of the output of the above command.)
Note both the model number and the first three digits of the AllocatedSize; the disk with those values is the one you do not want to install on.
When installing Ubuntu and presented with the choice of which disk to install on, you now know which one not to use, so you can safely install on the other SSD or disk.
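As a small follow-up, the same idea can be turned into a one-screen overview (using properties exposed by Get-Disk) so the disk to avoid is obvious at a glance:

# List every disk with model, size and the IsSystem flag; the flagged disk is
# the Windows system disk you do not want to reinstall over.
Get-Disk |
    Select-Object Number, FriendlyName,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 1) } },
        IsSystem |
    Format-Table -AutoSize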
I've been using VMWare Player for ages now for both Windows development on my Linux box and (more importantly) automated testing of Windows applications.
Basically what I do is to:
have my development VM running; I build my code in it and automatically transfer the install package to Linux.
when this shows up on Linux, automatically copy a "known-state" snapshot VM to my test work area (I say snapshot, but it's really just a backup copy of the whole directory, not a real VMware snapshot).
also automatically start the VM in the work area once it's copied.
the VM has a single never-changing startup script which pulls a real startup script from Linux and runs it.
that startup script is responsible for pulling down the install package and doing a silent install.
it then runs a test suite and uploads results back to Linux where I have automated scripts which check them.
So, it's basically a one-button test process.
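To make the "copy the known-state VM, then boot it" step concrete, here is a rough sketch using PowerShell and vmrun from the VIX tools; the paths and VM name are placeholders, and the exact vmrun options depend on your VMware version:

# Sketch only: restore the known-state copy into the work area and boot it headless.
$goldenVm = "/vm/golden/win-test"     # backup copy of the whole VM directory
$workVm   = "/vm/work/win-test"
if (Test-Path $workVm) { Remove-Item -Recurse -Force $workVm }
Copy-Item -Recurse $goldenVm $workVm
& vmrun -T player start "$workVm/win-test.vmx" nogui   # the guest's startup script takes over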
Now I notice more and more people seem to be using VirtualBox.
First off, I'd like to confirm that it can also do a similar thing, primarily being able to back up and restore whole VMs and to have shared folders between VirtualBox and Linux.
Secondly, and this is the crux: I'd like to know if that has any concrete advantages over VMWare Player, especially for the automated testing jobs.
I switched to VirtualBox because of one concrete advantage: I wasn't able to set up the network as I wanted in Player. I don't remember whether it was bridging or port forwarding or something else that didn't work, but something in the network setup didn't behave the way I wanted (because I needed the paid version for that), and so I switched. Personally, I've found that both have good and bad sides, but I still use VirtualBox because of that networking issue.
I would like to script a build of a virtual machine from a base image, with a number of files and folders being copied across to the target machine, and some software also installed on it. Is this possible? Which technology is best suited to this - VMWare, Virtual PC/Server or Virtual Box? The solution has to run on WS2003 or WS2008, so the new Windows Virtual PC is not an option for me.
Thanks, MagicAndi.
I've used VMWare for this in the past, particularly the free VMWare Server product. Create a VM and install the OS as usual, then use sysprep to package the machine and feed it an unattend file. After sysprep shuts the machine down, save it off as your base image.
When you want to create a new image, make a copy of your base image, then use the vmware-mount tool to mount the newly copied image as a drive letter. Open up the unattend file, change the machine name, etc., and add any additional commands you want to run after the machine is powered on. Then vmware-mount /d and power on the virtual machine.
Script all this together and you've got a one-click machine generator.
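Roughly, the scripted version might look like this (vmware-mount and vmrun syntax varies between VMware releases, and every path and name below is a placeholder):

# Sketch only: clone the base image, patch the unattend file offline, then boot.
Copy-Item -Recurse "D:\VMs\BaseImage" "D:\VMs\NewMachine"
& vmware-mount J: "D:\VMs\NewMachine\disk.vmdk"            # mount the copied disk
(Get-Content "J:\sysprep\sysprep.inf") -replace "BASE-NAME", "NEW-NAME" |
    Set-Content "J:\sysprep\sysprep.inf"                   # change the machine name, etc.
& vmware-mount J: /d                                       # detach the disk again
& vmrun start "D:\VMs\NewMachine\NewMachine.vmx"           # power on; sysprep mini-setup runs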
I'm a fan of VMware Server -- it's free, and the vmx file format is easily understood.
The solution I have come up with is to bake all the changes I need to make to the virtual machine into a custom MSI, built using the Windows Installer XML (WiX) toolset. To install third-party software on the virtual machine, I can either track the changes each application installer makes to the OS (using Process Monitor from Sysinternals) and replicate them in my own custom MSI, or I can use a script (like this AutoIt script) to install the software from a shared directory. I am also looking into using White and PowerShell for scripting.
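For the shared-directory route, the silent install itself can be a single msiexec call driven from PowerShell (the share path and MSI name are placeholders):

# Sketch only: unattended install of the custom MSI from a network share.
Start-Process msiexec.exe -ArgumentList '/i', '\\buildshare\packages\MyApp.msi', '/qn', '/norestart' -Wait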
It looks like it may be possible to automate the creation of virtual images using MS Virtual Server 2005. The following articles detail the use of PowerShell scripts to automate the creation of virtual images:
Configuration Testing With Virtual Server, Part 1
Configuration Testing With Virtual Server, Part 2
From part 2, in the section Configuration Tests on a Virtual Machine, it seems possible to transfer files and schedule scripts to run. Using these articles as a basis, it should be possible to automate the building of an MS virtual image in the same way lordbrain described for a VMware image.
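As a flavour of what those articles do, the Virtual Server COM API can be driven from PowerShell along these lines (the ProgID and member names are as documented for Virtual Server 2005 R2; verify them against your installation):

# Sketch only: enumerate the VMs registered with Virtual Server via its COM API.
$vs = New-Object -ComObject "VirtualServer.Application" -Strict
foreach ($vm in $vs.VirtualMachines) {
    "{0}  (state: {1})" -f $vm.Name, $vm.State
}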
Is it possible to automate the installation of an OS using VMware or any other virtualization product?
One of our products consists of a customized version of CentOS that installs the OS and our application on a server. It's much like any CentOS/RHEL installation where you choose a mode that corresponds to different kickstart options, and then you choose your keyboard type. The rest of the installation is automatic.
What I'd like to have is an automated system that will create a new guest VM, boot it with the ISO image of our product, start the installation (including choosing the keyboard), wait for the reboot, and then launch a set of automated tests.
I know that there are plenty of ways to automate the creation of new VM guests from existing templates/images, and I know you can use the VIX API to interact with virtual machines, but the VIX API seems to require that VMware tools is already running (which won't be the case when you're booting from the CentOS install disk).
This answer (Automating VMWare or VirtualPC) indicates that you can script VMware to boot from an ISO that does an unattended installation, but I would really like to test the same process that our customers will be using.
Another option might be to use Xen's fully-virtualized mode and see if scripting it over the serial port will work.
TIA,
Jason
I have a very similar question; it is on Super User:
https://superuser.com/questions/36047/moving-vmware-os-image-as-primary-os-on-a-system
You can also use VirtualBox instead of VMware. The VirtualBox SDK allows you to directly control the keyboard, the mouse, the serial port, and the parallel port of the guest without the VirtualBox guest tools installed.
Unfortunately it doesn't offer a text console interface, but the serial port can be connected to a local pipe file, and that can probably be worked with just as well.
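For example, keystrokes can be pushed into the guest with raw scancodes even before any OS is installed (the VM name is a placeholder; 1c/9c are the make/break codes for Enter):

# Sketch only: press Enter in the guest without any guest tools installed.
& VBoxManage controlvm "CentOS-Install" keyboardputscancode 1c 9c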
This may not be exactly what you need:
I have done something similar with an Ubuntu-based install. We used preseeding (Debian's form of kickstart) to answer all the questions during the install, providing the preseed file and the installer via TFTP.
In addition to the official Ubuntu mirror, we added the apt server with our own packages in the preseed file. We put a .deb version of vmware-tools on the apt server and added it to the packages to be installed.
The .deb of VMware Tools just contained the .tar.gz and a postinstall script that would extract it to /tmp and run the VMware install script (which has a switch to run unattended, so it does not ask any questions).
So after the reboot VMware Tools was up and running and we could use VIX to script the rest (which was not very reliable).
If you encounter problems with running vmware-config.pl during boot, you could make a custom package that just extracts the tools, plus an init script that installs them on first boot, disables itself, and reboots.
Maybe you can use this strategy (replacing apt with yum, preseed with kickstart, and TFTP with a remastered ISO). If you really need to test that your users choose a keyboard in the installer (which is not very different from kickstart), this would obviously not work for you.