How do I clone the virtual disk of a running VM on Hyper-V 2012 - hyper-v

I want to clone the virtual disk of a running VM on Hyper-V 2012.
I know that snapshotting the VM will generate a diff disk (file extension .AVHDX). All subsequent writes are to this snapshot, and its parent is read only.
However, that parent may also be a snapshot, which means I cannot simply copy the parent.
How do I obtain a single file that contains all the virtual hard drive’s data prior to my snapshot?
Put another way, how do I export the parent of the current snapshot to a single .VHDX file?
Ideally, I would like to know how to do this using the Hyper-V V2 WMI API.

Hyper-V 2012 supports live merge of snapshots, but not export of a running VM.
Hyper-V 2012 R2 supports live export. So, upgrade and move on, nothing to see here.
However...
If you take a snapshot of a VM, you can then add a differencing disk to the parent disk of the snapshot, create a VM from that, export that VM, then destroy the VM, then destroy the differencing disk.
It seems very round-about, but it is the only way I know of to "clone" a running Hyper-V VM (it is the virtual disk that is the real issue; the VM settings can simply be copied into a new VM and don't need to be 'cloned').
If your VM does not have any snapshots, you need to create one to make this work. You will be 'cloning' the snapshot while the running VM state is allowed to keep writing to the AVHDX known as 'now'.
Since this copy that is being created is never powered on, CPU and RAM does not matter.
Because what you export here is a regular VM rather than a snapshot, the export gives you the differencing disk plus its parent.
(If you export a snapshot you get a single virtual disk, since Hyper-V does special things with AVHDX files.)
If you want a single file, you merge the differencing disk that is in the export. The merge breaks the configuration of the exported VM, because the differencing disk is deleted during the merge, so you then have to rename the merged parent.
But again, we are doing this to get a copy of the disk in a very clean (and proper) way.
Yes, it is a lot of work for a running export, but it avoids any file-locking issues with the virtual disks.
No time for complete code, but I walked through this in PowerShell just to verify it is possible.
$snap = Get-VMSnapshot "datest"
PS C:\Windows\system32> $snap.HardDrives
VMName ControllerType ControllerNumber ControllerLocation DiskNumber Path
------ -------------- ---------------- ------------------ ---------- ----
DATest IDE 0 0 D:\DATest\Server2012.vhdx
PS C:\Windows\system32> New-VHD -Differencing -ParentPath $snap.HardDrives[0].Path -Path D:\test\test.vhdx
ComputerName : SWEETUMS
Path : D:\test\test.vhdx
VhdFormat : VHDX
VhdType : Differencing
FileSize : 4194304
Size : 42949672960
MinimumSize : 42948624384
LogicalSectorSize : 512
PhysicalSectorSize : 4096
BlockSize : 2097152
ParentPath : D:\DATest\Server2012.vhdx
FragmentationPercentage :
Alignment : 1
Attached : False
DiskNumber :
IsDeleted : False
Number :
PS C:\Windows\system32> New-VM -Name Test -Path D:\Test -VHDPath D:\test\test.vhdx
Name State CPUUsage(%) MemoryAssigned(M) Uptime Status
---- ----- ----------- ----------------- ------ ------
Test Off 0 0 00:00:00 Operating normally
Export-vm -Name Test -Path d:\newTest -Passthru
# Get the FullName of the parent disk for renaming later
Merge-VHD 'D:\NewTest\Test\Virtual Hard Disks\test.vhdx' -Force
Rename-Item 'D:\NewTest\Test\Virtual Hard Disks\Server2012.vhdx' 'D:\NewTest\Test\Virtual Hard Disks\test.vhdx'
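Pulling the transcript above together, here is a rough end-to-end sketch of the same sequence. Treat it as illustrative only: the VM name 'datest' and the single-IDE-disk layout come from the transcript, while the D:\test and D:\newTest paths, the throwaway VM name 'CloneExport' and the clone.vhdx file name are assumptions you would adjust for your environment.

# Sketch only - clone the disk of a running, single-disk VM named 'datest'
$vmName   = 'datest'
$workDir  = 'D:\test'
$exportTo = 'D:\newTest'

# 1. Snapshot the running VM so its parent disk becomes read-only
Checkpoint-VM -Name $vmName -SnapshotName 'CloneSource'
$snap = Get-VMSnapshot -VMName $vmName | Sort-Object CreationTime | Select-Object -Last 1

# 2. Create a differencing disk on top of the (now read-only) parent
$diff = New-VHD -Differencing -ParentPath $snap.HardDrives[0].Path -Path (Join-Path $workDir 'clone.vhdx')

# 3. Create a throwaway VM around the differencing disk and export it
New-VM -Name 'CloneExport' -Path $workDir -VHDPath $diff.Path | Out-Null
Export-VM -Name 'CloneExport' -Path $exportTo

# 4. Merge the exported differencing disk into its exported parent, then rename the
#    merged parent so the exported configuration still resolves (mirrors the transcript)
Merge-VHD -Path (Join-Path $exportTo 'CloneExport\Virtual Hard Disks\clone.vhdx')
$parentLeaf = Split-Path $snap.HardDrives[0].Path -Leaf
Rename-Item (Join-Path $exportTo "CloneExport\Virtual Hard Disks\$parentLeaf") 'clone.vhdx'

# 5. Clean up the throwaway VM and the working differencing disk
Remove-VM -Name 'CloneExport' -Force
Remove-Item $diff.Path -ErrorAction SilentlyContinue
# Optionally: Remove-VMSnapshot -VMName $vmName -Name 'CloneSource'   # live-merges back into the parent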
I have to blog all of this now..

Using Windows 2012 R2?
You can export directly using the ExportSystemDefinition method of the Msvm_VirtualSystemManagementService class
PowerShell example from Taylor Brown:
# Obtain Msvm_ComputerSystem object corresponding to VM to export
$vmName = "MyVirtualMachine"
$Msvm_ComputerSystem = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem -Filter "ElementName='$vmName'"
# Edit copy of the Msvm_VirtualSystemExportSettingData associated with your VM
$Msvm_VirtualSystemExportSettingData = $Msvm_ComputerSystem.GetRelated("Msvm_VirtualSystemExportSettingData","Msvm_SystemExportSettingData",$null,$null, $null, $null, $false, $null)
$Msvm_VirtualSystemExportSettingData.CopySnapshotConfiguration = 0 # 0=ExportAllSnapshots, 1=ExportNoSnapshots, 2=ExportOneSnapshot
# Call ExportSystemDefinition method of Msvm_VirtualSystemManagementService singleton
$Msvm_VirtualSystemManagementService = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_VirtualSystemManagementService
$Msvm_VirtualSystemManagementService.ExportSystemDefinition($Msvm_ComputerSystem, "c:\export_folder", $Msvm_VirtualSystemExportSettingData.GetText(1))
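ExportSystemDefinition runs asynchronously: it typically returns 4096 ("job started") together with a reference to an Msvm_ConcreteJob that you poll until it completes. A rough sketch of capturing the call above and waiting on the job (JobState 3/4 = starting/running and 7 = completed are the standard CIM job states; treat this as illustrative):

$result = $Msvm_VirtualSystemManagementService.ExportSystemDefinition($Msvm_ComputerSystem, "c:\export_folder", $Msvm_VirtualSystemExportSettingData.GetText(1))
if ($result.ReturnValue -eq 4096) {
    # Asynchronous job started - poll the associated Msvm_ConcreteJob
    $job = [wmi]$result.Job
    while ($job.JobState -eq 3 -or $job.JobState -eq 4) {
        Start-Sleep -Milliseconds 500
        $job.PSBase.Get()   # refresh the job object
    }
    if ($job.JobState -ne 7) { Write-Error "Export failed: $($job.ErrorDescription)" }
}
elseif ($result.ReturnValue -ne 0) {
    Write-Error "Export failed with return value $($result.ReturnValue)"
}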

Related

Why does the Switch statement seem to ask for response before the ExecutewithResults method displays its results [duplicate]

I am writing a PowerShell script in version 5.1 on Windows 10 that gets certain pieces of information about a local system (and eventually its subnets) and outputs them into a text file. At first, I had all of the aspects in a single function. I ran into output issues with the getUsersAndGroups and getRunningProcesses functions, where output from getUsersAndGroups would be injected into the output of getRunningProcesses.
The two functions are:
# Powershell script to get various properties and output to a text file
Function getRunningProcesses()
{
# Running processes
Write-Host "Running Processes:
------------ START PROCESS LIST ------------
"
Get-Process | Select-Object name,fileversion,productversion,company
Write-Host "
------------- END PROCESS LIST -------------
"
}
Function getUsersAndGroups()
{
# Get Users and Groups
Write-Host "Users and Groups:"
$adsi = [ADSI]"WinNT://$env:COMPUTERNAME"
$adsi.Children | where {$_.SchemaClassName -eq 'user'} | Foreach-Object {
$groups = $_.Groups() | Foreach-Object {$_.GetType().InvokeMember("Name", 'GetProperty', $null, $_, $null)}
$_ | Select-Object @{n='Username';e={$_.Name}},@{n='Group';e={$groups -join ';'}}
}
}
getRunningProcesses
getUsersAndGroups
When I call getUsersAndGroups after getRunningProcesses, the output looks like this (getUsersAndGroups produces no output at all):
Running Processes:
------------ START PROCESS LIST ------------
Name FileVersion ProductVersion Company
---- ----------- -------------- -------
armsvc
aswidsagenta
audiodg
AVGSvc
avgsvca
avguix 1.182.2.64574 1.182.2.64574 AVG Technologies CZ, s.r.o.
conhost 10.0.14393.0 (rs1_release.160715-1616) 10.0.14393.0 Microsoft Corporation
csrss
csrss
dasHost
dwm
explorer 10.0.14393.0 (rs1_release.160715-1616) 10.0.14393.0 Microsoft Corporation
hkcmd 8.15.10.2900 8.15.10.2900 Intel Corporation
Idle
igfxpers 8.15.10.2900 8.15.10.2900 Intel Corporation
lsass
MBAMService
mDNSResponder
Memory Compression
powershell_ise 10.0.14393.103 (rs1_release_inmarket.160819-1924) 10.0.14393.103 Microsoft Corporation
RuntimeBroker 10.0.14393.0 (rs1_release.160715-1616) 10.0.14393.0 Microsoft Corporation
SearchFilterHost
SearchIndexer
SearchProtocolHost
SearchUI 10.0.14393.953 (rs1_release_inmarket.170303-1614) 10.0.14393.953 Microsoft Corporation
services
ShellExperienceHost 10.0.14393.447 (rs1_release_inmarket.161102-0100) 10.0.14393.447 Microsoft Corporation
sihost 10.0.14393.0 (rs1_release.160715-1616) 10.0.14393.0 Microsoft Corporation
smss
spoolsv
sqlwriter
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost
svchost 10.0.14393.0 (rs1_release.160715-1616) 10.0.14393.0 Microsoft Corporation
System
taskhostw 10.0.14393.0 (rs1_release.160715-1616) 10.0.14393.0 Microsoft Corporation
ToolbarUpdater
wininit
winlogon
WtuSystemSupport
WUDFHost
------------ END PROCESS LIST ------------
Users and Groups:
When I call getUsersAndGroups before getRunningProcesses, the output of getUsersAndGroups is injected into getRunningProcesses and, worse, no running processes are listed at all, just a lot of blank lines.
How can I separate or control the output of getUsersAndGroups so that it prints before the output of getRunningProcesses?
The injected output looks like this:
Running Processes:
------------ START PROCESS LIST ------------
Username Group
-------- -----
Administrator Administrators
debug255 Administrators;Hyper-V Administrators;Performance Log Users
DefaultAccount System Managed Accounts Group
Guest Guests
------------ END PROCESS LIST ------------
Thank you so much for your help!
tl; dr:
The underlying problem affects both Windows PowerShell and PowerShell (Core) 7+, up to at least v7.3.1, and, since it is a(n unfortunate) side effect of by-design behavior, may or may not get fixed.
To prevent output from appearing out of order, force synchronous display output, by explicitly calling Format-Table or Out-Host:
getUsersAndGroups | Format-Table
getRunningProcesses | Format-Table
Both Format-Table and Out-Host fix what is primarily a display problem, but they are suboptimal solutions in that they both interfere with providing the output as data:
Format-Table outputs formatting instructions instead of data, which are only meaningful to PowerShell's for-display output-formatting system, namely when the output goes to the display or to one of the Out-* cmdlets, notably including Out-File and therefore also >. The resulting format is not suitable for programmatic processing.
Out-Host outputs no data at all and prints directly to the display, with no ability to capture or redirect it.
Relevant GitHub issues:
GitHub issue #4594: discussion of the surprising asynchronous behavior in general.
GitHub issue #13985: potential data loss when using the CLI.
Background information:
Inside a PowerShell session:
This is primarily a display problem, and you do not need this workaround for capturing output in a variable, redirecting it to a file, or passing it on through the pipeline.
You do need it for interactive scripts that rely on display output to show in output order, which notably includes ensuring that relevant information prints before an interactive prompt is presented; e.g.:
# !! Without Format-table, the prompt shows *first*.
[pscustomobject] @{ foo = 1; bar = 2 } | Format-Table
Read-Host 'Does the above look OK?'
From the outside, when calling the PowerShell CLI (powershell -file ... or powershell -command ...):
Actual data loss may occur if Out-Host is not used, because pending asynchronous output may never get to print if the script / command ends with exit - see GitHub issue #13985; e.g.:
# !! Prints only 'first'
powershell.exe -command "'first'; [pscustomobject] @{ foo = 'bar' }; exit"
However, unlike in intra-PowerShell-session use, Format-Table or Out-Host fix both the display and the data-capturing / redirection problem, because even Out-Host's output is sent to stdout, as seen by an outside caller (but note that the for-display representations that PowerShell's output-formatting system produces aren't generally suitable for programmatic processing).[1]
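For example, a variation of the lossy CLI call above that routes the object through Out-Host (illustrative only, not part of the original example) prints both pieces of output:

# Both 'first' and the custom object now print; Out-Host renders synchronously before exit runs
powershell.exe -command "'first'; [pscustomobject] @{ foo = 'bar' } | Out-Host; exit"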
Note: All of the above equally applies to PowerShell (Core) 7+ and its pwsh CLI, up to at least v7.3.1.
The explanation of PowerShell's problematic behavior in this case:
It may be helpful to demonstrate the problem with an MCVE (Minimal, Complete, and Verifiable Example):
Write-Host "-- before"
[pscustomobject] @{ one = 1; two = 2; three = 3 }
Write-Host "-- after"
In PSv5+, this yields:
-- before
-- after
one two three
--- --- -----
1 2 3
What happened?
The Write-Host calls produced output synchronously.
It is worth noting that Write-Host bypasses the normal success output stream and (in effect) writes directly to the console - mostly, even though there are legitimate uses, Write-Host should be avoided.
However, note that even output objects sent to the success output stream can be displayed synchronously, and frequently are: notably, instances of primitive .NET types such as strings and numbers, objects whose implicit output formatting results in non-tabular output, and types that have explicit formatting data associated with them (see below).
The implicit output - from not capturing the output of the statement [pscustomobject] @{ one = 1; two = 2; three = 3 } - was unexpectedly not synchronous:
A blank line was initially produced.
All actual output followed the final Write-Host call.
This helpful answer explains why that happens; in short:
Implicit output is formatted based on the type of objects being output; in the case at hand, Format-Table is implicitly used.
In PSv5+, an implicitly applied Format-Table now waits for up to 300 msecs. in order to determine suitable column widths.
Note, however, that this only applies to output objects whose type has no predefined table-formatting instructions; if it does, those instructions determine the column widths ahead of time, and no waiting occurs.
To test whether a given type with full name <FullTypeName> has table-formatting data associated with it, you can use the following command:
# Outputs $true, if <FullTypeName> has predefined table-formatting data.
Get-FormatData <FullTypeName> -PowerShellVersion $PSVersionTable.PSVersion |
Where-Object {
$_.FormatViewDefinition.Control.ForEach('GetType') -contains [System.Management.Automation.TableControl]
}
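For instance (an illustrative check, not from the original answer), System.Diagnostics.Process has predefined table-formatting data, so the command below returns a result; running the same check against a type with no predefined table view, such as a bare [pscustomobject], does not:

# Returns format data here, because a table view is predefined for process objects
Get-FormatData System.Diagnostics.Process -PowerShellVersion $PSVersionTable.PSVersion |
Where-Object {
$_.FormatViewDefinition.Control.ForEach('GetType') -contains [System.Management.Automation.TableControl]
}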
Unfortunately, that means that subsequent commands execute inside that time window and may produce unrelated output (via pipeline-bypassing output commands such as Write-Host) or prompt for user input before Format-Table output starts.
When the PowerShell CLI is called from the outside and exit is called inside the time window, all pending output - including subsequent synchronous output - is effectively discarded.
The problematic behavior is discussed in GitHub issue #4594; while there's still hope for a solution, there has been no activity in a long time.
Note: This answer originally incorrectly "blamed" the PSv5+ 300 msec. delay for potentially surprising standard output formatting behavior (namely that the first object sent to a pipeline determines the display format for all objects in the pipeline, if table formatting is applied - see this answer).
[1] The CLI allows you to request output in a structured text format, namely the XML-based serialization format known as CLIXML, with -OutputFormat Xml. PowerShell uses this format behind the scenes for serializing data across processes, and it is not usually known to outside programs, which is why -OutputFormat Xml is rarely used in practice. Note that when you do use it, the Format-Table / Out-Host workarounds would again not be effective, given that the original output objects are lost.
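For completeness, a minimal illustration of that rarely used mode (an assumed invocation, not taken from the original answer):

# Emits a "#< CLIXML" header followed by XML-serialized objects instead of formatted text
powershell.exe -OutputFormat Xml -Command "[pscustomobject] @{ foo = 'bar' }"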

How to restore VirtualBox? Lost the last two months of work

https://forums.virtualbox.org/viewtopic.php?f=7&t=90893
Hello, I am desperate and need help because I have lost about two months of work in my Windows 10 guest system.
Everything worked smoothly until I needed more free space (even though I have a dynamically allocated disk). So I followed some tutorials and made some changes:
1 - I have the original, almost-full disk in: /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk
2 - I made a copy on an external USB drive.
3 - Converted it to VDI: VBoxManage clonehd /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk1.vmdk /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk.vdi --format vdi
4 - Tried to resize the disk (from 80 GB to 100 GB): VBoxManage modifyhd /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 and VBoxManage modifymedium disk /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 (I think this may have been a mistake, since I should have resized the VDI file instead).
5 - Then I had to change the UUID (because a "UUID already in use" error appeared): VBoxManage internalcommands sethduuid "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk"
6 - Then cloned again: VBoxManage clonehd "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk" " " --format vdi
and resized: VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 120000
I tried switching my virtual machine to the new VDI file to test whether everything was fine (changed the disk attachment from /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk to the new /media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi). But I found that the system had somehow gone back two months!
I was not worried at first and decided to go back to my "untouched" VMDK, but the strangest thing is that the original "untouched" file /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk also boots with files and state from about two months ago. So I am quite nervous.
[Attached screenshot: Selección_058.png]
Looking at the files, the 6c***** disk has to be the "good" state, as it was modified yesterday at night. Here is my file manager:
[Attached screenshot: Selección_059.png (file manager)]
Here is my VM (I made a snapshot about two months ago, I don't remember exactly when):
https://imagebin.ca/v/4QlKV3Equ1fW
My log:
https://pastebin.com/JSLFRNMs
Hope anybody can help...
I think the key is to somehow return to the 6c**** state of my VMDK file; I don't understand how this VMDK got changed, as it was not touched.
Thanks in advance
The problem was solved. It had nothing to do with resizing disks. I selected the {6cc3c***-*****} hard disk (although it was "only" 47 GB) and, to my surprise, it loaded its 47 GB "snapshot" part together with the whole windows10-disk1.vmdk disk.
Sorry for my bad English, it is difficult to explain: in the virtual machine settings, under the Storage section, select the 6cc***** disk as the main disk and start/boot the VM.
Once it was loaded and working fine, I deleted the snapshot (to merge everything into the present state) and then made another snapshot as a backup.
Thanks

T-SQL Database Backup Cleanup Script logging of files to be deleted before delete

I was having issues with the SQL maintenance plan cleanup task not deleting one particularly large database backup each night: it works for days, then fails, then starts working again, and it always works correctly on the small databases.
I researched this maintenance plan cleanup task issue and tried everything I could think of to get it working: changed the extension matching to *, added a \ at the end of the folder path, changed the age to NONE so all files would be deleted regardless of age, and still this backup is sometimes not deleted.
So I implemented a SQL job using the script below to see if that would work, but I hit the same issue again: intermittently the large backup file is not deleted, yet when I run the task manually it always seems to delete it.
My question is: is there a way to first get a list of the files that match the delete criteria and write them to a log file before actually attempting to delete them? That way I could at least see whether the large backup file is failing to match the deletion criteria in the first place.
Any assistance to otherwise delete the old backup files using T-SQL without using xp_cmdshell and without using a batch or powershell script would be appreciated.
declare @dt datetime
select @dt=dateadd(hh,-22,getdate())
EXECUTE master.dbo.xp_delete_file 0,N'Z:\SQLBackups\',N'BAK',@dt,1
The version of SQL server I'm having the issue with:
Microsoft SQL Server Management Studio 10.50.4042.0
Microsoft Analysis Services Client Tools 10.50.4042.0
Microsoft Data Access Components (MDAC) 6.1.7601.17514
Microsoft MSXML 3.0 6.0
Microsoft Internet Explorer 9.10.9200.17609
Microsoft .NET Framework 2.0.50727.5485
Operating System 6.1.7601
After creating a PowerShell script to delete the files and hitting the error "Another process is using the file", I made the script check which process holds the handle, using the handle64.exe program, and found it was the Commvault agent CLBackup.exe that was locking the file.
A backup schedule conflict was the cause of the issue.
$log = ($MyInvocation.MyCommand.Path).TrimEnd("ps1") + "log"                  # <scriptname>.log next to this script
$handlelog = ($MyInvocation.MyCommand.Path).TrimEnd(".ps1") + "-Handle.log"   # <scriptname>-Handle.log
# Collect .BAK files under Z:\SQLBackups that are older than 22 hours
$1 = gci 'Z:\SQLBackups' | %{gci $_.fullname -Filter '*.BAK' | ? {$_.LastWriteTime -le (get-date).addhours(-22)}}
write-output "$(get-date -format g) Files will be deleted: $1" >> $log
# Delete them; if any error occurred, log which processes hold handles on .BAK files (via Sysinternals handle64.exe)
$1 | % {remove-item $_.fullname -force -Confirm:$FALSE}
if (!!($error)) {
    write-output "$(get-date -format g) Open Handles on .BAK files:" >> $handlelog
    $exec = "$env:SystemRoot\system32\cmd.exe /c " + (Split-Path($MyInvocation.MyCommand.Path)) + "\handle64.exe .BAK -u -nobanner -accepteula"
    Invoke-Expression -Command:$exec >> $handlelog
    $error >> $log
}
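If you first want to see exactly what would be removed without touching anything, a minimal dry-run variation of the delete line above (same file selection, purely illustrative) is:

# Dry run: -WhatIf reports what Remove-Item would delete without deleting anything
$1 | % {remove-item $_.fullname -force -Confirm:$FALSE -WhatIf}

Once the logged file list and the -WhatIf output agree with what you expect, drop -WhatIf to perform the real delete.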

Create VM snapshots using power CLI with no memory quiesce

I need a script to run in vCenter PowerCLI for creating VM snapshots.
Below are the requirements. I am new to scripting. Can someone please help me do this?
Server names should be taken from a text/CSV file.
The snapshot should be created with the name we provide and with no memory quiesce (equivalent to unchecking the 'Snapshot the virtual machine's memory' option in the GUI, I believe).
The VM name, created snapshot name, creation date, and status (successful or failed) should be exported to an output CSV file.
Any guidance would be much appreciated.
Do you have any code? I will not write the entire script for you, but you have to connect to a vCenter, use Get-VM, and then you can use:
New-Snapshot [-Name] <String> [-Description <String>] [-Memory] [-Quiesce] [-RunAsync] [-VM <VirtualMachine[]>] [-Server <VIServer[]>] [-WhatIf] [-Confirm] [<CommonParameters>]
Here is the reference: https://www.vmware.com/support/developer/PowerCLI/PowerCLI41U1/html/New-Snapshot.html
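To illustrate how those pieces fit the stated requirements, here is a rough sketch (hedged: the input file C:\Scripts\vmlist.txt, the snapshot name 'PreChange' and the output path C:\Scripts\snapshot_report.csv are placeholders, not anything from the question; it assumes you are already connected with Connect-VIServer):

# Sketch: snapshot a list of VMs without memory/quiesce and report the results to CSV
$snapName = 'PreChange'                                   # snapshot name to create (placeholder)
$vmNames  = Get-Content 'C:\Scripts\vmlist.txt'           # one VM name per line (placeholder path)

$results = foreach ($vmName in $vmNames) {
    try {
        $vm = Get-VM -Name $vmName -ErrorAction Stop
        # -Memory:$false / -Quiesce:$false = no memory snapshot, no quiescing
        $snap = New-Snapshot -VM $vm -Name $snapName -Memory:$false -Quiesce:$false -ErrorAction Stop
        [pscustomobject]@{
            VMName       = $vm.Name
            SnapshotName = $snap.Name
            Created      = $snap.Created
            Status       = 'Successful'
        }
    }
    catch {
        [pscustomobject]@{
            VMName       = $vmName
            SnapshotName = $snapName
            Created      = Get-Date
            Status       = "Failed: $($_.Exception.Message)"
        }
    }
}

$results | Export-Csv 'C:\Scripts\snapshot_report.csv' -NoTypeInformation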

How do you stop a user-instance of Sql Server? (Sql Express user instance database files locked, even after stopping Sql Express service)

When using SQL Server Express 2005's User Instance feature with a connection string like this:
<add name="Default" connectionString="Data Source=.\SQLExpress;
AttachDbFilename=C:\My App\Data\MyApp.mdf;
Initial Catalog=MyApp;
User Instance=True;
MultipleActiveResultSets=true;
Trusted_Connection=Yes;" />
We find that we can't copy the database files MyApp.mdf and MyApp_Log.ldf (because they're locked) even after stopping the SqlExpress service, and have to resort to setting the SqlExpress service from automatic to manual startup mode, and then restarting the machine, before we can then copy the files.
It was my understanding that stopping the SqlExpress service should stop all the user instances as well, which should release the locks on those files. But this does not seem to be the case - could anyone shed some light on how to stop a user instance, such that its database files are no longer locked?
Update
OK, I stopped being lazy and fired up Process Explorer. Lock was held by sqlserver.exe - but there are two instances of sql server:
sqlserver.exe PID: 4680 User Name: DefaultAppPool
sqlserver.exe PID: 4644 User Name: NETWORK SERVICE
The file is open by the sqlserver.exe instance with the PID: 4680
Stopping the "SQL Server (SQLEXPRESS)" service, killed off the process with PID: 4644, but left PID: 4680 alone.
Seeing as the owner of the remaining process was DefaultAppPool, next thing I tried was stopping IIS (this database is being used from an ASP.Net application). Unfortunately this didn't kill the process off either.
Manually killing off the remaining sql server process does remove the open file handle on the database files, allowing them to be copied/moved.
Unfortunately I wish to copy/restore those files in some pre/post install tasks of a WiX installer - as such I was hoping there might be a way to achieve this by stopping a Windows service, rather than having to shell out to kill all instances of sqlserver.exe, as that poses some problems:
Killing all the sqlserver.exe instances may have undesirable consequences for users with other SQL Server instances on their machines.
I can't restart those instances easily.
Introduces additional complexities into the installer.
Does anyone have any further thoughts on how to shutdown instances of sql server associated with a specific user instance?
Use "SQL Server Express Utility" (SSEUtil.exe) or the command to detach the database used by SSEUtil.
SQL Server Express Utility,
SSEUtil is a tool that lets you easily interact with SQL Server,
http://www.microsoft.com/downloads/details.aspx?FamilyID=fa87e828-173f-472e-a85c-27ed01cf6b02&DisplayLang=en
Also, the default timeout to stop the service after the last connection is closed is one hour. On your development box, you may want to change this to five minutes (the minimum allowed).
In addition, you may have an open connection through Visual Studio's Server Explorer Data Connections, so be sure to disconnect from any database there.
H:\Tools\SQL Server Express Utility>sseutil -l
1. master
2. tempdb
3. model
4. msdb
5. C:\DEV_\APP\VISUAL STUDIO 2008\PROJECTS\MISSICO.LIBRARY.1\CLIENTS\CORE.DATA.C
LIENT\BIN\DEBUG\CORE.DATA.CLIENT.MDF
H:\Tools\SQL Server Express Utility>sseutil -d C:\DEV*
Failed to detach 'C:\DEV_\APP\VISUAL STUDIO 2008\PROJECTS\MISSICO.LIBRARY.1\CLIE
NTS\CORE.DATA.CLIENT\BIN\DEBUG\CORE.DATA.CLIENT.MDF'
H:\Tools\SQL Server Express Utility>sseutil -l
1. master
2. tempdb
3. model
4. msdb
H:\Tools\SQL Server Express Utility>
Using .NET Reflector, I found that the following command is used to detach the database.
string.Format("USE master\nIF EXISTS (SELECT * FROM sysdatabases WHERE name = N'{0}')\nBEGIN\n\tALTER DATABASE [{1}] SET OFFLINE WITH ROLLBACK IMMEDIATE\n\tEXEC sp_detach_db [{1}]\nEND", dbName, str);
I have been using the following helper method to detach MDF files attached to SQL Server in unit tests (so that SQL Server releases its locks on the MDF and LDF files and the unit test can clean up after itself)...
private static void DetachDatabase(DbProviderFactory dbProviderFactory, string connectionString)
{
    using (var connection = dbProviderFactory.CreateConnection())
    {
        if (connection is SqlConnection)
        {
            SqlConnection.ClearAllPools();
            // convert the connection string (to connect to 'master' db), extract original database name
            var sb = dbProviderFactory.CreateConnectionStringBuilder();
            sb.ConnectionString = connectionString;
            sb.Remove("AttachDBFilename");
            var databaseName = sb["database"].ToString();
            sb["database"] = "master";
            connectionString = sb.ToString();
            // detach the original database now
            connection.ConnectionString = connectionString;
            connection.Open();
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "sp_detach_db";
                cmd.CommandType = CommandType.StoredProcedure;
                var p = cmd.CreateParameter();
                p.ParameterName = "@dbname";
                p.DbType = DbType.String;
                p.Value = databaseName;
                cmd.Parameters.Add(p);
                p = cmd.CreateParameter();
                p.ParameterName = "@skipchecks";
                p.DbType = DbType.String;
                p.Value = "true";
                cmd.Parameters.Add(p);
                p = cmd.CreateParameter();
                p.ParameterName = "@keepfulltextindexfile";
                p.DbType = DbType.String;
                p.Value = "false";
                cmd.Parameters.Add(p);
                cmd.ExecuteNonQuery();
            }
        }
    }
}
Notes:
SqlConnection.ClearAllPools() was very helpful in eliminating "stealth" connections (when a connection is pooled, it will stay active even though you 'Close()' it; by explicitly clearing pooled connections you don't have to worry about setting the pooling flag to false in all connection strings).
The "magic ingredient" is call to the system stored procedure sp_detach_db (Transact-SQL).
My connection strings included "AttachDBFilename" but didn't include "User Instance=True", so this solution might not apply to your scenario
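If you need the same detach step from a script rather than compiled code (for example, in an installer custom action or a test fixture), a rough PowerShell equivalent of the helper above might look like this. Hedged sketch: the .\SQLExpress data source and the MyApp database name are placeholders, not values taken from the question.

# Sketch: detach a database via sp_detach_db, mirroring the C# helper above
[System.Data.SqlClient.SqlConnection]::ClearAllPools()   # drop pooled "stealth" connections first
$conn = New-Object System.Data.SqlClient.SqlConnection 'Data Source=.\SQLExpress;Initial Catalog=master;Integrated Security=True'
$conn.Open()
try {
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'sp_detach_db'
    $cmd.CommandType = [System.Data.CommandType]::StoredProcedure
    $null = $cmd.Parameters.AddWithValue('@dbname', 'MyApp')        # placeholder database name
    $null = $cmd.Parameters.AddWithValue('@skipchecks', 'true')
    $null = $cmd.ExecuteNonQuery()
}
finally {
    $conn.Close()
}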
I can't comment yet because I don't have high enough rep yet. Can someone move this info to the other answer so we don't have a dupe?
I just used this post to solve my WIX uninstall problem. I used this line from AMissico's answer.
string.Format("USE master\nIF EXISTS (SELECT * FROM sysdatabases WHERE name = N'{0}')\nBEGIN\n\tALTER DATABASE [{1}] SET OFFLINE WITH ROLLBACK IMMEDIATE\n\tEXEC sp_detach_db [{1}]\nEND", dbName, str);
Worked pretty well when using WIX, only I had to add one thing to make it work for me.
I took out the sp_detach_db call and instead brought the db back online. If you don't, WiX will leave the mdf files around after the uninstall. Once I brought the db back online, WiX properly deleted the mdf files.
Here is my modified line.
string.Format( "USE master\nIF EXISTS (SELECT * FROM sysdatabases WHERE name = N'{0}')\nBEGIN\n\tALTER DATABASE [{0}] SET OFFLINE WITH ROLLBACK IMMEDIATE\n\tALTER DATABASE [{0}] SET ONLINE\nEND", dbName );
This may not be what you are looking for, but the free tool Unlocker has a command line interface that could be run from WIX. (I have used unlocker for a while and have found it stable and very good at what it does best, unlocking files.)
Unlocker can unlock and move/delete most any file.
The downside to this is that the apps that need a lock on the file will no longer have it (but sometimes they still work just fine). Note that this does not kill the process that has the lock; it just removes its lock. (It may be that restarting the SQL services that you are stopping will be enough for them to re-lock and/or work correctly.)
You can get Unlocker from here: http://www.emptyloop.com/unlocker/
To see the command line options run unlocker -H
Here they are for convenience:
Unlocker 1.8.8
Command line usage:
Unlocker.exe Object [Option]
Object:
Complete path including drive to a file or folder
Options:
/H or -H or /? or -?: Display command line usage
/S or -S: Unlock object without showing the GUI
/L or -L: Object is a text file containing the list of files to unlock
/LU or -LU: Similar to /L with a unicode list of files to unlock
/O or -O: Outputs Unlocker-Log.txt log file in Unlocker directory
/D or -D: Delete file
/R Object2 or -R Object2: Rename file, if /L or /LU is set object2 points to a text file containing the new name of files
/M Object2 or -M Object2: Move file, if /L or /LU is set object2 points a text file containing the new location of files
Assuming your goal was to replace C:\My App\Data\MyApp.mdf with a file from your installer, you would want something like unlocker "C:\My App\Data\MyApp.mdf" -S -D. This would delete the file so you could copy in a new one.