Losing time with displays and printed content when cleaning XML files

I'm trying to create a PowerShell routine to clean XML files automatically. I have successfully created the routine, and I'm able to clean a file with its different functions and scripts. But I want to launch my PowerShell routine every time I have a new XML file, so I've decided to add a system to deal with every file in a directory.
Now that I'm calling my routine on many files, it displays rows while running even though I don't use Write-Host, and I'm losing a lot of time cleaning the XML files.
Here is my code:
param ([string] $sourceDirectory, [string] $targetDirectory, [string] $XSDFileName, [string] $dataSourceName, [string] $databaseName)
clear
function clearLocalVariables {
    # This function clears my local variables
}
function createSQLNodesList {
    param ([string] $dataSourceName, [string] $databaseName)
    # This function creates a list of available and allowed nodes in my XML files from SQL databases.
}
The following functions are used to check my nodes, and this is where the prints and Write-Host output appear when the routine is launched more than once:
function isNodeNameValid {
    param ([string] $testedNodeName)
    # This function is used to return the value of the nodeAnalysis function.
    # It selects which list the node will be analysed against, depending on
    # whether it is a node for the aspect of the XML or for data.
    # - $testedNodeName is a string representing the XML node analysed.
    # If the node name is a 5-character string beginning with an A followed by
    # 4 digits ('AXXXX'), then it is data.
    if (($testedNodeName.Length -eq 5) -and ($testedNodeName.Substring(0,1) -eq "A") -and ($testedNodeName.Substring(1,4) -match "^[-]?[0-9.]+$")) {
        return nodeAnalysis -nodesList $nodesSQL -testedNodeName $testedNodeName
    # Else, it is in the list for the aspect of the XML.
    } else {
        return nodeAnalysis -nodesList $nodesXML -testedNodeName $testedNodeName
    }
}
function nodeAnalysis {
    param ($nodesList, [string] $testedNodeName)
    # This function is used to analyse each node name given.
    # It compares the name of the node analysed to each node in the array given in parameter.
    # - $nodesList is the corresponding array depending on the isNodeNameValid() method.
    # - $testedNodeName is a string representing the XML node analysed.
    # We compare each node of the node array to the testedNodeName. If the testedNodeName is in this array, the method returns 1.
    foreach ($nodeName in $nodesList) {
        if ($testedNodeName -eq $nodeName) {
            return 1
        }
    }
    # If the node does not correspond to any node of the list, then the method returns 0.
    return 0
}
# -- XML Nodes recursive cleaning method -- #
function cleanXMLContent {
    param ($XMLDoc, [int] $endOfLeaf, [int] $boucle)
    # This is the function I have trouble with, regarding displays and efficiency:
    while ($endOfFile -ne 1) {
        if ($endOfLeaf -eq 1) {
            if ($XMLDoc.Name -eq "#document") {
                $endOfFile = 1
            }
            if ($XMLDoc.NextSibling) {
                $XMLDoc = $XMLDoc.NextSibling
                $endOfLeaf = 0
            } else {
                $XMLDoc = $XMLDoc.ParentNode
                $endOfLeaf = 1
            }
        } else {
            if (!(isNodeNameValid -testedNodeName $XMLDoc.Name)) {
                if ($XMLDoc.PreviousSibling) {
                    $nodeNameToDelete = $XMLDoc.Name
                    $siblingNodeName = $XMLDoc.PreviousSibling.Name
                    $XMLDoc = $XMLDoc.ParentNode
                    $XMLDoc.RemoveChild($XMLDoc.SelectSingleNode($nodeNameToDelete))
                    $XMLDoc = $XMLDoc.SelectSingleNode($siblingNodeName)
                } else {
                    $nodeNameToDelete = $XMLDoc.Name
                    $XMLDoc = $XMLDoc.ParentNode
                    $XMLDoc.RemoveChild($XMLDoc.SelectSingleNode($nodeNameToDelete))
                }
            } else {
                if ($XMLDoc.HasChildNodes) {
                    $XMLDoc = $XMLDoc.FirstChild
                    $endOfLeaf = 0
                } else {
                    if ($XMLDoc.NextSibling) {
                        $XMLDoc = $XMLDoc.NextSibling
                        $endOfLeaf = 0
                    } else {
                        if ($XMLDoc.ParentNode) {
                            $XMLDoc = $XMLDoc.ParentNode
                            if ($XMLDoc.NextSibling) {
                                $endOfLeaf = 1
                            } else {
                                $XMLDoc = $XMLDoc.ParentNode
                                $endOfLeaf = 1
                            }
                        }
                    }
                }
            }
        }
    }
    Write-Host "- Cleaning XML Nodes OK" -ForegroundColor Green
}
function createXSDSchema {
    param ([string] $XSDFileName)
    # This function is used to create the corresponding XSD file
}
function cleanFile {
    param ([string] $fileName, [string] $source, [string] $target, [string] $XSDFileName, [string] $dataSourceName, [string] $databaseName)
    # -- Opening XML File -- #
    # Creation of the XML document iteration path
    $date = Get-Date
    [string] $stringDate = ($date.Year*10000 + $date.Month*100 + $date.Day) * 1000000 + ($date.Hour * 10000 + $date.Minute * 100 + $date.Second)
    $date = $stringDate.Substring(0,8) + "_" + $stringDate.Substring(8,6)
    # Determining the path of the source and the target files.
    $XMLDocPath = $source + $fileName
    $XMLFutureFileNamePreWork = $fileName.Substring(0, $fileName.Length - 4)
    $XMLFuturePath = $target + $XMLFutureFileNamePreWork + "cleaned" #_"+$date
    # Creation of the XML document
    $XMLDoc = New-Object System.Xml.XmlDocument
    $XMLFile = Resolve-Path($XMLDocPath)
    # Loading of the XML file
    $XMLDoc.Load($XMLFile)
    [XML] $XMLDoc = Get-Content -Path $XMLDocPath
    # If the XML document exists, then we clean it.
    if ($XMLDoc.HasChildNodes) {
        # The XML document is cleaned.
        cleanXMLContent $XMLDoc.FirstChild -endOfLeaf 0
        Write-Host "- XML Cleaned" -ForegroundColor Green
        # If it is a success, then we save it in a new file.
        #if ($AnalysisFinished -eq 1) {
            # Modifying the XSD attribute
            # Setting the XSD name into the XML file
            createXSDSchema -XSDFileName $XSDFileName
            # Creation of the XML document
            $XMLDoc.Save($XMLFuturePath + ".xml")
            Write-Host "- Creation of the new XML file successful at "$XMLFuturePath -ForegroundColor Green
            # Creation of the corresponding XSD document
            #createXSDSchema -XMLPath $XMLFuturePath
        #}
    } else {
        Write-Host "Impossible"
    }
}
Here I'm executing the whole process with the different functions. When I launch each function separately it works, but with many files it displays content and I lose a lot of time:
cd $sourceDirectory
$files = Get-ChildItem $sourceDirectory
# -- Local Variables Cleaning -- #
clearLocalVariables
Write-Host "- Variable cleaning successful" -ForegroundColor Green
# -- SQL Connection -- #
$nodesSQL = createSQLNodesList -dataSourceName $dataSourceName -databaseName $databaseName
foreach ($file in $files) {
    cleanFile -fileName $file -source $sourceDirectory -target $targetDirectory -XSDFileName $XSDFileName -dataSourceName $dataSourceName -databaseName $databaseName
}
Do you have any idea how to avoid the different displays of the content?
I have a lot of blank rows, which multiplies the cleaning time by 10 or 15.

First, refrain from loading XML files twice. Use either
$XMLDoc = New-Object System.Xml.XmlDocument
$XMLDoc.Load($XMLFile)
or
[xml]$XMLDoc = Get-Content -Path $XMLFile
They both do the same thing.
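Applied to the cleanFile function above, that means keeping only one of the two loads and dropping the later [XML] $XMLDoc = Get-Content line; for example:
$XMLDoc = New-Object System.Xml.XmlDocument
$XMLDoc.Load((Resolve-Path $XMLDocPath))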
Next, replace the iterative XML traversal with a recursive one:
function Clean-XmlContent {
    Param(
        [Parameter(Mandatory=$true)]
        [Xml.XmlElement]$Node
    )
    if ($Node.HasChildNodes) {
        foreach ($child in $Node.ChildNodes) {
            if ($child -is [Xml.XmlElement]) { Clean-XmlContent $child }
        }
    }
    if (-not (Test-NodeName -NodeName $Node.LocalName)) {
        $Node.ParentNode.RemoveChild($Node)
    }
}
and call it with the XML root node:
Clean-XmlContent $XMLDoc.DocumentElement
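One caveat worth keeping in mind: ChildNodes is a live collection, so removing elements while a parent call is still iterating over it can skip siblings. If you run into that, snapshotting the children into a static array before recursing is a cheap safeguard (a minimal sketch; the @() copy is my addition, not part of the answer above):
if ($Node.HasChildNodes) {
    # Copy the live node list into a plain array first, so removals
    # performed inside the recursion cannot disturb this iteration.
    foreach ($child in @($Node.ChildNodes)) {
        if ($child -is [Xml.XmlElement]) { Clean-XmlContent $child }
    }
}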
Also, simplify the node name validation:
function Test-NodeName {
    Param(
        [string]$NodeName
    )
    if ($NodeName -match '^A\d{4}$') {
        return ($nodesSQL -contains $NodeName)
    } else {
        return ($nodesXML -contains $NodeName)
    }
}
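For example, with hypothetical whitelists (the values below are placeholders, not from the original post):
$nodesSQL = @('A1234', 'A5678')    # hypothetical list of allowed data nodes
$nodesXML = @('Header', 'Footer')  # hypothetical list of allowed layout nodes
Test-NodeName -NodeName 'A1234'    # True: matches ^A\d{4}$ and is in $nodesSQL
Test-NodeName -NodeName 'Title'    # False: not a data node and not in $nodesXML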
That should speed up things considerably.
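If you want to quantify the improvement, Measure-Command gives a quick before/after timing; a sketch, using your cleanFile invocation as the body ('sample.xml' is a placeholder):
Measure-Command {
    cleanFile -fileName 'sample.xml' -source $sourceDirectory -target $targetDirectory `
        -XSDFileName $XSDFileName -dataSourceName $dataSourceName -databaseName $databaseName
} | Select-Object TotalSeconds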

Thanks to Ansgar Wiechers, I have found a way to accelerate my code: I rewrote it recursively, and it is now much faster. But the content of the deleted rows was still printed.
To avoid having the content of the deleted nodes printed on screen, I had to use:
[void]$RemovedNode.ParentNode.RemoveChild($RemovedNode)
Instead of:
$RemovedNode.ParentNode.RemoveChild($RemovedNode)
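This works because RemoveChild returns the removed node, and any value a PowerShell function neither captures nor casts away ends up in the output stream. The [void] cast is one of three equivalent suppression idioms; a quick sketch of the alternatives:
[void]$RemovedNode.ParentNode.RemoveChild($RemovedNode)       # cast to void
$null = $RemovedNode.ParentNode.RemoveChild($RemovedNode)     # assign to $null
$RemovedNode.ParentNode.RemoveChild($RemovedNode) | Out-Null  # pipe to Out-Null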


Test file structure in groovy(Spock)

How to test a created vs. an expected file tree in Groovy (Spock)?
Right now I'm using a Set where I specify the paths I expect to get, and I collect the actual paths this way:
Set<String> getCreatedFilePaths(String root) {
    Set<String> createFilePaths = new HashSet<>()
    new File(root).eachFileRecurse {
        createFilePaths << it.absolutePath
    }
    return createFilePaths
}
But the readability of the test isn't very good.
Is it possible in Groovy to write the expected paths as a tree, and after that compare them with the actual ones?
For example, expected:
region:
    usa:
        new_york.json
        california.json
    europe:
        spain.json
        italy.json
And the actual tree will be converted to this kind of representation.
Not sure if you can do it with the built-in recursive methods. There certainly are powerful ones, but this is standard recursion code you can use:
def path = new File("/Users/me/Downloads")

def printTree(File file, Integer level) {
    println " " * level + "${file.name}:"
    file.eachFile {
        println " " * (level + 1) + it.name
    }
    file.eachDir {
        printTree(it, level + 1)
    }
}

printTree(path, 1)
That prints the format you describe.
You can either build your own parser or use Groovy's built-in JSON parser:
package de.scrum_master.stackoverflow
import groovy.json.JsonParserType
import groovy.json.JsonSlurper
import spock.lang.Specification
class FileRecursionTest extends Specification {
  def jsonDirectoryTree = """{
    com : {
      na : {
        tests : [
          MyBaseIT.groovy
        ]
      },
      twg : {
        sample : {
          model : [
            PrimeNumberCalculatorSpec.groovy
          ]
        }
      }
    },
    de : {
      scrum_master : {
        stackoverflow : [
          AllowedPasswordsTest.groovy,
          CarTest.groovy,
          FileRecursionTest.groovy,
          {
            foo : [
              LoginIT.groovy,
              LoginModule.groovy,
              LoginPage.groovy,
              LoginValidationPage.groovy,
              User.groovy
            ]
          },
          LuceneTest.groovy
        ],
        testing : [
          GebTestHelper.groovy,
          RestartBrowserIT.groovy,
          SampleGebIT.groovy
        ]
      }
    }
  }"""

  def "Parse directory tree JSON representation"() {
    given:
    def jsonSlurper = new JsonSlurper(type: JsonParserType.LAX)
    def rootDirectory = jsonSlurper.parseText(jsonDirectoryTree)

    expect:
    rootDirectory.de.scrum_master.stackoverflow.contains("CarTest.groovy")
    rootDirectory.com.twg.sample.model.contains("PrimeNumberCalculatorSpec.groovy")

    when:
    def fileList = objectGraphToFileList("src/test/groovy", rootDirectory)
    fileList.each { println it }

    then:
    fileList.size() == 14
    fileList.contains("src/test/groovy/de/scrum_master/stackoverflow/CarTest.groovy")
    fileList.contains("src/test/groovy/com/twg/sample/model/PrimeNumberCalculatorSpec.groovy")
  }

  List<File> objectGraphToFileList(String directoryPath, Object directoryContent) {
    List<File> files = []
    directoryContent.each {
      switch (it) {
        case String:
          files << directoryPath + "/" + it
          break
        case Map:
          files += objectGraphToFileList(directoryPath, it)
          break
        case Map.Entry:
          files += objectGraphToFileList(directoryPath + "/" + (it as Map.Entry).key, (it as Map.Entry).value)
          break
        default:
          throw new IllegalArgumentException("unexpected directory content value $it")
      }
    }
    files
  }
}
Please note:
I used new JsonSlurper(type: JsonParserType.LAX) in order to avoid having to quote each single string in the JSON structure. If your file names contain spaces or other special characters, you will have to use quotes like "my file name", though.
In rootDirectory.de.scrum_master.stackoverflow.contains("CarTest.groovy") you can see how you can nicely interact with the parsed JSON object graph in .property syntax. You might like it or not, need it or not.
Recursive method objectGraphToFileList converts the parsed object graph to a list of files (if you prefer a set, change it, but File.eachFileRecurse(..) should not yield any duplicates, so a set is not needed).
If you do not like the parentheses etc. in the JSON, you can still build your own parser.
You might want to add another utility method to create a JSON string like the given one from a validated directory structure, so you have less work when writing similar tests.
Modified Bavo Bruylandt's answer to collect the file tree paths, and sorted them so as not to care about the order of files.
def "check directory structure"() {
    expect:
    String created = getCreatedFilePaths(new File("/tmp/region"))
    String expected = new File("expected.txt").text
    created == expected
}

private String getCreatedFilePaths(File root) {
    List paths = new ArrayList()
    printTree(root, 0, paths)
    return paths.join("\n")
}

private void printTree(File file, Integer level, List paths) {
    paths << ("\t" * level + "${file.name}:")
    file.listFiles().sort { it.name }.each {
        if (it.isFile()) {
            paths << ("\t" * (level + 1) + it.name)
        }
        if (it.isDirectory()) {
            printTree(it, level + 1, paths)
        }
    }
}
And the expected files are put in the expected.txt file with indents (\t) in this way:
region:
	usa:
		new_york.json
		california.json
	europe:
		spain.json
		italy.json

lappend doesn't work in Tcl proc

I want to get the list of included files in HDL Designer with a Tcl script. I took an example that printed the list of files into a log and changed it: I added a global variable filenames and append all file names to it, but after the proc executes my variable is empty.
proc walkDependencies {decl} {
    global alreadyDone filenames
    # Only look at each declaration once.
    if {[info exists alreadyDone($decl)]} {
        return
    }
    set alreadyDone($decl) 1
    # Only report each file once.
    set declFile [$decl file]
    if {[info exists alreadyDone($declFile)]} {
        set reportFile 0
    } else {
        set reportFile 1
        set alreadyDone($declFile) 1
    }
    foreach pkg [$decl packages] {
        walkDependencies $pkg
    }
    if {[$decl configure class] eq "architecture"} {
        walkDependencies [$decl entity]
        foreach inst [$decl instances] {
            if {![catch {$inst child} child]} {
                walkDependencies $child
            }
        }
    }
    set file [$decl file]
    set fileType [$file configure type]
    if {![regexp {Text$} $fileType]} {
        if {[lsearch {symbol blockInterface} $fileType] != -1} {
            # This assumes ent+arch are generated to a single file.
            set reportFile 0
        } else {
            set file [$file generated]
        }
    }
    if {$reportFile} {
        set lib [$file library]
        # Exclude standard and downstreamOnly libraries.
        if {[$lib configure type] eq "regular"} {
            # puts "[$lib configure hardHdlDir]/[$file configure relativePathname]"
            set tmp "[$lib configure hardHdlDir]/[$file configure relativePathname]"
            lappend $filenames $tmp
        }
    }
    if {[$decl configure class] eq "packageHeader"} {
        walkDependencies [$decl body]
    }
}
set filenames {}
catch {unset alreadyDone}
set lib [library open Travers_lib]
walkDependencies [$lib declaration Travers_top struct]
foreach i $filenames {
    puts $i
}
If I uncomment the line # puts "[$lib configure hardHdlDir]/[$file configure relativePathname]", all included files are printed into the log.
Your problem is most likely that you are using
lappend $filenames $tmp
which means that you are appending the value of tmp to a local variable whose name is equal to the value of filenames. Try
lappend filenames $tmp
instead.
Documentation:
lappend

issue accessing lexical scope using B

For debugging purposes I'd like to access the lexical scope of different subroutines that have a specific attribute set. That works fine, but I get a problem when the first variable stores a string: I get an empty string back. I do something like this:
$pad = $cv->PADLIST; # $cv is the coderef to the sub
@scratchpad = $pad->ARRAY; # getting the scratchpad
@varnames = $scratchpad[0]->ARRAY; # getting the variable names
@varcontents = $scratchpad[1]->ARRAY; # getting the content of the vars
for (0 .. $#varnames) {
    eval {
        my $name = $varnames[$_]->PV;
        my $content;
        # following line matches numbers, works so far
        $content = $varcontents[$_]->IVX if (scalar($varcontents[$_]) =~ /PVIV=/);
        # should match strings, but does give me undef
        $content = B::perlstring($varcontents[$_]->PV) if (scalar($varcontents[$_]) =~ /PV=/);
        print "DEBUGGER> Local variable: ", $name, " = ", $content, "\n";
    }; # there are special vars that throw an error, but I don't care about them
}
Like I said in the comment, the eval is there to prevent the errors from the B::SPECIAL objects in the scratchpad.
Output:
Local variable: $test = 42
Local variable: $text = 0
The first output is okay; the second should output "TEXT" instead of 0.
What am I doing wrong?
EDIT: With a little bit of coding I got all the values of the variables, but they are not stored at the same indexes in @varnames and @varcontents. So now the question is how (in which order) the values are stored in @varcontents.
use strict;
use warnings;
use B;

sub testsub {
    my $testvar1 = 42;
    my $testvar2 = 21;
    my $testvar3 = "testval3";
    print "printtest1";
    my $testvar4 = "testval4";
    print "printtest2";
    return "returnval";
}

no warnings "uninitialized";
my $coderef = \&testsub;
my $cv = B::svref_2object($coderef);
my $pad = $cv->PADLIST; # get scratchpad object
my @scratchpad = $pad->ARRAY;
my @varnames = $scratchpad[0]->ARRAY; # get varnames out of scratchpad
my @varcontents = $scratchpad[1]->ARRAY; # get content array out of scratchpad
my @vars; # array to store variable names and "undef" for special objects (print values, return values, etc.)
for (0 .. $#varnames) {
    eval { push @vars, $varnames[$_]->PV; };
    if ($@) { push @vars, "undef"; }
}
my @cont; # array to store the content of the variables and special objects
for (0 .. $#varcontents) {
    eval { push @cont, $varcontents[$_]->IV; };
    eval { push @cont, $varcontents[$_]->PV; };
}
print $vars[$_], "\t\t\t", $cont[$_], "\n" for (0 .. $#cont);
EDIT2: Added a runnable script to demonstrate the issue: variable names and variable values are not stored at the same index of the two arrays (@varnames and @varcontents).

PowerShell RoboCopy path issue

Objective: RoboCopy from multiple machines on the network to a network share, using variables for both the machine name and the currently logged-on user.
What I have: a txt file with a list of computer names.
Issue: I cannot get the foreach to work with the .split("\")[1] I use on the username variable to remove the domain prefix, so that I can use the output in the robocopy path,
something like
robocopy "\\$computername\c$\documents and settings\$username\backup" "\\networkshare\backup\$username\backup"
gives me the error
You cannot call a method on a null-valued expression.
At C:\Scripts\Test\backup.ps1:13 char:2
Here's what I have so far. Can somebody help please?
function Get-LoggedIn {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$True)]
        [string[]]$computername
    )
    foreach ($pc in $computername) {
        $logged_in = (gwmi win32_computersystem -COMPUTER $pc).username
        $name = $logged_in.split("\")[1]
        "{1}" -f $pc,$name
    }
}
$computers = Get-Content "C:\Scripts\testcomputers.txt"
foreach ($computer in $computers) {
    $users = Get-LoggedIn $computer
}
$SourceFolder = "\\$computer\c$\users\$users\desktop"
$DestinationFolder = "\\networkshare\backups\$users\backup\desktop"
$Logfile = "\\networkshare\backups\$users\backup\backuplog.txt"
Robocopy $SourceFolder $DestinationFolder /E /R:1 /W:1 /LOG:$Logfile
I see multiple errors here. You're not running the copy commands inside the foreach loop. Also, the username property received from WMI comes prefixed, typically in the format
domain\username (or computername\username for a local account)
Either way, the username is always the last part, so get it by using the index [-1] instead.
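A quick illustration of the negative index (the domain and account name are made-up examples):
'CONTOSO\jdoe'.Split('\')[-1]   # returns 'jdoe' regardless of how many prefix parts there are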
Updated script (with indents!):
function Get-LoggedIn {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$True)]
        [string[]]$computername
    )
    foreach ($pc in $computername) {
        $logged_in = (gwmi win32_computersystem -COMPUTER $pc).username
        $name = $logged_in.split("\")[-1]
        "{1}" -f $pc,$name
    }
}

$computers = Get-Content "C:\Scripts\testcomputers.txt"
foreach ($computer in $computers) {
    $users = Get-LoggedIn $computer
    $SourceFolder = "\\$computer\c$\users\$users\desktop"
    $DestinationFolder = "\\networkshare\backups\$users\backup\desktop"
    $Logfile = "\\networkshare\backups\$users\backup\backuplog.txt"
    & Robocopy $SourceFolder $DestinationFolder /E /R:1 /W:1 /LOG:$Logfile
}

How to create a dynamic variable in PowerShell, such as date/time etc.

Hi, I am not exactly sure if my wording is right, but I need a variable which contains the current date/time whenever I write data to a log; how can I do that without initializing it every time? Currently, every time I need an update, I use these two statements jointly. Is there another way of doing this?
$DateTime = get-date | select datetime
Add-Content $LogFile -Value "$DateTime.DateTime: XXXXX"
Please do let me know if you have any questions or need clarification regarding my question.
This script makes a real dynamic variable in PowerShell (thanks to Lee Holmes and his Windows PowerShell Cookbook: The Complete Guide to Scripting Microsoft's Command Shell, 3rd Edition):
##############################################################################
##
## New-DynamicVariable
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################
<#
.SYNOPSIS
Creates a variable that supports scripted actions for its getter and setter
.EXAMPLE
PS > .\New-DynamicVariable GLOBAL:WindowTitle `
-Getter { $host.UI.RawUI.WindowTitle } `
-Setter { $host.UI.RawUI.WindowTitle = $args[0] }
PS > $windowTitle
Administrator: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
PS > $windowTitle = "Test"
PS > $windowTitle
Test
#>
param(
    ## The name for the dynamic variable
    [Parameter(Mandatory = $true)]
    $Name,

    ## The scriptblock to invoke when getting the value of the variable
    [Parameter(Mandatory = $true)]
    [ScriptBlock] $Getter,

    ## The scriptblock to invoke when setting the value of the variable
    [ScriptBlock] $Setter
)
Set-StrictMode -Version 3
Add-Type @"
using System;
using System.Collections.ObjectModel;
using System.Management.Automation;

namespace Lee.Holmes
{
    public class DynamicVariable : PSVariable
    {
        public DynamicVariable(
            string name,
            ScriptBlock scriptGetter,
            ScriptBlock scriptSetter)
            : base(name, null, ScopedItemOptions.AllScope)
        {
            getter = scriptGetter;
            setter = scriptSetter;
        }

        private ScriptBlock getter;
        private ScriptBlock setter;

        public override object Value
        {
            get
            {
                if(getter != null)
                {
                    Collection<PSObject> results = getter.Invoke();
                    if(results.Count == 1)
                    {
                        return results[0];
                    }
                    else
                    {
                        PSObject[] returnResults =
                            new PSObject[results.Count];
                        results.CopyTo(returnResults, 0);
                        return returnResults;
                    }
                }
                else { return null; }
            }
            set
            {
                if(setter != null) { setter.Invoke(value); }
            }
        }
    }
}
"@
## If we've already defined the variable, remove it.
if (Test-Path variable:\$name)
{
    Remove-Item variable:\$name -Force
}

## Set the new variable, along with its getter and setter.
$executioncontext.SessionState.PSVariable.Set(
    (New-Object Lee.Holmes.DynamicVariable $name,$getter,$setter))
There's a Set-StrictMode -Version 3, but you can set it to -Version 2 if you can load Framework 4.0 in your PowerShell v2.0 session, as explained here.
The use for the OP is:
New-DynamicVariable -Name GLOBAL:now -Getter { (get-date).datetime }
Here is Lee Holmes's evaluation (which makes the real flaw clear) of the method I used in my other answer:
Note
There are innovative solutions on the Internet that use PowerShell's debugging facilities to create a breakpoint that changes a variable's value whenever you attempt to read from it. While unique, this solution causes PowerShell to think that any scripts that rely on the variable are in debugging mode. This, unfortunately, prevents PowerShell from enabling some important performance optimizations in those scripts.
Why not use:
Add-Content $LogFile -Value "$((Get-Date).DateTime): XXXXX"
This gets the current datetime every time. Notice that it's inside $( ), which makes PowerShell evaluate the expression (get the datetime) before inserting it into the string.
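If you prefer a compact, sortable timestamp, Get-Date also accepts a .NET format string; the pattern below is just an illustrative choice:
Add-Content $LogFile -Value "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss'): XXXXX"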
Or wrap your two commands in a function, so you will have just one call:
function add-log {
    param($txt)
    $DateTime = get-date | select -expand datetime
    Add-Content $LogFile -Value "${DateTime}: $txt"
}
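Usage would then look like this (the log path is a placeholder):
$LogFile = 'C:\Logs\run.log'
add-log "Backup started"   # appends one timestamped line to the log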
Besides these other ways (which, frankly, I would probably use instead, except for the breakpoint approach), you can create a custom object with a ScriptProperty for which you provide the implementation:
$obj = new-object pscustomobject
$obj | Add-Member ScriptProperty Now -Value { Get-Date }
$obj.now
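Each read of a ScriptProperty re-runs its scriptblock, so consecutive reads give fresh values; a small check:
$obj.Now                 # evaluates Get-Date at this moment
Start-Sleep -Seconds 1
$obj.Now                 # one second later: a newer value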
Using PSBreakpoint:
$act = @'
$global:now = (get-date).datetime
'@
$global:sb = [scriptblock]::Create($act)
$now = Set-PSBreakpoint -Variable now -Mode Read -Action $global:sb
Calling $now then returns the current, updated datetime value.
One liner:
$now = Set-PSBreakpoint -Variable now -Mode Read -Action { $global:now = (get-date).datetime }
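One thing to be aware of with the breakpoint approach, besides the performance note quoted above: the breakpoint stays registered for the whole session, so remove it when you no longer need the variable. A sketch:
Get-PSBreakpoint | Where-Object { $_.Variable -eq 'now' } | Remove-PSBreakpoint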