Run PowerShell in Access VBA

I need to change a string in multiple text files.
I have written the script below in Access VBA, but I get the error "Type mismatch":
Dim str As String
str = "N=maher"
Call Shell("c:\windows\system32\powershell.exe" - Command("get-content -Path e:\temptest.txt") - Replace(str, "maher", "ali"))

The syntax for calling PowerShell is way off. Suggestion: get it working from the command line yourself first, and only then run it from Access (an odd choice: it just makes this more complicated).
A PowerShell script to do this (.ps1 file) would need to contain something like:
Get-Content -Path "E:\temptest.txt" | ForEach-Object { $_ -Replace 'maher', 'ali' } | do-something-with-the-updated-content
You need to define:
What you are replacing (you pass N=maher in, but then hard-code two strings for Replace).
What to do with the strings after the replacement (Get-Content only reads files).
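A minimal sketch of a working command line, assuming the file should be rewritten in place (E:\temptest.txt is the path from the question). Run it from cmd.exe first to verify it works before wiring it into Access:

```powershell
# Read the whole file (the parentheses force the read to finish first),
# replace the text, then write the result back to the same file.
powershell.exe -NoProfile -Command "(Get-Content -Path 'E:\temptest.txt') -replace 'maher','ali' | Set-Content -Path 'E:\temptest.txt'"
```

From VBA, you would then pass that whole line as a single string to Shell, doubling each inner quote ("") as VBA string literals require.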

Related

PowerShell - Create Text File from SQL

I'm creating a PowerShell script to generate .sql files based on some stored procedures in a SQL Server database.
On the advice of this article, I'm using .NET's System.Data.SqlClient.SqlConnection class to create my connection and run stored procedures. Per the article, I can store that data in a System.Data.DataTable object.
The trouble I'm running into is how to best build the file. I'm new to PowerShell, so I've been getting away with something like this:
$SqlFile = "$CurrentDirectory\$CurrentDate`_$Library`_$File`_Table_CREATE.sql$ReviewFlag"
"USE [$TargetDB]`n`nCREATE TABLE [$Library].[$File] (`n" | Out-File $SqlFile
$dt.DataTypeString | Out-File -Append $SqlFile
")`n" | Out-File -Append $SqlFile
But the trouble now is that I need to generate the file extension based on another query, whose output will also be part of the file. I can't do that in sequence as above; I would need to somehow save that output and write it to the file later.
Do I want to keep invoking Out-File, or would it be better to somehow convert the DataTable object to a string or list of strings that I could then concatenate into one large string to be written to the file all at once? This doesn't have to be perfect, but I would like to write something reasonable and maintainable.
Edit: I decided to go the route of using an ArrayList to store the values from the DataTable, and from there it's simple string concatenation:
$ColumnStringList = New-Object -TypeName 'System.Collections.ArrayList'
foreach($val in $dt.DataTypeString) {$ColumnStringList.Add("`n`t"+$val)}
And then
$SqlFileString = $($SqlFileHeader+$ColumnStringList+$RRNCol+$SqlFileFooter)
$SqlFileString | Out-File $SqlFile
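An alternative sketch that avoids the ArrayList entirely: collect the column lines with -join and write the file once. The variable names ($dt, $SqlFileHeader, $RRNCol, $SqlFileFooter, $SqlFile) are taken from the question and its edit:

```powershell
# Build the column block as one string; each entry goes
# indented onto its own line, as in the original loop.
$columns = ($dt.DataTypeString | ForEach-Object { "`n`t$_" }) -join ''
$SqlFileString = $SqlFileHeader + $columns + $RRNCol + $SqlFileFooter
# Write the file once at the end instead of appending repeatedly.
Set-Content -Path $SqlFile -Value $SqlFileString
```

If the column definitions need separating commas that DataTypeString doesn't already include, use -join ',' on the indented entries instead.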

BCP update database table based on output from PowerShell

I have 4 files with the same csv header as following
Column1,Column2,Column3,Column4
But I only need the data from Column2, Column3 and Column4 to import into the SQL database using BCP. I am using PowerShell to select the columns I want and then import them with BCP, but my PowerShell runs with no errors and no data is updated in my database table. How can I set up BCP to import the output from PowerShell into the database table? Here is my PowerShell script:
$filePath = Get-ChildItem -Path 'D:\test\*' -Include $filename
$desiredColumn = 'Column2','Column3','Column4'
foreach($file in $filePath)
{
write-host $file
$test = import-csv $file | select $desiredColumn
write-host $test
$action = bcp <myDatabaseTableName> in $test -T -c -t";" -r"\n" -F2 -S <MyDatabase>
}
These are the output from the powershell script
D:\test\sample1.csv
@{column2=111;column3=222;column4=333} @{column2=444;column3=555;column4=666}
D:\test\sample2.csv
@{column2=777;column3=888;column4=999} @{column2=aaa;column3=bbb;column4=ccc}
First off, you can't update a table with bcp. It is used to bulk load data: it will either insert new rows or export existing data into a flat file. Changing existing rows, usually called updating, is out of scope for bcp. If that's what you need, use another tool. Sqlcmd works fine, and PowerShell's got Invoke-Sqlcmd for running arbitrary T-SQL statements.
Anyway, the BCP utility has notoriously tricky syntax. As far as I know, one cannot bulk load data by passing it as a parameter to bcp; a source file must be used. Thus you need to save the filtered data to a file and pass its name to bcp.
Exporting a filtered CSV is easy enough; just remember to use the -NoTypeInformation switch, lest you get #TYPE Selected.System.Management.Automation.PSCustomObject as your first row of data. This assumes the bcp arguments are well and good (why -F2 though? And Unix newlines?).
Stripping the double quotes requires another edit to the file. The Scripting Guy has a solution.
foreach($file in $filePath){
write-host $file
$test = import-csv $file | select $desiredColumn
# Overwrite filtereddata.csv, should one exist, with filtered data
$test | export-csv -path .\filtereddata.csv -NoTypeInformation
# Remove double quotes
(gc filtereddata.csv) | % {$_ -replace '"', ''} | out-file filtereddata.csv -Fo -En ascii
$action = bcp <myDatabaseTableName> in filtereddata.csv -T -c -t";" -r"\n" -F2 -S <MyDatabase>
}
Depending on your locale, the list separator might be a semicolon, a comma or something else. Use the -Delimiter '<character>' switch to pass whatever you need, or change bcp's -t argument.
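For example, to make the intermediate file match the -t";" argument already passed to bcp in the loop above:

```powershell
# Write the filtered rows with semicolons so bcp's -t";" matches.
$test | Export-Csv -Path .\filtereddata.csv -Delimiter ';' -NoTypeInformation
```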
Erland's got a helpful page about bulk operations. Also, see Redgate's advice.
Without need to modify the file first, there is an answer here about how bcp can handle quoted data.
BCP in with quoted fields in source file
Essentially, you need to use the -f option and create/use a format file to tell SQL Server your custom field delimiter. In short, it is no longer a lone comma (,) but now (","): a comma with two double quotes. You need to escape the double quotes, plus a small trick to handle the first double quote on a line. But it works like a charm.
You also need the format file to ignore column(s): just set the destination column number to zero. All with no need to modify the file before the load. Good luck!
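A sketch of a non-XML format file implementing that idea for the question's four-column CSV, assuming every field is quoted and Column1 should be skipped. The version line (14.0), field lengths (100) and collation are placeholders to adapt to your bcp version and table:

```text
14.0
5
1   SQLCHAR   0   0     "\""       0   FIRST_QUOTE   ""
2   SQLCHAR   0   100   "\",\""    0   Column1       ""
3   SQLCHAR   0   100   "\",\""    1   Column2       SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR   0   100   "\",\""    2   Column3       SQL_Latin1_General_CP1_CI_AS
5   SQLCHAR   0   100   "\"\r\n"   3   Column4       SQL_Latin1_General_CP1_CI_AS
```

The dummy first field consumes the opening quote of each line, the "\",\"" terminators treat quote-comma-quote as the delimiter, and a server column order of 0 (Column1) tells bcp to discard that field. You would then pass the file with something like -f columns.fmt instead of -t/-r.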

Why does this work (or how)?

Today I received an email about getting unused drive letters. This was its solution:
Get-ChildItem function:[d-z]: -Name | Where-Object {-not (Test-Path -Path $_)}
PowerShell Magazine BrainTeaser had this for a solution, same thing.
ls function:[d-z]: -n|?{!(test-path $_)}|random
I have no idea how function:[d-z]: works. I know that for each character between 'd' to 'z' is used but I don't know why the syntax works.
Testing Get-ChildItem function:[d-a]: -Name gives you an error saying Get-ChildItem : Cannot retrieve the dynamic parameters for the cmdlet. The specified wildcard pattern is not valid: [d-a]:
So is that a dynamic parameter? How come it does not show up with Get-Help gci -Full?
function: is a PSDrive that exposes the set of functions defined in the current session. PowerShell creates a function for each single-letter drive, named as the letter followed by a colon.
So function:[d-z]: lists the functions from "d:" through "z:".
function:[d-a]: doesn't work because d-a isn't a valid range of letters: in a wildcard character range the first letter must come before the second.
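You can see the pieces for yourself; a small sketch of what each part of the one-liner does:

```powershell
# function: is a provider drive; [d-z]: is a wildcard over the function names,
# so this lists the automatic drive functions d: through z:
Get-ChildItem function:[d-z]: -Name

# Each of these functions is just a small wrapper around Set-Location:
(Get-Item function:d:).Definition

# Test-Path 'd:' asks whether such a drive actually exists, so the
# Where-Object keeps only the letters with no drive behind them.
Test-Path 'd:'
```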

PowerShell - listing folders in multiple places and changing files in those places

I'm trying to set up a script to change a bit over 100 placeholders in probably some 50 files. In general I have a list of possible placeholders and their values. I have some applications with .exe.config files as well as .ini files. These applications are stored in c:\programfiles(x86)\ and in d:\. In general I managed to make it work with one path, but not with two. I could easily write the code to do the replacement twice, but that leaves me with a lot of messy code that would be harder for others to read.
ls c:\programfiles(x86) -Recurse | where-object {$_.Extension -eq ".config" -or $_.Extension -eq ".ini"} | %{(gc $PSPath) | %{
$_ -replace "abc", "qwe" `
-replace "lkj", "hgs" `
-replace "hfd", "fgd"
} | sc $_PSPath; Write-Host "Processed: " + $_.Fullname}
I've tried to include two paths by putting $a = path1, $b = path2, $c = $a + $b, and that seems to work as far as getting the ls command to run in two different places. However, it does not seem to store the path the files are in, so it tries to replace the filenames it has found in the folder you are currently running the script from. And thus, even if I might be in one of the places where the files are supposed to be, I'm not in the other...
So... any idea how I can get PowerShell to list files in two different places and replace the same variables in both places without having to have the code twice? I thought about putting the code I would have to use twice into a variable and calling it when needed instead of writing it again, but it seemed to resolve the code before using it, and that didn't exactly give me results since the data comes from the first part.
If you got a cool pipeline, then every problem looks like ... uhm ... fluids? objects? I have no clue. But anyway, just add another layer (and fix a few problems along the way):
$places = 'C:\Program Files (x86)', 'D:\some other location'
$places |
Get-ChildItem -Recurse -Include *.ini,*.config |
ForEach-Object {
(Get-Content $_) -replace 'abc', 'qwe' `
-replace 'lkj', 'hgs' `
-replace 'hfd', 'fgd' |
Set-Content $_
'Processed: {0}' -f $_.FullName
}
Notable changes:
Just iterate over the list of folders to crawl as the first step.
Doing the filtering directly in Get-ChildItem makes it faster and saves the Where-Object.
-replace can be applied directly to an array, no need for another ForEach-Object there.
If the number of replacements is large, consider storing them in a hashtable so that you don't have twenty lines of -replace 'foo', 'bar'.
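A sketch of that hashtable approach, reusing the $places list and the replacement pairs from the answer above (the keys are treated as regex patterns, as -replace always does):

```powershell
# One table of pattern -> replacement pairs instead of chained -replace lines.
$replacements = @{ 'abc' = 'qwe'; 'lkj' = 'hgs'; 'hfd' = 'fgd' }

Get-ChildItem $places -Recurse -Include *.ini, *.config | ForEach-Object {
    # Read the whole file as one string, apply every pair, write it back.
    $text = Get-Content $_ -Raw
    foreach ($pair in $replacements.GetEnumerator()) {
        $text = $text -replace $pair.Key, $pair.Value
    }
    Set-Content -Path $_ -Value $text
    'Processed: {0}' -f $_.FullName
}
```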

How to use a PowerShell variable as command parameter?

I'm trying to use a variable as a command's parameter but can't quite figure it out. Let's say MyCommand will accept two parameters: option1 and option2 and they accept boolean values. How would I use $newVar to substitute option 1 or 2? For example:
$newVar = "option1"
MyCommand -$newVar:$true
I keep getting something along the lines of 'A positional parameter cannot be found that accepts argument '-System.String option1'.
More Specifically:
Here, the CSV file is the output of a different policy. The loop goes through each property in the file and sets that value in my policy asdf; so -$_.name:$_.value should expand to -AllowBluetooth:true.
Import-Csv $file | foreach-object {
$_.psobject.properties | where-object {
# for testing I'm limiting this to 'AllowBluetooth' option
if($_.name -eq "AllowBluetooth"){
Set-ActiveSyncMailboxPolicy -Identity "asdf" -$_.name:$_.value
}}
}
Typically, to use a variable to populate cmdlet parameters, you'd use a hash table variable and splat it, using @:
$newVar = @{option1 = $true}
mycommand @newVar
Added example:
$AS_policy1 = @{
Identity = "asdf"
AllowBluetooth = $true
}
Set-ActiveSyncMailboxPolicy @AS_policy1
See if this works for you:
iex "MyCommand -$($newVar):$true"
I had the same problem and just found out how to resolve it. The solution is to use Invoke-Expression: Invoke-Expression $mycmd
This takes the $mycmd string, expands the variables in it, and executes the result as a command with the given parameters.
Nowadays, if you don't mind evaluating strings as commands, you may use Invoke-Expression:
$mycmd = "MyCommand -$($newVar):$true"
Invoke-Expression $mycmd
I would try with:
$mycmd = "MyCommand -$($newVar):$true"
& $mycmd
Result: this can't work, because the call operator (&) only executes single commands (a command name or path, or a script block); it does not parse a whole string as a command line with parameters.
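What & can do is run a command whose name is held in a string, with the arguments supplied separately, for instance via splatting. A sketch, with MyCommand and option1 being the hypothetical names from the question:

```powershell
$newVar = 'option1'
$cmd    = 'MyCommand'
# Hashtable keys can come from variables, so the parameter
# name itself is dynamic here.
$params = @{ $newVar = $true }
# Equivalent to: MyCommand -option1:$true
& $cmd @params
```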