Using two Where commands to run two checks - where-clause

I am using this command
Where {$_.Extension -match "zip||rar"}
but need to use this command too
Where {$_.FullName -notlike $IgnoreDirectories}
Should I use && or || as in
Where {$_.Extension -match "zip||rar"} && Where {$_.FullName -notlike $IgnoreDirectories}
or
Where {$_.Extension -match "zip||rar"} || Where {$_.FullName -notlike $IgnoreDirectories}
?
What I am trying to accomplish is to have every zip and rar file be extracted but I want to skip the extraction of the zip or rar files in some of my directories. What is the best solution for this?

When in doubt, read the documentation.
Windows PowerShell supports the following logical operators.
-and Logical and. TRUE only when both statements are TRUE.
-or Logical or. TRUE when either or both statements are TRUE.
-xor Logical exclusive or. TRUE only when one of the statements is TRUE and the other is FALSE.
-not Logical not. Negates the statement that follows it.
! Logical not. Negates the statement that follows it. (Same as -not)
Put both clauses in the same scriptblock and connect them with the appropriate logical operator. For a logical AND between the two clauses use -and:
Where-Object {
    $_.Extension -match "zip||rar" -and
    $_.FullName -notlike $IgnoreDirectories
}
For a logical OR between the two clauses use -or:
Where-Object {
    $_.Extension -match "zip||rar" -or
    $_.FullName -notlike $IgnoreDirectories
}
In your case it's probably the former.
Note that your regular expression zip||rar matches any extension due to the empty string between the two |. To match only items with the extension .rar or .zip remove one pipe: $_.Extension -match "zip|rar".
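For completeness, here is how the combined filter could drive the extraction step. This is a minimal sketch of my own: C:\Archives, the $IgnoreDirectories value and the Write-Host placeholder are illustrative only, so substitute your own paths and extraction command.
$IgnoreDirectories = '*\Temp\*'

Get-ChildItem -Path C:\Archives -Recurse |
    Where-Object {
        -not $_.PSIsContainer -and
        $_.Extension -match 'zip|rar' -and
        $_.FullName -notlike $IgnoreDirectories
    } |
    ForEach-Object {
        Write-Host "Would extract $($_.FullName)"   # replace with your extraction command
    }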


Starting Apache fails without error

I have an Apache service 'SERV[XYZ]' in my Windows services and I would like to be able to start it from a PowerShell script.
In the script, I added a variable holding the service name:
$serv = "SERV[XYZ]";
stop-service $serv;
start-service $serv;
but this does not start the service. PowerShell executes without error.
The *-Service cmdlets do wildcard matches on the service name. See for instance the documentation of Start-Service:
Parameters
-DisplayName<String[]>
Specifies the display names of the services to be started. Wildcards are permitted.
The pattern [XYZ] means "match any of the characters X, Y, or Z" (like in regular expressions), so your statements try to stop/start services named SERVX, SERVY and/or SERVZ. To match a literal string [XYZ] you need to prevent the square brackets from being treated as special characters, e.g. like this:
$serv = 'SERV[[]XYZ[]]'
Stop-Service $serv
Start-Service $serv
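To see the wildcard behaviour in action, here is a quick console check (my own illustration, not part of the original answer):
'SERVX'     -like 'SERV[XYZ]'       # True  - [XYZ] matches the single character X
'SERV[XYZ]' -like 'SERV[XYZ]'       # False - the brackets are not matched literally
'SERV[XYZ]' -like 'SERV[[]XYZ[]]'   # True  - the escaped brackets match literally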
If you have only one service whose name starts with SERV you could also use a pattern like SERV*, or perhaps SERV?XYZ?, where the ? wildcards (each matching a single character) stand in for the square brackets.
Another option would be to use Get-Service without a name and filter the results via Where-Object:
$serv = 'SERV[XYZ]'
Get-Service | Where-Object { $_.Name -eq $serv } | Stop-Service
Get-Service | Where-Object { $_.Name -eq $serv } | Start-Service
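Yet another possibility (my own addition, not from the original answer) is to let PowerShell build the escaped pattern for you via the WildcardPattern class, which should behave the same as the manual escaping above:
$serv = [System.Management.Automation.WildcardPattern]::Escape('SERV[XYZ]')
# $serv is now SERV`[XYZ`] - the backticks escape the brackets in the wildcard pattern
Stop-Service $serv
Start-Service $serv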

Why does this work (or how)?

I received an email today about getting unused drive letters. This was their solution:
Get-ChildItem function:[d-z]: -Name | Where-Object {-not (Test-Path -Path $_)}
PowerShell Magazine BrainTeaser had this for a solution, same thing.
ls function:[d-z]: -n|?{!(test-path $_)}|random
I have no idea how function:[d-z]: works. I know that each character from 'd' to 'z' is used, but I don't know why the syntax works.
Testing Get-ChildItem function:[d-a]: -Name gives you an error saying Get-ChildItem : Cannot retrieve the dynamic parameters for the cmdlet. The specified wildcard pattern is not valid:[d-a]:
So is that a dynamic parameter? How come it does not show up with Get-Help gci -full?
function: is a PSDrive which exposes the set of functions defined in the current session. PowerShell creates a function for each single letter drive, named as the letter followed by a colon.
So, function:[d-z]: lists the functions from "d:" through "z:".
function:[d-a]: doesn't work because d-a isn't a valid range of letters.
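A quick way to see this for yourself (my own illustration, not part of the original answer):
# The function: drive exposes every function in the current session,
# including the auto-created single-letter drive functions.
Get-ChildItem function: | Select-Object Name -First 5

# [d-z] is an ordinary wildcard character range, so this lists the
# functions named d: through z:; piping the names through Test-Path,
# as in the original one-liner, then keeps only the unused letters.
Get-ChildItem function:[d-z]: -Name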

Powershell - listing folders in multiple places and changing files in those places

I'm trying to set up a script designed to change a bit over 100 placeholders in probably some 50 files. In general I have a list of possible placeholders and their values. I have some applications that have exe.config files as well as ini files. These applications are stored in c:\programfiles(x86)\ and in d:\. In general I managed to make it work with one path, but not with two. I could easily write the code to replace twice, but that leaves me with a lot of messy code and would be harder for others to read.
ls c:\programfiles(x86) -Recurse | where-object {$_.Extension -eq ".config" -or $_.Extension -eq ".ini"} | %{(gc $PSPath) | %{
$_ -replace "abc", "qwe" `
-replace "lkj", "hgs" `
-replace "hfd", "fgd"
} | sc $_PSPath; Write-Host "Processed: " + $_.Fullname}
I've tried to include two paths by putting $a = path1, $b = path2, $c = $a + $b, and that seems to work as far as getting the ls command to run in two different places. However, it does not seem to store the path the files are in, and so it tries to replace the file names it has found relative to the folder you are currently running the script from. And thus, even if I might be in one of the places where the files are supposed to be, it's not in the other ...
So .. Any idea how I can get Powershell to list files in two different places and replace the same variables in both places without having to have the code twice? I thought about putting the code I would have to use twice into a variable and calling it when I needed it instead of writing it again, but it seemed to resolve the code before using it, and that didn't exactly give me results since the data comes from the first part.
If you got a cool pipeline, then every problem looks like ... uhm ... fluids? objects? I have no clue. But anyway, just add another layer (and fix a few problems along the way):
$places = 'C:\Program Files (x86)', 'D:\some other location'

$places |
    Get-ChildItem -Recurse -Include *.ini,*.config |
    ForEach-Object {
        (Get-Content $_) -replace 'abc', 'qwe' `
                         -replace 'lkj', 'hgs' `
                         -replace 'hfd', 'fgd' |
            Set-Content $_
        'Processed: {0}' -f $_.FullName
    }
Notable changes:
Just iterate over the list of folders to crawl as the first step.
Doing the filtering directly in Get-ChildItem makes it faster and saves the Where-Object.
-replace can be applied directly to an array, no need for another ForEach-Object there.
If the number of replacements is large you may consider using a hashtable to store them so that you don't have twenty lines of -replace 'foo', 'bar'.
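For example, the replacement table mentioned in the last bullet could look something like this (a sketch with the same made-up values as above, not tested against your files):
$replacements = @{
    'abc' = 'qwe'
    'lkj' = 'hgs'
    'hfd' = 'fgd'
}

$places |
    Get-ChildItem -Recurse -Include *.ini,*.config |
    ForEach-Object {
        $file = $_
        $text = Get-Content $file.FullName
        foreach ($pair in $replacements.GetEnumerator()) {
            # -replace treats the key as a regex, which is fine for plain strings
            $text = $text -replace $pair.Key, $pair.Value
        }
        $text | Set-Content $file.FullName
        'Processed: {0}' -f $file.FullName
    }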

How to search with multiple keys in a hash table in powershell

I'm writing a Powershell script and in it I'm using a hash table to store information about database checks. The table has 5 keys (host, check, last execution time, last rep, status) and I want to search in my table for values where:
$s = $table where $host -eq $hostname -and check -eq $check
Does anyone have any idea how this is done? And if it makes any difference, the script cannot rely on .NET framework higher than 2.0
I'm new to Powershell and scripting in general so this might be very obvious but I still can't seem to find an answer on Google. Also if someone knows a good reference page for Powershell scripting I would really appreciate a link.
Gísli
EDIT: Don't see how it matters but here is a function I use to create a hash table:
function read_saved_state{
    $state = @{}
    $logpos = @{}
    $last_log_rotate = 0
    foreach($s in Get-Content $saved_state_file){
        $x = $s.split('|')
        if($x[0] -eq 'check'){
            $state.host = $x[1]
            $state.check = $x[2]
            $state.lastexec = $x[3]
            $state.lastrep = $x[4]
            $state.status = $x[5]
        }
        elseif($x[0] -eq 'lastrotate'){
            $last_log_rotate = $x[1]
        }
        elseif($x[0] -eq 'log'){
            $logpos.lastpos = $x[3]
        }
    }
    return $state,$logpos,$last_log_rotate
}
$saved_state_file has one line for each check run and can also have a line for the last log rotate and the last log position. There can be as many as 12 checks for one host.
I'm trying to extract a particular check, run at a particular host, and change the lastexec_time, last_rep and status.
Assuming you have an array or list of hashtables (not entirely clear from the question), your syntax is pretty close:
$s = $tables | where {($_.host -eq $hostname) -and ($_.check -eq $check)}
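To make that concrete, here is a self-contained example with made-up sample data (the values are purely illustrative):
# $tables is a collection of per-check hashtables like the one read_saved_state builds
$tables = @(
    @{ host = 'db01'; check = 'backup'; lastexec = '2011-06-01'; lastrep = ''; status = 'OK' },
    @{ host = 'db01'; check = 'space';  lastexec = '2011-06-01'; lastrep = ''; status = 'WARN' },
    @{ host = 'db02'; check = 'backup'; lastexec = '2011-06-02'; lastrep = ''; status = 'OK' }
)

$hostname = 'db01'
$check    = 'backup'

$s = $tables | where {($_.host -eq $hostname) -and ($_.check -eq $check)}
$s.status   # OK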

Files with .sql extension identified as binary in Mercurial [duplicate]

Possible Duplicate:
Why does Mercurial think my SQL files are binary?
I generated a complete set of scripts for the stored procedures in a database. When I created a Mercurial repository and added these files they were all added as binary. Obviously, I still get the benefits of versioning, but lose a lot of efficiency, 'diff'ing, etc... of text files. I verified that these files are indeed all just text.
Why is it doing this?
What can I do to avoid it?
Is there a way to get Hg to change its mind about these files?
Here is a snippet of changeset log:
496.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFindCustomerByMatchCode.StoredProcedure.sql has changed
497.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFindUnreconcilableChecks.StoredProcedure.sql has changed
498.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFixBadLabelSelected.StoredProcedure.sql has changed
499.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFixCCOPL.StoredProcedure.sql has changed
500.1 Binary file SQL/SfiData/Stored Procedures/dbo.pFixCCOrderMoneyError.StoredProcedure.sql has changed
Thanks in advance for your help
Jim
In keeping with Mercurial's views on binary files, it does not actually track file types, which means that there is no way for a user to mark a file as binary or not binary.
As tonfa and Rudi mentioned, Mercurial determines whether a file is binary or not by seeing if there is a NUL byte anywhere in the file. In the case of UTF-[16|32] files, a NUL byte is pretty much guaranteed.
To "fix" this, you would have to ensure that the files are encoded with UTF-8 instead of UTF-16. Ideally, your database would have a setting for Unicode encoding when doing the export. If that's not the case, another option would be to write a precommit hook to do it (see How to convert a file to UTF-8 in Python for a start), but you would have to be very careful about which files you were converting.
I know it's a bit late, but I was evaluating Kiln and came across this problem. After discussion with the guys at Fogbugz who couldn't give me an answer other than "File/Save As" from SSMS for every *.sql file (very tedious), I decided to have a look at writing a quick script to convert the *.sql files.
Fortunately you can use one Microsoft technology (Powershell) to (sort of) overcome an issue with another Microsoft technology (SSMS). Using Powershell, change to the directory that contains your *.sql files and then copy and paste the following into the Powershell shell (or save it as a .ps1 script and run it from Powershell - make sure to run the command "Set-ExecutionPolicy RemoteSigned" before trying to run a .ps1 script):
function Get-FileEncoding
{
    [CmdletBinding()] Param (
        [Parameter(Mandatory = $True, ValueFromPipelineByPropertyName = $True)] [string]$Path
    )

    # Read the first four bytes and compare them against the known byte order marks
    # (UTF-8, UTF-16 BE/LE, UTF-32 BE, UTF-7); files without a BOM are reported as ASCII.
    [byte[]]$byte = Get-Content -Encoding Byte -ReadCount 4 -TotalCount 4 -Path $Path
    if ( $byte[0] -eq 0xef -and $byte[1] -eq 0xbb -and $byte[2] -eq 0xbf )
    { Write-Output 'UTF8' }
    elseif ($byte[0] -eq 0xfe -and $byte[1] -eq 0xff)
    { Write-Output 'Unicode' }
    elseif ($byte[0] -eq 0xff -and $byte[1] -eq 0xfe)
    { Write-Output 'Unicode' }
    elseif ($byte[0] -eq 0 -and $byte[1] -eq 0 -and $byte[2] -eq 0xfe -and $byte[3] -eq 0xff)
    { Write-Output 'UTF32' }
    elseif ($byte[0] -eq 0x2b -and $byte[1] -eq 0x2f -and $byte[2] -eq 0x76)
    { Write-Output 'UTF7' }
    else
    { Write-Output 'ASCII' }
}
$files = Get-ChildItem "*.sql"
foreach ( $file in $files )
{
    $encoding = Get-FileEncoding $file
    if ($encoding -eq 'Unicode')
    {
        (Get-Content "$file" -Encoding Unicode) | Set-Content -Encoding UTF8 "$file"
    }
}
The function Get-FileEncoding is courtesy of http://poshcode.org/3227, although I had to modify it slightly to cater for UCS-2 little-endian files, which SSMS seems to have saved these as. I would recommend backing up your files first, as the script overwrites the originals - you could, of course, modify it so that it saves a UTF-8 version of the file instead, e.g. change the last line of code to say:
(Get-Content "$file" -Encoding Unicode) | Set-Content -Encoding UTF8 "$file.new"
The script should be easy to modify to traverse subdirectories as well.
Now you just need to remember to run this if there are any new *.sql files, before you commit and push your changes. Any files already converted and subsequently opened in SSMS will stay as UTF-8 when saved.
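If you do end up needing the recursive variant mentioned above, something along these lines should work (my own sketch, not part of the original script - it reuses the Get-FileEncoding function defined earlier):
Get-ChildItem -Path . -Filter *.sql -Recurse | ForEach-Object {
    if ((Get-FileEncoding $_.FullName) -eq 'Unicode') {
        (Get-Content $_.FullName -Encoding Unicode) | Set-Content -Encoding UTF8 $_.FullName
    }
}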