Why doesn't Get-AzDataFactoryV2PipelineRun return InProgress pipeline runs? - azure-powershell

I am running an Azure Function App with the following PowerShell script to get ADF pipeline runs. It returns the full history of pipeline runs, but it skips the active (InProgress) run and returns Completed/Failed/Cancelled runs only. The Microsoft documentation shows that it should return the InProgress status as well. I triggered the ADF pipeline and made sure it was running, yet this script only returned runs from the last completed run backwards, completely ignoring the active/InProgress run.
This is the script:
#Input Parameters
$ResourceGroupName = 'ABC'
$DataFactoryName = 'XYZ'
$before = (get-date).AddDays(1)
$after = (get-date).AddDays(-1)
#Get Pipeline Run History
$runIds = Get-AzDataFactoryV2PipelineRun -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName -LastUpdatedAfter $after -LastUpdatedBefore $before
#Return all Retrieved Statuses
$runIds | ForEach-Object {
    ## Check each returned status
    Write-Host $_.Status
    Write-Host $_.RunStart
    Write-Host $_.RunEnd
}
Is this a bug, or am I missing something here?

The code looks fine; it works when I test it on my end. I would try widening the timeframe:
$before = (get-date)
$after = (get-date).AddDays(-5)
Also, ensure the pipeline run has not already completed before the script executes.
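If widening the window still doesn't surface the run, you can filter the results explicitly for active runs to rule out an ordering problem (a minimal sketch reusing $runIds from the question):
#Show only the active runs, if any were returned at all
$runIds | Where-Object { $_.Status -eq 'InProgress' }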

I figured out the issue. The data it returned was sorted differently than I expected. Because active pipeline runs have a null RunEnd, the active pipelines were placed at the end of the returned rows. So it showed all previous runs sorted from most recent to oldest (descending), and then, at the very end, the active runs, again from most recent to oldest.
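Sorting on a field that is always populated puts the active runs where you would expect them. A minimal sketch (RunStart is used here because RunEnd is null for InProgress runs):
#Sort by RunStart so active runs keep their place in the descending list
$runIds | Sort-Object -Property RunStart -Descending | ForEach-Object {
    Write-Host "$($_.Status) | $($_.RunStart) | $($_.RunEnd)"
}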

Related

JitterBit Run Only One Instance of an Operation at a Time

I ran into an issue where I had long-running JitterBit operations that were scheduled. I had them scheduled close together, since I needed to keep data flowing. But when they took longer than expected, I would wind up with multiple instances of the operation set running at the same time, which was killing my performance.
I'll put the fix in the answer below.
To resolve this issue I added an additional Script Operation at the beginning of my operation set (with the schedule running on this operation). The script simply checks whether one of the operations in this set is already running. If not, it starts the next operation; if anything is running, it exits and waits for the next scheduled instance.
This is a sample of my script. This one assumes that there were originally two operations in this operation set.
<trans>
// Check whether either operation in this set is already queued or running
$isInQueue = GetOperationQueue("<TAG>Operations/OperationToCheck01</TAG>");
$isInQueue2 = GetOperationQueue("<TAG>Operations/OperationToCheck02</TAG>");
$isRunning = $isInQueue[0][1];
$isRunning2 = $isInQueue2[0][1];
if(($isRunning == 1 && $isRunning != Null()) || ($isRunning2 == 1 && $isRunning2 != Null()),
    WriteToOperationLog("Skip for now: " + $isRunning + " / " + $isRunning2);,
    WriteToOperationLog("Nothing is running - starting operation chain.");
    RunOperation("<TAG>Operations/OperationToCheck01</TAG>");
);
</trans>

Why doesn't my test run tearDownAfterTestClass when it fails

In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when the tests finish I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the tests and not get any false positives.
Now, when the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this:
try
{
    if (!mysqli_query($connection, $query))
    {
        $this->assertTrue(false);
    }
}
catch (Exception $exc)
{
    $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
    $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
    $this->fail($msg);
}
The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed, it didn't run tearDownAfterTestClass.
Now, I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but that seems like bad programming to me, since it defeats the purpose of even having tearDownAfterTestClass.
So I am wondering: why isn't my tearDownAfterTestClass running when a test fails?
NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the tear-down is to remove any data we have added because of the tests, so that we don't have to restore the database every time we run them.

How to prevent execution of a waf task if nothing changes from the last successful execution?

I have a waf task that runs MSBuild to build a project, but I only want to run it if the last execution was not successful.
How should I do this?
Store MS_SUCC = 1 in your build.env and retrieve the value from the previous build (on the first run you naturally have to check whether the dict item MS_SUCC exists).

PowerShell: embedding one variable in another one

In PowerShell, for a complicated reason, an application needs a 6-character string as a sort of handle (e.g. abcdef) for a long asynchronous operation. I perform this using:
$replaceHandle = "abcdef"
start-Job -handle $replaceHandle
The application stores the status of the asynchronous job in $abcdef (it prefixes the string with $), and I can access the job status parameters at any time by requesting
Get-Status -ID $abcdef.ID
The problem in my code is that I am not able to get this $abcdef.ID properly.
I have tried:
$($jobVar.ToString()).ID - this comes back blank or errors out
manipulating $$jobVar.ID actually gives $'abcdef'.ID
How do I get the value of $abcdef.ID appropriately?
It's because $jobVar is interpreted first. Try:
write-host "the value is $($jobVar.ID)"
or resolve the variable whose name is stored in $jobVar:
(Get-Variable $jobVar -ValueOnly).Id
This worked
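Putting the pieces together, a minimal end-to-end sketch (Get-Status and the -handle parameter belong to the asker's application, so they are assumptions here):
$replaceHandle = "abcdef"
start-Job -handle $replaceHandle
#The application has now created a variable literally named $abcdef;
#Get-Variable resolves it from the name stored in $replaceHandle
$job = Get-Variable -Name $replaceHandle -ValueOnly
Get-Status -ID $job.ID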

SQL Server Agent 2005 job runs but no output

Essentially I have a job which runs in BIDS and as a stand-alone package, but when it runs under the SQL Server Agent it doesn't complete properly (no error messages, though).
The job steps are:
1) Delete all rows from table;
2) Use a For Each loop to fill up the table from Excel spreadsheets;
3) Clean up table.
I've tried this MS page (steps 1 & 2), and didn't see any need to start changing server-side security. I also tried this SQLServerCentral.com page, with no resolution.
How can I get error logging or a fix?
Note I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming.
I have logged in as the proxy account I'm running this under, and the job runs stand-alone but complains that the Excel tables are empty.
Here's how I managed tracking "returned state" from an SSIS package called via a SQL Agent job. If we're lucky, some of this may apply to your system.
1) Job calls a stored procedure
2) Procedure builds a DTEXEC call (with a dozen or more parameters)
3) Procedure calls xp_cmdshell, with the call as a parameter (@Command)
4) SSIS package runs
5) A "local" SSIS variable is initialized to 1
6) If an error is raised, SSIS "flow" passes to a step that sets that local variable to 0
7) In a final step, Expressions set the SSIS property "ForceExecutionResult" to that local variable (1 = Success, 0 = Failure)
The full form of the SSIS call stores the returned value like so:
EXECUTE @ReturnValue = master.dbo.xp_cmdshell @Command
...and then it gets messy, as you can get a host of values returned from SSIS. I logged actions and activity in a DB table while going through the SSIS steps and consult that to try to work things out (which is where @Description below comes from). Here's the relevant code and comments:
-- Evaluate the DTEXEC return code
SET @Message = case
    when @ReturnValue = 1 and @Description <> 'SSIS Package' then 'SSIS Package execution was stopped or interrupted before it completed'
    when @ReturnValue in (0,1) then '' -- Package success or failure is logged within the package
    when @ReturnValue = 3 then 'DTEXEC exit code 3, package interrupted'
    when @ReturnValue in (4,5,6) then 'DTEXEC exit code ' + cast(@ReturnValue as varchar(10)) + ', package could not be run'
    else 'DTEXEC exit code ' + isnull(cast(@ReturnValue as varchar(10)), '<NULL>') + ' is an unknown and unanticipated value'
end
-- Oddball case: if the cmd.exe process is killed, the return value is 1, but the process will continue anyway
-- and could finish 100% successfully... and @ReturnValue will equal 1. If you can figure out how,
-- write a check for this in here.
That last comment references the "what if, while SSIS is running, some admin joker kills the CMD session (from, say, Task Manager) because the process is running too long" situation. We've never had it happen--that I know of--but they were uber-paranoid when I was writing this, so I had to look into it...
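As an aside, if you call the package from PowerShell rather than through xp_cmdshell, the same exit-code mapping can be done directly. A hedged sketch (the package path is an assumption, and dtexec must be on the PATH):
#Hypothetical package path - adjust for your environment
& dtexec /F "C:\Packages\MyPackage.dtsx"
$code = $LASTEXITCODE
$message = switch ($code) {
    0 { 'Package ran successfully' }
    1 { 'Package failed' }
    3 { 'Package was interrupted' }
    { $_ -in 4, 5, 6 } { "DTEXEC exit code $code, package could not be run" }
    default { "DTEXEC exit code $code is an unknown and unanticipated value" }
}
Write-Host $message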
Why not use the logging built into SSIS? We send our logs to a database table and then parse them out to another table in a more user-friendly format, and can see every step of every package that was run. And every error.
I did fix this eventually, thanks for the suggestions.
Basically, I logged into Windows with the proxy user account I was running the job under and started to see errors like:
"The For each file enumerator is empty"
I copied the project files across and started testing. It turned out that I'd still left a file path (N:/) in the properties of the For Each Loop container, although I'd changed the connection properties. It's easier once you've got error conditions to work with. I also had to recreate the variable mapping.
No wonder people just recreate the whole package.
Now fixed and working!