I'm new to Jitterbit and I'm working in Jitterbit Studio 5.6.0.1. In our deployed project we have four chained operations, and the first one is scheduled. What I want to do is put a condition on the first scheduled operation so that it does not run until all operations from the previous run have finished. I want to avoid running the operation twice. Any help?
Thanks
First off, I would suggest upgrading your Studio to 8.12 (the current version). Aside from that, if you add a script before the operation that you want to check, you can use something along these lines:
isInQueue = GetOperationQueue("<TAG>Operations/Your_Operation</TAG>");
// Guard against an empty queue before indexing into the result
isRunning = If(Length(isInQueue) > 0, isInQueue[0][1], Null());
If(!IsNull(isRunning) && isRunning == 1,
    "Do Something"
);
That's just a very basic idea of how you can handle this. In my situation, I have an operation containing just a script like this, which either directs to a script to run if the operation is not already running, or directs to a dead end if it needs to skip this scheduled instance.
Suppose I have two or more console instances of the same Python application running at the same time, because it was executed several times by hand or some other way.
Is there any method, from the Python code itself, to stop all the extra processes, close their console windows, and keep only one instance running?
The solution I would use is to have a lockfile created in the tmp directory.
The first instance starts, checks for the existence of the file, creates the file since it is not there, then runs; the following instances start, check for the existence of the file, and quit since it is there. The original instance removes the lockfile as its last instruction. NOTE: if the app runs into an error and never executes the instruction to remove the lockfile, you will need to remove it manually, or else the app will always see the file.
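A minimal sketch of that approach, assuming a hypothetical app name for the lockfile; opening with O_CREAT | O_EXCL makes the check and the create a single atomic step:

import atexit
import os
import sys
import tempfile

# Hypothetical lockfile name; use something unique to your app.
LOCKFILE = os.path.join(tempfile.gettempdir(), "myapp.lock")

def acquire_lock():
    try:
        # O_CREAT | O_EXCL makes check-and-create atomic, so two
        # instances racing each other cannot both succeed.
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
    except FileExistsError:
        print("Another instance is already running; exiting.")
        sys.exit(0)
    # Remove the lockfile on normal interpreter exit (a crash still
    # leaves it behind, as noted above).
    atexit.register(lambda: os.path.exists(LOCKFILE) and os.remove(LOCKFILE))

if __name__ == "__main__":
    acquire_lock()
    # ... the rest of the application runs here ...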
I've seen on other threads that some suggest using the ps command and looking for your app's name, which would work; however, if your app will ever run on Windows, you would need to use tasklist instead.
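A rough cross-platform sketch of that idea (the process name "myapp" is a placeholder, and substring-matching ps/tasklist output is inherently fuzzy):

import platform
import subprocess

def count_instances(name):
    # Use tasklist on Windows, ps everywhere else.
    if platform.system() == "Windows":
        out = subprocess.run(["tasklist"], capture_output=True, text=True).stdout
    else:
        out = subprocess.run(["ps", "-A", "-o", "comm"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if name in line)

if count_instances("myapp") > 1:  # "myapp" is a placeholder name
    raise SystemExit("Another instance is already running.")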
My program checks whether there is a new version of itself. If there is, it exits and starts an updater that replaces it and then restarts it.
My problem is that I haven't found any info on how to make a process start right after closing the actual program.
Any suggestions?
Thanks in advance
I intended to add a comment, but I'm too low on points here. The updater itself should probably contain a check to determine whether an instance of your application is running, and it should perform that check in a timeout loop after it starts up. That way you can launch the updater first and then close your application; once the updater determines your application is no longer running, it compares versions and performs the intended update operation.
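As a rough sketch of that flow (the file names, the PID hand-off and the third-party psutil dependency are my assumptions, not part of the original suggestion):

# updater.py -- wait for the main app to exit, swap in the new binary, restart it.
import shutil
import subprocess
import sys
import time

import psutil  # third-party: pip install psutil

APP_PATH = "app.exe"         # hypothetical paths; adjust to your layout
NEW_VERSION = "app_new.exe"
TIMEOUT = 30                 # seconds to wait for the app to close

if __name__ == "__main__":
    # The app passes its own PID when it launches the updater, e.g.
    # subprocess.Popen(["python", "updater.py", str(os.getpid())])
    parent_pid = int(sys.argv[1])
    deadline = time.time() + TIMEOUT
    while psutil.pid_exists(parent_pid):
        if time.time() > deadline:
            sys.exit("App did not close in time; aborting update.")
        time.sleep(0.5)
    shutil.move(NEW_VERSION, APP_PATH)   # replace the old version
    subprocess.Popen([APP_PATH])         # restart the application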
A possible solution would also be to create a task via Task Scheduler or a cron job that starts an out-of-process application, like cmd.exe, which brings me to my original comment-question: which operating system(s) and platform(s) is your program intended for?
In an SSIS package I have multiple scripts running within a job. At any given time I want to read how many scripts have been executed, e.g. 5/10 (50%) have completed. How can I achieve that?
Currently there is no such functionality provided by SSIS to track the progress of package execution.
It seems you need to write your own custom utility/application to implement it, or use a third-party one.
There are a few ways to do this:
1. Use the /Reporting (or /Rep) switch of DTEXEC at the command line. For example:
DTEXEC /F ssisexample.dtsx /Rep P > progress.txt
2. Implement package logging, or customize it (see the sketch after this list).
3. Implement an event handler on the required executable. You can also use the OnPipelineRowsSent log of the Data Flow Task.
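For option 2, a minimal sketch of polling the log from outside the package, assuming the SQL Server log provider is enabled with the OnPostExecute event; the connection string, the task-naming convention and the total count are all assumptions:

import pyodbc  # third-party: pip install pyodbc

TOTAL_SCRIPTS = 10  # hypothetical: the number of script tasks in the package

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;"
)
row = conn.execute(
    """
    SELECT COUNT(DISTINCT source)
    FROM dbo.sysssislog
    WHERE event = 'OnPostExecute'
      AND source LIKE 'Script Task%'   -- hypothetical task naming convention
      AND starttime >= CAST(GETDATE() AS date)
    """
).fetchone()

done = row[0]
print(f"{done}/{TOTAL_SCRIPTS} ({done / TOTAL_SCRIPTS:.0%}) completed")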
If you want to write your own application, the thread below provides a nice starting point.
How do you code the Package Execution Progress window in C#
In my experience, we use separate software to monitor the jobs that are running: http://en.wikipedia.org/wiki/CA_Workload_Automation_AE
You can also try to create your own application that runs in the background and checks the status of your jobs by reading the logs.
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest manner TeamCity allows.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC; the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set "Limit the number of simultaneously running builds" to force only one DeploySite to happen at a time, what I need is one at a time and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:-
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right; I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of "stuff a flag over here and have this bit check it" is exactly the kind of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: if I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should first go look on the JetBrains DevNet and the YouTrack tracker (remember to use the magic word "clobber" in your search).
Then you install groovy-plug and use the StartBuildPrecondition facility it provides:
To use the feature, add a system.locks.readLock.<name> or system.locks.writeLock.<name> property to the build configuration.
A build with a writeLock will only start when there are no running builds holding a read or write lock of the same name.
A build with a readLock will only start when there are no running builds holding a write lock of the same name.
Use these locks to express the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
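For example, something along these lines (MYLOCKNAME and the exact parameter placement are my guesses, per the open questions below):
On DeploySite: system.locks.writeLock.MYLOCKNAME
On AcceptanceSuiteA / AcceptanceSuiteB: system.locks.readLock.MYLOCKNAME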
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, and what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give read access to builds triggered by the completion of a writeLock build? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and the child dependency at the same time?
We have a particular file, say X.zip, that is only modified by one or two people. Hence we don't want the build to trigger on every check-in, as the other files are mostly untouched.
I need to check a condition prior to building: whether the checked-in item is X.zip or not. If it is, trigger a build; otherwise don't. We use only CI builds.
Any idea on how to trigger the build only when this particular file is checked in? Any other approaches would be greatly appreciated, as I am a newbie in TFS...
Tara.
I don't know of any OOTB feature which can do this; what you would need to do is write your own custom MSBuild task which is executed prior to the build running (a pre-build action).
The task would then need to use the TFS API to check the current check-in for the file you want, and if it's not found, set the task to failed.
This isn't really ideal, as it'll indicate a build failure to Team Build, which, depending on whether you're using check-in policies, may be unhelpful. It'd also be harder to work out at a glance which builds failed because of the task and which failed because of a real problem.
You can change the build to occur less frequently rather than on every check-in, which will reduce the load on your build server.
Otherwise you may want to dig into CruiseControl.NET; it may have better support for conditional builds.
If you could move X.zip into its own folder, then you could set up a CI build with a workspace that only looks at the folder containing X.zip.
You would then need to add an explicit call to tf get to download the rest of the code, as Team Build only downloads what the workspace is looking at.
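For example, a recursive get of the rest of the source (the server path here is hypothetical):
tf get $/YourTeamProject/Main /recursive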
But this might be simpler than the custom task approach?