Can you have an active launch plan that doesn't have a schedule? - flyte

And if so, what is the meaning of that? Nothing happens, right?

Yep, an active launch plan can have no schedule attribute. Any launch plan, regardless of status, can be used to trigger a workflow execution.
In Flyte, the launch plan status [active|inactive] is only used to determine whether a schedule associated with the launch plan runs.
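For illustration, a minimal flytekit sketch (the workflow and names are hypothetical): a launch plan created with no schedule is still perfectly usable for manually triggered executions, and activating it is effectively a no-op because there is no schedule to switch on.

```python
from flytekit import LaunchPlan, task, workflow

@task
def say_hello() -> str:
    return "hello"

@workflow
def my_wf() -> str:
    return say_hello()

# Deliberately no schedule= argument: this launch plan has no schedule
# attribute, but it can still be used to launch executions at any time.
no_schedule_lp = LaunchPlan.get_or_create(workflow=my_wf, name="my_wf_manual")
```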

Related

Make one Bamboo plan cause other plan to be scheduled with delay

I have a number of multi-part tests that require me to run a setup phase first, then wait about 30 minutes before I can run the verification phase. I'm limited to a single agent, so including the delay in the test itself is very undesirable as it ties up the CI/CD system.
What I would like to do is have the setup plan, when complete, cause the corresponding verification plan to be scheduled 30 minutes later.
I know you can have one plan trigger another plan, but that's not quite what I want, as that will happen immediately after the first plan finishes, which won't work.
What's the best way to do this with Bamboo or is it even possible?
Not possible, unless you delay the job programmatically from a script task, which, as you mention, is not ideal for your use case.
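To make that workaround concrete, here is a hedged sketch of such a script task in Python: it sleeps through the delay and then queues the verification plan via Bamboo's REST API. The base URL, the PROJ-VERIFY plan key, and the credentials are all placeholders.

```python
import time
import requests

# The drawback mentioned above: the agent is tied up for the whole delay.
time.sleep(30 * 60)

# Queue the verification plan through Bamboo's REST API.
resp = requests.post(
    "https://bamboo.example.com/rest/api/latest/queue/PROJ-VERIFY",
    auth=("bamboo_user", "api_token"),  # placeholder credentials
)
resp.raise_for_status()
```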

Is there a way to get the parent execution given an execution id in Camunda?

I'm wondering if there is a way to get the parent execution of an execution in Camunda. What I'm trying to achieve is basically the following:
This is a simple process involving a parallel gateway. Each of the flows is composed of a service task (external) and a user task.
In each "Pre: Task X" service task, I want to set some variables that I will use afterward in their respective user tasks. I want each execution flow of the parallel gateway to have its own variables, not accessible from the other flows. How can I achieve this?
I was doing some tests and I found the following:
When the process is instantiated, I instantly get 5 execution instances.
What I understand is that one belongs to the process, the next two belong to each flow of the parallel gateway, and the last two belong to each of the service tasks.
If I call "complete" for one of the service tasks on the REST API with localVariables, they instantly disappear and are no longer available, because they are tied to the execution associated with the external task, which is terminated once the task completes.
Is there a way to get the parent execution of the task, which in this case would be the parallel execution flow, so I can set localVariables at that level?
Thanks in advance for the valuable help
Regards
First of all, 5 executions doesn't mean all of them are active. In your case there should be only 2 active executions when you start a new instance of the process. You can set your variables in the respective executions as return values of the respective service tasks.
You can also set variables on the process instance, but bear in mind that you have 2 executions and 1 process instance: you cannot set the same variable separately for multiple executions at that level.
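As a hedged illustration of completing an external task over the Camunda 7 REST API (the base URL, task id, and worker id are placeholders): values passed under variables, unlike localVariables, are set on the surviving execution hierarchy rather than dying with the external task's own execution.

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # placeholder base URL
task_id = "anExternalTaskId"                   # placeholder task id

resp = requests.post(
    f"{CAMUNDA}/external-task/{task_id}/complete",
    json={
        "workerId": "worker-1",
        # "variables" (not "localVariables") survive the completion of
        # the external task's execution:
        "variables": {
            "preResult": {"value": "computed-in-pre-task", "type": "String"}
        },
    },
)
resp.raise_for_status()
```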

Autosys JIL ignoring success conditions

I hope someone can point me in the right direction or shed some light on the issue I'm having. We have Autosys 11.3.5 running in Windows environment.
I have several jobs setup to launch on a remote NAS server.
I needed JOB_1 in particular to only run if another completed successfully.
Seems straightforward enough. In the UI there's a section to specify a Condition such as s(job_name), which I have done, and I'm assuming that my initial job should run ONLY if the job named job_name succeeds.
No matter what I do, when I make the second job fail on purpose (whether by manually setting its status to FAILURE or by changing some of its parameters so that it fails naturally at run time), the other job that I run afterwards seems to ignore the condition altogether and completes successfully each time.
I've triple checked the job names (in fact I copy and pasted it from the JIL definition of the job so there are no typos), but it's still being ignored.
Any help in figuring out how to make one job only run if another did not fail (and NOT to run if it DID fail) would be appreciated.
If both jobs are scheduled and become active together, then this should not happen.
My guess is that you are force-starting the other job while the first one is in FAILURE. If that's the case, conditions will not work.
You need to let both jobs start as per schedule, or at least let the dependent job start as per schedule while the first one is failed. In that case the dependent job will stay in the AC (activated) state until the first one reaches SU (success).
Let me know if this is not the case; I will have to propose another solution then.
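For reference, a hypothetical JIL definition of the dependent job (the job, command, and machine names are made up); the condition line is all that ties it to the predecessor:

```
/* JOB_1 runs only when JOB_0 completes with SUCCESS */
insert_job: JOB_1
job_type: CMD
command: run_verification.bat
machine: nas_server
condition: s(JOB_0)
```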

In Jenkins build flow plugin, terminate all parallel jobs if one of them failed

We are using the Jenkins build flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) to run our test cases by dividing them into small sub test cases and testing them in parallel.
The current problem is that even if one of the jobs fails, the other parallel jobs and the hosting flow job will continue running, which is a big waste of resources.
I checked the doc there is no place to control the jobs inside the parallel {}. Any ideas how to deal with that?
Looking at the code, I don't see a way to achieve that. I would ask the user mailing list for help.
I am thinking of using Guard / Rescue embedded in Parallel to do this.
Adding failFast: true within the parallel block would cause the build to fail as soon as one of the parallel nodes fails.
You can view this as an example.
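For concreteness, here is how that looks with the Jenkins Pipeline parallel step (Pipeline being the build flow plugin's successor; the shard job names are placeholders):

```groovy
// Scripted Pipeline: run the shards in parallel and abort the
// remaining branches as soon as one of them fails.
parallel(
    shard1: { build job: 'tests-shard-1' },
    shard2: { build job: 'tests-shard-2' },
    failFast: true
)
```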

How to delay user login until RunOnce is completed? [Win XP]

Currently I have an application that runs at startup when a user logs in to the account (administrative), as well as something under HKLM...\Run which is also executed - but I need to run something once, and BEFORE both of these things are executed.
My solution was to use HKLM...\RunOnce which is executed before the HKLM...\Run but the task can take 30-45 seconds which gives enough time for the user Startup to be executed and launch the application prematurely.
I thought of maybe including a SLEEP, but RunOnce doesn't block the user account load... Then I considered group policies, but they do not have a RunOnce equivalent that I can use... Also I am not sure Group Policy runs at the right time (I've never used it before).
Is there any way to make my RunOnce entry delay the startup of my application, or a better place where I can execute it before both HKLM...\Run and the user Startup? Or any recommended alternatives?
Any ideas or help would be much appreciated...
Thanks,
Do you actually need to delay user login, or do you just need to delay the secondary applications? Assuming the latter, you can use a Mutex to synchronize the separate processes: the first can declare and acquire a named mutex, and the later processes can block waiting on the mutex.
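A minimal sketch of that hand-off in Python via ctypes, assuming the RunOnce task really does start before the Run/Startup applications (the mutex name and the 40-second sleep are made up):

```python
import ctypes
import time

kernel32 = ctypes.windll.kernel32
INFINITE = 0xFFFFFFFF
MUTEX_NAME = "Global\\MyAppRunOnceGate"  # made-up name; both sides must agree on it

def run_once_side():
    # RunOnce starts first, so it creates the mutex with initial ownership.
    gate = kernel32.CreateMutexW(None, True, MUTEX_NAME)
    time.sleep(40)  # stand-in for the 30-45 second setup task
    kernel32.ReleaseMutex(gate)

def startup_app_side():
    # Open (or create) the same mutex without ownership, then block until
    # the RunOnce task releases it.
    gate = kernel32.CreateMutexW(None, False, MUTEX_NAME)
    kernel32.WaitForSingleObject(gate, INFINITE)
    kernel32.ReleaseMutex(gate)
    # ...proceed with the application's normal startup work...
```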