I'm wondering if I can get some help with Bamboo; I am very new to this system.
How can I trigger a child plan after both of its parent plans have completed?
Thank you
I know it's possible to trigger multiple child builds after the completion of a parent build; however, I'm not sure whether multiple-parent dependencies are explicitly supported.
One possibility (depending on how strict your build architecture is) would be to set up one parent build as the child of another, so at the completion of the first parent build, the second would commence. Then, following its completion, the child build begins.
Parent A's child is Parent B, then
Parent B's child is your Child plan.
See https://confluence.atlassian.com/bamboo/using-bamboo/working-with-builds/setting-up-plan-build-dependencies for a limited overview of dependencies.
I'm working on a project based on the OptaPlanner taskassigning example. In the example, StartTimeUpdatingVariableListener's updateStartTime() changes the time of the source task. Would it be OK, right in that function, to change a shadow variable of the previous task instead of the source task? In my scenario, each task has a waiting time (a shadow variable), and only when a new task is added can the previous task's waiting time be calculated. A different source task gives its previous task a different waiting time, and eventually the sum of all employees' waiting times is minimized in a rule. Looking at the example, in the listener only the source task's time is updated, and that update is surrounded by beforeVariableChanged and afterVariableChanged. Will there be any problem updating another task's shadow variable?
You can't create cycles that would cause infinite loops.
Across different shadow variable declarations
A custom shadow variable (VariableListener) can trigger another custom shadow variable (VariableListener); for example, in the image below C triggers E. So the VariableListener of C, which changes variable C, triggers (delayed) events on the VariableListener of E.
But the dependency tree cannot contain a cycle, OptaPlanner validates this through the sources attribute.
Notice how all the variable listener methods of C are called before the first one of E is called. They are delayed. OptaPlanner gives you this guarantee behind the scenes.
For a single shadow variable declaration
The VariableListener for C is triggered when A or B changes. So when it changes a C variable, no new trigger events for that VariableListener are created.
When a single A variable changes, one event is triggered on the VariableListener for C, which can change multiple C variables. The loop that changes those multiple C variables must not be an infinite loop.
In practice, with VRP and task assignment scheduling, I've found that the only way to guarantee the absence of an infinite loop is to only make changes forward in time. So in a chained model, follow the next variables, but not the previous ones.
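To make that concrete, here is a rough sketch (not the example's actual code) of a listener that updates the previous task's waiting time, assuming an OptaPlanner 7.x-style VariableListener API. The Task accessors getPreviousTask(), getStartTime(), getEndTime() and setWaitingTime() are hypothetical stand-ins for your own domain model, and imports are omitted because package names differ between OptaPlanner versions:

```java
// Sketch only: Task and its accessors are assumed, not taken from the taskassigning example.
public class WaitingTimeUpdatingVariableListener implements VariableListener<Task> {

    @Override
    public void afterEntityAdded(ScoreDirector scoreDirector, Task task) {
        updatePreviousTaskWaitingTime(scoreDirector, task);
    }

    @Override
    public void afterVariableChanged(ScoreDirector scoreDirector, Task task) {
        updatePreviousTaskWaitingTime(scoreDirector, task);
    }

    @Override
    public void beforeEntityAdded(ScoreDirector scoreDirector, Task task) { /* no-op */ }

    @Override
    public void beforeVariableChanged(ScoreDirector scoreDirector, Task task) { /* no-op */ }

    @Override
    public void beforeEntityRemoved(ScoreDirector scoreDirector, Task task) { /* no-op */ }

    @Override
    public void afterEntityRemoved(ScoreDirector scoreDirector, Task task) { /* no-op */ }

    private void updatePreviousTaskWaitingTime(ScoreDirector scoreDirector, Task sourceTask) {
        Task previousTask = sourceTask.getPreviousTask();
        if (previousTask == null) {
            return;
        }
        Integer waitingTime = (sourceTask.getStartTime() == null || previousTask.getEndTime() == null)
                ? null
                : sourceTask.getStartTime() - previousTask.getEndTime();
        // Even though the changed entity is not the source of the event, the change to its
        // shadow variable must still be wrapped in before/afterVariableChanged, exactly like
        // the source task's startTime is wrapped in the original example.
        scoreDirector.beforeVariableChanged(previousTask, "waitingTime");
        previousTask.setWaitingTime(waitingTime);
        scoreDirector.afterVariableChanged(previousTask, "waitingTime");
    }
}
```

As explained above, changing the same shadow variable on another entity does not create new trigger events for this listener, but whatever loop you write inside the update method must itself terminate.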
I was reading Linux Kernel Development and trying to understand process address space semantics in the case of fork(). While I'm reading in the context of kernel v2.6, and in newer versions either the child or the parent may run first, I am confused by the following:
Back in do_fork(), if copy_process() returns successfully, the new child is woken up
and run. Deliberately, the kernel runs the child process first. In the common case of the
child simply calling exec() immediately, this eliminates any copy-on-write overhead
Based on my understanding of COW, if an exec() is used, COW will always happen, whether the child or the parent process runs first. Can someone explain how COW is eliminated in the case of the child running first? Does 'overhead' refer to an extra overhead that comes with COW instead of 'always copy' semantics?
fork() creates a copy of the parent's memory address space where all memory pages are initially shared between the parent and the child. All pages are marked as read-only, and on the first write to such a page, the page is copied so that parent and child each have their own. (This is what COW is about.)
exec() throws away the entire current address space and creates a new one for the new program.
If the child executes first and calls exec(), then none of the shared pages needs to be unshared.
If the parent executes first and modifies some data, then these pages are unshared. If the child then starts executing and calls exec(), the copied pages will be thrown away, i.e., the unsharing was not actually necessary.
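A minimal sketch of the "child calls exec() immediately" case on a POSIX system (the program being executed, /bin/ls, is arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();  /* after fork(), all pages are shared and marked read-only (COW) */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: exec() replaces the address space right away, so the shared
         * COW pages are simply dropped and never need to be copied. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");  /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    }
    /* Parent: if it were scheduled first and wrote to its data pages before the
     * child's exec(), those pages would be copied unnecessarily. */
    waitpid(pid, NULL, 0);
    return 0;
}
```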
My understanding is that when a parent forks, the child becomes an exact copy of the parent. In other words, they have the same process control block (PCB). Is this completely correct? I know that the PID will obviously be different, but is that it?
Each process has its own process control block. When the parent forks, the child's process control block will normally start as a duplicate of the parent's; however, it is changed immediately (one of the first changes is the PID), and as the child does its own thing, its process control block becomes less and less of a duplicate of the parent's.
Here are some slides that describe an abstract operating system's process control and the process control block.
The actual specifics will vary depending on the particular operating system.
This must be a situation that other people have come across, so I thought I'd ask the question. Have people implemented good generic solutions to the problem of representing temporal relationships within NHibernate? This problem exists within a database over which I have no control, so please don't tell me the DB model is incorrect. I can't change it.
We have a simple Parent:Child relationship, where the child's Valid Time must fall within the parent's Valid Time. Put simply, Parent.ValidFrom <= Child.ValidFrom && Parent.ValidTo >= Child.ValidTo. This rule is enforced in the database, meaning I can't issue an UPDATE statement that would cause the records to violate it. That is non-negotiable.
Importantly, it affects the order in which I write changes to the DB.
Expanding the child = 2 UPDATEs.
i. Expand Parent.
ii. Expand Child.
Contracting the parent = 2 UPDATEs.
i. Contract the Child.
ii. Contract the Parent.
Moving parent and child to a date in the future = 3 UPDATEs.
i. Change Parent.ValidTo.
ii. Move Child.
iii. Move Parent.ValidFrom.
Moving parent and child to a date in the past = 3 UPDATEs.
i. Change Parent.ValidFrom.
ii. Move Child.
iii. Move Parent.ValidTo.
So, we can see that the order in which the updates occur is very important. We can't just rely on NHibernate's default update behaviour. Also, in some cases we need to do two UPDATEs on a single entity, where NHibernate would normally do one.
So, I want to get to the point where I can express a generic temporal Parent:Child in my domain model (probably using [attribute] decorated classes), and have some code do the hard work for me.
Has anyone run into this problem, and could anyone give any advice?
Please, again, bear in mind that I have no control over my DB schema and I'd like to write something generic that can be applied to my whole model. The only caveat is that I only care about committing objects that I have amended in memory. So I'm not expecting to write some code to decide what the correct ValidFrom/ValidTo dates are.
Since you have no control over the order in which NH issues update statements, the best course of action is probably to use IStatelessSession to do the updates "manually".
You essentially give up change tracking; you'll need to tell NH which object to update.
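For example, here is a rough sketch of the "contracting the parent" case with IStatelessSession; the Parent/Child entities, their ValidFrom/ValidTo properties, the new dates and the sessionFactory variable are all illustrative assumptions, not taken from your model, and the usual NHibernate namespaces are assumed to be imported:

```csharp
// Sketch only. IStatelessSession does no change tracking, so each Update()
// is issued explicitly, in exactly the order written here.
using (IStatelessSession session = sessionFactory.OpenStatelessSession())
using (ITransaction tx = session.BeginTransaction())
{
    // i. Contract the child first, so it never falls outside the parent's Valid Time.
    child.ValidFrom = newValidFrom;
    child.ValidTo = newValidTo;
    session.Update(child);

    // ii. Only then contract the parent.
    parent.ValidFrom = newValidFrom;
    parent.ValidTo = newValidTo;
    session.Update(parent);

    tx.Commit();
}
```

A generic version could inspect your [attribute]-decorated parent/child classes and issue the Update() calls in whichever order the case at hand requires.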
Sorry if this is a dupe; I couldn't find it, but I didn't really know what to search for. Anyway...
I have three classes: Parent, Child and Other.
Parent has many Child objects, where each Child has a Parent_Id column.
Other holds a reference to a Child through a Child_Id column.
When I delete a Parent, I also want to delete all the associated Child objects. If these Child objects are referenced by any Other classes, I want their (the Other objects) Child_Id references to be nullified.
What cascade rules do I need on the two relationships?
Also, will NHibernate update entities in-memory as well as in the database?
I.e. if I have a bunch of Parent, Child and Other in memory (i.e. loaded from db, not transient) and tell NH to delete a Parent, what will happen? I assume the Parent and Child objects will become transient? What will happen to the Child property of any Other objects?
Edit: when using All-Delete-Orphan, what classes an object as an orphan? In the example above, is a Child an orphan if its Parent is deleted? Does the reference from Other matter when considering whether an entity is orphaned?
Thanks
NH does not update any of your entities in memory (except for Ids and Versions). NH is not responsible for managing the relations of your entities. It just persists to the database what you did in memory.
From this point of view it should become easier to understand.
cascade="delete" means that when the parent is deleted, the child gets deleted as well.
cascade="delete-orphan" means, that additionally, the child is even deleted if no parent references it anymore. This, of course, only works if the child is in the session.
The deleted instance gets transient in memory. References to the transient instance (from Other) will cause an exception. AFAIK, you need to remove reference to deleted instances by yourself. You probably can make it implicit by some tricks, but I doubt that this will be clean. It's business logic.
For parent-child relations, cascade="all-delete-orphan" is appropriate.
For regular references I prefer cascade="none".
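As a sketch of where these settings live in the hbm.xml mappings (class, property and column names are illustrative, not taken from your model):

```xml
<!-- Sketch only: adjust names and columns to your own model. -->
<class name="Parent">
  <id name="Id"><generator class="native" /></id>
  <!-- Deleting a Parent cascades to its Child collection; a Child removed
       from the collection is also deleted (orphan). -->
  <set name="Children" inverse="true" cascade="all-delete-orphan">
    <key column="Parent_Id" />
    <one-to-many class="Child" />
  </set>
</class>

<class name="Other">
  <id name="Id"><generator class="native" /></id>
  <!-- Plain reference: no cascade. Nulling Other.Child when the Child is
       deleted is up to your own code, as described above. -->
  <many-to-one name="Child" column="Child_Id" cascade="none" />
</class>
```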
There is a great explanation by Ayende Rahien