In the project I'm working on right now, I realized that it makes more sense to move some of the jobs to a sub-group (which I just discovered via pr.create_group). I see that pr.move_to or pr.copy_to moves all jobs to a new project, but is it also possible to move some jobs to one sub-group and others to another one?
The job objects also have a move_to() function, so I would suggest iterating over the jobs in a given project using iterjobs() and then moving them individually. This even works in inspect mode: https://github.com/pyiron/pyiron_base/blob/master/pyiron_base/job/core.py#L757
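Roughly, that loop could look like the sketch below. It assumes the method names mentioned above (create_group(), iterjobs() and the job-level move_to()) on your pyiron version, and uses a made-up name-based filter to decide where each job goes:

    # Sketch only: split the jobs of project "pr" into two sub-groups.
    sub_a = pr.create_group("group_a")
    sub_b = pr.create_group("group_b")

    # iterjobs() as mentioned above; depending on the pyiron version this may be spelled iter_jobs()
    for job in pr.iterjobs():
        if job.job_name.startswith("relax"):   # hypothetical selection criterion
            job.move_to(sub_a)
        else:
            job.move_to(sub_b)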
Could anyone provide any best practices about multiple migration runs? Moving from TFS 2017.3.1 to Azure DevOps Services. Dealing with a fair number of work items (32k). Of course, TSTU throttling is making the run take a long time, so I was thinking of pushing what I could up front, then doing a second pass to pick up the new work items since the first big push. So... enabling UpdateSourceReflectedId would set the ReflectedWorkItemId on the source items that have already been migrated. But what happens if someone changes a work item that has already been pushed? Would the history delta get picked up? How is that typically resolved? I was thinking maybe a QueryBit like: ReflectedWorkItemId <> '' and ChangedDate > (last run time), but is that necessary? Those already exist on the target... would ReplayRevisions pick up only the missing changes? TIA...
I usually do the following for large runs:
Open work items edited in last 90 days
Closed work items edited in last 90 days
Then open the window out to more days, working back in chunks
The important thing to note is that links are created only when both ends of the link exist.
After a long run you can then rerun "edited in last month" to bring any changes across.
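As a rough sketch, the first chunk above can be expressed as a WIQL-style query bit along these lines (the field names are standard, but 'Closed' is a stand-in for whatever the closed states are in your process template, and where exactly the clause goes depends on your migration tool's configuration):

    [System.State] <> 'Closed' AND [System.ChangedDate] >= @Today - 90

The closed-items chunk is the same with [System.State] = 'Closed', and widening the window is then just a matter of increasing the number of days.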
Changes to avoid in the Source:
changing work item type
moving work items between team projects
We handle these, but loosely.
IntelliJ has the "task" feature where you can track the context for a Jira task, for example.
Now, I changed several files within this task and committed a few times. After a week, I returned to this project and I'd like to see all changed files for this task, from the start of the task until where I left off. I cannot find an option to do this; is it somehow possible?
By default, creating a new task offers to create a separate branch for it. If you did so, all your commits are in that branch, and you can diff that branch against the branch it was created from (or browse its commits in the VCS log) to see every file the task touched.
If you used another branch, I don't think there is an easy way to do what you want. You could try checking the saved contexts under Tools - Tasks and Context - Load Context.
I want to create an on-demand process of some kind in Dynamics CRM 2013 that will run on multiple records of the same type. The process will create an equal number of records of another type, and all will relate to the same parent record. I can imagine how a workflow would be used to create the new child records but I am not sure how I could create the parent record and associate it with the child records.
If you're running on multiple records, then I presume you are starting from a grid view of some sort. If that's the case, then the solution is easy: just create a custom ribbon button that accepts the selected records as a parameter and runs custom JavaScript. That will accomplish what you need in a nice, elegant way.
Because it's running JavaScript, you have full control to do everything you need. One of the features of ribbon buttons is that they can receive the selected records as an array parameter.
But if you don't want to do all the work in JavaScript, you can have the script pass the parameters to a custom Workflow or Action.
As has already been mentioned, a workflow won't be able to do this alone, because it can only run on a single record and cannot accept multiple records as an input parameter.
Jason, I think the point here is to automate the process. Lee, you are correct in your assessment that creating the work order with a workflow step is easy, while creating the child work order items is either difficult or impossible. Even if you managed to hack this together with several workflows triggered by different events during the process, the end result would be a UX/maintenance nightmare.
The simplest and best solution is to have a piece of plugin logic that you trigger with your workflow. This plugin code would create a new work order and the associated work order items based on the context of the service you run the workflow against. If you would like this action to be triggered by a database operation instead of manually, that would be simple to do as well.
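As a rough sketch of that idea only: the entity and field names below (new_workorder, new_workorderitem, new_masterworkorder, new_service) are placeholders, and this shows just the mechanics of creating and linking the records from a custom workflow activity, not the full multi-record orchestration.

    using System;
    using System.Activities;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Workflow;

    public class CreateWorkOrderWithItem : CodeActivity
    {
        protected override void Execute(CodeActivityContext executionContext)
        {
            // Standard CRM workflow-activity plumbing.
            IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
            IOrganizationServiceFactory factory = executionContext.GetExtension<IOrganizationServiceFactory>();
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            // Create the parent record (the overall work order); names are placeholders.
            Entity workOrder = new Entity("new_workorder");
            workOrder["new_name"] = "Generated work order";
            Guid workOrderId = service.Create(workOrder);

            // Create a child work order item linked to the parent and to the
            // service record this workflow instance is running against.
            Entity item = new Entity("new_workorderitem");
            item["new_masterworkorder"] = new EntityReference("new_workorder", workOrderId);
            item["new_service"] = new EntityReference(context.PrimaryEntityName, context.PrimaryEntityId);
            service.Create(item);
        }
    }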
You aren't going to be able to do this via a CRM dialog because it can only run against one record. You can accomplish this fairly easily by leveraging existing CRM functionality:
If it doesn't already exist, create a field on your service entity (the Work Order Item) called new_MasterWorkOrder (or something similar) which is a lookup to a Master Work Order entity.
Create your master record - this would be your Overall Work Order.
From your Work Order Item record entry listing, select all the items you want to add to the Master Work Order record created in the previous step. Alternately, you could use an Advanced Find to locate the target records.
Click the Edit button to initiate the CRM bulk/multi-record edit form.
In the new_MasterWorkOrder field, select the Overall Work Order previously created.
Save.
Once the process completes, all of the Work Order Items you selected will be linked to your Overall Work Order.
It sounds like you might need a step before this to create a Work Order Item from each selected Service entity. You should be able to accomplish this easily with a workflow which takes a Service entity as input and builds a Work Order Item from it. Once you have these built, you can link them to an Overall Work Order using the process above.
I am creating a clone of the default Merge window, to add a feature.
I already have the merge candidates in a grid, obtained from the command below:
MergeCandidate[] candidates = tfs.GetMergeCandidates(edtSelectedSource.Text, cbxTargetBranchs.Text);
Now the user has selected one or more candidates and I need to merge them.
But the TFS API VersionControl.Merge requires source path and target path.
My first question: do I need to iterate over each candidate and merge each file of its changesets, one by one?
Second, how can I obtain the target path from a changeset?
First off, I've done a fair amount of programming with the TFS API, but merging is something that I would never blindly trust to automation. Merge conflicts are best dealt with by human beings. Yes, it's painful and can be automated in many cases, but in many others - things can go terribly wrong. I would think twice and then twice again before doing this on Production branches.
Here are some tips that should help:
You need to create a temp Workspace. The Workspace is the sandbox where everything happens. The Workspace can have files, and thus file locations, associated with it. Workspace items have rich metadata.
Have a look at the Workspace and WorkspaceInfo classes.
Then have a look at the workspace client:
http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.item.aspx
As long as the changesets are continuous, you can do it in a single merge call. If they are not continuous, you need to submit n merges for each continuous block. Let's say they select changesets 10, 15 and 20 and these are continuous (i.e. there are no additional candidates between that range) then you would submit a merge with a versionFrom of 10 and versionTo of 20.
As far as paths go, you want to use the ones that you passed into QueryMergeCandidates and you'll want to specify the full recursion type as well.
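Putting those pieces together, a rough sketch could look like the following. It is not production code: the workspace name, local folder and changeset numbers are placeholders, conflict resolution and check-in are left out, and tfs is the same VersionControlServer you already use for GetMergeCandidates.

    using Microsoft.TeamFoundation.VersionControl.Client;

    static class MergeHelper
    {
        // Sketch: merge one continuous block of candidate changesets inside a temp workspace.
        public static void MergeContinuousBlock(VersionControlServer tfs, string sourcePath,
                                                string targetPath, int firstChangeset, int lastChangeset)
        {
            Workspace workspace = tfs.CreateWorkspace("TempMergeWorkspace", tfs.AuthorizedUser);
            try
            {
                workspace.Map(targetPath, @"C:\Temp\MergeSandbox");  // placeholder local folder
                workspace.Get();                                     // pull the target branch into the sandbox

                // One merge call covers the whole continuous block, e.g. 10..20.
                GetStatus status = workspace.Merge(
                    sourcePath,                                // same source path passed to GetMergeCandidates
                    targetPath,                                // same target path
                    new ChangesetVersionSpec(firstChangeset),  // versionFrom
                    new ChangesetVersionSpec(lastChangeset),   // versionTo
                    LockLevel.None,
                    RecursionType.Full,                        // full recursion, as noted above
                    MergeOptions.None);

                // Inspect status, resolve any conflicts (ideally by a human), then check in
                // the pending merge changes via workspace.GetPendingChanges() / CheckIn().
            }
            finally
            {
                workspace.Delete();
            }
        }
    }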
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible way that TeamCity enables.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
The website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set "Limit the number of simultaneously running builds" to force only one DeploySite to run at a time, what I need is for it to run one at a time and not at all while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:-
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of "set a flag over here and have this bit check it" approach is exactly the mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet forums and the YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use the locks therein to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one manage giving read access to builds triggered by the completion of a writeLock build - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?