RTC: Resolving stream gaps?

I have a file in RTC (call it foo.c) that I modified and checked into a changeset (CS1) along with some other changes. I then modified that file again and checked it into a different changeset (CS2) along with other changes.
I've now run into a case where I want to deliver CS2 to a stream, but RTC gives me an error saying that delivering it would create a gap in the stream (because of the change in CS1). I don't want to deliver all of CS1 yet, because it contains some changes that shouldn't be in a build yet. The original change to foo.c in CS1 was a minor removal of a #include, and doesn't affect anything else.
Is there any way to resolve that gap?
I see some stuff in the RTC documentation about applying patches, but I don't understand how that applies here.
Is there a way to split a changeset into multiple changesets, which would allow me to deliver just the one file?

Problem: CS1 changes foo.c, CS2 further changes foo.c. You only want to deliver CS2, but RTC tells you that would introduce gaps.
Solution: Create a patch from CS2 and suspend both CS1 and CS2. Then apply the patch, merge it into your workspace, and check in the changes. This creates yet another change set, CS3, identical in content to CS2 but with no dependency on CS1. You can now deliver CS3.
After delivering CS3, you can discard CS2 and resume CS1, which will require you to merge with CS3.
You should then be in a state where CS1 builds on CS3, and you can choose whether or not to deliver CS1 in the future.
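To see why RTC complains in the first place, picture each file's history as a chain of versions: CS2's change to foo.c was built on top of CS1's version, so a stream that never received CS1 would be missing a link in that chain. The toy model below (plain Java, purely illustrative, not RTC's actual implementation) makes the gap condition concrete.

```java
import java.util.List;
import java.util.Map;

/**
 * Toy model (not RTC's implementation) of why delivering CS2 alone
 * creates a gap: CS2's version of foo.c builds on CS1's version,
 * which the target stream has never received.
 */
public class GapCheckSketch {

    /** A change records, per file, the predecessor version it was based on. */
    record Change(String file, int basedOnVersion, int newVersion) {}

    /**
     * A change set delivers cleanly only if, for every file it touches,
     * the stream currently holds exactly the version it was based on.
     */
    static boolean createsGap(List<Change> changeSet, Map<String, Integer> streamVersions) {
        for (Change c : changeSet) {
            int current = streamVersions.getOrDefault(c.file(), 0);
            if (current != c.basedOnVersion()) {
                return true; // predecessor version missing -> gap
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // The stream still holds foo.c at version 1 (its state before CS1).
        Map<String, Integer> stream = Map.of("foo.c", 1);
        // CS1 took foo.c from v1 to v2; CS2 took it from v2 to v3.
        List<Change> cs2 = List.of(new Change("foo.c", 2, 3));
        System.out.println(createsGap(cs2, stream)); // true: v2 (from CS1) is missing
    }
}
```

The patch workaround sidesteps this because CS3 is checked in against the stream's current version of foo.c, so its predecessor is one the stream already has.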

Update since the original 2012 answer below (and its "workaround" of delivering everything and reverting what you don't want):
See this thread:
In RTC 4.0.5 we delivered additional support when trying to accept change sets which have a gap (often encountered when trying to backport fixes).
In a very brief summary of the feature: when you accept change sets with a gap, you can now follow a gap workflow that accepts one change set at a time and, for change sets that contain gaps, creates a new change set (with aided traceability) that contains the equivalent changes.
This means users will not have to accept the change sets 'as a patch'.
Applying change sets as a patch has limitations compared to the new workflow.
This feature is summarized in the RTC 4.0.5 'New & Noteworthy' page.
Below are some videos which show this feature:
Accepting multiple change sets with gaps in the RTC 4.0.5 client for Eclipse IDE
Accepting a change set with a gap in the RTC 4.0.5 client for Eclipse IDE
That is the Locate Change Sets feature:
In RTC 5.0 we added a "fill the gap" feature where the change sets that fill the gap are shown to the user, allowing them to either accept all the change sets or to continue with the gap workflow that was available in RTC 4.0.5.
This feature is summarized in the RTC 5.0 'New & Noteworthy' page:
The classes involved in filling the gap (available in RTC 5.0) include:
client side: IWorkspaceConnection.findChangeSetsToAcceptToFillGap(...)
server side: IScmQueryService.findChangeSetsToAcceptToFillGap(...)
Both features are explained in detail in the "Improved Gap Handling for SCM" article.
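For anyone scripting against the SDK rather than using the IDE, here is a minimal sketch of the client-side call. Only the class and method names quoted above are confirmed; the parameter list, return type, and exception below are assumptions for illustration, so check the RTC 5.0 SDK javadoc for the real signature.

```java
import java.util.List;

import org.eclipse.core.runtime.IProgressMonitor;

import com.ibm.team.scm.client.IWorkspaceConnection;
import com.ibm.team.scm.common.IChangeSetHandle;

public class FillGapSketch {

    /**
     * Sketch only: asks the workspace connection which change sets would
     * have to be accepted in order to fill the gap for the given change sets.
     * ASSUMPTION: the argument list, return type, and thrown exception are
     * guesses; only the method name is confirmed by the sources above.
     */
    static List<IChangeSetHandle> changeSetsToFillGap(
            IWorkspaceConnection workspace,
            List<IChangeSetHandle> toAccept,
            IProgressMonitor monitor) throws Exception {
        return workspace.findChangeSetsToAcceptToFillGap(toAccept, monitor);
    }
}
```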
Original answer (2012)
Is there a way to split a changeset?
I don't think so, reading the changeset man page:
A file or folder in a component cannot be part of more than one active change set.
When a file or folder is included in an active change set, all changes to it become part of that change set whether or not the change set is current, and changes to that file or folder cannot be explicitly checked in to a new change set until the active change set that includes it is completed.
Having foo.c in both CS1 and CS2 means CS1 has been "completed" (frozen, in essence), so it cannot safely be split.
The patch solution means:
cancelling CS1
adding the additional change to foo.c to CS2
redoing CS1 changes
See "How do I remove a change set from a stream?"
Story 149483 is about enhancing that cumbersome workflow, and the gap detection itself is being enhanced (Enhancement 24822).
The OP timwoj concludes:
I ended up just delivering all of it, then reversing the one that I didn't want.

Related

Save history of incremental changes of the flow without cloning them in Mosaic Decisions

While configuring a particular data pipeline in Mosaic Decisions, I want to try out different operations by using the available process nodes. I would like to keep the first few configured nodes for future reference and continue to add some other nodes.
To do this, I'm currently cloning the flow after each incremental change. But due to this, many flows get configured, and it becomes very difficult to keep track of them.
Is there any alternative way to save the history of these multiple configurations of the flow for future reference without cloning and executing them separately?
You can save the history of changes you have made in the flow by saving it as a version, using the Save As Version option provided in the canvas header.
You can also add a description for each of the incremental steps and edit a particular version later if you want. You can then execute each of the saved versions separately by publishing that version from the Version tab and executing it normally.

Stop RTC baseline from being delivered

I created a baseline in my workspace and forgot I had some undelivered changes. These changes are not supposed to be part of the baseline.
The changes and baseline have not been delivered to the stream. Is there any way to get rid of this undelivered baseline and the undelivered changes?
Would deleting the workspace and creating a new one work?
This thread is clear:
Both the internals and the users of RTC SCM system depend on the fact that a baseline is immutable.
A baseline is visible (via various search menus) as soon as it is created, so it would not be safe to change the content of a baseline, even if you haven't yet delivered it.
That is why RTC makes it very fast and cheap to create a new baseline whenever you need one.
BUT I agree that you should be able to:
drop a baseline from your outgoing changes list in the Pending Changes view (Story 18512, "Discard a baseline": nothing done yet, 8 years later), and:
hide/archive a baseline:
Enhancement 170855 (2011): Provide the ability to hide (archive) baselines that are no longer needed, reformulated in
Enhancement 169438 (2011): Need true database archive and restore capability for work items and SCM objects, itself encompassed in
Enhancement 294112 (2013): Archival techniques and tools.
None of these has been implemented yet.
So for now, simply try to:
rename your current baseline with a name which makes it clear it is not meant to be used,
create a new baseline including only the change sets you want,
deliver both baselines.

Accurev revert action

I am working with Accurev and recently I was forced to perform a revert on a recent promote. I followed the general guidelines, accessed the stream's history and performed a revert action on a selected transaction.
That particular transaction involved already existing files but also new ones.
Now here comes the main problem: after the revert, the existing files were returned to their previous versions, but the files that were at their first version appeared in the root of the stream, regardless of the path where they were initially placed, and were in the Defunct state. Of course this is mainly visually disturbing, but I am also wondering what will happen later on when someone tries to re-add those same files to the stream.
Will there be a conflict? Will they be repositioned where they were initially added? At this point I am thinking about reverting the revert action, but that already seems really overcomplicated, and it looks like it would only generate more problems.
Let's say you have a stream hierarchy of: Stream1 - Stream2 - Workspace1.
In this hierarchy you have a file called foo.c.
In Stream2, this file has a defunct status.
If you add foo.c to source control in Workspace1 and promote, the defunct version in Stream2 is now stranded and the newly added file appears with a member status. The stranded file will appear when you search for stranded elements.
If you try to promote the newly added foo.c, the promote will fail because the file is an evil twin: the original foo.c still exists in Stream1.
To clear this, you can promote the stranded foo.c from Stream2 and then promote the newly added foo.c.
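For readers new to the term: an "evil twin" is AccuRev's name for two distinct elements (different internal element IDs) that happen to share the same pathname in related streams. A toy sketch of the condition, purely illustrative and not AccuRev's actual code:

```java
import java.util.Map;

/**
 * Toy illustration of the evil-twin condition (not AccuRev's code):
 * a promote fails when the parent stream already holds an element at
 * the same path but with a different element ID.
 */
public class EvilTwinSketch {

    record Element(int elementId, String path) {}

    static boolean isEvilTwin(Element promoted, Map<String, Element> parentStreamByPath) {
        Element existing = parentStreamByPath.get(promoted.path());
        return existing != null && existing.elementId() != promoted.elementId();
    }

    public static void main(String[] args) {
        // Stream1 still holds the original foo.c (element 101).
        Map<String, Element> stream1 = Map.of("src/foo.c", new Element(101, "src/foo.c"));
        // The newly added foo.c from Workspace1 is a different element (202).
        Element readded = new Element(202, "src/foo.c");
        System.out.println(isEvilTwin(readded, stream1)); // true -> promote fails
    }
}
```

Promoting the stranded (defunct) foo.c first retires the old element in the parent stream, which is why the two-step order above clears the conflict.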

Accurev - why not Auto-Update?

Why isn't it standard behavior for Accurev to automatically run an "Update" upon opening the program? "Update" updates a user's local sandbox with the latest files from the building/promoted area.
It seems like expected functionality that the most recent files should be synchronized first.
I'm not claiming that it should always update, but curious as to why an auto-Update wouldn't be correct.
Auto-updating could produce some very unwanted results.
Take this scenario: you're in the middle of a development task, but you've made a mistake and need to revert a file that you just modified. So you open AccuRev, but before you have a chance to "revert to most recent version", you are bombarded with 100 files that have been changed upstream including the one you want to revert. You are now forced into the position of resolving all the merge conflicts before your solution will build, including the merge of your (possibly unstable) code in progress.
Requiring the user to manually update keeps a protective 'bubble' around the developer, allowing them to commit (keep) changes within their own workspace without bringing down code changes that could destabilise the work in their sandbox. When the developer gets to a point where their code is ready to share with others, that is the appropriate time to do an update and subsequently build/retest the merged codebase before promoting.
However, there is one scenario in which I do believe auto-updating could be useful: after a workspace is reparented, i.e. when a developer's workspace is moved from one part of the stream hierarchy to another. Every time we reparent, we have to do a little dance:
Accept the confirmation dialog that reminds us (rather verbosely) that we need to update our workspace before we can promote any changes.
Double-click the workspace to view its files.
Wait for AccuRev to do a "Pending" search, to determine whether any file changes are waiting to be committed.
And finally, perform the Update.
Instead of just giving us a confirmation dialog, it would be nice if AccuRev could just ask us if we want to Update immediately.
I guess it depends on preference. I for one wouldn't like the auto-update feature.
Imagine you have a huge project that you don't want to rebuild every time you start AccuRev. After an auto-update you also couldn't debug, because the source files and the debugging info would no longer correspond.

Skip AIR update

Imagine I have an AIR application to update: the previous version number is 0.0.1, the current one is 0.0.2. The previous version is installed on many different PCs. I want to update ONLY some clients, based on a particular ID. Is it possible to skip the update process for some clients?
The short answer is yes, it is possible. You can do just about whatever you need if you write your own code to check for updates, download the update to a local file, and then call Updater.update() or ApplicationUpdater.installFromAIRFile(). The old AIRUpdater.js example from AIR 1.0 can get you started.
The problem is that you'll have to update all of your 0.0.1 clients with new code before they can make a decision about 0.0.2. And if you don't have a good way to ensure they've all been updated before deploying your next version, you'll probably want to change the location of the update descriptor file in your intermediate version. Otherwise you may end up with straggler 0.0.1 clients that skip the intermediate version and update to 0.0.2 without your ID checking.
And I haven't tried this yet, but it might be even easier to use the newer air.update.ApplicationUpdater class from AIR 1.5 and put your ID checking in the updateStatus event handler.
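Since the client-side code here would be ActionScript, one language-neutral way to see the gating idea is from the server: point the updater at a URL that decides, per client ID, whether to return the 0.0.2 update descriptor or one pinned at the client's current version. The sketch below (plain JDK HttpServer; the port, paths, IDs, and descriptor URLs are all invented for illustration) is one way that could look; it is not part of the AIR SDK.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Set;

/**
 * Hypothetical sketch: serve the AIR update descriptor only to clients
 * whose ID is on the allow list; every other client is shown its current
 * version and therefore skips the update.
 */
public class UpdateDescriptorServer {

    private static final Set<String> ALLOWED_IDS = Set.of("client-42", "client-97");

    private static final String DESCRIPTOR_0_0_2 =
            "<update xmlns=\"http://ns.adobe.com/air/framework/update/description/1.0\">"
            + "<version>0.0.2</version>"
            + "<url>http://example.com/app-0.0.2.air</url></update>";

    // Advertises the already-installed version, so the updater
    // concludes that no update is available.
    private static final String DESCRIPTOR_0_0_1 =
            "<update xmlns=\"http://ns.adobe.com/air/framework/update/description/1.0\">"
            + "<version>0.0.1</version>"
            + "<url>http://example.com/app-0.0.1.air</url></update>";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/update.xml", exchange -> {
            // Expect the client to append its ID, e.g. /update.xml?id=client-42
            String query = exchange.getRequestURI().getQuery();
            String id = (query != null && query.startsWith("id=")) ? query.substring(3) : "";
            String body = ALLOWED_IDS.contains(id) ? DESCRIPTOR_0_0_2 : DESCRIPTOR_0_0_1;
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/xml");
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}
```

The caveat from the answer still applies: the existing 0.0.1 clients must first be moved to an intermediate version that actually sends an ID (or does the updateStatus check) before any of this gating can take effect.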