I accidentally delivered a change set in RTC that includes some additional config files containing local, system-specific configuration. Is there any way to discard those changes once delivered? I mean, the changes should not show up as incoming changes for other team members.
Please provide any pointers if you have them.
Is there any way to discard those changes once delivered?
Not exactly: once delivered, that change set will show up as incoming for the other team members.
There are two solutions:
revert the stream configuration to a state prior to your delivery. That is easy only if you are delivering baselines in addition to change sets, because you can then open the stream and, in the "Components" section, click on "Replace with" and replace the delivered baseline with the previous one.
But... if you never delivered baselines (and delivered only change sets), this isn't easy at all.
You can try and follow "Is there a way to create a RTC snapshot or baseline based on a past date?", but that is quite tedious.
Plus, if your colleagues already accepted your change set and started delivering change sets of their own, this solution isn't recommended at all.
Or, much simpler, you create a new change set that cancels the one you just delivered.
Right-click on your component and select Show > History, then right-click on the latest change set (the one you incorrectly delivered) and select Revert.
That will create a patch.
Right-click on that patch and select "Apply to your workspace": that will create a change set which is the negative image of the one already delivered.
Deliver that new change set.
That means your colleagues will have to accept both change sets: the incorrect one, and the new one which cancels it.
This thread introduces a variation of the first alternative:
you can really remove the change set from the stream you delivered it to.
You can do this by:
discarding the change set from your local workspace
and then replacing the content of the stream with the content of your workspace for the particular component that's affected.
This is a riskier solution, because it really replaces the content of the stream with whatever you have in your workspace: it will remove anything in the stream that you don't have in your workspace. To do this:
a. Accept any incoming changes from the stream you are working with (to prevent losing anyone else's work).
b. Right click on the owning component in the Pending Changes view and select Show->History. The change set will appear in the History view.
c. Right click on the change set and choose Discard... This will discard the change set from your workspace.
So your workspace should now have all changes from the stream except the one you want to remove. You can verify this by checking that your bad change set is the only thing you see in Incoming.
d. Right click on the component and choose "Replace in [your stream name]..."
We have around 300k items in dmi_queue_item.
If I right-click and select "destroy queue item", I see that the row no longer appears if I query by r_object_id.
Does that mean the file will no longer be processed by the CTS service? I need to know whether this is the way to clear up the queue for the rendition process (converting to PDF), or what the best way to clear up the queue would be.
Also, for some items/rows I get an error message when doing the right-click "destroy"; what does it mean, and how can I avoid it? I'm not sure whether the item was already processed and the row no longer exists, or whether it is something else.
The dmi_queue_item table is used as a queue for all sorts of events in Content Server.
Content Transformation Services uses it to read at least two types of events, as far as I know.
According to the Content Transformation Services Administration Guide, ver. 7.1, page 18, it reads dm_register_assets events and performs the configured content actions for those specific objects.
I was using CTS to generate content renditions for some objects using the dm_transcode_content event.
However, be careful when cleaning up dmi_queue_item, since there can be many different event types. It is up to system administrators to keep this queue clean, by configuring system components to use events or not to pile up events that are not supposed to be used.
As for cleaning the queue, it is advisable to use the destroy API command, though you can also try deleting rows using a DELETE query. Of course, try this in a dev environment first.
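Before destroying anything in bulk, it may help to see which event types actually make up those 300k rows. A read-only DQL query along these lines should give the breakdown per event name:

select name, count(*) from dmi_queue_item group by name

If the bulk belongs to the rendition events, clearing those is reasonable; if other event names dominate, check which components consume them before deleting anything.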
You would need to look at two queues, dm_autorender_win31 and dm_mediaserver. In order to delete them, you would run this query:
delete dmi_queue_item objects where name = 'dm_mediaserver' or name = 'dm_autorender_win31'
I have an activity feed system that uses Redis sorted sets.
Events happen and a message is placed into a sorted set for each relevant user, with a timestamp for the score.
The messages are then distributed to the users, either when they log in, or through a push if the user is currently logged in.
I'd like to differentiate between messages that have been "read" by the user and ones that are still unread.
From my understanding, I can't just have a "read/unread" property as part of the member, as changing it would make the member different and therefore be added a second time instead of replacing the current member.
So, what I'm thinking is that for each user I have two sorted sets: an "unread" set and a "read" set.
When new events come in, they're added to the "unread" set.
When the user reads a message, I add the message to the "read" set and remove it from the "unread" set.
I'm a little less sure about how to deliver them. I can't just union them, as I'd lose the read/unread distinction, unless I inverted the score on the unread ones, for instance.
Returning the two sets individually (and merging them in code) makes paging difficult. I'd like to be able to get the 20 most recent messages regardless of read/unread status.
So, questions:
Is the read set/unread set way the best way to do it? Is there a better way?
What's the best way to return subsets of the merged/union'd data?
Thanks!
Instead of trying to update the member, you could just remove it and insert a new version. That should be no problem, because you know both the member and its timestamp.
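For what it's worth, here is a minimal sketch of the two-set approach using Python and redis-py; the key names and helper functions are made up for illustration:

import time
import redis

r = redis.Redis()

def push_event(user_id, message):
    # New events land in the unread set, scored by timestamp.
    r.zadd(f"feed:{user_id}:unread", {message: time.time()})

def mark_read(user_id, message):
    # Move a message from unread to read, keeping its original score.
    score = r.zscore(f"feed:{user_id}:unread", message)
    if score is None:
        return
    pipe = r.pipeline()
    pipe.zrem(f"feed:{user_id}:unread", message)
    pipe.zadd(f"feed:{user_id}:read", {message: score})
    pipe.execute()

def latest(user_id, count=20):
    # Over-fetch the newest `count` from each set, tag each message with
    # its status, merge by score, and keep the overall newest `count`.
    unread = r.zrevrange(f"feed:{user_id}:unread", 0, count - 1, withscores=True)
    read = r.zrevrange(f"feed:{user_id}:read", 0, count - 1, withscores=True)
    merged = [(m, s, "unread") for m, s in unread] + [(m, s, "read") for m, s in read]
    merged.sort(key=lambda t: t[1], reverse=True)
    return merged[:count]

Merging in code keeps the read/unread flag without touching scores or members. For pages beyond the first, a max-score cursor (ZREVRANGEBYSCORE starting below the score of the last item seen) is safer than fixed offsets, since offsets shift as messages move between the two sets.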
In my app, I need to share a setting between different devices running the app. I want the first device that installs the app to set the master value of the setting; all other devices should then get that setting and not overwrite it.
How do I make sure I first check if iCloud has a value before setting the value? So I don't overwrite an existing one.
Should I wait for the NSUbiquitousKeyValueStoreInitialSyncChange event to be sent, and then check for a possibly existing value and otherwise set it for the first time? If yes, can I rely on receiving the NSUbiquitousKeyValueStoreInitialSyncChange event? If not, it might turn out that this approach never sets the iCloud value at all.
If I try to set a value before NSUbiquitousKeyValueStoreInitialSyncChange is triggered for the first time, will it be discarded, and will NSUbiquitousKeyValueStoreInitialSyncChange then be triggered with the existing data in the store?
I've heard that NSUbiquitousKeyValueStoreInitialSyncChange is not triggered if there are no values in the store when it syncs for the first time?
I have read the Apple documentation about this and seen answers here on Stack Overflow, but I don't understand how to do exactly this.
How can I make sure I don't overwrite an existing value the first time I launch/install the app?
There is no way for you to know for sure that you have synchronized with the remote store at least once, and you should not count on it (imagine there is no iCloud account set up, no connectivity, or the iCloud servers are down: you don't want your user to wait while you make sure you are in sync with the cloud, as that can take forever or may never happen).
What you should do:
When you start, check the store to see if there is a value.
If there is no value, just push your own value.
If the initial sync with the server has not happened yet and there is, in fact, a value in the cloud, this will be considered a conflict by NSUbiquitousKeyValueStore. In this precise case (initial sync), the automatic policy is to revert your local value and prefer the one in the cloud instead. Your application will then be notified of this revert by the NSUbiquitousKeyValueStoreDidChangeExternallyNotification, with the NSUbiquitousKeyValueStoreInitialSyncChange reason.
If there was in fact no value in the cloud, your local value will be pushed and everyone will be happy.
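To make that concrete, here is a minimal Swift sketch of the check-then-push flow; the "masterSetting" key and the placeholder value are made up for illustration:

import Foundation

let store = NSUbiquitousKeyValueStore.default

// Watch for external changes, including the initial-sync revert described above.
NotificationCenter.default.addObserver(
    forName: NSUbiquitousKeyValueStore.didChangeExternallyNotification,
    object: store,
    queue: .main
) { notification in
    let reason = notification.userInfo?[NSUbiquitousKeyValueStoreChangeReasonKey] as? Int
    if reason == NSUbiquitousKeyValueStoreInitialSyncChange {
        // The cloud already had a value: the optimistic local write was reverted,
        // and string(forKey:) now returns the master value from the cloud.
        print("Master value: \(store.string(forKey: "masterSetting") ?? "none")")
    }
}

// On launch: if no value is visible yet, optimistically push this device's value.
if store.string(forKey: "masterSetting") == nil {
    store.set("value-chosen-by-this-device", forKey: "masterSetting")
}
_ = store.synchronize() // a hint to sync soon, not a guarantee

Either way, the device ends up with the first value that reached iCloud, which is exactly the "first installer wins" behavior asked about.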
In the diagram below, what is the meaning of the arrows?
Here's what I think :
Each arrow describes where the change sets flow from/to. So the top workspace flows changes to and accepts changes from the stream. The bottom two workspaces just flow changes to the stream; these workspaces do not accept any changes from the stream. Is this correct?
What is the meaning of the broken blue arrow ?
A broken arrow means that flow target is not the current one.
If you open the stream, you will see a section "Flow Target" with a list of targets.
Each line can have two qualifiers: "default" and "current".
Any target which isn't "current" will be represented by a broken arrow.
"Current" means that, when you request to see the differences between one stream and another, it will display those differences between the stream and the current target.
See also this thread (more oriented toward flow targets between repo workspaces and streams).
"Current" means "this is the flow target that will be displayed in the Pending Changes view".
"Default" means "if you try to deliver to a flow target other than the one marked "Default", you will get a warning, asking you if you are sure that you want to deliver to a non-default target.
Here's what I think : Each arrow describes where the change sets are flowing from/to.
Yes, but this is a "model": you won't directly deliver/accept changes from a Stream. You will always do so from a repo workspace.
So the top workspace flows changes to and accepts changes from the stream.
Not exactly:
the filled blue arrow means you can ask the stream for the differences between said stream and the repo workspace (no deliver is possible here, just a visualization of differences);
the broken blue arrow means the repo workspace knows about the Stream (it is listed as a default flow target), but said Stream isn't the current flow target for the top repo workspace.
That means the "Pending Changes" view won't display any differences (to accept or to deliver) for that repo workspace compared to the Stream.
The bottom two workspaces just flow changes to the stream; these workspaces do not accept any changes from the stream. Is this correct?
No: the target means the bottom repo workspaces know about the Stream (they can accept or deliver changes), and that Stream is their current flow target (the "Pending Changes" view actively monitors differences between the bottom repo workspaces and the Stream).
According to this link:
http://www.ibm.com/developerworks/br/rational/library/parallel-development-rational-team-concert/
your assertion
"Current" means "this is the flow target that will be displayed in the Pending Changes view". "Default" means "if you try to deliver to a flow target other than the one marked 'Default', you will get a warning, asking you if you are sure that you want to deliver to a non-default target".
is absolutely correct.
I have a WF4 service with a flowchart as the root activity. It contains multiple correlated Receive activities and decision branching to step through an approval process. The Receive activities work perfectly until I try to use one as the trigger for a pick branch.
I am running tracking, so I can see that the Receive is opened, and in persistence I can see the associated bookmark. When I send a client message of the Receive's type, it does not trigger. I have a Delay pick branch that fires OK, but then the subsequent Receive also does not work.
I have checked these Receive activities individually, and they work OK when not used as the pick trigger. I have tried the Pick within a Sequence and a While, but it makes no difference.
I cannot see any difference between my implementation and the many examples on the web. Am I missing something extra that is required when the Receive is encapsulated by a pick branch?
There is nothing special about a PickBranch trigger that would cause a Receive to behave differently, so I suspect it is something with the Receive itself. What kind of errors are you seeing at the client application?