Explanation of arrows in RTC flow diagram - rtc

In the diagram below, what is the meaning of these arrows?
Here's what I think :
Each arrow describes where the change sets are flowing from/to. So the top workspace flows changes to and accepts changes from the stream. The bottom two workspaces just flow changes to the stream; these workspaces do not accept any changes from the stream. Is this correct?
What is the meaning of the broken blue arrow ?

A broken arrow means: this flow target is not the current one.
If you open the stream, you will see a section "Flow Target" with a list of targets.
Each line can have two qualifiers: "default" and "current".
Any target which isn't "current" will be represented by a broken arrow.
"Current" means that when you request to see the differences between one stream and another, it will display said differences between the stream and the current target.
See also this thread (more oriented toward flow targets between repo workspaces and streams):
"Current" means "this is the flow target that will be displayed in the Pending Changes view".
"Default" means "if you try to deliver to a flow target other than the one marked 'Default', you will get a warning asking you if you are sure that you want to deliver to a non-default target".
Here's what I think : Each arrow describes where the change sets are flowing from/to.
Yes, but this is a "model": you won't directly deliver/accept changes from a Stream. You will always do so from a repo workspace.
So the top workspace flows changes and accepts changes from the stream.
Not exactly:
the filled blue arrow means you can ask the stream for the differences between said stream and the repo workspace (no deliver is possible here, just a visualization of differences)
the broken blue arrow means the repo workspace knows about the Stream (it is listed as a default flow target), but said Stream isn't the current flow target for the top repo workspace.
That means the "Pending Changes" view won't display any differences (to accept or to deliver) for that repo workspace compared to the Stream.
The bottom two workspaces just flow changes to the stream; these workspaces do not accept any changes from the stream. Is this correct?
No: the target means the bottom repo workspaces know about the Stream (they can accept or deliver changes), and that Stream is their current flow target (the "Pending Changes" view actively monitors differences between the bottom repo workspaces and the Stream).

According to this link,
http://www.ibm.com/developerworks/br/rational/library/parallel-development-rational-team-concert/
your assertion
"Current" means "this is the flow target that will be displayed in the
Pending Changes view". "Default" means "if you try to deliver to a
flow target other than the one marked 'Default', you will get a
warning, asking you if you are sure that you want to deliver to a
non-default target".
is absolutely correct.

Related

BPMN Intermediate Events Attached to an Activity Boundary

How would you draw the diagram: Example A, Example B, or are both fine? In Example A there is an event, one extra task, and the process goes back into the main flow. In Example B, if the event occurs, the process does not go back into the main flow. Is it correct to draw the process as in Example A? Examples enclosed. Thank you in advance for your help.
I drew the examples (enclosed) and checked the BPMN specification, but I still have doubts.
I would go a step further and put a gateway in front of "Application Analysis", then draw the arrow from the message event to that gateway. The gateway is only used to join, so it doesn't need a condition; using one is best practice, but you could draw the arrow from the message event directly back to the task itself and it would express the same thing.
The basic reasoning is that you shouldn't have multiple tasks for the same thing in the diagram unless it is really at a different stage in the workflow.
However, it isn't exactly the same as your workflow, because this way the customer could change the loan amount multiple times, not just once.
There are some problems:
I think you want to make the message event interrupting; otherwise you grant both loans, the original one and the changed one.
After "Application Analysis" there should probably be a gateway that checks the result of the analysis and only if it was ok you grant the loan.

Can an IoT Edge module twin's properties be used for message enrichment?

IoT Hub has the ability to enrich messages prior to being sent to an endpoint. It appears that the device twin's information can be added, but I don't see anything about the module twins online.
The use case here would be we will likely version a module data model contained within the messages at some point in the future. We would like to enrich the messages sent to endpoints with metadata about the state of the module it came from.
Another option that doesn't seem to exist is the ability to update device twin properties on deployment. Were this doable, then we could potentially update a module's version information at the device twin level.
Is this the wrong way to think about twins? That is, were such functionality even available, would the enrichment take the point-in-time twin reported value? Could there even be any guarantee that the twin has the correct reported value at the time the module sent the message? If that is the case, is the only truly reliable way to send metadata about a module's message to put it within the message itself?
If you send an event from within a module using a module client, the keyword $twin in the message enrichment will refer to the module twin.
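For context, here is how such an enrichment might be configured with the Azure CLI (a sketch only: the hub name, enrichment key, twin property path, and endpoint name are all assumptions for illustration):

```shell
# Add a message enrichment that stamps each message routed to the
# built-in "events" endpoint with a value resolved from the sending twin.
# When the message is sent from a module client, $twin refers to the
# module twin rather than the device twin.
az iot hub message-enrichment create \
  --hub-name my-iot-hub \
  --key moduleVersion \
  --value '$twin.properties.reported.version' \
  --endpoints events
```

Note that the value is resolved at routing time, so it reflects whatever the twin reported at that moment, not necessarily the twin state when the module sent the message.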

How do I clear up Documentum rendition queue?

We have around 300k items on dmi_queue_item
If I right-click and select "destroy queue item", I see that the row no longer appears if I query by r_object_id.
Would that mean the file will no longer be processed by the CTS service? I need to know whether this is the right way to clear up the queue for the rendition process (conversion to PDF), or what the best way to clear up the queue would be.
Also, for some items/rows I get this message when doing the right-click "destroy" thing. What does it mean, and how can I avoid it? I'm not sure if the item was already processed and the row no longer exists, or if it is something else.
The dmi_queue_item table is used as a queue for all sorts of events at the Content Server.
Content Transformation Services uses it to read at least two types of events, AFAIK.
According to the Content Transformation Services Administration Guide, version 7.1, page 18, it reads dm_register_assets events and performs the configured content actions for those specific objects.
I was using CTS for generating content renditions for some objects using dm_transcode_content event.
However, be careful when cleaning up dmi_queue_item, since there can be many different event types in it. It is up to system administrators to keep this queue clean by configuring system components to use events or not, so that it doesn't fill up with events that are never consumed.
As for cleaning the queue, it is advised to use the destroy API command, though you can also try deleting rows using a DELETE DQL query. Of course, try this in a dev environment first.
You would need to look at two queues:
dm_autorender_win31 and dm_mediaserver. In order to delete them, you would run this query:
delete dmi_queue_item objects where name = 'dm_mediaserver' or name = 'dm_autorender_win31'
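Before deleting anything, it can help to check how many items each queue actually holds (a DQL sketch; as above, run it in a dev environment first):

```sql
-- Count pending queue items per rendition queue
SELECT name, COUNT(*)
FROM dmi_queue_item
WHERE name IN ('dm_autorender_win31', 'dm_mediaserver')
GROUP BY name
```

If the counts match what you expect to purge, the DELETE query clears both queues in one go.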

What is the difference between "write metadata" and "set metadata" in OVS?

I mean, write metadata is implemented as an instruction in OpenFlow; on the other hand, a set-field action can also set the metadata. What is the difference between them?
As far as I can see, WRITE_METADATA and SET_FIELD for metadata do the same in Open vSwitch.
I'm guessing both are exposed by Open vSwitch to follow the OpenFlow specifications as much as possible. OpenFlow has a clear distinction between Actions and Instructions (cf. Sections 5.5 and 5.6 of OpenFlow v1.5.1): Instructions are attached to rules and applied at the end of each table, whereas Actions are attached to packets (using the Write-Actions Instruction) and applied at the end of the pipeline (or before if the Apply-Actions Instruction is executed). In Open vSwitch, the distinction is not as clear: Actions can be attached to both packets and rules.
Thus, while WRITE_METADATA is different from SET_FIELD in the OpenFlow specification because the first is an Instruction, the second an Action, you can do the same as WRITE_METADATA with a SET_FIELD Action.
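To see both forms side by side, here is a small flow-table sketch in ovs-ofctl syntax (the bridge name br0 and the table/metadata values are assumptions for illustration):

```shell
# Instruction form: write_metadata applies value/mask to the metadata register.
ovs-ofctl add-flow br0 "table=0,priority=10,ip,actions=write_metadata:0x1/0xff,goto_table:1"

# Action form: set_field writes the same register via an action.
ovs-ofctl add-flow br0 "table=0,priority=5,ip,actions=set_field:0x1->metadata,goto_table:1"

# A later table can then match on the metadata value set by either form:
ovs-ofctl add-flow br0 "table=1,metadata=0x1/0xff,actions=NORMAL"
```

In both cases table 1 sees metadata=0x1, which is why the two forms end up interchangeable in Open vSwitch even though the OpenFlow specification classifies one as an Instruction and the other as an Action.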

How to discard the change set once delivered in RTC

I accidentally delivered a change set in RTC which includes some additional config files containing local, system-specific configuration. Is there any way to discard those changes once delivered? I mean, the changes should not come as incoming changes to other team members.
Please provide any pointers if you have them.
Is there any way to discard those changes once delivered?
Not exactly: once delivered, that change set will come to the other team members as incoming.
There are two solutions:
revert the stream configuration to a state previous to your deliver. That is easy only if you are delivering baselines in addition to change sets, because you can then open the stream and, in the "Components" section, click "Replace with" and replace the delivered baseline with the previous one.
But... if you never delivered baselines (and delivered only change sets), this isn't easy at all.
You can try and follow "Is there a way to create a RTC snapshot or baseline based on a past date?", but that is quite tedious.
Plus, if your colleagues already accepted your change set and started delivering change sets of their own, this solution isn't recommended at all.
Or, much simpler, you create a new change set which will cancel the one you just delivered.
Right-click on your component and select Show > History, then right-click on the latest change set you incorrectly delivered and select Revert.
That will create a patch.
Right-click on that patch, and select "apply to your workspace": that will create a change set which is the negative image of the one already delivered.
Deliver that new change set.
That means your colleagues will have to accept both change sets: the incorrect one, and the new one which cancels it.
This thread introduces a variation of the first alternative:
you can really remove the change set from the stream you delivered it to.
You can do this by:
discarding the change set from your local workspace
and then replacing the content of the stream with the content of your workspace for the particular component that's affected.
This is a more risky solution, because it really replaces the content of the stream with whatever you have in your workspace... it will remove anything in the stream that you don't have in your workspace. To do this:
a. Accept any incoming changes from the stream you are working with (to prevent losing anyone else's work).
b. Right click on the owning component in the Pending Changes view and select Show->History. The change set will appear in the History view.
c. Right click on the change set and choose Discard... This will discard the change set from your workspace.
So your workspace should now have all changes from the stream except the one you want to remove. You can verify this by checking that your bad change set is the only thing you see as Incoming.
d. Right click on the component and choose "Replace in [your stream name]..."