How to allow users to change my Accurev stream - accurev

I'm going to be on vacation for a month, and the stream needs to be supported while I'm away. How do I allow other users to change my AccuRev stream (change the time basis, the backing stream, etc.)?

By default, anyone can change the time basis, rename a stream, apply locks, and so on. The only way to prevent these operations is through the server_admin_trig trigger. If you have enabled this trigger to block users from performing these operations, then I suggest you create a group, add whoever needs to administer this stream for you, and update the trigger script to allow that group to perform those operations.
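For illustration only, here is a minimal sketch (in Java, matching the code elsewhere on this page) of the kind of group-membership check such a trigger performs; the group name "stream-admins" is an assumption, and the real server_admin_trig shipped with AccuRev is a Perl sample script that you would edit instead:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class StreamAdminCheck {
    // Shells out to the AccuRev CLI: "accurev ismember <user> <group>" prints 1 or 0.
    static boolean isMember(String user, String group) throws Exception {
        Process p = new ProcessBuilder("accurev", "ismember", user, group).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            return line != null && line.trim().equals("1");
        }
    }

    public static void main(String[] args) throws Exception {
        // A trigger would allow the blocked operations for members of the delegated
        // admin group (hypothetical name) and reject them for everyone else.
        String user = args[0];
        System.out.println(isMember(user, "stream-admins") ? "allowed" : "denied");
    }
}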

Related

Akka.net persistence delete messages from a certain sequence number

Is there a way to delete messages after a certain sequence number in Akka.net? I know that DeleteMessages(seqNumber) deletes all messages before a certain sequence number; is there a way to delete after a given sequence number? The main goal would be to revert to a previous state (perhaps those messages were created in error).
It's obviously possible to edit the database manually (or set is_deleted to true for those events) but I'm not sure if that would be a great idea.
Thanks
DeleteMessages(seqNr) exists only for the purpose of saving space when you're using event sourcing with snapshots and your system can tolerate an incomplete history of events.
Deleting events goes against event sourcing as a concept. The purpose of an event is to describe a fact that has already happened. You cannot alter the past: other consumers may already have read that event and updated some state or performed an action based on it.
Correcting the effects of events in event-sourced systems usually comes down to producing a compensating event that reverses the effects of the one you want to fix.
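The question targets Akka.NET (C#), but the idea is language-agnostic. A minimal conceptual sketch in plain Java, not Akka.NET's API; the AmountCredited/CreditReversed events and the balance example are invented purely for illustration:

import java.util.ArrayList;
import java.util.List;

public class CompensatingEventDemo {
    interface Event {}
    record AmountCredited(long cents) implements Event {}
    // Compensating event: reverses the effect of an earlier AmountCredited.
    record CreditReversed(long cents, String reason) implements Event {}

    // Replaying the full journal, including the compensation, yields the corrected state.
    static long replayBalance(List<Event> journal) {
        long balance = 0;
        for (Event e : journal) {
            if (e instanceof AmountCredited c) balance += c.cents();
            else if (e instanceof CreditReversed r) balance -= r.cents();
        }
        return balance;
    }

    public static void main(String[] args) {
        List<Event> journal = new ArrayList<>();
        journal.add(new AmountCredited(10_00));                      // correct event
        journal.add(new AmountCredited(99_00));                      // event created in error
        journal.add(new CreditReversed(99_00, "entered in error"));  // compensation, instead of deletion
        System.out.println(replayBalance(journal));                  // prints 1000 (i.e. $10.00)
    }
}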

Updating OpenFlow group table bucket list in OpenDaylight

I have a mininet (v2.2.2) network with openvswitch (v2.5.2), controlled by OpenDaylight Carbon. My application is an OpenDaylight karaf feature.
The application creates a flow (for multicasts) to a group table (type=all) and adds/removes buckets as needed.
To add/remove buckets, I first check if there is an existing group table:
InstanceIdentifier<Group> groupIid = InstanceIdentifier.builder(Nodes.class)
        .child(Node.class, new NodeKey(NodId))
        .augmentation(FlowCapableNode.class)
        .child(Group.class, grpKey)
        .build();
// Read from the operational datastore to see whether the group already exists
ReadOnlyTransaction roTx = dataBroker.newReadOnlyTransaction();
Future<Optional<Group>> futOptGrp = roTx.read(LogicalDatastoreType.OPERATIONAL, groupIid);
If it doesn't find the group table, it is created (SalGroupService.addGroup()). If it does find the group table, it is updated (SalGroupService.updateGroup()).
The problem is that it takes some time after the addGroup()/updateGroup() RPC call for the changes to show up in the data model. Waiting for the Future<RPCResult<?>> doesn't guarantee that the data model is in the same state as the device.
So, how do I read the group table and bucket list from the data model and make sure that I am indeed reading the same state as the current state of the device?
I know that
Add/UpdateGroupInputBuilder has setTransactionUri()
DataBroker gives transaction to read/write
you should use transaction chaining
But I cannot figure out how to combine these.
Thank you
EDIT: Or do I have to use write transactions instead of RPC calls?
I dropped using RPC calls for writing flows and switched to writes to the config datastore. It still takes some time for the changes to appear in the actual device and in the operational datastore, but that is fine as long as I use the config datastore for both reads and writes.
However, I have to keep in mind that changes to the config datastore are not guaranteed to always make it to the actual device. My flows are not that complicated, in the sense that conflicts are unlikely, but I will probably still check consistency between the operational and config datastores.
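In code, that switch looks roughly like the sketch below (Carbon-era controller MD-SAL API). Here dataBroker and groupIid are the objects from the question, group is assumed to be the Group built for the add/update, and error handling is omitted:

// Write (or replace) the group in the CONFIG datastore instead of calling the add/update RPC.
WriteTransaction wtx = dataBroker.newWriteOnlyTransaction();
wtx.put(LogicalDatastoreType.CONFIGURATION, groupIid, group, true); // true = create missing parents
CheckedFuture<Void, TransactionCommitFailedException> writeFuture = wtx.submit();

// Read back from the same (CONFIG) datastore, so reads and writes agree
// even if the operational datastore / device has not caught up yet.
ReadOnlyTransaction roTx = dataBroker.newReadOnlyTransaction();
CheckedFuture<Optional<Group>, ReadFailedException> readFuture =
        roTx.read(LogicalDatastoreType.CONFIGURATION, groupIid);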

Avoid two-phase commits in an event-sourced application saving BLOB data

Let's assume we have an Aggregate User which has a UserPortraitImage and a Contract as a PDF file. I want to store files in a dedicated document-based store and just hold process-relevant data in the event (with a link to the BLOB data).
But how do I avoid a two-phase commit when I have to store the files and store the new event?
At first I'd store the documents and then the event; if the first transaction fails it doesn't matter, the command failed. If the second transaction fails it also doesn't matter even if we generated some dead files in the store, the command fails; we could even apply a rollback.
But could there be an additional problem?
The next question is how to design the aggregate and the event. If the aggregate only holds a reference to the BLOB storage, what is the process after a SignUp command got called?
SignUpCommand ==> Store documents (UserPortraitImage and Contract) ==> Create new User aggregate with the given BLOB storage references and store it?
Is there a better design which unburdens the aggregate of knowing that BLOB data is saved in another store? And who is responsible for storing BLOB data and forwarding the reference to the aggregate?
Sounds like you are working with something analogous to an AtomPub media-entry/media-link-entry pair. The blob goes into your data store, and the metadata gets copied into the aggregate history.
But how do I avoid a two-phase commit when I have to store the files and store the new event?
In practice, you probably don't.
That is to say, if the blob store and the aggregate store happen to be the same database, then you can update both in the same transaction. That couples the two stores, and adds some pretty strong constraints to your choice of storage, but it is doable.
Another possibility is that you accept that the two changes that you are making are isolated from one another, and therefore that for some period of time the two stores are not consistent with each other.
In this second case, the saga pattern is what you are looking for, and it is exactly what you describe: you pair the first action with a compensating action to take if the second action fails. So: a "manual" rollback.
Or not - in a sense, the git object database uses a two-phase commit; an object gets copied into the object store, then the trees get updated, then the commit... garbage collection comes along later to discard the objects that you don't need.
who is responsible for storing BLOB data and forwarding the reference to the aggregate?
Well, ultimately it is an infrastructure concern; does your model actually need to interact with the document, or is it just carrying a claim check that can be redeemed later?
At first I'd store the documents and then the event; if the first transaction fails it doesn't matter, the command failed. If the second transaction fails it also doesn't matter even if we generated some dead files in the store, the command fails; we could even apply a rollback. But could there be an additional problem?
Not that I can think of, aside from wasted disk space. That's what I typically do when I want to avoid distributed transactions or when they're not available across the two types of data stores. Oftentimes, one of the two operations is less important and you can afford to let it complete even if the master operation fails later.
Cleaning up botched attempts can be done during exception handling, as an out-of-band process, or as part of a Saga, as @VoiceOfUnreason explained.
SignUpCommand ==> Store documents (UserPortraitImage and Contract) ==> Create new User aggregate with the given BLOB storage references and store it?
Yes. Usually the Application layer component (the Command handler in your case) acts as a coordinator between the different data stores and gets back all it needs to know from one store before talking to the other or to the Domain.
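A rough sketch of that coordination in Java; BlobStore, EventStore, SignUpHandler and UserRegistered are hypothetical placeholders rather than any particular framework's API:

// Hypothetical ports; real projects would define their own interfaces here.
interface BlobStore { String save(byte[] content); }            // returns a reference/URI
interface EventStore { void append(String streamId, Object event); }

record UserRegistered(String userId, String portraitRef, String contractRef) {}

class SignUpHandler {
    private final BlobStore blobs;
    private final EventStore events;

    SignUpHandler(BlobStore blobs, EventStore events) {
        this.blobs = blobs;
        this.events = events;
    }

    void handle(String userId, byte[] portrait, byte[] contractPdf) {
        // 1. Store the BLOBs first; if this fails, the command simply fails.
        String portraitRef = blobs.save(portrait);
        String contractRef = blobs.save(contractPdf);
        // 2. Then append the event, which only carries the references.
        //    If this fails we are left with orphaned blobs, which an out-of-band
        //    cleanup (or a compensating delete) can remove later.
        events.append(userId, new UserRegistered(userId, portraitRef, contractRef));
    }
}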

Possibility of restoring a deleted stream?

A new stream called stream 1 is created.
I deliver some changes to stream 1.
Later, I delete stream 1.
So:
Is there a possibility to restore a deleted stream?
If I am not able to restore the stream, will I lose the changes I delivered to it?
Is there a possibility to restore a deleted stream?
Not easily, unless you had created snapshots (we covered snapshots in your previous question "Consistency of snapshot code in rtc?"): in that case, when you delete a stream, RTC asks you to select another existing stream to keep ownership of those snapshots.
If you did, then it is trivial to re-create a new stream from a snapshot, ensuring you recover all components in the exact state recorded by the snapshot.
But if you didn't set any snapshot, then you have to manually re-enter all the components, and set them to (for instance) their most recent baselines.
If I am not able to restore the stream, will I lose the changes I delivered to it?
In any case, as mentioned in the thread "Delete a Stream - any side-effects?"
Change-sets exist independently of any stream, so deleting a stream does not delete any change-sets.
It will just be harder to get the exact list of change sets back into a new stream if they were only delivered to stream 1 (the one you deleted), especially if those change sets were never grouped into a baseline (for a given component) or, as explained above, captured by a snapshot.
But those change sets are not gone.

Notification about azure blob object changes

Can I somehow subscribe for notifications about Azure's blob object changes?
My purpose is to delegate file uploads to the client using SAS and later (after the upload is complete) update the database. It looks like I need to continuously check the blob's state, but that is quite a resource-consuming process.
You can't be notified by Blob Storage about a change made to a blob, but, as you point out, you can monitor it by requesting the ETag on a schedule to see whether the upload is done.
That being said, the cost of monitoring a blob (or even a whole container) can be close to negligible if correctly implemented. Pinging Blob Storage once per second costs you roughly $2.50/month. By using some heuristic you can probably lower this cost to $0.25 (one check per 10 s on average); at that point it's not really worth optimizing further.
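For completeness, a minimal polling sketch along those lines, assuming the legacy azure-storage Java SDK; the connection-string environment variable, container name, blob name, and 10-second interval are all placeholders:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class BlobEtagPoller {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION")); // placeholder setting
        CloudBlockBlob blob = account.createCloudBlobClient()
                .getContainerReference("uploads")         // placeholder container
                .getBlockBlobReference("contract.pdf");   // placeholder blob name
        String lastEtag = null;
        while (true) {
            if (blob.exists()) {
                blob.downloadAttributes();                // refreshes properties, including the ETag
                String etag = blob.getProperties().getEtag();
                if (!etag.equals(lastEtag)) {
                    System.out.println("Blob changed, new ETag: " + etag);
                    lastEtag = etag;                      // here you would update your database
                }
            }
            Thread.sleep(10_000);                         // one check per 10 s, as suggested above
        }
    }
}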
You can now do this using Azure Functions:
Create a blob trigger by specifying your storage account connection string and your container/{name}.
In outputs, select the place where you want your notification to go (a sketch is shown below).
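A minimal sketch of such a blob-triggered function with the Azure Functions Java library; the function name, the "uploads" container, and the AzureWebJobsStorage connection setting are placeholders:

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.BindingName;
import com.microsoft.azure.functions.annotation.BlobTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

public class BlobUploadNotifier {
    // Fires whenever a blob is created or updated in the "uploads" container.
    @FunctionName("OnBlobUploaded")
    public void run(
            @BlobTrigger(name = "content",
                         dataType = "binary",
                         path = "uploads/{name}",
                         connection = "AzureWebJobsStorage") byte[] content,
            @BindingName("name") String blobName,
            final ExecutionContext context) {
        context.getLogger().info("Blob uploaded: " + blobName + " (" + content.length + " bytes)");
        // ...update the database here...
    }
}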
Another option to consider is to have the client notify you when it's done uploading.
I created a file change monitor for monitoring blobs - full details at http://ben.onfabrik.com/posts/monitoring-files-in-azure-blob-storage