Undo-Redo repeated commands

So, I understand that undo/redo is usually implemented by command pattern.
However, when a command is intended to be repeated x times, undoing x times would be troublesome for the user.
For example, I have an int num; when I press "+" on the keyboard, the program does ++num. If the user increases num from 0 to 50 by pressing "+" and then wants to undo, how do I let the user undo once and have num go back to 0?
How do I implement undo so that it can handle a series of repeated commands?
Thanks in advance!

The Monitored Undo Framework (http://muf.codeplex.com) does this by using the concept of a batch of operations. You can flag a set of operations as belonging to a group so that the undo system will undo/redo them as a unit of work.
Furthermore, the library lets you optimize this situation by storing only the first/last values for a given field. That way, the undo/redo logic doesn't have to apply all 50 operations; it can simply undo by setting the value back to what it was before the batch.
Caveat: The MUF library doesn't use a traditional command pattern. It uses more of a memento pattern, tracking changes after they happen in the underlying domain model.
If you need a true command pattern, you might add logic to the undo implementation that inspects entries on the undo stack. Then, for example, if a user hits undo on the "+" operation, the stack begins an undo and keeps undoing as long as it keeps finding "+" operations on the stack. I've used this approach in cases where I couldn't batch the events but wanted the undo stack to automatically undo more than one operation at a time.
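For illustration, here is a minimal Python sketch of that stack-inspection approach (the Doc and command classes are hypothetical, not taken from MUF):

    class Doc:
        def __init__(self):
            self.num = 0

    class IncrementCommand:
        """Command-pattern wrapper around ++num."""
        def __init__(self, doc):
            self.doc = doc

        def execute(self):
            self.doc.num += 1

        def undo(self):
            self.doc.num -= 1

    class UndoStack:
        """Undoes whole runs of same-type commands as one unit."""
        def __init__(self):
            self._stack = []

        def push(self, command):
            command.execute()
            self._stack.append(command)

        def undo(self):
            if not self._stack:
                return
            kind = type(self._stack[-1])
            # Keep undoing while the top of the stack is the same command type.
            while self._stack and type(self._stack[-1]) is kind:
                self._stack.pop().undo()

    doc, stack = Doc(), UndoStack()
    for _ in range(50):              # user presses "+" fifty times
        stack.push(IncrementCommand(doc))
    stack.undo()                     # a single undo reverts all fifty
    print(doc.num)                   # prints 0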


Delete a record after a period of time automatically in SQL Firebird 2.5?

We have a table with a datetime-stamp field recording when each record was created. How can we create a trigger or procedure to delete records once they are 30 days old?
Is there any advice on how to run such a deletion scheduler?
Firebird doesn't have a scheduler, so you will need to create an application that executes a cleanup routine on a schedule yourself. You could do this as part of the normal application, or you could write a small application specifically for this purpose and execute it with the scheduler of your OS (e.g. Windows Scheduled Tasks, or cron on Linux).
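As a rough sketch of that idea, a tiny script like the following could be run daily by cron or the Windows Task Scheduler (the table and column names are made up, and the firebirdsql driver is just one option; DATEADD is available from Firebird 2.1 on):

    # cleanup.py - run daily via cron / Windows Task Scheduler
    import firebirdsql  # assumption: any Firebird DB-API driver would do

    conn = firebirdsql.connect(host="localhost", database="/data/app.fdb",
                               user="SYSDBA", password="masterkey")
    cur = conn.cursor()
    # Delete rows whose creation timestamp is more than 30 days old.
    cur.execute("DELETE FROM MYTABLE "
                "WHERE CREATED_AT < DATEADD(-30 DAY TO CURRENT_TIMESTAMP)")
    conn.commit()
    conn.close()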
Firebird 2.1 introduced global triggers fired on database connection/disconnection and on transaction starting/ending.
https://www.firebirdsql.org/file/documentation/chunk/en/refdocs/fblangref30/fblangref30-ddl-trigger.html
While it is not exactly what you need, it can be used to achieve similar results. Whether that similarity is good enough for you or not is for you to evaluate.
to delete a record after 30 days?
The question is what exactly you mean here. Would it still be okay if the row were deleted after 31 days, or after 40?
In our case, a client-server office application, there was no time pressure, and additionally there was no safe moment to delete as long as the programs had "documents" open.
We had to delete some global data, and while there were marks in the database recording which documents used that data and which documents were currently open, they were not very reliable. This also meant that the existing method of immediate deletion could occasionally lead to application crashes.
So we reformulated a problem similar to yours in the following way:
We needed rows not to be deleted immediately, but to be pending deletion for 30 days or more. Those records would be rendered in the application in a special way, as a warning to users, and with a way for them to cancel the deletion if they changed their mind (or if other users had different ideas).
The deletion would happen, in logical terms, "when there is no connected application". In technical terms that could mean either "when the first application connects, but before it starts actual (business-related) work" or "when the last application disconnects, after it has finished actual work". We settled on the latter, using an ON DISCONNECT global database trigger.
We had not only the main business-domain application but also a number of technical helper utilities, and from Firebird's point of view there is no difference between them. So we had to modify the login sequence of our main application: right after a successful login it registers its own CURRENT_CONNECTION in a special table. This is potentially slightly fragile.
The ON DISCONNECT trigger did three things:
it checked whether CURRENT_CONNECTION was in the table, and if it was, it called a special stored procedure, SP_LOCAL_CLEANUP;
it removed CURRENT_CONNECTION from the table (a BEFORE DELETE trigger on that table could have been the one to call the procedure, but we decided our helper utilities should have a way to hook in if they ever needed to, so the call was kept in the ON DISCONNECT trigger);
it checked whether that table (the list of known connected business-domain applications) had become empty, and if it had, it called another special stored procedure, SP_GLOBAL_CLEANUP.
Those stored procedures were "umbrella" procedures, consisting solely of calls to other procedures that did the actual work of checking for inconsistencies and fixing them, such as removing "this document is opened for editing" marks when an application (or computer, or network) had crashed without releasing the lock the normal way. This way we could add or remove functionality without breaking Firebird object dependency chains.
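A rough sketch of what such an ON DISCONNECT trigger could look like, created here through a Python driver (the APP_CONNECTIONS table, the trigger name, and the connection details are hypothetical; the umbrella procedures are the ones described above):

    import firebirdsql  # assumption: any Firebird DB-API driver would do

    ddl = """
    CREATE TRIGGER TRG_APP_DISCONNECT ON DISCONNECT
    AS
    BEGIN
      -- 1) registered business-domain connection? run the local cleanup
      IF (EXISTS(SELECT 1 FROM APP_CONNECTIONS
                 WHERE CONN_ID = CURRENT_CONNECTION)) THEN
        EXECUTE PROCEDURE SP_LOCAL_CLEANUP;
      -- 2) deregister this connection
      DELETE FROM APP_CONNECTIONS WHERE CONN_ID = CURRENT_CONNECTION;
      -- 3) last one out? run the global cleanup
      IF (NOT EXISTS(SELECT 1 FROM APP_CONNECTIONS)) THEN
        EXECUTE PROCEDURE SP_GLOBAL_CLEANUP;
    END
    """

    conn = firebirdsql.connect(database="/data/app.fdb",
                               user="SYSDBA", password="masterkey")
    conn.cursor().execute(ddl)
    conn.commit()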
In particular, one of the global sub-procedures looked at the "deletion pending" records and deleted those that had been "kept in the recycle bin" for more than 30 days. Actually, the records just had a column holding the planned deletion date, which could be more or less than 30 days away, but that is a technicality.
This meant the actual deletion happened "sometime after 30 days", and only when all main apps were shut down. When those apps were later run again, they re-read the global dictionary tables in their updated, pruned state. The applications were never again in an inconsistent state, using records removed from the database.
Potential fragile point: if users did not shut the application down at night but just went home, there might never be a "last application disconnected" state. That, however, would be a maintenance nightmare for their network admins (Windows updates and reboots, antivirus updates and reboots), so we documented the recommendation that admins make sure all users leave the database together at least once a week.
Potential fragile point: if the Firebird server crashes (not the applications, but the server engine), the "known connections" table is left with stale values. We considered this not to be a practical problem, as CURRENT_CONNECTION would then restart from 1 and count upward, eventually cleaning the table. But we also added a function to a helper app that uses SYSDBA and the monitoring tables to clean non-existent connections out of the table.
You can re-use this scheme if you are not under time pressure and are okay with the actual deletion being deferred for a few days.
You could also use an ON TRANSACTION START trigger instead, to shorten the delay to mere minutes, but I expect this would slow your application down badly, so I would advise against it.

Fix inconsistent state right away or lazily when data is requested

Our users go through several steps of a workflow; the further they go, the more objects we create. We also allow users to go back to Step#1 and change one of the existing objects, which may cause inconsistencies, so we must update/delete some of the objects at Step#2. I see 2 options:
Update/delete objects from Step#2 right away. This leads to:
An operation that's supposed to be a simple PATCH of an entity field becomes complicated. And it's a shared object between multiple workflows, so we'd have to add if-statements and do different things depending on the workflow.
Circular dependencies. Operations on Step#1 have to know about objects/operations on Step#2.
On each request in Step#1 we'd have to load data for Step#2 to determine whether Step#2 really needs to be updated. This slows down operations on Step#1: to change 1 record in the DB we'd have to load hundreds (or even thousands) of records for Step#2.
Many actions on Step#1 may require fixing state at Step#2, so we have to ensure we don't forget anything, today or in the future.
Fix Step#2 lazily, when the user goes there (our current approach). Step#2 will recognize that objects are inconsistent and fix them. This leaves just 1 place where we need to care, but:
Until the user opens Step#2, the DB will contain inconsistent objects. This hasn't caused any problems so far, but I can imagine it may complicate future SQL migrations.
We update DB state on a GET request. This doesn't seem like that big a deal since the GET stays idempotent anyway, but it still feels awkward.
Does anyone know better approaches? Or maybe improvements to these two?
Update
I haven't found a perfect solution, but eventually we implemented an improved version of #1. When updating state in Step#1 we also set a "needs to rebuild Step#2" flag; when the UI opens Step#2 it first checks this flag and issues a PUT to rebuild the state, and only then GETs Step#2.
This still means the DB state is inconsistent for some period of time, but at least we know that for sure from the flag in the DB, and if needed we could write migrations that take this flag into account. It also leaves room (if needed in the future) to create an async job that fixes the state.
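A condensed sketch of that flow, with an in-memory dictionary standing in for the DB and for the HTTP handlers (all names are made up):

    # In-memory stand-in for the DB; real handlers would sit behind PATCH/PUT routes.
    workflows = {7: {"step2_stale": False, "step2_objects": ["derived-a", "derived-b"]}}

    def patch_step1(workflow_id):
        # ... persist the Step#1 change itself ...
        workflows[workflow_id]["step2_stale"] = True    # flag instead of rebuilding now

    def put_rebuild_step2(workflow_id):
        wf = workflows[workflow_id]
        if wf["step2_stale"]:
            wf["step2_objects"] = ["rebuilt"]           # recompute the derived objects
            wf["step2_stale"] = False

    patch_step1(7)
    put_rebuild_step2(7)    # the UI calls this before GETting Step#2
    print(workflows[7])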
I think it is more flexible to separate the state from the context in which the objects are stored. Any creation of a new object at any step is then accompanied by preserving the invariants and consistency of the context.
There are separate rules for the states (rules for transitioning from one to another, and which objects are available for creation) and separate rules for the context (rules for its consistency, which are enforced every time it changes).
What about asynchronous cleanup of dirty data?
1) Whenever the user goes back to Step #1 and changes something, mark all related data as "dirty" (e.g. add links to it in a "DirtyData" table) and be done for now.
2) Have a DataCleanup worker (e.g. a separate thread or something similar) that constantly looks for data to clean up.
3) Before editing data for Step #2, check that the data is not dirty.
Depending on your logic, 3) might result in a user error (e.g. the user would need to repeat Step #2). If the DataCleanup worker has enough resources (i.e. it processes the DirtyData table almost instantaneously), that should happen only on very rare occasions. If that is not OK, you could opt for checking for dirty data on each fetch, but that could be expensive.
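A toy sketch of this scheme, with an in-process queue standing in for the DirtyData table (a real system would persist the dirty marks and run a proper worker):

    import queue, threading

    dirty_ids = set()          # fast "is this dirty?" lookup for step 3
    work = queue.Queue()       # stand-in for the DirtyData table

    def mark_dirty(object_id):             # step 1: user changed Step #1
        dirty_ids.add(object_id)
        work.put(object_id)

    def cleanup_worker():                  # step 2: background cleanup
        while True:
            object_id = work.get()         # blocks until there is work
            # ... rebuild/delete the Step #2 objects derived from object_id ...
            dirty_ids.discard(object_id)
            work.task_done()

    threading.Thread(target=cleanup_worker, daemon=True).start()

    mark_dirty(42)
    work.join()                            # wait for cleanup to finish (demo only)
    print(42 in dirty_ids)                 # step 3's check: False, safe to edit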
It sounds like you're familiar with the HTTP spec regarding GET requests, but for future readers:
Why shouldn't a GET request change data on the server?
Why is using a HTTP GET to update state on the server in a RESTful call incorrect?
For the other bullet under 2, we probably don't need a specification to agree that persisting valid data is preferable to persisting invalid data.
So what can we do about the bullets under 1, to avoid complex branching logic in a particular step as well as circular dependencies? My suggestion is an event-driven design. When Step#1 changes, it should fire a change event. In this scenario, Step#1 has no knowledge of the concrete listener(s) who may receive its events, so it remains decoupled from any complex handling logic.
There's probably no way to guarantee you won't forget anything in the future; but if every step in the workflow is defined as a listener, it forces you to consider change events to some extent every time you implement a new step.
One side note on granularity: if a step has many changes, it can batch its events up rather than fire each one individually. You can adjust the batch size for efficiency.
In summary, I would strongly consider the Observer design pattern.
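A minimal Observer sketch in Python (names are illustrative; in a real system the events and listeners would map onto your workflow steps):

    from collections import defaultdict

    class EventBus:
        """Steps publish change events; other steps subscribe to them."""
        def __init__(self):
            self._listeners = defaultdict(list)

        def subscribe(self, event, handler):
            self._listeners[event].append(handler)

        def publish(self, event, payload):
            for handler in self._listeners[event]:
                handler(payload)

    bus = EventBus()

    # Step#2 registers interest in Step#1 changes; Step#1 never references Step#2.
    bus.subscribe("step1.changed",
                  lambda object_id: print(f"rebuilding Step#2 objects for {object_id}"))

    # Step#1 only publishes the fact that something changed.
    bus.publish("step1.changed", 42)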

Redis atomic pop and add to sorted set, BRPOPLPUSH equivalent

I have one redis list "waiting" and one redis list "partying".
I have a long-running process that safely blocks until an item arrives on the "waiting" list, then pops it and pushes it onto the "partying" list atomically using BRPOPLPUSH. Awesome.
Users in the "waiting" list are repeatedly querying "am I in the "partying" list yet?", but there is no fast (i.e. better than O(n)) way to check whether a user is in a Redis list: you have to grab the whole list and loop through it.
So I'm resorting to switching from a Redis list to a Redis sorted set, with the score being the unix timestamp of when the user joined the "waiting" sorted set. I can block-pop the lowest score (the user at the head of the queue). Using sorted sets, I can use ZSCORE to check in O(1) time whether a user is on either list, so it's looking hopeful.
How can I perform the nice atomic equivalent of BRPOPLPUSH on sorted sets?
It's like I need a mythical BZPOPMIN & ZADD = BZPOPMINZADD. If the process dies between these two commands, a user will effectively disappear from both sets.
Looking into MULTI/EXEC transactions in Redis, they are not what they appear to be at first glance; they're more like pipelines, in that I can't take the result of the first command (BZPOPMIN) and feed it into the second (ZADD). I'm also very suspicious of putting the blocking BZPOPMIN inside the MULTI; am I right to be?
How can I perform the nice atomic equivalent of BRPOPLPUSH on sorted sets? 
Sorry, you can't. We actually discussed this when the ZPOP family was added and decided against it: "However I'm not for the BZPOPZADD part, because instead experience with lists shown that this is not a good idea in general, unfortunately, and that that adding safety of message processing may be used other means. The worst thing abut BZPOPZADD and BRPOPLPUSH and so forth are the cascading effects, they create a lot of problems in replication for instance, and our BRPOPLPUSH replication is still not correct in certain ways (we can talk about it if you want)." (ref: https://github.com/antirez/redis/pull/4879#issuecomment-389116241)
I'm very suspicious of putting the blocking BZPOPMIN inside the MULTI; am I right to be?
Definitely, and blocking commands can't be called inside a transaction anyway.
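If you can give up the blocking part and poll instead, a Lua script is a common workaround, since Redis executes scripts atomically. This is a sketch with redis-py (key names from the question; ZPOPMIN needs Redis 5.0+), not the mythical blocking command:

    import redis

    r = redis.Redis()

    # Pop the lowest-scored member and add it to the other set in one atomic step.
    move_min = r.register_script("""
    local popped = redis.call('ZPOPMIN', KEYS[1])
    if popped[1] then
      redis.call('ZADD', KEYS[2], popped[2], popped[1])
    end
    return popped[1]
    """)

    member = move_min(keys=["waiting", "partying"])
    if member is None:
        pass  # nothing waiting; sleep briefly and poll again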

Akka.net persistence delete messages from a certain sequence number

Is there a way to delete messages after a certain sequence number in Akka.NET? I know that DeleteMessages(seqNumber) deletes all messages before a given sequence number; is there a way to delete after a seqNumber? The main goal would be to revert to a previous state (perhaps those messages were created in error).
It's obviously possible to edit the database manually (or set is_deleted to true for those events), but I'm not sure that would be a great idea.
Thanks
DeleteMessages(seqNr) exists only for the purpose of saving space when you're using event sourcing with snapshots and your system can tolerate an incomplete history of events.
Deleting events goes against event sourcing as a concept. The purpose of an event is to describe a fact that has already happened. You cannot alter the past, as other consumers may already have read that event and updated some state or performed an action based on it.
Correcting the effects of events in event-sourced systems usually comes down to producing a compensating event that reverses the effects of the one you want to fix.
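As a toy illustration of a compensating event (plain Python, not the Akka.NET API):

    # An append-only journal; the second event turns out to be a mistake.
    journal = [("ItemAdded", 5), ("ItemAdded", 3)]

    # Instead of deleting the bad event, append one that reverses its effect.
    journal.append(("ItemRemoved", 3))

    def replay(events):
        total = 0
        for kind, qty in events:
            total += qty if kind == "ItemAdded" else -qty
        return total

    print(replay(journal))  # 5 - the history is intact, the state is corrected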

Firebird lock table / lock record

Suppose you have one table for a desktop application with several users.
When a user opens a record, I want to lock that record. I have tried the WITH LOCK statement, and it works fine.
But when a second user wants to update the same record, I want to show a message: "Sorry, you cannot work on this order because it is locked. Somebody else opened this record before you." Firebird waits for the first user to commit/rollback. I don't want to wait; I want to show an error message. Is there a simple way to ask Firebird for a record's lock status?
Is there a way to lock a whole table? Or to take a semaphore/mutex (like GET_LOCK in MySQL)?
I have tried RESERVING on the SET TRANSACTION statement, but it does not work.
My wish is to display a message to the user, not to wait.
Thanks
If you don't want to wait, configure your transaction with NO WAIT or a wait timeout. However, controlling business rules like this through database transactions is not advisable, as it requires long-running transactions, which inhibit garbage collection, lengthen the chain of interesting transactions, and increase the chance of update conflicts.
I'd advise using different options instead, like:
First to update wins
Change detection (e.g. by a timestamp or record-version counter that is also used as a condition in the update statement), allowing the user to overwrite or abandon their update (or maybe merge); see the sketch after this list
Explicit reservation by updating the record (setting the username) in a separate transaction. This might require cleanup, or the ability for a user to break the reservation (e.g. if someone has had it open for too long).
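A rough sketch of the change-detection option (the ORDERS table and its REC_VERSION column are hypothetical; the firebirdsql driver and the DB-API rowcount attribute are assumptions):

    import firebirdsql  # assumption: any Firebird DB-API driver would do

    conn = firebirdsql.connect(database="/data/app.fdb",
                               user="SYSDBA", password="masterkey")
    cur = conn.cursor()

    # Read the record together with its current version counter.
    cur.execute("SELECT STATUS, REC_VERSION FROM ORDERS WHERE ID = ?", (1,))
    status, version = cur.fetchone()

    # The update only succeeds if nobody changed the row in the meantime.
    cur.execute("UPDATE ORDERS SET STATUS = ?, REC_VERSION = REC_VERSION + 1 "
                "WHERE ID = ? AND REC_VERSION = ?",
                ("shipped", 1, version))
    if cur.rowcount == 0:
        print("Sorry, somebody else changed this order - reload and retry.")
    else:
        conn.commit()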
Note that Firebird uses multiversion concurrency control (MVCC), so explicit locking is not really natural for it. See also this answer to Locking tables firebird, delphi.
Locking tables using RESERVING should be possible, but I have never used it, so I am not entirely sure how to use it, although you probably also need to specify FOR PROTECTED READ (see the InterBase 6.0 Embedded SQL Guide, pages 70/71).