Detect plugin rollback - dynamics-crm-2013

Pretty simple question, but I can't find anything about it.
I have a plugin in Dynamics CRM 2013 that listens to the create and update events of an account. Depending on some business rules, some info about this account is written to an external webservice.
However, sometimes a create or update action can be rolled back from outside the scope of my plugin (for example, by a third-party plugin), so the account won't be created or updated. The CRM plugin model handles this nicely by rolling back every SDK call made in the transaction. But as I have written some info to an external service, I need to know when a rollback occurred so that I can roll back the external operation manually.
Is there any way to detect a rollback in the plugin execution pipeline and execute some custom code? Alternative solutions are welcome too.
Thanks in advance.

There is no trigger you can subscribe to when the plugin's transaction rolls back, but you can determine after the fact that it happened.
Define a new entity (call it "TransactionTracker" or whatever makes sense) with these attributes:
An OptionSet attribute (call it "RollbackAction", or again, whatever makes sense).
A text attribute that will serve as a data field.
Define a new workflow that gets kicked off when a TransactionTracker record is created:
Have its first step be a Wait Condition, defined as a process timeout that waits for one minute.
Have its next step be a custom workflow activity that uses the RollbackAction value to determine how to parse the text attribute and check whether the entity has been rolled back (if it was a Create, does the record exist? If it was an Update, is the entity's Modified On date >= the TransactionTracker's Created On date?).
If it has been rolled back, perform whatever action is necessary; if it hasn't been rolled back, exit the workflow (or optionally delete the TransactionTracker record).
Within your plugin, before making the external call, create a new OrganizationServiceProxy (since you are creating it yourself rather than using the existing one, it operates outside the transaction and therefore will not get rolled back).
Create a TransactionTracker record with that out-of-transaction service, populating the attributes as necessary. Minimal sketches of both pieces follow below.
You may need to tweak the timeout, but besides that, it should work fine.
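
For illustration, a minimal sketch of the plugin side, assuming a scenario where you can construct your own connection (the entity, attribute, and helper names are hypothetical, and the service URL and credentials would come from your own configuration):

using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

public static class TransactionTrackerHelper
{
    // Call this from the plugin just before making the external web-service call.
    public static void LogPendingExternalCall(Uri orgServiceUri, ClientCredentials credentials,
        Guid accountId, int rollbackAction)
    {
        // Because this proxy is constructed by us rather than obtained from
        // IServiceProvider, its calls run outside the pipeline transaction
        // and will not be rolled back with it.
        using (var outOfTransactionService = new OrganizationServiceProxy(orgServiceUri, null, credentials, null))
        {
            var tracker = new Entity("new_transactiontracker");
            tracker["new_rollbackaction"] = new OptionSetValue(rollbackAction); // e.g. 1 = Create, 2 = Update
            tracker["new_data"] = accountId.ToString();                         // payload the workflow will parse
            outOfTransactionService.Create(tracker);                            // survives a pipeline rollback
        }
    }
}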
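
And a sketch of the rollback check inside the custom workflow activity, under the same hypothetical names (the option values and checks simply mirror the logic described above):

using System;
using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Workflow;

public class DetectRollback : CodeActivity
{
    private const int CreateAction = 1; // illustrative OptionSet values
    private const int UpdateAction = 2;

    protected override void Execute(CodeActivityContext executionContext)
    {
        IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
        IOrganizationServiceFactory factory = executionContext.GetExtension<IOrganizationServiceFactory>();
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // The workflow was started by the creation of the TransactionTracker record.
        Entity tracker = service.Retrieve("new_transactiontracker", context.PrimaryEntityId,
            new ColumnSet("new_rollbackaction", "new_data", "createdon"));
        int action = tracker.GetAttributeValue<OptionSetValue>("new_rollbackaction").Value;
        var accountId = new Guid(tracker.GetAttributeValue<string>("new_data"));

        // Look the account up without throwing if it is absent.
        var query = new QueryExpression("account") { ColumnSet = new ColumnSet("modifiedon") };
        query.Criteria.AddCondition("accountid", ConditionOperator.Equal, accountId);
        EntityCollection found = service.RetrieveMultiple(query);

        bool rolledBack;
        if (action == CreateAction)
            rolledBack = found.Entities.Count == 0; // created record vanished => rolled back
        else
            rolledBack = found.Entities.Count == 0
                || found.Entities[0].GetAttributeValue<DateTime>("modifiedon")
                   < tracker.GetAttributeValue<DateTime>("createdon"); // update never landed

        if (rolledBack)
        {
            // Compensate: manually undo the earlier call to the external service here.
        }
        else
        {
            service.Delete(tracker.LogicalName, tracker.Id); // optional cleanup
        }
    }
}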

Consistent database update in SAP/ABAP O/O

I need to ensure consistent editing of SAP tables for Fiori Backend calls.
I have multiple situations where a single call to the backend changes more than one table on the backend side. The changes are written to a transport request.
I want to implement a stable, error-free solution, so that if the first table was changed fine but the second table failed (duplicate entry, missing authorization), the whole set of changes is rejected.
However, it seems that only "call function module in update task" is available, which requires putting all the logic of every backend DB change into a function module.
Am I missing something, or does SAP really have no object-oriented way to perform consistent database updates?
The only workaround I have is to check all these preconditions up front, which is not so nice.
@Florian: A backend call is, for example, the action "Approve" on a document, which changes: 1) the document header table, where the status field changes from "in workflow" to something else; and 2) the approval table, where the current approver's entry is changed. Or it is adding a new document, where 1) a document header table entry is added and 2) a document history table entry is added.
I do not want to call function modules; I want to implement the solution using only classes and class methods. I worked with other ERP systems earlier, and they have statements like "Start transaction", "Commit transaction" and "Rollback transaction". "Start transaction" begins a LUW, which is only committed on "Commit transaction"; if you call "Rollback transaction", all current database changes of that LUW are cancelled. I wonder why modern SAP has none of these except the old update-task function modules (or is it just me not noticing the correct way to handle this?).
Calling an update function module IN UPDATE TASK is the only way. Here is how it works in a Fiori transactional app, for example:
Table A: you run some business logic and everything is fine, so you call the update function module IN UPDATE TASK to apply the CUD (create/update/delete) changes to database table A.
Table B: you run some business logic and there is an issue, say with authorization, so you raise an exception (error). The UPDATE TASK call for database table B is never made.
After all the business logic has been processed, if any exception was raised, the SADL/Gateway layer catches it and calls ROLLBACK WORK, which means everything is rolled back. Otherwise it calls COMMIT WORK, which means the CUD changes are applied consistently to all tables.
By the way, if anything abnormal such as a DUPLICATE ENTRY happens within the update function module, then depending on your coding you can either ignore it or raise a MESSAGE of type E to abort the DB operations.
From my point of view, though, those kinds of issues should be caught before you call the update function module.

Data Factory - Data Lake File Created Event Trigger fires twice

I'm developing a pipeline in Azure Data Factory v2. It has a very simple Copy activity. The pipeline has to start when a file is added to Azure Data Lake Store Gen 2. In order to do that I have created an event trigger attached to ADLS Gen2 that fires on Blob created, assigned the trigger to the pipeline, and associated the trigger data @triggerBody().fileName with a pipeline parameter.
To test this I'm using Azure Storage Explorer and uploading a file to the data lake. The problem is that the trigger in Data Factory fires twice, causing the pipeline to start twice. The first pipeline run finishes as expected and the second one stays in processing.
Has anyone faced this issue? I have tried deleting the trigger in Data Factory and creating a new one, but the result was the same.
I'm having the same issue myself.
When writing a file to ADLS Gen2 there is an initial CreateFile operation followed by a FlushWithClose operation, and both trigger a Microsoft.Storage.BlobCreated event type.
https://learn.microsoft.com/en-us/azure/event-grid/event-schema-blob-storage
If you want to ensure that the Microsoft.Storage.BlobCreated event is triggered only when a Block Blob is completely committed, filter the event for the FlushWithClose REST API call. This API call triggers the Microsoft.Storage.BlobCreated event only after data is fully committed to a Block Blob.
https://learn.microsoft.com/en-us/azure/event-grid/how-to-filter-events
You can filter out the CreateFile operation by navigating to Event Subscriptions in the Azure portal and choosing the correct topic type (Storage Accounts), subscription, and location. Once you've done that you should be able to see the trigger and update its filter settings. I removed CreateFile.
On your Trigger definition, set 'Ignore empty blobs' to Yes.
The comment from @dtape is probably what's happening underneath; toggling 'Ignore empty blobs' on effectively filters out the Create portion (but not the data-written part).
This fixed the problem for me.

How to explicitly call TIBDataSet.RefreshSQL

I have a list of records in a TIBDataSet (Embarcadero Delphi) and I need to locate and modify one record in this list. There is a chance that the underlying database record has been changed by other queries and operations since the TIBDataSet was opened. Therefore I would like to call RefreshSQL for this one record (to get the latest data) before making any changes and before posting. Is it possible to do so, and how?
I am not concerned about the state of other records, and I am sure that the record under consideration will always be updated, with those updates committed, before I need to change it from the TIBDataSet.
As far as I understand, RefreshSQL is used to automatically retrieve changes after the TIBDataSet has posted updates to the database. But I need manual (explicit) retrieval of the latest state before doing updates.
Try adding a TButton to your form and add the following code to its OnClick handler:
procedure TForm1.btnRefreshClick(Sender: TObject);
begin
  IBQuery1.Refresh; // or whatever your IBX dataset is called
end;
and set a breakpoint on it.
Then run your app and another one (e.g. 2nd instance of it) and change a row in the second app, and commit it back to the db.
Navigate to the changed row in your app and click btnRefresh and use the debugger to trace execution.
You'll find that TDataSet.Refresh calls its InternalRefresh, which in turn calls TIBCustomDataSet.InternalRefresh. That calls the inherited InternalRefresh, which does nothing, followed by TIBCustomDataSet.InternalRefreshRow. If you trace into that, you'll find that it constructs a temporary IB query to retrieve the current row from the server, which should give you what you want before making changes yourself.
So that should do what you want. The problem is, it can be thoroughly confusing trying to monitor the data in two applications because they may be in different transaction states. So you are rather dependent on other users' apps "playing the transactional game" with you, so everyone sees a consistent view of the data.

iOS Rolling out app updates. Keeping user data intact when DB update required

I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app, and I have made a few changes to the SQLite DB since then; in the next release I will need to update the DB structure while retaining the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's database file (which lives in the Documents folder, not in the bundle) but rather alter its structure using SQL queries.
This would involve tracking the changes made to the database since the previous release, scripting them as SQL queries, and running them to bring the DB to the latest revision. I will also need to keep a field in the database to track its version number (kept in line with the app version for simplicity).
Unless there are specific hooks or delegate methods that fire on the first run after an update, I will put calls to this logic at the very beginning of the app delegate, before anything else runs.
While doing this I will display "Updating app" or something to the user.
Next: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date, and the app won't function properly since it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? Or should I just test, test, test until everything is solid on my side, and if an error occurs on the user's side it's something else, in which case I can't do anything about it and can only discard the data?
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better to display an error message and perhaps let the user contact you with a bug report; then you can release an update to your app which can hopefully do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course it all depends on the complexity of the migration process.
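
For illustration, a minimal sketch of that migration pattern, written here in C# with Microsoft.Data.Sqlite for concreteness (the table names and scripts are made up; on iOS the same structure applies with the SQLite C API). It uses SQLite's built-in user_version pragma as the version field, and runs each step in its own transaction so a failure leaves the database at a known version rather than forcing you to discard the user's data:

using System;
using System.Collections.Generic;
using Microsoft.Data.Sqlite;

public static class Migrator
{
    // Each script moves the schema up by exactly one version.
    private static readonly Dictionary<int, string> Migrations = new Dictionary<int, string>
    {
        { 1, "ALTER TABLE note ADD COLUMN created_at TEXT;" },
        { 2, "CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT NOT NULL);" },
    };

    public static void MigrateToLatest(string dbPath)
    {
        using (var connection = new SqliteConnection("Data Source=" + dbPath))
        {
            connection.Open();
            int current = GetUserVersion(connection);

            for (int v = current + 1; Migrations.ContainsKey(v); v++)
            {
                // One transaction per step: on failure the DB stays at version v-1.
                using (var tx = connection.BeginTransaction())
                {
                    Execute(connection, tx, Migrations[v]);
                    Execute(connection, tx, "PRAGMA user_version = " + v + ";");
                    tx.Commit();
                }
            }
        }
    }

    private static int GetUserVersion(SqliteConnection connection)
    {
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "PRAGMA user_version;";
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }

    private static void Execute(SqliteConnection connection, SqliteTransaction tx, string sql)
    {
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = sql;
            cmd.Transaction = tx;
            cmd.ExecuteNonQuery();
        }
    }
}

If a step does fail, the app can show the "update failed" message, leave the file intact, and retry after a bug-fix release, which is usually kinder than replacing the user's database.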

Synchronizing NHibernate Session with database - the reverse way

I am using NHibernate for a project, and I am an absolute beginner. I am fetching some objects from a table and showing them in a form where they can be edited. If a user inserts a new object into the table from some other window, I want that newly inserted object to show up in the edit window. My application uses a tabbed window interface, so the user can have the insert window and the edit window open at the same time.
So basically what I need is a way to determine whether a newly created object exists in the database that has not yet been fetched by my ISession, and if so, fetch that new object from the database. In other words, I need to synchronize my session with the database, just like the Flush method but in the reverse direction.
Can anyone help me?
A publish/subscribe approach works well for this. Check out the Publishing Events part of Ayende's sample desktop application. Basically, after you've added a new item, you publish that information, and other parts of your application that subscribed can update their lists accordingly.
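
For illustration, a minimal hand-rolled version of that publish/subscribe idea in C# (the type names and the Customer entity are hypothetical, not the actual types from Ayende's sample):

using System;
using System.Collections.Generic;

public class EntityCreated
{
    public Type EntityType { get; set; }
    public object Id { get; set; }
}

public static class EventAggregator
{
    private static readonly List<Action<EntityCreated>> subscribers = new List<Action<EntityCreated>>();

    public static void Subscribe(Action<EntityCreated> handler)
    {
        subscribers.Add(handler);
    }

    public static void Publish(EntityCreated message)
    {
        // Notify every open window/tab that registered an interest.
        foreach (var handler in subscribers)
            handler(message);
    }
}

// Insert window, after session.Save(...) and committing its transaction:
//   EventAggregator.Publish(new EntityCreated { EntityType = typeof(Customer), Id = newCustomerId });
//
// Edit window, when it opens:
//   EventAggregator.Subscribe(msg =>
//   {
//       if (msg.EntityType == typeof(Customer))
//           customers.Add(editSession.Get<Customer>(msg.Id)); // loads the new row into this session
//   });

This way the edit window's ISession only goes to the database for the one object it was told about, instead of polling to synchronize the whole session.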
You are taking the path to NHibernate Hell.
Be sure to work out your infrastructure (i.e. defining interfaces, session-management patterns, and a notification pattern) and isolate these non-business utilities from the rest of your code before using NHibernate to implement them.
Good luck.