How to explicitly call TIBDataSet.RefreshSQL

I have a list of records in a TIBDataSet (Embarcadero Delphi) and I need to locate and modify one record in this list. There is a chance that the underlying database record has been changed by other queries and operations since the TIBDataSet was opened. Therefore I would like to call RefreshSQL for this one record (to get the latest data) before making any changes and before posting. Is it possible to do so, and how?
I am not concerned about the state of other records, and I am sure that the record under consideration will always be updated and those updates will be committed before I need to change this record from the TIBDataSet.
As far as I understand, RefreshSQL is used to automatically retrieve changes after the TIBDataSet has posted updates to the database. But I need manual (explicit) retrieval of the latest state before doing updates.

Try adding a TButton to your form and add the following code to its OnClick handler:
procedure TForm1.btnRefreshClick(Sender: TObject);
begin
IBQuery1.Refresh; // or whatever your IBX dataset is called
end;
and set a breakpoint on it.
Then run your app and another one (e.g. 2nd instance of it) and change a row in the second app, and commit it back to the db.
Navigate to the changed row in your app and click btnRefresh and use the debugger to trace execution.
You'll find that TDataSet.Refresh calls its InternalRefresh which in turn calls TIBCustomDataSet.InternalRefresh. That calls inherited InternalRefresh, which does nothing, followed by TIBCustomDataSet.InternalRefreshRow. If you trace into that, you'll find that it constructs a temporary IB query to retrieve the current row from the server, which should give you what you want before making changes yourself.
So that should do what you want. The problem is, it can be thoroughly confusing trying to monitor the data in two applications because they may be in different transaction states. So you are rather dependent on other users' apps "playing the transactional game" with you, so everyone sees a consistent view of the data.
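So the explicit call you're after is simply Refresh before you edit. A minimal sketch (the dataset and field names are placeholders):
// Re-read the current record from the server, then modify it.
IBDataSet1.Refresh;   // runs the row refresh described above
IBDataSet1.Edit;
IBDataSet1.FieldByName('SOME_FIELD').AsString := 'new value';
IBDataSet1.Post;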

Related

How to force a cache refresh in MS Access

I am working on migrating a MS Access Database over to a newer SQL platform.
But, with all of the users who are currently using it, we're migrating slowly/carefully.
The first step is that we are re-writing the VBA code into C#, which is then deployed in a .dll along with the database.
Now, the VBA code calls into the C# to do the business logic, then the VBA continues to do the displays/UI, while Access still hosts the database.
The problem comes in that I have a report that is being run after the business logic from the C# in one place, and apparently MS Access has a cache, which clears every 5 seconds. So, the transaction that occurs in the C# code writes to the database, but the VBA code is still using the cache. This is causing errors, as the records added to the database (which the VBA report is trying to report on) don't exist in the cache yet...
I'm guessing that the C# .dll must be getting treated as a "second connection" to the MS Access database, which is what seems to typically cause this error in my searches (thinks that one process is writing, and the other is reading).
Since the cache is cleared out every 5 seconds, we can just put the process to sleep, and wake it up after 5 seconds, and then run the report, but that's pretty terrible for an end user.
And, making things difficult, the cache seems like it only gets used in the deployed version (so, when running from source / in debug mode, the error never happens).
Doing some searches, there seems to be plenty of people who have said "just refresh the cache." But, the question is: within VBA, how do you refresh the cache?
Any advice would be welcome.
Thanks
I've been fighting the same issue for years as I write a lot of tools around an old Powerbuilder application that has an Access MDB back end.
The cache does exist and it is VERY real. When data is inserted on a different connection than it is queried on, the cache can be directly observed and measured. It was also documented by Microsoft before they blackholed a bunch of their old articles...
Microsoft Jet has a read-cache that is updated every PageTimeout milliseconds (default is 5000ms = 5 seconds). It also has a lazy-write mechanism that operates on a separate thread to main processing and thus writes changes to disk asynchronously. These two mechanisms help boost performance, but in certain situations that require high concurrency, they may create problems.
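I have not tried it, but DAO appears to expose that timeout: DBEngine.SetOption overrides Jet registry settings for the current session (the value below is illustrative):
' Shrink Jet's read-cache timeout (PageTimeout, in milliseconds).
DBEngine.SetOption dbPageTimeout, 250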
I've found a couple of workarounds that are not the best, but somewhat make do until I find something better or can re-write the app with a better back-end database.
The seemingly best answer I've found (that may actually work for you since you say you need VBA) is to use JRO.RefreshCache. I've been trying to figure out how to implement this using C# or VB.net without any luck. Below is a link to a code example where you execute the RefreshCache method on your 2nd connection that needs to pull the data. I have not tested this myself.
https://documentation.help/MSJRO/jrmthrefreshcachex.htm
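For reference, the linked example boils down to a few lines of VBA inside Access (again, untested by me; it needs a reference to the Microsoft Jet and Replication Objects library):
Dim je As New JRO.JetEngine
' Flush Jet's read cache and re-read current data from the file.
' CurrentProject.Connection is the ADO connection Access itself uses.
je.RefreshCache CurrentProject.Connection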
A workaround I've found that will deliver the query results within 500ms to 1000ms of insert time (instead of anywhere between 500 and 5000 ms - or more):
Use System.Data.ODBC instead of OleDB, with connection string: Driver={Microsoft Access Driver (*.mdb, *.accdb)};Dbq=;
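In C#, that workaround is just a different provider; a sketch (the database path and table name are made up):
using System;
using System.Data.Odbc;

// Reading through the Access ODBC driver instead of OleDb shrinks the
// stale-read window to roughly 0.5-1s after an insert (see above).
var connStr = @"Driver={Microsoft Access Driver (*.mdb, *.accdb)};Dbq=C:\data\app.mdb;";
using (var conn = new OdbcConnection(connStr))
using (var cmd = new OdbcCommand("SELECT COUNT(*) FROM Orders", conn))
{
    conn.Open();
    int rows = Convert.ToInt32(cmd.ExecuteScalar());
    Console.WriteLine(rows);
}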
If someone knows how to use the JRO.RefreshCache method with OLEDB and C# or VB.net, I'd be forever grateful. I believe the issue is it's looking for an ADO connection to be passed in, not an OLEDB connection.
I'm not aware of ANYTHING suggesting that some 5-second cache exists. Where did this idea come from?
Furthermore, if you have 5 users, then you're not going to be able to update their caches, are you?
In other words, the issue of some cache for one user is still not going to solve or work with multiple users anyway, is it?
The simple matter is that if you load up a form with 100 records, and other users are ALSO working on those 100 rows, then no user will see the others' changes until such time as you tell Access to re-load the form.
You can do this with a me.Refresh in the form, and then it will show changes made by other users (or even your c# code!!!).
However, that's not really the solution here.
How does near EVERY system deal with this issue?
Answer:
You don't, you "design" the software to take the user work flow into account.
So, in place of loading up a form with 100 rows of data? (Which you should not do, unless a SUPER DUPER reason exists for doing so.)
You provide a UI in which the user FIRST searches for whatever it is they want to work on.
In other words, say you just booked a user on a tour. Now, they call the office back, and want to change some details of that tour. But, a different tour staff might pick up the phone. So, now a 2nd user opens the tour?
So, you solve that issue by NOT loading all the tours into that form in the first place.
You provide a search screen, so they can search for the user, find the user, maybe type in an invoice number or whatever.
You display the results in a pick list, and then launch the form to the ONE record (and perhaps detail records from child tables).
So there's no more a concept of a cache in Access than there is in c#.
However, if you load up a datatable in c#, and then display that data?
Well, what about the other users on that system? They will not see changes to that data ANY MORE than the current Access form does.
So, if you want to update some data in c#? Then fine, but you need/want to do two things:
First, before you call any c# code that may update the current form record, you need to FORCE a data save of that current record BEFORE you call any code, be it VBA code or c# code, that is going to update the current record the user is working on.
You can save the current record in Access in MANY different ways, but the typical approach is:
' single record save - current record
If Me.Dirty Then Me.Dirty = False

' VBA or c# code goes here.

' optional: refresh the current form to reflect changes
Me.Refresh
So, in most cases, it is the "design" of your software that will solve this issue.
For example, in the tour example, or in fact ANY system, the user can't work, can't update, and can't do their job UNLESS they first find/search and have a means to bring up that form + record data in the first place.
So, ANY typical good design will:
Ask the user for that name, invoice number or whatever.
Display the results of the search, and THEN allow the user to pick the record/data to work on. When they are done, they close that form and are RIGHT BACK to the search form to do battle with the next customer or task or phone call or whatever.
So, a search form might look like this:
In the above, I typed in "smi", and a pick list is then displayed.
The user can further type in say part of the first name, and thus now get this:
So, maybe they type in a invoice number, customer number, booking number or whatever.
So, you display the results, and then they can select the row or "thing" to work on.
Thus, we click on the row (or the above glasses button), and then jump to the ONE record.
So, the user does whatever they have to do with the customer. Now, when done, they close the ONE thing, the ONE main record.
This not only saves the data (so others in the office can now use that booking data), but it also means they are NOW right back at the search screen, ready to do battle with the next customer.
So, not only do we have a VERY bandwidth-friendly design (we only pull the one main record into that form), but it is also better for work flow.
The Access form's cache thus becomes a non-issue, since we are only dealing with the one record.
And as I pointed out, if the system is multi-user, then you're NOT going to be able to update and deal with multiple users' cached data anyway, are you?
Think of ANY system you EVER used from a software point of view.
When you use google, does it download the WHOLE internet, and then you use ctrl-f to search megs and megs of data in the browser?
Nope!
you search first, get a list of that search, and THEN pick one!!
And when that list is displayed, maybe others on the internet are updating and adding new data - but if that was cached in your browser, then it would not work!!!
And the same goes for a desktop accounting system. You don't load up all accounts and THEN have the user go ctrl-f to search all the data. You search for the customer or invoice number and PICK ONE to work on.
And it does not make sense to load up a form with 1000 customers and then go ctrl-f to find that customer. Same goes for a bank machine. It does not download ALL customers and THEN let you search. It asks you FIRST to get what you need. So, be it browser based, desktop based, or JUST ABOUT ANY software you use?
You pretty much eliminate the cache issue, since you are not pre-loading boatloads of data, but asking and letting the user search for the data they need.
So, in regards to the Access form data and cache?
If you are on a form, and call VBA code, or c# code or whatever?
If that code updates the current form, you have NO MORE OR LESS of an issue when calling VBA code or c# code!!! If that code updates the current form, and the record is dirty (has pending edits), then you get that message about the current form's record having been updated by another user!!!
So, your cache issue does NOT IN ANY WAY exist MORE or LESS as an issue in typical Access software.
As a general rule, if you are on a form with pending edits, and say you want to pop up some form to edit related data?
You have to ensure that pending edits are SAVED before you launch a form that can edit the same data, or run code that can/may edit that data.
As a result, ZERO cache issues should exist, and they no more and no less exist when calling SQL or VBA update code in a form than when calling some c# code from that form.
So, save the pending edits for that form first.
Then run your VBA, SQL, or c# code.
And then do a me.Refresh to display any changes made by those external routines.
There is no documentation, nor ANY article I can find, that suggests some kind of 5-second cache or update - it is an urban myth. And your software challenge here in regards to using c#, VBA, or even SQL Server stored procedures?
They are all the same issue, and I dare say that often Access is used as a front end to SQL Server, and ALL OF the SAME issues exist when using SQL Server with ms-access.

Consistent database update in SAP/ABAP O/O

I need to ensure consistent editing of SAP tables for Fiori Backend calls.
I have multiple situations where a single call to the backend changes more than one table on the backend side. The changes are written to a transport request.
I want to implement error-free stable solution, so that if first table was changed fine, but second table failed (duplicate entry, missing authorization), the whole bunch of changes is rejected.
However, it seems that only "PERFORM/CALL FUNCTION ... IN UPDATE TASK" is available, which requires putting all the logic of every backend db change into a FM.
Am I missing something, or does SAP really have no Object-Oriented way to perform consistent database updates?
The only workaround I have is to check all these preconditions beforehand, which is not so nice anymore.
@Florian: A backend call is, for example, the action "Approve" on a document, which changes: 1) the document header table, where the status field changes from "in workflow" to something else; 2) the approval table, where the current approver's entry is changed. Or it is adding a new document, where 1) a document header table entry is added and 2) a document history table entry is added.
I do not want to call function modules; I want to implement the solution using only classes and class methods. I worked with other ERP systems earlier, and they have statements like "Start transaction", "Commit transaction" or "Rollback transaction". "Start transaction" means you start a LUW, which is only committed on "Commit transaction"; if you call "Rollback transaction", all current database changes of that LUW are cancelled. I wonder why modern SAP has none of these except for the old update-task FM mechanism (or is it just me not noticing the correct way to do this).
CALL FUNCTION ... IN UPDATE TASK is the only way. Here is how it works in a Fiori transactional app, for example:
Database table A: You do some business logic, everything is fine. You call the update task to CUD (create/update/delete) database table A.
Database table B: You do some business logic, there is an issue regarding authorization, and you raise an exception (error). The UPDATE TASK call to CUD database table B is NOT made.
After all the business logic is processed, in case any exception was raised, the SADL/Gateway layer catches the exception and calls ROLLBACK WORK, which means everything is rolled back. Otherwise, if there are no errors, it calls COMMIT WORK, which means consistent CUDs to all tables.
By the way, if anything abnormal like a DUPLICATE ENTRY happens within the update function module then, depending on your coding, you can ignore it or raise a MESSAGE of type E to abort the DB operations.
From my point of view, those kinds of issues should be caught before you call the update function module.
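A minimal sketch of the pattern (the two update function modules are hypothetical and would be created with processing type "Update module"):
* Register both changes in the same SAP LUW; nothing runs yet.
CALL FUNCTION 'Z_UPD_DOC_HEADER' IN UPDATE TASK
  EXPORTING
    is_header = ls_header.

CALL FUNCTION 'Z_UPD_APPROVAL' IN UPDATE TASK
  EXPORTING
    is_approval = ls_approval.

* COMMIT WORK executes both registered updates in one database LUW;
* ROLLBACK WORK instead would discard both registrations.
COMMIT WORK.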

Instantly "locking" a record in multi-user Access environment

So, record-locking in Access is pretty awful. I can't use the built-in record locking because it locks a "page" of records instead of just the individual records (I've tried changing the settings for using record-level locking, but it's still locking a page instead of just one record), but even if I could get that working, it wouldn't solve my issue because the record doesn't lock until the user starts to make changes in the form.
The issue is, when two people open the same record, they can start making changes and both save (thus overwriting the earlier change). To make matters worse, there are listboxes on the form that link to other tables (keyed on an ID) and the changes they make to those tables are then overwritten by any change that comes after if they both opened the same record.
Long story short, I need to make sure it's impossible for two people to even open the same record at the same time (regardless of whether or not they've made any edits to it yet).
To do this, I added a field to the table which indicates if a record has been locked by a user. When they open a form, it sets their name in the field and other users who try to open that record get a notification that it's already locked. The problem is, this "lock" isn't instantaneous. It takes a few seconds for other users to "detect" that the record is locked, so if two people try to open the same record at roughly the same time, it will allow them both to open it. I've applied a transaction to the UPDATE statement that sets the lock, but it still leaves a short window wherein the lock doesn't "take" and two people can open the same record.
So, is there a way to make an UPDATE instantaneous (so all other users immediately see its results), or better yet, a robust and comprehensive way to lock records in an Access multi-user environment?
It's not clear why you are only getting "page" locking.
If you turn on row locking in File->Options, then you ALSO need to set the particular form to lock the current record. So just turning on record locking will not help you. That setting ONLY sets the default for new forms - it is not a system-wide setting.
If you correctly turn on locking for a form, then if two users are viewing the same record and one user starts to edit the record, then all others CANNOT edit the record. Any other user attempting to edit a record will see a “lock” icon in the record selector bar (assuming record selector is turned on for the given form). They also will receive a "beep" if they try to type into any editable control on the given form.
And when they try to edit, they will see a visible "lock" icon on the selector bar like this:
A few things:
If two users are able to edit a record, then you have not turned on locking for that given form. This feature MUST be set on a form-by-form basis. Changing the setting in File->Options->Client Settings ONLY SETS THE DEFAULT for NEW forms you create! So the setting ONLY applies as the default for new forms - it does NOT change existing forms.
So record locking is ONLY a form-by-form setting.
So you MUST ALWAYS set each form you want locking on to lock the current edited record. You set this in form design, on the Data tab of the property sheet, like this:
And also keep in mind that the setting of record level locking (a different setting and feature) is an Access client setting and does NOT travel with the given application.
So since you state that two users can edit the same record, then CLEARLY you NEVER turned on record locking for that given form. The systemwide “default” record locking ONLY sets the above form default (so existing forms you have are NOT changed).
Next up:
The setting of [x] Open database by using record-level locking is an Access client setting and NOT saved with the application. So this is an Access-wide setting, not an application setting, nor one that travels with the application.
So you have to set this on each client workstation, or you have to set this in your start-up code.
If you can’t go around and change each workstation to change this setting (or you are using the Access runtime), then you can use this VBA in your start-up code to set this feature:
Application.SetOption "Use Row Level Locking", True
Note that the setting does NOT take effect until you exit the application, but that's really a "non" issue. It does mean that the first time you run this code, some users might well be in page-locking mode and others in row-locking mode; most of the time this causes little issue.
However the next time any user launches the application then they will be in row locking mode.
I have in the past also written custom locking code. And can outline how to make this work well, but from what you posted so far, you never turned on or set locking nor had locking working correctly for any of the forms you have now anyway.
OK, I finally figured out all of the issues contributing to this and worked out a solution.
The problem is multi-faceted so I'll cover the issues separately:
First issue: My custom locks weren't instantaneous. Even though I was using a transaction, there were several seconds after a lock was placed where users could still access the same record at the same time. I was using CurrentDb.Execute to UPDATE the record and Workspaces(0).BeginTrans for the transaction. For some reason (despite Microsoft's assurances to the contrary from here: https://msdn.microsoft.com/en-us/library/office/ff197654.aspx) the issue was that the transaction wasn't working when using the Workspaces object. When I switched to DBEngine.BeginTrans, the lock was instantaneous and solved my immediate problem.
The irony is that I almost always use DBEngine for my transactions but went with Workspaces this time for no reason, so that was a bad move obviously.
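In code, the difference came down to something like this sketch (table, field and criteria are placeholders):
' Workspaces(0).BeginTrans left a window where others could still
' grab the record; DBEngine.BeginTrans made the lock instantaneous.
DBEngine.BeginTrans
CurrentDb.Execute "UPDATE tblJobs SET LockedBy = 'jsmith' " & _
                  "WHERE JobID = 123", dbFailOnError
DBEngine.CommitTrans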
Second issue: The reason I had to use custom locking in the first place was because record-level locking wasn't working as expected (despite being properly configured). It was still using page-level locking. This was due to a performance trick I was using from here: https://msdn.microsoft.com/en-us/library/dd942824%28v=office.12%29.aspx?f=255&MSPPError=-2147217396
The trick involves opening a connection to the database where your linked tables are contained, which speeds up linked table operations. The problem is that the OpenDatabase method is NOT compatible with record-level locking so it opens the db using page-level locking, and since the first user to open a database determines its lock level (as explained here: https://msdn.microsoft.com/en-us/library/aa189633(v=office.10).aspx), all subsequent connections were forced to page-level.
Third issue: My problem is that my forms are not just simple bound forms to a single table. They open a single record (not allowing the user to navigate) and provide several functions which allow the user to make modifications which affect other records in other tables that are related to the record they're editing (through comboboxes and pop-up forms and whatnot). As a result, I can't allow two people to open the same record at the same time, as it leaves way too many opportunities for users to walk over each other's changes. So even if I removed the OpenDatabase performance trick, I'd still have to force the Form to be Dirty as soon as they open it so the record locks immediately and no one else can open it. I don't know if this would be as instantaneous as my custom locking and haven't yet tested that aspect.
In any event, I need a record to be locked the instant a user opens it and for now I've decided to keep using my custom locking (with the fix for the transaction). If something else comes to light that makes that less than ideal, I can try removing the OpenDatabase trick and switching to Access's built-in locking and force an immediate lock on every record when it is opened.
You could use the method described here:
Handle concurrent update conflicts in Access silently
to handle your lock field.
Since Access doesn't make locking records easy, I'm wondering whether adding a table with locked-record entries would solve the problem, even though it would be the "duct tape, soup can and coat hanger" solution.
You create a "Locked_Record" table with 2 fields: a) the record ID being updated and b) the user name of the person updating that record. That table controls exactly who owns, and therefore who can edit, a given record.
Your form would have a search field; when the search term is entered and Enter pressed, the form searches for the record in the data and in the Locked_Record table. If found in the Locked_Record table, the user gets an error saying "Record in use already" and a display of who owns the record. If not found in the data, the appropriate message is displayed. If found in the data and not found in the Locked_Record table, a Locked_Record entry is created and the user gets the data displayed in the form. At this point nobody else can edit that record.
When the user is done updating, either the user presses a button saying "Done updating" or the form is closed. Either way the Locked_Record entry is deleted so others can use that record. If the record owner doesn't close the form or press the button, then that is a training issue.
This method could be used for multiple entities such as Customers, Employees, Departments, etc. You would just have to ensure your application and DB are set up so any sub-forms which might lock other tables would ONLY affect that record's entries in those tables.
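A rough sketch of the check-and-claim step (all table, field and variable names here are hypothetical; lngID is the record ID found by the search):
' Try to claim the record; show the owner if someone already holds it.
Dim db As DAO.Database
Dim rs As DAO.Recordset
Set db = CurrentDb
Set rs = db.OpenRecordset("SELECT UserName FROM Locked_Record " & _
                          "WHERE RecordID = " & lngID, dbOpenSnapshot)
If Not rs.EOF Then
    MsgBox "Record in use already by " & rs!UserName
Else
    db.Execute "INSERT INTO Locked_Record (RecordID, UserName) " & _
               "VALUES (" & lngID & ", '" & Environ("USERNAME") & "')", _
               dbFailOnError
    ' ...open the record in the form; delete this row when done...
End If
rs.Close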
I know this is a bit old, but the information here inspired me to use the following. Basically, Me.txtApplication is a text box on the bound form. The form is bound to the table and is set to lock the edited record in the property sheet. This code won't do anything other than trigger that editing lock and promptly undo the change. If another user tries to load the same record, it will attempt the same edit, trigger the error, and move to the next record or start a new record without the user being any the wiser.
'Lock the current record with an edit-level lock by making and removing an edit to a field.
'If the record is already locked, move to the next record.
'(Assumed to run in the form's Current event, since it fires when a record is loaded.)
Private Sub Form_Current()
    On Error Resume Next
    Me.txtApplication = Me.txtApplication & "-%$^$^$$##$"
    Me.txtApplication = Replace(Me.txtApplication, "-%$^$^$$##$", "")
    If Err.Number = -2147352567 Then ' the edit failed: record is locked
        If Me.CurrentRecord < Me.Recordset.RecordCount Then
            DoCmd.GoToRecord , , acNext
        Else
            ' Otherwise we are on the last record, so don't go to the next one.
            MsgBox "No available records.", vbOKOnly, "No Records"
            DoCmd.GoToRecord , , acNewRec
        End If
    End If
End Sub

Detect plugin rollback

Pretty simple question, but I can't find anything about it..
I have a plugin in Dynamics CRM 2013 that listens to the create and update events of an account. Depending on some business rules, some info about this account is written to an external webservice.
However, sometimes a create or update action can be rolled back from outside the scope of your plugin (for example by a third-party plugin), so the account won't be created or updated. The CRM plugin model handles this nicely by rolling back every SDK call made in this transaction. But as I have written some info to an external service, I need to know when a rollback occurred so that I can roll back the external operation manually.
Is there any way to detect a rollback in the plugin execution pipeline and execute some custom code? Alternative solutions are welcome too.
Thx in advance.
There is no trigger that can be subscribed to when the plugin rolls back, but you can determine when it happens after the fact.
Define a new Entity (call it "TransactionTracker" or whatever makes sense). Define these attributes for the entity:
An OptionSet attribute (call it "RollbackAction" or, again, whatever makes sense).
A Text attribute that will serve as a data field.
Define a new workflow that gets kicked off when a "TransactionTracker" gets created:
Have its first step be a Wait Condition defined as a process timeout that waits for 1 minute.
Have its next step be a Custom Workflow Activity that uses the RollbackAction to determine how to parse the Text Attribute, in order to determine whether the entity has been rolled back (if it's a Create, does it exist? If it's an Update, is the entity's Modified On date >= the TransactionTracker's date?).
If it has been rolled back, perform whatever action is necessary; if it hasn't been rolled back, exit the workflow (or optionally delete the TransactionTracker entity).
Within your plugin, before making the external call, create a new OrganizationServiceProxy (since you are creating it rather than using the existing one, it runs outside the transaction and therefore will not get rolled back).
Create a "TransactionTracker" entity with the out-of-transaction service, populating the attributes as necessary.
You may need to tweak the timeout, but besides that, it should work fine.

iOS Rolling out app updates. Keeping user data intact when DB update required

I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app. I have made a few changes to the SQLite db since then, in the next release I will need to update the DB structure but retain the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's (documents folder, not in bundle) database file but rather alter its structure using SQL queries.
This would involve tracking changes made to the database since the previous release. Script all these changes into SQL queries and run these to bring the DB to the latest revision. I will also need to keep a field in the database to track the version number (keep in line with app version for simplicity).
Unless there are specific hooks or delegate methods that are fired on the first run after an update, I will put calls for this logic at the very beginning of the appDelegate, before anything else is run.
While doing this I will display "Updating app" or something to the user.
Next thing: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date, and the app won't function properly as it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? OR should I just test, test, test until everything is solid on my side, and if an error occurs on the user's side it's something else, in which case I can't do anything about it except discard the data?
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
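A minimal sketch of such a run-once migration, here in C against the SQLite API. It assumes the schema version is kept in SQLite's built-in user_version pragma (your field-based scheme would work the same way), and the ALTER TABLE is a made-up example. Wrapping each revision in a transaction also helps with the error case below, since a failed script leaves the old schema intact:
#include <sqlite3.h>

/* Bring the database up to the current schema version. */
static int migrate(sqlite3 *db)
{
    int version = 0;
    sqlite3_stmt *stmt;

    /* Read the stored schema version. */
    if (sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &stmt, 0) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            version = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
    }

    if (version < 1) {
        /* One transaction per revision: on failure, roll back and
           leave the database at the old, still-consistent version. */
        sqlite3_exec(db, "BEGIN;", 0, 0, 0);
        if (sqlite3_exec(db,
                "ALTER TABLE notes ADD COLUMN created_at TEXT;" /* example change */
                "PRAGMA user_version = 1;",
                0, 0, 0) != SQLITE_OK) {
            sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);
            return SQLITE_ERROR;
        }
        sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    }
    return SQLITE_OK;
}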
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better would be to display an error message and perhaps let the user contact you with a bug report. Then you can release an update to your app which hopefully can do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course it all depends on the complexity of the migration process.