Data created via BAPI is intermittently unavailable despite committing updates - abap

I have the following case:
I'm creating transport documents in a LOOP (using BAPI_CREATE). After this loop, if everything is fine, I call BAPI_TRANSACTION_COMMIT (with wait = 'X').
After that, I run another loop over the created transports to change them. But I cannot always change the first transport (the LAST one created). Could it be that the commit work has not actually completed yet? I added WAIT UP TO 3 SECONDS before the second loop and it worked, but I would like to find out the real cause and how to solve it properly.
Thanks.

The updates run in different work processes, even with commit and wait.
Try to experiment with
SET UPDATE TASK LOCAL.
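A minimal sketch of where that statement would go (BAPI_CREATE is the placeholder name used in the question, and lt_items is a hypothetical input table):

" Run update function modules synchronously in the current work
" process instead of the separate update task, so the created
" transports are committed before the change loop starts.
SET UPDATE TASK LOCAL.

LOOP AT lt_items ASSIGNING FIELD-SYMBOL(<ls_item>).
  " BAPI_CREATE stands in for the actual creation BAPI
  CALL FUNCTION 'BAPI_CREATE'
    EXPORTING
      is_item = <ls_item>.
ENDLOOP.

CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.

" The second loop can now change the transports immediately.

Note that SET UPDATE TASK LOCAL must be issued before the first update function module is registered in the LUW.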

Related

Release all document locks at a specified time in MarkLogic

We are planning to implement a locking mechanism for our documents using the xdmp:lock-acquire API in MarkLogic with no timeout option. The document would stay locked until the user edits and saves it. As part of this, we need to release all the locks at a specified time, say 12:00 AM every day.
For this, we could use the xdmp:lock-release API, but if there are many documents it would take some time to complete.
Can someone suggest a better way to achieve this in MarkLogic?
If you have a potentially large set of locks that you need to process, and are concerned about timeouts or other issues from doing all of the work in a single transaction, then you can break up the work into smaller chunks or individual transactions.
There are a variety of batch processing tools and frameworks to do that. CoRB is one option that makes it easy to plug in custom selectors and processing scripts, and to execute against giant sets.
If you are looking to initiate the work from a MarkLogic scheduled task and perform all of the work within MarkLogic, then you could spawn multiple tasks to process subsets.
A simple example demonstrating how to set a "chunk size" for each transaction and to keep spawning more work:
declare function local:release-locks($locks, $chunk-size) {
  if (exists($locks))
  then (
    (: release this chunk of locks (you might apply some sort of filter to restrict
       to a subset, and maybe a try/catch in case a lock gets released before this runs) :)
    $locks[1 to $chunk-size] ! xdmp:node-uri(.) ! xdmp:lock-release(.),
    (: now spawn the next chunk to be released in a separate transaction :)
    xdmp:spawn-function(function() {
        local:release-locks(subsequence($locks, $chunk-size + 1), $chunk-size)
      },
      <options xmlns="xdmp:eval">
        <update>true</update>
        <commit>auto</commit>
      </options>)
  )
  else () (: nothing left to do, stop spawning work :)
};

let $locks := xdmp:document-locks()
let $chunk-size := 1000
return local:release-locks($locks, $chunk-size)
If you are looking to go down this route, there are some libraries available:
https://github.com/bradmann/marklogic-spawnlib
https://github.com/mblakele/taskbot
The risk of spawning multiple items onto the task server is that if there is a restart or interruption, some tasks may not execute and all locks may not be released. But if you are just looking to release all of the locks, you could then just re-run the script to kick off another round.

Can Oracle time out when another connection is already using the same table?

If I need to run a DML statement (insert, update, delete) against a table, Oracle first verifies whether there is an active DML operation on that table. If another operation is in progress at that moment, my connection waits until it has finished.
Is there a way to get a timeout in these cases? Not globally, only for specific cases.
-- Edit with more details about the problem
I'm not sure whether any explicit lock is actually used. In my case, there is an old application written in Oracle Forms and a new application written by me.
The problem is that when a user opens a specific record to update a field in the old application, and I try to edit the same record in my app, the row is locked.
So my app waits for the unlock. The problem is that the user thinks the application has frozen and kills it, losing the changes.
This does not happen when another Oracle Forms application attempts the edit: in that case Oracle Forms displays the message "Could not reserve record (2). Keep trying?". Maybe that is because the old app uses some kind of lock. But I need to handle this in my own code.
Note: the number 2 is the number of attempts to update.
If you do a LOCK TABLE ... WAIT, it will wait until any in-flight DML on the table commits and then give you the lock. Anyone coming after you will then wait until you release it. See the documentation for how to use it.
There is also the possibility of locking a single row (SELECT ... FOR UPDATE), which is more granular.
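For example, a session can try to lock just the row it wants to edit and give up after a few seconds instead of blocking indefinitely (the orders table and order_id column are invented for illustration):

-- Row-level lock: wait at most 5 seconds, then raise
-- ORA-30006 (resource busy; acquire with WAIT timeout expired).
SELECT * FROM orders WHERE order_id = 42 FOR UPDATE WAIT 5;

-- Or fail immediately with ORA-00054 if the row is already locked.
SELECT * FROM orders WHERE order_id = 42 FOR UPDATE NOWAIT;

-- Table-level variant of the same idea:
LOCK TABLE orders IN EXCLUSIVE MODE WAIT 5;

The application can catch the error, tell the user the record is busy, and let them decide whether to retry, much like the Oracle Forms "Keep trying?" prompt.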
That being said, can you please explain what exactly you are trying to do? You may not need any of this at all.

Usage of NHibernate session after exception on query

We are trying to implement retry logic to recover from transient errors in Azure environment.
We are using long-running sessions to keep track of and commit the whole batch of changes at the end of an application transaction (which may spread over several web requests). Along the way we need to get additional data from the database. Our main problem is that we can't easily recover from a db error because we can't "replay" all user actions.
So far we used straightforward recovery algorithm:
Try to perform operation in long-running session
In case of error, close the session, open a new one and merge entities into it
Retry the operation
This approach is very expensive in terms of time (the merge takes really long for big entity hierarchies). So we'd like to optimize things a little.
We'd like to perform query operations in separate session (to keep long running one untouched and safe) and on success, merge results back to the long-running session. Retry is relatively simple here - we just need to open new session and run query once more. However, with this approach we have an issue with initializing lazy properties/collections:
If we do this in separate session, we need to merge results back (a lot of entities) but merge could fail and break the long-running session
We tried different ways of "moving" original entity to different session, loading details and returning it back, but without success (evict, replicate, etc.)
There is a known recommendation that the session should be discarded in case of an exception. However, the example shows a write operation. Is this still true for reads? I mean, if I guarantee that no data is written back to the database, can I reuse the same session to run the query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue. It's going to take a lot of time to commit everything, or you will have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING inside this session anymore, not even loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception (such as StaleObjectStateException, because of concurrency conflict), your session and its loaded entities are toast, because with NHibernate, an exception thrown from a session moves that session into an undefined state. You can no longer use that session or any loaded entities. If you have only a single global session, it means that you probably need to restart the application, which is probably not a good idea.
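In practice that means a retry must always happen in a brand-new session. A minimal sketch of that pattern (the ISessionFactory is assumed to be configured elsewhere, and the Customer entity is invented for illustration):

using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class RetryQuery
{
    // Runs a read-only query, retrying in a FRESH session after a failure.
    // A session that has thrown is in an undefined state and is never reused.
    public static IList<Customer> LoadCustomers(ISessionFactory factory, string name)
    {
        for (var attempt = 1; attempt <= 3; attempt++)
        {
            using (var session = factory.OpenSession())
            {
                try
                {
                    return session.Query<Customer>()
                                  .Where(c => c.Name == name)
                                  .ToList();
                }
                catch (ADOException) when (attempt < 3)
                {
                    // Dispose this session and start over with a new one.
                }
            }
        }
        throw new InvalidOperationException("Query failed after 3 attempts.");
    }
}

Results loaded this way would still have to be merged into the long-running session, which is exactly the expensive step the question is trying to avoid.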

Is it possible to prevent a trigger from running in a transaction?

According to several resources, such as this,
A query that is executed within the context of a trigger is automatically wrapped in a transaction. If there are any distributed queries in the trigger code, the transaction is promoted to a distributed transaction automatically.
Simple question - is there a way to prevent this behavior? I'm looking for a way to explicitly prevent code in my trigger from running in the context of a transaction.
If you are trying to do something asynchronous so that the calling transaction doesn't have to wait, you may consider Service Broker, which is designed to do exactly that - go fire off some asynchronous task, and return control to the caller, regardless of transaction scope.
Another idea is to not have your trigger perform the work, but instead pop a work item onto a queue table, and have a background process running continuously to process the queue. This isn't necessarily easy to do if your work item operates on the set of data in inserted/deleted but without more context it certainly seems like a viable option.
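A rough sketch of that queue-table pattern (the Orders table, WorkQueue table, and trigger name are all invented for illustration):

-- Hypothetical queue table the trigger writes to.
CREATE TABLE dbo.WorkQueue
(
    QueueId  int IDENTITY(1,1) PRIMARY KEY,
    OrderId  int NOT NULL,
    QueuedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER dbo.trg_Orders_QueueWork
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Only record what needs doing; the slow work happens later in a
    -- background job, outside the caller's transaction.
    INSERT INTO dbo.WorkQueue (OrderId)
    SELECT OrderId FROM inserted;
END;
GO

The background process then polls dbo.WorkQueue (or uses Service Broker activation) and does the heavy lifting in its own transaction.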
I don't know of a way to prevent a trigger from being a part of the calling transaction - in fact that's kind of the whole point.
This is called an "autonomous transaction", and the simplest way to implement one in SQL Server is by creating a loopback linked server that points back to the original database.
See this MSDN blog for a possible solution.
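A rough illustration of the loopback idea (the linked server name LOOPBACK and the dbo.AuditLog table are invented; the key is disabling transaction promotion so the remote call runs in its own transaction):

-- One-time setup: stop calls through the loopback server from being
-- promoted into the caller's distributed transaction.
EXEC sp_serveroption
    @server = 'LOOPBACK',
    @optname = 'remote proc transaction promotion',
    @optvalue = 'false';

-- Inside the trigger: this INSERT commits independently,
-- even if the outer transaction later rolls back.
EXEC ('INSERT INTO dbo.AuditLog (Message) VALUES (''written outside the trigger transaction'')') AT LOOPBACK;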

TransactedReceiveScope - when does the Transaction Commit?

Scenario:
We have a WCF workflow with a client that does NOT use TransactionFlow.
The workflow contains several sequential TransactedReceiveScopes (using content-based correlation).
The TransactedReceiveScopes contain custom db operations.
Observations:
When we run SQL profiler against the first call, we see all the custom db calls, and the SaveInstance call in the profile trace.
We've noticed that, even though the SendReply is at the very end of the TransactedReceiveScope, the SendReply sometimes occurs a good 10 seconds before the transaction gets committed.
We tried changing TimeToPersist and TimeToUnload to zero, but that had no effect. (The trace shows the SaveInstance happening immediately anyway; it is the commit that seems to be delayed.)
Questions:
Are our observations correct?
At what point is the transaction committed? Is this like garbage collection - i.e. it commits some time later when it's not busy?
Is there any way to control the commit delay, or is the only option to use TransactionFlow from the client (and then everything should commit when the client commits, including the persist)?
The TransactedReceiveScope commits the transaction when the body completes, but as all execution goes through the scheduler, that can be some time later. It is not related to garbage collection, and there is no real way to influence it other than to avoid a busy machine and a lot of other parallel activities that could also be in the execution queue.