In Apex 5, is it possible to execute multiple Automatic Row Processing processes on the same form, inserting/updating on different tables?

In Apex 5, I'm trying to execute multiple Automatic Row Processing (ARP) processes on the same form, inserting/updating on different tables. But after the first ARP performs an insert on the first table, the subsequent ARPs try to use the first table's fields for their own inserts.
Can I separate the fields that participate in each ARP, or is it a limitation of Apex 5 that I must use only one ARP per form page, inserting/updating on only one table?
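For reference, a common workaround for this kind of multi-table form (not specific to APEX 5, and using hypothetical table and column names) is to base the form on a join view with an INSTEAD OF trigger, so a single ARP can write to both tables. A minimal sketch:

-- Hypothetical join view over the two target tables
CREATE OR REPLACE VIEW v_form_source AS
SELECT a.id, a.col_a, b.col_b
FROM   table_a a
JOIN   table_b b ON b.id = a.id;

-- The trigger splits the view DML across the underlying tables,
-- so the form only needs one Automatic Row Processing process
CREATE OR REPLACE TRIGGER trg_v_form_source_ins
INSTEAD OF INSERT ON v_form_source
FOR EACH ROW
BEGIN
  INSERT INTO table_a (id, col_a) VALUES (:NEW.id, :NEW.col_a);
  INSERT INTO table_b (id, col_b) VALUES (:NEW.id, :NEW.col_b);
END;
/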

Related

SCD (slowly changing dimension): how can I detect changes?

Can I detect changes in my ODS tables before inserting them into the dimension table in the DWH? I use SQL and Pentaho to load the data. For reference, I use 4 tables to load my dimension table. So how can I detect changes in those 4 tables before using them?
There are two transformation steps that can help you compare the contents of two tables: Merge rows (diff) and Table compare.
You could keep a copy of the tables and, each time you run your process, compare the current content with the content of the last copy, although that approach does not perform well if the tables are large.
Or, if your database allows auditing of changes, you could activate that audit and retrieve only the rows the audit says have changed since the last load.
There is also the option of a database trigger that ensures a modification date is updated each time a row is changed; using the column where you store that modification date, you can then retrieve the changed rows, as in the sketch below.
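A minimal sketch of that trigger approach, assuming an Oracle database and a hypothetical ODS table src_customers with a last_modified column added for change tracking (adapt the syntax for your DBMS):

-- Keep last_modified current on every insert/update
CREATE OR REPLACE TRIGGER trg_src_customers_mod
BEFORE INSERT OR UPDATE ON src_customers
FOR EACH ROW
BEGIN
  :NEW.last_modified := SYSTIMESTAMP;
END;
/

-- The dimension load then picks up only rows changed since the last run
SELECT *
FROM   src_customers
WHERE  last_modified > :last_load_time;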

Oracle: Data consistency across multiple tables to be displayed

I have 3 reports based on 3 different tables, which ideally should match each other when audited.
They are updated sequentially once a day.
The problem is that when one of the tables has been updated and the second is still in progress, the customer sees a data discrepancy between the reports for some time.
We tried committing only after all 3 tables are updated, but we started having issues with the undo tablespace; the application has many other things running as well.
I am looking for a solution where we can restrict the data shown to the user to a specific point in time, so that they see updated data only after all 3 tables have been refreshed/updated.
I think you can use SELECT ... FOR UPDATE on all 3 tables before starting the update procedure.
In that case users can still select data, and they will see only the unchanged data until the update session finishes and commits.
You can use a flashback query to show data as-of a point in time:
SELECT * FROM table1 AS OF TIMESTAMP TIMESTAMP '2021-12-10 12:00:00';
The application would need to determine the latest time when the tables were synchronized, perhaps with a log table that records when the update process last started. However, flashback queries also use the UNDO tablespace. They should at least use less UNDO than one long-running transaction, since the now-committed transactions will free up some space.
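As a minimal sketch of that idea, assuming a hypothetical refresh_log table that the nightly job updates once all three tables are loaded and committed:

DECLARE
  v_sync TIMESTAMP;
  v_cnt  NUMBER;
BEGIN
  -- Last point in time at which all three tables were consistent
  SELECT last_sync_time
  INTO   v_sync
  FROM   refresh_log
  WHERE  job_name = 'DAILY_LOAD';

  -- Every report query uses the same AS OF timestamp, so all three
  -- tables are read as of one consistent point in time
  SELECT COUNT(*)
  INTO   v_cnt
  FROM   report_table1 AS OF TIMESTAMP v_sync;
END;
/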

SQL Server: executing a trigger AFTER the data is committed. Is it possible?

I might be fighting a losing battle here, but after many attempts, I thought I should ask.
We have a 'use' table that data gets written to from various sources and in different ways.
We have a procedure that looks at this table and, based on that data, calculates various metrics and inserts them into another table; this is run overnight.
I'd like to be able to keep this data table up-to-date as the use table changes, rather than just regenerating it overnight.
I wrote a trigger for the use table that, based on the users being updated, ran the procedure just for those users and updated their data.
The problem, of course, is that this data is calculated by looking at all records for that user in the use table, and as the trigger fires during the insert/transaction, the calculation doesn't include the data being inserted.
I then thought I could change the trigger to insert the userids into another table and run a trigger on that table, so it would do the calculations then, thinking it would be another transaction. I was wrong.
Currently, the only working solution I have is to insert userids into this 2nd table and have a SQL job run every 10 seconds, waiting for userids to appear in the table and then running the sync proc from there. But it's not ideal, as there are a lot of DBs this would need to run through, and it would be awkward to maintain.
Ideally I'd like to be able to run the trigger after the commit has taken place so all the records are available. I've looked at AFTER INSERT, but even that runs before the commit, so I'm unsure what use it even is.
Anyone have any ideas, or am I trying to do something that is just not possible?
Really basic example of what I mean:
tblUse
userid, activityid
1, 300
1, 301
2, 300
3, 303
So userid 1 currently has 2 records in the table.
userid 1 performs an activity and a record gets added.
So after the transaction has been committed, userid 1 will have 3 records in the table.
The problem I have is that if I were to count the records at the point the trigger runs, I'd still only get 2, as the 3rd hasn't been committed yet.
I also can't just move the calculations into the trigger and union the INSERTED table data as it's a very big and complicated script that joins on multiple tables.
Thank you for any ideas.
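A minimal T-SQL sketch of the queue-table workaround described above, assuming a hypothetical tblUserSyncQueue table (the job or another external process drains the queue after the triggering transaction has committed):

-- Queue table the sync job polls
CREATE TABLE tblUserSyncQueue (
    userid    INT NOT NULL,
    queued_at DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trgUseQueueSync ON tblUse
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record the affected userids; the recalculation itself runs
    -- later, once this transaction's rows are committed and visible
    INSERT INTO tblUserSyncQueue (userid)
    SELECT userid FROM inserted
    UNION
    SELECT userid FROM deleted;
END;
GO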

Maintain a counter in one table to track INSERT and DELETE in another table

I have a Spring Boot application with Postgres as the database and Hibernate for persistence management. There are two tables in the db: STATISTICS and USER. Client machines submit entries to the STATISTICS table, and there is a 'counter' column in the USER table to keep track of the number of entries in the other table (increment on insert and decrement on delete).
I started with a basic query to maintain the counter: UPDATE "user" u SET counter = coalesce(u.counter, 0) + <update_by> WHERE u.id = <user_id>. The <update_by> would be negative for deletion.
My questions:
Do I need to obtain explicit locks (something like SELECT ... FOR UPDATE) to ensure concurrent updates don't work with stale data and overwrite each other's changes?
Would a database trigger be a better choice to maintain this counter? If yes, do I need to take care of locking for concurrent updates there?
(I need to maintain the counter along with the changes to the other table, not compute the count by scanning the whole table whenever it is requested.)
PostgreSQL already locks the rows being updated. You can use a trigger or a function; both are executed in the same transaction and take row-level locks on the rows they update.
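A minimal PostgreSQL sketch of the trigger approach, assuming a user_id foreign-key column on STATISTICS (hypothetical name). The row-level lock taken by the UPDATE makes concurrent increments serialize rather than overwrite each other:

-- Trigger function: adjust the counter for the affected user
CREATE OR REPLACE FUNCTION maintain_user_counter() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE "user" SET counter = coalesce(counter, 0) + 1
        WHERE  id = NEW.user_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE "user" SET counter = coalesce(counter, 0) - 1
        WHERE  id = OLD.user_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- EXECUTE FUNCTION needs PostgreSQL 11+; use EXECUTE PROCEDURE on older versions
CREATE TRIGGER trg_statistics_counter
AFTER INSERT OR DELETE ON statistics
FOR EACH ROW EXECUTE FUNCTION maintain_user_counter();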

Sybase ASE data purge batch - identifying # of rows to delete in a transaction

We have multiple applications using Sybase ASE (currently migrating from 15.0 to 15.7). I am working on a generic utility that can be used by all of our ASE databases to purge unwanted old data. The utility is supposed to run in a specified window (12:00 to 3:00 AM) and slowly keep deleting the data day by day, without affecting other users or bringing the database down.
As multiple tables with different parameters are involved, I need to figure out an optimal algorithm to decide the number of rows to delete in each transaction.
- Should the amount of data deleted be calculated using a formula based on user cache / data cache / log space / some other parameter? If so, can you please suggest a formula?
- For now, I am using the following brute-force safety limit, looping through the delete query and committing on each iteration:
set rowcount 100
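The loop itself looks roughly like this (a sketch; my_table and created_dt are hypothetical names, and the purge criterion is just an example):

set rowcount 100  -- cap each delete statement at 100 rows
declare @rows int
select @rows = 1
while @rows > 0
begin
    begin tran
    delete from my_table
    where created_dt < dateadd(dd, -90, getdate())
    select @rows = @@rowcount  -- rows removed by the last delete
    commit tran
end
set rowcount 0  -- restore the default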
Additionally, is there a way to take the following parameters into account? If so, how much weight should I give to each of them?
- Based on the max row size of a table, dynamically fix the number of rows deleted?
- Index:
  - If the deletion criteria match an index, delete more rows per transaction (how many?)
  - If the deletion criteria match an index, use "select for update" (ASE 15.7)
  - If there are other indexes not matching the given column criteria, should I "select into #temptbl" the index column values matching the input criteria and then do the deletes joining on #temptbl (see the sketch after this list)?
- If there is no index at all, should I set the number of rows deleted with "set rowcount" based on the table's max row size?
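A sketch of that #temptbl variant (again with hypothetical names): materialize the matching keys once, then delete in batches by joining against them.

-- Collect the keys of rows matching the purge criterion
select id
into   #temptbl
from   my_table
where  created_dt < dateadd(dd, -90, getdate())

set rowcount 100
declare @rows int
select @rows = 1
while @rows > 0
begin
    begin tran
    -- Delete in batches via the key list instead of rescanning the criterion
    delete my_table
    from   my_table, #temptbl
    where  my_table.id = #temptbl.id
    select @rows = @@rowcount
    commit tran
end
set rowcount 0
drop table #temptbl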