How can I handle an asynchronous response that eventually updates a status flag in an Oracle table?
I basically have a PL/SQL routine that makes a REST call using the APEX_WEB_SERVICE API.
The call is asynchronous: it will eventually update a status flag within a table, which will tell me whether the operation was OK or FAIL.
What is the best way to poll this table to check if a response of OK or FAIL has been returned using Oracle PL/SQL?
I was looking at DBMS_LOCK.sleep(), but I'm unsure whether that is the best approach. Could DBMS_ALERT also work for this?
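For reference, the DBMS_ALERT pattern I'm considering would look roughly like this (the alert name, table, and bind are placeholders):

    -- Waiting session: registers for the alert and blocks (no busy polling).
    DECLARE
      l_message VARCHAR2(1800);
      l_status  INTEGER;  -- 0 = alert received, 1 = timed out
    BEGIN
      DBMS_ALERT.REGISTER('status_alert');
      DBMS_ALERT.WAITONE('status_alert', l_message, l_status, timeout => 300);
      IF l_status = 0 THEN
        DBMS_OUTPUT.PUT_LINE('Result: ' || l_message);  -- 'OK' or 'FAIL'
      END IF;
      DBMS_ALERT.REMOVE('status_alert');
    END;

    -- Updating session: signals alongside the flag update; the alert is
    -- delivered to waiters when this transaction commits.
    BEGIN
      UPDATE status_table SET status = 'OK' WHERE request_id = :id;
      DBMS_ALERT.SIGNAL('status_alert', 'OK');
      COMMIT;
    END;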
Rather than poll the table at an interval, I would recommend using Oracle Advanced Queuing along with Oracle Scheduler. AQ is designed exactly for this sort of thing. You can create an event-based job that is triggered when the asynchronous process, at the same time it updates the table, enqueues a message. The Scheduler sees the message and runs the appropriate job or job chain to finish the processing.
See here for a basic example: https://pmdba.wordpress.com/2017/08/21/aq-basics/
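A condensed sketch of the moving parts, with every name (queue, payload type, procedure) invented for illustration:

    -- Payload type carrying the result; the job's event_condition
    -- references its attributes via tab.user_data.
    CREATE TYPE status_msg_t AS OBJECT (status VARCHAR2(10));
    /
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'status_qt',
        queue_payload_type => 'STATUS_MSG_T',
        multiple_consumers => TRUE);  -- multi-consumer queue for the Scheduler to subscribe to
      DBMS_AQADM.CREATE_QUEUE('status_q', 'status_qt');
      DBMS_AQADM.START_QUEUE('status_q');

      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'finish_processing_job',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'finish_processing',   -- your follow-up logic
        queue_spec      => 'status_q',
        event_condition => 'tab.user_data.status IN (''OK'', ''FAIL'')',
        enabled         => TRUE);
    END;
    /
    -- The asynchronous process enqueues as it updates the flag:
    DECLARE
      l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_id    RAW(16);
    BEGIN
      DBMS_AQ.ENQUEUE('status_q', l_opts, l_props, status_msg_t('OK'), l_id);
      COMMIT;
    END;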
Related
Part 1: I would like to receive a real-time notification when a value changes from A -> B in an Oracle table in an on-prem vendor database.
Part 2: Upon receiving the event, a process flow will kick off to take appropriate action, such as executing Web API calls and sending emails and SMS messages.
What would be a good solution architecture for Part 1?
Since it's a vendor database, writing and executing custom triggers on their tables is not desirable.
Plan B is to write a JOB to poll the tables on a schedule and pick up the changes.
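For Plan B, a rough sketch of a polling job, assuming the vendor table has a LAST_UPDATED column and using a watermark table to remember what has already been seen (all names here are placeholders):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'poll_vendor_changes',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          DECLARE
            l_last TIMESTAMP;
          BEGIN
            SELECT last_seen INTO l_last FROM change_watermark FOR UPDATE;
            INSERT INTO change_events (pk_value, changed_at)
              SELECT id, last_updated
                FROM vendor_schema.vendor_table
               WHERE last_updated > l_last;
            -- Simplified: a real job would advance the watermark to
            -- MAX(last_updated) seen, to avoid missing in-flight rows.
            UPDATE change_watermark SET last_seen = SYSTIMESTAMP;
            COMMIT;
          END;]',
        repeat_interval => 'FREQ=SECONDLY;INTERVAL=30',
        enabled         => TRUE);
    END;
    /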
I have a data conversion and caching service running as a self-hosted WCF service.
It currently polls the database at short, constant intervals to keep its data up to date.
I think that's unnecessary. The data can only change when one of the tables changes, and when that happens depends on users' actions.
Setting a trigger on those tables is no problem, but I would need an action outside SQL Server to update my cache. My WCF service could perform the update when it receives a specific URI over HTTP. So all I need is a command in the table trigger that sends such a request. Is that even possible?
It reminds me of a hack I used back in the day with HTTP requests: I held the HTTP response at the server until a data packet arrived from somewhere else. There was no delay between polling requests, and I got fully asynchronous, "real-time" updates.
Maybe that approach can be applied to SQL? I'm thinking of a query that blocks until it receives a signal. It would eventually time out, but that's good enough to try. So: how do I signal and wait in SQL? By locking and unlocking a shared resource, like a cursor or a dummy table?
Any other options?
I need the cache refresh to run as infrequently as possible (it's expensive, so once per minute is great), but I also need an immediate update when the data changes.
To answer your question, have you looked at xp_cmdshell?
https://msdn.microsoft.com/en-us/library/ms175046.aspx
However, the security/performance implications of such a decision could be non-trivial depending on your use case.
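A minimal sketch of that route, assuming xp_cmdshell has been enabled and curl.exe is available on the database server; the endpoint and table names are placeholders:

    -- One-time setup (this widens the attack surface; see the caveat above).
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;
    GO
    CREATE TRIGGER dbo.trg_NotifyCache ON dbo.WatchedTable
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Ping the WCF service's refresh endpoint. Note this runs inside
        -- the transaction, so a slow or unreachable endpoint delays the caller.
        EXEC master..xp_cmdshell
            'curl -s http://cache-host:8080/refresh', no_output;
    END;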
If I have a stored procedure or a trigger in SQL Server 2008, can it do some SQL calculations 'in another non-blocking thread', i.e. something in the background?
Also, can two SQL code blocks be run in parallel? Or two stored procs?
For example, imagine we are given the job of calculating the score for each Stack Overflow user after a user does some 'action' (and please leave out all the 'do that elsewhere/service/batch/overnight' suggestions).
So we have a trigger on the Post table: when a new post is INSERTED, the trigger fires and, as part of its logic, calculates the user's latest score. Instead of waiting for the stored proc to finish and block the current execution, can we ask it to calculate the score in the background or in parallel?
cheers!
SQL Server does not have parallel or deferred execution: each block of running code in a connection is serial, one line after the other.
To decouple processing, you usually have to use SQL Server Agent jobs or Service Broker. These start executing in a new connection and a new session.
This makes sense:
What if you want to roll back your changes? What does the background thread do, and how does it know?
What data does it use? New, Old, lock wait, snapshot?
What if it gets ahead of the main thread and uses stale data?
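For the Agent route mentioned above, kicking off background work is a one-liner; the job name here is a placeholder for a job you'd define up front:

    -- Queues the job and returns immediately; the job runs in its own
    -- session, so it cannot see the caller's uncommitted changes.
    EXEC msdb.dbo.sp_start_job @job_name = N'RecalcUserScores';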
No, but you could write the request to a queue. Service Broker, a SQL Server component, provides support for this kind of thing. It's probably the best option available for asynchronous processing.
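A condensed Service Broker sketch (all names invented; the database also needs ENABLE_BROKER). The activated procedure runs asynchronously on its own session whenever messages arrive:

    CREATE QUEUE ScoreQueue;
    CREATE SERVICE ScoreService ON QUEUE ScoreQueue ([DEFAULT]);
    GO
    CREATE PROCEDURE dbo.ProcessScoreMessages
    AS
    BEGIN
        DECLARE @h UNIQUEIDENTIFIER, @body NVARCHAR(MAX);
        WHILE 1 = 1
        BEGIN
            WAITFOR (RECEIVE TOP (1)
                         @h    = conversation_handle,
                         @body = CAST(message_body AS NVARCHAR(MAX))
                     FROM ScoreQueue), TIMEOUT 1000;
            IF @@ROWCOUNT = 0 BREAK;
            -- ... recalculate the user's score from @body here ...
            -- (a real implementation also handles EndDialog/Error messages)
            END CONVERSATION @h;
        END
    END;
    GO
    ALTER QUEUE ScoreQueue
        WITH ACTIVATION (STATUS = ON,
                         PROCEDURE_NAME = dbo.ProcessScoreMessages,
                         MAX_QUEUE_READERS = 1,
                         EXECUTE AS OWNER);
    GO
    -- The trigger (or proc) just enqueues and returns immediately:
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE ScoreService TO SERVICE 'ScoreService'
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h (N'{"userId": 42}');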
I have a procedure written in PL/Java that sends out updates over JMS in my PostgreSQL database.
What I would like to do is have that function called on an interval (every 15 seconds) internally in the database (preferably not from an outside process). Is this possible? Any ideas?
If you need no external access, you are presumably able to modify the database design so that you don't need the update at all. Can you explain more about what the update is doing?
As depesz said, you could use either cron or pgAgent, but they can only go down to one-minute granularity, not 15 seconds. Sleeping inside the stored procedure until the next iteration is not an option either, because you would hold a transaction open the whole time, which is a really bad idea.
Strict answer: it is not possible. Since you don't want an outside process, and PostgreSQL doesn't support scheduled jobs, you are out of luck.
If you'll reconsider using outside processes, then you most likely want something like cron or, better yet, pgAgent.
On the other hand: what do you need to do that has to happen every 15 seconds? This seems like a design problem.
First, you'll spend the least amount of effort if you just go with a cron job.
However, if you were starting from scratch: you are trying to periodically replicate rows from your database, so I think you are looking at a replication queue.
The PGQ project (used for Londiste replication; both are part of Skype's SkyTools) has a queue that you can use independently. When configuring it, you set a maximum event count and a loop delay before batched events are generated; you can get batches spaced no more than 15 seconds apart that way. You then have to produce the events that will be batched, using a trigger that calls pgq.insert_event, and consume the queue. The consumer can call your PL/Java stored proc; you'll have to rewrite the procedure to send everything in the batch instead of scanning the base table for new events.
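A sketch of the producing side, assuming the pgq extension is installed and its ticker is running; the table, queue, and consumer names are placeholders:

    SELECT pgq.create_queue('object_updates');

    CREATE OR REPLACE FUNCTION enqueue_change() RETURNS trigger AS $$
    BEGIN
        -- Event type = table name, payload = the changed row's id.
        PERFORM pgq.insert_event('object_updates', TG_TABLE_NAME, NEW.id::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER enqueue_change_trg
    AFTER INSERT OR UPDATE ON base_table
    FOR EACH ROW EXECUTE PROCEDURE enqueue_change();

    -- The consumer registers once, then loops over batches with
    -- pgq.next_batch / pgq.get_batch_events / pgq.finish_batch:
    SELECT pgq.register_consumer('object_updates', 'jms_sender');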
As far as I know, PostgreSQL doesn't support scheduled tasks. You'll need to use a script with cron or at (depending on your operating system).
Sounds like you're doing some sort of replication? Every 15 s sounds like a lot of updates. Could you set up a trigger (or a number of triggers) instead of polling?
If you are using JMS, why not just have the task wait for input on the queue?
Per your comment to depesz, you have a PL/Java stored procedure that "flushes out database tables (updates) as java objects". Since you want it to run at 15-second intervals, it must be processing a batch of updates each time. Rather than processing a batch every 15 seconds, why not process updates one at a time, as they happen, via an AFTER UPDATE trigger, and eliminate the need for a timed interval? If you are aggregating data from multiple tables to build your objects, then add the triggers to your uppermost tables only.
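A sketch of that per-row route, assuming your PL/Java routine can be exposed as a callable SQL function (the function and table names here are invented):

    -- Thin plpgsql wrapper that pushes one change to the JMS sender.
    CREATE OR REPLACE FUNCTION push_update() RETURNS trigger AS $$
    BEGIN
        PERFORM send_jms_update(TG_TABLE_NAME, NEW.id);  -- your PL/Java function
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER push_update_trg
    AFTER UPDATE ON uppermost_table
    FOR EACH ROW EXECUTE PROCEDURE push_update();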
In my case the problem was that pgAgent couldn't authenticate to the database; after I made all connections from localhost trusted, the service started successfully and the job works fine.
For more information about the error, look in the Windows Event Viewer (or its equivalent on Unix-based systems). See my config file: C:\Program Files\PostgreSQL\10\data\pg_hba.conf
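The relevant pg_hba.conf line would be something like this (trust authentication is convenient but weak; restrict it to localhost at most):

    # TYPE  DATABASE  USER  ADDRESS       METHOD
    host    all       all   127.0.0.1/32  trust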
I want to write a service (probably in C#) that monitors a database table. When a record is inserted into the table, I want the service to grab the newly inserted data and perform some complex business logic with it (too complex for T-SQL).
One option is to have the service periodically check the table to see if new records have been inserted. The problem with doing it that way is that I want the service to know about the inserts as soon as they happen, and I don't want to kill the database performance.
Doing a little research, it seems like writing a CLR trigger could do the job. I could write a trigger in C# that fires when an insert occurs and then sends the newly inserted data to a Windows or WCF service.
What do you think, is that a good (or even possible) use of SQL CLR triggers?
Any other ideas on how to accomplish this?
You should probably decouple post-processing from inserting:
In the Insert trigger, add the record's PK into a queue table.
In a separate service, read from the queue table and do your complex operation. When finished, mark the record as processed (together with error/status info), or delete the record from the queue.
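A minimal sketch of that pattern (table and column names are placeholders):

    CREATE TABLE dbo.WorkQueue (
        QueueId    INT IDENTITY(1,1) PRIMARY KEY,
        RecordId   INT NOT NULL,          -- PK of the newly inserted row
        EnqueuedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
        Processed  BIT NOT NULL DEFAULT 0,
        Status     NVARCHAR(200) NULL     -- error/status info after processing
    );
    GO
    CREATE TRIGGER dbo.trg_SourceTable_Enqueue ON dbo.SourceTable
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.WorkQueue (RecordId)
        SELECT Id FROM inserted;          -- handles multi-row inserts too
    END;
    GO
    -- The service reads pending work without blocking the trigger:
    -- SELECT TOP (10) QueueId, RecordId
    -- FROM dbo.WorkQueue WITH (READPAST, UPDLOCK, ROWLOCK)
    -- WHERE Processed = 0 ORDER BY QueueId;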
What you are describing is sometimes called a Job Queue or a Message Queue. There are several threads about using a DBMS table (as well as other techniques) for doing this that you can find by searching.
I would consider doing anything like this with a trigger to be an inappropriate use of a database feature that is easy to get into trouble with anyway. Triggers are best used for low-overhead DBMS structural functionality (e.g. fine-grained referential integrity checking) and need to be lightweight and synchronous. It could be done, but it probably wouldn't be a good idea.
I would suggest having a trigger on the table that calls SQL Server Service Broker, which then (asynchronously) executes a CLR stored procedure that does all your work on a different thread.
I have a service that polls the database every minute; it doesn't cause that many performance problems, and it is a clean solution. Plus, if your service or other WCF endpoint is down, your trigger will fail or the notification will be lost, and you will have to poll anyway later.
I would not recommend using a CLR trigger, or any sort of trigger, for this. You are opening yourself up to serious maintainability and potential locking issues. (A very simple trigger that chucks stuff into an audit/queue table may be acceptable IF you don't care about @@IDENTITY after inserts and you will never lock the audit/queue table up.)
Instead, from your application/ORM you should insert into a queue table and have this queue processed on a regular basis. This can be done with a transaction in your ORM, or by kicking off a stored proc that starts a transaction and commits the change and the audit/queue entry atomically. (Be careful with locking here.)
If you need immediate action, look at spawning a job to clear the queue after you do an insert/update/delete on the table.
Also ensure you double-check the queue once a minute or so in case the background process wasn't kicked off properly. If it's a web app and you want to avoid spawning threads, you could communicate with a background process to clear up the queue.
Why not implement the insert in a stored procedure, and do the business logic in the procedure after the insert? What is so complicated about it that it can't be written in T-SQL?