Setting timeout for query in oracle - sql

We have a data warehouse setup where we use Oracle 12c and Informatica for ETL. We call some hourly procedures from an Informatica workflow. Sometimes these procedures take more than one hour for various reasons. Is it possible to set a timeout, at the database level or the Informatica level, that will terminate the current execution and generate a mail alert when it fires?
Best Regards

Well... no. This and a bunch of other features are not part of Informatica. This is where an external orchestration tool is very helpful: one that takes care of file watching and triggering workflows upon file arrival, reports when a workflow runs too long or too short, notifies you when a file you expect has not been received, and so on.
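At the pure database level, one option worth mentioning (this is a hedged sketch, not an Informatica feature, and every object name below such as ETL_GROUP, ETL_TIMEOUT_PLAN and INFA_USER is a made-up placeholder) is Oracle Resource Manager: a plan directive can cancel any call that runs longer than a fixed time, so a procedure call that exceeds one hour fails with an ORA- error that Informatica will surface as a session failure. The mail alert itself would still have to come from the workflow's failure handling or from an orchestration tool.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- Consumer group for the hourly ETL sessions (placeholder name)
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'ETL_GROUP',
    comment        => 'Hourly Informatica procedures');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'ETL_TIMEOUT_PLAN',
    comment => 'Cancel calls that run longer than one hour');

  -- Cancel the current call after 3600 seconds of active time
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'ETL_TIMEOUT_PLAN',
    group_or_subplan => 'ETL_GROUP',
    comment          => 'One hour cap on a single call',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 3600,
    switch_for_call  => TRUE);

  -- Every plan must also have a directive for OTHER_GROUPS
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'ETL_TIMEOUT_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Everything else, unrestricted');

  -- Map the database user Informatica connects as (placeholder) to the group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'INFA_USER',
    consumer_group => 'ETL_GROUP');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

  -- The user also needs the privilege to be switched into the group
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
    grantee_name   => 'INFA_USER',
    consumer_group => 'ETL_GROUP',
    grant_option   => FALSE);
END;
/

ALTER SYSTEM SET resource_manager_plan = 'ETL_TIMEOUT_PLAN';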

Related

SQL Server: Using triggers for workflow automation

In a media management system, my task is to create workflow automation. Currently, I have created it using SQL Server triggers, with the UI built in ASP.NET with jQuery.
For example:
When a new file enters the system, the trigger fires and updates the metadata table with some data for that file.
Millions of assets go through the system. Is it ideal to have triggers do this processing?
Is there a better way to create this automation?
Is there a "best practice" for this kind of work?
I'm having the same issue, and data enters my central asset database in several ways (which may differ from client to client).
So I also want to create an easily customizable workflow in the data layer (no other dependencies).
As the other people mention, triggers may affect the parent activity.
That can be overcome by having the trigger write the action that should be performed to a queue table instead.
Example: a trigger on Hardware fires when Status = 'Issue Work Order' and queues the work:
INSERT INTO Queue (Created, Task, Completed) VALUES (GETUTCDATE(), 'EXEC dbo.IssueWorkOrder 123', 0);
Inserting a record into your queue table reduces the problems highlighted in the other comments.
Then you build a scheduling tool (Hangfire, SQL Agent jobs, or whatever) that executes the tasks in the queue in the order they were added.
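A minimal sketch of what that could look like in T-SQL (the table matches the INSERT above; the procedure name dbo.ProcessQueue is made up for illustration):

-- Queue table matching the INSERT above
CREATE TABLE dbo.Queue (
    QueueId   INT IDENTITY(1,1) PRIMARY KEY,
    Created   DATETIME2      NOT NULL,
    Task      NVARCHAR(4000) NOT NULL,  -- command text to execute
    Completed BIT            NOT NULL DEFAULT 0
);
GO

-- Worker proc, run by the scheduler (Hangfire, SQL Agent, ...):
-- picks up pending tasks in insertion order and executes them.
CREATE PROCEDURE dbo.ProcessQueue
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @QueueId INT, @Task NVARCHAR(4000);

    WHILE 1 = 1
    BEGIN
        SET @QueueId = NULL;

        SELECT TOP (1) @QueueId = QueueId, @Task = Task
        FROM dbo.Queue
        WHERE Completed = 0
        ORDER BY Created, QueueId;

        IF @QueueId IS NULL BREAK;   -- nothing left to do

        EXEC (@Task);                -- run the queued command

        UPDATE dbo.Queue SET Completed = 1 WHERE QueueId = @QueueId;
    END
END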
Now, of course in practice it's not as simple as that. You will have to address the following:
What if a step fails?
Dependencies on previous steps having completed first
Multiple operators changing a record (the delay between the job step being executed and another person updating the same record)
I guess #2 and #3 are an issue with any workflow engine / pipeline. To address them, a locking mechanism must be put in place.
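One hedged way to do that locking in SQL Server is to claim queue rows with UPDLOCK and READPAST hints, so two workers can never pick up the same row (column names follow the sketch above):

DECLARE @QueueId INT, @Task NVARCHAR(4000);

BEGIN TRAN;

-- READPAST skips rows another worker has already locked;
-- UPDLOCK holds our claim on the row until commit.
SELECT TOP (1) @QueueId = QueueId, @Task = Task
FROM dbo.Queue WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE Completed = 0
ORDER BY Created, QueueId;

IF @QueueId IS NOT NULL
BEGIN
    EXEC (@Task);
    UPDATE dbo.Queue SET Completed = 1 WHERE QueueId = @QueueId;
END

COMMIT;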

Trapping All Batch Job from MVS

I'm trying to trap all the batch jobs from MVS.
I want to transmit all the batch job information (start, end, errors) to an external system in order to conduct further analysis.
Has anyone got any idea on how to do this ?
Write an IEFACTRT exit (or whatever its modern day equivalent is) and have the systems programmers install it.
IBM actually provides a facility for this. You can have it write SMF (System Management Facility) records for all jobs. The record layouts are available and you can write code to do analysis on them or you can get 3rd party products like OmegaMon that will do the analysis and reporting for you.
In my shop, we print the job info into plain files, FTP them down to some file servers, run extract/format scripts there, and pull the data into a BI platform for later analysis/visualisation.
Currently, we are studying how to use a graph database like Neo4j to better understand our batch job relationships and to present those relationships to the people who are interested. For now we think a graph database is a very neat tool for this kind of thing (batch job management)...
Hope my answer gives you some inspiration/reminders...
Typically, installations cut SMF type 30 records. Subtype 1 is written when a new transaction is started; here, transaction means a System Resources Manager (SRM) transaction, not to be confused with transactions in the context of, e.g., a database system. A batch job that begins execution is such a transaction. Subtype 5 is written when a transaction ends; along with subtype 5 there is a completion section that reports the job termination status.
Now, SMF processing is traditionally done in batch, as you have to prepare the SMF records first, either by extracting them from the log stream or from one of the SYS1.MANx data sets.
But recently, capabilities have been added to z/OS that allow you to hook into the process when SMF records are written. A product like the IBM Common Data Provider for z/OS can be used to transform the data the way you want it and to stream it to a destination of your choice, for instance Logstash. Following such a technique allows you to process SMF records almost in real time.

Sql Server 2005 - SSIS statistics per component per run

Coming from a different ETL tool, I'm trying to figure out how to get (production) statistics on each component as it runs in SSIS.
For example, if the flat file source is reading from an external feed whose throughput varies a lot (the rows/sec changes drastically at different times), I would like to know that.
If an SSIS package has a significant 'slow point' (buffer filling up / data stream impacted), I would also like to know that information.
And the CPU time and read IO/write IO that you can get for sprocs (for example from the DMVs) would also be ideal (and useful for people demonstrating improvement by moving from sprocs to SSIS in a consistent/measurable way).
The reason I'm asking this question is I see the rows going through BIDS during debugging, but it may not reflect the actual rows/sec on each component in production.
How would one enable/introspect/obtain these kinds of statistics for production environments (even if it takes a small performance hit, the numbers are a big deal)?
Thanks!
-Darren
This is difficult to do in SSIS 2005. I have seen the runtime engine "just stop" when trying to perform task-level logging from event handlers in complex SSIS packages. One thought: instrument only the Data Flows by adding Row Count transformations just after the source adapters and on each data flow path that outputs rows. Then add an Execute SQL Task to each Data Flow Task's OnPreExecute event handler to log the start of execution, and another Execute SQL Task to the corresponding OnPostExecute event handler. In the OnPostExecute logic, store the row counts and the end time of the Data Flow Task's execution. I believe that will provide enough metrics to calculate throughput for the data flow pipeline.
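A hedged sketch of the logging side of that approach (the table and SSIS variable names are just illustrative): the OnPreExecute Execute SQL Task would run the INSERT and the OnPostExecute task the UPDATE, with the ? placeholders mapped to package variables such as System::PackageName, System::SourceName and a user variable populated by the Row Count transformation.

-- Illustrative log table for data flow executions
CREATE TABLE dbo.DataFlowLog (
    LogId       INT IDENTITY(1,1) PRIMARY KEY,
    PackageName VARCHAR(200) NOT NULL,
    TaskName    VARCHAR(200) NOT NULL,
    StartTime   DATETIME     NOT NULL,
    EndTime     DATETIME     NULL,
    RowsRead    INT          NULL
);

-- OnPreExecute Execute SQL Task: parameters 0 and 1 mapped to
-- System::PackageName and System::SourceName
INSERT INTO dbo.DataFlowLog (PackageName, TaskName, StartTime)
VALUES (?, ?, GETDATE());

-- OnPostExecute Execute SQL Task: parameter 0 mapped to the user
-- variable holding the row count, 1 and 2 to the same system variables
UPDATE dbo.DataFlowLog
SET EndTime = GETDATE(), RowsRead = ?
WHERE LogId = (SELECT MAX(LogId) FROM dbo.DataFlowLog
               WHERE PackageName = ? AND TaskName = ?);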
Hope this helps,
Andy
Not sure if it will help, but maybe you can try configuring logging on your package and selecting "SSIS log provider for SQL Server Profiler".
It shows quite a bit of information between the beginning and end of the data source processing.

Database Job Scheduling

I have a procedure written in PLJava that sends out updates over JMS in my postgres database.
What I would like to do is have that function called on an interval (every 15 seconds) internally in the database (preferably not from an outside process). Is this possible? Any ideas?
If you need no external access, you are presumably able to modify the database design so that you don't need the update at all. Can you explain more about what the update is doing?
As depesz said, you could use either cron or pgAgent, but they only go down to one-minute granularity, not 15 seconds. Sleeping inside the stored procedure until the next iteration is not an option either, because you would hold an open transaction for all that time, which is a really bad idea.
Strict answer: it is not possible. Since you don't want an outside process, and PostgreSQL doesn't support jobs, you are out of luck.
If you'll reconsider using outside processes, then you most likely want something like cron, or better yet pgAgent.
On the other hand, what do you need to do that has to happen every 15 seconds? This seems like a problem with the design.
First, you'll spend the least amount of effort if you just go with a cron job.
However, if you were starting from scratch: you are trying to periodically replicate rows from your database. I think you are looking at a replication queue.
The PGQ project (used for Londiste replication; both are part of Skype's SkyTools) has a queue that you can use independently. When configuring it, you set a maximum event count and a loop delay before batched events are generated. You can get batches spaced by no more than 15 seconds that way. You then have to produce the events that will be batched, using a trigger that calls pgq.insert_event, and consume the queues. The consumer can call your PL/Java stored proc; you'll have to rewrite the procedure to send everything in the batch instead of scanning the base table for new events.
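A rough sketch of the producing side, assuming PGQ is installed and using made-up queue/table names (the consumer side then registers with pgq.register_consumer and fetches batches with pgq.next_batch / pgq.get_batch_events, handing them to the PL/Java proc):

-- Create the queue once (PGQ must already be installed)
SELECT pgq.create_queue('update_queue');

-- Trigger function that turns row changes into queued events
CREATE OR REPLACE FUNCTION enqueue_update() RETURNS trigger AS $$
BEGIN
    -- event type = table name, event payload = primary key of the changed row
    PERFORM pgq.insert_event('update_queue', TG_TABLE_NAME, NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_enqueue
AFTER INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE enqueue_update();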
As far as I know, PostgreSQL doesn't support scheduled tasks. You'll need to use a script with cron or at (depending on your operating system).
Sounds like you're doing some sort of replication? Every 15 seconds sounds like a lot of updates. Could you set up a trigger (or a number of triggers) instead of polling?
If you are using JMS, why not just have the task wait for input on the queue?
Per your comment on depesz's answer, you have a PL/Java stored procedure that "flushes out database tables (updates) as Java objects". Since you want it to run in 15-second intervals, it must be processing a batch of updates each time. Rather than processing a batch of updates in a stored procedure every 15 seconds, why not process them one at a time, as they happen, via an AFTER UPDATE trigger and eliminate the need for a timed interval? If you are aggregating data from multiple tables to build your objects, then add the triggers to your uppermost tables only.
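A minimal sketch of that idea, assuming the PL/Java procedure is exposed as a function send_update(bigint) and the table has an id column (both are placeholders):

-- Push each change out as it happens instead of polling every 15 seconds
CREATE OR REPLACE FUNCTION notify_update() RETURNS trigger AS $$
BEGIN
    PERFORM send_update(NEW.id);   -- the PL/Java function that publishes to JMS
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_after_update
AFTER UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE notify_update();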
In my case the problem was that the agent couldn't authenticate to the database, so after I made all connections from localhost trusted, the service started successfully and the job works fine.
For more information about the error, look in the Windows Event Viewer (or its equivalent on a Unix-based system). See my config file: C:\Program Files\PostgreSQL\10\data\pg_hba.conf
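For reference, the kind of pg_hba.conf lines that make localhost connections trusted look roughly like this (adjust to your own security requirements; 'trust' means no password is asked at all):

# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  trust
host    all       all   ::1/128       trust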

Start stored procedures sequentially or in parallel

We have a stored procedure that runs nightly that in turn kicks off a number of other procedures. Some of those procedures could logically be run in parallel with some of the others.
How can I indicate to SQL Server whether a procedure should be run in parallel or serially, i.e. kicked off asynchronously or in a blocking fashion?
What would be the implications of running them in parallel, keeping in mind that I've already determined that the processes won't be competing for table access or locks, just total disk IO and memory? For the most part they don't even use the same tables.
Does it matter if some of those procedures are the same procedure, just with different parameters?
If I start a pair of procedures asynchronously, is there a good mechanism in SQL Server to wait for both of them to finish, or do I need to have each of them set a flag somewhere and poll that flag periodically using WAITFOR DELAY?
At the moment we're still on SQL Server 2000.
As a side note, this matters because the main procedure is kicked off in response to the completion of a data dump into the server from a mainframe system. The mainframe dump takes all but about 2 hours each night, and we have no control over it. As a result, we're constantly trying to find ways to reduce processing times.
I had to research this recently, so found this old question that was begging for a more complete answer. Just to be totally explicit: TSQL does not (by itself) have the ability to launch other TSQL operations asynchronously.
That doesn't mean you don't still have a lot of options (some of them mentioned in other answers):
Custom application: Write a simple custom app in the language of your choice, using asynchronous methods. Call a SQL stored proc on each application thread.
SQL Agent jobs: Create multiple SQL Agent jobs, and start them asynchronously from your proc using sp_start_job. You can check whether they have finished yet using the undocumented function xp_sqlagent_enum_jobs, as described in this excellent article by Gregory A. Larsen. (Or have the jobs themselves update your own JOB_PROGRESS table as Chris suggests.) You would literally have to create a separate job for each parallel process you anticipate running, even if they are running the same stored proc with different parameters. A minimal sketch of this approach follows after this list.
OLE Automation: Use sp_oacreate and sp_oamethod to launch a new process calling the other stored proc as described in this article, also by Gregory A. Larsen.
DTS Package: Create a DTS or SSIS package with a simple branching task flow. DTS will launch tasks in individual spids.
Service Broker: If you are on SQL2005+, look into using Service Broker
CLR Parallel Execution: Use the CLR commands Parallel_AddSql and Parallel_Execute as described in this article by Alan Kaplan (SQL2005+ only).
Scheduled Windows Tasks: Listed for completeness, but I'm not a fan of this option.
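For the SQL Agent jobs option above, here is a minimal sketch of kicking off two jobs from the master proc and polling your own JOB_PROGRESS table until both report done (the job names and the table are placeholders; on SQL Server 2000 these procs live in msdb just the same):

-- Reset the progress rows, then fire both jobs; sp_start_job returns immediately
UPDATE dbo.JOB_PROGRESS SET Finished = 0
WHERE JobName IN ('Nightly - Load Customers', 'Nightly - Load Orders');

EXEC msdb.dbo.sp_start_job @job_name = 'Nightly - Load Customers';
EXEC msdb.dbo.sp_start_job @job_name = 'Nightly - Load Orders';

-- Each job's final step runs:
--   UPDATE dbo.JOB_PROGRESS SET Finished = 1 WHERE JobName = '<its own name>';

-- Wait for both to finish before the master proc continues
WHILE EXISTS (SELECT 1 FROM dbo.JOB_PROGRESS
              WHERE JobName IN ('Nightly - Load Customers', 'Nightly - Load Orders')
                AND Finished = 0)
BEGIN
    WAITFOR DELAY '00:00:30';   -- poll every 30 seconds
END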
I don't have much experience with Service Broker or CLR, so I can't comment on those options. If it were me, I'd probably use multiple Jobs in simpler scenarios, and a DTS/SSIS package in more complex scenarios.
One final comment: SQL already attempts to parallelize individual operations whenever it can*. This means that running 2 tasks at the same time instead of after each other is no guarantee that it will finish sooner. Test carefully to see whether it actually improves anything or not.
We had a developer that created a DTS package to run 8 tasks at the same time. Unfortunately, it was only a 4-CPU server :)
*Assuming default settings. This can be modified by altering the server's Maximum Degree of Parallelism or Affinity Mask, or by using the MAXDOP query hint.
Create a couple of SQL Server agent jobs where each one runs a particular proc.
Then from within your master proc kick off the jobs.
The only way of waiting that I can think of is if you have a status table that each proc updates when it's finished.
Then yet another job could poll that table for total completion and kick off a final proc. Alternatively, you could have a trigger on this table.
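A hedged sketch of the trigger variant (dbo.ProcStatus and the job name are placeholders): when the last proc marks itself finished, the trigger starts the final job.

CREATE TRIGGER trg_ProcStatus_AllDone ON dbo.ProcStatus
AFTER UPDATE
AS
BEGIN
    -- Fire the final step only once everything reports finished
    IF NOT EXISTS (SELECT 1 FROM dbo.ProcStatus WHERE Finished = 0)
        EXEC msdb.dbo.sp_start_job @job_name = 'Nightly - Final Step';
END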
The memory implications are completely up to your environment.
UPDATE:
If you have access to the Windows task system, then you could take the same approach. Just have Windows execute multiple tasks, each responsible for one proc. Then use a trigger on the status table to kick something off when all of the tasks have completed.
UPDATE2:
Also, if you're willing to create a new app, you could house all of the logic in a single exe...
You do need to move your overnight sprocs to jobs. SQL Server job control will let you do all of the scheduling you are asking for.
You might want to look into using DTS (which can be run from the SQL Agent as a job). It will give you pretty fine control over which stored procedures need to wait for others to finish and which can run in parallel. You can also run the DTS package as an EXE from your own scheduling software if needed.
NOTE: You will need to create multiple copies of your connection objects to allow calls to run in parallel. Two calls using the same connection object will still block each other even if you don't explicitly put in a dependency.