I executed a script in Unix that called a function in an Oracle DB, but I didn't give the Unix script a logfile. Usually, when I run a script to call a DB function, I give the script a logfile and monitor it to see whether the function is still running or done. The logfile also records whether the function executed successfully.
Based on the above situation, I have the following concerns:
Can I monitor whether the function is still running using Oracle SQL Developer?
Can I know whether the function executed successfully in the Oracle DB? If Oracle saves a log of function executions and I could access it, that would be great.
Thank You
Yes, you can monitor whether the function is still running by checking the session's status in v$session. See this answer for information on how: How to list active / open connections in Oracle?
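A minimal sketch of such a check (YOUR_SCHEMA is a placeholder for the account the script connects as):

    SELECT sid, serial#, username, status, sql_id, last_call_et
    FROM   v$session
    WHERE  username = 'YOUR_SCHEMA'   -- placeholder: the account your script logs in as
    AND    status   = 'ACTIVE';       -- ACTIVE means a call is executing right now

For an ACTIVE session, last_call_et is roughly the number of seconds the current call has been running.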
As for what the execution result was... probably not.
The PL/SQL you executed won't directly appear in dba_audit_trail, but any queries it ran as part of its execution might. The audit trail will show whether those queries succeeded, but it won't show the query results or the final result of the function execution.
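If auditing happens to be enabled on the instance (an assumption; many instances are not audited), a query along these lines would show the outcome of the audited statements:

    SELECT timestamp, action_name, obj_name, returncode
    FROM   dba_audit_trail
    WHERE  username = 'YOUR_SCHEMA'   -- placeholder account name
    ORDER  BY timestamp DESC;

A returncode of 0 means the audited statement succeeded; a non-zero value is the ORA- error number it failed with.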
Related
I have a complex SQL script which I am running using db2 +v -txf sqls/connection.sql. This script is part of a Unix service which runs a lot of other scripts as well. The script queries a temporary session table, so I cannot run it manually (the table is gone by that time). I want to keep running the script as part of the service, but I would like to log the values of the variables being calculated in the SQL file. For example, the script has the following line:

    timestampdiff(1, char(max(END_TS) - min(START_TS))) as ELAPSED_TIME,

I would like to know the values of END_TS and START_TS.
What I have tried:
I tried adding -v to the db2 command; it printed the entire SQL being executed, but not the actual values.
If the Db2 server runs on Linux/Unix/Windows, you can use set serveroutput on; along with call dbms_output.PUT_LINE('......');, which lets you see what you logged as output when the script ends.
dbms_output.put_line docs.
The dbms_output module contains other useful services, and people familiar with Oracle will recognize it.
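As a sketch of how that could look in a script, assuming a hypothetical session table SESSION.MY_TEMP (an anonymous block needs an alternate statement terminator, e.g. db2 -td@ -vf script.sql):

    SET SERVEROUTPUT ON@

    BEGIN
      DECLARE v_min_start TIMESTAMP;
      DECLARE v_max_end   TIMESTAMP;
      -- capture the values while the session table still exists
      SELECT MIN(START_TS), MAX(END_TS)
        INTO v_min_start, v_max_end
        FROM SESSION.MY_TEMP;
      CALL DBMS_OUTPUT.PUT_LINE('START_TS = ' || VARCHAR(v_min_start));
      CALL DBMS_OUTPUT.PUT_LINE('END_TS   = ' || VARCHAR(v_max_end));
    END@

The output appears when the script ends, which is enough to see what the intermediate values were.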
If the Db2 server runs on z/OS or IBM i, you should tag your question db2-zos or db2-400, because the answer can depend on the platform.
I have created multiple SQL DB maintenance scripts which I am required to run in a defined order. I have 2 scripts. I want to run the 2nd script only on successful execution of the 1st script. The scripts contain queries that create tables, stored procedures, SQL jobs, etc.
Please suggest an optimal way of achieving this. I am using MS SQL Server 2012.
I am trying to implement it without using an SQL job.
I'm sure I'm stating the obvious, and it's probably because I don't fully understand what you meant by "executed successfully", but if you meant no SQL error while running:
The optimal way to achieve it is to create a job for your scripts, then create two steps: one for the first script and one for the second. Once both steps are there, go to the advanced options of step 1 and set it up to your needs.
[Screenshot of the job step's advanced options]
Can you create a SQL Server Agent job? You could set the steps to be Step 1: run the first script, Step 2: run the second script. In the job setup you can decide what to do when step 1 fails: just have it not run step 2. You can also set it up to email you, skip to another step, run some other code, etc. If anything in the first script failed with any error message, your second script would not run. -- If you really need to avoid a job, you could add some IF EXISTS checks to your second script, but that will get very messy very fast.
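For reference, the same two-step setup can be scripted rather than clicked through in the GUI. This is only a sketch, with made-up job, step, and procedure names:

    USE msdb;
    EXEC dbo.sp_add_job @job_name = N'DbMaintenance';
    EXEC dbo.sp_add_jobstep
         @job_name          = N'DbMaintenance',
         @step_name         = N'Script 1',
         @subsystem         = N'TSQL',
         @command           = N'EXEC dbo.usp_MaintenanceScript1;', -- hypothetical wrapper proc
         @on_success_action = 3,  -- 3 = go to the next step
         @on_fail_action    = 2;  -- 2 = quit reporting failure, so step 2 never runs
    EXEC dbo.sp_add_jobstep
         @job_name  = N'DbMaintenance',
         @step_name = N'Script 2',
         @subsystem = N'TSQL',
         @command   = N'EXEC dbo.usp_MaintenanceScript2;';
    EXEC dbo.sp_add_jobserver @job_name = N'DbMaintenance';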
If the two scripts are in different files:
Add a statement to the first script that logs its completion (with a date) into a table, and change the second script to read that table first and exit if the run was not successful.
If both are in the same file:
Ensure they run in a transaction, read @@TRANCOUNT at the start of the second script, and exit if it is less than 1.
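A sketch of the logging-table variant; the table and column names here are invented for illustration:

    -- at the end of script 1, record success:
    INSERT INTO dbo.ScriptLog (ScriptName, CompletedAt, Succeeded)
    VALUES (N'Script1', SYSDATETIME(), 1);

    -- at the start of script 2, bail out unless script 1 recently succeeded:
    IF NOT EXISTS (SELECT 1
                   FROM dbo.ScriptLog
                   WHERE ScriptName  = N'Script1'
                     AND Succeeded   = 1
                     AND CompletedAt >= DATEADD(HOUR, -1, SYSDATETIME()))
    BEGIN
        RAISERROR(N'Script1 has not completed successfully; aborting.', 16, 1);
        RETURN;  -- exits the current batch only
    END;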
SQL Server 2005's job scheduling subsystem, SQL Server Agent, maintains a set of log files with warning and error messages about the jobs it has run, written to the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory. SQL Server will maintain up to nine SQL Server Agent error log files. The current log file is named SQLAGENT.OUT, whereas archived files are numbered sequentially. You can view SQL Server Agent logs by using SQL Server Management Studio.
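Job outcomes can also be queried from msdb instead of reading the log files directly; for example (the job name is a placeholder):

    EXEC msdb.dbo.sp_help_jobhistory
         @job_name = N'DbMaintenance',  -- placeholder job name
         @mode     = N'FULL';           -- include step-level messages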
I have one scalar-valued function, func-A, and an inline table-valued function, func-B. func-A calls func-B, and func-B in turn calls func-A recursively, but the recursion never goes deep: it is always exactly 2 levels. For example, func-A calls func-B, func-B calls func-A again, and that is the end.
This works OK on my local SQL Server 2008 R2 but fails on the production server with the error "Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).". Strangely, on the production server the problem happens only for certain database instances; some instances work OK.
How do I overcome this problem? (I think I may need to turn on some option, for example RECURSIVE_TRIGGERS.) Thanks in advance.
Here are some simple steps to diagnose the recursive calls:
Use SQL Profiler and capture a set of inputs that cause the issue to manifest.
Connect via Management Studio, open a new query window, and execute the command to verify it fails.
Create a SQL Server Profiler session, with the following options:
Column Filter - SPID Is Equal to the SPID for your SSMS window
Include Event: SP:StmtStarting
Include Event: SP:StmtCompleted
This will show you the individual statements your UDFs are executing, letting you home in on the recursion path. Another option is to simply edit the procedure to PRINT its parameters at the top, allowing you to home in on the recursion-depth issue at the data level.
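As a small complement to the trace, @@NESTLEVEL reports how deep the current module sits in the call stack (the limit in the error message is 32). A sketch, assuming @@NESTLEVEL is usable in your UDFs; the function name is made up:

    CREATE FUNCTION dbo.fn_NestProbe()
    RETURNS int
    AS
    BEGIN
        RETURN @@NESTLEVEL;  -- 1 when called from a batch, higher inside other modules
    END;
    GO
    SELECT dbo.fn_NestProbe() AS depth_from_batch;  -- expect 1 here

Selecting dbo.fn_NestProbe() from inside func-A or func-B would show how deep the stack already is at the point where the error occurs.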
My colleague created a project in Talend to read data from an Oracle database.
I used his project, so I have his job context with the connection parameters for the Oracle DB, and Talend successfully connects on that computer.
I've created a trivial job composed of two components: tOracleInput, which should read the data, and tLogRow, which should redirect the output to Talend's terminal.
The problem is that when I start the job, no data is output to the terminal, and instead of the rows-per-second counter I see a Starting ... status.
Could it be a connection issue, an unsuitable Java version on my computer, or something else?
The Starting... status means that the query is being executed. A simple query against the database usually takes only a few seconds, because Oracle starts returning data before it has completed a full table scan. Queries with joins and filters can take advantage of this behavior, but GROUP BY / ORDER BY cannot.
On the other hand, if you are querying a view, executing a complex query, or simply using DISTINCT, execution can take a few minutes, because the Oracle database has to generate the whole result set on the database side before returning any records.
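To illustrate the difference (the table name is hypothetical):

    -- Oracle can stream matching rows as it finds them, so the client sees
    -- output almost immediately:
    SELECT * FROM orders WHERE status = 'OPEN';

    -- The sort forces the whole result set to be produced first, so the client
    -- sits in "Starting..." until it finishes:
    SELECT * FROM orders ORDER BY created_at;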
I use an application connected to a SQL database. Using the profiler, I found that the application runs an UPDATE query with a syntax error; I don't have access to the application's source code. The result is that the record is not updated. Is there a way to modify the query every time it is executed, with something like a trigger? I can't use INSTEAD OF, because no record is updated or inserted.
This answer:
https://stackoverflow.com/a/3319031/1359088
suggests a way to log all the errors to a text file. You could write a little utility and schedule it to run every hour or whatever, to read through this log, find the erroneous SQL statements, fix them, and then run them itself.
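One way to capture the failing statements on the server side is an Extended Events session that records every reported error along with the statement text; a sketch, assuming SQL Server 2012 or later (the session name is made up):

    CREATE EVENT SESSION capture_errors ON SERVER
    ADD EVENT sqlserver.error_reported (
        ACTION (sqlserver.sql_text, sqlserver.client_app_name)
        WHERE severity > 10  -- actual errors, not informational messages
    )
    ADD TARGET package0.event_file (SET filename = N'capture_errors');
    GO
    ALTER EVENT SESSION capture_errors ON SERVER STATE = START;

The scheduled utility could then read the .xel file with sys.fn_xe_file_target_read_file, pick out the broken UPDATE, and run a corrected version.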