I have a complex SQL script which I run with db2 +v -txf sqls/connection.sql. This script is part of a Unix service which runs a lot of other scripts as well. The script queries a temporary session table, so I cannot run it manually (the table is gone by that time). I want to keep running the script as part of the service, but I would like to log the values of the variables being calculated in the SQL file. For example, the script has the following line:

timestampdiff(1, char(max(END_TS) - min(START_TS))) as ELAPSED_TIME,

I would like to know the values of END_TS and START_TS.
What I have tried:
I tried adding -v to the db2 command; it printed the entire SQL being executed, but not the actual values.
If the Db2-server runs on Linux/Unix/Windows, you can use set serveroutput on; along with call dbms_output.put_line('......'); which lets you see whatever you logged as output when the script ends.
See the dbms_output.put_line docs.
The dbms_output module contains other useful services, and people familiar with Oracle will recognize it.
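A minimal sketch of that approach for Db2 on Linux/Unix/Windows (SESSION.WORK_TABLE and its columns are assumptions standing in for your real session table; the compound statement needs a non-default statement terminator, so you would invoke it with something like db2 -td@ -vf sqls/connection.sql):

SET SERVEROUTPUT ON@

BEGIN
  DECLARE v_start TIMESTAMP;
  DECLARE v_end   TIMESTAMP;
  -- hypothetical table/column names; substitute the real temporary session table
  SELECT MIN(START_TS), MAX(END_TS) INTO v_start, v_end
    FROM SESSION.WORK_TABLE;
  CALL DBMS_OUTPUT.PUT_LINE('min(START_TS) = ' || VARCHAR(v_start));
  CALL DBMS_OUTPUT.PUT_LINE('max(END_TS)   = ' || VARCHAR(v_end));
END@

The logged lines are printed when the script finishes, so this works inside the service run where you cannot watch the table interactively.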
If the Db2-server runs on z/OS or IBM i, you should tag your question as db2-zos or db2-400, because the answer can depend on the platform.
Related
I have a very large table script that is about ^ GB in size and cannot open it in Query Editor (obviously) due to memory/size.
I am trying to run it on the DB server with the command prompt, using sqlcmd:
I am 100% sure the path and script name are correct (marked out for privacy reasons). I then used the following two scripts to get the DBServer\SQLInstance:
SELECT @@servername
SELECT @@servicename
What am I missing? It appears it has not done anything, with the 1> prompt just sitting there. Do I need to do anything else?
I'm pretty sure the Windows command line pipeline is just choking on your previous command.
I think the best chance you have is doing this using pymssql: https://pypi.org/project/pymssql/, given the SQL instance has the memory to handle the data stream.
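For reference, a hedged sketch of the sqlcmd invocation the question seems to be attempting: passing the script with -i runs it non-interactively instead of leaving you at the 1> prompt (server, instance, database, and paths are placeholders):

sqlcmd -S DBServer\SQLInstance -d MyDatabase -E -i "C:\scripts\big_table_script.sql" -o "C:\scripts\run.log"

-E uses Windows authentication; -o writes the results and any error messages to a log file you can inspect afterwards.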
I have created multiple SQL DB maintenance scripts which I am required to run in a defined order. I have 2 scripts. I want to run the 2nd script only on successful execution of the 1st script. The scripts contain queries that create tables, stored procedures, SQL jobs, etc.
Please suggest an optimal way of achieving this. I am using MS SQL Server 2012.
I am trying to implement it without using an SQL job.
I'm sure I'm stating the obvious, and it's probably because I don't fully understand what you meant by "executed successfully", but if you mean no SQL error while running:
The optimal way to achieve it is to create a job for your scripts, then create two steps - one for the first script and one for the second. Once both steps are there, go to the advanced options of step 1 and set it up to your needs.
Can you create a SQL Server Agent job? You could just set the steps to be Step 1: run the first script, Step 2: run the second script. In the agent job setup you can decide what to do when step 1 fails. Just have it not run step 2. You can also set it up to email you, skip to another step, run some other code, etc. If anything the first script did fails with an error message, your second script will not run. If you really need to avoid a job, you could add some IF EXISTS statements to your second script, but that will get very messy very fast.
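For reference, the same step wiring can be scripted rather than clicked through in the GUI; a minimal sketch using msdb's stored procedures (the job name and the two commands are placeholders):

USE msdb;
EXEC dbo.sp_add_job     @job_name = N'NightlyMaintenance';
EXEC dbo.sp_add_jobstep @job_name = N'NightlyMaintenance',
    @step_name = N'Run script 1', @subsystem = N'TSQL',
    @command   = N'EXEC dbo.MaintenanceScript1;',  -- placeholder for your first script
    @on_success_action = 3,  -- 3 = go to the next step
    @on_fail_action    = 2;  -- 2 = quit the job reporting failure
EXEC dbo.sp_add_jobstep @job_name = N'NightlyMaintenance',
    @step_name = N'Run script 2', @subsystem = N'TSQL',
    @command   = N'EXEC dbo.MaintenanceScript2;',  -- placeholder for your second script
    @on_success_action = 1,  -- 1 = quit reporting success
    @on_fail_action    = 2;
EXEC dbo.sp_add_jobserver @job_name = N'NightlyMaintenance';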
If the two scripts are in different files:
Add a statement to the end of the first script that logs its completion and the date into a table, and change the second script to read that table first and exit if it finds no success record (see the sketch below).
If both are in the same file:
Ensure they run in a transaction, and read @@trancount at the start of the second script and exit if it is less than 1.
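A minimal sketch of the different-files variant (the table, column, and script names are assumptions):

-- one-time setup: a hypothetical logging table
CREATE TABLE dbo.ScriptRunLog (
    ScriptName  sysname   NOT NULL,
    CompletedAt datetime2 NOT NULL DEFAULT SYSDATETIME()
);

-- last statement of script 1: record successful completion
INSERT INTO dbo.ScriptRunLog (ScriptName) VALUES (N'Script1');

-- first statement of script 2: exit unless script 1 completed today
IF NOT EXISTS (SELECT 1 FROM dbo.ScriptRunLog
               WHERE ScriptName = N'Script1'
                 AND CompletedAt >= CAST(GETDATE() AS date))
BEGIN
    RAISERROR(N'Script1 has not completed; aborting.', 16, 1);
    RETURN;  -- exits the current batch
END

For the same-file variant, the guard is simply IF @@TRANCOUNT < 1 followed by the same abort.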
SQL Server 2005's job scheduling subsystem, SQL Server Agent, maintains a set of log files with warning and error messages about the jobs it has run, written to the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory. SQL Server will maintain up to nine SQL Server Agent error log files. The current log file is named SQLAGENT.OUT, whereas archived files are numbered sequentially. You can view SQL Server Agent logs by using SQL Server Management Studio.
I executed a script in Unix that called a function in an Oracle DB. I didn't give the logfile information for the Unix script. Usually, when I run a script that calls a DB function, I give the script a logfile and monitor the Unix logfile to know whether the function is still running or done. The logfile also records whether the function executed successfully or not.
I have following concerns, based on above situation:
Can I monitor whether the function is still running using Oracle SQL Developer?
Can I know whether the function executed successfully in the Oracle DB? If Oracle saves a log of function executions and I could access it, that would be great.
Thank You
Yes, you can monitor if the function is still running by checking the session's status in v$session. See this answer for information on how: How to list active / open connections in Oracle?
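A minimal sketch of that check (the username is a placeholder, and you need SELECT access to the v$session view):

SELECT sid, serial#, username, status, last_call_et
FROM   v$session
WHERE  username = 'MY_SCHEMA'   -- hypothetical user the script connects as
AND    status   = 'ACTIVE';

While the function is still running, its session shows up as ACTIVE, and last_call_et tells you for roughly how many seconds.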
As for what the execution result was... probably not.
The PL/SQL you executed won't directly appear in dba_audit_trail, but any queries it ran as part of execution might. The audit trail will show if the queries were successful or not, but it won't show the query results or the final result of the function execution.
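A hedged sketch of that audit-trail check, assuming auditing is enabled on the instance and you can read the DBA views (the username is a placeholder):

SELECT timestamp, username, action_name, obj_name, returncode
FROM   dba_audit_trail
WHERE  username = 'MY_SCHEMA'
ORDER  BY timestamp DESC;

A returncode of 0 means the audited statement succeeded; a non-zero value is the ORA- error number it failed with.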
I have the following challenge:
I would like to execute a batch of *.sql files on one database. The sql files are assumed to be named in ascending order of their execution sequence. So the main sql script should do a 'dir /s *.sql', then start each of the found scripts in order.
Is this possible?
Below is something I found for SQL Server, but I want something similar for Oracle SQL Developer.
http://pradeep1210.wordpress.com/2012/03/15/executing-a-set-of-sql-script-files-sql-on-a-group-of-sql-server-databases/
Thanks in advance.
Raymond
Create a folder, e.g. Batch_Files, on your local machine, which will contain all the SQL scripts that you want to execute.
Then open your SQL Developer and create a file called batch.sql in your Batch_Files folder.
In batch.sql, add the SQL files that you want to execute, in sequence:
#file1.sql
#file2.sql
:
:
#fileN.sql
These files contain the code that you need to run in sequence. This is a very basic example; you can adapt it to your needs, for instance by adding an anonymous block to print something after each file executes. I have not tested this in SQL Developer, but I think it will work for you.
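If you would rather not maintain batch.sql by hand, the dir /s idea from the question can generate it; a hedged one-liner for the Windows command prompt (double the % signs if you put it in a .bat file, keep batch.sql outside the scanned folder so it does not list itself, and the path is a placeholder):

for /f "delims=" %f in ('dir /b /s "C:\Batch_Files\*.sql"') do @echo @%f >> batch.sql

dir /b /s lists the files sorted within each directory, which matches the assumption that the files are named in ascending execution order; you then run the generated file itself with @batch.sql.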
When trying to automate reading out constraint information using sp_helpconstraint, I got the bright idea of pulling out the source code of the built-in SP directly and running it myself (since it returns multiple result sets, those can't be stored in a temp table). So I ran exec sp_helptext 'sp_helpconstraint' (on SQL Azure) to generate the source code, and copied it into a new query window.
However, when I run the SP (on SQL Azure), I get lots of error messages -- for example, that the object syscomments doesn't exist, even though I am using the exact same source that runs perfectly when calling sp_helpconstraint directly. Just to make sure it wasn't an anomaly with the procedure or a mistake in my copy/paste execution, I tested the exact same procedure on SQL Server 2008; there, if I directly copy the SP source into a new query window, it runs perfectly (obviously after removing the return statements and manually setting the input parameters).
What gives? Do built-in SPs run in a special context on SQL Azure, where more commands are available than normal? Is sp_helptext not returning the actual source that is being run on SQL Azure?
If you want me to try anything out, give a suggestion and I can try it on our SQL Azure Development instance. Thanks!