I need to recompile a package in Oracle 9i, but the session hangs forever. When I checked V$SESSION_WAIT, I found it is waiting on the event 'library cache pin'. I couldn't find a solution for the 9i version. Is there any way to find the session that is executing my package and kill it?
Sure.
To find which sessions are running code that contains a given name:
SELECT s.*, sa.*
FROM v$session s
LEFT JOIN v$sqlarea sa
  ON s.sql_address = sa.address AND s.sql_hash_value = sa.hash_value
WHERE sa.sql_text LIKE '%your_package_name_here%';
After this, you have the SID and SERIAL# so you can kill the session you need to kill. (The query above may also return sessions that you do not need to kill; for example, it finds itself :) )
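Once you have them, killing the blocker is a single statement; a minimal sketch, where '123,456' stands in for the actual SID and SERIAL# you found:
-- Placeholders: replace 123 and 456 with the SID and SERIAL# from the query above.
ALTER SYSTEM KILL SESSION '123,456';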
Oracle offers no built-in easy path to do this.
If your application uses the DBMS_APPLICATION_INFO routines to register module usage, you're in luck: you can simply query V$SESSION filtering on MODULE and/or ACTION. Alternatively, perhaps you have trace messages in your PL/SQL which you can use?
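For example, something along these lines, assuming the application registers a module name (the module name used here is made up):
-- 'MY_BATCH_MODULE' is a hypothetical name registered via DBMS_APPLICATION_INFO.
SELECT sid, serial#, username, module, action
FROM v$session
WHERE module = 'MY_BATCH_MODULE';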
Otherwise, you need to start trawling V$SQLTEXT (or another of the several views which show SQL) for calls containing the package name. Remember to make the search case-insensitive. This will give you an ADDRESS and HASH_VALUE (or a SQL_ID on 10g and later) you can link to records in V$SESSION.
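A rough sketch of that trawl on 9i, assuming the package name is YOUR_PACKAGE (a placeholder):
-- Note: V$SQLTEXT stores each statement in 64-character pieces, so a name that
-- happens to span two pieces will be missed by this simple LIKE.
SELECT s.sid, s.serial#, s.username, t.sql_text
FROM v$sqltext t
JOIN v$session s
  ON s.sql_address = t.address AND s.sql_hash_value = t.hash_value
WHERE UPPER(t.sql_text) LIKE '%YOUR_PACKAGE%';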
This will only work if your package is the primary object, that is, at the top of the call stack; that is also one explanation for why the package has been locked for so long. But perhaps your package is called from some other package: in that case you might not get a hit in V$SQLTEXT, so you will need to find the programs which call it, through ALL_DEPENDENCIES, and sift V$SQLTEXT for those.
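The dependency lookup might look like this (again, YOUR_PACKAGE is a placeholder; add an owner filter if the name is not unique):
SELECT owner, name, type
FROM all_dependencies
WHERE referenced_name = 'YOUR_PACKAGE'
  AND referenced_type = 'PACKAGE';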
Yes, that does sound like a tedious job. And that is why it is a good idea to include some form of trace in long-running PL/SQL calls.
Is it possible to execute a method as a different user in Linux (or SELinux specifically)? The programs I have run in individual sandboxes, each with a different user and process ID. I have a situation where I have to execute a branch of code as a different user and with a different process ID, to prevent it from accessing the memory and disk space of the code that spawns it.
If that's not possible, can you shed some light on how much of the kernel code would have to be changed to achieve it? (I understand it's subjective. Alternatively, if you can suggest what to do and how to go about it, that would be very helpful.)
Protecting some resources from other code executing on the same machine is precisely what led to the invention of processes and UIDs.
If you are searching for a mechanism that looks like a simple function call, I would say it's impossible because it requires the memory to be shared between the caller and the callee. However, using fork/exec (or wrappers like system()) will give you some isolation as long as you deal with parameters/results using system objects like program parameters or pipes.
However, because *nix users are meant to protect processes from one another, an explicit relationship must be established between the two users before one can act on behalf of the other.
Actually, you may want to:
define a sudoers policy which gives your first user the right to run commands (or one particular command) as the second one;
use popen() (or system()) in your first program to call the less privileged code;
pass any parameters on the command line and parse the result from stdout.
As an extra, you may use the same binary for both executions; this way, all the code can live in the same place.
I'm just the unfortunate soul debugging an iSeries/RPG/SQL issue... (I'm not an RPG expert.)
I have a program which uses temporary tables declared in DB2 on the iSeries. The temporary tables are declared per session, so when I run the application and then debug the RPG via a terminal on the iSeries (I presume this is the right terminology?), I'm effectively in two different sessions.
The SQL I am looking at does something like this...
select blah from SESSION/#temp_table left join #real_table left join _to_many_other_tables
While I can query the "real table" fine, I can't see the contents of the SESSION table... so how would I go about querying a table in a different session? Presumably SESSION/#temp_table is something I could query by doing something like select * from 123123/#temp_table, but how would I know what the other session's id/name/variable/access token looks like?
You can use STRSRVJOB to debug another job, but this probably won't let you query that job's QTEMP. Traditionally, midrange programmers debug jobs like these interactively. Sign on to a green screen session and CALL the program you want to debug.
Between STRDBG, STRISDB, the system debugger and the SEP facility found in RDi, there are many options to tackle the debugging problem. Additionally, the open source DBG400 might be something to look at.
EDIT:
The problem is a difficult one. It looks like this is a client/server type app. When writing an app like this, I usually put a debug switch into it so I can log what's happening for debugging purposes. Stored procedures on DB2 for i can be implemented purely in SQL, or they can call out to an HLL like RPG for the implementation.
If your SPs are external, say RPG, then add some code that will copy the temporary files to a real library on the system. Implementing it as a system() or QCMDEXC call is not very intrusive to the existing program code. You can turn it on and off with a data area - again, very unintrusive, but you can set the debug state from outside the application.
If your SPs are SQL, I'd modify them to write a duplicate of the temporary file in a real library. Say there's a CREATE TABLE QTEMP/TEMP001 WITH DATA ... Add a CREATE TABLE DEBUGLIB/TEMP001 WITH DATA ... If you wanted to, you could key this extra code on a special 'debug' user profile, although that might require some security changes on the IBM i side.
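For the SQL case, a minimal sketch of that extra copy, assuming a permanent library named DEBUGLIB already exists (both names are placeholders):
-- System naming (/) is assumed here, matching the example above; with SQL
-- naming you would write DEBUGLIB.TEMP001 and QTEMP.TEMP001 instead.
CREATE TABLE DEBUGLIB/TEMP001 AS
  (SELECT * FROM QTEMP/TEMP001) WITH DATA;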
I need to add a user to my database for IIS AppPool\MyAppPool. I need to execute this simple query:
CREATE LOGIN [IIS AppPool\MyAppPool] FROM WINDOWS
I use the <sql:SQLString> element in WiX.
I use the <iis:WebAppPool> extension to create the application pool.
But the application pool is created after the SQL strings have been executed, so I get the error "User or group doesn't exist" from SQL Server.
Is it possible to execute the SQL strings after the application pool has been created? Or maybe it is possible to sequence ExecuteSqlStrings manually?
It is strange, but if I add my own custom action (which calls sqlcmd.exe and executes the query) after ConfigureIIs, everything works fine. But I don't like such a solution; I suppose using the built-in WiX elements would be better.
I had the same problem as you, and attempted to schedule ExecuteSqlStrings after ConfigureIIs to no avail. I searched through some old installers we had made and managed to find one that needed to accomplish the same thing. Instead of scheduling ExecuteSqlStrings, it scheduled InstallSqlData after ConfigureIIs. I tried that instead, and now the install works correctly. I don't know the specifics of what's different between InstallSqlData and ExecuteSqlStrings (logically you would think you need the latter), but this worked for me and hopefully works for you too.
This has probably been asked before, but I'm looking for a utility which can:
Identify a particular session and record all activity.
Identify the SQL that was executed under that session.
Identify any stored procedures/functions/packages that were executed.
Show what was passed as parameters into the procs/funcs.
I'm looking for an IDE that's lightweight, fast, readily available, and won't take two days to install, i.e. something I can download, install, and use in the next hour.
Bob.
If you have a license for the Oracle Diagnostics/Tuning Packs, you can use the Active Session History (ASH) feature.
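A minimal sketch of an ASH query, assuming you already know the SID you care about (123 is a placeholder):
-- Requires the Diagnostics Pack license; older samples end up in DBA_HIST_ACTIVE_SESS_HISTORY.
SELECT sample_time, session_id, sql_id, module, action, program
FROM v$active_session_history
WHERE session_id = 123
ORDER BY sample_time;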
The easiest way I can think of to do this is probably already installed in your database - it's the DBMS_MONITOR package, which writes trace files to the location identified by user_dump_dest. As such, you'd need help from someone with access to the database server to access the trace files.
But once you've identified the SID and SERIAL# of the session you want to trace, you can just call:
EXEC dbms_monitor.session_trace_enable (:sid, :serial#, FALSE, TRUE);
To capture all the SQL statements being run, including the values passed in as binds.
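Putting it together, a sketch assuming you locate the session by username (SOME_USER is a placeholder):
-- Find the session to trace.
SELECT sid, serial# FROM v$session WHERE username = 'SOME_USER';
-- Enable the trace as above, let the session run, then stop it:
EXEC dbms_monitor.session_trace_disable(:sid, :serial#);
-- The trace file appears under user_dump_dest on the database server.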
Part of the setup routine for the product I'm working on installs a database update utility. The utility checks the current version of the user's database and (if necessary) executes a series of SQL statements that upgrade the database to the current version.
Two key features of this routine:
Once initiated, it runs without user interaction
SQL operations preserve the integrity of the user's data
The goal is to keep the setup/database routine as simple as possible for the end user (the target audience is non-technical). However, I find that in some cases, these two features are at odds. For example, I want to add a unique index to one of my tables - yet it's possible that existing data already breaks this rule. I could:
Silently choose what's "right" for the user and discard (or archive) data; or
Ask the user to understand what a unique index is and get them to choose what data goes where
Neither option sounds appealing to me. I could compromise and not create a unique index at all, but that would suck. I wonder what others do in this situation?
Check out SQL Packager from Red-Gate. I have not personally used it, but these guys make good tools overall and this seems to do what you're looking for. It lets you modify the script to customize the install:
http://www.red-gate.com/products/SQL_Packager/index.htm
You never throw a user's data out. One possible option is to try to create the unique index. If the index creation fails, let them know it failed, tell them what they need to research, and provide a script they can run if they find they have a data error that they choose to fix up.
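A minimal sketch of the kind of pre-check you could ship alongside that script; the table and column names are placeholders:
-- List the values that would violate the proposed unique index so the user
-- (or support) can decide how to resolve them before re-running the upgrade.
SELECT some_column, COUNT(*) AS duplicates
FROM some_table
GROUP BY some_column
HAVING COUNT(*) > 1;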