I would like to ask: how can I retrieve all of my past SQL queries within a session? Thanks.
I'm pretty sure Oracle doesn't keep data on all past queries (closed cursors) for each session. I can think of a couple of ways to get this data, however:
If you're using PL/SQL, most of your past cursors will remain in your session's cursor cache (up to the session_cached_cursors initialization parameter). You can query the view v$open_cursor:
SELECT * FROM v$open_cursor WHERE sid=to_number(sys_context('USERENV','SID'))
Join this view to v$sqltext (or v$sqltext_with_newlines) to get the full sql text:
SELECT o.saddr, s.address, o.hash_value, s.piece, s.sql_text
FROM v$open_cursor o
JOIN v$sqltext_with_newlines s ON o.address = s.address
AND o.hash_value = s.hash_value
WHERE sid = to_number(sys_context('USERENV', 'SID'))
ORDER BY o.saddr, s.address, o.hash_value, s.piece;
You could trace your session; opening the resulting trace file once the session terminates will reveal all the SQL (furthermore, you can run tkprof on the trace file to get a summary and statistics).
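A minimal sketch of the tracing approach (trace file names and locations depend on your configuration):
-- Switch SQL tracing on for the current session
ALTER SESSION SET sql_trace = TRUE;
-- ... run the queries you want captured ...
-- Switch tracing off again
ALTER SESSION SET sql_trace = FALSE;
The resulting trace file can then be summarized, e.g. tkprof mydb_ora_12345.trc summary.txt (the file name here is hypothetical).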
As Vincent pointed out, the only way (AFAIK) would be to trace the session at the client level.
In addition to the open cursors (which is how Toad does it), another, less precise, way would be to use ASH (Active Session History).
The problems with ASH are that:
it samples once per second for active sessions (so you miss all the quick queries),
it's a circular buffer (backed up by the DBA_HIST_ACTIVE_SESS_HISTORY view), so you miss the older entries.
This is because it's only meant to "catch" long-running queries for performance purposes.
It is well suited, however, if one is only interested in queries with long response times.
For what it's worth, here is a simple query returning a session's history of long queries.
select
    sqla.sql_text
from
    v$active_session_history hist,
    v$sqlarea sqla,
    v$session ss
where
    sqla.sql_id = hist.sql_id
    and ss.sid = hist.session_id
    and ss.serial# = hist.session_serial#
    and ss.audsid = sys_context('USERENV', 'SESSIONID');
Related
I wonder if this is possible or not.
I am using Toad, connected to an Oracle database (11g), and I have access to the Oracle E-Business Suite application.
Basically, I want Toad to trace which SQL statements are being executed by the Oracle E-Business Suite application.
I have this query:
SELECT nvl(ses.username,'ORACLE PROC')||' ('||ses.sid||')' USERNAME,
SID,
MACHINE,
REPLACE(SQL.SQL_TEXT,CHR(10),'') STMT,
ltrim(to_char(floor(SES.LAST_CALL_ET/3600), '09')) || ':'
|| ltrim(to_char(floor(mod(SES.LAST_CALL_ET, 3600)/60), '09')) || ':'
|| ltrim(to_char(mod(SES.LAST_CALL_ET, 60), '09')) RUNT
FROM V$SESSION SES,
V$SQLtext_with_newlines SQL
where SES.STATUS = 'ACTIVE'
and SES.USERNAME is not null
and SES.SQL_ADDRESS = SQL.ADDRESS
and SES.SQL_HASH_VALUE = SQL.HASH_VALUE
and Ses.AUDSID <> userenv('SESSIONID')
order by runt desc, 1,sql.piece
I want to do this because I want to know which tables the Oracle application uses to obtain the contact information for a certain customer. I mean, when a random guy is using the application, he enters the account_number and clicks "Go". That's what I need: I want to know which tables are queried when the guy presses the "Go" button; I want to trace that.
I think I could get the session_id of the guy using the Oracle application, plug it into the query written above, and start working from there.
If it is possible, how could I get the session_id of the guy who is using the Oracle E-Business Suite application?
Tracing the queries an active software app is running might take a while. As such, it might be easier to dig the data out another way:
You want to know which table and column holds some data, like a user's first name.
Generate something unique, like a GUID or some impossible name that never occurs in your db (like 'a87d5iw78456w865wd87s7dtjdi'), and enter that as the first name using the UI. Save the data.
Run this query against Oracle:
SELECT
REPLACE(REPLACE(
'UNION ALL SELECT ''{t}'', ''{c}'' FROM {t} WHERE {c} = ''a87d5iw78456w865wd87s7dtjdi'' ',
'{t}', table_name),
'{c}', column_name
)
FROM USER_TAB_COLUMNS WHERE data_type like '%char%'
This is "an sql that writes an SQL" - It'll generate a result set that is basically a list of sql statements like this:
UNION ALL SELECT 'tablename', 'columnname' FROM tablename WHERE columnname = 'a87d5iw78456w865wd87s7dtjdi'
UNION ALL SELECT 'table2name', 'column2name' FROM table2name WHERE column2name = 'a87d5iw78456w865wd87s7dtjdi'
UNION ALL SELECT 'table3name', 'column3name' FROM table3name WHERE column3name = 'a87d5iw78456w865wd87s7dtjdi'
There will be one query for each column of each table in the db. Only CHARacter columns will be searched, by the way.
Remove the first UNION ALL.
Run it and wait a looong time while Oracle basically searches every column in every table in the db for your weird name.
Eventually it produces an output like:
TABLE_NAME COLUMN_NAME
crm_contacts_info first_name
So you know your name a87d5iw78456w865wd87s7dtjdi was saved, by the UI, in crm_contacts_info.first_name
If it is possible, how could I get the session_id of the guy who is using the Oracle E-Business Suite application?
Yes, this is definitely possible. First things first: you need to figure out which schema/username "the guy" is using. If you don't know, you can ask the guy or have him run a simple query (something like select user from dual; will work) to get that info.
Once you have the schema name, you can query the V$SESSION view to figure out the session id. Have the guy log in, then query V$SESSION. Your query would look something like this: select * from v$session where username = '[SCHEMA]'; where [SCHEMA] is the schema name the guy uses to log in. This will give you the SID, serial#, status, etc. You will need this info to trace the guy's session.
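For example, to pull just the values needed for the trace (the schema name here is a placeholder):
select sid, serial#, status, machine
from v$session
where username = 'SOMESCHEMA';  -- placeholder for the guy's schema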
Generating a trace file for the session is relatively simple. You can start a trace for the entire database, or just for one session. Since you're only interested in the guy's session, you only need to trace that one. To begin the trace, you could use a command that looks something like this: EXEC DBMS_MONITOR.session_trace_enable(session_id=>[SESSIONID], serial_num=>[SERIAL#]); where [SESSIONID] and [SERIAL#] are the numbers you got from the previous step. Please keep in mind that the guy will need to be logged in for the session trace to give you any results.
Once the guy is logged in and you have enabled session trace, have the guy run whatever commands from the E-Business suite that you're interested in. Be aware that the more the guy (or the application) does while the trace is enabled, the more information you will have to get through to find whatever it is you're looking for. This can be a TON of data with applications. Just warning you ahead of time.
After the guy is finished doing the tasks, you need to disable the trace. This can be done using the DBMS_MONITOR package like before, only slightly different. The command would look something like this: EXEC DBMS_MONITOR.session_trace_disable(session_id=>[SESSIONID], serial_num=>[SERIAL#]); using the same [SESSIONID] and [SERIAL#] as before.
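Putting the two steps together with hypothetical values (waits and binds can optionally be captured as well):
-- Enable tracing for SID 123, serial# 4567 (hypothetical values),
-- capturing wait events and bind values
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);
-- ... the guy performs his tasks in the E-Business Suite ...
-- Disable tracing for the same session
EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 4567);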
Assuming everything has been done correctly, this will generate the trace file. The reason why #thatjeffsmith mentioned server access is because you will need to access whatever server(s) the database lives on in order to get the trace file. If you do not have access to the server, you will need to work with a DBA or someone with access in order to get it. If you just need help figuring out where the trace file is, you could run the following query using the [SESSIONID] from before:
SELECT p.tracefile
FROM v$session s
JOIN v$process p ON s.paddr = p.addr
WHERE s.sid = [SESSIONID];
This should return a path that looks similar to this: /u01/app/oracle/diag/rdbms/[database]/[instance]/trace/[instance]_ora_010719.trc
Simply navigate to that directory, pull the trace file using WinSCP, FileZilla, or the app of your choice, and that should do it.
Good luck, and hope this helps!
The SQLs executed from the EBS frontend are usually too fast to be seen in v$session. If a SQL is slower than a second (or if the snapshot timing is right), you would see it in v$active_session_history, which captures a snapshot of all active sessions every second.
The place you should look at instead is v$sqlarea, which can be queried directly with SQL, via Toad using the Database->Monitor->SGA Trace/Optimization menu option, or via our Blitz Report https://www.enginatics.com/reports/dba-sga-sql-performance-summary/.
This data, however, has information at the module (i.e. which OAF page, Form, Concurrent program, etc.) and responsibility (action column) level only; it does not contain session or application user information.
The unique key is sql_id and plan_hash_value, which means that for SQLs executed by different modules and from different responsibilities, only the module that executed them first will be shown.
If you sort the data by last_active_time and filter for the module in question, it's almost as good as a trace; see the sketch below. Bind values used can be retrieved from v$sql_bind_capture, which the above Blitz Report does as well.
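A rough sketch of such a query (the module filter is just a placeholder):
select last_active_time, module, action, executions, sql_id, sql_text
from v$sqlarea
where module like 'SOME_MODULE%'  -- placeholder: the module you identified
order by last_active_time desc;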
When I execute a query for the first time in DBeaver, it can take up to 10-15 seconds to display the result. In SQLDeveloper the same queries take only a fraction of that time.
For example:
Simple "select column1 from table1" statement
DBeaver: 2006ms,
SQLDeveloper: 306ms
Example 2 (the other way around, so there's no server-side caching):
Simple "select column1 from table2" statement
SQLDeveloper: 252ms,
DBeaver: 1933ms
DBeaver's status box says:
Fetch resultset
Discover attribute column1
Find attribute column1
Late bind attribute column1
Steps 2, 3 and 4 use most of the query execution time.
I'm using Oracle 11g, SQLDeveloper 4.1.1.19 and DBeaver 3.5.8.
See http://dbeaver.jkiss.org/forum/viewtopic.php?f=2&t=1870
What could be the cause?
DBeaver looks up some metadata related to objects in your query.
On an Oracle DB, it queries catalog views such as
SYS.ALL_ALL_TABLES / SYS.ALL_OBJECTS - only once after connection, for the first query you execute
SYS.ALL_TAB_COLS / SYS.ALL_INDEXES / SYS.ALL_CONSTRAINTS / ... - I believe each time you query a table not used before.
Version 3.6.10 introduced an option to enable/disable a hint used in those queries. Disabling the hint made a huge difference for me. The option is in the Oracle Properties tab of the connection edit dialog. Have a look at issue 360 on DBeaver's GitHub for more info.
The best way to get insight is to perform a database trace.
Run the query a few times to eliminate caching effects.
Then repeat the following steps in both IDEs:
activate the trace
ALTER SESSION SET tracefile_identifier = test_IDE_xxxx;
alter session set events '10046 trace name context forever, level 12'; /* binds + waits */
Replace xxxx with something that identifies the test; you will see this string as part of the trace file name.
Use level 12 to see the wait events and bind variables.
run the query
close the connection (or switch the trace off explicitly; see below)
This is important so that nothing else gets traced.
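If closing the connection is not practical, the trace can also be switched off explicitly in the same session:
alter session set events '10046 trace name context off';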
Examine the two trace files to see:
which statements were executed
how many rows were fetched
how much time elapsed in the DB
the client (IDE) is responsible for the rest of the time
This should give you enough evidence to determine whether one IDE behaves differently from the other, or whether the DB statements issued are simply different.
I'm kicking the tires on BI tools, including, of course, Tableau. Part of my evaluation includes correlating the SQL generated by the BI tool with my actions in the tool.
Tableau has me mystified. My database has 2 billion things; however, no matter what I do in Tableau, the query Redshift reports as having been run is "Fetch 10000 in SQL_CURxyz", i.e. a cursor operation. The cursor ids change, indicating new queries are being run, but you never see the original queries.
Is this a Redshift or Tableau quirk? Any idea how to see what's actually running under the hood? And why is Tableau always operating on 10000 records at a time?
I just ran into the same problem and wrote this simple query to get all queries for currently active cursors:
SELECT
usr.usename AS username
, min(cur.starttime) AS start_time
, DATEDIFF(second, min(cur.starttime), getdate()) AS run_time
, min(cur.row_count) AS row_count
, min(cur.fetched_rows) AS fetched_rows
, listagg(util_text.text)
WITHIN GROUP (ORDER BY sequence) AS query
FROM STV_ACTIVE_CURSORS cur
JOIN stl_utilitytext util_text
ON cur.pid = util_text.pid AND cur.xid = util_text.xid
JOIN pg_user usr
ON usr.usesysid = cur.userid
GROUP BY usr.usename, util_text.xid;
Ah, this has already been asked on the AWS forums.
https://forums.aws.amazon.com/thread.jspa?threadID=152473
Redshift's console apparently doesn't display the query behind cursors. To get that, you can query STV_ACTIVE_CURSORS: http://docs.aws.amazon.com/redshift/latest/dg/r_STV_ACTIVE_CURSORS.html
Also, you can alter your .twb file (which is really just an XML file) and add the following parameters to the odbc-connect-string-extras property.
UseDeclareFetch=0;
FETCH=0;
You would end up with something like:
<connection class='redshift' dbname='yourdb' odbc-connect-string-extras='UseDeclareFetch=0;FETCH=0' port='0000' schema='schm' server='any.redshift.amazonaws.com' [...] >
Unfortunately there's no way of changing this behavior through the application; you must edit the file directly.
You should be aware of the performance implications of doing so. While this greatly enhances debugging, there must be a reason why Tableau chose not to allow modification of these parameters through the application.
I have an insanely simple query: it pulls an ID from one table. The implementation is done using EF 3.5.
This query is repeated in a loop, where I collect an ID from a file and look it up in the database. When running this program, the SQL Server is stressed like crazy (processor utilization soars to 100% on all 16 cores).
It looks like the table in this query is completely locked and nobody gets in anymore. I've read about the necessity of using DbTransaction (begin transaction, commit) or TransactionScope, but the thing is, I'm only selecting/reading.
Also, it's one query, which is atomic in itself, so the use of Transaction(Scope) is shady at best.
I did try an implementation, but that doesn't seem to do it.
My (LINQ) query: Image image = context.Images.First(i => i.ImageUid == identifier);
Any thoughts on why this is happening? Again, I'd like to stress that I'm only selecting/reading records. I don't delete or update records in the database. This is so insanely straightforward that it is frustrating!
For the sake of completeness, here is my attempt at a fix:
// This defaults the isolation level to 'READ COMMITTED' which
// doesn't lock the table when querying.
DbTransaction trx = context.Connection.BeginTransaction();
string isolationLevel = trx.IsolationLevel.ToString();
Image image = context.Images.First(i => i.ImageUid == identifier);
trx.Commit();
UPDATE: The profiler shows that the Entity Framework is doing a SELECT TOP(1) on the image table. This amounts to a MASSIVE number of reads, hundreds of thousands!
That would suggest there is no index, but I've looked it up (see comments) and there is one! Also very weird: on the logout, again hundreds of thousands of reads.
I decided to throw out the Entity Framework and run this query using SqlConnection and SqlCommand, but the result is the same!
Next, we copied the sp_executesql into the management console and found it took an amazing 4 seconds to execute. Running the query 'directly' gives an instant result.
Something in the sp_executesql appears to slow things to a crawl. Any ideas?
I think I got it... After finding out that sp_executesql was the culprit, it became clear.
See http://yasirbam.blogspot.nl/2009/06/spexecutesql-may-cause-slow-perfomance.html
Due to the implicit type conversion, the index on the table is NOT used!
That explains everything visible in the SQL Profiler.
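As an illustration of the effect (a sketch only; the table/column names follow the query above and the parameter lengths are assumed):
-- Parameter declared as NVARCHAR against a VARCHAR column:
-- the column is implicitly converted, so the index cannot be used
EXEC sp_executesql
    N'SELECT TOP(1) * FROM dbo.Images WHERE ImageUid = @id',
    N'@id NVARCHAR(64)',
    @id = N'ABC123';
-- Parameter declared with the column's actual type: index seek, instant result
EXEC sp_executesql
    N'SELECT TOP(1) * FROM dbo.Images WHERE ImageUid = @id',
    N'@id VARCHAR(64)',
    @id = 'ABC123';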
Right now the tool is being tested, and it's as fast as lightning!
I ran an UPDATE statement on a table in SQL Server 2008 which updated the table with some wrong data.
I didn't have a backup of the DB.
Some important dates got updated.
Is there any way I can recover the old data from the table?
Thanks
SNA
Basically no, unless you want to use a commercial log reader and try going through it with a fine-tooth comb. Having no backup of the database can be an 'update resume, leave town' scenario: harsh, but it just should not happen.
Andrew has basically called it. I just want to add a few ideas you can consider if you are desperate:
Are there any reports or printouts lying around? Perhaps you can reconstruct the data from there.
Was this data entered via a web application? If so, there is a remote chance you can find the original data in the web server logs, depending upon how the app was constructed, etc.
Does this app interface with (pass data to) any other applications? They may have a buffered copy of the data...
Can the data be derived from any other existing data? Is there an audit log table, or another date in your schema based on this one, from which you can reconstruct the original date?
Edit:
Some commenters are mentioning that it is a good idea to test your update/delete statements before running them. For this to become a habit, it helps if you have an easy method. I usually create my DELETE statements like this:
--delete --select *
from [User]
where UserID=27
To run the select in order to test your query, highlight everything from select onwards. To then run the delete if you are satisfied with the filter criteria, highlight everything from delete onwards. The two dashes in front of delete are so that if the query accidentally gets run, it will just crash due to invalid syntax.
You can use a similar construct for UPDATE statements, although it is not quite as clean; see the sketch below.
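A sketch of one possible UPDATE variant (the column and value here are hypothetical):
--update u set Email = 'new@example.com' --select *
from [User] u
where UserID=27
Highlighting from select onwards tests the filter; highlighting from update onwards runs the update, and the trailing --select * comments itself out.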
SQL Server keeps a log for every transaction, so you can recover your modified data from the log even without a backup.
Select [PAGE ID],[Slot ID],[AllocUnitId],[Transaction ID]
,[RowLog Contents 0], [RowLog Contents 1],[RowLog Contents 3],[RowLog Contents 4]
,[Log Record]
FROM sys.fn_dblog(NULL, NULL)
WHERE
AllocUnitId IN
(Select [Allocation_unit_id] from sys.allocation_units allocunits
INNER JOIN sys.partitions partitions ON (allocunits.type IN (1, 3)
AND partitions.hobt_id = allocunits.container_id) OR (allocunits.type = 2
AND partitions.partition_id = allocunits.container_id)
Where object_id=object_ID('' + 'dbo.student' + ''))
AND Operation in ('LOP_MODIFY_ROW','LOP_MODIFY_COLUMNS')
And [Context] IN ('LCX_HEAP','LCX_CLUSTERED')
Here is the article that explains, step by step, how to do it:
http://raresql.com/2012/02/01/how-to-recover-modified-records-from-sql-server-part-1/
Imran
Thanks for all the responses.
The problem was that I accidentally missed the WHERE condition in the UPDATE statement.
It was a quick 5-minute task (just changing the date to test one customer's data), so we didn't think of taking a backup.
Yes, of course, you are right. This is a lesson.
From now on I will be careful to write my UPDATE statements in a transaction, or to test my UPDATE statements first.
Thanks once again for spending your time to give some insight rather than ignoring the question, since the only answer is "NO".
Thanks
SNA
Always take a backup before major UPDATE statements; even if it's not used, there's the peace of mind.
Especially with Red Gate's Object Level Restore, one can now restore an individual table/row given a backup file.
Good luck, I'd suggest finding an old copy elsewhere (DEV/QA) etc...
Isn't it possible to do a rollback on an UPDATE statement?
Late one but hopefully useful…
If the database is in full recovery mode, then all transactions are logged in the transaction log and can be retrieved. The problem is that this is not natively supported, because that is not the main purpose of the transaction log.
Options are:
Commercial tools such as ApexSQL Log (more expensive, more options) or Quest Toad (less expensive, fewer options for this purpose; its main focus is on SQL Server management)
Trying to do this yourself, as user1059637 pointed out. The problem with this approach is that it can't read transaction log backups, and it is more tedious.
It comes down to how much your data is worth to you in terms of time and $.