I have a Tomcat server calling a stored procedure from a JSP. In the stored procedure I have a query that fills a temp table with data. That temp table is then joined, over a database link and using the DRIVING_SITE hint, to another table in order to fill a second temp table. That second temp table is then joined to another table in our database to return a result set to Tomcat.
I'm sorry, but I can't really provide a full code example for all of this. The issue I'm having is this: after the database link has gone unused for a while, the first query made over the link does nothing and returns the error:
test.jsp caught exception, closing connection: ORA-02068: following severe error from DATABASE_LINK_NAME
ORA-03135: connection lost contact
Every subsequent query made over the database link within 10 minutes or so of the last call is fine. The temp table can be huge or small; the amount of data queried seems to make no difference. But the first call after the idle period gets this error probably 75% of the time. Has anyone experienced this issue? If so, are there any resolutions?
The query is structured like so:
INSERT INTO temp_table_2
WITH last_submissions AS (
SELECT /*+ DRIVING_SITE(some_schema.some_table_1) */
bs.unique_id,
CASE WHEN COUNT(bs.unique_id) > 1 THEN 'Y' ELSE 'N' END some_flag,
MAX(trx.unique_id) last_submission
FROM (SELECT unique_id
FROM temp_table_1) oids,
some_schema.some_table_1@DATABASE_LINK bs,
some_schema.some_table_1@DATABASE_LINK trx
WHERE oids.unique_id = bs.unique_id
AND bs.non_unique_join_id = trx.non_unique_join_id
GROUP BY bs.unique_id),
something_relevant AS (
SELECT /*+ DRIVING_SITE(some_schema.some_table_2) */
last_value_of_something.unique_id,
last_value_of_something.some_flag,
mv.value_description status
FROM (
SELECT /*+ DRIVING_SITE(some_schema.some_table_1) */
ls.unique_id,
CASE WHEN COUNT(ls.unique_id) > 1 THEN 'Y' ELSE 'N' END some_flag,
MAX(prd.prd_some_id) last_submission
FROM last_submissions ls,
some_schema.some_table_1@DATABASE_LINK trx,
some_schema.some_table_2@DATABASE_LINK prd
WHERE ls.last_submission = trx.unique_id
AND trx.some_unique_id = prd.some_unique_id (+)
GROUP BY ls.unique_id) last_value_of_something,
some_schema.some_table_2@DATABASE_LINK prd,
some_schema.some_table_3@DATABASE_LINK cs,
some_schema.some_display_value_table@DATABASE_LINK mv
WHERE last_value_of_something.last_submission = prd.prd_some_id (+)
AND prd.some_id = cs.some_id (+)
AND cs.status_code = mv.value (+)
AND mv.value_type (+) = 'SOME_INDICATOR_FOR_DISPLAY_VALUES')
SELECT ls.unique_id unique_id,
NVL(pr.status, trx.some_code) status,
CASE WHEN ls.some_flag = 'Y' OR pr.some_flag = 'Y' THEN 'Yes' ELSE 'No' END display_the_flag
FROM /*+ DRIVING_SITE(some_schema.some_table_1) */
last_submissions ls,
some_schema.some_table_1@DATABASE_LINK trx,
something_relevant pr
WHERE ls.last_submission = trx.unique_id
AND ls.unique_id = pr.unique_id
Do you expect the network between the two database servers to be stable and to allow connections to exist for some time?
When you use a database link, the local server opens up a connection to the remote server. That connection will be kept open as long as your session is open to be used by other queries. If you are seeing connections getting dropped, that often means that there is something in the network (a firewall commonly) that is detecting and killing idle connections. It can also mean that the network between the two servers is simply unstable.
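For example, you can see which links your own session currently has open (and whether a distributed transaction is still pending on them) with a query like the following, assuming you have access to the V$DBLINK view:
-- Database links currently open in this session; IN_TRANSACTION = YES means
-- a distributed transaction is still pending over the link.
SELECT db_link, logged_on, open_cursors, in_transaction
FROM   v$dblink;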
Ideally, you would resolve the problem by fixing whatever underlying network issue you have. If there is a firewall that is killing idle connections, you should be able to modify the firewall configuration to avoid killing these connections for example.
If fixing the infrastructure is not an option, you could close the connection to the remote server after every query (or at least after every query that could be followed by a long idle time):
ALTER SESSION CLOSE DATABASE LINK <<dblink name>>
That means, however, that you would be setting up and tearing down a connection to the remote server potentially on every query. That is relatively expensive and puts additional load on the remote server (depending, of course, on how frequently it happens and how many sessions you might have).
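In the stored procedure, that could look roughly like the sketch below (the link name is just the placeholder from your error message; note that a link cannot be closed while a distributed transaction is still open on it, hence the commit first):
BEGIN
  -- ... the queries that populate the temp tables over the link ...

  COMMIT;  -- a link cannot be closed while a transaction is open on it (ORA-02080)
  BEGIN
    EXECUTE IMMEDIATE 'ALTER SESSION CLOSE DATABASE LINK DATABASE_LINK_NAME';
  EXCEPTION
    WHEN OTHERS THEN
      NULL;  -- typically ORA-02081 if the link was never opened; log or ignore as appropriate
  END;
END;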
The whole process of pulling data over a database link into a series of temporary tables in order to serve up data to a human using a web application also strikes me as a potentially problematic architecture. Perhaps you have valid reasons for this. But I would be strongly considering using some sort of replication technology (materialized views, Streams, or GoldenGate are the built-in options) rather than pulling data at runtime over database links.
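As an illustration of the materialized view route, a sketch only, reusing the placeholder names from the question (the refresh interval is arbitrary, and a fast refresh would additionally require a materialized view log on the remote table):
CREATE MATERIALIZED VIEW some_table_1_copy
  REFRESH FORCE
  START WITH SYSDATE
  NEXT  SYSDATE + 15/1440   -- refresh every 15 minutes
AS
SELECT * FROM some_schema.some_table_1@DATABASE_LINK;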
Related
I have a third party application from which queries will hit the SQL Server 2008 database to fetch the data ASAP (near to real time). The same query can be called by multiple users at different times.
Is there a way to store the latest result and serve the results for subsequent queries without actually hitting the database again and again for the same piece of data?
Get the results from a procedure that stores the data in a global temporary table. If you regularly drop connections, switch to a permanent table instead (change tempdb..##Results to Results). Passing @refresh = 1 refreshes the data:
CREATE PROCEDURE [getresults] (@refresh int = 0)
AS
BEGIN
    -- Drop the cached results when a refresh is requested
    IF @refresh = 1 AND OBJECT_ID('tempdb..##Results') IS NOT NULL
        DROP TABLE ##Results

    -- (Re)populate the cache on first use, or right after it was dropped
    IF OBJECT_ID('tempdb..##Results') IS NULL
        SELECT * INTO ##Results FROM [INSERT SQL HERE]

    SELECT * FROM ##Results
END
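Callers would then hit the cached copy like this, passing 1 only when a rebuild is wanted:
EXEC getresults              -- serves ##Results if it already exists
EXEC getresults @refresh = 1 -- drops and rebuilds ##Results, then serves it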
Can you create an indexed view for the data?
When the underlying data is updated, the view is updated as well; when the 3rd party makes a call, the view contents are returned without needing to hit the base tables.
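For example, something along these lines (a sketch with made-up table and column names; an indexed view must be created WITH SCHEMABINDING, use two-part names, include COUNT_BIG(*) when it aggregates, and is only materialized once a unique clustered index is added):
CREATE VIEW dbo.ContactSummary
WITH SCHEMABINDING
AS
SELECT Region,
       COUNT_BIG(*) AS ContactCount
FROM dbo.Contacts
GROUP BY Region;
GO

-- The unique clustered index is what materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_ContactSummary
    ON dbo.ContactSummary (Region);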
Unfortunately, the SQL Server version you are using doesn't have a query cache like, for example, MySQL's query cache. But per the documentation I just saw here: Buffer Management
Data pages that are read during a SELECT are first brought into the buffer cache. Subsequent requests reading the same data can thus be served more quickly than the initial request, without needing to access the disk.
I am not even sure whether this is a valid question, but I will explain my situation and hopefully get an answer from experts like you.
We have MS Dynamics CRM installed on-premises, and we are observing very slow performance.
Looking at the application server event log, we are noticing 4-5 warning messages per second along the lines of "Query execution time exceeded the 10 second threshold".
Here is an example of the error and related query.
Query execution time of 27.7 seconds exceeded the threshold of 10 seconds. Thread: 109; Database: Main_MSCRM;
select
top 5001 "systemuser0".QueueId as "queueid"
, "systemuser0".CreatedBy as "createdby"
, "systemuser0".Address1_Latitude as "address1_latitude"
, "systemuser0".Address2_StateOrProvince as "address2_stateorprovince"
, "systemuser0".Address1_County as "address1_county"
, "systemuser0".Address2_Country as "address2_country"
, "systemuser0".Address2_PostOfficeBox as "address2_postofficebox"
, "systemuser0".PreferredPhoneCode as "preferredphonecode"
, "systemuser0".new_RegistrationNumer as "new_registrationnumer"
, "systemuser0".YammerUserId as "yammeruserid"
, "systemuser0".Title as "title"
, "systemuser0".SetupUser as "setupuser"
, "systemuser0".FirstName as "firstname"
, "systemuser0".EmployeeId as "employeeid"
, "systemuser0".Address1_Line2 as "address1_line2"
, "systemuser0".Address1_City as "address1_city"
, "systemuser0".YomiFirstName as "yomifirstname"
, "systemuser0".ExchangeRate as "exchangerate"
, "systemuser0".Address1_ShippingMethodCode as "address1_shippingmethodcode"
, "systemuser0".YomiMiddleName as "yomimiddlename"
, "systemuser0".Address2_Line2 as "address2_line2"
, "systemuser0".DefaultFiltersPopulated as "defaultfilterspopulated"
, "systemuser0".ModifiedOnBehalfBy as "modifiedonbehalfby"
, "systemuser0".Address2_Line3 as "address2_line3"
, "systemuser0".DefaultMailboxName as "defaultmailboxname"
from
SystemUser as "systemuser0"
where
(("systemuser0".IsDisabled = 0)) order by
"systemuser0".SystemUserId asc
Now when I run this query directly at the SQL level, the result comes back in less than 2 seconds. So my confusion is: why does it take more time on the CRM front-end side?
Apart from the time it takes to render the data at the CRM front-end level, I cannot think of anything else.
My second point of confusion: when I run this query (and the others that were producing warning messages) with NOLOCK added to the query itself, it was even faster than 2 seconds.
What I am thinking is to write logic that applies at the DB level so that whatever query hits the database gets NOLOCK by default.
Is that even possible? Please let me know how to get rid of these warning messages.
Thanks.
By running the query at the SQL level I don't know whether you mean running it directly from the SQL Server console, but I think I can offer some insight as to why it appears to take longer from the CRM front end. When you run the query directly against SQL Server, most of the time you already have an active connection to the database. This means that you don't have to wait to establish a connection, and the two big holdups are waiting for the query to execute and waiting to receive the result set.
However, when you run the query from your CRM front end, you presumably would have to establish a connection before you even begin the query. Setting up a connection can take longer than you might think.
So, a good test to run would be this: execute your query twice, back-to-back, from the CRM front end and clock the running time of each one. If the second query runs much faster, then you may have discovered that the cost lies in establishing a connection to your SQL Server.
Maybe this is the same famous problem we usually get when a command times out in ADO.NET but works fine in Management Studio. I think there are several fixes out there, but since you don't have control over the query, try these two commands on the database in question:
DBCC DROPCLEANBUFFERS   -- removes clean (unmodified) data pages from the buffer pool
DBCC FREEPROCCACHE      -- clears the plan cache so queries get freshly compiled plans
I would strongly recommend examining the query execution plan; it will tell you whether any of the queries could be improved by adding indexes.
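If you are running the captured statement yourself in Management Studio, you can return the actual execution plan alongside the results with something like:
SET STATISTICS XML ON;    -- includes the actual execution plan with the result set
-- ... paste the CRM-generated SELECT here ...
SET STATISTICS XML OFF;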
Also, why are you trying to pull 5,000 records at once? Is that generated from custom code or something? CRM views return far fewer records than that, typically 50-200. By pulling so many records at once you're increasing the likelihood of database locks.
I would like to ask how to retrieve all of my past SQL queries issued within the current session. Thanks.
I'm pretty sure Oracle doesn't keep data on all past queries (closed cursors) for each session. I can think of a couple of ways to get this data however:
If you're using PL/SQL, most of your recent cursors will remain in your session's cursor cache (up to the limit set by the session_cached_cursors initialization parameter). You can query the view v$open_cursor:
SELECT * FROM v$open_cursor WHERE sid=to_number(sys_context('USERENV','SID'))
Join this view to v$sqltext (or v$sqltext_with_newlines) to get the full sql text:
SELECT o.saddr, s.address, o.hash_value, s.piece, s.sql_text
FROM v$open_cursor o
JOIN v$sqltext_with_newlines s ON o.address = s.address
AND o.hash_value = s.hash_value
WHERE sid = to_number(sys_context('USERENV', 'SID'))
ORDER BY o.saddr, s.address, o.hash_value, s.piece;
You could also trace your session; opening the resulting trace file once the session terminates will reveal all of the SQL that was run (furthermore, you can run the trace file through tkprof to get a summary and statistics).
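For example, a minimal sketch of tracing your own session (the trace file location depends on your diagnostic settings, and tkprof is then run on it from the command line):
ALTER SESSION SET tracefile_identifier = 'my_session_trace';
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statements you want captured ...

ALTER SESSION SET sql_trace = FALSE;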
As Vincent pointed out, the only way (as far as I know) would be to trace the session at the client level.
In addition to the open cursors (which is how Toad does it), another, less precise, way would be to use ASH (Active Session History).
The problems with ASH are that
it samples active sessions once per second (so you miss all the quick queries),
it's a circular buffer (backed up by the DBA_HIST_ACTIVE_SESS_HISTORY view), so you eventually lose the older entries.
This is because it's only meant to "catch" long-running queries for performance purposes.
It is well suited, however, if one is only interested in queries with long response times.
For what it's worth, here is a simple query returning a session's history of long queries.
select
sqla.sql_text
from
v$active_session_history hist,
v$sqlarea sqla,
v$session ss
where
sqla.sql_id = hist.sql_id and
ss.sid = hist.session_id and
ss.serial# = hist.session_serial# and
ss.audsid = sys_context('USERENV', 'SESSIONID') ;
I need to update my contacts database in SQL Server with changes made in a remote database (also SQL Server, on a different server on the same local network). I can't make any changes to the remote database, which is a commercial product. I'm connected to the remote database using a linked server. Both tables contain around 200K rows.
My logic at this point is very simple: [simplified pseudo-SQL follows]
/* Get IDs of new contacts into local temp table */
Select remote.ID into #NewContactIDs
From Remote.Contacts remote
Left Join Local.Contacts local on remote.ID=local.ID
Where local.ID is null
/* Get IDs of changed contacts */
Select remote.ID into #ChangedContactIDs
From Remote.Contacts remote
Join Local.Contacts local on remote.ID=local.ID
Where local.ModifyDate < remote.ModifyDate
/* Pull down all new or changed contacts */
Select ID, FirstName, LastName, Email, ...
Into #NewOrChangedContacts
From Remote.Contacts remote
Where remote.ID in (
Select ID from #NewContactIDs
union
Select ID from #ChangedContactIDs
)
Of course, doing those joins and comparisons over the wire is killing me. I'm sure there's a better way - advice?
Consider maintaining a lastCompareTimestamp (the last time you did the compare) in your local system. Grab all the remote records with ModifyDate > lastCompareTimestamp and throw them into a local temp table. Work with them locally from there.
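A rough sketch of that approach, assuming a hypothetical one-row local table dbo.SyncState that holds the watermark, and reusing the Remote.Contacts naming from the question's pseudo-SQL:
DECLARE @lastCompare datetime;
SELECT @lastCompare = LastCompareTimestamp FROM dbo.SyncState;

-- Only rows changed since the last run cross the wire.
SELECT ID, FirstName, LastName, Email, ModifyDate
INTO #NewOrChangedContacts
FROM Remote.Contacts
WHERE ModifyDate > @lastCompare;

-- ... apply #NewOrChangedContacts to the local Contacts table here ...

-- Using the max remote ModifyDate (rather than GETDATE()) avoids clock-skew issues.
UPDATE dbo.SyncState
SET LastCompareTimestamp = (SELECT MAX(ModifyDate) FROM #NewOrChangedContacts)
WHERE EXISTS (SELECT 1 FROM #NewOrChangedContacts);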
The last compare date is a great idea.
One other method I have had great success with is SSIS (though it has a learning curve, and might be overkill unless you do this type of thing a lot):
Make a package
Set a data source for each of the two tables. If you expect a lot of change, pull the whole tables; if you expect only incremental changes, filter by modify date. Make sure the results are ordered.
Funnel both sets into a Full Outer Join
Split the results of the join into three buckets: unchanged, changed, new
Discard the unchanged records, send the new records to an insert destination, and send the changed records to either a staging table for a SQL-based update, or - for few rows - an OLEDB command with a parameterized update statement.
Or, if you are on SQL Server 2008, use MERGE.
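A sketch of the MERGE variant (SQL Server 2008 and later), assuming the new or changed rows have already been staged locally, for example in the #NewOrChangedContacts table from the question:
MERGE Local.Contacts AS target
USING #NewOrChangedContacts AS source
    ON target.ID = source.ID
WHEN MATCHED THEN
    UPDATE SET target.FirstName  = source.FirstName,
               target.LastName   = source.LastName,
               target.Email      = source.Email,
               target.ModifyDate = source.ModifyDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, FirstName, LastName, Email, ModifyDate)
    VALUES (source.ID, source.FirstName, source.LastName, source.Email, source.ModifyDate);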
I have some code to merge a local table of keys in SAS with a remote table (from a MS-SQL database).
Example code:
LIBNAME RemoteDB ODBC user=xxx password=yyy datasrc='RemoteDB' READBUFF=1500;
proc sql;
create table merged_result as
select t1.ID,
t1.OriginalInfo,
t2.RemoteInfo
from input_keys as t1
Left join RemoteDB.remoteTable (dbkey=ID) as t2
on (t1.ID = t2.ID)
order by ID;
quit;
This used to work fine (at least for 150000 rows), but doesn't now, possibly due to SAS updates. At the moment, the same code leads to SAS trying to download the entire remote table (hundreds of GB) to merge locally, which clearly isn't an option. It is obviously the dbkey= option that has stopped working. For the record, the key used to join (ID in example) is indexed in the remote table.
Using the dbmaster= option together with the multi_datasrc_opt=in_clause option (in the LIBNAME statement) works instead, but only for 4500 keys or fewer. Trying to merge larger datasets again leads to SAS trying to download the entire remote table.
Suggestions on how to proceed?
Underwater's question indicates that the implicit pass-through feature had previously worked in a manner consistent with optimized processing. After an update, the implicit pass-through continues to work for his queries, albeit in a non-optimal way.
To ensure a known (explicit) and near-optimal processing methodology, I would upload input_keys to a temp table in RemoteDB and perform the join remotely in explicit pass-through. This code is an example of a workable fallback whenever you are dissatisfied with the implicit decisions made by the executor, SQL planner, and library engine.
LIBNAME tempdata oledb ... dbmstemp=yes ; * libname for remote temp tables;
* store only ids remotely;
data tempdata.id_list;
set input_keys(keep=id);
run;
* use uploaded ids in pass-through join, capture resultset and rejoin for OriginalInfo in sas;
proc sql;
connect to ... as REMOTE ...connection options...;
create table results_matched as
select
RMTJOIN.*
, LOCAL.OriginalInfo
from
(
select * from connection to remote
(
select *
from mySchema.myBigTable BIG
join tempdb.##id_list LIST
on BIG.id = LIST.id
)
) as RMTJOIN
JOIN input_keys as LOCAL
on RMTJOIN.id = LOCAL.id
;
quit;
The dbmstemp option for SQL Server connections causes new remote tables to reside in tempdb and be named with a leading ##.
When using SQL Server, use the BULKLOAD= libname option for the highest performance. You may require a special grant from the database administrator in order to bulk load.
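For example, the upload of the key table could be switched to a bulk load with something like this (connection options elided as in the LIBNAME above; BULKLOAD= is a SAS/ACCESS libname option, and the bulk-load permission may need to be granted by the DBA):
LIBNAME tempdata oledb ...connection options... dbmstemp=yes bulkload=yes;

* the id list is now bulk loaded instead of inserted row by row;
data tempdata.id_list;
  set input_keys(keep=id);
run;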