Hello, I want to find a way to identify an executed query for Extended Events in Microsoft SQL Server (so that I can filter the Extended Events output down to only that query).
If I query the system views in SQL Server like this:
SELECT session_id, connection_id
FROM sys.dm_exec_requests
WHERE session_id = @@SPID
I get the connection_id of the currently executing query, which is unique until SQL Server restarts.
But Extended Events have a different value called 'sqlserver.client_connection_id' which is not the same identifier as 'connection_id' from the table 'sys.dm_exec_requests'.
Do you know where I can find the 'sqlserver.client_connection_id' in the system views, or another way to uniquely identify an executed query?
The client_connection_id in Extended Events (according to SSMS)
Provides the optional identifier provided at connection time by a client
and is the SqlConnection.ClientConnectionId, which is intended to support troubleshooting client connectivity issues.
You can locate the connection ID in the extended events log to see if
the failure was on the server if the extended event for logging
connection ID is enabled. You can also locate the connection ID in the
connection ring buffer (Connectivity troubleshooting in SQL Server
2008 with the Connectivity Ring Buffer) for certain connection errors.
If the connection ID is not in the connection ring buffer, you can
assume a network error.
So this ID correlates the client side and server side of the connection attempt. For successful connections, rows will be created in sys.dm_exec_connections and sys.dm_exec_sessions, but with different IDs.
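If you want to look at the connection ring buffer mentioned above yourself, it is exposed through sys.dm_os_ring_buffers; a minimal sketch (the record column is XML that you would still need to shred to get at the connection ID):
-- connectivity records, including the client connection ID for certain connection errors
SELECT record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY';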
I'm trying to create an Extended Event with error_reported of all queries. And then filter the results in .xel file using an identifier that tells me that this was from X query.
You can capture the query in the error_reported event, e.g.:
CREATE EVENT SESSION [errors] ON SERVER
ADD EVENT sqlserver.error_reported(
ACTION
(
sqlserver.client_app_name,
sqlserver.session_id,
sqlserver.sql_text
)
WHERE ([severity]>=(11)))
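Note that the session above has no target. Assuming you also give it an event_file target (an assumption about your setup, e.g. ADD TARGET package0.event_file(SET filename = N'errors.xel')), you can read the .xel file back and filter the events by the sql_text action to find the ones raised by your particular query; the file name and the LIKE pattern below are placeholders:
-- start the session (assumes an event_file target named errors.xel was added)
ALTER EVENT SESSION [errors] ON SERVER STATE = START;

-- later: read the .xel file back and filter on the captured query text
SELECT x.event_xml.value('(event/action[@name="session_id"]/value)[1]', 'int') AS session_id,
       x.event_xml.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM sys.fn_xe_file_target_read_file(N'errors*.xel', NULL, NULL, NULL) AS f
CROSS APPLY (SELECT CAST(f.event_data AS XML) AS event_xml) AS x
WHERE x.event_xml.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') LIKE N'%your query text here%';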
Extended Events by default track all of the connections and activity on the instance. The filters in your session definition will narrow that down.
The sqlserver.client_connection_id action captures that value for every query, so if you did know the client connection ID then you could identify those results.
I'm not clear on what you are trying to filter for with the Extended Event. Are you looking to see where a specific query was executed from, or to track all the queries on a specific connection?
The other places you can look to get the same connection info are:
SELECT * FROM sys.dm_exec_connections
SELECT * FROM sys.dm_exec_sessions
SELECT * FROM sys.dm_exec_requests
Looking at these might help you make the connection between the two identifiers.
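For example, a sketch that joins the three views for the current session, so you can see the connection_id and session attributes that belong to the query you just ran:
-- correlate session, connection and request info for the current session
SELECT s.session_id,
       c.connection_id,
       s.program_name,
       s.host_name,
       r.status,
       r.command
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
LEFT JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
WHERE s.session_id = @@SPID;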
Related
Database locks frequently appear on our Microsoft SQL Server database. The blocking query appears as FETCH API_CURSOR000000000004D888. This string is just a sample, but it is always some API_CURSOR0000000XXXXX value. We were able to find the SQL query running behind this cursor using the steps in articles like
https://www.sqlskills.com/blogs/joe/hunting-down-the-origins-of-fetch-api_cursor-and-sp_cursorfetch/
https://social.msdn.microsoft.com/Forums/en-US/f51618eb-5332-4f10-9985-b343933579da/fetch-apicursor-unusual?forum=sqldatabaseengine
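(For reference, the approach in those articles boils down to a lookup along these lines, mapping the cursor back to its statement text; the cursor name below is the sample value from above:)
-- map an API cursor name back to the statement behind it
SELECT c.session_id, c.properties, c.creation_time, c.is_open, t.text
FROM sys.dm_exec_cursors(0) AS c
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS t
WHERE c.name = 'API_CURSOR000000000004D888';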
We found the SQL query that is blocking the database. It looked like the output below; every time it is the same query.
session_id:    200
properties:    API | Dynamic | Scroll Locks | Global (0)
creation_time: 05:44.8
is_open:       1
text:          (@P1 nchar(10))
SELECT *
FROM JDE_PRODUCTION.PRODDTA.F00022 (UPDLOCK)
WHERE (UKOBNM = @P1)
FOR UPDATE OF UKOBNM, UKUKID
I am seeking help to see if there is a way we can find the actual values that are passed in the parameter @P1. Please let me know if you have ideas or have already done this.
No, you will not be able to. Rather, check with your application team and see how the DB connections are set up.
If it is a Docker image/container, check the database URL; there they may have set SelectMethod = Cursor.
With such a setting, every query passed over that connection makes a cursor call to SQL Server, which is unnecessary.
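For illustration, with the Microsoft JDBC driver this is a property on the connection URL; the host below is a placeholder and the database name is taken from the query above. Removing selectMethod=cursor (the default is direct) stops the driver from turning every statement into server-side cursor calls:
jdbc:sqlserver://dbhost:1433;databaseName=JDE_PRODUCTION;selectMethod=cursor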
When I use Tableau's "Other Databases (JDBC)" connection to HDP 3.1 Hive I get this error, but creating an extract works.
An error occurred while communicating with the data source.
An error occurred while communicating with the data source.
Bad Connection: Tableau could not connect to the data source.
com.tableausoftware.jdbc.TableauJDBCException: Error reading metadata for prepared query: SELECT *
FROM (
select * from dim_boxinfo
) Custom_SQL_Query
LIMIT 1
Method not supported
There was a Java error.
When connecting to a data source, Tableau will attempt many types of queries in cascading fashion to determine the capabilities of the connection. It looks like this is a case where one of those query types fails, yet is not necessary for the creation of an extract.
This link discusses the customization of JDBC connections; I do not know the settings well enough to say which might suppress that warning. (ODBC connection customization has been around for a long time and might offer some clues as to what is possible.)
When I execute a query for the first time in DBeaver it can take up to 10-15 seconds to display the result. In SQLDeveloper those queries only take a fraction of that time.
For example:
Simple "select column1 from table1" statement
DBeaver: 2006ms,
SQLDeveloper: 306ms
Example 2 (the other way around, so there's no server-side caching):
Simple "select column1 from table2" statement
SQLDeveloper: 252ms,
DBeaver: 1933ms
DBeaver's status box says:
Fetch resultset
Discover attribute column1
Find attribute column1
Late bind attribute column1
Steps 2, 3 and 4 use most of the query execution time.
I'm using Oracle 11g, SQLDeveloper 4.1.1.19 and DBeaver 3.5.8.
See http://dbeaver.jkiss.org/forum/viewtopic.php?f=2&t=1870
What could be the cause?
DBeaver looks up some metadata related to objects in your query.
On an Oracle DB, it queries catalog tables such as
SYS.ALL_ALL_TABLES / SYS.ALL_OBJECTS - only once after connection, for the first query you execute
SYS.ALL_TAB_COLS / SYS.ALL_INDEXES / SYS.ALL_CONSTRAINTS / ... - I believe each time you query a table not used before.
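For illustration, the dictionary lookups involved look roughly like this on Oracle (the schema name is a placeholder, the table name is from your example):
-- the kind of column-metadata lookup DBeaver performs for a table it has not seen before
SELECT column_name, data_type, data_length
FROM SYS.ALL_TAB_COLS
WHERE owner = 'MY_SCHEMA'
AND table_name = 'TABLE1';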
Version 3.6.10 introduced an option to enable/disable a hint used in those queries. Disabling the hint made a huge difference for me. The option is in the Oracle Properties tab of the connection edit dialog. Have a look at issue 360 on dbeaver's github for more info.
The best way to get insight is to perform a database trace.
Run the query a few times first to eliminate caching effects.
Then repeat the following steps in both IDEs:
activate the trace
ALTER SESSION SET tracefile_identifier = test_IDE_xxxx;
alter session set events '10046 trace name context forever, level 12'; /* binds + waits */
Provide the xxxx to identify the test. You will see this string as a part of the trace file name.
Use level 12 to see the wait events and bind variables.
run the query
close the connection
This is important so that you do not trace other things.
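If you are unsure where the trace file is written, you can ask the traced session itself before closing it (11g and later):
-- full path of the trace file for the current session
SELECT value AS trace_file
FROM v$diag_info
WHERE name = 'Default Trace File';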
Examine the two trace files to see:
what statements were performed
how many rows were fetched
how much time was elapsed in the DB
the client (IDE) is responsible for the rest of the time
This should give you enough evidence to tell whether one IDE behaves differently from the other, or whether the DB statements issued are simply different.
I have a Tomcat server calling a stored procedure from a JSP. In the stored procedure I have a query that fills a temp table with data. That temp table is then joined to another table over a dblink, using the DRIVING_SITE hint, to fill another temp table. The last temp table is then joined to another table on our database to return a result set to Tomcat.
I'm sorry, but I can't really provide a code example for all this. The issue I'm having is this: after a while of the database link not being used, the first query made over the link will do nothing and return the error:
test.jsp caught exception, closing connection: ORA-02068: following severe error from DATABASE_LINK_NAME
ORA-03135: connection lost contact
Every subsequent query made on the database link within 10 minutes or so of the last call will be fine. The temp table can be huge or small, the amount of data queried seems to make no difference, but the first call after the idle time will get this error probably 75% of the time. Has anyone experienced this issue? If so, are there any resolutions?
The query is structured like so:
INSERT INTO temp_table_2
WITH last_submissions AS (
SELECT /*+ DRIVING_SITE(some_schema.some_table_1) */
bs.unique_id,
CASE WHEN COUNT(bs.unique_id) > 1 THEN 'Y' ELSE 'N' END some_flag,
MAX(trx.unique_id) last_submission
FROM (SELECT unique_id
FROM temp_table_1) oids,
some_schema.some_table_1@DATABASE_LINK bs,
some_schema.some_table_1@DATABASE_LINK trx
WHERE oids.unique_id = bs.unique_id
AND bs.non_unique_join_id = trx.non_unique_join_id
GROUP BY bs.unique_id),
something_relevant AS (
SELECT /*+ DRIVING_SITE(some_schema.some_table_2) */
last_value_of_something.unique_id,
last_value_of_something.some_flag,
mv.value_description status
FROM (
SELECT /*+ DRIVING_SITE(some_schema.some_table_1) */
ls.unique_id,
CASE WHEN COUNT(ls.unique_id) > 1 THEN 'Y' ELSE 'N' END some_flag,
MAX(prd.prd_some_id) last_submission
FROM last_submissions ls,
some_schema.some_table_1@DATABASE_LINK trx,
some_schema.some_table_2@DATABASE_LINK prd
WHERE ls.last_submission = trx.unique_id
AND trx.some_unique_id = prd.some_unique_id (+)
GROUP BY ls.unique_id) last_value_of_something,
some_schema.some_table_2@DATABASE_LINK prd,
some_schema.some_table_3@DATABASE_LINK cs,
some_schema.some_display_value_table@DATABASE_LINK mv
WHERE last_value_of_something.last_submission = prd.prd_some_id (+)
AND prd.some_id = cs.some_id (+)
AND cs.status_code = mv.value (+)
AND mv.value_type (+) = 'SOME_INDICATOR_FOR_DISPLAY_VALUES')
SELECT ls.unique_id unique_id,
NVL(pr.status, trx.some_code) status,
CASE WHEN ls.some_flag = 'Y' OR pr.some_flag = 'Y' THEN 'Yes' ELSE 'No' END display_the_flag
FROM /*+ DRIVING_SITE(some_schema.some_table_1) */
last_submissions ls,
some_schema.some_table_1@DATABASE_LINK trx,
something_relevant pr
WHERE ls.last_submission = trx.unique_id
AND ls.unique_id = pr.unique_id
Do you expect the network between the two database servers to be stable and to allow connections to exist for some time?
When you use a database link, the local server opens up a connection to the remote server. That connection will be kept open as long as your session is open to be used by other queries. If you are seeing connections getting dropped, that often means that there is something in the network (a firewall commonly) that is detecting and killing idle connections. It can also mean that the network between the two servers is simply unstable.
Ideally, you would resolve the problem by fixing whatever underlying network issue you have. If there is a firewall that is killing idle connections, you should be able to modify the firewall configuration to avoid killing these connections for example.
If fixing the infrastructure is not an option, you could close the connection to the remote server after every query (or at least after every query that could be followed by a long idle time)
ALTER SESSION CLOSE DATABASE LINK <<dblink name>>
That means, however, that you would be setting up and tearing down a connection to the remote server potentially on every query. That is relatively expensive and may cause more load on the remote server (depending, of course, on how frequently it happens and how many sessions you have).
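A minimal sketch of that close-after-use pattern (the link name is a placeholder; note that the transaction that touched the link must be committed or rolled back first, otherwise the close raises ORA-02080 "database link is in use"):
-- after the statements that used the link have finished
COMMIT;
ALTER SESSION CLOSE DATABASE LINK database_link_name;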
The whole process of pulling data over a database link into a series of temporary tables in order to serve up data to a human using a web application also strikes me as a potentially problematic architecture. Perhaps you have valid reasons for this. But I would be strongly considering using some sort of replication technology (materialized views, Streams, or GoldenGate are the built-in options) rather than pulling data at runtime over database links.
I'm developing an application that pulls information from a Firebird SQL database (accessed via the network) that sits behind an existing application.
To get an idea of what the most commonly used tables are in the application, I've run Wireshark while using the application to capture the SQL statements that are transmitted to the database server when the program is running.
I have no problem viewing what tables are being accessed via the application, however some of the query values passed over the network are not being displayed in the captured SQL packets. Instead these values are replaced with what I assume is a variable of some sort.
Here's a sample query:
select * from supp\x0d\x0aWHERE SUPP.ID=? /* BIND_0 */ \x0d\x0a
(I am assuming \x0d\x0a is used to denote a newline in the SQL query)
Has anyone any idea how I may be able to view the values associated with BIND_0 or /* BIND_0 */?
Any help is much appreciated.
P.S. The version of Firebird I am using is 1.5 - I understand there are syntactical differences in the SQL used in this version and more recent versions.
That /* BIND_0 */ is simply a comment (probably generated by the tool that generated the query), the placeholder is the question mark before that. In Firebird statements are - usually - first prepared by sending the query text (with or without placeholders) to the server with operation op_prepare_statement = 68 (0x44). The server then returns a description of the bind variables and the datatypes of the result set.
When the query is actually executed, the client will send all bind variables together with the execute request (usually in operation op_execute = 63 (0x3F)) in a structure called the XSQLDA.