Simple queries take very long - SQL

When I execute a query for the first time in DBeaver it can take up to 10-15 seconds to display the result. In SQLDeveloper those queries only take a fraction of that time.
For example:
Simple "select column1 from table1" statement
DBeaver: 2006ms,
SQLDeveloper: 306ms
Example 2 (the other way around, so there's no server-side caching):
Simple "select column1 from table2" statement
SQLDeveloper: 252ms,
DBeaver: 1933ms
DBeaver's status box says:
Fetch resultset
Discover attribute column1
Find attribute column1
Late bind attribute column1
Steps 2, 3 and 4 take most of the query execution time.
I'm using Oracle 11g, SQLDeveloper 4.1.1.19 and DBeaver 3.5.8.
See http://dbeaver.jkiss.org/forum/viewtopic.php?f=2&t=1870
What could be the cause?

DBeaver looks up some metadata related to objects in your query.
On an Oracle DB, it queries catalog tables such as
SYS.ALL_ALL_TABLES / SYS.ALL_OBJECTS - only once after connection, for the first query you execute
SYS.ALL_TAB_COLS / SYS.ALL_INDEXES / SYS.ALL_CONSTRAINTS / ... - I believe each time you query a table not used before.
Version 3.6.10 introduced an option to enable/disable a hint used in those queries. Disabling the hint made a huge difference for me. The option is in the Oracle Properties tab of the connection edit dialog. Have a look at issue 360 on DBeaver's GitHub for more info.
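For illustration, the dictionary lookups are roughly of this shape (a hedged sketch; the owner and table name are placeholders, and I believe the hint controlled by that option is the old RULE hint):
SELECT /*+RULE*/ *
FROM SYS.ALL_TAB_COLS
WHERE OWNER = 'MYSCHEMA'
  AND TABLE_NAME = 'TABLE1';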

The best way to get insight is to perform a database trace.
Run the query a few times first to eliminate caching effects.
Then repeat the following steps in both IDEs:
activate the trace
ALTER SESSION SET tracefile_identifier = test_IDE_xxxx;
alter session set events '10046 trace name context forever, level 12'; /* binds + waits */
Replace xxxx with something that identifies the test; you will see this string as part of the trace file name.
Use level 12 to see the wait events and bind variables.
run the query
close the connection
This is important so that nothing else gets traced.
Examine the two trace files to see:
what statements were performed
what number of rows was fetched
how much time was spent in the DB
the client (IDE) is responsible for the rest of the time
This should give you enough evidence to tell whether one IDE behaves differently from the other, or whether the SQL statements issued are simply different.
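Putting those steps together, a condensed tracing session might look like this (a sketch; the identifier and the query are placeholders):
ALTER SESSION SET tracefile_identifier = 'test_IDE_dbeaver';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

select column1 from table1;

-- locate the trace file for this session (11g and later) before disconnecting
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

-- then close the connection, or turn tracing off explicitly:
ALTER SESSION SET EVENTS '10046 trace name context off';
The raw trace file can then be summarized with tkprof on the database server.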

Related

Microsoft SQL Server database locks

Database locks appear frequently on our Microsoft SQL Server database. The blocking query shows up as Fetch
API_CURSOR000000000004D888. This string is just a sample, but it is always API_CURSOR0000000XXXXX with some value. We were able to find the SQL query running behind this cursor using the steps in articles like
https://www.sqlskills.com/blogs/joe/hunting-down-the-origins-of-fetch-api_cursor-and-sp_cursorfetch/
https://social.msdn.microsoft.com/Forums/en-US/f51618eb-5332-4f10-9985-b343933579da/fetch-apicursor-unusual?forum=sqldatabaseengine
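Essentially, those steps come down to a DMV query along these lines (a sketch; the name filter is illustrative):
SELECT c.session_id, c.properties, c.creation_time, c.is_open, t.text
FROM sys.dm_exec_cursors(0) AS c
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS t
WHERE c.name LIKE 'API_CURSOR%';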
We could find the SQL query that is blocking the database; it looked like the one below, and every time it is the same query.
session_id: 200
properties: API | Dynamic | Scroll Locks | Global (0)
creation_time: 05:44.8
is_open: 1
text: (#P1 nchar(10))
SELECT *
FROM JDE_PRODUCTION.PRODDTA.F00022 (UPDLOCK)
WHERE (UKOBNM = #P1)
FOR UPDATE OF UKOBNM, UKUKID
I am seeking help here to see if there is a way we can find the actual values that are passed in the variable #P1. Please let me know if someone has ideas or has already done this.
No, you will not be able to. Rather, check with your application team and see how the DB connections are set up.
If it is a Docker image/container, check the database URL; they may have set SelectMethod = Cursor there.
With such a setting, every query passed over this connection makes a cursor call to SQL Server, which is unnecessary.
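For example, with the Microsoft JDBC driver that property would appear in the URL roughly like this (host, port and database name are placeholders; check the documentation of the driver actually in use):
jdbc:sqlserver://dbhost:1433;databaseName=JDE_PRODUCTION;selectMethod=cursor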

Tagging an SQL Query

I would like to be able to tag an SQL query somehow, so I can relate the query execution to the web request that triggered the query. I already have a unique request id, that I tag my logs and other monitoring with, so I can easily do a complete trace across the weblogs and new relic for example.
But when I look at a report of long running SQL queries for example, I cannot trace that back to the request that triggered the SQL Query. I would really like to be able to tag the query with my request id somehow.
I can't find anything online. When I search I just find blogs about storing tags and tag clouds in SQL. Not really what I need.
Hope the question makes sense.
This is a very interesting post...
Adding an extra nullable parameter to your stored procedure(s) should ensure that the profiler catches the unique id passed during a call (in the trace), whether or not you use that parameter inside the procedure to do something meaningful (e.g. inserting it into an audit table with the unique id, procedure name, timestamp, etc.).
But I think that will make life difficult, as you now have to update all your procedures.
If you already have logging turned on (web server) and it captures the same unique id in its requests (log file) along with a timestamp, then you could probably code a small utility app that reads the log file and finds matching entries in the traced table by timestamp alone.
The only thing that might go wrong is if your web server and database server have differing times (you would need to offset your calculation accordingly).
I don't know if this will help, but it is certainly a very interesting project, and I am hoping somebody has experienced this and come up with a nice solution.
Will be closely watching this post if such a solution exists...
All the best...
If I understand correctly, you want to follow the query execution in Activity Monitor. But have you considered using a DMV or SQL Profiler?
In my opinion, your best bet would be to wrap it in a stored proc. This way you will be able to filter your trace on just this object. Here's an example of a simple select and the same select wrapped in a stored proc named sproc1:
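A minimal sketch of that wrapping (the table and column names are just placeholders):
-- the plain select
SELECT col1 FROM table1;

-- the same select wrapped, so the trace can be filtered on ObjectName = 'sproc1'
CREATE PROCEDURE dbo.sproc1
AS
BEGIN
    SET NOCOUNT ON;
    SELECT col1 FROM table1;
END;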
As you can see in this image, you can start a SQL Profiler trace and filter it on the ObjectName. You can then add other columns like CPU, StartTime, ...
If you can't use a stored proc, then I would suggest to insert a comment before the exec like this:
/* ID1234 */
select * from table1
Then use SQL Profiler the same way, but now filter on TextData using your ID.
The trace result will then show only the statements containing your ID.

How to get the query displayed when a change is made to a table or a field in a table in Postgresql?

I have used MySQL for some projects and recently moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. I could not find such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query and run it on the server. Now it seems I have to do all of that manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ... there should not be any reason for your output to not include newly added columns, no matter how you get your results - would that be psql in command line, PgAdmin3 or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL command - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database clients until the transaction commits.
If your IDE still does not show the changes, maybe you need to refresh the list of tables or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ... then you must add new fields into your SELECT statement(s) - but this would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client when the database schema is altered.
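A minimal sketch of that idea using an event trigger (PostgreSQL 9.3 or later; the function and channel names are placeholders):
-- fire on every completed DDL command and push the statement text to listeners
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
  PERFORM pg_notify('ddl_events', current_query());
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
  EXECUTE PROCEDURE notify_ddl();

-- in the client session that wants to see the statements:
LISTEN ddl_events;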

How do I display the query time when a query completes in Vertica?

When using vsql, I would like to see how long a query took to run once it completes. For example, when I run:
select count(distinct key) from schema.table;
I would like to see an output like:
5678
(1 row)
total query time: 55 seconds.
If this is not possible, is there another way to measure query time?
In vsql type:
\timing
and then hit Enter. You'll like what you'll see :-)
Repeating that will turn it off.
Regarding the other part of your question:
is there another way to measure query time?
Vertica can log a history of all queries executed on the cluster which is another source of query time. Before 6.0 the relevant system table was QUERY_REPO, starting with 6.0 it is QUERY_REQUESTS.
Assuming you're on 6.0 or higher, QUERY_REQUESTS.REQUEST_DURATION_MS will give you the query duration in milliseconds.
Example of how you might use QUERY_REQUESTS:
select *
from query_requests
where request_type = 'QUERY'
and user_name = 'dbadmin'
and start_timestamp >= CURRENT_DATE
and request ilike 'select%from%schema.table%'
order by start_timestamp;
The QUERY_PROFILES.QUERY_DURATION_US and RESOURCE_ACQUISITIONS.DURATION_MS columns may also be of interest to you. Here are the short descriptions of those tables in case you're not already familiar:
RESOURCE_ACQUISITIONS - Retains information about resources (memory, open file handles, threads) acquired by each running request for each resource pool in the system.
QUERY_PROFILES - Provides information about queries that have run.
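For example, a query along these lines pulls the duration in microseconds from QUERY_PROFILES (a sketch; adjust the user filter as needed):
select query, query_start, query_duration_us
from query_profiles
where user_name = 'dbadmin'
order by query_start desc
limit 10;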
I'm not sure how to enable that in vsql, or whether it's possible. But you could get that information from a script.
Here's the pseudocode (I used to use Perl):
print time(), "\n";
system("vsql -c 'select * from table'");
print time(), "\n";
Or put the times into variables and do the subtraction.
The other option is to use some tool like Toad to connect to Vertica instead of using vsql.

How to remove a tuple from an SQL table after a timeout?

I am faced with a peculiar requirement which is as follows:
A network-intensive operation is triggered on a server by multiple clients through a web interface. However, only one operation is allowed at a time, and hence an entry (tuple) is made in an SQL table to indicate that the operation is in progress. Once the operation is complete (irrespective of success or failure), the appropriate result is displayed back to the client(s), and the corresponding tuple is removed from the SQL table.
Since the operation is network-intensive, a scenario has to be introduced where the operation is "considered" cancelled after some timeout (10 minutes).
Is there ANY way the lifetime of a row in SQL can be associated with a timeout value, so that it is deleted after a certain time? My application is primarily written in Java 1.5 and EJB 3.0, using JPA/Hibernate to access an Oracle 10g DB engine.
Thanks in advance.
Regards,
Nagendra U M
I would suggest that you try using a timestamp column containing the start time of the task.
A before trigger can then be made to delete the old row before a new one is inserted, if the task timed out.
If you want to have multiple tasks with different timeouts, you can even add a column with the timeout in seconds. Just code your trigger accordingly.
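A minimal sketch of that idea in Oracle (all table, column and trigger names are placeholders; the timeout is hard-coded to 10 minutes):
CREATE TABLE operation_lock (
  operation  VARCHAR2(30) PRIMARY KEY,
  start_time DATE DEFAULT SYSDATE NOT NULL
);

-- a statement-level BEFORE INSERT trigger clears timed-out rows before a new
-- one goes in (a row-level trigger would hit the mutating-table error ORA-04091)
CREATE OR REPLACE TRIGGER trg_expire_operation_lock
BEFORE INSERT ON operation_lock
BEGIN
  DELETE FROM operation_lock
   WHERE start_time < SYSDATE - INTERVAL '10' MINUTE;
END;
/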
I don't know whether Oracle has this kind of facility, but I don't think any DB engine has it built in.
If you want to do it at the DB level:
You must have a datetime column, e.g. 'CreatedDate', in the table. This column will hold the datetime when the record was created.
Write a procedure and put it in a scheduled job. This job will run every 10 minutes and remove records that are more than 10 minutes old. The query will be like this.
T-SQL: Please convert it according to your db engine.
DELETE FROM yourtable WHERE CreatedDate < DATEADD(mi, -10, GETDATE())
This will delete all records older than 10 minutes from table.
This is just to give you an idea of a scheduled job. It is for SQL Server; I don't know about Oracle.
step_by_step_guide_to_add_a_sql_job_in_sql_server_2005
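Since the question is about Oracle, a comparable scheduled job could be set up with DBMS_SCHEDULER (available from 10g); a minimal sketch, using the same hypothetical table and column names as the query above:
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'PURGE_STALE_ROWS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM yourtable WHERE CreatedDate < SYSDATE - INTERVAL ''10'' MINUTE; COMMIT; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=10',
    enabled         => TRUE);
END;
/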
It sounds like you're implementing a mutex using the database; take a look at this question and see if it helps. Transactional access to a flag table should solve this for you, as long as you catch both success and failure states in your server code.
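A minimal sketch of that flag-table idea in Oracle SQL (all names are placeholders):
-- assuming a flag table such as
--   operation_lock(operation VARCHAR2(30) PRIMARY KEY,
--                  start_time DATE DEFAULT SYSDATE NOT NULL)
-- acquire: a second concurrent insert fails with ORA-00001
INSERT INTO operation_lock (operation) VALUES ('NETWORK_OP');
COMMIT;
-- release, on success or failure:
DELETE FROM operation_lock WHERE operation = 'NETWORK_OP';
COMMIT;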