Postgres performance degradation with no apparent reason - SQL

I have a strange problem with a Postgres 10 database.
After restoring a table with around 2 million records, I tried to run a query to measure its execution time, since it had been acting slow on another server.
Shortly after the restore, the query executed in about 1.5 seconds.
After around one hour, the same query was taking 30-40 seconds.
The query is nothing fancy:
SELECT f1,f2,f3 FROM table WHERE f4=false
The execution plan is the same as before.
No writes have been made to the table, and the server wasn't under load from other tasks.
How is this possible? How can I investigate the cause of the problem?
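One way to start investigating (a sketch; big_table stands in for the real table name):

-- Compare the plan, timing, and buffer usage between a fast and a slow run
EXPLAIN (ANALYZE, BUFFERS) SELECT f1, f2, f3 FROM big_table WHERE f4 = false;

-- Check for dead tuples and recent autovacuum/analyze activity on the table
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'big_table';

If the slow run's BUFFERS output shows far more reads than hits, the difference is I/O (cold or evicted cache) rather than the plan itself.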

Related

Simple SQL query is taking too long to respond

I am facing an issue with my SQL Server 2008 R2 instance. It used to execute everything fine, but for the last two days it has not been responding even to small SELECT queries. I didn't update or change anything, yet it is now misbehaving and I can't find where the issue is.
I have a table containing 36,581 records.
When I write a simple SELECT query against that table:
SELECT * FROM [TABLE NAME]
It shows the first 152 records and then stops: no further rows appear, and the query just keeps running. I have seen the elapsed time reach around 30 minutes with no records shown beyond the 152 that appeared at first.
Try running DBCC CHECKDB on your database, like:
DBCC CHECKDB('#databasename')
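For example, with informational messages suppressed (the database name is a placeholder):

DBCC CHECKDB('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;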

Same SQL query takes different times to execute

I am trying to run a simple SELECT statement, and each time I run it, it takes a different amount of time to complete.
The first run takes 0 seconds, the second takes 3 seconds, and the third takes 10 seconds. If I keep running it, the pattern repeats: 0, 3, 10, and so on.
Why is this happening? It seems there is some kind of logic behind it.
This is causing the service that uses the database to time out. The query is run by a specific piece of software thousands of times.
SQL Query:
SELECT * FROM CONTACT_CONTACT WITH (NOLOCK) WHERE MKEY ='XXXXXXXXXXXX'
I am using SQL Server 2012. The database contains 369 tables. The table CONTACT_CONTACT contains 62,497 records.
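One way to see where each run spends its time (a diagnostic sketch, not a fix for the timeouts):

-- Report compile/execution times and logical vs. physical reads per run,
-- so the 0 s / 3 s / 10 s executions can be compared
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
SELECT * FROM CONTACT_CONTACT WITH (NOLOCK) WHERE MKEY = 'XXXXXXXXXXXX';
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

A jump in physical reads on the slow runs would point at buffer-cache eviction rather than at the query itself.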

SSIS query takes too long

We have a situation where a query run through an SSIS package takes a long time during the weekend run, whereas it finishes in 3 minutes during weekday runs. There is a considerable increase in the record count on weekends, but even so it should run in at most 15 minutes.
We have found a temporary workaround, but it requires manual effort before every weekend run.
The workaround: we run the SQL task's source query in SSMS before the package is triggered from the job.
That manually run query also takes a long time to execute, but we abort it; by then it has created a cached execution plan on the database server.
After this, when the query runs from the package, it executes in 3 minutes regardless of the number of records.
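A sketch of automating that warm-up (assumptions: the query below is a placeholder and must match the package's source query text exactly, since the plan cache is keyed on the query text; scheduling it as a SQL Agent job step shortly before the package lets the Agent run it and discard the result set):

-- Hypothetical Agent job step run shortly before the weekend package:
-- executing the exact source query compiles and caches its plan and warms
-- the buffer pool; the table, columns, and filter below are placeholders
SELECT col1, col2, col3
FROM dbo.WeekendSourceTable
WHERE LoadDate >= DATEADD(DAY, -7, CAST(GETDATE() AS DATE));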
Kindly let us know whether a permanent fix is possible for this.
Thanks,
SANDY

Oracle Performance Issues

I'm new to Oracle. I've worked with Microsoft SQL Server for years, though. I was brought into a project that was already overdue and over budget, and I need to be "the Oracle expert." I've got a table that has 14 million rows in it. And I've got a complicated query that updates the table. But I'm going to start with the simple queries. When I issue a simple update that modifies a small number of records (100 to maybe 10,000 or so) it takes no more than 2 to 5 minutes to table scan the table and update the affected records. (Less time if the query can use an index.) But if I update the entire table with:
UPDATE MyTable SET MyFlag = 1;
Then it takes 3 hours!
If the table scan completes in minutes, why should this take hours? I could certainly use some advice on how to troubleshoot this, since I don't have enough experience with Oracle to know what diagnostic queries to run. (I'm on Oracle 11g and using Oracle SQL Developer as the client.)
Thanks.
When you run an UPDATE in Oracle, the changes are sequentially appended to the redo log, and the modified blocks are later written out to the data files when a checkpoint occurs.
In addition, the old version of the data is copied into UNDO space to support possible transaction rollbacks and to give concurrent sessions access to the old version of the data.
All of this can take significantly more time than pure read operations, which don't modify data.
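Given that mechanism, one common mitigation is to update in smaller committed batches so each transaction generates a bounded amount of undo (a sketch using MyTable and MyFlag from the question; the 50,000-row batch size is arbitrary, and the intermediate commits do change the transactional behavior):

-- Update in batches, committing between batches to cap undo usage
BEGIN
  LOOP
    UPDATE MyTable
       SET MyFlag = 1
     WHERE (MyFlag IS NULL OR MyFlag <> 1)
       AND ROWNUM <= 50000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/

Whether this is faster overall depends on the system; its main effect is to keep any single transaction's undo and redo footprint small.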

First query slow on Firebird

The first query run against a large dataset on a Firebird database after starting our application is always very slow. Subsequent calls to the same query (it is a stored procedure) are fine. I assume this has to do with something being loaded into memory, but I could do with an explanation of what, and whether there is anything that can be done to get around the issue.
Since it is a stored procedure, the first run compiles the procedure; it also fetches pages from disk and caches the result.
On the second run, the procedure is not compiled again (it is already cached) and the results are near-instant (on some operating systems the fetched pages are also in memory, so no disk I/O is needed).
One option is to optimize the stored procedure or the tables.
How large are they (number of records in each table)?
One simple way to mitigate this is a cron script that runs once per day or hour to prefill the caches, so the procedure stays fast; see the sketch below.
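A sketch of that prefill (the procedure name, credentials, and paths are placeholders, and it assumes a selectable procedure with no input parameters):

-- warmup.sql: pull the procedure's rows once so the compiled plan and the
-- fetched pages are cached; schedule it via cron with Firebird's isql, e.g.
--   0 * * * * isql -user SYSDBA -password masterkey /data/app.fdb -i /opt/warmup.sql
SELECT COUNT(*) FROM BIG_DATASET_PROC;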
Maybe it's not about the query: could the connection time (delay) be long? There was such a problem with old Firebird/Interbase engines.
You didn't say which Firebird version you are using, but in version 2.5.0 there is a bug (CORE-3227, slow compilation of stored procedures) that could be the cause of your problem. More details:
http://www.firebirdnews.org/?p=5282