SQL: query for the last recorded row takes too long - sql

I am using the following query to get the last row of a table. This seems to be the standard approach for this type of query, so I am wondering why it takes so long to return the result.
I am looking for the potential reasons for this long query time. For example, maybe the row is not near the top (I use some conditions) and it takes time to reach it if the query reads the table from top to bottom? Or does the query need to read the whole table before it can conclude which row is correct (I don't think that is the case)?
Any insight into how the sorting actually works here is appreciated.
The code:
SELECT created_at, currency, balance
FROM YYY
WHERE id IN (ZZZ) AND currency = 'XXX'
ORDER BY created_at DESC
LIMIT 1

This is your query:
select created_at, currency, balance
from YYY
where id in (ZZZ) and currency = 'XXX'
order by created_at desc
limit 1;
If your table has no suitable indexes, then the engine needs to scan the entire table, choose the rows that match the WHERE clause, sort those rows, and pick the one with the maximum created_at.
If you have an index on YYY(currency, id, created_at desc), then the engine can simply look up the appropriate value right away using the index.
So, if you have the right optimizations in the database, then the query is likely to be much, much faster.
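As a minimal sketch, the suggested index could be created like this (the index name is illustrative, and the DESC key part assumes an engine that honors descending index columns, e.g. PostgreSQL or MySQL 8+):
-- Matches the (currency, id, created_at DESC) shape suggested above
CREATE INDEX idx_yyy_currency_id_created ON YYY (currency, id, created_at DESC);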

Related

SQL Top clause with order by clause

I am a bit new to SQL. I want to write a query with a TOP clause and an ORDER BY clause.
To return all the records, I wrote the query below:
select PatientName, PlanDate as Date, * from OPLMLA21..Exams
order by PlanDate desc
I need the top few rows from the same query, so I modified it to:
select top (5) PatientName, PlanDate as Date, * from OPLMLA21..Exams
order by PlanDate desc
My understanding was that this would give the top 5 results of the previous query, but I see an ambiguity there. I have attached a screenshot of the query results.
Maybe my understanding is wrong; I have read a lot but am not able to understand this, so please help me out.
I stated this in a comment, but to repeat it:
 TOP (5) doesn't give the "top results" of the prior query though, no. It gives the top (first) rows of the dataset defined by the query it is in. If there are multiple rows that have the same "rank", then the row(s) returned for that rank are arbitrary. So, for example, if your query has 100 rows all with the same value for PlanDate, which 5 rows you get is completely arbitrary and could be different (including the order they are in) every time you run said query.
What I mean by arbitrary is that, effectively, SQL Server is free to choose whichever of the applicable rows to return. Sometimes this might be the same every time you run the query, but that is luck more than anything. As your database gets larger, more users query the data, and you involve joins, things like locks, indexes, parallelism, etc. all affect the "order" in which SQL Server processes the data, and they will affect an ambiguous TOP clause.
Take the example data below:
ID | SomeDate
---|-----------
 1 | 2020-01-01
 2 | 2020-01-01
 3 | 2020-01-01
 4 | 2020-01-01
 5 | 2020-01-01
 6 | 2020-01-02
Now, what would you expect if I ran a TOP (2) against that table with an ORDER BY clause of SomeDate DESC? Well, certainly, you'd expect the "last" row (with an ID of 6) to be returned, but what about the next row? The other 5 rows all have the same value for SomeDate. Perhaps, because you're under the impression that data in a table is pre-sorted, you might expect the row with an ID of 5. What if I told you that there was a CLUSTERED INDEX on ID ASC? That might well mean that the row with an ID of 1 is returned. What if there is also an index on SomeDate DESC?
What if the table were tens of thousands of rows in size, you also had a JOIN to another table with its own CLUSTERED INDEX, and some user were running a query with row locking on it while you ran yours? What would you expect then?
Without an ORDER BY specific enough to ensure that each row has a distinct ordering position, SQL Server will return the tied rows in an arbitrary order, and when that is mixed with a TOP, the "top" rows will also be arbitrary.
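For the sample data above, a sketch of the fix: add a unique column (here ID) to the ORDER BY as a tiebreaker so every row has a distinct ordering position. The table name is illustrative:
-- ID is unique, so this is fully deterministic:
-- it always returns row 6, then row 5 (the highest ID among the ties).
SELECT TOP (2) ID, SomeDate
FROM SampleTable  -- hypothetical name for the table holding the data above
ORDER BY SomeDate DESC, ID DESC;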
Side note: I notice in your image (of what appears to be SSMS) that your "dates" are in the format yyyyMMdd. This strongly implies that you are storing a date value as a varchar or int type. This is a design flaw and needs to be fixed. There are 6 date and time data types, and 5 of them are far superior to using a string or numeric type to store the data.
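If the column really is a varchar holding yyyyMMdd strings, a sketch of the fix (SQL Server converts that format to date unambiguously, so the in-place conversion below should work, assuming all stored values are valid dates):
-- Convert the string column to a proper date type in place
ALTER TABLE OPLMLA21..Exams ALTER COLUMN PlanDate date;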

Does a query go through all the data when you only select the last N?

I have a query that selects the last 5 ($new) items from my database.
SELECT OvenRunData.dataId AS id, OvenRunData.data AS data
FROM OvenRuns INNER JOIN OvenRunData ON OvenRuns.id = OvenRunData.ovenRunId
WHERE OvenRunData.ovenRunId = (SELECT MAX(id) FROM OvenRuns)
ORDER BY id DESC LIMIT $new
I want to execute this query every 5 seconds with an AJAX request so I can update my table.
I know this query selects the last 5 records, but I want to know whether it runs through all the records and then selects the last 5, or whether it can pick the last 5 without checking all the data.
I'm really worried that I'll have lag.
You need two indexes to make it fast enough:
create index ix_OvenRuns_id on OvenRuns(id);
create index ix_OvenRunData_ovenRunId on OvenRunData(ovenRunId);
You can even include OvenRunData.dataId and OvenRunData.data in the second one (making it a covering index), or create a clustered index; either way, these indexes definitely avoid a full data scan.
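A sketch of that covering variant (the index name is illustrative; this assumes data is a modest-width column, since MySQL cannot index TEXT/BLOB columns without a prefix length):
-- Covers the whole select list, so the query can be answered from the index alone
create index ix_OvenRunData_covering on OvenRunData (ovenRunId, dataId, data);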
That depends on the indexes.
In your case, you should have one on OvenRuns(id).
More here: http://use-the-index-luke.com/sql/partial-results/top-n-queries
The LIMIT is applied after the ORDER BY, and the ORDER BY is applied to the entire result set. So the answer to your question is: yes, it must go through all of the records in the result set determined by your WHERE clause before applying the LIMIT (unless an index already delivers the rows in ORDER BY order, in which case the engine can stop early).
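One way to see which of the two is happening, as a sketch (MySQL syntax assumed from the $new interpolation, with the limit written as a literal 5 here): run EXPLAIN on the query and check the rows and Extra columns; "Using filesort" means the whole result set really is being sorted.
EXPLAIN SELECT OvenRunData.dataId AS id, OvenRunData.data AS data
FROM OvenRuns INNER JOIN OvenRunData ON OvenRuns.id = OvenRunData.ovenRunId
WHERE OvenRunData.ovenRunId = (SELECT MAX(id) FROM OvenRuns)
ORDER BY id DESC LIMIT 5;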

Two questions on PostgreSQL performance

1) What is the best way to implement paging in PostgreSQL?
Assume we need to implement paging. The simplest query is select * from MY_TABLE order by date_field DESC limit 10 offset 20. As far as I understand, we have two problems here: if the dates have duplicate values, every run of this query may return different results, and the larger the offset, the longer the query runs. We have to provide an additional column, date_field_index:
--date_field--date_field_index--
  12-01-2012                  1
  12-01-2012                  2
  14-01-2012                  1
  16-01-2012                  1
--------------------------------
Now we can write something like
create index MY_INDEX on MY_TABLE (date_field, date_field_index);
select * from MY_TABLE where date_field <= last_page_date and not (date_field_index >= last_page_date_index and date_field = last_page_date) order by date_field DESC, date_field_index DESC limit 20;
...thus using the WHERE clause and the corresponding index instead of OFFSET. OK, now the questions:
1) Is this the best way to improve the initial query?
2) How can we populate that date_field_index field? Do we have to provide a trigger for this?
3) We should not use the row_number() window function in Postgres because it does not use indexes and is thus very slow. Is that correct?
2) Why does column order in a concatenated index not affect the performance of the query?
My measurements show that when searching with a concatenated index (an index consisting of 2 or more columns), it makes no difference whether we place the most selective column first or last. Why? If we place the most selective column first, we run through a shorter range of matching rows, which should have an impact on performance. Am I right?
Use the primary key to break ties instead of the date_field_index column. Otherwise, explain why that is not an option.
order by date_field DESC, "primary_key_column(s)" DESC
The combined index with the most selective column first is the best performer, but it will not be used if:
the distinct values are more than a few percent of the table
there aren't enough rows to make it worthwhile
the range of dates is not small enough
What is the output of EXPLAIN for your query?
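Putting that together, a sketch of keyset paging that breaks ties with the primary key (this assumes a primary key column named id; PostgreSQL's row-value comparison used below is index-friendly, and the bind-parameter syntax varies by client):
create index my_index on MY_TABLE (date_field, id);

-- Next page: everything strictly "before" the last row of the previous page
-- in (date_field, id) descending order
select *
from MY_TABLE
where (date_field, id) < (:last_page_date, :last_page_id)
order by date_field DESC, id DESC
limit 10;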

processing a large table - how do I select the records page by page?

I need to run a process on all the records in a table. The table could be very big, so I would rather process the records page by page. I also need to remember the records that have already been processed, so they are not included in my second SELECT result.
Like this:
For first run,
[SELECT 100 records FROM MyTable]
For second run,
[SELECT another 100 records FROM MyTable]
and so on..
I hope you get the picture. My question is: how do I write such a select statement?
I'm using Oracle, btw, but it would be nice if it could run on any other db too.
I also don't want to use a stored procedure.
Thank you very much!
Any solution you come up with to break the table into smaller chunks will end up taking more time than just processing everything in one go. Unless the table is partitioned and you can process exactly one partition at a time.
If a full table scan takes 1 minute, it will take you 10 minutes to break up the table into 10 pieces. If the table rows are physically ordered by the values of an indexed column that you can use, this changes a bit due to the clustering factor. But it will still take longer than just processing it all in one go.
This all depends on how long it takes to process one row from the table, of course. You could choose to reduce the load on the server by processing chunks of data, but from a performance perspective, you cannot beat a full table scan.
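For the partitioned case mentioned above, a sketch of processing exactly one partition per run (Oracle syntax; the table and partition names are illustrative):
-- Each run scans a single partition instead of the whole table
select * from my_table partition (p1);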
You are most likely going to want to take advantage of Oracle's stopkey optimization, so you don't end up with a full table scan when you don't want one. There are a couple of ways to do this. The first way is a little longer to write, but lets Oracle automatically figure out the number of rows involved:
select *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
    where rownum <= 200
)
where rn >= 101;
You could also achieve the same thing with the FIRST_ROWS hint (note that rownum must be assigned after the sort, so the ORDER BY stays in the innermost inline view):
select /*+ FIRST_ROWS(200) */ *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
)
where rn between 101 and 200;
I much prefer the rownum method, so you don't have to keep changing the value in the hint (which, to be accurate, would need to represent the end value and not the number of rows actually returned to the page). That way you can set up the start and end values as bind variables, so you avoid hard parsing.
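A sketch of the rownum method with bind variables (the bind names are illustrative):
select *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
    where rownum <= :end_row   -- e.g. 200
)
where rn >= :start_row;        -- e.g. 101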
For more details, you can check out this post

optimising a query

I have a table in the database that has 7k+ records. I have a query that searches for a particular id in that table (id is auto-incremented).
The query is like this:
select id, date_time from table order by date_time DESC;
This query searches all 7k+ rows. Isn't there any way I can optimize this query so that the search covers only 500 or 1000 records? These records will increase day by day, and my query will become heavier and heavier. Any suggestions?
I don't know if I'm missing something here, but what's wrong with:
select id, date_time from table where id = ?id order by date_time DESC;
where ?id is the id you are searching for...
And of course, id should be a primary key (or at least indexed).
If id is unique (possibly your primary key), then you don't need to sort by date_time, and you're guaranteed to get back at most one row.
SELECT id, date_time FROM table WHERE id=<value>;
If id is not unique, then you can still use the same query, but you need to look at indexes, other constraints, and/or caching outside the database if the query becomes too slow.
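If id turns out not to be unique, a sketch of an index that supports both the filter and the sort (the index name is illustrative, and "table" is the question's placeholder; DESC key parts are honored by e.g. SQL Server and MySQL 8+, while engines that ignore them can scan an (id, date_time) index backwards to the same effect):
-- Finds the id quickly and returns its rows already ordered by date_time descending
create index idx_id_datetime on table (id, date_time DESC);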