Multiple SQL searches vs searching through one returned array - sql

Is it faster to do multiple SQL finds on one table with different conditions or to get all the items from the table in an array and then separate them out that way? I realize I'm not explaining my question that well, so here is an example:
I want to pull records on posts and display them in categories based on when they were posted, say within one year, within one month, one week, etc. The nature of the categories results in lower level categories being entirely contained within upper level ones.
Should I do a SQL find with different conditions for each category, resulting in multiple calls to the database, or should I do one search, returning all of the items and then sort them out from the array? Thanks for your responses, sorry I'm new at this.

Typically I would say that you are going to get better performance by letting your database engine do the sorting work. Each database engine has this functionality and typically it can do it faster than you can.
So I would vote to use the database to get your multiple groups rather than trying to do it yourself in memory.

I typically perform one large sql query and then break the array up in ruby to minimize the number or duration of database connections.
This isn't necessarily any faster, and I have never benchmarked it, but fewer reads to the DB hopefully means it will scale further.

Edit: Nevermind, I didn't quite understand the question. Just let SQL perform the ordering for you in a convenient fashion and then process the array yourself.
You can probably make it even easier if you let your SELECT statement generate helper columns to say which categories (e.g., based on the date) a record belongs to.
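For example, here is a minimal sketch of what those helper columns might look like, assuming a posts table with a date column (the table, column, and cutoff dates are only illustrative):
SELECT *,
       CASE WHEN date > '2009-07-17 00:00:00' THEN 'week'
            WHEN date > '2009-06-24 00:00:00' THEN 'month'
            WHEN date > '2008-07-24 00:00:00' THEN 'year'
       END AS age_bucket
FROM posts
WHERE date > '2008-07-24 00:00:00'
ORDER BY date DESC
One pass over the result array can then drop each row into its bucket; because the lower-level categories are contained in the upper-level ones, a row tagged 'week' also belongs to the 'month' and 'year' groups.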

The simplest, and easiest to understand, approach would be to perform a separate query for each criterion and then form each of those result sets into a group. I don't think you want to start traversing result sets and duplicating rows.
If you really want to do it in one query, you could try a UNION query.
(SELECT *, 1 AS grp FROM posts WHERE date > '2009-07-24 00:00:00')
UNION ALL
(SELECT *, 2 AS grp FROM posts WHERE date > '2009-07-17 00:00:00')
UNION ALL
(SELECT *, 3 AS grp FROM posts WHERE date > '2009-06-24 00:00:00')
UNION ALL
(SELECT *, 4 AS grp FROM posts WHERE date > '2008-07-24 00:00:00')
ORDER BY grp, date DESC
After that you just need to traverse the list once, and filter into new lists by the "grp" column.

It depends. If you're combining the conditions with OR operators, things can get slow; at that point it would be better to use multiple SQL statements.
But really, you need to analyze the query plans and decide for yourself whether it is efficient enough. Run real-world examples and TEST TEST TEST.
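For example, a rough sketch of how you might compare the plans in MySQL/MariaDB (the table and column names are just placeholders taken from the examples in this thread):
-- plan for the single query that fetches everything in the widest category
EXPLAIN SELECT * FROM posts WHERE date > '2008-07-24 00:00:00';
-- plan for one of the narrower per-category queries
EXPLAIN SELECT * FROM posts WHERE date > '2009-07-17 00:00:00';
Compare the estimated rows and whether an index on the date column is used, then time both approaches against realistic data volumes.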

Related

Most efficient way of getting MariaDB/SQL record count

A question on a simple SQL statement, but one I sometimes wonder about. Thought I'd see if anyone knows the answer.
When counting the records in a table using a simple SQL statement, which has the least overhead:
1) SELECT COUNT(single_primary_field) FROM table, i.e. SELECT COUNT(user_ID) FROM users;
2) SELECT COUNT(*) FROM table
I initially thought the first may be quickest. But perhaps not having a specific field to associate with makes the second quicker?
Probably makes very little difference speed wise either way.
Thanks
COUNT(column) counts only the rows where that column is not NULL; NULL values are ignored.
COUNT(*) counts rows regardless of the values in the columns.
Using COUNT(*) is the better way to count rows.
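A quick sketch that shows the difference (hypothetical table, just for illustration):
CREATE TABLE t (user_ID INT PRIMARY KEY, email VARCHAR(100) NULL);
INSERT INTO t VALUES (1, 'a@example.com'), (2, NULL), (3, 'c@example.com');
SELECT COUNT(*) FROM t;       -- 3: counts rows
SELECT COUNT(email) FROM t;   -- 2: the NULL email is ignored
SELECT COUNT(user_ID) FROM t; -- 3: a primary key is never NULL, so this matches COUNT(*)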
COUNT(*) is the most efficient way to count rows, according to the MySQL documentation for COUNT.
Have a read through https://mariadb.com/kb/en/library/explain/ to see what kind of index access your query is using; it usually hints at the query's performance.
I think COUNT(*) is going to be the fastest because MyISAM/Aria tables keep a running row count (InnoDB does not, so it has to actually count the rows).

SQL Select, different than the last 10 records

I have a table called "dutyroster". I want to make a random selection from this table's "names" column, but I want the selection to be different from the last 10 records, so that the same guy is not given a second duty in 10 days. Is that possible?
Create a temporary table with only one column called oldnames which will have no records initially. For each select, execute a query like
select names from dutyroster where dutyroster.names not in (select oldnames from temporarytable) limit 10
and when execution is done add the resultset to the temporary table
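A rough sketch of that idea in MySQL syntax (the table layout and the '<picked name>' value are placeholders; the random pick itself is covered by the other answers):
CREATE TEMPORARY TABLE temporarytable (oldnames VARCHAR(100));
-- candidate names that have not been picked recently
SELECT names
FROM dutyroster
WHERE names NOT IN (SELECT oldnames FROM temporarytable)
LIMIT 10;
-- after choosing one of the candidates, remember it for next time
INSERT INTO temporarytable (oldnames) VALUES ('<picked name>');
You would also want to trim temporarytable so it only keeps the last 10 picks, otherwise the exclusion list grows forever.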
The other answer already here is addressing the portion of the question on how to avoid duplicating selections.
To accomplish the random part of the selection, leverage newid() directly within your select statement. I've made this sqlfiddle as an example.
SELECT TOP 10
newid() AS [RandomSortColumn],
*
FROM
dutyroster
ORDER BY
[RandomSortColumn] ASC
Keep executing the query, and you'll keep getting different results. Use the technique in the other answer for avoiding doubling a guy up.
The basic idea is to use a subquery to exclude the users from the last ten days, then sort the rest randomly:
select dr.*
from dutyroster dr
where dr.name not in (select dr2.name
                      from dutyroster dr2
                      where dr2.datetimecol >= date_sub(curdate(), interval 10 day)
                     )
order by rand()
limit 1;
Different databases may have different syntax for limit, rand(), and for the date/time functions. The above gives the structure of the query, but the functions may differ.
If you have a large amount of data and performance is a concern, there are other (more complicated) ways to take a random sample.
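For example, one common trick (a sketch only, and the 1% threshold is arbitrary) is to pre-filter with rand() so that only a small random slice of the table has to be sorted:
select dr.*
from dutyroster dr
where rand() <= 0.01  -- keep roughly 1% of rows; tune this to your table size
  and dr.name not in (select dr2.name
                      from dutyroster dr2
                      where dr2.datetimecol >= date_sub(curdate(), interval 10 day)
                     )
order by rand()
limit 1;
Note that the sample can occasionally come back empty, in which case you would retry or fall back to the full query.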
you could use TOP for SQL Server
and for MySQL you could use LIMIT
Maybe this would help...
SELECT TOP number|percent column_name(s)
FROM table_name;
Source: http://www.w3schools.com/sql/sql_top.asp

How to get an optimized paginated list from a query that has a UNION ALL?

I have a query formed by a UNION ALL on two tables. The results have to be ordered and paginated (like the typical list of a web application).
The original query (simplified) is:
SELECT name, id
FROM _test1 -- conditions WHERE
UNION ALL
SELECT name, id
FROM _test2 -- conditions WHERE
ORDER BY name DESC LIMIT 10,20
The problem is that the 2 tables have more than 1 million rows each, and the query is very slow.
How can I get an optimized paginated list from a UNION ALL?
Postdata:
I've searched Stack Overflow and found some similar questions to this one, but either the answers were incorrect or the questions weren't exactly the same. Two examples:
Optimize a UNION mysql query
Combining UNION and LIMIT operations in MySQL query
I'm surprised that nobody on Stack Overflow could answer this question. Maybe it is impossible to make this query more efficient? What could be a solution to this problem?
I would think that you could use something similar to the solution in your second link to at least help performance, but I doubt that you'll be able to get great performance on later pages. For example:
( SELECT name, id
FROM _test1 -- conditions WHERE
ORDER BY name DESC LIMIT 0, 30
)
UNION ALL
( SELECT name, id
FROM _test2 -- conditions WHERE
ORDER BY name DESC LIMIT 0, 30
)
ORDER BY name DESC
LIMIT 10, 20
You're basically limiting each subquery to the subset of possible rows that might be on the given page. This way you only need to retrieve and merge 30 rows from each table before determining which 20 to return. Otherwise the server will potentially grab all of the rows from each table, order and merge them, and then start trying to find the correct rows.
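For this to stay fast, each branch's ORDER BY name DESC LIMIT ... should be able to read straight from an index on name, so the server can stop after the first rows instead of sorting a million-row table. A sketch, assuming nothing about your existing indexes:
CREATE INDEX idx_test1_name ON _test1 (name); -- hypothetical index names
CREATE INDEX idx_test2_name ON _test2 (name);
If the WHERE conditions are selective, a composite index covering those columns plus name may work better; check with EXPLAIN.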
I don't use MySQL a lot though, so I can't guarantee that the engine will behave how I think it should :)
In any event, once you get to later pages you're still going to be merging larger and larger result sets. HOWEVER, I am of the strong opinion that a UI should NEVER let a user page all the way to (for example) page 5000. That's simply too much data for a human mind to find useful all at once, and it should require further filtering. Maybe let them see the first 100 pages (or some other number), but beyond that they have to constrain the results better. Just my opinion though.

Semi-Distinct MySQL Query

I have a MySQL table called items that contains thousands of records. Each record has a user_id field and a created (datetime) field.
Trying to put together a query to SELECT 25 rows, passing a string of user ids as a condition and sorted by created DESC.
In some cases, there might be just a few user ids, while in other instances, there may be hundreds.
If the result set is greater than 25, I want to pare it down by eliminating duplicate user_id records. For instance, if there were two records for user_id = 3, only the most recent (according to created datetime) would be included.
In my attempts at a solution, I am having trouble because while, for example, it's easy to get a result set of 100 (allowing duplicate user_id records), or a result set of 16 (using GROUP BY for unique user_id records), it's hard to get 25.
One logical approach, which may not be the correct MySQL approach, is to get the most recent record for each user_id, and then, if the result set is less than 25, begin adding a second record for each user_id until the 25 record limit is met (maybe a third, fourth, etc. record for each user_id would be needed).
Can this be accomplished with a MySQL query, or will I need to take a large result set and trim it down to 25 with code?
I don't think what you're trying to accomplish is possible as a SQL query. Your desire is to return 25 rows, no matter what the normal data groupings are whereas SQL is usually picky about returning based on data groupings.
If you want a purely MySQL-based solution, you may be able to accomplish this with a stored procedure. (Supported in MySQL 5.0.x and later.) However, it might just make more sense to run the query to return all 100+ rows and then trim it programmatically within the application.
This will get you the most recent record for each user --
SELECT i1.user_id, i1.created
FROM items AS i1
LEFT JOIN items AS i2
ON i1.user_id = i2.user_id AND i2.created > i1.created
WHERE i2.id IS NULL
This will get you the most recent two records for each user (keep the rows that have fewer than two newer rows for the same user) --
SELECT i1.user_id, i1.created
FROM items AS i1
LEFT JOIN items AS i2
ON i1.user_id = i2.user_id AND i2.created > i1.created
GROUP BY i1.id
HAVING COUNT(i2.id) < 2
Try working from there.
You could nicely put this into a stored procedure.
My opinion is to use application logic, as this is very much application layer logic you are trying to implement at the DB level, i.e. filtering down the results to make the search more useful to the end user.
You could implement a stored procedure (personally I would never do such a thing) or just let the application decide which 25 results to keep.
One approach would be to get the most recent item from each user, followed by the most recent items from all users, and limit that. You could construct pathological examples where this probably isn't what you want, but it should be pretty good in general.
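A hedged sketch of that approach in MySQL (the user ids, the backfill flag, and the assumption that items has an id primary key are all illustrative):
SELECT *
FROM (
  -- the single most recent item for each requested user
  (SELECT i.*, 0 AS backfill
   FROM items i
   LEFT JOIN items newer
     ON newer.user_id = i.user_id AND newer.created > i.created
   WHERE newer.id IS NULL
     AND i.user_id IN (3, 7, 42))
  UNION ALL
  -- then older items from the same users, newest first, to fill out the page
  (SELECT i.*, 1 AS backfill
   FROM items i
   JOIN items newer
     ON newer.user_id = i.user_id AND newer.created > i.created
   WHERE i.user_id IN (3, 7, 42)
   GROUP BY i.id
   ORDER BY i.created DESC
   LIMIT 25)
) t
ORDER BY backfill, created DESC
LIMIT 25;
Because rows with backfill = 0 sort first, each requested user's latest item is guaranteed a slot (as long as there are at most 25 requested users), and the rest of the 25 is filled with the newest remaining items.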
Unfortunately, there is no easy way :( I had to do something similar when I built a report for my company that would pull up customer disables that were logged in a database. The only problem was that the disconnect is run and logged every 30 minutes, so the rows would not be distinct since the timestamp was different in every disconnect. I solved this problem with subqueries. I don't have the exact code anymore, but I believe this is how I implemented it:
SELECT CORP, HOUSE, CUST,
(
    SELECT TOP 1 hsd
    FROM #TempTable t2
    WHERE t1.corp = t2.corp
    AND t1.house = t2.house
    AND t1.cust = t2.cust
    ORDER BY t2.hsd DESC -- take the latest disable date; without an ORDER BY, TOP 1 is arbitrary
) DisableDate
FROM #TempTable t1
GROUP BY corp, house, cust -- selecting distinct
So, my answer is to eliminate the non-distinct column from the query by using subqueries. There might be an easier way to do it though; I'm curious to see what others post.
Sorry, I keep editing this; I keep trying to find ways to make it easier to show what I did.

SQL Server UNION - What is the default ORDER BY Behaviour

If I have a few UNION Statements as a contrived example:
SELECT * FROM xxx WHERE z = 1
UNION
SELECT * FROM xxx WHERE z = 2
UNION
SELECT * FROM xxx WHERE z = 3
What is the default order by behaviour?
The test data I'm seeing essentially does not come back in the order specified above. I.e. the data is ordered, but I wanted to know what the rules of precedence are.
Another thing is that in this case xxx is a View. The view joins 3 different tables together to return the results I want.
There is no default order.
Without an Order By clause the order returned is undefined. That means SQL Server can bring them back in any order it likes.
EDIT:
Based on what I have seen, without an Order By, the order that the results come back in depends on the query plan. So if there is an index that it is using, the result may come back in that order but again there is no guarantee.
In regards to adding an ORDER BY clause:
This is probably elementary to most here, but I thought I'd add this.
Sometimes you don't want the results mixed, so you want the first query's results then the second and so on. To do that I just add a dummy first column and order by that. Because of possible issues with forgetting to alias a column in unions, I usually use ordinals in the order by clause, not column names.
For example:
SELECT 1, * FROM xxx WHERE z = 'abc'
UNION ALL
SELECT 2, * FROM xxx WHERE z = 'def'
UNION ALL
SELECT 3, * FROM xxx WHERE z = 'ghi'
ORDER BY 1
The dummy ordinal column is also useful for times when I'm going to run two queries and I know only one is going to return any results. Then I can just check the ordinal of the returned results. This saves me from having to do multiple database calls and most empty resultset checking.
Just found the actual answer.
Because UNION removes duplicates it does a DISTINCT SORT. This is done before all the UNION statements are concatenated (check out the execution plan).
To avoid the sort, use UNION ALL; note that this also means duplicates are not removed.
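A small illustration using the table from the question (the values are made up):
SELECT z FROM xxx WHERE z IN (1, 2)
UNION
SELECT z FROM xxx WHERE z IN (2, 3);
-- each z value appears once: the implicit DISTINCT collapses the overlap on z = 2
SELECT z FROM xxx WHERE z IN (1, 2)
UNION ALL
SELECT z FROM xxx WHERE z IN (2, 3);
-- rows with z = 2 come back twice, once from each branch, and no distinct step is required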
If you care what order the records are returned, you MUST use an order by.
If you leave it out, it may appear organized (based on the indexes chosen by the query plan), but the results you see today may NOT be the results you expect, and it could even change when the same query is run tomorrow.
Edit: Some good, specific examples: (all examples are MS SQL server)
Dave Pinal's blog describes how two very similar queries can show a different apparent order, because different indexes are used:
SELECT ContactID FROM Person.Contact
SELECT * FROM Person.Contact
Conor Cunningham shows how the apparent order can change when the table gets larger (if the query optimizer decides to use a parallel execution plan).
Hugo Kornelis proves that the apparent order is not always based on primary key. Here is his follow-up post with explanation.
A UNION can be deceptive with respect to result set ordering because a database will sometimes use a sort method to provide the DISTINCT that is implicit in UNION, which makes it look like the rows are deliberately ordered -- this doesn't apply to UNION ALL, for which there is no implicit DISTINCT, of course.
However there are algorithms for the implicit distinct, such as Oracle's hash method in 10g+, for which no ordering will be applied.
As DJ says, always use an ORDER BY
It's very common to come across poorly written code that assumes table data is returned in insert order, and 95% of the time the coder gets away with it on many common databases (MSSQL, Oracle, MySQL) without ever being aware that this is a problem. It is of course a complete fallacy and should always be corrected when it's come across: always, without exception, use an ORDER BY clause yourself.