Query to get the results for each row in table1 with a subquery of at most N records meeting a condition in table2 - SQL

I am trying without success to calculate building heights in my city using the LIDAR satellite dataset.
System specs
CPU: Core i7 6700k 4200MHz, 4 cores, 8 threads
RAM: 32GB DDR4 3200MHz
SSD: 1TB Samsung 970 EVO
OS: Ubuntu 18.04
Postgres setup
I am using the latest version of Postgres (v12.1) with PostGIS, with the following tweaks recommended in various sources:
shared_buffers = 256MB
maintenance_work_mem = 4GB
max_parallel_maintenance_workers = 7
max_parallel_workers = 7
max_wal_size = 60GB
min_wal_size = 90MB
random_page_cost = 1.0
Database setup
The lidar table has more than 3,000 million rows, and the buildings table more than 150,000 rows.
In the lidar table the GiST index was created: CREATE INDEX lidar_idx ON lidar USING GIST (geom);
buildings table: | gid | geom |
lidar table: | z | geom |
Height calculation
Currently, in order to calculate the height of a building, it is necessary to check whether each one of the 3,000 million points (rows) is inside the area of each building, and then average the z values of all the points found inside that building's area.
The queries I have tried are taking forever (probably 5 days or even more), and I would like to simplify the query so that I can get the height of a building from far fewer points, without having to compare against all of the insane 3,000 million records for every building.
For example:
For building with id1, I would like to get only the first 100 records found which are inside the building geometry area ( ST_Within(l.geom, e.geom) ), and once those 100 records are found, pass to the next building.
For building with id2, I would like the same, get only the first 100 records found which are inside the building area.
And so on..
My main query is
SELECT e.gid, AVG(l.z) AS height
FROM lidar l,
buildings e
WHERE ST_Within(l.geom, e.geom)
GROUP BY e.gid
I have tried another query, but I cannot get it to work.
SELECT e.gid, AVG(l.z), COUNT(1) FILTER (WHERE ST_Within(l.geom, e.geom)) AS gidc
FROM lidar l, buildings e
WHERE gidc < 100
GROUP BY e.gid

I don't think you really want to do this at all. You should first try to make the correct query faster rather than compromising correctness by working with an arbitrary (but not random) subset of the data.
But if you do want it, then you can use a lateral join.
SELECT e.gid, t.height
FROM buildings e CROSS JOIN LATERAL
  (SELECT AVG(sub.z) AS height
   FROM (SELECT l.z FROM lidar l WHERE ST_Within(l.geom, e.geom) LIMIT 100) sub) t;
"it is necessary to check if each one of the 3000 million points (rows) is inside the area of each building and calculate the average of all the points found inside a building area."
This is exactly what a geometry index is for. You don't need to look at every point to get just the ones inside a building area. If you don't have the right index, such as on lidar using gist (geom), then the lateral join query will also be awful.
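As a quick sanity check (a sketch, not from the original answer), you can ask the planner whether that index is actually used for the spatial filter on a single building; an index scan on lidar_idx should show up in the plan:
-- Hypothetical check: does the planner use the GiST index for one building?
EXPLAIN
SELECT AVG(l.z) AS height
FROM buildings e
JOIN lidar l ON ST_Within(l.geom, e.geom)
WHERE e.gid = 1;   -- any single building id
If the plan shows a sequential scan over lidar instead, the index is missing or not being picked up, and no amount of LIMITing will save the query.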

Related

How to use normalization to set levels of confidence between a rating and the number of ratings in Python or SQL?

I have a list of about 800 sales items that each have a rating (from 1 to 5) and a number of ratings. I'd like to list the items most likely to have a "good" rating in an unbiased way, meaning that 1 person voting 5.0 isn't nearly as good as 50 people having voted with the item's rating being a 4.5.
Initially I thought about getting the smallest amount of votes (which will be zero 99% of the time), and the highest amount of votes for an item on the list and factor that into the ratings, giving me a confidence level of 0 to 100%, however I'm thinking that this approach would be too simplistic.
I've heard about Bayesian probability but I have no idea on how to implement it. My list of items, ratings and number of ratings is on a MySQL view, but I'm parsing the code using Python, so I can make the calculations on either side (but preferably at the SQL view).
Is there any practical way that I can normalize this voting with SQL, considering the rating and number of votes as parameters?
| itemCode | rating | numOfRatings |
|----------|--------|--------------|
| 12330    | 5.00   | 2            |
| 85763    | 4.65   | 36           |
| 85333    | 3.11   | 9            |
I've started off trying to assign percentiles to the rating and numOfRatings, this way I'd be able to do normalization (sum them with an initial 50/50 weight). Here's the code I've attempted:
SELECT p.itemCode AS itemCode, (p.rating - min(p.rating)) / (max(p.rating) - min(p.rating)) AS percentil_rating,
(p.numOfRatings - min(p.numOfRatings)) / (max(p.numOfRatings) - min(p.numOfRatings)) AS percentil_qtd_ratings
FROM products p
WHERE p.available = 1
GROUP BY p.itemCode
However that's only bringing me a result for the first itemCode on the list, not all of them.
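For reference, here is a sketch of the min-max scaling that query seems to be aiming for, rewritten with window functions so that every itemCode comes back; this assumes MySQL 8.0+ (window function support), which the question does not state:
-- Min-max scale rating and numOfRatings across the whole result set.
SELECT p.itemCode,
       (p.rating - MIN(p.rating) OVER ()) /
           NULLIF(MAX(p.rating) OVER () - MIN(p.rating) OVER (), 0) AS percentil_rating,
       (p.numOfRatings - MIN(p.numOfRatings) OVER ()) /
           NULLIF(MAX(p.numOfRatings) OVER () - MIN(p.numOfRatings) OVER (), 0) AS percentil_qtd_ratings
FROM products p
WHERE p.available = 1;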
Clearly the issue here is the low number of observations your data has. A Bayesian approach is the way to go because it gives a well-behaved estimate for rating applications, especially when observations are limited, and it shrinks an item's score toward the overall average until enough votes accumulate (this article provides an excellent explanation of Bayesian probability for beginners).
I would suggest storing your data in CSV files so it becomes easier to manipulate in Python. Denormalizing the data via joins is the first task to do before analyzing your ratings.
This is the simplified Bayesian formula to use in your Python code:
R – confidence level, i.e. the number of observations
v – number of votes for a single product
C – average vote for all products
m – tuneable parameter, i.e. the cutoff number of votes required to be considered (how many votes you want displayed)
Since this is the simplified formula, this article explains how it has been derived from the original formula. This article is also helpful in explaining the parameters.
Knowing the formula pretty much gets 50% of your work done; the rest is just importing your data and working with it. Below are examples similar to your problem in case you need a full demonstration:
Github example 1
Github example 2
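For a concrete starting point in SQL itself (so it can live in the MySQL view, as the question prefers), here is a sketch of the commonly used weighted-rating form of the Bayesian average, W = (v x R + m x C) / (v + m), where R is the item's average rating, v its number of ratings, C the average rating over all items, and m a tunable prior vote count; the table and column names follow the question, and m = 10 is just an illustrative value:
-- Shrinks each item's rating toward the global average until it has enough votes.
SELECT p.itemCode,
       p.rating,
       p.numOfRatings,
       (p.numOfRatings * p.rating + 10 * g.global_avg) / (p.numOfRatings + 10) AS weighted_rating
FROM products p
CROSS JOIN (SELECT AVG(rating) AS global_avg FROM products WHERE available = 1) g
WHERE p.available = 1
ORDER BY weighted_rating DESC;
With this, an item with 2 votes of 5.00 ranks below an item with 36 votes averaging 4.65, which is exactly the unbiased ordering being asked for.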

Access 2016 query runtime

I am trying to find out the number of IPs that correspond to each big category (big_cat). In order to do this I need to JOIN 3 tables.
I have the following 3 tables:
Large categories:
small_cat | big_cat
final_parsed_userlogs_access_longuri:
ip | uri
all_categories_from_all_unique_uri:
uri | small_cat
And the following SQL query in Access 2016:
SELECT COUNT(Final_parsed_userlogs_access_longuri.ip), Large_categories.big_cat
FROM (All_categories_from_all_unique_uri
      INNER JOIN Final_parsed_userlogs_access_longuri
      ON All_categories_from_all_unique_uri.uri = Final_parsed_userlogs_access_longuri.uri)
     INNER JOIN Large_categories
     ON All_categories_from_all_unique_uri.small_cat = Large_categories.small_cat
GROUP BY Large_categories.big_cat;
The tables are 2.2 million, 4.4 million and 1.1 million rows long (in the order mentioned above). Obviously this will take quite some time to run, but I have been running this query for 1.5 hours now and it's still not finished.
Is there a way to make this query run faster? I have already indexed all the fields. If that is not possible, is there a way to get a general sense of how long this query will take (through some equation or something)?

Access 2010 doubling the sum in query

I know this question has been asked and answered. I understand the problem and I understand the underlying cause and I understand the solution. What I DON'T understand is how to implement the solution.
I'll try to be detailed....
Background: each material is grouped on WellID and SandType, which together form my primary key in each table; these come from 2 lookup tables, one for each. (I work in oil and gas.)
I have 3 tables that store material (sand) weights at 3 different stages in the job process: basically the weight from the engineer's DESIGN, what was DELIVERED, and what is in INVENTORY.
I know that the join is messed up and is adding up the total for each row in each table, sometimes doubling or tripling it, etc.
I am grouping on WellID and SandID.
Now I don't want someone to do the work for me. I just don't know how or where in Access to restrict it to what I want, or whether modifying the SQL is the proper way to write the code. My current workaround is 3 separate sum queries, one for each table, but that is going to get inefficient and adds steps.
My whole database purpose and subsequent reports hinge on math on these 3 numbers, so my show stopper here is putting the fat lady on stage, and it is about to become a deal breaker at the end of the line!
I need some advice, direction, criticism, wisdom, witty euphemisms or a new job!
The 3 tables look as follows
Design:
T_DESIGN
| DesignID | WellID | Sand_ID | Weight_DES | Time_DES             |
|----------|--------|---------|------------|----------------------|
| 89       | 201    | 1       | 100        | 4/21/2014 6:46:02 AM |
| 98       | 201    | 2       | 100        | 4/21/2014 7:01:22 AM |
| 86       | 201    | 4       | 100        | 4/21/2014 6:28:01 AM |
| 93       | 228    | 5       | 100        | 4/21/2014 6:53:34 AM |
| 91       | 228    | 1       | 100        | 4/21/2014 6:51:23 AM |
| 92       | 228    | 1       | 100        | 4/21/2014 6:53:30 AM |
Delivered:
T_BOL
| BOLID | WellID_BOL | SandID_BOL | Weight_BOL |
|-------|------------|------------|------------|
| 279   | 201        | 1          | 100        |
| 280   | 201        | 1          | 100        |
| 281   | 228        | 2          | 5          |
| 282   | 228        | 1          | 10         |
| 283   | 228        | 9          | 100        |
Inventory:
T_BIN
| StrapID | WellID_BIN | SandID_BIN | Weight_BIN |
|---------|------------|------------|------------|
| 11      | 201        | 1          | 100        |
| 13      | 228        | 1          | 10         |
| 14      | 228        | 1          | 0          |
| 17      | 228        | 1          | 103        |
| 19      | 201        | 1          | 50         |
The Query Results:
Test Query99
| WellID | SandID | Sum Of Weight_DES | Sum Of Weight_BOL | Sum Of Weight_BIN |
|--------|--------|-------------------|-------------------|-------------------|
| 201    | 1      | 400               | 400               | 300               |
| 228    | 1      | 600               | 60                | 226               |
SQL:
SELECT DISTINCTROW L_WELL.WellID, L_SAND.SandID,
Sum(T_DESIGN.Weight_DES) AS [Sum Of Weight_DES],
Sum(T_BOL.Weight_BOL) AS [Sum Of Weight_BOL],
Sum(T_BIN.Weight_BIN) AS [Sum Of Weight_BIN]
FROM ((L_SAND INNER JOIN
(L_WELL INNER JOIN T_DESIGN ON L_WELL.[WellID] = T_DESIGN.[WellID_DES])
ON L_SAND.SandID = T_DESIGN.[SandID_DES])
INNER JOIN T_BIN
ON (L_WELL.WellID = T_BIN.WellID_BIN)
AND (L_SAND.SandID = T_BIN.SandID_BIN))
INNER JOIN T_BOL
ON (L_WELL.WellID = T_BOL.WellID_BOL) AND (L_SAND.SandID = T_BOL.SandID_BOL)
GROUP BY L_WELL.WellID, L_SAND.SandID;
The two lookup tables are for Well Names and Sand Types. (The well list has been abbreviated due to size.)
L_Well:
| WellID | WellName_WELL               |
|--------|-----------------------------|
| 3      | AAGVIK 1-35H                |
| 4      | AARON 1-22                  |
| 5      | ACHILLES 5301 41-12B        |
| 6      | ACKLINS 6092 12-18H         |
| 7      | ADDY 5992 43-21 #1H         |
| 8      | AERABELLE 5502 43-7T        |
| 9      | AGNES 1-13H                 |
| 10     | AL 5493 44-23B              |
| 11     | ALDER 6092 43-8H            |
| 12     | AMELIA FEDERAL 5201 41-11B  |
| 13     | AMERADA STATE 1-16X         |
| 14     | ANDERSMADSON 5201 41-13H    |
| 15     | ANDERSON 1-13H              |
| 16     | ANDERSON 7-18H              |
| 17     | ANDRE 5501 13-4H            |
| 18     | ANDRE 5501 14-5 3B          |
| 19     | ANDRE SHEPHERD 5501 14-7 1T |
Sand Lookup:
LSand
| SandID | SandType_Sand |
|--------|---------------|
| 1      | 100 Mesh      |
| 2      | 20/40 EP      |
| 3      | 20/40 RC      |
| 4      | 20/40 W       |
| 5      | 30/50 Ceramic |
| 6      | 30/50 EP      |
| 7      | 30/50 RC      |
| 8      | 40/70 EP      |
| 9      | 40/70 W       |
| 10     | NA See Notes  |
Querying and Joining Aggregation Data through an MS Access Database
I noticed your concern for pointers on how to implement some of the theory behind your aggregation queries. While SQL queries are good power-tools to get to the core of a difficult analysis problem, it might also be useful to show some of the steps on how to bring things together using the built-in design tools of MS Access.
This solution was developed on MS Access 2010.
Comments on Previous Solutions
#xQbert had a solid start with the following SQL statement. The sub query approach could be visualized as individual query objects created in Access:
FROM
  (SELECT WellID, Sand_ID, Sum(Weight_DES) AS sumWeightDES
   FROM T_DESIGN
   GROUP BY WellID, Sand_ID) A
INNER JOIN
  (SELECT WellID_BOL, Sum(Weight_BOL) AS sumWeightBOL
   FROM T_BOL
   GROUP BY WellID_BOL) B
  ON A.WellID = B.WellID_BOL
INNER JOIN
  (SELECT WellID_BIN, Sum(Weight_BIN) AS sumWeightBIN
   FROM T_BIN
   GROUP BY WellID_BIN) C
  ON C.WellID_BIN = B.WellID_BOL
Depending on the actual rules of the business data, the following assumptions made in this query may not necessarily be true:
Will the tables of T_DESIGN, T_BOL and T_BIN be populated at the same time? The sample data has mixed values, i.e., there are WellID and SandID combinations which do not have values for all three of these categories.
INNER type joins assume all three tables have records for each dimension value (Well-Sand combination)
#Frazz improved on the query design by suggesting that whatever is selected as the "base" joining table (T_DESIGN in this case), this table must be populated with all the relevant dimensional values (WellID and SandID combinations).
SELECT
    WellID_DES AS WellID,
    SandID_DES AS SandID,
    SUM(Weight_DES) AS Weight_DES,
    (SELECT SUM(Weight_BOL) FROM T_BOL WHERE T_BOL.WellID_BOL=d.WellID_DES
        AND T_BOL.SandID_BOL=d.SandID_DES) AS Weight_BOL,
    (SELECT SUM(Weight_BIN) FROM T_BIN WHERE T_BIN.WellID_BIN=d.WellID_DES
        AND T_BIN.SandID_BIN=d.SandID_DES) AS Weight_BIN
FROM T_DESIGN AS d
GROUP BY WellID_DES, SandID_DES;
This was an improvement because now all joins originate from a single point. If a key value does not exist in either T_BOL or T_BIN, results will still come back and the entire record of the query will not be lost.
Again, it may be possible that there are no T_DESIGN records matching to values stored in the other tables.
Building Aggregation Sub Query Objects
The presented data does not suggest that there is any direct interaction between the data in each of the three tables aside from lining up their results in the end for presentation based on a common key-value pair (WellID and SandID). Since we are using Access, there is a chance to do these calculations separately.
This query was designed using the "summarizing" (totals) feature of the Access query design tool, pointed at the T_DESIGN table: it groups on the well and sand keys and sums the design weight.
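As a sketch, the SQL behind such a summary query object (using the QSUB_DES name from this answer; the WellID_DES/SandID_DES column names follow the OP's main query, while the sample data listing calls them WellID/Sand_ID) would be roughly:
SELECT T_DESIGN.WellID_DES AS WellID, T_DESIGN.SandID_DES AS SandID,
       Sum(T_DESIGN.Weight_DES) AS SumOfWeight_DES
FROM T_DESIGN
GROUP BY T_DESIGN.WellID_DES, T_DESIGN.SandID_DES;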
Making Dimension Table Through a Cartesian Product
There are mixed opinions out there about cartesian products, but they do actually have a purpose.
Most of the concern is that a runaway cartesian product query will make millions and millions of nonsensical data values. In this query, it's specifically designed to simulate a real business condition.
The Case for a Cartesian Product
Picking from the sample data provided:
Take some of the sand types from the sample data: "20/40 EP", "30/50 Ceramic", "40/70 EP", and "30/50 RC". Are these sand types found at their respective wells consistently throughout the year?
Without an anchoring dimension for the key values, those wells would not be found anywhere in the database via querying. It's not that they do not exist... it's just that there is no recorded data (i.e., sand type weights delivered) for them.
A Reference Dimension Query Product
A dimension query is simple to produce. By referencing the two sources of keys: L_WELL and L_SAND (both look up tables or dimensional tables) without identifying a join condition, all the different combinations of the two key-values (WellID and SandID) are made:
The shortcut in SQL looks like this:
SELECT L_WELL.WellID, L_SAND.SandID, L_WELL.WellName, L_SAND.SandType
FROM L_SAND, L_WELL;
The result is one row for every combination of WellID and SandID, with the well name and sand type attached.
Instead of using any of the operational data tables (T_DESIGN, T_BOL, or T_BIN) as the source for a static dimension such as the list of oil wells or the catalog of sand types, that data has been predetermined, and it can even be transferred to a real table, since it probably will not change much once it is created.
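If you do decide to persist that dimension, a make-table query is a one-liner in Access SQL (a sketch; the target table name here is made up):
SELECT L_WELL.WellID, L_SAND.SandID, L_WELL.WellName_WELL, L_SAND.SandType_Sand
INTO T_WELL_SAND_DIM
FROM L_SAND, L_WELL;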
Correlating Sub Query Results from Different Sources
After repeating the process and creating the summary queries for the other two sources (T_BOL and T_BIN), you can finally arrange the results through a simple query and join process.
The actual JOIN operations are between the dimension table/query: QSUB_WELL_SAND and all three of the summary queries: QSUB_DES, QSUB_BOL, and QSUB_BIN.
I have chosen to implement LEFT OUTER joins, selected through the Access Query Design dialogue.
QSUB_WELL_SAND is defined as our anchor dimension. It will always have more records than any of the other tables, so an OUTER JOIN should be defined to KEEP all reference dimension records... and all summary query results, regardless of whether there is a match between the two.
QSUB_WEIGHTS: The Query to Combine All Sub Query Results
The final output query LEFT JOINs the dimension query QSUB_WELL_SAND to each of the three summary queries and lists the three summed weights side by side per WellID and SandID.
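In SQL terms, that final query is roughly the following sketch, assuming each summary query exposes WellID, SandID and a SumOfWeight_* column as in the QSUB_DES sketch above:
SELECT dim.WellID, dim.SandID,
       QSUB_DES.SumOfWeight_DES, QSUB_BOL.SumOfWeight_BOL, QSUB_BIN.SumOfWeight_BIN
FROM ((QSUB_WELL_SAND AS dim
  LEFT JOIN QSUB_DES ON (dim.WellID = QSUB_DES.WellID) AND (dim.SandID = QSUB_DES.SandID))
  LEFT JOIN QSUB_BOL ON (dim.WellID = QSUB_BOL.WellID) AND (dim.SandID = QSUB_BOL.SandID))
  LEFT JOIN QSUB_BIN ON (dim.WellID = QSUB_BIN.WellID) AND (dim.SandID = QSUB_BIN.SandID);
Because every join is a LEFT JOIN from the dimension, a well/sand combination with no deliveries or no inventory still shows up, just with blanks in those columns.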
Conclusions and Clean Up: Some Closing Thoughts
With respect to the join to the dimension query, there is a lot of empty space where there are no records or data to report on. This is where a cleverly placed filter or query criteria can shrink the output to exactly what you care to look at the most; I added additional criteria at the end of my query to do exactly that.
My data was based on what was supplied by the OP, except where the IDs assigned to the Well Type attribute did not match the sample data. The values I assigned instead are posted below as well.
Access supports a different style of database operations. Step-wise queries can be developed to hold pre-processed, special sets of data that can be reintroduced to the other data tables and query results to develop complex query criteria.
All this being said, programming in SQL can also be just as rewarding. Be sure to explore the differences between the results and the capabilities you can tap into by using one approach (SQL coding), the other approach (Access design wizards), or both. There's definitely a lot of room to grow and discover new capabilities from just the example provided here.
Hopefully I haven't stolen all the fun from developing a solution for your situation. I read into your comment about "building more on top" as the harbinger of more fun to come, so I don't feel so bad...! Happy Developing!
Data Modifications from the Sample Set
Without understanding L_SAND and L_WELL, this is the best I could come up with.
Use sub-selects to get the sums first so you don't compound the data issues on the joins.
SELECT WellID, Sand_ID, sumWeightDES, WellID_BOL, sumWeightBOL,
       WellID_BIN, sumWeightBIN
FROM
  (SELECT WellID, Sand_ID, Sum(Weight_DES) AS sumWeightDES
   FROM T_DESIGN
   GROUP BY WellID, Sand_ID) A
INNER JOIN
  (SELECT WellID_BOL, Sum(Weight_BOL) AS sumWeightBOL
   FROM T_BOL
   GROUP BY WellID_BOL) B
  ON A.WellID = B.WellID_BOL
INNER JOIN
  (SELECT WellID_BIN, Sum(Weight_BIN) AS sumWeightBIN
   FROM T_BIN
   GROUP BY WellID_BIN) C
  ON C.WellID_BIN = B.WellID_BOL
I would simplify it by excluding L_WELL and L_SAND. If you are just interested in IDs, then those joins really shouldn't be necessary. If all 3 other tables have the WellID and SandID columns, then pick the one that is sure to have all the combos.
Supposing it's the Design table, then:
SELECT
WellID_DES AS WellID,
SandID_DES AS SandID,
SUM(Weight_DES) AS Weight_DES,
(SELECT SUM(Weight_BOL) FROM T_BOL WHERE T_BOL.WellID_BOL=d.WellID_DES AND T_BOL.SandID_BOL=d.SandID_DES) AS Weight_BOL,
(SELECT SUM(Weight_BIN) FROM T_BIN WHERE T_BIN.WellID_BIN=d.WellID_DES AND T_BIN.SandID_BIN=d.SandID_DES) AS Weight_BIN
FROM T_DESIGN AS d
GROUP BY WellID_DES, SandID_DES;
... and make sure all your tables have an index on WellID and SandID.
Just to be clear: I don't think it's a good idea to start the join from the lookup tables, or from their cartesian product. You can always left join them to fetch descriptions and other data. But the main query should be the one with all the combos of WellID and SandID... or if not all, at least most of them. Things get difficult if none of the 3 tables (DESIGN, BOL and BIN) has all the combos. In that case (and I'd say only in that case) you might as well start with the cartesian product of the two lookup tables. You could also do a UNION, but I doubt that would be more efficient.

Iterating through table to use as parameters in function to create large new table

I have been researching this and haven't found a usable answer yet. I'm a moderate hack with SQL Server and have had some success using parameters in functions and stored procedures, but this is a combination that I can't seem to get my head around.
Here is my scenario summarized for clarity:
My company sells computers as laptops and desktops and accessories. I have a tbl_Computers where I maintain Computer_Type, Model_Num, and Mix_Percent like this:
| Computer_Type | Model_Num | Mix_Percent |
|---------------|-----------|-------------|
| Desktop       | ABC       | .75         |
| Desktop       | XYZ       | .25         |
| Laptop        | DEF       | .60         |
| Laptop        | MNO       | .40         |
We also have a table for forecast by month where I maintain Computer_Type, Jul_Num, Aug_Num, and Sep_Num like this:
| Computer_Type | Jul_Num | Aug_Num | Sep_Num |
|---------------|---------|---------|---------|
| Desktop       | 100     | 200     | 150     |
| Laptop        | 300     | 400     | 700     |
I have created a function for a planning bill of material that will find all components and accessories sold in the past twelve months for a given model. It works as follows:
P_BOM ("ABC") will result in a table with two columns: Component and Comp_Percent
CPU 1 (This means we sell 1 CPU with every desktop)
Hard Drive 2 (We sell 2 with every desktop)
Printer .8 (80% of the time we sell a printer)
What I'd like my Stored Procedure to do is to provide a single, combined table that would look like this with the following headers Component, Jul_Num, Aug_Num, and Sep_Num:
| Component  | Jul_Num | Aug_Num | Sep_Num |
|------------|---------|---------|---------|
| CPU        | 400     | 600     | 850     |
| Hard Drive | 500     | 800     | 1000    |
I get the CPU number by summing the following logic:
Desktop's Jul_Num x ABC's Mix_Percentage x ABC's CPU Comp_Percent
Desktop's Jul_Num x XYZ's Mix_Percentage x XYZ's CPU Comp_Percent
Laptop's Jul_Num x DEF's Mix_Percentage x DEF's CPU Comp_Percent
Laptop's Jul_Num x MNO's Mix_Percentage x MNO's CPU Comp_Percent
400 = (100 x .75 x 1) + (100 x .25 x 1) + (300 x .6 x 1) + (300 x .4 x 1)
Any ideas?
Thanks,
Rob
EDIT:
Thanks for the suggestion that this needn't be an iterative problem but rather can be solved with a table-based approach.
I created this first table to give me:
| Model_Num | Part_Num   | Total | Target_Total | Comp_Percent |
|-----------|------------|-------|--------------|--------------|
| ABC       | CPU        | 3     | 3            | 1            |
| ABC       | Hard Drive | 6     | 3            | 2            |
| DEF       | CPU        | 2     | 2            | 1            |
| DEF       | Hard Drive | 2     | 2            | 1            |
| MNO       | CPU        | 1     | 1            | 1            |
| MNO       | Hard Drive | 1     | 1            | 1            |
| XYZ       | CPU        | 1     | 1            | 1            |
| XYZ       | Hard Drive | 2     | 1            | 2            |
Here was the SQL:
SELECT All_Components.Model_Num, All_Components.Part_Num, SUM(All_Components.Qty) AS Total, TTO.Target_Total, SUM(All_Components.Qty)
/ TTO.Target_Total AS Comp_Percent
FROM (SELECT Test_tbl_Computers.Model_Num, Test_tbl_Orders_2.Order_Num, Test_tbl_Orders_2.Part_Num, Test_tbl_Orders_2.Qty
FROM Test_tbl_Orders AS Test_tbl_Orders_2 CROSS JOIN
Test_tbl_Computers) AS All_Components INNER JOIN
(SELECT Test_tbl_Orders.Part_Num, SUM(Test_tbl_Orders.Qty) AS Target_Total
FROM Test_tbl_Orders INNER JOIN
Test_tbl_Computers AS Test_tbl_Computers_1 ON Test_tbl_Orders.Part_Num = Test_tbl_Computers_1.Model_Num
GROUP BY Test_tbl_Orders.Part_Num) AS TTO ON All_Components.Model_Num = TTO.Part_Num
WHERE (All_Components.Order_Num IN
(SELECT Order_Num
FROM Test_tbl_Orders AS Test_tbl_Orders_1
WHERE (Part_Num = All_Components.Model_Num))) AND (All_Components.Part_Num <> All_Components.Model_Num)
Then, to keep it from becoming a SQL-Monster I couldn't tame, I created another function to conduct an inner join to the forecast and mix percentages and then sum up all numbers grouping by Part_Num.
If nothing else, I appreciated having to write out my question to help focus my thoughts.
"Computer_Type, Jul_Num, Aug_Num, and Sep_Num"
One-column-per-month works for reporting or a data-entry interface, but you are going to drive yourself absolutely bonkers if you actually store the data that way. If you have the means to go back and change that table to "Computer_Type, Year, Month, Num" or "Computer_Type, Date, Num", then you should do that first.
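A sketch of that normalized layout (the names here are illustrative, not from the original post):
-- One row per computer type per month instead of one column per month.
CREATE TABLE dbo.Forecast (
    Computer_Type  varchar(20) NOT NULL,
    Forecast_Month date        NOT NULL,  -- e.g. '2014-07-01' for July
    Num            int         NOT NULL,
    CONSTRAINT PK_Forecast PRIMARY KEY (Computer_Type, Forecast_Month)
);
Adding a new month then means inserting rows, not altering the table.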
I don't see why you need a function here. All the data you are describing can be stored in tables and then picked up with simple joins.
Even if the BOM is recursive you can make use of CTEs.
If you insist on using the function, you can use CROSS APPLY to apply the function to each row.
"The only function I've created thus far has been to create a two-column table filled with all components sold on the same sales order as a specific desktop model. Are you recommending that I post the results of that table to another table and continue to insert new rows for each instance of the function?"
I see, you don't need to use a function.
There are many different possibilities. One that you are probably very close to is converting the function to a stored procedure and using
INSERT INTO [TableName]
EXEC [SP_NAME]
Another is to convert the query into a subquery or CTE (see below) and join it with the others.
In general, avoid creating functions whenever you can, because they can do a lot of damage to database performance.
The BOMs are not recursive. I'm not familiar with CTEs.
Common Table Expressions (CTEs) are great tools when you have many concatenated subqueries; once you get used to them, they bring a lot of clarity (IMO) to complex queries.
CTEs also provide native support for recursion.
There are tons of articles on the net that will guide you; just google CTE examples or CTE tutorials.
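To make the table-based approach concrete, here is a rough CTE sketch of the calculation described in the question. It assumes a normalized forecast table (as sketched earlier) and a Model_Components table holding what P_BOM returns for each model; those two tables and their column names are assumptions, while tbl_Computers and its columns come from the question:
WITH ModelForecast AS (
    -- forecast per model = forecast per computer type x model mix
    SELECT c.Model_Num,
           f.Forecast_Month,
           f.Num * c.Mix_Percent AS Model_Forecast
    FROM dbo.tbl_Computers AS c
    JOIN dbo.Forecast      AS f ON f.Computer_Type = c.Computer_Type
)
SELECT b.Component,
       mf.Forecast_Month,
       SUM(mf.Model_Forecast * b.Comp_Percent) AS Component_Num
FROM ModelForecast AS mf
JOIN dbo.Model_Components AS b   -- hypothetical table of P_BOM output per model
  ON b.Model_Num = mf.Model_Num
GROUP BY b.Component, mf.Forecast_Month;
For July and CPUs this reproduces the 400 = (100 x .75 x 1) + (100 x .25 x 1) + (300 x .6 x 1) + (300 x .4 x 1) arithmetic from the question.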
I have no experience with CROSS APPLY.
CROSS APPLY: from page http://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx
The APPLY operator allows you to invoke a table-valued function for each row returned by an outer table expression of a query.
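Applied to this question, a minimal sketch (assuming P_BOM is a table-valued function taking the model number, and using the dbo schema) would be:
-- Expand every model in tbl_Computers into its components via the function.
SELECT c.Model_Num, c.Mix_Percent, b.Component, b.Comp_Percent
FROM dbo.tbl_Computers AS c
CROSS APPLY dbo.P_BOM(c.Model_Num) AS b;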
Last but not least:
"Any examples would be appreciated"
If you give me the partial scripts that you already have, I'll try to convert them into the kind of query I mean, to guide you.

Optimal MySQL temporary tables (memory tables) configuration?

First of all, I am new to optimizing MySQL. In my web application (around 400 queries per second) I have a query that uses a GROUP BY that I can't avoid, and it is the cause of the temporary tables being created. My configuration was:
max_heap_table_size = 16M
tmp_table_size = 32M
The result: the percentage of temp tables going to disk was around 12.5%.
Then I changed my settings, according to this post
max_heap_table_size = 128M
tmp_table_size = 128M
The result: the percentage of temp tables going to disk was around 18%.
The results were not what I expected, and I do not understand why.
Is it wrong to set tmp_table_size = max_heap_table_size?
Should I not increase the sizes?
Query
SELECT images, id
FROM classifieds_ads
WHERE parent_category = '1' AND published='1' AND outdated='0'
GROUP BY aux_order
ORDER BY date_lastmodified DESC
LIMIT 0, 100;
EXPLAIN
| id | select_type | table           | type | possible_keys                                                           | key                               | key_len | ref               | rows  | Extra                                        |
|----|-------------|-----------------|------|-------------------------------------------------------------------------|-----------------------------------|---------|-------------------|-------|----------------------------------------------|
| 1  | SIMPLE      | classifieds_ads | ref  | parent_category, published, combined_parent_oudated_published, oudated  | combined_parent_oudated_published | 7       | const,const,const | 67552 | Using where; Using temporary; Using filesort |
"Using temporary" in the EXPLAIN report does not tell us that the temp table was on disk. It only tells us that the query expects to create a temp table.
The temp table will stay in memory if its size is less than tmp_table_size and less than max_heap_table_size.
Max_heap_table_size is the largest a table can be in the MEMORY storage engine, whether that table is a temp table or non-temp table.
Tmp_table_size is the largest a table can be in memory when it is created automatically by a query. But this can't be larger than max_heap_table_size anyway. So there's no benefit to setting tmp_table_size greater than max_heap_table_size. It's common to set these two config variables to the same value.
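For example, to try 64MB for both (an illustrative value, not a recommendation; put the same settings in my.cnf so they survive a restart):
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;
SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;
-- only sessions opened after this change pick up the new values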
You can monitor how many temp tables were created, and how many on disk like this:
mysql> show global status like 'Created%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 20 |
| Created_tmp_files | 6 |
| Created_tmp_tables | 43 |
+-------------------------+-------+
Note in this example, 43 temp tables were created, but only 20 of those were on disk.
When you increase the limits of tmp_table_size and max_heap_table_size, you allow larger temp tables to exist in memory.
You may ask, how large do you need to make it? You don't necessarily need to make it large enough for every single temp table to fit in memory. You might want 95% of your temp tables to fit in memory and only the remaining rare tables go on disk. Those last 5% might be very large -- a lot larger than the amount of memory you want to use for that.
So my practice is to increase tmp_table_size and max_heap_table_size conservatively. Then watch the ratio of Created_tmp_disk_tables to Created_tmp_tables to see if I have met my goal of making 95% of them stay in memory (or whatever ratio I want to see).
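If you prefer to compute that ratio in SQL instead of eyeballing the counters, a quick sketch (assuming MySQL 5.7+, where these counters are exposed in performance_schema) is:
SELECT d.VARIABLE_VALUE / t.VARIABLE_VALUE AS disk_tmp_table_ratio
FROM performance_schema.global_status AS d
JOIN performance_schema.global_status AS t
  ON d.VARIABLE_NAME = 'Created_tmp_disk_tables'
 AND t.VARIABLE_NAME = 'Created_tmp_tables';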
Unfortunately, MySQL doesn't have a good way to tell you exactly how large the temp tables were. That will vary per query, so the status variables can't show that, they can only show you a count of how many times it has occurred. And EXPLAIN doesn't actually execute the query so it can't predict exactly how much data it will match.
An alternative is Percona Server, which is a distribution of MySQL with improvements. One of these is to log extra information in the slow-query log. Included in the extra fields is the size of any temp tables created by a given query.