First of all, I am new to optimizing MySQL. In my web application (around 400 queries per second) I have a query that uses a GROUP BY that I can't avoid, and that GROUP BY is what causes temporary tables to be created. My configuration was:
max_heap_table_size = 16M
tmp_table_size = 32M
The result: roughly 12.5% of temporary tables went to disk.
Then I changed my settings, according to this post
max_heap_table_size = 128M
tmp_table_size = 128M
The result: roughly 18% of temporary tables went to disk.
That was not the result I expected, and I don't understand why.
Is it wrong to set tmp_table_size = max_heap_table_size?
Shouldn't increasing the sizes have helped?
Query
SELECT images, id
FROM classifieds_ads
WHERE parent_category = '1' AND published='1' AND outdated='0'
GROUP BY aux_order
ORDER BY date_lastmodified DESC
LIMIT 0, 100;
EXPLAIN
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | classifieds_ads | ref | parent_category, published, combined_parent_oudated_published, oudated | combined_parent_oudated_published | 7 | const,const,const | 67552 | Using where; Using temporary; Using filesort |
"Using temporary" in the EXPLAIN report does not tell us that the temp table was on disk. It only tells us that the query expects to create a temp table.
The temp table will stay in memory if its size is less than tmp_table_size and less than max_heap_table_size.
Max_heap_table_size is the largest a table can be in the MEMORY storage engine, whether that table is a temp table or non-temp table.
Tmp_table_size is the largest a table can be in memory when it is created automatically by a query. But this can't be larger than max_heap_table_size anyway. So there's no benefit to setting tmp_table_size greater than max_heap_table_size. It's common to set these two config variables to the same value.
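For example, a minimal sketch of setting both variables to the same value (the 64M figure is purely illustrative; use SET GLOBAL or my.cnf to change them server-wide):
-- Raise both limits for the current session; 64M is an illustrative value.
SET SESSION tmp_table_size      = 64 * 1024 * 1024;
SET SESSION max_heap_table_size = 64 * 1024 * 1024;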
You can monitor how many temp tables were created, and how many on disk like this:
mysql> show global status like 'Created%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 20 |
| Created_tmp_files | 6 |
| Created_tmp_tables | 43 |
+-------------------------+-------+
Note in this example, 43 temp tables were created, but only 20 of those were on disk.
When you increase the limits of tmp_table_size and max_heap_table_size, you allow larger temp tables to exist in memory.
You may ask, how large do you need to make it? You don't necessarily need to make it large enough for every single temp table to fit in memory. You might want 95% of your temp tables to fit in memory and only the remaining rare tables go on disk. Those last 5% might be very large -- a lot larger than the amount of memory you want to use for that.
So my practice is to increase tmp_table_size and max_heap_table_size conservatively. Then watch the ratio of Created_tmp_disk_tables to Created_tmp_tables to see if I have met my goal of making 95% of them stay in memory (or whatever ratio I want to see).
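For example, a hedged sketch of computing that ratio directly (assumes MySQL 5.7 or later, where these counters are also exposed in performance_schema):
-- Percentage of temp tables that spilled to disk since server start.
SELECT 100.0
     * (SELECT variable_value FROM performance_schema.global_status
         WHERE variable_name = 'Created_tmp_disk_tables')
     / (SELECT variable_value FROM performance_schema.global_status
         WHERE variable_name = 'Created_tmp_tables') AS pct_tmp_on_disk;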
Unfortunately, MySQL doesn't have a good way to tell you exactly how large the temp tables were. That will vary per query, so the status variables can't show that, they can only show you a count of how many times it has occurred. And EXPLAIN doesn't actually execute the query so it can't predict exactly how much data it will match.
An alternative is Percona Server, which is a distribution of MySQL with improvements. One of these is to log extra information in the slow-query log. Included in the extra fields is the size of any temp tables created by a given query.
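A hedged sketch of enabling that extra logging (Percona Server only; the variable name and the 'full' value come from Percona's extended slow-log feature, so verify them against your version's documentation):
-- Percona Server: log extra per-query details, including temp table sizes,
-- in the slow query log. The 0.5s threshold is illustrative.
SET GLOBAL slow_query_log     = 1;
SET GLOBAL long_query_time    = 0.5;
SET GLOBAL log_slow_verbosity = 'full';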
Related
I recently watched an online course about Oracle SQL performance tuning. In the video, the lecturer constantly compares the COST value from Autotrace when comparing the performance of two queries.
But I've also read on other forums and websites that COST is a relative value specific to that query and should not be used as an absolute metric for evaluating performance. They suggest looking at things like consistent gets, physical reads, etc. instead.
So my interpretation is that it makes no sense to compare the COST value of completely different queries that are meant for different purposes, because the COST value is relative. But when comparing two versions of the same query, one of which has been slightly modified for "better performance", it is okay to compare the COST values. Is my interpretation accurate?
When is it okay to compare the COST value as opposed to some other metric?
What other metrics should we look at when evaluating/comparing query performance?
In general, I would be very wary about comparing the cost between two queries unless you have a very specific reason to believe that makes sense.
In general, people don't look at the 99.9% of queries that the optimizer produces a (nearly) optimal plan for. People look at queries where the optimizer has produced a decidedly sub-optimal plan. The optimizer will produce a sub-optimal plan for one of two basic reasons-- either it can't transform a query into a form it can optimize (in which case a human likely needs to rewrite the query) or the statistics it is using to make its estimates are incorrect so what it thinks is an optimal plan is not. (Of course, there are other reasons queries might be slow-- perhaps the optimizer produced an optimal plan but the optimal plan is doing a table scan because an index is missing for example.)
If I'm looking at a query that is slow and the query seems to be reasonably well-written and a reasonable set of indexes are available, statistics are the most likely source of problems. Since cost is based entirely on statistics, however, that means that the optimizer's cost estimates are incorrect. If they are incorrect, the cost is roughly equally likely to be incorrectly high or incorrectly low. If I look at the query plan for a query that I know needs to aggregate hundreds of thousands of rows to produce a report and I see that the optimizer has assigned it a single-digit cost, I know that somewhere along the line it is estimating that a step will return far too few rows. In order to tune that query, I'm going to need the cost to go up so that the optimizer's estimates accurately reflect reality. If I look at the query plan for a query I know should only need to scan a handful of rows and I see a cost in the tens of thousands, I know that the optimizer is estimating that some step will return far too many rows. In order to tune that query, I'm going to need the cost to go down so that the optimizer's estimates reflect reality.
If you use the gather_plan_statistics hint, you'll see the estimated and actual row counts in your query plan. If the optimizer's estimates are close to reality, the plan is likely to be pretty good and cost is likely to be reasonably accurate. If the optimizer's estimates are off, the plan is likely to be poor and the cost is likely to be wrong. Trying to use a cost metric to tune a query without first confirming that the cost is reasonably close to reality is seldom very productive.
Personally, I would ignore cost and focus on metrics that are likely to be stable over time and that are actually correlated with performance. My bias would be to focus on logical reads since most systems are I/O bound but you could use CPU time or elapsed time as well (elapsed time, though, tends not to be particularly stable because it depends on what happens to be in cache at the time the query is run). If you're looking at a plan, focus on the estimated vs. actual row counts not on the cost.
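For example, one hedged way to compare logical reads between two variants of a query, assuming both have run recently enough to still be in the shared pool (the LIKE filter is just an illustrative way to find your statements):
-- Logical reads (buffer gets) per execution for recently run statements.
SELECT sql_id,
       executions,
       buffer_gets,
       buffer_gets / NULLIF(executions, 0) AS gets_per_exec
FROM   v$sql
WHERE  lower(sql_text) LIKE '%my_query_marker%';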
The actual run time of a query is by far the most important metric for tuning queries. We can ignore cost and other metrics 99.9% of the time.
If the query is relatively small and fast, we can easily re-run it and find the actual run times with the GATHER_PLAN_STATISTICS hint:
-- Add a hint to the query and re-run it.
select /*+ gather_plan_statistics */ count(*) from all_objects;
-- Find the SQL_ID of your query.
select sql_id, sql_fulltext from gv$sql where lower(sql_text) like '%gather_plan_statistics%';
-- Plug in the SQL_ID to find an execution plan with actual numbers.
select * from table(dbms_xplan.display_cursor(sql_id => 'bbqup7krbyf61', format => 'ALLSTATS LAST'));
If the query was very slow, and we can't easily re-run it, generate a SQL Monitor report. This data is usually available for a few hours after the last execution.
-- Generate a SQL Monitor report.
select dbms_sqltune.report_sql_monitor(sql_id => 'bbqup7krbyf61') from dual;
There are whole books written about interpreting the results. The basics are you want to first examine the execution plan and focus on the operations with the largest "A-Time". If you want to understand where the query or optimizer went bad, compare the "E-Rows" with "A-Rows", since the estimated cardinality drives most of the optimizer decisions.
Example output:
SQL_ID bbqup7krbyf61, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from all_objects
Plan hash value: 3058112905
--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:03.58 | 121K| 622 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:03.58 | 121K| 622 | | | |
|* 2 | FILTER | | 1 | | 79451 |00:00:02.10 | 121K| 622 | | | |
|* 3 | HASH JOIN | | 1 | 85666 | 85668 |00:00:00.12 | 1479 | 2 | 2402K| 2402K| 1639K (0)|
| 4 | INDEX FULL SCAN | I_USER2 | 1 | 148 | 148 |00:00:00.01 | 1 | 0 | | | |
...
As with most things in Engineering, it really comes down to why / what you are comparing and evaluating for.
COST is a general time-based estimate that Oracle uses as the ranking metric in its internal optimiser. This answer explains that selection process pretty well.
In general, COST as a metric is a good way to compare the expected computation time of two different queries, since it measures the estimated time cost of the query expressed as a number of block reads. So, if you are comparing the performance of two versions of the same query, one optimised for time, then COST is a good metric to use.
However, if your query or system is bottlenecked or constrained by something other than time (e.g. memory efficiency), then COST will be a poor metric to optimise against. In those cases, you should pick a metric that is relevant to your end goal.
I have an optimisation problem.
I have a table containing about 15MB of JSON stored as rows of VARCHAR(65535). Each JSON string is an array of arbitrary size.
95% contains 16 or fewer elements
the longest (to date) contains 67 elements
the hard limit is 512 elements (before 64kB isn't big enough)
The task is simple, pivot each array such that each element has its own row.
id | json
----+---------------------------------------------
01 | [{"something":"here"}, {"fu":"bar"}]
=>
id | element_id | json
----+------------+---------------------------------
01 | 1 | {"something":"here"}
01 | 2 | {"fu":"bar"}
Without having any kind of table valued functions (user defined or otherwise), I've resorted to pivoting via joining against a numbers table.
SELECT
src.id,
pvt.element_id,
json_extract_array_element_text(
src.json,
pvt.element_id
)
AS json
FROM
source_table AS src
INNER JOIN
numbers_table AS pvt(element_id)
ON pvt.element_id < json_array_length(src.json)
The numbers table has 512 rows in it (0..511), and the results are correct.
The elapsed time is horrendous. And it's not to do with distribution or sort order or encoding. It's to do with (I believe) Redshift's materialisation.
The working memory needed to process 15MB of JSON text is 7.5GB.
15MB * 512 rows in numbers = 7.5GB
If I put just 128 rows in numbers then the working memory needed reduces by 4x and the elapsed time similarly reduces (not 4x, the real query does other work, it's still writing the same amount of results data, etc, etc).
So, I wonder, what about adding this?
WHERE
pvt.element_id < (SELECT MAX(json_array_length(src.json)) FROM source_table)
No change to the working memory needed, the elapsed time goes up slightly (effectively a WHERE clause that has a cost but no benefit).
I've tried making a CTE to create the list of 512 numbers, that didn't help. I've tried making a CTE to create the list of numbers, with a WHERE clause to limit the size, that didn't help (effectively Redshift appears to have materialised using the 512 rows and THEN applied the WHERE clause).
My current effort is to create a temporary table for the numbers, limited by the WHERE clause. In my sample set this means that I get a table with 67 rows to join on, instead of 512 rows.
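A hedged sketch of what that looks like (untested; table and column names follow the query above):
-- Build a numbers table no larger than the longest array actually present.
CREATE TEMP TABLE tmp_numbers AS
SELECT pvt.element_id
FROM   numbers_table AS pvt(element_id)
WHERE  pvt.element_id < (SELECT MAX(json_array_length(json)) FROM source_table);
-- Then join against tmp_numbers instead of the full 512-row numbers table.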
That's still not great, as that ONE row with 67 elements dominates the elapsed time (every row, no matter how many elements, gets duplicated 67 times before the ON pvt.element_id < json_array_length(src.json) gets applied).
My next effort will be to work on it in two steps.
As above, but with a table of only 16 rows, and only for rows with 16 or fewer elements
As above, with the dynamically mixed numbers table, and only for rows with more than 16 elements
Question: Does anyone have any better ideas?
Please consider declaring the JSON as an external table. You can then use Redshift Spectrum's nested data syntax to access these values as if they were rows.
There is a quick tutorial here: "Tutorial: Querying Nested Data with Amazon Redshift Spectrum"
Simple example:
{ "id": 1
,"name": { "given":"John", "family":"Smith" }
,"orders": [ {"price": 100.50, "quantity": 9 }
,{"price": 99.12, "quantity": 2 }
]
}
CREATE EXTERNAL TABLE spectrum.nested_tutorial
(id int
,name struct<given:varchar(20), family:varchar(20)>
,orders array<struct<price:double precision, quantity:double precision>>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-files/temp/nested_data/nested_tutorial/'
;
SELECT c.id
,c.name.given
,c.name.family
,o.price
,o.quantity
FROM spectrum.nested_tutorial c
LEFT JOIN c.orders o ON true
;
id | given | family | price | quantity
----+-------+--------+-------+----------
1 | John | Smith | 100.5 | 9
1 | John | Smith | 99.12 | 2
Neither the data format, nor the task you wish to do, is ideal for Amazon Redshift.
Amazon Redshift is excellent as a data warehouse, with the ability to do queries against billions of rows. However, storing data as JSON is sub-optimal because Redshift cannot use all of its abilities (eg Distribution Keys, Sort Keys, Zone Maps, Parallel processing) while processing fields stored in JSON.
The efficiency of your Redshift cluster would be much higher if the data were stored as:
id | element_id | key | value
----+------------+---------------------
01 | 1 | something | here
01 | 2 | fu | bar
As to how best to convert the existing JSON data into separate rows, I would frankly recommend doing this outside of Redshift, then loading it into tables via the COPY command. A small Python script would be more efficient at converting the data than trying strange JOINs on a numbers table in Redshift.
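For instance, a hedged sketch of the load step once the data has been flattened externally (the table name, S3 path, and IAM role below are placeholders):
-- Load the pre-flattened rows from S3 into a normal Redshift table.
COPY flattened_ads (id, element_id, json)
FROM 's3://my-bucket/flattened/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS CSV
GZIP;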
If you avoid parsing and interpreting the JSON as JSON and instead work with it as plain text, this may work faster. If you're sure about the structure of your JSON values (which I guess you are, since the original query does not produce a JSON parsing error) you might try just using the split_part function instead of json_extract_array_element_text.
If your elements don't contain commas you can use:
split_part(src.json,',',pvt.element_id)
If your elements contain commas, you might use:
split_part(src.json,'},{',pvt.element_id)
Also, the ON pvt.element_id < json_array_length(src.json) part of the join condition is still there, so to avoid JSON parsing completely you might try a cross join and then keep only the non-null (non-empty) results.
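A hedged sketch of that idea (untested; split_part indexes are 1-based, and splitting on '},{' leaves stray brackets and braces you would still need to trim):
-- Split the JSON text instead of parsing it, cross join against the
-- numbers table, and keep only the non-empty pieces.
SELECT t.id, t.element_id, t.element_text
FROM (
    SELECT src.id,
           pvt.element_id,
           split_part(src.json, '},{', pvt.element_id + 1) AS element_text
    FROM   source_table AS src
    CROSS JOIN numbers_table AS pvt(element_id)
) AS t
WHERE t.element_text <> '';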
I have a problem where I need to do a COUNT(COLUMN_NAME) and SUM(COLUMN_NAME) on a few of the tables. The issue is that these operations take forever on SQL Server.
We have over 2 billion records for which I need to perform these operations.
In Oracle, we can force a parallel execution for a single query/session by using a PARALLEL hint. For example for a simple SELECT COUNT, we can do
SELECT /*+ PARALLEL */ COUNT(1)
FROM USER.TABLE_NAME;
I searched for something similar in SQL Server and couldn't come up with anything concrete, such as a table hint to force parallel execution. I believe SQL Server decides for itself whether to use a parallel or a serial plan depending on the query cost.
The same query in Oracle with a parallel hint takes 2-3 minutes, whereas on SQL Server it takes about an hour and a half.
I was reading the article Forcing a Parallel Query Execution Plan. It looks like you could, for testing purposes, force a parallel execution. The author says in the conclusion:
Conclusion
Even experts with decades of SQL Server experience and detailed
internal knowledge will want to be careful with this trace flag. I
cannot recommend you use it directly in production unless advised by
Microsoft, but you might like to use it on a test system as an extreme
last resort, perhaps to generate a plan guide or USE PLAN hint for use
in production (after careful review).
This is an arguably lower risk strategy, but bear in mind that the
parallel plans produced under this trace flag are not guaranteed to be
ones the optimizer would normally consider. If you can improve the
quality of information provided to the optimizer instead to get a
parallel plan, go that way :)
The article is referring to a trace flag:
There’s always a Trace Flag
In the meantime, there is a workaround. It’s not perfect (and most
certainly a choice of very last resort) but there is an undocumented
(and unsupported) trace flag that effectively lowers the cost
threshold to zero for a particular query
So, as far as I understand the article, you could do something like this:
SELECT
COUNT(1)
FROM
USER.TABLE_NAME
OPTION (RECOMPILE, QUERYTRACEON 8649)
In Oracle, if you do a SELECT COUNT() on a column, the query can be satisfied from an index. In the plan below you can see "INDEX FAST FULL SCAN", which makes the query run faster. You can try the same in SQL Server: does your table have an index? Try creating an index on the column you are counting. In Oracle's case it can even use an index on some other column; the SQL below does count(DN) but uses the index of a different column.
SQL> set linesize 500
SQL> set autotrace traceonly
SQL> select count(DN) from My_TOPOLOGY;
Execution Plan
----------------------------------------------------------
Plan hash value: 2512292876
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 164 (64)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| FM_I2_TOPOLOGY | 90850 | 164 (64)| 00:00:01 |
--------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
180 consistent gets
177 physical reads
0 redo size
529 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
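A hedged SQL Server counterpart of that idea (the table, column, and index names are illustrative):
-- A narrow nonclustered index lets COUNT(COLUMN_NAME) scan the index
-- instead of the whole table.
CREATE NONCLUSTERED INDEX IX_TABLE_NAME_COLUMN_NAME
    ON dbo.TABLE_NAME (COLUMN_NAME);

SELECT COUNT(COLUMN_NAME)
FROM dbo.TABLE_NAME;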
In Oracle, is there any way to determine how long a SQL query will take to fetch all of its records, and how large the result will be, without actually executing it and waiting for the entire result?
I am repeatedly asked to download and provide data to users using a normal Oracle SQL SELECT (not Data Pump, import, etc.). Sometimes the results run to millions of rows.
The actual run time will not be known unless you run the query, but you can try to estimate it.
First, you can do EXPLAIN PLAN only; this will NOT run the query. Based on your current statistics it will show you, more or less, how it will be executed.
This will not include the actual time and effort needed to read the data from the data blocks. Consider:
do you have a large block size?
is the schema normalized/de-normalized for query/reporting?
how large is a row, and does it fit in the same block so that only one fetch is needed?
the number of rows you are expecting
the amount of data multiplied by your network latency
Based on this you can try to estimate the time.
This requires good statistics, explain plan for ..., adjusting sys.aux_stats$, and then adjusting your expectations.
Good statistics
The explain plan estimates are based on optimizer statistics. Make sure that tables and indexes have up-to-date statistics. On 11g this usually means sticking with the default settings and tasks, and only manually gathering statistics after large data loads.
Explain plan for ...
Use a statement like this to create and store the explain plan for any SQL statement. This even works for creating indexes and tables.
explain plan set statement_id = 'SOME_UNIQUE_STRING' for
select * from dba_tables cross join dba_tables;
This is usually the best way to visualize an explain plan:
select * from table(dbms_xplan.display);
Plan hash value: 2788227900
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 12M| 5452M| 00:00:19 |
|* 1 | HASH JOIN RIGHT OUTER | | 12M| 5452M| 00:00:19 |
| 2 | TABLE ACCESS FULL | SEG$ | 7116 | 319K| 00:00:01 |
...
The raw data is stored in PLAN_TABLE. The first row of the plan usually sums up the estimates for the other steps:
select cardinality, bytes, time
from plan_table
where statement_id = 'SOME_UNIQUE_STRING'
and id = 0;
CARDINALITY BYTES TIME
12934699 5717136958 19
Adjust sys.aux_stats$
The time estimate is based on system statistics stored in sys.aux_stats$. These are numbers for metrics like CPU speed, single-block I/O read time, etc. For example, on my system:
select * from sys.aux_stats$ order by sname
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO DSTART 09-11-2014 11:18
SYSSTATS_INFO DSTOP 09-11-2014 11:18
SYSSTATS_INFO FLAGS 1
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN CPUSPEEDNW 3201.10192837466
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN SLAVETHR
SYSSTATS_MAIN SREADTIM
The numbers can be automatically gathered by dbms_stats.gather_system_stats. They can also be manually modified. It's a SYS table, but relatively safe to modify. Create some sample queries, compare the estimated time with the actual time, and adjust the numbers until they match.
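For example, a hedged sketch of gathering or overriding system statistics (the SREADTIM value is illustrative; test on a non-production system first):
-- Gather "no-workload" system statistics, or override a single metric.
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
EXEC DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 8);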
Discover you probably wasted a lot of time
Predicting run time is theoretically impossible to get right in all cases, and in practice it is horribly difficult to forecast for non-trivial queries. Jonathan Lewis wrote a whole book about those predictions, and that book only covers the "basics".
Complex explain plans are typically "good enough" if the estimates are off by one or two orders of magnitude. But that kind of difference is typically not good enough to show to a user, or use for making any important decisions.
I'm building an Amazon Redshift data warehouse, and experiencing unexpected performance impacts based on the defined size of the VARCHAR column. Details are as follows. Three of my columns are shown from pg_table_def:
schemaname | tablename | column | type | encoding | distkey | sortkey | notnull
------------+-----------+-----------------+-----------------------------+-----------+---------+---------+---------
public | logs | log_timestamp | timestamp without time zone | delta32k | f | 1 | t
public | logs | event | character varying(256) | lzo | f | 0 | f
public | logs | message | character varying(65535) | lzo | f | 0 | f
I've recently run Vacuum and Analyze, I have about 100 million rows in the database, and I'm seeing very different performance depending on which columns I include.
Query 1:
For instance, the following query takes about 3 seconds:
select log_timestamp from logs order by log_timestamp desc limit 5;
Query 2:
A similar query asking for more data runs in 8 seconds:
select log_timestamp, event from logs order by log_timestamp desc limit 5;
Query 3:
However, this query, very similar to the previous, takes 8 minutes to run!
select log_timestamp, message from logs order by log_timestamp desc limit 5;
Query 4:
Finally, this query, identical to the slow one but with explicit range limits, is very fast (~3s):
select log_timestamp, message from logs where log_timestamp > '2014-06-18' order by log_timestamp desc limit 5;
The message column is defined to be able to hold larger messages, but in practice it doesn't hold much data: the average length of the message field is 16 characters (std_dev 10). The average length of the event field is 5 characters (std_dev 2). The only distinction I can really see is the max length of the VARCHAR field, but I wouldn't think that should have an order-of-magnitude effect on the time a simple query takes to return!
Any insight would be appreciated. While this isn't the typical use case for this tool (we'll be aggregating far more than we'll be inspecting individual logs), I'd like to understand any subtle or not-so-subtle effects of my table design.
Thanks!
Dave
Redshift is a "true columnar" database and only reads columns that are specified in your query. So, when you specify 2 small columns, only those 2 columns have to be read at all. However when you add in the 3rd large column then the work that Redshift has to do dramatically increases.
This is very different from a "row store" database (SQL Server, MySQL, Postgres, etc.) where the entire row is stored together. In a row store adding/removing query columns does not make much difference in response time because the database has to read the whole row anyway.
Finally, the reason your last query is very fast is that you've told Redshift it can skip a large portion of the data. Redshift stores each column in "blocks", and these blocks are sorted according to the sort key you specified. Redshift keeps a record of the min/max of each block and can skip over any blocks that could not contain data to be returned.
The LIMIT clause doesn't reduce the work that has to be done, because you've told Redshift that it must first order everything by log_timestamp descending. The problem is that your ORDER BY … DESC has to be executed over the entire potential result set before any data can be returned or discarded. When the columns are small that's fast; when they're big, it's slow.
Out of curiosity, how long does this take?
select log_timestamp, message
from logs l join
(select min(log_timestamp) as log_timestamp
from (select log_timestamp
from logs
order by log_timestamp desc
limit 5
) lt
) lt
on l.log_timestamp >= lt.log_timestamp;