Oracle slow query - sql

When I query my table with a plain select * from mytable (in PL/SQL Developer or SQL Navigator), the query sometimes returns results quickly and sometimes takes 25-26 seconds. Of course, this doesn't affect the performance of business transactions.
I traced both executions, and the results are below:
Fast Time:
select *
from
mytable
call     count  cpu   elapsed  disk  query   current  rows
-------  -----  ----  -------  ----  ------  -------  ----
Parse    1      0.00  0.00     0     0       0        0
Execute  1      0.00  0.00     0     0       0        0
Fetch    1      0.64  1.14     0     169184  0        100
-------  -----  ----  -------  ----  ------  -------  ----
total    3      0.64  1.14     0     169184  0        100
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Elapsed times include waiting on following events:
Event waited on                       Times Waited  Max. Wait  Total Waited
------------------------------------  ------------  ---------  ------------
SQL*Net message to client             2             0.00       0.00
SQL*Net more data to client           40            0.00       0.00
SQL*Net message from client           2             0.00       0.00
********************************************************************************
Slow Time:
select *
from
mytable
call     count  cpu   elapsed  disk    query   current  rows
-------  -----  ----  -------  ------  ------  -------  ----
Parse    1      0.00  0.00     0       0       0        0
Execute  1      0.00  0.00     0       0       0        0
Fetch    1      2.91  23.74    169076  169184  0        100
-------  -----  ----  -------  ------  ------  -------  ----
total    3      2.91  23.74    169076  169184  0        100
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Elapsed times include waiting on following events:
Event waited on                       Times Waited  Max. Wait  Total Waited
------------------------------------  ------------  ---------  ------------
SQL*Net message to client             2             0.00       0.00
SQL*Net more data to client           40            0.00       0.00
SQL*Net message from client           2             0.00       0.00
db file scattered read                10686         0.29       20.20
db file sequential read               6             0.00       0.01
latch: object queue header operation  1             0.00       0.00
********************************************************************************

The first time, Oracle finds all the required blocks in the buffer cache (see the query column), and memory I/O is faster than disk I/O.
query
----------
0
0
169184
----------
QUERY
Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries.
The second time, the required blocks are no longer in the buffer cache; they may have been flushed out by aging or by space needed for other queries. So the Oracle process has to read all the blocks from disk (see the disk column), which is slower than memory I/O. And, of course, the second time the query spent most of its time on the db file scattered read event, i.e. the multiblock reads of a full table scan (expected here, since select * with no predicate cannot use an index).
disk
----------
0
0
169076
-------
DISK
Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.
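For reference, a minimal sketch of how such a trace can be captured and formatted with tkprof (this assumes ALTER SESSION privileges; the trace file name and directory depend on your instance):
alter session set tracefile_identifier = 'mytable_test';
alter session set events '10046 trace name context forever, level 8'; -- SQL trace including wait events

select * from mytable;

alter session set events '10046 trace name context off';
-- then, on the database server:
-- tkprof <your_trace_file>.trc mytable_report.txt sys=no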

Related

SQL - Some result when no row

I have a table (DATA) on which I would like to perform some simple math queries:
Security  Date        Price
--------  ----------  -----
1         2017-08-31  130
2         2017-08-31  150
1         2017-07-31  115
2         2017-07-31  140
1         2017-06-30  100
2         2017-06-30  130
1         2017-05-31  90
1         2017-04-30  85
1         2017-03-31  80
SELECT x.Security, x.Price/y.Price-1 'MONTHLY RETURN', x.Price/z.Price-1 'QUARTERLY RETURN'
FROM DATA AS x
JOIN DATA AS y
ON x.Security = y.Security
JOIN DATA AS z
ON x.Security = z.Security
WHERE x.Security IN (1,2)
AND x.Date = '2017-08-31'
AND y.Date = '2017-07-31'
AND z.Date = '2017-05-31'
I would like to get the full table even when one of the joined rows is missing, e.g. (150 / no row) - 1:
Security  MONTHLY RETURN  QUARTERLY RETURN
--------  --------------  ----------------
1         (130/115)-1     (130/90)-1
2         (150/140)-1     NULL or any other data
Instead, SQL returns results only for Security 1, because there is no row for Security 2 on '2017-05-31'. That is not acceptable, since I do have data for Security 2 on '2017-07-31' and would like to see it.
Result of my query:
Security  MONTHLY RETURN  QUARTERLY RETURN
--------  --------------  ----------------
1         (130/115)-1     (130/90)-1
Is there any way to prepare a table in which I will have all the data, with, say, NULL where there is no result? Please note that normally I will have 20 or 30 securities.
I will be grateful for your assistance on this one.
This is my second month with SQL, and it is way better than at the beginning, but I still have some issues with the logic. Do you know any good literature to start with?
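One common fix, sketched below with the table and column names from the question, is to make the joins to the earlier dates LEFT JOINs, so a missing month yields NULL instead of eliminating the whole row:
SELECT x.Security,
       x.Price * 1.0 / y.Price - 1 AS 'MONTHLY RETURN',   -- * 1.0 guards against integer division
       x.Price * 1.0 / z.Price - 1 AS 'QUARTERLY RETURN'
FROM DATA AS x
LEFT JOIN DATA AS y
       ON y.Security = x.Security AND y.Date = '2017-07-31'
LEFT JOIN DATA AS z
       ON z.Security = x.Security AND z.Date = '2017-05-31'
WHERE x.Security IN (1, 2)
  AND x.Date = '2017-08-31'
For Security 2 the z join finds no row, so z.Price is NULL and the quarterly return comes back as NULL instead of the whole row being dropped.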

Print Text Representation of Tensorflow (tf-slim) Model

Is there any way to print a textual representation of a tf-slim model along the lines of what nolearn offers:
## Layer information
name            size        total   cap.Y   cap.X   cov.Y   cov.X   filter Y  filter X  field Y  field X
--------------  ----------  ------  ------  ------  ------  ------  --------  --------  -------  -------
input           1x144x192   27648   100.00  100.00  100.00  100.00  144       192       144      192
Conv2DLayer     12x144x192  331776  100.00  100.00  2.08    1.56    3         3         3        3
Conv2DLayer     12x144x192  331776  60.00   60.00   3.47    2.60    3         3         5        5
MaxPool2DLayer  12x72x96    82944   60.00   60.00   3.47    2.60    3         3         5        5
...
DenseLayer      7           7       100.00  100.00  100.00  100.00  144       192       144      192
EDIT:
I can use something like this to print the info for a given layer:
print("%s: %s" % (layer.name, layer.get_shape()))
What I would need to complete the table is some way to crawl or walk up the "layer stack", i.e. to get from a given layer to its incoming / input layer(s).
It is not the textual representation you seek, but maybe TensorBoard will suffice? You can visualize the whole computation graph and monitor your model using this tool.
https://www.tensorflow.org/how_tos/summaries_and_tensorboard/

Interpreting Toad's Query Times for more than 500 rows

When I execute a query in Toad that returns more than 500 rows, does the number of milliseconds on the bottom left represent how long it took to execute the entire query, or to fetch 500 rows?
For example, the above query returns 7000 rows. Did the entire query take 1000ms, or just the act of fetching 500 rows?
It appears that by default Toad only fetches the first 500 records and stops.
This can be confirmed by tracing the TOAD session and creating a tkprof report of the resulting trace file.
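A minimal sketch of how another session can be traced (the SID and serial# below are hypothetical values you would look up in V$SESSION for the Toad connection; DBMS_MONITOR requires elevated privileges):
-- find the Toad session first:
-- select sid, serial# from v$session where program like 'TOAD%';
begin
  dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45678, waits => true);
end;
/
-- ... run the query in Toad ...
begin
  dbms_monitor.session_trace_disable(session_id => 123, serial_num => 45678);
end;
/
-- then format the resulting trace file:
-- tkprof <trace_file>.trc toad_report.txt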
In my test case, I created a table with one million rows:
create table a_million_rows as
select rownum as x
from dual
connect by level <= 1000000;
Then, I ran the select * from a_million_rows statement in Toad.
According to the tkprof report, only 501 rows were retrieved from the database:
select *
from
a_million_rows
call     count  cpu   elapsed  disk  query  current  rows
-------  -----  ----  -------  ----  -----  -------  ----
Parse    1      0.00  0.00     0     0      0        0
Execute  1      0.00  0.00     0     0      0        0
Fetch    1      0.00  0.00     5     4      0        501
-------  -----  ----  -------  ----  -----  -------  ----
total    3      0.00  0.00     5     4      0        501
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 93
Rows  Row Source Operation
----  ---------------------------------------------------
501   TABLE ACCESS FULL A_MILLION_ROWS (cr=4 pr=5 pw=0 time=0 us cost=35 size=13951951 card=1073227)
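If you want the timing to cover the whole result set rather than just the first fetch, one workaround (a sketch, using the test table above) is to force every row to be read, for example with an aggregate:
-- reads all one million rows before returning a single row to the client
select count(*) from a_million_rows;
Alternatively, scrolling to the bottom of Toad's data grid forces the remaining rows to be fetched.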

query a table so that results are combined and returned in rows

I need to pull data for a report so that data currently stored in rows is returned as columns:
empID  planID  coverage
-----  ------  --------
15     1       100
15     11      200
15     12      NULL
15     13      500
20     1       100
20     11      250
20     12      400
20     13      NULL
Becomes:
emp  Basic  Supplemental  Spouse  Dependent
---  -----  ------------  ------  ---------
15   100    200           NULL    500
20   100    250           400     NULL
I've tried various JOINs, and in the best-case scenario I get 4x the number of results, since the data repeats for each row in the source table.
SELECT DISTINCT
CASE benefitSelected.planID WHEN 1 THEN benefitSelected.coverageLev END AS Basic,
CASE benefitSelected_1.planID WHEN 11 THEN benefitSelected_1.coverageLev END AS Supplemental,
CASE benefitSelected_2.planID WHEN 12 THEN benefitSelected_2.coverageLev END AS Spouse,
CASE benefitSelected_3.planID WHEN 13 THEN benefitSelected_3.coverageLev END AS Dependent
FROM benefitSelected AS benefitSelected_3
FULL OUTER JOIN benefitSelected AS benefitSelected_2 ON benefitSelected_3.empID = benefitSelected_2.empID
FULL OUTER JOIN benefitSelected AS benefitSelected_1 ON benefitSelected_2.empID = benefitSelected_1.empID
FULL OUTER JOIN benefitSelected
RIGHT OUTER JOIN employee ON benefitSelected.empID = employee.empID
ON benefitSelected_1.empID = benefitSelected.empID
What am I doing wrong and how do I get the results I want?
Thank you for your kind attention!
EDIT:
Results from PIVOT query below
empID  Basic     Supplemental  Spouse  Dependent
-----  --------  ------------  ------  ---------
1      10000.00  NULL          NULL    NULL
1      NULL      0.00          NULL    NULL
1      NULL      NULL          0.00    NULL
1      NULL      NULL          NULL    0.00
8      10000.00  NULL          NULL    NULL
8      NULL      100000.00     NULL    NULL
8      NULL      NULL          0.00    NULL
8      NULL      NULL          NULL    10000.00
Should be:
empID  Basic     Supplemental  Spouse  Dependent
-----  --------  ------------  ------  ---------
1      10000.00  0.00          0.00    0.00
8      10000.00  100000.00     0.00    10000.00
SELECT *
FROM (SELECT empID,
             coverage,
             CASE planID
                 WHEN 1  THEN 'Basic'
                 WHEN 11 THEN 'Supplemental'
                 WHEN 12 THEN 'Spouse'
                 WHEN 13 THEN 'Dependent'
             END AS PlanDesc
      FROM benefitSelected) AS A
PIVOT (SUM([coverage]) FOR [PlanDesc] IN ([Basic], [Supplemental], [Spouse], [Dependent])) p
Query returns:
empID  Basic  Supplemental  Spouse  Dependent
-----  -----  ------------  ------  ---------
15     100    200           NULL    500
20     100    250           400     NULL
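An alternative sketch that avoids PIVOT altogether is conditional aggregation (same table and column names as in the question); grouping on empID also collapses the one-row-per-plan output shown in the EDIT above:
SELECT empID,
       MAX(CASE WHEN planID = 1  THEN coverage END) AS Basic,
       MAX(CASE WHEN planID = 11 THEN coverage END) AS Supplemental,
       MAX(CASE WHEN planID = 12 THEN coverage END) AS Spouse,
       MAX(CASE WHEN planID = 13 THEN coverage END) AS Dependent
FROM benefitSelected
GROUP BY empID
The split rows in the EDIT are typically caused by an extra column (such as a row identifier) leaking into the PIVOT subquery, where each distinct value forms its own group.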

Statistics and Cardinality Estimation - Why am I seeing this result?

I came across this little issue when trying to solve a more complex problem, and I have gotten to the end of my rope trying to figure out the optimizer. So, let's say I have a table called MyTable that can be defined like this:
CREATE TABLE MyTable (
    GroupClosuresID int identity(1,1) not null,
    SiteID int not null,
    DeleteDateTime datetime null,
    CONSTRAINT PK_MyTable PRIMARY KEY (GroupClosuresID, SiteID))
This table has 286,685 rows in it and running DBCC SHOW_STATISTICS('MyTable','PK_MyTable') will yield:
Name        Updated             Rows    Rows Sampled  Steps  Density   Average key length  String Index  Filter Expression  Unfiltered Rows
----------  ------------------  ------  ------------  -----  --------  ------------------  ------------  -----------------  ---------------
PK_MyTable  Aug 10 2011 1:00PM  286685  286685        18     0.931986  8                   NO            NULL               286685
(1 row(s) affected)
All density   Average Length  Columns
------------  --------------  -----------------------
3.743145E-06  4               GroupClosuresID
3.488149E-06  8               GroupClosuresID, SiteID
(2 row(s) affected)
RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
1             0           8        0                    1
129           1002        7        127                  7.889764
242           826         6        112                  7.375
531           2010        6        288                  6.979167
717           1108        5        185                  5.989189
889           822         4        171                  4.807017
1401          2044        4        511                  4
1763          1101        3        361                  3.049861
14207         24780       1        12443                1.991481
81759         67071       1        67071                1
114457        31743       1        31743                1
117209        2047        1        2047                 1
179109        61439       1        61439                1
181169        1535        1        1535                 1
229410        47615       1        47615                1
235846        2047        1        2047                 1
275456        39442       1        39442                1
275457        0           1        0                    1
Now I run a query on this table with no additional indexes or statistics having been created.
SELECT GroupClosuresID FROM MyTable WHERE SiteID = 1397 AND DeleteDateTime IS NULL
Two new statistics objects now appear, one for the SiteID column and the other for the DeleteDateTime column. Here they are, respectively (note: some non-relevant information has been excluded):
Name                       Updated             Rows    Rows Sampled  Steps  Density     Average key length  String Index  Filter Expression  Unfiltered Rows
-------------------------  ------------------  ------  ------------  -----  ----------  ------------------  ------------  -----------------  ---------------
_WA_Sys_00000002_7B0C223C  Aug 10 2011 1:15PM  286685  216605        200    0.03384706  4                   NO            NULL               286685

(1 row(s) affected)

All density   Average Length  Columns
------------  --------------  -------
0.0007380074  4               SiteID
(1 row(s) affected)
RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS   DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  --------  -------------------  --------------
.
.
.
1397          59.42782    16005.02  5                    11.83174
.
.
.
Name                       Updated             Rows    Rows Sampled  Steps  Density    Average key length  String Index  Filter Expression  Unfiltered Rows
-------------------------  ------------------  ------  ------------  -----  ---------  ------------------  ------------  -----------------  ---------------
_WA_Sys_00000006_7B0C223C  Aug 10 2011 1:15PM  286685  216605        201    0.7447883  0.8335911           NO            NULL               286685

(1 row(s) affected)

All density   Average Length  Columns
------------  --------------  --------------
0.0001065871  0.8335911       DeleteDateTime
(1 row(s) affected)
RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
NULL          0           255827   0                    1
.
.
.
The execution plan generated for the query I ran above gives me no surprises. It consists of a simple Clustered Index Scan with 14282.3 estimated rows and 15676 actual rows. From what I've learned about statistics and cost estimation, using the two histograms above we can multiply the selectivity of SiteID (16005.02 / 286685) times the selectivity of DeleteDateTime (255827 / 286685) to get a composite selectivity of 0.0498187307480119. Multiplying that times the total number of rows (286685) gives us the exact same thing the optimizer did: 14282.3.
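As a sanity check, the same arithmetic expressed in T-SQL (the constants are copied straight from the histograms above):
SELECT (16005.02 / 286685.0)   -- selectivity of SiteID = 1397
     * (255827.0 / 286685.0)   -- selectivity of DeleteDateTime IS NULL
     * 286685.0                -- total rows => approx. 14282.3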
But here is where I get confused. I create an index with CREATE INDEX IX_MyTable ON MyTable (SiteID, DeleteDateTime), which creates its own statistics object:
Name        Updated             Rows    Rows Sampled  Steps  Density     Average key length  String Index  Filter Expression  Unfiltered Rows
----------  ------------------  ------  ------------  -----  ----------  ------------------  ------------  -----------------  ---------------
IX_MyTable  Aug 10 2011 1:41PM  286685  286685        200    0.02749305  8.822645            NO            NULL               286685

(1 row(s) affected)

All density   Average Length  Columns
------------  --------------  ---------------------------------------
0.0007107321  4               SiteID
7.42611E-05   4.822645        SiteID, DeleteDateTime
3.488149E-06  8.822645        SiteID, DeleteDateTime, GroupClosuresID
RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
.
.
.
1397          504         15686    12                   42
.
.
.
When I run the same query as before (SELECT GroupClosuresID FROM MyTable WHERE SiteID = 1397 AND DeleteDateTime IS NULL) I still get 15676 rows returned, but my estimated row count is now 181.82.
I've tried manipulating numbers to figure out where this estimate comes from, but I just can't get it. I have to assume it is related to the density values for IX_MyTable.
Any help would be greatly appreciated. Thanks!!
EDIT: Here is the execution plan for that last query execution.
This one took some digging!
It's a product of:
the NULL density in your date field (from your first set of stats: 255827 / 286685 = 0.892363)
...times the density of the first field (SiteID) in your new index: 0.0007107321
The formula is:
0.0007107321 * 286685 = 203.7562
-- est. rows for your SiteID value, assuming an even distribution of values
255827 / 286685 = 0.892363
-- probability of a NULL across all rows
203.7562 * 0.892363 = 181.8245
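The same product as a single T-SQL expression (constants copied from the IX_MyTable stats above):
SELECT 0.0007107321            -- 'All density' of SiteID in IX_MyTable
     * 286685.0                -- table cardinality => approx. 203.76 rows per SiteID value
     * (255827.0 / 286685.0)   -- probability of DeleteDateTime IS NULL => approx. 181.82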
I'm guessing that since the row count in this instance doesn't actually affect anything, the optimizer took the easiest route and just multiplied the probabilities together.
Just wanted to write about it, but JNK was first.
Basically, the hash function now computes its result over two columns, and the hash function result for SiteID = 1397 AND DeleteDateTime IS NULL matches approximately 181 rows.
http://en.wikipedia.org/wiki/Hash_table#Hash_function