Ignite does not have persistence enabled. The SQL is as follows:
SELECT CONCAT(MENAME,'<=#->', NATIVEMEDN,'<=#->', ORIGINSYSTEMID), COUNT(1) FROM
T_CURRENT_ALARM WHERE (specialAlarmStatus =0 and (invalidated =0) and specialAlarmStatus in
(0) and merged in (0,1) ) AND (changeFlag !=3 or changeFlag is null) GROUP BY
CONCAT(meName,'<=#->', nativeMeDn,'<=#->', originSystemId) order by count(1) DESC limit 10
This SQL statement times out after 30 seconds.
The SQL statement execution result is as follows:
alarmForJmeter_32768<=#->3900f481-9641-4673-88fb-4c8b70be5bcd<=#->0 1
alarmForJmeter_32768<=#->47873ea8-5fc6-4085-98f5-3a3be43fbe36<=#->0 1
alarmForJmeter_32768<=#->81ba6f80-8996-4933-9b9a-93a2c78e9f96<=#->0 1
alarmForJmeter_32768<=#->f0a1209a-29bb-4ff8-a77f-b00f253977d7<=#->0 1
alarmForJmeter_32768<=#->fbdae380-3017-4360-a58b-8317e213b94a<=#->0 1
Ignite's log is as follows:
Query execution is too long [duration=3090ms, type=REDUCE, distributedJoin=false,
enforceJoinOrder=false, lazy=false, schema=alarmCache, sql='SELECT CONCAT(MENAME,'<=#->',
NATIVEMEDN,'<=#->', ORIGINSYSTEMID), COUNT(1) FROM AlarmRecord WHERE ( (specialAlarmStatus
=?) and (invalidated =?) and specialAlarmStatus in (?) and merged in (?,?) ) AND (changeFlag
!=? or changeFlag is null) GROUP BY CONCAT(meName,'<=#->', nativeMeDn,'<=#->',
originSystemId) order by count(1) DESC limit 10', plan=SELECT\n __C0_0 AS __C0_0,\n
CAST(SUM(__C0_1) AS BIGINT) AS __C0_1\nFROM PUBLIC.__T0\n /* "alarmCache"."merge_scan" */\n
/* scanCount: 80024 */\nGROUP BY __C0_0\nORDER BY 2 DESC\nLIMIT 10, reqId=2054]
[2021-11-11 20:13:09,620][WARN ][0][0][long-qry-#49][ROOT][IgniteLoggerImp][72] Query
execution is too long [duration=6097ms, type=REDUCE, distributedJoin=false,
enforceJoinOrder=false, lazy=false, schema=alarmCache, sql='SELECT CONCAT(MENAME,'<=#->',
NATIVEMEDN,'<=#->', ORIGINSYSTEMID), COUNT(1) FROM AlarmRecord WHERE ( (specialAlarmStatus
=?) and (invalidated =?) and specialAlarmStatus in (?) and merged in (?,?) ) AND (changeFlag
!=? or changeFlag is null) GROUP BY CONCAT(meName,'<=#->', nativeMeDn,'<=#->',
originSystemId) order by count(1) DESC limit 10', plan=SELECT\n __C0_0 AS __C0_0,\n
CAST(SUM(__C0_1) AS BIGINT) AS __C0_1\nFROM PUBLIC.__T0\n /* "alarmCache"."merge_scan" */\n
/* scanCount: 144277 */\nGROUP BY __C0_0\nORDER BY 2 DESC\nLIMIT 10, reqId=2054]
Other log entries:
Query produced big result set. [fetched=100000, duration=301ms, type=REDUCE,
distributedJoin=false, enforceJoinOrder=false, lazy=false, schema=alarmCache, sql='SELECT
CSN,SEVERITY,ACKED,CLEARED FROM AlarmRecord WHERE (specialAlarmStatus =?) and (invalidated
=?) and specialAlarmStatus in (?) and merged in (?,?) ORDER BY ARRIVEUTC DESC', plan=SELECT\n
__C0_0 AS CSN,\n __C0_1 AS SEVERITY,\n __C0_2 AS ACKED,\n __C0_3 AS CLEARED\nFROM
PUBLIC.__T0\n /* "alarmCache"."merge_sorted" */\n /* scanCount: 456014 */\nORDER BY
__C0_4 DESC\n/* index sorted */, reqId=15205]
Here are Ignite docs about SQL performance tuning:
https://ignite.apache.org/docs/latest/perf-and-troubleshooting/sql-tuning
The main points are:
- check your indexes
- enable lazy result loading (lazy=true)
- check your index inline size
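To illustrate the first point with something runnable (a generic SQLite sketch, not Ignite itself; the table and index names are made up), a composite index that matches the GROUP BY columns lets the engine read groups in index order instead of sorting the whole scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE alarm (meName TEXT, nativeMeDn TEXT, originSystemId INT)")

# Without an index, the GROUP BY must scan the table and sort it.
before = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT meName, nativeMeDn, originSystemId, COUNT(1) "
    "FROM alarm GROUP BY meName, nativeMeDn, originSystemId"
).fetchall()

# A composite index matching the GROUP BY columns allows a covering
# index scan: groups arrive already ordered, no temp sort needed.
con.execute("CREATE INDEX idx_grp ON alarm (meName, nativeMeDn, originSystemId)")
after = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT meName, nativeMeDn, originSystemId, COUNT(1) "
    "FROM alarm GROUP BY meName, nativeMeDn, originSystemId"
).fetchall()
print(after)
```

The same reasoning applies to the query above: grouping on the raw columns (rather than on a CONCAT expression) gives the engine a chance to use such an index.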
Related
Good day! I have a question regarding a WITH clause combined with UNION ALL. The SELECT statement in the procedure works fine, but when I try to add a condition to each WITH clause it gives this error ("Inappropriate INTO"). Below is an example of what I'm trying to do:
-------------------------------------------SOO1
WITH SS AS
( SELECT COUNT(ROWNUM) AS S001
FROM x.MSSE A
WHERE SUCCESS_STATUS = 'F'
UNION ALL
SELECT COUNT(ROWNUM) AS S001
FROM X.MSSE A
WHERE SUCCESS_STATUS = 'N'
),
S333 AS
(SELECT /*+ INDEX (A MSSEADTD_SI02) */
COUNT(ROWNUM) AS S333_
FROM X.MSSE A
WHERE (SUCCESS_STATUS='M' )
union all
SELECT /*+ INDEX (A MSSEORMD_SI03) */
COUNT(ROWNUM) AS S333_
FROM HHL7.MSSEORMD A
WHERE (SUCCESS_STATUS='A')
),
LIS AS
(SELECT /*+ INDEX (A MSSEORUD_SI02) */
COUNT(ROWNUM)AS LIS_
FROM A.MSS A
WHERE (SUCCESS_STATUS='D')
)
SELECT SUM(S001) INTO Value_A
FROM
(
SELECT S001 FROM SS
)
UNION ALL
SELECT SUM(S333_) INTO VALUE_C
FROM
(SELECT S333_ FROM S333)
UNION ALL
SELECT SUM(LIS_)INTO VALUE_D FROM
(SELECT LIS_ FROM LIS)
IF Value_A > 30000 THEN
--DO THIS
IF VALUE_C > 30000 THEN
--DO THAT
--...ETC
The way I see it, it is UNION, but somewhat different from the one you wrote, because part of the data comes from one table (x.msse) and another part from hhl7.msseormd. It is unclear whether you can (or cannot) join these two tables, so union can be one option.
As the number and datatypes of columns returned by the SELECT statements in a UNION must match, the 2nd SELECT returns two 0 constants.
The final select sums values returned in the CTE.
with temp as
(-- values from X.MSSE table
select
sum(case when success_status in ('F', 'N') then 1 else 0 end) s001,
sum(case when success_status in ('M') then 1 else 0 end) s333,
sum(case when success_status in ('D') then 1 else 0 end) lis
from x.msse
union all
-- values from HHL7.MSSEORMD table
select
0 s001,
sum(case when success_status in ('A') then 1 else 0 end) s333,
0 lis
from hhl7.msseormd
)
select sum(s001), sum(s333), sum(lis)
into value_a, value_c, value_d
from temp;
IFs follow, but that's the easier part once you fetch value_a etc.
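As a quick runnable check of the conditional-aggregation idea (SQLite standing in for Oracle, with made-up data; SQLite has no INTO, so the values are simply fetched):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msse (success_status TEXT)")
con.executemany("INSERT INTO msse VALUES (?)",
                [("F",), ("N",), ("M",), ("D",), ("D",)])

# One pass over the table produces all three counters at once,
# instead of one UNION ALL branch per status value.
row = con.execute("""
    SELECT sum(CASE WHEN success_status IN ('F', 'N') THEN 1 ELSE 0 END) AS s001,
           sum(CASE WHEN success_status IN ('M') THEN 1 ELSE 0 END) AS s333,
           sum(CASE WHEN success_status IN ('D') THEN 1 ELSE 0 END) AS lis
    FROM msse
""").fetchone()
print(row)  # (2, 1, 2)
```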
[EDIT, based on your comment]
If that's the case, then one "huge" CTE won't help at all. I'd suggest splitting it into several parts, one CTE per target variable. Something like this:
-- values that make VALUE_A: ------------------------
with ss as
(select count(*) cnt from msse where success_status in ('F', 'N')
union all
select count(*) cnt from abcd where xyz = 23
union all
...
)
select sum(cnt)
into value_a
from ss;
-- values that make VALUE_C: ------------------------
with s333 as
(select count(*) cnt from msse where success_status in ('M')
union all
select count(*) cnt from msseormd where success_status = 'A'
union all
select count(*) cnt from qwerty where zzy = 154
union all
...
)
select sum(cnt)
into value_c
from s333;
-- etc., for all VALUE_x variables
As for the error you got: you can't have multiple INTO clauses in the same SELECT statement. You could, though, do this:
with (your current CTE goes here)
-- final SELECT:
select (select sum(s001_) from ss),
(select sum(s333_) from s333),
(select sum(lis_) from lis)
into value_a, value_c, value_d
from dual
but I'd say that, if there are dozens of tables involved, one huge CTE really doesn't help at all; it makes things more complex than they should be.
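A runnable sketch of that final-SELECT shape (SQLite standing in for Oracle, so there is no DUAL and no INTO; table names and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ss (cnt INT)")
con.execute("CREATE TABLE s333 (cnt INT)")
con.executemany("INSERT INTO ss VALUES (?)", [(3,), (4,)])
con.executemany("INSERT INTO s333 VALUES (?)", [(5,)])

# One SELECT with several scalar subqueries: a single row comes back
# carrying every aggregate, which is the single-INTO shape from above.
row = con.execute("""
    SELECT (SELECT sum(cnt) FROM ss),
           (SELECT sum(cnt) FROM s333)
""").fetchone()
print(row)  # (7, 5)
```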
Your query is equivalent to just:
SELECT set_a.S001, set_c.S333_, set_d.LIS_
INTO Value_A, VALUE_C, VALUE_D
FROM (SELECT COUNT(1) AS S001
FROM x.MSSE
WHERE SUCCESS_STATUS IN ('N','F')) set_a
, (SELECT COUNT(1) AS S333_
FROM X.MSSE
WHERE SUCCESS_STATUS IN ('M','A')) set_c
, (SELECT COUNT(1) AS LIS_
FROM A.MSS A
WHERE SUCCESS_STATUS = 'D') set_d;
IF Value_A > 30000 THEN
--DO THIS
END IF;
IF VALUE_C > 30000 THEN
--DO THAT
END IF;
There is no need for those unions.
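A runnable sketch of the cross-join-of-one-row-derived-tables idea (SQLite with made-up data; each derived table returns exactly one row, so the join also returns exactly one row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msse (success_status TEXT)")
con.executemany("INSERT INTO msse VALUES (?)",
                [("N",), ("F",), ("M",), ("D",)])

# Each derived table yields exactly one row (a COUNT), so the implicit
# cross join yields one row with all the counters side by side.
row = con.execute("""
    SELECT set_a.s001, set_c.s333_
    FROM (SELECT COUNT(1) AS s001 FROM msse
          WHERE success_status IN ('N', 'F')) set_a,
         (SELECT COUNT(1) AS s333_ FROM msse
          WHERE success_status IN ('M')) set_c
""").fetchone()
print(row)  # (2, 1)
```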
EDIT: If you have different tables you can adopt either of these methods:
/* Preserve unions and use it as subquery*/
SELECT SUM(S333_)
INTO VALUE_C
FROM (SELECT /*+ INDEX (A MSSEADTD_SI02) */
COUNT(ROWNUM) AS S333_
FROM X.MSSE A
WHERE SUCCESS_STATUS = 'M'
UNION ALL
SELECT /*+ INDEX (A MSSEORMD_SI03) */
COUNT(ROWNUM)
FROM HHL7.MSSEORMD A
WHERE SUCCESS_STATUS = 'A')
/* Use scalar subquery */
SELECT (SELECT /*+ INDEX (A MSSEADTD_SI02) */
COUNT(ROWNUM) AS S333_
FROM X.MSSE A
WHERE SUCCESS_STATUS = 'M')
+ (SELECT /*+ INDEX (A MSSEORMD_SI03) */
COUNT(ROWNUM)
FROM HHL7.MSSEORMD A
WHERE SUCCESS_STATUS = 'A')
INTO VALUE_C
FROM dual
SELECT a.valor,
Cc_obt_nom_titular_cuenta (1, a.valor) Titular,
count('*') cnt
FROM cc_audit_obj a,
cuenta_efectivo b
WHERE a.campo = 'NUM_CUENTA'
AND Trunc(a.fecha) BETWEEN '01-jun-2021' AND '05-jun-2021'
AND b.num_cuenta = a.valor
AND b.cod_empresa = '1'
GROUP BY a.valor, a.usuario
ORDER BY cnt desc
LIMIT 50
The LIMIT clause gives the error stated in the title.
I want to limit the result to 50 records. How can I do this?
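Oracle has no LIMIT clause; on 12c and later you can append FETCH FIRST 50 ROWS ONLY, and on older versions you order inside an inline view and filter with ROWNUM. A generic sketch of the inline-view pattern (SQLite with ROW_NUMBER() standing in for ROWNUM; the table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE audit (valor TEXT, cnt INT)")
con.executemany("INSERT INTO audit VALUES (?, ?)",
                [(f"acct{i}", i) for i in range(100)])

# Pre-12c Oracle pattern: order inside an inline view, then cut with a
# row counter (ROWNUM in Oracle; ROW_NUMBER() plays that role here).
rows = con.execute("""
    SELECT valor, cnt FROM (
        SELECT valor, cnt,
               ROW_NUMBER() OVER (ORDER BY cnt DESC) AS rn
        FROM audit
    ) WHERE rn <= 50
    ORDER BY rn
""").fetchall()
print(len(rows), rows[0])  # 50 ('acct99', 99)
```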
I want to fetch data from a BigQuery database, but I get an error:
=> The query is too large. The maximum query length is 256.000K characters, including comments and white space characters.
I will show part of the query, which I repeated 21 times:
WITH data AS
(
SELECT
IFNULL(department, 'UNKNOWN_DEPARTMENT') AS dept,
'C7s' AS campus,
COUNTIF(task.taskRaised.raisedAt.milliSeconds BETWEEN 1542565800000 AND 1543170599999) AS taskCount_0,
COUNTIF(task.taskRaised.raisedAt.milliSeconds BETWEEN 1542565800000 AND 1543170599999
AND IF (task.deadline.currentEscalationLevel NOT IN
(
'ESC_ACKNOWLEDGEMENT'
)
, task.deadline.currentEscalationLevel, 'NOT_ESCALATED') NOT IN
(
'NOT_ESCALATED'
)
) AS escCount_0,
COUNTIF(task.taskRaised.raisedAt.milliSeconds BETWEEN 1541961000000 AND 1542565799999) AS taskCount_1,
COUNTIF(task.taskRaised.raisedAt.milliSeconds BETWEEN 1541961000000 AND 1542565799999
AND IF (task.deadline.currentEscalationLevel NOT IN
(
'ESC_ACKNOWLEDGEMENT'
)
, task.deadline.currentEscalationLevel, 'NOT_ESCALATED') NOT IN
(
'NOT_ESCALATED'
)
) AS escCount_1,
COUNTIF(task.taskRaised.raisedAt.milliSeconds BETWEEN 1541356200000 AND 1541960999999) AS taskCount_2,
COUNTIF(task.taskRaised.raisedAt.milliSeconds BETWEEN 1541356200000 AND 1541960999999
AND IF (task.deadline.currentEscalationLevel NOT IN
(
'ESC_ACKNOWLEDGEMENT'
)
, task.deadline.currentEscalationLevel, 'NOT_ESCALATED') NOT IN
(
'NOT_ESCALATED'
)
) AS escCount_2
FROM
`nsimplbigquery.TaskManagement.C7s_*`
WHERE
_TABLE_SUFFIX IN
(
'2018_47_11',
'2018_45_11',
'2018_46_11'
)
AND IFNULL(department, 'UNKNOWN_DEPARTMENT') IN
(
'ENGG_AND_MAINT_DEPARTMENT',
'FNB_DEPARTMENT',
'TELECOM_DEPARTMENT',
'IT_DEPARTMENT',
'BILLING_AND_INSURANCE',
'HOUSEKEEPING_DEPARTMENT'
)
AND task.taskRaised.raisedAt.milliSeconds BETWEEN 1541356200000 AND 1543170599999
GROUP BY
dept
)
,
mainQuery AS
(
SELECT
dept,
campus,
SUM(taskCount_0) AS taskCount_0,
SUM(escCount_0) AS escCount_0,
CAST(SAFE_DIVIDE(SUM(escCount_0), SUM(taskCount_0)) * 10000 AS INT64) AS escPerc_0,
SUM(taskCount_1) AS taskCount_1,
SUM(escCount_1) AS escCount_1,
CAST(SAFE_DIVIDE(SUM(escCount_1), SUM(taskCount_1)) * 10000 AS INT64) AS escPerc_1,
SUM(taskCount_2) AS taskCount_2,
SUM(escCount_2) AS escCount_2,
CAST(SAFE_DIVIDE(SUM(escCount_2), SUM(taskCount_2)) * 10000 AS INT64) AS escPerc_2
FROM
data
GROUP BY
ROLLUP (campus, dept)
)
SELECT
dept,
campus,
taskCount_0,
escCount_0,
escPerc_0,
taskCount_1,
escCount_1,
escPerc_1,
taskCount_2,
escCount_2,
escPerc_2
FROM
mainQuery
WHERE
campus IS NOT NULL
ORDER BY
CASE
WHEN
dept IS NULL
THEN
1
ELSE
0
END
ASC, dept ASC, campus ASC;
This is the query which I repeat so many times, because I have so many ids. Where C7s appears, I replace it with the following ids:
C7z,
C7u,
H0B,
IDp,
ITR,
C7i,
C7j,
C7k,
C7l,
C7m,
C7o,
C71,
C7t,
F6qZ,
C7w,
GIui,
Fs,
C70,
C7p,
C7r
If you look at my explanation, I quoted the line nsimplbigquery.TaskManagement.C7s_*; in the next query the table name is changed, like
nsimplbigquery.TaskManagement.C7z_*
Instead of repeating your whole SELECT statement 21 times, use the approach below. You will have 3 x 21 = 63 entries in the _TABLE_SUFFIX list, but you will be able to get around your issue with query length:
FROM `nsimplbigquery.TaskManagement.*`
WHERE _TABLE_SUFFIX IN (
'C7s_2018_47_11',
'C7s_2018_45_11',
'C7s_2018_46_11',
'C7z_2018_47_11',
'C7z_2018_45_11',
'C7z_2018_46_11',
'C7u_2018_47_11',
'C7u_2018_45_11',
'C7u_2018_46_11',
...
...
...
'C7r_2018_47_11',
'C7r_2018_45_11',
'C7r_2018_46_11'
)
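Rather than typing the 63 suffixes by hand, the IN list can be generated; a small sketch (the id and week lists are copied from the question):

```python
from itertools import product

# Ids and week suffixes copied from the question; 21 ids x 3 weeks = 63.
campuses = ["C7s", "C7z", "C7u", "H0B", "IDp", "ITR", "C7i", "C7j", "C7k",
            "C7l", "C7m", "C7o", "C71", "C7t", "F6qZ", "C7w", "GIui", "Fs",
            "C70", "C7p", "C7r"]
weeks = ["2018_47_11", "2018_45_11", "2018_46_11"]

suffixes = [f"{c}_{w}" for c, w in product(campuses, weeks)]

# Paste this block into the single WHERE _TABLE_SUFFIX IN (...) clause.
in_list = ",\n".join(f"  '{s}'" for s in suffixes)
print(len(suffixes))  # 63
```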
I have a Postgres query which takes almost 7 seconds.
It joins two tables with a WHERE clause; when I use DISTINCT it takes 7 seconds, and without DISTINCT I get the result in 500 ms. I even applied an index, but it was of no help. How can I tune the query for better performance?
select distinct RES.* from ACT_RU_TASK RES inner join ACT_RU_IDENTITYLINK I on
I.TASK_ID_ = RES.ID_ WHERE RES.ASSIGNEE_ is null
and I.TYPE_ = 'candidate' and ( I.GROUP_ID_ IN ( 'us1','us2') )
order by RES.priority_ desc LIMIT 10 OFFSET 0
For every RES.ID_ I have two I.TASK_ID_ rows, so I need only unique records.
Instead of using distinct, use exists:
select RES.*
from ACT_RU_TASK RES
where exists (select 1
from ACT_RU_IDENTITYLINK I
where I.TASK_ID_ = RES.ID_ and
I.TYPE_ = 'candidate' and
I.GROUP_ID_ IN ( 'us1','us2')
) and
RES.ASSIGNEE_ is null
order by RES.priority_ desc
LIMIT 10 OFFSET 0;
For this query, you want an index on ACT_RU_IDENTITYLINK(TASK_ID_, TYPE_, GROUP_ID_). It is also possible that an index on ACT_RU_TASK(ASSIGNEE_, priority_, ID_) could be used.
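A runnable sketch of why EXISTS avoids the duplicate rows that forced DISTINCT (SQLite with made-up data; one task matched by two identity links still comes back once):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE task (id INTEGER, assignee TEXT, priority INTEGER)")
con.execute("CREATE TABLE ident (task_id INTEGER, type TEXT, group_id TEXT)")
con.execute("INSERT INTO task VALUES (1, NULL, 5)")

# Two matching identity links for the same task: a join would return the
# task twice, while EXISTS returns it once with no dedup step at all.
con.executemany("INSERT INTO ident VALUES (?, ?, ?)",
                [(1, "candidate", "us1"), (1, "candidate", "us2")])

rows = con.execute("""
    SELECT t.id FROM task t
    WHERE t.assignee IS NULL
      AND EXISTS (SELECT 1 FROM ident i
                  WHERE i.task_id = t.id
                    AND i.type = 'candidate'
                    AND i.group_id IN ('us1', 'us2'))
    ORDER BY t.priority DESC
    LIMIT 10
""").fetchall()
print(rows)  # [(1,)]
```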
The query below accesses the Votes table, which contains over 30 million rows. The result set is then filtered with WHERE n = 1. In the query plan, the SORT operation for the ROW_NUMBER() windowed function is 95% of the query's cost, and the query takes over 6 minutes to execute.
I already have an index on (same_voter, eid, country) INCLUDE (vid, nid, sid, vote, time_stamp, new) to cover the WHERE clause.
Is the most efficient correction to add an index on (vid, nid, sid, new DESC, time_stamp DESC), or is there an alternative to the ROW_NUMBER() function that achieves the same results more efficiently?
SELECT v.vid, v.nid, v.sid, v.vote, v.time_stamp, v.new, v.eid,
ROW_NUMBER() OVER (
PARTITION BY v.vid, v.nid, v.sid ORDER BY v.new DESC, v.time_stamp DESC) AS n
FROM dbo.Votes v
WHERE v.same_voter <> 1
AND v.eid <= #EId
AND v.eid > (#EId - 5)
AND v.country = #Country
One possible alternative to using ROW_NUMBER():
SELECT
V.vid,
V.nid,
V.sid,
V.vote,
V.time_stamp,
V.new,
V.eid
FROM
dbo.Votes V
LEFT OUTER JOIN dbo.Votes V2 ON
V2.vid = V.vid AND
V2.nid = V.nid AND
V2.sid = V.sid AND
V2.same_voter <> 1 AND
V2.eid <= #EId AND
V2.eid > (#EId - 5) AND
V2.country = #Country AND
(V2.new > V.new OR (V2.new = V.new AND V2.time_stamp > V.time_stamp))
WHERE
V.same_voter <> 1 AND
V.eid <= #EId AND
V.eid > (#EId - 5) AND
V.country = #Country AND
V2.vid IS NULL
The query basically says to get all rows matching your criteria, then join to any other rows that match the same criteria, but which would be ranked higher for the partition based on the new and time_stamp columns. If none are found then this must be the row that you want (it's ranked highest) and if none are found that means that V2.vid will be NULL. I'm assuming that vid otherwise can never be NULL. If it's a NULLable column in your table then you'll need to adjust that last line of the query.
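The anti-join pattern can be checked on a toy example (SQLite with made-up data; one partition column instead of three, and one ordering column instead of two):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE votes (vid INT, ts INT)")
con.executemany("INSERT INTO votes VALUES (?, ?)",
                [(1, 10), (1, 20), (2, 5)])

# Keep a row only if no other row in the same partition ranks higher:
# the outer join looks for such a higher-ranked row, and IS NULL keeps
# only the rows where none was found (the top row of each partition).
rows = con.execute("""
    SELECT v.vid, v.ts
    FROM votes v
    LEFT JOIN votes v2
      ON v2.vid = v.vid AND v2.ts > v.ts
    WHERE v2.vid IS NULL
    ORDER BY v.vid
""").fetchall()
print(rows)  # [(1, 20), (2, 5)]
```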