Oracle query to check for failure when processed count is below 90% - sql

I have a situation where I need to write a monitoring query that runs every 2 hours and raises an alert when the processed count drops below 90% of the incoming count.
Let's say we have a table, Incoming Message, where all incoming messages are captured, and another table where all processed messages are captured.
This is what I came up with. It works, but I am wondering if there is a better way of doing this:
SELECT (CASE WHEN PROCESSEDCOUNT <= INCOMINGCOUNT * .9
             THEN 'ALERT:: Process Count ' || PROCESSEDCOUNT || ' is less than 90% of Incoming count ' || INCOMINGCOUNT || '. '
             ELSE 'FINE:: Process Count ' || PROCESSEDCOUNT || ' is more than or equal to 90% of Incoming count ' || INCOMINGCOUNT || '. '
        END) AS Status
FROM (SELECT (SELECT COUNT(*)
              FROM INCOMING_TABLE D
              WHERE INSERTION_TIME > SYSDATE - (1/12)
                AND EXISTS (SELECT * FROM PROCESSED_TABLE C
                            WHERE D.MESSAGE_ID = C.MESSAGE_ID
                              AND C.PROCESSED_TIME > SYSDATE - (1/12))) AS PROCESSEDCOUNT,
             (SELECT COUNT(*)
              FROM INCOMING_TABLE
              WHERE INSERTION_TIME > SYSDATE - (1/12)) AS INCOMINGCOUNT
      FROM DUAL);
PROCESSED_TABLE is used for storing other records as well; that is the reason I need to use EXISTS to figure out the processed count.
I understand that the times captured in the two tables may not fall into the same duration. We are not worried about that right now; we just want to make sure the majority of records are processed.
We are using Oracle 11g, if that helps.

You are querying the same data from INCOMING_TABLE twice, which isn't really efficient ;-)
One possibility could be to outer join:
SELECT CASE
         WHEN COUNT(C.MESSAGE_ID) <= COUNT(*) * .9
         THEN 'ALERT:: Process Count ' || COUNT(C.MESSAGE_ID) || ' is less than 90% of Incoming count ' || COUNT(*) || '. '
         ELSE 'FINE:: Process Count ' || COUNT(C.MESSAGE_ID) || ' is more than or equal to 90% of Incoming count ' || COUNT(*) || '. '
       END AS Status
FROM INCOMING_TABLE D
LEFT OUTER JOIN PROCESSED_TABLE C
  ON C.MESSAGE_ID = D.MESSAGE_ID
 AND C.PROCESSED_TIME > SYSDATE - (1/12)
WHERE D.INSERTION_TIME > SYSDATE - (1/12)
/
That will work if you can be sure that either zero or one record exists in PROCESSED_TABLE for each MESSAGE_ID. Maybe you can add an AND C.PROCESS_TYPE = ... predicate or something similar to make that condition hold, as in the sketch below.
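For example (a hedged sketch; the PROCESS_TYPE column and the 'MSG' value are hypothetical, standing in for whatever distinguishes message-processing rows in PROCESSED_TABLE):
LEFT OUTER JOIN PROCESSED_TABLE C
  ON C.MESSAGE_ID = D.MESSAGE_ID
 AND C.PROCESS_TYPE = 'MSG'   -- hypothetical filter guaranteeing at most one row per MESSAGE_ID
 AND C.PROCESSED_TIME > SYSDATE - (1/12)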
If you cannot guarantee that a join to PROCESSED_TABLE returns at most one row, you can move your EXISTS to inside a COUNT instead of the WHERE clause and thereby again avoid accessing INCOMING_TABLE twice:
SELECT (CASE WHEN PROCESSEDCOUNT <= INCOMINGCOUNT * .9
             THEN 'ALERT:: Process Count ' || PROCESSEDCOUNT || ' is less than 90% of Incoming count ' || INCOMINGCOUNT || '. '
             ELSE 'FINE:: Process Count ' || PROCESSEDCOUNT || ' is more than or equal to 90% of Incoming count ' || INCOMINGCOUNT || '. '
        END) AS Status
FROM (
  SELECT COUNT(*) AS INCOMINGCOUNT
       , COUNT(CASE
                 WHEN EXISTS (SELECT * FROM PROCESSED_TABLE C
                              WHERE D.MESSAGE_ID = C.MESSAGE_ID
                                AND C.PROCESSED_TIME > SYSDATE - (1/12))
                 THEN 1
               END) AS PROCESSEDCOUNT
  FROM INCOMING_TABLE D
  WHERE D.INSERTION_TIME > SYSDATE - (1/12)
)
/
PS. If you are at the start of writing a lot of code to handle a messaging queue, I would also suggest, like @DARK_A, looking into Advanced Queues instead of building your own. There are a lot of issues you need to handle in a messaging system, so why have that trouble if you can use what Oracle has already built ;-)


SQL Sylob: return a boolean

I am currently discovering the Sylob 5 ERP and I would like some help on one of my queries, because Sylob support is overwhelmed with requests.
I would like my query to display whether one of the operations in my fabrication order is late.
I already have a query that displays every operation and whether it is late or not:
SELECT
distinct
ordreFabrication.codeOF as Code_OF,
operationOrdreFabrication.libelle as Libelle,
operationOrdreFabrication.centreCharge.libelle || ' (' ||
operationOrdreFabrication.centreCharge.code || ')' as Centre_charge,
operationOrdreFabrication.dateDebutPrevue as Début_prévu,
(
CASE
WHEN current_date() > operationOrdreFabrication.dateDebutPrevue and
operationOrdreFabrication.etatAvancementOperationOF = '0' THEN ('<div style=''color:red''>' ||
'Retard sur le début' || ' </div>')
WHEN current_date() > operationOrdreFabrication.dateFinPrevue and
operationOrdreFabrication.etatAvancementOperationOF != '2' THEN ('<div style=''color:red''>' ||
'Retard sur la fin' || ' </div>')
ELSE ('Aucun retard')
END
) as Retard,
operationOrdreFabrication.dateDebutReelle as Début_Réel,
operationOrdreFabrication.dateFinPrevue as Fin_prévue
FROM
OperationOrdreFabricationEntite as operationOrdreFabrication
left outer join operationOrdreFabrication.ordreFabrication as ordreFabrication
WHERE
operationOrdreFabrication.id not like 'DefaultRecord_%'
AND
operationOrdreFabrication.dateFinValidite is null
AND
ordreFabrication.dateFinValidite is null
AND
operationOrdreFabrication.sousTraitance in ('false')
AND
((current_date() > operationOrdreFabrication.dateDebutPrevue and
operationOrdreFabrication.etatAvancementOperationOF = '0'
) OR (current_date() > operationOrdreFabrication.dateFinPrevue and
operationOrdreFabrication.etatAvancementOperationOF != '2'))
ORDER BY
1 asc
But I want this to return true or false when my CASE returns anything other than "Aucun retard",
so I can use it as a subquery.
Looks like you want EXISTS (NOT EXISTS)
select exists(
    select 1
    from (
        -- your original query
    ) t
    where retard <> 'Aucun retard') flag
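And since NOT EXISTS was mentioned: if you want the flag the other way around (true only when nothing is late), the same shape works inverted. A sketch under the same assumptions (t and no_delay_flag are just illustrative names):
select not exists(
    select 1
    from (
        -- your original query
    ) t
    where retard <> 'Aucun retard') no_delay_flag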

Generate a list of rentals, with the client's information for the outlet with the most rentals

I have 4 tables: RentalAgreement (Rentals), Clients, vehicle and outlet.
I need to list the rentalAgreementNumbers of THE outlet with the most rentals, along with client info and the outlet address.
Thanks!
So far, I am getting all 11 rows - that is, all agreements for all outlets:
SELECT outlet.Street || ' ' || outlet.City || ', ' || outlet.state || ' - ' || outlet.zipcode AS "Outlet Street Address",
       rentalNo AS "ID", startDate AS "Start Date", returnDate AS "Return Date", clientName, client.Phone
FROM client
     JOIN (ragreement JOIN (vehicle JOIN outlet USING (outNo)) USING (licenseNo)) USING (clientNo)
GROUP BY outlet.Street || ' ' || outlet.City || ', ' || outlet.state || ' - ' || outlet.zipcode,
         rentalNo, startDate, returnDate, clientName, client.Phone
HAVING COUNT(rentalNo) = (SELECT MAX(COUNT(rentalNo))
                          FROM ragreement
                          GROUP BY (rentalNo));
How can I modify this to get only 5 rental agreements listed for outlet1, which has the most rental agreements in my table?
After the SELECT command, you can set a limit. The MSSQL syntax for this is:
SELECT TOP(5) [columns] FROM ...
You also need to add an ORDER BY clause at the end of the query:
... ORDER BY [columns or calculations]
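Put together, a minimal sketch in SQL Server syntax (the ordering column is an assumption; pick whatever defines your top 5 - and note the question's query looks more like Oracle, where ROWNUM in an outer query plays the same role as TOP):
SELECT TOP(5) rentalNo, startDate, returnDate  -- columns taken from the question's ragreement table
FROM ragreement
ORDER BY startDate DESC;                       -- assumed ordering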

Postgres 9.1's concat_ws equivalent in Amazon Redshift

I've got a query originally written for pg9.1 that I'm trying to fix for use on Redshift, as follows:
select concat_ws(' | ', p.gb_id, p.aro_id, p.gb_name) c from (
    select ca.p_id,
           avg(ca.ab) as ab
    from public.fca ca
    join temp_s_ids s on ca.s_id = s.s_id
    group by ca.p_id
) as x
join public.dim_protein as p on x.p_id = p.p_id;
I've been trying to test it out on my own, but as it is created from temporary tables that are created by a PHP session, I haven't had any luck yet. However, my guess is that the concat_ws function isn't working as expected in Redshift.
I don't believe there is an equivalent in Redshift. You will have to roll your own. If there are no NULLs, you can just use the concatenation operator ||:
SELECT p.gb_id || ' | ' || p.aro_id || ' | ' || p.gb_name c
FROM...
If you have to worry about NULLs (and their separators):
SELECT COALESCE(CASE WHEN p.gb_id IS NOT NULL THEN p.gb_id || ' | ' END, '')
       || COALESCE(CASE WHEN p.aro_id IS NOT NULL THEN p.aro_id || ' | ' END, '')
       || COALESCE(p.gb_name, '') c
FROM ...
Perhaps that can be simplified, but I believe it will do the trick.
To handle NULLs, you can do:
select trim('|' from
coalesce('|' || p.gb_id) ||
coalesce('|' || p.p.aro_id) ||
coalesce('|' || p.gb_name)
)
from . . .
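To sanity-check the NULL handling, here is a hedged, self-contained probe with made-up literal values (the inline subquery and its column values are invented for illustration):
-- aro_id is NULL here, so its '|' prefix becomes NULL and coalesces away
select trim('|' from
       coalesce('|' || gb_id, '') ||
       coalesce('|' || aro_id, '') ||
       coalesce('|' || gb_name, '')) as c
from (select 'g1' as gb_id, cast(null as varchar) as aro_id, 'name1' as gb_name) t;
-- expected: g1|name1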

Speed up a select query on a table with date parts in separate columns

The following query is taking a very long time to run.
The logins table has 10M records, and there's an index on month, day and year. What could be done to speed up the query?
SELECT
cast(logins.month || '/' || logins.day || '/' || logins.year as date) as loginDt, logins.person
FROM logins
LEFT JOIN MIN_LUNCH
ON MIN_LUNCH.person = logins.person
AND MIN_LUNCH.date = cast(logins.month || '/' || logins.day || '/' || logins.year as date)
WHERE
cast(logins.month || '/' || logins.day || '/' || logins.year as date) between '01/01/2010' and '03/01/2010'
To fix your query, you would have to fix your table structure.
Consider using either a BIGINT column to store a Unix timestamp or a DATE column. This would make querying the database significantly easier, as well as faster.
Here is what your query might look like after changing the structure:
SELECT
    from_unixtime(logins.login_date, '%m-%d') as loginDt,
    logins.person
FROM
    logins
LEFT JOIN MIN_LUNCH ON MIN_LUNCH.person = logins.person AND MIN_LUNCH.date = logins.login_date
WHERE
    logins.login_date between 1262332800 and 1267430400;
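The two literals are Unix timestamps for the bounds of the requested range. In MySQL (which the from_unixtime call suggests), you could compute them instead of hard-coding them; a sketch (the exact values depend on the session time zone):
SELECT unix_timestamp('2010-01-01'), unix_timestamp('2010-03-01');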
Assuming you can't alter the table, I think you could speed up the query somewhat by using the date parts to limit the logins table to the specified date range, so at least you won't be doing that cast for every row in the table and your indexes won't be completely useless.
SELECT
    loginRange.loginDt, loginRange.person
FROM
    (SELECT
         cast(logins.month || '/' || logins.day || '/' || logins.year as date) as loginDt, logins.person
     FROM logins
     WHERE logins.month IN ('01','02','03') AND logins.year = '2010') as loginRange
LEFT JOIN MIN_LUNCH ON
    MIN_LUNCH.person = loginRange.person
    AND MIN_LUNCH.date = loginRange.loginDt
Obviously it won't be as good as it could be if the table were using the correct data types, and if you can alter the table you should fix that instead.
It should be possible to simplify the WHERE clause for the date interval you provided:
WHERE logins.month IN (1, 2, 3) AND logins.year = 2010
This part of the query should be able to use existing indexes, but you are still left with the JOIN condition, where you need to match a date datatype with three columns containing date parts:
MIN_LUNCH.date = cast(logins.month || '/' || logins.day || '/' || logins.year as date)
Here, for every record, your RDBMS needs to perform a casting operation; this defeats existing indexes.
For this reason and for the (great) cause of SQL datatypes, I do recommend that you fix your database structure and store dates as dates in the logins table.
You could simply add a new column and fill it from the existing data:
ALTER TABLE logins ADD login_date DATE; -- or the relevant date datatype for your RDBMS
UPDATE logins SET login_date =
CAST(logins.month || '/' || logins.day || '/' || logins.year as date);
From there on, you can use simple joins between both tables. Queries should benefit from following indexes:
logins(person, login_date)
min_lunch(person, date)
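A sketch of those indexes (the index names are illustrative, and the exact DDL depends on your RDBMS; quote date if it is a reserved word there):
CREATE INDEX logins_person_login_date_ix ON logins (person, login_date);
CREATE INDEX min_lunch_person_date_ix ON min_lunch (person, "date");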

Count what percentage of values in each column are NULLs

Is there a way, through the information_schema or otherwise, to calculate what percentage of each column of a table (or, better yet, of a set of tables) is NULL?
Your query has a number of problems, most importantly you are not escaping identifiers (which could lead to exceptions at best or SQL injection attacks in the worst case) and you are not taking the schema into account.
Use instead:
SELECT 'SELECT ' || string_agg(concat('round(100 - 100 * count(', col
, ') / count(*)::numeric, 2) AS ', col_pct), E'\n , ')
|| E'\nFROM ' || tbl
FROM (
SELECT quote_ident(table_schema) || '.' || quote_ident(table_name) AS tbl
, quote_ident(column_name) AS col
, quote_ident(column_name || '_pct') AS col_pct
FROM information_schema.columns
WHERE table_name = 'my_table_name'
ORDER BY ordinal_position
) sub
GROUP BY tbl;
Produces a query like:
SELECT round(100 - 100 * count(id) / count(*)::numeric, 2) AS id_pct
, round(100 - 100 * count(day) / count(*)::numeric, 2) AS day_pct
, round(100 - 100 * count("oDd X") / count(*)::numeric, 2) AS "oDd X_pct"
FROM public.my_table_name;
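If you run the generator from psql (9.6 or later), you don't even need to copy-paste its output: \gexec executes each cell of the preceding result as a statement. An illustrative sketch with a simpler generator (end it without a semicolon):
SELECT 'SELECT count(*) FROM ' || quote_ident(table_schema) || '.' || quote_ident(table_name)
FROM information_schema.tables
WHERE table_name = 'my_table_name'
\gexec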
Closely related answer on dba.SE with a lot more details:
Check whether empty strings are present in character-type columns
In PostgreSQL, you can easily compute it using the statistics tables if your autovacuum setting is on (check it with SHOW ALL;). You can also set the vacuum interval to configure how quickly the statistics tables are updated. You can then compute the NULL percentage (a.k.a. the null fraction) simply using the query below:
select attname, null_frac from pg_stats where tablename = 'table_name'
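Keep in mind that null_frac is an estimate based on sampled statistics, and tablename in pg_stats is unqualified, so a schema filter helps. If the statistics haven't been refreshed yet, a manual ANALYZE brings them up to date:
ANALYZE my_table_name;
SELECT attname, null_frac
FROM pg_stats
WHERE schemaname = 'public'   -- assuming the table lives in the public schema
  AND tablename = 'my_table_name';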
I don't think there is a built-in feature for this, but you can do it yourself:
just walk through each column in the table and calculate count(*) over all rows and the count of rows where the column is NULL.
It is also possible to optimize this into a single query per table.
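Spelled out, the single-query-per-table idea looks something like this (a sketch; my_table and its columns a and b are placeholders):
-- count(col) skips NULLs, so count(*) - count(col) is the number of NULLs
SELECT 100.0 * (count(*) - count(a)) / count(*) AS a_pct_null,
       100.0 * (count(*) - count(b)) / count(*) AS b_pct_null
FROM my_table;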
OK, I played around a little and made a query that returns a query (or queries, if you use LIKE 'my_table%' instead of = 'my_table_name'):
SELECT 'select ' || string_agg('(count(*)::real-count(' || column_name || ')::real)/count(*)::real as ' || column_name || '_percentage ', ', ')
       || 'from ' || table_name
FROM information_schema.columns
WHERE table_name LIKE 'my_table_name'
GROUP BY table_name
It returns a ready-to-run SQL query, like:
"SELECT (count(*)::real-count(id)::real)/count(*)::real AS id_percentage , (count(*)::real-count(value)::real)/count(*)::real AS value_percentage FROM my_table_name"
id_percentage;value_percentage
0;0.0177515
(The capitalization above isn't exactly what the query produces; it's adjusted for readability.)