Send email through a procedure in PL/SQL

I want to make a procedure through which I can send an email to my team's mail ID. A number of procedures run daily in our system, so I want to send the status of each one: success, failed, or running. If a procedure is still running, I have to show its running time in the email; if it failed, I have to show the error message. A table already exists in our database with proc_id, date, start_time, end_time, status, and error_desc columns. I have shared the details below.
proc_id  date       start_time            end_time              status   error_desc
-----------------------------------------------------------------------------------
1        6/7/2017   7/5/2017 9:55:16 AM   7/6/2017 1:36:25 AM   SUCCESS
2        6/28/2017  6/29/2017 8:30:02 AM  6/29/2017 2:20:15 PM  FAIL     -1555 - ORA-01555: snapshot too old: rollback segment number 334 with name "_SYSSMU334_1817651691$" too small ORA-02063:
3        7/5/2017   7/6/2017 9:34:54 AM                         RUNNING
As I said, I have to show all three statuses in the email sent by the procedure. Kindly help me with this.
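A minimal sketch of such a procedure, assuming the status table is named PROC_STATUS and that UTL_MAIL is installed with SMTP_OUT_SERVER configured (every name below is an assumption, not a confirmed part of the system):

-- Sketch only: table, column, and address names are assumptions.
CREATE OR REPLACE PROCEDURE send_proc_status_mail IS
  l_body VARCHAR2(32767) := 'Daily procedure status:' || CHR(10);
BEGIN
  -- Optionally filter on the date column to report only today's runs.
  FOR r IN (SELECT proc_id, start_time, status, error_desc
              FROM proc_status)
  LOOP
    IF r.status = 'RUNNING' THEN
      -- Running procedure: report elapsed minutes since start_time.
      l_body := l_body || 'Proc ' || r.proc_id || ': RUNNING for '
                || ROUND((SYSDATE - r.start_time) * 24 * 60) || ' minutes' || CHR(10);
    ELSIF r.status = 'FAIL' THEN
      -- Failed procedure: include the stored error message.
      l_body := l_body || 'Proc ' || r.proc_id || ': FAILED - ' || r.error_desc || CHR(10);
    ELSE
      l_body := l_body || 'Proc ' || r.proc_id || ': SUCCESS' || CHR(10);
    END IF;
  END LOOP;
  UTL_MAIL.send(sender     => 'db@example.com',
                recipients => 'team@example.com',
                subject    => 'Daily procedure status',
                message    => l_body);
END;
/

A job created with DBMS_SCHEDULER can then run this once a day.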

Related

Scheduling of jobs through SQL Server stored procedure

I have to write a stored procedure for scheduling Azure pipelines (jobs).
Table A will have static entries for batches. Frequency denotes how many times a day a batch needs to run, and the Timing column holds the batch start times, separated by commas (,).
Batch_ID  Batch_Name  Frequency  Timing
-----------------------------------------
1         ABC         2          7:00,13:00
Table B will have a listing of the jobs corresponding to one particular batch. This table will also be static, with one-time entries like Table A.
Table B
Batch_ID  JOB_ID  JOB_NM
--------------------------
1         1       Job_1
1         2       Job_2
Table C will contain the dependencies of the jobs in a batch
Table C
Batch_ID  JOB_ID  DEPENDENCY_JOB_ID
------------------------------------
1         1
1         2       1
When a batch executes, Table D will be populated with the batch start time.
Table D
Batch_ID  Batch_Name  Status  Start_Time  End_Time
----------------------------------------------------
1         abc         Start   7:00
As soon as Table D is populated, Table E will be populated with the job details. Job 2 will start only when Job 1 finishes.
Table E
Batch_ID  Batch_Name  JOB_ID  JOB_NM  Start_Time  End_Time
------------------------------------------------------------
1         abc         1       Job_1   7:00
1         abc         2       Job_2   7:15
When Job 2 completes, we will update Table D's end_time column.
Once the first run is completed, we need to check the Frequency column of Table A and, if it is more than 1, run the batch again and repeat the entire exercise.
In case our 1st batch didn't complete before the start time of the 2nd batch, we have to hold the 2nd batch until batch 1 is completed.
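To make the dependency rule concrete, I imagine the eligibility check would look something like this rough T-SQL sketch (table names as above; @Batch_ID is a made-up parameter):

-- Jobs in a batch that are eligible to start: not yet started,
-- and with no unfinished dependency.
SELECT b.JOB_ID, b.JOB_NM
FROM   TableB b
WHERE  b.Batch_ID = @Batch_ID
  AND NOT EXISTS (SELECT 1 FROM TableE e           -- not already started
                  WHERE  e.Batch_ID = b.Batch_ID
                    AND  e.JOB_ID   = b.JOB_ID)
  AND NOT EXISTS (SELECT 1                         -- dependency not finished yet
                  FROM   TableC c
                         LEFT JOIN TableE d
                                ON d.Batch_ID = c.Batch_ID
                               AND d.JOB_ID   = c.DEPENDENCY_JOB_ID
                  WHERE  c.Batch_ID = b.Batch_ID
                    AND  c.JOB_ID   = b.JOB_ID
                    AND  c.DEPENDENCY_JOB_ID IS NOT NULL
                    AND  d.End_Time IS NULL);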
Could anyone help me with how to start this?
As @Gordon Linoff said, your "question" is lacking an actual question.
If I can give an opinion on this, I don't think it's a good design idea to split your logic between Data Factory and stored procedures in a database. Be mindful that in the future, the user maintaining the pipelines may not have access to the database and will not be able to understand half of it. Even if YOU are the one maintaining this, two years from now chances are you will have forgotten what you did, and following the thread between two resources may take more time than it should. It will also make troubleshooting harder.
It really depends on the scenario you are working on, but to sum up: try to keep everything logic-related in one place.
Hope this helped!

Using SELECT ... FOR UPDATE to poll for a value change

I have a table that contains tasks and their status, akin to:
| task_id | task_status |
+---------+-------------+
| 71 | 1 |
| 85 | 3 |
| 110 | 2 |
Let's call the table TASKS.
Status is an enumerated value, for example:
1 = SCHEDULED
2 = RUNNING
3 = DONE
I need to poll this status to inform the user about a task he started. Currently, I'm just polling it on the server using a while loop, like this pseudocode:
status = old_status
while (timeout_not_expired and status == old_status) {
    status = get_status("SELECT task_status FROM TASKS WHERE task_id=%1", task_id)
    wait(check_interval)
}
return status
That's nasty: not only does it spam the Oracle server, it also spams our log of SQL queries.
So I did a bit of googling and found out about SELECT ... FOR UPDATE. I tried to run this statement:
SELECT task_status
FROM   TASKS
WHERE  task_id = 361
FOR UPDATE OF task_status
But it returns immediately. So the question:
Is this even what FOR UPDATE is for?
If yes, how do I get it to wait on the row with a timeout?
No, that isn't what that clause is for. From the documentation:
The FOR UPDATE clause lets you lock the selected rows so that other users cannot lock or update the rows until you end your transaction.
Your query selects the current status for that task and locks the row, essentially on the assumption that you plan to update it, and don't want anyone else to be able to change it between your select and subsequent update.
So after you perform that query, no-one else can update the status of that task until you commit or rollback - kind of the opposite of what you're trying to achieve.
You could look at alert or queuing mechanisms, or you might want to investigate continuous query notification, though it could be overkill for this.
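For example, DBMS_ALERT lets a session block until another session signals a change, instead of polling. A minimal sketch, assuming the code that updates the status can be changed to signal (the alert name is invented):

-- Writer side: signal after updating the status; the alert is delivered on COMMIT.
BEGIN
  UPDATE tasks SET task_status = 3 WHERE task_id = 361;
  DBMS_ALERT.SIGNAL('TASK_361_STATUS', 'status changed');
  COMMIT;
END;
/

-- Reader side: block until the alert arrives or the timeout (in seconds) expires.
DECLARE
  l_message VARCHAR2(1800);
  l_status  INTEGER;  -- 0 = alert received, 1 = timed out
BEGIN
  DBMS_ALERT.REGISTER('TASK_361_STATUS');
  DBMS_ALERT.WAITONE('TASK_361_STATUS', l_message, l_status, timeout => 60);
  DBMS_ALERT.REMOVE('TASK_361_STATUS');
END;
/

That replaces the polling loop with one blocking call, so neither the server nor the query log gets spammed.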

Is it possible to match the "next" unmatched record in a SQL query where there is no strictly unique common field between tables?

Using Access 2010 and its version of SQL, I am trying to find a way to relate two tables in a query where neither table has strictly unique values, using concatenated fields that are mostly unique, and then matching each next unmatched record (measured by a date field or the record ID) in each table.
My business receives checks that we do not cash ourselves, but rather forward to a client for processing. I am trying to build a query that will match the checks that we forward to the client with a static report that we receive from the client indicating when checks were cashed. I have no control over what the client reports back to us.
When we receive a check, we record the name of the payor, the date that we received the check, the client's account number, the amount of the check, and some other details in a table called "Checks". We add a matching field which comes as close as we can get to a unique identifier to match against the client reports (more on that in a minute).
Checks:
ID  Name  Acct  Amt    Our_Date  Match
--  ----  ----  -----  --------  ----------
1   Dave  1001  10.51  2/14/14   1001*10.51
2   Joe   1002  12.14  2/28/14   1002*12.14
3   Sam   1003  50.00  3/01/14   1003*50.00
4   Sam   1003  50.00  4/01/14   1003*50.00
5   Sam   1003  50.00  5/01/14   1003*50.00
The client does not report back to us the date that WE received the check, the check number, or anything else useful for making unique matches. They report the name, account number, amount, and the date of deposit. The client's report comes weekly. We take that weekly report and append the records to make a second table out of it.
Return:
ID   Name  Acct  Amt    Their_Date  Unique1
---  ----  ----  -----  ----------  ----------
355  Dave  1001  10.51  3/25/14     1001*10.51
378  Joe   1002  12.14  4/04/14     1002*12.14
433  Sam   1003  50.00  3/08/14     1003*50.00
599  Sam   1003  50.00  5/11/14     1003*50.00
Instead of giving us back the date we received the check, we get back the date that they processed it. There is no way to make a rule to compare the two dates, because the deposit dates vary wildly. So the closest thing I can get for a unique identifier is a concatenated field of the account number and the amount.
I am trying to match the records on these two tables so that I know when the checks we forward get deposited. If I do a simple join using the two concatenated fields, it works most of the time, but we run into a problem with payors like Sam, above, who is making regular monthly payments of the same amount. In a simple join, if one of Sam's payments appears in the Return table, it matches to all of the records in the Checks table.
To limit that behavior and match the first Sam entry on the Return table to the first Sam entry on the Checks table, I wrote the following query:
SELECT return.*, checks.*
FROM return, checks
WHERE checks.id = (SELECT TOP 1 id
                   FROM checks
                   WHERE match = return.unique1
                   ORDER BY [our_date]);
This works when there is only one of Sam's records in the Return table. The problem comes when the second entry for Sam hits the Return table (Return.ID 599) as the client's weekly reports are added. When that happens, the query appropriately (for my purposes) lists only two of Sam's checks as processed, but uses the "TOP 1 ID" record to supply both rows' details from the Return table:
Checks_Return_query:
Checks.ID  Name  Acct  Amt    Our_Date  Their_Date  Return.ID
---------  ----  ----  -----  --------  ----------  ---------
1          Dave  1001  10.51  2/14/14   3/25/14     355
2          Joe   1002  12.14  2/28/14   4/04/14     378
3          Sam   1003  50.00  3/01/14   3/08/14     433
4          Sam   1003  50.00  4/01/14   3/08/14     433
In other words, the query repeats the Return table info for record Return.ID 433 instead of matching Return.ID 599, which is I guess what I should expect from the TOP 1 operator.
So I am trying to figure out how I can get the query to take the two concatenated fields in Checks and Return, compare them to find matching sets, then select the next unmatched record in Checks (with "next" being measured either by the ID or Our_Date) with the next unmatched record in Return (again, with "next" being measured either by the ID or Their_Date).
I spent many hours in a dark room turning the query into various joins and back again, looking at constructs like WHERE NOT IN, WHERE NOT EXISTS, FIRST(), NEXT(), MIN(), and MAX(). I am afraid I am way over my head.
I am beginning to think that I may have a structural problem, and may need to write the "matched" records in this query to another table of completed transactions, so that I can differentiate between "matched" and "unmatched" records better. But that still wouldn't help me if two of Sam's transactions are on the same weekly report I get from my client.
Are there any suggestions as to query functions I should look into for further research, or confirmation that I am barking up the wrong tree?
Thanks in advance.
I'd say that you really do need another table of completed transactions; it could be a temporary table.
Regarding your fear that "two of Sam's transactions are on the same weekly report", you can use a cursor to write the records one by one instead of a set-based operation.
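Another option that avoids a cursor is to rank the duplicates on each side and join on the rank, so the Nth occurrence of a Match value in Checks pairs with the Nth occurrence in Return. A sketch in Access SQL using the tables above (treat it as a starting point, not a tested solution):

SELECT c.*, r.*
FROM Checks AS c
INNER JOIN [Return] AS r ON c.Match = r.Unique1
WHERE (SELECT COUNT(*) FROM Checks AS c2
       WHERE c2.Match = c.Match AND c2.Our_Date <= c.Our_Date)
    = (SELECT COUNT(*) FROM [Return] AS r2
       WHERE r2.Unique1 = r.Unique1 AND r2.Their_Date <= r.Their_Date);

Each correlated subquery computes a row's occurrence number within its own table, so Checks.ID 4 (Sam's second check) pairs with Return.ID 599 (Sam's second deposit) rather than 433, and Checks.ID 5 stays unmatched until a third deposit arrives.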

How to write a stored procedure to select a particular column using functions

I have two tables
tblSTATUS
StatusID | PlanID | Description | EmailSubject | EmailFrom | EmailTo | Comment
---------|--------|-------------|--------------|-----------|---------|--------
1        | 8      | Approved    | aaa
2        | 7      | Rejected    | bbb
3        | 7      | Rejected    | ccc
4        | 42     | Rejected    | ccc
tblSTATUSREASON
PlanID | REASONS
-------|-------------
7      | failed
7      | not eligible
42     | not eligible
When I send an email to a particular person whose plan is (only) rejected, the reasons for the rejection are stored in tblSTATUSREASON, whose PlanID depends on tblSTATUS.
I have to retrieve everything into a grid view in C# code using stored procedures and display it for the user according to the Description.
Now my problem is that I can retrieve and display all the other columns, but I don't know how to display the reasons. I want to select [REASONS] from tblSTATUSREASON where the corresponding Description in tblSTATUS is 'Rejected', and I don't want to change my tables or columns. I need a SQL stored procedure for this particular thing.
You also need a condition on the description (here in the join, though a WHERE clause works too):
SELECT R.PlanID, R.REASONS
FROM   tblSTATUSREASON R
       INNER JOIN tblSTATUS S
               ON R.PlanID = S.PlanID
              AND S.Description = 'Rejected'
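Since the results are consumed from C# via a stored procedure, the join can be wrapped like this; a sketch assuming SQL Server (the procedure name is invented):

CREATE PROCEDURE dbo.GetRejectedReasons
AS
BEGIN
    SET NOCOUNT ON;
    -- Return reasons only for plans whose status description is 'Rejected'.
    SELECT R.PlanID, R.REASONS
    FROM   tblSTATUSREASON AS R
           INNER JOIN tblSTATUS AS S
                   ON R.PlanID = S.PlanID
    WHERE  S.[Description] = 'Rejected';
END;

From C#, call it with a SqlCommand whose CommandType is StoredProcedure and bind the result set to the grid view.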

Cumulative average number of records created for specific day of week or date range

Yeah, so I'm filling out a requirements document for a new client project and they're asking for growth trends and performance expectations calculated from existing data within our database.
The best source of data for something like this is our logs table, as we log pretty much every single transaction that occurs within our application.
Now, here's the issue: I don't have a whole lot of experience with MySQL when it comes to computing cumulative sums and running averages. I've thrown together the following query, which kind of makes sense to me, but it just keeps locking up the command console. The thing takes forever to execute, and there are only 80k records in the test sample.
So, given the following basic table structure:
id   | action | date_created
-----+--------+--------------------
1    | 'merp' | 2007-06-20 17:17:00
2    | 'foo'  | 2007-06-21 09:54:48
3    | 'bar'  | 2007-06-21 12:47:30
     ... thousands of records ...
3545 | 'stab' | 2007-07-05 11:28:36
How would I go about calculating the average number of records created for each given day of the week?
day_of_week | average_records_created
1 | 234
2 | 23
3 | 5
4 | 67
5 | 234
6 | 12
7 | 36
I have the following query which makes me want to murderdeathkill myself by casting my body down an elevator shaft... and onto some bullets:
SELECT
DISTINCT(DAYOFWEEK(DATE(t1.datetime_entry))) AS t1.day_of_week,
AVG((SELECT COUNT(*) FROM VMS_LOGS t2 WHERE DAYOFWEEK(DATE(t2.date_time_entry)) = t1.day_of_week)) AS average_records_created
FROM VMS_LOGS t1
GROUP BY t1.day_of_week;
Halps? Please, don't make me cut myself again. :'(
How far back do you need to go when sampling this information? This solution works as long as it's less than a year.
Because day of week and week number are constant for a record, create a companion table that has the ID, WeekNumber, and DayOfWeek. Whenever you want to run this statistic, just generate the "missing" records from your master table.
Then, your report can be something along the lines of:
select
DayOfWeek
, count(*)/count(distinct(WeekNumber)) as Average
from
MyCompanionTable
group by
DayOfWeek
Of course if the table is too large, then you can instead pre-summarize the data on a daily basis and just use that, and add in "today's" data from your master table when running the report.
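Generating the "missing" companion rows can be a single insert-select. A sketch, assuming the asker's VMS_LOGS table with the id and datetime_entry columns used elsewhere in the question, and MySQL's WEEK()/DAYOFWEEK() functions (which, as noted, only disambiguate within a single year):

-- Copy log rows not yet in the companion table, deriving the constant
-- week number and day of week once per record.
INSERT INTO MyCompanionTable (id, WeekNumber, DayOfWeek)
SELECT l.id,
       WEEK(l.datetime_entry)      AS WeekNumber,
       DAYOFWEEK(l.datetime_entry) AS DayOfWeek
FROM   VMS_LOGS AS l
       LEFT JOIN MyCompanionTable AS c ON c.id = l.id
WHERE  c.id IS NULL;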
I rewrote your query as:
SELECT x.day_of_week,
AVG(x.count) 'average_records_created'
FROM (SELECT DAYOFWEEK(t.datetime_entry) 'day_of_week',
COUNT(*) 'count'
FROM VMS_LOGS t
GROUP BY DAYOFWEEK(t.datetime_entry)) x
GROUP BY x.day_of_week
The reason your query takes so long is the inner SELECT: it runs once per outer row, so with 80k records you are essentially running 80,000 × 80,000 = 6,400,000,000 row comparisons. With a query like this, your best solution may be a timed reporting system, where the user receives an email when the query is done and the report is constructed, or logs in and checks the report afterwards.
Even with the optimization written by OMG Ponies (below), you are still looking at around the same amount of work.
SELECT x.day_of_week,
AVG(x.count) 'average_records_created'
FROM (SELECT DAYOFWEEK(t.datetime_entry) 'day_of_week',
COUNT(*) 'count'
FROM VMS_LOGS t
GROUP BY DAYOFWEEK(t.datetime_entry)) x
GROUP BY x.day_of_week