Determining query's progress (Oracle PL/SQL)

I am a developer on a web app that uses an Oracle database. However, often the UI will trigger database operations that take a while to process. As a result, the client would like a progress bar when these situations occur.
I recently discovered that I can query V$SESSION_LONGOPS from a second connection, and this is great, but it only works on operations that take longer than 6 seconds. This means that I can't update the progress bar in the UI until 6 seconds have passed.
I've done research on wait times in V$SESSION, but as far as I've seen, that doesn't cover waiting on the query itself.
Is there a way to get the progress of the currently running query of a session? Or should I just hide the progress bar until 6 seconds have passed?

Are these operations Pl/SQL calls or just long-running SQL?
With PL/SQL operations we can write messages with SET_SESSION_LONGOPS() in the DBMS_APPLICATION_INFO package and monitor those messages in V$SESSION_LONGOPS.
For this to work you need to be able to quantify the operation in units of work. These must be iterations of something concrete and numeric, not time. So if the operation inserts 10,000 rows you could split that into 10 batches. The totalwork parameter is the number of batches (i.e. 10), and you call SET_SESSION_LONGOPS() after every 1,000 rows to increment the sofar parameter. This allows you to render a thermometer of ten blocks.
These messages are session-based, but there's no automatic way of distinguishing the current message from previous messages from the same session and SID. However, if you assign a UID to the context parameter you can use that value to filter the view.
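A minimal sketch of that pattern, assuming the 10,000-row insert done in ten batches (the op name and the context value 42 are placeholders of my own):

DECLARE
  l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
  l_total  NUMBER := 10;  -- ten batches of 1,000 rows
BEGIN
  FOR i IN 1 .. l_total LOOP
    -- insert the next 1,000 rows here ...
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'Bulk insert',
      context   => 42,          -- your UID, used to filter the view
      sofar     => i,
      totalwork => l_total,
      units     => 'batches');
  END LOOP;
END;
/

The second connection can then poll:

select sofar, totalwork from v$session_longops where context = 42;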
This won't work for a single long running query, because there's no way for us to divide it into chunks.

I found this very useful:
dbms_application_info.set_module('MY Program', 'Kicking off ...');
..
dbms_application_info.set_action('Extracting data ...');
..
dbms_application_info.set_action('Transforming data ...');
..
You can monitor the progress using:
select module, action from v$session where sid = :yoursessionid;

I've done quite a lot of web development with Oracle over the years and found that most users prefer an indeterminate progress bar to a determinate bar that is inaccurate (a la pretty much any of Microsoft's progress bars, which annoy me no end). Unfortunately, there is no infallible way of accurately determining query progress.
Whilst your research into the long ops capability is admirable and would definitely help make the reported progress of the database query more reliable, it can't take into account the myriad other variables that may affect the web operation's overall progress (network load, database load, application server load, client-side data parsing, the user clicking a submit button 1,000 times, and so on).
I'd stick to the indeterminate progress method using JavaScript callbacks. It's much easier to implement and it will manage your users' expectations appropriately.

Using V$SESSION_LONGOPS requires setting TIMED_STATISTICS=true or SQL_TRACE=true. Your database schema must be granted the ALTER SESSION system privilege to do so.
I once tried using V$SESSION_LONGOPS with a complex, long-running query. It turned out that V$SESSION_LONGOPS may only show the progress of parts of the query, like full table scans, join operations, and the like.
See also: http://www.dba-oracle.com/t_v_dollar_session_longops.htm
What you can do is simply show the user that the query is still running. I implemented a <DIV> nested in a <TD> that gets longer with every status request sent by the browser. Status requests are initiated by window.setTimeout (every 3 seconds) and are AJAX calls to a server-side procedure. The status report returned by the server-side procedure simply says "we are still running". The progress bar's width (i.e. the <DIV>'s width) increases by 5% of the <TD>'s width each time and is reset to 5% after showing 100%.
For long-running queries you might track the time they took in a separate table, possibly with individual entries for varying WHERE clauses. You could use this to display the average time plus the time that has just elapsed in the client-side dialog.
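A minimal sketch of such a timing table (the names are mine, not from the answer):

-- One row per run, tagged by query variant
CREATE TABLE query_timings (
  query_tag   VARCHAR2(64),
  elapsed_sec NUMBER,
  run_at      DATE DEFAULT SYSDATE
);

-- Average duration to display next to the elapsed time
SELECT AVG(elapsed_sec) FROM query_timings WHERE query_tag = :tag;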
If you have a long running PL/SQL procedure or the like on the server side doing several steps, try this:
create a table for status messages
use a unique key for any process the user starts. Suggestion: client side's javascript date in milliseconds + session ID.
in case the long running procedure is to be started by a link in a browser window, create a job using DBMS_JOB.SUBMIT to run the procedure instead of running the procedure directly
write a short procedure that updates the status table, using PRAGMA AUTONOMOUS_TRANSACTION (see the sketch after this list). This pragma allows you to commit updates to the status table without committing your main procedure's updates. Each major step of your main procedure should have an entry of its own in this status table.
write a procedure to query the status table to be called by the browser
write a procedure that is called by an AJAX call if the user clicks "Cancel" or closes the window
write a procedure that is called by the main procedure after completion of each step: it queries the status table and raises an exception with an error number in the -20,000 range (e.g. via RAISE_APPLICATION_ERROR) if the cancel flag was set or the browser has not polled the status for, say, 60 seconds. In the main procedure's exception handler, look for this error, do a rollback, and update the status table.
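A minimal sketch of the status table and its autonomous-transaction writer (all names here are illustrative):

CREATE TABLE process_status (
  process_key VARCHAR2(64) PRIMARY KEY,  -- e.g. client JS date in ms + session ID
  step_name   VARCHAR2(100),
  cancel_flag CHAR(1) DEFAULT 'N',
  last_polled DATE,
  updated_at  DATE
);

CREATE OR REPLACE PROCEDURE update_status(p_key  IN VARCHAR2,
                                          p_step IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the main procedure
BEGIN
  UPDATE process_status
     SET step_name  = p_step,
         updated_at = SYSDATE
   WHERE process_key = p_key;
  COMMIT;  -- makes the status visible to the polling procedure immediately
END;
/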

Related

ABAP Program to notify Users X days before their user account will be disabled

I'm currently learning ABAP and trying to make an enhancement, but I'm confused about how to build on top of existing code. I have a program that runs periodically via a background job and disables user accounts after X days of inactivity (in this case 90 days, based on USR02~TRDAT).
I want to add an enhancement that notifies users via their e-mail address (matching usr02~bname to usr21~bname, then using usr21~persnumber and usr21~addrnumber to look up adr6~smtp_addr, giving the usr02~bname -> adr6~smtp_addr relationship) when their last logon date is 30, 15, 7, 5, 3, and 1 day away from the 90-day inactivity threshold, with a link to the SAP system to help them reactivate the account with ease.
I'm beginning to think that an enhancement might not be a good idea; perhaps I should instead create a new program and schedule it as a daily background job. Any guidance or information would be greatly appreciated.
Extract
CLASS cl_inactive_users_reader DEFINITION.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF ts_inactive_user,
        user_name          TYPE syst_uname,
        days_of_inactivity TYPE int1,
      END OF ts_inactive_user.
    TYPES tt_inactive_users TYPE STANDARD TABLE OF ts_inactive_user WITH EMPTY KEY.
    CLASS-METHODS read_inactive_users
      IMPORTING
        min_days_of_inactivity TYPE int1
      RETURNING
        VALUE(result)          TYPE tt_inactive_users.
ENDCLASS.
Then refactor
REPORT block_inactive_users.
DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 90 ).
LOOP AT inactive_users INTO DATA(inactive_user).
  " block user
ENDLOOP.
And add
REPORT warn_inactive_users.
DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 60 ).
LOOP AT inactive_users INTO DATA(inactive_user).
  CASE inactive_user-days_of_inactivity.
    " choose urgency
  ENDCASE.
  " send e-mail
ENDLOOP.
and run both reports daily.
Don't create a big ball of mud by squeezing new features into existing code.
From SAP wiki:
The enhancement concept allows you to add your own functionality to SAP's standard business applications without having to modify the original applications. To modify the standard SAP behavior as per customer requirements, we can use enhancement framework.
As per your description, it doesn't sound like a use case for an enhancement. It isn't an intervention in an existing process. The original process and your new requirement are two different processes with a common logical part: selecting users by days of inactivity. The two shouldn't rely on each other.
Structurally I think it is best to have a separate program for computing which e-mails need to be sent and when, and a separate program for actually sending them.
I would copy your original program to a new one and modify it a little, so that instead of disabling a user it records, per user, into some table: 1) the e-mail address, 2) the date when to send, 3) how many days are left (30, 15, 7, etc.), and 4) a status indicating whether the e-mail was sent. Initially you could even have one such job per period (30, 15, 7, etc.) and pass the period as a parameter (used inside instead of the 90).
Run this program daily as a job; it populates the table with the e-mail "tasks" that need to be sent today. It only adds new lines, so lines from yesterday stay in there.
The second program should just read that table, send the actual e-mails, and update the statuses. Run it daily as well.
This way you have:
overview: just check the table to see what's going on
control: if the e-mailer dies or hangs, you can restart it and it will continue where it left off; with statuses you avoid sending duplicate mails
safety: you won't send outdated e-mails if your mailer script ignores all tasks older than, say, 2 days
I want to clarify your confusion about the use of enhancements:
You would want to use an enhancement when 'something' happens or is about to happen in the system and you want to change that standard behavior.
That something, call it an event or a process, could for example be: an order is placed, a certain user logs onto the system, or a material has been or is about to be changed.
The change could be notifying another system of the order, or performing additional checks on the logged-on user, for example checking his/her GUI version and warning him/her if it is not up to date.
Ask yourself what process in the system the execution of your program or code depends on. Does anything need to happen before the program is executed? No; only time needs to elapse.
Even if you had found an enhancement you wanted to use: if the process carrying that enhancement were not run within 90 days, your mails would not be sent, because the enhancement would never be called.
Edit: That being said, if by 'enhancement' you mean 'building on my existing program' rather than 'creating a new one', that is absolutely not what the term means in the SAP universe.
I would extend the functionality of your existing program, since you already compute how many days are left and you would have only one job to maintain.

Oracle APEX Dynamic Action PageLoad Processing

Situation: I currently have a page that is a kind of dashboard. It contains several charts and reports, and it is the home page of the application. When the page first loads, a Request Type of 'First' is passed into the page. As the page renders, several page processes run: calling Oracle packages, setting values, etc. Once the page processes are complete, I believe the dynamic actions then fire. There are probably 10 DAs that fire on Page Load. The very first DA fires on Page Load but has a condition of REQUEST = 'First'. If the condition is met, the page is resubmitted with a Request Type of 'search'.
Question: How does APEX process dynamic actions, specifically on Page Load? If there are 10 dynamic actions, each with a different sequence number, does APEX handle them one at a time, or do they all execute at the same time regardless of sequence and conditions?
Reason for the question: I created a DA that executes an insert statement into a DB table. It executes on page load with no conditions and has sequence 0. Then I created another DA that does the same but has the very last sequence number. When I logged into the application and viewed the data inserted into the DB table, it showed both records four times. The first two were inserted when the request type was 'First'. Then the DA already mentioned fired and resubmitted the page, which inserted two more records. I was only expecting three records in the DB table: one for the request type 'First' and two for the request type 'search'.
Why are you using dynamic actions? Why not use processes that execute during page render?
Dynamic action execution order can't be guaranteed in that way, much as a JavaScript loop behaves like nothing you'd expect. JavaScript does its own thing.
If you're executing PL/SQL in your dynamic actions, each one can behave synchronously or asynchronously. Either way, they will be launched as separate processes. The only way to control order would be to put them in the same dynamic action with 'Wait for Result' set to Yes.
Which brings me back to: why use dynamic actions?

SSRS Data-Driven Subscription [based on static Subscription table] Not Picking Up Changes Made to Subscription Table

I have a .RDL report which I designed in BIDS and have deployed to my report server. The report asks for three parameters before viewing report: Year, Month and Customer ID. The report works great and does exactly what it is supposed to.
While I used to run each report individually because there were 2-3 customers, there are now 30+ customers who receive the report, so I wanted to switch to a more automated fulfillment method. After doing some research it appears that using Report Manager to create a "Data Driven Subscription" (DDS) with the "Windows File Share" option gives me the capabilities I need.
As part of creating the DDS, I created a table called [Subscription] which is a table containing one row for each customer receiving the report and has the following columns:
Year
Month
CustomerID
FileName
FileLocation
Overwrite
Format
...so through using the DDS Wizard in Report Manager, I was able to successfully set up a Data Driven Subscription (which is linked to various columns in the [Subscription] table) which creates a new report for each customer in the [Subscription] table, saves [and overwrites, if necessary] it in a location of my choosing as a PDF (specified in [Subscription].[FileLocation], or the FileLocation column of my table for each row), and runs every minute (I plan on changing frequency to once a week, eventually).
This works flawlessly, giving me a new set of 30 reports in the directory of my choosing, with each report having a name I assigned in the FileName column of my table. Exactly what I was looking for.
HERE'S THE PROBLEM: When I update the FileLocation or FileName (or anything, really) in the [Subscription] table, the subscription doesn't pick up the changes right away, and sometimes not at all. For example, I updated the [FileName] column for one customer from Report_711622 to SpecialReport_711622, so that customer's output file should be named SpecialReport_711622 while all the other reports stay Report_XXXXX (no Special prefix). But the file name of the report for customer 711622 remains the same!
It's almost as if the job only checks what it needs to do once a day and doesn't reference the [Subscription] table again until I leave for the night; when I come back in the morning it has picked up the change.
Since I am about to scale this process out to a large customer base using a different report, I need to be able to make edits to the [Subscription] table and have them picked up by the Data Driven Subscription immediately (or at least at a fixed interval that I can adjust, so that I know exactly when the change will be picked up).
Does anyone know what's causing the lag? How do I change it so that updates to the Subscription table get picked up regularly? I'm also having issues creating new DDSs on other reports (following the exact process outlined above): I've created the subscriptions to run every minute, and it says they are running, the number of outputs matches the number of customers, and there are 0 errors, but there are no files in the share I specified (or anywhere else I've looked, for that matter).
Any help would be greatly appreciated!
I think the answer lies in the mechanism SSRS uses. There are a few places where "lag" can occur.
The subscription is in fact a SQL Agent job which creates a record in the Event table. This table is a queue that SSRS checks for scheduled tasks.
There is a small amount of time between the moment the subscription creates the Event record and the moment SSRS reads it and starts creating the dataset for your DDS. Creating the DDS dataset takes some time, too. During this window the subscription is in the Pending state, and if you change anything in the data, the subscription will still use the old data as report parameters. So you will not notice your change until the next scheduled run.
Which brings me to the following: if a subscription is still running when the next schedule kicks in (likely, since yours runs every minute), the engine will not execute it but waits for the schedule after that, and so on. That's another source of lag, and a cause of missing reports for a given scheduled minute. The subscription processes reports sequentially, one row of your DDS recordset at a time. Again, this takes some time; you can see it in the subscription window when it says "# of # processed".
I suggest you look at the Event table in the ReportServer database during an execution. The ExecutionLog views (there are three) may also be interesting. A scheduled run shows up with RequestType = 1 and generates one record for each report, so you can see the exact timing and parameters of every report run by the subscription. You may be able to extract the data you need to resolve your other issues.
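For example (run against the ReportServer catalog; I'm assuming the standard table and view names here):

USE ReportServer;

-- Pending events: the subscription's SQL Agent job inserts a row here
SELECT EventType, EventData, TimeEntered FROM dbo.Event;

-- Execution history; RequestType = 1 marks subscription-driven runs
SELECT * FROM dbo.ExecutionLog WHERE RequestType = 1 ORDER BY TimeStart DESC;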
EDIT: Here is a more elaborate guide to DDS data and events
http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx
http://blogs.msdn.com/b/deanka/archive/2010/02/16/troubleshooting-subscriptions-part-ii-using-the-report-services-trace-log-file.aspx
Could this "Double-Hop" problem be the source of my issues? I'm so stuck on this one!
The Double-Hop Problem - MSDN Knowledgecast

How to show a progress bar in the status bar while a SQL query retrieves data from SQL Server?

I have a VB.NET Application.
It contains a query whose result is displayed in a DataGridView.
The query needs 2 to 3 minutes to execute.
Now I want to show a progress bar in the status bar that displays the data-retrieval progress. After completion, it should disappear from the status bar and a label should show "Completed".
If any expert can help me, I will be thankful.
Your question covers a lot of ground. The first thing that comes to my mind is: why do you have a query that takes so long to run, and can it be optimized? How big is the data set you are returning: 10 rows, hundreds, hundreds of thousands? If the dataset is large, are you using pagination? ADO.NET does provide an async interface model (see Async SqlCommand). One possibility is to issue your query asynchronously, then show your progress bar and close it when you get a completion notification.
Another option is to start the query on another thread, but that is probably not needed if you can use the async SqlCommand. However, if you want to show a progress bar that reports percent complete, as opposed to a "waiting" indicator, that will be challenging. For one thing, you have no clue how long the query will take; it is not a deterministic process such as reading a file from disk, where you can estimate file size, transfer speed, and so on. I do not think there is any mechanism in SQL Server to ask "how's my query going?". A poor and wasteful alternative would be to first run one query to get the count of records you expect, then divide the fetching query into smaller subqueries with some range parameter and execute them in chunks, as sketched below. Again, that is not efficient. Maybe someone knows of alternatives.
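A sketch of that chunked approach (the table, key, and parameters are hypothetical):

-- 1) Estimate total work up front
SELECT COUNT(*) FROM dbo.BigTable WHERE ReportDate = @Date;

-- 2) Fetch in key-ordered chunks, updating the progress bar after each
SELECT TOP (1000) *
FROM dbo.BigTable
WHERE ReportDate = @Date AND Id > @LastId
ORDER BY Id;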

Delete record 24 hours after insert

Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
I'm making a site (learning experience) where the user needs to click a validation link sent by e-mail once they register. I want the users to validate themselves within 24 hours.
I suppose what I'd need is a trigger, but I'm really not sure of the syntax, nor whether it is even possible.
I'm not sure of your schema, but I would do it a different way. I would store a date/time on the database record that corresponds to the validation link. When the user clicks the link, verify that the record's date/time is within 24 hours of the current time. If so, allow the validation; otherwise reject it.
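A sketch of that check in T-SQL (the schema is hypothetical):

-- The link is valid only if it was created less than 24 hours ago
SELECT CASE WHEN DATEDIFF(HOUR, CreatedAt, GETDATE()) < 24
            THEN 1 ELSE 0 END AS IsStillValid
FROM dbo.PendingUsers
WHERE ValidationToken = @Token;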
Q: Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
A: Sure. Write a "sqlcmd" script, wrap it in a .bat file, and invoke it from Windows Scheduled Tasks:
http://windows.microsoft.com/en-US/windows7/schedule-a-task
Alternatively, depending on your version, you could schedule the same SQL script from SQL Server Agent:
http://msdn.microsoft.com/en-us/library/ms189237.aspx
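The scheduled script itself can be a single statement (table and column names are hypothetical):

-- Purge registrations that were never validated within 24 hours
DELETE FROM dbo.PendingUsers
WHERE Validated = 0
  AND CreatedAt < DATEADD(HOUR, -24, GETDATE());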
Putting a different spin on things:
When the user clicks your link, you can check whether more than 24 hours have elapsed (according to MSSQL's clock). If so, reply with a "Too late" message rather than validating the entry.
In any case, you absolutely, completely, totally do NOT want to use a trigger!