SSRS data-driven subscription limitations - PDF

I am trying to generate 10,000 PDF reports into a Windows file share location using the SSRS data-driven subscription methodology. Small batches work, but it reliably fails when I submit 10,000 at a time. The behavior is unpredictable and I am not able to scale the solution. Example:
When I submit a load of 10,000 it generates about 2,700 and fails the rest, but when I rerun the failed records as a separate batch it produces the PDFs. It sometimes fails with small batch sizes as well. No proper reason is logged.
Thanks

Related

Error 3 after OPEN DATASET if big data volume is processed, none otherwise

The problem is that I received a ticket from the AMS support team which I cannot debug, because for the given input parameters on the selection screen the program loops for 10 hours; that is why it is set up as a background job.
The point of the program is that it should save some data to an xls file on the application server.
The important thing is that for some input parameters on the selection screen the program WORKS (smaller date intervals, i.e. less data to work with), but right now I have to explain to the consultant why the program cannot write that much data into the file on the application server.
To conclude: a background job is linked to the program, which grabs a lot of data from the DB; in some cases, when there is an enormous amount of data, the program cannot open the file for output, so no data ends up in the xls file.
My question is: how big is the limit for OUTPUT mode in OPEN DATASET, and why do I get an "error opening file" when I have bigger intervals on the selection screen?
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
IGNORING CONVERSION ERRORS.
IF sy-subrc EQ 0. "PROGRAM FAILS HERE, SY-SUBRC eq 3
The program works when we select less data from the DB. I have to answer the question: "Why does it fail when I grab a large amount of data?"
The error occurs both in dialog mode and in background mode (error screenshots omitted).
UPDATE: this answer assumes that the original direction ("it fails because of the data volume") was based on a misinterpretation of what happened, caused by a simple coincidence. That happens often, but of course I may be wrong. This assumption is based on the latest OP comment: "What I found interesting is that on the background job list, if there are 3 jobs for that user, two of them failed and their target server was server #2, but there is one job which succeeded in opening the file, and its target system is server #1; the difference is that that job had a duration of ~1 hour, not 10 hours like the two others."
When you run a background job and there is an error opening a file from time to time, it may be because your ABAP system has several application servers and at least one of them is not configured correctly to map a given folder to a "network" folder shared by all the other application servers.
To make sure, check which application server executed the failed job by displaying its details (transaction code SM37). Then run the program twice with the same input parameters: once on the application server where a job failed, and once on the application server where a job succeeded.
It should fail and succeed accordingly.
To run a program in a given application server, there are two solutions:
Either start a job by indicating the desired target application server
Or switch your SAP GUI user session to the application server you want:
Use SM51 to display the list of all application servers
Double-click the server concerned
That opens the overview screen in a new user session started on that server
Enter /NSE38 in the command field and start the program in dialog mode (it will run on that server)
Now that this cause is almost certain, you should ask the administrator to correct the issue: on the affected application server, he should add a "mapping" from the file folder to the shared folder (the same as was done on the other application servers).

How to generate two MHT files from Microsoft PSR?

I am using Microsoft PSR (Problem Steps Recorder) from a Java program to record user actions. I know there is a limit of 100 screen captures per PSR recording session. But is there a way to overflow the screen captures into a second MHT file? So if there are 200 screenshots, the first 100 would be in the first MHT file and the second 100 in the second.
Any suggestions on how to resolve this problem will be greatly appreciated!
We were also facing the same issue in .NET, and below is the workaround we used.
We split the functionality so that after every minute or so our system stops PSR and starts it again. The downside of this approach is that you get multiple MHT files, but the upside is that the 100-screenshot problem is resolved (the assumption being that the user won't be able to produce 100 captures in 60 seconds; you could also set it to 30 seconds or any other interval).
Microsoft doesn't provide any way to increase the PSR screenshot limit. Generating/storing the output in multiple files is the only option.
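For what it's worth, here is a minimal Java sketch of that stop/restart loop. It assumes psr.exe is on the PATH and relies on its documented /start, /stop, /output, /sc, /gui and /maxsc switches; the output folder, interval and part count are illustrative, not taken from the original setup.

import java.nio.file.Path;
import java.nio.file.Paths;

public class PsrRotator {
    public static void main(String[] args) throws Exception {
        Path outDir = Paths.get("C:\\temp\\psr"); // illustrative output folder
        int parts = 5;                            // how many rotations to record
        long intervalMs = 60_000;                 // restart PSR every minute
        for (int part = 1; part <= parts; part++) {
            Path mht = outDir.resolve("recording_part" + part + ".mht");
            // /start launches a recording session in the background;
            // /sc 1 turns screenshots on, /maxsc 100 is the per-session ceiling
            new ProcessBuilder("psr.exe", "/start", "/gui", "0", "/sc", "1",
                    "/maxsc", "100", "/output", mht.toString()).start();
            Thread.sleep(intervalMs);             // record for one interval
            // /stop ends the session and flushes this part's MHT file
            new ProcessBuilder("psr.exe", "/stop").start().waitFor();
        }
    }
}

Each /stop call writes one self-contained file, which gives you the multiple-MHT behavior described above.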
All the best.

A process monitor based on periodic SQL selects - does this exist or do I need to build it?

I need a simple tool to visualize the status of a series of processes (ETL processes, but that shouldn't matter). This process monitor needs to be customizable, with color coding for different status codes. The plan is to place the monitor on a big screen in the office, making any faults instantly visible to everyone.
Today I can check the status of these processes by running an SQL statement against the underlying tables in our Oracle database. The output of these queries is the above-mentioned status code for each process. I'm imagining using these SQL statements, run periodically (say, every minute or so), as the input to this monitor.
I've considered writing a simple web interface for doing this, but I'm thinking something like this should exist out there already. Anyone have any suggestions?
If you are just displaying on one workstation, another option is SQL Developer Custom Reports. You would still have to fire up SQL Developer and start the report, but custom reports have a setting so they can be refreshed at a specified interval (5-120 seconds). Depending on the 'richness' of the output you want, you can either:
Create a simple table report (style = Table) and paste in one of the queries you already use as a starting point; or
Create a PL/SQL block that outputs HTML via DBMS_OUTPUT.PUT_LINE statements (style = plsql-dbms_output) and get as creative as you like with formatting, colors, etc. using HTML tags in the output. I have used this to create bar graphs showing the progress of v$Long_Operations. A full description and screenshots are available here: Creating a User Defined HTML Report in SQL Developer.
If you just want to get some output moving, you can forego SQL Developer: schedule a process that uses your PL/SQL block to write HTML output to a file, and use a browser to display the generated output on your big screen. Alternatively, make the file available via a web server so others in your office can bring it up. Periodically regenerate the file, and make sure to add a refresh meta tag to the page so browsers will periodically reload it.
Oracle Application Express is probably the best tool for this.
I would say roll your own dashboard. It depends on your skill set, but I'd do a basic web app in Java (Spring or some MVC framework; I'm not a web developer, but I know enough to create a basic functional dashboard). Since you already know the SQL needed, it shouldn't be difficult to put together, and you can modify it as needed in the future. I would keep it simple (you don't need middleware, single sign-on, or fancy views/charts).
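As a rough sketch of that roll-your-own idea, a scheduled Java class could run the existing status query over JDBC and write a color-coded HTML page that the big-screen browser reloads via a refresh meta tag. The JDBC URL, credentials, table and column names below are placeholders; only the pattern matters.

import java.io.PrintWriter;
import java.sql.*;

public class EtlStatusPage {
    public static void main(String[] args) throws Exception {
        // requires the Oracle JDBC driver on the classpath
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitor", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT process_name, status_code FROM etl_process_status");
             PrintWriter out = new PrintWriter("status.html")) {
            // the refresh meta tag makes browsers reload the page every minute
            out.println("<html><head><meta http-equiv='refresh' content='60'>"
                    + "</head><body><table>");
            while (rs.next()) {
                String status = rs.getString("status_code");
                // color code each row: green for OK, red for anything else
                String color = "OK".equals(status) ? "#8f8" : "#f88";
                out.printf("<tr style='background:%s'><td>%s</td><td>%s</td></tr>%n",
                        color, rs.getString("process_name"), status);
            }
            out.println("</table></body></html>");
        }
    }
}

Run it from a scheduler (cron or Windows Task Scheduler) every minute and point the big-screen browser at the generated file.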

Generate and Save the files automatically to my local disk using Selenium

I have a Report Generator, an intranet web application that generates reports. There are about 100 reports, of PDF and Excel types, and I want to ensure that all of them are generated without any issue. This is a daily job.
Each report takes an average of 2 minutes, so the manual checking process takes 200 minutes.
As this is a testing process and I am not bothered about the contents of the files, I want to automate it.
We are using Selenium test cases to test our web application.
Is there any way to save these reports to my local disk using Selenium?
To answer your question: no. Browsers won't allow it unless the user explicitly chooses to save the file. But even if there were a way, I would advise against using it.
Even if you could do this by some means, it is HIGHLY NOT RECOMMENDED.
It would be a huge security threat, and it isn't allowed: JavaScript runs inside a security sandbox that won't permit this kind of thing.
What if the server sent a potentially dangerous file that might affect the client system?
See JavaScript security.
At best, you could display the file download prompt. The browser's security (and common sense :)) won't allow you to do anything more. If you absolutely must do unsupervised file downloads, you could use some kind of ActiveX control or a Java applet.

How can I speed up batch processing job in Coldfusion?

Every once in a while I am fed a large data file that my client uploads and that needs to be processed through CFML. The problem is that if I put the processing on a CF page, it runs into a timeout after 120 seconds. I was able to move the processing code to a CFC, where it seems not to have the timeout issue. However, at some point during processing it causes ColdFusion to crash, and CF has to be restarted. There are a number of database queries (5 or more, a mixture of updates and selects) required for each of the 8,000+ lines of the file, as well as other logic I provide in the form of CFML.
My question is: what would be the best way to work through this file? One caveat: I am not able to move the file to the database server and process it entirely within the DB. However, would it be more efficient to pass each line to a stored procedure that took care of everything? It would still be a lot of calls to the database, but nothing compared to what I have now. Also, what would be the best way to provide feedback to the user about how much of the file has been processed?
Edit:
I'm running CF 6.1
I just did a similar thing and use CF often for data parsing.
1) Maintain a file upload table (parent table). For every file you upload, keep a record of the file and what status it is in (uploaded, processed, unprocessed).
2) Use a temp table to store all the rows of the data file (child table). Import the entire data file into this temporary table; attempting to do it all in memory will inevitably lead to errors. Each row in this table links to a file upload table entry above.
3) Maintain a processing status. For each row of the data file you bring in, set a processed/unprocessed flag, and as you run through each line set it to "processed". This way, if the job breaks, you can start again from where you left off.
4) Use transactions. Use cftransaction if possible to commit all of it at once, or at least one line at a time (with your 5 queries), so that if something goes boom you don't have a row of data that is half computed/processed/updated.
5) Once you're done processing, set the file's entry in the table from step 1 to "processed".
Using the approach above, if something fails you can restart from where it left off, or at least have a clearer idea of where to start investigating, or at worst clean up your data. You also have a clear way of showing the user the status of the current upload: where it is, and where it stopped if there was an error.
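To make steps 2-4 concrete, here is a rough per-row loop expressed in plain JDBC (the table and column names, the JDBC URL, and the file id are made up for illustration):

import java.sql.*;

public class RowProcessor {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://dbhost/uploads", "user", "secret")) {
            con.setAutoCommit(false); // one transaction per row (step 4)
            try (PreparedStatement sel = con.prepareStatement(
                     "SELECT row_id, line_text FROM upload_rows "
                     + "WHERE file_id = ? AND processed = 0");
                 PreparedStatement mark = con.prepareStatement(
                     "UPDATE upload_rows SET processed = 1 WHERE row_id = ?")) {
                sel.setInt(1, 42); // link to the parent file-upload entry (step 1)
                try (ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        processLine(con, rs.getString("line_text")); // your 5 queries
                        mark.setInt(1, rs.getInt("row_id"));         // flag it (step 3)
                        mark.executeUpdate();
                        con.commit(); // a crash leaves no half-processed row
                    }
                }
            }
        }
    }

    static void processLine(Connection con, String line) throws SQLException {
        // the per-line updates and selects go here
    }
}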
If you have any questions, let me know.
Other thoughts:
You can increase timeouts (e.g., per request via cfsetting requesttimeout), give the VM more memory, and move to 64-bit, but all of those only increase the capacity of your system so much. It's a good idea to set the timeout per call, and to do it in conjunction with the above.
Java has some neat file-processing libraries that are available as CFCs. If you run into a lot of issues with speed, you can use one of those to read the file into a variable and then into the database.
If you are dealing with XML, do not use ColdFusion's XML parsing. It works well for smaller files but has fits when things get bigger. There are several CFCs out there (check RIAForge, etc.) that wrap some excellent Java libraries for parsing XML data. You can then build a cfquery manually with this data if need be.
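For reference, the streaming approach those Java libraries rely on looks roughly like this with the StAX API that ships with Java (the file name and element name are placeholders):

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class BigXmlScan {
    public static void main(String[] args) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("big.xml"));
        while (r.hasNext()) {
            // pull one event at a time; the whole document is never in memory
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && "row".equals(r.getLocalName())) {
                // read the fields you need here, then move on
            }
        }
        r.close();
    }
}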
It's hard to tell without more info, but from what you have said I'll throw out three ideas.
First, with so many database operations, it's possible that you are generating too much debugging output. Make sure that, under the Debug Output settings in the administrator, the following settings are turned off:
Enable Robust Exception Information
Enable AJAX Debug Log Window
Request Debugging Output
The second thing I would do is look at those DB queries and make sure they are optimized. Make sure selects are using indexes, etc.
The third thing I would suspect is that keeping the whole file in memory is probably suboptimal.
I would try looping through the file using file looping:
<cfloop file="#VARIABLES.filePath#" index="VARIABLES.line">
<!--- Code to go here --->
</cfloop>
Have you tried an event gateway? I believe those threads are not subject to the same timeout settings as page request threads.
SQL Server Integration Services (SSIS) is the recommended tool for complex ETL (Extract, Transform, and Load) work, which is what this sounds like. (It can be configured to access files on other servers.) The question might be: can you work up an interface between ColdFusion and SSIS?
If you can, upgrade to CF8 and take advantage of cfloop file="", which gives you greater speed and does not load the whole file into memory (which is probably the cause of the crashing).
Depending on the situation you are encountering you could also use cfthread to speed up processing.
Currently, an event gateway is the only way to get around the timeout limits of an HTTP request cycle. CF does not have a way to process CF pages offline; that is, there is no command-line invocation (one of my biggest gripes about CF: very little offline processing).
Your best bet is to use an Event Gateway or rewrite your parsing logic in straight Java.
I had to do the same thing. Ben Nadel has written a bunch of great articles on using Java file I/O to let you read and write files more speedily, etc.
It really helped improve the performance of our CSV importing application.
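The core of that Java file I/O approach is simply streaming the file line by line (the file name and handler below are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;

public class CsvStream {
    public static void main(String[] args) throws Exception {
        // stream the upload line by line; the whole file never sits in memory
        try (BufferedReader in = new BufferedReader(new FileReader("upload.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {
                handleLine(line); // hand off to the per-row DB work
            }
        }
    }

    static void handleLine(String line) {
        // per-line processing goes here
    }
}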