The problem is that I received a ticket from the AMS support team which I cannot debug, because with the given input parameters on the selection screen the program runs for about 10 hours, which is why it is scheduled as a background job.
The point of the program is that it should save some data in an .xls file on the application server.
The important thing is that for some input parameters on the selection screen the program WORKS (smaller date intervals, i.e. less data to work with), but now I have to explain to the consultant why the program cannot write that much data into the file on the application server.
To conclude: the background job runs a program that reads a lot of data from the DB, and in some cases, when there is an enormous amount of data, the program cannot open the file for output, so no data ends up in the .xls file.
My question is: how big is the limit for OUTPUT mode in OPEN DATASET, and why do I get an "error opening file" when I select bigger intervals on the selection screen?
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
  IGNORING CONVERSION ERRORS.
IF sy-subrc EQ 0. " PROGRAM FAILS HERE, SY-SUBRC EQ 3
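(Side note: to see the actual operating system error behind a failed OPEN DATASET, the statement's MESSAGE addition can fill a variable with the OS error text. A minimal sketch, with a hypothetical lv_oserr variable, not the original report:)

DATA lv_oserr TYPE string.
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
  IGNORING CONVERSION ERRORS
  MESSAGE lv_oserr.
IF sy-subrc <> 0.
  " lv_oserr now holds the OS message, e.g. "Permission denied"
  WRITE: / 'Error opening file:', lv_oserr.
ENDIF.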
The program works when we select less data from the DB. I have to answer the question: "Why does it fail when I grab a big amount of data?"
Error in dialog mode: [screenshot]
Error in background mode: [screenshot]
UPDATE: this answer assumes that the original direction ("because of data volume") was based on a misinterpretation of what happened, due to a simple coincidence. That happens often, but of course I may be wrong. The assumption is based on the latest OP comment: "What I found interesting is that, in the background job list, if there are 3 jobs for that user, two of them failed and their target server was server #2, but there is one job which succeeded in opening the file, whose target system is server #1; the difference is that that job had a duration of ~1 hour and not 10 hours like the two others."
When you run a background job and there is sometimes an error opening a file, it may be because your ABAP system has several application servers and at least one of them is not configured correctly to map the given folder to a "network" folder shared by all the other application servers.
To make sure, check which application server the failed job ran on by displaying its details (transaction code SM37). Then run the program twice with the same input parameters: once on the application server where a job failed, and once on the one where a job succeeded.
It should succeed and fail accordingly.
To run a program on a given application server, there are two solutions:
Either start a job by indicating the desired target application server (see the sketch after this list)
Or switch your SAP GUI user session to the application server you want:
Use SM51 to display the list of all application servers
Double-click the concerned server
That opens the overview screen in a new user session started on that server
Enter /NSE38 in the command field and start the program in dialog mode (it will run on that server).
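For the first option, a minimal ABAP sketch of scheduling a job on a specific application server via the TARGETSERVER parameter of JOB_CLOSE (the report name z_my_report and the server name are placeholders):

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_TEST_SERVER',
      lv_jobcount TYPE tbtcjob-jobcount.

* Create the job, attach the report as a step, then release it
* with the desired target application server.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

SUBMIT z_my_report VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname      = lv_jobname
    jobcount     = lv_jobcount
    strtimmed    = 'X'              " start immediately
    targetserver = 'myhost_SID_00'. " application server to test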
Now that it's almost certain this is the cause, you should ask the administrator to correct the issue: on the affected application server, he should add a "mapping" from the file folder to the shared folder (the same thing he did on the other application servers).
I am trying to generate 10,000 PDF reports into a Windows file share location using the SSRS data-driven subscription methodology. I found that it works when I run small batches, but it reliably fails when I submit 10,000 at a time. This behavior is unpredictable and I am not able to scale the solution. Example:
When I submit a load of 10,000, it generates 2,700 and fails the rest, but when I run the failed records in another batch, I get the PDFs. It sometimes fails with small batch sizes as well. No proper reason is logged.
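(When nothing useful is logged by the subscription itself, the ReportServer catalog's execution log is one place to look for per-run failures. A T-SQL sketch, assuming the default ReportServer database name:)

-- List the most recent non-successful report executions.
SELECT TOP 50 ItemPath, TimeStart, TimeEnd, Status, Format
FROM ReportServer.dbo.ExecutionLog3
WHERE Status <> 'rsSuccess'
ORDER BY TimeStart DESC;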
Thanks
I'm currently developing a file which is used to fill in a form based on the data from the sheet. The issue I'm facing is that the page gets refreshed for each entered field in one section of the code, and the refresh time varies from computer to computer.
My current solution is to induce a wait of approximately 5-6 seconds, which can be inefficient depending on the terminal used. Is there a way to simply pause execution until the page is done loading? Also, two versions of Excel are in use, essentially 2013 and 2003. Not sure if that's relevant, but I've had to sort out numerous compatibility issues.
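(Assuming the form is a web page driven through an InternetExplorer automation object, a minimal VBA sketch of waiting for the page to finish loading instead of using a fixed delay; the object name ie is a placeholder:)

' Loop until IE reports it has finished loading the page.
Do While ie.Busy Or ie.ReadyState <> 4   ' 4 = READYSTATE_COMPLETE
    DoEvents                             ' keep Excel responsive meanwhile
Loop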
I'm working in an environment that has 4 different SharePoint implementations. All 4 environments run the same chunk of code, but the List View Threshold is not configured to the same value, which results in some searches exceeding the threshold while others run without problems.
Is there a way to find the value of the threshold programmatically?
To answer my own question, yes.
Asking The Google for the same thing in PowerShell gave me enough clues to figure out this line of code: SPContext.Current.Web.Site.WebApplication.MaxItemsPerThrottledOperation
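(For completeness, the PowerShell route that provided the clue looks roughly like this; run it in the SharePoint Management Shell, and treat the URL as a placeholder:)

# Read the List View Threshold of a web application.
(Get-SPWebApplication "http://yourwebapp").MaxItemsPerThrottledOperation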
I am using SQL Express 2012 Advanced and Visual Studio Pro 2012.
I have developed a report that displays signatures (stored in a database) in the ReportViewer, one signature on each line. The report works fine as long as there aren't too many pages of data. Once the number of pages reaches around 100-200, the images fail to load. You can see where this occurs if you watch the page count go up: it counts very quickly by single numbers as the report renders in the ReportViewer, then slows down and starts counting in jumps. If I lower the size of the images saved in the database, I can get more images to load. So it seems to be a problem with the amount of memory the report is using.
Is there a cache setting for the ReportViewer? Better yet, is there a way to know programmatically that the images are failing to load, and to take an action at that point?
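(On the programmatic side, the WinForms ReportViewer exposes a ReportError event that fires when report processing or rendering fails; whether a silent image-load failure raises it is an assumption to verify. A C# sketch, with reportViewer1 and LogFailure as placeholder names:)

// React to rendering errors instead of showing the default message box.
reportViewer1.ReportError += (sender, e) =>
{
    LogFailure(e.Exception); // hypothetical handler for the failure
    e.Handled = true;        // suppress the viewer's default error display
};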