Error 3 after OPEN DATASET when a big data volume is processed, none otherwise - abap

I received a ticket from the AMS support team which I cannot debug, because for the given input parameters on the selection screen the program runs for 10 hours; that is why it is scheduled as a background job.
The point of the program is that it should save some data in an XLS file on the application server.
The important thing is that for some input parameters the program WORKS (smaller date intervals, hence less data to work with), but right now I have to explain to the consultant why the program cannot write that much data into the file on the application server.
To summarize: a background job runs the program, which reads a lot of data from the database; in some cases, when the amount of data is enormous, the program cannot open the file for output, so no data ends up in the XLS file.
My question is: how big is the limit for OUTPUT mode in OPEN DATASET, and why do I get an "error opening file" when I select bigger intervals on the selection screen?
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
IGNORING CONVERSION ERRORS.
IF sy-subrc EQ 0. "PROGRAM FAILS HERE, SY-SUBRC eq 3
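A note on diagnosing this: the MESSAGE addition of OPEN DATASET fills a variable with the operating-system error text (for example "No such file or directory" or "Permission denied"), which is far more telling than sy-subrc alone. A minimal sketch, reusing lv_file from the snippet above (lv_msg and the WRITE logging are additions for illustration):

DATA lv_msg TYPE string.

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
     IGNORING CONVERSION ERRORS
     MESSAGE lv_msg.                "filled with the OS error text on failure
IF sy-subrc <> 0.
  " Put the OS message in the job log - it usually names the real cause
  " (missing directory, permissions, full file system, ...).
  WRITE: / 'OPEN DATASET failed:', lv_msg.
ENDIF.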
The program works when we select less data from the DB. I have to answer the question: "Why does it fail when I grab a big amount of data?"
Error in dialog mode: (screenshot)
Error in background mode: (screenshot)

UPDATE: this answer assumes that the original direction ("because of data volume") was based on a misinterpretation of what happened, due to a simple coincidence. That happens often, but of course I may be wrong. The assumption is based on the latest OP comment: "What I found interesting, that on the background job list, if there are 3 jobs for that user, two of them have failed and the target server was the 2nd one, but there is one job which succeeded in opening the file, his target system is system #1, but the difference is that that job had duration of ~1 hour and not 10 hours like two others."
When you run a background job and there's an error opening a file from time to time, it may be due to the fact that you have an ABAP system with several application servers, and that one of them (at least) is not configured correctly to map a given folder to a "network" folder shared by all other application servers.
To make sure, you can see in which application server the failed job has been executed, by displaying its details (transaction code SM37). Then run the program twice, once in the application server where a job failed, once in the application server where a job succeeded, with the same input parameters.
It should succeed and fail accordingly.
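If you'd rather check this in code than in SM37, the background-job overview table can be queried directly. A hedged sketch: the table name TBTCO and the fields STRTDATE / EXECSERVER are quoted from memory, so verify them in SE11 before relying on this; the job name is a placeholder:

" List which application server executed each recent run of the job.
" TBTCO / STRTDATE / EXECSERVER are assumptions to verify in SE11.
SELECT jobname, jobcount, strtdate, execserver
  FROM tbtco
  WHERE jobname = 'ZMY_EXPORT_JOB'     "placeholder job name
  INTO TABLE @DATA(lt_runs).

LOOP AT lt_runs INTO DATA(ls_run).
  WRITE: / ls_run-jobname, ls_run-strtdate, ls_run-execserver.
ENDLOOP.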
To run a program in a given application server, there are two solutions:
Either start a job by indicating the desired target application server (see the sketch just after this list)
Or switch your SAP GUI user session to the application server you want:
Use SM51 to display the list of all application servers
double click the concerned server
that opens the overview screen in a new user session started in that server
Enter /NSE38 in the command field and start the program in dialog (it will run in that server).
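For the first option, the target server can also be set programmatically when scheduling the job, via the standard JOB_OPEN / JOB_CLOSE function modules (JOB_CLOSE's targetserver parameter pins the server). A minimal sketch: the job name, report name and server name are placeholders, and error handling after each call is omitted for brevity:

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZMY_EXPORT_JOB',
      lv_jobcount TYPE tbtcjob-jobcount.

" Create the job, attach the report as its single step...
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

SUBMIT zmy_export_report               "placeholder report name
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

" ...then release it immediately on an explicit target server.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount     = lv_jobcount
    jobname      = lv_jobname
    strtimmed    = 'X'                 "start immediately
    targetserver = 'appserver02_SID_00'. "server name as shown in SM51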
Once it's confirmed that this is the cause, ask the administrator to correct the issue: on the failing application server, he should add a "mapping" from the file folder to the shared folder (the same as was done on the other application servers).

Related

%ABAT-W-CREPRCERR in ActiveBatch 11

Our client uses an automation software called ActiveBatch (by Advanced Systems Concepts, Inc.). They're currently using ActiveBatch v8 and are now in the process of migrating the automated jobs to the newer ActiveBatch v11.
Most of the jobs have no problems coping with the newer software and are running OK as of this writing. However, there is one job that is unable to run, or rather, to initialize in the first place. This job runs OK on v8. Whenever it is run on v11, it produces an error message:
%ABAT-W-CREPRCERR, error creating batch process for job %1
Quite self-explanatory; it means the process for the particular job was not created. The user manual says the job's log file might explain more about why the error occurred. The problem is, the log file is not very helpful, as it only shows the magic numbers below:
ï»¿
Further reading states that it's the Byte Order Mark for UTF-8. I don't know much about this stuff, but since the log file only contains those characters, I'm not sure they're helpful at all.
Another thing: if I run the job manually (running the EXE via Windows Explorer), no problems are encountered and it succeeds. The job, by the way, is a PowerBuilder 9 application.

MS Access occasional "Cannot open any more databases" error

I have an MS Access (2013) application with a split database. Everything seems to run smoothly, except that occasionally I will get Error 3048: Cannot open any more databases.
The error occurs when the front end tries to run VBA code that pulls data from the back end; it stalls on any line with a Set DB = OpenDatabase() or DoCmd.RunSQL() command.
The strange thing is that this error seems to be time based. I can access the back end hundreds of times without error if I do it quickly enough but after some time has passed (~1 hr) the error shows up. In fact, I can open the application and leave it running in the background (with no code running) then go back into it after an hour and I will get the error the first time the code tries to open the back end.
I've searched the length and breadth of this site and Google for solutions, so I know this error has been addressed before. To save people reiterating the usual fixes, I will list what I've tested so far with no success:
Recordset limit: I'm not leaving any recordsets open; every time I open one I make sure to close it, and the same goes for the databases. All my requests to the back end are done via 3 or 4 VBA functions, each of which has a Rec.Close or DB.Close corresponding to every OpenRecordset() and OpenDatabase(), and I never have more than 2 recordsets open at a time.
Control limit: I have 151 controls on the biggest form in the application so I should be below the limit (I believe this is 245 for a single form?)
Corrupt database: I've copied all my forms and code to a new Access database and run a Compact and Repair.
Machine Issue: I've tested the application on several machines and reproduced the same error.
Anyway, with most of the above causes I would expect the application not to run at all, rather than running fine for a set amount of time and then crashing.
Some other points of note:
Citrix users: The users are split between those on normal Windows machines, who are experiencing this error, and others using the application through virtual desktop software (Citrix), who are having no issues. Unfortunately, I don't know enough about this virtual desktop to work out what that implies.
Background vs Foreground: Some users have claimed that the application only crashes if it has been running for a long time AND they switch over to another program and switch back. I've confirmed that simply switching between the application and other programs doesn't cause it to crash but haven't yet been able to leave it running in the foreground long enough to confirm if it crashes without switching between programs.
I've been struggling with this for days, anyone able to help me out?

How to protect files from power failure (Win7)

My VB.NET application uses a config.xml file to store all configuration. The file is written often (every 20 seconds or so), as it also stores the state of the user's session, etc. From time to time, users report that this file suddenly has zero length or is full of unknown characters, and the application consequently won't run.
I tracked the problem down and found that this happens when the file is written shortly before a power failure or even a normal shutdown (!). I tested it with Notepad: I edited the file and unplugged the power cord immediately after saving; after reboot, the file was corrupted. The file is usually empty after initiating a system shutdown, or full of zeros (bitwise) after a power outage. So this doesn't look like an issue with my application, but rather a general OS or disk/filesystem issue.
This is happening on more than ten different PCs. All have Win7 and NTFS; some have SSDs, some not.
Can this be prevented? Can I ensure that the file contains either the data from before the edit or the data from after the edit, but never ends up in a corrupted state?
You can't prevent the user from unplugging the power cord, or the system from shutting down unexpectedly.
I believe you can work around it by storing multiple (at least two) files and, if the last one is corrupted, using the previous one.
EDIT: This means you should not save to the same file on every timer tick. Because you didn't provide details, I can't suggest an exact code change, but you can define a Boolean variable that you negate on each timer tick and then use it to switch the file you save to:
' Alternate the target file on each save, so that an interrupted
' write can corrupt at most one of the two copies.
blnUseSecondaryFile = Not blnUseSecondaryFile
If blnUseSecondaryFile Then
    savepath = "file2.xml"
Else
    savepath = "file1.xml"
End If

Determining what other applications are in use

I was wondering if it is possible to determine within my program what other apps are currently being used by the user. For example, the user might be using Safari and Mail.
From there, I was curious to see if I can determine whether the user is actively using the open app. If the computer is sitting idle and the user is not using it, I would determine that none of the open apps are currently in use. If a user is actively searching the web, I would determine that Safari (or whatever other internet browser) is currently being used.
From there, I was wondering if it is possible to see what the user is doing in the app... well this one is mostly for internet browsers. I want to know which website the user is currently on. If this isn't possible from a normal application, would it be possible to do in a web browser extension?
Thanks for the help!
You can get some hints from the "ps" Unix command, but not the full answer to your question:
ps aux
Look at the "STAT" column, roughly:
R means running
+ means interactive
More precisely, from the ps man page:
state The state is given by a sequence of characters, for example, ``RWNA''. The first character indicates the run state of the process:
I Marks a process that is idle (sleeping for longer than about 20 seconds).
R Marks a runnable process.
S Marks a process that is sleeping for less than about 20 seconds.
T Marks a stopped process.
U Marks a process in uninterruptible wait.
Z Marks a dead process (a ``zombie'').
Additional characters after these, if any, indicate additional state information:
+ The process is in the foreground process group of its control terminal.
< The process has raised CPU scheduling priority.
> The process has specified a soft limit on memory requirements and is currently exceeding that limit; such a process is (necessarily) not swapped.
A the process has asked for random page replacement (VA_ANOM, from vadvise(2), for example, lisp(1) in a garbage collect).
E The process is trying to exit.
L The process has pages locked in core (for example, for raw I/O).
N The process has reduced CPU scheduling priority (see setpriority(2)).
S The process has asked for FIFO page replacement (VA_SEQL, from vadvise(2), for example, a large image processing program using virtual memory to sequentially address voluminous data).
s The process is a session leader.
V The process is suspended during a vfork(2).
W The process is swapped out.
X The process is being traced or debugged.
From there, you get a few indications of what the user is running, BUT it doesn't mean he is interacting with it.
False positives: background servers doing heavy batch processing (they will also be in the R state).
Another way to look at it is to find which application is currently in the foreground.
Try this, from the shell:
osascript -e 'tell application "System Events"' -e 'set frontApp to name of first application process whose frontmost is true' -e 'end tell'
You can let it run:
while sleep 5; do osascript -e 'tell application "System Events"' -e 'set frontApp to name of first application process whose frontmost is true' -e 'end tell'; done
And even with this, try it and then press F11 and F12: you'll see that while in Exposé / Dashboard or the like, this doesn't get refreshed correctly...
See:
http://alvinalexander.com/mac-os-x/applescript-unix-mac-osx-foreground-application-result

Cache data in SQL CE database

Background
I have an SQL CE database that is constantly updated (every second).
I have a (web) application that allows a user to look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window.
On that form there are "print" and "download" buttons that will either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go back to the DB to get the latest data for that.
Details
The SQL CE database is exposed through a WCF web service.
A snapshot consists of up to 500 records, 10 columns each. An expiration time of 2 hours on the snapshot is sufficient.
It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
Losing a snapshot is not a big deal; the user can simply generate a new one.
The database is accessed by a self-hosted WCF web service using LINQ to SQL.
The web site is ASP.NET MVC hosted on UltiDev Cassini.
The database and web site will most likely be on the same box when deployed. The entire app is intranet-bound.
Problem
I need to cache the snapshot of the data at the moment the user presses the "take a snapshot" button, so that I can use the same data to generate the print page or the file for download.
Solution 1:
Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.
Solution 2:
Cache the snapshot in-memory on either DB server, or web server.
Question:
Is there anything wrong with proposed solutions? Any different solution suggestions?
A consideration is the typical usage pattern: do most snapshots eventually end up being printed, exported, or both?
If such is the case, we might as well "get it in memory" (temporarily) in the form of a non-blocking (asynchronous) select statement from the device to the server. In this fashion the data will "be there", or be well on its way, when the user decides to use it.
If, on the other hand, many snapshots end up not being used, Solution #1 seems quite OK (maybe the table could be named after the account/user, hence guaranteeing "self clean-up" based on the number of snapshots a user can maintain at a given time - though it seems to be just one, with even some tolerance for losing it occasionally).
500 rows by 10 columns isn't really very large at all. For the sake of simplicity in this case, I might just generate the CSV data at the same time I generate the initial snapshot page, and then place the CSV data in a hidden field in the snapshot page. The "Print" and "Download CSV" buttons would then POST the form that contains the CSV data to a Print page that generates the printable version from the posted CSV data, or a page that streams the CSV directly back to the client's browser, respectively. This way, at least, you wouldn't have any clean-up issues to deal with, and you avoid having to cache something on the server (either in the cache proper or in the database) that might well end up never being used at all.
If you cached the CSV data in a hidden field client-side, you could even handle both the printing and the CSV display completely client-side with javascript, although I don't know if that's worth the trouble or not.