ROLL IN / OUT in SAP Memory Management

I have a question concerning the ROLL IN/OUT operations:
"Whenever a dialog step is executed, a roll action occurs between the roll buffer in the shared memory and the memory area, which is allocated according to ztta/roll_first in a dialog process. Then the area in the shared memory is accessed that belongs to this user context.
The following graphic displays the roll process performed by the dispatcher.
Roll-in: cross-user data is rolled in from the common resource in the work process (and is processed there).
Roll-out: User-specific data is rolled out from the work process in the common resource (after the dialog step has ended).
The common resource stands for the different SAP memory types."
According to this passage from SAP Help, Roll In = from shared to local and Roll Out = from local to shared. Am I getting that right?
I ask because this figure confused me a bit:
Is the figure wrong?

Yes. In the figure, the terms "Roll in" and "Roll out" need to be exchanged.


JEE 7 JSR 352 passing data from batchlet to a chunk-step

I have read the standard (and the javadoc) but still have some questions.
My use case is simple:
A batchlet fetches data from an external source and acknowledges the data (meaning that the data is deleted from the external source after acknowledgement).
Before acknowledging the data, the batchlet produces relevant output (an in-memory object) that is to be passed to the next chunk-oriented step.
Questions:
1) What is the best practice for passing data between a batchlet and a chunk step?
It seems that I can do that by calling jobContext#setTransientUserData
in the batchlet and then in my chunk step I can access that data by calling
jobContext#getTransientUserData.
I understand that both jobContext and stepContext are implemented in a thread-local manner.
What worries me here is the "Transient"-part.
What will happen if the batchlet succeeds but my chunk-step fails?
Will the "TransientUserData"-data still be available or will it be gone if the job/step is restarted?
For my use case it is important that the batchlet is run just once.
So even if the job or the chunk step is restarted, it is important that the output data from the successfully run batchlet is preserved - otherwise the batchlet would have to run once more. (I have already acknowledged the data and it is gone - so running the batchlet again would not help me.)
2) Follow-up question
In stepContext there is a couple of methods: getPersistentUserData and setPersistentUserData.
What is the intended usage of these methods?
What does the "Persistent"-part refer to?
Are these methods relevant only for partitioning?
Thank you!
/ Daniel
Transient user data is just transient, and will not be available during a job restart. A job restart can happen in a different process or on a different machine, so users cannot count on transient user data from the previous run being available at restart.
Step persistent user data are those application data that the batch job developers deem necessary to save/persist for purposes of restart, monitoring or auditing. They will be available at restart, but they are typically scoped to the current step (not shared across steps).
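For illustration, here is a minimal sketch of how the two APIs are typically used in a JSR 352 job. Class and helper names such as FetchBatchlet and fetchAndAcknowledge are made up, and each class would of course live in its own file:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.batch.api.AbstractBatchlet;
import javax.batch.api.chunk.AbstractItemReader;
import javax.batch.runtime.context.JobContext;
import javax.batch.runtime.context.StepContext;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class FetchBatchlet extends AbstractBatchlet {        // hypothetical first step
    @Inject JobContext jobContext;

    @Override
    public String process() throws Exception {
        List<String> items = fetchAndAcknowledge();          // hypothetical call to the external source
        // Transient user data: an in-memory reference, visible to later steps of this
        // execution only; it is gone after a JVM crash or a job restart.
        jobContext.setTransientUserData(items);
        return "COMPLETED";
    }

    private List<String> fetchAndAcknowledge() {
        return new ArrayList<>(Arrays.asList("record-1", "record-2"));
    }
}

@Named
public class ItemsReader extends AbstractItemReader {        // reader of the following chunk step
    @Inject JobContext jobContext;
    @Inject StepContext stepContext;
    private List<String> items;
    private int index;

    @Override
    @SuppressWarnings("unchecked")
    public void open(Serializable checkpoint) throws Exception {
        items = (List<String>) jobContext.getTransientUserData();   // null after a restart!
        if (items != null) {
            // Persistent user data: serialized to the job repository and handed back to
            // this step on restart, but scoped to this step only.
            stepContext.setPersistentUserData(new ArrayList<>(items));
        }
        index = (checkpoint == null) ? 0 : (Integer) checkpoint;
    }

    @Override
    public Object readItem() throws Exception {
        // A real reader would handle the restart case, e.g. fall back to persistent user data.
        return (index < items.size()) ? items.get(index++) : null;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return index;
    }
}

This does not remove the restart concern described above; it only shows where each kind of user data lives.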
From reading your brief description, I get the feeling that your two steps are too tightly coupled and you can almost consider them one single unit of work. You want them either both to succeed or both to fail in order to maintain your application state's integrity. I think that could be the root of the problem.

Time Domain Data for DTC storage in Autosar Diagnostic

Autosar diagnostics is implemented based on the UDS standard (ISO 14229).
As per that, once a DTC is logged, the snapshot data is stored as per UDS. Snapshot data is implemented via the freeze frame concept in the Autosar Dem module.
But I want to save some more information about the DTC apart from the snapshot data. I want to store data from 3 seconds before to 1 second after the DTC is confirmed, with a sampling period of 400 milliseconds. So I need to store 10 samples of data every time a DTC is confirmed.
I want to implement this time domain data in Autosar diagnostics. Can I do that? If yes, how?
Thanks.
We had a customer that wanted almost the same: 15 freeze frames, 12 before the failure, one at the failure, and two after that, with a similar cycle. We used a ring buffer updated cyclically, and a callout from the Dem (either DemCallbackEventStatusChanged() or DemCallbackDTCStatusChanged()) to stop the ring buffer after counting two more samples. After they were logged, we stored them in an extra NvM block. You might have several of these NvM blocks and link their number to the DemEvent (FF data?). E.g. the NvM block could be an NVM_DATASET, so you could use an index. When reading out DTCs, look up the assignment and read out the NvM dataset index.
Otherwise, you might find a way with StorageConditions: disable them at first reporting and enable them after the freeze frames are complete?
I don't know of a Dem feature that supports this directly, though.
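For what it's worth, the pre/post-trigger buffering described above is simple bookkeeping. Here is an illustrative sketch (written in Java purely for readability; a real integration would be C code sitting next to the Dem callout and the NvM write, and the class and method names are made up):

public class TriggeredRingBuffer {
    private final int[][] samples;            // ring of samples, e.g. ~8 pre- plus ~2 post-trigger at 400 ms
    private final int postTriggerCount;       // how many more samples to take after the trigger
    private int writeIndex = 0;
    private int remainingAfterTrigger = -1;   // -1 = not triggered yet

    public TriggeredRingBuffer(int preSamples, int postSamples, int sampleSize) {
        this.samples = new int[preSamples + postSamples][sampleSize];
        this.postTriggerCount = postSamples;
    }

    /** Called cyclically, e.g. every 400 ms, with the current signal values. */
    public synchronized void storeSample(int[] sample) {
        if (remainingAfterTrigger == 0) {
            return;                           // buffer frozen, waiting to be persisted
        }
        System.arraycopy(sample, 0, samples[writeIndex], 0, sample.length);
        writeIndex = (writeIndex + 1) % samples.length;
        if (remainingAfterTrigger > 0) {
            remainingAfterTrigger--;          // count down the post-trigger samples
        }
    }

    /** Called from the DTC-status-changed callout when the DTC is confirmed. */
    public synchronized void trigger() {
        if (remainingAfterTrigger < 0) {
            remainingAfterTrigger = postTriggerCount;
        }
    }

    /** True once the post-trigger samples have been collected and the buffer can go to NvM. */
    public synchronized boolean readyToPersist() {
        return remainingAfterTrigger == 0;
    }
}

After the trigger, the post-trigger samples overwrite the oldest entries, so the buffer ends up holding exactly the window around the confirmation, oldest first starting at writeIndex.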
I don't really understand where your problem is.
As you mentioned, snapshot data is stored together with the DTC. You can define the content of the snapshot data by referencing DIDs. So you need to define a new (internal) DID (in the Dcm) that provides your time domain data, and add this DID to the snapshot data (freeze frame) in the Dem.

The role of hardware vs software in a context switch

I have read the description in several popular OS textbooks of what happens during a context switch. None of them have left me completely satisfied, though the one quoted below (Tanenbaum) comes closest. There are a couple of questions it leaves me with; each is elaborated below.
Assume that user process 3 is running when a disk interrupt occurs. User process 3's program counter, program status word, and often one or more registers are pushed onto the (current) stack by the interrupt hardware. The computer then jumps to the address specified in the interrupt vector. That is all the hardware does. From here on, it is up to the software, particularly, the interrupt service procedure.
Why must the hardware save the PC and PSW, but the software can save everything else (see next quote below)?
I am guessing it is because once execution has jumped to the interrupt service procedure, the PC and PSW are lost (because they've been replaced by those for the service procedure). So the hardware must do it. Is this correct?
All interrupts start by saving the registers, sometimes in the process table entry for the current process. Then the information pushed onto the stack by the interrupt is removed and the stack pointer is set to point to a temporary stack used by the process handler.
The way this is worded (the word "removed" specifically) makes it look like the old process's registers are saved (by kernel software) to the process table, and then the PC and PSW which were pushed onto the old process's stack by the hardware (previous paragraph) are just discarded (again, the word "removed"). Obviously they can't be discarded, since we'll need them in the future, and it would also be stupid since we made a point of putting them on that stack!
I am guessing that when they say "removed" they mean "removed... and then put in the process table along with all the register and other info that the kernel already put there." So now the inactive process is ready to go again since (a) its process table is complete and (b) the temp stuff (PC/PSW) that was on top of its stack is cleaned away. Is this correct?
Question 1 - Yes, correct. The PC & PSW, and any other registers depending on the architecture, are destroyed when jumping into the interrupt handler routine. After the interrupt handler finishes, the stored information is used to restore the interrupted process's state as if nothing had happened.
Question 2 - Removing from the stack means moving the respective stack pointer back to the value it had before the removed data were pushed. The stack pointer is decremented or incremented depending on the direction of stack growth.

Batch printing exception

I get this error while printing multiple .xps documents to a physical printer
Dim defaultPrintQueue As PrintQueue = GetForwardPrintQueue(My.Settings.SelectedPrinter)
Dim xpsPrintJob As PrintSystemJobInfo
xpsPrintJob = defaultPrintQueue.AddJob(JobName, Document, False)
Documents are spooled successfully until a print job exception occurs.
The InnerException is Insufficient memory to continue the execution of the program.
The source is PresentationCore.dll
Where should I start searching?
When attempting to perform tasks that may fail due to temporary or permanent restrictions on some resource, I tend to use a back-off strategy. This strategy has been followed on things as diverse as message queuing and socket opens.
The general process for such a strategy is as follows.
set maxdelay to 16   # maximum time period between attempts
set maxtries to 10   # maximum attempts
set delay to 0
set tries to 0
while more actions needed:
    if delay is not 0:
        sleep delay
    attempt action
    if action failed:
        add 1 to tries
        if tries is greater than maxtries:
            exit with permanent error
        if delay is 0:
            set delay to 1
        else:
            double delay
        if delay is greater than maxdelay:
            set delay to maxdelay
    else:
        set delay to 0
        set tries to 0
This allows the process to run at full speed in the vast majority of cases but backs off when errors start occurring, hopefully giving the resource provider time to recover. The gradual increase in delays allows for more serious resource restrictions to recover and the maximum tries catches what you would term permanent errors (or errors that are taking too long to recover).
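If it helps, here is the same idea in compilable form (Java here purely to make the pseudocode concrete; the list of actions and the choice of RuntimeException as the failure signal are placeholders for whatever your environment throws):

import java.util.List;
import java.util.concurrent.TimeUnit;

public final class Backoff {

    private static final long MAX_DELAY_SECONDS = 16;   // maximum time period between attempts
    private static final int MAX_TRIES = 10;            // maximum attempts per action

    /** Runs each action in order, backing off exponentially while one keeps failing. */
    public static void runAll(List<Runnable> actions) throws InterruptedException {
        long delay = 0;
        int tries = 0;
        int i = 0;
        while (i < actions.size()) {                     // "while more actions needed"
            if (delay != 0) {
                TimeUnit.SECONDS.sleep(delay);
            }
            try {
                actions.get(i).run();                    // attempt action, e.g. spool one document
                delay = 0;                               // success: back to full speed
                tries = 0;
                i++;
            } catch (RuntimeException e) {               // placeholder for the failure you expect
                if (++tries > MAX_TRIES) {
                    throw e;                             // permanent error: give up
                }
                delay = (delay == 0) ? 1 : Math.min(delay * 2, MAX_DELAY_SECONDS);
            }
        }
    }
}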
I actually prefer this try-it-and-catch-failure approach to the check-if-okay-then-try one since the latter can still often fail if something changes between the check and the try. This is called the "better to seek forgiveness than ask permission" method, which also works quite well with bosses most of the time, and wives a little less often :-)
One particularly useful case was a program which opened a separate TCP session for each short-lived transaction. On older hardware, the closed sockets (those in TCP TIME_WAIT state) eventually disappeared before they were needed again.
But, as the hardware got faster, we found that we could open sessions and do work much quicker and Windows was running out of TCP handles (even when increased to the max).
Rather than having to re-engineer the communications protocol to maintain sessions, this strategy was implemented to allow graceful recovery in the event handles were starved.
Granted it's a bit of a kludge but this was legacy software approaching end-of-life, where bug fixes are often just enough to get it working and it wasn't deemed strategic enough to warrant spending a lot of money in fixing it properly.
Update: It may be that there's a (more permanent) problem with PresentationCore. This KB article states that there's a memory leak in WPF within .NET 3.5SP1 (of which your print driver may be a client).
If the backoff strategy doesn't fix your problem (it may not if it's a leak in a long lived process), you might want to try applying the hotfix. Me, I'd replicate the problem in a virtual machine and then patch that to test it (but I'm an extreme paranoid).
It was found by googling PresentationCore Insufficient memory to continue the execution of the program and checking the first link here. Search for the string "hotfix that relates to this issue" on that page.
Before adding a new job to the queue, you should check the queue state. See the PrintQueue.IsOutOfMemory property and related properties, which can be queried to verify that the queue is not in an error state.
Of course, pax's hint to use a defensive strategy when accessing resources like printers is best practice. For starters, you may want to put the line adding the job into a try block.
You might want to consider launching a new process to handle the printing of each document, the overhead should be low compared to the effort of printing the documents.

(Windows) Exception Handling: to Event Log or to Database?

I've worked in shops where I've implemented exception handling that logs to the event log, and in others to a table in the database.
Each has its merits, of which I can highlight a few based on my experience:
Event Log
Industry standard location for exceptions (+)
Ease of logging (+)
Can log database connection problems here (+)
Can build report and viewing apps on top of the event log (+)
Needs to be flushed every so often, if a lot is reported there (-)
Not as extensible as SQL logging [add custom fields like method name in SQL] (-)
SQL/Database
Can handle large volumes of data (+)
Can handle rapid volume inserts of exceptions (+)
Single storage location for exception in load balanced environment (+)
Very customizable (+)
A little easier to build reporting/notification off of SQL storage (+)
Different from where typical exceptions are stored (-)
Am I missing any major considerations?
I'm sure that a few of these points are debatable, but I'm curious what has worked best for other teams, and why you feel strongly about the choice.
You need to differentiate between logging and tracing. While the lines are a bit fuzzy, I tend to think of logging as "non developer stuff". Things like unhandled exceptions, corrupt files, etc. These are definitely not normal, and should be a very infrequent problem.
Tracing is what a developer is interested in. The stack traces, method parameters, that the web server returned an HTTP Status of 401.3, etc. These are really noisy, and can produce a lot of data in a short amount of time. Normally we have different levels of tracing, to cut back the noise.
For logging in a client app, I think that Event Logs are the way to go (I'd have to double check, but I think ASP.NET Health Monitoring can write to the Event Log as well). Normal users have permission to write to the event log, as long as you have the setup (which is run by an admin anyway) create the event source.
Most of your advantages for Sql logging, while true, aren't applicable to event logging:
Can handle large volumes of data:
Do you really have large volumes of unhandled exceptions or other high level failures?
Can handle rapid volume inserts of exceptions: A single unhandled exception should bring your app down - it's inherently rate limited. Other interesting events to non developers should be similarly aggregated.
Very customizable: The message in an Event Log is pretty much free text. If you need more info, just point to a text or structured XML or binary file log
A little easier to build reporting/notification off of SQL storage: Reporting is built in with the Event Log Viewer, and notification systems are, either inherent - due to an application crash - or mixed in with other really critical notifications - there's little excuse for missing an Event Log message. For corporate or other networked apps, there's a thousand and 1 different apps that already cull from Event Logs for errors...chances are your sysadmin is already using one.
For tracing, of which the specific details of an exception or errors is a part of, I like flat files - they're easy to maintain, easy to grep, and can be imported into Sql for analysis if I like.
90% of the time, you don't need them and they're set to WARN or ERROR. But, when you do set them to INFO or DEBUG, you'll generate a ton of data. An RDBMS has a lot of overhead - for performance (ACID, concurrency, etc.), storage (transaction logs, SCSI RAID-5 drives, etc.), and administration (backups, server maintenance, etc.) - all of which are unnecessary for trace logs.
I wouldn't log straight to the database. As you say, database issues become tricky to log :)
I would log to the filesystem, and then have a job which bulk-inserts from files to the database. Personally I like having the logs in the database in the long run, primarily for the scaling situation - I pretty much assume I'll have more than one machine running, and it's handy to be able to effectively have a combined log. (Each entry should state the machine it comes from, of course.)
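As a rough sketch of that bulk-insert job (the table name, column layout and pipe-delimited line format are assumptions, and in practice you would batch per file or per N rows and move processed files aside):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.List;

public class LogShipper {
    /** Reads one flat log file and batch-inserts its entries into the central log table. */
    public static void ship(Path logFile, Connection conn) throws Exception {
        String sql = "INSERT INTO app_log (machine, logged_at, level, message) VALUES (?, ?, ?, ?)";
        List<String> lines = Files.readAllLines(logFile);
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (String line : lines) {
                String[] f = line.split("\\|", 4);       // assumed format: machine|yyyy-MM-dd HH:mm:ss|level|message
                ps.setString(1, f[0]);
                ps.setTimestamp(2, Timestamp.valueOf(f[1]));
                ps.setString(3, f[2]);
                ps.setString(4, f[3]);
                ps.addBatch();
            }
            ps.executeBatch();                           // one round trip per batch, not per row
            conn.commit();
        }
    }
}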
Report and viewing apps can be done very easily from a database - there may be fewer log-specialized reporting tools out there at the moment, but pretty much all databases have generalised reporting functionality.
For ease of logging, I'd use a framework like log4net which takes a lot of the effort out of it, and is a tried and tested solution. Aside from anything else, that means you can change your output strategy with no code changes. You could even log to both the event log and the database if necessary, or send some logs to one place and some to the other. (I've assumed .NET here, but there are similar logging frameworks for many platforms.)
One thing that needs considering about event logging is that there are products out there which can monitor your servers' event logs (like Microsoft Operations Manager) and intelligently do notification, and gather statistics on their contents.
A "minus" of SQL-based logging is that it adds another layer of dependencies to your application, which may or may not always be acceptable. I've done both in my career. I once or twice even used a MSMQ based message queue to queue log events and empty the queue into a MSSQL database to eliminate the need for my client software to have a connection to the DB.
One note about writing to the event log: that requires certain permissions for your application users that in some environments may be restricted by default.
Where I'm at we do most of our logging to a database, with flat files as backup. It's pretty nice, we can do things like get an RSS feed for an app to watch for a few days when we make a change.