Keeping SAP RFC data between consecutive RFC calls using JCo

I was wondering whether it is possible to keep an RFC function called via JCo open in SAP memory, so I can cache data between calls. This is the scenario I have in mind:
Suppose a simple function increments a number. The number starts at 0, so the first time I call it with import parameter 1, it should return 1.
The second time I call it, it should return 2, and so on.
Is this possible with JCo?
If I hold on to the function object and make two successive calls, it always returns 1.
Can I do what I'm describing?

Designing an application around the stability of a certain connection is almost never a good idea (unless you're building stability monitoring software). Build your software so that it just works, no matter how often the connection is closed and re-opened, and no matter how often the session is initialized and destroyed on the server side. You may want to persist some state in the database, or you may need (or want) to use the shared memory mechanisms provided by the system. All of this is inconsequential for the RFC handling itself.
Note, however, that you may need to ensure that a sequence of calls happens in a single context or "business transaction". See this question and my answer for an example. These contexts are short-lived and allow for what you probably intended to get in the first place - just be aware that you should not design your application so that it has to keep these contexts alive for minutes or hours.

The answer is yes. To make it work, you need to take care of two things:
The ABAP code needs to store its variable in ABAP session memory. A variable in the function group's global section will do that; alternatively, you could use the standard ABAP technique EXPORT TO MEMORY / IMPORT FROM MEMORY.
JCo needs to keep the user session between calls. By default, JCo resets the backend user session after every call, which of course destroys all data stored in that session's memory. To prevent this, use JCoContext.begin() and JCoContext.end() to get a stateful RFC connection that keeps the user session alive on the backend side.
Sample code:
JCoDestination dest = ...
JCoFunction func = ...
try {
    JCoContext.begin(dest);
    func.execute(dest); // will return "1"
    func.execute(dest); // will return "2"
}
catch (JCoException e) {
    // Handle network problems, ABAP exceptions, SYSTEM_FAILUREs
}
finally {
    // Make sure to release the stateful connection, otherwise you have
    // a resource leak in your program and on the backend side!
    JCoContext.end(dest);
}
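For completeness, a hedged sketch of how the incremented value would actually be read after each call inside the block above; the parameter names INCREMENT and COUNTER are hypothetical and depend on your function module's signature:
// set the import parameter before executing (hypothetical parameter names)
func.getImportParameterList().setValue("INCREMENT", 1);
func.execute(dest);
int counter = func.getExportParameterList().getInt("COUNTER"); // 1 on the first call, 2 on the second,
// because the ABAP session (and the function group's global data) survives between calls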

Related

How does the distributed executor service in Redisson work with regards to scoping / closuring?

If I push a Runnable to a Redisson distributed executor service, what rules am I required to abide by?
Surely I can't have free rein; I don't see how that would be possible. Yet it isn't mentioned in the docs at all, nor are any rules apparently enforced by the API, like R extends Serializable or similar.
If I pass this runnable:
Runnable task = () -> {
    // What can I access here, and have it be recreated in whatever server instance picks it up later for execution?
    // newlyCreatedInstanceCreatedJustBeforeThisRunnableWasCreated.isAccessible(); // ?
    // newlyComplexInstanceSuchAsADatabaseDriverThatIsAccessedHere.isAccessible(); // ?
    // transactionalHibernateEntityContainingStaticReferencesToComplexObjects....
    // I think you get the point.
    // Does Redisson serialize everything within this scope?
    // When it is recreated later, surely I can not have access to those exact objects, unless they run on the same server, right?
    // If the server goes down and up, or another server executes this runnable, then what happens?
    // What rules do we have to abide by here?
};
Also, what rules do we have to abide by when pushing something to an RQueue, an RBlockingDeque, or Redisson live objects?
It is not clear from the docs.
Also, it would be great if a link to a single-page documentation site could be provided. The one here requires a lot of clicking and navigation:
https://github.com/redisson/redisson/wiki/Table-of-Content
https://github.com/redisson/redisson/wiki/9.-distributed-services#933-distributed-executor-service-tasks
You can have access to RedisClient and taskId; the full state of the task object will be serialized.
A task retry setting is applied to each task: if a task hasn't been executed within 5 minutes of its start, it will be requeued.
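As a rough illustration of those rules (the class and key names below are hypothetical, not from the question): the task object itself must be serializable, it should carry only plain serializable state, and Redisson can re-inject the client into it on the executing node via the @RInject annotation:

import java.io.Serializable;
import java.util.concurrent.Callable;

import org.redisson.Redisson;
import org.redisson.api.RExecutorService;
import org.redisson.api.RedissonClient;
import org.redisson.api.annotation.RInject;
import org.redisson.config.Config;

// The whole task object is serialized and shipped to whichever node executes it,
// so it should only hold serializable state (ids, plain values), not live
// resources such as DB connections or Hibernate sessions.
public class CountEntriesTask implements Callable<Long>, Serializable {

    // Re-injected by Redisson on the executing node after deserialization.
    @RInject
    private RedissonClient redisson;

    private final String mapName; // plain serializable state travels with the task

    public CountEntriesTask(String mapName) {
        this.mapName = mapName;
    }

    @Override
    public Long call() {
        // Live resources are looked up on the executing node,
        // not captured from the submitting JVM.
        return (long) redisson.getMap(mapName).size();
    }

    public static void main(String[] args) throws Exception {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient client = Redisson.create(config);

        RExecutorService executor = client.getExecutorService("myExecutor");
        Long size = executor.submit(new CountEntriesTask("rooms")).get();
        System.out.println("entries: " + size);

        client.shutdown();
    }
}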
I agree that the documentation is lacking some "under the hood" explanations.
I was able to execute DB reads and inserts through the Callable/Runnable that was submitted to the remote ExecutorService.
I configured a single Redis instance on a remote VM, with the database and the app running locally on my laptop.
The tasks were executed without any errors.

Geode region[key] get triggers region listener create event

Using Geode 1.2 and the 9.1 Pivotal native client, the following code:
IRegion<string, IPdxInstance> r = cache.GetRegion<string, IPdxInstance>("myRegion");
return r[key];
then triggers an AfterCreate event for myRegion. Why does that happen when no data is created, only read?
Same here, I've never used the Native Client. I agree with what @Urizen suspected: you are calling r[key] from an instance of Geode that doesn't have the entry, so it pulls the data from another instance, which "creates" the entry locally.
You have a few options here:
Perform an interest registration for the instance you are initiating the call from, using registerAllKeys() (doc here). There is a catch (it might not be applicable to the native client): in the Java API, you have the option to register interest with an InterestResultPolicy. If you use KEYS_VALUES, you will load all the data from remote to local on startup WITHOUT triggering the afterCreate callback. If you choose KEYS only or NONE, you will likely have a similar problem.
You can check the boolean flag remoteOrigin in the EntryEvent (see the sketch below). If it is false, the event is purely local. In a non-WAN setup, this should be enough to distinguish your local operation from a remotely initiated one (be it cache syncing or a genuine creation initiated by another cache). I vaguely remember that WAN works a bit differently here.
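A minimal sketch of that second option using the Java API (the listener class name is made up, and I believe the native client exposes an equivalent flag on its events):

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Listener that ignores entries merely faulted in from another member,
// reacting only to entries genuinely created by this cache.
public class LocalOnlyCreateListener extends CacheListenerAdapter<String, Object> {

    @Override
    public void afterCreate(EntryEvent<String, Object> event) {
        if (event.isOriginRemote()) {
            // The entry originated in another cache (e.g. a read that faulted
            // the value into the local cache) - skip it.
            return;
        }
        // Purely local creation.
        System.out.println("Locally created key: " + event.getKey());
    }
}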
I've never used the Native Client but, at first glance, it should be expected for the afterCreate event to be invoked on the client side, as the entry is actually being created in the local cache. What I mean is that the entry might exist on the server, but internally the client needs to retrieve it from the server and then create it locally (thus invoking afterCreate for the locally installed CacheListener). Makes sense?

Circular module dependencies between stores

In my react native app that tracks instrument practice I have three stores:
SessionStore
GoalStore
InstrumentStore
The stores each manage one model (Session, Goal, Instrument) and handle getting/updating it from the server via a REST API.
The SessionStore listens to actions regarding Sessions (obviously): session.add, session.update. But it also listens to changes in the other stores, to be able to update Sessions if a Goal or Instrument changes name.
Correspondingly, the InstrumentStore listens to Instrument actions, but also to Session actions, to update statistics on how many sessions use a particular instrument.
To avoid race conditions, the InstrumentStore acts on the session.add action but waits for the SessionStore to handle the action first (to ensure the Session has been updated in the API). To do this I use dispatcher.waitFor with the SessionStore dispatchToken as a semaphore.
The problem: since all stores use each other's dispatchTokens, they all have to import each other. This is a circular dependency between modules and leads to strange race conditions. Sometimes one of the stores hasn't been constructed yet when it's imported by one of the other stores.
Here are my stores: https://github.com/osirisguitar/GuitarJournalApp/tree/feature/flat-ui/js/stores
Am I using the flux pattern the wrong way?
Addition
This is what I want to happen (in sequence):
Session is updated:
Send updated session to API
Refresh SessionStore
Refresh GoalStore
Refresh InstrumentStore
2, 3 and 4 need to wait for 1 to complete, that's why GoalStore and InstrumentStore need the SessionStore dispatch token.
Goal is updated:
Send updated goal to API
Refresh GoalStore
Refresh SessionStore
2 and 3 need to wait for 1; this is why the SessionStore needs the GoalStore dispatchToken, which introduces the circular dependency.
You have some duplication going on.
All stores will hear all dispatches. That's the beauty of having a single dispatcher. So when you dispatch a sessions.add or sessions.update action, you're hitting three different Stores, and two of them are doing the exact same thing. That's a no-no.
As a rule, each Store's dispatch token should only be responsible for updating that store. So your Goal and Instrument stores should not be updating the SessionsStore. The .refresh and .emit should be happening within the SessionsStore dispatch token only.
EDIT to answer your edited question.
I think your confusion comes from not recognizing that dispatcher.register takes a function as its argument, not an object.
Functions in JS do not evaluate their contents on declaration; they are evaluated only when executed.
A simple example:
var func = function(){ console.log(testVar) } // no error, even though testVar has not been assigned yet
func() // logs: undefined (the var declaration below is hoisted, but not yet assigned)
var testVar = 'hey';
func() // logs: 'hey'
dispatcher.register takes a function as its input and returns a key (in the format ID_#). That key is generated by the dispatcher itself without running the input function. The input function is simply stored for later and run each time a payload is dispatched.
That means you don't need the internal variables to be defined until your first dispatch. And because you also don't want to dispatch anything until you've created your stores, this becomes a non-issue.
But it also means that the dispatcher, by default, has a sort-of circular dependency on itself (relying on the return values of its own functions, as stored in external variables). But that's the design of the dispatcher. Unless you're going to write a new dispatcher, that's just part of the deal.
It's also worth pointing out that if you create a true circular dependency by calling multiple waitFors that deadlock against one another, the dispatcher will correctly throw an error saying as much:
Dispatcher.waitFor(...): Circular dependency detected while waiting for ID_#

Changing server state with GET?

OK, I know this is bad. As a rule, GET requests should be for read-only queries; they should not change the state of the server and its data. But how can I handle the following situation?
A client must fetch all the "chat rooms" near him. Typically this would be something like GET /chatrooms?lat=x&lng=y. But if there are no chat rooms near him, then a room must be created automatically. Of course this can be achieved by doing POST /chatrooms.
But that implies 2 requests to the server, and I want to do just one: a GET /chatrooms?lat=x&lng=y and then, if there are no rooms, have the server create a new one and return it to the client. So GET must sometimes change state on the server (create a new room).
The server side would be something like this (pseudo-code):
#GET /chatrooms
List<ChatRoom> getAll():
    lat = getQuery("lat")
    lng = getQuery("lng")
    chatRooms = findChatRoomsByLatLng(lat, lng)
    if (chatRooms.size > 0):   // this is the normal GET
        return json(chatRooms, 200)
    else:                      // in this case GET changes the server state
        chatRoom = new ChatRoom(lat, lng)
        saveChatRoom(chatRoom)
        return json(chatRoom, 200)
That is my problem.
You're referring to the idea of GET as a "safe method". Being "safe" doesn't mean "can't change server state"; it means "can't be used by the client to request a change in server state". In your case, the change is clearly done by the server and is totally invisible to the user. This is in line with the last paragraph of the RFC section quoted below. I see no problem with using GET to create the relevant chat room if none exists.
Section 9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects,
so therefore cannot be held accountable for them.
If a room is requested which does not exist, simply create the room and return it. The client should not even notice this. GET is fine for that.
It's just like dynamically generated images, such as thumbnails. Would you send two requests to download a dynamically generated image? I would not.

Apache JENA TDB files locked after creation with web application

I am using JENA to create a triple store (TDB functionality) with the following code:
public void createTDBFromOWL() {
    Dataset dataset = TDBFactory.createDataset(newTripleStoreLocation);
    dataset.begin(ReadWrite.WRITE);
    try {
        // getting the model inside the transaction
        Model model = dataset.getDefaultModel();
        FileManager fileManager = FileManager.get();
        Model holder = fileManager.readModel(model, newOWLFileLocation);
        // committing the dataset
        dataset.commit();
        model.close();
        holder.close();
    } finally {
        dataset.end();
        dataset.close();
    }
}
After I create the triple store, the files created are locked by my application server (Glassfish), and I can't delete them until I manually stop Glassfish and it releases its lock. As shown in the above code, I think I am closing everything, so I don't get why a lock is maintained on the files.
When you call Dataset#close(), the implementation delegates that call to an underlying DatasetGraphBase#close(), which then ultimately delegates to DatasetGraphTDB#_close().
This results in calls to TripleTable#close() and QuadTable#close(). Both of these call (several) NodeTupleTable#close(). Continuing with the indirection, this calls NodeTable#close() and TupleTable#close(). The former is an interface, so we'd need to make a proper guess as to which class is used in your implementation. The latter iterates through a collection of TupleIndex objects and calls close() on each of them. TupleIndex is, also, an interface.
There is only one meaningful hierarchy of descendants from TupleIndex that results in something which can lock a file, which leads us to TupleIndexRecord#close(). We can then follow a particular implementation of RangeIndex called BPlusTree all the way down until we see actual ownership of the MappedByteBuffer.
Ultimately, while reading the implementation of BlockAccessMapped#close(), it seems like the entire hierarchy closes things properly, down to the final classes, but this longstanding bug may be the culprit. From the documentation:
once a file has been mapped a number of operations on that file will
fail until the mapping has been released (e.g. delete, truncating to a
size less than the mapped area). However the programmer can't control
accurately the time at which the unmapping takes place --- typically
it depends on the processing of finalization or a PhantomReference
queue.
So there you have it. Despite Jena's best efforts, one cannot yet control when that file will be unmapped in Java. This ends up being the trade-off for memory-mapped file I/O in Java.
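To see that JDK limitation in isolation (independent of Jena), here is a minimal sketch; the file name is made up, and the delete behaviour is platform-dependent - on Windows the delete typically fails while the mapping is still alive, which is exactly the symptom described above:

import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedFileLockDemo {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("demo.dat"); // hypothetical file, standing in for a TDB index file
        Files.write(file, new byte[1024]);

        MappedByteBuffer buffer;
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map the file into memory, much like TDB does for its index files.
            buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
        }
        // The channel is now closed, but the mapping stays alive until the buffer
        // is garbage collected - there is no standard API to unmap it explicitly.
        buffer.get(0); // the mapping still works after close()

        try {
            Files.delete(file); // typically fails on Windows while the mapping exists
            System.out.println("delete succeeded (likely on Linux/macOS)");
        } catch (Exception e) {
            System.out.println("delete failed while the file is still mapped: " + e);
        }
    }
}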