Static or instance class for reuse between RFC calls?

I am implementing a piece of functionality that must be called in a remote system, so I wrap this functionality in a global class which is called by an RFC module. The calculations to be done are rather complex, heavy, and involve many DB calls, so I am looking for ways to pre-save some results for future RFC calls. The module will be called very frequently, and this trick could save many seconds of runtime.
The question is: should I use a static class or an instance class in my RFC wrapper?
The idea is to put calculation results into itab attributes of the class and re-use them in future calls. This data will have a validity interval, though, and after it expires it will be invalidated and re-calculated.
In the SAP recommendations we see that static classes are generally not recommended, with some exceptions, and for re-use SAP recommends sticking to singletons. Does the singleton idea apply to my use case as well?
In my understanding, if I put tables into the attributes of a static class, they will stay alive in memory for a while (from the ABAP documentation):
They are persisted in the memory for as long as the current internal session exists
and as can be seen from this magnificent figure by Sandra
the user session will be reused (for how long?) between RFC calls, and so my saved tables/attributes will be reused too.
It is better than shared memory, which has its own overhead and disadvantages, e.g. it is valid only for the current AS instance.
Is it a viable idea, or should I rather stick to a singleton with an instance class?

The thing with static classes vs. singletons is more a question of code style. When you implement the singleton pattern, the instance is stored in a static attribute, so there is no difference between a singleton and an all-static class when it comes to persistence. And that persistence only lasts for the internal session. The internal sessions of RFC calls are bound to the RFC session. When the caller is another ABAP program running on a different server, it will keep its RFC session while it is running, just like a regular internal session would. So you can use static attributes to cache data within one execution of one program, but you cannot use them to share data between multiple executions of the program. (When the RFC call comes from a non-ABAP program, the lifetime of the RFC session ends when the program explicitly closes the session.)
So should you use a static class / singleton here? Only if all those requests happen through one run of the same program (but then why don't you cache on the client or create an RFC function module which accepts requests in bulk?)
The program is only executed once by one user
If you have multiple requests from multiple users, then you have a typical use-case for shared memory. The argument about the shared memory instance being bound to the application server is pretty irrelevant here, because RFC user sessions are also bound to an application server.
If you need caching across application servers, then there is the option to cache on the database server by creating a database table that stores the results of the more time-consuming queries. But retrieving those results still requires a database request, so it only makes sense for queries which take several seconds to complete and still don't return much data. Such a cache is not useful for queries which take just milliseconds, or where the reason for the runtime is the amount of data they return.

Related

When to use Transient, Scoped and Singleton

I have read some articles about this and learned how to use Transient, Scoped, and Singleton, but I am confused about when to use each of them.
What I understood:
Singleton: In a situation where you need to store the number of employees, you can create a singleton, because every time you create a new employee the number is incremented, so in that situation you need a singleton.
Scoped: For example, you are playing a game in which the number of lives is 5, and you need to decrease that number when the player's game is over. Each new game needs a new instance, because each new game starts with 5 lives again.
Transient: when should Transient be used?
Please correct me if I am wrong.
And give better examples of all of them if possible.
As far as I know, the Singleton is normally used for a single global instance. For example, if you have an image store, you could have one service that loads images from a given location and keeps them in memory for future use.
A scoped lifetime indicates that services are created once per client request. Normally we use this for a SQL connection: the connection is created and disposed once per request.
Transient lifetime services are created each time they are requested from the service container. For example, during one request you might use an HttpClient service to call other web APIs multiple times, each with a different endpoint. In that case you register the HttpClient service as transient, which means each time you resolve it, a new HttpClient is created to send the request rather than reusing the same one.
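To make the three lifetimes concrete, here is a minimal registration sketch using Microsoft.Extensions.DependencyInjection; the service names (IImageStore, IOrderRepository, IGeoClient) are invented for the example:
using Microsoft.Extensions.DependencyInjection;

public interface IImageStore { }
public class InMemoryImageStore : IImageStore { }

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

public interface IGeoClient { }
public class GeoClient : IGeoClient { }

public static class ServiceRegistration
{
    public static void Register(IServiceCollection services)
    {
        // Singleton: one instance for the whole application lifetime,
        // e.g. a service that loads images once and keeps them in memory.
        services.AddSingleton<IImageStore, InMemoryImageStore>();

        // Scoped: one instance per client (HTTP) request,
        // e.g. something that owns a SQL connection for the duration of the request.
        services.AddScoped<IOrderRepository, SqlOrderRepository>();

        // Transient: a new instance every time it is resolved from the container.
        services.AddTransient<IGeoClient, GeoClient>();
    }
}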
Note, Microsoft provides the recommendations here and here.
When designing services for dependency injection:
Avoid stateful, static classes and members. Avoid creating global state by designing apps to use singleton services instead.
Avoid direct instantiation of dependent classes within services. Direct instantiation couples the code to a particular implementation.
Make services small, well-factored, and easily tested.
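As a small illustration of the "avoid direct instantiation" point above, here is a hedged sketch (the IMailSender/RegistrationService names are invented); the second variant takes its dependency through the constructor, so a test can pass in a fake implementation:
public interface IMailSender { void Send(string to, string body); }

public class SmtpMailSender : IMailSender
{
    public void Send(string to, string body) { /* send via SMTP here */ }
}

// Coupled: the service news up a concrete class, so it cannot be swapped out in tests.
public class RegistrationServiceCoupled
{
    private readonly SmtpMailSender _mail = new SmtpMailSender();
    public void Register(string email) => _mail.Send(email, "Welcome!");
}

// Decoupled: the dependency is injected via the constructor and is easy to fake.
public class RegistrationService
{
    private readonly IMailSender _mail;
    public RegistrationService(IMailSender mail) => _mail = mail;
    public void Register(string email) => _mail.Send(email, "Welcome!");
}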

Facade in Object Oriented Programming

In OOP, should a Facade be an object or just a class? Which is better?
Most of the examples on Wikipedia create the Facade as an object that has to be instantiated before use.
CarFacade cf = new CarFacade();
cf.start();
Can it be designed to be like this instead?
CarFacade.start();
UPDATE
Can a Facade facilitate a singleton?
A facade
represents a high level API for a complex subsystem (module).
reduces client code dependencies.
This means that your client code only uses the facade and does not have many dependencies on classes behind that facade.
It is better to use an instance accessed through an interface, because
you can replace it for tests. E.g. mock the subsystem the facade represents.
you can replace it at runtime.
When you use static methods, your client code is bound to those method implementations at compile time. This is usually the opposite of the open/closed principle.
I said "usually the opposite", because there are examples where static methods are used but the system is still open for extension, e.g.
ServiceLoader
The static load methods only scan the classpath and look up service implementations. Thus, adding classes and META-INF/services descriptions to the classpath will add other available services without changing the ServiceLoader's code.
Spring's AuthenticationFacade for example uses a ThreadLocal internally. This makes it possible to replace the behavior of the AuthenticationFacade. Thus it is open for extension too.
Finally, I think it is better to use an instance and an interface, as I would for most other classes.
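A small sketch of that, reusing the CarFacade name from the question (ICarFacade and Driver are invented for the example):
public interface ICarFacade
{
    void Start();
}

// The facade hides the complex subsystem (engine, fuel pump, ...) behind one call.
public class CarFacade : ICarFacade
{
    public void Start()
    {
        // orchestrate the subsystem here
    }
}

// Client code depends only on the interface, so a test can inject a mock ICarFacade.
public class Driver
{
    private readonly ICarFacade _car;
    public Driver(ICarFacade car) => _car = car;
    public void Go() => _car.Start();
}

// The all-static alternative: callers of StaticCarFacade.Start() are bound to this
// exact implementation at compile time and cannot replace it for tests or at runtime.
public static class StaticCarFacade
{
    public static void Start() { /* orchestrate the subsystem here */ }
}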
It's twofold. You can use it via static methods. For instance, in Spring Security I use an AuthenticationFacade to access the currently logged-in user's Principal details, like so: AuthenticationFacade.getName()
In other cases, people mostly create an instance of the Facade and use it. In my opinion, neither approach is superior to the other; rather, it depends on your context.
Finally, a Facade can use the Singleton pattern to make sure that only one instance is created and to provide a global point of access to it.
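For illustration, a minimal thread-safe sketch of such a singleton facade (names invented; in C#, a Lazy<T> field is one common way to get the lazy, single creation):
using System;

public sealed class CarFacadeSingleton
{
    // The single instance lives in a static field and is created lazily and thread-safely.
    private static readonly Lazy<CarFacadeSingleton> _instance =
        new Lazy<CarFacadeSingleton>(() => new CarFacadeSingleton());

    public static CarFacadeSingleton Instance => _instance.Value;

    private CarFacadeSingleton() { }   // no public construction

    public void Start()
    {
        // delegate to the complex subsystem here
    }
}

// Usage: a global point of access, but still an instance behind it:
// CarFacadeSingleton.Instance.Start();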
This question is highly subjective. The only reason I am responding is because I reviewed some of my own code and found where I had written a Façade in one application as a singleton and written almost the same Façade in a different application requiring an instance. I'm going to discuss why I chose each of those routes in their respective applications so that I can evaluate if I made the correct choice.
The façade vs. the open/closed principle is already explained by @Rene Link. In my personal experience, you have to think of it this way: does the object hold state of its own?
Let's say I have a façade that wraps the Azure Storage API for .NET (https://learn.microsoft.com/en-us/azure/storage/common/storage-samples-dotnet)
This facade holds information about how to authenticate against the storage API so that the client can do something like this:
Azure.Authenticate(username, password);
Azure.CreateFile("My New Text File", "\\FILELOCATION");
As you can see in this example, I have not created an instance and I'm using static methods, therefore following the singleton pattern. While this makes for more concise code, I now have an issue: if I need to authenticate against a given path with a different credential than the one already provided, I would have to do something like this:
Azure.Authenticate(username, password);
Azure.CreateFile("My New Text File", "\\FILELOCATION");
Azure.Authenticate(username2, password2);
Azure.CreateFile("My Restricted Text File", "\\RESTRICTEDFILELOCATION");
While this would work, it can be hard to determine why authentication failed when I call Azure.ReadFile, as I have no idea what username and password may have been passed into the singleton from thread4 on form2 (which is nowhere to be found). This is a prime example of where you should be using an instance. It would make much more sense to do something like this:
using (AzureFacade myAzure = Azure.Authenticate(username, password))
{
    myAzure.CreateFile("My New Text File", "\\FILELOCATION"); // I will always know the username and password.
}
With that said, what happens if the developer needs to create a file in Azure in a method that has no idea what the username and password to Azure may be? A good example of this would be an application that periodically connects to Azure and performs some multi-threaded tasks. In said application, the user sets up a connection string to Azure, and all multi-threaded tasks are performed using that connection string. Therefore, there is no need to create an instance for each thread (as the state of the object will always be the same). However, in order to maintain thread safety, you don't want to share the same instance across all the threads. This is where a singleton, thread-safe pattern may come into play (Spring's AuthenticationFacade, according to @Rene Link), so that I could do something like this (pseudocode):
Thread[] allTasks = // create 5 threads
Azure.Authenticate(username, password); // authenticate once for all 5 threads
allTasks.Start(MyFunction);
void MyFunction()
{
    Azure.CreateFile("x");
}
Therefore, the choice between an instance façade and a singleton façade is completely dependent on the intended application of the façade; however, both can definitely exist.

Ninject: What happens to non-disposable InRequestScope and InTransientScope objects after the HTTP request is finished?

I have searched a lot about these questions, here and in a lot of other places, but I'm not getting everything I want to know!
From a WebApi project point of view, when are InTransientScope objects created? The Ninject docs state that such objects are created whenever requested, but in a Web API project that handles HTTP requests, the instance is created at request start time, so in this regard is it the same as InRequestScope then?
In a WebApi project, is it okay to use InTransientScope objects knowing that they will never be kept track of by Ninject? If Ninject never keeps track of Transient objects, then what is the purpose of this scope and what happens to such objects after they have been used?
If I declare an object with InRequestScope and that object doesn't implement the IDisposable interface, what happens to such object after the web request has completed? Will it be treated the same way as an InTransientScope object?
Are different scopes to be used for: WebApi controllers, Repositories(that use a InRequestScope Session that is created separately) and Application services?
There are two purposes for scopes:
Only allow one object to be created per scope
(optionally) dispose of the object once the scope ends.
As said, the disposal is optional. If the object doesn't implement the IDisposable interface, it is not disposed. There are plenty of use cases for that.
InTransientScope is the default scope, the one used if you don't specify another. It means that every time a type A is requested from the kernel, one activation takes place and the result is returned. The activation logic is specified by the binding part that follows immediately after the Bind part (To<...>, ToMethod(...), ...).
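For example, the binding and scope syntax looks roughly like this (IFoo/Foo are placeholders; InRequestScope() comes from the Ninject.Web.Common extension):
using Ninject;
using Ninject.Web.Common;   // provides InRequestScope()

public interface IFoo { }
public class Foo : IFoo { }

public static class Bindings
{
    public static void Configure(IKernel kernel)
    {
        // Transient (the default): a new Foo is activated on every resolution.
        kernel.Bind<IFoo>().To<Foo>().InTransientScope();

        // Request scope: at most one Foo per HTTP request; it is disposed at the
        // end of the request if it implements IDisposable.
        // kernel.Bind<IFoo>().To<Foo>().InRequestScope();

        // Singleton: one Foo for the lifetime of the kernel.
        // kernel.Bind<IFoo>().To<Foo>().InSingletonScope();
    }
}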
However, note that activation does not necessarily happen at the time the web request starts and the controller is instantiated. For example, you can use factories or service location (e.g. ResolutionRoot.Get<Foo>()) to create more objects after the controller has been created. To answer your questions in short:
When: when a request takes place, or whenever your code asks Ninject for a type, either directly (IResolutionRoot.Get(..)) or through a factory. Since InTransientScope objects are not tracked, they will not be disposed; however, if they are not disposable and the entire request code requests only one IFoo, then practically there is no discernible difference (apart from the slight performance hit due to tracking InRequestScope()-ed objects).
As long as you don't need to make sure that instances are shared and/or disposed, this is completely fine. Once they are no longer used, they will be garbage-collected like any object you would new up yourself.
When the scope ends, Ninject will remove the weak reference to the non-IDisposable object. The object itself will not be touched, just like objects bound InTransientScope().
That depends on your specific requirements and implementation details. Generally, one needs to make sure that long-scoped objects don't depend on short-scoped objects. For example, a singleton service should not depend on a request-scoped object. As a base rule, everything should be InTransientScope() unless there's a specific reason why it should not be. The reason will dictate which scope to use...

Custom NHibernate session implementation

I'm working on a system that performs bulk processing using NHibernate. I know that NHibernate was not designed for bulk processing, but nonetheless the system is working perfectly thanks to a number of optimizations.
The object at the lowest level of granularity (i.e. the root of my aggregates) has a number of string properties that cannot (or, it does not make sense to) be modeled as many-to-one's (e.g. "Comment"). In reality, the fields in the DB corresponding to these properties take only so many values (for example because most - but not all - comments are machine-generated), with the result that when hydrating tons of objects, lots of memory is wasted by having thousands and thousands of instances of strings with the same values.
I was thinking of optimizing this scenario transparently by creating my own NHibernate custom type that enhances NHibernate's StringType by overriding NullSafeGet() and doing a dictionary lookup to return the same instance of each string occurrence over and over. In other words, I would perform a kind of string interning myself. The use of a custom type allows me to select which properties of which objects should be "interned" by just specifying this type in the mapping files.
Ideally, I would like to "stick" this dictionary into the session, so that the lifetime of this string pool is tied to the lifetime of the first-level cache. After all, from our system's point of view, it makes sense to initialize this string pool at the same time a session and its first-level cache are initialized, and to nuke the string pool at the same time a session is closed. It is also a desirable property that concurrent sessions are completely isolated from each other by having their own private dictionaries.
Problem is, I can't find a way to "inject" a custom implementation of NHibernate's session into NHibernate itself so that an IType can access it at NullSafeGet() time, short of creating my own personal NHibernate code branch.
Is there a way to provide NHibernate with a custom session implementation?
I see three different approaches to solve this:
1. Use an interceptor
In the IInterceptor, you get:
void AfterTransactionBegin(ITransaction tx);
void BeforeTransactionCompletion(ITransaction tx);
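A hedged sketch of that option (StringPool is an assumed static cache of your own; the interceptor derives from NHibernate's EmptyInterceptor, so only the two hooks need overriding). Note that a static pool is per-process rather than per-session, which trades away the isolation you mentioned:
using System.Collections.Generic;
using NHibernate;

// Assumed helper: your own string cache that the custom IUserType would call
// from NullSafeGet() to return one shared instance per distinct value.
public static class StringPool
{
    private static readonly Dictionary<string, string> _pool = new Dictionary<string, string>();

    public static void Clear() => _pool.Clear();

    public static string Intern(string value)
    {
        if (value == null) return null;
        if (_pool.TryGetValue(value, out var pooled)) return pooled;
        _pool[value] = value;
        return value;
    }
}

public class StringPoolInterceptor : EmptyInterceptor
{
    public override void AfterTransactionBegin(ITransaction tx)
    {
        StringPool.Clear();   // start each unit of work with an empty pool
    }

    public override void BeforeTransactionCompletion(ITransaction tx)
    {
        StringPool.Clear();   // release the pooled strings again
    }
}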
2. Wrap opening and closing the session:
Opening and closing the session is an explicit call. It should be easy to wrap this into a method.
public ISession OpenSession()
{
    var session = sessionFactory.OpenSession();
    StringType.Initialize();
    return session;
}
You could make this much nicer. I wrote a transaction service which has events; then you could handle the begin-transaction and end-transaction events.
3. Don't attach the string cache to the session
It doesn't need to be related to the session. The strings are immutable objects; it doesn't hurt to mix them between sessions. To stop the cache from growing without limit, you could write your own or use an existing "most recently used" cache: after growing to a certain size, it throws away the oldest items.
This would probably require some time to implement, but would be very nice and easy to use.
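A minimal sketch of such a bounded cache (the capacity and eviction details are just one way to do it; it is not thread-safe, so wrap it in a lock if sessions share it):
using System.Collections.Generic;

public class MruStringCache
{
    private readonly int _capacity;
    private readonly Dictionary<string, LinkedListNode<string>> _map = new Dictionary<string, LinkedListNode<string>>();
    private readonly LinkedList<string> _order = new LinkedList<string>();

    public MruStringCache(int capacity) { _capacity = capacity; }

    // Returns one shared instance per distinct value; the least recently used
    // entry is thrown away once the cache is full.
    public string Intern(string value)
    {
        if (value == null) return null;

        if (_map.TryGetValue(value, out var node))
        {
            _order.Remove(node);      // it was just used: move it to the front
            _order.AddFirst(node);
            return node.Value;
        }

        if (_map.Count >= _capacity)
        {
            var oldest = _order.Last; // evict the oldest item
            _order.RemoveLast();
            _map.Remove(oldest.Value);
        }

        _map[value] = _order.AddFirst(value);
        return value;
    }
}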

Beans, methods, access and change? What is the recommended practice for handling them (i.e. in ColdFusion)?

I am new to programming (6 weeks now). I am reading a lot of books, sites and blogs right now, and I learn something new every day.
Right now I am using ColdFusion (at work). I have read many of the OOP and CF related articles on the web, and I am planning to get into MXUnit next and after that to look at some frameworks.
One thing bothers me and I am not able to find a satisfactory answer: beans are sometimes described as DataTransferObjects; they hold data from one or many sources.
What is the recommended practice to handle this data?
Should I use a separate object that reads the data, mutates it and then writes it back to the bean, so that the bean is just storage for data (accessible through getters), or should I implement the methods that manipulate the data in the bean itself?
I see two options.
1. The bean is only storage, other objects have to do something with its data.
2. The bean is storage and logic, other objects tell it to do something with its data.
The second option seems to me to adhere more to encapsulation while the first seems to be the way that beans are used.
I am sure both options fit someone's needs and are recommended in a specific context, but what is recommended in general, especially when someone does not know enough about the greater application picture and is a beginner?
Example:
I have created a bean that holds an Item from a database with the item id, a name, and a 1D array. Every array element is a struct that holds a user with its id, its name and its amount of the item. Through a getter I output the data in a table, in which I can also change the amount for each user or mark a user for deletion from this item.
Where do I put the logic that handles the application user's input?
Do I tell the bean to change its array according to the user input?
Or do I create an object that changes the array and writes that new array into the bean?
(All database access (create, read, update, delete) is handled through a DataAccessObject that gets the bean as an argument. The DAO also contains a gateway method to read more than one record from the database. I use this method to get a table of items, which I can click to create the bean and its data.)
You're observing something known as an "anemic domain model". Yes, it's very common, and no, it's not good OO design. Generally, logic should live with the data it operates on.
However, there's also the matter of separation of concerns - you don't want to stuff everything into the domain model. For example, database access is often considered a technically separate layer and not something the domain models themselves should be doing - it seems you already have that separated. What exactly should and should not be part of the domain model depends on the concrete case - good design can't really be expressed in absolute rules.
Another concern is models that get transferred over the network, e.g. between an app server and a web frontend. You want these to contain only the data itself, to reduce bandwidth usage and latency. But that doesn't mean they can't contain logic, since methods are not part of the serialized objects. Derived fields and caches are, but they can usually be marked as transient in some way so that they are not transferred.
Your bean should contain both your data and logic.
Data Transfer Objects are used to transfer objects over the network, such as from ColdFusion to a Flex application in the browser. DTOs only contain relevant fields of an object's data.
Where possible, you should try to minimise exposing the internal implementation of your bean (such as the array of user structs) to other objects. To change the array you should just call mutator functions directly on your bean, such as yourBean.addUser(user), which appends the user struct to the internal array.
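Sketched in C#-like code rather than CFML (the Item/ItemUser names and methods are invented for illustration), the idea of keeping the logic inside the bean looks like this:
using System.Collections.Generic;
using System.Linq;

public class ItemUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Amount { get; set; }
}

public class Item
{
    private readonly List<ItemUser> _users = new List<ItemUser>();

    public int Id { get; }
    public string Name { get; }

    public Item(int id, string name) { Id = id; Name = name; }

    // Callers express intent; the bean keeps its internal list consistent.
    public void AddUser(ItemUser user) => _users.Add(user);

    public void ChangeAmount(int userId, int newAmount)
    {
        var user = _users.FirstOrDefault(u => u.Id == userId);
        if (user != null) user.Amount = newAmount;
    }

    public void RemoveUser(int userId) => _users.RemoveAll(u => u.Id == userId);

    // Read-only view for displaying the table; the list itself stays private.
    public IReadOnlyList<ItemUser> Users => _users;
}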
No need to create a separate DAO with a composed Gateway object for your data access. Just put all of your database access methods (CRUD plus table queries) into a single Gateway object.