Command Pattern clarification (OOP)

I cannot see any of the Command pattern participants (e.g. invoker, receiver) manifesting in the accepted answer of the following link: Long list of if statements in Java. I have gone with the accepted answer to solve my 30+ if/else statements.
I have one repository to which I am trying to pass DTOs for saving to the database. I want the repository to invoke the correct save method for each DTO, so I am checking the instance type at runtime.
Here is the implementation in the Repository:
private Map<Class<?>, Command> commandMap;

public void setCommandMap(Map<Class<?>, Command> commandMap) {
    this.commandMap = commandMap;
}
and a method that will populate the commandMap:
commandMap.put(Address.class, new CommandAddress());
commandMap.put(Animal.class, new CommandAnimal());
commandMap.put(Client.class, new CommandClient());
and finally the method that saves:
public void getValue() {
    commandMap.get(these.get(0).getClass()).save();
}
The service class that uses the Repo registers the commandMap.
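Put together, a self-contained version of what I have looks roughly like this (the generic Command<T> interface and the register/save helpers are a sketch of my intent, not my exact code):

import java.util.HashMap;
import java.util.Map;

class Address {} // stub DTOs for illustration
class Animal {}

interface Command<T> {
    void save(T dto);
}

class Repository {
    private final Map<Class<?>, Command<?>> commandMap = new HashMap<>();

    // Pairing the DTO class with its command keeps the cast in save() safe.
    <T> void register(Class<T> type, Command<T> command) {
        commandMap.put(type, command);
    }

    @SuppressWarnings("unchecked")
    <T> void save(T dto) {
        Command<T> command = (Command<T>) commandMap.get(dto.getClass());
        command.save(dto); // dispatches by runtime type, no if/else chain
    }
}

A service would then call repository.register(Address.class, a -> { /* persist address */ }) once at setup and repository.save(dto) everywhere else.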
Does the accepted answer represent a sort of (approximate) implementation of the Command pattern?

It seems like an enum that implements an exec interface will eliminate your if/else problem or turn it into a switch.
It does not look like you need the Command pattern.
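A minimal sketch of that enum idea, with one save operation per DTO type (all names here, like Exec and DtoSaver, are illustrative):

interface Exec {
    void exec(Object dto);
}

enum DtoSaver implements Exec {
    ADDRESS {
        @Override public void exec(Object dto) { /* persist an Address */ }
    },
    ANIMAL {
        @Override public void exec(Object dto) { /* persist an Animal */ }
    },
    CLIENT {
        @Override public void exec(Object dto) { /* persist a Client */ }
    };

    // Look the constant up by the DTO's simple class name instead of branching.
    static void save(Object dto) {
        DtoSaver.valueOf(dto.getClass().getSimpleName().toUpperCase()).exec(dto);
    }
}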
The GoF book says:
Use the Command pattern when you want to
parameterize objects by an action to perform, as MenuItem objects did above. You can express such parameterization in a procedural language with a callback function, that is, a function that's registered somewhere to be called at a later point. Commands are an object-oriented replacement for callbacks.
specify, queue, and execute requests at different times. A Command object can have a lifetime independent of the original request. If the receiver of a request can be represented in an address space-independent way, then you can transfer a command object for the request to a different process and fulfill the request there.
support undo. The Command's Execute operation can store state for reversing its effects in the command itself. The Command interface must have an added Unexecute operation that reverses the effects of a previous call to Execute. Executed commands are stored in a history list. Unlimited-level undo and redo is achieved by traversing this list backwards and forwards calling Unexecute and Execute, respectively.
support logging changes so that they can be reapplied in case of a system crash. By augmenting the Command interface with load and store operations, you can keep a persistent log of changes. Recovering from a crash involves reloading logged commands from disk and reexecuting them with the Execute operation.
structure a system around high-level operations built on primitive operations. Such a structure is common in information systems that support transactions. A transaction encapsulates a set of changes to data. The Command pattern offers a way to model transactions. Commands have a common interface, letting you invoke all transactions the same way. The pattern also makes it easy to extend the system with new transactions.
Which of the above do you want to do?
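For contrast, here is a minimal sketch of the "support undo" case from the list above (all names are illustrative, not from your code):

import java.util.ArrayDeque;
import java.util.Deque;

interface UndoableCommand {
    void execute();
    void unexecute(); // reverses the effects of execute()
}

class Increment implements UndoableCommand {
    private final int[] counter; // the receiver

    Increment(int[] counter) { this.counter = counter; }

    @Override public void execute()   { counter[0]++; }
    @Override public void unexecute() { counter[0]--; }
}

class Invoker {
    private final Deque<UndoableCommand> history = new ArrayDeque<>();

    void run(UndoableCommand command) {
        command.execute();
        history.push(command); // executed commands go into a history list
    }

    void undo() {
        if (!history.isEmpty()) history.pop().unexecute();
    }
}

If none of those bullet points apply, the map-of-handlers from the question is closer to table-driven dispatch than to the full Command pattern.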

Is it good practice to attach an event-related parameter to an object's model as a variable?

This is about an API handling validation while saving an object: the front-end client sends a request to a specific endpoint, and on the back end the API creates a new object if the right conditions are met.
Right now our usual approach is that each model has a ruleset for its fields, and validation is invoked when the save function is called; technically, the validation runs right before the object is saved into the database.
During today's code review I came across a solution I wasn't sure was good practice or not: the front end must send a specific parameter to the API every time. Other APIs use our API as well, and we needed to know whether a request was sent as an API request or a browser request. If this parameter is present, we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service-handler or controller level; if it is present, I would invoke the validation right away and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model and sets it when the parameter is present, then validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object into the database).
My problem with (2) is that the object has now grown by an extra variable that is only related to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when you create the object on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
This may seem like a silly question - why would I care about just one extra variable? - but it is the bedrock of something good or bad. If I say this is OK, then from now on the models will keep growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code is more reliable if it's not the developer who has to remember all 6712537 functionalities and keep them in mind when making changes somewhere. Let's say all the devs get heart attacks tomorrow from the excitement of an amazing discovery, and a new developer who doesn't know about these small details has to change something related to this functionality - the new feature should be supported by this old one as well.
So my question is: is there any established good practice on this, and what do you think would be the best approach?
I spent some time thinking about a solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. When the parameters are passed to the model at the controller level, the loader function can be modified to take the trigger variables from the parameters and save them in an associative-array variable on the model that stores the trigger variables.
By default this array is empty, and no matter how many new trigger variables need to be created, it will only contain the necessary ones when those are actually used.
Then of course the loader function needs to be modified so that it can filter out the non-trigger variables, just as is done for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
This solves both the problem of overgrowing the object with unnecessary variables and the centralized-validation concern, because now the validation can always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variable array, the developer never has to remember that this functionality was created. That is good, because in the future, when they create a new related function or endpoint that handles object creation, they will not forget to validate it against the old functionality - the loader function they modified in the past will handle it for them.
It should be noted, though, that since the loader function differentiates parameters only by checking their names against the filter functions, trigger-variable names must not collide with other parameter names; otherwise buggy functionality can be created accidentally. For example, if you forget that a model attribute with the same name is already used, you can accidentally trigger an event that was programmed to fire whenever the trigger variable with that name is present. This can be solved by prefixing the trigger variables, for example.
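A rough Java sketch of this idea (every name here, like TRIGGER_PREFIX, triggerVars, and load, is just an illustration, not code from the project):

import java.util.HashMap;
import java.util.Map;

class Model {
    // Prefix keeps trigger variables from colliding with regular field names.
    private static final String TRIGGER_PREFIX = "trigger_";

    private final Map<String, String> fields = new HashMap<>();
    private final Map<String, String> triggerVars = new HashMap<>(); // empty by default

    // Loader: route each incoming parameter to fields or trigger variables by name.
    void load(Map<String, String> params) {
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().startsWith(TRIGGER_PREFIX)) {
                triggerVars.put(e.getKey(), e.getValue()); // accepted trigger variable
            } else {
                fields.put(e.getKey(), e.getValue()); // regular field (filtering omitted)
            }
        }
    }

    // Validation stays centralized in the model and can consult the triggers.
    boolean validateForSave() {
        if (triggerVars.containsKey("trigger_apiRequest")) {
            // run the event-specific extra validation here
        }
        return true; // then the regular field ruleset runs as before
    }
}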

Schedulers in Project Reactor with Spring Webflux

Project Reactor is awesome; I can easily switch processing of some parts onto another thread. But I've looked inside the Schedulers.fromExecutorService() method, and this method allocates a new ExecutorService every time. So whenever it is called, schedulers are created and allocated again. I am not sure, but I think this is a potential memory leak...
Mono<String> sometext() {
    return Mono
        .fromCallable(() -> "")
        .subscribeOn(Schedulers.newParallel("my-custom"));
}
I wonder about registering the Scheduler as a bean - it's a singleton, so it would be allocated only once rather than every time - or creating it in the constructor. Many blogs explain the threading model this way.
...
private final Scheduler scheduler = Schedulers.newParallel("my-custom");
...
Mono.fromCallable(() -> "").subscribeOn(scheduler)
Schedulers.newParallel() will indeed create a new scheduler, with an associated backing threadpool, every time you call it - so yes, you're correct: if you're using that method, you want to make sure you store a reference to the scheduler somewhere so you can reuse it. Simply providing the same name argument won't retrieve the existing scheduler; it'll just create a different one with the same name.
How you do this is up to you - it can be via a Spring bean (as long as it's a singleton and not a prototype bean!), a field, or whatever else best fits your use case.
However, before all of this, I'd first consider whether you definitely need to create a separate parallel scheduler at all. Schedulers.parallel() returns a default parallel scheduler that can be used for parallel work out of the box (it doesn't create a new one on each invocation), and unless you need separately configured parallel schedulers for separate services for some reason, best practice is just to use that.
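If you do go the bean route, a minimal sketch looks like this (the bean and class names are arbitrary choices):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

@Configuration
class SchedulerConfig {
    // destroyMethod = "dispose" shuts the backing threadpool down with the context.
    @Bean(destroyMethod = "dispose")
    Scheduler myCustomScheduler() {
        return Schedulers.newParallel("my-custom");
    }
}

Injecting that bean wherever you build your Mono chains gives every pipeline the same scheduler instance instead of allocating a new threadpool per call.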

How to tell if middleware contains a Run()?

Is there any way to tell in ASP.NET Core whether any given middleware contains a Run() call, which stops the pipeline? It seems that UseMvc() is one big one, but I am not even certain about that; I just keep reading that it needs to go at the end, and I assume that is because it contains a call to Run().
Perhaps there is a way to generate a visualisation of the pipeline for all middleware currently in use, showing which one contains the Run() call?
There is no sure way to tell, beyond reading documentation on each specific piece of middleware.
Quoting itminus from the comments on my question:
Not only Run(), but also MapWhen() will terminate the pipeline. Also, anyone could create a custom middleware that doesn't invoke the next delegate and thereby causes a termination.
It's the duty of each middleware to determine whether there's a need to call next. There's no built-in way to visualize the pipeline except by reading the documentation/source code. That's because all the middlewares are built into a single final delegate at startup time. When there's an incoming request, that final delegate is used to process it. As programmers, we know what the middlewares will do, when the pipeline branches, and when it terminates, because we wrote the code. But the program won't know until it actually runs, precisely because the final delegate is built at startup time.

Keeping SAP's RFC data for consecutive calls of RFC using JCO

I was wondering whether it is possible to keep an RFC called via JCo open in SAP memory so I can cache data. This is the scenario I have in mind:
Suppose a simple function increments a number. The function starts at 0, so the first time I call it with import parameter 1, it should return 1.
The second time I call it, it should return 2, and so on.
Is this possible with JCo?
If I have the function object and make two successive calls, it always returns 1.
Can I do what I'm describing?
Designing an application around the stability of a certain connection is almost never a good idea (unless you're building stability-monitoring software). Build your software so that it just works, no matter how often the connection is closed and re-opened and no matter how often the session is initialized and destroyed on the server side. You may want to persist some state using the database, or you may need to (or want to) use the shared memory mechanisms provided by the system. All of this is inconsequential for the RFC handling itself.
Note, however, that you may need to ensure that a sequence of calls happen in a single context or "business transaction". See this question and my answer for an example. These contexts are short-lived and allow for what you probably intended to get in the first place - just be aware that you should not design your application so that it has to keep these contexts alive for minutes or hours.
The answer is yes. In order to make it work, you need to implement two tasks:
The ABAP code needs to store its variable in the ABAP session memory. A variable in the function group's global section will do that. Or alternatively you could use the standard ABAP technique "EXPORT TO MEMORY/IMPORT FROM MEMORY".
JCo needs to keep the user session between calls. By default, JCo resets the backend-side user session after every call, which of course destroys all data stored in that user session memory. In order to prevent it, you need to use JCoContext.begin() and JCoContext.end() to get a stateful RFC connection that keeps the user session alive on backend side.
Sample code:
JCoDestination dest = ...
JCoFunction func = ...
try {
    JCoContext.begin(dest);
    func.execute(dest); // Will return "1"
    func.execute(dest); // Will return "2"
}
catch (JCoException e) {
    // Handle network problems, ABAP exceptions, SYSTEM_FAILUREs
}
finally {
    // Make sure to release the stateful connection, otherwise you have
    // a resource leak in your program and on the backend side!
    JCoContext.end(dest);
}

Apache JENA TDB files locked after creation with web application

I am using Jena to create a triple store (TDB functionality) with the following code:
public void createTDBFromOWL() {
    Dataset dataset = TDBFactory.createDataset(newTripleStoreLocation);
    dataset.begin(ReadWrite.WRITE);
    try {
        // getting the model inside the transaction
        Model model = dataset.getDefaultModel();
        FileManager fileManager = FileManager.get();
        Model holder = fileManager.readModel(model, newOWLFileLocation);
        // committing the dataset
        dataset.commit();
        model.close();
        holder.close();
    } finally {
        dataset.end();
        dataset.close();
    }
}
After I create the triple store, the files created are locked by my application server (Glassfish), and I can't delete them until I manually stop Glassfish and it releases its lock. As shown in the above code, I think I am closing everything, so I don't get why a lock is maintained on the files.
When you call Dataset#close(), the implementation delegates that call to an underlying DatasetGraphBase#close(), which ultimately delegates to DatasetGraphTDB#_close().
This results in calls to TripleTable#close() and QuadTable#close(). Both of these call (several) NodeTupleTable#close(). Continuing with the indirection, this calls NodeTable#close() and TupleTable#close(). The former is an interface, so we'd need to make an educated guess as to which class is used in your implementation. The latter iterates through a collection of TupleIndex objects and calls close() on each of them. TupleIndex is, also, an interface.
There is only one meaningful hierarchy of descendants from TupleIndex that results in something which can lock a file, which leads us to TupleIndexRecord#close(). We can then follow a particular implementation of RangeIndex called BPlusTree all the way down until we see actual ownership of the MappedByteBuffer.
Ultimately, while reading the implementation of BlockAccessMapped#close(), it seems like the entire hierarchy closes things properly, down to the final classes, but this longstanding bug may be the culprit. From the documentation:
once a file has been mapped a number of operations on that file will fail until the mapping has been released (e.g. delete, truncating to a size less than the mapped area). However the programmer can't control accurately the time at which the unmapping takes place --- typically it depends on the processing of finalization or a PhantomReference queue.
So there you have it. Despite Jena's best efforts, one cannot yet control when that file will be unmapped in Java. This ends up being the trade-off for memory-mapped file I/O in Java.
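To see the JDK limitation in isolation from Jena, here is a minimal standalone sketch (the file name is arbitrary); depending on the platform, the delete can fail until the buffer is garbage-collected:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedFileDemo {
    public static void main(String[] args) throws Exception {
        File file = new File("demo.dat");
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buffer.put(0, (byte) 1); // touch the mapping
        }
        // The channel is now closed, but the mapping may still pin the file:
        // there is no explicit unmap, so on some platforms this prints false.
        System.out.println("deleted: " + file.delete());
    }
}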