Reusing a JavaBean class in other beans

I have two objects, Node1 and Node2, and both use Bean1 (a JavaBean).
I am trying to include the properties of Bean1 in both Node1 and Node2 without manually duplicating them. I would also like to reuse the BeanInfo of Bean1 in both objects.
I was thinking of using cglib, but is this possible at all?
More details:
Nodes model job inputs, i.e., Node1 is the input for Job1 and so on. Input types may share a common inner type, which is Bean1. Say Bean1 has a property bean1Prop, Node1 has a property node1Prop, and Node2 has a property node2Prop.
I would need something like
Node1{ node1Prop, bean1Prop } and Node2{ node2Prop, bean1Prop }. Note that each node will have a separate instance of Bean1 during init. I could expose the properties manually by delegation, but I want to avoid that code duplication. I just want to tell each node class that I want to use the bean properties of Bean1.
cglib seems to be able to create such objects at runtime. But I guess the runtime type would be just an Object?
Also, Bean1 has a BeanInfo class which I want to reuse in Node1BeanInfo (via getAdditionalBeanInfo()) and similarly in Node2BeanInfo.
I am not sure whether Node1BeanInfo and Node2BeanInfo would be recognized (by a JavaBeans tool) as the BeanInfo classes for objects that cglib generates at runtime.
I must add that I have never used cglib before.
My primary goal is to avoid code duplication which will be significant in this case when a common type is used for many job inputs (which is very common).
I appreciate your thoughts on this.

I'm not sure what you mean, but it sounds like this:
public class Node1 {
    private Bean1 bean1;

    public Node1(Bean1 bean1) {
        this.bean1 = bean1;
    }
}

public class Node2 {
    private Bean1 bean1;

    public Node2(Bean1 bean1) {
        this.bean1 = bean1;
    }
}
Just be careful: if Node1 and Node2 both point to the same Bean1 instance, you'll have to worry about changes by one class immediately being visible to the other. You can cause yourself some nasty surprises with this arrangement.
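If you also want a JavaBeans tool to pick up Bean1's properties on the nodes, the BeanInfo reuse the question mentions can be combined with delegation. Here is a minimal sketch, assuming the classes above; note that getAdditionalBeanInfo() only merges the metadata, so Node1 still needs delegating getters/setters for bean1Prop to actually be readable and writable on a Node1 instance:

import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.SimpleBeanInfo;

public class Node1BeanInfo extends SimpleBeanInfo {
    @Override
    public BeanInfo[] getAdditionalBeanInfo() {
        try {
            // The Introspector uses a hand-written Bean1BeanInfo if one exists,
            // otherwise it reflects over Bean1's getters and setters.
            return new BeanInfo[] { Introspector.getBeanInfo(Bean1.class) };
        } catch (IntrospectionException e) {
            return null; // tools treat null as "no additional BeanInfo"
        }
    }
}

Node2BeanInfo would look exactly the same.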

Yes, Beans are a good way to reuse code.
No, there's no reason Node1 and Node2 can't share Bean1 to implement the same functionality. In fact, they should.
So far, so good...
If you want to share STATE between your nodes, however, your first question is HOW you wish to store the state.
SIMPLEST APPROACH: write a clear-text property file
ALTERNATIVE APPROACH: store properties in a database
With either approach, you could:
a) have the bean read the property at startup (from database, property file, etc)
b) have your client query the bean for the property value
STILL ANOTHER ALTERNATIVE: Just hard code the values in your bean :)
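If you go with the property file, a minimal sketch of (a) reading it at startup and (b) exposing the value via a getter might look like this (the file name node.properties and the key bean1.prop are made-up examples):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class Bean1 {
    private String bean1Prop;

    public Bean1() {
        Properties props = new Properties();
        // (a) read the property once at startup, falling back to a default
        try (FileInputStream in = new FileInputStream("node.properties")) {
            props.load(in);
            bean1Prop = props.getProperty("bean1.prop", "default");
        } catch (IOException e) {
            bean1Prop = "default"; // the hard-coded fallback from the last alternative
        }
    }

    // (b) clients query the bean for the property value
    public String getBean1Prop() {
        return bean1Prop;
    }
}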

Related

OptaPlanner: prevent custom List from being cloned by FieldAccessingSolutionCloner

I have a @PlanningSolution class that has one field whose type is a custom List implementation.
When solving, I run into the following issue (as described in the OptaPlanner documentation):
java.lang.IllegalStateException: The cloneCollectionClass (class java.util.ArrayList) created for originalCollectionClass (class Solution$1) is not assignable to the field's type (class CustomListImpl).
Maybe consider replacing the default SolutionCloner.
As this field has no impact on planning, can I prevent FieldAccessingSolutionCloner from trying to clone that particular field, e.g. by adding some annotation? I don't want to provide a complete custom SolutionCloner.
When inspecting the sources of FieldAccessingSolutionCloner I found out that I would only need to override the method retrieveCachedFields(...) or constructCloneCollection(...), so I tried to extend FieldAccessingSolutionCloner. But then I need a public no-args constructor, and I don't know how to initialise the field solutionDescriptor in that constructor so that I can use my ExtendedFieldAccessingSolutionCloner as the solution cloner.
If the generic solution cloner decided to clone that List, there is probably a good reason for it to do so: one of the elements in that list probably has a reference to a planning entity or to the planning solution, and therefore the entire list needs to be planning-cloned.
If that's not the case, this is a bug in OptaPlanner. Please provide the source code of the class with that field, and of the CustomListImpl class too, so we can reproduce and fix it.
To supply a custom SolutionCloner, follow the docs, which show something like this. (This is a simple case without chained variables, so it's easy to get right; solution cloning in general is notoriously difficult!)
@PlanningSolution(solutionCloner = VaccinationSolutionCloner.class)
public class VaccinationSolution {...}

public class VaccinationSolutionCloner implements SolutionCloner<VaccinationSolution> {

    @Override
    public VaccinationSolution cloneSolution(VaccinationSolution solution) {
        List<PersonAssignment> personAssignmentList = solution.getPersonAssignmentList();
        List<PersonAssignment> clonedPersonAssignmentList = new ArrayList<>(personAssignmentList.size());
        for (PersonAssignment personAssignment : personAssignmentList) {
            PersonAssignment clonedPersonAssignment = new PersonAssignment(personAssignment);
            clonedPersonAssignmentList.add(clonedPersonAssignment);
        }
        return new VaccinationSolution(solution.getVaccineTypeList(), solution.getVaccinationCenterList(),
                solution.getAppointmentList(), solution.getVaccinationSlotList(), clonedPersonAssignmentList,
                solution.getScore());
    }
}

Are serializers the right spot to remove shared state from Akka messages?

I am working on a distributed algorithm and decided to use Akka to scale it across machines. The machines need to exchange messages very frequently, and these messages reference some immutable objects that exist on every machine. Hence, it seems sensible to "compress" the messages in the sense that the shared, replicated objects should not be serialized into the messages. Not only would this save network bandwidth, it would also avoid creating duplicate objects on the receiver side whenever a message is deserialized.
Now, my question is how to do this properly. So far, I could think of two options:
Handle this in the "business layer", i.e., convert my original message objects into reference objects that replace references to the shared, replicated objects with symbolic references. Then I would send those reference objects rather than the original messages. Think of it as replacing an actual web resource with a URL. Doing this seems rather straightforward in terms of coding, but it drags serialization concerns into the actual business logic.
Write custom serializers that are aware of the shared, replicated objects. In my case, it would be acceptable that this solution introduces the replicated, shared objects as global state to the actor system via the serializers. However, the Akka documentation does not describe how to programmatically add custom serializers, which would be necessary to weave the shared objects into a serializer. Also, I can imagine a couple of reasons why such a solution would be discouraged. So, I am asking here.
Thanks a lot!
It's possible to write your own custom serializers and have them do all sorts of weird things; you can then bind them at the config level as usual:
class MyOwnSerializer extends Serializer {

  // If you need logging here, introduce a constructor that takes an ExtendedActorSystem:
  // class MyOwnSerializer(actorSystem: ExtendedActorSystem) extends Serializer
  // and get a logger using:
  // private val logger = Logging(actorSystem, this)

  // This is whether "fromBinary" requires a "clazz" or not
  def includeManifest: Boolean = true

  // Pick a unique identifier for your Serializer;
  // you've got a couple of billions to choose from,
  // 0 - 40 is reserved by Akka itself
  def identifier = 1234567

  // "toBinary" serializes the given object to an Array of Bytes
  def toBinary(obj: AnyRef): Array[Byte] = {
    // Put the code that serializes the object here
    Array[Byte]()
  }

  // "fromBinary" deserializes the given array,
  // using the type hint (if any, see "includeManifest" above)
  def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = {
    // Put your code that deserializes here
    null
  }
}
But this raises an important question: if your messages all reference data that is already shared on the machines, why would you want to put a pointer to the object in the message (very bad! messages should be immutable, and a pointer isn't), rather than some sort of immutable string objectId (kind of your option 1)? That is a much better option when it comes to preserving the immutability of the messages, and there is little change to your business logic (just put a wrapper over the shared state storage).
For more info, see the documentation.
I finally went with the solution proposed by Diego and want to share some more details on my reasoning and solution.
First of all, I am also in favor of option 1 (handling the "compaction" of messages in the business layer) for those reasons:
Serializers are global to the actor system. Making them stateful is actually a severe violation of Akka's very philosophy, as it goes against the encapsulation of behavior and state in actors.
Serializers have to be created upfront anyway (even when adding them "programmatically").
Design-wise, one can argue that "message compaction" is not a responsibility of the serializer either. In a strict sense, serialization is merely the transformation of runtime-specific data into a compact, exchangeable representation. Changing what to serialize is not the task of a serializer, though.
Having settled on this, I still strove for a clear separation of "message compaction" and the actual business logic in the actors. I came up with a neat way to do this in Scala, which I want to share here. The basic idea is to make the message itself look like a normal case class but still allow these messages to "compactify" themselves. Here is an abstract example:
class Sender extends Actor {
  def sharedContext: SharedContext = ... // This is the shared data present on every node.
  // ...
  def someBusinessLogic(receiver: ActorRef) {
    val someData = computeData
    receiver ! MyMessage(someData)
  }
}

class Receiver extends Actor {
  implicit def sharedContext: SharedContext = ... // This is the shared data present on every node.
  def receive = {
    case MyMessage(someData) =>
      // ...
  }
}

object Receiver {
  object MyMessage {
    def apply(someData: SomeData) = MyCompactMessage(someData)
    def unapply(myCompactMessage: MyCompactMessage)(implicit context: SharedContext): Option[SomeData] =
      Some(myCompactMessage.someData(context))
  }
}
As you can see, the sender and receiver code feels just like using a case class and in fact, MyMessage could be a case class.
However, by implementing apply and unapply manually, you can insert your own "compactification" logic and implicitly inject the shared data necessary for the "uncompactification", without touching the sender and receiver. For defining MyCompactMessage, I found Protocol Buffers especially suitable, as it is already a dependency of Akka and is efficient in terms of space and computation, but any other solution would do.

What is meant by serializing from one VM to another when using JPA

I am reading about JPA 2.0 and encountered the following sentence:
We have used the transient modifier instead of the @Transient annotation so that
if the Employee gets serialized from one VM to another then the translated name
will get reinitialized to correspond to the locale of the new VM.
@Entity
public class Employee {
    @Id private int id;
    private String name;
    private long salary;
    transient private String translatedName;
    // ...

    public String toString() {
        if (translatedName == null) {
            translatedName = ResourceBundle.getBundle("EmpResources").getString("Employee");
        }
        return translatedName + ": " + id + " " + name;
    }
}
What I understood is that when we use the @Entity annotation and the container encounters it, it calls the JPA provider, which does the work, such as mapping id to the ID column in the database. Although we didn't put the @Column annotation on name and salary, by default they map to the NAME and SALARY columns. We used transient on translatedName, so JPA leaves it as it is and applies no mapping to it; it's just a field in this class. But I am unable to understand the sentence
if the Employee gets serialized from one VM to another
Could someone please explain it to me? Also, is what I described above about the workflow of JPA correct, i.e., what happens when the container encounters the @Entity annotation?
Thanks
When a class implements the java.io.Serializable interface, instances of this class are serializable. That means that the JVM can transform the object into a sequence of bytes. These bytes can be sent over the network, or saved on a disk, and can be read by another VM and transformed back into a Java object.
If a field has the transient Java keyword, it means that this field will be ignored by this serialization mechanism. The field won't be serialized.
A field annotated with @Transient is considered a non-persistent field by JPA. It won't be saved in the database, and it won't be loaded from the database. But it will be serialized if the object is sent to another JVM.
The Java transient keyword automatically makes a field @Transient as far as JPA is concerned. This means that a transient field won't be serialized, and won't be saved by JPA either.
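To make the mechanism concrete, here is a tiny self-contained round trip: a serializable object is turned into bytes and restored, which is exactly what happens when an object travels from one VM to another:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        // Serialize: object -> bytes (what would be sent over the network)
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject("hello from VM 1"); // String implements Serializable
        }
        // Deserialize: bytes -> object (what the receiving VM would do)
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            System.out.println(in.readObject()); // prints: hello from VM 1
        }
    }
}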
In the "JEE5 world" you can use detached entities as you would have used transfer objects. (I am not judging whether this is a good idea or not!)
Thus you can call for example a service method (e.g. EJB 3 SLSB method) that returns an instance of Employee remotely with the usual remote-call semantics regarding serialization.
It should be noted that if an instance of Employee was serialized successfully, then your Java runtime might be broken, as the class does not implement Serializable.
If you don't want to save the state of your entity attribute to the DB and also don't want the state to be transferred to another JVM, then use the transient keyword.
If you don't want to save the state of your entity attribute to the DB, but do want the state to be transferred to another JVM, then use the @Transient annotation.
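Putting both rules into one sketch (based on the Employee entity above; the cache field is made up purely for illustration):

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Transient;

@Entity
public class Employee implements Serializable {
    @Id private int id;

    private String name;            // persisted by JPA and serialized

    @Transient
    private String translatedName;  // NOT persisted by JPA, but IS serialized

    transient private String cache; // neither serialized nor persisted (made-up field)
}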

Expected behaviour of a Repository

I'm writing an ORM and am unsure of the expected behaviour of the Repository, or more precisely, the frontier between the Repository and the Unit Of Work.
From my understanding, a Repository might look like this:
interface IPersonRepository
{
    public function find(Criteria criteria);
    public function add(Person person);
    public function delete(Person person);
}
According to Fowler (PoEAA, page 322):
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. [...] Objects can be added to and removed from the Repository, as they can from a simple collection of objects.
This would imply that the following test should work (assuming that we already have a Person persisted, whose last name is Fowler):
collection = repository.find(lastnameEqualsFowlerCriteria);
person = collection[0];
assertEquals(person.lastname, "Fowler");
person.lastname = "Evans";
newCollection = repository.find(lastnameEqualsFowlerCriteria);
assertFalse(newCollection.contains(person));
That means that when mapping to a database, even if no explicit save() method has been called anywhere, the Person model must have been automatically persisted by the Repository, so that the next query returns the correct collection, which no longer contains the original Person.
But, isn't that the role of the Unit Of Work, to decide which model to persist to the database, and when?
In the above implementation, the Repository has to decide to persist the Person previously retrieved when receiving another find() call, so that the result is consistent with the modification. But if no other find() call were issued, the model would not have been persisted implicitly at all.
In the context of a Unit Of Work, it is not really a problem, because we can start a transaction at the beginning, and rollback any insert to the db anyway if needed.
But when used alone, can't this Repository lead to unexpected, unpredictable behaviour?
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. [...] Objects can be added to and removed from the Repository, as they can from a simple collection of objects.
This does not mean you do not need a save method. You still need to explicitly commit your changes to storage.
See The Unit Of Work Pattern And Persistence Ignorance
public interface IUnitOfWork {
    void MarkDirty(object entity);
    void MarkNew(object entity);
    void MarkDeleted(object entity);
    void Commit();
    void Rollback();
}
In a way, you can think of the Unit of Work as a place to dump all transaction-handling code. The responsibilities of the Unit of Work are to:
Manage transactions.
Order the database inserts, deletes, and updates.
Prevent duplicate updates. Inside a single usage of a Unit of Work object, different parts of the code may mark the same Invoice object as changed, but the Unit of Work class will only issue a single UPDATE command to the database (see the sketch below).
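A minimal Java sketch of that deduplication (the interface above is pseudocode; these names are illustrative):

import java.util.LinkedHashSet;
import java.util.Set;

public class SimpleUnitOfWork {
    // A Set holds each entity at most once, so marking the same Invoice
    // dirty twice still produces a single UPDATE at commit time.
    private final Set<Object> dirty = new LinkedHashSet<>();

    public void markDirty(Object entity) {
        dirty.add(entity); // duplicate calls are no-ops
    }

    public void commit() {
        for (Object entity : dirty) {
            // issue exactly one UPDATE per distinct entity (SQL elided)
        }
        dirty.clear();
    }
}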
I think what you're asking about is the following: http://martinfowler.com/eaaCatalog/identityMap.html
A Repository should keep fetched objects in memory, and subsequent calls for that entity should not go back to persistent storage; hence your example should work fine.
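A minimal sketch of an Identity Map inside the repository (Person and its loading code are assumed from the examples above):

import java.util.HashMap;
import java.util.Map;

public class PersonRepository {
    // Identity Map: id -> the single in-memory instance for that row.
    private final Map<Integer, Person> identityMap = new HashMap<>();

    public Person findById(int id) {
        Person cached = identityMap.get(id);
        if (cached != null) {
            return cached; // repeated finds yield the same object
        }
        Person loaded = loadFromStorage(id);
        identityMap.put(id, loaded);
        return loaded;
    }

    private Person loadFromStorage(int id) {
        // hypothetical: fetch the row and map it to a Person
        throw new UnsupportedOperationException("persistence elided in this sketch");
    }
}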

Object inside of Object

What is it called when an object has an object of the same type inside of itself?
Example:
public class Foo {
    public Foo myFoo;
}
I don't think there's any specific name for this, although the concept is used in many common programming constructs. For instance, when representing a graph, tree, or linked list, the nodes usually have references to the other nodes that they are linked/connected to.
It means that Foo is a "recursive data structure". Examples of this are trees, graphs, linked lists, etc. There aren't many significant programs written that don't use at least some recursive structures; e.g., in any SQL server implementation it's pretty common that the query plan that gets executed is defined in a similar way. As a tiny example, the WHERE clause might get translated to a FilterNode that acts on data received from some other Node (like a table scan):
public interface Node { }

public class FilterNode implements Node {
    public Node underlyingNode;
    public Condition filterCondition;
}
In many cases the overall structure forms a directed acyclic graph, which means it's easy to traverse it recursively and safely. But if it has cycles, then you need to be careful not to fall into infinite recursion (which is what another answer here humorously warns about).
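A sketch of such a guard over the Node structure above, using a visited set so that a cycle terminates instead of recursing forever:

import java.util.HashSet;
import java.util.Set;

public class NodeWalker {
    public void walk(Node node, Set<Node> visited) {
        if (node == null || !visited.add(node)) {
            return; // null child, or a node we've already seen (cycle)
        }
        if (node instanceof FilterNode) {
            walk(((FilterNode) node).underlyingNode, visited);
        }
    }
}

Calling new NodeWalker().walk(root, new HashSet<>()) visits each node at most once.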
Recursive containment.... :)
To add to what Kibbee said, this is a type of composite pattern.