When should / shouldn't we serialize a class in Spark?

I have a class that reads from a file in HDFS and tries to create a graph from that. I do some transformations on the file in the class initialization that don't work unless I make the class serializable.
class GraphLoader(path: String, sc: SparkContext)
  extends java.io.Serializable {

  val records = sc.textFile(path).map(x => x.split(",")).filter(x => x(0) == "1" || x(0) == "2")
  records.cache()

  val people: RDD[(Long, PersonProperty)] = records.
    flatMap(line => List(line(1).safeToLong, line(4).safeToLong)).
    map(number => (number, PersonProperty("default")))
  ...
safeToLong is basically a method I defined in an implicit class to convert Strings to Longs and deal with any exceptions I encounter.
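Roughly, it looks something like this (a simplified sketch; the class name and the fallback value are just illustrative, and the real version does more error handling):

implicit class SafeStringOps(s: String) {
  // Fall back to a default value instead of throwing on malformed input (illustrative only).
  def safeToLong: Long = try s.trim.toLong catch { case _: NumberFormatException => 0L }
}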
It won't run without extending Serializable, and that bothers me because it feels like a pretty heavy thing to pass around. Is there a better/more elegant way to do this?

Spark is an engine for distributed (cluster) computing, which inherently requires communication among different nodes (JVMs). This communication in turn requires serialization, because every time a class or object leaves its JVM, it must be serialized.
The bottom line is that most of the Spark code you write will need to be serializable. Any code that isn't serializable cannot take advantage of Spark's distributed nature. Rather than avoiding it, you should tune serialization to keep your Spark application efficient.
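For example, a common tuning step is to switch from default Java serialization to Kryo and register the classes that actually travel between nodes. A minimal sketch, assuming Spark 1.2+ (where SparkConf.registerKryoClasses is available) and reusing PersonProperty from the question:

import org.apache.spark.{SparkConf, SparkContext}

// Use Kryo instead of Java serialization and register the classes that get shipped to executors.
val conf = new SparkConf()
  .setAppName("GraphLoader")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[PersonProperty]))

val sc = new SparkContext(conf)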

Related

When is a class a data class?

I know what classes are about, but for better understanding I need a use case. Recently I discovered the construct of data classes. I get the idea behind normal classes, but I cannot imagine a real use case for data classes.
When should I use a data class and when should I use a "normal" class? After all, every class keeps data.
Can you provide a good example that distinguishes data classes from non-data classes?
A data class is used to store data. It's lighter than a normal class and can be compared to a key/value structure (dictionary, hash, etc.), but represented as an object with fixed attributes. In Kotlin, according to the documentation, declaring a class as a data class adds the following members to it:
equals()/hashCode() pair
toString() of the form "User(name=John, age=42)"
componentN() functions corresponding to the properties in their order of declaration.
copy() function
It also behaves differently with regard to class inheritance:
If there are explicit implementations of equals(), hashCode(), or toString() in the data class body or final implementations in a superclass, then these functions are not generated, and the existing implementations are used.
If a supertype has componentN() functions that are open and return compatible types, the corresponding functions are generated for the data class and override those of the supertype. If the functions of the supertype cannot be overridden due to incompatible signatures or due to their being final, an error is reported.
Providing explicit implementations for the componentN() and copy() functions is not allowed.
So in Kotlin, if you want to describe an object (some data), you may use a data class; but if you're creating a complex application and your class needs special behavior in the constructor, or relies on inheritance or abstraction, then you should use a normal class.
I do not know Kotlin, but in Python a dataclass can be seen as a structured dict. When you want to use a dict to store an object that always has the same attributes, you should not put it in a dict but use a dataclass.
The advantage compared with a normal class is that you don't need to declare the __init__ method, as it is generated automatically.
Example :
This is a normal class
class Apple:
    def __init__(self, size: int, color: str, sweet: bool = True):
        self.size = size
        self.color = color
        self.sweet = sweet
Same class as a dataclass
from dataclasses import dataclass

@dataclass
class Apple:
    size: int
    color: str
    sweet: bool = True
The advantage compared to a dict is that you are sure which attributes it has, and it can also contain methods.
The advantage over a normal class is that it is simpler to declare and makes the code lighter. We can see that the attribute names (e.g. size) are repeated three times in the normal class, but appear only once in the dataclass.
The advantage of a normal class is that you can customize the __init__ method (you can in a dataclass too, but then you lose its main advantage, I think). For example:
# You need only 2 variables to initialize the class...
class Apple:
    def __init__(self, size: int, color: str):
        self.size = size
        self.color = color
        # ...but you get much more info from those two.
        self.sweet = True if color == 'red' else False
        self.weight = self.__compute_weight()
        self.price = self.weight * PRICE_PER_GRAM

    def __compute_weight(self):
        # ...
        return (self.size ** 2) * 10  # That's a random example
Abstractly, a data class is a pure, inert information record that doesn’t require any special handling when copied or passed around, and it represents nothing more than what is contained in its fields; it has no identity of its own. A typical example is a point in 3D space:
data class Point3D(
val x: Double,
val y: Double,
val z: Double
)
As long as the values are valid, an instance of a data class is entirely interchangeable with its fields; it can be taken apart and rematerialized at will. Often there is little use even for encapsulation: users of the data class can just access the instance's fields directly. The Kotlin language provides a number of convenience features when data classes are declared as such in your code, which are described in the documentation. These are useful, for example, when building more complex data structures out of data classes: you can have a hashmap assign values to particular points in space and then look up a value using a newly constructed Point3D.
val map = HashMap<Point3D, String>()
map[Point3D(3.0, 4.0, 5.0)] = "point of interest"
println(map[Point3D(3.0, 4.0, 5.0)]) // prints "point of interest"
For an example of a class that is not a data class, take FileReader. Underneath, this class probably keeps some kind of file handle in a private field, which you can assume to be an integer (as it actually is on at least some platforms). But you cannot expect to store this integer in a database, have another process read that same integer from the database, reconstruct a FileReader from it and expect it to work. Passing file handles between processes requires more ceremony than that, if it is even possible on a given platform. That property makes FileReader not a data class. Many examples of non-data classes will be of this kind: any class whose instances represent transient, local resources like a network connection, a position within a file or a running process, cannot be a data class. Likewise, any class where different instances should not be considered equal even if they contain the same information is not a data class either.
From the comments, it sounds like your question is really about why non-data classes exist in Kotlin and why you would ever choose not to make a data class. Here are some reasons.
Data classes are a lot more restrictive than a regular class:
They have to have a primary constructor, and every parameter of the primary constructor has to be a property.
They cannot have an empty primary constructor.
They cannot be open so they cannot be subclassed.
Here are other reasons:
Sometimes you don't want a class to have a copy function. If a class holds onto some heavy state that is expensive to copy, maybe it shouldn't advertise that it should be copied by presenting a copy function.
Sometimes you want to use an instance of a class in a Set or as Map keys without two different instances being considered as equivalent just because their properties have the same values.
The features of data classes are useful specifically for simple data holders, so the drawbacks are often something you want to avoid.

Are serializers the right spot to remove shared state from Akka messages?

I am working on a distributed algorithm and decided to use Akka to scale it across machines. The machines need to exchange messages very frequently, and these messages reference some immutable objects that exist on every machine. Hence, it seems sensible to "compress" the messages in the sense that the shared, replicated objects should not be serialized into the messages. Not only would this save network bandwidth, it would also avoid creating duplicate objects on the receiver side whenever a message is deserialized.
Now, my question is how to do this properly. So far, I could think of two options:
Handle this in the "business layer", i.e., convert my original message objects into reference objects that replace references to the shared, replicated objects with symbolic references. Then I would send those reference objects rather than the original messages. Think of it as replacing an actual web resource with a URL. Doing this seems rather straightforward in terms of coding, but it also drags serialization concerns into the actual business logic.
Write custom serializers that are aware of the shared, replicated objects. In my case it would be acceptable for this solution to introduce the replicated, shared objects as global state to the actor system via the serializers. However, the Akka documentation does not describe how to programmatically add custom serializers, which would be necessary to weave the shared objects into the serializer. Also, I could imagine that there are a couple of reasons why such a solution would be discouraged. So I am asking here.
Thanks a lot!
It's possible to write your own custom serializers and let them do all sorts of weird things; you then bind them at the config level as usual:
class MyOwnSerializer extends Serializer {

  // If you need logging here, introduce a constructor that takes an ExtendedActorSystem:
  //   class MyOwnSerializer(actorSystem: ExtendedActorSystem) extends Serializer
  // and get a logger using:
  //   private val logger = Logging(actorSystem, this)

  // Whether "fromBinary" requires a "clazz" or not
  def includeManifest: Boolean = true

  // Pick a unique identifier for your Serializer;
  // you've got a couple of billion to choose from,
  // 0 - 40 is reserved by Akka itself
  def identifier = 1234567

  // "toBinary" serializes the given object to an Array of Bytes
  def toBinary(obj: AnyRef): Array[Byte] = {
    // Put the code that serializes the object here
    Array[Byte]()
  }

  // "fromBinary" deserializes the given array,
  // using the type hint (if any, see "includeManifest" above)
  def fromBinary(
      bytes: Array[Byte],
      clazz: Option[Class[_]]): AnyRef = {
    // Put your code that deserializes here
    null
  }
}
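The binding itself then goes into your configuration, roughly like this (the class and message names here are placeholders for your own):

akka {
  actor {
    serializers {
      myown = "example.MyOwnSerializer"
    }
    serialization-bindings {
      "example.MyMessage" = myown
    }
  }
}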
But this raises an important question: if your messages all reference data that is already shared on the machines, why would you want to put a pointer to the object in the message (very bad! messages should be immutable, and a pointer isn't!) rather than some sort of immutable string objectId (kind of your option 1)? That is a much better option when it comes to preserving the immutability of the messages, and there is little change to your business logic (just put a wrapper over the shared-state storage).
For more info, see the documentation.
I finally went with the solution proposed by Diego and want to share some more details on my reasoning and solution.
First of all, I am also in favor of option 1 (handling the "compaction" of messages in the business layer) for those reasons:
Serializers are global to the actor system. Making them stateful is actually a severe violation of Akka's philosophy, as it goes against the encapsulation of behavior and state in actors.
Serializers have to be created upfront anyway (even when adding them "programmatically").
Design-wise, one can argue that message compaction is not a responsibility of the serializer either. In a strict sense, serialization is merely the transformation of runtime-specific data into a compact, exchangeable representation; changing what gets serialized is not a serializer's task.
Having settled on this, I still strove for a clear separation of "message compaction" from the actual business logic in the actors. I came up with a neat way to do this in Scala, which I want to share here. The basic idea is to make the message itself look like a normal case class but still allow these messages to "compactify" themselves. Here is an abstract example:
class Sender extends Actor {
  // Named sharedContext to avoid clashing with Actor.context.
  def sharedContext: SharedContext = ... // the shared data present on every node
  // ...
  def someBusinessLogic(receiver: ActorRef): Unit = {
    val someData = computeData
    receiver ! MyMessage(someData)
  }
}

class Receiver extends Actor {
  implicit def sharedContext: SharedContext = ... // the shared data present on every node

  def receive = {
    case MyMessage(someData) =>
      // ...
  }
}

object Receiver {
  object MyMessage {
    def apply(someData: SomeData) = MyCompactMessage(someData: SomeData)
    def unapply(myCompactMessage: MyCompactMessage)(implicit context: SharedContext)
        : Option[SomeData] =
      Some(myCompactMessage.someData(context))
  }
}
As you can see, the sender and receiver code feels just like using a case class, and in fact MyMessage could be one.
However, by implementing apply and unapply manually, one can insert one's own "compactification" logic and also implicitly inject the shared data necessary for "uncompactification", without touching the sender and receiver. For defining MyCompactMessage, I found Protocol Buffers especially well suited, as it is already a dependency of Akka and is efficient in terms of space and computation, but any other solution would do.
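For illustration only, a plain-Scala version of MyCompactMessage could look like the following sketch (SomeData.id and SharedContext.lookup are assumed helpers, not part of the code above; my actual implementation uses a Protocol Buffers message instead):

case class MyCompactMessage(someDataId: String) {
  // Resolve the shared, replicated object on the receiving node
  // instead of serializing it into the message itself.
  def someData(context: SharedContext): SomeData = context.lookup(someDataId)
}

object MyCompactMessage {
  // Matches the MyCompactMessage(someData) call in MyMessage.apply above.
  def apply(someData: SomeData): MyCompactMessage = new MyCompactMessage(someData.id)
}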

OOAD - File-Format-Reader class vs Object-Model class: which should depend on which?

Let's consider, as an example, the domain of GPS and Geographical (GIS) entities.
We would model the meaningful geographic entities (points, paths, regions) as classes in any desired programming language, and these classes would be a conceptual, "implementation-free" representation of these entities.
On the other hand, there are a lot of file formats that save these features with more or less the same meaning. In the GPS domain the most common file formats are GPX, KML, ShapeFile, WellKnownText, etc.
Supposing, then, I want to create a GpsFeatureCollection class which would contain a Points property, a Paths property, and so on. Also, I would implement classes like GpsReader, KmlReader, ShapeFileReader (and their respective Writers) and so on.
THE QUESTION IS:
Which is the best practice in OOAD:
Have GpsFeatureCollection instantiate a FileFormat(Reader/Writer) class?
Have GpsFeatureCollection implement Read/WriteFromFormat methods instead of separate classes?
Have each file-format reader instantiate an empty GpsFeatureCollection, populate it with data read from the file, then pass the populated object back as a return value?
Have a mediator class to avoid any dependency between FileFormatClass and ObjectModelClass?
None of the above?
"Well, it depends..."
I am really interested in doing "the right thing". My immediate plans are to use Python, but most probably this would matter for other languages too. This is causing some "analysis paralysis" in my pet project currently...
Here is my take, wherein I pass reader and writer instances to read() and write() methods; this achieves a good level of decoupling while still providing the flexibility to pick various readers and writers.
Code uses Java-like syntax
Declare a Reader interface; we assume multiple implementations such as KMLReader, ShapeFileReader, etc.

interface Reader {
    GpsFeatureCollection read();
}

Declare a Writer interface; we assume multiple implementations such as KMLWriter, ShapeFileWriter, etc.

interface Writer {
    void write(GpsFeatureCollection c);
}
Let's declare the GpsFeatureCollection class with read and write methods that accept the respective interfaces as parameters to do the job: a static read factory plus an instance-level write (write cannot be static, since it passes this to the writer).

class GpsFeatureCollection {
    ...
    public static GpsFeatureCollection read(Reader r) {
        return r.read();
    }

    public void write(Writer w) {
        w.write(this);
    }
}
Some examples of usage with different readers and writers:

// Reading data
GpsFeatureCollection data = GpsFeatureCollection.read(new ShapeFileReader("/tmp/shapefile"));

// Writing data
data.write(new KMLWriter("/tmp/kmlfile"));

Is protobuf-net suited for serializing arbitrary object/domain models?

I have been exploring the CQRS/DDD-principles and patterns for a while now and have started implementing a sample project where I have split my storage-model into a WriteModel and a ReadModel. The WriteModel will use a simple NoSQL-like database where aggregates are stored in a key-value style, with value being just a serialized version of the aggregate.
I am now looking at ProtoBuf-Net for serializing and deserializing my domain model aggregates in and out of storage. Other than this post I haven't found any guidance or tips for using ProtoBuf-Net in this area. The point is that the (ideal) requirements for serialization and deserialization of aggregates are that the domain model should have as little knowledge as possible about this infrastructural concern, which implies the following:
No attributes on the classes
No constructors, getters, setters or any other piece of code just for the sake of serialization.
Ability to use any (custom) type possible and have it serialized/deserialized.
Thus far I have implemented just the serialization of the first versions of my aggregates which works perfectly fine. I use the RuntimeTypeModel.Default-instance to configure the MetaModel at runtime and have UseConstructor = false everywhere, which enables me to completely separate the serialization mechanics from my domain-assembly. I have even implemented a custom post-deserialization mechanism that enables me to just-in-time initialize fields after ProtoBuf-Net has deserialized it into a valid instance. So suppose I have class AggregateA like so:
[Version(1)]
public sealed class AggregateA
{
    private readonly int _x;
    private readonly string _y;
    ...
}
Then in my serialization-library I have code something along the following lines:
var metaType = RuntimeTypeModel.Default.Add(typeof(AggregateA), false);
metaType.UseConstructor = false;
metaType.AddField(1, "_x");
metaType.AddField(2, "_y");
...
However, I realize that up to this point I have only implemented the basic scenario, and I am now starting to think about how to approach versioning of my model. I am particularly interested in larger refactoring scenarios where type A has been split into types A1 and A2, for example:
[Version(2)]
public sealed class AggregateA1
{
    private readonly int _x;
    ...
}

[Version(2)]
public sealed class AggregateA2
{
    private readonly string _y;
    ...
}
Suppose I have a bunch of serialized instances of AggregateA, but my domain model now knows only AggregateA1 and AggregateA2. How would you handle this scenario with ProtoBuf-Net?
A second question deals with point 3: is ProtoBuf-Net capable of handling arbitrary types if you're willing to put in some extra configuration-effort? I've read about exceptions raised when using the DateTimeOffset-type, which makes me think not all types can be serialized by the framework out-of-the-box, but can I serialize these types by registering them in the RuntimeTypeModel? Should I even want to go there? Or better to forget about serializing common .NET types other than the simple ones?
protobuf-net is intended to work with predictable known models. It is true that everything can be configured at runtime, but I have not put any thought as to how to handle your A1/A2 scenario, precisely because that is not a supported scenario (in my defense, I can't see that working nicely with most serializers). Thinking off the top of my head, if you have the configuration/mapping data somewhere, then you could simply deserialize twice; i.e. as long as we still tell it that AggregateA1._x maps to 1 and AggregateA2._y maps to 2, you can do:
object a1 = model.Deserialize(source, null, typeof(AggregateA1));
source.Position = 0; // rewind
object a2 = model.Deserialize(source, null, typeof(AggregateA2));
However, more complex tweaks would require additional thought.
Re "arbitrary types"... define "arbitrary" ;p In particular, there is support for "surrogate" types which can be useful for some transformations - but without a very specific "problem statement" it is hard to answer completely.
Summary:
protobuf-net has an intended usage, which includes both serialization-aware (attributed, etc) and non-aware scenarios (runtime configuration, etc) - but it also works for a range of more bespoke scenarios (letting you drop to the raw reader/writer API if you want to). It does not and cannot guarantee to be a direct fit for every serialization scenario imaginable, and how well it behaves will depend on how far from that scenario you are.

Copying NHibernate POCO to DTO without triggering lazy load or eager load

I need to create DTOs from NHibernate POCO objects. The problem is that the POCO objects contain dynamic proxies, which should not be copied to the DTOs. I eager-load in advance all the collections and references I need to transfer; I don't want NHibernate to start loading referenced collections that I did not load in advance.
Several similar questions on SO received answers which either:
Suggest Session.GetSessionImplementation().PersistenceContext.Unproxy();
Suggest turning off Lazy Loading.
In my case the first suggestion is irrelevant, as according to my understanding it causes eager loading to replace the proxies. In reality, it doesn't even work - it doesn't remove the proxies in my objects. (Any explanation why?)
The second suggestion, turning off lazy loading, seems to cause all references and collections to eager-load, basically loading the entire DB. My expectation was that if lazy loading is off and I have not requested a collection, it would not be loaded. (Am I correct that NHibernate offers no such option?)
I am using NHibernate 3.3.1 with fluent configuration.
To reiterate my main question, I need to create DTOs clean of proxies, copied from POCOs which contain proxies, and I don't want to load the data behind those proxies.
Any helpful suggestion which includes example code and automates the process with ValueInjecter / AutoMapper will be immensely helpful.
Edit #1:
Following Roger Alsing's suggestion to use projections, I realized that what I'm actually looking for is a ValueInjecter-like convention based mapping. Here is why. Initially, my DTOs will be defined the same as the model's POCOs. This is due to a large code base which depends on the existing POCOs being transferred on the client-side project.
Using projections, I will have to specify which subset of fields has to be copied, and this subset may be different in each context (as, ideally, a DTO would differ). This would mean a lot of new code introduced on the server side, which is why I am considering the second option.
Using ValueInjecter, I will be able to populate the DTOs by convention in one call, without writing specific projections, or having to maintain those into the future. That is, if I am able to have ValueInjecter ignore Proxy objects.
Given that using projections is a good but not ideal solution in my situation, is there a way to configure something like ValueInjecter to copy POCOs without copying proxies or triggering eager/lazy loads on copy?
I solve this by selecting DTOs as projections using LINQ or whatever query language the O/R mapper may have.
e.g.
return from c in customers
       select new CustomerDTO
       {
           Name = c.Name,
           Orders = c.Orders.Select(o => new OrderDTO { ... })
       };
This way, you don't need to resort to reflection magic or any other fancy stuff.
And the query fetches exactly what you need in one go; thus, this is usually much more efficient than fetching entities and then transforming them into DTOs in memory.
(It can be less efficient in some cases, when the resulting SQL query contains extra joins for whatever reason.)
I'm using the following ValueResolver with AutoMapper:
/// <summary>
/// ValueResolver that will set NHibernate proxy objects to null, instead of triggering a lazy load of the object
/// </summary>
public class IgnoreNHibernateProxyValueResolver : IValueResolver
{
    public ResolutionResult Resolve(ResolutionResult source)
    {
        var prop = source.Type.GetProperty(source.Context.MemberName).GetValue(source.Value, null);

        var proxy = prop as INHibernateProxy;
        if (proxy != null && proxy.HibernateLazyInitializer.IsUninitialized)
        {
            return source.Ignore();
        }

        return source.New(prop);
    }
}
For a ValueInjecter solution, I recommend using SmartConventionInjection (you need to copy the code from the linked page into your solution) and then specifying a convention that won't touch the proxy properties.
Here's a start:
public class MapPoco : SmartConventionInjection
{
    protected override bool Match(SmartConventionInfo c)
    {
        return c.SourceProp.Name == c.TargetProp.Name;
    }
}
Take a look at Projections in "Introduction to QueryOver in NH 3.0":
CatSummary summaryDto = null;
IList<CatSummary> catReport =
    session.QueryOver<Cat>()
        .SelectList(list => list
            .SelectGroup(c => c.Name).WithAlias(() => summaryDto.Name)
            .SelectAvg(c => c.Age).WithAlias(() => summaryDto.AverageAge))
        .TransformUsing(Transformers.AliasToBean<CatSummary>())
        .List<CatSummary>();