Jackson: how to expose fields when serializing a class which extends a collection?

I have a class that we use for paginated results, as follows:
public class PaginatedList<T> extends LinkedList<T> {
    private int offset;
    private int count;
    private int totalResultCount;
    //...
}
and I'd like Jackson to serialize it like this:
{
  "results":[1,2,3],
  "offset":0,
  "count":3,
  "totalResultCount":15
}
(where the parent list contains the three integer values 1,2 and 3.)
In my first attempt I discovered that Jackson effectively ignores any properties on classes which are assignable to a Collection class. In hindsight, this makes sense, and so I'm now in search of a workaround. A search of SO resulted in two similar questions:
jackson-serialization-includes-subclasss-fields
jaxb-how-to-serialize-fields-in-a-subclass-of-a-collection
However, both of these resulted in the suggestion to switch from inheritance to composition.
I am specifically looking for a solution that allows the class to extend a collection. This 'PaginatedList' class is part of the common core of the enterprise, and extends Collection so that it can be used (and introspected) as a collection throughout the code. Changing to composition isn't an option. That being said, I am free to annotate and otherwise change this class to support serialization as I described above.
So, from what I can tell, there are two parts I'm missing (what I'm looking for in an answer):
How to get Jackson to 'see' the added properties?
How to get Jackson to label the collection's content as a 'results' property in the JSON output?
(PS: I'm only concerned with serialization.)

Ashley Frieze pointed this out in a comment, and deserves the credit for this answer.
I solved this by creating a JsonSerializer instance as follows:
public class PaginatedListSerializer extends JsonSerializer<PaginatedList> {

    @Override
    public Class<PaginatedList> handledType() {
        return PaginatedList.class;
    }

    @Override
    public void serialize(PaginatedList value, JsonGenerator jgen, SerializerProvider provider)
            throws IOException, JsonProcessingException {
        jgen.writeStartObject();
        jgen.writeArrayFieldStart("results");
        for (Object entry : value) {
            jgen.writeObject(entry);
        }
        jgen.writeEndArray();
        jgen.writeNumberField("offset", value.offset);
        jgen.writeNumberField("count", value.count);
        jgen.writeNumberField("totalResultCount", value.totalResultCount);
        jgen.writeEndObject();
    }
}
and, of course, register it as a module:
SimpleModule testModule = new SimpleModule("PaginatedListSerializerModule", new Version(1, 0, 0, null, null, null));
testModule.addSerializer(new PaginatedListSerializer());
mapper.registerModule(testModule);
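For completeness, here is a minimal end-to-end sketch. It assumes Jackson 2.x (com.fasterxml) packages, which matches the six-argument Version constructor used above; how the pagination fields get populated is illustrative and not part of the original question.

import java.util.Arrays;

import com.fasterxml.jackson.core.Version;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;

public class PaginatedListDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Register the custom serializer exactly as described above.
        SimpleModule testModule = new SimpleModule("PaginatedListSerializerModule",
                new Version(1, 0, 0, null, null, null));
        testModule.addSerializer(new PaginatedListSerializer());
        mapper.registerModule(testModule);

        // Populate the list; setting offset/count/totalResultCount is assumed
        // to happen elsewhere (setters or constructor on PaginatedList).
        PaginatedList<Integer> page = new PaginatedList<>();
        page.addAll(Arrays.asList(1, 2, 3));

        // Prints the object form, e.g.
        // {"results":[1,2,3],"offset":...,"count":...,"totalResultCount":...}
        System.out.println(mapper.writeValueAsString(page));
    }
}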

Related

DDD ValueObject and Enumeration , is there any good way to implement serialization?

In DDD, Value Objects and Enumerations are quite elegant, so I want to use both in everyday program logic, not only in domain logic. When using customized value objects and enumerations, a serialization problem appears: should I implement a System.Text.Json.JsonConverter<T> for every value object and enumeration, or is there a better way to handle serialization and deserialization?
Update:
To make it clear, an Enumeration demo is below (the ValueObject-derived classes look the same):
[JsonConverter(typeof(CustomizedConverter))]
public class CustomizedEnumeration1 : Enumeration
{
    public string Customized { get; protected set; }
    public ... // some other customized property or class

    public CustomizedEnumeration1(int id, string name, string customized) : base(id, name)
    {
        Customized = customized;
    }
}
public class Customized2 : Enumeration
{ ... }
public class OtherCustomized : Enumeration
{ ... }
In DDD, properties are often sealed behind protected/private setters, so deserialization has no way to set their values. Many derived classes therefore don't deserialize as expected, and we end up writing a System.Text.Json.JsonConverter<T> for each one. Rewriting a converter for every derived Enumeration/ValueObject is not good; can anyone point out an easier abstraction for this?
You can achieve your desired result, but you need to switch to Newtonsoft.Json serialization.
Call this in Startup.cs in the ConfigureServices method:
services.AddControllers().AddNewtonsoftJson();
After this, deserialization will call your constructors even for classes with private setters.
There is no need for custom converters.
For reference, I am using ASP.NET Core 3.1.

Repository OO Design - Multiple Specifications

I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
{
TDomainEntity Find(Guid id);
void Add(TDomainEntity entity);
void Update(TDomainEntity entity);
}
We can use various infrastructure implementations in order to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
public TDomainEntity Find(Guid id)
{
// Find, map and return entity using Entity Framework
}
public void Add(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Insert entity using Entity Framework
}
public void Update(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Update entity using Entity Framework
}
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
// I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
// I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
: IRepository<TDomainEntity>
{
void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find(), Add() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
entity.IsDeleted = true;
entity.DeletedDate = DateTime.UtcNow;
// Update entity using Entity Framework
// ...
}
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
bool IsDeleted { get; set; }
DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can start granularising the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository), and aggregate repositories can now expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository). Further to this, it specifies a standardised way in which certain repository actions are done (i.e. the updating of the IsDeleted and DeletedDate columns is universal and not left in the hands of the developer).
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
var deleteableEntity = entity as IDeleteableDataEntity;
if(deleteableEntity != null)
{
deleteableEntity.IsDeleted = true;
deleteableEntity.DeletedDate = DateTime.UtcNow;
entity = deleteableEntity;
}
// Update entity using Entity Framework
// ...
}
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone advise of a better architecture which leverages the C# typing system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting, and updating).
I do think that you're complicating things a bit too much by trying to get the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure; it has no domain value and it's not known outside the persistence layer. So it can have fields it won't ever use; the repository is the only one that knows about them, and it's a persistence detail. You can afford to be 'sloppy' here; things aren't that important at this level.
Even the 'hacky' cast is a good solution because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere; however, we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solution, even if it's a bit smelly.
P.S: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a doc db.

Composition, I don't quite get this?

Referring to the below link:
http://www.javaworld.com/javaworld/jw-11-1998/jw-11-techniques.html?page=2
The composition approach to code reuse provides stronger encapsulation
than inheritance, because a change to a back-end class needn't break
any code that relies only on the front-end class. For example,
changing the return type of Fruit's peel() method from the previous
example doesn't force a change in Apple's interface and therefore
needn't break Example2's code.
Surely if you change the return type of peel() (see code below) this means getPeelCount() wouldn't be able to return an int any more? Wouldn't you have to change the interface, or get a compiler error otherwise?
class Fruit {
// Return int number of pieces of peel that
// resulted from the peeling activity.
public int peel() {
System.out.println("Peeling is appealing.");
return 1;
}
}
class Apple {
private Fruit fruit = new Fruit();
public int peel() {
return fruit.peel();
}
}
class Example2 {
public static void main(String[] args) {
Apple apple = new Apple();
int pieces = apple.peel();
}
}
With composition, changing the class Fruit doesn't necessarily require you to change Apple. For example, let's change peel() to return a double instead:
class Fruit {
// Return String number of pieces of peel that
// resulted from the peeling activity.
public double peel() {
System.out.println("Peeling is appealing.");
return 1.0;
}
}
Now, the class Apple will complain about a loss of precision, but your Example2 class will be just fine, because composition is more "loose" and a change in a composed element does not break the composing class's API. In our example, just change Apple like so:
class Apple {
private Fruit fruit = new Fruit();
public int peel() {
return (int) fruit.peel();
}
}
Whereas if Apple inherited from Fruit (class Apple extends Fruit), you would not only get an incompatible-return-type error on the overriding method, but you'd also get a compilation error in Example2.
** Edit **
Let's start this over and give a "real world" example of composition vs inheritance. Note that composition is not limited to this example, and there are more use cases where you can apply the pattern.
Example 1 : inheritance
An application draw shapes into a canvas. The application does not need to know which shapes it has to draw and the implementation lies in the concrete class inheriting the abstract class or interface. However, the application knows what and how many different concrete shapes it can create, thus adding or removing concrete shapes requires some refactoring in the application.
interface Shape {
public void draw(Graphics g);
}
class Box implements Shape {
...
public void draw(Graphics g) { ... }
}
class Ellipse implements Shape {
...
public void draw(Graphics g) { ... }
}
class ShapeCanvas extends JPanel {
private List<Shape> shapes;
...
protected void paintComponent(Graphics g) {
for (Shape s : shapes) { s.draw(g); }
}
}
Example 2 : Composition
An application is using a native library to process some data. The actual library implementation may or may not be known, and may or may not change in the future. A public interface is thus created and the actual implementation is determined at run-time. For example :
interface DataProcessorAdapter {
...
public Result process(Data data);
}
class DataProcessor {
private DataProcessorAdapter adapter;
public DataProcessor() {
try {
adapter = DataProcessorManager.createAdapter();
} catch (Exception e) {
throw new RuntimeException("Could not load processor adapter");
}
}
public Object process(Object data) {
return adapter.process(data);
}
}
static class DataProcessorManager {
static public DataProcessorAdapter createAdapter() throws ClassNotFoundException, InstantiationException, IllegalAccessException {
String adapterClassName = /* load class name from resource bundle */;
Class<?> adapterClass = Class.forName(adapterClassName);
DataProcessorAdapter adapter = (DataProcessorAdapter) adapterClass.newInstance();
//...
return adapter;
}
}
So, as you can see, composition may offer some advantage over inheritance in the sense that it allows more flexibility in the code. It allows the application to have a solid API while the underlying implementation may still change during its life cycle. Composition can significantly reduce the cost of maintenance if properly used.
For example, when implementing test cases with JUnit for Example 2, you may want to use a dummy processor and would set up the DataProcessorManager to return such an adapter, while using a "real" adapter (perhaps OS dependent) in production without changing the application source code. Using inheritance, you would most likely hack something up, or perhaps write a lot more initialization test code. A sketch of such a dummy adapter follows below.
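As an illustration of that testing idea, here is a minimal, hypothetical sketch. The DataProcessorAdapter, Data and Result types are the ones assumed in the example above, and Result.empty() is a made-up factory method standing in for whatever Result actually offers:

// Test-only adapter: satisfies the DataProcessorAdapter contract without
// touching any native library, so tests stay fast and deterministic.
public class DummyDataProcessorAdapter implements DataProcessorAdapter {
    @Override
    public Result process(Data data) {
        // Return a fixed, predictable result so assertions are deterministic.
        return Result.empty(); // hypothetical factory; substitute whatever Result offers
    }
}

// In the test setup, point DataProcessorManager at this class name (through the
// resource bundle, or whatever configuration source createAdapter() reads), and
// the production DataProcessor code runs unchanged against the dummy adapter.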
As you can see, composition and inheritance differ in many aspects and neither is preferred over the other; each depends on the problem at hand. You could even mix inheritance and composition, for example:
static interface IShape {
public void draw(Graphics g);
}
static class Shape implements IShape {
private IShape shape;
public Shape(Class<? extends IShape> shape) throws InstantiationException, IllegalAccessException {
this.shape = (IShape) shape.newInstance();
}
public void draw(Graphics g) {
System.out.print("Drawing shape : ");
shape.draw(g);
}
}
static class Box implements IShape {
@Override
public void draw(Graphics g) {
System.out.println("Box");
}
}
static class Ellipse implements IShape {
@Override
public void draw(Graphics g) {
System.out.println("Ellipse");
}
}
static public void main(String...args) throws InstantiationException, IllegalAccessException {
IShape box = new Shape(Box.class);
IShape ellipse = new Shape(Ellipse.class);
box.draw(null);
ellipse.draw(null);
}
Granted, this last example is not clean (meaning, avoid it), but it shows how composition can be used.
The bottom line is that in both examples, DataProcessor and Shape are "solid" classes, and their API should not change. However, the adapter classes may change, and if they do, those changes should only affect their composing container, thus limiting the maintenance to only those classes and not the entire application, as opposed to Example 1, where any change requires more changes throughout the application. It all depends on how flexible your application needs to be.
If you would change Fruit.peel()'s return type, you would have to modify Apple.peel() as well. But you don't have to change Apple's interface.
Remember: The interface are only the method names and their signatures, NOT the implementation.
Say you'd change Fruit.peel() to return a boolean instead of an int. Then, you could still let Apple.peel() return an int. So: the interface of Apple stays the same, but Fruit's changed.
If you had used inheritance, that would not be possible: since Fruit.peel() now returns a boolean, Apple.peel() has to return a boolean, too. So: all code that uses Apple.peel() has to be changed, too. In the composition example, ONLY Apple.peel()'s code has to be changed.
The key word in the sentence is "interface".
You'll almost always need to change the Apple class in some way to accommodate the new return type of Fruit.peel, but you don't need to change its public interface if you use composition rather than inheritance.
If Apple is a Fruit (i.e., inheritance) then any change to the public interface of Fruit necessitates a change to the public interface of Apple too. If Apple has a Fruit (i.e., composition) then you get to decide how to accommodate any changes to the Fruit class; you're not forced to change your public interface if you don't want to.
The return type of Fruit.peel() is being changed from int to Peel. This doesn't mean that the return type of Apple.peel() is forced to change to Peel as well. With inheritance it is forced, and any client using Apple has to change. With composition, Apple.peel() still returns an integer, by calling the Peel.getPeelCount() getter, so the client need not change and Apple's interface is not changed (or forced to change).
Well, in the composition case, Apple.peel()'s implementation needs to be updated, but its method signature can stay the same. And that means the client code (which uses Apple) does not have to be modified, retested, and redeployed.
This is in contrast to inheritance, where a change in Fruit.peel()'s method signature would require changes all the way into the client code.
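A minimal sketch of that last point, assuming a hypothetical Peel class with the getPeelCount() getter described above:

// Hypothetical new return type for Fruit.peel().
class Peel {
    private final int peelCount;
    Peel(int peelCount) { this.peelCount = peelCount; }
    public int getPeelCount() { return peelCount; }
}

class Fruit {
    // peel() now returns a Peel object instead of an int.
    public Peel peel() {
        System.out.println("Peeling is appealing.");
        return new Peel(1);
    }
}

class Apple {
    private Fruit fruit = new Fruit();

    // Apple's public interface is unchanged: it still returns an int.
    // Only the implementation adapts to Fruit's new return type, so
    // Example2 compiles and runs without modification.
    public int peel() {
        return fruit.peel().getPeelCount();
    }
}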

Serialize class based on one interface it implements with Jackson or Gson

I have the following:
An interface I1 extends Ia, Ib, Ic
An interface I2.
A class C implements I1, I2. And this class has its own setters and getters as well.
C cInstance = new C();
//Jackson
ObjectMapper mapper = new ObjectMapper();
mapper.writeValue(new File("somefile.json"), cInstance);
//Gson
Gson gson = new Gson();
String json = gson.toJson(cInstance);
The output will be cInstance serialized according to the properties of C and what it inherited.
However, I would like the properties to be serialized according to the setters/getters in I1 (only the cInstance properties represented in the I1 interface).
How can I do this with Jackson, given that I have many classes with the same problem and I can't modify the class definitions or add annotations?
The same issue applies to deserialization (deserializing according to an interface).
Thanks
First of all, you can always attach "mix-in annotations" even without adding annotations directly (see wiki page). With this, annotation to use would be:
@JsonSerialize(as=MyInterface.class)
but if you do not want to use mix-ins, you can force specific type to use with
objectMapper.typedWriter(MyInterface.class).writeValue(....)
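For example, a minimal mix-in sketch, assuming Jackson 2.x (with Jackson 1.x the mix-in is registered through the serialization config instead, and writerFor corresponds to the older typedWriter); the ForceI1MixIn class name is just illustrative:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

// Mix-in: carries the annotation so class C itself stays untouched.
@JsonSerialize(as = I1.class)
abstract class ForceI1MixIn {
}

class MixInDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        mapper.addMixIn(C.class, ForceI1MixIn.class);

        C cInstance = new C();
        // Only the properties visible through I1 end up in the output.
        System.out.println(mapper.writeValueAsString(cInstance));

        // Alternatively, without the mix-in, serialize through the interface type:
        // String json = mapper.writerFor(I1.class).writeValueAsString(cInstance);
    }
}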
Jackson's VisibilityChecker provides an easy way of filtering certain properties, especially because it allows you to test visibility (i.e. "will be serialized or not") for each method/field individually.
At least this helps for the serialization phase.
Here is what I did (using Jackson version 1.9.11):
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.codehaus.jackson.annotate.JsonAutoDetect;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.introspect.AnnotatedMethod;
import org.codehaus.jackson.map.introspect.VisibilityChecker;

public static class InterfaceVisibilityChecker extends VisibilityChecker.Std {

    private final Set<Method> visibleMethods;

    public InterfaceVisibilityChecker(Class<?>... clazzes) {
        super(JsonAutoDetect.Visibility.PUBLIC_ONLY);
        this.visibleMethods = new HashSet<>();
        for (Class<?> clz : clazzes) {
            this.visibleMethods.addAll(Arrays.asList(clz.getMethods()));
        }
    }

    @Override
    public boolean isGetterVisible(Method m) {
        return super.isGetterVisible(m) && isVisible(m);
    }

    @Override
    public boolean isGetterVisible(AnnotatedMethod m) {
        return isGetterVisible(m.getAnnotated());
    }

    private boolean isVisible(Method m) {
        for (Method visiMthd : visibleMethods) {
            if (isOverwriteMethod(m, visiMthd)) return true;
        }
        return false;
    }

    private boolean isOverwriteMethod(Method subMethod, Method superMethod) {
        // names must be equal
        if (!subMethod.getName().equals(superMethod.getName())) return false;
        // return types must be assignable
        if (!superMethod.getReturnType().isAssignableFrom(subMethod.getReturnType())) return false;
        // parameters must be equal
        if (!Arrays.equals(subMethod.getParameterTypes(), superMethod.getParameterTypes())) return false;
        // classes must be assignable
        return superMethod.getDeclaringClass().isAssignableFrom(subMethod.getDeclaringClass());
    }
}
The main idea is to use the standard VisibilityChecker and extend it by a check whether the method is declared in one of the given interfaces.
The checker is applied to an ObjectMapper instance using the following snippet:
ObjectMapper om = new ObjectMapper();
om.setVisibilityChecker(new InterfaceVisibilityChecker(
I1.class,
I2.class,
Ia.class,
Ib.class,
Ic.class
));
Some comments on the solution above:
The checker is not complete; methods like isIsGetterVisible or isFieldVisible can be handled in a similar manner if needed.
isOverwriteMethod is not optimized at all; its checks could be cached.

Inheriting ConstructorArguments in Ninject

I'm trying to find a method of passing a constructor argument to the constructors of child classes.
These objects are immutable so I'd prefer to use constructor arguments.
The issue I have encountered is that ConstructorArgument does not inherit to child instantiations and the following statements are not interchangeable:
_parsingProcessor = _kernel.Get<IParsingProcessor>(new ConstructorArgument("dataFilePath", dataFilePath));
and
_parsingProcessor = _kernel.Get<IParsingProcessor>(new Parameter("dataFilePath", dataFilePath, true));
So, how can I get an inheritable ConstructorArgument, and when does it make sense, if ever, to new up the Parameter class?
Yes, you can do this, but it's probably not what you really want. If the container is not actually responsible for instantiating its own dependencies, then its dependencies probably shouldn't be sharing its constructor arguments - it just doesn't make sense.
I'm pretty sure I know what you're trying to do, and the recommended approach is to create a unique binding specifically for your one container, and use the WhenInjectedInto conditional binding syntax, as in the example below:
public class Hello : IHello
{
private readonly string name;
public Hello(string name)
{
this.name = name;
}
public void SayHello()
{
Console.WriteLine("Hello, {0}!", name);
}
}
This is the class that takes a constructor argument which we want to modify, depending on who is asking for an IHello. Let's say it's this boring container class:
public class MyApp : IApp
{
private readonly IHello hello;
public MyApp(IHello hello)
{
this.hello = hello;
}
public virtual void Run()
{
hello.SayHello();
Console.ReadLine();
}
}
Now, here's how you do up the bindings:
public class MainModule : NinjectModule
{
public override void Load()
{
Bind<IApp>().To<MyApp>();
Bind<IHello>().To<Hello>()
.WithConstructorArgument("name", "Jim");
Bind<IHello>().To<Hello>()
.WhenInjectedInto<MyApp>()
.WithConstructorArgument("name", "Bob");
}
}
Basically all this binding is doing is saying the name should be "Jim" unless it's being requested by MyApp, which in this case it is, so instead it will get the name "Bob".
If you are absolutely certain that you truly want cascading behaviour and understand that this is very dangerous and brittle, you can cheat using a method binding. Assuming that we've now added a name argument to the MyApp class for some unspecified purpose, the binding would be:
Bind<IHello>().ToMethod(ctx =>
ctx.Kernel.Get<Hello>(ctx.Request.ParentContext.Parameters
.OfType<ConstructorArgument>()
.Where(c => c.Name == "name")
.First()));
Please, please, make sure you are positive that this is what you want before doing it. It looks easy but it is also very likely to break during a simple refactoring, and 95% of the "customized dependency" scenarios I've seen can be addressed using the WhenInjectedInto binding instead.