Morphia Mapper for entity (for whole Java class / POJO)

I am having a problem using Morphia with a custom mapper/type converter.
Given the following POJO:
@Entity("users")
public class User {
    private String username = null;
    private String password = null;
}
The problem is that in the given MongoDB (not under my control), the values are not simply laid out like
{
    "email": "xy@test.com",
    "password": "abc"
}
but the objects look more like
{
    "usersettings": {
        "email": "xy@test.com",
        "password": [
            "abc", "cde", "efg"
        ]
    }
}
(The real-world Mongo document is much more complex, as you may expect.)
So I have to map "usersettings.email" to my "username" member and "usersettings.password.0" (the first array entry only) to my "password" member.
I know there are TypeConverters in Morphia and you can register them, but they only work for members, not for classes.
In other words, this is not working (it is just ignored at runtime):
@Entity("users")
@Converters(MyUserConverter.class) // <-- this does NOT work!
public class User {
    private String email = null;
    private String password = null;
}
It would work for members, like this:
@Entity("users")
public class User {
    private String email = null;
    @Converters(MyCustomTypeConverter.class) // <-- this would work, but does not help in my case!
    private MyCustomType password = null;
}
Problem is, I need to map the whole class and not only certain members.
How can I do that?

Morphia doesn't support something quite like that. The document structure generally has to match the object structure. However, you can use the lifecycle annotations to massage the DBObject used to load and save your entities. Here you could use @PreLoad to reshape the incoming DBObject to reflect the locations expected by your Java objects, and then @PrePersist to move those values back under usersettings before writing back to Mongo.
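A minimal sketch of that approach, assuming the classic DBObject-based Morphia API (in older Morphia releases the hook that receives the raw DBObject on the way out is @PreSave rather than @PrePersist; newer versions fold this into @PrePersist; method names below are illustrative):

import java.util.Arrays;
import com.mongodb.BasicDBList;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.PreLoad;
import org.mongodb.morphia.annotations.PreSave;

@Entity("users")
public class User {
    private String username;
    private String password;

    // Called with the raw document before mapping: pull the nested
    // usersettings values up to where the Java fields expect them.
    @PreLoad
    void unwrapSettings(DBObject dbObj) {
        DBObject settings = (DBObject) dbObj.get("usersettings");
        if (settings != null) {
            dbObj.put("username", settings.get("email"));
            BasicDBList passwords = (BasicDBList) settings.get("password");
            if (passwords != null && !passwords.isEmpty()) {
                dbObj.put("password", passwords.get(0));
            }
        }
    }

    // Called with the converted document before it is written: move the
    // values back under usersettings. Note that only the first password
    // survives a round-trip in this sketch; the rest of the original
    // array would need to be carried along separately.
    @PreSave
    void wrapSettings(DBObject dbObj) {
        DBObject settings = new BasicDBObject();
        settings.put("email", dbObj.removeField("username"));
        settings.put("password", Arrays.asList(dbObj.removeField("password")));
        dbObj.put("usersettings", settings);
    }
}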


how to hide field during serialization (but not deserialization)

In our Spring MVC REST API project, I wish to use only ONE model for both request and response (to avoid having to add tons of code copying fields from object to object).
I'd like to use Swagger to handle all the docs, but I'm running into a little problem. For example, let's say I have a model User:
public class User {
    private Long id;
    private String username;
    private String password;
}
And a simple controller:
public void createUser(@RequestBody User user)...
public User getUser(Long id)...
Now I would like Swagger to hide the password property on serialization but not deserialization (so it shows for the input but not the output), and the opposite for the id field.
I have tried using @JsonIgnore coupled with @JsonProperty, but in swagger-ui it either displays everything or hides everything; I cannot manage to make it work.
Could someone tell me the best way of achieving my goal? Is it possible to use a single model for request and response while using Swagger? In case it is not possible to use @JsonIgnore, is there a way to achieve this differently?
Swagger doesn't want you to have different input/output models with the same name. You should simply create an interface and attach that to the input, and for the output extend that interface or add an implementation with the additional field. For example, please see here for modeling tips:
https://swaggerhub.com/api/swagger-tutorials/modeling-samples/1.0.0
Your exact use case is one of them. The solution posted in the above link is here:
definitions:
  User:
    description: this is a user that would be passed into the system
    properties:
      username:
        type: string
  UserResponse:
    allOf:
      - $ref: '#/definitions/User'
      - type: object
        required:
          - id
        properties:
          id:
            type: string
            format: uuid
            readOnly: true
where User is the input object, and UserResponse is the output object, with the additional id field.
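On the Java side, a minimal sketch of that split (the class names and the field copying are illustrative, not from the question):

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Input model: what clients may send.
class User {
    private String username;
    private String password;
    // getters and setters omitted
}

// Output model: the input shape plus the server-generated, read-only id.
class UserResponse extends User {
    private Long id;
    // getter and setter omitted
}

// The controller accepts the input shape and returns the output shape.
@RestController
public class UserController {
    @PostMapping("/users")
    public UserResponse createUser(@RequestBody User user) {
        UserResponse response = new UserResponse();
        // copy fields, persist, assign id... omitted
        return response;
    }
}

Swagger then documents two differently named models, which sidesteps the name clash described above.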
Add @JsonIgnore on the getter of the field and @JsonProperty on the setter, or on the field itself, since with immutable code or final fields a setter sometimes doesn't work.
Example:
public class Student {
    private String name;
    private String rollnum;
    private String section;

    @JsonProperty
    private Boolean passOrFailed;

    @JsonIgnore
    public Boolean getPassOrFailed() {
        return passOrFailed;
    }
}
Remember to use both; otherwise it will lead to the element being removed during deserialization as well.
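For what it's worth, on Jackson 2.6+ the same split can be expressed declaratively with JsonProperty.Access, without the getter/setter trick; a minimal sketch:

import com.fasterxml.jackson.annotation.JsonProperty;

public class User {
    // Emitted in responses but ignored if a client sends it.
    @JsonProperty(access = JsonProperty.Access.READ_ONLY)
    private Long id;

    private String username;

    // Accepted from requests but never written to a response.
    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    private String password;
}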

How to easily access widely different subsets of fields of related objects/DB tables?

Imagine we have a number of related objects (equivalently DB tables), for example:
public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..
}
public class Job {
    private String company;
    private int salary;
    ..
}
public class House {
    private Address address;
    private int age;
    private int numRooms;
    ..
}
public class Address {
    private String town;
    private String street;
    ..
}
How to best design a system for easily defining and accessing widely varying subsets of data on these objects/tables? Design patterns, pros and cons, are very welcome. I'm using Java, but this is a more general problem.
For example, I want to easily say:
I'd like some object with (Person.name, Person.height, Job.company, Address.street)
I'd like some object with (Job.company, House.numRooms, Address.town)
Etc.
Other assumptions:
We can assume that we're always getting a known structure of objects on the input, e.g. a Person with its Job, House, and Address.
The resulting object doesn't necessarily need to know the names of the fields it was constructed from, i.e. for subset defined as (Person.name, Person.height, Job.company, Address.street) it can be the array of Objects {"Joe Doe", 180, "ACompany Inc.", "Main Street"}.
The object/table hierarchy is complex, so there are hundreds of data fields.
There may be hundreds of subsets that need to be defined.
A minority of fields to obtain may be computed from actual fields, e.g. I may want to get a person's age, computed as (now().getYear() - Person.birthday.getYear()).
Here are some options I see:
1. A SQL view for each subset.
Minuses:
They will be almost the same for similar subsets. This is OK just for field names, but not great for the joins part, which could ideally be refactored out to a common place.
Less testable than a solution in code.
2. Using a DTO assembler, e.g. http://www.genericdtoassembler.org/
This could be used to flatten the complex structure of input objects into a single DTO.
Minuses:
I'm not sure how I'd then proceed to easily define subsets of fields on this DTO. Perhaps if I could somehow set the ones irrelevant to the current subset to null? Not sure how.
Not sure if I can do computed fields easily in this way.
3. A custom mapper I came up with.
Relevant code:
// The enum has a value for each field in the Person object hierarchy
// that we may be interested in.
public enum DataField {
    PERSON_NAME(new PersonNameExtractor()),
    ..
    PERSON_AGE(new PersonAgeExtractor()),
    ..
    COMPANY(new CompanyExtractor()),
    ..
}

// This is the container for field-value pairs from a given instance of
// the object hierarchy.
public class Vector {
    private Map<DataField, Object> fields;
    ..
}

// Extractors know how to get the value for a given DataField
// from the object hierarchy. There's one extractor per field.
public interface Extractor<T> {
    public T extract(Person person);
}

public class PersonNameExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getName();
    }
}
public class PersonAgeExtractor implements Extractor<Integer> {
    public Integer extract(Person person) {
        return now().getYear() - person.getBirthday().getYear();
    }
}

public class CompanyExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getJob().getCompany();
    }
}
// Building the Vector using all the fields from the DataField enum
// and the extractors.
public class FullVectorBuilder {
    public Vector buildVector(Person person) {
        Vector vector = new Vector();
        for (DataField field : DataField.values()) {
            vector.addField(field, field.getExtractor().extract(person));
        }
        return vector;
    }
}

// Definition of a subset of fields on the Vector.
public interface Selector {
    public List<DataField> getFields();
}

public class SampleSubsetSelector implements Selector {
    private List<DataField> fields = ImmutableList.of(PERSON_NAME, COMPANY);
    ...
}

// Finally, a builder for the subset Vector, choosing only
// fields pointed to by the selector.
public class SubsetVectorBuilder {
    public Vector buildSubsetVector(Vector fullVector, Selector selector) {
        Vector subsetVector = new Vector();
        for (DataField field : selector.getFields()) {
            subsetVector.addField(field, fullVector.getValue(field));
        }
        return subsetVector;
    }
}
Minuses:
Need to create a tiny Extractor class for each of hundreds of data fields.
This is a custom solution that I came up with; it seems to work and I like it, but I feel this problem must have been encountered and solved before, likely in a better way. Has it?
Edit
4. Each object knows how to turn itself into a Map of fields, keyed on an enum of all fields.
E.g.
public enum DataField {
    PERSON_NAME,
    ..
    PERSON_AGE,
    ..
    COMPANY,
    ..
}
public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..
    public Map<DataField, Object> toMap() {
        return ImmutableMap.<DataField, Object>builder()
            .put(DataField.PERSON_NAME, name)
            .put(DataField.BIRTHDAY, birthday)
            .put(DataField.HEIGHT, height)
            .put(DataField.AGE, now().getYear() - birthday.getYear())
            .build();
    }
}
Then, I could build a Vector combining all the Maps, and select subsets from it as in option 3.
Minuses:
Enum name clashes, e.g. if Job has an Address and House has an Address, then I want to be able to specify a subset taking street name of both. But how do I then define the toMap() method in the Address class?
No obvious place to put code doing computed fields requiring data from more than one object, e.g. physical distance from Address of House to Address of Company.
Many thanks!
Rather than in-memory object mapping in the application, I would favor database-side processing of the data, for better performance. Views, or more elaborate OLAP/data-warehouse tooling, could do the trick. If the calculated fields remain basic, as in "age = now - birth", I see nothing wrong with having that logic in the DB.
On the code side, given the large number of DTOs you have to deal with, you could use classless dynamic objects (available in some JVM languages) or JSON objects. The idea is that when a data structure changes, you only need to modify the DB and the UI, saving you the cost of changing a whole bunch of classes in between.
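A minimal sketch of the JSON-objects idea using Jackson's tree model (defining each subset as a list of JSON Pointer paths is my assumption, not part of the answer):

import java.util.Arrays;
import java.util.List;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class SubsetExtractor {
    private final ObjectMapper mapper = new ObjectMapper();

    // Each subset is plain data (a list of paths), so adding a new
    // subset needs no new classes.
    public ObjectNode extract(Object root, List<String> paths) {
        JsonNode full = mapper.valueToTree(root); // object graph -> tree, once
        ObjectNode subset = mapper.createObjectNode();
        for (String path : paths) {
            String key = path.substring(path.lastIndexOf('/') + 1);
            subset.set(key, full.at(path)); // missing paths yield a MissingNode
        }
        return subset;
    }
}

// Usage:
// new SubsetExtractor().extract(person,
//         Arrays.asList("/name", "/job/company", "/house/address/street"));

Note that the key-clash problem from the edit (two Address fields) shows up here too: the last path segment alone is not always a unique key.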

Providing Jackson Mapper multiple ways to deserialize the same object

I'm trying to deserialize two types of JSON:
{
    "name": "bob",
    "worksAt": {
        "name": "Bobs department store",
        "location": "downtown"
    },
    "age": 46
}
and
{
    "name": "Tom",
    "worksAt": "company:Bobs department store",
    "age": 27
}
into these objects:
The first way creates two new objects; the second way requests the object from the database based on the contents of a string.
Sort of like how the Jackson mapper can deserialize an arbitrary string into an object, for classes like this:
public class Company {
    public String name;
    public Employee[] employees;

    public Company() {}

    public Company(String json) {
        // turn the string into an object using whatever encoding you want, blah blah blah...
    }
}
The trouble is I need both. I need it to handle objects and strings. Both could arrive from the same input.
The first thing I tried was making a Converter. The docs say these create a delegate type to pass to the deserializer, but the converter is always applied, even when the datatype isn't a string. So that didn't work.
I've also tried a normal deserializer, but I can't find a way to defer to the BeanDeserializer. The BeanDeserializer is so complicated that I can't manually instantiate it. I also see no way to defer to a default deserializer in the Jackson mapper.
Do I have to re-implement the Jackson mapper's deserialization to do this? Is there any way for a deserializer to say "I can't do this, use the default implementation"?
Edit: Some further progress. Based on the Jackson mapper source code, it looks like you can instantiate bean deserializers like this:
DeserializationConfig config = ctxt.getConfig();
JavaType type = config.constructType(_valueClass);
BeanDescription introspect = config.introspect(type);
JsonDeserializer<Object> beanDeserializer =
        ctxt.getFactory().createBeanDeserializer(ctxt, type, introspect);
but for some reason all the _beanProperties have the FailingDeserializer set as their _valueDeserializer, and the whole thing fails. I have no idea why that happens...
Have you tried writing a custom deserializer? This gives you the most control over how Jackson deserializes the object. You may be able to try to deserialize one way and, if there's an error, try another way.
Jackson can also handle polymorphic deserialization, though this would require a small change to the JSON to include type information, and it sounds like your problem constraints might not allow that.
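A minimal sketch of the custom-deserializer idea, branching on the incoming token (the Company constructor and the database lookup are assumptions here, not from the question):

import java.io.IOException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;

public class CompanyDeserializer extends JsonDeserializer<Company> {
    @Override
    public Company deserialize(JsonParser p, DeserializationContext ctxt)
            throws IOException {
        if (p.getCurrentToken() == JsonToken.VALUE_STRING) {
            // String form, e.g. "company:Bobs department store":
            // strip the prefix and look the entity up in the database.
            String ref = p.getText();
            return findInDatabase(ref.substring(ref.indexOf(':') + 1));
        }
        // Object form: read the subtree and bind it manually (or hand it
        // to a plain ObjectMapper with no custom deserializer registered,
        // to avoid recursing into this one).
        JsonNode node = p.readValueAsTree();
        return new Company(node.path("name").asText(),
                node.path("location").asText());
    }

    private Company findInDatabase(String name) {
        return null; // assumed repository/DB lookup, omitted
    }
}

It would then be registered on the field, e.g. with @JsonDeserialize(using = CompanyDeserializer.class) on the worksAt property.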
If I understand the problem correctly, I would recommend using JsonNode. You can define a setter in your top-level type like this:
public void setWorksAt(JsonNode node) throws JsonProcessingException {
    if (node.getNodeType() == JsonNodeType.STRING) {
        String name = node.asText();
        name = name.substring(name.lastIndexOf(':') + 1);
        this.company = new Company(name);
    } else if (node.getNodeType() == JsonNodeType.OBJECT) {
        // mapper is an ObjectMapper available to the class
        this.company = mapper.treeToValue(node, Company.class);
    }
}
That allows you to handle the two separate worksAt inputs, while still letting the standard mapper handle any substructures for the OBJECT case.
With recent versions of Jackson (2.8+ I think, definitely works with 2.9) you can use multiple @JsonCreator methods and do something like this:
public class Company {
    private String name;
    private String location;

    private Company(String name, String location) {
        this.name = name;
        this.location = location;
    }

    private Company(String stringRepresentation) {
        // add code here to parse the string and extract name and location
    }

    @JsonCreator
    private static Company fromJson(
            @JsonProperty("name") String name,
            @JsonProperty("location") String location) {
        return new Company(name, location);
    }

    @JsonCreator
    private static Company fromJson(String str) {
        return new Company(str);
    }
}

Deserializing IEnumerable with private backing field in RavenDb

I've been modeling a domain for a couple of days now, not thinking about persistence at all but instead focusing on domain logic. Now I'm ready to persist my domain objects, some of which contain an IEnumerable of child entities. Using RavenDb, the persistence is 'easy', but when loading my objects back again, all of the IEnumerables are empty.
I've realized this is because they don't have any property setters at all, but instead use a list as a backing field. The user of the domain aggregate root can add child entities through a public method and not directly on the collection.
private readonly List<VeryImportantPart> _veryImportantParts;
public IEnumerable<VeryImportantPart> VeryImportantParts { get { return _veryImportantParts; } }
And the method for adding, nothing fancy...
public void AddVeryImportantPart(VeryImportantPart part)
{
    // some logic...
    _veryImportantParts.Add(part);
}
I can fix this by adding a private/protected setter on all my IEnumerables with backing fields but it looks... well... not super sexy.
private List<VeryImportantPart> _veryImportantParts;
public IEnumerable<VeryImportantPart> VeryImportantParts
{
    get { return _veryImportantParts; }
    protected set { _veryImportantParts = value.ToList(); }
}
Now the RavenDb JSON serializer will populate my objects on load again, but I'm curious whether there isn't a cleaner way of doing this.
I've been fiddling with the JsonContractResolver but haven't found a solution yet...
I think I've found the root cause of this issue; it's probably due to the fact that many of my entities were created using:
protected MyClass(Guid id, string name, string description) : this()
{ .... }

public static MyClass Create(string name, string description)
{
    return new MyClass(Guid.NewGuid(), name, description);
}
When deserializing, RavenDb/Json.NET couldn't rebuild my entities in a proper way...
Changing to using a public constructor made all the difference.
Do you need to keep a private backing field? Often an automatic property will do.
public IList<VeryImportantPart> VeryImportantParts { get; protected set; }
When doing so, you may want to initialize your list in the constructor:
VeryImportantParts = new List<VeryImportantPart>();
This is optional, of course, but it allows you to create a new class and start adding to the list right away, before it is persisted. When Raven deserializes a class, it will use the setter to overwrite the default blank list, so this just helps with the first store.
You certainly won't be able to use a readonly field, as it couldn't be replaced during deserialization. It might be possible to write a contract resolver or converter that fills an existing list rather than creating a new one, but that seems like a rather complex solution.
Using an automatic property can add clarity to your code anyway, as it removes the confusion about whether to use the field or the property.

Data access layer design in DDD

Excuse me for my poor English.
OK, I'm thinking about the DDD approach now and it sounds great, but... there is one little question about it. DDD says that the domain model layer is totally decoupled from the data access layer (and all other layers). So when the DAL saves some business object, it will have access to the public properties of this object only. Now the question:
How can we guarantee (in general) that a set of public data of an object is all we need to restore the object later?
Example
We have the following business rules:
User and domain must be provided for the business object on create.
User and domain cannot be changed after object creation.
The business object has an Email property which looks like "user@domain".
Here is a pure POCO which describes those rules:
public class BusinessObject
{
    private string _user;
    private string _domain;

    public BusinessObject(string user, string domain)
    {
        _user = user;
        _domain = domain;
    }

    public string Email
    {
        get { return _user + "@" + _domain; }
    }
}
So at some moment the DAL will save this object to external storage (i.e. an SQL database). Obviously, the DAL will save the "Email" property to the associated field in the DB. Everything will work just fine until the moment we ask the DAL to restore the object. How can the DAL do this? The object must have a public setter for the "Email" field at least. Something like:
public string Email
{
    set
    {
        string[] s = value.Split('@');
        _user = s[0];
        _domain = s[1];
    }
}
Actually, the object will have public getters/setters for both the "User" and "Domain" fields and a GetEmail() method. But stop. I don't want my POCO to have such functionality! There are no business rules for it; this must be done only for the ability to save/restore the object.
I see another option. The ORM which is a part of the DAL could be asked to store all of the private fields needed to restore the object. But this is impossible if we want to keep the domain model separated from the DAL. The DAL cannot rely on certain private members of the business object.
The only workaround I can see is to have some system-level instrument which can create a dump of the object for us and restore the object from this dump at any time. The DAL would store this dump in addition to the public properties of the object. When the DAL needs to restore the object from storage, it would use the dump, while the public properties saved to storage can be used for operations that don't need the object to be instantiated (i.e. most LINQ to SQL queries).
Am I doing it wrong? Do I need to read more? About some patterns, ORMs maybe?
I think you got this part wrong:
I see another option. The ORM which is a part of the DAL could be
asked to store all of the private fields needed to restore the object.
But this is impossible if we want to keep the domain model separated
from the DAL. The DAL cannot rely on certain private members of the
business object.
The domain model does not depend on the DAL. It's the other way around: the DAL depends on the domain model.
The ORM has intimate knowledge of domain objects, including private fields. There is absolutely nothing wrong with that; in fact, this is the best way to implement persistence ignorance in DDD. This is how the domain class can look. Note that:
fields can be private and readonly
the public constructor is only used by client code, not by the DAL
there is no need for property getters and setters
the business object is almost 100% ignorant of persistence issues
The only thing the DAL/ORM needs is a private parameterless constructor:
public class BusinessObject {
    private readonly string _user;
    private readonly string _domain;

    private BusinessObject() {}

    public BusinessObject(string user, string domain) {
        _user = user;
        _domain = domain;
    }

    public string Email {
        get { return _user + "@" + _domain; }
    }
}
And the magic happens in the ORM. NHibernate, for example, can restore this object from the database using a mapping file like this (note the field access strategy, needed to map the private fields):
<class name="BusinessObject" table="BusinessObjects">
    ...
    <property name="_user" column="User" access="field" />
    <property name="_domain" column="Domain" access="field" />
    ...
</class>
Another aspect of persistence-ignorant domain code is the DDD Repository:
Definition: A Repository is a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects.
The Repository interface belongs to the Domain and should be based on the Ubiquitous Language as much as possible. The Repository implementation, on the other hand, belongs to the DAL (Dependency Inversion Principle).
One simple solution is to just have your DAL call new BusinessObject(email), adding a constructor that parses the email:
public class BusinessObject
{
    private string _user;
    private string _domain;

    public BusinessObject(string email)
    {
        string[] s = email.Split('@');
        _user = s[0];
        _domain = s[1];
    }

    public BusinessObject(string user, string domain)
    {
        _user = user;
        _domain = domain;
    }

    public string Email
    {
        get { return _user + "@" + _domain; }
    }
}