Orika mapping multiple Strings into a List<String>

Using Orika how do I map multiple single Strings into a List of Strings?
Given:
class A {
    String field1;
    String field2;
    String field3;
}
class B {
    List<String> fields;
}
So field1, field2 and field3 will all be elements in fields. How do I code Orika to handle this?

You can do it using
factory.classMap(ClassA.class, ClassB.class)
    .byDefault()
    .customize(new CustomMapper<ClassA, ClassB>() {
        @Override
        public void mapAtoB(ClassA source, ClassB dest, MappingContext context) { /* custom logic */ }

        @Override
        public void mapBtoA(ClassB source, ClassA dest, MappingContext context) { /* custom logic */ }
    })
    .register();
In the CustomMapper you can override only the direction of mapping you need, or both. This way Orika still handles all the automatic mapping, and you can use Java code to customize the process.
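For the concrete case in the question, the A-to-B direction could be filled in like this (a sketch assuming conventional getters/setters on ClassA and ClassB, plus java.util.Arrays and ma.glasnost.orika.MappingContext imports):
@Override
public void mapAtoB(ClassA source, ClassB dest, MappingContext context) {
    // Collect the three String fields into the single list on B.
    dest.setFields(Arrays.asList(
            source.getField1(), source.getField2(), source.getField3()));
}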

Related

Is there a way to search all fields with redis-om-spring using the EntityStream API?

In redis-om-spring I can search all fields by simply adding a search method to the repository.
public interface ProductRepository extends RedisDocumentRepository<Product, String> {
    Page<Product> search(String text, Pageable pageable);
}
When using the EntityStream, I can search on specific fields, but not across all fields.
var result = entityStream.of(Product.class)
    .anyMatch(new StartsWithPredicate<>(Product$.UNIQUE_ID.getField(), "100790"));
@AllArgsConstructor
public class Product {
    @Id
    String uniqueId;
    @Searchable
    String field1;
    @Searchable
    String field2;
    @Searchable
    String fieldN;
}
repo.save(new Product("UA", "searchForA1", "searchForA2", "searchForAN"));
repo.save(new Product("UB", "searchForB1", "searchForB2", "searchForBN"));
repo.save(new Product("UC", "searchForC1", "searchForC2", "searchForCN"));
I need to search across all fields. Am I missing something in the EntityStream API or is this not possible?
Something that generates:
FT.SEARCH my-idx "thesearchTerm"
Yes, there is a filter method in the SearchStream interface that takes a free-form text String:
SearchStream<E> filter(String freeText);
See https://github.com/redis/redis-om-spring/blob/main/redis-om-spring/src/main/java/com/redis/om/spring/search/stream/SearchStream.java#L20
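So, to get the equivalent of FT.SEARCH my-idx "thesearchTerm", something along these lines should work (a sketch; the terminal collect mirrors typical SearchStream usage):
List<Product> hits = entityStream.of(Product.class)
        .filter("thesearchTerm")
        .collect(Collectors.toList());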

Jackson serialization for enum without quotes

Could someone help me with the kind of configuration I need for my ObjectWriter so that it produces an enum value without quotes?
Is it possible to do this without a custom serializer for the enum, with simple configuration?
Can I declare some annotations on top of my enums, or some configuration on my ObjectWriter, so that it always produces enum values without quotes?
ObjectWriter.writeValueAsString(object) ---> This should write the enum value without quotes.
For a one-off serialization you can do this in two steps.
1. convert the enum to a TextNode
2. get the text value of the text node
In the example below the CAT enum is printed with quotes, while DOG is printed without quotes.
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.TextNode;

public class WriteEnumAsString {
    public enum Animal {
        CAT,
        DOG,
    }

    public static void main(String[] args) throws JsonProcessingException {
        var om = new ObjectMapper();

        var catString = om.writeValueAsString(Animal.CAT);
        System.out.println(catString);
        // "CAT"

        TextNode dogNode = om.valueToTree(Animal.DOG);
        String dogString = dogNode.textValue();
        System.out.println(dogString);
        // DOG
    }
}
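The two steps can also be collapsed into a single expression (same ObjectMapper as above):
String raw = om.<TextNode>valueToTree(Animal.DOG).textValue();
// DOG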

How to easily access widely different subsets of fields of related objects/DB tables?

Imagine we have a number of related objects (equivalently DB tables), for example:
public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..
}
public class Job {
    private String company;
    private int salary;
    ..
}
public class House {
    private Address address;
    private int age;
    private int numRooms;
    ..
}
public class Address {
    private String town;
    private String street;
    ..
}
What's the best way to design a system for easily defining and accessing widely varying subsets of data on these objects/tables? Design patterns, with pros and cons, are very welcome. I'm using Java, but this is a more general problem.
For example, I want to easily say:
I'd like some object with (Person.name, Person.height, Job.company, Address.street)
I'd like some object with (Job.company, House.numRooms, Address.town)
Etc.
Other assumptions:
We can assume that we're always getting a known structure of objects on the input, e.g. a Person with its Job, House, and Address.
The resulting object doesn't necessarily need to know the names of the fields it was constructed from, i.e. for subset defined as (Person.name, Person.height, Job.company, Address.street) it can be the array of Objects {"Joe Doe", 180, "ACompany Inc.", "Main Street"}.
The object/table hierarchy is complex, so there are hundreds of data fields.
There may be hundreds of subsets that need to be defined.
A minority of the fields to obtain may be computed from actual fields, e.g. I may want to get a person's age, computed as (now().getYear() - Person.birthday.getYear()).
Here are some options I see:
A SQL view for each subset.
Minuses:
They will be almost the same for similar subsets. This is OK just for field names, but not great for the joins part, which could ideally be refactored out to a common place.
Less testable than a solution in code.
Using a DTO assembler, e.g. http://www.genericdtoassembler.org/
This could be used to flatten the complex structure of input objects into a single DTO.
Minuses:
I'm not sure how I'd then proceed to easily define subsets of fields on this DTO. Perhaps if I could somehow set the ones irrelevant to the current subset to null? Not sure how.
Not sure if I can do computed fields easily in this way.
A custom mapper I came up with.
Relevant code:
// The enum has a value for each field in the Person object hierarchy
// that we may be interested in.
public enum DataField {
    PERSON_NAME(new PersonNameExtractor()),
    ..
    PERSON_AGE(new PersonAgeExtractor()),
    ..
    COMPANY(new CompanyExtractor()),
    ..
}

// This is the container for field-value pairs from a given instance of
// the object hierarchy.
public class Vector {
    private Map<DataField, Object> fields;
    ..
}

// Extractors know how to get the value for a given DataField
// from the object hierarchy. There's one extractor per field.
public interface Extractor<T> {
    T extract(Person person);
}

public class PersonNameExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getName();
    }
}

public class PersonAgeExtractor implements Extractor<Integer> {
    public Integer extract(Person person) {
        return now().getYear() - person.getBirthday().getYear();
    }
}

public class CompanyExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getJob().getCompany();
    }
}

// Building the Vector using all the fields from the DataField enum
// and the extractors.
public class FullVectorBuilder {
    public Vector buildVector(Person person) {
        Vector vector = new Vector();
        for (DataField field : DataField.values()) {
            vector.addField(field, field.getExtractor().extract(person));
        }
        return vector;
    }
}

// Definition of a subset of fields on the Vector.
public interface Selector {
    List<DataField> getFields();
}

public class SampleSubsetSelector implements Selector {
    private List<DataField> fields = ImmutableList.of(PERSON_NAME, COMPANY);
    ...
}

// Finally, a builder for the subset Vector, choosing only
// fields pointed to by the selector.
public class SubsetVectorBuilder {
    public Vector buildSubsetVector(Vector fullVector, Selector selector) {
        Vector subsetVector = new Vector();
        for (DataField field : selector.getFields()) {
            subsetVector.addField(field, fullVector.getValue(field));
        }
        return subsetVector;
    }
}
Minuses:
Need to create a tiny Extractor class for each of hundreds of data fields.
This is a custom solution that I came up with; it seems to work and I like it, but I feel this problem must have been encountered and solved before, likely in a better way... Has it?
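(An aside on that last minus: with Java 8+, the per-field Extractor classes could collapse into lambdas or method references on the enum itself. A sketch, assuming conventional getters and the same now() helper used above:)
import java.util.function.Function;

public enum DataField {
    PERSON_NAME(Person::getName),
    PERSON_AGE(p -> now().getYear() - p.getBirthday().getYear()),
    COMPANY(p -> p.getJob().getCompany());

    private final Function<Person, Object> extractor;

    DataField(Function<Person, Object> extractor) {
        this.extractor = extractor;
    }

    // Replaces the one-class-per-field Extractor hierarchy.
    public Object extract(Person person) {
        return extractor.apply(person);
    }
}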
Edit
Each object knows how to turn itself into a Map of fields, keyed on an enum of all fields.
E.g.
public enum DataField {
    PERSON_NAME,
    ..
    PERSON_AGE,
    ..
    COMPANY,
    ..
}
public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..
    public Map<DataField, Object> toMap() {
        return ImmutableMap.<DataField, Object>builder()
            .put(DataField.PERSON_NAME, name)
            .put(DataField.BIRTHDAY, birthday)
            .put(DataField.HEIGHT, height)
            .put(DataField.AGE, now().getYear() - birthday.getYear())
            .build();
    }
}
Then, I could build a Vector combining all the Maps, and select subsets from it like in 3.
Minuses:
Enum name clashes, e.g. if Job has an Address and House has an Address, then I want to be able to specify a subset taking the street name of both. But how do I then define the toMap() method in the Address class?
No obvious place to put the code for computed fields that require data from more than one object, e.g. the physical distance from the House's Address to the Company's Address.
Many thanks!
Rather than in-memory object mapping in the application, I would favor processing the data in the database for better performance. Views, or more elaborate OLAP/data-warehouse tooling, could do the trick. If the calculated fields remain basic, as in "age = now - birth", I see nothing wrong with having that logic in the DB.
On the code side, given the large number of DTOs you have to deal with, you could use classless dynamic objects (available in some JVM languages) or JSON objects. The idea is that when a data structure changes, you only need to modify the DB and the UI, saving you the cost of changing a whole bunch of classes in between.
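As a sketch of the JSON-object idea (using Jackson's ObjectNode; the field and getter names are illustrative):
ObjectMapper mapper = new ObjectMapper();

// One "classless DTO" per subset; no dedicated class to maintain.
ObjectNode subset = mapper.createObjectNode()
        .put("personName", person.getName())
        .put("company", person.getJob().getCompany())
        .put("street", person.getHouse().getAddress().getStreet());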

Scala: How to transform a POJO like object into a SQL insert statement using Scala reflection

I'm facing this (at least for me) interesting task: getting a SQL insert statement from a POJO-like object. Let me say I don't need to add a framework between my Scala application and the DB, because I just need to insert data into a single DB table.
So, supposing the attributes of my class are named the same as the columns of the DB table, I'd like to use Scala reflection to get, from a class like this one
class MyDataObj {
  var a: Int = 345
  var b: Boolean = false
  var c: Double = 1243.98
  var d: String = "A random string"
}
a SQL insert statement like this
INSERT INTO table_a (a, b, c, d) VALUES (345, false, 1243.98, 'A random string');
Well, what we need is
1) access to the class attributes
2) access to the attribute types
3) access to the attribute values of the object instance
In order to get something like this
List( ("a","Int",345), ("b","Boolean",false), ("c","Double",1243.98), ... )
that will be easy to transform into what we want.
Up to now, I've just found out how to access the attribute names
val columns = typeOf[MyDataObj].members.view.filter{_.isTerm}.
filter{!_.isMethod}.map{_.name}.toList
How can I get the rest I need?
Thanks as usual for supporting me.
In your case, you can use the following code:
val o = new MyDataObj
val attributes = o.getClass.getDeclaredMethods.filter {
  _.getReturnType != Void.TYPE
}.map { method =>
  (method.getName, method.getReturnType, method.invoke(o))
}
Here I use getDeclaredMethods to get the public methods of MyDataObj. Note that getDeclaredMethods does not return methods declared in parent classes.
For MyDataObj, getDeclaredMethods will return the following methods:
public double MyDataObj.c()
public boolean MyDataObj.b()
public java.lang.String MyDataObj.d()
public int MyDataObj.a()
public void MyDataObj.c_$eq(double)
public void MyDataObj.d_$eq(java.lang.String)
public void MyDataObj.b_$eq(boolean)
public void MyDataObj.a_$eq(int)
So I add a filter to drop the irrelevant methods (the void-returning setters).

EclipseLink - #ReadTransformer

I have this code:
@Column(name = "foo")
@ReadTransformer(transformerClass = transformer.class)
private Date foo;

public static class transformer implements AttributeTransformer {
    @Override
    public void initialize(AbstractTransformationMapping atm) {
    }

    @Override
    public Object buildAttributeValue(Record record, Object o, Session sn) {
    }
}
My question is, how do I get the value to transform (from column foo) inside of buildAttributeValue? It is not inside the record.
You need one or more @WriteTransformer annotations to write the fields you want selected (and thus get them selected); @Column is not used with a transformation mapping.
However, if you just have a single column, then just use a converter (@Convert) instead:
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Mapping/Basic_Mappings/Default_Conversions_and_Converters
First check that the generated SQL is reading in the "foo" column by turning on logging. If it is, then check that the database is returning "foo" and not "FOO" - Java string lookups are case-sensitive. It could be that "FOO" is in the record instead of "foo".
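Putting the first answer together, a minimal sketch of a transformation mapping that declares the column via @WriteTransformer and reads it back in buildAttributeValue (the transformer class names, MyEntity, and getFoo() are illustrative; whether the record key is "FOO" or "foo" depends on what the database returns):
@Transformation
@ReadTransformer(transformerClass = FooAttributeTransformer.class)
@WriteTransformer(column = @Column(name = "FOO"),
                  transformerClass = FooFieldTransformer.class)
private Date foo;

public static class FooAttributeTransformer implements AttributeTransformer {
    @Override
    public void initialize(AbstractTransformationMapping mapping) {
    }

    @Override
    public Object buildAttributeValue(Record record, Object instance, Session session) {
        // Record is map-like, keyed by field name; the column shows up
        // here because the @WriteTransformer above declares it.
        return record.get("FOO");
    }
}

public static class FooFieldTransformer implements FieldTransformer {
    @Override
    public void initialize(AbstractTransformationMapping mapping) {
    }

    @Override
    public Object buildFieldValue(Object instance, String fieldName, Session session) {
        // Write the attribute back to the column unchanged.
        return ((MyEntity) instance).getFoo();
    }
}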