Hibernate/Spring: reading out the class-to-table mapping via reflection (SQL)

I'm trying to write an application that uses Hibernate to write to the database, but for some actions I have to use JDBC on the tables Hibernate created.
JDBC is required to give the administrator the ability to create SQL queries that return statistical info about the data in the database, such as the number of processed documents of a given type, the number of successful/failed login attempts, or the total value of products in orders.
To do that I've built a form that lets me create a class whose overridden toString() returns a nicely formatted SQL query string.
It all works, but now I'm trying to make the administrator's life easier by letting him choose table/column names. And here is the problem: those names are created by Hibernate, some via the @Column annotation and others from the field name.
How can I check how a field is mapped?
I know it's all about reflection, but I haven't done much of that in Java yet.
Example:
@Entity
@Table(name = "my_table_name")
public class TableOFSomething implements Serializable {

    // This field isn't mapped into the database and info about it is not required.
    // In fact, info about it may cause an error.
    private static final long serialVersionUID = 7L;

    @Id
    @Column(name = "id")
    private String id;

    private String fieldOne;

    @Column(name = "field_two")
    private String fieldTwo;

    @Column(name = "renamed_just_for_fun")
    private int number;

    // code with getters & setters
}
How do I write methods with signatures like the following?
public <T> String tableName(Class<T> target);              // returns the table name in the database
public <T> ArrayList<String> tableFields(Class<T> target); // returns the column names in the database

Hibernate has an API for exploring the mapping: getClassMetadata. The API might change (it has since moved to another place), but I would use it rather than reflection for this.
Look at this post for more details:
Get the table name from the model in Hibernate
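For example, here is a minimal sketch against the older Hibernate 3.x/4.x API (getClassMetadata is deprecated in later versions; the cast to AbstractEntityPersister is an implementation detail and an assumption on my part). Because it reads Hibernate's own runtime metadata rather than the source annotations, it covers XML mappings as well:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.hibernate.SessionFactory;
import org.hibernate.persister.entity.AbstractEntityPersister;

public class HibernateMappingInspector {

    private final SessionFactory sessionFactory;

    public HibernateMappingInspector(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Table name exactly as Hibernate mapped it.
    public String tableName(Class<?> target) {
        AbstractEntityPersister persister =
                (AbstractEntityPersister) sessionFactory.getClassMetadata(target);
        return persister.getTableName();
    }

    // Column names for all mapped properties (the id is tracked separately;
    // it could be added via persister.getIdentifierPropertyName()).
    public List<String> columnNames(Class<?> target) {
        AbstractEntityPersister persister =
                (AbstractEntityPersister) sessionFactory.getClassMetadata(target);
        List<String> columns = new ArrayList<>();
        for (String property : persister.getPropertyNames()) {
            columns.addAll(Arrays.asList(persister.getPropertyColumnNames(property)));
        }
        return columns;
    }
}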
If you want reflection instead, use this:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import javax.persistence.Column;
import javax.persistence.Table;
import odi.beans.statistic.QueryBean;

public class ReflectionTest {

    public static void main(String[] args) {
        ReflectionTest test = new ReflectionTest();
        System.out.println("Table name of " + QueryBean.class.getName() + " is " + test.getTableName(QueryBean.class));
        System.out.println("Column names in this table are:");
        for (String n : test.getColumnNames(QueryBean.class)) {
            System.out.println("\t" + n);
        }
        System.out.println("Good bye ;)");
    }

    public <T> ArrayList<String> getColumnNames(Class<T> target) {
        ArrayList<String> ret = new ArrayList<>();
        Field[] fields = target.getDeclaredFields();
        String fieldName = null;
        for (Field f : fields) {
            // Skip static fields (e.g. serialVersionUID), which are not mapped.
            if (Modifier.isStatic(f.getModifiers()))
                continue;
            if (f.isAnnotationPresent(Column.class)) {
                // An explicit @Column annotation wins.
                Column a = f.getAnnotation(Column.class);
                fieldName = a.name();
            } else {
                // Otherwise Hibernate defaults to the field name.
                fieldName = f.getName();
            }
            ret.add(fieldName);
        }
        return ret;
    }

    public <T> String getTableName(Class<T> target) {
        // Default: the unqualified class name, unless @Table overrides it.
        String ret = target.getSimpleName();
        if (target.isAnnotationPresent(Table.class)) {
            Table t = target.getAnnotation(Table.class);
            ret = t.name();
        }
        return ret;
    }
}
Does it cover all possibilities?
I know now that the Hibernate way would be easier, but this is also about learning the very useful reflection mechanism :)
EDIT:
Important question:
Will this work only with annotations, or also with XML mapping?

Related

Is there a way to search all fields with redis-om-spring using the EntityStream API?

In redis-om-spring I can search all fields by simply adding a search method to the repository.
public interface ProductRepository extends RedisDocumentRepository<Product, String> {
    Page<Product> search(String text, Pageable pageable);
}
When using the EntityStream, I can search on specific fields, but not across all fields.
var result = entityStream.of(Product.class)
        .anyMatch(new StartsWithPredicate<>(Product$.UNIQUE_ID.getField(), "100790"));
@AllArgsConstructor
public class Product {
    @Id
    String uniqueId;
    @Searchable
    String field1;
    @Searchable
    String field2;
    @Searchable
    String fieldN;
}
repo.save(new Product("UA", "searchForA1", "searchForA2", "searchForAN"));
repo.save(new Product("UB", "searchForB1", "searchForB2", "searchForBN"));
repo.save(new Product("UC", "searchForC1", "searchForC2", "searchForCN"));
I need to search across all fields. Am I missing something in the EntityStream API or is this not possible?
Something that generates:
FT.SEARCH my-idx "thesearchTerm"
Yes, there is a filter method in the SearchStream interface that takes a free-form text String:
SearchStream<E> filter(String freeText);
See https://github.com/redis/redis-om-spring/blob/main/redis-om-spring/src/main/java/com/redis/om/spring/search/stream/SearchStream.java#L20
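A minimal usage sketch (assuming SearchStream mirrors java.util.stream's collect; the search term is taken from the question):

// Free-text search across all indexed fields; this should generate
// something like: FT.SEARCH my-idx "thesearchTerm"
List<Product> result = entityStream.of(Product.class)
        .filter("thesearchTerm")
        .collect(Collectors.toList());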

Native SQL select query using the Spring Data JPA annotation @Query, covering non-empty, empty, and null values at the same time

I have something like this in my repository class in a Spring project:
@Query(value = "SELECT * FROM accounts WHERE (first_name LIKE %:firstName% AND last_name LIKE %:lastName%)", nativeQuery = true)
public List<Account> searchByFirstnameAndLastname(@Param("firstName") String firstName, @Param("lastName") String lastName);
I want it to return everything if the parameters are not provided, including the rows with a null firstname/lastname. As written, it skips null values despite the wildcards, since null is different from ''.
I was thinking of an if-statement structure, building the query at runtime based on the provided parameters and then setting it as the value of the @Query annotation.
I tried generating the WHERE clause and passing it as a parameter, but that didn't work; I guess the way Spring Data JPA processes the annotation's value caused it.
Any idea what the best solution to this is?
Have you tried the Containing keyword, as below?
List<Account> findByFirstnameContainingAndLastNameContaining(String firstName, String lastName);
Docs: https://docs.spring.io/spring-data/jpa/docs/current/reference/html/
You cannot go far with @Query.
For dynamic queries (with many optional filters), the way to go is the Criteria API or JPQL. I suggest the Criteria API, as it is object-oriented and well suited to dynamic queries; a minimal sketch follows.
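This sketch assumes the Account entity and the property names (firstName, lastName) from the question; with no predicates, the query returns every row:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

public class AccountSearch {

    public List<Account> search(EntityManager em, String firstName, String lastName) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Account> cq = cb.createQuery(Account.class);
        Root<Account> root = cq.from(Account.class);

        // Add a condition only when its parameter is actually provided.
        List<Predicate> predicates = new ArrayList<>();
        if (firstName != null && !firstName.isEmpty()) {
            predicates.add(cb.like(root.get("firstName"), "%" + firstName + "%"));
        }
        if (lastName != null && !lastName.isEmpty()) {
            predicates.add(cb.like(root.get("lastName"), "%" + lastName + "%"));
        }
        cq.where(predicates.toArray(new Predicate[0])); // empty array = no WHERE clause
        return em.createQuery(cq).getResultList();
    }
}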
I would suggest using QueryDSL. It is mentioned in the docs JB Nizet already posted, and there is a nice but quite old tutorial here.
With QueryDSL it is very convenient to create your queries dynamically, and it is easier to understand than the JPA Criteria API.
The only difficulty in using QueryDSL is the need to generate the query objects from your entities, but this can be automated with Maven; see the sketch below.
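A sketch only, assuming a QAccount query type generated by QueryDSL's annotation processor and a JPAQueryFactory:

import java.util.List;
import com.querydsl.core.BooleanBuilder;
import com.querydsl.jpa.impl.JPAQueryFactory;

public class AccountQueries {

    public List<Account> search(JPAQueryFactory queryFactory, String firstName, String lastName) {
        QAccount account = QAccount.account;
        BooleanBuilder where = new BooleanBuilder(); // empty builder = no WHERE clause, all rows
        if (firstName != null && !firstName.isEmpty()) {
            where.and(account.firstName.contains(firstName));
        }
        if (lastName != null && !lastName.isEmpty()) {
            where.and(account.lastName.contains(lastName));
        }
        return queryFactory.selectFrom(account).where(where).fetch();
    }
}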
There are two ways to handle your situation.
The hard way is using a RepositoryFactoryBean, as follows.
Create a custom RepositoryFactoryBean:
public class DaoRepositoryFactoryBean<R extends JpaRepository<T, I>, T, I extends Serializable>
        extends JpaRepositoryFactoryBean<R, T, I>
{
    @Override
    protected RepositoryFactorySupport createRepositoryFactory(EntityManager entityManager)
    {
        return new DaoRepositoryFactory(entityManager);
    }

    private static class DaoRepositoryFactory<E extends AbstractEntity, I extends Serializable> extends JpaRepositoryFactory
    {
        private EntityManager entityManager;

        public DaoRepositoryFactory(EntityManager entityManager)
        {
            super(entityManager);
            this.entityManager = entityManager;
        }

        @Override
        protected Object getTargetRepository(RepositoryMetadata metadata)
        {
            return new DaoImpl<E>((Class<E>) metadata.getDomainType(), entityManager);
        }

        @Override
        protected Class<?> getRepositoryBaseClass(RepositoryMetadata metadata)
        {
            return DaoImpl.class;
        }
    }
}
Create the Dao interface:
@NoRepositoryBean
public interface Dao<E extends AbstractEntity, I extends Serializable> extends JpaRepository<E, I>
{
    List<E> findByParamsOrAllWhenEmpty();
}
Create your implementation:
@Transactional(readOnly = true)
public class DaoImpl<E extends AbstractEntity, I extends Serializable> extends SimpleJpaRepository<E, I> implements Dao<E, I>
{
    private EntityManager entityManager;
    private Class<E> domainClass;

    public DaoImpl(Class<E> domainClass, EntityManager em)
    {
        super(domainClass, em);
        this.entityManager = em;
        this.domainClass = domainClass;
    }

    public List<E> findByParamsOrAllWhenEmpty()
    {
        // implement your custom query logic:
        // scan your domainClass methods for @Query annotations and do the rest
        return null; // placeholder
    }
}
Introduce it to Spring Data JPA:
<jpa:repositories
    base-package=""
    query-lookup-strategy=""
    factory-class="com.core.dao.DaoRepositoryFactoryBean"/>
The easy way is using a custom implementation, in which case you can't use the @Query annotation; a rough sketch follows.
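This is only a sketch of the custom-implementation route; the fragment names (AccountRepositoryCustom, AccountRepositoryImpl) follow Spring Data's Impl-suffix convention and are assumptions, not code from the question:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.TypedQuery;

interface AccountRepositoryCustom {
    List<Account> searchByFirstnameAndLastname(String firstName, String lastName);
}

class AccountRepositoryImpl implements AccountRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    public List<Account> searchByFirstnameAndLastname(String firstName, String lastName) {
        // Append a condition only when its parameter was actually supplied.
        StringBuilder jpql = new StringBuilder("SELECT a FROM Account a WHERE 1 = 1");
        boolean hasFirst = firstName != null && !firstName.isEmpty();
        boolean hasLast = lastName != null && !lastName.isEmpty();
        if (hasFirst) jpql.append(" AND a.firstName LIKE :fn");
        if (hasLast) jpql.append(" AND a.lastName LIKE :ln");

        TypedQuery<Account> query = em.createQuery(jpql.toString(), Account.class);
        if (hasFirst) query.setParameter("fn", "%" + firstName + "%");
        if (hasLast) query.setParameter("ln", "%" + lastName + "%");
        return query.getResultList();
    }
}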
"coalesce" on MySQL or "IsNull" on SQL Server is my preferred solution. They return back the first non-null value of a list and you may use it as a trick to deal with an empty string just like a null:
#Query(value = "SELECT * FROM accounts WHERE (COALESCE(first_name,'') LIKE %:firstName% AND COALESCE(last_name,'') LIKE %:lastName%)", nativeQuery = true)
public List<Account> searchByFirstnameAndLastname(#Param("firstName")String firstName,#Param("lastName")String lastName);
Thanks to the questioner and the answerer :D at this page:
like '%' does not accept NULL value

How to collect a stream into a CopyOnWriteArrayList

I'm getting "Incompatible types, required: CopyOnWriteArrayList, found: Object" with the following. I'm using IntelliJ 2016.1.1.
CopyOnWriteArrayList<Foo> l = fields.stream()
        .distinct()
        .collect(toCollection(CopyOnWriteArrayList::new));
The problem is that fields has an inappropriate type. Most likely it has a raw type, which turns the generic invocations of the Stream chain into unchecked operations returning their erased type, which is Object for the terminal collect call.
Using the right type, this works without problems, i.e.
List<String> fields = Arrays.asList("foo", "bar", "baz", "foo");
CopyOnWriteArrayList<String> l =
    fields.stream()
          .distinct()
          .collect(Collectors.toCollection(CopyOnWriteArrayList::new));
works. But note that building a CopyOnWriteArrayList this way is rather expensive as the name “copy on write” already suggests. Copying the entire contents on each insertion leads to quadratic time complexity.
The solution is to collect into a temporary collection, better suited to incremental building, before converting to the desired target type. That copying step might look like overhead, but it’s linear overhead, compared to the quadratic complexity of collecting directly into the CopyOnWriteArrayList.
CopyOnWriteArrayList<String> l =
    fields.stream()
          .distinct()
          .collect(Collectors.collectingAndThen(
              Collectors.toList(), CopyOnWriteArrayList::new));
Note that in this specific case, distinct implicitly builds a Set behind the scenes, so we can improve the performance by building the Set explicitly in place of the temporary List and remove the distinct step:
CopyOnWriteArrayList<String> l =
    fields.stream()
          .collect(Collectors.collectingAndThen(
              Collectors.toCollection(LinkedHashSet::new),
              CopyOnWriteArrayList::new));
which leads to the conclusion that for this specific use case, we can have it all simpler and potentially even more efficient:
CopyOnWriteArrayList<String> l = new CopyOnWriteArrayList<>(new LinkedHashSet<>(fields));
It seems your fields object is not of type Foo; otherwise it should work. Find below working code.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.Collectors;

public class Foo {
    private String name;

    Foo(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Foo [name=" + name + "]";
    }

    public static void main(String[] args) {
        List<Foo> fields = new ArrayList<>();
        fields.add(new Foo("aa"));
        fields.add(new Foo("bb"));
        CopyOnWriteArrayList<Foo> l = fields.stream()
                .distinct()
                .collect(Collectors.toCollection(CopyOnWriteArrayList::new));
        System.out.println("l" + l);
    }
}
PS: if your fields is raw (non-generic), this will also give an error.

How to easily access widely different subsets of fields of related objects/DB tables?

Imagine we have a number of related objects (equivalently DB tables), for example:
public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..
}

public class Job {
    private String company;
    private int salary;
    ..
}

public class House {
    private Address address;
    private int age;
    private int numRooms;
    ..
}

public class Address {
    private String town;
    private String street;
    ..
}
How to best design a system for easily defining and accessing widely varying subsets of data on these objects/tables? Design patterns, pros and cons, are very welcome. I'm using Java, but this is a more general problem.
For example, I want to easily say:
I'd like some object with (Person.name, Person.height, Job.company, Address.street)
I'd like some object with (Job.company, House.numRooms, Address.town)
Etc.
Other assumptions:
We can assume that we're always getting a known structure of objects on the input, e.g. a Person with its Job, House, and Address.
The resulting object doesn't necessarily need to know the names of the fields it was constructed from, i.e. for subset defined as (Person.name, Person.height, Job.company, Address.street) it can be the array of Objects {"Joe Doe", 180, "ACompany Inc.", "Main Street"}.
The object/table hierarchy is complex, so there are hundreds of data fields.
There may be hundreds of subsets that need to be defined.
A minority of fields to obtain may be computed from actual fields, e.g. I may want to get a person's age, computed as (now().getYear() - Person.birthday.getYear()).
Here are some options I see:
A SQL view for each subset.
Minuses:
They will be almost the same for similar subsets. This is OK just for field names, but not great for the joins part, which could ideally be refactored out to a common place.
Less testable than a solution in code.
Using a DTO assembler, e.g. http://www.genericdtoassembler.org/
This could be used to flatten the complex structure of input objects into a single DTO.
Minuses:
I'm not sure how I'd then proceed to easily define subsets of fields on this DTO. Perhaps if I could somehow set the ones irrelevant to the current subset to null? Not sure how.
Not sure if I can do computed fields easily in this way.
A custom mapper I came up with.
Relevant code:
// The enum has a value for each field in the Person object hierarchy
// that we may be interested in.
public enum DataField {
    PERSON_NAME(new PersonNameExtractor()),
    ..
    PERSON_AGE(new PersonAgeExtractor()),
    ..
    COMPANY(new CompanyExtractor()),
    ..
}
// This is the container for field-value pairs from a given instance of
// the object hierarchy.
public class Vector {
    private Map<DataField, Object> fields;
    ..
}

// Extractors know how to get the value for a given DataField
// from the object hierarchy. There's one extractor per field.
public interface Extractor<T> {
    public T extract(Person person);
}

public class PersonNameExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getName();
    }
}

public class PersonAgeExtractor implements Extractor<Integer> {
    public Integer extract(Person person) {
        return now().getYear() - person.getBirthday().getYear();
    }
}

public class CompanyExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getJob().getCompany();
    }
}
// Building the full Vector using all the fields from the DataField enum
// and their extractors.
public class FullVectorBuilder {
    public Vector buildVector(Person person) {
        Vector vector = new Vector();
        for (DataField field : DataField.values()) {
            vector.addField(field, field.getExtractor().extract(person));
        }
        return vector;
    }
}

// Definition of a subset of fields on the Vector.
public interface Selector {
    public List<DataField> getFields();
}

public class SampleSubsetSelector implements Selector {
    private List<DataField> fields = ImmutableList.of(PERSON_NAME, COMPANY);
    ...
}

// Finally, a builder for the subset Vector, choosing only
// the fields pointed to by the selector.
public class SubsetVectorBuilder {
    public Vector buildSubsetVector(Vector fullVector, Selector selector) {
        Vector subsetVector = new Vector();
        for (DataField field : selector.getFields()) {
            subsetVector.addField(field, fullVector.getValue(field));
        }
        return subsetVector;
    }
}
Minuses:
Need to create a tiny Extractor class for each of hundreds of data fields.
This is a custom solution that I came up with; it seems to work and I like it, but I feel this problem must have been encountered and solved before, likely in a better way. Has it?
Edit
Each object knows how to turn itself into a Map of fields, keyed on an enum of all fields.
E.g.
public enum DataField {
    PERSON_NAME,
    ..
    PERSON_AGE,
    ..
    COMPANY,
    ..
}

public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..

    public Map<DataField, Object> toMap() {
        return ImmutableMap.<DataField, Object>builder()
                .put(DataField.PERSON_NAME, name)
                .put(DataField.BIRTHDAY, birthday)
                .put(DataField.HEIGHT, height)
                .put(DataField.AGE, now().getYear() - birthday.getYear())
                .build();
    }
}
Then, I could build a Vector combining all the Maps, and select subsets from it like in 3.
Minuses:
Enum name clashes: e.g. if Job has an Address and House has an Address, I want to be able to specify a subset taking the street name of both. But how do I then define the toMap() method in the Address class?
No obvious place to put code doing computed fields requiring data from more than one object, e.g. physical distance from Address of House to Address of Company.
Many thanks!
Over in-memory object mapping in the application, I would favor processing the data in the database for better performance. Views, or more elaborate OLAP/data-warehouse tooling, could do the trick. If the calculated fields remain basic, as in "age = now - birth", I see nothing wrong with having that logic in the DB.
On the code side, given the large number of DTOs you have to deal with, you could use classless dynamic objects (available in some JVM languages) or JSON objects; a sketch of the JSON idea follows. When a data structure changes, you then only need to modify the DB and the UI, saving you the cost of changing a whole bunch of classes in between.
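A minimal sketch of the JSON-object idea, using Jackson as one possible library (the field names reuse the question's example data; nothing here is prescribed by the answer):

import java.util.List;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class JsonSubsetDemo {
    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();

        // One flat JSON object per row, flattened from the object hierarchy.
        ObjectNode full = mapper.createObjectNode()
                .put("personName", "Joe Doe")
                .put("height", 180)
                .put("company", "ACompany Inc.")
                .put("street", "Main Street");

        // A subset is just a list of field names; no class per subset.
        List<String> subset = List.of("personName", "company");
        ObjectNode view = mapper.createObjectNode();
        for (String field : subset) {
            view.set(field, full.get(field));
        }
        System.out.println(view); // {"personName":"Joe Doe","company":"ACompany Inc."}
    }
}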

Pattern name/convention -> class that merges attributes from different classes

I wanted to know if there is a known pattern or convention for the following scenario:
I have two classes: MAT (name: String, address: String) and MATversion (type: String, version: int).
Now I have a DataGrid (DataTable) that takes a generic list of objects for the column mapping and data filling.
The columns should be name, type, version (which are spread across MAT and MATversion).
So I create a class to make this work. This class merges the needed properties from each class (MAT, MATversion):
-> MAT_MATversion (name: String, type: String, version: int).
Does a naming convention exist for a class like MAT_MATversion? Any pattern that mirrors that?
Thanks!
Is there any specific reason why the merged result has to be a separate class?
I'm assuming every MAT object has a single MATversion.
You can add a couple of custom properties that return the type and version of the underlying MATversion object.
In C# this would result in something like this:
public class MAT {
    public string name { get; set; }
    public string address { get; set; }
    public MATversion myVersion;

    public string type {
        get { return myVersion.type; }
        set { myVersion.type = value; }
    }

    public int version {
        get { return myVersion.version; }
        set { myVersion.version = value; }
    }
}
I'm aware that this doesn't answer the question about design patterns, but I couldn't suggest another approach in a comment since I don't have that right yet.