I have this code:
@Column(name = "foo")
@ReadTransformer(transformerClass = transformer.class)
private Date foo;

public static class transformer implements AttributeTransformer {

    @Override
    public void initialize(AbstractTransformationMapping atm) {
    }

    @Override
    public Object buildAttributeValue(Record record, Object o, Session sn) {
        // How do I read the value of column "foo" here?
        return null;
    }
}
My question is, how do I get the value to transform (from column foo) inside of buildAttributeValue? It is not inside the record.
You need one or more @WriteTransformer annotations to write the fields you want selected (and thus have them selected); @Column is not used with a transformation mapping.
However, if you just have a single column, then just use a converter instead, with @Convert:
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Mapping/Basic_Mappings/Default_Conversions_and_Converters
First check that the generated SQL is reading in the "foo" column by turning on logging. If it is, then check that the database is returning "foo" and not "FOO" - Java is case-sensitive on string lookups. It could be that "FOO" is in the record instead of "foo".
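To illustrate the @WriteTransformer approach, here is a minimal sketch (the @Transformation annotation, the transformer class names, and the MyEntity/getFoo references are illustrative additions, not from the original code):

@Transformation
@ReadTransformer(transformerClass = FooAttributeTransformer.class)
@WriteTransformer(transformerClass = FooFieldTransformer.class,
        column = @Column(name = "foo"))
private Date foo;

public static class FooAttributeTransformer implements AttributeTransformer {

    @Override
    public void initialize(AbstractTransformationMapping mapping) {
    }

    @Override
    public Object buildAttributeValue(Record record, Object object, Session session) {
        // Because the @WriteTransformer declares the "foo" column, EclipseLink
        // selects it, so it should now be present in the record.
        return record.get("foo");
    }
}

public static class FooFieldTransformer implements FieldTransformer {

    @Override
    public void initialize(AbstractTransformationMapping mapping) {
    }

    @Override
    public Object buildFieldValue(Object instance, String fieldName, Session session) {
        // Return the value to store in the "foo" column (MyEntity/getFoo are placeholders).
        return ((MyEntity) instance).getFoo();
    }
}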
In my project I'm using Jersey 2.23.1 with Jackson for JSON support.
When I get a request with a JSON body like { "foo":null, "bar":"123" }, matched to a class A { String foo; String bar; }, Jersey first creates an instance of A (with default values if specified in the constructor), then deserializes the JSON into a temporary object A', then copies all fields that were specified in the JSON from A' to A. If I have default values in the A constructor and fields equal to null in the JSON, all my default values are erased and replaced by null. So in the example above, if I have a default value for the foo field, it will be replaced by null in the object Jersey passes as the parameter to my @Path-annotated method.
I'm using @JsonInclude(Include.NON_NULL) on class A to avoid transferring null fields in the Response. But that only works for serialization; what about deserialization? I mean that having { "foo":null } as JSON results in the field "foo" being null in the new object instance after deserialization.
Here is some code to sum all of this up:
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(value = Include.NON_NULL)
public class User {

    public enum EUserRole {
        PARENT, STUDENT, PROF, ADMIN
    }

    @Id
    public String id;
    public String firstName;
    public String lastName;
    public EUserRole role;

    public User() {
        id = ObjectId.get().toString();
        role = EUserRole.STUDENT;
        lastName = "RandomLastName";
    }
}
If I pass this kind of JSON
{
"id":null,
"lastName":null,
"firstName":"Random First Name",
"role":"STUDENT"
}
to my method (in controller)
@POST
public Response createUser(final User entity) {
    // ...
}
the result is that all fields that are null in the JSON are set to null in my entity, instead of keeping the constructor default values.
Do you know if there is a way to tell Jackson to ignore null fields during deserialization? Or is this Jersey-related behavior?
There is no way to ignore data from the JSON payload in that sense, based on the value contained (you can use ignoral to ignore all values for a given property).
So if you want to avoid null assignment, you need to define a setter that just swallows the null value (that is, only assigns non-null values).
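For example, a minimal sketch against the User class above (the setter is my addition; Jackson should use it for the lastName property instead of writing the field directly):

public void setLastName(String lastName) {
    // Ignore explicit nulls from the JSON so the constructor default survives.
    if (lastName != null) {
        this.lastName = lastName;
    }
}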
The ability to prevent null assignment might be a useful feature to add via @JsonFormat.Feature, something like:
// hypothetical -- no such feature exists yet
@JsonFormat(without = JsonFormat.Feature.ALLOW_NULL_ASSIGNMENT)
so perhaps this could be a feature request.
And the reason I think this belongs in per-property handling is that it seems like a specific rule for some of the properties. Although perhaps there could also be a matching global setting, if users really want such null-ignoral.
I would like to create a calculator application that can switch between different number bases. As far as entering digits is concerned, I was thinking the following would be a flexible API:
public interface ICalculator
{
    string Enter(INumberElement element);
}

public class BaseTenCalculator : ICalculator
{
    public string Enter(INumberElement element)
    {
        ...
    }
}

public class BaseTwoCalculator : ICalculator
{
    public string Enter(INumberElement element)
    {
        ...
    }
}
My problem is that for the BaseTenCalculator, I would like a method
Enter(BaseTenNumberElement element)
and for a BaseTwoCalculator, I would like a method
Enter(BaseTwoNumberElement element)
to make sure only valid digits for that number base get entered. However, the only way I can think of enforcing this constraint is downcasting the 'element' argument in the two different implementations, and throwing an exception if INumberElement is not of the correct type. I feel like this is 'wrong', and I'm missing something. Is there another way? Is it even possible to create a common interface for two different number base calculators?
public interface ICalculator<in T> where T : INumberElement
{
    string Enter(T element);
}

public class BaseTenCalculator : ICalculator<BaseTenNumberElement>
{
    public string Enter(BaseTenNumberElement element)
    {
        throw new NotImplementedException();
    }
}

public class BaseTwoCalculator : ICalculator<BaseTwoNumberElement>
{
    public string Enter(BaseTwoNumberElement element)
    {
        throw new NotImplementedException();
    }
}
I think you're thinking of the problem incorrectly. A number is a number regardless of base. Base is only a visible representation of the number. A good example to work from might be BigInteger. It has a constructor: BigInteger(String val, int radix), and a function: toString(int radix). All the work of representing the number is done the same. The only thing that differs is parsing from a string representation into the number, and then getting back out into a number format in a particular base.
You could create a multi-base calculator by using BigInteger or BigDecimal underneath and just using a base selection to set the radix value to parse or print the number(s). You'd also want to limit the input buttons (assuming you're using buttons), but that's really just a counting problem.
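For instance, a small sketch of that radix idea in Java (the class and variable names are only illustrative):

import java.math.BigInteger;

public class MultiBaseDemo {
    public static void main(String[] args) {
        // Parse the digits "1010" as a base-2 value...
        BigInteger n = new BigInteger("1010", 2);

        // ...the stored number is the same; only the representation changes.
        System.out.println(n.toString(10)); // prints 10
        System.out.println(n.toString(16)); // prints a
    }
}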
I'm facing this (at least for me) interesting task: building a SQL insert statement from a POJO-like object. Let me say I don't need to add a framework between my Scala application and the DB, because I just need to insert data into a single DB table.
So, supposing the attributes of my class are named the same as the columns of the DB table, I'd like to use Scala reflection in order to get, from a class like this one,
class MyDataObj {
    var a: Int = 345
    var b: Boolean = false
    var c: Double = 1243.98
    var d: String = "A random string"
}
a SQL insert statement like this
INSERT INTO table_a (a, b, c, d) values (345, false, 1243.98, 'A random String');
Well, what we need is
1) access to the class attributes
2) access to the attribute types
3) access to the attribute values of the object instance
In order to get something like this
List( ("a","Int",345), ("b","Boolean",false), ("c","Double",1243.98), ... )
that will be easy to transform into what we want.
Up to now, I've only found out how to access the attribute names:
val columns = typeOf[MyDataObj].members.view.filter{_.isTerm}.
  filter{!_.isMethod}.map{_.name}.toList
How can I get the rest I need?
Thanks as usual for supporting me.
In your case, you can use the following code:
val o = new MyDataObj
val attributes = o.getClass.getDeclaredMethods.filter {
  _.getReturnType != Void.TYPE
}.map {
  method => (method.getName, method.getReturnType, method.invoke(o))
}
Here I use getDeclaredMethods to get the public methods of MyDataObj. Note that getDeclaredMethods cannot return methods declared in a parent class.
For MyDataObj, getDeclaredMethods will return the following methods:
public double MyDataObj.c()
public boolean MyDataObj.b()
public java.lang.String MyDataObj.d()
public int MyDataObj.a()
public void MyDataObj.c_$eq(double)
public void MyDataObj.d_$eq(java.lang.String)
public void MyDataObj.b_$eq(boolean)
public void MyDataObj.a_$eq(int)
So I add a filter to filter out irrelevant methods.
I'm trying to write an application which uses Hibernate to write to the database; however, for some actions I have to use JDBC on the tables created by Hibernate.
JDBC is required to give the administrator the ability to create SQL queries which return statistical info about the data in the database, like the number of processed documents of a specified type, the number of successful/failed login attempts, or the total value of products in orders.
To do that, I've built a form that allows creating a class whose overridden toString() returns a nice SQL query string.
It all works, but now I'm trying to make the administrator's life easier by giving him the ability to choose table/column names. And here is the problem, because they are created by Hibernate: some via the @Column annotation, others from the field name.
How can I check how a field is mapped?
I know it's all about reflection, but I haven't done much of that in Java yet.
Example:
@Entity
@Table(name = "my_table_name")
public class TableOFSomething implements Serializable {

    // This field isn't mapped into the database and info about it is not required.
    // In fact, info about it may cause an error.
    private static final long serialVersionUID = 7L;

    @Id
    @Column(name = "id")
    private String id;

    private String fieldOne;

    @Column(name = "field_two")
    private String fieldTwo;

    @Column(name = "renamed_just_for_fun")
    private int Number;

    // code with getters & setters
}
How can I write methods with definitions like these?
public <T> String tableName(Class<T> target); // returns the name of the table in the database
public <T> ArrayList<String> tableFields(Class<T> target); // returns the names of the fields in the database
Hibernate has an API, getClassMetadata, that can explore the mapping. The API may change and has moved between versions, but I would use it rather than reflection for this.
Look at this post for more details:
Get the table name from the model in Hibernate
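As a rough sketch of that approach (the class and method names are illustrative, and the cast to AbstractEntityPersister is one common way to reach the table name; the exact API location differs between Hibernate versions):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.hibernate.SessionFactory;
import org.hibernate.persister.entity.AbstractEntityPersister;

public class MappingExplorer {

    private final SessionFactory sessionFactory;

    public MappingExplorer(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public String tableName(Class<?> target) {
        // The ClassMetadata returned here is, in practice, the entity persister.
        AbstractEntityPersister persister =
                (AbstractEntityPersister) sessionFactory.getClassMetadata(target);
        return persister.getTableName();
    }

    public List<String> tableFields(Class<?> target) {
        AbstractEntityPersister persister =
                (AbstractEntityPersister) sessionFactory.getClassMetadata(target);
        List<String> columns = new ArrayList<>();
        // The identifier column(s) are reported separately from the other properties.
        columns.addAll(Arrays.asList(persister.getIdentifierColumnNames()));
        for (String property : persister.getPropertyNames()) {
            // A property may map to several columns; usually it is just one.
            columns.addAll(Arrays.asList(persister.getPropertyColumnNames(property)));
        }
        return columns;
    }
}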
If you want to do it with reflection, here is an example:
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import javax.persistence.Column;
import javax.persistence.Table;
import odi.beans.statistic.QueryBean;
public class ReflectionTest {

    public static void main(String[] args) {
        ReflectionTest test = new ReflectionTest();
        System.out.println("Table name of " + QueryBean.class.getName() + " is " + test.getTableName(QueryBean.class));
        System.out.println("Column names in this table are:");
        for (String n : test.getColumnNames(QueryBean.class)) {
            System.out.println("\t" + n);
        }
        System.out.println("Good bye ;)");
    }

    public <T> ArrayList<String> getColumnNames(Class<T> target) {
        ArrayList<String> ret = new ArrayList<>();
        Field[] fields = target.getDeclaredFields();
        String fieldName = null;
        for (Field f : fields) {
            // skip static fields such as serialVersionUID
            if (Modifier.isStatic(f.getModifiers()))
                continue;
            if (f.isAnnotationPresent(Column.class)) {
                Column a = f.getAnnotation(Column.class);
                fieldName = a.name();
            } else {
                fieldName = f.getName();
            }
            ret.add(fieldName);
        }
        return ret;
    }

    public <T> String getTableName(Class<T> target) {
        String ret = target.getSimpleName();
        if (target.isAnnotationPresent(Table.class)) {
            Table t = target.getAnnotation(Table.class);
            ret = t.name();
        }
        return ret;
    }
}
Does it cover all possibilities?
I know now that the Hibernate way would be easier, but this is also about learning the very useful reflection mechanism :)
EDIT:
Important question:
Will this work only with annotations, or also with XML mappings?
Hi all, I have a horrid database I have to work with, and LINQ to SQL is the option I'm taking to retrieve data from it. Anyhow, I'm trying to reuse a function by passing in a different table name based on a user selection, and to my knowledge there is no way to modify the TEntity or Table<> in a DataContext query.
This is my current code.
public void GetRecordsByTableName(string table_name)
{
    string sql = "Select * from " + table_name;
    var records = dataContext.ExecuteQuery</*Supposed Table Name*/>(sql);
    ViewData["recordsByTableName"] = records.ToList();
}
I want to populate my ViewData with the enumerable records.
You can call the ExecuteQuery method on the DataContext instance. You will want to call the overload that takes a Type instance, outlined here:
http://msdn.microsoft.com/en-us/library/bb534292.aspx
Assuming that you have a type that is attributed correctly for the table, passing the Type instance for that type along with the SQL will give you what you want.
As casperOne already answered, you can use the first overload of the ExecuteQuery method (the one that takes a Type parameter). Since I had a similar issue and you asked for an example, here is one:
public IEnumerable<YourType> RetrieveData(string tableName, string name)
{
    string sql = string.Format("Select * FROM {0} where Name = '{1}'", tableName, name);
    var result = YourDataContext.ExecuteQuery(typeof(YourType), sql);
    // The non-generic ExecuteQuery returns a plain IEnumerable, so cast the elements.
    return result.Cast<YourType>();
}
Pay attention to YourType, since you will have to define a type that has a constructor (it can't be abstract or an interface). I'd suggest you create a custom type that has exactly the same attributes as your SQL table. If you do that, the ExecuteQuery method will automatically 'inject' the values from your table into your custom type, like this:
// This is a hypothetical table mapped from the LINQ DBML
[global::System.Data.Linq.Mapping.TableAttribute(Name = "dbo.ClientData")]
public partial class ClientData : INotifyPropertyChanging, INotifyPropertyChanged
{
    private int _ID;
    private string _NAME;
    private string _AGE;
}

// This would be your custom type that emulates your ClientData table
public class ClientDataCustomType
{
    private int _ID;
    private string _NAME;
    private string _AGE;
}
So, for the former example, the ExecuteQuery call would be:
var result = YourDataContext.ExecuteQuery(typeof(ClientDataCustomType), sql);