How to set an auto-increment value for the primary key in ActiveAndroid

I'm new to ActiveAndroid and I have CRUD operations working, but I am unable to set auto-increment for the primary key. I already tried the code below, but it is of no use to me:
@Table(name = "Employee", id = "EmpId")
public class Employee extends Model {
    @Column(name = "empid")
    public long empid;

    @Column(name = "name")
    public String name;
}
Here Employee is my table name, and I have two fields: empid and name. I need the primary key to be auto-incremented.
How can I do that?

To do an update with a unique column as your pseudo primary key, the annotation would look something like this:
@Column(name = "empid", unique = true, onUniqueConflict = Column.ConflictAction.REPLACE)
public long empid;
As we can read in the documentation:
One important thing to note is that ActiveAndroid creates an id field
for your tables. This field is an auto-incrementing primary key.
Moreover, if you would like to create a custom primary key in your model, you can check the solution mentioned in a GitHub issue for ActiveAndroid, which looks like this:
@Table(name = "Employee", id = "EmpId")
public class Employee extends Model {
    @Column(name = "id")
    public long id;

    @Column(name = "name")
    public String name;
}
Then the id field is the custom primary key, which will be auto-incremented.
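As a quick usage sketch (not part of the original answer), this is how the auto-generated key can be read back after a save, assuming the Employee model above and an ActiveAndroid application that has already been initialized:

// Sketch only: assumes ActiveAndroid.initialize(...) has run in the Application class.
Employee employee = new Employee();
employee.name = "Alice";
employee.save();                       // INSERT; SQLite assigns the auto-incremented key
Long generatedId = employee.getId();   // reads back the generated primary key (the "EmpId" column)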

Related

JPA/Hibernate overlapping PK and FK Columns

We're using Postgres and JPA/Hibernate to import a lot of data on a biweekly basis (~50-100M rows per import). We're trying to partition our tables per import, which has us running into some Hibernate PK/FK column mapping problems. The setup is essentially this on the SQL side:
CREATE TABLE row (
    import_timestamp timestamp,
    id uuid,
    PRIMARY KEY (import_timestamp, id)
) PARTITION BY LIST (import_timestamp);

CREATE TABLE row_detail (
    import_timestamp timestamp,
    id uuid,
    row_id uuid,
    PRIMARY KEY (import_timestamp, id),
    CONSTRAINT row_detail_row_fk FOREIGN KEY (row_id, import_timestamp) REFERENCES row (id, import_timestamp)
) PARTITION BY LIST (import_timestamp);
and this on the Java side:
@Entity(name = "row")
public class RowEntity {
    @EmbeddedId
    private PartitionedId id;

    @OneToMany(cascade = ALL, mappedBy = "row")
    private List<RowDetailEntity> details;
}

@Entity(name = "row_detail")
public class RowDetailEntity {
    @EmbeddedId
    private PartitionedId id;

    @ManyToOne
    @JoinColumns({
        @JoinColumn(name = "row_id", referencedColumnName = "id"),
        @JoinColumn(name = "importTimestamp", referencedColumnName = "importTimestamp")
    })
    private RowEntity row;
}

@Embeddable
public class PartitionedId implements Serializable {
    private Instant importTimestamp;
    private UUID id;
}
Hibernate then complains on boot that:
column: import_timestamp (should be mapped with insert="false" update="false")
I can silence that error by doing as it says, but that makes little sense, because I am forced to set insertable=false and updatable=false for both @JoinColumn()s, which would mean row_id isn't populated on insert.
I could go the @MapsId route, but only if I give the row_detail table a PK that includes all 3 properties (import_timestamp, id, row_id), and I don't really want or need that.
So the question is, how do I get Hibernate to understand my overlapping, but not entirely nested PK/FK?
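The question is left open in the thread above; the following is only a hedged sketch of one workaround that is sometimes used with overlapping composite keys, not an answer from the thread: map the association with both join columns read-only (which typically silences the insert="false" update="false" complaint) and expose row_id separately as a writable basic attribute so it is still populated on insert. The snake_case column names and FetchType.LAZY are assumptions based on the DDL above.

@Entity(name = "row_detail")
public class RowDetailEntity {
    @EmbeddedId
    private PartitionedId id;

    // Read-only association: neither join column is writable, so the overlap with the
    // embedded id's import_timestamp column no longer conflicts.
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name = "row_id", referencedColumnName = "id",
                    insertable = false, updatable = false),
        @JoinColumn(name = "import_timestamp", referencedColumnName = "import_timestamp",
                    insertable = false, updatable = false)
    })
    private RowEntity row;

    // Writable mapping of the same FK column, so row_id is set when inserting a detail row.
    @Column(name = "row_id")
    private java.util.UUID rowId;
}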

JPA/Hibernate not using all fields in composite primary key

I have a many-to-one relationship as below (I have removed columns that do not contribute to this discussion):
@Entity
@SecondaryTable(name = "RecordValue", pkJoinColumns = {
        @PrimaryKeyJoinColumn(name = "RECORD_ID", referencedColumnName = "RECORD_ID") })
class Record {
    @Id
    @Column(name = "RECORD_ID")
    long recordId;

    @OneToMany(mappedBy = "key")
    Set<RecordValue> values;
}

@Entity
class RecordValue {
    @EmbeddedId
    RecordValuePK pk;

    @Column
    long value;

    @ManyToOne
    @MapsId("recordId")
    private Record key;
}

@Embeddable
class RecordValuePK {
    @Column(name = "RECORD_ID")
    @JoinColumn(referencedColumnName = "RECORD_ID", foreignKey = @ForeignKey(name = "FK_RECORD"))
    long recordId;

    @Column(name = "COLLECTION_DATE")
    LocalDate collectionDate;
}
When Hibernate creates the tables, the RecordValue table has a primary key consisting of only RECORD_ID and NOT COLLECTION_DATE.
What could be the problem?
Hibernate debug log shows the following:
DEBUG - Forcing column [collection_date] to be non-null as it is part of the primary key for table [recordvalue]
DEBUG - Forcing column [key_record_id] to be non-null as it is part of the primary key for table [recordvalue]
DEBUG - Forcing column [record_id] to be non-null as it is part of the primary key for table [recordvalue]
.
.
Hibernate:
    create table Record (
        RECORD_ID bigint not null,
        primary key (RECORD_ID)
    )
Hibernate:
    create table RecordValue (
        COLLECTION_DATE date not null,
        VALUE bigint not null,
        key_RECORD_ID bigint not null,
        RECORD_ID bigint not null,
        primary key (RECORD_ID)
    )
Removing the @SecondaryTable specification resolved this issue. The @SecondaryTable specification was forcing both tables to have the same primary key. I found this solution after reading this blog:
https://antoniogoncalves.org/2008/05/20/primary-and-secondary-table-with-jpa.
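For reference, a sketch of the Record entity with the @SecondaryTable mapping removed; everything else is assumed to stay exactly as in the question, so RecordValue keeps its @EmbeddedId and Hibernate can build its primary key from both RECORD_ID and COLLECTION_DATE:

@Entity
class Record {
    @Id
    @Column(name = "RECORD_ID")
    long recordId;

    // No @SecondaryTable here, so the two tables no longer share a primary key definition.
    @OneToMany(mappedBy = "key")
    Set<RecordValue> values;
}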

Spring Data JPA persistence - Could not commit JPA transaction - ORA-00001: unique constraint violated

I am trying to save an entity that has a many-to-many association to another entity, cascade the persistence to the associated entity, and create the association, using a Spring Data JPA repository.
I can insert the parent entity_a, which contains a set of entity_b, using entityARepository.save(entityA). Spring Data JPA takes care of all the inserts needed in the transaction: all the entity_b's get inserted, entity_a gets inserted, and the join table in the middle has the association inserted as well. If I update the same entity_a with a new value in, say, the timestamp column, the same entityARepository.save(entityA) handles this and does a corresponding update.
The problem happens when there already exists an entity_b (which has an association with some entity_a) and I try to insert a new entity_a with the same entity_b. It is many-to-many, so this is how the data model is supposed to be. But instead of updating the existing entity_b during this entityA save() transaction, it tries to do inserts on entity_b, and a constraint violation exception on the primary key is thrown.
org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.0.v20150309-bf26070): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (USER1.SYS_C0013494) violated
Error Code: 1
Call: INSERT INTO ENTITY_B (ID, NAME, VALUE, TIME_STAMP) VALUES (?, ?, ?, ?)
bind => [4 parameters bound]
Query: InsertObjectQuery(EntityB [name=shape, value=circle])
The problem is that Spring Data doesn't have an update() method. It only has save(), which should handle the update if it receives the same primary key. It isn't doing that here: when a new entity_a is saved with a collection of entity_b, and any of those entity_b rows already exist, the whole transaction fails due to the primary key constraint violation on entity_b.
public class EntityA {
    @Id
    @SequenceGenerator(name = "EntityASeq", sequenceName = "SQ_ENTITY_A", allocationSize = 1, initialValue = 1)
    @GeneratedValue(strategy = GenerationType.IDENTITY, generator = "EntityASeq")
    @Column(name = "ID")
    private Integer id;

    @ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE}, fetch = FetchType.LAZY)
    @JoinTable(name = "MY_JOINED_TABLE",
            joinColumns = {
                    @JoinColumn(name = "a_id", referencedColumnName = "ID")},
            inverseJoinColumns = {
                    @JoinColumn(name = "b_id", referencedColumnName = "ID")})
    private Set<EntityB> attributes;

    // These three columns below have a unique constraint together.
    @Column(name = "name")
    private String name;

    @Column(name = "tenant")
    private String tenant;

    @Column(name = "type")
    private String type;

    @Column(name = "timestamp")
    private Timestamp timestamp;
}

public class EntityB {
    @Id
    @SequenceGenerator(name = "EntityBSeq", sequenceName = "SQ_ENTITY_B", allocationSize = 1, initialValue = 1)
    @GeneratedValue(strategy = GenerationType.IDENTITY, generator = "EntityBSeq")
    @Column(name = "ID")
    private Integer id;

    @ManyToMany(mappedBy = "attributes")
    private Set<EntityA> aSet;

    // These two columns below have a unique constraint together.
    @Column(name = "name")
    private String name;

    @Column(name = "value")
    private String value;

    @Column(name = "timestamp")
    private Timestamp timestamp;
}
The id for each is generated by default. I also have a unique constraint on a few columns, which means that if an EntityB has the same name/value as an existing one in the database, I want to just update the timestamp. That works if entity_a is already in the table and has the same entity_b's: A's and B's timestamps are updated with no error when I persist with entityARepository.save(entityA). (I do some checking on the db with findOne because the id is auto-generated and not known, so if a name/value pair exists I don't try to insert with a new id; I reuse the one already in the db and it works. Similarly for entity_a's tenant/name/type.)
It also works when I persist an existing entity_a with updated entity_b's. So if a new entity_b is associated with an entity_a (one that already exists as an association with a different entity_a), etc., that works and the persistence succeeds.
The issue, again, is just on INSERT of entityA via repo.save() when some entity_b's already exist for other associations. It should be doing:
INSERT INTO entity_a ...
UPDATE entity_b ...
INSERT INTO MY_JOINED_TABLE ...
But it seems like it's doing
INSERT INTO entity_a ...
INSERT INTO entity_b ... -- fails because primary key constraint fails
INSERT INTO MY_JOINED_TABLE ...
EDIT: I tried removing CascadeType.PERSIST, but I get an error saying:
During synchronization a new object was found through a relationship that was not marked cascade PERSIST: EntityB [name=color, value=blue].
I wanted to try to insert/update manually, but I couldn't do that: it wants EntityA to be mapped with PERSIST because it has associations to EntityB.
I also tried inserting in the reverse direction, and now I'm having issues inserting via entityB.save() when some entityA already exists and I'm adding a new entityA to entityB.
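The thread contains no accepted answer; as a hedged sketch only, one common way to avoid the duplicate INSERT is to reconcile the attributes before saving: replace each transient EntityB with the already-persisted row matching its unique (name, value) pair, so the cascade updates instead of inserting. The getters/setters and the derived query entityBRepository.findByNameAndValue(...) used below are hypothetical helpers, not shown in the question.

// Sketch only: accessors and findByNameAndValue are assumed to exist; HashSet is java.util.HashSet.
Set<EntityB> reconciled = new HashSet<>();
for (EntityB b : entityA.getAttributes()) {
    EntityB existing = entityBRepository.findByNameAndValue(b.getName(), b.getValue());
    if (existing != null) {
        existing.setTimestamp(b.getTimestamp());  // refresh the timestamp on the existing row
        reconciled.add(existing);                 // reuse the persisted instance (keeps its id)
    } else {
        reconciled.add(b);                        // genuinely new attribute; let the cascade persist it
    }
}
entityA.setAttributes(reconciled);
entityARepository.save(entityA);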

Get record from another table using JPA

I have been trying to figure out how to do this for some time without any luck, and I have not managed to find anything useful while searching on Google either.
I have THREE tables:
HOTEL
- id
- name
- local_id (foreign key)
DESCRIPTION
- id
- description
- hotel_id (foreign key)
- locale_id (foreign key)
LOCALE
- id
- local
I also have the following HOTEL DAO model:
@Entity
@Table(name = "HOTEL")
public class Hotel implements Serializable {
    @Column(name = "id")
    private long id;

    @Column(name = "description")
    private HotelDescription description;
}
Using JPA, how can I retrieve the data from table DESCRIPTION based on hotel_id and locale_id to populate description in DAO model hotel?
Well, you also have a HotelDescription JPA entity, right? So you can define a bidirectional mapping between the entities.
Instead of
@Column(name = "description")
private HotelDescription description;
you should have something like
@OneToOne(mappedBy = "hotel", cascade = CascadeType.ALL)
private HotelDescription desc;
and on the other side, in HotelDescription, you should have the back mapping:
@OneToOne
@JoinColumn(name = "hotel_id")
private Hotel hotel;
When you load the Hotel entity, JPA will also fetch the child entity (HotelDescription) for you.
If you want to use a @OneToMany mapping instead (many descriptions for one hotel), it will be
@OneToMany(mappedBy = "hotel", cascade = CascadeType.ALL)
private List<HotelDescription> descriptions;
and on the other side
@ManyToOne
@JoinColumn(name = "hotel_id")
private Hotel hotel;
In JPA you can use several types of mappings, such as OneToMany, ManyToMany... Those are only the basics; find a tutorial. You may start here: http://docs.oracle.com/javaee/6/tutorial/doc/bnbqa.html (probably not the best one).
Oh, and make sure you annotate id with @Id.
I would consider ditching the Locale table and working with java.util.Locale directly. Hibernate (not sure about other JPA implementations) can automatically convert a char column to java.util.Locale. This would then look something like:
DESCRIPTION
- id
- description
- hotel_id (foreign key)
- locale
And the entity:
import java.util.Locale;

@Entity
@Table(name = "HOTEL")
public class Hotel implements Serializable {

    @Id
    @Column(name = "id")
    private long id;

    @OneToMany
    @JoinColumn(name = "hotel_id", nullable = false)
    @MapKeyColumn(name = "locale")
    private Map<Locale, HotelDescription> descriptions;

    public String getDescriptionForLocale(Locale locale) {
        // try an exact match, e.g. en_US
        if (descriptions.containsKey(locale)) {
            return descriptions.get(locale).getDescription();
        }
        // try language only, e.g. en
        Locale languageOnly = new Locale(locale.getLanguage());
        if (descriptions.containsKey(languageOnly)) {
            return descriptions.get(languageOnly).getDescription();
        }
        // no match: return a default or null
        return null;
    }
}
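A hypothetical usage sketch of the mapping above; the EntityManager lookup and the id value 1L are illustrative assumptions:

// Load a hotel and resolve a localized description, falling back from en_US to en, then to null.
Hotel hotel = entityManager.find(Hotel.class, 1L);
String description = hotel.getDescriptionForLocale(Locale.US);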

Fluent NHibernate: Custom ForeignKeyConvention not working with explicitly specified table names

EDIT: for the tl;dr crowd, my question is: How do I access the mappings from inside the ForeignKeyConvention in order to determine the table name that a given type is mapped to?
The long version:
I am using Fluent NHibernate to configure NHibernate, and I have a custom foreign key convention that is failing when I alias tables and columns.
My tables use a convention where the primary key is always called "PK", and the foreign key is "FK" followed by the name of the foreign key table, e.g., "FKParent". For example:
CREATE TABLE OrderHeader (
    PK INT IDENTITY(1,1) NOT NULL,
    ...
)

CREATE TABLE OrderDetail (
    PK INT IDENTITY(1,1) NOT NULL,
    FKOrderHeader INT NOT NULL,
    ...
)
To make this work, I've built a custom ForeignKeyConvention that looks like this:
public class AmberForeignKeyConvention : ForeignKeyConvention
{
    protected override string GetKeyName(Member member, Type type)
    {
        if (member == null)
            return "FK" + type.Name;   // many-to-many, one-to-many, join
        return "FK" + member.Name;     // many-to-one
    }
}
This works so long as my entities are named the same as the table. But it breaks when they aren't. For example, if I want to map the OrderDetail table to a class called Detail, I can do so like this:
public class DetailMap : ClassMap<Detail>
{
    public DetailMap()
    {
        Table("OrderDetail");
        Id(o => o.PK);
        References(o => o.Order, "FKOrderHeader");
        ...
    }
}
The mapping works for loading a single entity, but when I try to run any kind of complicated query with a join, it fails, because the AmberForeignKeyConvention class is making incorrect assumptions about how the columns are mapped. I.e., it assumes that the foreign key should be "FK" + type.Name, which in this case is Order, so it calls the foreign key "FKOrder" instead of "FKOrderHeader".
So, as I said above, my question is: how do I access the mappings from inside the ForeignKeyConvention in order to determine a given type's mapped table name (and, for that matter, its mapped column names too)? The answer to this question seems to hint at the right direction, but I don't understand how the classes involved work together. When I look through the documentation, it's frightfully sparse for the classes I've looked up (such as the IdMapping class).
The idea is to load the mappings up front:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using FluentNHibernate;
using FluentNHibernate.Conventions;

public class AmberForeignKeyConvention : ForeignKeyConvention
{
    private static IDictionary<Type, string> tablenames;

    static AmberForeignKeyConvention()
    {
        // Build a lookup from mapped entity type to its table name by instantiating
        // every ClassMap (IMappingProvider) in the assembly and reading its class mapping.
        tablenames = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => typeof(IMappingProvider).IsAssignableFrom(t))
            .ToDictionary(
                t => t.BaseType.GetGenericArguments()[0],
                t => ((IMappingProvider)Activator.CreateInstance(t)).GetClassMapping().TableName);
    }

    protected override string GetKeyName(Member member, Type type)
    {
        return "FK" + tablenames[type];   // many-to-one
    }
}