I'm using JDBC to work with my DB.
I want to store a HashMap directly in the DB, which is Oracle.
I figure I need to serialize the Map and its contents, which are only Strings.
HashMap already implements Serializable, but my question is: how do I put the Map into the SQL using JDBC?
For instance, I have a jdbcTemplate.update("insert into ....", new Object[]{....}, new int[]{....}).
Do I just put the Map in the Object array?
Thanks
You need a table with key/value columns and, if you're storing multiple maps, an identity column (or a foreign key to another table holding the data that the hashmap belongs to).
Then in JDBC, create a prepared statement (insert into myhashmap (id, foreign_id, key, val) values (?, ?, ?, ?)) ONCE, then loop over every entry in the hashmap, setting the parameters on the statement and executing it.
(sorry, not near any code to post code snippets and don't want to type in buggy examples).
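To make the shape of that loop concrete anyway, here is a rough, untested sketch (plain JDBC; the table and column names come from the insert statement above, and connection, map, mapId, and foreignId are assumed to already exist):

// Needs java.sql.PreparedStatement and java.util.Map imports.
String sql = "insert into myhashmap (id, foreign_id, key, val) values (?, ?, ?, ?)";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    for (Map.Entry<String, String> entry : map.entrySet()) {
        ps.setLong(1, mapId);                 // identity of this particular map
        ps.setLong(2, foreignId);             // FK to the owning row, if any
        ps.setString(3, entry.getKey());
        ps.setString(4, entry.getValue());
        ps.addBatch();                        // one executeUpdate() per entry also works
    }
    ps.executeBatch();
}

Batching the inserts keeps it to one round trip per batch rather than one per map entry.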
This is also trivially extensible to maps of String -> Object, where you wish to store every field of the Object in the DB. I.e., it's basically like a standard table in a database, plus a 'key' column. Lists are similar, but with a 'rank' column to indicate ordering.
The way I have done it in the past is to create one column in a table to store your serialized objects. The datatype should be BLOB or the equivalent in Oracle.
The map can then be written with PreparedStatement.setBytes() and read back with ResultSet.getBytes().
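A rough, untested sketch of that approach (exception handling omitted; the my_table table and its columns are made up, and conn, map, and id are assumed to exist):

// Serialize the map to bytes and write it into the BLOB column.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
    oos.writeObject(map);                              // HashMap<String, String> is Serializable
}
try (PreparedStatement ps = conn.prepareStatement(
        "insert into my_table (id, map_blob) values (?, ?)")) {
    ps.setLong(1, id);
    ps.setBytes(2, bos.toByteArray());
    ps.executeUpdate();
}

// Read it back and deserialize.
try (PreparedStatement ps = conn.prepareStatement(
        "select map_blob from my_table where id = ?")) {
    ps.setLong(1, id);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(rs.getBytes("map_blob")))) {
                Map<String, String> restored = (Map<String, String>) ois.readObject();
            }
        }
    }
}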
Say I have an entity with an auto-generated primary key, and I save the entity with values for all the other fields, which may not be unique.
The entity gets auto-populated with the ID of the row that was inserted. How did it get hold of that primary key value?
EDIT:
Say the primary key column is an identity column whose value is decided entirely by the database. Hibernate then issues an insert statement without that column, and the DB decides which value to use. Does the DB communicate its decision back (I don't think so)?
Hibernate uses three methods for extracting the DB auto-generated field, depending on what is supported by the JDBC driver or the dialect you are using.
Hibernate extracts the generated field value to put it back in the POJO by:
Using the method Statement.getGeneratedKeys (Statement javadocs); a plain-JDBC sketch of this is shown below
or
Inserting and selecting the generated field value directly from the insert statement. (Dialect Javadocs)
or
Executing a select statement after the insert to retrieve the generated IDENTITY value
All of this is done internally by Hibernate.
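For reference, the first of those mechanisms looks roughly like this in plain JDBC (a sketch only, with a made-up person table; Hibernate does the equivalent for you):

try (PreparedStatement ps = connection.prepareStatement(
        "insert into person (name) values (?)",
        Statement.RETURN_GENERATED_KEYS)) {
    ps.setString(1, "Alice");
    ps.executeUpdate();
    try (ResultSet keys = ps.getGeneratedKeys()) {
        if (keys.next()) {
            long generatedId = keys.getLong(1);   // the value the database assigned
        }
    }
}

Note that some drivers prefer the key column names to be passed to prepareStatement instead of RETURN_GENERATED_KEYS, which is part of why Hibernate picks the mechanism per dialect.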
Hope it's the explanation you are looking for.
This section of the Hibernate documentation describes the auto-generation of IDs. Usually the AUTO generation strategy is used for maximum portability, and assuming you use annotations to provide your domain metadata, you can configure it as follows:
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
private long id;
Anyway, the supplied link should provide all the detail you need on generated IDs.
When you create an object with, say, a sequence-derived surrogate primary key, you pass it to the Hibernate session with that field set to the value that Hibernate interprets as "not assigned", by default 0. This field is not populated with the assigned value until the corresponding record is inserted into the database table. You can trigger the insertion either by explicitly calling flush() on the Hibernate session or by performing a database read in the same session. After that you can check the value of that field, and it will be the assigned value rather than 0.
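A small sketch of that flow (MyEntity and getId() are hypothetical; sessionFactory is an existing org.hibernate.SessionFactory):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

MyEntity entity = new MyEntity();   // id is still 0, i.e. "not assigned"
session.save(entity);               // with some generators the id is assigned right here
session.flush();                    // forces the INSERT if it has not happened yet
long id = entity.getId();           // now holds the database-assigned value, not 0

tx.commit();
session.close();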
Edit: In short, what strategy should one use for the insert and select scripts with complex objects (e.g. two select calls, one for each table, or a single select call with unions)?
We have a database insert (PostgreSQL) that includes a list of objects that is serialized (as text XML) and put into a cell in a row amongst normal strings and such. We would like to create a new table for those lists, with references back to the key of the original item. Where should the object be split off? I don't think it is possible in the SQL query itself, but if so, that would be ideal. Our favorite spot currently is just before we set up our JDBCProcedures.
string name
int id
List<sub-objects>
and currently this is being stored in a DB schema like:
name varchar(20)
id int
subObjs text [or other character type big enough to hold the serialized XML]
Please provide a little more information about the structure of your objects and clarify your question. It's not entirely clear what you're asking here.
That said, let me try to take a stab:
If you have objects in Java code with structure somewhat like this:
string name
int id
object[] list_of_sub-objects
and currently this is being stored in a DB schema like:
name varchar(20)
id int
subObjs text [or other character type big enough to hold the serialized XML]
Is that about right?
And then your question is:
We would like to create a new table with those lists with references back to the key of the original item. Where should the object be split off? I don't think it is possible in the SQL query, but if so that would be ideal.
When you say the list-attribute item is "serialized" in your existing system, do you mean as XML? It looks like XML parsing in SQL itself is still in development for PostgreSQL, and in any case it's likely to be a lot of trouble to code something like that up if you do not already know how.
But you already have application code which represents your objects in a non-serialized fashion. You could write a function in your application codebase which performs the migration. Load the records from the old database table into application objects according to your existing schema, then write them back into your new pair of DB tables according to your new schema.
This conceptually simplifies the problem down to something you can represent in pseudocode, i.e. "how do I map the structure of my object from the old database schema to the new one?"
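A very rough sketch of that migration in plain JDBC (untested; the table and column names are guesses based on the schema above, and deserializeSubObjects() stands in for whatever XML handling your application already has):

try (Statement select = connection.createStatement();
     ResultSet rs = select.executeQuery("select id, subObjs from old_table");
     PreparedStatement insert = connection.prepareStatement(
             "insert into sub_object (parent_id, value) values (?, ?)")) {
    while (rs.next()) {
        int parentId = rs.getInt("id");
        // deserializeSubObjects(): hypothetical helper that parses the XML into items
        for (String sub : deserializeSubObjects(rs.getString("subObjs"))) {
            insert.setInt(1, parentId);
            insert.setString(2, sub);
            insert.executeUpdate();
        }
    }
}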
I hope this helps! If you can clarify your structure a bit, I might be able to contribute some more specific pseudocode for the solution I'm proposing here.
We ended up splitting the insert into two calls (one for the main object, one for the sub-objects) so that each table would have its own insert, but we created a single select so that we could take advantage of the foreign keys in the query.
I am importing raw data in Groovy, hundreds of thousands of entries. I use the table fields as the keys of a hash map and then call the add(hash) method on a Groovy SQL DataSet. The DataSet goes to a Postgres table and the ID field is auto-generated from a sequence. I need to get the ID of each record as it is inserted.
In Java + Hibernate, the ID gets set automatically on the corresponding field of the object being persisted. In this case, the add() method does not return anything, nor does it add an id field to the hash map. I am trying to avoid using Hibernate/GORM here for efficiency reasons.
Thanks for any pointers or for a better approach.
groovy.sql.Sql has an executeInsert() method which returns a list of the auto-generated column values for each inserted row.
I'm not sure how to explain this. So here goes...
I'm trying to fit the method for lazy loading blobs as described here, but I'm stuck with only one table.
I have a schema (fixed, in a legacy system) which looks something like this:
MyTable
ID int
Name char(50)
image byte
This is on Informix, and the byte column is a simple large object. Now normally I would query the table with "SELECT ID, Name, (image is not null) as imageexists..." and handle the blob load later.
I can construct my object model to have two different classes (and thus two different mapping definitions) to handle the relationship, but how can I "fool" NHibernate into using the same table to represent this one-to-one relationship?
Short answer: you can't.
You either need to map it twice or (my preference) create a DTO that has the fields you want. In HQL you'd do something like:
select new MyTableDTO(t.ID, t.name) from MyTable t
Is it possible in Hibernate to have an entity where some IDs are assigned and some are generated?
For instance:
Some objects have an ID between 1 and 10000 that is generated outside of the database, while other entities come in with no ID and need one generated by the database.
You could use 'assigned' as the ID generation strategy, but you would have to give the entity its ID before you saved it to the database. Alternatively, you could build your own implementation of org.hibernate.id.IdentifierGenerator to provide the ID in the manner you've suggested.
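A rough sketch of what such a generator might look like (the IdentifierGenerator signature shown is the Hibernate 3.x one and may differ in your version; the ExternallyIdentifiable interface and the counter fallback are placeholders for your real entities and a real sequence-backed generator):

import java.io.Serializable;
import java.util.concurrent.atomic.AtomicLong;
import org.hibernate.HibernateException;
import org.hibernate.engine.SessionImplementor;
import org.hibernate.id.IdentifierGenerator;

// Hypothetical interface: entities expose whatever ID was assigned externally (or null).
interface ExternallyIdentifiable {
    Long getAssignedId();
}

public class AssignedOrGeneratedIdGenerator implements IdentifierGenerator {

    // Stand-in fallback; a real implementation would delegate to a sequence-based
    // generator configured to start above the externally assigned range (e.g. 10001).
    private final AtomicLong counter = new AtomicLong(10000);

    public Serializable generate(SessionImplementor session, Object object)
            throws HibernateException {
        if (object instanceof ExternallyIdentifiable) {
            Long assigned = ((ExternallyIdentifiable) object).getAssignedId();
            if (assigned != null) {
                return assigned;              // keep the externally assigned ID
            }
        }
        return counter.incrementAndGet();     // otherwise generate one
    }
}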
I have to agree with Cade Roux though; doing it this way seems like it would be much more difficult than using the built-in increment, uuid, or other forms of ID generation.
I would avoid this and simply have an auxiliary column for the information about the source of the object, and a column for the external identifier (assuming the external identifier is an important value you want to keep track of).
It's generally a bad idea to use columns for mixed purposes - in this case, to infer the source of an object from the nature of its surrogate key.
Use any generator you like, and make sure it can start at an offset (when you use a sequence, you can initialize it accordingly).
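For example, if you are using JPA annotations, the offset could be expressed roughly like this (the sequence and generator names are made up):

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "myGen")
@SequenceGenerator(name = "myGen", sequenceName = "my_entity_seq",
                   initialValue = 10001, allocationSize = 1)
private long id;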
For all other entities, call setId() before you insert them. Hibernate will only generate an ID if the id property is 0. Note that you should first insert objects that already have IDs into the DB and then work with them; there is a lot of code in Hibernate which expects the object to be in the DB when id != 0.
Another solution is to use negative IDs for entities which come with an ID. This also makes sure that there are no collisions when you insert a new object.