iBatis LazyLoading Configuration

I need to serialize some objects loaded by iBatis, but am getting NotSerializableException because lazyLoadingEnabled="true".
I see that this is a known limitation (see https://issues.apache.org/jira/browse/IBATIS-529), but I have yet to find a workaround other than turning lazy loading off for the entire application.
My question is whether there is any finer-grained control over lazy loading. It is currently enabled/disabled in sqlMapConfig/settings, which applies to all the sqlMap resources. Is there a way to enable/disable it programmatically, or just for certain sqlMaps?

I never found a solution for controlling lazy loading explicitly. However, I did solve my issue by converting my object into XML (thereby loading all necessary fields explicitly) and using that as my payload, avoiding Java object serialization altogether. Obviously not ideal, but it worked for my needs and seems on par performance-wise (given that object serialization is slow anyway).
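A minimal sketch of that workaround (the helper and bean names are hypothetical, not from the answer): touch the lazy properties first so iBatis resolves its proxies, then emit XML with java.beans.XMLEncoder, which only requires JavaBeans conventions (public no-arg constructor, getters/setters) rather than Serializable.

    import java.beans.XMLEncoder;
    import java.io.ByteArrayOutputStream;

    public class XmlPayload {
        // Accessing every lazy property before calling this forces iBatis to
        // resolve its proxies; XMLEncoder then walks plain getters/setters.
        public static byte[] toXml(Object loadedBean) {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (XMLEncoder encoder = new XMLEncoder(buffer)) {
                encoder.writeObject(loadedBean);
            }
            return buffer.toByteArray(); // ship this as the payload
        }
    }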

Find the configuration file in your project with the <sqlMapConfig> element; in its nested <settings> element you can specify lazyLoadingEnabled="false". But if your query fetches other complex objects via external fetches, you surely will have a real Stack Overflow problem!
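For reference, the setting is global: it lives in the <settings> element of the sqlMap-config file and applies to every <sqlMap> resource listed there, which is why there is no per-sqlMap switch (the layout below is typical; the resource path is illustrative).

    <sqlMapConfig>
      <!-- lazyLoadingEnabled is global: it covers every sqlMap listed below. -->
      <settings lazyLoadingEnabled="false" enhancementEnabled="true"/>
      <sqlMap resource="com/example/Account.xml"/>
    </sqlMapConfig>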

Related

What kind of objects get serialized and why? When would I use this?

I understand what serialization is. I simply do not know when I would use it. I have seen the discouraged practice of storing session data in a database, and things like that, but other than that I do not know.
What kind of object state would I save in a database, the file system, or anything else that needs persistence? And why would I use it for a non-"permanent" reason?
I do not have a context per se. All I really do is client-server web apps. I may get to use a Java stack for it, but I'd really like to understand this part of things in case I need it.
I have asked similar questions. I'm just not understanding.
In a sentence: using a generic serialiser is a reasonable way to save stuff to disk or move stuff over a network, without having to design a data format, write code that emits data in that format, and write a parser for that format (all error-prone) by hand.
Any time you want to persist an object (or object hierarchy) beyond its existence inside a single execution on a single machine, you are going to want to serialise and deserialise.
Some scenarios that come to mind:
Caching: when you want to offload in-memory objects to disk, the caching framework can serialise the object to disk (a minimal sketch of this follows below).
For thick clients (either a desktop application or an app using RMI) you'll need to transfer objects from one JVM to another, and this is done by serialising them.
I can't think of any other scenarios off the top of my head.
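The caching case above, sketched with plain java.io serialisation (the class and method names are illustrative, not from the answer):

    import java.io.*;

    public class DiskCache {
        // Spill an object to disk; the entire reachable object graph is written.
        public static void store(File file, Serializable value) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
                out.writeObject(value);
            }
        }

        // Re-inflate the object later, e.g. after a restart or a cache miss in memory.
        public static Object load(File file) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
                return in.readObject();
            }
        }
    }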

What does EclipseLink's internal optimization via weaving include?

I am new to EclipseLink and am getting to know it step by step. Right now I am looking at performance optimization via weaving, in order to use lazy loading for *ToOne relationships, fetch groups for partial loading of entity instances, change tracking for commit performance, and "internal optimizations for ...". And here is the question: unfortunately, Googling has not turned up a clear description of what those internal optimizations actually are.
Could somebody explain what kind of internal optimizations EclipseLink performs via this weaving setting?
Thanks in advance,
Simeon
I'd recommend you break up your question to make it more specific about what exactly you are looking for, but I'll try to add information.
Weaving allows EclipseLink to change the bytecodes of your entities to add provider-specific methods and the like, so that you do not need to introduce a dependency on the provider within your model. Each of the terms listed in the doc you found - lazy loading, fetch groups, etc. - is a performance enhancement you would need to look up individually. All can be used without weaving, but that would require changing your entities to implement EclipseLink interfaces and methods.
Lazy loading delays fetching a relationship until your application accesses it. Without weaving, getEmployee() in your entity will just return the employee reference attribute as-is - the employee must already have been fetched, or null will be returned incorrectly. With weaving, code can be added to the entity so that it goes to the database to fetch the employee on demand.
Fetch groups are a similar concept applied to basic mappings instead of relationships, while change tracking is more advanced and allows EclipseLink to be notified when you make a change to the entity, rather than having to compare the entity against a prebuilt backup copy on commit. Each has independent references within the EclipseLink documentation.
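To make the lazy-loading case concrete: weaving is enabled per persistence unit (e.g. via the eclipselink.weaving property, or static weaving at build time), and a minimal entity sketch looks like this (Employee/Address are hypothetical entities, not from the question):

    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;

    @Entity
    public class Employee {
        @Id
        private long id;

        // With weaving on, EclipseLink instruments this class so the address row
        // is fetched on first access; without weaving, LAZY on a *ToOne mapping
        // cannot be honoured without the entity implementing provider interfaces.
        @ManyToOne(fetch = FetchType.LAZY)
        private Address address;

        public Address getAddress() {
            return address; // weaving injects the on-demand database fetch here
        }
    }

    @Entity
    class Address {
        @Id
        long id;
    }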

NHibernate lazy loading uses implicit transactions

This seems to be a pretty common problem: I load an NHibernate object that has a lazily loaded collection.
At some later point, I access the collection to do something.
I still have the NHibernate session open (as it's managed per view, or similar), so the access does actually work, but the transaction is closed, so in NHProf I get 'use of implicit transactions is discouraged'.
I understand this message and since I'm using a unit of work implementation, I can fix it simply by creating a new transaction and wrapping the call to the lazy loaded collection within it.
My problem is that this doesn't feel right...
I have this great NHibernate framework that gives me nice lazy loading but I can't use it without wrapping every property access in a transaction.
I've googled this a lot, read plenty of blog posts, questions on SO, etc, but can't seem to find a complete solution.
This is what I've considered:
Turn off lazy loading. I think this is silly; it's like getting a full-on sports car and then only ever driving it in eco mode. Eager loading everything would hurt performance, and if I just had ids instead of references, why bother with NHibernate at all?
Keep the transaction open longer. Transactions should not be long-lived, and keeping one open as long as a view is open would just be asking for trouble.
Wrap every lazy-loaded property access in a transaction (a sketch of this appears after the answer below). It works, but it is bloated and error-prone: if I forget to wrap an accessor, it will still work fine, and only NHProf will tell me about the problem.
Always load all the data for the properties I might need when I load the initial object. Again, this is error-prone, both with loading data you don't need (because the later call that accessed it has since been removed) and with failing to load data that you do.
So is there a better way?
Any help/thoughts appreciated.
I had the same feeling when I first encountered this warning in NHProf. In web applications, I think the most popular approach is to keep a transaction (and unit of work) open for the whole duration of the request. For desktop applications, managing transactions (as well as sessions) can be painful. You can use automatic transaction management frameworks (e.g. Castle) and mark service methods that should run within a transaction with attributes. With this approach you can wrap multiple operations into a single transaction depending on your requirements. I have also used the session-per-view approach, with one open session per view and manual transaction management (in that case I just ignored the profiler warnings about implicit transactions).
As for your considerations: I strongly recommend against 2) and 3). 1) and 4) are points to consider. But the general advice is: think, then try different approaches and find the solution that suits your particular situation best.
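For what option 3 looks like in practice, here is a minimal sketch in Java/Hibernate terms (NHibernate's API closely mirrors Hibernate's; the Order entity and all names are illustrative, not from the question):

    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class LazyAccess {
        // Hypothetical mapped entity: Order has a lazily loaded "lines" collection.
        public static class Order {
            private List<String> lines;
            public List<String> getLines() { return lines; }
        }

        // Touch the lazy collection inside an explicit transaction so the SELECT
        // does not run on an implicit transaction (what the profiler warns about).
        public static int countLines(SessionFactory factory, long orderId) {
            Session session = factory.getCurrentSession();
            Transaction tx = session.beginTransaction();
            try {
                Order order = session.get(Order.class, orderId);
                int count = order.getLines().size(); // triggers the lazy load
                tx.commit();
                return count;
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            }
        }
    }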

Is there any reason I shouldn't cache in nHibernate?

I've just discovered the joy of Cache.ReadWrite() in Fluent NHibernate, and have been analyzing the results with NHProf extensively.
It seems quite useful, but that ease seems a bit deceptive. Is there any particular reason I wouldn't want to cache a very frequently used object from a query? I mean, I have to presume I should not just go around decorating every single mapping with a Cache property... or should I?
As usual, it depends :)
If something has potential to be updated by background processes that don't use the second level cache, or changed directly in the database, caching will cause problems.
Entities that are infrequently accessed may not be good candidates for second level caching either, as they will just take up space.
Also, you may see some weirdness if you have collections mapped as Inverse - the changes will not be picked up by the second level cache correctly and you'll need to manually evict the collection.
As sJhonny points out, if you have a web-farm scenario (or anywhere your app runs on several servers), you'll need to use a distributed cache (like memcached) instead of the built-in ASP.NET cache.
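For comparison, the Java/Hibernate analogue of Fluent NHibernate's Cache.ReadWrite() is an annotation per entity, which makes the "decorate selectively, not everywhere" advice easy to follow (the Product entity below is illustrative):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Read-write second-level caching opted into for this one entity only,
    // rather than sprinkled across every mapping in the model.
    @Entity
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Product {
        @Id
        private long id;
        private String name;
    }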

Object serialization practical uses?

How many of the software projects you have worked on used object serialization? I personally have never come across a scenario where object serialization was used. One use case I can think of is server software storing objects to disk to save memory. Are there other types of software where object serialization is essential or preferred over a database?
I've used object serialization in a lot of my projects. Sometimes we use it to store computer-specific settings locally. I have also used XML serialization to simplify interaction and generation of XML documents. It is also very beneficial in communication protocols. Serialize on one end and re-inflate on the other end.
Well, converting objects to XML or JSON is a form of serialization that is quite common on the web. I've also worked on a project where objects were created and serialized to a binary file in one application and then imported into another custom application (though that's fragile, since it uses C# and serialization has broken between versions of the .NET Framework in the past). Also, application settings that have a complex structure may be worth serializing. I also think remoting APIs use serialization to communicate. Basically, serialization in general is simply a way to store the state of your objects, and this has many different uses.
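The XML/JSON case in Java terms - a minimal sketch using the Jackson library (the choice of library and the Settings class are my assumptions; the answer doesn't name either):

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JsonExample {
        // A plain data holder; Jackson picks up public fields by default.
        public static class Settings {
            public String theme = "dark";
            public int fontSize = 12;
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            String json = mapper.writeValueAsString(new Settings());   // serialize
            Settings back = mapper.readValue(json, Settings.class);    // deserialize
            System.out.println(json + " -> fontSize=" + back.fontSize);
        }
    }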
Here are a few uses I can think of:
Send an object across the network; the most common example is serializing objects across a cluster.
Serialize an object for (a sort of) caching, i.e. save the state in a file and read it back later.
Serialize passive/huge data to a file to minimize memory consumption, and read it back whenever required.
I'm using serialization to pass objects across a TCP socket. You put XmlSerializers on either side, and they parse your data into readily available objects. If you do a little groundwork, you can get to the point where you're basically passing objects back and forth, which makes socket communication extremely easy, reducing it to nothing more than socket.Send(myObject);.
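A Java analogue of that pattern (the answer uses C#'s XmlSerializer; this sketch uses java.io object streams instead - an assumption, not the poster's code). The message type must implement Serializable on both ends; in a long-lived connection you would create the streams once and reuse them rather than per call.

    import java.io.*;
    import java.net.Socket;

    public class ObjectChannel {
        // Serialize the whole object graph onto the wire.
        public static void send(Socket socket, Serializable message) throws IOException {
            ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
            out.writeObject(message);
            out.flush();
        }

        // Re-inflate the object on the other end.
        public static Object receive(Socket socket) throws IOException, ClassNotFoundException {
            ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
            return in.readObject();
        }
    }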
Interprocess communication is a biggie.
You can combine a DB and serialization, e.g. when you have to store an object with a lot of attributes (often dynamic, i.e. one object's attribute set differs from another's) in a relational DB and you don't want to create a new column for each attribute.
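A minimal sketch of that DB-plus-serialization idea (the class and the BLOB column are hypothetical): the dynamic attributes go into one binary column instead of one column per attribute.

    import java.io.*;
    import java.util.HashMap;
    import java.util.Map;

    public class AttributeBlob {
        // Pack the dynamic attributes for storage in e.g. an ATTRIBUTES BLOB column.
        public static byte[] toBytes(Map<String, Object> attributes) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
                out.writeObject(new HashMap<>(attributes)); // HashMap is Serializable
            }
            return buffer.toByteArray();
        }

        // Unpack the column back into a map when the row is read.
        @SuppressWarnings("unchecked")
        public static Map<String, Object> fromBytes(byte[] blob) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(blob))) {
                return (Map<String, Object>) in.readObject();
            }
        }
    }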
We started out with a system that serialized all of the thousands of in-memory objects to disk every 15 minutes or so. When that started taking too long, we switched to a mixed mode, saving the objects into a relational DB and a pickle file (this was a Python system, by the way). Eventually the majority of the data was stored in the relational database. Interestingly, the system was written in such a way that the application code couldn't care less what was going on down there. It was all done using XP and thousands of automated tests.
Document based applications such as word processors and vector graphics editors will often serialize the document model to disk when the user invokes the Save command. Serialization is often preferred over complex databases in these apps.
Using serialization saves you time whenever you want to implement import/export functionality.
Every time you need to export your system's data, create backups, or store some kind of settings, you can use serialization instead and just save the state of the objects that represent the actual config, data, or whatever else.
Only when you need a specific format for the exported/imported data does it make sense to build a custom parser and exporter/importer.
Serialization is also change-proof in this sense: whenever you change the shape of an object involved in the exchange, it remains exportable automatically, and you don't have to change the logic behind your export/import code.
We used it for backup & update functionality. Basically, the serialized Hibernate objects were backed up, then the DB schema was altered by the update, and we delivered a helper class that "converted" the old objects to the new DB schema. This way we had a pretty solid update mechanism that wouldn't break easily and did an automatic backup at the same time.
I've used XML serialization heavily on one project. The technique was used to persist to the database data structures that had no common shape, so the data couldn't be stored in regular columns directly. I also used serialization to save application settings that could be changed at runtime.