I come from a web development background and haven't done anything significant in Java in quite some time.
I'm doing a small project, most of which involves some models with relationships and straightforward CRUD operations with those objects.
JPA/EclipseLink seems to suit the problem, but this is the kind of app that has File->Open and File->Save features, i.e. the data will be stored in files by the user, rather than persisting in the database between sessions.
The last time I worked on a project like this, I stored the objects in ArrayLists, but having worked with MVC frameworks since, that seems a bit primitive. On the other hand, using JPA, opening a file would require loading a whole bunch of objects into the database, just for the convenience of not having to write code to manage the objects.
What's the typical approach for managing model data with Java SE desktop applications?
JPA was specifically built with databases in mind. This means that it typically operates on a big datastore with objects belonging to many different users.
In a file-based scenario, files are often not that big, and all objects in a file belong to the same user and the same document. In that case, I'd say that for a binary format the old Java serialization still works fine for temporary files.
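A minimal sketch of that approach, assuming a hypothetical Document class as the model root (every object it references must also implement Serializable):

    import java.io.*;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical model root; any object graph it references must also be Serializable.
    class Document implements Serializable {
        private static final long serialVersionUID = 1L;
        List<String> items = new ArrayList<>();
    }

    public class TempFileSerialization {
        public static void main(String[] args) throws IOException, ClassNotFoundException {
            Document doc = new Document();
            doc.items.add("example");

            File tmp = File.createTempFile("mydoc", ".bin");

            // Write the whole object graph to the temporary file.
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(tmp))) {
                out.writeObject(doc);
            }

            // Read it back later in the session.
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(tmp))) {
                Document restored = (Document) in.readObject();
                System.out.println(restored.items);
            }
        }
    }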
For longer-term or interchangeable formats, XML is better suited. Using JAXB (included in the standard Java library) you can marshal and unmarshal Java objects to and from XML using an annotation-based approach that on the surface resembles JPA. In fact, I've worked with model objects that have both JPA and JAXB annotations, so they can be stored in a database as well as in an XML file.
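As a rough illustration (the Customer class and its fields are made up for the example), such a dual-annotated class plus a File -> Save style marshalling call might look like this:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.File;

    @Entity                 // JPA: maps to a database table
    @XmlRootElement         // JAXB: maps to an XML element
    public class Customer {
        @Id
        private Long id;
        private String name;

        // JAXB uses the getters/setters by default; a no-arg constructor is required.
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public static void main(String[] args) throws Exception {
            Customer c = new Customer();
            c.setId(1L);
            c.setName("Alice");

            // Marshal the object to an XML file chosen by the user (File -> Save).
            JAXBContext ctx = JAXBContext.newInstance(Customer.class);
            Marshaller m = ctx.createMarshaller();
            m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
            m.marshal(c, new File("customer.xml"));
        }
    }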
If your desktop app, however, uses files that represent potentially huge datasets for which you need paging and querying, then using JPA might still be the better option. There are various small embedded databases available for Java, although I don't know how simple it is to point a data source at a user-selected file. Normally a persistence unit in Java is mapped to a fixed data source, and you can't yet create persistence units on the fly.
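That said, one possible workaround (a sketch only, assuming a JPA 2.0 provider such as EclipseLink, an embedded H2 database, and a persistence unit named "documents" declared in persistence.xml) is to override the JDBC URL when creating the EntityManagerFactory, so each opened file gets its own embedded database:

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;
    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    public class FileBackedPersistence {
        // Creates an EntityManagerFactory whose embedded H2 database lives at the user-selected file path.
        public static EntityManagerFactory openFor(File userFile) {
            Map<String, String> props = new HashMap<>();
            // Standard JPA 2.0 property overrides; the persistence unit name and H2 URL are assumptions.
            props.put("javax.persistence.jdbc.driver", "org.h2.Driver");
            props.put("javax.persistence.jdbc.url", "jdbc:h2:" + userFile.getAbsolutePath());
            return Persistence.createEntityManagerFactory("documents", props);
        }
    }

The factory would then be created in the File -> Open handler and closed on File -> Close, which keeps the "one file, one datastore" feel of a desktop app.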
Yet another option would be to use JDO, which is a mapping technology like JPA but not a pure ORM. It's much more independent of the backend persistence technology being used and can indeed map to files as well.
Sorry that this is not a definitive answer, more a list of things to take into account, but I hope it's helpful in some way.
Hello, and thank you for reading.
I am implementing a WCF Service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema, and working my way back to code. There are a number of large Schema documents (which import yet more Schema documents) related to my implementation, provided by this specification.
I am able to generate code using xsd.exe, by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach.
there are literally hundreds of classes - the code file is half a megabyte in size
duplicate classes (e.g. Type and Type1, which both represent the same type)
there are classes declared as inheriting from a base class, but that base class is not generated/defined
I understand that there are limitations to the types of Schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold:
Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success with generating data contracts from OAGIS or HR-XML schema?
Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML?
Thank you very much!
-Sasha Borodin
DFWHC.org
Sasha, if you are going to use code generation, you likely should never start with the modular schemas. When you point a code generator at the modular schemas, you'll generate a class for all the common components in the HR-XML library and a good bit of the common components in OAGIS. You don't want this. HR-XML is distributed with standalone schemas, which are a better starting point. An even better starting point would be to create a flattened package XSD containing only the types brought in by the WSDL. If you use a couple of standalone schemas, you are going to have at least some duplication among your generated code.
Well, you could try and do something like this:
convert your XSD to C# code separately, using something like the xsd.exe tool from Microsoft, or something like Xsd2Code as a Visual Studio Plugin.
(Screenshot: Xsd2Code in Visual Studio)
once you have your C# classes, weed out any inconsistencies, duplications, and so forth
package everything up into a separate class library assembly
now, when generating your WCF service from the WSDL, either using Add Service Reference in Visual Studio or the svcutil.exe tool, reference that assembly with all the data classes. That way, WCF should skip re-creating the whole set of classes and use whatever is available in that data assembly
With this, you might be able to get this mess under control.
We run multiple websites which use the same rich functional backend running as a library. The backend is comprised of multiple components with a lot of objects shared between them. Now, we need to separate a stateless rule execution component into a different container for security reasons. It would be great if I could have access to all the backend objects seamlessly in the rules component (rather than defining a new interface and objects/adapters).
I would like to use an RPC mechanism that will seamlessly support passing our Java POJOs (some of them are Hibernate beans) over the wire. Web service stacks like JAXB, Axis, etc. need quite a bit of boilerplate and configuration for each object, whereas those using Java serialization seem straightforward, but I am concerned about backward/forward compatibility issues.
We are using XStream for serializing our objects into a persistence store and have been happy so far. But none of the popular RPC/web service frameworks seem to use XStream for serialization. Is it OK to use XStream and send my objects over HTTP using my own custom implementation? Or will Java serialization just work? Or are there better alternatives?
Thanks in advance for your advice.
The good thing about standard Java serialization is that it produces a binary stream, which is quite a bit more space- and bandwidth-efficient than any of the XML serialization mechanisms. But as you wrote, XML can be more backward/forward-compatibility friendly, and it's easier to parse and modify by hand and/or by scripts if the need arises. It's a trade-off; if you need long-term storage, it's advisable to avoid plain serialization.
I'm a happy XStream user. Zero problems so far.
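For what it's worth, a minimal sketch of what a custom XStream-over-HTTP approach could look like (the Invoice class and endpoint URL are invented for the example; error handling is omitted):

    import com.thoughtworks.xstream.XStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class XStreamHttpClient {
        // Hypothetical POJO to send across the wire.
        public static class Invoice {
            String customer;
            double amount;
        }

        public static void main(String[] args) throws Exception {
            Invoice invoice = new Invoice();
            invoice.customer = "ACME";
            invoice.amount = 42.0;

            // Serialize the object graph to XML.
            XStream xstream = new XStream();
            String xml = xstream.toXML(invoice);

            // POST the XML to the remote component (the URL is an assumption).
            HttpURLConnection conn = (HttpURLConnection) new URL("http://example.com/rules").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/xml");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(xml.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Server responded: " + conn.getResponseCode());

            // On the receiving side, the reverse would be:
            // Invoice i = (Invoice) xstream.fromXML(xml);
        }
    }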
Consider a client application that should store its data on a remote server. We do not want it to access this data "on the fly", but rather want it to have a copy of the data in a local database, so we do not need a connection to the remote server to use the application. Eventually we want to sync the local database with the remote server. A good example of what I am talking about is the Evernote service. This type of application is very common, for instance, in mobile development, where the user is not guaranteed to have a permanent Internet connection, bandwidth is limited, and traffic can be expensive.
ORM (object-relational mapping) solutions generally allow the developer to define some intermediate "model" for his business-logic data, then work with it as an object hierarchy in his programming language, with the ability to store it in a relational database.
Why not add to an ORM system a feature that allows automatic synchronization of two databases (client and server) that share the same data model? This would simplify developing applications of the type I described above. Are there any systems for any platform or language that have this or a similar feature implemented?
These links may provide some useful information:
http://gwtsandbox.com/?q=node/34
http://www.urielkatz.com/archive/detail/google-gears-orm-v01/
http://www.urielkatz.com/archive/detail/google-gears-orm/
AFAIK, there are no such ORM tools.
It was one of the original goals of our team (I'm one of the DataObjects.Net developers), but this feature is still not implemented. We'll likely start working on database sync this spring, but since almost nothing is done yet, there is no exact deadline.
There is at least one Open Source ORM matching your needs, but it is a Delphi ORM.
It is called mORMot, and it uses JSON in a stateless/RESTful architecture to communicate over GDI messages, named pipes, or HTTP/1.1. It is able to connect to any database engine, and embeds an optimized SQLite3 engine.
It is a true Client-Server ORM. That is, ORM is not used only for data persistence of objects (like in other implementations), but as part of a global n-Tier, Service Oriented Architecture. This really makes the difference.
I have experienced two ways to save my POJOs into repository nodes for storage in the Jackrabbit JCR:
writing my own layer, and
using Apache Graffito
Writing my own code has proven time-consuming and labor-intensive (I had to write and run a lot of ugly automated tests), though quite flexible.
Using Graffito has been a disappointment because it seems to be a "dead" project, stuck in 2006.
What are some better alternatives?
Another alternative is to completely skip an OCM framework and simply use javax.jcr.Node as a very flexible DAO itself. The fundamental reason OCM frameworks exist is that with an RDBMS you need a mapping from objects to the relational model. With JCR, which is already very object-oriented (node ~= object), this underlying reason is gone. What is left is that with DAOs you can restrict what your programmers can access in their code (including with the help of autocompletion). But this approach does not really leverage the JCR concept, which means schema-free and flexible programming. Using the JCR API directly in your code is the best way to follow that concept.
Imagine you want to add a new property to an existing node/object later in the life of your application - with an OCM framework you have to modify it as well and make sure it still works properly. With direct access to nodes, it is simply a single point of change. I know, this is a good way to get problems with typos in, e.g., property names; but this fear is not really backed by reality, since in most cases you will very quickly notice typos or non-matching names when you test your application. A good solution is to use string constants for the common node or property names, even as part of your APIs if you expose the JCR API across them. This still gives you the flexibility to quickly add new properties without having to adapt OCM layers.
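A hedged sketch of that direct-access style (repository bootstrapping omitted; the credentials, node names, and property names are placeholders):

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    public class DirectNodeAccess {
        // String constants for the property names keep potential typos in one place.
        public static final String PROP_TITLE = "title";
        public static final String PROP_AUTHOR = "author";

        // 'repository' would come from your Jackrabbit setup (JNDI, RepositoryFactory, etc.);
        // the admin/admin credentials are placeholders.
        public static void storeArticle(Repository repository) throws Exception {
            Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                // Use the node itself as a flexible DAO: create it and set properties directly.
                Node articles = session.getRootNode().addNode("articles");
                Node article = articles.addNode("first-article");
                article.setProperty(PROP_TITLE, "Hello JCR");
                article.setProperty(PROP_AUTHOR, "jdoe");

                // Adding a new property later is a one-line change, no OCM layer to update.
                article.setProperty("tags", new String[] {"jcr", "ocm-free"});

                session.save();
            } finally {
                session.logout();
            }
        }
    }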
For having some constraints on what is allowed or what is mandatory (i.e. a "semi-schema") you can use node types and mixins (since JCR 2.0 you can also change the node type for existing content): this way you can handle it completely at the repository level and don't have to care about typing and constraints inside your application code - apart from catching the exceptions ;-)
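For example, a sketch under the assumption that you have Jackrabbit's jackrabbit-jcr-commons library on the classpath; the CND snippet, namespace, and mixin choice are invented for illustration:

    import java.io.StringReader;
    import javax.jcr.Node;
    import javax.jcr.Session;
    import org.apache.jackrabbit.commons.cnd.CndImporter;

    public class SemiSchema {
        public static void applyConstraints(Session session, Node article) throws Exception {
            // Register a custom node type from a CND definition (the type itself is hypothetical).
            String cnd =
                "<myapp = 'http://example.com/myapp/1.0'>\n" +
                "[myapp:article] > nt:unstructured\n" +
                "  - myapp:title (string) mandatory\n";
            CndImporter.registerNodeTypes(new StringReader(cnd), session);

            // Mixins can be added to existing content to opt into extra behaviour.
            article.addMixin("mix:versionable");
            session.save();
        }
    }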
But, of course, this choice depends on your requirements and personal preferences.
You might want to have a look at Jackrabbit OCM, which is alive and kicking. Of course, another way is to manually serialize/deserialize the POJOs. For that there are many different options. The question is whether you need a fixed schema to query the objects in JCR. If you just want to serialize into XML, then XStream is a very painless way to do so. If you need a more fixed schema, there is also Betwixt from Apache Commons.
It depends on your needs. When you directly use javax.jcr.Node, your code is heavily coupled to the underlying mechanism. In medium-sized and even some small projects, this is not a good idea. Obviously the question will be how to go from the Node to your own domain model. The problem is quite similar to going from a JDBC ResultSet to your own domain model. Mind you, the problem is similar from a technical point of view; from a functional point of view, there are huge differences between using JDBC and JCR.
Another deciding factor is whether you can impose a structure on your JCR content or not. Some application domains can (but still match better with JCR than JDBC); in other domains the content may be highly unstructured in nature. In such cases OCM is clearly overkill. I'd still advise writing your own wrapper layer around the javax.jcr.* classes.
There's also https://github.com/ilikeorangutans/omf, a very flexible object to JCR mapper. Unfortunately it doesn't have write support yet. However we're successfully using this framework in a large CMS installation.
There is also the JCROM project at http://code.google.com/p/jcrom/. That project went dormant for a couple of years, but there have been a few new releases as of summer 2013.
What do you suggest for Data Access layer? Using ORMs like Entity Framework and Hibernate OR Code Generators like Subsonic, .netTiers, T4, etc.?
For me, this is a no-brainer, you generate the code.
I'm going to go slightly off topic here because there's a bigger underlying fallacy at play. The fallacy is that these ORM frameworks solve the object/relational impedance mismatch. This claim is a barefaced lie.
I find the best way to resolve the object/relational impedance mismatch is to either use OOP exclusively and use an object database or use the idioms of the relational database exclusively and ignore OOP.
The abstraction "everything is a table" is to me, much more powerful than the abstraction "everything is a class." It takes less code, less intellectual effort and leads to faster code when you code to the database rather than to an object model.
To me this seems obvious. If your application is data driven then surely your code should be data driven too? Yet to say this is hugely controversial.
The central problem here is that OOP becomes a really leaky abstraction when used in conjunction with a database. Code that looks perfectly sensible when written to the idioms of OOP looks completely insane when you see the traffic that code generates at the database. When that messiness becomes a performance problem, OOP is the first casualty.
There is really no way to resolve this. Databases work with sets of data. OOP focuses on instances of classes. Trying to marry the two is always going to end in divorce.
So to answer your question, I believe you should generate your classes and try and make them map the underlying database structure as closely as possible.
Perhaps controversially, I've always felt that using code generators for the ADO.NET plumbing is fundamentally solving the wrong problem.
At some point, hopefully not too long after learning about Connection Strings, SqlCommands, DataAdapters, and all that, we notice that:
Such code is ugly
It is very boring to write
It's very easy to miss something if you're doing it by hand
It has to be repeated every time you want to access the database
So, is the problem to solve "how to do the same thing lots of times, fast"?
I say no.
Using code generators to make this process quick still means that you have a ton of code, all the same, all over your business classes (or your data access layer, if you separate that from your business logic).
And then, if you want to do something generically (like track stored procedure usage, for instance), you end up having to customise your code generator if it doesn't already have the feature you want. And even if it does, you still have to regenerate everything all the time.
I like to do things once, not many times, no matter how fast I can do them.
So I rolled my own Data Access class that knows how to add parameters, set up and close connections, manage transactions, and other cool stuff. It only had to be written once, and calling its methods from a Business object that needs some database stuff done consists of one line of code.
When I needed to make the application support multithreaded database accesses, it required a change to the Data Access class only, and all the business classes just do what they already did.
There is no right answer; it all depends on your project. As Simon points out, if your application is all data-driven, then depending on the size and complexity of the domain it might make sense to use a non-OOP paradigm. I had a lot of success building a system using a Transaction Script pattern, which passed XML messages around the system.
However, this system started to break down after five or six years as the application grew in size and complexity (5 or 6 web sites, several web services, tons of COM+ components, legacy and .NET code, 8+ databases with 800+ tables and 4,000+ procedures). No one knew what anything was, and duplication was running rampant.
There are ways other than OOP to alleviate the maintenance burden; however, if you have a very complex domain, then having a rich domain model is ideal IMHO, as it allows the business rules to be expressed in nicely encapsulated components.
To answer your question: avoid code generators if you can. Code generators are a recipe for disaster, but if you do go with code generation, do not modify the generated code. Also be sure to have a good process in place that makes it easy for developers to get the newly generated code.
I recommend one of the following: use an ORM, or hand-roll a lightweight DAL. I am currently transitioning a project from my hand-rolled DAL over to NHibernate and am having a lot of success; however, I like having the option of using either approach. Furthermore, if you properly separate your concerns (data access from business layer from presentation), you can have a single service layer that talks to a DAO (Data Access Object) which for one object is an ORM but for another is hand-rolled. I like this flexibility, as it allows me to apply the best tool for the job.
I like NHibernate over a hand-rolled DAL because, while my DAL does abstract away most of the ADO.NET code, you still have to write the code that maps a data reader to an object, or takes an object and creates the parameters.
I've always preferred to go the code generator route, especially in C# where you can make use of extended classes to add functionality to the basic data objects.
Hate to say this, but it depends. If you find an ORM tool that fits your needs, go for it. We wrote our own system in small steps while developing the application. We are using C++ and there are not that many tools out there anyway. Ours ended up being an XML description of the database, from which the SQL generation script, the basic object layer, and the metadata were generated.
Do your homework and evaluate these tools, and you will find a good fit for your needs.