I have a Jena ontology model (OntModel) which I'm modifying programmatically. This model was initially created using the default ModelFactory method to create an ontology model (with no parameters). The problem was that as the program ran and the model was changed, the default Jena reasoner would run (and run and run and run). The process was entirely too slow for what I need and would run out of memory on large data sets.
I changed the program to use a different ontology model factory method to create a model with no reasoner. This ran extremely fast and exhibited none of the memory problems I saw earlier (even for very large data sets). The problem with this approach is that I can only access the data using its direct class type (I cannot access objects via their parent class).
For example, imagine I had two class resources, "flower" and "seed". These inherit from a common parent, "plant material". My program takes all the "seeds" and runs a method called "grow" that transforms each "seed" object into a "flower" object. The "grow" method runs too slowly and runs out of memory when using a Reasoner (even the micro Reasoner). If I turn off the Reasoner, then I can't access all the "flowers" and "seeds" using the "plant material" class.
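Concretely, the lookup that breaks without a reasoner is the instance listing on the parent class. A minimal sketch of what I mean (the model variable, the NS constant, and the class name are just placeholders for my real ones):
// With a reasoning spec this lists the flowers and seeds as well;
// with a plain in-memory spec (no reasoner) it only lists individuals
// whose rdf:type is "plant material" itself.
OntClass plantMaterial = model.getOntClass(NS + "PlantMaterial");
plantMaterial.listInstances().forEachRemaining(System.out::println);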
Is there a preferred way (a happy medium) of doing this... allowing me to access objects using their superclass while also being fast and not a memory hog?
I've been looking for a way to "turn off the reasoner" while I run my "grow" method and then turn it back on once the method completes. Is this possible somehow?
I got some help and suggestions, and this is how I solved the problem.
Basically, I got hold of a second model without a reasoner, batched all my changes against that basic model, and then rebound the full model (the one with the reasoner) to pick up the updates.
Here's some pseudo code. It doesn't exactly match my "real" scenario, but you get the idea.
// Create a model with a reasoner and load the full model from owl files or
// whatever
OntModel fullModel = ModelFactory.createOntologyModel();
fullModel.read(...);
// Create a model without a reasoner over the same base (asserted) statements,
// so that any changes made to it are also visible to the full model
OntModel basicModel =
        ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, fullModel.getBaseModel());
// batch modifications to the basic model programmatically
// (**** RUNS REALLY QUICK ****)
// rebind the full model so the reasoner picks up the batched changes
fullModel.rebind();
// continue on....
I'm developing a kind of social network with neo4j, and I wanted to make my Node object a bit more specific to my own needs. Is it considered good practice to wrap a neo4j Node object or to inherit from it?
My problem with the wrapping approach arises when indexing the node objects with the built-in Lucene engine. For example, what benefit do I gain if I wrap my Node object with a "Profile" class (with methods such as "addFriend", "setFirstName", etc.), but on the other hand, whenever I run a query against my index I get back raw Node objects and not my wrapped objects? I could make some dirty solution for this case, by saving a reference to the wrapped object inside my node properties, but it seems very strange to me to do it that way.
What would you recommend in such a case, in order to get clean and well designed code?
Thanks.
I have found that wrapping a Node does not lead to very maintainable code/design. As you mentioned, one thing you need to take care of is not returning a Node but translating it to your domain object.
If your object has mostly getX methods, then you can just execute Cypher queries, compose your domain object(s) and return those. You don't even need to wrap the Node in this case- all you need is some property that you can use to look up the Node.
If you have setX methods, then you can update the Node via Cypher statements, either via a save that updates all properties or on each setX (not great, as you'd be updating too often and the setX method now implies persistence). Neither of the two approaches requires the Node to be wrapped.
I tried in earlier projects to wrap the Node but found that it leads to much more trouble and a generally smelly design. Now I work with pure domain POJOs and keep Neo4j code in the persistence layer only, and this works much better for me. You haven't mentioned which language you're using - if Java, then I believe Spring Data can take care of a lot of boilerplate code.
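To make this concrete, here is a rough sketch of the "run Cypher, compose domain objects" approach using the Neo4j Java driver; the Person class, the FRIEND_OF relationship and the id property are made up for the example:
import java.util.List;
import org.neo4j.driver.Driver;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

// Plain domain object - no Neo4j Node wrapped inside it
class Person {
    final String id;
    final String firstName;
    Person(String id, String firstName) { this.id = id; this.firstName = firstName; }
}

// Persistence-layer class; the only place that knows about Neo4j
class PersonRepository {
    private final Driver driver;
    PersonRepository(Driver driver) { this.driver = driver; }

    // Look the node up by a property, run Cypher, and compose domain objects
    // from the result - no raw Nodes escape this layer
    List<Person> findFriends(String personId) {
        try (Session session = driver.session()) {
            return session.run(
                    "MATCH (p:Person {id: $id})-[:FRIEND_OF]->(f:Person) "
                    + "RETURN f.id AS id, f.firstName AS firstName",
                    Values.parameters("id", personId))
                .list(r -> new Person(r.get("id").asString(), r.get("firstName").asString()));
        }
    }
}
The rest of the application only ever sees Person objects; if the query changes, only this persistence class is touched.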
Put your search code INTO the class it belongs to.
If you need to get, say, something like getFriends from a Post class, you create a fromPosts method in the Person class and a getFriends method in Post.
From Post, you call the query on the Person class, execute it, and return an Array / List of the nodes mapped into Person objects.
So your getFriends method in the Post class will be something like:
Person.fromPosts(self).results.map { |node| Person.new(node) }
It's simple to do: just map the result with Person.new (or new Person, depending on which language you're using) and pass the node to Person. This means that you must have a constructor (or new method) that populates the object from a node.
Spring Data Neo4j is the definitive solution to your need: it maps annotated entity classes to Neo4j with advanced mapping functionality and provides access to nodes and relationships at different levels of abstraction.
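For a rough idea of what this looks like (annotation names and APIs differ between Spring Data Neo4j / Neo4j-OGM versions; the Profile entity below is only illustrative):
import java.util.HashSet;
import java.util.Set;
import org.neo4j.ogm.annotation.GeneratedValue;
import org.neo4j.ogm.annotation.Id;
import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Relationship;
import org.springframework.data.neo4j.repository.Neo4jRepository;

// Mapped as a node; the FRIEND_OF field becomes graph relationships
@NodeEntity
public class Profile {
    @Id @GeneratedValue
    private Long id;

    private String firstName;

    @Relationship(type = "FRIEND_OF")
    private Set<Profile> friends = new HashSet<>();

    public void addFriend(Profile friend) { friends.add(friend); }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}

// In a separate file: derived finders come for free from the repository abstraction
public interface ProfileRepository extends Neo4jRepository<Profile, Long> {
    Profile findByFirstName(String firstName);
}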
I am working on an iPad application that requires me to store data locally if the user doesn't have internet access, and later on sync with the back-end database.
For local storage, I am planning to use Core Data with SQLite.
I am using Core Data for the first time, and it seems to retrieve and store entities in the form of dictionaries.
So should I be creating model classes at all?
What is a good design for such an application?
I have a DataEngine class whose responsibility is to store an entity on the server or in the local DB based on connectivity.
Now I am a little confused: should I create model classes and ask each one to save itself using NSManagedObjectContext with a dictionary representation, or just save the data directly instead of creating a model object and asking it to do it?
Should I be using a model class for each entity that will serve as the interface between the JSON representation that comes from / goes to the server and the representation I get from the managedObjectContext?
Or shall I completely rely on the entities and relationships that Core Data creates?
I'll do this backwards: first some things for you to check and then some ideas.
Check my own question.
I'd say that your custom categories are where you can do the interfacing between the JSON representation and your classes.
You should also check out RestKit, which can already do much of what you need.
You're talking about two separate problems as far as I can understand:
Syncing local data to the server based on connectivity;
Using model classes.
Problem 1
I think you should have a class with the common code and each of your model classes should have its own mapping (to map between model and JSON) and saving methods.
Another class, that may be your DataEngine class, takes care of saving the right objects at the right time.
Take a look at RestKit as it helps with the mapping and the saving. I'm not sure about the syncing though.
Problem 2
I think you should have model classes. It helps a lot to work with objects, and you then have a place to put methods for finding different kinds of data.
For this, my question might be useful to you, because you can create a Core Data model with generated class files and update it whenever you want while keeping your custom code.
For example: Let's say I'm grabbing a list of names and saving it to an NSMutableArray. Do I implement the method of actually calling the server to fetch the data in the controller (UIViewController) or the model (Friends object)?
It's a design decision that depends on what you're trying to accomplish. If your model only makes sense in the context of a single service, or if you want your model to provide access to all the data on the server, then build the connection to the server into your data model. This might make sense if you are, for example, building a client for a service like Twitter or Flickr.
On the other hand, if you're just grabbing a file from a server and that's the end of it, it may make sense to do the communication in the controller. Controllers tend to be less reusable and more customized for the particular behavior of the application. Keeping the specifics about where the data comes from out of the model makes the model more reusable. It also makes it easy to test -- you can write test code that just reads a local file and stores the data in the model.
That's a good question. I think the best way is through a controller because it decouples your model from requiring the other model to be present for it to work properly. Although I don't think you violate "proper mvc" by doing it in the model either.
I think you want to put it in the model. What you'll do is interrogate the model for the data and then the model will handle how to populate itself whether it's from an internal data store or an external one (like a server).
One approach is to use the repository pattern. To do this, you create Repository objects in your Model folder and you place all of your database-related methods in them. Your controllers call the repository classes to get the data. This allows you to separate the real model objects from the database-accessing methods.
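A minimal, language-agnostic sketch of that shape (shown here in Java; the pattern translates directly to Objective-C, and the Friend type and method names are invented for the example):
import java.util.List;

// Domain object the controller works with
class Friend {
    final String name;
    Friend(String name) { this.name = name; }
}

// The repository hides where the data actually comes from
interface FriendRepository {
    List<Friend> findAll();
}

// One concrete implementation; a remote/server-backed one could be swapped in later
class LocalFriendRepository implements FriendRepository {
    public List<Friend> findAll() {
        // ... read from the local store here ...
        return List.of(new Friend("Alice"), new Friend("Bob"));
    }
}

// The controller only talks to the repository interface
class FriendsController {
    private final FriendRepository repository;
    FriendsController(FriendRepository repository) { this.repository = repository; }
    List<Friend> loadFriends() { return repository.findAll(); }
}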
I use the MVCS pattern (Model-View-Controller-Store), which I discovered in Aaron Hillegass's book "iOS Programming: The Big Nerd Ranch Guide" (http://www.bignerdranch.com/book/ios_programming_the_big_nerd_ranch_guide_rd_edition_)
The store is specifically designed to fetch the data, whether it comes from a server, a local file, a persisted collection, a database, etc.
It allows you to build applications that evolve easily. For example, you can build your application based on a web service, and the day you want to persist your data, you just have to modify the store, without having to modify a single line of code in your controller.
It's a lot like the Repository Pattern (http://msdn.microsoft.com/en-us/library/ff649690.aspx) (cf BobTurbo's answer)
I'd personally make a DAO, or data helper class. It's very hard to follow strict MVC in Objective-C when things get more complicated. However, putting it in the model or the VC is not wrong either.
For example, I have a window (non-document model) - it has a controller associated with it. Within this window, I have a list and an add button. Clicking the add button brings up another "detail" window / dialog (with an associated controller) that allows the user to enter the detail information, click ok, and then have the item propagated back to the original window's list. Obviously, I would have an underlying model object that holds a collection of these entities (let's call the singular entity an Entity for reference).
Conceivably, I have just one main window, so I would likely have only one collection of entities.
I could stash it in the main window's controller – but then how do I pass it to the detail window? I mean, I probably don't want to be passing this collection around - difficult to read / maintain / multithread.
I could pass a reference to the parent controller and use it to access the collection, but that seems to smell as well.
I could stash it in the appDelegate and then access it as a "global" variable via [[NSApplication sharedApplication] delegate] - that seems a little excessive, considering an app delegate doesn't really have anything to do with the model.
Another global variable style could be an option - I could make the Entity class have a singleton factory for the collection and class methods to access the collection. This seems like a bigger abuse than the appDelegate - especially considering the Entity object and the collection of said entities are two separate concerns.
I could create an EntityCollection class that has a singleton factory method and then object methods for interaction with the collection (or split into a true factory class and collection class for a little bit more OO goodness and easy replacement for test objects).
If I was using the NSDocument model, I guess I could stash it there, but that's not much different than stashing it in the application delegate (although the NSDocument itself does seemingly represent the model in some fashion).
I've spent quite a bit of time lately on the server side, so I haven't had to deal with the client-side much, and when I have, I just brute forced a solution. In the end, there are a billion ways to skin this cat, and it just seems like none of them are terribly clean or pretty. What is the generally accepted Cocoa programmer's way of doing this? Or, better yet, what is the optimum way to do this?
I think your conceptual problem is that you're thinking of the interface as the core of the application and the data model as something you have to find a place to cram somewhere.
This is backwards. The data model is the core of the program and everything else is grafted onto the data model. The model should encapsulate all the logical operations that can be performed on the data. An interface, GUI or otherwise, merely sends messages to the data model requesting certain actions.
Starting with this concept, it's easy to see that having the data model universally accessible is not sloppy design. Since the model contains all the logic for altering the data, you can have an arbitrarily large number of interfaces accessing it without the data becoming muddled or code complicated because the model changes the data only according to its own internal rules.
The best way to accomplish universal access is to create a singleton-producing class and then put the header for that class in the application's prefix header. That way, any object in the app can access the data model.
Edit01:
Let me clarify the important difference between a naked global variable and a globally accessible class encapsulated data model.
Historically, we viewed global variables as bad design because they were just raw variables. Any part of the code could alter them at will. This nakedness led to obvious problems, as you had to continuously guard against some stray fragment of code altering the global and then bringing the app down.
However, in a class based global, the global variable is encapsulated and protected by the logic implemented by the encapsulating class. This encapsulation means that while any stray fragment of code may attempt to alter the global variable inside the class, it can only do so if the encapsulating class permits the alteration. The automatic validation reduces the complexity of the code because all the validation logic resides in one single class instead of being spread out all over the app in any random place that data might be manipulated.
Instead of creating a weak point as in the case of a naked global variable, you create strong and universal validation and management of the data. If you find a problem with the data management, you only have to fix it in one place. Once you have a properly configured data model, the rest of the app becomes ridiculously easy to write.
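As a minimal sketch of the difference (written in Java for illustration; the DataModel and Entity names are invented), a globally accessible model can still guard its own data:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Entity {
    final String name;
    Entity(String name) { this.name = name; }
}

// Globally accessible, but all mutation goes through validating methods,
// so no stray code fragment can corrupt the data directly
final class DataModel {
    private static final DataModel INSTANCE = new DataModel();
    static DataModel shared() { return INSTANCE; }

    private final List<Entity> entities = new ArrayList<>();
    private DataModel() { }

    void add(Entity entity) {
        // the validation logic lives in exactly one place
        if (entity == null || entity.name == null || entity.name.isEmpty()) {
            throw new IllegalArgumentException("entity must have a name");
        }
        entities.add(entity);
    }

    List<Entity> entities() {
        return Collections.unmodifiableList(entities);
    }
}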
My initial reaction would be to use a "modal delegate," a lot like NSAlerts do. You'd create your detail window by passing a reference to a delegate, which the detail window would message when it is done creating the object. The delegate—which would probably be the controller for the main window—could then handle the "done editing" message and add the object to the collection. I'd tend to not want to pass the collection around directly.
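A bare-bones sketch of that wiring, shown in Java for illustration (in Cocoa this would be a delegate protocol and two window controllers; the Entity and class names are stand-ins):
import java.util.ArrayList;
import java.util.List;

class Entity { }

// The "done editing" callback the detail window talks back through
interface DetailDelegate {
    void didFinishEditing(Entity newEntity);
}

class MainWindowController implements DetailDelegate {
    private final List<Entity> entities = new ArrayList<>();

    void addButtonClicked() {
        // Hand the detail window a delegate reference, not the collection itself
        new DetailWindowController(this).show();
    }

    @Override
    public void didFinishEditing(Entity newEntity) {
        entities.add(newEntity);   // only the owner ever touches the collection
    }
}

class DetailWindowController {
    private final DetailDelegate delegate;
    DetailWindowController(DetailDelegate delegate) { this.delegate = delegate; }
    void show() { /* present the dialog ... */ }
    void okClicked(Entity edited) { delegate.didFinishEditing(edited); }
}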
I support the EntityCollection class. If you have a list of objects, that list should be managed outside a specific controller, in my opinion.
I use the singleton method where the class itself manages its own collections, setup and teardown. I find this separates the database/storage functionality from the controllers and keeps things clean. It's nice and easy to just call [Object objects] and have it return a reference to my list of objects.
As my Cocoa skills gradually improve, I'm trying not to abuse MVC as I did early on, when I'd find myself backed into a hole built by my previous assumptions. I don't have anyone here to bounce this off of, so I'm hoping one of you can help...
I have a custom Model class that has numerous and varied properties (NSString, NSDate, NSNumber, etc.). I have a need to serialize the properties for transmission. Occasionally, as this data is being processed for serialization, a question may come up that the user will need to respond to (UIAlertView, etc.).
Without bogging down in too many more specifics where does this code belong?
Part of me says Model because it's about persistence of data - in a way.
Part of me says View because it's another interpretation of the core data (no pun intended) contained within the model. And the user will have to interact with dialogs on occasion as data is processed.
Part of me says Controller because it's managing the transformation of data between model & view.
Is it a combination of all three? If so how would communication be handled between classes as the data is being processed? NSNotifications? Direct method calls?
This may be something that you'd want to use the Visitor pattern for -- http://en.wikipedia.org/wiki/Visitor_pattern -- because you might eventually want to use different sorts of serialization for different things and you can have different visitor classes rather than a lot of special cases in the model code.
Here's a discussion of the Visitor pattern in objective-c/cocoa: http://www.cocoadev.com/index.pl?VisitorPattern
Here's an (old!!!) article from Dr. Dobbs about the visitor pattern in objective-c: http://www.drdobbs.com/184410252
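To give a flavor of the idea, here is a minimal sketch (written in Java for illustration; the property-by-property visiting and all names are invented): each serialization format becomes its own visitor instead of another special case in the model.
import java.util.Date;

// One visitor implementation per serialization format
interface ModelVisitor {
    void visitString(String name, String value);
    void visitDate(String name, Date value);
    void visitNumber(String name, Number value);
}

// The model hands its properties to a visitor without knowing the format
class ModelObject {
    private String title;
    private Date created;
    private Number score;

    void accept(ModelVisitor visitor) {
        visitor.visitString("title", title);
        visitor.visitDate("created", created);
        visitor.visitNumber("score", score);
    }
}

// A concrete visitor that builds a simple key=value wire format
class PlainTextSerializer implements ModelVisitor {
    private final StringBuilder out = new StringBuilder();
    public void visitString(String name, String value) { out.append(name).append('=').append(value).append('\n'); }
    public void visitDate(String name, Date value)     { out.append(name).append('=').append(value.getTime()).append('\n'); }
    public void visitNumber(String name, Number value) { out.append(name).append('=').append(value).append('\n'); }
    String result() { return out.toString(); }
}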
The reason that the problem you're working on doesn't fit neatly into the MVC paradigm is that the serialization you're doing is a lot like a view, except that it renders onto a stream-based surface rather than onto the screen. Sometimes this can be done really smoothly in the model, but sometimes it's more complex, and you need to look at your case to figure out which one it is.
Frequently, the transmission/web service (or whatever) code you're using will have its own handler for this data. For example, ObjectiveResource adds a serialization and deserialization handler, implemented as an extension to NSObject, that enables it to do a lot of this stuff transparently; you might look into that code (particularly the ObjectiveSupport part) if you're trying to do this more generically.
Typically almost all application specific code belongs in the controller. The controller should interact and observe (via notification) the model and update the view as appropriate.
If you are doing model processing such that it is something that might be re-used in another app with the same model, then that processing could be in the model.
Views can be laid out in Interface Builder or created in code and/or be subclassed for custom drawing, but they should not have application logic and would not interact directly with the model.
I would suggest putting the serialising code in the model. If the process fails it can report that to whatever's listening to it (the view / controller) which can then present the UIAlertView, correct the problem and re-submit for another attempt.
I'd say in the model.
The call to serialize the data will be done by the controller. If the data cannot be serialized then the model should return an error which the controller then has to handle.