I've been trying for the last few days to use paper_trail as an undo system, where multiple objects can be rolled back at once. I began with the Railscast and have been trying my best to get something working. I feel like I'm close, but I also feel my implementation could be better, or that there's a different approach I don't know about or that's beyond my skill.
So far I've tried this:
Added a parent_id column to versions. This holds another version's id.
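For reference, the column itself is trivial; a migration along these lines (the index is my addition, to speed up the child-version lookup later):

class AddParentIdToVersions < ActiveRecord::Migration
  def change
    add_column :versions, :parent_id, :integer
    add_index  :versions, :parent_id
  end
end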
Created a method to set this id: after a save, update, or destroy, I pass the parent and child models in, and the child's latest version gets its parent_id set to the parent's latest version's id. It looks like this:
def set_parent_to_version(parent, child)
  # Point the child's newest version at the parent's newest version
  child.versions.last.update_attributes(:parent_id => parent.versions.last.id)
end
Then, whenever I make an update that affects more than the parent model, I call this method and it tags the just-created version of the child model as well.
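For illustration, a hypothetical call site (the employee/item names follow the example further down):

# Both records were just saved, so each has a fresh version at versions.last
employee.update_attributes(params[:employee])
item.save
set_parent_to_version(employee, item)   # tag item's new version with employee's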
In my revert method, I then find the version by params[:id] and all child versions whose parent_id is params[:id], and reify them all. This works well, to an extent.
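A sketch of that revert action, assuming the Railscast-style setup (on newer paper_trail versions the model is PaperTrail::Version rather than Version):

def revert
  version  = Version.find(params[:id])
  children = Version.where(:parent_id => version.id)
  ([version] + children).each do |v|
    reified = v.reify
    if reified
      reified.save!     # roll the record back to its previous state
    else
      v.item.destroy    # the version was a create, so undoing it means destroying
    end
  end
  redirect_to :back, :notice => "Undone!"
end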
The problem is with callbacks. I have a model whose before_destroy callback modifies another model; inside that callback I can't tell whether the destroy was initiated by the user or by a different model, so I can't set the parent_id. In other words: an Employee has many Items, the Employee destroys Items, and the Items modify the Employee in before_destroy. The before_destroy doesn't know why the Item was destroyed or who did it.
I know the paper_trail gem's own documentation says it just doesn't work well with associations, but I feel like this must be possible - somehow looking at the callbacks, maybe? I might be in too deep on this, but for the actions the undo does work on, it's super nice and I don't want to drop it.
ANY ideas appreciated. Thanks!
I'm trying to let my API clients make a POST request that bulk-modifies objects whose IDs the client doesn't have.
I'm thinking of implementing the design below, but I don't feel good about it. Are there any better solutions than this?
POST url/objects/modify?name=foo
This request will modify all objects with the name foo
This can be a tricky thing to do with an API because it doesn't age very well.
By that I mean that over time, you might introduce more criteria for the data stored on resources (e.g., you can only set this field to "archived" if the create_time field is older than 6 months). When that happens, your bulk updates will start to work on only some resources, and now you have to communicate that back to the person calling the API.
For example, for any failure you need to explain that the update worked for some resources (and list them out), failed on others (and list them out), and give the reason for each failure (and remember you might have different failure conditions for different resources).
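For illustration only, a partial-failure response ends up needing to carry something like this (the shape here is entirely hypothetical):

{
  "updated": ["objects/123", "objects/456"],
  "failed": [
    {
      "object": "objects/789",
      "reason": "cannot set archived: create_time is newer than 6 months"
    }
  ]
}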
If you're set on going down this path, the closest thing I can think of is the "criteria-based delete" method shown here: https://google.aip.dev/165.
I need to populate my map with annotations. Each annotation has a corresponding Place resource that is fetched from a remote server. Each Place has an associated Category, which is also fetched from the server as a separate resource.
Let's assume that to populate a given region I need to fetch 100 places, each belonging to one of 20 categories (in reality there are many more).
I use AFNetworking to fetch both kinds of resources. I want to cache both places and categories for offline use, so before the annotations are displayed on the map, I write the fetched resources to the Core Data store.
Each place retrieves its associated category resource on demand, and I need to write both the place to the 'places' table and the category to the 'categories' table.
Because the fetching is done asynchronously, when writing a particular category I can't know whether another place's "thread" is attempting to write the same category to the 'categories' table at the same time.
So the question is: what is the pattern for working with Core Data when it needs to be populated with information retrieved asynchronously? Specifically, how can any given thread that is about to write a category know that another thread is already trying to do so?
UPDATE 1: My current problem is that I'm getting duplicated categories. My guess is that each category write is simply unaware of a parallel write of the same category.
UPDATE 2: The simplest description of my case is the following:
I create a new Category entity with some fields in one thread, and meanwhile another thread creates exactly the same Category entity with the same fields, intended to be the very same Category object as the first thread's.
One thread wins the race to call [managedObjectContext save:&error], but before the record actually appears in the persistent store, the second thread calls save too. The question is: how do I prevent duplicate records in the 'categories' table?
UPDATE 3: I am considering two variants of using managed object contexts: 1) reusing one shared MOC instance across all threads, or 2) instantiating a new MOC on each thread.
Thanks!
The "official" answer is going to be something about using an NSOperationQueue and/or taking manual steps to ensure that all your accesses to the NSManagedObjectContext occur on the same thread that created the context. There are a number of references and tutorials that you can follow to implement this approach.
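As a rough sketch of that approach, you can funnel every category write through a single private-queue context, so the fetch-or-create check and the insert can never interleave (entity and attribute names are assumed here):

// One context, created once; all category writes go through its queue.
NSManagedObjectContext *context =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
context.persistentStoreCoordinator = coordinator;

[context performBlock:^{
    // Fetch-or-create: performBlock serializes on the context's own queue,
    // so no second write of the same category can run concurrently.
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Category"];
    request.predicate = [NSPredicate predicateWithFormat:@"remoteID == %@", categoryID];
    NSManagedObject *category = [[context executeFetchRequest:request error:NULL] lastObject];
    if (category == nil) {
        category = [NSEntityDescription insertNewObjectForEntityForName:@"Category"
                                                  inManagedObjectContext:context];
        [category setValue:categoryID forKey:@"remoteID"];
    }
    // ... hook the category up to the place being written, then save once.
    [context save:NULL];
}];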
As an alternative, there's a thread-safe Core Data extension on github that will do this for you. If you use it, it will automatically synchronize your database operations so that you don't have to worry about whether or not another thread is doing something with the context. You can just insert things as they come in, and the framework will ensure that your operations are translated into something that won't make Core Data explode.
Full disclosure: I built the github project.
I am creating a new web app and would like some help on design plans.
I have "store" objects, and each one has a number of "message" objects. I want a store page that shows that store's messages. Using Doctrine, I have mapped this as a OneToMany association, following http://symfony.com/doc/current/book/doctrine.html
However, I want to show messages in reverse chronological order, so I added:
@ORM\OrderBy({"whenCreated" = "DESC"})
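For context, the full association mapping presumably looks something like this (property name assumed):

/**
 * @ORM\OneToMany(targetEntity="Message", mappedBy="store")
 * @ORM\OrderBy({"whenCreated" = "DESC"})
 */
protected $messages;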
Still, I am fetching the "store" object and then calling
$store->getMessages();
Now I want to show only messages that have been "verified". At this point I am unsure how to do this using the @ORM annotations, so I was thinking I need a custom repository layer.
My question is twofold:
First, can I do this using the entity's @ORM annotations?
And second, what is the correct way to wrap this database query?
I know I eventually want the SQL SELECT * FROM message WHERE verified=1 AND store_id=? ORDER BY myTime DESC, but how do I make this the "Symfony2 way"?
For part 1 of your question... technically I think you could do this, but I don't think you'd be able to do it efficiently, or in a way that doesn't go against good practices (it would mean injecting the entity manager into your entity).
Your question is an interesting one, because at first glance, I would also think of using $store->getMessages(). But because of your custom criteria, I think you're better off using a custom repository class for Messages. You might then have methods like
$messageRepo->getForStoreOrderedBy($storeId, $orderBy)
and
$messageRepo->getForStoreWhereVerified($storeId).
Now, you could do this from the Store entity with methods like $store->getMessagesWhereVerified(), but I think you would be polluting the Store entity, especially if you need more and more of these custom methods. By keeping them in a Message repository, you're separating your concerns in a cleaner fashion. Also, with the Message repository, you might save yourself a query by not needing to fetch your Store object first, since you only need to query the Message table and use its store_id in your WHERE clause.
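A minimal sketch of such a repository method, assuming the field names used above and that the entity is annotated with @ORM\Entity(repositoryClass="..."):

use Doctrine\ORM\EntityRepository;

class MessageRepository extends EntityRepository
{
    public function getForStoreWhereVerified($storeId)
    {
        return $this->createQueryBuilder('m')
            ->where('m.store = :storeId')
            ->andWhere('m.verified = true')
            ->orderBy('m.whenCreated', 'DESC')
            ->setParameter('storeId', $storeId)
            ->getQuery()
            ->getResult();
    }
}

From a controller you would then call something like $em->getRepository('AcmeStoreBundle:Message')->getForStoreWhereVerified($storeId) (bundle name hypothetical).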
Hope this helps.
I'm moving from the pure DDD paradigm to CQRS. My current concern is with Event Sourcing and, more specifically, with organizing the event store. I've read tons of blog posts but still can't understand some things, so correct me if I'm wrong.
Each event basically consists of:
- Event date/time
- type of Event (we can figure out the type of the AggregateRoot from this as well)
- AggregateRoot id (Guid)
- AggregateRoot version (to maintain the order of updates)
- Event data (some serialized class with data necessary to make update)
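In code, such an envelope might be as simple as this sketch (names assumed):

public class EventEnvelope
{
    public DateTime OccurredAt { get; set; }      // event date/time
    public string EventType { get; set; }         // implies the AggregateRoot type
    public Guid AggregateId { get; set; }         // AggregateRoot id
    public int AggregateVersion { get; set; }     // maintains the order of updates
    public string Payload { get; set; }           // serialized event data
}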
Now, if my Event data consists of simple value types (ints, strings, enums, etc.), then it's easy. But what if I have to pass another AggregateRoot? I can't serialize the whole AR as part of the Event data (think of all the data and the lazy loading); basically I only need to store the Id of that AR. But then, when I need to apply that event, I'd have to get that AR from the database first, and it doesn't feel right to do so from my Domain Model (calling Repositories and working with AR Ids).
What's the best approach for this?
p.s. For a concrete example, let's assume there's a model which consists of Task and User entities (both ARs). A Task holds a reference to the responsible User, but the responsible User can be changed.
Update: I think I've found the source of my confusion. I believe event sourcing should be used only for building the read model, and in that case passing Ids and raw data is OK. But the same events are used on the aggregates themselves, and this is what I cannot understand.
In DDD an aggregate is a consistency/invariant boundary, so one aggregate may never depend on another to maintain its invariants. When we apply this design restriction we find very few situations where it is necessary to store a full reference to another aggregate; usually we store its id and (if necessary) its version and a copy of the relevant attributes.
For example, using the usual Order/LineItem and Product problem, we would copy the Product's id and price into the LineItem instead of keeping a full reference. This prevents changes to the Product's price from affecting the Order/LineItem aggregate's invariants. If it is necessary to update the LineItem price after the Product price changes, we need to keep track of the PriceChanged event from the Products used and send a compensating command to the Order/LineItem. Usually this coordination/synchronization is handled by a saga.
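As a sketch of that copying, with hypothetical Product members:

public class LineItem
{
    public Guid ProductId { get; private set; }     // identity only, no object reference
    public decimal UnitPrice { get; private set; }  // price copied when the item is added
    public int Quantity { get; private set; }

    public LineItem(Product product, int quantity)
    {
        ProductId = product.Id;
        UnitPrice = product.Price;  // later Product price changes don't touch this Order
        Quantity = quantity;
    }
}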
In Event Sourcing, the state of the aggregate is defined by events, and nothing more. All the domain model machinery (à la DDD) is there just to decide which domain events should be raised. An Event should know nothing about your Domain; it should be a simple DTO. In fact, it is perfectly OK to have Event Sourcing without DDD.
As I understand Event Sourcing, it is supposed to help people get rid of relational data models and ORMs like NHibernate or Entity Framework, since each of those is a science of its own; programmers can then simply focus on business logic. I have seen relational schemas used for event stores that were simply ID, Version, Timestamp plus an NClob or NVarchar(max) column storing the event payload schema-less.
I want to keep an entity, configured by the user across several pages, in the Session. This entity is loaded with NHibernate, with some of its properties/collections lazy-loaded. Say:
Session["order"] = new Order(productRepository.Get(id))
on some later page, I get Session["order"] and work with it;
but at this point the order itself is OK while its Product (and nested objects) is broken, since those are lazy-loaded and tied to a different session.
Is it possible to tell NHibernate that I want to eager-load my transient order's properties to the deepest level? Or will the only solution be to eager-load at the time of
productRepository.Get(id)
? Like,
Session.LoadNestedProperties(order, Eager);
Update: http://www.ribbing.net/index.php?option=com_content&task=view&id=35&Itemid=1 seems to solve the issue. However, I'm not sure that using reflection is great...
You could eager-load the whole object graph you need, which is a little bit tricky.
Or you could try the following:
I assume your Order has a single Product. This Product is your problem since it becomes a detached object when the user visits the second page. You could use something like:
session.Update(myorder.Product)
to reattach the Product instance to the current session. After that lazy loading should work fine.
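Putting the two requests together, a sketch of the flow (property names assumed; session.Lock with LockMode.None reattaches without scheduling an UPDATE, which may be preferable when the Product wasn't modified):

// Request 1: load the order and stash it in session state
var order = new Order(productRepository.Get(id));
HttpContext.Current.Session["order"] = order;

// Request 2, with a fresh NHibernate ISession: reattach before touching lazy members
var cached = (Order)HttpContext.Current.Session["order"];
session.Lock(cached.Product, LockMode.None);   // or session.Update(cached.Product)
var name = cached.Product.Name;                // lazy loading now uses the current session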