I am using the state_machine gem and I'd like to store every state transition.
Is it just a matter of creating a new model called MyClassTransition with columns like transition:string and from_state:state, and adding a new record to that table on each transition?
Is there a better practice for storing these transitions? I need a kind of log ...
Any recommendations are welcome.
Instead of inventing this yourself, why don't you look at something like papertrail? This is what we use to record all our changes, and it allows you to roll back.
There's a great tutorial on Railscasts:
http://asciicasts.com/episodes/255-undo-with-papertrail
You might also want to have a look at this: https://github.com/wvanbergen/state_machine-audit_trail
Unlike Papertrail, it only logs the state attribute and doesn't support rollback/undo/revert. If you don't need rollback, it's simpler.
Also, when you're using before/after_transition callbacks that affect other models, you can't leverage Papertrail's rollback system.
I'm trying to figure out if FeathersJS suits my needs. I have looked at several examples and use cases. FeathersJS uses a fixed set of service methods: find, get, create, update, patch and remove. No other methods, let alone custom methods, can be implemented and used, as confirmed in this other SO post.
Let's imagine an application where users can save their app settings. Without worrying about method conventions, I would create an endpoint describing the action performed by the user. In this case we could have, for instance, /saveSettings, knowing there won't be any setting finding, creation, updating (only some patching) or deleting. I might also need a /getSettings route.
My question is: can every action be reduced to these service methods? To me, these actions are strongly bound to a specific collection/model. Sometimes we need to create actions that are not bound to a single collection and could potentially interact with more than one collection/model.
For this example, I'm guessing it would be translated in FeathersJS into a service named Setting which would hold two methods: get() and patch().
If that is the correct approach, it looks to me as if this solution is more server-oriented than client-oriented in the sense that we have to know, client-side, what underlying collection is going to get changed or affected. It feels like we are losing some level of freedom by not having some kind of routing between endpoints and services (like we have in vanilla ExpressJS).
Here's another example: I have a game character that can skill up. When the user decides to skill up a particular skill, a request is sent to the server. This endpoint could look like POST /skillUp. What would it be in FeathersJS? Would I implement it as SkillUpService#create?
I hope you get the issue I'm trying to highlight here. Do you have some ideas to share or recommendations on how to organize the API in this particular framework?
I'm not a FeathersJS expert, but if you build your database and models with good logic, these methods are all you need:
For the settings example, saveSettings corresponds to setting.patch({options}), so to the route settings/:id?options (method PATCH), since the user already has some default settings (created with the user). getSettings would correspond to setting.find(query) (see the sketch at the end of this answer).
To create the user AND the settings, I guess you call setting.create({defaultOptions}) when the user CREATE route is called. This would be the right way.
For the skillUp route, it depends on the design of your database, but I guess it would be something like a table that gives you the level/skills/character, so you need a service for that specific table and a call to skillLevel.patch({character, level}).
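To make that concrete, here is a minimal TypeScript sketch of a Feathers service that only exposes get and patch. The SettingsService class, the Settings shape and the in-memory store are all made up for illustration; they are not from the question.

import { feathers } from '@feathersjs/feathers';

interface Settings { id: number; theme: string; notifications: boolean; }

// A Feathers service only has to implement the methods it actually exposes;
// here that is get ("getSettings") and patch ("saveSettings").
class SettingsService {
  private store: Record<number, Settings> = {
    1: { id: 1, theme: 'dark', notifications: true },
  };

  // Mapped by a REST transport to GET /settings/:id
  async get(id: number): Promise<Settings> {
    return this.store[id];
  }

  // Mapped by a REST transport to PATCH /settings/:id
  async patch(id: number, data: Partial<Settings>): Promise<Settings> {
    this.store[id] = { ...this.store[id], ...data };
    return this.store[id];
  }
}

const app = feathers();
app.use('settings', new SettingsService());

// "saveSettings" then becomes:
// await app.service('settings').patch(1, { theme: 'light' });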
In addition to the correct answer that @gui3 has already given, it is probably worth pointing out that Feathers is intentionally restrictive in order to help you create RESTful APIs which focus on resources (data) and a known set of methods you can execute on them.
Aside from the answer you linked, this is also explained in more detail in the FAQ, and an introduction to REST API design and why Feathers does what it does can be found in this article: Design patterns for modern web APIs. These are best practices that helped scale the internet (specifically the HTTP protocol) to what it is today and can work really well for creating APIs. If you still want to use the routes you are suggesting (which are not RESTful), then Feathers is not the right tool for the job.
One strategy you may want to consider is using a request parameter in a POST body such as { "action": "type" } and using a switch statement to conditionally perform the desired action. An example of this strategy is discussed in this tutorial.
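A rough sketch of that pattern, assuming a hypothetical actions service whose create method dispatches on the action field (the action names and response shapes are invented):

import { feathers } from '@feathersjs/feathers';

const app = feathers();

// A plain-object service: POST /actions with { "action": "...", "payload": ... }
app.use('actions', {
  async create(data: { action: string; payload?: Record<string, unknown> }) {
    switch (data.action) {
      case 'skillUp':
        // Delegate to the service that owns the data, e.g.
        // return app.service('skills').patch(skillId, { level: newLevel });
        return { status: 'ok', action: 'skillUp' };
      case 'saveSettings':
        return { status: 'ok', action: 'saveSettings' };
      default:
        throw new Error(`Unknown action: ${data.action}`);
    }
  },
});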
I need a suggestion on which library to use for a large React Native mobile app using Redux: redux-offline or react-native-offline?
I need to regularly check the connection status, render views depending on that status, add actions to a queue when offline and run them when online, cancel queued actions when they conflict, and persist/rehydrate data offline.
I am using redux-offline in my React Native project and it works great. All the features you are looking for are present:
It regularly checks the connection status.
You can add actions to the offline queue at any time (online/offline).
It runs the queued actions as soon as the device comes back online (and you can decide the retry interval).
You can write your own discard method to drop any action based on your business requirements.
It uses redux-persist, which automatically persists/rehydrates data; you can also provide your own storage mechanism.
redux-offline is working great for me. Sorry, I haven't used react-native-offline yet, so I can't give you any comparison.
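For reference, a minimal redux-offline setup might look roughly like this. The FOLLOW_USER action, the /api/follow URL and the root reducer are made-up examples; the enhancer, the defaults import and the meta.offline shape follow the library's documented usage.

import { createStore } from 'redux';
import { offline } from '@redux-offline/redux-offline';
import offlineConfig from '@redux-offline/redux-offline/lib/defaults';

// Hypothetical root reducer; replace with your app's own.
const rootReducer = (state = { pendingFollows: 0 }, action: any) =>
  action.type === 'FOLLOW_USER' ? { pendingFollows: state.pendingFollows + 1 } : state;

const store = createStore(
  rootReducer,
  offline({
    ...offlineConfig,
    // Drop an action instead of retrying it, e.g. on client (4xx) errors.
    discard: (error: any) => error.status >= 400 && error.status < 500,
  }),
);

// An offline-aware action: the effect runs when the device is online;
// commit or rollback is dispatched depending on whether it succeeds.
store.dispatch({
  type: 'FOLLOW_USER',
  meta: {
    offline: {
      effect: { url: '/api/follow', method: 'POST', json: { userId: 42 } },
      commit: { type: 'FOLLOW_USER_COMMIT' },
      rollback: { type: 'FOLLOW_USER_ROLLBACK' },
    },
  },
});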
I would suggest going for react-native-offline.
react-native-offline provides:
Easier queue handling for actions, based on a regex or a list of action types.
Automatic triggering of online-only actions once the network is back.
Your sagas look cleaner and more readable, with both online and offline cases handled, from a maintenance perspective.
redux-offline provides:
It basically relies on separating online and offline actions.
Each action and its associated rollback needs to be handled.
Both provide redux-persist integration with the storage connector of your preference.
I have evaluated both, and for my use case I decided to go with react-native-offline. I liked its integration and ease of setup with redux-saga, and its offlineQueue was very convenient to have when you expect your users to conduct many operations offline.
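As a rough sketch of what the react-native-offline wiring looks like (store setup only; the user reducer and FETCH_USER action are invented, while the network reducer, createNetworkMiddleware and the meta: { retry: true } queueing flag come from the library's documented API):

import { createStore, combineReducers, applyMiddleware } from 'redux';
import { reducer as network, createNetworkMiddleware } from 'react-native-offline';

// Hypothetical app reducer living next to the library's network reducer.
const user = (state = { profile: null }, action: any) =>
  action.type === 'FETCH_USER_SUCCESS' ? { profile: action.payload } : state;

const store = createStore(
  combineReducers({ network, user }),
  applyMiddleware(createNetworkMiddleware()),
);

// Components can read state.network.isConnected to render online/offline views.
// Actions dispatched with meta: { retry: true } are queued while offline and
// re-dispatched automatically when the connection comes back.
store.dispatch({
  type: 'FETCH_USER',
  payload: { id: 42 },
  meta: { retry: true },
});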
I'm new to React; I think the basics have sunk in, but I'm stuck on something. We're going to rebuild one of our old systems and I'd like to do it in React.
Our system is an internal CRM, and each set of client data is about a MB in size, so efficiency is one of our priorities. The logic is done in a separate API used by lots of different systems, so 99% of this front end is CRUD only.
(I hope I'm explaining this Ok!)
So onto my question. If I make a small change to one part of the client data, say I add an 'Audit' to the client, there is a chance that LOTS of other data changes. It's complex enough that I don't want to replicate the logic on both the front end and the API side.
Would I need to have the API return the full MB of data, so the root-level app re-renders all its components? Or is there a more efficient way of doing it? Should I be setting up each component to periodically ping the API to check for changes individually?
I'm just a little bit lost where to start tackling the idea of it. Any help is much appreciated!
First things first: React components re-render when any prop or state field changes.
If you change something on the client side and the change affects server-side data that matters to the user, then you should update your app view. To make this smoother, you can use the shouldComponentUpdate lifecycle method to prevent unnecessary re-renders.
If the server-side updates are not important to the user (some metadata, for example), then you don't have to update the state of your application, and by doing that you prevent re-renders.
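A small TypeScript sketch of that idea: a component that skips re-rendering unless the slice of data it actually displays has changed. The Audit shape, the AuditList component and the lastModified prop are hypothetical.

import React from 'react';

interface Audit { id: number; note: string; }

interface Props {
  audits: Audit[];
  // A cheap change marker for the audits slice, e.g. a timestamp from the API.
  lastModified: string;
}

class AuditList extends React.Component<Props> {
  // Only re-render when the audits slice changed; the rest of the (large)
  // client record can change without touching this component.
  shouldComponentUpdate(nextProps: Props): boolean {
    return nextProps.lastModified !== this.props.lastModified;
  }

  render() {
    return (
      <ul>
        {this.props.audits.map(audit => (
          <li key={audit.id}>{audit.note}</li>
        ))}
      </ul>
    );
  }
}

export default AuditList;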
I am creating a new web app and would like some help on design plans.
I have "store" objects, and each one has a number of "message" objects. I want to show a store page that shows this store's messages. Using Doctrine, I have mapped OneToMany using http://symfony.com/doc/current/book/doctrine.html
However, I want to show messages in reverse chronological order, so I added:
* @ORM\OrderBy({"whenCreated" = "DESC"})
I am still fetching the "store" object first and then calling
$store->getMessages();
Now I want to show only messages that have been "verified". At this point I am unsure how to do this using the @ORM annotations, so I was thinking I need a custom repository layer.
My question is twofold:
First, can I do this using the Entity @ORM annotations alone?
And second, which is the correct way to wrap this database query?
I know I eventually want the SQL SELECT * FROM message WHERE verified=1 AND store_id=? ORDER BY myTime DESC, but how do I do this the "Symfony2 way"?
For part 1 of your question... technically I think you could do this, but I don't think you'd be able to do it in an efficient way, or a way that doesn't go against good practices (i.e. injecting the entity manager into your entity).
Your question is an interesting one, because at first glance, I would also think of using $store->getMessages(). But because of your custom criteria, I think you're better off using a custom repository class for Messages. You might then have methods like
$messageRepo->getForStoreOrderedBy($storeId, $orderBy)
and
$messageRepo->getForStoreWhereVerified($storeId).
Now, you could do this from the Store entity with methods like $store->getMessagesWhereVerified(), but I think that you would be polluting the Store entity, especially if you need more and more of these custom methods. By keeping them in a Message repository, you're separating your concerns in a cleaner fashion. Also, with the Message repository, you might save yourself a query by not needing to fetch your Store object first, since you would only need to query the Message table and use its store_id in your WHERE clause.
Hope this helps.
Using Ncqrs, is there a way to replay every single event that ever happened (across all aggregate types) and feed these through my denormalizers in order to recreate the whole read model from scratch?
Edit:
I thought it would be good to provide a more specific use case. I'm building this inside an ASP.NET MVC application and using Entity Framework (Code First) for the read models. In order to speed up development (and because I'm lazy), I want to use a database initializer that recreates the database schemas whenever any read model changes, and then use the initializer's seed method to repopulate them.
There is unfortunately nothing built in to do this for you (though I haven't updated the version of ncqrs I use in quite a while so perhaps that's changed). It is also somewhat non-trivial to do it since it depends on exactly what you want to do.
The way I would do it (up to this point I have not had a need) would be to:
Call to the event store to get all relevant events
Depending on what you are doing this could be all events or just the events for one aggregate root, or a subset of events for one or more aggregate roots.
Re-create the read-model in memory from scratch (to save slow and unnecessary writing)
Store the re-created read-model in place of the existing one
Call to the event store one more time to get any events that may have been missed
Repeat until there are no new events being returned
One thing to note: if you are recreating the entire read-model database from scratch, I would take the service offline temporarily or queue up new events until you finish.
Again there are different ways you could approach this problem, your architecture and scenarios will probably dictate how best to do it.
We use an MsSqlServerEventStore. To replay all the events, I implemented the following code:
// Resolve the event bus and the concrete MS SQL event store from the Ncqrs environment.
var myEventBus = NcqrsEnvironment.Get<IEventBus>();
if (myEventBus == null) throw new Exception("EventBus is not found in NcqrsEnvironment");
var myEventStore = NcqrsEnvironment.Get<IEventStore>() as MsSqlServerEventStore;
if (myEventStore == null) throw new Exception("MsSqlServerEventStore is not found in NcqrsEnvironment");

// Fetch the stored events and republish them on the event bus so every
// registered denormalizer reprocesses them.
var myEvents = myEventStore.GetEventsAfter(GetFirstEventIdFromEventStore(), int.MaxValue);
myEventBus.Publish(myEvents);
This will push all the events onto the event bus, and the denormalizers will process them. The function GetFirstEventIdFromEventStore just queries the event store and returns the Id of the first event (the row where SequentialId = 1).
What I ended up doing is the following. At the service startup, before any commands are being processed, if the read model has changed, I throw it away and recreate it from scratch by processing all past events in my denormalizers. This is done in the database initializer's seed method.
This was a trivial task using the MS SQL event storage as there was a method for retrieving all events. However, I'm not sure about other event storages.