How should I create half-day events in Rails?

I'm creating an app in Rails that is essentially a holiday management tool. Employee requests holiday; email sent to manager for approval; manager approves/rejects etc.
The app will allow whole or half-day holidays to be taken and I'm wondering about the best way to handle the half-days. I don't want to present the user with a time picker. I would prefer to offer a date-picker and AM/PM checkboxes.
I suppose I'm looking for opinions on whether I should 1) use the chosen date in conjunction with, say, the AM checkbox to create a DateTime entry in the DB (e.g. leave starting on 10 February in the AM = "2011-02-10 00:00"), or 2) simply record a Date in the DB with a string reference to AM in a separate field.
I want to output leave in the form of .ics files and a stream, so the first option makes the most sense to me, but it is likely to create a real fudge in the code. Any thoughts or further options appreciated.
Thanks
Robin

Why not create durations (pairs of datetimes) for every holiday rather than a single datetime? That would model the iCal representation better than storing single times as events.
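For example, a minimal sketch of that in Rails (the model name, column names, and the 9-13 / 13-17 office hours are all assumptions):

    # A date plus an AM/PM choice becomes a concrete starts_at/ends_at pair,
    # which maps directly onto DTSTART/DTEND when rendering the .ics output.
    class Holiday < ActiveRecord::Base
      # assumed columns: starts_at :datetime, ends_at :datetime

      def self.half_day(date, period)
        morning = (period.to_sym == :am)
        new(:starts_at => date.to_time.change(:hour => morning ? 9 : 13),
            :ends_at   => date.to_time.change(:hour => morning ? 13 : 17))
      end

      def self.full_day(date)
        new(:starts_at => date.to_time.change(:hour => 9),
            :ends_at   => date.to_time.change(:hour => 17))
      end
    end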
As far as how to handle that at the view level, you're probably going to want to use the Presenter Pattern since you're really manipulating events rather than times.
A presenter is basically a proxy with added business logic that represents a better mapping for how the view interacts with the model.
It's a lightweight layer that wraps your models (presenters are normally plain Ruby classes rather than AR::Base or other heavyweight Rails models); they're usually instantiated at the controller level and passed to your views instead of the models themselves.
http://blog.jayfields.com/2006/09/rails-model-view-controller-presenter.html
http://blog.jayfields.com/2007/03/rails-presenter-pattern.html
http://www.slideshare.net/adorepump/presenting-presenters-on-rails
Here's what I mean: https://gist.github.com/984025
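In the same vein, a rough presenter sketch (class and method names are mine, not from the gist):

    # A plain Ruby object that wraps the model and carries view-facing logic.
    class HolidayPresenter
      def initialize(holiday)
        @holiday = holiday
      end

      # The AM/PM label is derived from the stored duration, so the view
      # never needs a separate flag column.
      def period_label
        return "Full day" if (@holiday.ends_at - @holiday.starts_at) > 4.hours
        @holiday.starts_at.hour < 12 ? "Morning (AM)" : "Afternoon (PM)"
      end

      def date_label
        @holiday.starts_at.strftime("%d %B %Y")
      end
    end

    # In the controller, hand the presenter (not the model) to the view:
    #   @holiday = HolidayPresenter.new(Holiday.find(params[:id]))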

Related

Getting around 10 BPF max to implement good Entity and Process structure in Dynamics

The aim is to enable a consistent user experience and to have every business process follow an identical high-level structure.
Target-state vision for the end-user experience (context / requirements):
They could start in Dynamics 365 from Company screen, select the process they want to initiate
New Case screen opened along with visible process steps unique to that process. User inputs generic Case fields, next step
User inputs fields that are specific to that business process and next
One or more processing phases which conclude the initial user flow (and instead show up in queues of different user groups - or queues that various automation implementations read & execute on)
The aim is that every single process follows this model. Every single process (of which there are hundreds in total) is a Case (core entity), and the fields which are specific to it are located in a custom entity (which is specific to the business process).
I.e., in the target state there are hundreds of business processes, and each has (with some exceptions):
Custom entity (only the fields that are specific to that process - we don't want to pollute the core entities with use case specific fields after all)
Custom page / default view (tab within Case editing / view screen to display the custom entity's information) - to user it should simply appear as part of Case information (they never even know there is a separate entity in play)
Business process flow (BPF) specific to the process
This enables:
Everything is a Case - full visibility from a single view into what is happening related to specific customer and overall (the default design approach of Dynamics of having everything fragmented all over the place is just plain idiotic)
Decoupling at several levels - business process, custom entity, custom page can safely be developed by stream-aligned teams with minimal overlap with the core shared entities and areas (which are taken care of by centralized team). Automation steps and exact technology are entirely decoupled from BPF implementation (interaction via queues / specific subsets of CRM API)
For this to work (BPF specific):
For "Everything is a Case" to work the BPF needs to be associated to Case entity (second / some processing steps associated to the related custom entity) because that is the standard way
Problem:
There is a maximum of 10 BPFs for a specific Entity, which makes no sense. I can understand restricting the number of active processes for an Entity instance, but not for the Entity overall. And there is no getting around this directly.
So how to mitigate this without breaking the entire approach?
The primary approach seems to be:
Associate the BPF to the custom entity instead (since one exists for each process)
Problem with this:
Now we can no longer open the Case and see the associated process - at least not directly. The same applies to creating a new Case of a specific type (Case type = specific biz process = specific BPF + custom entity)
So how do we get around this? How do we design the system so that it maximally utilizes Dynamics' core elements and approaches while still enabling this "Everything is a Case" model - where users have a consistent approach regardless of the type of process, and where we have a unifying technical abstraction for everything (which also enables a very clear API towards external systems and automation technologies)?

How to instantiate several controllers of a single type in ASP.NET Core 3

I thought I had a common business case, but I cannot find an appropriate solution for it.
So, suppose I have LoaderController which has a number of database actions like load, save, etc. It loads and saves the objects of a single type (i.e. this is not a generic controller). BUT: it can load objects from different databases - that's the point.
So, if a user calls /db1/load - the system loads data from database db1, /db2/load - from database db2, etc. Also, I have a kind of a shared environment, so I can't be sure that the first segment is always the database name. There might be other cases like /report/id which don't correspond to LoaderController but have to live with that type of route "/x1/x2" as well.
Well, the basic idea is to somehow register all my database endpoints like (pseudocode):
Register("db1", controller = typeof(LoaderController), parameter="ConnectionStringToDb1"), but I cannot understand how to do that. Not only registering that type of route is an issue but also pushing the connection string parameter into the particular controller in any way.

API interface design - toggle or 2 different interfaces

I am studying interface design.
Here is what I'm curious about.
Some open APIs provide two different interfaces to implement toggling, e.g. Instagram's like interface, which separates liking into two operations (like, cancel like).
What is the advantage of separating those two? (Separating it into two interfaces makes things more complicated for the end user, in my view.)
I question this since it could be implemented with a single toggle.
I.e., the user sends item_id and user_id; the server checks the database (whether this item is already liked or not) and updates accordingly.
Thanks for any answers!
The real benefit to having two interfaces for toggling is that it doesn't require the user to know the current state of the thing they are attempting to change (i.e. it doesn't require me to first query for the state).
If I am a consumer of an API, typically I will want to perform actions such as like-ing something. Very rarely can I think of a case where I would want to perform the action of doing the opposite of whatever I did previously (unless I'm feeling like flip-flopping). If you didn't have two endpoints for like and unlike, then you'd first have to poll the API to get the current status and then perform the toggle you're talking about, if needed.
This situation introduces more logic into your code, requires that you make 1-2 calls to the API, and assumes that the state didn't change between calls; whereas having two endpoints reduces the logic, limits your API calls to one per action, and means you don't have to worry about the state changing unexpectedly.
In the case where you try to like something that the user has already liked, then the API would simply return a successful result and not alter the underlying data.
One reason to prefer an interface where you specify the desired state explicitly is that it will be idempotent. That is, the resulting state is the same even if the request is made multiple times.
This is a pretty contrived example, but if two different people sharing the same account tried to like the same thing within a small enough window, you could end up with it being un-liked instead.
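To make the contrast concrete, here is a language-agnostic sketch (in Ruby; the in-memory Set stands in for whatever storage the API really uses):

    require 'set'

    class Likes
      def initialize
        @liked = Set.new
      end

      # Explicit endpoints are idempotent: repeating either request
      # leaves the state unchanged.
      def like(user_id, item_id)      # e.g. POST /items/:id/like
        @liked.add([user_id, item_id])
      end

      def unlike(user_id, item_id)    # e.g. DELETE /items/:id/like
        @liked.delete([user_id, item_id])
      end

      # A toggle is NOT idempotent: two racing or repeated calls cancel
      # each other out, producing exactly the un-liked result described above.
      def toggle(user_id, item_id)
        key = [user_id, item_id]
        @liked.include?(key) ? @liked.delete(key) : @liked.add(key)
      end
    end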

FactoryImpl to set atts via props for bound inputs

First, thanks for any advice. I am new to all of this and apologize for any obvious blunders.
Second, the question:
In an interface for entering clients who often possess a number of roles, it seemed efficient to create a set of inputs which possessed both visual characteristics and associated data binding based simply on the input's name.
For example, inquirerfirstname would be any caller or emailer who contacted our company.
The name would dictate a label, placeholder, and the location in firebase where the data would be stored.
The single name could be used--I thought--with a relational table (state machine or series of nested ifs) to define the properties of the input and change its outward appearance and inner bindings through property manipulation.
I created a set of nested ifs and console-logged the property changes in the inputs, but their representation in the host element (a collection of inputs that generated messages to clients as well as messages to sales staff) remained unaffected.
I attempted using the ready callback. I forced the state change with a button.
I was unable to use var name = new MyInput(name). I believe this method would be most effective, but I am unsure how to "stamp" the JavaScript into a heavyweight stamped parent element.
An example of a more complicated and dynamic use of a constructor and a factory implementation that can read database (JSON) objects and respond by generating HTML elements would be awesome.
In vanilla JavaScript a forEach would seem to do the trick, but the definitions, structure, and binding would not be organic - read: it might be easier just to hand-stamp the inputs in Polymer HTML.
I would be really grateful for any help. I have looked for a week and failed to find one example that took data binding, physical appearance, attribute swapping, property binding, and object reading into account.
I guess it's a lot, but each piece independently (save the use of the constructor) I think I get.
Thanks again.
Jason
PS: I am aware that the stamping of the element seems to preclude dynamic property, attribute, and binding assignments. I was hoping a computed attribute mixed with a factoryimpl would be an option (with a nice example).

Event Sourcing using NHibernate

I'm moving from a pure DDD paradigm to CQRS. My current concern is with Event Sourcing and, more specifically, organizing the Event Store. I've read tons of blog posts but still can't understand some things. So correct me if I'm wrong.
Each event basically consists of:
- Event date/time
- Event type (we can figure out the type of AggregateRoot from this as well)
- AggregateRoot id (Guid)
- AggregateRoot version (to maintain the order of updates)
- Event data (some serialized class with the data necessary to make the update)
Now, if my Event data consists of simple value types (ints, strings, enums, etc.) then it's easy. But what if I have to pass another AggregateRoot? I can't serialize the whole AR as part of the Event data (think of all the data and lazy loading); basically I only need to store the id of that AR. But then, when I need to apply that event, I'd have to get that AR from the database first. And it doesn't feel right to do so from my Domain Model (calling Repositories and working with AR ids).
What's the best approach for this?
p.s. For a concrete example, let's assume there's a Model which consists of Task and User entities (both ARs). A Task holds a reference to the responsible User, but the responsible User can be changed.
Update: I think I've found the source of my confusion. I believed event sourcing should be used only for building the read model - and in that case passing ids and raw data is fine. But the same events are used on the aggregates themselves, and this is what I cannot understand.
In DDD an aggregate is a consistency/invariant boundary, so one aggregate may never depend on another to maintain its invariants. When we apply this design restriction we find very few situations where it is necessary to store a full reference to another aggregate; usually we store its id and (if necessary) version, plus a copy of the relevant attributes.
For example, using the usual Order/LineItem and Product problem, we would copy the Product's id and price into the LineItem instead of keeping a full reference. This prevents changes to the Product's price from affecting the Order/LineItem aggregate's invariants. If it is necessary to update the LineItem price after the Product price changes, we need to keep track of the PriceChanged events from the Products used and send a compensating command to the Order/LineItem. Usually this coordination/synchronization is handled by a saga.
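A minimal sketch of that copying (in Ruby for brevity; the names are illustrative):

    class LineItem
      attr_reader :product_id, :unit_price, :quantity

      def initialize(product, quantity)
        @product_id = product.id     # reference the other aggregate by id only
        @unit_price = product.price  # copy of the relevant attribute at order time
        @quantity   = quantity
      end

      # Later Product price changes cannot affect this total; only a
      # compensating command (e.g. from a saga) can update it.
      def total
        unit_price * quantity
      end
    end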
In Event Sourcing, the state of the aggregate is defined by events and nothing more. All the domain model machinery (à la DDD) is there just to decide which domain events should be raised. An event should know nothing about your Domain; it should be a simple DTO. In fact, it is perfectly OK to have Event Sourcing without DDD.
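Applied to the Task/User example above, a minimal sketch (in Ruby for brevity; names are illustrative) of an event as a plain DTO carrying only the other aggregate's id:

    class ResponsibleUserChanged
      attr_reader :task_id, :new_user_id, :occurred_at

      def initialize(task_id, new_user_id, occurred_at = Time.now)
        @task_id     = task_id
        @new_user_id = new_user_id   # only the User AR's id, never the AR itself
        @occurred_at = occurred_at
      end
    end

    # Replaying the event on the Task aggregate touches only local state:
    #   def apply(event)
    #     @responsible_user_id = event.new_user_id
    #   end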
As I understand Event Sourcing, it is supposed to help people get rid of relational data models and ORMs like NHibernate or Entity Framework, since each of them is a science of its own; programmers can then simply focus on business logic. I have seen relational schemas used for event stores, and they were simply ID, Version, Timestamp plus an NClob or NVarchar(max) column to store the event payload schema-less.
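A sketch of that schema-less store, written as a Rails migration purely for illustration (the column names are assumptions):

    class CreateEvents < ActiveRecord::Migration
      def change
        create_table :events do |t|
          t.string   :aggregate_id, :null => false  # the AggregateRoot's Guid
          t.integer  :version,      :null => false  # per-aggregate ordering
          t.string   :event_type,   :null => false
          t.datetime :occurred_at,  :null => false
          t.text     :payload                       # serialized event data, schema-less
        end
        add_index :events, [:aggregate_id, :version], :unique => true
      end
    end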