DDD - How should the domain rules be designed so the upper layers know when a rule was broken?

Example: a ShoppingCart entity has a rule that no more than 10 items can be added. The ShoppingCart will have a method addItemToCart(item) that checks whether the limit was exceeded. If the cart limit is exceeded, the cart should somehow inform the upper layer calling that method that a specific business rule was broken.
One thing that comes to mind is to use exceptions, but that seems like an improper use of exceptions. How should the messaging be done? What patterns could help with this problem, keeping the domain unrelated to the other layers and properly encapsulated?

If the rule is a maximum of 10 items in the cart, then ShoppingCart should expose isFull() so upper layers can check before calling addItem(). Or disable the Add button, or whatever - ShoppingCart doesn't know how the rule information is used and shouldn't assume it will only be required when addItem is called. As a safety check, addItem can throw an exception, but this shouldn't be the primary mechanism for communicating with the upper layer.
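A minimal sketch of that shape in TypeScript (the MAX_ITEMS constant and the CartFullError name are illustrative, not taken from the question):

const MAX_ITEMS = 10;

class CartFullError extends Error {
  constructor() {
    super(`Cart cannot hold more than ${MAX_ITEMS} items`);
  }
}

class ShoppingCart {
  private items: string[] = [];

  // Upper layers can query the rule without having to trigger it.
  isFull(): boolean {
    return this.items.length >= MAX_ITEMS;
  }

  // Safety net only: callers are expected to check isFull() first.
  addItem(item: string): void {
    if (this.isFull()) {
      throw new CartFullError();
    }
    this.items.push(item);
  }
}

The UI can call isFull() to disable the Add button up front, while addItem() still protects the invariant if that check is ever skipped.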

There isn't a recipe for that; it really depends on your app. For your specific example I believe things are quite simple: ShoppingCart is pretty much a container, and even though it's a domain concept, it's still a container. Also, the UI should be aware of this rule too, so it shouldn't allow more items than the cart is supposed to accept.
But let's say the shopping cart POST model gets to the controller. It's easy to have a service, or to use the ShoppingCart object directly, to validate the input. The point is, you expect this kind of situation, so you deal with it as fast as possible. But if you decide that the UI shouldn't allow this to happen in the first place, then the ShoppingCart should throw an exception, because it's a bug (the UI permitted an invalid operation).
Personally, I wouldn't have a ShoppingCart domain concept modeled. The ShoppingCart would be an input model used to create an Order.
As a rule of thumb, if I expect certain situations (which require giving feedback to the user), then I'd have some validation before I use the data to update the domain, as sketched below. The domain objects should throw every time a business rule is violated. Once again, there isn't a recipe; I'd probably do different things in different apps.
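A rough sketch of that split, reusing the ShoppingCart from the earlier sketch (the ValidationResult shape and the function names are assumptions made for illustration):

interface ValidationResult {
  ok: boolean;
  errors: string[];
}

// Expected situation: validate up front and give the user feedback.
function validateAddItem(cart: ShoppingCart): ValidationResult {
  return cart.isFull()
    ? { ok: false, errors: ['The cart already holds the maximum number of items.'] }
    : { ok: true, errors: [] };
}

function addItemHandler(cart: ShoppingCart, item: string): ValidationResult {
  const result = validateAddItem(cart);
  if (result.ok) {
    // If this still throws, it is a bug: the UI allowed an invalid operation.
    cart.addItem(item);
  }
  return result;
}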

The same way as in any other program: choose a method that works well with your team and with your implementation environment. Exceptions work fine.

Related

Is it a good practice to attach an event-related parameter to an object's model as a variable?

This is about an API handling the validation during saving an object. That means the front-end client sends a request to a specific endpoint of the API, and on the back-end the API creates a new object if the right conditions are met.
Right now the regular approach we use is that the models have a ruleset for each field, and the validation is invoked when the save function is invoked; technically the validation is done right before the object is saved into the database.
Then during today's code review I came across a solution which I wasn't sure was a good practice or not. It was about the front-end having to send a specific parameter to the API every time. This is because other APIs are using our API as well, and we need to know whether the request was sent as an API request or a browser request. If this parameter is present, then we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service handler or controller level, and if it is present, I would invoke the validation right away and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets that variable when the parameter is present, and then validates only when the save function is invoked on the object (which first validates the ruleset defined on the object fields, then saves the object into the database).
So my problem with (2) is that the object has now grown bigger with an extra variable that is only related to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when you create the object on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
Now this may seem like a silly question, because why would I care about just one extra variable, but this is the bedrock of something good or bad. If I say this is OK, then from now on the models will start growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code would be more reliable if it's not the developer who has to remember all the 6712537 functionalities and keep them in mind when making changes somewhere. Let's say all the devs get a heart attack tomorrow from the excitement of an amazing discovery, and a new developer has to work on the project without knowing about these small details, and then he has to change something in the code that is related to this functionality - so that new feature should be supported by this old one as well.
So my question is: is there any good practice for this, and what do you think would be the best approach?
So I spent some time thinking about the solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. Then, when the parameters are passed to the model at the controller level, the loader function can be modified so that it takes the trigger variables from the parameters and saves them in the model's associative array variable that stores the trigger variables.
By default this array is empty, and it doesn't matter how many new variables need to be created; it will only contain the necessary ones when those are used.
Then of course the loader function needs to be modified so it can filter out the non-trigger variables, just as it does for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
So this solves both the problem of the object growing with unnecessary variables and the centralized-validation part, because now the validation can always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variables array, the developer never has to remember that this functionality was created. Which is good, because in the future, when he creates a new related function or endpoint that should handle object creation, he will not forget to validate it against the old functionality, because the loader function that he modified in the past will handle it for him.
It needs to be noted, though, that since the loader function doesn't differentiate between the parameters, and decides where to load them only by checking the parameter names with the filter functions, these parameter names should be distinct from the regular field names; otherwise buggy functionality can be created accidentally. For example, if you forget that a model attribute with the same name was used, you can accidentally trigger an event that was programmed to fire whenever the trigger variable with that name is present. However, this can be solved by prefixing the trigger variables, for example (the sketch below uses such a prefix).
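A rough TypeScript sketch of the idea (the class names, the trigger_ prefix and the field layout are all invented for illustration):

type Params = Record<string, unknown>;

class BaseModel {
  // Whitelist of acceptable trigger variables, prefixed so they cannot
  // collide with regular field names.
  protected static acceptedTriggers: string[] = [];

  protected fields: Params = {};
  protected triggers: Params = {}; // empty by default

  // Loader: splits incoming parameters into regular fields and trigger variables.
  load(params: Params): void {
    const accepted = (this.constructor as typeof BaseModel).acceptedTriggers;
    for (const [name, value] of Object.entries(params)) {
      if (accepted.includes(name)) {
        this.triggers[name] = value;
      } else {
        this.fields[name] = value;
      }
    }
  }

  save(): void {
    this.validateFields();   // the regular ruleset on the fields
    this.validateTriggers(); // extra, trigger-dependent validation
    // ...persist this.fields to the database...
  }

  protected validateFields(): void { /* field ruleset goes here */ }
  protected validateTriggers(): void { /* overridden per model */ }
}

class Article extends BaseModel {
  protected static acceptedTriggers = ['trigger_apiRequest'];

  protected validateTriggers(): void {
    if (this.triggers['trigger_apiRequest']) {
      // The extra validation that should only run for API-originated requests.
    }
  }
}

Whichever controller loads the parameters, the trigger ends up on the model and the extra validation runs on save, so nobody has to remember to call it separately.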

How to organize endpoints when using FeathersJS's seemingly restrictive api methods?

I'm trying to figure out if FeathersJS suits my needs. I have looked at several examples and use cases. FeathersJS uses a fixed set of service methods: find, get, create, update, patch and remove. No other methods, let alone custom methods, can be implemented and used, as confirmed in this other SO post.
Let's imagine an application where users can save their app settings. Without worrying about method conventions, I would create an endpoint describing the action performed by the user. In this case we could have, for instance, /saveSettings, knowing there won't be any setting-finding, -creation, -updating or -deleting (only some -patching). I might also need a /getSettings route.
My question is: can every action be reduced down to these request methods? To me, these actions are strongly bound to a specific collection/model. Sometimes, we need to create actions that are not bound to a single collection and could potentially interact with more than one collection/model.
For this example, I'm guessing it would be translated in FeathersJS into a service named Setting which would hold two methods: get() and patch().
If that is the correct approach, it looks to me as if this solution is more server-oriented than client-oriented, in the sense that we have to know, client-side, which underlying collection is going to be changed or affected. It feels like we are losing some level of freedom by not having some kind of routing between endpoints and services (like we have in vanilla ExpressJS).
Here's another example: I have a game character that can skill up. When the user decides to skill up a particular skill, a request is sent to the server. This endpoint could look like POST /skillUp. What would it be in FeathersJS? Implementing SkillUpService#create?
I hope you get the issue I'm trying to highlight here. Do you have some ideas to share or recommendations on how to organize the API in this particular framework?
I'm not an expert on FeathersJS, but if you build your database and models with good logic, these methods are all you need:
For the settings example, saveSettings corresponds to setting.patch({options}), so to the route settings/:id?options (method PATCH), since the user already has some default settings (created with the user). getSettings would correspond to setting.find(query).
To create the user AND the settings, I guess you have a method that calls setting.create({defaultOptions}) when the user CREATE route is called. This would be the right way.
For the skillUp route, it depends on the design of your database, but I guess it would be something like a table that gives you the level/skills/character, so you need a service for this specific table and you call skillLevel.patch({character, level}); a sketch follows below.
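A small sketch of what those calls could look like with a Feathers client in TypeScript (the service names 'settings' and 'skill-levels', the field names and the ids are assumptions, and the transport setup is omitted):

import { feathers } from '@feathersjs/feathers'; // Feathers v5 style import

const app = feathers();
// ...a REST or socket.io transport would be configured here...

// "saveSettings" becomes a PATCH on the settings resource.
async function saveSettings(userId: string, options: Record<string, unknown>) {
  return app.service('settings').patch(userId, options);
}

// "getSettings" becomes a find (a GET with a query).
async function getSettings(userId: string) {
  return app.service('settings').find({ query: { userId } });
}

// "skillUp" becomes a PATCH on the row that links character and skill.
async function skillUp(skillRowId: string, newLevel: number) {
  return app.service('skill-levels').patch(skillRowId, { level: newLevel });
}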
In addition to the correct answer that @gui3 has already given, it is probably worth pointing out that Feathers is intentionally restrictive in order to help you create RESTful APIs which focus on resources (data) and a known set of methods you can execute on them.
Aside from the answer you linked, this is also explained in more detail in the FAQ, and an introduction to REST API design and why Feathers does what it does can be found in the article Design patterns for modern web APIs. These are best practices that helped scale the internet (specifically the HTTP protocol) to what it is today, and they can work really well for creating APIs. If you still want to use the routes you are suggesting (which are not RESTful), then Feathers is not the right tool for the job.
One strategy you may want to consider is using a request parameter in a POST body, such as { "action": "type" }, and using a switch statement to conditionally perform the desired action (sketched below). An example of this strategy is discussed in this tutorial.
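A hedged sketch of that pattern as a custom Feathers-style service class (the service name, the action names and the data shape are invented for this example):

interface CharacterActionData {
  action: string;            // e.g. "skillUp"
  characterId: string;
  payload?: Record<string, unknown>;
}

class CharacterActionsService {
  // Feathers only needs the standard method names, so a POST maps to create().
  async create(data: CharacterActionData) {
    switch (data.action) {
      case 'skillUp':
        // ...raise the skill level for data.characterId...
        return { status: 'skill increased' };
      case 'rename':
        // ...update the character name from data.payload...
        return { status: 'renamed' };
      default:
        throw new Error(`Unknown action: ${data.action}`);
    }
  }
}

// Registered like any other service, e.g. app.use('character-actions', new CharacterActionsService());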

API interface design - toggle or 2 different interfaces

I am studying interface design.
Here is what I am curious about.
Some open APIs provide two different interfaces to implement toggling, e.g. Instagram's like interface: it is split into two operations (like and cancel like).
What is the advantage of separating those two? (Separating them into two interfaces makes things more complicated for the end user, in my view.)
I question this, since it could be implemented with a toggle.
That is, the user sends item_id and user_id; the server checks the database (whether this item is already liked or not) and updates it.
Thanks for any answer!
The real benefit of having two interfaces for toggling is that it doesn't require the user to know the current state of the thing they are attempting to change (i.e. it doesn't require me to first query for the state).
If I am a consumer of an API, typically I will want to perform actions such as liking something. Very rarely can I think of a case where I would want to perform the action of doing the opposite of what I did previously (unless I'm feeling like flip-flopping). If you didn't have two endpoints for like and unlike, then you'd first have to poll the API to get the current status, and then perform the toggle you're talking about if needed.
This situation introduces more logic into your code, requires that you make one or two calls to the API, and assumes that the state didn't change between calls; whereas having two endpoints reduces the logic, limits your API calls to one per action, and means you don't have to worry about the state changing unexpectedly.
In the case where you try to like something that the user has already liked, the API would simply return a successful result and not alter the underlying data.
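A minimal Express-style sketch of the two-endpoint approach (the routes, the in-memory likes store and the userId query parameter are all illustrative):

import express from 'express';

const app = express();
const likes = new Set<string>(); // key: `${userId}:${itemId}`

// Both calls are safe to repeat: liking an already-liked item just succeeds.
app.post('/items/:itemId/like', (req, res) => {
  likes.add(`${req.query.userId}:${req.params.itemId}`);
  res.sendStatus(204);
});

app.post('/items/:itemId/unlike', (req, res) => {
  likes.delete(`${req.query.userId}:${req.params.itemId}`);
  res.sendStatus(204);
});

// A single /toggle endpoint would instead force the client to query the
// current state first to know what the call will do.
app.listen(3000);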
One reason to prefer an interface where you specify the desired state explicitly is that it will be idempotent. That is, the resulting state is the same even if the request is made multiple times.
This is a pretty contrived example, but if two different people sharing the same account tried to like the same thing within a small enough window, you could end up with it being un-liked instead.
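A sketch of that explicit-desired-state variant (the URL shape and the liked body field are assumptions, not any real API):

// PUT is idempotent: sending { liked: true } twice still leaves the item
// liked, whereas a blind "toggle" sent twice would undo itself.
async function setLiked(itemId: string, liked: boolean): Promise<void> {
  await fetch(`/items/${itemId}/like`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ liked }),
  });
}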

Is it possible to use Bukkit for Minecraft to define a new kind of mob?

I'd like to write a Minecraft mod which adds a new type of mob. Is that possible? I see that, in Bukkit, EntityType is a predefined enum, which leads me to believe there may not be a way to add a new type of entity. I'm hoping that's wrong.
Yes, you can!
I'd direct you to some tutorials on the Bukkit forums. Specifically:
Creating a Meteor Entity
Modifying the Behavior of a Mob or Entity
Disclaimer: the first is written by me.
You cannot truly add an entirely new mob just via Bukkit; you'd have to use Spout to give it a different skin. However, if you simply want a mob and are content with sharing the skin of another entity, it can be done.
The idea is injecting the EntityType values via Java's Reflection API. It would look something like this:
public static void load() {
    try {
        // Grab the private static registration method EntityTypes.a(Class, String, int)
        Method a = EntityTypes.class.getDeclaredMethod("a", Class.class, String.class, int.class);
        a.setAccessible(true);
        // The method is static, so the target object is null; id_map is the numeric entity id to register under
        a.invoke(null, YourEntityClass.class, "Your identifier, can be anything", id_map);
    } catch (Exception e) {
        // Insert handling code here
    }
}
I think the above is fairly straightforward. We get a handle to the private method, make it accessible, and invoke the registration method. id_map contains the entity id to map your entity to; 12 is that of a fireball. The mapping can be found in EntityType.class. Note that these ids should not be confused with their packet designations; the two are completely different.
Lastly, you actually need to spawn your entity. MC will continue spawning the default entity, since we haven't removed it from the map. But it's just a matter of calling net.minecraft.server.spawnEntity(your_entity, SpawnReason.CUSTOM).
If you need a skin, I suggest you look into SpoutPlugin. It does require running the Spout client to join such a server, but the possibilities at that point are literally infinite.
Sadly, it would only be possible with client-side mods as well. You could look into Spout (http://www.spout.org/), which is a client mod that provides an API for server-side plugins to do more on the client, but without doing something client-side, this is impossible.
It's not possible to add new entities, but it is possible to edit entity behaviors. For example, one time I made it so that you could tame iron golems and they would follow you around.
You can also sort of achieve custom-looking human entities by accessing player entities and tweaking network packets.
It's expensive, as you need to create a player account to achieve this, which then gets used to act as a mob. You then spawn a named entity and give it the same behaviour AI as you would an existing mob. Keep in mind, however, that you will need to write the AI yourself (you could borrow code straight from craftbukkit/bukkit) and you will need to push the movement and events of this mob to players within sight. Technically speaking, all you're doing is pushing packets to the client from the server about what's actually happening, but if you're outside that push list nothing will happen, and other players will see you being knocked around by an invisible something :) it's a bit of a mental leap :)
I'm using this concept to create NPCs that act as friendly and factional armies. I've also used mobs themselves as friendly entities (if you belong to a dark faction).
I'd personally like to see a future server API that can push model instructions to the client for a server-specific cache, as well as the ability to tell a client where to download mob skins.
It's doable today, but I'd have to create a plugin for the client to achieve this, which is then back to a game of annoyance, especially when Mojang pushes out a new release and all the plugins take forever to rise with its tide.
In all honesty, this entire ecosystem could be managed more strategically, but right now I think it's just really ad hoc product management (speaking as a former product manager of .NET, I'd love to work on this strategy; it would be such a fun gig).

Confused about Http verbs

I get confused about when and why you should use specific verbs in REST.
I know basic things like:
GET -> for retrieval
POST -> for adding a new entity
PUT -> for updating
DELETE -> for deleting
These verbs are to be used for the operations I wrote above, but I don't understand why.
What will happen if, inside a GET method in REST, I add a new entity, or inside POST I update an entity, or maybe inside DELETE I add an entity? I know this may be a noob question, but I need to understand it. It sounds very confusing to me.
@archil has an excellent explanation of the pitfalls of misusing the verbs, but I would point out that the rules are not quite as rigid as what you've described (at least as far as the protocol is concerned).
GET MUST be safe. That means that a GET request must not change the server state in any substantial way. (The server could do some extra work like logging the request, but will not update any data.)
PUT and DELETE MUST be idempotent. That means that multiple calls to the same URI will have the same effect as one call. So for example, if you want to change a person's name from "Jon" to "Jack" and you do it with a PUT request, that's OK because you could do it one time or 100 times and the person's name would still have been updated to "Jack".
POST makes no guarantees about safety or idempotency. That means you can technically do whatever you want with a POST request. However, you will lose any advantage that clients can take of those assumptions. For example, you could use POST to do a search, which is semantically more of a GET request. There won't be any problems, but browsers (or proxies or other agents) would never cache the results of that search because they can't assume that nothing changed as a result of the request. Further, web crawlers would never perform a POST request because they could not assume the operation was safe.
The entire HTML version of the world wide web gets along pretty well without PUT or DELETE, and it's perfectly fine to do deletes or updates with POST; but if you can support PUT and DELETE for updates, deletes, and other idempotent operations, it's just a little better, because agents can assume that the operation is idempotent.
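A small sketch of those guarantees from the client side (the /people URLs and payloads are made up for illustration):

async function demo(): Promise<void> {
  // Safe: a GET must not change server state, so agents may cache or prefetch it.
  await fetch('/people/7');

  // Idempotent: sending this PUT once or 100 times leaves the same name.
  await fetch('/people/7', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Jack' }),
  });

  // Idempotent: repeating the DELETE does not delete anything further.
  await fetch('/people/7', { method: 'DELETE' });

  // No guarantees: agents must assume this is neither safe nor idempotent,
  // so it will not be cached or crawled.
  await fetch('/people', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Jon' }),
  });
}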
See the official W3C documentation for the real nitty gritty on safety and idempotency.
A protocol is a protocol: it is meant to define every rule related to it, and HTTP is a protocol too. All of the above rules (including the HTTP verb rules) are defined by the HTTP protocol, and so is their usage. If you do not follow these rules, only you will understand what happens inside your service; it will not follow the rules of the protocol and will be confusing for other users. There was an example, once, about a famous photo site (it does not matter which) that deleted pictures with GET requests. When a user of that site installed the Google Desktop Search program, which archives pages locally, the program knew that GET operations are only used to get data and should not affect anything, so it made GET requests to every available URL (including those GET-delete URLs). As the user was logged in and the cookie was in the browser, there were no authorization problems. The result: all of the user's photos were deleted on the server, because of the incorrect usage of the HTTP protocol and the GET verb. That's why you should always follow the rules of the protocol you are using. Although technically possible, it is not right to override defined rules.
Using GET to delete a resource would be like having a function that is named and documented to add something to an array, but that deletes something from the array under the hood. REST has only a few well-defined methods (the HTTP verbs). Users of your service will expect your service to stick to these definitions; otherwise it's not a RESTful web service.
If you do so, you cannot claim that your interface is RESTful. The REST principle mandates that the specified verbs perform the actions that you have mentioned. If they don't, then it can't be called a RESTful interface.