Laravel Model Concrete Properties - oop

I have a lot of love for Laravel; it is one of the nicest frameworks I have used (my experience is roughly 50% PHP, plus RoR, Django and .NET). I am a little confused about why a model's property names are not declared inside the class, or why there is at least no option to define them.
Something like the code below would be ideal, as it would make the models much nicer to work with in an IDE. I am sure there is a reason why it is built like this, and I am interested to find out what it is.
class Blog extends Eloquent
{
    public $id;
    public $blogTitle;
    public $date;
}

Eloquent automatically provides the model object with its properties, based on the columns of the underlying database table.
So it makes no sense to specify them manually, since Eloquent handles it for you, and everything you declare by hand is a potential source of error (a typo, or a column that is later renamed).
My guess is that this is the reason why.
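Very roughly, the mechanism behind this looks like the sketch below: the row's columns are kept in an internal array and exposed through PHP's magic accessors, so no concrete property per column ever needs to exist on the class. This is a simplified illustration of the idea, not Laravel's actual implementation.
    // Simplified sketch of how an Eloquent-style model can expose columns as
    // properties without declaring them on the class. Not Laravel's real code.
    class SketchModel
    {
        /** @var array column => value, filled from the database row */
        protected $attributes = [];

        public function __construct(array $row)
        {
            $this->attributes = $row;
        }

        // $model->blogTitle lands here because no real property with that name exists.
        public function __get($name)
        {
            return $this->attributes[$name] ?? null;
        }

        public function __set($name, $value)
        {
            $this->attributes[$name] = $value;
        }
    }

    $blog = new SketchModel(['id' => 1, 'blogTitle' => 'Hello', 'date' => '2013-01-01']);
    echo $blog->blogTitle; // "Hello"
Because the properties only materialise at runtime, an IDE cannot see them, which is exactly the trade-off the question is about.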

Related

Benefit of "program to an interface" in the particular usecase

I have a use case to store a given object as JSON on the local file system, and my current implementation looks like the code below. Let's say I want to store the same object in a different format, or in some remote location or database, in the future. That requires modifications to the constructor and to the implementation (addConfig) in ConfigStore, but the parameters of addConfig will remain the same, which means changes are only needed in the places where the ConfigStore object is constructed.
Here I am programming to an implementation. Even if I introduce an interface and modify my ConfigStore class to implement it, I still need to update every place that creates an instance of ConfigStore when I move to a different format or data store later. So does it really make sense to use an interface for this particular use case? If yes, what are the advantages?
I know the concept of interfaces and I have been using them widely, but I am trying to understand whether "program to interfaces, not implementations" is really applicable in this use case. I see many of my team mates using interfaces for just this kind of reason (what if we move to a different store later?), so I want to get some thoughts here.
public class ConfigStore {
    @Autowired private final String mPathRoot;
    @Autowired private final ObjectMapper mObjectMapper;

    public void addConfig(Config config, String countryCode) {
        // Code goes here
    }
}
In my opinion, the "code to an interface" principle should always be adopted. If you find yourself with a design in which you need to update all the places that use ConfigStore for a new format, then you need to take a closer look at your overall design.
I can think of two common pitfalls when adopting "program to an interface":
1. Creation of the specific object.
2. The need for different parameters or call sequences for different implementations of the interface.
Regarding the latter, it is usually solvable by rethinking your abstractions; there is no ready-made answer for that.
Regarding the first pitfall, however, the best way to solve it is by using dependency injection and one of the Factory patterns to hide away the mess. This way only the relevant factory code needs to be updated for new formats (see the sketch below).
Hope this helps.
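To make the factory idea concrete, here is a minimal sketch in PHP (since most of this page is PHP); the names JsonFileConfigStore and ConfigStoreFactory are invented for illustration, and in Java the same shape applies with an interface plus Spring-configured wiring.
    // Hypothetical sketch: the concrete class and factory names are made up.
    class Config // stand-in for the real config object
    {
        public function __construct(public array $values = []) {}
    }

    interface ConfigStore
    {
        public function addConfig(Config $config, string $countryCode): void;
    }

    class JsonFileConfigStore implements ConfigStore
    {
        public function __construct(private string $pathRoot) {}

        public function addConfig(Config $config, string $countryCode): void
        {
            // Store the config as JSON on the local file system.
            file_put_contents("{$this->pathRoot}/{$countryCode}.json", json_encode($config->values));
        }
    }

    class ConfigStoreFactory
    {
        // Callers depend only on the ConfigStore interface; swapping the backing
        // store (database, remote service, ...) means changing this factory alone.
        public static function create(): ConfigStore
        {
            return new JsonFileConfigStore(sys_get_temp_dir());
        }
    }

    // Call sites never name the concrete class:
    ConfigStoreFactory::create()->addConfig(new Config(['feature' => true]), 'US');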

Why would I create an interface for each mapper class?

In cases of MVC applications where the model is split into separate domain and mapper layers, why would you give each of the mapper classes its own interface?
I have seen a few examples now, some from well-respected developers, such as the code in this blog: http://site.svn.dasprids.de/trunk/application/modules/blog/models/
I suspect that it's because the developers are expecting the code to be re-used by others who may have their own back-ends. Is that the case? Or am I missing something?
Note that in the examples I have seen, developers are not necessarily creating interfaces for the domain objects.
Interfaces are contracts between classes (I'm assuming you already know that). When a class expects you to pass in an object with a specific interface, the goal is to tell you that this class instance expects specific methods to be callable on said object.
The only case I can think of where having a defined interface for data mappers makes sense might be when using a unit of work to manage persistence. But even then it would make more sense to simply inject a factory that can create data mappers.
TL;DR: someone has been overdoing it.
P.S.: it is quite possible that I am completely wrong about this one, since I'm a bit biased on the subject - my mappers contain only three public methods (plus the constructor): fetch(), store() and remove(), though the method names tend to vary. I prefer to take the retrieval conditions from the domain object, as described here.
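For what it's worth, a bare-bones PHP version of the mapper shape described above might look like the sketch below; the interface, the factory and their signatures are mine for illustration, not taken from the linked blog.
    // Rough sketch of the three-method mapper described above; names are illustrative.
    interface DataMapper
    {
        // Populate the domain object, using its current state as the retrieval condition.
        public function fetch(object $entity): void;

        // Insert or update the domain object's data in storage.
        public function store(object $entity): void;

        // Remove the corresponding record(s) from storage.
        public function remove(object $entity): void;
    }

    // A unit of work would consume an injected factory rather than a mapper interface:
    class MapperFactory
    {
        public function __construct(private \PDO $connection) {}

        public function create(string $name): DataMapper
        {
            $class = $name . 'Mapper';            // e.g. 'Blog' -> 'BlogMapper'
            return new $class($this->connection); // concrete mappers share the connection
        }
    }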

About methods in OOP

I'm relatively new to OOP.
I understand classes, methods and so on, but I'm having trouble with the philosophy.
Right now I'm working on an application to manage projects, covering projects, classes, methods, variables, users, groups, logs and task management.
So, starting with the Project class, I have this:
public function create_project()
public function get_projects()
public function delete_project()
Then, for the ProjectClass class:
public function create_class()
public function get_classes()
public function delete_class()
But then I thought that this was not the right way, so I changed it to:
Project class methods:
set_name, get_name (and similar methods)
add_class
get_classes
add_log
get_logs
ProjectClass class methods:
set_project_id (and get)
add_variables (and get)
add_method
...
So, in the first case it is the Project class that creates new projects, the ProjectClass class that creates the classes and the Method class that creates the methods; in the second case it is the Project class that creates and manages its classes, and the ProjectClass class that creates and manages its methods.
So, is either of these "styles" correct?
If the second one is correct, who creates the projects? The project itself?
Thank you so much
In the general case it is really hard to tell whether one design is better than another if you don't have clear responsibilities to assign (and by this I mean behaviour beyond getters and setters). As time went by I moved away from up-front design to an iterative/incremental one, tackling one problem at a time and refactoring the design as needed. In this case I would try to lay down the basic requirements of your system and start a design-implementation cycle for each of them, restructuring your model as you tackle new requirements.
Just as an example, consider this question: does it make sense to have a class that is not bound to a project? If the answer is no, then it can be a good idea to have a method like Project>>createClass(aClassName), since you are explicitly stating that a class is created in the context of a project; you can also make the proper connections between a class and the project it belongs to inside that method's implementation. However, it is also a valid approach to define a constructor in the ProjectClass class that takes a project as a parameter. That way you are saying "if you want to create a new class, then you must provide the project it belongs to". Which approach to use depends on many things, one of them being programmer taste :), so it is really hard to state that one is better than the other without a specific context in which to evaluate them.
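In PHP terms, the two approaches contrasted above might look roughly like this (class and method names are purely illustrative):
    // Approach 1: the project creates its classes, wiring up the relationship itself.
    class Project
    {
        /** @var ProjectClass[] */
        private array $classes = [];

        public function createClass(string $className): ProjectClass
        {
            $class = new ProjectClass($className, $this);
            $this->classes[] = $class;
            return $class;
        }
    }

    // Approach 2: ProjectClass demands its owning project at construction time,
    // so a class can never exist without a project.
    class ProjectClass
    {
        public function __construct(
            private string $name,
            private Project $project,
        ) {}
    }

    $project = new Project();
    $class = $project->createClass('Invoice'); // approach 1 uses approach 2 internally here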
Finally, if it helps, there are a few things worth mentioning (a short sketch follows this list):
Assuming that public function create_project() is an instance method, why does an instance of a Project know how to create other projects? At first sight it doesn't make much sense, since that is basically a class-side responsibility, unless you have a specific motivation for it (e.g. the Prototype pattern).
Why does a project answer get_projects()? Are the projects related in some way, or does it just list all the projects? Again, this sounds like a class-side responsibility.
I generally don't like to repeat the concept that the message receiver represents as part of the message. So I wouldn't call the message delete_project(), since it is redundant to write $project->delete_project() (you already know the receiver of the message is a project).
You should be consistent with your class names. If you use ProjectClass to represent classes, then you should use ProjectMethod to represent methods (though I personally don't like these names; IMHO they are misleading). It is quite important to choose proper names and keep them consistent in your domain model.
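Here is a rough sketch of those points in PHP (again, the names are only illustrative): creation and listing live on the class side, and the method names don't repeat the receiver's concept.
    class Project
    {
        /** @var Project[] */
        private static array $all = [];

        private function __construct(private string $name) {}

        public static function create(string $name): Project   // class-side creation
        {
            $project = new Project($name);
            self::$all[] = $project;
            return $project;
        }

        /** @return Project[] */
        public static function all(): array                    // replaces get_projects()
        {
            return self::$all;
        }

        public function delete(): void                         // not delete_project()
        {
            self::$all = array_filter(self::$all, fn ($p) => $p !== $this);
        }
    }

    $project = Project::create('Website redesign');
    $project->delete();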
HTH

Single repository with generic methods ... bad idea?

I'm currently trying out a few different ways of implementing repositories in the project I'm working on, and right now I have a single repository with generic methods on it, something like this:
public interface IRepository
{
    T GetSingle<T>(IQueryBase<T> query) where T : BaseEntity;
    IQueryable<T> GetList<T>(IQueryBase<T> query) where T : BaseEntity;
    T Get<T>(int id) where T : BaseEntity;
    int Save<T>(T entity) where T : BaseEntity;
    void DeleteSingle<T>(IQueryBase<T> query) where T : BaseEntity;
    void DeleteList<T>(IQueryBase<T> query) where T : BaseEntity;
}
That way I can just inject a single repository into a class and use it to get whatever I need.
(by the way, I'm using Fluent NHibernate as my ORM, with a session-per-web-request pattern, and injecting my repository using Structuremap)
This seems to work for me - the methods I've defined on this repository do everything I need. But in all my web searching, I haven't found other people using this approach, which makes me think I'm missing something ... Is this going to cause me problems as I grow my application?
I read a lot of people talking about having a repository per root entity - but if I identify root entities with some interface and restrict the generic methods to only allow classes implementing that interface, then aren't I achieving the same thing?
thanks in advance for any offerings.
I'm currently using a mix of both generic repositories (IRepository<T>) and custom (ICustomRepository). I do not expose IQueryable or IQueryOver from my repositories though.
Also I am only using my repositories as a query interface. I do all of my saving, updating, deleting through the Session (unit of work) object that I'm injecting into my repository. This allows me to do transactions across different repositories.
I've found that I definitely cannot do everything from a generic repository but they are definitely useful in a number of cases.
To answer your question though I do not think it's a bad idea to have a single generic repository if you can get by with it. In my implementation this would not work but if it works for you then that's great. I think it comes down to what works best for you. I don't think you will ever find a solution out there that works perfectly for your situation. I've found hybrid solutions work best for me.
I've done something similar in my projects. One drawback is that you'll have to be careful you don't create a select n+1 bug; I got around it by passing in a separate list of properties to eagerly fetch.
The main argument you'll hear against wrapping your ORM like this is that it's a leaky abstraction. You'll still have to code around some of the "gotchas", like select n+1, and you don't get to take full advantage of things like NHibernate's caching support (at least not without extra code).
Here's a good thread on the pros and cons of this approach on Ayende's blog. He's more or less opposed to the pattern, but there are a few counter arguments too.
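For readers who haven't run into it, the select n+1 problem mentioned above looks roughly like this; the sketch below is generic PHP with a fake query function, not NHibernate-specific code.
    // Generic illustration of select n+1. query() is a stand-in that only counts
    // simulated database round trips.
    $queries = 0;
    $query = function (string $sql) use (&$queries): array {
        $queries++;
        return [['id' => 1], ['id' => 2], ['id' => 3]]; // pretend result set
    };

    // Lazy loading: one query for the list, then one more per item -> 1 + n queries.
    $posts = $query('SELECT * FROM posts');
    foreach ($posts as $post) {
        $comments = $query("SELECT * FROM comments WHERE post_id = {$post['id']}");
    }
    echo "Lazy loading: {$queries} queries\n";   // 4 queries for 3 posts

    // Eager fetching (a join, or the "list of properties to fetch" mentioned above)
    // pulls the same data in a single round trip.
    $queries = 0;
    $all = $query('SELECT * FROM posts JOIN comments ON comments.post_id = posts.id');
    echo "Eager fetching: {$queries} queries\n"; // 1 query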
I've implemented this kind of repository for NHibernate. You can see an example here.
In that implementation you are able to do eager loading and fetching. The pitfall is that with NHibernate you will often need to use the QueryOver or Criteria API to access data (unfortunately the LINQ provider is still far from perfect), and with such an abstraction that can be a problem, again leading to a leaky abstraction.
I have actually moved away from the repository pattern and from creating unit of work interfaces - I find them limiting.
Unless you anticipate a change in the data store (i.e. going from a database to text files or XML), which has never been the case for me, you are best off using ISession. You are trying to abstract your data access, and that is exactly what NHibernate already does. Using a repository cuts you off from really useful features like Fetch(), FetchMany(), futures, etc. ISession is your unit of work.
Embrace NHibernate and use the ISession directly!
I've used this approach successfully on a few projects. It gets burdensome passing many IRepository<T> instances into my service layers, one for each BaseEntity, but it works. One thing I would change is to put the where T : constraint on the interface rather than on the methods:
public interface IRepository<T> where T : BaseEntity

OOP class design, Is this design inherently 'anti' OOP?

I remember back when MS released a forum sample application; the design of the application was like this:
/Classes/User.cs
/Classes/Post.cs
...
/Users.cs
/Posts.cs
So the Classes folder had just the classes, i.e. properties and getters/setters.
Users.cs, Posts.cs, etc. have the actual methods that access the data access layer, so Posts.cs might look like:
public class Posts
{
    public static Post GetPostByID(int postID)
    {
        SqlDataProvider dp = new SqlDataProvider();
        return dp.GetPostByID(postID);
    }
}
Another, more traditional route would be to put all of the methods in Posts.cs into the Post class definition itself (Post.cs).
Splitting things into two files makes it much more procedural, doesn't it?
Isn't this breaking OOP rules since it is taking the behavior out of the class and putting it into another class definition?
If every method is just a static call straight to the data source, then the "Posts" class is really a Factory. You could certainly put the static methods in "Posts" into the "Post" class (this is how CSLA works), but they are still factory methods.
I would say that a more modern and accurate name for the "Posts" class would be "PostFactory" (assuming that all it has is static methods).
I guess I wouldn't say this is a "procedural" approach necessarily -- it's just a misleading name; in the modern OO world you would assume that a "Posts" object would be stateful and provide methods to manipulate and manage a set of "Post" objects.
Well, it depends where and how you define your separation of concerns. If you put the code that populates the Post in the Post class, then your business layer is intermingled with data access code, and vice versa.
To me it makes sense to do the data fetching and populating outside the actual domain object, and to let the domain object be responsible for using the data.
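As a rough sketch of that separation (in PHP rather than C#, with made-up names), the domain object holds the data and behaviour while a separate class owns the data access:
    // Illustrative only: separating the domain object from the code that loads it.
    class Post                       // domain object: state plus domain behaviour
    {
        public function __construct(
            public int $id,
            public string $title,
            public string $body,
        ) {}

        public function summary(int $length = 80): string
        {
            return substr($this->body, 0, $length);
        }
    }

    class PostRepository             // data access lives outside the domain object
    {
        public function __construct(private \PDO $db) {}

        public function getById(int $postId): ?Post
        {
            $stmt = $this->db->prepare('SELECT id, title, body FROM posts WHERE id = ?');
            $stmt->execute([$postId]);
            $row = $stmt->fetch(\PDO::FETCH_ASSOC);

            return $row ? new Post((int) $row['id'], $row['title'], $row['body']) : null;
        }
    }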
Are you sure the classes aren't partial classes? In that case they aren't really two classes, just a single class spread across multiple files for better readability.
Based on your code snippet, Posts is primarily a class of static helper methods. Posts is not the same object as Post; instead of Posts, a better name might be PostManager or PostHelper. If you think of it that way, it may help you understand why they broke it out this way.
This is also an important step towards decoupling (or loosely coupling) your applications.
What's anti-OOP or pro-OOP depends entirely on the functionality of the software and what's needed to make it work.