Looking for some guidance.
I'm building an application: SL4 with WCF as the backend service. My WCF Service layer sits over a Domain Model and I'm converting my Domain Entities to screen-specific DTOs using an assembler.
I have a security-related screen which shows a User and the Groups that they are a member of. The user can add and remove groups and then hit the Apply button; only when Apply is hit will the changes be submitted.
Currently I have a UserDetailDto which is sent to the client to populate the screen and my intention was on hitting apply to send a UserDetailUpdateDto back to the server to perform the actual update to the domain model.
Does this sound ok to start?
If so, when the user is making changes client-side, should my UserDetailUpdateDto send back only the changes, i.e. what's been added and what's been removed?
Not sure, guidance would be great.
Guidance is always tricky when so much is unknown about the requirements and the deployment environment. However, your approach sounds reasonable to me. The key things I like about this:
1) You are keeping your DTOs separate from your Domain Entities. In small simple apps it can be fine to send entities over the wire, but they can start to get in each other's way as complexity and function increase.
2) You are differentiating between a Query object (UserDetailDto) and a Command object (UserDetailUpdateDto). Again, the two can often be satisfied using a single object, but you will start to see them bloat as complexity/function increases, because the object is serving two masters (the Query object is consumed at the client and the Command object is consumed at the server). I use a convention where all command DTOs start with a verb (e.g. UpdateUserDetail); it just makes it easier to sort 'data' from 'methods' at the client end.
If the SL application is likely to become large with more complex screen logic it may be worth looking at the Model-View-ViewModel (MVVM) pattern. It allows you to separate screen design from screen function. It provides a bit more flexibility in distributing work around a development team and better supports unit testing.
As far as what gets sent back in the UpdateUserDetail object, I think this should be guided by what is going to be easiest to work with at the domain model (or the WCF service sitting over your domain model). Generally, smaller is better when it comes to DTOs.
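As an illustration only (the type and member names here are hypothetical, following the verb-first convention mentioned above), a delta-style command DTO might look like this:

using System.Collections.Generic;
using System.Runtime.Serialization;

// Hypothetical delta-style command DTO: it carries only what changed,
// so the service can translate it directly into AddGroup/RemoveGroup
// calls on the domain model, and the payload stays small.
[DataContract]
public class UpdateUserGroups
{
    [DataMember]
    public int UserId { get; set; }

    [DataMember]
    public List<int> AddedGroupIds { get; set; }

    [DataMember]
    public List<int> RemovedGroupIds { get; set; }
}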
I am working on two different services:
The first one handles all of the write operations through a REST API, it contains all of the required business logic to maintain data in a consistent state, and it persists entities on a database. It also publishes events to a message broker when an entity is changed (creation, update, deletion, etc). It's structured in a DDD fashion.
The second one only handles reads, also with a REST API. It subscribes to the same message broker in order to process the events published by the first service, then it saves the received data to an in-memory database for fast reads.
Nothing fancy, just CQRS with eventual consistency.
For the first service, I had a clear mind on how to structure the application:
I have the domain package with subpackages for each different aggregate. Each aggregate has its own domain objects, and its own repository interface.
I have the application package with different application services, and they basically just orchestrate the domain objects and call repositories to persist/update data, and the event publisher to publish domain events. The event publisher interface is also in this package.
I have the infrastructure package, which includes a persistence package, where the repository implementations reside, and a messaging package, where the event publisher implementation resides.
Finally, the interfaces package is where I keep the controllers/handlers for the REST API.
For the second service, I'm very unsure on how to structure it. My doubts are the following:
Should I use the repository pattern? To be fair, it seems redundant and not very useful in this scenario. There are no domain objects or rules here, because the data to be saved/updated has already been validated by the first service.
If I avoid using the repository pattern, I suppose I'd have to inject the database client in my application service, and access the data directly. Is this a good practice? If yes, where would the returned objects fit? Would they also be part of the application layer?
Would it make sense to skip the application service entirely and inject the database client straight up in the controller/handler? What if the queries are a bit complicated? This would pollute the controllers with a lot of db logic, making it harder to switch implementations (there would be no interface in this case).
What do you think?
The Query side will only contain the methods for getting data, so it can/should be really simple.
You are right, an abstraction on top of your persistence like a repository pattern can feel redundant.
You can actually call the database in your controller. Even when it comes to testing, on the query side you basically only need integration tests that exercise the actual database; unit tests won't test much.
On the other hand, it can make sense to wrap the database calling logic in a query service similar to a repository. You would inject only that query service interface in your controller, which should use your ubiquitous language! You would have all the db logic in this query service and keep the db complexity there, while keeping the controller really simple.
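A minimal sketch of that idea, with hypothetical names; the controller would depend only on the interface:

using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical query service for the read side: the interface speaks the
// ubiquitous language, while the implementation (not shown) talks to the
// in-memory database directly - no repository, no domain objects.
public interface IOrderQueries
{
    Task<OrderSummary> GetOrderAsync(string orderId);
    Task<IReadOnlyList<OrderSummary>> GetOrdersForCustomerAsync(string customerId);
}

// A read model shaped for the consumer of the API, not for the domain.
public class OrderSummary
{
    public string OrderId { get; set; }
    public string CustomerId { get; set; }
    public decimal Total { get; set; }
}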
You can avoid complex queries by having multiple read models based on your events depending on your needs.
I'm new to web development and I'm attempting to understand REST. The tutorial I'm watching makes mention of the difference between "procedures" and "state transformation". Stating that REST is based on the notion of "state transformation", but it does not delineate the difference between the two.
This has left me wondering what is the difference between the two? Why can't an operation which transforms the state of a resource also be considered a procedure? After all, 'procedure' sounds like a generic enough term that it would also encompass an operation that would transform the state of a resource.
So, what is the difference between performing a procedure on a resource, and performing a state transformation? Or is it merely a matter of semantics?
I have also tried searching for the answer but can't seem to find anything that will shed light on this.
TL;DR
RPC focuses on sending a payload containing method names and arguments in a predefined format. Clients couple tightly to servers through a shared interface (skeleton classes, WSDL, or other interface definition languages (IDLs)).
REST focuses on decoupling clients from servers and on introducing indirections, like support for multiple different media types to marshal resource state in, and the whole interaction concept summarized by HATEOAS, where hypertext controls are used to drive the application state forward through a domain application protocol / state machine on the server side. Responses usually contain semi-structured data that follows the corresponding media type definition (e.g. the HTML spec), which usually doesn't suit simple CRUD applications well. If you will, the state of a resource is transformed into a representation format adhering to the rules of the media type definition and transferred to the remote side.
In network programming, remote procedure call (RPC)-style invocations, as often used in RMI, CORBA, SOAP, or similar frameworks, will usually send a method name that should be invoked at the server along with parameters to feed the method with. The return value is then marshalled into a corresponding response and sent back to the caller. What a client can invoke is usually exposed externally, e.g. via skeleton classes, WSDL, or other forms of contract. So far, so simple. This is how most networking stuff works. However, the drawback here is that the client is tightly coupled to the exposed interface (skeleton classes, WSDL, external documentation), and many problems in internet computing arise from changes over time that are not adequately expressible in those interfaces.
If you take a closer look at how the Web has worked for decades, though, change is an inherent part of it. Your browser will just show the most recent state of a resource (Web page) it has. It might either have gotten it from its cache or from a server it asked. If the version available in its cache is older than a predefined threshold value, it will ignore the cached value and request a new version. If an update has happened since the last version, your browser is automatically served the new version. Fielding, who was working on the HTTP 1.0 and 1.1 specs back then, analyzed how interaction on the Web takes place and generalized his findings into the REST architectural style. So, if you will, REST is just Web surfing for applications.
Unfortunately, a majority of enthusiasts and professionals have not yet understood what REST really is, and there is a lot of false information available regarding REST; even here on Stack Overflow most people don't seem to care, and posts explaining the true nature of REST are downvoted while wrong information is upvoted.
So, what does REST do differently than typical RPC-like method invocations?
First, REST relies on a certain set of uniform interfaces that are the same for every participant in the architecture: for instance, HTTP as the transfer protocol and a naming scheme for resources (URIs), so that everyone acts on these fixed principles. This helps reduce the interoperability issues that are way too common in traditional network programming.
Next, it relies on a basic principle: servers teach clients what they need to know. But how does a server know what a client needs to know? Well, as Jim Webber pointed out, the designer of the application develops a state machine (or domain application protocol) that a client will follow. Think of the checkout system of your favorite online shop. At one point it presents you the items in your trolley and offers you the choice to progress to the next "page", where you can enter the shipping address; on progressing further through the state machine you are asked for your payment options, and so on, until at one point you have finished the checkout and are served a "Thank you" page that summarizes your order. Under the hood you just progressed through their protocol for placing orders and used application controls to move your client further through their state machine. You were therefore served some Web forms and links that you used to fulfill your task. In essence, this is what Hypertext as the engine of application state (HATEOAS for short) is all about.
On the Web, HTML forms are used to teach a client what properties a resource supports, which ones are editable, and so on. Besides that, a form also teaches the client the actual URI to send input data to, the HTTP operation to utilize, and, mostly implicitly, the media type to marshal the request into. E.g. a regular HTML form will use application/x-www-form-urlencoded as its default media type to send the data to the server. So a full HTTP request for an input of a first and last name may look like this:
POST /path/to/resource HTTP/1.1
Host: acme.org
Connection: close
Accept: */*
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
firstName=Roman&lastName=Vottner
The same data could be sent using a different representation format, if it were supported by the media type the form was issued for. Unfortunately, HTML does not support that many.
Links provided by a server should usually be annotated (or accompanied) by so-called link relation names that put the current resource in relation with the given URI. If you will, they are the predicate in a triple of subject (current page), predicate (link relation name), and object (link target resource). Such names, of course, should be standardized or at least follow the Web linking extension mechanism. URIs themselves are opaque, meaning they don't carry meaning by themselves and should therefore not be parsed or analyzed at all. A common mistake often seen in so-called "REST APIs" is that they have typed resources, e.g. a user resource or a car resource that is marshalled on the client side to a programming-language-specific object (e.g. a Java object of class User or the like), which is pretty common in traditional RPC-style programming. In a REST architecture, the representation format is instead usually semi-structured data, i.e. a mix of syntax-defining control elements and actual data. As such, a direct mapping from DB entry to model object to resource, as done by so many CRUD applications, is not possible.
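To make the "URIs are opaque" point concrete: a client should select links by relation name rather than by inspecting or constructing URIs. A minimal sketch in C#, with all names hypothetical:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical client-side link type: the client stores the rel/href
// pairs a response advertised and follows them by relation name only.
public class Link
{
    public string Rel { get; set; }
    public Uri Href { get; set; }
}

public static class LinkExtensions
{
    // Select the link to follow by relation name - never by parsing the URI.
    public static Uri Follow(this IEnumerable<Link> links, string rel)
    {
        return links.First(l => l.Rel == rel).Href;
    }
}

// Usage: var payment = response.Links.Follow("payment");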
Why is this all done in first place?
In traditional network programming, a client is usually only able to work with that one server, and if something at that server changes, clients may be affected and thus stop working; the tight coupling between the two is apparent. The REST architecture introduces a couple of indirections, i.e. the usage of link relations instead of attempts to analyze meaningful URIs, as well as the usage of a multitude of possible media types instead of reliance on one specified message format, which help to decouple clients from servers. Instead of coupling to the server with regard to the messages exchanged, both client and server couple to media types. Through content-type negotiation, a client simply tells the server of its capabilities, and the server should generate a response the client can process. Instead of focusing on one message format, REST has the freedom of almost infinitely many, as long as both client and server support them. The more media types a peer supports, the more likely it will be to interact with other peers in that network.
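As a small illustration of content-type negotiation from the client side, a sketch using .NET's HttpClient; the media types listed are just examples:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// The client advertises the media types it can process, in preference
// order (via quality values), and lets the server pick one it can produce.
public static async Task<string> FetchAsync(string uri)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/hal+json", 1.0));
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json", 0.8));

        var response = await client.GetAsync(uri);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}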
All the points mentioned above lead to a strict decoupling of client and server, which allows the latter to evolve freely without fear that changes will break clients, since neither the transfer protocol nor the naming scheme has changed and the changes introduced are still within the scope of the media type definition. Well-behaved peers in that network will be able to pick up changes on the fly automatically. This is especially handy if you develop an application that should withstand the sands of time and still serve clients in years to come.
If you don't need such properties, there is nothing wrong with not being "RESTful" at all, just don't call such services/APIs REST then. Also, developing REST is for sure more overhead compared to typical RPC-style interactions.
I’m having a little bit of difficulty understanding some architectural principles when developing a service. Suppose a call to a WCF service returns a collection of Orders (custom classes built from LINQ-to-SQL entity data) to a client, and each Order has a collection of OrderItems (one-to-many) that are also built from the same LINQ-to-SQL context. If I make another call to the service, request a particular OrderItem, and modify its details on the client side, how does the first collection realise that one of its Orders' OrderItems has changed on the client side?
My current approach: when changing the OrderItem, I send the OrderItem object to the WCF service for storage via LINQ-to-SQL commands, but to update the collection that the client first fetched, I use the IList interface to search for and replace each instance of the OrderItem. Subscribing each item to the PropertyChanged event also gives some control. This does work, with certain obvious limitations, but how would one 'more correctly' approach this, perhaps by managing all of the data changes from the service itself? ORM? Static classes?
If this is too difficult a question to answer, perhaps someone could suggest a link or even a chat group where I can discuss this, as I understand that this site is geared toward quick Q/A topics rather than guided tutorial discussions.
Thanks all the same.
Chris Leach
If you have multiple clients changing the same data at the same time, then at the end of the day your system must implement some sort of concurrency control. Broadly, that's going to fall into one of two categories: pessimistic or optimistic.
In your case it sounds like you are venturing down the optimistic route, whereby anyone can access the resource via the service - it does not get locked or accessed exclusively. What that means is ultimately you need to detect and resolve conflicts that will arise when one client changes the data before another.
The second architectural requirement you seem to be describing is some way to synchronize changes between clients. This is a very difficult problem. One way is to build some sort of publish/subscribe system whereby, after a client retrieves some resources from the service, it also subscribes for updates to those resources. You can do this in either a push- or pull-based fashion (pull is probably simpler, i.e. just poll for changes, as in the sketch below).
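A minimal sketch of the pull-based variant, with hypothetical service and change-record types:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical change record and service contract for the pull model.
public class OrderItemChange
{
    public int OrderItemId { get; set; }
    public DateTime ModifiedUtc { get; set; }
}

public interface IOrderService
{
    Task<IReadOnlyList<OrderItemChange>> GetChangesSinceAsync(DateTime since);
}

// The client periodically asks for anything newer than what it has seen
// and merges the results into its locally held collections.
public class ChangePoller
{
    private readonly IOrderService _service;
    private DateTime _lastSeen = DateTime.MinValue;

    public ChangePoller(IOrderService service)
    {
        _service = service;
    }

    public async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var changes = await _service.GetChangesSinceAsync(_lastSeen);
            foreach (var change in changes)
            {
                ApplyChange(change);
                if (change.ModifiedUtc > _lastSeen)
                    _lastSeen = change.ModifiedUtc;
            }
            await Task.Delay(TimeSpan.FromSeconds(30), token);
        }
    }

    private void ApplyChange(OrderItemChange change)
    {
        // Find and replace the matching OrderItem in the client's collection.
    }
}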
Fundamentally you are trying to solve a reasonably complex problem, but it's also one which pops up quite frequently in software.
What are some questions I can ask myself about our design to identify if we should use DTOs or Self-Tracking Entities in our application?
Here's some things I know of to take into consideration:
We have a standard n-tier application with a WPF/MVVM client, WCF server, and MS SQL Database.
Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for themselves
Models are used on both the client-side and server-side for validation. We would not be binding directly to the DTO or STE
Some Models contain properties that get lazy-loaded from the WCF service if needed
The Database layer spans multiple servers/databases
There are permission checks on the server-side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role
Our resources are limited (time, manpower, etc)
So, how can I determine what is right for us? I have never used EF before so I really don't know if STEs are right for us or not.
I've seen people suggest starting with STEs and only implementing DTOs if they become a problem; however, we currently have DTOs in place and are trying to decide if using STEs would make life easier. We're early enough in the process that switching would not take too long, but I don't want to switch to STEs only to find out they don't work for us and have to switch everything back.
If I understand your architecture correctly, I think it is not a good fit for STEs, because:
Models are used on both the client-side and server-side for validation. We would not be binding directly to the DTO or STE
The main advantage (and the only advantage) of STEs is their tracking ability, but the tracking ability works only if the STE is used on both sides:
The client queries the server for data
The server queries EF, receives a set of STEs, and returns them to the client
The client works with the STEs, modifies them, and sends them back to the server
The server receives the STEs and applies the transferred changes to EF => the database
In short: There are no additional models on client or server side. To fully use STEs they must be:
Server side model (= no separate model)
Transferred data in WCF (= no DTOs)
Client side model (= no separate model, binding directly to STEs); otherwise you will be duplicating tracking logic when handling change events on bound objects and modifying STEs. (The client and the server share the assembly with the STEs.)
Any other scenario simply means that you don't take advantage of the self-tracking ability, and in that case you don't need STEs at all.
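For contrast, here is a hedged sketch of the full round trip when STEs are used on both sides. Order, IOrderService, and OrderEntities are hypothetical names; StartTracking and ApplyChanges are members generated by the standard STE T4 templates:

// Client side: 'order' is a self-tracking entity received from WCF.
public void EditAndSend(Order order, IOrderService service)
{
    order.StartTracking();       // generated STE member
    order.Status = "Shipped";    // the entity records this change itself
    service.UpdateOrder(order);  // the recorded change set travels with the graph
}

// Server side: replay the recorded change set into the context.
public void UpdateOrder(Order order)
{
    using (var context = new OrderEntities()) // hypothetical ObjectContext name
    {
        context.Orders.ApplyChanges(order);   // extension method generated by the STE template
        context.SaveChanges();
    }
}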
What about your other requirements?
Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for them.
This should probably be possible, but make sure that each "lazy loaded" part is a separate structure - do not build a complex model on the client side. I've already seen questions where people had to send a whole entity graph back for updates, which is not what you always want. Because of that, I think you should not connect the loaded parts into a single entity graph.
There are permission checks on the server-side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role
I'm not sure how you actually want to achieve this. STEs don't use projections, so you must null the fields directly in the entities. Be aware that you must do this while the entity is not in a tracking state, or your masking will be saved to the database.
The Database layer spans multiple servers/databases
That is not a problem for STEs. The server must simply use the correct EF context to load and save the data.
STEs are an implementation of the change set pattern. If you want to use them, you should follow their rules to take full advantage of the pattern. They can save some time if used correctly, but this speed-up comes at the cost of some architectural decisions. Like any other technology they are not perfect, and sometimes you may find them hard to use (just follow the self-tracking-entities tag to see questions). They also have some serious disadvantages, but with a .NET WPF client you will not meet them.
You can opt for STEs in the given scenario:
All STEs are POCOs; .NET dynamically adds a layer to them for change tracking.
Use T4 templates to generate the STEs; it will save you time.
Using tools like AutoMapper will save you time when manually converting WCF-returned data contracts to Entities or DTOs.
Pros for STE -
You don't have to manually track the changes.
In the case of WCF you just have to call applydbchanges and it will automatically refresh the entity.
Cons for STE -
STEs are heavier than plain POCOs, because of the dynamic tracking.
Pros for POCO -
Light weight
Can be easily bridged with EF or nH
Cons for POCO -
Need to manually track the changes with EF (painful - see the sketch below).
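As a rough illustration of that pain, here is what a detached update typically looks like with plain POCOs and the EF 4 ObjectContext API; OrderEntities and Order are hypothetical names:

using System.Data;

// The detached object is re-attached and the whole entity is marked
// modified, because the server no longer knows which properties changed.
public void UpdateOrder(Order detachedOrder)
{
    using (var context = new OrderEntities())
    {
        context.Orders.Attach(detachedOrder);
        context.ObjectStateManager.ChangeObjectState(
            detachedOrder, EntityState.Modified);
        context.SaveChanges();
    }
}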
POCOs are dynamically proxied and don't play nicely on the wire; see this MSDN article for the workaround, though. So they can be made to work, but IMO you're better off going with STEs, as I believe they align nicely with WPF/MVVM development.
I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an android application to access it, I was already thinking about having an API.
Given that I will want an external API for my application from day one, is it a good idea to use that API as the interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra effort of developing an API. Often that simply means more than one client, as the benefits of having an API will be evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code maintenance overhead, and any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architectural style, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a Web Service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined Service Interface on the server which is accessed by a Gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the Client Gateway and Service Interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications the general approach would be, on the client, to map your DTOs to your UI models and have your UI views bind to those; on the server, you would map your DTOs to your domain models and, depending on the implementation of the service, persist those.
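As a rough illustration of that mapping (all names hypothetical; a tool like AutoMapper could replace the manual copy):

// Wire-level DTO, shaped for the service contract.
public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// Client-side UI model, shaped for what the view binds to.
public class OrderViewModel
{
    public int Id { get; private set; }
    public string DisplayName { get; private set; }

    public static OrderViewModel FromDto(OrderDto dto)
    {
        return new OrderViewModel
        {
            Id = dto.Id,
            DisplayName = string.Format("{0} ({1:C})", dto.CustomerName, dto.Total)
        };
    }
}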
REST is an architectural style which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit compared to RPC / document-centric web services. In not so many words, the general idea of REST is to design your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred Content-Type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically, REST is a nice approach promoting a loosely-coupled architecture and re-use; however, it requires more effort to adhere to than standard RPC/document-based web services, whose benefits are unlikely to be visible in small projects.
If you're still evaluating which web service technology you should use, you may want to consider using my open source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly-typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be called via SOAP 1.1/1.2, XML, and JSON automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
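For illustration, opting a GET response into HTTP caching can be as simple as setting a Cache-Control header. The sketch below assumes a WCF REST service, to stay consistent with the other examples in this thread; any other stack exposes the same header under a different API:

using System;
using System.Net;
using System.ServiceModel.Web;

// Mark the current GET response as cacheable, so intermediaries and
// clients may serve the cached copy instead of hitting the web app.
public static void AllowCaching(TimeSpan maxAge)
{
    var response = WebOperationContext.Current.OutgoingResponse;
    response.Headers[HttpResponseHeader.CacheControl] =
        "public, max-age=" + (int)maxAge.TotalSeconds;
}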
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true, it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, it's not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data - maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info - and it's very extensible, so you can add a lot of custom metadata and it can still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud [Azure] platform to talk between the data producers and consumers. Just a thought.