I have a question regarding working with Knockout and an API over which I have no control. Forgive my vagueness, but to avoid IP issues, I can't give too much more information. Basically, I have a page in my application that requests data from multiple endpoints (10+) which send back a ton of data, much of which is unnecessary. I've developed a way to make observable those fields I care about, so that's not an issue.
My issue is with assembling the UI. I tried merging all the responses into my view model and then building the UI in the typical Knockout way, but that makes extracting the necessary data into new objects and posting back to the respective endpoints quite difficult, unless I manually code it all.
I then thought of namespacing the responses to keep them separated, and iterating over them when it comes time to post back, since each would be encapsulated for its own endpoint. But I'm hoping someone out there has more experience with a non-REST API and, specifically, with working with multiple endpoints in a single view model. Thanks!
Nothing is stopping you from splitting your separate endpoints out into different services or objects. You could use something like RequireJS to set up dependencies for your view model.
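For illustration, here's a minimal sketch of the namespacing idea in TypeScript (the EndpointModel/PageViewModel names, the URLs, and the field list are hypothetical, not from your code):

import * as ko from "knockout";

// One sub-model per endpoint; each remembers where its data came from.
class EndpointModel {
    fields: { [name: string]: ko.Observable<any> } = {};

    constructor(public url: string, response: any, keep: string[]) {
        // Wrap only the fields we care about in observables.
        for (const name of keep) {
            this.fields[name] = ko.observable(response[name]);
        }
    }

    // Extract the current observable values into a plain object for posting.
    toPayload(): any {
        const payload: any = {};
        for (const name in this.fields) {
            payload[name] = this.fields[name]();
        }
        return payload;
    }
}

class PageViewModel {
    endpoints: { [name: string]: EndpointModel } = {};

    // Iterate over the namespaced sub-models and post each one back
    // to its own endpoint.
    saveAll(): void {
        for (const name in this.endpoints) {
            const ep = this.endpoints[name];
            fetch(ep.url, {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify(ep.toPayload()),
            });
        }
    }
}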
I am modifying a CakePHP application to have an API available on it. My intention is to keep the endpoints as close to RESTful / CRUD-oriented as possible, but I have a use case that I am unsure of.
I have the following requests for adding and editing tasks:
PUT /tasks
PATCH /tasks/:id
One of the behaviors of the task entity in the system I am working on is that it sends emails to the affected users associated with the task when a save or edit is performed. This allows the various parties surrounding the task to be kept updated on its status.
However, the one issue is that in some uncommon cases the end user will need to be able to toggle, on the front end, whether an email is sent.
What is the proper RESTful / CRUD-oriented approach to flag the task endpoints not to fire the email in the API request?
There is no record of the email in the application's database, and it is nice to have the functionality tied into the task lifecycle hooks and called implicitly, so I am not crazy about creating an /emailTask set of endpoints. An optional flag in the task request seems cleaner, but it might not be maintainable if we begin to have similar needs for other behaviors associated with tasks.
Thanks in advance for the help!
PUT /tasks
If you're intending to use this for adding tasks, use POST instead. PUT /tasks implies that you are overwriting all tasks.
As for side effects, this feels to me like a decent use case for a custom HTTP header. Perhaps something like Suppress-Notifications: ?1 ?
Why ?1 as a value? It's the boolean 'true' in the Structured Headers draft, which is intended to be the default format for new HTTP headers that carry a boolean:
https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-header-structure-15#section-4.1.9
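As a concrete illustration, a request suppressing the email might look like this (the task id and body are made up for the example; the header name is the suggestion above, not a standard):

PATCH /tasks/42 HTTP/1.1
Content-Type: application/json
Suppress-Notifications: ?1

{ "status": "complete" }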
I read somewhere that whenever one needs to do data-intensive work, Web API can be used. For example: an autocomplete textbox where we fetch data via AJAX on each key press.
Now someone has told me that Web API shouldn't be used in applications that are not accessed externally; rather, a controller action should be used for the same work, as it is capable of returning data in a fashion similar to Web API.
I'd like to hear your suggestions on this.
Depends on how you look at it. If all you need is AJAX-ification of your controller actions, then you really don't need Web API. Your actions can return a JsonResult, and it is very easy to consume that from your client side through an AJAX call.
Web API makes it easy for you to expose your actions to external clients. It supports the HTTP protocol, and JSON and XML payloads, automatically, out of the box, without you writing the code for it. That said, there is nothing preventing you from consuming the same Web API actions from your own internal clients in an AJAX manner.
So the answer to your question depends on your design. If you don't have external clients, then there is no strong need for you to have Web API; your standard controller actions can do the job.
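For example, consuming a plain MVC JsonResult action from the client is just an AJAX call (a sketch; the /orders/search route and the response shape are assumptions):

// Client-side consumption of a standard MVC action that returns JsonResult.
async function searchOrders(term: string): Promise<void> {
    const res = await fetch("/orders/search?term=" + encodeURIComponent(term));
    if (!res.ok) {
        throw new Error("Request failed: " + res.status);
    }
    const orders: { id: number; name: string }[] = await res.json();
    // ...bind the results to your UI here.
    console.log(orders);
}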
Looking for some guidance.
I'm building an application, SL4 with WCF as the backend service. My WCF service layer sits over a domain model, and I'm converting my domain entities to screen-specific DTOs using an assembler.
I have a screen (security related) which shows a User and the Groups they are a member of. The user can add and remove groups and then hit the Apply button; only when the Apply button is hit will the changes be submitted.
Currently I have a UserDetailDto which is sent to the client to populate the screen, and my intention was, on hitting Apply, to send a UserDetailUpdateDto back to the server to perform the actual update to the domain model.
Does this sound ok to start?
If so, when the user is making changes client-side, should my UserDetailUpdateDto send back only the changes, i.e. what's been added and what's been removed?
I'm not sure; guidance would be great.
Guidance is always tricky when so much is unknown about the requirements and the deployment environment. However, your approach sounds reasonable to me. The key things I like about it:
1) You are keeping your DTOs separate from your Domain Entities. In small simple apps it can be fine to send entities over the wire, but they can start to get in each other's way as complexity and function increase.
2) You are differentiating between a Query object (UserDetailDto) and a Command object (UserDetailUpdateDto). Again, the two can often be satisfied using a single object, but you will start to see them bloat as complexity/function increases, because the object is serving two masters (the Query object is to be consumed at the client and the Command object is to be consumed at the server). I use a convention where all command DTOs start with a verb (e.g. UpdateUserDetail); it just makes it easier to sort 'data' from 'methods' at the client end.
If the SL application is likely to become large with more complex screen logic it may be worth looking at the Model-View-ViewModel (MVVM) pattern. It allows you to separate screen design from screen function. It provides a bit more flexibility in distributing work around a development team and better supports unit testing.
As far as what gets sent back in the UpdateUserDetail object, I think this should be guided by what is going to be easiest to work with at the domain model (or the WCF service sitting over your domain model). Generally, smaller is better when it comes to DTOs.
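To make the Query/Command split concrete, the two DTOs might look roughly like this (a sketch of the shapes only; the property names are illustrative):

// Query DTO: consumed at the client to populate the screen.
interface UserDetailDto {
    userId: number;
    userName: string;
    groups: { groupId: number; name: string }[];
}

// Command DTO: consumed at the server; verb-first name, carrying only the deltas.
interface UpdateUserDetail {
    userId: number;
    addedGroupIds: number[];
    removedGroupIds: number[];
}

Sending only the added/removed group ids keeps the payload small and maps naturally onto the domain operations (add user to group, remove user from group).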
I'm having a little difficulty understanding some architectural principles when developing a service. Suppose a call to a WCF service returns to a client a collection of items (Orders), which are custom classes built from LINQ-to-SQL entity data, and each Order has a one-to-many collection of items (OrderItems) built from the same LINQ-to-SQL context. If I make another call to the service, request a particular OrderItem, and modify its details on the client side, how does the first collection realise that one of its Orders' OrderItems has changed?
My current approach: when changing the OrderItem, I send the OrderItem object to the WCF service for storage via LINQ-to-SQL commands, but to update the collection that the client first retrieved, I use the IList interface to search for and replace each instance of the OrderItem. Subscribing each item to the PropertyChanged event also gives some control. This does work, with certain obvious limitations, but how would one more correctly approach this, perhaps by managing all of the data changes from the service itself? An ORM? Static classes? If this is too difficult a question to answer here, perhaps someone can point me to a link or a chat group where I can discuss it, as I understand this site is geared toward quick Q&A topics rather than guided tutorial discussions.
Thanks all the same.
Chris Leach
If you have multiple clients changing the same data at the same time, then at the end of the day your system must implement some sort of concurrency control. Broadly, that's going to fall into one of two categories: pessimistic or optimistic.
In your case it sounds like you are venturing down the optimistic route, whereby anyone can access the resource via the service - it does not get locked or accessed exclusively. What that means is ultimately you need to detect and resolve conflicts that will arise when one client changes the data before another.
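A minimal sketch of that detection step, using a version number on each item (the version field and the conflict handling are assumptions, not part of your current model):

interface OrderItem {
    id: number;
    version: number; // incremented by the service on every successful write
    description: string;
}

// Server-side check: reject the write if the client edited a stale copy.
function applyUpdate(current: OrderItem, incoming: OrderItem): OrderItem {
    if (incoming.version !== current.version) {
        throw new Error("Conflict: item changed by another client; reload and retry.");
    }
    return { ...incoming, version: current.version + 1 };
}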
The second architectural requirement you seem to be describing is some way to synchronize changes between clients. This is a difficult problem. One way is to build some sort of publish/subscribe system whereby, after a client retrieves some resources from the service, it also subscribes to updates to those resources. You can do this in either a push- or pull-based fashion (pull is probably simpler, i.e. just poll for changes).
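A pull-based sketch of the subscribe side (the /service/changes endpoint and its payload are hypothetical):

// Periodically ask the service for anything that changed since our last sync point.
async function pollForChanges(lastVersion: number): Promise<number> {
    const res = await fetch("/service/changes?since=" + lastVersion);
    const changes: { itemId: number; version: number }[] = await res.json();
    for (const change of changes) {
        // ...locate the matching item in the client-side collection and refresh it.
    }
    return changes.length > 0
        ? Math.max(...changes.map(c => c.version))
        : lastVersion;
}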
Fundamentally you are trying to solve a reasonably complex problem, but it's also one which pops up quite frequently in software.
I am a bit confused about ADO.NET Data Services.
Is it just meant for creating RESTful web services? I know WCF started in the SOAP world, but now I hear it has good support for REST. The same goes for ADO.NET Data Services, where you can make it work in an RPC model if you cannot look at everything from a resource-oriented view.
At least from the demos I saw recently, it looks like ADO.NET Data Services is built on the WCF stack on the server. Please correct me if I am wrong.
I am not intending to start a REST vs SOAP debate but I guess things are not that crystal clear anymore.
Any suggestions or guidelines on what to use where?
In my view, ADO.NET Data Services is for creating RESTful services that are closely aligned with your domain model; that is, the models themselves are published rather than, say, some form of DTO.
Using it for RPC-style services seems like a bad fit, though unfortunately even some very basic features, like being able to perform filtered counts, aren't available, which often means you'll end up using some RPC anyway just to meet your customers' requirements, e.g. so you can display a paged grid.
WCF 3.5 pre-SP1 was a fairly weak RESTful platform. With SP1, things have improved in both URI templates and the availability of AtomPub support, so it's becoming more capable. But it still doesn't really provide an elegant solution for simultaneously supporting JSON, XML, ATOM, or even something more esoteric like CSV payloads, short of making use of URL rewriting, different extensions, method-name munging, etc., rather than just selecting a serializer/deserializer based on the headers of the request.
With WCF it's still difficult to create services that work in a more natural RESTful manner, i.e. where resources include URLs and you can transition state by navigating through them; it's a little clunky. ADO.NET Data Services does this quite well with its AtomPub support, though.
My recommendation would be: use web services where there naturally are services and strong service boundaries to enforce; use ADO.NET Data Services for rich web-style clients (websites, AJAX, Silverlight) where the composability of the URL queries can save a lot of plumbing and your domain model is pretty basic; and roll your own REST layer (perhaps using an MVC framework as a starting point) if you need complete control over the information, i.e. if you're publishing an API for other developers to consume on a social platform, etc.
My 2¢ worth!
Using WCF's REST binding is perfectly valid when working with code that doesn't interact with a database at all. The HTTP verbs don't always have to go against a data provider.
Actually, there are options to filter and skip that give you the paging-like feature, among others.
See here:
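For example, the query options compose directly in the URL (illustrative requests against a hypothetical Orders entity set):

GET /MyService.svc/Orders?$filter=Status eq 'Open'
GET /MyService.svc/Orders?$orderby=CreatedOn desc&$skip=20&$top=10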