Ember adapter and serializer - serialization

I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage, but I want to set up my front-end and back-end in as standard and clean a way as possible before going further with the API and the Ember models.
I could not find a comprehensive guide about the serialization and deserialization performed by the store, but I read the documentation for DS.ActiveModelSerializer and DS.ActiveModelAdapter (which say much the same things!) along with their parent classes.
What are the exact roles of the adapter and the serializer, and how are they related?
Considering the tools I am using, do I need to implement both of them?
Both Grape/ActiveModelSerializer and Ember Data offer customization. As my back-end and front-end are built for each other and not for anything else, which side is it better to customize?

Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, one would want an API that is able to "talk to anything" in case a device client is required or the API ends up being consumed by other parties in the future, so that would suggest configuring your Ember app to fit your backend rather than the other way around. But again, I think this is a subjective question/answer, because no one but you and your team can tell what's good for the scenarios you are, or might be, facing while the app gets built.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, it may be necessary to create an adapter for your application to define a global namespace if you have one (if your controllers are behind a prefix like localhost:3000/api/products, set namespace: 'api'; otherwise this is not necessary), or likewise a host if you're using CORS. If you're using ember-cli, you may also want to adjust the content security policy in the environment config to allow connections to other domains for CORS and the like. All of this can be done per model as well. But again, it's subjective, as it depends on what you want or need to achieve.
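For example, a minimal application adapter in an ember-cli project might look something like the sketch below; the 'api' namespace and the host value are placeholders for whatever your Rails/Grape API actually exposes.

```javascript
// app/adapters/application.js -- a minimal sketch; the namespace and host
// below are placeholders for your own backend.
import DS from 'ember-data';

export default DS.ActiveModelAdapter.extend({
  namespace: 'api',              // only if your routes live under /api/...
  host: 'http://localhost:3000'  // only needed when the API is on another origin (CORS)
});
```

A per-model adapter (e.g. app/adapters/product.js) works the same way if only some models need these settings, and app/serializers/application.js with DS.ActiveModelSerializer is the matching place for serializer tweaks.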

Related

Ocelot gateway support multiple claims in RouteClaimsRequirement

At the moment our config on .NET Core looks like this:
"RouteClaimsRequirement": { "Claim": "settings_read" },
Is it possible to add more claims, like below?
"RouteClaimsRequirement": { "Claim": "settings_read,settings_admin" },
Otherwise people with admin permissions would end up getting a 403 error.
We're dealing with the same issue currently. After a thorough investigation, we landed on a set of possible approaches; which one suits you best is up to you to decide.
The properly proper approach is to implement API scopes on the client that connects to your system. Depending on the flavor of your security setup (we're using IdS4), this may make a lot of sense or none at all.
The semi-proper approach is to horse around with custom middleware in Ocelot, moving the authorization logic out of the JSON config and into C#.
The less proper approach is to set up a bunch of route sections in the config file, one for each endpoint, verb, path, etc., each carrying its own single claim value (see the sketch after this answer). This solution sucks, scales poorly, and I suspect it would fall short either way.
We're currently discussing which of the first two approaches is the most appropriate.
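For completeness, here is a rough sketch (the paths, ports and the second claim value are made up) of what the config-splitting approach would look like in ocelot.json; the array is called ReRoutes in older Ocelot versions and Routes in newer ones:

```json
{
  "ReRoutes": [
    {
      "UpstreamPathTemplate": "/settings",
      "UpstreamHttpMethod": [ "GET" ],
      "DownstreamPathTemplate": "/api/settings",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5001 } ],
      "RouteClaimsRequirement": { "Claim": "settings_read" }
    },
    {
      "UpstreamPathTemplate": "/settings/manage",
      "UpstreamHttpMethod": [ "POST", "PUT", "DELETE" ],
      "DownstreamPathTemplate": "/api/settings/manage",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5001 } ],
      "RouteClaimsRequirement": { "Claim": "settings_admin" }
    }
  ]
}
```

Note that this still cannot express an OR between two claims on the same route, which is part of why it falls short of what the question asks for.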

Zend Framework 3 singletons

I'm creating a new application in Zend Framework 3 and I have a question about a design pattern.
Without going into much detail: this application will have several services, as in it will be connecting to external APIs and even to multiple databases. The workflow is also very complex; a single action can have multiple flows depending on several external factors (which user is logged in, configs, etc.).
I know about dependency injection and the Zend Framework 3 Service Manager; however, I am worried about instantiating several services when the flow will actually use only a few of them in certain cases. We will also have services depending on other services. For this, I was thinking about using singletons.
Is a singleton really a solution here? I was looking for a way to use singletons in Zend Framework 3 and haven't figured out an easy one, since I can't find a way to use the Service Manager inside a service; I can't retrieve the instance of the Service Manager outside of the factory system.
What is an easy way to implement singletons in Zend Framework 3?
Why use singletons?
You don't need to worry about having too many services in your service manager, since they are only instantiated when you actually get them from the service manager.
Also, don't use the service manager inside any class except a factory. In ZF3 it was removed from the controllers for a reason; one of them is testability. If all services are injected via a factory, you can easily write tests. And if you read your code next year, you can easily see which dependencies a class needs.
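For reference, a minimal ZF3 factory looks roughly like the sketch below; the UserService and MailService class names are placeholders for your own services.

```php
<?php
// A minimal sketch of a zend-servicemanager v3 factory. UserService and
// MailService are placeholder class names.
namespace Application\Service\Factory;

use Interop\Container\ContainerInterface;
use Zend\ServiceManager\Factory\FactoryInterface;
use Application\Service\UserService;
use Application\Service\MailService;

class UserServiceFactory implements FactoryInterface
{
    public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
    {
        // Dependencies are pulled from the container here, in the factory,
        // never from inside UserService itself.
        return new UserService($container->get(MailService::class));
    }
}
```

You register the factory under 'factories' in your module's service_manager config, and the service is only built the first time something asks the container for UserService::class.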
If you find there are too many services being injected into a class that are not always needed, you can:
Use the ProxyManager. This lazy-loads a service: it isn't instantiated until a method is actually called on it.
Split the service: move some parts of a service into a new service. For example, you don't need to place everything in a UserService; you can also have a UserRegisterService, UserEmailService, UserAuthService and UserNotificationsService.
Instead of ZF3, you could also think about zend-expressive. Without getting into too much detail, it is a lightweight middleware framework. You can use middleware to detect what is needed for a request and route to the required action to process it. Something like this can probably also be done in ZF3, but maybe someone else can explain how to do it there.

Existing SOAP service and new Angular Web App

We have an established WCF SOAP service. Its interface is defined in WSDL, from which C# classes are generated for our server (customers generate client-side bindings in various languages, from the same WSDL). The WSDL has a current version, which we can change a bit, and old versions, which we can't change or drop without a deprecation period, consultation etc. The SOAP requests tend to be complicated, having multiple XML namespaces within the same request.
The WCF SOAP service has a lot of "smarts" in it, and provides exactly the kinds of fetching and reporting facilities that we need for a new web application we need to make. We hope to use AngularJS for the client side of that. But these complex SOAP requests aren't easy to make from the JavaScript world. If only we had a REST service, we could use Angular's $resource service. If not that, then a server that spoke JSON, albeit in an RPC style like SOAP, would run a fairly close second.
I've had various ideas for how the impedance mismatch between our server and client might be mitigated. But nothing sounds quick or easy.
I've thought of:
Write a new REST service. Exactly what the client-side wants, but a serious piece of new development.
WebHttpBinding looks like it offers something, but it seems to require marking up the C# with custom attributes (hard to do when our C# is generated from WSDL) and possibly wouldn't support our complex types.
Obtain or write loads of client-side JS to abstract away calling SOAP services. But, unless this can be auto-generated from the WSDL, it's a huge amount of client-side code to write.
Write an IDispatchMessageFormatter for the server, to accept some JSON format of messages that I invent. Sounds hard, especially as good examples of people implementing and integrating IDispatchMessageFormatter seem hard to come by.
Write a MessageEncoder to swap between JSON and XML. But this isn't really an encoding operation, as became very clear when I tried to write it!
I'm searching for suggestions.
Generally, I recommend a REST service for any AngularJS development and have wrapped a number of legacy systems with Node.js API servers. Of course there is a massive amount of "it depends", but generally most projects will be happier and more productive following that route.
Some Things To Think About
How well does your current SOAP API fit the user interface requirements?
Are you experienced with Express, Sinatra, Flask or another micro-framework that allows rapid development of REST APIs? I find I can build a solid Node.js Express API server in a couple of hours and then extend it as I build the AngularJS application out (a minimal sketch of such a wrapper follows this list).
How experienced are you with AngularJS? It's a more advanced project to build a complex data layer client-side.
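To make the Node.js option concrete, here is a minimal sketch of an Express façade that exposes one REST/JSON resource backed by an existing SOAP operation. The WSDL URL, the GetReport operation and the reportId parameter are invented for illustration, and the node-soap package is just one possible way to call the legacy service.

```javascript
// npm install express soap
const express = require('express');
const soap = require('soap'); // node-soap: builds client calls from the WSDL

const WSDL_URL = 'https://example.com/LegacyService?wsdl'; // hypothetical

const app = express();

app.get('/api/reports/:id', async (req, res) => {
  try {
    const client = await soap.createClientAsync(WSDL_URL);
    // node-soap exposes promisified <Operation>Async wrappers for WSDL operations.
    const [result] = await client.GetReportAsync({ reportId: req.params.id });
    res.json(result); // Angular's $http/$resource can consume this directly
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

app.listen(3000);
```

The point is less the specific libraries than the shape: the browser only ever sees simple JSON resources, and all the SOAP complexity stays on the server.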
Six Reasons Why REST is Important for AngularJS
It's much faster to write Angular code using $resource and $http. "Get the API right" is a good recommendation for effective AngularJS development. Indeed, you could argue that AngularJS is designed for REST, and that's why plain JavaScript works for the model (see 2). A short $resource example follows this list.
Angular's plain-old-JavaScript-object data model works well with a REST API that speaks JSON matching the user interface. However, issues arise when there isn't a good fit: Angular doesn't have a formal data model, so you end up writing a lot of code trying to rationalize your API to work well with Angular. Third-party libraries like breeze.js may offer a solution, but it's still awkward.
You can scale easily with caching. It's easy to add Redis or memcached or Varnish or other common HTTP caching solutions into the mix. Resource-based abstractions are perfect for caching strategies due to the transparency and idempotency of a REST API.
Loose coupling of front-end and server: it will be easier to support changes to the backend if you migrate off SOAP or need to integrate with other services.
It's generally easier to test JSON APIs separately from AngularJS logic, so your test suites will be simpler and more effective.
Your new REST API will be easier to leverage for future AngularJS and JSON-oriented projects.
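As an illustration of point 1, consuming a JSON endpoint like the hypothetical /api/reports resource above takes only a few lines with $resource:

```javascript
// AngularJS 1.x client using ngResource against a JSON REST endpoint.
// '/api/reports/:id' matches the hypothetical Express façade sketched earlier.
angular.module('app', ['ngResource'])
  .factory('Report', function ($resource) {
    return $resource('/api/reports/:id', { id: '@id' });
  })
  .controller('ReportCtrl', function ($scope, Report) {
    // Issues GET /api/reports/42; the JSON response becomes a plain JS object
    // the view can bind to directly.
    $scope.report = Report.get({ id: 42 });
  });
```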
I hope that helps.
Cheers,
Nick

How to route WCF REST services?

I was planning to use service routing (on WCF/REST) to do some common tasks before a request hits the actual service. Now that I've read more about it, it looks like REST is not supported yet on RoutingService, and the suggested method is to use System.Web.Routing or ARR.
What needs to happen in the router is key validation, header-value extraction and versioning.
ARR doesn't look right for this, as it just routes and there is no "handler" we have access to. System.Web.Routing looks like a lot of custom implementation, which might undermine the efficiency of WCF.
An old-school alternative I'm thinking of is to have the common functionality in one chain-of-responsibility implementation and just compose it into every service. This has the disadvantage of being referenced in N places for N services, but it increasingly looks like the only alternative if I don't want to mess with WCF's handling of endpoints.
I'm looking for advice on the right way to do this, and any samples.
I didn't try it, but maybe writing a custom service behavior can solve your problem. Take a look here: Extending WCF with Custom Behaviors.
The idea is to extend the WCF engine with a custom behavior, then attach that behavior to your service. This is transparent to the services.
Take a look at HttpMessageHandlers in the new WCF Web API project, http://wcf.codeplex.com. This mechanism allows you to do something similar to Rack or WSGI. I have a couple of examples of what you can do with them on my blog: http://www.bizcoder.com/index.php/2011/05/22/how-to-get-ahead-with-messagehandlers/

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning to build, and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an Android application to access it, I was already thinking about having an API.
Given that I will want an external API to my application from day one, is it a good idea to use that API as the interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra development effort in developing an API? Often that simply means more than 1 client, as the benefits of having an API will be evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code maintenance overhead and any benefits that might come from separating the interface and business layers such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a web service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined service interface on the server, accessed by a gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the client gateway and service interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications the general approach would be, on the client, to map your DTOs to your UI models and have your UI views bind to those; on the server, you would map your DTOs to your domain models and, depending on the implementation of the service, persist them.
REST is an architectural pattern which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit as RPC / document-centric web services. In not so many words, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the content type preferred by your HTTP client (i.e. indicated in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically, REST is a nice approach that promotes a loosely-coupled architecture and re-use; however, it requires more effort to 'adhere' to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
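As a rough, framework-neutral illustration of the resource-oriented style described above (sketched with Node/Express only because it is compact; the same idea applies to any stack), the URL names the resource, the HTTP verb names the action, and the representation is negotiated from the Accept header:

```javascript
const express = require('express');
const app = express();

// Resource-style: GET /customers/reports/1 instead of /GetCustomerReports?Id=1
app.get('/customers/reports/:id', (req, res) => {
  const report = { id: req.params.id, title: 'Placeholder report' }; // made-up data
  res.format({
    'application/json': () => res.json(report),
    'application/xml': () =>
      res.type('application/xml').send(`<report id="${report.id}"/>`),
  });
});

app.listen(3000);
```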
If you're still evaluating which web service technology to use, you may want to consider my open-source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly-typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling every web service you create to be called via SOAP 1.1/1.2, XML and JSON endpoints automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework; there's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true: it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, it is not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data, maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info, and it's very extensible, so you can add a lot of custom metadata and still have it easily parsed by any Atom library. Microsoft uses AtomPub in its cloud (Azure) platform to talk between the data producers and consumers. Just a thought.