WCF OData for multiplatform development?

The OP in this question asks about using WCF/OData as an internal data access layer:
Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly
The resounding reply seems to be: don't do it. I'm in a similar position to the OP, but have a concern not raised in the original question. I'm trying to develop (natively) for a lot of different platforms but want to keep as much of the data and business logic server-side as possible. So I'll have iOS/Android/Web (MVC)/desktop applications. Currently, I have a single WinForms application with an ORM data access layer (LLBLGen Pro).
I'm envisioning moving most of my business/data access logic (possibly still with LLBLGen or another ORM) behind a WCF/OData interface, then making all my clients on the different platforms very thin (basically UI and WCF calls).
Is this also overengineered? Am I missing a simpler solution?

I cannot see any problem in your architecture, nor would I consider it overengineered, as OData is a standard protocol and your concept conforms to the DRY principle as well.
Let me turn the question around: why would you implement the same business logic in each client, introducing more possible bugs and losing the ability to fix errors in one single, centralized place? Your idea also lets you implement the security layer only once.
OData is a cross-platform standard and you can find OData libraries for each development platform (MSDN, OData.org, JayData for JavaScript). Furthermore, you can use OData FunctionImports/service methods and entity-level methods, which will simplify your queries.
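To give a flavour of how little a client needs, here is a minimal C# sketch; it assumes the public OData sample service at services.odata.org, and $filter/$top are standard OData query options:

    using System;
    using System.Net;

    class ODataClientSketch
    {
        static void Main()
        {
            // Any platform that can issue an HTTP GET and parse the response
            // (Atom/XML by default, JSON via an Accept header) can consume OData.
            var url = "http://services.odata.org/V2/OData/OData.svc/Products" +
                      "?$filter=Price%20gt%2010&$top=3";
            using (var client = new WebClient())
            {
                Console.WriteLine(client.DownloadString(url));
            }
        }
    }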

If you are doing multiplatform development, you may find it more practical to choose a platform-agnostic communication protocol such as HTTP, rather than bringing multiple drivers and ORMs along to access your data sources directly. In addition, since OData is a REST protocol, you don't need much on the client side: anything that can format OData HTTP requests and parse HTTP responses will do. There are, however, a few aspects to be aware of:
An OData server is not a replacement for a SQL database. It supports batches, but they are not the same as DB transactions (although in many cases they can be used to model transactional operations). It supports parent-child relations, but it does not support JOINs in the classic SQL sense. So you have to plan what you expose as an OData entity. It's all too easy to build an OData server using WCF Data Services wrapping an EF model: too easy, because people often expose low-level database content instead of building high-level domain types.
As of today, OData multiplatform clients are still under development, but they are coming. If I may suggest something I am personally working on, have a look at the Simple.Data OData adapter (https://github.com/simplefx/Simple.OData, look at its Wiki pages for examples) - it has a NuGet package. While this is a client library that only supports .NET 4.0, part of it is being extracted to be published as a portable class library, Simple.OData.Client, to support .NET 4.x, Windows Store, Silverlight 5, Windows Phone 8, Android and iOS. In fact, if you check the winrt branch of the Git repository, you will find a multiplatform PCL already; it's just not published on NuGet yet.
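For a taste of the programming model, here is a short sketch along the lines of the examples on the project's Wiki (the service URL and the Products collection are illustrative, not guaranteed API):

    using System;
    using Simple.Data;

    class SimpleDataODataSketch
    {
        static void Main()
        {
            // Open a dynamic "database" over an OData feed instead of a SQL connection.
            dynamic db = Database.Opener.Open("http://services.odata.org/V2/OData/OData.svc/");

            // Translated into an OData $filter query under the covers.
            var cheapProducts = db.Products.FindAll(db.Products.Price < 20);

            foreach (var product in cheapProducts)
                Console.WriteLine(product.Name);
        }
    }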

Related

Should a REST API reflect server-side application architecture

I'm in the middle of writing my first web app. I'm just wondering what the conventions are when it comes to REST API design. Is it better to have it reflect my server-side architecture, or whatever seems to be easier to reason about?
I'm thinking of either doing:
/serviceProvider/product
or
/product/serviceProvider
My server-side architecture is separated into modules organized by service provider; however, they all expose a product query API.
APIs should ideally be designed to make the most sense to their consumers. There isn't really a good reason to reflect your "server architecture" at all. In fact, this is usually called a leaky abstraction (or a leaky API) and is considered bad practice, mainly because your application structure may change, and then you have these possible scenarios:
you need to change your API, which is a non-trivial task when it's already being used by someone;
your API stops being reflective of your application structure which leads to inconsistencies;
exposing your application structure or database schema to the world may have security implications.
With these things in mind, you might as well design the API with focus on ease of use in the first place. The consumer of your API doesn't need to know or care about your application architecture.
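To make that concrete, here is a hypothetical sketch (ASP.NET Web API 2 attribute routing; ProductCatalog is an invented helper): the route is organized around what the consumer asks for, with the provider as a mere query filter, rather than mirroring a provider-centric module layout.

    using System.Web.Http;

    public class ProductsController : ApiController
    {
        // GET /products?provider=acme
        // Consumers think in terms of products; which provider module answers is invisible.
        [HttpGet, Route("products")]
        public IHttpActionResult GetProducts(string provider = null)
        {
            var products = ProductCatalog.Query(provider); // hypothetical internal fan-out
            return Ok(products);
        }
    }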
I believe that keeping the API aligned with the architecture is important, because it forces you to offer a simple API, and that in turn enforces a simplified architecture on the server side.
That said, of course you don't want to expose every server-side method, or even every server-side property of the returned objects.
At Kaltura we also believe in flat (not nested) paths to simplify the API.
For more guidelines, see my blog: http://restafar.com/create-new-rest-server/

Understanding BizTalk Development

Coming from a .NET developer's perspective, I've been recently introduced to BizTalk. I was expecting something like a series of service references, auto-mapping classes and workflows. I really wasn't expecting heavy XSD use, and I was surprised by the orchestration maps.
I just don't understand why it isn't more like a bunch of enterprise features built on a foundation of WCF.
Can anyone help me understand the idea behind how BizTalk was designed?
BizTalk can work with WCF services, but doesn't need to for some simple scenarios. It can also work in scenarios where custom non-WCF adapters are needed; it includes many useful ones out of the box, such as FTP, SFTP, file system access, POP3, SharePoint, Azure Service Bus, MSMQ, and MQSeries. Custom adapters can be written for legacy systems and services that don't expose WCF endpoints. There are many WCF adapters for cases where WCF is useful, and these adapters can be used and configured a bit more easily than standing up a WCF service from scratch. BizTalk can also expose its services as WCF endpoints.
The real power of BizTalk is in its server architecture, which allows for high availability, durable messaging, suspending and resuming messages, advanced debugging options, and rapid development of artifacts (like maps and orchestrations). It also provides for some powerful out of the box support for EDI, HL7, and WCF LoB integration work.
XML is at the heart and soul of the BizTalk messaging engine. This is good because XML is standardized and powerful; it's bad because XML is unwieldy at times, especially when dealing with larger messages and BLOBs.
Receive ports get the data into BizTalk's messaging engine (using adapters and receive locations). Send ports send the XML (or other) data out using the adapters mentioned above.
Maps use XSLT behind the scenes to transform the XML messages; it's possible to direct a map to use custom XSLT, or to use C#, VB, or JScript as well. However, for most trivial mapping tasks, the visual mapping interface allows for rapid development and testing of mappings between different message types. They can be called from receive ports, send ports, or orchestrations.
Orchestrations are more or less services that use the XLANG/s language. When designed properly, they can provide very powerful processing of business logic and application handling, all with the above-mentioned architectural features that BizTalk provides (durable messaging, high availability).
I look at it from a different perspective. BizTalk is more in line with web/SOAP and cross-platform standards, XML and now JSON, than WCF is. BizTalk also supports a lot more protocols than WCF. BizTalk supports WCF, not the other way around.
The fact that the WCF stack can build contracts on, and serialize/deserialize, .NET classes is the custom approach. Keep in mind, WCF is just hiding all the XML/XSD from you; it's still there and is the same as what BizTalk uses.
BizTalk was designed and shipped before WCF as a reliable, cross-platform, multi-protocol integration engine. In terms of capabilities, the BizTalk stack as a whole is several orders of magnitude beyond WCF. In practice, we spend a lot of time in a BizTalk app working around the limitations of WCF.*
*For clarity, I'm referring to the OOB binding elements mostly and their application to actual implementations. WCF as a framework is perfectly serviceable.
My research indicates that BizTalk has remained largely unchanged since 2004, and thus would not experience the kind of technological convergence seen in other areas of the Microsoft stack. The reason for this appears to be because of a painful migration from BizTalk 2002 to 2004 that no one wants to replicate. Reminiscent, to me, of the many versions of the Entity Framework.
In 2010-2011, there was a "BizTalk is dead" movement, with promises that a combination of WCF, Workflow Foundation, and AppFabric on Azure would be the replacement tooling. There has been little talk of it at all since 2012 -- it looks like the two technologies each had their unique pros and cons, but the two never really competed.
BizTalk has the strength of out-of-the-box throttling, disk persistence, and an assortment of adapters that aren't standardized elsewhere (enterprise-iness). It's as if its stance is to tame an unwieldy beast. It still appears to lag in taking advantage of the scalability options that have come about in the last 10 years. The other stack is more along the lines of what I initially expected, but lacking in enterprise-iness.
I don't quite have my head wrapped around BizTalk being described as a publish/subscribe model versus... some other model. Need to look more into that.
In conclusion, I don't like either technology set, and I think they're both in need of work.
Thanks to all who read this question and those who answered it. I know subjective answers aren't a big thing on Stack Overflow.

WCF Data Service vs WCF RIA Service

I need to evaluate an SOA architecture choice between WCF Data Services and WCF RIA Services. The following are some of my parameters:
Multiple Client (HTML5/iOS/Android/Windows 8 Metro/Windows Phone 7)
Disconnected and offline operation
Validation engine
Performance
Network data compression
Support for Cloud Environment
Could anyone help me gather some data for my evaluation? Also, is there any other good option available for SOA implementation?
I am aware of DevForce.
I'm intimately familiar with RIA Services and know where it falls short. I know little about Data Services and DevForce, but I know that DevForce advertises being better than RIA Services in exactly those areas that annoy me, which are:
RIA can't do group-by or joins of any sort. (Interestingly, the DevExpress toolkit can do some trickery to group on a RIA Services source in some cases.)
It does understand relationships, but not of the many-to-many kind, where it would have to handle the translation to a bridge table transparently. (EDIT: this is planned for Open Ria Services)
The change tracking works through a context (unit of work) which can only be submitted or rejected as a whole (out of the box, anyway); see the sketch after this list. That usually leads to an application with many contexts and weird copy operations to transfer entities. The RIAServicesContrib project helps with that.
It appears to be no longer maintained. I base this on the fact that when Entity Framework 4.1 released its new DbContext API (for code first), Microsoft released a compatibility library with which you could use RIA and EF code first. That library has a version lock on EF 4.1, though, and Microsoft now just states that RIA Services doesn't support DbContext, in the form of an Orwellian note to Visual Studio 2012. (EDIT: DbContext is now supported again - EF is currently supported up to version 5, with 6 being likely only supported in Open Ria Services)
Some tasks, such as observing changes of related entities programmatically (rather than through data binding), are hard.
Some things which should be really simple, such as getting the context from an attached entity, are hard.
All queries are single requests; only the remaining CUD (of the CRUD) is batched.
Custom methods to invoke along with normal CUD operations are very limited. In particular, it's not possible to cancel one that is scheduled without cancelling the whole context. That has made them almost useless in most cases where I wanted to use them.
You will have to decide whether or not to use the DomainDataSource, which is a beast that does too much and too little. You can fetch everything programmatically too, but some things are really quick to wire up with this XAML helper.
There is no built-in support for serializing entities to isolated storage.
Silverlight (and JavaScript, I believe) are the only supported platforms - no WPF. (EDIT: this is planned for Open Ria Services - in particular, it should be able to serve BreezeJS)
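To illustrate the unit-of-work point from the list above, here is a sketch of the client-side pattern (ProductDomainContext and GetProductsQuery are typical generated names, used here hypothetically):

    // Every pending change in the context is submitted - or rejected - together.
    var ctx = new ProductDomainContext();
    ctx.Load(ctx.GetProductsQuery(), loadOp =>
    {
        var first = ctx.Products.First(); // requires System.Linq
        first.Price += 1m;
        // There is no per-entity submit: SubmitChanges sends all pending changes,
        // and RejectChanges rolls all of them back.
        ctx.SubmitChanges(submitOp =>
        {
            if (submitOp.HasError)
            {
                ctx.RejectChanges();
                submitOp.MarkErrorAsHandled();
            }
        }, null);
    }, null);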
Since Data Services is older (I think), I never cared to look closely at it. I did, however, recently skim the feature list of DevForce, and I believe it sounds exciting, although I can't say anything about it from experience.
(EDIT: I found a very knowledgeable comparison of RIA Services and WCF by Colin Blair here.)
The architect compares his product to RIA Services here. I covered some of his points, but not all.
Altogether I can say that RIA Services is clearly better than raw WCF, but it's also clear there has to be something better than that. I hope that's DevForce.
Both expose entities via OData, but RIA Services is specifically targeted at:
Silverlight consumption
Poor man's services - they're easier to get up and running with little effort
WCF Data Services are far more powerful and configurable. The biggest difference (IMO) is that RIA Services requires one host type per entity, whereas WCF Data Services can automatically host an entire context (a type with multiple IQueryable properties).
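For comparison, a minimal sketch of such a Data Service (NorthwindEntities stands in for any EF context):

    using System.Data.Services;
    using System.Data.Services.Common;

    public class NorthwindDataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Every IQueryable<T> property on the context becomes a read-only entity set.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
        }
    }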
That said, both implementations are pretty half-baked (again, IMO only) and not really well thought out or implemented. You may be better off with traditional WCF operations hosted with WebGet/WebInvoke attributes, or using the WCF Web API.
I wouldn't go with DevForce, if only because it mainly targets Silverlight implementations (if I recall correctly). That said, their package is pretty cool and far more feature-complete than RIA or WCF Data Services.

Are NHibernate and XML Webservices (.asmx) a good match?

I'm looking at a new architecture for my site and was wondering if pairing NHibernate with a web service core is a good idea. What I want to do is make my web service the core of my business, from the site front ends to the utilities I write. I'm trying to make all of my UIs completely ignorant of anything but my service APIs.
In a simple strawman experiment, I'm running into issues with serializing my Iesi ISets... this is causing me to rethink the strategy altogether.
I know I could just develop a core library (DLL) and reference that in each of my applications, but maintaining that DLL's version across a minimum of six applications seems like it's going to cause me much pain.
With NHibernate, what are the pros and cons of those two approaches?
I see no problem in using NHibernate and web services together - I just don't think it's a good idea to send the entities themselves over "to the other side".
A better approach is to use a set of DTOs made for the service - then you won't run into issues like serializing unknown types and such.
You can use a library like AutoMapper to do the mapping from the entities to the DTOs.
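A minimal sketch of that combination (Order/OrderDto and friends are invented names; this uses AutoMapper's classic static API):

    using System.Collections.Generic;
    using AutoMapper;

    // NHibernate-mapped entity; the Iesi ISet is what trips up the XML serializer.
    public class Order
    {
        public virtual int Id { get; set; }
        public virtual Iesi.Collections.Generic.ISet<OrderLine> Lines { get; set; }
    }

    public class OrderLine
    {
        public virtual string Product { get; set; }
    }

    // Flat DTOs exposed by the service, built from plain serializable types.
    public class OrderDto
    {
        public int Id { get; set; }
        public List<OrderLineDto> Lines { get; set; }
    }

    public class OrderLineDto
    {
        public string Product { get; set; }
    }

    public static class OrderMappings
    {
        // Call once at startup; AutoMapper maps the ISet to the List element-wise.
        public static void Configure()
        {
            Mapper.CreateMap<Order, OrderDto>();
            Mapper.CreateMap<OrderLine, OrderLineDto>();
        }

        // Used at the service boundary instead of returning the entity itself.
        public static OrderDto ToDto(Order order)
        {
            return Mapper.Map<Order, OrderDto>(order);
        }
    }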
There's a lot of stuff written about this, some of it:
http://martinfowler.com/bliki/FirstLaw.html
http://ayende.com/Blog/archive/2009/05/14/the-stripper-pattern.aspx
http://elegantcode.com/2008/04/27/dtos-or-serialized-domain-entities/
DTOs vs Serializing Persisted Entities
As a side note, for the service itself you could, design-wise, use an approach like the one Davy Brion describes here: http://davybrion.com/blog/2009/11/requestresponse-service-layer-series/
I don't know NHibernate, but I want to remind you that you should be using WCF for new web service development, unless you are stuck in the past (.NET 2.0). Microsoft now considers ASMX web services to be "legacy technology", and you can imagine what that means.

WCF; what's the big deal?

I'm just getting into WCF, but from what I've read so far, like the sample scenarios I found on MSDN and some other sites, I can do all of that with web services and applications that call those web services. So why the need for an elaborate layer like WCF?
Most of the comparisons I've googled for explain it more from a programming point of view. I'm still trying, without much success, to find answers as to when it makes business (and, of course, programming) sense to use the WCF layer, as opposed to the traditional application-to-web-services model.
Is there anyone here with experience in both who can advise on how to go about choosing either plain web services or the WCF way? What are those things that absolutely can't be done using plain old web services called by applications, where the WCF layer will save the day?
You've fallen for the Microsoft trap of "it's just about web services" :-)
It's actually a lot more:
it's about service-oriented programming in general (not just web services - you can also write TCP/IP based services, MSMQ queue-based messaging and a lot more)
it's about unifying all the diverse programming models that existed so far (ASMX, Enterprise Services, DCOM, .NET remoting)
it's about providing a lot of ready-made and ready-to-use plumbing which can handle things like reliable messaging, transaction support, security in any shape or form you'd like, service discovery, and a lot more
it's about separating the service implementation from the details of how clients will call it and making this a configurable stack of protocols, encodings etc.
Sure - most of this stuff can be done with ASMX or .NET Remoting, but try to convert an ASMX web service to be callable on your intranet using TCP/IP and transport security... Many of those "older" technologies have a very intricate and direct link to how they're being used - you can't easily change that without changing the whole service code.
WCF separates all these "plumbing details" - which endpoint to call, what protocol to use to call it, how to handle security, and so on - into a "WCF stack" that's configurable and composable, so you can easily switch your service XYZ from using HTTP and allowing anonymous callers to using TCP/IP with Windows credentials required. Your service code won't change a bit; it's only configuration of the plumbing.
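Here is a minimal self-hosting sketch of that idea (IOrderService/OrderService are invented names): the same implementation exposed over anonymous HTTP and over TCP with Windows credentials, differing only in host setup.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string GetStatus(int orderId);
    }

    public class OrderService : IOrderService
    {
        public string GetStatus(int orderId) { return "Shipped"; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(OrderService));

            // HTTP endpoint for anonymous callers...
            host.AddServiceEndpoint(typeof(IOrderService),
                new BasicHttpBinding(), "http://localhost:8080/orders");

            // ...and a TCP endpoint requiring Windows credentials;
            // the service implementation is untouched.
            var tcp = new NetTcpBinding(SecurityMode.Transport);
            tcp.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
            host.AddServiceEndpoint(typeof(IOrderService),
                tcp, "net.tcp://localhost:8081/orders");

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }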
That to me is the most compelling reason for WCF - I can totally concentrate on my actual service code, and not pollute it with lots of plumbing stuff - how to handle transports and text encodings and all that. And I can easily change that and adapt to new requirements and needs in deployment without having to touch my actual service code.
Plus, the second major point is extensibility - most of the older technologies just had their one, set way of doing things and many didn't lend themselves to being extended. You had to either adapt to use it the way they did it - or forget about it. WCF has a vast and very intricate system for extending just about anything - you can create your own transport protocol (people have created UDP or SMTP based bindings), you can create your own message encoders (like I had to do to talk to a web service which could only understand ISO-8859-1 encoded messages), and you can extend just about anything else in WCF - all in an organized, well-documented, very stable and safe way.
So these two things - separating out plumbing into configurable layers, and extensibility to the maximum - are the most compelling reasons for me to use WCF.
Edit: Kobi's link above is a far better answer than mine.
WCF is basically a better architecture for supporting communications. It breaks out many dependencies such as hosting (it's not IIS-dependent), transport, security, and addressing into plugin components, and allows customisation to a very high degree.
Yes, you can do a lot with traditional technologies, but you can do more with WCF. If you don't need the features now, then of course you can continue with legacy technologies; however, if you prefer, you can opt for a better architecture now with an eye on the future, at the cost of having to switch technologies now.
Take this example: if you have a legacy ASMX web service, how easily can you offer the same service via an MSMQ endpoint? With WCF, it's as simple as adding new config settings.
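A sketch of what that amounts to, shown in code rather than app.config only to keep it readable here (it reuses the invented IOrderService contract from the sketch earlier in this question):

    using System.ServiceModel;

    static class MsmqEndpointSketch
    {
        static void AddQueuedEndpoint(ServiceHost host)
        {
            // Same contract, now also offered over a queue; no service code changes.
            host.AddServiceEndpoint(typeof(IOrderService),
                new NetMsmqBinding(), "net.msmq://localhost/private/orders");
        }
    }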
I assume that you are not asking "why not just stick with SOAP/HTTP". WCF allows you to choose a number of different transports rather than just simple HTTP, but as you observe, the WS-* technologies allow you to do all that. So I think you're asking: why use a powerful but complex framework when the raw technologies are not impossibly complex?
You could ask this same question of any Framework. You could just use the basic technologies and avoid the learning curve of adopting the framework.
Frameworks such as WCF do have a learning curve, but consider what happens if you don't use them:
You find that you write boilerplate code for each service invocation. You then either accept duplication or begin to refactor and build your own libraries. Before long you've developed your own framework, but it's not the same as anybody else's. So then any new team member has to learn your local framework - a serious learning curve.
Note also that WCF addresses issues such as the monitoring of the deployed solution.
The biggest appeal to me is testability. Services are defined by a CLR interface, which is quite easy to mock inside a test harness. Some words of warning, however: with great flexibility comes some pain in the configuration process, along with a few "gotchas". An example of a gotcha is that WCF, adhering closely to a "best practice", requires an active SSL connection in order to pass SOAP authentication credentials over HTTP. This hinders testing quite a bit.
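A minimal sketch of why that helps (IQuoteService/PortfolioValuer are invented names): because the contract is a plain CLR interface, a hand-rolled fake can stand in for the real service in tests, with no service host or SSL setup required.

    using System.ServiceModel;

    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        decimal GetQuote(string symbol);
    }

    // Production code depends only on the contract, not on a generated proxy.
    public class PortfolioValuer
    {
        private readonly IQuoteService quotes;
        public PortfolioValuer(IQuoteService quotes) { this.quotes = quotes; }

        public decimal Value(string symbol, int shares)
        {
            return quotes.GetQuote(symbol) * shares;
        }
    }

    // Test double: no WCF plumbing involved.
    public class FakeQuoteService : IQuoteService
    {
        public decimal GetQuote(string symbol) { return 10m; }
    }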