Has anybody compared WCF and ZeroC ICE?

ZeroC's ICE (www.zeroc.com) looks interesting and I'd like to compare it to our existing software that uses WCF. In particular, our WCF app uses server callbacks (via HTTP).
Has anybody compared them? How did it go? I'm particularly interested in the performance aspect, since interoperability isn't much of a concern for us right now. Thanks!

I did a very brief review of ICE a few years ago, and although I haven't compared them directly, having reasonable knowledge of WCF my thoughts might have some relevance.
Firstly, it's not entirely fair to compare WCF with ICE, as ICE is a specific remote communication mechanism while WCF is a higher-level remote communications framework.
While WCF is often thought of as implementing SOAP web services, and that is indeed its main use to date, it can also be used for implementing remote services using all manner of encodings and transport channels, which means it can theoretically be used for performant comms between applications.
In comparison, ICE is a cross-platform remote communication mechanism that uses binary encoding for performant communications between applications. It's something of a simplified evolution of CORBA and is more directly comparable to DCOM, .NET Remoting, and Java RMI.
However, even though there's no direct correspondence between ICE and WCF, if you need your .NET app to communicate remotely then they're both contenders. Some of the decision points you might want to consider include:
Resourcing. It'll be easier to find developers with WCF experience than ICE experience.
Performance. If you want performance then ICE is fast, but WCF can also be used in a performant configuration. Alternatively, .NET Remoting can provide very good performance, and whatever the MS-sponsored benchmarks say, I've seen it outperform WCF by 10%.
Cross-platform. If you need to communicate with non-Windows applications then you're limited in the WCF options you can use. In addition, since every SOAP stack seems to implement the standards differently, it can be a pain creating truly generic web services (though WS-I helps).
If you don't need every ounce of performance from day one, then I'd personally plump for WCF to start with, and then consider ICE if performance ever becomes critical. Even then it might be cheaper to scale out your service boxes than it is to move to ICE, and if you don't have any exotic cross-platform needs then you could always look at reconfiguring WCF for binary encoding, etc.
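For illustration, a minimal sketch (names are hypothetical) of what "reconfiguring WCF for binary encoding" can look like when done in code rather than config:

```csharp
using System.ServiceModel.Channels;

public static class Bindings
{
    // Binary encoding over HTTP: one way to squeeze more performance out
    // of WCF without leaving HTTP. Element order matters: the encoder
    // binding element must come before the transport.
    public static CustomBinding BinaryHttp()
    {
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}
```

Alternatively, NetTcpBinding gives you binary encoding over TCP out of the box.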

Michi Henning from ZeroC has recently published a white paper on just this topic -- "Choosing Middleware: Why Performance and Scalability do (and do not) Matter". It compares Ice, WCF (binary & SOAP), and RMI with various performance metrics, platforms, languages, etc. There's more information on Michi's blog, but the white paper is also quite readable, with all the standard caveats of any benchmark.
Disclaimer: I've used Ice and RMI extensively, but never WCF.

Apache Thrift is another contender to ICE and WCF. It was developed and open-sourced by Facebook. Thrift is nice in some ways because it's not only extremely efficient on the encoding side, it also supports adding fields to structures without breaking all of the clients (something we found extremely useful for our projects).
Google Protocol Buffers doesn't really seem to be a contender, as its home page doesn't mention .NET support. However, some community add-ons support C#. In addition, ICE provides emulation for Google Protocol Buffers if you're working with existing services.
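For example, one of those community add-ons is protobuf-net; a minimal sketch of using it (the Person type is made up for illustration):

```csharp
using System.IO;
using ProtoBuf;  // protobuf-net community library

[ProtoContract]
public class Person
{
    // Field numbers, much like Thrift's field IDs, let you add new fields
    // later without breaking existing clients.
    [ProtoMember(1)] public string Name { get; set; }
    [ProtoMember(2)] public int Id { get; set; }
}

public static class Demo
{
    public static byte[] Save(Person p)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, p);   // compact binary encoding
            return ms.ToArray();
        }
    }

    public static Person Load(byte[] data)
    {
        using (var ms = new MemoryStream(data))
            return Serializer.Deserialize<Person>(ms);
    }
}
```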

Data point: we just converted a multi-platform, multi-language callback project from Ice to Thrift with pretty good results. Ice does a lot for you, so we had to implement disconnection listeners, connection events, etc. ourselves. And in one case we got bit in the proverbial by a big object lock that Ice was letting us get away with; this caused a deadlock in the Thrift server, but it was easily fixed by less lazy coding on the C# side.
I've just finished benchmarking, and in our application Thrift is faster than, or on par with, Ice for anything that pushes large amounts of data. Shorter messages with more overhead (e.g., a "heartbeat" that updates a status over the protocol) are a bit slower.
The most important bit was that in order to implement the callback service correctly we had to extend Thrift interfaces and define our own protocol, along with a Thrift "Processor" and a callback client-server. But I freely admit our application is /very/ special; for most uses the existing protocols and servers should be sufficient. Even so, extending them, even to use multiplexed sockets from .NET, was not terribly difficult.

We are using ICE to integrate modules written in C++, Java, and C#. The nice thing is that our server can access components on remote machines as well, so if we need more performance we can shift processing to different machines.
I've used both WCF and ICE, and I'd say that ICE is cleaner on the implementation side. ICE also has very detailed and readable documentation.
ICE also supports some things that WCF doesn't, including load balancing, automated remote client updates, etc.

Related

Understanding BizTalk Development

Coming from a .net developer's perspective, I've been recently introduced to BizTalk. I was expecting something like a series of Service References, auto-mapping classes and workflows. I really wasn't expecting heavy XSD use and I was surprised by the orchestration maps.
I just don't understand why it isn't more like a bunch of enterprise features built on a foundation of WCF.
Can anyone help me understand the idea behind how BizTalk was designed?
BizTalk can work with WCF services, but doesn't need to for some simple scenarios. It can also work in scenarios where custom non-WCF adapters are needed; it includes many useful ones out of the box, such as FTP, SFTP, file system access, POP3, SharePoint, Azure Service Bus, MSMQ, and MQSeries. Custom adapters can be written for legacy systems and services that don't expose WCF endpoints. There are many WCF adapters for cases where WCF is useful, and these adapters can be used and configured a bit more easily than drawing up a WCF service from scratch. BizTalk can also expose its services as WCF endpoints.
The real power of BizTalk is in its server architecture, which allows for high availability, durable messaging, suspending and resuming messages, advanced debugging options, and rapid development of artifacts (like maps and orchestrations). It also provides for some powerful out of the box support for EDI, HL7, and WCF LoB integration work.
XML is at the heart and soul of the BizTalk messaging engine. This is good because XML is standardized and powerful; it's bad because XML is unwieldy at times, especially when dealing with larger messages and BLOBs.
Receive ports get data into BizTalk's messaging engine (using adapters and receive locations). Send ports send the XML (or other) data out using the adapters mentioned above.
Maps use XSLT behind the scenes to transform the XML messages; it's possible to direct a map to use custom XSLT, or to use C#, VB, or JScript as well. However, for most trivial mapping tasks, the visual mapping interface allows for rapid development and testing of mappings between different message types. They can be called from receive ports, send ports, or orchestrations.
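As an aside, the C# option usually means a Scripting functoid calling into an external helper assembly; a minimal sketch of such a helper (names are made up, and the functoid must be configured to reference the assembly and method):

```csharp
namespace MapHelpers
{
    // A trivial static helper of the kind a BizTalk Scripting functoid
    // can invoke during a map; it receives and returns node values as strings.
    public static class StringHelpers
    {
        public static string NormalizeName(string value)
        {
            return value == null ? string.Empty : value.Trim().ToUpperInvariant();
        }
    }
}
```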
Orchestrations are more or less services that use the XLANGs language. When designed properly, they can provide very powerful processing of business logic and application handling, all with the above mentioned architectural features that BizTalk provides (durable messaging, high availability).
I look at it from a different perspective. BizTalk is more in line with web/SOAP and cross-platform standards, XML and now JSON, than WCF. BizTalk also supports many more protocols than WCF. BizTalk supports WCF, not the other way around.
Building contracts on .NET classes and serializing/deserializing them, as the WCF stack does, is the custom approach. Keep in mind that WCF is just hiding all the XML/XSD from you; it's still there, and it's the same XML that BizTalk uses.
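You can see that for yourself with a quick sketch (the Order type is made up) that dumps the XML WCF's default serializer produces for a .NET class:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

[DataContract]
public class Order
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Item { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var serializer = new DataContractSerializer(typeof(Order));
        using (var sw = new StringWriter())
        using (var xw = XmlWriter.Create(sw, new XmlWriterSettings { Indent = true }))
        {
            // The "hidden" XML that WCF puts on the wire for this object.
            serializer.WriteObject(xw, new Order { Id = 1, Item = "Widget" });
            xw.Flush();
            Console.WriteLine(sw.ToString());
        }
    }
}
```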
BizTalk was designed and shipped before WCF as a reliable, cross-platform, multi-protocol integration engine. In terms of capabilities, the BizTalk stack as a whole is several orders of magnitude beyond WCF. In practice, we spend a lot of time in a BizTalk app working around the limitations of WCF.*
*For clarity, I'm referring to the OOB binding elements mostly and their application to actual implementations. WCF as a framework is perfectly serviceable.
My research indicates that BizTalk has remained largely unchanged since 2004, and thus would not experience the kind of technological convergence seen in other areas of the Microsoft stack. The reason for this appears to be because of a painful migration from BizTalk 2002 to 2004 that no one wants to replicate. Reminiscent, to me, of the many versions of the Entity Framework.
In 2010-2011, there was a "BizTalk is dead" movement, with promises that a combination of WCF, Workflow Foundation, and AppFabric on Azure would be the replacement. There has been little talk of it since 2012; it looks like the two technology sets each had their unique pros and cons, but the two never really competed.
BizTalk has the strength of out-of-the-box throttling and disk persistence and an assortment of adapters that aren't standardized elsewhere (enterprise-iness). It's as if its stance is to tame an unwieldy beast. It still appears to lag in taking advantage of the scalability options that have emerged in the last 10 years. The other stack is more along the lines of what I initially expected, but lacking in enterprise-iness.
I don't quite have my head wrapped around BizTalk being described as a publish/subscribe model versus... some other model. Need to look more into that.
In conclusion, I don't like either technology set, and I think they're both in need of work.
Thanks to all who read this question and those who answered it. I know subjective answers aren't a big thing on Stack Overflow.

Azure Service Bus Queues integration approaches in .NET

There are different approaches to implement brokered messaging communication between services using Service Bus Queues (Topics):
CloudFX Messaging
QueueClient
WCF integrated approach
Which of these approaches is the most useful in which cases?
Any comparison of performance, abstraction level, testability, flexibility or facilities would be great.
OK, now that I understand your question better, I see where the confusion is.
All 3 of the options that you are looking into are written by Microsoft.
Also, all 3 of those options are simply an abstraction - a client interface into the service that MS is providing.
None of them is inherently faster or slower. However, I would say that if you went the WCF route, you could abstract the technology choice a bit better.
What I mean by that is - you can develop a "GetMessage" contract in WCF that points to the service bus... and then later on change the design, and configure WCF to point to some other service and you wouldn't have to change the code.
So, that's one advantage for WCF.
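To make that concrete, here's a rough sketch of such a contract (all names are hypothetical); the endpoint behind it could point at Service Bus today and something else tomorrow, with only configuration changing:

```csharp
using System.ServiceModel;

// The client codes against this contract only; which transport and service
// sit behind it is decided in configuration, not in code.
[ServiceContract]
public interface IMessageSource
{
    [OperationContract]
    string GetMessage();
}

public class Consumer
{
    public string Fetch()
    {
        // "messageSourceEndpoint" is a hypothetical endpoint name in app.config;
        // re-pointing it at another service requires no code change here.
        var factory = new ChannelFactory<IMessageSource>("messageSourceEndpoint");
        try
        {
            IMessageSource channel = factory.CreateChannel();
            string message = channel.GetMessage();
            ((IClientChannel)channel).Close();
            return message;
        }
        finally
        {
            factory.Close();
        }
    }
}
```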
That being said, CloudFX is built by Microsoft to give extra common functionality around the usage of the Azure Service Bus ... so don't ignore that. Look into the benefits of that API and decide if you and your team need those features.
Lastly, QueueClient is simply what CloudFX improves on, and it adds no abstraction benefit like WCF does. So you probably don't want to go this route (considering your other two options).
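For comparison, the raw QueueClient route looks roughly like this (connection string and queue name are placeholders; this is the classic Microsoft.ServiceBus.Messaging API):

```csharp
using Microsoft.ServiceBus.Messaging;  // classic Azure Service Bus SDK

public static class QueueDemo
{
    public static void SendAndReceive(string connectionString)
    {
        // Queue name "orders" is a placeholder.
        var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

        client.Send(new BrokeredMessage("hello"));

        // Receive one message and complete it so it is removed from the queue.
        BrokeredMessage received = client.Receive();
        if (received != null)
        {
            string body = received.GetBody<string>();
            received.Complete();
        }

        client.Close();
    }
}
```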
Keep in mind that Azure uses a REST API under the hood for most of the communication... and so you might hit some unexpected performance issues if you don't configure your application correctly: http://tk.azurewebsites.net/2012/12/10/greatly-increase-the-performance-of-azure-storage-cloudblobclient/

WCF; what's the big deal?

I'm just about getting into WCF; but from what I've read so far, like the sample scenarios I found on MSDN and some other sites, I can do all that with web services and applications that call those web services. So why the need for an elaborate layer like WCF?
Most of the comparisons I've googled explain it more from a programming point of view. I'm still trying to find answers, without much success, as to when it makes business (and of course programming) sense to use the WCF layer as opposed to the traditional application-to-web-services model.
Is anyone here with experience of both able to advise on how to go about choosing either plain web services or the WCF way? What are the things that absolutely can't be done using plain old web services called by applications, and where will the WCF layer save the day?
You've fallen for the Microsoft trap of "it's just about web services" :-)
It's actually a lot more:
it's about service-oriented programming in general (not just web services - you can also write TCP/IP based services, MSMQ queue-based messaging and a lot more)
it's about unifying all the diverse programming models that existed so far (ASMX, Enterprise Services, DCOM, .NET remoting)
it's about providing a lot of ready-made and ready-to-use plumbing which can handle things like reliable messaging, transaction support, security in any shape or form you'd like, service discovery, and a lot more (see the sketch after this list)
it's about separating the service implementation from the details of how clients will call it and making this a configurable stack of protocols, encodings etc.
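As one small example of that ready-made plumbing (the binding settings here are generic, illustrative choices), turning on reliable messaging and transaction flow is a couple of properties on a binding rather than hand-written retry and acknowledgement logic:

```csharp
using System.ServiceModel;

public static class PlumbingExamples
{
    public static NetTcpBinding ReliableTransactionalTcp()
    {
        var binding = new NetTcpBinding();
        binding.ReliableSession.Enabled = true;  // acknowledged, retried delivery
        binding.ReliableSession.Ordered = true;  // preserve message order
        binding.TransactionFlow = true;          // let client transactions flow in
        return binding;
    }
}
```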
Sure - most of this stuff can be done in ASMX, or .NET remoting - but try to convert an ASMX web service to be callable in your intranet using TCP/IP and transport security... Many of those "older" technologies have a very intricate and direct link to how they're being used - you can't easily change that without changing the whole service code.
WCF separates all these "plumbing details" like what endpoint to call, what protocol to use to call it, how to handle security etc. out into a "WCF stack" that's configurable and composable, so you can easily switch your service XYZ to use HTTP allowing anonymous users to call it, to using TCP/IP with Windows credentials required - your service code won't change a bit - it's only configuration of the plumbing.
That to me is the most compelling reason for WCF - I can totally concentrate on my actual service code, and not pollute it with lots of plumbing stuff - how to handle transports and text encodings and all that. And I can easily change that and adapt to new requirements and needs in deployment without having to touch my actual service code.
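A minimal sketch of that idea (contract, addresses, and settings are all made up; in practice the endpoints would live in app.config rather than code): the same service class exposed over two very different stacks, with the service code untouched:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

// The service code knows nothing about transports or security modes.
public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

public static class Host
{
    public static void Main()
    {
        using (var host = new ServiceHost(typeof(Greeter)))
        {
            // Endpoint 1: HTTP, anonymous callers.
            host.AddServiceEndpoint(typeof(IGreeter),
                new BasicHttpBinding(),
                "http://localhost:8080/greeter");

            // Endpoint 2: TCP with Windows credentials required.
            var tcp = new NetTcpBinding(SecurityMode.Transport);
            tcp.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
            host.AddServiceEndpoint(typeof(IGreeter), tcp,
                "net.tcp://localhost:9000/greeter");

            host.Open();
            Console.ReadLine();
        }
    }
}
```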
Plus, the second major point is extensibility - most of the older technologies just had their one, set way of doing things and many didn't lend themselves to being extended. You had to either adapt to use it the way they did it - or forget about it. WCF has a vast and very intricate system for extending just about anything - you can create your own transport protocol (people have created UDP or SMTP based bindings), you can create your own message encoders (like I had to do to talk to a web service which could only understand ISO-8859-1 encoded messages), and you can extend just about anything else in WCF - all in an organized, well-documented, very stable and safe way.
So these two things - separating out plumbing into configurable layers, and extensibility to the maximum - are the most compelling reasons for me to use WCF.
Edit: Kobi's link above is a far better answer than mine.
WCF is basically a better architecture for supporting communications. It breaks many dependencies such as hosting (it isn't IIS-dependent), transport, security, and addressing into plugin components, and allows customisation to a very high degree.
Yes, you can do a lot with traditional technologies, but you can do more with WCF. If you don't need the features now then of course you can continue with legacy technologies; however, if you prefer, you can opt for a better architecture now with an eye on the future, at the cost of having to switch technologies now.
Take this example: if you have a legacy ASMX web service, how easily can you offer the same service via an MSMQ endpoint? With WCF it's as simple as adding new config settings.
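A rough sketch of the idea (queue path and names are hypothetical, and the queue must already exist in MSMQ); shown programmatically here, though in practice it would be a few lines in app.config:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Queued operations must be one-way: there is no reply channel.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}

public class OrderService : IOrderService
{
    public void SubmitOrder(string orderXml)
    {
        Console.WriteLine("Got order: " + orderXml);
    }
}

public static class Host
{
    public static void Main()
    {
        using (var host = new ServiceHost(typeof(OrderService)))
        {
            // The queued endpoint; the service class above is untouched.
            host.AddServiceEndpoint(typeof(IOrderService),
                new NetMsmqBinding(NetMsmqSecurityMode.None),
                "net.msmq://localhost/private/orders");
            host.Open();
            Console.ReadLine();
        }
    }
}
```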
I assume that you are not asking "why not just stick with SOAP/HTTP". WCF allows you to choose a number of different transports rather than just simple HTTP, but as you observe, the WS-* technologies allow you to do all that. So I think you're asking why use a powerful but complex framework when the raw technologies are not impossibly complex?
You could ask this same question of any Framework. You could just use the basic technologies and avoid the learning curve of adopting the framework.
Frameworks such as WCF do have a learning curve, but consider what happens if you don't use them:
You find that you write boiler-plate code for each service invocation. You then either accept duplication or begin to refactor and build your own libraries. Before long you've developed your own framework, but it's not the same as anybody else's, so any new team member has to learn your local framework: a serious learning curve.
Note also that WCF addresses issues such as the monitoring of the deployed solution.
The biggest appeal to me is testability. Services are defined by a CLR interface, which is quite easy to mock inside a test harness. Some words of warning, however. With great flexibility comes some pain in the configuration process, along with a few "gotchas". An example of a gotcha is that WCF, adhering closely to a "best practice", requires an active SSL connection in order to pass SOAP authentication credentials over HTTP. This hinders testing quite a bit.
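For example (a hand-rolled stub; the contract and names are made up), because the service is just a CLR interface you can swap in a fake without any WCF plumbing:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IPriceService
{
    [OperationContract]
    decimal GetPrice(string sku);
}

// Hand-rolled test double; a mocking library such as Moq would do the same job.
public class FakePriceService : IPriceService
{
    public decimal GetPrice(string sku) { return 9.99m; }
}

public class CheckoutTests
{
    public void Total_uses_price_from_service()
    {
        IPriceService prices = new FakePriceService();

        // The code under test depends only on the interface,
        // so no channel, binding, or host is needed in the test.
        decimal total = prices.GetPrice("ABC") * 2;

        System.Diagnostics.Debug.Assert(total == 19.98m);
    }
}
```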

WCF in the enterprise, any pointers from your experience?

Looking to hear from people who are using WCF in an enterprise environment.
What were the major hurdles with the roll out?
Performance issues?
Any and all tips appreciated!
Please provide some general statistics and server configs if you can!
WCF can be configuration hell. Be sure to familiarize yourself with its diagnostics and SvcTraceViewer, lest you get maddening, cryptic, useless exceptions. And watch out for the generated client's broken implementation of the disposable pattern.
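The dispose problem is that the generated proxy's Dispose calls Close, which throws if the channel is already faulted; the common workaround is a close-or-abort pattern, sketched here:

```csharp
using System.ServiceModel;

public static class WcfClientExtensions
{
    // Instead of "using (var client = ...)", close or abort explicitly:
    // Close() throws on a faulted channel, so fall back to Abort().
    public static void CloseSafely(this ICommunicationObject client)
    {
        try
        {
            if (client.State == CommunicationState.Faulted)
                client.Abort();
            else
                client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();
        }
        catch (System.TimeoutException)
        {
            client.Abort();
        }
    }
}
```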
I was recently hired by a company that previously handled their client/server communication with traditional ASP.NET web services, passing DataSets back and forth.
I re-wrote the core so that there is now a Net.Tcp "connected" client, and everything is done through there. It was a week's worth of in-production discoveries, but well worth it.
The pain points we discovered late in the game were:
1) The default throttling blocked the 11th user onward (it defaults to allowing only 10 concurrent sessions).
2) The default "maxBufferSize" was set to 65k, so the first bitmap that needed to be downloaded crashed the server :)
3) Other default configurations (max concurrent connections, max concurrent calls, etc.); see the sketch below.
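For reference, a sketch of bumping those defaults programmatically (the contract, address, and values are arbitrary examples; this would usually be done in config):

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

public static class ThrottlingSetup
{
    public static void Configure(ServiceHost host)
    {
        // WCF 3.x defaults to 10 concurrent sessions, which caused the
        // "11th user" lockout described above.
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentSessions = 200;   // arbitrary example values
        throttle.MaxConcurrentCalls = 64;
        throttle.MaxConcurrentInstances = 264;

        // The 65 KB default buffer/message size that rejected the first bitmap.
        var binding = new NetTcpBinding
        {
            MaxReceivedMessageSize = 10 * 1024 * 1024,  // 10 MB
            MaxBufferSize = 10 * 1024 * 1024
        };
        host.AddServiceEndpoint(typeof(IMyService), binding,
            "net.tcp://localhost:9000/app");  // address/contract hypothetical
    }

    // Hypothetical contract for the endpoint above.
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void Ping();
    }
}
```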
All in all, it was absolutely worth it: the app is a lot faster just from changing the infrastructure, and now that we have "connected" users, the server can send messages down to the clients.
Another beautiful gain is that, since we know 100% who is connected, we can actually enforce our licensing policy at the application level. Before now (and before I was hired) my company had to simply log usage, and then at the end of the month bill the clients extra for connecting too many times.
As already stated, it's a configuration nightmare and exceptions can be cryptic. You can enable tracing and use the trace log viewer to troubleshoot a problem, but it's definitely a shift of gears to troubleshoot a WCF service, especially once you've deployed it and you are experiencing problems before your code even executes.
For communication between components within my organization I ended up using [NetDataContract] on my services and proxies, which is generally recommended against (you can't integrate with platforms outside of .NET, and to integrate you need the assembly that has the contracts), though I found the performance to be stellar and my overall development time reduced by using it. For us it was the right solution.
WCF is definitely great for enterprise stuff, as it is designed with scalability, extensibility, security, etc. in mind.
As maxidad said, it can be very hard though: exceptions often tell you nearly nothing, and if you use security (obviously necessary for enterprise scenarios) you have to deal with certificates, meaningless MessageSecurityExceptions, and so on.
Dealing with WCF services is definitely harder than with old ASMX services, but it's worth the effort once you're in.
Supplying server configs will not be useful to you, as they have to fit your scenario. Using the right bindings is very important, as are security and concurrency. There is no single way to go when using WCF; just think about your requirements. Do you need callbacks? Who are your users? What kind of security do you need?
However, WCF is definitely the right technology for enterprise-scale applications.

Is it advisable to build a web service over other web services?

I've inherited this really weird codebase where they've built an external web service over a bunch of internal web services just to add authentication/authorization using WS-Security, WS-Encryption, et al. Less than a month into this engagement, I'm already feeling the pain of coupling volatile components through rigid WSDL, especially considering some of them use WCF while others chose to go WSDL-first. Managing various versions of generated proxies and wrappers at various levels is a nightmare!
I'll admit the design is over-complicated and could have been much better, but my question essentially is:
Would you ever build a web service just to provide a cross-cutting concern over a bunch of services?
Would this be better implemented as web service handlers?
and lastly...
Would you categorize this under the Web Service Gateway pattern?
I saw that very thing being built one year ago. I almost cried when the team took months to build 4 web services, 2 of which simply wrapped other internal ones, using WCF and some serious encryption. The only reason they wrapped the internal ones was to change the potential error numbers coming back.
So, would I ever intentionally do that? Nope.
Would it be better implemented as almost anything else? yep.
Would I categorize it under the WTF pattern? absolutely.
UPDATE:
One thing I just remembered is that there is an architecture called the "Enterprise Service Bus". Its purpose is to provide a common interface into other SOA systems. This way it doesn't matter what the different applications use for their endpoint mechanisms (WCF, WSE 1/2/3, RESTful, etc.).
BizTalk is one example of an ESB and there are many other off the shelf programs that can be used. Basically, your app passes a message to the ESB and it handles sending that message, in a reliable way, to the other systems as well as marshalling any responses back.
This also means that you could insulate other applications from many types of changes to the end points. Of course, if the new end points require additional information, then you'd have to modify the callers. However, if all they are changing is the mechanism then a good ESB would be able to handle those changes without impacting your app.
I have seen similar implementations where you are exposing services to the outside world and need to tighten down security; check this MSDN column.