NServiceBus Single Server Installation

I'm looking at designing a commercial web site from scratch in the new year and I was planning on using NServiceBus. Other components would include RavenDB, ASP.NET MVC, Ninject, Bootstrap, etc.
My question is, if I build scalability in from the beginning, particularly if in the first 6 months I plan to run the site from a single server, would it be a foolish thing to use NServiceBus from the outset? Will I experience much of a choke-point by pushing everything through MSMQ rather than direct calls to methods in DLLs? Should NServiceBus only be added to mature systems, or systems that are intended to be deployed to more than one server?

While the fastest possible call is a direct in-memory invocation, I wouldn't optimize for call latency in most web scenarios. Instead, focus on what kind of logic can run asynchronously with respect to the user request; that will have the biggest influence on your overall scalability.
NServiceBus provides a fairly clean programming model for asynchronous invocations that can later on be distributed across multiple processes and machines.
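As a rough illustration of that model, here is a minimal sketch assuming an NServiceBus v4/v5-style API (the PlaceOrder command, its handler, and the controller are hypothetical names): the MVC action only drops a command on the bus, and the slower work runs asynchronously in a handler that can later be hosted in another process or on another machine.

    using System;
    using System.Web.Mvc;
    using NServiceBus;

    // Command message; in v4/v5 commands are typically marked with ICommand.
    public class PlaceOrder : ICommand
    {
        public Guid OrderId { get; set; }
    }

    // The handler runs outside the web request; it can be moved to a
    // separate endpoint or machine later without changing the controller.
    public class PlaceOrderHandler : IHandleMessages<PlaceOrder>
    {
        public void Handle(PlaceOrder message)
        {
            // Slow or failure-prone work (payment, email, etc.) goes here.
        }
    }

    // ASP.NET MVC controller with a property-injected IBus (e.g. via Ninject).
    public class OrderController : Controller
    {
        public IBus Bus { get; set; }

        [HttpPost]
        public ActionResult Place(Guid orderId)
        {
            Bus.Send(new PlaceOrder { OrderId = orderId });   // fire-and-forget
            return RedirectToAction("Accepted");
        }
    }

On a single box the handler can simply run in another local process, and moving it to another machine later is mostly a routing/deployment change rather than a code change.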

Related

How to migrate thick client to the cloud

Current situation:
Thick client written in .NET
We have a very old computation software that we can't maintain anymore.
We don't really know how the kernel works (people have left, the code is 15 years old).
We have the code and some technical experts.
We want to migrate it to the cloud behind a public API in order to serve SPA applications or even thick-client applications.
What is your recommendation for this problem?
We have thought about:
Lift-n-Shift
Lift-Adjust-n-Shift
Rearchitecting or redeveloping from the ground
Repurchasing a new cloud solution (but there doesn't seem to be one)
All of the options you mention are possible, but which one to choose really depends on your business needs, time, and budget.
Lift and shift (VMs)
This is usually the quickest approach: you simply move to VMs in the cloud. But managing those VMs remains your responsibility and is an ongoing commitment.
Lift, adjust, and shift (containers)
In my opinion you only get the real benefits of the cloud when you start using PaaS services. You may consider containerizing (Docker) your application, migrating it to the cloud, and then adopting PaaS services. Your DevOps cycle will be quicker and scaling is easier, and since you are no longer managing VMs it's less hassle.
Rearchitect and redevelop
This could be costly and time consuming, and really depends on whether your business requirements allow for it. If you plan to expand the existing code base you may consider this; otherwise it is a big undertaking when you could simply migrate your services using the approaches mentioned above.

What is the recommended approach for raising database-triggered events with NServiceBus? Is direct SQL Service Broker integration no longer viable?

My team is currently in the initial stages of designing implementations using NServiceBus (v4, possibly v5) in a number of different contexts to facilitate integration between a number of our custom applications. However, we would also like to utilize NServiceBus to raise business events triggered from some of our off-the-shelf third-party systems. These systems do not provide inherent messaging or eventing APIs, so our current thinking is to hook into their underlying databases using triggers and potentially SQL Service Broker as a bridge to NServiceBus.
I've looked at ServiceBroker.net but that seems to use NServiceBus v2 or v3 APIs, interfaces, etc., by creating a totally new ITransport. We're planning on using more recent versions of NServiceBus though, so this doesn't seem to be a solid option. Other somewhat similar questions here on SO (all from a few years ago) seem to be answered with guidance to simply use the SQL Transport. That uses table-based pseudo-queues instead of MSMQ, but what's not clear is whether it is then advisable to have SQL triggers hand-craft NServiceBus message records and manually INSERT them into the pseudo-queue tables directly, or whether there would still be some usage of SQL Service Broker in the middle that somehow more natively pops the NServiceBus messages onto the bus. And if somehow using the SQLTransport is the answer, what would be best practice to bridge the messages over to the main MSMQTransport-based bus?
It seemed like there was some concerted movement toward bridging SQL Service Broker over to NServiceBus several years ago, but that effort was deprecated once the native NServiceBus SQLTransport was introduced. I feel like maybe I'm missing something about the modern NServiceBus approach to generating data-driven events in a design that is more real-time than a looped polling design.
You may want to take a look at the Gateway feature. You should be able to run 2 different transports and use the Gateway feature to bridge the two via HTTP.
We have a similar system, although it's slightly easier in that we control the underlying databases and applications (i.e. not 3rd party) and the current proof of concept uses the ServiceBroker / SQLDependency / ServiceBus as part of its architecture.
If you go this route, I would also advise using triggers to populate a common table, then monitoring that.
I didn't know about ServiceBroker.Net until today, so I can't comment. I also haven't looked at CLR stored procedures / triggers to see whether there are any possibilities there.
Somebody else asked a question about NServiceBus and Service Broker which I answered here; it may be useful for anyone looking to implement this.
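To make the trigger-plus-monitoring idea more concrete, here is a rough C# sketch (the dbo.BusinessEvents table, its columns, and the RecordChanged event type are all hypothetical) that uses SqlDependency, which relies on Service Broker under the covers, to watch the common table that the triggers populate and publish an NServiceBus event when new rows appear.

    using System.Data.SqlClient;
    using NServiceBus;

    public class BusinessEventWatcher
    {
        readonly string connectionString;
        readonly IBus bus;

        public BusinessEventWatcher(string connectionString, IBus bus)
        {
            this.connectionString = connectionString;
            this.bus = bus;
        }

        public void Start()
        {
            // Requires Service Broker to be enabled on the database.
            SqlDependency.Start(connectionString);
            Subscribe();
        }

        void Subscribe()
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Id, EventType FROM dbo.BusinessEvents WHERE Processed = 0",
                connection))
            {
                var dependency = new SqlDependency(command);
                dependency.OnChange += (sender, args) =>
                {
                    // A notification fires only once, so re-subscribe first,
                    // then read the new rows and publish them.
                    Subscribe();
                    PublishPendingEvents();
                };

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // The query must be executed to register the notification.
                }
            }
        }

        void PublishPendingEvents()
        {
            // Read unprocessed rows, publish an event per row, mark them processed.
            bus.Publish(new RecordChanged { /* map table columns to the event here */ });
        }
    }

    // Hypothetical event contract raised onto the bus.
    public class RecordChanged : IEvent
    {
    }

Because SqlDependency uses Service Broker queues internally, this keeps a push-style notification without hand-crafting transport messages into pseudo-queue tables; if notifications prove brittle, the same watcher could fall back to polling the common table.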

Web apps architecture: 1 or n APIs

Background:
I'm thinking about how to organise a web application. I will separate the front (the web site for the browser) from the back (the API): 2 apps, 2 repositories, 2 hosting environments. The front will call the API for almost everything.
So, if I have two separate domain services in my API (for example: a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question about a concrete case; sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, the priorities you have, and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple domain model across separate modules/bundles/libraries
Create distributed architecture (like Service Oriented Architecture (SOA) or Event Driven Architecture (EDA))
One monolithic application
It's the easiest and cheapest way to develop an application in its early stage. You don't have to worry about complex architecture or complex deployment and development processes. It also works better when there are not many developers involved.
Once the application grows, this model begins to be problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially when a lot of people work on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment also becomes much, much harder.
Within this approach you have one repo/build process for all your APIs.
One monolithic application with a decoupled domain model
Within this approach you still have one big platform, but you connect logically separate modules as if they were third-party dependencies. For example, you may extract one module and create a library from it.
Thanks to that you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, but it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question, in this approach you have a separate API, dev process, and repository for each "type of action", as long as you move its domain logic into a separate library.
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process gets more reliable as it involves smaller projects. You can version the communication (API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data driven (a separate front end which uses APIs to consume data) and the individual services don't need to communicate with each other, it may not be that complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In that approach you have a separate API, with separate repositories and separate processes, for each "type of action" (which I understand as a separate domain model / service).
As I wrote at the beginning, the approach you choose depends on the application and its needs. Anyway, back to your original question, my suggestion is to keep the APIs as separate as you can. Even if you have one monolithic application you should be able to version the APIs separately and keep their domain logic separate. Separating repositories and/or processes depends on the approach you choose (e.g. among those I mentioned above).
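For illustration only, here is a minimal sketch of what "version the APIs separately inside one application" can look like, using ASP.NET Web API 2 attribute routing (the controller names and routes are hypothetical, and this assumes a .NET stack); each versioned contract can later be split out into its own service without changing clients.

    using System.Web.Http;

    // v1 of the booking API, kept in its own module/namespace.
    [RoutePrefix("api/v1/bookings")]
    public class BookingsV1Controller : ApiController
    {
        [HttpGet, Route("{id:int}")]
        public IHttpActionResult Get(int id)
        {
            return Ok(new { Id = id, Version = "v1" });
        }
    }

    // v2 can evolve independently while v1 stays stable for existing clients.
    [RoutePrefix("api/v2/bookings")]
    public class BookingsV2Controller : ApiController
    {
        [HttpGet, Route("{id:int}")]
        public IHttpActionResult Get(int id)
        {
            return Ok(new { Id = id, Version = "v2", Details = "richer contract" });
        }
    }

The same idea applies in any web framework; the point is that the versioned contracts and their domain logic, not the hosting, are what you keep separate.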
If I missed your point, please describe in more detail what answer you expect.
Best!

MPI vs. Microsoft WCF vs. Microsoft TPL

I have a scientific program written in F# which I want to parallelize and run on 1 server with multiple processors (64) and for the future also in the cloud (Windows Azure?). The program will have a simple 1-1 communication between the nodes (no broadcast etc.).
If I used WCF, would it be as fast as MPI? What does MPI have that WCF does not? There is also Pure MPI .NET, written for WCF, which puzzles me even more. I don't know whether to use WCF, MPI.NET, or Pure MPI running on WCF.
PS: I guess that TPL is out of the game for 64 processors and more, right?
It is difficult to give a concrete answer, because it all depends on the specific aspects of your application, its current architecture (I suppose you already have some app) etc.
As you mention MPI and WCF, I assume that the application is written as several components that communicate with each other. The best way to structure this kind of application is to use F# agents.
As far as I understand, you want to run the application on a single server first. If you write it using agents, the agents can just communicate directly with each other (so you don't need MPI or WCF).
TPL should work well on a single-server (with lots of CPUs), but it will not scale to the distributed setting - you cannot run Task on another machine. However, you can use it inside individual components (e.g. agents) that will be distributed.
Regarding MPI vs. WCF - I don't have enough experience to answer that. However, if you use an agent-based architecture, it should be easy to try various options. You may also check out fracture and related projects, which aim to implement high-performance sockets for F# (and possibly distributed agents in the future).
If you're running on one server you could just run a single process and execute the code in parallel. That way you can share memory more easily and faster than passing messages as with MPI and WCF, although the communication overhead might not be that significant, depending on your problem and solution.
The changes to your code would also be much smaller that way; F# can usually be turned into parallel code with little effort. Going to MPI/WCF would require you to rewrite large portions.
Googling for F# + parallel gives plenty of useful info that you should read first; this is a good start:
http://blogs.msdn.com/b/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx
So on one server I would use the parallel features of F#; it's designed to parallelize easily.
Later, when you want to go to the cloud, you would turn it into client-server. That's a different problem from parallelization; I would treat and solve them separately.
On MPI vs. WCF: WCF is designed as an RPC technology, i.e. you call remote procedures and get answers. If you want to use it for parallel programming with separate processes, you would have to create the boilerplate code for that (keeping track of subscribed clients, etc.).
MPI was designed to run that kind of architecture and handles it much more easily (the first process gets rank 0 and is the master, the others are slaves numbered incrementally, etc.).
However, I don't think MPI will be a good fit for the cloud, since that involves HTTP, protocols, security, etc. I'm not sure how well MPI works for those kinds of things; WCF will handle them very well indeed.
The reason there is an MPI .NET for WCF is that MPI embodies a style of parallelizing code that a lot of people are familiar with, so you can keep those programming concepts on the .NET platform while leveraging WCF for the communication.
Something else you might want to look into, if you need to exchange a lot of data over the wire, is protocol buffers (see protobuf-net for instance). It can easily be combined with WCF for communication and is very lean at serializing structured data, so you can send it over the wire efficiently.
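For what it's worth, here is a minimal protobuf-net sketch in C# (the SampleResult type and its field numbers are hypothetical) showing the contract attributes and round-tripping through a stream; the resulting byte array can then be sent over whatever channel you choose (WCF, sockets, etc.).

    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class SampleResult
    {
        [ProtoMember(1)]
        public int Id { get; set; }

        [ProtoMember(2)]
        public double[] Values { get; set; }
    }

    static class Wire
    {
        public static byte[] Pack(SampleResult result)
        {
            using (var stream = new MemoryStream())
            {
                // protobuf-net writes a compact binary representation.
                Serializer.Serialize(stream, result);
                return stream.ToArray();
            }
        }

        public static SampleResult Unpack(byte[] payload)
        {
            using (var stream = new MemoryStream(payload))
            {
                return Serializer.Deserialize<SampleResult>(stream);
            }
        }
    }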
Gert-Jan
WCF and MPI are different concepts. WCF is like person A asking person B to do something, whereas MPI is like person A creating clones of himself (all clones have the same abilities/logic) which then work on specific parts of the problem to be solved and, once done, combine their results.
So choosing which one fits your specific application depends on the problem your application is trying to solve. It may even be a combination of both WCF and MPI, where your client application asks the WCF service to do some task, the WCF service creates clones of the "problem solver" using MPI, and when the clones are done solving the problem (in parallel) they return the aggregated result to the WCF service, which then sends that result to the client application.
You might also want to take a look at the 'mbrace' product, which provides a cloud monad (http://blogs.msdn.com/b/dsyme/archive/2011/08/23/m-brace-f-in-the-cloud.aspx). It's still at a fairly early stage though. I'm no expert, but it may be that you can run an mbrace-based solution as effectively a private cloud on your 64-processor setup. When you outgrow that, a move to Azure would be seamless.

Is it advisable to build a web service over other web services?

I've inherited this really weird codebase where they've built an external web service over a bunch of internal web services just to add authentication/authorization using WS-Security, WS-Encryption, et al. Less than a month into this engagement, I'm already feeling the pain of coupling volatile components through rigid WSDL, especially considering some of them use WCF and others choose to go WSDL-first. Managing various versions of generated proxies and wrappers at various levels is a nightmare!
I'll admit the design is over-complicated and could have been much better, but my question essentially is:
Would you ever build a web service just to provide a cross cutting concern over a bunch of services?
Would this be better implemented as web service handlers?
and lastly...
Would you categorize this under the Web Service Gateway pattern?
I saw that very thing being built one year ago. I almost cried when the team took months to build 4 web services, 2 of which simply wrapped other internal ones, using WCF and some serious encryption. The only reason they wrapped the internal ones was to change the potential error numbers coming back.
So, would I ever intentionally do that? Nope.
Would it be better implemented as almost anything else? Yep.
Would I categorize it under the WTF pattern? Absolutely.
UPDATE:
One thing I just remembered is that there is an architecture called an "Enterprise Service Bus" (ESB). Its purpose is to provide a common interface into other SOA systems. This way it doesn't matter what the different applications use for their endpoint mechanisms (WCF, WSE 1/2/3, RESTful, etc.).
BizTalk is one example of an ESB, and there are many other off-the-shelf products that can be used. Basically, your app passes a message to the ESB and it handles sending that message, in a reliable way, to the other systems, as well as marshalling any responses back.
This also means that you could insulate other applications from many types of changes to the endpoints. Of course, if the new endpoints require additional information, then you'd have to modify the callers. However, if all that changes is the mechanism then a good ESB would be able to handle those changes without impacting your app.
I have seen similar implementations where the services are exposed to the outside world and the security needs to be tightened down. Check this MSDN column.