Service Fabric on-premises minimum nodes for stateless services - service-fabric-stateless

I have seen Microsoft's documentation on the minimum of 5 nodes for a production cluster. I have only stateless services, and since there is no data to protect, does it have to be 5 nodes, or can it be 3?

I got my answer: because the naming service and the failover manager service are stateful, we still need a minimum of 5 nodes. This is explained here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-common-questions
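For intuition, here is a rough sketch of the quorum arithmetic behind that guidance. This is illustrative only; see the FAQ above for the actual production requirements.

```csharp
// Illustrative quorum arithmetic only -- not an official sizing rule.
// The stateful system services (naming service, failover manager) keep a
// replica set that must retain a majority (quorum) to stay writable.
using System;

class QuorumSketch
{
    static void Main()
    {
        foreach (int nodes in new[] { 3, 5 })
        {
            int quorum = nodes / 2 + 1;      // majority of replicas required
            int tolerated = nodes - quorum;  // replicas you can lose at once
            Console.WriteLine("{0} nodes: quorum {1}, tolerates losing {2}",
                nodes, quorum, tolerated);
        }
        // 3 nodes: quorum 2, tolerates losing 1 -> a node down for upgrade
        //                                          plus one failure breaks quorum
        // 5 nodes: quorum 3, tolerates losing 2 -> survives an upgrade plus a failure
    }
}
```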

Related

What is the difference between WCF and Azure Functions?

I cannot understand the difference between WCF (service-oriented) and Azure Functions or AWS Lambda (FaaS). It seems to me both involve invoking remote functions, although WCF has a host. What is the technical difference between them?
WCF, or Windows Communication Foundation, is a framework for writing and consuming services. These can be web services or other kinds, e.g. TCP-based or even MSMQ-based services. This is, in my opinion, what you should be looking at for exposing your back end. WCF lets you easily specify a contract and an implementation while leaving the hosting of the service and its instantiation to IIS (IIS being Microsoft's web server, which also runs under the covers on Azure).
Azure, from your point of view, is a hosting provider. It helps you scale your application servers based on demand (e.g. the number of mobile clients downloading and installing your application).
A little marketing speak: Azure lowers your cost of ownership because it takes away the initial investment in figuring out (guessing) the amount of hardware you need and then building or renting a data center and/or hardware. It also provides some middleware for your applications, like AppFabric, so that they can communicate in the "cloud" a bit better. You also get load balancing on Azure, distributed hosting (e.g. Europe data centers, USA data centers...), failover mechanisms already in place (automatic instance re-creation if one fails), and of course pay-as-you-go / pay-for-what-you-use benefits.
Here is the reference: Introduction to Azure Functions, Azure and WCF
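To make the contrast concrete, here is a minimal, hedged sketch of the two programming models side by side. The names are illustrative, and in practice the two parts would live in separate projects: the WCF part needs a reference to System.ServiceModel, the Functions part the Azure Functions SDK packages.

```csharp
// Sketch only -- contrasts the two programming models; names are illustrative.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using System.ServiceModel;

// WCF: you define an explicit contract and implementation; hosting and
// instantiation are left to IIS or a ServiceHost you open yourself.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name; }
}

// Azure Function: one HTTP-triggered method; the platform owns hosting,
// triggering and scaling -- there is no host or contract class to manage.
public static class GreetFunction
{
    [FunctionName("Greet")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return new OkObjectResult("Hello, " + req.Query["name"]);
    }
}
```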

High availability in JBoss Fuse Fabric for CXF Rest services

We are trying to figure out how best to create a highly available Fuse Fabric infrastructure where there should not be any requirement for client-side configuration. We mostly have CXF REST services. If we create an odd number of Fabric containers and join them, will that create a highly available Fabric WITHOUT any client-side configuration? Meaning, can the client point to one URL and Fuse Fabric will fail over to another container of the Fabric if one of them is down? I have read through multiple documents but could not find any direct answer.
Thanks.
You can use the http-gateway (http://fabric8.io/gitbook/gateway.html) to expose a single ip:port for a service that is highly available in Fabric.

WCF on Azure Cloud Services with Azure Website Clients

I cannot seem to find any combination of tutorials or information online to set me in the right direction, so I'm hoping the community can help me out!
I have some experience with WCF in the past (mostly simple/default HTTP implementations), but nothing to the level I am attempting with my current architecture. Unfortunately 99% of the info I'm finding for WCF is a couple of years old, and most of it does not address Azure-specific details. Most books were published back in 2007 and do not address the newer IDE/tooling or the WCF updates since that time. Needless to say I have a few open questions, and would love to get pointed in the right direction after exhausting Google, Stack Overflow, MSDN & YouTube!
In a nutshell:
I want to centralize all business logic behind a single WCF service on Azure (it will be load balanced on a Cloud Service).
I have a number of web clients that will be consuming this service.
All the clients are C#/.NET MVC projects that I control (I do not need or want the WCF endpoints to be publicly available).
I would prefer to whitelist access to the endpoints rather than implement authentication (for performance & simplicity).
Here are my questions and potential speed bumps:
Is WCF the right solution? Is there a newer better technology I should be using?
If I use a Cloud Service for my WCF solution, is WebRole or WorkerRole my best option, and why? Is hosting the service as a Website an option? (It would save cost.)
In my research I've landed on the fact that the NetTCP binding is faster than the default HTTP bindings, but I can't find a simple example of how to set this up using VS 2013/.NET 4.5/Azure Cloud Service. Is there a good tutorial for this? (A basic sketch of the NetTcpBinding setup appears after the answer below.) Also, I'm assuming named pipes are not an option for me?
Since all the consumers of the WCF service will be running on Azure Websites, is NetTCP still possible? How do I create service references? I'm assuming I just use the NetTCP endpoint address, but what about whitelisting for security within the Azure infrastructure?
How can my Azure Website clients connect over TCP within Azure the fastest? Affinity groups don't seem to be an option for Websites; should I abandon this and deploy all my clients as WebRoles so they can share an affinity group with my WCF service? Is Azure smart enough to know that the website is calling a machine within the same region and keep the connection within the region? How is this ensured?
I will have debug, stage and production environments for my WCF service. What is the best way to switch between the various endpoints in my Azure Website client(s)? I'd prefer to do it during startup in my global.asax file using C# rather than in my web.config. I only intend to keep one setting in my Web.Config for "Environment". Ideally I will have a Switch() statement in my startup file that determines which WCF environment endpoint to use for my service references.
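A minimal sketch of the kind of startup switch described above, assuming a single "Environment" appSetting; the endpoint addresses and class names are made up for illustration, and the usual route/bundle registration is omitted.

```csharp
// Global.asax.cs -- sketch only; endpoint URLs and setting names are placeholders.
using System;
using System.Configuration;
using System.Web;

public class MvcApplication : HttpApplication
{
    // Resolved once at startup and used when creating service client channels.
    public static string CoreServiceEndpoint { get; private set; }

    protected void Application_Start()
    {
        string environment = ConfigurationManager.AppSettings["Environment"];

        switch (environment)
        {
            case "Debug":
                CoreServiceEndpoint = "net.tcp://localhost:808/CoreService";
                break;
            case "Stage":
                CoreServiceEndpoint = "net.tcp://stage-core.example.internal:808/CoreService";
                break;
            case "Production":
                CoreServiceEndpoint = "net.tcp://prod-core.example.internal:808/CoreService";
                break;
            default:
                throw new InvalidOperationException("Unknown environment: " + environment);
        }
    }
}
```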
My apologies for the array of questions. I was thinking about breaking this out into multiple posts, but keeping them in the same context seemed to be the only way to ensure that I am communicating the scope of my inquiry.
Thank you.
I found a great series of videos on Microsoft Virtual Academy that answers all of my questions:
Azure & Services
The key videos in this series are 1, 2 & 7. Here is a direct link to each one:
Intro to WCF
WCF on Azure
Advanced Topics
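For reference alongside those videos, here is a hedged sketch of the basic NetTcpBinding host and client shape asked about above. The self-hosted form is shown; the addresses, contract name and security mode are illustrative, and in a Cloud Service the host would typically be opened in the worker role's OnStart/Run against its internal endpoint.

```csharp
// Sketch only: basic NetTcpBinding host + client in one console program.
using System;
using System.ServiceModel;

[ServiceContract]
public interface ICoreService
{
    [OperationContract]
    string Ping(string message);
}

public class CoreService : ICoreService
{
    public string Ping(string message) { return "pong: " + message; }
}

class Program
{
    static void Main()
    {
        // Pick a real security mode for production; None keeps the sketch short.
        var binding = new NetTcpBinding(SecurityMode.None);

        // Host side: open a net.tcp endpoint.
        var host = new ServiceHost(typeof(CoreService));
        host.AddServiceEndpoint(typeof(ICoreService), binding,
            "net.tcp://localhost:808/CoreService");
        host.Open();

        // Client side: create a channel against the same address.
        var factory = new ChannelFactory<ICoreService>(binding,
            new EndpointAddress("net.tcp://localhost:808/CoreService"));
        ICoreService client = factory.CreateChannel();
        Console.WriteLine(client.Ping("hello"));

        ((IClientChannel)client).Close();
        factory.Close();
        host.Close();
    }
}
```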

"Private" TCP WCF services in Azure?

We have three "types" applications:
MainSite (MVC Web Role, 6 instances)
CoreServices (TCP-based WCF Worker Role, 20 instances)
NewFeaturesPreviewSiteOne (MVC Web Role, 3 instances)
NewFeaturesPreviewSiteTwo
...
NewFeaturesPreviewSiteTwelve
Both MainSite and CoreServices are bundled up as two roles in one deployment. This is updated ~once every 2 months. MainSite accesses CoreServices via an InternalEndpoint on CoreServices. This works great!
We now want to add NewFeaturesPreviewSite (in reality, we have 12 totally different/unrelated apps that you can think of like this). NewFeaturesPreviewSite is updated every couple of days and is its own deployment. However, we REALLY want it to consume the already-deployed CoreServices app.
What is the best (or a good) way to accomplish this while considering the following?
Load-balancing is a must-have (20+ CoreServices instances handling requests from three NewFeaturesPreviewSite instances).
We do NOT want CoreServices being publicly exposed to the internet or to anything outside of our applications we're deploying to Azure.
I'd really like to have a solution that leverages Azure's PaaS platform rather than its IaaS platform.
Ultimately, I suspect there's something with Azure's Local Network or Virtual Private Network features that might help me here but I'm not sure - there's something about those that I don't quite get yet.
According to Microsoft's public documentation, you might not be able to communicate through internal endpoints from another deployment (cloud service). This means you would have to open an input endpoint on your core service for your new-feature services. But I have the impression that Steve Marx had a blog post saying that if you somehow know the internal endpoint, you can just connect to it from another cloud service role if both of them are located in the same data center.
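For the within-one-deployment case that already works, here is a hedged sketch of how a web role might resolve the worker role's internal endpoint via the ServiceRuntime API. The role name "CoreServices" and endpoint name "CoreTcp" are placeholders for whatever is declared in the ServiceDefinition.csdef.

```csharp
// Sketch: resolving an InternalEndpoint of another role in the SAME cloud
// service deployment. Internal endpoints are not load balanced, so the caller
// has to spread requests across instances itself.
using System;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class CoreServicesLocator
{
    private static readonly Random Rng = new Random();

    public static IPEndPoint PickInstance()
    {
        var instances = RoleEnvironment.Roles["CoreServices"].Instances;

        // Naive random choice as a stand-in for real client-side load balancing.
        var instance = instances[Rng.Next(instances.Count)];
        return instance.InstanceEndpoints["CoreTcp"].IPEndpoint;
    }
}
```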

Mule Inter - App communication in same instance

I have explored the web on Mule and understand that for apps to communicate among themselves, even if they are deployed in the same Mule instance, they have to use the TCP, HTTP or JMS transports.
VM isn't supported.
However, I find this a bit contradictory to ESB principles. Shouldn't we ideally be able to define endpoints in an ESB and connect to them using any transport? I may be wrong.
Also, since all the apps share the same JVM, one would expect to be able to communicate via the in-memory VM queue rather than relying on the transactionless HTTP protocol, or on TCP, where the number of connections one can make depends on server resources. Even for JMS we need to define and manage another queue, and for heavy usage that may have an impact on performance. Though I agree that if we have distributed and clustered systems, HTTP or JMS may be the only options.
Is there any plan to incorporate VM as an inter-app communication protocol, or is there any other way one flow can communicate with a flow endpoint in a different app?
EDIT: Answer from MuleSoft
http://forum.mulesoft.org/mulesoft/topics/concept_of_endpoint_and_inter_app_communication
Yes, we are thinking about inter-app communication for a future release.
It is still not clear when we are going to do it, but we have a couple of ideas on how we want this feature to behave. We may create a server-level configuration in which you can define resources to use in all your apps. There you would be able to define a VM connector and use it to send messages between apps on the same server.
As I said, this is just an idea.
Regarding the use of VM for inter-app communication, only MuleSoft can say whether such a feature will be added in the future.
I don't think it's contradictory to the ESB principle. The "container" concept is pretty well defined in chapter 6 of David A. Chappell's book Enterprise Service Bus. The container should try its best to keep the applications isolated.
This provides some benefits like "independently deployable integration services" (same chapter), easier clustering, and other goodies.
You should approach same-VM inter-app communication as if it were between apps placed on different servers.
It seems that Mule 3.5 added a feature to enable communication between apps deployed on the same server, but sharing a VM connector is only available in the Enterprise edition.
Info:
http://www.mulesoft.org/documentation/display/current/Shared+Resources#SharedResources-DefiningDomains
Example:
http://blogs.mulesoft.org/optimize-resource-utilization-mule-shared-resources/