Do I need Azure Traffic Manager with Azure API Services?

My goal is to host web APIs in Azure as API services built on Azure Service Fabric. Azure Traffic Manager seems geared towards the Infrastructure as a Service (IaaS) model, providing features such as load balancing and fault tolerance, while Azure Service Fabric is geared towards the Platform as a Service (PaaS) model with its own clustering. If my goal is to host web APIs (authored by leveraging Azure Service Fabric, without the need for Web Apps) in Azure, can I skip Azure Traffic Manager because Service Fabric already provides clustering? If not, why should I use Azure Traffic Manager? Am I missing something?

You are missing the fact that everything you have mentioned relies on a single region, and that occasionally regions die.
If you have a Service Fabric cluster located in West Europe, for instance, and the West Europe region dies, you have lost your entire solution.
If, however, you have Service Fabric clusters in both West Europe and North Europe, with data replicated between them and a Traffic Manager profile spread across the two, then when a region dies you still have a working solution.
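Traffic Manager decides a region is dead by probing a monitoring endpoint you configure on each regional deployment and routing traffic away when the probe stops answering. As a minimal sketch (assuming an ASP.NET Core .NET 6+ web project; the "/health" path is hypothetical and would be whatever you set in the profile's monitoring settings):

    // Program.cs: minimal ASP.NET Core app exposing the path that the
    // Traffic Manager profile's endpoint monitoring probes in each region.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Return 200 only while this region can actually serve traffic;
    // a non-200 response makes Traffic Manager fail over to the other region.
    app.MapGet("/health", () => Results.Ok("healthy"));

    app.Run();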
When building a highly available solution, you need to be able to point to any item within it and ask what happens if it fails. That goes all the way from a single VM up to a whole region of datacenters.
Of course, you need to weigh that against the cost of replicating your entire solution to another datacenter purely for resilience purposes. Region failures are very rare, and if you can handle some downtime you might be better off simply ensuring your data is replicated to another region (by using GRS storage) and having a process to bring that data back online from another region.
But of course, by doing that you have made a deliberate decision about what should happen in the event of such a failure, which is the whole point.

Related

What is the difference between WCF and Azure Functions?

I cannot understand the difference between WCF (service-oriented) and Azure Functions or AWS Lambda (FaaS). It seems to me that both invoke remote functions, while WCF has a host. What is the technical difference between them?
WCF, or Windows Communication Foundation, is a framework for writing and consuming services. These are either web services or other kinds of service, e.g. TCP-based or even MSMQ-based services. This is, in my opinion, what you should be looking at for exposing your back end. WCF gives you the ability to easily specify a contract and implementation, while leaving the hosting of the service and its instantiation to IIS (IIS being Microsoft's web server, which also runs under the covers on Azure).
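To make the contrast concrete, here is a minimal sketch with hypothetical names (assuming System.ServiceModel for the WCF side and the Microsoft.Azure.WebJobs SDK for the Function side):

    using System.ServiceModel;

    // WCF: you declare a contract and an implementation; hosting and
    // instantiation are left to IIS or a self-hosted ServiceHost.
    [ServiceContract]
    public interface IGreetingService
    {
        [OperationContract]
        string Greet(string name);
    }

    public class GreetingService : IGreetingService
    {
        public string Greet(string name) => $"Hello, {name}";
    }

    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;

    // Azure Functions: a single method; the platform owns hosting,
    // scaling and invocation, so there is no contract or host to manage.
    public static class GreetFunction
    {
        [FunctionName("Greet")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
            => new OkObjectResult($"Hello, {req.Query["name"]}");
    }

The difference is visible in what you own: with WCF you own the contract, binding and host; with a Function you own only the method body.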
Azure, from your point of view, is a hosting provider. It helps you scale your application servers based on demand (e.g. the number of mobile clients downloading and installing your application).
A little marketing speak: Azure lowers the cost of ownership of your solutions because it takes away the initial investment of first figuring out (guessing) the amount of hardware you need and then building or renting a data center and/or hardware. It also provides some middleware for your applications, like AppFabric, so that they can communicate in the "cloud" a bit better. You also get load balancing on Azure, distributed hosting (e.g. Europe datacenters, USA datacenters...), failsafe mechanisms already in place (automatic instance instantiation if one were to fail) and, obviously, pay-as-you-go and pay-for-what-you-use benefits.
Here is the reference: Introduction to Azure Functions, Azure and WCF

Azure Face transactions and limits using containers provided by Cognitive Services

If I use Azure Face for verification using a container image running on servers on my premises, do I still need to abide by the TPS limits?
If I use the Face web API hosted by Microsoft: since face verification needs a faceId from Face - Detect, does that verification take two transactions? Does the same apply to Azure Face hosted on my own server using the Face container?
The TPS limits of the cloud API do not apply to the container; this is one of the reasons Microsoft offers the option for you to host the service yourself. Your TPS limit is instead bounded by the compute resources you allocate to your container(s).
Transaction counting for the cloud and container services is the same. As you've noted, face verification takes two transactions: one to generate the face ID, and another to compute the match.
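A minimal sketch of the two billed calls (assuming the Microsoft.Azure.CognitiveServices.Vision.Face NuGet package; the endpoint, key and known face ID are hypothetical, and the same code works against a container by pointing Endpoint at it):

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.CognitiveServices.Vision.Face;

    class VerifyExample
    {
        static async Task Main()
        {
            // Works against the cloud endpoint or a local Face container.
            var client = new FaceClient(new ApiKeyServiceClientCredentials("<key>"))
            {
                Endpoint = "http://localhost:5000" // hypothetical container address
            };

            // Transaction 1: Face - Detect returns a faceId for the probe image.
            var faces = await client.Face.DetectWithUrlAsync(
                "https://example.com/probe.jpg", returnFaceId: true);
            Guid probeId = faces[0].FaceId.Value;

            // Transaction 2: Face - Verify compares two face IDs.
            Guid knownFaceId = Guid.Parse("<faceId from an earlier Detect call>");
            var result = await client.Face.VerifyFaceToFaceAsync(probeId, knownFaceId);
            Console.WriteLine($"Identical: {result.IsIdentical} ({result.Confidence})");
        }
    }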

Service Fabric - Local Cluster - Queuing

I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the equivalent for queuing/pub-sub? Service Fabric is allowed since it is able to run locally and is "free". Other third-party messaging infrastructure, like RabbitMQ, is also off the table (at the moment).
I've built systems using a home-grown bus built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I could have SF services use a custom ICommunicationListener that exposes MSMQ, but, as I understand it, that would only be available inside the cluster. I could build an HTTP bridge (in SF) in front of those services to make them available outside the cluster, but then I'd lose the lifetime decoupling (a client being able to call a service via queues even if that service isn't online at the time), since the bridge itself wouldn't benefit from any of the properties of queuing.
I have a few possibilities, but all suffer from some malady that only exists because of running SF locally. Also, the same code needs to deploy easily to full Azure SF (where I can use ASB and this issue disappears), so I don't want to build two separate systems just because of where the solution is hosted in some instances.
Thanks for any tips.
You can build this yourself, for example using a BrokerService that distributes message data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes.
By running the queue system inside the cluster you won't introduce an external dependency.
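For example, publishing through the RabbitMQ.Client package might look like this (a sketch assuming the v6.x client API and a hypothetical "rabbitmq" host name resolving to the containerized broker inside the cluster):

    using System.Text;
    using RabbitMQ.Client;

    class PublishExample
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "rabbitmq" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // Durable queue so messages survive broker restarts
            // (the data itself lives on the mounted volume).
            channel.QueueDeclare("orders", durable: true,
                                 exclusive: false, autoDelete: false);

            var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null, body: body);
        }
    }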
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to specific implementations. SF runs on top of virtual machines; in the end, the only difference is that SF places the services on those machines, whereas with another solution you would have an agent deploying the services there or would deploy them manually. The challenges are the same.
It is clear from the description that your design requires a message queue. The concept of a queue is the same whether it is Service Bus, RabbitMQ or MSMQ; each of them provides the basic foundations of queuing plus the specifics of its implementation: some add transactions, some implement multiple patterns, and so on.
If you design against a specific implementation, you will couple your solution to that implementation, make your solution hard to maintain, and face challenges like the ones you described.
Solutions like NServiceBus and MassTransit remove much of this coupling from your code, and if you find these are not enough, you can create your own abstraction and then use configuration to tie your business logic to an implementation, as sketched below.
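A hypothetical sketch of such an abstraction (the names are illustrative, not from any particular library): business logic depends only on the interface, and configuration decides which transport is wired in per environment.

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public interface IMessageBus
    {
        Task PublishAsync<T>(string topic, T message);
        void Subscribe<T>(string topic, Func<T, Task> handler);
    }

    // In-memory stand-in for local clusters; a ServiceBusMessageBus or
    // RabbitMqMessageBus implementing the same interface would be swapped
    // in via configuration for other environments.
    public sealed class InMemoryMessageBus : IMessageBus
    {
        private readonly ConcurrentDictionary<string, List<Func<object, Task>>> _handlers = new();

        public void Subscribe<T>(string topic, Func<T, Task> handler) =>
            _handlers.GetOrAdd(topic, _ => new List<Func<object, Task>>())
                     .Add(m => handler((T)m));

        public async Task PublishAsync<T>(string topic, T message)
        {
            if (_handlers.TryGetValue(topic, out var handlers))
                foreach (var handler in handlers)
                    await handler(message);
        }
    }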
Despite the above advice, I would not recommend using different solutions per environment because, as said previously, each solution has its own implementation details, and they might not behave identically. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments, and when deployed to production you use Service Bus; they have different limitations, such as message size, retention period and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric
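A minimal send through System.Messaging might look like this (a sketch assuming .NET Framework with the MSMQ Windows feature installed on the node; the queue path is hypothetical):

    using System.Messaging; // .NET Framework; requires the MSMQ Windows feature

    class MsmqSendExample
    {
        static void Main()
        {
            const string path = @".\Private$\orders";

            if (!MessageQueue.Exists(path))
                MessageQueue.Create(path);

            using (var queue = new MessageQueue(path))
            {
                // Body plus label; the default XmlMessageFormatter
                // serializes the string body.
                queue.Send("{\"orderId\": 42}", "new-order");
            }
        }
    }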

Windows Service Bus evaluation

My management is evaluating the non-Azure Microsoft Windows Service Bus (Azure is out of consideration for security reasons). It will be used to set up a topic/subscription model with a number of WCF services using netMessagingBinding that we are building, so I just have a few basic questions about that.
Are there any specific hardware requirements, such as a dedicated server or a dedicated database, for WSB to run in a production environment?
It's easy to configure a WCF service to listen on a specific topic subscription. Is there any way for a WCF service to listen to multiple subscriptions?
Appreciate the answers.
You can install the service components and the databases all on one server (that is the default). However, for a number of reasons, we installed the services on a dedicated app server and created the Service Bus databases on an existing database server. The install package allows you to specify a different database server. Check this article for the minimum server requirements.
Yes, you can get one WCF service to listen to multiple subscriptions. You would need to create two (or more) System.ServiceModel.ServiceHost instances and run them inside one process. For example, we had one Windows service running two ServiceHosts; each host listened on a different queue and therefore implemented a different contract. This meant that where queues were logically grouped we didn't need a new Windows service per queue. You could do the same with subscriptions.
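A sketch of that pattern (the service types are hypothetical; the netMessagingBinding endpoints, one per queue or subscription, would live in app.config):

    using System;
    using System.ServiceModel;

    class Program
    {
        static void Main()
        {
            // Two hosts in one process, each listening on its own
            // queue/subscription via endpoints defined in app.config.
            var orderHost = new ServiceHost(typeof(OrderService));
            var invoiceHost = new ServiceHost(typeof(InvoiceService));

            orderHost.Open();
            invoiceHost.Open();

            Console.WriteLine("Both hosts listening; press Enter to stop.");
            Console.ReadLine();

            orderHost.Close();
            invoiceHost.Close();
        }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract(IsOneWay = true)]
        void Submit(string order);
    }

    public class OrderService : IOrderService
    {
        public void Submit(string order) { /* handle order message */ }
    }

    [ServiceContract]
    public interface IInvoiceService
    {
        [OperationContract(IsOneWay = true)]
        void Submit(string invoice);
    }

    public class InvoiceService : IInvoiceService
    {
        public void Submit(string invoice) { /* handle invoice message */ }
    }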
For question one, you will have to go through the exercise of hardware sizing. The good news is that WCF services can scale horizontally, so you can add servers if there are issues handling client load.
To do hardware sizing you will have to estimate the expected load and then do performance/scalability testing to figure out the load-bearing capacity of your Service Bus and services.
You can find a lot of resources for load testing, like this one: http://seroter.wordpress.com/2011/10/27/testing-out-the-new-appfabric-service-bus-relay-load-balancing/
Once you have done load testing and come up with the numbers, you can then do sizing using references like this one: http://msdn.microsoft.com/en-us/library/bb310550.aspx

WCF Domain Services vs Silverlight-enabled WCF services

I am working on a Silverlight project that consumes domain services, and I actually find that quite messy: one domain service class plus metadata. I have already worked with WCF services and found them very easy to update and handle, but modifying a domain service (as new fields or tables are added) is a real pain.
I want to know why people prefer domain services over Silverlight-enabled WCF services. I mean the advantages and disadvantages of both, and the performance implications.
After googling, I found these are the things you should consider:
To authenticate users faster in the domain
To authenticate resources (GPS etc.) faster for the users
Utilization of resources
Utilization of the network, decreasing the overall traffic in the network
The main benefit is user and password management, which could otherwise grow into a massive amount of work if accounts had to be managed individually on each independent server. The proposed change of migrating the whole platform to an Active Directory environment will assist in propagating changes (such as new users, password changes, new security requirements via GPO, etc.) to the servers, which will run as domain clients; only one or two will run as the primary and secondary domain controllers. Not all of these servers are going to host AD or be a domain controller; a server OS is used due to its robustness and reliability.
Disadvantages:
Cost of infrastructure
Good planning is a must
More complex structure for users