If I use Azure Face for verification via a container image running on servers on my premises, do I still need to abide by the TPS limits?
If I use the Face web API hosted by Microsoft, face verification needs a faceId from Face - Detect first. Does that mean one verification takes 2 transactions? Does the same apply to Azure Face hosted on my own server using the Face container?
The TPS limits of the cloud API do not apply to the container; this is one of the reasons Microsoft offers the option for you to host the service yourself. Your effective TPS limit is bound only by the compute resources you allocate to your container(s).
The transaction counting for the cloud and container services is the same. As you've noted, face verification takes two transactions: one to generate the faceId, and another to compute the match.
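To make the two-transaction flow concrete, here is a minimal sketch against the Face REST API. The endpoint, key, image URL, and faceId values are placeholders you would substitute; the same /face/v1.0 paths are served by both the cloud endpoint and the container (e.g. http://localhost:5000).

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FaceVerifySketch
{
    // Point this at the cloud endpoint or at your local container.
    const string Endpoint = "http://localhost:5000";
    const string Key = "<your-key>";

    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Key);

        // Transaction 1: Face - Detect returns a faceId for the image.
        var detectBody = new StringContent(
            "{\"url\":\"https://example.com/photo.jpg\"}", Encoding.UTF8, "application/json");
        var detectResponse = await client.PostAsync($"{Endpoint}/face/v1.0/detect", detectBody);
        Console.WriteLine(await detectResponse.Content.ReadAsStringAsync()); // [{"faceId":"...",...}]

        // Transaction 2: Face - Verify compares two previously detected faceIds.
        var verifyBody = new StringContent(
            "{\"faceId1\":\"<id-1>\",\"faceId2\":\"<id-2>\"}", Encoding.UTF8, "application/json");
        var verifyResponse = await client.PostAsync($"{Endpoint}/face/v1.0/verify", verifyBody);
        Console.WriteLine(await verifyResponse.Content.ReadAsStringAsync()); // {"isIdentical":...,"confidence":...}
    }
}
```

Note that each detect call counts as its own transaction, so verifying two fresh images would cost two detects plus one verify.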
I cannot understand the difference between WCF (service-oriented) and Azure Functions or AWS Lambda (FaaS). It seems to me that both invoke remote functions, while WCF has a host. But what is the technical difference between them?
WCF, or Windows Communication Foundation, is a framework for writing and consuming services. These are either web services or other kinds of services, e.g. TCP-based or even MSMQ-based ones. This is, in my opinion, what you should be looking at for exposing your back end. WCF lets you easily specify a contract and an implementation while leaving the hosting of the service and its instantiation to IIS (IIS being Microsoft's web server, which also runs under the covers on Azure).
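As a minimal sketch of that contract/implementation split (the IGreetingService name is purely illustrative):

```csharp
using System.ServiceModel;

// The contract: the operations the service promises to its callers.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

// The implementation: IIS (or any other host) instantiates this on demand;
// the endpoint, binding, and address live in configuration, not in code.
public class GreetingService : IGreetingService
{
    public string Greet(string name) => "Hello, " + name;
}
```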
Azure, from your point of view, is a hosting provider. It helps you scale your application servers based on demand (e.g. the number of mobile clients downloading and installing your application).
A little marketing speak: Azure lowers the cost of ownership of your own solutions because it takes away the initial investment of first figuring out (guessing) the amount of hardware you need and then building or renting a data center and/or hardware. It also provides some middleware for your applications, like AppFabric, so that they can communicate in the "cloud" a bit better. You also get load balancing on Azure, distributed hosting (e.g. Europe datacenters, USA datacenters, ...), failover mechanisms already in place (automatic instance re-instantiation if one were to fail), and obviously pay-as-you-go, pay-for-what-you-use benefits.
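For contrast, here is roughly the same operation as an Azure Function, in the v2 C# class-library style (the function name is illustrative). Note what is missing compared to the WCF sketch above: no contract, no ServiceHost, no IIS configuration; the platform owns hosting, instantiation, and scaling, and you supply only the function body and its trigger.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GreetFunction
{
    // Invoked per HTTP request; the runtime scales instances out and in.
    [FunctionName("Greet")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        string name = req.Query["name"];
        return new OkObjectResult("Hello, " + name);
    }
}
```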
Here is the reference: Introduction to Azure Functions, Azure and WCF
I am reading about people having trouble using Redis Cache in Azure Functions on the Consumption plan.
Please advise on best practice for using caching in Azure Functions 2.0.
My Redis cache will be used by APIs as well as Azure Functions (Consumption plan). Since the connection object is supposed to be a singleton and reused, but in the case of Functions a new connection could be created on every request, will this create problems?
Please be aware of the various limits on Azure Redis Cache performance per pricing tier, especially SSL and non-SSL connections per second. Please also be aware of the benefits and behaviors of the StackExchange.Redis configuration options.
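The usual way to avoid connection churn is to hold a single ConnectionMultiplexer in a static Lazy<T>, so that every function execution on the same instance reuses it instead of connecting per request. A minimal sketch, assuming the connection string lives in an app setting named REDIS_CONNECTION:

```csharp
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One multiplexer per function-app instance; Lazy<T> makes the first
    // initialization thread-safe, and all later accesses reuse the same
    // connection rather than opening a new one per invocation.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                Environment.GetEnvironmentVariable("REDIS_CONNECTION")));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}
```

Inside a function body you would then call RedisConnection.Connection.GetDatabase() rather than connecting on each request. Bear in mind that on the Consumption plan each scaled-out instance still gets its own connection, which is why the per-tier connection limits above matter.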
To resolve a few issues we are running into with Docker and running multiple instances of some services, we need to be able to share values between running instances of the same Docker image. The original solution I found was to create a storage account in Azure (where we are running our Kubernetes instance that houses the containers) and a Key Vault in Azure, accessing both via the well-defined APIs that Microsoft has provided for Data Protection (detailed here).
Our architect instead wants to use Kubernetes Persistent Volumes, but he has not provided information on how to accomplish this (he just wants to save money on the Azure subscription by not having an additional storage account or key storage). I'm very new to Kubernetes, have no real idea how to accomplish this, and my searches so far have not come up with much of use.
Is there an extension method that should be used for Persistent Volumes? Would this just act like a shared file location and be accessible with the PersistKeysToFileSystem API for Data Protection? Any resources that you could point me to would be greatly appreciated.
A PersistentVolume with Kubernetes in Azure will not give you exactly the same functionality as Key Vault in Azure; compare the two below (a sketch of the file-system approach follows the comparison).
PersistentVolume:
Store locally on a mounted volume on a server
Volume can be encrypted
Volume moves with the pod.
If the pod starts on a different server, the volume moves.
Accessing the volume from other pods is not that easy.
You can control performance by assigning guaranteed IOPS to the volume (from the cloud provider)
Key Vault:
Store keys in a centralized location managed by Azure
Data is encrypted at rest and in transit.
You rely on a remote API rather than a local file system.
There might be a performance hit by going to an external service
I assume this is not a major problem within Azure.
Kubernetes pods can access the service from anywhere as long as they have network connectivity to the service.
Less maintenance time, since it's already maintained by Azure.
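To answer the extension-method part of the question directly: yes, if every replica mounts the same shared volume (e.g. an azureFile-backed, ReadWriteMany PersistentVolumeClaim), you can point the standard Data Protection API at that path with PersistKeysToFileSystem. A minimal sketch; the /mnt/dp-keys mount path and application name are hypothetical:

```csharp
using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDataProtection()
            // Same application name on every instance so they can read
            // each other's protected payloads.
            .SetApplicationName("my-shared-app")
            // The directory backed by the shared PersistentVolume mount.
            .PersistKeysToFileSystem(new DirectoryInfo("/mnt/dp-keys"));
    }
}
```

Note that keys persisted this way are stored unencrypted at rest unless you also call one of the ProtectKeysWith* methods, which is exactly the trade-off versus Key Vault described in the comparison above.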
My goal is to host web APIs in Azure using Azure Service Fabric. Azure Traffic Manager seems geared towards Infrastructure as a Service (IaaS) concerns such as load balancing, fault tolerance, etc. Azure Service Fabric is geared towards the Platform as a Service (PaaS) model with its own clustering. If my goal is to host web APIs (authored with Azure Service Fabric and without the need for Web Apps) in Azure, can I skip Azure Traffic Manager because Service Fabric already provides the clustering? If not, why should I use Azure Traffic Manager? Am I missing something?
You are missing the fact that everything you have mentioned in there is reliant on a single region and that occasionally regions die.
If you have a Service Fabric cluster located in West Europe, for instance, and the West Europe region dies, you have lost your entire solution.
If instead you have Service Fabric clusters in both West Europe and North Europe, with data replicated and a Traffic Manager profile spread across the two of them, then when a region dies you still have a working solution.
When building a highly available solution, you need to be able to point to any item within it and ask what happens if it fails. That goes for everything from a single VM up to a whole region of datacenters.
Of course, you need to weigh that against the cost of replicating your entire solution to another datacenter purely for resilience purposes. Region failures are very rare occurrences, and if you can handle some downtime you might be better off simply ensuring your data is replicated to another region (by using GRS storage) and having a process to bring it back online from that region.
But of course by doing that, you have made a decision about what you want to happen in the event of that failure. Which is the whole point.
My management is evaluating the non-Azure Microsoft Windows Service Bus (Azure is out of consideration for security reasons). It will be used to set up a topic/subscription model with a number of WCF services using netMessagingBinding that we are building, so I just have a few basic questions about that.
Are there any specific hardware requirements, like a dedicated server, a dedicated database, etc., for WSB to run in a production environment?
It's easy to configure a WCF service to listen on a specific topic subscription. Is there any way for a WCF service to listen to multiple subscriptions?
Appreciate the answers.
You can install the service components and the databases all on one server (that is the default). However, for a number of reasons, we installed the services on a dedicated app server and created the Service Bus databases on an existing database server. The install package allows you to specify a different DB server. Check this article for the minimum server requirements.
Yes, you can get one WCF service process to listen to multiple subscriptions. You would need to create two (or more) System.ServiceModel.ServiceHost instances and run them inside one process, as sketched below. For example, we had one Windows service running two ServiceHosts. Each host listened on a different queue and therefore implemented a different contract. This meant that where queues were logically grouped, we didn't need a new Windows service per queue. You could do the same with subscriptions.
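A minimal sketch of that pattern; the contract and service names are hypothetical, and the actual netMessagingBinding endpoints (each pointing at its own subscription, e.g. sb://yourserver/yournamespace/topicA/subscriptions/subA) would be defined in App.config as usual:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderContract
{
    [OperationContract(IsOneWay = true)]
    void Submit(string order);
}

public class OrderService : IOrderContract
{
    public void Submit(string order) => Console.WriteLine("Order: " + order);
}

[ServiceContract]
public interface IAuditContract
{
    [OperationContract(IsOneWay = true)]
    void Record(string entry);
}

public class AuditService : IAuditContract
{
    public void Record(string entry) => Console.WriteLine("Audit: " + entry);
}

class Program
{
    static void Main()
    {
        // One process, two hosts: each ServiceHost listens on its own
        // subscription endpoint taken from configuration.
        var hostA = new ServiceHost(typeof(OrderService));
        var hostB = new ServiceHost(typeof(AuditService));
        hostA.Open();
        hostB.Open();

        Console.WriteLine("Listening on both subscriptions. Press Enter to stop.");
        Console.ReadLine();

        hostA.Close();
        hostB.Close();
    }
}
```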
For question one, you will have to go through the exercise of hardware sizing. The good news is that WCF services can scale horizontally, so you can add servers if there are issues handling the client load.
To do hardware sizing you will have to estimate the expected load and then do performance/scalability testing to figure out the load-bearing capacity of your Service Bus/services.
You can find a lot of resources for load testing, like this one: http://seroter.wordpress.com/2011/10/27/testing-out-the-new-appfabric-service-bus-relay-load-balancing/
Once you have done the load testing and come up with the numbers, you can do the sizing using references like this one: http://msdn.microsoft.com/en-us/library/bb310550.aspx