Planning to migrate our existing application to Azure.
Our existing architecture, with its security flow, is as follows:
An ASP.NET MVC 3.0 UI layer that takes a username and password from the user.
We are planning to migrate the UI layer onto a compute cloud, and it will be accessible at, say, uilayerdomainname.com, which would have an SSL cert.
A WCF REST web services layer that, amongst other things, handles authentication. This is currently on, say, servicename.cloudapp.net. (We could map it to servicelayername.com and get an SSL cert for that domain name as well.)
SQL Azure database
The UI layer sends the credentials to the service layer, which authenticates them against the SQL Azure database.
Question
Both the WCF compute cloud and the UI layer are in the same region in Azure. Would the communication between these two be prone to man-in-the-middle attacks? Does my WCF compute cloud need SSL as well? We do have two domain names with SSL certs, so we could just map the services to one.
Is there any way I can restrict traffic between the UI layer and the WCF compute cloud, i.e., allow only the UI layer to access the service layer?
Would the performance be better if I publish both the WCF services and the UI layer on the same instance? It sort of shoots down the nice layered architecture, but if it improves performance I could go with it. We don't want to jump through too many hoops to accommodate the app to Azure lest it become difficult to migrate out of it.
If you host your services in a worker role, then they can be available only to your web role. You can also host them elsewhere and monitor requests in code. Azure roles in the same deployment can communicate with one another in a very specific way that is not available outside of the deployment.
In Azure deployments, you need to very specifically define your public endpoints because the roles are hosted behind a load balancer. If you host your WCF service from within a worker role, it will not be accessible publicly.
Hope this helped.
If you configure the WCF service and UI layer to only communicate through internal endpoints then the communication is private. There is no need to purchase or configure an SSL certificate for the WCF service unless it is made public.
Further, the only traffic on these internal endpoints will be between your instances, so the traffic is already restricted to your UI layer and the WCF service.
This is the case for both Web roles and worker roles: you can configure a Web role hosting your WCF service to have a private internal endpoint.
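To make that concrete, here is a minimal sketch of how the UI role could reach the WCF service over a private internal endpoint. The role name ("WcfServiceRole"), endpoint name ("InternalWcf"), and the IAuthenticationService contract are illustrative assumptions rather than names from the question; the internal endpoint itself would be declared in the WCF role's service definition.

```csharp
// Sketch only: assumes a role named "WcfServiceRole" declaring an internal endpoint
// named "InternalWcf" in its service definition, and the illustrative contract below.
using System;
using System.Linq;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IAuthenticationService
{
    [OperationContract]
    bool ValidateCredentials(string userName, string password);
}

public static class InternalServiceClient
{
    public static IAuthenticationService Create()
    {
        // Internal endpoints are resolved at runtime from the role environment;
        // they never pass through the Azure load balancer and are unreachable
        // from outside the deployment.
        RoleInstanceEndpoint endpoint = RoleEnvironment.Roles["WcfServiceRole"]
            .Instances
            .First()
            .InstanceEndpoints["InternalWcf"];

        var address = new EndpointAddress(
            string.Format("net.tcp://{0}/AuthenticationService", endpoint.IPEndpoint));

        // Transport security is relaxed here only because the traffic stays on the
        // deployment's private network; a public endpoint would still need SSL.
        var binding = new NetTcpBinding(SecurityMode.None);
        return ChannelFactory<IAuthenticationService>.CreateChannel(binding, address);
    }
}
```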
Depending on the architecture of your system you may see better performance if you have the UI and WCF layer on the same machine.
If your interface is "chatty" and calls the WCF service several times for each UI request then you'll definitely see a performance improvement. If there's just one or two calls then the improvement is likely to be minimal compared to the latency of your database.
Related
I am building a micro-service-oriented .NET Core web application and now I want to add real-time communication. Is it possible to create a SignalR server and publish it on Azure? I want to use it in my microservices to send messages to users when a certain event occurs.
Yes, you can deploy your app to Azure and point your users to your hub endpoint with no problems. You have two options here:
Use SignalR and manually manage the connections and the other SignalR plumbing yourself if you are going to scale your application. For example, when you have two web apps and a client connects to one of them, you need to "tell" the other app that a new client has connected, using, for example, a Redis backplane.
Use Azure SignalR Service, and this kind of management is not needed; all you need to provide is one app with the hub logic. When a client connects to your hub, it is automatically redirected to the Azure SignalR Service.
You can read more about these two options here:
https://learn.microsoft.com/pt-pt/azure/azure-signalr/signalr-concept-scale-aspnet-core
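A minimal configuration sketch of the two options, assuming an ASP.NET Core Startup class and an illustrative hub called NotificationHub (option 1 needs the Microsoft.AspNetCore.SignalR.StackExchangeRedis package, option 2 the Microsoft.Azure.SignalR package):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

public class NotificationHub : Hub { }

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Option 1: self-hosted SignalR with a Redis backplane for scale-out.
        // services.AddSignalR().AddStackExchangeRedis("your-redis-connection-string");

        // Option 2: Azure SignalR Service manages the connections; the connection
        // string is read from the Azure:SignalR:ConnectionString setting.
        services.AddSignalR().AddAzureSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapHub<NotificationHub>("/notifications"));
    }
}
```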
Why not deploy SignalR myself?
It is still a valid approach to deploy your own Azure web app supporting ASP.NET Core SignalR as a backend component to your overall web application.
One of the key reasons to use the Azure SignalR Service is simplicity. With Azure SignalR Service, you don't need to handle problems such as performance, scalability, and availability. These issues are handled for you with a 99.9% service-level agreement.
Also, WebSockets are typically the preferred technique to support real-time content updates. However, load balancing a large number of persistent WebSocket connections becomes a complicated problem to solve as you scale. Common solutions leverage DNS load balancing, hardware load balancers, and software load balancing. Azure SignalR Service handles this problem for you.
Another reason may be you have no requirements to actually host a web application at all. The logic of your web application may leverage Serverless computing. For example, maybe your code is only hosted and executed on demand with Azure Functions triggers. This scenario can be tricky because your code only runs on-demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. See the overview on how to use SignalR Service with Azure Functions for more details.
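As a rough illustration of that serverless scenario, here is roughly what an Azure Functions app using the SignalR Service bindings can look like. The hub and target names are made up, and the attributes come from the Microsoft.Azure.WebJobs.Extensions.SignalRService package; treat this as a sketch, not the original poster's design.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NotificationFunctions
{
    // Clients call this to obtain connection details for the SignalR Service.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }

    // Broadcasts a message to every connected client; the service holds the
    // long-lived connections, so the function can stay short-lived.
    [FunctionName("broadcast")]
    public static Task Broadcast(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] string message,
        [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> messages)
    {
        return messages.AddAsync(new SignalRMessage
        {
            Target = "newEvent",
            Arguments = new object[] { message }
        });
    }
}
```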
Yes, you can; here is the official quick-start sample:
https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-quickstart-dotnet-core
I have a web app that talks to a service layer via WCF. These need to be internal endpoints and should use net.tcp bindings. However, I also have some services in the service layer that don't need to be consumed internally but need to be exposed to the outside world, i.e. via HTTP/HTTPS input endpoints. What is the best way to implement this in Azure?
I was hoping someone could provide clarification / advice on the following points:
If I use internal endpoints, are these load balanced? There seems to be a lot of contradictory info around the web. I have read that you need to implement your own algorithm, but I have also read that this has now been implemented by Microsoft and is automatic.
Should the service layer be a web role or a worker role? It seems there is a bit of a workaround needed to get internal TCP bindings working with a web role.
Is there a specific set of guidelines as to which one to use, i.e. web role or worker role?
I am assuming I am going to need two instances regardless of whether I use a web role or a worker role, but wouldn't this depend on the first point? I.e., if there is no load balancer, is there even any point in having two worker role instances?
Would it be better to split my service layer into two layers? One to expose the internal endpoints and another to expose the public endpoints?
Thanks in advance.
Take a look at Azure Service Bus; you can create relays there to expose your internal WCF services to the outside world.
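A rough sketch of what such a relay host can look like with the WindowsAzure.ServiceBus package; the namespace name, key, and IOrderService contract are placeholders, not details from the question.

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

public class OrderService : IOrderService
{
    public string GetStatus(int orderId) { return "Pending"; }
}

class RelayHost
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));

        // The listener registers itself with your Service Bus namespace; external
        // callers connect to the relay address instead of to your role directly.
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "your-namespace", "orders");
        var endpoint = host.AddServiceEndpoint(typeof(IOrderService), new NetTcpRelayBinding(), address);

        // Authenticate the listener against Service Bus with a shared access key.
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                "RootManageSharedAccessKey", "your-key")
        });

        host.Open();
        Console.WriteLine("Listening on the relay. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}
```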
I could not find a direct answer to this. Basically, I have two services, MainService and SubService. The idea is that the client software calls some methods in MainService, while SubService calls another part of MainService.
I am deploying to Azure, and I want to have two separate interfaces in MainService: one for the client and one for SubService. I don't want the client software to have any chance of accessing the interface that SubService uses.
Given that I am new to WCF services, I am not sure how to approach this. Do I need multiple web roles for the different interfaces, each accessing the same database and handling concurrency issues there, or can I somehow include multiple interfaces in one service but restrict their availability by, for example, certificates? I am not exactly sure about Azure firewall rules, but if the interface in MainService that is meant for SubService could be mapped to a separate port placed behind a firewall rule, that would also be a viable solution.
tl;dr: I need two separate interfaces in a WCF service, one for the client software (open to the outside world) and one for a sub-system service. Both services are to run in Azure. What are my options?
You can use standard WCF authorization and authentication. For example: http://msdn.microsoft.com/en-us/library/ff647503.aspx
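A minimal sketch of that approach: the contract and role names are illustrative, and it assumes the service behavior is configured (e.g. via principalPermissionMode) so that role demands can be evaluated.

```csharp
using System.Security.Permissions;
using System.ServiceModel;

[ServiceContract]
public interface IClientFacing
{
    [OperationContract]
    string GetPublicData();
}

[ServiceContract]
public interface ISubServiceFacing
{
    [OperationContract]
    void SubmitInternalUpdate(string payload);
}

public class MainService : IClientFacing, ISubServiceFacing
{
    public string GetPublicData()
    {
        return "public";
    }

    // Demand that the authenticated caller belongs to the "SubService" role;
    // ordinary clients are rejected before the method body ever runs.
    [PrincipalPermission(SecurityAction.Demand, Role = "SubService")]
    public void SubmitInternalUpdate(string payload)
    {
        // internal-only work goes here
    }
}
```

You would then expose the two contracts on separate endpoints (different addresses or ports), which also lines up with the firewall idea in the question.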
If you wanted to use Azure Service Bus with relay messaging, you could use some of the authentication and authorization provided by Service Bus. But, I'm not sure if there's any extra value there compared to just hosting your WCF in a web role (you'd have to do that in either case, but the access to the service would be decoupled from the clients via Service Bus).
The company I'm working at does not have great infrastructure; it is treated as one big network, with no network segregation. As such, when we're developing applications we have a TEST/UAT/PROD/DR setup. I have been developing a suite of services that have been deployed to all of the above environments. How do I make sure that a developer cannot call a production web service by accident? Is there any way to restrict the service by caller (i.e., server name)?
BTW all these services are internal and are not externally available.
Thanks again for your help.
Josh
You could use role-based authorization:
Authorization In WCF-Based Services
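One hedged sketch of combining that with restricting by caller: a custom ServiceAuthorizationManager that checks the caller's address against an allow-list. The addresses are placeholders, and a remote address can be spoofed or hidden by proxies, so treat this as defence in depth rather than real authentication.

```csharp
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Channels;

public class CallerAllowListAuthorizationManager : ServiceAuthorizationManager
{
    // Placeholder addresses of the servers allowed to call this environment.
    private static readonly string[] AllowedAddresses = { "10.0.1.10", "10.0.1.11" };

    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        // The transport layer records where the incoming message came from.
        var remote = operationContext.IncomingMessageProperties[RemoteEndpointMessageProperty.Name]
            as RemoteEndpointMessageProperty;

        return remote != null && AllowedAddresses.Contains(remote.Address);
    }
}
```

The manager is plugged in through the serviceAuthorization behavior (serviceAuthorizationManagerType) in the service's configuration.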
Currently we run a UI web role and a web service web role (WCF REST) on Azure. Each role contains two instances (for load balancing and for meeting the SLA requirements).
The UI web role and the web service web role are within the same subscription but in different deployments. We do not want to merge the code bases (maintainability, etc.). So the UI layer is on xyz.cloudapp.net and the web service layer is on abc.cloudapp.net.
Currently, the requirement is to make the web service web role an internal endpoint, i.e. only accessible by the UI layer. The literature on configuring internal endpoints and accessing them from a different deployment is not very clear.
I am assuming that the two different roles need to be part of a single deployment for this to work. Can this be done without affecting the deployments? Any pointers in the right direction would be greatly appreciated.
Internal endpoints are only accessible within a single deployment, and they do not route through the load balancer (so if you have two instances of your WCF services accessible on an internal endpoint, you'd need to distribute calls between the instances yourself; a sketch of that follows). This, of course, would require you to put both your UI web role and your WCF web role into the same deployment.
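For example, a minimal sketch of distributing calls yourself across the instances behind an internal endpoint; the role name "WcfWebRole" and endpoint name "InternalWcf" are illustrative, not from the question.

```csharp
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class InternalEndpointSelector
{
    private static int _next;

    // Simple round-robin across all instances of the WCF role, since internal
    // endpoints are not behind the Azure load balancer.
    public static IPEndPoint NextEndpoint()
    {
        var instances = RoleEnvironment.Roles["WcfWebRole"].Instances;
        int index = (Interlocked.Increment(ref _next) & int.MaxValue) % instances.Count;
        return instances[index].InstanceEndpoints["InternalWcf"].IPEndpoint;
    }
}
```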
You might want to consider Service Bus for a secure way of reaching your WCF services from your web role instances. Or expose the WCF services via an input endpoint but secure the service.
There's an approach I like to call the virtual DMZ that should meet your needs: http://brentdacodemonkey.wordpress.com/?s=virtual+dmz
It leverages ACS and WCF bindings to allow you to create access control on input endpoints (which are then load balanced). Of course, if you don't want something that robust, you can go with just a standard old WCF mutual-auth scenario.
That said, David makes an excellent point. Internal endpoints are only accessible within a single deployed service. This is because that service represents an isolation boundary (think of a virtual LAN branch), and only input endpoints can be addressed from outside of that boundary.
Have you considered using ACS (Access Control Service) to restrict access to your WCF endpoint using claims-based authentication?
There are numerous protection schemes you could provide via WCF bindings.
Internal endpoints only allow communication between roles in the same deployment. If you have two separate deployments (abc.cloudapp.net and xyz.cloudapp.net), internal endpoints won't help you.