I am building a microservice-oriented .NET Core web application and now I want to add real-time communication. Is it possible to create a SignalR server and publish it on Azure? I want to use it in my microservices to send messages to users when a certain event occurs.
Yes, you can deploy your app to Azure and point your users to your hub endpoint with no problems. You have two options here:
Use SignalR and manually manage the connections and the rest of the SignalR plumbing yourself if you are going to scale your application. For example, when you have two web apps and a client connects to one of them, you need to "tell" the other app that a new client has connected, using, for example, a Redis backplane.
Use Azure SignalR Service, and this kind of management is not needed; all you need to provide is one app with the hub logic. When a client connects to your hub, it is automatically redirected to Azure SignalR Service.
You can read more about these two options here:
https://learn.microsoft.com/pt-pt/azure/azure-signalr/signalr-concept-scale-aspnet-core
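For context, wiring the second option into an ASP.NET Core app is essentially a one-line change in Startup. A minimal sketch, assuming the Microsoft.Azure.SignalR package; NotificationHub and the /notifications route are hypothetical names:

    // Startup.cs
    public void ConfigureServices(IServiceCollection services)
    {
        // AddAzureSignalR reads the connection string from the
        // Azure:SignalR:ConnectionString configuration key by default.
        services.AddSignalR().AddAzureSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapHub<NotificationHub>("/notifications"));
    }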
Why not deploy SignalR myself?
It is still a valid approach to deploy your own Azure web app supporting ASP.NET Core SignalR as a backend component to your overall web application.
One of the key reasons to use the Azure SignalR Service is simplicity. With Azure SignalR Service, you don't need to handle problems like performance, scalability, and availability; these issues are handled for you, with a 99.9% service-level agreement.
Also, WebSockets are typically the preferred technique to support real-time content updates. However, load balancing a large number of persistent WebSocket connections becomes a complicated problem to solve as you scale. Common solutions leverage DNS load balancing, hardware load balancers, and software load balancing. Azure SignalR Service handles this problem for you.
Another reason may be that you have no requirement to host a web application at all. The logic of your web application may leverage serverless computing. For example, maybe your code is only hosted and executed on demand with Azure Functions triggers. This scenario can be tricky because your code only runs on demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. See the overview on how to use SignalR Service with Azure Functions for more details.
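As a rough illustration of that serverless scenario, a "negotiate" function can hand out connection details while Azure SignalR Service holds the actual client connections. A minimal sketch, assuming the in-process Microsoft.Azure.WebJobs.Extensions.SignalRService bindings; the hub name "notifications" is hypothetical:

    using Microsoft.AspNetCore.Http;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Azure.WebJobs.Extensions.SignalRService;

    public static class NegotiateFunction
    {
        // Clients call this endpoint to get a URL and access token for the
        // Azure SignalR Service; the service keeps the long-lived connection,
        // so the function itself stays stateless and runs only on demand.
        [FunctionName("negotiate")]
        public static SignalRConnectionInfo Negotiate(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
            [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
        {
            return connectionInfo;
        }
    }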
Yes, you can. Here is the official quickstart sample:
https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-quickstart-dotnet-core
Related
I know that it uses SignalR and requires a persistent connection over the network, which means there will be some kind of sticky-session problem, which in turn means we cannot just scale the deployment in a braindead-easy way like in k8s.
I heard that SignalR can be configured to use a Redis backplane, but there doesn't seem to be an easy way to do it with ASP.NET Core. What steps do I need to take to scale server-side Blazor, where the core part is scaling SignalR? How do we integrate it with cloud-native load balancers?
You can use Azure SignalR Service for scaling the connections in Azure. The docs might help you out on this, and a Redis backplane is also straightforward to wire up, as sketched below.
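A minimal sketch of the Redis backplane option, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package and a placeholder Redis address. Note that for server-side Blazor the circuit state still lives in the server's memory, so you also need session affinity (sticky sessions) at the load balancer, or Azure SignalR Service:

    public void ConfigureServices(IServiceCollection services)
    {
        // Every instance publishes hub messages through Redis so that
        // clients connected to different instances all receive them.
        services.AddSignalR()
                .AddStackExchangeRedis("my-redis:6379"); // hypothetical Redis address
    }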
I can't find any example code for publishing ASP.NET Web Forms websites to Azure Functions. Months ago I tried to replicate the C# example, but I ended up only being able to use the precompiled batch function type.
I want to publish VB.NET web apps - any framework version, using Web Deploy...
Here are some important concepts you should know about Azure Web Apps and Azure Functions:
Azure Web App:
Azure Web App is a sandbox. The only way an Azure web app can be accessed via the internet is through the two already-exposed TCP ports: HTTP (80) and HTTPS (443).
For a Node.js app deployed to Azure, Azure will create a named pipe for your server to listen on and pass requests from port 443 (as you use HTTPS) to that named pipe.
Azure Function:
Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Python or PHP. Pay only for the time your code runs and trust Azure to scale as needed. Azure Functions lets you develop serverless applications on Microsoft Azure.
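To make "just the code you need" concrete, here is a minimal sketch of a C# HTTP-triggered function; the function name and query parameter are made up:

    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;

    public static class HelloFunction
    {
        // One request handler, with no web project or server plumbing around it.
        [FunctionName("Hello")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
        {
            string name = req.Query["name"];
            return new OkObjectResult($"Hello, {name ?? "world"}");
        }
    }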
API apps and Web apps are pretty much the same deal. Logic Apps and Functions are the same in the sense that they allow you to do something in response to an event or on a schedule, but Functions are a way to run code (or an existing app), whereas Logic Apps are more like a workflow constructor, where you take existing actions and chain them together (so no coding, or almost none).
Deploy:
If you use FTP to deploy your Web Forms app to Azure Functions, there will be no problems with the deployment, but the webpage will not display.
Note:
Although Azure Functions and Azure Web Apps are very similar in many respects, you will still find differences if you deploy a Web Forms app: even though you can deploy your project just as you would to an Azure Web App, the Function app won't display any webpages.
I have a question about a design; I hope you can clarify my doubt.
I have a specific requirement in which Mule is used just to expose the back-end services in an API gateway. The back-end services are written in Spring Boot and other technologies, and all of these services need to be exposed in the API gateway.
Is this a good practice, and if so, how can we do it?
I saw that in API Manager we can create a proxy layer on top of services developed in Mule, but is it possible to create proxies for services developed in different technologies?
Absolutely. For creating a proxy service, it doesn't matter what technology the back-end service is built with.
It can create a proxy layer for any kind of back-end service, whether it lives locally, in the cloud, or at another remote location, as long as the service URL is accessible.
This proxy creates an additional layer that hides the actual URL from the external world.
It doesn't matter what technology you use for development as long as the back ends are REST services and are accessible to the CloudHub application. You can deploy them on-premises and integrate your local runtime with CloudHub. Also, Mule supports Spring projects, and you can configure your Spring project/details directly inside Mule.
I have a WPF client-server application. The scenario is that clients connect to the server and the server pushes data to them periodically. I am a bit confused about which technology and approach I should choose for notifying clients.
SignalR is best for web applications, I think, and I have a desktop application. With a WCF service, we can implement push notifications through a duplex channel and callbacks. So can you please guide me on the merits and demerits of using SignalR versus a WCF service?
Thanks
Below are my observations from experience:
SignalR pros:
Easy to start with, lower learning curve. You can easily run an example found on the web
Exception handling (e.g. connection drops, timeouts) is embedded inside the API
SignalR cons:
Only supports the HTTP protocol
Duplex pros:
Supports TCP in addition to HTTP. This can be a serious performance gain if you know your client types and your system runs in a closed network. Also, working over TCP gives more connection stability than HTTP
Duplex cons:
Higher learning curve; it is harder to get started and reach a stable solution. Want to verify this? Download a duplex sample and a SignalR sample from the web and see how much time you spend getting each one to run successfully.
You need to handle all the exceptional cases (connection drops, timeouts, etc.)
I know I am not the only one who has faced serious timeout problems when using a duplex service for a long time; you need to make service calls periodically to keep client connections alive.
By the way, APIs exist for JavaScript, desktop, and Silverlight projects to consume SignalR services.
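For example, with the classic SignalR 2.x .NET client package (Microsoft.AspNet.SignalR.Client), a WPF or console client is only a few lines. A minimal sketch; the URL, hub name, and method name are hypothetical:

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;

    public static class NotificationClient
    {
        public static async Task ConnectAsync()
        {
            var connection = new HubConnection("http://myserver/signalr"); // hypothetical URL
            var hub = connection.CreateHubProxy("NotificationHub");        // hypothetical hub

            // Handle messages pushed from the server.
            hub.On<string>("notify", message => Console.WriteLine(message));

            await connection.Start();
        }
    }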
SignalR is not just about the web. SignalR server-side code does not care about the technology of its clients; you just need to have implementations on the client side.
If we isolate pushing data to the client, I would strongly recommend SignalR, as it's much simpler than WCF in this respect. I had my share of problems with WCF, and I guess you have had some yourself.
I found a simple console/web application sample here.
In general, duplex WCF with callbacks, like here, seems very messy to me; there is a lot of server-side configuration, which is why I think SignalR is simpler.
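To give a feel for that overhead, here is a minimal sketch of what a duplex WCF contract involves; all names are hypothetical, and you still need a duplex-capable binding (e.g. netTcpBinding or wsDualHttpBinding) in configuration:

    using System.ServiceModel;

    // The service contract must declare its callback contract up front.
    [ServiceContract(CallbackContract = typeof(INotificationCallback))]
    public interface INotificationService
    {
        [OperationContract]
        void Subscribe();
    }

    public interface INotificationCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnNotification(string message);
    }

    public class NotificationService : INotificationService
    {
        public void Subscribe()
        {
            // Grab the callback channel for the calling client and push to it.
            var callback = OperationContext.Current.GetCallbackChannel<INotificationCallback>();
            callback.OnNotification("subscribed");
        }
    }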
In addition, you can't use duplex (AFAIK) with JavaScript and Objective-C.
I think you have already got lots of data points about each of them, but selecting SignalR will give you an added advantage in development effort, which in most cases is a major deciding factor when choosing a technology.
You don't need to worry about API development, testing, and so on, and you can focus on your own implementation of the project.
Hope it helps!
SignalR can now easily be used with multiple clients from JavaScript and .NET (both WinForms and WPF), and it can even be used with a C++ client. Using a self-hosted .NET SignalR server (OWIN) is a really nice way to have a standalone server that pushes/receives/broadcasts to multiple clients. The only thing that may be easier is ZeroMQ with its publish-subscribe methodology.
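A rough sketch of such a self-hosted server, assuming the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages; the hub and method names are made up:

    using System;
    using Microsoft.AspNet.SignalR;
    using Microsoft.Owin.Hosting;
    using Owin;

    class Program
    {
        static void Main()
        {
            // Standalone server; no IIS required.
            using (WebApp.Start<Startup>("http://localhost:8080/"))
            {
                Console.WriteLine("SignalR server listening on http://localhost:8080/");
                Console.ReadLine();
            }
        }
    }

    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR(); // exposes hubs under /signalr
        }
    }

    public class NotificationHub : Hub
    {
        public void Broadcast(string message)
        {
            Clients.All.notify(message); // push to every connected client
        }
    }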
One point that nobody has raised so far:
SignalR 1.0.1 requires .NET 4 on the server and client. Depending on the version of the client and server that you are targeting, that might be an important factor to consider.
If you just want to update periodically with new data, you might be better off using plain WCF with a polling mechanism on the client side rather than either duplex WCF or SignalR.
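A minimal sketch of that polling approach in WPF; the proxy type DataServiceClient, its GetLatestData operation, and UpdateUi are all hypothetical:

    using System;
    using System.Windows.Threading;

    public partial class MainWindow
    {
        private readonly DataServiceClient _client = new DataServiceClient(); // hypothetical WCF proxy

        private void StartPolling()
        {
            var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(30) };
            timer.Tick += (sender, args) =>
            {
                // Plain request/response call; no duplex channel to configure
                // and no long-lived connection to keep alive.
                var latest = _client.GetLatestData(); // hypothetical operation
                UpdateUi(latest);                     // hypothetical UI update
            };
            timer.Start();
        }
    }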
Planning to migrate our existing application to Azure.
Our existing architecture with security flow is as follows
An ASP.NET MVC 3.0 UI layer that takes a user name and password from the user.
We are planning to migrate the UI layer onto a compute cloud, where it will be accessible at, say, uilayerdomainname.com, which would have an SSL cert.
A WCF REST web services layer that, amongst other things, does authentication. This is currently on, say, servicename.cloudapp.net. (We could map it to servicelayername.com and get an SSL cert for that domain name as well.)
SQL Azure database
The UI layer sends the credentials to the service layer, which authenticates them against the SQL Azure database.
Question
Both the WCF compute cloud and the UI layer are in the same region in Azure. Would the communication between these two be prone to man-in-the-middle attacks? Does my WCF compute cloud need SSL as well? We do have two domain names with SSL certs, so we could just map the services to one.
Is there any way I can restrict traffic between the UI layer and the WCF compute cloud, allowing only the UI layer to access the services layer?
Would the performance be better if I published both the WCF services and the UI layer on the same instance? It sort of shoots down the nice layered architecture, but if it improves performance I could go with it. We don't want to jump through too many hoops to accommodate the app to Azure, lest it become difficult to migrate away from it.
If you host your services in a worker role, they can be made available only to your web role. You can also host them elsewhere and monitor requests in code. Azure roles in the same deployment can communicate with one another in a very specific way that is not available outside of the deployment.
In Azure deployments, you need to define your public endpoints very specifically because the roles are hosted behind a load balancer. If you host your WCF service inside a worker role, it will not be accessible publicly.
Hope this helped
If you configure the WCF service and UI layer to only communicate through internal endpoints then the communication is private. There is no need to purchase or configure an SSL certificate for the WCF service unless it is made public.
Further, traffic over these internal endpoints flows only between your instances, so it is already restricted to your UI layer and the WCF service.
This is the case for both Web roles and worker roles: you can configure a Web role hosting your WCF service to have a private internal endpoint.
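For reference, in the classic cloud-service model this is declared in ServiceDefinition.csdef. A sketch with made-up role, endpoint, and certificate names:

    <ServiceDefinition name="MyCloudService"
                       xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="UiLayer">
        <Endpoints>
          <!-- Public HTTPS endpoint for the MVC UI; SslCert must be
               declared in this role's Certificates section. -->
          <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
        </Endpoints>
      </WebRole>
      <WebRole name="WcfServices">
        <Endpoints>
          <!-- Internal endpoint: reachable only from other roles in this deployment. -->
          <InternalEndpoint name="WcfInternal" protocol="http" />
        </Endpoints>
      </WebRole>
    </ServiceDefinition>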
Depending on the architecture of your system you may see better performance if you have the UI and WCF layer on the same machine.
If your interface is "chatty" and calls the WCF service several times for each UI request, then you'll definitely see a performance improvement. If there are just one or two calls, the improvement is likely to be minimal compared to the latency of your database.