We're working on a service for mobile phones using OrientDB.
Clients are the mobile phones, which synchronize via a web service with the database.
Currently, exchanges between the web service and OrientDB are written in PHP and use the HTTP API.
Everything works fine, but we're now wondering how to scale up the system.
We're thinking about deploying a cluster behind a load balancer like HAProxy.
Has anyone tested this solution?
Is it working?
And what about performance and stability?
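For reference, here is a rough sketch of the kind of HAProxy configuration we have in mind (host names and addresses are placeholders; OrientDB's HTTP API listens on port 2480 by default):

    # Minimal sketch only; global/defaults sections omitted.
    frontend orientdb_http
        bind *:2480
        mode http
        default_backend orientdb_nodes

    backend orientdb_nodes
        mode http
        balance roundrobin
        # Plain TCP health checks for now; an HTTP check against an
        # OrientDB endpoint could be added later.
        server odb1 10.0.0.1:2480 check
        server odb2 10.0.0.2:2480 check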
I'm having an issue with a project I'm working on. I have a Vue client that makes API calls to my backend, which is written in .NET Core 3.1. The two applications are deployed on different servers.
Now the problem is that my backend server does not allow me to make API calls straight from the browser, so I have to do some kind of 'redirect' on the client server to reach my API.
So for example:
If I call backend_server/api/values I get an error (Firewall).
I think I need to create something like a second API that forwards the calls, but I'm not sure how to handle this issue.
Does anybody have any experience on this? Any help is welcome!
Kind regards
You have multiple options here:
Remove the firewall rule -
This will allow your API to be hit from the browser. If the firewall is not managed by you, you can't do this.
Add an IP or port exception rule in the firewall -
Instead of deactivating the entire inbound rule on the server, you can allow specific ports or IPs in the firewall. Again, only if you have control over the firewall.
Create a proxy API -
Another way is to create a middleware API that forwards your requests and acts as a proxy. This costs performance, resources and time, and can compromise security. I recommend not doing this, but it's easily possible in .NET Core.
Specify a CORS policy -
If your Vue.js app and your API originate from the same origin (IP), you can configure CORS on the server, which restricts browser access to the API to that origin only. That means, by analogy, only www.google.com could access the Google API. This protects the API from other origins; see the configuration sketch after this list.
Tunnel via VPN -
If security is a concern, use a VPN service to tunnel your API requests. This isn't feasible for every client using your web service.
The best way is to open a specific rule on the server for your application, if possible. Writing a proxy in between can be done, but it has a lot of disadvantages.
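For the CORS option, a minimal sketch of what the configuration could look like in a .NET Core 3.1 Startup.cs (the policy name and the client origin are placeholders for your own values):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Allow browser calls only from the Vue client's origin (placeholder URL).
            services.AddCors(options =>
            {
                options.AddPolicy("VueClient", policy =>
                    policy.WithOrigins("https://your-vue-client.example.com")
                          .AllowAnyHeader()
                          .AllowAnyMethod());
            });

            services.AddControllers();
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseRouting();

            // CORS must be registered between UseRouting and UseEndpoints.
            app.UseCors("VueClient");

            app.UseAuthorization();

            app.UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }

Keep in mind that CORS only controls which browser origins may call the API; it does not open the firewall by itself.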
I am building a microservice-oriented .NET Core web application and now I want to add real-time communication. Is it possible to create a SignalR server and publish it on Azure? I want to use it in my microservices to send messages to users when a certain event occurs.
Yes, you can deploy your app to Azure and point your users to your hub endpoint with no problems. You have two options here:
Use SignalR and manually manage the connections and the other SignalR concerns yourself if you will scale your application. For example, when you have 2 web apps and a client connects to one of them, you need to "tell" the other app that you have a new client connected, using for example a Redis backplane.
Use Azure SignalR Service, and this kind of management is not needed; all you need to provide is one app with the hub logic. When a client connects to your hub, it is automatically redirected to Azure SignalR Service.
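A minimal sketch of what the two options could look like in a .NET Core Startup (the hub class, connection strings and the /chat path are placeholders; option 1 assumes the Microsoft.AspNetCore.SignalR.StackExchangeRedis package, option 2 the Microsoft.Azure.SignalR package):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.SignalR;
    using Microsoft.Extensions.DependencyInjection;

    // Placeholder hub with no methods, just to have something to map.
    public class ChatHub : Hub { }

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Option 1: self-managed SignalR scaled out with a Redis backplane.
            // services.AddSignalR()
            //         .AddStackExchangeRedis("your-redis-connection-string");

            // Option 2: hand connection management to Azure SignalR Service.
            services.AddSignalR()
                    .AddAzureSignalR("your-azure-signalr-connection-string");
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseRouting();
            app.UseEndpoints(endpoints =>
            {
                // Clients connect here; with Azure SignalR they are redirected to the service.
                endpoints.MapHub<ChatHub>("/chat");
            });
        }
    }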
You can read more about these two options here:
https://learn.microsoft.com/pt-pt/azure/azure-signalr/signalr-concept-scale-aspnet-core
Why not deploy SignalR myself?
It is still a valid approach to deploy your own Azure web app supporting ASP.NET Core SignalR as a backend component to your overall web application.
One of the key reasons to use the Azure SignalR Service is simplicity. With Azure SignalR Service, you don't need to handle problems like performance, scalability, availability. These issues are handled for you with a 99.9% service-level agreement.
Also, WebSockets are typically the preferred technique to support real-time content updates. However, load balancing a large number of persistent WebSocket connections becomes a complicated problem to solve as you scale. Common solutions leverage: DNS load balancing, hardware load balancers, and software load balancing. Azure SignalR Service handles this problem for you.
Another reason may be you have no requirements to actually host a web application at all. The logic of your web application may leverage Serverless computing. For example, maybe your code is only hosted and executed on demand with Azure Functions triggers. This scenario can be tricky because your code only runs on-demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. See the overview on how to use SignalR Service with Azure Functions for more details.
Yes, you can. This is the official quickstart sample:
https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-quickstart-dotnet-core
I have a .NET Core API inside a Web App, and that Web App is the backend pool for an Azure Application Gateway. While trying to access the Web App through the gateway, I got the error below.
"502 - Web server received an invalid response while acting as a gateway or proxy server."
On the Application Gateway, the health probe for that Web App is unhealthy, but when I access the API directly as https://abc.azurewebsites.net/api/values, it works.
When we deploy the API in an App Service Web App, apiname.azurewebsites.net on its own does not return anything the Application Gateway probes can use, so the backend is treated as unhealthy. The API responds at xxx.azurewebsites.net/api/values, so the Application Gateway has to know this path: we have to put /api/values in the override backend path of the HTTP settings, and do the same in the health probe.
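If the gateway is managed with the Azure CLI, the change could look roughly like this (the gateway, resource group, HTTP settings and probe names below are placeholders):

    # Set the override backend path on the HTTP settings.
    az network application-gateway http-settings update \
        --gateway-name myAppGateway \
        --resource-group myResourceGroup \
        --name myHttpSettings \
        --path /api/values

    # Point the health probe at the same path.
    az network application-gateway probe update \
        --gateway-name myAppGateway \
        --resource-group myResourceGroup \
        --name myProbe \
        --path /api/values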
Yes. You can first verify whether the backend API can be accessed directly, without the Application Gateway. Then this error may happen due to the following main reasons:
NSG, UDR or Custom DNS is blocking access to backend pool members.
Back-end VMs or instances of virtual machine scale set are not responding to the default health probe.
Invalid or improper configuration of custom health probes.
Azure Application Gateway's back-end pool is not configured or empty.
None of the VMs or instances in virtual machine scale set are healthy.
Request time-out or connectivity issues with user requests.
Generally, the backend health status and its details will point this out and give some clues. You can also check each of the above reasons one by one according to this doc.
I am working with the CoAP protocol for IoT, but I also need a web service. I implemented the web service on Apache with HTTP and a proxy that converts CoAP requests and responses to HTTP and back. But I don't want to use the proxy to convert between CoAP and HTTP; I want to implement the CoAP web service directly. Do you have any idea how to do that, on Apache or with something else? Any idea is welcome.
As you wrote 'on Apache or different things', I will talk about the second option here :). To implement the CoAP server itself, I would recommend either:
Node.js with the coap package
the Java implementation Californium, from Eclipse
A more complete list is available at http://coap.technology/impls.html#server-side (see 'Server-side').
And then handle the communication with your Apache HTTP server via WebSockets and REST APIs.
coap.me is also great for running tests during development.
I was reading this article while looking for differences between creating an API using Web API versus MVC, and came across this statement:
In simple load testing on my local machine, I’ve found that Web API endpoints hosted in console apps are nearly 50% faster than both ASP.NET controller actions and Web API endpoints hosted within MVC projects.
As such, I'm interested in how this would take shape in a production environment.
Obviously I'm looking for performance, so I looked into OWIN and self-hosting. However, I'm not clear on whether this offers the same efficiency as the console app discussed above.
Can someone please explain how hosting an API in a console application for consumption in a production environment would work - i.e. how would you connect a URL to the console app, etc.?
Thanks.
My understanding is that self-hosted OWIN apps can run within any kind of host process, e.g. a console app, a Windows Forms app, a Windows service, an AWS EC2 instance, an Azure Worker Role, etc. Which host you should run it in depends on the hosting environment you choose; there are lots of options.
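As a rough sketch of the console case, self-hosting Web API with Katana looks roughly like this (assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; the port and the controller are placeholders):

    using System;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;
    using Owin;

    class Program
    {
        static void Main()
        {
            // The public URL is whatever you bind here (or whatever a reverse
            // proxy in front of the process forwards to it).
            string baseAddress = "http://localhost:9000/";

            using (WebApp.Start<Startup>(baseAddress))
            {
                Console.WriteLine("Listening on " + baseAddress);
                Console.ReadLine(); // keep the console process alive
            }
        }
    }

    class Startup
    {
        // Katana calls this to build the OWIN pipeline.
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            app.UseWebApi(config);
        }
    }

    // Reachable at GET http://localhost:9000/api/values
    public class ValuesController : ApiController
    {
        public string[] Get()
        {
            return new[] { "value1", "value2" };
        }
    }

To put a public URL in front of it in production, you would typically either bind the process directly to the public host name and port, or run it behind a reverse proxy (IIS with ARR, nginx, or a load balancer) that forwards to the port the process listens on.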