I'm trying to create a UniMRCP plugin. It is not clear from the documentation how multiple simultaneous requests from clients are supposed to be handled by the plugin. Which of the options below is the case?

1. The server creates a plugin instance on a different thread for each request.
2. The server queues the requests and sends them serially.
3. The plugin is supposed to manage the different requests based on session ID.
4. Some other option.

Based on the answer to the above, what would be the best strategy for implementing the plugin?
What would you suggest as the most efficient coroutine-based HTTP client for Kotlin (that runs on Linux)?
One additional requirement is also being able to limit the number of in-progress requests.
Ktor is a pretty standard HTTP client & server library (based on coroutines, using the CIO engine). You can also create a custom plugin, which would allow you to limit requests as you see fit.
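Often you don't even need a full plugin: you can gate each call with a coroutine Semaphore. A minimal sketch, assuming the Ktor 2.x client API (the URL and the limit of 10 are made up):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

val client = HttpClient(CIO)
val inFlight = Semaphore(permits = 10) // at most 10 requests in progress

// Every call funnels through the semaphore, so at most 10 run at once.
suspend fun limitedGet(url: String): String =
    inFlight.withPermit { client.get(url).bodyAsText() }

fun main() = runBlocking {
    val bodies = (1..100).map { i ->
        async { limitedGet("https://example.com/items/$i") } // hypothetical endpoint
    }.awaitAll()
    println("fetched ${bodies.size} responses")
    client.close()
}
```

The CIO engine also exposes a maxConnectionsCount setting if you prefer to bound open connections at the engine level rather than at the call site.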
I have started to develop an e-commerce application using a microservices architecture. Every microservice will have a separate database. For now, I know I want a Node.js microservice to handle products and also serve as a search engine for them. I plan on having a Ruby on Rails server-microservice that handles all the requests; if a request is not meant to be processed by it (e.g. a request to add a new product), it should somehow pass the information via RabbitMQ to the Node.js microservice and let it perform the action. Is this an acceptable architectural design, or am I completely off track?
"Ruby on Rails server-microservice that should handle all the requests" (you can do better)
A. For this, what you need is a reverse proxy.
A reverse proxy is able to forward each incoming request to the microservice that's responsible for processing it.
It can also act as a load balancer: it'll distribute the incoming requests across many services (if, for instance, you want to deploy multiple instances of the same service).
...
B. You will also need an API Gateway for managing Authentication & Authorization, and handling Security, Traceability, Logging, ... of the requests.
For (A) & (B), you can use either Nginx or Kong.
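As a rough illustration of (A), a minimal Nginx sketch (all service names, ports, and routes below are invented for the example):

```nginx
# Load-balance two instances of the Node.js product/search service
upstream products {
    server product-service-1:3000;
    server product-service-2:3000;
}

server {
    listen 80;

    # Reverse proxy: route by path to the responsible microservice
    location /products/ {
        proxy_pass http://products;     # Node.js microservice
    }

    location /orders/ {
        proxy_pass http://orders:4000;  # Rails microservice
    }
}
```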
Use RabbitMQ in case you want to establish event-based and/or asynchronous communication among your microservices. Here's a simple example: every time a user confirms an order, OrderService informs ProductService so it can update the quantity of the product that has been ordered.
The advantage of using RabbitMQ here is that OrderService won't sit in a blocking state waiting for ProductService to confirm that it received the info or updated the quantity ... it moves on and handles the other incoming requests.
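A minimal sketch of that pattern using the RabbitMQ Java client from Kotlin (the exchange, queue, routing key, and payload are invented; both sides run in one process only for brevity):

```kotlin
import com.rabbitmq.client.CancelCallback
import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.DeliverCallback

fun main() {
    val factory = ConnectionFactory().apply { host = "localhost" }
    factory.newConnection().use { conn ->
        conn.createChannel().use { channel ->
            channel.exchangeDeclare("orders", "topic", true)

            // ProductService side: bind a queue and consume asynchronously.
            val queue = channel.queueDeclare("product-stock", true, false, false, null).queue
            channel.queueBind(queue, "orders", "order.confirmed")
            channel.basicConsume(queue, true,
                DeliverCallback { _, msg -> println("update stock: ${String(msg.body)}") },
                CancelCallback { })

            // OrderService side: publish the event and move on immediately;
            // there is no blocking wait for ProductService to acknowledge.
            val event = """{"productId": 42, "quantity": 3}"""
            channel.basicPublish("orders", "order.confirmed", null, event.toByteArray())

            Thread.sleep(500) // let the demo consumer fire before exiting
        }
    }
}
```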
For my deployment I have a number of 3rd-party systems that can only send HTTP POST requests with metrics (which I need in the queue), and they cannot be reconfigured. My goal is to have specific endpoints (or vhosts) that, when POSTed to, will automatically route to the correct queue, without needing the routing key and other standard RabbitMQ JSON data, since that modification is not possible in the 3rd-party systems.
I can't find any way to do this natively as of now, but I believe it may be possible to configure an HTTP reverse proxy in front, whereby any data sent to the specific endpoint is redirected to the correct RabbitMQ HTTP endpoint, where I could then bolt in the necessary JSON data so it can be parsed by RabbitMQ and placed in the relevant queue. I wanted to check whether this is the only logical solution, or am I missing something obvious that can be done within RabbitMQ's administration page or via config files?
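The translating shim you describe can be quite small. A hypothetical Ktor sketch that wraps each bare POST in the envelope the RabbitMQ management HTTP API expects (the vhost metrics, the port, the guest credentials, and the /metrics/{queue} path scheme are all assumptions):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.CIO as ClientCIO
import io.ktor.client.request.*
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.cio.CIO as ServerCIO
import io.ktor.server.engine.embeddedServer
import io.ktor.server.request.receiveText
import io.ktor.server.response.respondText
import io.ktor.server.routing.*
import java.util.Base64

fun main() {
    val rabbit = HttpClient(ClientCIO)
    embeddedServer(ServerCIO, port = 8080) {
        routing {
            // One endpoint per 3rd-party system; the path decides the queue.
            post("/metrics/{queue}") {
                val queue = call.parameters["queue"]!!
                // Base64-encode the raw body so no JSON escaping is needed.
                val payload = Base64.getEncoder()
                    .encodeToString(call.receiveText().toByteArray())
                // Publish via the default exchange ("amq.default" in the
                // management API), so the routing key is the queue name.
                rabbit.post("http://localhost:15672/api/exchanges/metrics/amq.default/publish") {
                    basicAuth("guest", "guest")
                    contentType(ContentType.Application.Json)
                    setBody("""{"properties":{},"routing_key":"$queue","payload":"$payload","payload_encoding":"base64"}""")
                }
                call.respondText("queued")
            }
        }
    }.start(wait = true)
}
```

Note that publishing through the management HTTP API is fine for modest metric volumes but is not meant for high throughput; past a certain rate you would want the shim to hold a real AMQP connection instead.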
We are developing a custom console to manage development environments. We have several application templates preloaded in OpenShift, and whenever a developer wants to create a new environment, we would need to tell OpenShift (via the REST API) to create a new application based on one of those templates (the equivalent of oc new-app template).
I can't find anything in the REST API specification. Is there any alternative way to do this?
Thanks
There is no single API that today creates all of that in one go. The reason is that the create flow is intended to span multiple disjoint API servers (today, Kube and OpenShift resources can be created at once, and in the future, individual Kube extensions). We wanted to preserve the possibility that a client was authenticated to each individual API group. However, it makes it harder to write easy clients like this, so it is something we plan on adding.
Today the flow from the CLI and web UI is:
1. Fetch the template.
2. Invoke the POST /processedtemplates endpoint.
3. For each "object" returned, invoke the right create call.
I searched some similar questions but haven't found the right solution yet.
I need to test a web cluster (which consists of many nodes and provides a set of RESTful APIs).
Beyond HTTP GET requests, I need to generate dynamic POST/PUT requests in some manner. There are many tools, but I couldn't find the right tool for generating POST/PUT requests with non-static data.
Since I need to generate quite a large number of requests, the load-test tool should run on distributed nodes. In short:
- ability to write custom requests for HTTP GET, POST, and PUT (any major language such as Java, Ruby, etc. is okay)
- ability to work in a distributed Linux environment (i.e. use multiple nodes to generate the requests)
- ability to work over both HTTP and HTTPS
- optional: generating nice-looking graphs
- optional: constructing a new request and queuing it for later (for stateful API testing)
Based on certain conditions, the request generator needs to parse the JSON document in the HTTP body and process it to make another GET/POST/PUT request.
Check out Tsung, Faban, and Rain. Most likely, you will have to edit some scripts within their frameworks.
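If you end up scripting it yourself, the stateful chaining from the question is also small enough to hand-roll with a coroutine-based HTTP client. A hypothetical sketch (the endpoints, JSON fields, and condition are invented):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import io.ktor.http.*
import kotlinx.coroutines.*

fun main() = runBlocking {
    val client = HttpClient(CIO)
    // Fire 1000 concurrent GETs; each may trigger a follow-up PUT.
    (1..1000).map { i ->
        launch {
            val body = client.get("https://api.example.com/orders/$i").bodyAsText()
            // Naive substring check stands in for real JSON parsing.
            if ("\"status\":\"pending\"" in body) {
                client.put("https://api.example.com/orders/$i") {
                    contentType(ContentType.Application.Json)
                    setBody("""{"status":"confirmed"}""")
                }
            }
        }
    }.joinAll()
    client.close()
}
```

For the distributed requirement you would run the same script on several nodes; the graphing and reporting are exactly what Tsung and friends give you for free.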