Is there some kind of service to queue API calls?

I need to call the desk.com API to create cases when a customer completes a form on my site. However, sometimes the API is down for maintenance (too often!) and my call will fail.
Presently I just write the details to a log on error and send myself an email. Then I create the case manually.
So I'm thinking of writing some kind of message queue so that, instead of calling the API in-process, I can put the request in a queue and have some process work the queue and make the API calls. That way, if the API call fails, the process will just try again at the next scheduled interval.
Since there are so many web APIs in the world, I figure other people must be having the same problem. So are there third-party solutions that effectively do what I'm trying to do, or some open-source project or something that deals with this issue?
Cheers!
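For reference, here is a minimal sketch of the retry-queue idea described in the question: failed case creations are persisted locally and a scheduled worker retries them until the API accepts. The endpoint URL, the table layout, and the payload shape are illustrative assumptions, not the real desk.com API contract.

```python
# Sketch only: persist requests locally and retry from a scheduled worker.
import json
import sqlite3

import requests  # pip install requests

DB = "pending_cases.db"
CASES_URL = "https://yoursite.desk.com/api/v2/cases"  # placeholder endpoint, not verified

def _table(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue_case(payload):
    """Called from the form handler instead of calling the API in-process."""
    with sqlite3.connect(DB) as conn:
        _table(conn)
        conn.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(payload),))

def work_queue():
    """Run from cron every few minutes; rows stay queued until the API accepts them."""
    with sqlite3.connect(DB) as conn:
        _table(conn)
        for row_id, payload in conn.execute("SELECT id, payload FROM pending").fetchall():
            try:
                requests.post(CASES_URL, json=json.loads(payload), timeout=10).raise_for_status()
            except requests.RequestException:
                continue  # API still down; try again at the next interval
            conn.execute("DELETE FROM pending WHERE id = ?", (row_id,))
```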

Amazon Simple Queue Service (SQS) is a fast, reliable, scalable, fully managed queue service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.
http://aws.amazon.com/sqs/
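A rough sketch of the same pattern on SQS using boto3. The queue URL, region, and the create_case callable are placeholders for your own setup; the point is that a message is deleted only after the API call succeeds, so failures are retried automatically once the visibility timeout expires.

```python
# Sketch: web tier enqueues, a worker polls and retries via SQS redelivery.
import json

import boto3  # pip install boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/desk-cases"  # placeholder

def enqueue_case(payload):
    """Web tier: drop the request on the queue instead of calling the API directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))

def work_queue(create_case):
    """Worker: poll, attempt the API call, delete only on success.
    If create_case raises, the message becomes visible again after the
    visibility timeout and is retried on a later poll."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        try:
            create_case(json.loads(msg["Body"]))
        except Exception:
            continue  # leave the message on the queue for the next attempt
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```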

Related

How to deal with application crashes with RabbitMQ

Recently, I have implemented RabbitMQ for a couple of use cases. Sending mails is one of them (which is quite common in practice).
My Problem Statement:
A web service (say, service A) needs to publish 1000 messages to the queue (which will be picked up by some mail-sending engine). But unfortunately, after publishing 500 messages to the queue, my app crashes.
Now, if I hit the same service again, the 500 messages that were already pushed in the first go will be pushed again. Mail duplication isn't a big deal for now, but it's definitely not desired. How do I deal with this? Any thoughts?
Solutions that I came up with:
Using the batch feature - but it is not supported by AsyncRabbitTemplate, so I can't use that.
Using the database. But that's definitely cumbersome, so I won't use this one either.
If you can identify the duplicates, you can use the Idempotent Receiver enterprise integration pattern on the consumer side.
Spring Integration has an implementation.
However, it's not clear why you are using the async template since that is for send and receive operations. This application sounds like it only needs to send the requests, not wait for a reply.
It's also not clear how batching can help since the crash could occur on the consumer side after it has processed half of the batch.
In either case, you need to track where you got to before the crash.
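To make the Idempotent Receiver idea concrete, here is a sketch in Python with pika rather than Spring AMQP: the consumer records every message id it has already processed and silently drops duplicates, so re-published messages do not cause extra mails. The queue name, the use of the AMQP message_id property, and send_mail() are illustrative assumptions.

```python
# Idempotent consumer sketch: drop messages whose id has already been processed.
import pika  # pip install pika

processed_ids = set()  # use a durable store (database/Redis) to survive consumer restarts

def send_mail(body):
    pass  # the real mail-sending engine goes here

def on_message(channel, method, properties, body):
    msg_id = properties.message_id  # the publisher must set a stable id per logical message
    if msg_id in processed_ids:
        channel.basic_ack(delivery_tag=method.delivery_tag)  # duplicate: ack and drop
        return
    send_mail(body)
    processed_ids.add(msg_id)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="mail", durable=True)
channel.basic_consume(queue="mail", on_message_callback=on_message)
channel.start_consuming()
```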

Microservices: Why Use RabbitMQ?

I haven't found an existing post asking this, but I apologize if I missed it.
I'm trying to get my head around microservices and have come across articles where RabbitMQ is used. I'm confused about why RabbitMQ is needed. Is the intention that the services will use a web API to communicate with the outside world and RabbitMQ to communicate with each other?
In a microservices architecture you have two ways for the microservices to communicate:
Synchronous - each service calls the other microservice directly, which results in a dependency between the services.
Asynchronous - you have some central hub (or message queue) where you place all requests between the microservices, and the corresponding service takes the request, processes it, and returns the result to the caller. This is what RabbitMQ (or any other message queue - MSMQ and Apache Kafka are good alternatives) is used for. In this case all microservices know only about the existence of the hub.
microservices.io has some very nice articles about using microservices.
A message queue provides an asynchronous communications protocol - you have the option to send a message from one service to another without having to know whether the other service is able to handle it immediately or not. Messages can wait until the responsible service is ready. A service publishing a message does not need to know anything about the inner workings of the services that will process that message. This way of handling messages decouples the producer from the consumer.
A message queue will keep the processes in your application separated and independent of each other; this way of handling messages could create a system that is easy to maintain and easy to scale.
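As a minimal illustration of that decoupling, here is a producer sketch in Python with pika (queue name and payload are illustrative): the publisher only knows the queue name and neither knows nor cares whether any consumer is running at the moment it publishes.

```python
# Producer sketch: publish a persistent message to a durable queue and move on.
import json

import pika  # pip install pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # the queue survives broker restarts

def publish_order(order):
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps(order),
        properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
    )

publish_order({"id": 42, "status": "placed"})
# Any consumer service can pick this up later with channel.basic_consume(queue="orders", ...)
```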
Simply put, two obvious cases can be used as examples of when message queues really shine:
For long-running processes and background jobs
As the middleman in between microservices
For long-running processes and background jobs:
When requests take a significant amount of time, it is the perfect scenario to incorporate a message queue.
Imagine a web service that handles multiple requests per second and cannot under any circumstances lose one. Plus the requests are handled through time-consuming processes, but the system cannot afford to be bogged down. Some real-life examples could include:
Image scaling
Sending large/many emails (like newsletters)
Search engine indexing
File scanning
Video encoding
Delivering notifications
PDF processing
Calculations
The middleman in between microservices:
For communication and integration within and between applications, i.e. as the middleman between microservices, a message queue is also useful. Think of a system that needs to notify another part of the system to start work on a task, or a system that has to handle many requests coming in at the same time, as in the following scenarios:
Order handling (Order placed, update order status, send an order, payment, etc.)
Food delivery service (Place an order, prepare an order, deliver food)
Any web service that needs to handle multiple requests
Here is a story explaining how Parkster (a digital parking service) is breaking its system down into multiple microservices by using RabbitMQ.
This guide follows a scenario where a web application allows users to upload information to a website. The site handles this information, generates a PDF, and emails it back to the user. Handling the information, generating the PDF, and sending the email will in this example take several seconds, and that is one of the reasons why a message queue is used.
Here is a story about how and why CloudAMQP used message queues and RabbitMQ between microservices.
Here is a story about the usage of RabbitMQ in an event-based microservices architecture to support 100 million users a month.
And finally a link to Kontena, about why they chose RabbitMQ for their microservice architecture: "Because we needed a stable, manageable and highly-available solution for messaging.".
Please note that I work for the company behind CloudAMQP (hosting provider of RabbitMQ).
The same question could be asked of why REST is necessary for microservices. The microservice concept is nothing new under the sun. Distribution of workflow has long been used for backend engineering and asynchronous request processing; a microservice is the same kind of component in a separate JVM, matching the S (single responsibility) in SOLID. What makes it a micro SERVICE is that it is balanced. And that is all! In particular, it can be a REST service on a Spring Cloud/REST base, registered with Eureka, with a proxy gateway and load balancing via Zuul and Ribbon. But that is not the whole world of microservices!

By the way, asynchronous distributed processing is one of the tasks microservices are used for. Long ago, services (components) in separate JVMs were integrated over messaging, and that pattern is known as ESB. Microservices are the same kind of participants in that pattern. Due to the fashion for Spring Cloud REST, it can seem like that is the only way to build microservices. Nope! Message-based asynchronous microservice architecture is supported by Vert.x, for example: https://dzone.com/articles/asynchronous-microservices-with-vertx. So why not use RabbitMQ as the message channel? In that case load balancing can be provided by building a RabbitMQ cluster. For example: https://codeburst.io/using-rabbitmq-for-microservices-communication-on-docker-a43840401819. So the world is much wider than that.

Is it appropriate to use message queues for synchronous RPC calls via AJAX

I have a web application that uses the jQuery autocomplete plugin, which essentially sends, via AJAX, a request containing text that has been typed into a textbox to our web server. Once the web server receives this request, it is handed off to RabbitMQ.
I know that we do get benefits from using messaging, but it seems like using it for blocking RPC calls is a misuse and that something like WCF is far more appropriate in this instance. Is this the case, or is it considered acceptable architecture?
It's possible to perform synchronous RPC requests with RabbitMQ. Here it's explained very well, with its drawbacks included! So it's considered an acceptable architecture - discouraged, but acceptable whenever a synchronous response is mandatory.
A possible side effect is that, by adding RabbitMQ in the middle, you will add some latency to the solution.
However, you have the possibility to gain in terms of reliability, flexibility, scalability, and so on.
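For context, this is a condensed sketch of the synchronous RPC-over-RabbitMQ pattern in Python with pika (not the asker's actual stack): the client publishes with a reply_to queue and a correlation_id, then blocks briefly for the matching reply. The routing key and timeout are illustrative assumptions.

```python
# RPC client sketch: correlate request and reply, block with a short timeout.
import uuid

import pika  # pip install pika

class AutocompleteRpcClient:
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(queue="", exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(
            queue=self.callback_queue, on_message_callback=self._on_reply, auto_ack=True
        )
        self.response = None
        self.corr_id = None

    def _on_reply(self, ch, method, props, body):
        if props.correlation_id == self.corr_id:
            self.response = body.decode()

    def call(self, prefix, timeout=2):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange="",
            routing_key="autocomplete_rpc",  # a server-side worker is assumed to listen here
            properties=pika.BasicProperties(
                reply_to=self.callback_queue, correlation_id=self.corr_id
            ),
            body=prefix,
        )
        self.connection.process_data_events(time_limit=timeout)  # wait for the reply
        return self.response  # None on timeout: the added-latency trade-off noted above
```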
What benefit would you get from it? And in fairness, if you put the message in a queue, how is it synchronous? Unless the same process that placed the message in the queue is the one removing it, but that is pretty much useless, no?
Now, if all you want to do is place the message in the queue and process it later on, that's grand.
Also, the fact that you add WCF to the mixture is IMHO a symptom that something is perhaps not clear enough. You could use WCF as an API gateway and use it to write the message to the queue, so this is not really about WCF or queues, but more about sync vs. async.
The way you are putting your ideas does not look right to me.

Best way to queue WCF requests so that only one is processed at a time

I'm building a WCF service to handle all QuickBooks SDK functionality for two companies. Since the QuickBooks SDK needs to open/close the actual QuickBooks application to process a request, only one can be handled at a time or QuickBooks goes into a really bad state. I'm looking for the best way to allow end users to make a QuickBooks data request, and have my WCF application hold that request until the previous request is completed.
If nothing is currently being processed, then the request will go through immediately.
Does anyone know of the best method to handle that type of functionality? Are there any third-party or built-in .NET libraries for this?
Thanks!
Use WCF Throttling. It's configurable and will solve your problem without code changes.
See my answer for WCF ConcurrencyMode Single and InstanceContextMode PerCall.
One way to do this is to place a queue between the user and the QuickBooks application:
The request from the user is placed in a queue or data table.
A background process reads one item at a time out of the queue, sends it to QuickBooks, and places the result in a result table.
The client application reads the result from the result table.
This requires some work, but the user will always be able to submit requests and only one will be processed at a time.
The solution given by ErnieL will also work if you use ConcurrencyMode Single, but in heavy-load scenarios users will get timeouts.
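A sketch of that queue-in-the-middle approach (in Python with pika and SQLite rather than WCF): a single worker with prefetch_count=1 guarantees only one QuickBooks request is in flight at a time, and results land in a table the client application polls. process_quickbooks_request() and the results store are placeholders.

```python
# Single-worker sketch: one QuickBooks request at a time, results written to a table.
import json
import sqlite3

import pika  # pip install pika

def process_quickbooks_request(request):
    # open QuickBooks via the SDK, run the request, close it again (placeholder)
    return {"status": "done", "request_id": request["id"]}

def save_result(request_id, result):
    with sqlite3.connect("results.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS results (request_id TEXT PRIMARY KEY, result TEXT)")
        conn.execute("INSERT OR REPLACE INTO results VALUES (?, ?)", (request_id, json.dumps(result)))

def on_request(channel, method, properties, body):
    request = json.loads(body)
    save_result(request["id"], process_quickbooks_request(request))
    channel.basic_ack(delivery_tag=method.delivery_tag)  # ack only after QuickBooks is done

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="quickbooks_requests", durable=True)
channel.basic_qos(prefetch_count=1)  # never hand this worker more than one unacked request
channel.basic_consume(queue="quickbooks_requests", on_message_callback=on_request)
channel.start_consuming()  # run exactly one instance of this worker
```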

NServiceBus Sagas and REST API Integration best-practices

What is the most sensible approach for integrating/interacting NServiceBus Sagas with REST APIs?
The scenario is as follows:
We have a load balanced REST API. Depending on the load we can add more nodes.
REST API is a wrapper around a DomainServices API. This means the API can be consumed directly.
We would like to use Sagas for workflow and implement NServiceBus Distributor to scale-out.
The question is: if we use the REST API from Sagas, the actual processing happens in the API farm. This in a way defeats the purpose of implementing the distributor pattern.
On the other hand, using the DomainServices API directly from Sagas allows processing locally within worker nodes. With this approach we will have to maintain API assemblies in multiple locations, but the throughput could be higher.
I am trying to understand the best approach. Personally, I'd prefer to consume the API (if readily available), but this could introduce chattiness to the system and could take longer to complete compared to in-process calls.
A typical sequence could be similar to publishing an online advertisement:
Advertiser submits a new advertisement request via a web application.
Web application invokes the relevant API endpoint and sends a command message.
Command message initiates a new publish advertisement Saga instance.
Saga sends a command to validate caller permissions (in process/out of process API call).
Saga sends a command to validate the advertisement data (in process/out of process API call).
Saga sends a command to the fraud service (third party service).
Once the content and fraud verifications are successful, Saga sends a command to the billing system.
Saga invokes an API call to save ad details (in process/out of process API call).
And this goes on until the advertisement expires; there are a number of retry and failure-condition paths.
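To illustrate the saga's role in the sequence above, here is a toy sketch of the state it would track per advertisement (plain Python, not NServiceBus): each reply message advances the saga, regardless of arrival order, and the next command is sent only when the preconditions are met. The event names and fields are invented for illustration.

```python
# Toy saga-state sketch: accumulate verification results, then trigger billing once.
from dataclasses import dataclass

@dataclass
class PublishAdvertisementSaga:
    ad_id: str
    permissions_ok: bool = False
    content_ok: bool = False
    fraud_ok: bool = False
    billed: bool = False

    def handle(self, event):
        # each reply message advances the saga; order of arrival does not matter
        if event == "PermissionsValidated":
            self.permissions_ok = True
        elif event == "ContentValidated":
            self.content_ok = True
        elif event == "FraudCheckPassed":
            self.fraud_ok = True
        if self.permissions_ok and self.content_ok and self.fraud_ok and not self.billed:
            self.billed = True  # here the real saga would send the billing command
```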
After a number of design iterations we came up with the following guidelines:
Treat REST API layer as the integration platform.
Assume API endpoints are capable of abstracting fairly complex micro work-flows. Micro work-flows are operations that execute in a single burst (not interruptible) and complete within a short time span (<1 second).
Assume API farm is capable of serving many concurrent requests and can be easily scaled-out.
Favor synchronous invocations over asynchronous message based invocations when the target operation is fairly straightforward.
When asynchronous processing is required, use a single message handler and invoke the API from the handler (a sketch follows at the end of this answer). This delegates work to the API farm. It also eliminates the need for a distributor and extra hardware resources.
Avoid Sagas unless the business work-flow contains multiple transactions, compensation logic, and resumes. Tests reveal that Sagas do not perform well under load.
Avoid consuming DomainServices directly from a message handler. This will do the work locally and also introduces a deployment hassle by distributing business logic.
Happy to hear your thoughts.
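A sketch of the "single message handler that invokes the API" guideline, written in Python with pika and requests rather than NServiceBus (the queue name and endpoint URL are placeholders): the handler just forwards the command to the load-balanced API farm, so the heavy lifting and scaling happen there instead of in a distributor.

```python
# Handler sketch: consume a command message and delegate the work to the REST API farm.
import json

import pika      # pip install pika
import requests  # pip install requests

API_ENDPOINT = "https://api.example.com/advertisements"  # placeholder for the API farm

def on_message(channel, method, properties, body):
    command = json.loads(body)
    # synchronous call into the API farm; the farm does the actual processing
    resp = requests.post(API_ENDPOINT, json=command, timeout=5)
    if resp.ok:
        channel.basic_ack(delivery_tag=method.delivery_tag)
    else:
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)  # retry later

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="publish_advertisement", durable=True)
channel.basic_consume(queue="publish_advertisement", on_message_callback=on_message)
channel.start_consuming()
```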
You are right on with identifying that you will need Sagas to manage workflow. I'm willing to bet that your Domain hooks up to a common database. If that is true then it will be faster to use your Domain directly and remove the serialization/network overhead. You will also lose the ability to easily manage the transactions at the database level.
Assuming you are directly calling your Domain, the performance becomes a question of how the Domain performs. You may take steps to optimize the database, drive down distributed transaction costs, shard the data, etc. You may end up using the Distributor to have multiple Saga processing nodes, but it sounds like you have some more testing to do once a design is chosen.
Generically speaking, we use REST APIs to model the commands as resources (via POST) to allow interaction with NSB from clients who don't have direct access to messaging. This is a potential solution for getting things onto NSB from your web app.