NestJS RabbitMQ RPC nack doesn't reject the promise - rabbitmq

With NestJS RabbitMQ I use RPC from one server to another. What I basically want is for await this.amqpConnection.request to reject the promise when the consumer throws an error. As I understand from this and this and this documentation, when you NACK (negative ack) the message it should put the RPC response into the corresponding reply queue with error information, which would be passed to the client. Here's a code example:
Producer
import { AmqpConnection } from '@golevelup/nestjs-rabbitmq';
let amqpConnection: AmqpConnection;
async sendMessage(payload: any) {
  try {
    const result = await this.amqpConnection.request({
      exchange: 'user',
      routingKey: 'test',
      queue: 'test',
      payload,
    });
    console.log(result);
  } catch (e) {
    console.error(e); // I expect this flow !!!
  }
}
Consumer:
@RabbitRPC({
  exchange: 'user',
  routingKey: 'test',
  queue: 'test',
  errorBehavior: MessageHandlerErrorBehavior.NACK,
})
public async rpcHandler(payload: any): Promise<any> {
  throw new Error("Error text that should be proxied to console.error");
}
But the response never even reaches the client: the client's promise is rejected only after a timeout, not because of the error raised during consumer processing. If my understanding is not correct, I would like to know whether there is a built-in mechanism in NestJS to reject the message without manually creating some response type for an error and handling it on the client.

You should throw specifically an RpcException from @nestjs/microservices, and with an argument of this shape:
throw new RpcException({
  code: 42, // some number code
  message: 'wait. oh sh...', // some string value
});
Also, if you are using a custom exception filter, it should extend the BaseRpcExceptionFilter from @nestjs/microservices. Call the getType() method on the ArgumentsHost, and if its result equals 'rpc', delegate to the super.catch method.
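For illustration, a minimal sketch of such a filter might look like the following; the AllExceptionsFilter name is hypothetical, and the non-RPC branch is left to whatever your application already does:
import { ArgumentsHost, Catch } from '@nestjs/common';
import { BaseRpcExceptionFilter } from '@nestjs/microservices';

// Hypothetical filter name; the point is the getType() check described above.
@Catch()
export class AllExceptionsFilter extends BaseRpcExceptionFilter {
  catch(exception: any, host: ArgumentsHost) {
    if (host.getType() === 'rpc') {
      // Delegate to the base RPC filter so the error is propagated to the RPC caller.
      return super.catch(exception, host);
    }
    // Handle HTTP (or other) contexts however the rest of the app already does.
  }
}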

Related

Lambda doesn't reattempt processing the SQS message even after an error is returned from the handler

I am trying to invoke the lambda function using standard SQS. I have handled errors using a try-catch block and whenever an error is caught, it will be returned. Otherwise, a response message will be returned with 200 OK.
I want to reprocess the messages which returned errors. But lambda won't reprocess those messages.
This happens even though the retention period (5 min) is greater than the visibility timeout (1 min).
Why does this happen?
const { spawnSync, execSync } = require('child_process');
const fs = require('fs');
const { S3Client, GetObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");
const { DynamoDBClient, UpdateItemCommand } = require("@aws-sdk/client-dynamodb");
const { marshall } = require('@aws-sdk/util-dynamodb');
exports.lambdaHandler = async (event) => {
  try {
    const body = JSON.parse(event["Records"][0]['body']);
    try {
      // Code base 1
      // All the above imported dependencies will be used here
      const response = {
        statusCode: 200,
        body: JSON.stringify({ message: "Function Executed" })
      };
      console.log('Response: ', response);
      return response;
    } catch (err) {
      console.log("[ERROR]: ", err);
      console.log('body is: ', body);
      console.log("err returning");
      return err;
    }
  } catch (error) {
    console.log("[ERROR]: ", error);
    console.log("error returning");
    return error;
  }
};
// Below functions are used in code base 1
// No try-catch block or error handling in the code bases below
async function downloadFile() {
  // code base 2
};
async function uploadFile() {
  // code base 3
};
async function updateUsdz() {
  // code base 4
}
You are, as you say, literally returning the error. However, for Lambda in this scenario, you're simply returning an object. It does not matter whether that object is an error object or any other object. As far as the system is concerned, your Lambda executed successfully, the SQS service receives a success response from AWS Lambda, and the message is dropped from the queue as handled.
If you want to use the retry functionality that SQS provides, you must ensure that your Lambda fails. In this case, that means actually throwing the error again after you've executed the code you want to run on failure (e.g., logging the error). If you throw the error, the handler function will fail (instead of simply returning an error object) and the message will not be deleted from the SQS queue but will be retried.
For example:
exports.lambdaHandler = async (event) => {
  try {
    // Do something
  } catch (error) {
    console.log("[ERROR]: ", error);
    throw error;
  }
};
As a side note, if you're using a solution like this, make sure you attach a dead-letter queue to your SQS queue to catch messages that can never be handled by the Lambda. Without such a queue, those messages will keep being retried, which effectively creates an infinite loop and could cost quite a lot of money.
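As an illustration only (the queue URL and DLQ ARN below are placeholders), a redrive policy that moves a message to a dead-letter queue after a few failed receives can be attached with @aws-sdk/client-sqs:
const { SQSClient, SetQueueAttributesCommand } = require("@aws-sdk/client-sqs");

async function attachDeadLetterQueue() {
  const sqs = new SQSClient({});
  await sqs.send(new SetQueueAttributesCommand({
    QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue", // placeholder
    Attributes: {
      RedrivePolicy: JSON.stringify({
        deadLetterTargetArn: "arn:aws:sqs:us-east-1:123456789012:my-queue-dlq", // placeholder
        maxReceiveCount: "5" // after 5 failed receives, SQS moves the message to the DLQ
      })
    }
  }));
}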

Angular GlobalErrorHandler and HttpErrorResponse - Resolver throwing badly formatted HttpErrorResponse

I've created a global error handler in my Angular 6 application.
The main error handler method:
handleError(error: Error | HttpErrorResponse) {
  const router = this.injector.get(Router);
  const notificationService = this.injector.get(NotificationsService);
  this._logger(error);
  if (!navigator.onLine) {
    notificationService.displayNotification('error', 'timespan', {heading: 'Internet connection lost!', body: ''});
  } else if (error instanceof HttpErrorResponse) {
    notificationService.displayNotification('error', 'click', this._httpErrorMessage(error));
  } else {
    // CLIENT error
    router.navigate(['/error-page']);
  }
}
Problem:
Many of the HTTP service calls are performed in resolvers:
resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<ClientDetailsModel> {
  if (route.params.cif) {
    const reqBody = new GetClientDetailsRequestModel({cif: route.params.cif, idWewPrac: this.userContext.getUserSKP()});
    return this.clientsService.getClientDetails(reqBody)
      .pipe(
        map((clientDetails: { customerDetails: ClientDetailsModel }) => {
          if (clientDetails.customerDetails) {
            return clientDetails.customerDetails;
          }
          return null;
        })
      );
  }
}
If an HTTP error occurs in such a call, the error received by my global error handler arrives as an HttpErrorResponse wrapped inside an Error (the Error's message contains the serialized HttpErrorResponse):
Uncaught (in promise): HttpErrorResponse: {"headers":{"normalizedNames":{},"lazyUpdate":null},"status":400,"statusText":"OK","url":"https://...
If HTTP errors occur outside of resolvers, the global error handler works perfectly fine.
To reach my goal (receiving the HttpErrorResponse thrown from a resolver) I would need to handle the error in an error callback inside the subscription, but I cannot do that because the resolver is the one that manages the subscription.
Is there a way to specify how resolver should handle errors?
I would like to avoid manual parsing of these wrapped errors.
I was searching for a solution, but could only come up with a workaround.
It checks for the HttpErrorResponse text and tries to parse the JSON, which yields the real error object.
Not great at all, but better than nothing.
handleError(error: any): void {
  console.error('Errorhandler caught error: ' + error.message, error);
  // We need this little hack in order to access the real error object.
  // The Angular resolver / promise wraps the error into the message, serialized as JSON,
  // so we extract that error again.
  // But first let's check if we are actually dealing with an HttpErrorResponse ...
  if (error.message.search('HttpErrorResponse: ') !== -1) {
    // The error includes an HttpErrorResponse, so we try to parse its values ...
    const regex = new RegExp('^.*HttpErrorResponse:\\s(\\{.*\\})$');
    const matches = regex.exec(error.message);
    if (matches !== null) {
      // matches the regex, convert...
      const httpErrorResponse = JSON.parse(matches[1]); // This is now the real error object with all the fields
      this.handleHttpErrorResponse(httpErrorResponse);
    } else {
      // It contains HttpErrorResponse, but no JSON part...
      this.toastr.error('There was an unknown communication error',
        'Communication error',
        {timeOut: 10000});
    }
  } else {
    this.toastr.error('Unknown error occurred',
      'Well that should not happen. Check the log for more information...',
      {timeOut: 10000});
  }
}

rxjs - handling async errors in finally

I'm interested in invoking an async function when I unsubscribe from an observable; however, I'm unable to handle the errors in a synchronized way once unsubscribing. For example, with the following piece of code, the error isn't caught.
const observable = Observable.of(true).finally(async () => {
  throw new Error('what');
});
try {
  observable.subscribe().unsubscribe();
} catch (e) {
  console.log('We did not capture this');
}
What are the possible ways of handling async errors in finally?
I think there are two problems:
Calling unsubscribe() won't do anything, because of(true) sends the next and complete notifications immediately on subscription. So by the time you call unsubscribe() yourself you are already unsubscribed because of the complete notification, and the explicit unsubscribe() call is a no-op.
When you mark the callback as async, it in fact returns a Promise that is rejected. But .finally expects a function that it simply invokes, doing nothing with the return value; so when you mark it as async it returns a Promise, but nobody listens to it.
If you want to catch the error, you will have to remove the async keyword:
const observable = Observable.of(true).finally(() => {
  throw new Error('what');
});
With async it becomes an asynchronous function and the error won't be caught. What you are doing now is similar to:
try {
  setTimeout(() => { throw new Error('what'); }, 10);
} catch (e) {
  console.log('We did not capture this');
}
The throw is executed later in the event loop, outside the try/catch, instead of synchronously on the current call stack.
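If the cleanup really has to be asynchronous, one option (a minimal sketch in the question's RxJS 5 style; doAsyncCleanup is a hypothetical helper) is to catch the rejection inside the callback itself, since finally ignores the returned Promise:
const observable = Observable.of(true).finally(() => {
  // Hypothetical async cleanup; handle its rejection right here, because
  // nothing outside the callback will ever see the returned Promise.
  doAsyncCleanup().catch(e => console.log('Captured the async error here', e));
});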

Error handling in HAPI

How do I catch any and all errors in a hapi request lifecycle?
I have a signup handler:
public signup(request: Hapi.Request, reply: Hapi.Base_Reply) {
  this.database.user.create(request.payload).then((user: any) => {
    return reply({
      "success": true
    });
  }).catch((err) => {
    reply(Boom.conflict('User with the given details already exists'));
  });
}
Now, I am catching the error, but I can't always be sure that this is the only error I will get. What if there is an error in the database?
How can I catch such database errors, or any other unknown errors, for all requests?
Maybe you have to return err.message in your reply, like:
reply(Boom.conflict(err.message))
Or if you want to manage or manipulate the error, you have to check the type of the error, like:
if (err instanceof DatabaseError) {
  // manage database error
}
I figured out a way to handle such errors in hapi.
What I was looking for was an onPreResponse extension. The onPreResponse handler can log all the errors, and I can send a 500 error response.
What I am saying is that by simply writing
reply(err)
I can send a 500 error and catch that error in the onPreResponse handler. Something like this:
server.ext('onPreResponse', (request: Hapi.Request, reply: Hapi.ReplyWithContinue) => {
  const response = request.response;
  if (!response.isBoom) { // if not an error then continue :)
    return reply.continue();
  }
  console.log(response);
  return reply(response);
});

Node.js + socket.io + node-amqp and queue bindings when (re)connecting through socket.io

I have one scenario which is very close to this sample:
One main screen:
this screen (client side) will connect to the socket.io server through server:9090/scope (io.connect("http://server:9090/scope")) and will send one event, "userBindOk" (socket.emit("userBindOk", message)), to the socket.io server;
the server receives the connection and the "userBindOk" event. At this moment, the server should get the active connection to the RabbitMQ server and bind the queue to the respective user that just connected to the application through socket.io. Sample:
socket.on("connection", function(client){
//client id is 1234
// bind rabbitmq exchange, queue, and:
queue.subscribe(//receive callback);
})
So far, no problem - I can send/receive messages thru socket.io without problems.
BUT, if I refresh the page, all those steps are done again. As a consequence, the binding to the queue happens again, but this time tied to another socket.io session. This means that if I send a message to the queue related to the first socket.io session (from before the page refresh), that binding should (I think) receive the message and deliver it to an invalid socket.io client (page refresh = new client.id in the socket.io context). I can prove this behaviour because every time I refresh the page I need to send x more messages. For instance: when I connect for the first time, one message means one screen update; after refreshing the page I need to send 2 messages to the queue and only the second one is received by the "actual" socket.io client session. This repeats as often as I refresh the page (after 20 page refreshes, 20 messages have to be sent to the queue before the "last" server-side socket.io client sends the message to the client socket.io to render on the screen).
The solutions I believe could work are:
Find a way to "unbind" the queue when disconnecting from the socket.io server - I didn't see this option in the node-amqp API yet (waiting for it :D)
Find a way to reconnect the socket.io client using the same client.id. This way I can identify the client that is coming back and apply some logic to cache the socket.
Any ideas? I tried to be very clear... But, as you know, it's not so easy to expose your problem when trying to clarify something that is very specific to some context...
Thanks
I solved it like this:
I used to declare the RabbitMQ queue as durable=true, autoDelete=false, exclusive=false, and in my app there was 1 queue per user and 1 exchange (type=direct) with the routing_key equal to the queue name. My app also used the queue for clients other than browsers, such as the Android and iPhone apps as a push fallback, so I used to create 1 queue for each user.
The solution to this problem was to change my RabbitMQ queue and exchange declarations. Now I declare the per-user exchange as fanout with autoDelete=true, and the user has N queues with durable=true, autoDelete=true, exclusive=true (number of queues = number of connected clients), and all the queues are bound to the user exchange (multicast).
NOTE: my app is written in Django, and I use node+socket+amqp to be able to communicate with the browser using WebSockets, so I use node-restler to query my app's API to get the user-queue info.
That's the RabbitMQ side. For the node+amqp+socket part I did this:
Server-side:
On connect: declare the user exchange as fanout, autoDelete, durable; then declare the queue as durable, autoDelete and exclusive; then bind the queue to the user exchange and finally subscribe to it. On socket disconnect the queue is destroyed, so there will be as many queues as there are clients connected to the app. This solves the refresh problem and allows the user to have more than one window/tab open with the app:
/*
 * unCaught exception handler
 */
process.on('uncaughtException', function (err) {
  sys.p('Caught exception: ' + err);
  global.connection.end();
});
/*
 * Require libraries
 */
global.sys = require('sys');
global.amqp = require('amqp');
var rest = require('restler');
var io = require('socket.io').listen(8080);
/*
 * Module global variables
 */
global.amqpReady = 0;
/*
 * RabbitMQ connection
 */
global.connection = global.amqp.createConnection({
  host: host,
  login: adminuser,
  password: adminpassword,
  vhost: vhost
});
global.connection.addListener('ready',
  function () {
    sys.p("RabbitMQ connection established");
    global.amqpReady = 1;
  }
);
/*
 * Web-Socket declaration
 */
io.sockets.on('connection', function (socket) {
  socket.on('message', function (data) {
    sys.p(data);
    try {
      var message = JSON.parse(data);
    } catch (error) {
      socket.emit("message", JSON.stringify({"error": "invalid_params", "code": 400}));
      var message = {};
    }
    if (message.token != undefined) {
      rest.get("http://dev.kinkajougames.com/api/push",
        {headers:
          {
            "x-geochat-auth-token": message.token
          }
        }).on('complete',
        function (data) {
          a = data;
        }).on('success',
        function (data) {
          sys.p(data);
          try {
            sys.p("---- creating exchange");
            socket.exchange = global.connection.exchange(data.data.bind, {type: 'fanout', durable: true, autoDelete: true});
            sys.p("---- declaring queue");
            socket.q = global.connection.queue(data.data.queue, {durable: true, autoDelete: true, exclusive: false},
              function () {
                sys.p("---- bind queue to exchange");
                socket.q.bind(socket.exchange, "*");
                sys.p("---- subscribing queue exchange");
                socket.q.subscribe(function (message) {
                  socket.emit("message", message.data.toString());
                });
              }
            );
          } catch (err) {
            sys.p("Impossible to connect to rabbitMQ-server");
          }
        }).on('error', function (data) {
          a = {
            data: data,
          };
        }).on('400', function () {
          socket.emit("message", JSON.stringify({"error": "connection_error", "code": 400}));
        }).on('401', function () {
          socket.emit("message", JSON.stringify({"error": "invalid_token", "code": 401}));
        });
    }
    else {
      socket.emit("message", JSON.stringify({"error": "invalid_token", "code": 401}));
    }
  });
  socket.on('disconnect', function () {
    socket.q.destroy();
    sys.p("closing socket");
  });
});
Client-side:
The socket instance is created with the options 'force new connection'=true and 'sync disconnect on unload'=false.
The client side uses the window object's onbeforeunload and onunload events to call socket.disconnect.
On the socket connect event, the client sends the user token to node.
Messages from the socket are then processed:
var socket;
function webSocket(){
  //var socket = new io.Socket();
  socket = io.connect("ws.dev.kinkajougames.com", {'force new connection': true, 'sync disconnect on unload': false});
  //socket.connect();
  onSocketConnect = function(){
    alert('Connected');
    socket.send(JSON.stringify({
      token: Get_Cookie('liveScoopToken')
    }));
  };
  socket.on('connect', onSocketConnect);
  socket.on('message', function(data){
    message = JSON.parse(data);
    if (message.action == "chat") {
      if (idList[message.data.sender] != undefined) {
        chatboxManager.dispatch(message.data.sender, {
          first_name: message.data.sender
        }, message.data.message);
      }
      else {
        var username = message.data.sender;
        Data.Collections.Chats.add({
          id: username,
          title: username,
          user: username,
          desc: "Chat",
          first_name: username,
          last_name: ""
        });
        idList[message.data.sender] = message.data.sender;
        chatboxManager.addBox(message.data.sender, {
          title: username,
          user: username,
          desc: "Chat",
          first_name: username,
          last_name: "",
          boxClosed: function(id){
            alert("closing");
          }
        });
        chatboxManager.dispatch(message.data.sender, {
          first_name: message.data.sender
        }, message.data.message);
      }
    }
  });
}
webSocket();
window.onbeforeunload = function() {
  return "You have made unsaved changes. Would you still like to leave this page?";
}
window.onunload = function (){
  socket.disconnect();
}
And that's it: no more round-robining of the messages.