RabbitMQ StompJs receiving just one Message - rabbitmq

I'm trying to receive a message from a RabbitMQ queue with the STOMP plugin. This works fine, but the problem is that I get every message from the queue at once. So if the queue has 13 messages, I receive all 13. What I actually want is to get just one message and, after sending an Ack or Nack, receive the next one. Does somebody have any idea how to get only one message at a time? Thanks for your help.
Here is the code I have:
GetMessage() {
    this.GetRabbitMqClient().then((client) => {
        var headers = { ack: 'client', 'x-max-priority': '10' };
        var subscription = client.subscribe("/queue/TestQue", this.messageCallback, headers);
    });
}

private messageCallback = function (Message: IMessage) {
    console.log(Message.body);
    setTimeout(() => { Message.ack(); }, 100000000000000);
}
private GetRabbitMqClient(): Promise<Client> {
    var promise = new Promise<Client>((resolve, reject) => {
        var client = new Client({
            brokerURL: "ws://localhost:15674/ws",
            connectHeaders: {
                login: "guest",
                passcode: "guest"
            },
            // debug: function (str) {
            //     console.log(str);
            // },
            reconnectDelay: 5000,
            heartbeatIncoming: 4000,
            heartbeatOutgoing: 4000
        });
        client.onConnect = function (frame) {
            resolve(client);
        };
        client.onStompError = function (frame) {
            // Will be invoked in case of error encountered at Broker
            // Bad login/passcode typically will cause an error
            // Compliant brokers will set the `message` header with a brief message. Body may contain details.
            // Compliant brokers will terminate the connection after any error
            reject(frame);
            console.log('Broker reported error: ' + frame.headers['message']);
            console.log('Additional details: ' + frame.body);
        };
        client.activate();
    });
    return promise;
}

I found the solution:
you just need to set the following attribute in the subscribe headers:
'prefetch-count': '1'
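For reference, a minimal sketch of how the subscription from the question could look with that header added (queue name, ack mode and priority header taken from the code above):

// With ack: 'client' and 'prefetch-count': '1', RabbitMQ delivers a single message
// and waits for the ack/nack before pushing the next one to this subscriber.
var headers = { ack: 'client', 'prefetch-count': '1', 'x-max-priority': '10' };
var subscription = client.subscribe("/queue/TestQue", (message: IMessage) => {
    console.log(message.body);
    message.ack(); // acknowledge; the broker then delivers the next message
}, headers);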

Related

Nestjs - redis communication between 2 microservices with ClientProxy

I need help from somebody experienced. I've built 2 microservices recently (let's call them Amber and Boris) which communicate with each other using ClientProxy and REDIS. From time to time, when Amber asks Boris for data, it gets a timeout error.
This is Amber config:
constructor(companyName: string, userId: number) {
    this.companyName = companyName;
    this.userId = userId;
    this.client = ClientProxyFactory.create({
        transport: Transport.REDIS,
        options: {
            retryAttempts: 0,
            retryDelay: 0,
            url: 'redis://<some_url>:<some_port>',
        },
    });
}
Then request-response:
private async sendRequest(pattern: string, payload?: object): Promise<any[]> {
    payload = payload || {};
    try {
        const result = await this.client.send(
            { type: pattern },
            { userId: this.userId, companyName: this.companyName, ...payload }
        )
            .pipe(
                timeout(30000),
                map((response: any) => { // Success...
                    return response;
                }),
                catchError((error) => { // Error...
                    return throwError(error);
                }),
            )
            .toPromise();
        return result;
    } catch (err) {
        Logger.error('Couldn\'t get data from Boris service: ' + err.message)
    }
}
Then on the Boris service, I basically just have a controller set up with @MessagePattern, and I'm just returning the data:
@MessagePattern({ type: 'getAvailableCases' })
findAll(@Payload() data: object): Promise<object> {
    this.assignPayload(data);
    return this.getData();
}
Important to say, the Boris service queries a database in order to return the data, but on the db side there seems to be no problem.
What I'm most interested in is:
whether I have the ClientProxy set up properly
whether I have the response processing set up properly with pipe() and toPromise(), as I'm not very familiar with ClientProxy and RxJS.
Thank you a hundred times for any response!
It turned out the ClientProxy wasn't releasing its connections to Redis after the communication was done. Because of this, the number of connections kept increasing until there were no connections left.
The solution is to close the connection after the data is returned:
this.client.close();
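For illustration, a minimal sketch of where the close() call could go in the sendRequest method from the question; using a finally block is an assumption about where the cleanup fits, not the only option:

private async sendRequest(pattern: string, payload?: object): Promise<any[]> {
    try {
        // same send / pipe / toPromise chain as shown above
        const result = await this.client
            .send({ type: pattern }, { userId: this.userId, companyName: this.companyName, ...(payload || {}) })
            .pipe(timeout(30000))
            .toPromise();
        return result;
    } catch (err) {
        Logger.error('Couldn\'t get data from Boris service: ' + err.message);
    } finally {
        // release the Redis connection so connections don't accumulate over time
        this.client.close();
    }
}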

error handling in angular 5, catch errors from backend api in frontend

I need advice on handling errors in the front-end of a web application.
When I call a service to get a community by id in the web app, I want it to catch errors, for example a 404.
There is a service for getting a community by the provided id.
getCommunity(id: number) {
    return this.http.get(`${this.api}/communities/${id}`);
}
that is called in the events.ts file:
setCommunityBaseUrl() {
    this.listingService.getCommunity(environment.communityId).subscribe((data: any) => {
        this.communityUrl = data.url + `/` + data.domain;
    });
}
The id is provided in the environment. Let's say there are 20 communities in total. When I provide id = 1, the events for community = 1 appear.
export const environment = {
    production: ..,
    version: 'v2',
    apiUrl: '...',
    organization: '...',
    websiteTitle: '...',
    communityId: 1,
    googleMapsApiKey: '...'
};
The problem is that when I provide id = null, all community events are returned (the whole events list from the backend).
Please, help ^^
When you subscribe, you subscribe with the Observer pattern. So the first function you pass in
.subscribe(() => {});
fires when the Observable calls .next(...),
and after that you can provide a second function which will fire whenever the Observable calls .error(...), so:
.subscribe(() => {}, (error) => { handleTheError(error); });
this.http.get(...) returns an Observable which will fire .error(...) on an http error.
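As a concrete illustration, a sketch of the setCommunityBaseUrl method from the question with the error callback added (handleTheError is a placeholder for whatever error handling you want):

setCommunityBaseUrl() {
    this.listingService.getCommunity(environment.communityId).subscribe(
        (data: any) => {
            this.communityUrl = data.url + `/` + data.domain;
        },
        (error) => {
            // a 404 (or any other http error) from the backend ends up here
            handleTheError(error);
        }
    );
}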
We also know that this.http.get(...) either completes or errors, and is not endless (one that never completes), so you can convert it to a promise and work with it promise-style:
async getMeSomething(...) {
    try {
        this.mydata = await this.http.get(...).toPromise();
    }
    catch (error) {
        handleTheError(error)
    }
}
But what I really recommend is to use Swagger for your backend and then generate the API client class with NSwagStudio, so you don't have to write the client manually, adjust it, or deal with error catching. I use it all the time and it saves us an enormous amount of time.
Because you are using .subscribe, you can create your own error handler and catch the errors directly on the method.
This is an example of how you can use this:
constructor(
    private _suiteAPIService: SuitesAPIService,
    private _testcaseService: TestcaseService,
    public _tfsApiService: TfsApiService,
    private _notificationService: NotificationService) { }

errorHandler(error: HttpErrorResponse) {
    return observableThrowError(error.message || "Server Error")
}

public something = "something";

GetTestcasesFromSuiteSubscriber(Project, BuildNumber, SuiteId) {
    this._suiteAPIService.GetTestResults(Project, BuildNumber, SuiteId).subscribe(
        data => {
            console.log(data);
            this._testcaseService.ListOfTestcases = data;
            // Notification service to signal that the data has arrived.
            this._notificationService.TestcasesLoaded();
        },
        error => {
            // Here we handle the error
            return this.something;
        }
    );
}
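As a side note, the errorHandler defined above isn't actually wired into the subscription; a small sketch of how it could be attached with catchError (assuming GetTestResults returns an RxJS Observable and catchError is imported from rxjs/operators):

// Hypothetical wiring: map the HttpErrorResponse to a readable message before subscribe() sees it
this._suiteAPIService.GetTestResults(Project, BuildNumber, SuiteId)
    .pipe(catchError(this.errorHandler))
    .subscribe(
        data => {
            this._testcaseService.ListOfTestcases = data;
            this._notificationService.TestcasesLoaded();
        },
        error => {
            console.log(error); // "Server Error" or the original error message
        }
    );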

WebRTC - Failed to set remote answer sdp: Called in wrong state: STATE_INPROGRESS

I'm following the example here: https://www.w3.org/TR/webrtc/#simple-peer-to-peer-example
I've modified the code because I only need one-way streaming:
var configuration = null; //{ "iceServers": [{ "urls": "stuns:stun.example.org" }] };
var peerConnection;
var outboundPeerStream = null;
var outboundPeerStreamSessionId = null;

var createPeerConnection = function () {
    if (peerConnection)
        return;
    peerConnection = new RTCPeerConnection(configuration);

    // send any ice candidates to the other peer
    peerConnection.onicecandidate = function (event) {
        signalrModule.sendClientNotification(JSON.stringify({ "candidate": event.candidate }));
    };

    // let the "negotiationneeded" event trigger offer generation
    peerConnection.onnegotiationneeded = peerStreamingModule.sendOfferToPeers;

    // once remote track arrives, show it in the remote video element
    peerConnection.ontrack = function (event) {
        var inboundPeerStream = event.streams[0];
        remoteStreamHelper.pushStreamToDom(inboundPeerStream, foo);
    }
}
// this gets called either on negotiationNeeded and every 30s to ensure all peers have the offer from the stream originator
peerStreamingModule.sendOfferToPeers = function () {
    peerConnection.createOffer().then(function (offer) {
        return peerConnection.setLocalDescription(offer);
    }).then(function () {
        // send the offer to the other peer
        signalrModule.sendClientNotification(JSON.stringify({ "desc": peerConnection.localDescription }));
    }).catch(logger.internalLog);
};
// this gets called by the stream originator when the stream is available to initiate streaming to peers
peerStreamingModule.initializeWithStream = function (outboundStream, sessionId) {
    outboundPeerStream = outboundStream;
    outboundPeerStreamSessionId = sessionId;
    createPeerConnection();
    peerConnection.addStream(outboundStream);
    //peerStreamingModule.sendOfferToPeers(); I don't think I need this...
}
peerStreamingModule.handleP2PEvent = function (notification) {
    if (!peerConnection)
        createPeerConnection();

    if (notification.desc) {
        var desc = notification.desc;

        // if we get an offer, we need to reply with an answer
        if (desc.type == "offer") {
            peerConnection.setRemoteDescription(desc).then(function () {
                return peerConnection.createAnswer();
            }).then(function (answer) {
                return peerConnection.setLocalDescription(answer);
            }).then(function () {
                signalrModule.sendClientNotification(JSON.stringify({ "desc": peerConnection.localDescription, "sessionId": sessionManager.thisSession().deviceSessionId() }), app.username());
            }).catch(logger.internalLog);
        } else if (desc.type == "answer") {
            peerConnection.setRemoteDescription(desc).catch(logger.internalLog);
        } else {
            logger.internalLog("Unsupported SDP type. Your code may differ here.");
        }
    } else {
        peerConnection.addIceCandidate(notification.candidate).catch(logger.internalLog);
    }
}
This seems to be working, but I'm stumped with two parts:
1) WebRTC - Failed to set remote answer sdp: Called in wrong state: STATE_INPROGRESS - this is appearing in my logs from time to time - am I doing something wrong in the above that is causing this?
2) Am I correctly implementing sendOfferToPeers and initializeWithStream? I'm afraid that sendOfferToPeers being triggered on an interval from the originator isn't how the spec is intended to be used; my goal is to ensure that all peers eventually receive an offer, no matter when they join or whether they're facing connectivity issues that drop the original offer / negotiation.
// this gets called either on negotiationNeeded and every 30s to ensure all peers have the offer
You can't send the same offer to multiple peers. It's peer-to-peer, not peer-to-peers. One-to-many requires at minimum a connection per participant, and probably a media server to scale.
Also, SDP is not for discovery. The offer/answer exchange is a fragile state-machine negotiation between two end-points only, to set up a single connection.
You should solve who's connecting with whom ahead of establishing the WebRTC connection.
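For illustration only, here is a rough sketch of the "connection per participant" idea, reusing the configuration and signalrModule names from the question; the per-peer bookkeeping (peerId, the "to" field) is an assumption about the signalling layer, not part of the asker's actual API:

// Hypothetical sketch: one RTCPeerConnection per remote peer instead of a single shared one.
var peerConnections: { [peerId: string]: RTCPeerConnection } = {};

function getOrCreatePeerConnection(peerId: string): RTCPeerConnection {
    if (peerConnections[peerId])
        return peerConnections[peerId];

    var pc = new RTCPeerConnection(configuration);
    pc.onicecandidate = function (event) {
        // candidates (and offers/answers) are addressed to exactly one peer
        signalrModule.sendClientNotification(JSON.stringify({ "candidate": event.candidate, "to": peerId }));
    };
    peerConnections[peerId] = pc;
    return pc;
}

Each incoming offer or answer is then applied to the connection for that specific peer, so the offer/answer state machine of one participant can't be disturbed by another.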

RTCPeerConnection.iceConnectionState changed from checking to closed

Following the method here, I'm trying to answer an audio call initiated from a Chrome browser, from an iPhone simulator (with React Native).
A summary of the event sequence:
1. received call signal
2. got local stream
3. sent join call signal
4. received remote description (offer)
5. created PeerConnection
6. added local stream
7. received candidate
8. added candidate
9. 7 and 8 repeated 15 times (that is 16 times in total)
10. onnegotiationneeded triggered
11. signalingState changed into have-remote-offer
12. onaddstream triggered
13. the callback function of setRemoteDescription was triggered, created answer
14. signalingState changed into stable
15. iceconnectionstate changed into checking
16. onicecandidate triggered for the first time
17. emitted the candidate from 15
18. onicecandidate triggered for the 2nd time. The candidate is null
19. iceconnectionstate changed into closed
Steps 7, 8, 9 may appear at different places after 6 and before 19.
I have been stuck on this problem for quite a while. I don't even know what to debug at this point. What are the possible causes of the connection closing? I can post more logs if needed.
One observation is that the two RTCEvents corresponding to iceconnectionstatechange have the following property:
isTrusted: false
The target RTCPeerConnection has
iceConnectionState: "closed"
iceGatheringState: "complete"
WebRTCClass.prototype.onRemoteOffer = function(data) {
    var ref;
    if (this.active !== true) {
        return;
    }
    var peerConnection = this.getPeerConnection(data.from);
    console.log('onRemoteOffer', data, peerConnection.signalingState);
    if (peerConnection.iceConnectionState !== 'new') {
        return;
    }
    var onSuccess = (function(_this) {
        return function() {
            console.log("setRemoteDescription onSuccess function");
            _this.getLocalUserMedia((function(_this) {
                return function(onSuccess, stream) {
                    peerConnection.addStream(_this.localStream);
                    var onAnswer = (function(_this) {
                        return function(answer) {
                            var onLocalDescription = function() {
                                return _this.transport.sendDescription({
                                    to: data.from,
                                    type: 'answer',
                                    ts: peerConnection.createdAt,
                                    description: {
                                        sdp: answer.sdp,
                                        type: answer.type
                                    }
                                });
                            };
                            return peerConnection.setLocalDescription(new RTCSessionDescription(answer), onLocalDescription, _this.onError);
                        };
                    })(_this);
                    return peerConnection.createAnswer(onAnswer, _this.onError);
                }
            })(_this));
        }
    })(this);
    return peerConnection.setRemoteDescription(new RTCSessionDescription(data.description), onSuccess, console.warn);
};
WebRTCClass.prototype.onRemoteCandidate = function(data) {
    var peerConnection, ref;
    if (this.active !== true) {
        return;
    }
    if (data.to !== this.selfId) {
        return;
    }
    console.log('onRemoteCandidate', data);
    peerConnection = this.getPeerConnection(data.from);
    if ((ref = peerConnection.iceConnectionState) !== "closed" && ref !== "failed" && ref !== "disconnected" && ref !== "completed") {
        peerConnection.addIceCandidate(new RTCIceCandidate(data.candidate));
    }
};
I found that if I call the following two functions one after the other, then it works.
peerConnection.setRemoteDescription(new RTCSessionDescription(data.description), onSuccess, console.warn);
(...definition of onAnswer...)
peerConnection.createAnswer(onAnswer, this.onError);
My previous code called createAnswer within the onSuccess callback of setRemoteDescription. That did work for the react-native-webrtc demo, but not with Rocket.Chat. I still don't fully understand it, but my project can move on now.

Aborted upload causes Sails js/Skipper to crash

Ref: https://github.com/balderdashy/skipper/issues/49
Adapter: skipper-gridfs
Basic controller code:
req.file('fileTest')
    .upload({
        // You can apply a file upload limit (in bytes)
        maxBytes: maxUpload,
        adapter: require('skipper-gridfs'),
        uri: bucketConnect,
        saveAs: function (__newFileStream, cb) {
            cb(null, __newFileStream.filename);
        }
    }, function whenDone(err, uploadedFiles) {
        if (err) {
            var error = { "status": 500, "error": err };
            return res.serverError(error);
        } else {
I have a jQuery-File-Upload client (https://blueimp.github.io/jQuery-File-Upload/) implementing the "cancel" procedure by using the jqXHR abort described here (https://github.com/blueimp/jQuery-File-Upload/wiki/API):
$('button.cancel').click(function (e) {
    jqXHR.abort();
});
After the client aborts, the server crashes with the following message:
events.js:72
throw er; // Unhandled 'error' event
^
Error: Request aborted
at IncomingMessage.onReqAborted (.../node_modules/sails/node_modules/skipper/node_modules/multiparty/index.js:175:17)
at IncomingMessage.EventEmitter.emit (events.js:92:17)
at abortIncoming (http.js:1911:11)
at Socket.serverSocketCloseListener (http.js:1923:5)
at Socket.EventEmitter.emit (events.js:117:20)
at TCP.close (net.js:466:12)
I've used try/catch but it didn't work, the server crashes anyway.
I am not sure if this is a Skipper issue or a Multiparty issue -- my knowledge stops here ( https://github.com/andrewrk/node-multiparty/blob/master/index.js ):
function onReqAborted() {
    waitend = false;
    self.emit('aborted');
    handleError(new Error("Request aborted"));
}

function onReqEnd() {
    waitend = false;
}

function handleError(err) {
    var first = !self.error;
    if (first) {
        self.error = err;
        req.removeListener('aborted', onReqAborted);
        req.removeListener('end', onReqEnd);
        if (self.destStream) {
            self.destStream.emit('error', err);
        }
    }
    cleanupOpenFiles(self);
    if (first) {
        self.emit('error', err);
    }
}
At first I thought this was due to the way the jqXHR request was aborted, but it seems to be a generic Skipper issue with aborted uploads, since the simple act of closing the tab during an upload will crash the server (with a different message):
_stream_writable.js:233
cb(er);
^
TypeError: object is not a function
at onwriteError (_stream_writable.js:233:5)
at onwrite (_stream_writable.js:253:5)
at WritableState.onwrite (_stream_writable.js:97:5)
at Writable.<anonymous> (.../node_modules/skipper-gridfs/index.js:179:25)
at Writable.g (events.js:180:16)
at Writable.EventEmitter.emit (events.js:117:20)
at PassThrough.<anonymous> (.../node_modules/skipper-gridfs/index.js:194:36)
at PassThrough.g (events.js:180:16)
at PassThrough.EventEmitter.emit (events.js:117:20)
at .../node_modules/sails/node_modules/skipper/standalone/Upstream/prototype.fatalIncomingError.js:55:17
I have tried aborting the upload by closing the tab while using a simple upload controller (not Skipper) and there is no crash:
var uploadFile = req.file('fileTest');
console.log(uploadFile);
uploadFile.upload(function onUploadComplete(err, files) { // Files will be uploaded to .tmp/uploads
    if (err) return res.serverError(err); // IF ERROR Return and send 500 error with error
    console.log(files);
    res.json({ status: 200, file: files });
});
So, did anybody see this happening and is there any workaround?
This issue has been solved in skipper#0.5.4 and skipper-disk#0.5.4.
Ref.: https://github.com/balderdashy/skipper/issues/49
There is also an open issue for skipper-gridfs#0.5.3:
Link: https://github.com/willhuang85/skipper-gridfs/issues/20