How to await a WebSocket response in Gatling without sending a message - Kotlin

I'm working with WebSockets in Gatling and have a slight issue: I get multiple responses to the same message, each with different values, but the original message is sent only once. The subsequent WebSocket responses are triggered by a GraphQL POST.
Obviously .exec(ws("Web Socket Request").await(5)) doesn't work, so I'm wondering how I would await and check the subsequent message?
You can see the requests in the dev console here. A message with "id": "11" is sent and a message with "id": "11" is received, then another one ("id": "11") is received after the GraphQL call.
So my code currently looks like this:
.exec(graphQlRequests.onlineStatus(false))
.exec(graphQlRequests.chatRooms())
.exec(
ws("Video Watch").sendText(subscriptionQueries.videoWatch()).await(5)
.on(ws.checkTextMessage("Video Watch").check(jsonPath("\$..choices[0].id")
.saveAs("choice")))
)
.exec(ws("New Messages").sendText(subscriptionQueries.newMessages()))
.pause(1)
.exec(graphQlRequests.startVideo())
.exec(graphQlRequests.chatRoom())
.exec(graphQlRequests.onlineStatus(false))
.exec(graphQlRequests.chatRoom())
.exec(graphQlRequests.onlineStatus(true))
.pause(9)
.exec(graphQlRequests.onlineStatus(true))
.pause(3)
.exec(graphQlRequests.videoChoice())
// Capture the second WebSocket message here.....
.exec(graphQlRequests.onlineStatus(true))
.pause(15)
.exec(graphQlRequests.onlineStatus(false))
What is the best way to capture that second message? Do I need to open up a separate SSE connection?

That's a similar issue to https://github.com/gatling/gatling/issues/3809.
In short, the expected WebSocket inbound message might have already been received between your HTTP request and your WebSocket await, so the latter misses it.
You're welcome to help with specifying a proper solution on our bug tracker (but no "upvote", "me too" or "+1" please, it doesn't help in any way and doesn't make things happen faster).
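In the meantime, a possible workaround (a sketch only, assuming Gatling 3.7+ with the Kotlin DSL; the session key "choice2", the buffer size and the URL are placeholders of mine, so check them against your Gatling version's docs) is to let unmatched inbound messages be buffered and then drain them right after the GraphQL call that triggers the second message:

import io.gatling.javaapi.core.*
import io.gatling.javaapi.core.CoreDsl.*
import io.gatling.javaapi.http.*
import io.gatling.javaapi.http.HttpDsl.*

// Buffer inbound WS messages that no check matched (the size here is arbitrary).
val httpProtocol: HttpProtocolBuilder = http
    .wsBaseUrl("wss://example.invalid/subscriptions") // placeholder URL
    .wsUnmatchedInboundMessageBufferSize(10)

// Drain whatever arrived since the last check and keep the last text frame.
val captureSecondChoice: ChainBuilder = exec(
    ws.processUnmatchedMessages { messages, session ->
        val lastText = messages
            .filterIsInstance<WsInboundMessage.Text>()
            .map { it.message() }
            .lastOrNull()
        if (lastText != null) session.set("choice2", lastText) else session
    }
)

// In the scenario, where the "Capture the second WebSocket message here" comment is:
// .exec(graphQlRequests.videoChoice())
// .exec(captureSecondChoice)

You would still have to parse the stored JSON yourself to pull the id out of "choice2".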

Related

How can a Slack app communicate to the user why a slash command failed, and let the user edit the command?

When an app fails to handle a Slack slash command, the text is brought back to allow the user to edit it and immediately send it again. I have a slash-command that searches, but might fail to find any results. In that case, I would like the user to immediately be able to modify their search. The Slack docs explain that I should always return a 200 HTTP status, but then Slack also erases the command and the user can't immediately try again. When I tried to respond with a 404 status, the users get an alarming message like failed with the error "http_client_error". Is there a way to fail but also provide a custom message to the user why?
Yes, but you must not use HTTP status codes to communicate a failed search.
Just always return HTTP 200 OK and then add a response message telling the user what went wrong. You can do that by directly replying to the request from Slack within 3 seconds, or alternatively by sending a message to the response_url.
This is also clearly expressed in the official documentation for slash commands:
Sending Error Responses
There are going to be times when you need to let the user know that something went wrong - perhaps the user supplied an incorrect text parameter alongside the command, or maybe there was a failure in an API being used to generate the command response.
It would be tempting in this case to return an HTTP 500 response to the initial command, but this isn't the right approach. The status code returned as a response to the command should only be used to indicate whether or not the request URL successfully received the data payload - while an error might have occurred in processing and responding to that payload, the communication itself was still successful. (Source)
As far as I know it is not possible to signal Slack that the user should be able to edit their last command, though.
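To make the main advice concrete, here is a minimal sketch of such a slash-command endpoint using Ktor (my choice of framework; the route path and message text are made up, while response_type and text are the actual Slack payload fields):

import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            post("/slack/search") {
                // Imagine the search found nothing. Still answer HTTP 200;
                // the "error" goes into the body so Slack shows it to the user.
                call.respondText(
                    """{"response_type": "ephemeral", "text": "Sorry, no results for that search. Please adjust it and try again."}""",
                    ContentType.Application.Json,
                    HttpStatusCode.OK
                )
            }
        }
    }.start(wait = true)
}

An ephemeral response is only visible to the user who ran the command, which fits the "search found nothing, please try again" case; for anything slower than 3 seconds, POST the same JSON to the response_url instead.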

What exactly does the expected HTTP response for a Reverse-AJAX request look like?

I'm trying to implement a simple Web Service (running on an Arduino board using an Ethernet shield) that can provide (push) information to a subscribed client by means of Reverse-AJAX.
The web service hosts a single web page that presents information from a (2D-LIDAR) sensor connected to that server board. Whenever the sensor output changes (very frequently and rapidly) the clients viewing that page should be instantly updated. For this application Reverse-AJAX / AJAX Push seems to be the option of choice, however I'm struggling to get the server part working.
This is what's in my aforementioned web page to "listen" for updates:
var xhr = new XMLHttpRequest();
xhr.multipart = true;
xhr.open( 'GET', 'push', true );
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
        processEvents( window.JSON.parse( xhr.responseText ) );
    }
}
xhr.send( null );
I'd like to keep the XmlHttpRequest running forever and have it call the processEvents function whenever a chunk of (JSON) data comes in from the server side. However I'm not sure what the server response, especially the HTTP response header should look like to make this work as expected.
Whenever I have the server send an HTTP response header like this
HTTP/1.1 200 OK\r\n
Connection: keep-alive\r\n
Content-Length: 100\r\n
Content-Type: text/json\r\n
\r\n
the XmlHttpRequest finishes after receiving exactly one "chunk" of data. I also tried leaving out the Content-Length header and using "Content-Type: multipart/mixed; boundary=..." or "Transfer-Encoding: chunked" instead, but neither ever fired processEvents, presumably because the browser was waiting for the response to complete, whatever that means.
I'm therefore looking for a working example of such an HTTP response to an AJAX push request. What does an HTTP response generally need to look like to be accepted by the indefinitely running XmlHttpRequest and to fire processEvents whenever new data arrives?
Btw. I tried those things using Firefox 64.0.
If you are looking for low-latency HTTP-based communication, you should take a look at WebSockets. Your XMLHttpRequest with the POST method will have a high latency of about 50 milliseconds; trust me, I tried to develop an RGB controller based on a rainbow color picker before, and neither synchronous nor asynchronous POST methods worked well for me, the script just sat waiting for a response.
Now to answer your question specifically: download Postman, a tool that lets you simulate all the HTTP methods, requests and headers you wish. It also gives you the code to implement them in many languages. And don't forget F12 in Chrome > the Network tab, so you can check how your HTTP requests are being handled.
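For reference, a streaming response using Transfer-Encoding: chunked looks roughly like this on the wire (the sensor JSON is invented; each chunk is prefixed with its length in hex):

HTTP/1.1 200 OK\r\n
Content-Type: application/json\r\n
Transfer-Encoding: chunked\r\n
Connection: keep-alive\r\n
\r\n
1B\r\n
{"angle":42,"distance":180}\r\n
1B\r\n
{"angle":43,"distance":177}\r\n
...

Note that the XMLHttpRequest only reaches readyState 4 once the server sends the terminating zero-length chunk, so a handler that waits for readyState == 4 (as in the question) fires exactly once at the very end; to react per chunk you would read the growing responseText from onprogress / readyState 3 instead. (xhr.multipart was a Gecko-only feature that has long since been removed.)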

How do you get IBM MobileFirst Platform ChallengeHandler to handle very large responses correctly?

When working with a large data response from an HTTP adapter, the size of the response appears to cause our challenge handler's handleChallenge() method to fire.
My question is, why would the size of the response cause the MobileFirst security challenge handler to fire when the session is still valid?
More Details:
Our application uses an ISAM security appliance with Header based authentication. When an HTTP adapter call we make comes back with a content-length greater than 20,377, the adapter call triggers the handleChallenge() method of our challenge handler. When inspecting the response, we have seen that the responseJSON is actually populated with our required data, so really the handleChallenge should never have fired.
When we ping the adapter directly through the browser with the same parameters, it works fine. We've been able to isolate that this problem is occurring in the worklight.js / mobilefirst realm.
Does anyone have any idea if, or why, the Challenge Handler in worklight.js would not function as expected with a very large response size?
The bottom line is that it should. There is no reason for it not to.
If you have not been able to resolve this otherwise, my suggestion is to open an IBM PMR (support ticket) to have the development team investigate the issue.
We ended up (sort of) diagnosing it. At a certain payload size, the "/*secure {" prefix fell off the response (we're still not certain why). Our loginChallengeHandler function was based on an example we found in some IBM documentation and would improperly mark the response as a login form if the /*secure prefix wasn't present. Once we tightened up the challenge handler, it worked.

What is the benefit of passing HTTP status with API responses?

With your error responses you need to pass some attribute that indicates the error type anyway, so that makes the HTTP status code redundant or sometimes even inaccurate.
Any real benefit here, and why it shouldn't just be 200?
Assuming we're talking about a RESTful API:
Because if you return an error message with an HTTP status code of 200, the client will assume the request was successful and may not read the response body at all.
If the status code is 200, the client will not be looking for an error message in the response body.
An HTTP status code is not redundant. Pretty much in the same way a book's index is not redundant. First you look at the index for a topic you want to read on, and then you flip to that page and read more on that topic. Similarly, the client first reads the status code to know what happened, and then reads the response body to find out more about it.
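As a small illustration of that "index first, then the page" flow, here is a sketch of a client in Kotlin using java.net.http (the endpoint is a placeholder): it branches on the status code before it ever looks at the body.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("https://api.example.com/items/42")).GET().build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())

    // The status code answers "what happened?"; the body answers "why, and with what details?".
    when (response.statusCode()) {
        200 -> println("Got the item: ${response.body()}")
        404 -> println("No such item; details: ${response.body()}")
        else -> println("Request failed (${response.statusCode()}): ${response.body()}")
    }
}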

How to access error response body with AngularJS

My head is spinning because of the following issue. I'm accessing my web service (running on localhost:4434) with AngularJS, and if something goes wrong, the web service sends a 400 response containing a JSON body with a message that tells you what exactly went wrong.
The problem is that I cannot access the message on the client. It is almost as if it never reaches the client (this isn't the case; I've already confirmed that it reaches the client). This is the Angular code that I use on the client side.
$scope.create = function() {
    $http.post('http://localhost:4434/scrapetastic/foo', $scope.bar).
        success(function(data, status, headers, config) {
            console.log("Call to log: "+status);
            console.log("Call to log: "+data);
        }).
        error(function(data, status) {
            console.log("Error|Data:"+data);
            console.log(status);
        });
}
If I submit malformed data, a corresponding error response is generated, but as I said... somehow I cannot access the message contained in the response body. This is what I get:
I've tried all sorts of things but am seriously stuck now... Perhaps someone has an idea of how to access the payload of the response, or at least what to do next? I'm also dealing with CORS; perhaps it has something to do with that.
Thanks!
I'm going to take a wild guess here and say that your problem is a cross-origin issue.
Not only do you not have the data variable, but as far as I can tell from your screenshot, status == 0.
Your screenshot also says Origin: http://localhost, which makes this a cross-origin request (since the port is different). That would explain why status is 0.
Edit: You can use JSONP to get around the issue.
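If you control the web service, the other common fix (besides JSONP) is to send CORS headers so the browser exposes the cross-origin 400 response to Angular's error callback. A rough sketch with Ktor, assuming the page is served from http://localhost; the routes mirror the question, the error payload is made up:

import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun main() {
    embeddedServer(Netty, port = 4434) {
        routing {
            // Preflight for the cross-origin POST with a JSON body.
            options("/scrapetastic/foo") {
                call.response.header(HttpHeaders.AccessControlAllowOrigin, "http://localhost")
                call.response.header(HttpHeaders.AccessControlAllowMethods, "POST")
                call.response.header(HttpHeaders.AccessControlAllowHeaders, "Content-Type")
                call.respond(HttpStatusCode.OK)
            }
            post("/scrapetastic/foo") {
                // The CORS header must also be on the actual error response,
                // otherwise the browser hides the body and Angular sees status 0.
                call.response.header(HttpHeaders.AccessControlAllowOrigin, "http://localhost")
                call.respondText(
                    """{"message": "malformed data: field 'bar' is required"}""",
                    ContentType.Application.Json,
                    HttpStatusCode.BadRequest
                )
            }
        }
    }.start(wait = true)
}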