Force Twilio to respond to requests slowly in order to test the app's handling

I'm debugging an application that sends SMS messages via the Twilio REST API. The other day we had a strange bug that we can't reproduce, and I think it may have happened because the Twilio API took a very long time to respond (2-3 seconds) and the app didn't handle the delay well.
We're working on improving the app to better handle a scenario like this, but I'm not sure how to test if we've really fixed the issue. Is there a way to force Twilio to respond slowly, in order to test this?
I realize that I could make my own mock web service with a long delay and substitute it in for Twilio -- but I'd like to avoid that if possible. In particular, I'm using one of the Twilio helper libraries for all of my call-outs, and would like to avoid monkey-patching them if at all possible.

You might want to try configuring a proxy server in your tests that responds very slowly. Here's one that we use, written as an nginx config. Note that it requires the nginx Lua module.
location ~ /slow {
    # Proxy pass is necessary so the incoming request is accepted and
    # processed by nginx.
    proxy_pass http://127.0.0.1:11418;
}

server {
    listen 127.0.0.1:11418;

    location ~ / {
        # Return a funny HTTP code here, so it's clear that the slow block got
        # hit.
        add_header 'X-Served-By' 'slow-as-heck';
        content_by_lua 'ngx.sleep(25); ngx.exit(418)';
    }
}
You can also try connecting to a dead IP that won't send back a TCP reset, like 10.255.255.1.
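Either way (the slow proxy or the dead IP), a quick throwaway request will confirm whether your client-side timeout handling actually kicks in. A rough sketch on the JVM, where the localhost port is just a placeholder for wherever your nginx listens:

import java.net.HttpURLConnection
import java.net.SocketTimeoutException
import java.net.URL

fun main() {
    // Point this at the nginx "slow" location above, or at http://10.255.255.1/ for the dead-IP case.
    val connection = URL("http://127.0.0.1:8080/slow").openConnection() as HttpURLConnection
    connection.connectTimeout = 5_000   // give up on the TCP connect after 5 s
    connection.readTimeout = 5_000      // well below the 25 s sleep in the Lua block
    try {
        println("Got HTTP ${connection.responseCode}")   // only reached if a response arrives in time
    } catch (e: SocketTimeoutException) {
        println("Timed out as expected: ${e.message}")   // this is the path your app needs to handle
    } finally {
        connection.disconnect()
    }
}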
Hope it helps,
Kevin

Twilio evangelist here.
There is not currently a way to tell Twilio to simulate a slow HTTP response or an HTTP response timeout.
Depending on your platform there may be a way to catch a network timeout, which is what it sounds like you suspect caused the problem.
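For example, on the JVM the general shape would be something like the following. This is a plain HTTP call rather than the Twilio helper library itself, and the fallback is just an illustration:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.net.http.HttpTimeoutException
import java.time.Duration

val httpClient: HttpClient = HttpClient.newBuilder()
    .connectTimeout(Duration.ofSeconds(5))
    .build()

fun sendWithTimeout(url: String): String? {
    val request = HttpRequest.newBuilder(URI.create(url))
        .timeout(Duration.ofSeconds(5))   // bound the whole exchange, not just the connect
        .build()
    return try {
        httpClient.send(request, HttpResponse.BodyHandlers.ofString()).body()
    } catch (e: HttpTimeoutException) {
        // The API took too long: queue the message for a retry instead of blocking the caller.
        null
    }
}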
Hope that helps.

Related

Kotlin HTTPS request timeout using Ktor HttpClient: suggestions?

I have an app that is being used in many places across my country, and it communicates with a Flask server (deployed on Apache).
The reception is sometimes not so good in parts of my country. My users are sometimes fetching/sending data from/to the server, but it loads for too long and then fails or gets stuck. I think something is getting lost in the communication between the app and the server. It also tends to happen from 6 AM onwards, when I guess there is more activity/load on the server.
What is the best timeout configuration to apply to the HTTP request in this case?
This is what I use (full client setup sketched below):
requestTimeoutMillis = 15000
connectTimeoutMillis = 10000
socketTimeoutMillis = 10000
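For context, these values live in Ktor's HttpTimeout plugin; a minimal sketch (the CIO engine here is just a placeholder):

import io.ktor.client.HttpClient
import io.ktor.client.engine.cio.CIO
import io.ktor.client.plugins.HttpTimeout

// Engine choice is a placeholder; swap in whatever engine the app actually uses.
val client = HttpClient(CIO) {
    install(HttpTimeout) {
        requestTimeoutMillis = 15000  // the whole call, from sending the request to reading the body
        connectTimeoutMillis = 10000  // establishing the TCP connection only
        socketTimeoutMillis = 10000   // maximum silence between two packets
    }
}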
Furthermore, does anybody have a suggestion for a protocol that can "hide" this behaviour from the user? They tell me that if it gets stuck, they go and browse some website like Facebook, come back to the app, press the send button again, and then it works.
thanks

Sonos API subscription callbacks stopped

I have a Perl service running under Apache HTTP Server that has been working fine for several years, issuing Sonos commands and receiving callbacks. About two weeks ago, I stopped receiving any callbacks.
I subscribed successfully (response={}) for groupVolume, playbackMetadata, and playback events.
I am successfully getting webhook messages from other services (e.g., Vonage) over HTTPS, so it seems the port is open to my server and Apache is successfully processing these requests. I see no trace of any messages from the Sonos API in my Apache logs.
I have no trouble issuing commands (setMute, getFavorites, getPlaybackMetadata, etc.). Only the callbacks are a problem.
I ran the ssltools checker from digicert but found no issues.
I can't recall making any changes to the home router config.
Does anyone else have a problem like this or know how to diagnose what's happening?
I installed Wireshark but am overwhelmed by its functionality and don't know how to narrow down what I should be looking for to see whether the messages are being received and blocked somehow.
It may be unlikely, but is it possible that there isn't any usage of your integration that would result in callbacks being sent to your service? For example, if volume isn't being changed or playback isn't happening, you won't receive events.
If that's not the case, additional information is required to debug this issue. Could you please email developer-feedback#sonos.com with the following information:
The name of your service/application
The date/time your service stopped receiving callback events. You said about two weeks ago, but could you be more specific?
The clientId used by your code. This is the UUID you generated when you initially created the "API Key" on developer.sonos.com. Format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (note - we do not need the secret associated with this key).
With that information we should be able to determine the cause of your missing callbacks.

Simulate the Access Disabled feature in Worklight when the Worklight Server itself is down

I am trying to show end users a maintenance window such as "We are down, please try later" and disable the application. But my problem is: what if my Worklight Server itself is down and not reachable, so I cannot use the feature provided by the Worklight Console?
Is there a way I can make my app talk to a different server which returns the JSON data below when an app is disabled? Can I simulate this behaviour? Is this possible?
JSON received on access disabled in Worklight:
/*-secure-
{"WL-Authentication-Failure":{"wl_remoteDisableRealm":{"message":"We are down, Please try again soon","downloadLink":null,"messageType":"BLOCK"}}}*/
I have some conceptual problems with this question.
Typically a production environment (simplified) would not consist of a single server serving your end users... meaning, there would be a cluster of nodes, each node being a Worklight Server, and this cluster would sit behind a load balancer that directs the incoming requests. So in a situation where a node is down for maintenance, as in your scenario, there would still be more servers able to serve; there would be no downtime.
Thus your suggestion to simulate a Remote Disable by sending it from another(?) Worklight Server seems not quite the correct path to take (it may even be simply wrong). If you had this second Worklight Server, why wouldn't it just serve the apps as usual? See again my first paragraph about clustering.
Now let's assume there is still downtime that affects all servers. The application's client logic should be able to handle failed connections to the Worklight Server. In such a case you should handle this in WL.Client.connect()'s onFailure callback function and display a WL.SimpleDialog that looks just like a Remote Disable dialog... or perhaps via initOptions.js's onConnectionFailure callback.
Bottom line: you cannot simulate the JSON that is sent back for the wl_RemoteDisable realm; it is part of a larger security mechanism.
Additionally, perhaps a better way to handle maintenance mode on your server is to have the HTTP server return a specific HTTP status code, check for this code in the client, and display a proper message based on it.
To check for this code in a simple example:
Note: the getStatus method is available starting MobileFirst Platform Foundation 7.0 (formerly "Worklight").
function wlCommonInit() {
    WL.Client.connect({onSuccess: success, onFailure: failure});
}

function success(response) {
    // ...
}

function failure(response) {
    if (response.getStatus() == "503") {
        // Site is down for maintenance - display a proper message.
    } else {
        // Handle other failure statuses/causes here.
    }
}

Play 2.1.0 + Apache 2.2 Reverse proxy => 502 proxy error when idle

Config
We have a Play 2.1.0 with AngularJS setup running in production mode.
We have a reverse proxy / load balancer set up with Apache 2.2, something like what is described here:
http://www.playframework.com/documentation/2.1.0/HTTPServer
The whole app runs in an iframe that is navigated to from a JBoss application.
Problem
Most of the time it works, but sometimes, when the connection has been left idle for 2-3 hours (untouched, with no one hitting the reverse proxy URL to load the JBoss/Play app), we get a 502 proxy error in the iframe content after a few minutes' wait.
Play receives the request but somehow decides not to respond at all. This occurs only for the first request or the first couple of requests after the wake-up. Then, when we refresh the page, Play receives the request and responds to it properly.
Tried
We took a tcpdump on the Play port, and all the requests are being received, but no response is sent from Play in the failed scenario, whereas the same request gets a response from Play on subsequent attempts.
X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Server and Connection: Keep-Alive - all these headers are present in the tcpdump of the request that got no response.
We tried KeepAlive with timeouts in the proxy server, without much help. Why doesn't Play respond to the initial connections after the idle period? Is there any configuration we can set to keep it alive?
Workaround
Polling the Play server URL every half an hour from the same server makes this issue not reproducible.
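Roughly what the poller amounts to (the URL and interval here are placeholders; any scheduler or cron job would do the same job):

import java.net.HttpURLConnection
import java.net.URL
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleAtFixedRate({
        try {
            // Placeholder URL: hit the Play app through the same path the proxy uses, so the connection stays warm.
            val conn = URL("http://127.0.0.1:9000/ping").openConnection() as HttpURLConnection
            conn.connectTimeout = 10_000
            conn.readTimeout = 10_000
            println("Keep-warm poll returned HTTP ${conn.responseCode}")
            conn.disconnect()
        } catch (e: Exception) {
            // A failed poll is fine; the next one runs in 30 minutes anyway.
        }
    }, 0, 30, TimeUnit.MINUTES)
}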
Still, any help or suggestions to fix this issue would be really appreciated.
I tried to solve this problem myself. Approaches like the answers mentioned here and here did not change anything.
I then decided to go back to nginx, which I had been using with Play applications before. The setup can be found here. Since then the problem has been gone.

IronMQ push queue sending unknown HTTP requests

I set up my push queue endpoint as POST /iron, which works fine. But I'm getting a bunch of other requests too. Are these from Iron.io? What's the point of them? They're just filling up my Apache log. My server is returning 500 errors for all of them (500 instead of 404 since it's in development mode).
POST /webhooks
POST /orders/webhook
POST /api/orders/webhook
Edit: I looked into it using multicast and noticed that only my first server was getting these weird requests. They seem to be totally unrelated to Iron.io. I guess it's just a coincidence that they're webhook requests and I only noticed them now. Probably someone set my server as an endpoint for their webhooks. >_<
If you added all those endpoints as subscribers to your queue, it's possible that IronMQ is sending multiple requests. Check your queue's subscriber list.
GET /projects/{Project ID}/queues/{Queue Name}
If it contains multiple endpoints and its type is multicast, that is the reason for the multiple requests on your side. In that case, remove all the extra subscribers (or set up a new queue).
DELETE /projects/{Project ID}/queues/{Queue Name}/subscribers
Otherwise, contact support (:
More information at http://dev.iron.io/mq
IronMQ won't send any "unknown" requests. If your endpoint doesn't return a 200, the push queue will keep retrying the message until it either a) receives a 200, or b) fails the "max_retries" number of times.
Also per Featilion's answer, check the multicast/unicast/subscriber setup as well. If you are getting requests to those other endpoints then there's something up with your subscriber setup.
Feel free to jump into live chat if you don't figure out your answer rather quickly.