IronMQ push queue sending unknown HTTP requests - apache

I set up my push queue endpoint as POST /iron, which works fine. But I'm getting a bunch of other requests too. Are these from Iron.io? What's the point of them? They're just filling up my Apache log. My server is returning 500 errors for all of them (500 instead of 404, since it's in development mode).
POST /webhooks
POST /orders/webhook
POST /api/orders/webhook
Edit: I looked into the multicast setup and noticed that only my first server was getting these weird requests. They seem to be totally unrelated to Iron.io. I guess it's just a coincidence that they're webhook requests and that I only noticed them now. Probably someone set my server as an endpoint for their webhooks. >_<

If you added all those endpoints (subscribers) to your queue, it's possible that IronMQ is sending multiple requests. Check your queue's subscriber list:
GET /projects/{Project ID}/queues/{Queue Name}
If it contains multiple endpoints and its type is multicast, that is the reason for the multiple requests on your side. In this case, remove all the extra subscribers (or set up a new queue):
DELETE /projects/{Project ID}/queues/{Queue Name}/subscribers
Otherwise, contact support :)
More information at http://dev.iron.io/mq
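A rough sketch of those two calls using the Python requests library; the host, API version prefix, auth header, and delete-body format here are assumptions about the legacy IronMQ v1 REST API, so confirm them against http://dev.iron.io/mq:

import requests

# Placeholder values - substitute your own project ID, queue name and token.
PROJECT_ID = "YOUR_PROJECT_ID"
QUEUE_NAME = "YOUR_QUEUE_NAME"
TOKEN = "YOUR_OAUTH_TOKEN"

# Assumed legacy IronMQ v1 host and auth scheme; check the docs for your account.
BASE = "https://mq-aws-us-east-1.iron.io/1"
HEADERS = {"Authorization": f"OAuth {TOKEN}", "Content-Type": "application/json"}

# Inspect the queue: the response includes its push type and subscriber list.
info = requests.get(f"{BASE}/projects/{PROJECT_ID}/queues/{QUEUE_NAME}", headers=HEADERS)
print(info.json().get("push_type"), info.json().get("subscribers"))

# Remove the subscribers you no longer want (hypothetical URL shown).
requests.delete(
    f"{BASE}/projects/{PROJECT_ID}/queues/{QUEUE_NAME}/subscribers",
    headers=HEADERS,
    json={"subscribers": [{"url": "http://example.com/unwanted"}]},
)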

IronMQ won't send any "unknown" requests. If your endpoint doesn't return a 200, the push queue will keep retrying the message until it either a) receives a 200, or b) fails the "max_retries" number of times.
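As a minimal illustration of that behaviour, here is a hypothetical Flask handler (not from the original post): returning 200 acknowledges the pushed message, while any other status (or a timeout) makes the push queue retry.

from flask import Flask, request

app = Flask(__name__)

@app.route("/iron", methods=["POST"])
def iron_push():
    body = request.get_data(as_text=True)  # the queued message body
    # ... process the message here ...
    # A 200 acknowledges the message; anything else causes IronMQ to retry
    # until it succeeds or max_retries is reached.
    return "OK", 200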
Also, per Featilion's answer, check the multicast/unicast subscriber setup. If you are getting requests to those other endpoints, then there's something wrong with your subscriber setup.
Feel free to jump into live chat if you don't figure out your answer rather quickly.

Related

Should the NOTIFY/M-SEARCH messages be all headers when using spring-integration-ip to send messages?

I have written an application that successfully listens on the multicast addresses 239.255.255.250:1900 and [FF02::C]:1900. I receive the desired NOTIFY and M-SEARCH messages using spring-integration-ip's MulticastSendingMessageHandler.
However, while I am able to send messages using the UnicastSendingMessageHandler, clients such as VLC do not seem to recognize my running server.
I went through the UPnP Device Architecture PDF back and forth and manually sent the 3+2+1 NOTIFY messages and also responded to M-SEARCH, but somehow I cannot get VLC (for example) to recognize my server.
I also see no access to my HTTP server (a separate application on a different port, but properly referenced in the LOCATION header of the NOTIFY and M-SEARCH response messages). No attempts at all.
Do I need to send the data using MessageHeaders (headers) instead of the payload? What is the prerequisite for a potential media server to be listed? Sending the NOTIFY messages? Responding to M-SEARCH messages? More?
And what are the allowed devicetype and servicetype values? Or do they vary?
If anyone wants, I can add some code, but the listening part is working fine and messages are being sent; they are just apparently not understood by their receivers (I send via unicast to the address that sent the M-SEARCH message, but on port 1900).
Honestly: I am not sure how to even word my question(s). I tried reading through the RSSDP source code, but I still do not fully get it.
Any pointers are greatly welcomed.
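For reference, here is a minimal Python sketch (independent of the spring-integration-ip setup above) of a single ssdp:alive NOTIFY announcement as described in the UPnP Device Architecture; the LOCATION, SERVER and UUID values are placeholders:

import socket

SSDP_ADDR = ("239.255.255.250", 1900)

# Placeholder values - a real device repeats this for the root device, each
# embedded device, and each service (the "3+2+1" messages mentioned above).
notify = (
    "NOTIFY * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    "CACHE-CONTROL: max-age=1800\r\n"
    "LOCATION: http://192.168.1.10:8080/description.xml\r\n"
    "NT: upnp:rootdevice\r\n"
    "NTS: ssdp:alive\r\n"
    "SERVER: Linux/1.0 UPnP/1.0 MyServer/1.0\r\n"
    "USN: uuid:00000000-0000-0000-0000-000000000000::upnp:rootdevice\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.sendto(notify.encode("ascii"), SSDP_ADDR)
sock.close()

# Note: per the UPnP spec, responses to M-SEARCH are unicast back to the
# source IP and source port of the search request, not to port 1900.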

Sonos API subscription callbacks stopped

I have a Perl service running on Apache HTTP Server that has been working fine for several years to issue Sonos commands and receive callbacks. About two weeks ago, I stopped receiving any callbacks.
I subscribed successfully (response={}) for groupVolume, playbackMetadata, and playback events.
I am successfully getting webhook messages from other services (e.g., Vonage) over HTTPS, so it seems the port is open to my server and Apache is successfully processing those requests. I see no trace of any messages from the Sonos API in my Apache logs.
I have no trouble issuing commands (setMute, getFavorites, getPlaybackMetadata, etc.). Only the callbacks are a problem.
I ran the ssltools checker from DigiCert but found no issues.
I can't recall making any changes to the home router config.
Does anyone else have a problem like this or know how to diagnose what's happening?
I installed Wireshark but am overwhelmed by its functionality and don't know how to narrow down what I should be looking for to see whether the messages are being received and blocked somehow.
It may be unlikely, but is it possible that there isn't any usage of your integration that would result in callbacks being sent to your service? For example, if the volume isn't being changed or playback isn't happening, you won't receive events.
If that's not the case, additional information is required to debug this issue. Could you please email developer-feedback@sonos.com with the following information:
The name of your service/application
The date/time your service stopped receiving callback events. You said about two weeks ago, but could you be more specific?
The clientId used by your code. This is the UUID you generated when you initially created the "API Key" on developer.sonos.com. Format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (note - we do not need the secret associated with this key).
With that information we should be able to determine the cause of your missing callbacks.

How to abort code when publishing a message to a non-existent queue in RabbitMQ

I have written a server-client application.
Server Side
The server initializes a queue queue1 with routing key key1 on a direct exchange.
After initialization and declaration, it consumes data whenever someone writes to the queue.
Client Side
The client publishes some data to that exchange using routing key key1.
I have also set the mandatory flag to true before publishing.
Problem
Everything is fine when I start the server first, but I have a problem when I start the client first and it publishes data with the routing key: when the client publishes the data, there is no exception from the broker.
Requirement
I want an exception or error when I publish data to a non-existent queue.
If you publish messages with the mandatory flag set to true, a message will be returned to you when it cannot be routed to any queue.
As for non-existent exchanges: publishing to a non-existent exchange is forbidden, so you will get an error about that, something like NOT_FOUND - no exchange 'nonexistent_exchange' in vhost '/'.
You can declare exchanges and queues and bind them as you need on the client side too. These operations are idempotent.
Note that creating and binding exchanges and queues on every publish may have a negative performance impact, so do that on client start, not on every publish.
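Following that advice, a minimal sketch with pika of declaring and binding on the client side before publishing (exchange, queue and key names are placeholders, the arguments must match whatever the server declares, and recent pika uses exchange_type= where older versions used type=):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Idempotent declarations: safe even if the server already created these,
# as long as the arguments (durability, type, ...) match exactly.
channel.exchange_declare(exchange="my_exchange", exchange_type="direct", durable=True)
channel.queue_declare(queue="queue1", durable=True)
channel.queue_bind(queue="queue1", exchange="my_exchange", routing_key="key1")

# The publish now has a queue to route to even if the client started first.
channel.basic_publish(exchange="my_exchange", routing_key="key1", body="hello")
connection.close()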
P.S.: if you use rabbitmq-c, it is worth citing the basic_publish documentation:
Note that at the AMQ protocol level basic.publish is an async method:
this means error conditions that occur on the broker (such as publishing to a non-existent exchange) will not be reflected in the return value of this function.
I spent a lot of time figuring this out. Here is example code in Python using the pika library that shows how to publish with delivery confirmations and the mandatory flag, so that a message published to a non-existent queue is reported as returned instead of being silently ignored by the broker:
import pika

# Open a connection to RabbitMQ on localhost using all default parameters
connection = pika.BlockingConnection()

# Open the channel
channel = connection.channel()

# Declare the queue
channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

# Enable delivery confirmations
channel.confirm_delivery()

# Send a message. With pika 0.x's BlockingConnection and confirm_delivery()
# enabled, basic_publish returns False when a mandatory message cannot be
# routed (newer pika versions raise UnroutableError instead).
if channel.basic_publish(exchange='test',
                         routing_key='test',
                         body='Hello World!',
                         properties=pika.BasicProperties(content_type='text/plain',
                                                         delivery_mode=1),
                         mandatory=True):
    print('Message was published')
else:
    print('Message was returned')
Reference:
http://pika.readthedocs.org/en/latest/examples/blocking_publish_mandatory.html

Force Twilio to respond to request slowly in order to test app's handling

I'm debugging an application that sends SMS messages via the Twilio REST API. The other day we hit a strange bug that we can't reproduce, and I think it may have happened because the Twilio API took a very long time to respond (2-3 seconds) and the app didn't handle the delay well.
We're working on improving the app to better handle a scenario like this, but I'm not sure how to test if we've really fixed the issue. Is there a way to force Twilio to respond slowly, in order to test this?
I realize that I could make my own mock web service with a long delay and substitute it in for Twilio -- but I'd like to avoid that if possible. In particular, I'm using one of the Twilio helper libraries for all of my call-outs, and would like to avoid monkey-patching them if at all possible.
You might want to try configuring a proxy server in your tests that responds very slowly. Here's one that we use, written as an Nginx config. Note that it requires the nginx-lua module.
location ~ /slow {
    # Proxy pass is necessary so the incoming request is accepted and
    # processed by nginx.
    proxy_pass http://127.0.0.1:11418;
}

server {
    listen 127.0.0.1:11418;
    location ~ / {
        # Return a funny HTTP code here, so it's clear that the slow block
        # got hit.
        add_header 'X-Served-By' 'slow-as-heck';
        content_by_lua 'ngx.sleep(25); ngx.exit(418)';
    }
}
You can also try connecting to a dead IP that won't send back a TCP reset, such as 10.255.255.1.
Hope it helps,
Kevin
Twilio evangelist here.
There is not currently a way to tell Twilio to simulate a slow HTTP response or an HTTP response timeout.
Depending on your platform there may be a way to catch a network timeout, which is what it sounds like you suspect caused the problem.
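For example, a rough sketch of catching such a timeout in Python with the requests library (the URL and the fallback handling are placeholders, and this bypasses the helper library entirely):

import requests

TWILIO_URL = "https://api.twilio.com/2010-04-01/Accounts/ACXXXX/Messages"  # placeholder

def send_sms(payload, auth):
    try:
        # A short client-side timeout turns a slow upstream into an explicit
        # error instead of an indefinite hang.
        return requests.post(TWILIO_URL, data=payload, auth=auth, timeout=5)
    except requests.exceptions.Timeout:
        # Placeholder handling: log it, queue a retry, or fail gracefully.
        return None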
Hope that helps.

Heroku Intercepting Some Gmail Incoming Messages

I am serving my Rails 3 app on Heroku, my mail through Google, and the domain through Enom. This is for www.challengage.com
This works 95% of the time. However, once in a while, when someone tries to reply to an email I send them, it fails with the error message below, because my email address, josh@challengage.com, somehow got replaced with josh@challengage.herokuapp.com by the time they received it. I think it has something to do with Mail Delivery Subsystems, but I'm not sure. It also only seems to happen when emailing university professionals.
Error Message:
From: Mail Delivery Subsystem [mailto:MAILER-DAEMON@smtp2.syr.edu]
Sent: Monday, July 15, 2013 2:08 PM
To: David DiMaggio
Subject: Undeliverable: FW: Challengage - Work Team Simulation product for interviewing evaluations
Delivery has failed to these recipients or groups:
paul@challengage.herokuapp.com
The server has tried to deliver this message, without success, and has stopped trying. Please try sending this message again. If the problem continues, contact your helpdesk.
The following organization rejected your message: challengage.herokuapp.com.
Any ideas?
Thanks everyone.
This is almost certainly because you're using a CNAME for your email records.
Although most email servers will reflect the original domain when sending a message, others will replace it with the domain that's at the end of the CNAME.
This means that instead of sending to someone@challengage.com, they send to someone@challengage.herokuapp.com.
The mail server sees the request to send to someone@challengage.herokuapp.com, decides that it doesn't look after challengage.herokuapp.com, and so from its perspective the message is rejected.
We used to see this issue with CloudMailin customers and started recommending that they not use CNAMEs where email is involved, and instead add MX records directly on the apex domain.
With Heroku this poses a problem, though, as you don't have a single IP that you can use to access their servers. We eventually ended up using Route 53 to host our domain, adding an SSL endpoint (to get load balancer details), and then pointing a Route 53 alias record at that load balancer so that it automatically always gave the correct results. Alternatively, you can set up some sort of static-IP-based redirect on your apex domain.
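For illustration, the kind of setup recommended above might look roughly like this in the DNS zone (values are only indicative):

; Apex: MX records only - no CNAME at challengage.com itself.
challengage.com.        MX     10 aspmx.l.google.com.
challengage.com.        MX     20 alt1.aspmx.l.google.com.

; The www hostname can still point at Heroku via CNAME (use a Route 53
; ALIAS record for the bare apex domain).
www.challengage.com.    CNAME  challengage.herokuapp.com.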