Getting MTA blocked from zen.spamhaus.org but the website check shows IP is OK - amazon-ses

I'm using zen.spamhaus.org in my sendmail config.
FEATURE(`dnsbl', `zen.spamhaus.org')dnl
I'm using AWS SES to send email and when I try to relay an email I get:
Nov 9 09:01:00 Web-Mail sendmail[12751]: ruleset=check_relay, arg1=e226-2.smtp-out.us-east-2.amazonses.com, arg2=127.255.255.254, relay=e226-2.smtp-out.us-east-2.amazonses.com [23.251.226.2], reject=550 5.7.1 Rejected: 23.251.226.2 listed at zen.spamhaus.org
But if I go to the Spamhaus website and check the IP, it says there are no issues.
https://check.spamhaus.org/not_listed/?searchterm=23.251.226.2
23.251.226.2 has no issues
This has just started happening recently. I tried whitelisting the SES server in my access.db to no avail.
Any help would be appreciated.
Also tried sbl.spamhaus.org with the same results.
Turns out it's also blocking other valid MTAs:
Nov 9 09:43:26 Web-Mail sendmail[12990]: ruleset=check_relay, arg1=mail-dm6nam10olkn2106.outbound.protection.outlook.com, arg2=127.255.255.254, relay=mail-dm6nam10olkn2106.outbound.protection.outlook.com [40.92.41.106], reject=550 5.7.1 Rejected: 40.92.41.106 listed at zen.spamhaus.org
Which explains why I'm getting reports from other people saying their emails are being returned.

I am experiencing a similar issue: lots of people are receiving rejected-email notices because zen.spamhaus.org is wrongly returning blocked responses.
As you have found, the Spamhaus website indicates no issues with the IPs.
But this is the only mention of the issue that I can find!
I am using Postfix.
I have removed zen.spamhaus.org from my smtpd_recipient_restrictions config for now and things are returning to normal.
Looks like the DNS for zen.spamhaus.org isn't resolving, which could be the issue.
OK, it looks like I was rate limited. A project I am working on sent out 203 emails in error, and I think I fell foul of Spamhaus's rate limiter for too many queries in a short time.
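For anyone else hitting this, it is worth querying the DNSBL directly rather than relying on the website lookup. If I am reading Spamhaus's published return codes correctly, answers in the 127.255.255.x range are error codes (for example, queries sent through a large public resolver or over the free usage limit) rather than real listings, and the arg2=127.255.255.254 in the logs above looks like exactly that; sendmail simply treats any A record as "listed". Below is a minimal sketch in Python (standard library only) that shows what your resolver actually gets back for the SES relay IP from the log; the interpretation in the comments is my reading of the Spamhaus documentation, so treat it as a starting point rather than gospel.

# Query zen.spamhaus.org directly for the IP from the log above and print
# whatever A record comes back. Roughly: 127.0.0.2-127.0.0.11 means "listed",
# while 127.255.255.x answers are Spamhaus error codes (public resolver or
# query limit), which sendmail/Postfix will still treat as a listing.
import socket

ip = "23.251.226.2"  # the Amazon SES relay from the log above
query = ".".join(reversed(ip.split("."))) + ".zen.spamhaus.org"

try:
    answer = socket.gethostbyname(query)
    print(f"{query} -> {answer}")
except socket.gaierror:
    print(f"{query} -> NXDOMAIN (not listed)")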

Sonos API subscription callbacks stopped

I have a Perl-on-Apache HTTP service that's been working fine for several years to issue Sonos commands and receive callbacks. About two weeks ago, I stopped receiving any callbacks.
I subscribed successfully (response={}) for groupVolume, playbackMetadata, and playback events.
I am successfully getting webhook messages from other services (e.g., Vonage) using HTTPS, so it seems the port is open to my server, and Apache is successfully processing these requests. I see no trace of any messages from the Sonos API in my Apache logs.
I have no trouble issuing commands (setMute, getFavorites, getPlaybackMetadata, etc.). Only the callbacks are a problem.
I ran the ssltools checker from DigiCert but found no issues.
I can't recall making any changes to the home router config.
Does anyone else have a problem like this or know how to diagnose what's happening?
I installed Wireshark but am overwhelmed by its functionality and don't know how to narrow down what I should be looking for to see if the messages are being received and blocked somehow.
It may be unlikely, but is it possible that there isn't any usage of your integration that would result in callbacks being sent to your service? For example, if volume isn't being changed or playback isn't happening, you won't receive events.
If that's not the case, additional information is required to debug this issue. Could you please email developer-feedback@sonos.com with the following information:
The name of your service/application
The date/time your service stopped receiving callback events. You said about two weeks ago, but could you be more specific?
The clientId used by your code. This is the UUID you generated when you initially created the "API Key" on developer.sonos.com. Format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (note - we do not need the secret associated with this key).
With that information we should be able to determine the cause of your missing callbacks.

Apache Reverse Proxy changes status code

Background
We have been running an application on JBoss that is exposed to clients via an Apache reverse proxy. We recently introduced "HTTP 429 Too Many Requests" to slow down high-velocity requests.
Problem
However, it seems that apache2 changes the HTTP status code from 429 to 500.
Root cause analysis
Confirmed that JBoss sends HTTP 429, by bypassing the proxy and talking to it directly.
Confirmed from /var/log/apache2/access.log that apache2 receives HTTP 429:
10.0.0.161 - - [16/Jul/2014:07:27:47 +0000] "POST /the/URL/ HTTP/1.1" 429 1018 "-" "curl/7.36.0" |0/466110|
The curl client, however, gets 500.
There's also a bug filed a few years back, Bugzilla #900827. I remember reading that it was fixed in 2.2.18, yet I still face the problem, which leads me to think there's probably a different problem altogether.
Questions
As I have read elsewhere, Apache might not relay the code perfectly for custom HTTP status codes. But isn't HTTP 429, being part of the Additional HTTP Status Codes RFC (RFC 6585), standard enough to be recognised and relayed?
Is there something crucial that I am missing here?
PS: Since this question is more about the HTTP status spec, I asked it here. If the community feels it's more about Apache, please feel free to vote to move the question to Server Fault.
I just stumbled upon your question because I was once again researching a similar problem, where our Apache reverse proxy returned a 500 status code on an ActiveSync response 449.
I also found the Bugzilla entry you mentioned and the statements that it should have been fixed with version 2.2.18; however, we use 2.2.22 and still faced the problem.
Further reading of the comments in that Bugzilla entry led me to the Apache bug entry #44995. Those comments, especially the last one, led me to believe that the issue, particularly with custom error codes that have no status message, has not been fixed in any 2.2.x version but is included in 2.3/2.4.
So we moved on and updated our reverse proxy to a 2.4 version, and to our surprise the error code 449 was correctly passed through by the proxy.
As you didn't mention the apache2 version you use, I can only guess that an update to 2.4 or 2.3 might be a possible solution for you.
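If you want to check whether your proxy shows the same behaviour without involving JBoss at all, a throwaway backend that always answers 429 makes the comparison easy. The sketch below is just that, in Python with only the standard library; the port, paths, and the explicit "Too Many Requests" reason phrase are my own assumptions (the reports above suggest 2.2.x is touchiest about status lines with no reason phrase), so adjust to your setup.

# Throwaway test backend that always answers 429. Point an Apache ProxyPass
# at 127.0.0.1:8081 and compare:
#   curl -i http://127.0.0.1:8081/anything   (direct: should show 429)
#   curl -i http://your-proxy/anything       (via the proxy: 429 or 500?)
from http.server import BaseHTTPRequestHandler, HTTPServer

class TooManyRequestsHandler(BaseHTTPRequestHandler):
    def _answer(self):
        body = b"backend says 429\n"
        # Send an explicit reason phrase; older 2.2.x proxies reportedly
        # choke hardest on status lines without one.
        self.send_response(429, "Too Many Requests")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    do_GET = _answer
    do_POST = _answer

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8081), TooManyRequestsHandler).serve_forever()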

Heroku Intercepting Some Gmail Incoming Messages

I am serving my Rails 3 app on Heroku, my mail through Google, and the domain through Enom. This is for www.challengage.com
This works 95% of the time; however, once in a while, when someone tries to reply to an email I send them, it fails with the error message below because my email, josh@challengage.com, somehow got replaced with josh@herokuapp.challengage.com when they received it. I think it has something to do with Mail Delivery Subsystems, but I'm not sure. It also only seems to happen when emailing university professionals.
Error Message:
From: Mail Delivery Subsystem [mailto:MAILER-DAEMON@smtp2.syr.edu]
Sent: Monday, July 15, 2013 2:08 PM
To: David DiMaggio
Subject: Undeliverable: FW: Challengage - Work Team Simulation product for interviewing evaluations
Delivery has failed to these recipients or groups:
paul@challengage.herokuapp.com
The server has tried to deliver this message, without success, and has stopped trying. Please try sending this message again. If the problem continues, contact your helpdesk.
The following organization rejected your message: challengage.herokuapp.com.
Any ideas?
Thanks everyone.
This is almost certainly because you're using a CNAME for your email records.
Although most email servers will reflect the original domain when sending a message, others will replace it with the domain that's at the end of the CNAME.
This means that instead of sending to someone@challengage.com, they send to someone@challengage.herokuapp.com.
The mail server sees the request to send to someone@challengage.herokuapp.com and decides that it doesn't look after challengage.herokuapp.com, so from its perspective the message is rejected.
We used to see this issue with CloudMailin customers and started to recommend that they don't use CNAMEs where email is involved and instead add MX records directly to the apex domain.
With Heroku this poses a problem, though, as you don't have a single IP that you can use to reach their servers. We eventually ended up using Route 53 to host our domain, then adding an SSL endpoint (to get the load balancer details), and then adding that load balancer to Route 53's alias record so that it always gave the correct results automatically. Alternatively, you can set up some sort of static-IP-based redirect on your apex domain.
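If you want to confirm this is what's happening for your own domain, the sketch below checks whether the domain answers as a CNAME and where its MX records actually point; a CNAME answer for the mail domain is the problem case described above. It is written in Python and assumes the third-party dnspython package (any DNS lookup tool would show the same thing), and the domain is simply the one from the question.

# Check whether the mail domain is served via a CNAME and where its MX records
# point. Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

domain = "challengage.com"  # the domain from the question

try:
    for rr in dns.resolver.resolve(domain, "CNAME"):
        print(f"{domain} is a CNAME to {rr.target}  <- problem case for email")
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print(f"{domain} is not answered as a CNAME")

try:
    for rr in dns.resolver.resolve(domain, "MX"):
        print(f"MX {rr.preference} {rr.exchange}")
except dns.resolver.NoAnswer:
    print(f"no MX records found for {domain}")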

Strophe gets invalid SID

I'm building a web-based client with Strophe and jQuery, and I'm using Openfire as the server.
Almost everything is working: I can get the roster list and send and receive messages, but when I try to change my presence from available to xa or dnd or anything else, the server stops answering and I start to get 404 errors on my POSTs and invalid SID messages like this: "NetworkError: 404 Invalid SID. - http://myurl.com/".
I've been through other topics around and it seems to be a problem with my available credentials, but I don't have any clear evidence of what's wrong.
Thanks in advance for any help.
I'm also using Strophe, and for me everything works just fine! I don't know what your problem could be, since you didn't share your code with us, so my answer probably won't help you, but here's what I do to update my presence:
// Availability ('away', 'xa', 'dnd', 'chat') goes in <show/>; <status/> is just free-form text.
connection.send($pres().c('show').t('dnd').up().c('status').t('any status text you want'));

Solve "HTTP 510 Not Extended" Error

My server throws an HTTP 510 error every week. After restarting Apache, the problem is solved.
But this is more of a workaround than a solution to the problem.
Any ideas, how to solve this problem?
I got the same issue; one of the sites I host had the same story: many "510 Not Extended" errors in Firebug.
I checked the Apache and server configuration; there was a limit on the number of connections and on total outgoing traffic bandwidth.
When I set it to unlimited, everything works perfectly; when I enable the limiting, I get this error. I need to find the middle ground where it works well without setting all traffic to "unlimited".
The reason we set this limit was that one of the sites on the server was driving traffic up to the limit of the whole server: too many connections and too much outgoing traffic. We didn't find which site it was, but we added a module that lets you manage traffic and connection limits per site.
So in my opinion you have the same issue: one of your sites is raising your traffic and you start getting this error; as you said, restarting Apache clears it.
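A crude way to find that middle ground is to watch for the moment the 510s start and correlate it with whatever limit is currently set. The sketch below is one way to do that in Python using only the standard library; the URL and the 60-second interval are placeholders, not anything from the original setup.

# Poll the site once a minute and log any non-200 response so the onset of
# 510 errors can be matched against the current connection/bandwidth limit.
import time
import urllib.error
import urllib.request

URL = "http://example.com/"  # placeholder: substitute the affected site

while True:
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            code = resp.status
    except urllib.error.HTTPError as e:
        code = e.code  # 4xx/5xx responses, including 510, land here
    except urllib.error.URLError as e:
        code = f"network error: {e.reason}"
    if code != 200:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), code)
    time.sleep(60)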