Amazon SES SPF Record Fails [closed] - amazon-ses

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
We have set up SPF with Amazon SES.
When we run a test with https://www.dmarcanalyzer.com/ the results come back clean, so everything appears to be correct.
However, in our DMARC aggregate report we get an SPF fail:
<record>
<row>
<source_ip>209.85.220.69</source_ip>
<count>1</count>
<policy_evaluated>
<disposition>none</disposition>
<dkim>pass</dkim>
<spf>fail</spf>
</policy_evaluated>
</row>
<identifiers>
<header_from>flowcx.com.au</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>flowcx.com.au</domain>
<result>pass</result>
<selector>6jm2ei7phvrqgxrufpn4j6rbk757tr6a</selector>
</dkim>
<dkim>
<domain>amazonses.com</domain>
<result>pass</result>
<selector>6gbrjpgwjskckoa6a5zn6fwqkn67xbtw</selector>
</dkim>
<spf>
<domain>mail.flowcx.com.au</domain>
<result>softfail</result>
</spf>
</auth_results>
</record>
From what I can see this is due to the IP address being 209.85.220.69, which is not in the SES range. I know we can add this to our SPF record, but why is Amazon SES sending from this address?

The short answer: Amazon is not sending from 209.85.220.69. Google is.
Background: If you look up the PTR record in DNS for 209.85.220.69 you'll find that it is actually a Google server forwarding an email previously sent from Amazon SES.
In this case the Amazon SES DKIM signature survives (no signed header fields were changed), while SPF breaks on forwarding because Google's server is not listed in your SPF record.
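You can confirm this yourself with a reverse DNS (PTR) lookup on the reporting IP. A minimal sketch using only the Python standard library (the hostname in the comment is what you would typically see for a Google forwarder, not a guaranteed value):

import socket

ip = "209.85.220.69"
try:
    hostname, _, _ = socket.gethostbyaddr(ip)  # PTR lookup
    print(hostname)  # typically something under google.com, not amazonses.com
except socket.herror:
    print("no PTR record for", ip)

Because the receiving server evaluates SPF against the last connecting IP, any forwarder that is not listed in your SPF record will produce a fail (a softfail here, which suggests your record ends in ~all) even though the original SES send was authenticated.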

Related

I have disallowed everything for 10 days [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 3 years ago.
Due to an update error, I put into production a robots.txt file that was intended for a test server. As a result, production ended up with this robots.txt:
User-Agent: *
Disallow: /
That was 10 days ago, and I now have more than 7000 URLs flagged with the error "Submitted URL blocked by robots.txt" or the warning "Indexed, though blocked by robots.txt".
Yesterday, of course, I corrected the robots.txt file.
What can I do to speed up the correction by Google or any other search engine?
You could use the robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-tool
Once the robots.txt test has passed, click the "Submit" button and a popup window should appear; then click the option #3 "Submit" button again:
Ask Google to update
Submit a request to let Google know your robots.txt file has been updated.
Other than that, I think you'll have to wait for Googlebot to crawl the site again.
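If you want to check from your side that the corrected file is what crawlers now see, a minimal sketch with Python's standard urllib.robotparser (the URLs are placeholders for your own site):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # your live production file
rp.read()
# Should print True once the permissive robots.txt is the one being served
print(rp.can_fetch("Googlebot", "https://www.example.com/some/page"))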
Best of luck :).

SEO - What to do when content is taken offline [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
I'm going to have a site where content remains on the site for a period of 15 days and then gets removed.
I don't know too much about SEO, but my concern is about the SEO implications of having "content" indexed by the search engines that then one day suddenly disappears and leaves a 404.
What is the best thing I can do to cope with content that comes and goes in the most SEO friendly way possible?
The best approach is to respond with HTTP status code 410 (Gone).
From the HTTP/1.1 specification:
The requested resource is no longer available at the server and no
forwarding address is known. This condition is expected to be
considered permanent. Clients with link editing capabilities SHOULD
delete references to the Request-URI after user approval. If the
server does not know, or has no facility to determine, whether or not
the condition is permanent, the status code 404 (Not Found) SHOULD be
used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web
maintenance by notifying the recipient that the resource is
intentionally unavailable and that the server owners desire that
remote links to that resource be removed. Such an event is common for
limited-time, promotional services and for resources belonging to
individuals no longer working at the server's site. It is not
necessary to mark all permanently unavailable resources as "gone" or
to keep the mark for any length of time -- that is left to the
discretion of the server owner.
more about status codes here
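As an illustration, a minimal sketch of serving 410 for expired content, here shown with Flask (the route and the LIVE_CONTENT dict are placeholder assumptions standing in for whatever tracks your 15-day window):

from flask import Flask, abort

app = Flask(__name__)

# Stand-in for a real lookup of content still inside its 15-day window.
LIVE_CONTENT = {"summer-promo": "Promo details ..."}

@app.route("/content/<slug>")
def content(slug):
    body = LIVE_CONTENT.get(slug)
    if body is None:
        abort(410)  # Gone: the page existed but was removed on purpose
    return body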
To keep the traffic, one option is not to delete the old content but to archive it, so it remains accessible at its old URL but is only linked from deeper points in an archive section of your site.
If you really want to delete it, then it is totally OK to return a 404 or 410. Spiders understand that the resource is not available anymore.
Most search engines use something called a robots.txt file. You can specify which URLs and paths you want the search engine to ignore. So if all of your content is at www.domain.com/content/* then you can have Google ignore that whole branch of your site.
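For example, the file at www.domain.com/robots.txt would then contain something like:
User-agent: *
Disallow: /content/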

Would 401 Error be a good choice? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 9 years ago.
One of my sites has a lot of restricted pages which are only available to logged-in users; for everyone else they output a default "you have to be logged in ..." view.
The problem is that a lot of these pages are listed on Google with the not-logged-in view, and it looks pretty bad when 80% of the pages in the results have the same title and description/preview.
Would it be a good choice to send a 401 Unauthorized header along with my default not-logged-in view? And would this stop Google (and other engines) from indexing these pages?
Thanks!
(and if you have another (better?) solution I would love to hear about it!)
Use a robots.txt file to tell search engines not to index the not-logged-in pages.
http://www.robotstxt.org/
Ex.
User-agent: *
Disallow: /error/notloggedin.html
401 Unauthorized is the response code for requests that require user authentication, so this is exactly the response code you want and have to send. See Status Code Definitions.
EDIT: Your previous suggestion, response code 403, is for requests where authentication makes no difference, e.g. disabled directory browsing.
Here are the status codes Googlebot understands, together with Google's recommendations for handling them:
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40132
In your case an HTTP 403 would be the right one.
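Either way it is a small change where you render the not-logged-in view. A minimal sketch with Flask (the route, session key and wording are placeholders; use 401 instead of 403 if you follow the earlier answer):

from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for sessions; use a real secret

@app.route("/members/reports")
def reports():
    if not session.get("user_id"):
        # Same "you have to be logged in" body, but with a non-200 status
        # so search engines do not treat it as ordinary content.
        return "You have to be logged in to view this page.", 403
    return "the restricted report data"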

Finding if domain name is already registered? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
Is there any API to look up whether a given domain name is already registered by somebody, and to get alternative (auto-suggested) available domain names?
EDIT: I think the thing I need is called domain search, not lookup :)
I've written a whois for PHP, Perl, VB and C#, all using a trick that queries '{domain}.whois-servers.net'.
It works well for all but the obscure domains that require registration (and usually fees), such as Tonga's .tv or .pro domains.
PHP Whois (version 3.x but should still work)
C# Whois
COM Whois (DLL only, I lost the source)
This page shows it in action. You can do some simple string matching to check if a domain is registered or not based on the result you get back.
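The trick itself fits in a few lines. A rough Python equivalent of the whois-servers.net lookup (keyed on the TLD, which is how that service is usually addressed; the "No match" test at the end is only an example, since the exact wording of the reply varies by registry):

import socket

def whois(domain):
    tld = domain.rsplit(".", 1)[-1]
    host = tld + ".whois-servers.net"  # e.g. com.whois-servers.net
    with socket.create_connection((host, 43), timeout=10) as s:  # 43 = whois port
        s.sendall((domain + "\r\n").encode())
        reply = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply.decode(errors="replace")

print("No match" in whois("example.com"))  # crude "is it unregistered?" check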
It's called whois... and for auto-suggestion, there is the domain service at 1&1.
http://www.mashape.com/apis/Name+Toolkit#Get-domain-suggestions - Advanced domain name suggestions and domain checking API.
I think you can use http://whois.domaintools.com/ to get the information. Send a web request such as http://whois.domaintools.com/example.com and it will return the information for example.com, but you need to parse the response to extract the required information.
http://whois.bw.org/ is very good. It does suggestions and such.
I want an API that I can call from my code or pages. The XML API on www.domaintools.com seems like the thing that I need. I'm looking into it.
Thanks for your support. I've found a service by domaintools.com called whoisapi. You can query available domain names and other information by sending an XML request to their servers.

Adding headers in postfix incoming mails [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 9 years ago.
I am unable to see the following headers in e-mails received on my Postfix e-mail receiving server:
Return-Path
Received: from
Similar to the headers shown on Gmail:
Received: from dev16 ([123.123.123.123])
by mx.google.com with SMTP id xxxxxxxxxxxxxxxx;
Tue, 27 Oct 2009 05:52:56 -0700 (PDT)
Return-To:
Please suggest what I should do to add these headers to the received e-mails.
Thanks in advance.
Ashish
Actually the e-mail does contain these headers, but the mail client needs to be configured to show them.
Actually, I needed to parse the e-mail headers for anti-spoofing, and I was unable to see these headers with the mail client I was using, so I thought they were not present.
But once I checked the actual mbox file, that cleared all my doubts.
Also, for appending custom headers to e-mail received by Postfix, one can use the milter protocol implemented in Postfix, via the 'libmilter' library provided by Sendmail.
For a Java implementation, one can use 'jilter'.
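If you need to do the same kind of header check, a minimal sketch that reads the raw headers from the mbox file with Python's standard mailbox module (the spool path is an example; point it at wherever Postfix delivers mail on your system):

import mailbox

for msg in mailbox.mbox("/var/mail/ashish"):
    print("Return-Path:", msg.get("Return-Path"))
    for received in msg.get_all("Received", []):
        print("Received:", received)
    print("-" * 40)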