I am unable to see the following headers in e-mails received on my Postfix e-mail receiving server:
Return-Path
Received: from
Similar to the headers shown in Gmail:
Received: from dev16 ([123.123.123.123])
by mx.google.com with SMTP id xxxxxxxxxxxxxxxx;
Tue, 27 Oct 2009 05:52:56 -0700 (PDT)
Return-To:
Please suggest what I should do to add these headers to the received e-mails.
Thanks in advance.
Ashish
Actually, the e-mail does contain these headers, but the mail client needs to be configured to show them.
I needed to parse the e-mail headers for anti-spoofing, and because I could not see these headers in the mail client I was using, I assumed they were not present.
Once I checked the actual mbox file, all my doubts were cleared.
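For anyone checking the same thing: the quickest way to confirm the headers really are there is to read the raw message straight from the mbox file. A minimal sketch using Python's standard mailbox module; the mbox path is an assumption, adjust it to wherever Postfix delivers mail on your system.

import mailbox

MBOX_PATH = "/var/mail/ashish"  # assumption: Postfix often delivers to /var/mail/<user>

for message in mailbox.mbox(MBOX_PATH):
    # get_all() returns every occurrence of a header, or None if it is absent.
    print("Subject:", message.get("Subject"))
    print("Return-Path:", message.get_all("Return-Path"))
    for hop in message.get_all("Received") or []:
        # Each relay the mail passed through adds its own Received header.
        print("Received:", " ".join(hop.split()))
    print("-" * 40)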
Also, for appending custom headers to e-mails received by Postfix, one can use the milter protocol (supported by Postfix) with the libmilter library provided by Sendmail.
For a Java implementation one can use 'jilter'.
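The answer above points at libmilter (C) and jilter (Java). Purely as an illustration of the same idea in Python, here is a rough sketch using the third-party pymilter package, which wraps libmilter. The socket address, milter name and header name are all assumptions, and the socket must match the smtpd_milters entry in Postfix's main.cf.

import Milter

class AddHeaderMilter(Milter.Base):
    def eom(self):
        # Header modifications are only allowed at end-of-message time.
        self.addheader("X-Scanned-By", "example-postfix-milter")  # header name is an example
        return Milter.CONTINUE

def main():
    Milter.factory = AddHeaderMilter
    # Advertise that this milter may add headers.
    Milter.set_flags(Milter.ADDHDRS)
    # Socket and name are assumptions; point Postfix's smtpd_milters at the same socket.
    Milter.runmilter("addheadermilter", "inet:9900@127.0.0.1", 300)

if __name__ == "__main__":
    main()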
We have set up SPF with Amazon SES.
When we run a test with https://www.dmarcanalyzer.com/ everything appears to be correct.
However, in our DMARC aggregate report we get a fail:
<record>
<row>
<source_ip>209.85.220.69</source_ip>
<count>1</count>
<policy_evaluated>
<disposition>none</disposition>
<dkim>pass</dkim>
<spf>fail</spf>
</policy_evaluated>
</row>
<identifiers>
<header_from>flowcx.com.au</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>flowcx.com.au</domain>
<result>pass</result>
<selector>6jm2ei7phvrqgxrufpn4j6rbk757tr6a</selector>
</dkim>
<dkim>
<domain>amazonses.com</domain>
<result>pass</result>
<selector>6gbrjpgwjskckoa6a5zn6fwqkn67xbtw</selector>
</dkim>
<spf>
<domain>mail.flowcx.com.au</domain>
<result>softfail</result>
</spf>
</auth_results>
</record>
From what I can see, this is due to the IP address being 209.85.220.69.
This is not in the SES range? I know we can add this to our SPF record, but why is Amazon SES sending from this address?
The short answer: Amazon is not sending from 209.85.220.69. Google is.
Background: if you look up the PTR record in DNS for 209.85.220.69, you'll find that it is actually a Google server forwarding an e-mail previously sent from Amazon SES.
In this case the Amazon SES DKIM signature survives (no signed header fields were changed), while SPF breaks on forwarding because Google's server is not in the SPF record.
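You can verify the PTR record yourself. A tiny sketch using only the Python standard library (equivalent to dig -x 209.85.220.69 +short):

import socket

# Reverse (PTR) lookup of the IP address reported in the DMARC record.
hostname, aliases, addresses = socket.gethostbyaddr("209.85.220.69")
print(hostname)  # expected to be a *.google.com mail host, per the explanation above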
I am writing a Scrapy spider for this website:
https://www.garageclothing.com/ca/
This website uses a jsessionid.
I want to obtain it in my code (the spider).
Can anybody guide me on how I can get the
jsessionid in my code?
Currently I just copy and paste the jsessionid from the browser's inspection tools after visiting the website in a browser.
This site uses JavaScript to set JSESSIONID. But if you disable JavaScript and try to load the page, you'll see that it requests the following URL:
https://www.dynamiteclothing.com/?postSessionRedirect=https%3A//www.garageclothing.com/ca&noRedirectJavaScript=true (1)
which redirects you to this URL:
https://www.garageclothing.com/ca;jsessionid=YOUR_SESSION_ID (2)
So you can do the following:
start requests with the URL (1)
in callback, extract session ID from URL (2) (which will be stored in response.url)
make the requests you want with the extracted session ID in cookies
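Putting those steps together, a spider skeleton might look roughly like this. The two URLs are the ones from the answer; the cookie name JSESSIONID and the follow-up request are assumptions to be adapted to whatever pages you actually want to scrape.

import re
import scrapy

class GarageSpider(scrapy.Spider):
    name = "garage"

    # URL (1): the non-JavaScript entry point that performs the redirect.
    start_urls = [
        "https://www.dynamiteclothing.com/?postSessionRedirect="
        "https%3A//www.garageclothing.com/ca&noRedirectJavaScript=true"
    ]

    def parse(self, response):
        # After the redirect, response.url looks like URL (2):
        # https://www.garageclothing.com/ca;jsessionid=YOUR_SESSION_ID
        match = re.search(r"jsessionid=([^?&#]+)", response.url)
        if not match:
            self.logger.error("No jsessionid in %s", response.url)
            return
        session_id = match.group(1)

        # Re-use the session id as a cookie on the requests you actually care about.
        yield scrapy.Request(
            "https://www.garageclothing.com/ca/",  # example follow-up page (assumption)
            cookies={"JSESSIONID": session_id},
            callback=self.parse_page,
        )

    def parse_page(self, response):
        # Replace with your real extraction logic.
        self.logger.info("Fetched %s with the session cookie set", response.url)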
Dim wc = New System.Net.WebClient
Dim apistring = wc.DownloadString("https://www.coinexchange.io/api/v1/getmarketsummaries")
The URL opens just fine in a browser. Somehow WebClient can't get it.
What's the problem?
Update: I used a modified WebClient with a user agent and cookies, and it works. I think the site checks for things like the user agent, but I do not know.
I still do not know exactly what the problem is and am still curious. If anyone wants to examine it, feel free.
Basically: what exactly does this site look for, and what software can we use to easily check what the problem is?
Some websites will not respond to a plain HTTP request that contains only the Host header. They require additional common headers that would typically be set when the request originates in a web browser.
Most commonly, when a WebClient request fails the server is looking for the User-Agent or the Accept header. The server may rely on these headers to determine how to format the response to the client. A typical example is an API that looks at the Accept header for text/html, application/xml, text/javascript or application/json to determine whether it should return HTML, XML, JavaScript or JSON.
Depending on the site, it might also look for the Referer, Cookie, Accept-Language and/or Accept-Encoding headers.
Try a combination of those values based on what your browser produces.
For this particular website, the header it is looking for is the User-Agent header. If it is not present, the site closes the connection and returns no response.
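The question used VB.NET's WebClient, but the behaviour is easy to reproduce with any HTTP client. As a rough illustration in Python, the following compares a request with urllib's default headers against one that sends a browser-like User-Agent; the endpoint is the one from the question and may no longer be online, and the User-Agent string is just an example.

import urllib.error
import urllib.request

URL = "https://www.coinexchange.io/api/v1/getmarketsummaries"  # from the question; may be offline now

def fetch(headers):
    request = urllib.request.Request(URL, headers=headers)
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status, response.read()[:80]
    except urllib.error.URLError as exc:
        return "failed", exc

# Default headers only (no browser-like User-Agent): sites like this may refuse to answer.
print(fetch({}))

# With a browser-like User-Agent the same request typically succeeds.
print(fetch({"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}))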
I would like to add an Expires header to my image files stored in S3. I have just found Cyberduck, which can easily add metadata. However, I would like Expires to be one month after the request (like I do with static files on my web server with Nginx). I don't know if this is possible. Otherwise, I can set Expires to a fixed date, e.g. 2018-06-20, but then when that date arrives I will need to update all my files with a new date in the future. I would like to set this header "dynamically" to one month later. Is it possible? Any other approach?
Set Cache-Control: public, max-age=2592000.
This will tell the client that the object can be cached for up to 30 days from the time of download.
Setting Expires is no longer considered best practice, and in any event S3 only supports a static value here.
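If you would rather script the change than click through Cyberduck, the header can be applied with an in-place copy. A sketch using boto3; the bucket name, key and content type are placeholders.

import boto3

s3 = boto3.client("s3")

bucket = "my-bucket"         # placeholder
key = "images/example.png"   # placeholder

# S3 object metadata is immutable, so it is changed by copying the object onto
# itself with MetadataDirective="REPLACE".
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    MetadataDirective="REPLACE",
    CacheControl="public, max-age=2592000",  # 30 days, counted from each download
    ContentType="image/png",                 # re-specify it, or it resets to the default on copy
)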
One of my sites has a lot of restricted pages which are only available to logged-in users; for everyone else it outputs a default "you have to be logged in ..." view.
The problem is that a lot of these pages are listed on Google with the not-logged-in view, and it looks pretty bad when 80% of the pages in the list have the same title and description/preview.
Would it be a good choice to send a 401 Unauthorized status along with my default not-logged-in view? And would this stop Google (and other engines) from indexing these pages?
Thanks!
(and if you have another (better?) solution I would love to hear about it!)
Use a robots.txt file to tell search engines not to index the not-logged-in pages.
http://www.robotstxt.org/
Ex.
User-agent: *
Disallow: /error/notloggedin.html
401 Unauthorized is the response code for requests that require user authentication, so this is exactly the response code you want and have to send. See the HTTP Status Code Definitions.
EDIT: Your previous suggestion, response code 403, is for requests where authentication makes no difference, e.g. disabled directory browsing.
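As a rough illustration of returning the "you have to be logged in" view together with a 401 status, here is a minimal sketch using Flask; the route, session check and page body are placeholders for however your site actually handles logins.

from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

LOGIN_REQUIRED_PAGE = "<h1>You have to be logged in to view this page.</h1>"

@app.route("/members/reports")
def restricted_page():
    if not session.get("user_id"):
        # Serve the usual "not logged in" view, but with a 401 status
        # instead of 200 so crawlers do not treat it as normal content.
        return LOGIN_REQUIRED_PAGE, 401
    return "The actual restricted content"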
Here are the status codes Googlebot understands, with Google's recommendations:
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40132
In your case an HTTP 403 would be the right one.