While using Rails ActionMailer with multipart emails, I created both:
approve_trade_email.html.erb
AND
approve_trade_email.text.erb
I receive a properly formatted HTML email in my Mail client (Mac OS X), but when I check the same email in my Gmail account, I get an empty body with a "noname" attachment that contains the multipart content.
Why do I get this in Gmail?
Thanks,
Joel
Here is the "noname" attachment as it appears in Gmail:
----==_mimepart_4eab3a61bb3a8_10583ff27b4e363c43018
Date: Sat, 29 Oct 2011 01:27:29 +0200
Mime-Version: 1.0
Content-Type: text/plain;
charset=UTF-8
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Content-ID: <4eab3a61bc857_10583ff27b4e363c43191#joel-maranhaos-macbook-pro.local.mail>
Do not to forget to make a donation on our Site: /home/index?path_only=false
----==_mimepart_4eab3a61bb3a8_10583ff27b4e363c43018
Date: Sat, 29 Oct 2011 01:27:29 +0200
Mime-Version: 1.0
Content-Type: text/html;
charset=UTF-8
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Content-ID: <4eab3a61bd5fa_10583ff27b4e363c432cb#joel-maranhaos-macbook-pro.local.mail>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<link href="/assets/powerplants.css" media="screen" rel="stylesheet" type="text/css" />
</head>
<body id="email">
<p><b>Do not to forget to make a donation on our Site.</b></p>
</body>
</html>
----==_mimepart_4eab3a61bb3a8_10583ff27b4e363c43018--
I had the same problem and found the solution:
It's a bug in the old ActionMailer API of Rails 3, which does not include the multipart boundary definition in the mail header.
See: https://github.com/rails/rails/pull/3090
You just have to use the new API (the mail method):
class UserMailer < ActionMailer::Base
  default :from => "notifications@example.com"

  def welcome_email(user)
    @user = user
    @url  = "http://example.com/login"
    mail(:to => user.email, :subject => "Welcome to My Awesome Site")
  end
end
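If you want to be explicit about which templates make up the multipart message (or control their order), the mail method also accepts a format block; here is a sketch based on the same example mailer as above:
def welcome_email(user)
  @user = user
  # Explicitly declare both parts; Rails renders welcome_email.text.erb and
  # welcome_email.html.erb and writes the multipart boundary into the header.
  mail(:to => user.email, :subject => "Welcome to My Awesome Site") do |format|
    format.text
    format.html
  end
end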
We are having trouble mounting an AWS S3 bucket (using s3fs v1.90) on an AWS EC2 instance which:
is running Ubuntu 18.04
requires IMDS v2 session tokens
is behind a proxy
The HTTP response code returned by the curl library is "417 - Expectation Failed" (more details below). I found some hints on the web that the 417 error might be related to our proxy config, see:
HTTP POST Returns Error: 417 "Expectation Failed."
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LuWSAU
This makes me believe that our NO_PROXY config is not being picked up by s3fs, but I'm really not sure...
Anyway, this is what we've tried in order to mount the bucket:
sudo s3fs SOME_BUCKET ./mnt-s3/ -o iam_role=SOME_ROLE,url=https://s3.eu-central-1.amazonaws.com,endpoint=eu-central-1,allow_other,uid=1000,gid=1000,mp_umask=007,use_cache=/tmp/s3foldercache,dbglevel=debug -f
This is the output:
2021-09-08T12:36:27.681Z [INF] curl.cpp:CheckIAMCredentialUpdate(1826): IAM Access Token refreshing...
2021-09-08T12:36:27.681Z [INF] curl.cpp:GetIAMCredentials(3068): [IAM role=SOME_ROLE]
2021-09-08T12:36:27.681Z [DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 31
2021-09-08T12:36:27.681Z [DBG] curl.cpp:RequestPerform(2509): connecting to URL http://169.254.169.254/latest/api/token
2021-09-08T12:36:27.682Z [ERR] curl.cpp:RequestPerform(2622): HTTP response code 417, returning EIO. Body Text: <?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>417 - Expectation Failed</title>
</head>
<body>
<h1>417 - Expectation Failed</h1>
</body>
</html>
2021-09-08T12:36:27.682Z [ERR] curl.cpp:GetIAMCredentials(3105): AWS IMDSv2 token retrieval failed: -5
2021-09-08T12:36:27.682Z [DBG] curl.cpp:RequestPerform(2509): connecting to URL http://169.254.169.254/latest/meta-data/iam/security-credentials/SOME_ROLE
2021-09-08T12:36:27.684Z [ERR] curl.cpp:RequestPerform(2622): HTTP response code 401, returning EIO. Body Text: <?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>401 - Unauthorized</title>
</head>
<body>
<h1>401 - Unauthorized</h1>
</body>
</html>
2021-09-08T12:36:27.684Z [ERR] curl.cpp:CheckIAMCredentialUpdate(1830): IAM Access Token refresh failed
2021-09-08T12:36:27.684Z [DBG] curl_handlerpool.cpp:ReturnHandler(103): Return handler to pool
2021-09-08T12:36:27.684Z [INF] curl_handlerpool.cpp:ReturnHandler(110): Pool full: destroy the oldest handler
2021-09-08T12:36:27.685Z [CRT] s3fs.cpp:s3fs_check_service(3520): Failed to check IAM role name(SOME_ROLE).
2021-09-08T12:36:27.685Z [ERR] s3fs.cpp:s3fs_exit_fuseloop(3372): Exiting FUSE event loop due to errors
When running curl directly, though, we do receive a valid IMDS v2 token:
$ curl -v -X PUT -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" http://169.254.169.254/latest/api/token
* Trying 169.254.169.254...
* TCP_NODELAY set
* Connected to 169.254.169.254 (169.254.169.254) port 80 (#0)
> PUT /latest/api/token HTTP/1.1
> Host: 169.254.169.254
> User-Agent: curl/7.58.0
> Accept: */*
> X-aws-ec2-metadata-token-ttl-seconds: 21600
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Length: 56
< Content-Type: text/plain
< Date: Wed, 08 Sep 2021 13:14:02 GMT
< X-Aws-Ec2-Metadata-Token-Ttl-Seconds: 21600
< Connection: close
< Server: EC2ws
<
* Closing connection 0
SOME_TOKEN
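For completeness, the same token can be used by hand to read the role credentials that s3fs is after. This is just a quick check run outside of s3fs; SOME_ROLE is the placeholder role name from the mount command:
TOKEN=$(curl -s -X PUT -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" \
  http://169.254.169.254/latest/api/token)
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/SOME_ROLE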
Finally, this is our proxy config (defined by environment variables):
$ echo $HTTP_PROXY
<SOME_HOST>:<SOME_PORT>
$ echo $NO_PROXY
169.254.169.254,*.eu-central-1.amazonaws.com
So, my best guess is that s3fs might be ignoring the NO_PROXY variable and trying to use our proxy when asking the local IP 169.254.169.254 for a new token.
A fix for this is being worked on in https://github.com/s3fs-fuse/s3fs-fuse/pull/1766
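Until that lands, one workaround worth trying (only a sketch based on the guess above, not something we have verified) is to export both the upper- and lowercase forms of the proxy variables and keep them across sudo, which drops the environment by default:
# Make sure both spellings exclude the metadata address; libcurl commonly
# reads the lowercase variables.
export NO_PROXY="169.254.169.254,*.eu-central-1.amazonaws.com"
export no_proxy="$NO_PROXY"

# -E asks sudo to preserve the environment (subject to your sudoers policy).
sudo -E s3fs SOME_BUCKET ./mnt-s3/ -o iam_role=SOME_ROLE,url=https://s3.eu-central-1.amazonaws.com,endpoint=eu-central-1,allow_other,uid=1000,gid=1000,mp_umask=007,use_cache=/tmp/s3foldercache,dbglevel=debug -f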
I have a problem: when my Cordova app tries to call a procedure in the adapter, the adapter doesn't return the expected response.
The response from the adapter looks like this:
errorMsg: "OK"
invocationContext: null
responseHeaders:
cache-control: "no-cache"
connection: "close"
content-type: "text/html; charset=UTF-8"
expires: "0"
pragma: "no-cache"
__proto__: Object
responseText: "<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><META HTTP-EQUIV="CONTENT-TYPE" CONTENT="TEXT/HTML; CHARSET=utf-8"/><title>Error</title></head><body><H2>Error</H2><table summary="Error" border="0" bgcolor="#FEEE7A" cellpadding="0" cellspacing="0" width="400"><tr><td><table summary="Error" border="0" cellpadding="3" cellspacing="1"><tr valign="top" bgcolor="#FBFFDF" align="left"><td><STRONG>Error</STRONG></td></tr><tr valign="top" bgcolor="#FFFFFF"><td>This page can't be displayed. Contact support for additional information.<br/>The incident ID is: N/A.</td></tr></table></td></tr></table></body></html>"
status: 200
I have no idea what the problem is. Please help me solve this.
Update:
This is the payload that I send to the adapter (REST):
{
"id": "xxx",
"userID": "ANDxxx",
"firstName": "ANDY",
"lastName": "JOHNSON",
.............
}
Notes:
- I am using MobileFirst version 8.0.0.00-20180504-092633
- I think it's intermittent, because only some of the payloads I send to the adapter get no response
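For reference, this is roughly how the call is made from the Cordova side. It is only a sketch: the adapter name and path are placeholders, and it simply logs the raw response on both success and failure so you can see whether the HTML body shows up in both cases:
// Sketch: assumes the standard MobileFirst 8.0 WLResourceRequest client API;
// '/adapters/MyAdapter/users' is a placeholder, not the real adapter path.
var payload = { id: "xxx", userID: "ANDxxx", firstName: "ANDY", lastName: "JOHNSON" };

var request = new WLResourceRequest('/adapters/MyAdapter/users', WLResourceRequest.POST);
request.setHeader('Content-Type', 'application/json');
request.send(JSON.stringify(payload)).then(
  function (response) {
    // A JSON body here means the adapter answered; an HTML body suggests
    // something in front of it (gateway/proxy) did.
    console.log('success', response.status, response.responseText);
  },
  function (failResponse) {
    console.log('failure', failResponse.errorMsg, failResponse.responseText);
  }
);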
I'm using the Soundcloud API wrapper for PHP to upload some tracks.
The app worked fine for years until a few days ago.
Now I'm getting an HTTP 100 code followed by an HTTP 502 error.
This is the request:
Array
(
[CURLOPT_HEADER] => 1
[CURLOPT_RETURNTRANSFER] => 1
[CURLOPT_USERAGENT] => PHP-SoundCloud/2.3.2
[CURLOPT_POST] => 1
[CURLOPT_POSTFIELDS] => Array
(
[track[title]] => Boris - Like Water (Original Mix) [Alleanza]
[track[description]] => <p>It is said that there is something in the New York air that makes sleep useless and these night-loving cuts from Boris sustain the fact of his own city. Bottom heavy might be the ideal definition for these showpieces but when we secure work of this calibre to the label throwing light upon words would be a violation of what is regarded as sacred to new age techno for the reason that interpretation to such elegance should be prevailed at the given time of function.</p>
[track[asset_data]] => #/home/cubelab/platform/labels/alleanza/ALLE043/BORIS_-_Like_Water_-_ALLEANZA.mp3
[track[sharing]] => private
[track[tag_list]] => "Boris" "Like Water" "Original Mix" "Techno" "Alleanza"
[track[genre]] => Techno
[track[track_type]] => original
[track[downloadable]] =>
[track[release]] => ALLE043
[track[release_day]] => 04
[track[release_month]] => 08
[track[release_year]] => 2014
[track[purchase_url]] => http://www.beatport.com/release/like-water-the-master/1340844
[track[artwork_data]] => #/home/cubelab/platform/labels/alleanza/ALLE043/ALLE043.jpg
[track[label_name]] => Alleanza
)
[CURLOPT_HTTPHEADER] => Array
(
[0] => Accept: application/json
[1] => Authorization: OAuth *-*****-*******-****************
)
)
This is the response from the server:
HTTP/1.1 100 Continue
HTTP/1.1 502 Bad Gateway
Content-Type: text/html
Date: Fri, 08 Aug 2014 09:51:03 GMT
Server: ECS (ams/4989)
Content-Length: 349
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>502 - Bad Gateway</title>
</head>
<body>
<h1>502 - Bad Gateway</h1>
</body>
</html>
It seems like a Soundcloud server error, and I don't know how to do any further debugging.
I had the same problem, in the same timeframe. I kept checking manually every day, and just now my script worked 100% fine again. It should work for you now too. Can you confirm?
So to answer your question: just give it time and everything will come back to normal operation. Heh, that's SoundCloud for you: it's moody, and sometimes scripts that have worked for years stop working, and always just for a few days.
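If you want to poke at it yourself in the meantime, a small debugging sketch (assuming you can reach the curl handle the wrapper builds; $ch below is that hypothetical handle) is to turn on verbose output and drop the Expect: 100-continue handshake, which sometimes interacts badly with intermediate proxies:
<?php
// Debugging sketch only; $ch is assumed to be the curl handle used for the upload.
curl_setopt($ch, CURLOPT_VERBOSE, true);      // dump the full request/response exchange to STDERR
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Accept: application/json',
    'Expect:',                                // suppress the 100-continue handshake
));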
I have read other posts here, and it looks like most of the time this boils down to fetch being asynchronous. I don't think that is my problem, because 1) I check the results in the success callback of fetch, and 2) I can console.log(model.toJSON()) in the JS console later and it is still not updated.
Note: I am getting a good JSON response from the API, and I can get at the data by putting parse in my model declaration, like so:
parse: function(data){
  alert(data.screenname);
}
Here is my code. Why is the model not being updated by the fetch call?
<html>
<head>
  <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
  <script src="vendor/components/underscore/underscore-min.js"></script>
  <script src="vendor/components/backbone/backbone-min.js"></script>
</head>
<body>
Hello
<script>
  $.ajaxSetup({
    beforeSend: function(xhr){
      xhr.setRequestHeader("Authorization", "Basic " + window.btoa("username" + ":" + "password"));
    }
  });

  var User = Backbone.Model.extend({
    parse: function(data){
      alert(data.screenname);
    },
    urlRoot: 'http://api.myapi.com/user'
  });

  var user = new User({id: '1'});
  user.fetch({
    success: function(collection, response, options){
      console.log(response);
      console.log(user.toJSON());
    }
  });
</script>
</body>
</html>
When I log response, it shows good JSON coming back, but user.toJSON() just shows the id as 1.
I can use parse in the model declaration to manually assign each value in the model from the response, but that seems like a dumb way to do it. I was under the impression that fetch() was supposed to populate the model with the result from the server.
Update:
Here is the response I get back from the server:
{"id":1,"email":"test@email.com","password":"pass","screenname":"myname","id_zipcode":1,"id_city":1,"date_created":"2014-12-25 12:12:12"}
Here are the response headers from my API:
HTTP/1.1 200 OK
Date: Tue, 04 Feb 2014 18:31:41 GMT
Server: Apache/2.2.24 (Unix) DAV/2 PHP/5.5.6 mod_ssl/2.2.24 OpenSSL/0.9.8y
X-Powered-By: PHP/5.5.6
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Authorization, Origin, X-Requested-With, Content-Type, Accept
Set-Cookie: PHPSESSID=pj1hm0c2ubgaerht3i5losga4; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Length: 139
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
Content-Type: application/json; charset=utf-8
You have overridden your parse() method to effectively do nothing. It should return all the attributes to set on your model; you have not returned anything, hence, nothing was being set on the model.
It should look like this.
var User = Backbone.Model.extend({
  parse: function(data){
    alert(data.screenname);
    return data; // all attributes in data will be set on the model
  },
  urlRoot: 'http://api.myapi.com/user'
});
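As a side note, when the server already returns a flat attribute hash like the one in the question, you can usually drop the custom parse() entirely; Backbone's default parse just returns the response, and fetch() sets it on the model. A minimal sketch:
var User = Backbone.Model.extend({
  urlRoot: 'http://api.myapi.com/user'
});

var user = new User({id: '1'});
user.fetch({
  success: function(model, response, options){
    // By the time success fires, the attributes from the JSON response
    // have been set on the model.
    console.log(model.get('screenname')); // "myname" with the response shown above
  }
});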
When I run "vmc info", I get an error: Error (JSON 404):
cloud#rest:~/cloudfoundry/.deployments/rest/log$ vmc info -t
>>>
REQUEST: get http://api.mwt.needforspeed.info/info
RESPONSE_HEADERS:
content_length : 239
date : Thu, 11 Oct 2012 07:32:17 GMT
content_type : text/html; charset=iso-8859-1
content_encoding : gzip
server : Apache/2.2.22 (Ubuntu)
vary : Accept-Encoding
RESPONSE: [404]
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /info was not found on this server.</p>
<hr>
<address>Apache/2.2.22 (Ubuntu) Server at api.mwt.needforspeed.info Port 80</address>
</body></html>
<<<
Error (JSON 404): <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /info was not found on this server.</p>
<hr>
<address>Apache/2.2.22 (Ubuntu) Server at api.mwt.needforspeed.info Port 80</address>
</body></html>
It looks like you have the Apache web server running on port 80.
If you install Cloud Foundry on a plain vanilla Ubuntu box, it installs nginx as the web server.
So first stop Apache, normally with:
sudo /etc/init.d/apache2 stop
and then check that your nginx (the "router" in Cloud Foundry terms) is running:
source ~/.cloudfoundry_deployment_local
~/cloudfoundry/vcap/dev_setup/bin/vcap_dev -n rest status router
Please note that you need the extra -n rest parameter, as it seems you didn't use the default deployment name dev_box.
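Once the router is up, you can quickly verify that nginx (not Apache) is the one answering on port 80 and that the /info endpoint now returns JSON instead of the Apache 404 page. A simple check, assuming curl and net-tools are installed on the box:
sudo netstat -tlnp | grep ':80 '     # should list nginx, not apache2
curl -H 'Accept: application/json' http://api.mwt.needforspeed.info/info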