Kinvey REST API upload

I'm trying to upload a file to Kinvey using the REST API.
I can successfully get the Google Cloud Storage URL that is returned after sending a 'POST' request to https://baas.kinvey.com/blob/:myAppId
The problem is that when I send a 'PUT' request to that Google Storage URL, I get this error:
XMLHttpRequest cannot load (my storage.google URL). Response to
preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin (my localhost) is therefore not allowed access.

This is a fairly standard CORS error (which you can read a lot more about over here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS ), which occurs when you make a cross-origin request. There are a lot of different ways you can approach this issue, but the easiest is probably to use one of our SDKs. If you take a look over at http://devcenter.kinvey.com/html5/downloads you will find an SDK that you can include in your projects, with guides and documentation for it in the top navigation.
File uploads using the HTML5 library are fairly trivial as well. Here's some sample code that I have whipped up:
HTML portion:
<input type="file" name="_file" id="_file" onchange="fileSelected();" />
<div id="fileinfo">
<div id="filename"></div>
<div id="filetype"></div>
</div>
JavaScript portion:
function fileSelected() {
  var oFile = document.getElementById('_file').files[0];
  var oReader = new FileReader();
  oReader.onload = function (e) {
    document.getElementById('fileinfo').style.display = 'block';
    document.getElementById('filename').innerHTML = 'Name: ' + oFile.name;
    document.getElementById('filetype').innerHTML = 'Type: ' + oFile.type;
  };
  oReader.readAsDataURL(oFile);
  fileUpload(oFile);
}

function fileUpload(file) {
  // Use the file's own name and type here; reading the info <div>s
  // with toString() would only give "[object HTMLDivElement]".
  var promise = Kinvey.File.upload(file, {
    filename: file.name,
    mimetype: file.type
  });
  promise.then(function () {
    alert("File Uploaded Successfully");
  }, function (error) {
    alert("File Upload Failure: " + error.description);
  });
}
This will be slightly different for each of Kinvey's JavaScript libraries, but should follow roughly the same outline: get the file, call Kinvey.File.upload asynchronously, and let the SDK do its magic. This should handle all the ugliness of CORS for you.
Thanks,

Related

Correct code to upload local file to S3 proxy of API Gateway

I created an API function to work with S3. I imported the swagger template. After deployment, I tested it from a Node.js project using the npm module aws-api-gateway-client.
It works well for getting bucket lists, getting bucket info, getting one item, putting a bucket, and putting a plain-text object; however, I am blocked on putting a binary file.
Firstly, I made sure the ACL allows all permissions on S3. Secondly, binary support is also added for:
image/gif
application/octet-stream
The code snippet is below. The behavior is:
1) After invokeApi, the callback function is never hit; after some time the Node.js project just stops responding, with no error message at all. The file (such as an image) is very small.
2) On just two occasions, the upload seemed to work, but the resulting file was bigger (around 2 MB bigger) than the original, so the file is corrupt.
Could you help me out? Thank you!
var fs = require('fs');
// 'apigClient' was created earlier via the aws-api-gateway-client factory.

var filepathname = './items/';
var filename = 'image1.png';

fs.stat(filepathname + filename, function (err, stats) {
  var fileSize = stats.size;
  fs.readFile(filepathname + filename, 'binary', function (err, data) {
    var len = data.length;
    console.log('file len ' + len);
    var pathTemplate = '/my-test-bucket/' + filename;
    var method = 'PUT';
    var params = {
      folder: '',
      item: ''
    };
    var additionalParams = {
      headers: {
        'Content-Type': 'application/octet-stream',
        //'Content-Type': 'image/gif',
        'Content-Length': len
      }
    };
    apigClient.invokeApi(params, pathTemplate, method, additionalParams, data)
      .then(function (result) {
        // never hit :(
        console.log(result);
      }).catch(function (result) {
        // never hit :(
        console.log(result);
      });
  });
});
We encountered the same problem. API Gateway is meant for limited payloads (10 MB as of now); the limits are listed here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Presigned URL to S3:
Create an S3 presigned URL from the Lambda (or the endpoint you are posting from):
How do I put object to amazon s3 using presigned url?
Now upload the image directly to S3 with that URL.
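For concreteness, here is a minimal sketch of generating such a URL inside a Lambda with the AWS SDK for JavaScript (v2); the bucket name, key, and content type are placeholders:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function (event, context, callback) {
  var params = {
    Bucket: 'my-test-bucket', // placeholder bucket
    Key: 'items/image1.png',  // placeholder key
    ContentType: 'image/png', // must match the client's PUT header
    Expires: 300              // URL stays valid for 5 minutes
  };
  // getSignedUrl produces a URL the client can PUT the raw bytes to.
  s3.getSignedUrl('putObject', params, function (err, url) {
    if (err) return callback(err);
    callback(null, { uploadUrl: url });
  });
};
The client then PUTs the file body to that URL with the same Content-Type, and the request goes straight to S3, bypassing the API Gateway size limit.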
Presigned POST:
Apart from posting the image, if you want to post additional properties along with it, you can send everything in multipart form format as well:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
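A rough sketch of that variant, with the same placeholder names; createPresignedPost returns a URL plus the form fields the client must include in the multipart body before the file field:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {
  Bucket: 'my-test-bucket',            // placeholder bucket
  Fields: { key: 'items/image1.png' }, // object key the client must use
  Expires: 300                         // policy valid for 5 minutes
};
s3.createPresignedPost(params, function (err, data) {
  if (err) return console.error(err);
  // data.url is the form action; data.fields go in as hidden form fields.
  console.log(data.url, data.fields);
});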
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or whatever endpoint needs to handle it.
Hope it helps.

Force reload cached image with same url after dynamic DOM change

I'm developing an Angular 2 application (a single-page application). My page is never "reloaded", but its content changes according to user interactions.
I'm having some cache problems, especially with images.
Context:
My page contains an editable image list:
<ul>
  <li><img src="myImageController/1">Edit</li>
  <li><img src="myImageController/2">Edit</li>
  <li><img src="myImageController/3">Edit</li>
</ul>
When I want to edit an image (Edit link), my DOM content is completely replaced to show another Angular component with a file-upload component.
The myImageController endpoint returns the Last-Modified header, plus Cache-Control: no-cache, must-revalidate.
After a refresh (hitting F5), my page makes a request for each img src, which is correct: if an image has been modified it is downloaded, and if not I just get a 304, which is fine.
Note: my images are stored in the database as blob fields.
Problem:
When my page content is dynamically reloaded by my single-page app and contains img tags, the browser does not issue a GET request but immediately takes the image from cache. I assume this is a browser optimization to avoid fetching the same resource multiple times on the same page.
Wrong solutions:
The first is to append something like ?time=(new Date()).getTime() to generate unique URLs and defeat the browser cache. But then the request never sends the If-Modified-Since header, so I would fully re-download the image every time.
The second is a "real" refresh: the first page load of an Angular app is quite slow, and I don't want to reload everything.
Tests:
To simplify the problem, I tried creating a static HTML page containing 3 images with the exact same link to my controller: /myImageController/1. With the Chrome developer tools, I can see that only one GET request is made. If I can manage to trigger multiple server calls in this case, it would probably solve my problem.
Thank you for your help.
The 5th version of the HTML specification describes this behavior: browsers may reuse images regardless of cache-related HTTP headers. Check this answer for more information. You probably need to use XMLHttpRequest and blobs, in which case you also need to consider the Same-origin policy.
You can use the following function to make sure the user agent performs every request:
var downloadImage = function (imgNode, url) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.responseType = "blob";
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
      if (xhr.status == 200 || xhr.status == 304) {
        var blobUrl = URL.createObjectURL(xhr.response);
        imgNode.src = blobUrl;
        // You can also use the imgNode.onload callback to release blob resources.
        setTimeout(function () {
          URL.revokeObjectURL(blobUrl);
        }, 1000);
      }
    }
  };
  xhr.send();
};
For more information, check the New Tricks in XMLHttpRequest2 article by Eric Bidelman, the Working with files in JavaScript, Part 4: Object URLs article by Nicholas C. Zakas, and the URL.createObjectURL() and Same-origin policy MDN pages.
You can use the random ID trick. This changes the URL so that the browser reloads the image. Note that this can be done in the query parameters to force a full cache break, or in the hash to allow the browser to revalidate the image against the cache (and avoid re-downloading it if unchanged):
function reloadWithCache(img: HTMLImageElement, url: string) {
  img.src = url.replace(/#.*/, "") + "#" + Math.random();
}

function reloadBypassCache(img: HTMLImageElement, url: string) {
  let sep = url.indexOf("?") == -1 ? "?" : "&";
  img.src = url + sep + "nocache=" + Math.random();
}
Note that if you are using reloadBypassCache regularly, you are better off fixing your cache headers: this function always hits your origin server, which raises running costs and makes CDNs ineffective.
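For reference, "fixing your cache headers" could look roughly like this on the image endpoint, assuming an Express-style handler (loadImageFromDatabase is a hypothetical helper standing in for the blob lookup):
var express = require('express');
var app = express();

app.get('/myImageController/:id', function (req, res) {
  var image = loadImageFromDatabase(req.params.id); // hypothetical helper
  var lastModified = image.lastModified.toUTCString();
  res.set({
    'Cache-Control': 'no-cache', // cache, but always revalidate
    'Last-Modified': lastModified
  });
  if (req.get('If-Modified-Since') === lastModified) {
    return res.status(304).end(); // unchanged: empty 304 response
  }
  res.type(image.mimeType).send(image.buffer);
});
With headers like these the browser can keep the image cached yet still pick up changes with a cheap conditional request.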

Google Apps Script: Salesforce API Call

Just finished breakfast and already hit a snag. I'm trying to call the Salesforce REST API from my Google Sheet. I've written a working script locally in Python, but something went wrong in converting it to JavaScript:
function authenticateSF() {
  var url = 'https://login.salesforce.com/services/oauth2/token';
  var options = {
    grant_type: 'password',
    client_id: 'XXXXXXXXXXX',
    client_secret: '111111111111',
    username: 'ITSME#smee.com',
    password: 'smee'
  };
  var results = UrlFetchApp.fetch(url, options);
}
Here is the error response:
Request failed for https://login.salesforce.com/services/oauth2/token
returned code 400. Truncated server response:
{"error_description":"grant type not
supported","error":"unsupported_grant_type"} (use muteHttpExceptions
option to examine full response) (line 12, file "Code")
Mind you, these exact parameters work fine in my local Python script (with the keys and values inside quotation marks).
Here are the relevant docs:
Google Script: Connecting to external API's
Salesforce: REST API guide
Thank you all!
Google's UrlFetchApp object defaults to a GET request. To authenticate, you have to explicitly set the method to "post" in the options and pass the parameters as a payload:
function authenticateSF() {
  var url = 'https://login.salesforce.com/services/oauth2/token';
  var payload = {
    'grant_type': 'password',
    'client_id': 'XXXXXXXXXXX',
    'client_secret': '111111111111',
    'username': 'ITSME#smee.com',
    'password': 'smee'
  };
  var options = {
    'method': 'post',
    'payload': payload
  };
  var results = UrlFetchApp.fetch(url, options);
}
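If it helps, once the call succeeds the token can be pulled out of the JSON body; a small sketch of lines you could add after the fetch call inside authenticateSF (the field names come from Salesforce's OAuth token response):
// Parse the OAuth response and make an example follow-up call.
var json = JSON.parse(results.getContentText());
var accessToken = json.access_token; // bearer token for later REST calls
var instanceUrl = json.instance_url; // base URL of your Salesforce instance

// Example follow-up call: list the available REST API versions.
var apiResponse = UrlFetchApp.fetch(instanceUrl + '/services/data/', {
  'headers': { 'Authorization': 'Bearer ' + accessToken }
});
Logger.log(apiResponse.getContentText());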

Using Node JS to proxy http and modify response

I'm trying to write a front end to an API service with Node JS.
I'd like to be able to have a user point their browser at my Node server and make a request. The Node script would modify the input to the request, call the API service, then modify the output and pass it back to the user.
I like the solution here (with Express JS and node-http-proxy), as it passes the cookies and headers directly from the user through my site to the API server:
proxy request in node.js / express
I see how to modify the input to the request, but I can't figure out how to modify the response. Any suggestions?
transformer-proxy could be useful here. I'm the author of this plugin, and I'm answering here because I found this page when looking into the same question and wasn't satisfied with harmon, as I don't want to manipulate HTML.
Maybe someone else is looking for this and finds it useful.
Harmon is designed to plug into node-http-proxy: https://github.com/No9/harmon
It uses trumpet, and so is stream-based, to work around any buffering problems.
It uses element and attribute selectors to enable manipulation of a response.
This can be used to modify the output response.
See here: https://github.com/nodejitsu/node-http-proxy/issues/382#issuecomment-14895039
http-proxy-interceptor is a middleware I wrote for this very purpose. It allows you to modify the http response using one or more transform streams. There are tons of stream-based packages available (like trumpet, which harmon uses), and by using streams you can avoid buffering the entire response.
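For a library-agnostic sketch of the stream idea, Node's built-in Transform stream is enough; the replacement rule below is just an example, and note that a naive string replace can miss matches that span chunk boundaries, which is exactly what trumpet-style parsers handle for you:
var stream = require('stream');

// Transform stream that rewrites a marker string as the body streams through.
var rewriter = new stream.Transform({
  transform: function (chunk, encoding, callback) {
    var text = chunk.toString('utf8');
    this.push(text.replace(/old-value/g, 'new-value')); // example rule
    callback();
  }
});
// Usage inside a proxy: proxyRes.pipe(rewriter).pipe(res);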
var httpProxy = require('http-proxy');
var modifyResponse = require('http-proxy-response-rewrite');

var proxy = httpProxy.createServer({
  target: 'target server IP here'
});
proxy.listen(8001);

proxy.on('error', function (err, req, res) {
  res.writeHead(500, {
    'Content-Type': 'text/plain'
  });
  res.end('Something went wrong. And we are reporting a custom error message.');
});

proxy.on('proxyRes', function (proxyRes, req, res) {
  modifyResponse(res, proxyRes.headers['content-encoding'], function (body) {
    if (body && (body.indexOf("<process-order-response>") != -1)) {
      var beforeTag = "</receipt-text>"; // tag after which data is added to the response
      var beforeTagBody = body.substring(0, body.indexOf(beforeTag) + beforeTag.length);
      var requiredXml = " <ga-loyalty-rewards>\n" +
        "<previousBalance>0</previousBalance>\n" +
        "<availableBalance>0</availableBalance>\n" +
        "<accuruedAmount>0</accuruedAmount>\n" +
        "<redeemedAmount>0</redeemedAmount>\n" +
        "</ga-loyalty-rewards>";
      var afterTagBody = body.substring(body.indexOf(beforeTag) + beforeTag.length);
      var parts = [beforeTagBody, requiredXml, afterTagBody];
      console.log(parts.join(""));
      return parts.join("");
    }
    return body;
  });
});

Using Google's ClientLogin Interface via XMLHttpRequest in Javascript

I am trying to learn the ClientLogin Interface detailed in the Account Authentication APIs on the Google Code website.
I am using Firefox 3.5pre (Shiretoko) and the XMLHttpRequest object in JavaScript to follow the process. Here's a stripped-down version of what I have:
<html>
<head>
  <title>Test</title>
  <script type="text/javascript">
  //<![CDATA[
  function update() {
    var auth_params = "accountType=HOSTED_OR_GOOGLE&Email=val"
      + "&passwd=val&service=cl&source=MMA-Learning";
    var request = new XMLHttpRequest();
    request.open('POST', 'https://www.google.com/accounts/ClientLogin', true);
    request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    request.setRequestHeader("Content-Length", auth_params.length);
    request.setRequestHeader("Connection", "close");
    request.onreadystatechange = function () {
      if (this.readyState == 4 && this.status == 200) {
        alert("Request done");
      }
    };
    try {
      request.send(auth_params);
    } catch (e) {
      alert("Send Exception:\n" + e);
    }
  }
  //]]>
  </script>
</head>
<body>
  <a href="#" onclick="update(); return false;">Authenticate</a>
</body>
</html>
When I click on the Authenticate link, all I get back is a Bad Request response. Examining the request headers, I don't see Content-Type set to application/x-www-form-urlencoded.
I am using Firebug 1.5X to examine the traffic.
For now, all I want to do is generate the request shown in the Sample Request section and get back a response like the one in the Sample Responses section. If I get there, I want to fetch some account-specific data, like unread Google Reader feeds.
I suspect that you've been bitten by JavaScript's "same origin" policy. It prevents JavaScript, including XMLHttpRequest, from accessing one domain from another. More information is available from Mozilla.
There are hacks to get around this, but I have no idea if they'll work with Google's API.
The 'p' in 'passwd' is a small 'p' instead of a capital 'P'.
You probably figured that out, though. When you post a question and later find the answer, it is always polite to post the answer as well. This helps the people who will look at your post for information in the future.
That 'p' took me two hours to find, because I presumed that the code Google gave was copied correctly and there were no case mistakes.
There's no point in the Internet being full of questions with no answers.