I'm trying to start a Selenium test with a POST request to my application.
Instead of a simple open(/startpoint)
I would like to do something like open(/startpoint, stuff=foo,stuff2=bar)
Is there any way to do that?
I'm asking this because the original page that posts to this start point depends on external providers which are often offline in the development environment, so the test would often fail too early (and those providers are not the subject of the test).
I guess sending data as GET would work too. I would just prefer using a POST method.
If you are using the Python Selenium bindings, there is nowadays an extension to Selenium called selenium-requests:
Extends Selenium WebDriver classes to include the request function
from the Requests library, while doing all the needed cookie and
request headers handling.
Example:
from seleniumrequests import Firefox
webdriver = Firefox()
response = webdriver.request('POST', 'url here', data={"param1": "value1"})
print(response)
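Since selenium-requests performs the request with the Requests library, the returned object can be inspected the usual way (a brief note; the exact attributes depend on your requests version):

print(response.status_code)  # HTTP status of the POST
print(response.text)         # response body as text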
Short answer: No.
But you might be able to do it with a bit of hackery. If you open up a test page (with GET) and then evaluate some JavaScript on that page, you should be able to replicate a POST request. See JavaScript post request like a form submit to see how you can replicate a POST request in JavaScript.
Hope this helps.
One very practical way to do this is to create a dummy start page for your tests that is simply a form with method POST, a single "start test" button, and a number of <input type="hidden" ...> elements carrying the appropriate POST data.
For example you might create a SeleniumTestStart.html page with these contents:
<body>
  <form action="/index.php" method="post">
    <input id="starttestbutton" type="submit" value="starttest"/>
    <input type="hidden" name="stageid" value="stage-you-need-your-test-to-start-at"/>
  </form>
</body>
In this example, index.php is where your normal web app is located.
The Selenium code at the start of your tests would then include:
open /SeleniumTestStart.html
clickAndWait starttestbutton
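If you are driving the browser with WebDriver rather than Selenese, the equivalent start of a test might look like this (a minimal sketch, assuming the Python bindings and that SeleniumTestStart.html is served from the same host as the app; the localhost URL is just an example):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
# open the dummy start page instead of the real entry point
driver.get("http://localhost/SeleniumTestStart.html")
# clicking the button submits the hidden form, which POSTs the prepared data to /index.php
driver.find_element(By.ID, "starttestbutton").click()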
This is very similar to other mock and stub techniques used in automated testing. You are just mocking the entry point to the web app.
Obviously there are some limitations to this approach:
data cannot be too large (e.g. image data)
security might be an issue so you need to make sure that these test files don't end up on your production server
you may need to make your entry points with something like php instead of html if you need to set cookies before the Selenium test gets going
some web apps check the referrer to make sure someone isn't hacking the app - in this case this approach probably won't work - you may be able to loosen this checking in a dev environment so it allows referrers from trusted hosts (not self, but the actual test host)
Please consider reading my article about the Qualities of an Ideal Test
I used driver.execute_script() to inject an HTML form into the page, and then submit it. It looks like this:
def post(path, params):
    driver.execute_script("""
        function post(path, params, method='post') {
            const form = document.createElement('form');
            form.method = method;
            form.action = path;

            for (const key in params) {
                if (params.hasOwnProperty(key)) {
                    const hiddenField = document.createElement('input');
                    hiddenField.type = 'hidden';
                    hiddenField.name = key;
                    hiddenField.value = params[key];
                    form.appendChild(hiddenField);
                }
            }

            document.body.appendChild(form);
            form.submit();
        }

        post(arguments[1], arguments[0]);
    """, params, path)
# example
post(path='/submit', params={'name': 'joe'})
If you'd like, you can just add this function to \selenium\webdriver\chrome\webdriver.py and then use it in your code with driver.post()
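Rather than editing the installed library file, a less invasive option is to subclass the driver and attach the helper there (a rough sketch under that assumption; the body is the same form-injection JavaScript shown above):

from selenium.webdriver import Chrome

class PostingChrome(Chrome):
    def post(self, path, params):
        # inject a hidden form with the POST data and submit it
        self.execute_script("""
            function post(path, params, method='post') {
                const form = document.createElement('form');
                form.method = method;
                form.action = path;
                for (const key in params) {
                    if (params.hasOwnProperty(key)) {
                        const field = document.createElement('input');
                        field.type = 'hidden';
                        field.name = key;
                        field.value = params[key];
                        form.appendChild(field);
                    }
                }
                document.body.appendChild(form);
                form.submit();
            }
            post(arguments[1], arguments[0]);
        """, params, path)

driver = PostingChrome()
driver.post('/submit', {'name': 'joe'})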
Selenium IDE allows you to run JavaScript using the storeEval command. The solution mentioned above works fine if you have a test page (HTML, not XML) and you only need to perform a POST request.
If you need to make a POST/PUT/DELETE or any other request, then you will need another approach:
XMLHttpRequest!
The example listed below has been tested - all methods (POST/PUT/DELETE) work just fine.
<!--variables-->
<tr>
<td>store</td>
<td>/your/target/script.php</td>
<td>targetUrl</td>
</tr>
<tr>
<td>store</td>
<td>user=user1&password</td>
<td>requestParams</td>
</tr>
<tr>
<td>store</td>
<td>POST</td>
<td>requestMethod</td>
</tr>
<!--scenario-->
<tr>
<td>storeEval</td>
<td>window.location.host</td>
<td>host</td>
</tr>
<tr>
<td>store</td>
<td>http://${host}</td>
<td>baseUrl</td>
</tr>
<tr>
<td>store</td>
<td>${baseUrl}${targetUrl}</td>
<td>absoluteUrl</td>
</tr>
<tr>
<td>store</td>
<td>${absoluteUrl}?${requestParams}</td>
<td>requestUrl</td>
</tr>
<tr>
<td>storeEval</td>
<td>var method=storedVars['requestMethod']; var url = storedVars['requestUrl']; loadXMLDoc(url, method); function loadXMLDoc(url, method) { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4) { if(xmlhttp.status==200) { alert("Results = " + xmlhttp.responseText);} else { alert("Error!"+ xmlhttp.responseText); }}}; xmlhttp.open(method,url,true); xmlhttp.send(); }</td>
<td></td>
</tr>
Clarification:
${requestParams} - parameters you would like to post (e.g. param1=value1&param2=value3&param1=value3)
you may specify as many parameters as you need
${targetUrl} - path to your script (if you have a page located at http://domain.com/application/update.php then targetUrl should be equal to /application/update.php)
${requestMethod} - method type (in this particular case it should be "POST" but can be "PUT" or "DELETE" or any other)
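If you are using WebDriver instead of Selenium IDE, the same XMLHttpRequest trick can be done through execute_async_script, which waits for a callback before returning (a minimal sketch, assuming the Python bindings; domain.com and /your/target/script.php are the placeholder values from the clarification above):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://domain.com/")  # any page on the target host, so the request is same-origin
driver.set_script_timeout(10)     # give the async script time to finish

status = driver.execute_async_script("""
    var method = arguments[0], url = arguments[1], body = arguments[2];
    var done = arguments[arguments.length - 1];  // WebDriver's async callback
    var xhr = new XMLHttpRequest();
    xhr.open(method, url, true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) { done(xhr.status); }
    };
    xhr.send(body);
""", "POST", "/your/target/script.php", "user=user1&password=secret")

print(status)  # HTTP status code returned by the endpoint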
Selenium doesn't currently offer an API for this, but there are several ways to initiate an HTTP request in your test. It just depends on what language you are writing in.
In Java for example, it might look like this:
// setup the request
String request = "startpoint?stuff1=foo&stuff2=bar";
URL url = new URL(request);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
// get a response - maybe "success" or "true", XML or JSON etc.
InputStream inputStream = connection.getInputStream();
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
String line;
StringBuffer response = new StringBuffer();
while ((line = bufferedReader.readLine()) != null) {
    response.append(line);
    response.append('\r');
}
bufferedReader.close();
// continue with test
if (response.toString().equals("expected response")) {
    // do selenium
}
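For completeness, the same pre-test request in Python is only a few lines with the requests library (a hedged sketch; the localhost URL and the expected response body are placeholders taken from the example above, not a real endpoint):

import requests

# send the POST that would normally come from the external provider's page
response = requests.post("http://localhost/startpoint",
                         data={"stuff1": "foo", "stuff2": "bar"})

# continue with the Selenium part of the test only if the app accepted it
if response.text == "expected response":
    pass  # do selenium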
Well, I agree with @Mishkin Berteig - Agile Coach's answer. Using a form is the quick way to use the POST features.
Anyway, I see some mention of JavaScript, but no code. I have this for my own needs; it includes jQuery for easy POSTs, plus other things.
Basically, using driver.execute_script() you can send any JavaScript, including Ajax queries.
#!/usr/bin/env python
# -*- coding: utf8 -*-
# A proxy is used to inspect the data involved in the request without much code.
# It is a basic HTTP proxy written in Python; you can find it here: http://voorloopnul.com/blog/a-python-proxy-in-less-than-100-lines-of-code/
import selenium
from selenium import webdriver
import requests
from selenium.webdriver.common.proxy import Proxy, ProxyType
jquery = open("jquery.min.js", "r").read()
#print jquery
proxy = Proxy()
proxy.proxy_type = ProxyType.MANUAL
proxy.http_proxy = "127.0.0.1:3128"
proxy.socks_proxy = "127.0.0.1:3128"
proxy.ssl_proxy = "127.0.0.1:3128"
capabilities = webdriver.DesiredCapabilities.PHANTOMJS
proxy.add_to_capabilities(capabilities)
driver = webdriver.PhantomJS(desired_capabilities=capabilities)
driver.get("http://httpbin.org")
driver.execute_script(jquery) # ensure we have jquery
ajax_query = '''
$.post( "post", {
"a" : "%s",
"b" : "%s"
});
''' % (1,2)
ajax_query = ajax_query.replace(" ", "").replace("\n", "")
print ajax_query
result = driver.execute_script("return " + ajax_query)
#print result
#print driver.page_source
driver.close()
# this returns the following from the proxy, and it is OK
'''
POST http://httpbin.org/post HTTP/1.1
Accept: */*
Referer: http://httpbin.org/
Origin: http://httpbin.org
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.0.0 Safari/538.1
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Content-Length: 7
Cookie: _ga=GAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx; _gat=1
Connection: Keep-Alive
Accept-Encoding: gzip, deflate
Accept-Language: es-ES,en,*
Host: httpbin.org
None
a=1&b=2 <<---- that is OK, it is the data contained in the POST
None
'''
import os
from selenium import webdriver

driver = webdriver.Firefox()
driver.implicitly_wait(12)
driver.set_page_load_timeout(10)

def _post_selenium(url: str, data: dict):
    # build a throwaway HTML form containing the POST data, write it to disk, load it and submit it
    input_template = '{k} <input type="text" name="{k}" id="{k}" value="{v}"><BR>\n'
    inputs = ""
    if data:
        for k, v in data.items():
            inputs += input_template.format(k=k, v=v)
    html = f'<html><body>\n<form action="{url}" method="post" id="formid">\n{inputs}<input type="submit" id="inputbox">\n</form></body></html>'
    html_file = os.path.join(os.getcwd(), 'temp.html')
    with open(html_file, "w") as text_file:
        text_file.write(html)
    driver.get(f"file://{html_file}")
    driver.find_element_by_id('inputbox').click()

_post_selenium("post.to.my.site.url", {"field1": "val1"})
driver.close()
Related
I want to use Python to fill this form.
I tried using Mechanize but this is a Microsoft Form which uses JavaScript and has no form tag and no GET/POST URL. Maybe BeautifulSoup/Selenium can do this, but I do not have any experience in scraping JS forms. Can anyone help me out and suggest how to go about this?
Here's what I've tried, Mechanize is unable to recognize any form on the page:
import mechanize

def main():
    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.set_handle_refresh(False)
    br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
    response = br.open("https://forms.office.com/Pages/ResponsePage.aspx?id=8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u")
    for form in br.forms():
        print("Form name:", form.name)  # prints nothing
        print(form)                     # prints nothing

if __name__ == '__main__':
    main()
Selenium works fine.
You'll need to install the components:
Install Selenium: pip install selenium
You need to ensure you download the correct chromedriver (or other driver) for your browser and OS versions, and add it to your PATH.
Then this runs:
from selenium import webdriver
driver = webdriver.Chrome()
url = "https://forms.office.com/Pages/ResponsePage.aspx?id=8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u"
driver.get(url)
name = driver.find_element_by_xpath("//div[@class='question-title-box'][.//span[text()='NAME']]/following-sibling::*//input")
name.send_keys("hello, World")
setionSelection = "F"
section = driver.find_element_by_xpath("//div[@class='question-title-box'][.//span[text()='Section']]/following-sibling::*//input[@value='" + setionSelection + "']")
section.click()
date = driver.find_element_by_xpath("//input[contains(@placeholder, 'Please input date')]")
date.send_keys("01/12/2020")
submit = driver.find_element_by_xpath("//div[text()='Submit']")
submit.click()
The xpaths are a little long, but they're based on the question text so they're potentially stable.
For an alternative approach - when you say there is no POST URL, did you check devtools? That exposes the destination of the form:
Request URL: https://forms.office.com/formapi/api/aebbf9f0-23da-49e3-98bf-32171abbc9bc/users/f70e502c-96b2-4239-aca3-764dea371071/forms('8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u')/responses
Request Method: POST
it also exposes the payload... This is the first submit:
{startDate: "2020-08-17T10:40:18.504Z", submitDate: "2020-08-17T10:40:18.507Z",…}
answers: "[{"questionId":"r8f09d63e6f6f42feb2f8f4f8ed3f9389","answer1":"Hello, World"},{"questionId":"r28fe12073dfa47399f8ce95ae679dccf","answer1":"G"},{"questionId":"r8f9e9fedcc2e410c80bfa1e0e3ef9750","answer1":"2020-08-28"}]"
startDate: "2020-08-17T10:40:18.504Z"
submitDate: "2020-08-17T10:40:18.507Z"
Those POST URL UUIDs/GUIDs and question IDs seem to be static for this form. Every time I run the form they don't change. This is the second run:
{startDate: "2020-08-17T10:43:48.544Z", submitDate: "2020-08-17T10:43:48.546Z",…}
answers: "[{"questionId":"r8f09d63e6f6f42feb2f8f4f8ed3f9389","answer1":"test me"},{"questionId":"r28fe12073dfa47399f8ce95ae679dccf","answer1":"G"},{"questionId":"r8f9e9fedcc2e410c80bfa1e0e3ef9750","answer1":"2020-08-12"}]"
startDate: "2020-08-17T10:43:48.544Z"
submitDate: "2020-08-17T10:43:48.546Z"
Once you've captured this, you'll probably be able to do it through the API without a GUI.
... Just to make sure, I tried it and I get success...
import requests
url = "https://forms.office.com/formapi/api/aebbf9f0-23da-49e3-98bf-32171abbc9bc/users/f70e502c-96b2-4239-aca3-764dea371071/forms('8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u')/responses"
myobj = {"startDate":"2020-08-17T10:48:40.118Z","submitDate":"2020-08-17T10:48:40.121Z","answers":"[{\"questionId\":\"r8f09d63e6f6f42feb2f8f4f8ed3f9389\",\"answer1\":\"Hello again, World\"},{\"questionId\":\"r28fe12073dfa47399f8ce95ae679dccf\",\"answer1\":\"F\"},{\"questionId\":\"r8f9e9fedcc2e410c80bfa1e0e3ef9750\",\"answer1\":\"2020-08-26\"}]"}
x = requests.post(url, data = myobj)
My answers are just hard coded into the data object but it seems to work.
Remember to pip install requests if you don't already have it
I was checking the active links on a website with Selenium WebDriver and Java. I passed the links to an array, and while verifying them I get a 403 Forbidden response for all links on the site. It is just a public website anyone can access, and the links work properly when clicked manually. I wanted to know why it is not showing 200 and what can be done in this situation.
This is for Selenium webdriver with Java
for (int j = 0; j < activelinks.size(); j++) {
    System.out.println("Active Link address and status >>> " + activelinks.get(j).getAttribute("href"));
    HttpURLConnection connection = (HttpURLConnection) new URL(activelinks.get(j).getAttribute("href")).openConnection();
    connection.connect();
    String response = connection.getResponseMessage();
    int responsecode = connection.getResponseCode();
    connection.disconnect();
    System.out.println(activelinks.get(j).getAttribute("href") + ">>" + response + " " + responsecode);
}
I expect the response code to be 200, but the actual output is 403.
I believe you need to add the relevant cookies to the HttpURLConnection, or even better, consider switching to the OkHttp library, which is under the hood of the Selenium Java Client.
So you basically need to fetch the cookies from the browser using the driver.manage().getCookies() function and generate a proper Cookie request header for the subsequent calls.
Example code:
StringBuilder cookieBuilder = new StringBuilder();
driver.manage().getCookies()
        .forEach(cookie -> cookieBuilder
                .append(cookie.getName())
                .append("=")
                .append(cookie.getValue())
                .append(";"));

OkHttpClient client = new OkHttpClient().newBuilder().build();
for (WebElement activelink : activelinks) {
    Request request = new Request.Builder()
            .url(activelink.getAttribute("href"))
            .addHeader("Cookie", cookieBuilder.toString())
            .build();
    Response urlResponse = client.newCall(request).execute();
    String response = urlResponse.message();
    int responsecode = urlResponse.code();
    System.out.println(activelink.getAttribute("href") + ">>" + response + " " + responsecode);
}
If you need nothing else but response code you can consider using HEAD method to avoid executing calls for the full URLs - this will allow you to save traffic and your test will be much faster.
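If your test code is in Python, the same cookie-reuse idea looks roughly like this (a minimal sketch, assuming the Python bindings and the requests library; activelinks stands for the already-collected list of link elements):

import requests

session = requests.Session()
# copy the browser's session cookies so the server treats the checks like the logged-in user
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'])

for link in activelinks:
    url = link.get_attribute('href')
    # HEAD keeps the check cheap; fall back to GET if the server rejects HEAD
    print(url, session.head(url).status_code)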
403 Forbidden
The HTTP 403 Forbidden client error status response code indicates that the server understood the request but refuses to authorize it.
This status is similar to 401, but in this case, re-authenticating will make no difference. The access is permanently forbidden and tied to the application logic, such as insufficient rights to a resource.
Reason
I don't see any such issue in your code block. However, there is a possibility that the WebDriver-controlled browser client is getting detected and hence the subsequent requests are getting blocked. There can be numerous factors, as follows:
User agent
Plugins
Languages
WebGL
Browser features
Missing image
You can find a couple of detailed discussions in:
How does recaptcha 3 know I'm using selenium/chromedriver?
Selenium and non-headless browser keeps asking for Captcha
Solution
A generic solution will be to use a proxy or rotating proxies from the Free Proxy List.
You can find a detailed discussion in Change proxy in chromedriver for scraping purposes
Outro
You can find a couple of relevant discussions in:
Can a website detect when you are using selenium with chromedriver?
Selenium webdriver: Modifying navigator.webdriver flag to prevent selenium detection
Failed to load resource: the server responded with a status of 429 (Too Many Requests) and 404 (Not Found) with ChromeDriver Chrome through Selenium
Had the same problem; the user agent was the issue in my case (read more here: https://www.javacodegeeks.com/2018/05/how-to-handle-http-403-forbidden-error-in-java.html).
Also check what request methods are allowed on your website; you can do that by looking at any endpoint in the "Network" tab in Chrome. It should list the allowed request methods. In my case I couldn't use "HEAD", but "GET" did the trick.
Code:
List<WebElement> links = driver.findElements(By.tagName("a"));
boolean brokenLink = false;
for (WebElement link : links) {
    String url = link.getAttribute("href");
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setRequestProperty("User-Agent",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36");
    conn.connect();
    int httpCode = conn.getResponseCode();
    if (httpCode >= 400) {
        System.out.println("BROKEN LINK: " + url + " " + httpCode);
        brokenLink = true;
        Assert.assertFalse(brokenLink);
    } else {
        System.out.println("Working link: " + url + " " + httpCode);
    }
}
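For reference, the same check from Python with a browser-like User-Agent might look like this (a rough sketch, assuming the requests library; the header value is just an example string):

import requests
from selenium.webdriver.common.by import By

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                         "(KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"}

for link in driver.find_elements(By.TAG_NAME, "a"):
    url = link.get_attribute("href")
    if not url:
        continue  # skip anchors without an href
    code = requests.get(url, headers=headers).status_code
    print(("BROKEN LINK: " if code >= 400 else "Working link: ") + url + " " + str(code))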
I am currently attempting to set a secure cookie for a connection coming from a Python client, using Tornado. However, although setting the cookie works fine for connections coming from browsers, the set_secure_cookie call does not seem to work in the case of a Python client.
Below are excerpts from my Tornado server code, which serves both WebSocket and HTTP requests:
class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        return self.get_secure_cookie("user")

class LoginHandler(BaseHandler):
    def get(self):
        self.write('<html><body><form action="/login" method="post">'
                   'Name: <input type="text" name="name">'
                   '<input type="submit" value="Sign in">'
                   '</form></body></html>')

    def post(self):
        print("post received: ", self.get_argument("name"))
        try:
            print('trying to set cookie')
            self.set_secure_cookie("user", self.get_argument("name"))
        except Exception as e:
            print(e)
        print("cookie: ", self.get_current_user())
        self.redirect("http://192.168.6.21/")

def main():
    application = tornado.web.Application([
        (r'/ws', EchoWebSocket),
        (r'/login', LoginHandler)
    ], cookie_secret="nescafeh")
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(9000)
    tornado.ioloop.IOLoop.instance().start()
And attempting to send a POST request to get the cookie set on a separate client:
import requests
from time import sleep

s = requests.Session()
r = s.post("http://127.0.0.1:9000/login", data={'name': 'hello'})
sleep(2)
print(r.text)
No errors are returned when trying to set the cookie, and removing the 'self.redirect' line to see the response from the POST request does not help (there is no text printed).
Thanks a lot!
r.text is empty because you're not writing anything to the response body in your handler.
Try self.write("something") in your handler's post method and r.text should print out the response.
You can also check r.cookies to see if your cookie is set or not.
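A minimal sketch of what that could look like, under the same excerpted setup as above (the "cookie set" response text is just an example):

class LoginHandler(BaseHandler):
    def post(self):
        self.set_secure_cookie("user", self.get_argument("name"))
        self.write("cookie set")  # write a body so r.text has something to show

# and on the client side:
s = requests.Session()
r = s.post("http://127.0.0.1:9000/login", data={'name': 'hello'})
print(r.text)     # should now print "cookie set"
print(r.cookies)  # shows whether the secure cookie came back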
I'm developing an Angular 2 application (single page application). My page is never "reloaded", but its content changes according to user interactions.
I'm having some cache problems especially with images.
Context :
My page contains an editable image list :
<ul>
<li><img src="myImageController/1">Edit</li>
<li><img src="myImageController/2">Edit</li>
<li><img src="myImageController/3">Edit</li>
</ul>
When I want to edit an image (Edit link), my DOM content is completely changed to show another Angular component with a file-upload component.
The myImageController returns the Last-Modified header, with Cache-Control: no-cache, must-revalidate.
After a refresh (hit F5), my page makes a request for each img src, which is correct: if the image has been modified, it is downloaded; if not, I just get a 304, which is fine.
Note: my images are stored in the database as blob fields.
Problem :
When my page content containing img tags is dynamically reloaded by my single page app, the browser does not make a GET HTTP request, but immediately takes the image from the cache. I assume this is a browser optimization to avoid fetching the same resource on the same page multiple times.
Wrong solutions :
The first solution is to add something like ?time=(new Date()).getTime() to generate unique URLs and avoid the browser cache. This won't send the If-Modified-Since header in the request, and I would download my image completely every time.
Do a "real" refresh: the first page load in Angular apps is quite slow, and I don't want to reload everything.
Tests
To simplify the problem, I tried to create a static HTML page containing 3 images with the exact same link to my controller: /myImageController/1. With the Chrome developer tools, I can see that only one GET request is made. If I manage to get multiple server calls in this case, it would probably solve my problem.
Thank you for your help.
The 5th version of the HTML specification describes this behavior: the browser may reuse images regardless of cache-related HTTP headers. Check this answer for more information. You probably need to use XMLHttpRequest and blobs. In this case you also need to consider the Same-origin policy.
You can use the following function to make sure the user agent performs every request:
var downloadImage = function (imgNode, url) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.responseType = "blob";
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            if (xhr.status == 200 || xhr.status == 304) {
                var blobUrl = URL.createObjectURL(xhr.response);
                imgNode.src = blobUrl;
                // You can also use imgNode.onload callback to release blob resources.
                setTimeout(function () {
                    URL.revokeObjectURL(blobUrl);
                }, 1000);
            }
        }
    };
    xhr.send();
};
For more information check the New Tricks in XMLHttpRequest2 article by Eric Bidelman, the Working with files in JavaScript, Part 4: Object URLs article by Nicholas C. Zakas, and the URL.createObjectURL() and Same-origin policy MDN pages.
You can use the random ID trick. This changes the URL so that the browser reloads the image. Note that this can be done in the query parameters to force a full cache break, or in the hash to allow the browser to re-validate the image against the cache (and avoid re-downloading it if unchanged).
function reloadWithCache(img: HTMLImageElement, url: string) {
    img.src = url.replace(/#.*/, "") + "#" + Math.random();
}

function reloadBypassCache(img: HTMLImageElement, url: string) {
    let sep = url.indexOf("?") == -1 ? "?" : "&";
    img.src = url + sep + "nocache=" + Math.random();
}
Note that if you are using reloadBypassCache regularly you are better off fixing your cache headers. This function will always hit your origin server leading to higher running costs and making CDNs ineffective.
I'm trying to upload a file to Kinvey using the REST API method.
I can successfully get the Google Storage URL link provided after sending a 'POST' request to https://baas.kinvey.com/blob/:myAppId
The problem is that when I send a 'PUT' request to the Google Storage URL, I get this error:
XMLHttpRequest cannot load (my storage.google URL). Response to
preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin (my localhost) is therefore not allowed access.
This appears to be a fairly standard CORS error (which you can read a LOT more about over here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS), which takes place when you are making a cross-origin request. There are a lot of different ways you can approach this issue, but the easiest would probably be to use one of our SDKs to help you. If you take a look over at http://devcenter.kinvey.com/html5/downloads you will find an SDK that you can include in your projects, and guides / documentation for it in the top navigation.
File uploads using the HTML5 library are fairly trivial as well. Here's some sample code that I have whipped up:
HTML portion:
<input type="file" name="_file" id="_file" onchange="fileSelected();" />
<div id="fileinfo">
<div id="filename"></div>
<div id="filetype"></div>
</div>
Javascript portion:
function fileSelected() {
    var oFile = document.getElementById('_file').files[0];
    var oReader = new FileReader();
    oReader.onload = function (e) {
        document.getElementById('fileinfo').style.display = 'block';
        document.getElementById('filename').innerHTML = 'Name: ' + oFile.name;
        document.getElementById('filetype').innerHTML = 'Type: ' + oFile.type;
    };
    oReader.readAsDataURL(oFile);
    fileUpload(oFile);
}

function fileUpload(file) {
    var file = document.getElementById('_file').files[0];
    var promise = Kinvey.File.upload(file, {
        filename: document.getElementById('fileinfo').toString(),
        mimetype: document.getElementById('filetype').toString()
    });
    promise.then(function () {
        alert("File Uploaded Successfully");
    }, function (error) {
        alert("File Upload Failure: " + error.description);
    });
}
This will be slightly different for each of Kinvey's JavaScript libraries, but should follow roughly the same outline. Get the file, call Kinvey.File.upload asynchronously, and let the SDK do its magic. This should handle all the ugliness of CORS for you.
Thanks,