I'm experiencing a weird issue with file uploads on a server behind Cloudflare. When I try to upload an image from the browser (Chrome and Firefox) I get the error "413 Request Entity Too Large - cloudflare", which according to the docs means the request is bigger than 100 MB, but the image is actually only 18 MB.
When I send the same request from Postman, it works!
I'm sending even bigger files from Postman and all of them are uploaded successfully.
What could be the problem?
JavaScript
return request.post('/resource/image')
  .use(SuperagentPromisePlugin)
  .set('Authorization', 'Bearer xxx')
  .attach('file', file)
  .then((response) => {
    // ...
  });
Related
Method POST fails:
java.net.SocketException: An established connection has been dropped by software on your host computer, http call failed after 180865 milliseconds for url: http://localhost:8080/api/v2/files?filename=more.wav&contractId=51846706-c05f-4089-882d-7229f9b96d42
16:44:41.580 classpath:karate/features/files/files_post_exceptions.feature:106
I tried this scenario:
Scenario: As admin create file with size greater than 10MB
  * def Path = 'classpath:karate/data/more-than-10MB.wav'
  Given url HOST_V2
  And path '/files'
  And header Authorization = 'Bearer ' + TOKEN
  And header Content-Type = 'multipart/form-data'
  And params { contractId: '#(contractId)', filename: 'more-than-10MB.wav' }
  And multipart file file = { read: '#(Path)', contentType: 'application/octet-stream', filename: 'more-than-10MB.wav' }
  When method post
  Then status 500
I expect a status 500.
Sounds like your network has throttled something. That said, Karate may have issues with large binary uploads, so you can use this workaround: call curl via the command line. Doing this for one or two "non-happy path" tests is perfectly fine (in my opinion).
Here are the details: Using cURL for API automation in Karate
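A rough sketch of that workaround, assuming a Karate version where karate.fork is available (the file path, curl flags, and URL assembly are placeholders to adapt to your project):

Scenario: As admin create file with size greater than 10MB (via curl)
  * def uploadUrl = HOST_V2 + '/files?contractId=' + contractId + '&filename=more-than-10MB.wav'
  # -s silences progress, -o discards the body, -w prints only the status code
  * def args = ['curl', '-s', '-o', '/dev/null', '-w', '%{http_code}', '-H', 'Authorization: Bearer ' + TOKEN, '-F', 'file=@src/test/resources/karate/data/more-than-10MB.wav;type=application/octet-stream', uploadUrl]
  * def proc = karate.fork({ args: args })
  * proc.waitSync()
  * match proc.sysOut contains '500'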
I'm using Vue CLI and axios.
I have a search bar where the user can input (potentially) any website and read info about the HTTP request and response.
Some of the information I need to get is: HTTP protocol, status code, Location (if redirected), Date, and Server.
What I'm doing is a simple axios GET request taking the input from the search bar.
I'm trying to get my head around the CORS issues, but even when I input a CORS-enabled site like myjson, I can only access the CORS-safelisted response headers, which are not what I'm looking for.
This is the axios call:
axios
  .get(url)
  .then((r) => {
    console.log(r);
    console.log(r.headers.server); // undefined
  })
  .catch((e) => {
    console.error(e);
  });
Is the brief I'm presenting even possible?
UPDATE
I then tried removing the Chrome extension I had used to enable CORS requests and installed the Moesif Origin & CORS Changer extension. After restarting my PC, I now have access to the remaining response headers.
I don't know exactly what went wrong with the previous extension, but hopefully this helps somebody.
It's also worth pointing out that, as of the date I'm writing this edit, the myjson site has been flagged by Chrome as unsafe for privacy reasons. I've simply made HTTP requests to other sites and got the response headers as described.
The response to a cross-origin request for https://myjson.dit.upm.es/about contains the CORS-related headers
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, PATCH, PUT, DELETE, POST, OPTIONS
but no Access-Control-Expose-Headers. Without that, a cross-origin client cannot access the Server header, because it is not CORS-safelisted.
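For illustration, exposing a non-safelisted header has to happen on the responding server, which you obviously cannot do for third-party sites. A minimal Express sketch of what such a server would need to send (the header values are just examples):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', '*');
  // Without this, cross-origin JavaScript can only read CORS-safelisted headers.
  res.set('Access-Control-Expose-Headers', 'Server, Location, Date');
  next();
});

app.get('/about', (req, res) => res.json({ ok: true }));

app.listen(3000);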
It would work if you had your server make the request and evaluate the headers, not the axios client.
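A minimal sketch of that server-side approach using Express and axios (the route name and response shape are made up):

const express = require('express');
const axios = require('axios');
const app = express();

// GET /inspect?url=https://example.com
// The server is not subject to the browser's CORS rules, so it can read any header.
app.get('/inspect', async (req, res) => {
  try {
    const r = await axios.get(req.query.url, {
      maxRedirects: 0,            // keep 3xx responses so Location stays visible
      validateStatus: () => true, // don't throw on non-2xx statuses
    });
    res.json({
      status: r.status,
      server: r.headers['server'],
      location: r.headers['location'],
      date: r.headers['date'],
    });
  } catch (e) {
    res.status(502).json({ error: e.message });
  }
});

app.listen(3000);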
I have a standard HTTPS axios request from my frontend (which is based on Vue) to our company's API, which is on another server (the server uses an SSL certificate).
testApi() {
  axios.get('https://rng-hub2.staging.rng:8001/rng/3/')
    .then(function (response) {
      // handle success
      console.log(response);
    })
    .catch(function (error) {
      // handle error
      console.log(error);
    })
    .finally(function () {
      // always executed
    });
},
This causes an error in both Firefox and Chrome (the two browsers word it slightly differently; screenshots omitted).
As I expected, in the browser developer tools under the Network -> Response tab I should also see an error, which is true for Chrome, but turns out not to be true for Firefox.
So Chrome shows me the error, but in Firefox I receive my data in exactly the right format.
Any idea how I can retrieve this data correctly and assign it to the response variable in the .then section?
About the Cross-Origin Request Blocked error: the API server's administrator told me that he has added my IP to the CORS "trusted list". However, I'm not sure about that, because according to this post: https://jonhilton.net/cross-origin-request-blocked/
my response headers should include an additional header with my local IP, like:
Access-Control-Allow-Origin: http://192.168.32.44
But they don't.
This proxy stuff also didn't work:
How to deal with CORS error on Vue CLI 3?
Please give me a hint about what I'm doing wrong.
Found the solution. The problem was deeper than I thought. The short answer: if you are working in a local network with different API servers, they may be certified by an internal corporate CA (Certificate Authority) in order to communicate over HTTPS. So you need to ask your server administrator for that CA's certificate (a .pem file) and point your HTTP client at it, so the client can verify requests to the specific API. In Guzzle it looks like this:
new GuzzleClient(['verify' => '/path/to/self-signed/cert.pem']);
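Since the questions here use axios, a roughly equivalent sketch on the Node side (this only works in a Node process such as a dev proxy or SSR server, because browsers manage their own trust stores; the CA path is a placeholder):

const fs = require('fs');
const https = require('https');
const axios = require('axios');

// Trust the internal corporate CA when verifying the API's certificate.
const httpsAgent = new https.Agent({
  ca: fs.readFileSync('/path/to/corporate-ca.pem'),
});

axios.get('https://rng-hub2.staging.rng:8001/rng/3/', { httpsAgent })
  .then((response) => console.log(response.data))
  .catch((error) => console.error(error));

For the browser itself, the equivalent fix is to import the corporate CA into the operating system or browser trust store.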
I have a SPA with a Redux client and an Express web API. One of the use cases is to upload a single file from the browser to the Express server. Express is using the multer middleware to decode the file upload and place it into an array on the req object. Everything works as expected when running on localhost.
However, when the app is deployed to AWS, it does not function as expected. Deployment pushes the Express API to an AWS Lambda function, and the Redux client's static assets are served by the CloudFront CDN. In that environment, the uploaded file does make it to the Express server, is handled by multer, and the file does end up as the first (and only) item in the req.files array, where it is expected to be.
The problem is that the file contains the wrong bytes. For example when I upload a sample image that is 2795 bytes in length, the file that ends up being decoded by multer is 4903 bytes in length. Other images I have tried always end up becoming larger by approximately the same factor by the time multer decodes and puts them into the req.files array. As a result, the files are corrupted and are not displaying as images.
The file is uploaded like so:
<input type="file" name="files" onChange={this.onUploadFileSelected} />
...
onUploadFileSelected = (e) => {
  const file = e.target.files[0]
  const formData = new FormData()
  formData.append("files", file)
  axios.post('to the url', formData, { withCredentials: true })
    .then(handleSuccessResponse)
    .catch(handleFailResponse)
}
I have tried setting up multer with both MemoryStorage and DiskStorage. Both work, both on localhost and in the AWS Lambda; however, both exhibit the same behavior: the file is larger and corrupted in the store.
I have also tried setting up multer both as a global middleware (via app.use) and as a route-specific middleware on the upload route (via routes.post('the url', multerMiddleware, controller.uploadAction)). Again, both exhibit the same behavior. The multer middleware is configured like so:
const multerMiddleware = multer({ /* optionally set dest: '/tmp' */ })
  .array('files')
One difference is that on localhost both the client and Express are served over http, whereas in AWS both are served over https. I don't believe this makes a difference, but I have so far been unable to test either running localhost over https or running in AWS over http.
Another peculiar thing I noticed was that when the multer middleware is present, other middlewares do not seem to function as expected. Rather than the next() function moving flow down to the controller action, other middlewares completely exit before the controller action is invoked, and when the controller invocation exits, control does not flow back into the middleware after the next() call. When the multer middleware is removed, the other middlewares do function as expected. However, this observation is on localhost, where the entire end-to-end use case does function as expected.
What could be messing up the uploaded image file payload when deployed to the cloud, but not on localhost? Could it really be https making the difference?
Update 1
When I upload a sample PNG file (11228 bytes; the image itself is omitted here), here is the HAR Chrome gives me for the local (expected) file upload:
"postData": {
"mimeType": "multipart/form-data; boundary=----WebKitFormBoundaryC4EJZBZQum3qcnTL",
"text": "------WebKitFormBoundaryC4EJZBZQum3qcnTL\r\nContent-Disposition: form-data; name=\"files\"; filename=\"danludwig.png\"\r\nContent-Type: image/png\r\n\r\n\r\n------WebKitFormBoundaryC4EJZBZQum3qcnTL--\r\n"
}
Here is the HAR Chrome gives me for the AWS (corrupted) file upload:
"postData": {
"mimeType": "multipart/form-data; boundary=----WebKitFormBoundaryoTlutFBxvC57UR10",
"text": "------WebKitFormBoundaryoTlutFBxvC57UR10\r\nContent-Disposition: form-data; name=\"files\"; filename=\"danludwig.png\"\r\nContent-Type: image/png\r\n\r\n\r\n------WebKitFormBoundaryoTlutFBxvC57UR10--\r\n"
}
The corrupted image file that is saved is 19369 bytes in length.
Update 2
I created a text file with the text hello world that is 11 bytes long and uploaded it. It does NOT become corrupted in AWS. This is the case even if I upload it with a txt or png suffix; it ends up 11 bytes in length when persisted.
Update 3
Tried uploading with a much larger text file (12132 bytes long) and had the same result as in update 2 -- the file is persisted intact, not corrupted.
Potential answers:
Found this: https://forums.aws.amazon.com/thread.jspa?threadID=252327
API Gateway does not natively support multipart form data. It is possible to configure binary passthrough to then handle this multipart data in your integration (your backend integration or Lambda function).
It seems that you may need another approach if you are using API Gateway events in AWS to trigger the lambda that hosts your express server.
Or, you could configure API Gateway to work with binary payloads per https://stackoverflow.com/a/41770688/304832
Or, upload directly from your client to a signed S3 URL (or a public one) and use that to trigger another lambda event; a sketch follows below.
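A rough sketch of the signed-URL approach, assuming the aws-sdk v2 and an existing Express app; the bucket, route, and variable names are all placeholders:

// Server (Express route): hand the client a short-lived signed PUT URL.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

app.get('/api/upload-url', (req, res) => {
  const url = s3.getSignedUrl('putObject', {
    Bucket: 'my-upload-bucket', // placeholder
    Key: 'uploads/' + req.query.filename,
    ContentType: req.query.contentType,
    Expires: 60, // seconds
  });
  res.json({ url });
});

// Client: PUT the raw file directly to S3, bypassing API Gateway entirely.
async function uploadViaSignedUrl(file) {
  const { data } = await axios.get('/api/upload-url', {
    params: { filename: file.name, contentType: file.type },
  });
  await axios.put(data.url, file, { headers: { 'Content-Type': file.type } });
}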
Until we get a chance to try out different API Gateway settings, we found a temporary workaround: using FileReader to convert the file to a base64 text string, then submit that. The upload does not seem to have any issues as long as the payload is text.
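For reference, the client side of that workaround looks roughly like this (the JSON body shape is an assumption; on the Express side, Buffer.from(req.body.data, 'base64') recovers the original bytes):

onUploadFileSelected = (e) => {
  const file = e.target.files[0]
  const reader = new FileReader()
  reader.onload = () => {
    // reader.result is a data URL like "data:image/png;base64,iVBOR..."
    const base64 = reader.result.split(',')[1]
    axios.post('to the url', {
      filename: file.name,
      contentType: file.type,
      data: base64,
    }, { withCredentials: true })
      .then(handleSuccessResponse)
      .catch(handleFailResponse)
  }
  reader.readAsDataURL(file)
}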
I am attempting to sign in (and enter a session) with user credentials in an Angular app using the Backand SDK. Following the Backand docs, I am using the Backand.signin() method (from my local environment), which appears to initially send an OPTIONS http request to the API, and that unfortunately causes this cross-origin error:
XMLHttpRequest cannot load https://api.backand.com/token. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:xxxx' is therefore not allowed access. The response had HTTP status code 400.
The exact response from the endpoint is: {"error":"unsupported_grant_type"}
I've combed through the documentation extensively but can't find anyone else having these errors.
This is exact code I am using:
function Login(username, password, callback) {
  Backand.signin(username, password).then(function (response) {
    console.log(response);
  }, function (error) {
    console.log(error);
  });
}
The error is logged to the console as a null object.
It looks like the error was in fact on my end.
While attempting to set up my own authorization service in my Angular app, I was inadvertently adding an encoded Authorization token header. When the requests were made to Backand from the Backand SDK, the headers were not correctly set, and that was causing the issues.