Why do some invalid MIME types trigger a "TypeError," and other invalid MIME types bypass the error and trigger an unprompted download? - express

I'm making a fairly simple Express app with only a few routes. My question isn't about the app's functionality but about a strange bit of behavior of an Express route.
When I start the server and use the /search/* route, or any route that takes in a parameter, and I apply one of these four content-types to the response:
res.setHeader('content-type', 'plain/text');
res.setHeader('content-type', 'plain/html');
res.setHeader('content-type', 'html/plain');
res.setHeader('content-type', 'html/text');
the parameter is downloaded as a file without any prompting. So visiting search/foobar downloads a file named "foobar" with a size of 6 bytes and an unsupported file type. Now I understand that none of these four values are actual MIME types (I should be using either text/plain or text/html), but why the download? Those two valid MIME types behave like they should, and the following values, which have a type but no subtype, all fail like they should, returning TypeError: invalid media type:
res.setHeader('content-type', 'text');
res.setHeader('content-type', 'plain');
res.setHeader('content-type', 'html');
Why do some invalid types trigger an error, and other invalid types bypass the error and trigger a download?
What I've found out so far:
I found in the Express 4.x docs that res.download(path [, filename]) transfers the file at path as an “attachment” and will typically prompt the user for the download, but this download is neither prompted nor intentional.
I wasn't able to find any situation like this in the Express docs (or here on SO) where running a route caused a file to automatically download to your computer.
At first I thought the line res.send(typeof(res)); was causing the download, but after commenting out lines one at a time and rerunning the server, I was able to figure out that the download happens only when the content-type is set to 'plain/text' (or one of the other two-part invalid types). It doesn't matter what goes inside res.send(); when the content-type is plain/text, the text after /search/ is downloaded to my machine.
Rearranging the routes produced the same result (everything worked as it should except for the download).
The app just hangs at whatever route was reached before /search/foo, but the download still comes through.
My code:
'use strict';
var express = require('express');
var path = require('path');
var app = express();

app.get('/', function (req, res) {
  res.sendFile(path.join(__dirname, 'index.html'));
});

app.get('/search', function (req, res) {
  res.send('search route');
});

app.get('/search/*', function (req, res, next) {
  res.setHeader('content-type', 'plain/text');
  var type = typeof res;
  var reqParams = req.params;
  res.send(type);
});

// log the port actually in use, since process.env.PORT may be unset
var port = process.env.PORT || 3000;
var server = app.listen(port, function () {
  console.log('app listening on port ' + port + '!');
});

module.exports = server;
Other Details
Express version 4.15.2
Node version 4.7.3
using Cloud9
am Express newbie
my repo is here, under the branch "so_question"

Why do some invalid types trigger an error...
Because a MIME type has a format it must adhere to (documented in RFC 2045), and the ones triggering the error don't match that format.
The format looks like this:
type "/" subtype *(";" parameter)
So there's a mandatory type, a mandatory slash, a mandatory subtype, and optional parameters prefixed by a semicolon.
However, when a MIME type matches that format, it's only syntactically valid, not necessarily semantically, which brings us to the second part of your question:
...and other invalid types bypass the error and trigger a download?
That follows from what is written in RFC 2049:
Upon encountering any unrecognized Content-Type field, an implementation must treat it as if it had a media type of "application/octet-stream" with no parameter sub-arguments. How such data are handled is up to an implementation, but likely options for handling such unrecognized data include offering the user to write it into a file (decoded from its mail transport format) or offering the user to name a program to which the decoded data should be passed as input.
(emphasis mine)
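To make the distinction concrete, here is a standalone sketch of the RFC 2045 type/subtype shape check. The regex is an illustrative approximation, not the actual parser Express delegates to, but it sorts the header values from the question the same way:

```javascript
// Simplified sketch (not Express's real parser) of the RFC 2045
// type "/" subtype *(";" parameter) shape check.
function isSyntacticallyValidMime(value) {
  // mandatory type, mandatory slash, mandatory subtype,
  // optional ";" parameters
  return /^[^\s/;]+\/[^\s/;]+(\s*;.*)?$/.test(value.trim());
}

['plain/text', 'plain/html', 'html/plain', 'html/text'].forEach(function (t) {
  console.log(t, isSyntacticallyValidMime(t)); // true: parses, so no TypeError
});

['text', 'plain', 'html'].forEach(function (t) {
  console.log(t, isSyntacticallyValidMime(t)); // false: no subtype, parser throws
});
```

So 'plain/text' passes the syntactic check and gets sent to the browser as an unrecognized media type, while bare 'text' fails parsing and raises the TypeError before a response is ever sent.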

The order in which you define your routes matters a lot in Express; you probably need to move your default '/' route so it comes after the '/search/*' route.

Related

Error: Network Connection Lost - saving form data (file) to R2 bucket

I have this handler in my worker:
const data = await event.request.formData();
const key = data.get('filename');
const file = data.get('file');

if (typeof key !== 'string' || !file) {
  return res.send(
    { message: 'Post body is not valid.' },
    undefined,
    400
  );
}

await BUCKET.put(key, file);
return new Response(file);
If I comment out the await BUCKET.put(key, file); line, then I get the response of the file as expected. But with that line in the function, I get the error:
Uncaught (in promise) Error: Network connection lost.
I have confirmed that by changing the put to a get, I can retrieve files from that bucket, so there doesn't seem to be a problem with the connection itself.
Are you still having this problem? I'll need your account ID to figure out what's going on. If you DM me (Vitali) on Discord your account ID (& this SO link for context) I can probably help you out (or email me directly at cloudflare.com using vlovich as the account if you don't have/don't want to sign up on Discord). I'm the tech lead for R2.
EDIT 2022-09-07.
I just noticed that you're calling formData on the request. This reads the whole object into RAM. Workers has a 128 MiB limit, so what's likely happening is that you're exceeding that limit (probably egregiously, since we do give some buffer) and thus Cloudflare is terminating your Worker.
What you'll want to do is make sure you upload the file raw (not as a form) and access the raw ReadableStream. Alternatively, you can try writing a TransformStream to parse out the payload in a streaming fashion if you're confident the file payload (& any metadata you need) will come after the name. Usually it's easier to change your upload mechanism.

Why do we have to set responseType when using XMLHttpRequest?

I implemented an HTML page which uploads a file and then downloads another file from the server.
While handling the download part, I noticed that if I download a binary file, I have to set responseType to blob or the file will be broken.
What confuses me is that the HTTP response contains a Content-Type header which could tell XMLHttpRequest what type of file the server is sending. Why do I have to set it manually? I don't understand the logic, because it's the server's job to say what the file type is, rather than the client's to predict it.
const xhr = new XMLHttpRequest();
xhr.responseType = 'blob';
// .......
xhr.onload = function (e) {
  if (this.status == 200) {
    var blob = new Blob([this.response]); // if I don't set responseType, this.response will be broken
    let a = document.createElement("a");
Your question made me realise I'm not entirely sure of the following, but it is how I have always looked at it.
responseType sets the type of xhr.response so you can process it as a Blob; it lets you retrieve the result of the xhr request as a Blob. If you don't set it, xhr.response will be text.
A server may have sent the data the right way based on a MIME type, but it still only sends a stream of bytes along with that MIME type; interpreting the received bytes happens on your end, and the received data won't automatically become a Blob based on the MIME type.
Blob is not a file type on the server; the server may know and send the MIME types of files, but Blob isn't one of them, and xhr.response won't be a Blob just because the MIME type suggests that a Blob would be the right type.
Also, you may want to process the xhr.response differently from what can be inferred from the MIME type, and in that sense it is a kind of override (though not with the same functionality as xhr.overrideMimeType()).
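A standalone Node sketch (not XHR itself, but the same byte-level issue) shows why letting binary data go through text decoding breaks it: bytes that aren't valid UTF-8 are replaced during decoding, so re-encoding does not round-trip.

```javascript
// Why reading binary data as text corrupts it: invalid UTF-8 byte
// sequences are replaced with U+FFFD during decoding, so the original
// bytes cannot be recovered afterwards.
const original = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // PNG magic bytes

// The responseType = 'blob' path: keep the raw bytes as-is.
const keptAsBytes = Buffer.from(original);

// The default text path: decode as UTF-8, then re-encode.
const asText = original.toString('utf8');
const roundTripped = Buffer.from(asText, 'utf8');

console.log(original.equals(keptAsBytes));  // true  - bytes preserved
console.log(original.equals(roundTripped)); // false - 0x89 was mangled
```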

Nodejs how to pass parameters into an exported route from another route?

Suppose I module export "/route1" in route1.js, how would I pass parameters into this route from "/route2" defined in route2.js?
route1.js
module.exports = (app) => {
  app.post('/route1', (req, res) => {
    console.log(req.body);
  });
};
route2.js
const express = require('express');
const app = express();

// import route1 from route1.js
const r1 = require('./route1')(app);

app.post('/route2', (req, res) => {
  // how to pass parameters?
  app.use(???, r1) ?
});
In short, route 1 output depends on the input from route 2.
You don't pass parameters from one route to another. Each route is a separate client request. http, by itself, is stateless where each request stands on its own.
If you describe what the actual real-world problem you're trying to solve is, we can help you with some of the various tools there are to use for managing state from one request to the next in http servers. But, we really need to know what the REAL world problem is to know what best to suggest.
The general tools available are:
Set a cookie as part of the first response with some data in the cookie. On the next request sent from that client, the prior cookie will be sent with it, so the server can see what that data is.
Create a server-side session object (using express-session, probably) and set some data in the session object. In the 2nd request, you can then access the session object to get that previously set data.
Return the data to the client in the first request and have the client send that data back in the 2nd request either in query string or form fields or custom headers. This would be the truly stateless way of doing things on the server. Any required state is kept in the client.
Which option results in the best design depends entirely upon what problem you're actually trying to solve.
FYI, you NEVER embed one route in another like you showed in your question:
app.post('/route2', (req, res) => {
  // how to pass parameters?
  app.use(???, r1) ?
});
What that would do is install a new, permanent copy of app.use() that's in force for all incoming requests, every time your app.post() route was hit. Those handlers would accumulate forever.

How to make CORS API call from Blazor client app with authentication using AutoRest Client?

I am trying to call Web API from Blazor Client App. The API sends required CORS headers and works fine when I call the API using plain Javascript.
The API needs Auth cookies to be included when making a call so using JavaScript I can call:
fetch(uri, { credentials: 'include' })
  .then(response => response.json())
  .then(data => { console.log(data) })
  .catch(error => console.log('Failed'));
Now, I am trying to do the same on Blazor. I came across this section on the docs which says:
requestMessage.Properties[WebAssemblyHttpMessageHandler.FetchArgs] = new
{
    credentials = FetchCredentialsOption.Include
};
When I make a call now, it fails with following exception:
WASM: System.Net.Http.HttpRequestException: TypeError: Failed to execute 'fetch' on 'Window': The provided value '2' is not a valid enum value of type RequestCredentials.
I noticed that adding the following in Startup.cs allows me to call the API including credentials (here):
if (RuntimeInformation.IsOSPlatform(OSPlatform.Create("WEBASSEMBLY")))
{
    WebAssemblyHttpMessageHandler.DefaultCredentials = FetchCredentialsOption.Include;
}
Now, I would like to call the API using the AutoRest-generated API client so that I can reuse the existing client and save a lot of time. Setting DefaultCredentials as above doesn't work and shows the following exception:
WASM: Microsoft.Rest.HttpOperationException: Operation returned an invalid status code 'InternalServerError'
Setting the requestMessage.Properties as above says:
The provided value '2' is not a valid enum value of type RequestCredentials.
I am already injecting HttpClient from Blazor using this technique.
This is not really the answer... I just need space
Setting the requestMessage.Properties as above, says: The provided value '2' is not a valid enum value of type RequestCredentials
If so, what is wrong with the other method I suggested, which I gather is working?
Incidentally,
The provided value '2' is not a valid enum value of type RequestCredentials
is not related to Blazor, right? There is no such type (RequestCredentials) in Blazor. Perhaps your code, whatever it may be, passes the numeric value of FetchCredentialsOption.Include rather than its enum string.
Consider instantiating an HttpRequestMessage object, and configuring it according to your requirements.
Hope this helps...

AngularJS resource service with jsonp fails

I'm trying to fetch the JSON output of a rest api in AngularJS. Here are the problems I'm facing:
The REST API URL has the port number in it, which AngularJS interprets as a parameter placeholder. I tried several fixes for this, in vain.
I'm having issues with JSONP method. Rest api isn't hosted on the same domain/server and hence a simple get isn't working.
The parameters to the REST API are slash-separated, not like an HTML query string. One of the parameters is an email address, and I'm thinking the '#' symbol is causing some problem as well. I wasn't able to fix this either.
My rest api looks something like: http://myserver.com:8888/dosomething/me#mydomain.com/arg2.
Sample code / documentation would be really helpful.
I struggled a lot with this problem, so hopefully this will help someone in the future :)
JSONP expects a function callback; a common mistake is to call a URL that returns plain JSON, which gives you an Uncaught SyntaxError: Unexpected token : error. Instead, JSONP should return something like this (don't get hung up on the function name in the example):
angular.callbacks._0({"id":4,"name":"Joe"})
The documentation tells you to pass JSON_CALLBACK on the URL for a reason. That will get replaced with the callback function name to handle the return. Each JSONP request is assigned a callback function, so if you do multiple requests they may be handled by angular.callbacks._1, angular.callbacks._2 and so forth.
With that in mind, your request should be something like this:
var url = 'http://myserver.com:8888/dosomething/me#mydomain.com/arg2';
$http.jsonp(url + '?callback=JSON_CALLBACK')
  .then(function (response) {
    $scope.mydata = response.data;
    ...
Then AngularJS will actually request (replacing JSON_CALLBACK):
http://myserver.com:8888/dosomething/me#mydomain.com/arg2?callback=angular.callbacks._0
Some frameworks have support for JSONP, but if your api doesn't do it automatically, you can get the callback name from the querystring to encapsulate the json.
Example is in Node.js:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  // do something to get the json
  var json = '{"id":4,"name":"Joe"}';
  res.writeHead(200, { "Content-Type": "application/javascript" });
  res.write(req.query.callback + '(' + json + ')');
  res.end();
});

app.listen(8888);
The main issue I was facing here was related to CORS. I got $http to retrieve the JSON data from the server by disabling web security in Chrome, using the --disable-web-security flag when launching Chrome.
Regarding the 8888 port, see if this works:
$scope.url = 'http://myserver.com:port/dosomething/:email/:arg2';
$scope.data = $resource($scope.url, { port: ':8888', email: 'me#mydomain.com',
  arg2: '...', /* other defaults here */ }, …)
Try escaping the ':'
var url = 'http://myserver.com\:8888/dosomething/me#mydomain.com/arg2';
Pretty sure I read about this somewhere else