Serve Entire React CRA from AWS S3 bucket using Express

I am basically trying to use express as a sort of reverse proxy. My end goal is to serve up different react CRA bundles to different users. Right now though I am just working on a Proof of Concept to see if it is even possible to do this.
TL;DR goal:
Use Express to point to a specific CRA bundle stored in an S3 bucket and serve it.
This is the code I am using in express:
app.get('/*', function (req, res) {
  const bucketParams = {
    Bucket: 'some-dumb-bucket',
    Key: 'cra/index.html'
  };

  s3.getObject(bucketParams)
    .on('httpHeaders', function (statusCode, headers) {
      res.set('Content-Length', headers['content-length']);
      res.set('Content-Type', headers['content-type']);
      res.set('Last-Modified', headers['last-modified']);
      this.response.httpResponse.createUnbufferedStream()
        .pipe(res);
    })
    .send();
});
The problem I am encountering is that all of my content is coming back with the wrong headers. When I go into S3 and view the metadata, each object has the right Content-Type, so why does every response come back as "text/html"?

I figured out what I was doing wrong! Because the route always requested the same index.html key, every asset came back with index.html's headers. The fix is to map the request URL to the matching S3 key:
app.get('/*', function (req, res) {
  const bucketParams = {
    Bucket: 'some-dumb-bucket',
    Key: 'auth/index.html'
  };

  if (req.url && req.url !== '/') {
    bucketParams.Key = `auth${req.url}`;
  } else {
    bucketParams.Key = `auth/index.html`;
  }

  // send the assets over from s3
  s3.getObject(bucketParams)
    .on('httpHeaders', function (statusCode, headers) {
      res.set('Content-Length', headers['content-length']);
      res.set('Content-Type', headers['content-type']);
      res.set('Last-Modified', headers['last-modified']);
      res.set('ETag', headers['etag']);
      this.response.httpResponse.createUnbufferedStream()
        .pipe(res);
    })
    .send();
});
Code could be a tiny bit cleaner but PoC working.
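For what it's worth, here is a slightly hardened sketch of the same route (my own addition, reusing the bucket and auth/ prefix from above). It adds an error path so that unknown keys fall back to the SPA's index.html instead of hanging the response; note that depending on bucket permissions, S3 may report a missing key as NoSuchKey or as AccessDenied.

// A sketch, not the original PoC: same proxy idea plus basic error handling.
app.get('/*', function (req, res) {
  const Bucket = 'some-dumb-bucket';
  const Key = req.url && req.url !== '/' ? `auth${req.url}` : 'auth/index.html';

  const request = s3.getObject({ Bucket, Key });

  request.on('httpHeaders', function (statusCode, headers) {
    if (statusCode >= 300) return; // let the error handler below respond
    res.set('Content-Length', headers['content-length']);
    res.set('Content-Type', headers['content-type']);
    res.set('Last-Modified', headers['last-modified']);
    this.response.httpResponse.createUnbufferedStream().pipe(res);
  });

  request.on('error', function (err) {
    if (err.code === 'NoSuchKey') {
      // Deep links handled by the client-side router fall back to the SPA shell.
      s3.getObject({ Bucket, Key: 'auth/index.html' })
        .createReadStream()
        .on('error', () => res.status(500).end())
        .pipe(res.type('html'));
    } else {
      res.status(500).send('Error fetching asset from S3');
    }
  });

  request.send();
});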

Related

Express multer - disallow file uploads except for specific routes?

Well, currently I am disallowing all file uploads by setting up the server like this:
const upload = multer();
const server = express();

module.exports = () => {
  // ...
  server.use(logger('dev'));
  server.use(express.json());
  server.use(express.urlencoded({ extended: false }));
  server.use(express.raw());
  server.use(cookieParser());
  server.use(express.static(path.join(projectRoot, 'public')));
  server.set('trust proxy', 1);
  server.use(upload.none());
  server.use('/', router);
  // ...
}
Which correctly blocks all files. Now I wish to allow uploading files only in the POST request to /test:
import * as express from "express";
import multer from "multer";

const upload = multer({storage: multer.memoryStorage()});
const router = express.Router();

router.post('/test', upload.single('pdf'), function(req, res, next) {
  const r = 'respond with a test - POST';
  res.send(r);
});
However, when I try this in Postman I get a MulterError with code LIMIT_UNEXPECTED_FILE for the field 'pdf'. I notice that if I remove the line server.use(upload.none()) it works, but then files can be uploaded to any route, which is not exactly what I want.
Nothing will be uploaded to your server unless you specify a multer middleware on the entire server, on a route, or on a particular path. So you can safely remove the server.use(upload.none());.
The middleware will then not try to consume the payload of the incoming request. I don't know how much load receiving (without consuming) the payload places on the server, but you could theoretically destroy the connection whenever the client tries to submit a payload:
req.on("data", function() {
req.destroy();
});
But the creation of a new connection afterwards might cause more load on the server overall.
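For illustration, the setup suggested above might look like this minimal sketch: no global multer middleware at all, with upload parsing attached only to the /test route.

// Sketch: multipart bodies are parsed only on this route; all other routes
// simply never consume file uploads.
const upload = multer({ storage: multer.memoryStorage() });

router.post('/test', upload.single('pdf'), (req, res) => {
  // req.file is populated here and nowhere else
  res.send('respond with a test - POST');
});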

Routing for a Static Exported Next JS site on S3 and Cloudfront

I'm using Next JS to export a static HTML site that is uploaded on S3 and hosted on Cloudfront. I don't want to upload it onto a node server and just want it fully static. I don't know if I'm doing something incorrectly or missing out on something but the hosted site on cloudfront does not seem to have any routes. Hence, although I can access the site on the main page (www.mysite.com), if I try to reach a relative path (www.mysite.com/page1), I can't reach it unless I do (www.mysite.com/page1.html). That's when I realized that Cloudfront just points to the S3 bucket and does not have any way to generate routes. Is there a good way to provide Cloudfront with those routes?
I've looked online for solutions and tried the below:
- Implemented exportPathMaps: I might have done this incorrectly. An example would help if you have one (a minimal config sketch follows this list).
- Turned on trailingSlash.
- Used Lambda@Edge to rewrite requests for www.mysite.com/page1 to www.mysite.com/page1.html while preserving the URL as www.mysite.com/page1. <- This method is too costly because it uses Lambda, so I don't like it.
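For reference, the option is spelled exportPathMap in next.config.js; a minimal sketch combining it with trailingSlash (the page paths here are placeholders, not the real site's routes) might look like:

// next.config.js - a sketch with placeholder pages
module.exports = {
  trailingSlash: true,
  exportPathMap: async function () {
    return {
      '/': { page: '/' },
      '/page1': { page: '/page1' },
    };
  },
};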
For now, I have a redirect script that does what the lambda edge function does (append the .HTML onto the request) but I'm sure there is a better way to do this.
You can use an S3-triggered Lambda to copy any newly uploaded object whose key looks like key/index.html to the keys key and key/.
This is what I use:
exports.handler = (event, context, callback) => {
  var AWS = require('aws-sdk');
  var s3 = new AWS.S3();

  // copy "some/path/index.html" to "some/path/" so trailing-slash URLs resolve
  var params = {
    Bucket: event.Records[0].s3.bucket.name,
    CopySource: "/" + event.Records[0].s3.bucket.name + "/" + event.Records[0].s3.object.key,
    Key: event.Records[0].s3.object.key.replace(/index.html$/g, '')
  };
  s3.copyObject(params, function(err, data) {
    if (err) {
      console.log(err, err.stack);
    }
  });

  // copy "some/path/index.html" to "some/path" so extensionless URLs resolve too
  var params2 = {
    Bucket: event.Records[0].s3.bucket.name,
    CopySource: "/" + event.Records[0].s3.bucket.name + "/" + event.Records[0].s3.object.key,
    Key: event.Records[0].s3.object.key.replace(/\/index.html$/g, '')
  };
  s3.copyObject(params2, function(err, data) {
    if (err) {
      console.log(err, err.stack);
    }
  });
};
What we found is that the Next.js router's pathname is not populated with the expected route on the first hit if you use next export. Until this issue is fixed within Next.js, you can use a provider in _app.js that wraps your components and adjusts the route, or put this before the return statement inside your _app.js default function:
import { useRouter } from 'next/router'

const { asPath, push, pathname } = useRouter()

if (asPath.split('?')[0] != pathname.split('?')[0] && !pathname.includes('[')) {
  // Work around for next export breaking SPA routing on first hit
  console.log('Browser route ' + asPath + ' did not match nextjs router route ' + pathname)
  push(asPath)
  return <div>Loading...</div>
}
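For context, a minimal _app.js built around that snippet might look like the sketch below (the MyApp wrapper is a generic name, not taken from the answer):

// pages/_app.js - sketch wrapping the snippet above
import { useRouter } from 'next/router'

export default function MyApp({ Component, pageProps }) {
  const { asPath, push, pathname } = useRouter()

  if (asPath.split('?')[0] != pathname.split('?')[0] && !pathname.includes('[')) {
    // Work around for next export breaking SPA routing on first hit
    push(asPath)
    return <div>Loading...</div>
  }

  return <Component {...pageProps} />
}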

Uploading files larger than a few MB directly to S3 using Multer-S3 with Express

I'm trying to upload PDF files directly to S3 with Multer-S3 from an Express server.
For some reason uploading an SVG file works great, but when I try to upload something else like a PDF there is no error, yet when I access the file in my S3 bucket it either doesn't open or shows a blank PDF page.
My code:
filesService.js
const multer = require('multer');
const multerS3 = require('multer-s3-transform');
const AWS = require('aws-sdk');

const BUCKET_NAME = process.env.FILES_BUCKET_NAME;
const s3 = new AWS.S3({});

var limits = {
  files: 1, // allow only 1 file per request
  fileSize: 20 * 1024 * 1024 * 1024, // max file size in bytes (replace with your desired limit)
};

const upload = multer({
  limits: limits,
  storage: multerS3({
    acl: "public-read",
    s3: s3,
    bucket: BUCKET_NAME,
    contentType: multerS3.AUTO_CONTENT_TYPE,
    key: (req, file, cb) => {
      console.log(file);
      cb(null, file.originalname)
    }
  })
}).single('file');

module.exports = {upload};
route.js
router.post('/files', async (req, res, next) => {
  await filesService.upload(req, res, (err) => {
    if (err) {
      console.error(err);
    } else {
      console.log(req.file);
      location = req.file.location;
      res.status(200).send({link: req.file.location});
    }
  });
  res.status(500);
})
I'm trying to do it using Postman with the following configurations:
Headers: Content-Type - multipart/form-data
Body: key: file, value: PDF selected
There's no error returned from my route, but again, when I try to access a file other than an SVG it doesn't work.
As an alternative, and since our PDFs were getting much bigger, we decided to move the PDF generation from the front end to the back end, so there was no longer any need to use Multer for this. We generate the PDF on the Express server using the pdfkit npm package, and once it has finished generating we upload it directly from that server to the related S3 bucket using the aws-sdk npm package.
So, in the end, what we chose to do is simply send the relevant PDF data from the front end to the back end and generate the PDF there.
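A rough sketch of that flow, assuming aws-sdk v2 and pdfkit (the bucket variable, object key, and document contents are placeholders):

// Sketch: build the PDF in memory with pdfkit, then upload the buffer to S3.
const PDFDocument = require('pdfkit');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

function generateAndUploadPdf(data) {
  return new Promise((resolve, reject) => {
    const doc = new PDFDocument();
    const chunks = [];

    doc.on('data', chunk => chunks.push(chunk));
    doc.on('end', () => {
      s3.upload({
        Bucket: process.env.FILES_BUCKET_NAME,
        Key: `pdfs/${Date.now()}.pdf`, // placeholder key
        Body: Buffer.concat(chunks),
        ContentType: 'application/pdf'
      }, (err, result) => err ? reject(err) : resolve(result.Location));
    });

    doc.text(data.text); // whatever content the front end sent
    doc.end();
  });
}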

CRA embedded in express app breaks express routing

I have created a CRA app and have a couple of Express routes loading the CRA build files, for example:
app.get('/files', async (req, res, next) => {
  ...
  try {
    res.format({
      html: function() {
        const fileLoc = './public/react_ui/build/index.html';
        const stream = fs.createReadStream(path.resolve(fileLoc));
        stream.pipe(res);
      }
    });
  } catch (e) {
    next(e);
    res.redirect(SEE_OTHER.http_status, '/login');
  }
});
Prior to adding the CRA, the Express app exposed the /public folder like this:
// access to express App code files
app.use(express.static(__dirname + '/public'));
Now that the CRA app is embedded, I wanted to expose the build files like this, because otherwise the index.html generated by the CRA build does not know where the /static/js/* files are:
// access to React App build files
app.use(express.static(__dirname + '/public/react_ui/build'));
However, it breaks the Express routing. For instance, when I log out of the app, it is supposed to send me to the / endpoint, which checks whether I am logged in; if not, it is supposed to send me to the login page like this:
app.get('/', function(req, res) {
  ...
  isLoggedIn(req, function(status) {
    switch (status.status) {
      case 200:
        res.redirect(303, '/loader');
        break;
      default:
        res.redirect(303, '/login');
    }
  });
});
However, this is what breaks. If I remove the line that exposes the /build folder (repeated below), the routing works again and I am sent to the login page, but then accessing the CRA pages breaks because the build files are NOT FOUND.
// access to React App build files - if removed, routing works again
app.use(express.static(__dirname + '/public/react_ui/build'));
Does anyone have any suggestions as to why this is happening? I don't know if this is a react app issue, an express issue, or something else. Any insights would be helpful.
You have conflicting routes.
app.js
app.use('/', express.static(__dirname + '/path/to/static/build'));

// Don't use '/' for your own routes, as it is used for the static build.
app.use('/auth', (req, res) => {
  ...
  isLoggedIn(req, function(status) {
    switch (status.status) {
      case 200:
        res.redirect(303, '/loader');
        break;
      default:
        res.redirect(303, '/login');
    }
  });
})
Note that you can use whatever route you like for the static build; I have just shown the general convention.
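Another option, sketched below as my own addition rather than part of the answer above, is to keep '/' for the login check by registering that route before the static middleware, and letting every other unknown path fall through to the CRA's index.html (isLoggedIn and the build path are taken from the question):

// Sketch: route order matters - app routes first, static build after,
// index.html as the final catch-all for client-side routes.
app.get('/', function(req, res) {
  isLoggedIn(req, function(status) {
    res.redirect(303, status.status === 200 ? '/loader' : '/login');
  });
});

app.use(express.static(path.join(__dirname, 'public/react_ui/build')));

app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'public/react_ui/build/index.html'));
});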

Serve gzip html page in node

I use webpack-compression-plugin to compress all my static files and HTML files ahead of time into gzip and Brotli format. If the browser supports it I serve Brotli, otherwise gzip, and the last resort is the original file. So after bundling I would have something like this, for example:
bundle.js
bundle.js.gz
bundle.js.br
On the server I use express-static-gzip to serve static files and everything is working fine; all my client static assets are compressed and served that way.
import expressStaticGzip from 'express-static-gzip'

const app: Express = new Express()
process.env.PWD = process.cwd()

app.set('view engine', 'ejs')
app.set('views', path.join(process.env.PWD + '/src/server/views'))

app.use(expressStaticGzip(path.join(process.env.PWD + '/src/dist'), {indexFromEmptyFile: false, enableBrotli: true, maxAge: '1y'}))

app.use((req, res, next) => {
  res.set('Cache-Control', 'no-cache')
  return next()
})

/* Use server side rendering for first load */
app.use(appRenderer)

// Routes
app.get('*', (req, res) => {
  res.render('index')
})

app.listen(PORT, () => {
  console.log(`
    Express server is up on port ${PORT}
    Production environment
  `)
})
The problem I have is with my HTML file, the root page. Although I also have gzip and br versions of it, it is not served that way; it is produced when I bundle the server-side code. The Express compression module doesn't work for this, and I also want static (pre-compressed) serving. I am not using nginx.
With the help of this plugin, and as was suggested here, I got it working.
My code:
Ensure that you've pre-gzipped .js and .css files
const checkForHTML = req => {
  const url = req.url.split('.');
  const extension = url[url.length - 1];
  if (['/'].indexOf(extension) > -1) {
    return true; // compress only .html files sent from server
  }
  return false;
};

var compress = require('compression');
app.use(compress({filter: checkForHTML}));

const encodeResToGzip = contentType => (req, res, next) => {
  req.url = req.url + '.gz';
  res.set('Content-Encoding', 'gzip');
  res.set('Content-Type', contentType);
  next();
};

app.get("*.js", encodeResToGzip('text/javascript'));
app.get("*.css", encodeResToGzip('text/css'));
I wanted runtime compression to happen only for .html because I'm using .ejs templates, so the HTML has to be compressed at request time. Compressing static files (js/css) with the Express compression middleware isn't a good idea, because it would recompress them on every request even though they are static files that can be compressed ahead of time.
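One thing worth guarding against if you use this approach (my addition, not part of the original answer) is a client that does not advertise gzip support; only rewrite to the .gz file when the Accept-Encoding header allows it:

// Variant of encodeResToGzip that falls back to the uncompressed file
// when the client does not accept gzip.
const encodeResToGzip = contentType => (req, res, next) => {
  const acceptsGzip = (req.headers['accept-encoding'] || '').includes('gzip');
  if (acceptsGzip) {
    req.url = req.url + '.gz';
    res.set('Content-Encoding', 'gzip');
    res.set('Content-Type', contentType);
  }
  next();
};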
Or else, cache your results as suggested here
The other solution using nginx, which you posted in your comments, also seems nice.