I've always thought that a frontend should not be bloated in size. By "frontend" I usually mean a set of HTML, CSS and JS files, which are fairly small, especially when minified and compressed. So you can use whatever framework or library you love, and your dev node_modules can be enormous, but after compilation you get something lightweight that can be served e.g. by Nginx. Yes, I just described an SPA-like setup, not SSR, where a server process keeps running.
I once built a website with NuxtJS that had only runtime logic, so no backend was required. I just ran yarn generate and served all the resulting static files with Nginx.
Now I'm building an application that requires a backend (a separate Python process), with dynamic pages like /users/john and /users/jane. The Nuxt documentation says I can't use the generate option anymore, because such routing is dynamic. (Technically I could write a set of fetch functions to load users from the API at build time and generate the corresponding pages, but that doesn't work well for runtime data.) The only option left is Nuxt's server target.
There's even a page describing how to serve a Nuxt application with Nginx. It assumes you use the yarn start command, which starts a Node process. It works fine: dynamic content is routed and caching is done by Nginx, but it doesn't fit the "frontend is lightweight" model. I use Docker, which means I now have to ship the huge node_modules with me. The nuxt package alone is about 200 MB, which is pretty big for a small frontend app. I can run yarn install --production to save some space, but the resulting image is still huge.
Previously, when I wrote apps in React, they resulted in a single index.html which I served with Nginx. Dynamic routing was handled on the frontend, using react-router or similar.
To better understand my concerns, here's a rough comparison:
My old React apps: ~5 MB of disk space, no RAM, no CPU; routing is done by the index.html file
My previous site with Nuxt's static option: ~5 MB of disk space, no RAM, no CPU; routing is done by the file system (/page1/index.html, /page2/index.html)
My current site with Nuxt's server option: ~400 MB or more of disk space for a Docker image, plus RAM and CPU; routing is done by the Nuxt runtime
I don't want to overcomplicate things. Allocating that many resources to a simple web app is too much, especially when the task can be solved with a few static files.
The questions are:
Am I missing an option in NuxtJS that would solve my issue?
Am I just misusing NuxtJS, and would it be better to take plain VueJS with vue-router and develop the app as described in the "previously with React" section?
I think you are mistaken about SPA mode here.
Assume you have a page named users in your Nuxt pages; your folder structure looks like this:
pages/
  users/
    _name/
      index.vue
When you request /users/john, you can take john from the route params and make an axios call to your server.
After that, you can use the nuxt generate command to create your dist folder, and then serve dist with Nginx. Everything will work fine.
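To illustrate how the dynamic segment gets resolved in the browser rather than on a server, here is a small sketch. The /api/users/:name endpoint is hypothetical (your Python backend's URL scheme may differ), and the regex just mirrors what vue-router already does for a /users/_name page:

```javascript
// Sketch: in an SPA-generated Nuxt site, the dynamic part of the URL is
// resolved client-side. This helper mimics the matching vue-router does
// for a pages/users/_name route; /api/users/:name is a hypothetical endpoint.
function userApiUrl(pathname) {
  var match = pathname.match(/^\/users\/([^/]+)$/);
  if (!match) throw new Error('not a user page: ' + pathname);
  return '/api/users/' + match[1];
}

// In pages/users/_name/index.vue you would instead use the param that
// vue-router has already parsed, roughly:
// export default {
//   async asyncData({ params, $axios }) {
//     return { user: await $axios.$get('/api/users/' + params.name) };
//   }
// };
```

Because the param is read at runtime in the browser, nuxt generate plus any static file server is enough; no Node process is needed.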
Check this simple routing approach in the browser:

// Home, Users and NotFound are ordinary Vue component options objects,
// defined elsewhere (or imported).
const routes = {
  '/': Home,
  '/users': Users
}

new Vue({
  el: '#app',
  data: {
    currentRoute: window.location.pathname
  },
  computed: {
    ViewComponent () {
      // Unknown paths fall back to the NotFound component
      return routes[this.currentRoute] || NotFound
    }
  },
  render (h) { return h(this.ViewComponent) }
})
In the Users component, integrate with your Python backend.
You can use SPA mode in NuxtJS (mode: 'spa', or ssr: false in later versions), then run yarn generate or nuxt generate; it will generate a full bundle in the dist folder that can be served by any HTTP server.
This works fine with all dynamic routes; I tested it with a simple php -S localhost:8000, which just serves a folder over HTTP.
This works thanks to a trick with 200.html: https://dev.to/adnanbabakan/deploy-a-single-page-application-with-200-html-2p0f
For my project it generated all the needed data, and the folder size is just 13 MB (with all images, fonts, etc.).
You can read more about how this kind of static routing is handled in the vue-router docs: https://router.vuejs.org/guide/essentials/history-mode.html#example-server-configurations
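For reference, the relevant configuration is tiny. A nuxt.config.js sketch for Nuxt 2 (option names have changed across versions, so treat this as an assumption to check against your Nuxt release):

```javascript
// nuxt.config.js (sketch) — SPA mode with static generation, so
// `nuxt generate` emits dist/ plus the 200.html SPA fallback.
export default {
  target: 'static', // generate static files
  ssr: false        // SPA mode; `mode: 'spa'` on older Nuxt 2 releases
};
```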
I have a survey built with Next.js using Incremental Static Regeneration (ISR). I would like to bundle it so I can either publish it to npm or host a single entry file, so that I can use the survey in other applications.
It's currently hosted on Vercel and uses getStaticProps and getStaticPaths to call my API of 'surveys' and 'survey questions'. ISR is great because it allows me to dynamically load each step of the survey based on the API structure, and if I modify it, the revalidate property regenerates the new order of questions from the survey. It also lets me get away with having only one page for all surveys/question types.
My app structure is like this:
src/
  pages/
    [surveyid]/
      [...question].tsx
Based on the request (and the response received at build time/revalidation), the static files for the survey id are created, and the Next router routes to each survey question based on the next step in the JSON object from the API, e.g. /surveyid/question-1, /surveyid/question-2, etc.
This is all working well in production when deployed to Vercel.
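To make the setup concrete, here's a rough sketch of what such a page can look like, written in plain JS for brevity (the real file is TypeScript). fetchSurvey and the question list are stand-ins for the real surveys API:

```javascript
// pages/[surveyid]/[...question].tsx (sketch)
// `fetchSurvey` is a stand-in for the real surveys API so this is self-contained.
async function fetchSurvey(id) {
  return { id: id, questions: ['question-1', 'question-2'] };
}

// In the real file these two functions are `export`ed; plain declarations
// here keep the sketch runnable on its own.
async function getStaticPaths() {
  // fallback: 'blocking' lets unknown surveys be generated on first request
  return { paths: [], fallback: 'blocking' };
}

async function getStaticProps(context) {
  var params = context.params;
  var survey = await fetchSurvey(params.surveyid);
  // The catch-all segment holds the current step, e.g. ['question-2']
  var step = (params.question && params.question[0]) || survey.questions[0];
  return {
    props: { survey: survey, step: step },
    revalidate: 60 // ISR: regenerate this page at most once per minute
  };
}
```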
When it comes to bundling this so a survey can be loaded into other sites, I have been quite lost.
When I run next build, it builds the production files that are served to Vercel, but there are many entry points, not a single .js file.
I tried running next export and serving the out folder locally; the pages and links are accurate, but this breaks ISR, and the Next.js documentation states that next export doesn't work with ISR.
Ideally I would like to build the application into a single entry file, e.g. index.js, and then either publish it as a package to npm or host it on my server. Then I'd load the survey by installing the package or by adding the direct URL as the src of a script tag in my other projects, e.g. <script src='https://survey.com/widget.js'></script>, and provide some settings/options with the request so I can tell it which survey to return.
Is there a way for this to be done while still maintaining ISR?
Would I have to create some sort of entry file to dispatch the request and return the static files from my Vercel server instead, as a workaround?
I am currently trying to see if I can use Rollup to build it into a single file, but I am unsure whether this will break the Next router when it comes to dynamic rendering (or revalidation) of pages.
In a perfect world I would like to leverage some of Next's cool features as well, like its middleware, to determine the geolocation from the request header. But I'm happy if I can just get the survey to render in another project at this point.
My Nuxt JS project is set up with target: 'static' and ssr: false.
The app needs to connect to a local endpoint to retrieve some information.
I have multiple endpoints and need multiple instances of the app, and each instance must read only its own endpoint.
The question is: how do I change the endpoint address for each instance without rebuilding every one of them?
I tried an env file, and a JSON file in the static folder (so that the file is available in the dist folder after the build).
But if I modify the content of the env/JSON file in the dist folder and then reload the webpage (or even restart the web server that serves the dist folder), the app keeps using the original endpoint provided at build time.
Is there a way to do this, or do I have to switch to server-side rendering (which I would rather not use)?
Thank you!
When you use SSG, your app is bundled at build time. Last time I checked, there was no workaround for that. (I don't have the GitHub issue at hand, but it's a popular one.)
At the same time, I don't really see how it could work, since you want to mix something static with something dynamic at the same time.
SSR is the only way here.
Otherwise, you could have some other logic (not related to Nuxt) that regenerates the markup when your endpoints change, by fetching a remote endpoint, I guess.
With the nuxt content module it's possible to create a /content folder in the project directory and read JSON files from it.
Then, when the dist is created with the nuxt generate command, the content folder is included in the _nuxt folder of the dist, and if you modify the content of a JSON file and refresh the webpage that reads it, the new values are picked up.
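Whichever folder the file lives in, the detail that matters is how it is read: anything you import is inlined into the bundle at build time, which is why editing the file in dist has no effect; the file has to be fetched over HTTP when the app boots. A sketch of that idea, with loadJson as an injectable stand-in for fetch('/config.json') (the config.json name and apiBase key are assumptions, not Nuxt conventions):

```javascript
// Sketch: read the endpoint at runtime instead of build time.
// `import config from '~/static/config.json'` would be frozen into the
// bundle; fetching the file over HTTP when the app starts is not.
// `loadJson` is an injectable stand-in for `fetch` so the idea is testable.
async function loadRuntimeConfig(loadJson) {
  var config = await loadJson('/config.json');
  if (!config.apiBase) {
    throw new Error('config.json must define apiBase');
  }
  return config;
}

// In a real Nuxt plugin this might be wired up roughly as:
// export default async function ({ app }, inject) {
//   const cfg = await loadRuntimeConfig(url => fetch(url).then(r => r.json()));
//   inject('apiBase', cfg.apiBase);
// };
```

With this shape, each deployed instance can ship the same dist folder and differ only in the config.json it serves.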
I have a SvelteKit website that I deployed to Cloudflare Pages. The problem is that when I deploy the app with the static adapter and try to visit the site, it says "No webpage was found for the web address". When I use the Cloudflare adapter instead, the site works, but I noticed that the number of "Functions requests today" keeps increasing, even though my app does not have any functions (somehow every request is counted as a server function). So what am I doing wrong here?
When you run npm run build, does the build directory contain an index.html file? If not, you may need to specify prerender.default = true like so:
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    adapter: adapter(),
    prerender: {
      default: true
    }
  }
};

export default config;
With that you should get a /build directory containing index.html. Next, just follow the instructions from the Cloudflare Pages documentation for deploying your site: https://developers.cloudflare.com/pages/framework-guides/deploy-anything/
These instructions include the following:
Deploying with Cloudflare Pages
Deploy your site to Pages by logging in to the Cloudflare dashboard > Account Home > Pages and selecting Create a project. Select the new GitHub repository that you created and, in the Set up builds and deployments section, provide the following information:
Configuration option      | Value
Production branch         | main
Build command (optional)  | <YOUR_BUILD_COMMAND>
Build output directory    | <YOUR_BUILD_DIR>
Unlike many of the framework guides, the build command and build directory for your site are going to be completely custom. If you do not need a build step, leave the Build command field empty and specify a Build output directory. The build output directory is where your application's content lives.
After configuring your site, you can begin your first deploy. Your custom build command (if provided) will run, and Pages will deploy your static site.
For the complete guide to deploying your first site to Cloudflare Pages, refer to the Get started guide.
After you have deployed your site, you will receive a unique subdomain for your project on *.pages.dev. Cloudflare Pages will automatically rebuild your project and deploy it. You will also get access to preview deployments on new pull requests, so you can preview how changes look to your site before deploying them to production.
From these instructions it looks like you only need to set Production branch to your main branch (or whichever branch you would like deployed) and Build output directory to build (unless otherwise specified in your svelte.config.js). Ensure that your .gitignore does not exclude the /build directory (unless you want to use the Build command config instead, in which case go ahead and do that).
I have some React code (written by someone else) that needs to be served. The preferred method is via a Google Storage bucket, fronted by their Cloud CDN, and this works. However, due to some quirks in the code, there is a requirement to override 404s with 200s and serve content from the homepage instead (i.e. if there is a 404, don't serve a 404; serve the content of the homepage and return a 200 instead).
(If anyone is interested, this override currently is implemented in CloudFront on AWS. Google CDN does not provide this functionality yet)
So, if the code is served at "www.mysite.com/app/" and someone hits "www.mysite.com/app/not-here" (which would return a 404), what should happen is that the response should NOT be 404, but a 200 with the content being served from index.html instead.
I was able to get this working by bundling all the code inside a Docker container and then using the solution here. However, this setup means that if we have a code change, all the running containers need to be restarted, and the customer expects zero downtime; hence the bucket solution.
So I now need to do the same thing but with the files being proxied in (with the upstream being the CDN).
I cannot use the original solution since the files are no longer local, and httpd can't check for existence of something that is not local.
I've tried things like ProxyErrorOverride and ErrorDocument, and managed to get it to redirect, but that is not what is needed.
Does anyone know how/if this can be done?
If the question is "how do I catch the 404 returned by Cloud Storage when a file is missing, using httpd/apache?" — I don't know.
However, I don't think that's the best solution. Serving files directly from Cloud Storage is convenient, but not industrial-grade.
Imagine you deploy several broken files in succession: how do you roll back in a stable way?
It's best to package each code release as an atomic unit, a container for instance. Each version lives in its own container, and performing a rollback is easier and more consistent.
Now, your "container restart" issue. I don't know which platform your containers run on. If you run them on Compute Engine (a VM), that's maybe the worst option. Today there are container orchestration systems that let you deploy, scale containers up and down, and perform progressive rollouts, replacing the running containers with a newer version without downtime.
Cloud Run is a wonderful serverless solution for that; you also have Kubernetes (GKE on Google Cloud), which you can use with Knative for a better developer experience.
I'm having trouble setting up a development workflow that will do both of the following tasks simultaneously:
1. Recompile static assets (JS, CSS) on file change; serve these static assets; notify the browser that the page must reload whenever assets change; use react-hot-loader.
2. Use express to run a server that serves an API which the static assets will "consume"; have this express server restart any time my server files change.
Is the best practice to have webpack-dev-server serve the static assets and a separate web server serve the API on a different port? If so, the API calls in the JavaScript will need to specify the host and port, and will need to be changed before going to production. All the tutorials online seem to focus on #1 and don't set up an API server. Can anyone point me in the right direction?
I'm not opposed to using gulp and browserify, or SystemJS, instead of webpack, but it seems that if I want to use react-hot-loader, I need webpack.
You can do something like this:
1. Create an express() proxy.
2. Create a webpack-dev-server.
3. Link the assets with absolute URLs.
4. Start both servers.
Note: make sure the two servers run on different ports.
var proxy = require('proxy-middleware');
var express = require('express');
var url = require('url');
var webpack = require('webpack');
var WebpackDevServer = require('webpack-dev-server');
var config = require('./webpack.config');

// ----------------- Express proxy --------------------
var app = express();
// Forward asset requests to webpack-dev-server
app.use('/assets', proxy(url.parse('http://localhost:8000/dist/assets')));
app.get('/api/url', function(req, res) { /* ... */ });
app.post('/api/url', function(req, res) { /* ... */ });

// --------------- webpack-dev-server ---------------------
var server = new WebpackDevServer(webpack(config), config.devServer);

// --------------- Run both servers -----------------------
// config.port should be the 8000 referenced in the proxy URL above
server.listen(config.port, 'localhost', function(err) { /* ... */ });
app.listen(8080);
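As an aside, the inverse arrangement is also possible: let webpack-dev-server be the single entry point and proxy API calls through to express via devServer.proxy. A hedged sketch, using the same ports as above (the object-with-string-target form is the webpack-dev-server v3/v4 syntax; v5 changed proxy to an array):

```javascript
// webpack.config.js (sketch) — the browser talks only to
// webpack-dev-server on :8000, which proxies API calls to express on :8080.
module.exports = {
  // ...entry, output, module.rules as usual...
  devServer: {
    port: 8000,
    proxy: {
      // forward /api/* to the express API server
      '/api': 'http://localhost:8080'
    }
  }
};
```

This keeps the frontend code free of hardcoded API hosts during development, since everything is same-origin.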
We have been using gulp + webpack for the last year and it has been great. We have an API Gateway which serves up only one static HTML file (the index.html); everything else is just REST endpoints. In the index.html we reference a CSS file or two and a couple of scripts that load from our CDN (CloudFront).
If you run that way in production, your local setup just needs to use the webpack dev server as your "local CDN". You are correct that your JavaScript will need to know the API URL, since that differs between local and production. We actually have local, dev, stage, and production; once you have it working in two environments, it's not hard to add more (which is good!).
Our index.html has some template variables that get filled in by the API Gateway when it serves the file, similar to this:
<script>
  var siteConfig = {
    apiBase: '<%=apiBaseUri%>',
    cdnBase: '<%=cdnBaseUri%>'
  };
</script>
<script src="<%=cdnBaseUri%>/assets/js/bundle.min.js"></script>
Then you just pull in siteConfig when your React app is starting up and prepend those variables to any API/CDN calls. Swap the variables depending on your environment, and bam, you're done!
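As a tiny sketch of that prepending step (siteConfig is hardcoded here purely for illustration; in the real page it comes from the template variables above):

```javascript
// Sketch: a small helper that prepends the injected siteConfig values
// to API calls. In production siteConfig is filled in by the gateway;
// the example.com URLs here are placeholders.
var siteConfig = {
  apiBase: 'https://api.example.com', // would be '<%=apiBaseUri%>' when served
  cdnBase: 'https://cdn.example.com'  // would be '<%=cdnBaseUri%>' when served
};

function apiUrl(path) {
  // Normalize slashes so callers can pass '/users' or 'users'
  return siteConfig.apiBase.replace(/\/$/, '') + '/' + String(path).replace(/^\//, '');
}
```

Every fetch/axios call then goes through apiUrl(), so switching environments is just a matter of what the gateway injects.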
A slightly simpler approach: instead of filling in those variables when the page is served, you could do it at build time. That's how we started, but as things evolved it turned out to be easier to manage at run time.
BTW, I'm pretty sure you can accomplish all of this using just webpack; gulp probably isn't necessary, but at the time it was easier for us to use gulp for running unit tests, linting, etc.