Gulp connect-livereload doesn't work with gzip

Recently I added the gulp-gzip task to my gulp pipeline, which also has a livereload task that refreshes the browser when any file changes.
gulp.task('webserver', function() {
  connect.server({ port: 8080, root: 'public/', livereload: true });
});
The reload works fine, but the server doesn't serve the gzipped files correctly.
In the browser's network tab the file content appears compressed:
If I launch a simple-http-server in the same path, the .gz files are served correctly. Is it possible to tune the gulp-connect server to solve the .gz issue?

From my understanding, instead of gzipping your files beforehand and then delivering them from the server, you should, as you said, tune the gulp-connect server so that it compresses the files on the fly. For this, you can use the middleware configuration option of the gulp-connect server to plug in a gzip middleware, as is done in this gist, and of course you will need to add the connect-gzip dependency to your project. Hope it helps!
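A rough, untested sketch of what that could look like (it assumes connect-gzip exposes a gzip() middleware, as used in the linked gist):
var gulp = require('gulp');
var connect = require('gulp-connect');
var gzip = require('connect-gzip');

gulp.task('webserver', function() {
  connect.server({
    port: 8080,
    root: 'public/',
    livereload: true,
    // gulp-connect lets you return an array of extra connect middleware
    middleware: function(connect, opt) {
      // compress responses on the fly instead of serving pre-built .gz files
      return [gzip.gzip()];
    }
  });
});
With this in place you can drop the gulp-gzip step from the build entirely, since the server compresses the responses itself.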

Related

Using NuxtJS for dynamic routes without server target

I always thought that a frontend should not be bloated in size. By "frontend" I usually imagine a set of HTML, CSS and JS files, which are fairly small, especially when minified and compressed. So you can use whatever framework or library you love, and your dev node_modules can be enormous, but after compilation you get something lightweight that can be served e.g. by Nginx. Yes, I just described an SPA-like setup, not SSR, where there's a server process running.
I once built a website with NuxtJS that has only runtime logic, so no backend was required. I just ran yarn generate and served all the resulting static files with Nginx.
Now I'm building an application which requires a backend (a separate Python process), with dynamic pages like /users/john and /users/jane. The Nuxt documentation says I can't use the generate option anymore, because such routing is dynamic (technically I could write a set of fetch functions to load users from the API at build time and generate the corresponding pages, but that doesn't work well for runtime data). The only option is to use the server target of NuxtJS.
There's even a page describing how to serve a Nuxt application with Nginx. It assumes you use the yarn start command, which starts a Node process. It works fine: dynamic content is routed and caching is performed by Nginx, but it doesn't fit the model that "frontend is lightweight". I use Docker, which means I now need to bring huge node_modules with me; the nuxt package itself is about 200 MB, which is quite big for a small frontend app. I can run yarn install --production to save some space, but that still doesn't solve the issue that the resulting image is huge.
Previously, when I wrote apps in React, they resulted in a single index.html which I served with Nginx. That means such dynamic routing was handled by the frontend, using react-router or similar.
To better understand my concerns, here's a rough comparison:
My old React apps: ~5 MB of disk space, 0 RAM, 0 CPU, routing is done by index.html file
My previous site with Nuxt static option: ~5 MB of disk space, 0 RAM, 0 CPU, routing is done by file system (/page1/index.html, /page2/index.html)
My current site with Nuxt server option: ~ 400 MB or even more disk space for a docker image, RAM, CPU, routing is done by Nuxt runtime
I don't really want to overcomplicate things. Allocating a lot of resources for a simple web app is too much, especially when you can solve the task with the help of a few static files.
The questions are:
Am I missing some option in NuxtJS to solve my issue?
Am I just misusing NuxtJS, and is it better to use plain VueJS with vue-router and develop the app as I described in the "previously with React" section?
I think you are making a mistake here about SPA mode.
Assume you have a page named users in your Nuxt pages folder, and your folder structure is like this:
[pages]
  [users]
    [_name]
      index.vue
When /users/john is requested, you can take john from the route params and make an axios call to your server.
After that, you can use the nuxt generate command to create your dist folder and serve that dist folder with Nginx. Everything will work fine.
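As a rough sketch (the API URL and field names here are placeholders for your Python backend), pages/users/_name/index.vue could look like this:
<!-- pages/users/_name/index.vue -->
<template>
  <div>
    <h1 v-if="user">{{ user.name }}</h1>
    <p v-else>Loading…</p>
  </div>
</template>

<script>
import axios from 'axios'

export default {
  data() {
    return { user: null }
  },
  async mounted() {
    // Runs in the browser, so it also works for statically generated pages.
    // '/api/users/...' is a placeholder for your backend endpoint.
    const { data } = await axios.get(`/api/users/${this.$route.params.name}`)
    this.user = data
  }
}
</script>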
Check out this simple in-browser routing approach:
// Minimal client-side routing without vue-router (the Vue 2 "simple routing from scratch" pattern).
// The component import paths below are illustrative.
import Vue from 'vue'
import Home from './components/Home.vue'
import Users from './components/Users.vue'
import NotFound from './components/NotFound.vue'

const routes = {
  '/': Home,
  '/users': Users
}

new Vue({
  el: '#app',
  data: {
    currentRoute: window.location.pathname
  },
  computed: {
    ViewComponent () {
      // Fall back to NotFound for paths that are not in the route table
      return routes[this.currentRoute] || NotFound
    }
  },
  render (h) { return h(this.ViewComponent) }
})
In the Users component, integrate with your Python backend.
You can use SPA mode in NuxtJS (mode: 'spa', or ssr: false in the latest versions), then run yarn generate or nuxt generate; it will generate a full bundle in the dist folder that can be served by any HTTP server.
This works fine with all dynamic routes; I tested it with a simple php -S localhost:8000, which just serves a folder over HTTP.
This works thanks to a trick with 200.html: https://dev.to/adnanbabakan/deploy-a-single-page-application-with-200-html-2p0f
For my project it generated all the needed data, and the folder size is just 13 MB (with all images, fonts, etc.).
You can read more about the server configuration this kind of routing needs in the vue-router docs: https://router.vuejs.org/guide/essentials/history-mode.html#example-server-configurations
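For reference, a minimal nuxt.config.js for this setup might look like the following (a sketch for Nuxt 2; option names differ slightly between versions):
// nuxt.config.js
export default {
  ssr: false,            // Nuxt >= 2.15; older versions use `mode: 'spa'` instead
  target: 'static',      // `nuxt generate` writes the whole bundle to dist/
  generate: {
    fallback: '200.html' // SPA fallback used for dynamic routes like /users/john
  }
}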

Gatsby compression doesn't work on the live server, only on local

I am using the Brotli plugin to compress my bundles in Gatsby: https://github.com/ovhemert/gatsby-plugin-brotli
The plugin is configured as follows:
{
  resolve: "gatsby-plugin-brotli",
  options: {
    extensions: ["css", "html", "js", "svg", "ttf"],
  },
},
For some reason, it seems to work only when I gatsby serve it on my local machine (localhost:9000); uploading it to a bucket on S3 shows no compression whatsoever:
local deployment:
s3 deployment:
Nothing works, no matter what I've tried.
For uploading to S3, I've been using the gatsby-plugin-s3 package:
https://github.com/jariz/gatsby-plugin-s3
Any idea?
Thanks!
EDIT:
I checked the bucket to make sure that it contains the compressed files, and it does, but it also contains the uncompressed files:
So I guess I need to refine my question, but I'm not sure how... not sure what caused this issue.
So after spending quite some time on this issue, I finally found the answer in the AWS docs.
I'm using CloudFront to serve the site, but since this is a demo site for testing purposes, I didn't purchase an HTTPS certificate for it.
According to the AWS docs, CloudFront doesn't serve compressed content over HTTP, only over HTTPS:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Purchasing and installing a certificate solved this problem for me.

AWS EBS - Rails5 / nginx - robots.txt not found error (404)

I deployed a pretty standard Rails 5 app with AWS EBS.
My /robots.txt is not reachable and requests to its URL return a 404 error.
I put it in the /public folder along with 404.html, 422.html and 500.html pages, which are correctly served by nginx.
Any clue about what might be wrong? What shall I check?
EB CLI 3.14.6 (Python 2.7.1)
Ruby 2.4.3 / Rails 5.1.4 / Puma (gem) 3.7
It looks like a very similar question was asked 4 years ago on the official AWS forum: https://forums.aws.amazon.com/thread.jspa?threadID=150904
Only 4 years later did a brave guy from AWS step in with a reply! Here is the quoted reply:
Hello hello! I'm Chris, the new Ruby platforms person at Elastic
Beanstalk. Visiting this thread today, it looks like there's been a
lot of pain (and also confusion!) from Beanstalk's Ruby+Puma's
handling of static files.
Quick summary: When this thread was created (in 2014), Beanstalk was
essentially using the default Nginx that comes with Amazon Linux, with
only some logging modifications to support the health monitoring. That
spawned this thread, as static files are generally expected to be
served by the web server when one is present.
So, the folks here went and fixed the /assets folder. Great!
Unfortunately, there was a misunderstanding with the request to fix
serving the /public folder - Beanstalk's Puma platform instead serves
things in '/public' from '/public', not from '/'. This is definitely
an issue, so here's some workarounds:
Workaround 1: Turning on serve static assets. Yes, this wastes some
application threads here or there, but if your use case is only
robots.txt and favicon.ico, you're only robbing a couple of appserver
threads. I'd pick this one unless I was running my application servers
hot.
Workaround 2: Write an .ebextension to modify the Nginx configuration
to serve /public at /. I'm in the process of writing one, so I'll tack
it as a reply to this when I've given it the thought it deserves. Some
of the current ones may serve your app's code, so double check the
configuration if you've already done this workaround.
I've created a tracking issue for the team with this level of detail,
so we'll work to get this corrected. Thank you all for your feedback -
we'd love to serve you and your apps better.
Since then, no further replies; if anybody knows the "AWS-approved" way to edit the nginx config with .ebextensions, please post it here! :)
In AWS EB with Puma, static files under the public folder are served under the /public/ URL, but web crawlers expect the file to be available at /robots.txt.
I struggled trying to implement routing to these files and settled instead on a more "Rails" way of implementing this.
1) config/routes.rb
get "/robots.txt", to: "robots#show"
2) app/controllers/robots_controller.rb
class RobotsController < ApplicationController
  def show
    render "show", layout: false, content_type: "text/plain"
  end
end
3) app/views/robots/show.erb
User-agent: *
Disallow: /
The above link to the AWS forums is returning a 400 error right now, so here's how I fixed this issue, with Ruby 2.7 running on the AWS2 platform:
Static files in a sub-directory of /public:
Create a file under the .ebextensions folder called static-files.conf. Its content should look similar to:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /w3c: public/w3c
    /images: public/images
This will ensure that all requests to domain.com/images and domain.com/w3c are served from the appropriate /public sub-directory.
Static files in the top level of the /public directory:
For top-level files like robots.txt or sitemap.xml, add an appropriate entry to routes.rb to serve the static content directly:
get '/robots.txt', to: proc {|env| [200, {}, [File.open(Rails.root.join('public', 'robots.txt')).read]] }
get '/sitemap.xml', to: proc {|env| [200, {}, [File.open(Rails.root.join('public', 'sitemap.xml')).read]] }
Ensure production.rb has static files config set properly:
config.serve_static_files = false
This last part is the most important.
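One caveat: config.serve_static_files was removed in Rails 5.1; on newer Rails versions the equivalent switch in config/environments/production.rb is public_file_server.enabled, for example:
# config/environments/production.rb
# Disable in-app static file serving only when nginx handles /public for you;
# the ENV toggle below follows the default Rails generator's convention.
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?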

Make Kestrel serve the existing minified version of a requested file

What I want to achieve is the following:
When I receive a request for an HTML, JS or CSS file, return its minified version. I minify when I publish, not dynamically.
Example:
The browser asks for /index.html
The server checks the extension
The server checks whether index.min.html exists on disk
If it exists, the server returns index.min.html, otherwise index.html
All in a single request
I tried reading about rewriting and file providers, but I didn't find a standard or easy way to do this.
What would be the right way to do this?
Thanks in advance!
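One possible way to sketch those steps in ASP.NET Core is a small piece of custom middleware registered before UseStaticFiles that rewrites the request path whenever a .min variant exists on disk. The following is a rough, untested sketch (the extension list and wwwroot layout are assumptions):
// Program.cs (.NET 6+ minimal hosting) -- a sketch, not a drop-in solution
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var minifiable = new[] { ".html", ".js", ".css" };

app.Use(async (context, next) =>
{
    var path = context.Request.Path.Value ?? "";
    var ext = Path.GetExtension(path).ToLowerInvariant();

    // Only consider HTML/JS/CSS requests that are not already pointing at a .min file
    if (minifiable.Contains(ext) &&
        !path.EndsWith(".min" + ext, StringComparison.OrdinalIgnoreCase))
    {
        var minPath = Path.ChangeExtension(path, null) + ".min" + ext;

        // If e.g. index.min.html exists in wwwroot, rewrite the request to it
        if (app.Environment.WebRootFileProvider.GetFileInfo(minPath).Exists)
        {
            context.Request.Path = minPath;
        }
    }

    await next();
});

app.UseStaticFiles(); // serves whichever path the middleware left in place
app.Run();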

Nginx is Slower than Apache downloading main.bundle.js

I have an Angular2 app that I've been developing for a while now. Locally I run an Nginx server, but the deployment server uses Apache. To unify things I moved the deployment server to Nginx, but I am getting extremely slow results with Nginx.
Apache loads in ~5 seconds (1.1MB transferred)
Nginx loads in 16-20 seconds (5MB transferred)
These are both on the same server, pointing to the exact same directory. The actual size of main.bundle.js is 4470365 bytes, so it seems Nginx is serving the entire, uncompressed file.
How is Apache able to transfer only 737K?
You can check which features are enabled for the file on both Nginx and Apache by clicking on the exact file in the Inspect Element Network tab, then going to Headers and then Response Headers, as illustrated in the attached image.
Check whether gzip compression is enabled on either server; that is the reason for the smaller file size.
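If gzip turns out to be the difference, enabling it in Nginx usually comes down to a few directives in the http or server block, something like this sketch (types and compression level are illustrative):
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_types text/css application/javascript application/json image/svg+xml;
# text/html is always compressed when gzip is on, so it is not listed here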