Efficient Nuxt-generated static site hosting: better on Amazon AWS or a cloud droplet?

I'm not sure if this belongs here or is well titled, but I'm about to finish my first Nuxt project and I'm not sure where to host it.
Usually I would use an Ionos or DigitalOcean droplet, but I was told that AWS Amplify or S3 (I have no idea about either solution) might be cheaper or maybe cost nothing for a small project like this, since the price depends on how intensive the processes are...
If true, would that still apply if I needed to run git pull and then the build/generate process once a day to pull in new content (via nuxt/content)?
Sorry if this is expressed poorly, and thanks in advance for any helpful suggestions.

This question doesn't really belong on Stack Overflow because it's essentially opinion-based.
In order of preference, I personally recommend these:
Netlify
Vercel
DigitalOcean
GitHub Pages
Surge
More in the official Nuxt documentation: https://nuxtjs.org/docs/2.x/deployment/netlify-deployment
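As for the once-a-day rebuild part of the question: hosts like Netlify and Vercel expose build hooks (URLs you can POST to in order to trigger a fresh clone and generate), so you never have to run git pull on a server yourself. A minimal sketch of a trigger script you could run from cron or a scheduled CI job (TypeScript on Node 18+; the hook URL is a placeholder you'd generate in the host's dashboard):

    // trigger-rebuild.ts - run once a day from cron or a scheduled CI job.
    // The hook URL below is a placeholder; create a real one in the Netlify
    // UI (Build hooks) or via Vercel's Deploy Hooks.
    const BUILD_HOOK_URL = "https://api.netlify.com/build_hooks/YOUR_HOOK_ID";

    async function triggerRebuild(): Promise<void> {
      // An empty POST is all a build hook needs; the host then re-clones
      // the repo, runs `nuxt generate`, and publishes the static output.
      const res = await fetch(BUILD_HOOK_URL, { method: "POST" });
      if (!res.ok) {
        throw new Error(`Build hook failed: ${res.status} ${res.statusText}`);
      }
      console.log("Rebuild triggered");
    }

    triggerRebuild().catch((err) => {
      console.error(err);
      process.exit(1);
    });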

Related

Make nconf (or other) config available to getServerSideProps... should I eject from nextjs cli?

First of all, I know that nextjs has support for .env files... and this is great.
However, I do not wish to write secrets to disk, ever, because they might wind up in a Docker image in an Amazon ECR repo and someday get read by a hacker... so I won't write them to a YAML or a .env file. This is our company policy: we integrate with HashiCorp Vault.
Now, my idea was to get these secrets and store them in nconf. Nconf is just a memory-based storage engine for organizing config... nothing special. I had planned to eject from the nextjs CLI and use a custom Express server (with TypeScript). Fine... I can do that. But it's a bit of a pain, because it seems people aren't doing that as much as they did 3 years ago, when I last used nextjs.
That is probably because they don't want to miss out on the automatic static rendering, and neither do I.
But basically what I want to do is to make a global variable available server-side in nextjs on every page: my nconf config. I want to run things on the server and not in the browser (no secrets in the browser).
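Something like this sketch is what I have in mind (fetchSecretsFromVault is a made-up placeholder for whatever Vault client we'd use): load the secrets into an in-memory nconf store once per server process, then read them in getServerSideProps, which only runs on the server:

    // lib/config.ts - hypothetical module: secrets live only in process memory.
    import nconf from "nconf";

    let loaded = false;

    async function fetchSecretsFromVault(): Promise<{ dbPassword: string }> {
      // Stand-in for a real Vault client call; nothing is written to disk.
      return { dbPassword: "from-vault" };
    }

    export async function getConfig(): Promise<typeof nconf> {
      if (!loaded) {
        nconf.use("memory"); // in-memory store, nothing persisted
        const secrets = await fetchSecretsFromVault();
        nconf.set("db:password", secrets.dbPassword);
        loaded = true;
      }
      return nconf;
    }

    // pages/example.tsx - getServerSideProps runs only on the server, so the
    // secret never reaches the browser unless we return it as a prop (don't).
    export async function getServerSideProps() {
      const config = await getConfig();
      // use config.get("db:password") here server-side, e.g. to open a DB
      // connection; only return non-secret data as props
      return { props: { ok: true } };
    }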
For instance... what about middleware? Can I run middleware without setting up a custom Express server and ejecting? I feel like we're going to need middleware at some point; we're making an enterprise app. So I'm kind of using nconf as a litmus test. But hey, if there's a good way to handle secrets, LMK.
Am I missing something in the nextjs docs? Are there events or hooks I can tap into? Or is the whole thing kind of "nextjs way or the highway?" Because in that case I will need to eject. (I grew up in Drupal, where there were tons of hooks and you could do what you needed to with the right hook.)
Thanks for your help.

Have you found a way of persisting the Odoo core modules in v14 other than a volume? And so, is it possible to deploy Odoo in gcloud run?

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30 €/month) + Cloud Run. But after some minutes pass, the Odoo interface shows me a white screen, with many log lines in the console similar to this:
GET 404 1.04 KB 24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, as Cloud Run is stateless and the static web files seem to be stored in the core module, this information is lost once the container is killed. As I've spent a month looking for a solution, before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? And so, is it possible to deploy Odoo in gcloud run?
Here are all the ideas I tried:
First, I thought these CSS files were stored in the werkzeug session, so I tried two addons that store the session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. But the problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. But although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate the assets so that they are stored in the DB. But although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard option, and so the most likely to work, although it would make GitOps difficult to implement. Containers could be deployed on the VM through Docker Compose, which would help with rolling out updates. GKE Anthos seems to support GitOps too and seems to persist volumes, but its description says it is stateless. Finally, there is deploying on a k8s cluster, which uses containers and allows autoscaling, unlike the Docker Compose-on-a-VM approach, but it seems more expensive and harder to set up. Regarding the cost, the idea would be to use small worker nodes so the bill stays low during the night. Regarding the difficulty, GitOps is desired, so Argo or something similar would have to be added. Also, I heard GKE Autopilot has a good free tier and is easier to deploy to.
Thanks in advance :)
Cloud Run isn't a good solution for this. Indeed, if the werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance on each request, and can therefore lose the files even in the middle of a session.
The best solution is to use a VM with a sticky-session configuration. You can use old-school deployment on Compute Engine, or a cloud-native solution with GKE/K8s. It's more or less the same cost if you have only 1 cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. But GKE itself can handle stateful deployments, as you need with Odoo.

Is there a Plone package to serve images via a CDN (Amazon CloudFront)?

Is there a way to manage images uploaded into a Plone system, but have them be synced and automatically served from Amazon S3/CloudFront?
I've seen a reference to a project that doesn't look like it's been touched since early 2011: http://plone.org/products/collective.cdn.core/ and that only has experimental image support, and not necessarily for Amazon S3/CloudFront.
There is not yet an add-on product that makes this "point-and-click" easy. https://github.com/collective/collective.cdn.core suggests that collective.cdn.core has continued to be developed, although the authors haven't pushed their releases to plone.org (shame, shame!). It still does not appear to include "native" Amazon/CloudFront support, but I suspect the add-on authors would welcome either code contributions or sponsorship to that end.

Capistrano deployment with lots of images

So we have this basic Rails 3 website deployed with Capistrano 2.5.19 plus the multistage extension.
The site is simple, but it has 40,000+ images out there. So deployments take a long time, both to our QA server and to production. The issue is not usually network load, because Capistrano only downloads what changed in SVN. The issue is the time it takes our servers to back up the old release (40k worth of images) and copy the new release (another 40k of images).
Does anyone know of a best-practice approach to this? Is the only way to split this into two SVN folders and two deployment scripts combined with some symlink magic? Or can I tell Capistrano to exclude the images on deployments where I know the images have not changed?
Well, we have this issue too. One solution is a library called fast_remote_cache, if you're on Linux.
https://github.com/37signals/fast_remote_cache
The idea is that it hard-links to the cache, so the copy is much faster. Once the site finally gets large enough that even this takes too long, it is time to consider asset servers.
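For the curious, the whole trick is the hard link: instead of copying each file's bytes into the new release, it creates a second directory entry pointing at the same data on disk, which is nearly instant and uses no extra space. A rough Node/TypeScript illustration of the same idea (not the library's actual code, which is a Ruby Capistrano strategy):

    // linkTree.ts - mirror a directory tree using hard links instead of copies.
    import * as fs from "fs";
    import * as path from "path";

    function linkTree(src: string, dest: string): void {
      fs.mkdirSync(dest, { recursive: true });
      for (const entry of fs.readdirSync(src, { withFileTypes: true })) {
        const from = path.join(src, entry.name);
        const to = path.join(dest, entry.name);
        if (entry.isDirectory()) {
          linkTree(from, to); // directories can't be hard-linked, so recurse
        } else {
          fs.linkSync(from, to); // new name, same inode: no bytes copied
        }
      }
    }

    // e.g. linkTree("shared/cached-copy", "releases/20110101120000");

The caveat is that a hard-linked file shares its contents with the original, so this only works when releases are treated as read-only.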
It's probably better not to have all those images in your repository, or at least to keep them in a separate repository.
You'll want to look into setting up an asset server. They're easy to hook into Rails, as long as you use the XXX_tag helpers. And you could just have the asset server run plain old Apache; no need for anything dynamic on it...
You might also be able to hook in a "cloud" file store (I'm thinking of Amazon S3, but there are plenty of others) to serve the same purpose: they'll provide file backup (and version control, in some cases), and you won't even have to worry about running the asset server yourself.
Hope this helps!

Anyone actually using Mosso Files (Amazon S3 competitor)?

We have a bunch of data on S3 (images) but just started reading about Mosso Files (Rackspace). Sometime this month they are going to add CDN capabilities, so any file you upload becomes part of the Limelight CDN.
Is anyone using this service? It's not as well documented or publicized as S3.
True, it's not as well documented or publicized as S3. But dude, it has CDN support, which S3 lacks (unless you're willing to pay extra, of course). The bad thing is you can't FTP into Mosso Cloud Files; you have to upload either through the web-based control panel or the API. Yet it's still cheap and worth it, especially with the CDN.
I am using the service and it's pretty good and cost-effective compared to S3.
We use it for all our client sites, from images to podcasts, and it's hands down the best way to distribute content and make it highly available, especially at this price!
cheers