Have you found a way of persisting the Odoo core modules in v14 other than a volume? And so, is it possible to deploy Odoo on Cloud Run? - odoo

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30 €/month) + Cloud Run. But after a few minutes, the Odoo interface shows me a white screen, and the console fills with many log entries similar to this:
GET 404 1.04 KB 24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the generated static web assets seem to be stored alongside the core module, this information is lost once the container is killed. As I have spent a month looking for a solution, before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? And so, is it possible to deploy Odoo on Cloud Run?
Here are all the ideas I tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store this session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. But the problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. But although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate the assets so that they are stored in the database. But although I did that, the problem persisted.
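For reference, the regeneration I tried was along these lines (a hedged sketch; the domain filters are illustrative and the database should be backed up first):

```python
# Run inside `odoo-bin shell -d <database>`: delete the cached asset
# bundle attachments so Odoo rebuilds them on the next page load.
assets = env['ir.attachment'].search([
    ('url', '=like', '/web/content/%'),
    ('name', 'like', '%assets_%'),
])
assets.unlink()
env.cr.commit()
```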
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard approach, and therefore the most likely to work, although it would make GitOps harder to implement. Containers could be run on the VM through Docker Compose, which would help with deploying updates. GKE Anthos seems to support GitOps and to persist volumes too, but its description says it is stateless. Finally, there is deploying on a Kubernetes cluster, which uses containers and allows autoscaling, unlike Docker Compose on a VM; but it does seem more expensive and harder to set up. Regarding the cost, small worker node machines could keep the bill low during the night. Regarding the difficulty of deploying, GitOps is desired, so Argo or a similar tool would have to be added. I also heard GKE Autopilot has a good free tier and is easier to deploy.
Thanks in advance :)

Cloud Run isn't a good solution for this. Indeed, if the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance each time, and can therefore lose the file even in the middle of a session.
The best solution is to use VMs with sticky sessions configured. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only 1 cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. Plain GKE, however, can handle stateful deployments, as you need with Odoo.
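For the record, a minimal sketch of what a stateful Odoo deployment on GKE could look like, assuming the official odoo:14 image (resource names are illustrative; /var/lib/odoo is the image's default data_dir):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: odoo-filestore            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odoo
spec:
  replicas: 1                     # filestore on RWO storage => single replica
  selector:
    matchLabels: { app: odoo }
  template:
    metadata:
      labels: { app: odoo }
    spec:
      containers:
        - name: odoo
          image: odoo:14
          volumeMounts:
            - name: filestore
              mountPath: /var/lib/odoo   # default data_dir in the image
      volumes:
        - name: filestore
          persistentVolumeClaim:
            claimName: odoo-filestore
```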

Related

Make nconf (or other) config available to getServerSideProps... should I eject from nextjs cli?

First of all, I know that nextjs has support for .env files... and this is great.
However, I do not wish to write secrets to disk, ever, because they might wind up in a Docker image in an Amazon ECR repo and someday get read by a hacker... so I won't write them to a YAML or a .env file. This is our company policy: we integrate with HashiCorp Vault.
Now, my idea was to get these secrets and store them in nconf. nconf is just a memory-based storage engine for organizing config... nothing special. I had planned to eject from the nextjs cli and use a custom Express server (with TypeScript). Fine... I can do that. But it's a bit of a pain, because it seems people aren't doing that as much as they did 3 years ago, when I last used nextjs.
That is probably because they don't want to miss out on the automatic static rendering, and neither do I.
But basically, what I want to do is make a global variable available server-side in nextjs on every page: my nconf config. I want to run things on the server and not in the browser (no secrets in the browser).
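For illustration, here is roughly what I have in mind (a hedged sketch, not a working Vault integration; the endpoint, token handling, and key names are placeholders):

```ts
// config.ts -- a module-level singleton, loaded once per server process.
import nconf from "nconf";

let loaded: Promise<void> | null = null;

export function loadConfig(): Promise<void> {
  if (!loaded) {
    loaded = (async () => {
      nconf.use("memory"); // keep everything in memory, never on disk
      // Placeholder Vault call (KV v2 layout); requires Node 18+ for fetch.
      const res = await fetch(`${process.env.VAULT_ADDR}/v1/secret/data/app`, {
        headers: { "X-Vault-Token": process.env.VAULT_TOKEN ?? "" },
      });
      const body = await res.json();
      nconf.set("secrets", body.data.data);
    })();
  }
  return loaded;
}

// In a page:
// export async function getServerSideProps() {
//   await loadConfig();
//   const dbUrl = nconf.get("secrets").DB_URL; // server-side only
//   return { props: {} };
// }
```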
For instance... what about middleware? Can I run middleware without setting up a custom express server and ejecting? I feel like we're going to need middleware at some point; we're making an enterprise app. So I'm kind of using nconf as a litmus test. But hey, if there's a good way to handle secrets, LMK.
Am I missing something in the nextjs docs? Are there events or hooks I can tap into? Or is the whole thing kind of "nextjs way or the highway?" Because in that case I will need to eject. (I grew up in Drupal, where there were tons of hooks and you could do what you needed to with the right hook.)
Thanks for your help.

Efficient Nuxt-generated static site hosting: better on Amazon AWS or a cloud droplet?

Not sure if this belongs here or is well titled, but I will soon finish my first Nuxt project and I am not sure where to host it.
Usually I would use an Ionos or DigitalOcean droplet, but I was told that AWS Amplify or S3 (I have no idea about either solution) might be cheaper or maybe cost nothing, since it is a small project, because it depends on how intensive the processes are...
If true, would that still apply when I need to run git pull and then the build/generate process once a day to get new content (via nuxt/content)?
Sorry if this is expressed poorly, and thanks in advance for any helpful suggestion.
This question does not really belong on Stack Overflow because it's essentially opinion-based.
In order of preference, I personally recommend these:
Netlify
Vercel
DigitalOcean
GitHub Pages
Surge
More in the official Nuxt documentation: https://nuxtjs.org/docs/2.x/deployment/netlify-deployment
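One note on the daily rebuild: on Netlify you don't need git pull on a server at all; schedule a POST to a build hook instead. A hedged sketch (the hook ID is a placeholder you create in the site settings):

```sh
# crontab entry: trigger a Netlify build hook every day at 06:00
0 6 * * * curl -s -X POST https://api.netlify.com/build_hooks/<your-hook-id>
```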

Running integration/e2e tests on top of a Kubernetes stack

I’ve been digging a bit into the way people run integration and e2e tests in the context of Kubernetes, and I have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that let you run resources locally. But in the context of CI, and with a bunch of services, they do not seem to be a good fit, for obvious resource reasons. I think there are great opportunities in running tests for:
Validating manifests or helm charts
Validating the well behaving of a component as part of a bigger whole
Validating the global behaviour of a product
The point here is not really about the testing framework but more about the environment on top of which the tests could be run.
Do you share my thoughts? Have you ever run these kinds of tests? Do you have any feedback or insights about it?
Thanks a lot
Interesting question, and something that I have worked on over the last couple of months for my current employer. Essentially, we ship a product as Docker images with manifests. When writing e2e tests, I want to run the product as close to the customer environment as possible.
To solve this, we have built scripts that interact with our standard cloud provider (GCloud) to create a cluster, deploy the product, and then run the tests against it.
For the major cloud providers this is not a difficult task, but it can be time-consuming. There are a couple of things that we have learnt the hard way to keep in mind while developing the tests:
Concurrency: this may sound obvious, but do think about the number of concurrent builds your CI can run.
Latency from the cloud: don't assume that you will get an instant response to every command you run in the cloud. Also think about timeouts. If you bring up a product with lots of pods and services, what is an acceptable start-up time? (See the sketch after this list.)
Errors causing build failures: this is an interesting one. We have seen build errors due to network problems when communicating with our test deployment. These are nearly always transient, so it is best to stop them from failing the build.
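A hedged illustration of the last two points (the deployment and script names are made up):

```sh
# Wait for the deployment to settle before starting the tests,
# instead of assuming the cloud responds instantly.
kubectl rollout status deployment/my-product --timeout=300s

# Retry the test run a few times so transient network errors
# do not fail the whole build.
for i in 1 2 3; do
  ./run-e2e-tests.sh && break
  echo "transient failure, retrying ($i)"
  sleep 10
done
```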
One thing to look at: GitLab provides some documentation on how to build and test images in their CI pipeline.
On my side, I use Travis CI. I build my container image inside it, then run Kubernetes with kind (https://kind.sigs.k8s.io/) inside Travis CI, and then launch my e2e tests.
Here is some additional information in this blog post: https://k8s-school.fr/resources/en/blog/k8s-ci/
And here are the scripts to install kind inside Travis CI in 2 lines: https://github.com/k8s-school/kind-travis-ci.git. It allows lots of customization on the Kubernetes side (enabling PSPs, changing the CNI plugin).
Here is an example: https://github.com/lsst/qserv-operator
Alternatively, I use GitHub Actions, which makes it easy to install kind (https://github.com/helm/kind-action), provides plenty of features, and offers free worker nodes for open-source projects.
Here is an example: https://github.com/xrootd/xrootd-k8s-operator
Please note that GitHub Actions workers may not scale for large builds/e2e tests; Travis CI scales pretty well.
In my understanding, this workflow could be moved to an on-premise GitLab CI where your application can interact with other services located inside your network.
One interesting thing is that you do not have to maintain a Kubernetes cluster for your CI; kind will do it for you!
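To make the GitHub Actions route concrete, here is a minimal workflow sketch using the kind action linked above (the test entry point is illustrative):

```yaml
name: e2e
on: [push]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: helm/kind-action@v1   # spins up a throwaway kind cluster
      - run: kubectl cluster-info   # kubeconfig is already wired up
      - run: ./run-e2e-tests.sh     # illustrative test entry point
```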

NetApp 7-Mode simulator CIFS share creation

What are the different ways to create a CIFS share on the NetApp 7-Mode simulator? I have created shares using the command line and NetApp OnCommand System Manager. I want to know: is there any other way to do the same thing?
Well, everything creates the share in basically the same way. System Manager uses the API; you can write your own scripts or apps against the API, or you can use Workflow Automation (WFA) to do this work, but again, that is just calling the API. Once the 7-Mode shares are set up, though, they can be managed with the Windows MMC.
Regarding 7-Mode systems:
It is always better to work with the CLI. Just one example:
If you create a volume on a 7-Mode system, you have no way to define the security style at creation time. You have to change it after the volume is created, and that is often forgotten.
I only work with the CLI.
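For example, a share created entirely from the 7-Mode CLI, with the security style set explicitly right after volume creation (a sketch from memory of the 7-Mode syntax; names and sizes are illustrative, so verify against your simulator version):

```
vol create vol_cifs aggr0 20g       # create the backing volume
qtree security /vol/vol_cifs ntfs   # set the security style explicitly
cifs shares -add eng_share /vol/vol_cifs -comment "Engineering share"
cifs shares                         # list shares to verify
```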

Complex system and Vagrant

In production, we have a web infrastructure as follows:
Load Balancer (haproxy)
API Server (PHP + apache)
Frontend Server (Javascript + nginx)
MySQL Server
Redis Server
I'd love to start using Vagrant so that the development environments are exactly the same as production, and to make it easy for a new developer to jump-start his job.
The big question is: how should I build the box?
Should I put everything in one box or should I build more boxes? And how many?
It depends on the convention you've reached with your developers. Ask yourself one question: in what type of structure do you wish to work, distributed or centralized?
If the answer is "distributed", you can make one box per project. You won't get mixed up when you need to bring up a project whose last modification was some time ago. But this method takes a lot of memory and storage space, and it sometimes doesn't make sense if most of your projects are based on the same production environment.
If the answer is "centralized", one box for all projects built on the same environment is enough for you. It saves plenty of time, but it's also easy to get confused when you're looking for an old project. You can set up a Docker container per project inside your Vagrant box.
Additionally, I'd like to suggest using Packer for box building. It is definitely the right instrument for this goal: it can make a "ready to work" Vagrant box for every virtualization environment and execute shell scripts/configuration-management scripts. Just put everything essential for the production environment in the box; later, developers can add package dependencies through Vagrant provisioning and share them via the Vagrantfile settings.
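To make the multi-machine option concrete, here is a hedged sketch of one Vagrantfile mirroring the production roles from the question (box name, IPs, and provisioning paths are illustrative):

```ruby
# One Vagrantfile, one VM per production role.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # illustrative base box

  {
    "haproxy"  => "192.168.56.10",
    "api"      => "192.168.56.11",
    "frontend" => "192.168.56.12",
    "mysql"    => "192.168.56.13",
    "redis"    => "192.168.56.14",
  }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      # Provision each role with its own script (illustrative paths).
      node.vm.provision "shell", path: "provision/#{name}.sh"
    end
  end
end
```

A developer can then bring up only the machines they need, e.g. `vagrant up api mysql`.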