OpenStack Octavia - How to Load Balance Web Applications

I am trying to bring up the Octavia load balancer to balance user requests across 5 servers running a web application. I have been searching for a tutorial on this, as the OpenStack API documentation does not give a detailed guideline.
A brief background on my current OpenStack setup: we have OpenStack installed using Juju, and Octavia was also installed using Juju, following this link https://jaas.ai/octavia/15 with this overlay bundle: https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/overlays/loadbalancer-octavia.yaml. My problem is bringing Octavia up after this installation. I followed this tutorial https://docs.openstack.org/octavia/latest/install/install-ubuntu.html, but it seems to cover the same ground as the Juju commands, so I am at a loss as to how I am supposed to start up an Octavia instance.
Can someone point me to a resource that explains this?
Thank you.

Glad to have you trying out Octavia.
The document you are looking for is the load balancing cookbook: https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html
It is included in our end-user section of the Octavia documentation here: https://docs.openstack.org/octavia/latest/user/index.html
Michael
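
As a rough companion to the cookbook, here is a minimal sketch of the workflow it describes, using the openstacksdk Python client. The cloud name, subnet name, member addresses, and ports below are placeholders rather than values from the question, and the exact attribute names should be double-checked against the openstacksdk documentation:

```python
# Hypothetical sketch: build a basic HTTP load balancer over 5 web servers
# with openstacksdk. Cloud name, subnet name, member IPs, and ports are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a clouds.yaml entry named "mycloud"

# 1. Create the load balancer on the subnet the web servers live on.
subnet = conn.network.find_subnet("web-subnet")
lb = conn.load_balancer.create_load_balancer(
    name="web-lb", vip_subnet_id=subnet.id)
conn.load_balancer.wait_for_load_balancer(lb.id)

# 2. Add an HTTP listener on port 80.
listener = conn.load_balancer.create_listener(
    name="web-listener", protocol="HTTP", protocol_port=80,
    load_balancer_id=lb.id)
conn.load_balancer.wait_for_load_balancer(lb.id)

# 3. Create a round-robin pool behind the listener.
pool = conn.load_balancer.create_pool(
    name="web-pool", protocol="HTTP", lb_algorithm="ROUND_ROBIN",
    listener_id=listener.id)
conn.load_balancer.wait_for_load_balancer(lb.id)

# 4. Register the 5 application servers as pool members.
for address in ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14", "10.0.0.15"]:
    conn.load_balancer.create_member(
        pool, address=address, protocol_port=80, subnet_id=subnet.id)
    conn.load_balancer.wait_for_load_balancer(lb.id)

print("VIP address:", lb.vip_address)
```

Each step corresponds to one of the `openstack loadbalancer create`, `listener create`, `pool create`, and `member create` CLI commands that the cookbook demonstrates, so the same flow can be done entirely from the command line.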

Related

Have you found a way of persisting the Odoo core modules in v14 other than with a volume? And if so, is it possible to deploy Odoo in Google Cloud Run?

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30€/month) + Cloud Run, but after a few minutes the Odoo interface shows me a white screen, with many console log entries similar to this:
GET 404  1.04 KB  24 ms  Chrome 91  https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the generated web static files seem to be stored alongside the core module, this information is lost once the container is killed. I have spent a month looking for a solution, so before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than with a volume? And if so, is it possible to deploy Odoo in Cloud Run?
Here is a list of all the ideas I tried:
First, I thought that these CSS files were stored in the Werkzeug session, so I tried two addons that store this session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated by the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate the assets so they are stored in the database (see the diagnostic sketch after this list). Although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard approach, and so has the best chance of working, although it would make GitOps difficult to implement; containers could still be run on the VM through Docker Compose, which would help with deploying updates. GKE Anthos seems to support GitOps too and appears to persist volumes, but its description says it is stateless. Lastly, there is deploying to a Kubernetes cluster, which would use containers and allow autoscaling, unlike the Docker Compose on a VM approach, but it seems more expensive and harder to implement. Regarding the cost, the idea would be to use small worker node machines so the cost stays low during the night. Regarding the difficulty, I want to implement GitOps, so Argo or a similar tool would need to be added. Also, I heard GKE Autopilot has a good free tier and is easier to deploy to.
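For illustration, here is a minimal diagnostic sketch (run from `odoo shell`) of how one might check where the generated asset attachments actually live: in the database, which survives container restarts, or in the on-disk filestore, which a stateless Cloud Run instance loses. The search domain, field names, and the `ir_attachment.location` parameter reflect Odoo 14's `ir.attachment` model as I understand it; treat them as assumptions rather than a verified recipe.

```python
# Hypothetical diagnostic, run inside `odoo shell -d <your-db>`.
# Lists generated asset bundle attachments and reports whether each one
# is stored in the database column (db_datas) or only as a file in the
# filestore (store_fname), which would be lost with the container.
assets = env["ir.attachment"].search([("url", "like", "/web/content/%assets%")])
for att in assets:
    location = "database" if att.db_datas else "filestore (%s)" % att.store_fname
    print(att.id, att.url, "->", location)

# The storage backend is assumed to be controlled by the
# `ir_attachment.location` system parameter ("file" vs "db").
print(env["ir.config_parameter"].sudo().get_param("ir_attachment.location", "file"))
```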
Thanks in advance :)
Cloud Run isn't the right solution for that. Indeed, if the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance each time, and can therefore lose the files even in the middle of a session.
The best solution is to use a VM with a sticky-session configuration. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only one cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. GKE itself, however, can handle stateful deployments, which is what you need with Odoo.

Problem with installing spartacussampledataaddon for use with Spartacus 3.0

I've got a problem setting up my environment to work with Spartacus. I started by following the documentation and performed all operations step by step. In the end, I obtained a working local Commerce Cloud (2005.4) instance with the -spa websites showing up in the Backoffice.
[Screenshot: content of Backoffice]
Next, I created a fresh Spartacus 3.0 app using these docs and connected it to my Commerce instance. What I get is a storefront with a non-working PDP, Search Results Page (B2B), etc. I'm getting a Translation key missing 'TabPanelContainer.tabs.TabPanelContainer' in the chunk 'product' error all over the site.
[Screenshot: PDP error]
I don't know what might be the source of the problem because I'm strictly following the official documentation. Any help will be appreciated!
The "spartacussampledataaddon" addon was replaced by the "spartacussampledata" extension. Please download "spartacussampledata" and try again (https://sap.github.io/spartacus-docs/installing-sap-commerce-cloud-2005/). Also, please make sure the base site configured in Spartacus is the "spa" site.

Hazelcast apparently installs but doesn't show on the plugins page

I'm trying to install Hazelcast on 2 EC2 instances. My Openfire is 3.9.3, and I downloaded Hazelcast 3.4 from their official site.
When I upload it to my Openfire, it apparently works and returns me to the success page, but it isn't listed on my plugins page, and when I go to /system-clustering.jsp it says that clustering is not available. Other plugins work exactly as expected.
What could be happening? Thanks in advance.

How to implement Lucene.Net using Amazon S3

I'm trying to implement Lucene in my app using Amazon S3 to store the indexes that I generate, but I can't find any code examples or a clear article. If anyone has some experience with this, please point me to a guide or something that can help me get started.
There's a similar question here.
Here's an interesting article on how the biggest Solr service provider, Lucid Imagination, proposes deploying their Solr implementation on EC2.
And here's their Search-as-a-Service solution.
If you're not bound to S3, you can use a dedicated hosted Solr service called WebSolr.
Also, if you need a complete ALM/CI solution for your development project, there's a WebSolr module included in CloudBees.

Tips and steps for setting up a Ruby on Rails continuous integration server with GitHub integration

I'm still new to Ruby on Rails and am tasked with setting up a continuous integration server/service that uses the code hosted on GitHub and alerts when the tests fail. Amazon EC2 was recommended as the platform for the service.
I did some research and tried to set up the system using a step-by-step tutorial, but I'm not used to working with Amazon EC2, so I kind of failed at that.
Can you help me with some advice or first steps to take?
Thanks
For the CI server you can use CruiseControl, developed by ThoughtWorks. It's open source as well, and you can get it from GitHub. It's pretty simple to set up, taking only about 10 minutes, and since it's written in Ruby it's easy to hack and customize to your needs.
Reply to me if you run into any problem setting up CI.
Thanks