Are there any Ansible modules to manage OpenStack load balancers (OpenStack LBaaS)?

I want to define a pool in OpenStack LBaaS (Load Balancer as a Service) and then assign a VIP to it in order to create a load-balanced cluster of servers. I want to automate this using Ansible, and I am looking for Ansible modules that could help achieve this.

Ansible doesn't provide a core module for Neutron management yet, and it doesn't appear in the openstack-ansible GitHub project.
Checking the TODO for the openstack-ansible project shows that they are still planning to add Neutron LBaaS configuration.

Ansible 2.7 now provides what you need, provided Octavia is installed and enabled on your OpenStack cloud:
Add/Delete load balancer from OpenStack Cloud:
https://docs.ansible.com/ansible/latest/modules/os_loadbalancer_module.html#os-loadbalancer-module
Add/Delete a listener for a load balancer from OpenStack Cloud
https://docs.ansible.com/ansible/latest/modules/os_listener_module.html#os-listener-module
Add/Delete a pool in the load balancing service from OpenStack Cloud
https://docs.ansible.com/ansible/latest/modules/os_pool_module.html#os-pool-module
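Putting the three modules together, a minimal playbook sketch might look like the following. This assumes an Octavia-enabled cloud; the names (web-lb, my-subnet, the clouds.yaml entry mycloud) are placeholders, not values from the question:

```yaml
# Sketch: create a load balancer, an HTTP listener, and a round-robin pool.
# All names and the "cloud" entry are hypothetical placeholders.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the load balancer on a subnet
      os_loadbalancer:
        cloud: mycloud
        name: web-lb
        vip_subnet: my-subnet
        state: present
        wait: yes

    - name: Add an HTTP listener to the load balancer
      os_listener:
        cloud: mycloud
        name: web-listener
        loadbalancer: web-lb
        protocol: HTTP
        protocol_port: 80
        state: present

    - name: Add a round-robin pool behind the listener
      os_pool:
        cloud: mycloud
        name: web-pool
        protocol: HTTP
        lb_algorithm: ROUND_ROBIN
        listener: web-listener
        state: present
```

Check the module docs linked above for the exact parameter names supported by your Ansible release, as the OpenStack modules changed between versions.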

Related

How to mount volume for stateless service that uses Apache Ignite

I have a service that runs on Kubernetes and uses Apache Ignite to store some data for processing; it runs in replication mode with native persistence enabled. How do I correctly mount the volume so the data is persisted to disk? Please note, this question is not about mounting volumes in Kubernetes in general, but about the configuration/method to enable persistence in a service running with an embedded Ignite server in Kubernetes.
Note: The application may run multiple replicas.
Edit: As volumes (PVCs) cannot be shared by multiple pods, only one pod runs successfully and the other pods are stuck in the Pending state.
Stateless means the system has no dependencies during its start or execution, but a system can only be as stateless as possible. Since the requirement here is persistence, Ignite has to be deployed as stateful, using a StatefulSet. The StatefulSet will automatically provision a separate volume and mount it into every pod.
Check out the Ignite guides for deploying on Kubernetes on AWS, GKE, and Azure.
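To illustrate the StatefulSet approach, here is a minimal manifest sketch. The name ignite-app, the image, the mount path, and the storage size are all illustrative placeholders; point the volume mount at wherever your embedded Ignite node writes its work/persistence directory:

```yaml
# Sketch: StatefulSet with one PersistentVolumeClaim per pod, so each
# replica gets its own disk and no pods contend for a shared PVC.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ignite-app            # placeholder name
spec:
  serviceName: ignite-app
  replicas: 3
  selector:
    matchLabels:
      app: ignite-app
  template:
    metadata:
      labels:
        app: ignite-app
    spec:
      containers:
        - name: ignite-app
          image: myregistry/ignite-app:latest   # placeholder image
          volumeMounts:
            - name: work-dir
              mountPath: /opt/ignite/work       # Ignite work/persistence dir (adjust)
  volumeClaimTemplates:        # provisions a separate PVC for every pod
    - metadata:
        name: work-dir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because volumeClaimTemplates creates a distinct PVC per replica, this avoids the Pending-pod problem described in the edit above.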

Can I run RediSearch on a separate server from Redis?

Originally I was trying to install RediSearch on top of AWS ElastiCache, but it seems they don't support modules in their managed service (makes sense).
Then I looked into running RediSearch on a separate EC2 instance within my VPC, which would let me use it without having to install it in ElastiCache directly.
Is this possible?
Thanks!
RediSearch is available as a service on Redis Enterprise Cloud from Redis Labs on AWS, Azure, and GCP.

How does Apache Ignite deploy in K8S?

On the Ignite website, I see deployment guides for Amazon EKS, Microsoft Azure Kubernetes Service, and Google Kubernetes Engine. If I run my own deployed K8s cluster, can I deploy Ignite there? Is it the same as deploying Ignite on those three platforms?
Sure, just skip the initial EKS/Azure initialization steps since you don't need them and move directly to the K8s configuration.
Alternatively, you might try the Apache Ignite and GridGain K8s operator, which simplifies the deployment.

Google Cloud Manage Tomcat Service

Does Google Cloud or AWS provide a managed Apache Tomcat that just takes a WAR file and auto-scales as load increases and decreases? Not Compute Engine; I don't want to create a VM. This should be handled by a managed service.
Google App Engine can directly take and run a WAR file - just use the appcfg deployment method.
You will have more options if you package with Docker, as this provides an image type that can be run in many places (multiple GCP, AWS, and Azure options, on-prem Kubernetes, etc.). This can even be as simple as a Dockerfile that just copies the WAR into a Jetty image:
FROM jetty:latest
COPY YOUR_WAR.war /var/lib/jetty/webapps
It might be better to explode the WAR, though; see the discussion in this question.
AWS provides AWS Elastic Beanstalk:
The AWS Elastic Beanstalk Tomcat platform is a set of environment configurations for Java web applications that can run in a Tomcat web container. Each configuration corresponds to a major version of Tomcat, like Java 8 with Tomcat 8.
Platform-specific configuration options are available in the AWS Management Console for modifying the configuration of a running environment. To avoid losing your environment's configuration when you terminate it, you can use saved configurations to save your settings and later apply them to another environment.
To save settings in your source code, you can include configuration files. Settings in configuration files are applied every time you create an environment or deploy your application. You can also use configuration files to install packages, run scripts, and perform other instance customization operations during deployments.
It also provides autoscaling:
The Auto Scaling group in your Elastic Beanstalk environment uses two Amazon CloudWatch alarms to trigger scaling operations. The default triggers scale when the average outbound network traffic from each instance is higher than 6 MB or lower than 2 MB over a period of five minutes. To use Amazon EC2 Auto Scaling effectively, configure triggers that are appropriate for your application, instance type, and service requirements. You can scale based on several statistics including latency, disk I/O, CPU utilization, and request count.
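As a concrete illustration of the configuration files mentioned above, a sketch of an .ebextensions file that switches the scaling trigger from the default network-traffic measure to CPU utilization might look like this (the thresholds and group sizes are illustrative, not recommendations):

```yaml
# .ebextensions/autoscaling.config (sketch; values are illustrative)
option_settings:
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization   # scale on CPU instead of NetworkOut
    Statistic: Average
    Unit: Percent
    UpperThreshold: "70"          # add an instance above 70% average CPU
    LowerThreshold: "30"          # remove an instance below 30%
  aws:autoscaling:asg:
    MinSize: "2"
    MaxSize: "6"
```

Placing this file under .ebextensions/ in your source bundle applies the settings on every environment creation or deployment, as described above.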

How to configure Redis in Google Cloud Platform connected to App Engine to autoscale

I want to deploy an autoscalable Redis in GCP and connect it to my App Engine app via my app.yml.
Is Cloud Launcher the proper way to launch the Redis service? I did so and selected the Redis click-to-deploy option (not the Bitnami one).
I configured the instances and deployed them.
After the instances were ready, the following command appeared:
gcloud compute ssh --project <project-name> --zone <zone-name> <redis-instance-name>
After doing this, do I have to configure the following things?
The instance IP address (I want it to be accessible only from inside my GCP account). Do I need to configure all 3 instances, or does Sentinel take care of the redirection?
The password for Redis Sentinel
The firewall