I have a service that runs on Kubernetes, uses Apache Ignite to store some data for processing, and runs in replication mode with native persistence enabled. How do I correctly mount the volume so the data is persisted to disk? Please note, this question is not about mounting volumes in Kubernetes itself, but rather about the configuration/method to enable persistence in a service running with an embedded Ignite server in Kubernetes.
Note: The application may run multiple replicas.
Edit: As volumes (PVCs) cannot be shared by multiple pods, only one pod runs successfully, and the other pods are stuck in the Pending state.
"Stateless" means the system has no dependency on persistent state during its start or execution, and a system can only be as stateless as possible. Since the need here is persistence itself, Ignite has to be deployed as a stateful workload using a StatefulSet. The StatefulSet will automatically provision a separate volume and mount it into every pod.
Check out the Ignite guides for deploying on Kubernetes on AWS, GKE, and Azure.
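For the embedded-server side of the question, here is a minimal sketch of enabling Ignite native persistence and pointing the storage directories at the mounted volume. The mount path /opt/ignite/persistence is an assumption; use whatever path your StatefulSet's volumeClaimTemplate mounts into the pod.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmbeddedIgniteServer {
    public static Ignite start() {
        // Enable native persistence for the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDefaultDataRegionConfiguration(
                new DataRegionConfiguration()
                        .setName("Default_Region")
                        .setPersistenceEnabled(true));

        // Point persistence files and WAL at the volume mounted by the StatefulSet
        // (the path is an assumption -- match it to your volumeMount).
        storageCfg.setStoragePath("/opt/ignite/persistence/db");
        storageCfg.setWalPath("/opt/ignite/persistence/wal");
        storageCfg.setWalArchivePath("/opt/ignite/persistence/wal-archive");

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // With persistence enabled the cluster starts inactive; activate it once the
        // expected baseline nodes have joined (older versions: cluster().active(true)).
        ignite.cluster().state(ClusterState.ACTIVE);
        return ignite;
    }
}
```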
Related
On the Ignite website, I see guides for Amazon EKS, Microsoft Azure Kubernetes Service, and Google Kubernetes Engine deployments, i.e. how to deploy Ignite on each of the three platforms. If I am on my own self-managed Kubernetes, can I still deploy it? Is it the same as deploying Ignite on those three platforms?
Sure, just skip the initial EKS/Azure initialization steps since you don't need them and move directly to the K8s configuration.
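For a self-managed cluster, the relevant part is the Ignite Kubernetes discovery configuration. A hedged sketch, assuming the ignite-kubernetes module is on the classpath and that a headless service named ignite-service exists in the ignite namespace (both names are placeholders); note that recent Ignite releases wrap these settings in a KubernetesConnectionConfiguration instead of the direct setters shown here.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class IgniteOnPlainK8s {
    public static Ignite start() {
        // Node discovery via the Kubernetes API: nodes find each other through the
        // endpoints of a headless service instead of static IP lists.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("ignite");            // placeholder namespace
        ipFinder.setServiceName("ignite-service");  // placeholder headless service

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        return Ignition.start(cfg);
    }
}
```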
Alternatively, you might try the Apache Ignite and GridGain Kubernetes operator, which simplifies the deployment.
This is regarding a use case where we are trying to use Redis in PCF (Pivotal Cloud Foundry). In our use case, we will refresh the Redis cache once or twice daily with the required data, and then the API will query Redis and provide the response.
One thing of particular concern for us is that we want API queries to be served from Redis only, which means Redis must be available at all times. But whenever we are refreshing the Redis DB, Redis would not be able to serve the APIs since it is refreshing the keys. To avoid that, we wanted to set up Redis in cluster mode or master-slave mode, so that while one instance is being written to, the other can be read from.
How can we set up a Redis cluster or master-slave mode in PCF and fulfil our requirement?
Please provide any other suggestions as well that you may have.
At the time I write this, the Redis for Pivotal Platform product does not support clustering. See "Availability" in the docs here -> https://docs.pivotal.io/redis/2-3/erc.html#offerings.
All Redis for Pivotal Platform services are single VMs without clustering capabilities. This means that planned maintenance jobs (e.g., upgrades) can result in 2–10 minutes of downtime, depending on the nature of the upgrade. Unplanned downtime (e.g., VM failure) also affects the Redis service.
Redis for Pivotal Platform has been used successfully in enterprise-ready apps that can tolerate downtime. Pre-existing data is not lost during downtime with the default persistence configuration. Successful apps include those where the downtime is passively handled or where the app handles failover logic.
If you require clustered Redis, you'd need to look at a different offering. Redis Labs has some offerings that integrate with PCF, you could use a Cloud Provider's Redis offering, or you could host your own.
If the solution you use isn't integrated into PCF, you can create a user-provided service with cf cups and provide the Redis credentials to your application that way. It will function just like a Redis service instance created through the marketplace.
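For illustration, a hedged sketch of consuming such a user-provided service from a Java app. Credentials passed to cf cups (e.g. cf cups my-redis -p '{"host":"...","port":6379,"password":"..."}') surface in the VCAP_SERVICES environment variable once the service is bound with cf bind-service and the app is restaged; the service name my-redis and the credential keys host, port, and password are assumptions that must match whatever JSON you supplied to cf cups.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import redis.clients.jedis.Jedis;

public class RedisFromCups {
    public static void main(String[] args) throws Exception {
        // Parse VCAP_SERVICES, where Cloud Foundry exposes bound service credentials.
        JsonNode vcap = new ObjectMapper().readTree(System.getenv("VCAP_SERVICES"));

        // User-provided services are listed under the "user-provided" key; if you bind
        // several, look them up by name instead of taking the first entry.
        JsonNode creds = vcap.get("user-provided").get(0).get("credentials");

        try (Jedis jedis = new Jedis(creds.get("host").asText(), creds.get("port").asInt())) {
            jedis.auth(creds.get("password").asText());
            jedis.set("healthcheck", "ok");
            System.out.println(jedis.get("healthcheck"));
        }
    }
}
```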
Currently Hazelcast is using cloud discovery for communication.
So if there are 4 Kubernetes pods and each of them has an in-memory Hazelcast instance, whenever the Hazelcast cache is updated in one of the pods, it gets updated in one of the other pods. But in case both of these pods get downscaled and terminated, the data that lives only in these 2 pods is lost. Can we have something like Redis, where we can provide the server and port of the Hazelcast cluster and it will be independent of the Kubernetes pods?
Please check the following blog post ("Scale without Data Loss!" section) to read how to scale a Hazelcast cluster on Kubernetes without data loss.
Also, you can check the official README of the hazelcast/hazelcast-kubernetes plugin. There is a section dedicated to scaling there.
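On the "provide server and port like Redis" part of the question: one option is to run Hazelcast as its own cluster (for example a dedicated StatefulSet, or VMs outside Kubernetes) and have the application pods connect as Hazelcast clients instead of embedding a member in each pod. A minimal sketch, assuming a reachable address such as hazelcast-service:5701 (a placeholder) and the Hazelcast client dependency on the classpath:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class HazelcastClientExample {
    public static void main(String[] args) {
        // Connect to an external Hazelcast cluster by address, so the cached data
        // lives outside the application pods and survives their downscaling.
        ClientConfig config = new ClientConfig();
        config.getNetworkConfig().addAddress("hazelcast-service:5701"); // placeholder address

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        Map<String, String> cache = client.getMap("app-cache");
        cache.put("key", "value");
        System.out.println(cache.get("key"));
        client.shutdown();
    }
}
```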
I see Fargate as a good service for deploying a Docker Compose based stack, but I was wondering if it is any good for "long-running" web services, not just ones where you need dynamic scaling and indeterminate workloads (e.g. containers that are created and die on demand).
That depends on your use case. ECS lets you quickly deploy containerized applications. With Fargate we don't need to manage the underlying infrastructure (think of it as a serverless approach for containers). Fargate is suitable for long-running apps, microservices, and batch jobs.
A few of my observations on Fargate:
Fargate storage is ephemeral - we cannot persist container data to disk via volumes (although Fargate provides 10 GB of volume mounts, this is nonpersistent, empty storage).
Logs can be sent to CloudWatch using the awslogs driver. Recently AWS announced support for the Splunk log driver.
Fargate uses only the awsvpc network mode.
Fargate supports environment variables. Environment variables are the only way to pass parameters to the container (see the sketch below).
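To illustrate that last observation, here is a minimal sketch of a containerized Java app reading its parameters from environment variables defined in the Fargate task definition. The variable names CACHE_HOST and CACHE_PORT are placeholders, not anything Fargate defines.

```java
public class FargateConfig {
    public static void main(String[] args) {
        // Parameters arrive only via environment variables set in the task definition;
        // fall back to defaults for local runs.
        String cacheHost = System.getenv().getOrDefault("CACHE_HOST", "localhost"); // placeholder name
        int cachePort = Integer.parseInt(System.getenv().getOrDefault("CACHE_PORT", "6379"));

        System.out.printf("Connecting to cache at %s:%d%n", cacheHost, cachePort);
    }
}
```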
I wanted to share some data with every worker in a swarm cluster. What are the possible methods to do so? The swarm was created from Docker Cloud with Azure integration.
Can I attach a single data disk to all worker VMs in an Azure swarm cluster?
Adding a single data disk to all worker VMs is not possible, as Azure does not provide a "shared disk" facility. The only thing that comes close to it is Azure Files; see here -> https://learn.microsoft.com/en-us/azure/storage/storage-how-to-use-files-linux for further details.