We use HSQLDB as a file-based database because our application's requirements for an RDBMS are minimal. We would now like to move this application to Pivotal Cloud Foundry, where S3-compatible storage (in the cloud) is the only service that resembles the "filesystem" we have on physical machines.
So if we move our current HSQLDB to S3, we would not be able to make a direct JDBC connection to the HSQLDB "file" (since accessing S3 objects requires authentication, etc.).
Has anyone faced such a situation before? Are there ways to access HSQLDB with S3 as the storage medium?
Thanks,
Midhun
Pivotal Cloud Foundry allows you to bind volume mounts to your cf push'ed applications. Thanks to the NFS volume service (see cf marketplace -s), you can create and bind such a mount with the usual cf create-service and cf bind-service commands; your HSQLDB files then need to be written under the filesystem directory where the NFS volume is mounted.
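As a rough sketch (the NFS share, service instance name, app name, uid/gid, and mount path below are placeholders; run cf marketplace -s nfs to see the plans your foundation actually offers):

# Create a service instance pointing at an existing NFS export
cf create-service nfs Existing hsqldb-nfs -c '{"share": "nfs-server.example.com/export/hsqldb"}'

# Bind it to the app and choose where it is mounted inside the container
cf bind-service my-app hsqldb-nfs -c '{"uid": "1000", "gid": "1000", "mount": "/var/hsqldb"}'

# Restage so the new mount is picked up
cf restage my-app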
This could be a handy solution for running your app in Cloud Foundry with persistent filesystem storage for your HSQLDB database.
Default PCF installations provide such a mount from an NFS server. See the NFS volumes documentation, and in particular (for your PCF operator) how to enable this feature.
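Once the volume is mounted, the application can keep using a plain file-mode JDBC URL. A hypothetical example, assuming the /var/hsqldb mount path used in the sketch above:

# File-mode HSQLDB database living under the NFS mount (path and database name are placeholders)
export JDBC_URL="jdbc:hsqldb:file:/var/hsqldb/appdb"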
Since NFS has a single point of failure, I am thinking of building a storage layer using S3 or Google Cloud Storage as a PersistentVolume in my local k8s cluster.
After a lot of Googling, I still cannot find a way. I have tried using s3fs (FUSE) to mount the bucket locally and then creating a PV that points at the mount via hostPath (a rough sketch of this approach is below). However, a lot of my pods (for example Airflow, Jenkins) complained about missing write permissions, or reported errors like "version being changed".
Could someone help me figure out the right way to mount an S3 or GCS bucket as a PersistentVolume from a local cluster, without using AWS or GCP?
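For reference, what I tried boils down to roughly this (bucket name, mount point, and sizes are placeholders): an s3fs mount on the node, exposed to the cluster through a hostPath PersistentVolume:

# On the node: mount the bucket with s3fs (FUSE)
s3fs my-bucket /mnt/s3-data -o passwd_file=/etc/passwd-s3fs -o allow_other

# Then declare a PV/PVC pair that points at the mount via hostPath
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-hostpath-pv
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  hostPath:
    path: /mnt/s3-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-hostpath-pvc
spec:
  storageClassName: manual
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
EOF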
S3 is not a file system and is not intended to be used in this way.
I do not recommend using S3 this way, because in my experience FUSE drivers are very unstable; under I/O load you can easily break the mounted disk and get stuck in a "Transport endpoint is not connected" nightmare for you and your infrastructure users. It may also lead to high CPU usage and memory leaks.
Useful crosslinks:
How to mount S3 bucket on Kubernetes container/pods?
Amazon S3 with s3fs and fuse, transport endpoint is not connected
How stable is s3fs to mount an Amazon S3 bucket as a local directory
I'd love to expose a Google storage bucket over HDFS to a service.
The service in question is a SOLR cluster that can only speak to HDFS. Given that I have no Hadoop (nor any need for it), ideally I'd like a Docker container that would use a Google Storage bucket as a backend and expose its contents via HDFS.
If possible I'd like to avoid mounts (like FUSE gcsfuse); has anyone done such a thing?
I think I could just mount the bucket with gcsfuse and set up a single-node HDFS cluster on top, but is there a simpler / more robust way?
Any hints / directions are appreciated.
The Cloud Storage Connector for Hadoop is the tool you might need.
It is not a Docker image but rather something you install. Further instructions can be found in the GitHub repository under README.md and INSTALL.md.
If Cloud Storage is accessed from outside GCP (for example from AWS), you'll need a Service Account with access to Cloud Storage, and to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to /path/to/keyfile.
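As a rough sketch of the wiring (key path and bucket name are placeholders; the property names below are the ones listed in the connector's INSTALL.md):

# Credentials for a service account that can read/write the bucket
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile

# Properties to add inside <configuration> of core-site.xml:
#   fs.gs.impl                                     = com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
#   fs.AbstractFileSystem.gs.impl                  = com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS
#   google.cloud.auth.service.account.json.keyfile = /path/to/keyfile

# Sanity check: list a bucket through the Hadoop filesystem layer
hadoop fs -ls gs://my-bucket/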
To use SOLR with GCS, you do indeed need a Hadoop cluster; you can do that in GCP by creating a Dataproc cluster and then using the connector mentioned above to connect your SOLR solution with GCS. For more info, check this SOLR.
I want to deploy an S3-compatible blob storage in my Kubernetes cluster. I already use GlusterFS for volumes like MongoDB, and I tried to set up MinIO with the Helm chart https://github.com/helm/charts/tree/master/stable/minio. I just realized I can't scale up MinIO easily because of erasure coding.
So I have some questions about blob storage solutions:
Is the GlusterFS blob storage service stable and reliable (https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/gluster-s3-storage-template)?
Must I use OpenShift to deploy GlusterFS blob storage, as I've read on the web? I think not, because I can see plain Kubernetes manifests in the GlusterFS repo, like this one: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/kube-templates/gluster-s3-template.yaml.
Is it easy to use MinIO federation in Kubernetes? Is it easily scalable with a "helm upgrade --set replicas=X", or do I need to manually update the MinIO configuration?
As you can see, I feel lost with this S3 storage, so if you have more information/solutions, do not hesitate.
Thanks in advance !
Regarding reliability, you should read more about users' experience, for example:
An end user review of GlusterFS
Community Survey Feedback, 2019
Why OpenShift with GlusterFS:
For standalone Red Hat Gluster Storage, there is no component installation required to use it with OpenShift Container Platform. OpenShift Container Platform comes with a built-in GlusterFS volume driver, allowing it to make use of existing volumes on existing clusters. Note that Red Hat Gluster Storage is a commercial storage software product based on Gluster.
How to deploy it in AWS
For MinIO, please follow the official docs:
ConfigMap allows injecting containers with configuration data even while a Helm release is deployed.
To update your MinIO server configuration while it is deployed in a release, you need to:
Check all the configurable values in the MinIO chart using helm inspect values stable/minio.
Override the minio_server_config settings in a YAML-formatted file, and then pass that file like this: helm upgrade -f config.yaml stable/minio.
Restart the MinIO server(s) for the changes to take effect.
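Concretely, the cycle could look like this (release name and label selector are placeholders and depend on how the chart was installed):

# Dump the chart defaults to see what is configurable
helm inspect values stable/minio > config.yaml

# Edit minio_server_config (and anything else) in config.yaml, then roll the release
helm upgrade my-minio -f config.yaml stable/minio

# Restart the MinIO pod(s) so the new server configuration is picked up
kubectl delete pod -l release=my-minio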
I didn't try it, but as per the documentation:
For federation, I can see additional environment variables in values.yaml.
In addition, you should run MinIO in federated mode; see the Federation Quickstart Guide.
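A hedged sketch of what those federation settings might look like through the chart's environment values (the etcd endpoint, domain, and IPs are placeholders; the exact variables are described in the Federation Quickstart Guide):

# Federation relies on a shared etcd and a common domain across the MinIO deployments
helm upgrade my-minio stable/minio \
  --set environment.MINIO_ETCD_ENDPOINTS="http://etcd.example.com:2379" \
  --set environment.MINIO_DOMAIN="minio.example.com" \
  --set environment.MINIO_PUBLIC_IPS="10.0.0.1\,10.0.0.2"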
Here you can find the differences between Google and Amazon S3 storage,
or Cloud Storage interoperability from the Google Cloud perspective.
Hope this helps.
I want to deploy Spinnaker for my team, but I have encountered a problem. The Spinnaker documentation says:
Before you can deploy Spinnaker, you must configure it to use one of the supported storage types.
Azure Storage
Google Cloud Storage
Redis
S3
Can Spinnaker use local storage, such as a MySQL database?
The Spinnaker microservice responsible for persisting your pipeline configs and application metadata, front50, has support for the storage systems you listed. One could add support for additional systems like MySQL by extending front50, but that support does not exist today.
Some folks have had success configuring front50 to use S3 and pointing it at a MinIO installation.
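A minimal sketch with Halyard, assuming a MinIO endpoint reachable from Spinnaker (endpoint, bucket, and access key are placeholders; the first command prompts for the secret key):

# Point the S3 storage backend at the MinIO endpoint
hal config storage s3 edit \
  --endpoint http://minio.example.com:9000 \
  --bucket spinnaker \
  --access-key-id MINIO_ACCESS_KEY \
  --secret-access-key

# Make S3 the persistent storage type and redeploy
hal config storage edit --type s3
hal deploy apply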
I have searched minio.io for hours, but it doesn't provide any good information about clustering. Does it have rings, with instances connected to one another, or is MinIO just for a single isolated machine? And to run a cluster, do we have to run many isolated instances of it and have our app choose which instance to write to?
If yes:
When I write a file to a bucket, does MinIO replicate it between multiple servers?
Is it like Amazon S3 or OpenStack Swift, which support storing multiple copies of an object on different servers (and not just multiple disks on the same machine)?
Here is the documentation for distributed MinIO: https://docs.minio.io/docs/distributed-minio-quickstart-guide
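In short, the distributed mode documented there is started by running the same command on every node, listing all hosts/drives (hostnames, paths, and credentials below are placeholders); MinIO then erasure-codes objects across them:

# Run on each of the four nodes; MinIO forms the cluster from the listed endpoints
export MINIO_ACCESS_KEY=myaccesskey
export MINIO_SECRET_KEY=mysecretkey
minio server http://host{1...4}.example.com/data/minio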
From what I can tell, MinIO does not support clustering with automatic replication across multiple servers, balancing, etcetera.
However, the MinIO documentation does describe how you can set up one MinIO server to mirror another one:
https://gitlab.gioxa.com/opensource/minio/blob/1983925dcfc88d4140b40fc807414fe14d5391bd/docs/setup-replication-between-two-sites-running-minio.md
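The mirroring described there is driven by the mc client; a rough sketch (aliases, URLs, bucket names, and credentials are placeholders):

# Register both MinIO deployments with the mc client
mc config host add siteA http://minio-a.example.com:9000 ACCESS_KEY_A SECRET_KEY_A
mc config host add siteB http://minio-b.example.com:9000 ACCESS_KEY_B SECRET_KEY_B

# Continuously mirror new and changed objects from site A's bucket to site B's
mc mirror --watch siteA/mybucket siteB/mybucket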
MinIO has also introduced continuous availability and active-active bucket replication; check out their active-active replication guide.