Can Spinnaker use local storage such as a MySQL database?

I want to deploy Spinnaker for my team, but I have run into a problem. The Spinnaker documentation says:
Before you can deploy Spinnaker, you must configure it to use one of the supported storage types.
Azure Storage
Google Cloud Storage
Redis
S3
Can Spinnaker use local storage, such as a MySQL database?

The Spinnaker microservice responsible for persisting your pipeline configs and application metadata, front50, supports the storage systems you listed. One could add support for additional systems like MySQL by extending front50, but that support does not exist today.
Some folks have had success configuring front50 to use S3 and pointing it at a MinIO installation, as sketched below.
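As a rough sketch with Halyard (the endpoint, access key, and bucket below are hypothetical; substitute your own):

# point front50's S3 support at a MinIO endpoint (hypothetical host and bucket)
hal config storage s3 edit \
    --endpoint http://minio.example.com:9000 \
    --access-key-id minio \
    --secret-access-key \
    --bucket spinnaker \
    --path-style-access true
# tell Spinnaker to persist to the s3 storage type
hal config storage edit --type s3

The --secret-access-key flag prompts for the secret interactively rather than taking it on the command line.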

Related

HDFS over S3 / Google storage bucket translation layer - how?

I'd love to expose a Google storage bucket over HDFS to a service.
The service in question is a Solr cluster that can speak only to HDFS. Given that I have no Hadoop (nor any need for it), ideally I'd like a Docker container that would use a Google storage bucket as a backend and expose its contents via HDFS.
If possible I'd like to avoid mounts (like FUSE gcsfs); has anyone done such a thing?
I think I could just mount gcsfs and set up a single-node cluster with HDFS, but is there a simpler / more robust way?
Any hints / directions are appreciated.
The Cloud Storage Connector for Hadoop is the tool you might need.
It is not a Docker image but rather an install; further instructions can be found in the GitHub repository under README.md and INSTALL.md.
If you access it from outside GCP (for example from AWS), you'll need a service account with access to Cloud Storage, and you'll need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to /path/to/keyfile.
To use Solr with GCS you do indeed need a Hadoop cluster; in GCP you can create a Dataproc cluster and then use the connector mentioned above to connect your Solr installation to GCS. For more information, check the Solr documentation.
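As a quick, hedged check (the project and bucket names below are made up), once the connector jar is on the Hadoop classpath you can list a bucket through the gs:// scheme:

# service account key, as described above
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile
# fs.gs.impl and fs.gs.project.id are the connector's documented properties; values are hypothetical
hadoop fs \
    -D fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem \
    -D fs.gs.project.id=my-gcp-project \
    -ls gs://my-bucket/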

Cloud Foundry on Google Cloud Platform

My application has a Jackrabbit Oak implementation, so it uses the direct binary upload feature through S3DataStore to store files on AWS S3. For the AWS S3 integration, we had created a service broker instance on Cloud Foundry, which ran on our on-premises server.
Now I have moved Cloud Foundry to Google Cloud Platform, but when I search for the AWS-S3 service broker with the cf marketplace command, I cannot see it.
How do I get the aws-s3 service broker? And if that is not possible, is there any way to integrate AWS S3 storage with an application deployed on Cloud Foundry on GCP in the above scenario?
It's hard to know what you had deployed on your platform as we don't have any context of what was installed there. Just a guess, but it sounds like you had the Tanzu AWS Service Broker installed. It has service offerings for aws-s3.
https://docs.pivotal.io/aws-services/creating.html#view
You can still install the Tanzu AWS Service Broker when running Tanzu Application Service on top of GCP; you just need an AWS account where the broker will create your service instances. The broker creates AWS resources on behalf of users under a given AWS account, so as long as you still have an AWS account, you can make it work.
That said, there's also a GCP broker that functions in the same way, so if you are trying to move off AWS to GCP entirely, you could look at using the GCP broker instead. GCP has a similar cloud storage offering.
https://docs.pivotal.io/partners/gcp-sb/index.html
Once you install either broker, you'll see its service plan offerings in your marketplace. If you're still not seeing them, check cf service-access as an admin user; you may need to enable access to those services with cf enable-service-access, for example as shown below.
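For example (the aws-s3 offering name is taken from the question; yours may differ):

# as an admin, list which service offerings and plans are visible to which orgs
cf service-access
# enable the broker's offering for all orgs
cf enable-service-access aws-s3
# it should now show up in the marketplace
cf marketplace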
Go back to the team that moved you between CF/AWS and CF/GCP and tell them you need your S3 broker back :)

HSQLDB on S3 Compatible Service

We use HSQLDB as a filesystem-based database, as our application's requirements for an RDBMS are minimal. We would now like to move this application to Pivotal Cloud Foundry, where S3-compatible storage is the only available service comparable to the "filesystem" we had on physical machines.
So if we move our current HSQLDB to S3, we would not be able to make a direct JDBC connection to the HSQLDB "file" (as accessing S3 objects needs authentication, etc.).
Has anyone faced such a situation before? Are there ways to access HSQLDB with S3 as a storage medium?
Thanks,
Midhun
Pivotal Cloud Foundry allows you to bind volume mounts to your cf push-ed applications. Thanks to the NFS volume service (see cf marketplace -s), you can bind volume mounts to your application with the usual cf create-service and cf bind-service commands. Then your HSQLDB files must be written under the filesystem directory where the NFS volume is mounted.
This could be a handy solution for running your app in Cloud Foundry with persistent filesystem storage for your HSQLDB database.
Default PCF installations provide such a mount from an NFS server. See the NFS volumes documentation, and especially, for your PCF operator, how to enable this feature. The flow looks something like the sketch below.
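A minimal sketch of that flow, with made-up service-instance, share, and app names:

# check that the nfs service is in the marketplace
cf marketplace -s nfs
# create a volume service instance pointing at an existing NFS export
cf create-service nfs Existing hsqldb-files -c '{"share": "nfs.example.com/export/hsqldb"}'
# bind it to the app with a mount path, then restage
cf bind-service my-app hsqldb-files -c '{"mount": "/var/hsqldb"}'
cf restage my-app

Your HSQLDB JDBC URL would then point at a file under that mount, e.g. jdbc:hsqldb:file:/var/hsqldb/mydb.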

Which S3-compatible blob storage?

I want to deploy an S3-compatible blob store in my Kubernetes cluster. I already use GlusterFS for volumes such as MongoDB's, and I tried to set up MinIO with the Helm chart https://github.com/helm/charts/tree/master/stable/minio. I just realized I can't scale MinIO up easily because of erasure coding.
So I have some questions about blob storage solutions :
Is the GlusterFS blob storage service stable and reliable (https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/gluster-s3-storage-template)?
Must I use OpenShift to deploy GlusterFS blob storage, as I read on the web? I think not, because I can see plain Kubernetes manifests in the GlusterFS repo, like this one: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/kube-templates/gluster-s3-template.yaml.
Is MinIO federation easy to use in Kubernetes? Is it easily scalable with a "helm upgrade --set replicas=X", or do I need to update the MinIO configuration manually?
As you can see, I feel lost among these S3 storage options, so if you have more information or other solutions, do not hesitate.
Thanks in advance !
Regarding reliability, you should read more about users' experience, for example:
An end user review of GlusterFS
Community Survey Feedback, 2019
Why OpenShift with GlusterFS:
For standalone Red Hat Gluster Storage, there is no component installation required to use it with OpenShift Container Platform. OpenShift Container Platform comes with a built-in GlusterFS volume driver, allowing it to make use of existing volumes on existing clusters. But Red Hat Gluster Storage is a commercial storage software product, based on Gluster.
How to deploy it in AWS
For MinIO, please follow the official docs:
A ConfigMap allows injecting configuration data into containers, even while a Helm release is deployed.
To update your MinIO server configuration while it is deployed in a release, you need to:
Check all the configurable values in the MinIO chart using helm inspect values stable/minio.
Override the minio_server_config settings in a YAML-formatted file, and then pass that file like this: helm upgrade -f config.yaml stable/minio.
Restart the MinIO server(s) for the changes to take effect, as sketched below.
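Concretely, the steps above look something like this (my-minio is a hypothetical release name, and the pod label may differ by chart version):

# dump the chart's configurable values into a file
helm inspect values stable/minio > config.yaml
# edit config.yaml (e.g. the minio_server_config section), then apply it
helm upgrade my-minio stable/minio -f config.yaml
# restart the MinIO pods so the new configuration takes effect
kubectl delete pod -l release=my-minio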
I didn't try it myself, but as per the documentation: for federation, there are additional environment variables in the chart's values.yaml. In addition, you should run MinIO in federated mode; see the Federation Quickstart Guide.
Here you can find the differences between Google and Amazon S3 storage, or see Cloud Storage interoperability from the gcloud perspective.
Hope this helps.

minio: What is the cluster architecture of minio.io object storage server?

I have searched minio.io for hours, but it doesn't provide any good information about clustering. Does it have rings, and are instances connected? Or is MinIO just for a single isolated machine, so that for a cluster we would have to run many isolated instances of it, and our app chooses which instance to write to?
If yes:
When I write a file to a bucket, does MinIO replicate it between multiple servers?
Is it like Amazon S3 or OpenStack Swift, which support storing multiple copies of an object on different servers (and not just multiple disks on the same machine)?
Here is the documentation for distributed MinIO: https://docs.minio.io/docs/distributed-minio-quickstart-guide
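In short, distributed MinIO pools the drives of several servers into one deployment and erasure-codes objects across them. A minimal sketch of the pattern from that guide (hostnames and credentials are made up; every node runs the same command):

# same credentials on every node (hypothetical values)
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123
# each node lists all nodes' drives; MinIO forms one distributed deployment
minio server http://node{1...4}.example.com/data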
From what I can tell, MinIO does not support clustering with automatic replication across multiple servers, load balancing, et cetera.
However, the MinIO documentation does describe how you can set up one MinIO server to mirror another one:
https://gitlab.gioxa.com/opensource/minio/blob/1983925dcfc88d4140b40fc807414fe14d5391bd/docs/setup-replication-between-two-sites-running-minio.md
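That mirroring is driven by the mc client; a rough sketch, with made-up hosts and credentials:

# register the two MinIO deployments with the mc client
mc config host add site1 http://minio1.example.com:9000 ACCESSKEY SECRETKEY
mc config host add site2 http://minio2.example.com:9000 ACCESSKEY SECRETKEY
# continuously mirror a bucket from site1 to site2
mc mirror --watch site1/mybucket site2/mybucket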
MinIO has also introduced continuous availability and active-active bucket replication; check out their active-active replication guide.