OpenStack Swift S3 object storage configuration in Kaltura community edition

Kaltura community edition supports remote file storage, including S3 object storage from Amazon AWS; the configuration settings are entered in the admin GUI.
With the Amazon AWS settings it works perfectly, but it does not work with a custom storage endpoint such as OpenStack Swift. Even though the storage URL in the configuration is different from AWS, the server still uses one of the preconfigured Amazon AWS regional endpoints.
The server is Ubuntu 16, Kaltura community edition version Propus 16.06, installed from the Kaltura installation scripts.
After installation there are several AWS configuration files for S3 endpoints under /opt/kaltura.
One is in /opt/kaltura/app/vendor/aws/Aws/Common/Resources/public-endpoints.php
After adding a custom endpoint to that file, for example
'newregion' => array(
    'endpoint' => '100.100.XX.XX:8080'
),
and entering the new region in the GUI configuration, the system still uses the Amazon AWS endpoints.
The expected configuration for a non-AWS S3 endpoint could be:
Storage URL: 100.100.XX.XX:8080 [OpenStack, not AWS endpoint]
Storage base Directory: bucketname/foldername
Path Manager: External Path
Storage username: Openstack S3 Access
Storage Password: Openstack S3 Secret

Related

AWS Transfer or S3 VPC Interface EndPoint

I have a requirement to SFTP ".csv" files from a corporate on-premises Linux box to an S3 bucket.
The current setup is as follows:
The on-premises Linux box is NOT connected to the internet.
The corporate network is connected to AWS with Direct Connect.
There are several VPCs for different purposes. Only one VPC has an IGW and a public subnet (to accept requests coming from the public internet); all other VPCs do not have IGWs or public subnets.
The corporate network and the AWS VPCs (those having no IGW) are connected to each other through a Transit Gateway.
Can someone please advise whether I should use AWS Transfer or an S3 VPC Interface Endpoint to transfer files to the S3 bucket from on-premises (the corporate network), and why?
I appreciate your valuable advice in advance.
You should create a server endpoint that can be accessed only within your VPC, using AWS Transfer Family.
Note that this is a special endpoint for AWS Transfer. It is not an endpoint for Amazon S3.
Alternatively, you could run an SFTP server on an Amazon EC2 instance, as long as the instance also has access to Amazon S3 to upload the files received.
Of course, I'd also recommend avoiding SFTP altogether and uploading directly to Amazon S3 if at all possible. Using SFTP adds complexity and expense that is best avoided.
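If you do go the AWS Transfer Family route, the server is created with a VPC endpoint type so it is reachable only over Direct Connect / the Transit Gateway. Below is a minimal sketch using the AWS CLI; the VPC and subnet IDs are placeholders, and the options should be checked against the current Transfer Family documentation.
# Sketch: SFTP server reachable only from inside the VPC (all IDs are placeholders)
aws transfer create-server \
    --protocols SFTP \
    --identity-provider-type SERVICE_MANAGED \
    --endpoint-type VPC \
    --endpoint-details '{"VpcId": "vpc-0123456789abcdef0", "SubnetIds": ["subnet-0123456789abcdef0"]}'
# An SFTP user mapped to the target S3 bucket is then added with 'aws transfer create-user'.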

Can I redirect the storage path of my AGORA from AWS to a local server?

As a startup, I am trying to cut down on my costs, and the smallest AWS package is about $1250/month, so can I redirect the storage path of my AGORA from AWS to a local server?

Spinnaker configuration

I have a question about the Spinnaker/Halyard installation. Can Spinnaker manage the AWS cloud provider without being installed on an EC2 instance? Meaning, can I install Spinnaker locally, add an AWS account, and manage pipelines?
Can Spinnaker manage the AWS cloud provider without being installed on an EC2 instance?
Spinnaker can be installed on any Ubuntu server - for example, you could run a Spinnaker instance from Google's Click to Deploy image and have it manage your EC2 account.
Spinnaker is composed of a number of microservices, so running it on a local workstation may be cumbersome; I suggest dedicating a specific machine to it. Alternatively, if you're set on running it locally, you could install Halyard locally and point it at a Minikube installation on your machine.
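To illustrate, adding an AWS account to a Spinnaker installation that is not running on EC2 is done through Halyard. A rough sketch is below; the account name, account ID, access key, and role name are placeholders, and the flags should be checked against the Halyard documentation for your Spinnaker version.
# Sketch: register an AWS account with Halyard (placeholder values)
hal config provider aws enable
hal config provider aws account add my-aws-account \
    --account-id 123456789012 \
    --assume-role role/spinnakerManaged
hal config provider aws edit \
    --access-key-id AKIAEXAMPLE \
    --secret-access-key   # Halyard prompts for the secret
hal deploy apply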
You can set up any of these providers under your Spinnaker setup:
https://www.spinnaker.io/setup/install/providers/
App Engine
Amazon Web Services
Azure
Cloud Foundry
DC/OS
Google Compute Engine
Kubernetes (legacy)
Kubernetes V2 (manifest based)
Openstack
Oracle
You just need to integrate your service accounts into Spinnaker to authorize resource creation.
Yes, it will work. You just need to create a service account and pass the kubeconfig file to Spinnaker; Spinnaker then handles the deployment part automatically once you configure it for that.
Some useful links:
https://www.spinnaker.io/setup/security/authorization/service-accounts/
https://www.spinnaker.io/setup/
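As a rough sketch of the kubeconfig approach described above (the account name and kubeconfig path are placeholders; verify the flags against the Halyard docs for your Spinnaker version):
# Sketch: register a Kubernetes account using an existing kubeconfig
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
    --provider-version v2 \
    --kubeconfig-file ~/.kube/config
hal deploy apply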

Setting up Spinnaker on Kubernetes and accessing spinnaker UI

I have deployed the individual Spinnaker components to Kubernetes, and when I try to access Spinnaker through http://localhost:9000 I get an empty response from the server; curling localhost:9000 gives the same empty response. I verified the configuration in clouddriver-local.yml and spinnaker-local.yml and everything seems good. Am I missing anything here?
Here is the Kubernetes setup info.
Hi, Spinnaker has evolved since this was asked and should be easier to set up by now. If you only want a PoC, or to deploy small enterprise projects, then I suggest you use Armory's Minnaker.
If you want to deploy large projects to a robust, fully fledged Kubernetes cluster, then that is a different story. The prerequisites and steps are as follows:
Minimum 4 CPUs and 12 GB of memory
Access to an existing object storage bucket
Access to an IAM role or user with access to the bucket. (AWS IAM for AWS S3)
An existing Kubernetes Ingress controller, or the permissions to install the NGINX Ingress Controller (for Deck UI access)
Installation
Create a Kubernetes namespace for Spinnaker and Halyard
Grant the default ServiceAccount in the namespace access to the cluster-admin ClusterRole in the namespace.
Run Halyard (Spinnaker installer) as a Pod in the created namespace (with a StatefulSet).
Create a storage bucket for Spinnaker to store persistent configuration in.
Create a user (an AWS IAM user in the case of an AWS deployment) that Spinnaker will use to access the bucket (or alternatively, grant access to the bucket via roles).
Run the hal client interactively in the Kubernetes Pod (see the command sketch after this list):
Build out the hal config YAML file (.hal/config)
Configure Spinnaker with the IAM credentials and bucket information
Turn on other recommended settings (artifacts and HTTP artifact providers: GitHub, Bitbucket, etc.)
Install Spinnaker with hal deploy apply
Expose Spinnaker (Deck through ingress)
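A condensed, hypothetical command sketch of the steps above (the namespace, bucket name, region, and credentials are placeholders; the exact hal flags should be verified against the Halyard documentation for your Spinnaker version):
# Namespace and cluster-admin binding for the default ServiceAccount
kubectl create namespace spinnaker
kubectl create clusterrolebinding spinnaker-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=spinnaker:default
# Run Halyard as a Pod in that namespace (e.g. via a StatefulSet), then inside the Halyard pod:
hal config storage s3 edit \
    --bucket my-spinnaker-bucket \
    --region us-east-1 \
    --access-key-id AKIAEXAMPLE \
    --secret-access-key          # Halyard prompts for the secret
hal config storage edit --type s3
hal config features edit --artifacts true
hal deploy apply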
For more details refer to
Armory's doc
Spinnaker Distributed installation in Kubernetes
Hope this guideline helps.

Access S3 in cron job in docker on Elastic Beanstalk

I have a cron job in a Docker image that I deploy to Elastic Beanstalk. In that job I wish to include read and write operations on S3, and have included the AWS CLI tools for that purpose.
But the AWS CLI isn't very useful without credentials. How can I securely include the AWS credentials in the Docker image so that the AWS CLI will work? Or should I take some other approach?
Always try to avoid setting credentials on machines if they run within AWS.
Do the following:
Go into the IAM console and create an IAM role, then edit the policy of that role to have appropriate S3 read/write permissions.
Then go to the Elastic Beanstalk console, find your environment and go to the configuration/instances section. Set the "instance profile" to use the role you created (a profile is associated with a role; you can see it in the IAM console when you're viewing the role).
This will mean that each beanstalk EC2 instance will have the permissions you set in the IAM role (the AWS CLI will automatically use the instance profile of the current machine if available).
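For illustration, once the instance profile is attached, the cron job inside the container can call the CLI with no credentials configured at all. A sketch (the bucket name and paths are placeholders):
# Credentials come from the instance profile via the instance metadata service,
# so nothing is baked into the Docker image.
aws sts get-caller-identity                     # optional sanity check: shows the assumed role
aws s3 cp s3://my-bucket/input/data.csv /tmp/data.csv
# ... process /tmp/data.csv ...
aws s3 cp /tmp/result.csv s3://my-bucket/output/result.csv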
More info:
http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html#use-roles
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html