s3fs mount storage in Cloud (NOT amazon's) - amazon-s3

I am new to s3, object storage and linux.
I have a CentOS Linux server and an active subscription to a public cloud object storage service. Now I want to connect to the cloud object storage and mount it on the Linux server.
I have installed s3fs (FUSE) on the server and added the access key and secret access key to the password file, but I can't find where to set the endpoint. I know that by default it points to Amazon S3, but I want to use a different service and I cannot find where the endpoint is configured. Any help would be appreciated, thank you!

You can try running:
s3fs <bucket_name> <local_mount_point> -o nocopyapi -o use_path_request_style -o nomultipart -o sigv2 -o url=http://s3-api.sjc-us-geo.objectstorage.softlayer.net/ -d -f -o curldbg -of2 -o allow_other -o passwd_file=/etc/ibmcos.pwd
If that doesn't work for you, find your endpoint URL in the portal or look here:
https://ibm-public-cos.github.io/crs-docs/endpoints
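In general, the endpoint is set with the -o url option. A minimal sketch for any S3-compatible provider (the bucket name, mount point, endpoint URL and keys below are placeholders, not values for a specific service):
# store credentials in the s3fs password file
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
# mount the bucket, pointing s3fs at your provider's endpoint with -o url
s3fs mybucket /mnt/mybucket \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=https://s3.example-provider.com/ \
    -o use_path_request_style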

Related

s3fs use credentials and config within $HOME/.aws as opposed to a /passwd-s3fs file

I'm looking at the readme of s3fs repo.
I would like to mount an S3 directory locally using this tool. The readme says:
s3fs supports the standard AWS credentials file stored in ${HOME}/.aws/credentials. Alternatively, s3fs supports a custom passwd file.
The subsequent examples all seem to use the custom passwd file as opposed to the credentials in ~/.aws. I would like to use credentials in ~/.aws.
My .aws credentials and config files look something like this:
~/.aws/credentials:
[work]
aws_access_key_id=123abc
aws_secret_access_key=mykeyhere
aws_s3_endpoint=s3.us-south.cloud-object-storage.appdomain.cloud
~/.aws/config:
[work]
region=us-south
I attempted a 'hello world' run of s3fs for the first time. The readme example provided:
Run s3fs with an existing bucket mybucket and directory /path/to/mountpoint:
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs
I don't have a passwd file; I want to use the credentials in .aws instead and don't know how to do that. I tried:
Just leaving off the -o passwd_file option and hoping it would default:
s3fs companyname-us-south-analytics-ml-models /home/doug/Documents/Projects/companyname/Projects/companynameS3
s3fs: could not determine how to establish security credentials.
I then tried adding the aws credentials file per the example:
s3fs companyname-us-south-analytics-ml-models /home/doug/Documents/Projects/companyname/Projects/companynameS3 -o passwd_file=${HOME}/.aws/credentials
s3fs: could not determine how to establish security credentials.
I then tried referencing 'work' per my aws config files (clutching at straws here):
s3fs companyname-us-south-analytics-ml-models /home/doug/Documents/Projects/companyname/Projects/companynameS3 -o work
s3fs: could not determine how to establish security credentials.
I looked at man s3fs and found some info under authentication:
AUTHENTICATION
The s3fs password file has this format (use this format if you have only one set of credentials):
accessKeyId:secretAccessKey
If you have more than one set of credentials, this syntax is also recognized:
bucketName:accessKeyId:secretAccessKey
Password files can be stored in two locations:
/etc/passwd-s3fs [0640]
$HOME/.passwd-s3fs [0600]
I could not find anything on authenticating with the settings in ~/.aws.
How can I set up s3fs using the credentials in .aws?
If you want to use the "work" profile from ${HOME}/.aws/credentials then you need to add the -o profile=work option.
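A sketch using the bucket and mount point from the question. As far as I know s3fs only reads the access keys from ${HOME}/.aws/credentials, so the IBM COS endpoint from your credentials file likely still has to be passed explicitly with -o url:
# use_path_request_style may or may not be needed, depending on the endpoint
s3fs companyname-us-south-analytics-ml-models \
    /home/doug/Documents/Projects/companyname/Projects/companynameS3 \
    -o profile=work \
    -o url=https://s3.us-south.cloud-object-storage.appdomain.cloud \
    -o use_path_request_style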

LDAP Apache Directory Studio Authentication Failed

I am trying to integrate multiple directory services to Keycloak hence I am following the article: Setup User Federation with Keycloak
I have pulled the docker data and running them as mentioned:
docker pull rroemhild/test-openldap
docker run --privileged -d -p 389:389 -p 636:636 --name da-01 rroemhild/test-openldap
Now I am trying to connect to it using Apache Directory Studio, and when I try to authenticate I get an authentication failed message.
I am not sure what I am doing wrong. I am trying with the mentioned password: GoodNewsEveryone
I basically tried running the container on different ports and it worked:
docker run --rm -p 10389:10389 -p 10636:10636 rroemhild/test-openldap
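To confirm the container is reachable before retrying Apache Directory Studio, an ldapsearch along these lines should work; the bind DN and base DN here are the ones documented for the rroemhild/test-openldap image and may differ for your version:
ldapsearch -x -H ldap://localhost:10389 \
    -D "cn=admin,dc=planetexpress,dc=com" -w GoodNewsEveryone \
    -b "dc=planetexpress,dc=com" "(objectClass=inetOrgPerson)"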

How do I get to my spinnaker dashboard after Installing minnaker on my aws ec2

I installed Spinnaker on my AWS EC2 instance and logged into the dashboard the first time, but immediately after I logged out and logged in again using the same base URL, I was directed to a different person's GitHub account. What might have happened? Does it mean my account is hacked? Somebody advise, please.
I am being directed to the link attached below instead of the IP address taking me to the Spinnaker dashboard, and yet I am using the correct base address.
These are the instructions I follow for Minnaker on EC2 (ap-southeast-2):
Pre-requisites
Obtain an AWS Elastic IP
From AWS EC2 console choose a Region preferably ap-southeast-2 and
launch an EC2 instance with 16 GB memory, 4 cpu min and 60 GB disk.
An initial deployment can be performed using instance= m4.xlarge
Attach the AWS Elastic IP to the Spinnaker Instance
Access the instance through SSH
Get minnaker
curl -LO https://github.com/armory/minnaker/releases/latest/download/minnaker.tgz
Untar
tar -xzvf minnaker.tgz
Go to minnaker directory
cd minnaker
Use the public IP value of the Elastic IP as $PUBLIC_IP
Obtain the private IP of the instance with hostname -I and add both to local environment variables:
export PRIVATE_IP=$(hostname -I)
export PUBLIC_IP=AWS_ELASTIC_IP
Execute the command below to install Open Source Spinnaker
./scripts/install.sh -o -P $PRIVATE_IP
Validate installation
UI
Validate the installation by going to the generated URL https://PUBLIC_IP
Use user admin and get the password at /etc/spinnaker/.hal/.secret/spinnaker_password
The UI should load
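For example, on the instance itself the generated admin password can be read with (path as given above):
sudo cat /etc/spinnaker/.hal/.secret/spinnaker_password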
Kubernetes Deployment
Minnaker is deployed inside an EC2 as a lightweight Kubernetes K3S cluster
Run kubectl version
Get info from cluster kubectl cluster-info
Tweak bash completion and enable a simple alias.
kubectl completion bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
Validate Spinnaker is running
k -n spinnaker get pods -o wide
Halyard Config
Validate that a default Halyard config has been set up. The hal wrapper at /usr/local/bin/hal proxies commands into the Halyard pod:
sudo chmod 755 /usr/local/bin/hal
#!/bin/bash
set -x
HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} -- hal "$@"
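With the wrapper in place, hal commands run inside the Halyard pod, so the validation step above is simply (output depends on your installation):
hal config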
Minnaker repo
Clone the repository: git clone https://github.com/armory/minnaker
Go to the scripts directory: cd minnaker/scripts
Add permissions to the installation script: chmod 775 all.sh
References
armory/minnaker

How to setup a small website using docker

I have a question regarding Docker. The container concept is totally new to me, and I am sure that I haven't yet grasped how things work (containers, Dockerfiles, ...) and how they could work together.
Let's say that I would like to host small websites on the same VM, each consisting of Apache, PHP-FPM, MySQL and possibly Memcache.
This is what I had in mind:
1) One image that contains Apache, PHP, MySQL and Memcache
2) One or more images that contains my websites files
I must find a way to tell Apache, in my first image, where the website folders for the hosted websites are stored. Yet I don't know if the first container can read files inside another container.
Anyone here did something similar?
Thank you
Your container setup should be:
MySQL Container
Memcached Container
Apache, PHP etc. container
Data Container (optional)
Run MySQL and expose its port using the -p flag:
docker run -d --name mysql -p 3306:3306 dockerfile/mysql
Run Memcached
docker run -d --name memcached -p 11211:11211 borja/docker-memcached
Run your web container and mount the web files from the host file system into the container. They will be available at /container_fs/web_files/ inside the container. Link to the other containers to be able to communicate with them over TCP.
docker run -d --name web -p 80:80 \
-v /host_fs/web_files:/container_fs/web_files/ \
--link mysql:mysql \
--link memcached:memcached \
your/docker-web-container
Inside your web container, look for the environment variables MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT to tell you where to connect to the MySQL instance, and similarly MEMCACHED_PORT_11211_TCP_ADDR and MEMCACHED_PORT_11211_TCP_PORT to tell you where to connect to memcached.
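A quick way to inspect those variables from the host, assuming the container name "web" from the run command above (this is just a sanity check, not part of the linking setup):
docker exec web env | grep -E 'MYSQL|MEMCACHED'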
The idiomatic way of using Docker is to try to keep to one process per container. So, Apache and MySQL etc should be in separate containers.
You can then create a data-container to hold your website files and simply mount the volume in the Webserver container using --volumes-from. For more information see https://docs.docker.com/userguide/dockervolumes/, specifically "Creating and mounting a Data Volume Container".
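A minimal sketch of that data-container pattern; the image name, volume path and container names below are placeholders:
# create a container whose only purpose is to own the /var/www volume
docker create -v /var/www --name webdata busybox /bin/true
# start the web container against that volume instead of a host directory
docker run -d --name web -p 80:80 \
    --volumes-from webdata \
    --link mysql:mysql --link memcached:memcached \
    your/docker-web-container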

How to transfer files from a remote server to my Amazon S3 instance?

I have about 15 GB of data in 5 files that I need to transfer to an Amazon S3 bucket. They are currently hosted on a remote server that I have no scripting or shell access to; I can only download them via an HTTP link.
How can I transfer these files to my Amazon S3 bucket without first having to download them to my local machine then re-upload them to S3?
If you want to automate the process, use the AWS SDK.
For example, using the AWS PHP SDK:
use Aws\Common\Aws;
$aws = Aws::factory('/path/to/your/config.php');
$s3 = $aws->get('S3');
$s3->putObject(array(
    'Bucket'     => 'your-bucket-name',
    'Key'        => 'your-object-key',
    'SourceFile' => '/path/to/your/file.ext'
));
More details:
http://blogs.aws.amazon.com/php/post/Tx9BDFNDYYU4VF/Transferring-Files-To-and-From-Amazon-S3
http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html
Given that you only have 5 files, use the S3 console uploader at http://console.aws.amazon.com/s3/home?region=us-east-1 (Actions > Upload) after you have downloaded the files to some intermediate machine. An EC2 instance running Windows might be the best solution, as the upload to S3 would be very fast. You can download Chrome onto your EC2 instance from chrome.google.com, or use the existing web browser (IE) to do the job.
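If you can run the AWS CLI on an intermediate machine (an EC2 instance works well), you can also stream each file straight from its HTTP link into S3 without storing it locally. A sketch with a placeholder URL and bucket:
curl -L "https://remote-server.example.com/file1.dat" | aws s3 cp - s3://your-bucket-name/file1.dat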
[1] SSH with keys
ssh-keygen -f ~/.ssh/id_rsa -q -P ""
cat ~/.ssh/id_rsa.pub
Place this SSH key into the ~/.ssh/authorized_keys file on the destination machine
mkdir ~/.ssh
chmod 0700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 0644 ~/.ssh/authorized_keys
[2] Snapshot ZFS, minimize transfer with LZMA, send with RSYNC
zfs snapshot zroot@150404-SNAPSHOT-ZROOT
zfs list -t snapshot
Compress to file with lzma (more effective than bzip2)
zfs send zroot@150404-SNAPSHOT-ZROOT | lzma -9 > /tmp/snapshots/zroot@150404-SNAPSHOT-ZROOT.lzma
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --progress --partial /tmp/snapshots/zroot@150404-SNAPSHOT-ZROOT.lzma <username>@<ip-address>:/
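On the receiving side the transferred file can then be unpacked and replayed into a pool, roughly as follows (this assumes the .lzma file landed at / as in the rsync target above, and zremote/data is a placeholder destination dataset):
lzma -dc /zroot@150404-SNAPSHOT-ZROOT.lzma | zfs receive zremote/data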
[3] Speedup transfer with MBUFFER, send with ZFS Send/Receive
Start the receiver first. This listens on port 9090, has a 1GB buffer, and uses 128kb chunks (same as zfs):
mbuffer -s 128k -m 1G -I 9090 | zfs receive zremote
Now we send the data, also sending it through mbuffer:
zfs send -i zroot@150404-SNAPSHOT-ZROOT zremote@150404-SNAPSHOT-ZROOT | mbuffer -s 128k -m 1G -O <ip-address>:9090
[4] Speedup transfer by only sending diff
zfs snapshot zroot@150404-SNAPSHOT-ZROOT
zfs snapshot zroot@150405-SNAPSHOT-ZROOT [e.g. one day later]
zfs send -i zroot@150404-SNAPSHOT-ZROOT zroot@150405-SNAPSHOT-ZROOT | zfs receive zremote/data
See also my notes