Getting an error while using s3cmd with the following config - amazon-s3

I am trying s3cmd from the command line, and when I configure it using s3cmd --configure it asks for the following information. I know I am giving the proper Access Key and Secret Key, but I think the problem is with Default Region: Mumbai, S3 Endpoint: ap-south-1.amazonaws.com, or the DNS-style bucket+hostname:port template for accessing a bucket. My S3 bucket is in Mumbai, India.
Access Key: ASOMETHINGJFDGERCEMUA
Secret Key: 5q2tbwdf43/sdfsdfsdf/AopqPd73QaiN4fr3e3fv8wE
Default Region: Mumbai
S3 Endpoint: ap-south-1.amazonaws.com
DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.ap-south-1.amazonaws.com
Encryption password:
Path to GPG program: None
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0
Error:
ERROR: Test failed: [Errno 8] nodename nor servname provided, or not known

For Asia Pacific (Mumbai), use the following values.
Region: ap-south-1
S3 Endpoint: s3.ap-south-1.amazonaws.com
Refer here for the complete list of S3 Endpoints.
Note: Check whether your version of s3cmd supports Signature Version 4.
Prefer the AWS CLI over s3cmd.
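For reference, after rerunning s3cmd --configure with those values, the relevant lines of ~/.s3cfg should end up roughly like this (a sketch using s3cmd's standard config keys; signature_v2 = False only matters on older s3cmd versions, since ap-south-1 requires Signature Version 4):
host_base = s3.ap-south-1.amazonaws.com
host_bucket = %(bucket)s.s3.ap-south-1.amazonaws.com
bucket_location = ap-south-1
use_https = True
signature_v2 = False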

Related

fetchS3 processor is giving error: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403)

I have started a MinIO server using the Docker Compose file below. After that I created a bucket and uploaded a few files. I have a NiFi template with a ListS3 processor and a FetchS3 processor. With the ListS3 processor I am able to fetch the objects present in the S3 bucket, but FetchS3 is not working; I get the error mentioned in the title of the post even after giving the same access key and secret key as used in MinIO. I am using the MinIO server as an alternative to AWS.
The Docker Compose file is shown below:
version: "2"
services:
  minio:
    image: minio/minio:RELEASE.2020-12-16T05-05-17Z
    volumes:
      - minio-data1:/data
    ports:
      - "9001:9000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: minio server /data
volumes:
  minio-data1:
networks:
  default:
    external:
      name: ngp-dev
Just give the endpoint URL where your local MinIO server is running.
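For example, in the FetchS3Object processor the relevant properties would look roughly like this (a sketch; property names vary slightly between NiFi versions, and the host and bucket names are placeholders):
Endpoint Override URL: http://<minio-host>:9001
Access Key ID: minio
Secret Access Key: minio123
Bucket: <your-bucket>
Region: us-east-1 (MinIO ignores the region, but the property must be set)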

AWS CLI S3 list with a default endpoint

I'm using the following command on some EC2 instances in order to get some configuration files from an S3 bucket. The EC2 instances have an instance role attached with full S3 permissions:
aws s3 cp s3://bucket-name/file ./ --region eu-west-1
This works as expected on some instances provisioned by me with a default AMI, but on some existing instances in the same region and AZ with the same instance role I'm facing the following error:
Connect timeout on endpoint URL: "https://bucket-name.eu-west-1.amazonaws.com/?list-type=2&delimiter=2%F&prefix=&encoding-type=url"
failed to run commands: exit status 255
My question is: why is the S3 URI not prefixed with s3://, and why is the error returned with an https:// URL string? It's clear that this AWS CLI version tries to reach S3 through HTTPS, not through the s3:// endpoint provided by me in the command. Is there any way to override this?
My question is: why is the S3 URI not prefixed with s3://, and why is the error returned with an https:// URL string?
Behind the scenes the AWS CLI calls AWS services using HTTPS, which is why on a timeout you see https://bucket-name.eu-west-1... instead of s3://.
By default, the AWS CLI sends requests to AWS services by using HTTPS on TCP port 443. To use the AWS CLI successfully, you must be able to make outbound connections on TCP port 443.
(from "Using the AWS CLI" in the AWS CLI User Guide)
The timeout on some instances might be because they are in a private subnet without a NAT gateway.
You can verify this simply by running ping google.com; if it does not respond, then the instance is in a private subnet without NAT or has no outbound traffic allowed.
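A quick sketch of that check, run from the affected instance (the curl call exercises the same HTTPS/443 path the CLI uses, since ping alone can be blocked by ICMP rules):
ping -c 3 google.com
curl -sv --max-time 5 https://s3.eu-west-1.amazonaws.com/ -o /dev/null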

kubernetes authentication against the API server

I have set up a Kubernetes cluster from scratch. This just means I did not use services provided by others, but used the k8s installer itself. Before, we used to have other clusters, but with providers, and they give you a TLS cert and key for auth, etc. Now this cluster was set up by myself, and I have access via kubectl:
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
$
I also tried this, and I can add a custom key, but then when I try to query via curl I get pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope.
I cannot figure out where I can get the cert and key for a user to authenticate against the API using TLS auth. I have tried to understand the official docs, but I have gotten nowhere. Can someone help me find where those files are, or how to add or get certificates that I can use for the REST API?
Edit 1: my .kube/config file looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t(...)=
    server: https://private_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS(...)Qo=
    client-key-data: LS0(...)tCg==
It works normally from localhost.
On the other hand, I noticed something: from localhost I can access the cluster by generating the token using this method.
Also note that for now I do not mind creating multiple roles for multiple users, etc. I just need access to the API from remote, and it can use "default" authentication or roles.
Now when I try to do the same from remote I get the following:
I tried using that config to run kubectl get all from remote; it runs for a while and then ends in Unable to connect to the server: dial tcp private_IP:6443: i/o timeout.
This happens because the config has private_IP, so I changed the IP to Public_IP:6443 and now get the following: Unable to connect to the server: x509: certificate is valid for some_private_IP, My_private_IP, not Public_IP:6443
Keep in mind that this is an AWS EC2 instance with an Elastic IP (you can think of an Elastic IP as just a public IP in a traditional setup, but this public IP is on your public router, and that router routes requests to your actual server on the private network). For the AWS fans: like I said, I cannot use the EKS service here.
So how do I get this to be able to use the Public IP?
It seems your main problem is the TLS server certificate validation.
One option is to tell kubectl to skip the validation of the server certificate:
kubectl --insecure-skip-tls-verify ...
This obviously has the potential to be "insecure", but that depends on your use case.
Another option is to recreate the cluster with the public IP address added to the server certificate. And it should also be possible to recreate only the certificate with kubeadm without recreating the cluster. Details about the latter two points can be found in this answer.
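As a rough sketch of that last option, assuming the cluster was built with kubeadm (the config file name and the SAN value are placeholders; back up /etc/kubernetes/pki first):
# add the public IP as an extra SAN in a kubeadm config file, e.g. kubeadm.yaml:
#   apiServer:
#     certSANs:
#     - "PUBLIC_IP"
sudo rm /etc/kubernetes/pki/apiserver.{crt,key}
sudo kubeadm init phase certs apiserver --config kubeadm.yaml
# restart the kube-apiserver static pod afterwards (e.g. by restarting the kubelet)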
You need to set up RBAC for the user: define Roles and RoleBindings. Follow this link for reference: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
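A minimal sketch of what that looks like, assuming a hypothetical user named remote-user who only needs read access to pods in the default namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: remote-user   # hypothetical; must match the CN of the user's client certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io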

Connecting Spring Cloud Config and AWS Code Commit using HTTPS credentials

I am trying to connect my Spring Cloud Config to a repo on AWS CodeCommit using HTTPS but I keep getting an error saying Cannot clone or checkout repository.
This is what I have done so far:
Created a user in AWS IAM and generated HTTPS Git username and password credentials.
Added the AWS CodeCommit Git URL and the user credentials to the application.yml file:
server:
  port: 8888
spring:
  cloud:
    config:
      discovery:
        enabled: true
      server:
        encrypt.enabled: false
        git:
          uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/XXXXX
          username: XXXXXXXXXX
          password: XXXXXXXXXX
Added the AWS java-sdk-core library as a build dependency.
Is there anything else I need to do?
Per the documentation, encrypt.* values need to go in bootstrap.{yml|properties}.
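In other words (a sketch based on that note in the Spring Cloud Config documentation), the encrypt-related setting moves into bootstrap.yml while the Git URI and credentials stay in application.yml:
# bootstrap.yml
spring:
  cloud:
    config:
      server:
        encrypt:
          enabled: false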

OpenShift Origin Build - unable to use git as a source

I'm trying to do a simple build of a Node.js app I wrote in OpenShift Origin, using the following YAML:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "dyn-kickstart"
spec:
  triggers:
    - type: "GitHub"
      github:
        secret: "secret101"
  source:
    git:
      uri: git@bitbucket.org:serverninja02/dynamic-kickstart.git
    sourceSecret:
      name: "github"
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: .
      forcePull: true
      noCache: true
  output:
    to:
      kind: "DockerImage"
      name: "docker-registry-default.apps.reedfamily.local/serverninja/dynamic-kickstart:v0.0.1"
The command I'm running to create the build:
$ cat dynamic-kickstart.yml | oc create -f -
What I'm running into is that the build service account doesn't seem to be able to access the Git URL to clone:
Cloning "git@bitbucket.org:serverninja02/dynamic-kickstart.git" ...
error: build error: Warning: Permanently added 'bitbucket.org,192.168.1.81' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I did follow the instructions on creating the ssh-privatekey secret, placing it in the secret store, and linking it to the builder service account. I also double-checked that key and tested, via SSH agent forwarding, that I can log into the OpenShift node and ssh git@bitbucket.org successfully.
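For reference, the secret creation and linking steps described there look roughly like this (a sketch; exact oc subcommands differ between OpenShift releases, and the key path is a placeholder):
oc create secret generic github --from-file=ssh-privatekey=$HOME/.ssh/id_rsa --type=kubernetes.io/ssh-auth
oc secrets link builder github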
I'm not sure what I'm doing wrong, but even when using the HTTPS Git URL and making it a public repo, it still doesn't work, as it complains about the peer certificate not being trusted:
Cloning "https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git" ...
error: build error: fatal: unable to access 'https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git/': Peer's certificate issuer has been marked as not trusted by the user.
At this point, I'm unsure where to go with this as OpenShift Origin doesn't seem to want to build anything from git as a source.
Any help or suggestions would be greatly appreciated!
OpenShift Version: 1.3.0
OpenShift Kubernetes Version: v1.3.0+52492b4
This is a flat network behind a router. DNS is on Active Directory with a wildcard entry for *.apps.reedfamily.local.
This is a test-bed environment in a .local domain. However, I'm using this setup to potentially build out a POC for my company to host OpenShift.
I figured out the answer to my problem!!! So I'll share:
The /etc/resolv.conf was configured automatically during the build of my OpenShift nodes when I ran openshift-ansible. Unfortunately, there was a search domain placed in /etc/resolv.conf that must have been causing issues.
# Generated by NetworkManager
search apps.reedfamily.local
nameserver 192.168.1.40
Once I removed "search apps.reedfamily.local", that fixed the problem immediately on the next build!
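That leaves /etc/resolv.conf on the nodes looking like this:
# Generated by NetworkManager
nameserver 192.168.1.40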