How do I add a boolean OR to the cognito-idp list-users --filter option - amazon-cognito

Currently, I can get a user in the user pool by specifying their username like so:
aws cognito-idp list-users --user-pool-id xxx --filter 'username="ABC"'
However, what do I do if I want to get both user "ABC" and user "DEF"? Is it possible to add a boolean OR to the filter string?
Something like
aws cognito-idp list-users --user-pool-id xxx --filter 'username="ABC" or username="DEF"'

You can use a JMESPath expression in the --query parameter instead of --filter; the syntax for multiple values is
[? expr1 || expr2]
So you can use
aws cognito-idp list-users --user-pool-id us-west-demo --query 'Users[?Username==`demo1` || Username==`demo2`]'
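Note that --query filters client-side after list-users returns, while --filter itself only accepts a single attribute comparison server-side. If you need server-side filtering for both users, a simple alternative (a sketch, reusing the placeholder pool ID and usernames from the question) is to loop:
for u in ABC DEF; do
  aws cognito-idp list-users --user-pool-id xxx --filter "username=\"$u\""
done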

This command works; I used single quotes around the filter value:
aws cognito-idp list-users --user-pool-id ap-southeast-AAAAAAAA --limit 20 --region ap-southeast-1 --filter 'preferred_username="tserensodnom.t@gmail.com"'

Related

Getting error while AWS EKS cluster backup using Velero tool

Please let me know what my mistake is!
I used the command below to back up an AWS EKS cluster with the Velero tool, but it's not working:
./velero.exe install --provider aws --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/ --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0
cat minio.credentials
[default]
aws_access_key_id=xxxx
aws_secret_access_key=yyyyy/zzzzzzzz
region=ap-southeast-1
Getting Error:
../kubectl.exe --kubeconfig=../kubeconfig-prod-eks.txt logs deployment/velero -n velero
time="2020-12-09T09:07:12Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="backup storage location's bucket name \"backup-archive/eks-cluster-backup/\" must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:110" error.function=github.com/vmware-tanzu/velero/pkg/persistence.NewObjectBackupStore logSource="pkg/controller/backup_sync_controller.go:168"
Note: I have tried --bucket backup-archive, but it still didn't work.
This is the source of your problem: --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/.
The error says: must not contain a '/'.
This means it cannot contain a slash in the middle of the bucket name (leading/trailing slashes are trimmed, so that's not a problem). Source: https://github.com/vmware-tanzu/velero/blob/3867d1f434c0b1dd786eb8f9349819b4cc873048/pkg/persistence/object_store.go#L102-L111.
If you want to namespace your backups within a bucket, you may use the --prefix parameter. Like so:
--bucket backup-archive --prefix /eks-cluster-backup/prod-eks-cluster/.
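For illustration (not part of the original answer), the command from the question with the bucket and prefix split out would look roughly like this; the s3Url value is a placeholder for your actual MinIO endpoint, and the duplicated s3Url= from the question is dropped:
./velero.exe install --provider aws \
  --bucket backup-archive \
  --prefix eks-cluster-backup/prod-eks-cluster \
  --secret-file ./minio.credentials \
  --use-restic \
  --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://<your-minio-endpoint>:9000 \
  --kubeconfig ../kubeconfig-prod-eks \
  --plugins velero/velero-plugin-for-aws:v1.0.0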

How to enable a kv secret engine in vault using HTTP APIs

I am trying to enable the kv secret engine at the secret path in my Vault setup.
I can easily do it using the CLI:
vault secrets enable -path=secret kv
But I have to make it work using Vault's HTTP APIs.
I have gone through the documentation but could not find any endpoint for the above command.
Thanks in advance
This is covered on the System Backend /sys/mounts API reference page.
Issue a POST request to /v1/sys/mounts/<mountpoint> with a payload containing the type (kv) and various configuration options. For KV, you probably want to specify version: 2 (or type kv-v2) unless you want to stick to V1.
See the above link for details on the possible parameters.
Here is the example from the docs:
payload.json:
{
  "type": "aws",
  "config": {
    "force_no_cache": true
  }
}
Request:
$ curl \
    --header "X-Vault-Token: ..." \
    --request POST \
    --data @payload.json \
    http://127.0.0.1:8200/v1/sys/mounts/my-mount
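Adapted to the question (a sketch, not copied from the docs): enabling kv at the path secret would be a POST to /v1/sys/mounts/secret. The options block requesting version 2 is an assumption here; drop it (or set "version": "1") if you want KV v1:
$ curl \
    --header "X-Vault-Token: ..." \
    --request POST \
    --data '{"type": "kv", "options": {"version": "2"}}' \
    http://127.0.0.1:8200/v1/sys/mounts/secret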

How to check content of a Noobaa bucket

I am able to check the status of a NooBaa bucket using the noobaa bucket status <bucket> command.
$ noobaa bucket status XYZ
INFO[0005] ✅ Exists: NooBaa "noobaa"
INFO[0005] ✅ Exists: Service "noobaa-mgmt"
INFO[0006] ✅ Exists: Secret "noobaa-operator"
INFO[0006] ✅ Exists: Secret "noobaa-admin"
INFO[0008] ✈️ RPC: bucket.read_bucket() Request: {Name:XYZ}
INFO[0010] ✅ RPC: bucket.read_bucket() Response OK: took 14.3ms
Bucket status:
Bucket : XYZ
OBC Namespace : xyz-namespace
OBC BucketClass : default-bucket-class
Type : REGULAR
Mode : OPTIMAL
ResiliencyStatus : OPTIMAL
QuotaStatus : QUOTA_NOT_SET
Num Objects : 1
Data Size : 3.000 B
Data Size Reduced : 5.000 B
Data Space Avail : 1.000 PB
But I am not able to check the content present inside a NooBaa bucket.
How can we check the content of a NooBaa bucket, using the NooBaa CLI or any other way?
Your question made me realize that the noobaa CLI should have a noobaa object list command, so I opened a new issue for this enhancement on the operator GitHub repo. Thanks :)
Until this is added, there are several ways we use to list objects:
Run noobaa ui - notice that it opens the browser quickly, but on the terminal it prints the credentials for you to use for login. You can probably find the buckets and then drill down to the objects in the UI on your own, and you can also check out some recorded videos that navigate the UI - for example this video.
Take the admin S3 credentials and endpoint from noobaa status and then use your favorite s3 client - I currently use aws-cli or rclone:
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $NOOBAA_S3_ENDPOINT --no-verify-ssl s3'
and then:
s3 ls XYZ
Not many have noticed, but the NooBaa system CR contains a useful Readme text in its status, with commands to "Test S3 client" - ready to copy-paste to set up your aws-cli, including kubectl port-forward to support secure networks and reading the credentials from secrets. Check it out with kubectl describe noobaa. This 40-second YouTube video shows it briefly. BTW, the readme text is generated for the system, but it does not contain actual secrets, only kubectl commands to read those secrets if permitted.
$ kubectl describe noobaa
...
Phase: Ready
Readme:
Welcome to NooBaa!
-----------------
NooBaa Core Version: 5.3.0-9f579d9
NooBaa Operator Version: 2.1.0
Lets get started:
1. Connect to Management console:
Read your mgmt console login information (email & password) from secret: "noobaa-admin".
kubectl get secret noobaa-admin -n backup-service -o json | jq '.data|map_values(@base64d)'
Open the management console service - take External IP/DNS or Node Port or use port forwarding:
kubectl port-forward -n backup-service service/noobaa-mgmt 11443:443 &
open https://localhost:11443
2. Test S3 client:
kubectl port-forward -n backup-service service/s3 10443:443 &
NOOBAA_ACCESS_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d')
NOOBAA_SECRET_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d')
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3'
s3 ls
...
The last option, which should have been mentioned first, but which unfortunately I just saw is broken in the current version v2.1.0 (opened a new issue), is to use the generic noobaa api command to call the object_api list_objects method, like so:
noobaa api object list_objects '{ "bucket": "first.bucket" }'
I hope that helps, feel free to open github issues with suggestions/issues.
Thanks!
(NooBaa CTO)

Multi-part upload S3

I am trying to complete a multipart upload to S3. I was able to generate the key and upload ID with the command below, but when I pass the values to complete the upload, I get the error shown. From googling, it seems this error pops up when an int value is used where a string is expected. Can someone please help explain why this occurs in the S3 upload?
bash-3.2$ aws s3api create-multipart-upload --bucket awspythnautomation --key 'docker'
{
"Bucket": "awspythnautomation",
"Key": "docker",
"UploadId": "ySvpOo_9DwDLmfB84GqvJQAQeZQi1_U6_Qs2StKpvxCI.tKTFJKES9nNXDoY5zqkJX4yEuPdcICwTZ.X5xwkaNyYop1r9VOloMKjxji_TakQYLobYy7IcRoUUuHcebgh"
}
bash-3.2$ aws s3api complete-multipart-upload --multipart-upload fileb://Docker.dmg --bucket awspythnautomation --key 'docker' --upload-id ySvpOo_9DwDLmfB84GqvJQAQeZQi1_U6_Qs2StKpvxCI.tKTFJKES9nNXDoY5zqkJX4yEuPdcICwTZ.X5xwkaNyYop1r9VOloMKjxji_TakQYLobYy7IcRoUUuHcebgh
'in <string>' requires string as left operand, not int
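For context (a sketch, not part of the question; the upload ID and ETag below are placeholders): complete-multipart-upload expects --multipart-upload to be a JSON description of parts already uploaded with upload-part, not a fileb:// path to the file itself, which is the likely trigger for the error above. The flow looks roughly like this:
# upload each part; every upload-part call returns an ETag
aws s3api upload-part --bucket awspythnautomation --key 'docker' \
  --part-number 1 --body Docker.dmg --upload-id <UploadId>
# then complete the upload by listing the uploaded parts and their ETags
aws s3api complete-multipart-upload --bucket awspythnautomation --key 'docker' \
  --upload-id <UploadId> \
  --multipart-upload '{"Parts": [{"ETag": "<etag-from-upload-part>", "PartNumber": 1}]}'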

How can I check if S3 objects is encrypted using boto?

I'm writing a Python script to find out whether an S3 object is encrypted. I tried the following code, but key.encrypted always returns None even though I can see that the object on S3 is encrypted.
keys = bucket.list()
for k in keys:
    print k.name, k.size, k.last_modified, k.encrypted, "\n"
k.encrypted always returns None.
For what it's worth, you can do this using boto3 (which can be used side-by-side with boto).
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
for obj in bucket.objects.all():
    # fetch the full object metadata, which includes the encryption state
    key = s3.Object(bucket.name, obj.key)
    print key.server_side_encryption
See the boto3 docs for a list of available key attributes.
Expanding on @mfisherca's response, you can do this with the AWS CLI:
aws s3api head-object --bucket <bucket> --key <key>
# or query the value directly
aws s3api head-object --bucket <bucket> --key <key> \
--query ServerSideEncryption --output text
You can also check the encryption state of specific objects using the head_object call. Here's an example in Python/boto3:
#!/usr/bin/env python
import boto3
s3_client = boto3.client('s3')
head = s3_client.head_object(
    Bucket="<S3 bucket name>",
    Key="<S3 object key>"
)
if 'ServerSideEncryption' in head:
    print head['ServerSideEncryption']
See: http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.head_object