With version 6+, Redis supports username/password authentication via ACLs. With redis-cli, after connecting I can use:
$ redis-cli
redis-cli-prompt> auth user pass
But I couldn't find an API/tool that gives the option of passing both username and password.
If anyone is aware of how to use an ACL username/password to access Redis through an API, please share the details.
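For illustration, the kind of non-interactive usage I'm looking for looks roughly like this (the credentials are placeholders; redis-cli 6+ accepts the username this way, and most client libraries accept a connection URI):
$ redis-cli --user myuser --pass mypassword ping
# or, from a client library, a URI of the shape:
# redis://myuser:mypassword@127.0.0.1:6379/0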
Does anyone know how to configure AWS IAM/Cognito/AppSync to allow access to the AppSync API for unauthenticated users, without using Amplify? I see a lot of examples of how to configure it WITH Amplify and API keys (they expire).
I already have:
a Cognito Identity Pool with roles for authenticated and unauthenticated access
the role for unauthenticated access modified to allow read access to the AppSync resources (wildcards)
an AppSync API with IAM as the default authentication method
What I miss in documentation and examples is:
how to connect AppSync to this specific Identity Pool
how to make an unauthenticated call using Postman or JavaScript in a browser
How to make an unauthenticated call using Postman?
According to the AppSync docs:
Unauthenticated APIs require more strict throttling than authenticated APIs. One way to control throttling for unauthenticated GraphQL endpoints is through the use of API keys. An API key is a hard-coded value in your application that is generated by the AWS AppSync service when you create an unauthenticated GraphQL endpoint.
So, having ABC123 as the API key, you can send a query this way:
$ curl -XPOST -H "Content-Type:application/graphql" -H "x-api-key:ABC123" -d '{ "query": "query { movies { id } }" }' https://YOURAPPSYNCENDPOINT/graphql
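For the AWS_IAM mode the question is actually about, the call has to be SigV4-signed with temporary credentials obtained from the identity pool's unauthenticated role. A rough sketch of fetching those credentials with the AWS CLI (the identity pool ID and region below are placeholders); the returned access key, secret key and session token can then be used, for example, in Postman's "AWS Signature" auth type with appsync as the service name:
$ aws cognito-identity get-id --region us-west-2 --identity-pool-id us-west-2:11111111-2222-3333-4444-555555555555
$ aws cognito-identity get-credentials-for-identity --region us-west-2 --identity-id <IdentityId from the previous call>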
Edit: Sorry, I didn't realize the question was about an identity pool, not a user pool. Leaving this here anyway; the part below is for USER POOLS.
How to connect AppSync to this specific Identity Pool?
When you create the default authorization mode for your AppSync API, or when you add additional authorization providers, you set the requirements for each mode you specify. In the case of AMAZON_COGNITO_USER_POOLS you set the following:
AWS Region
user pool
default action
The way you create the resources may vary from one tool to another; for example, using the AWS CLI:
$ aws appsync --region us-west-2 create-graphql-api --authentication-type AMAZON_COGNITO_USER_POOLS --name userpoolstest --user-pool-config '{ "userPoolId":"test", "defaultEffect":"ALLOW", "awsRegion":"us-west-2"}'
For more explanation check the AppSync documentation (link provided); the examples are taken from there.
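If you do go the user-pools route, a minimal sketch of calling the API as a signed-in user (the app client ID, username/password and endpoint are placeholders, and it assumes the app client has the USER_PASSWORD_AUTH flow enabled and no client secret) is to fetch a token and pass it in the Authorization header:
$ aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id <app-client-id> --auth-parameters USERNAME=<user>,PASSWORD=<pass>
# take IdToken from AuthenticationResult in the response, then:
$ curl -XPOST -H "Content-Type:application/graphql" -H "Authorization:<IdToken>" -d '{ "query": "query { movies { id } }" }' https://YOURAPPSYNCENDPOINT/graphql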
1/ Every day at 3am, we are running a script alfa.sh on server A in order to send some backups to AWS (an S3 bucket).
As a requirement we had to configure AWS (aws configure) on the server, which means the Secret Key and Access Key are stored on this server. We would now like to use short-TTL credentials valid only from 3am to 3:15am. HashiCorp Vault does that very well.
2/ On server B we have HashiCorp Vault installed and we managed to generate short-TTL dynamic secrets for our S3 bucket (access key / secret key).
3/ We would now like to pass the daily generated dynamic secrets to our alfa.sh. Any idea how to achieve this?
4/ Since we are generating a new Secret Key and Access Key, I understand that a new AWS configuration ("aws configure") will have to be performed on server A in order to be able to run the backup. Any experience with this?
DISCLAIMER: I have no experience with aws configure so someone else may have to answer this part of the question. But I believe it's not super relevant to the problem here, so I'll give my partial answer.
First things first - solve your "secret zero" problem. If you are using the AWS secrets engine, it seems unlikely that your server is running on AWS, as you could otherwise skip the middleman and just give your server an IAM policy that allowed direct access to the S3 resource. So find the best Vault auth method for your use case. If your server runs in a cloud like AWS, Azure or GCP, or in a container platform like Kubernetes or Cloud Foundry, or has a JWT delivered along with a JWKS endpoint Vault can trust, target one of those; if all else fails, use AppRole authentication, delivering a wrapped token via a trusted CI solution.
Then, log into Vault in your shell script using those credentials. The login will look different depending on the auth method chosen. You can also leverage Vault Agent to automatically handle the login for you, and cache secrets locally.
#!/usr/bin/env bash
## Dynamic Login
vault login -method="${DYNAMIC_AUTH_METHOD}" role=my-role
## OR AppRole Login
resp=$(vault write -format=json auth/approle/login role-id="${ROLE_ID}" secret-id="${SECRET_ID}")
VAULT_TOKEN=$(echo "${resp}" | jq -r .auth.client_token)
export VAULT_TOKEN
Then, pull down the AWS dynamic secret. Each time you read a creds endpoint you will get a new credential pair, so it is important not to make multiple API calls here, and instead cache the entire API response, then parse the response for each necessary field.
#!/usr/bin/env bash
resp=$(vault read -format=json aws/creds/my-role)
AWS_ACCESS_KEY_ID=$(echo "${resp}" | jq -r .data.access_key)
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=$(echo "${resp}" | jq -r .data.secret_key)
export AWS_SECRET_ACCESS_KEY
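On point 4 of the question (with the disclaimer above in mind): the AWS CLI and SDKs read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment on their own, so no aws configure rerun should be needed; the backup step in alfa.sh can consume the exported variables directly. A minimal sketch (the bucket name and file path are placeholders):
#!/usr/bin/env bash
# assumes this runs in the same shell session that exported the credentials above
aws s3 cp /var/backups/db.dump "s3://my-backup-bucket/$(date +%F)/db.dump"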
This is a very general answer establishing a pattern. Your environment particulars will determine manner of execution. You can improve this pattern by leveraging features like CIDR binds, number of uses of auth credentials, token wrapping, and delivery via CI solution.
I am using Keycloak to authenticate a Vue app that is running on Docker. Currently, my configuration includes using grant_type=password along with client-id and client-secret to authenticate a client.
Because I want to make the client-secret configurable, what is the best way to supply the Keycloak client-secret in docker-compose?
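What I have in mind is roughly the following pattern (the variable name and file layout are just placeholders): keep the secret out of docker-compose.yml and let Compose substitute it from an .env file or the shell environment.
# .env kept next to docker-compose.yml and out of version control:
$ echo 'KEYCLOAK_CLIENT_SECRET=change-me' > .env
# docker-compose.yml then passes it into the Vue service, e.g.
#   environment:
#     - KEYCLOAK_CLIENT_SECRET=${KEYCLOAK_CLIENT_SECRET}
$ docker-compose up -d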
Authenticating with service account using gcloud
We are using the command below to activate a service account using a .json key file.
gcloud auth activate-service-account <service_account> --key-file <file_name>
After doing this we are able to deploy templates.
But we are not supposed to keep the .json key file on the server for authentication purposes.
Is there any other way of authenticating for deploying templates?
Is there any way to deploy templates using a client secret and client ID, without using the .json file?
To authorize Cloud SDK tools without storing a private key, you can alternatively use tokens; see OAuth:
gcloud init on your local terminal, see Run gcloud init documentation
gcloud init on Compute Engine VM Instance, see Set up gcloud compute documentation
To avoid prompts, provide parameters for gcloud init on the command line (this works only when $ gcloud config set disable_prompts false):
$ gcloud init --account=[account-name] --configuration=[config-name] --project=[prj-name] --console-only
For more details, see the documentation on Managing SDK Configurations and Managing SDK Properties.
There is also Google Cloud Shell, with 5GB of persistent disk storage and no additional authorization required to use the Cloud SDK; see Starting Cloud Shell.
To provide authorization you can also use the Cloud Identity and Access Management API. You may also find helpful the answer to a similar question on Stack Overflow.
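If the templates are Deployment Manager templates, another key-free option worth noting (a sketch; the service account, deployment and file names are placeholders, and it assumes the caller has the Service Account Token Creator role on that account) is service-account impersonation, so no key file is stored on the server:
$ gcloud config set auth/impersonate_service_account deployer@my-project.iam.gserviceaccount.com
$ gcloud deployment-manager deployments create my-deployment --config config.yaml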
I have integrated milton webdav with hadoop hdfs and able to read/write files to the hdfs cluster.
I have also added the authorization part using Linux file permissions, so only authorized users can access the HDFS server; however, I am stuck on the authentication part.
It seems Hadoop does not provide any built-in authentication, and users are identified only through the Unix 'whoami', meaning I cannot enable a password for a specific user.
ref: http://hadoop.apache.org/common/docs/r1.0.3/hdfs_permissions_guide.html
So even if I create a new user and set permissions for it, there is no way to tell whether the user is authenticated or not. Two users with the same username and different passwords have access to all the resources intended for that username.
I am wondering if there is any way to enable user authentication in HDFS (either built into a newer Hadoop release or using a third-party tool like Kerberos, etc.).
Edit:
OK, I have checked and it seems that Kerberos may be an option, but I just want to know if there is any other alternative available for authentication.
Thanks,
-chhavi
Right now Kerberos is the only supported "real" authentication protocol. The "simple" protocol completely trusts the client's reported identity (its whoami information).
To set up Kerberos authentication I suggest this guide: https://ccp.cloudera.com/display/CDH4DOC/Configuring+Hadoop+Security+in+CDH4
msktutil is a nice tool for creating Kerberos keytabs on Linux: https://fuhm.net/software/msktutil/
When creating service principals, make sure you have correct DNS settings, i.e. if you have a server named "host1.yourdomain.com", and that resolves to IP 1.2.3.4, then that IP should in turn resolve back to host1.yourdomain.com.
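A quick way to sanity-check that forward/reverse agreement before creating the principals (using the example host name and IP from above):
$ host host1.yourdomain.com   # should return 1.2.3.4
$ host 1.2.3.4                # should map back to host1.yourdomain.com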
Also note that Kerberos Negotiate authentication headers might be larger than Jetty's built-in header size limit; in that case you need to modify org.apache.hadoop.http.HttpServer and add ret.setHeaderBufferSize(16*1024); in createDefaultChannelConnector(). I had to.