I am building an HTTP API using sam local start-api. Each endpoint of this API is mapped to a Lambda handler I have written in JavaScript.
One of these Lambda handlers needs to download and upload files from S3, for which I am using the S3Client from @aws-sdk/client-s3. I have tried to initialize the client as follows:
const s3Client = new S3Client({
  region: "eu-west-1"
});
expecting that it reads the credentials from my ~/.aws/credentials file, but it does not. All operations via this client fail due to lack of permissions.
I would like to know the correct way to use this S3Client from within a Lambda handler that I am testing locally with sam local.
If you're not using the default profile in your AWS credentials file, SAM CLI commands have a --profile option to specify which profile to use.
For example:
sam local start-api --profile my_profile
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-api.html
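With --profile, the handler itself shouldn't need any credential-specific code. Here's a minimal sketch of what I'd expect to work (the bucket and key are placeholders, and I'm assuming a reasonably recent @aws-sdk/client-s3): as far as I know, sam local resolves the profile and passes the credentials into the Lambda container as environment variables, where the SDK's default credential provider chain picks them up.

const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

// No explicit credentials here: the SDK's default provider chain reads the
// AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values that sam local injects
// into the container when you pass --profile.
const s3Client = new S3Client({ region: "eu-west-1" });

exports.handler = async () => {
  // Placeholder bucket and key, for illustration only.
  const data = await s3Client.send(
    new GetObjectCommand({ Bucket: "my-bucket", Key: "my-key.txt" })
  );
  const body = await data.Body.transformToString();
  return { statusCode: 200, body };
};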
Related
I have a .NET Core API that retrieves and uploads files to AWS S3. It works when I run it locally, since I have saved IAM credentials locally in another folder. However, when I deploy it as an AWS Lambda function and try to access S3, I get an AmazonS3Exception "Access Denied". I'm wondering how I can set up access to IAM credentials remotely as I have done locally?
You should be assigning an IAM role as the Lambda function's execution role. Your code should be able to pick that up and use it automatically. If your code isn't picking that up automatically then edit your question to show the relevant code.
My code uses the AWS Javascript SDK to upload to S3 directly from a browser. Before the upload happens, my server sends it a value to use for 'Authorization'.
But I see no way in the AWS.S3.upload() method where I can add this header.
I know that underneath the .upload() method, AWS.S3.ManagedUpload is used but that likewise doesn't seem to return a Request object anywhere for me to add the header.
It works successfully in my dev environment when I hardcode my credentials in the S3() object, but I can't do that in production.
How can I get the Authorization header into the upload() call?
Client Side
This post explains how to POST from an HTML form with a pre-generated signature:
How do you upload files directly to S3 over SSL?
Server Side
When you initialise the S3 client, you can pass the access key and secret.
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  accessKeyId: '[value]',
  secretAccessKey: '[value]'
});

// Placeholder values: fill in your bucket, object key and file contents.
const params = {
  Bucket: '[bucket-name]',
  Key: '[object-key]',
  Body: '[file contents or stream]'
};

s3.upload(params, function (err, data) {
  console.log(err, data);
});
Reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
Alternatively, if you are running this code inside AWS services such as EC2, Lambda, ECS, etc., you can assign an IAM role to the service you are using and attach the required S3 permissions to that role.
I suggest that you use presigned URLs.
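For example, here's a rough sketch of the presigned URL approach with SDK v2 as used above (bucket name, key and expiry are placeholders): the server signs the URL with its own credentials, and the browser then PUTs the file directly to that URL, so no Authorization header or credentials are needed on the client.

// Server side: generate a short-lived presigned PUT URL.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

const url = s3.getSignedUrl('putObject', {
  Bucket: '[bucket-name]',  // placeholder
  Key: '[object-key]',      // placeholder
  Expires: 60               // URL validity in seconds
});

// Client side (browser): upload straight to S3 using the URL from the server,
// e.g. fetch(url, { method: 'PUT', body: file });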
Basically, I use the Serverless Framework for a function that allows me to send/receive emails through Mailgun.
For this I have a config.js file set up in my serverless folder.
This config.js contains all my API keys, email address, login, etc. for my Mailgun function.
I want to use Google Cloud KMS to encrypt the resource config.js, because I am afraid my sensitive data gets stolen and misused.
The encrypted file is config.js.enc.
But serverless deploy does not decrypt my config.js.enc. It throws a resource/syntax error…
Any solutions/ideas for how I can make KMS work with my config.js file in my Serverless Framework setup?
I added AWS tags as well, because AWS has a KMS similar to Google Cloud's. But I actually think the real issue is with the Serverless Framework and getting encrypted files to work when deploying with the sls deploy command, though I could be mistaken.
The serverless framework appears to include native AWS SSM integrations:
functions:
  myfunc:
    # other config
    environment:
      TWITTER_ACCESS_TOKEN: ${ssm:myFunc}
However, as you noted, there's no similar functionality on GCP, so you'll need to roll some of this on your own. You may be interested in some of the strategies outlined in Secrets in Serverless:
Do you need the secrets?
It's always important to ask: do you actually need these secrets? Could you leverage cloud provider IAM (or even cross-cloud OIDC) instead of injecting secrets into the application? Where possible, try to leverage the IAM solution provided by the various clouds. Obviously, there are still quite a few cases where a secret is required.
Encrypted environment variables
Before the function is launched, you encrypt the plaintext secrets locally into ciphertext (encrypted strings). Here's an example with gcloud, but you can also use the API or other tools like HashiCorp Vault:
$ gcloud kms encrypt \
--ciphertext-file=- \
--plaintext-file=/path/to/my/secret \
--key=my-kms-key \
--key-ring=my-kms-keyring \
--location=us-east4 \
| base64
This will output a base64-encoded, encrypted string, which you then store in your config.js:
CiQAePa3VBJLbunLSqIJT+RS4nYiKdIaW6U69Y...
On startup, configure your application to:
Base64 decode the string
Decrypt the ciphertext using Cloud KMS
Store the plaintext in-memory for as long as the secret is needed
I'm not sure what language(s) you are using, but here's a Node.js sample. You can find a lot more samples on GitHub at sethvargo/secrets-in-serverless:
const cryptoKeyID = process.env.KMS_CRYPTO_KEY_ID;

const kms = require('@google-cloud/kms');
const client = new kms.v1.KeyManagementServiceClient();

let username;
client.decrypt({
  name: cryptoKeyID,
  ciphertext: process.env.DB_USER,
}).then(res => {
  username = res[0].plaintext.toString().trim();
}).catch(err => {
  console.error(err);
});

let password;
client.decrypt({
  name: cryptoKeyID,
  ciphertext: process.env.DB_PASS,
}).then(res => {
  password = res[0].plaintext.toString().trim();
}).catch(err => {
  console.error(err);
});

exports.F = (req, res) => {
  res.send(`${username}:${password}`);
};
Google Cloud Storage
Since you're on GCP, another option is to use Google Cloud Storage (GCS) directly to store the secrets. This would also decouple you from the Serverless Framework.
Make a bucket:
$ gsutil mb gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets
Make the bucket private:
$ gsutil defacl set private gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets
$ gsutil acl set -r private gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets
Write some secrets into the bucket. Even though they are uploaded as plaintext, they are encrypted at rest, and access is tightly controlled via IAM.
$ gsutil -h 'Content-Type: application/json' cp - gs://${GOOGLE_CLOUD_PROJECT}-serverless-secrets/app1 <<< '{"username":"my-user", "password":"s3cr3t"}'
Then create a service account which has permission to read from the bucket, and assign that service account to your functions.
Finally, read from the bucket at function start (Python example this time):
import os
import json

from google.cloud import storage

blob = storage.Client() \
    .get_bucket(os.environ['STORAGE_BUCKET']) \
    .get_blob('app1') \
    .download_as_string()

parsed = json.loads(blob)
username = parsed['username']
password = parsed['password']

def F(request):
    return f'{username}:{password}'
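Since your config.js suggests a Node.js project, a rough equivalent of the Python sample using the @google-cloud/storage client might look like this (STORAGE_BUCKET and the app1 object name mirror the example above; everything else is a placeholder):

const { Storage } = require('@google-cloud/storage');

// Download and parse the secrets object once, at cold start.
const storage = new Storage();

let secrets;
storage
  .bucket(process.env.STORAGE_BUCKET)
  .file('app1')
  .download()
  .then(([contents]) => {
    secrets = JSON.parse(contents.toString());
  })
  .catch(err => {
    console.error(err);
  });

exports.F = (req, res) => {
  res.send(`${secrets.username}:${secrets.password}`);
};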
Using the AWS CLI, I'm trying to run
aws cloudformation create-stack --stack-name FullstackLambda --template-url https://s3-us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yam --capabilities CAPABILITY_NAMED_IAM --region us-west-2
but I get the error
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
I have already set my credentials with
aws configure
PS I got the create-stack command from the AppSync docs (https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html)
Looks like you accidentally dropped the letter l at the end of the template file name:
LambdaCFTemplate.yam -> LambdaCFTemplate.yaml
First, make sure the S3 URL is correct. But since this is a 403, I doubt that's the case.
Your error could result from a few different scenarios.
1. If both the APIs and the IAM user are MFA protected, you have to generate temporary credentials using aws sts get-session-token and use them.
2. Use a role to provide CloudFormation read access to the template object in S3. First create an IAM role with read access to S3. Then create a parameter like the one below and reference it in the resource's IamInstanceProfile properties block (see the sketch after the snippet).
"InstanceProfile":{
"Description":"Instance Profile Name",
"Type":"String",
"Default":"iam-test-role"
}
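For illustration, referencing that parameter from a resource's properties could look roughly like this (the resource name and type are placeholders, not taken from your template):

"MyInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "IamInstanceProfile": { "Ref": "InstanceProfile" }
  }
}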
I'm trying to use jclouds to talk to an OpenStack/Swift storage cloud installation that only exposes an S3 API (it does not support the Swift/Rackspace API).
I tried:
Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_ENDPOINT, CLOUD_SERVICE_ENDPOINT);

// get a context that offers the portable BlobStore API
BlobStoreContext context = new BlobStoreContextFactory().createContext("aws-s3", ident,
        password, ImmutableSet.<Module> of(), overrides);
The server replies with an authentication error 403. Using the standard AWS sdk or python boto works fine, so it's not a server problem, but most likely incorrect use of jclouds.
jclouds in fact supports Swift, so you don't need to do anything special. I'd recommend using jclouds 1.3.1 and configuring the dependency org.jclouds.api/swift.
Then you just need to enter your endpoint, identity, and credential:
Properties overrides = new Properties();
overrides.setProperty("swift.endpoint", "http://1.1.1.1:8080/auth");

BlobStoreContext context = new BlobStoreContextFactory().createContext("swift", "XXXXXX:YYYYY",
        "password", ImmutableSet.<Module> of(), overrides);
The following should work for you. It is known to work on vBlob, for example.
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_VIRTUAL_HOST_BUCKETS;
...
Properties overrides = new Properties();
overrides.setProperty(PROPERTY_S3_VIRTUAL_HOST_BUCKETS, "false");
BlobStore blobstore = ContextBuilder.newBuilder(new S3ApiMetadata()) // or "s3"
    .endpoint("http://host:port")
    .credentials(accessKey, secretKey)
    .overrides(overrides)
    .buildView(BlobStoreContext.class).getBlobStore();
If your clone doesn't accept S3 requests at the root URL, you'll need to set another property accordingly:
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_SERVICE_PATH;
...
overrides.setProperty(PROPERTY_S3_SERVICE_PATH, "/services/Walrus");
...
.endpoint("http://host:port/services/Walrus")