My setup is the following:
React-native app client -> AWS API Gateway -> AWS Lambda function -> AWS S3 -> AWS Transcribe -> AWS S3
I am successfully able to upload an audio file to an S3 bucket from the lambda, start the transcription and even access it manually in the S3 bucket. However when I try to access the json file with the transcription data using TranscriptFileUri I am getting 403 response.
On the s3 bucket with the transcriptions I have the following CORS configuration:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "ETag"
    ]
  }
]
My lambda function code looks like this:
response = client.start_transcription_job(
    TranscriptionJobName=jobName,
    LanguageCode='en-US',
    MediaFormat='mp4',
    Media={
        'MediaFileUri': s3Path
    },
    OutputBucketName='my-transcription-bucket',
    OutputKey=str(user_id) + '/'
)

while True:
    result = client.get_transcription_job(TranscriptionJobName=jobName)
    if result['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
        break
    time.sleep(5)

if result['TranscriptionJob']['TranscriptionJobStatus'] == "COMPLETED":
    data = result['TranscriptionJob']['Transcript']['TranscriptFileUri']
    data = requests.get(data)
    print(data)
In CloudWatch I get the following when printing the response: <Response [403]>.
As far as I can tell, your code is invoking requests.get(data) where data is the TranscriptFileUri. What does that URI look like? Is it signed? If not, as I suspect, then you cannot use requests to get the file from S3 (it would have to be a signed URL or a public object for this to work).
You should use an authenticated mechanism such as get_object.
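For example, a minimal sketch with boto3, assuming the path-style TranscriptFileUri that Transcribe returns when OutputBucketName is set (fetch_transcript is a hypothetical helper, not from the question; the Lambda's role also needs s3:GetObject on that bucket):

import json
import boto3
from urllib.parse import urlparse

s3 = boto3.client('s3')

def fetch_transcript(transcript_file_uri):
    # e.g. https://s3.us-east-1.amazonaws.com/my-transcription-bucket/<user_id>/<job>.json
    path = urlparse(transcript_file_uri).path.lstrip('/')
    bucket, key = path.split('/', 1)
    # Authenticated call using the Lambda role's credentials, instead of an unsigned HTTP GET
    obj = s3.get_object(Bucket=bucket, Key=key)
    return json.loads(obj['Body'].read())

Alternatively, the Lambda could call generate_presigned_url for that bucket/key and hand the resulting URL to the client.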
I am trying to view files from my S3 bucket. I am using a pre-signed URL and the React package react-file-viewer.
Whenever I call the signed URL through react-file-viewer I get a 403 Forbidden error. But if I copy and paste the pre-signed URL into my address bar I can view the file. I can also download the files and they open fine.
This is the response I get:
Request Method: HEAD
Status Code: 403 Forbidden
Remote Address: xx.x.xx.xx..x..x
Referrer Policy: strict-origin-when-cross-origin
In my S3 bucket, I have this as my CORS header:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "DELETE",
      "HEAD",
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "x-amz-server-side-encryption",
      "x-amz-request-id",
      "x-amz-id-2"
    ],
    "MaxAgeSeconds": 3000
  }
]
and my pre-signed URL is
var url = s3.getSignedUrl('getObject', {
  Bucket: BucketName,
  Key: fileURLData[fileSpot].file_url,
  Expires: signedUrlExpireSeconds,
})
Does this look like a CORS issue, or something to do with my S3 bucket permissions?
I am trying, in an AWS lambda, to get the bucket logging settings for my buckets. For this I enumerate the buckets with S3.listBuckets(), which works just fine. I then iterate over the bucket names like this (TypeScript):
const bucketNames = await getBucketNames() // <- works without problems
for (const bucketName of bucketNames) {
  try {
    console.log(`get logging for bucket ${bucketName}`) // <-- getting to this log
    const bucketLogging: GetBucketLoggingOutput = await s3.getBucketLogging({
      Bucket: bucketName,
      ExpectedBucketOwner: accountId
    }).promise()
    // check logging setup and adjust if necessary
  } catch (error) {
    console.log(JSON.stringify(error))
  }
}
The call to getBucketLogging() fails with:
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-19T11:16:26.671Z",
  "requestId": "****",
  "extendedRequestId": "****",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 70.19937788683632
}
The accountId that is passed in is definitely right (it's optional anyway); the lambda is in the same account as the bucket owner (which is the sole condition described in the docs at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLogging-property).
When doing this call from a terminal CLI I have no problems to get results, only when running from a lambda.
What am I missing or overlooking?
You should make sure to attach the respective IAM permissions to your lambda function's execution role. Just because the role is allowed to list your buckets (s3:ListAllMyBuckets) doesn't mean it is also permitted to call s3:GetBucketLogging. Please refer to the following docs for more details on S3 IAM actions: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
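For example, a minimal policy statement sketch for the lambda's execution role (the broad Resource is a placeholder; scope it down to your bucket ARNs as appropriate):

{
  "Effect": "Allow",
  "Action": [
    "s3:ListAllMyBuckets",
    "s3:GetBucketLogging"
  ],
  "Resource": "*"
}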
Using Terraform scripts, I create a new EC2 instance, add a policy to access an S3 bucket, and supply a userdata script that runs aws s3 cp s3://bucket-name/file-name . to copy a file from that S3 bucket, among other commands.
In /var/log/cloud-init-output.log I see fatal error: Unable to locate credentials, presumably caused by the aws s3 cp ... line. When I execute the same command manually on the EC2 instance after it's been created, it works fine (which means the EC2 policy for bucket access is correct).
Any ideas why the aws s3 cp command doesn't work during userdata execution but works when the EC2 is already created? Could it be that the S3 access policy is only applied to the EC2 after the EC2 has been fully created (and after userdata has been run)? What should be the correct workaround?
data "aws_iam_policy_document" "ec2_assume_role" {
statement {
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com",
]
}
}
}
resource "aws_iam_role" "broker" {
name = "${var.env}-broker-role"
assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json
force_detach_policies = true
}
resource "aws_iam_instance_profile" "broker_instance_profile" {
name = "${var.env}-broker-instance-profile"
role = aws_iam_role.broker.name
}
resource "aws_iam_role_policy" "rabbitmq_ec2_access_to_s3_distro" {
name = "${env}-rabbitmq_ec2_access_to_s3_distro"
role = aws_iam_role.broker.id
policy = data.aws_iam_policy_document.rabbitmq_ec2_access_to_s3_distro.json
}
data "aws_iam_policy_document" "rabbitmq_ec2_access_to_s3_distro" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:GetObjectVersion"
]
resources = ["arn:aws:s3:::${var.distro_bucket}", "arn:aws:s3:::${var.distro_bucket}/*"]
}
}
resource "aws_instance" "rabbitmq_instance" {
iam_instance_profile = ${aws_iam_instance_profile.broker_instance_profile.name}
....
}
This sounds like a timing issue where cloud-init is executed before the EC2 instance profile is ready to use. In your cloud-init script, I would add a loop that retries a simple AWS CLI command, or query the metadata server for the IAM credentials of the EC2 instance.
As the documentation states, you receive the following response when querying the endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/iam_role_name:
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2017-05-17T15:09:54Z"
}
So your cloud-init/user-data script could wait until the Code attribute equals Success and then proceed with the other operations.
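For instance, a minimal sketch at the top of the user-data script, before any aws s3 cp call (the role name is a placeholder for whatever ${var.env}-broker-role resolves to):

#!/bin/bash
ROLE_NAME="dev-broker-role"   # placeholder: the name of the role attached via the instance profile
# Block until the instance profile credentials are available from the metadata server
until curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}" | grep -q Success; do
  echo "Waiting for instance profile credentials..."
  sleep 5
done
aws s3 cp s3://bucket-name/file-name .

In production you would probably also cap the number of retries so a broken profile doesn't hang the boot forever.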
I'm unable to read a file from an encrypted S3 bucket in a lambda.
Below is my policy document where I'm giving access to S3 as well as KMS. I've attached this policy to the lambda.
When I try to read a file from the bucket, I get an Access Denied error.
I'm adding a kms:RequestAlias condition to the KMS statement so that the lambda will only have access to keys which have mytoken in the alias.
I suspect this is where I'm making a mistake, because if I remove the condition, the lambda gets access to all keys and reads the encrypted file without any issues.
Can someone help me restrict access to only the keys which have mytoken in the alias?
data "aws_iam_policy_document" "lambda_s3_policy_doc" {
statement {
sid = ""
effect = "Allow"
resources = [
"arn:aws:s3:::mybucket*",
"arn:aws:s3:::mybucket*/*"
]
actions = [
"s3:AbortMultipartUpload",
"s3:CreateBucket",
"s3:DeleteObject",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts",
"s3:PutObject"
]
}
statement {
effect = "Allow"
actions = [
"kms:Decrypt",
"kms:DescribeKey",
"kms:Encrypt",
"kms:GenerateDataKey"
]
resources = ["*"]
condition {
test = "StringLike"
variable = "kms:RequestAlias"
values = [
"alias/*mytoken*"
]
}
}
}
What worked for me (I was trying to download files from a couple of encrypted buckets directly from the AWS console, so your case is in fact slightly different) was replacing kms:RequestAlias with kms:ResourceAliases.
statement {
  sid    = "AllowKMSAccessUsingAliases"
  effect = "Allow"
  actions = [
    "kms:Decrypt",
  ]
  resources = [
    "arn:aws:kms:eu-central-1:111111111111:key/*",
  ]
  condition {
    test     = "ForAnyValue:StringEquals"
    variable = "kms:ResourceAliases"
    values = [
      "alias/alias-bucket-1",
      "alias/alias-bucket-2",
    ]
  }
}
According to what the AWS documentation says, this makes sense, at least to me: you use kms:RequestAlias when the request itself refers to the key by its alias (for example, when the KeyId parameter is an alias name or alias ARN).
When you use kms:ResourceAliases, what gets checked is the alias associated with the KMS key involved in the operation, regardless of whether the alias was explicitly included in the request or not.
So your lambda function, when asking for the decryption of a file in the bucket, is probably referencing the KMS key by its ID rather than its alias, and if that is the case kms:RequestAlias won't work because there is no alias in the request to be checked.
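Applied to the question's policy, that would mean swapping the condition block to something like this (a sketch; kms:ResourceAliases is a multivalued condition key, so a ForAnyValue set operator is used, and the wildcard pattern is carried over from the question):

condition {
  test     = "ForAnyValue:StringLike"
  variable = "kms:ResourceAliases"
  values = [
    "alias/*mytoken*"
  ]
}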
I am trying to upload images to my S3 bucket when sending chat messages to my Aurora database, using AppSync with Lambda configured as its data source.
My resolver for the mutation is:
{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "field": "createMessage",
    "arguments": $utils.toJson($context.arguments)
  }
}
The messages are being saved correctly in the database; however, the S3 image files are not being saved in my S3 bucket. I believe I have configured everything correctly except for the resolver, which I am not sure about.
Uploading files with AppSync when the data source is Lambda is basically the same as for every other data source, and it does not depend on the resolver.
Just make sure you have your credentials for complex objects set up (JS example using the Amplify library for authorization):
import AWSAppSyncClient from 'aws-appsync'
import { Auth } from 'aws-amplify'

const client = new AWSAppSyncClient({
  url: /*your endpoint*/,
  region: /*your region*/,
  complexObjectsCredentials: () => Auth.currentCredentials(),
})
You also need to provide the S3 complex object as an input type for your mutation:
input S3ObjectInput {
  bucket: String!
  key: String!
  region: String!
  localUri: String
  mimeType: String
}
Everything else will work just fine even with a Lambda data source. Here you can find more information related to your question (in that example DynamoDB is used, but it is basically the same for Lambda): https://stackoverflow.com/a/50218870/9359164
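As a rough sketch of how that input type is wired into a schema (the type and field names here are hypothetical, not taken from your API):

type S3Object {
  bucket: String!
  key: String!
  region: String!
}

type Message {
  id: ID!
  content: String
  attachment: S3Object
}

input CreateMessageInput {
  content: String
  attachment: S3ObjectInput
}

type Mutation {
  createMessage(input: CreateMessageInput!): Message
}

When the mutation argument carries bucket, key, region, localUri and mimeType, the AppSync client uploads the local file to S3 before invoking your resolver, so the Lambda only ever sees the bucket/key/region metadata.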