How do I access an S3 bucket from an IAM account using Java - amazon-s3

I have an S3 account with AmazonS3FullAccess, but when I try to use it to run s3.listObjects("name") I get a 403 error...
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>59C510407179770D</RequestId><HostId>aLPzqYkTKx6nkUWVtZWYS+2fYexzniKWkn2D9+aG6pdxBAjtxAcC85uvGC4HqDnQIifLaf+oy1E=</HostId></Error>
s3.doesBucketExistV2("name") returns true.
My policy looks like this...
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
Do I need to add the user somewhere?
Update
Looks like it could be a problem with not getting the AWS creds (which is weird because of this line)...
Deprecated. By doesBucketExistV2(String) which will correctly throw an exception when credentials are invalid instead of returning true. See Issue #1256.
If I run ((AmazonS3Client) s3).awsCredentialsProvider.getCredentials() it returns null. My creds are in an amazon.properties file like this...
@PropertySources({
    @PropertySource("classpath:amazon.properties")
})
// amazon.properties
amazon.accessKey=${AMZN_ACCESS_KEY}
amazon.secretKey=${AMZN_SECRET_KEY}
aws.accessKeyId=${AMZN_ACCESS_KEY}
aws.secretKey=${AMZN_SECRET_KEY}
and echo $AMZN_ACCESS_KEY returns the value I would expect.
Update 2
It appears to be something with the properties not getting read properly. If I am explicit like this...
BasicAWSCredentials awsCreds = new BasicAWSCredentials(key, secret);
final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withRegion(service.getRegion())
        .build();
It worked, so two questions: 1) Why does doesBucketExistV2 return true even when I am not authenticated properly? 2) Why are the system properties not working?

You need to pass the bucket name in the request parameters. The sample code is below; refer to the listObjects documentation for details.
/* The following example lists two objects in a bucket. */
var params = {
    Bucket: "name",
    MaxKeys: 2
};
s3.listObjects(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data); // successful response
});
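The snippet above is the JavaScript SDK example; since the question uses the AWS SDK for Java, a rough Java (v1) equivalent is sketched below. The bucket name mirrors the question, but the credentials and region are placeholders, not values taken from the question.
// A minimal sketch, assuming SDK for Java v1 and placeholder credentials/region.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListTwoObjects {
    public static void main(String[] args) {
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"); // placeholders
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withRegion("us-east-1") // assumed region
                .build();

        // Equivalent of the JS params object: bucket name plus MaxKeys
        ListObjectsRequest request = new ListObjectsRequest()
                .withBucketName("name")
                .withMaxKeys(2);
        ObjectListing listing = s3.listObjects(request);
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey());
        }
    }
}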

Related

AWS S3 getBucketLogging fails when called from lambda function

I am trying, in an AWS Lambda, to get the bucket logging settings for my buckets. For this I enumerate the buckets with S3.listBuckets() - which works just fine. I then iterate over the bucket names like this (TypeScript):
const bucketNames = await getBucketNames() // <- works without problems
for (const bucketName of bucketNames) {
    try {
        console.log(`get logging for bucket ${bucketName}`) // <-- getting to this log
        const bucketLogging: GetBucketLoggingOutput = await s3.getBucketLogging({
            Bucket: bucketName,
            ExpectedBucketOwner: accountId
        }).promise()
        // check logging setup and adjust if necessary
    } catch (error) {
        console.log(JSON.stringify(error))
    }
}
The call to getBucketLogging() fails with:
{
    "message": "Access Denied",
    "code": "AccessDenied",
    "region": null,
    "time": "2022-07-19T11:16:26.671Z",
    "requestId": "****",
    "extendedRequestId": "****",
    "statusCode": 403,
    "retryable": false,
    "retryDelay": 70.19937788683632
}
The accountId that is passed in is definitely right (it's optional anyway); the lambda is in the same account as the bucket owner (which is the sole condition described in the docs at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLogging-property).
When making this call from the CLI in a terminal I have no problem getting results; it only fails when running from a Lambda.
What am I missing or overlooking?
You should make sure to attach the respective IAM permissions to your Lambda function's role. Just because it is allowed to list buckets (s3:ListAllMyBuckets) doesn't mean it is also permitted to read the bucket logging configuration (s3:GetBucketLogging). Please refer to the following docs for more details on S3 IAM actions: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
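For illustration only, a minimal statement for the Lambda execution role might look like the sketch below; the action names come from the S3 authorization reference linked above, while the broad "Resource": "*" scope is just an assumption for the example and should be narrowed to your buckets.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLogging"
            ],
            "Resource": "*"
        }
    ]
}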

500 Internal Server Error when uploading to an AWS S3 bucket from Nuxt

I am trying to upload a file to AWS S3 using aws-sdk v3 from a Nuxt app's Vue Component.
Here's how I upload it.
<script>
export default {
  ...
  methods: {
    onSubmit(event) {
      event.preventDefault()
      this.addPhoto()
    },
    addPhoto() {
      // Load the required clients and packages
      const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
      const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')
      const {
        S3Client,
        PutObjectCommand,
        ListObjectsCommand,
        DeleteObjectCommand,
      } = require('@aws-sdk/client-s3')
      const REGION = 'us-east-1' // REGION
      const albumBucketName = 'samyojya-1'
      const IdentityPoolId = 'XXXXXXX'
      const s3 = new S3Client({
        region: REGION,
        credentials: {
          accessKeyId: this.$config.CLIENT_ID,
          secretAccessKey: this.$config.CLIENT_SECRET,
          sessionToken: localStorage.getItem('accessToken'),
        },
      })
      var file = this.formFields[0].fieldName
      var fileName = this.formFields[0].fieldName.name
      var photoKey = 'user-dp/' + fileName
      var s3Response = s3.send(
        new PutObjectCommand({
          Bucket: albumBucketName,
          Key: photoKey,
          Body: file,
        }),
      )
      s3Response
        .then((response) => {
          console.log('Successfully uploaded photo.' + JSON.stringify(response))
        })
        .catch((error) => {
          console.log(
            'There was an error uploading your photo: Error stacktrace' + JSON.stringify(error.message),
          )
          const { requestId, cfId, extendedRequestId } = error.$metadata
          console.log({ requestId, cfId, extendedRequestId })
        })
    },
    ...
  },
}
</script>
The issue now is that the browser complains about CORS.
This is my CORS configuration on AWS S3
I suspect something in how the upload request is created using the SDK (I'm open to using an API that is better than what I'm using).
A Nuxt setting that allows CORS.
Something else in the S3 CORS config under Permissions.
The Network tab in Chrome dev tools shows Internal Server Error (500) for the preflight request. (Don't know why we see 2 entries here.)
Appreciate any pointers on how to debug this.
I was having the same issue today. The S3 logs were saying it returned a 200 code response, but Chrome was seeing a 500 response. In Safari, the error showed up as:
received 'us-west-1'; expected 'eu-west-1'
Adding region: 'eu-west-1' (i.e. the region where the bucket was created) to the parameters when creating the S3 service solved the issue for me.
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html#setting-region-constructor
In the bucket policy, use this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "https://example/*"
                }
            }
        }
    ]
}
and use the region of your bucket
const s3 = new aws.S3({
    apiVersion: 'latest',
    accessKeyId: process.env.AWS_ACCESS_KEY_ID_CUSTOM,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_CUSTOM,
    region: 'us-west-1',
})
I am having the same problem, but according to the docs you should be using Cognito Identity to access the bucket. In v3, for clients to be able to access buckets from the browser, you must use Cognito Identity to authenticate users in order to have access to bucket/object commands. I am currently trying to implement this, so I am not 100% sure how to do it, just the process. Feel free to take a look; I hope this helps.
Cognito SDK link: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
Example: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-browser-credentials-cognito.html
The error needs to be fixed on the backend, since it's CORS. It clearly states that the Access-Control-Allow-Origin header is missing.
So, checking it in the official AWS docs gives you the answer: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
I was doing multiple things wrong here. Every answer on this post helped me make a little progress while debugging. Can't thank you enough!
My bucket policy was not using the role-based Allow/Deny that has to correspond to the authenticated role of my Cognito identity pool.
I needed to correctly configure the authentication provider as a Cognito user pool.
Make sure the region is right; the Cognito region could be different from the S3 region.
Make sure the CORS policy includes relevant information like "Access-Control-Allow-Origin".
Double-check that the token includes the right credentials. The cognito decode-verify tool comes in very handy here.
I was testing stand-alone from the browser, but this is not a good approach. Use an API server to take the file and push it to S3 from there.

aws cloudsearch uploadDocument returning signaturedoesnotmatch

I'm using com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient to call uploadDocuments(), passing the AWS secret key, access key ID, and endpoints.
Access Policy - Access all for all services
It is returning
Service: AmazonCloudSearchDomain; Status Code: 403; Error Code:
SignatureDoesNotMatch;
But with the same package I have tried search() with the same credentials, and I get search results correctly as expected.
Can someone please help with the above exception?
This may be caused by your access policy allowing public access to search requests, but not upload. So there may be an issue with the credentials being passed, but you don't see that error when performing search requests, because credentials aren't necessary for that type of request.
For example, this access policy below would allow anyone to search without presenting credentials. But any other operation (like uploading documents) would require a valid set of credentials that have access to the CloudSearch domain.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "cloudsearch:search"
        }
    ]
}
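If the credentials being passed are the problem, a minimal sketch of wiring them in explicitly with the AWS SDK for Java v1 is below; the key values, endpoint, and region are placeholders, and uploadDocuments() must be sent to the domain's document endpoint (doc-...), not the search endpoint.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomain;
import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClientBuilder;

public class CloudSearchDomainFactory {
    public static AmazonCloudSearchDomain build() {
        // Placeholder credentials; a wrong secret key here is a typical cause of SignatureDoesNotMatch
        BasicAWSCredentials creds = new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY");
        return AmazonCloudSearchDomainClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(creds))
                // use your domain's document endpoint for uploads; the value below is a placeholder
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "doc-yourdomain-xxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com", "us-east-1"))
                .build();
    }
}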

Terraform s3 event notification error

I am having trouble trying to create S3 event notifications. Does anyone know the resolution to this?
Error is:
Error applying plan:
1 error(s) occurred:
* module.Test-S3-Bucket.aws_s3_bucket_notification.s3-notification: 1 error(s) occurred:
* aws_s3_bucket_notification.s3-notification: Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
    status code: 400, request id: AD9B5BF2FF84A6CB, host id: ShUVJ+TdkpqAZfpeDM3grkF9Vue3Q/AF0LydchperKTF6XdQyDM6BisZi/38pGAh/ZqS+gNyrSM=
Below is the code that gives me the error:
resource "aws_s3_bucket" "s3-bucket" {
bucket = "${var.bucket_name}"
acl = ""
lifecycle_rule {
enabled = true
prefix = ""
expiration {
days = 45
}
}
tags {
CostC = "${var.tag}"
}
}
resource "aws_s3_bucket_notification" "s3-notification" {
bucket = "${var.bucket_name}"
topic {
topic_arn = "arn:aws:sns:us-east-1:1223445555:Test"
events = [ "s3:ObjectCreated:*", "s3:ObjectRemoved:*" ]
filter_prefix = "test1/"
}
}
If you haven't done it already, you need to specify a policy on the topic that grants the SNS:Publish permission to S3 (only from the bucket specified in the Condition attribute) - if you are also provisioning the topic via Terraform then something like this should do it (we know, as it caught us out just a few days ago too!):
resource "aws_sns_topic" "my-sns-topic" {
name = "Test"
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:us-east-1:1223445555:Test",
"Condition":{
"ArnLike":{"aws:SourceArn":"${aws_s3_bucket.s3-bucket.arn}"}
}
}]
}
POLICY
}
Hope that helps.
Well, I know that this is not your exact case, but I had the same error and didn't manage to find an answer here. Since this post is the first that Google gave me, I will leave the answer to my case here in the hope that it will help someone else.
So, I noticed that after terraform apply I had this error, went to the UI to see what happened, and found this message:
The Lambda console can't validate one or more event sources for this trigger. The most common cause is when a source ARN includes a wildcard (*) character. You can manage unvalidated triggers using the AWS CLI or AWS SDK.
And guess what? I really had a wildcard (*) character in the ARN, like this:
source_arn = "${aws_s3_bucket.bucket.arn}/*"
So I changed it to:
source_arn = aws_s3_bucket.bucket.arn
And it worked. So, if you read this - there might be the same mistake in your case.

Amazon S3 Access image by url

I have uploaded an image to Amazon S3 storage. But how can I access this image by URL? I have made the folder and file public but still get an AccessDenied error if I try to access it by URL: https://s3.amazonaws.com/bucket/path/image.png
This is an older question, but for anybody who comes across this question, once I made the file public I was able to access my image as https://mybucket.s3.amazonaws.com/myfolder/afile.jpg
In my case I had uploaded the image privately, so I was unable to access it. I used the following code:
const AWS = require('aws-sdk')
const myBucket = 'BUCKET_NAME'
const myKey = 'FILE_NAME.JPG'
const signedUrlExpireSeconds = 60 * 1

const s3 = new AWS.S3({
    accessKeyId: "ACCESS_KEY_ID",
    signatureVersion: 'v4',
    region: 'S3_REGION',
    secretAccessKey: "ACCESS_SECRET"
});

const url = s3.getSignedUrl('getObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds
})

console.log(url)
You can access your image by using:
https://s3.amazonaws.com/bucketname/foldername/imagename.jpg
or if there are no folders, you can do:
https://s3.amazonaws.com/bucketname/imagename.jpg
Upvote if this helps. It conforms to AWS as of 30 May 2017.
Seems like you can now simply right-click on any folder inside a bucket and select 'Make Public' to make everything in that folder public. It may not work at the bucket level itself.
One of the easiest ways is to add a bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "MakeItPublic",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::yourbucketname.com/*"
    }]
}
Make sure you access the image using the same case in which it was uploaded and stored on S3.
For example, if you uploaded image_name.JPG, you should use the same name, not image_name.jpg.
For future reference, if you want to access a file in Amazon S3 the URL needs to be something like:
bucketname.s3.region.amazonaws.com/foldername/image.png
Example: my-awesome-bucket.s3.eu-central-1.amazonaws.com/media/img/dog.png
Don't forget to set the object to public.
Inside S3, if you click on the object you will see a field called Object URL. That's the object's web address.
On your console, right-click on the image you want to access and click on "Make Public"; when that's done, right-click on the image again, click on "Properties", and copy the Link from the Extended view.
I came across this question whilst looking for a solution to a similar problem with being unable to access images.
It turns out that images with a % in their filename, when being accessed, must have the % symbol URL encoded to %25.
i.e. photo%20of%20a%20banana%20-%2019%20june%202016.jpg needs to be accessed via photo%2520of%2520a%2520banana%2520-%252019%2520june%25202016.jpg.
However, URL encoding the full path didn't work for us, since the slashes, etc would be encoded, and the path would not work. In our specific case, simply replacing % with %25 in all access paths made the difference.
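As a minimal sketch of that replacement in Java (the variable names, bucket, and key below are made up for illustration, not taken from the answer):
// Replace each raw '%' in the object key with its encoded form "%25",
// leaving slashes and the rest of the path untouched.
String key = "photo%20of%20a%20banana%20-%2019%20june%202016.jpg";
String encodedKey = key.replace("%", "%25");
String url = "https://my-awesome-bucket.s3.eu-central-1.amazonaws.com/" + encodedKey; // placeholder bucket/host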
I was having the same problem; the issue was spaces in the image URL. I did this to make it work:
String imgUrl=prizes.get(position).getImagePreview().replaceAll("\\s","%20");
Now pass this URL to Picasso:
Picasso.with(mContext)
.load(imgUrl)
.into(mImageView);
Just add the permission shown in the image below.
To access private images via URL you must provide Query-string authentication. Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
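As a hedged example with the AWS SDK for Java v1 (the bucket, key, region, and expiry below are placeholders), a query-string-authenticated URL carrying those X-Amz-* parameters can be generated as a pre-signed URL:
import java.net.URL;
import java.util.Date;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUrlExample {
    public static void main(String[] args) {
        // Credentials come from the default provider chain; the region is an assumption.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion("eu-central-1").build();

        Date expiration = new Date(System.currentTimeMillis() + 60 * 60 * 1000); // valid for one hour
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-awesome-bucket", "media/img/dog.png") // placeholder bucket/key
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiration);
        URL url = s3.generatePresignedUrl(request);
        System.out.println(url);
    }
}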
Just an addition to @akotian's answer: you can get the object URL by clicking the object, as follows,
and to make it publicly accessible you can set the ACL programmatically while uploading the object to the bucket,
e.g. a sample Java request:
PutObjectRequest putObjectRequest = PutObjectRequest.builder()
        .contentType(contentType)
        .bucket(LOGO_BUCKET_NAME)
        .key(LOGO_FOLDER_PREFIX + fileName)
        .acl(ObjectCannedACL.PUBLIC_READ) // this makes it public-read
        .metadata(metadata)
        .build();
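For completeness, a minimal sketch of actually sending that request with the v2 S3Client; the client configuration, region, and file path are assumptions, not part of the original answer.
import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// build a client (assumed region) and upload the file using the request built above
S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .build();
s3Client.putObject(putObjectRequest, RequestBody.fromFile(Paths.get("logo.png")));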
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
            ]
        }
    ]
}
Use this policy for that bucket, which makes it public.
Adding a bucket policy worked for me:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourbucketname/*"
        }
    ]
}
Turn off Block public access (bucket settings) from the Permissions tab inside your bucket. You also need to edit the permissions of the object: provide Read access to the Everyone (public access) grantee, then check "I understand the effects of these changes on this object." and save changes.
Change your bucket. It may work.