I'm writing a Node.js app and trying to send Winston log data to an S3 bucket using s3-streamlogger, but I get access denied.
Testing from the CLI, reading and writing work fine. The only reason I can think of is that we are using MFA on our AWS account.
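If MFA really is the blocker, I assume the app would need temporary session credentials from STS instead of the plain access key, something like the flow below (sketched in Python with boto3 purely to illustrate, since my app is Node; the MFA device ARN and token code are placeholders). I just don't know how to feed the resulting session token into s3-streamlogger.

import boto3

# Illustrative sketch only: exchange the long-lived key plus an MFA code for
# temporary session credentials.
sts = boto3.client('sts')
response = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber='arn:aws:iam::123456789012:mfa/my-user',  # placeholder MFA device ARN
    TokenCode='123456',                                    # placeholder one-time code
)

creds = response['Credentials']
# These three values (including the session token) would then have to be passed
# to whatever client actually writes to the bucket.
print(creds['AccessKeyId'], creds['SecretAccessKey'], creds['SessionToken'])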
Any ideas or workarounds?
Thanks
I've confirmed that my S3 credentials are set correctly and even tried full permissions on the bucket and still receive the same message. On the Azure side, I've added Storage Blob Data Owner permissions to my account and can list files I manually upload through the portal with the credentials I used (signing in through AAD rather than using a token).
Any help is appreciated here.
As it turned out, there were no issues with either the Azure or AWS configuration. Trying the same commands in a Linux VM worked perfectly. If you're experiencing this error, it may be an issue with the AWS CLI or some other local configuration.
When I try to write to an S3 bucket that is AES-256 encrypted from my Spark Streaming app running on EMR, it throws a 403. For whatever reason, the Spark session is not honoring the "fs.s3a.server-side-encryption-algorithm" config option.
Here is the code I am using:
// S3A credentials for the job
sparkSession.sparkContext().hadoopConfiguration().set("fs.s3a.access.key", accessKeyId);
sparkSession.sparkContext().hadoopConfiguration().set("fs.s3a.secret.key", secretKeyId);
// Request AES-256 server-side encryption for objects written through s3a://
sparkSession.sparkContext().hadoopConfiguration().set("fs.s3a.server-side-encryption-algorithm", "AES256");
When I use regular Java code with the AWS SDK, I can upload the files without any issues.
Somehow the Spark session is not honoring this.
Thanks
Sateesh
Able to resolve it. Silly mistake on my part.
We need to set the following property as well:
sparkSession.sparkContext().hadoopConfiguration().set("fs.s3.enableServerSideEncryption","true");
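In case anyone is doing the same thing from PySpark, here is a rough sketch of the equivalent configuration. Everything except the two encryption properties above is a placeholder (credentials, app name, bucket path), and it assumes the hadoop-aws/aws-sdk jars are already on the classpath as they are on EMR; Spark copies spark.hadoop.* options into the Hadoop configuration at session start.

from pyspark.sql import SparkSession

# Minimal sketch: the same S3A properties supplied through spark.hadoop.* keys.
spark = (
    SparkSession.builder
    .appName('s3a-sse-sketch')
    .config('spark.hadoop.fs.s3a.access.key', 'ACCESS_KEY_ID')      # placeholder
    .config('spark.hadoop.fs.s3a.secret.key', 'SECRET_ACCESS_KEY')  # placeholder
    .config('spark.hadoop.fs.s3a.server-side-encryption-algorithm', 'AES256')
    .config('spark.hadoop.fs.s3.enableServerSideEncryption', 'true')
    .getOrCreate()
)

# Write something small to confirm the 403 is gone (placeholder bucket/path).
spark.range(10).write.mode('overwrite').parquet('s3a://your-bucket/sse-test/')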
I got an error something like this:
[google-id].gserviceaccount.com does not have storage.objects.create access to upload
I already made my bucket public and set the service account as owner. Did I miss something in the setup?
I was trying to upload files from my Express app to test it; I am new to this.
Can anyone tell me what's wrong and what I should set?
One way to give permissions to your application/user would be through the following command:
gsutil iam ch serviceAccount:[google-id].gserviceaccount.com:objectCreator gs://[YOUR_BUCKET]
This is fully documented at Using Cloud IAM permissions. You can also perform this action using the Cloud Console; an example of using that interface is provided in the documentation linked above.
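If you would rather grant the role from code, a rough equivalent using the google-cloud-storage Python client would be the following. The bucket name and the service-account address are placeholders, and it assumes the client library is installed and authenticated as a project admin.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('YOUR_BUCKET')  # placeholder bucket name

# Fetch the current IAM policy, append an objectCreator binding for the
# service account (placeholder address), then write the policy back.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    'role': 'roles/storage.objectCreator',
    'members': {'serviceAccount:[google-id].gserviceaccount.com'},
})
bucket.set_iam_policy(policy)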
When I access S3 through my code, which is running under ECS, I can access only some files; for a few others I get an access denied exception. I have access to the bucket, and the objects are not restricted at the object level. I am using DefaultAWSCredentialsProviderChain.
But the same files can be downloaded using the AWS CLI under the same assumed role.
Can anyone help me understand what the issue might be?
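To rule out a credentials mix-up, I assume I could print which identity the default credential chain actually resolves to inside the task and compare it with what "aws sts get-caller-identity" reports from the CLI session that works. A rough sketch of that check (written in Python/boto3 here only for brevity; the Java SDK exposes the same GetCallerIdentity call):

import boto3

# Print the account and ARN the default credential chain resolves to, for
# comparison with the identity reported by the CLI session that can download the files.
identity = boto3.client('sts').get_caller_identity()
print(identity['Account'])
print(identity['Arn'])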
I have a private installation of a server which is fully S3-compatible. I have one bucket there and I can see it using S3 Browser. I am trying to interact with the server using boto3 for Python (using the same credentials that I use in S3 Browser); however, for any request I get a NoSuchBucket error. This is my code:
import boto3

# hostname, access_key and secret_key are defined elsewhere in my script
s3 = boto3.resource('s3',
                    endpoint_url=hostname,
                    use_ssl=False,
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key)

# List every bucket visible to these credentials
for bucket in s3.buckets.all():
    print(bucket.name)
Initially I thought there was an issue with the credentials, but then I was able to interact with the server through the S3 Browser client.
So the problem is: I really don't understand the error code, since I am not querying any particular bucket. What could be the cause of the problem?
Problem solved! It was a DNS resolution issue.
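For anyone who hits the same NoSuchBucket symptom against an S3-compatible endpoint: one DNS-related pitfall I have seen mentioned is virtual-hosted-style addressing, where the bucket name can get prepended to the endpoint hostname and that subdomain may not resolve on a private server. If that turns out to be the issue, the addressing style can be forced to path so every request stays on the bare hostname. A rough sketch, with the endpoint and credentials as placeholders:

import boto3
from botocore.client import Config

# Placeholders: adjust for your private endpoint and credentials.
hostname = 'http://my-private-s3.example:9000'
access_key = 'ACCESS_KEY'
secret_key = 'SECRET_KEY'

# Force path-style addressing so requests go to hostname/bucket rather than
# bucket.hostname, which would require wildcard DNS for the endpoint.
s3 = boto3.resource('s3',
                    endpoint_url=hostname,
                    use_ssl=False,
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key,
                    config=Config(s3={'addressing_style': 'path'}))

for bucket in s3.buckets.all():
    print(bucket.name)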