Uploading large file to Google Cloud bucket failing - ruby-on-rails-3

I am uploading a large file from my local system or a remote_url to a Google Cloud Storage bucket. However, every time I get the error below.
/usr/local/lib/ruby/3.0.0/openssl/buffering.rb:345:in `syswrite': execution expired (Google::Apis::TransmissionError)
/usr/local/lib/ruby/3.0.0/openssl/buffering.rb:345:in `syswrite': execution expired (HTTPClient::SendTimeoutError)
I am using a CarrierWave initializer to configure my Google service account and its details. Please suggest whether there is any configuration I am missing, or anything I can add to increase the timeouts or retries.
My Carrierwave initializer:
begin
  CarrierWave.configure do |config|
    config.fog_provider = 'fog/google'
    config.fog_credentials = {
      provider: 'Google',
      google_project: '{project name}',
      # google_json_key_string: Rails.application.secrets.google_cloud_storage_credential_content
      google_json_key_location: '{My json key-file location}'
    }
    config.fog_attributes = {
      expires: 600000,
      open_timeout_sec: 600000,
      send_timeout_sec: 600000,
      read_timeout_sec: 600000,
      fog_authenticated_url_expiration: 600000
    }
    config.fog_authenticated_url_expiration = 600000
    config.fog_directory = 'test-bucket'
  end
rescue => error
  puts error.message
end

This might have to do with the amount of time between when the connection is initialized and when it actually gets used. Adding persistent: false to the fog_credentials should make it create a new connection for each request. This is a bit less performant, but it should at least work consistently, unlike what you appear to be running into presently.
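A minimal sketch of that change against the initializer above (only the persistent: false line is new; everything else mirrors the question's configuration):
CarrierWave.configure do |config|
  config.fog_provider = 'fog/google'
  config.fog_credentials = {
    provider: 'Google',
    google_project: '{project name}',
    google_json_key_location: '{My json key-file location}',
    persistent: false # open a fresh connection for each request instead of reusing one
  }
  config.fog_directory = 'test-bucket'
end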

Related

node S3 Object Storage Linode

I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but everything I try doesn't work. I'm not sure what the correct endpoint should be. For testing purposes, my bucket is set to public read/write.
const s3 = new S3({
  endpoint: "https://linodeobjects.com",
  region: eu-central-1,
  accesKeyId: <accesKey>,
  secretAccessKey: <secretKey>,
});
const params = {
  Bucket: bucketName,
  Key: "someKey",
  Expires: 60,
};
const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:
code: 'CredentialsError',
time: 2021-07-15T08:29:50.000Z,
retryable: true,
originalError: {
  message: 'Could not load credentials from any providers',
  code: 'CredentialsError',
  time: 2021-07-15T08:29:50.000Z,
  retryable: true,
  originalError: {
    message: 'EC2 Metadata roleName request returned error',
    code: 'TimeoutError',
    time: 2021-07-15T08:29:49.999Z,
    retryable: true,
    originalError: [Object]
  }
}
}
It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that, from all the ways the SDK tries to find credentials, it cannot find any that it can use to sign the URL.
This might be unrelated, but according to the documentation the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." In the code example above, that is not the case.
You should set the endpoint to be eu-central-1.linodeobjects.com. When using Linode Object Storage, the region is not determined by the endpoint that you use.
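Putting those two points together, a rough sketch of a corrected client setup might look like the following; the bucket and environment-variable names are placeholders, and note that the SDK option is spelled accessKeyId, not accesKeyId as in the snippet above:
const AWS = require("aws-sdk");

// Region-specific Linode endpoint, as suggested above; credentials come from
// placeholder environment variables here.
const s3 = new AWS.S3({
  endpoint: "https://eu-central-1.linodeobjects.com",
  region: "eu-central-1",
  accessKeyId: process.env.LINODE_ACCESS_KEY,
  secretAccessKey: process.env.LINODE_SECRET_KEY,
});

async function presignUpload(bucketName, key) {
  // Pre-sign a PUT URL that expires after 60 seconds.
  return s3.getSignedUrlPromise("putObject", {
    Bucket: bucketName,
    Key: key,
    Expires: 60,
  });
}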

In Fargate container why can I CRUD S3 but can't create a presigned post

I'm using Node in a Docker container, and locally I use my IAM keys for creating, reading and deleting files in an S3 bucket as well as creating pre-signed posts. When running on a Fargate container, I create a taskRole and attach a policy which gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read and delete files with no issues from the API. When the API tries to create a pre-signed post however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations but it fails on the presigned POST, especially since my S3 actions are all allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive that they are valid values. One thought I had: when running locally I run aws.configure to set my keys, but in Fargate I purposely omit that. I assumed it was getting the right keys, since the other S3 operations work without fail. Am I approaching this right?
When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost or guarantee that your credentials have been resolved before calling the await version of this method.
Something like this will work with IAM-based credentials:
const AWS = require('aws-sdk')

const s3 = new AWS.S3()

// Wrap the callback-style createPresignedPost in a promise so it can be awaited
// even while the SDK is still resolving the IAM role credentials asynchronously.
const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}

// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property

Add cache_control to images with Ruby aws-sdk

I'm trying to add cache_control to images within S3 buckets via a Ruby script, yet I keep running into an Access Denied (Aws::S3::Errors::AccessDenied) error. All my environment variables are correct, and I'm having no issue creating new buckets with the script; it's just that every time I try to add cache_control it throws an error.
I've tried digging into the aws-sdk documentation, https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/S3/Object.html, but I haven't been able to wrap my mind around what is wrong with my script.
Here's what the code looks like when it's functional:
def initialize(region: 'us-west-2', bucket_name: 'project-images')
  @bucket = Aws::S3::Resource.new(region: region).bucket(bucket_name)
end

def copy(to:, from:)
  @bucket.objects(prefix: from).each do |object|
    _, filename = object.key.split("/")
    object.copy_to(bucket: @bucket.name, key: "#{to}/#{filename}")
  end
end
Here's what the code looks like when I'm getting the Access Denied errors from trying to add in the cache_control and additional options:
def initialize(region: 'us-west-2', bucket_name: 'project-images')
  @bucket = Aws::S3::Resource.new(region: region).bucket(bucket_name)
end

def copy(to:, from:)
  @bucket.objects(prefix: from).each do |object|
    _, filename = object.key.split("/")
    object.copy_to(
      bucket: @bucket.name,
      key: "#{to}/#{filename}",
      acl: "public-read",
      cache_control: "max-age=154400",
      metadata_directive: "REPLACE"
    )
  end
end
Any help would be greatly appreciated!

AWS Cognito User Migration - Exception during user migration

I have created a user pool and am trying to migrate users from RDS, which invokes a Lambda function that returns the updated event object, but it's not working for me.
I followed the provided solution of removing the 2 fields below, but it's still not working:
"desiredDeliveryMediums": "EMAIL",
"forceAliasCreation": "false"
Here is the response object that I am sending from the Lambda. I'm still facing the same issue - Exception during user migration.
Please let me know what I am missing here. Thanks in advance.
def lambda_handler(event, context):
    print(event)
    event["response"] = {
        "userAttributes": {
            "email": event["userName"],
            "email_verified": "true",
        },
        "finalUserStatus": "CONFIRMED",
        "messageAction": "SUPPRESS",
        "desiredDeliveryMediums": "EMAIL",
        "forceAliasCreation": "false"
    }
    print(event)
    return event
I was having this problem, and I overcame it by increasing the memory allocated to the Lambda from the default 128 MB to 1024 MB. I am using CDK to deploy, so I did this in the Lambda creation:
const nodeUserMigration = new NodejsFunction(this, 'myLambdaName', {
  entry: path.join(__dirname, 'userMigration.ts'),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(5),
  memorySize: 1024, // This is what I added to overcome the `UserNotFoundException: Exception migrating user in app client (redactedClientId)`
  environment: {
    // redacted environment variables
  },
});
Instead of
return event
You need
context.succeed(event)
It is probably possible to use return event directly; however, there would be other properties required to get Cognito to recognize it (things such as isBase64Encoded) and I don't know what they might be. Neither does Amazon have any documentation on them.
Oh, and desiredDeliveryMediums should be an array of strings.
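For a Node.js handler (matching the NodejsFunction used in the answer above), a hedged sketch combining both suggestions could look like this; the attribute values are only illustrative:
exports.handler = (event, context) => {
  event.response = {
    userAttributes: {
      email: event.userName,
      email_verified: "true",
    },
    finalUserStatus: "CONFIRMED",
    messageAction: "SUPPRESS",
    desiredDeliveryMediums: ["EMAIL"], // an array of strings, not a plain string
  };
  // Hand the event back through the context rather than a plain return.
  context.succeed(event);
};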

Paypal REST API invalid credentials

I use the REST API in my Node.js application.
Everything works fine with the sandbox, but when I update to live credentials I get:
{ [Error: Response Status : 401]
response:
{ error: 'invalid_client',
error_description: 'The client credentials are invalid',
httpStatusCode: 401 },
httpStatusCode: 401 }
I updated my account to business, but it's still not working; I use the live endpoint and live credentials.
What should I do in order to make this work?
I had the same issue using PayPalSDK/rest-sdk-nodejs and solved it by passing, along with the configuration parameters (host, client_id, client_secret, ...), the parameter 'mode' set to 'live'. Otherwise the default mode used by the library is 'sandbox', hence the impossibility of using the live credentials.
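With the Node REST SDK that would look roughly like this (the credential values are placeholders read from environment variables):
const paypal = require("paypal-rest-sdk");

paypal.configure({
  mode: "live", // the library defaults to "sandbox" when this is omitted
  client_id: process.env.PAYPAL_LIVE_CLIENT_ID,
  client_secret: process.env.PAYPAL_LIVE_CLIENT_SECRET,
});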
As matteo said, if you switch from the dev to the live environment, only updating the client id and secret isn't enough. You need to set the ApiContext mode to "live".
PayPal's PHP REST API SDK comes with some great samples. Take a look at bootstrap.php in /vendor/paypal/rest-api-sdk-php/sample/ at line 84. Some configuration happens there after getting the API context.
<?php
$apiContext = new ApiContext(
    new OAuthTokenCredential(
        $clientId,
        $clientSecret
    )
);

// Comment this line out and uncomment the PP_CONFIG_PATH
// 'define' block if you want to use static file
// based configuration
$apiContext->setConfig(
    array(
        'mode' => 'sandbox',
        'log.LogEnabled' => true,
        'log.FileName' => '../PayPal.log',
        'log.LogLevel' => 'DEBUG', // PLEASE USE `INFO` LEVEL FOR LOGGING IN LIVE ENVIRONMENTS
        'cache.enabled' => true,
        // 'http.CURLOPT_CONNECTTIMEOUT' => 30
        // 'http.headers.PayPal-Partner-Attribution-Id' => '123123123'
        // 'log.AdapterFactory' => '\PayPal\Log\DefaultLogFactory' // Factory class implementing \PayPal\Log\PayPalLogFactory
    )
);