How to delete an ElastiCache subnet group with Cloud Custodian?

I want to use Cloud Custodian to clean up some AWS resources (ElastiCache in this case).
However, I got an error when trying to delete ElastiCache subnet groups. According to the Custodian cache.subnet-group documentation, delete is not a valid action for cache.subnet-group.
How do I perform the delete in this case?
My policy file:
policies:
  - name: cleanup-elasticache-subnet-group
    resource: aws.cache-subnet-group
    filters:
      - type: value
        key: "CacheSubnetGroupName"
        op: contains
        value: foo
    actions:
      - type: delete   # >>> !!! delete is not a valid action type !!!

Since Cloud Custodian doesn't natively provide the delete action today, your main options are:
Open a feature request for an aws.cache-subnet-group.delete action.
Use Cloud Custodian to report/notify on subnet groups you need to delete, but handle the actual deletion with external tools (see the sketch below).
For what it's worth, you can review open feature requests here. I didn't see any for this action, but it's always good to check!
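For the external-deletion route mentioned above, a minimal boto3 sketch (this is separate from Cloud Custodian; the function name, the dry-run flag, and the "foo" substring are placeholders mirroring the policy) could look like this:

# Hypothetical cleanup script: deletes ElastiCache subnet groups whose
# name contains "foo", mirroring the filter in the policy above.
# Requires boto3 and valid AWS credentials.
import boto3

elasticache = boto3.client("elasticache")

def delete_matching_subnet_groups(substring="foo", dry_run=True):
    paginator = elasticache.get_paginator("describe_cache_subnet_groups")
    for page in paginator.paginate():
        for group in page["CacheSubnetGroups"]:
            name = group["CacheSubnetGroupName"]
            if substring in name:
                if dry_run:
                    print(f"Would delete {name}")
                else:
                    # Fails if the subnet group is still in use by a cluster.
                    elasticache.delete_cache_subnet_group(CacheSubnetGroupName=name)
                    print(f"Deleted {name}")

if __name__ == "__main__":
    delete_matching_subnet_groups(dry_run=True)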

Related

How to validate or filter a wildcard in the path for HTTP endpoints in Serverless and AWS API Gateway before the process triggers the Lambda function?

I have the following HTTP path, devices/{sn}, in a Serverless / AWS API Gateway API. The wildcard sn is a 15-character [A-Z0-9] pattern.
In the API today, any string that is not recognized as a valid path is routed to this endpoint: devices/test goes to devices/{sn}, devices/bla goes to devices/{sn}, and so on. All of those strings end up querying the database and returning null, because there is no such sn in the table. I could add a validation step inside the Lambda to avoid the unnecessary database query, but I want to save Lambda resources and would rather validate before the Lambda is called.
This is what I have today for this endpoint:
- http:
    path: devices/{sn}
    method: GET
    private: false
    cors: true
    authorizer: ${file(env.yml):${self:provider.stage}.authorizer}
    request:
      parameters:
        paths:
          sn: true
How can I set up this validation or filter in serverless.yml?
In fact, it seems like it should be very straightforward functionality in AWS/Serverless.
Let's say we have the following scenario: myPath/{id}, where id is an integer (a primary key in a table). If I type myPath/blabla it will still trigger the Lambda and the system will spend resources. There should be some kind of up-front validation: trigger the endpoint only if {id} is an integer.
Your issue is very similar to this issue.
Based on that post and on my own experience: no, I don't think you can perform this kind of validation at the API Gateway level.
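Since the check has to happen in code, here is a minimal sketch of the in-Lambda short-circuit the question mentions (Python here; the handler name is a placeholder, and the 15-character [A-Z0-9] pattern follows the question):

# Hypothetical Lambda handler: reject obviously invalid serial numbers
# before touching the database. SN_PATTERN is the 15-character
# [A-Z0-9] format described in the question.
import json
import re

SN_PATTERN = re.compile(r"^[A-Z0-9]{15}$")

def handler(event, context):
    sn = (event.get("pathParameters") or {}).get("sn", "")
    if not SN_PATTERN.match(sn):
        # Short-circuit: no database call for malformed serial numbers.
        return {"statusCode": 400, "body": json.dumps({"error": "invalid sn"})}
    # ... look up the device in the database here ...
    return {"statusCode": 200, "body": json.dumps({"sn": sn})}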

Can't create an S3 bucket with KMS_MANAGED key and bucketKeyEnabled via CDK

I have an S3 bucket with this configuration (KMS-managed encryption with the bucket key enabled), and I'm trying to create a bucket with the same configuration via CDK:
Bucket.Builder.create(this, "test1")
    .bucketName("com.myorg.test1")
    .encryption(BucketEncryption.KMS_MANAGED)
    .bucketKeyEnabled(true)
    .build()
But I'm getting this error:
Error: bucketKeyEnabled is specified, so 'encryption' must be set to KMS (value: MANAGED)
This seems like a bug to me, but I'm relatively new to CDK so I'm not sure. Am I doing something wrong, or is this indeed a bug?
I encountered this issue recently and found the answer. I want to share the findings here in case anyone else gets stuck.
Yes, this was a bug in the AWS CDK. The fix was merged this month: https://github.com/aws/aws-cdk/pull/22331
According to the CDK docs (https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.Bucket.html#bucketkeyenabled), if bucketKeyEnabled is set to true, S3 uses an S3 Bucket Key (its own time-limited key), which helps reduce cost (see https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html). It is only relevant when encryption is set to BucketEncryption.KMS or BucketEncryption.KMS_MANAGED.
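Assuming a CDK release that includes that fix, the original configuration should then work as intended. Here is a minimal sketch in Python CDK (the stack class and bucket names are placeholders matching the question):

# Hypothetical Python CDK stack: KMS-managed encryption with the
# S3 Bucket Key enabled, i.e. the configuration from the question.
from aws_cdk import Stack, aws_s3 as s3
from constructs import Construct

class BucketStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "test1",
            bucket_name="com.myorg.test1",
            encryption=s3.BucketEncryption.KMS_MANAGED,
            bucket_key_enabled=True,  # rejected only on CDK versions without the fix
        )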
The bucketKeyEnabled flag, straight from the docs:
Specifies whether Amazon S3 should use an S3 Bucket Key with
server-side encryption using KMS (SSE-KMS)
BucketEncryption has 4 options:
NONE - no encryption
MANAGED - KMS key managed by AWS
S3_MANAGED - key managed by S3
KMS - key managed by the user; a new KMS key will be created
We don't need to set bucketKeyEnabled at all for this scenario; all we need is the aws/s3 managed key, so:
bucketKeyEnabled: need not be set (it is only relevant for SSE-KMS)
encryption: should be set to BucketEncryption.S3_MANAGED
Example:
const buck = new s3.Bucket(this, "my-bucket", {
  bucketName: "my-test-bucket-1234",
  encryption: s3.BucketEncryption.S3_MANAGED,
});

What if access control rule defined for participant/asset contradicts access control rule for transaction?

I have a question regarding access control.
Specifically, the question is about the relationship between access control rules defined for participants or assets on the one hand, and access control rules defined for transactions accessing those participants/assets on the other.
Here is an example:
Assume a Hyperledger Fabric network is used to create some kind of social network for employees of a company.
The following rule states that an employee has write access to his own data:
rule EmployeesHaveWriteAccessToTheirOwnData {
    description: "Allow employees write access to their own data"
    participant(p): "org.company.biznet.Employee"
    operation: UPDATE
    resource(r): "org.company.biznet.Employee"
    condition: (p.getIdentifier() == r.getIdentifier())
    action: ALLOW
}
Let's assume that the write access is facilitated through a transaction called "UpdateTransaction". Further assume that (maybe by accident) the action value of the access control rule for the transaction "UpdateTransaction" is set to "Denied":
rule EmployeeCanSubmitTransactionsToUpdateData {
    description: "Allow employees to update their data"
    participant: "org.company.biznet.Employee"
    operation: CREATE
    resource: "org.company.biznet.UpdateTransaction"
    action: Denied
}
Now there is the following situation:
Each employee is (through rule 1) given the right to change his/her data.
At the same time employees are not allowed to submit the transaction "UpdateTransaction" to change the data (see rule 2).
Is it now impossible for employees to change their data? Or are employees still able to change their data without submitting the transaction "UpdateTransaction"?
Put differently: is there a way for participants to access data (for which they have access rights) without using any of the transactions defined in the .cto-file?
I think the answer is, it depends.
In your example, denying access to the org.company.biznet.UpdateTransaction transaction would result in org.company.biznet.Employee participants being unable to use that transaction to update their data, even though they would otherwise be allowed.
Having said that, you should keep the system transactions in mind since they provide another potential route for org.company.biznet.Employee participants to update their own data.
For example, I tried that out on the basic-sample-network by replacing the EverybodyCanSubmitTransactions rule with
rule NobodyCanSubmitTransactions {
    description: "Do not allow all participants to submit transactions"
    participant: "org.example.basic.SampleParticipant"
    operation: CREATE
    resource: "org.example.basic.SampleTransaction"
    action: DENY
}
That business network includes an OwnerHasFullAccessToTheirAssets rule, and I was able to use the org.hyperledger.composer.system.UpdateAsset transaction to make updates for participants that owned an asset, using the command
composer transaction submit -d "$(cat txn.json)" -c party1#basic-sample-network
where txn.json contained:
{
  "$class": "org.hyperledger.composer.system.UpdateAsset",
  "resources": [
    {
      "$class": "org.example.basic.SampleAsset",
      "assetId": "ASSET1",
      "owner": "resource:org.example.basic.SampleParticipant#PARTY1",
      "value": "5000"
    }
  ],
  "targetRegistry": "resource:org.hyperledger.composer.system.AssetRegistry#org.example.basic.SampleAsset"
}
That wouldn't work if you had locked down the system namespace in your ACL rules though. (ACLs need a lot of thought!)
The other important thing to remember about ACLs is that they do not apply if you use the getNativeAPI method to access data via the Hyperledger Fabric APIs in your transaction processor functions.
Check out the system namespace reference along with the ACL reference, plus there is an ACL tutorial which may be of interest if you haven't seen it.

Receiving "Invalid policy document or request headers!"

I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document
or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
$('#fine-uploader').fineUploaderS3({
    request: {
        endpoint: "http://mybucket.s3.amazonaws.com",
        accessKey: "changeme"
    },
    signature: {
        endpoint: "endpoint.php"
    },
    uploadSuccess: {
        endpoint: "success.html"
    },
    template: 'qq-template'
});
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where am I generating a key-pair on the S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should have created two distinct IAM users, one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.
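For context, here is roughly what the signature endpoint does with that client-side private (secret) key, sketched in Python rather than PHP and assuming the older AWS version 2 policy-signing scheme used by the stock examples (the function name and return shape are placeholders):

# Minimal sketch of v2 policy signing: base64-encode the policy document,
# then HMAC-SHA1 it with the client-side secret key and base64 the digest.
import base64
import hashlib
import hmac
import json

def sign_policy(policy_document: dict, client_secret_key: str) -> dict:
    policy_b64 = base64.b64encode(
        json.dumps(policy_document).encode("utf-8")
    ).decode("utf-8")
    signature = base64.b64encode(
        hmac.new(client_secret_key.encode("utf-8"),
                 policy_b64.encode("utf-8"),
                 hashlib.sha1).digest()
    ).decode("utf-8")
    # The uploader expects the base64 policy and its signature back as JSON.
    return {"policy": policy_b64, "signature": signature}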

Amazon S3 error: A conflicting conditional operation is currently in progress against this resource.

Why do I get this error when I try to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the bucket name is available again.
Kindly note, I received this error when my access privileges were blocked.
The error means your operation to create a new bucket in S3 was aborted.
There can be multiple reasons for this; check the points below to rectify the error:
Is this bucket available, or is it queued for deletion?
Do you have adequate access privileges for this operation?
Is your bucket name globally unique?
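To check the first and third points quickly, a small boto3 probe can tell you whether a name is free, owned by you, or owned by someone else (the bucket name below is a placeholder):

# Hypothetical availability check: HEAD the bucket and interpret the status.
# S3 returns 404 when no bucket with that name exists and 403 when the
# bucket exists but belongs to another account.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_status(name):
    try:
        s3.head_bucket(Bucket=name)
        return "bucket exists and you have access to it"
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            return "name appears to be available"
        if code == "403":
            return "bucket exists but is owned by another account (name not unique)"
        raise

print(bucket_status("my-candidate-bucket-name"))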
P.S.: I edited this answer to add more details shared by Sanity below; his answer is more accurate, with updated information.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the accepted answer to this question.
Creating an S3 bucket policy and the S3 public access block for the same bucket at the same time will cause this error.
Terraform example
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
#--------------------------------------------------------------------------------
# To avoid OperationAborted: A conflicting conditional operation is currently in progress
#--------------------------------------------------------------------------------
depends_on = [
aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
]
}
We have also observed this error several times when trying to move a bucket from one account to another. To achieve this, you should do the following:
Back up the content of the S3 bucket you want to move.
Delete the S3 bucket in the source account.
Wait for 1/2 hours.
Create a bucket with the same name in the other account.
Restore the S3 bucket backup.
I received this error running a terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again", which I did, and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5-second sleep between the requests. There was nothing else I had to do.
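Outside of Terraform, the same wait-and-retry approach can be scripted. A minimal boto3 sketch (the function name, attempt count, and delay are assumptions) might look like this:

# Hypothetical retry helper: retries an S3 call that fails with
# OperationAborted ("conflicting conditional operation"), e.g. applying
# a public access block right after creating a bucket policy.
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_access_block_with_retry(bucket, attempts=5, delay=5):
    for attempt in range(attempts):
        try:
            return s3.put_public_access_block(
                Bucket=bucket,
                PublicAccessBlockConfiguration={
                    "BlockPublicAcls": True,
                    "BlockPublicPolicy": True,
                    "IgnorePublicAcls": True,
                    "RestrictPublicBuckets": True,
                },
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "OperationAborted":
                raise
            time.sleep(delay)  # wait for the conflicting operation to finish
    raise RuntimeError("OperationAborted persisted after retries")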