Firstly, I love Pulumi.
We're trying to propose Pulumi as a solution for a distributed architecture, and it is going swimmingly. The one uncertainty I have right now is whether it's possible to attach an existing cloud resource to a Pulumi program.
There is already an existing S3 bucket containing media. What I'm wondering is whether it is possible to define that bucket in our Pulumi config, or does Pulumi have to be the creator of a cloud resource before it can manage it?
This is possible with the static get function that every resource exposes. For an S3 bucket named "tpsReports-4f64efc" and a Lambda function named "zipTpsReports-19d51dc", it would look like this:
import * as aws from "@pulumi/aws";

// Look up the existing resources by their cloud IDs (read-only references).
const tpsReports = aws.s3.Bucket.get("tpsReports", "tpsReports-4f64efc");
const zipFunc = aws.lambda.Function.get("zipTpsReports", "zipTpsReports-19d51dc");
When you run your Pulumi program, the status of these resources will say read instead of create or update.
If you want to go one step further and adopt an existing resource to be fully managed by Pulumi, this blog post documents the entire process.
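For what it's worth, adoption can also be done straight from code with the import resource option: declare the resource as usual and pass the ID of the existing one, and Pulumi adopts it into the stack instead of creating a new resource. A minimal sketch, assuming the bucket above (the declared inputs must match the bucket's actual configuration, or the import fails):

import * as aws from "@pulumi/aws";

// Adopt the existing bucket into this stack on the next `pulumi up`.
const tpsReports = new aws.s3.Bucket("tpsReports",
    { bucket: "tpsReports-4f64efc" },
    { import: "tpsReports-4f64efc" });

Once the import has succeeded, the import option can be removed and the bucket behaves like any other Pulumi-managed resource.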
Related
I can see there is a method called addCorsRule on the Bucket type for adding a CORS rule to a newly created S3 bucket.
I am trying to achieve a similar thing for an existing S3 bucket that was not created by me and lives in another AWS account.
The return type of all the functions for finding an existing bucket (e.g. fromBucketAttributes) is IBucket, which does not provide a method for adding a new CORS rule.
I am wondering if there is a workaround to achieve this?
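To illustrate the gap, here is roughly what I mean (a sketch; the names and ARN are made up):

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

class MyStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // A bucket created in this stack is a full Bucket, so this works:
    const owned = new s3.Bucket(this, 'OwnedBucket');
    owned.addCorsRule({
      allowedMethods: [s3.HttpMethods.GET],
      allowedOrigins: ['https://example.com'],
    });

    // A looked-up bucket is only an IBucket, which has no addCorsRule:
    const existing = s3.Bucket.fromBucketAttributes(this, 'Existing', {
      bucketArn: 'arn:aws:s3:::bucket-in-other-account',
    });
    // existing.addCorsRule({ ... });  // does not compile
  }
}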
Thank you very much.
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.Bucket.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.IBucket.html
Digging through our Pulumi code, I see that it sets the hostedZoneId on the S3 bucket it creates, and I don't understand what it's used for.
The bucket holds internal content (Pulumi state files) and is not set as a static website.
The Pulumi docs for the AWS S3 Bucket's hostedZoneId state only:
The Route 53 Hosted Zone ID for this bucket's region.
with a link to what appears to be an irrelevant page (looks like a copy-paste error since that link is mentioned earlier on the page).
S3 API docs don't mention the field either.
Terraform's S3 bucket docs, which Pulumi relies on in many places and which are also a good reference for the S3 API in general, expose this as an output attribute, but not as an input.
Does anyone know what this attribute is used for?
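For context: as far as I can tell, hostedZoneId is an output the provider reads back, and the one place it is normally consumed is a Route 53 alias record pointing a custom domain at an S3 website endpoint, where the alias needs the zone ID of S3's own regional endpoint rather than your hosted zone. A sketch with made-up names:

import * as aws from "@pulumi/aws";

// Made-up names, just to show where the value goes.
const bucket = new aws.s3.Bucket("media", {
    website: { indexDocument: "index.html" },
});
const zone = new aws.route53.Zone("example", { name: "example.com" });

new aws.route53.Record("media", {
    zoneId: zone.zoneId,             // your hosted zone
    name: "media.example.com",
    type: "A",
    aliases: [{
        name: bucket.websiteDomain,
        zoneId: bucket.hostedZoneId, // S3's regional endpoint zone
        evaluateTargetHealth: false,
    }],
});

For a state bucket with no website hosting, nothing appears to consume the value.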
I know there is a specific pre-written instruction in the Metaplex package that allows us to update the on-chain metadata of an NFT, but I couldn't find or figure out a way to update the off-chain data through an API call from either the frontend or the backend.
I believe this can be done through a service like AWS S3, but I am wondering if there is a better way to achieve this goal.
Questions:
1) Is there a better method than an AWS S3 bucket for updating off-chain data?
2) If so, could you explain how?
3) If not, could you explain how I can update a JSON file in an S3 bucket through API calls from my frontend or backend?
Thanks!
Off-chain metadata is not part of the Solana network, so there is no function or command that allows you to change the data inside an off-chain object.
What are your options?
You can change the on-chain pointer so that it references a different off-chain link (the Metadata account has a uri field in its on-chain data that you can change to any other URI, using metaboss for example). This is the normal option if you are using storage like Arweave, because files stored on Arweave are immutable.
If you are using something like S3, the way to update the off-chain data is to create a backend that allows you to interact with your AWS storage and change the data you want. This will affect any NFT whose on-chain pointer references that off-chain file, so you don't have to run any on-chain update; you just need a backend system that lets you modify your AWS files.
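As a sketch of that backend piece (the bucket, key, and region are made up; this assumes the AWS SDK for JavaScript v3, and that the key matches the uri stored on-chain):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Overwrite the JSON file that the NFT's on-chain uri points at.
async function updateOffchainMetadata(metadata: object): Promise<void> {
    await s3.send(new PutObjectCommand({
        Bucket: "my-nft-metadata",   // made-up bucket name
        Key: "tokens/1234.json",     // must match the on-chain uri path
        Body: JSON.stringify(metadata),
        ContentType: "application/json",
    }));
}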
If you wish to have dynamic metadata, you can check out Cardinal.io as well.
I have a Lambda layer named "function.zip" in CodeCommit and in an S3 bucket. CloudFormation works fine without any issue, since I'm using the function.zip saved in S3 and passing its S3 URI as the content URI in the CloudFormation template for "AWS::Serverless::LayerVersion". Since I am using the S3 URI, "function.zip" is available in only one region (Ohio).
I want this function.zip to be available in all regions of the same account. Would it be possible to access "function.zip" from all regions via CodeCommit, by setting the "AWS::Serverless::LayerVersion" content URI in the CloudFormation template to CodeCommit rather than an S3 URI? If you could share a sample template and explain how you access the file by keeping it in CodeCommit, that would be helpful.
On creating a bucket using the S3 API, I get the following error:
Container storage location with specified provisioning code not available (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint; Request ID: f377cc84-2e76-490b-8161-4407a4b8d9d7), S3 Extended Request ID: null
However, I can create a bucket using the service portal on SoftLayer. Programmatically I can get the latest list of buckets and even delete one, but creation throws the above error.
A recent update has introduced unintended behavior around bucket creation, and we're working to fix it. The system is expecting a LocationConstraint of us-standard. Provide the following XML block in the body of the request:
<CreateBucketConfiguration>
  <LocationConstraint>us-standard</LocationConstraint>
</CreateBucketConfiguration>
If you're using an SDK, you'd follow the conventions of the particular library you are using. For example, creating a new bucket with boto3 might look like this:
# assumes `s3` is a boto3 client configured for your object storage endpoint
bucket = s3.create_bucket(
    Bucket='my-bucket',
    CreateBucketConfiguration={'LocationConstraint': 'us-standard'})
In Java, it would look like:
s3.createBucket(bucketName, "us-standard");
Here's a link to the docs (we're steadily improving them).
Let me know if that doesn't help, or if you are using another tool or SDK. :)