Scrypto: ResourceCheckFailure when calling method

I have this method in my component that is supposed to return a GumBall token after the user sends a payment:
pub fn buy_gumball(&self, payment: Bucket) -> Bucket {
    self.payments.put(payment.take(self.gumball_cost));
    self.gumball_vault.take(1)
}
When I call that method, I get a ResourceCheckFailure:
> resim call-method [component_address] buy_gumball 10,030000000000000000000000000000000000000000000000000004
Instructions:
├─ DeclareTempBucket
├─ CallMethod { component_address: 02c1897261516ff0597fded2b19bf2472ff97b2d791ea50bd02ab2, method: "withdraw", args: [10, 030000000000000000000000000000000000000000000000000004] }
├─ TakeFromContext { amount: 10, resource_address: 030000000000000000000000000000000000000000000000000004, to: Bid(0) }
├─ CallMethod { component_address: 0268709f61e9f60d5d8b8157b5d4939511f194a9f6cfd8656db600, method: "buy_gumball", args: [Bid(0)] }
├─ DropAllBucketRefs
├─ DepositAllBuckets { account: 02c1897261516ff0597fded2b19bf2472ff97b2d791ea50bd02ab2 }
└─ End { signers: [04005feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9] }
Results:
├─ Ok(None)
├─ Ok(Some(Bid(1)))
├─ Ok(None)
└─ Err(ResourceCheckFailure)
Logs: 0
New Entities: 0
Any idea why I get this?

The Radix Engine makes sure that all buckets are either returned, stored in a vault, or burned by the end of a transaction. This guarantees that no resources ever get lost because a developer forgot to put a bucket's contents somewhere.
Imagine you had sent more than the required payment: the extra XRD would be lost if the Radix Engine didn't do those checks.
In your case, I would suggest returning the payment bucket to the caller:
pub fn buy_gumball(&self, payment: Bucket) -> (Bucket, Bucket) {
    self.payments.put(payment.take(self.gumball_cost));
    (self.gumball_vault.take(1), payment)
}
That way, if the user sends more XRD than required, they will get their change back and the Radix Engine will be happy.
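You could also fail early with a readable message when the payment doesn't cover the price. This is only a sketch on top of the version above, and it assumes gumball_cost is a Decimal field on your component:

pub fn buy_gumball(&self, payment: Bucket) -> (Bucket, Bucket) {
    // Abort the transaction with a clear message instead of failing on the vault take
    assert!(
        payment.amount() >= self.gumball_cost,
        "Payment does not cover the gumball cost"
    );
    // Keep only the price, then hand back the gumball plus the change
    self.payments.put(payment.take(self.gumball_cost));
    (self.gumball_vault.take(1), payment)
}

Since the assert aborts the whole transaction, a caller who underpays keeps their XRD.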

Related

When deploying new AWS S3 Buckets via Terraform I receive an error that resource already exists

I've created a little test environment using Gitlab and Terraform to deploy infrastructure to AWS. I'm still very new to this, so apologies if this is a stupid question.
I have a file called s3.tf. I created a new AWS bucket in there with no issues at all, pushed the branch to Gitlab and merged it; the pipeline ran and Terraform deployed the new S3 bucket to AWS.
Now I wanted to test creating another S3 bucket, so I duplicated the code in s3.tf and just adjusted the name of the bucket etc. When I merge this merge request in Gitlab, the pipeline still succeeds and the new bucket is created, but I get this warning/error:
│ Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
│ status code: 409, request id: TEZR4YQMYQPF6QYD, host id: HXt04y7lxANMaIp94g5rnovGwHduElooxrMGDMCdIfuswmtBAsmRdah3Rkx5cBkzaMfRwbid3l6sGHBPoOG7ew==
│
│ with aws_s3_bucket.terraform-state-storage-s3-witcher,
│ on s3.tf line 1, in resource "aws_s3_bucket" "terraform-state-storage-s3-witcher":
│ 1: resource "aws_s3_bucket" "terraform-state-storage-s3-witcher" {
Is it expected that I will see this error every single time I deploy a new S3 bucket? Or can I adjust where the terraform apply runs from that s3.tf file?
Thanks!
(Current contents of s3.tf):
resource "aws_s3_bucket" "terraform-state-storage-s3-witcher" {
bucket = "gb-terraform-state-s3-witcher"
versioning {
# enable with caution, makes deleting S3 buckets tricky
enabled = false
}
lifecycle {
prevent_destroy = false
}
tags = {
name = "S3 Remote Terraform State Store"
}
}
resource "aws_s3_bucket" "terraform-state-storage-s3-witcherv2" {
bucket = "gb-terraform-state-s3-witcherv2"
versioning {
# enable with caution, makes deleting S3 buckets tricky
enabled = false
}
lifecycle {
prevent_destroy = false
}
tags = {
name = "S3 Remote Terraform State Store"
}
}

What data does a terraform null_resource store in state?

In short, I am generating a key/cert pair on a local machine in order to keep keys out of terraform state. Will my keys end up in terraform state via the apply_ssl_tls.sh script inside of a null resource?
variable "site" {}
module "zone" {
source = "../../../../shared/terraform/zone.tf"
site = var.site
}
module "dns" {
source = "../../../../shared/terraform/dns.tf"
site = var.site
}
# Keys are generated on the local machine via a script in order to keep keys out of state.
# Will a null_resource keep keys out of state?
resource "null_resource" "ssl-tls" {
provisioner "local-exec" {
command = "../../../../shared/scripts/apply_ssl_tls.sh"
}
provisioner "local-exec" {
when = destroy
command = "../../../../shared/scripts/destroy_ssl_tls.sh"
}
}

Unable to add versioning_configuration for multiple aws s3 in terraform version 4.5.0

Trying to create multiple AWS S3 buckets using Terraform with the code provided below.
Provider version: 4.5.0
Tried without the count meta-argument and with for_each as well.
resource "aws_s3_bucket" "public_bucket" {
count = "${length(var.public_bucket_names)}"
bucket = "${var.public_bucket_names[count.index]}"
# acceleration_status = var.public_bucket_acceleration
tags = {
ProjectName = "${var.project_name}"
Environment = "${var.env_suffix}"
}
}
resource "aws_s3_bucket_versioning" "public_bucket_versioning" {
bucket = aws_s3_bucket.public_bucket[count.index].id
versioning_configuration {
status = "Enabled"
}
}
Facing the error below:
Error: Reference to "count" in non-counted context
│
│ on modules/S3-Public/s3-public.tf line 24, in resource "aws_s3_bucket_versioning" "public_bucket_versioning":
│ 24: bucket = aws_s3_bucket.public_bucket[count.index].id
│
│ The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count" argument is set.
Your current code creates multiple S3 buckets but only attempts to create a single bucket versioning configuration. You are referencing count.index inside the bucket versioning resource, but you haven't declared a count argument on that resource yet.
You need to declare count on the bucket versioning resource, just like you did for the S3 bucket resource.
resource "aws_s3_bucket_versioning" "public_bucket_versioning" {
count = "${length(var.public_bucket_names)}"
bucket = aws_s3_bucket.public_bucket[count.index].id
versioning_configuration {
status = "Enabled"
}
}

golang test exit status -1 and shows nothing

Recently I've been working on a RESTful app in Go, and strange things happen when I try to write tests in different subdirectories. My project structure is:
├── handlers/
│ ├── defs.go
│ ├── hello_test.go
│ ├── hello.go
├── server/
│ ├── codes.go
│ ├── middlewares_test.go
│ ├── middlewares.go
├── utility/
│ ├── auth.go
│ ├── auth_test.go
All files in handlers/ are declared "package handlers", all files in server/ are declared "package server", and so on. When I run go test in utility/ and handlers/, everything is fine. But if I run go test in server/, it returns nothing but:
[likejun#localhost server]$ go test
exit status 1
FAIL server_golang/server 0.003s
It seems that it exits with code 1 before the tests even run. Could someone tell me why this happens? Thank you, I've spent the whole afternoon on it.
The code in middlewares_test.go:
package server

import (
    "io"
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestHello(t *testing.T) {
    req, err := http.NewRequest(http.MethodGet, "/", nil)
    if err != nil {
        t.Fatal(err)
    }
    rec := httptest.NewRecorder()
    func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        w.Header().Set("Content-Type", "application/json")
        io.WriteString(w, `{"hello": "world"}`)
    }(rec, req)
    if status := rec.Code; status != http.StatusOK {
        t.Errorf("handler returned wrong status code: got %d want %d", status, http.StatusOK)
    }
    if rec.Header().Get("Content-Type") != "application/json" {
        t.Errorf("handler returned wrong content type header: got %s want %s", rec.Header().Get("Content-Type"), "application/json")
    }
    expected := `{"hello": "world"}`
    if rec.Body.String() != expected {
        t.Errorf("handler returned unexpected body: got %s want %s", rec.Body.String(), expected)
    }
}

How to use Grafana Http API with client-side javascript

Is it possible to use the Grafana HTTP API with client-side JavaScript?
I'm starting with the very basics: getting the JSON of an already created dashboard.
function getHome2Dashboard(callback) {
    $.ajax({
        type: 'GET',
        url: 'http://localhost:3000/api/dashboards/db/home-2',
        crossDomain: true,
        dataType: 'json',
        headers: {
            "Authorization": "Bearer eyJrIjoiYkdURk91VkNQSTF3OVdBWmczYUNoYThPOGs0QTJWZVkiLCJuIjoidGVzdDEiLCJpZCI6MX0="
        },
        success: function(data) {
            if (callback) callback(data);
        },
        error: function(err) {
            console.log(err);
        }
    });
}
but I get a:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
I also tried using a JSONP approach; dev tools show that the server sends back the JSON data, but the JS fails because (I think) the result is not wrapped inside a callback function. Any suggestions on how to implement this are welcome...
// At the moment I think of something like:
┌──────────┐ ┌───────────┐
│ Browser │ <-------> │ Grafana │
└──────────┘ └───────────┘
// In order to overcome the cross-origin problems,
// should I go towards something like this?:
┌──────────┐ ┌───────────┐ ┌───────────┐
│ Browser │ <-------> │ Web api │ <-------> │ Grafana │
└──────────┘ └───────────┘ └───────────┘
As far as I know, there is currently no way to set CORS headers directly in the Grafana server, so you will need to put the Grafana server behind a reverse proxy like nginx and add the headers there.
See the Mozilla Developer documentation about CORS to understand the issue you are facing.
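As an illustration only, a minimal nginx sketch could look like the following; the listen port, the upstream address, and the wide-open allowed origin are assumptions to adapt to your setup:

server {
    listen 8080;

    location / {
        # Add the CORS headers that Grafana itself does not send (tighten the origin for production)
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;

        # Answer CORS preflight requests without forwarding them to Grafana
        if ($request_method = OPTIONS) {
            return 204;
        }

        # Forward everything else to the Grafana server
        proxy_pass http://localhost:3000;
    }
}

The client-side code would then call the proxy (port 8080 in this sketch) instead of hitting Grafana's port 3000 directly.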