How to export, then access exported methods in Lua - oop

I have a file, display.lua in which I have code to load some resources.
----display.lua
Resources = {}

function Resources:new(rootdir)
    local newObj = {image = {}, audio = {}, root = ""}
    newObj.root = rootdir
    return setmetatable(newObj, self)
end

function Resources:getSpriteSheet(name)
    --- etc etc etc
end
Then I have a Game variable I use to store game state; this lives in another file, game.lua.
---game.lua
require "display.lua"

function Game:new()
    local newObj = {mode = "", map = {}, player = {}, resources = {}}
    self.__index = self
    return setmetatable(newObj, self)
end

function Game:init()
    self.resources = Resources:new("/home/example/etc/game/")
    local spriteSheet = self.resources:getSpriteSheet("spritesheet.png")
end
I have access to the Resources code via require. My issue is that within Game:init() I can't access Resources:getSpriteSheet(); the Lua interpreter complains: attempt to call method 'getSpriteSheet' (a nil value).
I assume I would have to export the methods in Resources, but I don't know how to go about doing this, as I'm quite new to Lua.

I think you want return setmetatable(newObj, {__index = self}) instead of return setmetatable(newObj, self).
Also, require "display.lua" should probably be require "display" and game.lua should have Game = {} somewhere at the top. With these changes your example works for me.
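Put together, a minimal sketch of both files with those changes applied (the __index metatable is what lets newObj find methods defined on Resources):
-- display.lua
Resources = {}

function Resources:new(rootdir)
    local newObj = {image = {}, audio = {}, root = rootdir}
    -- methods missing on newObj are looked up on Resources
    return setmetatable(newObj, {__index = self})
end

function Resources:getSpriteSheet(name)
    --- etc etc etc
end

-- game.lua
require "display"  -- no ".lua" extension with require
Game = {}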

Tarantool broadcast call

I have a cluster with several replica sets. I want to call a stored function on all nodes without calculating bucket_id, and then map the results. How should I do this?
You can use the get_candidates function from the cartridge.rpc module to get all nodes with the role you want to call, then use the map_call function from the cartridge.pool module to call your function and map the results. map_call is available since cartridge version 1.2.0-17. So your code could look like this:
local cartridge = require('cartridge')
local pool = require('cartridge.pool')

local nodes = cartridge.rpc_get_candidates('my_role_name', { leaders_only = true, healthy_only = true })

local results, err = pool.map_call('_G.my_function_name', { func_args }, { uri_list = nodes, timeout = 10 })
if err ~= nil then
    -- your error handling here
end
Every node's response is saved into the results variable, mapped per URI. All errors are saved into the err variable as a map with the keys: line, class_name, err, file, suberrors, str.
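For example, to walk the mapped results afterwards (a sketch; what each node returns depends on my_function_name):
local log = require('log')
local json = require('json')

-- results maps each node URI to that node's return value
for uri, res in pairs(results) do
    log.info('%s returned: %s', uri, json.encode(res))
end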
Another proposal.
If you use vshard and want to perform map-reduce over storages:
local replicaset, err = vshard.router.routeall()
for _, replica in pairs(replicaset) do
    local _, err = replica:callrw('function', { args })
    if err ~= nil then
        return nil, err
    end
end
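If you also need the values the storages return (the loop above discards them), you could collect them into a table keyed by replicaset UUID; a minimal sketch, assuming the called function returns a single value:
local vshard = require('vshard')

local results = {}
local replicasets, err = vshard.router.routeall()
if err ~= nil then
    return nil, err
end
for uuid, replicaset in pairs(replicasets) do
    -- call the storage function on the master of each replicaset
    local res, err = replicaset:callrw('function', { args })
    if err ~= nil then
        return nil, err
    end
    results[uuid] = res
end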

Workaround for `count.index` in Terraform Module

I need a workaround for using count.index inside a module block for some input variables. I have a habit of over-complicating problems, so maybe there's a much easier solution.
File/Folder Structure:
modules/
  main.tf
  ignition/
    main.tf
    modules/
      files/
        main.tf
      template_files/
        main.tf
End Goal: Create an Ignition file for each instance I'm deploying. Each Ignition file has instance-specific info like hostname, IP address, etc.
All of this code works if I use a static value or a variable without count.index. I need help coming up with a workaround for the address, gateway, and hostname variables specifically. If I need to process count.index inside one of the child modules, that's totally fine; I can't seem to wrap my brain around that, though. I've tried null_data_source and null_resource blocks from the child modules to achieve it, but so far no luck.
Variables:
workers = {
  Lab1 = {
    "lab1k8sc8r001" = "192.168.17.100/24"
  }
  Lab2 = {
    "lab2k8sc8r001" = "192.168.18.100/24"
  }
}
gateway = {
  Lab1 = [
    "192.168.17.1",
  ]
  Lab2 = [
    "192.168.18.1",
  ]
}
From modules/main.tf, I'm calling the ignition module:
module "ignition_workers" {
source = "./modules/ignition"
virtual_machines = var.workers[terraform.workspace]
ssh_public_keys = var.ssh_public_keys
files = [
"files_90-disable-auto-updates.yaml",
"files_90-disable-console-logs.yaml",
]
template_files = {
"files_eth0.nmconnection.yaml" = {
interface-name = "eth0",
address = element(values(var.workers[terraform.workspace]), count.index),
gateway = element(var.gateway, count.index % length(var.gateway)),
dns = join(";", var.dns_servers),
dns-search = var.domain,
}
"files_etc_hostname.yaml" = {
hostname = element(keys(var.workers[terraform.workspace]), count.index),
}
"files_chronyd.yaml" = {
ntp_server = var.ntp_server,
}
}
}
From modules/ignition/main.tf I take the files and template_files variables to build the Ignition config:
module "ingition_file_snippets" {
source = "./modules/files"
files = var.files
}
module "ingition_template_file_snippets" {
source = "./modules/template_files"
template_files = var.template_files
}
data "ct_config" "fedora-coreos-config" {
count = length(var.virtual_machines)
content = templatefile("${path.module}/assets/files_ssh_authorized_keys.yaml", {
ssh_public_keys = var.ssh_public_keys
})
pretty_print = true
snippets = setunion(values(module.ingition_file_snippets.files), values(module.ingition_template_file_snippets.files))
}
I am not quite sure what you are trying to achieve, so I cannot give any detailed examples.
But modules in Terraform do not support count or for_each yet, so you cannot use count.index either.
You might want to change your module to take lists/maps of input and create those lists/maps via for-expressions by transforming them from some input variables.
You can combine for with if to create a filtered subset of your source list/map, like in:
[for s in var.list : upper(s) if s != ""]
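Applied to the variables in the question, the transformation might look something like this (a sketch; the shape of worker_templates is my assumption, not something tested against your modules):
locals {
  # the workers map for the active workspace: hostname => address
  workspace_workers = var.workers[terraform.workspace]

  # build one per-instance settings object per hostname
  worker_templates = {
    for hostname, address in local.workspace_workers : hostname => {
      interface-name = "eth0"
      address        = address
      gateway        = var.gateway[terraform.workspace][0]
      dns            = join(";", var.dns_servers)
      dns-search     = var.domain
      hostname       = hostname
    }
  }
}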
I hope this helps you work around the missing count support.

Terraform 0.12: Output list of buckets, use as input for another module and iterate

I'm using Terraform 0.12. I have an S3 module that outputs a list of buckets, which I would like to use as an input for a CloudFront module that I've got.
The problem I'm facing is that when I run terraform plan/apply I get the following error: count.index is 0 | var.redirect-buckets is tuple with 1 element.
I've tried all kinds of splats and moved the count.index call around, to no avail. My sample code is below.
module.s3
resource "aws_s3_bucket" "redirect" {
count = length(var.redirects)
bucket = element(var.redirects, count.index)
}
module.s3.output
output "redirect-buckets" {
value = [aws_s3_bucket.redirect.*]
}
module.cdn.variables
...
variable "redirect-buckets" {
  description = "Redirect buckets"
  default     = []
}
....
The error is thrown down here
module.cdn
resource "aws_cloudfront_distribution" "redirect" {
count = length(var.redirect-buckets)
default_cache_behavior {
// Line below throws the error, one amongst many
target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
....
//Another error throwing line
target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
Any help is greatly appreciated.
module.s3
resource "aws_s3_bucket" "redirects" {
for_each = var.redirects
bucket = each.value
}
Your variable definition for redirects needs to change to something like this:
variable "redirects" {
type = map(string)
}
module.s3.output:
output "redirect_buckets" {
value = aws_s3_bucket.redirects
}
module.cdn
resource "aws_cloudfront_distribution" "redirects" {
for_each = var.redirect_buckets
default_cache_behavior {
target_origin_id = "cloudfront-distribution-origin-${each.value.id}.s3.amazonaws.com"
}
Your variable definition for redirect-buckets needs to change to something like this (note the underscores; using kebab-case is going to behave strangely in some cases and is not worth it):
variable "redirect_buckets" {
type = map(object(
{
id = string
}
))
}
root module
module "s3" {
source = "../s3" // or whatever the path is
redirects = {
site1 = "some-bucket-name"
site2 = "some-other-bucket"
}
}
module "cdn" {
source = "../cdn" // or whatever the path is
redirects_buckets = module.s3.redirect_buckets
}
From an example perspective, this is interesting, but you don't need to use outputs from S3 here since you could just hand the cdn module the same map of redirects and use for_each on those.
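Sketched out, that alternative would look something like this (here the cdn module takes the same map(string) the s3 module does, instead of redirect_buckets; that variable shape is my assumption):
module "cdn" {
  source = "../cdn"

  # hand the cdn module the same input the s3 module gets
  redirects = {
    site1 = "some-bucket-name"
    site2 = "some-other-bucket"
  }
}

# in the cdn module: key the distributions off the same map
resource "aws_cloudfront_distribution" "redirects" {
  for_each = var.redirects

  default_cache_behavior {
    target_origin_id = "cloudfront-distribution-origin-${each.value}.s3.amazonaws.com"
  }
}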
There is a tool called Terragrunt which wraps Terraform and supports dependencies.
https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#dependencies-between-modules

How to pass variable from another lua file?

How do I pass a variable from another Lua file? I'm trying to pass the text variable title to another file, b.lua, as text.
a.lua
local options = {
    title = "Easy - Addition",
    backScene = "scenes.operationMenu",
}
b.lua
local score_label_2 = display.newText({parent=uiGroup, text=title, font=native.systemFontBold, fontSize=128, align="center"})
There are a couple of ways to do this, but the most straightforward is to treat a.lua like a module and import it into b.lua via require.
For example, in a.lua:
-- a.lua
local options =
{
    title = "Easy - Addition",
    backScene = "scenes.operationMenu",
}
return options
and then, from b.lua:
-- b.lua
local options = require 'a'
local score_label_2 = display.newText
{
    parent   = uiGroup,
    text     = options.title,
    font     = native.systemFontBold,
    fontSize = 128,
    align    = "center"
}
You can import the file a.lua into a variable, then use it as an ordinary table.
In b.lua (note that require takes the module name without the .lua extension, and that a is the options table a.lua returns):
local a = require("a")
print(a.title)
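If you want the module to expose several values, a.lua can instead return a table that holds options as a field; a sketch:
-- a.lua
local options = {
    title = "Easy - Addition",
    backScene = "scenes.operationMenu",
}

return { options = options }

-- b.lua
local a = require("a")
print(a.options.title)  --> Easy - Addition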

How to use ngx_write_chain_to_temp_file correctly?

I am writing an nginx module that constructs an nginx chain, then writes the chain's buffers to an nginx temporary file to use later (just after the write happens). I've been searching everywhere, and the only solution I've come up with is the one below:
// Create temp file to test
ngx_temp_file_t  *tf;

tf = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));
if (tf == NULL) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

tf->file.fd = NGX_INVALID_FILE;
tf->file.log = nlog;
tf->path = clcf->client_body_temp_path;
tf->pool = r->pool;
tf->log_level = r->request_body_file_log_level;
tf->persistent = r->request_body_in_persistent_file;
tf->clean = r->request_body_in_clean_file;

// if (r->request_body_file_group_access) {
//     tf->access = 0660;
// }

if (ngx_create_temp_file(&tf->file, tf->path, tf->pool, tf->persistent, tf->clean, tf->access)
    != NGX_OK)
{
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

if (ngx_write_chain_to_temp_file(tf, bucket->first) == NGX_ERROR) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
This code does not return NGX_ERROR; does that mean nginx successfully wrote the temporary file into client_body_temp_path? If so, why does the file not exist when I then use fopen to open it?
Can anyone please show me the right way to use ngx_write_chain_to_temp_file?
I found the solution myself:
ngx_temp_file_t  *tf;

tf = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));
tf->file.fd = NGX_INVALID_FILE;
tf->file.log = nlog;
tf->path = clcf->client_body_temp_path;
tf->pool = r->pool;
tf->persistent = 1;

rc = ngx_create_temp_file(&tf->file, tf->path, tf->pool, tf->persistent, tf->clean, tf->access);
//ngx_write_chain_to_file(&tf->file, bucket->first, bucket->content_length, r->pool);
ngx_write_chain_to_temp_file(tf, bucket->first);
The only thing I cannot understand is that if I set tf->persistent to false (0), I cannot read from the file after it is created, even though I have not yet passed the response to the output filter.
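One way to read the data back that does not depend on the file's on-disk name is to go through the descriptor nginx already holds open in tf->file, using ngx_read_file instead of fopen. A sketch (the buffer size and error handling are placeholders, not part of the original code):
u_char   buf[1024];
ssize_t  n;

// read back from offset 0 through the still-open descriptor
n = ngx_read_file(&tf->file, buf, sizeof(buf), 0);
if (n == NGX_ERROR) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
// buf now holds up to n bytes of what was written to the temp file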