In the setup I'm working on, the variable $var1 is declared in several places, and I'm not sure which assignment is applied last. The structure is as follows.
The Puppet module module1 contains a vars.pp class that is inherited by its init.pp manifest. In vars.pp, var1 is declared as "value-vars".
#vars.pp
$var1 = "value-vars"
This module is applied to any node whose name matches a certain regex defined in nodes.pp.
#nodes.pp
node "/nodepattern/" inherits base {
require module1
}
nodes.pp inherits from base.pp which declares var1 as "value-base".
#base.pp
$var1 = "value-base"
Now when the module is applied to a certain node, what value would var1 contain?
Is it "value-vars" because the node block is applied before the class?
UPDATE
├── puppet3
│   ├── **manifests**
│   │   ├── **nodes**
│   │   │   └── base.pp (node "base", $var1 = "value-base")
│   │   └── nodes.pp (various nodes inheriting base node, contains module1 node)
│   └── **modules**
│       └── **module1**
│           └── **manifests**
│               ├── vars.pp (class "vars", $var1 = "value-vars")
│               └── init.pp (class "module1", inherits vars class)
I sense some confusion here. A manifest cannot "inherit" another manifest. Worse, from Puppet 4.0 on, a manifest file cannot even import another one.
This leaves scarce options to declare globally scoped variables. You should avoid declaring the same variable globally in different .pp files anyway, because any compilation that imports both files will fail!
A structure like "if this node includes module A, use value X for variable N" is tricky with Puppet. Manifests work best when there is one central piece of information you can rely on, e.g.
node <long-cloud-instance-name-here> {
$uses_app_foo = true
$is_master_server = false
include my_cloud_app
}
Both the decision to include module A and the assignment of X to N should then be based on those node scope variables.
This pattern gets old for large numbers of nodes. It is therefore advisable to devise a Hiera hierarchy that helps you define your node data with less redundancy.
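A minimal sketch of such a hierarchy (the file paths, node name, and lookup key here are all illustrative, assuming Puppet 3 with the YAML backend):

```yaml
# hiera.yaml
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::certname}"
  - common
:yaml:
  :datadir: /etc/puppet/hieradata

# /etc/puppet/hieradata/common.yaml
# var1: "value-base"

# /etc/puppet/hieradata/nodes/web01.example.com.yaml
# var1: "value-vars"
```

In the manifest, a single `$var1 = hiera('var1')` then resolves per node, with no duplicated assignments across .pp files.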
Update
Seeing as you are using classes apparently, here are the additional rules that should make things clearer:
a variable that is declared in the local scope (class or define body) hides variables from wider scopes that share its name
a variable that is declared in the node block hides a global variable with the same name
Evaluation order does not come into play. The scoping rules always apply. Multiple assignments on the same scope are forbidden and lead to a compiler error. As you are not facing that scenario, the above rules do apply.
$foo = 'global'
# $foo == 'global'
node default {
$foo = 'node'
include bar
# $foo == 'node'
}
class bar {
$foo = 'class'
# $foo == 'class'
include baz
}
class baz {
# $foo == 'node' (!)
}
Related
I have two files:
Main.kt:
package A
fun gg1() = "hello"
Main2.kt:
package B
import A.gg1
fun gg2() = gg1()
Trying to compile the code:
iv#LAPTOP:~$ tree .
.
├── A
│ └── Main.kt
└── B
└── Main2.kt
2 directories, 2 files
iv#LAPTOP:~$ kotlinc B/Main2.kt
B/Main2.kt:3:8: error: unresolved reference: A
import A.gg1
^
B/Main2.kt:5:13: error: unresolved reference: gg1
fun gg2() = gg1()
^
iv#LAPTOP:~$
Why do I get this error and how do I fix it?
You're only passing B/Main2.kt to kotlinc. You need to pass the other file too if you want the compiler to be aware of its existence.
Imports don't work as file/path references: the import A.gg1 doesn't tell the compiler to look for A/Main.kt (how would it know the file name?). There is no technical relation between the package names and the paths of the files, just a convenient convention.
In fact, imports are mostly syntax sugar to avoid using fully qualified names within the code (so the code itself looks a bit simpler), but they are not necessary per se, and you could just use declarations without imports if you wanted to (except in some specific cases).
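Concretely, the compilation from the question succeeds once both source files are passed to the compiler (assuming the directory layout shown above):

```sh
# compile both files together so the reference from B.gg2 to A.gg1 resolves
kotlinc A/Main.kt B/Main2.kt
```

kotlinc resolves references across all sources listed in a single invocation, so no import-to-path mapping is needed.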
Maybe this related question could shed some light too:
How does import find the file path in Kotlin?
Looking for ideas if this is possible. (I seem to recall posts elsewhere that it isn't directly available, but having difficulty finding them to reference... sorry in advance.)
My team is attempting to code all of our org's currently-deployed Azure policies/initiatives into Terraform. In particular, the initiatives may take different parameters per environment (sandbox/dev/test/prod), so the last piece I'm trying to extract is the parameter definitions and parameter values into variables (one .tfvars file per environment) instead of hard-coding them in main.tf for the policyset definition and assignment. This way we can keep a single code base even though different parameters are needed per environment.
Here's the problem... the type of the defaultValue attribute of the parameter definitions is not consistent across all the parameters.
Sample attempt at variable definition:
variable "parameters_apim_definitions" {
  type = map(object({
    type          = string
    metadata      = map(string)
    allowedValues = list(string)
    defaultValue  = any
  }))
  description = "Definitions of parameters for the sql_governance policyset."
}
For example, in different parameters, defaultValue might be a string (say, "Management"), a number (365), or a boolean (true), or even an array ([ "Developer", "Premium" ]).
Trying this, though, I get the following error:
│ Error: Incorrect variable type
│
│ on variables.tf line 34:
│ 34: variable "parameters_apim_definitions" {
│
│ The resolved value of variable "parameters_apim_definitions" is not appropriate: cannot find a common base type for
│ all elements.
Unfortunately, according to https://www.terraform.io/language/expressions/type-constraints#dynamic-types-the-any-constraint, "any" is not a mixed type: it is a placeholder for a single concrete type that Terraform infers, so all elements still need to be the SAME type.
Any ideas? I've got one (multiple default definitions, one for each possible type, with try clauses to pick the non-null one), but... what ugly code that will take.
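One less ugly workaround, sketched here under the assumption that decoding at the use site is acceptable: store defaultValue as a JSON-encoded string (so every element has one uniform type) and jsondecode() it where the definition is consumed. The variable and local names are illustrative:

```hcl
variable "parameters_apim_definitions" {
  type = map(object({
    type          = string
    metadata      = map(string)
    allowedValues = list(string)
    # JSON-encoded, e.g. "\"Management\"", "365", "true", "[\"Developer\",\"Premium\"]"
    defaultValue  = string
  }))
}

locals {
  # decode back to the real types when building the policyset definition
  apim_parameters = {
    for name, def in var.parameters_apim_definitions :
    name => merge(def, { defaultValue = jsondecode(def.defaultValue) })
  }
}
```

Note that functions cannot be called inside .tfvars files, so the per-environment files would carry the literal JSON strings.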
I'm struggling with the "global" aspect of functions as it relates to modules. Maybe someone here could tell me if this example would work, explain why and then tell me the right way to do it.
If I have two modules:
f1.lua
local mod = T{}
function mod.print_msg(msg)
print(msg)
end
return mod
f2.lua
local mod = T{}
function mod.print_hello()
msgmod.print_msg('Hello')
end
return mod
and both are called in a "main" file
msgmod = assert(loadfile(file_path .. 'f1.lua'))()
himod = assert(loadfile(file_path .. 'f2.lua'))()
himod.print_hello()
Would print_hello still work if called from f2 or would I need to loadfile() f1.lua in f2?
It would work if called after the msgmod = ... has been executed (in any file), but not before. This is a confusing situation due to the usage of globals.
Typically, you do not want to use globals like this in modules. You should handle dependencies using require just as you would #include them in C++. So, f2.lua, which wants to use print_msg defined in f1.lua, might look like this:
local f1 = require('f1')
local mod = T{}
function mod.print_hello()
f1.print_msg('Hello')
end
return mod
You should also use require in your main file (and get in the habit of making everything local):
local msgmod = require('f1')
local himod = require('f2')
himod.print_hello()
Note that we could have omitted the first line, since we aren't actually using f1 in main, and f2 will require it automatically when we require f2. Unlike loadfile, require automatically caches loaded modules such that they are loaded only once. Again, require is almost always what you want to use.
The general pattern for writing modules is to require all dependency modules into locals, then use them as you like to implement your module functions:
local dep1 = require('dep1')
local dep2 = require('dep2')
...
local mod = {}
function mod.foo ()
return dep1.bar(dep2.bazz())
end
return mod
I want to use Raku modules to group some functions I use often. Because these functions are all loosely coupled, I don't want to put them in a class.
I like the idea of use, where you can select which functions should be imported, but I don't like that the imported functions are then stored in the global namespace.
For example if I have a file my_util.pm6:
#content of my_util.pm6
unit module my_util;
our sub greet($who) is export(:greet) {
say $who;
}
sub greet2($who) is export(:greet2) {
say $who;
}
sub greet3($who) is export(:greet3) {
say $who;
}
and a file test.p6:
#!/usr/bin/perl6
#content of test.p6
use v6.c;
use lib '.';
use my_util :greet2;
greet("Bob"); #should not work (because no namespace given) and also doesn't work
greet2("Bob"); #should not work (because no namespace given) but actually works
greet3("Bob"); #should not work (because no namespace given) and also doesn't work
my_util::greet("Alice"); #works, but should not work (because it is not imported)
my_util::greet2("Alice"); #should work, but doesn't work
my_util::greet3("Alice"); #should not work (because it is not imported) and also doesn't work
I would like to call all functions via my_util::greet() and not via greet() only.
The function greet() defined in my_util.pm6 comes very close to my requirements, but because it is declared with our, it is always accessible. What I would like is the ability to select which functions are imported while still leaving them in the namespace defined by the module (i.e. without polluting the global namespace).
Does anyone know, how I can achieve this?
To clear up some potential confusion...
Lexical scopes and package symbol tables are different things.
1. my adds a symbol to the current lexical scope.
2. our adds a symbol to the current lexical scope, and to the public symbol table of the current package.
3. use copies the requested symbols into the current lexical scope. That's called "importing".
4. The :: separator does a package lookup – i.e. foo::greet looks up the symbol greet in the public symbol table of the package foo. This doesn't involve any "importing".
As for what you want to achieve...
The public symbol table of a package is the same no matter where it is referenced from... There is no mechanism for making individual symbols in it visible from different scopes.
You could make the colons part of the actual names of the subroutines...
sub foo::greet($who) is export(:greet) { say "Hello, $who!" }
# This subroutine is now literally called "foo::greet".
...but then you can't call it in the normal way anymore (because the parser would interpret that as rule 4 above), so you would have to use the clunky "indirect lexical lookup" syntax, which is obviously not what you want:
foo::greet "Sam"; # Could not find symbol '&greet'
::<&foo::greet>( "Sam" ); # Hello, Sam!
So, your best bet would be to either...
Declare the subroutines with our, and live with the fact that all of them can be accessed from all scopes that use the module.
Or:
Add the common prefix directly to the subroutine names, but using an unproblematic separator (such as the dash), and then import them normally:
unit module foo;
sub foo-greet($who) is export(:greet) { ... }
sub foo-greet2($who) is export(:greet2) { ... }
sub foo-greet3($who) is export(:greet3) { ... }
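A caller would then import the tags exactly as before and call the prefixed names; a sketch modeled on the test.p6 from the question:

```raku
use lib '.';
use foo :greet2;

foo-greet2("Bob");    # works: imported into the lexical scope, prefix baked into the name
# foo-greet("Bob");   # would fail to compile: :greet was not imported
```

The prefix gives you the visual grouping of my_util::greet() while still letting use control exactly which names enter the caller's scope.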
I am a newbie to F#, but am quite familiar with C#. I was wondering whether there is any difference between declaring top-level modules and local modules (e.g. performance), other than that the namespace declaration is not needed for top-level modules (it is part of the module declaration).
I cannot find anything in the documentation (MSDN F# Modules) specifying other differences.
Basically, coming from the C# world I prefer
//Version 1
namespace My.Namespace
module MyModule =
let a = 1
over
//Version 2
module My.Namespace.MyModule
let a = 1
Given that in both versions there will be only one module in the file, does Version 2 bring any disadvantages (compared to Version 1)?
Those are equivalent. According to the F# 3.0 spec:
An implementation file that begins with a module declaration defines a
single namespace declaration group with one module. For example:
module MyCompany.MyLibrary.MyModule
let x = 1
is equivalent to:
namespace MyCompany.MyLibrary
module MyModule =
let x = 1
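Since both forms compile to the same namespace declaration group, consumers access the value identically either way; a small sketch (module name Consumer is illustrative):

```fsharp
// some other file in the same project
module Consumer

// same fully qualified access whether MyModule was declared as Version 1 or Version 2
let y = My.Namespace.MyModule.a + 1
```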