Finding locally compiled Raku modules

How can I find modules that have been installed locally, and which I can use in a Raku program?
Assume that we have three distributions: Parent, Brother, Sister. Parent 'provides' Top.rakumod, while Brother and Sister provide 'Top::Son.rakumod' and 'Top::Daughter.rakumod', respectively. Brother and Sister have a 'depends': 'Top' in their META6.json.
Each distribution is in its own git repo, and each is installed with zef.
Suppose Top is set up as a class with an interface method, perhaps something like: multi method on-starting { ... }, which each sub-class has to implement, and which, when run, provides the caller with information about the sub-class. So both Top::Son and Top::Daughter implement on-starting. There could also be distributions such as Top::Aunt that are not locally installed. We need to find which ones are installed.
So, now we run an instance of Top (as defined in Parent). It needs to look for installed modules that match Top::*. The place to start (I think) is $*REPO, which is a linked list of repositories containing the modules that are installed. $*REPO also does the CompUnit::Repository role, which in turn has a 'need' method.
What I don't understand is how to manipulate $*REPO to get a list of all the candidate modules that match Top::*, along the whole of the linked list.
Once I have the list of candidates, I can use ^can to check that each has an on-starting method, and then call that method.
If this is not the way to get to a result where Top finds out about locally installed modules, I'd appreciate some alternatives to the scheme I just laid out.

CompUnit::Repository (CUR) has a candidates method for searching distributions, but it does not allow searching by name prefix (its fast lookups require the full name, which is hashed into a sha1 directory for the lookup). For a CompUnit::Repository::FileSystem (CURFS) you can call .distribution to get the distribution it provides, and for a CompUnit::Repository::Installation (CURI) you can call .installed to get all the distributions it provides:
raku -e '
    say $*REPO.repo-chain
        .grep(CompUnit::Repository::FileSystem | CompUnit::Repository::Installation)
        .map({ $_ ~~ CompUnit::Repository::FileSystem ?? $_.distribution !! $_.installed.Slip })
        .grep(*.defined)
    ;'
If you want to match namespaces, you would then need to grep on each distribution's name or the names of its modules:
my @matches = @distributions.grep({ $_.meta<provides>.keys.first({.starts-with("Top::")}) });
This way of handling things can be seen in the Pluggable module (which I'd suggest using if you also want to load such code).
Of course you explicitly asked only for installed modules, but ignoring CURFS doesn't make any sense -- as an application developer it isn't supposed to matter where or how a module is loaded. If someone wants to use -I ./foo instead of installing it there isn't a good reason to ignore it. If you insist on doing this anyway it should be obvious how to change the example above to accommodate.
Once I have the list of candidates, I can use ^can to check that each has an on-starting method, and then call that method.
Having a list of candidates doesn't let you do anything other than inspect the meta file or slurp the source code of various files. At the very least you would need to load whatever module you want to call e.g. .^can on first, and there are going to be a few steps involved in that; the distribution object can't be used directly to load the module (you extract the full name from it and use that to load it) -- so again I'd suggest using Pluggable.
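Putting the pieces together, a minimal sketch might look like the following. This assumes the Top::* naming from the question; require ::($name) is the usual way to load a module by name at runtime, and whether on-starting can be called on the type object (rather than an instance) depends on how your classes are written:

```raku
# Sketch only: collect distributions from the repo chain, as above
my @distributions = $*REPO.repo-chain
    .grep(CompUnit::Repository::FileSystem | CompUnit::Repository::Installation)
    .map({ $_ ~~ CompUnit::Repository::FileSystem ?? $_.distribution !! $_.installed.Slip })
    .grep(*.defined);

# Keep only distributions providing at least one Top::* module
my @matches = @distributions.grep({
    $_.meta<provides>.keys.first(*.starts-with('Top::'))
});

for @matches -> $dist {
    for $dist.meta<provides>.keys.grep(*.starts-with('Top::')) -> $name {
        my \type = (require ::($name));          # runtime load by name
        type.on-starting if type.^can('on-starting');
    }
}
```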

Related

How could I get the complete list of Puppet modules available from the Puppet repo

I'm looking for a way to get the complete list of Puppet modules from the Puppet repo.
As far as I am aware, there is no direct way to get such a list. I would consider inquiring directly of Puppet, Inc., as it's not out of the question that they would be willing to run a one-off special query for you. I'm sure they would want to know what you want to do with the list, though. And, of course, this is by no means a sure thing.
You could also use a bot to screen-scrape the multiple HTML pages of the all-module list, and process the results to extract the list you want. But that would be a lot more work than just asking.
Note well that the Forge contents are not static. New modules are added regularly, and module versions are updated from time to time. I'm uncertain about the policy for removals, but it seems that they generally do not remove modules, but rather deprecate them. In any case, any list of the Forge contents would necessarily be a snapshot from a single point in time.
You should also understand that although the Forge itself is operated by Puppet, most of the modules are contributed and maintained by community members, if they are maintained at all. There is also an unknown but probably large number of modules in use in the world that are not available from the Forge. Thus, the list you are asking for cannot be construed as a list of official modules, nor as a list of all the modules there are.

Terraform - Are single resource modules always bad?

I decided to learn more about Terraform and see if I could replicate what I did manually in the console, using Terraform. I set up two VMs, one that was publicly accessible and one that was not and had to be accessed through the first VM. These two VMs are almost identical, apart from the firewall rules.
In the interest of being DRY, I thought I'd create a module, so that I don't have to repeat all the options for the two VMs and just specify the differences. Since I wasn't sure about how to create a module, I checked the documentation and found the following:
When to write a module
[...]
We do not recommend writing modules that are just thin wrappers around single other resource types. If you have trouble finding a name for your module that isn't the same as the main resource type inside it, that may be a sign that your module is not creating any new abstraction and so the module is adding unnecessary complexity. Just use the resource type directly in the calling module instead.
Source: https://www.terraform.io/docs/modules/index.html#when-to-write-a-module
It makes sense to me that publishing a module that is just a wrapper around a single resource may not be that useful, but for internal use in your configuration, it seems like a useful tool to make your configuration DRY. If 9 out of 10 arguments are the same for all of your VMs, why wouldn't you create a module to hide the 9 common arguments from the main configuration and not repeat them?
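As a concrete illustration of that internal use, such a thin wrapper might look like the following sketch. All names, the provider, and the argument values are made up; only the shape matters:

```hcl
# modules/vm/main.tf -- hypothetical module holding the shared arguments
variable "name"      { type = string }
variable "subnet_id" { type = string }

resource "aws_instance" "this" {
  ami           = "ami-0abc1234"  # shared across all VMs
  instance_type = "t3.micro"      # shared across all VMs
  subnet_id     = var.subnet_id   # one of the few per-VM differences
  tags          = { Name = var.name }
}

# root main.tf -- each caller states only what differs
module "public_vm" {
  source    = "./modules/vm"
  name      = "public-vm"
  subnet_id = "subnet-public"
}

module "private_vm" {
  source    = "./modules/vm"
  name      = "private-vm"
  subnet_id = "subnet-private"
}
```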
As I am new to Terraform, I just want to make sure that I am not teaching myself bad practices.

using require in layer.conf in yocto

Considering all freedom that yocto gives to the developer, I have a question.
I would like to make this my_file.inc available only for recipes in one particular meta-layer. I know, that, for instance, using INHERIT keyword inside the local.conf will make my_class.bbclass file available globally for each recipe.
Is it a good practice to add this:
require my_file.inc
inside layer.conf? Or should I change my_file.inc to the my_file.bbclass, and, add INHERIT = "my_file.bbclass" to the layer.conf?
Any other possibilities?
Even if it seems to work, neither of your approaches is technically completely correct. The key point is that all .conf files are parsed first, and everything they contain is globally visible throughout the whole build process. So if you add something through the layer.conf file, it is not only pulled in at an unexpected place, it is also not limited to that layer, and might therefore cause breakage elsewhere.
While I do not have a really good and clean solution, maybe the following can help you:
You can make your custom recipes react on certain keywords in DISTRO_FEATURES or MACHINE_FEATURES. Then you can create a two-staged approach:
Add the desired keyword in local.conf (or your MACHINE, or DISTRO, or whatever configuration)
Make the recipes react to it. If you need the mechanism in several places, then it might be useful to pour it into a .bbclass that your layer brings along and that you pull in for the respective recipes.
This way the effect is properly contained.
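Sketched out, the two stages might look like this ("my-feature" and the dependency name are made-up placeholders):

```
# Stage 1 -- local.conf (or your MACHINE/DISTRO configuration):
DISTRO_FEATURES:append = " my-feature"

# Stage 2 -- a recipe in your layer reacts to the keyword;
# bb.utils.contains is BitBake's standard conditional helper:
DEPENDS += "${@bb.utils.contains('DISTRO_FEATURES', 'my-feature', 'some-extra-dep', '', d)}"
```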
Maybe section 5.1.3.2 of the Yocto Project documentation answers your question:
Avoid duplicating include files. Use append files (.bbappend) for each recipe that uses an include file. Or, if you are introducing a new recipe that requires the included file, use the path relative to the original layer directory to refer to the file. For example, use require recipes-core/package/file.inc instead of require file.inc. If you're finding you have to overlay the include file, it could indicate a deficiency in the include file in the layer to which it originally belongs. If this is the case, you should try to address that deficiency instead of overlaying the include file. For example, you could address this by getting the maintainer of the include file to add a variable or variables to make it easy to override the parts needing to be overridden.
So to avoid duplicate inclusion later, it would be better not to include your .inc file(s) this way.

How to reference the absolute directory of a project in Autoconf (to call custom scripts in portable way)?

I'm writing a custom check for installed libraries in autoconf:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON autotools/check-ghc-version-range ....)
...
])
where my Python script that actually performs the check resides in the autotools/ sub-directory of the project.
However, this is not portable; for example, make distcheck fails because the autoconf tools are then called from a different directory. How can I reference the absolute path to my Python script so that it gets called properly no matter what the current directory is?
ac_top_srcdir or ac_abs_top_srcdir should work in this case:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON $ac_top_srcdir/autotools/check-ghc-version-range ....)
...
])
EDIT: I don't think this approach will work -- it seems that $ac_top_srcdir isn't evaluated until later (AC_OUTPUT?).
What I think might work in this instance is to do something similar to what the runtime C tests do: blast a configuration test to a temporary file (conftest.py instead of conftest.c in this case) and run it. Unfortunately, there are not (yet) any builtin macros for automake/autoconf or other tools that directly assist with this task.
Fortunately it seems that a clever person has written at least a couple different ways to do this. The first one is GNU pyconfigure which seems to have facilities for writing Python test code as I described above. The second one is more of an ad hoc macro collection that he used for his project.
You can use $srcdir.
It's not necessarily an absolute path, but it's a path that points from the top of the build tree to the top of the source tree.
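Applied to the macro from the question, that would look roughly like this (quoting $srcdir guards against paths containing spaces; the trailing arguments are elided as in the original):

```
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON "$srcdir"/autotools/check-ghc-version-range ....)
...
])
```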

How can I create a single Clojure source file which can be safely used as a script and a library without AOT compilation?

I’ve spent some time researching this and, though I’ve found some relevant info, none of it has answered the question satisfactorily. Here’s what I’ve found:
SO question: “What is the clojure equivalent of the Python idiom if __name__ == '__main__'?”
Some techniques at RosettaCode
A few discussions in the Clojure Google Group, most from 2009
My Clojure source code file defines a namespace and a bunch of functions. There’s also a function which I want to be invoked when the source file is run as a script, but never when it’s imported as a library.
So: now that it’s 2012, is there a way to do this yet, without AOT compilation? If so, please enlighten me!
I'm assuming by run as a script you mean via clojure.main as follows:
java -cp clojure.jar clojure.main /path/to/myscript.clj
If so then there is a simple technique: put all the library functions in a separate namespace like mylibrary.clj. Then myscript.clj can use/require this library, as can your other code. But the specific functions in myscript.clj will only get called when it is run as a script.
As a bonus, this also gives you a good project structure, as you don't want script-specific code mixed in with your general library functions.
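Sketched with made-up namespace names, that layout would be something like:

```clojure
;; mylibrary.clj -- pure library code, safe to require from anywhere
(ns mylibrary)

(defn greet [name]
  (str "Hello, " name))

;; myscript.clj -- the script entry point; its top-level calls run
;; only when this file itself is passed to clojure.main
(ns myscript
  (:require [mylibrary]))

(println (mylibrary/greet "world"))
```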
EDIT:
I don't think there is a robust way within Clojure itself to determine whether a single file was launched as a script or loaded as a library - from Clojure's perspective there is no difference between the two (it all gets loaded the same way via Compiler.load(...) in the Clojure source, for anyone interested).
Options if you really want to detect the manner of the launch:
Write a main class in Java which sets a static flag and then launches the Clojure script. You can easily test this flag from Clojure.
Use AOT compilation to implement a Clojure main class which sets a flag
Use *command-line-args* to indicate script usage. You'll need to pass an extra parameter like "script" on the command line.
Use a platform-specific method to determine the command line (e.g. from the environment variables in Windows)
Use the --eval option in the clojure.main command line to load your clj file and launch a specific function that represents your script. This function can then set a script-specific flag if needed
Use one of the methods for detecting the Java main class at runtime
I’ve come up with an approach which, while deeply flawed, seems to work.
I hard-code the set of namespaces that are known when my program runs as a script, then compare its size to the number of namespaces known at runtime. The idea is that if the file is being used as a lib, there should be at least one more namespace present than in the script case.
Of course, this is extremely hacky and brittle, but it does seem to work:
(defn running-as-script
  "This is hacky and brittle but it seems to work. I’d love a better
  way to do this; see http://stackoverflow.com/q/9027265"
  []
  (let [known-namespaces
        #{"clojure.set"
          "user"
          "clojure.main"
          "clj-time.format"
          "clojure.core"
          "rollup"
          "clj-time.core"
          "clojure.java.io"
          "clojure.string"
          "clojure.core.protocols"}]
    (= (count (all-ns)) (count known-namespaces))))
This might be helpful: the github project lein-oneoff describes itself as "dependency management for one-off, single-file clojure programs."
This lets you define everything in one file, but you do need the oneoff plugin installed in order to run it from the command line.