Pass modules to other modules

I'm trying to use Irmin with MirageOS, and I'm struggling with all those modules.
I took a look at the Canopy sources to try to figure out how Irmin is supposed to be used, and I have this:
let start console clock resolver conduit =
  let (module Context) = Irmin_mirage.context (resolver, conduit) in
  let module Mirage_git_memory = Irmin_mirage.Git.Mem.KV(Context)(Git.Inflate.M) in
  let module Store = Mirage_git_memory(Irmin.Contents.String) in
  [...]
From the start function I can use Store fine (set and read the repo).
How do I pass Store around? Since all of these types depend on start's parameters, I can't (or don't know how to) define those modules anywhere else, and all my attempts to pass or define Store elsewhere have failed with errors about constructors that would escape their scope.
I did manage to create a store.ml file of my own (like in Canopy), but that just moves the problem to a new module; I still have no idea how to pass it around.
In Canopy they seem to use the Store module exclusively from the start function, which is fine for their purposes but not for what I want to do.
I'm trying to use Irmin, but I assume this isn't an Irmin problem; I'm probably just very wrong about how the module system works in OCaml. When I try to pass it to another function or module I end up with errors like
The signature for this packaged module couldn't be inferred.
Which seems logical, but I don't know how to fix that.
Thanks

First-class modules (like let (module Context)) are a bit difficult for the OCaml compiler to handle; in particular, it is often unable to infer their type by itself.
The solution is to add a manual annotation:
let (module Context : Irmin_mirage.CONTEXT) = Irmin_mirage.context (resolver, conduit) in
...
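The same idea applies when you pass the packaged module to other functions: annotate the parameter with the module type. A minimal sketch of the pattern, independent of Irmin (the COUNTER signature and all names below are made up for illustration):
(* Annotating a first-class module tells the compiler its signature,
   both when unpacking it and when taking it as a function parameter. *)
module type COUNTER = sig
  val create : unit -> int ref
  val bump : int ref -> unit
end

module Counter : COUNTER = struct
  let create () = ref 0
  let bump r = r := !r + 1
end

(* Without the (module C : COUNTER) annotation, this is where the
   "couldn't be inferred" error would show up. *)
let use_counter (module C : COUNTER) =
  let c = C.create () in
  C.bump c;
  !c

let () = print_int (use_counter (module Counter))
Give Context (or a packed Store) a named module type in the same way, and annotate it wherever the packaged module crosses a function boundary.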

Related

How to mock module attributes and other modules in Elixir?

I'm very new to Elixir, so this is probably basic, but I couldn't find much online.
If I have the following code:
defmodule A do
  def my_first_function do
    # does stuff
  end
end

defmodule B do
  @my_module_attribute A.my_first_function()
end
then how do I mock module A in my tests so I can just tell the tests what I want it to return?
Also, is it possible to just mock @my_module_attribute instead? It would be good to have an answer to both approaches (and to know which is considered the better pattern).
I think perhaps you are wanting to use module attributes for more than what they are useful for, and perhaps there is some confusion over the exact definition of "mock".
Avoid thinking of module attributes as "class variables" -- even though they look like they might serve the same purpose and they live in the same place, their behavior is different. Be especially careful when a module attribute relies on a function call. One common pitfall is to use module attributes to store values read from configuration, e.g. using Application.fetch_env!/2. The problem is that module attributes are evaluated at compile time (not at run time), so you can easily end up with an unexpected value. (The compiler now warns about this gotcha explicitly, and Application.compile_env!/2 is now provided to better communicate that particular use case.)
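For illustration only (the app name :my_app and key :api_key are placeholders), the compile-time variant looks roughly like this:
defmodule MyApp.Client do
  # Read once, at compile time; changing the config later requires recompiling.
  @api_key Application.compile_env!(:my_app, :api_key)

  def api_key, do: @api_key
end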
I usually reserve module attributes for raising the visibility of simple constants and I tend to avoid using them for storing the result of any function execution.
When it comes back down to "mocking", you still have to think about the fact that Elixir is compiled -- it's not JavaScript. Someone geekier than me can explain the mechanics, but module attributes don't exist at runtime the same way they do at compile time.
"Mocking" during tests usually means substituting one module or function for another, and this swap is often easier to do at runtime. One common pattern looks a bit like Dependency Injection (but purists may object to the comparison).
Consider a function like this that relies on some OtherModule to do its work:
def my_function(input, opts \\ []) do
  other_module = Keyword.get(opts, :service, OtherModule)
  other_module.get_thing(input)
  # do more stuff...
end
Instead of hard-coding the OtherModule inside of my_function, its name is read out of the optional opts. This provides a useful way to override the output of that OtherModule when you need to test it.
In your test, you can do something like this:
test "example mock" do
  assert something = MyApp.MyMod.my_function("foo", service: MockService)
end
When the test provides MockService as the :service module, the get_thing function will be called on MockService instead of OtherModule. If the provided module does not define a get_thing function, the call will fail at runtime. This is where having a behaviour comes in handy, because it helps guarantee that your module has implemented the needed functions. The Mox testing library, for example, relies on behaviour contracts, but from the example above you can see the premise.
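A rough sketch of that behaviour idea, using made-up names (MyApp.Service is hypothetical, not part of any library):
# The contract that both the real module and the mock promise to fulfill.
defmodule MyApp.Service do
  @callback get_thing(String.t()) :: {:ok, term()} | {:error, term()}
end

defmodule OtherModule do
  @behaviour MyApp.Service
  @impl true
  def get_thing(input), do: {:ok, "real result for #{input}"}
end

defmodule MockService do
  @behaviour MyApp.Service
  @impl true
  def get_thing(_input), do: {:ok, "canned test result"}
end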
If you squint, you can see that this "injection" approach is somewhat similar to how JavaScript often accepts a callback function as an argument, but in Elixir it is more common to pass around module names instead of captured functions.

Julia custom module scope of variables

I am starting to write my first custom module in Julia.
What I am doing is writing all the files in a folder, then importing them in a ModuleName.jl file, and finally writing a test program that executes a precompiled main() function which calls my custom module (I like keeping a modular style of programming).
The problem is that I think I am missing something about the use of the using and import keywords. In my test file I have the following lines:
push!(LOAD_PATH,"./ModuleNameFolder")
using ModuleName
I thought that, when loaded with using, the functions of ModuleName could be called as plain myfunct() rather than the explicit ModuleName.myfunct(), but this is not the case: if I omit the ModuleName prefix, the compiler throws an UndefVarError.
What am I doing wrong? I would like to bring all the functions of my custom module into the main scope.
Welcome to Julia.
What do you mean by a precompiled main() function? Tests in Julia are normally kept in a specific file that is run automatically on each push to the repository hosting your code.
Anyhow, try include("ModuleName.jl") followed by using .ModuleName (note the dot). And remember to export the objects in ModuleName that you want to make available directly.
Have a look at my tutorial: https://syl1.gitbook.io/julia-language-a-concise-tutorial/language-core/11-developing-julia-packages
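A minimal sketch of that layout (the function name myfunct and the folder path are placeholders taken from the question):
# ModuleNameFolder/ModuleName.jl
module ModuleName
export myfunct            # only exported names become visible via `using`
myfunct() = println("hello from ModuleName")
end

# test file
include("./ModuleNameFolder/ModuleName.jl")
using .ModuleName         # note the dot: the module now lives inside Main
myfunct()                 # works without the ModuleName. prefix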

Use data in plugin outside the SCIPsolve call

I would like to share data between a plugin and my main function (that is, use it outside the call to the SCIPsolve function). For example, a branching rule sets a certain int variable to 1, and after the optimization is done I can go and check whether the variable was changed or not.
I thought I could accomplish this by using the plugin data (e.g. SCIP_BranchruleData) but it can't be accessed from outside the plugin's source file.
How can I do it?
I will appreciate any help.
Rodolfo
An easy solution is to add a getter function to the branching rule, which you implement in branch_xyz.c and prototype in branch_xyz.h. Then your code needs to include the header file, and you can access the fields in the branchdata.
See also the documentation of branch_allfullstrong.cpp where an external function is defined and you can see how to get the branchdata and branchrule when passing just a SCIP pointer.
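A hedged sketch of such a getter, assuming your rule is registered under the name "xyz" and its SCIP_BranchruleData contains an int field called myflag (both names are placeholders for your own definitions):
/* branch_xyz.h */
int SCIPgetXyzFlag(SCIP* scip);

/* branch_xyz.c, where struct SCIP_BranchruleData is defined */
#include <assert.h>
#include "scip/scip.h"
#include "branch_xyz.h"

int SCIPgetXyzFlag(SCIP* scip)
{
   SCIP_BRANCHRULE* branchrule = SCIPfindBranchrule(scip, "xyz");
   SCIP_BRANCHRULEDATA* branchruledata;

   assert(branchrule != NULL);
   branchruledata = SCIPbranchruleGetData(branchrule);
   assert(branchruledata != NULL);

   return branchruledata->myflag;   /* the value your branching rule set during solving */
}
SCIPfindBranchrule and SCIPbranchruleGetData are part of SCIP's public API, so only the struct field and names above are specific to your plugin.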

How are words bound within a Rebol module?

I understand that the module! type provides a better structure for protected namespaces than object! or the 'use function. How are words bound within the module? I notice some errors related to unbound words:
REBOL [Type: 'module] set 'foo "Bar"
Also, how does Rebol distinguish between a word local to the module ('foo) and that of a system function ('set)?
Minor update, shortly after:
I see there's a switch that changes the method of binding:
REBOL [Type: 'module Options: [isolate]] set 'foo "Bar"
What does this do differently? What gotchas are there in using this method by default?
OK, this is going to be a little tricky.
In Rebol 3 there are no such things as system words; there are just words. Some words have been added to the runtime library lib, and set is one of those words, which happens to have a function assigned to it. Modules import words from lib, though what "import" means depends on the module options. That might be more tricky than you were expecting, so let me explain.
Regular Modules
For starters, I'll go over what importing means for "regular" modules, ones that don't have any options specified. Let's start with your first module:
REBOL [Type: 'module] set 'foo "Bar"
First of all, you have a wrong assumption here: the word foo is not local to the module; it's treated just the same as set. If you want to define foo as a local word you have to use the same method as you do with objects: use the word as a set-word at the top level, like this:
REBOL [Type: 'module] foo: "Bar"
The only difference between foo and set is that you hadn't exported or added the word foo to lib yet. When you reference words in a module that you haven't declared as local words, it has to get their values and/or bindings from somewhere. For regular modules, it binds the code to lib first, then overrides that by binding the code again to the module's local context. Any words defined in the local context will be bound to it. Any words not defined in the local context will retain their old bindings, in this case to lib. That is what "importing" means for regular modules.
In your first example, assuming that you haven't done so yourself, the word foo was not added to the runtime library ahead of time. That means that foo wasn't bound to lib, and since it wasn't declared as a local word it wasn't bound to the local context either. So as a result, foo wasn't bound to anything at all. In your code that was an error, but in other code it might not be.
Isolated Modules
There is an "isolate" option that changes the way that modules import stuff, making it an "isolated" module. Let's use your second example here:
REBOL [Type: 'module Options: [isolate]] set 'foo "Bar"
When an isolated module is made, every word in the module, even in nested code, is collected into the module's local context. In this case, it means that set and foo are local words. The initial values of those words are set to whatever values they have in lib at the time the module is created. That is, if the words are defined in lib at all. If the words don't have values in lib, they won't initially have values in the module either.
It is important to note that this import of values is a one-time thing. After that initial import, any changes to these words made outside the module don't affect the words in the module. That is why we say the module is "isolated". In the case of your code example, it means that someone could change lib/set and it wouldn't affect your code.
But there's another important module type you missed...
Scripts
In Rebol 3, scripts are another kind of module. Here's your code as a script:
REBOL [] set 'foo "Bar"
Or if you like, since script headers are optional in Rebol 3:
set 'foo "Bar"
Scripts also import their words from lib, and they import them into an isolated context, but with a twist: All scripts share the same isolated context, known as the "user" context. This means that when you change the value of a word in a script, the next script to use that word will see the change when it starts. So if after running the above script, you try to run this one:
print foo
Then it will print "Bar" rather than have foo be undefined, even though foo is still not defined in lib. You might find it interesting to know that if you are using Rebol 3 interactively, entering commands into the console and getting results, every command line you enter is a separate script. So if your session looks like this:
>> x: 1
== 1
>> print x
1
The x: 1 and print x lines are separate scripts, the second taking advantage of the changes made to the user context by the first.
The user context is actually supposed to be task-local, but for the moment let's ignore that.
Why the difference?
Here is where we get back to the "system function" thing, and that Rebol doesn't have them. The set function is just like any other function. It might be implemented differently, but it's still a normal value assigned to a normal word. An application will have to manage a lot of these words, so that's why we have modules and the runtime library.
In an application there will be stuff that needs to change, and other stuff that needs to not change, and which stuff is which depends on the application. You will want to group your stuff, to keep things organized or for access control. There will be globally defined stuff, and locally defined stuff, and you will want to have an organized way to get the global stuff to the local places, and vice-versa, and resolve any conflicts when more than one thing wants to define stuff with the same name.
In Rebol 3, we use modules to group stuff, for convenience and access control. We use the runtime library lib as a place to collect the exports of the modules, and resolve conflicts, in order to control what gets imported to the local places like other modules and the user context(s). If you need to override some stuff, you do this by changing the runtime library, and if necessary propagating your changes out to the user context(s). You can even upgrade modules at runtime, and have the new version of the module override the words exported by the old version.
For regular modules, when things are overridden or upgraded, your module will benefit from such changes. Assuming those changes are a benefit, this can be a good thing. A regular module cooperates with other regular modules and scripts to make a shared environment to work in.
However, sometimes you need to stay separate from these kinds of changes. Perhaps you need a particular version of some function and don't want to be upgraded. Perhaps your module will be loaded in a less trustworthy environment and you don't want your code hacked. Perhaps you just need things to be more predictable. In cases like this, you may want to isolate your module from these kinds of external changes.
The downside to being isolated is that, if there are changes to the runtime library that you might want, you're not going to get them. If your module is somehow accessible (such as by having been imported with a name), someone might be able to propagate those changes to you, but if you're not accessible then you're out of luck. Hopefully you've thought to monitor lib for changes you want, or reference the stuff through lib directly.
Still, we've missed another important issue...
Exporting
The other part of managing the runtime library and all of these local contexts is exporting. You have to get your stuff out there somehow. And the most important factor is something that you wouldn't suspect: whether or not your module has a name.
Names are optional for Rebol 3's modules. At first this might seem like just a way to make it simpler to write modules (and in Carl's original proposal, that is exactly why). However, it turns out that there is a lot of stuff that you can do when you have a name that you can't when you don't, simply because of what a name is: a way to refer to something. If you don't have a name, you don't have a way to refer to something.
It might seem like a trivial thing, but here are some things that a name lets you do:
You can tell whether a module is loaded.
You can make sure a module is only loaded once.
You can tell whether an older version of a module was there earlier, and maybe upgrade it.
You can get access to a module that was loaded earlier.
When Carl decided to make names optional, he gave us a situation where it would be possible to make modules for which you couldn't do any of those things. Given that module exports were intended to be collected and organized in the runtime library, we had a situation where you could have effects on the library that you couldn't easily detect, and modules that got reloaded every time they were imported.
So for safety we decided to cut out the runtime library completely and just export words from these unnamed modules directly to the local (module or user) contexts that were importing them. This makes these modules effectively private, as if they are owned by the target contexts. We took a potentially awkward situation and made it a feature.
It was such a feature that we decided to support it explicitly with a private option. Making this an explicit option helps us deal with the last problem not having a name caused us: making private modules not have to reload over and over again. If you give a module a name, its exports can still be private, but it only needs one copy of what it's exporting.
However, named or not, private or not, that gives us three export types.
Regular Named Modules
Let's take this module:
REBOL [type: module name: foo] export bar: 1
Importing this adds a module to the loaded modules list, with the default version of 0.0.0, and exports one word bar to the runtime library. "Exporting" in this case means adding a word bar to the runtime library if it isn't there, and setting that word lib/bar to the value that the word foo/bar has after foo has finished executing (if it isn't set already).
It is worth noting that this automatic exporting happens only once, when the body of foo is finished executing. If you make a change to foo/bar after that, that doesn't affect lib/bar. If you want to change lib/bar too, you have to do it manually.
It is also worth noting that if lib/bar already exists before foo is imported, you won't have another word added. And if lib/bar is already set to a value (not unset), importing foo won't overwrite the existing value. First come, first served. If you want to override an existing value of lib/bar, you'll have to do so manually. This is how we use lib to manage overrides.
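For instance (a sketch only, reusing the bar name from the module above), a manual override after import would be a plain assignment into the library context, something like:
lib/bar: 2   ; push the new value out to the runtime library yourself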
The main advantage that the runtime library gives us is that we can manage all of our exported words in one place, resolving conflicts and overrides. However, another advantage is that most modules and scripts don't actually have to say what they are importing. As long as the runtime library is filled in properly ahead of time with all the words you need, your script or module that you load later will be fine. This makes it easy to put a bunch of import statements and any overrides in your startup code, which sets up everything the rest of your code will need. This is intended to make it easier to organize and write your application code.
Named Private Modules
In some cases, you don't want to export your stuff to the main runtime library. Stuff in lib gets imported into everything, so you should only export stuff to lib that you want to make generally available. Sometimes you want to make modules that only export stuff for the contexts that want it. Sometimes you have some related modules, a general facility and a utility module or so. If this is the case, you might want to make a private module.
Let's take this module:
REBOL [type: module name: foo options: [private]] export bar: 1
Importing this module doesn't affect lib. Instead, its exports are collected into a private runtime library that is local to the module or user context that is importing this module, along with those of any other private modules that the target is importing, then imported to the target from there. The private runtime library is used for the same conflict resolution that lib is used for. The main runtime library lib takes precedence over the private lib, so don't count on the private lib overriding global things.
This kind of thing is useful for making utility modules, advanced APIs, or other such tricks. It is also useful for making strong-modular code which requires explicit imports, if that is what you're into.
It's worth noting that if your module doesn't actually export anything, there is no difference between a named private module and a named public module, so it's basically treated as public. All that matters is that it has a name. Which brings us to...
Unnamed Modules
As explained above, if your module doesn't have a name then it pretty much has to be treated as private. More than private, though: since you can't tell whether it's loaded, you can't upgrade it or even keep it from being reloaded. But what if that's what you want?
In some cases, you really want your code run for effect. In these cases having your code rerun every time is what you want to do. Maybe it's a script that you're running with do but structuring as a module to avoid leaking words. Maybe you're making a mixin, some utility functions that have some local state that needs initializing. It could be just about anything.
I frequently make my %rebol.r file an unnamed module because I want to have more control over what it exports and how. Plus, since it's done for effect and doesn't need to be reloaded or upgraded there's no point in giving it a name.
No need for a code example, your earlier ones will act this way.
I hope this gives you enough of an overview of the design of R3's module system.

dojo provide with declare

I'm trying to clear something up that I'm not getting from the dojo docs.
When I create a dojo.provide I assume this is like creating a namespace with objects. For example:
myApp = { container: {} }
Written with dojo.provide this would be:
dojo.provide('myApp.container');
Now, I have read somewhere that this is a global. I'm not sure I get that, since it's a namespace. Or are people right in saying this?
Another issue I'm having: if I use dojo.declare to create a class, do I need to use dojo.provide to create that namespace for me? For example, in a myClass.js file:
dojo.provide('myApp.myClass');
dojo.declare("myApp.myClass", null, {
  constructor: function(){
    console.log("myApp.myClass created");
  }
});
Now, if there is truth to provide causing global variables, wouldn't this be a global class?
When I do a console.log from my app.js file (which is my main file), it doesn't show up as a global but as the namespaced myApp.myClass.
So can someone clear this up? It's a little strange if there is truth in it.
Firstly, to clarify the term "global", technically myApp is a global - it is a variable on the browser's window object. While yes, ultimately the object/class your module defines is contained within that global object (and thus "namespaced" under it), that top level namespace itself manifests as a global variable; it is accessible to any script in the page/app.
Now, onto the declare question. Assuming this code is going into its own module to be loaded via dojo.require, yes, you still need the dojo.provide. While one purpose of dojo.provide is to ensure the variable you will be populating (e.g. myApp.MyClass) and any parent namespaces exist up-front, its other purpose is basically to act like an ACK to dojo.require's SYN - i.e., "yes, you asked for myApp.MyClass, and that's who I am." I'm pretty sure you would find that in the absence of that dojo.provide, dojo.require("myApp.MyClass") would fail, thinking it never found the module it was looking for.
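As a rough sketch of the consuming side under the legacy (pre-AMD) loader, assuming the module above sits at myApp/myClass.js on a registered module path:
// app.js
dojo.require("myApp.myClass");   // loads myClass.js; its dojo.provide confirms the module
var obj = new myApp.myClass();   // logs "myApp.myClass created"
console.log(window.myApp);       // the top-level namespace object is a global on window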
Hope that answers your questions.