How does Raku decide which version of a module gets loaded?

When I do use Foo:ver<1.0>; it loads version 1.0 of module Foo. But what happens when I do use Foo;?

TL;DR: When given no specific version, a default Raku install will load the latest version from the first CompUnit::Repository it encounters that matches any version of that module (which is not necessarily the highest version across all CompUnit::Repository entries in the chain).
It is possible to create and load a non-core CompUnit::Repository that would, say, only load random versions of a module unless otherwise specified. This answer does not apply to those; it focuses on how the various core CompUnit::Repository types behave and are specced.
The first thing that determines what module will be loaded is which CompUnit::Repository matches the requested identity first. The default repository chain will look something like this:
# EXAMPLE 1
$ raku -e '.say for $*REPO.repo-chain'
inst#/home/ugexe/.raku
inst#/home/ugexe/raku/install/share/perl6/site
inst#/home/ugexe/raku/install/share/perl6/vendor
inst#/home/ugexe/raku/install/share/perl6
The inst# prefix tells us this is a CompUnit::Repository::Installation. This is relevant because such a repo can contain multiple distributions -- including multiple versions of the same distribution -- which is not true of the single-distribution CompUnit::Repository::FileSystem used for -I. or -Ilib (which are really shorthand for -Ifile#/home/ugexe/repos/Foo and -Ifile#/home/ugexe/repos/Foo/lib).
# EXAMPLE 2
$ raku -I. -e '.say for $*REPO.repo-chain'
file#/home/ugexe/repos/Foo
inst#/home/ugexe/.raku
inst#/home/ugexe/raku/install/share/perl6/site
inst#/home/ugexe/raku/install/share/perl6/vendor
inst#/home/ugexe/raku/install/share/perl6
Let's assume the following:
file#/home/ugexe/repos/Foo contains Foo:ver<0.5>
inst#/home/ugexe/.raku contains Foo:ver<0.1> and Foo:ver<1.0>
inst#/home/ugexe/raku/install/share/perl6/site contains Foo:ver<2.0> and Foo:ver<0.1>
use Foo; will load:
EXAMPLE 1 - Foo:ver<1.0> from inst#/home/ugexe/.raku
EXAMPLE 2 - Foo:ver<0.5> from file#/home/ugexe/repos/Foo
Even though the highest version across all the repositories is Foo:ver<2.0>, the first repository in the chain that matches any version of Foo (i.e. use Foo) wins, so Foo:ver<2.0> is never chosen. You might guess this makes "highest version" the second thing that determines which version of a module gets loaded, but it's really the 4th! I mention it here anyway because for typical usage this is a sufficient mental model.
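If you do want a version that only a later repository holds, you can request it explicitly; a hypothetical illustration using the assumptions above:
# Only repositories containing Foo:ver<2.0> can match this identity,
# so the site repository wins even though it comes later in the chain.
use Foo:ver<2.0>;
# Version ranges also work, e.g. use Foo:ver<1.0+>;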
The 2nd thing that determines which version of a module gets loaded is the api field. This is essentially another version field that, when combined with the version itself, gives a basic way of pinning major versions.
Let's assume the following:
file#/home/ugexe/repos/Foo contains Foo:api<0>:ver<0.5>
inst#/home/ugexe/.raku contains Foo:api<1>:ver<0.1> and Foo:api<0>:ver<1.0>
use Foo; will load:
EXAMPLE 1 - Foo:api<1>:ver<0.1> from inst#/home/ugexe/.raku
EXAMPLE 2 - Foo:api<0>:ver<0.5> from file#/home/ugexe/repos/Foo
Even though in EXAMPLE 1 the highest version is Foo:api<0>:ver<1.0>, the highest api is Foo:api<1>, so Foo:api<1>:ver<0.1> is chosen.
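A minimal sketch of pinning via api, reusing the assumed repository contents:
# Pin to api 0: in EXAMPLE 1 this now loads Foo:api<0>:ver<1.0>
# instead of the higher-api Foo:api<1>:ver<0.1>.
use Foo:api<0>;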
The 3rd thing that determines which version of a module gets loaded is the auth field. Unlike api and ver it does not imply any sorting. And also unlike api and ver, you probably shouldn't be using it in e.g. your use Foo statement -- it is policy focused and serves as a power-tool/escape hatch that most developers should hopefully never have to worry about (ab)using.
Let's assume the following:
file#/home/ugexe/repos/Foo contains Foo:auth<github:ugexe>:ver<0.5>
inst#/home/ugexe/.raku contains Foo:ver<0.1> and Foo:auth<github:ugexe>:ver<1.0>
use Foo; will load:
EXAMPLE 1 - Foo:auth<github:ugexe>:ver<1.0> from inst#/home/ugexe/.raku
EXAMPLE 2 - Foo:auth<github:ugexe>:ver<0.5> from file#/home/ugexe/repos/Foo
In both examples use Foo; is the same as use Foo:auth(*):ver(*);, so even though one of the assumed repositories contains a module with no auth at all, that does not make it a closer match for use Foo;. The :auth(*) matches any auth value (effectively meaning auth is ignored altogether).
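In code, the equivalences look like this (a hedged sketch; only one of these use statements would appear in a real program):
use Foo;                       # implicit: any auth, any api, any version
# use Foo:auth(*):ver(*);      # the explicit equivalent
# use Foo:auth<github:ugexe>;  # only distributions with exactly this auth match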
For more examples, the spec tests are a good source.

I keep getting an error from the Vue compiler that I've never seen before

It seems to be one of my dependencies, but I have no clue what's going on here and I can't find anything on the internet.
(I clearly need to know more about webpack.)
Any help is much appreciated.
Weirdly enough, this only started happening when I cloned my repo on my work PC; I hadn't even made any changes.
This is the error I'm getting from my console:
95% emitting
WARNING Compiled with 3 warnings 2:06:09 PM
warning in C:/Users/1/Documents/GitHub/GFS/client/node_modules/webpack/buildin/global.js
There are multiple modules with names that only differ in casing.
This can lead to unexpected behavior when compiling on a filesystem with other case-semantic.
Use equal casing. Compare these module identifiers:
* C:\Users\1\Documents\GitHub\GFS\client\node_modules\webpack\buildin\global.js
Used by 2 module(s), i. e.
C:\Users\1\Documents\GitHub\GFS\client\node_modules\node-libs-browser\node_modules\punycode\punycode.js
* C:\Users\1\documents\github\gfs\client\node_modules\webpack\buildin\global.js
Used by 2 module(s), i. e.
C:\Users\1\documents\github\gfs\client\node_modules\vue\dist\vue.esm.js
warning in C:/Users/1/Documents/GitHub/GFS/client/node_modules/webpack/hot/emitter.js
There are multiple modules with names that only differ in casing.
This can lead to unexpected behavior when compiling on a filesystem with other case-semantic.
Use equal casing. Compare these module identifiers:
* C:\Users\1\Documents\GitHub\GFS\client\node_modules\webpack\hot\emitter.js
Used by 1 module(s), i. e.
C:\Users\1\Documents\GitHub\GFS\client\node_modules\webpack-dev-server\client\index.js?http://localhost:8082
* C:\Users\1\documents\github\gfs\client\node_modules\webpack\hot\emitter.js
Used by 1 module(s), i. e.
C:\Users\1\documents\github\gfs\client\node_modules\webpack\hot\dev-server.js
warning in C:/Users/1/Documents/GitHub/GFS/client/node_modules/webpack/hot/log.js
There are multiple modules with names that only differ in casing.
This can lead to unexpected behavior when compiling on a filesystem with other case-semantic.
Use equal casing. Compare these module identifiers:
* C:\Users\1\Documents\GitHub\GFS\client\node_modules\webpack\hot\log.js
Used by 1 module(s), i. e.
C:\Users\1\Documents\GitHub\GFS\client\node_modules\webpack\hot nonrecursive /^\.\/log$/
* C:\Users\1\documents\github\gfs\client\node_modules\webpack\hot\log.js
Used by 2 module(s), i. e.
C:\Users\1\documents\github\gfs\client\node_modules\webpack\hot\dev-server.js
As it turns out, all I had to do was re-navigate to the project dir from my terminal and make sure I was using the correct capitalization there.
(git bash sucks)
I used ../Documents/GitHub... instead of ../documents/github...
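In other words (paths hypothetical, based on the warnings above):
$ cd /c/users/1/documents/github/gfs/client   # wrong: webpack ends up seeing two casings
$ cd /c/Users/1/Documents/GitHub/GFS/client   # right: matches the on-disk casing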
Thank you, internet.
Y'all have a great day and thanks for the help.

TF-Agents: how to take into account invalid actions

I'm using the TF-Agents library for reinforcement learning,
and I would like to take into account that, for a given state,
some actions are invalid.
How can this be implemented?
Should I define an "observation_and_action_constraint_splitter" function when
creating the DqnAgent?
If yes: do you know any tutorial on this?
Yes, you need to define the function, pass it to the agent, and also appropriately change the environment output so that the function can work with it. I am not aware of any tutorials on this; however, you can look at this repo I have been working on.
Note that it is very messy, a lot of the files in there are not actually being used, and the docstrings are terrible and often wrong (I forked this and didn't bother to sort everything out). However, it is definitely working correctly. The parts that are relevant to your question are:
rl_env.py, in HanabiEnv.__init__, where the _observation_spec is defined as a dictionary of ArraySpecs (here). You can ignore game_obs, hand_obs and knowledge_obs, which are used to run the environment verbosely; they are not fed to the agent.
rl_env.py, in HanabiEnv._reset at line 110, gives an idea of how the timestep observations are constructed and returned from the environment. legal_moves are passed through np.logical_not, since my specific environment marks legal moves with 0 and illegal ones with -inf, whilst TF-Agents expects a 1/True for a legal move. My vector, when cast to bool, would therefore be the exact opposite of what TF-Agents expects.
These observations are then fed to the observation_and_action_constraint_splitter in utility.py (here), where a tuple containing the observations and the action constraints is returned. Note that game_obs, hand_obs and knowledge_obs are implicitly thrown away (and not fed to the agent, as previously mentioned).
Finally, this observation_and_action_constraint_splitter is fed to the agent in utility.py, in the create_agent function at line 198, for example.
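A minimal sketch of the two pieces described above (the observation keys and the commented-out constructor arguments are illustrative assumptions, not code from that repo):
import numpy as np
from tf_agents.agents.dqn import dqn_agent

# The mask flip: the env marks legal moves with 0 and illegal ones with -inf,
# while TF-Agents expects True/1 for a legal move.
raw = np.array([0.0, -np.inf, 0.0])
mask = np.logical_not(raw)            # array([ True, False,  True])

# The splitter: takes the full observation (here a dict) and returns a tuple of
# (observation fed to the network, mask of valid actions).
def observation_and_action_constraint_splitter(obs):
    return obs['observations'], obs['legal_moves']

# Passed to the agent at construction time (other arguments elided):
# agent = dqn_agent.DqnAgent(
#     time_step_spec, action_spec,
#     q_network=q_net, optimizer=optimizer,
#     observation_and_action_constraint_splitter=
#         observation_and_action_constraint_splitter,
# )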

Where is contains(Junction) defined?

This code works:
(3,6...66).contains( 9|21 ).say # OUTPUT: «any(True, True)␤»
And returns a Junction. It's also tested, but not documented.
The problem is I can't find its implementation anywhere. The Str code, which is also called from Cool, never returns a Junction (it does not take a Junction, either). There are no other contains methods in the source.
Since it's autothreaded, it's probably specially defined somewhere. I have no idea where, though. Any help?
TL;DR Junction autothreading is handled by a single central mechanism. I have a go at explaining it below.
(The body of your question starts with you falling into a trap, one I think you documented a year or two back. It seems pretty irrelevant to what you're really asking but I cover that too.)
How junctions get handled
Where is contains(Junction) defined? ... The problem is I can't find [the Junction] implementation anywhere. ... Since it's autothreaded, it's probably specially defined somewhere.
Yes. There's a generic mechanism that automatically applies autothreading to all P6 routines (methods, operators etc.) that don't have signatures that explicitly control what happens with Junction arguments.
Only a tiny handful of built-in routines have these explicit Junction-handling signatures -- print is perhaps the most notable. The same is true of user-defined routines.
.contains does not have any special handling. So it is handled automatically by the generic mechanism.
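A small illustration of the two cases (a hedged sketch; auto and explicit are made-up routine names):
# No Junction in the signature, so the call is autothreaded per eigenstate:
sub auto($x) { say "got $x" }
auto(9|21);      # OUTPUT: «got 9␤got 21␤» (order not guaranteed)

# An explicit Junction parameter opts out of autothreading; called exactly once:
sub explicit(Junction $j) { say "got {$j.raku}" }
explicit(9|21);  # OUTPUT: «got any(9, 21)␤»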
Perhaps the section The magic of Junctions of my answer to the earlier SO question Filtering elements matching two regexes will be helpful as a high-level description of the low-level details that follow below. Just substitute your 9|21 for the foo & bar in that answer, and your .contains for the grep, and it hopefully makes sense.
Spelunking the code
I'll focus on methods. Other routines are handled in a similar fashion.
method AUTOTHREAD does the work for full P6 methods.
This is set up in this code, which establishes handling for both nqp and full P6 code.
The above linked P6 setup code in turn calls setup_junction_fallback.
When a method call occurs in a user's program, it involves calling find_method (modulo cache hits, as explained in the comment above that code; note that the use of the word "fallback" in that comment refers to a cache miss -- which is technically unrelated to the other fallback mechanisms evident in the code we're spelunking through).
The bit of code near the end of this find_method handles (non-cache-miss) fallbacks.
Which arrives at find_method_fallback, which starts off with the actual junction handling stuff.
A trap
This code works:
(3,6...66).contains( 9|21 ).say # OUTPUT: «any(True, True)␤»
It "works" to the degree this does too:
(3,6...66).contains( 2 | '9 1' ).say # OUTPUT: «any(True, True)␤»
See Lists become strings, so beware .contains() and/or discussion of the underlying issues such as pmichaud's comment.
Routines like print, put, infix ~, and .contains are string routines. That means they coerce their arguments to Str. By default the .Str coercion of a listy value is its elements separated by spaces:
put 3,6...18; # 3 6 9 12 15 18
put (3,6...18).contains: '9 1'; # True
It's also tested
Presumably you mean the two tests with a *.contains argument passed to classify:
my $m := @l.classify: *.contains: any 'a'..'f';
my $s := classify *.contains( any 'a'..'f'), @l;
Routines like classify are list routines. While some list routines do a single operation on their list argument/invocant, e.g. push, most of them, including classify, iterate over their list, doing something with/to each element within the list.
Given a sequence invocant/argument, classify will iterate it and pass each element to the test, in this case a *.contains.
The latter will then coerce individual elements to Str. This is a fundamental difference compared to your example, which coerces the whole sequence to Str in one go.
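A short sketch of that difference, reusing the trap's sequence:
say (3,6...18).contains('9 1');          # True -- whole sequence coerced to '3 6 9 12 15 18'
say (3,6...18).grep(*.contains('9 1'));  # () -- each element coerced to Str individually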

How to find an example instance of a cyclic dependency in jQAssistant?

I found the output of the dependency:packageCycles constraint shipped with jQAssistant hard to interpret. Specifically, I'm keen on finding an example instance of the classes that make up the cyclic dependency.
Given I found a cycle of packages, for each pair of adjacent packages I would need to find two classes that connect them.
This is my first attempt at the Cypher query, but there are still some relevant parts missing:
MATCH nodes = (p1:Package)-[:DEPENDS_ON]->(p2:Package)-[:DEPENDS_ON*]->(p1)
WHERE p1 <> p2
WITH extract(x IN relationships(nodes) |
     (:Type)<--(:Package)-[x]->(:Package)-->(:Type)) AS cs
RETURN cs
Specifically, in order to really connect the two packages, the two types should be related to each other with DEPENDS_ON as shown below:
(:Type)<--(:Package)-[x]->(:Package)-->(:Type)
   |                                      ^
   |              DEPENDS_ON              |
   +--------------------------------------+
For the above pattern I would have to return the two types (and not the packages, for instance). Preferably the output for a single cyclic dependency consists of a list of qualified class names (otherwise one cannot possibly distinguish the class chains of more than one cyclic dependency).
For this specific purpose I find Cypher to be very limited: support for identifying and collecting new graph patterns during path traversal does not seem to be the easiest thing to do. Also, my attempts to give names to the (:Type) nodes resulted in syntax errors.
I also messed around a lot with UNWIND, but to no avail. It lets you introduce new MATCH clauses on a per-element basis (say, for the elements of relationships(nodes)), but I do not know of a way to undo the damaging effect of UNWIND: the surrounding list structure is removed, so the traces of multiple cyclic dependencies merge into each other. Additionally, the results appear permuted to me. That being said, the query below is conceptually very close to what I am trying to achieve, but does not work:
MATCH nodes = (p1:Package)-[:DEPENDS_ON]->(p2:Package)-[:DEPENDS_ON*]->(p1)
WHERE p1 <> p2
WITH relationships(nodes) as rel
UNWIND rel AS x
MATCH (t0:Type)<-[:CONTAINS]-(:Package)-[x]->(:Package)-[:CONTAINS]->(t1:Type),
(t0)-[:DEPENDS_ON]->(t1)
RETURN t0.fqn, t1.fqn
I do appreciate that there seems to be some scripting support within jQAssistant. However, that would really be my last resort, since it is surely more difficult to maintain than a Cypher query.
To rephrase it: given a path, I'm looking for a method to identify a sub-pattern for each element, project a node out of that match, and collect the results.
Do you have any ideas on how could one accomplish that with Cypher?
Edit #1: Within one package, one also has to consider that the class that is the target of the inbound DEPENDS_ON edge may not be the same class that is the source of the outgoing edge. In other words, as a result:
two classes of the same package may be part of the trace
if one wanted to express the cyclic dependency trace as a path, one must take into account detours that navigate to classes in the same package. For instance (edges in bold mark package entry/exit; an edge of type DEPENDS_ON is absent between the two types):
-[:DEPENDS_ON]->(:Type)<-[:CONTAINS]-(:Package)-[:CONTAINS]->(:Type)-[:DEPENDS_ON]->
Maybe it gets a little clearer using the following picture: [picture of packages a, b and c forming a cycle, with types TestA, TestB1, TestB2 and TestC]
Clearly "a, b, c" is a package cycle and "TestA, TestB1, TestB2, TestC" is a type-level trace justifying the package-level dependency.
The following query extends the package cycle constraint by drilling down to the type level:
MATCH
(p1:Package)-[:DEPENDS_ON]->(p2:Package),
path=shortestPath((p2)-[:DEPENDS_ON*]->(p1))
WHERE
p1 <> p2
WITH
p1, p2
MATCH
(p1)-[:CONTAINS]->(t1:Type),
(p2)-[:CONTAINS]->(t2:Type),
(p1)-[:CONTAINS]->(t3:Type),
(t1)-[:DEPENDS_ON]->(t2),
path=shortestPath((t2)-[:DEPENDS_ON*]->(t3))
RETURN
p1.fqn, t1.fqn, EXTRACT(t IN nodes(path) | t.fqn) AS Cycle
I'm not sure how well the query will work on large projects, we'll need to give it a try.
Edit 1: Updated the query to match on any type t3 which is located in the same package as t1.
Edit 2: The answer is not correct; see the discussion below.

Find unused cases of an enum (Objective-C / Swift)

We’ve just migrated a couple of thousand localised strings in an iOS project from an old struct to an enum. We’d now like to find any which are unused.
I’m looking for a way to find any cases of an enum which are not used anywhere within my project, short of searching the project for them one by one.
We have the strings in Objective-C and Swift versions, so either will work.
Any ideas?
About your only option is to comment out each enum value and see which ones result in an error. The ones that don't aren't being used.
If you have a lot of enum values, comment them out in batches of 10 or 15. Do a compile. Scan the errors and uncomment the values reported in an error. This leaves the unused enum values commented out.
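For example, with a hypothetical enum:
enum LocalizedKey: String {
    case welcomeTitle = "welcome_title"
//  case legacyBanner = "legacy_banner"  // commented out; rebuild and see if any errors appear
    case settingsLabel = "settings_label"
}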
There is a way to automate this. Recently I was asked to search for unused endpoints in a large project; to make this semi-automatic:
1- run this grep command to search for used endpoints:
grep -r --include='*.swift' "EndpointEnum"
2- use a text editor (I used Sublime) to sort the matches, make them unique, and save them into used.txt
3- add all the enum values to all.txt
4- diff the two files, and keep the lines starting with a "-"
This will give you the unused enum values.
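A hedged sketch of those steps as shell commands (EndpointEnum and the file names are placeholders, and comm replaces the manual diff in the last step):
# 1. collect every EndpointEnum.someCase usage, deduplicated and sorted
grep -rhoE --include='*.swift' 'EndpointEnum\.[A-Za-z0-9_]+' . | sort -u > used.txt
# 2. put every case, in the same EndpointEnum.case form, into all.txt (sorted)
# 3. lines present only in all.txt are the unused cases
comm -13 used.txt all.txt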