I am working on fairly large Mathematica projects, and I often need to intermittently check numerical results while being able to easily revert all my constructs to analytical form.
The code is fairly fluid, so I don't want to use scoping constructs everywhere, as they add work overhead. Is there an easy way to identify and clear all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
  HoldPattern[lhs_ = rhs_?NumericQ] |
    HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
  3]
This will work if you have sufficient history length ($HistoryLength, which defaults to Infinity). Note, however, that in the above example e was assigned 3 + c, and that 3 is not undone by the above. So the problem as formulated is really ambiguous, because some numbers may make it into other definitions. One way to avoid this is to use SetDelayed for assignments, rather than Set.
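To illustrate the SetDelayed point with the assignments above: e := d + b keeps the definition symbolic, so no number is baked into e, and unsetting d fully restores the analytical form:

d = 3; b = c;
e := d + b;
e  (* 3 + c *)
Unset[d];
e  (* c + d *)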
Another alternative would be to analyze the names in, say, the Global` context (if that is the context where your symbols live), and then the OwnValues and DownValues of those symbols, in a fashion similar to the above, removing definitions with a purely numerical r.h.s.
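A sketch of that approach, no more robust than the above (clearNumericOwnValues is a name I made up; ToExpression's third argument keeps each symbol unevaluated while its OwnValues are inspected and cleared):

clearNumericOwnValues[] :=
 Scan[
  Function[name,
   If[MatchQ[ToExpression[name, InputForm, OwnValues],
     {_ :> _?NumericQ}],
    ToExpression[name, InputForm, Unset]]],
  Names["Global`*"]]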
But IMO neither of these approaches is robust. I'd still use scoping constructs and try to isolate the numerics. One possibility is to wrap your final code in Block, and assign numerical values inside this Block. This seems a much cleaner approach. The work overhead is minimal: you just have to remember which symbols you want to assign the values to. Block will automatically ensure that, outside it, the symbols have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a -> 1; rule[d] = d -> 3 instead of the assignments above. You could then apply these rules, extracting them as, say, DownValues[rule][[All, 2]], whenever you want to test with numerical values.
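For example, reusing rule[a] and rule[d] as just defined:

rule[a] = a -> 1; rule[d] = d -> 3;
a + d /. DownValues[rule][[All, 2]]
(* 4 *)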
Building on Andrew Moylan's solution, one can construct a Block-like function that takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable, and use BlockRules[ savedrules, code ], or even define a function that would apply a fixed set of rules, kind of like so:
In[76]:= NumericCheck =
Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];
In[78]:= a + b // NumericCheck
Out[78]= 5.
EDIT: In response to Timo's comment, it might be possible to use NotebookEvaluate (new in Mathematica 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
nb = CreateDocument[{ExpressionCell[
Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]],
"Input"]}];
BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> True];]
As the result of this evaluation, you get a notebook with your commands evaluated while a was locally set to 4. To take this further, you would have to take the notebook with your code, open a new notebook, evaluate Notebooks[] to identify the notebook object of interest, and then do:
BlockRules[variablerules,
NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
InsertResults -> True]]
I hope you can make this idea work.
I am looking for an SWI-Prolog predicate that can form a set union with variables as parameters. My aim is to build the union first and only bind the parameters later in the source code.
That is, I have some predicate union such that the call union(A, B, A_UNION_B) makes sense. More concretely, the call:
union(A, [1,2], C), A=[3].
would give me as result
C = [3, 1, 2].
(What you call union/3 is most probably just concatenation, so I will use append/3 to keep this answer short.)
What you expect is impossible without delayed goals or constraints. To see this, consider the following failure-slice
?- append(A, [1,2], C), false, A=[3].
loops, unexpected. % observed, but for us unexpected
false. % expected, but not the case
This query must terminate, in order to make the entire question useful. But there are infinitely many lists of different length for A. So in order to describe all possible solutions, we would need infinitely many answer substitutions, like
?- append(A, [1,2], C).
A = [], C = [1,2]
; A = [_A], C = [_A,1,2]
; A = [_A,_B], C = [_A,_B,1,2]
; A = [_A,_B,_C], C = [_A,_B,_C,1,2]
; ... .
The only way around is to describe that set of solutions with finitely many answers. One possibility could be:
?- when((ground(A);ground(C)), append(A,B,C)).
when((ground(A);ground(C)),append(A,B,C)).
Essentially it reads: Yes, the query is true, provided the query is true.
While this solves your exact problem, it will now delay many otherwise succeeding goals; think of A = [X], B = [].
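For example, this goal stays delayed although append([_X], [], C) would succeed deterministically with C = [_X] (residual goal shown in the style above; exact rendering varies between systems):

?- when((ground(A);ground(C)), append(A,B,C)), A = [_X], B = [].
A = [_X], B = [],
when((ground([_X]);ground(C)), append([_X],[],C)).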
A more elaborate version could provide more complex tests. But it would require a somewhat different definition than that of append/3. Some systems like SICStus Prolog provide block declarations to handle this more smoothly (SWI has a coarse emulation of them).
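A sketch with a SICStus-style block declaration (app/3 is just append/3 renamed to avoid the built-in; SWI offers an emulation in its SICStus compatibility libraries). The call suspends only while both the first and the third argument are unbound, so after A = [X] the suspended goal resumes even though C is still unbound:

:- block app(-, ?, -).

app([], Bs, Bs).
app([A|As], Bs, [A|Cs]) :-
   app(As, Bs, Cs).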
So it is possible to make this even better, but the question remains whether or not this makes much sense. After all, debugging delayed goals becomes more and more difficult with larger programs.
In many situations it is preferable to prevent this and produce an instantiation error in its stead as iwhen/2 does:
?- iwhen((ground(A);ground(C)),append(A,B,C)).
error(instantiation_error,iwhen/2).
That error is not the nicest answer possible, but at least it is not incorrect. It says: You need to provide more instantiations.
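For reference, iwhen/2 behaves roughly like this minimal sketch: run the goal if the condition already holds, otherwise throw an instantiation error rather than delaying:

iwhen(Condition, Goal) :-
   (  Condition
   -> Goal
   ;  throw(error(instantiation_error, iwhen/2))
   ).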
If you really want to solve this problem for the general case, you have to delve into E-unification. That is an area with the most trivial problem statements and extremely evolved answers. Often, just decidability is non-trivial, let alone an effective algorithm. For your particular question, either ACI (for sets) or ANlr (for concatenation) are of interest. ACI requires solving Diophantine equations, and associative unification alone is even more complex than that. I am unaware of any implementation for a Prolog system that solves the general problem.
Prolog IV offered an associative infix operator for concatenation but simply delayed more complex cases. So debugging these remains non-trivial.
I want to make a hash of sets. Well, SetHashes, since they need to be mutable.
In fact, I would like to initialize my Hash with multiple identical copies of the same SetHash.
I have an array containing the keys for the new hash: @keys
And I have my SetHash already initialized in a scalar variable: $set
I'm looking for a clean way to initialize the hash.
This works:
my %hash = ({ $_ => $set.clone } for @keys);
(The parens are needed for precedence; without them, the assignment to %hash is part of the body of the for loop. I could change it to a non-postfix for loop or make any of several other minor changes to get the same result in a slightly different way, but that's not what I'm interested in here.)
Instead, I was kind of hoping I could use one of Raku's nifty hyper-operators, maybe like this:
my %hash = @keys »=>» $set;
That expression works a treat when $set is a simple string or number, but with a SetHash I get:
Array >>=>>> SetHash can never work reliably: order of keys in SetHash is indeterminate
Good to know, but I don't want it to hyper over the RHS, in any order. That's why I used the right-pointing version of the hyperop: so it would instead replicate the RHS as needed to match it up to the LHS. In this sort of expression, is there any way to say "Yo, Raku, treat this as a scalar. No, really."?
I tried an explicit Scalar wrapper (which would make the values harder to get at, but it was an experiment):
my %map = @keys »=>» $($set,)
And that got me this message:
Lists on either side of non-dwimmy hyperop of infix:«=>» are not of the same length while recursing
left: 1 elements, right: 4 elements
So it has apparently recursed into the list on the left and found a single key and is trying to map it to a set on the right which has 4 elements. Which is what I want - the key mapped to the set. But instead it's mapping it to the elements of the set, and the hyperoperator is pointing the wrong way for that combination of sizes.
So why is it recursing on the right at all? I thought a Scalar container would prevent that. The documentation says it prevents flattening; how is this recursion not flattening? What's the distinction being drawn?
The error message says the version of the hyperoperator I'm using is "non-dwimmy", which may explain why it's not in fact doing what I mean, but is there maybe an even-less-dwimmy version that lets me be even more explicit? I still haven't gotten my brain aligned well enough with the way Raku works for it to be able to tell WIM reliably.
I'm looking for a clean way to initialize the hash.
One idiomatic option:
my %hash = @keys X=> $set;
See X metaoperator.
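A quick demonstration (key and element values are hypothetical; note that, unlike the .clone version in the question, every key here maps to the very same SetHash object):

my $set  = SetHash.new(<a b c>);
my @keys = <x y z>;
my %hash = @keys X=> $set;
say %hash<y>;  # SetHash(a b c) -- element order may vary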
The documentation says ... a Scalar container ... prevents flattening; how is this recursion not flattening? What's the distinction being drawn?
A cat is an animal, but an animal is not necessarily a cat. Flattening may act recursively, but some operations that act recursively don't flatten. Recursive flattening stops if it sees a Scalar. But hyperoperation isn't flattening. I get where you're coming from, but this is not the real problem, or at least not a solution.
I had thought that hyperoperation had two tests controlling recursing:
Is it hyperoperating a nodal operation (eg .elems)? If so, just apply it like a parallel shallow map (so don't recurse). (The current doc quite strongly implies that nodal can only be usefully applied to a method, and only a List one (or augmentation thereof) rather than any routine that might get hyperoperated. That is much more restrictive than I was expecting, and I'm sceptical of its truth.)
Otherwise, is a value Iterable? If so, then recurse into that value. In general the value of a Scalar automatically behaves as the value it contains, and that applies here. So Scalars won't help.
A SetHash doesn't do the Iterable role. So I think this refusal to hyperoperate with it is something else.
I just searched the source and that yields two matches in the current Rakudo source, both in the Hyper module, with this one being the specific one we're dealing with:
multi method infix(List:D \left, Associative:D \right) {
    die "{left.^name} $.name {right.^name} can never work reliably..."
}
For some reason hyperoperation explicitly rejects use of Associatives on either the right or left when coupled with the other side being a List value.
Having pursued the "blame" (tracking who made what changes) I arrived at the commit "Die on Associative <<op>> Iterable" which says:
This can never work due to the random order of keys in the Associative.
This used to die before, but with a very LTA error about a Pair.new()
not finding a suitable candidate.
Perhaps this behaviour could be refined so that the determining factor is, first, whether an operand does the Iterable role, and then if it does, and is Associative, it dies, but if it isn't, it's accepted as a single item?
A search for "can never work reliably" in GH/rakudo/rakudo issues yields zero matches.
Maybe file an issue? (Update I filed "RFC: Allow use of hyperoperators with an Associative that does not do Iterable role instead of dying with "can never work reliably".)
For now we need to find some other technique to stop a non-Iterable Associative being rejected. Here I use a Capture literal:
my %hash = @keys »=>» \($set);
This yields: {a => \(SetHash.new("b","a","c")), b => \(SetHash.new("b","a","c")), ....
Adding a custom op unwraps en passant:
sub infix:« my=> » ($lhs, $rhs) { $lhs => $rhs[0] }
my %hash = @keys »my=>» \($set);
This yields the desired outcome: {a => SetHash(a b c), b => SetHash(a b c), ....
my %hash = ({ $_ => $set.clone } for @keys);
(The parens seem to be needed so it can tell that the curlies are a block instead of a Hash literal...)
No. That particular code in curlies is a Block regardless of whether it's in parens or not.
More generally, Raku code of the form {...} in term position is almost always a Block.
For an explanation of when a {...} sequence is a Hash, and how to force it to be one, see my answer to the Raku SO Is that a Hash or a Block?.
Without the parens you've written this:
my %hash = { block of code } for @keys
which attempts to iterate #keys, running the code my %hash = { block of code } for each iteration. The code fails because you can't assign a block of code to a hash.
Putting parens around the ({ block of code } for #keys) part completely alters the meaning of the code.
Now it runs the block of code for each iteration. And it concatenates the result of each run into a list of results, each of which is a Pair generated by the code $_ => $set.clone. Then, when the for iteration has completed, that resulting list of pairs is assigned, once, to my %hash.
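For comparison, one of those slightly different ways mentioned in the question, which avoids the precedence question entirely, is a map call:

my %hash = @keys.map({ $_ => $set.clone });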
I'm trying to solve a non-deterministic problem with DEAP. The problem is that the module only evaluates new chromosomes and uses the scores kept in memory for old ones.
How can I set up the module so that at each generation the ENTIRE population is evaluated, not just the new chromosomes?
Thanks
I don't think the DEAP package can do that.
You can implement the algorithm on your own or look for other packages.
Anyway, check out my library, which contains most of the state-of-the-art meta-heuristic algorithms. It also evaluates the entire population in each generation.
https://github.com/thieunguyen5991/mealpy
You can modify 2-3 lines in your currently chosen algorithm, as below, to force evaluation of all items. This can be done by copying the function from the source into your local script and then editing the check that flags invalid individuals before evaluation. Make sure that in main you call the local eaSimple and not algorithms.eaSimple, to make the switch to the local code.
If you are using eaSimple or eaMuPlusLambda, for example, you can find that function here in this file:
https://github.com/DEAP/deap/blob/master/deap/algorithms.py#L85
The generation-0 case here may not need to change (but you can change it anyway, unless your individuals already come with a fitness and you want to skip their evaluation):
#(line 149 in above URL)
invalid_ind = [ind for ind in population if not ind.fitness.valid]
And then inside the generational process loop:
#(line 171 in url above):
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
Removing the invalid check will result in all items being passed to evaluation:
invalid_ind = [ind for ind in population] #149
...
invalid_ind = [ind for ind in offspring] #171
But keep the algorithms import! Note that you also need to change varAnd (in the eaSimple case) to algorithms.varAnd to prevent a break:
offspring = algorithms.varAnd(offspring, toolbox, cxpb, mutpb)
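Putting those edits together, a condensed local variant could look like the sketch below (following the shape of eaSimple in deap/algorithms.py, with statistics and hall-of-fame handling omitted; ea_simple_reevaluate is a made-up name):

from deap import algorithms

def ea_simple_reevaluate(population, toolbox, cxpb, mutpb, ngen):
    # generation 0: evaluate everyone, not just individuals with invalid fitness
    for ind, fit in zip(population, toolbox.map(toolbox.evaluate, population)):
        ind.fitness.values = fit
    for gen in range(1, ngen + 1):
        offspring = toolbox.select(population, len(population))
        # keep the algorithms-qualified call, as noted above
        offspring = algorithms.varAnd(offspring, toolbox, cxpb, mutpb)
        # re-evaluate the ENTIRE offspring, ignoring any cached fitness
        for ind, fit in zip(offspring, toolbox.map(toolbox.evaluate, offspring)):
            ind.fitness.values = fit
        population[:] = offspring
    return population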
I am currently developing my own programming language. The codebase (in Lua) is composed of several modules, as follows:
The first, error.lua, has no dependencies;
lexer.lua depends only on error.lua;
prototypes.lua also has no dependencies;
parser.lua, instead, depends on all the modules above;
interpreter.lua is the fulcrum of the whole codebase. It depends on error.lua, parser.lua, and memory.lua;
memory.lua depends on functions.lua;
finally, functions.lua depends on memory.lua and interpreter.lua. It is required from inside memory.lua, so we can say that memory.lua also depends on interpreter.lua.
With "A depends on B" I mean that the functions declared in A need those declared in B.
The real problem, though, is when A depends on B which depends on A, which, as you can understand from the list above, happens quite frequently in my code.
To give a concrete example of my problem, here's what interpreter.lua looks like:
--first, I require the modules that DON'T depend on interpreter.lua
local parser, Error = table.unpack(require("parser"))
--(since error.lua is needed both in the lexer, parser and interpreter module,
--I only actually require it once in lexer.lua and then pass its result around)
--Then, I should require memory.lua. But since memory.lua and
--functions.lua need some functions from interpreter.lua to work, I just
--forward declare the variables those functions need, and then define the functions themselves:
--forward declaration
local globals, new_memory, my_nil, interpret_statement
--functions I need to declare before requiring memory.lua
local function interpret_block()
--uses interpret_statement and new_memory
end
local function interpret_expression()
--uses new_memory, Error and my_nil
end
--Now I can safely require memory.lua:
globals, new_memory, my_nil = require("memory.lua")(interpret_block, interpret_espression)
--(I'll explain why it returns a function to call later)
--Then I have to fulfill the forward declaration of interpret_statement:
function interpret_statement()
--uses interpret_expression, new_memory and Error
end
--finally, the result is a function
return function()
--uses parser, new_memory and globals
end
The memory.lua module returns a function so that it can receive interpret_block and interpret_expression as arguments, like this:
--memory.lua
return function(interpret_block, interpret_expression)
--declaration of globals, new_memory, my_nil
return globals, new_memory, my_nil
end
Now, I got the idea of the forward declarations here and that of the functions-as-modules (like in memory.lua, to pass some functions from the requiring module to the required module) here. They're both great ideas, and I must say that they work well. But you pay in readability.
In fact, breaking the code into smaller pieces this time made my work harder than it would have been if I had coded everything in a single file, which is impossible for me because it's over 1000 lines of code and I'm coding from a smartphone.
The feeling I have is that of working with spaghetti code, only on a larger scale.
So how could I solve the problem of my code being hard to understand because some modules need each other to work (without making all the variables global, of course)? How would programmers in other languages solve this problem? How should I reorganize my modules? Are there any standard rules for using Lua modules that could also help with this problem?
If we look at your Lua files as a directed graph, where an edge points from a dependency to its user, the goal is to modify your graph to be a tree or forest, since you intend to get rid of the cycles.
A cycle is a set of nodes which, traversed in the direction of the edges, can reach the starting node.
Now, the question is how to get rid of cycles?
The answer looks like this:
Let's consider a node N, and let {D1, D2, ..., Dm} be its direct dependencies. If there is no Di in that set that depends on N either directly or indirectly, then you can leave N as it is. In that case, the set of problematic dependencies is empty: {}.
However, what if you have a non-empty set, like {PD1, ..., PDk}?
You then need to analyze each PDi (for i between 1 and k) along with N, and see which subset of each PDi does not depend on N and which subset of N does not depend on any PDi. This way you can split out N_base from N and PDi_base from each PDi: N depends on N_base, as do all the PDi, and each PDi depends on its PDi_base along with N_base.
This approach minimizes the cycles in the dependency graph. However, it is quite possible that a set of functions {f1, ..., fl} exists which cannot be migrated into a _base module as discussed, because of its dependencies, so some cycles remain. In that case you need to give the group in question a name, create a module for it, and migrate all those functions into that module.
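In Lua terms, applied to the interpreter/memory cycle from the question, the _base extraction could look like this sketch (module names are hypothetical):

-- interpreter_base.lua: the part of the interpreter that memory.lua
-- actually needs; it depends on neither of the other modules
local M = {}
function M.interpret_block(block) --[[ ... ]] end
function M.interpret_expression(expr) --[[ ... ]] end
return M

-- memory.lua: now a plain module, no function-as-module trick needed
local base = require("interpreter_base")
-- ...definitions that use base.interpret_block and base.interpret_expression...

-- interpreter.lua: depends on both modules, and no cycle remains
local base = require("interpreter_base")
local memory = require("memory")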
Suppose I've defined a list of variables
{a,b,c} = {1,2,3}
If I want to double them all I can do this:
{a,b,c} *= 2
The variables {a,b,c} now evaluate to {2,4,6}.
If I want to apply an arbitrary transformation function to them, I can do this:
{a,b,c} = f /# {a,b,c}
How would you do that without specifying the list of variables twice?
(Set aside the objection that I'd probably want an array rather than a list of distinctly named variables.)
You can do this:
Function[Null, # = f /@ #, HoldAll][{a, b, c}]
For example,
In[1]:=
{a,b,c}={1,2,3};
Function[Null, # = f /@ #, HoldAll][{a,b,c}];
{a,b,c}
Out[3]= {f[1],f[2],f[3]}
Or, you can do the same without hard-coding f, by defining a custom set function instead. The effect of a foreach loop can be reproduced easily if you give set the Listable attribute:
ClearAll[set];
SetAttributes[set, {HoldFirst, Listable}]
set[var_, f_] := var = f[var];
Example:
In[10]:= {a,b,c}={1,2,3};
set[{a,b,c},f1];
{a,b,c}
Out[12]= {f1[1],f1[2],f1[3]}
You may also want to get speed benefits for cases when your f is Listable. This is especially relevant now, since Mathematica 8's Compile enables user-defined functions to benefit from being Listable in terms of speed, in a way that previously only built-in functions could. All you have to do to adapt set for such cases (when you are after speed and you know that f is Listable) is to remove the Listable attribute of set.
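For example, with the Listable attribute removed from set, a Listable f transforms the whole list in a single call (a small illustration; f2 is a made-up Listable function):

ClearAll[set];
SetAttributes[set, HoldFirst];
set[var_, f_] := var = f[var];
f2 = Function[x, x^2, Listable];
{a, b, c} = {1, 2, 3};
set[{a, b, c}, f2];
{a, b, c}  (* {1, 4, 9} *)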
I hit upon an answer to this when fixing up this old question: ForEach loop in Mathematica
Defining the each function as in the accepted answer to that question, we can answer this question with:
each[i_, {a,b,c}, i = f[i]]
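For reference, a definition of each in the spirit of that answer (a sketch; see the linked question for the original). The HoldAll attributes and Unevaluated keep the variables unevaluated, so the body can assign back to them:

SetAttributes[each, HoldAll];
each[pat_, list_, body_] :=
  Scan[
   Function[e, Replace[Unevaluated[e], pat :> body], HoldAll],
   Unevaluated[list]];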