How can I force two Jupyter sliders to interact with one another (non-trivially)? Is "tag" available for the handler?

I want to create two ipywidget sliders, say one with value x and the other with value 1-x. When I change one slider, the other should be updated automatically. I am trying to use observe for the callback. I see that I could use owner and description to identify which slider was modified, but I don't think description is supposed to be used for this purpose; after all, description does not need to be unique in the first place. I wonder if I am missing something here.
from ipywidgets import widgets
from IPython.display import display

x = 0.5
a = widgets.FloatSlider(min=0, max=1, description='a', value=x)
b = widgets.FloatSlider(min=0, max=1, description='b', value=1 - x)
display(a, b)

def on_value_change(change):
    # Identify which slider changed by parsing its repr -- this is the part that feels wrong
    if str(change['owner']).split("'")[1] == 'a':
        exec('b.value=' + str(1 - change['new']))
    else:
        exec('a.value=' + str(1 - change['new']))

a.observe(on_value_change, names='value')
b.observe(on_value_change, names='value')

I'm a beginner with widgets, but I ran into the same question earlier and couldn't find a solution. I pieced together several sources and came up with something that seems to work.
Here's a model example of two sliders maintaining the relationship 100 = a + b, one slider representing a and the other b:
caption = widgets.Label(value='If 100 = a + b :')
a, b = (widgets.FloatSlider(description='a (=100-b)'),
        widgets.FloatSlider(description='b (= 100-a)'))

def my_func1(a):
    # b as a function of a
    return 100 - a

def my_func2(b):
    # a as a function of b
    return 100 - b

l1 = widgets.dlink((a, 'value'), (b, 'value'), transform=my_func1)
l2 = widgets.dlink((b, 'value'), (a, 'value'), transform=my_func2)
display(caption, a, b)
To explain, as best as I understand it: the key is to set up a directional link in each direction between the two sliders, and to provide a transform function for the math in each direction.
i.e.:
l1 = widgets.dlink((a, 'value'), (b, 'value'), transform=my_func1)
What that says is: .dlink((a, 'value'), (b, 'value'), transform=my_func1) means "the value of a is the input used to determine the value of b (the output)", and "the function describing b as a function of a is my_func1".
With the links described, you just need to define the aforementioned functions.
The function pertaining to directional link l1 is:
def my_func1(a):
    # defining b as a function of a
    return 100 - a
Likewise (but in reverse), l2 is the 'vice versa' to l1, and my_func2 the 'vice versa' to my_func1.
I found this to work better for learning purposes than the fairly common approach of using a listener (a.observe or b.observe), which passes details about a slider's state change (e.g. the new value) to a handler as a dictionary-like argument (change) that you then index for the variable assignments.
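For comparison, here is a minimal sketch of that observe-based approach (my own illustration, not part of the original answer); temporarily unobserving before writing guards against the two handlers re-triggering each other:
from ipywidgets import widgets
from IPython.display import display

a = widgets.FloatSlider(min=0, max=100, description='a')
b = widgets.FloatSlider(min=0, max=100, description='b', value=100)

def on_a_change(change):
    # Detach b's handler so setting b.value doesn't fire it back at a.
    b.unobserve(on_b_change, names='value')
    b.value = 100 - change['new']
    b.observe(on_b_change, names='value')

def on_b_change(change):
    a.unobserve(on_a_change, names='value')
    a.value = 100 - change['new']
    a.observe(on_a_change, names='value')

a.observe(on_a_change, names='value')
b.observe(on_b_change, names='value')
display(a, b)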
Good luck, hope that helps! More info at https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html#Linking-Widgets

Related

Best way to insert values of 3D array inside of another larger array

There must be some 'Pythonic' way to do this, but I don't think np.place, np.insert, or np.put are what I'm looking for. I want to replace the values inside a large 3D array A with those of a smaller 3D array B, starting at location [i,j,k] in the larger array. (A drawing illustrating this accompanied the original post.)
I want to type something like A[i+, j+, k+] = B, or np.embed(B, A, (i,j,k)), but of course those are not right.
EDIT: Oh, there is this. So I should modify the question to ask if this is the best way (where "best" means fastest for a 500x500x50 array of floats on a laptop):
s0, s1, s2 = B.shape
A[i:i+s0, j:j+s1, k:k+s2] = B
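Since "best" here means fastest, one quick way to time that assignment (my own sketch, with made-up block sizes) could be:
import numpy as np
import timeit

A = np.zeros((500, 500, 50))
B = np.random.rand(100, 100, 20)
i, j, k = 10, 20, 5
s0, s1, s2 = B.shape

def assign():
    A[i:i + s0, j:j + s1, k:k + s2] = B

# Average seconds per assignment over 100 runs.
print(timeit.timeit(assign, number=100) / 100)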
Your edited answer looks fine for the 3D case.
If you want the "embed" function you mentioned in the original post, for arrays of any number of dimensions, the following should work:
import numpy as np

def embed(small_array, big_array, big_index):
    """Overwrite values in big_array, starting at big_index, with those in small_array."""
    # Build one slice per dimension, then index with a tuple (a list of slices is deprecated).
    slices = tuple(np.s_[i:i + j] for i, j in zip(big_index, small_array.shape))
    big_array[slices] = small_array
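For instance, a quick sanity check with made-up sizes (my own example) might look like:
A = np.zeros((500, 500, 50))
B = np.ones((10, 20, 5))
embed(B, A, (100, 200, 10))   # overwrite a 10x20x5 block of A starting at [100, 200, 10]
assert A[100:110, 200:220, 10:15].sum() == B.size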
It may be worth noting that it's not obvious how embed should behave when big_array has more dimensions than small_array. For example, one might want a 1:1 mapping from small_array elements to overwritten elements of big_array (equivalent to adding extra length-1 dimensions to small_array to bring its ndim up to that of big_array), or one might want small_array to broadcast out to fill the remainder of big_array along the "missing" dimensions. You may want to avoid calling the function in those cases, or tweak it to ensure it does what you want there.

Arrays with attributes in Julia

I am taking my first steps in Julia, and I would like to reproduce something I achieved with numpy.
I would like to write a new array-like type which is essentially a vector of elements of arbitrary type and, to keep the example simple, a scalar attribute such as the sampling frequency fs.
I started with something like
type TimeSeries{T} <: DenseVector{T}
    data::Vector{T}
    fs::Float64
end
Ideally, I would like:
1) all methods that take a Vector{T} as an argument to also work on TimeSeries{T},
e.g.:
ts = TimeSeries([1,2,3,1,543,1,24,5], 12.01)
median(ts)
2) that indexing a TimeSeries always returns a TimeSeries:
ts[1:3]
3) built-in functions that return a Vector to return a TimeSeries:
ts * 2
ts + [1,2,3,1,543,1,24,5]
I have started by implementing size, getindex and so on, but I do not see how it could be possible to achieve points 2 and 3.
numpy has a quite comprehensive way of doing this: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html. R also seems to allow attaching attributes to arrays via attr()<-.
Do you have any idea about the best strategy to implement this sort of "array with attributes"?
Maybe I'm not understanding, but for, say, point 3, why is it not sufficient to do
(*)(ts::TimeSeries, n) = TimeSeries(ts.data*n, ts.fs)
(+)(ts::TimeSeries, n) = TimeSeries(ts.data+n, ts.fs)
As for point 2
Base.getindex(ts::TimeSeries, r::Range) = TimeSeries(ts.data[r], ts.fs)
Or are you asking for some easier way to delegate all these operations to the internal vector? You can do clever things like
for op in (:(+), :(*))
    @eval $(op)(ts::TimeSeries, x) = TimeSeries($(op)(ts.data, x), ts.fs)
end

Clearing numerical values in Mathematica

I am working on fairly large Mathematica projects and the problem arises that I have to intermittently check numerical results but want to easily revert to having all my constructs in analytical form.
The code is fairly fluid, and I don't want to use scoping constructs everywhere, as they add overhead. Is there an easy way to identify and clear all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
  HoldPattern[lhs_ = rhs_?NumericQ] |
    HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
  3]
This will work if $HistoryLength is sufficiently large (it defaults to Infinity). Note, however, that in the above example e was assigned 3 + c, and the 3 there is not undone. So the problem is really ambiguous as formulated, because some numbers can make it into other definitions. One way to avoid this is to use SetDelayed for assignments rather than Set.
Another alternative would be to analyze the names in, say, the Global` context (if that is where your symbols live), then inspect the OwnValues and DownValues of those symbols in a fashion similar to the above, and remove definitions with a purely numerical right-hand side.
But IMO neither of these approaches is robust. I'd still use scoping constructs and try to isolate the numerics. One possibility is to wrap your final code in Block and assign the numerical values inside this Block. This seems a much cleaner approach. The overhead is minimal: you just have to remember which symbols you want to assign values to. Block will automatically ensure that, outside it, the symbols have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a -> 1; rule[d] = d -> 3 instead of the assignments above. You could then apply these rules, extracting them as, say, DownValues[rule][[All, 2]], whenever you want to test with some numerical arguments.
Building on Andrew Moylan's solution, one can construct a Block-like function that takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable, and use BlockRules[ savedrules, code ], or even define a function that would apply a fixed set of rules, kind of like so:
In[76]:= NumericCheck =
Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];
In[78]:= a + b // NumericCheck
Out[78]= 5.
EDIT In response to Timo's comment, it might be possible to use NotebookEvaluate (new in 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
nb = CreateDocument[{ExpressionCell[
Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]],
"Input"]}];
BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> "True"];]
As a result of this evaluation, you get a notebook with your commands evaluated while a was locally set to 4. To take it further, you would have to take the notebook with your code, open a new notebook, evaluate Notebooks[] to identify the notebook of interest, and then do:
BlockRules[variablerules,
NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
InsertResults -> "True"]]
I hope you can make this idea work.

Elegantly alter a list of variables: Generalization of AddTo, TimesBy, etc

Suppose I've defined a list of variables
{a,b,c} = {1,2,3}
If I want to double them all I can do this:
{a,b,c} *= 2
The variables {a,b,c} now evaluate to {2,4,6}.
If I want to apply an arbitrary transformation function to them, I can do this:
{a,b,c} = f /@ {a,b,c}
How would you do that without specifying the list of variables twice?
(Set aside the objection that I'd probably want an array rather than a list of distinctly named variables.)
You can do this:
Function[Null, # = f /@ #, HoldAll][{a, b, c}]
For example,
In[1]:=
{a,b,c}={1,2,3};
Function[Null, #=f/@#,HoldAll][{a,b,c}];
{a,b,c}
Out[3]= {f[1],f[2],f[3]}
Or, you can do the same without hard-coding f, by defining a custom set function. The effect of a foreach loop can be reproduced easily if you give it the Listable attribute:
ClearAll[set];
SetAttributes[set, {HoldFirst, Listable}]
set[var_, f_] := var = f[var];
Example:
In[10]:= {a,b,c}={1,2,3};
set[{a,b,c},f1];
{a,b,c}
Out[12]= {f1[1],f1[2],f1[3]}
You may also want to get speed benefits for cases when your f is Listable, which is especially relevant now since M8's Compile enables user-defined functions to benefit from being Listable in terms of speed, in a way that previously only built-in functions could. All you have to do in such cases (when you are after speed and you know that f is Listable) is to remove the Listable attribute of set.
I hit upon an answer to this when fixing up this old question: ForEach loop in Mathematica
Defining the each function as in the accepted answer to that question, we can answer this question with:
each[i_, {a,b,c}, i = f[i]]

Obtaining all possible states of an object for a NP-Complete(?) problem in Python

Not sure that the example (or the actual use case) qualifies as NP-complete, but I'm wondering about the most Pythonic way to do the below, assuming this was the algorithm available.
Say you have :
class Person:
    def __init__(self):
        self.status = 'unknown'

    def set(self, value):
        if value:
            self.status = 'happy'
        else:
            self.status = 'sad'
... blah. Maybe it's got their names or where they live or whatever.
and some operation that requires a group of Persons. (The key value here is whether the Person is happy or sad.)
Hence, given PersonA, PersonB, PersonC, PersonD, I'd like to end up with a list of the 2**4 possible combinations of sad and happy Persons, i.e.
[
[ PersonA.set(true), PersonB.set(true), PersonC.set(true), PersonD.set(true)],
[ PersonA.set(true), PersonB.set(true), PersonC.set(true), PersonD.set(false)],
[ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(true)],
[ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(false)],
etc..
Is there a good Pythonic way of doing this? I was thinking about list comprehensions (and modifying the object so that you could call it and get back two objects, true and false), but the comprehension formats I've seen would require me to know the number of Persons in advance. I'd like to do this independently of the number of persons.
EDIT : Assume that whatever that operation that I was going to run on this is part of a larger problem set - we need to test out all values of Person for a given set in order to solve our problem. (i.e. I know this doesn't look NP-complete right now =) )
any ideas?
Thanks!
I think this could do it:
l = list()
for i in xrange(2 ** n):
    # create the list of n people
    sublist = [None] * n
    for j in xrange(n):
        sublist[j] = Person()
        sublist[j].set(i & (1 << j))
    l.append(sublist)
Note that if you wrote Person so that its constructor accepted the value, or such that the set method returned the person itself (but that's a little weird in Python), you could use a list comprehension. With the constructor way:
l = [ [Person(i & (1 << j)) for j in xrange(n)] for i in xrange(2 ** n)]
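For that comprehension to work, Person's constructor would have to accept the value; a minimal sketch of such a variant (my own illustration, not from the question) might be:
class Person:
    def __init__(self, value=None):
        # Accept an optional initial value so Person(i & (1 << j)) works directly.
        self.status = 'unknown' if value is None else ('happy' if value else 'sad')

    def set(self, value):
        self.status = 'happy' if value else 'sad'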
The runtime of the solution is O(n * 2**n), as you can tell by looking at the loops, but it's not really a "problem" (i.e. a question with a yes/no answer), so you can't really call it NP-complete. See What is an NP-complete in computer science? for more information on that front.
According to what you've stated in your problem, you're right -- you do need itertools.product, but not exactly the way you've stated.
import itertools
truth_values = itertools.product((True, False), repeat = 4)
people = (person_a, person_b, person_c, person_d)
all_people_and_states = [[person.set(truth) for person, truth in zip(people, combination)]
                         for combination in truth_values]
That should be more along the lines of what you mentioned in your question.
You can use a Cartesian product to get all possible combinations of people and states. Requires Python 2.6+.
import itertools
people = [person_a,person_b,person_c]
states = [True,False]
all_people_and_states = itertools.product(people,states)
The variable all_people_and_states is an iterator of tuples (x, y) where x is a person and y is either True or False. It will yield all possible pairings of people and states.
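Continuing from the snippet above, materializing the iterator makes the result concrete (a small sketch of my own):
pairings = list(itertools.product(people, states))
# With 3 people and 2 states this gives 3 * 2 = 6 (person, state) tuples --
# pairings of a person with a state, not the 2**3 full happy/sad assignments.
print(len(pairings))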