NetLogo - trouble with exiting a while loop when the condition is false

I have an ordered list of patches x, and I ask each of these patches to find the patches within a certain radius of itself (the resulting agentset is called x_radius). I basically want to iterate through each patch in x and then through the corresponding patches in its x_radius (take the first item in x, then that item's first patch in x_radius, and so on), checking whether the patch variable food of a given x_radius patch is less than or equal to .5 (here I have it set to a random number for simplicity). If it is greater than .5, I would like to stop the loop. So the order of commands I think should achieve this is:
Ask 1st patch in x to find patches in radius
Ask 1st patch in x_radius of the 1st patch in x to find their food amount
If their food amount is <= .5, ask the 2nd patch in x_radius of the 1st patch in x to find their food
Repeat until any patch in any x_radius of any x patch is > .5
This is a simplified bit of the code I am working on:
foreach x [p ->
  ask p [
    set x_radius patches in-radius 5
    while [all? x_radius [food <= .5]] [
      foreach x_radius [n ->
        ask n [
          set food random-float 1.0
        ]
      ]
    ]
  ]
]
I see that the while loop is not exiting when one of the x_radius patches being evaluated has food. The code basically goes through all of the x patches without ever evaluating the x_radius patches, so somewhere my logic is messy. I've always had some trouble with while loops; I have a feeling it is because food is set within the while loop but I am referencing it before? Does anyone see where I am going wrong?
UPDATE: The following code gets me closer to what I want; however, if an x_radius patch has a negative food value, the code keeps evaluating that same patch over and over, when I would want to move on to the next x_radius patch. do-calculations is a dummy procedure that calculates some value:
foreach x [p ->
  ask p [
    set x_radius patches in-radius 5
    set food 0
    foreach sort x_radius [n ->
      while [food <= .5] [
        ask n [
          set food do-calculations
        ]
      ]
    ]
  ]
]

Related

Netlogo calculate all turtles of a type that are touching

In Netlogo, I have a grid of turtles all touching, and I want to count the number of green turtles (MHC = 1 in the code below) that form each cluster of green turtles (amid the other colours). Perhaps I am going about it all wrong, but it seems very hard.
I have tried while loops designed to start with a single green cell (unconnected to any previous green cluster) and assign a number to its turtles-own variable block. Then each green neighbor in-radius 1 receives the same number, and so on, until every touching green cell carries the same number. Then the next cluster receives a new number and the process starts over.
However, unless it's just a question of bad bracketing, it really doesn't seem to work. Here is the functional code (which just creates the grid of colour-changing turtles):
turtles-own [MHC block]
globals [prWound]

to set-up
  clear-all
  reset-ticks
  ask patches [sprout 1 [set color magenta]]
  ask turtles [set MHC 2]
  set prWound 0.0001
end
to rules
  ask turtles with [MHC = 0][set color red]
  ask turtles with [MHC = 1][set color green]
  ask turtles with [MHC = 2][set color magenta]
  ask turtles with [MHC = 3][set color blue]
  ask turtles with [MHC = 4][set color orange]
  ask turtles [
    if random 100 < 1 [set MHC (random 5)] ;vary MHC between 0-4
    set block 0
    if random-float 1 < prWound [ask turtles in-radius 4 [die] die]
    if any? patches in-radius 1 with [not any? turtles-here] and random 100 < 50 [
      if random 100 < 2.5 [set MHC (random 5)]
      hatch 1 [move-to one-of patches in-radius 1 with [not any? turtles-here]]
    ]
  ]
  tick
end

to go
  rules
end
Here is the part where I try to add block values that I cannot get to work (added just before the tick):
ask turtles with [MHC = 1][
  if block = 0 [set block (max ([block] of turtles) + 1)]
  while [any? [turtles with [MHC = 1 and block = 0] in-radius 1] of turtles with [block = [block] of myself]] [
    if any? [turtles with [MHC = 1 and block = 0] in-radius 1] of turtles with [block = [block] of myself]
      [set block ([block] of myself)]
  ]
]
I think the in-radius might be at least one of the problems - I am not sure it can be used this way.
Update: simpler approach
I leave my initial reply below unchanged; however, I now see that a much simpler approach can be taken:
to count-blocks
  set block-now 0
  ask turtles [set block 0]
  while [any? turtles with [condition]] [
    set block-now block-now + 1
    ask one-of turtles with [condition] [
      join-and-search
    ]
  ]
end

to join-and-search
  set block block-now
  if any? (turtles-on neighbors) with [condition] [
    ask (turtles-on neighbors) with [condition] [
      join-and-search
    ]
  ]
end

to-report condition
  ifelse (color = green and block = 0)
    [report TRUE]
    [report FALSE]
end
Note that, although while is used only once in this case, join-and-search in fact creates a loop by calling itself, with the recursive call performed only if any (turtles-on neighbors) with [condition] exist; this makes the candidate? passage (i.e. becoming a candidate, recruiting candidates, ceasing to be a candidate) unnecessary here.
I think just one warning is due in this case: I don't know whether it is best practice to let a procedure call itself. On the one hand, it sounds like something worth flagging; on the other, this join-and-search seems no more problematic than any other loop built around an unusual condition.
Initial reply
While trying to solve this, I discovered something about in-radius that I had not considered, and part of the problem surely lies there.
Before disclosing it, however, let me say that I am not sure this in-radius issue is everything that was wrong with your attempt: by the time I found out about it, I had already taken my own approach to the problem.
In general, however, one piece of advice: keep your code as tidy and as readable as possible (including indentation) - it becomes a lot easier to spot where a problem lies.
That said, the main elements of my approach:
Two while loops: the first one checks whether there are any eligible turtles in the whole simulation that will initiate a new block; the second one (nested within the first one) checks whether there are any turtles left to be allocated to the current block being evaluated.
A candidate? turtles-own variable, which constitutes the condition for the second while loop. When being allocated to a block, each turtle also performs a search of its neighbours. If there are any turtles that should be added to the current block, then these get candidate? = TRUE and the inner loop starts again.
Also, I've split the relatively few commands into many procedures with relevant names. This makes the code both more readable and more scalable: when you expand the model's variables, agentsets, conditions etc., it will be easier to add lines of code to the relevant sections and to check whether a particular section works on its own.
to-report condition and the global variable block-now exist mainly for readability.
As of now, this code re-counts blocks at every go (and blocks may change number between one iteration of go and the next). It should certainly be possible to adapt the approach if you want to keep block numbers across go iterations.
globals [
  prWound
  block-now
]

turtles-own [
  MHC
  block
  candidate?
]

to setup
  clear-all
  reset-ticks
  ask patches [sprout 1 [set color magenta]]
  ask turtles [set MHC 2]
  set prWound 0.0001
end

to go
  rules
  count-blocks
  tick
end

to rules
  ask turtles with [MHC = 0][set color red]
  ask turtles with [MHC = 1][set color green]
  ask turtles with [MHC = 2][set color magenta]
  ask turtles with [MHC = 3][set color blue]
  ask turtles with [MHC = 4][set color orange]
  ask turtles [
    if random 100 < 1 [set MHC (random 5)]
    set block 0
    if random-float 1 < prWound [ask turtles in-radius 4 [die] die]
    if any? patches in-radius 1 with [not any? turtles-here] and random 100 < 50 [
      if random 100 < 2.5 [
        set MHC random 5
      ]
      hatch 1 [move-to one-of patches in-radius 1 with [not any? turtles-here]]
    ]
  ]
end

to count-blocks
  set block-now 0
  ask turtles [
    set block 0
  ]
  while [any? turtles with [condition]] [start-count-round]
end

to start-count-round
  set block-now (block-now + 1)
  ask turtles [
    set candidate? FALSE
  ]
  ask one-of turtles with [condition] [set candidate? TRUE]
  while [any? turtles with [candidate?]] [
    ask turtles with [candidate?] [
      join
      search
      conclude
    ]
  ]
end

to join
  set block block-now
end

to search
  let target (turtles-on neighbors) with [condition and not candidate?]
  ask target [set candidate? TRUE]
end

to conclude
  set candidate? FALSE
end

to-report condition
  ifelse (color = green and block = 0)
    [report TRUE]
    [report FALSE]
end
[Before and After screenshots of the model view]
What about in-radius?
While it may seem intuitive that a turtle looking for turtles in-radius 1 will find the turtles standing on any of the immediately neighboring patches, that is not the case: the input number for in-radius is in fact a distance - i.e. not a number of patches to be crossed.
In terms of distance, turtles standing on horizontally- or vertically-neighboring patches are at a distance of 1, while turtles standing on diagonally-neighboring patches are at a distance of √2 ≈ 1.41:
observer> clear-all
observer> ask patch 0 0 [sprout 1]
observer> ask patch 0 1 [sprout 1]
observer> ask patch 1 1 [sprout 1]
observer> ask turtle 0 [show distance turtle 1]
(turtle 0): 1
observer> ask turtle 0 [show distance turtle 2]
(turtle 0): 1.4142135623730951
This is the reason why not even my approach was working until I replaced let target turtles in-radius 1 with [condition and not candidate?] with let target (turtles-on neighbors) with [condition and not candidate?].
Note that you use in-radius twice in the first chunk of code you shared. In one case it is just patches in-radius 1, which you can replace with neighbors; in the other it is turtles in-radius 4. You might want to consider the effect of this distance behaviour in the latter case.
A final note on the code
Just to make sure: are you sure that the order of things in to rules is what you want? As things stand, turtles change their MHC value but only change colour in the subsequent round of go (and by then, they will have changed MHC again).

How to speed up graph coloring problem in python PuLP

I am trying to solve the classic graph coloring problem using python PuLP. We have n nodes, a collection of edges in the form edges = [(node1, node2), (node2, node4), ...], and we are trying to find the minimum number of node colors so that no connected nodes share a color.
My implementation works but is slow. It consists of three constraints, plus one optimization that initializes node0 to color 0 to somewhat limit the search space. The code is as follows:
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

nodes = range(node_count)
n_colors = 10
# colors = range(node_count)
colors = range(n_colors)

prob = LpProblem("coloring", LpMinimize)

# variable xnc shows if node n has color c
xnc = LpVariable.dicts("x", (nodes, colors), cat='Binary')

# array of colors to indicate which ones were used
used_colors = LpVariable.dicts("used", colors, cat='Binary')

# minimize how many colors are used, and minimize int value for those colors
prob += lpSum([used_colors[c] * c for c in colors])
# prob += lpSum([used_colors[c] for c in colors])

# set the first node to color 0 to constrain starting point
prob += xnc[0][0] == 1

# every node uses exactly one color
for n in nodes:
    prob += lpSum([xnc[n][c] for c in colors]) == 1

# any connected nodes have different colors
for e in edges:
    e1, e2 = e[0], e[1]
    for c in colors:
        prob += xnc[e1][c] + xnc[e2][c] <= 1

# mark a color as used if any node has that color
for n in nodes:
    for c in colors:
        prob += xnc[n][c] <= used_colors[c]

prob.solve()
I see that there are symmetries, and I know I could reduce them by making any newly used color at most max(colors_already_used) + 1, so that if node 0 has color 0, node 1 will either have the same color or color 1. But I am not sure how to encode this, because max is not allowed given the linear nature of the problem in PuLP, as far as I know. I achieve a similar effect above by multiplying each used color by its integer value, which speeds things up a bit, but I do not think it works as quite the efficient/deterministic constraint I seek.
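One standard linear encoding of that idea (my own suggestion, not part of the original post) is to require that colors be used in increasing index order: color c may only be used if color c - 1 is also used. Assuming the used_colors variables from the snippet above:

# Symmetry breaking: color c can be used only if color c-1 is used,
# which eliminates solutions that merely permute color labels.
for c in colors:
    if c > 0:
        prob += used_colors[c] <= used_colors[c - 1]

With this ordering in place, the objective could presumably revert to the plain count, lpSum([used_colors[c] for c in colors]), instead of the weighted sum.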
Also, limiting the number of colors seems to have a nice effect on speed, but I am not sure it is worth the preprocessing cost of finding a heuristic bound before starting the optimization, since it is not clear in advance how many colors will be needed.
What other constraints could I add, and how else could I speed this up? I am mostly interested in better ways to formulate the problem, but I am also open to computational optimizations, e.g. parallelization, if they can be done in PuLP.

Self-Correcting Probability Distribution - Maintain randomness, while gravitating each outcome's frequency towards its probability

This is a common problem when you want to introduce randomness but at the same time want your experiment to stick close to the intended probability distribution, and cannot / do not want to count on the law of large numbers.
Say you have programmed a coin with a 50-50 chance for heads / tails. If you simulate it 100 times, most likely you will get something close to the intended 50-50 (a binomial distribution centered at 50-50).
But what if you wanted similar certainty for any number of repeats of the experiment?
A client of ours asked us this ::
We may also need to add some restrictions on some of the randomizations (e.g. if spatial location of our stimuli is totally random, the program could present too many stimuli in some locations and not very many in others. Locations should be equally sampled, so more of an array that is shuffled instead of randomization with replacement).
So they wanted randomness they could control.
Implementation details aside (arrays, vs other methods), the wanted result for our client's problem was the following ::
Always have as close to 1 / N of the stimuli in each of the N potential locations, yet do so in a randomized (hard-to-predict) way.
This is commonly needed in games (when distributing objects, characters, stats, ..), and I would imagine many other applications.
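For concreteness, the "shuffled array" idea from the client's quote can be sketched as follows (my own illustration, not the client's code): draw from a bag that contains each location exactly once and reshuffle whenever the bag empties, so each consecutive block of N draws covers all N locations.

import random

def shuffled_bag_draws(locations, n_draws):
    """Return n_draws locations such that each consecutive block of
    len(locations) draws contains every location exactly once."""
    draws = []
    while len(draws) < n_draws:
        bag = list(locations)
        random.shuffle(bag)  # randomize order within the block
        draws.extend(bag)
    return draws[:n_draws]

This guarantees exact balance but becomes quite predictable near the end of each block; the method below trades some of that rigidity for smoother randomness.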
My preferred method for dealing with this is to dynamically weight the intended probabilities based on how the experiment has gone so far. This effectively moves us away from independently drawn variables.
Let p[i] be the wanted probability of outcome i
Let N[i] be the number of times outcome i has happened so far
Let N be the sum of N[i] over all outcomes i
Let w[i] be the correcting weight for i
Let W_Max be the maximum weight you want to assign (i.e. when an outcome has occurred 0 times)
Let P[i] be the unnormalized probability for i
Let p_c[i] be the corrected probability for i
p[i] is fixed and provided by the design. N[i] is an accumulation - every time i happens, increment N[i] by 1.
w[i] is given by
w[i] = CalculateWeight(p[i], N[i], N, W_Max)
{
  if (N == 0) return 1;
  if (N[i] == 0) return W_Max;
  intended = p[i] * N
  current = N[i]
  return intended / current;
}
And P[i] is given by
P[i] = p[i] * w[i]
Then we calculate p_c[i] as
p_c[i] = P[i] / sum(P[j] over all j)
And we run the next iteration of our random experiment (sampling) with p_c[i] instead of p[i] for outcome i.
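Putting the pieces together, here is a minimal Python sketch of the whole scheme (the function names, the W_Max value and the coin example are mine):

import random

def corrected_probabilities(p, counts, w_max=5.0):
    """Reweight the intended probabilities p by how far each outcome's
    observed count lags behind (or exceeds) its intended count."""
    n_total = sum(counts)
    weights = []
    for p_i, n_i in zip(p, counts):
        if n_total == 0:
            weights.append(1.0)                  # no history yet
        elif n_i == 0:
            weights.append(w_max)                # never happened: maximum boost
        else:
            weights.append(p_i * n_total / n_i)  # intended / current
    unnormalized = [p_i * w_i for p_i, w_i in zip(p, weights)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Example: a fair coin, sampled 10 times with self-correction.
p = [0.5, 0.5]
counts = [0, 0]
for _ in range(10):
    p_c = corrected_probabilities(p, counts)
    outcome = random.choices(range(len(p)), weights=p_c)[0]
    counts[outcome] += 1
print(counts)  # typically very close to [5, 5]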
The main drawback is that this control comes at the cost of predictability: after 4 tails in a row, it's highly likely you will see a head.
Note 1 :: At any step, the described method will provide a distribution close to the original if the experiment's results so far match the intended results, or one skewed towards (away from) outcomes that have happened less (more) often than intended.
Note 2 :: You can introduce a "control" parameter c and add an extra step.
p_c2[i] = c * p_c[i] + (1-c) * p[i]
For c = 1, this defaults to the described method; for c = 0, it defaults to the original probabilities (independently drawn variables).
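As a one-line extension of the sketch above (names again mine):

def blended_probabilities(p, p_c, c):
    """Blend corrected and original probabilities: c = 1 gives the full
    correction, c = 0 gives plain independent sampling."""
    return [c * pc_i + (1 - c) * p_i for p_i, pc_i in zip(p, p_c)]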

Verify that points lie on a grid of specified pitch

While I am trying to solve this problem in a context where numpy is used heavily (and therefore an elegant numpy-based solution would be particularly welcome), the fundamental problem has nothing to do with numpy (or even Python) as such.
The task is to create an automated test for an algorithm which is supposed to produce points distributed on a grid whose pitch is specified as an input to the algorithm. The absolute positions of the points do not matter, but their relative positions do. For example, after
collection_of_points = algorithm(data, pitch=[1.3, 1.5, 2])
collection_of_points should contain only points whose x-coordinates differ by multiples of 1.3, whose y-coordinates differ by multiples of 1.5 and whose z-coordinates differ by multiples of 2.
The test should verify that this condition is satisfied.
One thing that I have tried, which doesn't seem too ugly but doesn't work, is:
import itertools
import numpy as np

points = algo(data, pitch=requested_pitch)
for p1, p2 in itertools.combinations(points, 2):
    distance_between_points = np.array(p2) - np.array(p1)
    assert np.allclose(distance_between_points % requested_pitch, 0)
[ Aside for those unfamiliar with Python or numpy:
itertools.combinations(points, 2) is a simple way of iterating through all pairs of points
Arithmetic operations on np.arrays are performed elementwise, so np.array([5,6,7]) % np.array([2,3,4]) evaluates to np.array([1, 0, 3]) via np.array([5%2, 6%3, 7%4])
np.allclose checks whether all corresponding elements of the two input arrays are approximately equal, and numpy automatically treats the 0 passed as the second argument as an all-zero array of the correct size
]
To see why the idea shown above fails, consider a desired pitch of 3 and two points which are separated by 8.9999999 in the relevant dimension: 8.9999999 % 3 is around 2.9999999, which is nowhere near the required 0.
In all of this, I can't help feeling that I'm missing something obvious or that I'm re-inventing some wheel.
Can you suggest an elegant way of writing such a check?
Change your assertion to:
np.all(np.logical_or(np.isclose(x % y, 0), np.isclose((x % y) - y, 0)))
If you want to make it more readable, you should functionalize the statement. Something like:
import numpy as np

def is_multiple(x, y, rtol=1e-05, atol=1e-08):
    """Test if x is a multiple of y."""
    remainder = x % y
    is_zero = np.isclose(remainder, 0., rtol, atol)
    is_y = np.isclose(remainder, y, rtol, atol)
    return np.logical_or(is_zero, is_y)
And then:
assert np.all(is_multiple(distance_between_points, requested_pitch))
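For instance, on the near-multiple case from the question (the concrete numbers here are my own illustration):

import numpy as np

# 8.9999999 is within tolerance of 3 * 3; 6.0 is exactly 4 * 1.5.
d = np.array([8.9999999, 6.0])
pitch = np.array([3.0, 1.5])
print(is_multiple(d, pitch))  # [ True  True]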

Markovian chains with Redis

For self-education purposes, I want to implement a Markov chain generator, using as much Redis, and as little application-level logic as possible.
Let's say I want to build a word generator, based on a frequency table with history depth N (say, 2).
As a not very interesting example, for a dictionary of two words, bar and baz, the frequency table is as follows ("." is the terminator, numbers are weights):
. . -> b x2
. b -> a x2
b a -> r x1
b a -> z x1
a r -> . x1
a z -> . x1
When I generate a word, I start with a history of two terminators . .
There is only one possible outcome for the first two letters: b a.
The third letter may be either r or z, with equal probability, since their weights are equal.
The fourth letter is always a terminator.
(Things would be more interesting with longer words in dictionary.)
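(For reference, building such a depth-2 table in Python is straightforward; this is a throwaway sketch of my own, since the whole point is to push the work into Redis:)

from collections import Counter

def build_table(words, depth=2):
    """Count (history -> next letter) transitions, with '.' as terminator."""
    table = Counter()
    for word in words:
        seq = "." * depth + word + "."
        for i in range(depth, len(seq)):
            table[(seq[i - depth:i], seq[i])] += 1
    return table

print(build_table(["bar", "baz"]))
# Counter({('..', 'b'): 2, ('.b', 'a'): 2, ('ba', 'r'): 1, ('ba', 'z'): 1,
#          ('ar', '.'): 1, ('az', '.'): 1})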
Anyway, how to do this with Redis elegantly?
Redis sets have SRANDMEMBER, but do not have weights.
Redis sorted sets have weights, but do not have random member retrieval.
Redis lists allow weights to be represented as duplicate entries, but how would one take set intersections with them?
Looks like application code is doomed to do some data processing...
You can accomplish weighted random selection with a Redis sorted set by assigning each member a score between zero and one, equal to the cumulative probability of the members of the set considered so far, including the current member.
The ordering you use is irrelevant; you may choose any order which is convenient for you. The random selection is then accomplished by generating a random floating point number r uniformly distributed between zero and one, and calling
ZRANGEBYSCORE zset r 1 LIMIT 0 1,
which will return the first element with a score greater than or equal to r.
A little bit of reasoning should convince you that the probability of choosing a member is thus weighted correctly.
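A minimal sketch in Python with the redis-py client (the key name and the 50-50 weights follow the b a -> r / b a -> z rows of the table above; everything else is my own naming):

import random
import redis  # assumes the redis-py client

rdb = redis.Redis()

# Cumulative scores encode the weights: r covers (0, 0.5], z covers (0.5, 1].
rdb.zadd("markov:b a", {"r": 0.5, "z": 1.0})

def weighted_random_member(key):
    """Return a member with probability proportional to its weight."""
    x = random.random()  # uniform in [0, 1)
    # First member whose cumulative score is >= x.
    hits = rdb.zrangebyscore(key, x, 1, start=0, num=1)
    return hits[0] if hits else None

print(weighted_random_member("markov:b a"))  # b'r' or b'z', 50-50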
Unfortunately, the fact that the scores assigned to the elements need to be proportional to the cumulative probability would seem to make it difficult to use the sorted-set union or intersection operations in a way that preserves the significance of the scores for random selection of elements. That part would seem to require some significant application logic.