Array Numpy Side Effect - numpy

I found a strange effect when permuting an array with NumPy:
def permute(yy, kmax):
    kmax = 5
    kk = np.random.uniform(1, kmax)
    nn = int(np.floor(len(yy)/kk))
    yy3 = np.zeros_like(yy)
    np.copyto(yy3, yy)
    for ii in range(0, nn):
        ax = kk*ii - kk*nn
        aux = yy[ax]
        aux2 = yy[kk*ii]
        yy3[ax] = aux
        yy3[kk*ii] = aux2
    return yy3
and
yy = np.random.normal(0, 1, 50000)
yy1 = permute(yy, 2)
print(np.var(yy) - np.var(yy1))
print(np.mean(yy) - np.mean(yy1))
The result is not zero!
Do you think this comes from reference assignment in the array?

I ran your function with np.arange(10) and got
1752:~/mypy$ python stack35004877.py
0.0
0.0
[0 1 2 3 4 5 6 7 8 9] # yy
[0 1 2 3 4 5 6 7 8 9] # yy1
And repeated it with the large random array, with the same 0s for the statistics.
Note that your code did not permute the input
Maybe it will be clearer if I clean it up:
def permute(yy, kmax=5):
    kk = np.random.randint(1, kmax)  # int rather than float
    nn = int(np.floor(len(yy)/kk))
    print(nn, kk)
    yy3 = yy.copy()
    for ii in range(0, nn):
        ind1 = kk*ii
        ind2 = ind1 - kk*nn
        yy3[ind2] = yy[ind2]
        yy3[ind1] = yy[ind1]
    return yy3
You aren't moving anything; and with kmax=2 you just copy everything from yy to yy3 - something you already did outside the loop. With kmax=5 you don't copy everything in the loop - but the initial copy hides that.
With random.uniform(), kk is a float, and the indexes are also floats. That's not desirable, and while it apparently wasn't a problem at the time, newer NumPy versions reject float indices outright.
But even if I switch the indices:
yy3[ind2] = yy[ind1]
yy3[ind1] = yy[ind2]
I don't permute anything, because ind2 is a negative value that maps onto the same element as ind1. yy[-1] is the last item of yy.
[(0, -10), (1, -9), (2, -8),... (9, -1)]
I could work out the details, but I think you should do that yourself - with a small test case. And skip that initial copyto, that just hides errors in the iteration. Print the details, not just summary statistics from large random arrays.
And in the long run you don't want to use an iteration like this. You want to do the permutation with one indexing call. But first get this version working correctly.
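To give a flavor of that last point, here is one possible sketch of a block shuffle done with a single fancy-indexing call. This is my own construction (the question never pins down exactly which permutation was intended), so treat it as an illustration of the indexing technique, not the fix:

import numpy as np

def permute_blocks(yy, kmax=5):
    # Shuffle whole contiguous blocks of a random length kk.
    rng = np.random.default_rng()
    kk = int(rng.integers(1, kmax + 1))
    nn = len(yy) // kk
    idx = np.arange(nn * kk).reshape(nn, kk)  # one row of indices per block
    rng.shuffle(idx)                          # permute the blocks themselves
    out = yy.copy()
    out[:nn * kk] = yy[idx.ravel()]           # the single fancy-indexing call
    return out

yy = np.random.normal(0, 1, 50000)
yy1 = permute_blocks(yy, 2)
print(np.var(yy) - np.var(yy1))   # ~0, up to float rounding
print(np.mean(yy) - np.mean(yy1)) # ~0, up to float rounding

Because the output is a true permutation of the input, the mean and variance are preserved exactly (up to floating-point summation order).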


How to connect points with different indices (one data file) in gnuplot

I have a file "a_test.dat" with two data blocks that I can select via the corresponding index.
# first
x1 y1
3 1
6 2
9 8
# second
x2 y2
4 5
8 2
2 7
Now I want to connect the data points of both indices with an arrow.
set arrow from (x1,y1) to (x2,y2).
I can plot both blocks with one plot statement. But I cannot get the points to set the arrows.
plot "a_test.dat" index "first" u 1:2, "" index "second" u 1:2
From version 5.2 you can use gnuplot arrays:
stats "a_test.dat" nooutput
array xx[STATS_records]
array yy[STATS_records]
# save all data into two arrays
i = 1
fnset(x,y) = (xx[i]=x, yy[i]=y, i=i+1)
# parse data ignoring output
set table $dummy
plot "" using (fnset($1,$2)) with table
unset table
# x2,y2 data starts at midpoint in array
numi = int((i-1)/2)
plot for [i=1:numi] $dummy using (xx[i]):(yy[i]):(xx[numi+i]-xx[i]):(yy[numi+i]-yy[i]) with vectors
Use stats to count the number of lines in the file, so that the array can
be large enough. Create an array xx and another yy to hold the data.
Use plot ... with table to read the file again, calling your function
fnset() for each data line with the x and y column values. The function
saves them at the current index i, which it increments. It was
initialised to 1.
For 3+3 data lines, i ends up at 7, so we set numi to (i-1)/2 i.e. 3.
Use plot for ... vectors to draw the arrows. Each arrow needs 4 data
items from the array. Note that the second x,y must be a relative delta,
not an absolute position.

Sorting/Optimization problem with rows in a pandas dataframe [duplicate]

Say I'm given a sorted list/array, e.g. [1,6,8,15,40], the size of the array, and a requested number.
How would you find the minimum number of values from that list required to sum to the requested number?
For example, given the array [1,6,8,15,40] and the requested number 23, it would take 2 values from the list (8 and 15) to equal 23. The function would then return 2 (the number of values). Furthermore, there is an unlimited number of 1s in the array (so the function will always return a value).
Any help is appreciated.
The NP-complete subset-sum problem trivially reduces to your problem: given a set S of n integers and a target value s, we construct the set S' containing (n+1)*x_k for each x_k in S, and set the target equal to (n+1)*s. If there is a subset of the original set S summing to s, then there is a subset of size at most n in the new set summing to (n+1)*s, and such a subset cannot involve any extra 1s. If there is no such subset, then any subset produced as an answer must contain at least n+1 elements, since it needs at least n+1 of the extra 1s to reach a multiple of n+1.
So the problem will not admit any polynomial-time solution without a revolution in computing. With that disclaimer out of the way, you can consider some pseudopolynomial-time solutions to the problem, which work well in practice when the target value is small.
Here's a Python algorithm that will do this:

import functools

S = [1, 6, 8, 15, 40]  # must contain only positive integers

@functools.lru_cache(maxsize=None)  # memoizing decorator
def min_subset(k, s):
    # minimum size of a subset of S[:k] summing to s,
    # counting any extra 1s needed to get there
    best = s  # worst case: use all 1s
    for i, j in enumerate(S[:k]):
        if j <= s:
            sz = min_subset(i, s - j) + 1
            if sz < best:
                best = sz
    return best

print(min_subset(len(S), 23))  # prints 2
This is tractable even for fairly large lists (I tested a random list of n=50 elements), provided their values are bounded. With S = [random.randint(1, 500) for _ in range(50)], min_subset(len(S), 8489) takes less than 10 seconds to run.
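If you prefer iteration to recursion, the same idea can be written as a bottom-up 0/1 dynamic program. This is a sketch of mine (the names are not from the original answer), with the unlimited 1s handled as padding at the end:

def min_subset_iter(S, target):
    # best[s] = fewest elements of S (each used at most once) summing to s
    INF = float('inf')
    best = [0] + [INF] * target
    for v in S:
        for s in range(target, v - 1, -1):  # reverse scan keeps 0/1 semantics
            if best[s - v] + 1 < best[s]:
                best[s] = best[s - v] + 1
    # any shortfall is padded with the unlimited supply of 1s
    return min(best[s] + (target - s) for s in range(target + 1))

print(min_subset_iter([1, 6, 8, 15, 40], 23))  # 2

This runs in O(len(S) * target) time, which is the same pseudopolynomial bound as the memoized recursion.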
There may be a simpler solution, but if your lists are sufficiently short, you can just try every set of values, i.e.:
1 --> Not 23
6 --> Not 23
...
1 + 6 = 7 --> Not 23
1 + 8 = 9 --> Not 23
...
1 + 40 = 41 --> Not 23
6 + 8 = 14 --> Not 23
...
8 + 15 = 23 --> Oh look, it's 23, and we added 2 values
If you know your list is sorted, you can skip some tests: once 6 + 40 > 23, there's no need to test 8 + 40 or 15 + 40 either.
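For short lists, that exhaustive search is only a few lines with itertools. A sketch (mine, not from the answer above), padding any shortfall with the unlimited 1s:

from itertools import combinations

def brute_force(S, target):
    best = target  # worst case: target copies of 1
    for size in range(1, len(S) + 1):
        for combo in combinations(S, size):
            total = sum(combo)
            if total <= target:
                best = min(best, size + (target - total))  # pad with 1s
    return best

print(brute_force([1, 6, 8, 15, 40], 23))  # 2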

radio button of pyqt5 not deleted [duplicate]

I'm iterating over a list of tuples in Python, and am attempting to remove them if they meet certain criteria.
for tup in somelist:
    if determine(tup):
        code_to_remove_tup
What should I use in place of code_to_remove_tup? I can't figure out how to remove the item in this fashion.
You can use a list comprehension to create a new list containing only the elements you don't want to remove:
somelist = [x for x in somelist if not determine(x)]
Or, by assigning to the slice somelist[:], you can mutate the existing list to contain only the items you want:
somelist[:] = [x for x in somelist if not determine(x)]
This approach could be useful if there are other references to somelist that need to reflect the changes.
Instead of a comprehension, you could also use itertools. In Python 2:
from itertools import ifilterfalse
somelist[:] = ifilterfalse(determine, somelist)
Or in Python 3:
from itertools import filterfalse
somelist[:] = filterfalse(determine, somelist)
The answers suggesting list comprehensions are almost correct - except that they build a completely new list and then give it the same name as the old list; they do not modify the old list in place. That's different from what you'd be doing by selective removal, as in Lennart's suggestion - it's faster, but if your list is accessed via multiple references, the fact that you're just reseating one of the references, not altering the list object itself, can lead to subtle, disastrous bugs.
Fortunately, it's extremely easy to get both the speed of list comprehensions AND the required semantics of in-place alteration—just code:
somelist[:] = [tup for tup in somelist if determine(tup)]
Note the subtle difference with other answers: this one is not assigning to a barename. It's assigning to a list slice that just happens to be the entire list, thereby replacing the list contents within the same Python list object, rather than just reseating one reference (from the previous list object to the new list object) like the other answers.
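A tiny demonstration of that difference, with a second name pointing at the same list:

a = [1, 2, 3, 4]
b = a                            # b and a name the same list object
a = [x for x in a if x % 2]      # rebinds the name a only
print(b)                         # [1, 2, 3, 4] -- b still sees the old object

a = b                            # point a back at the original object
a[:] = [x for x in a if x % 2]   # slice assignment mutates the object itself
print(b)                         # [1, 3] -- every reference sees the change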
You need to take a copy of the list and iterate over it first, or the iteration will fail with what may be unexpected results.
For example (depends on what type of list):
for tup in somelist[:]:
etc....
An example:
>>> somelist = range(10)
>>> for x in somelist:
...     somelist.remove(x)
>>> somelist
[1, 3, 5, 7, 9]
>>> somelist = range(10)
>>> for x in somelist[:]:
...     somelist.remove(x)
>>> somelist
[]
for i in range(len(somelist) - 1, -1, -1):
    if some_condition(somelist, i):
        del somelist[i]
You need to go backwards otherwise it's a bit like sawing off the tree-branch that you are sitting on :-)
Python 2 users: replace range with xrange to avoid building the whole index list up front
Overview of workarounds
Either:
use a linked list implementation/roll your own.
A linked list is the proper data structure to support efficient item removal, and does not force you to make space/time tradeoffs.
A CPython list is implemented with dynamic arrays as mentioned here, which is not a good data type to support removals.
There doesn't seem to be a linked list in the standard library however:
Is there a linked list predefined library in Python?
https://github.com/ajakubek/python-llist
start a new list() from scratch, and .append() back at the end as mentioned at: https://stackoverflow.com/a/1207460/895245
This is time efficient, but less space efficient, because it keeps an extra copy of the array around during the iteration.
use del with an index as mentioned at: https://stackoverflow.com/a/1207485/895245
This is more space efficient, since it dispenses with the array copy, but it is less time efficient, because removal from a dynamic array requires shifting all following items back by one, which is O(N).
Generally, if you are doing it quick and dirty and don't want to add a custom LinkedList class, you just want to go for the faster .append() option by default unless memory is a big concern.
Official Python 2 tutorial 4.2. "for Statements"
https://docs.python.org/2/tutorial/controlflow.html#for-statements
This part of the docs makes it clear that:
you need to make a copy of the iterated list to modify it
one way to do it is with the slice notation [:]
If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy. The slice notation makes this especially convenient:
>>> words = ['cat', 'window', 'defenestrate']
>>> for w in words[:]:  # Loop over a slice copy of the entire list.
...     if len(w) > 6:
...         words.insert(0, w)
...
>>> words
['defenestrate', 'cat', 'window', 'defenestrate']
Python 2 documentation 7.3. "The for statement"
https://docs.python.org/2/reference/compound_stmts.html#for
This part of the docs says once again that you have to make a copy, and gives an actual removal example:
Note: There is a subtlety when the sequence is being modified by the loop (this can only occur for mutable sequences, i.e. lists). An internal counter is used to keep track of which item is used next, and this is incremented on each iteration. When this counter has reached the length of the sequence the loop terminates. This means that if the suite deletes the current (or a previous) item from the sequence, the next item will be skipped (since it gets the index of the current item which has already been treated). Likewise, if the suite inserts an item in the sequence before the current item, the current item will be treated again the next time through the loop. This can lead to nasty bugs that can be avoided by making a temporary copy using a slice of the whole sequence, e.g.,
for x in a[:]:
    if x < 0: a.remove(x)
However, I disagree with this implementation, since .remove() has to iterate the entire list to find the value.
Could Python do this better?
It seems like this particular Python API could be improved. Compare it, for instance, with:
Java ListIterator::remove which documents "This call can only be made once per call to next or previous"
C++ std::vector::erase, which returns a valid iterator to the element after the one removed
both of which make it crystal clear that you cannot modify a list being iterated except with the iterator itself, and gives you efficient ways to do so without copying the list.
Perhaps the underlying rationale is that Python lists are assumed to be dynamic array backed, and therefore any type of removal will be time inefficient anyways, while Java has a nicer interface hierarchy with both ArrayList and LinkedList implementations of ListIterator.
There doesn't seem to be an explicit linked list type in the Python stdlib either: Python Linked List
Your best approach for such an example would be a list comprehension
somelist = [tup for tup in somelist if determine(tup)]
In cases where you're doing something more complex than calling a determine function, I prefer constructing a new list and simply appending to it as I go. For example
newlist = []
for tup in somelist:
# lots of code here, possibly setting things up for calling determine
if determine(tup):
newlist.append(tup)
somelist = newlist
Copying the list using remove might make your code look a little cleaner, as described in one of the answers below. You should definitely not do this for extremely large lists, since this involves first copying the entire list, and also performing an O(n) remove operation for each element being removed, making this an O(n^2) algorithm.
for tup in somelist[:]:
    # lots of code here, possibly setting things up for calling determine
    if determine(tup):
        somelist.remove(tup)
For those who like functional programming:
somelist[:] = filter(lambda tup: not determine(tup), somelist)
or
from itertools import ifilterfalse
somelist[:] = list(ifilterfalse(determine, somelist))
I needed to do this with a huge list, and duplicating the list seemed expensive, especially since in my case the number of deletions would be few compared to the items that remain. I took this low-level approach.
array = [lots of stuff]
arraySize = len(array)
i = 0
while i < arraySize:
    if someTest(array[i]):
        del array[i]
        arraySize -= 1
    else:
        i += 1
What I don't know is how efficient a couple of deletes are compared to copying a large list. Please comment if you have any insight.
Most of the answers here want you to create a copy of the list. I had a use case where the list was quite long (110K items) and it was smarter to keep reducing the list instead.
First of all, you'll need to replace the for loop with a while loop:
i = 0
while i < len(somelist):
    if determine(somelist[i]):
        del somelist[i]
    else:
        i += 1
The value of i is not changed in the if block because you'll want to read the value of the new item FROM THE SAME INDEX once the old item is deleted.
It might be smart to simply build a new list, adding each item only if it meets the desired criteria.
so:
newList = []
for item in originalList:
    if item != badValue:
        newList.append(item)
and to avoid having to re-code the entire project with the new list's name:
originalList[:] = newList
Note, from the Python documentation:
copy.copy(x)
Return a shallow copy of x.
copy.deepcopy(x)
Return a deep copy of x.
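For completeness, a quick illustration of the difference between the two (a shallow copy shares nested objects; a deep copy does not):

import copy

a = [[1, 2], [3, 4]]
b = copy.copy(a)      # shallow copy: new outer list, shared inner lists
c = copy.deepcopy(a)  # deep copy: inner lists are duplicated too
a[0][0] = 99
print(b[0][0])  # 99 -- the shallow copy shares the mutated inner list
print(c[0][0])  # 1  -- the deep copy is unaffected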
This answer was originally written in response to a question which has since been marked as duplicate:
Removing coordinates from list on python
There are two problems in your code:
1) When using remove(), you attempt to remove integers whereas you need to remove a tuple.
2) The for loop will skip items in your list.
Let's run through what happens when we execute your code:
>>> L1 = [(1,2), (5,6), (-1,-2), (1,-2)]
>>> for (a,b) in L1:
...     if a < 0 or b < 0:
...         L1.remove(a,b)
...
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
TypeError: remove() takes exactly one argument (2 given)
The first problem is that you are passing both 'a' and 'b' to remove(), but remove() only accepts a single argument. So how can we get remove() to work properly with your list? We need to figure out what each element of your list is. In this case, each one is a tuple. To see this, let's access one element of the list (indexing starts at 0):
>>> L1[1]
(5, 6)
>>> type(L1[1])
<type 'tuple'>
Aha! Each element of L1 is actually a tuple. So that's what we need to be passing to remove(). Tuples in Python are very easy: they're simply made by enclosing values in parentheses. Inside a function call, "a, b" is two separate arguments, but "(a, b)" is a single tuple. So we modify your code and run it again:
# The remove line now includes an extra "()" to make a tuple out of "a,b"
L1.remove((a,b))
This code runs without any error, but let's look at the list it outputs:
L1 is now: [(1, 2), (5, 6), (1, -2)]
Why is (1,-2) still in your list? It turns out modifying the list while using a loop to iterate over it is a very bad idea without special care. The reason that (1, -2) remains in the list is that the locations of each item within the list changed between iterations of the for loop. Let's look at what happens if we feed the above code a longer list:
L1 = [(1,2),(5,6),(-1,-2),(1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)]
### Outputs:
L1 is now: [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
As you can infer from that result, every time that the conditional statement evaluates to true and a list item is removed, the next iteration of the loop will skip evaluation of the next item in the list because its values are now located at different indices.
The most intuitive solution is to copy the list, then iterate over the original list and only modify the copy. You can try doing so like this:
L2 = L1
for (a,b) in L1:
    if a < 0 or b < 0:
        L2.remove((a,b))
# Now, remove the original copy of L1 and replace with L2
print(L2 is L1)
del L1
L1 = L2; del L2
print("L1 is now: ", L1)
However, the output will be identical to before:
L1 is now:  [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
This is because when we created L2, Python did not actually create a new object. Instead, it merely made L2 a reference to the same object as L1. We can verify this with 'is', which is different from mere equality (==).
>>> L2=L1
>>> L1 is L2
True
We can make a true copy using copy.copy(). Then everything works as expected:
import copy
L1 = [(1,2), (5,6), (-1,-2), (1,-2), (3,4), (5,7), (-4,4), (2,1), (-3,-3), (5,-1), (0,6)]
L2 = copy.copy(L1)
for (a,b) in L1:
    if a < 0 or b < 0:
        L2.remove((a,b))
# Now, remove the original copy of L1 and replace with L2
del L1
L1 = L2; del L2
>>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Finally, there is one cleaner solution than having to make an entirely new copy of L1. The reversed() function:
L1 = [(1,2), (5,6), (-1,-2), (1,-2), (3,4), (5,7), (-4,4), (2,1), (-3,-3), (5,-1), (0,6)]
for (a,b) in reversed(L1):
    if a < 0 or b < 0:
        L1.remove((a,b))
print("L1 is now: ", L1)
>>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Unfortunately, I cannot adequately describe how reversed() works. It returns a 'listreverseiterator' object when a list is passed to it. For practical purposes, you can think of it as creating a reversed copy of its argument. This is the solution I recommend.
If you want to delete elements from a list while iterating, use a while-loop so you can alter the current index and end index after each deletion.
Example:
i = 0
length = len(list1)
while i < length:
    if condition:
        list1.remove(list1[i])
        i -= 1
        length -= 1
    i += 1
The other answers are correct that it is usually a bad idea to delete from a list that you're iterating. Reverse iterating avoids some of the pitfalls, but it is much more difficult to follow code that does that, so usually you're better off using a list comprehension or filter.
There is, however, one case where it is safe to remove elements from a sequence that you are iterating: if you're only removing one item while you're iterating. This can be ensured using a return or a break. For example:
for i, item in enumerate(lst):
    if item % 4 == 0:
        foo(item)
        del lst[i]
        break
This is often easier to understand than a list comprehension when you're doing some operations with side effects on the first item in a list that meets some condition and then removing that item from the list immediately after.
If you want to do anything else during the iteration, it can be nice to have both the index (so that you can refer back to the element, for example if you have a list of dicts) and the actual list item contents.
inlist = [{'field1': 10, 'field2': 20}, {'field1': 30, 'field2': 15}]
xlist = []
for idx, i in enumerate(inlist):
    # do some stuff with i['field1']
    if somecondition:
        xlist.append(idx)
for i in reversed(xlist):
    del inlist[i]
enumerate gives you access to the item and the index at once. reversed is so that the indices that you're going to later delete don't change on you.
One possible solution, useful if you want not only to remove some things but also to do something with all elements in a single loop:
alist = ['good', 'bad', 'good', 'bad', 'good']
i = 0
for x in alist[:]:
    if x == 'bad':
        alist.pop(i)
        i -= 1
    # do something cool with x or just print x
    print(x)
    i += 1
A for loop iterates by index. Consider a list variable called lis, from which you remove the non-primes with lis.remove(y) while looping over it:

lis = [5, 7, 13, 29, 35, 65, 91]
#      0  1   2   3   4   5   6

During the 5th iteration, the number 35 is not a prime, so you remove it from the list:

lis.remove(y)

And then the next value (65) moves down into the previous index:

lis = [5, 7, 13, 29, 65, 91]
#      0  1   2   3   4   5

With the 4th iteration done, the pointer moves on to the 5th. That's why your loop doesn't cover 65: it has moved into an index that has already been treated.
Also, you shouldn't just assign the list to another variable, which still references the original instead of a copy:

ite = lis  # don't do this: it creates a reference, not a copy

Instead, make a copy of the list using lis[:] (or lis[::]). Looping over the copy now gives the correct result:

[5, 7, 13, 29]

The problem was that removing a value from a list during iteration makes the list indices shift.
So you can try a list comprehension instead, which supports all iterables: list, tuple, dict, string, etc. You might also want to use the built-in filter().
For more details check here
You can try for-looping in reverse so for some_list you'll do something like:
list_len = len(some_list)
for i in range(list_len):
    reverse_i = list_len - 1 - i
    cur = some_list[reverse_i]
    # some logic with cur element
    if some_condition:
        some_list.pop(reverse_i)
This way the index is aligned and doesn't suffer from the list updates (regardless whether you pop cur element or not).
I needed to do something similar, and in my case the problem was memory: I needed to merge multiple dataset objects within a list into a new object, after doing some work on them, and needed to get rid of each entry as I merged it, to avoid duplicating all of them and blowing up memory. In my case, having the objects in a dictionary instead of a list worked fine:
```
k = range(5)
v = ['a', 'b', 'c', 'd', 'e']
d = {key: val for key, val in zip(k, v)}
print(d)
for i in range(5):
    print(d[i])
    d.pop(i)
print(d)
```
The most effective method is a list comprehension; many people have shown their case for it. Of course, filter is also a good way to get an iterator.
filter receives a function and a sequence. It applies the function to each element in turn, and decides whether to retain or discard the element depending on whether the function's return value is True or False.
Here is an example (get the odd numbers from a tuple):

list(filter(lambda x: x % 2 == 1, (1, 2, 4, 5, 6, 9, 10, 15)))
# result: [1, 5, 9, 15]

Note: filter can also handle iterators. Iterators are sometimes better than sequences.
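On that last point: in Python 3, filter itself returns a lazy iterator, so matches are produced on demand rather than materialized up front:

it = filter(lambda x: x % 2 == 1, (1, 2, 4, 5, 6, 9, 10, 15))
print(next(it))   # 1 -- produced lazily, one element at a time
print(list(it))   # [5, 9, 15] -- the remaining matches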
TLDR:
I wrote a library that allows you to do this:
from fluidIter import FluidIterable

fSomeList = FluidIterable(someList)
for tup in fSomeList:
    if determine(tup):
        # remove 'tup' without "breaking" the iteration
        fSomeList.remove(tup)
        # tup has also been removed from 'someList'
        # as well as 'fSomeList'
It's best to use another method if possible that doesn't require modifying your iterable while iterating over it, but for some algorithms it might not be that straightforward. And so if you are sure that you really do want the code pattern described in the original question, it is possible.
Should work on all mutable sequences not just lists.
Full answer:
Edit: The last code example in this answer gives a use case for why you might sometimes want to modify a list in place rather than use a list comprehension. The first part of the answer serves as a tutorial on how an array can be modified in place.
The solution follows on from this answer (for a related question) from senderle, which explains how the array index is updated while iterating through a list that has been modified. The solution below is designed to correctly track the array index even if the list is modified.
Download fluidIter.py from here https://github.com/alanbacon/FluidIterator; it is just a single file, so there is no need to install git. There is no installer, so you will need to make sure that the file is in the Python path yourself. The code has been written for Python 3 and is untested on Python 2.
from fluidIter import FluidIterable

l = [0, 1, 2, 3, 4, 5, 6, 7, 8]
fluidL = FluidIterable(l)
for i in fluidL:
    print('initial state of list on this iteration: ' + str(fluidL))
    print('current iteration value: ' + str(i))
    print('popped value: ' + str(fluidL.pop(2)))
    print(' ')
print('Final List Value: ' + str(l))
This will produce the following output:
initial state of list on this iteration: [0, 1, 2, 3, 4, 5, 6, 7, 8]
current iteration value: 0
popped value: 2
initial state of list on this iteration: [0, 1, 3, 4, 5, 6, 7, 8]
current iteration value: 1
popped value: 3
initial state of list on this iteration: [0, 1, 4, 5, 6, 7, 8]
current iteration value: 4
popped value: 4
initial state of list on this iteration: [0, 1, 5, 6, 7, 8]
current iteration value: 5
popped value: 5
initial state of list on this iteration: [0, 1, 6, 7, 8]
current iteration value: 6
popped value: 6
initial state of list on this iteration: [0, 1, 7, 8]
current iteration value: 7
popped value: 7
initial state of list on this iteration: [0, 1, 8]
current iteration value: 8
popped value: 8
Final List Value: [0, 1]
Above we have used the pop method on the fluid list object. Other common iterable methods are also implemented such as del fluidL[i], .remove, .insert, .append, .extend. The list can also be modified using slices (sort and reverse methods are not implemented).
The only condition is that you must only modify the list in place, if at any point fluidL or l were reassigned to a different list object the code would not work. The original fluidL object would still be used by the for loop but would become out of scope for us to modify.
i.e.
fluidL[2] = 'a' # is OK
fluidL = [0, 1, 'a', 3, 4, 5, 6, 7, 8] # is not OK
If we want to access the current index value of the list we cannot use enumerate, as this only counts how many times the for loop has run. Instead we will use the iterator object directly.
fluidArr = FluidIterable([0, 1, 2, 3])
# get the iterator first so we can query the current index
fluidArrIter = fluidArr.__iter__()
for i, v in enumerate(fluidArrIter):
    print('enum: ', i)
    print('current val: ', v)
    print('current ind: ', fluidArrIter.currentIndex)
    print(fluidArr)
    fluidArr.insert(0, 'a')
    print(' ')
print('Final List Value: ' + str(fluidArr))
This will output the following:
enum: 0
current val: 0
current ind: 0
[0, 1, 2, 3]
enum: 1
current val: 1
current ind: 2
['a', 0, 1, 2, 3]
enum: 2
current val: 2
current ind: 4
['a', 'a', 0, 1, 2, 3]
enum: 3
current val: 3
current ind: 6
['a', 'a', 'a', 0, 1, 2, 3]
Final List Value: ['a', 'a', 'a', 'a', 0, 1, 2, 3]
The FluidIterable class just provides a wrapper for the original list object. The original object can be accessed as a property of the fluid object like so:
originalList = fluidArr.fixedIterable
More examples / tests can be found in the if __name__ == "__main__": section at the bottom of fluidIter.py. These are worth looking at because they explain what happens in various situations, such as replacing a large section of the list using a slice, or using (and modifying) the same iterable in nested for loops.
As I stated to start with: this is a complicated solution that will hurt the readability of your code and make it more difficult to debug. Therefore other solutions such as the list comprehensions mentioned in David Raznick's answer should be considered first. That being said, I have found times where this class has been useful to me and has been easier to use than keeping track of the indices of elements that need deleting.
Edit: As mentioned in the comments, this answer does not really present a problem for which this approach provides a solution. I will try to address that here:
List comprehensions provide a way to generate a new list but these approaches tend to look at each element in isolation rather than the current state of the list as a whole.
i.e.
newList = [i for i in oldList if testFunc(i)]
But what if the result of the testFunc depends on the elements that have been added to newList already? Or on the elements still in oldList that might be added next? There might still be a way to use a list comprehension, but it will begin to lose its elegance, and for me it feels easier to modify a list in place.
The code below is one example of an algorithm that suffers from the above problem. The algorithm will reduce a list so that no element is a multiple of any other element.
randInts = [70, 20, 61, 80, 54, 18, 7, 18, 55, 9]
fRandInts = FluidIterable(randInts)
fRandIntsIter = fRandInts.__iter__()
# for each value in the list (outer loop)
# test against every other value in the list (inner loop)
for i in fRandIntsIter:
    print(' ')
    print('outer val: ', i)
    innerIntsIter = fRandInts.__iter__()
    for j in innerIntsIter:
        innerIndex = innerIntsIter.currentIndex
        # skip the element that the outer loop is currently on
        # because we don't want to test a value against itself
        if not innerIndex == fRandIntsIter.currentIndex:
            # if the test element, j, is a multiple
            # of the reference element, i, then remove 'j'
            if j % i == 0:
                print('remove val: ', j)
                # remove element in place, without breaking the
                # iteration of either loop
                del fRandInts[innerIndex]
            # end if multiple, then remove
        # end if not the same value as outer loop
    # end inner loop
# end outer loop
print('')
print('final list: ', randInts)
The output and the final reduced list are shown below
outer val: 70
outer val: 20
remove val: 80
outer val: 61
outer val: 54
outer val: 18
remove val: 54
remove val: 18
outer val: 7
remove val: 70
outer val: 55
outer val: 9
remove val: 18
final list: [20, 61, 7, 55, 9]
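For comparison, the same reduction can also be written without modifying a list during iteration, by building the kept values incrementally. This is a sketch of my own, following the build-a-new-list pattern from the earlier answers; for this input it yields the same survivors:

def reduce_multiples(values):
    # keep a value only if no already-kept value divides it, and drop any
    # previously kept values that are multiples of the new one
    kept = []
    for v in values:
        if any(v % k == 0 for k in kept):
            continue
        kept = [k for k in kept if k % v != 0] + [v]
    return kept

print(reduce_multiples([70, 20, 61, 80, 54, 18, 7, 18, 55, 9]))
# [20, 61, 7, 55, 9]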
For anything that has the potential to be really big, I use the following.
import numpy as np

orig_list = np.array([1, 2, 3, 4, 5, 100, 8, 13])
remove_me = [100, 1]
# np.delete removes by *index*; to drop by value, use a boolean mask instead
cleaned = orig_list[~np.isin(orig_list, remove_me)]
print(cleaned)  # [ 2  3  4  5  8 13]
That should be significantly faster than anything else.
In some situations, where you're doing more than simply filtering a list one item at a time, you want the iteration itself to change while iterating.
Here is an example where copying the list beforehand is incorrect, reverse iteration is impossible and a list comprehension is also not an option.
""" Sieve of Eratosthenes """
def generate_primes(n):
""" Generates all primes less than n. """
primes = list(range(2,n))
idx = 0
while idx < len(primes):
p = primes[idx]
for multiple in range(p+p, n, p):
try:
primes.remove(multiple)
except ValueError:
pass #EAFP
idx += 1
yield p
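For reference, a quick sanity check of the generator above:

print(list(generate_primes(30)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]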
I can think of three approaches to solve your problem. As an example, I will create a random list of tuples somelist = [(1,2,3), (4,5,6), (3,6,6), (7,8,9), (15,0,0), (10,11,12)]. The condition that I choose is that the sum of the elements of a tuple equals 15. In the final list we will only have those tuples whose sum is not equal to 15.
What I have chosen is a randomly chosen example. Feel free to change the list of tuples and the condition that I have chosen.
Method 1.> Use the framework that you suggested (where one fills in code inside a for loop). I use a small piece of code with del to delete a tuple that meets the said condition. However, this method will miss a tuple (which satisfies the said condition) if two consecutively placed tuples meet the given condition.
for tup in somelist:
    if sum(tup) == 15:
        del somelist[somelist.index(tup)]

print(somelist)
>>> [(1, 2, 3), (3, 6, 6), (7, 8, 9), (10, 11, 12)]
Method 2.> Construct a new list which contains elements (tuples) where the given condition is not met (this is the same thing as removing elements of list where the given condition is met). Following is the code for that:
newlist1 = [somelist[i] for i in range(len(somelist)) if sum(somelist[i]) != 15]
print(newlist1)
>>> [(1, 2, 3), (7, 8, 9), (10, 11, 12)]
Method 3.> Find the indices where the given condition is met, and then remove the elements (tuples) corresponding to those indices. Following is the code for that:

indices = [i for i in range(len(somelist)) if sum(somelist[i]) == 15]
newlist2 = [tup for j, tup in enumerate(somelist) if j not in indices]
print(newlist2)
>>> [(1, 2, 3), (7, 8, 9), (10, 11, 12)]
Method 1 and method 2 are faster than method 3. Method 2 and method 3 are more reliable than method 1 (which, as noted, can skip tuples). I prefer method 2. For the aforementioned example, time(method1) : time(method2) : time(method3) = 1 : 1 : 1.7.
If you will use the new list later, you can simply set the unwanted elements to None, and then skip them in the later loop, like this:

for i in range(len(li)):
    if determine(li[i]):
        li[i] = None

for elem in li:
    if elem is None:
        continue
    # process elem

This way, you don't need to copy the list, and it's easier to understand.

Create 20 unique bingo cards

I'm trying to create 20 unique cards with numbers, but I'm struggling a bit. Basically I need to create 20 unique 3x3 matrices with numbers 1-10 in the first column, numbers 11-20 in the second column and 21-30 in the third column. Any ideas? I'd prefer to have it done in R, especially as I don't know Visual Basic. In Excel I know how to generate the cards, but I'm not sure how to ensure they are unique.
It seems to be quite precise and straightforward to me. Anyway, I needed to create 20 matrices that would look like:
[,1] [,2] [,3]
[1,] 5 17 23
[2,] 8 18 22
[3,] 3 16 24
Each of the matrices should be unique and each of the columns should consist of three unique numbers ( the 1st column - numbers 1-10, the 2nd column 11-20, the 3rd column - 21-30).
Generating random numbers is easy; the question is how to make sure that the generated cards are unique. Please have a look at the post that I chose as the answer, as it gives a thorough explanation of how to achieve it.
(N.B. : I misread "rows" instead of "columns", so the following code and explanation will deal with matrices with random numbers 1-10 on 1st row, 11-20 on 2nd row etc., instead of columns, but it's exactly the same just transposed)
This code should guarantee uniqueness and good randomness :
library(gtools)

# helper function
getKthPermWithRep <- function(k, n, r){
  k <- k - 1
  if(n^r < k){
    stop('k is greater than possible permutations')
  }
  v <- rep.int(0, r)
  index <- length(v)
  while(k != 0){
    remainder <- k %% n
    k <- k %/% n
    v[index] <- remainder
    index <- index - 1
  }
  return(v + 1)
}
# get all possible permutations of 10 elements taken 3 at a time
# (singlerowperms = 720)
allperms <- permutations(10, 3)
singlerowperms <- nrow(allperms)

# get 20 random and unique bingo cards
cards <- lapply(sample.int(singlerowperms^3, 20), FUN = function(k){
  perm2use <- getKthPermWithRep(k, singlerowperms, 3)
  m <- allperms[perm2use, ]
  m[2, ] <- m[2, ] + 10
  m[3, ] <- m[3, ] + 20
  return(m)
  # if you want to transpose the result just do:
  # return(t(m))
})
Explanation
(disclaimer tl;dr)
To guarantee both randomness and uniqueness, one safe approach is to generate all the possible bingo cards and then choose randomly among them without replacement.
To generate all the possible cards, we should :
generate all the possibilities for each row of 3 elements
get the cartesian product of them
Step (1) can be easily obtained using the function permutations of package gtools (see the object allperms in the code). Note that we just need the permutations for the first row (i.e. 3 elements taken from 1-10), since the permutations of the other rows can be obtained from the first by adding 10 and 20 respectively.
Step (2) is also easy to get in R, but let's first consider how many possibilities will be generated. Step (1) returned 720 cases for each row, so in the end we will have 720*720*720 = 720^3 = 373,248,000 possible bingo cards!
Generating all of them is not practical, since the memory required would be huge; thus we need to find a way to pick 20 random elements from this big range of possibilities without actually keeping them all in memory.
The solution comes from the function getKthPermWithRep, which, given an index k, returns the k-th permutation with repetition of r elements taken from 1:n (note that in this case a permutation with repetition corresponds to the cartesian product).
e.g.
# all permutations with repetition of 2 elements in 1:3 are
permutations(n = 3, r = 2,repeats.allowed = TRUE)
# [,1] [,2]
# [1,] 1 1
# [2,] 1 2
# [3,] 1 3
# [4,] 2 1
# [5,] 2 2
# [6,] 2 3
# [7,] 3 1
# [8,] 3 2
# [9,] 3 3
# using the getKthPermWithRep you can get directly the k-th permutation you want :
getKthPermWithRep(k=4,n=3,r=2)
# [1] 2 1
getKthPermWithRep(k=8,n=3,r=2)
# [1] 3 2
Hence now we just choose 20 random indexes in the range 1:720^3 (using sample.int function), then for each of them we get the corresponding permutation of 3 numbers taken from 1:720 using function getKthPermWithRep.
Finally these triplets of numbers, can be converted to actual card rows by using them as indexes to subset allPerms and get our final matrix (after, of course, adding +10 and +20 to the 2nd and 3rd row).
Bonus
Explanation of getKthPermWithRep
If you look at the example above (permutations with repetition of 2 elements in 1:3) and subtract 1 from every number in the results, you get this:
> permutations(n = 3, r = 2,repeats.allowed = T) - 1
[,1] [,2]
[1,] 0 0
[2,] 0 1
[3,] 0 2
[4,] 1 0
[5,] 1 1
[6,] 1 2
[7,] 2 0
[8,] 2 1
[9,] 2 2
If you consider each number in a row as a digit, you can notice that those rows (00, 01, 02, ...) are all the numbers from 0 to 8, represented in base 3 (yes, 3 as n). So, when you ask for the k-th permutation with repetition of r elements in 1:n, you are also asking to translate k-1 into base n and return the digits increased by 1.
Therefore, given the algorithm to change any number from base 10 to base n :
changeBase <- function(num, base){
  v <- NULL
  while(num != 0){
    remainder <- num %% base  # assume base > 1
    num <- num %/% base       # integer division
    v <- c(remainder, v)
  }
  if(is.null(v)){
    return(0)
  }
  return(v)
}
you can easily obtain getKthPermWithRep function.
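For readers who want the same digit trick outside R, here is a small Python sketch (the function name is mine); it reproduces the two R examples above:

def kth_perm_with_rep(k, n, r):
    # k-th (1-based) r-tuple over 1..n in lexicographic order:
    # write k-1 in base n and add 1 to each digit
    k -= 1
    digits = [0] * r
    for i in range(r - 1, -1, -1):
        k, digits[i] = divmod(k, n)
    return [d + 1 for d in digits]

print(kth_perm_with_rep(4, 3, 2))  # [2, 1]
print(kth_perm_with_rep(8, 3, 2))  # [3, 2]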
One 3x3 matrix with the desired value range can be generated with the following code:
mat <- matrix(c(sample(1:10,3), sample(11:20,3), sample(21:30, 3)), nrow=3)
Furthermore, you can use a for loop to build a list of 20 such matrices (with 720^3 possibilities per card, duplicates are very unlikely, but note that uniqueness is not enforced here):

mats <- list()
for (i in 1:20) {
  mats[[i]] <- matrix(c(sample(1:10, 3), sample(11:20, 3), sample(21:30, 3)), nrow = 3)
  print(mats[[i]])
}
Well OK I may fall on my face here but I propose a checksum (using Excel).
This is a unique signature for each bingo card, which will remain invariant if the order of numbers within any column is changed without changing the actual numbers. The formula is
=SUM(10^MOD(A2:A4,10)+2*10^MOD(B2:B4,10)+4*10^MOD(C2:C4,10))
where the bingo numbers for the first card are in A2:C4.
The idea is to generate a 10-digit number for each column, then multiply each by a constant and add them to get the signature.
So here I have generated two random bingo cards using a standard formula from here plus two which are deliberately made to be just permutations of each other.
Then I check if any of the signatures are duplicates using the formula
=MAX(COUNTIF(D5:D20,D5:D20))
which shouldn't give an answer of more than 1.
In the unlikely event that there were duplicates, then you would just press F9 and generate some new cards.
All formulae are array formulae and must be entered with Ctrl+Shift+Enter.
Here is an inelegant way to do this: generate all possible combinations and then sample without replacement. (Strictly, these are permutations, not combinations: order does matter in bingo.)
library(dplyr)
library(tidyr)
library(magrittr)
generate_samples = function(n) {
  first = data_frame(first = (n-9):n)
  first %>%
    merge(first %>% rename(second = first)) %>%
    merge(first %>% rename(third = first)) %>%
    sample_n(20)
}

suffix = function(df, suffix)
  df %>%
    setNames(names(.) %>%
               paste0(suffix))

generate_samples(10) %>% suffix(10) %>%
  bind_cols(generate_samples(20) %>% suffix(20)) %>%
  bind_cols(generate_samples(30) %>% suffix(30)) %>%
  rowwise %>%
  do(matrix = t(.) %>% matrix(3)) %>%
  use_series(matrix)

Understanding PsychoPy's data logging

I have a test PsychoPy Builder script that I am using to investigate some counter-intuitive behaviour. The structure is four routines:
"Init", not in a loop, the following code in "Begin Experiment":
x = 0
y = 0
z = 0
foo = [0, 0, 0]
"One", in a loop, the following code in "End Routine":
x = x + 1
foo[0] = foo[0] + 1
thisExp.addData("x", x)
thisExp.addData("y", y)
thisExp.addData("z", z)
thisExp.addData("foo", foo)
"Two", in a loop, the following code in "End Routine":
y = y + 2
foo[1] = foo[1] + 2
thisExp.addData("x", x)
thisExp.addData("y", y)
thisExp.addData("z", z)
thisExp.addData("fooY", foo[1])
thisExp.addData("foo", foo)
"Three", in a loop, the following code in "End Routine":
z = z + 3
foo[2] = foo[2] + 3
thisExp.addData("x", x)
thisExp.addData("y", y)
thisExp.addData("z", z)
thisExp.addData("foo", foo)
There is no other code, no other components. The routines "One", "Two", and "Three" form a loop in that order executed five times. The relevant columns of the CSV output file are as follows:
trials.thisRepN  trials.thisTrialN  trials.thisN  trials.thisIndex  x  y   z   foo          fooY
0                0                  0             0                 1  2   3   [5, 10, 15]  2
1                0                  1             0                 2  4   6   [5, 10, 15]  4
2                0                  2             0                 3  6   9   [5, 10, 15]  6
3                0                  3             0                 4  8   12  [5, 10, 15]  8
4                0                  4             0                 5  10  15  [5, 10, 15]  10
Is this the expected output? If so, why? Note that the individual variables, x, y, and z, are displaying updated values each time through the loop (at the end of the loop), while the list foo shows only the final value after the loop iterates all five times, but it shows this in every line. But calling out individual elements of the list displays as individual variables do.
What is the logic and rationale behind this?
Is there a way to make the list output perform as the others do?
Is there a way to force the output to capture/display any of these variables as they are when the addData() is invoked rather than waiting until the end of the loop?
I think I know what is going wrong here. It's probably because python assigns by reference rather than copy. This is explained in detail elsewhere but briefly,
original = [1, 2]
new = original   # new is simply a reference to original! It is not a copy.
new[0] = 'Oops'  # original is now ['Oops', 2], as is new (which is just a reference, or pointer)
In your case, the TrialHandler receives the reference, which simply points to the foo variable that is updated throughout the experiment. Since the log is only saved at the end of the experiment, all the rows in "foo" now point to the foo variable, which by then holds the value [5, 10, 15].
This assignment-by-reference can be extremely beautiful and handy, but it sometimes causes headaches, like in your example. It applies to all Python mutables: lists, dicts, functions, and classes. But not to immutables, like numbers, tuples and strings! That's why your script works for the individual numbers but not for the list.
There are different solutions. The simplest is probably to replace the addData calls with thisExp.addData("foo", tuple(foo)), which converts the mutable list to an immutable tuple. One can also do thisExp.addData("foo", [x for x in foo]). A more all-round solution for all kinds of objects is to run import copy at the beginning of the experiment and then add data like thisExp.addData("foo", copy.copy(foo)) in the other code blocks (if you have a complicated object, use copy.deepcopy instead).
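To make the mechanism concrete, here is a minimal standalone sketch of the same effect without PsychoPy; the plain lists stand in for the TrialHandler's data store, and all names here are mine:

foo = [0, 0, 0]
logged_refs = []
logged_copies = []

for trial in range(3):
    foo[0] += 1
    logged_refs.append(foo)           # stores a reference to the same list
    logged_copies.append(tuple(foo))  # stores an immutable snapshot

print(logged_refs)    # [[3, 0, 0], [3, 0, 0], [3, 0, 0]] -- every row aliases foo
print(logged_copies)  # [(1, 0, 0), (2, 0, 0), (3, 0, 0)] -- per-trial values survive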
There are different solutions. The simplest is probably to replace the addData calls with thisExp.addData("foo", tuple(foo)) which converts the mutable list to an immutable tuple. One can also do thisExp.addData("foo", [x for x in foo]). A more all-round solution for all kinds of objects is to run import copy in the beginning of the experiment and then add data like thisExp.addData("foo", copy.copy(foo)) in the other codeblocks (if you have a complicated object, use copy.deepcopy instead).