Presenting only a subset of conditions - psychopy

I have a simple experiment with 3 blocks. Instead of putting the stimuli (words) into three different files, I put them all in one file and want to use 'selected rows' to assign them to different tasks (blocks). Below is the flow.
[selected rows: np.random.choice(15, size = 5, replace = False)]
The problem is that after 5 trials (the first block-level condition), all the words are reshuffled, so that words that appeared in block 1 may also appear in block 2 or block 3.
Is there a way to ensure that once a word has been used in a block, it will not appear again in the following blocks? Many thanks!

The issue probably arises because you are effectively randomising multiple times: both with the selection of the subset of rows and with the randomisation of the loops themselves. That is, your loops should be set to sequential rather than random, because you are handling the randomisation yourself by selecting a subset of rows.
Even if you do that, you now have a second problem: if you choose a subset of 5 rows for each block via np.random.choice(), those selections are independent and so it is quite likely that some variable number of rows will be selected multiple times. So what you need to do is ensure that rows are selected without replacement, across the entire experiment.
I'd suggest that you randomly shuffle all 15 row indices in a list, and then apply subsets of that list in each block. That way you can ensure that there will be no multiple selection of rows, e.g.:
row_order = list(range(15))  # list() so it can be shuffled in place (a bare range can't be, in Python 3)
np.random.shuffle(row_order)
Then in each of the three blocks, you would use these subsets:
row_order[0:5]
row_order[5:10]
row_order[10:15]
This gives you a randomised selection in each block but with no duplication of rows.
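If it helps, here is a minimal sketch of how that could sit in a single code component run at the start of the experiment (block_rows is just an illustrative name, not something PsychoPy defines):
import numpy as np
# Shuffle all 15 row indices once, at the start of the experiment
row_order = list(range(15))
np.random.shuffle(row_order)
# Split into three non-overlapping sets of 5 rows, one per block
block_rows = [row_order[0:5], row_order[5:10], row_order[10:15]]
Each block's loop would then point its 'selected rows' field at the relevant slice, e.g. block_rows[0] for the first block (or $block_rows[0], depending on how your version of Builder expects code in that field).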

Related

Natural way of indexing elements in Flink

Is there a built-in way to index and access indices of individual elements of DataStream/DataSet collection?
Like in typical Java collections, where you know that e.g. the 3rd element of an ArrayList can be obtained by ArrayList.get(2), and vice versa ArrayList.indexOf(elem) gives us the index of (the first occurrence of) the specified element. (I'm not asking about extracting elements out of the stream.)
More specifically, when joining DataStreams/DataSets, is there a "natural"/easy way to join elements that came (were created) first, second, etc.?
I know there is a zipWithIndex transformation that assigns sequential indices to elements. I suspect the indices always start at 0? But I also suspect that they aren't necessarily assigned in the order the elements were created in (i.e. by their event time). (It also exists only for DataSets.)
This is what I currently tried:
DataSet<Tuple2<Long, Double>> tempsJoIndexed = DataSetUtils.zipWithIndex(tempsJo);
DataSet<Tuple2<Long, Double>> predsLinJoIndexed = DataSetUtils.zipWithIndex(predsLinJo);
DataSet<Tuple3<Double, Double, Double>> joinedTempsJo = tempsJoIndexed
.join(predsLinJoIndexed).where(0).equalTo(0)...
And it seems to create wrong pairs.
I see some possible approaches, but they're either non-Flink or not very nice:
1. I could of course assign an index to each element upon the stream's creation and have e.g. a stream of Tuples.
2. Work with event-time timestamps. (I suspect there isn't a way to key by timestamps, and even if there was, it wouldn't be useful for joining multiple streams like this unless the timestamps are actually assigned as indices.)
3. We could try "collecting" the stream first, but then we wouldn't be using Flink anymore.
The first approach seems the most viable one, but it also seems redundant given that a stream should by definition be a sequential collection and, as such, its elements should have a sense of order (e.g. "I'm the 36th element because 35 elements already came before me").
I think you're going to have to assign index values to elements, so that you can partition the data sets by this index, and thus ensure that two records which need to be joined are being processed by the same sub-task. Once you've done that, a simple groupBy(index) and reduce() would work.
But assigning increasing ids without gaps isn't trivial if you want to read your source data with parallelism > 1. In that case I'd create a RichMapFunction that uses the runtime context's sub-task id and number of sub-tasks to calculate non-overlapping and monotonic indexes.
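To make the index arithmetic concrete, here is a rough sketch in Python, purely for illustration (the real implementation would be a Java RichMapFunction reading the sub-task index and parallelism from its runtime context):
def make_indexer(subtask_id, num_subtasks):
    # Each sub-task hands out indexes from its own residue class, so indexes
    # are non-overlapping across sub-tasks and strictly increasing within one.
    local_count = 0
    def next_index():
        nonlocal local_count
        idx = subtask_id + local_count * num_subtasks
        local_count += 1
        return idx
    return next_index
# With parallelism 2: sub-task 0 emits 0, 2, 4, ... and sub-task 1 emits 1, 3, 5, ...
idx0, idx1 = make_indexer(0, 2), make_indexer(1, 2)
print([idx0() for _ in range(3)])  # [0, 2, 4]
print([idx1() for _ in range(3)])  # [1, 3, 5]
As noted, this is only gap-free if every sub-task sees the same number of elements; otherwise the ids are merely unique and increasing, which is still enough to partition and join on.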

Resolving performance issues in LINQ with "LIKE"

I have a recognition table containing 25,000 records, and an incoming table of strings that must be recognised using LIKE matching, typically between 200 and 4000 per batch. This used to be done in SQL Server, but I am trying to make it go faster by doing it all in memory; however, LINQ is much slower, taking 5 seconds instead of SQL Server's 250 ms when the incoming table has 200 rows.
The recognition table is declared as follows:
Private mRecognition377LK As New SortedDictionary(Of String, RecognitionItem)(StringComparer.CurrentCultureIgnoreCase)
The actual like comparison is here:
r = mRecognition377LK.FirstOrDefault(Function(v As KeyValuePair(Of String, RecognitionItem)) sTitle Like v.Key).Value
This is executed for every incoming record, and I thought that using v.Key would enable the LINQ engine to skip records that start with a different character, but it seems not.
I can reinvent the wheel and create a collection class that splits the recognition table into its constituent parts.
E.g. if an incoming string is abcdef and we have a recognition record of "abc*", then I could store a collection grouped by the length of the recognition item up to the first star (3), and inside that a collection of recognition items with that length, keyed on the text up to the first star (abc).
So abc* has a prefix length of 3, so:
r = Itemz(3).Recog("abc")
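Sketched in Python just to show the shape (my real code is VB; recognition here stands for the existing pattern -> item mapping and the names are made up):
itemz = {}  # prefix length -> {prefix text (lower-cased) -> recognition item}
for pattern, item in recognition.items():
    prefix = pattern.split('*', 1)[0]  # text up to the first star
    itemz.setdefault(len(prefix), {})[prefix.lower()] = item
def recognise(s):
    # Try each prefix length present in the table against the start of s
    for length, bucket in itemz.items():
        item = bucket.get(s[:length].lower())
        if item is not None:
            return item
    return None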
I think that will work and perform well, but it's a lot of faff, and I'm sure that collection classes and LINQ would have been designed in a way that such a simple thing could be executed quickly without this performance drag.
So my question is: is there a way to make this go fast without resorting to my proposed solution?
DRAFT ANSWER
So, having programmed up several iterations of tries and binary searches, I realised that all this was excessive processing, and that is because...
BOTH LISTS ARE SORTED
... that means we only need one loop to process both lists and join them, i.e. we are doing in C#/VB what SQL Server does when it performs a MERGE join. So now I am pursuing this as a solution and will update here as appropriate.
FINAL UPDATE
The solution is now finished, and you can indeed join as many lists as you like, as long as they are all sorted ascending or all sorted descending on the attributes you are joining, and you can do this in a single loop (because they are sorted). My code is about 1000 lines and very specific, so I'm not going to post a code solution, but for anyone who hits this kind of problem in future: there seems to be nothing in LINQ that will do a merge join that is not based on equality (we have LIKE matching), so writing your own merge join in a single loop is possible when the incoming data is sorted.
The basis of the algorithm is to loop through the table that is your "main table" and advance a pointer into each other list until the text comparison becomes greater than or equal. When it's equal (a match), you don't advance that list again until it no longer matches the main-table item, since one item on the right can join many items on the left. This can be repeated for multiple arrays.
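To show just the shape of that loop, here is a minimal sketch in Python rather than VB/C#, assuming both lists are sorted with the same case-consistent ordinal comparison and that each recognition pattern is a plain prefix such as "abc" standing for "abc*":
def merge_join(incoming, prefixes):
    # incoming: sorted strings to recognise (the "main table")
    # prefixes: sorted prefix patterns ("abc" standing for the LIKE pattern "abc*")
    # Yields (string, prefix) for the first prefix that matches each string.
    j = 0
    for s in incoming:
        # Skip prefixes that sort before s and cannot match it; a skipped
        # prefix can never match a later (larger) incoming string either.
        while j < len(prefixes) and prefixes[j] < s and not s.startswith(prefixes[j]):
            j += 1
        if j < len(prefixes) and s.startswith(prefixes[j]):
            # Do not advance j here: the same prefix may match further strings.
            yield s, prefixes[j]
Because the pointer into the recognition list only ever moves forward, the whole join costs O(n + m) comparisons rather than the nested scan that LINQ was effectively doing.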
It would be nice to see a library where you can pass lambda functions to perform merge joins on multiple sorted arrays. I will consider writing one in future.
The solution runs in 0.007 seconds to join 200 records to a 70,000-record recognition list. With LINQ effectively performing a nested loop, it took 5 seconds. When joining 4000 records to the same 70,000-record recognition list, the performance degrades only slightly, to around 0.01 s, showing the great effectiveness of the merge-join logic. SQL Server took around 250 ms to perform the join.

Looping through columns to conduct data manipulations in a data frame

One struggle I have with Python pandas is repeating the same coding scheme for a large number of columns. For example, the code below creates a new column age_b in a data frame called data. How do I easily loop through a long list (100s or even 1000s) of numeric columns, do exactly the same thing, with the newly created column names being the existing name plus a prefix or suffix string such as "_b"?
labels = [1,2,3,4,5]
data['age_b'] = pd.cut(data['age'],bins=5, labels=labels)
In general, I have many simple data frame column manipulations or calculations, and it's easy to write the code for one column. However, I often want to repeat the same process for dozens of columns, and that's when I get bogged down, because most functions or manipulations work for one column but aren't easily repeatable across many columns. It would be nice if someone could suggest a looping code "structure". Thanks!
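For concreteness, the kind of looping structure being asked about might look roughly like this (data and cols_to_bin are stand-ins for the real frame and column selection):
import numpy as np
import pandas as pd
# Toy stand-in for the real DataFrame
data = pd.DataFrame(np.random.rand(100, 3), columns=['age', 'income', 'score'])
labels = [1, 2, 3, 4, 5]
cols_to_bin = data.select_dtypes(include='number').columns
for col in cols_to_bin:
    # Same operation as for 'age', with the new column name derived from the old one
    data[col + '_b'] = pd.cut(data[col], bins=5, labels=labels)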

Limiting the number of rows returned by `.where(...)` in pytables

I am dealing with tables having up to a few billion rows, and I do a lot of where(numexpr_condition) lookups using PyTables.
We managed to optimise the HDF5 format so that a simple where-query over 600 million rows is done in under 20 s (we are still struggling to find out how to make this faster, but that's another story).
However, since it is still too slow for playing around, I need a way to limit the number of results in a query like this simple example (the foo column is of course indexed):
[row['bar'] for row in table.where('(foo == 234)')]
So this would return, let's say, 100 million entries, and it takes 18 s, which is way too slow for prototyping and playing around.
How would you limit the result to, let's say, 10,000 rows?
The roughly equivalent database query would be:
SELECT bar FROM row WHERE foo = 234 LIMIT 10000
Using the stop= argument is not the way, since it simply takes the first n rows of the table and applies the condition to them. So in the worst case, if the condition is not fulfilled in those rows, I get an empty array:
[row['bar'] for row in table.where('(foo == 234)', stop=10000)]
Slicing the list comprehension is also not the right way, since it first creates the whole list and then applies the slice, which of course is no speed gain at all:
[row['bar'] for row in table.where('(foo == 234)')][:10000]
However, the iterator is consumed lazily as the list comprehension exhausts it, so there is surely a way to hack this together. I just could not find a suitable way of doing it.
Btw. I also tried using zip and range to force a StopIteration:
[row['bar'] for _, row in zip(range(10000), table.where('(foo == 234)'))]
But this just gave me the same row's value repeated.
Since it's an iterable and produces rows on demand, you should be able to cut it short with itertools.islice. Note that the field value has to be extracted inside the loop, because PyTables reuses the same Row object on every iteration:
import itertools
rows = [row['bar'] for row in itertools.islice(table.where('(foo == 234)'), 10000)]

Redis Sorted Sets: How do I get the first intersecting element?

I have a number of large sorted sets (5m-25m) in Redis, and I want to get the first element that appears in a combination of those sets.
E.g. I have 20 sets and want to take sets 1, 5, 7 and 12 and get only the first element of the intersection of just those sets.
It would seem that a ZINTERSTORE followed by a "ZRANGE foo 0 0" would be doing a lot more work than I require, as it would calculate the whole intersection and then return the first element. Is there an alternative solution that does not need to calculate the whole intersection?
There is no direct, native alternative, although I'd suggest this:
Create a hash whose fields are your elements. Upon each addition to one of your sorted sets, increment the relevant field (using HINCRBY). Of course, only do the increment after checking that the element does not already exist in the sorted set you are adding to.
That way, you can quickly know which elements appear in 4 sets.
UPDATE: Now that I think about it again, it might be too expensive to scan your hash for fields with a value of 4 (O(n)). Another option would be to create another sorted set whose members are your elements and whose scores get incremented (as described above, but with ZINCRBY); then you can quickly pull all elements with a score of 4 (using ZRANGEBYSCORE).
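Something like this rough redis-py sketch is what I have in mind (the key name counts and the helper are made up for illustration, and the check-then-increment is not atomic here, so a real version would want a Lua script or WATCH/MULTI):
import redis
r = redis.Redis()
def add_to_set(set_key, member, score):
    # Count the membership only once per set: skip the increment if the
    # element is already in this particular sorted set.
    if r.zscore(set_key, member) is None:
        r.zadd(set_key, {member: score})
        r.zincrby('counts', 1, member)  # counts: element -> number of sets it appears in
# Elements present in 4 sets have a count of 4 and can be pulled directly:
in_four_sets = r.zrangebyscore('counts', 4, 4)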