How do I combine and aggregate Data Frame rows - sql

I have a data frame which looks somewhat like this:
endPoint power time
device1 -4 0
device2 3 0
device3 -2 0
device4 0 0
device5 5 0
device6 -5 0
device1 4 1
device2 -3 1
device3 5 1
device4 -2 1
device5 1 1
device6 4 1
....
device1 6 x
device2 -5 x
device3 4 x
device4 3 x
device5 -1 x
device6 1 x
I want to change it into something like this:
span powerAboveThreshold time
device1-device3 true 0
device2-device6 true 0
...
devicex-devicey false w
I want to aggregate rows into two new columns and group this by time and span. powerAboveThreshold is true only if the power for every device in the span is above 0 - so if the power for devicex or devicey is at or below 0, it will be false.
As a side-note, there is one span of devices which contains 4 devices - whereas the rest contain just 2 devices. I need to design with this in mind.
device1-device3-device6-device2
I am using the Apache Spark DataFrame API/Spark SQL to accomplish this.
edit:
Could I convert the DataFrame to an RDD and compute it that way?
edit2:
Follow-up questions to Daniel L:
Seems like a great answer from what I have understood so far. I have a few questions:
Will the RDD have the expected structure if I convert it from a DataFrame?
What is going on in this part of the program? .aggregateByKey(Result())((result, sample) => aggregateSample(result, sample), addResults). I see that it runs aggregateSample() with each key-value pair (result, sample), but how does the addResults call work? Will it be called on each item relating to a key to add each successive Result generated by aggregateSample to the previous ones? I don't fully understand.
What is .map(_._2) doing?
In what situation will result.span be empty in the aggregateSample function?
In what situation will res1.span be empty in the addResults function?
Sorry for all of the questions, but I'm new to functional programming, Scala, and Spark so I have a lot to wrap my head around!

I'm not sure you can do the text concatenation as you want on DataFrames (maybe you can), but on a normal RDD you can do this:
val rdd = sc.makeRDD(Seq(
  ("device1", -4, 0),
  ("device2", 3, 0),
  ("device3", -2, 0),
  ("device4", 0, 0),
  ("device5", 5, 0),
  ("device6", -5, 0),
  ("device1", 4, 1),
  ("device2", -3, 1),
  ("device3", 5, 1),
  ("device4", 1, 1),
  ("device5", 1, 1),
  ("device6", 4, 1)))
val spanMap = Map(
  "device1" -> 1,
  "device2" -> 1,
  "device3" -> 1,
  "device4" -> 2,
  "device5" -> 2,
  "device6" -> 1
)
case class Result(var span: String = "", var aboveThreshold: Boolean = true, var time: Int = -1)

def aggregateSample(result: Result, sample: (String, Int, Int)) = {
  result.time = sample._3
  result.aboveThreshold = result.aboveThreshold && (sample._2 > 0)
  if (result.span.isEmpty)
    result.span += sample._1
  else
    result.span += "-" + sample._1
  result
}

def addResults(res1: Result, res2: Result) = {
  res1.aboveThreshold = res1.aboveThreshold && res2.aboveThreshold
  if (res1.span.isEmpty)
    res1.span += res2.span
  else
    res1.span += "-" + res2.span
  res1
}

val results = rdd
  .map(x => (x._3, spanMap.getOrElse(x._1, 0)) -> x) // Create a key to aggregate with, by time and span
  .aggregateByKey(Result())((result, sample) => aggregateSample(result, sample), addResults)
  .map(_._2)

results.collect().foreach(println(_))
And it prints this, which is what I understood you needed:
Result(device4-device5,false,0)
Result(device4-device5,true,1)
Result(device1-device2-device3-device6,false,0)
Result(device1-device2-device3-device6,false,1)
Here I use a map that tells me which devices go together (for your pairings and the 4-device exception); you might want to replace it with some other function, hardcode it as a static function to avoid serialization, or use a broadcast variable.
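If it helps, here is a minimal PySpark sketch of the broadcast-variable option (illustrative only; it assumes a SparkContext named sc and recreates a couple of the sample rows, mirroring the Scala keying step above):

span_map = {"device1": 1, "device2": 1, "device3": 1,
            "device4": 2, "device5": 2, "device6": 1}
bc_span = sc.broadcast(span_map)  # shipped to each executor once, not per task closure
rdd = sc.parallelize([("device1", -4, 0), ("device2", 3, 0)])  # sample rows
keyed = rdd.map(lambda x: ((x[2], bc_span.value.get(x[0], 0)), x))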
=================== Edit ==========================
Seems like a great answer from what I have understood so far.
Feel free to upvote/accept it, it helps me and others looking for things to answer :-)
Will the RDD have the expected structure if I convert it from a DataFrame?
Yes, the main difference is that a DataFrame includes a schema, so it can better optimize the underlying calls; it should be trivial to use this schema directly or map to the tuples I used as an example (I did it mostly for convenience). Hernan just posted another answer that shows some of this (and also copied the initial test data I used, for convenience), so I won't repeat that piece, but as he mentions, your device-span grouping and presentation is tricky, and thus I preferred a more imperative way on an RDD.
What is going on in this part of the program? .aggregateByKey(Result())((result, sample) => aggregateSample(result, sample), addResults). I see that it runs aggregateSample() with each key-value pair (result, sample), but how does the addResults call work? Will it be called on each item relating to a key to add each successive Result generated by aggregateSample to the previous ones? I don't fully understand.
aggregateByKey is a highly optimized function. To avoid shuffling all data from one node to another only to merge it later, it first does local aggregation of samples to a single result per key on each node (the first function). Then it shuffles these per-node results around and adds them up (the second function).
What is .map(_._2) doing?
It simply discards the key from the key/value RDD after aggregation; you no longer care about it, so I just keep the result.
In what situation will result.span be empty in the aggregateSample function?
In what situation will res1.span be empty in the addResults function?
When you do aggregation, you need to provide a "zero" value. For instance, if you were aggregating numbers, Spark would compute (0 + firstValue) + secondValue, etc. The if clause prevents adding a spurious '-' before the first device name, since we only put it between devices. It is no different from dealing with, for instance, one extra comma in a list of items. Check the documentation and samples for aggregateByKey; it will help you a lot.
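If it helps to see that contract in isolation, here is a minimal PySpark sketch (illustrative only, assuming a SparkContext named sc): a zero accumulator, a function folding samples in locally, and a function merging accumulators after the shuffle.

pairs = sc.parallelize([("a", 1), ("a", 2), ("b", 3)])
sums = pairs.aggregateByKey(0,                       # the "zero" accumulator
                            lambda acc, v: acc + v,  # fold one sample in, locally
                            lambda a, b: a + b)      # merge accumulators across partitions
print(sums.collect())  # e.g. [('a', 3), ('b', 3)]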

This is the implementation for DataFrames (without the concatenated names):
// assumes a SQLContext named sqlContext, for .toDF and the $ column syntax
import org.apache.spark.sql.functions.min
import sqlContext.implicits._

val data = Seq(
  ("device1", -4, 0),
  ("device2", 3, 0),
  ("device3", -2, 0),
  ("device4", 0, 0),
  ("device5", 5, 0),
  ("device6", -5, 0),
  ("device1", 4, 1),
  ("device2", -3, 1),
  ("device3", 5, 1),
  ("device4", 1, 1),
  ("device5", 1, 1),
  ("device6", 4, 1)).toDF("endPoint", "power", "time")

val mapping = Seq(
  "device1" -> 1,
  "device2" -> 1,
  "device3" -> 1,
  "device4" -> 2,
  "device5" -> 2,
  "device6" -> 1).toDF("endPoint", "span")

data.as("A").
  join(mapping.as("B"), $"B.endPoint" === $"A.endPoint", "inner").
  groupBy($"B.span", $"A.time").
  agg(min($"A.power" > 0).as("powerAboveThreshold")).
  show()
Concatenated names are quite a bit harder; this requires you either to write your own UDAF (supported in the next version of Spark) or to use a combination of Hive functions.
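For what it's worth, later Spark releases expose the needed Hive-style aggregates directly. A hedged PySpark sketch (assuming Spark 1.6+, where collect_list and concat_ws are available, and data and mapping rebuilt as DataFrames with the same columns):

from pyspark.sql import functions as F

(data.join(mapping, "endPoint")
     .groupBy("span", "time")
     .agg(F.min(F.col("power") > 0).alias("powerAboveThreshold"),
          F.concat_ws("-", F.collect_list("endPoint")).alias("devices"))
     .show())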

How to remove items from a list while iterating?

I'm iterating over a list of tuples in Python, and am attempting to remove them if they meet certain criteria.
for tup in somelist:
    if determine(tup):
        code_to_remove_tup
What should I use in place of code_to_remove_tup? I can't figure out how to remove the item in this fashion.
You can use a list comprehension to create a new list containing only the elements you don't want to remove:
somelist = [x for x in somelist if not determine(x)]
Or, by assigning to the slice somelist[:], you can mutate the existing list to contain only the items you want:
somelist[:] = [x for x in somelist if not determine(x)]
This approach could be useful if there are other references to somelist that need to reflect the changes.
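A quick demonstration of that point:

a = [1, 2, 3, 4]
b = a                                # a second reference to the same list object
a[:] = [x for x in a if x % 2 == 0]  # mutate in place via slice assignment
print(b)                             # [2, 4] -- b sees the change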
Instead of a comprehension, you could also use itertools. In Python 2:
from itertools import ifilterfalse
somelist[:] = ifilterfalse(determine, somelist)
Or in Python 3:
from itertools import filterfalse
somelist[:] = filterfalse(determine, somelist)
The answers suggesting list comprehensions are almost correct, except that they build a completely new list and then give it the same name as the old list; they do not modify the old list in place. That's different from what you'd be doing by selective removal, as in Lennart's suggestion. It's faster, but if your list is accessed via multiple references, the fact that you're just reseating one of the references (and not altering the list object itself) can lead to subtle, disastrous bugs.
Fortunately, it's extremely easy to get both the speed of list comprehensions AND the required semantics of in-place alteration—just code:
somelist[:] = [tup for tup in somelist if determine(tup)]
Note the subtle difference with other answers: this one is not assigning to a barename. It's assigning to a list slice that just happens to be the entire list, thereby replacing the list contents within the same Python list object, rather than just reseating one reference (from the previous list object to the new list object) like the other answers.
You need to take a copy of the list and iterate over it first, or the iteration will fail with what may be unexpected results.
For example (depending on the type of list):
for tup in somelist[:]:
etc....
An example:
>>> somelist = range(10)
>>> for x in somelist:
...     somelist.remove(x)
>>> somelist
[1, 3, 5, 7, 9]
>>> somelist = range(10)
>>> for x in somelist[:]:
...     somelist.remove(x)
>>> somelist
[]
for i in range(len(somelist) - 1, -1, -1):
    if some_condition(somelist, i):
        del somelist[i]
You need to go backwards otherwise it's a bit like sawing off the tree-branch that you are sitting on :-)
Python 2 users: replace range with xrange to avoid building the whole index list up front
Overview of workarounds
Either:
use a linked list implementation/roll your own.
A linked list is the proper data structure to support efficient item removal, and does not force you to make space/time tradeoffs.
A CPython list is implemented with dynamic arrays as mentioned here, which is not a good data type to support removals.
There doesn't seem to be a linked list in the standard library however:
Is there a linked list predefined library in Python?
https://github.com/ajakubek/python-llist
start a new list() from scratch, and .append() back at the end as mentioned at: https://stackoverflow.com/a/1207460/895245
This is time efficient, but less space efficient because it keeps an extra copy of the array around during the iteration.
use del with an index as mentioned at: https://stackoverflow.com/a/1207485/895245
This is more space efficient since it dispenses with the array copy, but it is less time efficient, because removal from a dynamic array requires shifting all following items back by one, which is O(N).
Generally, if you are doing it quick and dirty and don't want to add a custom LinkedList class, you just want to go for the faster .append() option by default unless memory is a big concern.
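If you want to check that tradeoff for your own sizes, a small timeit harness along these lines works (sizes are arbitrary; number=1 so each measurement starts from a fresh list built in setup):

import timeit

setup = "lst = list(range(100000))"
rebuild = "lst[:] = [x for x in lst if x % 2]"
inplace = ("for i in range(len(lst) - 1, -1, -1):\n"
           "    if lst[i] % 2 == 0:\n"
           "        del lst[i]")
print("rebuild:     ", min(timeit.repeat(rebuild, setup=setup, number=1, repeat=5)))
print("in-place del:", min(timeit.repeat(inplace, setup=setup, number=1, repeat=5)))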
Official Python 2 tutorial 4.2. "for Statements"
https://docs.python.org/2/tutorial/controlflow.html#for-statements
This part of the docs makes it clear that:
you need to make a copy of the iterated list to modify it
one way to do it is with the slice notation [:]
If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy. The slice notation makes this especially convenient:
>>> words = ['cat', 'window', 'defenestrate']
>>> for w in words[:]: # Loop over a slice copy of the entire list.
...     if len(w) > 6:
...         words.insert(0, w)
...
>>> words
['defenestrate', 'cat', 'window', 'defenestrate']
Python 2 documentation 7.3. "The for statement"
https://docs.python.org/2/reference/compound_stmts.html#for
This part of the docs says once again that you have to make a copy, and gives an actual removal example:
Note: There is a subtlety when the sequence is being modified by the loop (this can only occur for mutable sequences, i.e. lists). An internal counter is used to keep track of which item is used next, and this is incremented on each iteration. When this counter has reached the length of the sequence the loop terminates. This means that if the suite deletes the current (or a previous) item from the sequence, the next item will be skipped (since it gets the index of the current item which has already been treated). Likewise, if the suite inserts an item in the sequence before the current item, the current item will be treated again the next time through the loop. This can lead to nasty bugs that can be avoided by making a temporary copy using a slice of the whole sequence, e.g.,
for x in a[:]:
    if x < 0: a.remove(x)
However, I disagree with this implementation, since .remove() has to iterate the entire list to find the value.
Could Python do this better?
It seems like this particular Python API could be improved. Compare it, for instance, with:
Java ListIterator::remove which documents "This call can only be made once per call to next or previous"
C++ std::vector::erase which returns a valid iterator to the element after the one removed
both of which make it crystal clear that you cannot modify a list being iterated except with the iterator itself, and give you efficient ways to do so without copying the list.
Perhaps the underlying rationale is that Python lists are assumed to be dynamic array backed, and therefore any type of removal will be time inefficient anyways, while Java has a nicer interface hierarchy with both ArrayList and LinkedList implementations of ListIterator.
There doesn't seem to be an explicit linked list type in the Python stdlib either: Python Linked List
Your best approach for such an example would be a list comprehension
somelist = [tup for tup in somelist if determine(tup)]
In cases where you're doing something more complex than calling a determine function, I prefer constructing a new list and simply appending to it as I go. For example
newlist = []
for tup in somelist:
    # lots of code here, possibly setting things up for calling determine
    if determine(tup):
        newlist.append(tup)
somelist = newlist
Copying the list using remove might make your code look a little cleaner, as described in one of the answers below. You should definitely not do this for extremely large lists, since this involves first copying the entire list, and also performing an O(n) remove operation for each element being removed, making this an O(n^2) algorithm.
for tup in somelist[:]:
    # lots of code here, possibly setting things up for calling determine
    if determine(tup):
        somelist.remove(tup)
For those who like functional programming:
somelist[:] = filter(lambda tup: not determine(tup), somelist)
or
from itertools import ifilterfalse
somelist[:] = list(ifilterfalse(determine, somelist))
I needed to do this with a huge list, and duplicating the list seemed expensive, especially since in my case the number of deletions would be few compared to the items that remain. I took this low-level approach.
array = [lots of stuff]
arraySize = len(array)
i = 0
while i < arraySize:
    if someTest(array[i]):
        del array[i]
        arraySize -= 1
    else:
        i += 1
What I don't know is how efficient a couple of deletes are compared to copying a large list. Please comment if you have any insight.
Most of the answers here want you to create a copy of the list. I had a use case where the list was quite long (110K items) and it was smarter to keep reducing the list instead.
First of all, you'll need to replace the for loop with a while loop:
i = 0
while i < len(somelist):
    if determine(somelist[i]):
        del somelist[i]
    else:
        i += 1
The value of i is not changed in the if block because you want to read the value of the new item from the same index once the old item has been deleted.
It might be smart to also just create a new list if the current list item meets the desired criteria.
so:
for item in originalList:
    if (item != badValue):
        newList.append(item)
and to avoid having to re-code the entire project with the new list's name:
originalList[:] = newList
note, from Python documentation:
copy.copy(x)
Return a shallow copy of x.
copy.deepcopy(x)
Return a deep copy of x.
This answer was originally written in response to a question which has since been marked as duplicate:
Removing coordinates from list on python
There are two problems in your code:
1) When using remove(), you attempt to remove integers whereas you need to remove a tuple.
2) The for loop will skip items in your list.
Let's run through what happens when we execute your code:
>>> L1 = [(1,2), (5,6), (-1,-2), (1,-2)]
>>> for (a,b) in L1:
...     if a < 0 or b < 0:
...         L1.remove(a,b)
...
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
TypeError: remove() takes exactly one argument (2 given)
The first problem is that you are passing both 'a' and 'b' to remove(), but remove() only accepts a single argument. So how can we get remove() to work properly with your list? We need to figure out what each element of your list is. In this case, each one is a tuple. To see this, let's access one element of the list (indexing starts at 0):
>>> L1[1]
(5, 6)
>>> type(L1[1])
<type 'tuple'>
Aha! Each element of L1 is actually a tuple. So that's what we need to pass to remove(). Tuples in Python are very easy to create: they're simply made by enclosing values in parentheses. "a, b" is not a tuple, but "(a, b)" is a tuple. So we modify your code and run it again:
# The remove line now includes an extra "()" to make a tuple out of "a,b"
L1.remove((a,b))
This code runs without any error, but let's look at the list it outputs:
L1 is now: [(1, 2), (5, 6), (1, -2)]
Why is (1,-2) still in your list? It turns out modifying the list while using a loop to iterate over it is a very bad idea without special care. The reason that (1, -2) remains in the list is that the locations of each item within the list changed between iterations of the for loop. Let's look at what happens if we feed the above code a longer list:
L1 = [(1,2),(5,6),(-1,-2),(1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)]
### Outputs:
L1 is now: [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
As you can infer from that result, every time that the conditional statement evaluates to true and a list item is removed, the next iteration of the loop will skip evaluation of the next item in the list because its values are now located at different indices.
The most intuitive solution is to copy the list, then iterate over the original list and only modify the copy. You can try doing so like this:
L2 = L1
for (a,b) in L1:
    if a < 0 or b < 0:
        L2.remove((a,b))
# Now, remove the original copy of L1 and replace with L2
print L2 is L1
del L1
L1 = L2; del L2
print ("L1 is now: ", L1)
However, the output will be identical to before:
'L1 is now: ', [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
This is because when we created L2, Python did not actually create a new object. Instead, it merely made L2 refer to the same object as L1. We can verify this with 'is', which is different from merely "equals" (==).
>>> L2=L1
>>> L1 is L2
True
We can make a true copy using copy.copy(). Then everything works as expected:
import copy
L1 = [(1,2), (5,6),(-1,-2), (1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)]
L2 = copy.copy(L1)
for (a,b) in L1:
    if a < 0 or b < 0:
        L2.remove((a,b))
# Now, remove the original copy of L1 and replace with L2
del L1
L1 = L2; del L2
>>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Finally, there is one cleaner solution than having to make an entirely new copy of L1. The reversed() function:
L1 = [(1,2), (5,6),(-1,-2), (1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)]
for (a,b) in reversed(L1):
    if a < 0 or b < 0:
        L1.remove((a,b))
print ("L1 is now: ", L1)
>>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Unfortunately, I cannot adequately describe how reversed() works. It returns a 'listreverseiterator' object when a list is passed to it, which walks the list from the last index back to the first. For practical purposes, you can think of it as creating a reversed copy of its argument: since each removal happens at a position the iteration has already passed, the indices still to be visited are not disturbed. This is the solution I recommend.
If you want to delete elements from a list while iterating, use a while-loop so you can alter the current index and end index after each deletion.
Example:
i = 0
length = len(list1)
while i < length:
    if condition:
        list1.remove(list1[i])
        i -= 1
        length -= 1
    i += 1
The other answers are correct that it is usually a bad idea to delete from a list that you're iterating. Reverse iterating avoids some of the pitfalls, but it is much more difficult to follow code that does that, so usually you're better off using a list comprehension or filter.
There is, however, one case where it is safe to remove elements from a sequence that you are iterating: if you're only removing one item while you're iterating. This can be ensured using a return or a break. For example:
for i, item in enumerate(lst):
    if item % 4 == 0:
        foo(item)
        del lst[i]
        break
This is often easier to understand than a list comprehension when you're doing some operations with side effects on the first item in a list that meets some condition and then removing that item from the list immediately after.
If you want to do anything else during the iteration, it may be nice to get both the index (which guarantees you being able to reference it, for example if you have a list of dicts) and the actual list item contents.
inlist = [{'field1': 10, 'field2': 20}, {'field1': 30, 'field2': 15}]
xlist = []
for idx, i in enumerate(inlist):
    # do some stuff with i['field1']
    if somecondition:
        xlist.append(idx)
for i in reversed(xlist):
    del inlist[i]
enumerate gives you access to the item and the index at once. reversed is so that the indices that you're going to later delete don't change on you.
One possible solution, useful if you want not only to remove some things, but also to do something with all elements in a single loop:
alist = ['good', 'bad', 'good', 'bad', 'good']
i = 0
for x in alist[:]:
    if x == 'bad':
        alist.pop(i)
        i -= 1
    # do something cool with x or just print x
    print(x)
    i += 1
A for loop iterates through the indices. Consider a prime-filtering loop over the following list, where you both iterate over the variable lis and remove from it:

lis = [5, 7, 13, 29, 35, 65, 91]
       0  1   2   3   4   5   6

During the 5th iteration, the number 35 is not a prime, so you remove it from the list:

lis.remove(y)

The next value (65) then moves down into the previous index:

lis = [5, 7, 13, 29, 65, 91]
       0  1   2   3   4   5

The 4th iteration is done, so the pointer moves on to the 5th. That's why your loop never covers 65: it has shifted into an index that was already processed. The same goes for referencing the list through another variable, which still points at the original rather than a copy:

ite = lis  # Don't do this: it creates another reference, not a copy

So make a copy of the list using lis[:] and iterate over that while removing from the original. Now you will get:

[5, 7, 13, 29]

The underlying problem is that you removed a value from the list during the iteration, which made the list indices collapse. So you can try a list comprehension instead, which supports all iterables (list, tuple, dict, string, etc.), or you might want to use the built-in filter().
You can try for-looping in reverse so for some_list you'll do something like:
list_len = len(some_list)
for i in range(list_len):
    reverse_i = list_len - 1 - i
    cur = some_list[reverse_i]
    # some logic with cur element
    if some_condition:
        some_list.pop(reverse_i)
This way the index stays aligned and doesn't suffer from the list updates (regardless of whether or not you pop the cur element).
I needed to do something similar, and in my case the problem was memory: I needed to merge multiple dataset objects within a list, after doing some stuff with them, into a new object, and needed to get rid of each entry I was merging to avoid duplicating all of them and blowing up memory. In my case having the objects in a dictionary instead of a list worked fine:
k = range(5)
v = ['a','b','c','d','e']
d = {key: val for key, val in zip(k, v)}
print d
for i in range(5):
    print d[i]
    d.pop(i)
print d
The most effective method is a list comprehension; many people have shown their take on it. Another good way is to get an iterator through filter.
filter receives a function and a sequence. It applies the passed function to each element in turn, and then decides whether to retain or discard the element depending on whether the function's return value is True or False.
There is an example (getting the odd numbers in the tuple):

list(filter(lambda x: x % 2 == 1, (1, 2, 4, 5, 6, 9, 10, 15)))
# result: [1, 5, 9, 15]

Caution: in Python 3, filter returns an iterator rather than a list (hence the list() call above). Iterators are sometimes better than sequences because they evaluate lazily.
TLDR:
I wrote a library that allows you to do this:
from fluidIter import FluidIterable
fSomeList = FluidIterable(someList)
for tup in fSomeList:
    if determine(tup):
        # remove 'tup' without "breaking" the iteration
        fSomeList.remove(tup)
        # tup has also been removed from 'someList'
        # as well as 'fSomeList'
It's best to use another method if possible that doesn't require modifying your iterable while iterating over it, but for some algorithms it might not be that straightforward. So if you are sure that you really do want the code pattern described in the original question, it is possible. It should work on all mutable sequences, not just lists.
Full answer:
Edit: The last code example in this answer gives a use case for why you might sometimes want to modify a list in place rather than use a list comprehension. The first part of the answer serves as a tutorial on how an array can be modified in place.
The solution follows on from this answer (to a related question) from senderle, which explains how the array index is updated while iterating through a list that has been modified. The solution below is designed to correctly track the array index even if the list is modified.
Download fluidIter.py from https://github.com/alanbacon/FluidIterator; it is just a single file, so there is no need to install git. There is no installer, so you will need to make sure that the file is on the Python path yourself. The code has been written for Python 3 and is untested on Python 2.
from fluidIter import FluidIterable
l = [0,1,2,3,4,5,6,7,8]
fluidL = FluidIterable(l)
for i in fluidL:
    print('initial state of list on this iteration: ' + str(fluidL))
    print('current iteration value: ' + str(i))
    print('popped value: ' + str(fluidL.pop(2)))
    print(' ')

print('Final List Value: ' + str(l))
This will produce the following output:
initial state of list on this iteration: [0, 1, 2, 3, 4, 5, 6, 7, 8]
current iteration value: 0
popped value: 2
initial state of list on this iteration: [0, 1, 3, 4, 5, 6, 7, 8]
current iteration value: 1
popped value: 3
initial state of list on this iteration: [0, 1, 4, 5, 6, 7, 8]
current iteration value: 4
popped value: 4
initial state of list on this iteration: [0, 1, 5, 6, 7, 8]
current iteration value: 5
popped value: 5
initial state of list on this iteration: [0, 1, 6, 7, 8]
current iteration value: 6
popped value: 6
initial state of list on this iteration: [0, 1, 7, 8]
current iteration value: 7
popped value: 7
initial state of list on this iteration: [0, 1, 8]
current iteration value: 8
popped value: 8
Final List Value: [0, 1]
Above we have used the pop method on the fluid list object. Other common iterable methods are also implemented, such as del fluidL[i], .remove, .insert, .append, and .extend. The list can also be modified using slices (the sort and reverse methods are not implemented).
The only condition is that you must only modify the list in place; if at any point fluidL or l were reassigned to a different list object, the code would not work. The original fluidL object would still be used by the for loop but would be out of our reach to modify.
i.e.
fluidL[2] = 'a' # is OK
fluidL = [0, 1, 'a', 3, 4, 5, 6, 7, 8] # is not OK
If we want to access the current index value of the list we cannot use enumerate, as this only counts how many times the for loop has run. Instead we will use the iterator object directly.
fluidArr = FluidIterable([0,1,2,3])
# get iterator first so can query the current index
fluidArrIter = fluidArr.__iter__()
for i, v in enumerate(fluidArrIter):
    print('enum: ', i)
    print('current val: ', v)
    print('current ind: ', fluidArrIter.currentIndex)
    print(fluidArr)
    fluidArr.insert(0,'a')
    print(' ')

print('Final List Value: ' + str(fluidArr))
This will output the following:
enum: 0
current val: 0
current ind: 0
[0, 1, 2, 3]
enum: 1
current val: 1
current ind: 2
['a', 0, 1, 2, 3]
enum: 2
current val: 2
current ind: 4
['a', 'a', 0, 1, 2, 3]
enum: 3
current val: 3
current ind: 6
['a', 'a', 'a', 0, 1, 2, 3]
Final List Value: ['a', 'a', 'a', 'a', 0, 1, 2, 3]
The FluidIterable class just provides a wrapper for the original list object. The original object can be accessed as a property of the fluid object like so:
originalList = fluidArr.fixedIterable
More examples / tests can be found in the if __name__ == "__main__": section at the bottom of fluidIter.py. These are worth looking at because they explain what happens in various situations, such as replacing a large section of the list using a slice, or using (and modifying) the same iterable in nested for loops.
As I stated to start with: this is a complicated solution that will hurt the readability of your code and make it more difficult to debug. Therefore other solutions such as the list comprehensions mentioned in David Raznick's answer should be considered first. That being said, I have found times where this class has been useful to me and has been easier to use than keeping track of the indices of elements that need deleting.
Edit: As mentioned in the comments, this answer does not really present a problem for which this approach provides a solution. I will try to address that here:
List comprehensions provide a way to generate a new list, but these approaches tend to look at each element in isolation rather than at the current state of the list as a whole.
i.e.
newList = [i for i in oldList if testFunc(i)]
But what if the result of the testFunc depends on the elements that have already been added to newList? Or on the elements still in oldList that might be added next? There might still be a way to use a list comprehension, but it will begin to lose its elegance, and for me it feels easier to modify a list in place.
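For example, here is a stateful filter whose predicate consults the output built so far (a simple dedupe), which a plain comprehension cannot express cleanly:

oldList = [3, 1, 3, 2, 1]
newList = []
for i in oldList:
    if i not in newList:  # the test depends on newList's current state
        newList.append(i)
print(newList)            # [3, 1, 2]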
The code below is one example of an algorithm that suffers from the above problem. The algorithm will reduce a list so that no element is a multiple of any other element.
randInts = [70, 20, 61, 80, 54, 18, 7, 18, 55, 9]
fRandInts = FluidIterable(randInts)
fRandIntsIter = fRandInts.__iter__()
# for each value in the list (outer loop)
# test against every other value in the list (inner loop)
for i in fRandIntsIter:
    print(' ')
    print('outer val: ', i)
    innerIntsIter = fRandInts.__iter__()
    for j in innerIntsIter:
        innerIndex = innerIntsIter.currentIndex
        # skip the element that the outer loop is currently on
        # because we don't want to test a value against itself
        if not innerIndex == fRandIntsIter.currentIndex:
            # if the test element, j, is a multiple
            # of the reference element, i, then remove 'j'
            if j % i == 0:
                print('remove val: ', j)
                # remove element in place, without breaking the
                # iteration of either loop
                del fRandInts[innerIndex]
            # end if multiple, then remove
        # end if not the same value as outer loop
    # end inner loop
# end outer loop

print('')
print('final list: ', randInts)
The output and the final reduced list are shown below
outer val: 70
outer val: 20
remove val: 80
outer val: 61
outer val: 54
outer val: 18
remove val: 54
remove val: 18
outer val: 7
remove val: 70
outer val: 55
outer val: 9
remove val: 18
final list: [20, 61, 7, 55, 9]
For anything that has the potential to be really big, I use the following.

import numpy as np

orig_list = np.array([1, 2, 3, 4, 5, 100, 8, 13])
remove_me = [100, 1]
# np.delete removes by index, so first locate the indices holding the values
cleaned = np.delete(orig_list, np.flatnonzero(np.isin(orig_list, remove_me)))
print(cleaned)  # [ 2  3  4  5  8 13]

For large arrays that should be significantly faster than any Python-level loop.
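An equivalent boolean-mask form (same assumption that remove_me holds values rather than indices, and NumPy >= 1.13 for np.isin) avoids np.delete entirely:

cleaned = orig_list[~np.isin(orig_list, remove_me)]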
In some situations, where you're doing more than simply filtering a list one item at time, you want your iteration to change while iterating.
Here is an example where copying the list beforehand is incorrect, reverse iteration is impossible and a list comprehension is also not an option.
""" Sieve of Eratosthenes """
def generate_primes(n):
""" Generates all primes less than n. """
primes = list(range(2,n))
idx = 0
while idx < len(primes):
p = primes[idx]
for multiple in range(p+p, n, p):
try:
primes.remove(multiple)
except ValueError:
pass #EAFP
idx += 1
yield p
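A quick usage check of the generator above:

print(list(generate_primes(20)))  # [2, 3, 5, 7, 11, 13, 17, 19]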
I can think of three approaches to solve your problem. As an example, I will create a random list of tuples somelist = [(1,2,3), (4,5,6), (3,6,6), (7,8,9), (15,0,0), (10,11,12)]. The condition that I choose is that the sum of the elements of a tuple is 15. In the final list we will only have those tuples whose sum is not equal to 15.
This is just a randomly chosen example; feel free to change the list of tuples and the condition.
Method 1.> Use the framework that you suggested (where one fills in code inside a for loop). I use a small snippet with del to delete a tuple that meets the said condition. However, this method will miss a tuple (one which satisfies the said condition) if two consecutively placed tuples both meet the given condition.
for tup in somelist:
    if sum(tup) == 15:
        del somelist[somelist.index(tup)]

print somelist
>>> [(1, 2, 3), (3, 6, 6), (7, 8, 9), (10, 11, 12)]
Method 2.> Construct a new list which contains elements (tuples) where the given condition is not met (this is the same thing as removing elements of list where the given condition is met). Following is the code for that:
newlist1 = [somelist[tup] for tup in range(len(somelist)) if(sum(somelist[tup])!=15)]
print newlist1
>>>[(1, 2, 3), (7, 8, 9), (10, 11, 12)]
Method 3.> Find the indices where the given condition is met, and then remove the elements (tuples) corresponding to those indices. Following is the code for that.
indices = [i for i in range(len(somelist)) if(sum(somelist[i])==15)]
newlist2 = [tup for j, tup in enumerate(somelist) if j not in indices]
print newlist2
>>>[(1, 2, 3), (7, 8, 9), (10, 11, 12)]
Methods 1 and 2 are faster than method 3, but methods 2 and 3 are more dependable than method 1 (which can skip tuples). I prefer method 2. For the aforementioned example, time(method1) : time(method2) : time(method3) = 1 : 1 : 1.7
If you will use the new list later, you can simply set the elements to None and then skip them in the later loop, like this (note that you must assign through the index; rebinding the bare loop variable would not touch the list):

for i, elem in enumerate(li):
    if determine(elem):
        li[i] = None  # mark as 'removed'

for elem in li:
    if elem is None:
        continue
    # process elem

This way you don't need to copy the list, and it's easier to understand.

Is there an efficient way for pandas to get tail rows with a condition

I want to get tail rows with a condition
For example:
I want to get all negative tail rows from a column 'A' like:
test = pd.DataFrame({'A':[-8, -9, -10, 1, 2, 3, 0, -1,-2,-3]})
I expect a 'method' to get new frame like:
A
0 -1
1 -2
2 -3
Note that it is not certain how many 'negative' numbers are in the tail, so I cannot just run test.tail(3).
It looks like the 'tail()' function pandas provides can only run with a given number.
But my input data frame might be so large that I don't want to run a simple loop to check elements one by one.
Is there a smart way to do that?
Is this what you wanted?
test = pd.DataFrame({'A':[-8, -9, -10, 1, 2, 3, 0, -1,-2,-3]})
test = test.iloc[::-1]
test.loc[test.index.max():test[test['A'].ge(0)].index[0]+1]
Output:
A
9 -3
8 -2
7 -1
edit: if you want to get it back into the original order:
test.loc[test.index.max():test[test['A'].ge(0)].index[0]+1].iloc[::-1]
A
7 -1
8 -2
9 -3
Optionally also add .reset_index(drop=True) if you need an index starting at 0.
What's the tail for? It seems like you just need the negative numbers
test.query("A < 0")
Update: find where the sign changes, split the array, and choose the last piece.
import numpy as np

split_points = (test.A.shift(1) < 0) == (test.A < 0)
np.split(test, split_points.loc[lambda x: x == False].index.tolist())[-1]
Output:
A
7 -1
8 -2
9 -3
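For comparison, a shorter sketch of the same idea (assuming the default integer RangeIndex and at least one non-negative value in A): find the label of the last row where A >= 0 and slice everything after it.

import pandas as pd

test = pd.DataFrame({'A': [-8, -9, -10, 1, 2, 3, 0, -1, -2, -3]})
last_nonneg = test['A'].ge(0)[::-1].idxmax()  # label of the last row with A >= 0
print(test.loc[last_nonneg + 1:])             # rows 7, 8, 9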
Just sharing a performance comparison of the two answers given above. Thanks Patryk and Macro.
I improved my test above and did another round, as I felt the old testing sample size was too small and the %%time measurement might not be accurate.
My new test uses a very big head of 10,000,000 numbers and a tail with 3 negative numbers, so it can show how the whole data frame size impacts the overall performance.
The code is like below:
%%time
arr = np.arange(1,10000000,1)
arr = np.concatenate((arr, [-2,-3,-4]))
test = pd.DataFrame({'A':arr})
test = test.iloc[::-1]
test.loc[test.index.max():test[test['A'].ge(0)].index[0]+1].iloc[::-1]
%%time
arr = np.arange(1,10000000,1)
arr = np.concatenate((arr, [-2,-3,-4]))
test = pd.DataFrame({'A':arr})
split_points = (test.A.shift(1)<0) == (test.A<0)
np.split(test, split_points.loc[lambda x: x==False].index.tolist())[-1]
Due to system impacts, I tested 10 times; the above 2 methods perform very similarly. In about 50% of the cases Patryk's code even performs faster.

Create new data frame of percentages from values of an old data frame?

So I want to create a new data frame by adding the values of the Sometimes and Often columns, dividing by the values of the Total column, and multiplying by 100 to get percentages (unless there is a function that automatically does this in R). How would I go about doing that?
You have added an "sql" tag to your question. Should you prefer SQL over R for reasons of experience and/or knowledge, you might be interested in the fabulous sqldf package, which allows you to use SQL syntax within R. You will have to download it first via install.packages("sqldf") and then you can use it as in
expl <- data.frame(sometimes = c(1, 2, 4), often = c(2, 2, 2), total =c(6, 9, 8))
library(sqldf)
sqldf("SELECT 100*(sometimes+often)/total FROM expl")
The far more often used way is to add a percent column to the same data.frame instead of introducing a new one. That way, all data are kept together and you do not lose the link to e.g. the week column.
One way to go about that would be the following one-liner:
expl <- data.frame(sometimes = c(1, 2, 4), often = c(2, 2, 2), total =c(6, 9, 8))
print(expl)
expl$percent = 100 * (expl$sometimes + expl$often)/expl$total
print(expl)
First, it looks as though Total, Sometimes, and Often are character columns because you have commas in them, so you would need to get rid of the commas and convert them to numeric. You can do that as follows (assuming your dataframe is called mydata):
for(i in c("Total","Sometimes","Often")) mydata[[i]] = as.numeric(gsub(",", "", mydata[[i]]))
Then you can use the answer by Bernard:
mydata$percent = 100 * (mydata$Sometimes + mydata$Often)/mydata$Total
Another option using the tidyverse:
library(tidyverse)
newdataframe <- olddataframe %>%
  mutate(percent = (Sometimes+Often)/Total*100) %>%
  select(percent)
But as said before, it is better to keep the percentage column with the other data. In that case, remove the %>% select(percent).

Lua Torch equivalent of np.where()?

I have a ByteTensor and want to grab the indices where there is a 1. In numpy, I could do something like
a = np.array([1,0,1,0,1])
return np.where(a)
which would return (array([0, 2, 4]),). Is this functionality defined in Torch?
(In my particular case, I want to use these indices to index into several different Tensor objects, but it'd be nice to know how to do this in general.)
You can use torch.nonzero, e.g.:
> a = torch.ByteTensor{1,0,1,0,1}
> print(torch.nonzero(a))
1
3
5
[torch.LongTensor of size 3x1]
If you really need to find only the 1s, you can chain a logical operator:
> a = torch.ByteTensor{1,2,1,6,1}
> a:eq(1):nonzero()

Comparing vectors

I am new to R and am trying to find a better solution for accomplishing this fairly simple task efficiently.
I have a data.frame M with 100,000 lines (and many columns, out of which 2 are relevant to this problem; I'll call them M1 and M2). I have another data.frame where column V1, with about 10,000 elements, is essential to this task. My task is this:
For each of the elements in V1, find where it occurs in M2 and pull out the corresponding M1. I am able to do this using a for-loop, and it is terribly slow! I am used to Matlab and Perl, and this is taking for EVER in R! Surely there's a better way. I would appreciate any valuable suggestions in accomplishing this task...
for (x in c(1:length(V$V1))) {
    start[x] = M$M1[M$M2 == V$V1[x]]
}
There is only 1 element that will match, and so I can use the logical statement to directly get the element for the start vector. How can I vectorize this?
Thank you!
Here is another solution, using the same example by @aix.
M[match(V$V1, M$M2),]
To benchmark performance, we can use the R package rbenchmark.
library(rbenchmark)
f_ramnath = function() M[match(V$V1, M$M2),]
f_aix = function() merge(V, M, by.x='V1', by.y='M2', sort=F)
f_chase = function() M[M$M2 %in% V$V1,] # modified to return full data frame
benchmark(f_ramnath(), f_aix(), f_chase(), replications = 10000)
         test replications elapsed relative
2     f_aix()        10000  12.907 7.068456
3   f_chase()        10000   2.010 1.100767
1 f_ramnath()        10000   1.826 1.000000
Another option is to use the %in% operator:
> set.seed(1)
> M <- data.frame(M1 = sample(1:20, 15, FALSE), M2 = sample(1:20, 15, FALSE))
> V <- data.frame(V1 = sample(1:20, 10, FALSE))
> M$M1[M$M2 %in% V$V1]
[1] 6 8 11 9 19 1 3 5
Sounds like you're looking for merge:
> M <- data.frame(M1=c(1,2,3,4,10,3,15), M2=c(15,6,7,8,-1,12,5))
> V <- data.frame(V1=c(-1,12,5,7))
> merge(V, M, by.x='V1', by.y='M2', sort=F)
V1 M1
1 -1 10
2 12 3
3 5 15
4 7 3
If V$V1 might contain values not present in M$M2, you may want to specify all.x=T. This will fill in the missing values with NAs instead of omitting them from the result.