pre-order and "tree_insert" on a BST - binary-search-tree

If I have a BST (call it T) and run PRE-ORDER on it, how can I show/prove that running "tree_insert" on the sequence I got from the pre-order traversal, starting from an empty tree, gives me back exactly the same tree T that I started with?
Thanks,

For example, let's say you have 3 elements.
When you insert, the first element becomes the root, and each subsequent element (whether smaller or larger) is placed to the left or right accordingly. A pre-order traversal visits the root first, then recursively visits the left subtree, then the right subtree; so the pre-order output lists the root first, then the elements smaller than it, then the larger ones. Now insert the elements again in that pre-order order: you get the same tree back. (The first element inserted becomes the root again, the smaller elements again end up on its left, the larger ones on its right, and the same argument repeats inside each subtree.) You can model this with the 6 different scenarios for 3 elements.
Scenario 1: Elements to insert = {1, 2, 3}
root = 1, right child = 2, right-most = 3
When doing preOrder, 1 is visited first. No left child so 2 is visited next. 2 has no left child so 3 is visited next. (1, 2, 3)
Scenario 2: Elements to insert = {2, 1, 3}
root = 2, left child = 1, right child = 3
When doing preOrder, 2 is visited first. Left child 1 so it's visited next. Right child is 3, so it's visited next. (2, 1, 3)
Scenario 3: Elements to insert = {3, 1, 2}
root = 3, left child = 1, right child of 1 = 2
When doing preOrder, 3 is visited first. Its left child is 1, so it's visited next. 1 has no left child, so its right child 2 is visited next. (3, 1, 2)
There are 3 more scenarios which you can write out and check for yourself.
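If you want to check this mechanically before writing the proof, here is a minimal Python sketch (the class and function names are mine, not from the question) that builds a BST, re-inserts its pre-order sequence into an empty tree, and verifies the two trees are identical:
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def tree_insert(root, key):
    """Standard BST insert; returns the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = tree_insert(root.left, key)
    else:
        root.right = tree_insert(root.right, key)
    return root

def preorder(root):
    """Root, then left subtree, then right subtree."""
    if root is None:
        return []
    return [root.key] + preorder(root.left) + preorder(root.right)

def same_shape(a, b):
    """True if both trees have the same keys in the same positions."""
    if a is None or b is None:
        return a is b
    return (a.key == b.key
            and same_shape(a.left, b.left)
            and same_shape(a.right, b.right))

# Build T from some insertion order, then rebuild from its pre-order sequence.
T = None
for k in [5, 3, 8, 1, 4, 7, 9]:
    T = tree_insert(T, k)

T2 = None
for k in preorder(T):
    T2 = tree_insert(T2, k)

print(same_shape(T, T2))  # True: re-inserting the pre-order sequence reproduces T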

For a given pre-order traversal of an arbitrary binary tree there can be more than one tree, and you would also need the in-order traversal to pin it down uniquely. For a BST, however, the in-order traversal is just the keys in sorted order, so (with distinct keys) the pre-order sequence alone already determines the tree; that is exactly why re-inserting it gives you T back.

How to remove items from a list while iterating?

I'm iterating over a list of tuples in Python, and am attempting to remove them if they meet certain criteria.
for tup in somelist:
    if determine(tup):
        code_to_remove_tup
What should I use in place of code_to_remove_tup? I can't figure out how to remove the item in this fashion.
You can use a list comprehension to create a new list containing only the elements you don't want to remove:
somelist = [x for x in somelist if not determine(x)]
Or, by assigning to the slice somelist[:], you can mutate the existing list to contain only the items you want:
somelist[:] = [x for x in somelist if not determine(x)]
This approach could be useful if there are other references to somelist that need to reflect the changes.
Instead of a comprehension, you could also use itertools. In Python 2:
from itertools import ifilterfalse
somelist[:] = ifilterfalse(determine, somelist)
Or in Python 3:
from itertools import filterfalse
somelist[:] = filterfalse(determine, somelist)
The answers suggesting list comprehensions are almost correct, except that they build a completely new list and then give it the same name as the old list; they do not modify the old list in place. That's different from what you'd be doing by selective removal, as in Lennart's suggestion: it's faster, but if your list is accessed via multiple references, the fact that you're just reseating one of the references and not altering the list object itself can lead to subtle, disastrous bugs.
Fortunately, it's extremely easy to get both the speed of list comprehensions AND the required semantics of in-place alteration—just code:
somelist[:] = [tup for tup in somelist if determine(tup)]
Note the subtle difference with other answers: this one is not assigning to a barename. It's assigning to a list slice that just happens to be the entire list, thereby replacing the list contents within the same Python list object, rather than just reseating one reference (from the previous list object to the new list object) like the other answers.
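To make the distinction concrete, here is a small sketch (the variable names are mine) showing that rebinding the name leaves other references pointing at the old list, while slice assignment mutates the object that every reference sees:
a = [1, 2, 3, 4]
alias = a            # a second reference to the same list object

# Rebinding: 'a' now points at a new list; 'alias' still sees the old one.
a = [x for x in a if x % 2 == 0]
print(a)       # [2, 4]
print(alias)   # [1, 2, 3, 4]

# Slice assignment: the original object is mutated, so every reference agrees.
b = [1, 2, 3, 4]
alias_b = b
b[:] = [x for x in b if x % 2 == 0]
print(b)        # [2, 4]
print(alias_b)  # [2, 4]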
You need to take a copy of the list and iterate over it first, or the iteration will fail with what may be unexpected results.
For example (depends on what type of list):
for tup in somelist[:]:
etc....
An example (Python 2, where range() returns a list):
>>> somelist = range(10)
>>> for x in somelist:
...     somelist.remove(x)
...
>>> somelist
[1, 3, 5, 7, 9]
>>> somelist = range(10)
>>> for x in somelist[:]:
...     somelist.remove(x)
...
>>> somelist
[]
for i in range(len(somelist) - 1, -1, -1):
    if some_condition(somelist, i):
        del somelist[i]
You need to go backwards otherwise it's a bit like sawing off the tree-branch that you are sitting on :-)
Python 2 users: replace range with xrange to avoid building the whole index list in memory
Overview of workarounds
Either:
use a linked list implementation/roll your own.
A linked list is the proper data structure to support efficient item removal, and does not force you to make space/time tradeoffs.
A CPython list is implemented with dynamic arrays as mentioned here, which is not a good data type to support removals.
There doesn't seem to be a linked list in the standard library however:
Is there a linked list predefined library in Python?
https://github.com/ajakubek/python-llist
start a new list() from scratch, .append() the items you want to keep, and assign it back at the end, as mentioned at: https://stackoverflow.com/a/1207460/895245
This is time efficient, but less space efficient because it keeps an extra copy of the array around during iteration.
use del with an index as mentioned at: https://stackoverflow.com/a/1207485/895245
This is more space efficient since it dispenses with the array copy, but it is less time efficient, because removal from a dynamic array requires shifting all the following items over by one, which is O(N).
Generally, if you are doing it quick and dirty and don't want to add a custom LinkedList class, you just want to go for the faster .append() option by default unless memory is a big concern.
Official Python 2 tutorial 4.2. "for Statements"
https://docs.python.org/2/tutorial/controlflow.html#for-statements
This part of the docs makes it clear that:
you need to make a copy of the iterated list to modify it
one way to do it is with the slice notation [:]
If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy. The slice notation makes this especially convenient:
>>> words = ['cat', 'window', 'defenestrate']
>>> for w in words[:]: # Loop over a slice copy of the entire list.
...     if len(w) > 6:
...         words.insert(0, w)
...
>>> words
['defenestrate', 'cat', 'window', 'defenestrate']
Python 2 documentation 7.3. "The for statement"
https://docs.python.org/2/reference/compound_stmts.html#for
This part of the docs says once again that you have to make a copy, and gives an actual removal example:
Note: There is a subtlety when the sequence is being modified by the loop (this can only occur for mutable sequences, i.e. lists). An internal counter is used to keep track of which item is used next, and this is incremented on each iteration. When this counter has reached the length of the sequence the loop terminates. This means that if the suite deletes the current (or a previous) item from the sequence, the next item will be skipped (since it gets the index of the current item which has already been treated). Likewise, if the suite inserts an item in the sequence before the current item, the current item will be treated again the next time through the loop. This can lead to nasty bugs that can be avoided by making a temporary copy using a slice of the whole sequence, e.g.,
for x in a[:]:
    if x < 0: a.remove(x)
However, I disagree with this implementation, since .remove() has to iterate the entire list to find the value.
Could Python do this better?
It seems like this particular Python API could be improved. Compare it, for instance, with:
Java ListIterator::remove which documents "This call can only be made once per call to next or previous"
C++ std::vector::erase which returns a valid iterator to the element after the one removed
both of which make it crystal clear that you cannot modify a list being iterated except with the iterator itself, and give you efficient ways to do so without copying the list.
Perhaps the underlying rationale is that Python lists are assumed to be dynamic array backed, and therefore any type of removal will be time inefficient anyways, while Java has a nicer interface hierarchy with both ArrayList and LinkedList implementations of ListIterator.
There doesn't seem to be an explicit linked list type in the Python stdlib either: Python Linked List
Your best approach for such an example would be a list comprehension
somelist = [tup for tup in somelist if determine(tup)]
In cases where you're doing something more complex than calling a determine function, I prefer constructing a new list and simply appending to it as I go. For example
newlist = []
for tup in somelist:
    # lots of code here, possibly setting things up for calling determine
    if determine(tup):
        newlist.append(tup)
somelist = newlist
Copying the list using remove might make your code look a little cleaner, as described in one of the answers below. You should definitely not do this for extremely large lists, since this involves first copying the entire list, and also performing an O(n) remove operation for each element being removed, making this an O(n^2) algorithm.
for tup in somelist[:]:
    # lots of code here, possibly setting things up for calling determine
    if determine(tup):
        somelist.remove(tup)
For those who like functional programming:
somelist[:] = filter(lambda tup: not determine(tup), somelist)
or
from itertools import ifilterfalse
somelist[:] = list(ifilterfalse(determine, somelist))
I needed to do this with a huge list, and duplicating the list seemed expensive, especially since in my case the number of deletions would be few compared to the items that remain. I took this low-level approach.
array = [lots of stuff]
arraySize = len(array)
i = 0
while i < arraySize:
    if someTest(array[i]):
        del array[i]
        arraySize -= 1
    else:
        i += 1
What I don't know is how efficient a couple of deletes are compared to copying a large list. Please comment if you have any insight.
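For what it's worth, one rough way to answer that question is to time both strategies yourself. The sketch below is my own and only illustrative: the condition, list size, and repetition count are arbitrary stand-ins, not from the code above.
import timeit

def delete_in_place(data):
    """Mirror of the while-loop approach above: delete matching items from the live list."""
    i, n = 0, len(data)
    while i < n:
        if data[i] % 100 == 0:   # stand-in for someTest(array[i])
            del data[i]
            n -= 1
        else:
            i += 1
    return data

def rebuild(data):
    """Build a fresh list with a comprehension instead."""
    return [x for x in data if x % 100 != 0]

data = list(range(50000))
# Both variants work on a fresh copy so the comparison is fair.
print(timeit.timeit(lambda: delete_in_place(list(data)), number=20))
print(timeit.timeit(lambda: rebuild(list(data)), number=20))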
Most of the answers here want you to create a copy of the list. I had a use case where the list was quite long (110K items) and it was smarter to keep reducing the list instead.
First of all, you'll need to replace the for loop with a while loop:
i = 0
while i < len(somelist):
    if determine(somelist[i]):
        del somelist[i]
    else:
        i += 1
The value of i is not incremented in the if branch because, once the old item is deleted, the next item to inspect sits at the same index.
It might also be smart to simply build a new list containing only the items that meet the desired criteria (i.e., the ones you want to keep).
so:
newList = []
for item in originalList:
    if item != badValue:
        newList.append(item)
and to avoid having to re-code the entire project with the new list's name:
originalList[:] = newList
note, from Python documentation:
copy.copy(x)
Return a shallow copy of x.
copy.deepcopy(x)
Return a deep copy of x.
This answer was originally written in response to a question which has since been marked as duplicate:
Removing coordinates from list on python
There are two problems in your code:
1) When using remove(), you attempt to remove integers whereas you need to remove a tuple.
2) The for loop will skip items in your list.
Let's run through what happens when we execute your code:
>>> L1 = [(1,2), (5,6), (-1,-2), (1,-2)]
>>> for (a,b) in L1:
...     if a < 0 or b < 0:
...         L1.remove(a,b)
...
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
TypeError: remove() takes exactly one argument (2 given)
The first problem is that you are passing both 'a' and 'b' to remove(), but remove() only accepts a single argument. So how can we get remove() to work properly with your list? We need to figure out what each element of your list is. In this case, each one is a tuple. To see this, let's access one element of the list (indexing starts at 0):
>>> L1[1]
(5, 6)
>>> type(L1[1])
<type 'tuple'>
Aha! Each element of L1 is actually a tuple, so that's what we need to pass to remove(). Tuples in Python are easy to create: just enclose the comma-separated values in parentheses. In the call remove(a, b), "a, b" is parsed as two separate arguments, whereas "(a, b)" is a single tuple. So we modify your code and run it again:
# The remove line now includes an extra "()" to make a tuple out of "a,b"
L1.remove((a,b))
This code runs without any error, but let's look at the list it outputs:
L1 is now: [(1, 2), (5, 6), (1, -2)]
Why is (1,-2) still in your list? It turns out modifying the list while using a loop to iterate over it is a very bad idea without special care. The reason that (1, -2) remains in the list is that the locations of each item within the list changed between iterations of the for loop. Let's look at what happens if we feed the above code a longer list:
L1 = [(1,2),(5,6),(-1,-2),(1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)]
### Outputs:
L1 is now: [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
As you can infer from that result, every time that the conditional statement evaluates to true and a list item is removed, the next iteration of the loop will skip evaluation of the next item in the list because its values are now located at different indices.
The most intuitive solution is to copy the list, then iterate over the original list and only modify the copy. You can try doing so like this:
L2 = L1
for (a,b) in L1:
    if a < 0 or b < 0:
        L2.remove((a,b))
# Now, remove the original copy of L1 and replace with L2
print L2 is L1
del L1
L1 = L2; del L2
print ("L1 is now: ", L1)
However, the output will be identical to before:
'L1 is now: ', [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
This is because when we created L2, Python did not actually create a new object; it merely made L2 refer to the same object as L1. We can verify this with 'is', which tests identity rather than mere equality (==).
>>> L2=L1
>>> L1 is L2
True
We can make a true copy using copy.copy(). Then everything works as expected:
import copy
L1 = [(1,2), (5,6), (-1,-2), (1,-2), (3,4), (5,7), (-4,4), (2,1), (-3,-3), (5,-1), (0,6)]
L2 = copy.copy(L1)
for (a,b) in L1:
    if a < 0 or b < 0:
        L2.remove((a,b))
# Now, remove the original copy of L1 and replace with L2
del L1
L1 = L2; del L2
>>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Finally, there is one cleaner solution than having to make an entirely new copy of L1. The reversed() function:
L1 = [(1,2), (5,6), (-1,-2), (1,-2), (3,4), (5,7), (-4,4), (2,1), (-3,-3), (5,-1), (0,6)]
for (a,b) in reversed(L1):
    if a < 0 or b < 0:
        L1.remove((a,b))
print ("L1 is now: ", L1)
>>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
reversed() returns a 'listreverseiterator' object when a list is passed to it; it walks the list from the last element to the first without making a copy. Because items are visited from the end, removing the current item never shifts the positions of the items that have yet to be visited, so nothing gets skipped. This is the solution I recommend.
If you want to delete elements from a list while iterating, use a while-loop so you can alter the current index and end index after each deletion.
Example:
i = 0
length = len(list1)
while i < length:
    if condition:
        list1.remove(list1[i])
        i -= 1
        length -= 1
    i += 1
The other answers are correct that it is usually a bad idea to delete from a list that you're iterating over. Reverse iterating avoids some of the pitfalls, but it is much more difficult to follow code that does that, so usually you're better off using a list comprehension or filter.
There is, however, one case where it is safe to remove elements from a sequence that you are iterating: if you're only removing one item while you're iterating. This can be ensured using a return or a break. For example:
for i, item in enumerate(lst):
    if item % 4 == 0:
        foo(item)
        del lst[i]
        break
This is often easier to understand than a list comprehension when you're doing some operations with side effects on the first item in a list that meets some condition and then removing that item from the list immediately after.
If you want to do anything else during the iteration, it can be handy to have both the index (so you can refer back to the item later, for example if you have a list of dicts) and the actual list item contents.
inlist = [{'field1':10, 'field2':20}, {'field1':30, 'field2':15}]
xlist = []  # indices of the items to delete
for idx, i in enumerate(inlist):
    # do some stuff with i['field1']
    if somecondition:
        xlist.append(idx)
for i in reversed(xlist):
    del inlist[i]
enumerate gives you access to the item and the index at once. reversed is so that the indices that you're going to later delete don't change on you.
One possible solution, useful if you want not only to remove some things but also do something with all elements in a single loop:
alist = ['good', 'bad', 'good', 'bad', 'good']
i = 0
for x in alist[:]:
    if x == 'bad':
        alist.pop(i)
        i -= 1
    # do something cool with x or just print x
    print(x)
    i += 1
A for loop iterates by index. Say you are filtering a list variable called lis down to primes, and you remove the non-primes from lis itself while looping over it:
lis = [5, 7, 13, 29, 35, 65, 91]
index: 0  1   2   3   4   5   6
During the iteration at index 4, the value 35 is found not to be prime, so you remove it with lis.remove(y). The next value, 65, then shifts back into index 4:
lis = [5, 7, 13, 29, 65, 91]
index: 0  1   2   3   4   5
The loop has already finished with index 4 and moves on to index 5, so it never examines 65: every value that follows a removal slips into an index the loop has already passed.
Don't try to fix this by assigning the list to another variable either; that only creates another reference to the same list, not a copy:
ite = lis  # don't do this: ite references the same list, it is not a copy
Instead, iterate over a copy made with a slice (lis[:]); the removals from lis then no longer disturb the iteration, and you end up with [5, 7, 13, 29] as expected (see the sketch just below).
Better still, use a list comprehension, which works with any iterable (list, tuple, dict, string, etc.), or the built-in filter().
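Here is a minimal sketch of that prime-filtering example done safely by iterating over a slice copy (the is_prime helper is my own addition for illustration):
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

lis = [5, 7, 13, 29, 35, 65, 91]

# Iterate over a copy (lis[:]) so removals don't shift the items still to visit.
for y in lis[:]:
    if not is_prime(y):
        lis.remove(y)

print(lis)  # [5, 7, 13, 29]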
You can try for-looping in reverse so for some_list you'll do something like:
list_len = len(some_list)
for i in range(list_len):
    reverse_i = list_len - 1 - i
    cur = some_list[reverse_i]
    # some logic with cur element
    if some_condition:
        some_list.pop(reverse_i)
This way the index is aligned and doesn't suffer from the list updates (regardless whether you pop cur element or not).
I needed to do something similar, and in my case the problem was memory: I was merging multiple dataset objects in a list into a new object after doing some work on them, and I needed to get rid of each entry as I merged it to avoid duplicating all of them and blowing up memory. In my case, having the objects in a dictionary instead of a list worked fine:
```
k = range(5)
v = ['a','b','c','d','e']
d = {key: val for key, val in zip(k, v)}
print d
for i in range(5):
    print d[i]
    d.pop(i)
print d
```
The most effective method is a list comprehension, as many people have shown; another good way is to get an iterator through filter().
filter() receives a function and a sequence. It applies the function to each element in turn and decides whether to retain or discard the element depending on whether the function's return value is True or False.
Here is an example (get the odd numbers from a tuple):
list(filter(lambda x:x%2==1, (1, 2, 4, 5, 6, 9, 10, 15)))
# result: [1, 5, 9, 15]
Note: you can also feed filter() an iterator rather than a materialized sequence; iterators are sometimes better than sequences.
TLDR:
I wrote a library that allows you to do this:
from fluidIter import FluidIterable
fSomeList = FluidIterable(someList)
for tup in fSomeList:
if determine(tup):
# remove 'tup' without "breaking" the iteration
fSomeList.remove(tup)
# tup has also been removed from 'someList'
# as well as 'fSomeList'
It's best to use another method if possible, one that doesn't require modifying your iterable while iterating over it, but for some algorithms that might not be straightforward. So if you are sure that you really do want the code pattern described in the original question, it is possible.
It should work on all mutable sequences, not just lists.
Full answer:
Edit: The last code example in this answer gives a use case for why you might sometimes want to modify a list in place rather than use a list comprehension. The first part of the answer serves as a tutorial on how an array can be modified in place.
The solution follows on from this answer (to a related question) from senderle, which explains how the array index is updated while iterating through a list that has been modified. The solution below is designed to correctly track the array index even if the list is modified.
Download fluidIter.py from here https://github.com/alanbacon/FluidIterator; it is just a single file, so there is no need to install git. There is no installer, so you will need to make sure that the file is in the Python path yourself. The code has been written for Python 3 and is untested on Python 2.
from fluidIter import FluidIterable
l = [0,1,2,3,4,5,6,7,8]
fluidL = FluidIterable(l)
for i in fluidL:
    print('initial state of list on this iteration: ' + str(fluidL))
    print('current iteration value: ' + str(i))
    print('popped value: ' + str(fluidL.pop(2)))
    print(' ')

print('Final List Value: ' + str(l))
This will produce the following output:
initial state of list on this iteration: [0, 1, 2, 3, 4, 5, 6, 7, 8]
current iteration value: 0
popped value: 2
initial state of list on this iteration: [0, 1, 3, 4, 5, 6, 7, 8]
current iteration value: 1
popped value: 3
initial state of list on this iteration: [0, 1, 4, 5, 6, 7, 8]
current iteration value: 4
popped value: 4
initial state of list on this iteration: [0, 1, 5, 6, 7, 8]
current iteration value: 5
popped value: 5
initial state of list on this iteration: [0, 1, 6, 7, 8]
current iteration value: 6
popped value: 6
initial state of list on this iteration: [0, 1, 7, 8]
current iteration value: 7
popped value: 7
initial state of list on this iteration: [0, 1, 8]
current iteration value: 8
popped value: 8
Final List Value: [0, 1]
Above we have used the pop method on the fluid list object. Other common iterable methods are also implemented such as del fluidL[i], .remove, .insert, .append, .extend. The list can also be modified using slices (sort and reverse methods are not implemented).
The only condition is that you must only modify the list in place, if at any point fluidL or l were reassigned to a different list object the code would not work. The original fluidL object would still be used by the for loop but would become out of scope for us to modify.
i.e.
fluidL[2] = 'a' # is OK
fluidL = [0, 1, 'a', 3, 4, 5, 6, 7, 8] # is not OK
If we want to access the current index value of the list we cannot use enumerate, as this only counts how many times the for loop has run. Instead we will use the iterator object directly.
fluidArr = FluidIterable([0,1,2,3])
# get the iterator first so we can query the current index
fluidArrIter = fluidArr.__iter__()
for i, v in enumerate(fluidArrIter):
    print('enum: ', i)
    print('current val: ', v)
    print('current ind: ', fluidArrIter.currentIndex)
    print(fluidArr)
    fluidArr.insert(0,'a')
    print(' ')

print('Final List Value: ' + str(fluidArr))
This will output the following:
enum: 0
current val: 0
current ind: 0
[0, 1, 2, 3]
enum: 1
current val: 1
current ind: 2
['a', 0, 1, 2, 3]
enum: 2
current val: 2
current ind: 4
['a', 'a', 0, 1, 2, 3]
enum: 3
current val: 3
current ind: 6
['a', 'a', 'a', 0, 1, 2, 3]
Final List Value: ['a', 'a', 'a', 'a', 0, 1, 2, 3]
The FluidIterable class just provides a wrapper for the original list object. The original object can be accessed as a property of the fluid object like so:
originalList = fluidArr.fixedIterable
More examples / tests can be found in the if __name__ == "__main__": section at the bottom of fluidIter.py. These are worth looking at because they explain what happens in various situations, such as replacing a large section of the list using a slice, or using (and modifying) the same iterable in nested for loops.
As I stated to start with: this is a complicated solution that will hurt the readability of your code and make it more difficult to debug. Therefore other solutions such as the list comprehensions mentioned in David Raznick's answer should be considered first. That being said, I have found times where this class has been useful to me and has been easier to use than keeping track of the indices of elements that need deleting.
Edit: As mentioned in the comments, this answer does not really present a problem for which this approach provides a solution. I will try to address that here:
List comprehensions provide a way to generate a new list but these approaches tend to look at each element in isolation rather than the current state of the list as a whole.
i.e.
newList = [i for i in oldList if testFunc(i)]
But what if the result of the testFunc depends on the elements that have been added to newList already? Or on the elements still in oldList that might be added next? There might still be a way to use a list comprehension, but it will begin to lose its elegance, and for me it feels easier to modify a list in place.
The code below is one example of an algorithm that suffers from the above problem. The algorithm will reduce a list so that no element is a multiple of any other element.
randInts = [70, 20, 61, 80, 54, 18, 7, 18, 55, 9]
fRandInts = FluidIterable(randInts)
fRandIntsIter = fRandInts.__iter__()
# for each value in the list (outer loop)
# test against every other value in the list (inner loop)
for i in fRandIntsIter:
    print(' ')
    print('outer val: ', i)
    innerIntsIter = fRandInts.__iter__()
    for j in innerIntsIter:
        innerIndex = innerIntsIter.currentIndex
        # skip the element that the outer loop is currently on
        # because we don't want to test a value against itself
        if not innerIndex == fRandIntsIter.currentIndex:
            # if the test element, j, is a multiple
            # of the reference element, i, then remove 'j'
            if j % i == 0:
                print('remove val: ', j)
                # remove element in place, without breaking the
                # iteration of either loop
                del fRandInts[innerIndex]
            # end if multiple, then remove
        # end if not the same value as outer loop
    # end inner loop
# end outer loop

print('')
print('final list: ', randInts)
The output and the final reduced list are shown below
outer val: 70
outer val: 20
remove val: 80
outer val: 61
outer val: 54
outer val: 18
remove val: 54
remove val: 18
outer val: 7
remove val: 70
outer val: 55
outer val: 9
remove val: 18
final list: [20, 61, 7, 55, 9]
For anything that has the potential to be really big, I use the following.
import numpy as np
orig_list = np.array([1, 2, 3, 4, 5, 100, 8, 13])
remove_me = [100, 1]
# np.delete works on indices, so first find the indices of the values to drop.
cleaned = np.delete(orig_list, np.flatnonzero(np.isin(orig_list, remove_me)))
print(cleaned)  # [ 2  3  4  5  8 13]
For large numeric arrays this vectorized approach should be significantly faster than a Python-level loop.
In some situations, where you're doing more than simply filtering a list one item at a time, you want your iteration to change while iterating.
Here is an example where copying the list beforehand is incorrect, reverse iteration is impossible and a list comprehension is also not an option.
""" Sieve of Eratosthenes """
def generate_primes(n):
    """ Generates all primes less than n. """
    primes = list(range(2, n))
    idx = 0
    while idx < len(primes):
        p = primes[idx]
        for multiple in range(p + p, n, p):
            try:
                primes.remove(multiple)
            except ValueError:
                pass  # EAFP
        idx += 1
        yield p
I can think of three approaches to solve your problem. As an example, I will create a random list of tuples somelist = [(1,2,3), (4,5,6), (3,6,6), (7,8,9), (15,0,0), (10,11,12)]. The condition that I choose is sum of elements of a tuple = 15. In the final list we will only have those tuples whose sum is not equal to 15.
What I have chosen is a randomly chosen example. Feel free to change the list of tuples and the condition that I have chosen.
Method 1.> Use the framework that you suggested (filling in code inside a for loop). I use a small piece of code with del to delete a tuple that meets the condition. However, this method will miss a tuple (one that satisfies the condition) whenever two consecutively placed tuples both meet the condition.
for tup in somelist:
    if sum(tup) == 15:
        del somelist[somelist.index(tup)]
print somelist
>>> [(1, 2, 3), (3, 6, 6), (7, 8, 9), (10, 11, 12)]
Method 2.> Construct a new list which contains elements (tuples) where the given condition is not met (this is the same thing as removing elements of list where the given condition is met). Following is the code for that:
newlist1 = [somelist[tup] for tup in range(len(somelist)) if(sum(somelist[tup])!=15)]
print newlist1
>>>[(1, 2, 3), (7, 8, 9), (10, 11, 12)]
Method 3.> Find the indices where the given condition is met, and then remove the elements (tuples) corresponding to those indices. Following is the code for that.
indices = [i for i in range(len(somelist)) if(sum(somelist[i])==15)]
newlist2 = [tup for j, tup in enumerate(somelist) if j not in indices]
print newlist2
>>>[(1, 2, 3), (7, 8, 9), (10, 11, 12)]
Methods 1 and 2 are faster than method 3, and methods 2 and 3 are more robust than method 1 (which can skip elements). I prefer method 2. For the aforementioned example, time(method1) : time(method2) : time(method3) = 1 : 1 : 1.7
If you will use the list again later, you can simply set the elements you want to drop to None, and then skip them in the later loop, like this:
for i in range(len(li)):
    if determine(li[i]):
        li[i] = None

for elem in li:
    if elem is None:
        continue
    # process elem
This way you don't need to copy the list, and it's easy to understand.

Inserting into a BST

I want to create a visualization of a BST, but every example I find online stops after inserting only 7 or less values. Let's say I'm doing the following sequence:
insert(5),insert(7),insert(9),insert(8),insert(3),insert(2),insert(4),insert(6),insert(10).
Up until insert(6), I end up with:
My main question is: where do I go from here? Do I add on to my left-most leaf or do I add on to my "lowest" leaf?
Also: according to wikipedia the code for an insertion is:
void insert(Node* node, int value) {
    if (value < node->key) {
        if (node->leftChild == NULL)
            node->leftChild = new Node(value);
        else
            insert(node->leftChild, value);
    } else {
        if (node->rightChild == NULL)
            node->rightChild = new Node(value);
        else
            insert(node->rightChild, value);
    }
}
But according to this, once I'm at 8 and I get insert(3), it would add 3 to the left of 8, as it would compare 3 with the node 9, see that the less-than spot is already taken by 8, then rerun the insertion with 8 being the node compared to, and place the 3 as the left child of 8. But this would just create kind of a list.
Thanks.
What seems to mislead you is that on every insert, you must start from the root (which, in your case, is 5). So, let's take the graph above and try to insert 3 following the algorithm you pasted:
3 < 5, we go left and we meet 3
3 == 3, so we go right (when the value is not less than the node's key, the code above goes right) and we meet 4
3 < 4, we go left. Since 4 has no left child, 3 becomes its left child.
Try to build your tree from zero using the algorithm above. Incidentally, you won't find many examples with n > 10 nodes, because they tend to be exceedingly long without any benefit for the reader.
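If you want to check your work as you go, here is a small Python sketch (the helper names are mine) that runs the question's insertion sequence, always starting each insert from the root, and prints the resulting shape:
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(node, value):
    """Insert starting from the given node (call it with the root every time)."""
    if value < node.key:
        if node.left is None:
            node.left = Node(value)
        else:
            insert(node.left, value)
    else:
        if node.right is None:
            node.right = Node(value)
        else:
            insert(node.right, value)

def show(node, depth=0):
    """Print the tree sideways: right subtree on top, each level indented further."""
    if node is None:
        return
    show(node.right, depth + 1)
    print("    " * depth + str(node.key))
    show(node.left, depth + 1)

root = Node(5)
for v in [7, 9, 8, 3, 2, 4, 6, 10]:
    insert(root, v)

show(root)
# 5 is the root; 3 (with children 2 and 4) hangs to its left,
# 7 (with 6 on its left, and 9 holding 8 and 10) hangs to its right.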

Order of insertion for worst case black height of a red black tree

Let's say we are dealing with the keys 1-15. To get the worst case performance of a regular BST, you would insert the keys in ascending or descending order as follows:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
Then the BST would essentially become a linked list.
For the best case of a BST you would insert the keys in the following order; they are arranged so that each key inserted is the middle of the remaining range: the first is the middle of 1-15, which is 8, then the middles of 1-7 and 9-15, which are 4 and 12, and so on:
8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15
Then the BST would be a well balanced tree with optimal height 3.
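As an aside, that middle-first ordering can be generated mechanically. Here is a minimal Python sketch (the function name is mine) that produces the level-by-level insertion order for any sorted key list:
from collections import deque

def balanced_insert_order(keys):
    """Level-order over the halves: emit each range's middle, then queue its two halves."""
    order = []
    queue = deque([keys])
    while queue:
        segment = queue.popleft()
        if not segment:
            continue
        mid = len(segment) // 2
        order.append(segment[mid])
        queue.append(segment[:mid])
        queue.append(segment[mid + 1:])
    return order

print(balanced_insert_order(list(range(1, 16))))
# [8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]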
The best case for a red black tree can also be constructed with the best case from a BST. But how do we construct the worst case for a red black tree? Is it the same as the worst case for a BST? Is there a specific pattern that will yield the worst case?
You are looking for a skinny tree, right? This can be produced by inserting [1 ... , 2^(n+1)-2] in reverse order.
You won't be able to. A Red-Black Tree keeps itself "bushy", so it would rotate to fix the imbalance. The length of your above worst case for a Red-Black Tree is limited to two elements, but that's still not a "bad" case; it's what's expected, as lg(2) = 1, and you have 1 layer past the root with two elements. As soon as you add the third element, you get this:
B                B
 \              / \
  R      =>    R   R
   \
    R

What is Lazy Binary Search?

I don't know whether the term "Lazy" Binary Search is valid, but I was going through some old materials and I just wanted to know if anyone can explain the algorithm of a Lazy Binary Search and compare it to a non-lazy Binary Search.
Let's say, we have this array of numbers:
2, 11, 13, 21, 44, 50, 69, 88
How to look for the number 11 using a Lazy Binary Search?
Justin was wrong.
The common binarySearch algorithm first checks whether the target is equal to the current middle entry before proceeding to the left or right half if required. The lazy binarySearch algorithm postpones the equality check until the very end.
algorithm lazyBinarySearch(array, n, target)
    left <- 0
    right <- n-1
    while (left < right) do
        mid <- (left+right)/2
        if (target > array[mid]) then
            left <- mid+1
        else
            right <- mid
        endif
    endwhile
    if (target == array[left]) then
        display "found at position", left
    else
        display "not found"
    endif
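For reference, here is the same lazy variant written as a small Python function (a direct translation of the pseudocode above; the function name is mine):
def lazy_binary_search(array, target):
    """Defer the equality test until the search interval has shrunk to one index."""
    left, right = 0, len(array) - 1
    while left < right:
        mid = (left + right) // 2
        if target > array[mid]:
            left = mid + 1
        else:
            right = mid
    if array and array[left] == target:
        return left          # found at this position
    return -1                # not found

print(lazy_binary_search([2, 11, 13, 21, 44, 50, 69, 88], 11))  # 1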
In your case, in an array,
2 11 13 21 44 50 69 88
and you want to search for 11
First we do a trace of common binary search,
index:     0    1    2    3    4    5    6    7
value:     2   11   13   21   44   50   69   88
       left            mid               right
First while loop:
left <= right, so we enter the first while loop. The mid index is (0+7)/2 = 3 by integer division, mid = 3. Straight away we check whether the target 11 equals the mid entry: 11 != 21, so we decide whether to go left or right. Since 11 < 21 we should go left: the left index stays unchanged and the right index becomes mid - 1, right = 3 - 1 = 2. Done with this step.
Second while loop:
left <= right, 0 <= 2, so we enter the second while loop. The mid index is recalculated: (0+2)/2 = 1, mid = 1. At once we do the equality check: the target 11 is the same as the entry at index 1, 11 == 11. We have found the entry, so we break out of the while loop (the left/right/mid indexes no longer matter) and return index 1.
Now we trace the same search with the lazy binarySearch algorithm; the initial array and the left/right indexes are set up the same as before.
First while loop:
left < right, so we enter the first while loop. The mid index is calculated the same way as above, mid = 3. Instead of the equality check of the common binarySearch, this time we only compare against the mid entry: we check whether our target 11 is greater than it, and leave the question of equality to the very end, outside the while loop. We find 11 < 21, so the right index is reset to the mid index, right = 3. Done with this step.
Second while loop:
left < right, 0 < 3, so we enter the second while loop. The mid index is recalculated as mid = (0+3)/2 = 1 by integer division. Again we compare against the mid entry 11 and see that the target is not greater than it. We fall into the else branch and reset the right index to the mid index, right = 1. Done with this step.
Third while loop:
left < right, 0 < 1, so we re-enter the while loop since the condition still holds. The mid index becomes (0+1)/2 = 0, mid = 0. Comparing the target 11 with the mid entry 2, we find it is greater (11 > 2), so the left index is reset to mid + 1, left = 0 + 1 = 1. Done with this step.
With left index = 1 and right index = 1, left = right, the while condition is no longer satisfied, so we do not re-enter the loop. We fall through to the if statement below: the target 11 equals the entry at the left index, 11, so we have found the target and return the left index 1.
As you can see, the lazy binarySearch takes one more pass through the while loop to finally establish that the index is 1. This is what "postpones the equality check" means in the definition I mentioned at the very beginning. Clearly the lazy binarySearch does more work than the common binarySearch before terminating, and the word "lazy" refers to when the check that the target equals the mid entry happens.
However, the lazy binarySearch is preferable under some other circumstances (for example, it does only one comparison per loop iteration instead of two, and with duplicate keys it lands on the first occurrence); it just doesn't pay off in this case.
(The reasoning part of the algorithm is done by me, anyone wishes to copy please do credit)
source: "Data Structures Outside In With Java" by Sesh Venugopal, Prentice Hall
As far as I am aware "Lazy binary search" is just another name for "Binary search".

How do I design a DB Table when my sources are VENN DIAGRAMS?

Imagine 3 circles. Each circle has some numbers
Circle 1 has the following numbers
1, 4, 7, 9
Circle 2 has the following numbers
2, 5, 8, 9
Circle 3 has the following numbers
3, 6, 7, 8, 9
Circle 1 and Circle 2 share the following numbers
10, 9
Circle 1 and Circle 3 share the following numbers
7, 9
Circle 2 and Circle 3 share the following numbers
8, 9
All three circles share
9
Each number represents a symptom, so in my case:
Circle 1's numbers could be symptoms for a short circuit
Circle 2 could be numbers for a component failure
Circle 3 could be numbers for external issues
Each of the three issues shares certain symptoms.
If given #9, we wouldn't be able to deduce the problem but could display a list of all issues involving #9
If given more #'s, we can attempt to show relevant issues.
My problem is how do I put this into a table so my code can look things up.
My Database of choice is SQLite3
#Vincent, the only issue I have is that there are several variables. I have variables called t1, t2, t3, a1, a2, a3. Each of these variables is a symptom. The user interface for my application allows the user to input a value for each variable, and then I want to check the DB. The value of each symptom can be any value in the 3 circles (mentioned in the original problem).
Create 3 tables as:
symptom = (symptom_id, descr)
problem = (problem_id, descr)
problem_symptom = (problem_id, symptom_id)
e.g.
Symptom
Symptom_id   Descr
1            doda
2            dado
3            dada

Problem
Problem_id   Descr
1            Short Circuit
2            Component Failure

Symptom_Problem
Symptom_id   Problem_id
1            1            --- doda is a symptom of Short Circuit
2            1            --- dado is a symptom of Short Circuit
etc.
You can then query and join to determine the problems based on the symptoms.
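Since SQLite3 is your database, here is a minimal Python/sqlite3 sketch of that schema (the table and column names follow the answer; the sample rows and the query are only illustrative), including a query that ranks problems by how many of the observed symptoms they match:
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE symptom (symptom_id INTEGER PRIMARY KEY, descr TEXT);
CREATE TABLE problem (problem_id INTEGER PRIMARY KEY, descr TEXT);
CREATE TABLE problem_symptom (
    problem_id INTEGER REFERENCES problem(problem_id),
    symptom_id INTEGER REFERENCES symptom(symptom_id),
    PRIMARY KEY (problem_id, symptom_id)
);
""")

# Illustrative data: symptom 9 is shared by all three problems, 7 by two of them.
cur.executemany("INSERT INTO symptom VALUES (?, ?)",
                [(7, "symptom 7"), (8, "symptom 8"), (9, "symptom 9")])
cur.executemany("INSERT INTO problem VALUES (?, ?)",
                [(1, "Short circuit"), (2, "Component failure"), (3, "External issue")])
cur.executemany("INSERT INTO problem_symptom VALUES (?, ?)",
                [(1, 7), (1, 9), (2, 8), (2, 9), (3, 7), (3, 8), (3, 9)])

# All problems that involve any of the observed symptoms,
# ordered by how many of those symptoms they match.
observed = [7, 9]
placeholders = ",".join("?" * len(observed))
cur.execute(f"""
    SELECT p.descr, COUNT(*) AS matches
    FROM problem p
    JOIN problem_symptom ps ON ps.problem_id = p.problem_id
    WHERE ps.symptom_id IN ({placeholders})
    GROUP BY p.problem_id
    ORDER BY matches DESC, p.problem_id
""", observed)
print(cur.fetchall())
# [('Short circuit', 2), ('External issue', 2), ('Component failure', 1)]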
I would suggest 3 tables: symptom, problem, and problem_symptoms. The latter would be a join table between the first two.