Binary Search: Number of comparisons in the worst case

I am trying to figure out the number of comparisons that binary search does on an array of a given size in the worst case.
Let's say there is an array A with 123,456 elements (or any other number). Binary search is applied to find some element E. The comparison is to determine whether A[i] = E. How many times would this comparison be executed in the worst case?
According to this post, the number of worst-case comparisons is 2*log2(n) + 1.
Result: 35
According to this post, the maximum number of binary search comparisons is log2(n + 1).
Result: 17
According to this post, the number of comparisons is 2*log2(n) - 1.
Result: 33
I am confused by the different answers. Can anyone tell me which one is correct and how I can determine the maximum number of comparisons in the worst case?

According to this Wiki page:
In the worst case, binary search makes floor(log2(n)+1) iterations of the comparison loop, where the floor notation denotes the floor function that yields the greatest integer less than or equal to the argument, and log2 is the binary logarithm. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always floor(log2(n)+1) levels in the tree for any binary search.
Also, it's not enough to count only the equality comparisons A[i] = E. A binary search also performs an order comparison such as E <= A[mid], where mid is the midpoint of the index interval; formulas that count both tests per iteration therefore report roughly twice as many comparisons. For n = 123,456, the iteration count is floor(log2(123456) + 1) = 17.
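To see the two counts side by side, here is a minimal sketch (not from any of the posts; names are mine) that runs a standard binary search and counts loop iterations; each iteration performs up to two element comparisons, which is where the factor of 2 in some of the formulas comes from:

    import math

    def binary_search_iterations(a, e):
        """Return (found, iterations) for a standard binary search on sorted list a."""
        lo, hi = 0, len(a) - 1
        iterations = 0
        while lo <= hi:
            iterations += 1            # one pass through the comparison loop
            mid = (lo + hi) // 2
            if a[mid] == e:            # equality test: A[i] = E
                return True, iterations
            elif a[mid] < e:           # order test: the second comparison per iteration
                lo = mid + 1
            else:
                hi = mid - 1
        return False, iterations

    n = 123456
    a = list(range(n))
    worst = max(binary_search_iterations(a, e)[1] for e in a)
    print(worst, math.floor(math.log2(n)) + 1)   # both are 17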

Related

What is the worst case target number for binary search?

For simplicity, let's say the number is between 1 and n. Is there a generic formula that gives us the number that results in the maximum number of binary search iterations?
I always have to work this out by hand, but I feel there should be an analytical form that is a function of n.
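None of the posts quoted here gives a closed form, but you can at least find such targets empirically. A hypothetical brute-force sketch (names are mine) that counts iterations for every target between 1 and n and reports the maximizers:

    def iterations(n, target):
        """Count binary search loop iterations for a target in 1..n."""
        lo, hi, count = 1, n, 0
        while lo <= hi:
            count += 1
            mid = (lo + hi) // 2
            if mid == target:
                break
            elif mid < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return count

    n = 20
    counts = {t: iterations(n, t) for t in range(1, n + 1)}
    worst = max(counts.values())
    print(worst, sorted(t for t, c in counts.items() if c == worst))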

Time Complexity of 1-pass lookup given input size N**2

Given a list of lists, i.e.
[[1,2,3],[4,5,6],[7,8,9]]:
What is the time complexity of using nested For loops to see if each numeral from 1-9 is used once and only once? Furthermore, what would be the time complexity if the input is now a singular combined list, i.e. [1,2,3,4,5,6,7,8,9]?
What really matters is the size of the input, not its format. Whether you have one list of 9 elements or 9 lists of 1 element each, you still have 9 elements to check in the worst case.
The answer to the question, as stated, would be O(1), because you have a constant size input.
If what you mean is something like "given N elements, what is the time complexity of checking whether all numbers between 1 and N are present", then it would take linear time, i.e., O(N).
Indeed, one option is to use a hash table (e.g., a Python set): check whether each element is already in the set, and add it if not. Note that with this particular option you get an expected (but not guaranteed, due to potential collisions) linear-time algorithm.
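As an illustration, a short sketch of that set-based check (the function name and the flattening step are mine); it runs in expected O(N):

    def uses_each_once(rows):
        """Check that the numbers 1..N each appear exactly once, where N is the total count."""
        flat = [x for row in rows for x in row]   # handles the list-of-lists form
        n, seen = len(flat), set()
        for x in flat:
            if not 1 <= x <= n or x in seen:      # out of range or duplicate
                return False
            seen.add(x)
        return True                               # N in-range distinct values: all present

    print(uses_each_once([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # True
    print(uses_each_once([[1, 2, 2], [4, 5, 6], [7, 8, 9]]))  # False: duplicate 2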

Determine whether there is a subset of size n which has a standard deviation <= s

Given a bunch of numbers, I am trying to determine whether there is a "clump" anywhere where numbers are very densely packed.
To make things more precise, I thought I'd ask a more specific problem: given a set of numbers, I would like to determine whether there is a subset of size n which has a standard deviation <= s. If there are many such subsets, I'd like to find the subset with the lowest standard deviation.
So question #1: does this formal problem definition effectively capture the intuitive concept of a "clump" of densely packed numbers?
EDIT: I don't actually care about determining which numbers belong to this "clump", I'm much more interested in determining where the clump is centred, which is why I think that specifying n in advance is okay. But feel free to correct me!
And question #2: assuming it does, what is the best way to implement something like this (in particular, I want the solution with the lowest time complexity)? So far I think I have a solution that runs in O(n log n):
First, note that the subset of a given size with the lowest possible standard deviation must consist of numbers that are consecutive in sorted order. So step 1 is to sort the numbers (this is O(n log n)).
Second, take the first n numbers and compute their standard deviation. If our array of numbers is 0-based, then the first n numbers are [0, n-1]. To get standard deviation, compute s1 and s2 as follows:
s1 = sum of numbers
s2 = sum of squares of numbers
Then, Wikipedia says that the standard deviation is sqrt(n*s2 - s1^2)/n. Record this value as the lowest standard deviation seen so far.
Find the standard deviation of [1, n], [2, n+1], [3, n+2] ... until you hit the last n numbers. Each computation takes only constant time if you keep running totals of s1 and s2: for example, to get the standard deviation of [1, n], subtract the 0th element (and its square) from the s1 and s2 totals and add the nth element (and its square), then recalculate. This means the whole standard-deviation pass takes linear time.
So the total time complexity is O(n log n).
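A minimal sketch of that sliding-window computation (names are mine; comparing n*s2 - s1^2, which is n^2 times the variance, avoids square roots without changing which window wins):

    def min_variance_window(xs, n):
        """Return (variance, start index) of the lowest-variance n-element window of sorted xs."""
        xs = sorted(xs)                        # step 1: O(n log n)
        s1 = sum(xs[:n])                       # running sum of the current window
        s2 = sum(x * x for x in xs[:n])        # running sum of squares
        best, best_i = n * s2 - s1 * s1, 0     # n*s2 - s1^2 orders windows like the variance
        for i in range(1, len(xs) - n + 1):
            out_v, in_v = xs[i - 1], xs[i + n - 1]
            s1 += in_v - out_v                 # slide the window in O(1)
            s2 += in_v * in_v - out_v * out_v
            key = n * s2 - s1 * s1
            if key < best:
                best, best_i = key, i
        return best / (n * n), best_i

    print(min_variance_window([5, 1, 9, 2, 8, 3, 7], 3))   # picks the [1, 2, 3] window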
Is my assessment right? Is there a better way to do this? I really need this to run fast on fairly large sets, so the faster the better! Space is less of an issue (I think).
Having worked recently on a similar problem, I find both the definition of the clump and the proposed implementation reasonable.
Another reasonable definition would be to find the minimum of all the ranges of n numbers. Thus, given that the list of numbers x is sorted, one would just find the minimum of x[n]-x[1], x[n+1]-x[2], etc. This would be slightly quicker than finding the standard deviation because it would avoid the multiplications and square roots. Indeed, you can avoid the square roots even when looking for the lowest standard deviation by finding the minimum variance (the square of the standard deviation), rather than the sd itself.
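A sketch of that range-based variant, under the same sort-first assumption (0-based indexing here, unlike the 1-based x[n]-x[1] notation above):

    def tightest_window(xs, n):
        """Start index, in sorted order, of the n-element window with the smallest range."""
        xs = sorted(xs)
        return min(range(len(xs) - n + 1),
                   key=lambda i: xs[i + n - 1] - xs[i])   # range of window [i, i+n-1]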
A caution would be that the location of the biggest clump might be quite sensitive to the choice of n. If there is an a priori reason to select a particular n, that won't be a problem. If not, however, it might require some experimentation to select the value of n that fairly reliably finds the clumps you are looking for, whether you are selecting by range or by standard deviation. Some ideas on this can be found in Chapter 6 of the online book ABC of EDA.

binary search question

For binary search, what is the average number of comparisons needed to find a record in a file?
I'm assuming this is homework, so I'll provide a hint instead of a straight-up answer. I'll also assume you've been asked to find a relatively exact answer, not just a big-O answer.
Think of it this way: Every time you do a comparison, you halve the search space. If the search space is of size S, then the probability of finding the record on the next iteration is 1/S. If C denotes the number of comparisons, then P(find it on comparison C) = P(don't find it in < C comparisons) * P(find it on comparison C | don't find it in < C comparisons).
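If you want to check the closed form you derive, a small empirical sketch (successful searches only, counting one pass through the loop as one comparison; names are mine):

    def comparisons(a, e):
        """Loop iterations a binary search needs to find e in sorted list a."""
        lo, hi, c = 0, len(a) - 1, 0
        while lo <= hi:
            c += 1
            mid = (lo + hi) // 2
            if a[mid] == e:
                return c
            elif a[mid] < e:
                lo = mid + 1
            else:
                hi = mid - 1
        return c

    n = 1023                  # 2^10 - 1 fills the comparison tree completely
    a = list(range(n))
    print(sum(comparisons(a, e) for e in a) / n)   # about 9.01, i.e. roughly log2(n) - 1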

How can one compute the optimal parameters to a start-step-stop coding scheme?

A start-step-stop code is a data compression technique used to compress numbers that are relatively small.
The code works as follows: it has three parameters, start, step and stop. start determines the number of bits used to encode the first few numbers, step determines how many bits are added to the encoding each time we run out of codes, and stop determines the maximum number of bits used to encode a number.
So the length of an encoding is given by l = start + step * i.
The "i" value of a particular code is encoded using unary. That is, a number of 1 bits followed by a terminating 0 bit. If we have reached stop then we can drop the terminating 0 bit. If i is zero we only write out the 0 bit.
So a (1, 2, 5) start-step-stop code would work as follows:
Value 0, encoded as: 0 0
Value 1, encoded as: 0 1
Value 2, encoded as: 10 000
Value 9, encoded as: 10 111
Value 10, encoded as: 11 00000
Value 41, encoded as: 11 11111
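To make the scheme concrete, here is a hypothetical encoder sketch (the function name and structure are mine, derived from the description above):

    def sss_encode(value, start, step, stop):
        """Encode a non-negative integer with a (start, step, stop) code; returns a bit string."""
        i, width, first = 0, start, 0           # 'first' is the smallest value in group i
        while value - first >= (1 << width):    # group i holds 2^width consecutive values
            first += 1 << width
            i += 1
            width += step
            if width > stop:
                raise ValueError("value out of range for this code")
        unary = "1" * i + ("" if width == stop else "0")   # drop the 0 bit at the stop group
        return unary + format(value - first, "0{}b".format(width))

    for v in (0, 1, 2, 9, 10, 41):
        print(v, sss_encode(v, 1, 2, 5))   # reproduces the table above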
So, given a file containing several numbers, how can we compute the optimal start-step-stop codes for that file? The optimal parameters are defined as those that will result in the greatest compression ratio.
These "start-step-stop" codes looks like a different way of calling Huffman codes. See the basic technique for an outline of the pseudo-code for calculating them.
Essentially this is what the algorithm does:
Before you start the Huffman encoding you need to gather the statistics of each symbol you'll be compressing (Their total frequency in the file to compress).
After you have that, you create a binary tree using that information, such that the most frequently used symbols are at the top of the tree (and thus use fewer bits) and such that no code is a prefix of another code, since a shared prefix would make decompression ambiguous.
At the end of the Huffman encoding, your start value will be the depth of the shallowest leaf node, your step will always be 1 (logically this makes sense: why force more bits than you need, just add one at a time), and your stop value will be the depth of the deepest leaf node.
If the frequency statistics aren't sorted, this takes O(n log n); if they are already sorted by frequency, it can be done in O(n).
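If you want to experiment with that mapping, here is a sketch that builds the Huffman tree with heapq and reads start and stop off the leaf depths (helper names are mine):

    import heapq
    from collections import Counter

    def huffman_depths(numbers):
        """Map each distinct number to its Huffman code length, given raw frequencies."""
        freq = Counter(numbers)
        # heap entries: (weight, tiebreak, {symbol: depth so far})
        heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            w1, _, d1 = heapq.heappop(heap)
            w2, _, d2 = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # both subtrees sink one level
            heapq.heappush(heap, (w1 + w2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    depths = huffman_depths([0, 0, 0, 1, 1, 2, 3, 3, 3, 3])
    start, stop = min(depths.values()), max(depths.values())   # step = 1, per the answer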
Huffman codes are guaranteed to have the best average compression for this type of encoding:
Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code.
This should help you implement the ideal solution to your problem.
Edit: Though similar, this isn't what the OP was looking for.
This academic paper by the creator of these codes describes a generalization of start-step-stop codes: start-stop codes. However, the author briefly describes how to get optimal start-step-stop codes near the end of section 2. It involves using a statistical random variable, or brute-force finding the best combination. Without any prior knowledge of the file, the algorithm is O((log n)^3).
Hope this helps.
The approach I used was a simple brute force solution. The algorithm followed these basic steps:
1. Count the frequency of each number in the file. In the same pass, compute the total count of numbers in the file and determine the greatest number, maxNumber.
2. Compute the probability of each number as its frequency divided by the total count of numbers in the file.
3. Determine "optimalStop" as equal to log2(maxNumber). This is the ideal number of bits needed to represent maxNumber according to Shannon information theory, and therefore a reasonable estimate of the optimal maximum number of bits used in the encoding of any particular number.
4. For every "start" value from 1 to "optimalStop", repeat steps 5-7:
5. For every "step" value from 1 to ("optimalStop" - "start") / 2, repeat steps 6 and 7:
6. Calculate the "stop" value closest to "optimalStop" that satisfies stop = start + step * i for some integer i.
7. Compute the average number of bits this encoding would use: each number's probability multiplied by its bit length under the encoding, summed over all numbers.
8. Pick the encoding with the lowest average number of bits.
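A sketch of that loop (the bit-length helper mirrors the encoding rules from the question; reading "optimalStop" as ceil(log2(maxNumber + 1)) so that maxNumber always fits is my interpretation, as is trying at least step = 1 for every start):

    import math
    from collections import Counter

    def code_length(value, start, step, stop):
        """Bit length of value under a (start, step, stop) code, or None if it doesn't fit."""
        i, width, first = 0, start, 0
        while value - first >= (1 << width):
            first += 1 << width
            i += 1
            width += step
            if width > stop:
                return None
        return i + (0 if width == stop else 1) + width   # unary prefix + binary body

    def best_parameters(numbers):
        freq = Counter(numbers)
        total = sum(freq.values())
        optimal_stop = max(1, math.ceil(math.log2(max(numbers) + 1)))       # step 3
        best = None
        for start in range(1, optimal_stop + 1):                            # step 4
            for step in range(1, max(1, (optimal_stop - start) // 2) + 1):  # step 5
                i = (optimal_stop - start) // step                          # step 6
                stop = start + step * i
                lengths = {v: code_length(v, start, step, stop) for v in freq}
                if None in lengths.values():
                    continue                    # this code can't reach the largest number
                avg = sum(freq[v] * lengths[v] for v in freq) / total       # step 7
                if best is None or avg < best[0]:
                    best = (avg, start, step, stop)                         # step 8
        return best

    print(best_parameters([0, 1, 1, 2, 3, 9, 10, 41]))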