Find the 'Best' element from the given list - puzzle

I was recently asked this question in a telephone interview.
"There is a list of elements. And you have to find the "best" element from the list. The elements are comparable to each other, but the comparison is not transitive.
E.g. if A > B and B > C, then A need NOT be greater than C.
You have to return the best element as the answer, i.e. the one that is better than every other element in the list. It is possible that there is no such element. In that case, return null."
My solution:
Attempt 1:
A simple O(n^2) solution: compare each element with every other element.
The interviewer was not satisfied.
Attempt 2:
Start comparing the first element A with the 2nd element and onward. For any element E with A > E, mark E (e.g. using another array/list) and do not consider E for any further comparisons. This is because there is at least one element better than E, so E is definitely not the answer.
Complexity is still O(n^2), though with some improvement over the previous attempt.
He was still not satisfied. Can anyone come up with any better solution?

Sure. You have N elements. Compare the first two. One of these is 'worse' than the other. Discard it. Compare the 'better' of the two with the next element. Continue this first pass across the list until only one element remains. This step is O(N).
The one element that survived the first pass now needs to be compared with every element from the original list except those that it was already compared with. If it 'loses' even once, you return that there is no 'best' element. If it 'wins' every comparison in this step you return this element. This step is also O(N).
This algorithm is O(N + N) = O(N) in the worst case and O(N + 0) = O(N) in the best case. We can further argue that this is the best possible complexity, because merely verifying a proposed answer already requires O(N) comparisons, and finding a solution cannot be cheaper than checking one.
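The two passes described above can be sketched as follows. Here `better(a, b)` is a hypothetical comparison callback (not from the original question) returning `True` when `a` beats `b`; the relation need not be transitive.

```python
def find_best(items, better):
    """Return the element that beats every other, or None if no such element.

    `better` is assumed to be a (possibly non-transitive) comparison.
    """
    if not items:
        return None
    # Pass 1: single elimination sweep; the loser of each comparison
    # cannot be the answer, so one candidate survives. O(N).
    candidate = items[0]
    for item in items[1:]:
        if better(item, candidate):
            candidate = item
    # Pass 2: verify the survivor against every other element. O(N).
    # (Re-checking elements it already beat is redundant but harmless.)
    for item in items:
        if item is not candidate and not better(candidate, item):
            return None
    return candidate
```

With a transitive relation such as `>` on integers the maximum is returned; with a cyclic relation like rock-paper-scissors, the verification pass correctly reports that no best element exists.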

Related

Time Complexity of 1-pass lookup given input size N**2

Given a list of lists, i.e.
[[1,2,3],[4,5,6],[7,8,9]]:
What is the time complexity of using nested For loops to see if each numeral from 1-9 is used once and only once? Furthermore, what would be the time complexity if the input is now a singular combined list, i.e. [1,2,3,4,5,6,7,8,9]?
What really matters is the size of the input, not its format. Whether you have one list of 9 elements or 9 lists with one element each, you still have 9 elements to check in the worst case.
The answer to the question, as stated, would be O(1), because you have a constant size input.
If what you mean is something like: given N elements, what is the time complexity of checking whether all numbers between 1 and N are present, then it would take linear time, i.e., O(N).
Indeed, one option is to use a hash table (e.g., a Python set): check whether each element is already in the set and, if not, add it. Note that with this particular option you get an expected (but not guaranteed, due to potential collisions) linear-time algorithm.
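A minimal sketch of the set-based check, assuming the "each number 1..N appears exactly once" interpretation above (the function name is illustrative, not from the question):

```python
def uses_each_once(values, n):
    """Return True iff `values` contains each of 1..n exactly once.

    Expected O(N): each membership test and insert on the set is O(1)
    on average.
    """
    seen = set()
    for v in values:
        if v < 1 or v > n or v in seen:  # out of range, or a repeat
            return False
        seen.add(v)
    return len(seen) == n  # every number must have appeared
```

For the nested-list input from the question, flatten first (e.g. with `itertools.chain.from_iterable`) and pass the result in; the flattening itself is also linear in the number of elements.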

How to calculate the worst case time for binary search for a key that appears twice in the sorted array?

What would be the worst case time complexity of finding a key that appears twice in a sorted array using binary search? I know that the worst case time complexity of binary search on a sorted array is O(log n). So, in the case that the key appears more than once, the time complexity should be less than O(log n). However, I am not sure how to calculate this.
In the worst case the binary search needs to perform ⌊log_2(n) + 1⌋ iterations to find the element or to conclude that the element is not in the array.
By having a duplicate you might just need one step less.
For instance, suppose your duplicate elements appear at the first and second indices of the array (similarly if they are at the last and second-to-last).
In such a case you would have ⌊log_2(n)⌋ comparisons, thus, still O(log(n)) as a worst case time complexity.
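A standard iterative binary search, instrumented with an iteration counter (the counter is an addition for illustration), makes the point concrete: a duplicated key can change the count by at most a step or so, not the O(log n) growth rate.

```python
def binary_search(arr, key):
    """Return (index, iterations) for `key` in sorted `arr`, or (-1, iterations)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid, steps
        if arr[mid] < key:
            lo = mid + 1  # key can only be in the right half
        else:
            hi = mid - 1  # key can only be in the left half
    return -1, steps
```

Comparing the iteration counts for an array with and without an adjacent duplicate of the key shows the difference is a constant, so the worst case remains O(log n).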

What is wrong with this P argument

My teacher made this argument and asked us what could be wrong with it.
"For an array A of n distinct numbers: since there are n! permutations of A, we cannot check for each permutation whether it is sorted in total time polynomial in n. Therefore, sorting A cannot be in P."
My friend thought the conclusion should instead be: "therefore, sorting A cannot be in NP."
Is that it, or are we thinking about it too simplistically?
The problem with this argument is that it fails to adequately specify the exact problem.
Sorting can be linear-time (O(n)) in the number of elements to sort, if you're sorting a large list of integers drawn from a small pool (counting sort, radix sort).
Sorting can be linearithmic-time (O(n log n)) in the number of elements to sort, if you're sorting a list of arbitrary things which are all totally ordered according to some ordering relation (e.g., less than or equal to on the integers).
Sorting based on a partial order (e.g. topological sorting) must be analyzed in yet another way.
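For the linear-time case mentioned above, a counting-sort sketch (assuming non-negative integers drawn from a known small range 0..k) shows directly that no permutations need to be enumerated:

```python
def counting_sort(values, k):
    """Sort integers in the range 0..k in O(n + k) time."""
    counts = [0] * (k + 1)
    for v in values:          # one pass to tally each value
        counts[v] += 1
    result = []
    for v, c in enumerate(counts):  # emit each value `count` times, in order
        result.extend([v] * c)
    return result
```

The algorithm touches each element a constant number of times, so its cost is O(n + k), which is linear in n when k is small, despite there being n! permutations of the input.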
We can imagine a problem like sorting whereby the sortedness of a list cannot be determined by comparing adjacent entries only. In the extreme case, sortedness (according to what we are considering to be sorting) might only be verifiable by checking the entire list. If our kind of sorting is designed so as to guarantee there is exactly one sorted permutation of any given list, the time complexity is factorial-time (O(n!)) and the problem is not in P.
That's the real problem with this argument. If your professor is assuming that "sorting" refers to sorting integers not confined to any particular small range, the problem with the argument is that we do not need to consider all permutations in order to construct the sorted one. If I have a bag with 100 marbles and I ask you to remove three marbles, the task takes constant time; it doesn't matter that there are n(n-1)(n-2)/6 = 161700, i.e. O(n^3), ways in which you can accomplish it.
The argument is a non sequitur: the conclusion does not logically follow from the previous steps. Why doesn't it follow? Giving a satisfying answer to that question requires knowing why the person who wrote the argument thinks it is correct, and then addressing their misconception. In this case, the person who wrote the argument is your teacher, who doesn't think the argument is correct, so there is no misconception to address and hence no completely satisfying answer to the question.
That said, my explanation would be that the argument is wrong because it proposes a specific algorithm for sorting - namely, iterating through all n! permutations and choosing the one which is sorted - and then assumes that the complexity of the problem is the same as the complexity of that algorithm. This is a mistake because the complexity of a problem is defined as the lowest complexity out of all algorithms which solve it. The argument only considered one algorithm, and didn't show anything about other algorithms which solve the problem, so it cannot reach a conclusion about the complexity of the problem itself.

Double Ended Singly Linked List - Time complexity of searching

I have read that the time complexity of searching for an element located at the end of a double-ended singly linked list is O(N).
But since the time complexity of searching for an element at the front is O(1), I think the same should apply to the end element. Any ideas? Thanks.
The cost of searching for an element at the front of the linked list is indeed constant, because you hold a pointer to that first element. Thus, it is O(1) to find the first element.
In the case of a double ended singly linked list, assuming you mean you hold a pointer to both the first and last element of the singly linked list, you would indeed find that the time to locate the last element would be O(1), because you have a reference to exactly where it is.
However, consider the case of a double ended singly linked list where you want to find the (n-1)th element in that list. Suddenly, you find that you have to iterate over n-1 elements until you get to that element. Thus you would find that the worst case runtime for the double ended singly linked list would be O(n-1), which is really O(n).
Even in the case where you had a double ended doubly linked list, you would find that the worst case runtime would be O(n/2) (assuming you had a mechanism to tell whether the element was in the first half or the second half, which is unlikely). But O(n/2) is still really O(n).
Since we generally refer to the worst case when we talk about big-O time complexity, you can see that searching a linked list is invariably O(n).
Note:
That's not to say that big-o is the only measure of time-complexity. Depending on your implementation, the amortized or probabilistic time-complexity could indeed be different from its worst case time complexity, and likely is.
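A minimal sketch of a double-ended singly linked list (class and method names are illustrative) makes the asymmetry concrete: the tail pointer gives O(1) access to the last *node*, but because nodes have no back-pointers, locating any element by value still means walking forward from the head.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class DoubleEndedList:
    """Singly linked list with head and tail pointers."""
    def __init__(self):
        self.head = self.tail = None

    def append(self, value):
        # O(1): the tail pointer means no traversal is needed to append.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def find(self, value):
        # O(n): no back-pointers, so every search walks from the head.
        # Returns the number of nodes visited, or -1 if not found.
        node, steps = self.head, 0
        while node is not None:
            steps += 1
            if node.value == value:
                return steps
            node = node.next
        return -1
```

Reading `lst.tail.value` is one pointer dereference, while `lst.find(last_value)` visits all n nodes: that is exactly the distinction the question is asking about.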

Time Complexity (with a list of elements)

Just out of curiosity: say I have a list containing N elements (which can repeat) and a function that returns the frequency of each element. I think the time complexity of this program should be O(N), right? The function just needs to loop through the N elements and check whether each element already exists; if yes, increment its count, else set it to 1. Now, my friend and I have an argument: what if we also need to multiply each element by its frequency, and maybe divide by the total count? My friend thinks the complexity should be O(N^2), but that doesn't sound right to me. What do you think, and why?
Thank you.
It depends on how you record the frequencies. If you use a plain (unsorted) array of (element, count) pairs, then each += first requires a linear scan to find the previous frequency value, and the overall complexity is quadratic. However, if you maintain a hash table, on which access is O(1) on average, the complexity is linear.
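A sketch of the linear-time version, interpreting the question's "multiply by frequency, divide by total" loosely (that interpretation is an assumption; the question doesn't pin it down):

```python
from collections import Counter

def weighted_by_frequency(values):
    """Scale each element by its frequency over the total count. O(N) expected."""
    counts = Counter(values)  # hash-table frequency count: one O(N) pass
    total = len(values)
    # Second O(N) pass; each counts[v] lookup is O(1) on average.
    return [v * counts[v] / total for v in values]
```

Two linear passes are still O(N); the O(N^2) cost only appears if each frequency lookup scans an unindexed array.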