Can you explain why this time complexity is n log(n)?

Can anyone explain what the time complexity of this function is?
bool containsDuplicates(vector<int> &nums)
{
    unordered_map<int, int> umap;
    for (auto itr : nums)
    {
        umap[itr]++;
    }
    for (auto itr : umap)
    {
        if (itr.second > 1) { return true; }
    }
    return false; // no duplicates found
}
From my understanding, the first loop is O(n) and the second loop is O(n), so the time complexity is O(n) + O(n) = 2·O(n). Is my understanding correct? I read online that the time complexity of the above is n log(n). Is that correct? If yes, can you please explain how?

If n is the number of items in the vector nums: in the first loop you visit every item in the vector, which is O(n) (each unordered_map operation is O(1) on average). In the second loop you visit every entry of the map you built from the vector, and the map holds at most n entries, so that loop is also O(n). The two loops together are still O(n).
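As for where the n log(n) figure might come from: a sort-based approach, or a version that uses the tree-based std::map instead of unordered_map, would indeed be O(n log n), since every std::map operation costs O(log n). A sketch of the map variant (my illustration, not code from the question):

#include <map>
#include <vector>
using std::map;
using std::vector;

// Same idea, but std::map is a balanced tree: each of the n
// insertions/lookups costs O(log n), so the whole function is O(n log n).
bool containsDuplicatesOrdered(const vector<int> &nums)
{
    map<int, int> omap;
    for (int x : nums)
    {
        if (++omap[x] > 1) { return true; }
    }
    return false;
}

Note also that unordered_map's O(1) per operation is only an average-case figure; with degenerate hashing a single operation can cost O(n), making the worst case O(n^2) rather than O(n log n).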

Related

Calculating the time and space complexity of function f1

I have this function and I am trying to calculate its time and space complexity. I got an answer, but I am not sure whether it is correct; I'd like to know whether it is.
#include <stdlib.h> /* for malloc and free */

void f1(int n)
{
    int i, j, k = 0;
    for (i = n / 2; i <= n; i++)
    {
        for (j = 2; j <= i; j *= 2) /* j doubles, so ~log2(i) iterations */
        {
            k += n;
        }
    }
    free(malloc(k)); /* allocate k bytes, then release them */
}
My results:
For the outer for loop it's pretty straightforward: O(n), since i runs over the n/2 values from n/2 to n.
For the inner loop: j doubles from 2 up to i, so it runs about log2(i) times, and since n/2 <= i <= n, that is Theta(log n) per outer iteration.
So the total time complexity is O(n log(n)).
For space complexity, it's O(k) with k at its final value: k is increased by n once per inner iteration, i.e. Theta(n log(n)) times, so after all iterations k = Theta(n^2 log(n)), and malloc(k) makes the space complexity O(n^2 log(n)).
Is this correct?
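One way to sanity-check the estimate for k is to compute it directly and compare it with n^2 * log2(n); a quick instrumented copy of my own (the name finalK is mine):

#include <cstdio>
#include <cmath>

// Instrumented copy of f1 that returns the final value of k
// instead of passing it to malloc.
long long finalK(long long n)
{
    long long k = 0;
    for (long long i = n / 2; i <= n; i++)
        for (long long j = 2; j <= i; j *= 2)
            k += n;
    return k;
}

int main()
{
    // If k = Theta(n^2 log n), this ratio should settle near a constant
    // (about 0.5, since the outer loop makes ~n/2 iterations).
    for (long long n : {1000LL, 10000LL, 100000LL, 1000000LL})
    {
        double ratio = (double)finalK(n) / ((double)n * n * std::log2((double)n));
        std::printf("n = %7lld  k / (n^2 log2 n) = %.3f\n", n, ratio);
    }
}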

Time complexity of recursive function with input halved

For those Big O experts out there... I'm trying to deduce the time complexity of a function with two recursive calls, where the input is halved each time:
function myFunc(n) {
  let something = 0  // declared at function scope so the return below can see it
  if (n > 1) {
    for (let i = 0; i < n; i++) {
      // Do some linear stuff in here
    }
    myFunc(n / 2)
    myFunc(n / 2)
  }
  return something;
}
I'm unsure how, exactly, the halving affects the analysis. Any help super appreciated!
The first step in the analysis of a recursive function would be to write down the recurrence relation. In this case you have:
T(n) = 2T(n/2) + O(n)
This is one of the most common forms of recurrence relations, so we can immediately state the solution without any further calculations:
T(n) = O(n log n)
It's easy to prove this result by induction. See e.g. here.
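For intuition, you can also unroll the recurrence, writing cn for the O(n) term:
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = ...
     = 2^k T(n/2^k) + k*cn
The recursion bottoms out after k = log2(n) levels, at which point 2^k = n, so T(n) = n*T(1) + cn*log2(n) = O(n log n).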

How is it possible that O(1) constant time code is slower than O(n) linear time code?

"...It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase."
According to my understanding:
O(N) - the time the algorithm takes grows linearly with the size of the input N.
O(1) - the algorithm takes constant time to execute, irrespective of the size of the input, e.g. int val = arr[10000];
Can someone help me understand, based on the author's statement:
How can O(N) code run faster than O(1) code?
What are the specific inputs the author is alluding to?
Rate of increase of what?
O(n) linear-time code can absolutely be faster than O(1) constant-time code. The reason is that constant factors are totally ignored in Big O, which is a measure of how fast an algorithm's cost increases as the input size n increases, and nothing else. It's a measure of growth rate, not running time.
Here's an example:
int constant(int[] arr) {
    int total = 0;
    for (int i = 0; i < 10000; i++) {
        total += arr[0];
    }
    return total;
}

int linear(int[] arr) {
    int total = 0;
    for (int i = 0; i < arr.length; i++) {
        total += arr[i];
    }
    return total;
}
In this case, constant does a lot of work, but it's fixed work that will always be the same regardless of how large arr is. linear, on the other hand, appears to have few operations, but those operations are dependent on the size of arr.
In other words, as arr increases in length, constant's performance stays the same, but linear's running time increases linearly in proportion to its argument array's size.
Call the two functions with a single-item array like
constant(new int[] {1});
linear(new int[] {1});
and it's clear that constant runs slower than linear.
But call them like:
int[] arr = new int[10000000];
constant(arr);
linear(arr);
Which runs slower?
After you've thought about it, run the code given various inputs of n and compare the results.
Just to show that this phenomenon of run time != Big O isn't just for constant-time functions, consider:
void exponential(int n) throws InterruptedException {
    for (int i = 0; i < Math.pow(2, n); i++) {
        Thread.sleep(1);
    }
}

void linear(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(10);
    }
}
Exercise (using pen and paper): up to which n does exponential run faster than linear?
Consider the following scenario:
Op1) Given an array of length n where n >= 10, print the first ten elements twice on the console. This is a constant time (O(1)) operation, because for any array of size >= 10, it will execute 20 steps.
Op2) Given an array of length n where n >= 10, find the largest element in the array. This is a linear time (O(N)) operation, because for any array, it will execute N steps.
Now if the array size is between 10 and 20 (exclusive), Op1 will be slower than Op2. But if we take an array of size > 20 (e.g. size = 1000), Op1 will still take 20 steps to complete, while Op2 will take 1000 steps.
That's why big-O notation is about the growth (rate of increase) of an algorithm's cost, not its absolute running time.

Determine the time complexity of the algorithms

I have just started to learn time complexity, but I don't really get the idea. Could you help with these questions and explain the way of thinking?
int Fun1(int n)
{
    for (i = 0; i < n; i += 1) {
        for (j = 0; j < i; j += 1) {
            for (k = j; k < i; k += 1) {
                // do something
            }
        }
    }
}
void Fun2(int n){
    i = 0
    while (i < n) {
        for (j = 0; j < i; j += 1) {
            k = n
            while (k > j) {
                k = k / 2
            }
            k = j
            while (k > 1) {
                k = k / 2
            }
        }
    }
}
int Fun3(int n){
    for (i = 0; i < n; i += 1) {
        print("*")
    }
    if (n <= 1) {
        print("*")
        return
    }
    if (n % 2 != 0) {
        Fun3(n - 1)
    }
    else {
        Fun3(n / 2)
    }
}
For function 1, I think it's Theta(n^3) because it runs at most n * n * n times, but I am not sure how to prove this.
For the second, I think it's Theta(n^2 log(n)), but I am not sure.
Could you help, please?
First a quick note: in Fun2(n) there should be an i++ before the end of the outer while loop, otherwise the loop never terminates; the analysis below assumes it is there. Time complexity matters because it tells you how efficient your algorithms are. In this case you have these 3 functions:
Fun1(n)
In this function you have three nested for loops, each bounded by n iterations, so the total work is at most n * n * n = O(n^3). To prove the matching lower bound, note that for every i >= n/2 and every j < i/2 the innermost loop runs i - j >= n/4 times; there are about (n/2) * (n/4) such (i, j) pairs, so the total work is also at least n^3/32 = Omega(n^3). The resulting complexity, as you correctly said, is Theta(n^3).
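If you want empirical support on top of that argument, count how often the innermost body runs and divide by n^3; the ratio settles near 1/6, since the exact sum is about n^3/6. A snippet of my own (the name fun1Count is mine):

#include <cstdio>

// Counts how many times Fun1's innermost body would run.
long long fun1Count(long long n)
{
    long long count = 0;
    for (long long i = 0; i < n; i += 1)
        for (long long j = 0; j < i; j += 1)
            for (long long k = j; k < i; k += 1)
                ++count;
    return count;
}

int main()
{
    // count / n^3 should approach the constant 1/6, about 0.1667.
    for (long long n : {100LL, 500LL, 1000LL})
        std::printf("n = %4lld  count/n^3 = %.4f\n",
                    n, (double)fun1Count(n) / ((double)n * n * n));
}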
Fun2(n)
This function has a while loop that iterates n times over a given input (again, assuming the missing i++), so the outer loop complexity is O(n). As before, there is an inner for loop that iterates up to n times on each cycle of the outer loop, giving O(n) * O(n) = O(n^2) so far. Inside the for loop we have while loops that differ from the other loops: instead of visiting each element of a range, they divide the range by 2 at each iteration. For example, from 0 to 31 we have 0-31 -> 0-15 -> 0-7 -> 0-3 -> 0-1.
As you know, the number of iterations of such a loop is logarithmic, log2(n), so we end up with O(n^2) * O(log(n)) = O(n^2 log(n)) as the time complexity.
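To convince yourself of the log factor, you can count the halving steps directly and compare them with log2(n); a quick check of my own:

#include <cstdio>
#include <cmath>

int main()
{
    for (int n : {16, 1000, 1000000})
    {
        int steps = 0;
        for (int k = n; k > 1; k /= 2)  // same shape as the inner while loops
            ++steps;
        std::printf("n = %7d  halving steps = %2d  log2(n) = %.1f\n",
                    n, steps, std::log2((double)n));
    }
}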
Fun3(n)
In this function we have a for loop with no inner loops, followed by a single recursive call. The complexity of the for loop, as we know, is O(n), but how many times will the function be called?
If we take a small number like 6 as an example: we do 6 iterations, then since 6 mod 2 == 0 we call Fun3(6/2) = Fun3(3). Fun3(3) does 3 iterations and, since 3 mod 2 != 0, calls Fun3(2), which in turn calls Fun3(1), the base case.
What are we seeing here? An even argument is halved, and an odd argument is decremented (becoming even) and halved on the next call, so the argument at least halves every two calls, and there are only O(log n) calls in total.
The work per call therefore forms a series bounded by n + n + n/2 + n/2 + n/4 + n/4 + ... <= 4n, and the complexity result is O(n).
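To see the linear bound concretely, here is a small check of my own (the name fun3Work is mine; print is replaced by a counter):

#include <cstdio>

// Counts the total number of stars Fun3(n) would print.
long long fun3Work(long long n)
{
    long long work = n;      // the for loop prints n stars
    if (n <= 1)
        return work + 1;     // the base case prints one more
    return work + fun3Work(n % 2 != 0 ? n - 1 : n / 2);
}

int main()
{
    // work(n) / n should stay below the constant 4 derived above.
    for (long long n : {10LL, 1000LL, 1000000LL, 1000001LL})
        std::printf("n = %7lld  total work = %8lld  work/n = %.2f\n",
                    n, fun3Work(n), (double)fun3Work(n) / n);
}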
Note that when we calculate time complexity we ignore the constant coefficients, since they are not relevant. The functions we usually encounter, especially in CS, are:
O(1), O(log(n)), O(n), O(n^a) with a > 1, O(2^n), O(n!)
and we combine and simplify them in order to know which algorithm has the best (lowest) time complexity, to get an idea of which algorithm should be used.

How does Codility calculate the complexity? Example: Tape-Equilibrium Codility Training

I was training on Codility, solving the first lesson: Tape-Equilibrium.
It is said it has to be of complexity O(N). Therefore I was trying to solve the problem with just one for loop. I knew how to do it with two for loops, but I understood that would imply a complexity of O(2N), so I skipped those solutions.
I looked for it on the Internet and, of course, there was an answer on SO.
To my astonishment, all the solutions first calculate the sum of the elements of the vector and afterwards make the calculations. I understand this is complexity O(2N), but it gets a score of 100%.
At this point, I think I am mistaken in my comprehension of the time complexity limits. If they ask you for a time complexity of O(N), is it acceptable to have O(X*N), with X a value that is not very high?
How does this work?
Let f and g be functions.
The big-O notation f in O(g) means that you can find a constant c such that f(n) ≤ c⋅g(n) for all sufficiently large n. So if your algorithm has complexity 2N (or XN for a constant X), it is in O(N), because c = 2 (or c = X) gives 2N ≤ c⋅N (or XN ≤ c⋅N).
This is how I managed to keep it O(N) along with a 100% score:
class Solution {
    public int solution(int[] A) {
        int result = Integer.MAX_VALUE;
        // s1[i] = sum of A[0..i] (prefix sums)
        int[] s1 = new int[A.length - 1];
        // s2[i] = sum of A[i+1..A.length-1] (suffix sums)
        int[] s2 = new int[A.length - 1];
        for (int i = 0; i < A.length - 1; i++) {
            if (i > 0) {
                s1[i] = s1[i - 1] + A[i];
            } else {
                s1[i] = A[i];
            }
        }
        for (int i = A.length - 1; i > 0; i--) {
            if (i < A.length - 1) {
                s2[i - 1] = s2[i] + A[i];
            } else {
                s2[i - 1] = A[A.length - 1];
            }
        }
        // The tape is split after index i; the difference is |s1[i] - s2[i]|.
        for (int i = 0; i < A.length - 1; i++) {
            if (Math.abs(s1[i] - s2[i]) < result) {
                result = Math.abs(s1[i] - s2[i]);
            }
        }
        return result;
    }
}
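For what it's worth, the same idea also fits in two passes with O(1) extra memory; that is still O(N), which again shows the constant factor does not affect the score. A sketch in C++, my variant rather than the graded submission above:

#include <vector>
#include <cstdlib>
#include <numeric>
#include <algorithm>
#include <climits>

int solution(const std::vector<int> &A)
{
    // Pass 1: total sum of the whole tape.
    long long total = std::accumulate(A.begin(), A.end(), 0LL);
    // Pass 2: grow the left part one element at a time;
    // |left - right| = |2*left - total| because right = total - left.
    long long left = 0, best = LLONG_MAX;
    for (std::size_t p = 1; p < A.size(); ++p)
    {
        left += A[p - 1];
        best = std::min(best, std::llabs(2 * left - total));
    }
    return (int)best;
}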