Is O(m+n) the same as O(max(m,n))?

If I have two variables, say m and n, and my algorithm has time complexity O(m+n), is O(m+n) the same as having O(max(m,n))? If yes, can you explain why?

Yes. Because max(m,n) ≤ m+n ≤ 2·max(m,n), and 2·max(m,n) only differs from max(m,n) by a constant factor, which big-O ignores, the two bounds describe the same class. That's just one way to analyze it.
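Spelled out with the definition (just the inequality above, applied in both directions): if f(m,n) ≤ c·(m+n) for all large m and n, then f(m,n) ≤ 2c·max(m,n); conversely, if f(m,n) ≤ c·max(m,n), then f(m,n) ≤ c·(m+n). Either bound converts into the other at the cost of a constant factor, so O(m+n) = O(max(m,n)).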

Related

Calculate time complexity for this solution

I have the following code. I think the solution is O(n^2) because it is a nested loop, as shown in the attached image. Can anyone confirm?
function sortSmallestToLargest(data):
    sorted_data = {}
    while data is not empty:
        smallest_data = data[0]
        foreach i in data:
            if (i < smallest_data):
                smallest_data = i
        sorted_data.add(smallest_data)
        data.remove(smallest_data)
    return sorted_data
reference image
Ok, now I see what you are doing! Yes, you are right, it's O(n²), because the inner loop scans every remaining element of data. data shrinks by one element on each pass, but each pass is still O(n): on average you scan about n/2 elements, and constant factors don't matter in big-O. Multiplying (because one loop is inside the other) gives O(n²).
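If you want the exact count rather than the per-pass bound: the inner loop scans n elements on the first pass, n−1 on the second, and so on, for a total of n + (n−1) + … + 1 = n(n+1)/2 comparisons, which is Θ(n²).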

Randomly increasing sequence - Wolfram Mathematica

Good afternoon, I have a problem making a recurrence table with a randomly increasing sequence. I want it to return an increasing sequence with a random difference between consecutive elements. Right now I've got:
RecurrenceTable[{a[k+1]==a[k] + RandomInteger[{0,4}], a[1]==-12},a,{k,1,5}]
But it returns an arithmetic progression with a single randomly chosen d used for every k (e.g. {-12,-8,-4,0,4,8,12,16,20,24}).
Also, I would be really grateful if you could explain why, if I replace every k in my code with n, I get:
RecurrenceTable[{4+a[n] == a[n],a[1] == -12},a,{n,1,10}]
Thank you very much for your time!
I don't believe that RecurrenceTable is what you are looking for. The RandomInteger[{0,4}] in your recurrence is effectively evaluated only once, so the same difference gets reused for every k, which is why you see an arithmetic progression.
Try this instead
FoldList[Plus,-12,RandomInteger[{0,4},5]]
which returns, for example,
{-12,-8,-7,-3,1,2}
on one run and
{-12,-9,-5,-3,0,1}
on another (the outputs differ from run to run because the increments are random).
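For what it's worth, FoldList[Plus, -12, {d1, d2, ...}] just produces the running partial sums {-12, -12+d1, -12+d1+d2, ...} (here d1, d2, ... stand for the random increments), so feeding it a list of random non-negative integers gives exactly the kind of randomly increasing sequence you describe.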

Using Subtraction in a Conditional Statement in Verilog

I'm relatively new to Verilog and I've been working on a project in which I would, in an ideal world, like to have an assignment statement like:
assign isinbufferzone = a > (packetlength-16384) ? 1:0;
The file with this type of line in it will compile, but isinbufferzone doesn't go high when it should. I'm assuming it's not happy with having subtraction in the conditional. I'm able to make the module work by moving stuff around, but the result is more complicated than I think it should need to be and the latency really starts to add up. Does anyone have any thoughts on what the most concise way to do this is? Thank you in advance for your help.
You probably expect isinbufferzone to go high if packetlength is 16384 or less regardless of a, however this is not what happens.
If packetlength is less than 16384, the value packetlength - 16384 is not a negative number −X, but some very large positive number (maybe 2^32 − X, or 2^17 − X, I'm not quite sure which, but it doesn't matter), because Verilog does unsigned arithmetic by default. This is called integer overflow.
You could maybe try to solve this by declaring some signals as signed, but in my opinion the safest way is to handle the overflow case explicitly and make sure the subtraction result is only evaluated for packetlength values of 16384 or greater:
assign isinbufferzone = (packetlength < 16384) ? 1 : (a > packetlength - 16384);
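To put a concrete number on the wraparound (assuming a 32-bit evaluation width, which is only a guess here): with packetlength = 100, the unsigned result of packetlength - 16384 is 2^32 − 16284 = 4294951012, which is larger than any realistic a, so the original comparison never goes high.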

Eigen: Computation and incrementation of only Upper part with selfAdjointView?

I do something like this:
bMRes += MatrixXd(n, n).setZero()
.selfadjointView<Eigen::Upper>().rankUpdate(bM);
This increments bMRes by bM * bM.transpose(), but roughly twice as fast as the plain bMRes += bM * bM.transpose().
Note that bMRes and bM are of type Map<MatrixXd>.
To optimize things further, I would like to skip the copy (and incrementation) of the Lower part.
In other words, I would like to compute and write only the Upper part, so that the result lives in the Upper part and the Lower part stays 0.
If it is not clear enough, feel free to ask questions.
Thanks in advance.
Florian
If your bMRes is self-adjoint originally, you could use the following code, which only updates the upper half of bMRes.
bMRes.selfadjointView<Eigen::Upper>().rankUpdate(bM);
If not, I think you have to accept that .selfadjointView<>() will always copy the other half when assigned to a MatrixXd.
Compared to computing A*A.transpose() or .rankUpdate(A), the cost of copying half of the result matrix can be ignored when A is reasonably large. So I guess you don't need to optimize your code further.
If you just want to measure the difference, you could use the low-level BLAS APIs. A*A.transpose() is equivalent to gemm(), and .rankUpdate(A) is equivalent to syrk(), but syrk() doesn't copy the other half automatically.
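In case it helps, here is a minimal sketch of the upper-only update. n and k are placeholder sizes, and plain MatrixXd is used instead of Map<MatrixXd> just to keep the example self-contained:
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int n = 4, k = 3;                               // placeholder sizes
    Eigen::MatrixXd bM = Eigen::MatrixXd::Random(n, k);   // the factor
    Eigen::MatrixXd bMRes = Eigen::MatrixXd::Zero(n, n);  // accumulator, treated as self-adjoint

    // Adds bM * bM.transpose() to the Upper triangle only;
    // the Lower triangle is never written (here it stays 0).
    bMRes.selfadjointView<Eigen::Upper>().rankUpdate(bM);

    // Expand to the full symmetric matrix only when it is actually needed.
    Eigen::MatrixXd full = bMRes.selfadjointView<Eigen::Upper>();

    std::cout << "upper-only accumulator:\n" << bMRes << "\n\n"
              << "expanded result:\n" << full << "\n";
}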

Analyzing time complexity (Poly log vs polynomial)

Say an algorithm runs in time
5n^3 + 8n^2 (lg n)^4
Which is the first-order term? Would it be the one with the poly log or the polynomial?
For any two constants a > 0, b > 0, log(n)^a is in o(n^b) (note the small-o notation here).
One way to prove this claim is to examine what happens when we apply a monotonically increasing function to both sides: the log function.
log(log(n)^a) = a * log(log(n))
log(n^b) = b * log(n)
Since constants can be ignored in asymptotic comparisons, asking which is bigger, log(n)^a or n^b, comes down to asking which is bigger, log(log(n)) or log(n). That question is much easier to answer: log(n) grows strictly faster than log(log(n)), so n^b dominates log(n)^a.
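Applying that to your expression with a = 4 and b = 1: (lg n)^4 is in o(n), so 8n^2 (lg n)^4 is in o(n^2 · n) = o(n^3). The 5n^3 term therefore dominates, and the total running time is Θ(n^3).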