Multiplying big Theta? - time-complexity

I'm doing some practice problems and I'm confused by this question. Where did O(n^2.5) come from? Are they multiplying the big Theta bounds somehow? I'm lost.

Think about it this way: (x*y)*(x/y) is x^2, right? And sqrt(x) is x^0.5. So add the exponents together and you get x^2.5.
In the first case, the log n can be simplified out, since it appears both as a multiplier and a divisor and therefore cancels.
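For illustration only (the actual functions aren't quoted in the question), a product of the shape $\Theta(n^2 \log n) \cdot \Theta(\sqrt{n} / \log n)$ works exactly this way: the $\log n$ factors cancel, and adding the exponents of $n$ gives $\Theta(n^2 \cdot n^{0.5}) = \Theta(n^{2.5})$.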


Multi-objective optimization but the function equation is unknown?

Firstly, I am totally out of my expertise zone so please bear with me.
I developed a fluid dynamic engine with 5 exposed parameters (say A,B,C,D,E). When you give this engine these 5 parameters, it does magic and gives out a value 'Z'.
I want to write a script which can explore which combinations of A-E give the lowest (or close to lowest) value of Z.
I know optimization algorithms exist, but in all the examples I've found while searching, they work with some explicit function.
So I guess my function would simply be 'minimize Z'? But where do A-E go?
Not really an answer, but some questions and ideas that might help you think through the best way to address this. We have no understanding of how big a range of values needs to be explored for those parameters, or how Z behaves, so this is very vague...
If you look at the values of Z for given values of A...E, does the value of Z jump around a lot for small changes in the parameter values, or does the Z value change reasonably smoothly?
If the Z value is not too erratic you could try some kind of gradient descent approach, using calculated values of Z at nearby parameter settings to approximate the gradient. Suppose changing the value of 'A' from 1 to 2 improves Z more than a similar-sized change in any other parameter: then try other values of A while keeping the other parameters fixed until you find the value of A that gives the best Z. Then change the other parameter values to see which one gives the steepest descent and find a better value for that parameter. Repeat this process until you can't find any improvement, and you will have found a (local) minimum. You could then start at a different place in your parameter space and try again - you will probably find several local minima, and may just choose the best of those. Not provably optimal, but it may be good enough. Of course you can get clever and use things like conjugate gradients, Newton-Raphson or similar if Z is smooth enough.
If the Z values are very erratic, then you might have to just sample possible combinations of A...E, compute Z for each, and choose the best you can find. Again you might do that in some systematic way (e.g. points on a grid in your parameter space), entirely at random, or a combination of both.
If you find that there are 'clusters' of good solutions with similar values of the parameters then maybe some kind of local search would help - the idea is that there is often a better solution in the local neighbourhood of a known good solution. So maybe try perturbing your parameter values a bit from a known solution to see if that can lead to a better solution - either by some gradient descent method or by random sampling.
Unfortunately, if your Z calculation is complex, then any method using it as a black box will likely be slow as it will need to be re-evaluated many times.
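As a very rough sketch of the sampling-plus-local-refinement idea (in Python, treating the engine purely as a black box; run_engine and the parameter bounds below are made-up placeholders you would swap for your real engine and ranges):

import random

def run_engine(a, b, c, d, e):
    # placeholder black box -- replace with a call into your real engine
    return (a - 1) ** 2 + (b + 2) ** 2 + c * c + d * d + e * e

BOUNDS = [(-10, 10)] * 5          # one (low, high) pair per parameter A..E

def random_point():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def local_refine(point, step=0.5, max_sweeps=200):
    # Coordinate search: nudge one parameter at a time, keep any improvement,
    # and shrink the step once a full sweep finds nothing better.
    best = list(point)
    best_z = run_engine(*best)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(best)):
            for delta in (-step, step):
                trial = list(best)
                trial[i] += delta
                z = run_engine(*trial)
                if z < best_z:
                    best, best_z, improved = trial, z, True
        if not improved:
            step /= 2
            if step < 1e-3:
                break
    return best, best_z

# Try a handful of random starting points and keep the overall best.
results = [local_refine(random_point()) for _ in range(20)]
best_params, best_z = min(results, key=lambda r: r[1])
print(best_params, best_z)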
You could use a Genetic Algorithm, where each chromosome is formed from the 5 candidate values of the variables you have to optimize, and your optimization/fitness "function" is the simulation itself outputting Z (which you want to minimize).
Other viable alternatives are the Particle Swarm Optimization algorithm or Ant Colony Optimization. All of those are usable algorithms for that kind of optimization problem.
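If you want to try the Genetic Algorithm route without pulling in a library, a bare-bones sketch might look like this (run_engine and the bounds are the same hypothetical placeholders as in the sketch above; population size, mutation rate and generation count are arbitrary starting values to tune):

import random

def run_engine(a, b, c, d, e):
    # hypothetical placeholder for the real simulation
    return (a - 1) ** 2 + (b + 2) ** 2 + c * c + d * d + e * e

BOUNDS = [(-10, 10)] * 5

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2, scale=0.5):
    # randomly perturb some genes with gaussian noise
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in ind]

def crossover(p1, p2):
    # pick each gene from one of the two parents
    return [random.choice(pair) for pair in zip(p1, p2)]

population = [random_individual() for _ in range(50)]
for generation in range(100):
    population.sort(key=lambda ind: run_engine(*ind))   # fitness = Z, lower is better
    parents = population[:10]                           # keep the 10 best
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = min(population, key=lambda ind: run_engine(*ind))
print(best, run_engine(*best))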

Example for an algorithm with greater than n splits

I was doing the derivation for masters theorem using the tree method and I noticed something.
So we have:
$T(n)=a*T(n/b) + n^c$
From this, we notice that the last level of the tree will have $a^{\log_b n}$ splits, which equals $n^{\log_b a}$.
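(That identity follows by taking $\log_b$ of both sides: $\log_b(a^{\log_b n}) = \log_b n \cdot \log_b a = \log_b(n^{\log_b a})$, so the two expressions are equal.)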
Now, if $a=b$, I get $n$ splits in the last level, which is what I've seen used in quick sort and merge sort, and if $a>b$ I get more than $n$ splits.
Is there a practical example for greater than n splits?
Where we actually repeat operations for elements?
*Also, Math Overflow-style math formatting doesn't seem to work here. Would appreciate it if anyone could help.
The classical divide-and-conquer matrix multiplication would be such an example. The recurrence relation is $T(n) = 8T(n/2) + \Theta(n^2)$. Another would be Strassen's algorithm.
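To make the connection to the question explicit: with $a=8$ and $b=2$, the last level of the recursion tree has $a^{\log_b n} = 8^{\log_2 n} = n^{\log_2 8} = n^3$ leaves, far more than $n$ (here $n$ is the matrix dimension), because every matrix element is reused in many of the block products. Strassen's version gives $7^{\log_2 n} = n^{\log_2 7} \approx n^{2.81}$ leaves instead.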
Math notation is (sadly) limited to only a few stackexchange sites.

Why do SSE integer averaging instructions (PAVGB/PAVGW) add 1 to temporary sum before calculating final result?

I have been working on SSE optimization for a video processing algorithm recently. I need to write exactly the same algorithm in C code to cross-check the correctness of the algorithm. I forgot about this fact several times, which made the results of the two implementations differ.
I can modify the C implementation to make them match, since this difference doesn't matter. But why are these instructions designed like this? Is there any mathematical reason behind it?
The Intel Instructions Reference only mentions this behavior and doesn't explain why. I also tried googling, but couldn't find anything about it.
UPDATE:
Thanks to Paul's answer. I didn't realize that it is a rounding/truncation problem. But since both operands are integers, the only possible fraction is 0.5, and it has two "nearest integers". AFAIK there are several rounding methods for this situation. Why do the instructions round up specifically? Do most related applications need rounding up?
It's to give correct rounding, i.e. round to nearest rather than truncation. In general when you divide by N with integer values you need to do this to get correct rounding:
y = (x + N / 2) / N;
If you just do:
y = x / N;
then you will get a truncated (round to zero) result.
Round to nearest is generally preferred for image processing and DSP type applications.
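A quick way to convince yourself of the effect (a small Python sketch rather than actual SSE, but the per-element arithmetic is the same):

# Per element, PAVGB/PAVGW compute (a + b + 1) >> 1: the +1 turns a
# truncating average into a round-half-up average.
for a in range(4):
    for b in range(4):
        truncated = (a + b) // 2       # plain integer average, drops the .5
        rounded = (a + b + 1) // 2     # PAVG-style: add 1 before halving
        print(a, b, (a + b) / 2, truncated, rounded)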

Fourier Transformation

I've been doing a lot of research on this topic and I'm finally getting somewhere. Below are two complex numbers from the Java code I'm using:
-9771.0 - j2125.0
-16184.09634718744 - j53968.71008512241
I know the amplitude/magnitude can be computed as sqrt(a^2 + b^2), and that is as far as I've gotten. I've read about sample rate, but I'll need a better explanation of that alone and would like to be pointed in the right direction to obtain the knowledge. I've done the power spectrum graph, but I need to do this on paper so I'll know how to obtain the frequency.
Applying a Fourier Transformation to two values is pretty meaningless. You apply it to a series of values (a signal); then frequency starts to make sense. You can't speak about frequency for a series of two values.
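To make the sample-rate connection concrete (a Python/numpy sketch; the 8000 Hz sample rate and the 440 Hz test tone are made-up example values, not taken from your data): the frequency represented by FFT bin k is k * sample_rate / N.

import numpy as np

sample_rate = 8000                       # samples per second (assumed)
n = 1024                                 # number of samples in the window
t = np.arange(n) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)     # made-up 440 Hz test tone

spectrum = np.fft.fft(signal)
magnitudes = np.abs(spectrum)            # sqrt(re^2 + im^2) for each bin

k = int(np.argmax(magnitudes[:n // 2]))  # strongest bin among positive frequencies
frequency = k * sample_rate / n          # bin index -> frequency in Hz
print(frequency)                         # prints something close to 440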

Efficient division by 128, in tsql

As part of implementing a half-life behaviour, I need to perform x = x - x / 128 on around a hundred thousand rows every few days. Is tsql smart enough to do the division by 128 efficiently (as a bit-shift), or is it just as efficient to divide by 130?
If tsql isn't smart enough, is there anything clever that I can do to make it more efficient?
A hundred thousand rows isn't enough for the difference in performance between a divide operation and a shift operation to even be measurable, especially if you only have to do it every few days. Better to spend your time worrying about other issues.
You could use a computed column with the PERSISTED flag to ensure that the values were physically stored and not recomputed every time they were displayed. That could (possibly, depending on your particular circumstances) be more efficient.
More likely you will have problems with integer math. I don't know what your x values are, but if they are also integers, you will want to divide by 128.0 if you don't want the result truncated to an integer.
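To illustrate that pitfall (a Python analogy rather than T-SQL; for positive x, Python's // truncates the same way SQL Server's integer division does):

x = 1000
print(x - x // 128)      # 993       -- integer division, fractional part dropped
print(x - x / 128.0)     # 992.1875  -- floating-point division keeps the fraction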