Singly-linked set computational complexity - abstract-data-type

If you optimally implemented a Set using a singly-linked list structure, what would the computational complexity (big-O) be for the intersection operation?
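For concreteness, here is a minimal, hypothetical Python sketch (names are illustrative, not from the question) of one common reading of an "optimal" list-backed set: if the list is kept sorted, intersection can be done with a merge-style walk in O(n + m), whereas with unsorted lists the straightforward nested scan is O(n * m).

```python
class Node:
    """Minimal singly-linked list node (illustrative)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def intersection(a, b):
    """Intersect two sorted singly-linked lists with a merge-style walk: O(n + m).
    If the lists are unsorted, the straightforward nested scan is O(n * m) instead."""
    result_head = result_tail = None
    while a and b:
        if a.value == b.value:
            node = Node(a.value)
            if result_tail is None:
                result_head = result_tail = node
            else:
                result_tail.next = node
                result_tail = node
            a, b = a.next, b.next
        elif a.value < b.value:
            a = a.next
        else:
            b = b.next
    return result_head
```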

Related

computational complexity of higher order derivatives with AD in jax

Let f: R -> R be an infinitely differentiable function. What is the computational complexity of calculating the first n derivatives of f in JAX? A naive chain rule would suggest that each multiplication gives a factor-of-2 increase, hence the nth derivative would require at least 2^n more operations. I imagine though that clever manipulation of formal series would reduce the number of required calculations and eliminate duplications, especially if the derivatives are JAX-jitted? Is there a difference between the JAX, TensorFlow and Torch implementations?
https://openreview.net/forum?id=SkxEF3FNPH discusses this topic, but doesn't provide a computational complexity.
What is the computational complexity of calculating the first n derivatives of f in Jax?
There's not much you can say in general about the computational complexity of Nth derivatives. For example, with a function like jnp.sin, the Nth derivative is O[1], oscillating between negative and positive sin and cos calls as N grows. For an order-k polynomial, the Nth derivative is O[0] for N > k (it is identically zero). Other functions may have complexity that is linear, polynomial, or even exponential in N, depending on the operations they contain.
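To make the sin example concrete, here is a minimal sketch (not from the original answer) of the naive approach of nesting jax.grad; for jnp.sin the result stays cheap because the derivatives simply cycle through ±sin and ±cos:

```python
import jax
import jax.numpy as jnp

# Naive higher-order differentiation: nest jax.grad n times.
# For jnp.sin this stays cheap (derivatives cycle through ±sin/±cos),
# but for other functions the traced expression can grow quickly with the order.
d1 = jax.grad(jnp.sin)   # cos(x)
d2 = jax.grad(d1)        # -sin(x)
d3 = jax.grad(d2)        # -cos(x)
print(d3(1.0))           # ≈ -0.5403
```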
I imagine though that clever manipulation of formal series would reduce the number of required calculations and eliminate duplications, especially if the derivatives are JAX-jitted
You imagine correctly! One implementation of this idea is the jax.experimental.jet module, which is an experimental transform designed for computing higher-order derivatives efficiently and accurately. It doesn't cover all JAX functions, but it may be complete enough to do what you have in mind.
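For reference, a minimal sketch of the documented jet(fun, primals, series) entry point, which propagates a truncated Taylor series through the function in a single pass (example values are illustrative):

```python
import jax.numpy as jnp
from jax.experimental.jet import jet

# Taylor-mode AD: push a truncated input series through f in one pass,
# rather than nesting jax.grad n times.
f = lambda x: x ** 3

# The input series (1., 0.) encodes the path x(t) = 0.5 + 1*t + 0*t**2.
primal_out, coeffs = jet(f, (0.5,), ((1., 0.),))
# primal_out == f(0.5); coeffs holds the higher-order Taylor coefficients of
# f along that path (see the jet docs for the exact factorial convention).
```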

What is the difference between complexity classes and big o notation?

What is the complexity class and the big-O for a given function?
Are these the same thing?
E.g.: n^2 + n
Thanks
Complexity classes and big-O notation are not the same thing. Big-O notation is just a notation to communicate the asymptotic behavior of a function; that is, O(f(n)) is the set of all functions that are upper bounded by c*f(n) for all n > N, for some constants c and N. So, in your example, we'd say that n^2 + n is in O(n^2), because for all n >= 1, n^2 + n <= 2n^2.
Complexity classes, on the other hand, are classes of languages, which we can think of as decision problems (e.g., decide whether some object X has a given property). A complexity class describes how much computational power is required to solve the decision problems it contains. For example, say we want to decide whether an array of n numbers is sorted in increasing order. Since we can do so by simply scanning the items one at a time and making sure there is no decrease, it takes about n steps to solve the decision problem, so this problem is in the class P, which contains all languages decidable by polynomial-time algorithms. Note that this is a property of the problem, not of a given function or of a particular algorithm: you could also decide whether a list is sorted in some far more wasteful way, say by comparing it against every possible ordering of its elements, but that inefficiency doesn't change the classification. Membership in a complexity class is determined by the existence of a sufficiently efficient algorithm for the decision problem.
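For illustration, a minimal sketch (a hypothetical helper, not part of the question) of the linear-time scan described above:

```python
def is_sorted(a):
    """Decide 'is this list sorted in increasing order?' with a single O(n) scan."""
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))
```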

How Fast is Convolution Using FFT

I read that in order to compute the convolution of two signals x, y (1D for example) of lengths N and M, the naïve method takes O(NM).
However, the FFT can be used to compute FFT^-1(FFT(x)FFT(y)), which takes O(N log N) in the case where N > M.
I wonder why this complexity is considered better than the former one, as M isn't necessarily bigger than log(N). Moreover, M is very often the length of a filter, which doesn't scale with the signal being filtered, and will actually give us a complexity closer to O(N) than to O(N^2).
Fast convolution in the frequency domain is typically more efficient than direct convolution when the size of the filter exceeds a particular threshold. So for relatively small filters direct convolution is more efficient, whereas for longer filters there comes a point at which FFT-based convolution is more efficient. The actual value of M for this "tipping point" depends on a lot of factors, but it's typically somewhere between 10 and 100.
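A minimal NumPy sketch of the FFT-based approach, with illustrative sizes: zero-pad both signals to length N + M - 1, multiply the spectra, and invert.

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT: O((N + M) log(N + M)) vs. O(N * M) direct."""
    n = len(x) + len(h) - 1           # full linear-convolution length
    X = np.fft.rfft(x, n)             # zero-padded spectra
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)

x = np.random.randn(4096)             # long signal (N)
h = np.random.randn(64)               # short filter (M)
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))  # matches direct convolution
```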

Ansys LS-DYNA explicit dynamics element COMBI165 length is affecting the solution

I'm analysing some dynamic models using Ansys LS-DYNA explicit dynamics.
My model only has beam elements (BEAM161), concentrated masses (MASS166) and springs/dampers (COMBI165).
The problem is that the length of the springs/dampers is affecting the results, which shouldn't happen: according to the manual, you define COMBI165's stiffness or damping coefficient, which is multiplied by the relative displacement/velocity between the nodes to obtain the reaction force.
For my particular problem, the length of the spring/damper is not conditioning the time-step, so that is not the source of the difference.

Using red-black tree for non-linear optimization

Suppose we have a finite data set {(x_i, y_i)}.
I am looking for an efficient data structure for this data set such that, given a, b, it is possible to efficiently find x, y with x > a, y > b and x*y minimal.
Can it be done using a red-black tree?
Can we do it in complexity O(log n)?
Well, without a preprocessing step, of course you can't do it in O(log n) with any data structure, because that's not enough time to look at all the data. So I'll assume you mean "after preprocessing".
Red-black trees are intended for single-dimension sorting, and so are unlikely to be of much help here. I would expect a kD-tree to work well here, using a nearest-neighbor-esque query: You perform a depth-first traversal of the tree, skipping branches whose bounding rectangle either violates the x and y conditions or cannot contain a lower product than the lowest admissible product already found. To speed things up further, each node in the kD-tree could additionally hold the lowest product among any of its descendants; intuitively I would expect that to have a practical benefit, but not to improve the worst-case time complexity.
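As a rough sketch of that pruned depth-first query (assuming non-negative coordinates so the bounding-box product bound is valid; class and function names are illustrative):

```python
import math

class KDNode:
    """Node of a 2-d tree over points (x, y); each node also stores its subtree's bounding box."""
    def __init__(self, points, depth=0):
        axis = depth % 2
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        self.point = points[mid]
        self.min_x, self.max_x = min(p[0] for p in points), max(p[0] for p in points)
        self.min_y, self.max_y = min(p[1] for p in points), max(p[1] for p in points)
        self.left = KDNode(points[:mid], depth + 1) if points[:mid] else None
        self.right = KDNode(points[mid + 1:], depth + 1) if points[mid + 1:] else None

def query(node, a, b, best=(math.inf, None)):
    """Find the point with x > a, y > b minimizing x*y, pruning whole subtrees by bounding box."""
    if node is None:
        return best
    # Prune if no point in this box can satisfy the constraints...
    if node.max_x <= a or node.max_y <= b:
        return best
    # ...or if the box's best possible admissible product can't beat the current best
    # (this bound assumes non-negative coordinates).
    if max(node.min_x, a) * max(node.min_y, b) >= best[0]:
        return best
    x, y = node.point
    if x > a and y > b and x * y < best[0]:
        best = (x * y, (x, y))
    for child in (node.left, node.right):
        best = query(child, a, b, best)
    return best

points = [(3.0, 7.0), (5.0, 2.0), (6.0, 4.0), (8.0, 1.5)]
tree = KDNode(points)
print(query(tree, a=4.0, b=1.0))   # -> (10.0, (5.0, 2.0))
```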
Incidentally, red-black trees aren't generally useful for preprocessed data. They're designed for efficient dynamic updates, which of course you won't be doing on a per-query basis. They offer the same asymptotic depth guarantees as other sorted trees, but with a higher constant factor.