I want to find the common eigenvectors of two symmetric matrices with the same dimensions in R.
Given two matrices L1 and L2, I am looking for a vector X such that
L1 * X = lambda * L2 * X,
where lambda is the eigenvalue.
The term you are looking for is the Generalized Eigenvalue Problem. This is a well-researched linear algebra problem.
In terms of implementation, I suggest looking at the relevant Netlib section, where I think your matrices will satisfy the requirements of the Generalized Symmetric Definite Eigenproblems solvers.
Intel MKL provides this functionality directly from C and Fortran, and, as far as I know, from Python.
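For illustration, here is a minimal sketch in Python using SciPy, whose `scipy.linalg.eigh` wraps the same LAPACK generalized symmetric-definite drivers. The matrices here are made-up examples, and L2 is assumed positive definite, as that solver class requires:

```python
import numpy as np
from scipy.linalg import eigh

# Build two symmetric matrices; L2 must be positive definite for
# eigh's generalized mode (the "Generalized Symmetric Definite" case).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
L1 = A + A.T                      # symmetric
L2 = B @ B.T + 5 * np.eye(5)      # symmetric positive definite

# Solve L1 @ x = lam * L2 @ x
lam, X = eigh(L1, L2)

# Check the generalized eigen-relation for the first pair
print(np.allclose(L1 @ X[:, 0], lam[0] * (L2 @ X[:, 0])))
```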
I am working on a classification problem with three classes: COVID-19, pneumonia, and healthy lungs.
I have 3000 images for each class.
Can I apply vision transformers to this image classification task instead of a normal CNN?
Or are there any prerequisites for applying them? I am new to transformers.
I have tried several CNNs, and they have achieved 95% accuracy so far.
On small to medium datasets, ViTs generally don't give comparable performance. On really big datasets, however, they have outperformed CNNs. More information here: https://www.v7labs.com/blog/vision-transformer-guide
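If you do try a ViT on a dataset of this size (~9000 images), the usual prerequisite is to start from pretrained weights rather than training from scratch. A minimal fine-tuning sketch with torchvision; the model choice and hyperparameters are illustrative, not tuned for this task:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Start from ImageNet-pretrained weights: trained from scratch on a
# small dataset, a ViT would likely underperform a CNN, but a
# fine-tuned pretrained ViT can be competitive.
weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)

# Replace the 1000-class ImageNet head with a 3-class head
# (COVID-19 / pneumonia / healthy).
model.heads.head = nn.Linear(model.heads.head.in_features, 3)

# The bundled transforms resize/normalize inputs as the model expects.
preprocess = weights.transforms()

# One dummy training step to show the shapes line up.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
images = torch.randn(4, 3, 224, 224)   # stand-in for a real batch
labels = torch.randint(0, 3, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```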
One of the biggest struggles in ML research is the creation of objective functions that capture the researcher's goals. Especially for generalizable AI, defining the objective function is very tricky.
This excellent paper, for instance, attempts to define an objective function that rewards an agent's curiosity.
If we could measure intelligent behavior well, it might be possible to perform an optimization in which the parameters of a simulation, such as a cellular automaton, are tuned to maximize the emergence of increasingly intelligent behavior.
I vaguely remember coming across a group of cross-disciplinary researchers who were attempting to use the information-theoretic concept of entropy to measure intelligent behavior, but I cannot find any resources about it now. So is there a scientific field dedicated to the quantification of intelligent behavior?
The field is called Integrated Information Theory, initially proposed by Giulio Tononi. It attempts to quantify the consciousness of systems by formally defining the phenomenological experience of consciousness and computing a value, Phi, meant as a proxy for "consciousness".
I'm studying reductions for some hard lattice problems. What is the meaning of "worst-case to average-case reduction"? For example, the paper "Worst-case to Average-case Reductions based on Gaussian Measures" gives a reduction from the worst-case INCGDD problem to the average-case SIS problem; what does that mean?
A problem has average-case time complexity C if there exists an algorithm that solves the problem in time C on average, when the inputs are chosen randomly according to some distribution. Formalizing this is tricky; see here.
A problem has a worst-case to average-case reduction if you can show the following: if an algorithm solving the problem with average-case complexity C exists, then this algorithm can be applied to solve every instance, i.e. the worst case, with the same complexity (up to a polynomial factor).
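Schematically, such a reduction is a map from arbitrary instances to random-looking ones. This is the generic template, not the specific construction of the Gaussian-measures paper:

```latex
% Generic shape of a worst-case to average-case reduction:
% R maps an arbitrary (worst-case) instance x to instances
% distributed (close to) the average-case distribution D, and a
% solution of the mapped instance can be turned back into one for x.
\[
  \underbrace{x}_{\text{any instance}}
  \;\xrightarrow{\;R\;}\;
  \underbrace{R(x) \sim D}_{\text{average-case instance}}
  \;\xrightarrow{\;\text{avg-case solver } A\;}\;
  y
  \;\xrightarrow{\;\text{recover}\;}\;
  \text{solution of } x
\]
\[
  \text{Hence: } A \text{ solves } D\text{-random instances in time } T
  \;\Longrightarrow\;
  \text{every instance is solvable in time } \mathrm{poly}(n)\cdot T .
\]
```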
I'd like to compute random tessellations of regions in a hyperbolic space.
In the Euclidean plane I get good results by scattering random points and performing a periodic Delaunay triangulation using CGAL.
For the hyperbolic case, though, there is nothing available in the library yet, even though work on the implementation of non-Euclidean triangulations and meshes in CGAL was already ongoing in 2011, and essentially ready by 2014.
A purportedly "easy" recipe for implementing the hyperbolic triangulation has long been available (arxiv.org:0903.3287), but I don't think it is trivial to implement reliably.
Is there any other implementation of hyperbolic Delaunay triangulations, preferably with periodic boundary conditions?
The code that Marc mentions computes periodic triangulations (periodic along the translations corresponding to the hyperbolic octagon), following the paper soon to be presented at SoCG'17 (see https://hal.inria.fr/hal-01411415 for a preliminary version).
We also have code that computes Delaunay triangulations in the hyperbolic plane, as presented in our JoCG paper (see http://jocg.org/index.php/jocg/article/view/141).
The branch is currently private on GitHub, but we will make it public soon. Some parts need polishing, though, and the documentation is not yet written.
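Until that code is public, the Euclidean periodic workflow the question describes can be approximated without CGAL. A rough sketch in Python using SciPy's (non-periodic) Delaunay plus the standard trick of tiling translated copies of the points; point counts and the unit-square domain are illustrative only:

```python
import numpy as np
from scipy.spatial import Delaunay

# Approximate a periodic Delaunay triangulation of the unit square:
# triangulate 3x3 translated copies of the points, then keep the
# simplices that touch the central copy. (CGAL's periodic
# triangulation does this properly; this is only a sketch.)
rng = np.random.default_rng(42)
pts = rng.random((200, 2))                     # points in [0, 1)^2

shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
tiled = np.vstack([pts + np.array(s) for s in shifts])

tri = Delaunay(tiled)

# Shift (0, 0) is the 5th entry of `shifts`, so indices 4*N .. 5*N-1
# are the central copy; keep simplices with at least one such vertex.
n = len(pts)
central = set(range(4 * n, 5 * n))
periodic_simplices = [s for s in tri.simplices if central & set(s)]
print(len(periodic_simplices), "simplices touching the central copy")
```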
Is it possible to convert an expanded blend to a simple, lightweight vector shape, without all the in-between paths of all n steps? It seems like a complicated object to work with, since the computer has to recalculate all the changes made to the inner paths.
Go to Object > Blend > Expand. Then, with all of the steps selected, go to Pathfinder and merge all the shapes together.
I don't believe there is a way to convert it back to a single vector shape, as Illustrator would have to translate the blend into either a linear gradient, a radial gradient, or a gradient mesh.
The beauty of blends is that they aren't bound by the same rules that the gradient and gradient-mesh tools follow, so you can get some really awesome color blends across complicated shapes.