What's the difference between the worst case and average case of a problem? [closed] - cryptography

I'm studying reductions for some hard lattice problems. What is the meaning of a "worst-case to average-case reduction"? For example, the paper "Worst-case to Average-case Reductions based on Gaussian Measures" gives a reduction from the worst-case INCGDD problem to the average-case SIS problem. What does that mean?

A problem has average-case time complexity C if there exists an algorithm that solves it in time C on average, when the inputs are drawn at random from some specified distribution. Formalizing this precisely is tricky, see here.
A problem has a worst-case to average-case reduction if you can show the following: if an algorithm solving the problem with average-case complexity C exists, then that algorithm can also be applied to solve the worst case, i.e. every input, with the same complexity (modulo a polynomial factor). For your example, this means that an algorithm solving SIS on uniformly random instances would yield an algorithm solving INCGDD on arbitrary instances, so random SIS instances are at least as hard as the hardest INCGDD instances.
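To make "average case" concrete, here is a minimal sketch of an SIS instance in Python/NumPy, with toy parameters chosen purely for illustration (they are not from the paper). The average-case aspect is that the matrix A is drawn uniformly at random; a solution is any short nonzero integer vector z with A·z ≡ 0 (mod q).

```python
import numpy as np

# Toy SIS parameters, chosen for illustration only; real instances are far larger.
n, m, q, beta = 4, 16, 97, 10

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))  # average case: A is uniform mod q

def is_sis_solution(A, z, q, beta):
    """Check that z is nonzero, short, and satisfies A z = 0 (mod q)."""
    z = np.asarray(z)
    return (np.any(z != 0)
            and np.linalg.norm(z) <= beta
            and np.all(A @ z % q == 0))

# Verifying a candidate is easy; *finding* a short z is the hard part.
# The reduction says: doing so for random A is at least as hard as
# solving worst-case lattice problems such as INCGDD.
print(is_sis_solution(A, np.zeros(m, dtype=int), q, beta))  # False: zero vector excluded
```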

Related

Is there a scientific field dedicated to the quantification of intelligent behavior? [closed]

One of the biggest struggles in ML research is the creation of objective functions that capture the researcher's goals. Especially when talking about generalizable AI, the definition of the objective function is very tricky.
This excellent paper, for instance, attempts to define an objective function that rewards an agent's curiosity.
If we could measure intelligent behavior well, it would perhaps be possible to perform an optimization in which the parameters of a simulation such as a cellular automaton are optimized to maximize the emergence of increasingly intelligent behavior.
I vaguely remember having come across a group of cross-discipline researchers who were attempting to use the information theory concept of entropy to measure intelligent behavior but cannot find any resources about it now. So is there a scientific field dedicated to the quantification of intelligent behavior?
The field is called Integrated Information Theory (IIT), initially proposed by Giulio Tononi. It attempts to quantify the consciousness of a system by formally defining the phenomenological experience of consciousness and computing a value, Phi, meant as a proxy for "consciousness".
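Computing IIT's Phi is quite involved, but as a toy illustration of the entropy idea mentioned in the question, one could score how varied an agent's behavior is via the Shannon entropy of its empirical action distribution. This is a made-up proxy for the sake of illustration, not IIT's actual measure:

```python
import numpy as np
from collections import Counter

def action_entropy(actions):
    """Shannon entropy (in bits) of an agent's empirical action distribution."""
    counts = np.array(list(Counter(actions).values()), dtype=float)
    p = counts / counts.sum()
    return max(0.0, float(-(p * np.log2(p)).sum()))  # clamp -0.0 to 0.0

# A repetitive agent scores low; a more varied agent scores higher.
print(action_entropy("AAAAAAAA"))  # 0.0
print(action_entropy("ABCDABCD"))  # 2.0
```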

Efficiently blocking invalid solutions [closed]

What's the best way to block invalid solutions in OptaPlanner? I know you can provide a negative hard score with HardSoftScore, but it might still take a long time exploring invalid solutions before arriving at a valid one.
For example, when working out how many packages will fit in a truck, if the total size of all packages exceeds the capacity of the truck, you don't want to explore any solutions in that space at all.
I think this runs counter to the way OptaPlanner is expected to work: you start with many bad solutions and slowly converge towards a good one. Vetoing a solution doesn't give OptaPlanner any information on why it was vetoed, and it's also possible that a better solution can only be found after traversing through a vetoable solution.
Instead, consider whether your score constraints are causing a score trap. Instead of using a fixed -1 hard score for a vetoable solution, use a score that's proportional to how bad that solution is.
In my example, this means that instead of marking over-capacity solutions as hard -1, I should penalize them in proportion to how far over capacity they are, using the matchWeigher form of penalize, as sketched below.
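OptaPlanner itself is Java, but the score-trap idea is language-independent. Here is a minimal Python sketch (all names hypothetical) contrasting a fixed veto penalty with a proportional one; the proportional score lets a local-search solver tell which of two invalid solutions is closer to feasibility:

```python
def fixed_penalty(total_size, capacity):
    """Score trap: every over-capacity solution looks equally bad."""
    return -1 if total_size > capacity else 0

def proportional_penalty(total_size, capacity):
    """Penalty grows with the amount of over-capacity, guiding the search."""
    return -(total_size - capacity) if total_size > capacity else 0

capacity = 100
for total in (150, 110, 100):
    print(total, fixed_penalty(total, capacity), proportional_penalty(total, capacity))
# 150 -1 -50
# 110 -1 -10   <- the proportional score reveals that 110 improves on 150
# 100  0   0
```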

Finding the common eigenvectors of two matrices [closed]

I want to find the common eigenvectors of two symmetric matrices with the same dimensions in R.
Given two matrices L1 and L2, I am looking for a vector X such that
L1 * X = lambda * L2 * X
where lambda is the eigenvalue.
The term you are looking for is Generalized Eigenvalue Problem. This is a well-researched linear algebra problem.
In terms of implementation, I suggest looking at a special Netlib section, where I think your matrices will satisfy the requirements of the Generalized Symmetric Definite Eigenproblems solvers.
Intel MKL provides this functionality, directly callable from C and Fortran, and, as far as I know, from Python as well.
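For example, in Python, SciPy exposes the LAPACK routine for this through scipy.linalg.eigh, which accepts a second matrix and solves L1 x = lambda L2 x directly (it requires L2 to be positive definite). The matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

# Two symmetric example matrices; L2 must be positive definite for eigh(a, b).
L1 = np.array([[2.0, 1.0],
               [1.0, 3.0]])
L2 = np.array([[4.0, 1.0],
               [1.0, 2.0]])

# Solves the generalized problem L1 x = lambda * L2 x.
eigenvalues, eigenvectors = eigh(L1, L2)  # eigenvectors are the columns

# Verify the generalized eigenvalue equation for each pair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(L1 @ x, lam * (L2 @ x))
print(eigenvalues)
```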

How does initialization of neural networks affect convergence? [closed]

If a neural network is initialized with small random weights and run for a very large number of iterations (20k or more), can the final accuracy differ much between reruns of the same model (a difference on the order of 10^-4 is okay)?
Yes, it can differ. It will usually not be the case, but in theory it can, and sometimes it does. This is due to the randomness in the initialization and in the feeding order during training, both of which can lead the optimization to end up in a different local minimum of your cost function each time. This is why researchers have developed initialization techniques that are supposed to be better than others, such as Xavier initialization.
It's good practice, if you have the time, to train several times, just to see whether your results differ much between runs, as in the sketch below.
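As a quick illustration (using scikit-learn; the dataset and hyperparameters are arbitrary choices), training the same small network several times with different seeds can produce slightly different final accuracies:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same data, same architecture; only the seed driving the weight
# initialization and batch shuffling changes between runs.
for seed in range(3):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    clf.fit(X_train, y_train)
    print(f"seed={seed}  test accuracy={clf.score(X_test, y_test):.4f}")
```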

What do these questions mean and how do I approach them? [closed]

I am currently writing documentation for my finished product; however, I do not understand what the question wants by asking for:
Qualitative assessment of performance
Quantitative assessment of performance
A qualitative assessment of performance is an assessment that doesn't use specific measurements but compares the performance with the expectations or needs of the user. For example:
The performance of the import is low, but acceptable for the intended use.
The application reacts to user input so quickly in most cases that no waiting time is perceived.
A quantitative assessment is based on measurements:
The import processes 1 million records per hour.
98% of all user interactions are processed within 0.2 seconds.
More detailed information, such as standard deviations or a plot of a measure against some variable, would also count as a quantitative assessment.
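For instance, a percentile figure like the one above could be computed from raw latency measurements; the numbers in this sketch are made up:

```python
import numpy as np

# Hypothetical latency measurements (in seconds) for user interactions.
latencies = np.array([0.05, 0.08, 0.11, 0.12, 0.15, 0.18, 0.19, 0.25, 0.40, 1.20])

print(f"98th percentile latency: {np.percentile(latencies, 98):.2f}s")
print(f"{np.mean(latencies <= 0.2) * 100:.0f}% of interactions processed "
      f"within 0.2 seconds")
```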
Note that both assessments are important. The quantitative one is great for comparisons, for example when you want to compare the performance of two versions of an application.
The qualitative one is what really matters in the end. It often doesn't matter how many millions of records you process per millisecond; the question is whether the user is satisfied, and in most cases the user doesn't base their feelings on a measurement, but on ... well ... their feelings.