I do something like this:
bMRes += MatrixXd(n, n).setZero()
             .selfadjointView<Eigen::Upper>().rankUpdate(bM);
This increments bMRes by bM * bM.transpose(), but twice as fast as forming the product directly.
Note that bMRes and bM are of type Map<MatrixXd>.
To optimize things further, I would like to skip the copy (and incrementation) of the Lower part.
In other words, I would like to compute and write only the Upper part, so that the result ends up in the Upper part and zeros stay in the Lower part.
If it is not clear enough, feel free to ask questions.
Thanks in advance.
Florian
If your bMRes is self-adjoint originally, you could use the following code, which only updates the upper half of bMRes.
bMRes.selfadjointView<Eigen::Upper>().rankUpdate(bM);
If not, I think you have to accept that .selfadjointView<>() will always copy the other half when assigned to a MatrixXd.
Compared to A*A.transpose() or .rankUpdate(A), the cost of copying half of the result matrix can be ignored when A is reasonably large, so I guess you don't need to optimize your code further.
If you just want to measure the difference, you could use the low-level BLAS APIs: A*A.transpose() is equivalent to gemm(), and .rankUpdate(A) is equivalent to syrk(), but syrk() doesn't copy the other half automatically.
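For illustration, here is a minimal sketch of the syrk() route (it assumes a CBLAS implementation such as OpenBLAS is linked; the helper name rank_update_upper is made up):

#include <cblas.h>

// C += A * A^T via dsyrk: only the Upper triangle of C is written,
// the Lower triangle is left exactly as it was.
void rank_update_upper(double* C, const double* A, int n, int k) {
    cblas_dsyrk(CblasColMajor, CblasUpper, CblasNoTrans,
                n, k,        // C is n x n, A is n x k
                1.0, A, n,   // alpha = 1, lda = n (column-major)
                1.0, C, n);  // beta = 1 accumulates into C, ldc = n
}

Since a Map<MatrixXd> wraps a contiguous buffer, you could pass bMRes.data() and bM.data() here, at the price of bypassing Eigen.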
I am new to LabVIEW and I am trying to read some code written in LabVIEW. The block diagram is this:
This program feeds x and y functions into the voltage input. It is meant to supply an input voltage in different forms (sine, heart shape, etc.) to the x and y axes of a fast-steering mirror or galvano mirror.
The x and y function controls are for entering a formula for a function, which we then evaluate with the "evaluation single value" function and feed into a DAQ Assistant.
I understand that { 2*(|-Mpi|)/N }*i + -Mpi*pi goes into the x value. However, I don't understand why this kind of formula is used. Why do we need to assign a negative value and then take the absolute value of -M*pi? Also, I don't understand why we need to divide by N and then multiply by i. And finally, why do we need to add -Mpi again? If you can provide any hints about this I would really appreciate it.
This is just a complicated way to write the code/formula. Given what the code looks like (unnecessary wire bends, duplicate loop input tunnels, hidden wires, unnecessary coercion dots, failure to use the appropriate built-in 'negate' function), not much care has been given to writing it. So while it probably yields the correct results, you should not expect it to do so in the most readable way.
To answer your specific questions:
Why do we need to assign a negative value and then take the absolute value
We don't. We can just move the negation immediately before the last addition or change that to a subtraction:
{ 2*(|Mpi|)/N }*i - Mpi*pi
And as @yair pointed out: we are not assigning a value here, we are basically flipping the sign of whatever value the user entered.
Why do we need to divide by N and then multiply by i
This gives you a fraction between 0 and 1, no matter how many steps your for-loop takes. Think of N as a sampling rate: your mirrors will always make the same movement, but a larger N just produces more steps in between.
Why do we need to add -Mpi again
I would strongly assume this is some kind of quick-and-dirty workaround for a bug that was never fixed properly. Looking at the code, it seems this +Mpi*pi was added later in the development process. And while I don't know what the expected values are, I suspect that multiplying only one of the summands by pi is wrong.
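To illustrate what the cleaned-up formula produces, here is a small sketch (M and N stand in for the front-panel controls, their values here are made up, and the suspicious trailing *pi is left out):

#include <cmath>
#include <cstdio>

// For i = 0..N-1, x sweeps linearly from -M*pi towards +M*pi.
// N only controls how many intermediate steps there are,
// not the range of the sweep.
int main() {
    const double M   = 1.0;                 // assumed amplitude control
    const int    N   = 8;                   // assumed loop count
    const double Mpi = M * 3.14159265358979;
    for (int i = 0; i < N; ++i) {
        double x = (2.0 * std::fabs(Mpi) / N) * i - Mpi;
        std::printf("i = %d  x = %f\n", i, x);
    }
    return 0;
}

Doubling N doubles the number of printed points but leaves the endpoints of the sweep unchanged, which is the "sampling rate" behaviour described above.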
Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try to compare job['output/generic/unwrapped_positions'][-1] and job.structure.positions+job.output.total_displacements[-1]. If they deliver the same values, it's definitely fine both ways. If not, you can post the relevant lines in your notebook here.
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacement of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction between two snapshots, job.output.total_displacements will give wrong coordinates. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available for all codes (some codes simply do not provide an output for unwrapped positions).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would be rather "Use job['output/generic/unwrapped_positions'] and if it's not available, use job.structure.positions+job.output.total_displacements but be aware of potential problems you might be running into."
I was reviewing some code from a library for Arduino and saw the following if statement in the main loop:
draw_state++;
if (draw_state >= 14*8)
    draw_state = 0;
draw_state is a uint8_t.
Why is 14*8 written here instead of 112? I initially thought this was done to save space, as 14 and 8 can both be represented by a single byte, but then so can 112.
I can't see why a compiler wouldn't optimize this to 112, since otherwise a multiplication would have to be done on every iteration instead of a comparison against a precomputed value. This looks to me like some form of memory/processing trade-off.
Does anyone have a suggestion as to why this was done?
Note: I had a hard time coming up with a clear title, so suggestions are welcome.
Probably to show explicitly where the number 112 comes from. For example, it could be the number of bits in 14 bytes (of course, I don't know the context of the code, so I could be wrong). That makes it more obvious to a human reader where the value came from than writing just 112.
And as you pointed out, the compiler will probably optimize it, so there will be no multiplication in the machine code.
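To illustrate, a small sketch (the names GLYPH_COUNT and PAGES_PER_GLYPH are made up for the example):

#include <stdint.h>

// 14*8 is an integral constant expression, so the compiler folds it to 112
// at compile time: the generated code compares against the immediate 112
// and contains no runtime multiplication.
const uint8_t GLYPH_COUNT     = 14;
const uint8_t PAGES_PER_GLYPH = 8;

uint8_t draw_state = 0;

void step() {
    draw_state++;
    if (draw_state >= GLYPH_COUNT * PAGES_PER_GLYPH)  // same codegen as >= 112
        draw_state = 0;
}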
I have seen many examples of binary search and many ways to optimize it. Yesterday my lecturer wrote the following code (assume that the first index is 1 and the last is N, where N is the length of the array, and consider it pseudocode):
L := 1;
R := N;
while (L < R)
{
    m := div(R + L, 2);
    if A[m] > x
    {
        L := m + 1;
    }
    else
    {
        R := m;
    }
}
Here we assume that the array is A. The lecturer said that we don't waste time comparing the middle element against x for equality on every iteration, and that another benefit is that if the element is not in the array, the final index tells you where it would be located, so it is optimal. Is he right? I mean, I have seen many kinds of binary search, from Jon Bentley (Programming Pearls) for example, and so on. Is this code really optimal? It is written in Pascal in my case, but it does not depend on the language.
It really depends on whether you find the element. If you don't, this will have saved some comparisons. If you could have found the element within the first couple of hops, then the equality check would have saved the work of all the later comparisons and arithmetic. If all the values in the array are distinct, it's obviously fairly unlikely that you hit the right index early on, but if you have broad swathes of the array containing the same values, that changes the maths.
This approach also means you can't narrow the range quite as much as you would otherwise - this:
R:=m;
would normally be
R:=m-1;
... although that would reasonably rarely make a significant difference.
The important point is that this doesn't change the overall complexity of the algorithm - it's still going to be O(log N).
also benefit is that if element is not in array,index says about where it would be located
That's true whether you check for equality or not. Every binary search implementation I've seen would give that information.
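For concreteness, here is a sketch of the same equality-free scheme (0-based indexing, a half-open range, and written for an ascending array, so the comparison direction is the opposite of the quoted pseudocode):

#include <vector>

// One comparison per iteration; the equality test happens once, at the end.
// Returns the index of x, or the index where x would be inserted.
int search(const std::vector<int>& a, int x) {
    int lo = 0, hi = static_cast<int>(a.size());  // half-open range [lo, hi)
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x)
            lo = mid + 1;  // x can only lie to the right of mid
        else
            hi = mid;      // x is at mid or to its left, so keep mid
    }
    return lo;  // caller tests: lo < (int)a.size() && a[lo] == x
}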
You are given an array of elements, some or all of which are duplicates. Find them in O(n) time and O(1) space. Property of the input: the numbers are in the range 1..n, where n is the length of the array.
If the O(1) storage is a limitation on additional memory, and not an indication that you can't modify the input array, then you can do it by sorting while iterating over the elements: move each misplaced element to its "correct" place; if that place is already occupied by the correct number, then print it as a duplicate, otherwise take the "incorrect" existing content and place it correctly before continuing the iteration. This may in turn require correcting other elements to make space, but there's a stack of at most one, and the total number of correction steps is limited to N; added to the N-step iteration, you get 2N, which is still O(N).
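For concreteness, here is a sketch of that scheme (the function name is mine; it assumes the array may be modified and the values really are 1..n):

#include <cstdio>

// Value v "belongs" at index v - 1. Each swap puts one value into its
// final slot, so there are at most n swaps in total and the whole pass
// is O(n) time with O(1) extra space.
void print_duplicates(int a[], int n) {
    for (int i = 0; i < n; ++i) {
        while (a[i] != i + 1) {           // a[i] is misplaced
            int v = a[i];
            if (a[v - 1] == v) {          // its slot already holds v
                std::printf("Duplicate value: %d\n", v);
                break;                    // each extra copy reports once
            }
            a[i] = a[v - 1];              // displaced value lands here...
            a[v - 1] = v;                 // ...and v goes to its slot
        }
    }
}

Called on {1, 3, 3}, for example, it prints "Duplicate value: 3" exactly once.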
Since both the number of elements in the array and the range of the array are variable based on n, I don't believe you can do this. I may be wrong, personally I doubt it, but I've been wrong before :-)
EDIT: And it looks like I may be wrong again :-) See Tony's answer, I believe he may have nailed it. I'll delete this answer once enough people agree with me, or it gets downvoted too much :-)
If the range were fixed (say, 1..m), you could do it thus:
dim quant[1..m]
for i in 1..m:
    quant[i] = 0
for i in 1..size(array):
    quant[array[i]] = quant[array[i]] + 1
for i in 1..m:
    if quant[i] > 1:
        print "Duplicate value: " + i
This works because you can often trade space for time in an algorithm. But since the range here also grows with the input size, that usual space-for-time trade-off is not available.