Array multiplication and matrix inversion with VBA

I am trying to do some calculations with arrays.
For example, I want to solve Ax = y, where A is a square matrix and y is a column vector. In VBA, A is a two-dimensional array and y is a one-dimensional array. I use the following code to do so, but it does not work:
x = WorksheetFunction.MMult(WorksheetFunction.MInverse(A), y)
Where did I go wrong? Thanks!

You could be making one or more of several mistakes:
Arrays not declared as Variant (most WorksheetFunction calls won't work if the data type is anything other than Variant).
Dimensions of A and y don't match up as they need to for matrix multiplication.
In particular, it won't work if y is dimensioned (1, 2) instead of (2, 1) as in the example below.
etc. It could be anything, really; you don't tell us what error message you get.
Here's an example that works:
Dim A As Variant
Dim y As Variant
Dim x As Variant
ReDim y(1 To 2, 1 To 1)   '2 x 1 column vector
y(1, 1) = 2
y(2, 1) = 3
ReDim A(1 To 2, 1 To 2)   '2 x 2 coefficient matrix
A(1, 1) = 3
A(2, 1) = 1
A(1, 2) = 4
A(2, 2) = 2
x = WorksheetFunction.MMult(WorksheetFunction.MInverse(A), y)
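Note that the result x comes back as a 1-based, 2 x 1 Variant array, so you can read the solution out of it like this (and, if you like, write it to any 2 x 1 range):
Debug.Print x(1, 1)   'first component of the solution
Debug.Print x(2, 1)   'second component of the solution
'Range("D1:D2").Value = x   'optionally write it back to the sheet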

Let matrix A (3 x 3) be an array in Range("A1:C3"), matrix y (3 x 1) be an array in Range("E1:E3"), and matrix x (3 x 1) be an array in Range("G1:G3"). Then you can try this simple program:
Range("G1:G3") = WorksheetFunction.MMult(WorksheetFunction.MInverse(Range("A1:C3")), Range("E1:E3"))
By using the same procedure, you can multiply a matrix (n x m) by a matrix (m x q), provided the inner dimensions match. Of course, for cleaner code you should declare the variables first, as in the sketch below. I hope this answer can help you.
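A minimal sketch of that, with the variables declared as Variant and the same ranges as above:
Dim A As Variant, y As Variant, x As Variant
A = Range("A1:C3").Value   '3 x 3 coefficient matrix
y = Range("E1:E3").Value   '3 x 1 right-hand side
x = WorksheetFunction.MMult(WorksheetFunction.MInverse(A), y)
Range("G1:G3").Value = x   'write the solution back to the sheet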

How to do matrix-wise multiply in Tensorflow?

I would appreciate some help with the following:
Given two tensors with shapes
A = bsz x a_len x dim
B = bsz x b_len x dim
I would like to do a matrix-wise, element-wise multiply such that each length-dim vector of A is multiplied element-wise with each length-dim vector of B.
The output should have shape:
bsz x a_len x b_len x dim
How can I do this in Tensorflow?
Thanks in advance!
Have you looked at tf.einsum?
https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/einsum
Does tf.einsum('abd,acd->abcd', A, B) give you what you're looking for?

When do floating-point rounding errors occur? [duplicate]

This question was closed as a duplicate of: Is floating point math broken?
As I was debugging my VBA code, I came across this weird phenomenon:
This loop
Dim x,y as Double
x = 0.7
y = 0.1
For x = x - y To x + y Step y
Next x
runs only twice!
I tried many variations of this code to nail down the problem, and here is what I came up with:
Replacing the loop boundaries with simple numbers (0.6 to 0.8) - helped.
Replacing variables with numbers (all the combinations) - didn't help.
Replacing the for-loop with do while/until loops - helped.
Replacing the values of x and y (y = 0.01, 0.3, 0.4, 0.5, 0.7, 0.8, 0.9 - helped; y = 0.2, 0.6 - didn't help; x = 1, 2, 3 - helped; x = 4, 5, 6, 7, 8, 9 - didn't help).
Converting the Double to Decimal with CDec() - helped.
Using the Currency data type instead of Double - helped.
So what we have here is a floating-point rounding error that happens under mysterious conditions.
What I'm trying to find out is what those conditions are, so we can avoid them.
Who will unveil this mystery?
(Pardon my English, it's not my mother tongue).
GD Falcon,
Generally, it is not advisable to drive a For...Next loop with Double, Decimal or Currency variables, as they provide a level of uncertainty in their accuracy. It is this inaccuracy that is wreaking havoc on your code: the actual stop condition (the point where x - y plus n * y equals x + y) is, in terms of absolutes, never met exactly unless you limit the number of decimals used.
It is generally considered better practice to use Integer (or Long) variables in a For...Next loop, as their outcome is more certain.
See also below post:
How to make For loop work with non integers
If you want it to run successfully and iterate three times (as I expect you want), try it like below:
Dim x As Double, y As Double   'note: "Dim x, y As Double" would make x a Variant
x = 0.7
y = 0.1
For x = Round(x - y, 1) To Round(x + y, 1) Step Round(y, 1)
Debug.Print x
Next x
Again, it is better not to use Doubles in this particular way to begin with, but if you must, you either have to limit the number of decimals they calculate with (as above) or use a more forgiving end point (e.g. testing x against the limit with an inequality and some slack, rather than relying on exact equality).
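One small sketch of that idea, using a Do loop with half a step of slack in the end test:
Dim x As Double, y As Double, v As Double
x = 0.7
y = 0.1
v = x - y
Do While v <= x + y + y / 2   'half a step of slack absorbs the rounding error
Debug.Print v
v = v + y
Loop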
The coding you use implies that you wish to test some value x against a tolerance level of y.
Assuming this is correct, it would imply testing three times, where:
test_1: x = x - y
test_2: x = x
test_3: x = x + y
The code below does the same, but with a better-defined scope.
Dim i As Integer
Dim x As Double, y As Double, w As Double
x = 0.7
y = 0.1
For i = -1 To 1 Step 1
w = x + (i * y)
Debug.Print w
Next i
Good luck!

VB challenge/help: Monte Carlo integration

I'm trying to create a Monte Carlo simulation that can be used to derive estimates for integration problems (summing up the area under a curve). I have no idea what to do now and I am stuck.
"To solve this problem we generate a number (say n) of random number pairs for x and y between 0 and 1. For each pair we see if the point (x, y) falls above or below the line. We count the number of times this happens (say c). The area under the curve is computed as c/n."
I'm really confused; please help. Thank you.
Function MonteCarlo()
Dim a As Integer
Dim b As Integer
Dim x As Double
Dim func As Double
Dim total As Double
Dim result As Double
Dim j As Integer
Dim N As Integer
Console.WriteLine("Enter a")
a = Console.ReadLine()
Console.WriteLine("Enter b")
b = Console.ReadLine()
Console.WriteLine("Enter n")
N = Console.ReadLine()
For j = 1 To N
'Generate a random number between a and b
x = a + (b - a) * Rnd()
'Evaluate the function at that number
func = (x ^ 2) + (2 * x) + 1
'Add it to the running total
total = total + func
Next j
'Sample-mean estimate: average function value times the interval width
result = (total / N) * (b - a)
Console.WriteLine(result)
Console.ReadLine()
Return result
End Function
The approach your quote describes is the rejection (hit-or-miss) method for Monte Carlo area under the curve.
Do this:
Divide the range of x into, say, 100 equally-spaced, non-overlapping bins.
For your function y = f(x) = (x ^ 2) + (2 * x) + 1, generate e.g. 10,000 values of y for 10,000 values of x = a + (b - a) * Rnd().
Count the number of y-values in each bin, and divide by 10,000 to get a "bin probability" p(x).
Next, the proper way to randomly simulate your function is to use the rejection method, which goes as follows:
4a. Draw a random x-value using x = a + (b - a) * Rnd()
4b. Draw a random uniform U(0,1). If U(0,1) is less than p(x) add a count to the bin.
4c. Continue steps 4a-4b 10000 times.
You will now be able to simulate your y=f(x) function using the rejection method.
Overall, you need to master these approaches before you do what you want, since it sounds like you have little experience with bin counts, simulation, etc. The area under the curve is always one with this approach, so be creative when adapting it to integration with MC.
Look at some good textbooks on MC integration.
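For what it's worth, the loop you already posted is the sample-mean estimator (the average of f over the random x values, multiplied by (b - a)) rather than the counting method in your quote. The counting method is the hit-or-miss estimate; a minimal sketch of it, assuming the "line" in the quote is simply y = x inside the unit square, could look like the following (swap the test for y < f(x), with f scaled into [0, 1], for a general curve):
Function HitOrMissArea(ByVal n As Long) As Double
Dim c As Long, i As Long
Dim x As Double, y As Double
Randomize
For i = 1 To n
'Draw a random point (x, y) in the unit square
x = Rnd()
y = Rnd()
'Count it if it falls below the line y = x
If y < x Then c = c + 1
Next i
'The fraction of hits estimates the area under the line (0.5 here)
HitOrMissArea = c / n
End Function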

Fortran: efficient matrix-vector multiplication

I have a piece of code which is a significant bottleneck:
do s = 1,ns
msum = 0.d0
do k = 1,ns
msum = msum + tm(k,s)*f(:,:,k)
end do
m(:,:,s) = msum
end do
This is a simple matrix-vector product, m = tm * f along the k dimension, performed for every x, y.
I thought about using a BLAS routine, but I am not sure whether any of them allows multiplying along a specific dimension (k). Do any of you have any good advice?
Unfortunately you do not mention the actual shape of f, i.e. the number of x and y values. Since you mention this piece of code is a bottleneck, you can and should get rid of msum, accumulate directly in m(:,:,s), and spare the zero-initialization by starting from the first term of the sum, e.g.
do s = 1,ns
m(:,:,s) = tm(1,s)*f(:,:,1)
do k = 2, ns
m(:,:,s) = m(:,:,s) + tm(k,s)*f(:,:,k)
end do
end do
Secondly, a more general approach:
There are ns summations of nK 2D matrices f(:,:,1:nK), with scalar factors stored in tm(:,1:ns). The goal is to store these sums in m(:,:,1:ns). Why not sum up element-wise with respect to x and y, so that the result can exploit contiguous memory sections? You already mentioned that you can redesign the data such that k is the first dimension of f, i.e. f(k,:,:).
Considering only the desired outcome, you ought to have ns 2D matrices m(:,:,1:ns) that are independent of each other (the outer loop remains as it is). Let's drop this dimension for a moment. The problem then becomes:
m(:,:) = \sum_{k=1}^{ns} tm_k * f_k(:,:)
We should thus sum over k, e.g. with f(k,:,:), to determine m(:,:) as follows (note that I am adding the outer loop over s again):
nK = size(f, 1) ! the "k"s
nX = size(f, 2) ! the "x"s
nY = size(f, 3) ! the "y"s
m = 0.d0
do s = 1, ns
do ii = 1, nY
! computes m(:,ii,s) = m(:,ii,s) + matmul(transpose(f(:,:,ii)), tm(:,s))
call DGEMV('T', nK, nX, &
1.d0, f(:,:,ii), nK, tm(:,s), 1, &
1.d0, m(:,ii,s), 1)
end do !ii
end do !s
See the documentation of DGEMV for more details on its usage.
Of course, the earlier advice of pulling the first step out of the loop to spare the initialization with zeros may be applied here as well.

I need some help designing a program that will perform a minimization using Excel VBA

How do I use Excel VBA to find the minimum value of an equation?
For example, if I have the equation y = 2x^2 + 14, and I want to make a loop that will slowly increase/decrease the value of x until it can find the smallest value possible for y, and then let me know what the corresponding value of x is, how would I go about doing that?
Is there a method that would work for much more complicated equations?
Thank you for your help!
Edit: more details
I'm trying to design a program that will find a certain constant needed to graph a nuclear decay. This constant is part of an equation that gives me a calculated decay, which I compare against a measured decay. However, the constant changes very slightly as the decay happens, so I have to use something called a residual-squared to find the single constant that best fits the entire decay and makes my calculated decay as accurate as possible.
It works by computing (Measured Decay - Calculated Decay) ^ 2.
You do that for the decay at several times and add them all up. What I need my program to do is slowly increase and decrease this constant until it finds a minimum for the sum of the residual-squared results over all those times. The constant giving the smallest sum is the one I want.
I already drafted a program that does all the calculations and such. I'm just not sure how to find this minimum value. I'm sure that if a method works for something like y = x^2 + 1, I can adapt it to my needs.
Test the output while looping and keep track of the smallest result.
Here's an example:
Sub FormulaLoop()
Dim x As Double
Dim y As Double
Dim xBest As Double
Dim yBest As Double
x = 1
y = 2 * (x ^ 2) + 14
xBest = x
yBest = y
For x = 2 To 100
y = 2 * (x ^ 2) + 14
If y < yBest Then
xBest = x
yBest = y
End If
Next x
MsgBox "The smallest output of y was: " & yBest & " at x = " & xBest
End Sub
If you want to loop through all the combinations of the two variables that make up x, then I'd recommend looping in this format:
Sub FormulaLoop_v2()
Dim MeasuredDecay As Double
Dim CalculatedDecay As Double
Dim y As Double
Dim yBest As Double
MeasuredDecay = 1
CalculatedDecay = 1
y = ((MeasuredDecay - CalculatedDecay) ^ 2) + 14
yBest = y
For MeasuredDecay = 2 To 100
For CalculatedDecay = 2 To 100
y = ((MeasuredDecay - CalculatedDecay) ^ 2) + 14
If y < yBest Then
yBest = y
End If
Next CalculatedDecay
Next MeasuredDecay
MsgBox "The smallest output of y was: " & yBest
End Sub
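If the constant needs finer resolution than whole-number steps, a step-halving search is a simple extension of the same idea. Below is a minimal sketch; ResidualSS is a placeholder name for the routine you said you already wrote, assumed to return the summed (Measured Decay - Calculated Decay) ^ 2 for a given value of the constant, and it assumes the residual sum has a single minimum in the region you search:
Function MinimizeConstant(ByVal cStart As Double, ByVal stepSize As Double, ByVal tol As Double) As Double
'ResidualSS(c) is assumed to exist elsewhere and return the summed residual-squared for constant c
Dim c As Double
Dim best As Double
c = cStart
best = ResidualSS(c)
Do While stepSize > tol
'Try a step in each direction and keep whichever improves the fit
If ResidualSS(c + stepSize) < best Then
c = c + stepSize
best = ResidualSS(c)
ElseIf ResidualSS(c - stepSize) < best Then
c = c - stepSize
best = ResidualSS(c)
Else
'Neither direction improves, so halve the step and try again
stepSize = stepSize / 2
End If
Loop
MinimizeConstant = c
End Function
For example, MsgBox MinimizeConstant(1, 1, 0.0001) would start at 1, step by 1, and stop once the step size drops below 0.0001; those numbers are just placeholders for your own starting guess and tolerance.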