What is the major difference between CGAffineTransformMake and CGAffineTransformMakeRotation? - objective-c

What is the difference between :
self.viewBG.transform = CGAffineTransformMake(1, 0, 0, -1, 0, 0);
and
self.viewBG.transform = CGAffineTransformMakeRotation(-M_PI);
I am using a table view in my chat application and I apply this transform to the table, so please help me out on this.
Which one is better and why?
Thanks!

CGAffineTransformMake allows you to set the individual matrix values of the transform directly, whereas CGAffineTransformMakeRotation takes that work away from you and allows you to ask for a transform that rotates something by the amount you want, without you having to understand how the matrix works. Given matching values, the end result is the same.
The second option is much better - it is obvious what the transform is doing and by how much. Any reader who doesn't understand how the matrix maths of transforms works or what those individual unnamed parameters mean (which is going to be virtually all readers, myself included) is not going to know what the first line is doing.
Readability always wins.

CGAffineTransformMake lets you build the transform yourself, while the others are basically convenience functions that do it for you.
For example, CGAffineTransformMakeRotation evaluates to this:
t' = [ cos(angle) sin(angle) -sin(angle) cos(angle) 0 0 ]
Note that the result is not the same as the matrix in the question. Using CGAffineTransformMakeRotation will result in (rounded values) [-1, 0, 0, -1, 0, 0]. There is also an accuracy aspect: the trigonometric functions do not always evaluate to the exact value you expect. In this case, sin(-M_PI) actually becomes -0.00000000000000012246467991473532 instead of zero, because M_PI is only a finite-precision approximation of pi, I presume.
Personally, I always use the convenience functions, as the code is much easier to understand. The inaccuracy is usually something you won't notice anyway.
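To see the rounding point concretely, here is a quick check of the matrix entries produced by a rotation of -pi (Python is used purely for illustration; the same IEEE-754 doubles are involved in Objective-C):

import math

angle = -math.pi       # the same finite-precision double as -M_PI in C
# entries of the equivalent CGAffineTransformMake(a, b, c, d, 0, 0)
a = math.cos(angle)    # -1.0
b = math.sin(angle)    # -1.2246467991473532e-16 (not exactly 0)
c = -math.sin(angle)   #  1.2246467991473532e-16
d = math.cos(angle)    # -1.0
print(a, b, c, d)

Rounded, that is [-1, 0, 0, -1, 0, 0]: a 180-degree rotation, which is not the same matrix as the (1, 0, 0, -1, 0, 0) vertical flip in the question.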

CGAffineTransformMake: Returns an affine transformation matrix constructed from values you provide.
CGAffineTransformMakeRotation: Returns an affine transformation matrix constructed from a rotation value you provide.
Source: http://mirror.informatimago.com/next/developer.apple.com/documentation/GraphicsImaging/Reference/CGAffineTransform/Reference/function_group_1.html

Related

In GAMS, how to deal with divisions?

In my GAMS model, I have an objective function that involves a division.
GAMS sets the initial values to zero whenever it solves something... brilliant idea, how could that possibly ever go wrong? ...oh wait, now there's division by zero.
What is the approach to handle this? I have tried manually setting lower bounds such that division by zero is avoided, but then GAMS spits out an "infeasible" solution.
This is wrong, since I know the model is feasible: removing the division term from my model and re-solving does produce a solution, and that solution ought to be feasible for the original problem as well, since we are just adding terms to the objective.
Here are some common approaches:
set a lower bound, e.g. for Z =E= X/Y, add Y.LO = 0.0001;
similarly, write something like Z =E= X/(Y + 0.0001);
set an initial value, e.g. Y.L = 1;
multiply both sides by Y: Z*Y =E= X.
For any non-linear variable you should really think carefully about bounds and initial values (irrespective of division).
Try using the $ sign, which restricts an assignment to where the condition is nonzero. For example: A(i,j)$C(i,j) = B(i,j) / C(i,j)
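For intuition only, the $-guard in the last approach behaves like a masked division; here is a rough numpy analogy (not GAMS code, and the array contents are made up):

import numpy as np

B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.array([[2.0, 0.0], [0.0, 4.0]])
# analogue of A(i,j)$C(i,j) = B(i,j)/C(i,j):
# divide only where C is nonzero, leave A at 0 elsewhere
A = np.divide(B, C, out=np.zeros_like(B), where=(C != 0))
print(A)   # [[0.5 0. ] [0.  1. ]]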

Errors to fit parameters of scipy.optimize

I use the scipy.optimize.minimize function ( https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html ) with method='L-BFGS-B'.
An example of what it returns is shown below:
fun: 32.372210618549758
hess_inv: <6x6 LbfgsInvHessProduct with dtype=float64>
jac: array([ -2.14583906e-04, 4.09272616e-04, -2.55795385e-05,
3.76587650e-05, 1.49213975e-04, -8.38440428e-05])
message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
nfev: 420
nit: 51
status: 0
success: True
x: array([ 0.75739412, -0.0927572 , 0.11986434, 1.19911266, 0.27866406,
-0.03825225])
The x value correctly contains the fitted parameters. How do I compute the errors associated with those parameters?
TL;DR: You can actually place an upper bound on how precisely the minimization routine has found the optimal values of your parameters. See the snippet at the end of this answer that shows how to do it directly, without resorting to calling additional minimization routines.
The documentation for this method says
The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol.
Roughly speaking, the minimization stops when the value of the function f that you're minimizing is within ftol of the optimum. (This is a relative error if f is greater than 1, and an absolute error otherwise; for simplicity I'll assume it's an absolute error.) In more standard language, you'll probably think of your function f as a chi-squared value, so this roughly suggests that you would expect
Δchi^2 ≲ ftol * max{|chi^2|, 1}
Of course, just the fact that you're applying a minimization routine like this assumes that your function is well behaved, in the sense that it's reasonably smooth and well approximated near the optimum by a quadratic function of the parameters x_i:
chi^2 ≈ chi^2_min + (1/2) Σ_ij Δx_i H_ij Δx_j
where Δx_i is the difference between the found value of parameter x_i and its optimal value, and H_ij is the Hessian matrix. A little (surprisingly nontrivial) linear algebra gets you to a pretty standard result for an estimate of the uncertainty in any quantity X that's a function of your parameters x_i:
(ΔX)^2 = Δchi^2 * Σ_ij (∂X/∂x_i) (H^-1)_ij (∂X/∂x_j)
which lets us write
ΔX = sqrt( ftol * max{|chi^2|, 1} * Σ_ij (∂X/∂x_i) (H^-1)_ij (∂X/∂x_j) )
That's the most useful formula in general, but for the specific question here we just have X = x_i, so this simplifies to
Δx_i = sqrt( ftol * max{|chi^2|, 1} * (H^-1)_ii )
Finally, to be totally explicit, let's say you've stored the optimization result in a variable called res. The inverse Hessian is available as res.hess_inv, which is a function that takes a vector and returns the product of the inverse Hessian with that vector. So, for example, we can display the optimized parameters along with the uncertainty estimates with a snippet like this:
import numpy as np

ftol = 2.220446049250313e-09   # the default ftol for minimize(method='L-BFGS-B')
tmp_i = np.zeros(len(res.x))
for i in range(len(res.x)):
    tmp_i[i] = 1.0
    # i-th diagonal element of the inverse Hessian, via its matrix-vector product
    hess_inv_i = res.hess_inv(tmp_i)[i]
    uncertainty_i = np.sqrt(max(1, abs(res.fun)) * ftol * hess_inv_i)
    tmp_i[i] = 0.0   # reset the basis vector for the next parameter
    print('x^{0} = {1:12.4e} ± {2:.1e}'.format(i, res.x[i], uncertainty_i))
Note that I've incorporated the max behavior from the documentation, assuming that f^k and f^{k+1} are basically just the same as the final output value, res.fun, which really ought to be a good approximation. Also, for small problems, you can just use np.diag(res.hess_inv.todense()) to get the full inverse and extract the diagonal all at once. But for large numbers of variables, I've found that to be a much slower option. Finally, I've added the default value of ftol, but if you change it in an argument to minimize, you would obviously need to change it here.
One approach to this common problem is to follow the minimize call with scipy.optimize.leastsq, starting from the solution found with 'L-BFGS-B'. leastsq will (normally) include an estimate of the 1-sigma errors as well as the solution.
Of course, that approach makes several assumptions, including that leastsq can be used and is appropriate for solving the problem. From a practical view, this requires that the objective function return an array of residual values with at least as many elements as variables, not a scalar cost function.
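A minimal sketch of that two-step idea (hypothetical names: residuals is assumed to be your function returning the residual array, and res is the earlier 'L-BFGS-B' result):

import numpy as np
from scipy.optimize import leastsq

# refine the 'L-BFGS-B' solution; residuals(p) returns an array, not a scalar cost
popt, cov_x, infodict, mesg, ier = leastsq(residuals, res.x, full_output=True)
# cov_x is a fractional covariance matrix: scale it by the residual variance
dof = len(infodict['fvec']) - len(popt)
s_sq = (infodict['fvec'] ** 2).sum() / dof
perr = np.sqrt(np.diag(cov_x * s_sq))   # 1-sigma parameter errors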
You may find lmfit (https://lmfit.github.io/lmfit-py/) useful here: it supports both 'L-BFGS-B' and 'leastsq' and gives a uniform wrapper around these and other minimization methods, so that you can use the same objective function for both (and specify how to convert the residual array into the cost function). In addition, parameter bounds can be used for both methods. This makes it very easy to first do a fit with 'L-BFGS-B' and then with 'leastsq', using the values from 'L-BFGS-B' as starting values.
Lmfit also provides methods to explore confidence limits on parameter values in more detail, in case you suspect the simple but fast approach used by leastsq might be insufficient.
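A rough sketch of that workflow (illustrative only; x and data stand for your independent variable and measurements, and the exponential model is made up):

import numpy as np
import lmfit

def residual(params, x, data):
    # hypothetical model a * exp(-b * x); lmfit expects the residual array back
    return params['a'].value * np.exp(-params['b'].value * x) - data

params = lmfit.Parameters()
params.add('a', value=1.0, min=0.0)   # bounds are honored by both methods
params.add('b', value=0.5)

out1 = lmfit.minimize(residual, params, args=(x, data), method='lbfgsb')
out2 = lmfit.minimize(residual, out1.params, args=(x, data), method='leastsq')
print(out2.params['a'].stderr)        # 1-sigma error estimate from leastsq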
It really depends what you mean by "errors". There is no general answer to your question, because it depends on what you're fitting and what assumptions you're making.
The easiest case is one of the most common: when the function you are minimizing is a negative log-likelihood. In that case the inverse of the Hessian matrix returned by the fit (hess_inv) is the covariance matrix describing the Gaussian approximation to the maximum likelihood. The parameter errors are the square roots of the diagonal elements of the covariance matrix.
Beware that if you are fitting a different kind of function or are making different assumptions, then that doesn't apply.
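In the negative log-likelihood case, a minimal sketch (reusing the res object from the question) would be:

import numpy as np

# covariance matrix ≈ inverse Hessian at the minimum of the negative log-likelihood
cov = res.hess_inv.todense()      # res.hess_inv is an LbfgsInvHessProduct
perr = np.sqrt(np.diag(cov))      # 1-sigma errors, one per fitted parameter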

Inverse of n-dimensional numpy.gradient

Does numpy or scipy contain a function that is an inverse of the n-dimensional "gradient" function?
E.g. if "image" contains a 2D matrix, then I want a function inv_gradient that behaves as follows:
(gx, gy) = numpy.gradient(image)
constant_vector_0 = image[0,:] - inv_gradient(gx, gy)[0,:]
constant_vector_1 = image[:,0] - inv_gradient(gx, gy)[:,0]
image == inv_gradient(gx, gy) + numpy.tile(constant_vector_0, (image.shape[0], 1)) + numpy.transpose(numpy.tile(constant_vector_1, (image.shape[1], 1)))
What you are describing is basically an inverse filter. These exist, but are limited.
One way to understand this is via the convolution theorem, thinking of the gradient as a particular kernel for a convolution, in this case something like (-1, 0, 1) in 1D. The issue, then, is that the Fourier transform (FT) of the kernel will have zeroes, and when the FTs of the kernel and signal are multiplied, the zeroes in the kernel's FT wipe out any data from the original signal in that part of the spectrum (and this gets more problematic when noise is added to the image). Specifically for the gradient, there is zero power in the f=0 band, and this is what people are referring to in the comments, but other information is lost as well.
Still, though, you can get a lot out of an inverse filter, and maybe what you need. It's fairly case specific.
Here's a very basic and quick description of the issue, and an example (though not for gradients).
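To make the inverse-filter idea concrete, here is a rough least-squares integration sketch in the Fourier domain (a sketch under stated assumptions: periodic boundaries, no noise, and numpy.gradient's central-difference kernel approximated by an ideal derivative). The reconstruction is only defined up to an additive constant, precisely because the f=0 band is lost:

import numpy as np

def inv_gradient(gx, gy):
    # gx, gy are the axis-0 and axis-1 derivatives from numpy.gradient(image)
    rows, cols = gx.shape
    fy = np.fft.fftfreq(rows).reshape(-1, 1)   # frequencies along axis 0
    fx = np.fft.fftfreq(cols).reshape(1, -1)   # frequencies along axis 1
    d0 = 2j * np.pi * fy                       # ideal-derivative transfer functions
    d1 = 2j * np.pi * fx
    G0 = np.fft.fft2(gx)                       # FT of the axis-0 derivative
    G1 = np.fft.fft2(gy)                       # FT of the axis-1 derivative
    denom = d0**2 + d1**2                      # FT of the Laplacian
    denom[0, 0] = 1.0                          # dodge 0/0 at the DC term
    F = (d0 * G0 + d1 * G1) / denom            # least-squares solution in frequency space
    F[0, 0] = 0.0                              # the lost f=0 band: pin the mean to 0
    return np.real(np.fft.ifft2(F))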

Maximizing in mathematica with multiple maxima

I'm trying to compute the maximum of a function of one variable that has several peaks.
(The function is calculated from a non-trivial convolution, so, no, I don't have an expression for it.)
Using the command:
NMaximize[{f[x], 0 < x < 1}, x, AccuracyGoal -> 4, PrecisionGoal -> 4]
(I'm not that worried about super accuracy, a rough estimate of 10^-4 is already enough)
The result of this is x* = 0.55, which is not what it should be (i.e., it is picking the third peak).
Is there any way of telling Mathematica that the global maximum is the first peak counting from x = 0 (I know this is always true), or to make Mathematica search with a better approach? (Note: I don't want things like a Simulated Annealing approach; each evaluation is very costly!)
Thanks very much!
Try FindMaximum with a starting point of 0 or some similarly small value, e.g. FindMaximum[{f[x], 0 < x < 1}, {x, 0.01}].

How to reset to identity the "current transformation matrix" with some CGContext function?

I'm doing a series of translations and rotations on the CTM and at some point I need to reset it to identity before going further with transformations.
I can't find any proper way to do it (obviously, there should have been a function named CGContextSetCTM or so) and since efficiency is the key, I don't want to use CGContextSaveGState/CGContextRestoreGState...
Get the current transformation matrix via CGContextGetCTM, invert it with CGAffineTransformInvert, and multiply the current matrix by the inverted one (that's important!) with CGContextConcatCTM, e.g. CGContextConcatCTM(context, CGAffineTransformInvert(CGContextGetCTM(context))). The CTM is now the identity.
A save/restore pair is probably just a single copy of a memory region comparable in size to the CTM (perhaps two or three times its size), and the copy may only happen on the save. That is probably not much slower than a no-op function call. Each graphics operation, by contrast, costs on the order of several multiplications, and presumably your code performs more than one graphics operation per save/restore cycle, so the time of one graphics operation is probably larger than a whole save/restore cycle.
Note that inverting the current CTM with CGAffineTransformInvert does not work if the CTM is singular.
The obvious case is when CGContextConcatCTM was previously called with the matrix CGAffineTransformMake(0, 0, 0, 0, 0, 0).