How to use the abs function in an objective function of JuMP + Julia

I would like to solve a simple linear optimization problem with JuMP and Julia.
This is my code:
using JuMP
using Mosek
model = Model(solver=MosekSolver())
@variable(model, 2.5 <= z1 <= 5.0)
@variable(model, -1.0 <= z2 <= 1.0)
@objective(model, Min, abs(z1+5.0) + abs(z2-3.0))
status = solve(model)
println("Objective value: ", getobjectivevalue(model))
println("z1:",getvalue(z1))
println("z2:",getvalue(z2))
However, I got this error message.
> ERROR: LoadError: MethodError: no method matching abs(::JuMP.GenericAffExpr{Float64,JuMP.Variable})
> Closest candidates are:
>   abs(!Matched::Bool) at bool.jl:77
>   abs(!Matched::Float16) at float.jl:512
>   abs(!Matched::Float32) at float.jl:513
How can I use the abs function in JuMP code?

My problem was solved by @rickhg12hs's comment: if I use @NLobjective instead of @objective, it works.
This is the final code.
using JuMP
using Mosek
model = Model(solver=MosekSolver())
@variable(model, 2.5 <= z1 <= 5.0)
@variable(model, -1.0 <= z2 <= 1.0)
@NLobjective(model, Min, abs(z1+5.0) + abs(z2-3.0))
status = solve(model)
println("Objective value: ", getobjectivevalue(model))
println("z1:",getvalue(z1))
println("z2:",getvalue(z2))

I did it in a different way.
using JuMP
using GLPK
AvgOperationtime = [1 2]#[2.0 2.0 2.0 3.3333333333333335 2.5 2.0 2.0 2.5 2.5 2.0 2.0]
Operationsnumberremovecounter = [1 0;1 1]#[1.0 1.0 1.0 1.0 -0.0 1.0 1.0 1.0 -0.0 1.0 1.0; 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0; 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0; 1.0 1.0 1.0 -0.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0; 1.0 1.0 1.0 -0.0 1.0 1.0 1.0 -0.0 1.0 1.0 1.0]
Modelnumber = 2
Operationsnumber = 2
Basecaseworkload = 2
y = 0.1
Highestnumber = 999
Solver = GLPK.Optimizer
#Operationtime[1,1 X;0,9 2]
m = Model(with_optimizer(Solver));
@variable(m, Operationtime[1:Modelnumber,1:Operationsnumber]>=0);
@variable(m, Absoluttime[1:Modelnumber,1:Operationsnumber]>=0);
@variable(m, Absolutchoice[1:Modelnumber,1:Operationsnumber,1:2], Bin);
@objective(m, Max, sum(Absoluttime[M,O]*Operationsnumberremovecounter[M,O] for M=1:Modelnumber,O=1:Operationsnumber))
#How much Time can differ
@constraint(m, BorderOperationtime1[M=1:Modelnumber,O=1:Operationsnumber], AvgOperationtime[O]*(1-y) <= Operationtime[M,O]);
@constraint(m, BorderOperationtime2[M=1:Modelnumber,O=1:Operationsnumber], AvgOperationtime[O]*(1+y) >= Operationtime[M,O]);
#Workload
@constraint(m, Worklimit[O=1:Operationsnumber], sum(Operationtime[M,O]*Operationsnumberremovecounter[M,O] for M=1:Modelnumber) == Basecaseworkload);
#Absolut
@constraint(m, Absolutchoice1[M=1:Modelnumber,O=1:Operationsnumber], sum(Absolutchoice[M,O,X] for X=1:2) == 1);
@constraint(m, Absoluttime1[M=1:Modelnumber,O=1:Operationsnumber], Absoluttime[M,O] <= Operationtime[M,O]-AvgOperationtime[O]+Absolutchoice[M,O,1]*Highestnumber);
@constraint(m, Absoluttime2[M=1:Modelnumber,O=1:Operationsnumber], Absoluttime[M,O] <= AvgOperationtime[O]-Operationtime[M,O]+Absolutchoice[M,O,2]*Highestnumber);
optimize!(m);
println("Termination status: ", JuMP.termination_status(m));
println("Primal status: ", JuMP.primal_status(m));

Related

Create histogram-like bins for a range including negative numbers

I have numbers in a range from -4 to 4, including 0, as in
-0.526350041828112
-0.125648350883331
0.991377353361933
1.079241128983
1.06322905224238
1.17477528478982
-0.0651086035371559
0.818471811380787
0.0355593553368815
I need to create histogram-like buckets, and have been trying to use this
BEGIN { delta = (delta == "" ? 0.1 : delta) }
{
    bucketNr = int(($0+delta) / delta)
    cnt[bucketNr]++
    numBuckets = (numBuckets > bucketNr ? numBuckets : bucketNr)
}
END {
    for (bucketNr=1; bucketNr<=numBuckets; bucketNr++) {
        end = beg + delta
        printf "%0.1f %0.1f %d\n", beg, end, cnt[bucketNr]
        beg = end
    }
}
from Create bins with awk histogram-like
The output would look like
-2.4 -2.1 8
-2.1 -1.8 25
-1.8 -1.5 108
-1.5 -1.2 298
-1.2 -0.9 773
-0.9 -0.6 1067
-0.6 -0.3 1914
-0.3 0.0 4174
0.0 0.3 3969
0.3 0.6 2826
0.6 0.9 1460
0.9 1.2 752
1.2 1.5 396
1.5 1.8 121
1.8 2.1 48
2.1 2.4 13
2.4 2.7 1
2.7 3.0 1
I'm thinking I would have to run this twice, once with delta 0.3 and once with delta -0.3, and cat the two outputs together.
But I'm not sure this intuition is correct.
This might work for you:
BEGIN { delta = (delta == "" ? 0.1 : delta) }
{
    bucketNr = int(($0<0 ? $0-delta : $0)/delta)
    cnt[bucketNr]++
    maxBucket = (maxBucket > bucketNr ? maxBucket : bucketNr)
    minBucket = (minBucket < bucketNr ? minBucket : bucketNr)
}
END {
    beg = minBucket*delta
    for (bucketNr=minBucket; bucketNr<=maxBucket; bucketNr++) {
        end = beg + delta
        printf "%0.1f %0.1f %d\n", beg, end, cnt[bucketNr]
        beg = end
    }
}
It's basically the code you posted + handling negative numbers.
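To make the change explicit: the original script computes int(($0+delta)/delta), which maps every negative input to bucket 0 or below, and its END loop starts at bucket 1, so those buckets are silently dropped. The version above instead shifts negative inputs down by delta before dividing, so that int()'s truncation toward zero yields uniform bucket widths on both sides of zero, and it tracks minBucket so the END loop can start below 1. A usage sketch, assuming the script is saved as buckets.awk and the numbers sit one per line in data.txt (both file names are placeholders):

awk -v delta=0.3 -f buckets.awk data.txt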

Julia - using CartesianIndices with an array

I am trying to access specific elements of an NxN matrix 'msk', with indices stored in an Mx2 array 'idx'. I tried the following:
N = 10
msk = zeros(N,N)
idx = [1 5;6 2;3 7;8 4]
#CIs = CartesianIndices(( 2:3, 5:6 )) # this works, but not what I want
CIs = CartesianIndices((idx[:,1],idx[:,2]))
msk[CIs] .= 1
I get the following error:
ERROR: LoadError: MethodError: no method matching CartesianIndices(::Tuple{Array{Int64,1},Array{Int64,1}})
Is this what you want? (I am using your definitions)
julia> msk[CartesianIndex.(eachcol(idx)...)] .= 1;
julia> msk
10×10 Array{Float64,2}:
0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Note that I use a vector of CartesianIndex:
julia> CartesianIndex.(eachcol(idx)...)
4-element Array{CartesianIndex{2},1}:
CartesianIndex(1, 5)
CartesianIndex(6, 2)
CartesianIndex(3, 7)
CartesianIndex(8, 4)
as the documentation for CartesianIndices says:
"Define a region R spanning a multidimensional rectangular range of integer indices."
so the region defined by it must be rectangular.
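To see that rectangularity concretely, here is a small demo (assuming a Julia 1.x REPL): a tuple of ranges expands to the full Cartesian grid, which is why the two index columns cannot be passed directly.

julia> CartesianIndices((2:3, 5:6))
2×2 CartesianIndices{2,Tuple{UnitRange{Int64},UnitRange{Int64}}}:
 CartesianIndex(2, 5)  CartesianIndex(2, 6)
 CartesianIndex(3, 5)  CartesianIndex(3, 6)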
Another way to get the required indices would be e.g.:
julia> CartesianIndex.(Tuple.(eachrow(idx)))
4-element Array{CartesianIndex{2},1}:
CartesianIndex(1, 5)
CartesianIndex(6, 2)
CartesianIndex(3, 7)
CartesianIndex(8, 4)
or (this time we use linear indexing into msk as it is just a Matrix)
julia> [x + (y-1)*size(msk, 1) for (x, y) in eachrow(idx)]
4-element Array{Int64,1}:
41
16
63
38
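For completeness, broadcasting CartesianIndex directly over the two index columns works too, and the manual linear-index arithmetic above can be delegated to LinearIndices; both are equivalent sketches of the same idea:

julia> msk[CartesianIndex.(idx[:,1], idx[:,2])] .= 1;

julia> LinearIndices(msk)[CartesianIndex.(idx[:,1], idx[:,2])]
4-element Array{Int64,1}:
 41
 16
 63
 38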

Mixture prior not working in JAGS, only when likelihood term included

The code at the bottom will replicate the problem, just copy and paste it into R.
What I want is for the mean and precision to be (-100, 100) 30% of the time, and (200, 1000) for 70% of the time. Think of it as lined up in a, b, and p.
So 'pick' should be 1 30% of the time, and 2 70% of the time.
What actually happens is that on every iteration, pick is 2 (or 1 if the first element of p is the larger one). You can see this in the summary, where the quantiles for 'pick', 'testa', and 'testb' remain unchanged throughout. The strangest thing is that if you remove the likelihood loop, pick then works exactly as intended.
I hope this explains the problem; if not, let me know. It's my first time posting, so I'm bound to have messed things up.
library(rjags)
n = 10
y <- rnorm(n, 5, 10)
a = c(-100, 200)
b = c(100, 1000)
p = c(0.3, 0.7)
## Model
mod_str = "model{
# Likelihood
for (i in 1:n){
y[i] ~ dnorm(mu, 10)
}
# ISSUE HERE: MIXTURE PRIOR
mu ~ dnorm(a[pick], b[pick])
pick ~ dcat(p[1:2])
testa = a[pick]
testb = b[pick]
}"
model = jags.model(textConnection(mod_str), data = list(y = y, n=n, a=a, b=b, p=p), n.chains=1)
update(model, 10000)
res = coda.samples(model, variable.names = c('pick', 'testa', 'testb', 'mu'), n.iter = 10000)
summary(res)
I think you are having problems for a couple of reasons. First, the data that you have supplied to the model (i.e., y) is not a mixture of normal distributions. As a result, the model itself has no need to mix. I would instead generate data something like this:
set.seed(320)
# number of samples
n <- 10
# Because it is a mixture of 2 we can just use an indicator variable.
# here, pick (in the long run), would be '1' 30% of the time.
pick <- rbinom(n, 1, p[1])
# generate the data. b is in terms of precision so we are converting this
# to standard deviations (which is what R wants).
y_det <- pick * rnorm(n, a[1], sqrt(1/b[1])) + (1 - pick) * rnorm(n, a[2], sqrt(1/b[2]))
# add a small amount of noise, can change to be more as necessary.
y <- rnorm(n, y_det, 1)
These data look more like what you would want to supply to a mixture model.
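A quick sanity check on the simulated data (a sketch in base R):

table(pick)   # counts of draws from each component; ~30% ones in the long run
range(y)      # endpoints should sit near the two component means, -100 and 200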
Following this, I would code the model up in a similar way as I did the data generation process. I want some indicator variable to jump between the two normal distributions. Thus, mu may change for each scalar in y.
mod_str = "model{
# Likelihood
for (i in 1:n){
y[i] ~ dnorm(mu[i], 10)
mu[i] <- mu_ind[i] * a_mu + (1 - mu_ind[i]) * b_mu
mu_ind[i] ~ dbern(p[1])
}
a_mu ~ dnorm(a[1], b[1])
b_mu ~ dnorm(a[2], b[2])
}"
model = jags.model(textConnection(mod_str), data = list(y = y, n=n, a=a, b=b, p=p), n.chains=1)
update(model, 10000)
res = coda.samples(model, variable.names = c('mu_ind', 'a_mu', 'b_mu'), n.iter = 10000)
summary(res)
2.5% 25% 50% 75% 97.5%
a_mu -100.4 -100.3 -100.2 -100.1 -100
b_mu 199.9 200.0 200.0 200.0 200
mu_ind[1] 0.0 0.0 0.0 0.0 0
mu_ind[2] 1.0 1.0 1.0 1.0 1
mu_ind[3] 0.0 0.0 0.0 0.0 0
mu_ind[4] 1.0 1.0 1.0 1.0 1
mu_ind[5] 0.0 0.0 0.0 0.0 0
mu_ind[6] 0.0 0.0 0.0 0.0 0
mu_ind[7] 1.0 1.0 1.0 1.0 1
mu_ind[8] 0.0 0.0 0.0 0.0 0
mu_ind[9] 0.0 0.0 0.0 0.0 0
mu_ind[10] 1.0 1.0 1.0 1.0 1
If you supplied more data, you would (in the long run) have the indicator variable mu_ind take the value of 1 30% of the time. If you had more than 2 distributions you could instead use dcat. Thus, an alternative and more generalized way of doing this would be (and I am borrowing heavily from this post by John Kruschke):
mod_str = "model {
# Likelihood:
for( i in 1 : n ) {
y[i] ~ dnorm( mu[i] , 10 )
mu[i] <- muOfpick[ pick[i] ]
pick[i] ~ dcat( p[1:2] )
}
# Prior:
for ( i in 1:2 ) {
muOfpick[i] ~ dnorm( a[i] , b[i] )
}
}"
model = jags.model(textConnection(mod_str), data = list(y = y, n=n, a=a, b=b, p=p), n.chains=1)
update(model, 10000)
res = coda.samples(model, variable.names = c('pick', 'muOfpick'), n.iter = 10000)
summary(res)
2.5% 25% 50% 75% 97.5%
muOfpick[1] -100.4 -100.3 -100.2 -100.1 -100
muOfpick[2] 199.9 200.0 200.0 200.0 200
pick[1] 2.0 2.0 2.0 2.0 2
pick[2] 1.0 1.0 1.0 1.0 1
pick[3] 2.0 2.0 2.0 2.0 2
pick[4] 1.0 1.0 1.0 1.0 1
pick[5] 2.0 2.0 2.0 2.0 2
pick[6] 2.0 2.0 2.0 2.0 2
pick[7] 1.0 1.0 1.0 1.0 1
pick[8] 2.0 2.0 2.0 2.0 2
pick[9] 2.0 2.0 2.0 2.0 2
pick[10] 1.0 1.0 1.0 1.0 1
The link above includes even more priors (e.g., a Dirichlet prior on the probabilities incorporated into the Categorical distribution).
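As a sketch of that extension (alpha here is a vector of pseudo-counts you would pass in the data list in place of p, e.g. alpha = c(1, 1) for a uniform prior), the mixture weights themselves get a Dirichlet prior via JAGS's ddirch:

mod_str = "model {
    for (i in 1:n) {
        y[i] ~ dnorm(mu[i], 10)
        mu[i] <- muOfpick[pick[i]]
        pick[i] ~ dcat(p[1:2])
    }
    for (i in 1:2) {
        muOfpick[i] ~ dnorm(a[i], b[i])
    }
    p ~ ddirch(alpha)    # p is now estimated rather than fixed
}"
model = jags.model(textConnection(mod_str), data = list(y = y, n = n, a = a, b = b, alpha = c(1, 1)), n.chains = 1)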

Unable to set cell values in Python Pandas SparseDataFrame

I'm having a hard time getting cell values to stick in a SparseDataFrame when updating by index/column. I've tried setting cell values using df.at, df.ix, df.loc and the dataframe remains empty.
df = pd.SparseDataFrame(np.zeros((10,10)), default_fill_value=0)
df.at[1,1] = 1
df.ix[2,2] = 1
df.loc[3,3] = 1
df
0 1 2 3 4 5 6 7 8 9
0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Any one of these options works fine on a standard dataframe.
The one option I've found that works is
df = df.set_value(1, 1, 1)
But this is terribly slow for a large matrix.
[edit] I did see a four-year-old answer that suggested the approach below, but it indicated that more direct methods were in the works. I also haven't tested its speed, but converting whole slices of the matrix to dense and back seems like it would be much slower than directly updating a (row, col, val) entry, as you can with scipy sparse matrices.
df = pd.SparseDataFrame(columns=np.arange(250000), index=np.arange(250000))
s = df[2000].to_dense()
s[1000] = 1
df[2000] = s
In [11]: df.ix[1000,2000]
Out[11]: 1.0
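If pandas semantics are not strictly required and the goal is just a large sparse matrix with fast single-entry updates, a scipy sparse format that supports item assignment may serve better; a minimal sketch:

from scipy import sparse

# LIL (list-of-lists) storage supports efficient per-element assignment
m = sparse.lil_matrix((250000, 250000))
m[1000, 2000] = 1
# convert once editing is done; CSR is the better format for arithmetic
m_csr = m.tocsr()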

Singular Value Decomposition: Different results with Jama, PColt and NumPy

I want to perform Singular Value Decomposition on a large (sparse) matrix. In order to choose the best (most accurate) library, I tried replicating the SVD example provided here using different Java and Python libraries. Strangely, I am getting different results with each library.
Here are the original example matrix and its decomposed matrices (U, S, and VT):
A =   2.0   0.0   8.0   6.0   0.0
      1.0   6.0   0.0   1.0   7.0
      5.0   0.0   7.0   4.0   0.0
      7.0   0.0   8.0   5.0   0.0
      0.0  10.0   0.0   0.0   7.0
U =  -0.54   0.07   0.82  -0.11   0.12
     -0.10  -0.59  -0.11  -0.79  -0.06
     -0.53   0.06  -0.21   0.12  -0.81
     -0.65   0.07  -0.51   0.06   0.56
     -0.06  -0.80   0.09   0.59   0.04
VT = -0.46   0.02  -0.87  -0.00   0.17
     -0.07  -0.76   0.06   0.60   0.23
     -0.74   0.10   0.28   0.22  -0.56
     -0.48   0.03   0.40  -0.33   0.70
     -0.07  -0.64  -0.04  -0.69  -0.32
S (with the top three singular values) =
     17.92      0      0
         0  15.17      0
         0      0   3.56
I tried using the following Java and Python libraries:
Java: PColt, Jama
Python: NumPy
Here are the results from each one of them:
Jama:
U = 0.5423 -0.065 -0.8216 0.1057 -0.1245
0.1018 0.5935 0.1126 0.7881 0.0603
0.525 -0.0594 0.213 -0.1157 0.8137
0.6449 -0.0704 0.5087 -0.0599 -0.5628
0.0645 0.7969 -0.09 -0.5922 -0.0441
VT = 0.4646 -0.0215 0.8685 8.0E-4 -0.1713
0.0701 0.76 -0.0631 -0.6013 -0.2278
0.7351 -0.0988 -0.284 -0.2235 0.565
0.4844 -0.0254 -0.3989 0.3327 -0.7035
0.065 0.6415 0.0443 0.6912 0.3233
S = 17.9184 0.0 0.0 0.0 0.0
0.0 15.1714 0.0 0.0 0.0
0.0 0.0 3.564 0.0 0.0
0.0 0.0 0.0 1.9842 0.0
0.0 0.0 0.0 0.0 0.3496
PColt:
U = -0.542255 0.0649957 0.821617 0.105747 -0.124490
-0.101812 -0.593461 -0.112552 0.788123 0.0602700
-0.524953 0.0593817 -0.212969 -0.115742 0.813724
-0.644870 0.0704063 -0.508744 -0.0599027 -0.562829
-0.0644952 -0.796930 0.0900097 -0.592195 -0.0441263
VT = -0.464617 0.0215065 -0.868509 0.000799554 -0.171349
-0.0700860 -0.759988 0.0630715 -0.601346 -0.227841
-0.735094 0.0987971 0.284009 -0.223485 0.565040
-0.484392 0.0254474 0.398866 0.332684 -0.703523
-0.0649698 -0.641520 -0.0442743 0.691201 0.323284
S =
(00) 17.91837085874625
(11) 15.17137188041607
(22) 3.5640020352605677
(33) 1.9842281528992616
(44) 0.3495556671751232
Numpy
U = -0.54225536 0.06499573 0.82161708 0.10574661 -0.12448979
-0.10181247 -0.59346055 -0.11255162 0.78812338 0.06026999
-0.52495325 0.05938171 -0.21296861 -0.11574223 0.81372354
-0.64487038 0.07040626 -0.50874368 -0.05990271 -0.56282918
-0.06449519 -0.79692967 0.09000966 -0.59219473 -0.04412631
VT = -4.64617e-01 2.15065e-02 -8.68508e-01 7.99553e-04 -1.71349e-01
-7.00859e-02 -7.59987e-01 6.30714e-02 -6.01345e-01 -2.27841e-01
-7.35093e-01 9.87971e-02 2.84008e-01 -2.23484e-01 5.65040e-01
-4.84391e-01 2.54473e-02 3.98865e-01 3.32683e-01 -7.03523e-01
-6.49698e-02 -6.41519e-01 -4.42743e-02 6.91201e-01 3.23283e-01
S = 17.91837086 15.17137188 3.56400204 1.98422815 0.34955567
As can be noticed, the sign of each element in the Jama decomposed matrices (U and VT) is opposite to the corresponding element in the original example. Interestingly, for PColt and NumPy only the signs of the elements in the last two columns are inverted. Is there any specific reason behind the inverted signs? Has someone faced similar discrepancies?
Here are the pieces of code which I used:
Java
import java.text.DecimalFormat;
import cern.colt.matrix.tdouble.DoubleMatrix2D;
import cern.colt.matrix.tdouble.algo.DenseDoubleAlgebra;
import cern.colt.matrix.tdouble.algo.decomposition.DenseDoubleSingularValueDecomposition;
import cern.colt.matrix.tdouble.impl.DenseDoubleMatrix2D;
import Jama.Matrix;
import Jama.SingularValueDecomposition;
public class SVD_Test implements java.io.Serializable {
    public static void main(String[] args) {
        double[][] data2 = new double[][]
            {{ 2.0,  0.0, 8.0, 6.0, 0.0},
             { 1.0,  6.0, 0.0, 1.0, 7.0},
             { 5.0,  0.0, 7.0, 4.0, 0.0},
             { 7.0,  0.0, 8.0, 5.0, 0.0},
             { 0.0, 10.0, 0.0, 0.0, 7.0}};
        DoubleMatrix2D pColt_matrix = new DenseDoubleMatrix2D(5, 5);
        pColt_matrix.assign(data2);
        Matrix j = new Matrix(data2);
        SingularValueDecomposition svd_jama = j.svd();
        DenseDoubleSingularValueDecomposition svd_pColt = new DenseDoubleSingularValueDecomposition(pColt_matrix, true, true);
        System.out.println("U:");
        System.out.println("pColt:");
        System.out.println(svd_pColt.getU());
        printJamaMatrix(svd_jama.getU());
        System.out.println("S:");
        System.out.println("pColt:");
        System.out.println(svd_pColt.getS());
        printJamaMatrix(svd_jama.getS());
        System.out.println("V:");
        System.out.println("pColt:");
        System.out.println(svd_pColt.getV());
        printJamaMatrix(svd_jama.getV());
    }
    public static void printJamaMatrix(Matrix inp) {
        System.out.println("Jama: ");
        System.out.println(String.valueOf(inp.getRowDimension()) + " X " + String.valueOf(inp.getColumnDimension()));
        DecimalFormat twoDForm = new DecimalFormat("#.####");
        StringBuffer sb = new StringBuffer();
        for (int r = 0; r < inp.getRowDimension(); r++) {
            for (int c = 0; c < inp.getColumnDimension(); c++)
                sb.append(Double.valueOf(twoDForm.format(inp.get(r, c)))).append("\t");
            sb.append("\n");
        }
        System.out.println(sb.toString());
    }
}
Python:
>>> import numpy
>>> numpy_matrix = numpy.array([[ 2.0,  0.0, 8.0, 6.0, 0.0],
...                             [ 1.0,  6.0, 0.0, 1.0, 7.0],
...                             [ 5.0,  0.0, 7.0, 4.0, 0.0],
...                             [ 7.0,  0.0, 8.0, 5.0, 0.0],
...                             [ 0.0, 10.0, 0.0, 0.0, 7.0]])
>>> u,s,v = numpy.linalg.svd(numpy_matrix, full_matrices=True)
Is there something wrong with the code?
Nothing is wrong: the SVD is unique only up to a sign change of the columns of U and V (i.e., if you flip the sign of the i-th column of U and the i-th column of V, you still have a valid SVD: A = U*S*V^T). Different implementations of the SVD will give slightly different results; to check that an SVD is correct, you have to compute norm(A - U*S*V^T) / norm(A) and verify that it is a small number.
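For the NumPy run above, that check might look like the following sketch (numpy.linalg.svd already returns V^T as v, so no extra transpose is needed):

>>> a_rec = numpy.dot(u, numpy.dot(numpy.diag(s), v))
>>> numpy.linalg.norm(numpy_matrix - a_rec) / numpy.linalg.norm(numpy_matrix)
# the ratio should be on the order of machine epsilon (~1e-16)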
There is nothing wrong. The SVD resolves the column space and the row space of the target matrix into orthonormal bases in such a fashion as to align these two spaces and account for the dilations along the eigenvectors. The alignment angles may be unique, a discrete set, or a continuum as below.
For example, given two angles t and p and the target matrix (see footnote)
A = ( (1, -1), (2, 2) )
The general decomposition is
U = ( (0, exp[ i p ]), (-exp[ i t ], 0) )
S = sqrt(2) ( (2,0), (0,1) )
V* = ( 1 / sqrt( 2 ) ) ( (exp[ i t ], exp[ i t ]), (exp[ i p ], -exp[ i p ]) )
To recover the target matrix use
A = U S V*
A quick test of the quality of the answer is to verify the unit length of each column vector in both U and V.
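In NumPy terms (reusing u, s, v from the earlier snippet), that column-length test can be strengthened to a full orthonormality check, which any valid SVD must pass:

>>> numpy.allclose(numpy.dot(u.T, u), numpy.eye(5))
True
>>> numpy.allclose(numpy.dot(v, v.T), numpy.eye(5))
True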
Footnote:
Matrices are in row major format. That is, the first row vector in the matrix A is (1, -1).