Looping and if statements in SPSS - variables

I'm new to SPSS and I'm a bit stuck on a problem. I have about 200 variables and I want to loop through pairs of them looking for variables with correlation coefficients above 0.7. I know that I can use CORRELATIONS to get a matrix of coefficients but it would be huge and difficult to look through. Basically, in pseudocode, what I want to do is:
for (i = W1_1 to W1_200) {
  for (j = i to W1_200) {
    if CORRELATIONS(i, j) > 0.7 {
      print i, j, CORRELATIONS(i, j)
    }
  }
}
I can't for the life of me work out how to do any of this in SPSS. Help!

SPSS has an option on the CORRELATIONS command (the MATRIX subcommand) to export the correlation matrix to a dataset. From there you can manipulate the data to produce the correlation pairs that meet your criteria. So first, let's make some fake data to illustrate.
*Making fake data.
set seed 5.
input program.
loop i = 1 to 100.
end case.
end loop.
end file.
end input program.
dataset name test.
compute #base = RV.NORMAL(0,1).
vector X(20).
loop #i = 1 to 20.
compute X(#i) = #base*(#i/20) + RV.NORMAL(0,1).
end loop.
exe.
Now, we can run the CORRELATIONS command and export the table to a new dataset (which I named here Corrs).
DATASET DECLARE Corrs.
CORRELATIONS
/VARIABLES=X1 to X20
/MATRIX=OUT('Corrs').
Unfortunately SPSS returns the full matrix (plus other info on the sample sizes). We then select only the rows we are interested in (the ones with "CORR" in the ROWTYPE_ column) and use a DO REPEAT to set the upper half of the matrix (and the diagonal) to system missing values.
DATASET ACTIVATE Corrs.
SELECT IF ROWTYPE_ = "CORR".
*Now only making lower half of matrix.
COMPUTE #iter = 0.
DO REPEAT X = X1 TO X20.
COMPUTE #iter = #iter + 1.
IF #iter > ($casenum-1) X = $SYSMIS.
END REPEAT.
I set them to system missing values because in the next part I reshape the data using VARSTOCASES. By default this drops missing values, so we won't end up with redundant correlation pairs.
VARSTOCASES
/MAKE Corr FROM X1 TO X20
/INDEX X2 (Corr)
/DROP ROWTYPE_.
RENAME VARIABLES (VARNAME_ = X1).
Now you have your correlation pairs list and can just select out the correlations that meet your criteria.
SELECT IF ABS(Corr) >= .5.
Making the correlation pairs can be wrapped into a MACRO pretty easily to return the pair list. Below is that macro, recreating the exact steps used here.
DEFINE !CorrPairs (!POSITIONAL !CMDEND)
DATASET DECLARE Corrs.
CORRELATIONS
/VARIABLES=!1
/MATRIX=OUT('Corrs').
DATASET ACTIVATE Corrs.
SELECT IF ROWTYPE_ = "CORR".
COMPUTE #iter = 0.
DO REPEAT X = !1.
COMPUTE #iter = #iter + 1.
IF #iter > ($casenum-1) X = $SYSMIS.
END REPEAT.
VARSTOCASES
/MAKE Corr FROM !1
/INDEX X2 (Corr)
/DROP ROWTYPE_.
RENAME VARIABLES (VARNAME_ = X1).
!ENDDEFINE.
The macro just takes a list of variables (in the active dataset) to grab the correlations from, and returns a second dataset named Corrs with the correlation pairs and the variable names defined in the X1 and X2 columns. After the macro is defined, the steps above can be recreated simply as below.
!CorrPairs X1 to X20.
SELECT IF ABS(Corr) >= .5.
EXECUTE.
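For comparison, the same lower-triangle-then-filter logic is only a few lines in Python with pandas, if the data can be exported. This is just an illustrative sketch; the fake data and column names mirror the SPSS example above and are not part of the macro.
import numpy as np
import pandas as pd

# Fake data along the lines of the SPSS example: 20 variables sharing a common component.
rng = np.random.default_rng(5)
base = rng.normal(size=100)
df = pd.DataFrame({f"X{i}": base * (i / 20) + rng.normal(size=100) for i in range(1, 21)})

corr = df.corr()
# Keep only the strictly lower triangle so each pair appears once, then stack into a pair list.
mask = np.tril(np.ones(corr.shape, dtype=bool), k=-1)
pairs = corr.where(mask).stack()
print(pairs[pairs.abs() >= 0.7])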

My suggestion is to use OMS to extract your correlation values from the output into a datafile. Use a macro to only run the correlations you need:
DATASET DECLARE Correlations.
OMS /SELECT TABLES /IF COMMANDS=['Correlations'] SUBTYPES=['Correlations']
/DESTINATION FORMAT=SAV NUMBERED=TableNumber_ OUTFILE='Correlations' VIEWER=YES.
define runCorrs ()
!do !i1=1 !to 200
!do !i2=!i1 !to 200
!if (!i2<>!i1) !then
corr !concat("W_",!i1) with !concat("W_",!i2).
!ifend
!doend !doend
!enddefine.
runCorrs.
OMSEND.
dataset activate Correlations.
select if var2="Pearson Correlation".
VARSTOCASES /make crlVal from W_2 to W_200/index=withvar(crlVal)
/drop TableNumber_ Command_ Subtype_ Label_ Var2.
Now you have a nice list of all the correlations to work with:
select if crlVal>0.7.
exe.

Related

Plotting an exponential function given one parameter

I'm fairly new to Python so bear with me. I have plotted a histogram using some generated data. This data has many, many points. I have defined it with the variable vals. I have then plotted a histogram with these values, though I have limited it so that only values between 104 and 155 are taken into account. This has been done as follows:
bin_heights, bin_edges = np.histogram(vals, range=[104, 155], bins=30)
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2.
plt.errorbar(bin_centres, bin_heights, np.sqrt(bin_heights), fmt=',', capsize=2)
plt.xlabel("$m_{\gamma\gamma} (GeV)$")
plt.ylabel("Number of entries")
plt.show()
This gives a histogram plot (image not reproduced here).
My next step is to take into account values from vals which are less than 120. I have done this as follows:
background_data=[j for j in vals if j <= 120] #to avoid taking the signal bump, upper limit of 120 GeV set
I need to plot a curve on the same plot as the histogram, which follows the form B(x) = Ae^(-x/λ)
I then estimated a value of λ using the maximum likelihood estimator formula:
background_data=[j for j in vals if j <= 120] #to avoid taking the signal bump, upper limit of 120 GeV set
#print(background_data)
N_background=len(background_data)
print(N_background)
sigma_background_data=sum(background_data)
print(sigma_background_data)
lamb = (sigma_background_data)/(N_background) #maximum likelihood estimator for lambda
print('lambda estimate is', lamb)
where lamb = λ. I got a value of roughly lamb = 27.75, which I know is correct. I now need to get an estimate for A.
I have been advised to do this as follows:
Given a value of λ, find A by scaling the PDF to the data such that the area beneath
the scaled PDF has equal area to the data
I'm not quite sure what this means, or how I'd go about trying to do this. PDF means probability density function. I assume an integration will have to take place, so to get the area under the data (vals), I have done this:
data_area= integrate.cumtrapz(background_data, x=None, dx=1.0)
print(data_area)
plt.plot(background_data, data_area)
However, this gives me an error
ValueError: x and y must have same first dimension, but have shapes (981555,) and (981554,)
I'm not sure how to fix it. The end result should be the exponential background curve overlaid on the histogram (example image not reproduced here).
See the cumtrapz docs:
Returns: ... If initial is None, the shape is such that the axis of integration has one less value than y. If initial is given, the shape is equal to that of y.
So you should either pass an initial value, like
data_area = integrate.cumtrapz(background_data, x=None, dx=1.0, initial = 0.0)
or discard the first value of background_data:
plt.plot(background_data[1:], data_area)
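A tiny self-contained check of that behaviour (assuming an older SciPy where integrate.cumtrapz still exists; newer releases rename it cumulative_trapezoid):
import numpy as np
from scipy import integrate

y = np.array([1.0, 2.0, 3.0, 4.0])                    # stand-in for background_data
short = integrate.cumtrapz(y, dx=1.0)                 # length 3: one fewer point than y
padded = integrate.cumtrapz(y, dx=1.0, initial=0.0)   # length 4: same length as y
print(short.shape, padded.shape)
So plot either y[1:] against the shorter result, or y against the padded one.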

SPSS: DO REPEAT with different numbers of matched variables

I have a dataset where each case has the following set of variables:
VarA1.1 to VarA25.185 (total of 4625 variables)
VarB.1 to VarB.185 (total of 185 variables)
For each case, VarA1.1, VarA2.1, VarA3.1, etc. are all linked to the same VarB.1.
I want to use a DO REPEAT function to search through each .1 instance using both VarA and VarB.
Example code:
DO REPEAT VarA = VarA1.1 to VarA25.185
/ VarB = VarB.1 to VarB.185.
if (VarA = X) AND ((VarB-Y)<0)
VarC = Z.
END REPEAT.
EXE.
However, it seems that because there are different numbers of variables in the repeat list of VarA and VarB, they don't pair up. I want to associate each VarA#(1-25).1 with VarB.1, each VarA#(1-25).2 with each VarB.2, etc. up to VarB.185 so that in the repeat function the correct pairing of variables is used.
Thanks!
Another way to do this is to use a LOOP on the outside and a DO REPEAT on the inside. So here is some example data, with just three A vectors that each go from 1 to 10.
SET SEED 10.
INPUT PROGRAM.
LOOP Id = 1 TO 100.
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
DATASET NAME Sim.
*Making random data.
VECTOR A1.(10).
VECTOR A2.(10).
VECTOR A3.(10).
VECTOR B.(10).
NUMERIC X Y.
DO REPEAT a = A1.1 TO Y.
COMPUTE a = RV.BERNOULLI(0.5).
END REPEAT.
EXECUTE.
So here is the part you want to pay attention to. Your DO REPEAT currently tries to pair up lists of different lengths. This switches things around: the LOOP goes over the positions (1 to 185 in your data, 1 to 10 here), while the DO REPEAT goes over each of your A vectors.
VECTOR A1 = A1.1 TO A1.10.
VECTOR A2 = A2.1 TO A2.10.
VECTOR A3 = A3.1 TO A3.10.
VECTOR B = B.1 TO B.10.
VECTOR C.(10).
LOOP #i = 1 TO 10.
DO REPEAT A = A1 A2 A3.
IF (A(#i) = X) AND (B(#i)-Y<0) C.(#i) = B(#i).
END REPEAT.
END LOOP.
EXECUTE.
Code-golf-wise it is probably not going to beat the macro approach, since you have to define all of those VECTOR statements, but I think it is a conceptually clear way to write the program.
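Outside of SPSS, the pairing itself (every VarA group sharing one VarB per position) is just a broadcast. Here is a rough NumPy sketch, where the array shapes and the X, Y, Z constants are made up purely for illustration:
import numpy as np

# Hypothetical stand-ins: 100 cases, 25 A-groups x 185 positions, one B per position.
rng = np.random.default_rng(10)
A = rng.integers(1, 6, size=(100, 25, 185))
B = rng.integers(1, 6, size=(100, 185))
X, Y, Z = 3, 4, 1

# B[:, None, :] pairs every A[:, group, position] with the matching B[:, position].
C = np.where((A == X) & ((B[:, None, :] - Y) < 0), Z, 0)
print(C.shape)   # (100, 25, 185)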
It looks like what you are trying to do is loop over 25 variables but repeat this for 185 variables.
It would be more intuitive to use SPSS macros to achieve this. Stepping through the code below will demonstrate the building blocks for solving your data problem.
DEFINE !MyMacroName ()
SET MPRINT ON.
/* Generate some example data to match desired data format*/.
set seed = 10.
input program.
loop #i = 1 to 50.
compute case = #i.
end case.
end loop.
end file.
end input program.
dataset name sim.
execute.
!do !i =1 !to 25
vector !concat('VarA',!i,'.(185, F1.0).').
do repeat v = !concat('VarA',!i,'.1') to !concat('VarA',!i,'.185').
compute v = TRUNC(RV.UNIFORM(1,6)).
end repeat.
!doend
vector VarB.(185, F1.0).
do repeat v = VarB.1 to VarB.185.
compute v = TRUNC(RV.UNIFORM(1,6)).
end repeat.
execute.
/* Solve actual problem */.
!do !i =1 !to 185
!do !j = 1 !to 25
if (!concat('VarA',!j,'.',!i) = !concat('VarB.',!i)) !concat('VarC', !j)=1.
!doend
!doend
SET MPRINT OFF.
!ENDDEFINE.
/* Run macro */.
!MyMacroName.

Dummy Variables in Julia

In R there is nice functionality for running a regression with dummy variables for each level of a categorical variable. e.g. Automatically expanding an R factor into a collection of 1/0 indicator variables for every factor level
Is there an equivalent way to do this in Julia?
x = randn(1000)
group = repmat(1:25 , 40)
groupMeans = randn(25)
y = 3*x + groupMeans[group]
data = DataFrame(x=x, y=y, g=group)
for i in levels(group)
data[parse("I$i")] = data[:g] .== i
end
lm(y~x+I1+I2+I3+I4+I5+I6+I7+I8+I9+I10+
I11+I12+I13+I14+I15+I16+I17+I18+I19+I20+
I21+I22+I23+I24, data)
If you are using the DataFrames package, after you pool the data, the package will take care of the rest:
Pooling columns is important for working with the GLM package. When fitting regression models, PooledDataArray columns in the input are translated into 0/1 indicator columns in the ModelMatrix, with one column for each of the levels of the PooledDataArray.
You can see the rest of the documentation on pooled data here.
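For readers coming from Python, the same idea of expanding a categorical column into 0/1 indicator columns is what pandas.get_dummies does; this is only an analogy, not the Julia API:
import pandas as pd

df = pd.DataFrame({"x": [0.1, -0.3, 0.7, 1.2], "g": ["a", "b", "a", "c"]})
# One 0/1 indicator column per level of g, mirroring what a pooled column
# contributes to the model matrix.
print(pd.get_dummies(df, columns=["g"]))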

find ranges to create Uniform histogram

I need to find ranges in order to create a uniform histogram, e.g. splitting ages into 4 ranges:
data_set = [18,21,22,24,27,27,28,29,30,32,33,33,42,42,45,46]
Is there a function that gives me the ranges so the histogram is uniform? In this case:
ranges = [(18,24), (27,29), (30,33), (42,46)]
This example is easy; I'd like to know whether there is an algorithm that deals with more complex data sets as well.
Thanks.
You are looking for the quantiles that split up your data equally. This, combined with cut, should work. So, suppose you want n groups.
set.seed(1)
x <- rnorm(1000) # Generate some toy data
n <- 10
uniform <- cut(x, c(-Inf, quantile(x, prob = (1:(n-1))/n), Inf)) # Determine the groups
plot(uniform)
Edit: now corrected to yield the correct cuts at the ends.
Edit2: I don't quite understand the downvote. But this also works in your example:
data_set = c(18,21,22,24,27,27,28,29,30,32,33,33,42,42,45,46)
n <- 4
groups <- cut(data_set, breaks = c(-Inf, quantile(data_set, prob = 1:(n-1)/n), Inf))
levels(groups)
Some minor renaming is necessary. For slightly better level names, you could also use min(x) and max(x) instead of -Inf and Inf.
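The same quantile idea in Python, for reference (a sketch on the asker's data using NumPy; the interior cut points play the role of R's quantile() breaks):
import numpy as np

data_set = np.array([18, 21, 22, 24, 27, 27, 28, 29, 30, 32, 33, 33, 42, 42, 45, 46])
n = 4
# Interior cut points that split the data into n roughly equal-sized groups.
cuts = np.quantile(data_set, np.arange(1, n) / n)
# Assign each value to a group, mirroring cut() with -Inf/Inf end breaks.
groups = np.digitize(data_set, cuts)
print(cuts, groups)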

Is it possible to optimize this Matlab code for doing vector quantization with centroids from k-means?

I've created a codebook using k-means of size 4000x300 (4000 centroids, each with 300 features). Using the codebook, I then want to label an input vector (for purposes of binning later on). The input vector is of size Nx300, where N is the total number of input instances I receive.
To compute the labels, I calculate the closest centroid for each of the input vectors. To do so, I compare each input vector against all centroids and pick the centroid with the minimum distance. The label is then just the index of that centroid.
My current Matlab code looks like:
function labels = assign_labels(centroids, X)
labels = zeros(size(X, 1), 1);
% for each X, calculate the distance from each centroid
for i = 1:size(X, 1)
% distance of X_i from all j centroids is: sum((X_i - centroid_j)^2)
% note: we leave off the sqrt as an optimization
distances = sum(bsxfun(@minus, centroids, X(i, :)) .^ 2, 2);
[value, label] = min(distances);
labels(i) = label;
end
However, this code is still fairly slow (for my purposes), and I was hoping there might be a way to optimize the code further.
One obvious issue is that there is a for-loop, which is the bane of good performance in Matlab. I've been trying to come up with a way to get rid of it, but with no luck (I looked into using arrayfun in conjunction with bsxfun, but haven't gotten that to work). Alternatively, if someone knows of any other way to speed this up, I would greatly appreciate it.
Update
After doing some searching, I couldn't find a great solution using Matlab, so I decided to look at what is used in Python's scikits.learn package for 'euclidean_distance' (shortened):
XX = sum(X * X, axis=1)[:, newaxis]
YY = Y.copy()
YY **= 2
YY = sum(YY, axis=1)[newaxis, :]
distances = XX + YY
distances -= 2 * dot(X, Y.T)
distances = maximum(distances, 0)
which uses the binomial form of the euclidean distance ((x-y)^2 -> x^2 + y^2 - 2xy), which from what I've read usually runs faster. My completely untested Matlab translation is:
XX = sum(data .* data, 2);                        % N x 1 squared norms of the data rows
YY = sum(center .^ 2, 2);                         % K x 1 squared norms of the centroids
D  = bsxfun(@plus, XX, YY') - 2*data*center';     % N x K squared distances
[~, labels] = min(D, [], 2);                      % nearest centroid index per row
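For what it's worth, here is a complete NumPy version of the same binomial-form trick, written as a sketch with made-up shapes rather than the scikit-learn source:
import numpy as np

def assign_labels(centroids, X):
    # Squared distances via ||x||^2 + ||c||^2 - 2*x.c, computed for all pairs at once.
    XX = np.sum(X * X, axis=1)[:, np.newaxis]      # N x 1
    CC = np.sum(centroids * centroids, axis=1)     # K
    d2 = XX + CC - 2.0 * X @ centroids.T           # N x K
    np.maximum(d2, 0, out=d2)                      # guard against tiny negative round-off
    return np.argmin(d2, axis=1)                   # index of the nearest centroid per row

rng = np.random.default_rng(0)
centroids = rng.normal(size=(40, 30))              # toy codebook
X = rng.normal(size=(100, 30))                     # toy inputs
labels = assign_labels(centroids, X)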
Use the following function to calculate your distances. You should see an order of magnitude speed-up.
In the two matrices A and B, the columns are the dimensions and the rows are the points.
A is your matrix of centroids. B is your matrix of datapoints.
function D=getSim(A,B)
Qa=repmat(dot(A,A,2),1,size(B,1));
Qb=repmat(dot(B,B,2),1,size(A,1));
D=Qa+Qb'-2*A*B';
You can vectorize it by converting to cells and using cellfun:
[nRows,nCols]=size(X);
XCell=num2cell(X,2);
dist=reshape(cell2mat(cellfun(@(x)(sum(bsxfun(@minus,centroids,x).^2,2)),XCell,'UniformOutput',false)),size(centroids,1),nRows);
[~,labels]=min(dist);
Explanation:
We assign each row of X to its own cell in the second line
This piece, @(x)(sum(bsxfun(@minus,centroids,x).^2,2)), is an anonymous function which is the same as your distances=... line, and using cell2mat, we apply it to each row of X.
The labels are then the row indices of the minimum in each column.
For a true matrix implementation, you may consider trying something along the lines of:
P2 = kron(centroids, ones(size(X,1),1));
Q2 = kron(ones(size(centroids,1),1), X);
distances = reshape(sum((Q2-P2).^2,2), size(X,1), size(centroids,1));
Note
This assumes the data is organized as [x1 y1 ...; x2 y2 ...;...]
You can use a more efficient algorithm for nearest neighbor search than brute force.
The most popular approach is the k-d tree: O(log(n)) average query time instead of the O(n) brute-force complexity.
Regarding a Matlab implementation of k-d trees, you can have a look here
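As a concrete (Python, not Matlab) illustration of the k-d tree idea, SciPy's cKDTree does the whole query in two calls. Note that k-d trees tend to degrade in very high dimensions, so whether this beats brute force on 300-dimensional data would need testing:
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
centroids = rng.normal(size=(4000, 300))   # the codebook
X = rng.normal(size=(500, 300))            # input vectors to label

tree = cKDTree(centroids)                  # build the tree once over the centroids
dist, labels = tree.query(X, k=1)          # nearest-centroid distance and index per row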