I'm trying to append to an array by repeatedly calling a function. When I put the append call directly in a loop this works fine, but not when the loop calls the function that's supposed to do the appending.
import numpy as np

test_value = 555
i = 0
j = 0
test_array = np.empty([0, 3])

def test(test_value, i, j, test_array):
    test_temp = []
    test_temp.append(i)
    test_temp.append(j)
    test_temp.append(test_value)
    test_temp_1 = test_temp
    test_temp_2 = np.array(test_temp_1)
    test_temp_2 = np.reshape(test_temp_2, (1, 3))
    test_array = np.append(test_array, test_temp_2, axis=0)
    return test_array
for i in range(0, 10):
    i = i + 1
    j = j + 2
    test(test_value, i, j, test_array)

print("test array", test_array)
Ideally, test_array would get a new row added on each pass through the loop, but instead the final print shows that test_array stays empty.
Cheers
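A likely cause (an assumption, but consistent with the symptom): np.append returns a new array rather than modifying test_array in place, and the loop discards the function's return value; the assignment to test_array inside test() rebinds only the local name, so the global array is never touched. A minimal sketch of the loop with the return value captured:

for i in range(0, 10):
    j = j + 2
    # np.append does not modify its input, so reassign the result
    test_array = test(test_value, i, j, test_array)

print("test array", test_array)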
I have a function which converts an FFT result to octave bands (or 1/n-octave bands):
Function OctaveFilter(LowFreq, HighFreq, Im, Re) ' For amplitude
    Dim i, j, SortedData(), F_From(), F_To()
    Redim SortedData(Bins * n), F_From(Bins * n), F_To(Bins * n)
    Dim p
    For i = 1 To Bins * n
        F_To(i) = Int(HighFreq(i) / df)
        F_From(i) = Int(LowFreq(i) / df)
        If (F_From(i) = 0) Then F_From(i) = 1 ' We cannot start from index 0
    Next
    For i = 1 To Bins * n
        SortedData(i) = 0
        For j = F_From(i) To F_To(i)
            If (Length >= j) Then
                SortedData(i) = SortedData(i) + Im(j)^2 + Re(j)^2
            End If
        Next
    Next
    SortInBins = sqrt(SortedData)
End Function
For example, this FFT [figure: Amplitude] converts to these 1/3-octave bands [figure: 1/3-octave].
But from the FFT I also have the Re and Im parts. I want to convert these parts to octave bands too. Is it the same function? How can I convert the imaginary part to a similar result (1/3-octave)?
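For comparison, a minimal Python sketch of the same band-summing idea (the names re, im, band_edges, and df are illustrative, not from the original code). Note that it reduces each band to power, Re^2 + Im^2, so the separate Re and Im parts lose their individual meaning after this step; it is the magnitude that gets banded:

import numpy as np

def octave_bands(re, im, band_edges, df):
    # band_edges: sequence of (low_hz, high_hz) pairs, one per band
    power = re**2 + im**2
    out = []
    for low, high in band_edges:
        lo = max(int(low / df), 1)             # cannot start from bin 0
        hi = min(int(high / df), len(power) - 1)
        out.append(np.sqrt(power[lo:hi + 1].sum()))
    return np.array(out)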
I am getting "ValueError: too many values to unpack (expected 4)" with the code below. Please help me!
I am trying to lemmatize, cut off overly common and rare words, and build a lexicon so I can identify the most common words and find the relationships between words.
def build_dataset(words, vocabulary_size):
    lexicon = []
    for l in words:
        all_words = word_tokenize(l.lower())
        lexicon += list(all_words)
    lexicon = [lemmatizer.lemmatize(i) for i in lexicon]
    w_counts = Counter(lexicon)
    word = []
    for w in w_counts:
        if 5000 > w_counts[w] > 50:
            word.append(w)
    print(len(word))
    return word
    count = [['UNK', -1]]
    count.extend(collections.Counter(word).most_common(vocabulary_size - 1))
    dictionary = dict()
    for l2, _ in count:
        dictionary[l2] = len(dictionary)
    data = list()
    unk_count = 0
    for l2 in word:
        if l2 in dictionary:
            index = dictionary[l2]
        else:
            index = 0
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words, vocabulary_size)
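A likely cause (my reading of the flattened code, so an assumption about its original indentation): the first return word exits build_dataset before the four-value return is ever reached, so the caller tries to unpack one long word list into four names. A minimal reproduction of that error:

def f():
    return ['a', 'b', 'c', 'd', 'e']   # a single 5-element list, not a 4-tuple

# ValueError: too many values to unpack (expected 4)
data, count, dictionary, reverse_dictionary = f()

Removing (or moving) the early return word so execution reaches return data, count, dictionary, reverse_dictionary should make the unpacking line up.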
This is my code, and when I run it, "Index was outside the bounds of the array" appears at the line b = Asc(y(j + m)). I have tried Try/Catch and it didn't work out.
Public Function SMITH(x, y, SX, SY)
    Dim a, b, j As Integer
    result = 0
    m = x.Length
    n = y.Length
    preBmBc(x)
    preQsBc(x)
    j = 0
    While (j <= (n - m))
        If (SX = SY.ToString.Substring(j, m)) Then
            result = 1
        End If
        a = Asc(y(j + (m - 1)))
        b = Asc(y(j + m))
        j = j + Math.Max(bmBc(a), qsBc(b))
    End While
    Return result
End Function
Have you tried using m - 1 for b as well? You really need to step through the code using breakpoints and the debugger to figure out when the program throws the IndexOutOfRangeException.
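To see why that line can overrun, here is a minimal Python analogue (0-based indexing, illustrative values). The quick-search shift reads the character just past the current window; C reference implementations of Smith get away with reading y[n] because of the terminating NUL, but VB.NET strings have no such sentinel, so the last iteration needs a guard:

y = "abcdefgh"       # n = 8
m = 3
j = len(y) - m       # j = 5, the last window start the While condition allows
print(y[j + m - 1])  # 'h' -> the last character, still in bounds
# print(y[j + m])    # IndexError: j + m == n is one past the end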
This is a part of my matrix factorization code (a very weird version of NMF). My issue is that although I save older copies of the W and H matrices on every iteration, when I compare old_W and W after W finishes updating, they are actually the same! So the actual error output is always 0 and the loop stops after the first iteration. However, "#print old - new" shows that the element W[r][i] is actually updated every time. What is it that I am not seeing?
def csmf(V, l, max_iter, err, alpha=0.01, beta=0.01, lamb=0.01):
    W = np.random.rand(V.shape[0], l)
    H = np.random.rand(l, V.shape[1])
    n = V.shape[0]
    N = V.shape[1]
    NwOone = 60
    NwOtwo = 60
    NhOone = 50
    NhOtwo = 50
    for t in range(max_iter):
        old_W = W  # save old values
        old_H = H
        old = criterion(V, old_W, old_H, l, alpha, beta, lamb)
        print "iteration ", t
        ##### update W
        print "updating W"
        setw = range(0, n)
        subset_one = random.sample(setw, NwOone)
        subset_two = calcGw(V, W, H, n, l, alpha, beta, NwOtwo)
        chosen = np.intersect1d(subset_one, subset_two)
        for r in chosen:
            for i in range(len(W[0])):
                update = wPosNeg(W[r], N, i, l, V, r, beta, H)
                old = W[r][i]
                W[r][i] = update
                new = W[r][i]
                #print old - new
        ##### update H
        print "updating H"
        seth = range(0, N)
        subset_oneh = random.sample(seth, NhOone)
        subset_twoh = calcGh(V, W, H, N, l, NhOtwo, lamb)
        chosenh = np.intersect1d(subset_oneh, subset_twoh)
        for s in chosenh:  # column
            for i in range(len(H)):
                updateh = hPosNeg(H[i], n, i, l, V, s, lamb, W)
                H[i][s] = updateh
        ##### check err
        print "Checking criterion"
        print criterion(V, W, H, l, alpha, beta, lamb)
        print criterion(V, old_W, old_H, l, alpha, beta, lamb)
        actual = abs(criterion(V, W, H, l, alpha, beta, lamb) - criterion(V, old_W, old_H, l, alpha, beta, lamb))
        if actual <= err: return W, H, actual
    return W, H, actual

dmat = np.random.rand(100, 80)
W, H, err = csmf(dmat, 1, 10, 0.001, alpha=0.001, beta=0.001, lamb=0.001)
print err
In these lines:
old_W = W # save old values
old_H = H
you are not saving a copy, you are keeping a reference (old_W and W are the same piece of memory).
Try this:
old_W = W.copy() # save old values
old_H = H.copy()
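A quick demonstration of the difference:

import numpy as np

a = np.zeros(3)
alias = a             # same underlying buffer as a
snapshot = a.copy()   # independent copy of the data
a[0] = 7
print(alias[0])       # 7.0 -- the alias sees the change
print(snapshot[0])    # 0.0 -- the copy keeps the old value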
What is the fastest way to iterate over all elements in a 3D NumPy array? If array.shape = (r,c,z), there must be something faster than this:
import numpy as np

x = np.asarray(range(12)).reshape((1, 4, 3))

# function that sums nearest neighbor values;
# e is my element location, d is the distance
def nn(arr, e, d=1):
    d = e[0]
    r = e[1]
    c = e[2]
    return sum(arr[d, r-1, c-1:c+2]) + sum(arr[d, r+1, c-1:c+2]) + sum(arr[d, r, c-1]) + sum(arr[d, r, c+1])
Instead of creating a nested for loop like the one below to build the values of e and run the function nn for each pixel:
for dim in range(z):
    for row in range(r):
        for col in range(c):
            e = (dim, row, col)
I'd like to vectorize my nn function in a way that extracts location information for each element (e = (0,1,1) for example) and iterates over ALL elements in my matrix without having to manually input each locational value of e OR creating a messy nested for loop. I'm not sure how to apply np.vectorize to this problem. Thanks!
It is easy to vectorize over the d dimension:
def nn(arr, e):
    r, c = e  # (e[0], e[1])
    # the single-column slices arr[:, r, c-1] and arr[:, r, c+1] are
    # already 1-D over the z axis, so they need no sum at all
    return (np.sum(arr[:, r-1, c-1:c+2], axis=1) + np.sum(arr[:, r+1, c-1:c+2], axis=1) +
            arr[:, r, c-1] + arr[:, r, c+1])
Now just iterate over the row and col dimensions, returning a vector that is assigned to the appropriate slot in x.
for row in <correct range>:
    for col in <correct range>:
        x[:, row, col] = nn(data, (row, col))
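A filled-in version of that sketch (my assumptions: a (z, rows, cols) array and a 1-cell border left untouched so the neighbor slices stay in bounds):

import numpy as np

def nn(arr, e):
    r, c = e
    return (np.sum(arr[:, r-1, c-1:c+2], axis=1) +   # three neighbors above
            np.sum(arr[:, r+1, c-1:c+2], axis=1) +   # three neighbors below
            arr[:, r, c-1] + arr[:, r, c+1])         # left and right neighbors

data = np.arange(24, dtype=float).reshape(2, 4, 3)
x = np.zeros_like(data)
for row in range(1, data.shape[1] - 1):
    for col in range(1, data.shape[2] - 1):
        x[:, row, col] = nn(data, (row, col))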
The next step is to replace those two loops with broadcasting: make rows a column array and cols a flat array (R and C standing for the row and column counts),

rows = np.arange(1, R-1)[:, None]
cols = np.arange(1, C-1)

and index with shifted copies: arr[:, rows-1, cols] + arr[:, rows+1, cols] + arr[:, rows, cols-1], etc.
This kind of problem has come up many times, with various descriptions - convolution, smoothing, filtering etc.
We could do some searches to find the best, or if you prefer, we could guide you through the steps.
Converting a nested loop calculation to Numpy for speedup
is a question similar to yours. There are only two levels of looping, and the sum expression is different, but I think it has the same issues:
for h in xrange(1, height-1):
    for w in xrange(1, width-1):
        new_gr[h][w] = (gr[h][w] + gr[h][w-1] + gr[h-1][w] +
                        t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w]))
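For reference, the same update written with whole-array slices (a sketch; gr is assumed to be a 2-D array and t a scalar, as in the loop above):

new_gr[1:-1, 1:-1] = (gr[1:-1, 1:-1] + gr[1:-1, :-2] + gr[:-2, 1:-1] +
                      t * gr[2:, :-2] - 2 * (gr[1:-1, :-2] + t * gr[:-2, 1:-1]))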
Here's what I ended up doing. Since I'm returning the xv vector and slipping it into the larger 3D array lag, this should speed up the process, right? data is my input dataset.
def nn3d(arr, e):
    r, c = e
    n = np.copy(arr[:, r-1:r+2, c-1:c+2])
    n[:, 1, 1] = 0
    n3d = np.ma.masked_where(n == nodata, n)
    xv = np.zeros(arr.shape[0])
    for d in range(arr.shape[0]):
        if np.ma.count(n3d[d, :, :]) < 2:
            element = nodata
        else:
            element = np.sum(n3d[d, :, :]) / (np.ma.count(n3d[d, :, :]) - 1)
        xv[d] = element
    return xv

lag = np.zeros(shape=data.shape)
for r in range(1, data.shape[1] - 1):  # boundary effects
    for c in range(1, data.shape[2] - 1):
        lag[:, r, c] = nn3d(data, (r, c))
What you are looking for is probably np.nditer:
a = np.arange(6).reshape(2, 3)
for x in np.nditer(a):
    print(x, end=' ')
which prints
0 1 2 3 4 5
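Note that np.nditer yields values, not positions. If you also need each element's index (like the e tuples above), np.ndenumerate produces (index, value) pairs:

import numpy as np

a = np.arange(6).reshape(2, 3)
for idx, val in np.ndenumerate(a):
    print(idx, val)   # (0, 0) 0, then (0, 1) 1, and so on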