I'm trying to make an ordered 4-dimensional array whose dimensions are latitude, longitude, month, and data.
I have all my data sorted by latitude, longitude, and month in separate lists, but for further analysis I need to store it in a way that lets me access it easily. So far I've managed to solve my problem with nested for loops building nested lists, and I'll attach the code. I was wondering if it's possible to do this with NumPy, since the 4-D nested list is taking a lot of computational power. Also, the number of data points I have per month and location is dynamic, so basically my dimensions would be 18, 28, 12, ?.
P.S. The Temperature_monthly function gives me a list for specific coordinates and a month.
def param_sorted_loc_month(lst1, lst2):
    """
    :param lst1: Variable Dataset
    :param lst2: Monthly index
    :return: Sorted temperature data for all the locations and months
    """
    dummy = []
    for i in range(len(lst1)):           # latitude
        dummy1 = []
        for j in range(len(lst1[i])):    # longitude
            dummy2 = []
            for k in range(len(lst2)):   # month
                dummy2.append(Temperature_monthly(i, j, k, lst1, lst2))
            dummy1.append(dummy2)
        dummy.append(dummy1)
    return dummy
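Since the last dimension is ragged (18, 28, 12, ?), a plain 4-D NumPy array only works if you pad every data list to a common length. Below is a minimal sketch, assuming Temperature_monthly returns a list of numbers and every latitude row has the same number of longitudes: pad with NaN, then use np.nanmean and friends later so the padding is ignored.

import numpy as np

def param_sorted_loc_month_np(lst1, lst2):
    # First pass: reuse the nested-list builder above, find the longest data list
    nested = param_sorted_loc_month(lst1, lst2)
    max_len = max(len(v) for lat in nested for lon in lat for v in lon)
    # NaN-padded 4-D array: (latitude, longitude, month, data)
    out = np.full((len(lst1), len(lst1[0]), len(lst2), max_len), np.nan)
    for i, lat in enumerate(nested):
        for j, lon in enumerate(lat):
            for k, values in enumerate(lon):
                out[i, j, k, :len(values)] = values
    return out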
I have a CSV file in which I want to compare each row with all other rows. I want to do a linear regression, get the r^2 value for each regression line, and put it into a new matrix. I'm having trouble finding a way to iterate over all the other rows (it's fine to compare the primary row to itself).
I've tried using .iterrows, but I can't think of a way to refer to the other rows once I have my primary row with this function.
UPDATE: Here is a solution I came up with. Please let me know if there is a more efficient way of doing this.
from itertools import combinations

from sklearn.metrics import r2_score

def bad_pairs(df, limit):
    # All unique pairs of row labels
    list_fluor = list(combinations(df.index.values, 2))
    final = {}
    for fluor in list_fluor:
        final[fluor] = r2_score(df.xs(fluor[0]), df.xs(fluor[1]))
    # Keep only the pairs whose r^2 exceeds the limit
    bad_final = {}
    for pair, r2 in final.items():
        if r2 > limit:
            bad_final[pair] = r2
    return bad_final
My data is a pandas DataFrame where the index is the name of the color and there is a number between 0 and 1 for each detector (220 columns).
I'm still working on a way to make a new pandas DataFrame from a dictionary with all the values (final in the code above), not just those over the limit.
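One way to do that (a sketch; the column names here are made up) is to flatten the {(color_a, color_b): r2} dictionary into rows and, if a square layout is wanted, pivot it:

import pandas as pd

pairs_df = pd.DataFrame(
    [(a, b, r2) for (a, b), r2 in final.items()],
    columns=["color_a", "color_b", "r2"],
)
# Optional: a square matrix with the colors on both axes
matrix_df = pairs_df.pivot(index="color_a", columns="color_b", values="r2")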
I have a for loop that produces a 16 x 8 2-D array on each iteration. I want to stack all of these 2-D arrays along the z-axis into a 3-D array, so that I can compute the variance over the z-axis. I have tried multiple commands, such as np.dstack, matrix3D[p,:,:] = ..., and np.newaxis, both inside and outside the loop. However, the closest I've come to my desired output is just a repetition of the last array stacked on top of itself, and the dimensions were way off. I need to keep the original 16 x 8 format. By now I'm in a bit too deep and could use a nudge in the right direction!
My code:
excludedElectrodes = [1, a.numberOfColumnsInArray,
                      a.numberOfElectrodes - a.numberOfColumnsInArray + 1,
                      a.numberOfElectrodes]
matrixEA = np.full([a.numberOfRowsInArray, a.numberOfColumnsInArray], np.nan)
for iElectrode in range(a.numberOfElectrodes):
    if a.numberOfDeflectionsPerElectrode[iElectrode] != 0:
        matrixEA[iElectrode // a.numberOfColumnsInArray][iElectrode % a.numberOfColumnsInArray] = 0
for iElectrode in range(a.numberOfElectrodes):
    if iElectrode + 1 not in excludedElectrodes:
        """Preprocessing"""
        # Loop over heartbeats
        for p in range(1, len(iLAT)):
            # Calculate parameters, store them in the right row-col combo (electrode number)
            matrixEA[iElectrode // a.numberOfColumnsInArray][iElectrode % a.numberOfColumnsInArray] = (
                np.trapz(abs(correctedElectrogram[limitA[0]:limitB[0]]
                             - totalBaseline[limitA[0]:limitB[0]])) / 1000)
# Stack all matrixEA arrays along z axis -- this is where it goes wrong:
# matrixEA is a single, repeatedly overwritten 2-D array, so there is
# nothing to stack by the time np.dstack is called
matrix3D = np.dstack(matrixEA)
This example snippet does what you want, although I suspect your errors have more to do with things unrelated to the concatenation part. Here, we index the array with the None keyword to create a new empty dimension (along which we concatenate the 2-D arrays).
import numpy as np

# Function that creates a dummy (16, 8) array
def foo(a):
    return np.random.random((16, 8)) + a

arrays2D = []
# Your loop
for i in range(10):
    # Calculate your (16, 8) array
    f = foo(i)
    # And append it to the list
    arrays2D.append(f)
# Stack the arrays along a new last dimension
array3D = np.concatenate([i[..., None] for i in arrays2D], axis=-1)
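As a side note, np.stack builds the same array in one call by inserting the new axis for you:

array3D = np.stack(arrays2D, axis=-1)  # shape (16, 8, 10)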
I would like to calculate the SVD of a large matrix with Dask. However, when I naively tried to create an empty 2-D array and update it in a loop, Dask would not allow mutating the array.
So I'm looking for a workaround. I tried saving the large (around 65,000 x 65,000, or even bigger) array into HDF5 via h5py, but updating the array in a loop is quite inefficient. Should I be using mmap, i.e. a memory-mapped NumPy array, instead?
Below I share sample code without any Dask implementation. Should I use dask.bag or dask.delayed for this operation?
The sample code takes long strings and, with a window size of 8, generates combinations of two-letter words. In the actual data the window size would be 20 and the words 8 letters long, and the input string can be 3 GB long.
import itertools
import numpy as np

np.set_printoptions(threshold=np.inf)

# generate all possible words of length 2 (AA, AC, AG, AT, CA, etc.)
# then get a numerical index (AA -> 0, AC -> 1, etc.)
bases = ['A', 'C', 'G', 'T']
all_two = [''.join(p) for p in itertools.product(bases, repeat=2)]
two_index = {word: idx for idx, word in enumerate(all_two)}

# final array to fill, size is [16 possible words x 16 possible words]
counts = np.zeros(shape=(16, 16))  # in the actual data we expect a 65000x65000 array

# sample sequences (these will be gigabytes long in the actual data)
seq1 = "AAAAACCATCGACTACGACTAC"
seq2 = "ACGATCACGACTACGACTAGATGCATCACGACTAAAAA"

# accumulate results
all_pairs = []

def generate_pairs(sequence):
    pairs = []
    for i in range(len(sequence) - 8 + 1):
        window = sequence[i:i + 8]
        words = [window[j:j + 2] for j in range(0, len(window), 2)]
        for pair in itertools.combinations(words, 2):
            pairs.append(pair)
    return pairs

# use the function for each sequence
all_pairs.extend(generate_pairs(seq1))
all_pairs.extend(generate_pairs(seq2))

# convert the 1-D list of pairs into 2-D counts of pairs:
# for each pair, look up the word indices and increment the corresponding cell
for pair in all_pairs:
    counts[two_index[pair[0]], two_index[pair[1]]] += 1
print(counts)
EDIT: I might have asked the question in an overly complicated way; let me try to paraphrase it. I need to construct a single large 2-D array of size ~65000x65000, filled with counts of (word1, word2) pair occurrences. Since Dask does not allow item assignment/mutation on a Dask array, I cannot fill the array as pairs are processed. Is there a workaround to generate/fill a large 2-D array with Dask?
Here's simpler code to test:
import itertools
import numpy as np

np.set_printoptions(threshold=np.inf)

bases = ['A', 'C', 'G', 'T']
all_two = [''.join(p) for p in itertools.product(bases, repeat=2)]
two_index = {word: idx for idx, word in enumerate(all_two)}

seq = "AAAAACCATCGACTACGACTAC"
counts = np.zeros(shape=(16, 16))

for i in range(len(seq) - 8 + 1):
    window = seq[i:i + 8]
    words = [window[j:j + 2] for j in range(0, len(window), 2)]
    for pair in itertools.combinations(words, 2):
        counts[two_index[pair[0]], two_index[pair[1]]] += 1  # problematic part!
print(counts)
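Not a full Dask answer, but one workaround sketch: since most cells of a 65,000 x 65,000 count matrix will stay empty, you can count the pairs in a plain Counter and build a scipy.sparse matrix in one shot at the end, so no array is ever mutated item by item (reusing seq and two_index from the snippet above):

import itertools
from collections import Counter
from scipy.sparse import coo_matrix

pair_counts = Counter()
for i in range(len(seq) - 8 + 1):
    window = seq[i:i + 8]
    words = [window[j:j + 2] for j in range(0, len(window), 2)]
    pair_counts.update(itertools.combinations(words, 2))

# One-shot construction instead of per-item assignment
rows = [two_index[a] for a, b in pair_counts]
cols = [two_index[b] for a, b in pair_counts]
counts = coo_matrix((list(pair_counts.values()), (rows, cols)), shape=(16, 16))
print(counts.toarray())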
I have two 2-D matrices which have a shared axis.
I want to get a 3-D array that holds the results of every pairwise multiplication made between all the combinations of vectors from each matrix along that shared axis.
What is the best way to achieve this? (assuming that the matrices are big)
As an illustration, let's say I have 100 technicians and 1000 customers.
For each of these individuals I have a 1-D array of ones and zeros representing their availability on each day of the week.
That's a 7x100 matrix for the technicians, a 7x1000 matrix for the customers.
import numpy as np
technicians = np.random.randint(low=0,high=2,size=(7,100))
customers = np.random.randint(low=0,high=2,size=(7,1000))
result = solution(technicians, customers)
result.shape # (7,100,1000)
I want to find for each technician-customer couple the days they are both available.
If I perform a pairwise multiplication between each combination of technician availability and customer availability, I get a 1-D array that shows, for each couple, on which days they are both available. Together these arrays form the 3-D array I'm aiming for, shaped 7x100x1000.
Thanks!
Try

ans = technicians.reshape((7, 100, 1)) * customers.reshape((7, 1, 1000))

We make use of numpy broadcasting.
General Broadcasting Rules: When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions and works its way forward. Two dimensions are compatible when (1) they are equal, or (2) one of them is 1.
Now, we match the shapes of technicians and customers as

technicians: 7 x 100 x    1
customers:   7 x   1 x 1000
Result (3-D array): 7 x 100 x 1000

using reshape. Then, we can apply elementwise multiplication with *.
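The same broadcast can also be written without reshape, by indexing with None (np.newaxis) to insert the length-1 axes:

ans = technicians[:, :, None] * customers[:, None, :]  # shape (7, 100, 1000)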
I have a function, say peaksdetect(), that will generate a 2-D array with an unknown number of rows; I will call it a few times, let's say 3, and I would like to stack these 3 arrays into one 3-D array. Here is my start, but it is very complicated with a lot of if statements, so I want to make things simpler if possible:
import numpy as np

dim3 = 3  # the number of times peaksdetect() will be called; it determines
          # the size of the third dimension of the resulting 3-D array
          # (in fact the new dimension is in position 0, so dimensions 0
          # and 1 of "data" become 1 and 2 respectively)
for num in range(dim3):
    data = peaksdetect(dataset[num])  # 2-D array of unknown number of rows
    if num == 0:
        array3D = np.zeros((dim3,) + data.shape)
    else:
        if data.shape[0] > array3D.shape[1]:
            # adjust array3D.shape[1] so that it equals data.shape[0],
            # filling with zeros
            array3D[num] = data
        else:
            # adjust data so that data.shape[0] equals array3D.shape[1],
            # filling with zeros
            array3D[num] = data
...
If you are counting on having to resize your array, there is very likely not going to be much to be gained by preallocating it. It will probably be simpler to store your arrays in a list, then figure out the size of the array to hold them all, and dump the data into it:
data = []
for num in range(dim3):
    data.append(peaksdetect(dataset[num]))

# Largest extent along each dimension, so every array fits
shape = map(max, zip(*(j.shape for j in data)))
shape = (dim3,) + tuple(shape)

data_array = np.zeros(shape, dtype=data[0].dtype)
for j, d in enumerate(data):
    data_array[j, :d.shape[0], :d.shape[1]] = d
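As a quick sanity check, here is the same idea run with a stand-in peaksdetect (hypothetical; the real one isn't shown) that returns a random number of rows:

import numpy as np

def peaksdetect(_):
    # Stand-in: 2-D array with a random number of rows and 5 columns
    return np.random.random((np.random.randint(2, 7), 5))

dim3 = 3
dataset = [None] * dim3
data = [peaksdetect(dataset[num]) for num in range(dim3)]
shape = (dim3,) + tuple(map(max, zip(*(j.shape for j in data))))
data_array = np.zeros(shape, dtype=data[0].dtype)
for j, d in enumerate(data):
    data_array[j, :d.shape[0], :d.shape[1]] = d
print(data_array.shape)  # e.g. (3, 6, 5); shorter arrays are zero-padded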