I have a set of simulation data on which I want to perform an FFT. I am using scipy for the FFT and matplotlib for plotting. However, the FFT looks strange, so I don't know if I am missing something in my code. I would appreciate any help.
Original data: [plot: time-varying data]
FFT: [plot: FFT]
Code for the FFT calculation:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.fftpack as fftpack
data = pd.read_csv('table.txt',header=0,sep="\t")
fig, ax = plt.subplots()
mz_res=data[['mz ()']].to_numpy()
time=data[['# t (s)']].to_numpy()
ax.plot(time[:300],mz_res[:300])
ax.set_title("Time-varying mz component")
ax.set_xlabel('time')
ax.set_ylabel('mz amplitude')
fft_res=fftpack.fft(mz_res[:300])
power=np.abs(fft_res)
frequencies=fftpack.fftfreq(fft_res.size)
fig2, ax_fft=plt.subplots()
ax_fft.plot(frequencies[:150],power[:150]) # taking just half of the frequency range
I am just plotting the first 300 datapoints because the rest is not important.
Am I doing something wrong here? I was expecting single-frequency peaks, not what I got. Thanks!
Link for the input file:
Pastebin
EDIT
Turns out the mistake was in the conversion of the dataframe to a numpy array. Indexing with double brackets, data[['mz ()']], keeps a one-column dataframe, so .to_numpy() returns a 2-D array of shape (n, 1): each element of the resulting array is itself a single-element array. Since fftpack.fft transforms along the last axis by default, it was computing n trivial length-1 FFTs instead of one length-n FFT. When I change the code to:
mz_res=data['mz ()'].to_numpy()
so that it is a conversion from a pandas series to a numpy array, then the FFT behaves as expected and I get single frequency peaks from the FFT.
So I just put this here in case someone else finds it useful. Lesson learned: converting a pandas series to a numpy array yields a 1-D array, while converting a one-column pandas dataframe yields a 2-D array of shape (n, 1).
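To make the difference concrete, here is a minimal sketch (with a made-up frame standing in for table.txt) showing the shapes the two conversions produce; since fftpack.fft transforms along the last axis, the (n, 1) shape is what broke the spectrum:
import pandas as pd

# hypothetical two-column frame standing in for table.txt
df = pd.DataFrame({'# t (s)': [0.0, 1.0, 2.0], 'mz ()': [0.1, 0.5, 0.2]})

print(df[['mz ()']].to_numpy().shape)  # (3, 1): dataframe -> 2-D column array
print(df['mz ()'].to_numpy().shape)    # (3,):   series    -> flat 1-D array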
Solution:
Using the conversion from pandas series to numpy array instead of pandas dataframe to numpy array.
Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.fftpack as fftpack
data = pd.read_csv('table.txt',header=0,sep="\t")
fig, ax = plt.subplots()
mz_res=data['mz ()'].to_numpy() #series to array
time=data[['# t (s)']].to_numpy() #dataframe to array
ax.plot(time,mz_res)
ax.set_title("Time-varying mz component")
ax.set_xlabel('time')
ax.set_ylabel('mz amplitude')
fft_res=fftpack.fft(mz_res)
power=np.abs(fft_res)
frequencies=fftpack.fftfreq(fft_res.size, d=time[1,0]-time[0,0]) # pass the sample spacing so the frequency axis is in Hz
indices=np.where(frequencies>0)
freq_pos=frequencies[indices]
power_pos=power[indices]
fig2, ax_fft=plt.subplots()
ax_fft.plot(freq_pos,power_pos) # taking just half of the frequency range
ax_fft.set_title("FFT")
ax_fft.set_xlabel('Frequency (Hz)')
ax_fft.set_ylabel('FFT Amplitude')
ax_fft.set_yscale('linear')
Yields:
[plot: time-dependence]
[plot: FFT]
Related
I have time series data. I am trying to compute the FFT, but it raises KeyError: 'Aligned' when trying to get the value.
My data looks like below; this is the code:
import datetime
import numpy as np
import scipy as sp
import scipy.fftpack
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
temp_fft = sp.fftpack.fft(data3)
It looks like your data is a pandas series; fft works with numpy arrays rather than series.
The easy fix is to convert your series into a numpy array, either via
data3.values
or
np.array(data3)
You can then pass that array into the fft function. So the end result is:
temp_fft = sp.fftpack.fft(data3.values)
This should work for you now.
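A minimal, self-contained sketch of the fix, using a synthetic series as a stand-in for data3:
import numpy as np
import pandas as pd
import scipy.fftpack

# synthetic stand-in for data3: a 4 Hz sine sampled at 100 Hz
t = np.arange(0, 1, 0.01)
data3 = pd.Series(np.sin(2 * np.pi * 4 * t), index=t)

temp_fft = scipy.fftpack.fft(data3.values)  # .values hands fft a plain ndarray
print(type(data3.values).__name__, temp_fft.shape)  # ndarray (100,)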
I am new to programming, and I'm having difficulty plotting multiple graphs. What I am trying to get is a graph with values of K along the Y-axis plotted against values of Dk. I need this graph to contain all the curves K = f(Dk), one for each temperature Tcwin in range(10, 40, 1).
While the code seems to be working well and I have obtained the data I was trying to calculate, I can't seem to plot them. Any help would be appreciated.
import numpy as np
import pandas as pd
A=3000
d_in=20
CF=0.85
w=2.26
Tcwin=12
Dk=np.arange(27.418,301.598,27.418)
dk=(Dk*1000/(A*3.600))
cp=4.19
Gw=13000
e=2.718281828
f_velocity=w*1.1/(20**0.25)
for Tcwin in range(10,40,1):
    while Tcwin<35:
        print(Tcwin)
        f_w=0.12*CF*(1+0.15*Tcwin)
        Ф_в=f_velocity**f_w
        K=CF*4070*((1.1*w/(d_in**0.25))**(0.12*CF*(1+0.15*Tcwin)))*(1-(((35-Tcwin)**2)*(0.52-0.0072*dk)*(CF**0.5))/1000)
        n=(K*A)/(cp*Gw*1000)
        Tcwout_theor=Tcwin+(Dk*2225/(cp*Gw))
        Subcooling_theor=(Tcwout_theor-Tcwin)/(e**(K*A/(cp*(Gw*1000/3600)*1000)))
        TR_theor=Tcwout_theor-Tcwin
        Tsat_theor=Tcwout_theor+Subcooling_theor
        print(K)
        print(Tcwout_theor)
        print(Subcooling_theor)
        print(Tsat_theor)
        Tcwin+=1
    else:
        print('Loop done')
Is this what you are looking for? Plotting after each run:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # needed for the plotting below
A=3000
d_in=20
CF=0.85
w=2.26
Tcwin=12
Dk=np.arange(27.418,301.598,27.418)
dk=(Dk*1000/(A*3.600))
cp=4.19
Gw=13000
e=2.718281828
f_velocity=w*1.1/(20**0.25)
for Tcwin in range(10,40,1):
    while Tcwin<35:
        print(Tcwin)
        f_w=0.12*CF*(1+0.15*Tcwin)
        Ф_в=f_velocity**f_w
        K=CF*4070*((1.1*w/(d_in**0.25))**(0.12*CF*(1+0.15*Tcwin)))*(1-(((35-Tcwin)**2)*(0.52-0.0072*dk)*(CF**0.5))/1000)
        n=(K*A)/(cp*Gw*1000)
        Tcwout_theor=Tcwin+(Dk*2225/(cp*Gw))
        Subcooling_theor=(Tcwout_theor-Tcwin)/(e**(K*A/(cp*(Gw*1000/3600)*1000)))
        TR_theor=Tcwout_theor-Tcwin
        Tsat_theor=Tcwout_theor+Subcooling_theor
        print(K)
        print(Tcwout_theor)
        print(Subcooling_theor)
        print(Tsat_theor)
        Tcwin+=1
        plt.plot(K,dk)  # ---> this is the code for plotting
    else:
        print('Loop done')
plt.show()
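If the goal is one figure with a distinguishable curve per temperature, here is a hedged sketch of the plotting pattern; the K formula below is only a placeholder, so substitute the real expression from the loop above. It also puts Dk on the x-axis and K on the y-axis, matching the question's K = f(Dk):
import numpy as np
import matplotlib.pyplot as plt

Dk = np.arange(27.418, 301.598, 27.418)

for Tcwin in range(10, 35):
    K = Tcwin * np.sqrt(Dk)  # placeholder for the real K(Tcwin, dk) formula
    plt.plot(Dk, K, label=f'Tcwin={Tcwin}')  # one labelled curve per temperature

plt.xlabel('Dk')
plt.ylabel('K')
plt.legend(fontsize='small', ncol=2)
plt.show()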
The plot below shows the correlation with one column. The problem is that the numbers are not readable, because there are many columns in it.
How can I show only the 5 or 6 most important columns instead of all of them, most of which have very low importance?
plt.figure(figsize=(20,3))
sns.heatmap(df.corr()[['price']].sort_values('price', ascending=False).iloc[1:].T, annot=True,
            cmap='Spectral_r', vmax=0.9, vmin=-0.31)
You can limit the cells shown via .iloc[1:7]. If you also want to show the highest negative values, you could create a second plot with .iloc[-6:]. To have both together, you could use numpy's np.r_ slicing helper and write .iloc[np.r_[1:4, -3:0]].
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.DataFrame(np.random.rand(7, 27), columns=['price'] + [*'abcdefghijklmnopqrstuvwxyz'])
plt.figure(figsize=(20, 3))
sns.heatmap(df.corr()[['price']].sort_values('price', ascending=False).iloc[1:7].T,
            annot=True, annot_kws={'rotation': 90, 'size': 20},
            cmap='Spectral_r', vmax=0.9, vmin=-0.31)
plt.show()
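For the combined variant mentioned above, it may help to see what the np.r_ slice actually produces; a minimal check:
import numpy as np

# np.r_ concatenates slice ranges into a single index array,
# so .iloc[np.r_[1:4, -3:0]] picks rows 1-3 plus the last three rows
print(np.r_[1:4, -3:0])  # [ 1  2  3 -3 -2 -1]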
annot can also be an array of labels. Using this, you can build a string matrix that shows the desired numbers and sets the rest to an empty string.
import matplotlib.pyplot as plt
import numpy as np; np.random.seed(0)
import seaborn as sns; sns.set_theme()
import pandas as pd
from string import ascii_letters
# generate random data
rs = np.random.RandomState(33)
df = pd.DataFrame(data=rs.normal(size=(100, 26)),
                  columns=list(ascii_letters[26:]))
importance_index = 5  # annotate only this many of the most important columns
data = df.corr()[['A']].sort_values('A', ascending=False).iloc[1:].T
labels = data.round(2).astype(str)  # make a str-copy of the rounded values
labels.iloc[0, importance_index:] = ''  # blank out the low-importance columns
sns.heatmap(data, annot=labels, cmap='Spectral_r', vmax=0.9, vmin=-0.31, fmt='', annot_kws={'rotation': 90})
plt.show()
The output on some random data:
This works but it has its limits, particularly with setting fmt='' (you can no longer use it to conveniently format decimals; that has to be done manually now, e.g. with round). I would also question whether this approach is the best one to take here. Consistency in plots is quite important. I would rather evaluate whether the heatmap labels can be rotated (I've included that above) or left out completely, since they are technically redundant given the color-coding. Alternatively, you could plot only the cells with the "important" values.
I would like to know why I would need to convert my dataframe to an ndarray when doing a regression, since I get the same result for the intercept and coefficients when I do not convert it.
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
from sklearn import linear_model
%matplotlib inline
# import data and create dataframe
!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
df = pd.read_csv("FuelConsumption.csv")
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
# Split train/ test data
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
# Modeling
regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
# If I use the dataframes train[['ENGINESIZE']] for x and train[['CO2EMISSIONS']] for y
# below instead, I get the same result.
regr.fit(train_x, train_y)
# The coefficients
print ('Coefficients: ', regr.coef_)
print ('Intercept: ',regr.intercept_)
Thank you very much!
So df is the loaded dataframe, cdf is another frame with selected columns, and train is selected rows.
train[['ENGINESIZE']] is a 1 column dataframe (I believe train['ENGINESIZE'] would be a pandas Series).
I believe the preferred syntax for getting an array from the dataframe is:
train[['ENGINESIZE']].values # or
train[['ENGINESIZE']].to_numpy()
though
np.asanyarray(train[['ENGINESIZE']])
is supposed to do the same thing.
Digging down through the regr.fit code I see that it calls sklearn.utils.check_X_y, which in turn calls sklearn.utils.check_array. That takes care of converting the inputs to numpy arrays, with some awareness of pandas dataframe peculiarities (such as multiple dtypes).
So it appears that if fit accepts your dataframes, you don't need to convert them ahead of time. But if you can get a clean array from the dataframe, there's no harm in doing that either. Either way the fit is done on arrays derived from the dataframe.
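A quick way to see this for yourself: a minimal sketch on synthetic data (not the FuelConsumption set), fitting once with the dataframes and once with converted arrays:
import numpy as np
import pandas as pd
from sklearn import linear_model

# synthetic data with a known slope of about 3
df = pd.DataFrame({'x': np.arange(50, dtype=float)})
df['y'] = 3 * df['x'] + np.random.randn(50)

fit_df = linear_model.LinearRegression().fit(df[['x']], df[['y']])
fit_np = linear_model.LinearRegression().fit(df[['x']].to_numpy(), df[['y']].to_numpy())

# identical coefficients either way, since check_array converts internally
print(fit_df.coef_, fit_np.coef_)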
Let's say I have the following code.
import numpy as np
import pandas as pd
x = pd.DataFrame(np.random.randn(100, 3)).rolling(window=10, center=True).cov()
For each index, I have a 3x3 matrix. I would like to calculate eigenvalues and then some function of those eigenvalues. Or perhaps I might want to compute some function of eigenvalues and eigenvectors. The point is that if I take x.loc[0] then I have no problem computing anything from that matrix. How do I do it in a rolling fashion for all the matrices?
Thanks!
You can use the eigenvector/eigenvalue methods in scipy.linalg.
import numpy as np
import pandas as pd
from scipy import linalg as LA
x = pd.DataFrame(np.random.randn(100, 3)).rolling(window=10, center=True).cov()
for i in x.index.get_level_values(0).unique():  # one 3x3 block per window index
    try:
        e_vals, e_vec = LA.eig(x.loc[i])
        print(e_vals, e_vec)
    except ValueError:  # edge windows contain NaN and cannot be decomposed
        continue
If there are no NaN values present, you do not need the try/except; a plain for loop is enough.
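For example, dropping the incomplete edge windows up front leaves only fully populated 3x3 blocks, so the plain loop suffices; a minimal sketch of that variant (my addition, not from the original answer):
import numpy as np
import pandas as pd
from scipy import linalg as LA

x = pd.DataFrame(np.random.randn(100, 3)).rolling(window=10, center=True).cov()

x_clean = x.dropna()  # removes the NaN-filled windows at both edges
for i in x_clean.index.get_level_values(0).unique():
    e_vals, e_vec = LA.eig(x_clean.loc[i])  # 3x3 covariance block for window i
    print(e_vals)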