Getting only 100 samples/second from MPU6050 through Raspberry Pi Pico - frequency

I want to plot the raw sensor data (accelerometer, gyroscope and temperature) from an MPU6050. In my setup the MPU6050 is soldered to a Pico, and the Pico is connected to my laptop via USB.
My problem is that the sampling rate is quite low, around 100 samples/second. I am using Python's serial library (pyserial) on the laptop side, and on the Pico's side I have used UART. According to the MPU6050 datasheet I should get at least a 1 kHz sample rate, so I am clearly doing something wrong here.
I am quite new to this and don't have much of a clue how to increase the sample rate. Any help on this would be much appreciated. Thanks in advance.
I am using a baudrate of 115200. From some trial and error I found that the link between the Pico and Python via pyserial is not the problem. It seems the Pico cannot read data from the MPU6050 that fast, because I see a lot of empty reads when checking serial.in_waiting. I am uploading my code here. As a brief overview, my main.py uses two external modules, imu.py and vector3d.py, to read the sensor. I am only getting about 100 samples/second, which is far less than the 1 kHz the MPU6050 datasheet says is possible.
# Pico code (main.py)
from imu import MPU6050
import utime
import time
from machine import Pin, I2C, UART

i2c = I2C(0, sda=Pin(4), scl=Pin(5), freq=400000)
imu = MPU6050(i2c)
uart = UART(0, baudrate=115200)
print("UART Info : ", uart)

while True:
    ax = round(imu.accel.x, 2)
    ay = round(imu.accel.y, 2)
    az = round(imu.accel.z, 2)
    gx = round(imu.gyro.x)
    gy = round(imu.gyro.y)
    gz = round(imu.gyro.z)
    tem = round(imu.temperature, 2)
    print(ax, ay, az, gx, gy, gz, tem)
# Python-side code
import serial
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

ser = serial.Serial('COM4', 115200)
print(ser)
file = open("serial_sample.csv", "w")
start_time = time.time()
curr_time = time.time()
output_list = []
header_string = "X_accel" + "," + "Y_accel" + "," + "Z_accel" + "," + "X_Pos" + "," + "Y_Pos" + "," + "Z_Pos" \
                + "," + "Temperature" + "\n"
output_list.append(header_string)

while curr_time - start_time < 10:
    data = ser.readline()
    data = str(data).split("\\")[0].split(" ")
    output = data[0].split("'")[1] + "," + data[1] + "," + data[2] + "," + data[3] + "," + data[4] + "," \
             + data[5] + "," + data[6] + "\n"
    output_list.append(output)
    curr_time = time.time()

file.writelines(output_list)
file.close()
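As a side note on the trial-and-error check mentioned above, here is a minimal sketch of how the achieved line rate could be measured on the PC side, independent of the CSV parsing (the port name 'COM4' and the 5-second window are assumptions for illustration):

import time
import serial  # pyserial

# Quick check: count how many complete, non-empty lines arrive per second.
ser = serial.Serial('COM4', 115200, timeout=1)   # port name is an assumption
count = 0
t0 = time.time()
while time.time() - t0 < 5:
    line = ser.readline()      # returns b'' on timeout
    if line.strip():
        count += 1
ser.close()
print("approx. samples/second:", count / 5)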

Related

CSV file search speedup

I need to build a relief (elevation) profile graph from coordinates. I have a CSV file with 12,000,000 lines; finding a single height in it takes about 2 - 2.5 seconds. I rewrote the CSV to Parquet, which saved some time: finding one height now takes about 1 - 1.7 seconds. However, I need to build a profile from 500 - 2000 values, which makes the total time very long. In the future the CSV base may have to grow, which will slow this process down even more. So my question is: is it possible to somehow reduce the processing time?
Code example:
import dask.dataframe as dk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time

filename = 'n46_e032_1arc_v3.csv'
df = dk.read_csv(filename)
df.to_parquet('n46_e032_1arc_v3_parquet')

Latitude1y, Longitude1x = 46.6276, 32.5942
Latitude2y, Longitude2x = 46.6451, 32.6781
sec, steps, k = 0.00027778, 1, 11.73
Latitude, Longitude = [Latitude1y], [Longitude1x]
sin, cos = Latitude2y - Latitude1y, Longitude2x - Longitude1x
y, x = Latitude1y, Longitude1x
while Latitude[-1] < Latitude2y and Longitude[-1] < Longitude2x:
    y, x, steps = y + sec * k * sin, x + sec * k * cos, steps + 1
    Latitude.append(y)
    Longitude.append(x)

time_start = time.time()
long, elevation_data = [], []
df2 = dk.read_parquet('n46_e032_1arc_v3_parquet')
for i in range(steps + 1):
    elevation_line = df2[(Longitude[i] <= df2['x']) & (df2['x'] <= Longitude[i] + sec) &
                         (Latitude[i] <= df2['y']) & (df2['y'] <= Latitude[i] + sec)].compute()
    elevation = np.asarray(elevation_line.z.tolist())
    if elevation[-1] < 0:
        elevation_data.append(0)
    else:
        elevation_data.append(elevation[-1])
    long.append(30 * i)
plt.bar(long, elevation_data, width=30)
plt.show()
print(time.time() - time_start)
Here's one way to solve this problem using KD trees. A KD tree is a data structure for doing fast nearest-neighbor searches.
import scipy.spatial

# df here is assumed to be an in-memory pandas DataFrame
# (e.g. the Parquet data loaded with .compute()).
tree = scipy.spatial.KDTree(df[['x', 'y']].values)
elevations = df['z'].values

long, elevation_data = [], []
for i in range(steps):
    lon, lat = Longitude[i], Latitude[i]
    dist, idx = tree.query([lon, lat])   # nearest point in the data
    elevation = elevations[idx]
    if elevation < 0:
        elevation = 0
    elevation_data.append(elevation)
    long.append(30 * i)
Note: if you can make assumptions about the data, like "all of the points in the CSV are equally spaced," faster algorithms are possible.
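For example, if the heights really do sit on a regular 1-arc-second grid, the nearest cell can be looked up by index arithmetic with no search at all. A minimal sketch under that assumption (the grid origin lon0/lat0 and the in-memory grid array are illustrative, not taken from the question):

import numpy as np

# Assumption: heights stored as a 2D array `grid` with shape (n_lat, n_lon),
# covering a regular grid that starts at (lat0, lon0) with spacing `sec`.
lon0, lat0, sec = 32.0, 46.0, 0.00027778   # illustrative values

def nearest_height(grid, lon, lat):
    # Compute the row/column of the nearest grid node directly.
    col = int(round((lon - lon0) / sec))
    row = int(round((lat - lat0) / sec))
    return grid[row, col]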
It looks like your data might be on a regular grid. If (and only if) every combination of x and y exists in your data, then it probably makes sense to turn this into a labeled 2D array of points, after which querying the correct position will be very fast.
For this, I'll use xarray, which is essentially pandas for N-dimensional data, and integrates well with dask:
import xarray as xr

# bring the dataframe into memory
df = dk.read_parquet('n46_e032_1arc_v3_parquet').compute()
da = df.set_index(["y", "x"]).z.to_xarray()

# now you can query the nearest points:
desired_lats = xr.DataArray([46.6276, 46.6451], dims=["point"])
desired_lons = xr.DataArray([32.5942, 32.6781], dims=["point"])
subset = da.sel(y=desired_lats, x=desired_lons, method="nearest")

# if you'd like, you can return to pandas:
subset_s = subset.to_series()

# you could do this only once, and save the reshaped array as a zarr store:
ds = da.to_dataset(name="elevation")
ds.to_zarr("n46_e032_1arc_v3.zarr")
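On later runs the saved store could then be reopened directly instead of re-reading the Parquet file (a small sketch; it assumes the zarr package is installed):

import xarray as xr

# reopen the previously written store and query it the same way
ds = xr.open_zarr("n46_e032_1arc_v3.zarr")
point = ds.elevation.sel(y=46.6276, x=32.5942, method="nearest")
print(float(point))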

solve_ivp error: 'Required step size is less than spacing between numbers.'

I've been trying to solve the Newtonian two-body problem using RK45 from scipy, but I keep running into the error 'Required step size is less than spacing between numbers.' I've tried different values of t_eval than the one below, but nothing seems to work.
from scipy import optimize
from numpy import linalg as LA
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
import numpy as np
from scipy.integrate import solve_ivp
AU=1.5e11
a=AU
e=0.5
mss=2E30
ms = 2E30
me = 5.98E24
mv=4.867E24
yr=3.15e7
h=100
mu1=ms*me/(ms+me)
mu2=ms*me/(ms+me)
G=6.67E11
step=24
vi=np.sqrt(G*ms*(2/(a*(1-e))-1/a))
#sun=sphere(pos=vec(0,0,0),radius=0.1*AU,color=color.yellow)
#earth=sphere(pos=vec(1*AU,0,0),radius=0.1*AU)
sunpos=np.array([-903482.12391302, -6896293.6960525, 0. ])
earthpos=np.array([a*(1-e),0,0])
earthv=np.array([0,vi,0])
sunv=np.array([0,0,0])
def accelerations2(t, pos):
    norme = sum((pos[0:3]-pos[3:6])**2)**0.5
    gravit = G*(pos[0:3]-pos[3:6])/norme**3
    sunaa = me*gravit
    earthaa = -ms*gravit
    tota = earthaa + sunaa
    return [*earthaa, *sunaa]
def ode45(f, t, y, h):
    """Calculate next step of an initial value problem (IVP) of an ODE with a RHS described
    by the RHS function with an order 4 approx. and an order 5 approx.
    Parameters:
        t: float. Current time.
        y: float. Current step (position).
        h: float. Step-length.
    Returns:
        w: float. Order 4 approx.
        q: float. Order 5 approx.
    """
    s1 = f(t, y[0], y[1])
    s2 = f(t + h/4.0, y[0] + h*s1[0]/4.0, y[1] + h*s1[1]/4.0)
    s3 = f(t + 3.0*h/8.0, y[0] + 3.0*h*s1[0]/32.0 + 9.0*h*s2[0]/32.0, y[1] + 3.0*h*s1[1]/32.0 + 9.0*h*s2[1]/32.0)
    s4 = f(t + 12.0*h/13.0, y[0] + 1932.0*h*s1[0]/2197.0 - 7200.0*h*s2[0]/2197.0 + 7296.0*h*s3[0]/2197.0, y[1] + 1932.0*h*s1[1]/2197.0 - 7200.0*h*s2[1]/2197.0 + 7296.0*h*s3[1]/2197.0)
    s5 = f(t + h, y[0] + 439.0*h*s1[0]/216.0 - 8.0*h*s2[0] + 3680.0*h*s3[0]/513.0 - 845.0*h*s4[0]/4104.0, y[1] + 439.0*h*s1[1]/216.0 - 8.0*h*s2[1] + 3680.0*h*s3[1]/513.0 - 845.0*h*s4[1]/4104.0)
    s6 = f(t + h/2.0, y[0] - 8.0*h*s1[0]/27.0 + 2*h*s2[0] - 3544.0*h*s3[0]/2565 + 1859.0*h*s4[0]/4104.0 - 11.0*h*s5[0]/40.0, y[1] - 8.0*h*s1[1]/27.0 + 2*h*s2[1] - 3544.0*h*s3[1]/2565 + 1859.0*h*s4[1]/4104.0 - 11.0*h*s5[1]/40.0)
    w1 = y[0] + h*(25.0*s1[0]/216.0 + 1408.0*s3[0]/2565.0 + 2197.0*s4[0]/4104.0 - s5[0]/5.0)
    w2 = y[1] + h*(25.0*s1[1]/216.0 + 1408.0*s3[1]/2565.0 + 2197.0*s4[1]/4104.0 - s5[1]/5.0)
    q1 = y[0] + h*(16.0*s1[0]/135.0 + 6656.0*s3[0]/12825.0 + 28561.0*s4[0]/56430.0 - 9.0*s5[0]/50.0 + 2.0*s6[0]/55.0)
    q2 = y[1] + h*(16.0*s1[1]/135.0 + 6656.0*s3[1]/12825.0 + 28561.0*s4[1]/56430.0 - 9.0*s5[1]/50.0 + 2.0*s6[1]/55.0)
    return w1, w2, q1, q2
t=0
T=10**5
poss=[-903482.12391302, -6896293.6960525, 0. ,a*(1-e),0,0 ]
sol = solve_ivp(accelerations2, [0, 10**5], poss,t_eval=np.linspace(0,10**5,1))
print(sol)
Not sure what the error even means, because I've tried many different t_eval values and nothing seems to work.
The default values in solve_ivp are made for a "normal" situation where the scales of the variables are not too far outside the range 0.1 to 100. You could achieve these scales by rescaling the problem so that all lengths and related constants are in AU and all times and related constants are in days.
Or you can try to set the absolute tolerance to something reasonable, like 1e-4*AU.
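To illustrate the rescaling suggestion (this sketch is not part of the original answer; the unit choices are just one possibility):

import numpy as np

# Illustrative rescaling: work in AU and days so positions are O(1)
# and velocities are O(1e-2) instead of O(1e11) and O(1e4).
AU = 1.5e11            # metres per AU
day = 86400.0          # seconds per day
G_SI = 6.67e-11        # m^3 kg^-1 s^-2
ms = 2e30              # kg, solar mass

# G*M_sun expressed in AU^3/day^2 (about 2.96e-4)
GM_sun = G_SI * ms * day**2 / AU**3

# circular-orbit speed at 1 AU in these units (about 0.017 AU/day)
v_circ = np.sqrt(GM_sun / 1.0)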
It also helps to use the correct first-order system, as I told you recently in another question on this topic. In a mechanical system you usually get a second-order ODE x'' = a(x). The first-order system to pass to the ODE solver is then [x', v'] = [v, a(x)], which could be implemented as
def firstorder(t, state):
    pos, vel = state.reshape(2, -1)
    return [*vel, *accelerations2(t, pos)]
Next, it is always helpful to apply the acceleration of the Earth to the Earth and that of the sun to the sun. That is, fix an order for the objects. At the moment the initialization has the sun first, while the acceleration computation treats the state as Earth first. Switch everything to sun first:
def accelerations2(t, pos):
    pos = pos.reshape(-1, 3)
    # pos[0] = sun, pos[1] = earth
    norme = sum((pos[1]-pos[0])**2)**0.5
    gravit = G*(pos[1]-pos[0])/norme**3
    sunacc = me*gravit
    earthacc = -ms*gravit
    totacc = earthacc + sunacc
    return [*sunacc, *earthacc]
And then it never goes amiss to use correctly reproduced natural constants, like
G = 6.67E-11
Then the solver call and print formatting as
state0 = [*sunpos, *earthpos, *sunv, *earthv]
sol = solve_ivp(firstorder, [0, T], state0, first_step=1e+5, atol=1e-6*a)
print(sol.message)
for t, pos in zip(sol.t, sol.y[[0,1,3,4]].T):
    print("%.6e"%t, ", ".join("%8.4g"%x for x in pos))
gives the short table
The solver successfully reached the end of the integration interval.
t x_sun y_sun x_earth y_earth
0.000000e+00 -9.035e+05, -6.896e+06, 7.5e+10, 0
1.000000e+05 -9.031e+05, -6.896e+06, 7.488e+10, 5.163e+09
that is, for this step the solver only needs one internal step.

Increasing the volume of a recording in real time or after saving it?

I used Python to make a prototype that increases the volume of an audio signal in real time. It works by using new_data = audioop.mul(data, 4, 4), where 'data' is a chunk from the PyAudio stream.
Now I have to do something similar in Objective-C, and even after searching I am unable to find out how. How can it be done in Objective-C? Do we have that kind of control over the data flow in Objective-C, and if we don't, is there any way to increase the volume of a recorded sample?
import pyaudio
import wave
import audioop
import sys

FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
CHUNK = 1024
RECORD_SECONDS = 7
WAVE_OUTPUT_FILENAME1 = sys.argv[1]
WAVE_OUTPUT_FILENAME2 = sys.argv[2]
device_index = 2

print("----------------------record device list---------------------")
audio = pyaudio.PyAudio()
print(audio)
info = audio.get_host_api_info_by_index(0)
numdevices = info.get('deviceCount')
for i in range(0, numdevices):
    if (audio.get_device_info_by_host_api_device_index(0, i).get('maxInputChannels')) > 0:
        print("Input Device id ", i, " - ", audio.get_device_info_by_host_api_device_index(0, i).get('name'))
print("-------------------------------------------------------------")

index = int(input())
print(type(index))
print("recording via index " + str(index))

stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE, input=True, input_device_index=index,
                    frames_per_buffer=CHUNK)
print("recording started")
Recordframes = []
Recordframes2 = []
print(int(RATE / CHUNK * RECORD_SECONDS))
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
    data = stream.read(CHUNK)
    new_data = audioop.mul(data, 4, 4)
    print("hshsh")
    Recordframes.append(data)
    Recordframes2.append(new_data)
    # data = stream.read(CHUNK)
    # print("hshsh")
    # Recordframes.append(data)
# print("recording stopped")

stream.stop_stream()
stream.close()
audio.terminate()

waveFile = wave.open(WAVE_OUTPUT_FILENAME1, 'wb')
waveFile.setnchannels(CHANNELS)
waveFile.setsampwidth(audio.get_sample_size(FORMAT))
waveFile.setframerate(RATE)
waveFile.writeframes(b''.join(Recordframes))

waveFile2 = wave.open(WAVE_OUTPUT_FILENAME2, 'wb')
waveFile2.setnchannels(CHANNELS)
waveFile2.setsampwidth(audio.get_sample_size(FORMAT))
waveFile2.setframerate(RATE)
waveFile2.writeframes(b''.join(Recordframes2))

waveFile.close()
waveFile2.close()
You can use AVAudioEngine (link) to tap into the raw audio data. Alternatively, still using AVAudioEngine, you could add an AVAudioUnitEQ (link) node to your audio graph and use that to boost the gain.
Using either method, you can then write out to a file using AVAudioFile (link).

Multiprocessing on dataset from pyspark returns JVM error

I need to run some clustering algorithms in parallel in a Jupyter notebook. The clustering function I want to parallelize works when using multithreading or when run on its own. However, it returns
raise Py4JError("{0} does not exist in the JVM".format(name))
when I try multiprocessing. I don't have much experience with multiprocessing; what could I be doing wrong?
Code for clustering:
def clustering(ID, df):
    pandas_df = df.select("row", "features", "type") \
        .where(df.type == ID).toPandas()
    print("process " + str(ID) + ": preparing data for clustering")
    feature_series = pandas_df["features"].apply(lambda x: x.toArray())
    objs = [pandas_df, pd.DataFrame(feature_series.tolist())]
    t_df = pd.concat(objs, axis=1)
    print("process " + str(ID) + ": initiating clustering")
    c = ...  # clustering algo here
    print("process " + str(ID) + " DONE!")
    return
Code for multiprocessing:
import multiprocessing as mp

k = 4
if __name__ == '__main__':
    pl = []
    for i in range(0, k):
        print("sending process:", i)
        process = mp.Process(target=clustering, args=(i, df))
        pl.append(process)
        process.start()
    for process in pl:
        print("waiting for join from process")
        process.join()
The error was caused by the subprocesses not being able to access the same memory (in which the PySpark dataframe resided).
I solved it by partitioning the dataset first and moving the access to the PySpark dataframe into its own function, like so:
pandas_df = df.select("row", "features", "type") \
    .where(df.type == ID).toPandas()
and then running the clustering on the separate Pandas dataframes.
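A minimal sketch of how that restructuring might look (the helper names to_pandas_subset and clustering_pandas and the use of a multiprocessing.Pool are illustrative assumptions, not the poster's exact code; df is the PySpark dataframe from the question):

import multiprocessing as mp
import pandas as pd

def to_pandas_subset(df, ID):
    # Touch the PySpark dataframe only in the parent process,
    # so the worker processes never need the JVM.
    return df.select("row", "features", "type").where(df.type == ID).toPandas()

def clustering_pandas(args):
    ID, pandas_df = args
    feature_series = pandas_df["features"].apply(lambda x: x.toArray())
    t_df = pd.concat([pandas_df, pd.DataFrame(feature_series.tolist())], axis=1)
    # ... run the clustering algorithm on t_df here ...
    return ID

if __name__ == '__main__':
    k = 4
    subsets = [(i, to_pandas_subset(df, i)) for i in range(k)]  # done in the parent
    with mp.Pool(processes=k) as pool:
        results = pool.map(clustering_pandas, subsets)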

How to format input data for textsum data_convert_example

I was hoping someone might be able to see where I am failing here. I have scraped some data from BuzzFeed and now I am trying to format a text file that I can then send into the data_convert_example text_to_data formatter.
I thought I had the answer a couple of times, but I am still running up against a brick wall when I process this as binary and then try to train against the data.
What I did was run binary_to_text on the toy dataset, open the file in Notepad++ under Windows showing all characters, and match what I believed to be the format.
I apologize for the long function below, but I really am unsure where the issue might be and figured this was the best way to provide enough information. Does anyone have any ideas or recommendations?
def processPath(self, toPath):
    try:
        fout = open(os.path.join(toPath, '{}-{}'.format(self.baseName, self.fileNdx)), 'a+')
        for path, dirs, files in os.walk(self.fromPath):
            for fn in files:
                fullpath = os.path.join(path, fn)
                if os.path.isfile(fullpath):
                    #with open(fullpath, "rb") as f:
                    with codecs.open(fullpath, "rb", 'ascii', "ignore") as f:
                        try:
                            finalRes = ""
                            content = f.readlines()
                            self.populateVocab(content)
                            sentences = sent_tokenize((content[1]).encode('ascii', "ignore").strip('\n'))
                            for sent in sentences:
                                textSumFmt = self.textsumFmt
                                finalRes = textSumFmt["artPref"] + textSumFmt["sentPref"] + sent.replace("=", "equals") + textSumFmt["sentPost"] + textSumFmt["postVal"]
                            finalRes += (('\t' + textSumFmt["absPref"] + textSumFmt["sentPref"] + (content[0]).strip('\n').replace("=", "equals") + textSumFmt["sentPost"] + textSumFmt["postVal"]) + '\t' + 'publisher=BUZZ' + os.linesep)
                            if self.lineNdx != 0 and self.lineNdx % self.lines == 0:
                                fout.close()
                                self.fileNdx += 1
                                fout = open(os.path.join(toPath, '{}-{}'.format(self.baseName, self.fileNdx)), 'a+')
                            fout.write(("{}").format(finalRes.encode('utf-8', "ignore")))
                            self.lineNdx += 1
                        except RuntimeError as e:
                            print "Runtime Error: {0} : {1}".format(e.errno, e.strerror)
    finally:
        fout.close()
After further analysis, it seems that the source of the problem lies more with the source data and the way it is constructed than with data_convert_example.py itself. I'm closing this, as the heading is not in line with the source of the issue.
I found that the source of my problem was a space between "Article" and the equals sign. After removing it I was able to train successfully.