Using RF Modules with Raspberry Pi Pico - UART

I have 2 Raspberry Pi Picos running MicroPython. I am trying to use a 433 MHz RF transmitter on one Pico and a 433 MHz RF receiver on the other Pico. I am currently using UART to transmit and receive data:
# Receiver
import os
import machine
from time import sleep

uart = machine.UART(0, 4800)
print(uart)

led = machine.Pin(25, machine.Pin.OUT)

b = None
msg = ""
while True:
    if uart.any():
        b = uart.readline()
        try:
            msg = b.decode('utf-8')
            print(str(msg))
        except:
            print("Failed (" + str(type(b)) + "): " + str(b))
    led.toggle()
    sleep(1)
and
# Transmitter
import os
import machine
from time import sleep

uart = machine.UART(0, 4800)
print(uart)

led = machine.Pin(25, machine.Pin.OUT)

while True:
    sleep(5)
    led.toggle()
    uart.write('Hello, World!')
But the receiver prints garbage even when the transmitter is not transmitting. (I can't paste it here as it messes with the formatting.)
As an experiment, I connected the TX pin of one Pico directly to the RX pin of the other Pico, and the data was sent successfully. Therefore, I believe the transmitter and receiver are picking up interference from other signals.
My Question:
Arduino has libraries for packet radio (see this). Is there anything similar in MicroPython or for the Raspberry Pi Pico?
Thanks.
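A note on the symptom itself: inexpensive 433 MHz ASK/OOK receiver modules have automatic gain control that amplifies ambient noise to full scale whenever no carrier is present, so random bytes on the UART are expected while the transmitter is idle. Packet-radio libraries deal with this by framing each payload with a preamble/sync word and a checksum, and discarding anything that does not parse. A minimal sketch of that idea in MicroPython (this framing format is my own invention for illustration, not an existing library):

# Hypothetical framing sketch: wrap each payload in a sync word and a
# checksum so the receiver can reject idle-channel noise.
SYNC = b'\xaa\xaa\x7e'  # preamble + sync byte (arbitrary choice)

def make_packet(payload):
    checksum = sum(payload) & 0xFF
    return SYNC + bytes([len(payload)]) + payload + bytes([checksum])

def try_parse(buf):
    i = buf.find(SYNC)
    if i < 0 or len(buf) < i + len(SYNC) + 2:
        return None  # no sync word yet, or length byte not received
    start = i + len(SYNC)
    length = buf[start]
    end = start + 1 + length
    if len(buf) < end + 1:
        return None  # packet not fully received yet
    payload = buf[start + 1:end]
    if (sum(payload) & 0xFF) != buf[end]:
        return None  # checksum failed: noise, discard
    return payload

The transmitter would send make_packet(b'Hello, World!') over uart.write(), and the receiver would accumulate uart.read() output into a buffer and call try_parse on it, dropping everything up to and including each successfully parsed packet.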

Related

Controlling USB Thorlabs camera via Python - OpenCV

There are several topics on this, but many of them are very old and no real solutions have been offered (or at least none that work for me).
I have tried various libraries to get Python to read frames from my USB camera (DCC1545M), and they all fail with various module or DLL import errors: Instrumental, the Thorcam API, py-harware, micromanager...
Ideally I would love to get it to work with OpenCV, because of all the useful computer vision features that you can later apply to the image, which I am not sure you can do with the other libraries.
However, I encounter the same issue as everyone else, in that OpenCV cannot read the USB camera in the first place.
import cv2

cap = cv2.VideoCapture(1)  # tried different indices
cap.isOpened()  # returns False

img_counter = 0
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ret, frame = cap.read()  # returned frame is empty
    cv2.imshow('preview', frame)
    k = cv2.waitKey(1)
    if k % 256 == 32:  # if SPACE is pressed, take image
        img_name = 'frame_number_{}.png'.format(img_counter)
        cv2.imwrite(img_name, frame)
        print('frame taken')
        img_counter += 1

cap.release()
cv2.destroyAllWindows()
I have installed the driver from the Thorlabs website and I have the uc480_64.dll. The camera is successfully located using the Instrumental library:

from instrumental import list_instruments, instrument
from ctypes import *

paramsets = list_instruments()  # camera found
print(paramsets)
which returns
[<ParamSet[UC480_Camera] serial=b'4102675270' model=b'C1285R12M' id=1>]
I was wondering if anyone knows whether, in the last couple of years, OpenCV has managed to find a way to read USB cameras like this, and if so, what is the way?
Or of any other reliable method which allows further image processing on the captured frames.
PS: I posted this on Superuser because apparently hardware questions are not allowed on Stack Overflow, but Superuser migrated it back here... So apologies if it is off-topic here as well.
Can you communicate with the camera in its native software?
https://www.thorlabs.com/software_pages/ViewSoftwarePage.cfm?Code=ThorCam
Our lab uses "pylablib cam-control" to communicate with a variety of cameras (including Thorlabs USB ones): https://pylablib-cam-control.readthedocs.io/en/latest/
Or, if you would prefer writing your own code, pylablib includes a class for Thorlabs USB cameras (it has actually been tested with your specific camera):
https://pylablib.readthedocs.io/en/latest/devices/uc480.html#cameras-uc480
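If you would rather pull frames into OpenCV yourself, a minimal sketch using pylablib might look like this (class and method names are taken from the docs linked above; untested here, so treat it as a starting point):

import cv2
from pylablib.devices import uc480

cam = uc480.UC480Camera()  # open the first detected camera
try:
    frame = cam.snap()  # grab a single frame as a numpy array
    cv2.imshow('Thorlabs frame', frame)
    cv2.waitKey(0)
finally:
    cam.close()
    cv2.destroyAllWindows()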
Try the following code. It works with my Thorlabs DCx camera:
import cv2
import numpy as np
from instrumental.drivers.cameras import uc480

# init camera
instruments = uc480.list_instruments()
cam = uc480.UC480_Camera(instruments[0])

# params
cam.start_live_video(framerate="10Hz")

while cam.is_open:
    frame = cam.grab_image(timeout='100s', copy=True, exposure_time='10ms')
    frame1 = np.stack((frame,) * 3, -1)  # stack the 1-channel frame into a 3-channel image
    frame1 = frame1.astype(np.uint8)
    gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    # now you can apply OpenCV features
    cv2.imshow('Camera', gray)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cam.close()
cv2.destroyAllWindows()

Getting TensorFlow To Run Faster

I have developed a machine learning Python script (let's call it classify_obj, written with Python 3.6) that imports TensorFlow. It was developed initially for bulk analysis, but now I need to run it repeatedly on smaller datasets to cater for more real-time usage. I am doing this on Linux RHEL 7.
Process Flow:
1. Master tool (written in Java) calls classify_obj with the object input to categorize.
2. classify_obj generates the classification result as a CSV (takes about 7-10s).
3. Master tool reads the result from #2.
4. Master tool proceeds to do other logic.
5. Repeat #1 with the next object input.
To break down the time taken, I switched off the main logic and just performed the module imports without any other action. I found that the imports take about 4-5s of the 7-10s run time on the small dataset, and the classification takes about 2s. I am also looking at ways to reduce the run time in other areas, but the bulk of it comes from the imports.
Import time: 4-6s
Classify time: 1s
Read, write and other logic time: 0.2s
What options are there to reduce the import time?
One idea I had was to turn classify_obj into a "stay alive" process; the master tool would stop this process/service after completing all its activity. The intent (not sure if this would be the case) is that all the required libraries are loaded once at process start-up, so each call from the master tool only incurs the classification time instead of importing the libraries repeatedly.
What do you think about this? Also how can I set this up on Linux RHEL 7.4? Some reference links would be greatly appreciated.
Other suggestions would be greatly appreciated.
Thanks and have a great day!
This is the solution I designed to achieve the above.
Reference: https://realpython.com/python-sockets/
I had to create 2 scripts:
1. Client Python script: passes the raw data to be classified to the server script using socket programming.
2. Server Python script: loads the Keras (TensorFlow) libraries and model at launch, and stays alive until a 'stop' request from the client (to exit the while loop). When the client sends data, the server processes it and returns an ok/not-ok output back to the client.
In the end, the classification time is reduced to 0.1 - 0.3s.
Client Script
import socket
from argparse import ArgumentParser

def main():
    parser = ArgumentParser(description='XXXXX')
    parser.add_argument('-i', '--input', default='NA', help='Input txt file path')
    parser.add_argument('-o', '--output', default='NA', help='Output csv path with class')
    parser.add_argument('-stop', '--stop', default='no', help='Stop the server script')
    args = parser.parse_args()

    message = args.input + ',' + args.output + ',' + args.stop

    HOST = '127.0.0.1'  # The server's hostname or IP address
    PORT = 65432        # The port used by the server

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))
    sock.send(message.encode())
    data = sock.recv(1024)
    print('Received', data)

if __name__ == "__main__":
    main()
Server Script
import socket

def main():
    HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
    PORT = 65432        # Port to listen on (non-privileged ports are > 1023)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((HOST, PORT))
    sock.listen(5)

    stop_process = 'no'
    while stop_process == 'no':
        # print('Waiting for connection')
        conn, addr = sock.accept()
        try:
            # print('Connected by', addr)
            while True:
                data = conn.recv(1024)
                if data:
                    # process_input processes the incoming data; if the client
                    # sent 'yes' for the stop argument, it returns 'yes' and
                    # the outer loop exits.
                    stop_process = process_input(data)
                    conn.sendall(stop_process.encode())  # send reply back to client
                else:
                    break
            # print('Closing connection', addr)
        finally:
            conn.close()

if __name__ == "__main__":
    main()
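For reference, with the two files saved as server.py and client.py (names are my own), the server is started once with "python server.py", each classification request is then issued with "python client.py -i input.txt -o output.csv", and "python client.py -stop yes" shuts the server down.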

Why can a single process achieve 100% usage on multiple CPUs on Windows Subsystem for Linux (WSL), but not on Ubuntu on a server?

I want to achieve parallel computing with Python's multiprocessing module, so I implemented a simulated calculation to test whether I can use multiple CPU cores. I found something very strange: a single process can drive 8 CPUs to 100% usage on Windows Subsystem for Linux (WSL) on my desktop, but only one CPU reaches 100% usage on Ubuntu on the lab's server.
(Screenshots of the CPU usage on both machines omitted.)
Furthermore, I found that using multiple processes does not reduce the time cost on WSL on my desktop, but it does largely reduce the time cost on Ubuntu on the lab's server.
(Timing screenshots omitted. On the lab's server I ran 6 processes, and a single process there takes about 440s; on my desktop I ran 3 processes, and a single process takes about 29s.)
Here is my Python source code:
import numpy as np
import time
import os
import multiprocessing as mp

PROCESS_MAX = 1
LOOPS = 1
process_list = []

def simulated_calculation():
    x = np.random.rand(100, 100)
    y = np.random.rand(100, 100)
    z = np.outer(x, y)
    determinant = np.linalg.det(z)

def child_process(name):
    for i in range(LOOPS):
        print("The child process[%s] starts at %s and its PID is %s" % (str(name), time.ctime(), os.getpid()))
        simulated_calculation()
        print("The child process[%s] stops at %s and its PID is %s" % (str(name), time.ctime(), os.getpid()))

def main():
    print("All start at %s" % time.ctime())
    print("The parent process starts at %s and its PID is %s" % (time.ctime(), os.getpid()))
    start_wall_time = time.time()
    for i in range(PROCESS_MAX):
        p = mp.Process(target=child_process, args=(i + 1,))
        process_list.append(p)
        p.daemon = True
        p.start()
    for i in process_list:
        i.join()
    stop_wall_time = time.time()
    print("All stop at %s" % time.ctime())
    print("The whole runtime is %ss" % str(stop_wall_time - start_wall_time))

if __name__ == "__main__":
    main()
I hope someone can help me. Thanks!
WSL1 has a virtual layer through which the Windows device drivers are passed. WSL2, on the other hand, has more access thanks to a real Linux kernel in place. However, direct access to the hardware (other than USB) is unavailable to WSL1, and hardware such as USB and GPU is currently not available to WSL2 either, though support is being worked on.
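A separate thing worth ruling out (my own suggestion, not part of the answer above): the simulated calculation is dominated by numpy's linear-algebra routines, which call into a BLAS/LAPACK library that can run its own thread pool. A numpy build linked against a multithreaded BLAS will max out every core from a single process, which matches the WSL observation and would also explain why adding processes there does not help. Capping the BLAS thread count makes the two environments comparable; a sketch:

# Pin the BLAS/OpenMP thread pools to one thread per process so that any
# speedup comes from multiprocessing alone. These variables must be set
# before numpy is imported.
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np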

What's going wrong in the conversion of bytes back to a numpy ndarray?

I successfully established a TCP connection between my PC and a Raspberry Pi. Having sent strings, I am now looking to send numpy arrays, basically images, over the connection. This is my code for the server:
import socket
import pickle
import cv2
import numpy as np

s = socket.socket()
host = '192.168.137.171'  # ip of raspberry pi
port = 12346
s.bind((host, port))
cam = cv2.VideoCapture(0)
s.listen(5)
while True:
    ret, frame = cam.read()
    # frame = pickle.dumps(frame)
    frame = np.ndarray.tobytes(frame)
    c, addr = s.accept()
    print('Got connection from', addr)
    c.send(frame)
    # c.send(bytes(frame, "utf-8"))
    c.close()
Using this I am transferring the numpy array, converting it to bytes with np.ndarray.tobytes(). Here is the client-side code, which is executed on my PC:
import socket
import numpy as np

s = socket.socket()
host = '192.168.137.171'  # ip of raspberry pi
port = 12346
s.connect((host, port))
while True:
    print(type(s.recv(1024)))
    x = np.frombuffer(s.recv(1024), dtype=np.uint8)
s.close()
Now, after executing all this, I expected to decode the bytes back to a numpy ndarray and receive an image, but when I use
cv2.imshow('x', x)
I just get a blank grey display. Where is it going wrong?
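One thing worth checking (my own observation, since the symptom fits): s.recv(1024) returns at most 1024 bytes per call, while a single 640x480x3 uint8 frame is about 921,600 bytes, and the print(type(s.recv(1024))) line consumes the first chunk entirely. A sketch of receiving one whole frame of a known size (the 640x480x3 shape is an assumption for a typical webcam; adjust it to your camera):

import socket
import numpy as np

FRAME_SHAPE = (480, 640, 3)  # assumed sender resolution
FRAME_SIZE = 480 * 640 * 3   # bytes in one uint8 frame

s = socket.socket()
s.connect(('192.168.137.171', 12346))
chunks = []
received = 0
while received < FRAME_SIZE:
    # read at most 4096 bytes, never past the end of the frame
    chunk = s.recv(min(4096, FRAME_SIZE - received))
    if not chunk:
        break  # connection closed early; frame will be incomplete
    chunks.append(chunk)
    received += len(chunk)
s.close()
x = np.frombuffer(b''.join(chunks), dtype=np.uint8).reshape(FRAME_SHAPE)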

How to receive a finite number of samples at a future time using UHD/GNURadio?

I'm using the GNU Radio Python interface to UHD, and I'm trying to set a specific time to start collecting samples, and to either collect a specific number of samples or stop collection at a specific time. Essentially, I am creating a timed snapshot of samples, similar to the C++ Ettus UHD example 'rx_timed_samples'.
I can get a flowgraph to start at a specific time, but I can't seem to get it to stop at a specific time (at least without causing overflows). I've also tried doing a finite acquisition, which works, but I can't get it to start at a specific time. So I'm kind of lost at what to do next.
Here is my try at the finite acquisition (seems to just ignore the start time and collects 0 samples):
num_samples = 1000
usrp = uhd.usrp_source(
    ",".join(("", "")),
    uhd.stream_args(
        cpu_format="fc32",
        channels=range(1),
    ),
)
...
usrp.set_start_time(absolute_start_time)
samples = usrp.finite_acquisition(num_samples)
I've also tried some combinations of the following, without success (TypeError: in method 'usrp_source_sptr_issue_stream_cmd', argument 2 of type '::uhd::stream_cmd_t const &'):
usrp.set_command_time(absolute_start_time)
usrp.issue_stream_cmd(uhd.stream_cmd.STREAM_MODE_NUM_SAMPS_AND_DONE)
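For what it's worth, that TypeError suggests issue_stream_cmd wants a full stream_cmd_t object rather than the bare mode constant. By analogy with the stop command constructed in the flowgraph attempt below, something like this might be closer (untested; the field names are taken from the UHD stream_cmd_t API):

stream_cmd = uhd.stream_cmd(uhd.stream_cmd.STREAM_MODE_NUM_SAMPS_AND_DONE)
stream_cmd.num_samps = num_samples
stream_cmd.stream_now = False
stream_cmd.time_spec = absolute_start_time
usrp.issue_stream_cmd(stream_cmd)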
I also tried the following in a flowgraph:
...
usrp = flowgraph.uhd_usrp_source_0
absolute_start_time = uhd.uhd_swig.time_spec_t(start_time)
usrp.set_start_time(absolute_start_time)
flowgraph.start()
stop_cmd = uhd.stream_cmd(uhd.stream_cmd.STREAM_MODE_STOP_CONTINUOUS)
absolute_stop_time = absolute_start_time + uhd.uhd_swig.time_spec_t(collection_time)
usrp.set_command_time(absolute_stop_time)
usrp.issue_stream_cmd(stop_cmd)
For whatever reason, the flowgraph version consistently generates overflows for anything greater than a 0.02s collection time.
I was running into a similar issue and solved it by using the head block.
Here's a simple example which saves 10,000 samples from a sine wave source then exits.
#!/usr/bin/env python
# Evan Widloski - 2017-09-03
# Logging test in gnuradio
from gnuradio import gr
from gnuradio import blocks
from gnuradio import analog

class top_block(gr.top_block):
    def __init__(self, output):
        gr.top_block.__init__(self)

        sample_rate = 32e3
        num_samples = 10000
        ampl = 1

        source = analog.sig_source_f(sample_rate, analog.GR_SIN_WAVE, 100, ampl)
        head = blocks.head(4, num_samples)
        sink = blocks.file_sink(4, output)

        self.connect(source, head)
        self.connect(head, sink)

if __name__ == '__main__':
    try:
        top_block('/tmp/out').run()
    except KeyboardInterrupt:
        pass
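Since blocks.head and blocks.file_sink are both given an item size of 4 bytes and the source is a float sine wave, the output file contains the 10,000 raw float32 samples, which can be verified with numpy:

import numpy as np

samples = np.fromfile('/tmp/out', dtype=np.float32)
print(samples.shape)  # expect (10000,)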