I successfully established a TCP connection between my PC and Raspberry Pi and can send strings over it. Now I want to send NumPy arrays, basically images, over the connection. This is my code for the server:
import socket
import pickle
import cv2
import numpy as np
s = socket.socket()
host = '192.168.137.171' #ip of raspberry pi
port = 12346
s.bind((host, port))
cam = cv2.VideoCapture(0)
s.listen(5)
while True:
    ret, frame = cam.read()
    #frame = pickle.dumps(frame)
    frame = np.ndarray.tobytes(frame)
    c, addr = s.accept()
    print('Got connection from', addr)
    c.send(frame)
    #c.send(bytes(frame,"utf-8"))
    c.close()
This transfers the NumPy array by converting it to bytes with np.ndarray.tobytes(). After executing that code, here is the client-side code, which runs on my PC:
import socket
import numpy as np
s = socket.socket()
host = '192.168.137.171'# ip of raspberry pi
port = 12346
s.connect((host, port))
while True:
    print(type(s.recv(1024)))
    x = np.frombuffer(s.recv(1024), dtype=np.uint8)
s.close()
Now I expected to decode the bytes back into a NumPy ndarray and receive an image, but when I use
cv2.imshow('x', x)
I just get a blank grey display. Where is it going wrong?
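One thing I am experimenting with: since TCP is a byte stream and s.recv(1024) returns at most 1024 bytes, while a single uncompressed 640x480 BGR frame is 921,600 bytes, np.frombuffer only ever sees the first kilobyte. Here is a sketch of a length-prefixed receive loop, assuming the server were changed to send struct.pack('>I', len(frame)) before the frame itself:
import socket
import struct
import numpy as np

def recv_exact(sock, n):
    # Keep reading until exactly n bytes have arrived.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed before full payload arrived')
        buf += chunk
    return buf

s = socket.socket()
s.connect(('192.168.137.171', 12346))
size = struct.unpack('>I', recv_exact(s, 4))[0]  # 4-byte length header
payload = recv_exact(s, size)                    # exactly one full frame
x = np.frombuffer(payload, dtype=np.uint8).reshape(480, 640, 3)  # assumed 640x480 BGR shape
s.close()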
I have 2 Raspberry Pi Picos running MicroPython. I am trying to use a 433 MHz RF transmitter on one Pico and a 433 MHz RF receiver on the other Pico. I am currently using UART to transmit and receive data:
# Receiver
import os
import machine
from time import sleep
uart = machine.UART(0, 4800)
print(uart)
led = machine.Pin(25, machine.Pin.OUT)
b = None
msg = ""
while True:
    if uart.any():
        b = uart.readline()
        try:
            msg = b.decode('utf-8')
            print(str(msg))
        except:
            print("Failed (" + str(type(b)) + "): " + str(b))
            pass
    led.toggle()
    sleep(1)
and
# Transmitter
import os
import machine
from time import sleep
uart = machine.UART(0, 4800)
print(uart)
led = machine.Pin(25, machine.Pin.OUT)
while True:
    sleep(5)
    led.toggle()
    uart.write('Hello, World!')
But the receiver prints garbage even when the transmitter is not transmitting. (I can't paste it here, as it messes with the formatting.)
As an experiment, I connected the TX pin of one Pico directly to the RX pin of the other Pico, and the data came through successfully. Therefore, I believe the transmitter and receiver are picking up interference from other signals.
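In the meantime, I am considering filtering the noise in software by framing each message and discarding anything that does not parse. A rough sketch (the sync word and additive checksum are arbitrary choices of mine, not a real packet-radio protocol):
SYNC = b'\xAA\x55'  # arbitrary sync word; any fixed byte pair would do

def make_packet(payload):
    # [sync][1-byte length][payload][1-byte additive checksum]
    checksum = (sum(payload) + len(payload)) & 0xFF
    return SYNC + bytes([len(payload)]) + payload + bytes([checksum])

def parse_packet(buf):
    # Returns (payload, remaining_buffer); payload is None until a
    # complete, checksum-valid packet is in the buffer.
    i = buf.find(SYNC)
    if i < 0:
        return None, buf[-1:]  # keep last byte in case the sync word is split
    buf = buf[i:]
    if len(buf) < 4:
        return None, buf       # header not complete yet
    length = buf[2]
    end = 3 + length + 1
    if len(buf) < end:
        return None, buf       # payload not complete yet
    payload = buf[3:3 + length]
    if (sum(payload) + length) & 0xFF != buf[end - 1]:
        return None, buf[2:]   # bad checksum: skip this sync word and rescan
    return payload, buf[end:]
The transmitter would then call uart.write(make_packet(b'Hello, World!')), and the receiver would accumulate uart.read() into a buffer and call parse_packet on it, so random noise between packets is simply discarded. But I would rather use something established: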
My Question:
Arduino has libraries for packet radio (see this). Is there anything similar in MicroPython or for the Raspberry Pi Pico?
Thanks.
My Jupyter Notebook server freezes when I call the style property of a large pandas DataFrame, as in this example:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(9999, 3), columns=list("ABC"))
df.style
When I replace 9999 with 999 in the code above, the issue doesn't occur. The same script also runs fine from the Anaconda prompt. What could be the cause of the freeze?
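A workaround I found in the meantime, consistent with 999 rows working, is to style only a slice; df.style builds a Styler object that renders every cell to HTML when displayed, which gets very expensive for large frames:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(9999, 3), columns=list("ABC"))
df.head(999).style  # style only the first 999 rows; styling all 9999 froze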
I made a pickled utf-8 dataframe on my local machine.
I can read this pickled data with read_pickle on my local machine.
However, I cannot read it on Google Colaboratory. It fails with:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
Postscript01
My code is very simple:
import pandas as pd
DF_OBJ = open('/content/drive/DF_OBJ')
DF = pd.read_pickle(DF_OBJ)
The first two lines run fine; the last line fails with the error above.
Postscript02
I managed to solve it myself:
import pandas as pd
import pickle5
DF_OBJ = open('OBJ','rb')
DF = pickle5.load(DF_OBJ)
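For completeness: two separate things changed here. The file is now opened in binary mode ('rb'), which is what avoids the UnicodeDecodeError from reading a binary pickle as utf-8 text, and pickle5 is the backport of pickle protocol 5 (PEP 574) for Python versions before 3.8. If the protocol itself was not the issue, the simplest variant is to let pandas open the file:
import pandas as pd
# pandas opens the path in binary mode itself, so no text-mode decoding occurs
DF = pd.read_pickle('/content/drive/DF_OBJ')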
I have developed a machine learning Python script (let's call it classify_obj, written with Python 3.6) that imports TensorFlow. It was developed initially for bulk analysis, but now I need to run it repeatedly on smaller datasets to cater for more real-time usage. I am doing this on Linux RHEL 7.
Process Flow:
1. Master tool (written in Java) calls classify_obj with the object input to categorize.
2. classify_obj generates the classification result as a CSV (takes about 7-10 s).
3. Master tool reads the result from step 2.
4. Master tool proceeds to do other logic.
5. Repeat step 1 with the next object input.
To break down the time taken, I switched off the main logic and did only the module imports, without performing any other action. The imports take about 4-5 s of the 7-10 s run time on the small dataset, and the classification takes about 2 s. I am also looking at ways to reduce the run time in other areas, but the bulk seems to come from the imports.
Import time: 4-6s
Classify time: 1s
Read, write and other logic time: 0.2s
What options are there to reduce the import time?
One idea I had was to turn classify_obj into a "stay alive" process, which the master tool stops after completing all its activity. The intent (not sure if this would be the case) is that all the required libraries are loaded once at process start, so when the master tool calls that process/service, it only incurs the classification time instead of importing the libraries repeatedly.
What do you think about this? Also, how can I set this up on Linux RHEL 7.4? Some reference links would be greatly appreciated.
Any other suggestions are welcome too.
Thanks and have a great day!
This is the solution I designed to achieve the above.
Reference: https://realpython.com/python-sockets/
I had to create 2 scripts:
1. Client Python script: passes the raw data to be classified to the server script using socket programming.
2. Server Python script: loads the Keras (TensorFlow) libraries and the model at launch, and stays alive until it receives a 'stop' request from the client (to exit the while loop). When the client script sends data, the server script processes it and returns an ok/not-ok output back to the client script.
In the end, the classification time is reduced to 0.1 - 0.3s.
Client Script
import socket
from argparse import ArgumentParser

def main():
    parser = ArgumentParser(description='XXXXX')
    parser.add_argument('-i', '--input', default='NA', help='Input txt file path')
    parser.add_argument('-o', '--output', default='NA', help='Output csv path with class')
    parser.add_argument('-stop', '--stop', default='no', help='Stop the server script')
    args = parser.parse_args()

    # Pack the arguments into one comma-separated message for the server.
    msg = args.input + ',' + args.output + ',' + args.stop

    HOST = '127.0.0.1'  # The server's hostname or IP address
    PORT = 65432        # The port used by the server

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))
    sock.send(msg.encode())
    data = sock.recv(1024)
    print('Received', data)

if __name__ == "__main__":
    main()
Server Script
import socket

def main():
    HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
    PORT = 65432        # Port to listen on (non-privileged ports are > 1023)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((HOST, PORT))
    sock.listen(5)

    stop_process = 'no'
    while stop_process == 'no':
        # print('Waiting for connection')
        conn, addr = sock.accept()
        try:
            # print('Connected by', addr)
            while True:
                data = conn.recv(1024)
                if data:
                    # process_input (defined elsewhere) handles the incoming
                    # data; it returns 'yes' when the client sent 'yes' for
                    # the stop argument, which ends the outer loop.
                    stop_process = process_input(data)
                    byte_reply = stop_process.encode()
                    conn.sendall(byte_reply)  # send reply back to client
                else:
                    break
            # print('Closing connection', addr)
        finally:
            conn.close()

if __name__ == "__main__":
    main()
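For reference, a hypothetical invocation, assuming the scripts are saved as server.py and client.py: start python server.py once, then run python client.py -i input.txt -o output.csv for each object, and python client.py -stop yes to shut the server down.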
As you can guess from the title, the kernel always dies when using pd.read_parquet().
I already tried it with different sizes, but it won't work.
Here is the code. (I am using Jupyter (without Anaconda, because it always takes too long to start) in Python 3.7 with an i5 & 16 GB RAM.)
from pathlib import PurePath, Path
import pandas as pd

data_dir = "../../../Adv_Fin_ML_Exercises-master/Adv_Fin_ML_Exercises-master/data"

# Writing the file works:
outfp = PurePath(data_dir + '/interim/IVE_tickbidask.parq')
#df = df.head(10)
df.to_parquet(outfp)

# Reading it back is what kills the kernel:
infp = PurePath(data_dir + '/interim/IVE_tickbidask.parq')
df = pd.read_parquet(data_dir + '/interim/IVE_tickbidask.parq')
cprint(df)
What can I do to make it work?
I had the same problem; adding engine='fastparquet' worked for me. Otherwise it defaults to engine='pyarrow', and that seems to make the kernel die.
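Applied to the snippet above (assuming fastparquet is installed, e.g. via pip install fastparquet), that would be:
import pandas as pd
df = pd.read_parquet(data_dir + '/interim/IVE_tickbidask.parq', engine='fastparquet')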