Argparse: How to set up arguments (for BrainFlow and OpenBCI)

Hello guys (or girls)!
I recently purchased an EEG headset, and in order to read data from Python I need to be able to receive information from a dongle. To do this I need to use BrainFlow, which seems to be the most suitable centralized multi-language solution. However, I'm not used to using argparse, whose role is to receive arguments (from a YAML file? a JS file? directly in the code?).
Anyway, can someone tell me how to provide arguments to argparse?
BTW, here is the code:
import argparse
import time

from brainflow.board_shim import BoardShim, BrainFlowInputParams


def main():
    BoardShim.enable_dev_board_logger()

    parser = argparse.ArgumentParser()
    # use docs to check which parameters are required for specific board, e.g. for Cyton - set serial port
    parser.add_argument('--timeout', type=int, help='timeout for device discovery or connection', required=False,
                        default=0)
    parser.add_argument('--ip-port', type=int, help='ip port', required=False, default=0)
    parser.add_argument('--ip-protocol', type=int, help='ip protocol, check IpProtocolType enum', required=False,
                        default=0)
    parser.add_argument('--ip-address', type=str, help='ip address', required=False, default='')
    parser.add_argument('--serial-port', type=str, help='serial port', required=False, default='')
    parser.add_argument('--mac-address', type=str, help='mac address', required=False, default='')
    parser.add_argument('--other-info', type=str, help='other info', required=False, default='')
    parser.add_argument('--streamer-params', type=str, help='streamer params', required=False, default='')
    parser.add_argument('--serial-number', type=str, help='serial number', required=False, default='0')
    parser.add_argument('--board-id', type=int, help='board id, check docs to get a list of supported boards',
                        required=True)
    parser.add_argument('--file', type=str, help='file', required=False, default='')
    args = parser.parse_args()

    params = BrainFlowInputParams()
    params.ip_port = args.ip_port
    params.serial_port = args.serial_port
    params.mac_address = args.mac_address
    params.other_info = args.other_info
    params.serial_number = args.serial_number
    params.ip_address = args.ip_address
    params.ip_protocol = args.ip_protocol
    params.timeout = args.timeout
    params.file = args.file

    board = BoardShim(args.board_id, params)
    board.prepare_session()
    # board.start_stream()  # use this for default options
    board.start_stream(45000, args.streamer_params)
    time.sleep(10)
    # data = board.get_current_board_data(256)  # get latest 256 packages or less, doesn't remove them from internal buffer
    data = board.get_board_data()  # get all data and remove it from internal buffer
    board.stop_stream()
    board.release_session()

    print(data)


if __name__ == "__main__":
    main()
Whenever I run the code in the cmd like this: python test.py,
it says that the --board-id argument is required.
Same when I do python test.py 0 or python test.py "0".
So my question is: how do I set up arguments for argparse?
Thank you in advance :)
Best, KL

I think it's a little bit late to answer, but as mentioned before, when you run your code, run it from the terminal with the needed arguments:
e.g. python script.py --board-id 0 --serial-port COM5
This assumes you're using an OpenBCI Cyton board, which has the id 0, connected on port COM5.
You can check what id your device has in the BrainFlow documentation.
To find out which port you're using, check your device's documentation. I'm familiar with OpenBCI; one easy way to find the port is through the OpenBCI GUI.
If you always use the same setup, you can set these values as defaults, so you don't have to specify them each time you run your code:
parser.add_argument('--serial-port', type=str, help='serial port', required=False, default='COM5')
To test the code, you can use a synthetic board, which has the id -1:
python script.py --board-id -1
(no need for a serial port to be specified here)
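If you run the script from an IDE rather than a terminal, you can also hand argparse the arguments programmatically, since parse_args accepts an explicit list of strings. A minimal sketch, reusing the synthetic board id from above:
# a minimal sketch: pass the arguments as a list instead of reading them
# from the command line (handy when running inside an IDE)
args = parser.parse_args(['--board-id', '-1'])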
Hope that helps.
Best of luck!

Related

How to access a SPECIFIC label in Tensorflow Lite object?

I got this code down here and I don't know how to access the "category_name" attribute. If it detects a person, I want it to say "Hello" in the command prompt.
I tried a LOT of different syntaxes and it didn't work. Down below is an image of how the "list" object looks when I do print(detection_result.detections). What we want is the "category_name". You can see in the code I tried an "if" that didn't help too much; since it's detecting 3 objects simultaneously, I guess the array has 3 elements, which themselves have multiple elements.
Is there a beginner-friendly answer to this?
Note: I have a Raspberry Pi 4 B.
[image of the printed detection_result output omitted]
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Main script to run the object detection routine."""
import argparse
import sys
import time
import cv2
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision
import utils
def run(model: str, camera_id: int, width: int, height: int, num_threads: int,
enable_edgetpu: bool) -> None:
"""Continuously run inference on images acquired from the camera.
Args:
model: Name of the TFLite object detection model.
camera_id: The camera id to be passed to OpenCV.
width: The width of the frame captured from the camera.
height: The height of the frame captured from the camera.
num_threads: The number of CPU threads to run the model.
enable_edgetpu: True/False whether the model is a EdgeTPU model.
"""
# Variables to calculate FPS
counter, fps = 0, 0
start_time = time.time()
# Start capturing video input from the camera
cap = cv2.VideoCapture(camera_id)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
# Visualization parameters
row_size = 20 # pixels
left_margin = 24 # pixels
text_color = (0, 0, 255) # red
font_size = 1
font_thickness = 1
fps_avg_frame_count = 10
# Initialize the object detection model
base_options = core.BaseOptions(
file_name=model, use_coral=enable_edgetpu, num_threads=num_threads)
detection_options = processor.DetectionOptions(
max_results=3, score_threshold=0.3)
options = vision.ObjectDetectorOptions(
base_options=base_options, detection_options=detection_options)
detector = vision.ObjectDetector.create_from_options(options)
# Continuously capture images from the camera and run inference
while cap.isOpened():
success, image = cap.read()
if not success:
sys.exit(
'ERROR: Unable to read from webcam. Please verify your webcam settings.'
)
counter += 1
image = cv2.flip(image, 1)
# Convert the image from BGR to RGB as required by the TFLite model.
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Create a TensorImage object from the RGB image.
input_tensor = vision.TensorImage.create_from_array(rgb_image)
# Run object detection estimation using the model.
detection_result = detector.detect(input_tensor)
print(detection_result)
#print(detection_result.detections.category_name[0])
#if detection_result[0].detections.categories.category_name)=='person':
#if getattr(detection_result, 'label') =='person':
# print("YES")
#print(detection_result)
#print(...)
#print(detection_result(detections=[]))
# Draw keypoints and edges on input image
image = utils.visualize(image, detection_result)
# Calculate the FPS
if counter % fps_avg_frame_count == 0:
end_time = time.time()
fps = fps_avg_frame_count / (end_time - start_time)
start_time = time.time()
# Show the FPS
fps_text = 'FPS = {:.1f}'.format(fps)
text_location = (left_margin, row_size)
cv2.putText(image, fps_text, text_location, cv2.FONT_HERSHEY_PLAIN,
font_size, text_color, font_thickness)
# Stop the program if the ESC key is pressed.
if cv2.waitKey(1) == 27:
break
cv2.imshow('object_detector', image)
cap.release()
cv2.destroyAllWindows()
def main():
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
'--model',
help='Path of the object detection model.',
required=False,
default='efficientdet_lite0.tflite')
parser.add_argument(
'--cameraId', help='Id of camera.', required=False, type=int, default=0)
parser.add_argument(
'--frameWidth',
help='Width of frame to capture from camera.',
required=False,
type=int,
default=640)
parser.add_argument(
'--frameHeight',
help='Height of frame to capture from camera.',
required=False,
type=int,
default=480)
parser.add_argument(
'--numThreads',
help='Number of CPU threads to run the model.',
required=False,
type=int,
default=4)
parser.add_argument(
'--enableEdgeTPU',
help='Whether to run the model on EdgeTPU.',
action='store_true',
required=False,
default=False)
args = parser.parse_args()
run(args.model, int(args.cameraId), args.frameWidth, args.frameHeight,
int(args.numThreads), bool(args.enableEdgeTPU))
if __name__ == '__main__':
  main()
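For reference, in the tflite_support task API used above, detection_result.detections is a list of Detection objects, and each detection in turn carries a list of Category objects whose category_name field holds the label. A minimal sketch of the lookup the question is after (it would replace the commented-out attempts inside the loop):
# Walk the nested structure that print(detection_result.detections) shows:
# each Detection holds a list of Category objects, and category_name is a
# field on each Category.
for detection in detection_result.detections:
  for category in detection.categories:
    if category.category_name == 'person':
      print("Hello")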

How to run tensorflow retrain.py from other script?

I am writing a script to automate training using the main() function in the TensorFlow retrain.py. That script is normally called from the shell with parsed arguments. In retrain.py:
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--image_dir',
        type=str,
        default='',
        help='Path to folders of labeled images.'
    )
    parser.add_argument(
        '--output_graph',
        type=str,
        default='/tmp/output_graph.pb',
        help='Where to save the trained graph.'
    )
    ...
    FLAGS, unparsed = parser.parse_known_args()
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
I understand that TensorFlow usually handles FLAGS as a global variable, but I don't understand how this variable is set as a global, since in the code snippet FLAGS should be an argparse.Namespace object.
However, I've tried to define the FLAGS variable manually in my own script:
from scripts.retrain import main
...

if __name__ == '__main__':
    tf.app.flags.DEFINE_string('summaries_dir', summaries_dir, 'Help summaries_dir.')
    tf.app.flags.DEFINE_string('image_dir', image_dir, 'Help image_dir.')
    ...
    FLAGS = tf.app.flags.FLAGS
    tf.app.run(main=main, argv=[sys.argv[0]] + ['python -m scripts.retrain.py'])
And I always get the error AttributeError: 'NoneType' object has no attribute 'summaries_dir'. How should I run retrain.py from my script?
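The AttributeError happens because retrain.py only assigns its module-level FLAGS inside its own __main__ block (FLAGS, unparsed = parser.parse_known_args()), so after a plain import retrain.FLAGS is still None. A minimal sketch of one workaround, assuming you import the module itself and fill in every flag retrain.py reads (the flag values below are hypothetical, and the real script defines many more):
import sys
import argparse

import tensorflow as tf
import scripts.retrain as retrain

if __name__ == '__main__':
    # retrain.py reads its options from the module-level FLAGS variable,
    # which is only populated in its own __main__ block. After importing
    # the module, assign retrain.FLAGS manually before calling main().
    # NOTE: illustrative only -- every flag retrain.py reads must be
    # present on this Namespace.
    retrain.FLAGS = argparse.Namespace(
        image_dir='path/to/labeled/images',   # hypothetical path
        output_graph='/tmp/output_graph.pb',
        summaries_dir='/tmp/retrain_logs',    # hypothetical path
    )
    tf.app.run(main=retrain.main, argv=[sys.argv[0]])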

How to use Tensorflow

I've built multiple DNNs and convNNs using TensorFlow, and I can now reach a good accuracy. Now my question is: how can I use these trained networks in a real example?
In the case of a convNN for computer vision, how can I use the model to classify a new picture? Can I generate something like convNN.exe that takes images as an input parameter and puts out the classification result?
Once you've trained the model, you should save it somewhere by adding code similar to
builder = saved_model_builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        'predict_images':
            prediction_signature,
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            classification_signature,
    },
    legacy_init_op=legacy_init_op)
builder.save()
Then, you can use Tensorflow serving to serve your model using a high-performance C++ server by running
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
    --port=9000 --model_name=mnist \
    --model_base_path=/tmp/mnist_model/
Modifying the code for your model, of course. You'll need to implement a client; there's an example for MNIST here. The guts of the client would be something like:
def do_inference(hostport, work_dir, concurrency, num_tests):
  """Tests PredictionService with concurrent requests.

  Args:
    hostport: Host:port address of the PredictionService.
    work_dir: The full path of working directory for test data set.
    concurrency: Maximum number of concurrent requests.
    num_tests: Number of test images to use.

  Returns:
    The classification error rate.

  Raises:
    IOError: An error occurred processing test data set.
  """
  test_data_set = mnist_input_data.read_data_sets(work_dir).test
  host, port = hostport.split(':')
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
  result_counter = _ResultCounter(num_tests, concurrency)
  for _ in range(num_tests):
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'mnist'
    request.model_spec.signature_name = 'predict_images'
    image, label = test_data_set.next_batch(1)
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image[0], shape=[1, image[0].size]))
    result_counter.throttle()
    result_future = stub.Predict.future(request, 5.0)  # 5 seconds
    result_future.add_done_callback(
        _create_rpc_callback(label[0], result_counter))
  return result_counter.get_error_rate()


def main(_):
  if FLAGS.num_tests > 10000:
    print('num_tests should not be greater than 10k')
    return
  if not FLAGS.server:
    print('please specify server host:port')
    return
  error_rate = do_inference(FLAGS.server, FLAGS.work_dir,
                            FLAGS.concurrency, FLAGS.num_tests)
  print('\nInference error rate: %s%%' % (error_rate * 100))


if __name__ == '__main__':
  tf.app.run()
This is in Python, of course, but there's no reason you couldn't use another language (e.g. Go or C++) if you wanted to create a binary executable.
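If you don't need a separate server process, you can also load the SavedModel written above straight back into a Python session and classify new images there. A minimal sketch using the TF1-era loader API; export_path is the directory used in the export code above, while the tensor names 'images:0' and 'scores:0' and the variable my_image_batch are hypothetical and depend on your own signature:
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
  # Restore the graph and variables written by SavedModelBuilder.
  tf.saved_model.loader.load(sess, [tag_constants.SERVING], export_path)
  # Feed a new image and fetch the classification scores. The tensor
  # names here are placeholders; look them up in your own signature_def.
  scores = sess.run('scores:0', feed_dict={'images:0': my_image_batch})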

How to make predictions on TensorFlow's Wide and Deep model loaded in TensorFlow Servings model_server

Can someone assist me in making predictions on TensorFlow's Wide and Deep Learning model loaded into TensorFlow Serving's model_server?
If anyone could point me to a resource or documentation on this, that would be really helpful.
You can possibly try to invoke the predict method of the estimator and set as_iterable to False to get an ndarray:
y = m.predict(input_fn=lambda: input_fn(df_test), as_iterable=False)
However, note the deprecation notice here for future compatibility.
If your model is exported using Estimator.export_savedmodel() and you successfully built TensorFlow Serving itself, you can do something like this:
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

tf.app.flags.DEFINE_string('server', 'localhost:9000', 'Server host:port.')
tf.app.flags.DEFINE_string('model', 'wide_and_deep', 'Model name.')
FLAGS = tf.app.flags.FLAGS

...

def main(_):
  host, port = FLAGS.server.split(':')
  # Set up a connection to the TF Model Server
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  # Create a request that will be sent for an inference
  request = predict_pb2.PredictRequest()
  request.model_spec.name = FLAGS.model
  request.model_spec.signature_name = 'serving_default'

  # A single tf.Example that will get serialized and turned into a TensorProto
  feature_dict = {'age': _float_feature(value=25),
                  'capital_gain': _float_feature(value=0),
                  'capital_loss': _float_feature(value=0),
                  'education': _bytes_feature(value='11th'.encode()),
                  'education_num': _float_feature(value=7),
                  'gender': _bytes_feature(value='Male'.encode()),
                  'hours_per_week': _float_feature(value=40),
                  'native_country': _bytes_feature(value='United-States'.encode()),
                  'occupation': _bytes_feature(value='Machine-op-inspct'.encode()),
                  'relationship': _bytes_feature(value='Own-child'.encode()),
                  'workclass': _bytes_feature(value='Private'.encode())}
  label = 0

  example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
  serialized = example.SerializeToString()

  request.inputs['inputs'].CopyFrom(
      tf.contrib.util.make_tensor_proto(serialized, shape=[1]))

  # Create a future result, and set 5 seconds timeout
  result_future = stub.Predict.future(request, 5.0)
  prediction = result_future.result().outputs['scores']

  print('True label: ' + str(label))
  print('Prediction: ' + str(np.argmax(prediction)))
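The _float_feature and _bytes_feature helpers used above aren't defined in the snippet; the usual definitions from the standard TensorFlow examples (reproduced here for completeness) look like this:
def _float_feature(value):
  return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))


def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))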
Here I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model with more details.
Hope it helps.