Python on M1 MBP trying to connect to USB devices - NoBackendError: No backend available

I am trying to connect to my USB devices from Python.
The end goal is a connection to my blood pressure monitor, but I am already failing to connect to ANY device.
My simple code - which I found here - is below. The product and vendor IDs I got from Apple Menu > About this Mac > System Information.
import usb.core
import usb.util
# find our device
dev = usb.core.find(idVendor=0x0781, idProduct=0x55a4)
# was it found?
if dev is None:
    raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
dev.set_configuration()
# get an endpoint instance
cfg = dev.get_active_configuration()
intf = cfg[(0, 0)]
ep = usb.util.find_descriptor(
    intf,
    # match the first OUT endpoint
    custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_OUT)
assert ep is not None
# write the data
ep.write('test')
But I always get NoBackendError: No backend available from dev = usb.core.find(idVendor=0x0781, idProduct=0x55a4).
For the connection I installed pyusb in my Python env and libusb via Homebrew on my Mac.
I have no clue how to get a connection, or even how to iterate over all connected devices and list their product and vendor IDs.

This error is to be expected if pyusb cannot find the dynamic libraries of libusb.
Installing libusb with Homebrew is not sufficient. Homebrew puts the relevant files in /opt/homebrew/Cellar/libusb/1.0.24/lib and creates symbolic links in /opt/homebrew/lib. But pyusb is not aware of these paths.
You have two main options:
Add /opt/homebrew/lib to the environment variable DYLD_LIBRARY_PATH. For a permanent setup, add it to ~/.zshenv:
export DYLD_LIBRARY_PATH="/opt/homebrew/lib:$DYLD_LIBRARY_PATH"
Create a symbolic link in your home directory. This takes advantage of the fact that ~/lib is a default fallback path for libraries:
ln -s /opt/homebrew/lib ~/lib
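A third option, if you would rather keep everything inside Python, is to hand pyusb the libusb path explicitly via usb.backend.libusb1.get_backend. The sketch below assumes the Homebrew dylib lives at /opt/homebrew/lib/libusb-1.0.dylib (adjust if your Homebrew prefix differs) and also shows how to iterate over all connected devices to list their vendor and product IDs, which covers the second part of the question:
import usb.core
import usb.backend.libusb1
# Point pyusb at Homebrew's libusb explicitly (path is an assumption;
# adjust it if your Homebrew prefix differs).
backend = usb.backend.libusb1.get_backend(
    find_library=lambda name: "/opt/homebrew/lib/libusb-1.0.dylib")
if backend is None:
    raise RuntimeError("libusb backend not found - check the dylib path")
# List every connected device's vendor/product ID.
for dev in usb.core.find(find_all=True, backend=backend):
    print(hex(dev.idVendor), hex(dev.idProduct))
# The original lookup, now with the explicit backend:
dev = usb.core.find(idVendor=0x0781, idProduct=0x55a4, backend=backend)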


Universal Sentence Encoder load error "Error: SavedModel file does not exist at..."

I installed Universal Sentence Encoder (TensorFlow 2) in two virtual environments with Anaconda. One is on Mac, the other is on Ubuntu.
Everything worked with the following:
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
Installed with:
conda create -n my-tf2-env python=3.6 tensorflow
conda init bash
conda activate my-tf2-env
conda install -c conda-forge tensorflow-hub
But, for an unknown reason, after 3 weeks the Mac environment stopped working; it fails at the following line with the error below:
model = hub.load(module_url)
Error: SavedModel file does not exist at: /var/folders/99/8rwn_9hx3jj9x3qz6yf0j2f00000gp/T/tfhub_modules/063d866c06683311b44b4992fd46003be952409c/{saved_model.pbtxt|saved_model.pb}
On the Mac, I recreated a new env with the same procedure, but I get the same error.
On Ubuntu, everything works well.
I want to know how to fix the Mac. Thank you for your help.
What I attempted on the Mac: I tried to download "https://tfhub.dev/google/universal-sentence-encoder/4" to the local drive so I could load it from disk in the future instead of from the web URL. That process was never finished and was not successful. I don't remember whether anything was downloaded to the Mac during this attempt that might have corrupted TensorFlow Hub's cache for my user account.
This error usually occurs when saved_model.pb is not present in the path specified in module_url.
For example, consider a folder /home/mothukuru/Downloads/Hub that contains the extracted model files, including saved_model.pb.
The code,
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
and
import tensorflow_hub as hub
module_url = "/home/mothukuru/Downloads/Hub"
model = hub.load(module_url)
work successfully.
But if saved_model.pb is not present in that folder, executing the code
import tensorflow_hub as hub
module_url = "/home/mothukuru/Downloads/Hub"
model = hub.load(module_url)
results in the below error,
OSError: SavedModel file does not exist at: /home/mothukuru/Downloads/Hub/{saved_model.pbtxt|saved_model.pb}
In your specific case, executing the code while the download of the model was still in progress might have resulted in the error.
As stated in the comment, deleting the downloaded file can fix the problem.
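If you are unsure which downloaded file to delete, the sketch below (assuming the default tfhub_modules cache under the system temp directory, which matches the path in your error message) removes the cached copy so the model is re-downloaded cleanly on the next hub.load:
import os
import shutil
import tempfile
# tensorflow_hub caches downloaded models under <temp dir>/tfhub_modules
# by default (the path in the error message above follows this pattern).
cache_dir = os.path.join(tempfile.gettempdir(), "tfhub_modules")
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)  # delete the possibly corrupted cache
    print("Removed", cache_dir)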
Please let me know if this answer has not resolved your issue and I will be happy to modify it accordingly.
TF published some additional guidelines on caching models, apparently in response to questions about this issue.
In my case, I was running this locally on a Mac via a Jupyter notebook.
I was not sure how to "delete the downloaded file" as suggested in the other answer, but I found that this resolved my issue:
https://www.tensorflow.org/hub/caching#reading_from_remote_storage
Reading from remote storage
Users can instruct the tensorflow_hub library to directly read models from remote storage (GCS) instead of downloading the models locally with
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "UNCOMPRESSED"
or by setting the command-line flag --tfhub_model_load_format to UNCOMPRESSED. This way, no caching directory is needed, which is especially helpful in environments that provide little disk space but a fast internet connection.
I ran that command in my notebook, and then the error was immediately resolved.
Note: I assume this is slower, especially if you do not have a fast internet connection, since what you are doing is telling the program to not locally cache (store) a copy and to just download it on demand.
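Put together, a minimal sketch of this workaround might look as follows; the environment variable is set before the model is loaded, and the last line is just an assumed sanity check:
import os
# Read the model directly from remote storage instead of caching a
# compressed copy locally.
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "UNCOMPRESSED"
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
embeddings = model(["Hello world"])  # assumed sanity check: embed one sentence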

where does docker upload server.py file?

Setting: lots of mp3 recordings of customer support conversations somewhere in a DB. Each mp3 recording has 2 channels: one is the customer rep, the other is the customer's voice.
I need to extract an embedding (tensor) of the customer's voice. It's a 3-step process:
get the channel, cut 10 seconds, convert to an embedding. I have a function for each of the 3 steps.
The embedding is a vector tensor:
tensor([[0.6540e+00, 0.8760e+00, 0.898e+00, 0.8789e+00, 0.1000e+00, 5.3733e+00]])
I tested the get-embedding function with Postman.
I want to build a REST API that connects on one endpoint to the DB of mp3 files and outputs embeddings to another DB.
I need to clarify an important point about Docker.
When I run "python server.py", Flask makes it available on my local PC at 127.0.0.1:9090:
from flask import Flask, jsonify
app = Flask(__name__)
def get_embedding(file):
    # some code
    ...
@app.route('/health')
def check():
    return jsonify({'response': 'OK!'})
@app.route('/get_embedding')
def show_embedding():
    return get_embedding(file1)
if __name__ == '__main__':
    app.run(debug=True, port=9090)
When I do it with Docker, where do the server and files go? Where does it become available online? Does Docker upload all the files to some default Docker cloud?
You need to write a Dockerfile to build your Docker image, then run a container from that image exposing a port, and then you can access the application at machineIP:PORT.
Below is the example
Dockerfile
#FROM tells Docker which image you base your image on (in the example, Python 3).
FROM python:3
#WORKDIR tells Docker which directory the container has to work in
WORKDIR /usr/app
# COPY files from your host to the image working directory
COPY my_script.py .
#RUN tells Docker which additional commands to execute.
RUN pip install pystrich
CMD [ "python", "./my_script.py" ]
Ref:- https://docs.docker.com/engine/reference/builder/
And then build the image,
docker build -t server .
Ref:- https://docs.docker.com/engine/reference/commandline/build/
Once the image is built, start a container and expose the port through which you can access your application.
E.g.
docker run -p 9090:9090 server
-p Publish a container's port(s) to the host
And access your application at localhost:9090, 127.0.0.1:9090, or machineIP:ExposedPort.
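One detail worth adding as an assumption on top of this answer: inside a container, Flask must bind to 0.0.0.0 rather than the default 127.0.0.1, otherwise the port published with -p will not be reachable from the host. A minimal sketch of the entry point in server.py:
if __name__ == '__main__':
    # Bind to all interfaces so the port published with `docker run -p 9090:9090`
    # is reachable from outside the container.
    app.run(host='0.0.0.0', port=9090, debug=True)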

Can we single-step QEMU using libvirt?

I am developing peripheral hardware and want to use QEMU to test it.
The plan is to run the device driver in QEMU and use libvirt (or something else?) to interface the VM with a Python-based simulation model of the peripheral.
I am aware that QEMU can be single-stepped via GDB, but I am looking for a Python approach to do the following:
1. Wait for a write to a specific memory location.
2. Suspend QEMU.
3. Run some background task on the host.
4. Run QEMU for N cycles.
5. Write to a memory location.
6. Continue.
Is this possible with libvirt or any other toolkit?
I needed to do something similar, and came across two approaches:
1. Run Python in GDB, using a Python script of the commands
2. Use a Python API to GDB like pygdbmi
The latter ended up being more flexible, so I'll explain those steps here.
Configure qemu with debugging information:
./configure --enable-debug
Build qemu and invoke it halted, with debug hooks:
make
sudo make install
qemu-system-x86_64 -S -s
Now, use a Python script to attach to and interact with qemu via pygdbmi:
from pygdbmi.gdbcontroller import GdbController
from pprint import pprint
# Start gdb process
gdbmi = GdbController()
print(gdbmi.get_subprocess_cmd()) # print actual command run as subprocess
response = gdbmi.write('target remote localhost:1234') # attach to QEMU GDB socket
pprint(response)
response = gdbmi.write('-break-insert main') # machine interface (MI) commands start with a '-'
response = gdbmi.write('break main') # normal gdb commands work too, but the return value is slightly different
response = gdbmi.write('-exec-run')
response = gdbmi.write('run')
response = gdbmi.write('-exec-next', timeout_sec=0.1) # the wait time can be modified from the default of 1 second
response = gdbmi.write('next')
response = gdbmi.write('next', raise_error_on_timeout=False)
response = gdbmi.write('next', raise_error_on_timeout=True, timeout_sec=0.01)
response = gdbmi.write('-exec-continue')
response = gdbmi.send_signal_to_gdb('SIGKILL') # name of signal is okay
response = gdbmi.send_signal_to_gdb(2) # value of signal is okay too
response = gdbmi.interrupt_gdb() # sends SIGINT to gdb
response = gdbmi.write('si 20') # step 20 instructions
response = gdbmi.write('continue')
response = gdbmi.exit()
If you have trouble with kernel symbols, you might also need to issue a command 'file myKernel' to load the symbol table from that file, assuming it was compiled with debugging information.
For reference, the '-s' option makes QEMU listen for a GDB connection at localhost:1234, so the first command you issue must direct gdb to look there:
gdbmi.write('target remote localhost:1234')
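To map this onto the exact sequence from the question (wait for a write to a memory location, suspend, do host-side work, step N instructions, write to memory, continue), a rough sketch using plain GDB commands through pygdbmi could look like the following; the address 0x1000, the value 0xdeadbeef, and run_peripheral_model are hypothetical placeholders, and note that GDB steps instructions rather than clock cycles:
from pygdbmi.gdbcontroller import GdbController

gdbmi = GdbController()
gdbmi.write('target remote localhost:1234')

# 1. Wait for a write to a specific memory location (hardware watchpoint).
gdbmi.write('watch *(unsigned int *)0x1000')            # placeholder address
gdbmi.write('continue', raise_error_on_timeout=False)   # stops when the watchpoint hits

# 2./3. QEMU is now suspended; run any host-side simulation work here.
def run_peripheral_model():
    pass  # placeholder for the Python model of the peripheral
run_peripheral_model()

# 4. Run the guest for N instructions.
gdbmi.write('stepi 20')

# 5. Write a value back into guest memory (placeholder value).
gdbmi.write('set {unsigned int}0x1000 = 0xdeadbeef')

# 6. Continue execution.
gdbmi.write('continue', raise_error_on_timeout=False)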

Install mozroot-certdata package on a read only root file system

I have an established Yocto build which I'm now trying to switch over to having a read-only root file system (e.g. EXTRA_IMAGE_FEATURES += "read-only-rootfs").
However, I'm running into an issue with a recipe in the meta-mono layer: mozroot-certdata. I see the culprit is the pkg_postinst script (http://git.yoctoproject.org/cgit/cgit.cgi/meta-mono/tree/recipes-mono/mozroot-certdata/mozroot-certdata_1.0.0.bb), which needs to modify the root file system on first boot, which the build system correctly flags as impossible with a read-only root file system:
ERROR: The following packages could not be configured offline and rootfs is read-only: ['mozroot-certdata']
My question is: is there a way to get these mozroot certs installed and configured with mono during the build process, such that the root file system does not need to be modified at boot/run time?
Well, I had a brief look at this late this summer, as I'm also using a read-only rootfs. The problem is that mozroot.exe is hardcoded to write into /usr/share/.mono/certs and does not respect your sysroot. You could probably hack mozroot.exe to actually write the imported files into the sysroot, though my time limit didn't allow me to try this (and neither have I ever looked into mono at all...).
My solution was instead to do the import at every boot. (It could also be done only once, but then the issue of updates comes along.) To achieve this I made a bind mount on the directory where mozroot.exe wants to write the certdata.
Details of my solution
Add a file volatile-binds.bbappend with the following contents:
VOLATILE_BINDS += "\
/tmp/mono-certs /usr/share/.mono/certs \n\
"
That will make a bind mount from /tmp/mono-certs to /usr/share/.mono/certs, thus you'll be able to import the certs.
Then I added a service file and a mozroot-certdata_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
DEPENDS += "mono-native"
SRC_URI += "file://mozroot-certdata.service \
"
inherit systemd
SYSTEMD_SERVICE_${PN} = "mozroot-certdata.service"
do_install_append() {
mkdir -p ${D}${datadir}/.mono/certs
mkdir -p ${D}${systemd_system_unitdir}
install -m 440 ${WORKDIR}/mozroot-certdata.service ${D}${systemd_system_unitdir}/mozroot-certdata.service
}
FILES_${PN} += "${datadir}"
# Empty the postinstallation script, as we can import the cert offline.
pkg_postinst_${PN} () {
# mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask remove --file $D/${sysconfdir}/ssl/certdata.txt
}
The service file mozroot-certdata.service:
[Unit]
Description=Import certificates to Mono
After=tmp-mono-certs.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
[Install]
WantedBy=multi-user.target
is there a way to get these mozroot certs installed and configured with mono during the build process
Yes, but it requires the mozroots binary to be executable at rootfs creation time. See Post-Installation Scripts in the documentation.
The 'else' branch in pkg_postinst is what gets executed at that time, and if it succeeds, then the delayed postinst is not needed (and you shouldn't get a build error). The mono-native recipe already exists, so you should be able to depend on that and fix the else branch in the pkg_postinst function so it finds native mono & mozroots.exe and writes to the correct place under $D.
As Anders mentioned, this alone is not enough if you care about package-based upgrades.

Openshift for Play: The remote end hung up unexpectedly

I am trying to launch a Play project on OpenShift. The first phase, which was nearly 15% of the project, was successfully completed and uploaded, so I guess the initial configuration was okay. Now that I have completed nearly all of the rest of the project, every time I try to push it over ssh, the remote server hangs up after a certain time with the following message:
remote: [info] Done packaging.
remote: model contains 69 documentable templates
Connection to blogofprime-thatsqt.rhcloud.com closed by remote host.
fatal: The remote end hung up unexpectedly
error: error in sideband demultiplexer
To ssh://5455ef32e0b8cd379e000293#blogofprime-thatsqt.rhcloud.com/~/git/blogofprime.git/
+ 557ec12...4034b71 HEAD -> master (forced update)
Every time, after a certain step, the remote server hangs up.
My openshift.conf file:
# This is the main configuration file for the application.
# ~~~~~
include "application"
# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
application.secret="V0sLX<RAciXw_>7^O8y=I4BRW/M4#vhVhF=H44`lMfgAV2hs^Pp?tsfroKt1J3eX"
# The application languages
# ~~~~~
application.langs="en"
# Database configuration
# ~~~~~
# You can declare as many datasources as you want.
# By convention, the default datasource is named `default`
#
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://"${OPENSHIFT_MYSQL_DB_HOST}":"${OPENSHIFT_MYSQL_DB_PORT}"/"${OPENSHIFT_APP_NAME}
db.default.user=${OPENSHIFT_MYSQL_DB_USERNAME}
db.default.password=${OPENSHIFT_MYSQL_DB_PASSWORD}
# Evolutions
# ~~~~~
# You can disable evolutions if needed
# evolutionplugin=disabled
# Ebean configuration
# ~~~~~
# You can declare as many Ebean servers as you want.
# By convention, the default server is named `default`
#
ebean.default="models.*"
# Logger
# ~~~~~
# You can also configure logback (http://logback.qos.ch/), by providing a logger.xml file in the conf directory .
# Root logger:
logger.root=ERROR
# Logger used by the framework:
logger.play=INFO
# Logger provided to your application:
logger.application=DEBUG
My build.sbt file:
name := "thatsqt"
version := "1.0-SNAPSHOT"
scalaVersion := "2.11.2" // or "2.10.4"
libraryDependencies ++= Seq(
// Select Play modules
jdbc, // The JDBC connection pool and the play.api.db API
//anorm, // Scala RDBMS Library
javaJdbc, // Java database API
javaEbean, // Java Ebean plugin
javaJpa, // Java JPA plugin
filters, // A set of built-in filters
javaCore, // The core Java API
// WebJars pull in client-side web libraries
"org.webjars" %% "webjars-play" % "2.3.0",
"com.typesafe.play" %% "play-slick" % "0.8.0",
// Add your own project dependencies in the form:
// "group" % "artifact" % "version"
"mysql" % "mysql-connector-java" % "5.1.27"
)
fork in Test := false
lazy val root = (project in file(".")).enablePlugins(PlayJava)
EclipseKeys.withSource := true
Platform:
I am using Mac OS X and Typesafe Activator for the Play Framework.
What I tried:
I tried to unset TMOUT on both the server and the client. At this point I am not sure whether this is a timeout problem or something else.
My project Link: https://github.com/magurmach/PlayOpenshiftThatsQt
How can I solve this problem?
If the problem started occurring recently, it may be caused by an exceeded quota.
It's very common to run out of space using 1 GB small gears.
Try the rhc app-tidy command:
rhc app-tidy <app>
Does your project include large binary files? This can cause git to take up a lot of RAM server-side, and a small Openshift gear will kill things that take too much RAM. The solution is to ssh into your Openshift box (using "rhc ssh" or equivalent) and tell the server-side git to limit its RAM use:
cd git/*.git
git config pack.windowMemory "25m"
git config pack.packSizeLimit "25m"
git config pack.threads "1"
Why they didn't do that by default, I have no idea.
Also, next time you want to put large binary files into your project, you don't have to check them into the git repository (which will take up space tracking their entire history): you could just instruct your app to download them from another server as needed (the second server needs only the ability to host static files).