I'm running a Raspberry Pi 4B Buster environment with all recommended OpenVINO dependencies.
I'm trying to put together an object detection pipeline with multiple object detectors and multiple inference requests per object detector.
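For reference, the setup looks roughly like this (a simplified sketch; the model names and request count are placeholders, not my real configuration):

```python
# Simplified sketch of the setup: several detectors on the MYRIAD device,
# each with several inference requests. Model paths are placeholders.
from openvino.inference_engine import IECore

ie = IECore()
detectors = []
for xml in ["detector_a.xml", "detector_b.xml"]:  # placeholder model files
    net = ie.read_network(model=xml, weights=xml.replace(".xml", ".bin"))
    exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=4)
    detectors.append(exec_net)

# Frames are dispatched to free requests, roughly:
#   request = exec_net.requests[i]
#   request.async_infer(inputs={input_blob: frame})
#   ...
#   request.wait()
```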
The problem occurs after some time (anywhere from 10 minutes to 10 hours): the program doesn't crash, but I can see in the pipeline that inference stops happening, and running lsusb shows the Myriad device is missing.
My first thought was a temperature issue, but the device never goes above 42.5 °C.
In the dmesg output there's no sign of an undervoltage issue.
I need help investigating the problem. The things I've tried so far:
Different Python versions (3.6, 3.7, 3.8)
Different OpenVINO versions (2021.3, 2021.4)
A different Raspberry Pi device
The environment consists of:
| Component | Version |
|-|-|
| OS | Raspbian Buster 10 |
| Python | 3.7.3 |
| OpenVINO | 2021.4.2-3974-e2a469a3450-releases/2021/4 |
The OpenVINO Action Recognition Python Demo is the best reference for you, since it uses multiple models and runs inference on video input.
Each step in this demo implements the PipelineSteps interface by deriving a class from the PipelineSteps base class. It has been designed to function properly with the supported hardware (CPU, GPU, HDDL, or MYRIAD).
Your issue could be caused by your modified code (maybe an error handler is required, some sequence of operations is improper, etc.) or by the NCS2 device itself.
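For example, a guard around the inference call will at least surface the failure instead of letting the pipeline stall silently. This is a rough sketch only (the function name is illustrative, not taken from the demo):

```python
# Illustrative sketch: wrap the inference call so a vanished device surfaces
# as a logged error rather than a silently stalled pipeline.
import logging

def infer_safely(request, inputs):
    try:
        request.infer(inputs)           # synchronous call, for simplicity
        return request.output_blobs     # 2021.x API: dict of output blobs
    except Exception as exc:            # IE raises RuntimeError if the device drops
        logging.error("Inference failed, device may be gone: %s", exc)
        raise                           # let the caller retry, reload, or stop
```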
Try to confirm the device availability (a quick way to check this from Python is sketched after this list):
NCS2 is detected after rebooting and running lsusb: the NCS2 is in good condition. It's just that sometimes some underlying services are halted or left incomplete, which results in this issue.
NCS2 is not detected after rebooting and running lsusb: it could be a device malfunction, and if it is still within the warranty period, you may ask for a replacement.
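A minimal sketch of the availability check (assumes the standard OpenVINO 2021.x Python bindings are installed):

```python
# List the devices the Inference Engine can currently see; a healthy setup
# should include 'MYRIAD' alongside 'CPU'.
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU', 'MYRIAD'] when the NCS2 is present
```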
I am getting started on using tensorflow-quantum for some QML circuit simulations. I have everything configured correctly for TensorFlow with GPU, and when I run print(tf.config.list_physical_devices('GPU')), it reports the presence of my GPU.
However, I've done some Googling and come across a few things suggesting that tensorflow-quantum doesn't actually support GPU acceleration for simulations (e.g. MichaelBroughton's first reply here, and this issue, which is still open), but it's unclear to me how up to date this state of affairs is. I can't find anything about adding GPU support in the release notes.
Does tensorflow-quantum currently support GPU? If so, how do I (a) make it use my GPU for simulations and (b) verify that it is doing so?
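For reference, here is the minimal check I've been using to see where ops actually get placed (standard TensorFlow device-placement logging; the circuit itself is just a toy):

```python
# Toy check: enable device-placement logging, then run a tiny TFQ simulation
# and read the log to see whether its ops land on GPU or CPU.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

tf.debugging.set_log_device_placement(True)  # logs the device of each op

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

expectation = tfq.layers.Expectation()
result = expectation(circuit,
                     symbol_names=[theta],
                     symbol_values=tf.constant([[0.5]]),
                     operators=cirq.Z(qubit))
print(result)
```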
When playing certain games or viewing certain websites, my computer will suddenly crash and my monitor will display "HDMI no signal"; the computer cannot be restarted without unplugging it from the wall. Viewing the crash report, I see event 10016, which I think is related to permissions, but I'm a moron. Any and all solutions are greatly appreciated. The relevant components are as follows:
Graphics Card: RTX 2080
Power supply: EVGA supernova 1000g2
Storage: SanDisk 500 GB
CPU: Ryzen 2700X
Monitor: Both HP EliteDisplay E222 and another HP monitor
Since you haven't included the crash report in your question, I can only suspect your problem is rooted in one of these:
A bug in the display driver and/or DirectX installation.
Proposed solution: obtain the latest driver for your RTX 2080, then do a 2D and 3D test run afterwards to ensure everything is working properly.
A fan or cooling issue. Some games force your hardware to work harder, especially over continuous use. Check your fans and cooling to ensure they are moving and cooling as fast as they should, and install temperature monitoring software if you need to be extra sure.
Hope those help m8
I have a micro:bit v1. A few days ago I was unable to find its Bluetooth signal, so I followed the instructions on microbit.org to upgrade its firmware. But after I copied the firmware file onto it, a FAIL.TXT file showed up in the MAINTENANCE disk. What's more, since then, every time I connect the micro:bit to my computer it enters this MAINTENANCE disk, whether or not I press Reset during the connection. I've tried different firmware versions for the micro:bit v1, but none of them succeeded.
The details of this micro:bit are shown below.
# DAPLink Firmware - see https://mbed.com/daplink
Unique ID: 00000000066aff565357825187123855a5a5a5a597969908
HIC ID: 97969908
Auto Reset: 0
Automation allowed: 1
Overflow detection: 0
Daplink Mode: Bootloader
Bootloader Version: 0254
Git SHA: db711ec68a861b9d9b0d7a7a82071796ec117687
Local Mods: 1
USB Interfaces: MSD
Bootloader CRC: 0x0697f838
Interface CRC: 0x4915d882
Remount count: 1
URL: https://mbed.com/daplink
The contents of FAIL.TXT are shown below.
error: In application programming aborted due to an out of bounds address.
type: interface
So I am wondering: what could have caused this upgrade failure, and how can I fix my micro:bit?
I am 7 months late, I know, and I made an account just to answer here. I see some strange things in your details that I haven't seen before:
You're missing the part of the UID that specifies the version of the Micro:Bit (the first 4 digits are supposed to be 9900 for v1.3 and 9901 for v1.5; I'm not sure if it's different if you have v1.0).
Interface version is missing from the details.
Local mods being set to 1 means you have unsaved local changes to the Micro:Bit.
Remount count being set to 1 means it has failed to flash the previous hex you tried to flash to it. Not a good sign, but it means you only tried (or it only counted) once to reflash the firmware.
Try flashing an erase hex to the Micro:Bit, then an up-to-date firmware hex, and lastly the out-of-box (OOB) hex. This worked for me when I experienced a similar issue.
I hope any of this can help you, or any others that stumble upon this post in the future.
Please reach out if you still need help!
This micro:bit has the wrong HIC ID (97969908 instead of 97969901). It doesn't have the original bootloader, which is why you are not able to flash the original interface firmware. According to DAPLink, 97969908 is the STM32F103XB bootloader. I think there are two possible solutions: the first is to flash the original bootloader and then the original interface firmware; the second is to work with the DAPLink source files to compile a new interface firmware that works with the 97969908 bootloader.
See here https://github.com/ARMmbed/DAPLink/discussions/956
I am new to Phoronix Test Suite and ran my first test with phoronix-test-suite benchmark testname. This ran the test for one of my GPUs but not the other. How can I choose which GPU to use for the benchmark?
I've searched Google and skimmed the documentation for an answer but found nothing.
EDIT
The test I am trying to run is here, using
phoronix-test-suite benchmark 2102179-HA-NVIDIAGEF76
I've also tried using the method described here but to no avail.
I am using Phoronix Test Suite v10.2.2 (Harstad) on Ubuntu 20.04.2 LTS.
UPDATE
According to this issue, phoronix-test-suite always chooses the default GPU on a given system.
PTS currently sticks to using the default GPU configured by your system whether it be configured via PRIME handling or other multi-GPU setup configurations. Basically, it doesn't override your default GPU choice(s) or interfere beyond simply reporting the enumerated GPUs.
So the official way to change the GPU utilized by a Phoronix benchmark is to change the default GPU on the broader system. I don't understand what determines which GPU is the default or how to change it. The above quote indicates that the default GPU might be changed using PRIME.
When running nvidia-settings the following message is printed.
** (nvidia-settings:9809): WARNING **: 15:46:41.950: PRIME: Failed to execute child process “/usr/bin/prime-supported” (No such file or directory)
** Message: 15:46:41.950: PRIME: is it supported? no
So it seems that whatever PRIME is, it's not part of my system.
As you were looking to configure an Nvidia GPU, the logic is slightly different:
Looking at the source, PTS seems to always use the first GPU it finds in the nvidia-settings --query PCIID output.
This theory has been confirmed by the PTS lead developer on GitHub, so unfortunately there's no switch in PTS that would achieve this.
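You can reproduce the query PTS parses yourself; the first GPU listed is the one it will use. A rough sketch (assumes nvidia-settings is installed and an X session is available):

```python
# Print the output PTS parses to pick its GPU; the first GPU listed wins.
import subprocess

result = subprocess.run(["nvidia-settings", "--query", "PCIID"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```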
On Windows, if you are using an Nvidia GPU, you can do this from the Nvidia Control Panel:
Go to Manage 3D Settings.
Go to "Program Settings".
Select your app (in this case the Phoronix Test Suite benchmark) and select the high-performance Nvidia GPU.
Now run the benchmark test.
For more help, visit: https://www.phoronix-test-suite.com/documentation/phoronix-test-suite.pdf
I have a Lindy IrDA USB bridge attached to my Xperia Neo (CyanogenMod 9). I have changed the features to support host mode, etc. All looks fine in the code: I detect the device, and I can see the interface and the two endpoints (one in, one out). However, as soon as I try to claimInterface it fails, regardless of whether I attempt a force claim or not.
There appears to be no simple way to find out why the claim fails, though strace gives me a clue: the ioctl call for the interface claim fails with a "device not found" error.
Ignoring the failure gets me only as far as the request which then fails to queue or send.
The questions I have are:
What exactly is missing that is resulting in the claim failing?
Is there a way around this that ideally would not require root?
Is there a way to override the claim somehow?
OK, so I appear to have fallen into answering my own question here, but I see that a number of people are getting confused over the apparent support for USB host and the "odd" behaviours that can be observed, so hopefully this answer will help some of you out.
I posed three questions; I have a definitive answer for 1 and 3, but I am less certain about the other at this stage.
Question 1: What exactly is missing, and why does this result in a bad claim?
The problem is that the device, a Lindy IrDA dongle, is detected by the host (my Xperia Neo handset), but the only configuration it supports demands more power than the handset can supply.
Oddly, this does not prevent either a) the device from being detected and enumerated by the Android libraries, or b) the device from appearing to be powered (red LED glowing).
There is no report from any system libraries at the time of the failing claimInterface() call; however, running dmesg | tail when the device is attached gave the necessary insight.
dmesg | tail
<3>usb 1-1: device v066f p4200 is not supported
<6>usb 1-1: New USB device found, idVendor=066f, idProduct=4200
<6>usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
<6>usb 1-1: Product: IrDA/USB Bridge
<6>usb 1-1: Manufacturer: Sigmatel Inc
<6>usb 1-1: rejected 1 configuration due to insufficient available bus power
<4>usb 1-1: no configuration chosen from 1 choice
Further investigation showed that this little device was claiming a requirement of 440 mA, which seems rather a lot, but there seems to be little that can be done about it.
Question 2: Can anything be done to work around this without requiring root?
It seems not. In theory I could provide external power to the device through a USB Y-cable or similar hackery, but I don't believe that would change the underlying problem, which is that the handset refuses the demand. Even with root, it is not clear that anything can be done to override the power profile.
Question 3: Is there a way to override the claimInterface() failure and force the communications?
This is a blunt no. The device has simply not been created by the kernel, so there is nothing there to override in the first place. This does make it somewhat puzzling as to why the Android libraries still offer it up.
As to Question 2 and power demands...
Most Android devices I have run across that support host/OTG will only supply a maximum current draw of around 100 mA. Could you force it to work via some kernel source hackery? Likely, but you would run a very real risk of burning up the USB support circuitry in your Android device, because the boost converter that such devices use to power the external USB device only physically supports that maximum 100 mA draw.
Could you use a Y-cable to supply the needed current externally? Yes, I have done this before on a device that had no boost converter, but you would then need a workaround in the kernel to tell it that you have such external power and that it is now okay to power the device up.