Custom feedback based on objects detected by YOLOv7 in Colab - google-colaboratory

I want to develop a project using YOLOv7 in Google Colab in which the user is given feedback about the detected object (perhaps simply audio pronouncing the name of the object, or some spoken instructions about it). How do I trigger this feedback? Which Python function should I use to do this? And how does the program read the label of the detected object? I don't have a solid programming background and I am a self-learner. Many thanks for your help.
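A minimal sketch of the audio-feedback part, assuming the detected class names are already available as Python strings (for example by parsing the label .txt files that YOLOv7's detect.py writes when run with --save-txt); gTTS and IPython.display are used here only as one way to play audio in Colab:
from gtts import gTTS            # Google text-to-speech (pip install gTTS)
from IPython.display import Audio, display

# Hypothetical input: class names of the objects detected in one frame,
# e.g. obtained by parsing the files written by detect.py --save-txt
detected_labels = ["person", "chair"]

def speak_feedback(labels):
    # Generate and play a short spoken message naming the detected objects
    if not labels:
        return
    message = "I can see " + ", ".join(labels)
    gTTS(message, lang="en").save("feedback.mp3")   # write the audio file
    display(Audio("feedback.mp3", autoplay=True))   # play it inline in Colab

speak_feedback(detected_labels)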

Related

How can I use vulkansink in imx8mq?

I'm trying to play a 4K60p video on an imx8mq.
I used glimagesink because 4K60p playback performance had to be maintained even when the video is cropped and rotated, but the frame rate dropped to 20-30 fps. waylandsink cannot scale the cropped video to the full screen without videoconvert, which uses the CPU.
Looking at the post in the link below, the writer uses an imx8mq like me, and the available sink types include vulkansink. But when I build the Yocto project, vulkansink is missing by default.
https://community.nxp.com/t5/i-MX-Processors/overlaysink-mssing-on-MCIMX8M-EVK-with-L4-9-88-2-0-0/m-...
I tried to enable the vulkan plugin by modifying the imx gstreamer-plugins-bad recipe file, but the bitbake build fails with an error saying there is no glslc program.
How can I use vulkansink in imx8mq?

Program won't start before stopping the program first?

I'm a newbie using LabVIEW for my project. I'm developing a program that gathers data from sensors attached to a DAQmx board and also from an STS-VIS Ocean Optics spectrometer. At first I combined both devices in one loop inside the same flat structure, but I got an error saying: "The application is not able to keep up with the hardware acquisition." I could not get the data to show on the graph for both devices, but it was fine if I ran them separately. I found a suggestion saying that I need to separate the two devices into different while loops because they may have different buffer sizes (?). I did that and it worked: all the sensors now show on their own graphs. But the weird thing is, on the first run I need to stop the program and then run it a second time before the graphs show in the application. Can anyone tell me what I did wrong and give me a solution? Due to project rules I cannot share my VI here publicly, but if anyone is interested in helping, I'd be happy to share it privately. Thank you.
You are doing the right thing, but you have to understand how data acquisition works in LabVIEW and in the hardware.
You can increase the hardware buffer programmatically using a property node, or try to read as fast as possible; then you don't need two separate loops.
I currently work with an NI DAQmx device too and became desperate using LabVIEW because there is no good documentation and there are few examples. Then I started to use Python, which I found more intuitive. The only disadvantage is that the user interface is not so easily generated, but for this one can use Qt Designer (an open-source program available online).
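For reference, a minimal sketch of continuous analog input with the nidaqmx Python package; the channel name "Dev1/ai0" and the rates are placeholders for your own hardware:
import nidaqmx
from nidaqmx.constants import AcquisitionType

# "Dev1/ai0" is a placeholder; use the device/channel name shown in NI MAX
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # Continuous acquisition with a generous buffer so the read loop can keep up
    task.timing.cfg_samp_clk_timing(
        rate=1000,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=10000,  # buffer size per channel in continuous mode
    )
    task.start()
    for _ in range(10):
        # Read often enough that the hardware buffer never overflows
        data = task.read(number_of_samples_per_channel=1000)
        print(len(data), "samples read")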

How to achieve image recognition using the phone camera

I'm trying to build an app that does image recognition using the phone camera... I saw a lot of videos where, using the camera, the app identifies where a person is, or which emotions they show, and things like that, in real time.
I need to build an app like this. I know it's not an easy task, but I need to know which technologies can be used to achieve this in a mobile app.
Is it TensorFlow?
Are there libraries that help achieve this?
Or do I need to build a full machine learning / AI app from scratch?
Sorry to ask such a general question, but I need some insights.
Regards
If you are trying to do this for the iOS platform, you could use a starter kit here: https://developer.ibm.com/patterns/build-an-ios-game-powered-by-core-ml-and-watson-visual-recognition/ for step-by-step instructions.
https://github.com/IBM/rainbow is the repo it references.
You train your vision model on the IBM Cloud using Watson Visual Recognition, which just needs example images to learn from. Then you download the model into your iOS app and deploy with Xcode. It will "scan" the live camera feed for the classes defined in your model.
I see you tagged TF (which is not part of this starter kit), but if you're open to other technologies, I think it would be very helpful.
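If you do go the TensorFlow route, here is a minimal sketch of classification with a pretrained Keras MobileNetV2 model; on a phone you would export the same kind of model to TensorFlow Lite, and "photo.jpg" is a placeholder image path:
import numpy as np
import tensorflow as tf

# Pretrained ImageNet classifier; for mobile you would convert this to TF Lite
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(image_path):
    # Return the top-3 ImageNet labels for one image file
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]

print(classify("photo.jpg"))  # "photo.jpg" is a placeholder path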

Is there a DM script command to control the GIF cinema mode?

I have been writing DigitalMicrograph scripts to take sequential frame acquisitions on a JEOL ARM200F. For some experiments, I need a faster readout speed than the usual CCD acquisition mode can deliver.
The GIF Quantum camera is able to do a "cinema" mode in which half the pixels are used as memory storage such that the camera can be exposed and read out simultaneously. This is utilized for EELS acquisitions.
Does anybody know if there is a DM scripting command to activate (acquire images in) the cinema mode?
My current script sets the number of frames to acquire, the acquisition time per frame, and the binning. However, the readout time between frames is too slow. Setting the camera to cinema mode before running the script still acquires only full-frame images.
There is no simple command for this. The advanced camera modes are not available as simple commands, and they are generally not part of the supported DM-script interface.
Usually, these modes can only be accessed via the object-oriented camera-script interface (CM_ commands) used by Gatan service and R&D. This script interface is, at least until now, not end-user supported.
It definitely falls into the category of 'advanced' scripting, so you will need to know how to handle object-oriented script coding style.
With the above said, the following might help you, if you already know how to use the CM_ commands in general:
In the extended (not end-user-supported) script interface, the way to achieve cinema mode is to modify the acquisition parameter set. One needs to set the readMode parameter.
The following code snippet shows this:
// Get the currently selected camera
object camera = cm_GetCurrentCamera()
// Look up the read-mode number for the named acquisition style "Cinema"
number read_mode = camera.cm_GetReadModeForNamedAcquisitionStyle("Cinema")
number create_if_not_exist = 1
// Get (or create) the acquisition parameter set for Imaging / Acquire / Record
object acq_params = camera.CM_GetCameraAcquisitionParameterSet("Imaging", "Acquire", "Record", create_if_not_exist)
// Switch the parameter set to the cinema read mode and validate it
cm_SetReadMode(acq_params, read_mode)
cm_Validate_AcquisitionParameters(camera, acq_params)
// Acquire a single image with these parameters and display it
image img := cm_AcquireImage(camera, acq_params)
img.ShowImage()
Note that not all cameras support the cinema read mode. The call to cm_GetReadModeForNamedAcquisitionStyle will throw an error message in that case.

How to write a TensorBoard event file using just protobufs in C++?

Using C++ I was able to write an event file containing a GraphDef without a problem. I used the EventsWriter::WriteEvent() API. It looked great on TensorBoard.
After a deep dive, I found code in tensorflow.core.util, tensorflow.core.platform, and tensorflow.core.lib.io that wraps the tensorflow::Event in a record in this format: length, masked CRC of length, data, masked CRC of data. (github source here)
But the problem is that I do not want to statically link the contrib TensorFlow library into my app. Instead, I'd like to make my app lightweight and decoupled from the library by using my locally protoc-compiled headers (.pb.h) and sources (.pb.cc).
I am able to create an event file using the protobufs, but it is not visualized in TensorBoard. While using the debugger on the TensorBoard source, I see a DataLossError exception when launching TensorBoard here: tensorboard/backend/event_processing/event_file_loader.py. The DataLossError exception is likely due to the fact that each tensorflow::Event is not wrapped as described above.
If you or anyone knows a strategy to write TB-compatible event files in C++ without using the contrib tensorflow library, please let me know.
So the solution is to wrap each Event record with these four fields:
uint64 (length)
uint32 (masked crc of length)
byte (data[length])
uint32 (masked crc of data)
See WriteRecord() here; a sketch of the same framing is shown below.
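For illustration (the framing itself is language-agnostic), here is a minimal Python sketch of that record format, assuming the third-party crc32c package (pip install crc32c) for the CRC32C/Castagnoli checksum; event_bytes stands for a serialized tensorflow::Event:
import struct
import crc32c  # assumption: pip install crc32c (CRC32C / Castagnoli)

MASK_DELTA = 0xA282EAD8  # masking constant used by the TFRecord format

def masked_crc(data: bytes) -> int:
    # Masked CRC32C that is stored alongside each field
    crc = crc32c.crc32c(data) & 0xFFFFFFFF
    return (((crc >> 15) | (crc << 17)) + MASK_DELTA) & 0xFFFFFFFF

def write_record(f, payload: bytes) -> None:
    # Frame one serialized Event as: length / crc(length) / data / crc(data)
    header = struct.pack("<Q", len(payload))         # uint64 length, little-endian
    f.write(header)
    f.write(struct.pack("<I", masked_crc(header)))   # uint32 masked CRC of the length bytes
    f.write(payload)
    f.write(struct.pack("<I", masked_crc(payload)))  # uint32 masked CRC of the data

# usage (event_bytes is a hypothetical serialized tensorflow::Event):
# with open("events.out.tfevents.0.myhost", "ab") as f:
#     write_record(f, event_bytes)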