How can I use vulkansink on an imx8mq GPU?

I'm trying to play a 4k60p video on an imx8mq.
I used glimagesink because 4k60p playback performance had to be maintained even when the video is cropped and rotated, but the frame rate dropped to 20-30 fps. waylandsink cannot scale the cropped video to the full screen without a videoconvert element, which runs on the CPU.
In the post linked below, the author uses an imx8mq like me, and the list of available sinks includes vulkansink. But when I build the Yocto project, vulkansink is missing by default.
https://community.nxp.com/t5/i-MX-Processors/overlaysink-mssing-on-MCIMX8M-EVK-with-L4-9-88-2-0-0/m-...
I tried to enable the Vulkan plugin by modifying the imx gstreamer1.0-plugins-bad recipe, but the bitbake build fails with an error saying there is no glslc program.
How can I use vulkansink in imx8mq?
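For reference, a sketch of the kind of .bbappend change involved is shown below. glslc is provided by shaderc, so the Vulkan plugin needs shaderc-native at build time; the exact option and variable names are assumptions (the NXP-forked gstreamer1.0-plugins-bad recipe may differ, older GStreamer versions use autotools with --enable-vulkan instead of meson, and some Yocto releases do not ship a shaderc recipe at all).

```
# gstreamer1.0-plugins-bad_%.bbappend  (sketch only; names and options are assumptions)

# glslc comes from shaderc, which the Vulkan plugin needs at build time
DEPENDS_append = " shaderc-native"

# define the vulkan PACKAGECONFIG if the base recipe lacks it, then enable it
PACKAGECONFIG[vulkan] = "-Dvulkan=enabled,-Dvulkan=disabled,shaderc-native vulkan-loader"
PACKAGECONFIG_append = " vulkan"
```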

Related

Kinect depth data ONLY

Is there a way in Linux (Raspbian) to capture only the depth data stream from a Kinect? I'm trying to reduce the amount of processing needed to capture Kinect information, so I want to ship the data stream to another computer to assemble the data.
Note:
I have freenect installed, but anything that requires OpenGL will not run on Raspbian.
I have installed this example, which captures the data stream with a black-and-white visual depth display.
librekinect is a Linux kernel module that lets you use the depth image like a standard webcam. It's known to work with the Raspberry Pi.
But if you want to use libfreenect for full video/depth/motor support, you'll need a more powerful board like the ODROID XU-3 Lite. By the way, libfreenect only requires OpenGL for some examples; the rest of the project compiles and runs fine without it.
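If librekinect's webcam-style depth device is enough for your case, the capture-and-forward part can stay very small. Below is a rough sketch using OpenCV's VideoCapture and a plain TCP socket; the device index, receiver address, and port are assumptions, and the receiver is expected to know the frame dimensions.

```python
# Sketch: read the librekinect depth stream as an ordinary V4L2 webcam with OpenCV
# and forward raw frames to another machine over TCP. The device index, receiver
# address, and port are assumptions; the receiver must know the frame dimensions.
import socket
import cv2

cap = cv2.VideoCapture(0)  # librekinect exposes the depth image as a /dev/video* device
sock = socket.create_connection(("192.168.1.50", 5000))  # machine that assembles the data

try:
    while True:
        ok, frame = cap.read()            # one depth frame, delivered like a webcam image
        if not ok:
            break
        sock.sendall(frame.tobytes())     # fixed-size frames; the receiver reshapes them
finally:
    cap.release()
    sock.close()
```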

How to capture an image from a Canon Digital Camera IXUS 75 in video mode

I am trying to capture an image in video mode from a Canon Digital Camera IXUS 75 using WIA, but I get nothing. In photo mode I can see the camera's storage contents, i.e., videos, photos, etc. Do I need some .dll file or other component to capture an image in video mode? I have also tried other approaches:
Using JavaCV: it detects webcams and the laptop's internal camera, but it does not detect the digital camera.
JTWAIN is not supported on 64-bit Windows, so I did not try it.
Please help me, with either Canon digital camera software or something Java-related.
Assuming you still plan to use Java and the problem is still an issue: I used Asprise's JTwain to get images from scanners, cameras (not only laptop webcams), and so on. The only thing you need is the drivers for the devices. Grab an eval version from Asprise and check the step-by-step dev guide here. As far as I understood, this should be good enough for your requirements.
By the way, seeing the file structure of a camera does not always mean you have the device fully working.

Getting started with image processing on Mac OS X

I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start - let's say I want to create a simple application (that I could reuse) like the following:
Load an image via an Open option in the File menu
Display it within the GUI
Click a button to apply pixel-by-pixel processing
Update the displayed image
Save the processed image via the Save option in the File menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux, but I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet - I'm not even sure whether that would be a good idea.
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL - so I might have to put that one on the long finger for the moment.
As you are looking at the Apple platform, you should look into the Core Image framework - it provides plenty of pre-baked cookies ready to be consumed in your application.
For more advanced purposes, you can start off with OpenCV.
Best of luck!!
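If you do start with OpenCV, the load/process/save core of the app described in the question is only a few lines. The sketch below uses OpenCV's Python bindings purely to illustrate its shape (file names and the per-pixel operation are placeholders); the same calls map onto the C++ API, which you can call from an Objective-C++ (.mm) file in a Cocoa app.

```python
# Sketch: the load -> pixel-by-pixel process -> save core of the app described in
# the question, shown with OpenCV's Python bindings just to illustrate its shape.
# File names and the per-pixel operation (a simple invert) are placeholders.
import cv2
import numpy as np

img = cv2.imread("input.png")            # load an image
out = np.empty_like(img)

for y in range(img.shape[0]):            # explicit per-pixel loop, as in the question
    for x in range(img.shape[1]):
        out[y, x] = 255 - img[y, x]      # placeholder operation: invert the pixel

cv2.imwrite("output.png", out)           # save the processed image
```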
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac - it's entirely geared around making accelerated image processing easy to work with - but unfortunately that port isn't ready yet.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and do rapid development of some fairly interesting things. One task it's great for is rapid prototyping of image processing, using either Core Image or OpenGL shaders. I've even heard of people using OpenCV with it via custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.

H264 frame viewer

Do you know of any application that will show me all the headers/parameters of a single H264 frame? I don't need to decode it, I just want to see how it is built up.
Three ways come to my mind (if you are looking for something free, otherwise google "h264 analysis" for paid options):
Download the h.264 parser from this thread on the doom9 forums
Download the h.264 reference software
libh264bitstream provides h.264 bitstream reading/writing
This should get you started. By the way, the h.264 bitstream is described in Annex B of the ITU specs.
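If all you want is to see how the stream is laid out at the NAL level, the one-byte NAL unit header is easy to walk by hand. The sketch below (the input file name is an assumption) scans an Annex B stream for start codes and prints each NAL unit's type, leaving the full SPS/PPS/slice-header parsing to the tools listed above.

```python
# Sketch: scan an Annex B H.264 stream for start codes and print each NAL unit's
# header fields (the input file name is an assumption). Only the one-byte NAL
# header is decoded here; SPS/PPS/slice syntax is left to dedicated tools.
import sys

data = open(sys.argv[1] if len(sys.argv) > 1 else "video.h264", "rb").read()

NAL_TYPES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS", 9: "AUD"}

i = 0
while True:
    i = data.find(b"\x00\x00\x01", i)     # 3-byte start code (4-byte codes end the same way)
    if i == -1 or i + 3 >= len(data):
        break
    header = data[i + 3]                  # nal_unit_header: f(1) | nal_ref_idc(2) | nal_unit_type(5)
    nal_ref_idc = (header >> 5) & 0x3
    nal_unit_type = header & 0x1F
    print(f"offset {i}: nal_unit_type {nal_unit_type} "
          f"({NAL_TYPES.get(nal_unit_type, 'other')}), nal_ref_idc {nal_ref_idc}")
    i += 3
```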
I've created a Web version - https://mradionov.github.io/h264-bitstream-viewer/
Based on h264bitstream and inspired by H264Naked. It works by compiling h264bitstream into WebAssembly and building a simple UI on top of it. Output information for NAL units is taken from H264Naked at the moment. It also supports files of any size; it just takes some time initially to load the file, but navigation through the stream should be seamless.
I had the same question. I tried h264 analysis, but it only supports Windows, so I made a similar tool with Qt to support different platforms. Download H264Naked. This tool is essentially a wrapper around libh264bitstream.

iphone - digital image processing

I want to build an app similar to Fat Booth, Aging Booth, etc. I am a total noob at digital image processing. Where should I start? Any hints?
Processing images on the iPhone with any kind of speed is going to require OpenGL ES. That would be the place to start. (If this is your first iOS project, though, I wouldn’t recommend starting off with GL.)
Apple has an image processing example available here: http://developer.apple.com/iphone/library/samplecode/GLImageProcessing/Introduction/Intro.html.
I imagine the apps you refer to use GL too. Fat Booth, for example, might texture a mesh with your photo, then distort the mesh to make the photo bulge out in the middle. It could also be done purely with fragment shaders.
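To make the bulge idea concrete without any GL code, here is a CPU-only sketch of the same warp using OpenCV's remap (the file name and strength constant are assumptions). The mesh or fragment-shader version on the device would compute the same per-pixel sampling offsets, just on the GPU.

```python
# Sketch: a CPU-only version of the "bulge" warp using OpenCV's remap, as a
# reference for what the GL mesh/fragment-shader version computes per pixel.
# The file name and strength constant are assumptions.
import cv2
import numpy as np

img = cv2.imread("face.jpg")
h, w = img.shape[:2]
cx, cy = w / 2, h / 2
radius, strength = min(w, h) / 2, 0.5

# Per-pixel sampling map: pixels inside the radius sample closer to the centre,
# which magnifies (bulges) the middle of the photo.
ys, xs = np.indices((h, w), dtype=np.float32)
dx, dy = xs - cx, ys - cy
r = np.sqrt(dx * dx + dy * dy) / radius
scale = np.where(r < 1, 1 - strength * (1 - r) ** 2, 1.0)
map_x = (cx + dx * scale).astype(np.float32)
map_y = (cy + dy * scale).astype(np.float32)

out = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("face_bulge.jpg", out)
```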