AForge: How can I set webcam framerate in ver 2.2.5? - webcam

I'm working on a project that records frames from dual webcams.
Because of a frame-dropping problem, I want to set the webcam framerate to 15 (the default is 30).
But the DesiredFrameRate property is no longer in use in ver 2.2.5.
Is there any way to set the framerate? Thanks

Unfortunately no, it is no longer possible in 2.2.5. All media types are grouped by frame size, and the final media type is matched on frame size only; any frame rate you specify is discarded.
You have to use an older version or change the AForge code.
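For reference, a minimal sketch of how the frame rate was requested in 2.2.4 and earlier (the device index and frame size here are illustrative):

using AForge.Video.DirectShow;

// Enumerate capture devices and open the first one.
var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
var videoSource = new VideoCaptureDevice(devices[0].MonikerString);

// Pre-2.2.5 only: request 15 fps instead of the 30 fps default.
// These properties are removed/ignored starting with 2.2.5.
videoSource.DesiredFrameSize = new System.Drawing.Size(640, 480);
videoSource.DesiredFrameRate = 15;

videoSource.NewFrame += (s, e) => { /* handle e.Frame */ };
videoSource.Start();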

Related

Extract image of every frame of a video using react-native-ffmpeg

I have looked all over the internet for a way to extract an image of every frame of a video using react-native-ffmpeg. I am making a mobile app and I want to show all per-frame images on the video timeline. I want to do this natively on mobile so that I can utilise the hardware power of the device, which is why I am looking at a library like react-native-ffmpeg. Am I heading in the right direction? This npmjs.com/package/react-native-ffmpeg is what I am trying to use. I need to know the command to do the job.
To find the frame rate of the video, follow this link:
https://askubuntu.com/questions/110264/how-to-find-frames-per-second-of-any-video-file
Once you know the frame rate, you can extract each frame. For example, to extract all frames from a 24 fps movie using ffmpeg:
ffmpeg -i input.mov -r 24/1 out%03d.jpg
The %03d dictates that the ordinal number of each output image will be formatted using 3 digits.
Another resource:
https://gist.github.com/loretoparisi/a9277b2eb4425809066c380fed395ab3
Also refer to the .execute() method in react-native-ffmpeg.
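A minimal sketch of running that command through react-native-ffmpeg's .execute() (assuming the 0.x RNFFmpeg API; the input and output paths are placeholders you would resolve in your app, e.g. with react-native-fs):

import { RNFFmpeg } from 'react-native-ffmpeg';

// Placeholder paths -- resolve these for your app.
const videoPath = '/path/to/input.mov';
const outputDir = '/path/to/frames';

// Same command as above, run through react-native-ffmpeg.
RNFFmpeg.execute(`-i ${videoPath} -r 24/1 ${outputDir}/out%03d.jpg`)
  .then(result => console.log(`FFmpeg exited with rc ${result.rc}`));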

Why do GUI components in Qt5 show different sizes when deployed on systems with different resolutions?

I am developing an application in Qt5. When I deploy it on the machine on which it was developed, it looks fine. But when I deploy it on a laptop with a higher resolution, components such as QPushButtons shrink. Please help me; I have no idea why this is happening.
I'm not sure how you are specifying the size, but maybe you are using fixed sizes? If so, when deployed on a higher-resolution display the interface looks smaller but in fact has the same pixel size as on the lower-resolution display.
Try using relative sizes, or at least show how you are setting the size right now.
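A minimal sketch of the relative-size approach, letting a layout size the widgets instead of fixed geometry (the window and buttons here are illustrative):

#include <QApplication>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[]) {
    // Qt 5.6+: scale automatically on high-DPI displays (must be set
    // before the QApplication is constructed).
    QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QApplication app(argc, argv);

    QWidget window;
    auto *layout = new QVBoxLayout(&window);

    // No setFixedSize()/setGeometry(): the layout computes sizes from
    // font metrics and style hints, so the UI scales with the display.
    layout->addWidget(new QPushButton("OK"));
    layout->addWidget(new QPushButton("Cancel"));

    window.show();
    return app.exec();
}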

Screen recording on Mac using AVFoundation

I have been working on screen recording on macOS. I have working code based on Apple's documentation (https://developer.apple.com/library/content/qa/qa1740/_index.html). The problem is that the resolution of the recorded video is very low; according to the logs, SD 480x300 is the default resolution. I was unable to find any method to change the resolution of the video. Can somebody help me out here?
I found the solution to the problem. You can set the screen resolution with mSession.sessionPreset = AVCaptureSessionPreset1280x720; (a short sketch follows the list).
There are several values for sessionPreset, including
AVCaptureSessionPresetLow
AVCaptureSessionPresetMedium
AVCaptureSessionPresetHigh
AVCaptureSessionPreset320x240
AVCaptureSessionPreset352x288
AVCaptureSessionPreset640x480
AVCaptureSessionPreset960x540
AVCaptureSessionPreset1280x720
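A minimal Objective-C sketch of where the preset goes, following the shape of Apple's QA1740 sample (mSession is the session variable from that sample; error handling omitted):

#import <AVFoundation/AVFoundation.h>

// Create the capture session and raise the preset from the SD default.
mSession = [[AVCaptureSession alloc] init];
mSession.sessionPreset = AVCaptureSessionPreset1280x720;

// Screen input for the main display, as in the QA1740 sample.
AVCaptureScreenInput *input =
    [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];
if ([mSession canAddInput:input])
    [mSession addInput:input];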

How do we get Qt to render to memory rather than a device?

I have an application that uses Qt 5.6 for various purposes and that runs on an embedded device. Currently I have it rendering via eglfs to a Linux frame buffer on an attached display but I also want to be able to grab the data and send it to a single-color LED display unit (a device will either have that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for a 1 bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject it into the Qt display "stack", but I cannot find any good examples of how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?
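One commonly suggested way to handle the capture step (a sketch only, independent of the eglfs plumbing; the helper and its use of QWidget::render are an assumption, not a confirmed recipe for this setup):

#include <QImage>
#include <QPainter>
#include <QWidget>

// Render 'root' (the top-level widget) into an off-screen image.
QImage grabFrame(QWidget *root) {
    QImage frame(root->size(), QImage::Format_RGB32);
    QPainter painter(&frame);
    painter.setRenderHint(QPainter::Antialiasing, false); // no AA
    root->render(&painter);   // draws synchronously into our buffer,
    painter.end();            // so the frame is complete (no tearing)
    // Convert to 1 bit/pixel for the single-colour LED unit.
    return frame.convertToFormat(QImage::Format_Mono);
}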

Corrupted frame in VMR9

I wrote a DirectShow filter for desktop capture, set it into the graph, and get corrupted frames in VMR9. But my grabber, placed before VMR9, gets good frames.
Details are in the following pictures:
The full graph
The frames from my grabber
The frames in VMR9
I use MS Win10 and a 640x480 window for VMR9.
Any ideas?
The filter was working in RGB24; it must work in RGB32.
Thanks all
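For anyone hitting the same thing, a minimal sketch of what the fix looks like in the source filter's output pin (based on the DirectShow base classes; the class name and the fixed 640x480 geometry are illustrative):

// Advertise 32-bit RGB instead of 24-bit so VMR9 renders correctly.
HRESULT CDesktopPushPin::GetMediaType(CMediaType *pmt) {
    VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)
        pmt->AllocFormatBuffer(sizeof(VIDEOINFOHEADER));
    ZeroMemory(vih, sizeof(VIDEOINFOHEADER));

    vih->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    vih->bmiHeader.biWidth       = 640;
    vih->bmiHeader.biHeight      = 480;
    vih->bmiHeader.biPlanes      = 1;
    vih->bmiHeader.biBitCount    = 32;       // RGB32, not 24
    vih->bmiHeader.biCompression = BI_RGB;
    vih->bmiHeader.biSizeImage   = 640 * 480 * 4;

    pmt->SetType(&MEDIATYPE_Video);
    pmt->SetSubtype(&MEDIASUBTYPE_RGB32);    // was MEDIASUBTYPE_RGB24
    pmt->SetFormatType(&FORMAT_VideoInfo);
    pmt->SetSampleSize(vih->bmiHeader.biSizeImage);
    pmt->SetTemporalCompression(FALSE);
    return S_OK;
}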