Corrupted frame rendering in VMR9

I wrote a DirectShow filter for desktop capture. I set it into the graph and get corrupted frames in VMR9, but my grabber, placed before VMR9, gets good frames.
Details are in the following pictures (omitted here): the full graph, the frames from my grabber, and the frames in VMR9.
I use MS Windows 10 and a 640x480 window for VMR9.
Any ideas?

Solved: the filter worked in RGB24, but the output must be RGB32.
Thanks all

Related

How to send a texture with Agora Video SDK for Unity

I'm using the package Agora Video SDK for Unity and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to set it to the mTexture you find in the second link. It seems to work on my computer, but it is as if it never arrives at the target computer.
How can I send a texture properly?
Thanks
Did you copy every line of the code from the example as-is? You may not want to do the ReadPixels part, since that reads the screen. You may just read the raw data from your input texture and send it with PushVideoFrame on every update.

Extract image of every frame of a video using react-native-ffmpeg

I have looked all over the internet for a way to extract an image of every frame of a video using react-native-ffmpeg. I am making a mobile app and I want to show the per-frame images on the video timeline. I want to do this natively on mobile so that I can utilise the hardware power of the phone; that is the reason I am looking for a react-native-ffmpeg kind of library. Am I heading in the right direction? npmjs.com/package/react-native-ffmpeg is what I am trying to use. I need to know the command to do the job.
To calculate the frame rate of the video, follow this link:
https://askubuntu.com/questions/110264/how-to-find-frames-per-second-of-any-video-file
After finding the frame rate, you can extract each frame. For example, to extract all frames from a 24 fps movie using ffmpeg:
ffmpeg -i input.mov -r 24/1 out%03d.jpg
The %03d dictates that the ordinal number of each output image will be formatted using 3 digits.
Another resource:
https://gist.github.com/loretoparisi/a9277b2eb4425809066c380fed395ab3
Also refer to the .execute() method in react-native-ffmpeg, which runs the same kind of command string; a desktop sketch of the two steps follows below.
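
For reference, the same two steps (probe the frame rate, then extract one image per frame) can be scripted outside React Native. A minimal Python sketch, assuming ffprobe and ffmpeg are on the PATH and a hypothetical input.mov:

import subprocess

def frame_rate(path):
    # Ask ffprobe for the video stream's frame rate, returned as e.g. "24/1".
    out = subprocess.check_output([
        'ffprobe', '-v', 'error',
        '-select_streams', 'v:0',
        '-show_entries', 'stream=r_frame_rate',
        '-of', 'csv=p=0',
        path,
    ]).decode().strip()
    num, den = out.split('/')
    return float(num) / float(den)

def extract_frames(path, fps):
    # Writes one JPEG per frame; %03d numbers the outputs with three digits.
    subprocess.run(['ffmpeg', '-i', path, '-r', str(fps), 'out%03d.jpg'],
                   check=True)

extract_frames('input.mov', frame_rate('input.mov'))

The ffmpeg argument string built in extract_frames is the same command you would pass to .execute() in react-native-ffmpeg.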

ResourceExhaustedError when running network demo on the fourth try

I have 1600 videos and I want to generate joint-annotation label data for them.
I've already built the OpenPose network; I feed my videos into it and save the joint data as JSON files.
When I feed in the first video, there are no errors, and the second and third videos also run without errors.
But when I feed in the fourth video, I get the error message below.
(Screenshots omitted: the images show the error message, an out-of-memory (OOM) ResourceExhaustedError.)
The first, second, third, and fourth videos are all the same size.
When I swap the names of the first and fourth videos, I still get the same error on the fourth one.
I think this error is about the graph, but I don't know exactly why.
I think there are many geniuses on Stack Overflow, so please answer my question... :)
I solved this problem by using the CPU, not the GPU.
I use the CPU only in TensorFlow, and it works! (A sketch of how to force that is below.)
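
A minimal sketch of forcing TensorFlow onto the CPU, assuming a CUDA build of TensorFlow (the environment variable must be set before TensorFlow is imported):

import os

# Hide all CUDA devices so TensorFlow falls back to the CPU.
# This must run before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf  # now sees no GPU; all ops are placed on the CPU

Since the first three videos run fine, GPU memory may simply be accumulating across runs; building a fresh graph and session per video is another thing worth trying before giving up the GPU entirely.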

Screen recording on Mac using AVFoundation

I have been working on screen recording on macOS. I have working code based on Apple's documentation (https://developer.apple.com/library/content/qa/qa1740/_index.html). The problem is that the resolution of the recorded video is very low; according to the generated logs, SD 480x300 is the default resolution. I was unable to find any method to change the resolution of the video. Can somebody help me out here?
I found the solution to the problem. You can set the screen resolution with mSession.sessionPreset = AVCaptureSessionPreset1280x720;
There are several values for sessionPreset, including:
AVCaptureSessionPresetLow
AVCaptureSessionPresetMedium
AVCaptureSessionPresetHigh
AVCaptureSessionPreset320x240
AVCaptureSessionPreset352x288
AVCaptureSessionPreset640x480
AVCaptureSessionPreset960x540
AVCaptureSessionPreset1280x720

python-pptx with matplotlib: fix image resolution

I am trying to generate a graph using matplotlib and save it into a python-pptx presentation. Everything works, but the image resolution is low when it is imported into the pptx. (I save the figure to memory using StringIO, then use add_picture() in python-pptx to add the image.)
When I do:
some_image_in_memory = StringIO()
plt.savefig(some_image_in_memory)
it works fine but gives a low-res image, but when I do:
plt.savefig(some_image_in_memory, format='svg')
I get the error:
cannot identify image file <StringIO.StringIO instance at ..>
Is this even correct? SVG should maintain resolution, but I can't read it into pptx.
I got around this by setting the dpi value in savefig(), e.g.:
plt.savefig(some_image_stream_in_memory, dpi=1200)
Unfortunately, PowerPoint does not directly support the SVG format (I've heard it's a turf issue between MS and Adobe). I expect that explains the error you're getting when you save with format='svg'.
Other folks seem to have good luck with the PNG format from matplotlib. I kind of suppose that's the default image format, but it might be worth a check.
The other thing that occurs to me is that I don't see anywhere you have specified the size of the graph to be saved from matplotlib. If it is saved as a small image and then scaled up significantly when displayed in PowerPoint, that will produce a "grainy" appearance. A combined sketch follows below.
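
Putting both answers together, a minimal sketch (my assumptions: Python 3, so BytesIO instead of StringIO, and a hypothetical output file chart.pptx):

from io import BytesIO

import matplotlib.pyplot as plt
from pptx import Presentation
from pptx.util import Inches

# Draw the figure at roughly the physical size it will occupy on the
# slide, so PowerPoint does not have to scale (and blur) it.
fig, ax = plt.subplots(figsize=(8, 4.5))
ax.plot([1, 2, 3], [4, 1, 7])

# Save as PNG (PowerPoint cannot embed SVG) at a high dpi, into memory.
image_stream = BytesIO()
fig.savefig(image_stream, format='png', dpi=300)
image_stream.seek(0)

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout in the default template
slide.shapes.add_picture(image_stream, Inches(1), Inches(1), width=Inches(8))
prs.save('chart.pptx')

With figsize matched to the placed width, the dpi value alone controls the sharpness of the embedded image.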