How to set framerate on uv4l with external usb camera

I'm using an external USB camera plugged into my Raspberry Pi 3B+. Since I'm not using the raspicam but the UVC driver, I can't just set the frame rate in a config file the way you can with the raspicam.
Is there a way to set it somewhere?

v4l2-ctl --set-parm=30
seems like it should change the fps; you also need to specify the device with
--device=
With
--set-fmt-video=width=1024,height=640
you can change the resolution. However, the changes don't seem to actually affect the video. The fps setting appears to have no effect at all: the MJPEG stream still stutters a lot, and WebRTC works fine even when it's set to 5, for example. Changing the resolution only seems to upscale the image with no quality improvement.
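If you'd rather drive this from code than from the shell, here is a minimal Kotlin sketch that just shells out to v4l2-ctl. It assumes v4l2-ctl is installed and that the camera is /dev/video0; the device path, 30 fps and 1024x640 are example values, not anything uv4l requires.
fun v4l2(vararg args: String) {
    // Run v4l2-ctl with the given arguments and wait for it to finish,
    // forwarding its output so you can see any errors it reports.
    ProcessBuilder("v4l2-ctl", *args)
        .inheritIO()
        .start()
        .waitFor()
}

fun main() {
    v4l2("--device=/dev/video0", "--set-parm=30")
    v4l2("--device=/dev/video0", "--set-fmt-video=width=1024,height=640")
}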

Related

WebRTC local camera preview resolution decreases for network speed

I want to scan a text page while a call is in progress. What I do is take frames from the local video preview and send them to a server for processing.
Before the call starts, the preview quality and resolution are at their highest, but once the call starts the capturer's resolution decreases. I can see that the onFrameResolutionChanged event is called on the local renderer. I'm guessing that WebRTC is lowering the resolution because of the internet speed.
I don't want the local display resolution to change.
I have this issue with both the iOS and Android WebRTC libraries.
What can I do to prevent the local camera preview resolution from decreasing?
I tried the videoSource.adaptOutputFormat function, but it only sets a maximum quality, and after some time the preview resolution still decreases.
Update:
What I was looking for was enableCpuOveruseDetection = false. It has to be set in
val config = PeerConnection.RTCConfiguration(servers)
config.enableCpuOveruseDetection = false
This works well for Android; it no longer downscales the local preview.
But on iOS there is no enableCpuOveruseDetection in the RTCConfiguration class, so on iOS the problem remains.
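For context, a minimal Kotlin sketch of where that flag ends up in the Android org.webrtc setup; the function name createPeerConnection is just for illustration, and the factory, ICE servers and observer are assumed to exist elsewhere in your code.
import org.webrtc.PeerConnection
import org.webrtc.PeerConnectionFactory

fun createPeerConnection(
    factory: PeerConnectionFactory,
    servers: List<PeerConnection.IceServer>,
    observer: PeerConnection.Observer
): PeerConnection? {
    val config = PeerConnection.RTCConfiguration(servers)
    // Stops WebRTC from downscaling the local capturer when it thinks the CPU is overloaded.
    config.enableCpuOveruseDetection = false
    return factory.createPeerConnection(config, observer)
}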

Blender VSE imported audio and video out of sync

I'm working on a short video with Blender, and the first few strips I imported were fine. I then imported another strip (and tried re-recording it and importing that). For the new strip, the video is much shorter than the audio.
In fact, I did a quick calculation, and the video is exactly 8 times shorter (and faster).
I have looked it up, and the advice is to match the frame rate in the scene settings, but that then throws the other strips out of sync.
I was having the same issue. When you change the frame rate, it seems to align the strips that share that frame rate but throw the others out of sync. However, if you add a speed control effect to the video strip, turn on 'Stretch to input strip length' in that effect, and stretch the video strip to match the audio strip, they line up. This person explains it better than I do: https://www.youtube.com/watch?v=-MZ4VXgEdzo

Unity3d external camera frame rate

I am working on a live augmented reality application. So far I have worked on many AR applications for mobile devices.
Now I have to get the video signal from a Panasonic P2. The camera is a European version. I capture the signal with an AJA Io HD box, which is connected by FireWire to a Mac Pro. So far everything works great - just not in Unity.
When I start the preview in Unity, the frame buffer of the AJA ControlPanel jumps to a frame rate of 59.94 fps, I guess because of a preference in Unity. Because the camera is a European version, I cannot switch it to 59.94 fps or 29.97 fps.
I checked all settings in Unity, but couldn't find anything...
Is there any possibility to change the frame-rate unity captures from an external camera?
If you're polling the camera from Unity's Update() function, then you will be under the influence of VSync, which limits frame processing to 60 FPS.
You can switch off VSync by going to Edit > Project Settings > Quality and setting the VSync Count option to "Don't Sync".

Rendering video on HTML5 CANVAS takes huge amount of CPU

I am using an HTML5 canvas for rendering video, but the rendering takes a huge amount of CPU. I am using GtkLauncher (with WebKit 1.8.0) to render the video on the canvas.
Can someone please shed some light on this? Is video rendering on a canvas not efficient for embedded systems?
I would also like to know whether there is a way, with the HTML5 video tag, to find out the video frame rate before I actually start to render the data on the canvas. I need to know this because I would have to set the timer (used for drawing the video frames) to that same frame rate.
Thanks and Regards,
Souvik
Most likely the video rendering is not accelerated and has to
decode in software
resize in software
You did not give system details, so this is just a guess. By poking around in the browser internals you can dig out the truth.
The video frame rate cannot be known beforehand, and in theory it can vary within a single source. However, if you host the file yourself you can pre-extract this information using tools like ffmpeg and transfer the number in a side band (e.g. using AJAX/JSON).
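As a sketch of that side-band idea in Kotlin (the probeFrameRate helper and video.mp4 are hypothetical names; it assumes ffprobe from the ffmpeg package is installed on the server), you could pre-extract the frame rate and publish it as JSON next to the video:
import java.util.concurrent.TimeUnit

// Ask ffprobe for the average frame rate of the first video stream, e.g. "30000/1001".
fun probeFrameRate(path: String): Double {
    val proc = ProcessBuilder(
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=avg_frame_rate",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path
    ).redirectErrorStream(true).start()
    proc.waitFor(10, TimeUnit.SECONDS)
    val (num, den) = proc.inputStream.bufferedReader().readText().trim()
        .split("/").map { it.toDouble() }
    return num / den
}

fun main() {
    // Serve this as side-band JSON, e.g. {"fps": 29.97}, and fetch it via AJAX
    // before starting the canvas drawing timer.
    println("""{"fps": ${probeFrameRate("video.mp4")}}""")
}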

QTKit capture: what frame size to use?

I am writing a simple video-messenger-like application, and therefore I need to get frames of some compromise size: small enough to fit into the available bandwidth, but without distorting the captured image.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I would just like to get "raw bitmaps".)
The problem is that with these new iSight cameras I am getting literally huge frames.
Luckily, the class for capturing raw frames (QTCaptureVideoPreviewOutput) provides the method setPixelBufferAttributes, which allows you to specify what kind of frames you would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk) and, most likely, with the wrong proportions.
I have searched through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve a camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings and let the user try them to see what works? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, not an iSight. I assume not every user has an iSight either, and I am sure that even different iSight models produce different frame dimensions.
Or maybe I should switch the camera to its default mode, capture a few frames, see what sizes it generates, and at least then I will have some proportions? This feels like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes; more likely you are just setting how the frames are processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size, so I would not say it's a hack.
A more natural way would be to write your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the hard, way.