Unity3d external camera frame rate - input

I am working on a live augmented-reality application. So far I have worked on many AR applications for mobile devices.
Now I have to get the video signal from a Panasonic P2. The camera is a European version. I capture the signal with an AJA Io HD box, which is connected via FireWire to a Mac Pro. So far everything works great - just not in Unity.
When I start the preview in Unity, the frame buffer of the AJA ControlPanel jumps to a frame rate of 59.94 fps - I guess because of a default in Unity. Because the camera is the European version, I cannot switch it to 59.94 fps or 29.97 fps.
I checked all the settings in Unity but couldn't find anything.
Is there any way to change the frame rate at which Unity captures from an external camera?

If you're polling the camera from Unity's Update() function, you are subject to VSync, which limits frame processing to the display refresh rate (typically 60 fps).
You can switch off VSync by going to Edit > Project Settings > Quality and setting V Sync Count to "Don't Sync", or from a script with QualitySettings.vSyncCount = 0.
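As background on the rates involved (my own illustration, not part of the original answer): 59.94 fps is the NTSC-family rate 60000/1001, while a European (PAL) camera outputs 50 or 25 fps, so the two clocks never line up. A quick check of the frame intervals:

```java
public class FrameRates {
    public static void main(String[] args) {
        // NTSC-family rate, as reported by the AJA ControlPanel: 60000/1001 ≈ 59.94 fps
        double ntsc = 60000.0 / 1001.0;
        // PAL rate delivered by a European camera
        double pal = 50.0;

        System.out.printf("NTSC frame interval: %.3f ms%n", 1000.0 / ntsc); // ≈ 16.683 ms
        System.out.printf("PAL  frame interval: %.3f ms%n", 1000.0 / pal);  // = 20.000 ms
    }
}
```

Any capture pipeline locked to one of these intervals will drop or duplicate frames when fed the other.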

Related

WebRTC local camera preview resolution decreases for network speed

I want to scan a text page during a call. What I do is take frames from the local video preview and send them to a server for processing.
Before the call starts, the preview quality and resolution are at their highest, but once the call starts, the capturer's resolution decreases. I can see that the onFrameResolutionChanged event is called on the local renderer. I'm guessing that WebRTC is lowering the resolution because of the network speed.
I don't want the local display resolution to change.
I have this issue with both the iOS and Android WebRTC libraries.
What can I do to prevent the local camera preview resolution from decreasing?
I tried the videoSource.adaptOutputFormat function, but it only sets a maximum quality, and the preview still degrades over time.
Update:
What I was searching for was enableCpuOveruseDetection = false. It has to be set on the RTCConfiguration:
val config = PeerConnection.RTCConfiguration(servers)
config.enableCpuOveruseDetection = false
This works well on Android; it no longer resizes the local preview quality.
But on iOS there is no enableCpuOveruseDetection in the RTCConfiguration class, so on iOS the problem still remains.

Controlling frame rate in SpriteKit

I am working on a project using SpriteKit to present simple shapes (SKShapeNode and SKSpriteNode) on the screen and move these shapes around in a predefined way.
I am interested in obtaining very smooth motion, even for fast-moving shapes (the code runs smoothly for a couple of hundred shapes, and most of the time I only need fewer than 10).
I am quite satisfied with the motion smoothness for slowly moving objects on my MacBook Air and iMac (60 Hz displays and 60 fps reported by showFPS in the SKView); however, fast-moving shapes create artefacts.
For that reason I would like to display the scene on a 120 Hz monitor. I have scoured Stack Overflow and only found solutions for reducing the frame rate from 60 fps to 30 or 15 fps using frameInterval, but I haven't worked out how to obtain anything higher than 60 fps.
I have tried connecting my Mac to an 85 Hz CRT monitor (refresh rate set in System Preferences to 85 Hz), but my SKView still runs at 60 fps.
Is there a way to access the EDID information of the monitor programmatically through SpriteKit? Is it possible to run an SKView at more than 60fps?
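For context on why frameInterval only lowers the rate (my own sketch, not from the question): it acts as an integer divider of the vsync, so on a 60 Hz panel the achievable rates are 60/n fps and can never exceed the refresh rate:

```java
public class FrameIntervals {
    public static void main(String[] args) {
        int refresh = 60; // display refresh rate in Hz
        // frameInterval n means the view renders on every n-th vsync,
        // so the effective rate is refresh / n — it can only go down.
        for (int n = 1; n <= 4; n++) {
            System.out.printf("frameInterval %d -> %d fps%n", n, refresh / n);
        }
    }
}
```

This prints 60, 30, 20, and 15 fps, which matches the 30 and 15 fps options mentioned above; going above 60 fps requires the display itself to refresh faster.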
Thanks in advance

How to change dynamic video resolution during a call (in hangout WebRTC)

I found Hangouts' dynamic resolution feature in the Google Hangouts WebRTC version.
How can I change the video resolution dynamically during a call?
[Situation]
- There were three users in the room.
- When the main speaker switches, that video's resolution changes (.videoWidth, .videoHeight).
I would like to know how this is implemented across many peer connections.
To change your resolution, you can use the Hangout Toolstrip at the top center of the Hangout interface to move the quality slider from Auto to a lower resolution. However, part of me thinks you might be asking about aspect ratio instead: different devices (webcam, mobile device camera, etc.) present different aspect ratios (16:9 or 4:3). Some webcams allow you to change the aspect ratio, but that depends on the software provided with the camera.
I hope some part of this was helpful.

How to capture images that are suitable for OCR

I'm reading about which factors can be changed to enhance the accuracy of text recognition in an image BEFORE the image is taken. Basically, I'm interested in lighting and various camera settings such as exposure and shutter speed.
I've found mostly papers on improving camera focus, like this one, but since autofocus seems common nowadays, I'm not sure special effort needs to be taken to adjust focus. However, I could not find much on lighting and camera settings for OCR (I'm interested in handheld cameras).
So, how do I choose the ideal lighting and camera-sensor settings on a handheld camera so that the image is suitable for OCR?

Android2.3 camera HAL capture picture issue

I'm working on an Android 2.3 project and have some questions about the Camera HAL implementation.
It takes a long time to capture a picture - about 4 s. How do I reduce it? The camera sensor on this project is an OV3640. Following the OV3640 spec for picture capture, I try to wait for two Vsyncs and capture the third frame, instead of sleeping for 1 s directly after autofocus completes.
My second question is: how do I calculate the Vsync time?
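On the second question, a rough starting point (my own sketch; the exact timing comes from the sensor's clock and register settings in the OV3640 datasheet, and the 15 fps figure below is an assumption for illustration): the Vsync interval is the reciprocal of the sensor's output frame rate, so waiting two Vsyncs and grabbing the third frame costs about three frame periods rather than a fixed 1 s sleep:

```java
public class VsyncTiming {
    public static void main(String[] args) {
        double sensorFps = 15.0; // assumed full-resolution output rate of the sensor
        double vsyncMs = 1000.0 / sensorFps; // one Vsync period ≈ 66.7 ms
        double captureDelayMs = 3 * vsyncMs; // wait 2 Vsyncs, capture on the 3rd frame
        System.out.printf("Vsync period:  %.1f ms%n", vsyncMs);
        System.out.printf("Capture delay: %.1f ms (vs a 1000 ms sleep)%n", captureDelayMs);
    }
}
```

Under that assumption the frame-synchronized approach takes about 200 ms, which is why it beats the fixed 1 s sleep.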