I'm trying to figure out how dynamic resolution works in the WebRTC version of Google Hangouts.
How can the video resolution be changed dynamically during a call?
[Situation]
- There were three users in the room.
- When the main speaker switches, that video's resolution (.videoWidth / .videoHeight) changes as well.
I would like to know how this is implemented across many peer connections.
To change your resolution you can use the Hangouts toolstrip at the top center of the Hangouts interface and move the quality slider from Auto to a lower resolution. Part of me thinks you might be asking about aspect ratio instead, though: different devices (webcams, mobile device cameras, etc.) present different aspect ratios (16:9 or 4:3). Some webcams allow you to change the aspect ratio, but that depends on the software provided with the camera.
I hope some part of this was helpful.
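Hangouts' internals are not public, but plain WebRTC exposes two standard ways to change the outgoing resolution mid-call, which may be what you are seeing when the main speaker switches. A minimal sketch (the function names are mine; applyConstraints and setParameters are standard WebRTC APIs):

```typescript
// Option 1: re-constrain the capture track itself. This affects every
// peer connection that shares this track.
async function setCaptureResolution(
  track: MediaStreamTrack,
  width: number,
  height: number
): Promise<void> {
  await track.applyConstraints({
    width: { ideal: width },
    height: { ideal: height },
  });
}

// Option 2: per-connection downscaling. Ask a single RTCRtpSender to
// scale its encoding down, leaving the capture and the other peers alone.
// This is how a client can send full resolution for the "main speaker"
// view and a smaller stream to everyone else.
async function downscaleSender(
  sender: RTCRtpSender,
  factor: number
): Promise<void> {
  const params = sender.getParameters();
  if (params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].scaleResolutionDownBy = factor;
  await sender.setParameters(params);
}
```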
I am using SDK 2.3 to develop an Android application with the AS-15 and AS-20 cameras that deals exclusively with liveview.
I am unable to obtain from liveview a higher resolution than 640x360 px, while the camera specs mention 1920×1080/30p (HQ).
How can I get the full resolution?
Is this a limitation of the API? If so, why?
I've found that some (other) cameras implement getLiveviewSize/setLiveviewSize, and for the "L" size the documentation says
XGA size scale (the size varies depending on the camera models, and some camera models change the liveview quality instead of making the size larger.)
What are the models with the highest liveview resolution?
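For what it's worth, the Camera Remote API is plain JSON-RPC over HTTP, so probing a given model is cheap. A minimal sketch; the method names are the ones quoted above, the endpoint URL is an assumption (take the real one from your camera's device description), and not every model implements these calls:

```typescript
const ENDPOINT = "http://10.0.0.1:10000/sony/camera"; // assumed; read it from the device description

async function callCamera(method: string, params: unknown[] = []): Promise<unknown> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    body: JSON.stringify({ method, params, id: 1, version: "1.0" }),
  });
  return res.json();
}

async function main(): Promise<void> {
  // Ask which liveview sizes this model supports, then request the largest ("L").
  console.log(await callCamera("getAvailableLiveviewSize"));
  await callCamera("setLiveviewSize", ["L"]);
}

main().catch(console.error);
```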
Cocoa uses a drawing system (user coordinate space) measured in "points", which are resolution independent... sounds great.
While we need to be concerned with our app running at many resolutions, Cocoa is going to take care of that for us via the point system above... sounds too good to be true!
It does scale our controls as the resolution changes... this is good.
BUT the screen size (in points) increases as my resolution increases... this is not good; I thought we had a drawing canvas that was independent of the resolution!
What if the controls shrink to silly small sizes as the resolution increases? Should I be concerned about this?
To summarize: is there a "standard" resolution I should design for, so that all the automatic scaling done by Apple will look fine?
[Confused while reading the Apple Programmer Guide on the topic of Drawing]
You do not need to be concerned about this. The user is only allowed to select resolutions that make sense given the physical size of the display, so the standard controls will always be "large enough". You just need to test your app on Retina and non-Retina displays (ideally both at the same time, with an external 1x monitor plugged into a 2x machine; move your windows between the two screens and check that your images update accordingly).
I am using an HTML5 canvas for rendering video, but the rendering is taking a huge amount of CPU. I am using GtkLauncher (with WebKit 1.8.0) to render the video on the canvas.
Can someone please throw some light on this? Is video rendering on a canvas simply not efficient for embedded systems?
I would also like to know whether there is a way, with the HTML5 video tag, to find out the video's frame rate before I actually start rendering onto the canvas. I need this because I have to set the timer (used for drawing the video frames) to that same frame rate.
Thanks and Regards,
Souvik
Most likely the video rendering is not hardware accelerated and needs to:
- decode in software
- resize in software
You did not give system details, so this is just a guess. By poking at the browser internals you can dig out the truth.
The video frame rate cannot be known beforehand and in theory can vary within a single source. However, if you host the file yourself, you can pre-extract this information using tools like ffmpeg and transfer the number in a side band (e.g. using AJAX / JSON).
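A minimal sketch of that side-band idea: extract the frame rate once with ffprobe (ships with ffmpeg), publish it as JSON next to the video, and pace the canvas drawing loop with it. The metadata file name and its shape here are my assumptions:

```typescript
// Offline, next to the video file (ffprobe ships with ffmpeg):
//   ffprobe -v error -select_streams v:0 \
//     -show_entries stream=r_frame_rate -of json video.mp4 > video-meta.json

const video = document.querySelector("video")!;
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

async function startRendering(): Promise<void> {
  // Fetch the side-band metadata produced by ffprobe above.
  const meta = await (await fetch("video-meta.json")).json();
  // r_frame_rate is a fraction string such as "30000/1001".
  const [num, den] = meta.streams[0].r_frame_rate.split("/").map(Number);
  const frameIntervalMs = 1000 / (num / den);

  // Draw the current video frame at (roughly) the source frame rate.
  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  }, frameIntervalMs);
}

video.addEventListener("playing", () => void startRendering(), { once: true });
```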
I am trying to develop an iPhone application which needs to show a 360 degree video and rotate the video as the phone moves. How can I do this? Is it possible to do this with a normal MPMoviePlayerController?
I don't think you can do this with a normal MPMoviePlayerController, but there are several libraries out there to achieve this. Have a look here:
PanoramaGL
Panorama 360
They work with OpenGL and you can embed them in your Objective-C code.
EDIT:
As @Mangesh Vyas kindly pointed out, those are intended for use with fixed images only. However, they might be a suitable starting point for embedding video as well, if you modify the code accordingly. They already handle direction, the accelerometer etc., so you don't have to implement all of that yourself.
I am writing a simple video messenger-like application, so I need to get frames of some compromise size: small enough to fit into the available bandwidth, yet with the captured image not distorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just want "raw bitmaps".)
The problem is that with these new iSight cameras I am getting literally huge frames.
Luckily, QTCaptureVideoPreviewOutput, the class for capturing raw frames, provides the method setPixelBufferAttributes, which allows me to specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk), and, most likely, a non-proportional one.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve a camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in settings, and let the user try them to see what works for him? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, but not an iSight. I assume not every user is using an iSight either, and I am sure that even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, generate a few frames, and see what sizes it produces, so that at least I have the right proportions? This looks like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes. More likely you are setting how the frames get processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size, so I would not call it a hack.
A more natural way would be to write your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the hard, way.
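On the "common frame sizes" question above: the selection logic itself is independent of QTKit, so here is a language-agnostic sketch (written in TypeScript for brevity) that offers a list of sizes commonly advertised by UVC webcams and picks the largest one that preserves the source aspect ratio and fits a pixel budget. The size list is illustrative, not exhaustive:

```typescript
// Sizes commonly advertised by UVC webcams (illustrative, not exhaustive).
const COMMON_UVC_SIZES: Array<[number, number]> = [
  [160, 120], [320, 240], [352, 288], [640, 480],
  [640, 360], [800, 600], [1024, 768],
  [1280, 720], [1280, 960], [1600, 1200], [1920, 1080],
];

// Pick the largest candidate whose aspect ratio matches the camera's native
// frames (so nothing gets stretched) and whose pixel count fits the budget.
function pickCompromiseSize(
  nativeW: number,
  nativeH: number,
  maxPixels: number
): [number, number] | undefined {
  const nativeRatio = nativeW / nativeH;
  return COMMON_UVC_SIZES
    .filter(([w, h]) => Math.abs(w / h - nativeRatio) < 0.01)
    .filter(([w, h]) => w * h <= maxPixels)
    .sort((a, b) => b[0] * b[1] - a[0] * a[1])[0];
}

// e.g. a 16:9 camera with a ~0.5 Mpx bandwidth budget -> [640, 360]
console.log(pickCompromiseSize(1280, 720, 500_000));
```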