I am using SDK 2.3 and developing an Android application, working with the AS-15 and AS-20 cameras, that deals exclusively with liveview.
I am unable to obtain a liveview resolution higher than 640x360 px, while the camera specs mention 1920×1080/30p (HQ).
How can I get the full resolution?
Is this a limitation of the API? If so, why?
I've found that some (other) cameras implement get/setLiveviewSize, and for the "L" size the documentation says:
XGA size scale (the size varies depending on the camera models, and some camera models change the liveview quality instead of making the size larger.)
What are the models with the highest liveview resolution?
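For reference, on cameras that support liveview size selection, the Camera Remote API exposes it through JSON-RPC calls such as getSupportedLiveviewSize and startLiveviewWithSize. Below is a minimal Kotlin sketch; the endpoint address and the availability of the "L" size are assumptions to verify against your model's documentation:

import java.net.HttpURLConnection
import java.net.URL

// Hedged sketch: probe which liveview sizes the camera supports,
// then request the large ("L") stream. Method names follow the
// Camera Remote API docs; whether "L" is offered is model-dependent.
fun callCameraApi(endpoint: String, method: String, params: String): String {
    val body = """{"method":"$method","params":[$params],"id":1,"version":"1.0"}"""
    val conn = URL("$endpoint/sony/camera").openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Content-Type", "application/json")
    conn.outputStream.use { it.write(body.toByteArray()) }
    return conn.inputStream.bufferedReader().use { it.readText() }
}

fun main() {
    val endpoint = "http://192.168.122.1:8080" // assumed address; use device discovery in practice
    println(callCameraApi(endpoint, "getSupportedLiveviewSize", ""))   // e.g. {"result":[["L","M"]],...}
    println(callCameraApi(endpoint, "startLiveviewWithSize", "\"L\"")) // errors if "L" is unsupported
}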
I'm trying to implement a depth camera on Unreal Engine 5.1's default camera using a post-process volume (not the cinematic camera). The official tutorial describes a focal region, shown in black in the image below. It says it's possible to increase the size of this black focal region, but from what I can see of the public variables, that is only available for mobile depth of field or cinematic cameras. Does anybody know if it's possible?
https://docs.unrealengine.com/4.27/Images/RenderingAndGraphics/PostProcessEffects/DepthOfField/DOF_LayerImplementation1.webp
These are the only controls that I see available in the default camera: https://docs.unrealengine.com/5.0/Images/designing-visuals-rendering-and-graphics/post-process-effects/depth-of-field/DoFProperties.webp
I want to scan a text page while a call is in progress. What I do is take frames from the local video preview and send them to a server for processing.
Before the call starts, the preview quality and resolution are at their highest. But once the call starts, the capturer's resolution decreases. I can see that the onFrameResolutionChanged event is fired on the local renderer. I'm guessing that WebRTC is lowering the resolution because of connection bandwidth.
I don't want the local display resolution to change.
I have this issue with both the iOS and Android WebRTC libraries.
What can I do to prevent local camera preview resolution from decreasing?
I tried the videoSource.adaptOutputFormat function, but it only sets a maximum quality, and over time the preview still decreases.
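For reference, that call only caps the output format; it does not disable WebRTC's internal quality adaptation, which can still scale below the cap (a small Kotlin sketch, numbers are placeholders):

import org.webrtc.VideoSource

// Sets an upper bound of 1920x1080 at 30 fps on frames delivered by the
// source; WebRTC's adaptation may still scale the output below this bound.
fun capPreviewFormat(videoSource: VideoSource) {
    videoSource.adaptOutputFormat(1920, 1080, 30)
}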
Update:
What I was searching for was enableCpuOveruseDetection = false. It has to be set in the RTCConfiguration:
val config = PeerConnection.RTCConfiguration(servers)
config.enableCpuOveruseDetection = false
This works well for Android; the local preview quality no longer gets reduced.
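For context, a minimal sketch of how that flag fits into peer connection creation (the factory, ICE servers, and observer are assumed to exist already):

import org.webrtc.PeerConnection
import org.webrtc.PeerConnectionFactory

// Disable CPU overuse detection so WebRTC stops downscaling the
// capturer under load; everything else is standard connection setup.
fun createFixedResolutionConnection(
    factory: PeerConnectionFactory,
    servers: List<PeerConnection.IceServer>,
    observer: PeerConnection.Observer
): PeerConnection? {
    val config = PeerConnection.RTCConfiguration(servers)
    config.enableCpuOveruseDetection = false // keep local capture resolution fixed
    return factory.createPeerConnection(config, observer)
}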
But on iOS there is no enableCpuOveruseDetection in the RTCConfiguration class, so on iOS the problem remains.
While making iOS apps, we generally supply @1x, @2x, and @3x images, and based on my knowledge, in the case of Android there are roughly six different sizes (ldpi through xxxhdpi).
I have started working on react-native and came across this image issue.
My questions are: Do I need to provide images in all the different sizes (i.e. roughly 6-7 image sets combining iOS and Android), or just one image, with the rest handled internally? Will a single image look blurred on higher-resolution phones?
Thanks.
You still need to provide multiple images. According to the Images documentation, if you are using an image named check.png, you also have to include check@2x.png and check@3x.png.
Quoting:
The packager will bundle and serve the image corresponding to device's screen density. For example, check@2x.png will be used on an iPhone 7, while check@3x.png will be used on an iPhone 7 Plus or a Nexus 5. If there is no image matching the screen density, the closest best option will be selected.
I'm looking at Hangouts' dynamic resolution in the Google Hangouts WebRTC version.
How can video resolution be changed dynamically during a call?
[Situation]
- There were three users in the room.
- When switching the main speaker, that video's resolution changes (.videoWidth / .videoHeight).
I would like to know how this is implemented across many peer connections.
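For reference, with the org.webrtc Android API a mid-call, per-connection resolution change can be expressed by scaling a sender's encodings; a hedged Kotlin sketch (scaleResolutionDownBy availability depends on the libwebrtc build):

import org.webrtc.RtpSender

// Scales down the outgoing video of one peer connection, e.g. scale = 2.0
// halves width and height; takes effect without renegotiation.
fun setSendResolutionScale(sender: RtpSender, scale: Double) {
    val params = sender.parameters
    for (encoding in params.encodings) {
        encoding.scaleResolutionDownBy = scale
    }
    sender.setParameters(params)
}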
To change your resolution, you can use the Hangouts Toolstrip at the top center of the Hangouts interface to move the quality slider from Auto down to a lower resolution. But part of me thinks you might be asking about aspect ratio instead: different devices (webcam, mobile device camera, etc.) present different aspect ratios (16:9 or 4:3). Some webcams allow you to change the aspect ratio, but that depends on the software provided with the camera.
I hope that some part of this was helpful.
I am developing a cocos2d game and need to make it universal. The problem is that I want to use the minimum number of images to keep the universal binary as small as possible. Is there any way I can somehow use the same images for iPhone, Retina, and iPad? If yes, how can I do that, and what image size and quality should they be? Any suggestions?
Thanks and Best regards
As for suggestions: provide HD resolution images for Retina devices and iPad, and SD resolution images for non-Retina devices. Don't think about an all-in-one solution - there isn't one that's acceptable.
Don't upscale SD images to HD resolution on Retina devices or iPad. It won't look any better.
Don't downscale HD images for non-Retina devices. Your textures will still use 4x the memory on devices that have half or even a quarter of the memory available. In addition, downscaling images is bad for performance because it has to be done by the CPU on older devices. While you could downscale the image and save the downscaled texture, it adds a lot more complexity to your code and will increase the loading time.
There's not a single right answer to this question. One way to do it is to create images that are larger than you need and then scale them down. If the images don't have a lot of fine detail, that should work pretty well. As an example, this is the reason that you submit a 512x512 pixel image of your app icon along with your app to the App Store. Apple never displays the image at that size, but uses it to create a variety of smaller sizes for display in the App Store.
Another approach is to use vector images, which you can draw perfectly at any size that you need. Unfortunately, the only vector format that I can think of that's supported in iOS is PDF.