I'm trying to implement a depth-of-field effect on Unreal Engine 5.1's default camera using a post process volume (not the cinematic camera). The official tutorial describes a focal region, shown in black in the image below. It says it's possible to increase the size of this black focal region, but from what I can see of the public variables it's only available for mobile depth of field or cinematic cameras. Does anybody know if it's possible?
https://docs.unrealengine.com/4.27/Images/RenderingAndGraphics/PostProcessEffects/DepthOfField/DOF_LayerImplementation1.webp
These are the only controls that I see available on the default camera: https://docs.unrealengine.com/5.0/Images/designing-visuals-rendering-and-graphics/post-process-effects/depth-of-field/DoFProperties.webp
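In case it helps, here is a minimal, untested sketch of driving these settings from C++ on a post process volume. The property names are the ones I see in FPostProcessSettings (Engine/Scene.h); note that DepthOfFieldFocalRegion sits in the Mobile Depth of Field section there, which matches what you're seeing in the editor, so on the desktop path you're limited to focal-distance / F-stop style controls:

#include "Engine/PostProcessVolume.h"

// Untested sketch: push depth-of-field settings into a post process volume from C++.
// DepthOfFieldFocalRegion is listed under the Mobile Depth of Field section of
// FPostProcessSettings, so the desktop (cinematic) path only honours the focal
// distance / F-stop style controls below.
void ConfigureDepthOfField(APostProcessVolume* Volume)
{
    if (Volume == nullptr)
    {
        return;
    }

    FPostProcessSettings& PP = Volume->Settings;

    PP.bOverride_DepthOfFieldFocalDistance = true;
    PP.DepthOfFieldFocalDistance = 300.0f;      // distance to the focus plane, in cm

    PP.bOverride_DepthOfFieldFstop = true;
    PP.DepthOfFieldFstop = 8.0f;                // a larger F-stop keeps a deeper range acceptably sharp

    // Only honoured by the mobile (Gaussian) DOF path, as far as I can tell.
    PP.bOverride_DepthOfFieldFocalRegion = true;
    PP.DepthOfFieldFocalRegion = 200.0f;        // depth range kept fully in focus, in cm

    Volume->bUnbound = true;                    // apply everywhere, like a global volume
}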
The text recognition in ML Kit works well IF the orientation of the image is correct (not rotated 90 degrees or upside down). All common OCR engines have an auto-orientation function to automatically determine the orientation before performing the text recognition. I do not see anything in the documentation that states there's a flag that can be set to perform auto orientation. Does it exist in ML Kit?
I also see a TextRecognizerOptions class but no documentation on how to set the options. I assume there are options to be set here (like looking for "English", etc.). Where is the detailed documentation for this class?
ML Kit OCR supports Latin-script languages. We do not provide options to specify the languages of interest; the return type contains the detected languages.
For image rotation, the current OCR can only handle upright images, but ML Kit can handle the rotation for you given the image orientation information. For example, with a bitmap input you need to provide the image's rotation, and ML Kit will rotate the image for you before running detection.
I am using SDK 2.3 to develop an Android application for the AS-15 and AS-20 cameras that deals exclusively with liveview.
I am unable to obtain a liveview resolution higher than 640x360 px, while the camera specs mention 1920×1080/30p (HQ).
How can I get the full resolution?
Is this a limitation of the API? Why?
I've found that some (other) cameras implement get/setLiveviewSize, and for the "L" size the documentation says:
XGA size scale (the size varies depending on the camera models, and some camera models change the liveview quality instead of making the size larger.)
What are the models with the highest liveview resolution?
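Not an authoritative answer, but you can at least ask the camera which liveview sizes it exposes. Below is a rough, untested sketch using libcurl and the JSON-RPC format of the Camera Remote API; the endpoint URL is a placeholder (take the real one from the device-description XML returned by discovery), and you should check the API reference to confirm your model lists getAvailableLiveviewSize and startLiveviewWithSize:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Collect the HTTP response body into a std::string.
static size_t CollectBody(char* data, size_t size, size_t nmemb, void* userdata)
{
    static_cast<std::string*>(userdata)->append(data, size * nmemb);
    return size * nmemb;
}

// Send one JSON-RPC call to the camera's /sony/camera endpoint and return the raw response.
static std::string CallCamera(const std::string& url, const std::string& jsonBody)
{
    std::string response;
    CURL* curl = curl_easy_init();
    if (curl == nullptr)
        return response;

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, jsonBody.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, CollectBody);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return response;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    // Placeholder endpoint; use the service URL from the device-description XML.
    const std::string url = "http://192.168.122.1:8080/sony/camera";

    // Ask which liveview sizes this model supports ("M" = VGA class, "L" = XGA class).
    std::cout << CallCamera(url,
        R"({"method":"getAvailableLiveviewSize","params":[],"id":1,"version":"1.0"})") << "\n";

    // Request the larger size; cameras that don't support "L" will return an error here.
    std::cout << CallCamera(url,
        R"({"method":"startLiveviewWithSize","params":["L"],"id":2,"version":"1.0"})") << "\n";

    curl_global_cleanup();
    return 0;
}

If the camera only reports "M" here, the 640x360 stream is simply all the model provides over liveview, regardless of what it records internally.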
I am a bit confused about the difference between the active IR image and the depth image of the Kinect v2. Can anyone tell me what special features the active IR image has compared to the depth image?
In the depth image, the value of a pixel relates to the distance from the camera as measured by time of flight. For the active infrared image, the value of a pixel is determined by the amount of infrared light reflected back to the camera.
Side note:
I think there is only one sensor that produces both of these. The Kinect uses the reflected IR to calculate time of flight, but it also makes that IR intensity available as an image.
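To make the "one sensor, two interpretations" point concrete, here is a rough, untested sketch against the Kinect for Windows SDK 2.0 (C++). It reads one depth frame and one infrared frame from the same 512x424 time-of-flight sensor; both buffers are 16-bit, but the depth values are distances in millimetres while the IR values are reflected-intensity readings:

#include <windows.h>
#include <Kinect.h>
#include <iostream>

// Rough sketch (minimal error handling): grab one depth frame and one IR frame
// from the same Kinect v2 sensor and print the value at the centre pixel of each.
int main()
{
    IKinectSensor* sensor = nullptr;
    if (FAILED(GetDefaultKinectSensor(&sensor)) || sensor == nullptr)
        return 1;
    sensor->Open();

    IDepthFrameSource* depthSource = nullptr;
    IDepthFrameReader* depthReader = nullptr;
    sensor->get_DepthFrameSource(&depthSource);
    depthSource->OpenReader(&depthReader);

    IInfraredFrameSource* irSource = nullptr;
    IInfraredFrameReader* irReader = nullptr;
    sensor->get_InfraredFrameSource(&irSource);
    irSource->OpenReader(&irReader);

    // Poll until the sensor has delivered a frame of each kind.
    IDepthFrame* depthFrame = nullptr;
    while (FAILED(depthReader->AcquireLatestFrame(&depthFrame)))
        Sleep(15);
    IInfraredFrame* irFrame = nullptr;
    while (FAILED(irReader->AcquireLatestFrame(&irFrame)))
        Sleep(15);

    UINT count = 0;
    UINT16* depthBuffer = nullptr;      // each value is a distance in mm
    depthFrame->AccessUnderlyingBuffer(&count, &depthBuffer);
    std::cout << "centre pixel depth: " << depthBuffer[count / 2] << " mm\n";

    UINT16* irBuffer = nullptr;         // each value is reflected IR intensity
    irFrame->AccessUnderlyingBuffer(&count, &irBuffer);
    std::cout << "centre pixel IR intensity: " << irBuffer[count / 2] << "\n";

    depthFrame->Release();
    irFrame->Release();
    sensor->Close();
    return 0;
}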
I'm using OpenNI SDK v1 and attempting to store the alignment between rgb and depth data.
In NiViewer, I enable the overlay mode with registration turned on like so:
// sets the depth image output to the vantage point of the rgb image
g_Depth.GetAlternativeViewPointCap().SetViewPoint(g_Image);
I understand that this would give me a 1:1 pixel mapping between rgb and depth if both were recorded at the same resolution.
However, for my application, I need rgb to be at 1280x1024 (high res) and depth to be at 640x480.
I'm not sure how the mapping from depth pixels to RGB pixels works in this mode.
I had the same problem. By following the advice here, I was able to get it working as desired. It's a bit hacky, but basically you:
Get the 1280x1024 image from OpenNI.
Cut off the bottom to make it 1280x960.
Scale the depth image to 1280x960.
Then they should line up. It's working for me.
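For reference, here is a rough, untested sketch of the whole pipeline with OpenNI 1.x: 1280x1024 RGB, 640x480 depth registered to the image viewpoint (the same SetViewPoint call as above), then the crop to 1280x960 and the 2x nearest-neighbour upscale of the depth map described in the steps above. The resolutions and FPS values are assumptions; adjust them to what your device actually supports:

#include <XnCppWrapper.h>
#include <vector>

// Rough sketch: high-res RGB plus low-res registered depth, then align the two by
// cropping the RGB image to 1280x960 and upscaling the depth map to the same size.
int main()
{
    xn::Context context;
    context.Init();

    xn::ImageGenerator image;
    image.Create(context);
    XnMapOutputMode imageMode = { 1280, 1024, 15 };   // nXRes, nYRes, nFPS (device dependent)
    image.SetMapOutputMode(imageMode);

    xn::DepthGenerator depth;
    depth.Create(context);
    XnMapOutputMode depthMode = { 640, 480, 30 };
    depth.SetMapOutputMode(depthMode);

    // Register the depth output to the RGB viewpoint (same call as in NiViewer's overlay mode).
    depth.GetAlternativeViewPointCap().SetViewPoint(image);

    context.StartGeneratingAll();
    context.WaitAndUpdateAll();

    const XnDepthPixel* depthMap = depth.GetDepthMap();       // 640x480 depth values
    const XnRGB24Pixel* rgbMap = image.GetRGB24ImageMap();    // 1280x1024 RGB pixels

    // 1) Crop the RGB image: keep the top 1280x960, i.e. drop the bottom 64 rows.
    std::vector<XnRGB24Pixel> rgbCropped(rgbMap, rgbMap + 1280 * 960);

    // 2) Upscale the depth map 640x480 -> 1280x960 with nearest-neighbour (factor of 2).
    std::vector<XnDepthPixel> depthScaled(1280 * 960);
    for (int y = 0; y < 960; ++y)
        for (int x = 0; x < 1280; ++x)
            depthScaled[y * 1280 + x] = depthMap[(y / 2) * 640 + (x / 2)];

    // rgbCropped[i] and depthScaled[i] should now refer to (roughly) the same scene point.
    // Cleanup omitted in this sketch.
    return 0;
}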
The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect depth sensor but texture-map it with an image taken from another, external camera.
Could anyone please guide me on how to do that?
See the Kinect Calibration Toolbox, v2.0 http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
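Once the toolbox has given you the external camera's intrinsics (fx, fy, cx, cy) and its pose relative to the depth camera (R, t), the texture-mapping step itself is just a pinhole projection: back-project each depth pixel to 3D, transform it into the external camera's frame, project it with that camera's intrinsics, and sample the colour there. A self-contained sketch of that step (all calibration numbers below are placeholders, not real values):

#include <array>
#include <cmath>

// Pinhole intrinsics of a camera: focal lengths and principal point, in pixels.
struct Intrinsics { double fx, fy, cx, cy; };

// Back-project one depth pixel (u, v, depth in metres) into a 3D point in the depth camera frame.
static std::array<double, 3> BackProject(const Intrinsics& depthCam, int u, int v, double depthMetres)
{
    return { (u - depthCam.cx) / depthCam.fx * depthMetres,
             (v - depthCam.cy) / depthCam.fy * depthMetres,
             depthMetres };
}

// Transform a point from the depth camera frame into the external RGB camera frame,
// using the row-major 3x3 rotation R and translation t from the calibration.
static std::array<double, 3> ToExternalFrame(const std::array<double, 9>& R,
                                             const std::array<double, 3>& t,
                                             const std::array<double, 3>& p)
{
    return { R[0] * p[0] + R[1] * p[1] + R[2] * p[2] + t[0],
             R[3] * p[0] + R[4] * p[1] + R[5] * p[2] + t[1],
             R[6] * p[0] + R[7] * p[1] + R[8] * p[2] + t[2] };
}

// Project a 3D point with the external camera's intrinsics to find which RGB pixel colours it.
static bool ProjectToPixel(const Intrinsics& rgbCam, const std::array<double, 3>& p,
                           int width, int height, int& u, int& v)
{
    if (p[2] <= 0.0)
        return false;                                  // point is behind the external camera
    u = static_cast<int>(std::lround(rgbCam.fx * p[0] / p[2] + rgbCam.cx));
    v = static_cast<int>(std::lround(rgbCam.fy * p[1] / p[2] + rgbCam.cy));
    return u >= 0 && u < width && v >= 0 && v < height;
}

int main()
{
    // Placeholder calibration; use the values the toolbox reports for your rig.
    const Intrinsics depthCam{ 580.0, 580.0, 320.0, 240.0 };
    const Intrinsics rgbCam{ 1050.0, 1050.0, 960.0, 540.0 };
    const std::array<double, 9> R{ 1, 0, 0, 0, 1, 0, 0, 0, 1 };
    const std::array<double, 3> t{ 0.05, 0.0, 0.0 };   // e.g. a 5 cm baseline

    // One Kinect depth pixel at (320, 240), 1.2 m from the sensor.
    const std::array<double, 3> p = BackProject(depthCam, 320, 240, 1.2);
    const std::array<double, 3> q = ToExternalFrame(R, t, p);

    int u = 0, v = 0;
    if (ProjectToPixel(rgbCam, q, 1920, 1080, u, v))
    {
        // Sample the external 1920x1080 image at (u, v) and attach that colour to the point.
    }
    return 0;
}

Run this per depth pixel to build the coloured point cloud; points that fall outside the external image, or are occluded from its viewpoint, simply keep no colour.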