Obtain Tango device main camera image in Unity

In my project I am capturing a point cloud, and based on it I create a binary mask that I want to apply to the image captured from the Tango tablet's main camera, so that the final image is a cutout of the detected point-cloud points.
I tried to use the Tango examples for getting the camera image so that further post-processing can be done, but I had no success. (My temporary hack is rendering a camera to a render texture and then applying a masking shader to the result.)
What is the easiest way to obtain the Tango main camera image in Unity?

Take a look at the VideoOverlayProvider.cs file; there are two static SetCallback(...) methods that sound like what you are looking for:
/// Connect a callback to a camera for access to the pixels.
///
/// This is not recommended for display but for applications requiring access to the
/// <code>HAL_PIXEL_FORMAT_YV12</code> pixel data. The camera is selected via TangoCameraId. Currently only
/// <code>TANGO_CAMERA_COLOR</code> and <code>TANGO_CAMERA_FISHEYE</code> are supported.
///
/// The <i>onImageAvailable</i> callback will be called when a new frame is available from the camera. The
/// Enable Video Overlay option must be enabled for this to succeed.
///
/// Note: The first scan-line of the color image is reserved for metadata instead of image pixels.
You can get access to both the TANGO_CAMERA_COLOR and the TANGO_CAMERA_FISHEYE camera frames.
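For reference, here is a minimal sketch of the usual pattern from the Tango Unity examples: instead of calling SetCallback directly, a behaviour implements the ITangoVideoOverlay interface and registers itself with the TangoApplication, which wires the callback up internally. The names follow the Tango Unity SDK examples and may differ between SDK versions:
using UnityEngine;
using Tango;

public class ColorCameraListener : MonoBehaviour, ITangoVideoOverlay
{
    private TangoApplication m_tangoApplication;

    void Start()
    {
        // "Enable Video Overlay" must be checked on the TangoApplication prefab.
        m_tangoApplication = FindObjectOfType<TangoApplication>();
        m_tangoApplication.Register(this);
    }

    public void OnTangoImageAvailableEventHandler(TangoEnums.TangoCameraId cameraId,
                                                  TangoUnityImageData imageBuffer)
    {
        if (cameraId != TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR)
        {
            return;
        }

        // imageBuffer.data holds the YV12 pixel data (luma plane first, then chroma).
        // Remember that the first scan-line is metadata, not image pixels.
        // This is where you would apply your point-cloud mask, e.g. by copying
        // the masked luma plane into a Texture2D.
        Debug.Log(string.Format("Color frame: {0}x{1}", imageBuffer.width, imageBuffer.height));
    }
}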

Related

Control Depth of field focal distance on the default camera postprocessing?

I'm trying to implement a depth camera on Unreal Engine 5.1's default camera using a post-process volume (not the cinematic camera). The official tutorial describes a focal region, shown in black in the image below. They say it's possible to increase the size of this black focal region, but from what I can see of the public variables, it's only available for mobile depth of field or cinematic cameras. Does anybody know if it's possible?
https://docs.unrealengine.com/4.27/Images/RenderingAndGraphics/PostProcessEffects/DepthOfField/DOF_LayerImplementation1.webp
These are the only controls I see available on the default camera: https://docs.unrealengine.com/5.0/Images/designing-visuals-rendering-and-graphics/post-process-effects/depth-of-field/DoFProperties.webp

Camera Stacking in AR Game to Apply Overlay

I'm trying to create an AR game (similar to an FPS), which requires camera stacking to allow for an AR base camera as well as an overlay camera with overlays such as health points, bullet count, etc. However, upon creating an empty GameObject and adding a camera component, I realise that I do not see the Render Type option in the Inspector pane.
What went wrong? Or how should I go about creating an AR game with a live AR camera feed and an overlay displaying health points, etc.?
Thanks and cheers!

Hide an object for a specific camera

I use Godot to create my 3D game. I ran into a problem while creating portals using camera viewport rendering to a texture: the camera captures unnecessary objects that are behind the portal. I partially solved this by setting the camera's "near" parameter to the distance from the camera to the portal, but then the part behind the portal began to be cut off.
The question is, is it possible to hide objects for a particular camera so that other cameras can see them? Perhaps there is another way to do this, for example by creating a static clipping plane?
Proximity Fade
Probably not what you are looking for, but I'll mention it for completeness' sake.
The default material has proximity fade and distance fade, which you can use to make the material disappear when it is too close to or too distant from the camera, respectively.
It is important to note that this is not a cull plane, and that the fading is gradual.
Thus, using proximity fade you can make objects near the camera appear semitransparent.
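As a quick sketch (Godot 3.x, assuming a SpatialMaterial; property names from the 3.x docs):
var material = SpatialMaterial.new()
material.proximity_fade_enable = true
material.proximity_fade_distance = 1.0  # start fading within 1 unit of the camera
material.distance_fade_mode = SpatialMaterial.DISTANCE_FADE_PIXEL_ALPHA
material.distance_fade_min_distance = 10.0  # fully visible up to here...
material.distance_fade_max_distance = 15.0  # ...fully faded beyond here
$MeshInstance.material_override = material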
Using Visibility layers and cull mask
is it possible to hide objects for a particular camera so that other cameras can see them?
Every VisualInstance (you know, all things that are visible in 3D) has layers. And every Camera has a cull_mask. If the cull_mask of the Camera does not include any of the layers of a VisualInstance, then the Camera does not see that VisualInstance.
A VisualInstance with no layers will not show on any Camera, even if the Camera has all the layers in its cull_mask (which is the default).
You can either edit the cull_mask of the camera to not include the layers of the VisualInstance, or edit the layers of the VisualInstance, or both.
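For example (Godot 3.x; the node names here are illustrative):
# Put the object on layer 2 only (layer bits are 0-indexed in code,
# but the editor UI shows them 1-indexed).
$SecretMesh.layers = 1 << 1
# Hide layer 2 from the portal camera; other cameras keep their default cull_mask.
$PortalViewport/Camera.set_cull_mask_bit(1, false)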
Using a custom shader cull plane
Perhaps there is another way to do this, for example by creating a static clipping plane?
You can use a custom spatial shader to cut things out based on a plane.
You need to define the plane as uniforms. For this answer I'll use the point-normal definition of a plane:
n · (r − r₀) = 0
That is, a point is on the plane when:
dot(plane_normal, world_position - plane_point) == 0.0
Thus, we define plane_normal and plane_point uniforms:
uniform vec3 plane_normal;
uniform vec3 plane_point;
The plane_normal gives us the orientation of the plane, while the plane_point is a point on the plane which allows us to position it.
And then use this logic:
vec3 world_position = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
ALPHA = clamp(sign(dot(plane_normal, world_position - plane_point)), 0.0, 1.0);
Here we convert the coordinates of the current fragment to world space, then use the definition of the plane to find which side the point is on (using sign), and set ALPHA based on that, so that everything on one side of the plane becomes invisible.
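Put together, a minimal spatial shader along these lines (Godot 3.x; the uniform defaults are just placeholders):
shader_type spatial;

uniform vec3 plane_normal = vec3(0.0, 1.0, 0.0);
uniform vec3 plane_point = vec3(0.0, 0.0, 0.0);

void fragment() {
    // VERTEX is in view space inside fragment(); CAMERA_MATRIX maps view space to world space.
    vec3 world_position = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
    // Fragments on the back side of the plane get ALPHA = 0.0 and disappear.
    ALPHA = clamp(sign(dot(plane_normal, world_position - plane_point)), 0.0, 1.0);
}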
Note: This is not the only way to define the plane. Another popular definition is a 4D vector, where the xyz are the normal and the w is the distance from the plane to the origin.
Sadly, I don't think there is a way to make this work with multiple material passes, because ALPHA controls the blending of the passes and will not result in transparency. And no, using discard; does not solve it either, because the other passes can write the fragment regardless. Thus, you are going to need to modify your materials to include this logic.
Sadder still, Godot 3.x does not support global uniforms (see Godot 4.0 gets global and per-instance shader uniforms), which means you will have to set these parameters everywhere you need them.
Using Constructive Solid Geometry (CSG)
Add a CSGCombiner and build the geometry that needs to disappear out of other CSG nodes as its children.
Then you can, for example, add a CSGSphere with its operation set to "Subtraction" and move it with the Camera (for this purpose, I suggest adding a RemoteTransform node as a child of the Camera and setting its remote path to the CSGSphere).
Of course, it does not have to be a CSGSphere, you can use any CSG nodes for this purpose. For the portal, I imagine you could use a CSGBox and align it to the portal plane.
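A sketch of that setup in GDScript (Godot 3.x; the node names are made up for illustration):
# Scene layout (illustrative):
#   Root
#   ├── Camera
#   │   └── RemoteTransform          (follows the camera, drives the cutout)
#   └── CSGCombiner
#       ├── CSGMesh                  (the geometry that should be carved)
#       └── CutoutSphere (CSGSphere)
func _ready():
    $CSGCombiner/CutoutSphere.operation = CSGShape.OPERATION_SUBTRACTION
    $Camera/RemoteTransform.remote_path = $CSGCombiner/CutoutSphere.get_path()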
Note: As of Godot 3.3, CSG nodes do not support baking lights. This is a regression; see: Unable to bake lightmap with CSG due to the lack of ability to generate UV2 for CSG nodes.
Portals, actually
Bartleby Lawnjelly has a portal (godot-lportal) module for Godot 3.x.
Being a module, it requires building Godot from source. See Compiling in the official Godot documentation. It is not that bad, I promise. Or use a build from godot-titan.
I have to explain that these portals are not portals in the sense of Valve's Portal video game series… The module lets you define areas as "rooms" and planes as "portals" that connect those rooms, in such a way that you can look from one into the other. The purpose of this is to cull entire rooms unless you are looking through one of the portals.
Hopefully that makes more sense with a video. This is a somewhat old one, but good for getting the idea across: Portal rendering module in Godot 3.2 - Improved performance. See the shadow popping in the video? Bartleby Lawnjelly also has a custom lightmapper.

How to implement live video streaming within a-frame?

I want to implement live 360° video using a Theta S camera. I have already implemented showing images in 360° using three.js, and live audio streaming using WebRTC. I found a-frame implementations for images and video, but I couldn't find anything on how to implement a live 360° stream from my camera input. My question is how to get the input from my Theta S camera and show it in a 360° perspective.
My idea is to get the video from my camera input, map it onto a sphere object, and show it full screen.

Clear preview window in Media Foundation

Is it possible to clear a preview window after the camera preview is done? I am using MFCaptureEngine and calling m_pPreview->SetRenderHandle(m_hwnd) to render the video. But when I stop the video I am not able to draw on the window; the last frame from the camera remains. I need to fill the window with a black brush and draw some text, but the image from the camera cannot be overdrawn.
It is not clear from your question what MFCaptureManager is, but from the code SetRenderHandle(m_hwnd) I see that you use IMFCapturePreviewSink::SetRenderHandle. I faced a similar problem some time ago; it is related to the difference between the old window system that existed up to WinXP and the current window system from Vista onward. The code hands the window over to the renderer by calling IMFCapturePreviewSink::SetRenderHandle - for IMFCapturePreviewSink that renderer is DirectX11 - and DirectX11 takes FULL access to the window, which is switched to the current window system. As a result, any call to fill the window with a black brush and draw some text via the old Windows API from the Win95-XP generation does nothing, because the window handle's context is LOCKED by DirectX11.
There are three ways to resolve this problem:
Write a new UI with the Microsoft DirectComposition API, which is based on DirectX11, and set it via IMFCapturePreviewSink::SetRenderSurface.
Create an EVR Media Sink with MFCreateVideoRenderer - it creates a DirectX9 video renderer, which is compatible with the old Windows API from the Win95-XP generation - and set this IMFMediaSink via IMFCapturePreviewSink::SetCustomSink.
Write your own DirectX9-based video renderer (for example, MFCaptureD3D/device.cpp) and draw the raw IMFSamples from the IMFCapturePreviewSink::SetSampleCallback callback.
Regards.
I've implemented it this way:
// Sink: get the preview sink from the capture engine (error checking omitted).
CComPtr<IMFCaptureSink> pSink;
m_pEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PREVIEW, &pSink);

// Create the EVR (DirectX9-based) video renderer...
CComPtr<IMFMediaSink> pCustomSink;
::MFCreateVideoRenderer(IID_IMFMediaSink, (void**)&pCustomSink);

// ...and install it as the preview sink's custom sink.
CComPtr<IMFCapturePreviewSink> pPreviewSink;
pSink.QueryInterface(&pPreviewSink);
pPreviewSink->SetCustomSink(pCustomSink);

// Preview: hand the target window to the renderer.
pSink.QueryInterface(&m_pPreview); // or pPreviewSink.QueryInterface(&m_pPreview)
m_pPreview->SetRenderHandle(m_hwndPreview);
But the behaviour is still the same (the screen cannot be redrawn after the preview is stopped).