RGBDToolkit calibrate correspondence cannot display the live view from my Canon digital camera (Kinect)

Hi, I am trying to calibrate my depth camera with my digital camera, but I have run into some problems using RGBDKinectCapture:
1. Why does calibrate correspondence not show the RGB view from my digital camera (Canon EOS 600D)? I have set it to live view mode.
2. The example application (downloaded from RGBToolKit.com) does not recognize my Kinect (model 1414) on Windows 7, and I cannot build the source on Windows; it fails with an 'enable_if' error (referring to std::tr1::enable_if).
3. How does RGBDToolkit take a live preview from (any) HD camera? Does it work without the camera vendor's SDK? Which method and which class is it based on? I want to debug it.

I'm sorry, I see it now...
1. RGBDToolkit does not display a live preview from the HD camera; it only loads video recorded by it.
2. I cannot run the example on Windows because it was built against a different Kinect SDK (the source includes that SDK).

Related

How to send a texture with Agora Video SDK for Unity

I'm using the package Agora Video SDK for Unity and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to set it to the mTexture you find in the second link. It seems to work on my computer, but the frame never seems to arrive at the target computer.
How can I send a texture properly?
Thanks
Did you copy every line of the code from the example as-is? You may not want the ReadPixels part, since that reads the screen. You could instead read the raw data from your input texture and send it with PushVideoFrame on every update.
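To make that concrete, here is a minimal sketch of the push side. It uses Agora's native C++ interface (the Unity package wraps the same engine), and the header names, pixel format, and buffer handling are assumptions based on the 3.x native SDK headers; in Unity itself you would fill the buffer from Texture2D.GetRawTextureData rather than ReadPixels:

#include <IAgoraRtcEngine.h>
#include <IAgoraMediaEngine.h>
#include <vector>

// Sketch: push one frame of raw RGBA pixels as an external video source.
// Assumes setExternalVideoSource(true, false) was called before joining the channel.
void pushTextureFrame(agora::rtc::IRtcEngine* engine,
                      std::vector<unsigned char>& rgba, // the texture's own bytes
                      int width, int height, long long timestampMs)
{
    agora::media::IMediaEngine* media = nullptr;
    engine->queryInterface(agora::rtc::AGORA_IID_MEDIA_ENGINE,
                           reinterpret_cast<void**>(&media));
    if (media == nullptr) return;

    agora::media::ExternalVideoFrame frame;
    frame.type = agora::media::ExternalVideoFrame::VIDEO_BUFFER_RAW_DATA;
    frame.format = agora::media::ExternalVideoFrame::VIDEO_PIXEL_RGBA;
    frame.buffer = rgba.data();
    frame.stride = width;          // stride in pixels for raw RGBA buffers
    frame.height = height;
    frame.timestamp = timestampMs; // increase monotonically, once per update
    media->pushVideoFrame(&frame);
}

The point mirrors the comment above: feed the engine the texture's raw bytes on every update instead of whatever ReadPixels scraped off the screen.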

How do we get Qt to render to memory rather than a device?

I have an application that uses Qt 5.6 for various purposes and that runs on an embedded device. Currently I have it rendering via eglfs to a Linux frame buffer on an attached display but I also want to be able to grab the data and send it to a single-color LED display unit (a device will either have that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for 1 bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject that into the Qt display "stack", but I cannot find any good examples on how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method used to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?
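Not from the original thread, but a minimal sketch of the usual approach to that last step, assuming the UI lives in a QGraphicsScene: render it into an offscreen 1 bpp QImage via QPainter and hand the packed bits to the LED unit's driver (the function name here is illustrative):

#include <QGraphicsScene>
#include <QImage>
#include <QPainter>

// Sketch: capture a QGraphicsScene into an offscreen monochrome buffer.
QImage grabSceneMono(QGraphicsScene& scene, int width, int height)
{
    QImage image(width, height, QImage::Format_Mono); // 1 bit/pixel
    image.fill(0);
    QPainter painter(&image);
    painter.setRenderHint(QPainter::Antialiasing, false);     // no grey-scale edges
    painter.setRenderHint(QPainter::TextAntialiasing, false); // keep the 1bpp font crisp
    scene.render(&painter); // draws the whole scene into the image
    painter.end();          // after this the buffer is complete and consistent
    return image;           // image.constBits()/bytesPerLine() give the packed rows
}

Because the buffer is only read after painter.end(), you always get a complete frame, which sidesteps the tearing concern; connecting QGraphicsScene::changed to a slot that calls this gives you an update-driven transfer.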

Kinect 2 shows black screen while capturing Infrared Basics

I am trying to use Kinect 2 and SDK v2 for capturing Infrared Images/videos.
Kinect shows depth and RGB images properly, but when I try to run the Infrared Basics sample from Kinect for Windows, it does not show any image, just a black screen.
What could be the reason? I reinstalled SDK v2, but the problem remains. In a similar post someone suggested reinstalling a newer version, which I did, but it made no difference. Can anyone suggest a solution?
thanks
It is better to use KinectConfigurationVerifierSetup to test the system requirements. I also suggest you use the Infrared Basics-WPF sample from the SDK Browser; you can also take that sample code and install it on your computer. If the infrared data source still does not show anything, test the Kinect on another computer.
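If you want to step through the same data path yourself rather than run the WPF sample, here is a minimal sketch using the Kinect for Windows SDK v2 native (C++) API; the 16-bit-to-8-bit normalization mirrors what the samples do for display:

#include <Kinect.h>
#include <cstdio>

// Sketch: open the infrared stream and inspect one frame (error handling abbreviated).
int main()
{
    IKinectSensor* sensor = nullptr;
    if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open())) return 1;

    IInfraredFrameSource* source = nullptr;
    IInfraredFrameReader* reader = nullptr;
    sensor->get_InfraredFrameSource(&source);
    source->OpenReader(&reader);

    IInfraredFrame* frame = nullptr;
    while (FAILED(reader->AcquireLatestFrame(&frame))) {
        // keep polling; the reader returns a failure code until a frame is ready
    }

    UINT capacity = 0;
    UINT16* buffer = nullptr;
    frame->AccessUnderlyingBuffer(&capacity, &buffer);
    // Map the 16-bit IR intensity to a displayable 8-bit grey value. If these values
    // are all zero while depth/RGB work, the driver/GPU issues mentioned in this
    // thread are the likely cause rather than the capture code.
    printf("first IR pixel as 8-bit grey: %u\n", buffer[0] >> 8);

    frame->Release();
    reader->Release();
    source->Release();
    sensor->Close();
    sensor->Release();
    return 0;
}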
I fixed my problem by updating the GPU driver. There was a conflict/bug with the older version, which Nvidia has since removed; once you install the new driver, it starts showing infrared images.
Pay attention to your graphics card settings; switching the computer to auto or to the Intel HD Graphics may also work.

Liveview on Android/QX1 Sony Camera API 2.01 fails

Using the supplied Android demo from
https://developer.sony.com/downloads/all/sony-camera-remote-api-beta-sdk/
Connected to the Wi-Fi connection of a Sony QX1. The sample application finds the camera device and is able to connect to it.
The liveview is not displaying correctly. At most one frame is shown, and then the code hits an exception in SimpleLiveViewSlicer.java:
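// A liveview packet begins with a common header whose first byte is the fixed
// start marker 0xFF; anything else means the slicer has lost packet sync.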
if (commonHeader[0] != (byte) 0xFF) {
    throw new IOException("Unexpected data format. (Start byte)");
}
Shooting a photo does not seem to work either. Zooming does work - the lens is moving. The camera works fine when using the PlayMemories app directly, so it is not a hardware issue.
Hoping for advice from Sony on this one - standard hardware and the demo application should work.
Can you provide some details of your setup?
What version of Android SDK are you compiling with?
What IDE and OS are you using?
Have you installed the latest firmware? (http://www.sony.co.uk/support/en/product/ILCE-QX1#SoftwareAndDownloads)
Edit:
We tested the sample code using a QX1 lens and the same setup as you and were able to run the sample code just fine.
One thing to check is whether the liveview is ready to transfer images. To confirm whether the camera is ready to transfer liveview images, the client can check the “liveviewStatus” status of the “getEvent” API (see the API specification for details). Perhaps there is some timing issue due to connection speed that is causing the crash.
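If you want to check that from code, the Camera Remote API is plain JSON-RPC over HTTP, so the probe is a single POST; below is a C++/libcurl sketch. The endpoint URL is an assumption for illustration; use the action-list URL your application obtained from SSDP device discovery:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append the HTTP response body to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out)
{
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    // Assumed endpoint; substitute the URL discovered via SSDP for your camera.
    const std::string url = "http://192.168.122.1:8080/sony/camera";
    // params:[false] asks for the current state immediately instead of long-polling.
    const std::string body =
        "{\"method\":\"getEvent\",\"params\":[false],\"id\":1,\"version\":\"1.0\"}";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    std::string response;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    CURLcode rc = curl_easy_perform(curl);

    // The result array includes {"type":"liveviewStatus","liveviewStatus":true|false};
    // only start slicing liveview packets once the flag is true.
    if (rc == CURLE_OK) std::cout << response << std::endl;

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}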

DSC-HX400 RAW image data & Movie Recording

I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera, there are a couple of items that are not exposed via the API, and that has frustrated my efforts.
1) The camera does not seem to offer an option, via the API or the camera itself, to capture images in RAW format. It does offer standard and fine JPEG formats, but both of those leave artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it out to the card. If getting the RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still-shoot and movie mode, but the API only exposes the mode that I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) from the API, yet I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API does allow me to transition back to still mode (by cancelling the recording). Are there plans to support triggering a movie recording via the API from still capture mode (seeing that the firmware already supports this functionality)?
Answers to your questions:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images, but I can only comment with regard to the API, as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
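For reference, the call is a single JSON-RPC POST to the same camera endpoint used by the other Camera Remote API methods; the payload below follows the published API reference (passing "still" switches back):

{"method":"setShootMode","params":["movie"],"id":1,"version":"1.0"}

A success response looks like {"result":[0],"id":1}; once in movie mode, startMovieRec begins the recording.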