USB camera (not RPi camera) isn't working on Android Things Preview 2 - camera

There are samples for the RPi camera module, but none of them help here.
I have added the camera permission in the manifest file, but the output still shows "No cameras found". Why?
02-22 19:24:42.936 2134-2134/com.example.androidthings.doorbell
D/DoorbellActivity: Doorbell Activity created.
02-22 19:24:43.131 2134-2134/com.example.androidthings.doorbell
I/CameraManagerGlobal: Connecting to camera service
02-22 19:24:43.135 2134-2134/com.example.androidthings.doorbell
D/DoorbellCamera: No cameras found
02-22 19:24:43.135 2134-2134/com.example.androidthings.doorbell
W/DoorbellCamera: Cannot capture image. Camera not initialized.

If you look at the Hardware Support Matrix on the Developer Kits page, note that camera support is only provided over the CSI-2 interface, and not USB. The only media interface supported over USB in the current preview is audio record/playback.

Related

How to determine which cameras are front and back facing when using HTML5 getUserMedia and enumerateDevices APIs?

When accessing the camera using HTML5 getUserMedia APIs, you can either:
Request an unspecified "user" facing camera
Request an unspecified "environment" facing camera (optionally left or right)
Request a list of cameras available
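The three request styles above map onto standard getUserMedia constraint syntax. A minimal sketch (the exact camera the browser picks for a given facing mode is still up to the browser):

```javascript
// 1. Request an unspecified "user" facing camera:
const userCam = { video: { facingMode: 'user' } };

// 2. Request an "environment" facing camera; the spec also allows
//    'left' and 'right' as facingMode values:
const envCam = { video: { facingMode: { exact: 'environment' } } };

// 3. Request a list of available cameras (browser only):
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(userCam).then(function (stream) {
    // use stream.getVideoTracks()[0] ...
  });
  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    console.log(devices.filter(function (d) { return d.kind === 'videoinput'; }));
  });
}
```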
Originally we used the "facing" constraint to choose the camera. If the camera faces the "user" we show it mirrored, as is the convention.
We run into problems, however, when a user does not have exactly 1 user-facing and 1 environment-facing camera. They might be missing one of these, or have multiple. This can result in the wrong camera being used, or the camera not being mirrored appropriately.
So we are looking at enumerating the devices. However, I have not found a way to determine whether a video device is "user facing" and should be mirrored.
Is there any API available to determine whether a video input is "user"-facing in these APIs?
When you enumerate devices, input devices may have a getCapabilities() method. If this method is available you can call it to get a MediaTrackCapabilities object, which has a facingMode field listing the valid facingMode options for that device.
For me this was empty on the PC but on my Android device it populated correctly.
Here's a jsfiddle you can use to check this on your own devices: https://jsfiddle.net/juk61c07/4/
Thanks to the comment from O. Jones for setting me in the right direction.
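A minimal sketch of the approach: enumerate video inputs and, where getCapabilities() is available, inspect capabilities.facingMode. The helper is kept as a plain function since, as noted above, getCapabilities() may be absent or empty on some platforms:

```javascript
// Returns true when a device's capabilities report a "user" facing mode.
function isUserFacing(capabilities) {
  return Array.isArray(capabilities && capabilities.facingMode) &&
         capabilities.facingMode.includes('user');
}

// Browser-only wiring (guarded so the helper stays usable elsewhere).
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.enumerateDevices().then(function (devices) {
    devices
      .filter(function (d) { return d.kind === 'videoinput'; })
      .forEach(function (d) {
        // getCapabilities() may be missing or return an empty object.
        var caps = typeof d.getCapabilities === 'function' ? d.getCapabilities() : {};
        console.log(d.label, 'user-facing (mirror it):', isUserFacing(caps));
      });
  });
}
```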

Camera Remote API for DSC-RX1RM2

I am trying to control my DSC-RX1RM2 camera with the Remote SDK.
According to the PDF guide [Sony_CameraRemoteAPIbeta_API-Reference_v2.20.pdf],
I should be able to use the [Continuous shooting mode] API with my camera,
but the call always returns ["error": [12, "No Such Method"]].
Where is the problem: my camera, the SDK, or my source?
Unfortunately, the DSC-RX1RM2 is not supported by the API. Stay tuned to the Sony Camera Remote API page for any updates on supported cameras - https://developer.sony.com/develop/cameras/.
Update: the latest API does support the DSC-RX1RM2 - just confirmed it.
Also check that your URLs are like:
http://ip:port/sony/camera
or
http://ip:port/sony/avContent
I didn't append camera or avContent at first and got similar No Such Method errors.
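A sketch of what a well-formed call looks like, assuming the JSON-RPC-style payload from the API reference and the common default address 192.168.122.1:8080 (check your camera's actual endpoint - both the host and port here are assumptions for illustration):

```javascript
// Build the URL and JSON body for a Camera Remote API call.
// Note the /sony/<service> suffix - omitting it yields "No Such Method".
function buildRequest(host, service, method, params) {
  return {
    url: 'http://' + host + '/sony/' + service,
    body: JSON.stringify({ method: method, params: params || [], id: 1, version: '1.0' })
  };
}

// Ask the "camera" service what it supports before calling anything else:
const req = buildRequest('192.168.122.1:8080', 'camera', 'getAvailableApiList');

// Wiring for a browser or Node 18+ (defined, not invoked here):
function send(req) {
  return fetch(req.url, { method: 'POST', body: req.body })
    .then(function (r) { return r.json(); });
}
```

Checking getAvailableApiList first tells you whether a method such as continuous shooting is exposed by your camera at all, which distinguishes an unsupported method from a malformed URL.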

How to switch local video without switching audio in a webrtc conference?

I have two or more cameras connected to my PC. My goal is to switch between the local cameras in an ongoing WebRTC video conference - but only to switch the video from camera 1 to camera 2, NOT the audio. Audio should always come from camera 1.
How to toggle between the two videoTracks?
See this answer.
Basically you can use replaceTrack() in Firefox to do this today for a seamless replacement of a camera. This is being added to the spec, but Chrome doesn't support it yet.
The best you can do in Chrome today is to get a new stream with the same mic but a different camera, remove the old stream/tracks from the PeerConnection, add the new one, and then handle onnegotiationneeded and renegotiate. This will likely cause a glitch, and will require at least a couple of round-trip times to complete. (This will work in Firefox as well.)
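A sketch of the replaceTrack() path described above: swap the outgoing video track while leaving the audio sender untouched. Here `pc` is assumed to be your RTCPeerConnection and `newDeviceId` the camera to switch to:

```javascript
// Pure helper: pick the sender currently carrying video.
function pickVideoSender(senders) {
  return senders.find(function (s) { return s.track && s.track.kind === 'video'; }) || null;
}

// Swap only the video track; audio senders are untouched, so audio
// keeps flowing from the original microphone/camera.
function switchCamera(pc, newDeviceId) {
  return navigator.mediaDevices
    .getUserMedia({ video: { deviceId: { exact: newDeviceId } } })
    .then(function (stream) {
      var sender = pickVideoSender(pc.getSenders());
      // replaceTrack() swaps the outgoing track without renegotiation.
      return sender.replaceTrack(stream.getVideoTracks()[0]);
    });
}
```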

Liveview on Android/QX1 Sony Camera API 2.01 fails

Using the supplied Android demo from
https://developer.sony.com/downloads/all/sony-camera-remote-api-beta-sdk/
I connected to the Wi-Fi network of a Sony QX1. The sample application finds the camera device and is able to connect to it.
The liveview is not displaying correctly. At most, one frame is shown and the code hits an exception in SimpleLiveViewSlicer.java
    // Each liveview frame must begin with the 0xFF start byte.
    if (commonHeader[0] != (byte) 0xFF) {
        throw new IOException("Unexpected data format. (Start byte)");
    }
Shooting a photo does not seem to work. Zooming does work - the lens moves. The camera works fine with the PlayMemories app directly, so it is not a hardware issue.
Hoping for advice from Sony on this one - standard hardware and the demo application should work.
Can you provide some details of your setup?
What version of Android SDK are you compiling with?
What IDE and OS are you using?
Have you installed the latest firmware? (http://www.sony.co.uk/support/en/product/ILCE-QX1#SoftwareAndDownloads)
Edit:
We tested the sample code using a QX1 lens and the same setup as you and were able to run the sample code just fine.
One thing to check is whether the liveview is ready to transfer images. To confirm this, the client can check the "liveviewStatus" status in the "getEvent" API response (see the API specification for details). Perhaps a timing issue due to connection speed is causing the crash.
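The suggested check can be sketched as follows. The getEvent result is an array of event objects; rather than relying on a fixed index, it is safer to search for the "liveviewStatus" entry (the exact array layout is documented in the API specification):

```javascript
// Scan a getEvent result array for the liveviewStatus entry.
// Returns true/false when present, or null when the entry is absent.
function findLiveviewStatus(eventResult) {
  var entry = (eventResult || []).find(function (e) {
    return e && e.type === 'liveviewStatus';
  });
  return entry ? entry.liveviewStatus : null;
}

// Usage idea: poll getEvent and only start pulling liveview frames
// once findLiveviewStatus(...) returns true, instead of assuming the
// stream is ready immediately after startLiveview.
```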

RGBDToolKit calibrate correspondence cannot display the live view by my canon digital camera

Hi, I am trying to calibrate my depth camera with my digital camera, but I am running into some problems using RGBDKinectCapture:
1. Why can't the calibrate-correspondence step show the RGB view from my digital camera (Canon EOS 600D)?
I have set it to live view mode.
2. The example application (downloaded from RGBToolKit.com) does not recognize my Kinect (model 1414) on Win7,
and I cannot build the source on Windows; it just fails with an error about 'enable_if' (meaning std::tr1::enable_if).
3. How does RGBToolKit take a live preview from (any) HD camera?
Does it work without the camera's SDK?
Which method is it based on?
Which class? I want to debug it...
I'm so sorry... I see now:
1. RGBDToolkit does not display a live preview from the HD camera; it just loads its recorded video.
2. I cannot run the example on Windows because it is built against a different Kinect SDK (the source states which SDK it needs).