Unavailable camera control API functions (Sony Camera Remote SDK, WX500 camera)

I am trying to modify the CameraRemoteSampleApp provided by the Sony SDK to add full control over the aperture, ISO, and shutter speed to the interface.
At this point, however, all the relevant functions are off limits.
The documentation describes these functions as "available if the latest version of the PlayMemories Smart Remote Control is installed and in use".
a) Smart Remote Control is installed and running, with the smartphone (Samsung Galaxy S III) connected to the camera through it over Wi-Fi.
b) I can take photos with the SDK sample app.
c) getMethodTypes shows that the functions exist, but getAvailableApiList does not report them as available.
d) Calling one of those functions directly returns "Not Available Now", an error code that is not present in the documentation.
How can one get access to the ISO, F-number, and shutter speed functions in this case?
JSON requests and responses:
Request Methods: {"id":1,"method":"getMethodTypes","version":"1.0","params":[""]}
Response: {"id":1,"results":[["actTakePicture",[],["string*"],"1.0"],["actZoom",["string","string"],["int"],"1.0"],["awaitTakePicture",[],["string*"],"1.0"],["cancelTouchAFPosition",[],[],"1.0"],["getApplicationInfo",[],["string","string"],"1.0"],["getAvailableApiList",[],["string*"],"1.0"],["getAvailableExposureCompensation",[],["int","int","int","int"],"1.0"],["getAvailableExposureMode",[],["string","string*"],"1.0"],["getAvailableFNumber",[],["string","string*"],"1.0"],["getAvailableFlashMode",[],["string","string*"],"1.0"],["getAvailableFocusMode",[],["string","string*"],"1.0"],["getAvailableIsoSpeedRate",[],["string","string*"],"1.0"],["getAvailableLiveviewSize",[],["string","string*"],"1.0"],["getAvailablePostviewImageSize",[],["string","string*"],"1.0"],["getAvailableSelfTimer",[],["int","int*"],"1.0"],["getAvailableShootMode",[],["string","string*"],"1.0"],["getAvailableShutterSpeed",[],["string","string*"],"1.0"],["getAvailableWhiteBalance",[],["{\"whiteBalanceMode\":\"string\", \"colorTemperature\":\"int\"}","{\"whiteBalanceMode\":\"string\", \"colorTemperatureRange\":\"int*\"}*"],"1.0"],["getEvent",["bool"],["{\"type\":\"string\", \"names\":\"string*\"}","{\"type\":\"string\", \"cameraStatus\":\"string\"}","{\"type\":\"string\", \"zoomPosition\":\"int\", \"zoomNumberBox\":\"int\", \"zoomIndexCurrentBox\":\"int\", \"zoomPositionCurrentBox\":\"int\"}","{\"type\":\"string\", \"liveviewStatus\":\"bool\"}","{\"type\":\"string\", \"liveviewOrientation\":\"string\"}","{\"type\":\"string\", \"takePictureUrl\":\"string*\"}*","{\"type\":\"string\", \"continuousError\":\"string\", \"isContinued\":\"bool\"}*","{\"type\":\"string\", \"triggeredError\":\"string*\"}","{\"type\":\"string\", \"sceneRecognition\":\"string\", \"steadyRecognition\":\"string\", \"motionRecognition\":\"string\"}","{\"type\":\"string\", \"formatResult\":\"string\"}","{\"type\":\"string\", \"storageID\":\"string\", \"recordTarget\":\"bool\", \"numberOfRecordableImages\":\"int\", \"recordableTime\":\"int\", \"storageDescription\":\"string\"}*","{\"type\":\"string\", \"currentBeepMode\":\"string\", \"beepModeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentCameraFunction\":\"string\", \"cameraFunctionCandidates\":\"string*\"}","{\"type\":\"string\", \"currentMovieQuality\":\"string\", \"movieQualityCandidates\":\"string*\"}","{\"type\":\"string\", \"checkAvailability\":\"bool\", \"currentAspect\":\"string\", \"currentSize\":\"string\"}","{\"type\":\"string\", \"cameraFunctionResult\":\"string\"}","{\"type\":\"string\", \"currentSteadyMode\":\"string\", \"steadyModeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentViewAngle\":\"int\", \"viewAngleCandidates\":\"int*\"}","{\"type\":\"string\", \"currentExposureMode\":\"string\", \"exposureModeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentPostviewImageSize\":\"string\", \"postviewImageSizeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentSelfTimer\":\"int\", \"selfTimerCandidates\":\"int*\"}","{\"type\":\"string\", \"currentShootMode\":\"string\", \"shootModeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentAELock\":\"bool\", \"aeLockCandidates\":\"bool*\"}","{\"type\":\"string\", \"checkAvailability\":\"bool\", \"currentBracketShootMode\":\"string\", \"currentBracketShootModeOption\":\"string\"}","{\"type\":\"string\", \"checkAvailability\":\"bool\", \"currentCreativeStyle\":\"string\", \"currentCreativeStyleContrast\":\"int\", \"currentCreativeStyleSaturation\":\"int\", 
\"currentCreativeStyleSharpness\":\"int\"}","{\"type\":\"string\", \"currentExposureCompensation\":\"int\", \"maxExposureCompensation\":\"int\", \"minExposureCompensation\":\"int\", \"stepIndexOfExposureCompensation\":\"int\"}","{\"type\":\"string\", \"currentFlashMode\":\"string\", \"flashModeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentFNumber\":\"string\", \"fNumberCandidates\":\"string*\"}","{\"type\":\"string\", \"currentFocusMode\":\"string\", \"focusModeCandidates\":\"string*\"}","{\"type\":\"string\", \"currentIsoSpeedRate\":\"str
Request Versions: {"id":2,"method":"getVersions","version":"1.0","params":[]}
Response: {"id":2,"result":[["1.0"]]}
Request API: {"id":3,"method":"getAvailableApiList","version":"1.0","params":[]}
Response: {"id":3,"result":[["getVersions","getMethodTypes","getApplicationInfo","getAvailableApiList","getEvent","actTakePicture","stopRecMode","startLiveview","stopLiveview","actZoom","setSelfTimer","getSelfTimer","getAvailableSelfTimer","getSupportedSelfTimer","setExposureCompensation","getExposureCompensation","getAvailableExposureCompensation","getSupportedExposureCompensation","setShootMode","getShootMode","getAvailableShootMode","getSupportedShootMode","getSupportedFlashMode"]]}
Request Flash: {"id":5,"method":"getSupportedFlashMode","version":"1.0","params":[]}
Response Flash: {"id":5,"result":[["off","auto","on","slowSync","rearSync"]]}
Request speed: {"id":5,"method":"getAvailableShutterSpeed","version":"1.0","params":[]}
Response speed: {"id":5,"error":[1,"Not Available Now"]}
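For reference, these calls are plain JSON-RPC over HTTP POST. A minimal sketch of issuing them from code (TypeScript with the Node 18+ global fetch; the endpoint URL below is an assumption - the actual URL has to be taken from the camera's device description):

const ENDPOINT = "http://192.168.122.1:8080/sony/camera"; // assumed; varies per camera

async function call(method: string, params: unknown[] = [], id = 1): Promise<any> {
    const res = await fetch(ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ id, method, version: "1.0", params }),
    });
    return res.json();
}

async function main() {
    // Functions missing from getAvailableApiList return the same
    // "Not Available Now" error seen in the log above when called directly.
    console.log(await call("getAvailableApiList"));
    console.log(await call("getAvailableShutterSpeed"));
}

main().catch(console.error);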

There are a couple of things you can check first:
Do you have the latest version of Smart Remote Control installed?
Do you have the latest firmware update for your camera?

Related

Update device twin when reprovisioning with custom allocation policy

In the Azure Device Provisioning Service, when using a custom allocation policy with '--reprovision-policy reprovisionandmigratedata', is it possible to migrate the device twin data when changing hubs and change some of the values in the twin?
From my experiments, initialTwin is ignored when a device moves between hubs (as opposed to when it registers for the first time), which is not that unexpected.
Example
Let's say that device d1 is provisioned to hub1 and its desired properties are
"desired" : {
"a": 1
}
Some time later, d1 reprovisions, the allocation function runs, and it moves the device to hub2. I need the new desired properties to be:
"desired" : {
"a": 2
}
I have confirmed that with "re-provision and migrate data" the initialTwin is ignored, and that this is by design.
An alternative is to query the device's latest twin as part of the custom allocation function, add the new property, and return it to DPS while the enrollment uses "re-provision and reset to initial config".
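A rough sketch of what that custom allocation function could look like (TypeScript Azure Function; the webhook request/response field names follow the DPS custom allocation contract as I understand it, and getCurrentDesired() is a hypothetical helper standing in for a lookup of the device's current twin via the IoT Hub service SDK):

import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// Hypothetical helper: a real implementation would read the device's current
// twin from the hub it is assigned to today (e.g. azure-iothub Registry.getTwin)
// and return twin.properties.desired. Placeholder value for this sketch.
async function getCurrentDesired(registrationId: string): Promise<Record<string, unknown>> {
    return { a: 1 };
}

const run: AzureFunction = async (context: Context, req: HttpRequest): Promise<void> => {
    const registrationId: string = req.body?.deviceRuntimeContext?.registrationId;
    const linkedHubs: string[] = req.body?.linkedHubs ?? [];

    // Start from the latest desired properties, then overwrite the values we need.
    const desired = await getCurrentDesired(registrationId);
    desired["a"] = 2;

    context.res = {
        body: {
            iotHubHostName: linkedHubs[1],                      // whichever hub the allocation logic picks
            initialTwin: { tags: {}, properties: { desired } }, // applied with "reset to initial config"
        },
    };
};

export default run;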

wpa_supplicant authentication implementation

I need help from someone who has some experience with the wpa_supplicant code.
My understanding is that wpa_supplicant does everything needed for a supplicant to connect to an AP (if that is what you want). The steps are:
Scan
Get scan results
AUTH
ASSOC
4-way handshake
data exchange
As I understand it, the first four steps are only managed by wpa_supplicant. That is, wpa_supplicant simply calls the underlying driver to perform these steps, and once the main event loop receives the EVENT_ASSOC message it starts the 4-way handshake.
For my part, it is fine that the first two steps are carried out by the driver, i.e., wpa_supplicant sends a scan request, and the driver performs the scan and feeds back the scan results.
My question is: is it correct that wpa_supplicant cannot generate the necessary packets itself and use, e.g., layer 2 (raw sockets) to send an authentication request to the AP, followed by an association request? Must these simply be provided as handlers by the driver layer?
As I can see from the code in wpa_supplicant.c,
void wpa_supplicant_associate(struct wpa_supplicant *wpa_s,
struct wpa_bss *bss, struct wpa_ssid *ssid)
calls a function pointer of the selected driver, e.g. .associate = wpa_driver_nl80211_associate, and the driver then sends this down to the underlying nl80211 driver code. So wpa_supplicant cannot generate these packets by itself?
I hope this makes sense; if not, please ask. :)
Yes, your understanding is correct. To send an authentication/association request, wpa_supplicant constructs the corresponding NL80211 commands, depending on the scenario:
a) if the SME is maintained in wpa_supplicant:
NL80211_CMD_AUTHENTICATE
NL80211_CMD_ASSOCIATE
b) if the SME is maintained by the driver:
NL80211_CMD_CONNECT
These commands trigger the corresponding cfg80211_ops hooks (.auth, .assoc, .connect) registered by the Wi-Fi driver, which construct the frames and send them out.

Manual focus with flash using Android camera2

How do I perform manual (touch) focus with flash using the Android camera2 API?
My captureRequest settings are:
1. type - TEMPLATE_PREVIEW
2. CONTROL_AE_MODE - CONTROL_AE_MODE_OFF
3. FLASH_MODE - FLASH_MODE_SINGLE
4. CONTROL_AF_TRIGGER - CONTROL_AF_TRIGGER_START
usage:
CaptureSession.capture(captureRequest.build(), captureCallback, null);
Result:
The camera focuses if there is enough light. Otherwise the flash blinks very quickly and focusing fails.
You can try to perform manual (touch) focus with flash this way:
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_ON_AUTO_FLASH);
When using the trigger, use both the AE and AF triggers:
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_CANCEL);
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START);
and then:
mCameraCaptureSession.setRepeatingRequest(mPreviewBuilder.build(), mPreviewSessionCallback, mHandler);

Addressing ECUs directly using ELM 327 dongle and ISO 9141

I have a VW Golf 4, which is quite old and talks KWP 2000 (ISO 9141) on its CAN bus. I use a dongle powered by ELM 327, connected to the OBD-2 port of the car.
I am trying to send messages individually to each ECU. I tried to change the header of the messages:
AT SH 48 XX F1 (I hoped XX would be the ECU ID; 48 is the flag for "use physical addressing"). Any command I issue (e.g. I tried 3E for "tester present") returns NO DATA (I disabled automatic timeouts and set the timeout to its maximum value).
Is there a way to send messages directly to an ECU? I am not interested in the set of data provided via OBD-2, nor do I want to re-flash the ECUs. At the moment I am just trying to find out which ECUs are available on the bus.
Thanks!
VW uses Transport Protocol TP 2.0 on its CAN bus, so you need to initiate communication with a channel-setup message on CAN ID 0x200 rather than an ISO 9141-style header.
See https://jazdw.net/tp20 for more info.
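If you are scripting the dongle, here is a rough sketch of driving the ELM327 over its serial interface and pointing it at the TP 2.0 channel-setup ID (TypeScript with the node serialport package; the port path, baud rate, and protocol selection are assumptions, and the actual TP 2.0 channel-setup payload is documented at the link above rather than reproduced here):

import { SerialPort } from "serialport";
import { ReadlineParser } from "@serialport/parser-readline";

// Assumed device path and baud rate for a typical USB ELM327 clone.
const port = new SerialPort({ path: "/dev/ttyUSB0", baudRate: 38400 });
const parser = port.pipe(new ReadlineParser({ delimiter: "\r" }));
parser.on("data", (line: string) => console.log("<", line.trim()));

function send(cmd: string): void {
    console.log(">", cmd);
    port.write(cmd + "\r");
}

port.on("open", () => {
    // A real implementation should wait for the ">" prompt between commands.
    send("ATZ");     // reset the interface
    send("ATSP6");   // select a CAN protocol (11-bit / 500 kbit assumed; adjust to the car's bus)
    send("ATCAF0");  // raw frames: turn off the ELM's automatic CAN formatting
    send("ATSH200"); // use CAN ID 0x200 (TP 2.0 channel setup) instead of the OBD header
    // The channel-setup bytes that follow (target ECU address, negotiated IDs,
    // keep-alive timing) are described in the TP 2.0 write-up linked above.
});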

Troubleshooting WebRTC code

I'm pulling my hair out with this one. A month or so ago, I was able to put together a proof-of-concept WebRTC demo, using some sample code from the good folks at SignalR. The demo is located here, the source for it is here, and it does what it's supposed to do.
But since I took that code and moved it into our actual application, I haven't been able to get it to work. Of course the code had to be changed significantly - different backends, different set of frameworks and supporting code, support for multiple simultaneous connections, that sort of thing - but the core logic is very similar.
I've put together a sample app here that demonstrates the problem:
https://bitbucket.org/smithkl42/signalr.webrtc
The core WebRTC logic is all in this TypeScript file:
https://bitbucket.org/smithkl42/signalr.webrtc/src/tip/SignalR.WebRTC/Scripts/Media/WebRTC.ts?at=default
It's several hundred lines long, so I won't bother posting it here, but you can see it by clicking on the link above.
When it runs, it produces output like this:
12:17:58.531 WebRTCController.call(): Calling 7d9e0d39-5047-4afe-86e5-e6e01b9f5955 when preparations have finished
12:17:58.533 WebRTCController.prepareForCall(): Preparing for call: localSessionId='39d2df53-6854-415a-8748-b5230eda2eb1'; remoteSessionId='7d9e0d39-5047-4afe-86e5-e6e01b9f5955'
12:18:0.139 Object.(): The user has granted media device access, so proceeding to prepare for call
12:18:0.141 Connection.createPeerConnection(): Creating peer connection; using stunServer stun:stun1.l.google.com:19302
12:18:0.144 (): Preparations finished. Creating and sending JSEP offer. Util.js:21
12:18:0.272 Connection.handleIceCandidate(): STUN server has found an ICE candidate (event.type='icecandidate').
12:18:0.282 Connection.handleIceCandidate(): STUN server has found an ICE candidate (event.type='icecandidate').
(More like that)
12:18:0.655 WebRTCController.handleJsepAnswer(): Handling JsepAnswer from 7d9e0d39-5047-4afe-86e5-e6e01b9f5955
12:18:0.694 Object.(): Sending ICE candidate to the remote machine: {"sdpMLineIndex":0,"sdpMid":"audio","candidate":"a=candidate:2999745851 1 udp 2113937151 192.168.56.1 62978 typ host generation 0\r\n"}
12:18:0.706 Object.(): Sending ICE candidate to the remote machine: {"sdpMLineIndex":0,"sdpMid":"audio","candidate":"a=candidate:2999745851 2 udp 2113937151 192.168.56.1 62978 typ host generation 0\r\n"}
(More like that)
But then it never connects, i.e., the video from the other side never starts playing. At the signaling layer, I can tell by the logs and by stepping through the code that the first browser is sending a JSEP offer; the second browser is receiving it, storing it and sending back an appropriate JSEP answer; and the first machine is storing that answer. Each peerConnection is then finding the ICE candidates and sending them to the remote machine; and each peerConnection is receiving and apparently trying those ICE candidates; and the peerConnections are even raising the onaddstream event. But the video never starts playing.
The state of the peerConnection object all the way through looks like this:
(iceGatheringState=new; iceState=starting; readyState=active)
The frustrating bit is that every so often, maybe one time out of 20, it does work, i.e., both videos show up. So I'm not doing everything wrong. It sounds like a timing issue of some sort - but I can't figure out what it is. And so far as I can tell, there's not much in the WebRTC objects (specifically RTCPeerConnection) to tell you what's going wrong.
I hate to ask anybody else to do my troubleshooting for me, but... well, I'm running out of options. Does anybody else see anything I'm doing obviously wrong?
Update 2012-12-19: I'm making some progress. I realized I was calling peerConnection.setLocalDescription() synchronously, i.e., without specifying callbacks. So now I've got some lines of code that look like this:
// Answer the call by sending a JsepAnswer message.
connection.peerConnection.createAnswer(
    answer => {
        connection.peerConnection.setLocalDescription(answer, () => {
            var signalState: mData.SignalState = {
                FromSessionId: connection.localSessionId,
                ToSessionId: connection.remoteSessionId,
                Message: JSON.stringify(answer)
            };
            me.roomHub.server.jsepAnswer(signalState);
            mUtil.log("Sent JSEP answer: " + signalState.Message);
            connection.readyForIceCandidates.resolve();
        },
        error => {
            mUtil.error("Error setting local description from created answer: " + error + "; answer=" + JSON.stringify(answer));
        });
    },
    error => {
        mUtil.error("Error creating answer: " + error);
    }, me.mediaConstraints);
And the setLocalDescription() error callback is showing this error:
16:14:42.439 WebRTCController.handleJsepOffer(): Error setting local description from created answer: SetLocalDescription failed.; answer={"sdp":"v=0\r\no=- 439659381 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\nm=audio 1 RTP/SAVPF 103 104 111 0 8 107 106 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:1 IN IP4 0.0.0.0\r\na=ice-ufrag:vOKflTJ56gV0R9i0\r\na=ice-pwd:9nuXPMDvQ2mZATFCQyEzPRQz\r\na=sendrecv\r\na=mid:audio\r\na=rtcp-mux\r\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:m9q9pmLgLuFnfFC09KXKW5p8TjsKk+VdqX0OWv77\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:111 opus/48000/2\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:107 CN/48000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:548068416 cname:IXg8QRisWrd7+7f8\r\na=ssrc:548068416 msid:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf a0\r\na=ssrc:548068416 mslabel:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\na=ssrc:548068416 label:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjfa0\r\nm=video 1 RTP/SAVPF 100 116 117\r\nc=IN IP4 0.0.0.0\r\na=rtcp:1 IN IP4 0.0.0.0\r\na=ice-ufrag:vOKflTJ56gV0R9i0\r\na=ice-pwd:9nuXPMDvQ2mZATFCQyEzPRQz\r\na=sendrecv\r\na=mid:video\r\na=rtcp-mux\r\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:m9q9pmLgLuFnfFC09KXKW5p8TjsKk+VdqX0OWv77\r\na=rtpmap:100 VP8/90000\r\na=rtpmap:116 red/90000\r\na=rtpmap:117 ulpfec/90000\r\na=ssrc:1460425980 cname:IXg8QRisWrd7+7f8\r\na=ssrc:1460425980 msid:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf v0\r\na=ssrc:1460425980 mslabel:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\na=ssrc:1460425980 label:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjfv0\r\n","type":"answer"}
Now I just need to figure out why that particular SDP - which comes straight from the createAnswer() method - is failing.
Update 2012-12-20: I've created an online demonstration of the problem here: http://srdemo.alanta.com/. I've also turned on Chrome debug logging, with the result that I see a bunch of errors that look like this:
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
Not sure what relationship they have to my problem, but I'm continuing to look into it.
Edit 2012-12-20: I've managed (I think) to narrow the problem down. See this question for more precise details.
Figured it out. Turns out that SignalR 1.0 RC1 has a bug in it that changes any "+" in a string into a space. So lines in the SDP that looked like this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2+z
Were getting changed into this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2 z
But because not every SDP had a "+" on a critical line, it would sometimes work. That explains everything.
The bug has been reported to the good folks working on SignalR (see https://github.com/SignalR/SignalR/issues/1194), and in the meantime, a simple encodeURIComponent() and decodeURIComponent() around the strings in question fixed it.
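For anyone hitting the same thing, a small sketch of that workaround (the helper names here are mine; the idea is simply to URI-encode the SDP/candidate JSON before it passes through the hub and decode it on the way out):

// Encode before handing the signal to the SignalR hub proxy, so "+" survives transport.
function encodeSignal(payload: object): string {
    return encodeURIComponent(JSON.stringify(payload));
}

// Decode on the receiving side before giving the SDP back to the browser.
function decodeSignal<T>(message: string): T {
    return JSON.parse(decodeURIComponent(message)) as T;
}

// Sending side (same place as the jsepAnswer call shown earlier):
//   Message: encodeSignal(answer)
// Receiving side:
//   const answer = decodeSignal<RTCSessionDescriptionInit>(signalState.Message);
//   peerConnection.setRemoteDescription(new RTCSessionDescription(answer), onSuccess, onError);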