Sony A6000 failing to set exposure mode - camera

I'm attempting to set the exposure mode of a Sony A6000 using smart remote version 2.0.1 by posting the following command: {'method': 'setExposureMode', 'params': ['Shutter'], 'id': 12, 'version': '1.0'}, but I'm getting back the following response: {u'id': 12, u'error': [500, u'Set operation failed.']}. I can't seem to find a description for error code 500 anywhere in the API documentation. I've used the same command to successfully set the exposure modes of NEX-5, NEX-6, and A5000 cameras. Is there something different I need to be doing for the A6000?

Nothing different as far as I'm aware. Please check your code and other general things: the camera's current mode, whether the camera has the latest firmware and the latest "Smart Remote Control" app, etc.
Best Regards, Prem, Developer World team
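For reference, the request/response shapes can be sanity-checked without a camera. Below is a minimal Python sketch; the endpoint URL is an assumption (the camera's actual ActionList URL comes from SSDP discovery, though the address shown is a common default), and the helper names are mine:

```python
import json
import urllib.request

# Assumed default endpoint for the Sony Camera Remote API; confirm yours
# via SSDP discovery before relying on it.
CAMERA_URL = 'http://192.168.122.1:8080/sony/camera'

def build_request(method, params, req_id=1):
    """Build a Camera Remote API JSON-RPC payload."""
    return {'method': method, 'params': params, 'id': req_id, 'version': '1.0'}

def parse_response(resp):
    """Split a Camera Remote API response into (result, error)."""
    return resp.get('result'), resp.get('error')

def call_camera(method, params, req_id=1):
    """POST a command to the camera; requires a live Wi-Fi connection."""
    data = json.dumps(build_request(method, params, req_id)).encode()
    with urllib.request.urlopen(CAMERA_URL, data) as f:
        return json.loads(f.read().decode())

# Interpreting the failure from the question:
result, error = parse_response({'id': 12, 'error': [500, 'Set operation failed.']})
if error:
    code, message = error  # 500 is the generic "Set operation failed."
```

Before retrying setExposureMode on the A6000, it may also be worth calling getAvailableExposureMode (if getAvailableApiList reports it) to see which modes the current camera state actually allows.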

Related

Alexa Skill AudioPlayer: poor support/bugs in the console test environment

I'm developing a skill to play my own music.
I'm having trouble with AudioPlayer in the developer console test environment ( https://developer.amazon.com/alexa/console/ask/test/ ). Two questions:
1. AudioPlayer built-in intents don't run as expected (test simulator bug?)
Me: Alexa, play mymusicSkill
Alexa: playing Wanderer track 1: Antelao Adrift
Me: Alexa, next
Alexa: What would you like to play next?
Me: Ask mymusicSkill next
Alexa: intent: next
See also screenshot:
BTW, my code:
'AMAZON.NextIntent': function () {
    this.response.speak('intent: next');
    this.emit(':responseReady');
},
So the expected behaviour, following this documentation: https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html
is that "Alexa, next" would fire my skill's AMAZON.NextIntent, but it doesn't. That's a bug, in my opinion.
2. AudioPlayer not (yet) supported on test simulator?
I know that audio streaming is not supported in the console test simulator (see also my previous question: https://forums.developer.amazon.com/questions/143902/when-test-simulator-will-support-audioplayer-direc.html ). Since I can't get a physical Echo (I'm in Italy, and Amazon won't ship an Echo to an Italian address), is there any way to test my skill using audio streams?
NOTE/SUGGESTION to improve the web test simulator:
even if streaming audio to the web test simulator is a problem, a workaround could be to display an info message in the GUI showing that streaming is active (playback is running, etc.)...
Any idea?
Thanks
giorgio
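One thing that can be verified offline, without an Echo, is that the skill returns a well-formed AudioPlayer.Play directive. A minimal Python sketch (the helper name and the stream URL are mine, purely illustrative; the directive shape follows the AudioPlayer interface reference linked above):

```python
def build_play_directive(url, token, offset_ms=0, play_behavior='REPLACE_ALL'):
    """Build an AudioPlayer.Play directive per the AudioPlayer
    interface reference."""
    return {
        'type': 'AudioPlayer.Play',
        'playBehavior': play_behavior,
        'audioItem': {
            'stream': {
                'url': url,  # real devices require an HTTPS URL
                'token': token,
                'offsetInMilliseconds': offset_ms,
            }
        },
    }

# Hypothetical stream URL and token, for illustration only.
directive = build_play_directive(
    'https://example.com/wanderer/track1.mp3', 'wanderer-track-1')
```

You can compare the resulting JSON against the skill's actual response shown in the simulator's JSON output panel, even while audio playback itself is unsupported there.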

Sony A7Sii not showing expected API features

I'm trying to control a Sony a7SII via the pysony Sony Camera API wrapper.
https://github.com/Bloodevil/sony_camera_api
I'm able to connect to the camera, start the session, but I'm not getting the expected list of possible API functions back.
My process looks like this:
import pysony
api = pysony.SonyAPI()
api.startRecMode()
api.getAvailableApiList()
{'id': 1, 'result': [['getVersions', 'getMethodTypes', 'getApplicationInfo', 'getAvailableApiList', 'getEvent', 'actTakePicture', 'stopRecMode', 'startLiveview', 'stopLiveview', 'awaitTakePicture', 'getSupportedSelfTimer', 'setExposureCompensation', 'getExposureCompensation', 'getAvailableExposureCompensation', 'getSupportedExposureCompensation', 'setShootMode', 'getShootMode', 'getAvailableShootMode', 'getSupportedShootMode', 'getSupportedFlashMode']]}
As you can see, the returned list does not contain the full set of controls.
Specifically, I want to be able to set the shutter speed and aperture, which, based on this matrix ( https://developer.sony.com/develop/cameras/ ), I should be able to do.
Any ideas would be much appreciated.
Turns out, both pysony and the API are working fine.
To get full API functionality, you must install the Smart Remote Control app from the store rather than relying on the "embedded" remote that ships with the camera.
Also, as a note: it seems to take a little time for api.startRecMode() to actually update the available API list, so it's sensible to add a little delay to your code.
See:
src/example/dump_camera_capabilities.py
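The "little delay" can be made deterministic by polling the API list instead of sleeping for a fixed time. A minimal sketch (the helper is my own; it only assumes the response shape shown in the dump above):

```python
import time

def wait_for_api(get_list, name, timeout=10.0, interval=0.5):
    """Poll a getAvailableApiList-style callable until `name` appears.

    get_list: zero-argument callable returning the raw response dict,
    shaped like {'id': ..., 'result': [[...method names...]]}.
    Returns True once `name` is listed, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        resp = get_list()
        available = resp.get('result', [[]])[0]
        if name in available:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

With pysony this would look roughly like wait_for_api(api.getAvailableApiList, 'setShutterSpeed') right after api.startRecMode(), before trying to set shutter speed or aperture.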

How to connect to my LaunchPad TM4C123G

I have spent some time trying to connect to my LaunchPad TM4C123G using the mspdebug toolchain on my macbook (10.10), but no luck.
While trying to run $ mspdebug rf2500 I get
usbutil: unable to find a device matching 0451:f432
I did some googling and it seems to me the mspdebug toolkit might not be suitable for my version of the LaunchPad. Could this be?
After checking my $ system_profiler SPUSBDataType I got the following:
Product ID: 0x00fd
Vendor ID: 0x1cbe (Texas Instruments - Stellaris)
Version: 1.00
Serial Number: 0E205EE1
Speed: Up to 12 Mb/sec
Manufacturer: Texas Instruments
Location ID: 0x14100000 / 14
Current Available (mA): 500
Current Required (mA): 250
This indicates to me that at least the OS is able to recognise the device, right? If so, what other toolchain could I use to connect to the device?
As a satisfactory solution for the time being, I started using Energia. I still had to search for the appropriate settings in order to run anything on my LaunchPad TM4C123G, so to spare people some time, I decided to post a step-by-step walkthrough here.
First, plug the USB-to-micro-USB cable into the top slot of the LaunchPad, and make sure the switch (at the top left) is set to the "DEBUG" position.
Next download the Energia IDE, there's a nice bundle for the Mac on their site. Once you're done with the setup, open it and search the toolbar for the "Board" section. Once there, select the appropriate setting. In my case it was the one with the checkmark in the screenshot.
Finally, to make sure everything is OK, try and run the provided empty program on your board with the "Upload" button.
If the connection was established and your source compiled and delivered, the status area should look similar to this one.
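As to why mspdebug fails: its rf2500 driver looks specifically for USB ID 0451:f432 (the MSP430 LaunchPad's debug interface, per the error message above), while the system_profiler output shows 1cbe:00fd, the Stellaris/Tiva ICDI interface, which mspdebug does not support; tools like OpenOCD (ti_icdi) or lm4flash target that instead. A small Python sketch (the parsing logic is mine) that pulls the IDs out of system_profiler output so you can check which board you have:

```python
import re

# Known TI LaunchPad debug interfaces (vendor, product) -> suitable tools.
KNOWN_IDS = {
    ('0x0451', '0xf432'): 'MSP430 LaunchPad -- mspdebug rf2500',
    ('0x1cbe', '0x00fd'): 'Stellaris/Tiva ICDI -- OpenOCD (ti_icdi) or lm4flash',
}

def usb_ids(profiler_text):
    """Extract (vendor_id, product_id) pairs from
    `system_profiler SPUSBDataType` output."""
    vendors = re.findall(r'Vendor ID:\s*(0x[0-9a-fA-F]+)', profiler_text)
    products = re.findall(r'Product ID:\s*(0x[0-9a-fA-F]+)', profiler_text)
    return list(zip(vendors, products))

# The relevant lines from the question's system_profiler dump:
SAMPLE = """\
Product ID: 0x00fd
Vendor ID: 0x1cbe (Texas Instruments - Stellaris)
"""
for vid, pid in usb_ids(SAMPLE):
    print(vid, pid, '->', KNOWN_IDS.get((vid, pid), 'unknown device'))
```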

Is there any private api to monitor network traffic on iPhone?

I need to implement an app that monitors inbound/outbound connections made by different apps on the iPhone. My app will run in the background using Apple's background multitasking feature for VoIP and navigation.
I can use private APIs, as my client doesn't need this app on the App Store.
Thanks.
I got through this.
I didn't need any private API to get information about active inbound/outbound connections.
You just need to know basic C programming and have some patience.
I wrote to the Apple dev forums about this, and got this response:
From my perspective most of what I have to say about this issue is covered in the post referenced below.
<https://devforums.apple.com/message/748272#748272>
The way to make progress on this is to:
o grab the relevant headers from the Mac OS X SDK
o look at the Darwin source for netstat to see how the pieces fit together
WARNING: There are serious compatibility risks associated with shipping an app that uses this technique; it's fine to use this for debugging and so on, but I recommend against shipping code like this to end users.
What I did, step by step:
1) Downloaded the netstat code from the BSD open source site -
2) Added it to a new iPhone project.
3) Some header files are not present in the iOS SDK, so you need to copy them from opensource.apple.com and add them to your iPhone SDK at the relevant path under
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.0.sdk/usr/include
My Xcode version is 4.5.2, so this is the path for my Xcode; yours may differ depending on your Xcode version. Remember to add those headers to both the iOS SDK and the iOS Simulator SDK, so that the code works on the device as well as on the simulator.
4) You may find some minor errors in the netstat code about definitions of some structures missing from the header files, e.g. struct xunpcb64. Don't worry, the definitions are present; you need to comment out some "#if !TARGET_OS_EMBEDDED" / "#else" lines in those header files so that the iOS SDK can reach those conditions and access the definitions. (This needs some trial and error; be patient.)
5) Finally you will be able to compile your code. Cheers!!
In case you haven't seen this already, I have used it successfully on my iPhone.
https://developer.apple.com/library/ios/ipad/#qa/qa1176/_index.html
I realize it is not exactly what you want.
Generally, I agree with Suneet. All you need is a network sniffer.
You can try to do a partial port of Wireshark (it's open source and it works on macOS) to iOS. iOS and OS X share most of the kernel, so if it's not explicitly prohibited on iOS, it should work for you.
Wireshark is quite a big product, so you may be interested in looking for another open-source network sniffer that works on OS X.

Android HTC Desire Voice input issues

Would anyone have an idea as to why an app would work on almost every phone that has 2.1 but not the Desire?
One of my apps uses voice input and the Desire is the only phone that force closes when the voice prompt comes up.
The worst part is that I don't know how to test this; I don't have one, and I don't know anyone who does.
Any ideas?
EDIT:
I finally found out that HTC disabled voice input on the Desire, and you have to use a workaround to install it.
So if you are relying on voice input, make sure you use the code from the Google example to catch the error:
PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
if (activities.size() == 0) {
    noResults.setText("Voice input not found on this phone.");
} else {
    // If voice is enabled
}
I think the most important thing to do first is to get the exception report. Since you can't test it yourself, I would use a tool to get the exception report from your customers. In Android 2.2 the built-in error-reporting tool can be used. If you target other SDK versions, I would recommend this service: http://code.google.com/p/android-remote-stacktrace/ to get a remote stack trace.
Then if you post the stack trace here, I think somebody will be able to help you!