A7 API/SDK for controlling shooting interval remotely (timelapse)

I am interested in writing an app that can turn the Sony A7 into a precise timelapse camera with a shooting interval of under one second.
Can I use the Sony SDK API for that?
How, and how fast, can I control the camera's shutter?
Thanks
Vangelis

Here is what the documentation for the latest version of the API says about shooting speed:
Continuous shooting speed ("10fps 1sec")
Please let me know if you need more detail. I am sure you already have this, but here is the link to the home page for camera development:
https://developer.sony.com/develop/cameras/
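For a sub-second interval, the usual approach is to repeatedly invoke `actTakePicture` over HTTP POST. Below is a minimal Python sketch; the endpoint address is an assumption (cameras typically expose the Camera Remote API at `http://<camera-ip>:8080/sony/camera` once the remote-control app is running on the body), and the achievable rate is ultimately limited by how fast the camera finishes each exposure, not by the timer:

```python
import json
import time
import urllib.request

# Hypothetical camera address; adjust to whatever your A7 reports.
ENDPOINT = "http://192.168.122.1:8080/sony/camera"

def build_request(method, params=None, req_id=1):
    """Build a Camera Remote API JSON-RPC request body."""
    return json.dumps({
        "method": method,
        "params": params or [],
        "id": req_id,
        "version": "1.0",
    })

def take_picture(endpoint=ENDPOINT):
    """Fire a single exposure via actTakePicture and return the parsed reply."""
    body = build_request("actTakePicture").encode()
    with urllib.request.urlopen(endpoint, data=body) as resp:
        return json.loads(resp.read())

def timelapse(shots, interval_s, endpoint=ENDPOINT):
    """Naive fixed-interval loop: sleep off whatever time remains in each
    slot after the shot returns. This bounds the interval from below; the
    camera's own shot-to-shot time sets the real floor."""
    for _ in range(shots):
        start = time.monotonic()
        take_picture(endpoint)
        time.sleep(max(0.0, interval_s - (time.monotonic() - start)))
```

Whether the camera can actually sustain intervals below one second this way is something you would need to measure against the body itself.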

Related

Does the Sony Remote Camera API control HDR modes, ISO, shutter speed, aperture and other "manual" settings?

I just bought a Sony A7 and I am blown away with the incredible pictures it takes, but now I would like to interact with and automate the use of this camera using the Sony Remote Camera API. I consider myself a maker and would like to do some fun stuff: add a laser trigger with an Arduino, do some computer-controlled light painting, and some long-term (on the order of weeks) time-lapse photography. One reason I purchased this Sony camera over other models from famous brands such as Canon, Nikon, or Samsung is the ingenious Sony Remote Camera API. However, after reading through the API reference it seems that many of the features cannot be accessed. Is this true? Does anyone know a workaround?
Specifically, I am interested in changing a lot of the manual settings that you can change through the menu system on the camera such as ISO, shutter speed, and aperture. I am also interested in taking HDR images in a time-lapse manner and it would be nice to change this setting through the API as well. If anyone knows, why wasn't the API opened up to the whole menu system in the first place?
Finally, if any employee of Sony is reading this I would like to make this plea: PLEASE PLEASE PLEASE keep supporting the Remote Camera API and improve upon an already amazing idea! I think the more control you offer to makers and developers, the more popular your cameras will become. I think you could create a cult following if you can manage to capture the imagination of makers across the world and get just one cool project to go viral on the internet. Using HTTP POST commands is super awesome, because it is OS agnostic and makes communication a breeze. Did I mention that is awesome?! Sony's cameras will nicely integrate themselves into the Internet of Things.
I think the Remote Camera API strategy is better than the strategies of Sony's competitors. Nikon and Canon have nothing comparable. The closest thing is Samsung gluing Android onto the Galaxy NX, but that is a completely unnecessary cost since most people already own a smart phone; all that needs to exist is a link that allows the camera to talk to the phone, like the Sony API. Sony gets it. Please don't abandon this direction you are taking or the Remote Camera API, because I love where it is heading.
Thanks!
New API features for the Lens Style Cameras DSC-QX100 and DSC-QX10 will be expanded during the spring of 2014. The shutter speed functionality, white balance, ISO settings and more will be included! Check out the official announcement here: https://developer.sony.com/2014/02/24/new-cameras-now-support-camera-remote-api-beta-new-api-features-coming-this-spring-to-selected-cameras/
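Once those methods are available, driving them is just a matter of POSTing one JSON-RPC body per setting. A minimal sketch; the method names (`setShutterSpeed`, `setIsoSpeedRate`, `setFNumber`) come from the Camera Remote API beta reference, but the exact value formats and which cameras accept them are assumptions to verify against your model:

```python
def exposure_commands(shutter_speed, iso, f_number):
    """Build the JSON-RPC request bodies that would configure a manual
    exposure via the Camera Remote API (one HTTP POST per setting)."""
    settings = [
        ("setShutterSpeed", shutter_speed),
        ("setIsoSpeedRate", iso),
        ("setFNumber", f_number),
    ]
    return [
        {"method": method, "params": [value], "id": i + 1, "version": "1.0"}
        for i, (method, value) in enumerate(settings)
    ]

# Each dict would be serialized to JSON and POSTed to the camera's
# endpoint, e.g. http://<camera-ip>:8080/sony/camera.
commands = exposure_commands("1/250", "400", "5.6")
```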
Thanks a lot for your valuable feedback. Great to hear that the APIs are being used; we are looking forward to some nice implementations!
Peter

Kinect For XBOX360 and hand tracking

I have a question about the Kinect for Xbox 360: can it track hand movement and fingers? I have been searching the web and haven't found anything interesting about this. Another camera I am thinking of using is the Asus motion sensor, but I don't know whether it is better than the Kinect (more options; I know both use OpenNI) or whether they are much the same.
Thanks for your time!
I would look at these links:
Finger tracking in Kinect
http://www.kinecthacks.com/kinect-hand-tracking-gesture-experiment/
http://makematics.com/code/FingerTracker/
http://social.msdn.microsoft.com/Forums/en-US/c128197f-6925-49c6-bedc-d7692d03c0a9/fingers-tracking-using-kinect
http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device
These should get you started and give you many options. You can use the official SDK or OpenNI; my personal preference is the SDK, but OpenNI or OpenKinect may be better in this case, especially because of the FingerTracker API (link 3). That said, the SDK route also works: link 5 includes source code for finger tracking with an Xbox Kinect.

Alpha 7 Camera API Shutter Speed

I was trying to implement a long-exposure timer for the Sony Alpha 7 using the Remote Camera API. Is it possible to set the shutter speed to BULB using the API? Also, is it possible to control the exposure time from within an app when the shutter is set to BULB?
I hope Sony will implement these features, otherwise the API is kind of useless for many scenarios.
Here is an interesting article about exactly this demand: http://blog.programmableweb.com/2013/09/10/new-sony-camera-remote-api-leaves-developers-wanting-more/
I would be very happy to know if the features will be implemented in the future.
Knowing this, we could already start developing apps; otherwise it makes no sense to begin...
Sorry, at this time those features are not available in the API. We will let you know if they are implemented in the future.
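In the meantime, one way to detect whether a given body/firmware ever starts offering BULB is to query `getAvailableShutterSpeed`, whose result is `[currentSpeed, [candidates...]]`, and look for it in the candidate list. A hedged sketch; the method name is from the Camera Remote API beta reference, but the endpoint address and its availability on the A7 are assumptions:

```python
import json
import urllib.request

# Hypothetical camera address; adjust to your setup.
ENDPOINT = "http://192.168.122.1:8080/sony/camera"

def bulb_supported(api_result):
    """Given the 'result' array of getAvailableShutterSpeed,
    i.e. [currentSpeed, [candidates...]], report whether BULB is offered."""
    _current, candidates = api_result
    return "BULB" in candidates

def query_bulb(endpoint=ENDPOINT):
    """POST the query to the camera and check its reply for BULB."""
    body = json.dumps({
        "method": "getAvailableShutterSpeed",
        "params": [],
        "id": 1,
        "version": "1.0",
    }).encode()
    with urllib.request.urlopen(endpoint, data=body) as resp:
        reply = json.loads(resp.read())
    return bulb_supported(reply["result"])
```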

Does Google Glass Mirror API support WebRTC

The Glass Mirror Timeline API document suggests that video can be streamed to a Glass headset (https://developers.google.com/glass/timeline). I'm trying to determine whether this could theoretically work over a WebRTC connection. Documentation on the browser/rendering capabilities of the timeline is limited, so has anyone tried something similar?
Ran across a discussion about WebRTC + Glass in a reported issue - https://code.google.com/p/webrtc/issues/detail?id=2083
From the sound of things, someone got it to work in Chrome Beta at 176×144. There were some audio/frame-rate issues that appear to have been resolved. Note, though, that they talk about streaming video/audio from the Glass to another machine, not streaming video into the Glass display. I believe that at this point it will only work in Chrome Beta, and I doubt you could integrate this into the timeline smoothly, though with how hard Google is pushing WebRTC I would not be surprised to see increased support. I'll be testing this out with my own WebRTC demos soon and will know more.
WebRTC on Google Glass has been reported: http://www.ericsson.com/research-blog/context-aware-communication/field-service-support-google-glass-webrtc/. There were some limitations, e.g. the device overheated after 10 minutes.
Another case - http://tech.pristine.io/pristine-presents-at-the-austin-gdg (thanks to Arin Sime)

Can the Kinect SDK be run with saved Depth/RGB videos, instead of a live Kinect?

This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and Depth videos that were recorded from a Kinect. I would like to use the Kinect SDK's skeletal tracking on these videos, but after a bunch of searching, I haven't found a conclusive answer to whether or not this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature within the SDK itself, but you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.