I'm trying to control a Sony a7SII via the pysony Sony Camera API wrapper.
https://github.com/Bloodevil/sony_camera_api
I'm able to connect to the camera and start the session, but I'm not getting the expected list of possible API functions back.
My process looks like this:
import pysony
api = pysony.SonyAPI()
api.startRecMode()
api.getAvailableApiList()
{'id': 1, 'result': [['getVersions', 'getMethodTypes', 'getApplicationInfo', 'getAvailableApiList', 'getEvent', 'actTakePicture', 'stopRecMode', 'startLiveview', 'stopLiveview', 'awaitTakePicture', 'getSupportedSelfTimer', 'setExposureCompensation', 'getExposureCompensation', 'getAvailableExposureCompensation', 'getSupportedExposureCompensation', 'setShootMode', 'getShootMode', 'getAvailableShootMode', 'getSupportedShootMode', 'getSupportedFlashMode']]}
As you can see, the returned list does not contain a full set of controls.
Specifically, I want to be able to set the shutter speed and aperture, which, based on this matrix (https://developer.sony.com/develop/cameras/), I should be able to do.
Any ideas would be much appreciated.
Turns out, both pysony and the API are working fine.
You must install the Remote app from the store, rather than relying on the "embedded" remote that ships with the camera, to get full API functionality.
Also, as a note: it seems to take a little time for api.startRecMode() to actually update the available API list, so it's sensible to add a small delay to your code.
See:
src/example/dump_camera_capabilities.py
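For reference, a minimal sketch of that connect, wait, then query sequence (the two-second delay is an arbitrary value of mine; tune it for your camera):

import time
import pysony

# Connect with the default endpoint, as in the question.
api = pysony.SonyAPI()

# Switch the camera into remote-shooting mode...
api.startRecMode()

# ...then give it a moment to expose the full API list before asking for it.
time.sleep(2)

print(api.getAvailableApiList())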
I am developing an Android app that gets data from Movesense using the GATT profile from the sample app GATT Sensor Data App here.
I followed the tutorial available here. Building the app and getting the DFU worked fine. I can get IMU, HR and temperature data with no issues.
Now I'd like to add a tap detection feature to my app. I understand that I have to subscribe to '/System/States', but first I need to be able to receive the system state data.
I understand that I need a modified DFU for that, but I don't understand what changes I should make in which files of the gatt_sensordata_app before rebuilding and generating the new DFU.
What changes should I make in order to broadcast /System/States data?
(I usually just deal with Android so apologies for the very basic question.)
I tried adding #include "system_states/resources.h" to GATTSensorDataClient.cpp but I don't know how to continue.
The normal data streaming in the gatt_sensordata_app uses the SBEM-encoding code that the build process generates when building the firmware. However, /System/States is not among the paths that the code can serialize, so the only possibility is to implement the States support in the firmware.
The easiest way is to do the following:
1. In your Python app, call data subscription with "/System/States/3" (3 == DOUBLE_TAP); a client-side sketch is shown after these steps.
2. Add a special case to the switch in onNotify that matches localResourceId against WB_RES::LOCAL::SYSTEM_STATES_STATEID::LID.
3. In that handler, return the data the way you want. The easiest approach is to copy-paste the "default" handler and replace the code between the getSbemLength() and writeToSbemBuffer(...) calls with your own serialization code.
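For step 1, here is a rough client-side sketch using the bleak BLE library. The characteristic UUIDs and the command layout (one command byte, one client-reference byte, then the ASCII resource path) are assumptions of mine based on the sample's data protocol; take the real values from the gatt_sensordata_app sources.

import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "XX:XX:XX:XX:XX:XX"   # your Movesense sensor's BLE address

# Placeholder UUIDs -- replace with the command/data characteristic UUIDs
# defined in the gatt_sensordata_app firmware.
COMMAND_CHAR_UUID = "0000beef-0000-1000-8000-00805f9b34fb"
DATA_CHAR_UUID = "0000feed-0000-1000-8000-00805f9b34fb"

def on_data(sender, data: bytearray):
    # The first bytes carry the response type and client reference; the rest is payload.
    print("notification:", data.hex())

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(DATA_CHAR_UUID, on_data)
        # Assumed subscribe command: 0x01 = subscribe, 0x63 = free client reference,
        # followed by the resource path (3 == DOUBLE_TAP).
        command = bytearray([1, 99]) + b"/System/States/3"
        await client.write_gatt_char(COMMAND_CHAR_UUID, command, response=True)
        await asyncio.sleep(30)    # listen for tap events for a while

asyncio.run(main())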
Full disclosure: I work for the Movesense team
I would like to know whether it is possible to start a video call with another user by means of the tdlib library and stream the picture from a camera connected to a Raspberry Pi into that call. If so, how do you do that? What methods should I use?
To work with the video-call part of Telegram you need to use Telegram's WebRTC client, tgcalls (https://github.com/TelegramMessenger/tgcalls). With MTProto methods you can get the parameters needed to start this library; the video and audio bytes are passed through it.
There is already a high-level Python library that works with the official tgcalls library, but support for private calls is still on its TODO list. You can use that project as an example of how to work with the tgcalls library:
https://github.com/MarshalX/tgcalls
Here are the Python sources with working code for a video broadcaster that streams from YouTube/m3u8/mp4:
https://github.com/EverythingSuckz/tgvc-video-tests
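As for getting the picture off the camera itself, here is a rough sketch (my illustration, not something prescribed above) that grabs raw frames with OpenCV on the Raspberry Pi; how those bytes are then fed into a call depends on which tgcalls wrapper you use, so that part is left as a comment.

import cv2

cap = cv2.VideoCapture(0)          # first attached camera
if not cap.isOpened():
    raise RuntimeError("camera not found")

for _ in range(300):               # grab roughly 10 seconds at 30 fps, then stop
    ok, frame = cap.read()         # frame is a BGR numpy array
    if not ok:
        break
    raw_bytes = frame.tobytes()    # raw pixel data for your call pipeline
    # ... hand raw_bytes to the tgcalls wrapper of your choice here ...

cap.release()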
Looking at the MDN documentation, iOS/Safari fully supports ServiceWorkerGlobalScope.onfetch, but when you look at the FetchEvent documentation it says FetchEvent is not supported at all by Safari.
In particular, I would like to store some state for each client and was hoping to use the FetchEvent.clientId property to index it. I presume I also have access to the FetchEvent.request object (otherwise I can't see how a service worker could do anything useful), and I could simulate a client ID from a parameter passed in the URL. But the docs don't really tell me what iOS/Safari supports and what it doesn't, so I don't know which way to go.
Can someone please tell me precisely what iOS/Safari passes when it calls the defined onfetch handler?
I found the answer to my question by using https://jakearchibald.github.io/isserviceworkerready/demos/fetchevent/, connecting my iPad to my MacBook, and debugging the iPad. I was eventually able to open the Web Inspector for the service worker for that page, and the console.log output showed the event passed in.
FetchEvent.clientId is present but is a zero-length string. As it happens, I did the same thing on my (Linux) desktop using Chrome, and there it is also a zero-length string, BUT Chrome has another property, resultingClientId, containing what looks like a UUID. That property is not there in Safari.
FetchEvent.request is there, and in particular the URL. So I can generate my own client ID in the client (I am using Date.now().toString(), as that is good enough for my purposes) for use in the service worker. In fact, my site was already passing such an ID in the URLs I need to intercept, even without a service worker, so I am happy that I have a solution.
I am trying to write a small application using WebRTC that can be used as a messaging/chat application between two computers.
I see this:
http://simpl.info/rtcdatachannel/
and it is not working. Any suggestions?
I wrote the simpl.info/rtcdatachannel example. It's only designed to show off data channels working within one page.
For a complete peer-to-peer messaging application, I suggest adding RTCDataChannel functionality to something like apprtc.appspot.com. You could also consider a ready-made abstraction library like PeerJS or EasyRTC.
You might also want to take a look at the RTCPeerConnection/RTCDataChannel/signaling codelab I built.
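If you want to see the same single-page (here: single-process) data channel flow outside the browser, below is a rough Python sketch using the aiortc library; that is my choice for illustration, not something the answers above prescribe, and in a real two-computer chat the offer/answer would travel over a signaling channel instead of being passed directly.

import asyncio
from aiortc import RTCPeerConnection

async def main():
    pc1 = RTCPeerConnection()
    pc2 = RTCPeerConnection()

    channel = pc1.createDataChannel("chat")
    done = asyncio.Event()

    @channel.on("open")
    def on_open():
        channel.send("hello from pc1")

    @pc2.on("datachannel")
    def on_datachannel(incoming):
        @incoming.on("message")
        def on_message(message):
            print("pc2 received:", message)
            done.set()

    # In a real app, the offer/answer would be exchanged over your signaling channel.
    await pc1.setLocalDescription(await pc1.createOffer())
    await pc2.setRemoteDescription(pc1.localDescription)
    await pc2.setLocalDescription(await pc2.createAnswer())
    await pc1.setRemoteDescription(pc2.localDescription)

    await done.wait()
    await pc1.close()
    await pc2.close()

asyncio.run(main())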
In the simpl.info example, the trace log shows that the ICE candidates are generated, but they are not exchanged between the two peers, possibly because there is a problem in sending the 'offer' or responding with the 'answer'. Also, that example works only in Chrome, because only webkitRTCPeerConnection is used; with mozRTCPeerConnection it could work on Firefox as well.
If you want to develop a chat application for text only, and not video chat, then you can use Node.js with socket.io or WebSockets for this.
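If you go the WebSocket route, a minimal text-chat relay in Python with the websockets package (shown purely as an illustration; Node.js with socket.io works just as well, and the handler signature below assumes a recent websockets release) could look like this:

import asyncio
import websockets

CLIENTS = set()

async def handler(websocket):
    # Forward every message from one client to all other connected clients.
    CLIENTS.add(websocket)
    try:
        async for message in websocket:
            for other in CLIENTS:
                if other is not websocket:
                    await other.send(message)
    finally:
        CLIENTS.discard(websocket)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()   # run until cancelled

asyncio.run(main())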
You may like :) the following two libraries:
DataChannel.js / for webrtc data/text/file sharing (among multi-users)
RTCMultiConnection.js / for data as well as media (screen/audio/video/etc) sharing
Firebase.com is a "suggested" starting point for newcomers; it can be used for signaling. You just need to override "openSignalingChannel" and you're done!
You should use PeerJS (https://github.com/peers/peerjs), or PeerChat (https://github.com/Hironate/PeerChat) if you want to do it with Node.js.
I wrote this code to hook API functions by changing the address in the IAT and EAT: http://pastebin.com/7d9N1J2c
This works just fine when I want to hook "recv" or "connect". However for some unknown reason when trying to hook "gethostbyname", my hook function is never called.
I tried to find "gethostbyname" in a debugger by taking the base address of the wsock32.dll module + 0x375e, which is the offset that ordinal 52 of my wsock32.dll shows. But that just lands me in some random asm code, not at the beginning of a function.
The same method however works fine for trying to find the "recv" entry point.
Does anyone see what I might be doing wrong?
I recommend this tool:
http://www.moduleanalyzer.com/
It does exactly the same thing and shows the URL that was connected to with that API.
The problem is that there is more than one API for translating a hostname to an address (for example, getaddrinfo in addition to gethostbyname). The application you are hooking may be using another version of the API that you're not intercepting.
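One way to check which Winsock DLL actually exports the routine you are hooking, and whether the export is just a forwarder to another module, is to dump the export tables, for example with Python's pefile package (my suggestion, not part of the original answer):

import pefile

# Dump the gethostbyname export from both Winsock DLLs, including any forwarder
# string, to see which module actually implements the function you are hooking.
for path in (r"C:\Windows\System32\wsock32.dll",
             r"C:\Windows\System32\ws2_32.dll"):
    pe = pefile.PE(path)
    for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
        name = exp.name.decode() if exp.name else ""
        if name.lower() == "gethostbyname":
            forwarder = exp.forwarder.decode() if exp.forwarder else "none"
            print(f"{path}: ordinal {exp.ordinal}, RVA {exp.address:#x}, "
                  f"forwarder: {forwarder}")

If the export turns out to be a forwarder, its export-table entry points at a forwarder string rather than at code, which could also explain why following the offset lands you in "random" bytes instead of a function prologue.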
Run a disassembler like IDA and attach it to your process after you hook these functions; on attaching, IDA will reflect the patched state. Then resume the process and check what is wrong.
Alternatively, there are many libraries for doing hooks with trampolines, such as Microsoft Detours, NCodeHook, etc.