Video call through the Telegram API

I would like to know whether it is possible to start a video call with another user by means of the tdlib library and feed it the picture from a camera connected to a Raspberry Pi. If so, how do I do that, and which methods should I use?

To work with the video-call part of Telegram you need to use Telegram's WebRTC client, tgcalls (https://github.com/TelegramMessenger/tgcalls). The MTProto methods only give you the parameters needed to start this library; the actual video and audio bytes are passed through tgcalls itself.
There is already a high-level Python library that wraps the official tgcalls library, although support for private (one-to-one) calls is still on its TODO list. You can use this project as an example of how to work with tgcalls:
https://github.com/MarshalX/tgcalls

Here are Python sources with working code for streaming (broadcasting) video from YouTube/m3u8/mp4 into a call:
https://github.com/EverythingSuckz/tgvc-video-tests
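For the Raspberry Pi use case, the closest working pattern today is streaming into a group call (voice chat) rather than a private call, since private calls are still on the library's TODO list. The sketch below follows the style of MarshalX's pytgcalls examples; treat the exact class and method names as assumptions and check them against the project's current examples, and note that the camera feed would first have to be captured (e.g. with ffmpeg) into whatever raw format the library expects.

# Rough sketch only: names follow MarshalX's pytgcalls examples and may differ between versions.
import asyncio
from pyrogram import Client
from pytgcalls import GroupCallFactory

async def main():
    app = Client('my_session')                 # a regular MTProto client (Pyrogram here)
    await app.start()

    # Stream a pre-captured file into a group call; for a Raspberry Pi camera you would
    # first dump frames from the camera (e.g. via ffmpeg) into this file/pipe.
    group_call = GroupCallFactory(app).get_file_group_call('camera_capture.raw')
    await group_call.start('@my_group')        # join the chat's voice chat and start sending

    await asyncio.sleep(60)                    # keep streaming for a minute in this sketch
    await group_call.stop()
    await app.stop()

asyncio.run(main())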

Related

Hit the getStats API on a WebRTC app built using Daily.co and get the statistics for an ongoing call

I am using daily.co to create an audio/video conference application with React and TypeScript. I am unable to get hold of the RTCPeerConnection object in order to hit the getStats API. Any pointers or references on where I should be looking?
WebRTC app: https://www.daily.co/

How to perform continuous speech-to-text on a WebRTC audio stream in a mobile app

I am trying to add continuous speech-to-text recognition to a mobile application during a WebRTC audio-only call.
I'm using React Native on the mobile side, with the react-native-webrtc module and a custom web API for the signaling part. I have control over the web API, so I am able to add the feature on its side if that is the only solution, but I would prefer to perform it on the client side to avoid consuming bandwidth when there is no need.
First, I worked on and tested some ideas in my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
I merged the audio-only WebRTC demo with the audio-visualiser demonstration into one page, but I could not find how to connect a MediaElementAudioSourceNode (created via AudioContext.createMediaElementSource(remoteStream) at line 44 of streamvisualizer.js) to a Web Speech API SpeechRecognition instance. In the Mozilla documentation, the audio stream seems to come with the constructor of the class, which presumably calls the getUserMedia() API.
Second, during my research I found two open-source speech-to-text engines: CMU Sphinx and Mozilla's DeepSpeech. The first one has a JS binding and looks promising with the audioRecorder that I can feed with my own MediaElementAudioSourceNode from the first attempt. However, how do I embed this in my React Native application?
There are also native Android and iOS WebRTC modules, which I may be able to connect with CMU Sphinx's platform-specific bindings (iOS, Android), but I don't know about native class interoperability. Can you help me with that?
I haven't yet created any "grammar" or defined any "hot words" because I am not sure which technologies will be involved, but I can do that later once I am able to connect a speech-recognition engine to my audio stream.
You need to stream the audio to an ASR server, either by adding another WebRTC party to the call or over some other protocol (TCP/WebSocket/etc.). On the server you perform recognition and send the results back; a minimal sketch of the WebSocket approach follows at the end of this answer.
First, I worked on and tested some ideas in my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
This is experimental and does not really work in Firefox. In Chrome it only takes microphone input directly, not dual stream from caller and callee.
The first one has a JS binding and looks promising with the audioRecorder that I can feed with my own MediaElementAudioSourceNode from the first attempt.
You will not be able to run this as local recognition inside your React Native app.
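As a rough illustration of the second option above (streaming the audio to a server for recognition), here is a minimal Python sketch of a WebSocket endpoint that accepts raw audio chunks and pushes transcripts back. The recognize() helper is a placeholder, not a real API; in practice it would wrap whichever engine you pick (CMU Sphinx, DeepSpeech, a cloud ASR, etc.).

# Minimal sketch: a WebSocket server receiving audio chunks and returning transcripts.
import asyncio
import websockets   # pip install websockets

def recognize(audio_chunk):
    # Placeholder: feed the chunk to your ASR engine and return the recognized text.
    return ""

async def handle_client(websocket):
    async for message in websocket:            # each message is one chunk of audio bytes
        text = recognize(message)
        if text:
            await websocket.send(text)         # push the (partial) transcript back

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()                 # run forever

asyncio.run(main())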

Sony A7SII not showing expected API features

I'm trying to control a Sony a7SII via the pysony Sony Camera API wrapper.
https://github.com/Bloodevil/sony_camera_api
I'm able to connect to the camera, start the session, but I'm not getting the expected list of possible API functions back.
My process looks like this:
import pysony
api = pysony.SonyAPI()      # create the API client (default camera endpoint)
api.startRecMode()          # put the camera into remote-shooting mode
api.getAvailableApiList()   # ask which methods are currently exposed
{'id': 1, 'result': [['getVersions', 'getMethodTypes', 'getApplicationInfo', 'getAvailableApiList', 'getEvent', 'actTakePicture', 'stopRecMode', 'startLiveview', 'stopLiveview', 'awaitTakePicture', 'getSupportedSelfTimer', 'setExposureCompensation', 'getExposureCompensation', 'getAvailableExposureCompensation', 'getSupportedExposureCompensation', 'setShootMode', 'getShootMode', 'getAvailableShootMode', 'getSupportedShootMode', 'getSupportedFlashMode']]}
As you can see, the returned list does not contain a full set of controls.
Specifically, I want to be able to set the shutter speed and aperture, which, according to this matrix (https://developer.sony.com/develop/cameras/), I should be able to do.
Any ideas would be much appreciated.
Turns out, both pysony and the API are working fine.
To get full API functionality, you must install the Remote app from the store rather than relying on the "embedded" remote that ships with the camera.
Also, as a note: it seems to take a little time for 'api.startRecMode()' to actually update the available API list, so it is sensible to add a small delay to your code (see the sketch below).
See:
src/example/dump_camera_capabilities.py
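A minimal illustration of the delay suggestion, reusing the same pysony calls as in the question (the 2-second value is just a guess; tune it for your camera):

import time
import pysony

api = pysony.SonyAPI()
api.startRecMode()
time.sleep(2)                       # give the camera time to expose the full API
print(api.getAvailableApiList())    # with the store Remote app installed, this should now
                                    # include shutter-speed and aperture controls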

WebRTC used as a simple messaging app

I am trying to write a small application using WebRTC that can be used as a messaging/chat application between two computers.
I see this:
http://simpl.info/rtcdatachannel/
and it is not working. Any suggestions?
I wrote the simpl.info/rtcdatachannel example. It's only designed to show off data channels working within one page.
For a complete peer-to-peer messaging application, I suggest adding RTCDataChannel functionality to something like apprtc.appspot.com. You could also consider a readymade abstraction library like PeerJS or EasyRTC.
You might also want to take a look at the RTCPeerConnection/RTCDataChannel/signaling codelab I built.
In the above example, judging from the trace log, the ICE candidates are generated but not exchanged between the peers, possibly because there is a problem in sending the 'offer' or responding with the 'answer'. Also, the above example works only in Chrome, because only webkitRTCPeerConnection is used; with mozRTCPeerConnection it could work on Firefox as well.
If you want to develop a chat application for text only and not for video chat, then you can use Node.js with socket.io or WebSockets for this.
You may like :) the following two libraries:
DataChannel.js / for WebRTC data/text/file sharing (among multiple users)
RTCMultiConnection.js / for data as well as media (screen/audio/video/etc.) sharing
Firebase.com is a "suggested" starting point for newcomers; it can be used for signaling. You just need to override "openSignalingChannel" and you're done!
You can use PeerJS (https://github.com/peers/peerjs), or PeerChat (https://github.com/Hironate/PeerChat) if you want to do it with Node.js.
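For completeness, the "data channels within one page" idea from the first answer can also be sketched in Python with the aiortc library (not mentioned above, just an illustration of the offer/answer plus data-channel flow, with both peers living in one process):

# Two peers in one process exchange a message over an RTCDataChannel (aiortc).
import asyncio
from aiortc import RTCPeerConnection

async def main():
    pc1 = RTCPeerConnection()
    pc2 = RTCPeerConnection()

    channel = pc1.createDataChannel("chat")

    @channel.on("open")
    def on_open():
        channel.send("hello from pc1")          # fires once the channel is established

    @pc2.on("datachannel")
    def on_datachannel(incoming):
        @incoming.on("message")
        def on_message(message):
            print("pc2 received:", message)

    # "Signaling" reduced to copying the offer/answer between the two in-process peers.
    await pc1.setLocalDescription(await pc1.createOffer())
    await pc2.setRemoteDescription(pc1.localDescription)
    await pc2.setLocalDescription(await pc2.createAnswer())
    await pc1.setRemoteDescription(pc2.localDescription)

    await asyncio.sleep(2)                      # give the channel time to open and deliver
    await pc1.close()
    await pc2.close()

asyncio.run(main())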

Start playing streaming audio on Symbian

The tiny question is:
How do I start playing (with RealPlayer?) a given online resource (e.g. http://example.com/file.mp3)?
PyS60, C++ or C# via RedFiveLabs would do.
EDIT1: Title changed from "Start RealPlayer on symbian" to something more appropriate.
I think the title is a little misleading if you just want to play back media content and not use a particular application for it.
In C++ there is CMdaAudioPlayerUtility::OpenUrlL() but it's not widely implemented. For example in S60 it will complete with KErrNotSupported status. To play files you can use other open functions in CMdaAudioPlayerUtility such as OpenFileL() or OpenDesL() but you need a separate mechanism for retrieving the files or at least the bytes onto the device.
There is also CVideoPlayerUtility::OpenUrlL() which supports rtsp audio streams but not http.
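If PyS60 is an option, the simplest workaround consistent with the answer above (get the bytes onto the device first, then play the local copy) could be sketched like this. It assumes the standard PyS60 audio, urllib and e32 modules are present on the device, and remember that PyS60 is Python 2.x:

# PyS60 (Python 2.x) sketch: download the resource, then play the local copy.
import audio
import urllib
import e32

url = 'http://example.com/file.mp3'
local_path = u'e:\\file.mp3'           # memory card; adjust for your device

urllib.urlretrieve(url, local_path)    # the separate retrieval mechanism mentioned above

sound = audio.Sound.open(local_path)
sound.play()
e32.ao_sleep(30)                       # keep the script alive while playback runs
sound.close()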