How to add a socket as a QTCaptureOutput for a QTCaptureSession - objective-c

I'm reading Apple's documentation on using QTKit to capture streaming audio and video from input sources. It explains that the central class, QTCaptureSession, handles the input and sends it to every registered output (QTCaptureOutput). The documentation lists six subclasses of QTCaptureOutput (e.g., QTCaptureMovieFileOutput).
I would like to create a subclass of QTCaptureOutput to write to an NSSocket so I could send the video stream over to another machine.
Any ideas would be appreciated.

QTCaptureOutput does not strike me as a class designed to be subclassed outside of QTKit. Instead, you can try dumping the data to a local domain socket in the file system using a QTCaptureFileOutput object, and simultaneously pulling the data from that local domain socket and sending it over to the remote machine.
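If it helps, here is a minimal sketch of the relay half in Node.js/TypeScript, assuming the capture side writes into a local domain socket at /tmp/capture.sock and the remote machine listens on TCP port 9000 (the path, host, and port are all placeholders, and the QTKit capture side is not shown):

    import * as net from "net";
    import * as fs from "fs";

    const LOCAL_SOCKET = "/tmp/capture.sock"; // placeholder path
    const REMOTE_HOST = "192.168.1.20";       // placeholder host
    const REMOTE_PORT = 9000;                 // placeholder port

    // Remove a stale socket file from a previous run, if any.
    if (fs.existsSync(LOCAL_SOCKET)) fs.unlinkSync(LOCAL_SOCKET);

    // Listen on the local domain socket the capture process writes to.
    const server = net.createServer((local) => {
      // For each local connection, open a TCP connection to the remote
      // machine and pipe the captured bytes straight through.
      const remote = net.createConnection(REMOTE_PORT, REMOTE_HOST, () => {
        local.pipe(remote);
      });
      remote.on("error", (err) => {
        console.error("remote connection failed:", err.message);
        local.destroy();
      });
    });

    server.listen(LOCAL_SOCKET, () => {
      console.log(`relaying ${LOCAL_SOCKET} -> ${REMOTE_HOST}:${REMOTE_PORT}`);
    });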

Related

ApiRTC - Media always sent to the cloud, even with meshOnlyEnabled

As a follow-up to my previous post (ApiRTC - Behaviour with meshModeEnabled and meshOnlyEnabled)
Hello,
You say that an SFU is necessary for any activity that requires centralizing all the streams (recording, bandwidth optimization, ...). However, in MESH mode, the exchanged files/media still end up being recorded on the Apizee media server even though I don't go through the SFU. How is this possible?
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
I have not found anything about this in the documentation.
By the way, the documentation often mentions the term "MCU"; does this mean that ApiRTC also uses an MCU server in addition to the SFU?
Thanks in advance.
apirtc
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
Concerning a recording of all the streams in the conversation (via the startRecording method of the Conversation object; see https://apirtc.github.io/references/apirtc-js/Conversation.html#startRecording__anchor):
--> The composition of multiple streams into one video file is done server-side by the SFU (v4.4.8).
Concerning files (sent through the conversation.pushData method):
--> We manage the file transfer by uploading the file to storage and sharing the URI with all parties of the conversation. P2P transfer is not available (v4.4.8).
To exchange data in P2P mode, you can use the Conversation.sendData method to send raw data to all participants.
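For illustration, a rough sketch of that flow; the exact sendData signature and the "data" event name are assumptions to verify against the apiRTC reference linked above:

    // Hypothetical sketch: check exact signatures/event names against
    // https://apirtc.github.io/references/apirtc-js/
    declare const conversation: any; // an already-joined apiRTC Conversation

    // Send raw data peer-to-peer to all participants (unlike pushData,
    // which uploads the file to storage and shares a URI).
    conversation.sendData({ kind: "note", payload: [1, 2, 3] })
      .then(() => console.log("data sent to all participants"))
      .catch((err: Error) => console.error("sendData failed:", err));

    // Receive data sent by other participants ("data" event name assumed).
    conversation.on("data", (event: any) => {
      console.log("received:", event);
    });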
Regarding your question about the MCU: no, ApiRTC doesn't use any MCU server to date (v4.4.8). The documentation refers to an MCU only for very specific on-premise deployments, which are not supported for ApiRTC users.
Cheers,
Romain

Server side real time analysis of video streamed from client

I'm trying to build a system that does real-time analysis on the server for video streamed from the client using WebRTC.
Here is what I currently have in mind. I would capture the webcam video stream from the client and send it (compressed using H.264?) to my server.
On my server, I would receive the stream, decode it, and pass every raw frame to my C++ library for analysis.
The output of the analysis (box coordinates to draw) would then be sent back to the client via WebRTC or a separate WebSocket connection.
I've been looking online and found open-source media servers like Kurento and Mediasoup, but since I only need to read the stream (no dispatching to other clients), do I really need to use an existing server? Or could I build it myself, and if so, where do I start?
I'm fairly new to WebRTC and the video streaming world in general, so I was wondering: does this whole approach sound right to you?
That depends on how real-time your requirements are. If you want 30-60 fps and near real-time latency, getting the images to the server via RTP is the best solution. You'll then need things like a jitter buffer, RTP depacketization, and a video decoder.
If you only require one image per second, grabbing it from a canvas and sending it via WebSockets or HTTP POST is easier. https://webrtchacks.com/webrtc-cv-tensorflow/ shows how to do that in Python.
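For illustration, a minimal browser sketch of the one-frame-per-second approach in TypeScript, assuming a <video> element already playing the webcam (via getUserMedia) and a placeholder WebSocket endpoint:

    // Grab one frame per second from the <video> element and ship it
    // to the server as a JPEG blob over a WebSocket.
    const video = document.querySelector("video")!;
    const canvas = document.createElement("canvas");
    const ws = new WebSocket("ws://example.com/frames"); // placeholder URL

    setInterval(() => {
      if (ws.readyState !== WebSocket.OPEN) return;
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext("2d")!.drawImage(video, 0, 0);
      // toBlob avoids the base64 overhead of toDataURL.
      canvas.toBlob((blob) => {
        if (blob) ws.send(blob);
      }, "image/jpeg", 0.8);
    }, 1000);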

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (Java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately, the audio element buffers (prefetches) very aggressively, so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior will be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream, since it doesn't process the incoming data until it has consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input, then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different from what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage them I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this? I am solving a similar challenge: on the browser side I'm using the Web Audio API, which has nice ways to render streaming audio input, with Node.js on the server side using WebSockets as the middleware to send the browser streaming PCM buffers.
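For illustration, a minimal sketch of the browser half of that setup, assuming the server pushes raw 16-bit little-endian mono PCM at 44100 Hz over a WebSocket (the URL and sample format are placeholder assumptions):

    // Play raw PCM chunks arriving over a WebSocket with the Web Audio API.
    const ctx = new AudioContext();
    const ws = new WebSocket("ws://example.com/pcm"); // placeholder URL
    ws.binaryType = "arraybuffer";

    let playhead = ctx.currentTime;

    ws.onmessage = (msg) => {
      const int16 = new Int16Array(msg.data as ArrayBuffer);
      const buffer = ctx.createBuffer(1, int16.length, 44100);
      const channel = buffer.getChannelData(0);
      for (let i = 0; i < int16.length; i++) {
        channel[i] = int16[i] / 32768; // scale to float [-1, 1)
      }
      const src = ctx.createBufferSource();
      src.buffer = buffer;
      src.connect(ctx.destination);
      // Schedule chunks back-to-back so playback is gapless.
      playhead = Math.max(playhead, ctx.currentTime);
      src.start(playhead);
      playhead += buffer.duration;
    };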

Sending File Over Bluetooth

I'm developing an application that has a feature involving sending data over Bluetooth. I've tested sending data with the GameKit framework and it works fine. My question is: is that the correct way to go, or should I use some other way of transporting data over Bluetooth?
An additional question: when I'm sending data, both devices (one sending and one receiving) have to create a GKPeerPickerController and show it. Is there a way for the receiver to be notified of the connection without showing GKPeerPickerController, so that the sender can send a file and the receiver can just confirm that he wants to receive it (without having to look for the sender)?
Thanks,
Mario
You can use the GKSession class and its delegate protocol to handle the peer-to-peer connection programmatically and implement your own views.

How to Read SMS/MMS in UIQ

How to Read SMS/MMS in UIQ?
I am going to assume that you want information about how to write some C++ source code that will allow an application to receive SMS/MMS and read the content of the messages it receives.
On Symbian OS, the message store can store SMS, MMS, EMAIL...
The API of the message store is generic.
In order to write and read data to/from the message store, you'll need to familiarise yourself with the following classes: TMsvId, CMsvSession, CClientMtmRegistry, TMsvEntry and CMsvEntry.
I am obviously biased but I would advise reading the messaging chapter of http://www.quickrecipesonsymbianos.com in order to get an explanation of how the messaging store works and the sample code to use it easily.
Receiving messages, on the other hand, is more complicated.
Listening for and receiving SMS is done using the generic networking API. That's RSocketServ and RSocket. Mostly, you need to use the right IOCTL parameters on the socket.
You can specify a specific port in order to receive only the SMS messages that are intended for your application. Trying to receive all SMS could be an issue, as the native message viewer engine and the embedded Java virtual machine's PushRegistry module could both already be listening for all SMS.
You will find useful classes and constants in the following header files in your SDK:
gsmuset.h, smsuaddr.h, smsustrm.h, gsmubuf.h and gsmumsg.h.
TSmsAddr, KSMSDatagramProtocol, KSMSAddrFamily, TSmsUserDataSettings, CSmsBufferBase, CSmsPDU, RSmsSocketReadStream, RSmsSocketWriteStream and CSmsMessage are of particular interest. Asynchronously receiving an SMS is actually done using RSocket.Ioctl().
There are SMS-specific error codes whose names start with "KSmsErr".
Receiving MMS on UIQ is done through a UIQ-specific API, one that you won't find on Series 60 phones. This is the reason why you won't find much discussion of a Symbian-generic MMS API in the literature. You are better off going directly to the UIQ or Sony Ericsson development communities when you have more detailed questions.
Your application shouldn't have to use RSocket to receive MMS.
Careful, once again, both the Embedded Java virtual machine PushRegistry module and the native message viewer application engine are probably already listening for all incoming MMS messages.
The interesting header files are mmsclient.h, MmsSettingsStore.h, mmsentry.h and MmsApiExtensions.h.
Of particular interest are CMmsClientMtm, MmsApiExtensions, MMsvSessionObserver and MMMSMessageHandler.
Good luck.