Can Restcomm support multi-party video?

Hi, I used Restcomm to develop an app, and when I looked at its code I found
private void startCall(SignalingParameters signalingParameters)
{
    RCLogger.i(TAG, "startCall");
    callStartedTimeMs = System.currentTimeMillis();

    // Start room connection.
    logAndToast("Preparing call");
    // We don't have room functionality to notify us when ready; instead, we start connecting right now.
    this.onConnectedToRoom(signalingParameters);
}
So I want to know: does the restcomm-android-sdk not support multi-party video yet?

Pengwang, it depends on what you mean by multi-party video. As of now, the Restcomm Android SDK supports P2P audio/video, which means that two people can have a video chat. But it doesn't support more than two participants yet.
Hope this helps,
Antonis Tsakiridis

Related

How to perform continuous speech-to-text on a WebRTC audio stream in a mobile app

I am trying to add continuous speech-to-text recognition to a mobile application during a WebRTC audio-only call.
I'm using React Native on the mobile side, with the react-native-webrtc module and a custom web API for the signaling part. I control the web API, so I can add the feature on its side if that is the only solution, but I would prefer to do it on the client side to avoid consuming bandwidth when there is no need.
First, I worked on and tested some ideas in my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
I merged the audio-only WebRTC demo with the audio visualiser demonstration into one page, but I could not find how to connect a mediaElementSourceNode (created via AudioContext.createMediaElementSource(remoteStream) at line 44 of streamvisualizer.js) to a Web Speech API SpeechRecognition instance. In the Mozilla documentation, the audio stream seems to come from the constructor of the class, which may call the getUserMedia() API internally.
Second, during my research I found two open-source speech-to-text engines: CMUSphinx and Mozilla's DeepSpeech. The first one has a JS binding and seems to work well with the audioRecorder that I can feed with my own mediaElementSourceNode on a first attempt. However, how do I embed this in my React Native application?
There are also native Android and iOS WebRTC modules, which I may be able to connect with CMUSphinx's platform-specific bindings (iOS, Android), but I don't know about native class interoperability. Can you help me with that?
I haven't yet created any "grammar" or defined any "hot words" because I am not sure which technologies will be involved, but I can do that later once I am able to connect a speech recognition engine to my audio stream.
You need to stream the audio to the ASR server, either by adding another WebRTC party to the call or over some other protocol (TCP/WebSocket/etc.). On the server you perform recognition and send the results back.
First, I worked on and tested some ideas in my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
This is experimental and does not really work in Firefox. In Chrome it only takes microphone input directly, not a dual stream from the caller and the callee.
The first one has a JS binding and seems to work well with the audioRecorder that I can feed with my own mediaElementSourceNode on a first attempt.
You will not be able to run this as local recognition inside your React Native app.
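To make the first suggestion above more concrete, here is a minimal, hedged sketch of the client side, written in Java since the same approach applies from a native Android module: it pushes raw 16-bit PCM frames to a hypothetical ASR server over a plain TCP socket and prints back whatever transcript lines the server returns. The host, port, newline-delimited framing, and the captureNextFrame() helper are all assumptions made for illustration; they are not part of any specific ASR product or of react-native-webrtc.

// Hedged sketch, not a specific product's API: stream 16-bit PCM frames to a
// hypothetical ASR server over a plain TCP socket and read transcript lines back.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class AsrStreamer {
    public static void main(String[] args) throws Exception {
        // "asr.example.com", port 9000 and the newline-delimited transcript
        // protocol are assumptions made for this example.
        try (Socket socket = new Socket("asr.example.com", 9000);
             OutputStream out = socket.getOutputStream();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            byte[] pcmFrame = new byte[3200]; // 100 ms of 16 kHz, 16-bit mono audio
            while (captureNextFrame(pcmFrame)) {  // fill from your audio sink (see helper below)
                out.write(pcmFrame);              // push raw audio to the server
                out.flush();
                if (in.ready()) {                 // server pushes partial transcripts as text lines
                    System.out.println("transcript: " + in.readLine());
                }
            }
        }
    }

    // Hypothetical helper: wire this to whatever hands you decoded audio frames,
    // e.g. an AudioRecord for the microphone or a sink on the remote WebRTC track.
    private static boolean captureNextFrame(byte[] frame) {
        return false; // nothing captured in this sketch
    }
}

A WebSocket works the same way; the key point is that the client only forwards audio frames, and all recognition (grammars, hot words, models) lives on the server.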

How do I upload firmware and configuration files via SpiDevice in Android Things?

How do I upload firmware (Microsemi) via the SPI port?
How do I then start, stop and check the status of said firmware?
Android Things Version: 0.4.1-devpreview (could not get the display to work on the newer builds)
Issue: I am a hardware noob. I have a Python driver used for uploading firmware and a config file via the SPI port. My goal is to port this to Android Things, leveraging the SpiDevice class.
The Python version of the driver strips off headers, checks block sizes, etc. I'm not sure if I need to do this with the Android Things SpiDevice.write(buffer, length) method.
Once I have uploaded the firmware and config, I will need to start it. In total I will need to upload the firmware, start it, check whether it is running, and stop it.
I have started writing a SpiDeviceManager and have naively begun to flesh out the methods (see below).
public void LoadFirmware()
{
    WriteFile(_firmwareFilePath);
    WriteFile(_configurationFilePath);
}

private void WriteFile(string filename)
{
    // Stream the asset to the SPI port in 2 KB chunks.
    using (System.IO.Stream stream = _assetManager.Open(filename))
    {
        byte[] buffer = new byte[2048];
        int bytesRead;
        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            _spiDevice.Write(buffer, bytesRead);
        }
    }
}
If anyone can point me at some docs or examples I would really appreciate it.
Also, if anyone has any advice about getting the display drivers working for the latest version of Android Things, I could really use your help. I have the device specs for config.txt; it just does not work.
Thanks,
Andrew
The SpiDevice just sends the data along.
It's up to me to pack the bytes in such a way that the chip can make sense of them.
I am sifting through the datasheet to that end.
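For the Java/Android Things side, here is a minimal, untested sketch of what such a port could look like against the 0.4.x peripheral API (PeripheralManagerService). The bus name "SPI0.0", the SPI mode, the 1 MHz clock, and the class and asset names are placeholder assumptions to be checked against the Microsemi datasheet; only the PeripheralManagerService/SpiDevice calls themselves come from the Android Things API.

import android.content.res.AssetManager;
import com.google.android.things.pio.PeripheralManagerService;
import com.google.android.things.pio.SpiDevice;

import java.io.IOException;
import java.io.InputStream;

public class SpiFirmwareLoader {

    private final AssetManager assetManager;
    private final SpiDevice spiDevice;

    public SpiFirmwareLoader(AssetManager assetManager) throws IOException {
        this.assetManager = assetManager;
        // "SPI0.0" is a placeholder; list the real bus names with getSpiBusList().
        PeripheralManagerService service = new PeripheralManagerService();
        spiDevice = service.openSpiDevice("SPI0.0");
        spiDevice.setMode(SpiDevice.MODE0);  // clock polarity/phase: check the Microsemi datasheet
        spiDevice.setFrequencyHz(1000000);   // 1 MHz is a placeholder, not a datasheet value
        spiDevice.setBitsPerWord(8);
    }

    // Stream an asset to the chip in 2 KB chunks. SpiDevice.write() just clocks
    // the bytes out, so any header stripping or block framing done by the Python
    // driver has to be reproduced here before writing.
    public void writeAsset(String assetName) throws IOException {
        try (InputStream in = assetManager.open(assetName)) {
            byte[] buffer = new byte[2048];
            int bytesRead;
            while ((bytesRead = in.read(buffer)) > 0) {
                spiDevice.write(buffer, bytesRead);
            }
        }
    }

    public void close() throws IOException {
        spiDevice.close();
    }
}

Calling writeAsset() once for the firmware image and once for the config file mirrors LoadFirmware() above; the start/stop/status commands would be further write() or transfer() calls whose framing has to come from the datasheet.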
@Nick Felker, thanks for the info about the Android Things display. Can't wait for the next release to drop.

Core Bluetooth deprecations for iOS 7

In iOS 7, some Core Bluetooth constants are now deprecated, such as CBUUIDGenericAccessProfileString and CBUUIDDeviceNameString. The Apple docs state:
"(Deprecated. There are no replacements for these constants.)"
I am wondering what we are supposed to use in place of these GAP constants, as the Apple docs and examples are of no help, and the entire internet also seems to be silent about this. My code is pretty much like the Heart Rate Monitor example, which still uses the deprecated code:
/* GAP (Generic Access Profile) for Device Name */
if ([aService.UUID isEqual:[CBUUID UUIDWithString:CBUUIDGenericAccessProfileString]])
{
    [aPeripheral discoverCharacteristics:nil forService:aService];
}
How about you just use the Generic Access service UUID directly?
if ([aService.UUID isEqual:[CBUUID UUIDWithString:@"1800"]]) // 0x1800 is the Generic Access Service identifier
{
    [aPeripheral discoverCharacteristics:nil forService:aService];
}
Check here for details on the Generic Access Service.

Architecture: Titanium Desktop against the Twitter Streaming API

I'm new to Titanium and have started out by trying to build (yet another) Twitter client. The problem I've encountered is that I would like to use Twitter's Streaming API, and I'm struggling to understand the best way to do that within Titanium Desktop.
Here's the options I see:
Don't use the Streaming API; it's not going to work.
Build a Python bridge that connects with an HTTP client that supports streaming responses (required for the Streaming API, which never closes the connection). Let that client deliver the responses to a JavaScript method that formats and outputs tweets as they come. (Problem here: how do I bundle the Python libraries I need?)
Use the JavaScript HTTPClient shipped with Titanium SDK 1.1 in some clever way I'm not aware of.
Use the 1.2.0-RC2 release of the Titanium SDK, which ships with an HTTPClient that supports streaming responses. There's very little information in the release notes to judge whether the streaming support is enough to get things working with the Streaming API.
Use twstreamer, a JavaScript library for streaming support through a Flash intermediary. I've seen bug reports stating that Flash does not work well inside Titanium Desktop, but I'd love to be proven wrong.
Another way that I've not yet thought of.
I'm hoping for all sorts of clever ideas of how I can get this working, and tips going forward. Thanks for reading!
I'm not at all familiar with Titanium, but looking through their docs, your best bet is likely going to be to use Titanium.Process to fork something that can deal with streaming responses. There are plenty of lightweight options there, but note that if you want to use user streams you'll need an option that supports OAuth and SSL.
Here's how to do it (after LOTS of testing):
var xhr = Titanium.Network.createHTTPClient();
// Keep the connection open; the Streaming API never closes it.
xhr.open("GET", "https://stream.twitter.com/1/statuses/filter.json?track=<Your-keyword-to-track>", true, '<Your-twitter-nickname>', '<Your-twitter-password>');
xhr.send();

var last_index = 0;

// Periodically read whatever new data has accumulated in responseText.
function parse() {
    var curr_index = xhr.responseText.length;
    if (last_index == curr_index) return; // No new data
    var s = xhr.responseText.substring(last_index, curr_index);
    last_index = curr_index;
    console.log(s);
}

var interval = setInterval(parse, 5000);

// Stop polling and close the stream after 25 seconds.
setTimeout(function () {
    clearInterval(interval);
    parse();
    xhr.abort();
}, 25000);

How to enable and configure USB OTG for device mode on the iMX31 Litekit?

I need to configure USB OTG on the iMX31 for device mode. We need a raw channel between the host and the target, and USB seems best suited. However, I haven't been able to configure the OTG controller correctly, and I don't know what I am missing. I have performed the steps mentioned in section 32.14.1 of the iMX31 Reference Manual, and I have also configured the PORTSC1 register for ULPI.
Can anyone help me out here? Any pointers/code/anything that can help is welcome.
Thanks
The Litekit is supported by the vanilla Linux kernel.
It's pretty easy to declare the OTG controller for device mode. You just need to declare it as a device when you register it:
static struct fsl_usb2_platform_data usb_pdata = {
    .operating_mode = FSL_USB2_DR_DEVICE,
    .phy_mode       = FSL_USB2_PHY_ULPI,
};
Register code:
mxc_register_device(&mxc_otg_udc_device, &usb_pdata);
Don't forget to configure the pads for the physical ULPI lines, and possibly perform the initial transactions for your transceiver.
You can find all the necessary code in what I did for the moboard platform and the marxbot board file.