How can I get the URI from the Cling Renderer? - upnp

After some effort I've got my Cling renderer working; that is, media commanded by a remote control point can now be played/rendered by my renderer (running on Linux, using the Cling library).
The next thing I want to achieve is to do something interesting with the media, which requires knowing the URI of the playing media. Does Cling provide an API for this?

See the Cling manual and the MediaRenderer spec:
http://4thline.org/projects/cling/support/manual/cling-support-manual.xhtml#javadoc.example.mediarenderer.AVTransportTest.testCustomPlayer..

You need to implement a state machine that extends AVTransportStateMachine and then implement the different states. Cling will call the state's setTransportURI, so that is where you can get the URI.

Related

Video call through the Telegram API

I would like to know whether it is possible to start a video call with another user by means of the tdlib library, and to transfer a picture from a camera connected to a Raspberry Pi into that call. If so, how do you do that? Which methods should I use?
To work with the video-call part of Telegram you need to use Telegram's WebRTC client (https://github.com/TelegramMessenger/tgcalls). With MTProto methods you can get the params needed to start this library; video and audio bytes are passed through it.
There is already a high-level Python library that works with the official tgcalls library, although support for private calls is still on its TODO list. You can use this project as an example of how to work with the tgcalls library:
https://github.com/MarshalX/tgcalls
Here are the Python sources, with working code for broadcasting video from YouTube/m3u8/mp4 sources:
https://github.com/EverythingSuckz/tgvc-video-tests

How to organize endpoints when using FeathersJS's seemingly restrictive api methods?

I'm trying to figure out whether FeathersJS suits my needs. I have looked at several examples and use cases. FeathersJS uses a fixed set of request methods: find, get, create, update, patch and delete. No other methods, let alone custom methods, can be implemented and used, as confirmed in this other SO post.
Let's imagine an application where users can save their app settings. Without regard for method conventions, I would create an endpoint describing the action performed by the user. In this case we could have, for instance, /saveSettings, knowing there won't be any setting-finding, -creation, -updating (only some -patching) or -deleting. I might also need a /getSettings route.
My question is: can every action be reduced down to these request methods? To me, these actions are strongly bound to a specific collection/model. Sometimes, we need to create actions that are not bound to a single collection and could potentially interact with more than one collection/model.
For this example, I'm guessing it would be translated in FeathersJS into a service named Setting which would hold two methods: get() and patch().
If that is the correct approach, it looks to me as if this solution is more server-oriented than client-oriented in the sense that we have to know, client-side, what underlying collection is going to get changed or affected. It feels like we are losing some level of freedom by not having some kind of routing between endpoints and services (like we have in vanilla ExpressJS).
Here's another example: I have a game character that can skill up. When the user decides to skill up a particular skill, a request is sent to the server. This endpoint could look like POST /skillUp. What would it be in FeathersJS? Would it be implemented as SkillUpService#create?
I hope you get the issue I'm trying to highlight here. Do you have some ideas to share or recommendations on how to organize the API in this particular framework?
I'm not an expert on FeathersJS, but if you build your database and models with good logic, these methods are all you need:
For the settings example, saveSettings corresponds to setting.patch({options}), i.e. to the route settings/:id?options (method PATCH), since the user already has some default settings (created with the user). getSettings would correspond to setting.find(query).
To create the user AND the settings, I guess you have a method that calls setting.create({defaultOptions}) when the user CREATE route is called. This would be the right way.
For the skillUp route, it depends on the design of your database, but I guess it would be something like a table that gives you level/skill/character, so you need a service for that specific table and a call like skillLevel.patch({character, level}).
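To make that concrete, here is a minimal sketch of a settings-style service that exposes only find and patch. It assumes a Feathers v5-style app; the SettingsService class, its fields and its in-memory store are made up for illustration and stand in for your real model.

    // A minimal sketch of a settings service that exposes only find and patch.
    // Assumes a Feathers v5-style app; SettingsService and its in-memory store
    // are hypothetical stand-ins for your real model.
    import { feathers } from '@feathersjs/feathers';

    interface Settings {
      id: number;
      theme: string;
      notifications: boolean;
    }

    class SettingsService {
      // hypothetical in-memory store keyed by settings id
      private store = new Map<number, Settings>([
        [1, { id: 1, theme: 'light', notifications: true }],
      ]);

      // GET /settings  -> the "getSettings" use case
      async find(): Promise<Settings[]> {
        return [...this.store.values()];
      }

      // PATCH /settings/:id  -> the "saveSettings" use case
      async patch(id: number, data: Partial<Settings>): Promise<Settings> {
        const current = this.store.get(id);
        if (!current) throw new Error(`Settings ${id} not found`);
        const updated = { ...current, ...data };
        this.store.set(id, updated);
        return updated;
      }
    }

    const app = feathers();
    app.use('settings', new SettingsService());

    // e.g. await app.service('settings').patch(1, { theme: 'dark' });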
In addition to the correct answer that @gui3 has already given, it is probably worth pointing out that Feathers is intentionally restrictive in order to help you create RESTful APIs which focus on resources (data) and a known set of methods you can execute on them.
Aside from the answer you linked, this is also explained in more detail in the FAQ, and an introduction to REST API design and why Feathers does what it does can be found in this article: Design patterns for modern web APIs. These are best practices that helped scale the internet (specifically the HTTP protocol) to what it is today, and they can work really well for creating APIs. If you still want to use the routes you are suggesting (which are not RESTful), then Feathers is not the right tool for the job.
One strategy you may want to consider is using a request parameter in the POST body, such as { "action": "type" }, and a switch statement to conditionally perform the desired action. An example of this strategy is discussed in this tutorial.
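As a rough illustration of that pattern, a single create() method can dispatch on an action field sent in the POST body. The service name, action values and data shape below are hypothetical.

    // A sketch of the "action parameter" pattern: one create() method that
    // dispatches on data.action. The service, action names and data shape
    // are hypothetical.
    interface SkillAction {
      action: 'skillUp' | 'skillDown';
      characterId: number;
      skill: string;
    }

    class SkillActionsService {
      async create(data: SkillAction) {
        switch (data.action) {
          case 'skillUp':
            // update the character's skill level in your own data layer here
            return { ...data, result: 'skill increased' };
          case 'skillDown':
            return { ...data, result: 'skill decreased' };
          default:
            throw new Error('Unknown action');
        }
      }
    }

    // POST /skill-actions { "action": "skillUp", "characterId": 7, "skill": "archery" }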

Can someone clarify iOS Safari service worker support

Looking at the MDN documentation, iOS/Safari fully supports ServiceWorkerGlobalScope.onfetch, but when you look at the FetchEvent specification it says it is not supported at all by Safari.
In particular, I would like to store some state for each client and was hoping to use the fetchEvent.clientId property of the event to index it. Of course I presume I also have access to the fetchEvent.request object, otherwise I can't see how a service worker can do anything useful, and I could simulate a client id from a passed-in parameter in the URL. But the docs don't really tell me what iOS/Safari supports and doesn't, so I don't know which way to go.
Can someone please tell me precisely what iOS/Safari passes when it calls the defined onfetch function?
I found the answer to my question by using https://jakearchibald.github.io/isserviceworkerready/demos/fetchevent/,
connecting my iPad to my MacBook and debugging the iPad. I was eventually able to open the web inspector for the service worker for that page, and the console.log showed the event passed in.
FetchEvent.clientId is present but is a zero-length string. As it happens, I did the same thing on my (Linux) desktop using Chrome and it is also a zero-length string, BUT Chrome has another parameter, resultingClientId, with what looks like a UUID in it. That parameter is not there in Safari.
The FetchEvent.request is there, and in particular the URL. So I can generate my own client id in the client (I am using Date.now().toString(), as that is good enough for my purposes) for use in the service worker. In fact my site was already using such an id in the URLs I need to intercept, so I am happy that I have a solution.
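For illustration, here is a small sketch of a fetch handler along those lines. The "clientId" query parameter name and the in-memory Map are assumptions for this example, not something Safari or the spec mandates; the page itself would append the id to the URLs it fetches.

    // sw.ts - sketch of reading a caller-generated client id from the request
    // URL inside onfetch, since FetchEvent.clientId is empty on Safari.
    // The "clientId" query parameter name is an assumption; the page appends
    // it itself, e.g. fetch(`/api/data?clientId=${myId}`).
    declare const self: ServiceWorkerGlobalScope;

    // per-client state; note it only lives as long as the worker instance
    const stateByClient = new Map<string, { lastSeen: number }>();

    self.addEventListener('fetch', (event: FetchEvent) => {
      const url = new URL(event.request.url);
      const clientId = url.searchParams.get('clientId');
      if (clientId) {
        stateByClient.set(clientId, { lastSeen: Date.now() });
      }
      event.respondWith(fetch(event.request));
    });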

how to perform continuous speech to text on webrtc communication audio stream in mobile app

I am trying to add a continuous speech to text recognizer in a mobile application during a webrtc audio-only call.
I'm using React Native on the mobile side, with the react-native-webrtc module and a custom web API for the signaling part. I have control over the web API, so I am able to add the feature on its side if that's the only solution, but I would prefer to do it on the client side to avoid consuming bandwidth when there is no need.
First, I worked on and tested some ideas with my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
I merged the audio-only WebRTC demo with the audio visualiser demonstration in one page, but I did not find how to connect a mediaElementSourceNode (created via AudioContext.createMediaElementSource(remoteStream) at line 44 of streamvisualizer.js) to a Web Speech API SpeechRecognition class. In the Mozilla documentation, the audio stream seems to come with the constructor of the class, which may call the getUserMedia() API.
Second, during my research I found two open-source speech-to-text engines: CMU Sphinx and Mozilla's DeepSpeech. The first one has a JS binding and seems to work well with the audioRecorder that I can feed with my own mediaElementSourceNode from the first attempt. However, how do I embed this in my React Native application?
There are also native Android and iOS WebRTC modules, which I may be able to connect with CMU Sphinx platform-specific bindings (iOS, Android), but I don't know about interoperability between native classes. Can you help me with that?
I haven't yet created any "grammar" or defined any "hot words" because I am not sure which technologies are involved, but I can do that later once I am able to connect a speech recognition engine to my audio stream.
You need to stream the audio to the ASR server, either by adding another WebRTC party to the call or over some other protocol (TCP/WebSocket/etc.). On the server you perform recognition and send the results back.
First, I worked on and tested some ideas with my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
This is experimental and does not really work in Firefox. In Chrome it only takes microphone input directly, not a dual stream from caller and callee.
The first one has a JS binding and seems to work well with the audioRecorder that I can feed with my own mediaElementSourceNode from the first attempt.
You will not be able to run this as local recognition inside your React Native app.
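As a rough browser-side sketch of the "stream the audio to an ASR server" approach: the wss:// endpoint and the "send raw Float32 samples, receive transcripts" protocol are assumptions about your server, and ScriptProcessorNode is deprecated (AudioWorklet is the modern replacement) but remains the shortest way to tap raw samples.

    // Sketch: tap the remote WebRTC audio track and stream raw PCM frames to
    // an ASR server over a WebSocket. The wss:// URL and the message format
    // are assumptions about your server.
    function streamRemoteAudioToAsr(remoteStream: MediaStream): void {
      const socket = new WebSocket('wss://example.com/asr'); // hypothetical endpoint
      socket.binaryType = 'arraybuffer';
      socket.onmessage = (e) => console.log('transcript:', e.data);

      const audioCtx = new AudioContext();
      const source = audioCtx.createMediaStreamSource(remoteStream);
      const processor = audioCtx.createScriptProcessor(4096, 1, 1);

      processor.onaudioprocess = (e: AudioProcessingEvent) => {
        if (socket.readyState !== WebSocket.OPEN) return;
        // copy the samples: the underlying buffer is reused between callbacks
        const samples = e.inputBuffer.getChannelData(0).slice(0);
        socket.send(samples);
      };

      source.connect(processor);
      processor.connect(audioCtx.destination); // keeps the processor pulling audio
    }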

WebRTC used as a simple messaging app

I am trying to write a small application using WebRTC that can be used as a messaging/chat application between two computers.
I see this:
http://simpl.info/rtcdatachannel/
and it is not working. Any suggestions?
I wrote the simpl.info/rtcdatachannel example. It's only designed to show off data channels working within one page.
For a complete peer-to-peer messaging application, I suggest adding RTCDataChannel functionality to something like apprtc.appspot.com. You could also consider a readymade abstraction library like PeerJS or EasyRTC.
You might also want to take a look at the RTCPeerConnection/RTCDataChannel/signaling codelab I built.
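For reference, here is a minimal sketch in the spirit of that demo, with both peers living in the same page so no signaling server is needed. In a real two-computer chat, the offer/answer and ICE candidates exchanged below would instead travel over your own signaling channel (WebSocket, socket.io, ...).

    // Minimal sketch: two peers in one page exchange a text message over
    // RTCDataChannel, as the simpl.info demo does.
    async function dataChannelDemo(): Promise<void> {
      const local = new RTCPeerConnection();
      const remote = new RTCPeerConnection();

      // in-page "signaling": hand candidates straight to the other peer
      local.onicecandidate = (e) => { if (e.candidate) remote.addIceCandidate(e.candidate); };
      remote.onicecandidate = (e) => { if (e.candidate) local.addIceCandidate(e.candidate); };

      const sendChannel = local.createDataChannel('chat');
      remote.ondatachannel = (e) => {
        e.channel.onmessage = (msg) => console.log('received:', msg.data);
      };

      // offer/answer exchange, again done in-page instead of via a server
      const offer = await local.createOffer();
      await local.setLocalDescription(offer);
      await remote.setRemoteDescription(offer);
      const answer = await remote.createAnswer();
      await remote.setLocalDescription(answer);
      await local.setRemoteDescription(answer);

      sendChannel.onopen = () => sendChannel.send('hello over WebRTC!');
    }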
In the above example, the trace log shows that ICE candidates are generated, but they are not exchanged between the two peers, probably because there is a problem in sending the 'offer' or responding with the 'answer'. Also, the above example works only in Chrome, because only webkitRTCPeerConnection is used; with mozRTCPeerConnection it could work on Firefox as well.
If you want to develop a chat application for text only and not for video chat, then you can use Node.js with socket.io or WebSockets for this.
You may like :) the following two libraries:
DataChannel.js / for WebRTC data/text/file sharing (among multiple users)
RTCMultiConnection.js / for data as well as media (screen/audio/video/etc.) sharing
Firebase.com is a "suggested" starting point for newcomers; it can be used for signaling. You just need to override "openSignalingChannel" and you're done!
You should use PeerJS (https://github.com/peers/peerjs) or PeerChat (https://github.com/Hironate/PeerChat) if you want to do it with Node.js.
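For example, here is a hedged sketch of a tiny PeerJS text exchange. The remote id 'friend-id' is hypothetical; by default PeerJS signals through its public cloud broker, so no Node.js server code is strictly required to try it.

    // Sketch of a two-computer text exchange with PeerJS. The remote id
    // 'friend-id' is hypothetical.
    import Peer from 'peerjs';

    const peer = new Peer(); // the broker assigns a random id

    peer.on('open', (id) => {
      console.log('my peer id:', id); // share this id with the other computer
    });

    // incoming connection: the other side called peer.connect() with our id
    peer.on('connection', (conn) => {
      conn.on('data', (data) => console.log('message:', data));
    });

    // outgoing connection to the other computer's (hypothetical) id
    const conn = peer.connect('friend-id');
    conn.on('open', () => conn.send('hi from my machine'));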