Track won't play even though track is streamable - API

I'm using the SoundCloud API, and it was working fine until I hit this track:
https://soundcloud.com/katyperryofficial/roar
I don't know what's wrong with this track, but it just won't play. I can get all of its info, just not the stream. The Chrome network tab shows the request being canceled without any error:
Name Method Status Type Initiator Size Time
stream?consumer_key=### GET (canceled) Other 13B 1.02s
Any ideas? Have I missed something?

The SoundCloud devs made some changes to their code and, for reasons I don't understand, are switching back to the RTMP protocol.
Even though the response says the track is streamable, it can't be streamed with the regular stream_url.
After some digging in the dev tools, I noticed that some tracks use the RTMP protocol instead of HTTP/HTTPS.
Anyway, you can find the streams for a track at:
http://api.soundcloud.com/tracks/TrackID/streams?consumer_key=XXX
From there you're on your own; from my research, only Flash (why?) can play RTMP streams.
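To illustrate, here's a rough sketch of querying that endpoint and preferring an HTTP stream when one exists. The track ID is hypothetical, and the field names (http_mp3_128_url, rtmp_mp3_128_url) are assumptions based on what the endpoint returned at the time; inspect the JSON for your own tracks.

```js
// Sketch: ask the /streams endpoint what's available for a track.
// Field names like http_mp3_128_url are assumptions from observed
// responses; check the actual JSON yourself.
const trackId = 123456;      // hypothetical track ID
const consumerKey = 'XXX';   // your consumer key

fetch(`http://api.soundcloud.com/tracks/${trackId}/streams?consumer_key=${consumerKey}`)
  .then((res) => res.json())
  .then((streams) => {
    if (streams.http_mp3_128_url) {
      // A plain HTTP stream: playable in a regular <audio> element.
      new Audio(streams.http_mp3_128_url).play();
    } else if (streams.rtmp_mp3_128_url) {
      // RTMP only: browsers can't play this natively; you'd need a
      // Flash-based player.
      console.log('RTMP only:', streams.rtmp_mp3_128_url);
    }
  });
```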

Related

WebRTC: removeStream and then re-addStream after negotiation: Safe?

After a WebRTC session has been established, I would like to stop sending the stream to the peer at some point and then resume sending later.
If I call removeStream, it does indeed stop sending data. If I then call addStream (with the previously removed stream), it resumes. Great!
However, is this safe? Or do I need to re-negotiate/exchange the SDP after the removal and/or after the re-add? I've seen it mentioned in several places that the SDP needs to be re-negotiated after such changes, but I'm wondering whether it's OK in this simple case, where the same stream is being removed and re-added.
PS: Just in case anyone wants to suggest it: I don't want to change the enabled state of the track since I still need the local stream playing, even while it is not being sent to the peer.
This will only work in Chrome, and it's non-spec, so it's neither web-compatible nor future-proof.
The spec has pivoted from streams to tracks, sporting addTrack and removeTrack instead of addStream and removeStream. As a result, the latter aren't even implemented in Firefox.
Unfortunately, because Chrome hasn't caught up, renegotiation currently works differently in different browsers. It is possible, however, with some effort.
The new model has a cleaner separation between streams and what's sent in RTCPeerConnection.
Instead of renegotiating, setting track.enabled = false is a good idea. You can still play a local view of your video by cloning the track, as in the sketch below.
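A minimal sketch of that approach, assuming a page with a <video> element for the local preview:

```js
(async () => {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  // A clone's enabled state is independent of the original's, so the
  // local preview keeps playing while the sent track is muted.
  document.querySelector('video').srcObject = new MediaStream([track.clone()]);

  const pc = new RTCPeerConnection();
  pc.addTrack(track, stream);
  // ...offer/answer negotiation happens once, here...

  track.enabled = false; // peer receives black frames; no renegotiation
  // later:
  track.enabled = true;  // resume sending
})();
```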

How to detect Sonos volume changes

I can set the volume of a Sonos player on my network by sending a POST with the proper SOAP envelope/XML format.
What I can't figure out is how to detect when the volume is changed from another client.
I noticed that when I change the volume through the phone app, the controller on the computer changes as well, in real time. I would like to replicate that behavior.
I have sniffed the network and didn't see any HTTP calls in that regard, though perhaps I missed something. I'm fine implementing whatever is necessary; I just don't have a clue how they do it.
I do see some TCP packets streamed, but no documentation or leads helped there either. Thanks!
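For reference, the volume-setting POST mentioned above looks roughly like this (a sketch; the player IP is hypothetical, and Sonos players listen on port 1400 with the usual UPnP RenderingControl paths):

```js
// Sketch: SetVolume via the UPnP RenderingControl service.
// Run from Node 18+ or any HTTP client; a browser would hit CORS.
const envelope = `<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetVolume xmlns:u="urn:schemas-upnp-org:service:RenderingControl:1">
      <InstanceID>0</InstanceID>
      <Channel>Master</Channel>
      <DesiredVolume>30</DesiredVolume>
    </u:SetVolume>
  </s:Body>
</s:Envelope>`;

fetch('http://192.168.1.50:1400/MediaRenderer/RenderingControl/Control', {
  method: 'POST',
  headers: {
    'Content-Type': 'text/xml; charset="utf-8"',
    SOAPACTION: '"urn:schemas-upnp-org:service:RenderingControl:1#SetVolume"',
  },
  body: envelope,
});
```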
You get an event on the RenderingControl service. You have registered for events, I assume?
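In case it helps, here's a sketch of that subscription using UPnP eventing (GENA): you send an HTTP SUBSCRIBE request to the player's RenderingControl event URL with a CALLBACK pointing at your own HTTP server, and the player then sends NOTIFY requests to you whenever state (including volume) changes. The IP addresses are hypothetical; port 1400 and the event path follow the usual Sonos layout.

```js
// Sketch (Node.js): subscribe to RenderingControl events.
const http = require('http');

// Tiny server to receive NOTIFY requests; the volume arrives inside
// a LastChange XML blob in the body.
http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    if (req.method === 'NOTIFY') console.log('Event:', body);
    res.end();
  });
}).listen(3400);

// GENA subscription: a plain HTTP request with the SUBSCRIBE method.
const sub = http.request({
  host: '192.168.1.50', // the Sonos player
  port: 1400,
  path: '/MediaRenderer/RenderingControl/Event',
  method: 'SUBSCRIBE',
  headers: {
    CALLBACK: '<http://192.168.1.2:3400/>', // your machine
    NT: 'upnp:event',
    TIMEOUT: 'Second-3600', // renew before this expires
  },
});
sub.on('response', (res) => console.log('Subscription ID:', res.headers.sid));
sub.end();
```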

"Unable to play track: not encoded correctly" when skipping backward or forward during playback of the track

I'm developing a SOAP service for Sonos. The service has been partially accepted by Sonos, but I'm still having a problem where some MP3 tracks lead to "unable to play track: not encoded correctly" when skipping (forward or backward) during playback, i.e. inside the track. I have compared the encoding procedure for 'good' and 'bad' tracks, and I don't see any real reason why some of them won't allow skipping inside the track. I would greatly appreciate any hint related to this issue.
Best regards,
Krzys
Are you attempting to use HLS? Can you provide any more information on the calls and responses you are sending, or on the exact text of the error message you are seeing?

Incorrect currentTime with videojs when streaming HLS

I'm streaming both RTMP and HLS (the latter for iOS and Android). With RTMP, video.js displays the correct currentTime. In my view, currentTime should reflect when the stream started, not when the client started viewing it. But with the HLS stream, currentTime returns when the client started the stream, not when the stream started (same result using any player on Android or iOS, or VLC).
Using ffprobe on my HLS stream I get the correct values, i.e. when the stream started, which makes me believe I should start looking at the client for a solution, but I'm far from sure.
So please help me get pointed in the right direction to solve this problem.
Is it the nature of HLS that it doesn't give me the correct currentTime? But then it's odd that ffprobe gives me the correct answer.
I can't find anything in the video.js code about how to get any other timecode.
Is it my server that generates the wrong SMPTE timecode for HLS, while ffprobe uses other means to get the correct currentTime?
Anyway, I'm mostly curious; I have a workaround: by counting the fragments already consumed, I can at least get within a five-second ballpark, which is good enough for my case.
Thanks for any help or input.
BR David
RTMP and HLS work in different ways.
RTMP is always streaming; when you subscribe, you join the running stream, so the start time is when the stream started.
HLS works differently. When you subscribe to an HLS stream, a stream is created for you, so the current time is when that HLS stream was started, i.e. when you subscribed and it was created.
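Given that behavior, the fragment-counting workaround the asker mentions can be sketched as follows. The playlist URL is hypothetical, and since EXT-X-TARGETDURATION is only an upper bound on segment length, the estimate is rough (within a few seconds, as the asker observed):

```js
// Sketch: estimate how long the broadcast had been running when this
// client's HLS playlist window begins, from the media sequence number.
async function estimateStreamOffset(playlistUrl) {
  const text = await (await fetch(playlistUrl)).text();
  const seq = Number(/#EXT-X-MEDIA-SEQUENCE:(\d+)/.exec(text)?.[1] ?? 0);
  const dur = Number(/#EXT-X-TARGETDURATION:(\d+)/.exec(text)?.[1] ?? 10);
  // Segments 0..seq-1 have already rolled out of the live window, so
  // the window starts roughly seq * dur seconds into the broadcast.
  return seq * dur;
}

estimateStreamOffset('http://example.com/live/stream.m3u8')
  .then((secs) => console.log(`~${secs}s since the stream started`));
```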

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (Java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately, the audio element buffers (prefetches) very aggressively, so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input, then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different from what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge: on the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers. A sketch of that approach is below.
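A minimal sketch of that client side, assuming the server sends mono Float32 PCM at 44.1 kHz over a WebSocket (the URL and format are assumptions):

```js
const ctx = new AudioContext(); // may need ctx.resume() after a user gesture
const ws = new WebSocket('ws://localhost:8080/audio');
ws.binaryType = 'arraybuffer';

let playhead = 0; // context time at which the next chunk should start

ws.onmessage = (event) => {
  const samples = new Float32Array(event.data);
  const buffer = ctx.createBuffer(1, samples.length, 44100); // mono, 44.1 kHz
  buffer.copyToChannel(samples, 0);

  const node = ctx.createBufferSource();
  node.buffer = buffer;
  node.connect(ctx.destination);

  // Queue chunks back-to-back, never scheduling in the past.
  playhead = Math.max(playhead, ctx.currentTime);
  node.start(playhead);
  playhead += buffer.duration;
};
```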