How do I know the duration of a sound in Web Audio API

It's basically all in the title: how do I know, or how can I access, the duration of a sound node in the Web Audio API? I was expecting something like source.buffer.size or source.buffer.duration to be available.
The only alternative I can think of, in case this is not possible to accomplish, is to read the file metadata.

Assuming that you are loading audio, when you decode it with context.decodeAudioData the resulting AudioBuffer has a .duration property. (Note that decodeAudioData takes an ArrayBuffer and produces an AudioBuffer; it is the AudioBuffer that carries the duration.) This is the same buffer you assign to the source node.
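For illustration, a minimal sketch of that flow in modern promise-based form ('sound.mp3' is a placeholder URL):
// Fetch and decode a file, then read the decoded duration in seconds.
const context = new AudioContext();
const response = await fetch('sound.mp3');
const encoded = await response.arrayBuffer();
const audioBuffer = await context.decodeAudioData(encoded);
console.log(audioBuffer.duration); // length in seconds
const source = context.createBufferSource();
source.buffer = audioBuffer; // the same buffer whose duration we just read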
You can look at the SoundJS implementation, although there are easier-to-follow tutorials out there too.
Hope that helps.

Good news and bad news: nodes do not have a length, but I bet you can achieve what you want another way.
Audio nodes are sound processors or generators. The duration of a sound processed or generated by a node can change; to the computer it is all the same, since a lack of sound is just a buffer full of zeroes instead of other values.
So, if you needed to dynamically determine the duration of a sound, you could write a node which timed 'silences' in the input audio it received.
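As a rough sketch of that idea (the threshold and polling strategy are illustrative, not a definitive implementation):
// Tap the audio with an AnalyserNode and check whether the current frame is silent.
const analyser = context.createAnalyser();
source.connect(analyser);
const samples = new Float32Array(analyser.fftSize);
function isSilent() {
  analyser.getFloatTimeDomainData(samples);
  return samples.every(s => Math.abs(s) < 1e-4); // all samples near zero
}
// Poll isSilent() on a timer and record when sound starts and stops to derive a duration.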
However, I suspect from your mention of metadata that you are loading existing sounds, in which case you should indeed use their metadata, or determine the length by loading the file into an audio element and requesting the duration from that.
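The audio-element route might look like this (the URL is a placeholder):
// The browser reads the duration from the file's metadata once it has loaded.
const audio = new Audio('sound.mp3');
audio.addEventListener('loadedmetadata', () => {
  console.log(audio.duration); // duration in seconds
});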

Related

Is it possible to slice a video file blob and then re-encode it server side?

Been absolutely banging my head on this one and I would like a sanity check: mainly, whether what I want to do is even possible, as I am severely constrained by React Native, which has pretty dodgy Blob support.
We all know that video encoding is expensive, so instead of forcing the user to encode with ffmpeg I would like to delegate the whole process to the backend. That's all good, except that sometimes you might want to trim 30 seconds out of a video, and it's pointless to upload 3+ minutes of it.
So I had this idea of slicing the blob of the video file:
const startOffset = (startTime * blobSize) / duration;
const endOffset = (endTime * blobSize) / duration;
const slicedBlob = blob.slice(startOffset, endOffset);
// Setting the type as third option is ignored
Something like this; the problem is that the file becomes totally unreadable once it reaches the backend.
React Native cannot handle Blob uploads, so they are converted to base64, which is totally fine for the whole video, but not for the sliced blob.
This happens even if I keep the beginning intact:
const slicedBlob = blob.slice(0, endOffset);
I feel like the reason is that the file becomes an application/octet-stream, which might impact the decoding?
I am at a bit of a loss here, as I cannot understand whether this is a React Native issue with blobs or whether it simply cannot be done.
Thanks for any input.
p.s. I prefer to stick to vanilla Expo without using external libraries; I am aware that one exists to handle blobs, but I am not keen on ejecting or relying on external libraries if possible.
You cannot simply cut chunks out of a file and have it readable on the other side. For example, in an MP4 the video resolution is only stored in one place. If those bytes get removed, the decoder has no idea how to decode the video.
Yes, it is possible to repackage the video client side by rewriting the container and dropping full GOPs, but it would be about 1,000 lines of code for you to write and would be limited to certain codecs and containers.
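To see why arbitrary byte offsets break things: an MP4 is a tree of length-prefixed boxes, so a slice that starts mid-box is garbage to the parser. A purely illustrative sketch of reading the first box header:
// Read the 8-byte header of the first top-level MP4 box.
const header = await blob.slice(0, 8).arrayBuffer();
const view = new DataView(header);
const boxSize = view.getUint32(0); // big-endian box length
const boxType = String.fromCharCode(...new Uint8Array(header, 4, 4)); // e.g. 'ftyp'
// A slice taken at startOffset lands mid-box, so the decoder can no longer locate
// the moov box that holds the resolution and sample tables.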

Getting metadata (track info, artist name etc.) for radio stream

I have already checked the following links, but they weren't much help (in parentheses I've explained why each answer's suggestion didn't work in my case):
Streams - hasOutOfBandMetadata and getStreamingMetadata (our content is already HLS)
Sonos player not calling GetStreamingMetadata (getMetadata is not called; only getMediaMetadata is called, since the radio stream has a unique id and is not a collection)
In the Sonos API documentation it is mentioned that "hasOutOfBandMetadata" is deprecated and it is recommended that metadata be embedded inline with the content. However, due to some limitations this can't be achieved in our service, so I have to go with the old way (whatever that is).
I suppose that, ideally, "getStreamingMetadata" should be called after setting "hasOutOfBandMetadata" to true, but that is not happening.
Secondly, for testing purposes I set "secondsRemaining" and "secondsToNextShow" to different values and found that the "description" is displayed for exactly those intervals (if I set secondsRemaining/secondsToNextShow to 20, the description is displayed for 20 seconds; if set to 200, then for 200 seconds, and so on). After the time elapses, the information inside "description" disappears. So I guess there must be some call that refreshes the metadata after the time elapses, but I couldn't figure out which one.
Kindly explain the proper way to get metadata for a continuous radio stream. On TuneIn you can find Radio Paradise, whose metadata is updated as the track changes. Even if they embed metadata inline with their content, there must be some way to achieve this.
Can you please post the calls and the responses that you are sending? This would help with troubleshooting the issue. Also, what mimeType are you trying to use?
At this time, the only fully supported method for getting metadata for a continuous radio stream on Sonos that is guaranteed to work in future releases is to embed the metadata inline.

WebRTC Changing Media Streams on the Go

Now that device enumeration is present in Chrome, I know I can select a device during "getUserMedia" negotiation. I was also wondering whether I could switch devices in the middle of a call (queue up a local track and switch tracks, or do I have to renegotiate the stream?). I am not sure if this is something that is still blocked or is now allowable.
I have tried to make a new track, but I can't figure out how to switch tracks on the fly. I know this was previously impossible, but I am wondering whether it is possible now.
Even I have the same requirement. I have to record video using MediaRecorder. For this I am using navigator.getUserMedia with audio and video constraints. You can supply video or audio tracks dynamically by getting the available devices from navigator.mediaDevices.enumerateDevices(), attaching the desired device to the constraints, and calling navigator.getUserMedia again with the new constraints. The point to note when doing this is that you have to kill the existing tracks first, using the track.stop() method (see the sketch below).
You can see my example here.
StreamTrack's readyState is getting changed to ended, just before playing the stream (MediaStream - MediaStreamTrack - WebRTC)
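A minimal sketch of that stop-and-reacquire approach (the device selection and the currentStream variable are illustrative):
// Pick a camera, stop the old tracks, then request a fresh stream from the new device.
const devices = await navigator.mediaDevices.enumerateDevices();
const camera = devices.find(d => d.kind === 'videoinput');
currentStream.getTracks().forEach(track => track.stop()); // kill existing tracks first
const newStream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: { deviceId: { exact: camera.deviceId } },
});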
In Firefox, you can use the RTCRtpSender object to call replaceTrack() to replace a track on the fly (with no renegotiation). This should eventually be supported by other browsers as part of the spec.
Without replaceTrack(), you can remove the old stream, add a new one, handle onnegotiationneeded, and let the client process the change in streams.
See the replaceTrack() documentation on MDN: https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack
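A hedged sketch of the replaceTrack() path (assumes an existing RTCPeerConnection named pc):
// Acquire a track from the new device and swap it into the active sender, with no renegotiation.
const newStream = await navigator.mediaDevices.getUserMedia({ video: true });
const [newTrack] = newStream.getVideoTracks();
const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
await sender.replaceTrack(newTrack);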
Have you tried calling getUserMedia() when you want to change to a different device?
There's an applyConstraints() method in the Media Capture and Streams spec that makes it possible to change constraints on the fly, but it hasn't been implemented yet:
dev.w3.org/2011/webrtc/editor/getusermedia.html#the-model-sources-sinks-constraints-and-states
dev.w3.org/2011/webrtc/editor/getusermedia.html#methods-1

How to check if NSData has multimedia content?

I have an NSData object, obtained from a URL request. Now I don't know how to read it.
However in my application I don't know if the data contains a video or not, so I would know:
How to know if NSData has some video inside it?
How to interpret the data, reading it byte by byte?
I'm not familiar with the particular API you're using so I can't say what the code should be, but any web/HTTP client library should provide you the Content-Type of the data as well as the data itself. Use the Content-Type (and only the Content-Type; doing otherwise can lead to security bugs) to determine how to interpret the content. For example, if the Content-Type (also known as MIME type) starts with video/, then the content is definitely video; the part after the slash will tell you the specific format to interpret it as.
If you intend to play the video that the data may contain, then just do that. Whichever playback API you use should give you an error if the data isn't anything it recognizes.

Sharing data between different AppDomains

I'm trying to send data from a new AppDomain to the current AppDomain.
I used DoCallBack to load a list of .dll files and extract file and assembly information as a Dictionary.
Then I tried to send that key/value data to the current domain.
It's my first time using AppDomains, so I only found a rough way: SetData/GetData.
Using that requires a lot of conversion, and it can throw exceptions in a variety of situations.
If I could send the Dictionary directly, that would be an excellent way to do it.
Please let me know.