I'm trying to split the audio from the video in an HLS TS stream; the audio is in AAC format.
The goal is to end up with some sort of AVAsset that I can later manipulate and then mux back into the video.
After searching for a while I can't find a solid lead. Can someone give me an educated direction to take on this issue?
You can use the ffmpeg/libav library for demuxing the TS. To load the audio back as an AVAsset, it might be necessary to load it from a URL, either by writing it temporarily to disk or by serving it with a local HTTP server within your program.
I think you might run into some trouble manipulating the audio stream, assuming you want to manipulate the raw audio data. That will require decoding the AAC, modifying it, re-encoding it, and re-muxing it with the video. That's all possible with ffmpeg/libav, but it's not exactly easy.
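If the demux step with ffmpeg/libav leaves you with separate local files, the AVFoundation side can look roughly like the sketch below. This is a minimal sketch, assuming the demux has already produced a raw AAC file and a video-only file at placeholder paths; it loads both as AVURLAssets, combines them in an AVMutableComposition, and exports the result. Whether a passthrough export works depends on the exact source formats, so treat the preset as an assumption; if loading a raw ADTS .aac file directly gives you trouble, re-wrapping it into an .m4a container during the demux step usually helps.

import AVFoundation

// Placeholder paths for the files produced by the ffmpeg/libav demux step.
let audioURL = URL(fileURLWithPath: "/tmp/demuxed_audio.aac")
let videoURL = URL(fileURLWithPath: "/tmp/demuxed_video.mp4")

let audioAsset = AVURLAsset(url: audioURL)   // the AVAsset you can work with
let videoAsset = AVURLAsset(url: videoURL)

// Mux the (possibly modified) audio back together with the video.
let composition = AVMutableComposition()

if let videoTrack = videoAsset.tracks(withMediaType: .video).first,
   let compVideo = composition.addMutableTrack(withMediaType: .video,
                                               preferredTrackID: kCMPersistentTrackID_Invalid) {
    try? compVideo.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration),
                                   of: videoTrack, at: .zero)
}

if let audioTrack = audioAsset.tracks(withMediaType: .audio).first,
   let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                               preferredTrackID: kCMPersistentTrackID_Invalid) {
    try? compAudio.insertTimeRange(CMTimeRange(start: .zero, duration: audioAsset.duration),
                                   of: audioTrack, at: .zero)
}

// Export the combined composition to a single movie file.
if let export = AVAssetExportSession(asset: composition,
                                     presetName: AVAssetExportPresetPassthrough) {
    export.outputURL = URL(fileURLWithPath: "/tmp/muxed.mp4")
    export.outputFileType = .mp4
    export.exportAsynchronously {
        print("Export finished: \(export.status == .completed)")
    }
}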
I am using video.js to play m3u8 links.
I found that it continuously downloads .ts segments during playback.
I would like to show a loading overlay during a preparation period; when everything is downloaded, it disappears and the user can watch the video just as if it were local.
So, is it possible to preload all segments during the loading period before playing?
Update
I found that the m3u8 file contains .ts links. Is it possible to pre-download those blobs and intercept the fetch requests so that they return the downloaded blobs as the response?
#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:60
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-START-TIME:3516
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:55:59+08:00
#EXTINF:2.01,
2758527764_1103365203_1.ts?start=0&end=91931&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:01+08:00
#EXTINF:1.979,
2758527764_1103365203_1.ts?start=91932&end=171643&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:02+08:00
#EXTINF:1.932,
2758527764_1103365203_1.ts?start=171644&end=248159&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:04+08:00
#EXTINF:2.002,
2758527764_1103365203_1.ts?start=248160&end=318659&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:06+08:00
#EXTINF:2.064,
2758527764_1103365203_1.ts?start=318660&end=393295&type=mpegts&resolution=320x240
I think that what you want to do isn't really a use case for adaptive streaming, so you shouldn't use HLS or DASH.
Maybe you could achieve this type of experience using simple MP4 playback.
You can set the videojs.Vhs.GOAL_BUFFER_LENGTH value to a high figure. It's the number of seconds that will be pre-loaded. However, there is a playback issue when too much is buffered: all of those buffered segments end up eating RAM, and on mid-range mobile devices more than a few minutes of pre-load makes the video unusable.
I use it with a few thousand students, who can pre-load a larger chunk (10 minutes, i.e. a GOAL_BUFFER_LENGTH of 600), so they are not continuously interrupted when on a low-bandwidth connection.
Is it possible to stream video and audio to an rtmp:// server with GPUImage?
I'm using GPUImageVideoCamera and would love to stream (video + audio) directly to an RTMP server.
I tried VideoCore, which streams perfectly to e.g. YouTube, but whenever I try to overlay the video with different images I run into performance problems.
It seems as if GPUImage does a really great job there, but I don't know how to stream with it. I found issues on VideoCore talking about feeding VideoCore from GPUImage, but I don't have a starting point for how that's implemented...
I'm streaming both RTMP and HLS (the latter for iOS and Android). With RTMP, video.js displays the correct currentTime. To my mind, currentTime should be the time since the stream started, not since the client started viewing the stream. But with the HLS stream, currentTime counts from when the client started playing, not from when the stream started (I get the same result with any player on Android, iOS, or VLC).
Using ffprobe on my HLS stream I get the correct values, i.e. the time since the stream started, which makes me believe I should be looking at the client for a solution, but I'm far from sure.
So please help me get pointed in the right direction to solve this problem.
That is, is it just the nature of HLS that it doesn't give me the correct currentTime? If so, it's strange that ffprobe gives me the correct answer.
I can't find anything in the video.js code about getting any other timecode.
Is it my server that generates the wrong SMPTE timecode for HLS, while ffprobe uses some other way to get the correct currentTime?
Anyway, I'm mostly curious; I have a workaround. By counting the fragments used from the start I at least get within a five-second ballpark, which is good enough for my case.
Thanks for any help or input.
BR David
RTMP and HLS work in different ways.
RTMP is always streaming, and when you subscribe to the stream, you subscribe to the running stream, so the start time will be when the stream started.
HLS works differently: when you subscribe to an HLS stream, a stream is created for you. So the current time starts from when that HLS stream was created, which is when you subscribed.
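One way to recover the absolute start time on the client, independent of what the player reports, is to read the #EXT-X-PROGRAM-DATE-TIME tags, if your server writes them into the playlist. The sketch below (in Swift, with a placeholder playlist URL) parses the first such tag; note that it gives you the date of the oldest segment still listed in the playlist, which equals the stream start only if the playlist window covers the whole stream.

import Foundation

// Minimal sketch: read the wall-clock time of the first segment in an HLS
// media playlist via its #EXT-X-PROGRAM-DATE-TIME tag. The URL is a placeholder.
let playlistURL = URL(string: "https://example.com/live/stream.m3u8")!

func streamStartDate(fromPlaylist text: String) -> Date? {
    let tag = "#EXT-X-PROGRAM-DATE-TIME:"
    guard let line = text
        .components(separatedBy: .newlines)
        .first(where: { $0.hasPrefix(tag) }) else { return nil }

    let formatter = ISO8601DateFormatter()
    formatter.formatOptions = [.withInternetDateTime]   // e.g. 2021-02-19T14:55:59+08:00
    return formatter.date(from: String(line.dropFirst(tag.count)))
}

// Synchronous fetch; fine for a sketch.
if let playlist = try? String(contentsOf: playlistURL, encoding: .utf8),
   let start = streamStartDate(fromPlaylist: playlist) {
    // "How long has this stream been running?" = now minus the first segment's date.
    print("Stream running for \(Date().timeIntervalSince(start)) seconds")
}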
I am developing a Mac app which needs to provide an HTTP Live Stream (just the last 2 seconds or so) of the main screen (Desktop).
I was thinking of the following process:
Create an AVCaptureSession with an AVCaptureScreenInput as input (sessionPreset = AVCaptureSessionPresetPhoto)
Add an AVCaptureVideoDataOutput to the session
Capture the frames (in kCVPixelFormatType_32BGRA format) in captureOutput:didOutputSampleBuffer:fromConnection: and write them to an ffmpeg process for segmenting (using a pipe or something) that creates the MPEG-TS and playlist files.
Use an embedded HTTP server to serve up the segmented files and the playlist file.
Is this the best approach, and is there no way to avoid the ffmpeg part for encoding and segmenting the video stream?
What is the best way to pipe the raw frames to ffmpeg?
It sounds like a good approach. You can have ffmpeg output to a stream and use the segmenting tools from Apple to segment it. I believe that the Apple tools have a slightly better mux rate, but it might not matter for your use case.
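To make the capture-and-pipe part concrete, here is a rough Swift sketch under a few assumptions: ffmpeg is at /usr/local/bin/ffmpeg, the display is 1920x1080 with no row padding in the delivered pixel buffers, and the ffmpeg arguments are only meant to illustrate "raw BGRA in, HLS segments out" rather than a tuned command line. You could just as well hand the segmenting to Apple's tools as suggested above.

import AVFoundation
import CoreGraphics

// Rough sketch: capture the main display and pipe raw BGRA frames into ffmpeg,
// which encodes and segments them for HLS. Paths and ffmpeg arguments are
// placeholders; real code should also check canAddInput/canAddOutput and errors.
final class ScreenStreamer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let ffmpeg = Process()
    private let pipe = Pipe()
    private let queue = DispatchQueue(label: "screen.capture")

    func start() throws {
        // Screen input for the main display, capped at roughly 30 fps.
        guard let input = AVCaptureScreenInput(displayID: CGMainDisplayID()) else { return }
        input.minFrameDuration = CMTime(value: 1, timescale: 30)
        session.addInput(input)

        // Raw BGRA frames are delivered to the delegate callback below.
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)

        // Launch ffmpeg reading rawvideo from stdin; it does the encode and the
        // HLS segmenting (swap this for Apple's segmenter if you prefer).
        ffmpeg.executableURL = URL(fileURLWithPath: "/usr/local/bin/ffmpeg")
        ffmpeg.arguments = ["-f", "rawvideo", "-pix_fmt", "bgra",
                            "-s", "1920x1080", "-r", "30", "-i", "pipe:0",
                            "-c:v", "libx264", "-f", "hls",
                            "-hls_time", "2", "/tmp/stream/index.m3u8"]
        ffmpeg.standardInput = pipe
        try ffmpeg.run()

        session.startRunning()
    }

    // Frames arrive via the didOutput callback (didDrop only reports dropped frames).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
        // Assumes bytesPerRow == width * 4; handle row padding in real code.
        let length = CVPixelBufferGetBytesPerRow(pixelBuffer) *
                     CVPixelBufferGetHeight(pixelBuffer)
        pipe.fileHandleForWriting.write(Data(bytes: base, count: length))
    }
}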
Hi
I am using the NAudio library at http://naudio.codeplex.com/
I have this hardware made by some manufacturer which claims to send audio with the following characteristics:
aLaw 8khz, AUD:11,0,3336,0
Not sure what it all means at this stage.
I receive a bunch of bytes from this device when a user speaks into the equipment, so I am constantly receiving a stream of bytes at particular times.
At this stage I have been unable to decode the audio so that I can hear what is spoken into the device through my headphones.
I have tried writing the audio to a file with code like:

FWaveFileWriter = new WaveFileWriter(@"C:\Test4.wav", WaveFormat.CreateALawFormat(8000, 1));

and have been unable to play back the sound using the sample demo apps.
I have tried similar code from http://naudio.codeplex.com/Thread/View.aspx?ThreadId=231245 and http://naudio.codeplex.com/Thread/View.aspx?ThreadId=83270 and still have not been able to achieve much.
Any information is appreciated.
Thanks
Allen
If you are definitely receiving raw a-law audio (mono 8kHz) then your code to create a WAV file should work correctly and result in a file that can play in Windows Media Player.
I suspect that maybe your incoming byte stream is wrapped in some other kind of protocol. I'm afraid I don't know what "AUD:11,0,3336,0" means, but that might be a place to start investigating. Do you hear anything intelligible at all when you play back the file?