iOS: stream to RTMP server from GPUImage

Is it possible to stream video and audio to an rtmp:// server with GPUImage?
I'm using GPUImageVideoCamera and would love to stream (video + audio) directly to an RTMP server.
I tried VideoCore, which streams perfectly to e.g. YouTube, but whenever I try to overlay the video with different images I run into performance problems.
It seems as though GPUImage does a really great job there, but I don't know how to stream with it. I found issues on VideoCore talking about feeding VideoCore with GPUImage, but I don't have a starting point for how that's implemented...

Related

RTMP VOD: how to play multiple video files (mp4 or flv) continuously using a simple RTMP client

As the title says, are there any methods I can use to play multiple videos continuously using a simple RTMP client (my RTMP server is Wowza)? Here is the approach I have in mind:
When the first video is about to finish, open a new thread to send a new createStream command and a new play command, receive the second video's RTMP packets and put them into a buffer list; when the first video is finished, play the RTMP packets from the buffer list.
Is this approach workable, or are there other recommended methods to achieve it? Any suggestion will be appreciated, thanks!
While the functionality is not built in, Wowza does have a module called StreamPublisher that lets you implement a server-side playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.

Recording video simultaneously with audio in chrome blocks on main thread, causing invalid audio

So, I have what I think is a fairly interesting and, hopefully, not intractable problem. I have an audio/video getUserMedia stream that I am recording in Chrome. Individually, the tracks record perfectly well. However, when attempting to record both, one blocks the main thread, hosing the other. I know that there is a way to resolve this. Muaz Khan has a few demos that seem to work without blocking.
Audio is recorded via the Web Audio API. I am piping the audio track into a processor node which converts it to 16-bit mono and streams it to a node.js server.
Video is recorded via the usual canvas hack and Whammy.js. When recording, video frames are drawn to a canvas and the resulting image data is pushed into a frames array, which is later encoded into a WebM container by Whammy and subsequently uploaded to the node.js server.
The two are then muxed together via ffmpeg server-side and the result stored.
The ideas I've had so far are:
Delegate one to a worker thread. Unfortunately both the canvas and the stream are members of the DOM as far as I know.
Install a headless browser in node.js and establish a WebRTC connection with the client, thereby exposing the entire stream server-side.
The entire situation will eventually be solved by the Audio Worker implementation. The working group seems to have stalled public progress updates on that while things are shuffled around a bit, though.
Any suggestions for resolving the thread blocking?
Web Audio Connections:
var context = new AudioContext();
// Wrap the getUserMedia stream's audio track as a Web Audio source node.
var source = context.createMediaStreamSource(stream);
// ScriptProcessor with a 2048-sample buffer, mono in / mono out.
var node = context.createScriptProcessor(2048, 1, 1);
node.onaudioprocess = audioProcess;
source.connect(node);
// In Chrome the processor must be connected to a destination or onaudioprocess never fires.
node.connect(context.destination);
Web Audio Processing:
function audioProcess(e) {
  if (!recording.audio) return;
  // Mono Float32 samples for this block, converted to 16-bit PCM and streamed to the server.
  var leftChannel = e.inputBuffer.getChannelData(0);
  Socket.emit('record-audio', convertFloat32ToInt16(leftChannel));
}
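The convertFloat32ToInt16 helper isn't shown in the question; a minimal sketch of what such a helper typically looks like, assuming the server expects signed 16-bit PCM:
function convertFloat32ToInt16(float32) {
  // Clamp each sample to [-1, 1] and scale it to the signed 16-bit range.
  var int16 = new Int16Array(float32.length);
  for (var i = 0; i < float32.length; i++) {
    var s = Math.max(-1, Math.min(1, float32[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return int16.buffer; // socket.io can send the raw ArrayBuffer as binary
}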
Video Frame Buffering:
function drawFrame() {
  if (recording.video) {
    // Clear the canvas, then paint the current video frame onto it.
    players.canvas.context.fillRect(0, 0, players.video.width, players.video.height);
    players.canvas.context.drawImage(players.video.element, 0, 0, players.video.width, players.video.height);
    // Buffer the frame as a data URI for Whammy to encode later.
    frames.push({
      duration: 100,
      image: players.canvas.element.toDataURL('image/webp')
    });
    lastTime = new Date().getTime();
    requestAnimationFrame(drawFrame);
  } else {
    // Recording stopped: hand the buffered frames off to be encoded and uploaded.
    requestAnimationFrame(getBlob);
  }
}
Update: I've since managed to stop the two operations from completely blocking one another, but it's still happening enough to distort my audio.
There are a few key things that allow for successful getUserMedia recording in Chrome at the moment, gleaned from the helpful comments attached to the original question and my own experience.
When harvesting data from the recording canvas, encode as JPEG. I had been attempting WebP to satisfy the requirements of Whammy.js; generating a WebP data URI is apparently a cycle hog.
Delegate as many of the non-DOM operations as possible to worker threads. This is especially true of any streaming / upload operations (e.g., audio sample streaming via websockets).
Avoid requestAnimationFrame as a means of driving the recording canvas drawing. It is resource intensive and, as aldel has pointed out, can fail if the user switches tabs. Using setInterval is much more efficient/reliable, and it also allows for better framerate control (see the sketch after this list).
For Chrome at least, avoid client-side A/V encoding for the time being. Stream audio samples and video frames to the server for processing. While client-side A/V encoding libraries are very cool, they simply don't seem efficient enough for production quite yet.
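A minimal sketch of such a capture loop, driven by setInterval and encoding JPEG frames; it reuses the players/frames/recording names from the snippets above, and the 100 ms interval is an assumed value to tune for your target framerate:
// Capture roughly 10 fps with a fixed timer instead of requestAnimationFrame.
var captureTimer = setInterval(function () {
  if (!recording.video) {
    clearInterval(captureTimer); // stop capturing once recording ends
    return;
  }
  players.canvas.context.drawImage(players.video.element, 0, 0,
    players.video.width, players.video.height);
  frames.push({
    duration: 100, // must match the capture interval
    image: players.canvas.element.toDataURL('image/jpeg', 0.7) // JPEG is far cheaper than WebP here
  });
}, 100);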
Also, for Node.js ffmpeg automation, I highly recommend fluent-ffmpeg. Special thanks to Benjamin Trent for some practical examples.
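As an illustration, a sketch of the server-side mux step with fluent-ffmpeg; the file paths are hypothetical, and it assumes the uploaded video is a VP8 WebM and the audio a PCM WAV:
var ffmpeg = require('fluent-ffmpeg');

// Mux the uploaded video and audio into a single WebM; paths are hypothetical.
ffmpeg('/uploads/session-video.webm')
  .input('/uploads/session-audio.wav')
  .videoCodec('copy')        // keep the VP8 video as-is
  .audioCodec('libvorbis')   // encode the PCM audio for the WebM container
  .on('end', function () { console.log('mux complete'); })
  .on('error', function (err) { console.error(err); })
  .save('/uploads/session-combined.webm');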
@aldel is right. Increasing the bufferSize value fixes it, e.g. bufferSize = 16384.
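A minimal change to the script-processor setup from the question, reusing the same context/source variables:
// A larger buffer means onaudioprocess fires less often, so brief main-thread
// stalls are much less likely to starve the audio capture.
var node = context.createScriptProcessor(16384, 1, 1);
node.onaudioprocess = audioProcess;
source.connect(node);
node.connect(context.destination);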
Try this demo in Chrome and record audio+video. You'll hear clearly recorded WAV audio in parallel with 720p video frames.
BTW, I agree with jesup that MediaRecorder solutions should be preferred.
The Chromium guys are very close, and hopefully M47/48 will bring MediaRecorder implementations, at least for video (VP8) recordings!
There is a Chrome-based alternative to whammy.js as well:
https://github.com/streamproc/MediaStreamRecorder/issues/43

Incorrect currentTime with videojs when streaming HLS

I'm streaming both RTMP and HLS (the latter for iOS and Android). With RTMP, video.js displays the correct currentTime. To my mind, currentTime should reflect when the stream started, not when the client started viewing it. But with the HLS stream, currentTime returns when the client started the stream rather than when the stream started (I get the same result using any player on Android or iOS, or VLC).
Using ffprobe on my HLS stream I get the correct values, i.e. when the stream started, which makes me believe I should start looking at the client for a solution, but I'm far from sure.
So please help me get in the right direction to solve this problem.
I.e., is it the nature of HLS that it doesn't give me the correct currentTime? It's odd, then, that ffprobe gives me the correct answer.
I can't find anything in the video.js code about how to get any other timecode.
Is it my server that generates the wrong SMPTE timecode for HLS, while ffprobe uses other means to get the correct currentTime?
Anyway, I'm mostly curious; I have a workaround: by counting the fragments used so far I at least get within a 5-second ballpark, which is good enough for my case.
Thanks for any help or input.
BR David
RTMP and HLS work in different ways.
RTMP is always streaming, and when you subscribe to the stream, you subscribe to the running stream, so the start time will be when the stream started.
HLS works differently. When you subscribe to an HLS stream, it creates a stream for you. So the current time will be when the HLS stream was started, which means when you subscribed and the HLS stream was created.
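Given that behaviour, the asker's fragment-counting workaround can be made a bit more systematic by reading the media playlist directly. A sketch (getStreamElapsedSeconds and playlistUrl are hypothetical names; it approximates each expired segment as TARGETDURATION seconds long, so accuracy depends on how uniform your segments are):
// Estimate how long the live stream has been running from the HLS media playlist.
// MEDIA-SEQUENCE is the index of the first segment still in the sliding window,
// so sequence * target duration approximates the content already dropped from it.
function getStreamElapsedSeconds(playlistUrl, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', playlistUrl);
  xhr.onload = function () {
    var seq = 0, target = 0;
    xhr.responseText.split('\n').forEach(function (l) {
      if (l.indexOf('#EXT-X-MEDIA-SEQUENCE:') === 0) seq = parseInt(l.split(':')[1], 10);
      if (l.indexOf('#EXT-X-TARGETDURATION:') === 0) target = parseInt(l.split(':')[1], 10);
    });
    callback(seq * target);
  };
  xhr.send();
}
Adding the player's own currentTime within the current window gets you closer still; and if your server can emit #EXT-X-PROGRAM-DATE-TIME tags, those give an exact wall-clock reference instead of an estimate.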

Demux HLS TS stream Split Audio AAC From Video

I'm trying to split the audio (AAC) out of an HLS TS stream.
The goal is to end up with some sort of AVAsset that I can later manipulate and then mux back into the video.
After searching for a while I can't find a solid lead; can someone give me an educated direction to take on this issue?
You can use the ffmpeg/libav library for demuxing the TS. To load the audio back as an AVAsset, it might be necessary to load it from a URL, either by writing it temporarily to disk or by serving it with a local HTTP server within your program.
I think you might run into some trouble manipulating the audio stream, assuming you want to manipulate the raw audio data. That will require decoding the AAC, modifying it, re-encoding, and re-muxing with the video. That's all possible with ffmpeg/libav, but it's not really that easy.
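The demux step itself is just a stream copy of each elementary stream into its own container. A sketch of that mapping using the fluent-ffmpeg Node wrapper purely for illustration (file names are hypothetical; on iOS you would drive the libav demuxing APIs or an ffmpeg build directly, but the options are the same):
var ffmpeg = require('fluent-ffmpeg');

// Split one TS segment into an audio-only ADTS file and a video-only TS,
// without re-encoding either stream. Paths are hypothetical.
ffmpeg('segment.ts')
  .output('audio.aac')
  .noVideo()
  .audioCodec('copy')   // copy the AAC elementary stream as-is
  .output('video.ts')
  .noAudio()
  .videoCodec('copy')   // copy the H.264 elementary stream as-is
  .on('end', function () { console.log('demux finished'); })
  .on('error', function (err) { console.error(err); })
  .run();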

NAudio decode stream of bytes

Hi
I am using the NAudio library at http://naudio.codeplex.com/
I have this hardware made by some manufacturer which claims to send audio with the following characteristics:
aLaw 8khz, AUD:11,0,3336,0
Not sure what it all means at this stage. I receive a bunch of bytes from this device when a user speaks into the equipment, so I am constantly receiving a stream of bytes at particular times. So far I have been unable to decode the audio so that I can hear what is spoken into the device through my headphones.
I have tried writing the audio to a file with code like:
FWaveFileWriter = new WaveFileWriter(@"C:\Test4.wav", WaveFormat.CreateALawFormat(8000, 1));
And have been unable to play back the sound using the sample demo apps.
I have tried similar code from
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=231245 and
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=83270
and still have not been able to achieve much.
Any information is appreciated.
Thanks
Allen
If you are definitely receiving raw a-law audio (mono 8kHz) then your code to create a WAV file should work correctly and result in a file that can play in Windows Media Player.
I suspect that maybe your incoming byte stream is wrapped in some other kind of protocol. I'm afraid I don't know what "AUD:11,0,3336,0" means, but that might be a place to start investigating. Do you hear anything intelligible at all when you play back the file?