I want to stream live recorded audio from the Mac's line-in over an HTTP stream. When searching for this problem I found a lot of tutorials about how to receive a stream, but nothing about sending one.
Is there any tutorial about sending a stream that is not based on a static local file? I found some information on CocoaHTTPServer, but I believe this server can only stream static files. Am I right?
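For what it's worth, the core of sending a live stream over HTTP is just a response with no fixed length that you keep writing captured bytes into as they arrive. Below is a minimal sketch in plain C, independent of CocoaHTTPServer; the port, the Content-Type header, and reading the captured audio from stdin are all placeholder assumptions, not anything prescribed by the question.

// Minimal sketch: serve a live (non-file-backed) audio stream over HTTP.
// The audio source is a placeholder: raw bytes are read from stdin (e.g.
// piped from a capture tool); in a real app this would be the line-in
// capture callback. Handles a single client, no error handling.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);          /* placeholder port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    int cli = accept(srv, NULL, NULL);    /* one listener at a time, for brevity */

    /* No Content-Length: the connection stays open and the body never ends.
     * The content type is an assumption; use whatever your receiver expects. */
    const char *hdr =
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: audio/L16; rate=44100; channels=2\r\n"
        "Connection: close\r\n\r\n";
    write(cli, hdr, strlen(hdr));

    /* Pump captured audio to the client as it arrives. */
    char buf[4096];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
        if (write(cli, buf, (size_t)n) < 0) break;
    }
    close(cli);
    close(srv);
    return 0;
}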
As the title says, are there any methods I can use to play multiple videos continuously using a simple RTMP client (my RTMP server is Wowza)? Here is the approach I have in mind:
When the first video is about to finish, open a new thread to send a new createStream command and a new play command, receive the video's RTMP packets and put them into a buffer list; when the first video has finished, play the RTMP packets from the buffer list.
Is this approach feasible, or are there other recommended methods to achieve it? Any suggestions will be appreciated, thanks!
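For illustration, here is a compilable sketch of that buffering idea with every RTMP-specific piece stubbed out (fetch_next_packet and play_packet are placeholders, not any real client's API). It only shows the thread-plus-buffer-list structure, simplified so that the buffered clip is drained after the prefetch thread finishes.

// Sketch of the double-buffering idea from the question: while the current
// clip plays, a worker thread "creates" the next stream and buffers its
// packets; when the current clip ends, playback switches to the buffer list.
#include <pthread.h>
#include <stdio.h>

#define BUFCAP 256

typedef struct { int data; } packet_t;      /* stand-in for an RTMP packet */

typedef struct {
    packet_t buf[BUFCAP];
    int count;
    pthread_mutex_t lock;
} packet_list_t;

/* Placeholder: a real client would send createStream/play and read packets. */
static int fetch_next_packet(const char *stream, packet_t *out, int i) {
    (void)stream;
    if (i >= 100) return 0;                 /* pretend the clip has 100 packets */
    out->data = i;
    return 1;
}

static void play_packet(const packet_t *p) {    /* placeholder for the player */
    printf("playing packet %d\n", p->data);
}

static void *prefetch_thread(void *arg) {
    packet_list_t *list = arg;
    packet_t p;
    /* Send the new createStream/play commands and buffer the incoming packets. */
    for (int i = 0; fetch_next_packet("video2.mp4", &p, i); i++) {
        pthread_mutex_lock(&list->lock);
        if (list->count < BUFCAP) list->buf[list->count++] = p;
        pthread_mutex_unlock(&list->lock);
    }
    return NULL;
}

int main(void) {
    packet_list_t next;
    next.count = 0;
    pthread_mutex_init(&next.lock, NULL);

    /* When the first video is about to finish, start prefetching the second. */
    pthread_t tid;
    pthread_create(&tid, NULL, prefetch_thread, &next);

    /* ... the first video finishes playing here ... */
    pthread_join(tid, NULL);

    /* Then drain the buffered packets of the second video. */
    for (int i = 0; i < next.count; i++)
        play_packet(&next.buf[i]);
    return 0;
}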
While the functionality is not built in, Wowza does have a module called StreamPublisher that lets you implement a server-side playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.
Is it possible to stream video and audio to an rtmp:// server with GPUImage?
I'm using the GPUImageVideoCamera and would love to stream (video + audio) directly to a rtmp-server.
I tried VideoCore, which streams perfectly to e.g. YouTube, but whenever I try to overlay the video with different images I get performance problems.
It seems as if GPUImage is doing a really great job there, but I don't know how to stream with it. I found issues on VideoCore about feeding VideoCore with GPUImage, but I don't have a starting point on how that's implemented...
Trying to split the audio from the video in an HLS TS stream; the audio is in AAC format.
The goal is to end up with some sort of AVAsset that I can later manipulate and then mux back into the video.
After searching for a while I can't find a solid lead. Can someone give me an educated direction to take on this issue?
You can use the ffmpeg/libav library for demuxing the TS. To load the audio back as an AVAsset, it might be necessary to load it from a URL, either by writing it temporarily to disk or by serving it with a local HTTP server within your program.
I think you might run into some trouble in manipulating the audio stream, assuming you want to manipulate the raw audio data. That will require decoding the AAC, modifying it, re-encoding, and re-muxing with the video. That's all possible with ffmpeg/libav, but it's not really that easy.
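For the demuxing step, a rough libavformat sketch (error handling trimmed; "segment.ts" and "audio.aac" are placeholder paths) could look like this:

// Copy the AAC track out of an HLS .ts segment into a raw ADTS file,
// without re-encoding.
#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *in = NULL, *out = NULL;

    if (avformat_open_input(&in, "segment.ts", NULL, NULL) < 0) return 1;
    if (avformat_find_stream_info(in, NULL) < 0) return 1;

    /* Pick the audio stream (the AAC track in a typical HLS segment). */
    int aidx = av_find_best_stream(in, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    if (aidx < 0) return 1;

    /* Raw ADTS AAC output; packets are copied, not re-encoded. */
    avformat_alloc_output_context2(&out, NULL, "adts", "audio.aac");
    AVStream *ost = avformat_new_stream(out, NULL);
    avcodec_parameters_copy(ost->codecpar, in->streams[aidx]->codecpar);
    ost->codecpar->codec_tag = 0;

    if (avio_open(&out->pb, "audio.aac", AVIO_FLAG_WRITE) < 0) return 1;
    if (avformat_write_header(out, NULL) < 0) return 1;

    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(in, pkt) >= 0) {
        if (pkt->stream_index == aidx) {
            /* Rescale timestamps from the input to the output time base. */
            av_packet_rescale_ts(pkt, in->streams[aidx]->time_base, ost->time_base);
            pkt->stream_index = 0;
            av_interleaved_write_frame(out, pkt);
        }
        av_packet_unref(pkt);
    }

    av_write_trailer(out);
    av_packet_free(&pkt);
    avio_closep(&out->pb);
    avformat_free_context(out);
    avformat_close_input(&in);
    return 0;
}

From the command line, ffmpeg -i segment.ts -vn -c:a copy audio.aac does the same packet copy, and the resulting file can then be handed to an AVURLAsset.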
Hi
I am using the NAudio library at http://naudio.codeplex.com/
I have this hardware made by some manufacturer which claims to send audio with the following characteristics:
aLaw 8khz, AUD:11,0,3336,0
Not sure what it all means at this stage.
I receive a bunch of bytes from this device when a user speaks into the equipment, so I am constantly receiving a stream of bytes at particular times.
At this stage I have been unable to decode the audio so that I can hear what is spoken into the device through my headphones.
I have tried writing the audio to a file with code like:
FWaveFileWriter = new WaveFileWriter(@"C:\Test4.wav", WaveFormat.CreateALawFormat(8000, 1));
And I have been unable to play back the sound using the sample demo apps.
I have tried similar code from
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=231245 and
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=83270
and still have not been able to achieve much.
Any information is appreciated.
Thanks
Allen
If you are definitely receiving raw a-law audio (mono 8kHz) then your code to create a WAV file should work correctly and result in a file that can play in Windows Media Player.
I suspect that maybe your incoming byte stream is wrapped in some other kind of protocol. I'm afraid I don't know what "AUD:11,0,3336,0" means, but that might be a place to start investigating. Do you hear anything intelligible at all when you play back the file?
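For reference, "raw a-law" just means each byte is one G.711 A-law sample that expands to one 16-bit PCM sample. A small, library-independent sketch of the standard expansion (in C rather than C#, reading a-law bytes on stdin and writing 16-bit little-endian 8kHz mono PCM on stdout) can help check whether the bytes really are plain a-law or are wrapped in some protocol:

#include <stdint.h>
#include <stdio.h>

/* Expand one A-law byte to a signed 16-bit linear PCM sample
 * (standard G.711 reference expansion). */
static int16_t alaw_to_linear(uint8_t a)
{
    a ^= 0x55;                               /* undo A-law even-bit inversion */
    int t   = (a & 0x0F) << 4;               /* mantissa                      */
    int seg = (a & 0x70) >> 4;               /* segment (exponent)            */

    if (seg == 0)      t += 8;
    else if (seg == 1) t += 0x108;
    else               t  = (t + 0x108) << (seg - 1);

    return (int16_t)((a & 0x80) ? t : -t);   /* sign bit set means positive   */
}

int main(void)
{
    /* Read raw A-law bytes on stdin, write 16-bit little-endian PCM on stdout. */
    int c;
    while ((c = getchar()) != EOF) {
        int16_t s = alaw_to_linear((uint8_t)c);
        putchar(s & 0xFF);
        putchar((s >> 8) & 0xFF);
    }
    return 0;
}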
I'm reading Apple's documentation on using QTKit to capture streaming audio and video from input sources. I read that the central class QTCaptureSession handles the input and sends it to every possible output (QTCaptureOutput). The documentation lists six subclasses of QTCaptureOutput (e.g., QTCaptureMovieFileOutput).
I would like to create a subclass of QTCaptureOutput to write to an NSSocket so I could send the video stream over to another machine.
Any ideas would be appreciated.
QTCaptureOutput does not strike me as a class designed to be subclassed outside of QTKit. Instead, you can try dumping the data to a local domain socket in the file system using a QTCaptureFileOutput object, and simultaneously pulling the data from that local domain socket and sending it over to the remote machine.
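The "pull from that local domain socket and send it over to the remote machine" half is generic socket plumbing. A minimal POSIX C sketch follows; the socket path /tmp/capture.sock and the remote address 192.168.0.10:9000 are placeholders, and how the capture output's data actually ends up on that local socket is the part left open above.

// Relay: accept one local writer on a Unix-domain socket and forward its
// bytes to a remote TCP endpoint. No error handling, single connection.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    /* Local side: Unix-domain socket the capture data is written into. */
    int local = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un lun = {0};
    lun.sun_family = AF_UNIX;
    strncpy(lun.sun_path, "/tmp/capture.sock", sizeof lun.sun_path - 1);
    unlink(lun.sun_path);
    bind(local, (struct sockaddr *)&lun, sizeof lun);
    listen(local, 1);
    int src = accept(local, NULL, NULL);

    /* Remote side: plain TCP connection to the receiving machine. */
    int dst = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in rin = {0};
    rin.sin_family = AF_INET;
    rin.sin_port = htons(9000);
    inet_pton(AF_INET, "192.168.0.10", &rin.sin_addr);
    connect(dst, (struct sockaddr *)&rin, sizeof rin);

    /* Relay until the local writer closes its end. */
    char buf[65536];
    ssize_t n;
    while ((n = read(src, buf, sizeof buf)) > 0) {
        if (write(dst, buf, (size_t)n) < 0) break;
    }
    close(src);
    close(dst);
    close(local);
    return 0;
}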