NAudio decode stream of bytes

Hi
I am using the NAudio library at http://naudio.codeplex.com/
I have a piece of hardware whose manufacturer claims it sends audio with the following characteristics:
aLaw 8kHz, AUD:11,0,3336,0
I'm not sure what all of that means at this stage.
I receive a bunch of bytes from this device whenever a user speaks into the equipment, so I am constantly receiving a stream of bytes at particular times.
So far I have been unable to decode the audio so that I can hear what is spoken into the device through my headphones.
I have tried writing the audio to a file with code like:
FWaveFileWriter = new WaveFileWriter(@"C:\Test4.wav",
    WaveFormat.CreateALawFormat(8000, 1));
I have been unable to play back the sound using the sample demo apps.
I have tried similar code from
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=231245 and
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=83270
and still have not been able to achieve much.
Any information is appreciated.
Thanks
Allen

If you are definitely receiving raw a-law audio (mono 8kHz) then your code to create a WAV file should work correctly and result in a file that can play in Windows Media Player.
I suspect that maybe your incoming byte stream is wrapped in some other kind of protocol. I'm afraid I don't know what "AUD:11,0,3336,0" means, but that might be a place to start investigating. Do you hear anything intelligible at all when you play back the file?
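For what it's worth, here is a minimal sketch of that approach, assuming the incoming bytes really are raw a-law with nothing wrapped around them. ReceiveChunks() is just a placeholder for however your device hands you data, and the second step uses NAudio's ACM conversion to produce a plain 16-bit PCM copy that almost any player will handle:

using System.Collections.Generic;
using NAudio.Wave;

class ALawCaptureSketch
{
    static void Main()
    {
        // 1. Write the raw a-law bytes into a WAV container (mono, 8 kHz).
        var aLawFormat = WaveFormat.CreateALawFormat(8000, 1);
        using (var writer = new WaveFileWriter(@"C:\Test4.wav", aLawFormat))
        {
            foreach (byte[] chunk in ReceiveChunks())
            {
                writer.Write(chunk, 0, chunk.Length);
            }
        } // disposing the writer finalises the WAV header

        // 2. Optionally convert the a-law WAV to 16-bit PCM for easier playback.
        using (var reader = new WaveFileReader(@"C:\Test4.wav"))
        using (var pcm = WaveFormatConversionStream.CreatePcmStream(reader))
        {
            WaveFileWriter.CreateWaveFile(@"C:\Test4_pcm.wav", pcm);
        }
    }

    // Placeholder: yield each chunk of bytes as it arrives from the device.
    static IEnumerable<byte[]> ReceiveChunks()
    {
        yield break;
    }
}

If the PCM copy still sounds like noise, that would support the theory that the byte stream is wrapped in some device-specific framing that needs to be stripped first.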

Related

iOS: stream to rtmp server from GPUImage

Is it possible to stream video and audio to a rtmp://-server with GPUImage?
I'm using the GPUImageVideoCamera and would love to stream (video + audio) directly to a rtmp-server.
I tried VideoCore, which streams perfectly to e.g. YouTube, but whenever I try to overlay the video with different images I run into performance problems.
GPUImage seems to do a really great job there, but I don't know how to stream with it. I found issues on VideoCore talking about feeding VideoCore with GPUImage, but I don't have a starting point on how that's implemented...

Incorrect currentTime with videojs when streaming HLS

I'm streaming both RTMP and HLS (the latter for iOS and Android). With RTMP, video.js displays the correct currentTime. In my view, currentTime should be when the stream started, not when the client started to view the stream. But with the HLS stream, currentTime returns when the client started the stream and not when the stream started (same result with any player on Android or iOS, or with VLC).
Using ffprobe on my HLS stream I get the correct values, i.e. when the stream started, which makes me believe I should start looking at the client for a solution, but I'm far from sure.
So please help me get in the right direction to solve this problem.
That is, is it HLS by nature that doesn't give me the correct currentTime? Then again, it's strange that ffprobe gives me the correct answer.
I can't find anything in the video.js code about getting any other timecode.
Is it my server that generates the wrong SMPTE timecode for HLS, with ffprobe using some other way to get the correct currentTime?
Anyway, I'm mostly curious; I have a workaround: by counting the fragments used from the start I at least get within a 5-second ballpark, which is good enough for my case.
Thanks for any help or input.
BR David
RTMP and HLS work in different ways.
RTMP is always streaming; when you subscribe, you join the stream that is already running, so its start time is when the stream started.
HLS works differently. When you subscribe to an HLS stream, a stream is created for you, so the current time starts from when that HLS stream was created, i.e. when you subscribed.

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection, it does provide low latency but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc) and currently it seems to only support a limited set of codecs, mostly geared towards voice. Also since the server side is in java I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send streaming PCM buffers to the browser.

how to stream live webcam in unity 3d?

At the moment I am able to see my own webcam in Unity3D as a texture, using this simple tutorial:
http://www.ikriz.nl/2011/12/23/unity-video-remake
Now I want to know how I can see someone else's webcam in Unity 3D.
Can anybody give me some pointers?
What do you mean by 'someone else's webcam'? You can open a socket connection between two computers, stream the other person's webcam over the socket, and show the picture in your application.
The application on the other end can be written in any language/framework.
To help anyone who finds this useful in the future: I did this using a Flash client that takes the live stream and sends it to a local server written in .NET.
That .NET server then sends the stream to a script running in Unity3D; the script is placed on a plane in your model, which then shows the received stream.
It was a bit laggy, but it worked :)
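For anyone trying to reproduce the Unity side of that setup, here is a rough sketch of the kind of script that can sit on the plane. The host, port, and the 'length-prefixed JPEG frame' protocol are my assumptions about the local relay server, not what the original .NET server actually used:

using System.Net.Sockets;
using UnityEngine;

// Hypothetical receiver: reads length-prefixed JPEG frames from a local relay
// server and paints them onto the object this script is attached to.
public class StreamReceiver : MonoBehaviour
{
    public string host = "127.0.0.1"; // assumed address of the local relay server
    public int port = 8765;           // assumed port

    private TcpClient client;
    private NetworkStream stream;
    private Texture2D texture;

    void Start()
    {
        client = new TcpClient(host, port);
        stream = client.GetStream();
        texture = new Texture2D(2, 2);
        GetComponent<Renderer>().material.mainTexture = texture;
    }

    void Update()
    {
        if (stream == null || !stream.DataAvailable) return;

        // Assumed frame layout: 4-byte little-endian length, then the JPEG bytes.
        byte[] header = ReadExactly(4);
        int length = System.BitConverter.ToInt32(header, 0);
        byte[] jpeg = ReadExactly(length);

        texture.LoadImage(jpeg); // decodes the JPEG and resizes the texture
    }

    byte[] ReadExactly(int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
            offset += stream.Read(buffer, offset, count - offset);
        return buffer;
    }

    void OnDestroy()
    {
        if (stream != null) stream.Close();
        if (client != null) client.Close();
    }
}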

ZTE voice modem problem

We are using a ZTE USB modem. We can place a call with the AT command ATD successfully, but there is no sound when the remote device answers.
Does anyone have any idea?
My problem was with the ZTE USB modem.
I have solved that part: I can now receive and send voice separately through the voice port, but I cannot get clean sound like the WCDMA UI does.
How can I receive and send the data with high quality?
Please look at my source code: http://serv7.boxca.com/files/0/z9g2d59a8rtw6n/ModemDial.zip
Does anyone know where my error is?
Thank you for your time.
a) Not all ZTE USB modems support voice. To detect whether your modem does, check for a "ZTE voUSB Device" in your ports list.
b) If the port is present, voice goes through it as PCM at 64 kbit/s (8000 samples per second, 8 bits per sample).
In your own program you should read the audio stream from that port.
The stream is additionally encoded with G.711, so you need to decode it before sending it to the audio device.
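For illustration, here is a rough sketch of that decode step using NAudio's G.711 codecs (NAudio.Codecs.MuLawDecoder / ALawDecoder). The buffer handling is mine, and whether the port delivers µ-law or A-law samples is an assumption, so swap the decoder to match what your modem actually sends:

using NAudio.Codecs;
using NAudio.Wave;

// Sketch: decode G.711 bytes read from the ZTE voUSB port into 16-bit PCM
// and queue them for playback. Use ALawDecoder instead if your modem sends
// the A-law variant of G.711.
static class VoicePortPlayback
{
    static readonly BufferedWaveProvider PcmBuffer =
        new BufferedWaveProvider(new WaveFormat(8000, 16, 1));

    public static IWavePlayer Start()
    {
        var player = new WaveOutEvent(); // any IWavePlayer implementation works
        player.Init(PcmBuffer);
        player.Play();
        return player;
    }

    // Call this with every chunk of bytes read from the voice port.
    public static void OnVoiceBytes(byte[] g711, int count)
    {
        var pcm = new byte[count * 2];
        for (int i = 0; i < count; i++)
        {
            short sample = MuLawDecoder.MuLawToLinearSample(g711[i]);
            pcm[2 * i] = (byte)(sample & 0xFF);     // little-endian 16-bit PCM
            pcm[2 * i + 1] = (byte)(sample >> 8);
        }
        PcmBuffer.AddSamples(pcm, 0, pcm.Length);
    }
}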
It is fairly common to shut off the speaker after the connection is made. Try sending ATM2; that should keep the speaker always on.
From the basic Hayes command set:
M2: speaker always on (data sounds are heard after CONNECT)
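As an illustration of where ATM2 fits relative to the dial command, a small sketch (the COM port name, baud rate, and phone number are placeholders):

using System.IO.Ports;
using System.Threading;

class ModemSpeakerSketch
{
    static void Main()
    {
        // "COM5" is a placeholder; use whichever port your ZTE modem exposes
        // for AT commands (not the voUSB voice port).
        using (var port = new SerialPort("COM5", 115200))
        {
            port.NewLine = "\r";
            port.Open();

            port.WriteLine("ATM2");           // speaker always on
            port.WriteLine("ATD5551234567;"); // placeholder number; ';' = voice call

            Thread.Sleep(500);                // crude wait for the modem's responses
            System.Console.WriteLine(port.ReadExisting());
        }
    }
}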
I'm trying to use Asterisk's chan_dongle module with a ZTE MF180 datacard with voice abilities activated.
chan_dongle originally uses raw PCM for the voice data, but I discovered that ZTE uses the ulaw format for sending and receiving voice data.
You can capture voice data and save a file in this format for study by using Asterisk's standard Record(filename:ulaw) command in the dialplan.
My voice data, dumped from the ZTE modem, is in the same format.
I checked it: the dumped data played back successfully with Asterisk's Playback(dumped) command.
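If you want to inspect such a dump outside Asterisk, one option, assuming it really is headerless 8 kHz µ-law and reusing NAudio from the question at the top of this thread, is to wrap it in a WAV header:

using System.IO;
using NAudio.Wave;

class WrapUlawDump
{
    static void Main(string[] args)
    {
        // args[0]: raw headerless ulaw dump (e.g. Asterisk's "dumped.ulaw")
        // args[1]: output WAV path
        byte[] ulaw = File.ReadAllBytes(args[0]);
        var muLawFormat = WaveFormat.CreateMuLawFormat(8000, 1);

        using (var writer = new WaveFileWriter(args[1], muLawFormat))
        {
            writer.Write(ulaw, 0, ulaw.Length);
        }
    }
}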