How to get audio from a GSM modem - not to a speaker but as an RTP stream - WebRTC

I have a GSM modem with a serial port and soldered points for a MIC and a speaker. Through the serial port I can send AT signalling commands, send SMS and make/receive calls. I had to solder a speaker and a MIC to the modem card. My problem is that I want to read the audio as an RTP stream, possibly through some intermediate hardware/software. My goal is ultimately to get the audio as an RTP stream on some port with some RTP profile. I understand that I need to run a UDP server to serve the audio over a UDP IP-address/port tuple, but how do I get the audio to the UDP server in the first place? I am familiar with codec conversion and aware that I may need it.
This is a simplified diagram of what I intend to do:
GSM-modem-audio ---*1*---> audio-over UDP(ip-address:port)---->
>---*2*--> Kurento RTPendpoint---*3*--->Kurento-WebRTC endpoint
I don't know how to handle part *1* of the puzzle.

Easy way: get a Raspberry Pi, get a USB sound card, and connect the analog output of the modem to the sound card.
It is then no problem to convert the digital ALSA sound-card signal to an RTP stream.
More complex way: create special hardware which will do the same.. oh wait! That would be like a Sangoma board.
Anyway, it is not a wise idea to create something like that yourself except for educational purposes, because it will be costly or low quality.
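As a rough sketch of part *1* under the Raspberry Pi / USB sound card approach: the snippet below captures 8 kHz mono audio from the default ALSA capture device, encodes it to G.711 u-law, and sends it out as RTP/PCMU packets over UDP. The sounddevice package and the destination 127.0.0.1:5004 (where a Kurento RtpEndpoint would listen) are assumptions; a GStreamer or FFmpeg pipeline could do the same job.

    # Sketch: capture ALSA audio and send it as RTP/PCMU (G.711 u-law) over UDP.
    # Assumes the "sounddevice" package and an 8 kHz mono capture device (the USB card).
    import audioop
    import socket
    import struct

    import sounddevice as sd

    DEST = ("127.0.0.1", 5004)   # assumed RTP destination (e.g. a Kurento RtpEndpoint)
    RATE = 8000                  # G.711 sample rate
    FRAME = 160                  # 20 ms of audio per RTP packet

    def rtp_header(seq, ts, ssrc, payload_type=0):
        # V=2, P=0, X=0, CC=0 -> first byte 0x80; payload type 0 = PCMU
        return struct.pack("!BBHII", 0x80, payload_type, seq & 0xFFFF, ts & 0xFFFFFFFF, ssrc)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq, timestamp, ssrc = 0, 0, 0x12345678

    with sd.RawInputStream(samplerate=RATE, channels=1, dtype="int16", blocksize=FRAME) as stream:
        while True:
            data, _overflowed = stream.read(FRAME)   # 16-bit linear PCM from ALSA
            ulaw = audioop.lin2ulaw(bytes(data), 2)  # encode to G.711 u-law
            sock.sendto(rtp_header(seq, timestamp, ssrc) + ulaw, DEST)
            seq += 1
            timestamp += FRAME                       # RTP timestamp advances per sample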

Related

How to display MJPEG stream transmitted via UDP in a Mac OS-X application

I have a camera that sends MJPEG frames as UDP packets over WiFi, which I would like to display in my Mac OS X application. My application is written in Objective-C and I am trying to use the AVFoundation classes to display the live stream. The camera is controlled using HTTP GET & POST requests.
I would like the camera to be recognized as an AVCaptureDevice, since I can easily display streams from different AVCaptureDevices. Because the stream is over WiFi, it isn't recognized as an AVCaptureDevice.
Is there a way I can create my own AVCaptureDevice that I can use to control this camera and display the video stream?
After much research into the packets sent from the camera, I concluded that it does not communicate using any standard protocol such as RTP. What I ended up doing is reverse-engineering the packets to learn more about their contents.
I confirmed that it does send JPEG images over UDP packets, and that it takes multiple UDP packets to send a single JPEG. I listened on the UDP port for packets and assembled them into a single image frame. Once I had a frame, I created an NSImage from it and displayed it in an NSImageView. Works quite nicely.
If anyone is interested, the camera is an Olympus TG-4. I am writing the components to control settings, shutter, etc.
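For anyone wanting to try the same approach, here is a minimal sketch of the reassembly step, assuming each UDP payload carries a JPEG fragment and frames can be delimited by the JPEG start (FF D8) and end (FF D9) markers; the real TG-4 packet layout is proprietary, so the port number and framing here are placeholders:

    # Sketch: listen on a UDP port and reassemble JPEG frames from multiple packets.
    # Port number and framing are placeholders; the TG-4's actual packet layout is proprietary.
    import socket

    PORT = 5678
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))

    buf = bytearray()
    while True:
        payload, _addr = sock.recvfrom(65535)
        buf.extend(payload)
        start = buf.find(b"\xff\xd8")                # JPEG start-of-image marker
        end = buf.find(b"\xff\xd9", start + 2)       # JPEG end-of-image marker
        if start != -1 and end != -1:
            frame = bytes(buf[start:end + 2])        # one complete JPEG frame
            del buf[:end + 2]
            with open("frame.jpg", "wb") as f:       # or hand the bytes to NSImage
                f.write(frame)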

Playing real-time sound from serial port (COM)

I'm using an Arduino's ADC to capture data from an external microphone and send it through the serial port. My question is: how do I capture and play this data with NAudio?
Well, if you know how to receive the audio from the serial port, you'd put it into a BufferedWaveProvider as you receive it. The BufferedWaveProvider can then be played with a WaveOutEvent.
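NAudio is a .NET library; as a rough Python analogue of the same buffer-as-you-receive idea (pyserial for the port, sounddevice for playback, and an assumed 8 kHz / 16-bit / mono sample format from the Arduino):

    # Sketch (Python analogue of the NAudio approach): read raw samples from the serial
    # port and feed them straight to an output stream. Port name, baud rate and the
    # 8 kHz / 16-bit / mono format are assumptions.
    import serial
    import sounddevice as sd

    port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

    with sd.RawOutputStream(samplerate=8000, channels=1, dtype="int16") as out:
        while True:
            chunk = port.read(320)                        # up to 20 ms of 16-bit samples
            chunk = chunk[: len(chunk) - len(chunk) % 2]  # keep only whole samples
            if chunk:
                out.write(chunk)                          # plays as it arrives, like BufferedWaveProvider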

Capture RAW data from Ethernet using Wireshark

I am new to Wireshark and to capturing packets in general. Let me get straight to it.
I have a piece of hardware that outputs its data over Ethernet using UDP broadcast. I can plug an Ethernet cable directly into an in-line RJ-45 coupler (attached to the hardware) and into my PC running Wireshark.
Requirement: I need to capture the raw data that my hardware is broadcasting so it can be given to another team to work out the format it provides, for further post-processing.
What I did: initially, I connected the Ethernet cable from my home network and started capturing packets, which didn't make any sense to me.
Can you please point out whether I am going in the correct direction? Sorry if it's a very basic question, but the raw data from the hardware is important for my further tasks.
As far as any software is concerned, what comes off the wire is always a packet. Between you (in front of the computer) and the cable in the RJ-45 jack sits a NIC (network interface controller, i.e. your network card).
Your Ethernet NIC reads the current on the cable (Manchester encoding, for Ethernet) and synchronizes itself to any Ethernet traffic on that cable. What does "synchronizing" mean here? In front of any Ethernet traffic come 64 alternating bits of 0s and 1s, which are meant to synchronize the clocks on both communicating NICs. Without proper clock synchronization some data may be misinterpreted.
But why am I talking about clock synchronization? Because if you want the data as raw as it is on the cable, you will not get it. A NIC never sends any synchronization bits to the rest of the computer, so it is impossible to read exactly what is on the cable using software.
On the other hand, I find it hard to believe you want the data quite that raw. After the synchronization bits comes an Ethernet-encapsulated packet. Yes, Ethernet uses packets. They are link-layer packets (layer 2 in OSI).
And Wireshark gives you exactly that (in most cases; see the notes at the end for two exceptions to this rule): every Ethernet packet that the NIC understands, manages to sync to, and manages to read without a collision is sent to the kernel and then read by Wireshark. A cable has electrical interference and no provision against collisions (it's just a piece of copper!), so the NIC abstracts away things like interference and collisions.
I'll repeat it once more: after abstracting away the synchronization bits, sender collisions (which turn the cable into one huge interference) and plain interference, all that remains is a stream of packets, one after the other.
Extra Notes
NICs sometimes do ignore some Ethernet packets: packets that are not directed to their MAC. This can be changed by enabling promiscuous mode (available in most NICs). This is irrelevant for broadcast packets.
There are exceptions to the rule of Wireshark getting all the traffic coming from the NIC:
If the traffic comes in incredibly fast, Wireshark may miss its turn in the kernel's scheduling and not see some packets. It happens; nothing can be done about it.
If you listen on all interfaces (as opposed to selecting a single interface to listen on), Wireshark will strip the Ethernet (or WiFi) headers. This is a Wireshark hack needed to make the output files uniform (and readable by other applications).
TL;DR: Wireshark's output (pcap) is pretty much just the stream of packets it got from the NIC, one after the other. That is as raw as you can get with software.
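If what the other team actually needs is just the bytes the hardware broadcasts (rather than the full Ethernet frames Wireshark shows), a small listener like this sketch will dump exactly the UDP payloads; the port number is an assumption to be taken from the Wireshark capture:

    # Sketch: receive the hardware's UDP broadcast and dump the raw payload bytes to a file.
    # The port number (9999) is an assumption; take the real one from the Wireshark capture.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 9999))                     # listen on all interfaces

    with open("dump.bin", "ab") as f:
        while True:
            payload, addr = sock.recvfrom(65535)
            print(addr[0], len(payload), payload[:16].hex())   # quick look at the first bytes
            f.write(payload)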

External USB device interface with Xilinx Atlys board

I'm trying to interface the MindWave (http://store.neurosky.com/products/mindwave-1) with my Atlys board through the USB UART port. The dongle I'm trying to connect is basically a wireless receiver that outputs a serial data stream on the USB connection. I'm trying to read this serial stream on the FPGA.
The problem I'm seeing is that when I try to ChipScope the UartRx pin (A16), I see no activity on it even though the dongle is supposed to send 0xAA in standby mode.
Since the FPGA does not power the dongle, I have it connected to an externally powered USB hub and then connect the hub to the FPGA. However, I don't see any activity.
Do I need to convert the signals to another level, or invert them? I thought the EXAR chip takes care of it.
Did you try swapping RX and TX?
Do you have access to a scope? To check, you can repeatedly send 'U's (0x55) and look with a scope to see which line is RX and which is TX. You can also check the speed of the interface with this method.
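If a PC with a USB-serial adapter is handy, a tiny loop like this (port name and baud rate are assumptions) keeps 0x55 on the line so the RX/TX pins and the bit timing are easy to spot on the scope:

    # Sketch: keep 'U' (0x55) on the line so TX toggles at a steady rate and is easy to
    # see on a scope. Port name and baud rate are assumptions.
    import serial

    with serial.Serial("/dev/ttyUSB0", baudrate=9600) as port:
        while True:
            port.write(b"\x55")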

ZTE voice modem problem

We are using a ZTE USB modem. We can successfully place a call with the AT command ATD, but there is no sound when the remote device answers.
Does anyone have any idea?
My problem was associated with the ZTE USB modem.
I solved the problem.
I can now receive and send voice separately to the voice port, but I cannot get clean sound like the WCDMA UI.
How can I receive and send data with high quality?
Please look at my source code. [http://serv7.boxca.com/files/0/z9g2d59a8rtw6n/ModemDial.zip]
Does anyone know where my error is?
Thank you for your time.
a) Not all ZTE USB modems support voice. To detect whether a modem does, check for a ZTE voUSB Device in your ports list.
b) If the port is present, voice goes through it in PCM format at 64 kbps (8000 samples per second, 8-bit samples).
In your own program, you should read the audio stream from there.
The stream is additionally encoded with G.711, so you need to decode it before sending it to an audio device.
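A minimal sketch of that reading/decoding step, assuming the voUSB port shows up as a serial device (the device name is a placeholder) and that the stream is u-law; A-law (audioop.alaw2lin) may be needed instead, depending on the network:

    # Sketch: read the voice stream from the ZTE voUSB serial port and decode G.711 u-law
    # to 16-bit linear PCM before playback. The device name is a placeholder; A-law
    # (audioop.alaw2lin) may be needed instead of u-law depending on the network.
    import audioop
    import serial
    import sounddevice as sd

    voice = serial.Serial("/dev/ttyUSB2", baudrate=115200, timeout=1)  # assumed voUSB port

    with sd.RawOutputStream(samplerate=8000, channels=1, dtype="int16") as out:
        while True:
            ulaw = voice.read(160)                      # 20 ms of 8-bit G.711 samples
            if ulaw:
                out.write(audioop.ulaw2lin(ulaw, 2))    # decode to 16-bit linear PCM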
It is fairly common to shut off the speaker after the connection is established. Try sending ATM2; that should keep the speaker always on.
Basic Hayes command set:
M2 - speaker always on (data sounds are heard after CONNECT)
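A quick sketch of the same idea over the AT command port (port name and phone number are placeholders):

    # Sketch: keep the modem speaker on (ATM2) before dialing, over the AT command port.
    # Port name and phone number are placeholders.
    import time
    import serial

    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as at:
        at.write(b"ATM2\r")            # M2: speaker always on
        time.sleep(0.5)
        print(at.read(64))             # expect "OK"
        at.write(b"ATD5551234;\r")     # dial; the trailing ';' requests a voice call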
I'm trying to use Asterisk's chan_dongle module with a ZTE MF180 datacard model with activated voice abilities.
Originally chan_dongle used raw PCM format for the voice data.
But I discovered that ZTE uses the ulaw format for sending and receiving voice data.
You can get the voice data and save a file in this format for study by using the standard Asterisk Record(filename:ulaw) command in the dialplan.
My voice data, dumped from the ZTE modem, is in the same format.
I checked it: the dumped ZTE data was successfully played by Asterisk's Playback(dumped) command.
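If you want to listen to such a dump outside Asterisk, a small sketch like this converts a raw u-law file into a WAV (file names are placeholders):

    # Sketch: convert a raw u-law dump (e.g. Asterisk's dumped.ulaw) into a playable WAV
    # by decoding G.711 u-law to 16-bit linear PCM. File names are placeholders.
    import audioop
    import wave

    with open("dumped.ulaw", "rb") as f:
        ulaw = f.read()

    pcm = audioop.ulaw2lin(ulaw, 2)        # G.711 u-law -> 16-bit linear PCM

    with wave.open("dumped.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(8000)
        w.writeframes(pcm)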