I'm writing an iOS client for a Text-to-Speech product that requires the audio to be sent in PCM format: 44100 Hz sample rate, 16-bit, mono. So I'm recording using LPCM.
Users must be able to hit play and hear their own voice afterwards, so the server sends the audio back exactly as it was recorded.
For playback I'm using AudioStreamer (https://github.com/mattgallagher/AudioStreamer), and I've experimented a lot with buffer sizes and the like. Right now I'm using 64 KB buffers and start playing once at least three buffers are filled, with 16 buffers in total to avoid running out of free ones.
And here's the thing: when playing over WiFi it takes longer to fill the buffers than over 3G or 4G. Crazy! Below is a log I added to watch the buffers fill; you can see the packets are no bigger than a few kilobytes. I wonder if this is normal. Over 3G they arrive much more smoothly than over WiFi:
2013-09-16 23:50:35.997 < AudioStreamer.m:(1855)> Handle incoming data, 1382 bytes , with bytesFilled 19340
2013-09-16 23:50:36.017 < AudioStreamer.m:(1415)> AudioFile Stream Parse Bytes
2013-09-16 23:50:36.018 < AudioStreamer.m:(1855)> Handle incoming data, 5528 bytes , with bytesFilled 20722
2013-09-16 23:50:36.020 < AudioStreamer.m:(1415)> AudioFile Stream Parse Bytes
2013-09-16 23:50:36.021 < AudioStreamer.m:(1855)> Handle incoming data, 1382 bytes , with bytesFilled 26250
2013-09-16 23:50:36.031 < AudioStreamer.m:(1415)> AudioFile Stream Parse Bytes
2013-09-16 23:50:36.032 < AudioStreamer.m:(1855)> Handle incoming data, 1382 bytes , with bytesFilled 27632
2013-09-16 23:50:36.034 < AudioStreamer.m:(1415)> AudioFile Stream Parse Bytes
This makes the buffers take a few seconds longer to fill over WiFi.
64 KB buffers, play after 3 buffers, no gaps, over WiFi:
2013-09-17 19:29:48:553 AudioStreamer Waiting for data
2013-09-17 19:29:52:094 Begin playing audio queue with buffers used 3
64 KB buffers, play after 3 buffers, no gaps, over 3G:
2013-09-17 19:27:33:680 AudioStreamer Waiting for data
2013-09-17 19:27:35:954 Begin playing audio queue with buffers used 3
As you can see from the timestamps, 3G fills the first three buffers in about 2.3 s versus 3.5 s over WiFi. Any clues?
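A back-of-the-envelope check (a Python sketch, with the rates taken from the question) shows why startup can never be instant with this prebuffer setting: three 64 KB buffers of 44100 Hz, 16-bit mono PCM hold about 2.2 s of audio, so even a link delivering exactly at the PCM rate needs that long before playback starts. That is essentially the 3G figure, so 3G is already near the floor and WiFi is the outlier.

```python
# Back-of-the-envelope startup-latency check for the prebuffer settings
# described above (rates taken from the question).
SAMPLE_RATE = 44100            # Hz
BYTES_PER_SAMPLE = 2           # 16-bit
CHANNELS = 1                   # mono

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS   # 88200 B/s

BUFFER_SIZE = 64 * 1024        # 64 KB per buffer
PREBUFFER_COUNT = 3            # playback starts after three full buffers

prebuffer_bytes = BUFFER_SIZE * PREBUFFER_COUNT
min_startup_seconds = prebuffer_bytes / bytes_per_second

# A server streaming at exactly the PCM rate can't fill three buffers
# faster than this.
print(round(min_startup_seconds, 2))   # 2.23 s at best
```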
I have a Raspberry Pi exposing a JSON-based API over BLE (via bluetooth.service.generic_attribute) and a React Native app using react-native-ble-manager. The app sends requests through a writeWithoutResponse characteristic that transfers 20-byte chunks (17 ASCII bytes of JSON plus 3 control bytes); the Pi reassembles the JSON from the chunks and replies over a notification/indication characteristic with a 3-byte "ok" flag requesting the next chunk.
From the Raspberry Pi to the app I use the notification/indication characteristic the same way: 20-byte chunks (17 ASCII bytes of JSON plus 3 control bytes), reassembled on the app side, which acknowledges each chunk with a 3-byte "ok" over the writeWithoutResponse characteristic before the next one is sent.
The problem is that the transfer rate is far too slow (around 20 seconds to transfer 600 ASCII characters). Does anyone have a better approach?
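As a rough stop-and-wait model (a Python sketch using the question's figures), the per-chunk acknowledgement round trip, not BLE bandwidth, is what dominates:

```python
# Stop-and-wait model of the BLE transfer described above, using the
# question's figures.
PAYLOAD_PER_CHUNK = 17             # JSON bytes per 20-byte chunk
TOTAL_CHARS = 600
OBSERVED_SECONDS = 20.0

chunks = -(-TOTAL_CHARS // PAYLOAD_PER_CHUNK)    # ceiling division
seconds_per_chunk = OBSERVED_SECONDS / chunks
print(chunks, round(seconds_per_chunk, 2))       # 36 chunks, ~0.56 s each
```

Two things usually help: negotiate a larger ATT MTU so each write carries far more than 17 payload bytes, and send several writeWithoutResponse chunks back-to-back (with sequence numbers in the control bytes) instead of waiting for an "ok" after every one, since that characteristic type doesn't require a per-write response.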
So I'm using NAudio to receive data from the computer mic. The waveIn buffer is 100 ms. I receive 9600 bytes per DataAvailable event, which corresponds to 2 bytes per sample for 100 ms (48000/10 samples at 2 bytes each). So far so good. The problem, however, is that I receive data 20 times a second, not the 10 you would expect. I increment a counter on every DataAvailable event, and after one minute it reads 1200, not 600.
Anyone know why this is happening? Is there something I don't get?
var waveIn = new WaveInEvent { BufferMilliseconds = 100 };  // 100 ms buffer, as described above
waveIn.DeviceNumber = selectedDevice;
waveIn.DataAvailable += waveIn_DataAvailable;
waveIn.WaveFormat = new WaveFormat(48000, 1);  // 48 kHz, 16-bit, mono
waveIn.StartRecording();
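A quick arithmetic check of the rates described above (a Python sketch; the factor-of-two hypothesis at the end is mine, not confirmed):

```python
# Sanity check of the DataAvailable arithmetic, assuming the device really
# delivers the requested format (48 kHz, mono, 16-bit) in 100 ms buffers.
SAMPLE_RATE = 48000
BYTES_PER_SAMPLE = 2
CHANNELS = 1
BUFFER_MS = 100

expected_bytes = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * BUFFER_MS // 1000
print(expected_bytes)  # 9600, matching the observed size per event

# 20 events/s at 9600 bytes each is twice the data the requested format
# implies -- one possible explanation is that the driver is actually
# capturing two channels (or double the rate) despite the mono request.
observed_bytes_per_second = 20 * 9600
requested_bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
factor = observed_bytes_per_second // requested_bytes_per_second
print(factor)  # 2
```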
I made a setup consisting of 3 XBees: 2 routers (XBee S2C) and 1 coordinator (XBee S2). Each router is connected to an Arduino Nano that collects data from 2 FSRs and an IMU (frame type: ZigBee Transmit Request, packet size 46 bytes) and sends it to the coordinator attached to an Arduino UNO. All the XBees are in API mode 2 and run at a baud rate of 115200. I am using a library called "Simple Zigbee Library" to send all the collected data to the coordinator. Collection and sending work fine, except that packets are lost along the way. The Nanos sample data independently at around 25 Hz each. The coordinator tries to read the data sent from the XBees (using the library, of course) in every loop, but unfortunately it seems to receive only around 40-45 samples per second instead of the expected 25 × 2 = 50. Can anybody suggest why this is happening? I need as little data loss as possible for my setup to achieve its purpose. Any kind of help is appreciated.
P.S: It may be important to mention that the coordinator is reading the data only from one xbee in each loop.
As can be seen under the "Source" heading of this image of data received by the coordinator, "19" and "106" are the addresses of the routers, and data packets are dropped intermittently.
Thank you.
void setup()
{
    // Start the serial ports ...
    Serial.begin( 115200 );
    while( !Serial ){;}          // Wait for serial port (for Leonardo only).
    xbeeSerial.begin( 115200 );
    // ... and set the serial port for the XBee radio.
    xbee.setSerial( xbeeSerial );
    // Set a non-zero frame id to receive Status packets.
    xbee.setAcknowledgement(true);
}
void loop()
{
    // While data is waiting in the XBee serial port ...
    while( xbee.available() )
    {
        // ... read the data.
        xbee.read();
        // If a complete message is available, display the contents.
        if( xbee.isComplete() ){
            Serial.print("\nIncoming Message: ");
            printPacket( xbee.getIncomingPacketObject() );
        }
    }
    delay(10); // Small delay for stability
    // That's it! The coordinator is ready to go.
}
// Function for printing the complete contents of a packet //
void printPacket(SimpleZigBeePacket & p)
{
    //Serial.print( START, HEX );
    //Serial.print(' ');
    //Serial.print( p.getLengthMSB(), HEX );
    //Serial.print(' ');
    //Serial.print( p.getLengthLSB(), HEX );
    //Serial.print(' ');
    // Frame Type and Frame ID are stored in Frame Data.
    uint8_t checksum = 0;
    for( int i = 10; i < p.getFrameLength(); i++ ){
        Serial.print( p.getFrameData(i), HEX );
        Serial.print(' ');
        checksum += p.getFrameData(i);
    }
    // Calculate checksum based on summation of frame bytes.
    checksum = 0xff - checksum;
    Serial.print( checksum, HEX );
    Serial.println();
}
Although you claim to be using 115,200 bps, the posted code shows you opening the serial ports at 9600 baud, definitely not fast enough for 2,500 bytes/second (50 packets/second × 45 bytes/packet × 110% for overhead) received from the XBee and dumped by printPacket(). Remember that 802.15.4 is always 250 kbps over the air; the XBee module's serial port configuration only affects local communication with the host.
Make sure your routers are sending unicast (and not broadcast) packets to keep the radio traffic down.
You should verify that sending works before troubleshooting code on the coordinator. Update the code on your routers to check whether you get a successful Transmit Status packet for every packet sent. Aiming for 50 Hz seems like a bit much -- you're trying to send 45 bytes (is that the full size of the API frame?) every 20 ms.
Are you using a hardware serial port on the Arduino for both the XBee module and Serial.print()? How much time does each call to printPacket() take? If you reduce printPacket() to a bare minimum (last byte of the sender's address and the 1-byte frame ID), do you see all packets come through? If so, that's an indication you were spending too much time dumping the packets.
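To put a rough number on that last question, here is a sketch (Python; the 36-byte dump size and per-character framing are my assumptions, not measurements) of what printPacket() costs at 115200 baud:

```python
# Rough cost of printPacket() on the coordinator, assuming ~36 frame bytes
# are dumped per packet (46-byte packet minus the header bytes skipped by
# the loop), each printed as "XX " (3 characters), at 115200 baud with
# 8N1 framing (10 bits per character).
BAUD = 115200
CHARS_PER_SECOND = BAUD / 10       # ~11520 characters per second
DUMPED_BYTES = 36                  # assumption: frame bytes printed per packet
CHARS_PER_PACKET = DUMPED_BYTES * 3 + 10   # hex dump plus message prefix

ms_per_packet = 1000 * CHARS_PER_PACKET / CHARS_PER_SECOND
print(round(ms_per_packet, 1))     # ~10.2 ms per packet

# At 50 packets/s, printing alone consumes roughly half of each second --
# close enough to saturation that incoming frames can overflow the buffer.
busy_fraction = 50 * ms_per_packet / 1000
print(round(busy_fraction, 2))
```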
I'm also concerned about the code you're using in loop(). I don't know the deep internals of how the Arduino works, but does that 10 ms delay block other code from processing data? What if you simplify it:
void loop()
{
    xbee.read();
    // Process any complete frames.
    while (xbee.isComplete()){
        Serial.print("\nIncoming Message: ");
        printPacket( xbee.getIncomingPacketObject() );
    }
}
But before going too far, you should isolate the problem by connecting the coordinator to a terminal emulator on a PC to monitor the frame rate. If all frames arrive, the issue is on the coordinator; if they don't, work on your router code first.
I'm using the SiLabs C8051F320 configured as a HID to stream ADC data (in 64B or 32B reports) to the PC. I'm basing my HID on the SiLabs example code, with bInterval = 1 and experimenting with endpoint 1 (EP1) versus endpoint 2 (EP2).
Per the C8051F320's datasheet, when the endpoints are in split mode, EP1 is 64B and EP2 is 128B when not double-buffered. I have EP1 as 64B when not double-buffered and 32B when double-buffered. EP2 is 64B whether or not double-buffered. The ADC data is 2 bytes per sample, so 31 samples in a 64B report and 15 samples in 32B report are transferred per report.
1) non-double-buffered EP1 (64B per report) streams 22.5kSps ADC data properly
2) double-buffered EP1 (32B per report) streams 11.5kSps ADC data properly
3) non-double-buffered EP2 (64B per report) does not stream 22.5kSps ADC data properly (I didn't check what's the max sample rate)
4) double-buffered EP2 (64B per report) samples 22.5kSps ADC data properly
5) It seems that the time to fill a report with samples must be longer than bInterval. For example, if bInterval were 10 instead of 1, then non-double-buffered EP1 streams 3kSps properly.
Does the above scenario look right? Why does EP1 allow faster transfer than EP2? Why does the report fill time need to be longer than bInterval?
It seems that the time to fill a report with samples must be longer than bInterval.
Correct: HID uses interrupt endpoints, which can transport one report every bInterval ms. That lets you calculate the maximum data rate: 64 bytes × 1000 Hz = 64,000 bytes per second.
With 2 bytes per sample, that gives a 32 kHz maximum sampling rate.
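The same arithmetic in a short sketch (Python), using the report sizes from the question:

```python
# Interrupt-endpoint throughput ceiling with bInterval = 1 (one report per
# 1 ms full-speed frame), using the report sizes from the question.
REPORT_BYTES = 64
REPORTS_PER_SECOND = 1000          # bInterval = 1 ms

max_bytes_per_second = REPORT_BYTES * REPORTS_PER_SECOND
print(max_bytes_per_second)        # 64000 bytes/s

BYTES_PER_SAMPLE = 2
print(max_bytes_per_second // BYTES_PER_SAMPLE)   # 32000 samples/s ceiling

# With only 31 samples actually packed into each 64-byte report (as in the
# question), the practical ceiling drops to 31 * 1000 = 31000 samples/s.
SAMPLES_PER_REPORT = 31
print(SAMPLES_PER_REPORT * REPORTS_PER_SECOND)    # 31000
```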
Why does EP1 allow faster transfer than EP2?
I can see no reason for this behavior besides a programming error.
Note: the HID protocol is a poor choice for streaming data; bulk endpoints allow much higher throughput.
I'm trying to figure out what I need to send (as the client) in the NTP request packet to get an NTP reply back from the server. I'm working with lwIP on a Cortex-M3, the Stellaris LM3S6965.
I understand that I will receive a UDP header and then the NTP fields with the various timestamps used to remove the latency. I probably need to build a UDP header, but what do I need to add as data?
wireshark image:
I hope you guys can help me.
The client request packet has the same layout as the server reply packet; just set the mode bits in the first byte to 3 (client) to be sure.
Send the whole 48-byte packet to the server, and it will reply with the same.
The simplest packet would be 0x1B followed by 47 zeroes. (Version = 3, mode = 3)
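For illustration, here is a minimal client in Python that builds exactly that packet (0x1B, then 47 zero bytes), plus a convenience helper that sends it over UDP and decodes the transmit timestamp. The server name and timeout below are placeholders of mine, not anything from the question:

```python
# Minimal NTP client request: 0x1B (LI = 0, version = 3, mode = 3) followed
# by 47 zero bytes.
import socket
import struct

NTP_UNIX_DELTA = 2208988800        # seconds between the 1900 and 1970 epochs

def build_ntp_request() -> bytes:
    return b"\x1b" + bytes(47)

def query_ntp(server: str = "pool.ntp.org", timeout: float = 2.0) -> int:
    """Send a request and return the server's transmit time as Unix seconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_ntp_request(), (server, 123))
        reply, _ = s.recvfrom(48)
    # Integer part of the transmit timestamp: big-endian word at offset 40.
    seconds_since_1900 = struct.unpack("!I", reply[40:44])[0]
    return seconds_since_1900 - NTP_UNIX_DELTA

packet = build_ntp_request()
print(len(packet), hex(packet[0]))  # 48 0x1b
```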
This is for starters: http://www.eecis.udel.edu/~mills/ntp/html/warp.html
Check this out in case you haven't yet: https://www.rfc-editor.org/rfc/rfc5905
Then look at this: http://wiki.wireshark.org/NTP and check out the sample pcap files that they have uploaded.
I am not sure if this helped, but I hope so.
I have coded an Arduino to connect to an NTP server using this code here,
http://www.instructables.com/id/Arduino-Internet-Time-Client/step2/Code/
Look at the methods called getTimeAndDate and sendNTPpacket.
That is the packet that is sent. It sets up a 48-byte buffer, with values written in binary (0b) and hex (0x). The address is the NTP time server:
memset(packetBuffer, 0, NTP_PACKET_SIZE);  // zero all 48 bytes
packetBuffer[0] = 0b11100011;  // LI = 3 (unsynchronized), Version = 4, Mode = 3 (client)
packetBuffer[1] = 0;           // Stratum
packetBuffer[2] = 6;           // Polling interval
packetBuffer[3] = 0xEC;        // Peer clock precision
// Bytes 4-11 (root delay and root dispersion) stay zero.
packetBuffer[12] = 49;         // Four bytes of reference ID: "1N14"
packetBuffer[13] = 0x4E;
packetBuffer[14] = 49;
packetBuffer[15] = 52;
Udp.beginPacket(address, 123); // NTP uses UDP port 123
Udp.write(packetBuffer, NTP_PACKET_SIZE);
Udp.endPacket();
Here is what happens to the received packet,
Udp.read(packetBuffer, NTP_PACKET_SIZE);  // read the packet into the buffer
unsigned long highWord, lowWord, epoch;
// Bytes 40-43 hold the integer part of the transmit timestamp (big-endian).
highWord = word(packetBuffer[40], packetBuffer[41]);
lowWord = word(packetBuffer[42], packetBuffer[43]);
epoch = highWord << 16 | lowWord;
// Convert from the NTP epoch (1900) to the Unix epoch (1970).
epoch = epoch - 2208988800UL + timeZoneOffset;
flag = 1;
setTime(epoch);
setTime is part of the Arduino Time library and expects Unix time; the NTP timestamp counts seconds since Jan 1, 1900, which is why 2208988800 is subtracted (search for "epoch" here):
https://en.wikipedia.org/wiki/Network_Time_Protocol
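The timestamp math in the Arduino snippet can be sketched on its own (Python; the example bytes below are made up for illustration):

```python
# The same conversion the Arduino snippet performs: bytes 40-43 of the reply
# are the big-endian integer part of the transmit timestamp, counted from
# 1900-01-01; subtracting 2208988800 s yields Unix time (since 1970-01-01).
NTP_UNIX_DELTA = 2208988800

def ntp_bytes_to_unix(b40: int, b41: int, b42: int, b43: int) -> int:
    high_word = (b40 << 8) | b41
    low_word = (b42 << 8) | b43
    return ((high_word << 16) | low_word) - NTP_UNIX_DELTA

# Example bytes (hypothetical): 0xE85C1C00 = 3898350592 s since 1900.
print(ntp_bytes_to_unix(0xE8, 0x5C, 0x1C, 0x00))  # 1689361792 (mid-2023)
```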
But in case you want a C# version, I found one here, compiled the code under the accepted answer, and it works. It will likely make more sense to you, and it also shows the use of the 1/1/1900 epoch.
How to Query an NTP Server using C#?
You can easily see the similarity.