I'm using the code block below to receive samples from my microphone and pass them to an RTP channel on a SIP call. The problem is that the samples are arriving every 200 ms whereas I'm expecting them every 20 ms. The samples are the right size for a 20 ms sampling interval; it's just that the 20 ms samples only arrive every 200 ms. I'm probably doing something silly with my WaveInEvent setup?
var _waveInEvent = new WaveInEvent();
_waveInEvent.BufferMilliseconds = 20;
_waveInEvent.NumberOfBuffers = 1;
_waveInEvent.DeviceNumber = 0;
_waveInEvent.DataAvailable += RTPChannelSampleAvailable;
_waveInEvent.WaveFormat = new WaveFormat(8000, 16, 1);
You normally have at least two buffers, so you can be examining one while another is filled.
20 ms might be a little bit quick for WaveIn to cope with. Check how many bytes are in the DataAvailable callback buffer; with your values (8000 samples/s × 2 bytes per sample × 0.02 s) you should be getting 320 bytes at a time.
Why is the analog read rate seemingly slow (46 ksamples/s) when it should be fast (250 ksamples/s) for my Adafruit Trinket M0? See this simple Arduino code for details; why is PointCount only 46?
//TrinketReadRateTest
//27Nov2022
//Running on Adafruit Trinket M0, SAMD21
//Measures read times of analog reads on Trinket M0
//nothing at all connected to the Trinket
//according to the settings in this wiring.c file lines 160-173, samples per second should be = 250,000:
//C:\Users\<MyUserName>\AppData\Local\Arduino15\packages\adafruit\hardware\samd\1.7.11\cores\arduino\wiring.c
//in this loop, every PointCount is 2 samples, so in 2 millisecs, number of PointCounts should be:
//(.002 secs)(250000 samples/sec)(PointCounts/ 2 samples) = 250
//however, this routine gives a value of 46 WHY?
//if line 170 prescaler is set to DIV16 instead of DIV32, PointCounts gets to 66 (accuracy ???) so this wiring.c is being loaded
#define INPUT1 A3 //ATSAMD21G PA04
#define INPUT2 A4 //ATSAMD21G PA05
unsigned int Input1[1000];
unsigned int Input2[1000];
unsigned int PointCount = 0;
void setup() {
  pinMode(INPUT1, INPUT);
  pinMode(INPUT2, INPUT);
}
void loop() {
  PointCount = 0;
  unsigned long StartTime = micros();
  do {
    Input1[PointCount] = analogRead(INPUT1);
    Input2[PointCount] = analogRead(INPUT2);
    PointCount++;
  } while (micros() - StartTime < 2000); //read 2 millisecs of data points as fast as they come
  Serial.begin(9600); //keep serial off during data reads to avoid the question...
  delay(1000);
  Serial.println(PointCount);
  Serial.end();
  delay(1000);
}
I tried reading analog samples as fast as they would come. I expected to receive samples at a rate of 250000 per second. What actually resulted was a rate of 46000 samples per second.
Added 28Nov: the wiring.c file is not easy to find. If you want it:
download the tar.bz2 file:
https://adafruit.github.io/arduino-board-index/boards/adafruit-samd-1.7.11.tar.bz2
extract the tar file using 7-zip or whatever
go to cores\arduino\wiring.c
Here are the relevant lines of wiring.c:
//set to 1/(1/(48000000/32) * 6) = 250000 SPS
while (GCLK->STATUS.reg & GCLK_STATUS_SYNCBUSY);
GCLK->CLKCTRL.reg = GCLK_CLKCTRL_ID( GCM_ADC ) |  // Generic Clock ADC
                    GCLK_CLKCTRL_GEN_GCLK0 |      // Generic Clock Generator 0 is source
                    GCLK_CLKCTRL_CLKEN;
while (ADC->STATUS.bit.SYNCBUSY == 1);            // Wait for synchronization of registers between the clock domains
ADC->CTRLB.reg = ADC_CTRLB_PRESCALER_DIV32 |      // Divide Clock by 32.
                 ADC_CTRLB_RESSEL_10BIT;          // 10 bits resolution as default
ADC->SAMPCTRL.reg = 5;                            // Sampling Time Length
Adding this additional question 8Dec2022:
wiring.analog.c (in same folder as wiring.c) executes the analog routines. Line 369 of wiring.analog.c says the same thing that the SAMD21 data sheet says: "The first conversion after the reference is changed must not be used."
In lines 371-394 (the analogRead routine for the SAMD21), two reads are always made; the first accounts for the statement above. But why do two reads for every analogRead? The analog reference is not changed with every read, and it is set prior to any reads. So why not just do one conversion after the reference is set? That way, there would only need to be one conversion per analogRead.
I moved the first conversion routine to the very end of analogReference. It speeds things up to PointCount = 79. Is this a problem? It does not seem to reduce accuracy.
Your second question is easier to answer than your first. The reason there are two ADC reads in the Arduino code is that there is a bug in the ADC hardware on the SAMD21. In the past, Arduino provided a calibration method that allowed you to correct for this instead of adding the second read and throwing out the first, garbage read. That was problematic for a number of reasons, and eventually the library was modified. There's an old Hackaday article that provides a little more detail.
As for the ADC reads being slow, the limitation you're running into is a limitation of the SAMD library for Arduino. For reference, I am using the SAMD21 datasheet and the code from Arduino SAMD on GitHub. To start with, the clock speed should be 48 MHz. Using the DIV32 prescaler, the ADC clock frequency is 1.5 MHz. Each ADC conversion from the SAMD21 library takes 63 clock cycles, leaving you with ~23.8 kHz. 23.8 kHz × 2 ms = 47.619 conversions. Add on top of that the overhead caused by switching between the two input pins (I don't know the exact characterization, but likely 1-2 clock pulses) and you'd end up closer to 46 conversions in 2 ms.
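If it helps, here is that arithmetic as a tiny C program. It simply reproduces the figures quoted above; the 63-clocks-per-conversion figure is taken from the library's behavior as described, not measured here:
#include <stdio.h>

int main(void) {
    double adc_clock = 48e6 / 32;         /* DIV32 prescaler -> 1.5 MHz ADC clock */
    double conv_rate = adc_clock / 63;    /* 63 ADC clocks per conversion         */
    printf("conversion rate: %.1f Hz\n", conv_rate);           /* ~23809.5 */
    printf("conversions in 2 ms: %.3f\n", conv_rate * 0.002);  /* ~47.619  */
    return 0;
}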
63 clock pulses per conversion is comically high. Typically, the first read is closer to 20 pulses and subsequent ones are 13.5. There is a post on the Electrical Engineering Stack Exchange where someone tackles this and posts a link to their own library for improving the conversion speeds.
I am trying to record audio using a 12-bit ADC, take the sample buffer, and send it over CAN FD to another device, which takes the audio samples, creates a .wav, and plays it. The problem is that I can see the microphone data arriving over CAN FD at the other device, but I am not able to turn this data into a proper .wav file and hear what I said into the microphone. I only hear beeps.
I'm creating a new .wav every 4 CAN FD messages in order to get some kind of real-time playback and keep the delay low, but I'm not sure this is feasible, or whether I'm thinking about it the right way.
In this thread I take each message received over CAN FD and concatenate it into a buffer in order to write it into a .wav file. I have tried bigger buffers, but it doesn't change the outcome.
How can I take the data from CAN FD and hear it?
Clarification: I know using CAN FD to transmit audio isn't the proper way to do this, but it is for a master's project.
struct canfd_frame frame;
CAN_MSG msg;
int trama_can[72];
int nbytes;

while (status_libreria == 0)
    ;

unsigned char buffer[256];
// FILE * fPtr;
int i = 0, x = 0;
// fPtr = fopen("Test.txt", "w");

while (1) {
    do {
        nbytes = read(s, &frame, sizeof(struct canfd_frame));
    } while (nbytes == 0);

    msg.id.ext = frame.can_id;
    msg.dlc = frame.len;
    if (msg.dlc > 8)
        msg.dlc = 8; // Protection until AC3LIB is adapted to CAN FD
    Numas_memcpy(&(msg.data.bdata), &(frame.data), msg.dlc);
    can_frame_2_ac3lib(&msg, BUS_VERTICAL);

    for (x = 0; x < 64; x++) buffer[i * 64 + x] = frame.data[x];
    printf("%d \r\n", frame.data[x]);
    printf("i:%d \r\n", i);

    // Copy the data to a .wav file and play it simultaneously
    if (i == 3) {
        printf("Datos IN\r\n");
        write_wav("prueba.wav", 256, (short int *)buffer, 16000);
        // fwrite(buffer, 1, sizeof(buffer), fPtr);
        // fclose(fPtr);
        system("aplay prueba.wav -f cd");
        i = 0;
        system("rm prueba.wav");
    }
    i++;
}
(Screenshot: the first 32 bytes of the audio file being recorded.)
In the picture, as you can see, the data is being recorded. Moreover, this data is the same data as in the ADC, but when I play it, I only hear noise.
Simplify the problem first. Make sure you can transmit known data from one end to the other at low rates. I'm sure the suggestions below will sound far too trivial, but until you are absolutely confident you understand it all, I predict you will have many struggles.
Slowly - one frame per second, or even slower.
Learn to send one 0x55 byte from one end to the other and verify at the receiver.
Learn to send a few 0x55 in one frame and verify.
Learn to send 0x12345678 and verify it ends up with the bytes in the right order at the other end.
Learn to send a counter. Check it at the receiver; make sure you do not drop any data (a minimal sender for this step is sketched after this list).
Now do it all again but 10x faster.
Continue until you can send a counter at 10x the rate you need to for the audio without dropping any frames at all, for minutes and then hours.
Stress the rest of the system to make sure it still works under stress.
Only now, can you start to learn about sending audio.
Trust me, you will learn a lot!
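For the counter step, a minimal SocketCAN sender might look like the sketch below. This is only a sketch under assumptions taken from your code (Linux SocketCAN, an FD-capable interface; "can0", the ID 0x123, and the 4-byte counter payload are placeholders), with error handling omitted:
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    /* Raw CAN socket with CAN FD frames enabled. */
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    int enable_fd = 1;
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES, &enable_fd, sizeof(enable_fd));

    /* Bind to the interface ("can0" is a placeholder). */
    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct canfd_frame frame = { 0 };
    frame.can_id = 0x123;  /* placeholder ID */
    frame.len = 4;         /* 4-byte payload: the counter */

    for (uint32_t counter = 0; ; counter++) {
        memcpy(frame.data, &counter, sizeof(counter));
        write(s, &frame, sizeof(frame)); /* receiver checks counter(n) == counter(n-1) + 1 */
        printf("sent %u\n", counter);
        sleep(1);                        /* "slowly - one frame per second" */
    }
}
The receiving side reads frames the same way your code already does and simply verifies that each counter is exactly one more than the previous one; any gap is a dropped frame.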
I have about 6 sensors (GPS, IMU, etc.) that I need to constantly collect data from. For my purposes, I need a reading from each (within a small time frame) to have a complete data packet. Right now I am using interrupts, but this results in more data from certain sensors than others, and, as mentioned, I need to have the data matched up.
Would it be better to move to a polling-based system in which I could poll each sensor in a set order? This way I could have data from each sensor every 'cycle'.
I am, however, worried about the speed of polling because this system needs to operate close to real time.
Polling combined with a "master timer interrupt" could be your friend here. Let's say that your "slowest" sensor can provide data on 20ms intervals, and that the others can be read faster. That's 50 updates per second. If that's close enough to real-time (probably is close for an IMU), perhaps you proceed like this:
Set up a 20ms timer.
When the timer goes off, set a flag inside an interrupt service routine:
volatile uint8_t timerFlag = 0;

ISR(TIMER_ISR_whatever)
{
    timerFlag = 1; // nothing but a semaphore for later...
}
Then, in your main loop act when timerFlag says it's time:
while(1)
{
    if(timerFlag == 1)
    {
        <read first device>
        <read second device>
        <you get the idea ;) >
        timerFlag = 0;
    }
}
In this way you can read each device and keep their readings synched up. This is a typical way to solve this problem in the embedded space. Now, if you need data faster than 20ms, then you shorten the timer, etc. The big question, as it always is in situations like this, is "how fast can you poll" vs. "how fast do you need to poll." Only experimentation and knowing the characteristics and timing of your various devices can tell you that. But what I propose is a general solution when all the timings "fit."
EDIT, A DIFFERENT APPROACH
A more interrupt-based example:
volatile uint8_t device1Read = 0;
volatile uint8_t device2Read = 0;
etc...

ISR(device 1)
{
    <read device>
    device1Read = 1;
}

ISR(device 2)
{
    <read device>
    device2Read = 1;
}
etc...
// main loop
while(1)
{
    if(device1Read == 1 && device2Read == 1 && etc...)
    {
        //<do something with your "packet" of data>
        device1Read = 0;
        device2Read = 0;
        etc...
    }
}
In this example, all your devices can be interrupt-driven but the main-loop processing is still governed, paced, by the cadence of the slowest interrupt. The latest complete reading from each device, regardless of speed or latency, can be used. Is this pattern closer to what you had in mind?
Polling is a pretty good and easy-to-implement idea in case your sensors can provide data practically instantly (in comparison to your desired output frequency). It gets to be a nightmare when you have data sources that need significant (or even variable) time to provide a reading, or that require an asynchronous initiate/collect cycle. You'd have to sort your polling cycles to accommodate the slowest data source.
What might be a solution, in case you know the average data conversion time of each of your sources, is to set up a number of timers (one per data source) that trigger at (poll time − conversion time) and kick off the measurement from those timer ISRs. Then have one last timer that triggers at (poll time + some safety margin) and collects all the conversion results, as sketched below.
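To make the timing concrete, here is a minimal sketch of that schedule computation. The 20 ms poll period, the per-source conversion times, and the 2 ms safety margin are made-up numbers; in a real system each start time would arm a hardware timer whose ISR kicks off the measurement:
#include <stdio.h>

#define POLL_PERIOD_MS   20  /* hypothetical output period      */
#define SAFETY_MARGIN_MS  2  /* slack before collecting results */

struct source { const char *name; int conv_ms; };

int main(void)
{
    /* Hypothetical sources with their average conversion times. */
    struct source sources[] = { { "GPS", 12 }, { "IMU", 1 }, { "baro", 6 } };
    int n = sizeof sources / sizeof sources[0];

    for (int i = 0; i < n; i++) {
        /* Start each measurement conv_ms before the common deadline,
           so every result is ready when the collection timer fires. */
        int start_ms = POLL_PERIOD_MS - sources[i].conv_ms;
        printf("%-4s starts at t = %2d ms\n", sources[i].name, start_ms);
    }
    printf("collect everything at t = %d ms\n", POLL_PERIOD_MS + SAFETY_MARGIN_MS);
    return 0;
}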
On the other hand, your apparent problem of having too many measurements from the fast data sources wouldn't bother me too much, as long as you don't have anything more reasonable to do with that wasted CPU/sensor load.
A last and easier approach, in case you have some cycles to waste, is: Simply sort the data sources from "slowest" to "fastest" and initiate a measurement in that order, then wait for results in the same order and poll.
How do I prevent lag-related bugs in Flash games? For example, suppose the game has a one-minute countdown timer and the player has to catch as many items as possible.
Here are the lag-related issues:
If the items move (don't have static positions), then the more lag a player has, the slower the items move;
The timer counts down more slowly when the player lags (CPU usage at 90-100%).
So, for example, if a player without lag can get 100 points, a player with a slow computer can get 4-6x more, like 400-600.
I think this happens because everything runs client-side, but how do I move it to the server side? Should I store (and update) the countdown time in a database? But how would I update it every millisecond?
And what about the item positions? If a player has heavy lag, the items move very slowly, so they are easy to click. Do you have any ideas?
Moving the functionality to the server side doesn't solve the problem: if there are many players connected to the server, the server will lag and give those players more time to react.
To make your logic independent of lag, do not base it on screen updates, because that assumes a constant time between updates (or frames). Instead, base your logic on the actual time that passes between frames.
Use getTimer to measure how much time passed between the current and the last frame.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/package.html
Of course, your logic should include calculations for what happens in between frames.
In order to mostly fix speed issues on the client, you need to base all your speed-related code on actual time, not frames. Here is a fairly typical example of code that moves an object based on frames:
// speed = pixels per frame
var xSpeed:Number = 5;
var ySpeed:Number = 5;

addEventListener(Event.ENTER_FRAME, update);
function update(e:Event):void {
    player.x += xSpeed;
    player.y += ySpeed;
}
While this code is simple and good enough for a single client, it is very dependent on the frame rate, and as you know the frame rate is very "elastic": the actual frame rate is heavily influenced by the client's CPU speed.
Instead, here is an example where the movement is based on actual elapsed time:
// speed = pixels per second
var xSpeed:Number = 5 * stage.frameRate;
var ySpeed:Number = 5 * stage.frameRate;

var lastTime:int = getTimer();
addEventListener(Event.ENTER_FRAME, update);
function update(e:Event):void {
    var currentTime:int = getTimer();
    var elapsedSeconds:Number = (currentTime - lastTime) / 1000;
    player.x += xSpeed * elapsedSeconds;
    player.y += ySpeed * elapsedSeconds;
    lastTime = currentTime;
}
The crucial part here is that the current time is tracked using getTimer(), and each update moves the player based on the actual elapsed time, not a fixed amount. I set xSpeed and ySpeed to 5 * stage.frameRate to illustrate how it can be equivalent to the first example, but you don't have to do it that way. The end result is that the second example has a consistent speed of movement regardless of the actual frame rate.
I have a machine which uses an NTP client to sync to internet time, so its system clock should be fairly accurate.
I've got an application which I'm developing that logs data in real time, processes it, and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So, for example, if I wanted 20 ms intervals, my outputs ought to be something like this:
13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060
I've seen suggestions for using the Stopwatch class, but that only measures time spans, as opposed to looking for specific timestamps. The code to do this is running in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.
Any suggestions on how to achieve this with reasonable precision (close to or better than 1 ms would be nice) would be very gratefully received.
I don't know how well it plays with C++/CLR, but you probably want to look at multimedia timers. Windows isn't really real-time, but this is as close as it gets.
You can get a pretty accurate timestamp out of timeGetTime() when you reduce the timer period. You'll just need some work to convert its return value to a clock time. This sample C# code shows the approach:
using System;
using System.Runtime.InteropServices;

class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do { // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);
        Console.ReadLine();
    }

    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}
On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.
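I haven't tried it from C++/CLR, but in plain C the timeSetEvent() approach looks roughly like this (20 ms period, 1 ms resolution, TIME_PERIODIC; error handling omitted, and a real callback should do far less work than a printf):
#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

static void CALLBACK TimerProc(UINT uTimerID, UINT uMsg,
                               DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
{
    /* Runs on a worker thread roughly every 20 ms. */
    printf("tick at %lu ms\n", (unsigned long)timeGetTime());
}

int main(void)
{
    timeBeginPeriod(1); /* push the timer resolution down to 1 ms */
    MMRESULT id = timeSetEvent(20, 1, TimerProc, 0,
                               TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(2000);        /* let it tick for a couple of seconds */
    timeKillEvent(id);
    timeEndPeriod(1);
    return 0;
}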
Your best bet is using inline assembly and writing this chunk of code as a device driver.
That way:
You have control over instruction count
Your application will have execution priority
Ultimately you can't guarantee what you want, because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment you want your process to be running. But you can improve matters by using timeBeginPeriod to make it more likely that your process is switched to in a timely manner, and perhaps by being cunning with how you wait between iterations - e.g. sleeping for most but not all of the time and then using a busy-loop for the remainder.
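Here is a sketch of that sleep-then-spin wait, aimed at successive 20 ms deadlines of timeGetTime(); the 2 ms spin margin is a guess you would tune per machine:
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
    timeBeginPeriod(1); /* 1 ms scheduler granularity */
    DWORD next = timeGetTime() + 20;
    for (int i = 0; i < 50; i++) {
        int remaining = (int)(next - timeGetTime());
        if (remaining > 2)
            Sleep(remaining - 2);  /* coarse wait: yields the CPU */
        while ((int)(next - timeGetTime()) > 0)
            ;                      /* fine wait: burns the CPU    */
        printf("fired at %lu ms\n", (unsigned long)timeGetTime());
        next += 20; /* advance along the timeline, not from "now",
                       so a late wakeup doesn't accumulate drift  */
    }
    timeEndPeriod(1);
    return 0;
}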
Try doing this in two threads. In one thread, use something like this to query a high-precision timer in a loop. When you detect a timestamp that aligns to (or is reasonably close to) a 20ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads will make sure that your log output thread doesn't interfere with the timer (this is essentially emulating a hardware timer interrupt, which would be the way I would do it on an embedded platform).
CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1 ms. Note that the millisecond field only needs three digits: its maximum value is 999 (since 1000 ms = 1 second).
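A bare-bones native sketch of that waitable-timer approach (20 ms period; error handling omitted):
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Manual-reset = FALSE gives a synchronization timer that
       auto-resets each time a wait on it completes. */
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);

    LARGE_INTEGER due;
    due.QuadPart = -200000LL; /* first fire in 20 ms: 100 ns units, negative = relative */
    SetWaitableTimer(timer, &due, 20 /* period in ms */, NULL, NULL, FALSE);

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    for (int i = 0; i < 10; i++) {
        WaitForSingleObject(timer, INFINITE);
        SYSTEMTIME st;
        GetLocalTime(&st);
        printf("%02u:%02u:%02u:%03u\n", st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    }

    CloseHandle(timer);
    return 0;
}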
Since, as you said, this doesn't have to be perfect, there are some things that can be done.
As far as I know, there is no timer that syncs with a specific clock time, so you will have to compute your next target time and schedule the timer for that specific time. If your timer only has delta support, then the delta is easily computed, but it adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer is entered into the kernel.
As already pointed out, Windows is not a real-time OS, so you must assume that even if you schedule a timer to go off at ":0010", your code might not actually execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".
20 ms is approximately the length of a time slice on Windows. There is no way to hit 1 ms timings reliably in Windows without some sort of real-time add-on like INtime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.