Acoustic Echo Cancellation (AEC) in embedded software

I am doing a VoIP project on an embedded device. I have built a sample using a 32-bit MCU with a low-grade audio codec. Now I have found an echo issue on my device: I can hear what I said coming back from the speaker. I have done some research and found that most applications use a DSP codec with an acoustic echo cancellation feature. However, is it possible to do the acoustic echo cancellation in software, on my 32-bit MCU?
Can you advise on an algorithm, or even source code :P, for doing acoustic echo cancellation? I know a sophisticated method is not feasible on an MCU, so a simple algorithm is also welcome.
Thank you
[Follow-up]: I have tried some AEC code, but it did not work well on my MCU, probably due to the limits of the MCU's processing power. I found that my device became non-real-time when running this code (but VoIP needs real-time response). In the end I implemented a hardware solution by adding an AEC chip, because I did not want to rewrite the code for another DSP chip.
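For reference, the "sophisticated method" here is usually an adaptive FIR filter, most commonly NLMS (normalized least mean squares): the far-end (speaker) signal is filtered to estimate the echo, the estimate is subtracted from the microphone signal, and the filter weights adapt from the residual. Below is a minimal sketch of the idea; the names, the tap count, and the use of float are illustrative assumptions, not a drop-in implementation.

/* Minimal NLMS echo canceller sketch. */
#include <string.h>

#define TAPS 128            /* ~16 ms echo tail at 8 kHz; MCU-sized */

typedef struct {
    float w[TAPS];          /* adaptive filter weights */
    float x[TAPS];          /* history of far-end (speaker) samples */
} nlms_t;

/* far: sample just played to the speaker; mic: sample captured.
   Returns the echo-reduced sample to transmit. */
static float nlms_process(nlms_t *s, float far, float mic)
{
    /* shift the far-end history and insert the newest sample */
    memmove(&s->x[1], &s->x[0], (TAPS - 1) * sizeof(float));
    s->x[0] = far;

    /* y = estimated echo = dot(w, x); also accumulate input power */
    float y = 0.0f, power = 1e-6f;   /* small epsilon avoids divide by zero */
    int i;
    for (i = 0; i < TAPS; i++) {
        y += s->w[i] * s->x[i];
        power += s->x[i] * s->x[i];
    }

    float e = mic - y;               /* error = mic minus estimated echo */

    /* normalized LMS weight update, step size mu (0 < mu < 2) */
    const float mu = 0.5f;
    float g = mu * e / power;
    for (i = 0; i < TAPS; i++)
        s->w[i] += g * s->x[i];

    return e;                        /* send this instead of the raw mic */
}

Even this small version costs about 2 x TAPS multiply-accumulates per sample (roughly 2 MMAC/s at 8 kHz with 128 taps), which is exactly why a plain 32-bit MCU without DSP extensions tends to fall out of real time, as the follow-up above describes.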

I had a heck of a time with echo cancellation. I wrote a softphone, and the user can switch their audio input and output devices around to suit their fancy. I tried the Speex echo cancellation library and several other open-source libs I found online. None worked well for me. I tried different speaker/mic configurations and the echo was always there in some form or fashion.
I believe it would be very hard to create AEC code that works for all possible speaker configurations / room sizes / background noises, etc. Finally I sat down and wrote my own echo cancellation module for my softphone, with the following algorithm.
It's somewhat crude, but it has worked well and is reliable.
variable1:
Keep a record of the average amplitude when the person you're talking to is speaking. (Don't factor in quiet time.)
variable2:
Keep a record of the average amplitude on the input (mic), but only when there is voice; again, don't factor in quiet time.
As soon as there's audio to play, cut the mic. Assuming the person listening is not talking, turn the mic back on 150-300 ms after the last audible audio frame comes in to be played.
If the audio from the microphone (which you're dropping during playback) is greater than, oh, say (variable2 * 1.5), start sending the audio input frames for a specified duration, resetting that duration every time the input amplitude reaches (variable2 * 1.5).
That way the person talking will know they are being interrupted and stop to see what the other person is saying. If the person talking doesn't have too noisy a background, they will probably hear most if not all of the interruption.
Like I said, not the most graceful, but it doesn't use a lot of resources (CPU, memory) and it actually works pretty darn well. I am very pleased with how mine sounds.
To implement it, I just made a few functions.
On a received audio frame, I call a function I called:
void audioin( AEC *ec, short *frame ) {
    unsigned int tas = 0; /* Total sum of all samples in the frame (absolute value) */
    int i = 0;
    for (; i < 160; i++)
        tas += ABS(frame[i]);
    tas /= 160;           /* average over the 160-sample (320-byte) frame */
    if (tas > 300) {      /* assume this is audible */
        lockecho(ec);
        ec->lastaudibleframe = GetTickCount64();
        unlockecho(ec);
    }
    return;
}
and before sending a frame, I do:
#define ECHO_THRESHOLD 300 /* ms to keep suppression alive after the last audible frame */
#define ONE_MINUTE 3000    /* 3000 20 ms frames */
#define AVG_PERIOD 250     /* 250 20 ms frames */
#define ABS(x) ((x) > 0 ? (x) : -(x))
char removeecho( AEC *ec, short *aecinput ) {
    int tas = 0;                 /* Average absolute amplitude of this frame */
    int i = 0;
    unsigned long long *tot = 0;
    unsigned int *ctr = 0;
    unsigned short *avg = 0;
    char suppressframe = 0;
    lockecho(ec);
    if (ec->lastaudibleframe + ECHO_THRESHOLD > GetTickCount64()) {
        /* Still within the threshold for echo (speaker state is ON) */
        tot = &ec->t_aiws;
        ctr = &ec->c_aiws;
        avg = &ec->aiws;
    } else {
        /* Outside the threshold for echo (speaker state is OFF) */
        tot = &ec->t_aiwos;
        ctr = &ec->c_aiwos;
        avg = &ec->aiwos;
    }
    for (; i < 160; i++) {
        tas += ABS(aecinput[i]);
    }
    tas /= 160;
    if (tas > 200) {
        (*tot) += tas;
        (*avg) = (unsigned short)((*tot) / ((*ctr) ? (*ctr) : 1));
        (*ctr)++;
        if ((*ctr) > AVG_PERIOD) {
            (*tot) = (*avg);
            (*ctr) = 0;
        }
    }
    if (avg == &ec->aiws) {
        tas -= ec->aiwos;
        if (tas < 0) {
            tas = 0;
        }
        if (((unsigned short)tas > (ec->aiws * 1.5)) &&
            ((unsigned short)tas >= ec->aiwos) && (ec->aiwos != 0)) {
            suppressframe = 0;
        } else {
            suppressframe = 1;
        }
    }
    if (suppressframe) { /* Silence the frame */
        memset(aecinput, 0, 320);
    }
    unlockecho(ec);
    return suppressframe;
}
Which will silence the frame if it needs to. I keep all my variables, like the timers, and amplitude averages in the AEC struct, which I return from a call to
AEC *initecho( void ) {
    AEC *ec = 0;
    ec = (AEC *)malloc(sizeof(AEC));
    memset(ec, 0, sizeof(AEC));
    ec->aiws = 200; /* Just a default guess at the average amplitude */
    return ec;
}
typedef struct aec {
    unsigned long long lastaudibleframe; /* timestamp of the last audible frame */
    unsigned short aiws;                 /* Average mic input while the speaker is playing */
    unsigned short aiwos;                /* Average mic input while the speaker ISN'T playing */
    unsigned long long t_aiws, t_aiwos;  /* Internal running totals (sum of PCM) */
    unsigned int c_aiws, c_aiwos;        /* Internal frame counters for averaging */
    unsigned long lockthreadid;          /* Thread ID holding the lock */
    int stlc;                            /* Same-thread lock count */
} AEC;
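The lockecho()/unlockecho() calls above aren't shown in the original; judging from the lockthreadid and stlc fields, they implement a small recursive lock. Here is a hypothetical sketch of what they might look like, assuming Win32 (the code already uses GetTickCount64()); treat it as a guess, not the author's actual code.

#include <windows.h>

void lockecho( AEC *ec ) {
    DWORD self = GetCurrentThreadId();
    if (ec->lockthreadid == self) { /* already held by this thread */
        ec->stlc++;                 /* just bump the recursion count */
        return;
    }
    /* spin until the lock can be claimed (0 means unowned) */
    while (InterlockedCompareExchange((volatile LONG *)&ec->lockthreadid,
                                      (LONG)self, 0) != 0)
        Sleep(0);                   /* yield while waiting */
}

void unlockecho( AEC *ec ) {
    if (ec->lockthreadid != GetCurrentThreadId())
        return;                     /* not ours to release */
    if (ec->stlc > 0)
        ec->stlc--;                 /* unwind one recursion level */
    else
        InterlockedExchange((volatile LONG *)&ec->lockthreadid, 0);
}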
You can adapt it as you need and play with the idea, but like I said, it actually sounds pretty dang good. The only problem I have is if there is a lot of background noise. For me, if users pick up their USB handset or use a headset, they can turn echo cancellation off and not worry about it; but through PC speakers with a mic, I'm pretty happy with it.
I hope it helps, or gives you something to build on...

If you are doing a commercial project, this should be easy: you can integrate commercial echo cancellation software into your VoIP application.

Related

Can't get the analogue watchdog to trigger an interrupt on the DFSDM peripheral of an STM32L475

I have an AMC1306 current-shunt modulator feeding 1-bit PDM data at 10 MHz into an STM32L475. Filter0 takes the bit stream from Channel0 and applies a sinc3 filter with Fosr=125 and Iosr=4. This provides 24-bit data at 20 kHz and is working fine. The DMA transfers the data into a 1-word circular buffer in main memory to keep the data fresh.
I want to call an interrupt function if the 24-bit value leaves a certain window. This would occur in an over-voltage situation and needs to disengage the MOSFET driver. This functionality seems to be offered by the analogue watchdog within the peripheral.
I am using STM32CubeIDE and the graphical interface within the IDE to configure the peripherals. Filter0 global interrupts are enabled. I have added this code:
/* USER CODE BEGIN 2 */
HAL_DFSDM_FilterRegularStart_DMA(&hdfsdm1_filter0, Vbus_DMA, 1);

// Set up the watchdog
DFSDM_Filter_AwdParamTypeDef awdParamFilter0;
awdParamFilter0.DataSource      = DFSDM_FILTER_AWD_FILTER_DATA;
awdParamFilter0.Channel         = DFSDM_CHANNEL_0;
awdParamFilter0.HighBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.HighThreshold   = 250;
awdParamFilter0.LowBreakSignal  = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.LowThreshold    = -250;
HAL_DFSDM_FilterAwdStart_IT(&hdfsdm1_filter0, &awdParamFilter0);
/* USER CODE END 2 */
I have also used the HAL callback function
/* USER CODE BEGIN 4 */
void HAL_DFSDM_FilterAwdCallback(DFSDM_Filter_HandleTypeDef *hdfsdm_filter, uint32_t Channel, uint32_t Threshold)
{
    HAL_GPIO_WritePin(GPIOA, LED_Pin, GPIO_PIN_SET);
}
/* USER CODE END 4 */
But the callback function never runs! I have experimented with the thresholds (I even made them zero).
In the debugger I can see that AWDIE = 0x1 (so the AWD interrupt is enabled) and AWDF = 0x1 (so the threshold has been crossed and the peripheral should be requesting an interrupt...). The code doesn't even hit a breakpoint in the stm32l4xx_it.c Filter0 interrupt handler, so it seems no DFSDM1_FLT0 interrupts are happening.
I'd be enormously appreciative of any help, any example code, any resources to read. Thanks in advance.
Notes: I know the DMA conversion-complete callbacks work, and I have played around with various thresholds and confirmed that AWDF gets set when the threshold is crossed.
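One thing worth double-checking, since the CubeMX-generated code isn't shown here (so this is a guess, not a diagnosis): the filter interrupt also has to be enabled at the NVIC level, and the vector has to forward into the HAL, otherwise AWDF gets set but the callback never fires. In HAL terms that means something like:

/* Normally generated by CubeMX in HAL_DFSDM_FilterMspInit() when the
   filter's global interrupt is ticked; verify it really is there. */
HAL_NVIC_SetPriority(DFSDM1_FLT0_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(DFSDM1_FLT0_IRQn);

/* ...and in stm32l4xx_it.c the vector must call into the HAL so that
   HAL_DFSDM_FilterAwdCallback() can be dispatched: */
void DFSDM1_FLT0_IRQHandler(void)
{
    HAL_DFSDM_IRQHandler(&hdfsdm1_filter0);
}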

Can't get my DAC (PT8211) to work correctly using a PIC32MX µC and SPI

I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534F06H.
So far, my code just samples a signal with my ADC every time a timer interrupt is triggered, then sends the same signal out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltages that my DAC outputs don't make much sense to me: they stay at 2.5 V (it's powered from 0 to 5 V).
I've tried feeding the DAC various values ranging from 0 to 65535 (it's a 16-bit DAC, so I guess that should be the expected input range, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration and using different SPIs (3 and 4) and DACs (I have one soldered to my PCB on SPI3, and one on a breadboard wired to SPI4 in case the one soldered on my board was defective).
I made sure that the chip select line works as expected.
I couldn't check the data and clock being transmitted since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
    INTDisableInterrupts();
    Chip_Select_DAC_Clr();
    while(!SPI4STATbits.SPITBE);    // wait for TX buffer to empty
    SPI4BUF = valueDAC;             // write 16-bit word to TX buffer
    while(!SPI4STATbits.SPIRBF);    // wait for RX buffer to fill
    Empty_SPI3_buffer = SPI4BUF;    // read RX buffer
    Chip_Select_DAC_Set();
    INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer1. This works fine.
ADC_Input() stores the data in the global variable adc_value (12 bits, signed).
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
    IFS0bits.T1IF = 0;
    ADC_Input();
    // rescale the signed 12-bit audio values to unsigned 16-bit values
    DAC_output_value = adc_value + 2048;      // unsign the signed 12-bit values (0 - 4095, centre 2048)
    DAC_output_value = DAC_output_value * 16; // scale from 12 to 16 bits: 16 = 65536/4096
    DAC_Output(DAC_output_value);
}
main function with SPI, IO, Timer configuration
void main() {
    SPI4CON = SPI4_CONF;
    SPI4BRG = SPI4_BAUD;
    TRISE = 0b00100000;
    TRISD = 0b000000110100;
    TRISG = 0b0010000000;
    LATD = 0x0;
    SYSTEMConfigPerformance(80000000L);
    INTCONSET = _INTCON_MVEC_MASK; /* Set the interrupt controller to multi-vector mode */

    T1CONbits.TON = 0;             /* turn off Timer 1 */
    T1CONbits.TCKPS = 0b11;        /* prescale = 1:256 (T1CLKIN = 80 MHz (?)) */
    PR1 = 1816;                    /* T1 period ~ ? */
    TMR1 = 0;                      /* clear Timer 1 counter */

    IPC1bits.T1IP = 7;             /* Set Timer 1 interrupt priority to 7 */
    IFS0bits.T1IF = 0;             /* Reset the Timer 1 interrupt flag */
    IEC0bits.T1IE = 1;             /* Enable interrupts from Timer 1 */
    T1CONbits.TON = 1;             /* Enable Timer 1 peripheral */
    INTEnableInterrupts();

    while (1){
    }
}
I would expect the voltage at the output of my DAC to mimic what I put at the input of my ADC; instead, the DAC output is always constant, no matter what I input to the ADC.
What am I missing?
Also, when turning the SPIs on, should I still manually manage the IO configuration of the SDI/SDO/SCK pins using TRIS, or is that automatically taken care of?
First of all, I agree that the documentation I first found for the PT8211 is rather poor. I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select; it is word select (left/right channel). In I2S, setting WS to 0 normally means the left channel; however, in the extended datasheet I found, WS 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit-bash it. There is a lot of info on I2S though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that in I2S the MSB of a word is clocked out one bit period after the WS transition, so the bit sent right at a WS transition is still the LSB of the previous channel's word. Note that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 and 5 V. (The maximum is about 2.5 V, which is what you've been seeing.)
[Figure: I2S bus timing diagram]
Basically, I'd try the proper protocol first, with a bit-bashing algorithm that continuously flip-flops between the left and right channels.
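To make that concrete, here is a rough sketch of the bit-bashed output, assuming three helper macros mapped to whatever LATx bits BCK, DIN and WS are wired to (the pin assignments below are made up); the exact WS polarity and bit alignment still need to be checked against the PT8211 datasheet.

/* Sketch only: pin choices are assumptions, adjust to your wiring. */
#define BCK_HI()   (LATDSET = 1 << 1)
#define BCK_LO()   (LATDCLR = 1 << 1)
#define DIN_SET(b) ((b) ? (LATDSET = 1 << 2) : (LATDCLR = 1 << 2))
#define WS_SET(w)  ((w) ? (LATDSET = 1 << 3) : (LATDCLR = 1 << 3))

/* Clock out one 16-bit word, MSB first; data changes while BCK is low
   and is sampled by the DAC on the rising edge. */
static void send_word(unsigned short v)
{
    int i;
    for (i = 15; i >= 0; i--) {
        BCK_LO();
        DIN_SET((v >> i) & 1);  /* present the next bit */
        BCK_HI();               /* DAC samples DIN here */
    }
}

static void pt8211_write(unsigned short left, unsigned short right)
{
    WS_SET(0);                  /* WS low: right channel per the extended datasheet */
    send_word(right);
    WS_SET(1);                  /* WS high: left channel */
    send_word(left);
}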
First of all, thanks a lot for your comment. It helps a lot to know that I'm not looking at an SPI transmission, and that explains why it's not working.
A few reflections about it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely try to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from that exact same PIC to the same DAC using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code that I'm talking about:
valueDAC = valueDAC + 32768;
valueDAC.F15 = ~valueDAC.F15;
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the two biggest differences between SPI and I2S are that SPI sends "bursts" of data whereas I2S sends data continuously, and that the bit sent after the word-select change is the LSB of the last word.
So I was thinking: my SPI is triggered by a timer, which is always the same, so even if the data is not sent continuously, it will just make the sound wave a bit more "aliased"; and if it's triggered regularly enough (say at 44 kHz), it should not be SO different from sending I2S data at the same frequency, right?
If that is so, and I understand correctly, the "only" problem left is to manage the LSB-next-word-MSB placement problem, but I thought the LSB is virtually negligible in 16-bit values, so if I could just bit-shift my value to the right and then fix the LSB to 0 or 1, the error would be small and the format would be right.
Does it sound like I have a valid "MacGyver-I2S-from-my-SPI", or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure it's configured correctly.
Here is the code so far.
SPI config:
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function:
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
    INTDisableInterrupts();
    valueDAC = valueDAC >> 1;    // shift right by 1: the MSB transmitted will be seen by the
                                 // DAC as the LSB of the previous value, after a word-select change
    // Left channel
    Chip_Select_DAC_Set();       // Select left channel
    SPI4BUF = valueDAC;
    while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
    SPI4BUF = valueDAC;          // write 16-bit word to TX buffer
    while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
    Empty_SPI3_buffer = SPI4BUF; // read RX buffer (don't know why we need to do this here, but we do)
    //SPI3_Write(valueDAC);      // MikroC option

    // Right channel
    Chip_Select_DAC_Clr();
    SPI4BUF = valueDAC;
    while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
    SPI4BUF = valueDAC;          // write 16-bit word to TX buffer
    while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
    Empty_SPI3_buffer = SPI4BUF;
    INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in that mode, but I would still have the LSB/MSB shifting problem to solve.
I'm a bit lost here, so any help would be cool.

Getting GPS to Work on SIM5320A 3G/GPS Module

I have an Adafruit FONA 3G/GPS module (American version). I have gotten the cellular functionality to work but I am struggling with GPS. I have tried both passive and active antennas.
This is a list of AT commands that are available to the SIM5320A module (pdf). The Adafruit example code uses AT commands that don't run on this SIM module. I send the following sequence:
AT+CGPS=1,1
AT+CGPSINFO
and I am receiving:
+CGPSINFO: ,,,,,,,,
AmpI/AmpQ: 4xx/4xx
What exactly is going wrong? I am connected but am not getting any data out. Also, I understand the mathematical significance of AmpI/AmpQ, but what does that mean in terms of connection to the GPS network?
First, if you're using the FONATest example from Adafruit, you need to look around line 48. Comment out the line that declares a variable of type Adafruit_FONA and uncomment the line that declares a variable of type Adafruit_FONA_3G.
Also, if you haven't run the FONA_3G setbaud program to set the baud rate to something other than 115,200, you need to run it. (This isn't made very clear by Adafruit, and I thought my board was bad before I found that information.)
Second, unless you bridge the pads labeled "bias" on the board, you will have to use a passive GPS antenna. The SIMCOM module's GPS is pretty sensitive, though. I had a uFL-to-SMA adapter plugged into the board and stuck a 5-inch jumper wire into the SMA's center pin; the GPS acquired a position within a minute!
I sent the command AT+CGPS=1 and then used AT+CGPS? to query the GPS status. It reported +CGPS: 1,1 back to me.
The first few attempts at sending AT+CGPSINFO gave me the same info you got, i.e. no position.
However, after a few minutes with a wire (or passive antenna), AT+CGPSINFO started returning valid time and position information.
Hope this helps.
UPDATE:
You've asked for code, and I'm happy to oblige. This code is designed to just be a terminal-like relay for serial data between the Arduino (using Tools -> Serial Monitor) and the Fona. So the commands I described above, I just entered by hand. Still, some folks may find it useful.
#include <Adafruit_FONA.h>
#include <SoftwareSerial.h>

#define RX_FROM_FONA 2
#define TX_TO_FONA 3
#define FONA_RST 4
#define FONA_PWR_KEY 5
#define FONA_PWR_STATUS 7

Adafruit_FONA fona = Adafruit_FONA(FONA_RST);
SoftwareSerial fonaSerial = SoftwareSerial(TX_TO_FONA, RX_FROM_FONA);

void setup() {
    pinMode(FONA_RST, OUTPUT);
    pinMode(FONA_PWR_KEY, OUTPUT);
    pinMode(FONA_PWR_STATUS, INPUT_PULLUP);
    pinMode(LED_BUILTIN, OUTPUT);
    digitalWrite(FONA_RST, HIGH);
    digitalWrite(FONA_PWR_KEY, HIGH);
    resetFona();
    Serial.begin(115200);
    fonaSerial.begin(4800);
}

void resetFona() {
    digitalWrite(LED_BUILTIN, HIGH);
    digitalWrite(FONA_RST, LOW);
    delay(200);
    digitalWrite(LED_BUILTIN, LOW);
    digitalWrite(FONA_RST, HIGH);
    delay(1000);
    digitalWrite(LED_BUILTIN, HIGH);
    digitalWrite(FONA_PWR_KEY, LOW);
    delay(5000);
    digitalWrite(LED_BUILTIN, LOW);
    digitalWrite(FONA_PWR_KEY, HIGH);
    delay(100);
}

char buffer[255];
int crIndex = 0;
int bufferIndex = 0;
int foundCR = 0;

void loop() {
    if (Serial.available()) {
        buffer[bufferIndex++] = Serial.read();
        if (buffer[bufferIndex - 1] == 13) { // carriage return ends a command
            crIndex = bufferIndex - 1;
            foundCR = 1;
        }
    }
    if (bufferIndex >= 255) { bufferIndex = 0; } // wrap before overflowing the buffer
    while (fonaSerial.available()) { Serial.write(fonaSerial.read()); }
    if (foundCR > 0) {
        for (int i = 0; i <= crIndex; i++) {
            fonaSerial.write(buffer[i]);
        }
        foundCR = 0;
        bufferIndex = 0;
        crIndex = 0;
    }
}
I don't think this is a software problem; I think you have a hardware problem.
I had this same problem. The sensitivity of the SIM5320A is pretty good, but Adafruit hasn't really followed SIMCOM's design guide in implementing the carrier board, and this hurts its sensitivity.
For example, you can read the application notes for the board here. Check out the PDFs on "SMT Module Design" and "Hardware Design."
The first thing you'll notice is that SIMCOM recommends an LC tuning network to match your particular antenna to 50-ohm impedance.
Here is the SIMCOM recommended diagram:
And here is the Adafruit design, with no tuning network anywhere to be found:
Now, if you're going to be using this in your own PCB later, you are of course free to make improvements here yourself.
To test your design, you're going to want to know more about what the module is saying; otherwise you're flying blind. You can read the GPS NMEA sequences and decipher them, but there's a lot to it. Another way is to use the GPS demo tool that SIMCOM support offers. To download this, go to:
ftp://simcom.exavault.com/SIM5320A/GPS tool
username=myd
psd=simcommyd
It will show you how many satellites you're connected to, what signal strength they have, etc. You are going to need at least 3 satellites for a lock (technically you should have 4, but some modules will produce a rough solution with just 3; I'm not sure which the SIM5320A does). You want each satellite to have a strength in the 30+ range.
Here is my "before" screenshot: you can see I only had one satellite, and hence no lock. This was after giving it half an hour from cold start to download the almanac and ephemeris tables (that takes about 15 minutes on average from a cold start): satellite 21, with a signal strength of 21, coincidentally.
What I did to fix my signal gain was add an LNA (low-noise amplifier) in series with my passive antenna. I used the BGA524N6BOARDTOBO1 breakout board, which you can get at DigiKey/Mouser/others; it adds 19 dB of gain with an incredibly low 0.55 dB noise figure.
After hooking up my antenna and waiting some time, I got a lock!
Hope this helps you and others use this otherwise great module.

Glowing an LED using GPIO on a CraneBoard

I'd like to use GPIO to turn on an LED on a CraneBoard (ARM processor). I'm very new to embedded programming, but I'm quite good at C. I referred to some websites and learned about the GPIO-related registers. I wrote the code below, but I'm not quite sure how to integrate it into the U-Boot code for the CraneBoard. I don't know where to start. Kindly guide me.
#define LED1 (1 << 6)

int getPinState(int pinNumber);

int main(void)
{
    GPIO0_IODIR |= LED1;
    GPIO0_IOSET |= LED1;
    while (1)
    {
        GPIO0_IOCLR |= LED1;
    }
}

int getPinState(int pinNumber)
{
    int pinBlockState = GPIO0_IOPIN;
    int pinState = (pinBlockState & (1 << pinNumber)) ? 1 : 0;
    return pinState;
}
First of all, learn the common bit (and, in your case, pin) manipulation expressions that you will use A LOT in embedded programming:
/* Set bit to 1 */
GPIO0_IODIR |= LED1; //output
/* Clear bit (set to 0) */
GPIO0_IOSET &= ~LED1; //low
/* Toggle bit */
GPIO0_IOSET ^= LED1;
Your while() loop actually does nothing after the first iteration, because repeating the same bitwise OR does not change the bit's state (see the truth table for OR). You should also add a delay, because if the pin toggles too fast, the LED may look off all the time. A simple solution would look like:
while(1)
{
    GPIO0_IOSET ^= LED1;
    sleep(1); // or replace with any other available delay routine
}
I do not have the U-Boot source files for the CraneBoard, so I cannot tell you the exact place to put your code, but there are basically several options:
1) Add it in main(), where U-Boot starts, thus hanging it (but you still have your LED blinking!).
2) Implement a separate command to switch the LED on/off (see command.c and the cmd_-prefixed files for examples).
3) Integrate it into the serial loop, so the pin can be switched while waiting for user input.
4) Build it as an application on top of U-Boot.
Get used to a lot of reading and documentation; the TRM is your friend here (sometimes the only one). There are also some great guides for embedded starters; just google around. A few to mention:
http://www.microbuilder.eu/Tutorials/LPC2148/GPIO.aspx (basics with examples)
http://beagleboard.org/ (great resource for the BeagleBoard, but much of it applies to the CraneBoard as they share the same SoC; includes a great community)
http://free-electrons.com/ (more towards embedded Linux and other advanced topics, but some basics can also be found)
http://processors.wiki.ti.com/index.php/CraneBoard (official CraneBoard wiki; you probably know this one, but just in case)
P.S. Good luck and don't give up!
If you want to do it in U-Boot (and not in Linux), then you have to write an application for U-Boot.
Section 5.12 of the U-Boot manual explains how to do it.
The U-Boot source provides some examples that you can use.
I'd like to add my own answer. The code I posted previously was generic. On the CraneBoard there are specific functions that do these operations, so I rewrote it accordingly. I added the file cmd_toggle.c to the "common" directory of the U-Boot tree and added it to the Makefile. The following code makes the LED blink.
int glow_led(cmd_tbl_t *cmdtp, int flag, int argc, char *argv[])
{
    int ret, i = 0, num = 0;
    int lpin;
    lpin = (int)simple_strtoul(argv[1], NULL, 10);
    ret = set_mmc_mux();
    if (ret < 0)
        printf("\n\nLED failed to glow!\n\n");
    else {
        if (!omap_request_gpio(lpin))
        {
            omap_set_gpio_direction(lpin, 0);
            for (i = 1; i < 21; i++)
            {
                if ((i % 2) == 0)
                {
                    num = num - 1;
                    omap_set_gpio_dataout(lpin, num);
                }
                else
                {
                    num = num + 1;
                    omap_set_gpio_dataout(lpin, num);
                }
                udelay(3000000);
            }
        }
    }
    return 0;
}

U_BOOT_CMD(toggle, 2, 1, glow_led, "Glow an LED", "pin_number");
I could have made this a little simpler by just using a while loop to repeatedly set it to 1 and 0.
This can be executed from the U-Boot console as toggle 142, as I have connected the LED to pin 142.
P.S. Thanks for all your guidance. A special thanks to KBart.

Monotonic clock on OSX

CLOCK_MONOTONIC does not seem to be available, so clock_gettime is out.
I've read in some places that mach_absolute_time() might be the right way to go, but after reading that it is a "CPU dependent value", it instantly made me wonder whether it uses rdtsc underneath. Thus, the value could drift over time even if it is monotonic, and thread affinity issues could yield meaningfully different results from calling the function (making it non-monotonic across cores).
Of course, that is just speculation. Does anyone know how mach_absolute_time actually works? I'm looking for a replacement for clock_gettime(CLOCK_MONOTONIC, ...), or something like it, for OSX. No matter what the clock source is, I expect at least millisecond precision and millisecond accuracy.
I'd just like to understand what clocks are available, which clocks are monotonic, and whether certain clocks drift, have thread affinity issues, aren't supported on all Mac hardware, or take a "super high" number of CPU cycles to execute.
Here are the links I was able to find about this topic (some are already dead links and not findable on archive.org):
https://developer.apple.com/library/mac/#qa/qa1398/_index.html
http://www.wand.net.nz/~smr26/wordpress/2009/01/19/monotonic-time-in-mac-os-x/
http://www.meandmark.com/timing.pdf
Thanks!
Brett
The Mach kernel provides access to system clocks, out of which at least one (SYSTEM_CLOCK) is advertised by the documentation as being monotonically incrementing.
#include <mach/clock.h>
#include <mach/mach.h>
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
mach_timespec_t has nanosecond precision. I'm not sure about the accuracy, though.
Mac OS X supports three clocks:
SYSTEM_CLOCK returns the time since boot time;
CALENDAR_CLOCK returns the UTC time since 1970-01-01;
REALTIME_CLOCK is deprecated and is the same as SYSTEM_CLOCK in its current implementation.
The documentation for clock_get_time says the clocks are monotonically incrementing unless someone calls clock_set_time. Calls to clock_set_time are discouraged, as they could break the monotonic property of the clocks, and in fact the current implementation returns KERN_FAILURE without doing anything.
After looking up a few different answers for this I ended up defining a header which emulates clock_gettime on mach:
#include <sys/types.h>
#include <sys/_types/_timespec.h>
#include <mach/mach.h>
#include <mach/clock.h>

#ifndef mach_time_h
#define mach_time_h

/* The opengroup spec isn't clear on whether the mapping from REALTIME to
   CALENDAR is appropriate or not.
   http://pubs.opengroup.org/onlinepubs/009695299/basedefs/time.h.html */

// XXX only supports a single timer
#define TIMER_ABSTIME -1
#define CLOCK_REALTIME CALENDAR_CLOCK
#define CLOCK_MONOTONIC SYSTEM_CLOCK

typedef int clockid_t;

/* the mach kernel uses struct mach_timespec, so struct timespec
   is loaded from <sys/_types/_timespec.h> for compatibility */
// struct timespec { time_t tv_sec; long tv_nsec; };

int clock_gettime(clockid_t clk_id, struct timespec *tp);

#endif
and in mach_gettime.c
#include "mach_gettime.h"
#include <mach/mach_time.h>
#define MT_NANO (+1.0E-9)
#define MT_GIGA UINT64_C(1000000000)
// TODO create a list of timers,
static double mt_timebase = 0.0;
static uint64_t mt_timestart = 0;
// TODO be more careful in a multithreaded environement
int clock_gettime(clockid_t clk_id, struct timespec *tp)
{
kern_return_t retval = KERN_SUCCESS;
if( clk_id == TIMER_ABSTIME)
{
if (!mt_timestart) { // only one timer, initilized on the first call to the TIMER
mach_timebase_info_data_t tb = { 0 };
mach_timebase_info(&tb);
mt_timebase = tb.numer;
mt_timebase /= tb.denom;
mt_timestart = mach_absolute_time();
}
double diff = (mach_absolute_time() - mt_timestart) * mt_timebase;
tp->tv_sec = diff * MT_NANO;
tp->tv_nsec = diff - (tp->tv_sec * MT_GIGA);
}
else // other clk_ids are mapped to the coresponding mach clock_service
{
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), clk_id, &cclock);
retval = clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
tp->tv_sec = mts.tv_sec;
tp->tv_nsec = mts.tv_nsec;
}
return retval;
}
Just use Mach Time.
It is a public API; it works on macOS, iOS, and tvOS, and it works from within the sandbox.
Mach Time returns an abstract time unit that I usually call "clock ticks". The length of a clock tick is system specific and depends on the CPU. On current Intel systems a clock tick is in fact exactly one nanosecond, but you cannot rely on that (it may be different for ARM, and it certainly was different for PowerPC CPUs). The system can also tell you the conversion factor to convert clock ticks to nanoseconds and nanoseconds to clock ticks (this factor is static; it won't ever change at runtime). When your system boots, the clock starts at 0 and then monotonically increases with every clock tick thereafter, so you can also use Mach Time to get the uptime of your system (and, of course, uptime is monotonic!).
Here's some code:
#include <stdio.h>
#include <inttypes.h>
#include <mach/mach_time.h>

int main ( ) {
    uint64_t clockTicksSinceSystemBoot = mach_absolute_time();
    printf("Clock ticks since system boot: %"PRIu64"\n",
        clockTicksSinceSystemBoot
    );

    static mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    // Cast to double is required to make this a floating point division,
    // otherwise it would be an integer division and only the result would
    // be converted to floating point!
    double clockTicksToNanoseconds = (double)timebase.numer / timebase.denom;

    uint64_t systemUptimeNanoseconds = (uint64_t)(
        clockTicksToNanoseconds * clockTicksSinceSystemBoot
    );
    uint64_t systemUptimeSeconds = systemUptimeNanoseconds / (1000 * 1000 * 1000);
    printf("System uptime: %"PRIu64" seconds\n", systemUptimeSeconds);
}
You can also put a thread to sleep until a certain Mach Time has been reached. Here's some code for that:
// Sleep for 750 ns
uint64_t machTimeNow = mach_absolute_time();
uint64_t clockTicksToSleep = (uint64_t)(750 / clockTicksToNanoseconds);
uint64_t machTimeIn750ns = machTimeNow + clockTicksToSleep;
mach_wait_until(machTimeIn750ns);
As Mach Time has no relation to any wallclock time, you can play around with your system date and time setting as you like, that won't have any effect on Mach Time.
There's one special consideration, though, that may make Mach Time unsuitable for certain use cases: The CPU clock is not running while your system is asleep! So if you make a thread wait for 5 minutes and after 1 minute the system goes to sleep and stays asleep for 30 minutes, the thread is still waiting another 4 minutes after the system has woken up as the 30 minutes sleep time don't count! The CPU clock was resting as well during that time. Yet in other cases this is exactly what you want to happen.
Mach Time is also a very precise way to measure time spent. Here's some code showing that task:
// Measure time
uint64_t machTimeBegin = mach_absolute_time();
sleep(1);
uint64_t machTimeEnd = mach_absolute_time();
uint64_t machTimePassed = machTimeEnd - machTimeBegin;
uint64_t timePassedNS = (uint64_t)(
    machTimePassed * clockTicksToNanoseconds
);
printf("Thread slept for: %"PRIu64" ns\n", timePassedNS);
You will see that the thread doesn't sleep for exactly one second, that's because it takes some time to put a thread to sleep, to wake it back up again and even when awake, it won't get CPU time immediately if all cores are already busy running a thread at that moment.
Update (2018-09-26)
Since macOS 10.12 (Sierra), there also exists mach_continuous_time. The only difference between mach_continuous_time and mach_absolute_time is that continuous time also advances while the system is asleep. So if that has been a problem so far and a reason for not using Mach Time, 10.12 and up offer a solution. The usage is exactly the same as described above.
Also, starting with macOS 10.9 (Mavericks), there is mach_approximate_time, and in 10.12 there is also mach_continuous_approximate_time. These two are identical to mach_absolute_time and mach_continuous_time, with the only difference that they are faster yet less accurate. The standard functions require a call into the kernel, as the kernel takes care of Mach Time; such a call is somewhat expensive, especially on systems that already have a Meltdown fix. The approximate versions don't always have to call into the kernel: they use a clock in user space that is only synchronized with the kernel clock from time to time, to prevent it from running too far out of sync, so a small deviation is always possible; hence it is only the "approximate" Mach Time.
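For completeness, here is a small sketch of these newer calls; the timebase conversion works exactly as for mach_absolute_time above.

#include <stdio.h>
#include <inttypes.h>
#include <mach/mach_time.h>

int main(void) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    // advances even while the machine sleeps (macOS 10.12+)
    uint64_t cont = mach_continuous_time();
    // cheaper, slightly less accurate variant of mach_absolute_time()
    uint64_t approx = mach_approximate_time();

    uint64_t contNs = cont * timebase.numer / timebase.denom;
    printf("continuous: %" PRIu64 " ns (approximate absolute: %" PRIu64 " ticks)\n",
        contNs, approx);
    return 0;
}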
Also starting with macOS 10.9 (Mavericks), there is a mach_approximate_time and in 10.12 there's also a mach_continuous_approximate_time. These two are identical to mach_absolute_time and mach_continuous_time with the only difference, that they are faster yet less accurate. The standard functions require a call into the kernel as the kernel takes care of Mach Time. Such a call is somewhat expensive, especially on systems that already have a Meltdown fix. The approximate versions won't have to always call into the kernel. They use a clock in user space that is only synchronized with the kernel clock from time to time to prevent that it is running too far out of sync, yet a small deviation is always possible and thus it is only the "approximate" Mach Time.