I have an Adafruit FONA 3G/GPS module (American version). I have gotten the cellular functionality to work, but I am struggling with GPS. I have tried both passive and active antennas.
This is a list of AT commands that are available to the SIM5320A module (pdf). The Adafruit example code uses AT commands that don't run on this SIM module. I send the following sequence:
AT+CGPS=1,1
AT+CGPSINFO
and I am receiving:
+CGPSINFO: ,,,,,,,,
AmpI/AmpQ: 4xx/4xx
What exactly is going wrong? I am connected but am not getting any data out. Also, I understand the mathematical significance of AmpI/AmpQ, but what does that mean in terms of connecting to the GPS network?
First, if you're using the FONATest example from Adafruit, you need to look around line 48. Comment out the line that declares a variable of type Adafruit_FONA and uncomment the line that declares a variable of type Adafruit_FONA_3G.
Also, if you haven't run the FONA_3G setbaud program to set the baud rate to something other than 115,200, you need to run it. (This isn't made very clear by Adafruit, and I thought my board was bad before I found that information.)
Second, unless you bridge the pads labeled bias on the board, you will have to use a passive GPS antenna. The SIMCOM module's GPS is pretty sensitive though. I had a uFL-to-SMA adapter plugged into the board, and stuck a 5-inch jumper wire into the SMA's center pin. The GPS acquired a position within a minute!
I tried the following command: AT+CGPS=1 and then used AT+CGPS? to query the GPS status. It reported +CGPS: 1,1 back to me.
The first few attempts at sending AT+CGPSINFO gave me the same info that you got, i.e. no position.
However, after a few minutes with a wire (or passive antenna), AT+CGPSINFO started returning valid time and position information.
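For reference, once the module has a fix, a valid response looks something like the following (the format is from SIMCOM's AT command manual; the values here are illustrative, not mine):

AT+CGPSINFO
+CGPSINFO: 3113.343286,N,12121.234064,E,250311,072809.3,44.1,0.0,0
OK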
Hope this helps.
UPDATE:
You've asked for code, and I'm happy to oblige. This code is designed to just be a terminal-like relay for serial data between the Arduino (using Tools -> Serial Monitor) and the Fona. So the commands I described above, I just entered by hand. Still, some folks may find it useful.
#include <Adafruit_FONA.h>
#include <SoftwareSerial.h>

#define RX_FROM_FONA 2
#define TX_TO_FONA 3
#define FONA_RST 4
#define FONA_PWR_KEY 5
#define FONA_PWR_STATUS 7

Adafruit_FONA fona = Adafruit_FONA(FONA_RST);
SoftwareSerial fonaSerial = SoftwareSerial(TX_TO_FONA, RX_FROM_FONA);

void setup() {
  pinMode(FONA_RST, OUTPUT);
  pinMode(FONA_PWR_KEY, OUTPUT);
  pinMode(FONA_PWR_STATUS, INPUT_PULLUP);
  pinMode(LED_BUILTIN, OUTPUT);
  digitalWrite(FONA_RST, HIGH);
  digitalWrite(FONA_PWR_KEY, HIGH);
  resetFona();
  Serial.begin(115200);
  fonaSerial.begin(4800);
}

// Pulse the reset line, then hold PWR_KEY low long enough to power the module up.
void resetFona() {
  digitalWrite(LED_BUILTIN, HIGH);
  digitalWrite(FONA_RST, LOW);
  delay(200);
  digitalWrite(LED_BUILTIN, LOW);
  digitalWrite(FONA_RST, HIGH);
  delay(1000);
  digitalWrite(LED_BUILTIN, HIGH);
  digitalWrite(FONA_PWR_KEY, LOW);
  delay(5000);
  digitalWrite(LED_BUILTIN, LOW);
  digitalWrite(FONA_PWR_KEY, HIGH);
  delay(100);
}

char buffer[255];
int crIndex = 0;
int bufferIndex = 0;
int foundCR = 0;

void loop() {
  // Collect bytes from the Serial Monitor until a carriage return arrives.
  if (Serial.available()) {
    buffer[bufferIndex++] = Serial.read();
    if (buffer[bufferIndex - 1] == 13) {
      crIndex = bufferIndex - 1;
      foundCR = 1;
    }
  }
  if (bufferIndex >= 255) { bufferIndex = 0; }  // wrap before overrunning buffer[254]
  // Relay anything the FONA says straight back to the Serial Monitor.
  while (fonaSerial.available()) { Serial.write(fonaSerial.read()); }
  // Once a full line is buffered, forward it to the FONA.
  if (foundCR > 0) {
    for (int i = 0; i <= crIndex; i++) {
      fonaSerial.write(buffer[i]);
    }
    foundCR = 0;
    bufferIndex = 0;
    crIndex = 0;
  }
}
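To use it: open Tools -> Serial Monitor at 115200 baud with the line ending set to "Carriage return" (the sketch only forwards a buffered line to the FONA once it sees a CR, byte 13), then type the AT commands above by hand.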
I don't think this is a software problem; I think you have a hardware problem.
I had this same problem. The sensitivity of the SIM5320A is pretty good, but Adafruit hasn't really followed their design guide in implementing the carrier board, and this hurts its sensitivity.
For example, you can read the application notes for the board here. Check out the PDFs on "SMT Module Design" and "Hardware Design."
The first thing you'll notice is that SIMCOM recommends an LC tuning network to match your particular antenna to a 50-ohm impedance.
Here is the SIMCOM recommended diagram:
And here is the Adafruit design, with no tuning network anywhere to be found:
Now, if you're going to be using this in your own PCB later, you of course are free to make improvements here yourself.
To test your design, you're going to want to know more about what the module is saying, otherwise you're flying blind. You can read the GPS NMEA sequences and decipher them, but there's a lot to it. Another way is to use the GPS demo tool that SIMCOM support offers. To download this, go to:
ftp://simcom.exavault.com/SIM5320A/GPS tool
username=myd
psd=simcommyd
It will show you how many satellites you're connected to, what signal strength they have, etc. You are going to need at least 3 satellites for a lock (technically you should have 4, but some modules will still put out a rough triangulation solution with just 3; I'm not sure which the SIM5320A does). You want each satellite to have a strength in the 30+ range.
Here is my "before" screenshot; you can see I only had one satellite and hence no lock. This was after allowing half an hour from a cold start to download the almanac and ephemeris tables (it takes about 15 minutes on average from a cold start). The one visible was satellite 21 (with a signal strength of 21 as well, coincidentally).
What I did to fix my signal gain was add an LNA (Low Noise Amplifier) in series with my passive antenna. I used the BGA524N6BOARDTOBO1 breakout board, which you can get from DigiKey, Mouser, and others; it adds 19 dB of gain with an incredibly low 0.55 dB noise figure.
After hooking up my antenna, and waiting some time, I got a lock!
Hope this helps you and others to use this otherwise great module.
Related
I have LPCXpresso OM13058 board with LPC11U68 MCU.
Board schematic: https://www.nxp.com/downloads/en/schematics/LPC11U68_Xpresso_v2_Schematic_RevC_1.pdf
I am using this board standalone and compile my program with IAR IDE. For MCU programming I use Flash magic.
Configuration files I used, including SystemInit:
https://github.com/NordicPlayground/mbed/tree/master/libraries/mbed/targets/cmsis/TARGET_NXP/TARGET_LPC11U6X/system_LPC11U6x.h
https://github.com/NordicPlayground/mbed/tree/master/libraries/mbed/targets/cmsis/TARGET_NXP/TARGET_LPC11U6X/system_LPC11U6x.c
https://github.com/NordicPlayground/mbed/tree/master/libraries/mbed/targets/cmsis/TARGET_NXP/TARGET_LPC11U6X/LPC11U6x.h
My program runs at 48 MHz. I am trying to generate a 200 kHz PWM signal on a GPIO pin. I initialize my CT32B0 timer:
LPC_SYSCON->SYSAHBCLKCTRL |= (1<<9);
LPC_CT32B0->CTCR = 0x0;
LPC_CT32B0->PR = 48-1; //prescale, I get 1MHz
LPC_CT32B0->TCR = 0x02;
Then I have a timer-counter-based delay function (with the prescaler above, 1 tick = 1 us, despite the parameter name):

void delay(unsigned int ms)
{
    LPC_CT32B0->TCR = 0x02;      // reset the counter
    LPC_CT32B0->TCR = 0x01;      // start counting
    while(LPC_CT32B0->TC < ms);  // busy-wait until the tick count is reached
    LPC_CT32B0->TCR = 0x00;      // stop the counter
}
Simple GPIO toggling to get 1 kHz (works well; the oscilloscope shows 1 kHz):
LPC_GPIO_PORT->NOT[0] |= (1<<17);
delay(500);
Then I changed the value from 500 to 50, but instead of getting 10 kHz, my oscilloscope shows 9.88 kHz.
LPC_GPIO_PORT->NOT[0] |= (1<<17);
delay(50);
Changing the value to 5, the result gets much more inaccurate: I get 87.7 kHz instead of 100 kHz.
LPC_GPIO_PORT->NOT[0] |= (1<<17);
delay(5);
As you can see, the frequency is not that high, but the error is not acceptable. I have implemented a USART running at 115200 baud without any issues, so I think the main clock really is running at 48 MHz (SystemCoreClock also returns 48000000).
I have tried different ports and pins, but the result is the same. Also, I have tried running the same code on another board with a 72 MHz LPC1343 MCU, and GPIO switching at 200 kHz was perfect, without any errors.
Could you please help me find the problem? Is any additional configuration needed, or is the issue hardware?
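(A note on the likely cause: each delay() call stops, resets, and restarts the timer, and the function call plus the GPIO write add a fixed software overhead to every half-period. At 500 ticks that overhead is negligible; at 5 ticks it is a large fraction of the period, which matches the measurements above, roughly 0.7 us extra per half-period. A minimal sketch that avoids accumulating this overhead, assuming the same CT32B0 setup prescaled to 1 MHz as above, is to keep the timer free-running and schedule each edge against an absolute deadline:)

/* Hedged sketch: free-running timer with absolute deadlines, so per-loop
   software overhead does not accumulate into the period. */
LPC_CT32B0->TCR = 0x02;                       /* reset the counter          */
LPC_CT32B0->TCR = 0x01;                       /* and let it run freely      */
unsigned int next = LPC_CT32B0->TC;
while (1) {
    next += 5;                                /* 5 us half-period -> 100 kHz */
    while ((int)(LPC_CT32B0->TC - next) < 0)  /* wait for the deadline       */
        ;
    LPC_GPIO_PORT->NOT[0] = (1 << 17);        /* toggle P0.17                */
}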
I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534F06H.
So far, my code just samples a signal with my ADC every time a timer interrupt is triggered, then sends the same signal out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltage my DAC outputs doesn't make much sense to me and stays at 2.5 V (it's powered at 0-5 V).
I've tried feeding the DAC various values ranging from 0 to 65535 (it's a 16-bit DAC, so I guess that should be the expected range of values to feed it, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration, using different SPIs (3 and 4) and different DACs (I have one soldered to my PCB on SPI3, and one on a breadboard wired to SPI4, in case the one soldered on my board was defective).
I made sure that the chip selection line works as expected.
I can't look at the data and clock being transmitted since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
    INTDisableInterrupts();
    Chip_Select_DAC_Clr();
    while(!SPI4STATbits.SPITBE);  // wait for TX buffer to empty
    SPI4BUF = valueDAC;           // write 16-bit word to TX buffer
    while(!SPI4STATbits.SPIRBF);  // wait for RX buffer to fill
    Empty_SPI3_buffer = SPI4BUF;  // read RX buffer to clear the receive flag
    Chip_Select_DAC_Set();
    INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer1. This works fine.
ADC_Input() stores the sample in the global variable adc_value (12 bits, signed).
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
    IFS0bits.T1IF = 0;
    ADC_Input();
    // rescale the signed 12-bit audio values to unsigned 16-bit values
    DAC_output_value = adc_value + 2048;       // unsign the signed 12-bit values (0-4095, center 2048)
    DAC_output_value = DAC_output_value * 16;  // scale from 12 to 16 bits: 16 = 65536/4096
    DAC_Output(DAC_output_value);
}
main function with SPI, IO, Timer configuration
void main() {
    SPI4CON = SPI4_CONF;
    SPI4BRG = SPI4_BAUD;
    TRISE = 0b00100000;
    TRISD = 0b000000110100;
    TRISG = 0b0010000000;
    LATD = 0x0;
    SYSTEMConfigPerformance(80000000L);
    INTCONSET = _INTCON_MVEC_MASK;  /* set the interrupt controller to multi-vector mode */

    T1CONbits.TON = 0;      /* turn off Timer 1 */
    T1CONbits.TCKPS = 0b11; /* pre-scale = 1:256 (T1 clock = PBCLK, 80 MHz?) */
    PR1 = 1816;             /* T1 period ~ ? */
    TMR1 = 0;               /* clear Timer 1 counter */

    IPC1bits.T1IP = 7;      /* set Timer 1 interrupt priority to 7 */
    IFS0bits.T1IF = 0;      /* reset the Timer 1 interrupt flag */
    IEC0bits.T1IE = 1;      /* enable interrupts from Timer 1 */
    T1CONbits.TON = 1;      /* enable Timer 1 peripheral */
    INTEnableInterrupts();

    while (1){
    }
}
I would expect the voltage at the output of my DAC to mimic what I put at the input of my ADC; instead, the DAC output is always constant, no matter what I input to the ADC.
What am I missing?
Also, when turning the SPIs on, should I still manually manage the IO configuration of the SDI/SDO/SCK pins using TRIS, or is that automatically taken care of?
First of all, I agree that the documentation I first found for the PT8211 is rather poor. I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select; it is word select (left/right channel). In I2S, setting WS to 0 normally means the left channel; however, in the extended datasheet I found, it looks like WS 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit-bang it. There is a lot of info on I2S though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that the first bit when WS transitions from high to low is the LSB of the right channel, and when WS transitions from low to high, it is not the LSB of the left channel. Note that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 and 5 V. (The max is 2.5 V, which is what you've been seeing.)
[I2S timing diagram]
Basically, I'd try the proper protocol first, with a bit-banging algorithm that continuously flip-flops between the left and right channels.
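To make that concrete, here is a minimal bit-banging sketch. It only illustrates the framing and is untested: BCK_HIGH()/BCK_LOW(), DIN_SET() and WS_SET() are hypothetical macros standing in for whatever GPIO writes drive those pins on your board, and you may need to adapt the bit order and WS timing to the PT8211's LSB-justified format.

/* Hypothetical GPIO macros: map these to LATxSET/LATxCLR writes for the
   pins wired to the PT8211's BCK, DIN and WS inputs. */
static inline void send_channel_word(unsigned short sample, int ws_level)
{
    WS_SET(ws_level);                 /* word select: picks one channel */
    for (int i = 15; i >= 0; i--) {
        BCK_LOW();
        DIN_SET((sample >> i) & 1);   /* present the next bit, MSB first */
        BCK_HIGH();                   /* receiver latches on the rising edge */
    }
}

/* One stereo frame: continuously alternate the two channels. */
static inline void send_frame(unsigned short left, unsigned short right)
{
    send_channel_word(left, 0);
    send_channel_word(right, 1);
}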
First of all, thanks a lot for your comment. It helps a lot to know that I'm not looking at an SPI transmission, which explains why it's not working.
A few reflections on it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely try to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from that exact same PIC to the same DAC using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code that I'm talking about:
valueDAC = valueDAC + 32768;
valueDAC.F15 =~ valueDAC.F15;
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the biggest difference between SPI and I2S is that SPI sends "bursts" of data, whereas I2S sends data continuously. Another difference is that the bit sent after the word select changes state is the LSB of the last word.
So I was thinking: my SPI is triggered by a timer, which is always the same, so even if the data is not sent continuously, it will just make the sound wave a bit more "aliased"; and if it's triggered regularly enough (say at 44 kHz), it should not be SO different from sending I2S data at the same frequency, right?
If that is so, and I understand correctly, the "only" problem left is to manage the LSB-next-word-MSB placement problem; but I thought the LSB is virtually negligible over 16-bit values, so if I could just bit-shift my value to the right and then fix the LSB to 0 or 1, the error would be small and the format would be right.
Does it sound like I have a valid "MacGyver I2S from my SPI", or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure it's configured correctly.
Here is the code so far
SPI config
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
valueDAC = valueDAC >> 1; // put the MSB of ValueDAC 1 bit to the right (becase the MSB of what is transmitted will be seen by the DAC as the LSB of the last value, after a word select change)
//Left channel
Chip_Select_DAC_Set(); // Select left channel
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer (don't know why we need to do this here, but we do)
//SPI3_Write(valueDAC); MikroC option
// Right channel
Chip_Select_DAC_Clr();
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF;
INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in this mode, but I would still have the LSB/MSB shifting problem to solve.
I'm a bit lost here, so any help would be cool.
I know there is a lot on the internet (http://forum.arduino.cc/index.php?topic=159557.0 for example) about the OV7670 and I have read a lot about it, but it seems something is missing.
First of all, I took a look at how to read pixel by pixel from the camera to build the rectangular 640 x 480 image; this was quite easy to understand given HREF, VSYNC and PCLK as described in the documentation here: http://www.voti.nl/docs/OV7670.pdf. I understand XCLK as an input I need to feed the OV7670 as a kind of cycle controller, and RESET as something to reset it.
So at this point I thought the camera's functionality would be covered by wiring the following pins:
D0..D7 - data (pixel) lines, connected to Arduino digital pins 0 to 7 as INPUT on the Arduino board
XCLK - camera clock, connected to Arduino digital pin 8 as OUTPUT from the Arduino board
PCLK - pixel clock, connected to Arduino digital pin 9 as INPUT on the Arduino board
HREF - defines when a line starts/ends, connected to Arduino digital pin 10 as INPUT on the Arduino board
VSYNC - defines when a frame starts/ends, connected to Arduino digital pin 11 as INPUT on the Arduino board
GND - ground, connected to Arduino GND
3V3 - 3.3 V supply, connected to the Arduino 3.3 V pin
RESET - connected to Arduino RESET
PWDN - connected to Arduino GND
From my point of view, the implementation for this approach would be something like:
Code:
for each loop function do
    write high to XCLK
    if VSYNC is HIGH
        return;
    if HREF is LOW
        return;
    if lastPCLK was HIGH and currentPCLK is LOW
        readPixelFromDataPins();
end for
My readPixelFromDataPins() basically reads just the first byte (as I'm just testing whether I can even read something from the camera), and it is written as follows:
Code:
byte readPixelFromDataPins() {
    byte result = 0;
    for (int i = 0; i < 8; i++) {
        result = result << 1 | digitalRead(data_p[i]);
    }
    return result;
}
To check whether anything is being read from the camera, I just print the byte read from the data pins as a number over Serial at 9600 baud. But currently I'm receiving only zero values. The code I'm using to retrieve an image is stored here: https://gist.github.com/franciscospaeth/8503747.
Has anybody who got the OV7670 working with an Arduino figured out what I'm doing wrong? I suppose I'm driving XCLK wrongly, right? What should I do to get it working?
I searched a lot and didn't find any SSCCE (http://sscce.org/) for this camera on Arduino; if somebody has one, please let me know.
This question is present on the Arduino forum (http://forum.arduino.cc/index.php?topic=211741.0) too.
Your idea is not bad, but...
XCLK needs to be a real clock (in your program it is just a transition from 0 to 1, and it freezes there);
you also need to use I2C on SIOC and SIOD to configure the camera (or you can use the default settings, but I am not sure the default output format is right for you: 30 fps, VGA, YUV format...);
your code runs slower because the serial output is in the same loop as the data reading.
I recommend toggling the XCLK pin and moving the pixel print into an if(). Also, you can only read data at a very precise time: if you want to read just one byte, then after a 0-to-1 transition of HREF you need to wait for a new 0-to-1 transition of PCLK. (You will see only one 0-to-1 transition of HREF per 784 x 2 transitions of PCLK: 640 active pixels plus 144 of dead time per line, times 2 because YUV and RGB send 2 bytes per pixel.)
Hello, I am Mr_Arduino from the Arduino forums. Your issue is that you are reading pixels too slowly; please do not use digitalRead() for such a thing. Also, if you insist on using a separate function just to read a byte, make sure the function is inlined; you can do this by declaring it static inline. Also, as mentioned above, how are you generating the clock? You can generate XCLK using PWM on the Arduino (see the sketch at the end of this answer).
I have created a working example here:
https://github.com/ComputerNerd/arduino-camera-tft/blob/master/captureimage.c
Edit: a 3rd party has copied part but not all of the code from the above link into the answer here. However, the link must remain as the code posted below requires additional files from that source to actually work.
Edit 2: Removed irrelevant code. You will need to modify what you do with the data.
void capImg(void){
    cli();
    uint8_t w,ww;
    uint8_t h;
    w=160;
    h=240;
    tft_setXY(0,0);
    CS_LOW;
    RS_HIGH;
    RD_HIGH;
    DDRA=0xFF;
    //DDRC=0;
#ifdef MT9D111
    while (PINE&32){}//wait for low
    while (!(PINE&32)){}//wait for high
#else
    while (!(PINE&32)){}//wait for high
    while (PINE&32){}//wait for low
#endif
    while (h--){
        ww=w;
        while (ww--){
            // unrolled: four PCLK cycles per iteration, copying the camera
            // bus (PINC) straight to the TFT data bus (PORTA)
            WR_LOW;
            while (PINE&16){}//wait for low
            PORTA=PINC;
            WR_HIGH;
            while (!(PINE&16)){}//wait for high
            WR_LOW;
            while (PINE&16){}//wait for low
            PORTA=PINC;
            WR_HIGH;
            while (!(PINE&16)){}//wait for high
            WR_LOW;
            while (PINE&16){}//wait for low
            PORTA=PINC;
            WR_HIGH;
            while (!(PINE&16)){}//wait for high
            WR_LOW;
            while (PINE&16){}//wait for low
            PORTA=PINC;
            WR_HIGH;
            while (!(PINE&16)){}//wait for high
        }
    }
    CS_HIGH;
    sei();
}
You can also find it on github.
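On the PWM point above: here is a minimal sketch for generating XCLK, assuming an ATmega328-based Arduino (Uno) with XCLK wired to digital pin 9 (OC1A); on other AVRs the output-compare pin differs. Timer1 in CTC mode with OCR1A = 0 toggles the pin on every timer tick, giving F_CPU/2 = 8 MHz.

#include <avr/io.h>

void setup_xclk_8mhz(void)
{
    DDRB  |= _BV(DDB1);               /* OC1A (digital pin 9 on an Uno) as output */
    TCCR1A = _BV(COM1A0);             /* toggle OC1A on each compare match        */
    TCCR1B = _BV(WGM12) | _BV(CS10);  /* CTC mode, TOP = OCR1A, no prescaler      */
    OCR1A  = 0;                       /* toggle every tick: 16 MHz / 2 = 8 MHz    */
}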
You can use my instructions: how to retrieve an image from the OV7670. They contain all the steps you need. There are also instructions for setting up the frame grabber: how to run framegrabber.
I'd like to use GPIO to turn on an LED on a CraneBoard (ARM processor). I'm very new to embedded programming, but I'm quite good at C. I read some websites and learned about GPIO-related commands. I wrote some code, but I'm not quite sure how to integrate it into the U-Boot code for the CraneBoard. I don't know where to start. Kindly guide me.
#define LED1 (1 << 6)

int getPinState(int pinNumber);

int main(void)
{
    GPIO0_IODIR |= LED1;
    GPIO0_IOSET |= LED1;
    while (1)
    {
        GPIO0_IOCLR |= LED1;
    }
}

int getPinState(int pinNumber)
{
    int pinBlockState = GPIO0_IOPIN;
    int pinState = (pinBlockState & (1 << pinNumber)) ? 1 : 0;
    return pinState;
}
First of all, learn the common bit (also pin, in your case) manipulation expressions that you will use A LOT in embedded programming:
/* Set bit to 1 */
GPIO0_IODIR |= LED1; //output
/* Clear bit (set to 0) */
GPIO0_IOSET &= ~LED1; //low
/* Toggle bit */
GPIO0_IOSET ^= LED1;
Your while() loop actually does nothing after the first iteration, because repeating the same OR operation does not change the bit state (see the truth table for OR). Also, you should add a delay, because if the pin toggles too fast, the LED might look off all the time. A simple solution would look like:
while(1)
{
    GPIO0_IOSET ^= LED1;
    sleep(1); //or replace with any other available delay command
}
I do not have the U-Boot source files for the CraneBoard, so I cannot tell you the exact place to put your code, but basically there are several options:
1) add it in main(), where U-Boot starts, thus hanging it (but you still have a blinking LED!);
2) implement a separate command to switch the LED on/off (see command.c and the cmd_-prefixed files for examples);
3) integrate it into the serial loop, so the pin can be switched while waiting for user input;
4) build it as an application on top of U-Boot.
Get used to a lot of reading and documentation; the TRM is your friend here (sometimes the only one). There are also some great guides for embedded starters; just google around. A few to mention:
http://www.microbuilder.eu/Tutorials/LPC2148/GPIO.aspx (basics with examples)
http://beagleboard.org/ (great resource for BeagleBoard, but much applies to CraneBoard as they share the same SoC, includes great community).
http://free-electrons.com/ (more towards embedded Linux and other advanced topics, but some basics can also be found)
http://processors.wiki.ti.com/index.php/CraneBoard (official CraneBoard wiki, probably know this, but just in case)
P.S. Good luck, and don't give up!
If you want to do it in U-Boot (and not in Linux), then you have to write an application for U-Boot.
Section 5.12 of the U-Boot manual explains how to do it.
The U-Boot source provides some examples that you can use.
I'd like to add my own answer. The code I posted previously was generalized; on the CraneBoard there are specific functions that do these operations, so I rewrote it accordingly. I added the file cmd_toggle.c to the 'common' directory of the U-Boot tree and added it to the Makefile. The following code makes the LED blink.
int glow_led(cmd_tbl_t *cmdtp, int flag, int argc, char *argv[])
{
    int ret, i = 0, num = 0;
    int lpin;

    lpin = (int)simple_strtoul(argv[1], NULL, 10);  /* pin number from the console argument */
    ret = set_mmc_mux();
    if (ret < 0)
        printf("\n\nLED failed to glow!\n\n");
    else {
        if (!omap_request_gpio(lpin)) {
            omap_set_gpio_direction(lpin, 0);  /* 0 = output */
            for (i = 1; i < 21; i++) {
                /* alternate num between 1 and 0, toggling the LED */
                if ((i % 2) == 0) {
                    num = num - 1;
                    omap_set_gpio_dataout(lpin, num);
                } else {
                    num = num + 1;
                    omap_set_gpio_dataout(lpin, num);
                }
                udelay(3000000);
            }
        }
    }
    return 0;
}

U_BOOT_CMD(toggle, 2, 1, glow_led, "Glow an LED", "pin_number");
I could've made this a little simpler by just using a while loop to repeatedly set it to 1 and 0.
This can be executed from the U-Boot console as toggle 142, as I have connected the LED to pin 142.
P.S. Thanks for all your guidance. A special thanks to KBart.
I've been pulling my hair out lately trying to get an ATmega162 on my STK200 to talk to my computer over RS232. I checked and made sure that the STK200 contains a MAX202CPE chip.
I've configured the chip to use its internal 8 MHz clock, divided by 8.
I've tried to copy the code out of the data sheet (and made changes where the compiler complained), but to no avail.
My code is below, could someone please help me fix the problems that I'm having?
I've confirmed that my serial port works on other devices and is not faulty.
Thanks!
#include <avr/io.h>
#include <avr/iom162.h>

#define BAUDRATE 4800

void USART_Init(unsigned int baud)
{
    UBRR0H = (unsigned char)(baud >> 8);
    UBRR0L = (unsigned char)baud;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);
    UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}

void USART_Transmit(unsigned char data)
{
    while(!(UCSR0A & (1 << UDRE0)));
    UDR0 = data;
}

unsigned char USART_Receive()
{
    while(!(UCSR0A & (1 << RXC0)));
    return UDR0;
}

int main()
{
    USART_Init(BAUDRATE);
    unsigned char data;
    // all are 1, all as output
    DDRB = 0xFF;
    while(1)
    {
        data = USART_Receive();
        PORTB = data;
        USART_Transmit(data);
    }
}
I have commented on Greg's answer, but would like to add one more thing. For this sort of problem, the gold-standard debugging method is to first understand asynchronous serial communications, then get an oscilloscope and see what's happening on the line. If characters are being exchanged and it's just a baud rate problem, this will be particularly helpful, as you can calculate the baud rate you are seeing and then adjust the divisor accordingly.
Here is a super quick primer; no doubt you can find something much more comprehensive on Wikipedia or elsewhere.
Let's assume 8 bits, no parity, 1 stop bit (the most common setup). Then if the character being transmitted is, say, 0x3f (= ASCII '?'), the line looks like this:
...--+   +---+---+---+---+---+---+       +---+--...
     | S | 1   1   1   1   1   1 | 0   0 | E
     +---+                       +---+---+
The high (1) level is +5V at the chip and -12V after conversion to RS232 levels.
The low (0) level is 0V at the chip and +12V after conversion to RS232 levels.
S is the start bit.
Then we have 8 data bits, least significant first, so here 00111111 = 0x3f = '?'.
E is the stop (e for end) bit.
Time advances from left to right, just like an oscilloscope display. If the baud rate is 4800, each bit spans 1/4800 seconds = 0.21 milliseconds (approximately).
The receiver works by sampling the line and looking for a falling edge (a quiescent line is simply logical '1' all the time). The receiver knows the baudrate, and the number of start bits (1), so it measures one half bit time from the falling edge to find the middle of the start bit, then samples the line 8 bit times in succession after that to collect the data bits. The receiver then waits one more bit time (until half way through the stop bit) and starts looking for another start bit (i.e. falling edge). Meanwhile the character read is made available to the rest of the system. The transmitter guarantees that the next falling edge won't begin until the stop bit is complete. The transmitter can be programmed to always wait longer (with additional stop bits) but that is a legacy issue, extra stop bits were only required with very slow hardware and/or software setups.
I don't have reference material handy, but the baud rate register UBRR usually contains a divisor value, rather than the desired baud rate itself. A quick google search indicates that the correct divisor value for 4800 baud may be 239. So try:
divisor = 239;
UBRR0H = (unsigned char)(divisor >> 8);
UBRR0L = (unsigned char)divisor;
If this doesn't work, check with the reference docs for your particular chip for the correct divisor calculation formula.
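For reference, the usual datasheet formula for normal asynchronous mode is UBRR = F_CPU / (16 x baud) - 1. A compile-time sketch, assuming the 1 MHz effective clock from the question:

#define F_CPU    1000000UL                    /* 8 MHz internal RC divided by 8 */
#define BAUD     4800UL
#define UBRR_VAL ((F_CPU / (16 * BAUD)) - 1)  /* = 12 at 1 MHz and 4800 baud    */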
For debugging UART communication, there are two useful things to do:
1) Do a loop-back at the connector and make sure you can read back what you write. If you send a character and get it back exactly, you know that the hardware is wired correctly, and that at least the basic set of UART register configuration is correct.
2) Repeatedly send the character 0x55 ("U") - the binary bit pattern 01010101 will allow you to quickly see the bit width on the oscilloscope, which will let you verify that the speed setting is correct.
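As a sketch of point 2, reusing USART_Init() and USART_Transmit() from the question (remember the argument is the UBRR divisor, not the baud rate itself; see the other answers):

int main(void)
{
    USART_Init(12);            /* divisor for 4800 baud at a 1 MHz clock  */
    for (;;)
        USART_Transmit(0x55);  /* 01010101: easy to measure the bit width */
}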
After reading the data sheet a little more thoroughly, I found I was setting the baud rate incorrectly. The ATmega162 data sheet has a chart of clock frequencies plotted against baud rates and the corresponding error.
For a 4800 baud rate and a 1 MHz clock frequency, the error was 0.2%, which was acceptable for me. The trick was passing 12 to the USART_Init() function instead of 4800: UBRR holds a divisor, not the baud rate itself.
Hope this helps someone else out!