I have a very simple program that prints something to the terminal and then goes directly to sleep.
For some reason the device is consuming more current during sleep mode than it should: it is drawing 0.24 mA, but I know it should be less than that. Without sleep it consumes 4.32 mA. I've run the most basic software I can and must be missing something.
What are some of the factors that affect power consumption? I really need to lower the power consumption, but I don't know what's causing it to be that high. Here is the datasheet for your convenience.
/*
File: main.c
Date: 2011-SEP-4
Target: PIC18F87J11
IDE: MPLAB 8.76
Compiler: C18 3.40
*/
#include <p18cxxx.h>
#include <usart.h>
#pragma config FOSC = HSPLL, WDTEN = OFF, WDTPS = 4096, XINST = OFF
#define FOSC (4000000UL)
#define FCYC (FOSC/4UL)
#define BAUD 9600UL
#define SPBRG_INIT (FOSC/(16UL*BAUD) - 1)
void main(void)
{
    /* set FOSC clock to 4 MHz */
    OSCCON = 0x70;
    /* turn off 4x PLL */
    OSCTUNE = 0x00;
    /* make all ADC inputs digital I/O */
    ANCON0 = 0xFF;
    ANCON1 = 0xFF;
    /* test the simulator UART interface */
    Open1USART(USART_TX_INT_OFF & USART_RX_INT_OFF & USART_ASYNCH_MODE & USART_EIGHT_BIT & USART_CONT_RX & USART_BRGH_HIGH, SPBRG_INIT);
    putrs1USART("PIC MICROCONTROLLERS\r\n");
    Close1USART();
    /* sleep forever */
    Sleep();
}
Thanks in advance!
Update 1: I noticed that adding the following code decreased the current to 0.04 mA:
TRISE = 0;
PORTE = 0x0C;
And if I change PORTE to the following, the current increases to 0.16 mA:
PORTE = 0x00;
But I don't really understand what all of that means, or how the power consumption went down. I have to be missing something in the code, but I don't know what it is.
Update 2: This code gives me unstable current consumption, sometimes 2.7 mA and other times 0.01 mA. I suspect the problem is with WDTCONbits.REGSLP = 1;
Download Code
Current consumption nicely went down from 0.24 mA to 0.04 mA when the OP changed the settings on the port outputs.
This is expected: in typical designs, the outputs control various circuitry. Example: an output, by driving high, may turn on an LED(1), drawing an additional 0.20 mA. In another design, an output may turn on an LED by driving low. In a third design, not driving at all may turn on an LED.
The OP needs to consult the schematic or the designer to determine which settings result in low power. Further, certain combinations may or may not be allowed during low-power mode.
Lastly, the sequence of lowering power, disabling, etc. for the various design elements may be important. The sequence used to shut things down is usually reversed when bringing them back on-line.
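As a minimal sketch of the idea (the register values are the ones from the OP's Update 1; which levels actually minimize current depends entirely on the board's schematic):
/* Park the pins in a known, board-appropriate state before sleeping. */
TRISE = 0x00;   /* all PORTE pins as outputs                        */
PORTE = 0x0C;   /* drive the levels this board needs (per Update 1) */
Sleep();        /* enter sleep with the pins parked                 */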
@Chris Stratton has good ideas in the posted comment.
(1) A low-powered LED.
Related
I am using the libraries provided by the C18 compiler to open and set the duty cycle for PWM. I noticed that the maximum PWM frequency I can get with a 100% duty cycle is about 13.5 kHz; the lower the duty cycle, the higher the PWM frequency. How can I achieve a higher PWM frequency while still having a 100% duty cycle? Is it possible to at least get more than 13.5 kHz? I just can't figure out what I'm missing; maybe someone can help here. I am using a PIC18F87J11.
Here are the C18 C Compiler Libraries.
Here is the PIC18F87J11 datasheet.
Here is a snippet of the code I am using for PWM.
TRISCbits.TRISC1 = 0;
OpenTimer2(TIMER_INT_OFF & T2_PS_1_1 & T2_POST_1_1);
OpenPWM2(0x03ff);
SetDCPWM2(255);
Your help is appreciated, thanks!
For a start, you have the parameters to the two functions reversed: OpenPWM2() takes a char value less than 256, and SetDCPWM2() takes a 10-bit number.
That said, you have effectively chosen the largest period value (255), which gives the lowest frequency. As the datasheet explains, the OpenPWM2() function takes the period as its parameter. A higher frequency is equivalent to a shorter period, and vice versa.
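For illustration, here is the snippet with the arguments the documented way around (just a sketch; 0x3F is an arbitrary example period, and the timer setup is kept as-is):
/* Same setup, arguments in the documented order: a smaller period value gives a higher PWM frequency. */
TRISCbits.TRISC1 = 0;                                 /* CCP2 pin as output   */
OpenTimer2(TIMER_INT_OFF & T2_PS_1_1 & T2_POST_1_1);  /* Timer2, 1:1 prescale */
OpenPWM2(0x3F);    /* 8-bit period value; one quarter of 0xFF, so ~4x the frequency */
SetDCPWM2(255);    /* 10-bit duty value: 255 of 4*(0x3F+1) = 256 steps, i.e. ~100%  */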
Lastly, why would you want a duty cycle of 100%? That is the same as having the pin always high, in which case the frequency doesn't matter at all. Just turn the pin on and don't use PWM.
You haven't said what you are driving with this PWM, but generally speaking, having the frequency too high can cause problems: it can produce radio interference, cause overheating from switching losses, and so on.
Your question indicates you misunderstand the purpose of PWM and what the term refers to, so here is a tl;dr.
PWM simulates a voltage between 0 and Vcc by rapidly turning a pin high and low. The simulated voltage is proportional to the time_high/(time_high + time_low). The percent of time the pin is at Vcc is called the duty cycle. (So 100% duty is always on, giving Vcc volts. 0% duty is always off, giving 0 V.)
The rate at which this on/off cycle repeats is called the PWM frequency. If the frequency is too small (the period too long) the load will see the pin voltage fluctuating. The goal is to run the PWM fast enough to smooth out the voltage the load sees, but not so fast as to cause other problems. The available frequencies are appropriate for most applications. Also note that setting the frequency high (period small) will also reduce the accuracy of the duty cycle. This is explained in the datasheet. The reason is basically that the duty cycle must ultimately be converted to clock ticks on versus clock ticks off. The faster the frequency, the fewer ways to divide the clock ticks in each cycle.
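As a worked sketch of that trade-off (all values assumed: FOSC = 40 MHz, 1:1 Timer2 prescale; the standard PIC18 relation is PWM period = (PR2 + 1) * 4 * TOSC * prescale):
/* Choosing the period register for a target PWM frequency.
   PWM frequency = FOSC / (4 * prescale * (PR2 + 1)). */
#define FOSC_HZ   40000000UL  /* assumed oscillator frequency */
#define PRESCALE  1UL         /* assumed Timer2 prescale      */
#define PWM_HZ    40000UL     /* example target PWM frequency */
unsigned char pr2 = (unsigned char)(FOSC_HZ / (4UL * PRESCALE * PWM_HZ) - 1);
/* 40 MHz / (4 * 1 * 40 kHz) - 1 = 249; the duty cycle then has
   4 * (249 + 1) = 1000 distinct steps, slightly under the full 10 bits. */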
I am working on UART with a PIC32MX5xx. All I need is to send a message from the PIC to a terminal (PuTTY), but it is not working: I get invalid characters. The baud rate is set to 19200; how do I calculate the clock frequency?
Is it true that the clock frequency of the UART is 16 times the baud rate? If I do the math, the clock frequency should be 307200, but that doesn't seem right.
Can someone help me understand how the baud rate and clock frequency relate to each other? Also, how do I calculate both?
Thanks!
The baud rate generator has a free-running 16-bit timer. To get the desired baud rate, you must configure its period register UxBRG and prescaler BRGH.
When BRGH is set to 0 (the default), the timer is incremented every 16th cycle of the peripheral bus clock.
When BRGH is 1, the timer increments every 4th cycle.
It is usually better to set BRGH to 1 to get a smaller baud rate error, as long as the resulting UxBRG value still fits into the 16-bit register (a concern at slower baud rates).
The value in the period register UxBRG determines the duration of one bit on the data line, measured in baud rate generator timer increments.
See the formulas in section 21.3 - UART Baud Rate Generator in the reference manual to learn how to calculate a proper value for UxBRG.
To compute the period of the 16-bit baud rate generator timer to achieve the desired baud rate:
When BRGH = 0:
UxBRG = FPB / (16 * BAUDRATE) - 1
When BRGH = 1:
UxBRG = FPB / (4 * BAUDRATE) - 1
Where FPB is the peripheral bus clock frequency.
For example, if FPB = 20 MHz, BRGH = 1, and the desired baud rate is 19200, you would calculate:
UxBRG = 20000000 / (4 * 19200) - 1
= 259
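If it helps, here is the same arithmetic as a small host-side C sketch (FPB and the baud rate are the assumed example values from above):
/* Compute UxBRG for BRGH = 1 and check the actual baud rate error. */
#include <stdio.h>

int main(void)
{
    unsigned long fpb  = 20000000UL;  /* peripheral bus clock (assumed) */
    unsigned long baud = 19200UL;     /* desired baud rate              */
    unsigned long brg    = fpb / (4UL * baud) - 1;           /* 259      */
    double        actual = (double)fpb / (4.0 * (brg + 1));  /* ~19230.8 */
    printf("UxBRG = %lu, actual baud = %.1f, error = %.2f%%\n",
           brg, actual, 100.0 * (actual - baud) / baud);
    return 0;
}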
If you are using some of the latest development libraries and code examples from Microchip, you may find that there are already UART methods in the libraries that will set up the PIC for your needs. If you dig deep into the new compiler directory structures, you will find help files in the Microsoft HTML Help format (no fear, if you are on a Unix-type computer, there are utilities that read these files). There you can drill down into the help to find documentation of the various ready-made methods you can call from your program to configure the PIC's hardware. Buyer beware: the code is not that mature. For instance, I was working on a PIC project that needed to sample two analog signals. The PIC's hardware A/D converter was very complex, but it was clear the ready-made code only covered about 10% of that PIC's abilities.
-good luck
Can anybody spot what is wrong with the code below? It is supposed to average the frame interval (dt) over the previous TIME_STEPS frames.
I'm using Box2D and cocos2d, although I don't think the cocos2d part is very relevant.
-(void) update: (ccTime) dt
{
    float32 timeStep;
    const int32 velocityIterations = 8;
    const int32 positionIterations = 3;

    // Average the previous TIME_STEPS time steps
    for (int i = 0; i < TIME_STEPS; i++)
    {
        timeStep += previous_time_steps[i];
    }
    timeStep = timeStep/TIME_STEPS;

    // step the world
    [GB2Engine sharedInstance].world->Step(timeStep, velocityIterations, positionIterations);

    for (int i = 0; i < TIME_STEPS - 1; i++)
    {
        previous_time_steps[i] = previous_time_steps[i+1];
    }
    previous_time_steps[TIME_STEPS - 1] = dt;
}
The previous_time_steps array is initially filled with whatever the animation interval is set to.
This doesn't do what I would expect it to. On devices with a low frame rate it speeds up the simulation, and on devices with a high frame rate it slows it down. I'm sure it's something stupid I'm overlooking.
I know Box2D likes to work with fixed time steps, but I really don't have a choice. My game runs at a very variable frame rate on the various devices, so a fixed time step just won't work. The game runs at an average of 40 fps, but on some of the crappier devices, like the first-gen iPad, it runs at barely 30 frames per second. The third-gen iPad runs it at 50/60 frames per second.
I'm open to suggestions on other ways of dealing with this problem too. Any advice would be appreciated.
Something else unusual I should note, in case somebody has insight into it: the debug optimisation level of the build has a huge effect on the above. The frame rate isn't changed much with debug optimisations set to -Os vs -O0, but with -Os the physics simulation runs much faster than with -O0 when the above code is active. If I just use dt as the interval instead of the above code, the optimisation level makes no difference.
I'm totally confused by that.
On devices with a low frame rate it speeds up the simulation and on devices with a high frame rate it slows it down.
That's what using a variable time step is all about. If you only get 10 fps, the physics engine advances the world in larger steps because the delta time is larger.
PS: If you do any kind of performance tests like these, run them with the release build. That also ensures that (most) logging is disabled and code optimizations are on. It's possible that you simply experience much greater impact on performance from debugging code on older devices.
Also, what value is TIME_STEPS? It shouldn't be more than 10, maybe 20 at most. The alternative to averaging is to use the delta time directly, but if the delta time is greater than a certain threshold (say, the 30 fps frame time), switch to a fixed delta time (cap it); see the sketch below. Variable time steps below 30 fps can get really ugly, so in such cases it's probably better to let the physics engine slow down with the framerate, or else the game will become harder, if not unplayable, at lower fps.
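A minimal sketch of that capping approach (plain C; the 1/30 s threshold is the one suggested above):
/* Clamp a variable time step so very slow frames don't blow up the simulation. */
#define MAX_DT (1.0f / 30.0f)  /* cap at the 30 fps frame time */

static float clamp_time_step(float dt)
{
    return (dt > MAX_DT) ? MAX_DT : dt;
}
/* usage: world->Step(clamp_time_step(dt), velocityIterations, positionIterations); */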
My environment is
Windows 7 x64
Matlab 2012a x64
Cuda SDK 4.2
Tesla C2050 GPU
I am having trouble figuring out why my GPU is crashing with the "uncorrectable ECC error encountered" message. The error only occurs when I use 512 threads or more. I can't post the kernel, but I will try to describe what it does.
In general, the kernel takes a number of parameters and produces 2 complex matrices defined by the thread size M and another number N, so the returned matrices will be of size MxN. A typical configuration is 512x512, but each number is independent and can vary up or down. The kernel works when the numbers are 256x256.
Each thread extracts a 999-element vector out of a 2D array based on the thread id (i.e. size 999xM, one vector per thread), then cycles through the rows (0 .. N-1) of the output matrices for the calculation. A number of intermediate parameters are calculated, using only pow, sin and cos along with the + - * / operators. To calculate one of the output matrices, an additional loop needs to be executed to sum up the contribution of the 999-element vector that was extracted earlier. This loop does some intermediate calculations to determine the range of values that are allowed to contribute. The contribution is then scaled by a factor determined by the cos and sin of a calculated fractional value. This is where it crashes: if I stick in a constant value of 1.0, or any other for that matter, the kernel executes without trouble, but when even one of the calls (cos or sin) is included, the kernel crashes.
Some pseudocode follows:
kernel()
{
    /* Extract 999 vector from 2D array 999xM - one 999 vector for each thread. */
    for (int i = 0; i < 999; i++)
    {
        .....
    }
    /* Cycle through the 2nd dimension of the output matrices */
    for (int j = 0; j < N; j++)
    {
        /* Calculate some intermediate variables */
        /* Calculate the real and imaginary components of the first output matrix */
        /* real = cos(value), imaginary = sin(value) */
        /* Construct the first output matrix from some intermediate variables and the real and imaginary components */
        /* Calculate some more intermediate variables */
        /* cycle through the extracted vector (0 .. 998) */
        for (int k = 0; k < 999; k++)
        {
            /* Calculate some more intermediate variables */
            /* Determine the range of allowed values to contribute to the second output matrix. */
            /* Calculate the real and imaginary components of the second output matrix */
            /* real = cos(value), imaginary = sin(value) */
            /* This is where it crashes, unless real and imaginary are constant values (1.0) */
            /* Sum up the contributions of the extracted vector to the second output matrix */
        }
        /* Construct the second output matrix from some intermediate variables and the real and imaginary components */
    }
}
I thought this could be due to a register limit, but the occupancy calculator indicates that this is not the case; I'm using fewer than the 32,768 available registers with 512 threads. Can anyone give any suggestions as to what the cause of this could be?
Here is the ptxas info:
ptxas info : Compiling entry function '_Z40KerneliidddddPKdS0_S0_S0_iiiiiiiiiPdS1_S1_S1_S1_S1_S1_S1_S1_S1_' for 'sm_20'
ptxas info : Function properties for _Z40KerneliidddddPKdS0_S0_S0_iiiiiiiiiPdS1_S1_S1_S1_S1_S1_S1_S1_S1_
8056 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Function properties for __internal_trig_reduction_slowpathd
40 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 53 registers, 232 bytes cmem[0], 144 bytes cmem[2], 28 bytes cmem[16]
tmpxft_00001d70_00000000-3_MexFunciton.cudafe1.cpp
"Uncorrectable ECC error" usually refers to a hardware failure. ECC is Error Correcting Code, a means to detect and correct errors in bits stored in RAM. A stray cosmic ray can disrupt one bit stored in RAM every once in a great while, but "uncorrectable ECC error" indicates that several bits are coming out of RAM storage "wrong" - too many for the ECC to recover the original bit values.
This could mean that you have a bad or marginal RAM cell in your GPU device memory.
Marginal circuits of any kind may not fail 100%, but are more likely to fail under the stress of heavy use - and associated rise in temperature.
There are diagnostic utilities floating around to stress-test all the RAM banks of your PC to confirm or pinpoint which chip is failing, but I don't know of an analog for testing the device RAM banks of the GPU.
If you have access to another machine with a GPU of similar capability, try running your app on that machine to see how it behaves. If you don't get the ECC error on the second machine, this confirms that the problem is almost certainly in the hardware of the first machine. If you get the same ECC error on the second machine, then ignore everything I've written here and continue looking for your software bug. Unless your code is actually causing hardware damage, the chances of two machines having the same hardware failure are extremely small.
I'm trying to transmit serial data over the PIC18F45K22 EUSART peripheral. The messages get sent exactly as expected when the clock is running at 16 MHz, but if I turn the PLL on (so that the oscillator runs at 64 MHz), I get a framing error.
I have changed the SPBRG registers to account for the new clock frequency, and I have tried both the 16-bit and 8-bit baud rate generator modes, but with no joy.
Current code:
OSCCONbits.IRCF = 0b111;    //change Fosc to 16 MHz
OSCTUNEbits.PLLEN = 1;      //enable PLL to multiply Fosc by 4 (64 MHz)
/* Set baud rates and related registers */
/* For BRG16 = 1 and BRGH = 1, Baud rate = Fosc/(4*([SPBRGH:SPBRG]+1)) */
SPBRGH1 = 0;                //baud rate regs: 34 gave 115.2k at 16 MHz, 138 at 64 MHz
SPBRG1 = 138;
BAUDCON1bits.BRG16 = 1;     //16-bit baud rate generator mode
TXSTAbits.BRGH = 1;         //high-speed baud rate select
Thanks in advance,
Huggzorx
I'm not familiar with that specific chip, but in general, this is what I look at when my UART isn't behaving.
1) Can your clock be divided down to the baud rate with little enough error? Assuming that the baud rate formula in your comments is correct, I think you're okay there:
Baud rate = 16 MHz / (4*(34 + 1)) = 114286 (0.8% error)
Baud rate = 64 MHz / (4*(138 + 1)) = 115107 (0.08% error)
2) Make sure your chip is producing the baud rate you think it should be producing. Some PLLs are really picky about how you turn them on. It's also easy to mis-configure a peripheral. I find that an oscilloscope is your best bet to diagnose this type of problem. If you have access to one, scope the PIC's transmit pin and check that your bit width is 8.68us (1/115200).
If it's 4 times that size (34.72us), then your PLL didn't lock. If it's just a bit off, then the formula might be wrong.
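To double-check case 1 in code, here is a hedged sketch of the register arithmetic (using the formula from your comments; rounding to the nearest divisor minimizes the error):
/* Derive SPBRGH1:SPBRG1 for BRG16 = 1, BRGH = 1.
   baud = Fosc / (4 * (n + 1))  =>  n = Fosc / (4 * baud) - 1, rounded. */
unsigned long fosc = 64000000UL;  /* 16 MHz * 4 from the PLL */
unsigned long baud = 115200UL;
unsigned int  n = (unsigned int)((fosc + 2UL * baud) / (4UL * baud) - 1);  /* = 138 */
SPBRGH1 = (unsigned char)(n >> 8);    /* 0   */
SPBRG1  = (unsigned char)(n & 0xFF);  /* 138 */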
It's not much but hopefully it gets you going in the right direction.