I am new to microcontroller programming and I am working on a project in CodeVision, which requires me to:
Connect a potentiometer to A0.
Generate a PWM signal whose average DC voltage equals the measured potentiometer voltage.
Display the measured voltage from the ADC on line 1 of the LCD (16x2).
Choose a suitable RC circuit to get a pure DC voltage.
Connect the generated PWM signal to ICP1, calculate its duty cycle, and display it on line 2 of the screen.
So the screen should be like:
Pot1 = 0.000 V
Duty Cycle = ..%
I have the first line working, but I don't know how to use Timer1 for this requirement.
Also, Timer1 has ICR1H and ICR1L, and I am confused about which of them should be used.
I'm configuring the internal temperature sensor of an STM32F4 MCU, and while reading the documentation I came across some "duplicated" but divergent definitions.
Take a look at the image below:
The temperature sensor is connected to ADC1, channel 16. When reading the ADC value in my room, I always get values around 920.
The values for the calibrations (read from MCU memory) are the following:
TS_CAL1 = 941
TS_CAL2 = 1199
It seems to me that calculating the final temperature using the relationship shown on Table 69 leads to different results from when using the relationship from Table 70.
Can anyone help me with this? What's the difference between the data in Tables 69 and 70? What is the purpose of each one? How do I calculate the temperature correctly?
As Clifford explained in the comments, the information in table 69 tells you the typical behaviour of any device from this family, whereas the pointers in table 70 give you the address of the calibration data for your particular device which were measured in the factory.
If you told me that some device of this type gave a reading of 920, I would estimate the temperature as follows:
ADC voltage = 920/4096 * 3.3V = 741mV
Voltage offset from V(25C) = 741mV - 760mV = -19mV
Temperature offset from 25C = -19 / 2.5 = -7.6C
Temperature = 25 - 7.6 = 17.4 degrees C
For your calibrated device I would estimate the temperature like this:
Slope = (1199 - 941) / (110 - 30) = 3.225 LSB/degree
ADC offset from ADC(30C) = (920 - 941) = -21 LSB
Temperature offset from 30C = -21 / 3.225 = -6.5C
Temperature = 30 - 6.5 = 23.5 degrees C
It is important to note, however, that although this second number is "calibrated", the calibration data come from much higher temperatures. Using it below 30 degrees requires extrapolating in a way which may not be physically valid.
Ask yourself, was the room closer to 17 degrees or 23 degrees? Bear in mind that the internal temperature sensor is probably subject to a certain amount of self-heating from the high-performance processor.
If you want to use the internal temperature sensor to measure low temperatures outside the calibration range like this, it might be appropriate to use the offset from the lower calibration point, but then use the typical slope from the datasheet.
Note also that many STM32 evaluation boards run at 3.0V rather than 3.3V, so all of the calculations will have to be adjusted if that is the case.
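In code, the two estimates look roughly like the sketch below. It is only a sketch: the 12-bit/3.3V assumptions and the names (adc_raw, ts_cal1, ts_cal2, temp_from_typical, temp_from_calibration) are mine, and the factory calibration values must still be read from your device's memory as you already do.

#include <stdint.h>
#include <stdio.h>

/* Estimate from the typical characteristics (Table 69 style):
 * V25 = 0.76 V, average slope = 2.5 mV per degree C, 12-bit ADC. */
static float temp_from_typical(uint16_t adc_raw, float vdda)
{
    float v_sense = (float)adc_raw / 4096.0f * vdda;
    return 25.0f + (v_sense - 0.760f) / 0.0025f;
}

/* Estimate from the factory calibration points (Table 70 style):
 * TS_CAL1 taken at 30 C, TS_CAL2 taken at 110 C. */
static float temp_from_calibration(uint16_t adc_raw, uint16_t ts_cal1, uint16_t ts_cal2)
{
    float slope = (float)(ts_cal2 - ts_cal1) / (110.0f - 30.0f);   /* LSB per degree C */
    return 30.0f + ((float)adc_raw - (float)ts_cal1) / slope;
}

int main(void)
{
    uint16_t adc_raw = 920;                                    /* reading from the question */
    printf("typical    : %.1f C\n", temp_from_typical(adc_raw, 3.3f));           /* about 17.5 C */
    printf("calibrated : %.1f C\n", temp_from_calibration(adc_raw, 941, 1199));  /* about 23.5 C */
    return 0;
}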
I cannot see the table, but in general there are the following points to consider when working with a sensor:
Offset: a constant difference between the measured and the actual value. You can check it by feeding a known constant voltage into the system and comparing it with the measured ADC value.
Sensor error: you may need to measure at a known reference value. For temperature, for example, it is common to measure at 0 degrees (e.g. an ice bath), which is a steady state.
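A one-point offset check can be as simple as the sketch below; read_adc(), calibrate_offset() and expected_counts are placeholder names I made up, standing in for whatever your ADC driver provides and for the count you expect from the known reference you feed in.

#include <stdint.h>

extern uint16_t read_adc(void);   /* placeholder for your ADC read routine */

static int16_t adc_offset;        /* systematic offset, in ADC counts */

/* Sample the ADC while a known reference is applied and store the error. */
void calibrate_offset(uint16_t expected_counts)
{
    adc_offset = (int16_t)read_adc() - (int16_t)expected_counts;
}

/* Later readings have the stored offset subtracted out. */
uint16_t read_adc_corrected(void)
{
    return (uint16_t)((int16_t)read_adc() - adc_offset);
}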
I have a USRP N320 SDR and I have an issue with the center frequency in the 3 MHz to 450 MHz band. When a signal is between 450 MHz and 6 GHz, I can see its actual frequency value even if I slide the center frequency, but below 450 MHz the signal is shifted in the opposite direction when I slide the center frequency. Is there any reason for, and solution to, this issue? Any help?
As you can see in Figure 1, the FM radio signals are correctly seen when I set the Rx Tune Frequency to 100 MHz.
[Figure 1]
But when I slide the Rx Tune Frequency to 110, 120, 130, and 140 MHz, the FM radio signals' frequency values are also shifted, as you can see in Figures 2, 3, 4, and 5.
[Figures 2-5]
Addition:
The overall flowgraph blocks and the parameters of the USRP Source are shown in the figures below.
Blocks
USRP Source 1
USRP Source 2
USRP Source 3
Also, I figured out that when I apply a signal below 450 MHz, for example a 100 MHz signal, and shift the center frequency by some amount, the signal appears shifted by twice that amount in the opposite direction. I may not be explaining it well, but the figures below do.
100 MHz signal at 100 MHz center frequency
100 MHz signal at 95 MHz center frequency, but it appears at 90 MHz
100 MHz signal at 110 MHz center frequency, but it appears at 120 MHz
But when I apply a signal above 450 MHz, for example 2 GHz, it works properly, as you can see in the figure below.
The 2 GHz signal is seen correctly at any other center frequency.
My knowledge of microcontrollers is fairly limited at this point, but here goes.
I have a PT6959 LED driver which I'm trying to interface with. Data is read serially by the driver IC on the rising edge of the CLK input once the STB input line goes low.
My question is, how do I know what the input CLK frequency should be?
Does it matter? Or should it be the same as the LED driver's OSC pin frequency?
I've read the datasheet but can't find any reference to specifying an input CLK frequency.
If your microcontroller has a SPI port, connect as follows:
DIN <-- SPI-MOSI
CLK <-- SPI-CLK
STB <-- CS (often just a GPIO rather than a dedicated SPI chipselect)
The SPI peripheral will then do most of the work for you. Most SPI peripherals allow different combinations of polarity and phase known as modes:
Mode  CPOL  CPHA
  0     0     0
  1     0     1
  2     1     0
  3     1     1
The PT6959 operates in mode 3.
The clock rate is probably not critical. If you are bit-banging it rather than using SPI, it need not even be periodic or fixed: it is the state of DIN at the clock edges that matters, not the frequency. The device will have some maximum rate; the datasheet specifies this in terms of minimum mark/space widths of >= 400 ns, which, assuming a 50% mark:space ratio, corresponds to 1.25 MHz, but there is little benefit in operating at the maximum speed.
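If you do bit-bang it, a rough sketch of the framing is below. The GPIO and delay helpers (STB_LOW(), CLK_HIGH(), DIN_WRITE(), delay_us(), and so on) are placeholders for your MCU's I/O layer, and the LSB-first bit order is an assumption you should confirm against the datasheet's timing diagrams.

#include <stdint.h>

/* Placeholder I/O helpers - map these onto your MCU's GPIO and delay routines. */
extern void STB_LOW(void);
extern void STB_HIGH(void);
extern void CLK_LOW(void);
extern void CLK_HIGH(void);
extern void DIN_WRITE(int level);
extern void delay_us(unsigned int us);

static void pt_write_byte(uint8_t b)
{
    for (uint8_t i = 0; i < 8; i++) {
        CLK_LOW();
        DIN_WRITE(b & 0x01);   /* present the next bit while CLK is low   */
        delay_us(1);           /* >= 400 ns clock-low pulse width         */
        CLK_HIGH();            /* device latches DIN on this rising edge  */
        delay_us(1);           /* >= 400 ns clock-high pulse width        */
        b >>= 1;
    }
}

static void pt_send_command(uint8_t cmd)
{
    CLK_HIGH();                /* clock idles high (SPI mode 3)               */
    STB_LOW();
    delay_us(1);               /* strobe-to-first-clock setup                 */
    pt_write_byte(cmd);
    delay_us(1);               /* last clock to strobe rising edge (>= 1 us)  */
    STB_HIGH();
    delay_us(1);               /* minimum strobe pulse width (>= 1 us)        */
}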
I finally found it here: the bigger datasheet, fourteen (14) pages, not three.
So the timing constraints for this signal are as follows:
PW CLK (Clock Pulse Width) ≥ 400 ns
PW STB (Strobe Pulse Width) ≥ 1 μs
t setup (Data Setup Time) ≥ 100 ns
t hold (Data Hold Time) ≥ 100 ns
t CLK-STB (Clock - Strobe Time) ≥ 1 μs
t TZH (Rise Time) ≤ 1 μs
t TZL < 1 μs
t THZ (Fall Time) ≤ 10 μs
t TLZ < 10 μs
fosc = Oscillation Frequency
As you can see, the minimum clock pulse width is 400 ns, which means the maximum clock frequency is 1 / (2 × 400 × 10^-9 s) = 1,250,000 Hz (1.25 MHz).
You can do the other calculations the same way. But yes, everything needed is better covered by the timing diagrams in the document above; I place them here just in case the link dies one day.
I am using the libraries provided by the C18 compiler to open and set the duty cycle for PWM. I noticed that the maximum PWM frequency I can get with 100% duty cycle is about 13.5 kHz. The lower the duty cycle, the higher the PWM frequency. How can I achieve a higher PWM frequency while keeping 100% duty cycle? Is it possible to get more than 13.5 kHz? I just can't figure out what I'm missing; maybe someone can help here. I am using a PIC18F87J1.
Here is the C18 C Compiler Libraries
Here is PIC18F87J1 datasheet
Here is a snippet of the code I am using regarding PWM.
TRISCbits.TRISC1 = 0;
OpenTimer2(TIMER_INT_OFF & T2_PS_1_1 & T2_POST_1_1);
OpenPWM2(0x03ff);
SetDCPWM2(255);
Your help is appreciated, thanks!
For a start, you have the parameters to the two functions reversed: OpenPWM2() takes a char value less than 256, and SetDCPWM2() takes a 10-bit number.
That said, you have chosen the largest value (255), which gives the lowest frequency. As the library documentation explains, OpenPWM2() takes the period as its parameter. A higher frequency is equivalent to a shorter period, and vice versa.
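As a sketch only, the corrected calls would look something like the lines below; the period value is just an example, and the actual PWM frequency you get depends on your oscillator and the Timer2 prescaler.

TRISCbits.TRISC1 = 0;                                  /* CCP2 pin as output        */
OpenTimer2(TIMER_INT_OFF & T2_PS_1_1 & T2_POST_1_1);   /* Timer2 on, 1:1 prescale   */
OpenPWM2(0x3F);     /* 8-bit period value (smaller value -> higher PWM frequency)   */
SetDCPWM2(255);     /* 10-bit duty value; full scale here is 4*(PR2+1) = 256, ~100% */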
Lastly, why would you want a duty cycle of 100%? That is the same as having the pin always high. In that case, frequency doesn't matter at all. Just turn the pin on and don't use PWM at all.
You haven't said what you are driving with this PWM, but generally speaking, having the frequency too high can cause problems: it can produce radio interference, cause overheating, and so on.
Your question indicates you misunderstand the purpose of PWM and what the term refers to, so here is a tl;dr.
PWM simulates a voltage between 0 and Vcc by rapidly switching a pin high and low. The simulated voltage is proportional to time_high / (time_high + time_low). The percentage of time the pin is at Vcc is called the duty cycle. (So 100% duty is always on, giving Vcc volts; 0% duty is always off, giving 0 V.)
The rate at which this on/off cycle repeats is called the PWM frequency. If the frequency is too low (the period too long), the load will see the pin voltage fluctuating. The goal is to run the PWM fast enough to smooth out the voltage the load sees, but not so fast as to cause other problems. The available frequencies are appropriate for most applications. Note also that setting the frequency high (the period small) reduces the resolution of the duty cycle, as explained in the datasheet. The reason is basically that the duty cycle must ultimately be converted into clock ticks on versus clock ticks off, and the higher the frequency, the fewer clock ticks there are to divide up in each cycle.
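To put numbers on that trade-off, here is a small stand-alone sketch using the standard PIC18 CCP relationships Fpwm = Fosc / (4 * (PR2 + 1) * prescale) and duty steps = 4 * (PR2 + 1); the 8 MHz oscillator and the PR2 values are arbitrary examples, not taken from your project.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double fosc = 8e6;      /* example oscillator frequency */
    int prescale = 1;       /* Timer2 prescaler             */
    int pr2_values[] = { 255, 63, 15 };

    for (int i = 0; i < 3; i++) {
        int pr2 = pr2_values[i];
        double fpwm = fosc / (4.0 * (pr2 + 1) * prescale);
        int steps = 4 * (pr2 + 1);
        printf("PR2 = %3d -> Fpwm = %6.0f Hz, %4d duty steps (%.0f bits)\n",
               pr2, fpwm, steps, log2((double)steps));
    }
    return 0;
}

Raising the frequency (smaller PR2) leaves fewer duty steps per cycle, which is exactly the resolution loss described above.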
I am working on UART with a PIC32MX5xx. All I need is to send a message from the PIC to a terminal (PuTTY), but it is not working; I get invalid characters. The baud rate is set to 19200. How do I calculate the clock frequency?
Is it true that the clock frequency of the UART is 16 times the baud rate? If I do the math, the clock frequency should be 307200 Hz, but this doesn't seem right.
Can someone help me understand how baud rate and clock frequency relate to each other? Also, how do I calculate both?
Thanks!
The baud rate generator has a free-running 16-bit timer. To get the desired baud rate, you must configure its period register UxBRG and prescaler BRGH.
When BRGH is set to 0 (the default), the timer is incremented every 16th cycle of the peripheral bus clock.
When BRGH is 1, the timer increments every 4th cycle.
It is usually better to set BRGH to 1 to get a smaller baud rate error, as long as the UxBRG value doesn't grow too large to fit into the 16-bit register (at slower baud rates).
The value in the period register UxBRG determines the duration of one pulse on the data line, in baud rate generator timer increments.
See the formulas in section 21.3 - UART Baud Rate Generator in the reference manual to learn how to calculate a proper value for UxBRG.
To compute the period of the 16-bit baud rate generator timer to achieve the desired baud rate:
When BRGH = 0:
UxBRG = FPB / (16 * BAUDRATE) - 1
When BRGH = 1:
UxBRG = FPB / (4 * BAUDRATE) - 1
Where FPB is the peripheral bus clock frequency.
For example, if FPB = 20 MHz, BRGH = 1, and the desired baud rate is 19200, you would calculate:
UxBRG = 20000000 / (4 * 19200) - 1
= 259
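If it helps, here is a tiny stand-alone sketch of that calculation; the function name uxbrg_value() is just something made up for the example.

#include <stdint.h>

/* Compute UxBRG from the peripheral bus clock, the desired baud rate and
 * the BRGH setting, rounding to the nearest integer.                     */
static uint16_t uxbrg_value(uint32_t fpb, uint32_t baud, int brgh)
{
    uint32_t divisor = brgh ? (4UL * baud) : (16UL * baud);
    return (uint16_t)((fpb + divisor / 2U) / divisor - 1U);
}

/* Example from above: uxbrg_value(20000000UL, 19200UL, 1) == 259 */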
If you are using some of the latest development libraries and code examples from Microchip, you may find that there are already UART methods in the libraries that will set up the PIC for your needs. If you dig deep into the new compiler directory structure you will find help files in the Microsoft format (no fear: if you are on a Unix-type computer, there are utilities that read these types of files). There you can drill down into the help to find the documentation of various ready-made methods you can call from your program to configure the PIC's hardware. Buyer beware: the code is not that mature. For instance, I was working on a PIC project that needed to sample two analog signals. The PIC's hardware A/D converter was very complex, but it was clear the ready-made code only covered about 10% of that PIC's abilities.
-good luck