I am working on an FPGA project in VHDL.
I need to copy a 16-bit shift register into a FIFO each time it fills up (e.g. after 16 new data bits have been fed into the shift register, I want to take the newly formed 16-bit word and send it to a FIFO).
My question is: do I need to set up the data at the input of the FIFO one clock before asserting the clock line on the FIFO? This is actually a generic VHDL question, and not specific to FIFOs.
Basically, is it possible to set the data and toggle the clock in the same operation, or do I need a basic state machine to set up the data on one clock edge and toggle the FIFO clock on the next?
For instance:
fifo_d_in( 15 downto 0 ) <= shift_register;
fifo_clk <= '1';
or
if( state = one ) then
    fifo_d_in( 15 downto 0 ) <= shift_register;
    state <= two;
elsif( state = two ) then
    fifo_clk <= '1';
end if;
My gut tells me that I have to set up the data first, to satisfy the setup and hold requirements of the input registers.
Thanks!
The data must be present for the setup time before the clock edge, so asserting the clock at the same time as any possible data changes may result in unstable behaviour.
One way to configure your shift register is to have an output which asserts after the last bit of data has been clocked in. For an 8-bit shift register, the signal would assert after the 8th clock. An easy way to accomplish this is with a 3-bit counter: when all its bits are 1, the output is 1. This signal is then connected to the CLKEN of your FIFO, so that on the 9th clock edge the data at the output of your shift register is clocked into the FIFO. It would also be possible to clock the next serial bit of data into your shift register on that same 9th clock.
      shift reg              FIFO
    -------------         ---------
 ---| DIN  DOUT |---------| DIN   |
    |      FULL |---------| CLKEN |
 +--|>          |     +---|>      |
 |  -------------     |   ---------
 |                    |
CLK-------------------+
In the above diagram, FULL would be asserted the instant after the last bit of data was clocked in to fill the shift register, and deasserted on the next cycle. FULL can be combinatorial logic.
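To make this concrete, here is a minimal VHDL sketch of the scheme for the 16-bit case in the question. All names (shift_to_fifo, serial_in, fifo_din, fifo_wr_en) are illustrative, fifo_wr_en stands in for whatever CLKEN/write-enable input your FIFO provides, and the full flag is registered here, which is one simple way to realise the timing described above. Everything runs on the single shared clock, so the FIFO input data always has a full clock period of setup time before it is sampled.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative sketch only: a 16-bit shift register whose registered
-- "full" flag drives the FIFO's write/clock enable for one cycle.
entity shift_to_fifo is
    port (
        clk        : in  std_logic;
        rst        : in  std_logic;
        serial_in  : in  std_logic;
        fifo_din   : out std_logic_vector(15 downto 0);
        fifo_wr_en : out std_logic  -- connect to the FIFO's CLKEN / write enable
    );
end entity;

architecture rtl of shift_to_fifo is
    signal shift_reg : std_logic_vector(15 downto 0) := (others => '0');
    signal bit_cnt   : unsigned(3 downto 0) := (others => '0');
    signal full      : std_logic := '0';
begin
    process (clk, rst)
    begin
        if rst = '1' then
            shift_reg <= (others => '0');
            bit_cnt   <= (others => '0');
            full      <= '0';
        elsif rising_edge(clk) then
            -- one serial bit enters per clock
            shift_reg <= shift_reg(14 downto 0) & serial_in;
            bit_cnt   <= bit_cnt + 1;        -- 4-bit counter wraps 15 -> 0
            if bit_cnt = 15 then
                full <= '1';  -- the 16th bit is entering on this edge
            else
                full <= '0';
            end if;
        end if;
    end process;

    -- The FIFO samples fifo_din on the next rising edge, while full is
    -- high, i.e. the data settled a whole clock period earlier.
    fifo_din   <= shift_reg;
    fifo_wr_en <= full;
end architecture;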
I know that the maximum speed of a USB HID device is 64 kB/s, but on the oscilloscope I see transactions every 1 ms which contain only ONE byte. My HID report descriptor is listed below. What must I change to achieve 64 kB/s? Currently my bInterval = 0x01 (1 ms polling for the interrupt endpoint), but the actual speed is 65 bytes/s, because it adds the report-ID byte to my 64-byte data. I don't think USB should divide a single 64+1-byte report into 65 single-byte packets. For the experiment I use reportID = 1 (from STM32 to PC). From the PC side I use hidapi.dll to interact.
__ALIGN_BEGIN static uint8_t CUSTOM_HID_ReportDesc_FS[USBD_CUSTOM_HID_REPORT_DESC_SIZE] __ALIGN_END =
{
/* USER CODE BEGIN 0 */
USAGE_PAGE(USAGE_PAGE_UNDEFINED)
USAGE(USAGE_UNDEFINED)
COLLECTION(APPLICATION)
REPORT_ID(1)
USAGE(1)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
INPUT(DATA | VARIABLE | ABSOLUTE)
REPORT_ID(2)
USAGE(2)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
OUTPUT(DATA | VARIABLE | ABSOLUTE)
REPORT_ID(3)
USAGE(3)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
OUTPUT(DATA | VARIABLE | ABSOLUTE)
REPORT_ID(4)
USAGE(4)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
OUTPUT(DATA | VARIABLE | ABSOLUTE)
/* USER CODE END 0 */
0xC0 /* END_COLLECTION */
};
HID uses interrupt IN/OUT endpoints to convey reports. In USB, interrupt transfers are polled by the host every 1 ms. Each time the endpoint is polled, it may yield one report of up to 64 bytes (for full speed). That's probably where your 64 kB/s figure comes from; more precisely, the limit is 1000 reports per second. Also note these limits are different for high-speed and SuperSpeed devices.
The report descriptor is one thing; what you actually send as interrupt-IN is something else. They should match, but this is not enforced by anything. You should probably look into the code that builds the interrupt-IN transfer payload.
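For example, with ST's CubeMX Custom HID middleware (which the CUSTOM_HID_ReportDesc_FS above appears to come from), the whole report, ID byte included, is handed to the stack in a single call. The sketch below is an assumption-laden illustration: send_full_report() and report_buf are made-up names, and USBD_CUSTOM_HID_SendReport() plus the CUSTOM_HID_EPIN_SIZE setting should be checked against your middleware version.

#include <stdint.h>
#include <string.h>
#include "usb_device.h"      /* CubeMX-generated; provides hUsbDeviceFS */
#include "usbd_customhid.h"  /* provides USBD_CUSTOM_HID_SendReport()   */

extern USBD_HandleTypeDef hUsbDeviceFS;

static uint8_t report_buf[65];   /* 1 report-ID byte + 64 data bytes */

/* Queue one complete 65-byte report (ID 1) in a single call.  Also check
 * that CUSTOM_HID_EPIN_SIZE is large enough: a tiny endpoint size will
 * fragment the report into the one-byte transactions seen on the scope. */
void send_full_report(const uint8_t *data64)
{
    report_buf[0] = 1;                   /* report ID, as in the descriptor */
    memcpy(&report_buf[1], data64, 64);  /* the 64-byte payload             */
    USBD_CUSTOM_HID_SendReport(&hUsbDeviceFS, report_buf, sizeof report_buf);
}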
Side note: if all you are interested in is sending arbitrary chunks of data, HID is probably not the relevant class; bulk endpoints look more appropriate (and you will not be limited by the interrupt endpoint polling rate).
My VHDL code doesn't behave as I expected.
What I want: I have a 32-bit input data stream, and want a decimated 32-bit data output in some specific order.
Let's say each 32-bit word is split into two 16-bit halves:
first case: every second 16-bit half of the 32-bit data is present on the output;
second case: every fourth 16-bit half of the 32-bit data is present on the output;
third case: every eighth 16-bit half of the 32-bit data is present on the output;
and so on.
Like in the picture: [pic1]
Here is the implementation of the first case:
process (CLK_IN, RST_IN)
begin
    if (RST_IN = '1') then
        rx_data_half_a <= (others => '0');
    elsif rising_edge(CLK_IN) then
        rx_data_half_a <= DATA_IN(15 downto 0);
    end if;
end process;

process (CLK_IN, RST_IN)
begin
    if (RST_IN = '1') then
        rx_data_half_a0 <= (others => '0');
        rx_data_half_a1 <= (others => '0');
        rx_data_half_a2 <= (others => '0');  -- was missing from the reset
    elsif rising_edge(CLK_IN) then
        rx_data_half_a0 <= rx_data_half_a;
        rx_data_half_a1 <= rx_data_half_a0;
        rx_data_half_a2 <= rx_data_half_a1;
        DATA_OUT <= rx_data_half_a0 & rx_data_half_a;
    end if;
end process;
And the testbench waveform looks like this: [sim]
Instead of 00002222 44446666 ...
I get: 00002222 22224444 44446666 ...
I already do this job using a memory (just counting specific addresses), but I don't want to use one. I think there's a much easier way to implement this.
Is it possible to do this with registers, without reducing the frequency?
Can you give me some advice?
You need a minimal state machine (e.g. a counter) to track the input and update the data registers at the appropriate time. Your logic is running every clock cycle and has no idea that it needs to "skip" any of the incoming samples.
Since you are decimating, it is not possible to do this in registers or in memory without "reducing the frequency", in the sense that you will have half (or 1/4, or whatever your decimation ratio is set to) as many output elements as input elements. If you use a memory you could burst at the full rate for a while, but you will still have to pause periodically to re-fill the buffer.
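To make that concrete, here is a minimal VHDL sketch of the first case (keep every second 16-bit half), using a 1-bit phase counter as the state machine. It reuses the question's names but drops the extra input register for brevity; phase and OUT_VALID (both std_logic) are assumed additional signals, since the question doesn't show how DATA_OUT is consumed.

process (CLK_IN, RST_IN)
begin
    if RST_IN = '1' then
        rx_data_half_a0 <= (others => '0');
        phase           <= '0';
        DATA_OUT        <= (others => '0');
        OUT_VALID       <= '0';
    elsif rising_edge(CLK_IN) then
        rx_data_half_a0 <= DATA_IN(15 downto 0);  -- remember the previous low half
        phase           <= not phase;             -- minimal state machine
        if phase = '1' then
            -- every second cycle: emit the two retained halves as one word
            DATA_OUT  <= rx_data_half_a0 & DATA_IN(15 downto 0);
            OUT_VALID <= '1';
        else
            OUT_VALID <= '0';                     -- skipped sample
        end if;
    end if;
end process;

With the testbench's input this yields 00002222, then 44446666, with OUT_VALID high on every second cycle, which is exactly the rate reduction described above.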
I'm currently working with the 9S12 from Freescale, and I really need some help understanding how to write an ISR correctly. In particular, I'm reporting the text of an exercise in which I am asked to measure the phase difference between two square waveforms at the input of the microcontroller.
The bus clock is 16 MHz and I have to use the timer module of the system, which provides a free-running 16-bit counter (TCNT). The counter has to run at 500 kHz, which is achieved by setting a prescaler of 5 (a divide ratio of 2^5 = 32, so 16 MHz / 32 = 500 kHz) starting from the bus clock.
The two signals have the same frequency, which is given (25 Hz), but I am required to measure it anyway.
I have to use an INTERRUPT procedure, using the correct registers (it is actually not necessary to use the exact same names as in the manual, I can use any names I want, but I have to comment every line of the code) and variables.
My way of approaching the problem is very theoretical, but I need the C code.
In order to solve the problem I have to use INPUT CAPTURE MODE, to measure the difference, in units counted by TCNT (ticks), between the positive edge of signal 1 and the positive edge of signal 2. My doubts are in particular about the variables I have to use, their types (local, global, unsigned, long?), how to update the values correctly in the ISR, and whether I should take the overflows of the counter, and the respective interrupts generated by them, into account.
I'm stuck on this problem; I hope someone can help me with some code examples, in particular for the variables I have to use and how to write the actual ISR.
Thank you to everyone!
Treat the following as pseudo-code; it is for you to wade through the datasheet to determine how to configure and access the timer-capture unit. I am not familiar with the specific part.
In general, given that the timer-counter is 16-bit:
#include <stdint.h>

/* Shared between the ISR and the foreground code */
static volatile uint16_t phase_count = 0 ;
static volatile uint16_t mean_period_count = 0 ;

int phasePercent( void )
{
    if( mean_period_count == 0 )   /* no complete measurement yet */
        return 0 ;
    /* 32-bit intermediate: phase_count * 100 would overflow a 16-bit int */
    return (int)( ( (uint32_t)phase_count * 100u ) / mean_period_count ) ;
}

int frequencyHz( void )
{
    if( mean_period_count == 0 )
        return 0 ;
    /* the timer ticks at 500 kHz, so frequency = tick rate / period */
    return (int)( 500000ul / mean_period_count ) ;
}

void TimerCounterISR( void )
{
    static uint16_t count1 = 0 ;
    static uint16_t count2 = 0 ;
    static uint16_t period1 = 0 ;
    static uint16_t period2 = 0 ;

    /* getCaptureCount(), isSignal1Edge() and isSignal2Edge() are
       placeholders for reading the capture register and edge flags */
    uint16_t now = getCaptureCount() ;

    if( isSignal1Edge() )
    {
        period1 = now - count1 ;   /* modulo-2^16: wrap-safe */
        count1 = now ;
    }
    else if( isSignal2Edge() )
    {
        period2 = now - count2 ;
        count2 = now ;
        /* ticks from the last signal-1 edge to this signal-2 edge */
        phase_count = count2 - count1 ;
    }

    mean_period_count = (uint16_t)( ( (uint32_t)period1 + period2 ) >> 1 ) ;
}
The method assumes an up-counter and requires that the counter runs over the full 16-bit range, 0 to 0xFFFF - otherwise the modulo-2^16 arithmetic will not work, and the solution will be much more complex. For a down-counter, swap the operands in the period calculations.
Note that the return values from phasePercent() and frequencyHz() will not be valid until after a complete rising-edge-to-rising-edge cycle on both signals. You could add an edge count and only treat the results as valid after the rising edge has been seen twice on each signal, if that is an issue.
This is how you sort out the technical parts for the S12:
Set up the ECT timer with an appropriate prescaler. You seem to have this part covered? At 500 kHz each timer tick is 2 µs, and the 65536 timer ticks of the 16-bit counter must cover the worst case: one full TCNT period will be 131 ms with your prescaler. If ~25 Hz is the worst-case input, a period is roughly 40 ms, so it should be ok in that case.
Pick a timer channel TC that corresponds to the pin used. This channel needs to be configured as input capture, triggering on both rising and falling edges. Store the current value of TCNT as the initial value of a uint16_t time-counter variable.
Register the ISR in the vector table etc., the usual stuff for writing an ISR (a register-level sketch follows below).
Upon interrupt, read the value of the TCn channel register. The difference between the counter variable and the stored value gives the period time in timer cycles; multiply this by 1/500 kHz (2 µs per tick) to get the period time. In the ISR always read TCn and not TCNT, as the former is not affected by interrupt latency and code-execution overhead. Then update your counter variable with the value from TCn.
With a design like this, ensure there is some external means of filtering out spikes and EMI from the input: an RC filter or similar.
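To make the register-level part concrete, here is a hedged sketch of the setup and flag handling, assuming channels 0 and 1 are the capture inputs and using the common S12 ECT register names (TSCR1, TSCR2, TIOS, TCTL4, TIE, TFLG1, TC0); verify every name and bit position against your derivative's header and reference manual. It captures rising edges only, matching the pseudo-code above; use EDGxB:EDGxA = 11 instead if you want both edges as in step 2.

/* Hedged sketch, not drop-in code: standard S12 ECT register names
 * are assumed (check your derivative header / reference manual). */
#include <stdint.h>
#include "derivative.h"   /* toolchain-provided MCU register header (assumed) */

void timerInit( void )
{
    TIOS  &= (uint8_t)~0x03;  /* channels 0 and 1 act as input capture       */
    TCTL4  = (TCTL4 & (uint8_t)~0x0F) | 0x05;
                              /* EDG1B:A = 01, EDG0B:A = 01 -> rising edges  */
    TSCR2  = 0x05;            /* PR2:0 = 5 -> divide by 2^5 = 32:
                                 16 MHz / 32 = 500 kHz tick rate             */
    TFLG1  = 0x03;            /* clear stale C1F/C0F (write-one-to-clear)    */
    TIE   |= 0x03;            /* interrupt on capture, channels 0 and 1      */
    TSCR1 |= 0x80;            /* TEN = 1: start the free-running counter     */
}

/* Hook this to the timer channel 0 vector with your toolchain's
 * mechanism (vector table entry or compiler interrupt keyword). */
void timerChannel0Isr( void )
{
    uint16_t edge_time = TC0; /* read the capture register, NOT TCNT, so
                                 interrupt latency does not skew the value   */
    TFLG1 = 0x01;             /* acknowledge C0F                             */
    /* ...update count1/period1 exactly as in the pseudo-code above...       */
    (void)edge_time;
}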
I have an OMAP-L138 Experimenter Kit and I want to communicate with a peripheral device which sits on SPI1 chip select 1 (there is also a flash memory on SPI1 chip select 0).
I'm confused about which registers I should use to select chip 1.
According to the OMAP-L138 Technical Reference Manual, I should:
set 4-pin mode
spi->SPIPC0 = SOMI | SIMO | CLK | SCS0; //4-pin mode with chip select
set bit 1 of SPIPC0.SCS0FUN to show that SPI_CS1 is an SPI functional pin
SETBIT(spi->SPIPC0, 0x00000002);
set bit 17 of SPIDAT1.CSNR (it means that the SPI_CS1 pin is driven high)
spi->SPIDAT1 = 0;
SETBIT(spi->SPIDAT1, 0x20000); //set 17th bit (corresponds to SPI_CS1)
set bit 1 of SPIDEF.CSDEF (it means that the SPI_CS1 pin is driven high)
spi->SPIDEF = 0;
SETBIT(spi->SPIDEF, 0x00000002); //set 1st bit (corresponds to SPI_CS1) in CSDEF field
finally, before reading data from the SPI1_CS1 device, I should set SPIDAT1.CSHOLD to hold the chip-select signal active
SETBIT(spi->SPIDAT1,0x10000000); //set 28th bit which represents CSHOLD
Is that correct, or am I missing something?
Maybe I also need to do something with PINMUX5 (Pin Multiplexing Control 5 Register)?
Thank you!
It seems that I have figured it out.
Setting the 0th bit in the PINMUX5 register selects the function SPI1_SCS[1].
Setting the 4th bit in the PINMUX5 register selects the function SPI1_SCS[0].
EVMOMAPL138_pinmuxConfig(5, 0x00FFFFF0, 0x00111101); //enable chip select 1
EVMOMAPL138_pinmuxConfig(5, 0x00FFFFF0, 0x00111110); //enable chip select 0
I've been pulling my hair out lately trying to get an ATmega162 on my STK200 to talk to my computer over RS232. I checked and made sure that the STK200 contains a MAX202CPE chip.
I've configured the chip to use its internal 8 MHz clock, divided by 8.
I've tried to copy the code out of the data sheet (and made changes where the compiler complained), but to no avail.
My code is below, could someone please help me fix the problems that I'm having?
I've confirmed that my serial port works on other devices and is not faulty.
Thanks!
#include <avr/io.h>
#include <avr/iom162.h>

#define BAUDRATE 4800

void USART_Init(unsigned int baud)
{
    UBRR0H = (unsigned char)(baud >> 8);
    UBRR0L = (unsigned char)baud;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);
    UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}

void USART_Transmit(unsigned char data)
{
    while(!(UCSR0A & (1 << UDRE0)));
    UDR0 = data;
}

unsigned char USART_Receive()
{
    while(!(UCSR0A & (1 << RXC0)));
    return UDR0;
}

int main()
{
    USART_Init(BAUDRATE);
    unsigned char data;

    // all are 1, all as output
    DDRB = 0xFF;

    while(1)
    {
        data = USART_Receive();
        PORTB = data;
        USART_Transmit(data);
    }
}
I have commented on Greg's answer, but would like to add one more thing. For this sort of problem, the gold-standard method of debugging is to first understand asynchronous serial communications, then get an oscilloscope and see what's happening on the line. If characters are being exchanged and it's just a baud-rate problem, this is particularly helpful: you can calculate the baud rate you are actually seeing and then adjust the divisor accordingly.
Here is a super quick primer, no doubt you can find something much more comprehensive on Wikipedia or elsewhere.
Let's assume 8 bits, no parity, 1 stop bit (the most common setup). Then if the character being transmitted is, say, 0x3f (= ASCII '?'), the line looks like this:
...--+   +---+---+---+---+---+---+       +---+--...
     | S | 1   1   1   1   1   1 | 0   0 | E
     +---+                       +---+---+
The high (1) level is +5V at the chip and -12V after conversion to RS232 levels.
The low (0) level is 0V at the chip and +12V after conversion to RS232 levels.
S is the start bit.
Then we have 8 data bits, least significant first, so here 00111111 = 0x3f = '?'.
E is the stop (e for end) bit.
Time is advancing from left to right, just like an oscilloscope display. If the baud rate is 4800, each bit spans 1/4800 of a second, i.e. about 0.21 milliseconds.
The receiver works by sampling the line and looking for a falling edge (a quiescent line is simply logical '1' all the time). The receiver knows the baudrate, and the number of start bits (1), so it measures one half bit time from the falling edge to find the middle of the start bit, then samples the line 8 bit times in succession after that to collect the data bits. The receiver then waits one more bit time (until half way through the stop bit) and starts looking for another start bit (i.e. falling edge). Meanwhile the character read is made available to the rest of the system. The transmitter guarantees that the next falling edge won't begin until the stop bit is complete. The transmitter can be programmed to always wait longer (with additional stop bits) but that is a legacy issue, extra stop bits were only required with very slow hardware and/or software setups.
I don't have reference material handy, but the baud-rate register UBRR usually contains a divisor value rather than the desired baud rate itself. A quick Google search indicates that the correct divisor value for 4800 baud may be 239 (a figure that assumes a particular clock frequency, so check it against yours). So try:
divisor = 239;
UBRR0H = (unsigned char)(divisor >> 8);
UBRR0L = (unsigned char)divisor;
If this doesn't work, check with the reference docs for your particular chip for the correct divisor calculation formula.
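For the AVR specifically, the datasheet formula for normal asynchronous mode is UBRR = f_OSC / (16 * baud) - 1. Here is a minimal sketch, assuming the 1 MHz effective clock from the question (8 MHz internal RC divided by 8); F_CPU_HZ, BAUD_RATE and USART_InitDivisor() are made-up names:

#include <avr/io.h>

/* UBRR = F_CPU / (16 * baud) - 1 (normal asynchronous mode).
 * The clock is normally set by the build system; 1 MHz assumed here. */
#define F_CPU_HZ   1000000UL
#define BAUD_RATE  4800UL
#define UBRR_VALUE ((F_CPU_HZ / (16UL * BAUD_RATE)) - 1UL)  /* = 12 */

void USART_InitDivisor(void)
{
    UBRR0H = (unsigned char)(UBRR_VALUE >> 8);
    UBRR0L = (unsigned char)UBRR_VALUE;
}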
For debugging UART communication, there are two useful things to do:
1) Do a loop-back at the connector and make sure you can read back what you write. If you send a character and get it back exactly, you know that the hardware is wired correctly, and that at least the basic set of UART register configuration is correct.
2) Repeatedly send the character 0x55 ("U") - the binary bit pattern 01010101 will allow you to quickly see the bit width on the oscilloscope, which will let you verify that the speed setting is correct.
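For tip 2, a minimal test loop, reusing USART_Transmit() from the question:

/* Stream 0x55 ('U' = 01010101) forever: the line toggles every bit
 * period, so the actual baud rate is easy to read off the scope. */
for (;;)
{
    USART_Transmit(0x55);
}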
After reading the data sheet a little more thoroughly, I found I was setting the baud rate incorrectly. The ATmega162 data sheet has a chart of clock frequencies plotted against baud rates and the corresponding error.
For a 4800 baud rate and a 1 MHz clock frequency, the error is 0.2%, which was acceptable to me. The trick was passing 12 to the USART_Init() function, instead of 4800.
Hope this helps someone else out!