I'm trying to start a DMA transfer on my STM32F412, and I've got everything set up to the point where I'm setting the control registers on the DMA channels/streams for TX and RX. I am able to set the enable bit (bit 0) on the TX stream, but not on the RX stream.
The datasheet gives three conditions under which the bit is cleared by hardware:
1.) on a DMA end of transfer (stream ready to be configured)
2.) if a transfer error occurs on the AHB master buses
3.) when the FIFO threshold on the memory AHB port is not compatible with the size of the burst
I don't think it could be the first or the third, because the DMA transfer hasn't even started yet, and there isn't a burst configured; it's just a single transfer. I'm not quite certain what the second means, but there aren't any transfer errors marked in the error registers.
Any avenues to look into would be appreciated
Edit: Ugh, I was looking at the wrong registers when trying to find DMA_LISR and DMA_HISR. There was a transfer error on my RX stream.
From the description of DMA_SxCR_EN bit in the reference manual:
Note: Before setting EN bit to '1' to start a new transfer, the event flags corresponding to the stream in DMA_LISR or DMA_HISR register must be cleared.
In my experience, these event flags include not only the error flags, but also the regular event flags like Transfer Complete or Half Transfer. In some cases, I also ended up clearing the FIFO error flag, although I can't remember the reason behind it.
This problem manifests itself as "DMA works only once". In your case, it doesn't work even once, so there can be other problems. Still, I think it's worth trying to clear all the status flags before enabling the stream.
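For illustration, here is a minimal C sketch using the CMSIS device header, assuming the RX stream happens to be stream 0 on DMA1 (adjust the controller/stream to your configuration):

#include "stm32f4xx.h"               /* CMSIS device header */

static void start_rx_stream(void)
{
    /* Streams 0-3 report in LISR and are cleared via LIFCR;
     * streams 4-7 use HISR/HIFCR. Writing 1 clears the flag. */
    DMA1->LIFCR = DMA_LIFCR_CTCIF0   /* transfer complete  */
                | DMA_LIFCR_CHTIF0   /* half transfer      */
                | DMA_LIFCR_CTEIF0   /* transfer error     */
                | DMA_LIFCR_CDMEIF0  /* direct mode error  */
                | DMA_LIFCR_CFEIF0;  /* FIFO error         */

    DMA1_Stream0->CR |= DMA_SxCR_EN; /* EN should now stick */
}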
I'm going through the process of coding up an xHC interface following the Intel xHCI guide. I'm at the point where I need to give the controller the memory buffer address for the command ring (using the CRCR_LO and CRCR_HI registers).
However, after I write these values, I am not able to read them back to verify. The documentation indicates that the value will be updated after the next doorbell, but that doesn't seem to happen either.
I do see the CRR bit go high (command ring running), so it's at least doing something. But I am also not getting command responses in my event ring (though I am getting port status change messages in the event ring).
Can someone clarify how this register works? Is there a way to verify the command ring buffer address?
I'm developing a USB device driver for a microcontroller (Atmel/Microchip SAMD21, but I think the question is a general one). I need multiple endpoints for control & data, and the USB hardware uses per-endpoint descriptors to (among other things) locate buffers for input and output data.
Since IN data is polled at the host's discretion it makes sense that each endpoint has its own IN buffer, so that any endpoint's data (if it has any to send) is immediately available when polled.
But as far as incoming data from SETUP & OUT transactions is concerned, it occurs to me that I can save memory by configuring all endpoints to use a shared buffer. It seems wasteful for each endpoint to have its own buffer when, given the nature of USB transactions, only one such transaction can occur at a time.
Obviously this approach requires that transaction interrupts are handled sufficiently quickly that the shared buffer is freed and prepared for a new transaction in time for whatever the next transaction might be - but this is already a requirement for the control endpoint, where some SETUP transactions are immediately followed by an OUT.
So, assuming the timing is feasible, is there any other reason why such an approach wouldn't work?
Probably not.
Normally, the USB module on a microcontroller handles OUT packets by keeping track of which packet buffers it has written data to, and it waits for your firmware to say it is done processing the buffer before accepting more data from the computer and overwriting the buffer. If an endpoint has no buffers available to receive more data, but the computer sends an OUT packet to the endpoint, the USB module typically responds to the computer with a NAK packet, which tells the computer it should retry later. In this situation, your firmware can take pretty much as long as it wants to handle the OUT packets.
By having multiple endpoints configured to use the same buffer, you mess up this system. When you receive an OUT packet on any of your endpoints, the USB module would (probably) not know that multiple endpoints use the same buffer, so it would not issue NAK packets on your other OUT endpoints. If it receives another OUT packet right away, it would write it to the same buffer, overwriting the previous packet. Therefore, whenever you receive a packet, your code would have to rush as fast as it can to do something like copying the data out of that buffer, disabling other OUT endpoints, or reassigning buffers.
Even if you can actually get this to work, it means that your scheme to save a little bit of memory turns the servicing of USB events into a real-time task (i.e. a task that requires responses from your code in a few microseconds). If you want to add another real-time task to your system later, it will be very difficult, because you always have to be ready to be interrupted by your USB-handling code.
The SAMD21 has tons of memory (32 KB) so you probably don't need to worry about optimizing this part of it.
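To make the flow control concrete, here is a rough C sketch of the normal one-buffer-per-endpoint pattern described above. The usb_ep_* helpers are hypothetical stand-ins rather than the actual SAMD21 API; the point is that the hardware NAKs further OUT packets on the endpoint until the firmware re-arms the buffer:

#include <stddef.h>
#include <stdint.h>

#define EP_BUF_SIZE 64

static uint8_t ep1_out_buf[EP_BUF_SIZE];    /* one buffer per OUT endpoint */

/* Hypothetical stack hooks; substitute your USB stack's equivalents. */
extern size_t usb_ep_out_length(uint8_t ep);
extern void   usb_ep_out_rearm(uint8_t ep, uint8_t *buf, size_t len);
extern void   process_data(const uint8_t *data, size_t len);

/* Called when EP1 OUT completes. Until we re-arm the buffer, the
 * module NAKs the host, so there is no hard deadline here. */
void ep1_out_complete(void)
{
    size_t len = usb_ep_out_length(1);
    process_data(ep1_out_buf, len);                 /* at our leisure */
    usb_ep_out_rearm(1, ep1_out_buf, EP_BUF_SIZE);  /* accept more data */
}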
I agree with David's response. You didn't mention the speed of the device you are creating. A low-speed device would need just a few 8-byte buffers; a full-speed device, a few 64-byte buffers; a high-speed device, maybe eight 64-byte buffers, depending on your use. Even for a super-speed device, you're still only talking a few 512-byte buffers.
I would create a ring buffer for each endpoint. This way you are not moving data around. You are simply using a pointer that points to an entry within a memory ring. A full-speed device with a control endpoint, an interrupt endpoint, and two bulk endpoints, each endpoint having sixteen 64-byte entries per ring, is still only a total of 4k RAM, 1/8th of the total RAM.
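A minimal sketch of one such ring (the sizes and names are just illustrative):

#include <stdbool.h>
#include <stdint.h>

#define ENTRY_SIZE  64   /* full-speed bulk/interrupt packet size */
#define NUM_ENTRIES 16   /* power of two, so wrap is a cheap mask  */

typedef struct {
    uint8_t  data[NUM_ENTRIES][ENTRY_SIZE];
    uint16_t len[NUM_ENTRIES];  /* actual byte count in each entry */
    volatile uint8_t head;      /* next entry the hardware fills   */
    volatile uint8_t tail;      /* next entry the firmware reads   */
} ep_ring_t;

static inline bool ring_empty(const ep_ring_t *r)
{
    return r->head == r->tail;
}

/* Point the endpoint descriptor at the next free entry; no copying. */
static inline uint8_t *ring_next_slot(ep_ring_t *r)
{
    return r->data[r->head & (NUM_ENTRIES - 1)];
}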
However, I am not familiar with the SAMD21, so please check the specification to be sure this will work.
Good evening!
I'm configuring NTP on an embedded Linux system connected to a u-blox GPS receiver. I'm using NTPD and GPSD.
I would like to know the technical difference between:
the PPS signal provided via the GPSD shared memory (SHM) driver (pseudo IP address 127.127.28.1);
the "standalone" PPS signal (pseudo IP address 127.127.22.0), which is still connected to the GPS in some way that I would like to understand.
It is critical for me to understand, because I really need high-precision synchronization and I want the right information from the receiver.
Searching all over the web, I've found only confusing answers to my question...
Thanks in advance!
FL
The SHM driver is not designed to provide a PPS signal by itself. So maybe your notion here is misguided.
A PPS signal is used for getting a (precise) notion of the frequency of the local clock (the one used for measuring external signals), as it provides a well-known timing distance between the "pulses" (1 s in this case). Essentially, PPS is a frequency source.
GPSD, on the other hand, communicates with some device (which could be built into your HW). It then provides the time data read from the GPS source to ntp via shared memory. This provisioning of data does not guarantee any timing relation (delay); e.g., it could occur earlier or later within the second due to load or scheduling.
From the perspective of ntp, you will have a true date/time label, but you might not know exactly when the related point in time occurred relative to your local clock (usually not precisely enough for common ntp use cases). This is where PPS kicks in.
Depending on how the GPS device is connected to your local machine (parallel, serial, internal bus), you will have some way of getting an interrupt on the pulse from the PPS signal (e.g., with a serial connection you will usually get a transition on the DCD pin).
The internal processing of the related interrupt will read the local clock, and the resulting timing information is then provided for further processing. This information is exactly what the PPS clock discipline is using and providing to ntp. What you need to configure here is the offset from the triggering of the pulse to reading the local clock. (The pulse is usually assumed to occur "on the second".)
So, in your configuration, it is likely that the "source" of the PPS signal is the same device GPSD is using for providing date/time data (your GPS device).
However, the actual signals used for date/time data and PPS are different: date/time will use a data telegram or some register content read from the GPS device, while PPS will be a level change on an input pin provided by this very device.
For details, start with the interfacing information for your GPS receiver, especially any timings stated there. Then look at ntp and figure out which driver(s) will allow exploiting such input data for the best time quality.
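For illustration, a typical ntp.conf pairing the two drivers looks roughly like this; the unit numbers and the time1 offset are placeholders you must adapt and calibrate for your own setup:

# Coarse date/time from GPSD via shared memory (driver type 28)
server 127.127.28.0 minpoll 4 maxpoll 4
fudge  127.127.28.0 time1 0.200 refid GPS   # serial-telegram delay, calibrate!

# Kernel PPS discipline (driver type 22), the preferred source
server 127.127.22.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.22.0 refid PPS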
I have a relay contact closure event that needs to be timestamped accurately (within 1 msec) using a GPS and its PPS output. I am not sure how to feed the relay contact output to a microcontroller, how to synchronize the microcontroller clock to the GPS, and how to get UTC after all that.
Can you please help me?
Thanks
If your microcontroller has at least two interrupts based on hardware pins, you could connect the relay to one of the interrupt-generating pins, and the PPS to the other interrupt-generating pin.
You will need to connect the NMEA output (or other proprietary protocol of your GPS) to the corresponding port on your microcontroller. Common buses are UART and SPI.
Then, every time you get a PPS interrupt, you set a global flag that the main loop uses to reset a counter. This counter will tell you how far from the PPS edge the relay switched (if it happens within that second). If you know the base frequency of the counter, you can convert its value into fractions of a second. Note that if both edges of the relay state change have to be detected, you will need an interrupt source capable of interrupting on both edges (or use two interrupts).
Then, when the relay interrupt fires, you can read the counter value in the interrupt handler and save it to storage, send it to the host, etc. (It would be best to save the value in RAM, set a "value present" flag, and leave the sending/storing to the main loop, which then clears the flag.)
Finally, when you receive a complete NMEA message (which could be parsed in your main loop by a state machine, for instance), you can send this information to the host or storage along with the counter value that you saved to time your relay state change. Note that the NMEA message will be generated and decoded with a certain delay relative to the PPS, so you will need to compensate for that.
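A rough C sketch of this scheme; timer_count()/timer_reset() are hypothetical stand-ins for your MCU's timer API:

#include <stdbool.h>
#include <stdint.h>

#define TIMER_HZ 1000000u              /* e.g. a 1 MHz free-running counter */

extern uint32_t timer_count(void);     /* hypothetical timer hooks */
extern void     timer_reset(void);

static volatile uint32_t relay_ticks;  /* counter value at the relay edge   */
static volatile bool     relay_flag;   /* "value present" for the main loop */

void pps_isr(void)                     /* fires once per second, on the second */
{
    timer_reset();                     /* re-zero the sub-second counter */
}

void relay_isr(void)                   /* fires on the relay contact edge */
{
    relay_ticks = timer_count();       /* capture immediately */
    relay_flag  = true;                /* main loop stores/sends it later */
}

/* Main loop: UTC second from the last NMEA sentence, plus
 * relay_ticks / (double)TIMER_HZ as the fractional second. */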
So, to open a serial port and successfully receive data from the balance through it, I need to make sure that the settings on the SerialPort object match the actual settings of the balance.
Now, the question is: how do I detect that communication hasn't been established because the settings differ? serialPort.Open() throws no exception in that case, so there is nothing to tell me whether the connection actually works. Yes, the settings are valid, but if they don't match the device (balance) settings, I am in the dark as to why the weight from the balance is not being captured.
Any input here?
Without knowing any more information on the format of the data you expect from your balance, only general serial port settings mismatch detection techniques are applicable.
If the UART settings are significantly incorrect, you'll likely see a lot of framing errors: where the UART expects a stop bit (a logic 1), it will in fact see a 0. You can detect this with the ErrorReceived event on the port.
private void OnErrorReceived(object sender, SerialErrorReceivedEventArgs e)
{
    if ((e.EventType & SerialError.Frame) == SerialError.Frame)
    {
        // your settings don't match, try something else
    }
}
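(Hook the handler up before you open the port, e.g. serialPort.ErrorReceived += OnErrorReceived;, so you don't miss errors from the very first bytes.)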
If things are close, but still incorrect, the .NET serial port object may not even give you an error (that is, until something catastrophic occurs).
My most common serial port communication failure occurs due to mismatched baud rates. If you have a message that you know you can get an 'echo' for, try that as part of a handshaking effort. Perhaps the device you're connecting to has a 'status' message. No harm will come from requesting it, and you will find out if communication is flowing correctly.
For software handshaking (XON/XOFF), there's very little you can do to detect whether or not it's configured right. The serial port object can do anything from ignoring it completely to throwing thread exceptions, depending on the underlying serial port driver implementation. I've had serial port drivers that completely ignore XON/XOFF and pass the characters straight into the program; yikes!
For hardware handshaking, the basic echo strategy for baud rate may work, depending on how your device works. If you know that it will do hardware handshaking, you may be able to detect it and turn it on. If the device requires hardware handshaking and it's not on, you may get nothing, and vice versa.
Another setting that's more rarely used is the DTR pin, data terminal ready. Some serial devices require that this be asserted (i.e., set to true) to indicate that it's time to start sending data. It's set to false by default; give toggling it a whirl.
Note that the serial port object is ... finicky. While not necessarily required, I would consider closing the port before you make any changes.
Edit:
Thanks to your comments, it looks like this is your device. It says the default settings should be:
1200 baud
Odd parity
1 stop bit
Hardware handshaking
It doesn't specify how many data bits, but the device says it supports 7 and 8. I'd try both of those. It also says it supports 600, 1200, 2400, 4800, 9600, and 19200 baud.
If you've turned on hardware handshaking, enabled DTR (different things), and cycled through all the different baud rates, there's a good chance that it's not your settings. It could be that the serial cable being used is wired incorrectly for your device. Some serial cables are 'passthrough' cables, where the 1-9 pins on one side match exactly with the 1-9 pins on the other. Then you have 'crossover' cables, where the TX and RX lines are swapped, so that when one side transmits, the other side receives (a very handy cable).
Consider looking at the command table in the back of the manual there; there's a "print software version" command you could issue to get some type of echo back.
Serial ports use a very, very old communications technology with a very, very old protocol called RS-232. This is pretty much as simple as it gets: the two endpoints have synchronized clocks, and they test the line voltage every clock cycle to see if it is high or low (with high meaning 0 and low meaning 1, which is the opposite of most conventions; again, an artifact of the protocol's age). The clock synchronization is accomplished through the use of stop bits, which are really just rest time in between bytes. There are also a few other things thrown into the more advanced uses of the protocol, such as parity bits, XON/XOFF, etc., but those all ride on top of this very basic communication layer.
Detecting a mismatch of the clocks on each end of the serial line is going to be nearly impossible; you'll just get incorrect data on the receiving end. The protocol itself has no built-in way to identify this situation, and I am unaware of any serial driver that is smart enough to notice the input data being clocked at an inappropriate frequency. If you're using one of the error detection schemes such as parity bits, probabilistically almost every byte will be declared an error.
In short, the best you can do is check the incoming data for errors: parity errors should be detected by your driver/software layer, whereas errors in the data received by your app from that layer will need to be checked by your program (the latter can be assisted by the use of checksums).
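For example, if the balance's protocol ends each message with a simple additive 8-bit checksum (the exact scheme depends on your device, so check its manual), the application-level check is only a few lines of C:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Verify that the last byte equals the sum of the preceding bytes
 * modulo 256. Swap in XOR or a CRC if your device uses one. */
static bool checksum_ok(const uint8_t *msg, size_t len)
{
    if (len < 2)
        return false;
    uint8_t sum = 0;
    for (size_t i = 0; i < len - 1; i++)
        sum += msg[i];
    return sum == msg[len - 1];
}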