I have a system that communicates between a PyQT app and a teensy 4.1 microcontroller. The app is doing machine learning inferencing and then sending the result of the inference down to the microcontroller. Due to the nature of the project, I cannot batch results as timing is everything.
When I send data to the microcontroller, the microcontroller echoes back what was sent. I am trying to send inference decisions at rates between 100 and 1000 per second. The comms between the microcontroller and the app run over a USB connection, which should have extremely fast speeds (baud on pyserial set to 246000).
The behavior that I notice is that sometimes the writes will 'queue' up and then send in succession very quickly. This bursty nature is not ideal and I was wondering what areas I could look into to solve this.
Does the echo (clogging the serial read) cause delays in writing, since it can only do one at a time? Is there a better technology for fast comms besides USB serial? Could I benefit from using asyncio?
I've been given the task of getting ADC samples onto an embedded Linux computer at the highest rate I can (up to about 300 kSPS). I am playing with several different platforms (ODROID, Edison), but early on I realized the limitations of using the built-in ADCs from within Linux with regard to timing (I am relatively new to this).
Right now I am reliably getting 150 kSPS using a Teensy 3.2 with a very basic swapping buffer, a PDB, and the USB connection. USB writes take 2.5 µs no matter what my buffer size is, so any faster and the ADC read interrupt collides with the USB write and I get nothing.
My question is: would using an external ADC chip enable faster speeds? I see chips on Digi-Key and Mouser advertising 600 kSPS and higher with SPI and even parallel outputs... but I feel like the bottleneck is the Teensy with its USB writes. Even if it could (and I am sure it could) read values 600k times a second, how do you get them onto the computer without falling behind?
Also, this is for long-term collection, so I can't just store everything and write it out once the collection is over. The Edison has a built-in microcontroller, but no SPI implemented yet.
Edit:
To clarify, my question is whether there is any way to get large amounts of data very fast into my embedded Linux device programmatically, or whether there is some layer between a fast SPI device and the computer that I don't know about. So far my mentors have suggested I 1) learn to write a device driver for the SPI device or 2) recompile an image with RT_PREEMPT.
I would like to have an Arduino operating in a CAN network. Does software that provides the OSI-model network layer exist for Arduino? I would imagine detecting the HIGH/LOW levels with GPIO/ADC and sending the signal to the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind having the terminating resistor required by the CAN network, though.
By Arduino I mean any of them. My intention is to keep the development environment.
If such software does not exist, is there any technical obstacle to it, like limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit-banging CAN driver, but it has many limitations.
First there is the timing: it's hard to achieve the bit timing and also the arbitration.
You will be able to get 10 kbit/s or perhaps even 50 kbit/s, but that consumes a huge amount of your CPU time.
And the code itself is a pain.
You have to calculate the CRC on the fly (easy), but implementing the collision detection and all the timing parameters is not.
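For reference, the CRC in question is CAN's 15-bit CRC (polynomial 0x4599), and it really can be updated one bit at a time as you clock bits onto the bus. A rough sketch of just that part (plain C++, not tied to any particular AVR register layout):

```cpp
#include <cstdint>

// CAN CRC-15, updated one bit at a time as bits are shifted onto the bus.
// Polynomial: x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599).
static uint16_t crc15_update(uint16_t crc, uint8_t bit)
{
    uint8_t crcnxt = (bit ^ (crc >> 14)) & 1u;  // incoming bit vs. CRC MSB
    crc = (uint16_t)((crc << 1) & 0x7FFFu);     // shift, keep 15 bits
    if (crcnxt)
        crc ^= 0x4599u;                         // XOR in the CAN polynomial
    return crc;
}

// CRC over the unstuffed bit stream from SOF to the end of the data field.
uint16_t crc15(const uint8_t *bits, int nbits)
{
    uint16_t crc = 0;
    for (int i = 0; i < nbits; ++i)
        crc = crc15_update(crc, bits[i] & 1u);
    return crc;
}
```

The hard parts the answer refers to (bit stuffing, arbitration, acknowledgement and resynchronisation timing) are not shown here; this is only the easy piece.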
I did this once for a company, but it was a really bad idea.
Better buy a chip for 1 Euro and be happy.
There are several CAN Bus Shield boards available (e.g.: this, and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills you can of course create your own board or breadboard for less.
Even if you bit-bang it via GPIO, you would need some hardware mods, I believe, to handle bus contention detection, and it would be very slow and might not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think that such software exists. The CAN bus is more complex than, for example, I2C. Basically you would have to implement the functionality of both a CAN controller and a CAN transceiver. See this thread for more details (in German).
Alternatively, you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.
I'm building a USB device to control stepper motors. I've done this before using a parallel port. Because these ports do not exist on current motherboards, I decided to implement USB communication between my device and the PC (host).
To achieve my objective, I equipped the device with a Freescale microcontroller that has a 12 Mbps USB module.
My USB device must receive 4 bytes (one for each motor driver) at a given time, because every byte is a step that should move a motor.
On the PC (host), a user application processes a text file with the trajectory information and generates the coordinates, sending bytes at a certain rate for each motor (the timing is the key to achieving the acceleration and speed of the motors).
Using the parallel port this was an easy task, because each byte was sent sequentially at a time determined by the user app.
Doing a little research about the full-speed USB protocol, I understood that a frame is sent every 1 ms.
So I can send 4 bytes, or many more, every 1 ms, but I cannot manage the timing like I did with the parallel port.
My microcontroller can send up to 64 bytes per frame (depending on the transfer type: Control, Bulk, Interrupt, Isochronous).
Question 1:
How can I send 4-byte packets more often than every 1 ms?
Question 2:
What type of transfer would you advise for this kind of device?
Thanks.
Like Ricardo said, USB-serial will suffice.
As for the type of transfer, try implementing a CDC stack and use your SCI receiver to listen for PC commands. That will give you a receive buffer which will meet your needs.
Initialize your SCI (baud, etc)
Enable receiver and interrupt
On data receive, move it to your 4-byte command buffer
Clear receive buffer, wait for more
When you have all 4 bytes, fire off the steppers! Receiving four bytes should only take microseconds.
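As a rough illustration of that flow (the receive and stepper function names below are placeholders, not the actual Freescale stack API):

```cpp
#include <stdint.h>

// Placeholder hooks -- replace with the receive call of your CDC/SCI driver
// and your stepper output routine; these names are assumptions, not a real API.
extern int  serial_read_byte(void);              // returns -1 if no byte is pending
extern void step_motor(int axis, uint8_t step_byte);

static uint8_t cmd[4];
static uint8_t cmd_len = 0;

// Call from the receive interrupt or the main loop: collect exactly four
// bytes (one per motor driver), then fire all steppers at once.
void poll_command(void)
{
    int b = serial_read_byte();
    if (b < 0)
        return;                          // nothing received yet

    cmd[cmd_len++] = (uint8_t)b;

    if (cmd_len == 4) {                  // full command: one byte per axis
        for (int axis = 0; axis < 4; ++axis)
            step_motor(axis, cmd[axis]);
        cmd_len = 0;                     // wait for the next 4-byte command
    }
}
```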
Check with Freescale to see if your processor is supported.
http://cache.freescale.com/files/microcontrollers/doc/support_info/USB_STACK_RELEASE_NOTES_V4.1.1.pdf?fpsp=1
There might even be some sample code to get you started.
-Cheers
I am achieving the same goal (driving/controlling CNC machines) like this:
The USB device is just a synchronous parallel I/O port, using continuous bulk transfers with one pipe as input and one as output. This way I was able to achieve synchronous 64-bit parallel communication with a ~70 kHz sample rate. It uses traffic of around (in) 4.27 + (out) 4.27 Mbit/s, which is the limit for my MCU and code. Higher speeds cause jitter on the output due to USB event interrupts.
How to do it (on MCU side)
I have two FIFOs, one for incoming and one for outgoing data. I have a timer interrupt occurring at the sample-rate frequency. In it I read the inputs and feed them into the first FIFO, and read data from the other FIFO and send it to the outputs.
On top of that, the USB task is called (inside the same interrupt), checking the FIFOs for data to send and for incoming data from USB, and handling the transfer itself.
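A minimal sketch of that interrupt structure, with the hardware access and the USB task left as placeholders for whatever your MCU's libraries provide; the FIFO size and the 70 kHz figure are assumptions chosen to match the numbers above:

```cpp
#include <stdint.h>

// Ring buffer size is an assumption; size it to cover the worst-case gap
// between USB transfers at your sample rate.
constexpr int FIFO_SIZE = 1024;

struct Fifo {
    uint64_t buf[FIFO_SIZE];
    volatile int head = 0, tail = 0;
    bool push(uint64_t v) {
        int next = (head + 1) % FIFO_SIZE;
        if (next == tail) return false;        // full: sample is dropped
        buf[head] = v; head = next; return true;
    }
    bool pop(uint64_t &v) {
        if (tail == head) return false;        // empty: output underrun
        v = buf[tail]; tail = (tail + 1) % FIFO_SIZE; return true;
    }
};

Fifo to_host, from_host;

// Placeholder hardware hooks -- replace with your MCU's GPIO and USB routines.
extern uint64_t read_parallel_inputs(void);
extern void     write_parallel_outputs(uint64_t word);
extern void     usb_task(void);                // services the bulk IN/OUT pipes

// Timer interrupt fired at the sample-rate frequency (e.g. 70 kHz).
void sample_timer_isr(void)
{
    to_host.push(read_parallel_inputs());      // capture inputs for the PC

    uint64_t out;
    if (from_host.pop(out))                    // next output word from the PC
        write_parallel_outputs(out);

    usb_task();                                // move FIFO data to/from USB
}
```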
I chose Atmel AT32UC3A chips for this task. After long and painful research I decided on these MCUs because they have enough memory for both FIFOs and the program, so no additional IC is needed. It comes in a TQFP package which can be used (BGA is not an option for me). It has HS USB (most USB MCUs have only FS, like yours). It runs at 66 MHz. It supports many interesting features (I did interesting projects with it in the past), and of course I have experience with Atmel MCUs from the past.
So if you want to achieve something similar, then:
start with bulk transfer (PC -> USB -> MCU -> output)
add FIFO if needed
I do not know the sample rate you need. The old LPT ports could handle 80-196 kHz depending on the manufacturer. The modern ones are much, much slower (which is silly and sad).
measure the critical sample rate
You need an oscilloscope or very good hearing for this. The output data must be synchronous, so no holes in it, no jitter, etc...
If any of these are present, you have to lower the sample rate. My setup could handle even a 1 MHz sample rate, but USB jitter was present (sometimes a USB event froze the sending for longer than one sample...), so I achieved only 70 kHz of stable output.
If you also need inputs, then add them
but only if the output is working as it should. Do not forget to lower the sample rate after this too ... Use separate bulk pipes and FIFOs for input and output.
What's the rationale behind making USB a polling mechanism rather than interrupt-driven? The answers I can come up with some reasoning for are:
Leave control of processing efficiency and granularity to OS, rather than the device itself.
Prevent "interrupt storms" by faulty devices.
Some explanations I found on the net say that it's mostly because of the nature of USB devices. They are mostly microcontroller-based systems which cannot queue larger transfers and therefore require short interrupt intervals, and such short intervals may not be the most efficient. Is that true?
Could there be other reasons?
The overarching premise of the development of USB was "cheap chips". This was achieved through the use of polling, which reduces the need for a higher-level arbitration protocol.
FireWire, which did allow for interrupts from the devices and even DMA, was much more expensive. So USB won in the low-cost field, and FireWire in the low-latency/low-overhead field. In the end, USB more or less won overall.
What's the rationale behind making USB a polling mechanism rather than interrupt-driven?
This seems to be anti-USB FUD (as in Fear-Uncertainty-Doubt).
The reason is that this simplifies things on the hardware level quite a bit - no collisions, for example. USB is half-duplex to reduce the number of wires in the cable, so only one side can talk at a time anyway.
While USB uses polling on the wire, once you use it in software you will notice that you have interrupts in USB. The only issue is a slight increase in latency - negligible in most use cases. Since the polling is usually realized in hardware IIRC, software only gets notified if there is new data.
On the software level, there are so-called "interrupt endpoints" - and guess what, every HID device uses them: mice, keyboards and joysticks are HID.
There are three ways to avoid data transfer collisions on a bus:
Have a somewhat complex bus management protocol. Such a protocol has to be rather sophisticated, as when too simple, it will make the bus rather slow (see Token Ring, which is rather simple, yet inefficient). However, having a sophisticated protocol makes all components expensive, as all of them require management logic and need to understand how the bus actually works (see FireWire).
Don't avoid them at all; allow them but detect and handle them. This is also somewhat complex, and the bus cannot guarantee any speed or latency: if there are constant collisions, throughput will fall and latency will rise (see Ethernet without switches, see WiFi).
Have one bus master that controls who can use the bus at which time and for how long. This is inexpensive, as only the master must be sophisticated and the master can give any possible guarantee regarding speed or latency.
And (3) is how USB works. Not even the master USB chips need to be sophisticated, as the master is usually a computer with a fast CPU that can perform all bus management in software.
USB device chips are dumb as toast. They don't need to understand the nifty details of the bus. They just have to look for packets addressed to them, which are either control packets, e.g. requesting metadata or selecting a configuration, data packets sent by the master, or poll requests from the master saying "if you have something to send, the bus is now yours".
To make sure the master polls them on time, they hand out a simple description table on request, that explains which endpoints they offer, how often those need to be polled, and how much data they will at most transfer when being polled. The master can use that information to build up a poll schedule that ensures that all devices are polled on time and get the bus for long enough to allow their maximum transfer size. Of course, this is not possible under all circumstances. If you connect too many devices that require very frequent polling and always want to send a lot of data, your system may refuse to add a new device with the error, that its poll requirements cannot be satisfied anymore. Yet that situation is rare in practice and USB is limited to 127 devices (hubs count as devices, so does the master itself).
Power management works in a similar way. Every device tells the master how much power it needs and the master ensures that the bus can still deliver that much, taking active hubs into account. If you connect another device and the bus cannot power it anymore, adding the device will fail with an error.
This allows a fairly complex, powerful, and fast bus system, yet with components that don't even need a real CPU. The simplest USB chips out there are just bridges to a serial data line (like an internal RS-232 bus or an I2C bus); there is nothing really configurable about them, and they cannot run software or have a firmware one could update. They just place incoming data packets into a buffer, then send the buffer content bit by bit over the serial bus, and they receive serial data into another buffer and return the buffer content when being polled. As for telling the master the configuration (including device and vendor IDs, as well as human-readable strings), they simply send the content of a small external EPROM. Things cannot get much simpler than that, yet such a chip is enough to build plenty of USB hardware already.
I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
Random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Put the scope into persist/collect mode.
Get the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well and is then unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, say a delta of 1 µs is okay, and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (possibly even in the academic sense, but certainly on the web).
Link from this Question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking if something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application). This means hard versus soft real-time is a system design issue.
Under loading that exceeds the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system under low loading can give hard real-time results. How much processing must happen on time and how much pre/post-processing can be performed is the real key to hard/soft real-time.
In some real-time applications, some data loss is not a failure; it just has to stay below a certain level, again a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The easiest and most efficient way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle, or, if you run a full RTOS, to timestamp your acquisition and transition. Save your max time and run an average of the values over a second. If your average is around 50% and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow. You can monitor the effect of all your software changes on your cycle time.
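A minimal sketch of that timestamping idea; read_timer_us(), do_control_work() and report() are placeholders for your free-running timer and application code, and the once-per-second reporting assumes you know your cycle rate:

```cpp
#include <stdint.h>

// Placeholders -- replace with your free-running timer and application code.
extern uint32_t read_timer_us(void);            // free-running microsecond timer
extern void     do_control_work(void);          // acquisition + processing
extern void     report(uint32_t avg_us, uint32_t max_us);

constexpr uint32_t CYCLES_PER_SECOND = 1000;    // assumption: 1 kHz control loop

static uint32_t max_us  = 0;                    // worst cycle time seen so far
static uint64_t sum_us  = 0;                    // accumulator for the average
static uint32_t samples = 0;

void control_cycle(void)
{
    uint32_t t0 = read_timer_us();

    do_control_work();

    uint32_t dt = read_timer_us() - t0;         // unsigned math handles timer wrap
    if (dt > max_us) max_us = dt;
    sum_us += dt;

    if (++samples >= CYCLES_PER_SECOND) {       // report roughly once per second
        report((uint32_t)(sum_us / samples), max_us);
        sum_us = 0; samples = 0; max_us = 0;    // start a fresh measurement window
    }
}
```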
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the timer. If you miss the deadline, have the interrupt handler signal a failure. This, however, will only give you a warning once your application is already taking too long, but it relies on hardware and interrupts, so you can't miss it.
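And a sketch of that cyclical-interrupt watchdog variant; again the timer hooks are placeholders for whatever your hardware provides:

```cpp
#include <stdint.h>

// Placeholder timer hooks -- wire these to a hardware timer peripheral.
extern void start_deadline_timer_us(uint32_t period_us); // arms the cyclical interrupt
extern void reset_deadline_timer(void);                  // restarts the countdown
extern void signal_deadline_miss(void);                  // latch an error flag / LED
extern void do_control_work(void);                       // acquisition + processing

volatile bool missed_deadline = false;

void deadline_monitor_init(uint32_t period_us)
{
    start_deadline_timer_us(period_us);        // interrupt fires only if we are late
}

// Runs only when the application failed to reset the timer in time.
void deadline_timer_isr(void)
{
    missed_deadline = true;
    signal_deadline_miss();
}

void control_cycle(void)
{
    do_control_work();
    reset_deadline_timer();                    // made it in time: rearm for the next cycle
}
```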
These solutions also eliminate the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, solving timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly modified 2.6 kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
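As an illustration of the kind of analysis meant here, a small host-side program that reads one measured latency per line (units of your choosing) and reports mean, standard deviation and worst case; how you capture the latencies (serial port, logic analyser dump, etc.) depends on your setup:

```cpp
// Reads one latency measurement per line from stdin and prints mean, standard
// deviation and worst case -- the figures you would compare against the deadline.
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    std::vector<double> lat;
    double v;
    while (std::cin >> v)
        lat.push_back(v);
    if (lat.empty())
        return 1;

    double sum = 0.0, worst = 0.0;
    for (double x : lat) { sum += x; if (x > worst) worst = x; }
    const double mean = sum / lat.size();

    double var = 0.0;
    for (double x : lat) var += (x - mean) * (x - mean);
    var /= lat.size();

    std::cout << "samples: " << lat.size()
              << "  mean: "  << mean
              << "  sd: "    << std::sqrt(var)
              << "  worst: " << worst << '\n';
    return 0;
}
```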
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Windows CE and still produce good and stable results in those tests if the hardware is fast enough.
So, you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it doesn't stay constant but never rises above an acceptable limit under any load or input signal type, it's soft. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.