Some of these Ettus boxes have some serious (& seriously expensive) FPGAs in them. It seems like a waste if all they do is pass data from the ADC to the Ethernet bus. When I build something in GRC, how much signal processing is done in the FPGA & how much is done by my PC?
GNU Radio itself is host software. So, all the processing you program in GNU Radio is done on your CPUs, unless you use special hardware accelerator blocks, for example:
gr-theano: GPU acceleration
gr-fosphor: OpenCL-accelerated Waterfall spectrogram
gr-ettus: Employing RFNoC to implement specific functionality on the X3x0's FPGA. This requires you to build an FPGA image that includes the functionality you use as a gr-ettus block.
Generally, the FPGA in the X3x0 already does a lot: physically, the ADC and DAC of the X3x0 run at 200 MHz by default, and you can select integer fractions of that as the "user sampling rate"; the interpolation/decimation from/to that rate to match these hardware clocks is done in the FPGA with relatively large filters. Also, you can digitally shift your signal in frequency by setting a digital tuning offset, which is likewise done by a CORDIC in the FPGA.
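To make that concrete, here is a minimal sketch using UHD's C++ API rather than GRC; the device address, sample rate and frequencies are only placeholders. Requesting an LO offset in the tune request makes the analog front end park away from the target, and the FPGA's CORDIC shifts the signal back digitally; the tune result shows how the tune was split between RF and DSP:

    #include <uhd/usrp/multi_usrp.hpp>
    #include <uhd/types/tune_request.hpp>
    #include <iostream>

    int main()
    {
        // Device address is a placeholder; adjust for your USRP.
        uhd::usrp::multi_usrp::sptr usrp =
            uhd::usrp::multi_usrp::make(uhd::device_addr_t("type=x300"));

        // Ask for an integer fraction of the 200 MHz converter clock;
        // the FPGA's decimation filters bring the ADC rate down to this.
        usrp->set_rx_rate(200e6 / 8);   // 25 Msps

        // Tune to 1 GHz with a 5 MHz LO offset: the analog front end parks
        // 5 MHz away and the FPGA's CORDIC shifts the signal back digitally.
        uhd::tune_request_t req(1.0e9, 5e6);
        uhd::tune_result_t  res = usrp->set_rx_freq(req, 0);

        // The result shows how the tune was split between RF and DSP (CORDIC).
        std::cout << res.to_pp_string() << std::endl;
        return 0;
    }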
I've been given the task of getting ADC samples onto an embedded Linux computer at the highest rate I can (up to about 300 kSPS). I am playing with several different platforms (ODROID, Edison), but early on I realized the limitations of using the built-in ADCs from within Linux, and of the timing (I am relatively new to this).
Right now I am reliably getting 150 kSPS using a Teensy 3.2 with a very basic swapping buffer, a PDB, and the USB connection. USB writes take 2.5 µs no matter my buffer size, so any faster and the ADC read interrupt collides with the USB transfer and I get nothing.
My question is: would using an external ADC chip enable faster speeds? I see chips on Digikey and Mouser advertising 600 kSPS and higher with SPI and even parallel outputs... but I feel like the bottleneck is the Teensy with its USB writes. Even if it could (and I am sure it could) read values 600k times a second, how do you get them onto the computer without falling behind?
Also, it is for long-term collection, so I can't just store everything and write it out once the collection is over. The Edison has a built-in microcontroller, but no SPI implemented yet.
Edit:
To clarify, my question is whether there is any way to get large amounts of data very fast into my embedded Linux device programmatically, or is there some layer between a fast SPI device and the computer that I don't know about? So far my mentors have suggested I 1) learn to write a device driver for the SPI device or 2) recompile an image with RT_PREEMPT.
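(The closest thing to such a layer that I've found is the stock spidev userspace interface; below is a rough sketch of how I'd poll it, with the device path, clock speed and transfer size only as placeholders, but I don't know whether it can keep up.)

    // Rough sketch: pulling blocks of samples from an external SPI ADC via
    // the Linux spidev ioctl interface. /dev/spidev0.0, the 8 MHz clock and
    // the 4096-byte transfer size are placeholders.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>
    #include <sys/ioctl.h>
    #include <linux/spi/spidev.h>

    int main()
    {
        int fd = open("/dev/spidev0.0", O_RDWR);
        if (fd < 0)
            return 1;

        unsigned int speed = 8000000;                 // SCLK in Hz
        ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

        unsigned char rx[4096];
        struct spi_ioc_transfer xfer;
        std::memset(&xfer, 0, sizeof(xfer));
        xfer.rx_buf   = (unsigned long)rx;            // kernel wants a u64 address
        xfer.len      = sizeof(rx);
        xfer.speed_hz = speed;

        for (;;) {
            if (ioctl(fd, SPI_IOC_MESSAGE(1), &xfer) < 0)
                break;                                // one block clocked in per ioctl
            // ... hand rx[] to a consumer thread / write it to storage ...
        }
        close(fd);
        return 0;
    }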
I am wondering if I should be worried about excessive writes to the embedded controller registers on my laptop. I am guessing that if they are true registers, they probably act more like RAM rather than flash memory so this isn't a problem.
However, I have a script to modify the registers in my laptop's EC to better control the fan speed curve. It has to be re-applied after each power-change event such as sleep/wake, as well as power-cable events, so it happens fairly often. I just want to make sure I am not burning out my chips in the process.
The script I am using to write to the EC is located here:
https://github.com/RayfenWindspear/perl-acpi-fanspeed
Well, it seems you're writing to ACPI registers. "Registers" here do not refer to any specific hardware; it just means it's a specific address that you can reach using a specific bus. It is, however, highly unlikely that something you have to re-write after every power cycle is overwriting permanent storage, so for all practical purposes I'd assume that you can rely on this for as long as your laptop lives.
Hardware peripheral registers are almost universally implemented as SRAM-type cells. They will not be the first thing to wear out. The fan you are controlling has a limited number of start/stop cycles, so it is much more likely that the act of toggling these registers will wear something else out prematurely (rather than the SRAM-type memory cells themselves).
In your particular case, correctly driving a fan/motor can significantly improve its lifetime. Overdriving a fan/motor does not always make it go faster; instead it creates heat. The heat weakens the wiring and eventually the coils short, reducing drive and eventually wearing the motor out. That said, the element being cooled can be damaged by excess heat, so tuning things just to reduce noise may not make sense.
Background details
Generally, the storage element is called a flip-flop, and it comes in various forms; SystemRDL, SystemC and similar languages are what digital engineers use to model these registers. In digital hardware, the flip-flops have default or reset values. These are fixed like ROM on each chip and are not normally re-programmable; occasionally they use EEPROM technology (see Note 1), or they are configured via input lines that the hardware designer pulls high/low with a resistor or connects to another element's GPIO.
It is analogous to 'initdata': program values that aren't zero get copied from flash, disk, etc. to memory at program startup. So the flip-flops normally do not hold state over a power cycle; something else does that.
Flash technology is based on a floating gate, and the floating gates are programmed via Fowler-Nordheim 'quantum tunnelling', a process that is slightly destructive. The floating-gate device dates back to 1967, but the wider electronics industry did not start producing flash in volume until the early '90s, with NOR flash followed by NAND flash and many variants; the underlying physics is the same, only the digital connections differ. Besides the wear-out defect you are concerned about, flash actually arrived after many processors such as the 68k, i386, etc. Flip-flops were long established by then; the 'register' portion is not a large part of a typical chip, and a flip-flop uses the same logic (gates) as the rest of the chip, so using flash for registers would add overhead with little benefit.
An additional take-away is that start-up and shutdown are usually the most destructive times for a chip. Poor hardware designers often do not include proper voltage supervision, and some lines may be left floating with the expectation that system software will set them immediately. Reset events, ESD, overheating, etc. will all be more harmful than the mere act of writing a peripheral register.
Note 1: EEPROM typically supports 100,000+ cycles. These cells are generally written only once, at manufacture time, to set a chip configuration for the system. They are actually quite rare, but possible.
The MLC (multi-level cell) NAND flash in SSDs has pathetically low endurance, as few as 8,000 cycles in some cases. Old-school SLC (single-level cell) flash manages 10,000+ cycles, but people demand large capacities.
The BeagleBone Black processor includes two independent Programmable Real-time Units (PRUs). Hobbyists and professionals are excited about the possible use of these units for real-time applications, which is understandable. However, if you can have an RTOS (whether on the BeagleBone or the Raspberry Pi), why would you need the PRUs?
EDIT-
For information, the BBB has an ARM Cortex-A8 running at 1 GHz, with 1.9 DMIPS/MHz. The PRUs are simple RISC cores running at 200 MHz.
Linux, even with the real-time scheduler, is unsuited to many critical hard real-time tasks with response requirements at the microsecond level; on the other hand, it provides or enables a great deal of functionality in terms of UI, connectivity and filesystem support. These things are either not available in an RTOS or are provided at significant cost in high-end RTOSes, and with much more limited hardware support.
So if you have a system that has hard real-time constraints but also needs more general-purpose computing features such as networking, filesystems, or connection to commercial-off-the-shelf (COTS) peripherals, then the PRU provides a solution to that.
On the other hand, I can't help but think that this is a marketing exercise on the part of TI to sell more chips. A similar solution has always been possible (and indeed common) using one or more processors to perform the time-critical tasks, possibly running an RTOS, while UI and connectivity are handled by a single processor with the necessary hardware and memory resources but without the real-time constraints. The PRU subsystem does have two 32-bit cores, but XMOS xCORE devices have as many as 16 cores; with 16 communicating cores, you may not even need an RTOS.
To answer the question...
[...] if you can have an RTOS [...], why would you need the PRUs?
... directly: you probably wouldn't need them in that case, but you would lose Linux - and your application may need that. It is just one of many solutions to real-time applications using Linux. You pays your money, and takes your choice.
Most likely the processor in the BeagleBone or Raspberry Pi is too "heavy" for hard real time; after all, you could run an RTOS on your PC, but it would not be entirely deterministic, even though it is faster than your typical microcontroller (I guess that these PRUs are some sort of microcontroller with a new fancy name). On such high-level application processors as found on these boards, you rarely have direct access to hardware or interrupts, which are essential for real-time applications that actually do something time-critical.
I would like to have an Arduino operating in a CAN network. Does software that provides the OSI-model network layer exist for the Arduino? I would imagine detecting the HIGH/LOW levels with GPIO/ADC and sending the signal onto the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind having the terminating resistor required by the CAN network, though.
By Arduino I mean any of them. My intention is to keep the development environment.
If such software does not exist, is there any technical obstacle to it, like limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit banging CAN driver, but it has many limitations.
First there's the timing: it's hard to achieve the bit timing and also the arbitration.
You will be able to get 10 kbit/s or perhaps even 50 kbit/s, but that consumes a huge amount of your CPU time.
And the code itself is a pain.
You have to calculate the CRC on the fly (easy), but implementing the collision detection and all the timing parameters is not easy.
I once did this for a company, but it was a really bad idea.
Better to buy a CAN controller chip for 1 Euro and be happy.
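To illustrate the 'easy' part: the CRC that CAN uses is a 15-bit CRC with generator polynomial 0x4599, computed bit by bit over the unstuffed frame bits, roughly like the sketch below. The function name and the bit-array input format are just for illustration; the genuinely hard parts (bit stuffing, arbitration, resynchronisation) are not shown.

    #include <cstdint>
    #include <cstddef>

    // CAN CRC-15, bit at a time, generator polynomial 0x4599
    // (x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1).
    // Feed it every frame bit from SOF up to (not including) the CRC field,
    // before bit stuffing is applied.
    uint16_t can_crc15(const uint8_t *bits, size_t nbits)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < nbits; ++i) {
            uint16_t in = (bits[i] & 1) ^ ((crc >> 14) & 1);
            crc <<= 1;
            if (in)
                crc ^= 0x4599;
            crc &= 0x7FFF;          // keep 15 bits
        }
        return crc;
    }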
There are several CAN bus shield boards available (e.g. this and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills you can of course create your own board or breadboard it for less.
Even if you bit-bang it via GPIO, I believe you would need some hardware mods to handle bus contention detection, and it would be very slow and may not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think that such software exists. The CAN bus is more complex than, for example, I2C. Basically you would have to implement the functionality of both a CAN controller and a CAN transceiver. See this thread for more details (in German).
Alternatively you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.
I have been searching around [in vain] for some good links/sources to help understand GPIOs and why they are used in embedded systems. Can anyone please point me to some?
In any useful system, the CPU has to have some way to interact with the outside world - be it lights or sounds presented to the user or electrical signals used to communicate with other parts of the system. A GPIO (general purpose input/output) pin lets you either get input for your program from outside the CPU or to provide output to the user.
Some uses for GPIOs as inputs:
detect button presses
receive interrupt requests from external devices
Some uses for GPIOs as outputs:
blink an LED
sound a buzzer
control power for external devices
A good case for a bidirectional GPIO or a set of GPIOs can be to "bit-bang" a protocol that your SoC doesn't provide natively. You could roll your own SPI or I2C interface, for example.
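For example, a mode-0, MSB-first SPI transfer bit-banged over three GPIOs might look roughly like this (Arduino-style calls for brevity; the pin numbers are arbitrary placeholders and chip select is left out):

    #include <Arduino.h>

    // Bit-banged SPI master (mode 0, MSB first) on three ordinary GPIOs.
    const int PIN_SCK  = 2;
    const int PIN_MOSI = 3;
    const int PIN_MISO = 4;

    void bbspi_setup() {
        pinMode(PIN_SCK,  OUTPUT);
        pinMode(PIN_MOSI, OUTPUT);
        pinMode(PIN_MISO, INPUT);
        digitalWrite(PIN_SCK, LOW);                 // idle clock low (CPOL = 0)
    }

    uint8_t bbspi_transfer(uint8_t out) {
        uint8_t in = 0;
        for (int i = 7; i >= 0; --i) {
            digitalWrite(PIN_MOSI, (out >> i) & 1); // set data while clock is low
            digitalWrite(PIN_SCK, HIGH);            // slave samples on the rising edge
            in = (in << 1) | digitalRead(PIN_MISO);
            digitalWrite(PIN_SCK, LOW);
        }
        return in;
    }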
The reason you cannot find an answer is probably that, if you know what an embedded system is and does, or indeed anything about digital electronic systems, the answer is rather too obvious to write down! That is to say, by the time you get as far as actually implementing a working embedded system, you should already know what they are.
GPIO pins are, as a minimum, two-state digital logic I/O. In most cases some or all of them may also be interrupt sources, and these interrupts may have options for rising-edge, falling-edge, dual-edge, or level triggering.
On some targets, GPIO pins may have configurable input/output circuitry to allow, for example, external pull-ups to be omitted, or to allow connection to devices that require open-collector outputs, and in some cases even to provide filtering of high-frequency noise and glitches.
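As a small example of that configurability (Arduino-style calls again, with an arbitrary pin number): an input with the internal pull-up enabled, so a button to ground needs no external resistor, and a falling edge raises an interrupt.

    #include <Arduino.h>

    // A button to ground on a pin with the internal pull-up enabled;
    // the falling edge (press) raises an interrupt. Pin 5 is a placeholder
    // and is assumed to support external interrupts on your board.
    const int PIN_BUTTON = 5;
    volatile bool pressed = false;

    void onPress() {                 // interrupt service routine
        pressed = true;
    }

    void setup() {
        pinMode(PIN_BUTTON, INPUT_PULLUP);
        attachInterrupt(digitalPinToInterrupt(PIN_BUTTON), onPress, FALLING);
    }

    void loop() {
        if (pressed) {
            pressed = false;
            // handle the button press outside the ISR
        }
    }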
In most embedded systems, a processor will be ultimately responsible for sensing the state of various devices which translate external stimuli to digital-level logic voltages (e.g. when a button is pushed, a pin will go low; otherwise it will sit high), and controlling devices which translate logic-level voltages directly into action (e.g. when a pin is high, a light will go on; when low, it will go off). It used to be that processors did not have general-purpose I/O, but would instead have to use a shared bus to communicate with devices that could process I/O requests and set or report the state of the external circuits. Although this approach was not entirely without advantages (one processor could monitor or control thousands of circuits on a shared bus), it was inconvenient in many real-world applications.
While it is possible for a processor to control any number of inputs and outputs using a four-wire SPI bus or even a two-wire I2C bus, in many cases the number of signals a processor will need to monitor or control is sufficiently small that it's easier to simply include the circuitry to monitor or control some signals directly on the chip itself. Although dedicated interfacing hardware will frequently have output-only or input-only pins (the person choosing the hardware interface chips will know how many signals need to be monitored, and how many need to be controlled), a particular family of processor may be used in some applications that require e.g. 4 inputs and 28 outputs, and other applications that require 28 inputs and 4 outputs. Instead of requiring that different parts be used in applications with different balances between inputs and outputs, it's simpler to just have one part with inputs that can be configured as inputs or outputs, as needed.
I think you have it backwards. GPIO is the default in electronics. It's a pin, a signal, that can be programmed. Everything is made up of these. For a processor, dedicated peripherals are a special case, they're extras for when you know you want a more limited function.
From a chip manufacturer's perspective, you often don't know exactly what the user needs, so you can't put the exact peripherals on your chip; you make generic ones instead. Many applications are so rare that there's no market for a specific chip, and the only thing you can do is use GPIO or make specific hardware yourself. Also, all unused (or potentially unused) pins are worth turning into GPIOs, because that makes the part even more generic and reusable. Generic and reusable is very nearly the whole point of programmable chips; otherwise you would just make ASICs.
Some particularly suitable applications:
Reset parts (chips) in a system
Interface to switches, keypads, lights (all they have is one pin/signal!)
Controlling loads with relays or semiconductor switches (on-off)
Solenoid, motor, heater, valve...
Get interrupts from single signals
Thermostats, limit switches, level detectors, alarm devices...
BTW, the Parallax Propeller has practically nothing but GPIO pins. Peripherals are made in software. It works very well for many uses.