GNU Radio and LabVIEW

I am using a set of four N210 devices with SBX daughterboards for a beamforming application. As I understand it, the resync feature of the fractional-N PLL is accessed through the UHD driver.
Do GNU Radio or LabVIEW support this resync feature?

Both use UHD, and both have access to the functionality necessary to synchronize the LO frontends in all four SBXes:
First, you will need to sync the N210s' reference clocks – for that, you will need an external clock distribution (e.g. an Octoclock) or GPSDOs.
Then, you set a common device time relative to a common PPS signal. That can also come from four GPSDOs, or be sourced from a single device and distributed with a clock distribution unit (Octoclock etc.).
Then, you can use timed commands to tune the frontends of the SBXes at exactly the same time, leading to a constant relative phase (see the sketch below).
For more information, the UHD manual's page on LO synchronization will be of help!
Notice that you should use a recent version of UHD, i.e. keep your software as fresh as possible.
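For illustration, here is a minimal C++ sketch of those three steps using UHD's multi_usrp API directly (the device addresses and the 2.45 GHz target frequency are made-up placeholders; this is the UHD functionality that both GNU Radio and LabVIEW build on):

#include <uhd/usrp/multi_usrp.hpp>
#include <chrono>
#include <thread>

int main()
{
    auto usrp = uhd::usrp::multi_usrp::make(
        "addr0=192.168.10.2,addr1=192.168.10.3,addr2=192.168.10.4,addr3=192.168.10.5");

    usrp->set_clock_source("external");                    // 1) lock all references to the shared 10 MHz
    usrp->set_time_source("external");                     // 2) common device time from the shared PPS
    usrp->set_time_unknown_pps(uhd::time_spec_t(0.0));
    std::this_thread::sleep_for(std::chrono::seconds(1));  //    let the next PPS edge latch the time

    // 3) timed command: all four SBX frontends retune in the same clock cycle,
    //    which yields a constant relative LO phase
    usrp->set_command_time(usrp->get_time_now() + uhd::time_spec_t(0.1));
    for (size_t ch = 0; ch < usrp->get_rx_num_channels(); ch++)
        usrp->set_rx_freq(uhd::tune_request_t(2.45e9), ch);
    usrp->clear_command_time();
    return 0;
}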

Related

Using same codes on different STM32F3/F4 MCUs

Currently I'm working with the STM32F303ZET6 (on a Nucleo development board) in a university project. We also need to make an all-SMT PCB including the microcontroller. The problem we have is that we can't find an SMT version of the STM32F303ZET6 in our country.
So we have to change our microcontroller, but currently the STM32F303ZET6 is all I have, and I'll write all of the code with it. I'm planning to use Arm Mbed for the libraries and development environment. My question is: can I use the same code I write for the STM32F303ZET6 on some other STM32F3 or STM32F4 microcontroller?
There is a great deal of commonality between STM32F2, STM32F3 and STM32F4 series. Both F3 and F4 are Cortex-M4 and all three series share common peripherals. In some cases you may find pin-multiplexing options differ, or there are certain peripherals available in one part but not the other.
Different parts may have a different number of USARTs, ADCs, DACs etc., and a differing number of available GPIOs, so you should check that the peripherals and ports you use are available on the alternate part.
It is really a matter of going through the data sheet and comparing the function, capabilities and pin-out options of the parts. If you are using STM32Cube you should have few compatibility issues (Cube has other issues, but cross-part compatibility is its main purpose); see the sketch below.
The clock trees for each part tend to differ, so you will need part-specific C runtime start-up code, but that is normally provided by the toolchain.
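As an illustration of that cross-part portability, here is a minimal sketch using the STM32Cube HAL, assuming a hypothetical board with an LED on PB0; apart from the device define and header, the same code builds for an F3 or an F4 part:

#ifdef STM32F303xE
#include "stm32f3xx_hal.h"   /* F3 Cube package */
#else
#include "stm32f4xx_hal.h"   /* F4 Cube package */
#endif

int main(void)
{
    HAL_Init();

    __HAL_RCC_GPIOB_CLK_ENABLE();              /* clock the GPIO port */

    GPIO_InitTypeDef init = {0};
    init.Pin   = GPIO_PIN_0;
    init.Mode  = GPIO_MODE_OUTPUT_PP;
    init.Pull  = GPIO_NOPULL;
    init.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(GPIOB, &init);

    while (1) {
        HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_0); /* same HAL call on F3 and F4 */
        HAL_Delay(500);
    }
}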

Implementation of TDMA scheme on GNU radio using USRP

What is the procedure for implementing a TDMA scheme in GNU Radio using USRPs?
I want to implement a TDMA scheme using two USRPs as transmitters and a third one as a receiver. The requirement is that the first transmitter sends some data to the receiver for the first 10 seconds; then, after a delay of two seconds, the second transmitter sends some data to the receiver for another 10 seconds, and the process repeats. Can anyone help, or provide some useful links, for implementing this whole process in GNU Radio?
I am in the middle of implementing a TDMA radio. My design relies on GPS sync on the GNU Radio host platform. I use its time to sync my USRP via set_time_unknown_pps with an argument that is 2 seconds in the future.
My MAC block is purely message based, acting as a PDU broker between the application and the PHY layer. PDUs to be transmitted are tagged with a tx_time command, with the time set an appropriate amount into the future. I've had to write several OOT blocks to handle tx_[sob,eob] tagging and other PHY details, but in the end packets come out exactly when they need to (the sketch below shows the UHD mechanism this maps to). The turn-on latency for my B200mini appears to be about 1–2 µs, which is fine for my timing requirements.
My advice is to start with simple MAC functions and test all the way along until you are confident with a block, then move down the transmit chain.
Anticipating your obvious question, I cannot release any of my code because it's not my code to release :-)
Here is a useful link, explaining how a TDMA system may be implemented in GNU Radio.
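For reference, here is a minimal C++ sketch of the underlying UHD mechanism that GNU Radio's tx_time/tx_sob/tx_eob tags map to: a burst stamped with a transmit time in the future (the device, sample rate, frequency, payload and the 2-second slot time are all made-up placeholders):

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <vector>

int main()
{
    auto usrp = uhd::usrp::multi_usrp::make("");        // first device found
    usrp->set_tx_rate(1e6);
    usrp->set_tx_freq(uhd::tune_request_t(915e6));
    usrp->set_time_now(uhd::time_spec_t(0.0));          // in a real TDMA system: sync to GPS/PPS instead

    uhd::stream_args_t stream_args("fc32", "sc16");
    uhd::tx_streamer::sptr tx = usrp->get_tx_stream(stream_args);

    std::vector<std::complex<float>> slot(10000, std::complex<float>(0.5f, 0.0f)); // dummy slot payload

    uhd::tx_metadata_t md;
    md.start_of_burst = true;                            // corresponds to tx_sob
    md.end_of_burst   = true;                            // corresponds to tx_eob
    md.has_time_spec  = true;
    md.time_spec      = uhd::time_spec_t(2.0);           // corresponds to tx_time: send at t = 2 s

    tx->send(&slot.front(), slot.size(), md, 3.0);       // blocks until accepted; 3 s timeout
    return 0;
}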

How to code ARM interrupt functions in C

I am using arm-none-eabi-gcc toolchain, v 4.8.2, on LinuxMint 17.2 64b.
I am, at hobbyist level, trying to play with a TM4C123G board and its usual features (coding various blinkies, UART things...), while trying to remain as close to the metal as possible and avoiding other libraries (e.g. CMSIS...) whenever possible. Also no IDE (CCS, Keil...), just Linux terminal windows, the board and I... All that mostly for education purposes.
The issue: I am stuck trying to implement the usual interrupt functions, such as:
EnableInt (clearing bit 0, the I bit, of the special register PRIMASK):
CPSIE I
WaitForInt :
WFI
DisableInt :
CPSID I
E.g., I added this function to my .c file for EnableInt:
void EnableInt(void)
{ __asm(" cpsie i\n");
}
... this compiles but the execution does not seem to work properly (in the simplest blinky.c version, I cannot get any LED action once I have called EnableInt() in the C code). The blinky.c code can be found here.
What would be the proper way to write these interrupt routines in a .c file (ideally without using other libraries, but just setting/clearing bits of the appropriate registers...)?
EDIT : removed the bx lr instructions - but EnableInt() does not seem to work any better - still looking for a solution.
EDIT2: Actually the function EnableInt(), defined as above, is now working. My SysTick_Handler was mapped incorrectly in the interrupt vector table in the startup file (while the original problem was the bx lr instructions, which I removed in EDIT 1).
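For completeness, the three routines in that same no-library style now look like this (a minimal sketch; the volatile qualifier and the "memory" clobber keep GCC from reordering them relative to surrounding code, and no bx lr is needed because the compiler generates the function return itself):

void EnableInt(void)  { __asm volatile ("cpsie i" ::: "memory"); } /* clear PRIMASK   */
void DisableInt(void) { __asm volatile ("cpsid i" ::: "memory"); } /* set PRIMASK     */
void WaitForInt(void) { __asm volatile ("wfi"); }                  /* sleep until IRQ */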
The ARM Cortex-M4 CPU which your Tiva MCU incorporates basically does not require the software environment to take special action on entry to or exit from an interrupt handler. The only requirement is to use the AAPCS calling standard, which should be the default with gcc when compiling for this CPU.
The CPU is supported by some tightly coupled "core" peripherals provided by ARM. These are standard for most (if not all) Cortex-M3/4 MCUs. MCU vendors can configure some features, but the basic operation is always the same.
To simplify software development, ARM has introduced the CMSIS software standard. This consists at least of some header files which unify access to the core peripherals and to special CPU instructions. Among those are intrinsics to manipulate the special CPU registers like PRIMASK, BASEPRI, FAULTMASK, etc. Another header provides definitions of the core peripherals and functions to manipulate some of them where a simple access is not sufficient.
One of these peripherals supports the CPU in interrupt handling: the NVIC (nested vectored interrupt controller). It prioritises interrupts against each other and provides the interrupt vector to the CPU, which uses this vector to fetch the address of the interrupt handler.
The NVIC also includes enable bits for all interrupt sources. So, to have an interrupt processed by the CPU, for a typical MCU you have to enable the interrupt in two or three locations (a minimal sketch follows the list):
PRIMASK/BASEPRI in the CPU: the last line of defense. These are the global interrupt gates. PRIMASK is similar to the interrupt-enable bit in the status register of smaller CPUs; BASEPRI is part of interrupt-priority resolution (just ignore it for the beginning).
The NVIC interrupt-enable bit for each peripheral interrupt source, e.g. timer, UART, SPI, etc. Many peripherals have multiple internal sources tied to one NVIC line (e.g. the UART rx and tx interrupts).
The interrupt-enable bits in the peripheral itself, e.g. the UART rx interrupt, tx interrupt, rx-error interrupt, etc.
Some peripherals might not have internal bits, so the last one might be missing.
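A minimal CMSIS-style sketch of the idea, for a hypothetical UART receive interrupt on the TM4C123 (the device header name, the IRQ name and the RXIM bit position are assumptions; check TI's CMSIS header and the data sheet for the real ones):

#include "TM4C123GH6PM.h"              /* vendor CMSIS device header (assumed name)   */

void uart0_rx_irq_setup(void)
{
    UART0->IM |= (1u << 4);            /* 3) peripheral: unmask the UART rx interrupt */
    NVIC_SetPriority(UART0_IRQn, 3);   /* 2) NVIC: set a priority ...                 */
    NVIC_EnableIRQ(UART0_IRQn);        /*    ... and enable this interrupt line       */
    __enable_irq();                    /* 1) CPU: clear PRIMASK, the global gate      */
}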
To get things working, you should read the reference manual (family guide, or similar); then there is often some "programming the Cortex-M4" how-to (e.g. ST has one for the STM32 series). You should also get the documents from ARM (they are available for free download).
Finally, you need the CMSIS headers from your MCU vendor (TI here). These should be tailored for your MCU. You might have to provide some #defines.
And, yes, this is quite some stuff to read. But imo it is worth the effort. Alternatively you might start with a book. There are some out there which might be helpful to get the whole picture first (which is really hard to get from the individual documents alone - yet possible).

On reset what happens in embedded system?

I have a question regarding the reset that occurs on power-up:
1. As I understand it, the microcontroller is hardwired to start at some particular memory location, say 0000H, on power-up. Is the reset service routine itself written at 0000H (initialisation of the stack pointer, program counter, etc.), or does 0000H hold a reset address (say 7000H) so that the microcontroller jumps to 7000H, where the initialisation of the stack and PC is written?
2. Who writes this reset service routine? Is it the manufacturer of the microcontroller chip (Intel, Microchip, etc.), or can any programmer change it (for example, changing the start address from 7000H to 4000H on power-up reset, so that the first instruction is fetched from 4000H instead of 7000H)?
3. How are the stack pointer and program counter initialised to their respective initial values, given that on power-up the microcontroller is not yet in a state to load addresses into them (no initialisation has been done before the reset service routine runs)?
4. What should the steps in the reset service routine be, considering all possibilities?
With reference to your numbering:
The hardware reset process is processor dependent and will be fully described in the data sheet or reference manual for the part, but your description is generally the case - different architectures may have subtle variations.
While some microcontrollers include a ROM based boot-loader that may contain start-up code, typically such bootloaders are only used to load code over a communications port, either to program flash memory directly or to load and execute a secondary bootloader to RAM that then programs flash memory. As far as C runtime start-up goes, this is either provided with the compiler/toolchain, or you write it yourself in assembler. Normally even when start-up code is provided by the compiler vendor, it is supplied as source to be assembled and linked with your application. The compiler vendor cannot always know things like memory map, SDRAM mapping and timing, or processor clock speed or what oscillator crystal is used in your hardware, so the start-up code will generally need customisation or extension through initialisation stubs that you must implement for your hardware.
On ARM Cortex-M devices the initial PC and stack pointer are in fact loaded by hardware: they are stored at the reset address (the first entries of the vector table) and loaded on power-up. In the general case, however, you are right: the reset address either contains the start-up code or a vector to the start-up code; on pre-Cortex ARM architectures, the reset address actually contains a jump instruction rather than a true vector address. Either way, the start-up code for a C/C++ runtime must at least initialise the stack pointer, initialise static data, perform any necessary C library initialisation and jump to main(). In the case of C++ it must also execute the constructors of any global static objects before calling main().
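A minimal sketch of what such Cortex-M start-up code can look like in C (the linker-script symbol names _estack, _sidata, _sdata, _edata, _sbss and _ebss are assumptions, and a real start-up file would also list the remaining exception vectors):

#include <stdint.h>

extern uint32_t _estack, _sidata, _sdata, _edata, _sbss, _ebss;
extern int main(void);

void Reset_Handler(void)
{
    uint32_t *src = &_sidata, *dst = &_sdata;
    while (dst < &_edata) *dst++ = *src++;            /* copy initialised data from flash to RAM    */
    for (dst = &_sbss; dst < &_ebss; dst++) *dst = 0; /* zero the .bss section                      */
    main();                                           /* a C++ runtime would run constructors first */
    for (;;) { }                                      /* trap if main() ever returns                */
}

/* The first two vector-table entries: hardware loads entry 0 into SP and entry 1
   into PC at reset; the linker script places .isr_vector at the reset address. */
typedef void (*vector_t)(void);
__attribute__((section(".isr_vector"), used))
const vector_t vector_table[] = {
    (vector_t)&_estack,   /* initial stack pointer value */
    Reset_Handler,        /* reset vector (initial PC)   */
};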
The processor cores normally have, as you say, a starting address of some sort: either a table (a list of addresses) or, as on ARM, a place where instructions are executed. What is wrapped around that core, but still within the chip, can vary. Cores that are not specific to the chip vendor, like the 8051, MIPS, ARM, XScale, etc., are going to have a much wider range of different answers. Some microcontroller vendors, for example, will look at strap pins: if a strap is wired a certain way when reset is released, the part executes from a special boot flash inside the chip, a bootloader that you can, for example, use to program the user boot flash. If the strap is not tied that way, it boots your user code. One vendor I know of still has the part boot its bootloader flash: if the vector table has a valid checksum, they jump to the reset vector in your vector table; otherwise they sit in their bootloader mode waiting for you to talk to them.
When you get into the bigger processors (non-microcontrollers), the software lives outside the processor, either in a boot flash (a separate chip from the processor) or in some RAM that is managed somehow before reset. Those usually follow the rule for the core: start at address 0xFFFFFFF0, or start at address 0x00000000; if there is garbage there, oh well, fire off the undefined-instruction vector, and if that is garbage too, just hang there or sit in an infinite loop calling the undefined-instruction vector. This works well for an ARM, for example: you can build a board with a boot flash that is erased from the factory (all 0xFFs), then use JTAG to stop the ARM and program the flash the first time, and you don't have to unsolder or socket or pre-program anything. So long as your bootloader doesn't hang the ARM you can have an unbrickable design. (Actually you can often hold the ARM in reset and still get at it with the JTAG debugger, and not worry about bad code messing with the JTAG pins or hanging the ARM core.)
The short answer: How many different processor chip vendors have there been? There are many different solutions, as many as you can think of and more have been deployed. Placing a reset handler address in a known place in memory is the most common though.
EDIT:
Questions 2 and 3: if you are buying a chip, some of the microcontrollers have this protected bootloader, but even with that, normally you write the boot code that will be used by the product. Part of that boot code initialises the stack pointers, prepares memory, brings up parts of the chip, and all those good things. Sometimes chip vendors will provide examples. If you are buying a board-level product, then you will often find a board support package (BSP) which has working example code to bring up the board and perhaps do a few things. Take the BeagleBoard, the OpenRD, or embeddedarm.com boards, for example: they come with a bootloader (U-Boot or other) and some already have Linux pre-installed. On boards like that the user usually just writes some Linux apps/drivers and adds them to the BSP, but you are not limited to that; you are often welcome to completely re-write and replace the bootloader. And whoever writes the bootloader has to set up the stacks, bring up the hardware, etc.
On systems like the Game Boy Advance or the NDS, the vendor has some start-up code that calls your start-up code. So the stack and such may already be set up when they hand off to you; much of the system may be up, and you just get to decide how to slice up the memories: where you want your stack, data, program, etc.
Some vendors want to keep this stuff controlled or secret; others do not. In some cases you may end up with a board or chip with no example code, just some data sheets and reference manuals.
If you want to get into this business, though, you need to be prepared to write this start-up code (in assembler), which may call some C code to bring up the rest of the system, which then might start up the main operating system or application or whatever. Microcontrollers sound like what you are playing with; the answers to your questions are in the chip vendor's user's guides, and some vendors are better than others. Search for the word reset or boot in the document to try to figure out what their boot schemes are. I recommend you use "dollar votes" to choose the better vendors. A vendor with bad docs, secret docs, bad support: don't give them your money; spend it on vendors with freely downloadable, well-written docs, with well-written examples and/or user forums with full-time employees trolling around answering questions. There are times when the docs are not available except to serious, paying customers; it depends on the market. Most general-purpose embedded systems, though, are openly documented. The quality varies widely, but the docs etc. are there.
Depends completely on the controller/embedded system you use. The ones I've used in game development have the IP point at a starting address in RAM. The boot strap code supplied from the compiler initializes static/const memory, sets the stack pointer, and then jumps execution to a main() routine of some sort. Older systems also started at a fixed address, but you manually had to set the stack, starting vector table, and other stuff in assembler. A common name for the starting assembler file is CRT0.s for the stuff I've done.
So 1. You are correct. The microprocessor has to start at some fixed address.
2. The ISR can be supplied by the manufacturer or compiler creator, or you can write one yourself, depending on the complexity of the system in question.
3. The stack and initial program counter are usually handled via some sort of bootstrap routine that quite often can be overridden with your own code. See above.
Last: The steps will depend on the chip. If there is a power interruption of any sort, RAM may be scrambled, so any ISR vector tables and startup code held in RAM may need to be rewritten, and the app should be run as if it had just powered up. But, read your documentation! I'm sure there is platform-specific stuff there that will answer these questions for your specific case.

Why are GPIOs used?

I have been searching around [in vain] for some good links/sources to help understand GPIOs and why they are used in embedded systems. Can anyone please point me to some?
In any useful system, the CPU has to have some way to interact with the outside world - be it lights or sounds presented to the user or electrical signals used to communicate with other parts of the system. A GPIO (general purpose input/output) pin lets you either get input for your program from outside the CPU or to provide output to the user.
Some uses for GPIOs as inputs:
detect button presses
receive interrupt requests from external devices
Some uses for GPIOs as outputs:
blink an LED
sound a buzzer
control power for external devices
A good case for a bidirectional GPIO or a set of GPIOs can be to "bit-bang" a protocol that your SoC doesn't provide natively. You could roll your own SPI or I2C interface, for example.
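For example, here is a minimal sketch of bit-banging an SPI-style byte write (mode 0, MSB first) on two GPIOs; gpio_write() and the pin numbers are hypothetical placeholders for whatever your SoC's GPIO interface provides:

#include <stdint.h>

#define PIN_SCK  0
#define PIN_MOSI 1

extern void gpio_write(int pin, int level);        /* assumed: drive a GPIO high or low */

static void spi_write_byte(uint8_t byte)
{
    for (int i = 7; i >= 0; i--) {
        gpio_write(PIN_MOSI, (byte >> i) & 1);     /* put the data bit on the line      */
        gpio_write(PIN_SCK, 1);                    /* peripheral samples on rising edge */
        gpio_write(PIN_SCK, 0);
    }
}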
The reason you cannot find an answer is probably that, if you know what an embedded system is and does, or indeed anything about digital electronic systems, the answer is rather too obvious to write down! That is to say, if you get as far as actually implementing a working embedded system, you should already know what they are.
GPIO pins are, as a minimum, two-state digital logic I/O. In most cases some or all of them may also be interrupt sources. These interrupts may have options for rising-edge, falling-edge, dual-edge, or level triggering.
On some targets GPIO pins may have configurable output circuitry to allow, for example, external pull-ups to be omitted, or to allow connection to devices that require open-collector outputs, and in some cases even to provide filtering of high frequency noise and glitches.
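To make that concrete, here is a sketch of the kind of pin configuration described above, using hypothetical memory-mapped register names and addresses (the real names, addresses and bit layouts are device specific):

#include <stdint.h>

#define GPIOA_DIR (*(volatile uint32_t *)0x40004400u)  /* direction: 1 = output        */
#define GPIOA_PUR (*(volatile uint32_t *)0x40004510u)  /* internal pull-up enable      */
#define GPIOA_IEV (*(volatile uint32_t *)0x4000440Cu)  /* interrupt event: edge select */
#define GPIOA_IM  (*(volatile uint32_t *)0x40004410u)  /* interrupt mask               */

void button_pin_init(void)
{
    GPIOA_DIR &= ~(1u << 3);   /* PA3 as input                */
    GPIOA_PUR |=  (1u << 3);   /* enable the internal pull-up */
    GPIOA_IEV &= ~(1u << 3);   /* trigger on the falling edge */
    GPIOA_IM  |=  (1u << 3);   /* unmask the pin's interrupt  */
}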
In most embedded systems, a processor will be ultimately responsible for sensing the state of various devices which translate external stimuli to digital-level logic voltages (e.g. when a button is pushed, a pin will go low; otherwise it will sit high), and controlling devices which translate logic-level voltages directly into action (e.g. when a pin is high, a light will go on; when low, it will go off). It used to be that processors did not have general-purpose I/O, but would instead have to use a shared bus to communicate with devices that could process I/O requests and set or report the state of the external circuits. Although this approach was not entirely without advantages (one processor could monitor or control thousands of circuits on a shared bus), it was inconvenient in many real-world applications.
While it is possible for a processor to control any number of inputs and outputs using a four-wire SPI bus or even a two-wire I2C bus, in many cases the number of signals a processor will need to monitor or control is sufficiently small that it's easier to simply include the circuitry to monitor or control some signals directly on the chip itself. Although dedicated interfacing hardware will frequently have output-only or input-only pins (the person choosing the hardware interface chips will know how many signals need to be monitored, and how many need to be controlled), a particular family of processor may be used in some applications that require e.g. 4 inputs and 28 outputs, and other applications that require 28 inputs and 4 outputs. Instead of requiring that different parts be used in applications with different balances between inputs and outputs, it's simpler to just have one part with inputs that can be configured as inputs or outputs, as needed.
I think you have it backwards. GPIO is the default in electronics. It's a pin, a signal, that can be programmed. Everything is made up of these. For a processor, dedicated peripherals are a special case, they're extras for when you know you want a more limited function.
From a chip manufacturer's perspective, you often don't know exactly what the user needs, so you can't put the exact peripherals on your chip; you make generic ones instead. Many applications are so rare that there's no market for a specific chip. The only thing you can do is use GPIO or make specific hardware yourself. Also, all (unused or potentially unused) pins are worth turning into GPIOs because that makes the part even more generic and reusable. Generic and reusable is very nearly the whole point of programmable chips; otherwise you would just make ASICs.
Some particularly suitable applications:
Reset parts (chips) in a system
Interface to switches, keypads, lights (all they have is one pin/signal!)
Controlling loads with relays or semiconductor switches (on-off)
Solenoid, motor, heater, valve...
Get interrupts from single signals
Thermostats, limit switches, level detectors, alarm devices...
BTW, the Parallax Propeller has practically nothing but GPIO pins. Peripherals are made in software. It works very well for many uses.