In Verilog, how is the "wire" data type managed in computer memory? - hdl

It's easy to visualize how a computer stores reg and int variables in memory: it simply allocates 1 or 32 bits respectively and stores the initial value in binary form. When these variables need to be printed, they are converted back to human-readable form based on the format specifier (%d, %b, etc.).
**But how does the computer store the wire data type in memory, since this data type is not meant for storage?**
How does computer memory differentiate between data-storing variables (int, reg) and data-transmission variables (nets)?
It would be really helpful if someone could explain this in a way that lets me visualize what goes on inside the computer when it deals with the "wire" data type.

This is independent of the hardware description language, it does not matter if this is Verilog or VHDL or any other.
A wire connects "sources" (producers of values on the wire) and "sinks" (consumers of these values). If a wire has no source and no sink, its value cannot be seen, as a "watcher" is a consumer.
One possible implementation for a software object of a wire is a list of producers and consumers.
In the case of a simulator: each time a consumer needs to know the value on the wire, the software object looks up its list of producers and calculates the resulting value.
So your first thought is correct: a software object representing a wire commonly does not store the value on the net. But it does store references to the other components it connects. In this respect it is static, just like a real wire.
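A minimal sketch of this idea in C (the names and the two-state values are purely illustrative, not how any particular simulator works): the wire object holds only a list of driver callbacks, and its value is computed on demand.

```c
#include <stddef.h>

/* Illustrative two-state model: each driver is a callback that
   reports the value it currently puts on the wire (0 or 1). */
typedef int (*driver_fn)(void *ctx);

struct wire {
    driver_fn drivers[8];  /* producers connected to this wire */
    void     *ctx[8];      /* per-driver context               */
    size_t    ndrivers;
};

/* The wire stores no value of its own; a consumer asks for it and
   the value is resolved from the drivers at that moment.  With a
   single driver, "resolution" is just that driver's output. */
int wire_value(const struct wire *w)
{
    return w->ndrivers ? w->drivers[0](w->ctx[0]) : -1; /* -1 = Z */
}
```

With several drivers, wire_value() would instead combine all driver outputs through a resolution function, as the next answer describes.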

A wire is not a data type; it represents a network of connections between drivers and receivers. You can say that it is an HDL abstraction for a piece of metal in an electronic circuit. A variable is also an HDL abstraction. It also has the potential to represent a piece of metal, or it could be a register. A register does not store anything; it is an electronic circuit that supplies a current and voltage to a set of transistors connected by wires. It's only when you start talking about non-volatile memory that there begins to be any concept of storage.
How a tool simulating an HDL represents these wires and variables is not directly tied to how they are synthesized into hardware. Technically, a Verilog wire's value is represented by a built-in resolution function of all of its drivers. When any driver changes its value, the resolution function gets called and computes a resolved value based on the state of the drivers and their strengths. But since the overwhelming majority of wires have only one driver, there's no reason a simulator could not just store the state of that driver in "memory" and use that value whenever some code needs it.
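To make the resolution function concrete, here is a simplified sketch in C (the names are invented, and drive strengths are deliberately ignored). Verilog wires carry four states: 0, 1, Z (undriven) and X (unknown); with equal-strength drivers, conflicting values resolve to X.

```c
#include <stddef.h>

enum logic4 { L0, L1, LZ, LX };  /* the four Verilog wire states */

/* Resolve the value seen on a wire from all of its drivers. */
enum logic4 resolve(const enum logic4 *drivers, size_t n)
{
    enum logic4 result = LZ;            /* no driver -> high impedance  */
    for (size_t i = 0; i < n; i++) {
        if (drivers[i] == LZ) continue; /* a Z driver has no effect     */
        if (result == LZ)
            result = drivers[i];        /* first active driver wins     */
        else if (result != drivers[i])
            return LX;                  /* equal-strength conflict -> X */
    }
    return result;
}
```

A simulator would call something like this whenever any driver on the wire changes, then propagate the resolved value to the consumers.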
You might want to read: https://blogs.sw.siemens.com/verificationhorizons/2013/05/03/wire-vs-reg/

Related

How does Mission Planner update Parameters List values?

In Mission Planner, when you change any parameter in the parameter list, say RC limits or PID, and press 'write parameters', the software updates the parameters.
I tried to find out how this happens, but to no avail (I don't know what it's called exactly). How does Mission Planner write parameters to the already existing firmware on the APM board? Or does it rewrite the firmware with the updated parameters?
I want to implement a similar kind of procedure. To test with, I have an Arduino board running some code. Instead of uploading the entire code again and again, there must be a way to just update the value of a variable using some protocol (e.g. Serial) from custom software on the PC, just like updating a parameter when required. How do I do it?
Thanks.
The ATMEGA1280 used on the ArduPilotMega has a 4K EEPROM on-chip. Other MCUs used in Arduinos have EEPROM of varying capacity. The Arduino library includes support for it: https://www.arduino.cc/en/Reference/EEPROM
An EEPROM (Electrically Erasable Programmable Read-Only Memory) is a non-volatile memory technology similar to flash, but with properties that make it better suited to storing small amounts of configuration data, such as being byte-level rewritable. It is much less dense (takes up more space) than flash memory, so it is less suited to code storage.
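A minimal sketch of this approach in C for an AVR target, using avr-libc's <avr/eeprom.h> (a real API; the uart_getc() helper and the 'S' + index + value wire format are invented for illustration):

```c
#include <stdint.h>
#include <avr/eeprom.h>

extern uint8_t uart_getc(void);  /* assumed blocking UART read, not shown */

/* Hypothetical frame from the PC: 'S', parameter index, new value. */
void poll_param_update(void)
{
    if (uart_getc() != 'S')
        return;                   /* not a set-parameter frame */
    uint8_t index = uart_getc();
    uint8_t value = uart_getc();
    /* eeprom_update_byte only writes if the value actually changed,
       which reduces EEPROM wear */
    eeprom_update_byte((uint8_t *)(uint16_t)index, value);
}

/* At boot, the firmware loads its working parameters from EEPROM. */
uint8_t load_param(uint8_t index)
{
    return eeprom_read_byte((const uint8_t *)(uint16_t)index);
}
```

This way the code itself never changes; only the configuration bytes in EEPROM do.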

What happens at the receive part when we write with SPI?

When the SPI master writes to the slave, something is shifted into the receive buffer, right?
If yes, then is it normal for the "RXDATAAVAILABLE" flag to be set? It seems like nonsense: we send data, and when the data has been sent, we get notified that data has been received.
If all of my statements are correct, then how do we know which data in the RXFIFO is the correct data?
Suppose we send a two-byte frame. The first byte is the address and the second one is a dummy, in order to read the value at that address (on the slave). Then suppose we have a two-level Rx FIFO. In that FIFO, instead of just the value read from the slave, we have two bytes: the first is who-knows-what, and the second is the value read from the slave.
So the question is: how do we manage to receive only what is necessary, without getting junk data during the write part of the frame?
SPI works like a simple 8-bit shift register. You shift out bits on MOSI on each clock edge and at the same time shift in new data from MISO. Thus you send and receive at the same time; hence the names MOSI = Master Out Slave In and MISO = Master In Slave Out.
SPI peripherals on microcontrollers are more intricate than that, though, and have separate data registers distinct from the actual hardware shift register, so that we can write data without worrying about the pending transmission. Some may even have multiple data buffers. But at the fundamental level, SPI always works with 8 bits.
When the microcontroller acting as SPI master writes something, there are usually two flags: one that says the data buffer is available, and another that tells you the transmission is done.
When you are done sending, you are also done receiving. You'll get some sort of flag set. This is assuming that all devices are implementing SPI as intended, which is often not the case.
Note that some devices implement a system where you first send x bytes of data, and after that receive x bytes of data. This seems to be the scenario you describe. Sending and receiving is not done at the same time for that device, but instead in sequence. Meaning that during the first transmission, you'll clock in garbage, and then in order to receive data, you must clock out garbage. This is no fault of SPI, but how the manufacturer of the specific device has specified things.
Note that SPI is very poorly standardized and therefore all manner of weird crap exists on the market. The manner of sending/receiving data may vary, the clock polarity (edges) may vary, and where the device clocks the data may vary. Some devices might need delays between data bytes. Some devices might need some obscure handling of the Slave Select pin in order to work. It's all one big mess, and the lack of international standardization is to blame.
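To make the "shift out while shifting in" point concrete, here is a bit-banged sketch of one SPI mode 0 byte transfer in C (the pin helpers are assumed placeholders for real GPIO accesses):

```c
#include <stdint.h>

/* Assumed platform-specific GPIO helpers, not shown here. */
extern void set_sck(int level);
extern void set_mosi(int level);
extern int  get_miso(void);
extern void delay_half_period(void);

/* One full-duplex SPI mode 0 transfer: every clock cycle shifts one
   bit out on MOSI and samples one bit in from MISO.  Sending and
   receiving are literally the same operation. */
uint8_t spi_transfer(uint8_t out)
{
    uint8_t in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        set_mosi((out >> bit) & 1);   /* present next bit, MSB first    */
        delay_half_period();
        set_sck(1);                   /* rising edge: both sides sample */
        in = (uint8_t)((in << 1) | (get_miso() & 1));
        delay_half_period();
        set_sck(0);                   /* falling edge: shift            */
    }
    return in;
}
```

This is why a "data received" flag is set after every write: eight clocks always move a byte in each direction, whether or not the incoming byte means anything.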
An SPI master engine's received data available flag will be set as a simple result of the occurrence of a word's worth of clock cycles generated by the master itself. It tells you nothing about the operation or even existence of a peripheral on the bus.
When this flag is set, it is entirely up to you and your software to know if the contents of the received data register will have meaning or not.
If you have properly selected and interacted with an existent, operational peripheral in a read or transfer operation where it is documented to give a result, the contents will have meaning.
If you have performed a purely write operation on a peripheral for which no reply data is documented at the word position in question, they will be meaningless, effectively no different from reading some random legal memory location. Note that in most cases, a write operation is simply a transfer where the received data is to be ignored; at the implementation level there is usually no other difference.
If you have failed to address any existent peripheral, the contents will similarly be meaningless.
As with any other memory or read operation, it is up to you to know if the contents of the register in a particular situation are meaningful or not.
Since you know that the first byte contains "who knows what" while the second has meaning, write your software to ignore the first and use the second.
(As an aside, many, though by no means all, SPI peripherals are documented to shift out whatever constitutes their primary status register during the address phase, since that makes for a quick way to poll it)
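Applied to the two-byte frame from the question, a sketch in C (spi_transfer() is a full-duplex byte exchange like the one sketched above; the chip-select helpers and register layout are invented for illustration):

```c
#include <stdint.h>

extern uint8_t spi_transfer(uint8_t out); /* full-duplex byte exchange   */
extern void cs_assert(void);              /* assumed chip-select helpers */
extern void cs_release(void);

/* Read one register from a hypothetical peripheral: the first byte on
   the wire is the address, the second is a dummy that clocks the
   reply in. */
uint8_t read_reg(uint8_t addr)
{
    cs_assert();
    (void)spi_transfer(addr);         /* byte 1: RX contents are junk,
                                         discard them deliberately     */
    uint8_t value = spi_transfer(0);  /* byte 2: dummy TX, real RX     */
    cs_release();
    return value;                     /* keep only the meaningful byte */
}
```

The discarded first byte is exactly the "who knows what" from the question; software, not hardware, decides which received words matter.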

Microcontroller to microcontroller communication library (over UART/RS232)

I want to interface two microcontrollers over UART, and I am looking for a protocol to exchange data between them.
In practice, I want to exchange data periodically (i.e. sensor readings) and also data on events (GPIO state). I have around 100-200 bytes to exchange every 100 milliseconds.
Does anybody know of a protocol or library to achieve this kind of task?
For now, I have found protobuf and nano protobuf. Is there something else?
It would be nice if I could add a software layer over the UART and use "virtual data streams", as if it were a TCP/IP connection with N ports.
Any ideas?
Thanks
I think the most straightforward way is to roll your own.
You'll find RS232 drivers in the manufacturer's chip support library.
RS232 is a stream-oriented transport; that means you will need to encode your messages into some framing structure when you send them, and detect frame boundaries on the receiver side. A clever and easy-to-use mechanism for this is "Consistent Overhead Byte Stuffing" (COBS).
https://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuffing
This simple algorithm turns zeros in your messages into some other value, so the zero byte can be used to detect the start and end of a frame. If a byte gets corrupted on the way, you can even resynchronize to the stream and keep going.
The code on Wikipedia should be easy enough even for the smallest microprocessors.
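For a feel of how small it is, here is a COBS encoder sketch in C (a generic implementation of the algorithm, not the exact Wikipedia listing):

```c
#include <stddef.h>
#include <stdint.h>

/* Encode len bytes from in[] into out[]; out must hold at least
   len + len/254 + 1 bytes.  The encoded data contains no zero bytes,
   so a 0x00 delimiter can then be appended to mark the frame end.
   Returns the encoded length. */
size_t cobs_encode(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t  read = 0, write = 1, code_pos = 0;
    uint8_t code = 1;  /* distance to the next zero, including itself */

    while (read < len) {
        if (in[read] == 0) {            /* zero byte: close this block */
            out[code_pos] = code;
            code_pos = write++;
            code = 1;
        } else {
            out[write++] = in[read];
            if (++code == 0xFF) {       /* block full: start a new one */
                out[code_pos] = code;
                code_pos = write++;
                code = 1;
            }
        }
        read++;
    }
    out[code_pos] = code;
    return write;
}
```

For example, {0x11, 0x22, 0x00, 0x33} encodes to {0x03, 0x11, 0x22, 0x02, 0x33}; appending the 0x00 delimiter then frames it on the wire.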
Afterwards you can define your message format. You can probably keep it very simple and directly send your data structures as-is.
Suggestion for a simple message format:
Byte-ID   Meaning
---------------------------------
0         Destination port number
1         Message type (define your own)
2 to n    Message data
If you want to send variable-length messages, you can either send a length byte or derive the length from the output of the Consistent Overhead Byte Stuffing framing.
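As a sketch, that format maps onto a simple C struct (MAX_DATA is an arbitrary illustrative cap sized to the 100-200 bytes mentioned in the question):

```c
#include <stdint.h>

#define MAX_DATA 200  /* illustrative payload limit */

/* One application message in the format above; the whole thing is
   COBS-encoded before being written to the UART, and the length is
   either sent explicitly or recovered from the framing. */
struct message {
    uint8_t port;            /* byte 0: destination "virtual" port */
    uint8_t type;            /* byte 1: message type, user-defined */
    uint8_t len;             /* payload length in bytes            */
    uint8_t data[MAX_DATA];  /* bytes 2..n: payload                */
};
```

Dispatching on port on the receiving side gives you the "N virtual streams over one UART" behavior the question asks about.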
By the way, UART/RS232 is nice and easy to work with, but you may also want to take a look at SPI. The SPI interface is more suitable for exchanging data between two microcontrollers. It is usually faster than RS232 and more robust, because it has a dedicated clock line.
How about this: eRPC https://community.nxp.com/docs/DOC-334083
The eRPC (Embedded Remote Procedure Call) is a Remote Procedure Call (RPC) system created by NXP. An RPC is a mechanism used to invoke a software routine on a remote system using a simple local function call. The remote system may be any CPU connected by an arbitrary communications channel: a server across a network, another CPU core in a multicore system, and so on. To the client, it is just like calling a function in a library built into the application. The only difference is any latency or unreliability introduced by the communications channel.
I have used it in a two-processor embedded system, a Cortex-A9 CPU with a Cortex-M4 MCU, which communicate with each other over SPI/GPIO.
eRPC can run over UART, SPI, rpmsg and network (TCP). Even when using serial or SPI as the transport tunnel, it can do bidirectional calls, with a very minimal footprint.
Simple serial point-to-point communication protocol
http://www.zipplet.co.uk/index.php/content/openformats_mise
It depends on whether you need a master/slave implementation, noise protection, point-to-point or multi-point (and in that case collision detection), etc.
But, as our colleague said, I would go with the simplest solution that fits the problem, following the KISS principle: http://en.wikipedia.org/wiki/KISS_principle
Just add some header information like ID and length, plus CRC checking if necessary, and be happy :)
Try Microcontroller Interconnect Network (MIN) 1.0:
https://github.com/min-protocol/min
It has framing using byte stuffing to keep the receiver in sync, a 16-bit Fletcher checksum, an identifier for use by the application, and a variable payload of up to 15 bytes.
There's embedded C code there plus also a Python implementation to make it easier to talk to a PC.
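For reference, the 16-bit Fletcher checksum that MIN uses is tiny; here is a generic textbook sketch in C (not MIN's exact code):

```c
#include <stddef.h>
#include <stdint.h>

/* Fletcher-16: two running sums give position sensitivity that a
   plain additive checksum lacks, at nearly the same cost. */
uint16_t fletcher16(const uint8_t *data, size_t len)
{
    uint16_t sum1 = 0, sum2 = 0;
    while (len--) {
        sum1 = (uint16_t)((sum1 + *data++) % 255);
        sum2 = (uint16_t)((sum2 + sum1) % 255);
    }
    return (uint16_t)((sum2 << 8) | sum1);
}
```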
As the first answer says, the simplest approach is to roll your own. Define your header (the "format" above) as needed, perhaps including status information so each processor knows that the other is working properly. I have had success with a protocol that includes:
- A 2-byte ASCII prefix and suffix, such as "[" and "]", so that a protocol analyzer can show you message boundaries.
- The number of bytes.
- The command ID (parsed to indicate which command handler to use).
- Command arguments (I used 3 32-bit words).
- A CRC or checksum to verify transfer integrity.
The parser then recognizes the "[" as the start of a message, and dispatches the body to the command handler for the particular command ID, with the associated arguments, as long as the checksum matches.
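A minimal sketch of such a parser in C (the exact byte layout, '[' + length + command ID + three 32-bit arguments + checksum + ']', is one illustrative reading of the list above):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical frame: '[' len cmd arg0..arg2 (little-endian) sum ']' */
struct frame { uint8_t cmd; uint32_t arg[3]; };

static uint8_t checksum(const uint8_t *p, size_t n)
{
    uint8_t s = 0;
    while (n--) s = (uint8_t)(s + *p++);
    return s;
}

/* Returns 1 and fills *out if buf holds one valid len-byte frame. */
int parse_frame(const uint8_t *buf, size_t len, struct frame *out)
{
    if (len != 17 || buf[0] != '[' || buf[len - 1] != ']')
        return 0;                     /* boundaries must be intact  */
    if (buf[1] != len)
        return 0;                     /* declared length must match */
    if (checksum(&buf[1], len - 3) != buf[len - 2])
        return 0;                     /* reject corrupted transfers */
    out->cmd = buf[2];
    for (int i = 0; i < 3; i++)       /* 3 x 32-bit argument words  */
        out->arg[i] = (uint32_t)buf[3 + 4 * i]
                    | (uint32_t)buf[4 + 4 * i] << 8
                    | (uint32_t)buf[5 + 4 * i] << 16
                    | (uint32_t)buf[6 + 4 * i] << 24;
    return 1;
}
```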

How are SYNC words chosen?

I'm using a data transmission system which uses a fixed SYNC word (0xD21DB8) at the beginning of every superframe. I'd be curious to know how such SYNC words are chosen, i.e. what criteria designers use to pick the length and the value of such a SYNC word.
In short:
- high probability of uniqueness
- high density of transitions
It depends on the underlying "server layer" (in communication terms). If that server layer doesn't provide a means of distinguishing payload data from control signals, then a protocol must be devised. It is common in synchronous, bit-stream-oriented transport layers to rely on a SYNC pattern in order to delineate payload units. A good example of this technique is SONET/SDH/OTN, the major optical transport communication technologies.
Usually, the main criterion for choosing a SYNC word is a high probability of uniqueness. Of course, what gives it this uniqueness property depends on the encoding used for the payload.
Example: in SONET/SDH, once the SYNC word has been found, it is validated for a number of superframes (I don't remember exactly how many) before a valid sync state is declared. This is required because false positives can occur: encoding on a synchronous bit stream cannot be guaranteed to generate encoded payload patterns orthogonal to the SYNC word.
There is another criterion: a high density of transitions. Sometimes the server layer carries clock and data on the same signal (i.e. they are not separate). In this case, for the receiver to be able to delineate symbols from the stream, it is critical to ensure a maximum number of 0->1 and 1->0 transitions in order to extract the clock signal.
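As an illustration of both criteria, here is a small C sketch that scores a candidate SYNC word such as the question's 0xD21DB8 (the scoring is an illustrative check, not a standard design procedure):

```c
#include <stdint.h>
#include <stdio.h>

/* Count bit transitions in an n-bit word: more transitions make
   clock recovery easier. */
static int transitions(uint32_t w, int n)
{
    int t = 0;
    for (int i = 0; i < n - 1; i++)
        t += (int)(((w >> i) ^ (w >> (i + 1))) & 1);
    return t;
}

/* Count bits that match between the word and a shifted copy of
   itself.  Low self-similarity at nonzero shifts means a misaligned
   SYNC word is unlikely to be mistaken for the real one (the
   "uniqueness" criterion). */
static int self_match(uint32_t w, int n, int shift)
{
    int m = 0;
    for (int i = 0; i + shift < n; i++)
        m += (int)(!(((w >> i) ^ (w >> (i + shift))) & 1));
    return m;
}

int main(void)
{
    const uint32_t sync = 0xD21DB8;  /* 24-bit word from the question */
    printf("transitions: %d of 23 possible\n", transitions(sync, 24));
    for (int s = 1; s < 8; s++)
        printf("shift %d: %d matching bits\n", s, self_match(sync, 24, s));
    return 0;
}
```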
Hope this helps.
Updated: these presentations might be of interest too.
At the physical layer, another consideration (besides those mentioned in jldupont's answer) is that a sync word may be used to synchronise the receiver's communication clock to that of the sender. Synchronisation may only require zeroing the receiver's clock, but it may also involve changing the frequency of the clock to match the sender's more closely.
For a typical asynchronous protocol, the sender and receiver are required to have clocks running at the same nominal rate. In reality, of course, the clocks are never precisely the same, so a maximum error is normally specified.
Some protocols don't require the receiver to adjust its clock rate, but tolerate the error by oversampling or some other method. For example, a typical UART is able to cope with errors by zeroing its timing on the first edge of the start bit, and thereafter taking multiple samples at the point where it expects the middle of each bit to be. In this case, the sync word is just the start bit, and it ensures a transition at the start of the message.
In the HART industrial protocol, the sync word is 0xFF, plus a zero parity bit, repeated a number of times. This is represented as an analogue waveform, encoded using FSK, and appears as 8 periods (equal to 8 bit times) of a 1200 Hz sinusoidal wave, followed by one bit time at 2200 Hz. This pattern allows the receiver to detect that there is a valid signal, and then synchronise to the start of a byte by detecting the transition from 2200 Hz back to 1200 Hz. If required, the receiver can also use this waveform to adjust its clock.

How does firmware communicate with electronic devices to perform its operations?

Almost all electronic devices come with firmware. I know it is stored in ROM (read-only memory), so it is non-volatile (no power source is required to hold the contents, unlike RAM, which loses them when power is removed).
What I want to know is: how does firmware communicate with the electronic device to perform its operations?
Let's say there is a small roller. On press of a button, how does the firmware make it move?
Can someone please explain what is going on behind the scenes to make this happen?
I think it may require a brief explanation to unwind it all.
Also, what is the most popular language used for coding firmware?
Modern hardware like you're describing has a program stored in ROM and an all-purpose microcomputer (CPU) executing that program.
The CPU reads information from ROM by setting up addresses on its address bus and then asking the ROM to tell it the value stored at that location. There's something like a read pulse being raised (on a separate line) to tell the ROM to make the value accessible on the lines of the data bus. That, in a nutshell, is reading.
To get the hardware to do something, the CPU basically executes a kind of write operation. It puts a value, which is just a bunch of bits if you want to look at it that way, on the address bus to select a certain device and perhaps function on that device, then it raises another signal line saying "write!" The device that recognizes its address on the address bus responds to that signal by accepting the data from the data bus and then performing whatever its function is. Typically, one of the data bus bits will be connected within the output device to a power output stage, i.e. a transistor stronger than the ones used just for computation, and that transistor will connect some electrical device to current sufficient to make it move/glow/whatever.
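In C on a microcontroller, this write mechanism usually shows up as memory-mapped I/O; a minimal sketch with an invented register address:

```c
#include <stdint.h>

/* Hypothetical memory-mapped output register at an invented address.
   'volatile' tells the compiler that every access really must touch
   the bus rather than be optimized away. */
#define MOTOR_REG (*(volatile uint8_t *)0x40001000u)

void motor_on(void)  { MOTOR_REG = 1; }  /* this store becomes a bus
                                            write cycle that the output
                                            device decodes and acts on */
void motor_off(void) { MOTOR_REG = 0; }
```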
Tiny, cheap devices are coded in assembly language to save costs on ROM; in industrial quantities, even small amounts of memory can affect the price. The assembly language is specific to the CPU; chips called "8051", "6502" and "Atmel (something or other)" are popular. Bigger devices with more complex requirements may have their firmware written in C or a C-like dialect, which makes programming a little easier than assembler. The biggest ones even run C++ code. Compiled, of course.
In most systems there are special memory addresses which are used for I/O. Reading and writing on such addresses executes some function instead of just moving data around. In x86 systems there are also special I/O instructions IN and OUT for that.
The simplest case is called general-purpose I/O (GPIO), where you can read or write data directly from/to external electrical pins on the device. There are several memory addresses, called registers, where you can read data from the port (voltage near 0 = 0, near supply voltage = 1), write data to the port, and define whether a particular pin is an input (the corresponding bit is typically 0) or an output (the bit is 1). Every microcontroller has GPIO.
So in your example, the button could be connected to a pin configured as an input, which the software could sense. It would typically do this every 10 ms and only react if it reads a stable value several times in a row; this is called debouncing. Then it would write a 1 to some output, which via some transistor for amplification could drive a motor. If it senses that you have released the switch, it could turn the motor off again by writing a 0. And so on; this program runs until you turn the device off.
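A sketch of that loop in C (the register addresses, bit assignments and delay helper are all invented for illustration; real code would use the vendor's register definitions):

```c
#include <stdint.h>

/* Invented memory-mapped GPIO registers, for illustration only. */
#define GPIO_IN   (*(volatile uint8_t *)0x40002000u)  /* pin levels  */
#define GPIO_OUT  (*(volatile uint8_t *)0x40002004u)  /* pin drives  */
#define BUTTON    0x01u  /* bit 0: input from the button  */
#define MOTOR     0x02u  /* bit 1: output to motor driver */

extern void delay_ms(unsigned ms);  /* assumed timing helper */

int main(void)
{
    uint8_t stable = 0;
    for (;;) {
        delay_ms(10);                        /* poll every 10 ms    */
        /* debounce: require 4 identical reads before trusting it   */
        if (GPIO_IN & BUTTON) {
            if (stable < 4) stable++;
        } else {
            stable = 0;
        }
        if (stable >= 4) GPIO_OUT |=  MOTOR; /* pressed: motor on   */
        else             GPIO_OUT &= ~MOTOR; /* released: motor off */
    }
}
```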
There are lots of other I/O devices for other purposes with typically hundreds of registers for controlling them. If you want to see more you could look into the data sheet of some microcontroller. For example, here is the data sheet of ATtiny4/5/9/10, a very small controller from the Atmel AVR family.
Today most firmware is written in C, except for the smallest devices and for a little special code for handling resets and interrupts, which is written in assembly language.