verilog driving signals on the same wire - conflict

I looked through the internet and couldn't find a clear, concise answer to my question. I want to know what'll happen if I drive same-strength signals onto the same wire, one of them being logic 1 and the other being logic 0. What do I do if I want a signal to "win", for lack of a better word, depending on the situation?

Based on your comment, it sounds like you want a three-state bus. The basic structure to drive a three-state bus is:
assign bus = enable ? out : 1'bz;
Each module driving the bus has a driver of this form. Only one module may have its enable asserted at any time; the bus protocol should define how ownership of the bus is decided. For example, serial buses like I2C have a "master" and a "slave"; the master always talks first, and the slave only talks after it has been requested to by the master.
If you don't want the bus to float when nothing is driving (in simulation, this shows up as a Z value), you can declare the bus as tri0 or tri1 rather than a regular wire.
If multiple modules have the enable asserted at the same time, or if you have multiple standard assign bus = out; drivers attempting to drive different values on the bus, it is known as "contention". This will show up as an X value in simulation, and could result in damage to the drivers in a physical device.

I want to know what'll happen if I drive same-strength signals onto the same wire, one of them being logic 1 and the other being logic 0?
If the load is a simple net, it will be assigned StX (strong X).
What do I do if I want a signal to "win", for lack of a better word, depending on the situation?
Are you asking how to model this in Verilog or how to do this with MOS devices?

Related

What does a parallel port on an embedded system look like?

This may be a stupid question, but it's been confusing me.
I've been watching some videos on embedded systems, and they talk about parallel ports: the data, the direction, and the number of bits used.
I understand that the ports are connected to wires which feed other parts of the system or external devices. But I am confused because the lecture I watched says that to control a single LED would require 1 bit from 1 port.
My question is, what does the parallel port on an embedded system look like and how would you connect your own devices to the board? (say you made a device which sent 4 random bits to the port)
EDIT: I have just started learning, so I might have missed a vital piece of information that would tie this all together. I just don't understand how you can have an 8-bit port and only use 1 bit of it.
Firstly, you should know that the term "parallel port" can refer to a wide variety of connectors. People usually use the phrase to describe 25-pin connectors found on older PCs for peripherals like printers or modems, but they can have more or fewer pins than that. The Wikipedia article on them has some examples.
The LED example means that if you have an 8-bit parallel port, it will have 8 pins, so you would only need to connect one of the pins to an LED to be able to control it. The other pins don't disappear or anything strange; they can just be left unconnected. The rest of the pins will be either ones or zeros as well, but it doesn't matter because they're not connected. Writing a "1" or "0" to that one connected pin will drive the voltage high or low, which will turn the LED on or off, depending on how it's connected. You can write whatever you want to the other pins, and it won't affect the operation of the LED (though it would be safest to connect them to ground and write "0"s to them).
Here's an example:
// assume REG is a memory-mapped register that controls an 8-bit output
// port. The port is connected to an 8-pin parallel connector. Pin 0 is
// connected to an LED that will be turned on when a "1" is written to
// Bit 0 (the least-significant bit) of REG
REG = 0x01;  // write a "1" to bit 0, "0"s to everything else
I think your confusion stems from the phrase "we only need one bit", and I think it's a justified confusion. What they mean is that we only need to control the one bit on the port that corresponds to our LED to be able to manipulate the LED, but in reality you can't write just one bit at a time, so it's a bit (ha!) misleading. You (probably) won't find registers smaller than 8 bits anymore, so you do have to read/write the registers a whole byte at a time, but you can mask off the bits you don't care about, or do read-modify-write cycles to avoid changing bits you don't intend to.
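For example, the masking and read-modify-write might look like this in C (a minimal sketch; REG and its address are hypothetical, as above):
// Hypothetical 8-bit memory-mapped port register, as in the example above
#define REG (*(volatile unsigned char *)0x4000)  // made-up address

void led_on(void)  { REG |=  0x01; }  // set bit 0 only; other bits untouched
void led_off(void) { REG &= ~0x01; }  // clear bit 0 only
int  led_is_on(void) { return (REG & 0x01) != 0; }  // mask off the other bits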
Without a verbatim transcript of the videos in question, it is probably not possible to be precise about what they were specifically referring to.
The term "parallel port" historically refers to ports primarily intended for printer connections on a PC, conforming to the IEEE 1284 standard; the term distinguishes it from the "serial port", also used in some cases for printer connections but intended for two-way data communication in general. More generally, however, it can refer to any port carrying multiple simultaneous data bits on multiple conductors. In that sense it includes SDIO, SCSI, IDE, and GPIB, to name but a few; even the processor's memory data bus is an example of a parallel port.
In the context of embedded systems, it most likely refers to a word-addressed GPIO port, although it is not a particularly useful or precise term. Typically, GPIO (general-purpose I/O) ports on microcontrollers are word addressable (commonly 8, 16, or 32 bits wide); all bits of a single GPIO port may be written simultaneously (in parallel), with all bit edges synchronised so their states change at the same time.
Now in the case where you only want to access a single bit of a GPIO port (to control an LED, for example), some GPIO blocks allow single-bit access through separate set/clear registers, while others require read-modify-write semantics on the entire port. ARM Cortex-M supports "bit-banding", an alternate address space where every word address corresponds to a single bit in the physical address space.
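As a rough sketch of the set/clear style (the register names and addresses here are hypothetical; vendors differ, but many GPIO blocks offer something equivalent):
// Hypothetical GPIO registers: a write of 1 to a bit position in SET/CLR
// changes just that pin; OUT requires a read-modify-write of the whole port.
#define GPIO_OUT (*(volatile unsigned int *)0x50000000)
#define GPIO_SET (*(volatile unsigned int *)0x50000004)
#define GPIO_CLR (*(volatile unsigned int *)0x50000008)

void pin5_high(void) { GPIO_SET = 1u << 5; }        // single write, atomic
void pin5_low(void)  { GPIO_CLR = 1u << 5; }        // single write, atomic
void pin5_high_rmw(void) { GPIO_OUT |= 1u << 5; }   // read-modify-write style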
However, bit access of a GPIO port is not the same as a serial port; a serial port sends data bits one at a time, as opposed to multiple data bits simultaneously.
Moreover, the terms parallel port and serial port imply some form of block or stream data transfer, as opposed to control I/O where each bit controls one thing, such as your example of switching an LED on and off: there the LED is not "receiving data", it is simply being switched on and off. This is normally referred to as digital I/O (or DIO). In this context you might refer to a digital I/O port, a term that distinguishes it from analogue I/O, where the voltage on the pin can be set or measured rather than taking just two states, high or low.

PhotoCell circuit with identity

I have a photocell that gives me the intensity of light as a voltage. I want to add a unique number (that I can hard-code on the chip) along with the photocell info and send it in a format I can read using a digital computer (an Arduino). Any suggestion where I can start?
Sounds like you want to shop for a cheap microcontroller that's easy to work with, has an ADC, and has a small amount of flash memory. Almost every silicon vendor will claim to have something meeting that requirement; what you are familiar with and can easily buy in appropriate quantities may matter as much as the technical details of the offering. If you already have an Arduino, another Atmel part such as one of the 8-pin ATtinys might be attractive.
You write a little program with a loop that reads the photocell through the ADC, bundles the reading with an ID number stored in flash beside your program, and ships it off through something like a serial port to whatever system needs the information.
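A minimal sketch of that loop in C (adc_read and uart_send are placeholders for whatever ADC and UART routines your chosen part provides; the ID is just a constant that ends up in flash with the program):
#include <stdint.h>

#define SENSOR_ID 0x42u  // the hard-coded unique number, stored in flash

uint16_t adc_read(void);       // placeholder: your part's ADC read routine
void uart_send(uint8_t byte);  // placeholder: your part's UART send routine

int main(void)
{
    for (;;) {
        uint16_t light = adc_read();         // photocell voltage
        uart_send(SENSOR_ID);                // who is talking
        uart_send((uint8_t)(light >> 8));    // reading, high byte
        uart_send((uint8_t)(light & 0xFF));  // reading, low byte
    }
}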
The serial port can be a UART peripheral or bit-banged in a software timing loop. For short runs, people often skip the line driver/receiver on each end and signal at logic voltage rather than the higher voltage (and inverted sense; the drivers/receivers invert for you) of the RS-232 spec. Or you can use other schemes; synchronous ones like SPI or I2C are popular as well.
You might want to look at Maxim 1-Wire bus devices.
They share a bus connection and ground, and your Arduino can interrogate the bus.
Each device has a unique identifier and can be read.
The DS2438 is a cheap member of the family that can measure voltage.
(The DS2450 was a quad A/D converter, but it was a buggy chip and is now obsolete.)
Arduino drivers are at http://www.arduino.cc/playground/Learning/OneWire

Why is USB not interrupt-driven?

What's the rationale behind making USB a polling mechanism rather than interrupt-driven? The reasons I can come up with are:
Leave control of processing efficiency and granularity to OS, rather than the device itself.
Prevent "interrupt storms" by faulty devices.
Some explanations I found on the net say that it's mostly because of the nature of USB devices. They are mostly microcontroller-based systems which cannot queue larger transfers and therefore require short polling intervals, and such short intervals may not be the most efficient. Is that true?
Could there be other reasons?
The overarching premise of the development of USB was "cheap chips". This was achieved through the use of polling, which reduces the need for a higher-level arbitration protocol.
FireWire, which did allow devices to raise interrupts and even perform DMA, was much more expensive. So USB won in the low-cost field and FireWire in the low-latency/low-overhead field; as history played out, USB more or less won overall.
What's the rationale behind making USB a polling mechanism rather than interrupt-driven?
This seems to be anti-USB FUD (as in Fear, Uncertainty, Doubt).
The reason is that this simplifies things quite a bit at the hardware level: no more collisions, for example. USB is half-duplex to reduce the number of wires in the cable, so only one side can talk at a time anyway.
While USB uses polling on the wire, once you use it in software you will notice that you do have interrupts in USB. The only issue is a slight increase in latency, negligible in most use cases. Since the polling is usually realized in hardware (IIRC), software only gets notified when there is new data.
At the software level there are so-called "interrupt endpoints" - and guess what, every HID device uses them: mice, keyboards, and joysticks are all HID.
There are three ways to avoid data transfer collisions on a bus:
Have a somewhat complex bus management protocol. Such a protocol has to be rather sophisticated; if it is too simple, it will make the bus rather slow (see Token Ring, which is rather simple yet inefficient). However, a sophisticated protocol makes all components expensive, as all of them require management logic and need to understand how the bus actually works (see FireWire).
Don't avoid collisions at all; allow them, but detect and handle them. This is also somewhat complex, and the bus cannot guarantee any speed or latency, since constant collisions make throughput fall and latency rise (see Ethernet without switches, see WiFi).
Have one bus master that controls who can use the bus at which time and for how long. This is inexpensive, as only the master must be sophisticated and the master can give any possible guarantee regarding speed or latency.
And (3) is how USB works; not even the master's USB chips need to be sophisticated, as the master is usually a computer with a fast CPU that can perform all bus management in software.
USB device chips are dumb as toast. They don't need to understand the nifty details of the bus. They just have to look for packets addressed to them, which are either control packets (e.g. requesting metadata or selecting a configuration), data packets sent by the master, or poll requests from the master saying "if you have something to send, the bus is now yours".
To make sure the master polls them on time, they hand out a simple description table on request that explains which endpoints they offer, how often those need to be polled, and how much data they will at most transfer when being polled. The master can use that information to build up a poll schedule that ensures all devices are polled on time and get the bus long enough to allow their maximum transfer size. Of course, this is not possible under all circumstances. If you connect too many devices that require very frequent polling and always want to send a lot of data, your system may refuse to add a new device with an error saying its poll requirements cannot be satisfied anymore. Yet that situation is rare in practice, and USB is limited to 127 devices (hubs count as devices, and so does the master itself).
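That description table is the device's set of descriptors. The standard endpoint descriptor, for instance, carries exactly the fields the host's scheduler needs (field layout per the USB 2.0 specification, shown here as a C sketch):
#include <stdint.h>

// Standard USB endpoint descriptor (USB 2.0, chapter 9); packed on the wire
struct usb_endpoint_descriptor {
    uint8_t  bLength;          // descriptor size: 7 bytes
    uint8_t  bDescriptorType;  // 5 = ENDPOINT
    uint8_t  bEndpointAddress; // endpoint number and direction (IN/OUT)
    uint8_t  bmAttributes;     // transfer type: control/iso/bulk/interrupt
    uint16_t wMaxPacketSize;   // most data moved in one transaction
    uint8_t  bInterval;        // requested polling interval, in (micro)frames
};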
Power management works in a similar way. Every device tells the master how much power it needs and the master ensures that the bus can still deliver that much, taking active hubs into account. If you connect another device and the bus cannot power it anymore, adding the device will fail with an error.
This allows a fairly complex, powerful, and fast bus system, yet with components that don't even need a real CPU. The simplest USB chips out there are just bridges to a serial data line (like an internal RS-232 bus or an I2C bus); nothing about them is really configurable, and they cannot run software or carry a firmware one could update. They just place incoming data packets into a buffer and then send the buffer content bit by bit over the serial bus, and they receive serial data into another buffer and return the buffer content when being polled. As for telling the master the configuration (including device and vendor IDs, as well as human-readable strings), they simply send the content of a small external EPROM. Things cannot get much simpler than that, yet such a chip is enough to build plenty of USB hardware already.

Send and receive data through the power network

I'm not interested in a hardware solution; I want to know about software that may "read" a modulated signal received through the power supply - some sort of low-level driver that would access the power signal in a convenient place and demodulate it.
Is there a way to receive a signal from the computer's power supply? I'm interested in an API or library that would allow the computer to be seen as a node in a Power Line Communication network and receive data directly through the power cable, without the need for a converter. Is there any active research in this field?
Edit:
There is software that monitors and displays internal component voltages - the DC voltage after being converted and filtered by the power supply - so what I need is a method of data encoding that would be invariant to conversion and filtering, the original signal embedded in the AC being present in some form within the converted DC signal.
This is not possible, as described in the question. Yes, with extra hardware you can do it. No, with the standard hardware in a PC, you could not.
As others have noted, among other problems, the only information you can get from a generic PC is a bit of voltage info for the CPU. It's not going to give a picture of the AC signal, nor any signal modulated on top of it. You'll be watching a few highly regulated DC signals deep inside the computer, probably converted at a relatively low rate too. Almost by definition, if you could see external information on any of those signals, your machine is already suffering a hardware failure and chances are the CPU will be crashing soon...
*blink* No...
Edit: I mean, there's the possibility of using the power lines as network cables, but only with special adapters. And it is designed just for home networks.
Edit 2: You can't read something from the power supply of a computer... it's not designed for that. You would have to create your own component/adapter for this.
Am I misreading this? Wouldn't this be a pure hardware solution?
This is highly improbable without adding some hardware.
You see, the power supplies in a regular PC are switching power supplies, which effectively decouple the AC input from the DC voltage supplied on the PC side. The AC side basically just provides the power that fuels the high-speed switching circuitry.
Also, a DC signal, by definition, doesn't provide a signal per se: it is a "static" power level (and yes, the level does vary a bit in the time domain, but not as an easy-to-leverage function).
Yes, there can be an ADC (analog-to-digital converter) monitoring chip on the PC side to read the voltage of the DC supply to the motherboard etc., but that doesn't mean there is a signal left that can be harvested: the original power-line "signal" may have passed through enough filters that there isn't a "signal" left to be processed.
Lastly, one needs to consider that power-supply designs vary from company to company; this fact will undoubtedly affect any possible design of a communication solution.
What you describe is possible, but unfortunately you need an adapter to convert the signal running on the power lines into sensible network traffic.
The power line acts as a physical medium and thus sits at the lowest level of the OSI stack. Conversion from the electrical signal to sensible network traffic requires a hardware adapter, the same as for an Ethernet adapter. Your computer is unable to understand this traffic, since its power supply was not built to transmit that information. But note that you can easily find an adapter, and it will work the same as an Ethernet adapter, that is, it will be accessible through the standard BSD socket library.
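To make that last point concrete, here is a minimal sketch: nothing in the code knows or cares whether the physical layer is a power line or an Ethernet cable (the peer address and port are made up for illustration):
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    // An ordinary BSD socket; the kernel routes it over whatever link
    // exists, be it an Ethernet card or a power-line adapter.
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(7);  // echo service, say
    inet_pton(AF_INET, "192.168.1.50", &peer.sin_addr);
    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) == 0)
        write(fd, "hello", 5);
    close(fd);
    return 0;
}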
This is ENTIRELY possible, although you would need to either buy or build some hardware to make it happen. In addition, the software solution would be very, very complex.
The computer's power supply would be out of the picture for the most part. You need to read the data straight from the wall with as little extraneous noise as possible. From an electrical engineering perspective, this is a very thoroughly covered topic. In the end, all you're really doing is an analog-to-digital conversion, and the rest keeps your circuit from being fried.
The software solution would basically be eliminating random noise and looking for embedded signals. The math behind analog signal analysis is very complex; you can spend a few semesters in college covering the topic and the rest of your career trying to master it. If you're good at it, there's a cushy job for you on Wall Street predicting the stock market.
And that only covers reading incoming signals. Transmitting is a whole 'nother sport.
Now, it also sounds like you might be interested in a hack. That is...
You could buy a commercial off-the-shelf power-line Ethernet adapter and tear it apart. They have two prongs that plug into a standard wall outlet. You could remove these and wire them to the INSIDE of a power supply.
To do that, you'd have to tear apart a power supply as well, which is incredibly dangerous, and I hereby warn you and anyone else to NEVER attempt this.
The entire Ethernet adapter could be tucked into the power supply, and you could basically have an Ethernet port on the surface of your power supply (either inside or outside the computer). Simply wire that to a standard Ethernet adapter and voila(!): you have nothing but a power cable connecting your computer to the wall outlet, AND you magically have Ethernet!
Note that there also has to be another power-line Ethernet adapter somewhere else for you to establish a network and make the whole project useful.
How can you read modulated data from the power supply? You are talking about voltage and ohms, and apart from a possible electrical shock (which would be just shocking :) there is nothing there to read. There are specialized electrical plugs with Ethernet jacks in them that you can use instead.
I'd hazard a guess that this is totally transparent, as per Adrien Plisson's answer, i.e. you would have all of the OSI layers and it is no different: you can write code to read from the sockets.
AFAIK, no company that produces these plugs would ever open up the API, for competitive reasons. Adoption is still in its early stages because the plugs are obviously very expensive (120 euro here in my country for a pair of them) and do not deliver the quoted speed: a 100 Mbps power plug may get maybe 85 Mbps due to varying conditions and phenomena on the power line (think surges, brownouts, interference).
My 2 cents.

How does firmware communicate with electronic devices to perform its operations?

Almost all electronic devices come with firmware. I know it is stored in ROM (read-only memory), so it is non-volatile (no power source is required to hold the contents, unlike RAM).
What I want to know is: how does firmware communicate with the electronic devices to perform its operations?
Let's say there is a small roller. On press of a button, how does the firmware make it move?
Can someone please explain what resides behind this to make it happen?
I think it may require a brief explanation to unwind it.
Also, what is the most popular language used for coding firmware?
Modern hardware like you're describing has a program stored in ROM and an all-purpose microcomputer (CPU) executing that program.
The CPU reads information from ROM by setting up addresses on its address bus and then asking the ROM to tell it the value stored at that location. There's something like a read pulse being raised (on a separate line) to tell the ROM to make the value accessible on the lines of the data bus. That, in a nutshell, is reading.
To get the hardware to do something, the CPU basically executes a kind of write operation. It puts a value, which is just a bunch of bits if you want to look at it that way, on the address bus to select a certain device (and perhaps a function on that device), then it raises another signal line saying "write!" The device that recognizes its address on the address bus responds to that signal by accepting the data from the data bus and then performing whatever its function is. Typically, one of the data bus bits will be connected within the output device to a power output stage, i.e. a transistor stronger than the ones used just for computation, and that transistor will connect some electrical device to a current sufficient to make it move/glow/whatever.
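In C terms, that write is just a store to a special address that the bus logic decodes as a device rather than as memory (the address and bit assignment here are made up for illustration):
// Hypothetical: the address decoder routes stores to 0xF000 to an output
// latch instead of RAM; bit 0 of the latch feeds the power transistor.
#define OUTPUT_LATCH (*(volatile unsigned char *)0xF000)

void motor_on(void)  { OUTPUT_LATCH = 0x01; }  // drive the output stage
void motor_off(void) { OUTPUT_LATCH = 0x00; }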
Tiny, cheap devices are coded in assembly language to save on ROM costs; in industrial quantities, even small amounts of memory can affect the price. The assembly language is specific to the CPU; chips called "8051", "6502", and "Atmel (something or other)" are popular. Bigger devices with more complex requirements may have their firmware written in C or a C-like dialect, which makes programming a little easier than assembler. The biggest ones even run C++ code. Compiled, of course.
In most systems there are special memory addresses which are used for I/O. Reading and writing on such addresses executes some function instead of just moving data around. In x86 systems there are also special I/O instructions IN and OUT for that.
The simplest case is general-purpose I/O (GPIO), where you can read or write data directly from/to external electrical pins on the device. There are several memory addresses, called registers, where you can read data from the port (voltage near 0 = 0, near supply voltage = 1), where you can write data to the port, and where you can define whether a particular pin is an input (the corresponding bit is typically 0) or an output (the bit is 1). Every microcontroller has GPIO.
So in your example, the button could be connected to a pin set as an input, which the software could sense. It would typically do this every 10 ms and only react when it has read a stable value several times in a row; this is called debouncing. Then it would write a 1 to some output, which, via a transistor for amplification, could drive a motor. If it senses that you have released the switch, it could turn the motor off again by writing a 0. And so on; this program would run until you turn the device off.
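Here is a sketch of that button-to-motor loop, using the AVR-style registers of the ATtiny parts mentioned below (the pin assignments and the F_CPU value are made up; debouncing is three matching reads, 10 ms apart):
#include <avr/io.h>        // AVR GPIO registers: DDRB, PORTB, PINB
#define F_CPU 1000000UL    // assumed clock for the delay routines
#include <util/delay.h>

#define BUTTON_PIN 0       // made-up wiring: button on PB0, motor on PB1
#define MOTOR_PIN  1

int main(void)
{
    DDRB &= ~(1 << BUTTON_PIN);  // PB0 as input (button)
    DDRB |=  (1 << MOTOR_PIN);   // PB1 as output (motor driver)

    uint8_t stable = 0;          // consecutive reads showing "pressed"
    for (;;) {
        _delay_ms(10);           // sample every 10 ms
        if (PINB & (1 << BUTTON_PIN))
            stable = (stable < 3) ? stable + 1 : 3;
        else
            stable = 0;
        if (stable >= 3)                   // debounced: pressed ~30 ms
            PORTB |=  (1 << MOTOR_PIN);    // motor on
        else
            PORTB &= ~(1 << MOTOR_PIN);    // motor off
    }
}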
There are lots of other I/O devices for other purposes with typically hundreds of registers for controlling them. If you want to see more you could look into the data sheet of some microcontroller. For example, here is the data sheet of ATtiny4/5/9/10, a very small controller from the Atmel AVR family.
Today most firmware is written in C, except for the smallest devices and for a little special code for handling resets and interrupts, which is written in assembly language.