MSP430G2553 UART Baudrate and Control Registers. Can I use word access? - uart

Is it possible to program MSP430G2553 UART registers with word access or are the internal peripherals only byte wide and therefore only byte accessible?
(I know that MCTL is only byte wide on this device.)

The 2xx family User's Guide says in section 1.4.3:
The address space from 010h to 0FFh is reserved for 8-bit peripheral modules. These modules should be accessed with byte instructions. Read access of byte modules using word instructions results in unpredictable data in the high byte. If word data is written to a byte module only the low byte is written into the peripheral register, ignoring the high byte.

What does a parallel port on an embedded system look like?

This may be a stupid question, but it's been confusing me.
I've been watching some videos on Embedded Systems and they're talking about parallel ports, the data, the direction and the amount used.
I understand that the ports are connected to wires which feed other parts of the system or external devices. But I am confused because the lecture I watched says that to control a single LED would require 1 bit from 1 port.
My question is, what does the parallel port on an embedded system look like and how would you connect your own devices to the board? (say you made a device which sent 4 random bits to the port)
EDIT: I have just started learning, so I might have missed a vital piece of information which would tie this all together. I just don't understand how you can have an 8-bit port and only use 1 bit of it.
Firstly, you should know that the term "parallel port" can refer to a wide variety of connectors. People usually use the phrase to describe 25-pin connectors found on older PCs for peripherals like printers or modems, but they can have more or fewer pins than that. The Wikipedia article on them has some examples.
The LED example means that if you have an 8-bit parallel port, it will have 8 pins, so you would only need to connect one of the pins to an LED to be able to control it. The other pins don't disappear or anything strange, they can just be left unconnected. The rest of the pins will be either ones or zeros as well, but it doesn't matter because they're not connected. Writing a "1" or "0" to that one connected pin will drive the voltage high or low, which will turn the LED on or off, depending on how it's connected. You can write whatever you want to the other pins, and it won't affect the operation of the LED (though it would be safest to connect them to ground and write "0"s to them).
Here's an example:
// assume REG is a memory-mapped register that controls an 8-bit output
// port. The port is connected to an 8-pin parallel connector. Pin 0 is
// connected to an LED that will be turned on when a "1" is written to
// Bit 0 (the least-significant bit) of REG
REG = 0x01; // write a "1" to bit 0, "0"s to everything else
I think your confusion stems from the phrase "we only need one bit", and I think it's a justified confusion. What they mean is that we only need to control the one bit on the port that corresponds to our LED, but in reality, you can't write just one bit at a time, so it's a bit (ha!) misleading. You (probably) won't find registers smaller than 8 bits anymore, so you do have to read/write the registers a whole byte at a time, but you can mask off the bits you don't care about, or do read-modify-write cycles to avoid changing bits you don't intend to.
Without the context of a verbatim transcript of the videos in question, it is probably not possible to be precise about what they may have specifically referred to.
The term "parallel port" historically commonly refers to ports primarily intended for printer connections on a PC, conforming to the IEEE 1284 standard; the term distinguishing it from the "serial port" also used in some cases for printer connections but for two-way data communications in general. More generally however it can refer to any port carrying multiple simultaneous data bits on multiple conductors. In this sense that includes SDIO, SCSI, IDE, GPIB to name but a few, and even the processor's memory data bus is an example of a parallel port.
In the context of embedded systems in general, it most likely refers to a word-addressed GPIO port, although it is not a particularly useful or precise term. Typically on microcontrollers GPIO (general purpose I/O) ports are word addressable (typically 8, 16, or 32 bits wide); all bits of a single GPIO port may be written simultaneously (in parallel), with all bit edges synchronised so their states change together.
Now in the case where you only want to access a single bit of a GPIO (to control an LED for example), some GPIO blocks allow single-bit access by having separate set/clear registers, while others require read-modify-write semantics on the entire port. ARM Cortex-M supports "bit-banding", an alternate address space where every word address corresponds to a single bit in the physical address space.
However bit access of a GPIO port is not the same as a serial port; that refers to a port where multiple data bits are sent one at a time, as opposed to multiple data bits simultaneously.
Moreover the terms parallel-port and serial-port imply some form of block or stream data transfer as opposed to control I/O where each bit controls one thing, such as your example of the LED on/off - there the LED is not "receiving data" it is simply being switched on and off. This is normally referred to as digital I/O (or DIO). In this context you might refer to a digital I/O port; a term that distinguishes it from analogue I/O where the voltage on the pin can be set or measured as opposed to just two states high/low.

Z80 Multibyte Commands in IM0

Just for fun, I'm trying to design a more complex Z80 CP/M system with a lot of peripheral devices. When reading the documentation I stumbled over an (undocumented?) behaviour of the Z80 CPU when accepting an interrupt in IM0.
When an interrupt occurs, the Z80 activates M1 and IORQ to signal the external device: "Hey, give me an opcode". All is well if the opcode is rst 00 or something like this. Now the documentation says that ANY opcode can be given to the CPU, for instance a CALL.
But now comes the undocumented part: "The first byte of a multi-byte instruction is read during the interrupt acknowledge cycle. Subsequent bytes are read in by a normal memory read sequence."
A "normal memory read sequence". How can I determine, if the CPU wants to get a byte from memory or instead the next byte from the device?
EDIT: I think I found a (good?) solution: I can detect the start of the interrupt acknowledge cycle by analyzing IORQ and M1. Also I can detect the next "normal" opcode fetch by analyzing MREQ and M1. This way I can install a flip-flop triggered by these two ANDed signals, i.e. the flip-flop is 1 as long as the CPU reads data from the I/O device. This 1 I can use to inhibit the bus drivers to and from the memory.
My intentions? I'm designing an interrupt controller with 8 prioritized inputs in a CPLD. Its registers hold a 16-bit address for each interrupt pin. Just for the fun :-)
My understanding is that the peripheral device is required:
to know how many bytes it needs to feed;
to respond to normal read cycles following the IORQ cycle; and
to arrange that whatever would normally respond to memory read cycles does not do so for the duration.
Also the behaviour was documented by Zilog in an application note, from which your quote originates (presumably uncredited).
In practice I guess 99.99% of IM0 users just use an RST and 99.99% of the rest use a known-size instruction like CALL xxxx.
(also I'm aware of a few micros that effectively guaranteed not to put anything onto the bus during an interrupt cycle, owing to open-collector outputs, thereby turning IM0 into a synonym of IM1).
The interrupt behavior is reasonably documented in the Z80 manual, in the sections on the interrupt modes and on how to set them. In IM2 the device supplies an 8-bit value that becomes the low byte of a 16-bit pointer, with the I register supplying the high byte; that pair selects an entry in a table of 16-bit handler addresses, so it gets you at least halfway to the desired 16-bit direct address.
My understanding is that the M1 + IORQ combination is used since there was no pin left for a dedicated interrupt response. A fun detail is also that the Zilog I/O chips like the PIO, SIO, and CTC watch for the RETI instruction on the bus (as the CPU fetches it) to learn that the CPU is ready to accept another interrupt.

Why Erase in NAND or NOR is not performed Page wise?

Erase is always performed by Block and never by Page or Word. If Read and Write can be performed by Page, why not Erase?
This is due to the physical architecture of a flash device, where the memory array is divided up into blocks. It is only possible to erase the data stored in the device in blocks, either one at a time or multiple blocks in sequence or parallel (depending upon the generosity of the device manufacturer). Just as the size of a processor word or page (whatever that may be defined as) may vary between processor architectures, the size of these blocks varies between devices - I have used devices with sectors between 32 bytes and 128K bytes.
If you need to be able to erase a single byte in the memory then you need to use EEPROM memory.

What does it mean when my CPU doesn't support unaligned memory access?

I just discovered that the ARM I'm writing code for (a Cortex-M0) doesn't support unaligned memory access.
Now in my code I use a lot of packed structures, and I never got any warnings or hardfaults, so how can the Cortex access members of these structures when it doesn't allow unaligned access?
Compilers such as gcc understand about alignment and will issue the correct instructions to get around alignment issues. If you have a packed structure, you will have told the compiler about it, so it knows ahead of time how to handle alignment.
Let's say you're on a 32 bit architecture but have a struct that is packed like this:
struct __attribute__((packed)) foo {
    unsigned char bar;
    int baz;
};
When an access to baz is made, it will do the memory loads on a 32 bit boundary, and shift all the bits into position.
In this case it will probably do a 32-bit load at the address of bar and a 32-bit load at the address of bar + 4. Then it will apply a sequence of logical operations such as shifts and logical or/and to end up with the correct value of baz in a 32-bit register.
Have a look at the assembly output to see how this works. You'll notice that unaligned accesses will be less efficient than aligned accesses on these architectures.
On many older 8-bit microprocessors, there were instructions to load (and store) registers which were larger than the width of the memory bus. Such an operation would be performed by loading half of the register from one address, and the other half from the next higher address. Even on systems where the memory bus is wider than 8 bits (say, 16 bits), it is often useful to regard memory as being an addressable collection of bytes. Loading a byte from any address will cause the processor to read half of a 16-bit memory location and ignore the other half. Reading a 16-bit value from an even address will cause the processor to read an entire 16-bit memory location and use the whole thing; the value will be the same as if one read two consecutive byte addresses and concatenated the result, but it will be read in one operation rather than two.
On some such systems, if one attempts to read a 16-bit value from an odd address, the processor will read two consecutive addresses, using half of one value and half of the other, as though one had performed two single-byte reads and combined the results. This is called an unaligned memory access. On other systems, such an operation will result in a bus fault, which generally triggers some form of interrupt which may or may not be able to do something useful about it. Hardware to support unaligned accesses is rather complicated, and designing code to avoid unaligned accesses is generally not overly difficult. Thus, such hardware generally only exists either on processors that are already very complicated, or processors which will be running code that was designed for processors that would assemble multi-byte registers from single-byte reads (e.g. on the 8088, every 16-bit read required two 8-bit memory fetches, and a lot of 8088 code was run on later Intel processors).

Explain why mikroC's PIC18F4550 HID example works

The mikroC compiler has a library for HID (Human Interface Device) USB communication. In the supplied samples, they specify that the buffers below should be in USB RAM and use a PIC18F4550 as the target microcontroller.
unsigned char readbuff[64] absolute 0x500; // Buffers should be in USB RAM, please consult datasheet
unsigned char writebuff[64] absolute 0x540;
But the PIC18F4550's datasheet says USB RAM ranges from 400h to 4FFh.
So why does their example work, when their buffers appear not to be between 400h and 4FFh?
Link to full source.
The datasheet actually says:
Bank 4 (400h through 4FFh) is used specifically for
endpoint buffer control, while Banks 5 through 7 are
available for USB data. Depending on the type of
buffering being used, all but 8 bytes of Bank 4 may also
be available for use as USB buffer space.
So, it would appear the code you're quoting is defining buffers used for USB data, not "endpoint buffer control", since they are in bank 5 instead of bank 4.
When USB HID mode is activated, the USB RAM range from 400h to 4FFh is assigned to the buffer descriptors, and the range from 500h to 7FFh is assigned for USB and user data. The important thing is that all the buffer descriptors and data buffers lie within the RAM range of banks 4-7.