Unexpected Result in Intel 8080 Test ROM TST8080.ASM

I am creating an Intel 8080 emulator and was running a test ROM named TST8080.ASM that I found at this website. It fails when this block of code runs:
CPOI: RPE ;TEST "RPE"
ADI 010H ;A=99H,C=0,P=0,S=1,Z=0
CPE CPEI ;TEST "CPE"
ADI 002H ;A=D9H,C=0,P=0,S=1,Z=0
RPO ;TEST "RPO"
CALL CPUER
I just don't understand why, on that ADI instruction, the parity flag is not set. Converting 99H to binary gives 10011001, which has an even number of 1 bits, yet the test seems to expect the parity flag to be clear. If anyone could shed some light I would be grateful... Thx
I have read the Intel 8080 manual, which states: "Byte "parity" is checked after certain operations. The number of 1 bits in a byte are counted, and if the total is odd, "odd" parity is flagged; if the total is even, "even" parity is flagged. The Parity bit is set to 1 for even parity, and is reset to 0 for odd parity."

Try this online 8080 emulator
Enter the code:
mvi a, 89h
adi 10h
Press step and observe that the parity bit is set.
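To see why, here is a minimal C sketch of the 8080 parity rule (the parity_flag helper is my own illustration, not part of any particular emulator): 89H + 10H = 99H = 10011001b, which has four 1 bits, so parity is even and P = 1.
#include <stdio.h>
#include <stdint.h>

/* Illustrative helper: the 8080 sets P when the result byte has an
   even number of 1 bits. */
static int parity_flag(uint8_t value)
{
    int ones = 0;
    for (int bit = 0; bit < 8; bit++)
        ones += (value >> bit) & 1;
    return (ones % 2) == 0;                  /* even count of 1 bits -> P = 1 */
}

int main(void)
{
    uint8_t a = 0x89;
    uint8_t result = (uint8_t)(a + 0x10);    /* ADI 10H -> A = 99H */
    printf("result=%02X P=%d\n", result, parity_flag(result));  /* prints: result=99 P=1 */
    return 0;
}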


How to count the return instructions executed during part of a program's execution (at the hardware level)

I've been working on ROP recently. I'm using perf to read hardware counters and want to measure the number of return instructions executed by a given piece of code, but the perf interface only seems to provide generic branch-instruction events.
If you only care about x86 on a recent Intel CPU:
perf list on my Skylake shows there's a hardware counter for br_inst_retired.near_return. That will count only ret instructions, not other branches. But see erratum SKL091 for branch-instruction counters.
perf stat -e instructions,br_inst_retired.near_return,... ./a.out may be what you're looking for. Or maybe attaching perf stat to an already-running program, or maybe -I 1000 to print accumulated counts over intervals.
But note that if you're looking for ROP gadgets, you can find a C3 opcode inside what normally decodes as some other instruction. So restricting yourself only to ret instructions that actually run during the target program's normal execution is more limiting than it needs to be.
e.g. a 4-byte immediate might usefully decode as something + ret if you jump to the immediate.
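To illustrate (my own toy example, not taken from any particular program): the bytes B8 01 00 00 C3 decode as mov eax, 0xC3000001, but the last immediate byte is C3, which is a ret if you jump straight to it. Finding such candidates is just a byte scan:
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

int main(void)
{
    /* B8 01 00 00 C3 = "mov eax, 0xC3000001"; the final immediate byte
       0xC3 decodes as "ret" when executed on its own. */
    const uint8_t code[] = { 0xB8, 0x01, 0x00, 0x00, 0xC3 };

    for (size_t i = 0; i < sizeof code; i++)
        if (code[i] == 0xC3)
            printf("candidate ret gadget at offset %zu\n", i);

    return 0;
}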

Required size of a configuration file for an HX1K (in "SPI slave" mode)

I am reworking the programmer for the Olimex iCE40HX1K board (targeted towards an STM32F103 ma) where I would also like to implement the "SPI Slave" mode to load an image directly into RAM without using the serial flash.
Looking at the Lattice "Programming and Configuration Guide" (page 11), table 8 notes that an EPROM for an iCE40-LP/HX1K must be at least 34112 bytes (which, I guess, means that configuration files can be up to that size).
However, all images I have created so far with the icestorm tools are 32220 octets.
I am a bit puzzled here.
Can somebody explain the difference between these two figures?
Does the HX1K need a configuration-file of 32220 or 34112 bytes?
I don't know how Lattice arrived at this number. A complete HX1K bin file with BRAM initialization but without comment and without multiboot header is 32220 bytes in size. The (optional) multiboot header would add another 160 bytes (32220 + 160 = 32380). The lattice tools usually add about 80 bytes to the comment field (32220 + 80 = 32300). Whatever I do, all numbers I have are more than 1000 short of 34112.
I don't know if there is a maximum length for the comment. Maybe there is and 34112 is the size of a bit stream with a comment of maximum length?
34112 - 32220 = 1892. Maybe someone decided to add 8kB (8192 bytes) just in case, but that person accidentally swapped the first two digits? Idk..
If you don't care about comments or multiboot headers, then iCE40 1K bit-streams have a fixed size, and that size is 32220 bytes.

Understanding the bitstream generated for iCE40 I/O tiles

When I synthesize an empty circuit using Yosys and arachne-pnr, I get a few irregular bits:
.io_tile 6 17
IoCtrl IE_1
.io_tile 6 0
IoCtrl REN_0
IoCtrl REN_1
These are also part of every other file I could generate so far. Since an unused I/O tile has both IE bits set, I read this as:
for IE/REN block 6 17 0, the input buffer is enabled
for IE/REN block 6 0 0, the input buffer is enabled and the pull-up resistor is disabled
for IE/REN block 6 0 1, the input buffer is enabled and the pull-up resistor is disabled
However, according to the documentation, there is no IE/REN block 6 17 0.
What is the meaning of these bits? If the IE bit of block 6 17 0 is unset because the block doesn't exist, why aren't the bits of the other blocks which don't exist unset, too? The other IE/REN blocks seem to correspond to I/O blocks 6 0 1 and 7 0 0. What do these blocks do, and why are they always configured as inputs?
The technology library entry for SB_IO does not mention the IE bit. How is it related to the PIN_TYPE parameter settings?
When I use an I/O pin as an input, the REN bit is set (the pull-up resistor disabled). This suggests that the pull-up resistors are primarily intended to keep unused pins from floating, not to provide a pull-up resistor for conditionally connected inputs (e.g. buttons). Is this assumption correct? Would it be ok to use the internal pull-up resistors for that purpose?
The technology library says the following:
defparam IO_PIN_INST.PULLUP = 1'b0;
// By default, the IO will have NO pull up.
// This parameter is used only on bank 0, 1,
// and 2. Ignored when it is placed at bank 3
Does this mean bank 3 doesn't have pull-up resistors, or merely that they can't be re-enabled using Verilog? What would happen if I clear that bit in the ASCII bitstream manually? (I'd be trying this, but the iCEstick evaluation board doesn't make any pin on bank 3 accessible – a coincidence? – and I'm not sure if I want to mess with the hardware yet.)
When I use an I/O pin as an output, the IE bit isn't cleared, but the input pin function is set to PIN_INPUT. What effect does this have, and why is it done?
The default behavior for unused IO pins is to enable the pullup resistors and disable input enable. On iCE40 1k chips this means IE_0 and IE_1 are set and REN_0 and REN_1 are cleared in an unused IO tile. (On 8k chips IE_* is active high, i.e. all bits are cleared in an unused IO tile on an 8k chip.)
icebox_explain by default hides tiles that have "uninteresting" contents. (Run icebox_explain -A to disable this feature.)
It looks like arachne-pnr does not set those bits for IO pins that are not available in the current package. Thus you get some unusual bit pattern in some IO tiles that contain IE/REN bits for IO blocks not connected to any package pin.
This is what a "normal" unused IO tile looks like on the 1k architecture:
$ icebox_explain -mAt '1 0' example.asc
Reading file 'example.asc'..
Fabric size (without IO tiles): 12 x 16
.io_tile 1 0
B0 ------------------
B1 ------------------
B2 ------------------
B3 ------------------
B4 ------------------
B5 ------------------
B6 ---+--------------
B7 ------------------
B8 ------------------
B9 ---+--------------
B10 ------------------
B11 ------------------
B12 ------------------
B13 ------------------
B14 ------------------
B15 ------------------
IoCtrl IE_0
IoCtrl IE_1
Would it be ok to use the internal pull-up resistors for that purpose?
Yes.
The technology library entry for SB_IO does not mention the IE bit. How is it related to the PIN_TYPE parameter settings?
When D_IN_0 or D_IN_1 from SB_IO is connected to something, then this implies IE.
When I use an I/O pin as an output, the IE bit isn't cleared
Note that IE is active low on 1k chips and active high on 8k chips. When you use an I/O pin as output-only pin on a 1k device, then the corresponding IE bit should be set, otherwise it should be cleared.
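That polarity rule can be stated in one line (a sketch with names of my own choosing, not part of the icebox tools):
#include <stdbool.h>

/* On 8k parts the raw IE bit means "input enabled"; on 1k parts the
   same bit means "input disabled" (active low). */
static bool input_buffer_enabled(bool ie_bit, bool is_8k_part)
{
    return is_8k_part ? ie_bit : !ie_bit;
}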
I found the explanation, so I'm sharing it here for future reference:
.io_tile 6 17
IoCtrl IE_1
I/O block 6 17 0 is connected to a pin in some packages but not in the TQFP package.
.io_tile 6 0
IoCtrl REN_0
IoCtrl REN_1
This corresponds to I/O blocks 6 0 1 and 7 0 0 (pins 49 and 50) into whose input paths the PLL PORTA and PORTB clocks are fed, respectively.

saveenv fails after U-Boot update - Writing to NAND... FAILED

To be able to run my eSATA SheevaPlug with Debian Wheezy I had to upgrade U-Boot to the DENX version.
As a step-by-step guide I used this write-up from Martin Michlmayr. I did the upgrade using screen and a USB stick attached to the plug.
The upgrade went well, and after resetting, the plug started with the new version.
Marvell>> version
U-Boot 2013.10 (Oct 21 2013 - 21:06:56)
Marvell-Sheevaplug - eSATA - SD/MMC
gcc (Debian 4.8.1-9) 4.8.1
GNU ld (GNU Binutils for Debian) 2.23.52.20130727
Marvell>>
The guide says to set the machid environment variable and the MAC address.
But unfortunately saveenv fails due to bad blocks in the NAND. I tried different versions of U-Boot, including the one provided by NewIT. All behave the same way.
Marvell>> setenv machid a76
Marvell>> saveenv
Saving Environment to NAND...
Erasing NAND...
Skipping bad block at 0x00060000
Writing to NAND... FAILED!
There are some blocks marked as bad, which - according to NewIT - might be normal.
Marvell>> nand info
Device 0: nand0, sector size 128 KiB
Page size 2048 b
OOB size 64 b
Erase size 131072 b
Marvell>> nand bad
Device 0 bad blocks:
00060000
00120000
00360000
039c0000
0c300000
10dc0000
1ac40000
1f1c0000
Does anyone have a clue what the problem is and what I need to change to be able to save environment variables in U-Boot?
Thanks,
schibbl
Due to the configuration of environment variable storage in NAND, the 128k sector size, and a bad block at the environment storage address, it is not possible to write the environment to NAND.
Marvell>> nand bad
Device 0 bad blocks:
00060000
...
The relevant defines are in include/configs/sheevaplug.h, and they point exactly at the bad block:
/*
* max 4k env size is enough, but in case of nand
* it has to be rounded to sector size
*/
#define CONFIG_ENV_SIZE 0x20000 /* 128k */
#define CONFIG_ENV_ADDR 0x60000
#define CONFIG_ENV_OFFSET 0x60000 /* env starts here */
Because the sector from 0x80000 to 0x9FFFF is unused, I moved the environment storage there.
/*
* max 4k env size is enough, but in case of nand
* it has to be rounded to sector size
*/
#define CONFIG_ENV_SIZE 0x20000 /* 128k */
#define CONFIG_ENV_ADDR 0x80000
#define CONFIG_ENV_OFFSET 0x80000 /* env starts here due to bad block */
Beware! We have to ensure our compiled u-boot.kwb is less than 384k; otherwise we would write U-Boot into the region containing the bad block and brick the device.
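As a quick sanity check before flashing (a minimal sketch; the file name is assumed, and 0x60000 is the offset of the first bad block, i.e. 384k):
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("u-boot.kwb", &st) != 0) {      /* assumed file name */
        perror("stat");
        return 1;
    }
    const long limit = 0x60000;              /* 384k, offset of the first bad block */
    printf("u-boot.kwb is %ld bytes: %s\n", (long)st.st_size,
           st.st_size < limit ? "fits below the bad block" : "too big, do not flash");
    return 0;
}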
The best way to recompile with a custom env address is to use Michlmayr's sources, which include patches for MMC and eSATA support.

Statically Defined IDT

This question already has answers here:
Solution needed for building a static IDT and GDT at assemble/compile/link time
(1 answer)
How to do computations with addresses at compile/linking time?
(2 answers)
I'm working on a project that has tight boot time requirements. The targeted architecture is an IA-32 based processor running in 32 bit protected mode. One of the areas identified that can be improved is that the current system dynamically initializes the processor's IDT (interrupt descriptor table). Since we don't have any plug-and-play devices and the system is relatively static, I want to be able to use a statically built IDT.
However, this is proving to be troublesome on IA-32 because the 8-byte interrupt gate descriptor splits the ISR address: the low 16 bits of the ISR address go in the first 2 bytes of the descriptor, some other bits fill the next 4 bytes, and the upper 16 bits of the ISR address go in the last 2 bytes.
I wanted to use a const array to define the IDT and then simply point the IDT register at it like so:
typedef struct s_myIdt {
    unsigned short isrLobits;
    unsigned short segSelector;
    unsigned short otherBits;
    unsigned short isrHibits;
} myIdtStruct;

myIdtStruct myIdt[256] = {
    { (unsigned short)myIsr0, 1, 2, (unsigned short)(myIsr0 >> 16) },
    { (unsigned short)myIsr1, 1, 2, (unsigned short)(myIsr1 >> 16) },
    etc.
Obviously this won't work, as it is illegal in C: myIsr0 is not a compile-time constant. Its value is resolved by the linker (which can do only a limited amount of math) and not by the compiler.
Any recommendations or other ideas on how to do this?
You ran into a well-known x86 wart. I don't believe the linker can stuff the address of your ISR routines into the swizzled form expected by the IDT entry.
If you are feeling ambitious, you could create an IDT builder script that takes something like this (Linux-based) approach. I haven't tested this scheme and it probably qualifies as a nasty hack anyway, so tread carefully.
Step 1: Write a script to run 'nm' and capture the stdout.
Step 2: In your script, parse the nm output to get the memory address of all your interrupt service routines.
Step 3: Output a binary file, 'idt.bin', that has the IDT bytes all set up and ready for the LIDT instruction. Your script obviously outputs the ISR addresses in the correct swizzled form.
Step 4: Convert this raw binary into an ELF section with objcopy:
objcopy -I binary -O elf32-i386 idt.bin idt.elf
Step 5: Now the idt.elf file has your IDT binary with symbols something like this:
> nm idt.elf
000000000000000a D _binary_idt_bin_end
000000000000000a A _binary_idt_bin_size
0000000000000000 D _binary_idt_bin_start
Step 6: relink your binary including idt.elf. In your assembly stubs and linker scripts, you can refer to symbol _binary_idt_bin_start as the base of the IDT. For example, your linker script can place the symbol _binary_idt_bin_start at any address you like.
Be careful that relinking with the IDT section doesn't move anything else in your binary, e.g. your ISR routines. Manage this in your linker script (.ld file) by putting the IDT into its own dedicated section.
---EDIT---
From comments, there seems to be confusion about the problem. The 32-bit x86 IDT expects the address of the interrupt service routine to be split into two different 16-bit words, like so:
31 16 15 0
+---------------+---------------+
| Address 31-16 | |
+---------------+---------------+
| | Address 15-0 |
+---------------+---------------+
A linker is thus unable to plug in the ISR address as a normal relocation. So, at boot time, software must construct this split format, which slows booting.
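For reference, the boot-time construction that this split forces looks roughly like the following (a minimal sketch; the selector 0x08 and flags 0x8E are typical assumed values, not taken from the question):
#include <stdint.h>

/* One 8-byte 32-bit interrupt gate descriptor. */
struct idt_gate {
    uint16_t isr_low;     /* ISR address bits 15..0   */
    uint16_t selector;    /* code segment selector    */
    uint8_t  zero;
    uint8_t  flags;       /* present, DPL, gate type  */
    uint16_t isr_high;    /* ISR address bits 31..16  */
} __attribute__((packed));

static void set_gate(struct idt_gate *g, uint32_t isr)
{
    g->isr_low  = (uint16_t)(isr & 0xFFFF);
    g->selector = 0x08;   /* assumed flat code segment */
    g->zero     = 0;
    g->flags    = 0x8E;   /* present, ring 0, 32-bit interrupt gate */
    g->isr_high = (uint16_t)(isr >> 16);
}
Doing this for all 256 vectors at boot is exactly the work a prebuilt idt.bin avoids.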