I'm somewhat confused by the examples of how to disable and restore the interrupt state that I've found for 8-bit AVR processors.
8-bit AVR processors like the ATmega2560 have a Global Interrupt Enable bit (labelled 'I') in the Status Register (SREG). The CLI instruction disables all interrupts by clearing that bit. From the AVR Instruction Set Manual:
CLI - Clear Global Interrupt Enable Bit
Description
Clears the Global Interrupt Enable (I) bit in SREG (Status Register). The interrupts will be immediately disabled. No interrupt will be executed after the CLI instruction, even if it occurs simultaneously with the CLI instruction. (Equivalent to instruction BCLR 7.)
The AVR Instruction Set Manual also shows the following example:
1 in temp, SREG ; Store SREG value (temp must be defined by user)
2 cli ; Disable interrupts during timed sequence
3 sbi EECR, EEMWE ; Start EEPROM write
4 sbi EECR, EEWE
5 out SREG, temp ; Restore SREG value (I-flag)
The intent of line 5 seems to be to restore SREG's I-flag to the value it had just before line 2 was executed. In fact, this code saves the state of all of SREG's flags - it just seems to assume that the values of SREG's other flags won't change between lines 1 and 5. However, if an interrupt occurred between lines 1 and 2, couldn't it cause some of SREG's other flags to be "restored" incorrectly?
1 in temp, SREG ; Store SREG value (temp must be defined by user)
; <------- interrupt occurs here
2 cli ; Disable interrupts during timed sequence
3 sbi EECR, EEMWE ; Start EEPROM write
4 sbi EECR, EEWE
5 out SREG, temp ; Restore SREG value (I-flag)
When an interrupt happens, the CPU switches to the interrupt service routine (ISR), but before switching, the CPU remembers the location in the program by pushing the program counter onto the stack.
The CPU executes the ISR.
After executing the ISR, it is very important and crucial to restore the CPU environment (SREG, GPRs, SP, PC) to what it was before the interrupt. This is done by your code (if you do not restore the CPU environment after the interrupt, your program will crash with high probability).
The CPU returns to the same location by restoring the PC from the stack.
1 in temp, SREG ; Store SREG value (temp must be defined by user)
; <------- interrupt occurs here
; <------------- go to interrupt handler
; <------------- return from interrupt; temp, SREG, and all GPRs have the same values
2 cli ; Disable interrupts during timed sequence
3 sbi EECR, EEMWE ; Start EEPROM write
4 sbi EECR, EEWE
5 out SREG, temp ; Restore SREG value (I-flag)
The above code is simply telling you: "make sure that lines 3 and 4 execute one after the other with no interrupt in between".
If an interrupt occurred between lines 1 and 2, couldn't it cause some of SREG's other flags to be "restored" incorrectly?
If an interrupt occurs between lines 1 and 2, it will be handled and return before line 2 executes, and any interrupt code must preserve the CPU environment, so temp still holds the correct SREG value.
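To see why the saved copy stays valid even if an interrupt fires between lines 1 and 2, here is a toy Python model of the save/restore contract (an illustration only, not real AVR execution; the names isr and temp are mine):

```python
# Toy model (not real AVR semantics): shows why an ISR that saves and
# restores SREG cannot corrupt the value captured by "in temp, SREG".

SREG = 0b1010_0011  # arbitrary flag state; bit 7 is the I-bit, set

def isr(sreg):
    saved = sreg            # ISR prologue: save SREG
    sreg = 0b0000_0001      # ISR body clobbers the flags...
    return saved            # ...epilogue restores the saved value

temp = SREG                 # line 1: in temp, SREG
SREG = isr(SREG)            # interrupt fires between lines 1 and 2
SREG &= ~0x80               # line 2: cli clears the I-bit
# ... timed sequence (sbi EECR, ...) would run here ...
SREG = temp                 # line 5: out SREG, temp
assert SREG == 0b1010_0011  # every flag, including I, is back intact
```

Because a well-behaved ISR restores SREG before returning, the value read on line 1 is identical to the value SREG holds when line 2 runs, so line 5 restores the correct flags either way.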
Related
I am creating an Intel 8080 emulator and was running a test ROM named TST8080.ASM that I found at this website. I am failing when this block of code runs:
CPOI: RPE ;TEST "RPE"
ADI 010H ;A=99H,C=0,P=0,S=1,Z=0
CPE CPEI ;TEST "CPE"
ADI 002H ;A=D9H,C=0,P=0,S=1,Z=0
RPO ;TEST "RPO"
CALL CPUER
I just don't understand why on the ADI instruction the parity flag is not set. Converting 99H to binary gives 10011001, which has an even number of 1 bits, yet the test seems to expect the parity flag not to be set. If anyone could shed some light I would be grateful... Thx
I have read the Intel 8080 manual, which states: "Byte "parity" is checked after certain operations. The number of 1 bits in a byte are counted, and if the total is odd, "odd" parity is flagged; if the total is even, "even" parity is flagged. The Parity bit is set to 1 for even parity, and is reset to 0 for odd parity."
Try this online 8080 emulator
Enter the code:
mvi a, 89h
adi 10h
Press "step" and observe that the parity bit is set.
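For a quick sanity check without the emulator, the 8080 parity rule can be computed directly. This small Python sketch (the helper name is mine) confirms that 89h + 10h = 99h has an even number of 1 bits, so P is set:

```python
def parity_flag(value: int) -> int:
    """8080 P flag: 1 if the byte has an even number of 1 bits, else 0."""
    return 1 if bin(value & 0xFF).count("1") % 2 == 0 else 0

a = (0x89 + 0x10) & 0xFF     # ADI 10h with A = 89h
assert a == 0x99             # 1001_1001: four 1 bits
assert parity_flag(a) == 1   # even count of 1 bits -> P is set
```

Note that the next step in the ROM, ADI 002H giving A=D9H (1101_1001, five 1 bits), does produce P=0, consistent with the same rule.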
Can someone help me?
I'm new to PIC programming. What I'm trying to do is make bit 3 of PORTB turn on as long as bits 0 and 7 are 1; bit 0 is the main one. If bit 0 is not 1, the output (bit 3) is 0. Bit 3 does turn on, but when I change bit 0, bit 3 doesn't turn back off (0).
I'm using the INTCON register.
Here's my code:
#include "p16f628a.inc"
; CONFIG
__config 0x3F18
; __CONFIG _FOSC_INTOSCIO & _WDTE_OFF & _PWRTE_OFF & _MCLRE_OFF & _BOREN_OFF & _LVP_OFF & _CPD_OFF & _CP_OFF
LIST P=16F628A ;DECLARAR EL PIC
RADIX HEX ;DECLARAR QUE SE TRABAJARA EN HEXADECIMAL
STATUS EQU 0x03
PORTA EQU 0X05
PORTB EQU 0X06
CMCON EQU 0X1F
ORG 0X00 ;DECLARAR DONDE INICIARA
GOTO CONF
ORG 0X04
GOTO INTERRUPCIONES
ORG 0X08 ;BRINCAR REGISTROS DE MEMORIA
CONF
BSF CMCON, 0
BSF CMCON, 1
BSF CMCON, 2 ; ANALOG A DIGITAL
BSF STATUS, 5
MOVLW 0XFF
MOVWF PORTA
MOVLW 0x81
MOVWF PORTB
BCF STATUS,5
BSF INTCON,3
BSF INTCON,4
BSF INTCON,7
MAIN
BCF PORTB,3
GOTO MAIN
INTERRUPCIONES
BCF INTCON,GIE
BTFSC INTCON,INTF
GOTO TEMPERATURA
TEMP
BSF INTCON,GIE
BCF INTCON,INTF
RETFIE
TEMPERATURA
BTFSC PORTB,7
BSF PORTB,3
BSF INTCON,GIE
GOTO TEMP
END
If I understand correctly, you want to set bit 3 of PORTB to 1 whenever bit 0 or bit 7 goes high, but bit 0 is superior; I mean it has higher priority than bit 7. In light of this information we can make the truth table as follows:
RB0 (INT0)   RB7          RB3 (output)
0            0            0
0            1 (ignore)   0
1            0 (ignore)   1
1            1            1
So according to the table, the output is set to 1 only if the higher-priority interrupt input (RB0) is 1; otherwise the output remains 0. Correct me if I missed something.
But my question here is: what is the role of bit 7 if it doesn't have any effect on the output, since bit 0 overrides it in every case? Please let me know.
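The table above can be double-checked with a few lines of plain Python (a model only, not PIC code); it makes explicit that, as the table stands, RB7 never changes the result:

```python
# Model of the truth table above (plain Python, not PIC code):
# per the table, the output simply follows RB0; RB7 is ignored.
def rb3(rb0: int, rb7: int) -> int:
    return 1 if rb0 == 1 else 0

# The four rows of the table: (RB0, RB7, expected RB3)
table = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
for rb0, rb7, expected in table:
    assert rb3(rb0, rb7) == expected
```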
Let's implement this logic in your existing code. By the way, I've also noticed that you enabled the RB port-change interrupt but didn't service it. I will only write the related part, with some optimizations.
INTERRUPCIONES
; BCF INTCON,GIE ; No need to do this since the hardware does this automatically
BTFSS INTCON,INTF
GOTO DONE ; Not INTF interrupt simply skip
BCF INTCON,INTF ; Better clear this flag as soon as you start to service it
BSF PORTB,3 ; High priority input went 1, so set the output (RB3) unconditionally
; Check for PORTB on change interrupt
BTFSS INTCON,RBIF
GOTO DONE ; Not RBIF interrupt simply skip
BCF INTCON,RBIF ; First of all clear the interrupt flag to avoid unintended recursive interrupts
BTFSS PORTB,0 ; Check if the higher priority input is 1
GOTO DONE ; Higher priority input is not 1 so we ignore this change
CALL TEMPERATURA ; Calling as a subroutine is supported up to 8 levels by this micro
DONE
; BSF INTCON,GIE ; Don't do it, because the RETFIE instruction does it for you
RETFIE
; If the program reaches here, the high priority input is 1.
; Now we need to check if the low priority input is 1 as well.
TEMPERATURA
BTFSC PORTB,7
BSF PORTB,3
; BSF INTCON,GIE ; Not needed since we return to the ISR routine and RETFIE will do it for us
RETURN
And finally, let's simplify the above code by eliminating unnecessary lines:
INTERRUPCIONES
BTFSS INTCON,INTF
GOTO DONE ; Not INTF interrupt simply skip
BCF INTCON,INTF ; Better clear this flag as soon as you start to service it
BSF PORTB,3 ; High priority input went 1, so set the output (RB3) unconditionally
; Check for PORTB on change interrupt
BTFSS INTCON,RBIF
GOTO DONE ; Not RBIF interrupt simply skip
BCF INTCON,RBIF ; First of all clear the interrupt flag to avoid unintended recursive interrupts
BTFSS PORTB,0 ; Check if the higher priority input is 1
GOTO DONE ; Higher priority input is not 1 so we ignore this change
CALL TEMPERATURA ; Calling as a subroutine is supported up to 8 levels by this micro
DONE
RETFIE
; If the program reaches here, the high priority input is 1.
; Now we need to check if the low priority input is 1 as well.
TEMPERATURA
BTFSC PORTB,7
BSF PORTB,3
RETURN
When I synthesize an empty circuit using Yosys and arachne-pnr, I get a few irregular bits:
.io_tile 6 17
IoCtrl IE_1
.io_tile 6 0
IoCtrl REN_0
IoCtrl REN_1
These are also part of every other file I could generate so far. Since an unused I/O tile has both IE bits set, I read this as:
for IE/REN block 6 17 0, the input buffer is enabled
for IE/REN block 6 0 0, the input buffer is enabled and the pull-up resistor is disabled
for IE/REN block 6 0 1, the input buffer is enabled and the pull-up resistor is disabled
However, according to the documentation, there is no IE/REN block 6 17 0.
What is the meaning of these bits? If the IE bit of block 6 17 0 is unset because the block doesn't exist, why aren't the bits of the other blocks which don't exist unset, too? The other IE/REN blocks seem to correspond to I/O blocks 6 0 1 and 7 0 0. What do these blocks do, and why are they always configured as inputs?
The technology library entry for SB_IO does not mention the IE bit. How is it related to the PIN_TYPE parameter settings?
When I use an I/O pin as an input, the REN bit is set (the pull-up resistor disabled). This suggests that the pull-up resistors are primarily intended to keep unused pins from floating, not to provide a pull-up resistor for conditionally connected inputs (e.g. buttons). Is this assumption correct? Would it be ok to use the internal pull-up resistors for that purpose?
The technology library says the following:
defparam IO_PIN_INST.PULLUP = 1'b0;
// By default, the IO will have NO pull up.
// This parameter is used only on bank 0, 1,
// and 2. Ignored when it is placed at bank 3
Does this mean bank 3 doesn't have pull-up resistors, or merely that they can't be re-enabled using Verilog? What would happen if I clear that bit in the ASCII bitstream manually? (I'd be trying this, but the iCEstick evaluation board doesn't make any pin on bank 3 accessible – a coincidence? – and I'm not sure if I want to mess with the hardware yet.)
When I use an I/O pin as an output, the IE bit isn't cleared, but the input pin function is set to PIN_INPUT. What effect does this have, and why is it done?
The default behavior for unused IO pins is to enable the pullup resistors and disable input enable. On iCE40 1k chips this means IE_0 and IE_1 are set and REN_0 and REN_1 are cleared in an unused IO tile. (On 8k chips IE_* is active high, i.e. all bits are cleared in an unused IO tile on an 8k chip.)
icebox_explain by default hides tiles that have "uninteresting" contents. (Run icebox_explain -A to disable this feature.)
It looks like arachne-pnr does not set those bits for IO pins that are not available in the current package. Thus you get some unusual bit pattern in some IO tiles that contain IE/REN bits for IO blocks not connected to any package pin.
This is what a "normal" unused IO tile looks like on the 1k architecture:
$ icebox_explain -mAt '1 0' example.asc
Reading file 'example.asc'..
Fabric size (without IO tiles): 12 x 16
.io_tile 1 0
B0 ------------------
B1 ------------------
B2 ------------------
B3 ------------------
B4 ------------------
B5 ------------------
B6 ---+--------------
B7 ------------------
B8 ------------------
B9 ---+--------------
B10 ------------------
B11 ------------------
B12 ------------------
B13 ------------------
B14 ------------------
B15 ------------------
IoCtrl IE_0
IoCtrl IE_1
Would it be ok to use the internal pull-up resistors for that purpose?
Yes.
The technology library entry for SB_IO does not mention the IE bit. How is it related to the PIN_TYPE parameter settings?
When D_IN_0 or D_IN_1 from SB_IO is connected to something, then this implies IE.
When I use an I/O pin as an output, the IE bit isn't cleared
Note that IE is active low on 1k chips and active high on 8k chips. When you use an I/O pin as output-only pin on a 1k device, then the corresponding IE bit should be set, otherwise it should be cleared.
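As a small illustration of that polarity rule, here is a hedged Python helper (the function name is mine, not from icebox):

```python
# Hedged helper (naming is mine): decode whether the input buffer is
# enabled for a given raw IE bit, following the rule above:
# IE is active-low on iCE40 1k parts and active-high on 8k parts.
def input_enabled(ie_bit: int, family: str) -> bool:
    if family == "1k":
        return ie_bit == 0   # active low: a set bit means input disabled
    return ie_bit == 1       # 8k: active high

# Unused IO tile defaults: 1k has IE_* set, 8k has IE_* cleared;
# in both cases the input buffer ends up disabled.
assert input_enabled(1, "1k") is False
assert input_enabled(0, "8k") is False
```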
I found the explanation, so I'm sharing it here for future reference:
.io_tile 6 17
IoCtrl IE_1
I/O block 6 17 0 is connected to a pin in some packages but not in the TQFP package.
.io_tile 6 0
IoCtrl REN_0
IoCtrl REN_1
This corresponds to I/O blocks 6 0 1 and 7 0 0 (pins 49 and 50) into whose input paths the PLL PORTA and PORTB clocks are fed, respectively.
I'm using MPLAB to program a PIC16F84A for my project. I have assembly code where bits RB4-RB7 are connected to buttons and hence used as inputs. An interrupt subroutine is implemented to handle any new interrupt (when a button is pressed). Everything works fine: when a button is pressed, the PIC goes to the specified subroutine. But once in the subroutine, I have to clear the flag (RBIF in INTCON), and it's not being cleared, yet clearing any other bit in the INTCON register works fine. So what should I do?
Here is my code:
ORG 0X00
GOTO START
ORG 0x04
BTFSC INTCON,RBIF
GOTO RBX_INT
START CLRF PORTA
MOVLW B'10001000'
MOVWF INTCON
BSF STATUS,RP0
CLRF TRISA
MOVLW B'11110000'
MOVWF TRISB
MOVLW B'10000111'
MOVWF OPTION_REG
BCF STATUS,RP0
MAIN GOTO MAIN
And this is my subroutine:
RBX_INT BCF INTCON,RBIF
MOVLW D'156'
CALL DELAY
RETFIE
You should clear the bit right before you return from the interrupt; otherwise, a new interrupt condition can occur while still in the delay loop and set RBIF again. This happens because buttons bounce ( https://en.wikipedia.org/wiki/Switch#Contact_bounce ).
Also, the datasheet states:
The input pins (of RB7:RB4) are compared with the old value latched on the last read of PORTB. The "mismatch" outputs of RB7:RB4 are OR'ed together to generate the RB Port Change Interrupt with flag bit RBIF (INTCON<0>).
This means you have to read PORTB before clearing RBIF to update the latched value.
RBX_INT
MOVFW PORTB ;Read PORTB to update the latch.
MOVLW D'156'
CALL DELAY
BCF INTCON,RBIF ;Clear interrupt flag as close as possible to RETFIE.
RETFIE
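The latch behavior quoted from the datasheet can be sketched as a toy Python model (class and method names are mine; this is not real silicon, just the mismatch rule):

```python
# Toy model (not real hardware) of the PIC16F84A RB-port-change latch:
# RBIF is asserted while RB7:RB4 differ from the value latched at the
# last *read* of PORTB, so the flag only stays cleared after a read.

class PortB:
    def __init__(self, pins):
        self.pins = pins          # current RB7:RB4 levels
        self.latch = pins         # value captured on last read
        self.rbif = 0

    def change(self, pins):
        self.pins = pins
        if self.pins != self.latch:
            self.rbif = 1         # mismatch sets the flag

    def read(self):
        self.latch = self.pins    # reading PORTB ends the mismatch
        return self.pins

    def clear_rbif(self):
        if self.pins == self.latch:
            self.rbif = 0         # the clear only sticks once the mismatch is gone

pb = PortB(0b0000)
pb.change(0b1000)                 # button press -> RBIF set
pb.clear_rbif()
assert pb.rbif == 1               # clearing without reading has no lasting effect
pb.read()                         # MOVF PORTB,W updates the latch
pb.clear_rbif()                   # BCF INTCON,RBIF now sticks
assert pb.rbif == 0
```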
Also, you should read up on context saving/restoring for interrupt service routines. For this example it doesn't matter, because the main loop does nothing, but since an interrupt can happen at any moment, the ISR should save all registers and resources it uses and restore them before exiting, to prevent corrupting any data/state from the main code path.
See section 6.9, "Context Saving During Interrupts", in the PIC16F84A datasheet.
I am working on an embedded ARM9 development board, and I want to rearrange my NAND partitions. Can anybody tell me how to do that?
In my U-Boot shell, if I give the command mtdparts, it prints the following information:
Boardcon> mtdparts
device nand0 <nandflash0>, # parts = 7
#: name size offset mask_flags
0: bios 0x00040000 0x00000000 0
1: params 0x00020000 0x00040000 0
2: toc 0x00020000 0x00060000 0
3: eboot 0x00080000 0x00080000 0
4: logo 0x00100000 0x00100000 0
5: kernel 0x00200000 0x00200000 0
6: root 0x03c00000 0x00400000 0
active partition: nand0,0 - (bios) 0x00040000 @ 0x00000000
defaults:
mtdids : nand0=nandflash0
mtdparts: mtdparts=nandflash0:256k@0(bios),128k(params),128k(toc),512k(eboot),1024k(logo),2m(kernel),-(root)
Kernel boot message shows the following :
Creating 3 MTD partitions on "NAND 64MiB 3,3V 8-bit":
0x000000000000-0x000000040000 : "Boardcon_Board_uboot"
0x000000200000-0x000000400000 : "Boardcon_Board_kernel"
0x000000400000-0x000003ff8000 : "Boardcon_Board_yaffs2"
Can anybody please explain the relation between these two messages? And which one, the kernel or U-Boot, is responsible for creating partitions on the NAND flash? As far as I know, the kernel does not create partitions on each boot, so why the message "Creating 3 MTD partitions"?
For flash devices, either NAND or NOR, there is no partition table on the device itself. That is, you can't read the device in a flash reader and find some table that indicates how many partitions are on the device and where each partition begins and ends. There is only an undifferentiated sequence of blocks. This is a fundamental difference between MTD flash devices and devices such as disks or FTL devices such as MMC.
The partitioning of the flash device is therefore in the eyes of the beholder, that is, either U-Boot or the kernel, and the partitions are "created" when the beholder runs. That's why you see the message "Creating 3 MTD partitions": it reflects the fact that the flash partitions really only exist in the MTD system of the running kernel, not on the flash device itself.
This leads to a situation in which U-Boot and the kernel can have different definitions of the flash partitions, which is apparently what has happened in the case of the OP.
In U-Boot, you define the flash partitions in the mtdparts environment variable. In the Linux kernel, the flash partitions are defined in the following places:
In older kernels (e.g. 2.6.35 for i.MX28), the flash partitioning could be hard-coded in gpmi-nfc-mil.c or other driver source code (what a bummer!).
In newer mainline kernels with device tree support, you can define the MTD partitions in the device tree.
In the newer kernels there is usually support for kernel command line partition definition using a command line like root=/dev/mmcblk0p2 rootwait console=ttyS2,115200 mtdparts=nand:6656k(all),1m(squash),-(jffs2)
The type of partitioning support that you have in the kernel therefore depends on the type of flash you are using, whether its driver supports kernel command-line parsing, and whether your kernel has device tree support.
In any event, there is an inherent risk of conflict between the U-Boot and kernel partitioning of the flash. Therefore, my recommendation is to define the flash partitions in the U-Boot mtdparts variable and to pass this to the kernel in the U-Boot kernel command line, assuming that your kernel supports this option.
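Concretely, following that recommendation with the partition layout already shown by mtdparts above, it could look something like this at the U-Boot prompt (a sketch only: the console device, root device, and filesystem type below are placeholder assumptions that must be adapted to your board):

```
setenv mtdids nand0=nandflash0
setenv mtdparts mtdparts=nandflash0:256k@0(bios),128k(params),128k(toc),512k(eboot),1024k(logo),2m(kernel),-(root)
setenv bootargs "console=ttySAC0,115200 root=/dev/mtdblock6 rootfstype=yaffs2 ${mtdparts}"
saveenv
```

With this, the kernel's MTD layer parses the same mtdparts= string U-Boot uses, so both sides agree on the partition boundaries.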
You can set the mtdparts environment variable in U-Boot to do this. The kernel will only use it if you pass it on the kernel boot command line; otherwise, it defaults to the NAND partition structure in the kernel source code for your platform, which in this case is the 3-MTD-partition default.