Declaring a 16-bit memory variable in assembly - variables

I'm starting to study assembly for the PIC18F4550 and I've been working through some exercises, but I don't know how to solve this one. According to the exercise, using MPLAB X, I'm supposed to add two 16-bit variables and store the result in a third 16-bit variable.
I was able to do the addition and store the result in the third variable, but I have no idea how to declare these variables as 16-bit.
; TODO INSERT CONFIG CODE HERE USING CONFIG BITS GENERATOR

    INCLUDE

RES_VECT  CODE    0x0000        ; processor reset vector
    GOTO    START               ; go to beginning of program

; TODO ADD INTERRUPTS HERE IF USED

MAIN_PROG CODE                  ; let linker place main program

START
    clrw                        ; clear the W register

num1    equ 00000               ; declares 3 variables and their initial values
num2    equ 00001
result  equ 00002

    movlw   H'4F'
    movwf   num1
    movlw   H'8A'
    movwf   num2

    movf    num1,W              ; moves num1 value to the W register
    addwf   num2,W              ; adds num2 to W and stores the sum in W itself
    movwf   result              ; moves W to the result register

    END
I need to check whether my code is actually correct (I'm totally new to assembly) and how to declare these 3 variables in 16-bit format. Thanks in advance!

The PIC18 is an 8-bit controller. If you want to add two 16-bit variables you have to do it byte by byte.
Maybe you don't want to work with absolute addresses and would rather let the linker place the variables:
udata_acs H'000'
num1_LSB RES 1 ;reserve one byte on the access bank
num1_MSB RES 1 ;
You could also reserve two bytes under a single name:
udata_acs H'000'
num1 RES 2 ;reserve two bytes on the access bank
Now you can access the second byte with:
movwf num1+1
And always remember to check the carry bit when computing the MSB of the addition.
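If the byte-by-byte idea is still unclear, here is a minimal C sketch of the same logic (the variable names are placeholders; in PIC18 assembly you would use addwf for the low bytes and addwfc for the high bytes):

#include <stdint.h>

/* Sketch: 16-bit addition done byte by byte on an 8-bit machine.
 * The low bytes are added first; the carry out of that addition
 * is then added in along with the high bytes.                    */
uint8_t num1_lsb = 0x4F, num1_msb = 0x00;
uint8_t num2_lsb = 0x8A, num2_msb = 0x00;
uint8_t result_lsb, result_msb;

void add16(void)
{
    uint16_t low = (uint16_t)num1_lsb + num2_lsb;            /* may exceed 0xFF   */
    result_lsb = (uint8_t)low;                               /* keep the low byte */
    result_msb = num1_msb + num2_msb + (uint8_t)(low >> 8);  /* add the carry     */
}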

Related

How to erase msp430f2619 flash using bsl?

I want to do a mass erase on my MSP430F2619 using the BSL. I use a software jump in my code to invoke the BSL. I send 0x80 and get 0x90 (ACK) back from the BSL. Then I send the mass erase command and get 0x90 again. Then I power the device off and back on, send 0x80 and get 0x90 again, which means there was no mass erase operation.
The read command is not working either. I send the password (0xFF 32 times), after that send the RX command, then I get a few correct bytes, and then an endless stream of 0xFF.
I think I missed something before the jump to the BSL. Please give example code, or step-by-step instructions, on how to make the software jump to the BSL and make it work correctly.
If you send only 0x80 and then get back 0x90, this confirms you have entered the BSL, since this completes the required synchronization sequence (see section 2.1 of this document). You should not need the "RX password" command, since the "Mass erase" command is not password protected.
The next step after the synchronization is to send the desired command, which should be "Mass erase". Each BSL command follows a format called the data frame. You want to send the following data frame: eight mandatory bytes (note the two dummy bytes) and two checksum bytes. Note that the "Mass erase" command does not contain data bytes, but you still need to calculate the checksum bytes. Here are the bytes to be sent to perform the mass erase:
80 18 04 04 dd dd 06 A5 CL CH
Where: dd = dummy bytes (any value accepted), CL = Checksum low, CH = Checksum high
After sending this data frame, you should receive the ACK (0x90) byte. Then power off the device.
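For what it's worth, here is a minimal C sketch of the checksum calculation, assuming the scheme described in the TI BSL documentation (XOR the frame as 16-bit little-endian words, then invert the result); verify it against the document referenced above before relying on it:

#include <stdint.h>
#include <stddef.h>

/* Assumed checksum scheme: XOR the frame as 16-bit little-endian words,
 * then invert the result. CKL/CKH are the low/high bytes of that value. */
static void bsl_checksum(const uint8_t *frame, size_t len,
                         uint8_t *ckl, uint8_t *ckh)
{
    uint16_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum ^= (uint16_t)frame[i] | ((uint16_t)frame[i + 1] << 8);
    sum = (uint16_t)~sum;
    *ckl = (uint8_t)(sum & 0xFF);
    *ckh = (uint8_t)(sum >> 8);
}

int main(void)
{
    /* Mass erase frame with the dummy bytes chosen as 0x00 */
    uint8_t frame[8] = { 0x80, 0x18, 0x04, 0x04, 0x00, 0x00, 0x06, 0xA5 };
    uint8_t ckl, ckh;
    bsl_checksum(frame, sizeof frame, &ckl, &ckh);
    /* transmit frame[0..7], then ckl, then ckh */
    return 0;
}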

Assembly x86 - variable assignment

Assume I have a variable called Block_Size, without initialization.
Would
Block_Size db ?
mov DS:Block_Size, 1
be equal to
Block_Size db 1
No, Block_Size db ? has to go in the BSS or data section, not mixed in with your code.
If you wrote
my_function:
Block_Size db ?
mov DS:Block_Size, 1
...
ret
your code would crash. ? isn't really uninitialized; it's actually zeroed. So when the CPU decoded the instructions starting at my_function (e.g. after some other code ran call my_function), it would actually decode that 0 as code. (IIRC, opcode 0 is add, and then the opcode of the mov instruction would be decoded as the operand byte of add (ModR/M).)
Try assembling it, and then use a disassembler to show you how it would decode, along with the hex dump of the machine code.
db assembles a byte into the output file at the current position, just like add eax, 2 assembles 83 c0 02 into the output file.
You can't use db the way you declare a variable in C:
void foo() {
unsigned char Block_size = 1;
}
A non-optimizing compiler would reserve space on the stack for Block_size. Look at compiler asm output if you're curious. (But it will be more readable if you enable optimization. You can use volatile to force the compiler to actually store to memory so you can see that part of the asm in optimized code.)
Maybe related: Assembly - .data, .code, and registers...?
If you wrote
.data
Block_size db ?
.code
set_blocksize:
mov [Block_size], 1
ret
it would be somewhat like this C:
unsigned char Block_size;
void set_blocksize(void) {
Block_size = 1;
}
If you don't need something to live in memory, don't use db or dd for it. Keep it in registers. Or use Block_size equ 1 to define a constant, so you can do stuff like mov eax, Block_size + 4 instead of mov eax, 5.
Variables are a high-level concept that assembly doesn't really have. In asm, data you're working with can be in a register or in memory somewhere. Reserving static storage for it is usually unnecessary, especially for small programs. Use comments to keep track of what you put in which register.
db literally stands for "define byte", so it puts that byte there in the output, whereas the mov instruction has the CPU place a particular value in a register, overwriting whatever else was there.

Interpreting hex commands for a camera on a data sheet

I'm reading this data sheet for a camera.
Datasheet
I have my Arduino communicating with the camera over SPI and can send it a command to take a picture.
The last step is to send a command to retrieve the data, which I'm stuck on.
On page 4 the DATA command is
FF FF FF 0x0A 0x05 Length Byte 0 Length Byte 1 Length Byte 2
So in code the command would look like this. But how do I figure out what Length Byte 0, Length Byte 1, and Length Byte 2 are?
uint8_t DataCmd[8] = { 0xff, 0xff, 0xff, 0x0a, 0x05, ?, ?, ?};
On page 6 it says
Image Length = len 0 + Len 1 * 100h + Len 2 * 10000h
What does this mean? And how do I translate it into the three parameters that I need for my command?
As you can read, the DATA command is a command sent BY the camera to you. The flow chart on page 9 shows what it does:
When the camera receives "get picture",
is the picture ready? If not, send a NAK and return
if it is, send an ACK
send the DATA command (with the length)
send the picture data
wait for the host to send an ACK
Page 10 has the steps you have to perform. I'll copy them here for future reference:
Establish communication with the camera
Send command INIT (e.g. FFFFFF0100870107h)
Wait for the ACK (e.g. FFFFFF0E01nn0000h)
Send command SELECT IMAGE QUALITY (e.g. FFFFFF1000000000h)
Wait for the ACK (e.g. FFFFFF0E10nn0000h)
Send command GET PICTURE (e.g. FFFFFF0405000000h)
Wait for the ACK (e.g. FFFFFF0E04nn0000h)
Wait for the DATA (e.g. FFFFFF0AnnL0L1L2h)
Receive Image Data
The DATA packet contains L0, L1 and L2, which encode the image data length. L0 is the low-order byte, so if L0 = 0x45, L1 = 0x23, L2 = 0x01 the total length will be L0 + L1 * 0x100 + L2 * 0x10000 = 0x012345; this means that the image is 0x12345 = 74565 bytes, so you know how many bytes you will receive before actually receiving them.
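In C on the Arduino, that reconstruction might look something like this (the function and parameter names are just placeholders):

#include <stdint.h>

/* Rebuild the image length from the three length bytes of the DATA packet. */
uint32_t image_length(uint8_t len0, uint8_t len1, uint8_t len2)
{
    return (uint32_t)len0             /* low-order byte */
         + ((uint32_t)len1 << 8)      /* len1 * 0x100   */
         + ((uint32_t)len2 << 16);    /* len2 * 0x10000 */
}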

Statically Defined IDT

I'm working on a project that has tight boot-time requirements. The targeted architecture is an IA-32 based processor running in 32-bit protected mode. One of the areas identified as improvable is that the current system dynamically initializes the processor's IDT (interrupt descriptor table). Since we don't have any plug-and-play devices and the system is relatively static, I want to be able to use a statically built IDT.
However, this is proving to be troublesome on the IA-32 arch, since the 8-byte interrupt gate descriptor splits the ISR address: the low 16 bits of the ISR address appear in the first 2 bytes of the descriptor, some other bits fill the next 4 bytes, and finally the high 16 bits of the ISR address appear in the last 2 bytes.
I wanted to use a const array to define the IDT and then simply point the IDT register at it like so:
typedef struct s_myIdt {
unsigned short isrLobits;
unsigned short segSelector;
unsigned short otherBits;
unsigned short isrHibits;
} myIdtStruct;
myIdtStruct myIdt[256] = {
{ (unsigned short)myIsr0, 1, 2, (unsigned short)(myIsr0 >> 16)},
{ (unsigned short)myIsr1, 1, 2, (unsigned short)(myIsr1 >> 16)},
etc.
Obviously this won't work, as it is illegal in C: the myIsr addresses are not compile-time constants. Their values are resolved by the linker (which can do only a limited amount of math) and not by the compiler.
Any recommendations or other ideas on how to do this?
You ran into a well-known x86 wart. I don't believe the linker can stuff the addresses of your ISR routines into the swizzled form expected by the IDT entry.
If you are feeling ambitious, you could create an IDT builder script that does something like the following (Linux-based) approach. I haven't tested this scheme and it probably qualifies as a nasty hack anyway, so tread carefully.
Step 1: Write a script to run 'nm' and capture the stdout.
Step 2: In your script, parse the nm output to get the memory address of all your interrupt service routines.
Step 3: Output a binary file, 'idt.bin', that has the IDT bytes all set up and ready for the LIDT instruction. Your script obviously outputs the ISR addresses in the correct swizzled form.
Step 4: Convert this raw binary into an ELF object with objcopy:
objcopy -I binary -O elf32-i386 idt.bin idt.elf
Step 5: Now the idt.elf file contains your IDT binary along with symbols something like this:
> nm idt.elf
000000000000000a D _binary_idt_bin_end
000000000000000a A _binary_idt_bin_size
0000000000000000 D _binary_idt_bin_start
Step 6: Relink your binary, including idt.elf. In your assembly stubs and linker scripts you can refer to the symbol _binary_idt_bin_start as the base of the IDT. For example, your linker script can place the symbol _binary_idt_bin_start at any address you like.
Be careful that relinking with the IDT section doesn't move anything else in your binary, e.g. your ISR routines. Manage this in your linker script (.ld file) by putting the IDT into its own dedicated section.
---EDIT---
From comments, there seems to be confusion about the problem. The 32-bit x86 IDT expects the address of the interrupt service routine to be split into two different 16-bit words, like so:
 31             16 15              0
+-----------------+-----------------+
|  Address 31-16  |                 |
+-----------------+-----------------+
|                 |  Address 15-0   |
+-----------------+-----------------+
A linker is thus unable to plug in the ISR address as a normal relocation. So, at boot time, software must construct this split format, which slows boot time.
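For illustration, here is a minimal C sketch of the per-entry swizzling that such a builder script (or the existing runtime init code) has to perform; the struct mirrors the one in the question and the flag values are illustrative only:

#include <stdint.h>

/* One 8-byte IA-32 interrupt gate descriptor, with the ISR address split
 * across the first and last 16-bit fields.                               */
struct idt_gate {
    uint16_t isr_lo;        /* ISR address bits 15..0  */
    uint16_t seg_selector;  /* code segment selector   */
    uint16_t flags;         /* type / DPL / present    */
    uint16_t isr_hi;        /* ISR address bits 31..16 */
};

static struct idt_gate make_gate(uint32_t isr_addr, uint16_t selector,
                                 uint16_t flags)
{
    struct idt_gate g;
    g.isr_lo       = (uint16_t)(isr_addr & 0xFFFF);
    g.seg_selector = selector;
    g.flags        = flags;
    g.isr_hi       = (uint16_t)(isr_addr >> 16);
    return g;
}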

How to print disassembly registers in the Xcode console

I'm looking at some disassembly code and see something like 0x01c8f09b <+0015> mov 0x8(%edx),%edi and I am wondering what the value of %edx or %edi is.
Is there a way to print the value of %edx or other assembly variables? Is there a way to print the value at the memory address that %edx points to (I'm assuming edx is a register containing a pointer to ... something here)?
For example, you can print an object by typing po in the console, so is there a command or syntax for printing registers/variables in the assembly?
Background:
I'm getting EXC_BAD_ACCESS on this line and I would like to debug what is going on. I'm aware this error is related to memory management, and I'm trying to figure out where I may have missing or extra retain/release/autorelease calls.
Additional Info:
This is on iOS, and my application is running in the iPhone simulator.
You can print a register (e.g, eax) using:
print $eax
Or for short:
p $eax
To print it as hexadecimal:
p/x $eax
To display the value pointed to by a register:
x $eax
Check the gdb help for more details:
help print
help x
It depends on which Xcode compiler/debugger you are using. For gcc/gdb it's
info registers
but for clang/lldb it's
register read
(gdb) info reg
eax 0xe 14
ecx 0x2844e0 2639072
edx 0x285360 2642784
ebx 0x283ff4 2637812
esp 0xbffff350 0xbffff350
ebp 0xbffff368 0xbffff368
esi 0x0 0
edi 0x0 0
eip 0x80483f9 0x80483f9 <main+21>
eflags 0x246 [ PF ZF IF ]
cs 0x73 115
ss 0x7b 123
ds 0x7b 123
es 0x7b 123
fs 0x0 0
gs 0x33 51
From Debugging with gdb:
You can refer to machine register contents, in expressions, as variables with names
starting with `$'. The names of registers are different for each machine; use info
registers to see the names used on your machine.
info registers
Print the names and values of all registers except floating-point
registers (in the selected stack frame).
info all-registers
Print the names and values of all registers, including floating-point
registers.
info registers regname ...
Print the relativized value of each specified register regname.
regname may be any register name valid on the machine you are using,
with or without the initial `$'.
If you are using LLDB instead of GDB you can use register read
Those are not variables, but registers.
In GDB, you can see the values of standard registers by using the following command:
info registers
Note that a register contains an integer value (32 bits in your case, as the register name is prefixed with e). What it represents is not known; it can be a pointer, an integer, almost anything.
If po crashes when you try to print a register's value as a pointer, it's likely that the value is not a pointer (or is an invalid one).