Contiki compile error: "ERROR: address 0x820003 out of range at line 1740 of..."

I have started using the Contiki operating system with an Atmel ATmega128RFA1.
I can compile my example, but the hex file is bad. The error is:
ERROR: address 0x820003 out of range at line 1740 of ipso.hex (I am not using IPSO, I just kept the name).
When I compile on Linux, the program size is 27804 bytes and the data is 4809 bytes.
When I compile on Windows, the program is 28292 bytes and the data is 4791 bytes.
I use only one process and one etimer; I just want to turn one LED on and off.
The makefile consists of:
TARGET=avr-atmega128rfa1
CONTIKI = ../..
include $(CONTIKI)/Makefile.include
all:
	make -f Makefile.ipso TARGET=avr-atmega128rfa1 ipso.elf
	avr-objcopy -O ihex -R .eeprom ipso.elf ipso.hex
	avr-size -C --mcu=atmega128rfa1 ipso.elf
I can't program the controller. What is the problem?
Thank you.

Special sections in the .elf file start above 0x810000 and must be removed when generating a hex file for programming a particular memory, e.g.
$ avr-objdump -h webserver6.avr-atmega128rfa1
webserver6.avr-atmega128rfa1: file format elf32-avr
Sections:
Idx Name Size VMA LMA File off Algn
0 .data 00001bda 00800200 0000e938 0000ea2c 2**0
CONTENTS, ALLOC, LOAD, DATA
1 .text 0000e938 00000000 00000000 000000f4 2**1
CONTENTS, ALLOC, LOAD, READONLY, CODE
2 .bss 000031a6 00801dda 00801dda 00010606 2**0
ALLOC
3 .eeprom 00000029 00810000 00810000 00010606 2**0
CONTENTS, ALLOC, LOAD, DATA
4 .fuse 00000003 00820000 00820000 0001062f 2**0
CONTENTS, ALLOC, LOAD, DATA
5 .signature 00000003 00840000 00840000 00010632 2**0
CONTENTS, ALLOC, LOAD, READONLY, DATA
So,
avr-objcopy -O ihex -R .eeprom -R .fuse -R .signature ipso.elf ipso.hex
Alternatively, copy only the desired sections:
avr-objcopy -O ihex -j .text -j .data ipso.elf ipso.hex
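Folded back into the makefile from the question, the objcopy step would then look like this (a sketch; everything except the -R flags is unchanged from the original rule):

all:
	make -f Makefile.ipso TARGET=avr-atmega128rfa1 ipso.elf
	avr-objcopy -O ihex -R .eeprom -R .fuse -R .signature ipso.elf ipso.hex
	avr-size -C --mcu=atmega128rfa1 ipso.elf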

avr-objcopy --change-section-lma .eeprom=0
This works for me.


How would you crack this (MD5 HashCat)?

I was given this file:
hashes.txt
experthead:e10adc3949ba59abbe56e057f20f883e
interestec:25f9e794323b453885f5181f1b624d0b
ortspoon:d8578edf8458ce06fbc5bb76a58c5ca4
reallychel:5f4dcc3b5aa765d61d8327deb882cf99
simmson56:96e79218965eb72c92a549dd5a330112
bookma:25d55ad283aa400af464c76d713c07ad
popularkiya7:e99a18c428cb38d5f260853678922e03
eatingcake1994:fcea920f7412b5da7be0cf42b8c93759
heroanhart:7c6a180b36896a0a8c02787eeafb0e4c
edi_tesla89:6c569aabbf7775ef8fc570e228c16b98
liveltekah:3f230640b78d7e71ac5514e57935eb69
blikimore:917eb5e9d6d6bca820922a0c6f7cc28b
johnwick007:f6a0cb102c62879d397b12b62c092c06
flamesbria2001:9b3b269ad0a208090309f091b3aba9db
oranolio:16ced47d3fc931483e24933665cded6d
spuffyffet:1f5c5683982d7c3814d4d9e6d749b21e
moodie:8d763385e0476ae208f21bc63956f748
nabox:defebde7b6ab6f24d5824682a16c3ae4
bandalls:bdda5f03128bcbdfa78d8934529048cf
I thought I had to separate them, so I put experthead, interestec, etc. into one file named wordtext.txt, and e10adc3949ba59abbe56e057f20f883e, etc. into another file called hash.txt.
I then ran this:
hashcat -m 0 -a 0 /Users/myname/Desktop/hash.txt /Users/myname/Desktop/wordtext.txt -O
but I couldn't get anything. Then I googled e10adc3949ba59abbe56e057f20f883e, and the output was 123456, so now I don't know how to approach this problem.
Just leave the hashes in the txt file (erase the usernames); hashcat will sort them out by itself. What I do is: hashcat.exe -m 0 -a 0 hashFile.txt dict.txt --show
The file appears to be in username:hash format. By default, hashcat assumes that only hashes are in the target file.
You can change this behavior with hashcat's --username option.
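For example, running directly against the original hashes.txt from the question (rockyou.txt here is a hypothetical stand-in for whatever wordlist you use):

hashcat -m 0 -a 0 --username hashes.txt rockyou.txt
hashcat -m 0 -a 0 --username --show hashes.txt

With --username, hashcat ignores the username field when cracking, and the --show run prints each cracked entry as username:hash:plaintext.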
You don't need to place the -O at the end. It should work perfectly without it, but you do need hashcat.exe in the beginning.

perl gunzip to buffer and gunzip to file have different byte orders

I'm using Perl v5.22.1, Storable 2.53_01, and IO::Uncompress::Gunzip 2.068.
I want to use Perl to gunzip a Storable file in memory, without using an intermediate file.
I have a variable $zip_file = '/some/storable.gz' that points to this zipped file.
If I gunzip directly to a file, this works fine, and %root is correctly set to the Storable hash.
gunzip($zip_file, '/home/myusername/Programming/unzipped');
my %root = %{retrieve('/home/myusername/Programming/unzipped')};
However if I gunzip into memory like this:
my $file;
gunzip($zip_file, \$file);
my %root = %{thaw($file)};
I get the error
Storable binary image v56.115 more recent than I am (v2.10)
so Storable's magic number has been butchered: it should never be that high.
However, the strings in the unzipped buffer are still correct; the buffer starts with pst, which is the correct Storable header. It only seems to be multi-byte values like integers that are being broken.
Does this have something to do with byte ordering, such that writing to a file works one way while writing to a file buffer works in another? How can I gunzip to a buffer without it ruining my integers?
That's not related to unzip but to using retrieve vs. thaw. They expect different input: thaw expects the output of freeze, while retrieve expects the output of store.
This can be verified with a simple test:
$ perl -MStorable -e 'my $x = {}; store($x,q[file.store])'
$ perl -MStorable=freeze -e 'my $x = {}; print freeze($x)' > file.freeze
On my machine this gives 24 bytes for the file created by store and 20 bytes for freeze. If I remove the leading 4 bytes from file.store, the file is equivalent to file.freeze, i.e. store just adds a 4-byte header. Thus you might try to uncompress the file in memory, remove the leading 4 bytes, and run thaw on the rest.
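A minimal sketch of that last suggestion, reusing $zip_file from the question (the 4-byte offset is an assumption based on the store/freeze comparison above):

use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
use Storable qw(thaw);

my $buf;
gunzip($zip_file, \$buf)
    or die "gunzip failed: $GunzipError";
# Drop the 4-byte file magic that store() prepends, so the
# rest of the buffer matches what freeze() would have produced.
my %root = %{ thaw(substr($buf, 4)) };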

How can I get the disk IO trace with actual input values?

I want to generate a trace file from disk IO, but the problem is that I need the actual input data along with the timestamp, logical address, access block size, etc.
I've been trying to solve the problem by using "blktrace | blkparse" with "iozone" in an Ubuntu VirtualBox environment, but it does not seem to work.
There is an option in blkparse for setting the output format to show the packet data, -f "%P", but it does not print anything.
Below are the commands that I use:
$> sudo blktrace -a issue -d /dev/sda -o - | blkparse -i - -o ./temp/blktrace.sda.iozone -f "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n"
$> iozone -w -e -s 16M -f ./mnt/iozone.dummy -i 0
In the format string "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n", everything else is printed fine, but the "%P" is not printed at all.
Does anyone know why the packet data is not displayed?
Or does anyone know another way to get the disk IO packet data with the actual input values?
As far as I know, blktrace does not capture the actual data; it captures just the metadata. One way to capture the real data is to write your own kernel module. Some students at FIU did that in this paper:
"I/O deduplication: Utilizing content similarity to ..."
I would ask this question on the linux-btrace mailing list as well:
http://vger.kernel.org/majordomo-info.html

Import class-dump info into GDB

Is there a way to import the output from class-dump into GDB?
Example code:
$ cat > test.m
#include <stdio.h>
#import <Foundation/Foundation.h>
@interface TestClass : NSObject
+ (int)randomNum;
@end
@implementation TestClass
+ (int)randomNum {
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}
@end
int main(void) {
    printf("num: %d\n", [TestClass randomNum]);
    return 0;
}
^D
$ gcc test.m -lobjc -o test
$ ./test
num: 4
$ gdb test
...
(gdb) b +[TestClass randomNum]
Breakpoint 1 at 0x100000e5c
(gdb) ^D
$ strip test
$ gdb test
...
(gdb) b +[TestClass randomNum]
Function "+[TestClass randomNum]" not defined.
(gdb) ^D
$ class-dump -A test
...
@interface TestClass : NSObject
{
}
+ (int)randomNum; // IMP=0x0000000100000e50
@end
I know I can now use b *0x0000000100000e50 in gdb, but is there a way of modifying GDB's symbol table to make it accept b +[TestClass randomNum]?
Edit: It would be preferable if it worked with GDB v6 and not only GDB v7, as GDB v6 is the latest version with Apple's patches.
It’s possible to load a symbol file in gdb with the add-symbol-file command. The hardest part is to produce this symbol file.
With the help of libMachObjC (which is part of class-dump), it’s very easy to dump all addresses and their corresponding Objective-C methods. I have written a small tool, objc-symbols which does exactly this.
Let’s use Calendar.app as an example. If you try to list the symbols with the nm tool, you will notice that the Calendar app has been stripped:
$ nm -U /Applications/Calendar.app/Contents/MacOS/Calendar
0000000100000000 T __mh_execute_header
0000000005614542 - 00 0000 OPT radr://5614542
But with objc-symbols you can easily retrieve the addresses of all the missing Objective-C methods:
$ objc-symbols /Applications/Calendar.app
00000001000c774c +[CALCanvasAttributedText textWithPosition:size:text:]
00000001000c8936 -[CALCanvasAttributedText createTextureIfNeeded]
00000001000c8886 -[CALCanvasAttributedText bounds]
00000001000c883b -[CALCanvasAttributedText updateBezierRepresentation]
...
00000001000309eb -[CALApplication applicationDidFinishLaunching:]
...
Then, with SymTabCreator, you can create a symbol file, which is actually just an empty dylib with all the symbols.
Using objc-symbols and SymTabCreator together is straightforward:
$ objc-symbols /Applications/Calendar.app | SymTabCreator -o Calendar.stabs
You can check that Calendar.stabs contains all the symbols:
$ nm Calendar.stabs
000000010014a58b T +[APLCALSource printingCachedTextSize]
000000010013e7c5 T +[APLColorSource alternateGenerator]
000000010013e780 T +[APLColorSource defaultColorSource]
000000010013e7bd T +[APLColorSource defaultGenerator]
000000010011eb12 T +[APLConstraint constraintOfClass:withProperties:]
...
00000001000309eb T -[CALApplication applicationDidFinishLaunching:]
...
Now let’s see what happens in gdb:
$ gdb --silent /Applications/Calendar.app
Reading symbols for shared libraries ................................. done
Without the symbol file:
(gdb) b -[CALApplication applicationDidFinishLaunching:]
Function "-[CALApplication applicationDidFinishLaunching:]" not defined.
Make breakpoint pending on future shared library load? (y or [n]) n
And after loading the symbol file:
(gdb) add-symbol-file Calendar.stabs
add symbol table from file "Calendar.stabs"? (y or n) y
Reading symbols from /Users/0xced/Calendar.stabs...done.
(gdb) b -[CALApplication applicationDidFinishLaunching:]
Breakpoint 1 at 0x1000309f2
You will notice that the breakpoint address does not exactly match the symbol address (0x1000309f2 vs 0x1000309eb, 7 bytes of difference), this is because gdb automatically recognizes the function prologue and sets the breakpoint just after.
GDB script
You can use this GDB script to automate this, given that the stripped executable is the current target.
Add the script from below to your .gdbinit, target the stripped executable and run the command objc_symbols in gdb:
$ gdb test
...
(gdb) b +[TestClass randomNum]
Function "+[TestClass randomNum]" not defined.
(gdb) objc_symbols
(gdb) b +[TestClass randomNum]
Breakpoint 1 at 0x100000ee1
(gdb) ^D
define objc_symbols
shell rm -f /tmp/gdb-objc_symbols
set logging redirect on
set logging file /tmp/gdb-objc_symbols
set logging on
info target
set logging off
shell target="$(head -1 /tmp/gdb-objc_symbols | awk -F '"' '{ print $2 }')"; objc-symbols "$target" | SymTabCreator -o /tmp/gdb-symtab
set logging on
add-symbol-file /tmp/gdb-symtab
set logging off
end
There is no direct way to do this (that I know of), but it seems like a great idea.
And now there is a way to do it... nice answer, 0xced!
The DWARF file format is well documented, IIRC, and, as the lldb source is available, you have a working example of a parser.
Since the source to class-dump is also available, it shouldn't be too hard to modify it to spew DWARF output that could then be loaded into the debugger.
Obviously, you wouldn't be able to dump symbols with full fidelity, but this would probably be quite useful.
You can use DSYMCreator.
With DSYMCreator, you can create a symbol file from an iOS executable binary.
It's a toolchain, so you can use it like this.
$ ./main.py --only-objc /path/to/binary/xxx
Then a file /path/to/binary/xxx.symbol will be created, which is a DWARF-format symbol file. You can import it into lldb yourself.
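For reference, importing the generated symbol file into lldb might look like this (hypothetical path, assuming the binary is already the current target):

(lldb) target symbols add /path/to/binary/xxx.symbol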
Apart from that, DSYMCreator also supports exporting symbols from IDA Pro. You can use it like this:
$ ./main.py /path/to/binary/xxx
Yes, just omit the --only-objc flag. IDA Pro will then run automatically, and a file /path/to/binary/xxx.symbol will be created, which is the symbol file.
Thanks to 0xced for creating objc-symbols, which is a part of the DSYMCreator toolchain.
BTW, https://github.com/tobefuturer/restore-symbol is another choice.

objdump ELF header - meaning of flags?

$ objdump -f ./a.out
./a.out: file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x080484e0
$ objdump -f function.o
function.o: file format elf32-i386
architecture: i386, flags 0x00000011:
HAS_RELOC, HAS_SYMS
start address 0x00000000
What is the meaning of the flags (flags 0x00000011: or flags 0x00000112:)? Nothing in the ELF header has these flags; e_flags contains 0.
Does someone have an idea about their meaning?
Thanks
They are BFD-specific bitmasks. In the binutils source tree, see bfd/bfd-in2.h:
/* BFD contains relocation entries. */
#define HAS_RELOC 0x01
/* BFD is directly executable. */
#define EXEC_P 0x02
...
/* BFD has symbols. */
#define HAS_SYMS 0x10
...
/* BFD is dynamically paged (this is like an a.out ZMAGIC file) (the
linker sets this by default, but clears it for -r or -n or -N). */
#define D_PAGED 0x100
These flag values won't appear in your object file; they are simply an in-memory representation that libbfd uses.
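As a worked example, the two values above decompose against those masks as:

0x112 = 0x100 (D_PAGED) | 0x10 (HAS_SYMS) | 0x02 (EXEC_P)
0x011 = 0x010 (HAS_SYMS) | 0x01 (HAS_RELOC)

which is exactly the list of names objdump printed for each file.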
They are libbfd flags.
Are you trying to re-implement objdump? ... =)