Is there a way to convert from OMF 16 bit object file format to COFF 32 bit object file format?
I seriously doubt one exists. Code designed to run in a 16-bit environment is binary incompatible with 32-bit mode. For example, there is a prefix byte that tells the CPU to flip the operand size for the upcoming instruction: in 16-bit mode that prefix is needed to use 32-bit operands, while the very same byte is needed to use 16-bit operands in 32-bit mode.
Whether a sequence of opcodes is assumed to be 16 or 32 bit is specified in the segment descriptor.
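To make that concrete, here is a small sketch in C (the byte values are from the x86 encoding: 0x66 is the operand-size override prefix and 0xB8 is MOV eAX, imm):

    #include <stdio.h>

    int main(void) {
        /* The same machine-code bytes mean different things depending on
           the segment's default operand size. */
        unsigned char code[] = { 0x66, 0xB8, 0x78, 0x56, 0x34, 0x12 };
        /* In a 16-bit code segment these bytes decode as one instruction:
               66 B8 78 56 34 12    mov eax, 0x12345678
           In a 32-bit code segment the very same bytes decode as two:
               66 B8 78 56          mov ax, 0x5678
               34 12                xor al, 0x12 */
        for (size_t i = 0; i < sizeof code; i++)
            printf("%02X ", (unsigned)code[i]);
        printf("\n");
        return 0;
    }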
Anyway, if you have 16-bit code that you'd like to use in 32-bit mode, and it has no OS dependencies, you can do that by disassembling it with IDA and then reassembling it with a 32-bit assembler. Of course, only if that's permitted by its license (although this could be fair use, but IANAL).
If the code is also tied to the underlying OS, this could be a lot more difficult, and would require rewriting perhaps significant portions of the code.
Presumably the OMF16 code targets 16-bit x86 real mode or 286 protected mode? That being the case, the object file format is not really your issue; the code itself is entirely incompatible, since it uses different register sizes and a different addressing scheme.
Moreover, if the code targets DOS, Win16, or OS/2 (i.e. the systems that used OMF16), then retargeting it to a 32-bit platform is not just a matter of converting the object file format.
You need to rebuild from the source, which given the tags on the question is either C or C++. Either that or you have a significant reverse engineering task on your hands!
I've searched on the net, and found these links:
The first one is a collection of tools:
http://sourceware.org/binutils/
The second one is a tool I think you need:
http://sourceware.org/binutils/docs/binutils/objcopy.html
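A typical invocation looks like the sketch below. Be warned that whether objcopy can help here depends entirely on which formats your binutils build supports (objcopy --info lists them), and as far as I know mainline binutils does not list OMF among them, so this may well be a dead end (coff-i386 below is the BFD name for i386 COFF):

    objcopy --info                                # list supported formats
    objcopy -I <input-format> -O coff-i386 foo.obj foo-coff.obj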
They do not work in all cases (see bazsi77 above), so just test it.
Since memoryTypeBits in VkMemoryRequirements is a 32 bit uint, does it mean that there can be no more than 32 memory types?
Basically, yes. But you pretty much never see more than 12 on actual implementations. There just aren't that many combinations of heaps and memory allocation patterns.
At least, not yet. It's possible that extensions and later core features could balloon this past 32 (much like they had to extend the bits for stages to 64 when they added ray tracing). But thus far, they're pretty far from the limit.
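For reference, here is how memoryTypeBits is normally consumed, which is why the 32-type cap is harmless in practice. This is the standard selection loop in C (the function name is mine; the pattern follows the memory-type selection example in the Vulkan specification):

    #include <stdint.h>
    #include <vulkan/vulkan.h>

    /* Scan the 32 bits: bit i set means memory type i is allowed for the
       resource; we also require the property flags the caller asked for. */
    uint32_t find_memory_type(const VkPhysicalDeviceMemoryProperties *props,
                              uint32_t memory_type_bits,
                              VkMemoryPropertyFlags required)
    {
        for (uint32_t i = 0; i < props->memoryTypeCount; i++) {
            int allowed = (memory_type_bits & (1u << i)) != 0;
            int matches = (props->memoryTypes[i].propertyFlags & required) == required;
            if (allowed && matches)
                return i;
        }
        return UINT32_MAX; /* no suitable memory type */
    }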
I'm a total n00b in embedded programming. Suppose I'm building a firmware image using a compiler. The result of this operation is a file that will be flashed into (I guess) the flash memory of an MCU such as an ARM or an AVR.
My question is: What common structures (if any) are used for such generated files containing the firmware?
I come from desktop development, and I understand that for Windows, for example, the compiler will most likely generate a PE or PE+, while for Unix-like systems I may get an ELF or COFF, but I have no idea for embedded systems.
I also understand that this highly depends on many factors (Compiler, ISA, MCU vendor, OS, etc.) so I'm fine with at least one example.
Update: I will upvote all answers providing examples of the structures used and will select the one I feel best surveys the state of the art.
The firmware file is an Executable and Linkable Format (ELF) file, usually post-processed into a raw binary (.bin) or a text-encoded binary (.hex).
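With a GNU toolchain, that post-processing step is typically done with objcopy (the arm-none-eabi- prefix below assumes an ARM cross toolchain):

    arm-none-eabi-objcopy -O binary firmware.elf firmware.bin
    arm-none-eabi-objcopy -O ihex   firmware.elf firmware.hex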
This binary file is the exact memory image that is written to the embedded flash. When you first power the board, an internal bootloader redirects execution to your firmware's entry point, normally at address 0x0.
From there on, it is your code that is running. This is why you have startup code (usually a startup.s file) that configures the clock and the stack pointer register, sets up the vector table, loads the data section into RAM (your initialized variables), clears the zero-initialized section, maybe copies your code to RAM and jumps to the copy to avoid running code from flash (running from RAM can be faster on some platforms), and so on.
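A minimal sketch of that startup sequence in C (the _sidata/_sdata/... symbol names are assumptions taken from a typical GNU linker script; your toolchain's names may differ):

    #include <stdint.h>

    extern uint32_t _sidata; /* load address of .data in flash */
    extern uint32_t _sdata;  /* start of .data in RAM */
    extern uint32_t _edata;  /* end of .data in RAM */
    extern uint32_t _sbss;   /* start of .bss */
    extern uint32_t _ebss;   /* end of .bss */

    int main(void);

    void Reset_Handler(void)
    {
        /* Copy initialized variables from flash to RAM. */
        const uint32_t *src = &_sidata;
        for (uint32_t *dst = &_sdata; dst < &_edata; )
            *dst++ = *src++;

        /* Clear the zero-initialized (.bss) section. */
        for (uint32_t *dst = &_sbss; dst < &_ebss; )
            *dst++ = 0;

        /* Clock and vector-table setup would go here, then: */
        main();

        for (;;) ; /* main() should never return on bare metal */
    }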
When running under an operating system, all these platform choices and resources are out of user code's control; there you can only link against the OS libraries and use the provided APIs to do low-level things. In embedded, it is 100% user code: you access the hardware and manage its resources yourself.
Not surprisingly, operating systems boot in much the same way as firmware, since both sit directly on top of the processor, memory, and I/O.
All of that is to say: the structure of a firmware image is similar to the structure of any compiled program. There are data sections and code sections, and they are arranged in memory at load time by the operating system, or by the program itself when running on embedded.
One main difference is memory addressing in the firmware binary: addresses are usually physical RAM addresses, since most microcontrollers have no memory-mapping facility. This is transparent to the user; the compiler abstracts it.
Another significant difference is the stack pointer. Under an OS, user code does not reserve memory for the stack by itself; it relies on the OS for that. In firmware, you have to do it in user code, for the same reason as before: there is no middleman to manage it for you. The compiler's linker script reserves stack and heap memory as configured, and there will be a stack-pointer symbol in your .map file letting you know where it points. You won't find that in the map file of a program built for an OS.
Most tools output either an ELF, or a COFF, or something similar that can eventually boil down to a HEX/bin file.
That isn't necessarily what your target wants to see, however. Every vendor has their own format of "firmware" files. Sometimes they're encrypted and signed, sometimes plain text. Sometimes there's compression, sometimes it's raw. It might be a simple file, or something complex that is more than just your program.
An integral part of doing embedded work is the build flow and system startup/booting procedure, plus getting your code onto the part. Don't underestimate the effort.
Ultimately the data written to the ROM is normally just the code and constant data from which your application is composed, and therefore has no structure other than perhaps being segmented into code and data, and possibly custom segments if you have created them. The structure in this sense is defined by the linker script or configuration used to build the code. The file containing this code/data may be raw binary, or an encoded binary format such as Intel Hex or Motorola S-Record for example.
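As a flavor of what those encodings look like, here is a minimal checksum check for one Intel HEX record in C (my own sketch, based on the commonly documented record layout: a ':' followed by hex byte pairs, where all decoded bytes, the final checksum byte included, sum to 0 modulo 256):

    #include <stddef.h>

    static int hex_nibble(char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        return -1;
    }

    /* Returns 1 if an Intel HEX record (e.g. the classic example
       ":10010000214601360121470136007EFE09D2190140") has a consistent
       checksum: all decoded bytes, checksum included, sum to 0 mod 256. */
    int ihex_record_ok(const char *rec)
    {
        if (rec[0] != ':') return 0;
        unsigned sum = 0;
        size_t i = 1;
        while (hex_nibble(rec[i]) >= 0 && hex_nibble(rec[i + 1]) >= 0) {
            sum += (unsigned)(hex_nibble(rec[i]) << 4) + (unsigned)hex_nibble(rec[i + 1]);
            i += 2;
        }
        return i > 1 && (sum & 0xFF) == 0;
    }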
Typically your toolchain will also generate an object file that contains not only the code/data but also symbolic and debug information for use by a debugger. In this case, when the debugger runs, it loads the code to the target (as in the binary file case above) and the symbol/debug information to the host, to allow source-level debugging. These files may be in a proprietary object file format specific to the toolchain, but are often standard "open" formats such as ELF. Strictly speaking, though, the metadata components of an object file are not part of the firmware, since they are never loaded onto the target.
I've recently run across another firmware format not listed here. It's a binary format, might be called ".EEP" but might not. I think it is used by NXP. I've seen it used for ARM THUMB2 and for mystery stuff that may be a DSP/BSP.
All the following are 32-bit values, all stored in reverse endian except for CAFEBABE (so... BEBAFECA?):
CAFEBABE
length_in_16_bit_words (yes, 16-bit...?!)
base_addr
CRC32
length*2 bytes of data
FFFF (optional filler if the length is an odd number)
If there are more data blocks:
length
base
checksum that is not a CRC but something bizarre
data
FFFF (optional filler if the length is an odd number)
...
When no more data blocks remain:
length == 0
base == 0
checksum that is not a CRC but something bizarre
Then all of that is repeated for another memory bank/device.
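Since the format above is the product of reverse engineering, here is only a speculative C sketch of one data block's header (the field names are mine, not an official spec):

    #include <stdint.h>

    #define FW_MAGIC 0xCAFEBABEu  /* first block only; the description above
                                     suggests it too may appear byte-swapped */

    struct fw_block_header {
        uint32_t length_words; /* payload length in 16-bit words, byte-swapped on disk */
        uint32_t base_addr;    /* load address, byte-swapped on disk */
        uint32_t checksum;     /* CRC32 for the first block, something else after that */
    };
    /* Each header is followed by length_words * 2 bytes of data, plus an
       optional 0xFFFF filler when length_words is odd. A block with
       length == 0 and base == 0 terminates the list, and the whole layout
       repeats for the next memory bank/device. */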
Currently I am using IntelliJ IDEA 14.0.3 (earlier I was using 12.1.4) on 64-bit Windows 8.1.
When you install it, the installer creates shortcuts in the Start menu and other places that default to the 32-bit .exe file, even on a 64-bit system.
I know that I can use the 64 bit executable to run idea in 64 bit mode as given in this SO answer.
But is there any significant performance difference between the two versions of the IDE?
And which executable is recommended for 64-bit systems? Shall I keep using the 32-bit one, or shall I switch to the 64-bit version?
The difference between running the 32-bit and the 64-bit launchers is which Java is used to start the IDE and which .vmoptions parameters are passed to it.
When starting the 32-bit one, IDEA uses its own bundled 32-bit JRE. If there is none, IDEA tries to find a 32-bit JRE in several places in a specific order (%IDEA_HOME%, %JDK_HOME%, %JAVA_HOME%). The values in idea.exe.vmoptions are passed to it.
When starting the 64-bit one, it tries to find a 64-bit JRE in several places in a specific order. The values in idea64.exe.vmoptions are passed to it.
So if you want to allocate 2 GB of RAM or more (with -Xmx), that is not going to happen with 32-bit Java (and therefore 32-bit IDEA). For large projects, using less than 2 GB makes the IDE hang a lot; for smaller projects I don't think you'll feel any difference.
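For example, to give the 64-bit launcher a 2 GB heap you would put something like this in idea64.exe.vmoptions (the exact numbers are just an illustration):

    -Xms512m
    -Xmx2048m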
For reference, this is the bug report about it; so far they are not acting on it:
https://youtrack.jetbrains.com/issue/IDEA-146040
I have a 64-bit system. I'm declaring my variables as 64-bit variables, expecting my code to run faster. When I call methods such as String.IndexOf("J", X), it fails because X is a Long, but the method expects a 32-bit value as the start index.
Is there any way to pass a 64 bit variable without converting it down to a 32 bit?
You got the wrong idea about 64-bit code. The arguments to the String.IndexOf() method do not change, the second argument is still an Integer. The only type in .NET that changes size is IntPtr.
This is all very much by design. A 64-bit processor does not execute code faster when you let it manipulate 64-bit integral values; quite the contrary, it makes the code run slower. Processor speed is throttled in large part by the size of the caches. The CPU caches are important because they help the processor avoid having to read or write data from RAM, which is very slow compared to the speed of the processor. A worst-case stall from not having data in the L1, L2, or L3 caches can be 200 cycles.
The cache size is fixed. Using 64-bit variables makes the cache half as effective.
You also make your code slower by using Long and Option Strict Off. That requires the compiler to emit a conversion to cast the Long to an Integer. It is not visible in your code but it certainly is performed, you can see it when you look at the IL with ildasm.exe or a decompiler like ILSpy or Reflector.
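The question is about VB.NET, but the cache argument is language independent; here is a small C illustration of the footprint difference (the array size is arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* One million values stored as 32-bit vs. 64-bit integers: */
        printf("32-bit: %zu bytes\n", 1000000 * sizeof(int32_t)); /* 4000000 */
        printf("64-bit: %zu bytes\n", 1000000 * sizeof(int64_t)); /* 8000000 */
        /* Twice the bytes means half as many values fit in the same cache. */
        return 0;
    }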
I am not sure what is meant by 16-bit or 32-bit applications. Is it that a 16-bit application is one that would not require more than 2^16 bytes of memory? Does the 16 bits refer to the maximum size of the application?
It means the application has been compiled for a processor that has 16 bits of memory addressing, or 32 bits. The same goes for 64-bit applications.
The number refers to the maximum amount of memory that the application can address.
See wikipedia - 16-bit, 32-bit, 64-bit (and more).
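One quick way to see what the number means in practice (a C illustration; the pointer width follows the target the compiler builds for, not the machine the code happens to run on):

    #include <stdio.h>

    int main(void)
    {
        /* Prints 4 when the program is compiled as a 32-bit application,
           and 8 when compiled as a 64-bit application. */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        return 0;
    }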
A 32-bit application is software that runs in a 32-bit flat address space.
Answers to common questions
Will a 64 bit CPU run a standard (32-bit) program on a 64-bit version of an OS?
Yes it will. 64-bit systems are backward compatible with their 32-bit counterparts.
Will a 64-bit OS run a standard application on a 64 bit processor?
Again, it will. This is because of backward compatibility.
Can I run W2K and WXP on a 64-bit CPU, and use old software?
Yes, a 32-bit OS (W2K or WXP) will run on a 64-bit processor. Also, you should be able to run "old software" on a 64-bit OS.
The number (32 or 16) in the assembler directive for the address mode (for example [use16] and [use32]) does not refer to the maximum amount of memory the application can address!
On the 80386+ it is also possible to use operand-size and address-size prefixes in combination with 16-bit protected mode in order to address up to 4 GB of RAM.
(The maximum amount of memory our application can use is determined by the segment-size entries of the GDT/LDT selectors, or by the default segment size of 64 KB.)
The only difference between the 32-bit and the 16-bit address modes is the meaning and usage of those operand-size and address-size prefixes.
[use16]
So if we want to use 32-bit operands/addresses in the 16-bit address mode, we have to add those prefixes to our opcodes. Without those prefixes, we can only use 16-bit operands/addresses.
[use32]
In the 32-bit address mode we find the diametrically opposite situation: if we want to use 32-bit operands/addresses we have to leave those prefixes out of our opcodes, and only if we want to use 16-bit operands/addresses do we have to add them.
If we use the size directives above (or similar notation) carefully, our assembler will do this job for us.
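A concrete illustration with mov (opcode B8 = mov eAX, imm; 66 is the operand-size prefix; encodings per the Intel manuals):

    In [use16]:  B8 34 12              mov ax, 0x1234
    In [use16]:  66 B8 78 56 34 12     mov eax, 0x12345678
    In [use32]:  B8 78 56 34 12        mov eax, 0x12345678
    In [use32]:  66 B8 34 12           mov ax, 0x1234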
Operand size prefix in 16-bit mode
Dirk