Reading the system clock value? (low-level)

Is there a virtual/system clock running independently when a computer is booted?
How can we read that value?

Use the x86 RDTSC instruction; it reads the number of clock cycles since system start.
Edit:
On x86-64 targets with MSVC, inline assembly is no longer available; use either intrinsics (e.g. __rdtsc) or an externally linked object file generated by an assembler. Do not forget to serialize the processor pipeline (e.g. with CPUID) before using this instruction, since RDTSC itself is not a serializing instruction.
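For example, a minimal sketch of the intrinsics route, assuming GCC or Clang on x86-64 (MSVC exposes __rdtsc and __cpuid through <intrin.h> instead):

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc */
#include <cpuid.h>       /* __get_cpuid */

static uint64_t read_tsc(void)
{
    unsigned int a, b, c, d;
    __get_cpuid(0, &a, &b, &c, &d);   /* CPUID serializes the pipeline */
    return __rdtsc();                 /* cycle count since reset */
}

int main(void)
{
    uint64_t t0 = read_tsc();
    /* ... code to be timed ... */
    uint64_t t1 = read_tsc();
    printf("elapsed cycles: %llu\n", (unsigned long long)(t1 - t0));
    return 0;
}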


Do I have to use Full System Mode after adding a new device on gem5?

I'm trying to add an ORAM module to gem5; it would modify the address from the CPU to memory. After reading the introduction on how to add a device named HelloDevice to gem5 in the ASPLOS 2008 tutorial, I am still confused: if I add a new device to gem5, do I have to use Full System Mode to run tests/test-progs/hello/bin/x86/linux/hello?
tests/test-progs/hello/bin/x86/linux/hello is a userland executable, meant to be run with se.py.
I think devices are not visible from se.py, since it only emulates userland by translating simple instructions and capturing syscalls; you can't see, for example, arbitrary hardware registers or physical memory.
Therefore yes, I think you need to use full system emulation with your build.
If you don't know how to use fs.py, give this setup a try.
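For reference, typical invocations of the two modes look roughly like this (paths assume a stock X86 gem5 build; fs.py additionally needs a kernel and disk image set up):

build/X86/gem5.opt configs/example/se.py -c tests/test-progs/hello/bin/x86/linux/hello
build/X86/gem5.opt configs/example/fs.py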

Is a software image loaded into non-volatile RAM when using tftpboot from U-boot?

I have a Xilinx development board connected to a RHEL workstation.
I have U-boot loaded over JTAG and connect to it with minicom.
Then I tftpboot the helloworld standalone application.
Where do these images go?
I understand I am specifying a loadaddr, but I don't fully understand the meaning.
When I run the standalone application, I get various outputs on the serial console.
I had it working correctly the first time, but then started trying different things when building.
It almost feels like I am clobbering memory, but I assumed that after a power cycle anything tftp'd would be lost.
The issue still occurs through a power cycle, though.
Where do these images go?
The U-Boot command syntax is:
tftpboot [loadAddress] [[hostIPaddr:]bootfilename]
You can explicitly specify the memory destination address as the loadAddress parameter.
When the loadAddress parameter is omitted from the command, then the memory destination address defaults to the value of the environment variable loadaddr.
Note that several other U-Boot commands also use this loadaddr variable, such as "bootp", "rarpboot", "loadb" and "diskboot".
I understand I am specifying a loadaddr, but I don't fully understand the meaning.
When I run the standalone application, I get various outputs on the serial console.
The loadAddress is simply the start address in memory to which the transferred file will be written.
For a standalone application, this loadAddress should match the CONFIG_STANDALONE_LOAD_ADDR that was used to link this program.
Likewise the "go" command to execute this standalone application program should use the same CONFIG_STANDALONE_LOAD_ADDR.
For example, assume the physical memory of your board starts at 0x20000000.
To allow the program to use the maximum amount of available memory, the program is configured to start at:
#define CONFIG_STANDALONE_LOAD_ADDR 0x20000000
For convenient loading, define the environment variable (at the U-Boot prompt):
setenv loadaddr 0x20000000
Assuming that the serverip variable is defined with the IP address of the TFTP server, then the U-Boot command
tftpboot hello_world.bin
should retrieve that file from the server, and store it at 0x20000000.
Use
go 20000000
to execute the program.
I assumed after a power cycle anything tftp'd would be lost.
It should be.
But what might persist in "volatile" memory after a power cycle is unpredictable. Nor can you be assured of a default value such as all zeros or all ones. The contents of dynamic RAM should always be presumed to be unknown unless you know it has been initialized and has been written.
Is a software image loaded into non-volatile RAM when using tftpboot from U-boot?
Only if your board has main memory that is non-volatile (e.g. ferrite core or battery-backed SRAM, which are not likely).
You can use the "md" (memory display) command to inspect RAM.
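For example, reusing the 0x20000000 address from the answer above, the following dumps the first 0x40 bytes of the loaded image so you can check what actually arrived:

md.b 20000000 40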

How does machine code communicate with the processor?

Let's take Python as an example. If I am not mistaken, when you program in it, the computer first "translates" the code to C. Then again, from C to assembly. Assembly is written in machine code. (This is just a vague idea I have, so correct me if I am wrong.) But what is machine code written in? Or, more exactly, how does the processor process its instructions; how does it "find out" what to do?
If I am not mistaken, when you program in it, the computer first "translates" the code to C.
No it doesn't. C is nothing special except that it's the most widespread programming language used for system programming.
The Python interpreter translates the Python code into so-called P-code that is executed by a virtual machine. This virtual machine is the actual interpreter: it reads P-code, and every blip of P-code makes the interpreter execute a predefined code path. This is not unlike how native binary machine code controls a CPU. A more modern approach is to translate the P-code into native machine code.
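As a toy illustration of that dispatch idea in C (an invented four-opcode stack machine, not CPython's actual P-code):

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };   /* invented opcodes */

/* program: push 2, push 3, add, print, halt */
static const int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };

int main(void)
{
    int stack[16], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {   /* each opcode selects a predefined code path */
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;   /* prints 5 */
        case OP_HALT:  return 0;
        }
    }
}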
The CPython interpreter itself is written in C and has been compiled into a native binary. Basically a native binary is just a long series of numbers (opcodes) where each number designates a certain operation. Some opcodes tell the machine that a defined count of numbers following it are not opcodes but parameters.
The CPU itself contains a so-called instruction decoder, which reads the native binary number by number, and for each opcode it reads, it gives power to the circuits of the CPU that implement that particular opcode. There are opcodes that address memory, opcodes that load data from memory into registers, and so on.
how does the processor process its instructions, how does it "find out" what to do?
For every opcode, which is just a binary pattern, there is a dedicated circuit on the CPU. If the pattern of the opcode matches the "switch" that enables this circuit, the circuit processes it.
Here's a WikiBook about it:
http://en.wikibooks.org/wiki/Microprocessor_Design
A few years ago some guy built a whole, working computer from simple fixed-function logic and memory ICs, i.e. no microcontroller or similar involved. The whole project, called "Big Mess o' Wires", was more or less a CPU built from scratch. The only thing nerdier would have been building that thing from single transistors (which actually wouldn't have been that much more difficult). He also provides a simulator which allows you to see how the CPU works internally, decoding each instruction and executing it: Big Mess o' Wires Simulator
EDIT: Since I originally wrote that answer, building a fully fledged CPU from modern, discrete components has been done. For your consideration, a MOS 6502 (the CPU that powered the Apple II, Commodore 64, NES, BBC Micro and many more) built from discretes: https://monster6502.com/
Machine code does not "communicate with the processor".
Rather, the processor "knows how to evaluate" machine code. In the [widespread] Von Neumann architecture, this machine code (the program) can be thought of as an indexable array where each cell contains a machine-code instruction (or data, but let's ignore that for now).
The CPU "looks" at the current instruction (often identified by the PC, or Program Counter) and decides what to do; this can either be done directly with transistors/"bare metal", or it can be translated to even lower-level code. This is known as the fetch-decode-execute cycle.
When the instructions are executed, side effects occur, such as setting a control flag, putting a value in a register, or jumping to a different index (changing the PC) in the program. See this simple overview of a CPU, which covers the above a little better.
It is the evaluation of each instruction -- as it is encountered -- and the interaction of side-effects that results in the operation of a traditional processor.
(Of course, modern CPUs are very complex and do lots of neat tricky things!)
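To make the cycle concrete, here is a minimal sketch in C with an invented instruction set (no real CPU's opcodes); note how executing JNZ has the PC-changing side effect described above:

#include <stdio.h>

enum { LOADI, ADDI, OUT, JNZ, HALT };   /* invented opcodes */

/* r0 = 3; do { r0 -= 1; print r0; } while (r0 != 0); */
static const int prog[] = {
    LOADI, 0, 3,     /* r[0] = 3               */
    ADDI,  0, -1,    /* r[0] += -1             */
    OUT,   0,        /* print r[0]             */
    JNZ,   0, 3,     /* if (r[0] != 0) pc = 3  */
    HALT
};

int main(void)
{
    int r[4] = { 0 }, pc = 0;
    for (;;) {
        int op = prog[pc++];                                   /* fetch   */
        switch (op) {                                          /* decode  */
        case LOADI: { int d = prog[pc++]; r[d]  = prog[pc++]; break; }   /* execute */
        case ADDI:  { int d = prog[pc++]; r[d] += prog[pc++]; break; }
        case OUT:     printf("%d\n", r[prog[pc++]]); break;
        case JNZ:   { int s = prog[pc++], t = prog[pc++];
                      if (r[s] != 0) pc = t;                   /* side effect: change the PC */
                      break; }
        case HALT:    return 0;
        }
    }
}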
That's called microcode. It's the code in the CPU that reads machine-code instructions and translates them into low-level data flow.
When the CPU for example encounters the add instruction, the microcode describes how it should get the two values, feed them to the ALU to do the calculation, and where to put the result.
Electricity. Circuits, memory, and logic gates.
Also, I believe Python is usually interpreted, not compiled through C → assembly → machine code.

How does one use dynamic recompilation?

It came to my attention that some emulators and virtual machines use dynamic recompilation. How do they do that? In C, I know how to call a function in RAM using typecasting (although I never tried it), but how does one read opcodes and generate code for them? Does the person need to have premade assembly chunks and copy/batch them together? Is the assembly written in C? If so, how do you find the length of the code? How do you account for system interrupts?
-edit-
System interrupts and how to (re)compile the data are what I am most interested in. Upon more research, I heard of one person (no source available) who used JS: read the machine code, output JS source, and use eval to "compile" the JS source. Interesting.
It sounds like I MUST have knowledge of the target platform's machine code to dynamically recompile.
Yes, absolutely. That is why parts of the Java Virtual Machine must be rewritten (namely, the JIT) for every architecture.
When you write a virtual machine, you have a particular host-architecture in mind, and a particular guest-architecture. A portable VM is better called an emulator, since you would be emulating every instruction of the guest-architecture (guest-registers would be represented as host-variables, rather than host-registers).
When the guest- and host-architectures are the same, like VMWare, there are a ton of (pretty neat) optimizations you can do to speed up the virtualization - today we are at the point that this type of virtual machine is BARELY slower than running directly on the processor. Of course, it is extremely architecture-dependent - you would probably be better off rewriting most of VMWare from scratch than trying to port it.
It's quite possible - though obviously not trivial - to disassemble code from a memory pointer, optimize the code in some way, and then write back the optimized code - either to the original location or to a new location with a jump patched into the original location.
Of course, emulators and VMs don't have to RE-write; they can do this at load time.
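A hedged sketch of the "write code to memory, then call it through a typecast" part on POSIX x86-64 (the byte sequence encodes mov eax, 42; ret and is architecture-specific; hardened systems may refuse a writable and executable mapping):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);           /* "write back" the generated code */

    int (*fn)(void) = (int (*)(void))buf;     /* typecast the pointer ...        */
    printf("%d\n", fn());                     /* ... and call it: prints 42      */

    munmap(buf, sizeof code);
    return 0;
}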
This is a wide-open question; not sure where you want to go with it. Wikipedia covers the generic topic with a generic answer. The code being emulated or virtualized is replaced with native code; the more the code is run, the more is replaced.
I think you need to do a few things. First, decide whether you are talking about emulation or a virtual machine like VMware or VirtualBox. In an emulation, the processor and hardware are emulated using software: the next instruction is read by the emulator, the opcode is pulled apart by code, and you determine what to do with it.
I have been doing some 6502 emulation and static binary translation, which is dynamic recompilation but preprocessed instead of real time. Say your emulator takes an LDA #10 (load A immediate). It sees the load-A-immediate instruction and knows it has to read the next byte, which is the immediate; the emulator has a variable in the code for the A register and puts the immediate value in that variable. Before completing the instruction, the emulator needs to update the flags: in this case the Z flag is clear, the N flag is clear, and C and V are untouched.
But what if the next instruction is a load X immediate? No big deal, right? Well, the load X will also modify the Z and N flags, so the next time you execute the load A instruction you may figure out that you don't have to compute the flags, because they will be destroyed; it is dead code in the emulation.
You can continue with this kind of thinking. Say you see code that copies the X register to the A register, then pushes the A register on the stack, then copies the Y register to the A register and pushes that on the stack: you could replace that chunk with simply pushing the X and Y registers on the stack. Or you may see a couple of add-with-carries chained together to perform a 16-bit add and store the result in adjacent memory locations. Basically, you look for operations that the processor being emulated couldn't do directly but that are easy to do in the emulation.
Static binary translation, which I suggest you look into before dynamic recompilation, performs this analysis and translation statically, that is, before you run the code. Instead of emulating, you translate the opcodes to C, for example, and remove as much dead code as you can (a nice feature is that the C compiler can remove more dead code for you).
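A hedged sketch of just the LDA-immediate case described above (0xA9 is the real 6502 opcode for LDA #imm; the surrounding structure is invented):

#include <stdint.h>

/* emulated 6502 state: accumulator, Z/N flags, program counter */
struct cpu { uint8_t a; int z, n; uint16_t pc; };

static uint8_t mem[65536];    /* emulated memory holding the guest program */

static void step(struct cpu *c)
{
    uint8_t op = mem[c->pc++];          /* fetch the next opcode */
    switch (op) {
    case 0xA9:                          /* LDA #imm */
        c->a = mem[c->pc++];            /* read the immediate byte into A */
        c->z = (c->a == 0);             /* update Z flag */
        c->n = (c->a & 0x80) != 0;      /* update N flag; C and V untouched */
        break;
    /* ... the other opcodes ... */
    }
}

int main(void)
{
    struct cpu c = { 0 };
    mem[0] = 0xA9; mem[1] = 10;   /* LDA #10 */
    step(&c);
    return c.a;                   /* exits with status 10 */
}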
Once the concepts of emulation and translation are understood, you can try to do it dynamically; it is certainly not trivial. I would suggest first doing a static translation of a binary to the machine code of the target processor, which is a good exercise. I wouldn't attempt dynamic run-time optimizations until I had succeeded in performing them statically against a binary.
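For a flavor of what such a static translation might emit, the LDA #10 example could come out as C along these lines (names invented; the flags are folded to constants, and would be dropped entirely if later code overwrites them):

#include <stdint.h>
#include <stdio.h>

static uint8_t reg_a;
static int flag_z, flag_n;

/* statically translated from 6502: LDA #10 */
static void block_0600(void)
{
    reg_a  = 10;
    flag_z = 0;    /* 10 != 0           */
    flag_n = 0;    /* bit 7 of 10 clear */
}

int main(void)
{
    block_0600();
    printf("A=%u Z=%d N=%d\n", reg_a, flag_z, flag_n);
    return 0;
}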
Virtualization is a different story: you are running the same processor on the same processor, x86 on x86, for example. The beauty here is that, on reasonably recent x86 processors, you can take the program being virtualized and run its actual opcodes on the actual processor, with no emulation. You set up traps built into the processor to catch things: loading values into AX, adding BX, and so on all happen in real time on the processor. When the program reads or writes memory, it depends on your trap mechanism: if the addresses are within the virtual machine's RAM space, there are no traps. But say the program writes to an address which is the virtualized UART: the processor traps that, and then VMware or whatever decodes the write and emulates it, talking to a real serial port. That one instruction, though, wasn't real time; it took quite a while to execute.
What you could do, if you chose to, is replace the instruction or set of instructions that write a value to the virtualized serial port and have them write to a different address, one that could be the real serial port or some other location that will not cause a fault forcing the VM manager to emulate the instruction. Or add some code in the virtual memory space that performs a write to the UART without a trap, and have the original code branch to this UART-write routine instead. The next time you hit that chunk of code, it runs in real time.
Another thing you can do is, for example, emulate and, as you go, translate to a virtual intermediate bytecode, like LLVM's. From there you can translate from the intermediate machine to the native machine, eventually replacing large sections of the program, if not the whole thing. You still have to deal with the peripherals and I/O.
Here's an explanation of how they are doing dynamic recompilation for the 'Rubinius' Ruby interpreter:
http://www.engineyard.com/blog/2010/making-ruby-fast-the-rubinius-jit/
This approach is typically used by environments with an intermediate byte-code representation (like Java or .NET). The byte code contains enough "high level" structures (high level in terms of higher level than machine code) that the VM can take chunks out of the byte code and replace them with a compiled memory block. The VM typically decides which part gets compiled by counting how many times the code has already been interpreted, since compilation itself is a complex and time-consuming process. So it is useful to only compile the parts which get executed many times.
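A hedged sketch of that counting heuristic (compile_block here is a hypothetical function standing in for the VM's embedded compiler):

#define HOT_THRESHOLD 1000

struct block {
    int exec_count;
    void (*compiled)(void);    /* NULL until the block is JIT-compiled */
};

/* hypothetical: translate this block's byte code into machine code */
void (*compile_block(struct block *b))(void);

static void run_block(struct block *b)
{
    if (b->compiled) {
        b->compiled();                        /* fast path: native code  */
    } else if (++b->exec_count >= HOT_THRESHOLD) {
        b->compiled = compile_block(b);       /* hot enough: compile it  */
        b->compiled();
    } else {
        /* interpret the block's byte code as usual */
    }
}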
but how does one read opcodes and generate code for them?
The scheme of the opcodes is defined by the specification of the VM, so the VM opens the program file, and interprets it according to the spec.
Does the person need to have premade assembly chunks and copy/batch them together? Is the assembly written in C?
This process is an implementation detail of the VM; typically there is a compiler embedded which is capable of transforming the VM opcode stream into machine code.
How do you account for system interrupts?
Very simple: you don't. The code in the VM can't interact with real hardware. The VM interacts with the OS and transfers OS events to the code by jumping to or calling specific parts inside the interpreted code. Every event in the code or from the OS must pass through the VM.
Hardware virtualization products can also use some kind of JIT. A typical use case in the x86 world is the translation of 16-bit real-mode code to 32- or 64-bit protected-mode code, so as not to be forced to emulate a CPU in real mode. Also, a software-only VM replaces jump instructions in the executing code with jumps into the VM control software, which at each branch scans the following code path for jump instructions and replaces them before jumping to the real code destination. But I doubt the jump replacement qualifies as JIT compilation.
IIS does this by shadow copying: after compilation it copies assemblies to some temporary place and runs them from temp.
Imagine that a user changes some files. Then IIS will recompile the assemblies in the following steps:
Recompiles (all requests handled by old code)
Copies the new assemblies (all requests handled by old code)
Switches over: all new requests are handled by the new code, requests already in flight by the old
I hope this is helpful.
A virtual machine loads "byte code" or "intermediate language", not machine code; therefore, I suppose, it just recompiles the byte code more efficiently once it has more runtime data.
http://en.wikipedia.org/wiki/Just-in-time_compilation

What does it mean to attach ROMFS in RAM?

I'm building a kernel for an ARM platform running uClinux 2.4, and under "General Setup" in the Linux configuration there is an option called "m68knommu-style attached romfs in RAM support". My ARM assembly skills are somewhat limited, but as far as I can tell, if I enable this option the ROMFS is copied to the end of the kernel's .bss.
What is the purpose of this?
As you rightly indicate, this option causes the romfs attached to the kernel image to be relocated to the end of the .bss section. This allows the system to start from the romfs as its root filesystem.
The above isn't exactly correct. I believe I actually developed the change; if not, I definitely used it. As noted, this feature offers support for a romfs filesystem concatenated to the kernel image, both of which are placed in RAM. This option then ensures the romfs filesystem will automatically have its size evaluated and be moved to a reserved area of RAM (with the appropriate pointers passed for mounting via the MTD RAM driver).
Without this change it is still possible to run the romfs out of RAM; you merely need to have your bootloader place it in a predetermined location and pass in the appropriate kernel options. The big feature this change added was the ability to have a single, unified kernel-plus-filesystem image, the way the Coldfire builds did.
Note that it only works if you have the appropriate changes in your head-platform.S; as I recall, it may only be in place on the NetSilicon NS7520.
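For reference, a hedged sketch of the size-evaluation step: a romfs image starts with the magic "-rom1fs-" followed by a big-endian 32-bit full size (per the standard romfs on-disk format), which is what lets start-up code know how many bytes to move:

#include <stdint.h>
#include <string.h>

/* Return the size in bytes of the romfs image at 'img',
 * or 0 if the magic does not match. */
static uint32_t romfs_size(const uint8_t *img)
{
    if (memcmp(img, "-rom1fs-", 8) != 0)
        return 0;
    /* bytes 8..11 hold the full size, stored big-endian */
    return ((uint32_t)img[8]  << 24) | ((uint32_t)img[9] << 16) |
           ((uint32_t)img[10] <<  8) |  (uint32_t)img[11];
}

int main(void)
{
    static const uint8_t fake[12] = { '-','r','o','m','1','f','s','-', 0, 0, 4, 0 };
    return romfs_size(fake) == 1024 ? 0 : 1;   /* 0x00000400 = 1024 bytes */
}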