How is the BIOS used by a modern OS?

What's the function of the BIOS in a modern OS? Is it still used after booting? And is there some kind of BIOS API?

The BIOS is still the first thing that runs on the just-started CPU, and it is responsible for getting the motherboard hardware turned on, setting basic chipset modes and registers, initializing some hardware, and running the code that loads the kernel.
The BIOS is usually not used once the kernel is loaded, since it depends on a 16-bit real-mode execution environment, as opposed to the 32- or 64-bit protected-mode environment that a modern kernel operates in.
The boot loader normally does require BIOS I/O calls to get the kernel into memory. Even in this role the BIOS is being replaced by newer boot-time software such as Coreboot to provide faster boot times. EFI will one day replace the traditional BIOS, and hopefully the boot loader as well, passing control directly to the kernel after loading it from storage.
The discovered hardware configuration, memory range settings, and ACPI metadata tables are probably the only BIOS-based data used by the OS after the kernel is loaded. Any runnable ACPI code is encoded as ACPI Machine Language and is interpreted by the OS.
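As a concrete illustration of that last point, here is a minimal sketch of how a kernel might locate the ACPI RSDP that the BIOS leaves behind in memory. It assumes a legacy (non-UEFI) x86 PC and an identity-mapped physical address range, as is typical early in kernel boot; the structure layout follows the ACPI specification, but the helper name is made up.

    #include <stdint.h>
    #include <string.h>

    /* Root System Description Pointer, as left in memory by the BIOS (ACPI 1.0 layout). */
    struct rsdp {
        char     signature[8];   /* "RSD PTR " */
        uint8_t  checksum;
        char     oem_id[6];
        uint8_t  revision;
        uint32_t rsdt_address;   /* physical address of the RSDT */
    } __attribute__((packed));

    /* Scan the legacy BIOS area 0xE0000-0xFFFFF on 16-byte boundaries. A real
     * kernel would also scan the EBDA and verify the checksum. */
    static const struct rsdp *find_rsdp(void)
    {
        for (uintptr_t p = 0xE0000; p < 0x100000; p += 16) {
            const struct rsdp *r = (const struct rsdp *)p;
            if (memcmp(r->signature, "RSD PTR ", 8) == 0)
                return r;
        }
        return NULL;
    }

Once the RSDP is found, the OS walks the RSDT/XSDT it points to and interprets any AML it finds there on its own, without calling back into the BIOS.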
Any good traditional book on MS-DOS assembly programming will include information on the BIOS programming interface. Check out The Art of Assembly Language Programming.

I wrote BIOS for notebook computers for several years. The BIOS does a lot of things while the OS is running.
A major task is to inform the OS when many events happen so the OS can look smart (as if it somehow figured these things out on its own). For example, the BIOS tells the OS when: the power button is pressed, batteries are inserted or removed, AC power comes or goes, the system connects to or disconnects from a docking station, or hard drives and/or certain types of optical drives are inserted or removed.
Most portable computers have features that you can access/control through Fn keys and through OS-level applications provided by the manufacturers. The BIOS responds to these hotkeys and has code to interface with the OS-level apps. Features like controlling screen brightness (which certain OSes want to appear to control) or controlling bling LEDs fall into this category.
Perhaps the most important task of the BIOS is to shut down the system when the power button is held down for more than 4 seconds (to recover from OS hangs!).

The biggest benefit of the OS having access to the BIOS today is control over hardware-level parameters such as fan speed, temperature sensors, and so on.

Related

Most hardware is controlled at the driver level through memory-mapped addresses. How is RAM controlled? Is there a spec for it?

I'm just getting into the massive topic of learning UEFI driver development and from what I understand so far, hardware peripherals are controlled using specific addresses mapped to memory. Well, the memory is hardware too. Is it not controlled by drivers?
I assume the CPU and motherboard have built-in circuits that handle this, but my curiosity is whether drivers have any hardware-level control over this handling. I'd just prefer to know for sure, and I'm not sure which manual would explain this.
[kernel/UEFI] driver <-> memory-mapped address <-> firmware [hardware: keyboard]
[kernel/UEFI] driver <-> ?                      <-> firmware [hardware: RAM]
My guess (some spec?):
driver <-> CPU microcode <-> motherboard circuit <-> firmware
I just think assumptions are bad, and I can't find a citation confirming the probable answer. The answer is relevant to security and to which supply chain / standard we're trusting. Just as PCIe and NVMe are standard specs, perhaps there's a standard for RAM <-> CPU communication?
Maybe this question is a better fit for an Engineering SE site?
From a software development perspective, there isn't a driver for RAM control; it's exposed as an indistinguishable part of the hardware via the Instruction Set Architecture (ISA), just as the CPU is hardware but has no driver to control it. The reason for drivers is access to hardware not defined by the ISA, such as a USB device, a particular manufacturer's SSD (which may or may not be present at power-up time), graphics hardware, etc. Just like the CPU, RAM must be present at power-up; you won't get much further than an error code via lights and beeps without RAM in your system at power-up time. This isn't the case for most, if not all, other hardware; such other hardware is therefore optional, isn't part of the ISA definition, and drivers (software) are needed to access it.
RAM is managed both by hardware (the CPU; see the Intel manual, Volume 3), which on modern parts provides virtualization and paging support for a modern OS, and by software (the OS memory allocator), for the purpose of virtualizing RAM for the running processes. No drivers, though; just addressing via ISA instructions.
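To make the contrast concrete, here is an illustrative sketch only: plain RAM needs no driver because a C assignment compiles directly to ISA load/store instructions, while a device register is only reachable once a driver has mapped its MMIO address. The mention of ioremap() in the comment is the Linux-specific way a kernel driver would obtain such a mapping; nothing here is tied to a particular device.

    #include <stdint.h>

    int counter;                       /* ordinary RAM: the compiler just emits loads/stores */

    void touch_ram(void)
    {
        counter = counter + 1;         /* no driver involved, only ISA instructions */
    }

    void touch_device(volatile uint32_t *mmio_reg)
    {
        /* a driver obtained mmio_reg by mapping a physical MMIO address,
         * e.g. via ioremap() inside a Linux kernel driver */
        *mmio_reg = 0x1;               /* still a store, but it targets device space */
    }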
If you're looking for an answer from a hardware perspective, such as the details of the bus circuitry, which CPU pins are involved, exact protocol, etc., then this question is a better candidate for Engineering StackExchange site.

How do you access the secret area of any USB device?

We know interfaces based on the "vtable" principle. Once you have a pointer to an object, you can narrow-cast it to an interface, and the new object is the same object but is limited to just that interface. I always thought hardware firmware was somewhat similar. For example, for block devices (HDDs or SSDs), this interface is something like read, write, status and similar. So a driver is a user of such a device interface.
As it turns out, any storage device has firmware, and a special area of its storage marked internal where the firmware is saved. Manufacturers release programs that allow you to "flash" their specific devices, i.e. write a new program to this internal space, hidden from the OS.
My question is: on a software level, how do they perform these read-write operations to the "hidden" area of a drive? How are dead "COM ports" related?
If HDDs work across all OSes, why is firmware upgrade software only released for Windows? In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
We know interfaces based on the "vtable" principle. Once you have a pointer to an object, you can narrow-cast it to an interface, and the new object is the same object but is limited to just that interface. I always thought hardware firmware was somewhat similar. For example, for block devices (HDDs or SSDs), this interface is something like read, write, status and similar. So a driver is a user of such a device interface.
No, not really. Object-oriented programming is unrelated to personal computer hardware, and your impression that virtual calls are relatable to device drivers is misguided. They're completely unrelated.
As it turns out, any storage device has firmware, and a special area of its storage marked internal where the firmware is saved. Manufacturers release programs that allow you to "flash" their specific devices, i.e. write a new program to this internal space, hidden from the OS.
This is not true. Not all storage devices have firmware - and whatever firmware they have (if any) is not necessarily stored on rewritable flash-storage. ROM chips exist, for example, which are not rewritable.
My question is: on a software level, how do they perform these read-write operations to the "hidden" area of a drive? How are dead "COM ports" related?
If you're referring to firmware updates of modern (post-2004) SATA and NVMe storage devices, then those devices' firmware can be updated using SATA and NVMe's built-in commands.
This is documented in places like the t13.org ATA/ATAPI Command Set - 4 (ACS-4) specification.
If HDDs work across all OSes, why is firmware upgrade software only released for Windows? In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
why is firmware upgrade software only released for Windows
Because Windows is the predominant operating system used by the customers of those kinds of hardware. While the firmware can be updated using raw SATA/NVMe commands, you still need a host operating system to run the program that issues those commands. Suppose it costs $100k to build a firmware updater for an SSD for Windows and another $100k for Linux ($200k for both) - but 90% of all Linux users also run Windows. Why spend $200k for 100% coverage when you can spend $100k for 90% coverage, spend the extra $100k on a Ferrari or a Tesla Model X P100D for yourself, and blame the users for not booting from a Windows USB stick to upgrade their firmware? (Side note: I chose the latter, and yes, I really do love my Tesla Model X.)
You cannot have a program that just magically runs on any computer platform (Windows, BSD, Linux, macOS, QNX, etc.) and updates peripheral device firmware: it always needs to be a program that can be executed by a host OS (you can argue that UEFI/EFI is a platform-agnostic approach, but in reality UEFI/EFI is still its own platform).
In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
200mg of Adderall and a pirated copy of IDA Pro.
...or 500mg of Dexedrine and NSA Ghidra.
It depends on the exact type of block device and how it is interfaced to the PC. A very common interface is SATA, which can be used directly with a SATA controller in a home PC - or it can be reached through a USB-SATA bridge.
If we take SATA as an example, there exists a special command in the ATA command set known as DOWNLOAD MICROCODE (command ID 0x92), which exists solely for the purpose of transferring new firmware to the drive controller.
The firmware is typically not stored in a "hidden area of the drive" itself, as you indicate - it is typically stored in flash memory or similar on the drive controller PCB or within the drive controller IC.
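For the software-level part of the question, here is a rough sketch, under several assumptions (Linux, a SATA drive reachable at /dev/sda, a single 512-byte firmware block, and a simplified field encoding), of how an update tool might wrap DOWNLOAD MICROCODE in an ATA PASS-THROUGH (16) SCSI command and hand it to the kernel with the SG_IO ioctl. Real vendor tools follow the drive's documented procedure; do not run anything like this against a disk you care about.

    /* Sketch only: send one block of firmware to a SATA drive using
     * ATA PASS-THROUGH (16) + DOWNLOAD MICROCODE via the Linux SG_IO ioctl. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <scsi/sg.h>

    int main(void)
    {
        int fd = open("/dev/sda", O_RDWR);            /* needs root */
        if (fd < 0) { perror("open"); return 1; }

        unsigned char fw_chunk[512] = {0};            /* one 512-byte firmware block */
        unsigned char cdb[16] = {0};
        unsigned char sense[32] = {0};

        cdb[0]  = 0x85;          /* SCSI opcode: ATA PASS-THROUGH (16) */
        cdb[1]  = 5 << 1;        /* protocol 5 = PIO data-out */
        cdb[2]  = 0x06;          /* length in sector-count field, direction = to device */
        cdb[4]  = 0x07;          /* FEATURES: subcommand 07h, download with offsets */
        cdb[6]  = 0x01;          /* SECTOR COUNT: one 512-byte block */
        cdb[14] = 0x92;          /* ATA command: DOWNLOAD MICROCODE */

        struct sg_io_hdr io;
        memset(&io, 0, sizeof io);
        io.interface_id    = 'S';
        io.cmd_len         = sizeof cdb;
        io.cmdp            = cdb;
        io.dxfer_direction = SG_DXFER_TO_DEV;
        io.dxferp          = fw_chunk;
        io.dxfer_len       = sizeof fw_chunk;
        io.sbp             = sense;
        io.mx_sb_len       = sizeof sense;
        io.timeout         = 60000;                   /* milliseconds */

        if (ioctl(fd, SG_IO, &io) < 0)
            perror("SG_IO");

        close(fd);
        return 0;
    }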
There are no "dead COM ports" involved in this.
The reason why hard drive vendors sometimes release firmware update tools only for Windows is probably simply that most of their customers use Windows, and it is cheaper for them to support only that one platform.

How does the operating system manage processes

How does the operating system manage program permissions? If you write a low-level program that directly controls the processor without using any system calls, how does the operating system impose any limits on it?
Edit
It seems that my question was not very clear; I apologize, I cannot speak English well and I am using a translator. Anyway, what I wonder is how an operating system manages the permissions of programs (for example the root user, etc.). If a program is written at a really low level without making system calls, can it have full access to the processor? If so, it could do everything it wants, and as a result the various users/permissions enforced by the operating system would not matter much. However, from the first answer I received I read that you cannot make useful programs that work without system calls, so a program cannot interact directly with the hardware (I mean the way the BIOS interacts with the hardware, for example)?
Depends on the OS. Something like MS-DOS that is barely an OS and doesn't stop a program from taking over the whole machine essentially lets programs run with kernel privilege.
On any OS with memory protection that tries to keep separate processes from stepping on each other, the kernel doesn't allow user-space processes to talk directly to I/O hardware.
A privileged user-space process might be allowed to memory-map video RAM and/or I/O registers of a device into its own address space, and basically act like a device driver. (e.g. an X server under Linux.)
1) It is impossible to have a useful program that does not make any system calls.
2) Instructions that control the operation of the processor must be executed in kernel mode (see the sketch after this list).
3) The only way to get into kernel mode is through an exception (including system calls). By controlling how exceptions are handled, the operating system prevents malicious access.
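A small hedged example of point 2, assuming x86-64 Linux and GCC/Clang inline assembly: a user-space program that tries a privileged instruction is stopped by the CPU and kernel, not by convention. Executing CLI (disable interrupts) at user privilege raises a general protection fault, which the kernel delivers to the process as a fatal signal.

    #include <stdio.h>

    int main(void)
    {
        printf("about to execute a privileged instruction...\n");
        __asm__ volatile ("cli");   /* legal only in kernel mode; the process is killed here */
        printf("this line is never reached\n");
        return 0;
    }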
If a program is written to really low-level without making system calls, then can it have full access to the processor?
On a modern system this is impossible. A system call is going to be made in the background whether you like it or not.

Is the Operating System a process?

I am just now learning about OSes and I stumbled upon this question from my class' lecture notes. In our class, we define a process as a program in execution and I know that an OS is itself a program. So by this definition, an OS is a process.
At the same time processes can be switched in or out via a context switch, which is something that the OS manages and handles. But what would handle the OS itself when it isn't running?
Also if it is a process, does the OS have a process control block associated with it?
There was an older question on this site that I looked at, but I felt as if the answers weren't clear enough to really outline WHY the OS is/isn't a process so I thought I'd ask again here.
First of all, an OS is multiple parts. The core piece is the kernel, which is not a process. It is a framework for running processes. In practice, a process is more than just a "program in execution". On a system with an MMU, a process is usually run in its own virtual address space. The kernel however, is usually mapped into all processes. It's always there.
Other ancillary parts of the OS exist to make it usable. The OS may have processes that it runs as part of its management. For example, Linux has many kernel threads that are independently scheduled tasks. But these are often not crucial to the OS's operation.
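As a hedged illustration of those Linux kernel threads (assuming a Linux system with /proc mounted; treating an empty /proc/<pid>/cmdline as "kernel thread" is a common heuristic, not a guarantee), the sketch below lists them, showing that these OS-run tasks are visible as processes even though the kernel itself is not one.

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *proc = opendir("/proc");
        struct dirent *de;

        while (proc && (de = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;                        /* not a PID directory */

            char path[64], comm[64] = "";
            snprintf(path, sizeof path, "/proc/%s/cmdline", de->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            int empty = (fgetc(f) == EOF);       /* empty cmdline => kernel thread */
            fclose(f);

            if (empty) {
                snprintf(path, sizeof path, "/proc/%s/comm", de->d_name);
                f = fopen(path, "r");
                if (f && fgets(comm, sizeof comm, f))
                    printf("kernel thread: pid %s  %s", de->d_name, comm);
                if (f)
                    fclose(f);
            }
        }
        if (proc)
            closedir(proc);
        return 0;
    }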
Short answer: No.
Here's as good a definition of "Operating System" as any:
https://en.wikipedia.org/wiki/Operating_system
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. The operating system is a component of the system software in a computer system. Application programs usually require an operating system to function.
Even "system-level processes" (like "init" on Linux, or "svchost.exe" on Windows) rely on the "operating system" ... but are not themselves the operating system.
Agreeing with some of the comments above/below.
The OS is not a process. However, there are a few design variants that give the opposite illusion.
For example, if you are running FreeRTOS, there is no such thing as a separate OS address space and process address space; everything runs as a single program, and the FreeRTOS framework provides APIs that allow synchronization of different tasks.
An operating system is just a set of APIs (system calls) and utilities that help achieve multiprocessing, resource sharing, etc. For example, schedule() is a core OS function that handles the multiprocessing capabilities of the OS.
In that sense, the OS is not a process, although it attaches to every process that runs on the CPU; otherwise, how would a process make use of the OS's APIs?
It is more like soul for the body (hardware), if you will.
It is just not one process but a set of (kernel) processes required to run user processes in the system. PID 0 being the parent of all processes providing scheduler/swapping functionality to the rest of the kernel/user processes, but it is not the only process. These kernel processes (with the help of kernel drivers) provide accessor functionality (through system calls) to the user processes.
It depends upon what you are calling the "operating system".
It depends upon what operating system you are talking about.
That said, and at the risk of gross oversimplification, most of what one calls "the operating system" is generally executed from user processes while in kernel mode. Entry into kernel mode occurs through an interrupt, trap or fault.
A context switch usually happens in one of two ways. Either a process faults into kernel mode to do something (like write to the disk), realizes while in kernel mode that it would have to wait, and yields by switching the context to another process; or a timer causes an interrupt that forces the process into kernel mode, where it determines which process should execute next and switches the process context.
Some operating systems do have kernel processes of their own that perform such functions, but that is increasingly rare.
Most operating system have components that have their own processes.

Cortex-A9 boot and memory

I am a newbie starting out in microcontroller programming. The chip of interest here is the Cortex-A9. From my reading, at reset or power-up there has to be code at address 0x00000000. My questions, though they may sound trivial, will help me put some concepts in perspective.
Does the memory address 0x00000000 reside in ROM?
What happens right after the code is read from that address?
Should there be some sort of boot-loader present & if so at what address should this be in & Should it also be residing in ROM?
Finally, at what point does the kernel kick in & where does the kernel code reside?
ARM sells cores, not chips; what resides at that address depends on the chip vendor that has bought the ARM core and put it in their chip. Implementations vary from vendor to vendor and chip to chip.
Traditionally an ARM will boot from address zero; more correctly, the reset exception vector is at address zero. Unlike other processor families, the traditional ARM model is NOT a list of addresses for exception entry points; instead the ARM EXECUTES the instruction at that address, which means you need to put either a relative branch or a load-PC instruction there. The newer Cortex-M series, which is Thumb/Thumb2 only (those cores cannot execute 32-bit ARM instructions), uses the list-of-addresses scheme that is traditional outside ARM; also, on Cortex-M address zero is not an exception vector but the value to load into the stack pointer, the second entry is reset, and so on. The Cortex-M exception list is different too: that family has something like 128 individual interrupts, where the traditional ARM has two, fast (FIQ) and normal (IRQ). There is a recent Cortex-M (or perhaps Thumb2) question about running Linux on a Thumb2-only ARM. I think the Cortex-M implementations are all microcontroller-class chips with only tens of kilobytes of on-chip memory, so basically they don't fall into the category you are asking about - and you are asking about the Cortex-A9 anyway.
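To illustrate the Cortex-M style just described (a list of addresses at address zero, entry 0 being the initial stack pointer and entry 1 the reset handler), here is a minimal hedged sketch; the section name, symbols, and linker-script details are illustrative, not from any particular vendor's SDK. A traditional ARM/Cortex-A part instead executes the instruction found at the reset vector, so its table is written as branch instructions in assembly.

    #include <stdint.h>

    extern uint32_t _stack_top;            /* assumed to be provided by a linker script */
    void Reset_Handler(void);              /* entry point after reset */

    /* Placed at address 0 by the linker; entry 0 = initial MSP, entry 1 = reset. */
    __attribute__((section(".vectors"), used))
    const void *vector_table[] = {
        &_stack_top,
        (const void *)Reset_Handler,
        /* further entries: NMI, HardFault, ..., external IRQs */
    };

    void Reset_Handler(void)
    {
        /* a real startup would copy .data, zero .bss, then call main() */
        for (;;) { }
    }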
A number of cores, maybe all of them, have a boot option where the boot address can be 0x00000000 or an alternate address such as 0xFFFF0000. Using that would be very confusing for ARM users, but it provides the ability, for example, to have a ROM at one address and RAM at another, allowing you to boot from ROM on power-up and then switch the exception table into RAM for runtime operation. You probably have a chip with a core that can do this, but it is up to the chip vendor whether to use these edge-of-the-core features or to hardwire them to some setting and not provide you that flexibility.
You need to look at the datasheet/docs for the chip in question. Find out the name of the ARM core; you mentioned the Cortex-A9. Ideally you want to know the revision as well (an r0p0 kind of thing), then go to ARM's website and find the TRM, the Technical Reference Manual, for that core. You will also want a copy of the ARM ARM, the ARM Architecture Reference Manual. The (traditional) ARM exception vectors are described in the ARM ARM, along with quite a lot more. You also need the chip vendor's documentation, and you need to look into their boot scheme. Some will point address zero at the boot ROM on power-up, and then the bootloader has to do something, like flip a bit in a register, so the memory controller switches address 0 to RAM. Some might have address 0 always configured as RAM and some other address, say 0x80000000, always configured as ROM, with the chip copying some items from ROM to RAM for you before boot; or the chip may simply have the power-up setting for the reset vector be a branch to ROM, and it is then up to the bootloader to patch up the vector table. As many different schemes as you can think of, it is likely someone has tried them, so you have to study the chip vendor's documentation or sample code to understand it. Basically the answer to your ROM question is: it depends, and you have to check with the chip vendor.
The ARM TRM for the core should describe the strap options on the core, if any (like being able to boot from an alternate address); whether and how those strap options are connected is up to the vendor. The ARM ARM is not really going to get into that the way the TRM does. A vendor worth buying from, though, will have some documentation and/or code of their own that shows what their ROM-based boot strategy is.
For a system destined to be a Linux system you are going to have a bootloader, some non-Linux code (very much like the BIOS on your desktop/laptop) that brings up the system and eventually launches Linux. Linux is going to need a fair amount of memory (relative to microcontroller and other well-known ARM implementations); that RAM may end up being SRAM or DRAM, and the bootloader may have to initialize the memory interface before it can start Linux. There are popular bootloaders like RedBoot and U-Boot; both are significant overkill, but they provide features for developers and users like being able to re-flash Linux, etc.
ARM Linux has ATAGs (ARM TAGs). You can use both the traditional Linux command line and ATAGs to tell Linux boot information, like at what address to find the root file system. ATAGs are structures in memory whose address is passed in a register (r2, with r1 holding the machine type and r0 set to zero) when you branch from the bootloader to Linux. The general concept is that the chip powers up and boots from ROM or RAM, the bootloader prepares RAM so that it is ready to use, Linux might need to be copied from ROM to RAM, and the root file system, if separate, might be copied somewhere else in RAM. ATAGs are prepared to tell the kernel where to decompress itself if need be, as well as where to find the command line and/or things like the root file system; some registers are prepared as parameters passed to Linux; and lastly the bootloader branches to the address containing the entry point of the Linux kernel.
You have to have boot code available at the address where the hardware starts executing.
This is usually accomplished by having the hardware map some sort of flash or boot ROM to the boot address and start running from there.
Note that in microcontrollers the code that starts running at boot has a pretty tough life - no hardware is initialized yet, and by no hardware I mean that even the DDR controllers that control access to RAM are not working yet... so your code needs to run without RAM.
After the initial boot code sets up enough of the hardware (e.g. configures the RAM chips, sets up TLBs, programs MACs, etc.), the bootloader runs.
In some systems, the initial boot code is just the first part of the boot loader. In some systems, a dedicated boot code sets things up and then reads the boot loader from flash and runs it.
The job of the boot loader is to bring the image of the kernel/OS into RAM, usually from flash or the network (but it can also come over shared memory with another board, PCI buses and the like, although that is rarer). Once the boot loader has the image of the kernel/OS binary in RAM, it might optionally uncompress it, and it then hands over control by calling the start address of the kernel/OS image.
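A very rough sketch of that hand-off on an ARM Linux system, with placeholder addresses, sizes and machine ID (the register convention r0 = 0, r1 = machine type, r2 = ATAGs/DTB pointer is the documented ARM Linux boot interface; everything else here is illustrative):

    #include <stdint.h>
    #include <string.h>

    #define FLASH_KERNEL  ((const void *)0x08000000)   /* placeholder: kernel image in flash */
    #define RAM_KERNEL    ((void *)0x80008000)         /* placeholder: load address in RAM */
    #define KERNEL_SIZE   (4u * 1024u * 1024u)         /* placeholder image size */
    #define MACHINE_ID    1234u                        /* placeholder machine type number */
    #define ATAGS_ADDR    0x80000100u                  /* placeholder ATAGs address */

    typedef void (*kernel_entry_t)(uint32_t zero, uint32_t machine_type, uint32_t atags);

    void boot_kernel(void)
    {
        /* copy the kernel image from flash into RAM */
        memcpy(RAM_KERNEL, FLASH_KERNEL, KERNEL_SIZE);

        /* jump to its entry point: r0 = 0, r1 = machine type, r2 = ATAGs/DTB address */
        kernel_entry_t entry = (kernel_entry_t)RAM_KERNEL;
        entry(0, MACHINE_ID, ATAGS_ADDR);
    }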
Sometimes, the kernel/OS image is actually a small decompressor plus a blob of compressed kernel.
At any rate, the end result is that the kernel/OS is available in RAM and the boot loader, optionally through the piggyback decompressor, has passed control to it.
Then the kernel/OS starts running and the OS is up.