Determining SSC (Spread Spectrum Clocking) on Linux

Is there a Linux generic shell command to determine whether SSC is on or off on the PCIe system?

According to the PCI Express Base Specification Rev. 3.0, SSC must be turned off if RX and TX are operated from different Refclk sources. So there should be a flag to monitor and control SSC.
However, I have searched the complete (860-page) specification and can't find any configuration register defined by the PCIe standard to switch off SSC. So I assume it is vendor-dependent, if it is accessible by software at all.
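For reference, the most I can do today is dump a device's configuration space and hunt for vendor-specific registers documented in the device's datasheet. A minimal sketch in C, assuming a device at 0000:00:1c.0 (pick your own address from lspci -D); note that non-root users can typically read only the first 64 bytes:

```c
/* Minimal sketch: dumping a PCIe function's configuration space via
 * sysfs to inspect it for vendor-specific (e.g. SSC-related) registers.
 * The device address 0000:00:1c.0 is just an example. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:00:1c.0/config";
    unsigned char buf[256];
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }

    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    for (size_t i = 0; i < n; i++) {
        if (i % 16 == 0) printf("\n%02zx:", i);
        printf(" %02x", buf[i]);
    }
    putchar('\n');
    return 0;
}
```

This is essentially what lspci -xxx and setpci show; the standard capability list can be walked the same way, but an SSC control bit, if one exists, would sit in vendor-specific space.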


How do you access the secret area of any USB device?

We know interfaces based on the "vtable" principle. Once you have a pointer to an object, you can narrow-cast it to an interface, and the new object is the same object but limited to the interface. I always thought hardware firmware was somewhat similar. For example, for block devices (HDDs or SSDs), this interface is something like read, write, status, and so on. So a driver is a user of such a device interface.
As it turns out, any storage device has firmware, and a special area of its storage marked internal is where the firmware is saved. Manufacturers release programs that allow them to "flash" their specific devices, e.g. by writing a new program to its internal space, hidden from the OS.
My question is: on a software level, how do they perform these read-write operations to the "hidden" area of a drive? How are dead "COM ports" related?
If HDDs work across all OSes, why is firmware upgrade software only released for Windows? In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
We know interfaces based on the "vtable" principle. Once you have a pointer to an object, you can narrow-cast it to an interface, and the new object is the same object but limited to the interface. I always thought hardware firmware was somewhat similar. For example, for block devices (HDDs or SSDs), this interface is something like read, write, status, and so on. So a driver is a user of such a device interface.
No, not really. Object-oriented programming is unrelated to personal computer hardware, and the impression that virtual calls map onto device drivers is misguided. They're completely unrelated.
As it turns out, any storage device has firmware, and a special area of its storage marked internal is where the firmware is saved. Manufacturers release programs that allow them to "flash" their specific devices, e.g. by writing a new program to its internal space, hidden from the OS.
This is not true. Not all storage devices have firmware, and whatever firmware they have (if any) is not necessarily stored in rewritable flash storage. ROM chips exist, for example, which are not rewritable.
My question is: on a software level, how do they perform these read-write operations to the "hidden" area of a drive? How are dead "COM ports" related?
If you're referring to firmware updates of modern (post-2004) SATA and NVMe storage devices, then those devices' firmware can be updated using SATA and NVMe's built-in commands.
This is documented in places like the t13.org ATA/ATAPI Command Set - 4 (ACS-4) specification.
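For NVMe, for instance, the built-in mechanism is the Firmware Image Download admin command (opcode 0x11), followed by a Firmware Commit (opcode 0x10) to activate the new image. Here is a minimal sketch of issuing a single download chunk through the Linux NVMe ioctl interface; the device path and chunk size are illustrative, the buffer is left empty, and looping over the whole image plus the commit step are omitted:

```c
/* Sketch: one NVMe Firmware Image Download admin command (opcode 0x11)
 * via the Linux NVMe ioctl interface. A real updater loops over the
 * whole image and finishes with Firmware Commit (opcode 0x10). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDWR);    /* controller char device */
    if (fd < 0) { perror("open"); return 1; }

    size_t chunk = 4096;                    /* one 4 KiB piece of the image */
    void *buf = calloc(1, chunk);           /* would hold firmware bytes */

    struct nvme_admin_cmd cmd = {0};
    cmd.opcode   = 0x11;                    /* Firmware Image Download */
    cmd.addr     = (unsigned long long)(uintptr_t)buf;
    cmd.data_len = chunk;
    cmd.cdw10    = chunk / 4 - 1;           /* NUMD: dword count, zero-based */
    cmd.cdw11    = 0;                       /* OFST: offset in dwords */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0)
        perror("NVME_IOCTL_ADMIN_CMD");

    free(buf);
    close(fd);
    return 0;
}
```

In practice the nvme-cli tool (nvme fw-download, nvme fw-commit) wraps exactly this interface.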
If HDDs work across all OSes, why do firmware upgrade software is only released for Windows? In open source world of linux, what do i need to read to understand "debugging firmware" better?
why is firmware upgrade software only released for Windows?
Because Windows is the predominant operating system used by owners of that kind of hardware. While the firmware can be updated using raw SATA/NVMe commands, you still need a host operating system to run the program that issues those commands. Suppose it costs $100k to build a firmware updater for an SSD on Windows and another $100k for Linux ($200k for both), but 90% of all Linux users also run Windows. Why spend $200k for 100% coverage when you can spend $100k for 90% coverage, spend the other $100k buying a Ferrari or Tesla Model X P100D for yourself, and blame the users for not booting from a Windows USB stick to upgrade their firmware? (Side note: I chose the latter, and yes, I really do love my Tesla Model X.)
You cannot have a program that just magically runs on any computer platform (Windows, BSD, Linux, macOS, QNX, etc.) and updates peripheral device firmware: it always needs to be a program that can be executed by a host OS. (You can argue that UEFI/EFI is a platform-agnostic approach, but in reality UEFI/EFI is still its own platform.)
In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
200mg of Adderall and a pirated copy of IDA Pro.
...or 500mg of Dexedrine and NSA Ghidra.
It depends on the exact type of block device and how it is interfaced to the PC. A very common interface is SATA, which can be used directly with a SATA controller in a home PC, or it can be reached through a USB-SATA bridge.
If we take SATA as an example, there exists a special command in the ATA command set known as DOWNLOAD MICROCODE (command code 0x92), which exists solely for the purpose of transferring new firmware to the drive controller.
The firmware is typically not stored in a "hidden area of the drive" itself, as you indicate; it is typically stored in flash memory or similar on the drive controller PCB, or within the drive controller IC.
There are no "dead COM ports" involved in this.
The reason why hard drive vendors sometimes release firmware update tools only for Windows is probably simply that most of their customers use Windows, and it is cheaper for them to support that one platform.

Virtualization architecture on mainframe (z/Architecture)

I have studied with interest the hardware virtualization extensions added by Intel and AMD to the x86 architecture (known as VMX and SVM, respectively). While this is still a relatively recent addition to x86 CPUs, my understanding is that the mainframe architecture has made extensive use of virtualization since the 1970s-80s, for instance in the form of the venerable z/VM operating system. Even nested virtualization has been used.
My question is: is there public documentation of the hardware facilities provided by the z/Architecture that the z/VM operating system uses to implement this virtualization? I.e., the control registers and data structures that the hardware implements to allow the hypervisor to simulate the guest state and trap the necessary instructions? Another thing I am curious about is whether the z/Architecture supports second-level address translation (which was added later to VMX and SVM).
Just to get it out of the way: System/370 and all its descendants support virtualization as-is (they satisfy the classical virtualization requirements). In that sense, no special hardware support has ever been needed, as opposed to the Intel architecture.
The performance improvements for VM guests on System/370, XA, ESA, etc., all the way through z/Architecture, have traditionally been implemented using the DIAG (diagnose) instruction as well as microcode (now millicode) assists. In modern terms, it is more of a paravirtualization approach. The facilities are documented; you can start here, for instance.
Update - after reading extensive comments, a few notes and clarifications.
S/370 and its descendants never needed specialized hardware virtualization support to correctly run guest operating systems - not because virtualization was part of the initial design and requirements (it wasn't), but because the architecture was properly designed to support a secure multiuser environment. Popek and Goldberg's virtualization requirements are actually very weak - in essence, that only privileged instructions can affect system configuration. This requirement was met even by S/370's predecessor, System/360, well before the first virtualized systems appeared.
Performance improvements of VM guests proceeded along two lines.
First, the paravirtualization approach - essentially developing a well-architected API for guest-hypervisor communication. It has been used not only for performance but for a wide variety of other services, such as inter-VM communication. The API is documented in the manual referred to above.
Second, microcode extensions (VM microcode assists) that performed some performance-sensitive hypervisor logic at the microcode level - essentially the hardware level. That is not paravirtualization; it is hardware virtualization support proper. But in early 370 machines this support was not architected, meaning it was model-dependent and subject to change. With 370/XA, IBM introduced a proper architectural way to support high-performance virtualization: the Start Interpretive Execution (SIE) instruction. This instruction is not documented in Principles of Operation, but rather in a separate publication, IBM System/370 XA Interpretive Execution. (That document is referenced multiple times in Principles of Operation. The link refers to the first version of the document; you can download version 2 here. I am not sure whether the publication was ever updated - probably this is the latest version.) Additionally, the I/O subsystem provided VM assists too.
I failed to mention the SIE instruction and the manual that documents it in my original answer, which is a crucial part of the story. I am grateful to the author of the question and the extensive comments that prodded me to check my memory and realize that I had skipped an important bit of technical background. This presentation provides an excellent overview of z/VM core facilities, covering additional aspects including memory management, I/O, networking, etc.
The SIE instruction is how virtualization software accesses the z/Architecture Interpretive Execution Facility (IEF). The exact details of the interface have not been published since the early 1990s.
This is a hardware-based capability. IEF provides two levels of virtualization. The first level is used by firmware (via the SIE instruction) to create partitions. In each partition you can run an operating system. One of those operating systems is z/VM. It uses the SIE instruction (running within the context of the first level SIE instruction) to run virtual machines. You can run any z/Architecture operating system in a virtual machine, including z/VM itself.
The SIE instruction takes as input the description of a virtual server (partition or virtual machine). The hardware then runs the virtual server's instruction stream, stopping only when it needs help from whatever issued the SIE instruction, whether it be the partition hypervisor or the z/VM hypervisor.
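To make that control flow concrete, here is a purely schematic C sketch of the dispatch loop a hypervisor builds around SIE. Every name and field below is a hypothetical illustration; the real state description layout is defined in IBM's interpretive-execution publication, not reproduced here:

```c
/* Schematic sketch only: all names and field layouts are hypothetical
 * stand-ins for the real (IBM-documented) state description. */
struct state_desc {
    unsigned long guest_psw;       /* hypothetical: guest program status */
    unsigned long guest_gprs[16];  /* hypothetical: guest registers */
    unsigned int  icode;           /* hypothetical: interception code */
};

/* Stand-in for the SIE instruction itself: the hardware runs the guest
 * described by *sd until something requires hypervisor help. */
static void sie(struct state_desc *sd) { (void)sd; }

static void handle_interception(struct state_desc *sd)
{
    (void)sd; /* emulate the privileged op, perform I/O, reflect fault, ... */
}

/* The hypervisor's dispatch loop: issue SIE, handle the interception,
 * resume the guest. z/VM runs this inside the partition-level SIE. */
void run_guest(struct state_desc *sd)
{
    for (;;) {
        sie(sd);
        handle_interception(sd);
    }
}
```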

Embedded Board Support Package

As I understand it, a BSP (Board Support Package) contains a bootloader, kernel, and device drivers that help an OS work on the hardware. But I'm confused, because an OS also contains a kernel. So what is the difference between the kernel in the OS and the kernel in the BSP?
What a BSP comprises depends on context; generically, it is code or libraries to support a specific board design. That may be provided as generic code from the board supplier for use in a bare-metal system or for integrating with an OS, or it may be specific to a particular OS, or it may even include an OS. In any case, it provides board-specific support for higher-level software.
A kernel is board-agnostic (though often processor-architecture specific) and makes no direct access to hardware that is not intrinsic to the processor architecture on which it runs. Typically an OS or application will require a Hardware Abstraction Layer (HAL); the HAL may well be built using the BSP, or the BSP may in fact be the HAL. A vendor may even package a HAL and OS together and refer to that as a BSP.
The term means what it means to whoever is using it - context is everything. For example, in VxWorks, Wind River use the term BSP to refer to the layer that supports the execution of a VxWorks-based application on a specific hardware design. A board vendor, on the other hand, may provide a complete Linux distribution ported to the board and refer to that as a BSP.
However a particular vendor or developer chooses to support a board, and to whatever extent, the result is a board support package, regardless of how much or how little it may contain.
The BSP definition is broad. It is a supporting software package for a specific board. A BSP for a tiny microcontroller probably just contains hardware drivers for its peripherals. On the other hand, for an embedded CPU it may contain hardware drivers, a bootloader, an OS kernel, and more.
So the kernel in a BSP (board support package) is just a specific version of an OS kernel that has been ported to your board.
I'm probably just saying the same things already said.
You have a chip and/or board product you want to sell to other (software) developers. A reference design (board) with the chip(s) in question is used. The BSP is a vague term for the software provided to you, as a software developer, to ideally make your life easier in using that product (chip and/or board) and adding your software to it or developing for it. So if it is a platform capable of running Linux, an RTOS, or another operating system, and the vendor (providing the BSP) believes that users want an operating system - and a specific operating system - then instead of you having to port the OS to that target, they do it for you. For something like Linux that is open source, either you are told which Linux sources to download and the patches made by the BSP are then added, and/or the BSP contains the complete sources for the whole thing, already patched. Drivers, applications as deemed necessary by the vendor, etc. Multiple operating systems may be supported if the vendor feels that is needed in order to attract customers to buy that board/chip product.
The whole package of software that you get from them to make that chip/board into your own product is the BSP.
A VxWorks kernel that you can run on a board contains the VxWorks core kernel and "other components" which may change from one environment to another.
The core kernel contains essential programs such as the scheduler, memory manager, basic file systems, security features, etc.
These "other components" which are part of BSP may be optional or may vary from system to system, and helps the core kernel features.
In simple words, the image displays the definition of a BSP. Please correct me if I'm wrong.
I would say that, for a well-structured code base, the application layer should be abstracted from the lower layers by the HAL. This allows the app layer to be portable if we want to migrate the system to a new board. If you see board/CPU-specific logic in your app layer, you know you have broken portability.
The bodies of the HAL functions should contain the board-specific code; this is where the BSP layer code comes into play. When we want to port the system to a new board, code changes should happen in the HAL function bodies, while the HAL function declarations should not change, so the app layer remains the same.
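As a concrete sketch of that layering (all names and the register address here are hypothetical, purely to illustrate the idea):

```c
/* ---- hal.h: the declaration the application layer codes against ---- */
void hal_led_set(int on);

/* ---- hal_board_a.c: body supplied by board A's BSP ---- */
#define BOARD_A_LED_REG ((volatile unsigned int *)0x40020014) /* hypothetical */
void hal_led_set(int on)
{
    /* board A: LED is active-high on bit 5 of a memory-mapped GPIO register */
    if (on) *BOARD_A_LED_REG |=  (1u << 5);
    else    *BOARD_A_LED_REG &= ~(1u << 5);
}

/* ---- application layer: identical on every board ---- */
void app_blink_once(void)
{
    hal_led_set(1);
    /* ... delay ... */
    hal_led_set(0);
}
```

Porting to board B means writing a new hal_board_b.c with the same hal_led_set() signature; app_blink_once() is untouched.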

Any wireless chipsets with open specifications?

I'm wondering if there are any "mainstream" wireless chipsets/adapters for PCs that have open specifications, to a level that would permit one to implement a custom driver (i.e. specifications of registers, mode of operation, etc.)? It's OK if the chipset requires the upload of binary blobs (for which the source isn't available) to the chip/card itself, as long as the host <-> adapter interface is public. I'm looking for it mainly out of interest, to see what this interface looks like, but I might also be interested in doing some coding myself. Thanks!
You have OpenWrt, which is a fully capable open-source router operating system; many TP-Link products are supported by OpenWrt.
You may also be interested in https://www.zigbee.org/, which is more oriented toward Internet of Things and M2M wireless sensor networks.
You probably want to check the Atheros WiFi chipsets and their open-source drivers, for example ath5k and ath9k. These drivers are included in the mainline Linux kernel. The hardware is widely used in academia, at least, and adopted by many off-the-shelf NICs.

Cortex-A9 boot and memory

I am a newbie starting out in microcontroller programming. The chip of interest here is the Cortex-A9. From my reading, at reset or power-up there has to be code at 0x00000000. My questions, though they may sound too trivial, will help me put some concepts in perspective.
Does the memory address 0x00000000 reside in ROM?
What happens right after the code is read from that address?
Should there be some sort of bootloader present? If so, at what address should it be, and should it also reside in ROM?
Finally, at what point does the kernel kick in, and where does the kernel code reside?
ARM sells cores not chips, what resides at that address depends on the chip vendor that has bought the ARM core and put it in their chip. Implementations vary from vendor to vendor, chip to chip.
Traditionally an ARM boots from address zero; more correctly, the reset exception vector is at address zero. Unlike other processor families, the traditional ARM model is NOT a list of addresses for exception entry points; instead the ARM EXECUTES the instruction at that address, which means you need to use either a relative branch or a load-pc instruction. The newer Cortex-M series, which is Thumb/Thumb2 only (those cores cannot execute ARM 32-bit instructions), uses the traditional non-ARM-style list of addresses (sketched below). Also, on Cortex-M, address zero is not an exception vector: it is the value to load into the stack pointer; the second entry is reset, and so on. The Cortex-M exception list is different as well: that family has something like 128 individual interrupts, where the traditional ARM has two, fast and normal. There is a recent Cortex-M-based question, perhaps phrased as a Thumb2 question, about running Linux on a Thumb2 ARM. I think the Cortex-M implementations are all microcontroller-class chips with only on-chip memory in the tens of kilobytes; basically these don't fall into the category you are asking about, and you are asking about the Cortex-A9 anyway.
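Here is what the Cortex-M "list of addresses" model looks like expressed in C: entry 0 is the initial stack pointer, entry 1 the reset handler, unlike classic ARM where address 0 holds an instruction. Symbol names and the ".vectors" section are illustrative; the linker script is assumed to supply _stack_top and place the table at the boot address:

```c
/* Sketch of a Cortex-M style vector table in C. */
extern unsigned long _stack_top;           /* provided by the linker script */

void default_handler(void) { for (;;) ; }  /* trap unexpected exceptions */
void reset_handler(void)   { /* ... init, call main() ... */ for (;;) ; }

__attribute__((section(".vectors")))
const void *vector_table[] = {
    &_stack_top,      /* entry 0: initial SP, loaded by hardware at reset */
    reset_handler,    /* entry 1: reset vector */
    default_handler,  /* NMI */
    default_handler,  /* HardFault */
    /* ... further exceptions and the device interrupts ... */
};
```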
A number of cores, or maybe all of them, have a boot option where the boot address can be 0x00000000 or something like 0xFFFF0000 as an alternate address. Using the latter would be very confusing for ARM users, but it provides the ability, for example, to have a ROM at one address and RAM at another, allowing you to boot on power-up from ROM and then switch the exception table into RAM for runtime operation. You probably have a chip with a core that can do this, but it is up to the chip vendor whether to use these edge-of-the-core features or to hardwire them to some setting and not provide you that flexibility.
You need to look at the datasheet/docs for the chip in question. Find out the name of the ARM core - you mentioned Cortex-A9. Ideally you want to know the revision as well (an r0p0 kind of thing), then go to ARM's website and find the TRM, the Technical Reference Manual, for that core. You will also want a copy of the ARM ARM, the ARM Architectural Reference Manual. The (traditional) ARM exception vectors are described in the ARM ARM, along with quite a ton more information. You also need the chip vendor's documentation; look into their boot scheme. Some will point address zero at the boot PROM on power-up; then the bootloader will need to do something - flip a bit in a register - and the memory controller will switch address 0 to RAM. Some might have address 0 always configured as RAM and some other address always configured as ROM, say 0x80000000 for example, and the chip will copy some items from ROM to RAM for you before boot; or the chip may simply have the power-up setting for the reset vector be a branch to ROM, and then it is up to the bootloader to patch up the vector table. As many different schemes as you can think of, someone has likely tried, so you have to study the chip vendor's documentation or sample code to understand. Basically the answer to your ROM question is: it depends, and you have to check with the chip vendor.
The ARM TRM for the core should describe the strap options on the core, if any (like being able to boot from an alternate address); whether those strap options are actually wired up is determined by the vendor. The ARM ARM is not really going to get into that the way the TRM does. A vendor worth buying from, though, will have some of their own documentation and/or code that shows what their ROM-based boot strategy is.
For a system destined to be a Linux system, you are going to have a bootloader: some non-Linux code (very much like the BIOS on your desktop/laptop) that brings up the system and eventually launches Linux. Linux is going to need a fair amount of memory (relative to microcontroller and other well-known ARM implementations); that RAM may end up being SRAM or DRAM, and the bootloader may have to initialize the memory interface before it can start Linux. There are popular bootloaders like RedBoot and U-Boot. Both are significant overkill, but provide features for developers and users, like being able to re-flash Linux, etc.
ARM Linux has ATAGs (ARM TAGs). You can use both the traditional Linux command line to give Linux boot information, like where to find the root file system, and ATAGs. ATAGs are structures in memory; in the ARM/Linux boot convention, r2 holds their address when you branch from the bootloader to Linux (r0 is zero and r1 holds the machine type). The general concept is: the chip powers up and boots from ROM or RAM; the bootloader prepares RAM so that it is ready to use; Linux might need to be copied from ROM to RAM; and the root file system, if separate, might be copied somewhere else in RAM. ATAGs are prepared to tell Linux where to decompress itself if need be, as well as where to find the command line and things like the root file system; some registers are prepared as parameters passed to Linux; and lastly the bootloader branches to the entry point of the Linux kernel. A sketch of building a minimal ATAG list follows.
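The tag IDs below match the documented ARM/Linux boot interface; the tag-list address (0x100) and the memory layout (128 MB at 0) are illustrative only:

```c
/* Hedged sketch: a bootloader building a minimal ATAG list for Linux.
 * Each tag is a (size-in-words, tag-id) header followed by its payload;
 * the list ends with ATAG_NONE (size 0). */
#include <stdint.h>

#define ATAG_CORE 0x54410001
#define ATAG_MEM  0x54410002
#define ATAG_NONE 0x00000000

static uint32_t *tag = (uint32_t *)0x100;  /* conventional ATAG address */

static void add_tag(uint32_t id, const uint32_t *data, uint32_t words)
{
    tag[0] = 2 + words;                    /* size in words, incl. header */
    tag[1] = id;
    for (uint32_t i = 0; i < words; i++)
        tag[2 + i] = data[i];
    tag += 2 + words;
}

void setup_atags(void)
{
    uint32_t mem[2] = { 128u << 20, 0 };   /* size, start: illustrative */
    add_tag(ATAG_CORE, (uint32_t[]){0, 0, 0}, 3); /* flags, pagesize, rootdev */
    add_tag(ATAG_MEM, mem, 2);
    tag[0] = 0;                            /* ATAG_NONE terminator */
    tag[1] = ATAG_NONE;
}
```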
You have to have boot code available at the address where the hardware starts executing.
This is usually accomplished by having the hardware map some sort of flash or boot ROM to the boot address and start running from there.
Note that in microcontrollers the code that starts running at boot has a pretty tough life - no hardware is initialized yet, and by no hardware I mean that even the DDR controllers that control access to RAM are not working yet... so your code needs to run without RAM.
After the initial boot code sets up enough of the hardware (e.g. configures the RAM chips, sets up TLBs, programs MACs, etc.), the bootloader runs.
In some systems, the initial boot code is just the first part of the bootloader. In others, dedicated boot code sets things up and then reads the bootloader from flash and runs it.
The job of the bootloader is to bring the image of the kernel/OS into RAM, usually from flash or the network (but it can also be shared memory with another board, PCI buses, and the like, although that is rarer). Once the bootloader has the image of the kernel/OS binary in RAM, it might optionally uncompress it, and then hand control over to (call) the start address of the kernel/OS image.
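On ARM, that final handoff follows the boot convention mentioned in the previous answer: r0 = 0, r1 = machine type number, r2 = address of the tag list (or device tree blob). Under AAPCS those are the first three arguments, so a sketch in C looks like this; the load address and machine type are illustrative:

```c
/* Sketch of the final jump from bootloader to kernel on ARM/Linux. */
typedef void (*kernel_entry_t)(unsigned long zero,
                               unsigned long mach_type,
                               unsigned long tags);

void boot_kernel(void)
{
    kernel_entry_t entry = (kernel_entry_t)0x8000; /* typical load offset */
    entry(0,            /* r0: must be 0 */
          0xFFFFFFFF,   /* r1: machine type (board-specific; illustrative) */
          0x100);       /* r2: ATAG list / DTB address */
}
```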
Sometimes, the kernel/OS image is actually a small decompressor plus a blob of compressed kernel.
At any rate, the end result is that the kernel/OS is available in RAM and the bootloader, optionally through the piggyback decompressor, has passed control to it.
Then the kernel/OS starts running and the OS is up.