I have a display controller, the Sitronix ST7789, that I want to drive over an 18-bit parallel interface (DB0-DB17). I searched the emWin documentation for parallel-interface support for the Sitronix ST7789.
I found the GUIDRV_FLEXCOLOR_F66709 selection for it. Each controller selection supports different operation modes, but this one lists only M16C0B8, M16C1B8, M16C0B16 and M16C1B16. Does that mean it supports only an 8-bit or 16-bit bus and not an 18-bit parallel interface?
Do I understand correctly?
Is it possible to create one generic file, say uart.c, so that I can call the UART functions on different microcontrollers, for example AVR and ARM? Or do I have to write the UART functions from scratch for every microcontroller?
You can create a Hardware Abstraction Layer (HAL) that acts as a common interface for all hardware of the same kind. A correctly designed HAL allows portable application-layer code, which is the sole purpose of having one.
The HAL should take the form of an API: a header that acts as a template for how the drivers are designed, the simplest form of "polymorphism". The application programmer calls the HAL, and the MCU-specific functions in the driver get called underneath.
In the case of a UART you might have an init function taking baud rate, stop bits, parity, handshaking etc. as parameters, and then a read function and a write function with some error handling; overrun and framing errors are universal, for example. It is then up to the specific driver for MCU "x" to implement itself according to your specified HAL.
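For illustration, a minimal sketch of what such a HAL header could look like; the names (uart_init, uart_cfg_t, etc.) are hypothetical and not taken from any particular vendor library:

```c
/* uart_hal.h - hypothetical portable UART interface (illustrative sketch) */
#include <stdint.h>
#include <stddef.h>

typedef enum { UART_PARITY_NONE, UART_PARITY_EVEN, UART_PARITY_ODD } uart_parity_t;

typedef enum {
    UART_OK = 0,
    UART_ERR_OVERRUN,   /* universal error conditions live in the HAL */
    UART_ERR_FRAMING
} uart_status_t;

typedef struct {
    uint32_t      baudrate;
    uint8_t       stop_bits;
    uart_parity_t parity;
    int           flow_control;
} uart_cfg_t;

/* Each MCU-specific driver (uart_avr.c, uart_stm32.c, ...) implements these. */
uart_status_t uart_init (const uart_cfg_t *cfg);
uart_status_t uart_write(const uint8_t *data, size_t len);
uart_status_t uart_read (uint8_t *data, size_t len, size_t *received);
```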
Generally, one should not create abstraction layers needlessly though. It is quite demanding work and easy to get wrong. If you don't need portability or code re-use between projects, there's no obvious need for a HAL and you could just as well call the driver directly from the application code.
The hardware implementations and register interfaces certainly differ across vendors. ARM does not make MCUs; it licenses the core. The peripherals are not defined by the core, so even among ARM devices the peripheral implementations differ between vendors.
What you can do is define a common device-layer interface and implement that interface for each device family you need; then you can reuse the application-layer code across architectures.
The alternative is to stick with a common family. AVR for example covers a wide range of devices and the peripherals generally are common across the range. Similarly STM32 (ARM Cortex-M) devices share common peripherals across the range.
So the answer is no, but you can deal with that by abstracting the hardware (or the vendor supplied abstraction or device layer).
For a UART you might use stdio as your abstraction layer and access the device via fprintf, fputc, fread, fwrite etc., though typically you'd build that on a lower-level layer too.
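As a hedged illustration, this is roughly how stdout can be retargeted onto a lower-level UART call using a newlib-style _write hook; uart_putc here is a hypothetical low-level driver function, and other C libraries use different retarget hooks (for example overriding fputc):

```c
/* retarget.c - route stdout/stderr through the UART (newlib-style _write hook).
 * uart_putc() is a hypothetical low-level driver call; swap in whatever your
 * C library expects as its retarget hook. */
extern void uart_putc(char c);

int _write(int fd, const char *buf, int len)
{
    (void)fd;                      /* send stdout and stderr to the same UART */
    for (int i = 0; i < len; i++)
        uart_putc(buf[i]);
    return len;                    /* report everything as written */
}
```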
Would using multiple GPUs in Vulkan be something like creating many command queues and then dividing command buffers between them?
There are two problems:
In OpenGL, we use GLEW to get functions. With more than one GPU, each GPU has its own driver. How would we handle this in Vulkan?
Would part of the frame be generated on one GPU and the rest on other GPUs, for example using the Intel GPU to render the UI and an AMD or Nvidia GPU to render the game screen on laptops? Or would one frame be generated on one GPU and the next frame on another GPU?
Updated with more recent information, now that Vulkan exists.
There are two kinds of multi-GPU setups: the kind where multiple GPUs are part of some SLI-style setup, and the kind where they are not. Vulkan supports both, and supports them both in the same computer. That is, you can have two NVIDIA GPUs that are SLI-ed together, and the Intel embedded GPU, and Vulkan can interact with them all.
Non-SLI setups
In Vulkan, there is something called the Vulkan instance. This represents the base Vulkan system itself; individual devices register themselves to the instance. The Vulkan instance system is, essentially, implemented by the Vulkan SDK.
Physical devices represent a specific piece of hardware that implements the interface to a GPU. Each piece of hardware that exposes a Vulkan implementation does so by registering its physical device with the instance system. You can query which physical devices are available, as well as some basic properties about them (their names, how much memory they offer, etc).
You then create logical devices for the physical devices you use. Logical devices are how you actually do stuff in Vulkan. They have queues, command buffers, etc. And each logical device is separate... mostly.
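A minimal sketch of that flow, assuming the standard Vulkan C API (error handling, layer setup and feature selection omitted for brevity):

```c
/* Enumerate the physical devices registered with the instance and print them. */
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void)
{
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);          /* query the count */
    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);       /* fill the handles */

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("GPU %u: %s\n", i, props.deviceName);
        /* A VkDevice (logical device) would then be created per physical
         * device you actually intend to use, via vkCreateDevice(). */
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```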
Now, you can bypass the whole "instance" thing and load devices manually. But you really shouldn't. At least, not unless you're at the end of development. Vulkan layers are far too critical for day-to-day debugging to just opt out of that.
There are mechanisms, core in Vulkan 1.1, that allow individual devices to communicate some information to other devices. In 1.1, only certain kinds of information can be shared across physical devices (namely, fences and semaphores, and even then, only on Linux through sync files). While these APIs could provide a mechanism for sharing data between two physical devices, at present, the restriction on most forms of data sharing is that both physical devices must have matching UUIDs (and therefore are the same physical device).
SLI setups
Dealing with SLI is covered by two Vulkan 1.0 extensions: KHR_device_group and KHR_device_group_creation. The former is for dealing with "device groups" in Vulkan, while the latter is an instance extension for creating device-grouped devices. Both of these are core in Vulkan 1.1.
The idea with this is that the SLI aggregation is exposed as a single VkDevice, which is created from a number of VkPhysicalDevices. Each internal physical device is a "sub-device". You can query sub-devices and some properties about them. Memory allocations are specific to a particular sub-device. Resource objects (buffers and images) are not specific to a sub-device, but they can be associated with different memory allocations on the different sub-devices.
Command buffers and queues are not specific to sub-devices; when you execute a CB on a queue, the driver figures out which sub-device(s) it will run on, and fills in the descriptors that use the images/buffers with the proper GPU pointers for the memory that those images/buffers have been bound to on those particular sub-devices.
Alternate-frame rendering is simply presenting images generated from one sub-device on one frame, then presenting images from a different sub-device on another frame. Split-frame rendering is handled by a more complex mechanism, where you define the memory for the destination image of a rendering command to be split among devices. You can even do this with presentable images.
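A hedged sketch of the enumeration and creation side of that, assuming Vulkan 1.1 core device groups (resource binding, presentation and queue-family queries are omitted; a real application would query queue families rather than assume index 0):

```c
/* Enumerate device groups and create one logical device spanning a group. */
#include <vulkan/vulkan.h>
#include <stdio.h>

VkDevice create_grouped_device(VkInstance instance)
{
    uint32_t group_count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &group_count, NULL);
    if (group_count == 0)
        return VK_NULL_HANDLE;

    VkPhysicalDeviceGroupProperties group = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES
    };
    group_count = 1;                                   /* just look at the first group */
    vkEnumeratePhysicalDeviceGroups(instance, &group_count, &group);
    printf("group has %u sub-device(s)\n", group.physicalDeviceCount);

    /* Chain the group into device creation so the VkDevice spans all sub-devices. */
    VkDeviceGroupDeviceCreateInfo group_info = {
        .sType               = VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO,
        .physicalDeviceCount = group.physicalDeviceCount,
        .pPhysicalDevices    = group.physicalDevices,
    };
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queue_info = {
        .sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = 0,                         /* assumed for the sketch */
        .queueCount       = 1,
        .pQueuePriorities = &priority,
    };
    VkDeviceCreateInfo device_info = {
        .sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext                = &group_info,
        .queueCreateInfoCount = 1,
        .pQueueCreateInfos    = &queue_info,
    };
    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(group.physicalDevices[0], &device_info, NULL, &device);
    return device;
}
```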
In Vulkan you need to enumerate the devices and select the one you want to work with. Nothing stops you from working with two different ones separately. Each Vulkan call takes at least one parameter as context (an instance, device, or queue handle); the loader layer then forwards the call to the correct driver. Alternatively you can load the functions for each device separately (via vkGetDeviceProcAddr) to avoid the loader's trampoline.
A generated frame will need to be forwarded to the card that is connected to the screen for display. So it's more likely that one GPU is responsible for graphics and the others are used for physics.
Only a single device can be connected to a specific surface at a time, so that device needs to receive the rendered frame and copy it into the presentable image that gets pushed to the screen.
Device groups are the way to go; look at the Vulkan specification for documentation. Vulkan handles the dispatch to the other GPUs (when they are connected by SLI/CrossFire). All you need to do is tell Vulkan how the dispatch is done (for example, dispatch one frame on one GPU and the next on another). If you need to do compute work, you will need to address each GPU individually. For reference, see: https://www.ea.com/seed/news/khronos-munich-2018-halcyon-vulkan
I am trying to understand how an API and a driver relate to each other. As far as I know, both can be part of the HAL. In the case of communication between an application and a graphics card, can an API get the job done on its own, or do we have to rely on both? Can an API communicate directly with the hardware, or do we always need a driver in between that translates the commands of the API?
TL;DR
Think of an API as a specification that describes what to do, while a driver is an implementation that describes how to do it.
Details
As a contrived example, imagine we have three different audio cards that we want to play nicely with multiple operating systems. We can define an API for the card manufacturers that says, "Each card must support four methods: mute(), playsound(sound), volumeup() and volumedown()". By defining the API, we get a common interface that allows the operating system designers to support the audio devices without worrying about the hardware details. They know that if they want to mute the sound card, they can call mute(), or if they want to turn the volume up, they call volumeup().
It is then up to the device manufacturers to implement a driver that actually performs those actions. The driver will vary between the three different audio cards because they are different at the hardware level, but the API is consistent so the next higher abstraction level (the OS) doesn't need to know how to deal with the hardware.
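As a hedged sketch of that idea in C (the names audio_api_t, fancycard_driver and so on are invented for illustration), the "API" is the struct of function pointers and each "driver" is one concrete set of implementations behind it:

```c
#include <stdio.h>

/* The "API": what every audio card must support. */
typedef struct {
    void (*mute)(void);
    void (*playsound)(const char *sound);
    void (*volumeup)(void);
    void (*volumedown)(void);
} audio_api_t;

/* One vendor's "driver": how this particular card does it (hardware pokes
 * replaced by prints for the sake of the example). */
static void fancycard_mute(void)              { puts("fancycard: write 0 to gain register"); }
static void fancycard_play(const char *sound) { printf("fancycard: DMA '%s' to codec\n", sound); }
static void fancycard_volup(void)             { puts("fancycard: bump gain register"); }
static void fancycard_voldown(void)           { puts("fancycard: lower gain register"); }

static const audio_api_t fancycard_driver = {
    .mute = fancycard_mute, .playsound = fancycard_play,
    .volumeup = fancycard_volup, .volumedown = fancycard_voldown,
};

/* The OS only ever talks to the API, never to the card directly. */
int main(void)
{
    const audio_api_t *card = &fancycard_driver;   /* selected at probe time */
    card->playsound("chime.wav");
    card->volumeup();
    card->mute();
    return 0;
}
```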
For a more concrete example, consider the Advanced Configuration and Power Interface (ACPI) specification. It defines a common interface for operating systems to manage power consumption and thermal characteristics of hardware devices. There are methods that a device driver or firmware must implement in order to be "ACPI compliant". This allows Windows operating systems and Linux variants to both perform the same actions on hardware devices without needing to implement their own drivers for the hardware.
Note: Windows performs ACPI actions through acpi.sys, which they call an "ACPI Driver". Don't let the terminology confuse you; even though they call it a driver, it is really a window into the ACPI interface. Linux uses the acpi kernel module to do the same thing, and Linux doesn't call it a driver. Perhaps ACPI wasn't the best example, but I don't have anything better at the moment.
In my project I need to work with device drivers, but I have a hard time understanding the naming, scope and function of the abstraction layers. As I see it, the main layer is the HAL, the "hardware abstraction layer".
What are the clients of the HAL? Whom does the HAL interface with?
Are you talking about a specific HAL, in Windows or Linux or something else, or in general?
When accessing registers from a device driver (code that drives a device; it doesn't have to be a kernel thing or have an operating system at all), I generally recommend creating functions like PUT32(address,data) and data=GET32(address), or writel and readl, whatever you fancy. The point is to avoid creating a pointer with the address and using that pointer directly. There is a performance gain to the pointer-type solution, and a performance hit to the abstract PUT32(). Why I use it anyway is that if the code is clean enough, that driver can be used as part of a kernel driver for this OS, a kernel driver for that OS, run standalone embedded, connect to an HDL simulation of the logic, run on a processor on the same chip, or run on a host computer that reaches into the chip via PCI or JTAG, etc. One chunk of code reused from the birth of the logic (HDL sim) to the end-user kernel driver.
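A minimal sketch of what those access functions might look like in the bare-metal case (PUT32/GET32 as named in the answer; the memory-mapped variant is only one possible implementation):

```c
#include <stdint.h>

/* Bare-metal implementation: the address really is a memory-mapped register.
 * Other builds could implement the same two functions over PCI, JTAG, or a
 * socket into an HDL simulator without touching the driver code above them. */
void PUT32(uint32_t address, uint32_t data)
{
    *(volatile uint32_t *)(uintptr_t)address = data;
}

uint32_t GET32(uint32_t address)
{
    return *(volatile uint32_t *)(uintptr_t)address;
}
```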
Perhaps more to your question, though: think about a UART. You want to send some bytes and receive some bytes, right? Create a uart_send() function and a uart_recv() function; everything above the abstraction layer uses these two functions. When you target this code to a specific platform, you implement those functions for the specific UART in that specific hardware. Later on you can replace that UART with something else; as long as the new UART can send and receive, the code above the abstraction layer does not have to change. Even though you have created an abstraction layer with the functions above, I personally would still use PUT8() and GET8() functions in the implementation of uart_send() and uart_recv() for the specific UART, and implement PUT8() and GET8() in a separate file.
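For illustration, a hedged sketch of uart_send() built on such accessors; the register offsets and status bit are invented for an imaginary UART, not taken from any real part:

```c
#include <stdint.h>

/* Implemented elsewhere, per target (memory-mapped, JTAG, simulator, ...). */
extern void    PUT8(uint32_t address, uint8_t data);
extern uint8_t GET8(uint32_t address);

/* Hypothetical register map for an imaginary UART. */
#define UART_BASE      0x40001000u
#define UART_DATA      (UART_BASE + 0x00u)
#define UART_STATUS    (UART_BASE + 0x04u)
#define UART_TX_READY  0x01u

/* Everything above the abstraction layer calls only uart_send()/uart_recv(). */
void uart_send(const uint8_t *buf, uint32_t len)
{
    for (uint32_t i = 0; i < len; i++) {
        while ((GET8(UART_STATUS) & UART_TX_READY) == 0)
            ;                               /* busy-wait until the TX FIFO has room */
        PUT8(UART_DATA, buf[i]);
    }
}
```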
How many layers of abstraction sit between the driver and the actual hardware, and how and where they sit, are often specific to the task and the hardware.
In computers, a hardware abstraction layer (HAL) is a layer of programming that allows a computer operating system to interact with a hardware device at a general or abstract level rather than at a detailed hardware level. Windows 2000 is one of several operating systems that include a hardware abstraction layer. The hardware abstraction layer can be called from either the operating system's kernel or from a device driver. In either case, the calling program can interact with the device in a more general way than it would otherwise.
Where can one find the DirectX HAL specification?
Taking this diagram to be correct
Then all GPU vendors have to write their device drivers such that they speak to the HAL.
Where is the HAL specified? How does MSFT adjust or update the HAL? When does the HAL change? If the HAL changes, does the world break or the sky fall?
As far as I know, there is no "DirectX HAL"; HAL is just HAL. HAL is a kernel-mode abstraction layer that WDDM uses. In turn, the DirectX API talks to the WDDM driver (written by nVidia, ATi, etc.) and instantiates a HAL device.
For software to talk to HAL, it needs to run in privileged mode (i.e. be a driver). If you're curious, this is where HAL is specified: http://msdn.microsoft.com/en-us/library/aa490448.aspx
HAL (usually) changes when new versions of Windows are released. And yes, the sky does sometimes fall. Remember when no XP drivers worked on Vista? This was caused either by WDDM changing, or by HAL changing. Or, most likely, both.
Video drivers on Vista+ are written against WDDM. See MSDN. I'm not absolutely sure whether I understand you correctly, but I think the WDDM specification/guidelines/API is what you're looking for.
GPU vendors write to the device driver model (WDDM in Vista and Windows 7). They must conform to this model to be used by DirectX.
The WDDM documentation is available in the Windows Driver Kit.
Perhaps he is looking for this:
DirectDraw DDI, Direct3D DDI
This is the interface for writing a device driver that:
1. Accepts D3D requests (e.g. to draw a triangle) through that interface.
2. Then directly accesses the video card's hardware registers to apply that request
(fills PCI-E memory-mapped memory with triangle parameters and rendering states, and sends a command to the GPU to start drawing the triangle).
(Example calling sequence:
1. The user calls the Direct3D DrawPrimitive function =>
2. Direct3D calls the D3dDrawPrimitives2 function in the DDI driver =>
3. The Direct3D DDI driver writes the request parameters into graphics card memory and writes a drawing command to the command register =>
4. The GPU works and draws a triangle into its specified destination memory area (e.g. in GDDR5), which is dynamically allocated and marked as the target 2D surface.)
You can practice implementing this kind of driver for simpler, older GPUs with open hardware specifications, such as the SiS 6326 or the 3dfx Voodoo 1, 2, 3, 4 and 5.
This would be very good practice in college.