How does GPUDirect enforce isolation on a shared device?

I have been reading about GPUDirect here: https://developer.nvidia.com/gpudirect
In their example there is a network card attached to the PCIe bus together with two GPUs and a CPU.
How is isolation enforced between all clients trying to access the network device? Are they all accessing the same PCI BAR of the device?
Is the network device using some kind of SR-IOV mechanism to enforce isolation?

I believe you're talking about RDMA, which was supported with the second release of GPUDirect. It's where the NIC can send/receive data to and from machines external to the host and uses peer-to-peer DMA transfers to interact with the GPU's memory.
NVIDIA exports a variety of functions to kernel space that allow programmers to look up where physical pages reside on the GPU itself and map them manually. NVIDIA also requires the use of physical addressing within kernel space, which greatly simplifies how other (third-party) drivers interact with GPUs -- through the host machine's physical address space.
"RDMA for GPUDirect currently relies upon all physical addresses being the same from the PCI devices' point of view."
-nVidia, Design Considerations for rDMA and GPUDirect
As a result of NVIDIA requiring a physical addressing scheme, all IOMMUs must be disabled in the system, as these would alter the way each card views the memory space(s) of the other cards. Currently, NVIDIA only supports physical addressing for RDMA+GPUDirect in kernel space. Virtual addressing is possible via their UVA, which is made available to user space.
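For reference, kernel-space pinning of a GPU buffer looks roughly like the sketch below. It is based on the nvidia_p2p_get_pages()/nvidia_p2p_free_page_table() interface from NVIDIA's GPUDirect RDMA documentation, but prototypes and struct fields vary between driver releases, so treat it as an illustration rather than a drop-in:

    /* Illustrative only: pin a GPU buffer and read back its bus addresses,
     * roughly following NVIDIA's GPUDirect RDMA kernel interface (nv-p2p.h).
     * Exact prototypes/fields differ between driver versions. */
    #include <linux/kernel.h>
    #include "nv-p2p.h"                  /* shipped with the NVIDIA driver */

    #define GPU_PAGE_SIZE 0x10000        /* GPU pages are 64 KB */

    static struct nvidia_p2p_page_table *gpu_pt;

    static void free_gpu_pt(void *data)
    {
        /* Called by the NVIDIA driver if the GPU buffer is torn down. */
        nvidia_p2p_free_page_table(gpu_pt);
        gpu_pt = NULL;
    }

    static int pin_gpu_buffer(u64 gpu_va, u64 len)   /* gpu_va: 64 KB aligned */
    {
        int ret, i;

        /* gpu_va comes from cuMemAlloc() in user space (handed in via ioctl). */
        ret = nvidia_p2p_get_pages(0, 0, gpu_va, len, &gpu_pt, free_gpu_pt, NULL);
        if (ret)
            return ret;

        for (i = 0; i < gpu_pt->entries; i++)
            pr_info("GPU page %d at bus address 0x%llx\n", i,
                    (unsigned long long)gpu_pt->pages[i]->physical_address);
        return 0;
    }

The bus addresses printed here are what a third-party device (the NIC in the example) would program into its DMA engine.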
How is isolation enforced between all clients trying to access the network device? Are they all accessing the same PCI BAR of the device?
Yes. In kernel space, each GPU's memory is being accessed by its physical address.
Is the network device using some kind of SR-IOV mechanism to enforce isolation?
The driver of the network card is what does all of the work in setting up descriptor lists and managing concurrent access to resources -- which would be the GPU's memory in this case. As I mentioned above, NVIDIA gives driver developers the ability to manage physical memory mappings on the GPU, allowing the third-party NIC driver to control what resource(s) are or are not available to remote machines.
From what I understand about NIC drivers, I believe this to be a very rough outline of what's going on under the hood, relating to RDMA and GPUDirect:
1. The network card receives an RDMA request (whether it be a read or a write).
2. The network card's driver receives an interrupt that data has arrived, or some polling mechanism detects that data has arrived.
3. The driver processes the request; any address translation is performed now, since all memory mappings for the GPUs are made available to kernel space. Additionally, the driver will more than likely have to configure the network card itself to prep for the transfer (e.g. set up specific registers, determine addresses, create descriptor lists, etc.). A rough sketch of this step follows the list.
4. The DMA transfer is initiated and the network card reads data directly from the GPU.
5. This data is then sent over the network to the remote machine.
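As a concrete but entirely hypothetical illustration of step 3, a NIC driver might fill a DMA descriptor with a GPU bus address taken from the page table pinned in the earlier sketch. The descriptor layout, field names and doorbell mechanism below are invented; every NIC defines its own:

    /* Hypothetical NIC DMA descriptor -- real layouts are device-specific. */
    #include <linux/types.h>

    struct nic_dma_desc {
        u64 src_bus_addr;   /* where the NIC reads from (GPU memory here) */
        u32 length;         /* bytes to transfer                          */
        u32 flags;          /* e.g. "source is peer PCI memory"           */
    };

    /* Translate an RDMA request offset into a GPU bus address and queue it.
     * gpu_pt is the page table pinned earlier; GPU pages are 64 KB, so the
     * offset splits into a page index and an in-page offset. */
    static void queue_rdma_read(struct nic_dma_desc *ring, int slot,
                                u64 offset, u32 len)
    {
        u64 page    = offset / GPU_PAGE_SIZE;
        u64 in_page = offset % GPU_PAGE_SIZE;

        ring[slot].src_bus_addr =
            gpu_pt->pages[page]->physical_address + in_page;
        ring[slot].length = len;
        ring[slot].flags  = 0;          /* device-specific bits would go here */
        /* ...then write a doorbell register to tell the NIC about the slot. */
    }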
All remote machines requesting data via RDMA will use that host machine's physical addressing scheme to manipulate memory. If, for example, two separate computers wish to read the same buffer from a third computer's GPU with RDMA+GPUDirect support, one would expect the incoming read requests' offsets to be the same. The same goes for writing; however, an additional problem is introduced if multiple DMA engines are set to manipulate data in overlapping regions. This concurrency issue should be handled by the third-party NIC driver.
On a very related note, another post of mine has a lot of information regarding NVIDIA's UVA (Unified Virtual Addressing) scheme and how memory manipulation from within kernel space itself is handled. A few of the sentences in this post were taken from it.
Short answer to your question: if by "isolated" you mean how each card preserves its own unique address space for RDMA+GPUDirect operations, this is accomplished by relying on the host machine's physical address space, which fundamentally separates the physical address space(s) requested by all devices on the PCI bus. By forcing the use of each host machine's physical addressing scheme, NVIDIA essentially isolates each GPU in that host machine.

Related

GPUDirect RDMA out-of-range pinned address with Quadro P620

I want to implement FPGA-GPU RDMA with an NVIDIA Quadro P620.
Also, in my custom driver I used the common PCIe BAR resources (BAR0, BAR1, BAR2) for FPGA registers and other chunk-controller handling, which is independent of the RDMA path.
PCIe management is OK, but direct memory access to the pinned GPU RAM is always wrong. Precisely, I always get 64 KB pinned pages starting from address 2955739136 (~2.7 GB) from the nvidia_p2p_get_pages() API without any errors, but the point is that the Quadro P620's RAM capacity is just 2 GB!
The virtual address obtained by cuMemAlloc() changes every time (which is correct), and I pass this address, together with the allocated size, to my driver via an ioctl syscall. Also, I linked my custom driver to the NVIDIA driver as the NVIDIA GPUDirect RDMA documentation says.
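Roughly, the user-space side of what I do looks like the following (simplified; the ioctl command, struct and device node name are placeholders, not my exact code):

    /* Simplified illustration: allocate GPU memory with the CUDA driver API
     * and pass the pointer + size to the custom driver over ioctl.
     * FPGA_IOC_PIN, struct pin_args and the device node are placeholders. */
    #include <cuda.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>

    struct pin_args { unsigned long long gpu_va; unsigned long long size; };
    #define FPGA_IOC_PIN _IOW('F', 1, struct pin_args)

    int main(void)
    {
        CUdevice dev;
        CUcontext ctx;
        CUdeviceptr dptr;
        size_t size = 64 * 1024;                   /* 64 KB, GPU-page aligned */

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);
        cuMemAlloc(&dptr, size);                   /* virtual address changes per run */

        int fd = open("/dev/fpga_rdma", O_RDWR);   /* placeholder node name */
        struct pin_args args = { .gpu_va = dptr, .size = size };
        ioctl(fd, FPGA_IOC_PIN, &args);            /* driver calls nvidia_p2p_get_pages() */
        return 0;
    }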
Well, everything sounds OK, but the physical addresses are out of range! Why? Is it a requirement for the Quadro GPU to have 4 GB or more of RAM?
I expect to find the right solution to get the correct physical addresses and then DMA the data with the FPGA as bus master.
Thanks
P.S. Before this I implemented FPGA direct memory access to system RAM over PCIe without any problems.

Direct data copy between devices

I am trying to explore the possibility of achieving global IO space across devices (GPUs, NIC, storage etc.). This might boil down to the question asked in this thread - Direct communication between two PCI devices.
I have been reading up on NVIDIA GPUDirect, where the memory region is pinned and its physical address is obtained with the help of the nvidia_p2p_* calls. I can't exactly understand how a GPU's physical address can be used to program the 3rd-party device's DMA controller for data transfers. I am confused by the fact that GPU memory is not visible in the way the CPU memory space is (this may be due to my poor knowledge of programming DMA controllers). Any pointers on this would be really helpful.
Also, many PCI devices expose memory regions in terms of PCI BARs (e.g. GPUs expose a memory region of 256 MB). Is there any way to know which physical addresses this BAR memory region maps to? Is there any overlap between the BAR memory regions and the memory the NVIDIA driver allocates to the CUDA runtime?
Thanks in advance.
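(For the BAR part of the question, a minimal sketch of one way to inspect this under Linux, assuming a kernel driver that has the card's struct pci_dev from its probe() callback; the pci_resource_* helpers report where each BAR sits in the host's physical address space:)

    /* Minimal sketch: print the physical base and size of each standard BAR
     * from a Linux kernel driver; pdev is the struct pci_dev from probe(). */
    #include <linux/pci.h>

    static void dump_bars(struct pci_dev *pdev)
    {
        int bar;

        for (bar = 0; bar < 6; bar++) {            /* 6 standard BARs */
            resource_size_t start = pci_resource_start(pdev, bar);
            resource_size_t len   = pci_resource_len(pdev, bar);

            if (len)
                dev_info(&pdev->dev, "BAR%d: 0x%llx, %llu bytes\n", bar,
                         (unsigned long long)start, (unsigned long long)len);
        }
    }

The same information is visible from user space in the "resource" file under /sys/bus/pci/devices/ for the device in question.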

Why is virtual memory needed in embedded systems? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Per my understanding, virtual memory is as follows:
Programs/applications/executables reside on a storage device. Storage device access is much slower than RAM. Hence, programs are copied from storage memory to main memory for execution. Since computers have limited main memory (RAM), when all of the RAM is being used (e.g., if there are many programs open simultaneously or if one very large program is in use), a computer with virtual memory enabled will swap data to the HDD and back to memory as needed, thus, in effect, increasing the total system memory.
As far as I know, most embedded devices do not have disk storage (like smartphones or in-car infotainment systems). Code is directly executed from flash memory. RAM is mainly used as a scratchpad area (local variables, return addresses, etc.).
So why do we need virtual memory in embedded systems? (e.g. WinCE and QNX support virtual memory)
Your understanding is completely wrong. You are confusing virtual memory with swapping or page files. There are systems that have virtual memory and no swap or page files and there are systems that swap without virtual memory.
Virtual memory just means that a process has a view of memory that is different from the physical mapping. Among other things, it allows processes to have their own virtual address space.
Storage device access is much slower than RAM. Hence, programs are copied from storage memory to main memory for execution. Since computers have limited main memory (RAM), when all of the RAM is being used (e.g., if there are many programs open simultaneously or if one very large program is in use), a computer with virtual memory enabled will swap data to the HDD and back to memory as needed, thus, in effect, increasing the total system memory.
That's swapping (or paging). It has nothing to do with virtual memory except that most modern operating systems implement swapping using virtual memory. Swapping actually existed before virtual memory.
I think you're probably incorrect about these devices running code directly from flash memory. The read speed of flash is pretty low and RAM is very cheap. My bet is that most of the systems you mention don't run code directly from flash and instead use virtual memory to fault code into RAM as needed.
Embedded systems: the term itself has a wide range of applications. You could call a small microcontroller with flash program space measured in kilobytes or less and RAM measured in either bits or bytes (not enough to be kilobytes) an embedded system. Likewise a TiVo running a full-blown operating system on a pretty much full-blown computer motherboard (replace TiVo with Xbox as another example) is an embedded system. So you need to be less vague about your question. Virtual memory has little to do with any of that; its applications cross those boundaries.
There are many answers above; David S has the best, of course: virtual memory simply means the memory address on one side of the virtual memory boundary is different from the physical address that is used on the other side of that boundary. Where, how, why, etc. there is a boundary varies.
A popular use for virtual memory, and I might argue a primary use case, is operating systems. One benefit is that, for example, all applications can be compiled for the same address space: all applications might be compiled such that, from the program's perspective, they all start at, say, address 0x8000, and as far as that program is concerned, when it runs and accesses memory it accesses stuff based on that address. A combination of the hardware and the operating system changes that virtual address the program is using into a physical address. If the operating system allows for multitasking, then each task might think it is in the same address space, but the physical addresses are different for each of those tasks. I won't elaborate further on why using an assumed, fixed address space is a benefit.
Another aspect operating systems use is memory management. Many MMUs will let you segment the memory however you like. If a user wants to allocate 100 megabytes of memory, the program may access that 100 MB in its virtual address space as if it were linear, and in that address space it is linear, but that 100 MB might be broken down into, say, 4 KB chunks that are scattered all about the physical address space; it is not always likely, but certainly technically possible, that no two chunks of that physical memory are next to any other chunk of that 100 MB. Your memory management doesn't necessarily have to try to keep large physical chunks of memory available for applications to allocate. Note that not all MMUs are exactly the same, and 4 KB is just an example.
A third major benefit of a virtual address space to an operating system is protection. If the application is bound to the virtual address space, it is often quite easy to prevent that application from touching the memory of any other application or the operating system. The application in this case would operate/execute at a protection level such that all accesses are considered virtual and have to go through a translation to physical; the tables that are used to define that virtual-to-physical mapping can contain protection flags. If the application addresses a memory address in its virtual space that it has no business accessing, the hardware can trap that and let the operating system take action as to how to handle it (virtualize some hardware, pop up an error and kill the app, pop up a warning and not kill the app but at the same time feed the app bogus data for its transaction, etc.).
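As a toy illustration of the virtual-to-physical translation and protection flags described above (a minimal sketch with invented structures; no real MMU or OS lays out its tables this way):

    /* Toy single-level page table: translate a virtual address to a physical
     * one and enforce a simple "writable" protection flag.  Page size, table
     * layout and fault handling are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                      /* 4 KB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NPAGES     16

    struct pte {
        uint32_t frame;    /* physical frame number      */
        int      present;  /* is the page mapped at all? */
        int      writable; /* protection flag            */
    };

    static struct pte table[NPAGES];

    /* Returns the physical address, or -1 on a "fault". */
    static int64_t translate(uint32_t vaddr, int is_write)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & (PAGE_SIZE - 1);

        if (vpn >= NPAGES || !table[vpn].present)
            return -1;                                  /* not mapped */
        if (is_write && !table[vpn].writable)
            return -1;                                  /* protection fault */
        return ((int64_t)table[vpn].frame << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        table[2] = (struct pte){ .frame = 7, .present = 1, .writable = 0 };
        printf("read  0x2004 -> %lld\n", (long long)translate(0x2004, 0)); /* 0x7004 */
        printf("write 0x2004 -> %lld\n", (long long)translate(0x2004, 1)); /* -1, fault */
        return 0;
    }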
There are lots of ways this can be used in an embedded system. First off, many embedded systems run operating systems, so all of the above applies: ease of compiling the program for the address space, relative ease of memory management, and protection of the other applications and the operating system, plus other benefits not mentioned (virtualization being one; being able to enable/disable instruction/data caching on a block-by-block basis is another).
The bottom line, though, is what David S pointed out: virtual memory simply means the virtual address is not necessarily equal to the physical address. It can be, but doesn't have to be; there is some boundary, some hardware, usually table-driven, that translates the virtual address into a physical address. There are lots of reasons why you would want to do this, and since some embedded systems are indistinguishable from non-embedded systems, any reason that applies to a non-embedded system can apply to an embedded system.
As much as folks may want you to believe that a system has a flat address space, it is often an illusion. In a microcontroller, for example, you might have multiple flash banks and one or more RAM banks. Each of these banks has a physical, generally zero-based address. Even if there is no MMU or anything else like that, there is a place somewhere between the address bus on the processor and the address bus on the flash or RAM that decodes the address on the processor and uses it to address into the specific memory bank. Often the lower bits match and the upper bits are responsible for the bank choice (this is often the case with an MMU as well), so in that sense the processor is living in a virtual address space. (This is not limited to microcontrollers; it is generally how processor address buses are treated.)
With microcontrollers, depending on a pin being pulled high or low or some other mechanism, you might have a chip feature that allows one flash bank or another to be used to boot the processor. You might tie an input pin high and the processor's built-in bootloader allows you to access and debug the system, for example to reprogram the application flash. Or perhaps tie that line low and boot the application flash instead of the vendor's debugger/boot flash. Some chips get even more complicated, letting you boot one flash and then have the program write a register somewhere, instantly changing the memory architecture and moving things around, for example allowing RAM to be used for the interrupt vector table so your application can be changed after boot, rather than a vector table in flash that is not as easy to change at will.
Now when you talk about virtual memory as far as swapping to and from a disk, that is a trick often employed by operating systems to give the illusion of having more RAM. I mentioned that above under the category of virtualization: virtual memory in the sense that it isn't really there; I have X bytes but will let the software think there are Y bytes (where Y is larger than X) available. The operating system, through the virtual tables used by the hardware, manages which memory chunks are tied to physical RAM and are allowed to complete as-is by the hardware, and which are marked as not available in some way, causing an exception to the operating system. Upon inspection, the operating system determines that this is a valid address for this application, but the data behind this address has been swapped to disk. The operating system then finds, through some algorithm, another chunk of RAM belonging to whomever (part of the algorithm), copies that chunk of RAM to disk, marks the table entry for that virtual-to-physical mapping as not valid, then copies the desired chunk from disk to RAM, marks that chunk as valid, and lets the hardware complete the memory cycle.
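A minimal sketch of that fault-and-swap sequence in C-like pseudocode; the helper functions and the page-table entry layout are invented placeholders for whatever eviction policy and backing store the OS actually uses:

    /* Toy page-fault handler for the swap scenario described above; every
     * helper function and the entry layout are invented for illustration. */
    struct swap_pte {
        unsigned frame;        /* physical frame number when present        */
        int      present;      /* page currently backed by RAM?             */
        unsigned swap_slot;    /* where the page lives on disk when not     */
    };

    extern struct swap_pte *pte_of(unsigned vpn);   /* look up a page's entry    */
    extern unsigned pick_victim(void);              /* eviction policy, e.g. LRU */
    extern void write_to_swap(unsigned frame, unsigned slot);
    extern void read_from_swap(unsigned slot, unsigned frame);

    void handle_page_fault(unsigned faulting_vpn)
    {
        struct swap_pte *wanted = pte_of(faulting_vpn);

        /* Pick someone else's resident page and push it out to free a frame. */
        unsigned victim_vpn     = pick_victim();
        struct swap_pte *victim = pte_of(victim_vpn);
        unsigned freed_frame    = victim->frame;

        write_to_swap(freed_frame, victim->swap_slot);
        victim->present = 0;

        /* Bring the wanted page in from disk and map it to the freed frame. */
        read_from_swap(wanted->swap_slot, freed_frame);
        wanted->frame   = freed_frame;
        wanted->present = 1;
        /* On return from the exception the hardware retries the access. */
    }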
This is not any different from, say, how VMware or other virtual machines work. You can execute instructions natively on the hardware using virtual memory until such time as you cause an exception. The virtual machine might think you have an xyz network interface and might have a driver that is accessing a register in that xyz network interface, but the reality is you have no xyz hardware and/or you don't want the virtual machine applications to access that hardware, so you virtualize it: you trap that register access, and using software that simulates the hardware you fake that access and let the program on the virtual machine continue. This is obviously not the only way to do virtual machines, but it is one way, if the hardware supports it, to let a virtual machine run very fast as a percentage of the time it is actually running instructions on the hardware. The slowest way to virtualize, of course, is to virtualize everything including the processor; every instruction in that case would be simulated. This is quite slow but has its own features (virtualizing an ARM system on an x86, or x86 on an ARM, xyz on an abc, fill in the blanks).
And if that is the type of virtual memory you are talking about in an embedded system: if the embedded system is for the most part indistinguishable from a non-embedded system (an Xbox or TiVo, for example), then for the same reasons you could allow such a thing. If you were on a microcontroller, the use cases there would generally mean that if you needed more memory you would buy a bigger microcontroller, add more memory to the system, or change the needs of the application so that it doesn't need as much memory. There may be exceptions, but it mostly depends on your application and requirements. A general-purpose, or general-purpose-like, system which allows applications or their data to be larger than the available RAM will require some sort of solution. The microcontroller in your keyless-entry key fob or in your TV remote control or clock radio or whatever normally would not have a need to allow "applications" to require more resources than are physically there.
The more important benefit of using virtual memory is that every process gets its own address space which is isolated from every other process's. That way virtual memory helps keep faults contained and improves security and stability. I should note that it is still possible for two processes to share a bit of memory, to facilitate communication (shared mem IPC).
Also, you can do other tricks like conserving memory by mapping shared parts into more than one process's address space (libc comes to mind for embedded use) while only having it once in physical memory. This also gives a speed boost, and you can enhance it further the way Linux cheapens fork/clone: by only copying the in-kernel descriptors and leaving the memory image alone until the first write access is done, using a similar copy-on-write idea.
As a last benefit, in modern systems, it's common to do file I/O via mapping the file into the process space (cf. mmap for example).
It's interesting to note that one can get some of the benefits of "virtual memory" without needing a full-fledged MMU. The hardware requirements can sometimes be amazingly light. The PIC 16C505 has a 5-bit address space and 40 bytes of RAM; addresses 0x10 to 0x1F can map to either of two groups of 16 bytes of RAM. When writing an application which needed to manage two different data streams, I arranged so that all the variables associated with one data stream would be in the first group of 16 "switchable" memory locations, and those associated with the other would be at the corresponding addresses in the second group. I could then use the same code to manage both data streams. Simply set the banking bit one way, call the routine, set it the other way, and call the routine again.
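A rough C-style sketch of that banking trick (the select_bank() routine and the state layout are placeholders; on a real PIC this is a bank-select bit and would likely be written in assembly):

    /* Toy illustration of bank switching: the same routine works on whichever
     * group of 16 "switchable" bytes is currently mapped at 0x10..0x1F.
     * select_bank() and the stream_state layout are placeholders. */
    struct stream_state {          /* lives in the banked window 0x10..0x1F */
        unsigned char head;
        unsigned char tail;
        unsigned char buf[14];
    };

    #define BANKED_WINDOW ((volatile struct stream_state *)0x10)

    extern void select_bank(int bank);   /* sets the hardware banking bit */

    static void service_stream(void)
    {
        volatile struct stream_state *s = BANKED_WINDOW;
        /* ...process whichever stream's variables are currently mapped... */
        (void)s;
    }

    void service_both_streams(void)
    {
        select_bank(0);      /* map stream 0's 16 bytes at 0x10 */
        service_stream();
        select_bank(1);      /* map stream 1's 16 bytes at 0x10 */
        service_stream();
    }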
One of the reasons virtual memory exists is so that your device can multitask. It can also act as an extension of your RAM, taking load off your physical RAM by swapping data back and forth to storage as needed.

Accessing memory space / registers on externally connected devices through software

This question is a bit vague, and I apologize for that, but a fairly vague answer will do :)
How do people typically access memory addresses of external devices (say, connected to a PC through USB, or even just, say, a multipurpose microcontroller)? I'm wondering how software is able to find the addresses to write to registers or EEPROM space.
For example if I want to write a value to register 0x1234, does software just send this information (the register and the value to be written) to some sort of driver that "talks" to the device and takes care of the value change through hardware?
Is implementation of this functionality mostly a hardware endeavor?
Thanks!
Let's use as an example a fairly common USB peripheral controller that is based on an 8-bit 8051 microcontroller core. One side of it attaches to the USB host controller on a desktop computer. The other end goes to a USB device controller that presents itself as a FIFO endpoint to the host.
Some 8051 firmware will be required to initialize the device side. A class driver will be required on the host side. Once those are in place, the application developer will have a device name on the host side which may be opened for read/write. Sometimes a vendor will provide a library to perform device specific tasks and isolate the user from the raw device. Often a Windows DLL is available to hide the low level I/O and present device operations as function calls.
Additional 8051 firmware monitors the FIFO from device end and interprets messages sent from the host application or DLL then takes actions. These actions may be low level such as read/write from a memory location or register. They may be high level such as setting the PWM value of a programmable counter array.
So your hypothetical description of a write to register 0x1234 is not far from how it is often implemented.
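As a purely hypothetical sketch of the host-side half of such a scheme (the device path, message layout and opcode are invented; a vendor library or DLL would normally hide this):

    /* Hypothetical host-side helper: send a "write register" message to a
     * USB device exposed by its class driver as a readable/writable handle.
     * Device path and message format are made up for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #pragma pack(push, 1)
    struct dev_msg {
        uint8_t  opcode;     /* 0x01 = write register (invented)  */
        uint16_t reg;        /* register address, e.g. 0x1234     */
        uint8_t  value;      /* value to write                    */
    };
    #pragma pack(pop)

    int write_device_register(uint16_t reg, uint8_t value)
    {
        struct dev_msg msg = { .opcode = 0x01, .reg = reg, .value = value };

        FILE *dev = fopen("/dev/usb_widget0", "r+b");   /* placeholder name */
        if (!dev)
            return -1;

        /* Firmware on the 8051 side reads this from its FIFO endpoint,
         * decodes it, and performs the actual register write. */
        fwrite(&msg, sizeof msg, 1, dev);
        fclose(dev);
        return 0;
    }

    int main(void) { return write_device_register(0x1234, 0xAB); }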

How do I access my memory mapped I/O Device (FPGA) from a RTP in VxWorks?

When using VxWorks, we are trying to access a memory mapped I/O device from a Real-Time Process.
Since RTPs have memory protection, how can I access my I/O device from one?
There are two methods you can use to access your I/O mapped device from an RTP.
I/O Subsystem (preferred)
You essentially create a small device driver. This driver can be integrated into the I/O Subsystem of VxWorks. Once integrated, the driver is available to the RTP by simply using standard I/O operations: open, close, read, write, ioctl.
Note that "creating a device driver" doesn't have to be complicated. It could be as simple as just defining a wrapper for the ioctl function. See ioLib for more details.
Map Memory Directly (not recommended)
You can create a shared memory region via the sdOpen call. When creating the shared memory, you can specify what the physical address should be. Specify the address to be your device's I/O mapped region, and you can access the device directly.
The problem is that a shared memory region is a public object that is available to any space, and poking directly at hardware goes against the philosophy behind RTPs.