Unified Memory for CUDA Beginners

", launched the fundamentals of CUDA programming by showing how to put in writing a easy program that allocated two arrays of numbers in memory accessible to the GPU after which added them collectively on the GPU. To do this, I introduced you to Unified Memory, which makes it very easy to allocate and entry knowledge that may be utilized by code working on any processor within the system, CPU or GPU. I finished that post with a couple of simple "exercises", one in all which encouraged you to run on a recent Pascal-based GPU to see what occurs. I was hoping that readers would attempt it and touch upon the outcomes, and some of you probably did! I steered this for two reasons. First, because Pascal GPUs such as the NVIDIA Titan X and the NVIDIA Tesla P100 are the first GPUs to include the Web page Migration Engine, which is hardware support for Unified Memory page faulting and migration.



The second reason is that it provides a great opportunity to learn more about Unified Memory. Fast GPU, Fast Memory… Right! But let's see. First, I'll reprint the results of running on two NVIDIA Kepler GPUs (one in my laptop and one in a server). Now let's try running on a really fast Tesla P100 accelerator, based on the Pascal GP100 GPU. Hmmmm, that's under 6 GB/s: slower than running on my laptop's Kepler-based GeForce GPU. Don't be discouraged, though; we can fix this. To understand how, I'll need to tell you a bit more about Unified Memory.

What is Unified Memory?

Unified Memory is a single memory address space accessible from any processor in a system (see Figure 1). This hardware/software technology allows applications to allocate data that can be read or written from code running on either CPUs or GPUs. Allocating Unified Memory is as simple as replacing calls to malloc() or new with calls to cudaMallocManaged(), an allocation function that returns a pointer accessible from any processor (ptr in the following).
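As a minimal, self-contained sketch (the size N and the pointer names here are illustrative, not part of the API), replacing a host-only malloc() with cudaMallocManaged() looks like this:

    #include <cstdlib>

    int main(void)
    {
      int N = 1 << 20;

      // Host-only allocation: this memory is invisible to the GPU
      float *a = (float*)malloc(N * sizeof(float));
      free(a);

      // Unified Memory allocation: ptr can be dereferenced by code
      // running on any CPU or GPU in the system
      float *ptr;
      cudaMallocManaged(&ptr, N * sizeof(float));
      cudaFree(ptr);

      return 0;
    }

Build this with nvcc rather than a host compiler, so the CUDA runtime calls resolve.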



When code running on a CPU or GPU accesses data allocated this way (often called CUDA managed data), the CUDA system software and/or the hardware takes care of migrating memory pages to the memory of the accessing processor. The important point here is that the Pascal GPU architecture is the first with hardware support for virtual memory page faulting and migration, via its Page Migration Engine. Older GPUs based on the Kepler and Maxwell architectures also support a more limited form of Unified Memory.

What Happens on Kepler When I Call cudaMallocManaged()?

On systems with pre-Pascal GPUs like the Tesla K80, calling cudaMallocManaged() allocates size bytes of managed memory on the GPU device that is active when the call is made[1]. Internally, the driver also sets up page table entries for all pages covered by the allocation, so that the system knows that the pages are resident on that GPU. So, in our example, running on a Tesla K80 GPU (Kepler architecture), x and y are both initially fully resident in GPU memory.
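To make the walkthrough below concrete, here is a sketch of the kind of program the discussion refers to, reconstructed from the vector-addition example of the earlier post (the grid-stride loop and the 256-thread block size are typical choices, not a verbatim quote):

    #include <iostream>
    #include <math.h>

    // GPU kernel: add the elements of two arrays using a grid-stride loop
    __global__ void add(int n, float *x, float *y)
    {
      int index = blockIdx.x * blockDim.x + threadIdx.x;
      int stride = blockDim.x * gridDim.x;
      for (int i = index; i < n; i += stride)
        y[i] = x[i] + y[i];
    }

    int main(void)
    {
      int N = 1 << 20;  // 1M elements
      float *x, *y;

      // Allocate Unified Memory -- accessible from CPU or GPU
      cudaMallocManaged(&x, N * sizeof(float));
      cudaMallocManaged(&y, N * sizeof(float));

      // Initialization loop: the first touch of every page happens on the CPU
      for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
      }

      // Launch the add() kernel on the GPU
      int blockSize = 256;
      int numBlocks = (N + blockSize - 1) / blockSize;
      add<<<numBlocks, blockSize>>>(N, x, y);

      // Wait for the GPU to finish before accessing the data on the CPU
      cudaDeviceSynchronize();

      // All values should now be 3.0f
      float maxError = 0.0f;
      for (int i = 0; i < N; i++)
        maxError = fmax(maxError, fabs(y[i] - 3.0f));
      std::cout << "Max error: " << maxError << std::endl;

      cudaFree(x);
      cudaFree(y);
      return 0;
    }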



Then, in the initialization loop (the loop that writes x[i] and y[i] in the sketch above), the CPU steps through both arrays, initializing their elements to 1.0f and 2.0f, respectively. Since the pages are initially resident in device memory, a page fault occurs on the CPU for each array page to which it writes, and the GPU driver migrates the page from device memory to CPU memory. After the loop, all pages of the two arrays are resident in CPU memory. After initializing the data on the CPU, the program launches the add() kernel to add the elements of x to the elements of y. On pre-Pascal GPUs, upon launching a kernel, the CUDA runtime must migrate all pages previously migrated to host memory or to another GPU back to the device memory of the device running the kernel[2]. Since these older GPUs can't page fault, all data must be resident on the GPU just in case the kernel accesses it (even if it won't).
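Annotating the relevant lines of the sketch above makes the traffic on a pre-Pascal system explicit (the comments describe driver behavior; no additional API calls are involved):

    // The first write to each page faults on the CPU; the driver migrates
    // that page from device (GPU) memory to CPU memory on demand.
    for (int i = 0; i < N; i++) {
      x[i] = 1.0f;
      y[i] = 2.0f;
    }

    // On pre-Pascal GPUs the launch itself migrates ALL managed pages
    // back to device memory -- whether or not the kernel touches them --
    // because these GPUs cannot take page faults during kernel execution.
    add<<<numBlocks, blockSize>>>(N, x, y);
    cudaDeviceSynchronize();  // let the kernel finish before the CPU reads y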

