How the Landscape of Memory Is Evolving With CXL

As datasets grow from megabytes to terabytes to petabytes, the cost of moving data from block storage devices across interconnects into system memory, performing computation and then storing the large dataset back to persistent storage is rising in terms of time and energy (watts). Additionally, heterogeneous computing hardware increasingly needs access to the same datasets. For instance, a general-purpose CPU may be used for assembling and preprocessing a dataset and scheduling tasks, but a specialized compute engine (like a GPU) is much faster at training an AI model. A more efficient solution is needed, one that reduces data movement by keeping large datasets directly in processor-accessible memory. Several organizations have pushed the industry toward solutions to these problems by keeping datasets in large, byte-addressable, sharable memory. In the 1990s, the Scalable Coherent Interface (SCI) allowed multiple CPUs to access memory coherently within a system. The Heterogeneous System Architecture (HSA)[1] specification allowed memory sharing between devices of different types on the same bus.

In the decade starting in 2010, the Gen-Z standard delivered a memory-semantic bus protocol with high bandwidth, low latency and coherency. These efforts culminated in the widely adopted Compute Express Link (CXL™) standard in use today. Since the formation of the CXL Consortium, Micron has been and remains an active contributor. CXL opens the door to saving time and power. The new CXL 3.1 standard allows byte-addressable, load/store-accessible memory like DRAM to be shared between different hosts over a low-latency, high-bandwidth interface using industry-standard components. This sharing opens doors previously only possible through expensive, proprietary equipment. With shared memory systems, data can be loaded into shared memory once and then processed multiple times by multiple hosts and accelerators in a pipeline, without incurring the cost of copying data to local memory or the overhead and latency of block storage protocols. Furthermore, some network data transfers can be eliminated.
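
To make the load/store idea concrete, here is a minimal sketch of how one host might map and access a shared CXL memory region on Linux. It assumes the region is exposed as a DAX character device; the /dev/dax0.0 path and the region size are illustrative assumptions, since real systems may instead expose CXL memory as a memory-only NUMA node or through a fabric manager:

```c
/* Minimal sketch: map a shared CXL memory region exposed as a
 * Linux DAX device and access it with ordinary loads and stores.
 * Device path and size are assumptions for illustration. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (1UL << 30) /* 1 GiB shared region (assumed) */

int main(void)
{
    int fd = open("/dev/dax0.0", O_RDWR); /* hypothetical DAX node */
    if (fd < 0) { perror("open"); return 1; }

    /* Once mapped, access is plain load/store -- there is no
     * block-storage read/write path in the data plane. */
    uint8_t *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* One host writes the dataset in place... */
    memset(base, 0, 4096);
    snprintf((char *)base, 4096, "dataset header v1");

    /* ...and any other host mapping the same region reads it
     * without copying the payload anywhere. */
    printf("%s\n", (char *)base);

    munmap(base, REGION_SIZE);
    close(fd);
    return 0;
}
```

Because every participating host maps the same physical region, "transferring" the dataset between pipeline stages reduces to passing its offset.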

For example, data may be ingested and stored in shared memory over time by a host connected to a sensor array. Once resident in memory, a second host optimized for the task can clean and preprocess the data, followed by a third host processing the data. Meanwhile, the first host has been ingesting a second dataset. The only data that must be passed between the hosts is a message pointing to the data to indicate that it is ready for processing. The large dataset never has to move or be copied, saving bandwidth, energy and memory space. Another example of zero-copy data sharing is a producer-consumer data model where a single host is responsible for collecting data in memory, and multiple other hosts consume the data after it is written. As before, the producer simply needs to send a message pointing to the address of the data, signaling the other hosts that it is ready for consumption.
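
The hand-off itself can be as small as a descriptor living in the shared region. The sketch below is illustrative only: it assumes hardware-coherent shared memory between hosts (a CXL 3.x capability) and an invented descriptor layout. The producer publishes an offset and length with a release store; consumers acquire the flag and read the dataset in place:

```c
/* Illustrative producer-consumer hand-off over shared memory.
 * The descriptor layout is an assumption; real systems would layer
 * a message queue or fabric protocol on top. Offsets, not raw
 * pointers, are exchanged so each host can apply its own mapping
 * base address. */
#include <stdatomic.h>
#include <stdint.h>

struct dataset_desc {
    uint64_t offset;        /* where the dataset starts in the region */
    uint64_t length;        /* dataset size in bytes */
    _Atomic uint32_t ready; /* 0 = in progress, 1 = published */
};

/* Producer: fill the payload first, then publish. The release store
 * orders the data writes before the flag becomes visible. */
void publish(struct dataset_desc *d, uint64_t off, uint64_t len)
{
    d->offset = off;
    d->length = len;
    atomic_store_explicit(&d->ready, 1, memory_order_release);
}

/* Consumer: wait until published, then read the dataset in place --
 * the payload itself is never copied. */
void wait_ready(const struct dataset_desc *d)
{
    while (atomic_load_explicit(&d->ready, memory_order_acquire) == 0)
        ; /* spin; a real system would block or poll a message queue */
}
```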

Zero-copy data sharing can be further enhanced by CXL memory modules with built-in processing capabilities. For example, if a CXL memory module can perform a repetitive mathematical operation or data transformation on a data object entirely within the module, system bandwidth and power can be saved. These savings are achieved by commanding the memory module to execute the operation without the data ever leaving the module, using a capability known as near memory compute (NMC). Moreover, the low-latency CXL fabric can be leveraged to send messages with low overhead very quickly from one host to another, between hosts and memory modules, or between memory modules. These connections can be used to synchronize steps and share pointers between producers and consumers. Beyond NMC and communication benefits, advanced memory telemetry can be added to CXL modules to provide a new window into real-world application traffic within the shared devices[2] without burdening the host processors.
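
There is no standard NMC command set today, so the following is purely a hypothetical sketch of the shape such an interface could take: the host stages a small descriptor and rings a doorbell, and the operands never cross the CXL link. Every field, opcode and register here is an assumption for illustration:

```c
/* Hypothetical sketch of submitting a near-memory-compute (NMC)
 * operation. All names below are invented to show the idea: the
 * host describes the work; the data stays inside the module. */
#include <stdint.h>

enum nmc_op { NMC_OP_SUM = 1, NMC_OP_SCALE = 2, NMC_OP_TRANSFORM = 3 };

struct nmc_cmd {
    uint32_t opcode;     /* which transformation to run in-module */
    uint32_t flags;
    uint64_t src_offset; /* operand location inside the module */
    uint64_t dst_offset; /* result location inside the module */
    uint64_t length;     /* bytes to process */
};

/* Stage the command in the module's command region and ring a
 * doorbell; both addresses are illustrative assumptions. */
void nmc_submit(volatile struct nmc_cmd *queue_slot,
                volatile uint32_t *doorbell,
                const struct nmc_cmd *cmd)
{
    *queue_slot = *cmd; /* descriptor crosses the link... */
    *doorbell = 1;      /* ...but the data object never does */
}
```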

With the insights gained, operating systems and management software can optimize data placement (memory tiering) and tune other system parameters to meet operating goals, from performance to energy consumption. Additional memory-intensive, value-add functions such as transactions are also ideally suited to NMC. Micron is excited to combine large, scale-out CXL global shared memory and enhanced memory features into our memory lake concept.
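
As one hedged illustration of telemetry-driven tiering on today's software stack: if telemetry marks a buffer as cold, a Linux host could demote its pages to a CXL-backed NUMA node using the move_pages(2) interface from libnuma. The node number and the policy decision are assumptions; in practice the operating system's tiering machinery would drive this:

```c
/* Sketch: demote a "cold" buffer to an assumed CXL-backed NUMA
 * node. The node number and cold/hot decision are illustrative;
 * link with -lnuma. */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CXL_NODE 1 /* assumed NUMA node backed by CXL memory */

int demote_to_cxl(void *buf, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (len + page - 1) / page;
    void **pages = malloc(npages * sizeof(*pages));
    int *nodes = malloc(npages * sizeof(*nodes));
    int *status = malloc(npages * sizeof(*status));
    if (!pages || !nodes || !status) return -1;

    for (size_t i = 0; i < npages; i++) {
        pages[i] = (char *)buf + i * page;
        nodes[i] = CXL_NODE; /* target tier for cold data */
    }
    /* pid 0 = current process; the kernel migrates the backing
     * pages without the application copying anything itself. */
    long rc = move_pages(0, npages, pages, nodes, status, MPOL_MF_MOVE);

    free(pages); free(nodes); free(status);
    return (int)rc;
}
```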
