Non-uniform memory access
Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). NUMA is beneficial for workloads with high memory locality of reference and low lock contention, because a processor may operate on a subset of memory mostly or entirely within its own cache node, reducing traffic on the memory bus. NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. They were developed commercially during the 1990s by Unisys, Convex Computer (later Hewlett-Packard), Honeywell Information Systems Italy (HISI) (later Groupe Bull), Silicon Graphics (later Silicon Graphics International), Sequent Computer Systems (later IBM), Data General (later EMC, now Dell Technologies), Digital (later Compaq, then HP, now HPE) and ICL. Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT.
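The gap between local and remote access can be observed directly. Below is a minimal sketch, assuming a Linux machine with at least two NUMA nodes and the libnuma library (compile with -lnuma; the node numbers and buffer size are illustrative choices, not something the text above prescribes). It pins the running thread to node 0, then times one write per page over a node-local buffer and a remote one:

    /* Minimal sketch: contrast local vs. remote memory access on a
       Linux NUMA machine using libnuma. Node numbers and buffer size
       are illustrative assumptions. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE (256UL * 1024 * 1024)  /* 256 MiB, large enough to defeat caches */

    static double touch_pages(volatile char *buf, size_t size)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < size; i += 4096)   /* one write per page */
            buf[i]++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "need a NUMA system with at least two nodes\n");
            return 1;
        }
        numa_run_on_node(0);                            /* pin this thread to node 0 */

        char *local  = numa_alloc_onnode(BUF_SIZE, 0);  /* memory on node 0: local  */
        char *remote = numa_alloc_onnode(BUF_SIZE, 1);  /* memory on node 1: remote */
        memset(local, 0, BUF_SIZE);                     /* fault the pages in first */
        memset(remote, 0, BUF_SIZE);

        printf("local:  %.3f s\n", touch_pages(local, BUF_SIZE));
        printf("remote: %.3f s\n", touch_pages(remote, BUF_SIZE));

        numa_free(local, BUF_SIZE);
        numa_free(remote, BUF_SIZE);
        return 0;
    }

On typical two-socket hardware the remote pass is measurably slower; that gap is the non-uniformity that gives NUMA its name.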
(Image caption: Symmetrical Multi Processing XPS-100 family of servers, designed by Dan Gielan of VAST Corporation for Honeywell Information Systems Italy.)

Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs have increasingly found themselves "starved for data", having to stall while waiting for data to arrive from memory (e.g. for von Neumann architecture-based computers, see Von Neumann bottleneck). Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access as opposed to faster processors, allowing the computers to work on large data sets at speeds other systems could not approach. Limiting the number of memory accesses provided the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-increasing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses.
However, the dramatic increase in size of the operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multi-processor systems without NUMA make the problem considerably worse: a system can now starve several processors at the same time, notably because only one processor can access the computer's memory at a time. NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data (common for servers and similar applications), NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors (or separate memory banks). Another approach to addressing this problem is the multi-channel memory architecture, in which a linear increase in the number of memory channels increases the memory-access concurrency linearly. Of course, not all data ends up confined to a single task, which means that more than one processor may require the same data.
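One common software expression of "separate memory for each processor" is the first-touch page-placement policy that Linux applies by default: a page is physically allocated on the node of the CPU that first writes it. The OpenMP sketch below illustrates the idea under that assumption (first-touch and OpenMP are illustrative choices, not something the text above prescribes); each thread initializes its own slice of an array so those pages land on that thread's node, and a later pass uses the same static schedule so most accesses stay node-local:

    /* Sketch of per-processor memory via Linux's default first-touch
       placement. Compile with -fopenmp. Array size is arbitrary. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1L << 26)   /* 64 Mi doubles, about 512 MiB */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        if (!a)
            return 1;

        /* Each thread first-touches its own chunk, so those pages are
           allocated on that thread's NUMA node. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = 1.0;

        /* The same static schedule means each thread now mostly reads
           memory that is local to its node. */
        double sum = 0.0;
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f\n", sum);
        free(a);
        return 0;
    }

Initializing the whole array from a single thread would instead place every page on one node, recreating the single-shared-memory contention described above.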
To handle these cases, NUMA systems include additional hardware or software to move data between memory banks (a sketch of the software side appears at the end of this section). This operation slows the processors attached to those banks, so the overall speed increase due to NUMA depends heavily on the nature of the running tasks. AMD implemented NUMA with its Opteron processor (2003), using HyperTransport. Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs.

Nearly all CPU architectures use a small amount of very fast non-shared memory known as cache to exploit locality of reference in memory accesses. With NUMA, maintaining cache coherence across shared memory has a significant overhead. Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model. Typically, ccNUMA uses inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location.
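The software side of moving data between memory banks, mentioned earlier, is exposed on Linux through page-migration interfaces. The sketch below, again assuming a two-node machine and libnuma (node numbers are illustrative), uses the move_pages wrapper to migrate a small buffer from node 0 to node 1:

    /* Sketch: migrate a buffer's pages between NUMA nodes with libnuma's
       move_pages wrapper (link with -lnuma). Assumes nodes 0 and 1 exist. */
    #include <numa.h>
    #include <numaif.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define NPAGES 16

    int main(void)
    {
        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "need a NUMA system with at least two nodes\n");
            return 1;
        }
        long page = sysconf(_SC_PAGESIZE);
        char *buf = numa_alloc_onnode(NPAGES * page, 0);
        memset(buf, 0, NPAGES * page);         /* fault the pages in on node 0 */

        void *pages[NPAGES];
        int nodes[NPAGES], status[NPAGES];
        for (int i = 0; i < NPAGES; i++) {
            pages[i] = buf + i * page;
            nodes[i] = 1;                      /* desired destination node */
        }

        /* pid 0 means "this process"; status[] reports the node each page
           ended up on, or a negative errno value on failure. */
        if (numa_move_pages(0, NPAGES, pages, nodes, status, MPOL_MF_MOVE) != 0)
            perror("numa_move_pages");
        else
            printf("first page now on node %d\n", status[0]);

        numa_free(buf, NPAGES * page);
        return 0;
    }

In practice, placement is more often set coarsely from the command line, for example numactl --cpunodebind=0 --membind=0 ./app, which binds a program's threads and allocations to a single node so that its accesses stay local.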