Memory Hierarchy And Access Time
This page takes a closer look at the Raspberry Pi memory hierarchy. Every level of the memory hierarchy has a capacity and a speed. Capacities are relatively easy to find by querying the operating system or reading the ARM1176 technical reference manual. Speed, however, is not as easy to find and usually must be measured. I use a simple pointer chasing technique to characterize the behavior of each level in the hierarchy. The technique also reveals the behavior of memory-related performance counter events at each level. The Raspberry Pi implements five levels in its memory hierarchy. The levels are summarized in the table below. The highest level consists of virtual memory pages that are maintained in secondary storage. Raspbian Wheezy keeps its swap space in the file /var/swap on the SDHC card. That is enough space for 25,600 4KB pages (100MB of preallocated swap). You are allowed as many pages as will fit into the preallocated swap area.
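The pointer chasing idea looks roughly like the sketch below (a minimal illustration, not the actual test program; the names chase, STRIDE, and the working set sizes are my own choices). A chain of pointers is threaded through a buffer, and the timing loop follows the chain so that every load depends on the previous one. The average time per load then reflects the latency of whichever level of the hierarchy holds the working set.

```c
/* Minimal pointer-chasing sketch (illustrative only).
 * Each load depends on the result of the previous load, so the loop
 * measures raw access latency rather than bandwidth. A real test would
 * also randomize the chain order to defeat any prefetching. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STRIDE      32          /* bytes between chained elements (assumed) */
#define ITERATIONS  10000000L

static double chase(size_t buffer_bytes)
{
    size_t count = buffer_bytes / STRIDE;
    char *buffer = malloc(buffer_bytes);
    size_t i;

    /* Thread a circular chain through the buffer: element i points to i+1. */
    for (i = 0; i < count; i++) {
        char **elem = (char **)(buffer + i * STRIDE);
        *elem = (i + 1 < count) ? buffer + (i + 1) * STRIDE : buffer;
    }

    /* Walk the chain; each load depends on the previous load's result. */
    clock_t start = clock();
    char **p = (char **)buffer;
    for (long n = 0; n < ITERATIONS; n++)
        p = (char **)*p;
    clock_t end = clock();

    /* Use the final pointer so the compiler cannot discard the loop. */
    volatile char *sink = (char *)p;
    (void)sink;

    free(buffer);
    return (double)(end - start) / CLOCKS_PER_SEC / ITERATIONS * 1e9;
}

int main(void)
{
    /* Working sets chosen to land in L1 cache, main memory, etc. */
    size_t sizes[] = { 8 * 1024, 16 * 1024, 1024 * 1024, 16 * 1024 * 1024 };
    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        printf("%8zu bytes: %.1f ns per load\n", sizes[i], chase(sizes[i]));
    return 0;
}
```

When the working set fits in the L1 data cache, the per-load time is a few nanoseconds; once it spills into main memory, the time per load jumps by an order of magnitude or more.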
The Raspberry Pi has either 256MB (Model A) or 512MB (Model B) of primary memory. This is enough space for 65,536 or 131,072 physical pages, respectively, if all of primary memory were available for paging. It isn't all available for user-space programs because the Linux kernel needs space for its own code and data. Linux also supports huge pages, but that's a separate topic for now. The vmstat command displays information about virtual memory usage. Please refer to the man page for usage. Vmstat is a good tool for troubleshooting paging-related performance issues because it shows page in and page out statistics.

The processor in the Raspberry Pi is the Broadcom BCM2835. The BCM2835 does have a unified level 2 (L2) cache. However, the L2 cache is dedicated to the VideoCore GPU. Memory references from the CPU side are routed around the L2 cache. The BCM2835 has two level 1 (L1) caches: a 16KB instruction cache and a 16KB data cache.
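As a quick sanity check on the page counts above, the page size and the number of physical pages can be queried at run time with sysconf(). A minimal sketch (note that _SC_PHYS_PAGES is a glibc extension, not strict POSIX):

```c
/* Query the page size and physical page count at run time
 * (sanity check for the 65,536 / 131,072 page figures above). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page_size  = sysconf(_SC_PAGESIZE);    /* typically 4096 bytes */
    long phys_pages = sysconf(_SC_PHYS_PAGES);  /* pages of physical RAM */

    printf("page size:      %ld bytes\n", page_size);
    printf("physical pages: %ld\n", phys_pages);
    printf("physical RAM:   %ld MB\n",
           (phys_pages * page_size) / (1024 * 1024));
    return 0;
}
```

On a real board the reported figure comes in below the theoretical maximum, because the VideoCore GPU reserves part of the 256MB/512MB for itself.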
Our analysis below concentrates on the data cache. The data cache is 4-way set associative. Each way in an associative set stores a 32-byte cache line. The cache can handle up to four active references to the same set without conflict. If all four ways in a set are valid and a fifth reference is made to the set, then a conflict occurs and one of the four ways is victimized to make room for the new reference. The data cache is virtually indexed and physically tagged. Cache lines and tags are stored separately in DATARAM and TAGRAM, respectively. Virtual address bits 11:5 index the TAGRAM and DATARAM. Given a 16KB capacity, 32-byte lines and 4 ways, there must be 128 sets. Virtual address bits 4:0 are the offset into the cache line. The data MicroTLB translates a virtual address to a physical address and sends the physical address to the L1 data cache.
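The geometry works out as shown in the small sketch below (my own illustration): the set count follows from the capacity, line size, and associativity, and the set index and line offset fall out of the address bit fields just described.

```c
/* Derive the L1 data cache geometry and decode an address into its
 * line offset (bits 4:0) and set index (bits 11:5). */
#include <stdio.h>
#include <stdint.h>

#define CACHE_SIZE  (16 * 1024)   /* 16KB data cache */
#define LINE_SIZE   32            /* 32-byte cache lines */
#define NUM_WAYS    4             /* 4-way set associative */

#define NUM_SETS    (CACHE_SIZE / (LINE_SIZE * NUM_WAYS))   /* = 128 */

int main(void)
{
    uintptr_t addr = 0x12345678u;  /* arbitrary example address */

    unsigned offset = addr & (LINE_SIZE - 1);          /* bits 4:0  */
    unsigned set    = (addr / LINE_SIZE) % NUM_SETS;   /* bits 11:5 */

    printf("sets: %d\n", NUM_SETS);
    printf("address 0x%lx -> set %u, offset %u\n",
           (unsigned long)addr, set, offset);
    return 0;
}
```

Since 128 sets times 32 bytes is 4KB, two virtual addresses that are a multiple of 4KB apart map to the same set. A pointer chase with a 4KB stride therefore piles references onto a single set and can force conflict misses after only a handful of elements.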
The L1 data cache compares the physical address with the tag and determines hit/miss status and the correct way. The load-to-use latency is three (3) cycles for an L1 data cache hit. The BCM2835 implements a two-level translation lookaside buffer (TLB) structure for virtual to physical address translation. There are two MicroTLBs: a ten entry data MicroTLB and a ten entry instruction MicroTLB. The MicroTLBs are backed by the main TLB (i.e., the second level TLB). The MicroTLBs are fully associative. Each MicroTLB translates a virtual address to a physical address in one cycle when the page mapping information is resident in the MicroTLB (that is, a hit in the MicroTLB). The main TLB is a unified TLB that handles misses from the instruction and data MicroTLBs. It is a 64-entry, 2-way associative structure. Main TLB misses are handled by a hardware page table walker. A page table walk requires at least one additional memory access to find the page mapping information in primary memory.
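Assuming the standard 4KB page size, the reach of each TLB level is easy to compute; the short illustrative calculation below is my own, and it ignores large-page and section mappings, which cover more address space per entry.

```c
/* Compute the address range ("reach") covered by each TLB level,
 * assuming 4KB small pages. */
#include <stdio.h>

#define PAGE_SIZE          4096   /* bytes */
#define MICRO_TLB_ENTRIES    10   /* data (or instruction) MicroTLB */
#define MAIN_TLB_ENTRIES     64   /* unified main TLB */

int main(void)
{
    printf("MicroTLB reach: %d KB\n",
           MICRO_TLB_ENTRIES * PAGE_SIZE / 1024);   /* 40 KB  */
    printf("Main TLB reach: %d KB\n",
           MAIN_TLB_ENTRIES * PAGE_SIZE / 1024);    /* 256 KB */
    return 0;
}
```

A pointer chase whose working set touches more than 64 distinct 4KB pages therefore starts to miss in the main TLB, and each of those misses costs a hardware page table walk on top of the memory access itself.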