Pumping Up Performance with Linux Hugepages – Part 1


Hans and Franz

As memory becomes cheaper, servers are delivered with larger memory configurations and applications are starting to address more of it. This is generally a good thing from a performance standpoint. However, these large memory footprints can create performance issues when you’re using the default memory page size of 4 KB on x86-based systems.


To address this, Linux has a feature called “hugepages” that allows applications (databases, JVMs, etc.) to allocate larger memory pages than the 4 KB default. Applications using hugepages can benefit from these larger page sizes because the processor has a greater chance of finding the memory mapping information in its cache, thereby avoiding more expensive lookups.

In order to understand the benefits of hugepages, it helps to know a bit more about memory mapping, page tables and the TLB (translation lookaside buffer).

Memory Mapping and the Page Table

Wikipedia has a good description of memory mapping and the page table:

“In operating systems that use virtual memory, every process is given the impression that it is working with large, contiguous sections of memory. In reality, each process’ memory may be dispersed across different areas of physical memory, or may have been moved (paged out) to another type of storage, typically to a hard disk.

When a process requests access to data in its memory, it is the responsibility of the operating system to map the virtual address provided by the process to the physical address of the actual memory where that data is stored. The page table is where the operating system stores its mappings of virtual addresses to physical addresses.”

Consider a 20 GB shared memory segment, such as an Oracle database SGA. With standard 4 KB pages, each process that attaches to the segment needs its own copy of the page table entries that map it: 20 GB / 4 KB = 5,242,880 entries, at 8 bytes each, or about 40 MB per process. So, if you have 200 processes attaching to that 20 GB segment, you will have 8000 MB of page table entries! That is a significant chunk of memory that could be used for better purposes.

If you’re using 2 MB hugepages, however, you only need 10,240 page table entries, or 81,920 bytes (80 KB). Plus, with hugepages, the page table entries are shared between processes. So, you only need that one copy of the 80 KB page table for your 20 GB segment:


Page size         Shared memory segment size   Attached processes   Total potential page table size
4 KB              20 GB                        200                  8000 MB
2 MB (hugepage)   20 GB                        200                  80 KB
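The figures in the table can be reproduced with a few lines of arithmetic. This sketch assumes 8 bytes per page table entry (the usual size on x86-64) and the example numbers from the text:

```python
# Page-table arithmetic behind the table above.
# Assumes 8 bytes per page-table entry (x86-64), a 20 GB segment,
# and 200 attached processes, as in the example.
PTE_BYTES = 8
GiB = 1024 ** 3
segment = 20 * GiB
processes = 200

# 4 KB pages: every process carries its own copy of the entries.
entries_4k = segment // (4 * 1024)
total_4k = entries_4k * PTE_BYTES * processes
print(total_4k // 1024 ** 2, "MB")                 # 8000 MB

# 2 MB hugepages: far fewer entries, and the single copy is shared.
entries_2m = segment // (2 * 1024 ** 2)
total_2m = entries_2m * PTE_BYTES
print(entries_2m, "entries,", total_2m, "bytes")   # 10240 entries, 81920 bytes
```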



The Translation Lookaside Buffer

The TLB (translation lookaside buffer) is located in the memory management unit of the processor and it holds the most recently used virtual-to-physical address translations. There are two types of TLBs on Intel processors:

  • the ITLB (Instruction Translation Lookaside Buffer) stores virtual-to-physical memory mappings for instruction pages
  • the DTLB (Data Translation Lookaside Buffer) stores virtual-to-physical memory mappings for data pages

There can also be multiple levels of TLBs on the processor. The STLB is a second-level TLB that can store instruction or data memory mappings.

For instance, on the Intel Ivy Bridge processor that is used in the Exadatas, there are two levels of TLBs (1st and 2nd) as well as a DTLB and an ITLB in the 1st level:

Cache   Level   4 KB entries   2 MB entries         1 GB entries
DTLB    1st     64             32                   4
ITLB    1st     128            8 per logical core   none
STLB    2nd     512            none                 none

TLB Misses, Page Walks and Page Faults

When a virtual-to-physical memory mapping is not found in the TLB, it is classified as a “TLB miss”. The processor maintains counters of how many misses of each type it encounters, and tools such as the Linux ‘perf’ utility can report on this data.

  • If the mapping is not found in the first-level TLB (an “ITLB miss” or a “DTLB miss”, depending on whether it is an instruction or data page), the processor looks in the second-level TLB (STLB).
  • If it is not found in the STLB (another “miss”), the processor must then read the mapping from the page table. This is called a “page walk” and it may require multiple memory accesses.
  • If the mapping is not present in the page table, the result is a “page fault”: the operating system must bring the necessary page into physical memory and update the page table before the access can be retried.
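The lookup order described above can be sketched as a toy model. Dictionaries stand in for the hardware structures, and the function name is mine, not a real API; the point is only to illustrate the sequence of checks:

```python
# Toy model of the lookup order: TLB -> STLB -> page walk -> page fault.
# This illustrates the order of the checks, not real hardware behavior.

def translate(vpn, tlb, stlb, page_table):
    """Map a virtual page number to a frame, reporting which step resolved it."""
    if vpn in tlb:
        return tlb[vpn], "tlb-hit"
    if vpn in stlb:                        # first-level miss: try the STLB
        tlb[vpn] = stlb[vpn]
        return stlb[vpn], "stlb-hit"
    if vpn in page_table:                  # STLB miss: walk the page table
        tlb[vpn] = stlb[vpn] = page_table[vpn]
        return page_table[vpn], "page-walk"
    # Not in the page table at all: the OS handles a page fault and
    # creates the mapping before the access can be retried.
    page_table[vpn] = frame = len(page_table)
    return frame, "page-fault"

tlb, stlb, pt = {}, {}, {}
print(translate(7, tlb, stlb, pt)[1])   # page-fault
print(translate(7, tlb, stlb, pt)[1])   # page-walk (mapping now in the page table)
print(translate(7, tlb, stlb, pt)[1])   # tlb-hit
```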

Why Hugepages?

By using a larger page size, a single TLB entry can represent a larger memory range. As mentioned before, the default page size in Linux is 4 KB and a large page is 2 MB, so a single large page covers the same memory range as 512 standard 4 KB pages. As a result, there is less pressure on the TLB, and memory-intensive applications may see better performance due to an increased TLB hit ratio.
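One way to see the reduced pressure is to compute the TLB “reach”: how much memory the TLB entries can cover at once. Using the first-level DTLB entry counts from the Ivy Bridge table above:

```python
KB, MB = 1024, 1024 ** 2

# A single 2 MB hugepage covers as much memory as 512 standard 4 KB pages.
print(2 * MB // (4 * KB))           # 512

# First-level DTLB reach on Ivy Bridge (entry counts from the table above).
reach_4k = 64 * 4 * KB              # 64 entries x 4 KB = 256 KB
reach_2m = 32 * 2 * MB              # 32 entries x 2 MB = 64 MB
print(reach_4k // KB, "KB vs", reach_2m // MB, "MB")
```

With 4 KB pages the first-level DTLB covers only 256 KB of the address space at once; with 2 MB hugepages it covers 64 MB.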

Using huge pages means the processor’s MMU (memory management unit) spends less time walking page tables to refill the TLB. Using hugepages also reduces the amount of memory used for storing the page tables and it reduces the operating system maintenance of page states.

Other Considerations

  • there are times when using hugepages can negatively affect system performance. For example, when a large amount of memory is pinned in hugepages by an application, it could create a shortage of regular memory. This could then cause excessive paging in other applications and slow down the entire system.
  • hugepages require contiguous memory. Memory fragmentation can make it impossible to reserve enough hugepage memory. When that happens, the application or the OS will revert to using regular pages.
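Because fragmentation can make the reservation silently fall short, it is worth checking the HugePages_* counters in /proc/meminfo. Here is a hedged sketch of doing that in Python; the sample text is illustrative, and on a live system you would read the file itself:

```python
# Parse the HugePages_* counters from /proc/meminfo-style text to check
# whether the kernel actually reserved the hugepages you asked for.
# SAMPLE is illustrative; on Linux, read open("/proc/meminfo") instead.

SAMPLE = """\
HugePages_Total:    1024
HugePages_Free:      512
HugePages_Rsvd:        0
Hugepagesize:       2048 kB
"""

def hugepage_counters(meminfo_text):
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith(("HugePages_", "Hugepagesize")):
            key, value = line.split(":")
            counters[key] = int(value.split()[0])  # drop the "kB" unit if present
    return counters

c = hugepage_counters(SAMPLE)
print(c["HugePages_Total"] - c["HugePages_Free"], "hugepages in use")
```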

Next Time

In the second part of this article, I’ll talk about how to use hugepages with the Oracle database and with JVMs. I’ll also talk about Transparent Hugepages (THP) and why you should turn off this Linux feature.


