Why not allocate contiguous memory for the page table entries of a process? - Stack Overflow


In the textbook Operating System Concepts by Abraham Silberschatz, Section 9.4 argues that:

9.4 Structure of the Page Table - In this section, we explore some of the most common techniques for structuring the page table, including hierarchical paging, hashed page tables, and inverted page tables.

9.4.1 Hierarchical Paging - Most modern computer systems support a large logical address space (2^32 to 2^64). In such an environment, the page table itself becomes excessively large. For example, consider a system with a 32-bit logical address space. If the page size in such a system is 4 KB (2^12), then a page table may consist of over 1 million entries (2^20 = 2^32 / 2^12). Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of physical address space for the page table alone. Clearly, we would not want to allocate the page table contiguously in main memory. One simple solution to this problem is to divide the page table into smaller pieces. We can accomplish this division in several ways.

I really don't understand the part that says "Clearly, we would not want to allocate the page table contiguously in main memory." Why is that?


asked by Vacation Due 20000

1 Answer


I think it's just slightly awkward wording. What the author probably means is that we would not want to be required by the hardware to allocate the page table contiguously, nor would we want to use an algorithm that only works with contiguous blocks, because it's hard to guarantee that a free contiguous block that large will be available.
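To put a number on "such a large contiguous block", here is a minimal C sketch of the same arithmetic the textbook does, using only the parameters from the quote (32-bit addresses, 4 KB pages, 4-byte entries):

    #include <stdio.h>

    int main(void) {
        unsigned long long address_space = 1ULL << 32; /* 32-bit logical address space */
        unsigned long long page_size     = 1ULL << 12; /* 4 KB pages */
        unsigned long long pte_size      = 4;          /* 4-byte page-table entries */

        unsigned long long entries = address_space / page_size; /* 2^20 entries */
        unsigned long long bytes   = entries * pte_size;        /* 4 MB per process */

        printf("%llu entries, %llu bytes, %llu contiguous 4 KB frames\n",
               entries, bytes, bytes / page_size);
        return 0;
    }

That prints 1,048,576 entries, 4,194,304 bytes, 1,024 frames: a flat table for one process would need 1,024 physically adjacent free frames, which is exactly the kind of guarantee you don't want to depend on once memory becomes fragmented.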

But there's nothing inherently problematic about using a contiguous region. If such a region happens to be available, there's nothing wrong with using it. (Though, once we have an algorithm that can make use of scattered pages instead, it might be advantageous to leave the large contiguous region free for some other purpose where contiguity really is required, e.g. huge pages or DMA buffers.)
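As a rough illustration of the "divide the page table into smaller pieces" idea from the quoted text, here is a toy two-level model in C. It is only a sketch (not how any real MMU or kernel lays this out): the outer directory and each inner table are one page apiece, allocated independently, so no two pieces ever need to be adjacent.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical split of a 32-bit virtual address with 4 KB pages:
     * 10-bit outer index | 10-bit inner index | 12-bit offset.
     * Each table holds 1024 four-byte entries, i.e. exactly one page. */
    #define ENTRIES 1024

    typedef struct {
        uint32_t *inner[ENTRIES];   /* pointers to inner tables; NULL until needed */
    } page_directory;

    /* Return the entry for vaddr, creating its inner table lazily.
     * Each inner table is a separate page-sized allocation that the
     * allocator may place anywhere in memory. */
    static uint32_t *pte_for(page_directory *dir, uint32_t vaddr) {
        uint32_t outer = vaddr >> 22;           /* top 10 bits */
        uint32_t inner = (vaddr >> 12) & 0x3FF; /* next 10 bits */

        if (dir->inner[outer] == NULL)
            dir->inner[outer] = calloc(ENTRIES, sizeof(uint32_t));
        return dir->inner[outer] ? &dir->inner[outer][inner] : NULL;
    }

In this toy model only the 4 KB outer directory has to exist up front; inner tables for unused regions of the address space are never allocated at all, which is the other benefit of splitting the table into page-sized pieces.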