Operating System Concepts
Chapter 9 – Virtual Memory
Based on the 9th Edition of:
Abraham Silberschatz, Peter B. Galvin and Greg Gagne, Operating System
Concepts
Department of Information Technology, College of Business, Law & Governance
Learning Objectives
To describe the benefits of a virtual memory system
To explain the concepts of demand paging, page-replacement
algorithms, and allocation of page frames
To discuss the principle of the working-set model
To examine the relationship between shared memory and
memory-mapped files
To explore how kernel memory is managed
Outline
1 Background
2 Demand Paging
3 Copy-on-Write
4 Page Replacement
5 Allocation of Frames
6 Thrashing
7 Memory-Mapped Files
Background
Code needs to be in memory to execute, but entire program
rarely used (e.g., error code, unusual routines, large data
structures)
Entire program code not needed at same time
Consider ability to execute partially-loaded program
Program no longer constrained by limits of physical memory
Each program takes less memory while running → more
programs run at the same time (increased CPU utilization and
throughput with no increase in response time or turnaround
time)
Less I/O needed to load or swap programs into memory →
each user program runs faster
Virtual Memory: separation of user logical memory from physical
memory
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than
physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
More programs running concurrently
Less I/O needed to load or swap processes
Virtual address space: logical view of how a process is stored in
memory
Usually starts at address 0, with contiguous addresses until the end of
the space
Meanwhile, physical memory organized in page frames
MMU must map logical to physical
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Virtual memory that is larger than physical memory
(Figure: pages 0, 1, 2, ..., v of a large virtual memory mapped through a memory map onto a smaller physical memory.)
Background: Virtual-Address Space
Usually design logical address space for stack to start at Max
logical address and grow down while heap grows up
Enables sparse address spaces with holes
left for growth, dynamically linked
libraries, etc
System libraries shared via mapping into
virtual address space
Shared memory by mapping pages
read-write into virtual address space
Pages can be shared during fork(),
speeding process creation
(Figure: a process's virtual address space from 0 to Max, with code and data at the bottom, the heap growing upward, and the stack growing downward from Max.)
Background: Virtual-Address Space
Shared library using virtual memory
(Figure: two processes each with their own stack, heap, data, and code; a shared library is mapped onto the same shared pages in both virtual address spaces.)
Quick Quiz
1 Which of the following is a benefit of allowing a program that
is only partially in memory to execute?
A. Programs can be written to use more memory than is available
in physical memory.
B. CPU utilization and throughput are increased.
C. Less I/O is needed to load or swap each user program into
memory.
D. All of the above
Answer: D
2 In systems that support virtual memory, ______.
A. virtual memory is separated from logical memory.
B. virtual memory is separated from physical memory.
C. physical memory is separated from secondary storage.
D. physical memory is separated from logical memory.
Answer: D
Demand Paging
Could bring entire process into memory at load time
Or bring a page into memory only when it is needed
Less I/O needed, no unnecessary I/O
Less memory needed
Faster response
More users
Similar to a paging system with swapping, i.e., a page is needed
when there is a reference to it:
invalid reference ⇒ abort
not-in-memory ⇒ bring into memory
Lazy swapper: never swaps a page into memory unless that page
will be needed. A swapper that deals with pages is a pager
Paging System with Swapping
(Figure: program A is swapped out of main memory and program B is swapped in; main memory is shown as a grid of numbered frames.)
Demand Paging: Basic Concepts
With swapping, the pager guesses which pages will be used before
the process is swapped out again
Instead of swapping in a whole process, the pager brings only those pages into memory
How to determine that set of pages? Need new MMU
functionality to implement demand paging
If the pages needed are already memory resident:
No difference from non-demand paging
If a page is needed and is not memory resident:
Need to detect and load the page into memory from storage
Without changing program behavior
Without programmer needing to change code
Demand Paging: Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated:
v ⇒ in-memory, i.e., memory resident
i ⇒ not-in-memory
Initially, the valid–invalid bit is set to i
on all entries
Example of a page table snapshot: (figure omitted; each entry pairs a frame number with a valid–invalid bit)
During MMU address translation,
if the valid–invalid bit in the page table
entry is i ⇒ page fault
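As a concrete illustration (not from the slides), here is a minimal C sketch of how a page-table entry might carry the valid–invalid bit; the 32-bit layout and field positions are assumptions for the example, not a real hardware format:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical 32-bit page-table entry: bit 0 = valid-invalid bit,
   bits 12..31 = frame number. Real hardware layouts differ. */
typedef uint32_t pte_t;

#define PTE_VALID        0x1u
#define PTE_FRAME_SHIFT  12

static inline bool     pte_is_valid(pte_t e) { return (e & PTE_VALID) != 0; }
static inline uint32_t pte_frame(pte_t e)    { return e >> PTE_FRAME_SHIFT; }

static inline pte_t pte_make(uint32_t frame, bool valid)
{
    return (frame << PTE_FRAME_SHIFT) | (valid ? PTE_VALID : 0u);
}

During translation the MMU would, in effect, check pte_is_valid(); a bit of 0 (i) raises the page-fault trap described next.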
Page table when some pages are not in main memory
(Figure: logical memory holds pages A–H; the page-table entries for the memory-resident pages carry frame numbers and bit v, the remaining entries are marked i, and the non-resident pages remain on the backing store.)
Demand Paging: Page Fault
The first reference to a page that is not in memory
will trap to the operating system: a page fault
1 Operating system looks at another table to decide:
Invalid reference ⇒ abort
Just not in memory
2 Find free frame
3 Swap page into frame via scheduled disk operation
4 Reset tables to indicate page now in memory; Set validation
bit = v
5 Restart the instruction that caused the page fault
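The five steps can be summarized in a hedged C-style sketch; handle_page_fault() and the helpers it calls are illustrative names, not a real kernel interface, and the fragment only compiles, it does not link:

typedef unsigned long vaddr_t;

/* Hypothetical helpers standing in for real kernel machinery. */
extern int  is_valid_reference(vaddr_t addr);       /* consult the process's tables */
extern int  find_free_frame(void);                  /* may trigger page replacement */
extern void read_page_from_disk(vaddr_t addr, int frame);
extern void mark_resident(vaddr_t addr, int frame); /* set valid-invalid bit to v   */
extern void abort_process(void);
extern void restart_faulting_instruction(void);

void handle_page_fault(vaddr_t fault_addr)
{
    if (!is_valid_reference(fault_addr)) {    /* step 1: invalid reference => abort */
        abort_process();
        return;
    }
    int frame = find_free_frame();            /* step 2 */
    read_page_from_disk(fault_addr, frame);   /* step 3: scheduled disk operation   */
    mark_resident(fault_addr, frame);         /* step 4: update page/frame tables   */
    restart_faulting_instruction();           /* step 5 */
}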
Steps in handling a page fault
(Figure: steps in handling a page fault: 1. a reference hits an invalid (i) entry, 2. a trap to the operating system, 3. the page is located on the backing store, 4. the missing page is brought into a free frame, 5. the page table is reset, 6. the faulting instruction is restarted.)
Demand Paging: Aspects of Demand Paging
Extreme case: start a process with no pages in memory
OS sets the instruction pointer to the first instruction of the process,
which is non-memory-resident → page fault
And the same happens for every other page of the process on first access
Pure demand paging
Actually, a given instruction could access multiple pages →
multiple page faults. Consider the fetch and decode of an instruction
that adds 2 numbers from memory and stores the result back to
memory
Hardware support (e.g., page table, secondary memory, etc.)
needed for demand paging
Demand Paging: Performance of Demand Paging
Three major activities
1 Service the interrupt: careful coding means just several
hundred instructions needed
2 Read the page: lots of time
3 Restart the process: again just a small amount of time
Page Fault Rate 0 ≤ p ≤ 1.
If p = 0 no page faults.
If p = 1, every reference is a fault
Effective Access Time (EAT)
EAT = (1 – p) × memory access time
    + p × (page fault overhead + swap page out + swap page in)
Demand Paging Example
Memory access time = 200 nanoseconds
Average page-fault service time = 8 milliseconds
EAT = (1 – p) × 200 + p × (8 milliseconds)
    = (1 – p) × 200 + p × 8,000,000
    = 200 + p × 7,999,800
That is, if one access out of 1,000 causes a page fault, then
EAT = 8.2 microseconds. This is a slowdown by a factor of 40!
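A small sketch that just reproduces the arithmetic above (200 ns memory access, 8 ms fault-service time, one fault per 1,000 accesses):

#include <stdio.h>

int main(void)
{
    const double mem_ns   = 200.0;         /* memory access time, in ns     */
    const double fault_ns = 8000000.0;     /* page-fault service time: 8 ms */
    const double p        = 1.0 / 1000.0;  /* one fault per 1,000 accesses  */

    double eat_ns = (1.0 - p) * mem_ns + p * fault_ns;
    printf("EAT = %.1f ns = %.2f microseconds (about %.0fx slower)\n",
           eat_ns, eat_ns / 1000.0, eat_ns / mem_ns);
    return 0;
}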
Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes
to initially share the same pages in memory
If either process modifies a shared page, only then is the page
copied
COW allows more efficient process creation, as only modified
pages are copied (a user-level sketch follows this list)
In general, free pages are allocated from a pool of
zero-fill-on-demand pages. The pool should always have free
frames for fast demand-page execution
vfork(), a variation on the fork() system call, suspends the parent
and has the child use the parent's address space directly (no copy-on-write copying is done)
Designed for the child to call exec() immediately; very efficient
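A minimal user-level illustration of fork() with copy-on-write (the sharing happens inside the kernel and is not directly visible; the program only shows that the child's write does not disturb the parent's copy):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *buf = malloc(4096);           /* one page-sized buffer               */
    if (buf == NULL) return 1;
    strcpy(buf, "original");

    pid_t pid = fork();                 /* parent and child now share frames   */
    if (pid == 0) {                     /* child: this write triggers the copy */
        strcpy(buf, "child's private copy");
        printf("child sees:  %s\n", buf);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent sees: %s\n", buf);   /* still "original"                    */
    free(buf);
    return 0;
}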
Before process 1 modifies page C
(Figure: process1 and process2 both map the same physical pages A, B, and C.)
After process 1 modifies page C
(Figure: after the write, process1 maps its own copy of page C while process2 still maps the original pages A, B, and C.)
Page Replacement
What happens if there is no free frame?
Page replacement: find some page in memory, but not really
in use, and page it out
Algorithm: terminate? swap out? replace the page?
Performance: want an algorithm which will result in the minimum
number of page faults
Same page may be brought into memory several times
Prevent over-allocation of memory by modifying page-fault
service routine to include page replacement
Use the modify (dirty) bit to reduce the overhead of page transfers:
only modified pages are written to disk
Page replacement completes the separation between logical
memory and physical memory: a large virtual memory can be
provided on a smaller physical memory
Need for Page Replacement
(Figure: two users' logical memories and page tables; physical memory is completely in use, so user 1's 'load M' reference to a page marked invalid finds no free frame.)
Page Replacement: Basic Page Replacement
1 Find the location of the desired page on disk
2 Find a free frame:
If there is a free frame, use it
If there is no free frame, use a page replacement algorithm to
select a victim frame
Write victim frame to disk if dirty
3 Bring the desired page into the (newly) free frame; update the
page and frame tables
4 Continue the process by restarting the instruction that caused
the trap
Note that there are now potentially 2 page transfers per page fault,
increasing the EAT
(Figure: page replacement: 1. swap out the victim page, 2. change its page-table entry to invalid, 3. swap the desired page in, 4. reset the page table for the new page.)
Page and Frame Replacement Algorithms
Frame-allocation algorithm determines
How many frames to give each process
Which frames to replace
Page-replacement algorithm
Want lowest page-fault rate on both first access and re-access
Evaluate algorithm by running it on a particular string of
memory references (reference string) and computing the
number of page faults on that string
String is just page numbers, not full addresses
Repeated access to the same page does not cause a page fault
Results depend on number of frames available
In all our examples, the reference string of referenced page
numbers is 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Graph of page faults versus the number of frames
(Figure: the number of page faults generally decreases as the number of frames grows from 1 to 6.)
First-In-First-Out (FIFO) Algorithm
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
3 frames (i.e., 3 pages can be in memory at a time per
process)
(Figure: the contents of the three page frames after each reference in the string.)
That is, the number of page faults is 15.
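A short simulation of FIFO replacement over this reference string with 3 frames, written as a sketch; it reports the 15 faults quoted above:

#include <stdio.h>

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = (int)(sizeof(ref) / sizeof(ref[0]));
    enum { FRAMES = 3 };
    int frame[FRAMES];
    int next = 0, used = 0, faults = 0;    /* next = oldest frame (FIFO order) */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < FRAMES) frame[used++] = ref[i];                 /* free frame */
            else { frame[next] = ref[i]; next = (next + 1) % FRAMES; } /* evict oldest */
        }
    }
    printf("FIFO page faults: %d\n", faults);   /* 15 for this string */
    return 0;
}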
FIFO illustrating Belady's Anomaly (adding more frames can cause
more page faults! Try the example with 4 frames)
(Figure: page-fault curve for FIFO exhibiting Belady's anomaly: the fault count rises at one point even though the number of frames increases from 1 to 7.)
Optimal Algorithm
Replace page that will not be used for longest period of time
9 page faults is optimal for the example
How do you know this? You can't read the future
Used for measuring how well your algorithm performs
(Figure: contents of the three page frames under the optimal algorithm; only 9 references fault.)
Least Recently Used (LRU) Algorithm
Use past knowledge rather than future
Replace the page that has not been used for the longest period of
time: associate the time of last use with each page
(Figure: contents of the three page frames under LRU after each reference in the string.)
12 page faults: better than FIFO but worse than OPT
Least Recently Used (LRU) Algorithm (Cont.)
Counter implementation: every page entry has a counter;
every time the page is referenced through this entry, copy the
clock into the counter. When a page needs to be changed,
look at the counters to find the smallest value (a search through
the table is needed)
Stack implementation: keep a stack of page numbers in
doubly linked form. Whenever a page is referenced, move it to
the top (requires 6 pointers to be changed).
LRU and OPT are cases of stack algorithms that don’t have
Belady’s Anomaly
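A hedged sketch of the counter (time-stamp) implementation: the loop index plays the role of the clock, each reference copies it into the page's entry, and the victim is the resident page with the smallest stamp. On the reference string above with 3 frames it reports the 12 faults quoted earlier:

#include <stdio.h>

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = (int)(sizeof(ref) / sizeof(ref[0]));
    enum { FRAMES = 3 };
    int page[FRAMES], stamp[FRAMES];   /* resident page + time of its last use */
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == ref[t]) { hit = j; break; }
        if (hit >= 0) {
            stamp[hit] = t;            /* copy the "clock" into the counter */
        } else {
            faults++;
            int victim = 0;
            if (used < FRAMES) {
                victim = used++;       /* still a free frame */
            } else {
                for (int j = 1; j < FRAMES; j++)   /* smallest stamp = LRU page */
                    if (stamp[j] < stamp[victim]) victim = j;
            }
            page[victim]  = ref[t];
            stamp[victim] = t;
        }
    }
    printf("LRU page faults: %d\n", faults);   /* 12 for this string */
    return 0;
}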
Use of stack to record most recent page references
(Figure: when page 7 is referenced, it moves from the interior of the stack to the top; the stack that held 2, 1, 0, 7, 4 at point (a) becomes 7, 2, 1, 0, 4 at point (b).)
LRU Approximation Algorithm
LRU needs special hardware and is still slow
Reference bit: with each page associate a bit, initially = 0.
When the page is referenced, the bit is set to 1. Replace any page with
reference bit = 0 (if one exists).
Second-chance algorithm
Generally FIFO, plus hardware-provided reference bit
Clock replacement. If the page to be replaced has:
reference bit = 0 → replace it
reference bit = 1 → set the reference bit to 0, leave the page in
memory, and move on to the next page, subject to the same rules
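A minimal sketch of the clock form of second chance: a hand sweeps a circular set of frames, clearing reference bits as it passes, and stops at the first frame whose bit is already 0. The frame contents and bits below are made-up example data:

#include <stdio.h>

enum { FRAMES = 4 };

static int page[FRAMES]   = {3, 8, 1, 6};   /* example resident pages           */
static int refbit[FRAMES] = {1, 0, 1, 1};   /* reference bits set by "hardware" */
static int hand = 0;                        /* the clock hand                   */

/* Return the frame to replace, giving recently referenced pages a second chance. */
static int clock_victim(void)
{
    for (;;) {
        if (refbit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % FRAMES;
            return victim;
        }
        refbit[hand] = 0;                   /* second chance: clear bit, move on */
        hand = (hand + 1) % FRAMES;
    }
}

int main(void)
{
    int v = clock_victim();
    printf("evict page %d from frame %d\n", page[v], v);
    return 0;
}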
Second-Chance (Clock) Page-Replacement Algorithm
(Figure: a circular queue of pages with their reference bits; the hand advances past pages whose reference bit is 1, clearing each bit, until it reaches the next victim, a page whose reference bit is 0.)
Counting Algorithms
Keep a counter of the number of references that have been
made to each page (not common)
Least Frequently Used (LFU) Algorithm: replaces the page with the
smallest count
Most Frequently Used (MFU) Algorithm: based on the
argument that the page with the smallest count was probably
just brought in and has yet to be used
Page-Buffering Algorithm
Keep a pool of free frames, always
Then a frame is available when needed, rather than having to be found at fault time
Read page into free frame and select victim to evict and add
to free pool
Possibly, keep a list of modified pages: when the backing store is
otherwise idle, write the pages there and mark them non-dirty
Possibly, keep free frame contents intact and note what is in
them
If referenced again before reused, no need to load contents
again from disk
Generally useful to reduce penalty if wrong victim frame
selected
Quick Quiz
1 Belady’s anomaly states that ______.
A. giving more memory to a process will improve its performance
B. as the number of allocated frames increases, the page-fault
rate may decrease for all page replacement algorithms
C. for some page replacement algorithms, the page-fault rate may
decrease as the number of allocated frames increases
D. for some page replacement algorithms, the page-fault rate may
increase as the number of allocated frames increases
Answer: D
2 True or False: Stack algorithms can never exhibit Belady’s
anomaly.
Answer: True
Allocation of Frames
Each process needs a minimum number of frames
Example: the IBM 370 needs 6 pages to handle the SS MOVE
instruction:
the instruction is 6 bytes and might span 2 pages
2 pages to handle the from operand
2 pages to handle the to operand
The maximum, of course, is the total number of frames in the system
Two major allocation schemes
1 fixed allocation
2 priority allocation
Fixed Allocation
Equal allocation: for example, if there are 100 frames (after
allocating frames for the OS) and 5 processes, give each
process 20 frames
Keep some as a free-frame buffer pool
Proportional allocation: allocate according to the size of the
process (see the formula below)
Dynamic, as the degree of multiprogramming and process sizes change
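For reference, the usual proportional-allocation formula (notation as in the textbook); the worked numbers below are only an illustrative assumption:

\[
  a_i \;=\; \frac{s_i}{S} \times m,
  \qquad s_i = \text{size of process } p_i,\quad
  S = \sum_i s_i,\quad
  m = \text{total number of frames}
\]

For example, with m = 62 free frames and two processes of 10 and 127 pages, the allocations come to 10/137 × 62 ≈ 4 frames and 127/137 × 62 ≈ 57 frames.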
Priority Allocation
Use a proportional allocation scheme based on priorities rather
than size
If process P_i generates a page fault,
select for replacement one of its frames, or
select for replacement a frame from a process with a lower
priority number
Global vs. Local Allocation
Global replacement: process selects a replacement frame
from the set of all frames; one process can take a frame from
another
But then process execution time can vary greatly
But greater throughput so more common
Local replacement: each process selects from only its own set
of allocated frames
More consistent per-process performance
But possibly underutilized memory
Non-Uniform Memory Access (NUMA)
So far we have assumed all memory is accessed equally
Many systems are NUMA: the speed of access to memory varies
Consider system boards containing CPUs and memory,
interconnected over a system bus
Optimal performance comes from allocating memory close to
the CPU on which the thread is scheduled
And modifying the scheduler to schedule the thread on the
same system board when possible
Solaris solves this by creating lgroups.
Thrashing
If a process does not have enough pages, the page-fault rate is
very high
Page fault to get page
Replace existing frame
But quickly need replaced frame back
This leads to:
Low CPU utilization
Operating system thinking that it needs to increase the degree
of multiprogramming: another process is added to the system
Thrashing ≡ a process is busy swapping pages in and out
(Figure: CPU utilization versus degree of multiprogramming: utilization climbs at first, then collapses once thrashing sets in.)
Demand Paging and Thrashing
Why does demand paging work?
Locality model
Process migrates from one locality to another
Localities may overlap
Why does thrashing occur?
Σ size of locality > total memory size
Limit effects by using local or priority page replacement
Quick Quiz
1 The ______ allocation algorithm allocates available memory to
each process according to its size.
A. equal
B. global
C. proportional
D. slab
Answer: C
2 ______ occurs when a process spends more time paging than
executing.
A. Thrashing
B. Memory-mapping
C. Demand paging
D. Swapping
Answer: A
Quick Quiz
1 ______ allows the parent and child processes to initially share
the same pages, but when either process modifies a page, a
copy of the shared page is created.
A. copy-on-write
B. zero-fill-on-demand
C. memory-mapped
D. virtual memory fork
Answer: A
2 True or False: If the page-fault rate is too high, the
process may have too many frames.
Answer: False
3 True or False: Non-uniform memory access (NUMA) has
little effect on the performance of a virtual memory system.
Answer: False
Memory-Mapped Files
Memory-mapped file I/O allows file I/O to be treated as
routine memory access by mapping a disk block to a page in
memory
A file is initially read using demand paging
A page-sized portion of the file is read from the file system
into a physical page
Subsequent reads/writes to/from the file are treated as
ordinary memory accesses
Simplifies and speeds file access by driving file I/O through
memory rather than read() and write() system calls
But when does written data make it to disk?
Periodically and/or at file close() time: for example, when
the pager scans for dirty pages
Memory-Mapped File Technique for All I/O
Some OSes use memory-mapped files for standard I/O
A process can explicitly request memory mapping of a file via the
mmap() system call: the file is then mapped into the process's address
space
For standard I/O (i.e., open(), read(), write(), close()),
the file is memory-mapped anyway
COW can be used for read/write non-shared pages
Memory mapped files can be used for shared memory
(although again via separate system calls)
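A minimal POSIX sketch of mmap()-based file I/O; the file name is a placeholder and error handling is kept to a bare minimum:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);              /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; its pages are loaded lazily by demand paging. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);    /* ordinary memory access, no read() */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}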
(Figure: process A and process B both map pages 1–6 of the same disk file into their virtual memories; the file's pages occupy shared frames of physical memory.)
Shared Memory via Memory-Mapped I/O
(Figure: process1 and process2 share memory by mapping the same memory-mapped file into both address spaces.)
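A hedged sketch of the same idea using an anonymous shared mapping rather than a named file (MAP_ANONYMOUS may need a feature-test macro on some systems, hence the #define):

#define _DEFAULT_SOURCE                     /* for MAP_ANONYMOUS on some systems */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* A shared mapping: parent and child see the same physical page. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                      /* child writes into the shared page */
        strcpy(shared, "hello from the child");
        _exit(0);
    }
    wait(NULL);
    printf("parent reads: %s\n", shared);   /* observes the child's write */
    munmap(shared, 4096);
    return 0;
}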
Quick Quiz
1 ______ allows a portion of a virtual address space to be
logically associated with a file.
A. Memory-mapping
B. Shared memory
C. Slab allocation
D. Locality of reference
Answer: A
2 Systems in which memory access times vary significantly are
known as ______.
A. memory-mapped I/O
B. demand-paged memory
C. non-uniform memory access
D. copy-on-write memory
Answer: C
End of Chapter 9