I found this problem in an OS past paper and I need some help in figuring it out. You are given:
- a 10×10 matrix A that is stored contiguously by rows (row-major order)
- a system in which we have 3 available page frames; in this system, you can fit 10 integers in a page
- 2 programs, P1 and P2, which fit, each on its own and both together, in a single page; the programs are:
P1:
for (i = 0; i < 10; i++)
    for (j = 0; j < 10; j++)
        A[i][j] = 0;
P2:
for (j = 0; j < 10; j++)
    for (i = 0; i < 10; i++)
        A[i][j] = 0;
The question is: How many pages does each program occupy and which one is more efficient given that we are using the LRU algorithm?
I know how the LRU algorithm works, and my guess is that P1 is more efficient because the matrix is stored contiguously by rows. But I don't understand where LRU fits into this logic (please correct me if my reasoning is wrong). As for the number of pages each program occupies, we've never covered that in class. Can someone help me?
If it helps, the answer options are:
- P1 and P2 occupy 11 pages each and P1 is more efficient
- P1 and P2 occupy 11 pages each and P2 is more efficient
- P1 and P2 occupy 100 pages each and P2 is more efficient
- P1 and P2 occupy 10 pages each and P1 is more efficient
- P1 and P2 occupy 10 pages each and P2 is more efficient
Also, I have a feeling this question is "inspired" by this problem in Operating System Concepts Essentials (maybe it could provide some context):
