All quizzes

Quiz 01

Question 1: Which of the following operations are likely to require performing at least one system call on an operating system with a monolithic kernel design like xv6?
Select all that apply

  1. switching away from a program that is running an infinite loop to use the CPU for something else

  2. outputting a value from a program to the screen

  3. copying a large string from one memory location to another

  4. reading a binary file from disk
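
As background for this question, output from a user program ultimately reaches the screen through the kernel; on a POSIX-style system this can be made visible by calling the write() system call wrapper directly instead of going through printf()'s buffering. A minimal sketch (not code from the quiz):

#include <unistd.h>

int main(void) {
    /* printf() buffers in the C library, but the bytes ultimately reach
       the terminal via the write() system call; here we invoke it directly */
    write(STDOUT_FILENO, "hello\n", 6);
    return 0;
}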

Question 2: Suppose an xv6 system is running two processes. Process A starts to read from the keyboard, and while it is waiting for input from the keyboard, xv6 context switches and starts running process B in user mode. Then, a key is pressed and the corresponding interrupt happens. Where will process B's registers be saved while the interrupt is happening?

  1. on the kernel stack corresponding to process A

  2. on the kernel stack corresponding to process B

  3. on the heap of process A

  4. on the heap of process B

Question 3: Unix-like operating systems like xv6 provide programs access to I/O devices, like keyboards and monitors, primarily through the abstraction of

  1. files

  2. shells

  3. processes

  4. page tables

  5. directories

  6. context switches

Quiz 02

Question 1 (0 points): Consider the following POSIX C program:

#include <stdio.h>
#include <unistd.h>

int main() {
    for (int i = 0; i < 3; i += 1) {
        pid_t p = fork();
        if (p == 0) {
            i += 1;
        }
        printf("%d", i);
        fflush(stdout);
    }
}

Assuming fork(), printf(), and fflush() do not fail, which of the following are possible outputs?
Select all that apply

  1. 012312

  2. 120123

  3. 110223

Question 2: On a typical implementation of POSIX, when an application performs a library call like read(fd, buffer, 16) to read from the keyboard, _____.
Select all that apply

  1. the application will make a system call

  2. the application will not be run again until exactly 16 characters are typed

  3. at most 1 byte will be copied into buffer, since the keyboard only supports entering one character at a time

Question 3: Which of the following are true about a call to waitpid like waitpid(pid, &status, 0)?
Select all that apply

  1. if not equal to -1, pid would most likely be obtained by calling a function like execv()

  2. after the call returns successfully, any pipes used to communicate with the child process pid will be closed

  3. if the child process pid segfaults, then the call to waitpid() will terminate successfully

  4. after the call returns successfully, the value of status will reflect the return value of main() (if any) in the child process pid

Question 4: Consider the following POSIX C program firstkilobyte:

#include <unistd.h>
int main(void) {
    char buffer[1024];
    ssize_t amount_read = read(STDIN_FILENO, buffer, 1024);
    if (amount_read > 0) {
        write(STDOUT_FILENO, buffer, amount_read);
    }
}

This program attempts to read up to the first kilobyte of its input and send it to its output. When run like

    ./firstkilobyte <input.dat >output.dat

the program always seems to work correctly, but when run as part of a shell pipeline like

    ./some_other_program | ./firstkilobyte > output.dat

sometimes it outputs less than the first kilobyte. Which of the following is a possible explanation for this?
Select all that apply

  1. some_other_program's output contained a \0 (NUL) character, causing it to be truncated

  2. firstkilobyte did not close STDIN_FILENO after calling read() on it, so its input was truncated

  3. some_other_program did not write all of its output at once, so an additional call to read would be needed to read the rest of the output

  4. because it was being called in a shell pipeline, firstkilobyte needed to read from file descriptor 3 instead of STDIN_FILENO (which is 0)
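
For background on this question: read() on a pipe may return as soon as any data is available, so a program that needs a full kilobyte would typically call read() in a loop until it has 1024 bytes or reaches end-of-file. A minimal sketch of such a loop (an illustrative variant, not the course's code):

#include <unistd.h>

int main(void) {
    char buffer[1024];
    ssize_t total = 0;
    while (total < 1024) {
        /* read() may return fewer bytes than requested, especially from a pipe */
        ssize_t amount = read(STDIN_FILENO, buffer + total, 1024 - total);
        if (amount <= 0) break;   /* 0 means end of input; negative means error */
        total += amount;
    }
    if (total > 0) {
        write(STDOUT_FILENO, buffer, total);
    }
}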

Quiz 03

Question 1: Which of the following changes to a system are likely to improve its throughput?
Select all that apply

  1. using a simpler scheduler that takes less time to decide which process to run

  2. switching from a non-preemptive to a preemptive scheduling algorithm

  3. making context switches take less time

Question 2: In lecture, we discussed multi-level feedback queue schedulers that attempt to adjust a process's priority based on the length of its CPU bursts. In such a scheduler, programs at the highest priority
Select all that apply

  1. have shorter timeslices than programs at any other priority

  2. are not allowed to use more total CPU time than the programs at lower priority

  3. will be demoted to a lower priority if they stop using the CPU before the end of their timeslice

Consider an operating system whose ready queue contains three processes:

  • process A, which needs to run for 5 units of time,
  • process B, which needs to run for 15 units of time, and
  • process C, which needs to run for 10 units of time.

Assume the processes are sorted in that order in the queue, so A is the first process in the queue. After a process completes its time, it will not become ready again (for the purposes of these questions). For all these questions, you may ignore context switches and other overheads. (A brief reminder of how turnaround time is computed appears after Question 4 below.)

Question 3: (see above) What will the mean turnaround time be if this system uses a first-come, first-served scheduler?

  1. 5 or less units of time

  2. more than 5 and not more than 10 units

  3. more than 10 and not more than 13 units

  4. more than 13 and not more than 16 units

  5. more than 16 and not more than 19 units

  6. more than 19 and not more than 22 units

  7. more than 22 units

Question 4: (see above) What will the mean turnaround time be if this system uses a round-robin scheduler which enforces a 6 unit time quantum? (You may assume the system enforces the time quantum by starting a timer just before it switches to the process.)

  1. 5 or less units of time

  2. more than 5 and not more than 10 units

  3. more than 10 and not more than 13 units

  4. more than 13 and not more than 16 units

  5. more than 16 and not more than 19 units

  6. more than 19 and not more than 22 units

  7. more than 22 units
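
As a reminder of the definition these questions use: a process's turnaround time is the time from when it becomes ready (time 0 here) until it finishes, and the mean is taken over all processes. Under first-come, first-served scheduling with all arrivals at time 0, each process's completion time is the running sum of the burst times up to and including it. A small sketch of that computation (the burst values below are placeholders, not the ones in the questions):

#include <stdio.h>

int main(void) {
    int bursts[] = {3, 7, 2};   /* placeholder CPU bursts; all processes ready at time 0 */
    int n = 3;
    int time = 0;
    int total_turnaround = 0;
    for (int i = 0; i < n; i += 1) {
        time += bursts[i];              /* process i finishes at this time under FCFS */
        total_turnaround += time;       /* turnaround = completion time - arrival time (0) */
    }
    printf("mean turnaround = %.2f units\n", (double) total_turnaround / n);
}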

Quiz 04

Question 1: In Linux's Completely Fair Scheduler (CFS), when a process becomes runnable after not having been runnable for some time, if its virtual time is too low compared to that of other programs, it is increased. To decide whether the virtual time is "too low", the scheduler has a maximum amount of difference that is allowed between the newly runnable program's virtual time and the "current" virtual time. Increasing this limit (allowed difference) will generally
Select all that apply

  1. increase the long-term fairness of how much CPU time each program gets

  2. increase the mean turnaround time programs experience

  3. increase the overhead of each context switch

Question 2: Some schedulers are susceptible to starvation where, given some mixes of running programs, some of those programs will never be run no matter how long they wait. Which are examples of schedulers with this property?
Select all that apply

  1. a preemptive shortest remaining time first scheduler

  2. a lottery scheduler with fixed-size time quanta

  3. a preemptive earliest deadline first scheduler

  4. Linux's Completely Fair Scheduler (ignoring the special support for "idle" and "batch" priorities)

Question 3: On a system which implements kernel threads, creating a new thread would require allocating
Select all that apply

  1. memory for an additional user stack

  2. memory for an additional copy of the program's machine code

  3. memory for an additional copy of the open file list

  4. memory for additional saved register values

Question 4: Consider the following code snippet:

#include <pthread.h>

void *bar(void *arg) {
    int *q = (int*) arg;
    ...
}

int foo() {
    pthread_t t;
    int value = 42;
    int *p = &value;
    pthread_create(&t, NULL, bar, (void*) p);
    pthread_join(t, NULL);
    return value;
}

Assume that the pthread_create and pthread_join calls are successful. What is true about the above code?
Select all that apply

  1. q will contain the same address as the value of p

  2. modifying *q from bar will change the value of the variable value

  3. if the pthread_join was omitted, then accessing *q could write to deallocated memory

  4. accessing *q will access the stack of the thread created by the pthread_create call

Quiz 05

Question 1: In lecture, we discussed a way to implement a mutex lock using a spinlock and a queue of waiting threads. In this design, a thread is added to the queue

  1. whenever it tries to lock the mutex

  2. whenever it tries to lock the mutex and the mutex is currently locked

  3. whenever it tries to lock the mutex and if the mutex is currently unlocked

  4. whenever it tries to lock or unlock the mutex

  5. whenever it tries to unlock the mutex

  6. whenever it tries to lock the spinlock used to implement the mutex, but discovers the spinlock was already locked

In lecture we discussed a cache coherency mechanism in which a processor that writes a shared value invalidates or updates any copies of that value held in other processors' caches.

For the below questions, assume that values are only evicted from each processor's cache because cache coherency requires it (that is, never because the cache runs out of room) and that values are cached whenever possible.

Question 2: (see above) Suppose two processors A and B share a value X in memory. Initially, both processors do not have X cached. Then, they execute the following accesses in the following order:

  • Processor A reads X
  • Processor B reads X
  • Processor B writes X
  • Processor B writes X
  • Processor B writes X
  • Processor B writes X
  • Processor A reads X

How many times is processor A's cached copy of X updated or invalidated?

  1. 1

  2. 2

  3. 3

  4. 4

  5. 5

  6. 6

Question 3: (see above) Two processors can simultaneously
Select all that apply

  1. modify in their caches two memory locations that map to different parts of the same cache block

  2. modify in their caches two memory locations that map to different cache blocks

  3. read using their caches from a single memory location

  4. read using their caches from two memory locations that map to different parts of the same cache block

Question 4: Suppose we are writing a multithreaded program where some threads are responsible for computing graphs and other threads are responsible for displaying them. We want a graph-displaying thread to wait until a graph has been computed before it starts trying to display one. To implement this waiting using a counting semaphore, we could initialize the semaphore to _________, use the ______ operation after a thread computes a graph, and use the ________ operation before a thread displays a graph.

  1. -1 / down / down

  2. -1 / down / up

  3. -1 / up / down

  4. -1 / up / up

  5. 0 / down / down

  6. 0 / down / up

  7. 0 / up / down

  8. 0 / up / up

  9. 1 / down / down

  10. 1 / down / up

  11. 1 / up / down

  12. 1 / up / up

  13. the maximum possible number of threads / down / down

  14. the maximum possible number of threads / down / up

  15. the maximum possible number of threads / up / down

  16. the maximum possible number of threads / up / up
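
For reference, in the POSIX semaphore API the "down" operation corresponds to sem_wait(), which blocks while the count is zero and then decrements it, and the "up" operation corresponds to sem_post(), which increments the count; the initial count is set with sem_init(). A minimal sketch of those calls (the semaphore name and the initial count of 1 here are placeholders, not the question's answer):

#include <semaphore.h>

sem_t sem;   /* placeholder counting semaphore */

int example(void) {
    sem_init(&sem, 0, 1);   /* third argument is the initial count (placeholder value 1) */
    sem_wait(&sem);         /* "down": block until the count is positive, then decrement it */
    sem_post(&sem);         /* "up": increment the count, possibly waking a waiting thread */
    return 0;
}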

Quiz 06

Suppose we have a variant of producer/consumer where there are two produce functions, ProduceLeft(int value) and ProduceRight(int value), and one Consume(int *pointerLeft, int *pointerRight) function which waits for a left and a right value to be available, returning when they are [edited 2019-02-24]. An incomplete implementation (with missing code marked with _____s) of these functions appears as follows:

pthread_mutex_t lock;
pthread_cond_t ready_cv;
Queue queueLeft;
Queue queueRight;
void ProduceLeft(int value) {
    pthread_mutex_lock(&lock);
    queueLeft.enqueue(value);
    pthread_cond_signal(&ready_cv);
    pthread_mutex_unlock(&lock);
}

void ProduceRight(int value) {
    pthread_mutex_lock(&lock);
    queueRight.enqueue(value);
    pthread_cond_signal(&ready_cv);
    pthread_mutex_unlock(&lock);
}

void Consume(int *pointerLeft, int *pointerRight) {
    pthread_mutex_lock(&lock);
    ___________________________ // (A)
    *pointerLeft = queueLeft.dequeue();
    *pointerRight = queueRight.dequeue();
    pthread_mutex_unlock(&lock);
}

Question 1: (see above) Which of the following could go in the blank marked (A) and result in a correct implementation?
Select all that apply

  1. pthread_cond_wait(&ready_cv, &lock);

  2. if (queueLeft.size() > 0 && queueRight.size() > 0) { pthread_cond_wait(&ready_cv, &lock); }

  3. while (queueLeft.size() > 0 && queueRight.size() > 0) { pthread_cond_wait(&ready_cv, &lock); }

  4. while (queueLeft.size() >0 || queueRight.size() > 0) { pthread_cond_wait(&ready_cv, &lock); }

  5. while (!ready_cv) { pthread_cond_wait(&ready_cv, &lock); }

  6. while (pthread_cond_wait(&ready_cv, &lock)) { pthread_mutex_lock(&lock); }

Question 2: (see above) Which of the following could replace the pthread_cond_signal(&ready_cv) in ProduceLeft and result in a correct implementation?
Select all that apply

  1. pthread_cond_broadcast(&ready_cv)

  2. if (queueRight.size() > 0) { pthread_cond_signal(&ready_cv) }

  3. ready_cv = true;

  4. while (queueRight.size() > 0 && queueLeft.size() > 0) { pthread_cond_signal(&ready_cv); }

  5. sem_post(&ready_cv);

Suppose one is implementing a plane ticketing system where each plane flight is represented by a data structure which has a lock and a list of reservations on that flight. To reserve a plane ticket for a user-supplied list of multiple flights, the system iterates through the list and acquires the lock for each flight, then iterates through the list and verifies seats are available on each flight, then iterates through the list and modifies the list of reservations for each flight, and finally releases the locks for all the flights.

Question 3: (see above) When might the above strategy result in deadlocks?
Select all that apply

  1. if two or more users are simultaneously trying to reserve the same sequence of flights and they specify the flights in the exact same way

  2. if two or more users are simultaneously trying to reserve the same sequence of flights, but they specify the flights in different orders

  3. if two users are simultaneously trying to reserve two different sequences of flights, and those sequences share the same last flight (but all the other flights are different)

Question 4: (see above) What strategies would prevent the flight reserving function from deadlocking?
Select all that apply

  1. require the list of flights to be ordered by departure time and date

  2. replace the locks for each plane flight with one lock that must be acquired before adjusting reservations for any plane flight

  3. instead of waiting to acquire a lock on a plane flight, try to acquire a lock and if that fails report that the flight is not available

  4. require the reservation function to acquire locks corresponding to each seat on the plane after acquiring the main lock for each plane flight, instead of having just one lock for each plane flight
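
For reference, one standard way to avoid deadlock when several locks must be held at once is to always acquire them in a single global order, for example sorted by a unique identifier. A minimal sketch of that idea, assuming a hypothetical flight structure with a numeric id and a pthread mutex (not the course's code):

#include <pthread.h>
#include <stdlib.h>

struct flight {
    int id;                    /* hypothetical unique flight identifier */
    pthread_mutex_t lock;
    /* ... list of reservations ... */
};

static int compare_by_id(const void *a, const void *b) {
    const struct flight *fa = *(const struct flight *const *) a;
    const struct flight *fb = *(const struct flight *const *) b;
    return (fa->id > fb->id) - (fa->id < fb->id);
}

void lock_flights_in_order(struct flight **flights, int count) {
    /* sort by id so every reservation acquires the locks in the same global order */
    qsort(flights, count, sizeof(struct flight *), compare_by_id);
    for (int i = 0; i < count; i += 1) {
        pthread_mutex_lock(&flights[i]->lock);
    }
}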

Quiz 07

Question 1: Suppose an xv6 system has a page table base pointer that is set to point to physical page 0x123, or physical byte address 0x123000. Recall that xv6 runs with a two-level page table, where page tables at each level contain 1024 4-byte page table entries. What is the physical byte address of the first-level page table entry that the processor will access when accessing virtual address 0x555555?

  1. 0x123

  2. 0x124

  3. 0x128

  4. 0x123000

  5. 0x123001

  6. 0x123004

  7. 0x123005

  8. 0x123016

  9. 0x123040

  10. none of the above
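
For reference, xv6's two-level translation splits a 32-bit virtual address into a 10-bit first-level (page directory) index, a 10-bit second-level index, and a 12-bit page offset; each 4-byte first-level entry sits at the table's base address plus 4 times the index. A small sketch of that arithmetic (the base and virtual address below are placeholders, not the question's values):

#include <stdio.h>

int main(void) {
    unsigned first_level_base = 0x200000;   /* placeholder physical byte address of the first-level table */
    unsigned virtual_address  = 0xABCDEF;   /* placeholder virtual address */

    unsigned index = virtual_address >> 22;                 /* top 10 bits select the first-level entry */
    printf("first-level PTE at physical address 0x%x\n",
           first_level_base + index * 4);                   /* entries are 4 bytes each */
}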

Question 2: xv6 contains a userspace malloc() function which is implemented in terms of the sbrk() system call. When an xv6 program uses malloc(), the kernel tracks
Select all that apply

  1. the ending address of the heap

  2. the location of each object or array that has been allocated with malloc() and not yet free()d

  3. the number of objects or arrays that have been allocated with malloc() and not yet free()d

  4. the location of each free()d object whose memory has not yet been reused

Question 3: Consider the following POSIX C code:

    int fd = open("example.dat", O_RDONLY);
    char *ptr = (char *) mmap(0, 16 * 1024, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    unsigned sum  = 0;
    for (int i = 0; i < 16 * 1024; i += 1) {
        sum += ptr[i];
    }
    printf("%d\n", sum);

On an operating system with 4KB pages [edited 2019-03-05: word "pages" previously omitted] that implements mmap by only setting page table entries when required by a page fault, how many page faults will occur when executing the for loop above?

  1. 0

  2. 1

  3. 2

  4. 3

  5. 4

  6. 8

  7. 1024

  8. 4096

  9. 16384

  10. 32768

  11. none of the above

Question 4: Consider the following C program:

    int fd = open("example.dat", O_RDONLY);
    char *ptr = (char *) mmap(0, 16 * 1024, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    ptr[1] = 'X';
    printf("%c\n", ptr[1]);
    exit(0);

On an operating system that uses 4KB pages, 32-bit pointers, and copy-on-write as part of its mmap implementation, how many bytes must be copied by the operating system in order to implement the assignment to ptr above?

  1. 1

  2. 4

  3. 8

  4. 4096

  5. 16384

Quiz 08

Question 1: Consider a system which uses 4 physical pages for its page cache and is running exactly one program that uses it (and nothing else uses the page cache). The page cache initially contains virtual pages A, B, C, and D (each assigned to a physical page). Then the program accesses virtual pages in the following sequence:

A, B, C, B, C, D, C, D, E, A

If the system uses a least recently used replacement policy, what are the contents of the page cache after the access pattern completes?

  1. A, B, C, D

  2. A, B, C, D, E

  3. A, B, C, E

  4. A, B, D, E

  5. A, C, D, E

  6. A, C, E

  7. A, E

  8. B, C, D, E

  9. B, C, D

  10. none of the above

Consider a system which uses 6 physical pages for its page cache and is running exactly one program that uses it (and nothing else uses the page cache). Suppose the system uses the "SEQ" replacement policy we discussed in lecture and initially the inactive list contains:

and the active list contains (in order):

(and other virtual pages are not cached in a physical page).

The system temporarily marks pages on the inactive list as not present, so whenever an access to them is attempted, it immediately moves them to the active list. Whenever the inactive list's size would go below 2 pages, the system fills the inactive list up to 2 pages.

Suppose the program accesses virtual pages in the following sequence:

C, D, G, C, D, B, H

Question 2: (see above) During the first access to G, what page will be replaced?

  1. A

  2. B

  3. C

  4. D

  5. E

  6. F

  7. none of the above

Question 3: (see above) During the first access to H, what page will be replaced?

  1. A

  2. B

  3. C

  4. D

  5. E

  6. F

  7. G

  8. none of the above

Question 4: Which are examples of access patterns for which a least-recently-used page replacement policy performs poorly?
Select all that apply

  1. a program that only accesses data near the top of its stack and within a small global array

  2. ten programs running in parallel that repeatedly access data from different parts of a small, shared file

  3. a program that reads a large file once and produces a brief summary of the information inside

Quiz 09

Question 1: Suppose a keyboard controller is connected to a processor via its memory bus. This keyboard controller uses programmed I/O rather than direct memory access. The operating system would most likely determine what key was pressed by

  1. looking at which interrupt number the keypress triggers (e.g. xv6's trapno)

  2. reading from a register on the device's controller using a memory read instruction

  3. sending the keyboard controller the physical memory address corresponding to the read() call that was reading from the keyboard

  4. sending the keyboard controller the virtual memory address corresponding to the read() call that was reading from the keyboard

Question 2: Suppose an OS performs the following operations on an SSD:

  • writes X to sector 10
  • writes Y to sector 11
  • writes new data to sectors 500-10000
  • writes Z to sector 10

and makes sure that all the operations result in changes to the flash chips of the SSD rather than just modifying any DRAM cache the SSD controller may have.

Which of the following are true about this process?
Select all that apply

  1. it is possible after all these operations that the value X is still stored in the SSD's flash

  2. it is most likely that the values Y and Z will be stored in adjacent locations in the SSD's flash after Z is written

  3. the new data for sectors 500-10000 could have been written to a different flash chip than where the old data for these sectors was stored

Question 3: In the FAT filesystem with 1KB clusters, truncating a file foo/bar/baz.txt from 2.5KB to 1.5KB requires modifying which of the following? Ignore any updates to creation, modification, and access times.
Select all that apply

  1. the directory entry for baz.txt

  2. the directory entry for bar

  3. the directory entry for foo

  4. one or more of the FAT entries for bar

  5. the FAT entry for the first cluster of baz.txt

  6. the FAT entry for the second cluster of baz.txt

  7. the FAT entry for the third cluster of baz.txt

Question 4: Suppose an inode-based filesystem has 5 direct blocks, 1 indirect block, and 1 double-indirect block in its inode. If the filesystem uses 1KB (1024-byte) blocks with 2-byte block numbers, then how many blocks are allocated solely to store block numbers for a 1MB (1024 * 1024 byte) file?

  1. 0

  2. 1

  3. 2

  4. 3

  5. 4

  6. 5

  7. none of the above
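
For reference, with 1KB blocks and 2-byte block numbers, one indirect block holds 512 block numbers; the blocks counted by this question are only those that hold block numbers (the indirect block, the double-indirect block, and any second-level blocks it points to), not data blocks or the inode itself. A small sketch of that counting (the file size below is a placeholder, not the question's 1MB):

#include <stdio.h>

int main(void) {
    long block_size = 1024;                     /* bytes per block */
    long per_indirect = block_size / 2;         /* 2-byte block numbers -> 512 per indirect block */
    long direct = 5;                            /* direct block pointers in the inode */

    long file_blocks = (200 * 1024) / block_size;   /* placeholder: a 200KB file */
    long metadata_blocks = 0;

    long remaining = file_blocks - direct;          /* data blocks not reachable via direct pointers */
    if (remaining > 0) {
        metadata_blocks += 1;                       /* the single indirect block */
        remaining -= per_indirect;
    }
    if (remaining > 0) {
        /* the double-indirect block itself, plus one second-level block per 512 remaining numbers */
        metadata_blocks += 1 + (remaining + per_indirect - 1) / per_indirect;
    }
    printf("blocks holding only block numbers: %ld\n", metadata_blocks);
}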

Quiz 10

Question 1: Suppose a system uses RAID 4 across 5 disks holding a total of 12 sectors of data, where the data on the disks is laid out as follows:

Disk 1: A1 (sector 0), B1 (sector 4), C1 (sector 8)
Disk 2: A2 (sector 1), B2 (sector 5), C2 (sector 9)
Disk 3: A3 (sector 2), B3 (sector 6), C3 (sector 10)
Disk 4: A4 (sector 3), B4 (sector 7), C4 (sector 11)
Disk 5: Ap = A1 XOR A2 XOR A3 XOR A4, Bp = B1 XOR B2 XOR B3 XOR B4, Cp = C1 XOR C2 XOR C3 XOR C4

Suppose Disk 3 is inaccessible due to a hardware failure. How many sectors must be read in order to read sectors 3, 4, 5, and 6 into memory, recovering using the parity disk (Disk 5) if necessary?

  1. 4

  2. 5

  3. 6

  4. 7

  5. 8

  6. 9

  7. 10

  8. 11 or more

  9. none of the above, the data is not recoverable
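
For reference, RAID 4 reconstructs a sector of the failed disk by XORing together the corresponding sectors of all the surviving disks, including the parity disk; sectors on surviving disks can be read directly with no reconstruction. A small sketch of the reconstruction (the byte values are placeholders):

#include <stdio.h>

int main(void) {
    /* placeholder 4-byte "sectors" in one stripe: three surviving data disks and the failed one */
    unsigned char d1[4] = {1, 2, 3, 4};
    unsigned char d2[4] = {5, 6, 7, 8};
    unsigned char d3[4] = {42, 43, 44, 45};   /* this disk will be treated as failed */
    unsigned char d4[4] = {9, 10, 11, 12};

    unsigned char parity[4];
    for (int i = 0; i < 4; i += 1)            /* parity as the array originally wrote it */
        parity[i] = d1[i] ^ d2[i] ^ d3[i] ^ d4[i];

    unsigned char recovered[4];
    for (int i = 0; i < 4; i += 1)            /* rebuild the failed disk's sector from the survivors */
        recovered[i] = d1[i] ^ d2[i] ^ d4[i] ^ parity[i];

    printf("recovered[0] = %d (expected %d)\n", recovered[0], d3[0]);
}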

Question 2: Compared to an inode-based filesystem that does not use block groups, a filesystem that uses block groups lowers the distance on disk between _____.
Select all that apply

  1. directory entries for subsubsubdirectories of the root directory and directory entries for the root directory

  2. inodes and the superblock of the disk

  3. directory entries and the inodes they refer to

  4. inodes and the data blocks they refer to

  5. inodes for files and inodes for the directories containing those files

  6. data blocks and the free bitmap information for those data blocks

Question 3: Suppose a filesystem uses redo logging and supports renaming a file. As part of renaming the file, the filesystem simultaneously updates the modification time of the directory. Using redo logging, the filesystem attempts to ensure that the renaming operation either updates both the file name and the directory's modification time or updates neither. What would be most useful for the filesystem to store in its log for this operation?

  1. the contents of the directory entry after the rename, its location on disk, the modification time after the rename, and the inode number of the directory

  2. the contents of the directory entry just before the rename, its location on disk, the modification time just before the rename, and the inode number of the directory

  3. the contents of the directory entry just before the rename, the new filename, the name of the directory, and its modification time just after the rename

  4. the contents of the file inode just after the rename, its inode number, and the inode number of the directory, and the new modification time

Question 4: In a copy-on-write filesystem that supports snapshots, removing a file X in a directory A will most likely create new copies of ______. Ignore any changes required to update modification times, access times, etc.
Select all that apply

  1. X's inode

  2. X's data blocks

  3. A's inode

  4. one or more of A's data blocks

  5. the inode for the directory containing the directory A (assuming A is not the root directory)

Quiz 11

Question 1: To make a server that handles multiple clients at the same time in the POSIX socket API, which of the following strategies would work?
Select all that apply

  1. create a single server socket associated with multiple client connections by calling listen() multiple times, once for each client, then use this socket with read() and write()

  2. call accept() multiple times in a row on a server socket to get several connection file descriptors and then use all these sockets with read() and write()

  3. call accept() once to get a connection file descriptor, but use bind() to change which client that communicates with in between calls to read() and write() on the file descriptor
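
For reference, the usual POSIX pattern for serving several clients is to keep one listening socket and call accept() repeatedly, getting a separate connection file descriptor for each client; each connection can then be handled in its own process or thread. A minimal sketch (error handling and child reaping omitted; the port number is a placeholder):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                        /* placeholder port */

    bind(server_fd, (struct sockaddr *) &addr, sizeof(addr));
    listen(server_fd, 10);                               /* listen() once; accept() is what repeats */

    for (;;) {
        int client_fd = accept(server_fd, NULL, NULL);   /* one file descriptor per client connection */
        if (fork() == 0) {
            /* child: handle this client while the parent keeps accepting new ones */
            char buffer[128];
            ssize_t n = read(client_fd, buffer, sizeof(buffer));
            if (n > 0)
                write(client_fd, buffer, n);             /* trivially echo the data back */
            close(client_fd);
            _exit(0);
        }
        close(client_fd);                                /* parent no longer needs this connection's fd */
    }
}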

Question 2: In POSIX internet sockets, a successful call to bind() typically
Select all that apply

  1. sends a message over the network to the machine specified by the arguments to bind()

  2. sends a message over the network containing an IP address and port number specified by the arguments to bind()

  3. stores an IP address and port number on the local machine

Question 3: Which of the following are consistent with a stateless protocol for a network file server?
Select all that apply

  1. a server providing an open/write/close interface for accessing files

  2. a server allowing an RPC call to write a whole file all at once, given the file's name and new contents

  3. a client storing the names of files on the network file server that its programs are currently accessing

  4. a client reading a file by making separate requests for each block of the file

Question 4: Using remote procedure calls is typically unlike normal (local) procedure calls because _____.
Select all that apply

  1. remote procedure call arguments are always byte-arrays so they can be sent over the network

  2. clients need to specify the name or address of the server to send the remote procedure call to

  3. clients need to specify the architecture of the server to send the remote procedure call to

Quiz 12

Question 1: Suppose two systems use NFSv3, with stateless servers and its close-to-open consistency model, and programs on the two machines access the shared file as follows, where (1) and (2) represent unknown values:

System A / System B
open file
write A to byte 0-1023 of file
write B to byte 1024-2047 of file
close file
  
open file
write X to byte 0-1023 of file
open file
write Y to byte 1024-2047
close file
read (1) from byte 0-1023
close file
open file
read (2) from byte 0-1023
close file

If systems A and B both cache files from the shared filesystem when they have available memory, as allowed by the NFS protocol and its consistency model, which are possible values of (1) and (2)? (Consider both cases where A and B have available memory for their caches and where they do not.)
Select all that apply

  1. (1) = A, (2) = A

  2. (1) = A, (2) = B

  3. (1) = A, (2) = X

  4. (1) = X, (2) = X

  5. (1) = A, (2) = X

Question 2: Consider a system performing two-phase commit with two workers A and B and one coordinator. Suppose the coordinator sends a message to both workers to prepare to do an operation. Then, both worker A and worker B send their 'agree/vote to commit' message to the coordinator, but worker A's message is lost due to a temporary network failure. Then, worker A loses power. How can the system recover from this failure?
Select all that apply

  1. when worker A's power is restored, it will resend the "agree to commit" message after reading its log

  2. after noticing that worker A is not responding for a long time, the coordinator can vote to abort the transaction itself

  3. after noticing that worker A is not responding for a long time, the coordinator tells worker B to commit the transaction and remembers to tell worker A to commit it when its power is restored

Question 3: If we want to allow programs controlled by certain users to play audio on a POSIX-like machine but not allow others to play audio, which of the following would be suitable approaches?
Select all that apply

  1. start the users' shells with an "audio" group in their group IDs after the user logs in, make the audio device file be assigned to the "audio" group, and mark it as having group "write" permission

  2. start the users' shells or other programs with a file descriptor already open to the audio device (and tell the users which file descriptor to use in their programs that play audio)

  3. list all the users in an access control list for the audio device file

  4. set up copies of an audio-playing program owned by each of the users in question, each marked set-user-ID

Question 4: Comparing a capability-based system with an access-control-list-based system, which of the following operations are substantially more difficult in the capability-based system?
Select all that apply

  1. allowing a program run by user A to give a program run by user B access to a file on its behalf

  2. stopping a user's programs from having access to files while those programs are still running

  3. giving different programs run by the same user access to different sets of files

Quiz 13

Question 1 (0 points): Suppose a guest operating system is running in a virtual machine implemented using trap-and-emulate. A user program running in the guest operating system tries to access memory that is not allocated but will be allocated on demand by the guest operating system. What will happen?

  1. the user program's memory access will trigger a page fault in the host operating system, then that page fault handler will cause the guest operating system's page fault handler to run in kernel mode

  2. the user program's memory access will trigger a page fault in the host operating system, then that page fault handler will update the guest operating system's page table, then resume running the user program

  3. the user program's memory access will trigger a system call in the host operating system, then the host operating system's system call implementation will update the guest operating system's page table

  4. the user program's memory access will trigger a page fault in the host operating system, then the host operating system's page fault handler will trigger a malloc() call in the guest operating system

Suppose a hypervisor uses shadow page tables to implement virtual memory and fills in these shadow page tables on demand as required by page faults that occur while the guest operating system is running, but does not mark the guest page tables as read-only or invalid. (We called this the "virtual TLB" strategy in lecture.)

On a system with 4KB pages, the hypervisor allocates the guest operating system machine memory addresses 0x1000000 through 0x2000000, so that physical address 0x0 in the guest OS corresponds to machine address 0x1000000, physical address 0x1 to machine address 0x1000001, and so on.

Question 2 (0 points): (see above) If a last-level guest page table entry contains page number 0x40, then the corresponding shadow page table entry should contain what page number?

  1. 0x50

  2. 0x10000040

  3. 0x10040

  4. 0x10050

  5. 0x10040000

  6. none of the above

Question 3: (see above) Shadow page table lookups are performed using _____________.

  1. virtual addresses or page numbers

  2. physical addresses or page numbers

  3. machine addresses or page numbers

Question 4: Many operating systems avoid microkernel-based designs for performance reasons. Which of the following are performance issues that microkernel-based designs like seL4 are likely to experience compared to a monolithic kernel design like Linux?
Select all that apply

  1. reading data from a keyboard is likely to require more kernel/user mode switches and/or context switches than in a monolithic kernel design

  2. microkernel designs make it harder for user-mode code to ensure that data is kept in physical memory and not swapped out to disk

  3. writing to a file is likely to require more kernel/user mode switches and/or context switches than in a monolithic kernel design

  4. sending data between two processes is likely to require more kernel/user mode switches and/or context switches than in a monolithic kernel design