All quizzes (key)

Quiz 01

Question 1 (1 points): In xv6, the arguments to system calls like write() are stored on the user program's stack in user mode. This allows the kernel functions that implement system calls, like sys_write(), to obtain the arguments because

  1. functions like sys_write() run on the same stack, and the x86-32 calling convention expects normal (non-system-call) function arguments to be there, so they can retrieve them as if they were ordinary arguments

  2. functions like sys_write() will context switch back to the calling user program in order to retrieve their arguments

  3. (correct answer)

    the sys_write() function runs on a separate kernel stack from the user program's stack, but can still use utility functions that access the user stack

Question 2 (1 points): On a typical Unix-like operating system (e.g. xv6), which of the following operations typically must be done in kernel mode?
Select all that apply

  1. (correct answer)

    communicating with the network interface card (NIC) to send a message on the wifi network

  2. changing the stack pointer to deallocate a local array just before a function returns

  3. (correct answer)

    creating a new process

  4. (correct answer)

    creating a new file

  5. (accepted any answer)

    calling one function in the executable from another

The intended interpretation of "another" was "another function", but we also accepted answers based on the interpretation "another executable".

Suppose a single-core xv6 system is running two programs. Program A is processing a large input file, doing some time-consuming computation, then writing an output file. Before this data processing finishes, the user types some input for Program B. Eventually, xv6 starts running Program B instead of Program A, even though Program A is not finished.

Question 3 (1 points): (see above) When program A was last running before xv6 switched to program B, which of the following are possible ways in which it might have been stopped?
Select all that apply

  1. (correct answer)

    program A made a system call that requires it to wait for I/O to complete (e.g. to read more from an input file for the computation)

  2. (correct answer)

    program A was temporarily stopped running because a timer interrupt transferred control to the xv6 kernel

  3. program A executed a ret instruction in order to return from a utility function inside program A, and since this required using the stack, control was transferred to the xv6 kernel to make sure this works correctly

  4. program A attempted to use the register %ebp for the first time since program B was running and using that register, so control was transferred to the xv6 kernel to save this register

Question 4 (1 points): (see above) When program B starts running, the value of program A's user stack pointer (that is, the value %esp last had when A was running in user mode) will be saved

  1. on program B's kernel stack

  2. on program B's user stack

  3. in a special processor register

  4. (correct answer)

    on program A's kernel stack

Quiz 02

Question 1 (2 points): Consider the following C code that uses the POSIX API:

write(STDOUT_FILENO, "A", 1);
int value = 0;
pid_t pid1 = fork();
if (pid1 == 0) {
    write(STDOUT_FILENO, "B", 1);
    value += 1;
} else {
    write(STDOUT_FILENO, "C", 1);
    value += 2;
}
pid_t pid2 = fork();
char value_as_string[2];
sprintf(value_as_string, "%d", value);
write(STDOUT_FILENO, value_as_string, 1);

Assume each POSIX API call (fork, write, etc.) does not fail. Which of the following are possible outputs?
Select all that apply

  1. (correct answer)

    ABC1122

  2. ABC3333

  3. (correct answer)

    ACB1122

  4. AC22C33

  5. (correct answer)

    AC22B11

  6. AC221B1

Question 2 (1 points): If successful, execv("/bin/program", args) will _________.
Select all that apply

  1. (correct answer)

    load /bin/program into the current process's memory

  2. if standard out is not open to the terminal, reopen standard out (file descriptor 1) to output to the current terminal before or while starting /bin/program

  3. change the parent process ID of the current process

  4. return a process ID

Question 3 (1 points): The following C function attempts to use the POSIX API to run another program, sending it input as its input and reading its output_size byte output into output.

void run_program_with_input_and_get_output(char *program, const char *input, size_t input_size, char *output, size_t output_size) {
   char *argv[2] = {program, NULL};
   int output_pipe_fds[2];
   int input_pipe_fds[2];
   pipe(output_pipe_fds);
   pipe(input_pipe_fds);
   int pid = fork();
   if (pid == 0) {
       /* child */
       dup2(output_pipe_fds[1], STDOUT_FILENO);
       dup2(input_pipe_fds[0], STDIN_FILENO);
       close(input_pipe_fds[0]); close(input_pipe_fds[1]);
       close(output_pipe_fds[0]); close(output_pipe_fds[1]);
       execv(program, argv);
   } else {
       /* parent */
       close(input_pipe_fds[0]); close(output_pipe_fds[1]);
       write(input_pipe_fds[1], input, input_size);
       close(input_pipe_fds[1]);
       read(output_pipe_fds[0], output, output_size);
       close(output_pipe_fds[0]);
       waitpid(pid, NULL, 0);
   }
}

This function works for a small input, but when attempting to run it with a large input, the program ends up hanging. When the other program is run manually, it terminates normally when given the specified input, and produces output that fits in the supplied output_size. What is a likely cause of the hang?
Select all that apply

  1. (correct answer)

    the other program tries to write a lot before it reads most of its input; since the parent process does not call read() until it finishes write()ing the entire input buffer, the other program's write never completes

  2. because output_pipe_fds[1] was closed, the program could not successfully output to its standard output

  3. because the parent closed input_pipe_fds[0], it could not successfully write to input_pipe_fds[1]

  4. waitpid might be called after the child process has already exited, in which case it will wait forever

Question 4 (1 points): Typically programs on a POSIX-like system use library functions that are implemented in terms of the read() and write() system calls, such as printf, fwrite, fgets, fread, and the various methods of the iostream classes. The implementations of these functions usually do their own buffering in addition to any buffering the operating system kernel may perform within the system call implementations. Besides being more portable to non-POSIX systems, which of the following are usually advantages of these wrapper functions?
Select all that apply

  1. (correct answer)

    making many calls to a function like fwrite can require substantially less system call overhead (e.g. time spent saving and restoring registers required to make a system call) than making the same number of calls to a function like write

  2. (correct answer)

    sometimes reading a large amount of data from a file descriptor may require multiple calls to read, and a function like fread will typically make these extra calls automatically

  3. writing data with a function like fwrite will usually require the data to be written to be copied into and out of buffers fewer times than using write directly

Question 5 (1 points): On a typical POSIX-like system, when program A uses the write() system call to send data to another program B via a pipe (like one created with pipe()), the write() system call will return (assuming no errors occur and that write() writes as many bytes as program A requests)

  1. only after the data has been read by program B using the read() system call

  2. (correct answer)

    only after the kernel has copied the data from the buffer program A passed as an argument to write() into some other location(s)

  3. only after the kernel has saved the pointer (passed as an argument to write()) indicating the data's location in program A's memory, so it can use that pointer later when the program B completes its read() system call

Quiz 03

Suppose a single-core system is running three processes A, B, and C.

Process A becomes ready to run (after finishing some previous I/O) at time 0, and process B becomes ready to run at time 2, and process C becomes ready to run at time 5. The operating system runs the programs until they start waiting for their next I/O operation using the following schedule:

For these questions, assume the time spent context switching, etc. are negligible.

Question 1 (1 points): (see above) What is the turnaround time experienced by process A (in the schedule above)?

  1. 0 time units

  2. 1 time unit

  3. 2 time units

  4. 3 time units

  5. 4 time units

  6. 5 time units

  7. 6 time units

  8. 7 time units

  9. 8 time units

  10. 9 time units

  11. (correct answer)

    10 time units

  12. 11 time units

  13. 12 time units

Question 2 (1 points): (see above) What is the turnaround time experienced by process B (in the schedule above)?

  1. 0 time units

  2. 1 time unit

  3. 2 time units

  4. 3 time units

  5. 4 time units

  6. 5 time units

  7. 6 time units

  8. 7 time units

  9. 8 time units

  10. 9 time units

  11. (correct answer)

    10 time units

  12. 11 time units

  13. 12 time units

Question 3 (0 points): (see above) [This question will be dropped because there is an off-by-one error in it.] Suppose the workload above were scheduled with a FCFS (first-come, first-served) scheduler instead of the schedule used above. Under this scheduler, process A would finish running at time 5. When would process B finish running and start its next I/O operation?

  1. at time 6

  2. at time 7

  3. at time 8

  4. at time 9

  5. (correct answer)

    at time 10

  6. at time 11

  7. at time 12

  8. none of the above

the description of when A would finish running was off-by-one in the question, so we dropped this question.

Question 4 (1 points): (see above) Suppose the workload above were scheduled with a preemptive SRTF (shortest remaining time first) scheduler instead of the schedule used above. Under this scheduler, when would process B finish running and start its next I/O operation?

  1. (correct answer)

    at time 6

  2. at time 7

  3. at time 8

  4. at time 9

  5. at time 10

  6. at time 11

  7. (correct answer)

    at time 12

  8. (correct answer)

    none of the above

by accident, this question depended on the priority between A and B when they have the same amount of remaining time, so we accepted two answers. One schedule: A (6 units), C (2 units), B (4 units); another schedule: A (2 units), B (4 units), C (2 units), A (4 units). It was our intention that running processes until they slept for their next I/O with the schedule above meant that processes did not start the I/O until the last time they were run in the schedule above, but this was not clearly communicated. Consequently, I elected to give everyone credit for this question.

Question 5 (1 points): When running a mix of jobs that need to react quickly to input and jobs that perform lots of long computation, increasing the time quantum in a round-robin scheduler generally ___________.
Select all that apply

  1. improves turnaround time

  2. (correct answer)

    improves throughput

  3. improves fairness

  4. is more helpful for the programs that need to react quickly to input than the programs that perform long computation

Quiz 04

Question 1 (1 points): Consider a multi-level feedback queue scheduler with 4 priority levels:

  • Priority 3 (highest), where processes run with a 1 ms quantum
  • Priority 2, where processes run with a 5 ms quantum
  • Priority 1, where processes run with a 10 ms quantum; and
  • Priority 0 (lowest), where processes run with a 100 ms quantum

Whenever a process finishes running without using its entire time quantum, it is moved up to a higher priority level (except if it is already at the highest priority level) for the next time it is runnable. If a process is still runnable after using its entire quantum, it is moved down to a lower priority level (unless it is already at the lowest priority level).

Within each priority level, the scheduler uses round-robin.

Suppose this scheduler is running two processes:

  • process A, which alternates between performing 5 ms of computation and starting and waiting for an I/O operation that takes 50 ms to complete
  • process B, which alternates between performing 15 ms of computation and starting and waiting for an I/O operation that takes 25 ms to complete

After the processes are running for a long time, ______.

  1. (correct answer)

    process A will alternate between priority 3 and priority 2, while process B will alternate between priority 1 and priority 0

  2. process A will alternate between priority 3 and priority 2, while process B will alternate between priority 2 and priority 1

  3. (correct answer)

    process A will alternate between priority 2 and priority 1, while process B will alternate between priority 1 and priority 0

  4. process A will be at priority 3 whenever process B is waiting for its I/O, and at priority 2 when process B is ready or running

  5. process A will alternate between priority 3 and priority 2, but process B might never run

  6. (correct answer)

    none of the above

Probably shouldn't have said process A does exactly 5 ms, so I can also see the argument that it would stay at priority 2 (assuming that using exactly its time quantum meets neither the "moved up to a higher priority level" criterion nor the "moved down to a lower priority level" criterion), except that it's not clear how it would have gotten there in the first place.

Question 2 (1 points): Suppose one wanted to use a lottery scheduler to approximate an SJF (shortest job first) scheduler. Which of the following would be the best approach?

  1. give more tickets to processes which have NOT run recently (than to processes which have run recently)

  2. give more tickets to processes which have run recently (than to processes which have NOT run recently)

  3. (correct answer)

    give more tickets to processes which used less CPU before starting to wait for I/O recently (than to processes which used more CPU)

  4. give more tickets to processes which used more CPU before starting to wait for I/O recently (than to processes which used less CPU)

  5. give more tickets to processes which are using more memory (than to processes using less memory)

  6. give more tickets to processes which are using less memory (than to processes using more memory)

Question 3 (1 points): Consider the following C program that uses the POSIX API:

void *thread_function(void *argument) {
    const char *ptr = (const char*) argument;
    write(STDOUT_FILENO, ptr, strlen(ptr));
    return NULL;
}

int main(int argc, const char **argv) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_function, (void*) "A");
    pthread_create(&b, NULL, thread_function, (void*) "B");
    pthread_join(a, NULL);
    write(STDOUT_FILENO, "C", 1);
    pthread_join(b, NULL);
    return 0;
}

Assume none of the calls to pthread_create, pthread_join, or write fail (and ignore minor syntax errors, missing #includes, etc.). Which of the following are possible outputs of this code?
Select all that apply

  1. (correct answer)

    ABC

  2. (correct answer)

    ACB

  3. (correct answer)

    BAC

  4. BCA

  5. AC

  6. BCB

Consider the following C program that uses the POSIX API:

pthread_mutex_t lock;
int global = 100;

struct ThreadData { int *pointer; };
struct ThreadData a_data, b_data;

void *thread_function(void *argument) {
    struct ThreadData *data = (struct ThreadData*) argument;
    int *pointer = data->pointer;
    pthread_mutex_lock(&lock);
    *pointer = global + *pointer;  /* <--- here */
    global += 10;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main() {
    pthread_t a, b;
    int value1 = 1, value2 = 2;
    a_data.pointer = &value1;
    b_data.pointer = &value2;
    pthread_mutex_init(&lock, NULL);
    pthread_create(&a, NULL, thread_function, (void*) &a_data);
    pthread_create(&b, NULL, thread_function, (void*) &b_data);
    pthread_join(a, NULL);
    pthread_mutex_lock(&lock);
    printf("%d %d %d", global, value1, value2);
    pthread_mutex_unlock(&lock);
    pthread_join(b, NULL);
    return 0;
}

Assume none of the calls to pthread_create, pthread_join, pthread_mutex_lock, or printf fail (and ignore minor syntax errors, missing #includes, etc.)

Question 4 (1 points): (see above) Which of the following are possible outputs of this code?
Select all that apply

  1. (correct answer)

    110 101 2

  2. (correct answer)

    120 101 112

  3. (correct answer)

    120 111 102

  4. 120 101 2

  5. 110 101 112

Question 5 (1 points): (see above) The line marked with a <--- here modifies:

  1. a value stored on the heap

  2. a value stored on the stack of thread a or thread b

  3. (correct answer)

    a value stored on the stack of the thread that runs main

  4. a value stored in the global data region

Quiz 05

Question 1 (1 points): The implementation of a mutex lock we discussed in lecture involves using a spinlock and a queue of waiting threads. Which of the following are true about the spinlock this implementation uses?
Select all that apply

  1. the spinlock is kept locked whenever the mutex lock is locked

  2. (correct answer)

    unlocking the mutex requires locking the spinlock

  3. (correct answer)

    unlocking the mutex requires unlocking the spinlock

  4. the spinlock is only locked if more than one thread is trying to lock the mutex lock at the same time

Question 2 (1 points): False sharing occurs when two or more threads access logically independent values simultaneously, but the hardware ends up needing to synchronize those accesses as if they were dependent. Most commonly, the threads are accessing values that are stored in the same cache block.

Which of the following are strategies to prevent false sharing?
Select all that apply

  1. use atomic read/modify/write instructions when accessing values from any thread

  2. (correct answer)

    when splitting up an array between multiple threads, assign each thread a contiguous portion of the array

  3. when making an array of values, each of which will be accessed by a different thread (for example, to store intermediate values when computing the results of a computation), make sure the array is stored in as little space as possible

  4. always use detached threads instead of joining threads

Consider the modified/shared/invalid cache coherency protocol we discussed in lecture running on a system with three processors A, B, and C, each of which have their own caches with 64 byte cache blocks. (With 64 byte cache blocks, addresses 0x0 through 0x3F are in the same cache block, as are 0x40 through 0x7F, as are 0x80 through 0xBF, and so on.)

Assume that when it is necessary to write a modified cache block to memory, the processor writes the value but keeps its cache block in the shared state when possible. Also assume that values are not evicted from caches due to lack of capacity.

The caches are initially empty, and the processors perform the following accesses to the following addresses in this order:

Question 3 (1 points): (see above) What is the final state of address 0x100 in processor A's cache?

  1. modified

  2. (correct answer)

    shared

  3. invalid

Question 4 (1 points): (see above) What is the final state of address 0x100 in processor B's cache?

  1. modified

  2. shared

  3. (correct answer)

    invalid

A multithreaded batch processing program sometimes produces a very large number of messages. In an attempt to limit the number of messages the user sees, the program transfers all the messages to a dedicated "message displayer" thread, which displays at most one message per second. When a data processing thread wants to display a message, it waits until its message is displayed before proceeding with its processing.

To facilitate this, the program uses a monitor with relevant variables and data structures declared as follows:

struct MessageInfo {
    const char *message;
    pthread_cond_t cv;
    MessageInfo *next;
    bool displayed;
};

pthread_mutex_t lock;
pthread_cond_t new_message_cv;
MessageInfo *head; MessageInfo *tail;

And once the "message displayer" thread decides to show a message, it uses code like the following (which is called with the mutex locked):

void ShowNextMessage() {
    MessageInfo *info = head;
    head = info->next;
    std::cout << info->message << std::endl;
    info->displayed = true;
    pthread_cond_signal(&info->cv);
}

(Please ignore minor syntax errors, etc. in the code above and assume all values are initialized appropriately.)

Question 5 (1 points): (see above) Each data processing thread that wants to display a message allocates and initializes a MessageInfo struct called info (initializing all the fields within it), then acquires the mutex lock, then adds it to the linked list (using the head and tail pointers), then signals the new_message_cv. Following this, it runs code like the following:

_______ ( ______________________________ ) {

    pthread_cond_wait(&info->cv, &lock);

}

What should go in the blanks above to make this code correct?

  1. if (!info->displayed)

  2. if (info->displayed)

  3. (correct answer)

    while (!info->displayed)

  4. (correct answer)

    while (info->displayed)

  5. while (pthread_mutex_unlock(&lock) == 0)

  6. while (pthread_mutex_unlock(&lock) != 0)

  7. if (pthread_mutex_unlock(&lock) == 0)

  8. if (pthread_mutex_unlock(&lock) != 0)

  9. while (info->cv == true)

  10. while (info->cv == false)

  11. while (info == tail)

  12. while (head == tail)

Question 6 (1 points): (see above) The following is an incorrect implementation of the part of the "message displayer" thread that waits for messages and calls ShowNextMessage() as new messages arrive.

pthread_mutex_lock(&lock);
while (head == NULL) {
    pthread_cond_wait(&new_message_cv, &lock);
    ShowNextMessage();
}
pthread_mutex_unlock(&lock);

Which of the following are types of incorrect behavior that could happen from this code?
Select all that apply

  1. (correct answer)

    it may exit even though messages are waiting

  2. it may call ShowNextMessage() without the mutex lock lock being locked, sometimes causing ShowNextMessage() to crash

  3. it may keep the mutex lock lock locked indefinitely (even if it doesn't crash), preventing data processing threads from adding new messages

Quiz 06

A multithreaded batch processing program sometimes produces a very large number of messages. In an attempt to limit the number of messages the user sees, the program attempts to allow at most one message to be sent per second. When a thread generates a message and it cannot be shown to the user immediately, it waits until it can display the message before proceeding.

The program implements this using two counting semaphores. One semaphore, called the "prepare-to-send" semaphore, is used to indicate to a dedicated timekeeping thread that a data processing thread is ready to send a message to a user. When this timekeeping thread processes that indication, it indicates to the data processing thread that it can show the message, then waits one second, then starts checking for another data processing thread that wishes to send a message. The other semaphore, called the "ready-to-send" semaphore, is how the timekeeping thread indicates to a data processing thread that it can show a message.

Question 1 (1 points): (see above) When a data processing thread is using the "prepare-to-send" semaphore to ask the timekeeping thread to send a message, what semaphore operation should it use?

  1. down

  2. (correct answer)

    up

Question 2 (1 points): (see above) When the timekeeping thread is using the "ready-to-send" semaphore to indicate to a data processing thread that it can send a message now, what semaphore operation should it use?

  1. down

  2. (correct answer)

    up

Question 3 (1 points): (see above) What should the initial value of the "ready-to-send" semaphore be?

  1. -1

  2. (correct answer)

    0

  3. 1

  4. equal to the total number of data processing threads

  5. equal to the maximum number of messages that can be produced by the program

An online encyclopedia keeps track of articles, which are grouped into categories. To do this, it keeps data structures in memory representing both the articles and categories:

struct Article {
    pthread_mutex_t lock;
    Category* category;
    ...
};

struct Category {
    pthread_mutex_t lock;
    vector<Article*> articles;
    ...
};

The online encyclopedia implementation is multithreaded and sometimes experiences deadlock when simultaneously performing some combinations of the following operations on Articles and Categories:

Question 4 (1 points): (see above) Which of the following combinations of operations, if performed in parallel, could cause deadlock? D, E, and F represent categories and X, Y, and Z represent articles.
Select all that apply

  1. (correct answer)

    merge(D, E) and merge(E, D)

  2. merge(D, E) and merge(F, E)

  3. (accepted any answer)

    if X is originally in category D, move(X, E) and merge(E, D)

  4. (accepted any answer)(correct answer)

    if X is originally in category D, move(X, E) and merge(D, E)

we meant to say "from category A into B" in the question, so we were more generous with deadlock scenarios.

Question 5 (1 points): (see above) Which of the following changes would reduce the ways deadlocks could occur?
Select all that apply

  1. (accepted any answer)(correct answer)

    modifying merge acquire the locks for the categories in order by their addresses

  2. (correct answer)

    modifying move to check if the lock on the original category is available before acquiring it, and, if not, release the lock on article X, then reacquire the lock on article X, then check the lock on the category again in a loop

  3. modifying move to acquire the lock on both the original category and category B before modifying article X or category A or B's vector of articles

  4. (correct answer)

    having both move and merge acquire a single global lock before they do anything else and release that lock before they return

"modifying merge acquire" was meant to be "modifying merge to acquire"; dropped option to account for lack of clarity.

Quiz 07

Question 1 (1 points): When writing a multithreaded program, a common alternative to having the threads communicate in shared memory regions (synchronized via monitors, semaphores, etc.) is to use message passing. What are some advantages of using message passing?
Select all that apply

  1. less copying of data is required than using a shared-memory-based approach

  2. (correct answer)

    it is easier to run a multithreaded program in parallel across multiple physically separate machines

  3. (correct answer)

    the functions used to send data (in "messages") between threads can also handle most synchronization

32-bit x86 as configured by xv6 uses a two-level page table with 32-bit virtual addresses, 4096 byte pages, and 4-byte page table entries stored in 4096 byte page tables (for each level).

For the purposes of these questions, a first-level page table is one pointed to directly by the page table base pointer (CR3 on x86), which on x86 is also called a page directory. A second-level page table is one pointed to by a page table entry in the first-level page table.

Question 2 (1 points): (see above) If the page table base pointer is set to byte address 0x10000, then the first-level page table entry for virtual address 0x11222333 is stored at what physical byte address?

  1. 0x10044

  2. (correct answer)

    0x10110

  3. 0x10222

  4. 0x10333

  5. 0x10888

  6. 0x10CCC

  7. 0x21222

  8. 0x54888

  9. none of the above

Question 3 (1 points): (see above) If the second-level page table corresponding to virtual address 0x44555666 is located at physical byte address 0x80000, then the second-level page table entry for that virtual address is stored at what physical byte address?

  1. 0x80110

  2. 0x80155

  3. 0x80440

  4. (correct answer)

    0x80554

  5. 0x80666

  6. 0x81998

  7. 0xC4555

  8. 0x191554

  9. none of the above

Quiz 08

Consider the following POSIX C code that uses mmap:

int fd1 = open("test1.dat", O_RDWR);
char *ptr1 = mmap(0, 16384, PROT_READ | PROT_WRITE, MAP_SHARED, fd1, 0);
int fd2 = open("test2.dat", O_RDWR);
char *ptr2 = mmap(0, 16384, PROT_READ | PROT_WRITE, MAP_SHARED, fd2, 0);

for (int i = 1024; i < 6 * 1024; ++i) {
    ptr2[i] = ptr2[i] + 10;
    ptr1[i] = ptr2[i];
}

Assume this runs on a system where:

Question 1 (0 points): (see above) [This question will be dropped, since I accidentally made an adjustment that removed MAP_PRIVATE from the code above.] How many bytes must have been copied to run this program as part of the copy-on-write used to implement MAP_PRIVATE?

  1. 1024

  2. 5120

  3. 4096

  4. (correct answer)

    8192

  5. 16384

  6. none of the above

Question 2 (1 points): (see above) How many pages will (eventually) be written to disk as a result of the modifications this program makes via mmap?

  1. 1

  2. (correct answer)

    2

  3. 3

  4. (correct answer)

    4

  5. 5

  6. 6

  7. 7

  8. 8

  9. none of the above

I posted a version of this quiz earlier that had different code; this is why an answer for 2 is also accepted.

Question 3 (1 points): A mapping from physical pages to page table entries referencing that physical page could be helpful to _____.
Select all that apply

  1. identify the invalid page table entry that triggered a page fault

  2. (correct answer)

    determine whether there are multiple references to a page that might make copy-on-write necessary

  3. (correct answer)

    check whether the physical page should be considered accessed or dirty

  4. (correct answer)

    move data from one physical page to another without disrupting processes using that memory

For the following questions, consider an application running on a system with virtual memory backed by 4 physical pages. Exactly one application is running; initially, all the physical pages are empty, and these physical pages are only used for this application's virtual memory.

The application accesses its virtual memory pages (identified using letters like A, B, C, D, E, F, G, ...) in the following sequence:

The questions below ask about page replacements.

For these questions, count replacing the contents of an empty OR non-empty physical page with data for a particular virtual page as a page replacement.

Question 4 (1 points): (see above) What is the minimum number of page replacements required to complete the above access pattern? (See note above about what counts as a page replacement.) Write your answer as an integer.

Answer: (accepted answers: 6)

Question 5 (1 points): (see above) To achieve the minimum number of page replacements described in the previous question, a physical page containing the contents of _____ must be replaced with the contents of virtual page E.
Select all that apply

  1. virtual page A

  2. virtual page B

  3. virtual page C

  4. (correct answer)

    virtual page D

  5. virtual page E

Question 6 (1 points): (see above) If an LRU (least recently used) replacement policy is used, how many page replacements are used to complete the above access pattern? (See note above about what counts as a page replacement.) Write your answer as an integer.

Answer: (accepted answers: 10)

Quiz 09

Question 1 (1 points): A second-chance replacement policy is an approximation of LRU (least recently used). For which of the following decisions is a second-chance replacement policy most likely to choose a page which has been accessed relatively recently?

  1. (correct answer)

    the choice of page to replace for the first replacement after the application has run for a long time without requiring any page replacement

  2. (correct answer)

    the choice of page to replace in the middle of a long sequence of page replacements closely spaced in time

  3. the choice of page to replace when there are still empty physical pages
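
The first keyed answer can be seen in a sketch of the victim-selection step (the clock formulation of second chance; frame and page names here are illustrative). After a long quiet period every accessed bit is set, so the hand's first sweep clears all of them and the victim ends up being whichever page the hand starts at, regardless of recency:

```python
def clock_replace(frames, referenced, hand=0):
    """Pick a victim with the second-chance (clock) algorithm.
    frames: list of resident page names, in clock order.
    referenced: dict mapping page name -> accessed bit.
    Clears accessed bits as the hand sweeps; returns the victim page."""
    while True:
        page = frames[hand]
        if referenced[page]:
            referenced[page] = False        # give the page a second chance
            hand = (hand + 1) % len(frames)
        else:
            return page                      # no recent access: evict

# All bits set (long time with no replacements): the first page at the
# hand is chosen even though it may have been used recently.
victim = clock_replace(["A", "B", "C"], {"A": True, "B": True, "C": True})
```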

When operating systems perform readahead for their page cache, they try to take advantage of sequential access patterns by detecting when a program appears to be accessing data sequentially. Then, they cache the data that the program should access next (based on the prior sequential accesses) before the program first accesses it.

Question 2 (1 points): (see above) After reading ahead pages X to X+K of a large file (where K is a relatively small portion of the available memory), an operating system wants to detect whether to read ahead more pages from the file. To do this, which of the following is likely to minimize the number of page cache misses?

  1. (correct answer)

    read pages X+K+1 to X+2K when and if the process accesses page X+K+1

  2. read the rest of the file without waiting to see what the process does

  3. detect the first access to page X by setting it invalid to trigger a page fault, and replace pages X to X+K of the file with pages X+K+1 to X+2K when this page fault occurs
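
The first option's logic can be sketched with illustrative page numbers: the window X to X+K is cached, and the first access past it (page X+K+1) both misses and triggers readahead of the next K pages:

```python
def readahead_decision(window_start, K, accessed_page):
    """If the process touches the first page past the current readahead
    window (window_start .. window_start+K), return the next K-page range
    to fetch (X+K+1 .. X+2K in the question's terms); otherwise None."""
    first_past = window_start + K + 1
    if accessed_page == first_past:
        return (first_past, window_start + 2 * K)
    return None

# Window covers pages 100..110 (K=10); touching 111 triggers 111..120.
print(readahead_decision(100, 10, 111))
```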

For non-file pages in its page cache (such as pages representing part of a process's stack or heap), the Linux kernel keeps a separate ordered active list and inactive list. Pages are initially added to the top of the active list. Pages are removed from the bottom of the active list and added to the top of the inactive list in order to keep the inactive list a certain size. If a page is accessed while on the inactive list, it is moved to the active list. Otherwise, pages are taken from the bottom of the inactive list to perform page replacements.

Suppose an operating system implements a page cache using this replacement policy where:

Question 3 (1 points): (see above) Suppose the program accesses its virtual pages in the following pattern that repeats indefinitely: (each letter identifies a virtual page)

  • A, B, C, D, E, A, B, C, D, E, A, B, C, D, E, ...

How many of these accesses will require a page replacement?

  1. the first five, then all future accesses will be hits (not require replacement)

  2. about 20%

  3. about 40%

  4. about 60%

  5. about 80%

  6. (correct answer)

    about 100%

: for this access pattern and cache size, the active/inactive list policy exactly emulates LRU

Question 4 (1 points): (see above) Suppose the program accesses its virtual pages in the following pattern: (each letter identifies a virtual page)

  • A, B, C, D, E, D, C, B, A

The second access to A will replace the page containing ____.

  1. B

  2. C

  3. (correct answer)

    D

  4. E

  5. none of the above; it will not require a page replacement
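
The list sizes in the setup above are given separately, but one configuration consistent with both keyed answers is an active list of one page and an inactive list of two (three physical pages total). Under that assumption, here is a sketch of the policy described above:

```python
from collections import deque

def simulate_two_list(accesses, active_size, inactive_size):
    """Simulate the active/inactive two-list policy described above.
    Returns (misses, evictions): evictions records the victim page for
    each replacement, or None when an empty physical page was filled."""
    active = deque()    # left end = top of list
    inactive = deque()  # left end = top of list
    capacity = active_size + inactive_size
    frames_used = 0
    misses = 0
    evictions = []
    for page in accesses:
        if page in active:
            continue                         # hit; active accesses aren't tracked
        if page in inactive:
            inactive.remove(page)            # accessed on inactive list:
            active.appendleft(page)          # promote to top of active list
        else:
            misses += 1
            if frames_used < capacity:
                evictions.append(None)       # fill an empty physical page
                frames_used += 1
            else:
                evictions.append(inactive.pop())  # bottom of inactive list
            active.appendleft(page)
        while len(active) > active_size:     # demote overflow: bottom of
            inactive.appendleft(active.pop())  # active to top of inactive
    return misses, evictions

misses, evictions = simulate_two_list("ABCDEDCBA", 1, 2)
```

With these assumed sizes, the repeating A..E pattern of Question 3 misses on every access (about 100%), and in Question 4's pattern the second access to A replaces the page containing D, matching the keyed answers.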

Question 5 (1 points): Suppose a mouse controller is connected to the processor via the memory bus. Whenever a mouse button is pressed or released or the mouse is moved, the mouse controller adds a record of that event to a small buffer it maintains and signals the processor to trigger an interrupt. Suppose a process on this system can make a system call that waits for a mouse click to occur. (That is, the system call only cares about mouse clicks that occur after the system call is started.) To implement this system call, what would be the best choice for the device driver?

  1. the device driver should read the device controller's buffer of events in a loop when it is called from the system call handler until it detects a mouse click in it

  2. (correct answer)

    the device driver should mark the process as not runnable from the system call handler and later, from its interrupt handler, mark the process as runnable again

  3. the device driver should write the address of the system call's return value into the mouse controller's buffer, then context switch to another process
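
The keyed answer's block-then-wake pattern can be sketched with a thread event standing in for the scheduler's runnable flag. This is an illustrative sketch, not xv6's actual sleep()/wakeup() API:

```python
import threading

# The Event stands in for marking the process not-runnable/runnable.
click_event = threading.Event()

def sys_wait_for_click():
    """System call handler: block until a click that occurs after the
    call starts."""
    click_event.clear()      # discard any click from before the call
    click_event.wait()       # process stays "not runnable" until a click

def mouse_interrupt_handler(event_type):
    """Interrupt handler: on a click, mark the waiting process runnable."""
    if event_type == "click":
        click_event.set()
```

Polling the controller's buffer in a loop (the first option) would also work, but it burns CPU time the scheduler could give to other processes while waiting.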

Question 6 (1 points): Which of the following are advantages of direct memory access (DMA) over programmed I/O (where, in both cases, the device controller is attached to the memory bus)?
Select all that apply

  1. (correct answer)

    Usually DMA allows the processor to do other work while data is being copied to a device controller, but programmed I/O usually does not

  2. Usually DMA does not require an interrupt handler to be implemented, but programmed I/O usually does

  3. Usually DMA does not require memory addresses to be allocated for the device's control registers, but programmed I/O usually does

Quiz 10

Question 1 (1 points): When writing a 10000 cluster file to a FAT-based filesystem, which strategy is likely to minimize seek times for reading the file from a hard disk assuming the operating system caches the entire file allocation table in memory but other data from the disk is typically not cached?

  1. (correct answer)

    storing the file in a set of 10000 contiguous free clusters

  2. storing the file in clusters which are adjacent to important directory entries

  3. storing the file in the highest-numbered free clusters

  4. storing the file in the lowest-numbered free clusters

Question 2 (1 points): Suppose an operating system wanted to make a filesystem that uses flash memory like that contained in an SSD, but without the benefit of block remapping. In this design, the operating system is responsible for determining which flash pages (the flash hardware's equivalent of sectors) data is stored in and for triggering the erasure of an erasure block. Which of the following techniques would most help the filesystem get good reliability out of the flash?

  1. (correct answer)

    relocating a file's data blocks whenever they are modified instead of overwriting their prior location

  2. when possible, allocating all the data blocks for files in a directory within the same erasure block

  3. storing all directory entries in one location on the flash rather than spreading the directory entries throughout the entire flash memory

Question 3 (1 points): In order for a filesystem to get the best performance out of a hard disk drive, it is useful to ________.
Select all that apply

  1. (correct answer)

    place data blocks of related files physically close to each other on the disk

  2. place directory entries in a separate area of the disk from the data contained in the files those directory entries are for

  3. prefer linked-list-like structures over array-like structures

Suppose one has an inode-based filesystem where:

Question 4 (1 points): (see above) How many blocks must be allocated to store a 2000KB file on the filesystem? Ignore space required for inodes and directory entries but include any space outside of inodes required to store block pointers. (Write your answer as a base 10 number.)

Answer: (accepted answers: 1003)

: 1000 blocks of data; 10 pointers in the inode + 512 pointers in an indirect block (+1 block) pointed to by the inode + 1 double-indirect block (+1) pointing to 1 indirect block (+1) containing the 478 remaining pointers --- 1000 + 1 + 1 + 1 = 1003
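
The arithmetic in the note can be checked with a short script. The block size (2KB), pointer size (4 bytes), and direct-pointer count (10) are taken from the calculation above, since the full setup text appears separately:

```python
import math

def blocks_for_file(size_kb, block_kb=2, ptr_bytes=4, direct=10):
    """Total allocated blocks (data blocks plus pointer blocks) for a
    file, assuming an inode with `direct` direct pointers, one indirect
    pointer, and one double-indirect pointer."""
    data = math.ceil(size_kb / block_kb)
    per_indirect = block_kb * 1024 // ptr_bytes  # pointers per block (512 here)
    total = data
    remaining = data - direct                    # blocks past the direct pointers
    if remaining > 0:
        total += 1                               # the singly indirect block
        remaining -= per_indirect
    if remaining > 0:
        # the double-indirect block, plus one indirect block per remaining chunk
        total += 1 + math.ceil(remaining / per_indirect)
    return total

print(blocks_for_file(2000))
```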

Question 5 (1 points): (see above) How many blocks must be allocated to store a directory of 2 1KB files on the filesystem? Ignore space required for inodes and any space required for the directory's directory entry in its parent directory, but include any space outside of inodes required to store block pointers and space required for directory entries for the 1KB files. (Write your answer as a base-10 number.)

Answer: (accepted answers: 3)

: 1 block for directory; 1 block for each of the two files

Question 6 (1 points): (see above) How many inodes must be allocated to store a directory of 1000 2KB files on the filesystem? Include any inodes required for the directory itself as well as the files it contains.

Answer: (accepted answers: 1001)

: 1 inode for directory, 1 for each of the 1000 files

Question 7 (1 points): Which of the following schemes would be most space-efficient at storing the locations of the data for a 20MB file that is divided up into 4 contiguous 5MB segments? Assume 1K blocks for any inode-based filesystems, and 1K clusters for FAT filesystems.

  1. (correct answer)

    an array of extents in an inode-based filesystem

  2. direct and indirect and double-indirect block pointers in an inode-based filesystem

  3. pointers to block fragments in an inode-based filesystem

  4. a file allocation table like in the FAT filesystem

Quiz 11

Consider an inode-based filesystem. For each of these questions, consider the process of moving a subdirectory X from one parent directory A to another parent directory B, assuming that the filesystem needs to allocate an additional data block for the directory entry in directory B.

For these questions, ignore any changes to modification and creation times the filesystem may store in inodes and assume no additional data blocks need to be allocated to store block pointers.

Question 1 (1 points): (see above) Suppose this filesystem attempts to keep data consistent and avoid data loss by using careful ordering of updates. As a result, for the entire move, the filesystem needs to perform the following updates:

  • 1: updating the inode for directory B
  • 2: updating data blocks containing directory entries for directory A (to mark a directory entry as unused)
  • 3: writing a new data block for the new directory entry in directory B
  • 4: updating the free block map to allocate the new data block for directory B

What is an appropriate order for these operations to occur in that would best avoid data loss and inconsistency? For example, if you think 1, then 2, then 3, then 4 would be appropriate, write "1234".

Answer: (accepted answers: /(43|34)(12|21)/)

Question 2 (1 points): (see above) Suppose this filesystem uses journalling with redo logging to keep data consistent and avoid data loss. As a result, for the entire move, the filesystem needs to perform the following updates:

  • 1: adding a log entry for overwriting a directory entry in directory A's data blocks (to mark a directory entry as unused)
  • 2: overwriting the directory entry in directory A's data blocks
  • 3: adding a log entry for writing a new data block for the directory entry in directory B
  • 4: writing the new data block for directory B
  • 5: adding a log entry for the update to the inode of directory B
  • 6: updating the inode for directory B
  • 7: adding a log entry for the update to free block map to allocate the new data block for directory B
  • 8: updating the free block map for the new data block
  • 9: writing a log entry that indicates a transaction was committed

What is an appropriate order for these operations to occur in that would best avoid data loss and inconsistency? For example, if you think 1, then 2, then 3, then 4, then 5, then 9, then 7, then 8, then 9 again, write "123459789". (Multiple answers may be correct.)

Answer: (accepted answers: /[1357]{4}9[2468]{4}/)

: write log entries (any order), then write commit, then do other updates (any order)
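
The accepted pattern can be stated as an executable check: the four log entries (steps 1, 3, 5, 7, in any order) must all precede the commit record (step 9), which must precede the four in-place updates (steps 2, 4, 6, 8, in any order):

```python
import re

def valid_redo_order(order):
    """True if `order` is an acceptable write order for the journalled
    move: log entries first, then the commit record, then the in-place
    updates (matching the accepted-answers regex above)."""
    return re.fullmatch(r"[1357]{4}9[2468]{4}", order) is not None

print(valid_redo_order("135792468"))
```

Redo logging allows this freedom because, until the commit record is durable, recovery simply ignores the transaction; after it is durable, recovery can replay the logged updates in any order.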

Question 3 (1 points): Adding redo logging to a filesystem generally __________.
Select all that apply

  1. (correct answer)

    increases the amount that is written to a hard disk as a result of a particular filesystem operation

  2. (correct answer)

    requires less seeking to perform many modifications to a filesystem than when careful ordering of writes is used to prevent inconsistency

  3. provides redundancy that ensures data will not be lost in the event a single part of the disk is corrupted

Question 4 (1 points): In the POSIX sockets API, the connect() function __________.

  1. creates a new socket file descriptor

  2. takes an existing socket file descriptor and sets its local address

  3. (correct answer)

    takes an existing socket file descriptor and sets its remote address

  4. takes two existing socket file descriptors and makes it so data written to one can be read from the other

connect's primary purpose is setting the remote address, but it is correct that it sometimes sets the local address, too; so accepting both.
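
The keyed answer and the note can be illustrated with Python's sockets over loopback (the port is chosen by the OS; this is a sketch, not part of the quiz):

```python
import socket

def demo_connect():
    """connect() takes an existing socket descriptor and sets its remote
    address; as a side effect it also picks a local (ephemeral) address
    when bind() was never called."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))          # OS picks a free port
    server.listen(1)
    client = socket.socket()               # new descriptor, no addresses set
    assert client.getsockname()[1] == 0    # no local port yet
    client.connect(server.getsockname())   # sets the remote address
    peer = client.getpeername()            # remote address from connect()
    local = client.getsockname()           # local address set implicitly
    client.close()
    server.close()
    return peer, local

peer, local = demo_connect()
```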

Question 5 (1 points): Which of the following is generally true about the client/server based model of a distributed system?
Select all that apply

  1. (correct answer)

    clients may contact servers, even if the server does not do anything to cause this to happen

  2. servers may contact clients, even if the client does not do anything to cause this to happen

  3. (correct answer)

    servers can send data to clients

  4. (correct answer)

    clients can send data to servers

  5. if one client is temporarily inaccessible, the entire system is often unusable

Quiz 12

Question 1 (1 points): Which of the following are possible for an RPC system to implement over a typical network?
Select all that apply

  1. indicating to an RPC client whether or not an RPC call reached the server, even in the event of server or network failure

  2. (correct answer)

    passing a very large array through an RPC call

  3. (accepted any answer)(correct answer)

    indicating to an RPC client when there is a server or network failure that may have affected the RPC call

  4. (correct answer)

    allowing the RPC server to be written in a very different programming language than the RPC client

re "indicating when a server or network failure may have affected", I meant doing so sometimes, not necessarily all the time. Giving credit either way because of this ambiguity.

Question 2 (1 points): Given a C prototype like:

    int ExampleFunction(char *s)

a typical RPC system needs extra information to create useful server and client stubs that represent (for client stubs) or handle (for server stubs) this function. What would this information most likely include?
Select all that apply

  1. whether ExampleFunction is likely to take a long time to execute

  2. (correct answer)

    the size of the value pointed to by ExampleFunction's argument

  3. (accepted any answer)(correct answer)

    whether ExampleFunction modifies the pointer passed in as an argument

  4. the amount of stack space required to execute ExampleFunction

  5. (accepted any answer)

    whether ExampleFunction can be called with a pointer to a value on the stack

re "modifies the pointer" -- meant to say whether it modifies the value pointed to by the pointer; re: pointer to value on stack -- how the value is allocated might hypothetically be relevant
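
One reason the stub needs the pointed-to size: the pointer value itself is meaningless on the server, so the client stub must copy the pointed-to bytes into the request message. A sketch of that marshalling step, with an invented message format for illustration:

```python
import struct

def marshal_call(func_id, arg_bytes):
    """Client-stub side: pack a function id and the *contents* the char*
    points to (length-prefixed), never the pointer value itself."""
    return struct.pack("!II", func_id, len(arg_bytes)) + arg_bytes

def unmarshal_call(message):
    """Server-stub side: recover the function id and the argument bytes
    so the server can rebuild its own local pointer to them."""
    func_id, length = struct.unpack("!II", message[:8])
    return func_id, message[8:8 + length]

print(unmarshal_call(marshal_call(7, b"hello")))
```

Whether the function modifies the pointed-to value matters similarly: if it does, the server stub must marshal the (possibly changed) bytes back in the reply.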

Question 3 (1 points): Suppose in a system that uses two-phase commit, agree-to-commit messages are sent to a coordinator from each of its three workers. Immediately after this, the coordinator crashes and does not return to service for a very long time. Assuming maintaining consistency is the highest priority, which of the following are acceptable things for a worker to do in this scenario while the coordinator is still unavailable?
Select all that apply

  1. (correct answer)

    wait for the coordinator to return to service, even though this will take a very long time

  2. after determining that the coordinator is not running for a very long time, discard the transaction

  3. after determining that all other workers sent an agree to commit message to the coordinator, perform (and commit) the transaction locally
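
A sketch of why only waiting is keyed correct: once a worker has voted agree-to-commit, the coordinator may already have durably decided either way, so any unilateral action risks inconsistency. The function and its return strings are illustrative:

```python
def worker_next_step(sent_agree, decision):
    """What a two-phase-commit worker may safely do.
    decision is 'commit', 'abort', or None if none has been received."""
    if not sent_agree:
        return "abort unilaterally (allowed)"  # still free before voting
    if decision is not None:
        return decision                        # obey the coordinator
    # Uncertain: even if every worker voted to commit, the coordinator
    # may have decided to abort (e.g. after a timeout) before crashing,
    # so neither discarding nor committing locally is safe.
    return "wait for coordinator"

print(worker_next_step(True, None))
```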

Suppose two programs A and B running on separate machines are both accessing a file X on an NFSv3 server. They do the following operations in this order:

Assume no operations overlap in time. For example, when A and B close the file the first time, the close for B, and everything required as a result of the close, completes entirely before the close for A starts.

Question 4 (1 points): (see above) Suppose the machines that A and B are running on both do as much caching as is allowed by the NFSv3 protocol and its open-to-close consistency model. Assume that they never need to evict values from the cache, except when required by the consistency model. What values would A read from file X in the read operation marked with ***** above?

  1. (correct answer)

    12

  2. 14

  3. 32

  4. 34

Question 5 (1 points): (see above) Suppose the machines that A and B are running on both do as much caching as is allowed by the NFSv3 protocol and its open-to-close consistency model. Assume that they never need to evict values from the cache, except when required by the consistency model. What values would B read from file X in the read operation marked with +++++ above?

  1. (correct answer)

    12

  2. 14

  3. 32

  4. 34

Quiz 13

Question 1 (1 points): In a typical POSIX-style OS, during an open() call, the file's access control lists are ______.

  1. (correct answer)

    checked within the system call handler for open() in the kernel

  2. checked within the userspace wrapper function that contains the special instruction to perform the system call exception

  3. not checked until a read(), write(), or similar system call is issued

Question 2 (1 points): In a typical POSIX-like system, access control lists and file permissions are evaluated using ________.
Select all that apply

  1. the list of other files a process trying to access the file has open

  2. (correct answer)

    a user ID number (or numbers) stored in the process control block for the process trying to access the file

  3. a user name (or names) and password (or passwords) supplied to the kernel by a system call in the process

Question 3 (1 points): Consider a multiuser system with many users. Suppose on this system we want to implement a database of public contact information for each user. We want to give each user the ability to edit their own contact information, but not the ability to edit other users' contact information. Which of the following mechanisms would be suitable for doing this?
Select all that apply

  1. (correct answer)

    create a special user (only accessible to sysadmins) and a set-user-ID program owned by that user that reads or edits a file owned by and only accessible by that special user

  2. (accepted any answer)

    create separate files for each user and use access control lists to allow programs run by that user to read the file and allow any program to write the file

  3. create a special group for the contact information database, make a database file for all users' contact information writable by that group, and add this special group to each user's default groups

re: the option with access control list, the policy specified is reversed from what it should be, which was not intentional. Accepting either way because of that being unintentionally confusing. The option about a group for the contact information database doesn't work because it lets any user edit any other user's contact information.

Question 4 (1 points): In a hypervisor implemented using trap-and-emulate, suppose the guest OS is running and the exception corresponding to a system call occurs. The hypervisor should ______.

  1. change the program counter to the location of the guest OS's system call exception handler and run it in kernel mode on the hardware

  2. (correct answer)

    change the program counter to the location of the guest OS's system call exception handler, record that the guest OS thinks it is in kernel mode, and run the handler in user mode on the hardware

  3. check what the system call number is (e.g. stored in %eax if the guest OS is xv6) and perform different actions depending on what it is

  4. show an error, because a system call exception should not occur while the guest OS is running; it should only occur when the hypervisor itself needs to make a system call to the host OS

Question 5 (1 points): In a hypervisor implemented using trap-and-emulate, suppose the guest OS is running and the exception corresponding to a protection fault occurs. Depending on the details of the protection fault and state around it, the hypervisor might want to do which of the following?
Select all that apply

  1. (correct answer)

    run the protection fault handler in the guest OS

  2. (accepted any answer)(correct answer)

    perform a system call in the host OS based on the instruction that triggered the protection fault, then return to the following instruction

  3. perform a system call in the guest OS based on the instruction that triggered the protection fault, then return to the following instruction

For this question, virtual addresses refer to the addresses the guest OS accesses when it is running; physical addresses refer to the addresses produced by taking virtual addresses and translating them using the guest OS's page table; and machine addresses refer to the addresses on the underlying hardware used to implement the corresponding physical addresses.

Consider a hypervisor that uses shadow page tables to implement virtual memory.

Question 6 (0 points): (see above) [question dropped] Suppose a page fault occurs in the guest OS and:

  • the guest OS and host OS use 4KB pages
  • the guest OS and host OS use two-level page tables, where each level uses 10 bits from the virtual page number
  • the faulting address (what would be rcr2() in xv6) is 0x5043 which is in virtual page 5
  • the hypervisor uses machine addresses 0x10000 for physical address 0x0, 0x10001 for 0x1, etc.
  • the guest OS has a page table entry for this address which references physical page 0x50

When this page fault occurs, the hypervisor sometimes needs to modify or create a shadow page table entry. This entry can be found by reading the PTE at index _________ of the first level of the shadow page table, then, in the second-level table pointed to by that PTE, reading the PTE at index ______.

  1. 0x5 / 0x0

  2. 0x15 / 0x0

  3. 0x55 / 0x0

  4. 0x43 / 0x5

  5. 0x53 / 0x0

  6. 0x53 / 0x5

the correct answer was 0x0 / 0x5 and I miskeyed this. Note that the shadow page table is indexed using virtual page numbers.
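
The corrected answer can be checked directly: with 4KB pages, the low 12 bits of the address are the offset, and the two 10-bit level indices come from the virtual page number of the faulting address:

```python
addr = 0x5043                 # faulting virtual address from the question
vpn = addr >> 12              # 4KB pages -> 12 offset bits; vpn == 0x5
first_level = vpn >> 10       # upper 10 bits of the virtual page number
second_level = vpn & 0x3FF    # lower 10 bits of the virtual page number
print(hex(first_level), hex(second_level))
```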

Question 7 (1 points): (see above) The shadow page table will primarily be read by _______.

  1. the guest OS

  2. the host OS

  3. the hypervisor

  4. (correct answer)

    the hardware