This page does not represent the most current semester of this course; it is present merely as an archive.
This page contains quizzes given Fall 2014. For other semesters, see the main old quizzes page.
Each question presents one assembly operation and asks what it does. All questions assume the following initial configuration:
This quiz is open-book. You may use a calculator or computer if you wish. Do not consult with others, including others located on the Internet. (I made this a question so it will show up on the page; you don't need to answer it.)
After executing movl %ecx, (%esp), which of the following changes?
After executing popl %eax, what is the value in register eax?
After executing pushl %eax, what is the value in register esp?
After executing pushl %eax, what part of memory now contains 0x03020100?
Each question presents one assembly operation and asks what it does. All questions assume the following initial configuration:
This quiz is open-book. You may use a calculator or computer if you wish. Do not consult with others, including others located on the Internet. (I made this a question so it will show up on the page; you don't need to answer it.)
After executing movl (%esp), %ecx, which of the following changes?
After executing popl %eax, what is the value in register eax?
After executing popl %eax, what is the value in register esp?
After executing pushl %eax, what is in memory at byte 8?
Section 4.2.2 defines the HCL for bit-equality to be
bool eq = (a && b) || (!a && !b);
Section 4.2.3 defines the HCL for word-equality to be
bool Eq = (A == B);
Assume we have two-bit words, so A is made of bits a1 and a0 and B is made of bits b1 and b0. Which of the following defines the bit-level implementation of word-level equality (A == B)?
On page 358 we see the following HCL:
int Out4 = [
The text following this notes that we could rewrite
int Out4 = [
without changing its meaning in any way. Which of the following pieces of C code allows this same kind of flexibility in how we write the boolean expressions?
(Note: more than one of the following options is correct.)
Section 4.2.2 defines the HCL for bit-MUX to be
bool out = (s && a) || (!s && b);
Section 4.2.3 defines the HCL for word-MUX to be
int Out = [
Assume we have two-bit words, so A is made of bits a1 and a0 and B is made of bits b1 and b0. Which of the following defines the bit-level implementation of word-level MUX ([s: A; 1: B;])?
Assume S is a non-pipelined system and P is a pipelined system. Pipelining increases both throughput and latency. Which of the following cases exemplify these principles? Check all that apply (if any do).
What is meant by nonuniform partitioning, and why is it a problem?
Diminishing returns means that the more pipeline stages there are, the less benefit we get from adding more stages. Why is this?
Compare DRAM and SRAM. Check the true statements from the following:
Suppose a single address sent to the memory module retrieves 128 bits of data. That data is probably retrieved by
It is typical for the I/O Bridge to be connected to three buses: the system bus, the memory bus, and the I/O bus. It routes a signal between pairs of these buses; the figures on pages 569, 570, and 578 show examples of
What is an example of memory-to-I/O?
Compare DRAM and SRAM. Check the true statements from the following:
Reading a supercell takes two memory controller operations. In between the two the intermediate data is stored in
A disk never sends data to the CPU itself, instead sending it to memory. How does the CPU discover the data is there?
One reason that arrays can be faster than linked lists is that their elements are adjacent to one another in memory, whereas linked-list elements may be scattered across widely varying addresses. This is an example of
Suppose you have a program that often needs several kilobytes of temporary memory. Re-using the same temporary memory each time can be faster than using different temporary memory each time because of
Consider two programs: one runs in a small loop; the other invokes several dozen functions, each one different. Assuming the amount of work is similar, the loop will almost certainly run faster. At least some of that speed difference is due to
Cache is pronounced like
If we find the data we want in a cache, we call that a
If we fail to find data in a cache because we have never accessed the data before, we call that a
If we fail to find data in a cache even though we have accessed only a few bytes since we last accessed that same data, we call that a
If we fail to find data in a cache because we’ve read too much data since we last accessed that same data, we call that a
See also comments on quiz 10
Suppose a cache can store 64KB of data and is organized with 2-line sets. What is the largest amount of data that cache can serve without encountering a conflict miss?
Assume that cache F is fully associative and cache D is direct-mapped. Assume the two caches have the same address space and the same total data capacity.
Process A accesses 32 bytes of data scattered in a deterministic but random-looking pattern across addresses 0x00 through 0xff. Assume that cache F is fully associative (using a least-recently-used policy) with 16 lines of 4 bytes each, and cache D is direct-mapped with 32 lines of 4 bytes each. Assume we run A twice and measure the cache misses the second time.
Process A accesses 32 bytes of data in order, evenly spaced across addresses 0x00 through 0xff. Assume that cache F is fully associative (using a least-recently-used policy) with 16 lines of 4 bytes each, and cache D is direct-mapped with 32 lines of 4 bytes each. Assume we run A twice and measure the cache misses the second time.
Process A accesses 32 bytes of data scattered in a deterministic but random-looking pattern across addresses 0x00 through 0xff. Assume that cache F is fully associative (using a least-recently-used policy) with 32 lines of 4 bytes each, and cache D is direct-mapped with 32 lines of 4 bytes each. Assume we run A twice and measure the cache misses the second time.
Process A accesses 32 bytes of data in order, evenly spaced across addresses 0x00 through 0xff. Assume that cache F is fully associative (using a least-recently-used policy) with 32 lines of 4 bytes each, and cache D is direct-mapped with 32 lines of 4 bytes each. Assume we run A twice and measure the cache misses the second time.
Loop unrolling is usually used to remove what source of inefficiency?
Inline substitution (also called inlining) is usually used to reduce what source of inefficiency?
Using multiple accumulators is usually used to reduce what source of inefficiency?
Loop blocking is usually used to reduce what source of inefficiency?
Adding local variables is usually used to reduce what source of inefficiency?
Reassociation of operators is usually used to reduce what source of inefficiency?
Which of the following techniques depend on a pipelined processor’s ability to work on several instructions at once, and thus would not work in a non-pipelined processor? Check all that apply.
Section 5.2 of the textbook discusses CPE (cycles per element, also called cycles per execution or cycles per instruction in other sources). Their discussion suggests that if I have code with 20 CPE and run it on a problem where my algorithm executes on 100 elements, I should expect the runtime to be:
When discussing virtual memory, a single page (check all that apply)
From the perspective of a running application, virtual memory looks like
From the perspective of the operating system, virtual memory looks like
As used by our textbook, a cached virtual page is one that is
What is the difference between caching a page, swapping in a page, and paging in a page? Check all that apply.
A page table is
A translation lookaside buffer (TLB) is
For a multi-level page table, the first level page table(s) of process P
For a multi-level page table, the second level page table(s) of process P
Core i7 memory uses four virtual page numbers because its MMU has a four-level page table hierarchy. Consider translating a single virtual address into a physical address. Which of the following numbers of page table accesses could occur during that translation? If different states of the caches and so on could result in different numbers of page table lookups, check multiple answers.
When the CPU is interrupted, how does it know which device created the interruption?
What distinguishes a fault from a trap? Check all that apply.
The assembly instruction int x used to make system calls only accepts a 1-byte (256-value) argument x, but Linux uses it to support over 300 system calls. How does it do that?
How does knowing you have a pipelined processor change the code you write?