Task
- We recommend doing this lab on Linux (a lab machine or in a VM). If you do not, see the compatibility note below.
- Download the lab tarball here. Extract the tarball. (Last updated 8 April 2018 11:40)
- Build the lab with make and run the produced program ./sum. This will benchmark 6 different functions that are equivalent to the following C function:

      unsigned short sum_C(long size, unsigned short * a) {
          unsigned short sum = 0;
          for (int i = 0; i < size; ++i) {
              sum += a[i];
          }
          return sum;
      }
- Create a copy of sum_simple.s, which contains a commented assembly implementation of the above sum function, called sum_unrolled2.s. (See below for an explanation.) Modify this copy to rename the sum function to sum_unrolled2 and unroll the loop twice. (A C-level sketch of what unrolling twice means appears after this list.) You do not need to handle sizes that are not multiples of 16. Add the resulting sum_unrolled2 function to sum_benchmarks.c, recompile with make, then observe how much faster the unrolled version is by running ./sum.
- Repeat the previous step, but unroll 4 times (using the name sum_unrolled4).
- Create a copy of your sum_unrolled4.s called sum_multiple_accum.s. Rename the function in this copy to sum_multiple_accum and modify it to use multiple accumulators. (A C-level sketch of the multiple-accumulator idea also appears after this list.) Make sure you obey the calling convention when choosing where to store the additional accumulators. Add this to sum_benchmarks.c and observe how much faster it is than the unrolled version.
- In sum_benchmarks.c, create a copy of sum_C called sum_multiple_accum_C that uses multiple accumulators like in your assembly solution. Compare its performance to the assembly version.
- In a text file called times.txt, report the performance, on the largest size tested, of the naive assembly version and of each of the versions you created. If possible and you have time, we suggest testing on multiple machines (e.g. your laptop and a lab machine).
- Run make looplab-submit.tar to create an archive of all your .s files and the text file. Submit this file to archimedes. If you are working remotely on a lab machine, our guide to using SSH and SCP or other file transfer tools may be helpful.
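For reference, here is a minimal C-level sketch of what "unrolling the loop twice" means. It is illustration only: the function name sum_unrolled2_C is made up for the sketch, and your actual edits in the steps above are to the copied assembly file, not to C.

    /* Illustrative sketch: sum_C unrolled by a factor of 2. Assumes size is a
     * multiple of the unroll factor (the lab only requires handling sizes that
     * are multiples of 16). Note that i advances by 2 once per iteration; it is
     * a common mistake to also keep the original per-element increment. */
    unsigned short sum_unrolled2_C(long size, unsigned short *a) {
        unsigned short sum = 0;
        for (long i = 0; i < size; i += 2) {
            sum += a[i];
            sum += a[i + 1];
        }
        return sum;
    }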
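The multiple-accumulator steps above can be sketched in C as well; this is roughly the shape sum_multiple_accum_C could take, though the unroll factor and number of accumulators shown here are arbitrary choices, not requirements.

    /* Illustrative sketch: the additions alternate between two independent
     * accumulators, so they no longer form one long serial dependency chain.
     * Again assumes size is a multiple of the unroll factor. */
    unsigned short sum_multiple_accum_C(long size, unsigned short *a) {
        unsigned short sum0 = 0, sum1 = 0;
        for (long i = 0; i < size; i += 4) {
            sum0 += a[i];
            sum1 += a[i + 1];
            sum0 += a[i + 2];
            sum1 += a[i + 3];
        }
        return sum0 + sum1;
    }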
Files in the tarball
- sum_benchmarks.c — file containing the list of versions of the sum function to time.
- sum_simple.s — a simple assembly implementation of the sum code.
- sum_gcc7_O3.s, sum_clang5_O.s — versions of the above C code compiled with various compilers and various optimization flags
- sum_main.c — main function that times several versions of the above C code.
- timing.c, timing.h — internal code used for timing
- sum.h — various definitions used by our timing and sum code
- Makefile — instructions to allow make to build the testing binary
The supplied versions of sum
The 6 versions we have supplied are:
- sum_C — This is the above code compiled on your machine. If you didn’t change the Makefile, on the lab machines this will use GCC version 4.9 with the options -O2 -msse4.2. On your machine, it will use whatever compiler gcc is, with -O2 -msse4.2.
- sum_simple — This is the assembly code in sum_simple.s, a simple assembly implementation of the sum function.
- sum_clang5_O — This is the above C code compiled with Clang version 5.0.0 with the options -O -msse4.2 (with the function renamed). The assembly code is in sum_clang5_O.s.
- sum_gcc7_O3 — This is the above C code compiled with GCC version 7.2 with the options -O3 -msse4.2 (with the function renamed). The assembly code is in sum_gcc7_O3.s.
Compatibility note
OS X requires that function names have an additional leading underscore in assembly, so the supplied assembly files will not work on OS X. The easiest thing to do is to use Linux for the lab (either via SSH or via a VM). Alternatively, you can modify the assembly files to add an _ before each function name (e.g. changing sum_simple: to _sum_simple: and .global sum_simple to .global _sum_simple).
Dealing with large output
You can redirect the output of ./sum to a file using something like
./sum > output.txt
Then open output.txt in a text editor.
No performance improvement?
Make sure that your loop unrolling implementation does not increment the index i more than needed.
It is possible that, due to the simple nature of addition, you will hit the latency bound rather quickly:
- On labunix01 through labunix03 (which use the Intel “Nehalem” microarchitecture), we expect the best performance to be 1 cycle/element. (The processor on labunix01 through labunix03 can perform only one load per cycle, and this is what ultimately limits performance.)
- On a more recent Intel processor (Sandy Bridge or later, but not Atom), we expect the best performance without multiple accumulators to be about 1 cycle/element, and with multiple accumulators around .5 cycles/element. (More recent Intel processors can perform two loads per cycle, which is what ultimately limits performance.)
- We have not tested extensively, but we believe most AMD processors that are not very old can perform better than 1 cycle/element with multiple accumulators.
Appendix: Timing
Cycle counters
The timing code we have supplied uses the rdtsc (ReaD Time Stamp Counter) instruction to measure the performance
of the function. Historically, this accessed a counter of the number of processor clock cycles. On current generation
processors, where different processor cores have different clock rates and clock rates vary to save power,
that is no longer how rdtsc works. On modern systems, rdtsc reads a counter that counts at a constant rate
regardless of the actual clock speed of each core. This means that the cycle counter
reliably measures “wall clock” time rather than the actual number of core clock cycles taken.
Since clock rates vary on modern processors, measurements of wall-clock time do not have an obvious correlation to the number of clock cycles. Processor features like Intel Turbo Boost or AMD Turbo Core (which might generally be called “dynamic overclocking”) are a particular problem. With these features, processor cores briefly operate at faster than the normal maximum clock rate. This means that microbenchmarks like ours may make the processor appear faster than it would be under normal operation, e.g., if we needed to compute sums repeatedly over a period of time. The cycle counter generally counts clock cycles at the “normal” sustained clock rate.
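For reference, here is a minimal sketch of how a measurement with rdtsc can be taken in C, using the __rdtsc() intrinsic available with GCC and Clang. The helper name cycles_for_sum is made up for the sketch; the lab's timing.c may read the counter differently (for example with inline assembly or extra serialization).

    #include <stdint.h>
    #include <x86intrin.h>  /* __rdtsc() on GCC/Clang for x86 */

    /* Illustrative only: time one call of a sum function in "cycles" of the
     * constant-rate time stamp counter described above. */
    static uint64_t cycles_for_sum(unsigned short (*fn)(long, unsigned short *),
                                   long size, unsigned short *a) {
        uint64_t start = __rdtsc();
        fn(size, a);
        uint64_t end = __rdtsc();
        return end - start;
    }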
Taking minimums
The timing function tries to give the approximate minimum time, ignoring temporary effects like arrays being moved into cache or other things running on the system. To do this, it runs the function being measured until:
- It has run for at least 250 million cycles; and
- Either:
- The 10 shortest times are within .5% of each other; or
- 200 attempts have been made.
It then returns the 5th shortest time (ordinarily within .5% of the shortest time).
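A minimal C sketch of this strategy follows, assuming a hypothetical measure_once() helper that performs one timed run; the lab's timing.c is the authoritative implementation and differs in detail (in particular, this sketch assumes 200 attempts are enough to exceed 250 million cycles).

    #include <stdint.h>
    #include <stdlib.h>

    extern uint64_t measure_once(void);  /* hypothetical: one timed run */

    static int cmp_u64(const void *a, const void *b) {
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return (x > y) - (x < y);
    }

    /* Illustrative only: repeat until at least 250 million cycles have been
     * spent and either the 10 shortest times agree to within 0.5% or 200
     * attempts have been made; then report the 5th shortest time. */
    uint64_t approximate_minimum_time(void) {
        uint64_t times[200];
        uint64_t total = 0;
        int n = 0;
        while (n < 200) {
            times[n] = measure_once();
            total += times[n];
            n++;
            qsort(times, n, sizeof times[0], cmp_u64);
            if (total >= 250000000ULL && n >= 10 &&
                (double)times[9] <= 1.005 * (double)times[0])
                break;
        }
        return times[4];  /* the 5th shortest time */
    }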