stream test run

Date: Mon Mar 03 1997 - 20:44:34 CST


This is the test run of "stream_d.c" on my computer, an Intel Pentium 60 on a
Micronics motherboard with 256K L2 cache and 64 MB of parity memory, running
Linux 2.0.27. The test was run on March 3, 1997. The stream_d.c version was

The system gcc compiler version 2.7.2 was used with the following flags:

gcc stream_d.c seconds.c -o stream -O3 -m486 -funroll-loops
-fomit-frame-pointer -s -lm

The seconds.c code is as follows:

#include <sys/times.h>
#include <time.h>        /* for CLK_TCK */
#include <f2c.h>

struct tms x;
real y;
real z;

real second()
{
        times(&x);       /* fill x with this process's CPU times */
        y = CLK_TCK;     /* clock ticks per second */
        z = x.tms_utime; /* user CPU time, in ticks */
        z = z/y;         /* convert ticks to seconds */
        return z;
}

CLK_TCK under Linux is 100.

The executable program is 7556 bytes in size (dynamically linked against shared
libraries). The output below was produced by running it as

stream > output


This system uses 8 bytes per DOUBLE PRECISION word.
Array size = 1000000, Offset = 0
Total memory required = 22.9 MB.
Each test is run 10 times, but only
the *best* time for each is used.
Your clock granularity/precision appears to be 9999 microseconds.
Each test below will take on the order of 330000 microseconds.
   (= 33 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
Function     Rate (MB/s)   RMS time   Min time   Max time
Copy:            30.1887     0.5380     0.5300     0.5400
Scale:           48.4848     0.3340     0.3300     0.3400
Add:             51.0638     0.4700     0.4700     0.4700
Triad:           52.1739     0.4690     0.4600     0.4700


Robert E. Canup II

This archive was generated by hypermail 2b29 : Tue Apr 18 2000 - 05:23:06 CDT