Stream's future

Bill Broadley
Sun, 22 Sep 1996 06:04:26 -0700 (PDT)

Hello all,

I'm Bill Broadley, a programmer/admin for the UC Davis Math
Dept; I've been here for 2 years. Before that I was programming
for psychology research using time-compressed speech, MRI, and EEG. The
latter two involved significant amounts of post-processing.

I'm interested in benchmarking, computer architecture, high-end
workstations, Linux, performance tuning of code, etc. Our department does a
fair amount of research; the most compute-intensive is probably fluid
dynamics. We are always looking for the hottest price/performance box we can
find (most recently an HP C180Xp). Some of our codes are definitely in
the not-cache-friendly category (i.e. access every point in a 3-d array
to generate each new point).

I think if you're looking for peak observable practical memory
bandwidth doing something more interesting than memcpy/bzero, then
STREAM does a good job. For a "memstone" I'd say (assign+scale+sum+saxpy)/4
would work well. If you want to learn more about the memory system, you need
more information, probably in the form of a 2- or 3-d graph, i.e. how
bandwidth changes in relationship to the stride and array size. I guess you
could integrate the area under the curve if you want fewer numbers.

>(1) Should STREAM be extended to automagically measure bandwidths at each
> level of a memory hierarchy? What is a robust way of doing this with
> a single, portable piece of source code?

With lotsa samples I've managed to get some pretty flat graphs.

I've done tests on the 21164 and managed to make out 3 levels. Of
course a cycle counter (I have Alpha and Intel asm code, looking for
PA-RISC) makes taking lotsa samples quicker. I think benchmarking
each level separately is practical, but I'm not sure about the motivation.
As far as users and customers are concerned, observable performance
is the most important thing, not the method by which you get there....

>It was nice that Stream had both a "C" version and a FORTRAN version
>over the years. But if we are going to extend it, let's pick one and
>stick with it. It will be too much of a headache if we add lots of
>features to both versions, and try to ensure both are really measuring
>the same thing.

I agree.

>I assert that the proper language to pick is FORTRAN, as that is the
>only language used for Scientific programming. All those scientists
>writing in "C" are really just writing the same FORTRAN programs they
>used to write, with slightly modified syntax, and with compilers forced
>to be afraid of their pointers.

I'd argue the opposite: Fortran is changing to get more of the functionality
of C, so Fortran programmers can write C programs in Fortran syntax ;-)

As a datapoint, the researchers and grad students I know use C/C++ when
writing new programs. If making significant changes to old programs,
they use C/C++ and link with the legacy Fortran libraries.

Trying to avoid any biases, I think writing for the most commonly used
language makes sense, which today I believe is C. Not to
mention there is no decent free Fortran compiler (g77 is beta).

I have some code that varies array size and plots a graph with gnuplot,
and some that varies stride (useful for detecting associativity and
other memory system characteristics). It's not quite ready to release
yet, but I plan to put up a few web pages on it, probably with some kind
of automatic way for people to submit results. I'd be happy to cooperate.

Bill Broadley                UCD Math Sys-Admin
Linux is great.			PGP-ok