Copyright (c) 2001-2003 The Trustees of Indiana University.  
                        All rights reserved.
Copyright (c) 1998-2001 University of Notre Dame. 
                        All rights reserved.
Copyright (c) 1994-1998 The Ohio State University.  
                        All rights reserved.

This file is part of the LAM/MPI software package.  For license
information, see the LICENSE file in the top level directory of the
LAM/MPI source distribution.

$HEADER$

(more copyrights at end of this file)

This test directory has been modified by the LAM Team.  It is based on
a modified version of the IBM test suite which was originally derived
from an MPICH test suite (see text below).  Questions and comments
should be directed to the LAM mailing list (see
http://www.lam-mpi.org/mailman/listinfo.cgi/lam) -- not to the
original IBM or MPICH authors.


Overview:
=========

NOTE: This test suite *can* be used with other implementations of MPI.
The configure script should figure out the necessary details.  If you
are running this test suite with LAM/MPI, additional LAM-specific
tests will be run.

The LAM Team requests that you compile and run this test suite after
successfully installing LAM on your system. 

The following caveats apply to this test suite:

- You cannot run this test suite as root; root is specifically
  disallowed from running most LAM/MPI executables (including lamboot
  and mpirun).

- Although the test suite can run on a single node, many of the tests
  will either abort or limit what they run.  Some tests will abort if
  there is an odd number of nodes.  It is therefore recommended to run
  the suite on an even number of nodes/CPUs greater than 2.

- The automated reporting of this test suite uses MPI_Send() and
  MPI_Recv() to collate results between ranks.  As such, it assumes
  that these functions work properly.  If you can run simple MPI
  programs (e.g., the example programs that come with LAM/MPI), these
  functions can be assumed to be working [at least] well enough to run
  the automated report collating in this test suite.  If the examples
  don't even work properly, there are some fundamental problems with
  your installation -- you don't need this test suite to tell you
  that.

The purpose of this package is to run a series of tests against an MPI
implementation.  It will highlight potential problems and bugs, and
will be very helpful for the validation of the overall LAM/MPI
package.  


Instructions (short version):
=============================

To compile and run the test suite:

1. lamboot at least 1 node (2 or more is better).
2. Ensure the correct "mpicc" and "mpirun" are in your path.
3. Set the environment variable CFLAGS and/or LDFLAGS if necessary.
   (e.g., to force 64 bit compilation, or force your compiler to accept
    ANSI prototypes)
4. Run "./configure".  
5. Run "make" to build the tests.
6. Run "make -k check" to actually run the tests.
7. If there are any errors, send all relevant information to the LAM
   mailing list: lam@lam-mpi.org (**).

(**) See http://www.lam-mpi.org/mailman/listinfo.cgi/lam for
     information on how to join the mailing list.


Instructions (long version):
============================

To compile and run the test suite:

1. Boot up a LAM with at least 1 node/CPU (i.e., do a successful
lamboot).  Most tests expect 2 or more nodes/CPUs, and some require an
even number of them.  If you only lamboot 1 node/CPU, many tests will
be skipped.

2. Be sure that the "mpirun" and "mpicc" for the new version of LAM
are in your $PATH, and that older versions of LAM are ***not*** in
your path.  You may need to edit your .cshrc/.profile/.whatever file
to ensure that this is true for all nodes that you will be running on.

3. Set the environment variable CFLAGS or LDFLAGS to any relevant
values.  For example, to compile in 64-bit mode with the Solaris
Workshop or Forte compilers (for csh-type shells):

	shell% setenv CFLAGS -xarch=v9
	shell% setenv LDFLAGS -xarch=v9

Using the Bourne shell:

	shell$ CFLAGS=-xarch=v9
	shell$ LDFLAGS=-xarch=v9
	shell$ export CFLAGS LDFLAGS

Many of the lamtests programs use ANSI C prototypes.  Some older C
compilers may require additional command line flags to properly
compile this code.  For example, old C compilers under HP-UX 10.20
require the "-Ae" flag in CFLAGS to compile lamtests properly.

Note that some compilers are picky about using the same flags between
compiling the LAM MPI library and compiling (and linking) applications
to that MPI library.  For example, if you compiled LAM/MPI with "-O",
you should probably set CFLAGS and LDFLAGS to "-O" before running the
test suite.  Some compilers (cough cough Sun Workshop 5.0 cough cough)
have been known to give odd (and non-obvious) linking errors when this
is the case.

4. Run "./configure" in the top-level directory.  This will determine
a few options about your MPI implementation -- mainly whether the MPI
implementation is LAM and whether MPI-IO support is included.  The
configure script takes two optional arguments:

  --with-uniform-fs   If you have a uniform filesystem between the
                      nodes that you will be testing (such as NFS or
                      AFS) such that all the test executables appear
                      in the same location on every node, using this
                      option will *greatly* speed up the execution of
                      the test suite.  If this option is not used, the
                      test suite will assume that the test executables
                      only exist on the node where "make check" is
                      invoked, and will therefore send each executable
                      to each node before running it.  This can slow
                      down the tests dramatically.

  --without-trillium  This option is generally only for special cases
                      where you are testing a copy of LAM that does
                      not have the full Trillium package installed
                      (see LAM's INSTALL file), but another copy of
                      LAM is installed in the compiler's default
                      include path that *does* have the full Trillium
                      package installed.  Most users will not need
                      this option.

5. Run "make" to build the tests.  The tests should compile with no
warnings or errors.  You may wish to save the output of "make" for
future reference, or if anything goes wrong.  For csh-type shells:

       shell% make |& tee make.out

For Bourne-type shells:

       shell$ make 2>&1 | tee make.out

6. Run "make -k check" to actually run the tests.  Again, output
should probably be saved to a file:

       shell% make -k check |& tee check.out

for csh-type shells, or:

       shell$ make -k check 2>&1 | tee check.out

for Bourne-type shells.  

WARNING: By default, this will run all the tests with all available
RPI modules, including the lamd RPI.  The lamd RPI uses UDP for
communication, and does not scale well.  As such, running the tests
with the lamd RPI on a large number of nodes is not recommended.  It
is strongly advised to either not run the lamtests on a large number
of nodes, or to use the "MODES" method (described below) to skip the
lamd RPI in the tests.

You can run any subset of tests by invoking "make check" in any
testing subdirectory.  The valid testing subdirectories are listed
below:

       ccl         - Collective communication tests
       comm        - Communicator tests
       dtyp        - Datatype tests
       dynamic     - Dynamic process tests (MPI-2)
       env         - Environment tests
       group       - Group tests
       info        - MPI_Info tests (MPI-2)
       io          - I/O tests (MPI-2)
       onesided    - One-sided communication tests (MPI-2)
       pt2pt       - Point-to-point communication tests
       topo        - Topology tests

       lam         - Tests specifically for LAM/MPI; they will almost
                     certainly fail (or not even compile) if run with
                     other MPI implementations
       lam/basic   - Basic stress tests (specifically for LAM/MPI)
       lam/dynamic - Dynamic process tests (specifically for LAM/MPI)
       lam/env     - Environment tests (specifically for LAM/MPI)

If running the test suite with LAM/MPI, each test will be run once for
each available RPI module.  To run the tests for a specific RPI, use
the MODES variable to specify which RPI modules to use (may require
GNU make).  For example:

       shell% make -k check MODES="tcp sysv"

will run each test twice, once for the tcp RPI and once for the sysv
RPI.

7. If any errors occur, first check the release notes and LAM test
suite sections in the LAM/MPI Installation Guide and User's Guide
documents to ensure that they aren't known issues or otherwise
documented behavior.  

If your problems are still unresolved:

*** For compile issues, send the following to the LAM mailing list
    (you must be a subscriber first):

    - Output from the "laminfo" command
    - Output from "mpicc -showme"
    - Output from "./configure" in the lamtests directory
    - The config.log file from the lamtests directory
    - Output from "make"

    - IF YOU BUILT LAM/MPI FROM SOURCE, ALSO INCLUDE:
      - The config.log file from the top-level LAM directory
      - Output from when you ran "./configure" to configure LAM
      - The share/include/lam_config.h file
      - Output from when you ran "make" to build LAM

    - IF YOU INSTALLED LAM/MPI FROM A BINARY PACKAGE, ALSO INCLUDE
      - The output from "uname -a"
      - Which package you installed LAM/MPI from

*** For run-time test failures, send the following to the LAM mailing
    list (you must be a subscriber first):

    - Output from the "laminfo" command
    - Output from "./configure" in the lamtests directory
    - The config.log file from the lamtests directory
    - Output from "make check"

==> PLEASE COMPRESS ALL TEXT ATTACHMENTS! <==

This may seem like a lot of information -- and it is -- but given the
wide variety of configurations possible with cluster computing
software, we really need to see *all* the information before we can
help you.

Only members are allowed to post to the list (mainly to control spam)
-- see http://www.lam-mpi.org/mailman/listinfo.cgi/lam to join the
list.


Thank you for your time.
- The LAM Team


LAM modifications to this test suite:
=====================================

The LAM modifications consist of:

- Added "small memory" versions of several tests to help RPI modules
  that may have access to limited "special" memory (e.g., gm).
- Added C++ and Fortran tests.
- Revamped several tests to make them run properly with a single MPI
  process
- Removed the actual running of tests from the Makefiles, and added
  the config/runtests.sh script to run individual tests
- Revamped the testing process to use the standard "make check"
  functionality from GNU
- Added autoconf/automake support.  The configure script checks for
  some bare-bones kinds of things, determines whether it is running
  with LAM/MPI, determines whether there is MPI-2 I/O support, and
  selects compilation/check directories as appropriate
- LAM/MPI copyrights were added to the top of all .c files, above the
  IBM copyright notice
- The reporting, dynamic, io, info, lam, and onesided directories are
  new for LAM/MPI; they were not included in the original IBM or MPICH
  test suites
- All printf's were changed to lamtest_error() (or
  lamtest_fatal_error(), depending on context) to utilize centralized
  error collating functionality
- New Makefiles; the target "testing" builds the programs and invokes
  the targets "testing-lamd" and "testing-c2c"
- The mpirun commands use LAM's "-s h" option to ship the binary and
  thus work nicely in homogeneous clusters without complex setting of
  the directory names (heterogeneous clusters are not supported; the
  binaries are built once on the local host)
- Removal of some success-case and warning printf()s to reduce the output
- The cancel-send test cases were deleted (LAM doesn't support
  canceling sends)
- The sync-send and wtime test cases were redesigned
- The "char*argv" typo was fixed
- A datatype test case was fixed as it violated the rule: extent = ub - lb
- Some programs were renamed to avoid clashes with standard Unix commands
- Added a few "make" targets to make the test suite a little more user
  friendly 
- Changed a few "int"s to "MPI_Aint"s where necessary
- Changed a spurious "int" to "MPI_Request"
- Added an endian test to dtype/bakstr.c to ensure that the test
  behaves correctly
- Forced -O on mpirun command line to ensure that LAM knows that it's
  homogeneous (the NULL buffer check in pt2pt/badbuf.c will only happen
  if LAM thinks we're either on a big endian machine or homogeneous)
- Added some tidying-up code to some of the programs to make them
  bcheck clean (a tool similar to purify)
- Changed some codes to malloc huge arrays rather than try to allocate
  them on the stack (allocating 100,000 int's on the stack causes seg
  faults and other Badness on some platforms)
- Changed some MPI_Send/MPI_Recv pairs to MPI_Sendrecv.  This wasn't
  strictly necessary for the current LAM RPI's, but at least one user
  (Troy Baer) points out that using the lamtests suite with another
  MPI (MPICH/gm) causes deadlock.  Presumably, they have a very small
  "small" message size.  Using MPI_Sendrecv is technically more
  correct, anyway.
- Changed comm/attr.c to only print a warning when MPI_TAG_UB is not
  what was expected when not using LAM/MPI.  Thanks to Troy Baer for
  pointing this out.
- Changed the running part of the tests to be in a script rather than
  in the makefile; this obviated the need for deep mojo in the
  makefiles to make everything work nicer
- Changed a bunch of exit(1) statements in dtyp/ tests to exit(0) so
  that LAM's mpirun wouldn't think that they errored out (they called
  exit(1) if they were invoked with more than one rank, and we invoke
  all tests with a default of 2 ranks, with a small number of
  exceptions)



Contact information:
====================

Send feedback and bug reports to the LAM mailing list.  See
http://www.lam-mpi.org/mailman/listinfo.cgi/lam for information.



Original Copyright/Author Text:
===============================

******************************************************************************

This directory of tests has been modified by William Gropp of Argonne National
Laboratory to both conform to the MPI Standard (mostly providing argc and argv
to MPI_Init) and to contain MPICH-style makefiles.  The original programs
were provided by IBM.  The target "testing" will build and test the programs,
assuming that "mpirun" is used to run programs.

To use with MPICH, you'll need to build the Makefiles.  The easiest way to do
this is with

mpireconfig Makefile 
make makefiles

where mpireconfig is the MPICH script to create Makefile's from Makefile.in's.

******************************************************************************

This is a very good (partial) test suite, and I recommend it to all MPI
implementors.  As of this writing, MPICH passes almost all of the tests,
failing only tests of MPI_CANCEL (unimplemented in MPICH).  Thanks to
IBM for providing it.  A few tests test for particular behavior on
errors (such as calling MPI_Barrier before MPI_Init) and as such their
results are undefined.


Comments on the differences from the original suite:

Many of the programs failed to return a value to the calling
environment; a "return 0;" was added after most MPI_Finalize calls.

A number of bugs of the form "if (a = b)" were fixed by changing to 
"if (a == b)". 

Some of the tests tested interpretations of the MPI Standard that do not match
clarifications made by MPIF (particularly TEST/WAIT on empty lists); these
have been updated to match the current spec.  Some of the datatype tests
tested a particular interpretation of the "padding" rule that was (a)
implementation specific and (b) also changed in a clarification (making the
resulting standard more portable).  

There are reports that some tests check for specific data lengths, for
example, assuming that sizeof(int) == 4.  If you find any of these, please
send mail to mpi-bugs@mcs.anl.gov.

Many of the tests created MPI objects (groups, communicators, etc) but did
not free them.  While this is not a bug, it made it hard to test for storage
leaks in the MPI implementation, so I've added code right before the
MPI_Finalize to free most objects.
