----------------------------- SYNTAX ------------------------------

TFILE    ::= TOPDEF *

TOPDEF   ::= include EXPR
           | MACRODEF
           | test STRING { STMT* }
           | VAR = EXPR

MACRODEF ::= def MNAME ( VAR , ... , VAR ) { STMT* }

STMT     ::= VAR = EXPR
           | print EXPR
           | if EXPR then STMT* fi
           | if EXPR then STMT* else STMT* fi
           | MNAME ( EXPR , ... , EXPR )
           | VAR = run EXPR
           | return EXPR
           | skip when EXPR
           | RESULT when EXPR
           | expect RESULT
           | framefail EXPR

RESULT   ::= pass | fail | unknown

VAR      ::= $identifier
MNAME    ::= identifier
STRING   ::= "a string"

EXPR     ::= EXPR9 || ... || EXPR9
EXPR9    ::= EXPR8 && ... && EXPR8
EXPR8    ::= EXPR0 OP EXPR0
           | EXPR0 "|" EXPR8
           | EXPR0

OP       ::= == | /= | contains | lacks | ++

EXPR0    ::= VAR
           | STRING
           | True 
           | False
           | MNAME ( EXPR , ... , EXPR )
           | if EXPR then EXPR fi
           | if EXPR then EXPR else EXPR fi
           | otherwise
           | defined VAR
           | contents EXPR0
           | exists EXPR0
           | framefail EXPR0
           | ( EXPR )
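
As a concrete illustration, here is a small hypothetical T file using
each TOPDEF form (the file common.T, the macro runtool, and the
--greet flag are all invented for the example):

```
include $confdir ++ "/common.T"

$expected = "hello"

def runtool ($args) {
  $out = run $tool ++ $args
  return $out
}

test "greeting" {
  expect pass
  $got = runtool (" --greet")
  pass when $got contains $expected
  fail when otherwise
}
```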


Types
~~~~~
The only type (sic) is String.  The strings "True" and "False" are
(the only) acceptable inputs to conditionals.  Conversely, expressions
that produce booleans really produce the strings "True" or "False".
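
So, for instance, inside a test or macro body a comparison can be
bound to a variable and used directly as a condition; here $same holds
the string "True" or "False" ($a and $b are assumed to be bound):

```
$same = $a == $b
if $same then print "equal" fi
```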

Notes
~~~~~
A macro may or may not produce a result.  The driver will complain at
run-time if a result-giving macro is called in a non-result-using
context or vice versa.
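
For example (both macros hypothetical), banner yields no result and is
called as a statement, while versionof ends in return and supplies a
result:

```
def banner ($s) {
  print "== " ++ $s
}

def versionof ($cmd) {
  $v = run $cmd ++ " --version"
  return $v
}
```

Inside a test, banner ("start") is a statement call, while versionof
must appear in a result-using position, e.g. $v2 = versionof ($tool).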

defined   returns a boolean indicating whether or not the specified
          variable has a value.

exists    returns a boolean indicating whether or not the specified
          file exists.

contents  returns the contents of a file.

The pipe operator works shellishly: e1 | e2 computes e1 and feeds the
result to the stdin of the command specified by e2.  e2's stdout is
the value of the entire expression.
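
For example, assuming a variable $logfile naming a log file, this
feeds the file's contents to grep and binds grep's output:

```
$errors = contents $logfile | "grep -i error"
```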

framefail e  immediately denotes a framework failure for this test;
             the specified string is printed by the driver.
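
For example, a test can abort with a framework failure when a required
input file is missing (input.txt and the "ok" marker are hypothetical):

```
test "needs-input" {
  $path = $testdir ++ "/input.txt"
  if exists $path then
    expect pass
    pass when contents $path contains "ok"
    fail when otherwise
  else
    framefail "input.txt is missing"
  fi
}
```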


---------------------------- SEMANTICS ----------------------------


The following are specified on the command line:
  * path to the config file
  * name of the tool to test
  * path to root of dir holding tests
  * optionally, some var=value bindings
  * optionally, the name(s) of tests to run.  Default is all the tests
    found.
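
A hypothetical invocation, matching the list above (the driver name
tdriver, the tool name mycc, and all paths are invented):

```
tdriver tests/conf.T mycc tests/ threads=4 basic-01 basic-02
```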


0.  Create the initial global var env.  This binds:
       $conffilename   -- config file name
       $confdir        -- config file dir
       $tool           -- compiler to test
    and any other vars defined on cmd line.
    Call this Genv-INIT.


1.  Find all T files in the directory structure.


2.  Parse, and process independently, each T file.  A T file:
       * defines some tests
       * defines some macros
       * gives bindings for top-level vars
       * specifies include files
    For each T file, create a new env Genv-TFILE, which is Genv-INIT
    plus
       $testfilename   -- name of this .T file
       $testdir        -- dir containing this .T file
    For each include statement, eval the expression using Genv-TFILE.
    Use the result as the name of a file to include.  Recurse until
    includes are exhausted.


3.  We now have a collection of fully-parsed T files, each comprising:
       * a bunch of test defns
       * a bunch of macros
       * possibly some global var bindings.
    Augment Genv-TFILE with the bindings specified in this file,
    giving Genv-FINAL.  The driver dependency-analyses the bindings
    and complains if they are circular; otherwise it evaluates them in
    dependency order and augments the global var env (for this T file).
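
For example, these top-level bindings (names invented) are accepted
even though $flags textually precedes $base; the driver evaluates
$base first.  A mutually recursive pair such as $x = $y with $y = $x
would be rejected as circular:

```
$flags = $base ++ " -O2"
$base  = "-Wall"
```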


4.  For each T file, execute the statements for each test in
    turn, with global env Genv-FINAL.  All var bindings created inside 
    macros are purely local and disappear when the macro exits.
    To avoid confusion, you may not use a local var which shadows
    a global.

    Execution of a test finishes when both an expected and actual
    result have been determined.  These are specified by executing
    "expect ..." and "pass/fail/unknown when ..." respectively.

    Tests which finish without specifying these are counted as
    framework failures.
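
Putting this together, a minimal test that always determines both
results might read (the --help probe and "Usage" string are
illustrative):

```
test "smoke" {
  expect pass
  $out = run $tool ++ " --help"
  pass when $out contains "Usage"
  fail when otherwise
}
```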


5.  If --save-summary=<file> was specified, a summary of this run's
    results is dumped in <file>.


6.  If --compare-summary=<file> was specified, a summary of the
    differences against the results in <file> is printed to stdout.
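
For example, a first run could save a baseline and a later run compare
against it (the driver name tdriver and the summary file name are
invented; the two flags are as described above):

```
tdriver tests/conf.T mycc tests/ --save-summary=baseline.sum
tdriver tests/conf.T mycc tests/ --compare-summary=baseline.sum
```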
