\documentclass{book}
%\newcommand{\VolumeName}{Volume 2: Axiom Users Guide}
%\input{bookheader.tex}
\usepackage{makeidx}
\makeindex
\begin{document}
\pagenumbering{arabic}
\mainmatter
\setcounter{chapter}{0} % Chapter 1

\begin{verbatim}

Date: Mon, 14 Oct 2002 10:01:01 -0400
From: Tim Daly
To: list
Subject: Arthur Norman Re: atan2

------- Start of forwarded message -------
Date: Mon, 14 Oct 2002 07:55:56 +0100 (BST)
From: Arthur Norman
To: Tim Daly
Subject: Re: atan2

On Sun, 13 Oct 2002, root wrote:

> Arthur,
> 
> I'm trying to build the cslbase directory files.
> The build is blowing up looking for the definition of
> functions like atan2, etc. I've searched the whole
> code base and cannot find the source.
> Where is the source for these math functions?
> 
> Tim
> 
Glad to be in touch - I had been intending to signal you but have now
become somewhat swamped under start of term etc.

sin, cos, atan2 are in the standard C library, as in #include <math.h>.
On SOME systems you need to link with "-lm" to pick them up, on many
others they are there without fuss. On my Red Hat 7.3 you can find the
actual declarations hidden in obscure mess in
/usr/include/bits/mathcalls.h that /usr/include/math.h itself #includes.
On some machines many of these get open-compiled when the floating point
unit has magic to do them.

I have been finding the nested Makefiles hard to sort things out through.
I had hoped to do test builds on Windows which is the system I run at
home, and my next choice would have been cygwin there. With the build
process as messy as it is at present windows is not an easy prospect.
Under cygwin when I do step 1, ie "make" in the development directory,
cygwin make coredumps on me.  The linux setup says it is for glibc2.1 and
I have 2.2 on Red Hat 7.3... it has been much harder and uphill work to
get started than I had hoped!

         Arthur
------- End of forwarded message -------


Date: Mon, 14 Oct 2002 10:42:29 -0400
From: Tim Daly
To: list
Subject: Arthur Norman Re: atan2

(god, how i love emacs. my system crashed hard while i was typing this
and not one word of my immortal prose was lost.)

> Glad to be in touch - I had been intending to signal you but have now
> become somewhat swamped under start of term etc.

Yes, I've just started a new job (at city college of new york) and
I've had a steep learning curve to climb there. I'm on the team that
created Magnus working with a bunch of experts in infinite group theory.
While I understood this stuff centuries ago it has wilted a bit with age
so I've been reading math books in my spare time. Magnus is a special
purpose computer algebra system that is dying of "code rot" (the authors
were grad students who have left the field; the experts in the group are
not programmers). I'm hoping to keep axiom from the same fate.

> sin, cos, atan2 are in the standard C library, as in #include <math.h>.
> On SOME systems you need to link with "-lm" to pick them up, on many
> others they are there without fuss. On my Red Hat 7.3 you can find the
> actual declarations hidden in obscure mess in
> /usr/include/bits/mathcalls.h that /usr/include/math.h itself #includes.
> On some machines many of these get open-compiled when the floating point
> unit has magic to do them.

Ah, right. I could guess that but 11pm isn't conducive to insightful thinking.

> I have been finding the nested Makefiles hard to sort things out through.
> I had hoped to do test builds on Windows which is the system I run at
> home, and my next choice would have been cygwin there. With the build
> process as messy as it is at present windows is not an easy prospect.
> Under cygwin when I do step 1, ie "make" in the development directory,
> cygwin make coredumps on me.  The linux setup says it is for glibc2.1 and
> I have 2.2 on Red Hat 7.3... it has been much harder and uphill work to
> get started than I had hoped!

Re: Literate Programming

Actually, I'm going to write up a literate document that explains the nested
makefile structure. It'll use noweb (http://www.eecs.harvard.edu/~nr/noweb)
to document the pile. noweb is a variant of Knuth's idea of literate
programming which I plan to use to document the whole of the system.

The literate programming idea (assuming you haven't seen it) in its
simple form is that you write a document in tex that has a few special
tags of the form 
  <<something>>= ...code...@ 
which allows you to mix tex (or latex) and code. Once you have the
document (I call it a pamphlet) you can run two programs against it:
  noweave foo.pamphlet >foo.tex
  notangle foo.pamphlet >foo.code
where noweave will generate the tex documentation of the code
and notangle will generate the actual running code. I've already 
rewritten the DHMATRIX domain in this form. (DHMATRIX was derived
from Richard Paul's Ph.D. thesis and he was kind enough to let me
quote directly from that document).
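To make the chunk convention concrete, here is a toy chunk extractor in Python (my own illustration, not part of noweb): it pulls out the code between a <<name>>= tag and the terminating @, which is the heart of what notangle does. The real tool also expands <<references>> inside chunks; this sketch does not.

```python
def tangle(pamphlet, chunk_name):
    # Collect the lines between '<<chunk_name>>=' and a lone '@'.
    # Real notangle also expands <<references>>; this toy does not.
    code, collecting = [], False
    for line in pamphlet.splitlines():
        if line.strip() == "@":
            collecting = False
        elif line.strip() == "<<%s>>=" % chunk_name:
            collecting = True
        elif collecting:
            code.append(line)
    return "\n".join(code)

pamphlet = """Some TeX prose explaining the code.
<<hello>>=
(defun hello () (print "hello"))
@
More prose."""

print(tangle(pamphlet, "hello"))
```

Running it prints just the lisp chunk, with the surrounding prose stripped away.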

I'll send you the tex and output files until you have the ability
to handle pamphlets.

Anyway, I plan to write up the recursive Makefile chain the same way.
(as well as the ccl files as I need to understand them deeply anyway).

Re: Recursive Makefiles

To get you started the idea of the recursive Makefile chain is that the
base Makefile will create "global" ${FOO} variables. These variables are
added to the temporary environment ${ENV} which prefixes each recursive
call to make. The next Makefile one layer down adds yet more variables
to the ${ENV} and calls its children.

Each Makefile only knows how to make the files in its own subtree.
There is a recursive ${MAKE} call for each subtree in a directory.
Each parent Makefile has to 
  (1) set up environment variables, 
  (2) set up conditions for its children, 
  (3) build any files for which it is directly responsible 
  (4) invoke its children Makefiles (one per subdirectory), and 
  (5) clean up the mess.
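The way each level extends the environment and then recurses (steps 1, 3 and 4 above) can be sketched in a few lines of Python. The dictionary structure here is invented purely for illustration; the real chain is of course plain make.

```python
def run_make(makefile, env):
    # Each level adds its own variables to the environment it was
    # given (step 1), builds its own files (step 3), then invokes
    # each child with the enriched environment (step 4).
    env = {**env, **makefile.get("vars", {})}
    built = list(makefile.get("builds", []))
    for child in makefile.get("children", []):
        built += run_make(child, env)
    return built

tree = {
    "vars": {"SPAD": "/home/axiom"},
    "children": [
        {"vars": {"IN": "/home/axiom/src/boot"}, "builds": ["boothdr.o"]},
        {"builds": ["bootfuns.o"]},
    ],
}
print(run_make(tree, {}))
```

Each child sees its parent's variables plus its own, but never its siblings' -- the same scoping you get from ${ENV}-prefixed recursive ${MAKE} calls.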

The "root" makefile is a special case. It sets up truly global variables
then calls a sibling makefile for the kind of system build you want. All
of the system specific environment variables are in the siblings.
The reason for this is that the makefile tree is intended to work on NFS
mounted directories. You NFS mount the target filesystem, type 
"make whatever" and it handles the details automatically to build a
proper system for the architecture you need. It works rather well as
I was able to build systems ranging the spectrum of ibm/360, intel,
sparc, powerpc, etc in one Makefile tree. I know it seems painful
but once you understand the limited scope of each makefile it is 
rather obvious where things belong (think of scope issues in programming).

The directory structure is important also. There are 5 primary directories:
lsp, src, int, obj, and mnt.

These are divided into 5 different categories for a reason. The basic idea
is to keep the "pure" source files separate from the machine generated
files, and to keep the system-dependent files separate from the
system-independent files. The cross-product of these gives us 4 of the
5 possible directories:

 src = (system independent, human generated   e.g. .boot files)
 int = (system independent, machine generated e.g. .lsp files from .boot)
 obj = (system   dependent, machine generated e.g. .o files)
 mnt = (system   dependent, final image code  e.g. .image files)

src is code we write. It is always read-only to the machine and makefiles.

int is code the machine writes (the lisp generated from the boot code) but
will only be needed when something changes. This considerably shortens the
build process (by about 10^3) but is basically a cache; removing this
directory only forces a rebuild. Normally this is mounted read-only
once the first build occurs as there is no need to write over the cache
files. There is nothing cached that depends on any particular target
architecture so we can reuse all of this work no matter what kind of
system we are building.

obj is code that depends on the target architecture, usually compiler
files like foo.o and such. This is "scratch space" for the makefiles
that allows compilers, documentation systems, and other machinery to
build up their working files. This directory can be completely removed
as the Makefiles will rebuild it if needed. It contains nothing 
permanent and does not include anything that gets shipped (although
it might be built here and copied to the final image).

mnt is the final system image for a particular target architecture.
You can copy this directory once the build completes. Thus the final
executables are always under: ~/mnt/(target)/....

Using this directory structure you can have a master build system
which contains only the src directory. On the master build system
you NFS mount empty file systems under obj and mnt. Next you type
"make systemtype". The master Makefile sets up the globals, invokes
Makefile.systemtype to set up the system-special globals, and starts
the build. A side-effect of the build is to build all the subdirectory
structure in int, obj and the final mnt ship. Now you have cached work
in int you can keep, trash files in obj you can forget and a shipped
system in mnt you can run. NFS mount a new obj and a new mnt for a new
architecture and type "make nextsystem" and it all works again.

Hope this helps.

Also of interest is that I'm planning to build the system on two
different host services. There is a hidden service on tenkan.org
where I can make my mistakes in private and a world-available service
on savannah (http://savannah.nongnu.org/projects/axiom).

Tim

------- End of forwarded message -------


Date: Mon, 14 Oct 2002 17:06:08 -0400
From: Tim Daly
To: Arthur Norman
Subject: Makefile conventions

Arthur,

I've set up a mailing list for developers at:
axiom-developer@mail.freesoftware.fsf.org

You can subscribe to it online at:
http://mail.freesoftware.fsf.org/mailman/listinfo/axiom-developer

You can see the archives at:
http://mail.freesoftware.fsf.org/pipermail/axiom-developer

More conventions related to Makefiles:

The top level Makefile defines
SPAD= the full path to the root of the world. This is where the
      highest level Makefile lives.
SYS=  the name of the shipped system type (e.g. linux)
LISP= the name of the lisp we are building on
SRC=  ${SPAD}/src
INT=  ${SPAD}/int
OBJ=  ${SPAD}/obj
MNT=  ${SPAD}/mnt/${SYS}
LSP=  ${SPAD}/lsp/${LISP}

so if we are building a system in /home/axiom for linux using ccl these
would look like:

SPAD= /home/axiom
SYS=  linux
LISP= ccl
SRC=  /home/axiom/src
INT=  /home/axiom/int
OBJ=  /home/axiom/obj
MNT=  /home/axiom/mnt/linux
LSP=  /home/axiom/lsp/ccl
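As a quick sanity check on the conventions, the derived variables compose from the three given ones like this (a throwaway Python rendering of the same values):

```python
import os.path

# The three values the top-level Makefile is given:
SPAD = "/home/axiom"   # root of the world
SYS  = "linux"         # shipped system type
LISP = "ccl"           # the lisp we build on

# Everything else is derived, as in the Makefile above:
SRC = os.path.join(SPAD, "src")
INT = os.path.join(SPAD, "int")
OBJ = os.path.join(SPAD, "obj")
MNT = os.path.join(SPAD, "mnt", SYS)
LSP = os.path.join(SPAD, "lsp", LISP)

print(MNT)
print(LSP)
```

Note that only MNT and LSP pick up a system-specific component; src, int and obj are shared across all targets.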


We need to keep track of where we are and where we are putting files.
In order to solve this problem we want to name 3 "places" we want to be.
The first is where we have the input (IN), 
the second is where we have space to work (MID) and 
the third is where we want the result (OUT).

Each Makefile (one per src directory) defines 3 local variables (if needed):

IN=  the full path to the src directory where the Makefile lives
MID= the full path to the "working directory" 
       this could be in the int subtree if the file that gets
       created is system independent but machine generated or
       this could be in the obj subtree if the file that gets
       created is system dependent but machine generated.
OUT= the full path to the target directory
       this might be the mnt/${SYS}/bin directory, for example,
       if the result of the make is a shipped executable file.

for example, 
IN=${SRC}/boot
MID=${INT}/boot
OUT=${OBJ}/boot

so, in general, a "standard stanza set" will look like:

${OUT}/boothdr.o: ${MID}/boothdr.lisp
	@ echo making ${OUT}/boothdr.o from ${MID}/boothdr.lisp
	@ ( cd ${MID} ; \
	   echo '(progn  (compile-file "${MID}/boothdr.lisp" :output-file "${OUT}/boothdr.${O}") (${BYE}))' | ${LISPSYS}  )
 
${MID}/boothdr.lisp:	${IN}/boothdr.lisp
	@ echo making ${MID}/boothdr.lisp from ${IN}/boothdr.lisp
	@ cp ${IN}/boothdr.lisp ${MID}/boothdr.lisp

Note that there are two stanzas related to the boothdr.lisp file.
The idea is that the second stanza will go from our input file to
our work directory (IN->MID) and the first stanza will go from our
work directory to where we want the result (MID->OUT). This two step
process, while tedious to write, allows us to cache work in the MID
subdirectory. In general, after the first system build, the second
stanzas will never be executed again unless the source files change.
This can result in a considerable savings (for example, depsys images
are cached after being built). For more detail:

The second stanza takes 
  ${IN}/boothdr.lisp -> (by copy) -> ${MID}/boothdr.lisp

  (a) since this is a "system independent" file it creates a copy in the
      int subdirectory (MID=/spad/int/boot).
  (b) since there is no other processing required it just copies.
  (c) the MID file is now in the cache (since this only required a
      copy in this case it hardly matters but overall it saves a lot)

The first stanza takes
  ${MID}/boothdr.lisp -> (compile-file piped into lisp) -> ${OUT}/boothdr.o

  (a) since this is a "system dependent" file it creates the compiled
      output in the obj subdirectory.
  (b) since this file is not shipped with the system there is nothing
      created in the mnt subdirectory.
  (c) note the ( cd ${MID} ; do-the-compile-command ) style.
      this makes sure that we are not trying to work in the src directory.
      (in this case the compiler does not generate tmp files so it doesn't
       matter where we work but the style is followed anyway)
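The caching behaviour of the two stanzas is just make's timestamp rule applied twice. A toy Python rendering of the IN -> MID -> OUT flow (the file name and the "compile" step are stand-ins, not the real build):

```python
import os, shutil, tempfile

def outdated(target, source):
    # make's rule: rebuild if the target is missing or older than its source
    return (not os.path.exists(target)
            or os.path.getmtime(target) < os.path.getmtime(source))

def build(IN, MID, OUT, name):
    src = os.path.join(IN,  name)         # pure source, read-only
    mid = os.path.join(MID, name)         # cached, system independent
    out = os.path.join(OUT, name + ".o")  # system dependent result
    if outdated(mid, src):                # second stanza: IN -> MID
        shutil.copy(src, mid)
    if outdated(out, mid):                # first stanza: MID -> OUT
        with open(mid) as f, open(out, "w") as g:
            g.write("compiled: " + f.read())  # stand-in for compile-file

# demo in a scratch tree
top = tempfile.mkdtemp()
IN, MID, OUT = (os.path.join(top, d) for d in ("src", "int", "obj"))
for d in (IN, MID, OUT):
    os.makedirs(d)
with open(os.path.join(IN, "boothdr.lisp"), "w") as f:
    f.write("(defun boot () t)")
build(IN, MID, OUT, "boothdr.lisp")
print(open(os.path.join(OUT, "boothdr.lisp.o")).read())
```

On a second call to build() with nothing changed, both outdated() checks fail and neither stanza runs -- which is exactly why the MID cache pays off after the first full build.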


By convention there are two stanzas per file. We don't depend
on default rules for make stanzas so each subdirectory's Makefile
is very explicit about the steps necessary to make a file properly.

By convention the first line in a stanza echoes what the stanza does.
All other operations in the stanza are quiet (have a preceding '@').
This is useful for quickly finding where a build failed.
   

Date: Mon, 14 Oct 2002 21:00:41 -0400
From: Tim Daly
To: Norman Ramsey
Subject: Axiom and noweb

Norman,

I'm building a computer algebra system called Axiom which has been
released as free software (http://savannah.nongnu.org/projects/axiom).
I'm working on rewriting the Axiom source code to deeply depend on
literate programming and, in particular, on noweb.

In addition, I'm working on defining a document format that requires
user examples, test cases, help documentation and cross references to
other documents that will be loaded when this document is used. I can
create most of these sections by using <<special-tag-name>> conventions
and post-processors. As I add requirements I'm not sure that approach
will continue to be sufficient.

You state in your license that I'm allowed to create derivative works
provided I retain the copyright notice but the result may not be called
noweb without your written consent.

I'm not particularly eager to change the name since I'd like to give 
credit and make reference to your work. As I'm fulfilling the license
requirements at this time I don't see a problem with keeping the name
"noweb". However, I'd like you to be aware of its use.

Tim


Date: Tue, 15 Oct 2002 17:07:26 -0400
From: Tim Daly
To: Norman Ramsey
Subject: Axiom and noweb

>Tim,

>While I'm sympathetic to your efforts, at present I want to be sure
>that the name `noweb' is attached to my work and not to anyone else's.
>In particular, when I come up for tenure, it would be unfortunate for
>me if there were to be any confusion.  Maybe if you tell me more of
>the details we can work something out.

>Norman

A perfectly reasonable request.

Details:

Axiom carries almost all of its knowledge of algebra in a high level
language called "spad". Currently these files suffer from a few
problems that can be addressed by literate programming.

First, there is the classic separation of mathematical theory from
its implementation. People write papers that give the theory and
develop the code to reduce the theory to practice. The paper gets
published, the code gets integrated. Unfortunately, they never again
meet. In a computer algebra system (unlike, say, an editor or other
general purpose program) you can't really tell people "go read the
code" because most of what you need to know to understand the code
exists in a research library.

Second, there is the problem that various parts of a large system
like Axiom can get badly out of sync. Test cases get lost, help
files become outdated, etc.

Third, it is my opinion that computer algebra systems are reaching a
natural limit of complexity based on the way they are constructed.
Because of the theory separation they are very hard to maintain and
even harder to extend. Axiom has been around for 30 years. How will
we get thru the next 30 years?

So, I'm experimenting with literate programming as a way of attacking
the above problems. The experiment involves several directions.

To attack the first problem I'm planning on rewriting the algebra
files as literate programs. I'm going back to the original source
material (thesis work and papers), finding the authors and writing
literate programs that start by explaining the theory and work toward
its reduction to practice. (Eventually I'd like to start a journal 
that requires literate algebra programs as submission material).
I have a few hundred pages so far but there are many years of work 
ahead.

To attack the second problem I'm planning to require a certain style
of document with marked sections and chunks for test cases, user docs,
regression tests, etc. This file format (which you call .nw and I
call .pamphlet) will have bibliographic cross references to other
pamphlets that contain required code. Since code is currently 
dynamically loaded I'm planning to build the system so you can
"drag and drop" a pamphlet onto the system and it will self-expand
(including expanding cross references) and update the system.
This is where I expect to have to add filters and possibly modify
noweb.

To attack the third problem I'm planning on developing the concept
of "booklets" which are groups of pamphlets. One could put together
booklets that explain, say, all of the matrix types in Axiom (think
of a horizontal slice thru the types). Or one could put together
booklets that explain, say, integration in depth (think of a
vertical slice thru the types). It appears that noweb may support
this already (thru -delay) so I don't anticipate any changes for
this idea.

Tim


Date: Wed, 23 Oct 2002 21:00:07 -0400
From: Camm Maguire
To: list
Subject: source

Greetings!  I just saw this list recently set up and looked over the
messages -- it appears that some are working with the source now, but
I cannot find it anywhere.  Is there any status update?


Date: Wed, 23 Oct 2002 21:30:27 -0400
From: Tim Daly
To: Camm Maguire
Subject: Re:  source

Camm,

I haven't posted the source yet. I'm trying to build a working
version.  The source code won't be particularly useful if you can't
run it.  Axiom is now based on a lisp (Codemist Common Lisp) that is new
since I last used it, and I'm learning how to build it. The key issue is
that the underlying Lisp requires some built-in support.  That and some
personal commitments are taking time.

I have been looking at the GCL-MPI enhancements as one of the listed
goals is having parallel programming support. I'll get back to you on
this as soon as I resurface.

Tim


Date: Fri, 25 Oct 2002 18:30:50 -0400
From: Tim Daly
To: Robert Morelli
Subject:  Re: Axiom Development

> Robert Morelli wrote:
> I'm potentially interested in working on Axiom.
> Are you coordinating the project?

Robert,

Yes, I'm the person coordinating the project. It is a rather large
project so I expect people will want to focus on particular areas.
There are several different ways that people will be able to contribute
depending on their skills. The internal architecture is almost all
Common Lisp. The algebra is written in a high-level language known
as spad or aldor. New algebra needs to be written. The first release 
of the system is going to be on linux but porting work needs to be 
done to other platforms. The documentation needs expanding. The 
user interface needs lots of work. If you visit the root page
http://savannah.nongnu.org/projects/axiom you'll see the overall
project. If you visit the homepage at:
http://www.nongnu.org/axiom you can see a list of open topics and
directions for work. The source code is not yet posted as I'm
in the process of getting the initial build to work. 
What interests you about Axiom?

Tim


Date: Sat, 26 Oct 2002 11:13:11 -0400
From: Tim Daly
To: Philippe Toffin
Subject: Re:  Axiom Development

>Philippe Toffin wrote:
> I am philippe Toffin; I have been working with Axiom for many years,
> mainely because I like very much this high generality level in wich one
> can work; there are many things which are bad, as the low level of errors
> detected explainations by the compiler, or the uses of different names of
> the same functions by the compiler or the interpreter etc...
> I do not really know if I can help you, but it would be more on the math
> algorithm side rather than writing things for specified plattforms.
> Actually, I have the Axiom 2.3 for linux version, but it is not properly
> or not completely installated. A friend of mine is going to do it, soon.
> Anyway, I think that the basic ideas at the beginning of Axiom were good
> ideas, and I still think that it could become an excellent computer
> algebra sofware.
> Please excuse my very inperfect english.
> best regards
> philippe

Philippe,

At the moment the new version of Axiom is still being built.
There are challenges to setting up the system so it can be built
automatically by anyone and the system is built on a lisp (CCL)
that is different from the original (AKCL) so the initial step
has been progressing slowly. I have, however, succeeded in building
the lisp image and integrating it into the build makefile tree cleanly.
I'm much more familiar with the rest of the process so it should go
faster. The first version will be on Linux but I'll be looking for
people who have other kinds of systems and are interested in the
drudge-work of porting.

If you're interested in the algebra but don't have an Axiom system
available the key contribution you can make is to choose a particular
algebra file or two and attempt to document them. The algebra suffers
from the fact that the research papers and reference material necessary
to understand the algorithms are hidden in research libraries. The
plan is to use "noweb" (http://www.eecs.harvard.edu/~nr/noweb) and
a literate programming style to document each of the algebra files.
I can post a first example if you want to see what this would involve.
noweb allows you to mix tex and source code in the same file. It is
basically tex with a few extra tags. You can then write a "pamphlet"
that describes the algebra in detail, including the actual code.
You run:
  notangle foo.pamphlet >foo.spad
  noweave  foo.pamphlet >foo.tex
and you get the original algebra code in foo.spad and the description
of the algebra in foo.tex.

Or you can take a broader view and construct a bibliography list for each of
the algebra files. We will need to do background research to find the
primary materials for the algebra code. Some of it is from books and some
of it is from papers. Axiom is primarily a research platform for 
computational mathematics and leading edge research is mostly found in
papers (although by this time it will have shown up in textbooks).
We need to find the original authors of the code and ask them for
references.

If you get Axiom running there are several things you can do. One is
to start sending bug reports. Be sure to include the line that caused
the failure, the actual output and the expected output. We need to
find the flaws and fix them. One of the key features of open source
is supposed to be the speed with which things get fixed. Hopefully
Axiom will keep that tradition alive.

A second task is to develop example/test cases. There is a directory
(src/input) that has example files and was used for regression
testing. It has hardly changed since the system was shipped. We need
much more coverage of the algebra. Indeed, we need to structure the
tests so we can decide what is and is not covered, at least in a 
minimal way. Many people develop input files to check their work
and throw them away. These are valuable in a more general sense and
we need to encourage collecting, categorizing and documenting them.
There are portions of the algebra that would be a lot easier to use
if there were examples. You could look at Hypertex and construct pages
that help the end user. Hypertex pages are fairly simple to write.

A third task would be to develop new algebra. If you have expertise
in some area that the algebra doesn't cover and understand the algorithms
it would be useful to propose new code. I expect that the criteria for
accepting new algebra will be challenging because we have to be careful
that the new code is correct, well documented, well tested, well
reviewed (like technical papers, I hope to see a list of reviewers) and
that it plays well with the rest of the system. There are huge areas of
mathematics that are never mentioned in Axiom so this is a fertile field
of work.

There are more tasks to think about and I'm open to suggestions for new
ones. Check out the main page (http://savannah.nongnu.org/projects/axiom)
or the homepage (http://www.nongnu.org/axiom) for further ideas. The main
thing is to figure out how you can best contribute.

Tim

P.S. Your english is fine.


Date: 28 Oct 2002 10:05:50 -0500
From: Camm Maguire
To: Tim Daly
Subject: Re:  source

Greetings!

Tim Daly writes:

> Camm,
> 
> I haven't posted the source yet. I'm trying to build a working
> version.  The source code won't be particularly useful if you can't
> run it.  Axiom is now based on a lisp (Codemist Common Lisp) that is new
> since I last used it, and I'm learning how to build it. The key issue is
> that the underlying Lisp requires some built-in support.  That and some
> personal commitments are taking time.
> 

No problem, I understand completely.  Not that I have much time
either, but if you get overloaded, you might try posting what you have
(with ample disclaimers) and solicit the few free moments of other
interested parties to get some traction. Much to my pleasant surprise,
the gcl developer list has grown to 11, most of whom contribute
regularly.  

> I have been looking at the GCL-MPI enhancements as one of the listed
> goals is having parallel programming support. I'll get back to you on
> this as soon as I resurface.
> 

Great!  We would like to include the mpi extensions in the
distribution.  We need to figure out a policy, though, for these
optional ffi interfaces, as they are beginning to proliferate.

> Tim
> 

Thanks for your work with axiom!


Date: Mon, 28 Oct 2002 17:25:09 -0500
From: Tim Daly
To: Camm Maguire
Subject:  [Axiom] source

Camm,

>No problem, I understand completely.  Not that I have much time
>either, but if you get overloaded, you might try posting what you have
>(with ample disclaimers) and solicit the few free moments of other
>interested parties to get some traction. Much to my pleasant surprise,
>the gcl developer list has grown to 11, most of whom contribute
>regularly.  

Well, I'm working at redefining the way the system is built and
I'm not 100% certain of all of the details. I've nearly
finished the CCL lisp system, which will be the first release
platform, and I'll see what kind of reaction that generates.
I should be posting the first draft of the CCL code shortly.
Essentially the goal is transitioning the system build to work
from literate programs. The first draft requires a lot of writing
as there is no other documentation. I'm finding all kinds of interesting
advantages and I've only just begun using it. I've also built
some system tools that, as I got smarter about things, turned
out to be unnecessary. Simplify, simplify...

I recommend that you take a look at this literate programming 
stuff. It is a bit tedious to set up but the system should be
much more maintainable in the long term. (Besides, it would
be REAL convenient if you make GCL literate so I don't have to :-) )

Tim


Date: Mon, 28 Oct 2002 18:28:16 -0500
From: Tim Daly
To: Robert Morelli
Subject:  Axiom Development

>In a nutshell, I'm interested in many aspects of the AXIOM project but
>my immediate problem is funding.  I noticed you mention a number of
>potential sources of funding on your TODO page.  I'm wondering if you
>have any advice on how I could proceed.

So far there have been no positive replies about funding Axiom.
All of the funding involved comes from my own pocket except for
the server services from the free software foundation. One of
these days I should send them a donation. It would be nice to
be able to work on Axiom full time but that won't happen again
in my lifetime.

>My background is in pure math.  I received my PhD under Raoul Bott at
>Harvard.  My mathematical research centers on the interplay between
>polyhedral geometry and algebraic geometry.  However, for about the
>past year I've been working in the CS department at the University of
>Utah.  Unfortunately, I recently learned that my support here will end
>in 3 months, so I am looking at various possibilities for employment
>or funding after that point.

I know a tiny bit about algebraic topology but nothing about 
algebraic geometry. If you can recommend a textbook I'll look
at it and suggest a possible connection to Axiom. Or you could
check with Jon McCammond at UC Santa Barbara. He's been doing
work in Geometric Group Theory.
Also see www.math.ucsb.edu/~jon.mccammond/geogrouptheory/people.html

>In my more recent involvement in computer science, I've focused on
>computer language design and formal methods.  (By the way, I'm okay
>with Common Lisp, but I prefer languages like Haskell and OCaml for
>internals.  I'm also good with Java, which is well suited to some
>kinds of cross platform user interface work.)

Heathen! :-) Sorry, that one got away. Common Lisp is a useful
implementation language because the source text and the data
structures use the same syntax, a feature we use quite often.
There is nothing sacred about it, however, as the Aldor compiler
is implemented in C (Aldor is Axiom's standalone compiler).

>One of my ideas is to work on computer algebra.  I am drawn to this
>for several different, partly incompatible, reasons, having to do with
>my mathematical research, my interest in computer programming
>languages, and my experiences teaching mathematics.

>First, there's the open source issue, about which I'm sure you need no
>convincing.  For commercial systems like Maple, and especially
>Mathematica, cost is exorbitant, open documentation is lacking,
>precise semantics are lacking, source code is not freely available,
>etc.  This is a big negative for both teaching and research.  It's
>long been one of my dreams to have a high quality open source symbolic
>mathematics system.

Actually, a large portion of the computational math community has the
same issue. Almost everyone I've spoken to wants open source so they
can do various things (not the least of which is to fix broken code).

>My second reason has to do with my experiences using symbolic
>computation in my research.  I've used general purpose programming
>languages for this, general purpose CA systems, and domain specific
>systems.  I've had mixed success with all three breeds.  My chief
>complaint has nothing to do with efficiency or fast algorithms.  I'm
>more interested in reliability, usability, and especially
>expressiveness.  General programming languages don't have the
>facilities, systems like Mathematica are inadequate in abstract
>domains like algebraic geometry, and domain specific systems lack
>flexibility and constantly reinvent the wheel in ad hoc and limiting
>ways.  I find the difficulty of expressing mathematics in programming
>languages a fascinating problem.

According to Daly's Hasty Generalization Theorem (TM), there are
three kinds of computer algebra systems.

Type 1 is the library approach. The insight begins with the fact
that the implementors' favorite language has a type system and that
there is a nice mapping from types to abstract algebra. A large
library gets built that no one but its developers can use, because
it is so complex. An interpreter is usually placed over the library
to make it more usable, but the library is the key.

Type 2 is the engineering approach. Do whatever is necessary to make
it work. The key symptom is that you can subtract two values of the
same type, say matrices, and get a 0 (integer). Note the loss of type
information, because a 0 is a 0 is a 0, right? These systems are easy
to use at first, but they have trouble scaling because the coercions
that make them work also turn out to be a source of bugs in more
complex situations.
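The loss-of-type-information symptom can be made concrete with a toy
sketch (purely illustrative; no real CAS is implemented in these few
lines): a symbolic matrix type whose subtraction collapses a
self-difference to the bare integer 0, after which every matrix
operation is gone.

```python
# Toy illustration of the "a 0 is a 0 is a 0" problem: an over-eager
# coercion returns a plain integer where a typed zero matrix belongs.

class SymMatrix:
    """A symbolic matrix carrying only a name and its dimensions."""
    def __init__(self, name, rows, cols):
        self.name, self.rows, self.cols = name, rows, cols

    def __sub__(self, other):
        # The "engineering" shortcut: M - M is "obviously" zero,
        # so just hand back the integer 0 and drop the dimensions.
        if other is self:
            return 0
        return SymMatrix("(%s-%s)" % (self.name, other.name),
                         self.rows, self.cols)

    def transpose(self):
        return SymMatrix(self.name + "^T", self.cols, self.rows)

m = SymMatrix("M", 2, 3)
z = m - m
print(type(z).__name__)   # -> int: the 2x3 shape is gone
# z.transpose()           # AttributeError: the integer 0 has no matrix methods
```

A type 3 system would instead return the zero element of the domain
Matrix(2, 3), so transpose and every other matrix operation would
still apply.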

Type 3 is the theory approach. The symptom is that a language is
defined that is close to the mathematics you want to express. This
makes the algorithms clearer and, therefore, easier to get right.
The problem with these systems is their very steep initial learning
curve. However, they scale better because they rest on good
theoretical models, and you can argue strongly for the correctness
of the results.

Axiom is a type 3 system. It is harder to learn but, once learned,
it becomes easier to write correct algorithms.

>My third reason has to do with my more recent involvement in computer
>science where I've gained some knowledge of programming language
>design.  I'm particularly interested in advanced type systems, module
>systems, and other devices that balance expressiveness with structure
>and safety.  From this point of view, systems like Mathematica are
>rather undisciplined.  Simply carrying state of the art language
>design to the CA world seems a worthy undertaking, but this is just a
>first step.  An even bigger ambition is to reverse the process -- to
>explore the deep and rich domain of mathematics as a vehicle for
>research in programming language design itself.

The Aldor (external compiler)/Spad (internal compiler) language
IS state of the art. Very few languages have dependent types,
parameterized types and types as first-class objects. Stephen
Watt and his team have done some very impressive work in this
area (www.aldor.org). Also check out references to Manuel Bronstein
(Sumit) and Nicolas Mannhart (Piit). Having been directly involved
in defining and implementing four commercial programming languages,
I'm kinda burned out in this area.

>These three reasons cover a huge amount of ground, a good deal of
>which overlaps with the goals of the AXIOM project.  Like I said,
>there are also some incompatibilities; experimental research in design
>is somewhat at odds with the goal of producing a polished system
>targeted at teachers, students, and ordinary users.

Half the teachers in France want to use Axiom for teaching in a
much more polished form so there is a lot of support there. Send
mail to the OSCAS@ACM.ORG mailing list and see what support you
find (they may even have a job opening). Gilbert Baumslag at 
CCNY wants to converge the Magnus user interface with Axiom.
Magnus has a "zero learning curve" philosophy and a completely
different direction than any other computer algebra system.
(see www.grouptheory.org for Magnus)

>So where in this expanse of possibilities would it make sense for me
>to work?  That has everything to do with funding.  Like I said, my
>current funding runs out in 3 months.  I can bear a lapse for a few
>more months after that, but I can't work indefinitely without support.

Unfortunately, I was only recently re-hired myself (I was one of the
chosen 17,000 WorldCom layoffs). The people page listed above could
give you a list of places to apply, I guess.

>Keep in mind that getting funded in computer science is tricky for me
>because I have no formal background in that field.  I have to lean on
>my math background and the credentials of my collaborators.  Here are
>some rough ideas for specific kinds of project proposals I have in
>mind:
>
>1.  General user interface, usability, accessibility, development for
>AXIOM.  The tools and libraries needed to make top quality end-to-end
>usability and packaging feasible are just now coming together in the
>open source world.  This would be pitched as a means of growing the
>AXIOM community and improving AXIOM as a tool for education and 
>research.

I would recommend taking a look at TeXmacs as a possible place to
start. Joris van der Hoeven is the author and has exactly the same
goals. Andrey Grozin is another contact.

>2.  AXIOM in education.  I've taught most of the undergraduate courses
>in math at the U. of Utah, where we've generally used Maple.  I'd
>propose working with the math department on integrating AXIOM into some
>of these courses, and using the experience to improve AXIOM as a
>teaching tool.

Memory fails me for the best contact but check with Paul Zimmermann.
He can put you in touch with people who share your interest in this.

>3.  Integrating AXIOM with theorem proving.  I'm right now working
>with one of the principal developers of the HOL theorem proving system
>(who happens to be in the Utah CS department).  I don't speak for him,
>but he and I have discussed integrating HOL with computer algebra and
>I think he'd be interested if we could get funded to do the work.

The best contacts here are either at UTexas (ACL2, contact
Michael Bogomolny) or Cornell (MetaPRL, contact Sergey Artimov).

>4.  Programming language research in the context of symbolic algebra.
>I've discussed this idea with one of the computer languages people in
>the CS department.  He's an expert in component and module systems and
>the principal developer of PLT Scheme and the Dr. Scheme development
>environment.  (By the way, Dr. Scheme's design is very flexible and it
>can be easily modified to provide a development environment for a
>computer algebra system.)  Again, I don't speak for him, but he seems
>enthusiastic about the idea.

I'm unfamiliar with PLT Scheme and Dr. Scheme, though I've used
Scheme in the past. Axiom requires some low-level modifications to
most Lisps to support things like sockets and dynamic loading of
native code. We also plan to push Axiom into parallel programming,
so there would have to be some support for MPI. I have set up a
Beowulf cluster, so I'm interested in reflecting this parallelism
into the type hierarchy and the data structures.

Hope this helps.

Tim


Date: Tue, 29 Oct 2002 19:45:52 -0500
From: Tim Daly
To: Norman Ramsey
Subject:  Catch 22

Norman,

I've got a catch 22 going on and perhaps you can suggest a better solution.

Let's suppose I have a literate program P that documents a Java
program. P contains several classes. In order to extract the classes
from the P.nw file I need a Makefile to automate the extraction. The
Makefile contains lines like:

extract:
	notangle -Rfoo.java P.nw >foo.java
	notangle -Rbar.java P.nw >bar.java

etc...

Being a fan of literate programming, I keep the Makefile literate
as well, so to extract it we need to say:

        notangle Makefile.nw >Makefile

Now if I want to send you a literate program, you get two files,
P.nw and Makefile.nw. I'm trying to figure out a way to reduce
this to one file by embedding the Makefile into P. Thus, I'd like
to incant:

       notangle -withMakefile P.nw
       make

so that the -withMakefile would look for the <<Makefile>> tag
and extract it.

This almost works now, except that the steps are more tedious,
so I've taken to keeping the files separate. Is this a reasonable
modification? Would others find it useful? I feel it would be
more elegant to send just one file rather than two.

Tim


Date: Tue, 29 Oct 2002 23:14:28 -0500
From: Tim Daly
To: Norman Ramsey
Subject:  Re: Catch 22

I played with your suggestion of using bundle and distributing
both files. It seems no different from just using tar. However,
a different idea works out to a good solution. If I use
the <<*>> tag to surround the Makefile code:

\section{The Makefile}
<<*>>=
all:
        notangle -Rfoo.java P.nw >foo.java
        notangle -Rbar.java P.nw >bar.java
        javac *.java
@

and the tags:

\section{foo.java}
<<foo.java>>=
...
@
\section{bar.java}
<<bar.java>>=
...
@

then I can send just the P.nw file. The rules for unpacking it
and making the program become:

   notangle P.nw >Makefile
   make

This seems like an elegant solution as I only need to send one
file, the Makefile is up to date, and there are no modifications
needed to noweb.

Tim


Date: Thu, 31 Oct 2002 23:23:33 -0500
From: Tim Daly
To: Bill Page
Subject:  Re: Status

Actually, I have no estimate. The key to the game is to get the Lisp
running, as this is the only part of the system that is new to me.
Axiom used to be hosted on AKCL, now GCL, and Camm and I have discussed
rehosting it there. I expect it to run on both, as each has its
advantages. In any case, though, the goal is to get it to run anywhere,
and I'm working on that at the moment. There really isn't any point
in posting the sources, as the build process is very complex and not
yet documented (it will be). I've built the first version of the Lisp
and am now working on building the "image" file. Unfortunately there
isn't any obvious way to share this task.

The new system build uses noweb (search for "noweb Ramsey" on Google),
a tool to support literate programming. If Axiom is to have any
chance of surviving, it has to be documented so that anyone willing
to put in the effort can learn how to build, modify, and maintain
it. I suggest you look at noweb; I can send you an example file or
two to bring you up to speed on how I'm using it.
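For a flavor of what notangle does, here is a minimal Python sketch
of its core operation: extracting a named <<chunk>>= ... @ chunk
from a noweb file. (Real notangle also expands nested chunk
references inside chunk bodies; this sketch omits that.)

```python
def extract_chunk(nw_text, name):
    """Concatenate the bodies of every <<name>>= ... @ chunk."""
    body, inside = [], False
    for line in nw_text.splitlines():
        if line.strip() == "<<%s>>=" % name:
            inside = True          # start of a chunk with our name
        elif inside and line.strip() == "@":
            inside = False         # a lone "@" ends the code chunk
        elif inside:
            body.append(line)
    return "\n".join(body)

# A tiny noweb file mixing documentation and two named chunks.
doc = (
    "Some documentation text.\n"
    "<<*>>=\n"
    "all:\n"
    "\tnotangle -Rfoo.java P.nw >foo.java\n"
    "@\n"
    "More documentation.\n"
    "<<foo.java>>=\n"
    "class Foo {}\n"
    "@\n"
)

print(extract_chunk(doc, "foo.java"))   # -> class Foo {}
```

With this model in mind, `notangle -Rfoo.java P.nw` is just "extract
the chunk named foo.java", and plain `notangle P.nw` extracts the
root chunk <<*>>.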

Once the lisp build works locally I can upload the lisp portion
of the system, you can try to build it, and we can work on 
correcting the problems with the build.

What is your background? Programmer? Mathematician? What area of
Axiom strikes your interest?

Tim

\end{verbatim}

\eject
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\cleardoublepage
%\phantomsection
\addcontentsline{toc}{chapter}{Bibliography}
\bibliographystyle{axiom}
\bibliography{axiom}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\cleardoublepage
%\phantomsection
\addcontentsline{toc}{chapter}{Index}
\printindex
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
