= Abstracts =

These are the abstracts for all the talks scheduled for the workshop, listed in alphabetical order. For times, see the [:msri07/schedule: schedule] itself.
[[TableOfContents]]

== Bailey: Experimental Mathematics and High-Performance Computing ==
 
[http://crd.lbl.gov/~dhbailey/ David H Bailey], Lawrence Berkeley Lab
 
Abstract:
 
Recent developments in "experimental mathematics" have underscored the value
of high-performance computing in modern mathematical research. The most
frequent computations that arise here are high-precision (typically
several-hundred-digit accuracy) evaluations of integrals and series,
together with integer relation detections using the "PSLQ" algorithm. Some
recent highlights in this arena include: (1) the discovery of "BBP"-type
formulas for various mathematical constants, including pi and log(2); (2)
the discovery of analytic evaluations for several classes of multivariate
zeta sums; (3) the discovery of Apéry-like formulas for the Riemann zeta
function at integer arguments; and (4) the discovery of analytic evaluations
and linear relations among certain classes of definite integrals that arise
in mathematical physics. The talk will include a live demo of the
"experimental mathematician's toolkit".

== Cohn: <TITLE> ==
[http://research.microsoft.com/~cohn/ Henry Cohn (Microsoft Research)]

== Cooperman: <TITLE> ==
[http://www.ccs.neu.edu/home/gene/ Gene Cooperman (Northeastern University)]

== Edelman: <TITLE> ==
[http://www-math.mit.edu/~edelman/ Alan Edelman (MIT)]


== Granger: Interactive Parallel Computing using Python and IPython ==

[http://txcorp.com Brian Granger - Tech X Corp.]

Interactive computing environments, such as Matlab, IDL, and
Mathematica, are popular among researchers because their
interactive nature is well matched to the exploratory nature of
research. However, these systems have one critical weakness:
they are not designed to take advantage of parallel computing
hardware such as multi-core CPUs, clusters, and supercomputers.
Thus, researchers usually turn to non-interactive compiled
languages, such as C, C++, or Fortran, when parallelism is needed.

In this talk I will describe recent work on the IPython project
to implement a software architecture that allows parallel
applications to be developed, debugged, tested, executed and
monitored in a fully interactive manner using the Python
programming language. This system is fully functional and allows
many types of parallelism to be expressed, including message
passing (using MPI), task farming, shared memory, and custom
user-defined approaches. I will describe the architecture, provide an
overview of its basic usage, and then give more sophisticated
examples of how it can be used in the development of new parallel
algorithms. Because IPython is one of the components of the SAGE
system, I will also discuss how IPython's parallel computing
capabilities can be used in that context.
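
(For flavor, a minimal sketch of this interactive style using the ipyparallel package, the present-day descendant of IPython's parallel machinery; the interfaces described in the talk differ. It assumes a set of engines has been started, e.g. with `ipcluster start -n 4`.)

{{{#!python
# Minimal sketch using ipyparallel (the modern descendant of IPython's
# parallel machinery); assumes engines started with `ipcluster start -n 4`.
import ipyparallel as ipp

rc = ipp.Client()    # connect to the running controller
view = rc[:]         # a DirectView over all available engines

# Task-farm a simple function across the engines, interactively.
print(view.map_sync(lambda x: x * x, range(16)))

# Run the same call on every engine and collect one result per engine.
print(view.apply_sync(lambda a, b: a + b, 3, 4))  # e.g. [7, 7, 7, 7]
}}}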


== Harrison: Science at the petascale --- tools in the tool box. ==

[http://www.csm.ornl.gov/ccsg/html/staff/harrison.html Robert Harrison] (Oak Ridge National Lab)

Petascale computing will require coordinating the actions of 100,000+
processors, and directing the flow of data between up to six levels
of memory hierarchy and along channels that differ by over a factor of
100 in bandwidth. Amdahl's law requires that petascale applications
have less than 0.001% sequential or replicated work in order to
be at least 50% efficient. These are profound challenges for all but
the most regular or embarrassingly parallel applications, yet we also
demand not just bigger and better, but fundamentally new science.
In this presentation I will discuss how we are attempting to confront
simultaneously the complexities of petascale computation while
increasing our scientific productivity. I hope that I can convince you
that our development of MADNESS (multiresolution adaptive numerical
scientific simulation) is not as crazy as it sounds.
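
(A back-of-the-envelope check, not part of the talk, of the Amdahl's law figure quoted above:)

{{{#!python
# Amdahl's law: speedup S(N) = 1 / (s + (1 - s)/N) for sequential fraction s,
# so parallel efficiency is E = S(N)/N = 1 / (N*s + 1 - s).
N = 100_000

def efficiency(s, n):
    return 1.0 / (n * s + (1.0 - s))

# Requiring E = 0.5 gives N*s + 1 - s = 2, i.e. s = 1/(N - 1).
s_max = 1.0 / (N - 1)
print(f"max sequential/replicated fraction: {s_max:.2e} ({100 * s_max:.4f}%)")  # ~0.001%
print(f"efficiency at that fraction:        {efficiency(s_max, N):.3f}")        # 0.500
}}}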

This work is funded by the U.S. Department of Energy, the Division of
Basic Energy Sciences, Office of Science, and was performed in part
using resources of the National Center for Computational Sciences, both
under contract DE-AC05-00OR22725 with Oak Ridge National Laboratory.


== Hart: <TITLE> ==
[http://www.maths.warwick.ac.uk/~masfaw/ Bill Hart (Warwick)]

== Hida: <TITLE> ==
[http://www.cs.berkeley.edu/~yozo/ Yozo Hida (UC Berkeley)]

== Martin: <TITLE> ==
[http://www.math.jmu.edu/~martin/ Jason Martin (James Madison University)]

== Maza-Xie: <TITLE> ==
[http://www.csd.uwo.ca/~moreno/ Moreno Maza and Xie (Western Ontario)]

== Noel: <TITLE> ==
[http://www.math.umb.edu/~anoel/ Alfred Noel (UMass Boston / MIT)]

== Qiang: <TITLE> ==
[http://www.yiqiang.net/ Yi Qiang (UW)]

== Roch: <TITLE> ==
[http://www-id.imag.fr/Laboratoire/Membres/Roch_Jean-Louis/perso.html Jean-Louis Roch (France)]

== Verschelde: <TITLE> ==
[http://www.math.uic.edu/~jan/ Jan Verschelde (UIC)]

== Yelick: <TITLE> ==
[http://www.cs.berkeley.edu/~yelick/ Kathy Yelick (UC Berkeley)]

Please see [http://modular.math.washington.edu/msri07/schedule.html#abstracts the HTML abstracts page] or the [http://modular.math.washington.edu/msri07/abstracts.pdf abstracts in PDF format].
