ICMS 2014 Session: Software, Design and Practice in Special Functions and Concrete Mathematics
Aim and Scope
Concrete mathematics can be thought of as the complement to abstract mathematics. Its scope covers problems in recurrences, summation and integration, number theory, combinatorics, generating functions, probability, and so on. In recent years, software has allowed one to obtain numerical results for these problems through simulation and experimentation. On the other hand, software has also made it possible to obtain closed-form answers, via its ability to handle a large array of special functions. This session aims to bring together like-minded researchers to share their progress.
Topics include, but are not limited to, use of mathematical software in:
- Concrete problems in, and bridging, number theory, combinatorics (for example, enumeration), and probability (for example, random walks)
- Exploration of well-known constants (such as pi)
- Numerical treatment and closed forms evaluations of integrals, recurrences, and special functions
- Relationships between special functions and other areas of mathematics/science
- Use of concrete mathematics in education
- A short abstract will appear below as soon as it is accepted.
- An extended abstract will appear in the conference proceedings.
- A journal special issue consisting of full papers may be organized after the meeting.
- If you would like to give a talk at ICMS, you need to submit a short abstract first and, if you wish, an extended abstract; see the conference website for details.
- The deadlines are: Short Abstract: April 30; Extended Abstract: May 21.
- After the meeting, the session organizers will communicate the submission guidelines for the journal special issue.
Fast algorithms for Monte Carlo simulation of self-avoiding walks
Nathan Clisby (University of Melbourne)
The self-avoiding walk (SAW) is a widely studied model of polymers with a long and rich history which, despite its extremely simple definition, has resisted all attempts at an exact solution.
Consequently, much of what we know about SAW comes from computer experiments, both via sophisticated enumeration algorithms such as the finite lattice method, and Monte Carlo simulations.
In this talk, I'll give a brief overview of the history of Monte Carlo simulations of self-avoiding walks and related models, and attempt to convey the underlying idea for the different algorithms. I'll describe in detail the most powerful known method for sampling SAW, the pivot algorithm, which is a Markov chain Monte Carlo algorithm for which the
elementary "move" is a rotation or reflection of part of the walk.
Then I'll describe how recent improvements in the computer implementation of the pivot algorithm have greatly increased the speed of computer simulations, and the length of walks that can be studied. This new implementation has been used to simulate walks with hundreds of millions of steps, and to calculate various quantities for SAW to unprecedented accuracy. Two key observations motivate the implementation: (a) the linear order of a SAW leads to a natural binary-tree representation, and (b) one can determine that a walk is self-avoiding without knowing everything about it (akin to "lazy evaluation" in computer programming). These principles may in the future lead to efficient algorithms for other models of walks.
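The basic pivot move is easy to state. The sketch below shows a naive O(n) version on the square lattice (an illustration only, not the tree-based implementation described in the talk): pick a random pivot site, apply a random lattice symmetry to the tail of the walk, and accept the move only if the result is still self-avoiding.

```python
import random

# 2D square-lattice symmetries as 2x2 integer matrices (a, b, c, d),
# i.e. (x, y) -> (a x + b y, c x + d y): three rotations and four reflections.
SYMMETRIES = [
    (0, -1, 1, 0), (0, 1, -1, 0), (-1, 0, 0, -1),               # rotations
    (1, 0, 0, -1), (-1, 0, 0, 1), (0, 1, 1, 0), (0, -1, -1, 0), # reflections
]

def pivot_move(walk):
    """Attempt one pivot move; return the new walk, or the old one if rejected."""
    n = len(walk)
    k = random.randrange(1, n - 1)           # random pivot site
    a, b, c, d = random.choice(SYMMETRIES)   # random lattice symmetry
    px, py = walk[k]
    head = walk[:k + 1]
    # apply the symmetry about the pivot site to the tail of the walk
    tail = [(px + a * (x - px) + b * (y - py),
             py + c * (x - px) + d * (y - py)) for x, y in walk[k + 1:]]
    new_walk = head + tail
    # accept only if the result is self-avoiding (O(n) check with a hash set)
    if len(set(new_walk)) == n:
        return new_walk
    return walk

# usage: start from a straight walk and apply pivot moves
walk = [(i, 0) for i in range(20)]
for _ in range(100):
    walk = pivot_move(walk)
assert len(set(walk)) == len(walk)   # still self-avoiding
```

The expensive step is the self-avoidance check; the tree-based implementation discussed in the talk is precisely about avoiding this full O(n) scan for most attempted pivots.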
BetaSCP: A Program for the Optimal Prediction of Side-chains in Proteins
Joonghyun Ryu (Hanyang University, Korea)
Mokwon Lee (Hanyang University, Korea)
Jehyun Cha (Hanyang University, Korea)
Chanyoung Song (Hanyang University, Korea)
Deok-Soo Kim (Hanyang University, Korea)
Proteins mediate important biological functions in living organisms. It is generally believed that
protein structure determines protein function, so it is no surprise that many prior studies have sought
to determine protein structures, either experimentally or computationally. A protein consists of a set
of amino acids linearly connected to each other via peptide bonds. A residue, the remaining
part of each amino acid in the bonded sequence, consists of two parts: backbone and side-chain.
The structure of a protein is fixed by the dihedral angles of certain rotatable bonds in its residues.
The "side-chain prediction problem", abbreviated as the SCP-problem, is the computational problem
of predicting the optimal structure of a protein by finding the optimal dihedral angles. Assuming that
the backbone structure is fixed (i.e., the coordinates of the backbone atoms are given),
the SCP-problem finds the optimal side-chain structure by predicting the dihedral angles in the side-chains so that
the total energy of the structure is minimized. The SCP-problem is one of the key computational cornerstones
for many important problems such as protein design, flexible docking of proteins, and homology modeling.
While each dihedral angle can, in theory, take any value between 0 and 360 degrees, it is well known
that there are preferred ranges of dihedral angles, each of which maps to a representative angle obtained through
statistical analysis. A combination of such representatives for a residue is called a "rotamer",
short for "rotational isomer". The SCP-problem can then be interpreted as
a combinatorial optimization problem that assigns the energetically optimal rotamer to each residue.
Thus the SCP-problem can be formulated as a minimization problem over an integer linear program,
which is NP-hard and therefore inevitably invites heuristic approaches.
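To make the combinatorial formulation concrete, the toy sketch below solves a tiny SCP-like instance by exhaustive search. All energies here are hypothetical; a real program tabulates force-field energies from a rotamer library, and heuristics such as BetaSCP exist precisely because exhaustive search is exponential in the number of residues.

```python
from itertools import product

# Toy instance (hypothetical data): residue i has len(E_self[i]) rotamers.
# E_self[i][r] is the self-energy of rotamer r at residue i.
E_self = [[1.0, 0.5], [0.2, 0.9, 0.4], [0.7, 0.3]]

def E_pair(i, r, j, s):
    # Made-up pairwise term between rotamers of adjacent residues; a real
    # program would use van der Waals or statistical-potential energies.
    return 0.1 * (r - s) ** 2 if j == i + 1 else 0.0

def total_energy(assignment):
    """Total energy of one rotamer assignment (self + pairwise terms)."""
    e = sum(E_self[i][r] for i, r in enumerate(assignment))
    e += sum(E_pair(i, ri, j, rj)
             for i, ri in enumerate(assignment)
             for j, rj in enumerate(assignment) if i < j)
    return e

# Exhaustive search over all rotamer combinations (2 * 3 * 2 = 12 here).
best = min(product(*[range(len(r)) for r in E_self]), key=total_energy)
print(best, total_energy(best))   # (1, 0, 1) 1.2
```

For n residues with m rotamers each, this loop visits m^n assignments, which is exactly the combinatorial explosion that the BetaSCP heuristic is designed to avoid.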
In this talk, we report a heuristic algorithm, called the BetaSCP algorithm, and its implementation,
which quickly finds an excellent solution to the SCP-problem. The algorithm/program is called BetaSCP
because it is based on the theory of the beta-complex, a computational geometric construct derived from
the Voronoi diagram of spheres, also referred to as an additively weighted Voronoi diagram.
The core idea of the BetaSCP algorithm is to transform the SCP-problem into a simple geometric problem
whose solution process can be facilitated by the Voronoi diagram and its dual structure,
the quasi-triangulation. The BetaSCP algorithm is implemented entirely in C++ using the molecular geometry
engine BULL!, developed by the Voronoi Diagram Research Center (VDRC).
In this presentation, we also demonstrate the BetaSCP program and benchmark it against other programs
that do similar work. The BetaSCP program is available from VDRC both as a stand-alone program and as a web server.
Computation of an Improved Lower Bound to Giuga's Primality Conjecture
Matthew Skerritt (University of Newcastle, Australia)
Our most recent computations tell us that any counterexample to Giuga's 1950 primality conjecture must have at least 19,908 decimal digits. Equivalently, any number which is both a Giuga and a Carmichael number must have at least 19,908
decimal digits. This bound has not been achieved through exhaustive testing of all numbers with up to 19,908 decimal digits, but rather through exploitation of the properties of Giuga and Carmichael numbers. This bound improves upon the
1996 bound of Borwein, Borwein, Borwein, and Girgensohn. We present the algorithm used, and our improved bound. We also discuss the changes over the intervening years as well as the challenges to further computation.
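For concreteness, the Giuga-number condition itself is easy to test: a composite n is a Giuga number exactly when p divides n/p - 1 for every prime p dividing n. The minimal sketch below uses only trial division (the bound above, of course, rests on a far more sophisticated algorithm exploiting the structure of Giuga and Carmichael numbers).

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division (adequate for small n)."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_giuga(n):
    """Composite n is a Giuga number iff p divides n/p - 1 for every prime p | n."""
    ps = prime_factors(n)
    if len(ps) < 2:          # rules out primes and prime powers
        return False
    return all((n // p - 1) % p == 0 for p in ps)

# The smallest Giuga number is 30: 2 | 14, 3 | 9, and 5 | 5.
smallest = next(n for n in range(2, 1000) if is_giuga(n))
print(smallest)   # 30
```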
Mathematical software for modified Bessel functions
Juri Rappoport (Russian Academy of Sciences, Russia)
High-quality mathematical software for the computation of
modified Bessel functions of the second kind with integer,
imaginary, and complex order and real argument has been developed.
The function value may be evaluated with high precision
for a given value of the independent argument x and order r.
These codes are addressed to a wide audience of scientists,
engineers, and technical specialists. Tables of these
functions have been published. The software significantly improves
the capabilities of computer libraries.
These functions arise naturally in boundary value problems
involving wedge-shaped or cone-shaped geometries. They are
fundamental to mathematical modeling activities in applied
science and engineering. Methods of mathematical and numerical analysis are adapted
to create appropriate algorithms for these
functions, and computer codes are written and tested. Power series, the Tau method, and numerical quadratures of trapezoidal type are used to construct the subroutines.
A new realization of the Lanczos Tau method with minimal residue
is proposed for the constructive approximation of solutions of
second-order differential equations with polynomial
coefficients. A Tau method computational scheme is applied to
the constructive approximation of solutions of systems of differential
equations related to the differential equation of
hypergeometric type. Various vector perturbations are discussed. Our choice of perturbation term is a shifted Chebyshev
polynomial with a special form of selected transition and
normalization. The minimality conditions for the perturbation
term are found for a single equation; they are sufficiently simple to verify in a number of important cases. The Tau method approach yields a large saving in computer time.
The mathematical software for the kernels of Lebedev-type index
transforms (modified Bessel functions of the second kind with
complex order) is elaborated in detail. Software for new
applications of Lebedev-type integral transforms and the related dual integral equations to the numerical solution of problems of mathematical physics is constructed. An algorithm for the numerical solution of some mixed boundary value problems for the Helmholtz equation in wedge domains is developed. Worked examples admitting a complete analytical solution demonstrate the efficiency of this approach for applied problems.
- J.M. Rappoport, Some numerical methods for approximation of modified Bessel functions, Proc. Appl. Math. Mech., vol. 7, issue 1, 2007, pp. 2020017-2020018.
- J.M. Rappoport, On some integrals from modified Bessel functions, Progress in Analysis: Proceedings of the 8th ISAAC Congress, vol. 1, Moscow, People's Friendship University of Russia, 2012, pp. 308-315.
An extension and efficient calculation of the Horner's rule for matrices
Shinichi Tajima (University of Tsukuba, Tsukuba, Japan)
Katsuyoshi Ohara (Kanazawa University, Kanazawa, Japan)
Akira Terui (University of Tsukuba, Tsukuba, Japan)
We propose an efficient method for calculating "matrix polynomials" by extending Horner's rule for univariate polynomials. By calculating "matrix polynomials", we mean evaluating a univariate polynomial over a field, via the celebrated Horner's rule, at a given value that is a square matrix over the same field. For a given matrix, we
have developed algorithms with exact arithmetic, based on residue analysis of the resolvent of the matrix, for various computations so far, including spectral decomposition, calculation of (pseudo) annihilating polynomials and eigenvectors. Calculation of matrix
polynomials is at the core of these computations, so it is important to establish an efficient algorithm for calculating matrix polynomials in order to increase the efficiency of the overall algorithms. The computing time of
Horner's rule for matrices is dominated by matrix-matrix multiplications, so we aim to reduce their number. With this strategy, we extend Horner's rule by
partitioning it by a given degree to reduce the number of
matrix-matrix multiplications. With this extension, we show that matrix polynomials can be calculated more efficiently than by the naive Horner's rule. We estimate the arithmetic time complexity of our new
algorithm and derive the degree of partitioning that makes our extension most efficient. This estimate has been
verified in experiments with our implementation on the computer algebra system Risa/Asir; the experiments have also demonstrated
that, at a suitable degree of partitioning, our new algorithm requires significantly less computing time and much less
memory than the naive Horner's rule. Furthermore, experiments show that our
new algorithm is effective for matrix polynomials not only over multiple-precision integers, but also over fixed-precision (IEEE standard) floating-point numbers.
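The partitioning strategy can be sketched as follows. For a partition degree k, write p(x) as a sum of blocks q_j(x) * (x^k)^j with deg q_j < k; the powers A^2, ..., A^k are computed once, after which each block costs a single matrix-matrix product. The sketch below (floating-point NumPy, not the exact-arithmetic Risa/Asir implementation) compares the naive and partitioned versions:

```python
import numpy as np

def horner_matrix(coeffs, A):
    """Naive Horner: p(A) for p(x) = coeffs[0] + coeffs[1] x + ... + coeffs[d] x^d."""
    n = A.shape[0]
    P = coeffs[-1] * np.eye(n)
    for c in reversed(coeffs[:-1]):
        P = P @ A + c * np.eye(n)
    return P

def horner_matrix_partitioned(coeffs, A, k):
    """Horner's rule partitioned by degree k:
       p(x) = sum_j q_j(x) (x^k)^j with deg q_j < k,
    so only A^2..A^k plus one matrix product per block are needed."""
    n = A.shape[0]
    powers = [np.eye(n), A]          # A^0, A^1, ..., A^k
    for _ in range(k - 1):
        powers.append(powers[-1] @ A)
    Ak = powers[k]
    blocks = [coeffs[i:i + k] for i in range(0, len(coeffs), k)]
    # Horner over blocks; each q_j(A) is a cheap linear combination of powers
    P = sum(c * powers[i] for i, c in enumerate(blocks[-1]))
    for block in reversed(blocks[:-1]):
        Q = sum(c * powers[i] for i, c in enumerate(block))
        P = P @ Ak + Q
    return P

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, (5, 5)).astype(float)
coeffs = list(range(1, 13))          # p(x) = 1 + 2x + ... + 12 x^11
assert np.allclose(horner_matrix(coeffs, A),
                   horner_matrix_partitioned(coeffs, A, 4))
```

For degree d, the number of matrix-matrix products drops from d to roughly d/k + k, which is minimized near k = sqrt(d); this is the kind of trade-off behind the optimal partition degree derived in the talk.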
Expectations on IFS attractors
Michael Rose (University of Newcastle, Australia)
In a previous work, the authors extended the classical notion of box integrals - expectations over unit hypercubes - to encompass a class of Cantor-like fractal sets. We present a generalization of this work in which the notion of expectation is extended to any self-similar fractal attractor of an iterated function system.
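Such expectations over an attractor can be approximated numerically by sampling the attractor's natural measure with the chaos game; the sketch below estimates E[|r|] over the Sierpinski triangle (an illustrative sampling sketch only, not the analytical extension of box integrals presented in the talk).

```python
import math
import random

# The Sierpinski triangle is the attractor of three maps, each contracting
# by 1/2 toward one vertex of an equilateral triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def chaos_game_expectation(f, n=200_000, burn_in=100):
    """Average f over points generated by the chaos game on the attractor."""
    x, y = 0.0, 0.0
    total = 0.0
    for i in range(n + burn_in):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2   # apply a random contraction
        if i >= burn_in:
            total += f(x, y)
    return total / n

# Monte Carlo estimate of the expectation E[|r|] (distance to the origin)
est = chaos_game_expectation(lambda x, y: math.hypot(x, y))
```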
Developing linear algebra packages on Risa/Asir for eigenproblems
Katsuyoshi Ohara (Kanazawa University, Japan)
Shinichi Tajima (University of Tsukuba, Japan)
Akira Terui (University of Tsukuba, Japan)
We are developing linear algebra packages on Risa/Asir, a computer algebra system. The aim is to provide programs for efficiently and exactly solving eigenproblems on the computer algebra system for large scale square matrices over integers or rational numbers.
For a square matrix with rational entries, the eigenvalues are in general algebraic numbers. A square matrix can be regarded as a linear transformation on a finite-dimensional complex vector space.
Each invariant subspace has a basis that can be expressed by polynomials, with rational coefficients, in the corresponding eigenvalue; each such polynomial has smaller degree than the defining polynomial of the eigenvalue. Moreover, the projection matrix onto the invariant subspace, which appears in the spectral decomposition, can also be represented by polynomials in the eigenvalue. Since invariant subspaces have a common expression for conjugate eigenvalues, this representation depends only on the defining polynomial of the eigenvalue.
Some mathematical software gives verbose expressions as solutions of eigenproblems; for example, "Eigenvectors" in Maple 17 uses rational expressions in the eigenvalues. In such cases it is hard to analyze eigenproblems for large-scale matrices because of the high cost of normalizing algebraic numbers. This motivates us to develop and implement efficient algorithms for large-scale problems. We have also paid attention to parallel computation from the beginning of this research.
Our approach is based on minimal (or pseudo-) annihilating polynomials and residue analysis of resolvents. The resolvent of a square matrix is a matrix-valued rational function of a complex variable whose denominator agrees with the minimal polynomial of the matrix.
Theoretically we have the spectral decomposition by using Laurent expansion of the resolvent. In order to execute residue analysis of the resolvent we use algebraic local cohomology groups.
On the other hand, a monic polynomial is called an annihilating
polynomial for a vector if the polynomial of the matrix annihilates
the vector. The minimal polynomial is given by the LCM of the minimal annihilating polynomials for the standard basis of the Euclidean vector space. Minimal annihilating polynomials are also applied in parallel computation.
The software package consists of several programs. Users input a square matrix and its characteristic polynomial to invoke each function.
The following functions are currently prepared for solving eigenproblems:
(1) computing bases of eigenspaces for corresponding eigenvalues,
(2) the spectral decomposition,
(3) Jordan chains for corresponding eigenvalues.
Users can also call the following functions in the underlying infrastructure:
(4) minimal or pseudo-annihilating polynomials,
(5) algebraic local cohomologies and residue calculation.
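As a small illustration of minimal annihilating polynomials (a sketch using SymPy's exact rational arithmetic, not the package's resolvent-based algorithm; the function name is hypothetical), the minimal annihilating polynomial of a vector can be read off from the first linear dependency in its Krylov sequence:

```python
from sympy import Matrix, expand, symbols

def minimal_annihilating_polynomial(A, v, x):
    """Smallest-degree monic p with p(A) v = 0 (assumes v != 0), found from
    the first linear dependency in the Krylov sequence v, A v, A^2 v, ...
    All arithmetic is exact over the rationals."""
    krylov = [v]
    for _ in range(A.shape[0]):
        krylov.append(A * krylov[-1])
        M = Matrix.hstack(*krylov)
        ns = M.nullspace()
        if ns:                            # first dependency found
            c = ns[0] / ns[0][-1]         # normalize: leading coefficient 1
            return expand(sum(c[i] * x**i for i in range(len(c))))

x = symbols('x')
A = Matrix([[2, 0, 0], [0, 2, 0], [0, 0, 3]])
p1 = minimal_annihilating_polynomial(A, Matrix([1, 0, 0]), x)   # x - 2
p3 = minimal_annihilating_polynomial(A, Matrix([0, 0, 1]), x)   # x - 3
print(p1, p3)
```

The minimal polynomial of A is then the LCM of these minimal annihilating polynomials over the standard basis vectors, here (x - 2)(x - 3); since each basis vector can be processed independently, this step parallelizes naturally.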