Keio University

"My TeX Life"

Participant Profile

  • Takashi Nodera


At 14:46:18 on March 11, 2011, the Tōhoku earthquake struck with its epicenter off the Sanriku coast; together with the ensuing tsunami, this disaster, the Great East Japan Earthquake, brought widespread damage to many regions. In science and technology, the predictive analysis of earthquakes and tsunamis, treated as the mathematics of destructive phenomena, has been actively researched in recent years thanks to the advent of supercomputers. This time, however, we were no match for the staggering destructive power of nature, and the outcome was unexpectedly tragic. This is because, at present, a fully satisfactory theoretical analysis of destructive phenomena is not possible, and we must rely on numerical simulations using high-performance computers.

Numerical simulations have been used in various research fields of science and technology in recent years due to the emergence of inexpensive and high-speed computers.

Examples include, in addition to the analysis of destructive phenomena, aeronautical engineering, transient response of semiconductor LSI circuits, heat dissipation in engines, structural analysis, and fluid analysis of Navier-Stokes systems. In these applications, changes in the quantities in question are numerically tracked using computers, and the results obtained are utilized for their respective purposes. Generally, numerical simulations are performed through the following series of processes.

(1) Creation of a mathematical model of the phenomenon (expressed in the form of equations)

(2) Solving the equations (discretization and numerical methods)

(3) Evaluation of the obtained numerical solution

Within these processes, we focus on step (2), the numerical methods that put computers to work: methods (also called algorithms) built on mathematical machinery. We are engaged in the research and development of new numerical methods of this kind, so-called computational tools.
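The three-step workflow above can be sketched concretely. As an illustration only (the choice of model and method is mine, not the author's), take exponential decay as the phenomenon, discretize it with the explicit Euler method, and evaluate the result against the known exact solution:

```python
# A minimal sketch of the three-step simulation workflow, using
# exponential decay as a stand-in phenomenon (an illustrative choice).

import math

# (1) Mathematical model: du/dt = -u, u(0) = 1, with exact solution e^{-t}.
def exact(t):
    return math.exp(-t)

# (2) Discretization and numerical method: explicit Euler with step size h.
def euler(u0, h, steps):
    u = u0
    for _ in range(steps):
        u = u + h * (-u)   # one Euler step for u' = -u
    return u

# (3) Evaluation: compare the numerical solution with the exact one.
h, T = 0.01, 1.0
approx = euler(1.0, h, int(T / h))
error = abs(approx - exact(T))
print(approx, error)
```

Halving the step size h roughly halves the error, which is the first-order accuracy one expects of Euler's method; this kind of evaluation is exactly step (3) of the workflow.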


In large-scale numerical simulations of weather forecasting and earthquakes on the Earth Simulator, it is necessary to find approximate solutions of systems of linear equations with several million unknowns. For such systems, direct methods like Gaussian elimination, which we typically learn from first-year university linear algebra textbooks, are often inadequate. Furthermore, since real-number calculations on computers are performed in floating-point arithmetic, rounding errors inevitably occur with every operation. For large-scale problems, this undermines confidence in the effective precision of the approximate solutions the computer produces. This is where iterative methods come into play: they can yield reasonably good results even for very large-scale problems. In the mid-20th century, an innovative algorithm called the Conjugate Gradient method (the CG method) appeared. I am, of course, one of the researchers who was instantly captivated by it, but by the time I began my research, more than two decades had already passed since the algorithm was developed.
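The Conjugate Gradient method itself is compact enough to sketch in a few lines. The following is a minimal plain-Python version for a symmetric positive definite system Ax = b, demonstrated on a tiny example (the particular matrix and tolerance are illustrative choices, not taken from the text):

```python
# A minimal sketch of the Conjugate Gradient (CG) method for a
# symmetric positive definite system A x = b, in plain Python.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    r = b[:]               # residual r = b - A x  (x starts at 0)
    p = r[:]               # first search direction is the residual
    rs_old = dot(r, r)
    for _ in range(n):     # in exact arithmetic, at most n iterations
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small SPD example: 4x + y = 1, x + 3y = 2.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # approximately [1/11, 7/11]
```

Note the direct-method-like property mentioned in the text: for an n-by-n system, exact arithmetic guarantees convergence within n iterations, which is why the loop is bounded by n.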

Research is born from encounters with people and begins with a single conversation with a good mentor. It is also said that by putting such thoughts into writing, they gain the potential to become reality.

The catalyst for my research on the Conjugate Gradient method was my encounter with Professor Hidetoshi Takahashi in April 1976.

Until then, although I had an interest in methods for solving large-scale linear equations, particularly stationary iterative methods, I did not have a clear research objective that I had settled on. As it happened, while talking with Professor Takahashi, I began to consider researching the Conjugate Gradient method, which is a non-stationary iterative method.

At that time, convenient tools like today's Web did not exist, so I started by writing letters directly to researchers around the world, asking them to send me the papers I wanted to read. Indeed, I still maintain close relationships with many of the researchers I contacted then.

The Conjugate Gradient method is an algorithm that exploits the orthogonality of vectors; it was an unprecedented, novel non-stationary iterative method with the direct-method-like property of converging in a finite number of iterations. In particular, there is a mathematical equivalence between the Conjugate Gradient method and the Lanczos method (a method for solving eigenvalue problems), and I can still recall with interest topics such as the graph-theoretical analysis of how computational errors propagate and the relationship between the algorithm's convergence and the orthogonality among vectors. For example, it has the wonderful property of converging to an approximate solution as long as a local orthogonality relationship holds, even when complete orthogonality among the residual vectors or direction vectors is lost.

This research experience with the Conjugate Gradient method became the foundation for my work on matrix preconditioning and, later, on new Krylov subspace methods for large, sparse, non-symmetric systems, represented by the GMRES method. The development of algorithms for non-symmetric systems in particular is not straightforward, and new algorithms keep emerging; yet a definitive one has not appeared, and the field is still in a "warring states period." It is a case of "striving on, yet my ambitions remain unfulfilled, leaving me in a state of desperation."
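The orthogonality of the residual vectors described above can be verified numerically. This small sketch (the test matrix is my own illustrative choice) runs two CG iterations on a 3-by-3 symmetric positive definite system, records each residual, and checks that the residuals are mutually orthogonal:

```python
# Records CG residual vectors and checks their mutual orthogonality,
# the property discussed in the text. Plain Python, small SPD system.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_residuals(A, b, iters):
    x = [0.0] * len(b)
    r = b[:]                         # r_0 = b - A x_0 with x_0 = 0
    p = r[:]
    history = [r[:]]                 # keep every residual vector
    for _ in range(iters):
        Ap = [dot(row, p) for row in A]
        alpha = dot(r, r) / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r_new, r_new) / dot(r, r)
        p = [ri + beta * pi for ri, pi in zip(r_new, p)]
        r = r_new
        history.append(r[:])
    return history

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
res = cg_residuals(A, b, 2)

# In exact arithmetic r_i . r_j = 0 for i != j; in floating point the
# products are tiny but nonzero, which is the "local" orthogonality issue.
for i in range(len(res)):
    for j in range(i + 1, len(res)):
        print(i, j, dot(res[i], res[j]))
```

In finite precision these inner products are only approximately zero, and for ill-conditioned large-scale systems the loss of global orthogonality becomes significant, which is precisely why the local-orthogonality property the text describes matters in practice.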

Apparently, I have a fondness for new things, and I remember being very intrigued when Japanese word processors first appeared. I started by trying to create a clean copy of a mathematics textbook with a Japanese word processor, but it was a huge task, as I had to start by creating fonts for the mathematical symbols. I also think this was the first time I encountered the term "document processing." Later, I would encounter a document formatting system called TeX, whose main purpose is for writing papers and books in science and technology.

The formula below is known as S. Ramanujan's mysterious Indian melody, and it was typeset with LaTeX. The schematic diagram of the tea-serving doll, too, was drawn entirely in LaTeX.
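The formula image itself has not survived in this copy. As a stand-in, here is one of Ramanujan's celebrated series for 1/π, a formula frequently used to showcase LaTeX's mathematical typesetting; it may well not be the exact formula the author displayed:

```latex
\[
  \frac{1}{\pi} = \frac{2\sqrt{2}}{9801}
    \sum_{k=0}^{\infty} \frac{(4k)!\,(1103 + 26390k)}{(k!)^{4}\, 396^{4k}}
\]
```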

[Image: Tea-serving doll]

It is customary for researchers to publish their findings by submitting papers to journals in science and technology. TeX is a document formatting system developed by Professor D. E. Knuth of Stanford University; today, LaTeX, developed by L. Lamport (then at DEC), is the mainstream. Development of TeX began in the late 1970s, making it surprisingly old for document formatting software, and it is still in active use today.

The way TeX is versioned is a bit unique: the current version of TeX is 3.1415926 (as of March 2008). By convention, each new version appends another digit, so that the version number approaches the value of pi. And since pi is an irrational number, indeed a transcendental number, following this scheme means, to borrow the words of a famous Giants baseball player, that "TeX is eternal and immortal."

TeX systematically provides the functions needed to write books and papers in computer science and mathematical physics. For mathematical formulas in particular, it is unmatched by other document processing systems. In a word, it can produce truly beautiful typeset documents, but since it is not WYSIWYG (what you see is what you get) like Word, it takes effort to master. I remember being hooked for several years when TeX first arrived in Japan. Thanks to that, I now consider myself a full-fledged TeXnician. "TeXnician" sounds nice, but it is what you would now call a "geek": I became a "TeX-loving geek" by pouring unsparing amounts of computer time and effort into it.

In "Outliers," Malcolm Gladwell gives examples of the "10,000-hour rule," a magic number suggesting that any talent or skill can become genuine if practiced for 10,000 hours. True or not, the Beatles put in over 10,000 hours at clubs in Hamburg during their formative years, and the same goes for Bill Gates, who is said to have succeeded because he stayed after school in the computer room, dedicating himself to his studies for over 10,000 hours. I did not get rich by spending 10,000 hours typesetting with TeX, but I do consider myself a decent TeXnician, and I feel that what I gained as a result was significant.
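To give a flavor of the mathematical typesetting praised above, here is a minimal LaTeX document of my own devising (not from the original essay) that ties together the two threads of this piece, the CG method and TeX:

```latex
\documentclass{article}
\begin{document}
Solving the linear system $Ax = b$ by the Conjugate Gradient method
is equivalent to minimizing the quadratic form
\[
  \phi(x) = \tfrac{1}{2}\, x^{\mathsf{T}} A x - b^{\mathsf{T}} x
\]
over successive Krylov subspaces $\mathcal{K}_k(A, r_0)$.
\end{document}
```

Compiling this with any standard LaTeX distribution produces publication-quality output; the markup-based workflow, rather than WYSIWYG editing, is what the author means by TeX taking effort to master.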

Indeed, in my own research life, I've dabbled in this and that, and the desperate struggle continues even now. Perhaps it is, to borrow the words of a famous philosopher, "not how the world is, but how to live in it." It's not that I believe in the "10,000-hour rule," but it is a fact that if I find something I want to do in the future, I will end up spending 10,000 hours on it before I know it. Isn't it also necessary to find something to which you can devote 10,000 hours of effort and boldly take on the challenge?

Gakumon no susume (An Encouragement of Learning) (Research Introduction)
