Posts Tagged ‘statistical mechanics’

Two Draft Books

September 13, 2007

If you’re interested in learning about statistical mechanics, graphical models, information theory, error-correcting codes, belief propagation, constraint satisfaction problems, or the connections between all those subjects, you should know about a couple of books that will be out soon, but for which you can already download extensive draft versions.

The first is Information, Physics, and Computation by Marc Mézard and Andrea Montanari.

The second is Modern Coding Theory by Tom Richardson and Ruediger Urbanke.

I also recommend the tutorial on Modern Coding Theory: the Statistical Mechanics and Computer Science Points of View, by Montanari and Urbanke, from the lectures they gave at the 2006 Les Houches summer school.


The Hubbard Model: a Tutorial

September 9, 2007

Today, I will introduce a new kind of post at Nerd Wisdom that I hope to do more of in the future: the tutorial. These posts will be a little more technical than ordinary posts, and will probably run a little longer as well. They are designed to give the intelligent and well-educated scientist an entry point into a field in which he or she is not already a specialist.

When working with graduate-student interns, I find myself presenting these kinds of tutorials constantly. Unfortunately, material like this is very hard to find in the literature: it is too well known to specialists to be written up, yet it often goes beyond what is available in textbooks. The result is an artificial barrier to entry into new fields.

My first tutorial will cover the Hubbard Model. To keep this post to a reasonable length, I will just include the first part of the tutorial here.

 

I previously discussed high-temperature superconductivity in cuprates, and mentioned that the detailed mechanism is still controversial. However, what is widely agreed is that, as originally proposed by P.W. Anderson, a good model for these materials is the Hubbard Model (see this paper by Anderson for an entertaining and readable argument in favor of this point). And even if one doesn’t agree with that statement, the Hubbard Model is of enormous intrinsic interest, as perhaps the simplest model of interacting electrons on a lattice.

Despite its simplicity, physicists have not been able to solve for the behavior of the two- or three-dimensional Hubbard model in the “thermodynamic limit” (that is, for lattices with a very large number of sites and electrons). Coming up with a reliable approach to solving the Hubbard model has become a kind of holy grail of condensed matter theory.

Note that this is very different from the situation for the classical ferromagnetic Ising model, whose two-dimensional version Onsager solved exactly in 1944, and for which we still lack an exact solution in three dimensions. For the three-dimensional Ising model, we may not have an exact solution, but we understand the qualitative behavior extremely well and can compute quantitative results to practically arbitrary accuracy. For the Hubbard model, we do not even know what the qualitative behavior is on two- or three-dimensional lattices.
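
To make the contrast concrete, the two-dimensional Ising model is solved to the point that its critical temperature is known in closed form. Onsager's solution for the square-lattice ferromagnet with nearest-neighbor coupling J gives

    \sinh\!\left(\frac{2J}{k_B T_c}\right) = 1
    \quad\Longrightarrow\quad
    k_B T_c = \frac{2J}{\ln(1+\sqrt{2})} \approx 2.269\,J.

Nothing remotely comparable exists for the Hubbard model.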

I will first describe the model in words, and then show you how to solve for the quantum statistical mechanics of the simplest non-trivial version of the model: the Hubbard model on a lattice with just two sites. I strongly believe that whenever you want to learn about a new algorithm or theory, you should start by solving, and understanding in detail, the smallest non-trivial version that you can construct.
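
In that spirit, here is a minimal numerical sketch (my own illustration, not part of the tutorial itself) of what solving the two-site model amounts to: build the Hamiltonian

    H = -t \sum_{\sigma} \left( c^\dagger_{1\sigma} c_{2\sigma} + c^\dagger_{2\sigma} c_{1\sigma} \right)
        + U \left( n_{1\uparrow} n_{1\downarrow} + n_{2\uparrow} n_{2\downarrow} \right)

as a 16 x 16 matrix on the full Fock space of four fermionic modes, via a Jordan-Wigner construction, and diagonalize it exactly. The parameter values t, U, and beta below, and the mode ordering, are illustrative choices of mine.

    import numpy as np

    def annihilation_ops(n_modes):
        """Jordan-Wigner fermionic annihilation operators on n_modes modes."""
        I = np.eye(2)
        Z = np.diag([1.0, -1.0])                 # parity-string factor
        a = np.array([[0.0, 1.0],
                      [0.0, 0.0]])               # single-mode annihilator: a|1> = |0>
        ops = []
        for k in range(n_modes):
            factors = [Z] * k + [a] + [I] * (n_modes - k - 1)
            M = factors[0]
            for f in factors[1:]:
                M = np.kron(M, f)
            ops.append(M)
        return ops

    t, U, beta = 1.0, 4.0, 2.0                   # hopping, on-site repulsion, inverse temperature

    # Mode ordering (a convention chosen here): (site 1, up), (site 1, down), (site 2, up), (site 2, down)
    c1u, c1d, c2u, c2d = annihilation_ops(4)
    n = lambda c: c.conj().T @ c                 # number operator n = c†c

    hop = -t * (c1u.conj().T @ c2u + c2u.conj().T @ c1u
                + c1d.conj().T @ c2d + c2d.conj().T @ c1d)
    H = hop + U * (n(c1u) @ n(c1d) + n(c2u) @ n(c2d))
    Ntot = n(c1u) + n(c1d) + n(c2u) + n(c2d)     # diagonal in the occupation basis

    # Full 16-state spectrum and the grand-canonical partition function at mu = 0:
    E = np.linalg.eigvalsh(H)
    print("partition function:", np.exp(-beta * E).sum())

    # Restrict to the two-electron (half-filled) sector; exact since [H, Ntot] = 0:
    half = np.isclose(np.diag(Ntot), 2.0)
    E2 = np.linalg.eigvalsh(H[np.ix_(half, half)])
    print("half-filled ground state:", E2[0])    # matches (U - sqrt(U**2 + 16 t**2)) / 2

Even this toy calculation already displays the essential competition in the model: the hopping t gains kinetic energy by delocalizing electrons, while the repulsion U penalizes double occupancy of a site.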

The Full Tutorial

Algorithms for Physics

August 14, 2007



Much of my own work is at the intersection of statistical mechanics and algorithms, in particular understanding and developing new algorithms using ideas originating in statistical mechanics. Werner Krauth also works at the intersection of the two fields, but coming from a very different angle: he is a leading expert on the development and application of algorithms to compute and understand the properties of physical systems.

In his recently published book, “Statistical Mechanics: Algorithms and Computations,” targeted at advanced undergraduates or graduate students, he covers a very wide range of interesting algorithms. To give you an idea of the coverage, I’ll just list the chapters: “Monte Carlo methods,” “Hard disks and spheres,” “Density matrices and path integrals,” “Bosons,” “Order and disorder in spin systems,” “Entropic forces,” and “Dynamic Monte Carlo methods.”
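
To give a flavor of the algorithmic style, here is the classic direct-sampling Monte Carlo computation with which treatments like the “Monte Carlo methods” chapter begin: estimating pi by throwing random points into a square and counting how many land in the inscribed disk. (Krauth specifies his algorithms in pseudo-code; this Python version is my own sketch, not taken from the book.)

    import random

    def direct_pi(n_trials, seed=42):
        """Direct-sampling Monte Carlo: the fraction of uniform points in
        [-1, 1]^2 that land inside the unit disk estimates pi / 4."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
            if x * x + y * y < 1.0:
                hits += 1
        return 4.0 * hits / n_trials

    print(direct_pi(1_000_000))   # statistical error shrinks like 1/sqrt(n_trials)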

Krauth’s presentation is leavened by his humor, and he often uses the results obtained using his algorithms to make surprising points about physics that would otherwise be hard to convey.

I am often asked by researchers in computer science and electrical engineering for good introductions to physics, and particularly to statistical mechanics, and I’m now happy to be able to recommend this book.


Specifying physics explicitly in terms of algorithms, as Krauth does, gives a very concrete basis for understanding concepts that can otherwise seem terribly abstract. Gerald Sussman and Jack Wisdom make this point in the preface of their already classic book (which is available online) “Structure and Interpretation of Classical Mechanics”:

“Computational algorithms are used to communicate precisely some of the methods used in the analysis of dynamical phenomena. Expressing the methods of variational mechanics in a computer language forces them to be unambiguous and computationally effective. Computation requires us to be precise about the representation of mechanical and geometric notions as computational objects and permits us to represent explicitly the algorithms for manipulating these objects. Also, once formalized as a procedure, a mathematical idea becomes a tool that can be used directly to compute results.”

But while Sussman and Wisdom’s book focuses in great detail on classical mechanics, Krauth’s book ranges more broadly over classical mechanics, statistical mechanics, quantum mechanics, and even quantum statistical mechanics. Another difference is that Sussman and Wisdom specify their algorithms in executable Scheme code, while Krauth uses pseudo-code. Of course, both choices have their advantages, and both books are worth your time.

Lectures on Disordered Systems

August 12, 2007

Many physicists study “disordered systems,” such as glasses, in which the molecules making up the material are arranged randomly in space, in contrast to crystals, in which all the particles are arranged in beautiful repeating patterns.

The symmetries of crystals make them much easier to analyze than glasses, and new theoretical methods had to be invented before physicists could make any headway in computing the properties of disordered systems. Those methods have turned out to be closely connected to approaches, such as the “belief propagation” algorithm, that are widely used in computer science, artificial intelligence, and communications theory, with the result that physicists and computer scientists today regularly exchange new ideas and results across their disciplines.

Returning to the physics of disordered systems: physicists began working on it in the 1970s by considering disordered magnets (also called “spin glasses”). My Ph.D. thesis advisor, Philip W. Anderson, summarized the history as follows:

“In 1975 S.F. (now Sir Sam) Edwards and I wrote down the “replica” theory of the phenomenon I had earlier named “spin glass”, followed up in ’77 by a paper of D.J. Thouless, my student Richard Palmer, and myself. A brilliant further breakthrough by G. Toulouse and G. Parisi led to a full solution of the problem, which turned out to entail a new form of statistical mechanics of wide applicability in fields as far apart as computer science, protein folding, neural networks, and evolutionary modelling, to all of which directions my students and/or I contributed.”


In 1992, I presented five lectures on “Quenched Disorder: Understanding Glasses Using a Variational Principle and the Replica Method” at a Santa Fe Institute summer school on complex systems. The lectures were published in a book edited by Lynn Nadel and Daniel Stein, but that book is very hard to find, and I think that these lectures are still relevant, so I’m posting them here. As I say in the introduction, “I will discuss technical subjects, but I will try my best to introduce all the technical material in as gentle and comprehensible a way as possible, assuming no previous exposure to the subject of these lectures at all.”

The first lecture is an introduction to the basics of statistical mechanics. It introduces magnetic systems and particle systems, and describes how to exactly solve non-interacting magnetic systems and particle systems where the particles are connected by springs.
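
As a reminder of the simplest such exact result: for N non-interacting spins s_i = ±1 in an external field h, the partition function factorizes over the spins,

    Z = \sum_{\{s_i\}} e^{\beta h \sum_i s_i} = \left[ 2\cosh(\beta h) \right]^{N},
    \qquad
    F = -\frac{N}{\beta} \ln\!\left[ 2\cosh(\beta h) \right],

and all thermodynamic quantities follow by differentiation.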

The second lecture introduces the idea of variational approaches. Roughly speaking, the idea of a variational approach is to construct an approximate but exactly soluble system that is as close as possible to the system you are interested in. The grandly titled “Gaussian variational method” is the variational method that tries to find the set of particles and springs that best approximates an interacting particle system. I describe in this second lecture how the Gaussian variational method can be applied to heteropolymers like proteins.
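
For orientation, the standard inequality behind variational methods of this kind is the Gibbs-Bogoliubov-Feynman bound: for any exactly soluble trial Hamiltonian H_0 with free energy F_0,

    F \le F_0 + \langle H - H_0 \rangle_0,

where the average is taken in the trial ensemble. One then minimizes the right-hand side over the trial parameters, which for the Gaussian variational method means the spring constants.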

The next three lectures cover the replica method and combine it with the variational approach. The replica method is highly intricate mathematically. I learned it at the feet of the masters during my two years at the Ecole Normale Superieure (ENS) in Paris; in particular, I was lucky to work with Jean-Philippe Bouchaud, Antoine Georges, and Marc Mezard, who taught me what I know. I thought it unfortunate that there wasn’t a written tutorial on the replica method, and these lectures were the result. Marc told me that for years afterwards they were given to new students of the replica method at the ENS.

Nowadays, the replica method is a little less popular than it used to be, mostly because it is all about computing averages of quantities over many samples of systems that are disordered according to some probability distribution. While those averages are very useful in physics, they are somewhat less important in computer science, where you usually just want an algorithm to deal with the one disordered system in front of you, rather than an average over all the possible disordered systems.
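
The average in question is of the free energy, that is, of ln Z, and the trick that gives the method its name is the replica identity

    \langle \ln Z \rangle = \lim_{n \to 0} \frac{\langle Z^n \rangle - 1}{n},

in which \langle Z^n \rangle is computed for integer n, as the disorder average over n copies (“replicas”) of the system, and the result is then continued to n → 0.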

Santa Fe Institute Lectures