Archive for the ‘Books’ Category

SICM on Mac OS X

October 17, 2007

“Structure and Interpretation of Classical Mechanics” (SICM), by Gerald Jay Sussman and Jack Wisdom, with Meinhard Mayer, is a fascinating book that revisits classical mechanics from the point of view that everything must be computationally explicit. I already mentioned the book in a previous post.

The book is available online, and all the software is freely available online as well. The software is written in Scheme, and a very extensive library called “scmutils” was developed to support computations in classical mechanics, including implementations of many symbolic and numerical algorithms.

I think that many scientists and programmers could find the “scmutils” library generally useful, even if they are not particularly interested in classical mechanics. If you are using the GNU/Linux operating system, there’s no problem getting the library working. However, if you want to use it on Mac OS X (or Windows), the instructions leave the impression that it’s not possible. Googling turned up some useful information, but no complete instructions, and also some people who seemed to be at a loss about how to do it.

Well, it is possible to get MIT-Scheme with the scmutils library running on Mac OS X (and you can probably modify my instructions to make it work on Windows too):

Click here for my instructions for running scmutils on Mac OS X.


On Blogs and Books

October 12, 2007

Jeff Atwood has a great blog, called “Coding Horror,” about software development. He’s been blogging continuously and very actively since 2004, which I find impressive; check out his archives.

One of his most recent posts is about the book that he’s just completed, on ASP.NET. I can’t say that I’m interested in the subject of his book, but I was interested in what he had to say about writing blogs versus writing books. Basically, he comes down heavily in favor of writing blogs:

“As I see it, for the kind of technical content we’re talking about, the online world of bits completely trumps the offline world of atoms:

  • it’s forever searchable
  • you, not your publisher, will own it
  • it’s instantly available to anyone, anywhere in the world
  • it can be cut and pasted; it can be downloaded; it can even be interactive
  • it can potentially generate ad revenue for you in perpetuity

And here’s the best part: you can always opt to create a print version of your online content, and instantly get the best of both worlds. But it only makes sense in that order. Writing a book may seem like a worthy goal, but your time will be better spent channeling the massive effort of a book into creating content online.”

He also points out that writing books is a lot harder than writing blogs:

“Writing a book is hard work. For me, writing blog entries feels completely organic, like a natural byproduct of what I already do. It’s not effortless by any means, but it’s enjoyable. I can put a little effort in, and get immediate results out after I publish the entry. The book writing process is far more restrictive. Instead of researching and writing about whatever you find interesting at any given time, you’re artificially limited to a series of chapters that fit the theme of the book. You slave away for your publisher, writing for weeks on end, and you’ll have nothing to show for it until the book appears (optimistically) six months down the road. Writing a book felt a lot like old fashioned hard work– of the indentured servitude kind.”

Charles Petzold, an experienced author of programming texts, chimes in here with more details on the declining economics of technical book-writing. Apparently a lot fewer people are buying programming books nowadays, so the financial situation for the authors of these books is getting worse. So Petzold agrees that it makes a lot more sense to write in blog format, but notes that blogs don’t usually pay very well.

Let me just give a different perspective, from someone who is more interested in academic writing than technical writing. I think that most academics don’t write books in order to make money, and that’s certainly true of the journal articles they write. So for academics, moving from books to blogs has more of the upside and less of the downside than it does for other authors.

I know that I personally find writing a blog a lot more appealing than writing a book. As Atwood points out, you can write about whatever happens to appeal to you on the day you’re writing, and you get much faster feedback. Sure, it’s some work, but all in all, it’s great!

Petzold has another criticism of blogs:

“On the Internet, everything is in tiny pieces. The typical online article or blog entry is 500, 1000, maybe 1500 words long. Sometimes somebody will write an extended “tutorial” on a topic, possibly 3,000 words in length, maybe even 5,000.

It’s easy to convince oneself that these bite-sized chunks of prose represent the optimum level of information granularity. It is part of the utopian vision of the web that this plethora of loosely-linked pages synergistically becomes all the information we need.

This illusion is affecting the way we learn, and I fear that we’re not getting the broader, more comprehensive overview that only a book can provide. A good author will encounter an unwieldy jungle of information and cut a coherent path through it, primarily by imposing a kind of narrative over the material. This is certainly true of works of history, biography, science, mathematics, philosophy, and so forth, and it is true of programming tutorials as well.

Sometimes you see somebody attempting to construct a tutorial narrative by providing a series of successive links to different web pages, but it never really works well because it lacks an author who has spent many months (or a year or more) primarily structuring the material into a narrative form.

For example, suppose you wanted to learn about the American Civil War. You certainly have plenty of online access to Wikipedia articles, blog entries, even scholarly articles. But I suggest that assembling all the pieces into a coherent whole is something best handled by a trained professional, and that’s why reading a book such as James McPherson’s Battle Cry of Freedom will give you a much better grasp of the American Civil War than hundreds of disparate articles.

If I sound elitist, it’s only because the time and difficulty required for wrapping a complex topic into a coherent narrative is often underestimated by those who have never done it. A book is not 150 successive blog entries, just like a novel isn’t 150 character sketches, descriptions, and scraps of dialog.”

As somebody who writes blog posts that typically come in at 500-1500 words, but loves to read books, I want to respond to that.

The web lets us write material in whatever form we want. I’m comfortable with the 500-1500 word post. Other people with popular blogs write 2-sentence posts linking to an article. Paul Graham writes long essays. David MacKay puts drafts of his books online.

The fact is that you can write about whatever you want, whenever you want, in whatever form you want. The most important point is that you don’t need permission to publish anymore. You don’t need a publisher; you are the publisher.

Which brings me to a related point. I find a lot of the current scientific publication process completely bizarre. Scientists write the articles, typeset the articles, review the articles, and edit the articles, and then find that their own articles are not freely available online? And we write articles that are hard to understand because of space limitations? Online there are no space limitations! The entire system lumbers on mostly as if we were still living in the early 20th century, when only a few specialized groups of people had the capability to publish, and delivering journal articles to people was necessarily expensive.

Compared to a free system in which everybody simply published their work online and that was the end of it, most of the current system’s remaining justification is credentialing: credentialing articles by peer review, and credentialing people by the number of peer-reviewed articles they’ve written. Look, I know peer review is important, but given the huge time sink of the current system, and the morale problems it contributes to, I think we should take a closer look at the costs and benefits, and maybe be more open to people who simply publish their work online (see, for example, Perelman’s papers proving the Poincaré conjecture, which were never published in a peer-reviewed journal).

So if you feel the urge, publish yourself online in whatever format suits you. Just try to make the content worthwhile for somebody else in the world, and don’t worry about the rest. [End of rant.]

Princeton Companion to Mathematics

October 5, 2007

From Terence Tao’s excellent blog, I learned about the upcoming Princeton Companion to Mathematics (PCM), a roughly 1000-page survey of and introduction to mathematics at the undergraduate level. It looks like the PCM editors have lined up a distinguished cast of mathematicians to write comprehensible articles covering all of mathematics. The editor-in-chief is Timothy Gowers, who like Tao is a Fields Medalist, and who has very recently started his own blog.

Gowers gives instructions for finding a large sample of articles that will appear in the book, but they are not completely transparent, so follow these steps:

  1. Go here.
  2. Use Username “Guest” and Password “PCM”.
  3. Click on “Resources” in the sidebar.
  4. Click on “Sample articles” in the sidebar.

You’ll find many interesting and highly readable articles in .pdf format.

Beginning Go

October 1, 2007

If you’re interested in learning more about the game of Go, whether you’re an absolute beginner or already know something about the game, a good starting point is “Sensei’s Library”, a huge wiki (it currently contains 15722 pages) all about Go. There are many ways to explore the wiki, and tons of interesting topics to explore.

There are many nice English-language books on Go. I will just recommend, if you are a beginner, the excellent five-volume “Learn to Play Go Series” by Janice Kim and Jeong Soo-hyun. (If you already know the rules and have played a couple of games, you probably want to begin with volume 2.)

Go-playing programs, unlike Chess programs, are no match for even moderately strong human amateurs, but for beginners or weak players like myself, they can provide an interesting challenge. I very much like the free Goban Go GUI for Mac OS X, which includes GNU Go, one of the best Go-playing programs. Goban also serves as a client for online Go servers and lets you replay Go games in the .sgf format.

Gallager’s LDPC error-correcting codes

September 28, 2007

Error-correcting codes are a technology that most people don’t think much about, if they even know they exist, but these codes work quietly in the background to enable such things as cell phones and other wireless technology, cable and satellite TV, and also the internet, including DSL, fiber-optic communications, and good old-fashioned dial-up modems. The modern communications revolution could not have begun without these codes.

So what’s the idea behind these codes? There’s a lot to say, and many textbooks have been written on the subject, so I’ll only give the briefest of introductions. [Some excellent textbooks I recommend include MacKay’s textbook, which I’ve already reviewed; McEliece’s “Theory of Information and Coding”; and Lin and Costello’s “Error Control Coding.” See also this post for two forthcoming books available online.] EDIT: I’ve added some more information about LDPC decoders, with pointers to available software, in this post.

The basic idea is that we want to transmit some bits that represent a piece of text or a picture or whatever. Unfortunately, when we transmit those bits, they need to travel through some channel (say a wire, or through the air), and the receiver only gets a noisy version of each bit. For example, each bit might be flipped independently from a 0 to a 1 or vice versa with some small probability (this is called the binary symmetric channel; many other channel models exist too).
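
To make the channel model concrete, here is a minimal sketch in Python (my own illustration, not code from any of the textbooks mentioned above) of passing a block of bits through a binary symmetric channel:

    import random

    rng = random.Random(0)

    def binary_symmetric_channel(bits, p):
        # Flip each bit independently with probability p (the "crossover probability").
        return [bit ^ (rng.random() < p) for bit in bits]

    message = [1, 0, 1, 1, 0, 0, 1, 0] * 125      # 1000 bits
    received = binary_symmetric_channel(message, p=0.11)
    errors = sum(m != r for m, r in zip(message, received))
    print(errors)    # on the order of 110 of the 1000 bits arrive flipped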

To combat this noise, we send extra bits; instead of sending, say, the 1000 bits that represent our message, we might send 2000, where the extra 1000 “check” bits have some known relationship to the original 1000. Both the transmitter and receiver agree on that relationship ahead of time; that is the “code.” Of course, all 2000 bits are now subject to the noise, so some of those extra bits could be flipped. Nevertheless, if the noise is small enough, the receiver can try to “decode” the original 1000 bits by finding the configuration of the 2000 bits that obeys all the constraints and is most probable.
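
As a toy illustration of this encode/decode idea (again a sketch of my own, with 4 message bits and 4 check bits instead of 1000 and 1000), here is a tiny rate-1/2 code whose decoder searches all 16 codewords for the one closest in Hamming distance to the received word; for a binary symmetric channel with flip probability below 1/2, that closest codeword is the most probable one:

    from itertools import product

    # Toy code (my own construction): each of the 4 check bits is the XOR (parity)
    # of a pair of the 4 message bits.
    CHECK_PAIRS = [(0, 1), (1, 2), (2, 3), (0, 3)]

    def encode(msg):
        # Append one parity bit per check pair to the 4 message bits.
        return list(msg) + [msg[i] ^ msg[j] for i, j in CHECK_PAIRS]

    def hamming_distance(a, b):
        return sum(x != y for x, y in zip(a, b))

    def decode(received):
        # Maximum-likelihood decoding for a binary symmetric channel with p < 1/2:
        # pick the message whose codeword is closest to what was received.
        return min(product([0, 1], repeat=4),
                   key=lambda m: hamming_distance(encode(m), received))

    msg = (1, 0, 1, 1)
    codeword = encode(msg)
    codeword[2] ^= 1                 # the channel flips one bit
    print(decode(codeword) == msg)   # True: the single error is corrected

Brute-force search over all codewords is obviously hopeless for realistic block lengths; the whole art of the subject is in designing codes whose decoders avoid that exponential search.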

In 1948, Claude Shannon proved a theorem that essentially said that if the noise in the channel was small enough, and the number of extra bits you were willing to send per original bit was large enough, one could design very long codes that, if optimally decoded, would always remove all the noise and recover the transmitted message.

(By the way, it is this amazing property, that codes can remove 100% of the noise, that means we can watch crystal-clear high-definition TV coming in over the airwaves, something I very much appreciate when I watch a football game these days. When advertisers talk about “digital this” or “digital that,” they really mean “error-corrected digital.”)

As an example of Shannon’s theorem, if one were willing to use one extra bit for every original bit, and the fraction of flipped bits in the binary symmetric channel was less than the Shannon limit of about 11%, his theorem tells you that codes exist that will reliably remove all the noise. However, Shannon’s proof was non-constructive; he didn’t tell us what these wonderful codes were. Shannon also proved a converse: if the noise is higher than the “Shannon limit,” no codes exist that can reliably correct it.
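
The 11% figure comes from the capacity of the binary symmetric channel, C = 1 − H₂(p), where H₂ is the binary entropy function; a rate-1/2 code (one check bit per message bit) can be reliable only when C ≥ 1/2. Here is a quick sketch (just evaluating that formula numerically, nothing more) that recovers the number:

    from math import log2

    def binary_entropy(p):
        return -p * log2(p) - (1 - p) * log2(1 - p)

    def bsc_capacity(p):
        # Capacity of a binary symmetric channel with crossover probability p.
        return 1 - binary_entropy(p)

    # Find the largest p for which a rate-1/2 code can still be reliable
    # (capacity >= 1/2), by bisection: capacity decreases as p grows toward 1/2.
    lo, hi = 1e-9, 0.5 - 1e-9
    for _ in range(60):
        mid = (lo + hi) / 2
        if bsc_capacity(mid) >= 0.5:
            lo = mid
        else:
            hi = mid
    print(round(lo, 4))  # about 0.11, the "Shannon limit" quoted above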

So error-correcting coding theory deals with the problems of designing codes, and efficient encoders and decoders for those codes, that work as close to the Shannon limit as possible. Theorists invented many interesting and useful codes, encoders, and decoders over the years, but until the 1990s it still seemed a distant dream to most coding theorists that we would ever find practical codes that performed near the Shannon limit.

What is very strange is that the best codes and decoders discovered in the 1990s were actually a rediscovery of codes and decoders invented by Robert Gallager in the early 1960s for his Ph.D. thesis. Gallager’s thesis introduced “low-density parity-check” (LDPC) codes and their decoding algorithm, the famous “belief propagation” decoding algorithm. His thesis also introduced many other important ideas in coding theory, including “density evolution,” simplified “bit-flipping decoders,” and analysis methods for optimal LDPC decoders. It is a truly remarkable work that every aspiring coding theorist should read. Fortunately, it’s available online.
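
To give a flavor of these ideas, here is a minimal sketch (my own toy Python, not Gallager’s code) of the simplest of his decoders, the hard-decision “bit-flipping” decoder: given a parity-check matrix H, repeatedly flip whichever bit participates in the most unsatisfied parity checks, until all checks are satisfied or we give up. (Belief propagation, the real workhorse, passes probabilities instead of hard decisions.) The small H below is a (7,4) Hamming code, used only because it is tiny; it is not actually low-density.

    import numpy as np

    def bit_flip_decode(H, received, max_iters=50):
        # H: (num_checks x num_bits) binary parity-check matrix.
        # received: 0/1 hard decisions from the channel.
        word = received.copy()
        for _ in range(max_iters):
            syndrome = H @ word % 2          # which parity checks are violated
            if not syndrome.any():
                return word                  # all checks satisfied: done
            # Count, for each bit, how many violated checks it participates in,
            # and flip the worst offender.
            unsatisfied_counts = syndrome @ H
            word[np.argmax(unsatisfied_counts)] ^= 1
        return word

    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]])
    codeword = np.array([1, 0, 1, 1, 0, 0, 1])   # satisfies H @ c = 0 (mod 2)
    noisy = codeword.copy()
    noisy[0] ^= 1   # flip a bit that sits in several checks, so this crude decoder can locate it
    print(bit_flip_decode(H, noisy))              # recovers the original codeword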


How is it possible that this work was forgotten? Well, in 1968, Robert Gallager wrote a magnificent textbook on information theory, called “Information Theory and Reliable Communication,” in which he explained most of the information and coding theory known at the time, but neglected to talk about LDPC codes! I’m not sure why. Possibly he thought he’d already covered the material in the 1963 monograph made from his thesis, or perhaps he didn’t think LDPC codes were practical at the time. In fact, the decoder was too complex for 1960s technology, but Moore’s Law took care of that, although only now are LDPC codes widely replacing previously developed codes in communications systems.

So the moral of the story is: if you write a ground-breaking Ph.D. thesis about a remarkable technology that is 30 years ahead of its time, please don’t forget to mention it in your classic textbook a few years later. Not a moral too many of us have to worry about, but still…

Talking about Probabilistic Robotics

September 23, 2007

Sebastian Thrun is a professor of computer science and electrical engineering at Stanford, and director of the Stanford Artificial Intelligence Laboratory. He led Stanford’s team, which won the $2 million first prize in the 2005 DARPA Grand Challenge, a race of driverless robotic cars across the desert, and he also leads Stanford’s entry in the 2007 DARPA Urban Challenge.

One of the ingredients in the Stanford team’s win was their use of “probabilistic robotics,” an approach based on the recognition that all sensor readings and models of the world are inherently subject to uncertainty and noise. Thrun, together with Wolfram Burgard and Dieter Fox, has written the definitive text on probabilistic robotics, which will be a standard for years to come. If you are seriously interested in robotics, you should read this book. (The introductory first chapter, which clearly explains the basic ideas of probabilistic robotics, is available as a download here.)

The Laboratory of Intelligent Systems at the Swiss École Polytechnique Fédérale de Lausanne (EPFL) hosts the superb “Talking Robots” website, which consists of a series of podcast interviews with leading robotics researchers. I noticed that the latest interview is with Thrun, and I liked it quite a bit; it is well worth downloading to your iPod or computer.

You can watch Thrun speaking about the DARPA Grand Challenge at this Google TechTalk.

Multicellular Logic Circuits, Part II: Cells

September 18, 2007

In my post “Multicellular Logic Circuits, Part I: Evolution,” I discussed evolution and genetic algorithms; I want to continue that discussion here.

There are two salient facts of biology that are completely inescapable. The first is that all organisms are shaped by the process of evolution. The second is that all organisms are constructed from cells.

Furthermore, all complex multicellular organisms begin life as a single cell, and undergo a process of development through cell division to mature into an adult. And no matter how different any two organisms may be on the gross macroscopic level that we are used to, inside their cells the chemical processes of life are fundamentally very similar.


Thus it is no accident that the titles of the two leading textbooks in molecular biology are The Molecular Biology of the Gene by Watson et al. and The Molecular Biology of the Cell by Alberts et al. [These are both great books. This link to the first chapter of MBOC is an excellent entry point into modern biology. And if you are serious about learning biology, I also strongly recommend the companion Molecular Biology of the Cell: A Problems Approach, by Wilson and Hunt, which will force you to think more actively about the material.]

It therefore seems reasonable that if we want to construct artificial systems that achieve the performance of natural ones, we should consider artificially evolving a system constructed from cells.

Although there are typically many different cell types in a mature multi-cellular organism, all the different cells of the organism, with the exception of sperm and egg cells, share an identical genetic specification in their DNA. The different behavior of cells with identical genetic specifications is the result of the cells having different histories and being subjected to different environments.

More specifically, the behavior of a biological cell is controlled by complex genetic regulatory mechanisms that determine which genes are transcribed into messenger RNA and then translated into proteins. One very important regulatory mechanism is provided by the proteins called “transcription factors” that bind to DNA regulatory regions upstream of the protein coding regions of genes, and participate in the promotion or inhibition of the transcription of DNA into RNA. The different histories of two cells might lead to one having a large concentration of a particular transcription factor, and the other having a low concentration, and thus the two cells would express different genes, even though they had identical DNA.

Another important mechanism that controls the differential development of different types of cells in a multi-cellular organism is the biochemical signaling sent between cells. Signals such as hormones have the effect of directing a cell down a particular developmental pathway.

In general, the transcription factors, hormones, and multitude of other control mechanisms used in biological cells are organized into a network which can be represented as a “circuit” where the state of the system is characterized by the concentrations of the different biochemical ingredients. In fact, biologists are now using wiring diagrams to help summarize biological circuits; see, for example, the “Biotapestry editor” developed by Eric Davidson’s lab at Caltech.
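
To make the “circuit” analogy concrete, here is a deliberately crude sketch (my own toy model, not anything from Davidson’s lab) that treats a two-gene regulatory network as a Boolean circuit: gene A’s product activates gene B, gene B’s product represses gene A, and the “state” is simply which products are currently present. Real cells, of course, track continuous concentrations of far more components.

    # Toy Boolean model of a two-gene regulatory "circuit":
    #   A's product activates B; B's product represses A.
    # State = (A product present?, B product present?).

    def step(state):
        a, b = state
        next_a = not b       # A is expressed only when its repressor B is absent
        next_b = a           # B is expressed only when its activator A is present
        return (next_a, next_b)

    state = (True, False)    # start with A's product present, B's absent
    for t in range(6):
        print(t, state)
        state = step(state)
    # The circuit cycles: (A on, B off) -> (A on, B on) -> (A off, B on) -> (A off, B off) -> ...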


[I strongly recommend Davidson’s recent book The Regulatory Genome: Gene Regulatory Networks in Development and Evolution for an exciting introduction to the burgeoning “evo-devo” field; if you don’t have any background in biology, you may prefer The Coiled Spring, by Ethan Bier for a somewhat more popular account.]

Turning to the problem of designing artificial systems, a natural question is what theoretical advantages there are, from the point of view of designing with evolution, in using an identical genetic specification for all the cells in a multi-cellular organism.

One potential advantage is that relatively small changes to the genetic specification of the organism can concurrently alter the behavior of many different kinds of cells at many different times during the development of the organism. Therefore, if there is the possibility of an advantageous change to the circuitry controlling a cell, then it can be found once and used many times instead of needing to find the same advantageous mutation repeatedly for each of the cells in the organism.

Another related potential advantage is that a highly complicated organism can be specified in a relatively compact way. If each of the trillions of cells in a complex organism like a human had to be separately specified, then the overall amount of information required to describe the human genome would be multiplied more than a trillion-fold. Clearly, it is much more efficient to re-use the identical circuitry in many different types of cells.

In other words, biology uses a strategy of specifying a complex multi-cellular organism by specifying just a single cell; all the other cells in the mature organism grow organically out of the developmental process. This seems like a strategy worth imitating.

On the other hand, the constraint that each cell in an organism should share an identical genetic specification clearly causes complications from the point of view of design. For example, it is important that genes designed to function in one type of cell at one point in development not cause problems for a different type of cell at a different point in development. Clearly, good design of the control logic that turns genes on and off is essential to the proper functioning of a multi-cellular organism.

In the next post in this series, I will turn to the construction of a concrete model for multi-cellular circuits that tries to capture, as simply as possible, the essence of what is happening in biology. 

Mindbusters for Magic the Gathering

September 16, 2007

Today we have a guest article from my son Adam, about a variant of Magic the Gathering that we enjoy playing. I wrote a post introducing Magic and other trading card games previously, so if you know nothing about them, you might want to read that first. Adam’s article is definitely addressed to the advanced Magic player.

There are a few problems with Magic that this variant addresses: in particular the luck of the draw and the expense of buying packs. This variant is essentially a no-luck version which just requires two packs of cards. This type of variant can also be used for other trading card games.

If you’re interested in other variants to games like chess, go, bridge, etc., or perhaps in learning how to design your own variants or games, I highly recommend R. Wayne Schmittberger’s “New Rules for Classic Games.”

And if you’re interested in adding some Magic-like luck to the no-luck game of Chess, rather than eliminating the luck from Magic to make it more like Chess, you should read this post.

Read about the “Mindbusters” variant of Magic the Gathering.

(One comment for those who try to follow the article without much Magic experience: “milling” is the strategy of trying to run your opponent out of cards to draw; if you can’t draw a card when you’re supposed to, you lose the game.)

Two Draft Books

September 13, 2007

If you’re interested in learning about statistical mechanics, graphical models, information theory, error-correcting codes, belief propagation, constraint satisfaction problems, or the connections between all those subjects, you should know about a couple of books that should be out soon, but for which you can already download extensive draft versions.

The first is Information, Physics, and Computation by Marc Mézard and Andrea Montanari.

The second is Modern Coding Theory by Tom Richardson and Ruediger Urbanke.

I also recommend the tutorial on Modern Coding Theory: the Statistical Mechanics and Computer Science Points of View, by Montanari and Urbanke, from the lectures they gave at the 2006 Les Houches summer school.

Consistent Quantum Theory

September 4, 2007

Quantum mechanics is a notoriously confusing and difficult subject, which is terribly unfortunate given its status as a fundamental theory of how the world works. When I was an undergraduate and graduate student of physics in the 1980s, I was always unsatisfied with the way the subject was presented. We basically learned how to calculate solutions to certain problems, and there was an understanding that you were better off just avoiding the philosophical issues and the paradoxes surrounding the subject. After all, everybody knew that all the challenges raised by physicists as eminent as Einstein had ultimately been resolved experimentally in favor of the “Copenhagen interpretation.”

Still, there was a lot of weird stuff in the Copenhagen interpretation, like the “collapse of the wave function” caused by a measurement, or apparently non-local effects. What I wished existed was a clearly defined and sensible set of “rules of the game” for using quantum mechanics. The need for such a set of rules has only increased with the advent of quantum computing.


In his book “Consistent Quantum Theory” (the first 12 of its 27 chapters are available online), the noted mathematical physicist Robert Griffiths provides the textbook I wished I had as a student. Griffiths, with contributions from Murray Gell-Mann, Roland Omnes, and James Hartle, originated the “consistent histories” approach to quantum mechanics that is explained in this book.

The best summary of the approach comes from the comparison Griffiths makes between quantum mechanics and classical mechanics. Quantum mechanics differs from classical mechanics in the following ways:

1. Physical objects never possess a completely precise position or momentum.
2. The fundamental dynamical laws of physics are stochastic and not deterministic, so from the present state of the world one cannot infer a unique future (or past) course of events.
3. The principle of unicity does not hold: there is not a unique exhaustive description of a physical system or a physical process. Instead reality is such that it can be described in various alternative, incompatible ways, using descriptions which cannot be combined or compared.

It is the third point that is the heart of the approach. In quantum mechanics, you are permitted to describe systems using one “consistent framework” or another, but you may not mix incompatible frameworks (basically, frameworks using operators that don’t commute) together.
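
For readers who want to see what “operators that don’t commute” means concretely, here is a quick numerical check (a standard textbook fact shown with numpy, not an example from Griffiths’ book) using the Pauli matrices that describe spin measurements along the z and x axes of a spin-1/2 particle:

    import numpy as np

    # Pauli matrices: observables for spin along the z and x axes.
    Z = np.array([[1, 0], [0, -1]])
    X = np.array([[0, 1], [1, 0]])

    print(Z @ X)                          # [[ 0  1] [-1  0]]
    print(X @ Z)                          # [[ 0 -1] [ 1  0]]
    print(np.array_equal(Z @ X, X @ Z))   # False: the operators do not commute,
    # so frameworks built from z-axis and x-axis spin descriptions are incompatible
    # and cannot be combined into a single description.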

Griffiths uses toy models to illustrate the approach throughout the book, and provides resolutions for a large number of quantum paradoxes. He stresses that measurement plays no fundamental role in quantum mechanics, that wave function collapse is not a physical process, that quantum mechanics is a strictly local theory, and that reality exists independent of any observers. All of these points might seem philosophically natural and unremarkable, except for the fact that they contradict the standard Copenhagen interpretation.

This is a book that will take some commitment from the reader to understand, but it is the best explanation of quantum mechanics that I know of.