Move to Analog Devices

January 17, 2016

I recently moved to Analog Devices, where I am Director of AI Research. My current research focus is on the design and hardware implementation of novel machine learning and inference algorithms. I also direct the Algorithmic Systems Group at the Analog Garage Research Lab, a group of about twenty very talented researchers. Our group creates advanced algorithms in the fields of signal processing, machine learning, and AI, and implements those algorithms in practical and efficient hardware. We are growing, so if you are interested in joining, have (or will soon have) a Ph.D. or equivalent experience, and have skills in at least two of the following areas: signal processing, machine learning, algorithm development, ASIC circuit design, FPGA prototyping, and software development, please write to me.

Move to Disney Research

January 29, 2012

I’m checking in for the first time in a long time just to let you know that in 2011, I moved from MERL to Disney Research. At Disney Research, I’ve been working on AI and machine learning. I’m enjoying it a lot, and expect to be giving several talks on my research soon.

You can now find my publications and other professional information at my personal web-page.

More on Leopard

October 31, 2007

If you’re interested in learning more about Leopard than I told you in my last post, head over to John Siracusa’s 17-page review at Ars Technica.

One nice tidbit from the review is that you can replace Leopard’s new reflective Dock with a more functional one by entering the following commands at your terminal:

> defaults write com.apple.dock no-glass -boolean YES

> killall Dock

This will give you the same style of Dock that you’ll get if you position the Dock on the left or right of your screen, even if you keep yours at the bottom. If you change your mind and want the shiny reflective Dock back, just repeat the above commands, replacing “YES” with “NO”. The changes will stick through a reboot.
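
If you would rather clear the override entirely, you can also delete the key (a small variant I’m adding here, not from the review) and then restart the Dock as before:

> defaults delete com.apple.dock no-glass

> killall Dock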

Developing on Mac OS X 10.5 Leopard

October 27, 2007

The latest version of the Mac OS X operating system, numbered 10.5 and called “Leopard,” was released last night. I went out and bought it and upgraded the iMac that I’m writing this post on. It’s a nice upgrade, with all sorts of little features. Here’s Apple’s detailed list of features, here’s a mainstream review, and here’s the perspective of a developer.

The installation went smoothly for me. Don’t be surprised if your hard-drive is busy for about half an hour after you first boot up; that’s Leopard’s Spotlight search program indexing your hard drive.

I want to focus on some of the new features for developers, particularly the upgrade of the Objective-C programming language to version 2.0, and the upgrade of the XCode integrated development environment to version 3.0.

Objective-C is an object-oriented extension of C, dating from the early 1980’s, and it is a strict super-set of C; that means ordinary C programs will compile successfully under an Objective-C compiler. That makes Objective-C sound a lot like C++, but I like Objective-C a lot more than C++. C++ takes a “Swiss army chainsaw” approach, throwing many new features into the language, while Objective-C is much more minimalist, basically extending the language just enough to support objects, with a Smalltalk-like syntax. Objective-C is also much more dynamic than C++: much more is decided at run-time rather than at compile-time. Because of that, it feels a lot closer to programming in a nice scripting language like Python or Ruby. Here’s an excellent introduction to the Objective-C language by Apple; it’s a surprisingly literate piece of technical documentation.

Objective-C is almost never used by itself; instead, you use it in conjunction with a set of extensive libraries (the Cocoa libraries on Mac OS X, or the GNUstep libraries on Linux or Windows). Cocoa and GNUstep derive from the NeXTStep and OpenStep libraries developed by the NeXT Computer company in the 1980’s and 1990’s. They add both fundamental features (e.g. string handling, hash tables, that type of thing) and GUI-creation features. These libraries have been under development for 20 years, so they are extraordinarily mature. And since Apple uses Cocoa and Objective-C to develop its own applications, including much of Mac OS X itself, it is clear that if you want to develop desktop applications for the Macintosh, you need to learn about them.

I actually believe that using Objective-C and GNUstep is also a very reasonable choice on Linux (or Windows), for the kinds of applications where you would otherwise use C++, but few people actually make that choice. In fact, I have found that GNUstep and Cocoa are compatible enough that one can pretty easily maintain code that works on all of these platforms, if you need that.

Apple provides a very nice integrated development environment called XCode for free with Mac OS X. If you have not upgraded to Leopard, you’ll be limited to XCode 2.5, while Leopard gives you XCode 3.0. One of the nicest parts of XCode is the “Interface Builder,” which lets you build GUI’s using a GUI instead of by writing code.

With Leopard, Objective-C is being upgraded to version 2.0. Perhaps the most important new feature included is garbage collection. It’s an opt-in system, so legacy code will still work, and you can turn it off if you like allocating and releasing memory yourself, but for new code, most developers will obviously be very happy to have it. This removes one of the major warts of the Objective-C language.

To learn more about building desktop applications for Mac OS X, I highly recommend “Cocoa Programming for Mac OS X” 2nd edition, by Aaron Hillegass. I only wish that there was a third edition that covered the new features of XCode, Objective-C and Cocoa that have appeared in Tiger and Leopard. (UPDATE: Apparently, a 3rd edition is scheduled for Spring 2008.)

If you’re interested in programming Macintosh applications, but prefer to use Ruby, Python, or Haskell instead of Objective-C, you should know about RubyCocoa, PyObjC, and HOC, which let you call the Cocoa libraries from those languages. These tools are very nice (I’ve actually only used PyObjC personally), but you’ll still need to have some familiarity with Objective-C to understand them.
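
To give a taste of what such a bridge looks like, here is a minimal PyObjC sketch (a toy example of my own, assuming the PyObjC bridge bundled with Leopard). An Objective-C message like stringWithString: becomes a Python method with each colon replaced by an underscore:

    # Calling the Cocoa Foundation classes from Python via PyObjC.
    from Foundation import NSArray, NSString

    # The Objective-C selector stringWithString: maps to stringWithString_.
    s = NSString.stringWithString_("Hello from Cocoa")
    a = NSArray.arrayWithObject_(s)

    print(a.count())            # 1
    print(a.objectAtIndex_(0))  # Hello from Cocoa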

Connect6

October 23, 2007

I’ve been enjoying playing the game Connect6 with my son Adam. The game was invented and introduced by Professor I-Chen Wu, of National Chiao Tung University in Taiwan. Connect6 is played with a Go board and stones. The object is to place six stones in a row, diagonally, horizontally, or vertically. On the first turn, Black places a single stone; after that, each player places two stones per turn. Because each turn ends with the mover having placed exactly one more stone than his or her opponent, the game appears to be balanced.
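
To make the winning condition concrete, here is a small Python sketch of the six-in-a-row test (the function and board representation are my own, not from Wu’s paper): after placing a stone, scan the four line directions through it and count the length of the run.

    def wins(board, row, col):
        """True if the stone just placed at (row, col) is part of a run of
        six or more stones. `board` maps (row, col) pairs to 'B' or 'W'."""
        color = board[(row, col)]
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            run = 1
            for sign in (1, -1):  # extend outward in both directions
                r, c = row + sign * dr, col + sign * dc
                while board.get((r, c)) == color:
                    run += 1
                    r, c = r + sign * dr, c + sign * dc
            if run >= 6:
                return True
        return False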

One potential concern about this notion of balance is that perhaps the second player should place his or her stones far from the first stone, to get a two-stone advantage somewhere else on the board, possibly forcing the first player to follow in that part of the board. Fortunately, Wu and a colleague demonstrated in this paper that such an initial break-away strategy is unlikely to be good for White.

Anyway, it’s not clear whether with perfect play the game should be a win for the first player, a win for the second player, or a draw (with neither player ever able to achieve six in a row). If I had to guess, I would venture a draw, even on an infinite board; on the other hand, my actual games have all ended in victory for somebody.

The game is very similar to Gomoku (also known as Connect 5), where one tries to get five stones in a row, but each player only places one stone at a time. Of course, that game favors the first player, and in fact it has apparently been demonstrated that the first player wins with perfect play.

Renju is an older and much less elegant approach to balancing Gomoku. In Renju, the first player is restricted from making moves which make certain types of threats. Looking at all the complications in the Renju rules, I find it surprising that it took so long for Connect6 to be introduced.

In fact, aside from the issues of fairness and elegance of rules, I also find that Connect6 has a more dynamic feel than Gomoku or Renju; I definitely prefer it.

Because of the large number of possible moves each side can make each turn (a turn is a choice of two of the up to 361 empty points on a 19×19 board, roughly 65,000 possibilities), and the difficulty of evaluating a position, it’s not easy to program a computer to play Connect6 well; I don’t think any programs exist yet that play as well as humans. You can play Connect6 against some relatively weak bots and other humans at Vying Games, which also features other interesting turn-based strategy games (currently Checkers, Pente, Keryo-Pente, Phutball, Breakthrough, Othello, Kalah, Oware, and Footsteps).

2008 SIAM Annual Meeting

October 22, 2007

Lenore Cowen (a co-chair of the 2008 SIAM annual meeting) asked if I might write a post here to help publicize that conference, and I’m very happy to do so. The conference will be held in San Diego from July 7-11, and it looks like it will cover a wider spectrum of topics than is usual for SIAM, so you might consider attending even if it’s not on your normal conference circuit.

The themes for the 2008 meeting are computational science & engineering; data mining; dynamical systems; geosciences; imaging science; linear & multi-linear algebra; biological, social, and internet networks; and enabling complex simulations with scientific software. There is also a quite diverse list of invited speakers.

At SIAM’s annual meeting, you are encouraged to propose and organize your own minisymposium. There are also regular contributed talks and posters. Submission deadlines are January 14 for minisymposium proposals and January 28 for abstracts for contributed and minisymposium speakers. See the conference web-site for more details.

Using Unambiguous Notation

October 19, 2007

I’ve mentioned Gerald Jay Sussman and Jack Wisdom’s book “Structure and Interpretation of Classical Mechanics” a couple of times already; today I’d like to recommend that you read a wonderful short paper of theirs, called “The Role of Programming in the Formulation of Ideas,” which helps explain why understanding physics and the other mathematical sciences can sometimes be so difficult. The basic point is that our notation is often an absolute mess, caused by the fact that we use equations the way we use natural language, in a highly ambiguous way:

“It is necessary to present the science in the language of mathematics. Unfortunately, when we teach science we use the language of mathematics in the same way that we use our natural language. We depend upon a vast amount of shared knowledge and culture, and we only sketch an idea using mathematical idioms.”

The solution proposed is to develop notation that can be understood by computers, which do not tolerate ambiguity:

“One way to become aware of the precision required to unambiguously communicate a mathematical idea is to program it for a computer. Rather than using canned programs purely as an aid to visualization or numerical computation, we use computer programming in a functional style to encourage clear thinking. Programming forces one to be precise and formal, without being excessively rigorous. The computer does not tolerate vague descriptions or incomplete constructions. Thus the act of programming makes one keenly aware of one’s errors of reasoning or unsupported conclusions.”

Sussman and Wisdom then focus on one highly illuminating example, the Lagrange equations. These equations can be derived from the fundamental principle of least action. This principle tells you that if you have a classical system that begins in a configuration C1 at time t1 and arrives at a configuration C2 at time t2, the path it traces out between t1 and t2 will be the one that is consistent with the initial and final configurations and minimizes the integral over time of the Lagrangian for the system, where the Lagrangian is given by the kinetic energy minus the potential energy.
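
In symbols (my own gloss, using the definitions just given), the action assigns to each candidate path q the number

    S[q] = \int_{t_1}^{t_2} L(q(t), \dot{q}(t), t) \, dt

and the true path is the one, among all paths joining the fixed configurations C1 at t1 and C2 at t2, that minimizes S[q].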

Physics textbooks tell us that if we apply the calculus of variations to the integral of the Lagrangian (called the “action”) we can derive that the true path satisfies the Lagrange equations, which are traditionally written as:

    \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) - \frac{\partial L}{\partial q_i} = 0

Here L is the Lagrangian, t is the time, and the q_i are the coordinates of the system.

These equations (and many others like them) have confused and bewildered generations of physics students. What is the problem? Well, there are all sorts of fundamental problems in interpreting these equations, detailed in Sussman and Wisdom’s paper. As they point out, basic assumptions like whether a coordinate and its derivative are independent variables are not consistent within the same equation. And shouldn’t this equation refer to the path somewhere, since the Lagrange equations are only correct for the true path? I’ll let you read Sussman and Wisdom’s full laundry list of problems yourself. But let’s turn to the psychological effects of using such equations:

“Though such statements (and derivations that depend upon them) seem very strange to students, they are told that if they think about them harder they will understand. So the student must either come to the conclusion that he/she is dumb and just accepts it, or that the derivation is correct, with some appropriate internal rationalization. Students often learn to carry out these manipulations without really understanding what they are doing.”

Is this true? I believe it certainly is (my wife agrees: she gave up on mathematics, even though she always received excellent grades, because she never felt she truly understood). The students who learn to successfully rationalize such ambiguous equations, and forget about the equations that they can’t understand at all, are the ones who might go on to be successful physicists. Here’s an example, from the review of Sussman and Wisdom’s book by Piet Hut, a very well-regarded physicist who is now a professor at the Institute for Advanced Study:

“… I went through the library in search of books on the variational principle in classical mechanics. I found several heavy tomes, borrowed them all, and started on the one that looked most attractive. Alas, it didn’t take long for me to realize that there was quite a bit of hand-waving involved. There was no clear definition of the procedure used for computing path integrals, let alone for the operations of differentiating them in various ways, by using partial derivatives and/or using an ordinary derivative along a particular path. And when and why the end points of the various paths had to be considered fixed or open to variation also was unclear, contributing to the overall confusion.

Working through the canned exercises was not very difficult, and from an instrumental point of view, my book was quite clear, as long as the reader would stick to simple examples. But the ambiguity of the presentation frustrated me, and I started scanning through other, even more detailed books. Alas, nowhere did I find the clarity that I desired, and after a few months I simply gave up. Like generations of students before me, I reluctantly accepted the dictum that ‘you should not try to understand quantum mechanics, since that will lead you astray for doing physics’, and going even further, I also gave up trying to really understand classical mechanics! Psychological defense mechanisms turned my bitter sense of disappointment into a dull sense of disenchantment.”

Sussman and Wisdom do show how the ambiguous conventional notation can be replaced with unambiguous notation that can even be used to program a computer. Because it’s new, it will feel alien at first; the Lagrange equations look like this:

    D(\partial_2 L \circ \Gamma[q]) - \partial_1 L \circ \Gamma[q] = 0

Here D denotes differentiation, \partial_1 and \partial_2 are partial derivatives with respect to the coordinate and velocity slots of L, and \Gamma[q] maps a time t to the local tuple (t, q(t), Dq(t)); note that, unlike the traditional form, this equation explicitly mentions the path q.

It’s worth learning Sussman and Wisdom’s notation for the clarity it ultimately provides. It’s even more important to learn to always strive for clear understanding.

One final point: although mathematicians do often use notation that is superior to physicists’, they shouldn’t feel too smug; Sussman and Wisdom had similar things to say about differential geometry in this paper.

SICM on Mac OS X

October 17, 2007

“Structure and Interpretation of Classical Mechanics” (SICM), by Gerald Jay Sussman and Jack Wisdom, with Meinhard Mayer, is a fascinating book that revisits classical mechanics from the point of view that everything must be computationally explicit. I already mentioned the book in a previous post.

The book is available on-line, and all the software is freely available as well. The software is written in Scheme, and a very extensive library called “scmutils” was developed to support computations in classical mechanics, including implementations of many symbolic and numerical algorithms.

I think that many scientists and programmers could find the “scmutils” library generally useful, even if they are not particularly interested in classical mechanics. If you are using the GNU/Linux operating system, there’s no problem getting the library working. However, if you want to use it on Mac OS X (or Windows), the instructions leave the impression that it’s not possible; Googling turned up some useful information, but no complete instructions, and also some people who seemed to be at a loss about how to do it.

Well, it is possible to get MIT-Scheme with the scmutils library running on Mac OS X (and you can probably modify my instructions to make it work on Windows too):

Click here for my instructions for running scmutils on Mac OS X.

NobelPrize.org

October 16, 2007

It’s Nobel season, as you’ve certainly noticed. What you might be less aware of is that the Nobel Foundation maintains an interesting web-site at http://nobelprize.org/. Since 2001, all the Nobel lectures have been video-taped, and the videos are all available at the site. Each of the Nobel Laureates since 2001 has also been interviewed, and since 2004, the Nobel Laureates in physics, medicine, chemistry and economics have participated in round-table discussions, and there have been documentaries produced about each of the Laureates. So there’s quite a lot of material to view for all tastes and scientific interests.

The site is a little awkwardly organized, but you can find your way around. As one random starting point that might be of interest, here’s the page about the 2006 Prize in Medicine, to Andrew Fire and Craig Mello, for the discovery of RNA interference.

While on the subject of the Nobel Prize, I can’t resist adding the priceless reaction of Doris Lessing, this year’s Nobel Laureate in Literature, to learning that she won the prize.

LDPC Decoders and PyCodes

October 15, 2007

I already wrote about Gallager’s LDPC error-correcting codes, but I didn’t explain very much about how they work, aside from pointing you to some good references. I want to use this post to say a little about their decoders, which use the belief propagation algorithm, and also to make you aware of some freely available LDPC software, in case you want to study or simulate these codes.

The decoders typically work by message-passing (although decoders based on linear programming have also been studied). One represents a code using a “Tanner graph,” like the one shown below, which is actually a Tanner graph for the famous (7,4) Hamming code.

[Figure: Tanner graph of the (7,4) Hamming code: seven bit nodes (circles), three parity-check nodes (squares marked “+”), and a channel-evidence node (plain square) attached to each bit.]

The circles in the Tanner graph represent the bits that are transmitted. For this Hamming code, only 7 bits are transmitted in a block, but more practical codes will have hundreds or thousands of bits in a block.

The squares with a “+” inside of them represent the parity check constraints. Each parity check constraint enforces that the bits it is connected to must sum to 0 modulo 2, or equivalently, that the sum of those bits is even. For example, in the code above, there are three parity check constraints, and the first parity check constraint forces the first, second, third, and fifth bits to sum to an even number. Even in codes with a large number of bits, each check will only be connected to a small number of bits; that’s what makes the codes “low density.”
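
In code, a parity-check matrix and the check test are tiny. Here is a Python sketch; the first row encodes the check just described (bits 1, 2, 3, and 5), while the other two rows are one conventional choice for the remaining checks of the (7,4) Hamming code, which I am assuming here in place of the figure:

    # One parity-check matrix for the (7,4) Hamming code. H[c][v] == 1 means
    # check c is connected to bit v in the Tanner graph.
    H = [
        [1, 1, 1, 0, 1, 0, 0],  # bits 1, 2, 3, 5 must have even sum
        [1, 1, 0, 1, 0, 1, 0],  # bits 1, 2, 4, 6
        [1, 0, 1, 1, 0, 0, 1],  # bits 1, 3, 4, 7
    ]

    def satisfies_checks(bits):
        """True if every parity check sums to 0 modulo 2."""
        return all(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)

    print(satisfies_checks([1, 1, 0, 0, 0, 0, 1]))  # True: a valid codeword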

The plain squares represent the information from the channel about each bit. For example, if a binary symmetric channel with a flip probability of f was used, and the first bit was received as a 0, the first square would be a function that said that the first bit had a probability of 1-f of being a 0, and a probability of f of being a 1.

The belief propagation decoders for LDPC codes (there are actually several variants) work by passing messages back and forth between the bit nodes and the parity check nodes in the Tanner graph. The bit nodes start by sending their beliefs about what values they have to their neighboring check nodes. For example, a message might say something like “bit 1 believes it has a 90% chance of being a 0, and a 10% chance of being a 1.”

The check nodes look at their incoming messages, and send out appropriate messages in response. For example, if a check node is connected to four bits, and the first three bits think that they are a 0, a 0, and a 1, respectively, the fourth bit will get a message to be a 1 (so that the sum will be even), with a probability that depends on how strongly the three other bits believe that they have those values.

When the bits get messages back from the check nodes, they update their beliefs appropriately and iterate. Eventually, if we’re lucky, the bits have beliefs which are consistent (when they are thresholded to their most likely value) with all the parity checks, and the decoder can output a codeword. Again, you should check out the references in my previous post about LDPC codes for more mathematical details about the algorithms.
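
To make the message-passing loop concrete, here is a short, self-contained sketch of a sum-product decoder in Python, written with log-likelihood ratios (LLRs), where positive means “probably 0.” This is a toy illustration of my own, not PyCodes; it reuses the Hamming-code checks from the sketch above.

    import math

    # Parity checks of the (7,4) Hamming code (same H as above).
    H = [
        [1, 1, 1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0, 1, 0],
        [1, 0, 1, 1, 0, 0, 1],
    ]

    def decode(channel_llrs, max_iters=50):
        """Return (guess, success) given per-bit channel LLRs log P(0)/P(1)."""
        checks = range(len(H))
        bits = range(len(H[0]))
        edges = [(c, v) for c in checks for v in bits if H[c][v]]
        # Bits begin by sending their raw channel beliefs to their checks.
        bit_to_check = dict((e, channel_llrs[e[1]]) for e in edges)
        for _ in range(max_iters):
            # Check update: tell each neighbor which value makes the parity
            # even, weighted by the other neighbors' confidence.
            check_to_bit = {}
            for (c, v) in edges:
                prod = 1.0
                for (c2, v2) in edges:
                    if c2 == c and v2 != v:
                        prod *= math.tanh(bit_to_check[(c2, v2)] / 2.0)
                prod = max(min(prod, 0.999999), -0.999999)  # keep atanh finite
                check_to_bit[(c, v)] = 2.0 * math.atanh(prod)
            # Bit update: combine channel evidence with incoming messages,
            # excluding each check's own message on the way back to it.
            posterior = list(channel_llrs)
            for (c, v) in edges:
                posterior[v] += check_to_bit[(c, v)]
            for (c, v) in edges:
                bit_to_check[(c, v)] = posterior[v] - check_to_bit[(c, v)]
            # Threshold beliefs; stop once every parity check is satisfied.
            guess = [0 if llr >= 0 else 1 for llr in posterior]
            if all(sum(guess[v] for v in bits if H[c][v]) % 2 == 0
                   for c in checks):
                return guess, True
        return guess, False

For the binary symmetric channel described earlier, the channel LLR for a received 0 is log((1-f)/f), and its negative for a received 1:

    f = 0.1
    received = [0, 0, 0, 1, 0, 0, 0]  # the all-zeros codeword with bit 4 flipped
    llr = math.log((1 - f) / f)
    print(decode([llr if r == 0 else -llr for r in received]))
    # ([0, 0, 0, 0, 0, 0, 0], True): the single flip is corrected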

If you want to implement LDPC codes, you might want to use the PyCodes package developed by Dr. Emin Martinian while he was at Mitsubishi Electric Research Labs (MERL). PyCodes is written in C and linked into Python, so you can call it from within Python as an ordinary module, but it still runs very fast.

Emin began writing PyCodes when he was my intern, and continued when he became a full-time employee at MERL. It’s very well-written code that I use a lot; Emin was a professional software developer before he was a graduate student, and the software is professional-quality. PyCodes is free for non-commercial use; see the license for more details.

For other software for error-correcting codes, see “the Error Correcting Codes Page.”