I've now talked to many people who think that causal states (minimal sufficient statistics of prediction) and epsilon-Machines (the hidden Markov models built from them) are niche. I would like to argue that causal states are useful. To understand their utility, we have to understand a few unfortunate facts about the world.

1. Take a random time series, any random time series, and ask if the process it comes from is Markovian or order-R Markov. Generically, it is not. Even if you look at a pendulum swinging, if you only take a picture every dt, you're looking at an infinite-order Markov process. Failure to understand that you need the entire past of observations to estimate the angular velocity of that pendulum results in a failure to infer the equations of motion correctly. Another way of seeing this: most hidden Markov models that generate processes will generate an infinite-order Markov process. So no matter how you slice it, you probably need to deal with infinite-order Markov processes unless you are very clever with how you set up your experiments (e.g., overdamped Langevin equations as the stimulus).

2. Causal states are no more than the minimal sufficient statistics of prediction. They are a tool. If they are unwieldy because the process has an uncountable infinity of them-- which is the case generically-- we may still have to deal with them. In practice, you can approximate processes with an infinite number of causal states by using order-R Markov approximations (better than their Markov approximation counterparts) or by coarse-graining the mixed state simplex.

3. This is less an unfortunate fact and more a rejoinder to a misconception that people seem to have: epsilon-Machines really are hidden Markov models. To see this, check out the example below. If you see a 0, you know what state you're in, and for those pasts that contain a 0, the hidden states are unhidden, allowing you to predict as well as possible. But if you have seen only 1's, then you have no clue what state you're in-- which makes the process generated by this epsilon-Machine an infinite-order Markov process. These epsilon-Machines are the closest you can get to unhidden Markov models, and it's just a shame that they usually have an infinite number of states.

4. This is a conjecture that I wish somebody would prove: that the processes generated by countably infinite epsilon-Machines are dense in the space of processes.

So if we want to predict or calculate prediction-related quantities for some process, it is highly likely that the process is infinite-order Markov and has an uncountable infinity of causal states. What do we do?
There are basically two roads to go down, as mentioned above. One involves pretending that the process is order-R Markov with large enough R, meaning that only the last R symbols matter for predicting what happens next. The other involves coarse-graining the mixed state simplex, which works perfectly for the infinite-order Markov processes generated by hidden Markov models with a finite number of causal states. Which one works better depends on the process. That being said, here are some benefits to thinking in terms of causal states:

1. Model inference gets slightly easier because epsilon-Machines are unifilar: given the starting state, there is only one path through the hidden states consistent with a given series of observations. This was used to great effect here and here, and to some effect here.

2. It is easy to compute certain prediction-related quantities (entropy rate, predictive information curves) from the epsilon-Machine even though it's nearly impossible otherwise, as is schematized in the diagram below. See this paper and this paper for examples of how to simplify calculations by thinking in terms of causal states.
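As a concrete stand-in for the example referenced in point 3 of the list of unfortunate facts, here is the even process in Python: a standard two-state unifilar HMM in which a single 0 tells you the hidden state but a run of 1's never does. It may not be the exact machine from the original example. The same snippet carries out the entropy-rate calculation from benefit 2 just above, using the fact that for a unifilar presentation the entropy rate is the state-averaged uncertainty in the next symbol; the parameter p = 1/2 is a choice made purely for illustration.

```python
import numpy as np

# The even process: state A emits 0 (stay in A) or 1 (go to B) with probabilities p and 1-p;
# state B emits 1 and returns to A with probability 1. Ones therefore come in even-length
# blocks, and only a 0 reveals which state you're in.
p = 0.5
# T[x][i, j] = Pr(emit x, move to state j | currently in state i); states: 0 = A, 1 = B
T = {0: np.array([[p, 0.0], [0.0, 0.0]]),
     1: np.array([[0.0, 1.0 - p], [1.0, 0.0]])}

# Stationary distribution over the causal states
pi = np.array([1.0, 1.0 - p]) / (2.0 - p)

# Entropy rate from the unifilar machine: h = sum_s pi(s) * H[next symbol | s].
# Only state A is stochastic, so the sum has one nonzero term.
H_A = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
h = pi[0] * H_A
print(f"entropy rate from the epsilon-Machine: {h:.4f} bits/symbol")  # 2/3 for p = 1/2

# Bayesian filtering over the hidden states (mixed-state updates) shows the sync story.
def update(belief, symbol):
    b = belief @ T[symbol]
    return b / b.sum()

belief = pi.copy()
print("beliefs over (A, B) after seeing only 1's:")
for _ in range(5):
    belief = update(belief, 1)
    print(" ", np.round(belief, 3))      # keeps oscillating, never collapses

print("belief after a single 0:", np.round(update(belief, 0), 3))  # collapses onto state A
```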
There was an early paper on the MWC molecule that showed that you could get different logical gates by changing binding energies. At the same time, we all know that the Izhikevich neuron can yield many different types of neural behavior that all have different computational properties.
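As a rough illustration of that binding-energy point (a toy model for this post, not the model from the paper itself), here is a two-ligand MWC-style sketch in Python. Each ligand is assumed to bind only the active conformation, and the dissociation constants and active/inactive energy gap are made-up numbers chosen solely to show the switch between AND-like and OR-like behavior.

```python
import itertools

def p_active(c1, c2, K1, K2, L):
    """MWC-style two-ligand activity. Each ligand binds only the active state
    (a simplifying assumption); L = exp(beta * delta_epsilon) biases the molecule
    toward the inactive state when L >> 1."""
    active_weight = (1.0 + c1 / K1) * (1.0 + c2 / K2)
    inactive_weight = L
    return active_weight / (active_weight + inactive_weight)

L = 1000.0                     # strong bias toward the inactive state (made-up value)
c_low, c_high = 0.0, 100.0     # "absent" and "saturating" ligand concentrations (made-up)

# Two hypothetical parameter sets that differ only in binding energies (via the K's):
gates = {"weak binding  -> AND-like": (1.0, 1.0),
         "tight binding -> OR-like ": (0.01, 0.01)}

for name, (K1, K2) in gates.items():
    print(name)
    for c1, c2 in itertools.product([c_low, c_high], repeat=2):
        activity = p_active(c1, c2, K1, K2, L)
        print(f"  ligand1={c1:5.0f}  ligand2={c2:5.0f}  ->  activity {activity:.2f}")
```

The point is just that one underlying two-state (active/inactive) model yields different apparent logic depending on the binding energies buried inside the K's.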
At root, gene regulatory networks and neural networks must perform computations on inputs-- really, arbitrary computations. How do they succeed? I think it has something to do with the presence of different apparent activation functions in the biophysical network. Take a gene regulatory network, with its first layer being production of mRNA and its second layer being production of protein. The first layer is full of different apparent activation functions-- the same thermodynamic model underlies them all, but changes in binding energy produce different apparent logical functions, leading to literally any computation you want. Or take a neural network, with its many layers. Every layer might have different apparent activation functions arising from the same underlying dynamical system with a few parameters changed. The weird part is that this seems easy to analyze if you view it as follows: there is some underlying hidden-state dynamics that describes everything, and all you're changing are the parameters of that hidden-state dynamics to get different apparent activation functions.

Back when I was at MIT and starting to think about the predictive capabilities of reservoirs, I wanted to pull out the dynamical systems textbook and answer all my questions about prediction.
The first dynamical systems textbook I pulled out was Strogatz. I realized that I could hit the problem of prediction with that textbook in the limit of weak input, when the basin-attractor portrait doesn't change, and that resulted in this paper. The second book I looked up was one on "Random Dynamical Systems". I got one chapter in when I realized something-- this book wasn't answering any of my questions about prediction. A conceptual shift was needed. These dynamical systems were not "random". They were filters of input, the input was the signal, and for all the problems I was working on it was incorrect to treat the input as noise. I barely care about what the state of the system is; I only care about how the state of the system relates to the past of the input, something that can be harder to keep track of. The fix, I think, is to look at the joint state space of predictive features of the input and the state of the system and to find dynamics on that joint state space. I did this in this paper. You have to know something about the input. The math gets a bit complicated, but I'm hoping that slight fixes to the Strogatz textbook can be carried over to the more general case! Altogether, I think a new textbook on dynamical systems with input is in order, one that includes more recent work on reservoir computing. These input-dependent dynamical systems actually do a computation, and many fields-- from biophysics to theoretical neuroscience-- care about quantifying exactly how well that computation is done. Considering the input as noise is the opposite of solving the problems in these fields. I think this is a classic example of how bio-inspired math could spark an entirely new textbook. Many people have written about this, and I do not know this literature as well as I'd like, but I did a random calculation that I thought was semi-useful based on this paper. I haven't touched the channel coding aspect of this paper, but I did try to understand the mutual information between input and neural response in the small-noise limit. In this case, I'm assuming that the neurons are nearly noiseless and that their information is coded by a firing rate. The approximation was heavily inspired by this paper.
The details are not pretty, but the story ends up being pretty simple. Essentially, we'll assume that the firing rate of the neuron is a nonlinear function of a gain times the input signal, plus some Gaussian noise that ends up being rather irrelevant. We'll also assume that the probability density function of the input is highly peaked around some value. This is essentially ICA, but we learn a few things about the optimal nonlinearity.
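To make that concrete, here is a minimal sketch in Python of the standard small-noise approximation that this kind of calculation leans on: for r = f(g s) + noise with noise variance sigma^2, I(S;R) is approximately h(S) + E[log |g f'(g S)|] - (1/2) log(2 pi e sigma^2). The Gaussian input, the sigmoid, and all of the numbers below are placeholders, not the actual model behind the calculation described above.

```python
import numpy as np

# Small-noise approximation for the channel R = f(gain * S) + noise, noise ~ N(0, sigma^2):
#   I(S; R) ~= h(S) + E_S[ log |gain * f'(gain * S)| ] - 0.5 * log(2 pi e sigma^2)
# since h(R | S) is the noise entropy and h(R) ~= h(f(gain * S)) when the noise is small.

rng = np.random.default_rng(0)

mu_s, std_s = 1.0, 0.05        # highly peaked input distribution (placeholder values)
gain, sigma = 4.0, 1e-3        # gain and small output noise (placeholder values)

f = lambda x: 1.0 / (1.0 + np.exp(-x))      # sigmoidal rate nonlinearity (placeholder)
fprime = lambda x: f(x) * (1.0 - f(x))      # its derivative

s = rng.normal(mu_s, std_s, size=200_000)

h_S = 0.5 * np.log(2 * np.pi * np.e * std_s**2)          # differential entropy of Gaussian S
slope_term = np.mean(np.log(np.abs(gain * fprime(gain * s))))
h_R_given_S = 0.5 * np.log(2 * np.pi * np.e * sigma**2)  # entropy of the Gaussian noise

I_bits = (h_S + slope_term - h_R_given_S) / np.log(2)
print(f"small-noise estimate of I(S; R): {I_bits:.2f} bits")
```

In this approximation, maximizing the information over f under an output-range constraint pushes |f'| to track the input density, which is the usual histogram-equalization intuition about the optimal nonlinearity.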
Please let me know if someone has already done this calculation! It seems like an obvious move, but I am unfortunately not familiar enough with the literature to know.

Here are some calculations that I will never publish. Intuitively, if you want to be energy efficient, you should die. (I credit Tony Bell with this analysis.) But now, it seems like there is buzz around the idea that energy efficiency leads to prediction of input.
In my opinion, this is almost true. I can't find a way in which Tony Bell's argument doesn't hold up, unless you add so many constraints to your system that the death solution is impossible. For example, in this very interesting recent preprint, the death solution does seem to be impossible. If you added a scalar v in front of the input in their RNN, the death solution would become available; I hypothesize that experiments would confirm that training for energy efficiency then sets v to 0 and kills the activations of the network entirely. But as it stands, their system is constrained so that energy efficiency demands that "p" be the negative of the input, and so predictive coding results. A more general take on energy efficiency and prediction is the thermodynamics of prediction; continuous-time versions are in this paper. I find this bound to be quite clever in that it ties energy inefficiency (dissipated work) to prediction inefficiency (the non-predictive information the system retains about its input), rather than to prediction wholesale. Prediction inefficiency can actually be zero when there is no prediction at all (e.g., this paper). It is not yet clear to me whether this bound is tight for optimized systems. Based on the overly simple examples in the aforementioned paper, I'd say no, but we'll have to see.

I wrote some stuff. I held off on doing this for a while, but now seems like a good time to be a nitpicker. Here are some thoughts.
At the end of the day, it seems likely that early brain regions are doing joint source-channel coding, and I wish I had a good way to think about that. Just to update: I have two pieces of work on this with Simon DeDeo. We take the view that rate-distortion theory is a mathematical description of the efficient coding hypothesis, and because we're not sure what else to do, we randomly draw distortion functions and environmental probabilities. We find two things. In this paper, we find that there are two regimes-- one in which the required resources grow with the number of environmental states, and one in which they don't. In this other paper, we find an experimental mathematical result (one that looks like, but isn't, the Central Limit Theorem) which says that the rate-distortion curve doesn't change much from environment to environment. Hence, no need to change the number of sensory neurons, and no need for sensory neurogenesis.

I don't usually like to write about stuff like this, but I feel like this might actually do some good. Jim Crutchfield (well known for his work on chaos theory) and I have papers about new methods for continuous-time, discrete-event process inference and prediction (here) and about how one can view the predictive capabilities of dynamical systems as a function of their attractor type (here). The reviews-- one from an information theory journal and another from machine learning experts-- unfortunately illustrated a lack of common knowledge on interdisciplinary problems. So I thought I'd put a few key points here, for those studying recurrent neural networks in any way, shape, or form.
First, if you have a dynamical system, you can classify its behavior qualitatively by attractor type. Roughly speaking, there are three types of attractors: fixed points, limit cycles, and beautiful strange attractors. It turns out that the "qualitative" attractor type is a guide to many computational properties of the dynamical system (again, soon to appear on arXiv). Second, hidden Markov models-- including unifilar ones, in which the current state and next symbol determine the next state-- generate processes that are generically neither memoryless nor Markovian. More to come.

I've been wanting to write this post for a while, but never had the courage. But here goes.
Every so often, I encounter a paper that proposes a new objective function for agent behavior. Sometimes something like predictive information is proposed; sometimes something more like entropy rate is proposed. In both cases, I have a bit of an issue. When you try to, say, maximize predictability while minimizing memory, you end up either flipping coins (when you penalize memory too much) or running in very large circles (when you don't penalize memory enough). There doesn't usually seem to be an interesting intermediate behavior. When you maximize entropy rate, you typically end up flipping coins. The key to making these objective functions interesting, I think, is to add enough constraints that they start doing interesting things. And since this now impinges upon an old project that I may pick up again in the near future, that's all I'll say!

I've been reading a lot of papers lately in physics, in neuroscience, and in biology in general, in which new mechanisms for memory are discovered. The system is shown certain types of inputs, its state appears to remember something about past inputs, and victory is declared.
I think that the general trend here is wonderful. Systems do have memory, and many of them have memory for a reason; some of them don't, but their memory can still be put to use. It's definitely about time that we started cataloguing these phenomena. However, at the risk of being a Debbie Downer, I want to point out that pretty much any input-dependent dynamical system has memory. That is, if your system's state evolves according to some set of rules that involves the input, then you're pretty much guaranteed to have memory. Thus, memory in and of itself is not that special; both forgetting and remembering are typical. The real question is, what does the system remember? What is special is when the system remembers, and only remembers, just the "right" things. Now, "right" is an unfortunate word because it's user-defined: what is "right" depends on what the system is used for. In any particular application, there is some set of things that the system needs to remember in order to do its job, whatever that may be. And it is usually tricky to design an input-dependent dynamical system that stores these things and only these things. (Exception: periodic input is exceptionally easy to remember, since it is entirely predictable.) I've therefore started to look at some of these papers a little more critically, like the grump that I am. It doesn't make me less excited about where this field will go, but it does highlight that more studies of what is "typical" are sorely needed.

I've been to two workshops now at which the theory of theory-making was vigorously discussed. It seems clear that even practicing scientists don't quite know how to define what makes something a "rule of life" or a "theoretical framework for neuroscience"-- myself included.
As such, I asked a philosopher at the recent Mathematical Biosciences Institute summit at Ohio State University to tell me how philosophers of science would define a theory. He gave me six characteristics of a theory:
Update: check out this paper about a theory of theories that I'm a co-author on!