An Invite-Only Workshop To Be Held At The Santa Fe Institute, July 10-14, 2023
Engineers try to predict future input from past input; this can take the form of prediction of natural video, natural audio, or text, which has famously led to products such as the Generative Pre-trained Transformer 3 (GPT-3) and to proprietary algorithms for stock market prediction. Organisms and parts of organisms may have evolved to efficiently predict their input as well, and the hypothesis that they do is a cornerstone of theoretical neuroscience and theoretical biology. How to design systems that predict their input is still a matter of debate, especially for continuous-time input: input that has a value at every point in time, not just at discretely sampled points. We aim to bring together researchers who approach the question of how to design predictive systems through the lenses of biology, machine learning, information theory, and dynamical systems. This knowledge will help establish foundations for theoretical neuroscience and theoretical biology, and will enable the scientific community to better calibrate and understand prediction products.
We hope that this workshop will cover a wide range of topics. Some participants will examine evolved systems, including neurons in the retina, the hippocampus, and the visual cortex. Others will discuss engineering systems to better predict input, using reservoir computing and recurrent neural networks in which the reservoir weights are trained as well (a minimal sketch of the reservoir approach follows).
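To make the reservoir-computing idea concrete, here is a minimal echo state network sketch in Python/NumPy. It is an illustration only: the reservoir size, spectral radius, and the noisy sine-wave task are arbitrary choices for this example, not a description of any participant's work.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Build a random reservoir (echo state network) ---
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo-state property)

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state trajectory."""
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states[t] = x
    return states

# --- One-step-ahead prediction of a noisy sine wave ---
t = np.arange(4000)
u = np.sin(0.1 * t) + 0.01 * rng.normal(size=t.size)
X = run_reservoir(u[:-1])[200:]               # discard initial transient
y = u[1:][200:]                               # targets: the next input value
W_out = np.linalg.lstsq(X, y, rcond=None)[0]  # train only the linear readout

pred = X @ W_out
print("RMSE on last 500 steps:", np.sqrt(np.mean((pred[-500:] - y[-500:]) ** 2)))
```

In practice one would regularize the readout (ridge regression) and use a harder benchmark; training the recurrent weights themselves, as some participants will discuss, requires backpropagation through time or related methods rather than a single least-squares fit.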
This workshop is graciously sponsored by the National Science Foundation, the Santa Fe Institute, the W. M. Keck Science Department, and Awecom.
A list of confirmed participants:
Sarah Marzen, W. M. Keck Science Department, Pitzer, Scripps, and Claremont McKenna Colleges, and co-organizer
Jim Crutchfield, U. C. Davis and co-organizer
David Pfau, Google DeepMind
Nicolas Brodu, INRIA
Adam Rupe, Los Alamos National Laboratory
Alexandra Jurgens, INRIA
Marc Howard, Boston University
Steve Presse, Arizona State University
Nicol Harper, University of Oxford
Chris Hillar, Awecom, Inc.
Bryan Daniels, Arizona State University
Erik Bollt, Clarkson University
Antonio Carlos Costa, École Normale Supérieure
Guillaume Pourcel, University of Groningen
Manuel Beiran, Columbia University
Jared Salisbury, University of Chicago
Andre Bastos, Vanderbilt University
Bruno Olshausen, University of California, Berkeley
Kelly Finn, Dartmouth College
Alex Beltsen, University of California, Berkeley
Please submit to the Special Issue on Sensory Prediction in Computational Brain & Behavior! You can submit even if you didn't attend this workshop. More details coming soon.
Computational Mechanics: Where It Has Been and Where It's Going
An Invite-Only Working Group To Be Held At The Santa Fe Institute, September 4-10, 2025
Computational mechanics started as an answer to a number of questions that have since become cornerstones of machine learning, physics, information theory, and complexity science. What is the causal structure of a process, to the extent it can be recovered from observations? (These are the causal states.) How can we understand a stochastic process? (Using the machine built from causal states, called the epsilon-machine.) What are the inherent complexity, randomness, and predictability of that process? (Statistical complexity provides a natural measure of complexity, while randomness is the purview of the entropy rate and predictability the purview of the predictive information, or excess entropy.) And how do we infer that causal structure and those measures of complexity, randomness, and predictability from data?
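For readers new to the formalism, these objects can be stated compactly in the standard notation of the field, where $\overleftarrow{X}$ denotes the past of a stationary process and $\overrightarrow{X}$ its future. Two pasts are causally equivalent exactly when they predict the same future:
\[
\overleftarrow{x} \sim_\epsilon \overleftarrow{x}' \;\Longleftrightarrow\; \Pr\!\big(\overrightarrow{X} \mid \overleftarrow{X}=\overleftarrow{x}\big) = \Pr\!\big(\overrightarrow{X} \mid \overleftarrow{X}=\overleftarrow{x}'\big).
\]
The causal states $\mathcal{S}$ are the equivalence classes of this relation, and the quantities mentioned above are then
\[
C_\mu = H[\mathcal{S}], \qquad h_\mu = H[X_0 \mid \mathcal{S}_0], \qquad \mathbf{E} = I\big[\overleftarrow{X}; \overrightarrow{X}\big],
\]
the statistical complexity, entropy rate, and excess entropy, respectively.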
Over the years, computational mechanics has expanded to include a wide variety of topics that previously would have seemed out of reach. New algorithms for inferring causal states have been developed, revealing details of chemical and biological processes, with many applications to follow. We can benchmark the lossiness of biological systems that process information, from biomolecules to humans. We can benchmark how much energy a system harvests against information-theoretic bounds on what it could harvest. We have quantitative answers to how much memory a quantum epsilon-machine saves over a classical epsilon-machine. A NeurIPS paper co-authored by three of the planned participants, as well as several other papers, addressed how causal states can be used to understand transformers, reservoir computers, and other machine learning algorithms. Most importantly, we have new state-of-the-art algorithms for inferring causal states; a simplified sketch of the idea appears below. Summarizing these advances pedagogically in a high-profile physics review would be a boon to physics and to a number of other fields that still use order-R Markov models when infinite-order Markov models are within reach!
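To give a flavor of what causal-state inference does, here is a deliberately simplified Python sketch: it clusters fixed-length histories by their empirical next-symbol distributions, in the spirit of (but much cruder than) algorithms like CSSR or the newer methods such a review would cover. The history length, tolerance, and Golden Mean test process are illustrative choices.

```python
import numpy as np
from collections import defaultdict

def next_symbol_dists(seq, L, alphabet):
    """Empirical distribution of the next symbol following each length-L history."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - L):
        counts[tuple(seq[i:i + L])][seq[i + L]] += 1
    dists = {}
    for hist, c in counts.items():
        total = sum(c.values())
        dists[hist] = np.array([c[a] / total for a in alphabet])
    return dists

def cluster_histories(dists, tol=0.05):
    """Greedily merge histories whose next-symbol distributions agree within tol.
    Each resulting cluster approximates one causal state."""
    states = []  # list of (representative distribution, member histories)
    for hist, p in dists.items():
        for rep, members in states:
            if np.max(np.abs(rep - p)) < tol:
                members.append(hist)
                break
        else:
            states.append((p, [hist]))
    return states

# Test on the Golden Mean process: no two consecutive 1s, which has
# exactly two causal states (last symbol was 0 vs. last symbol was 1).
rng = np.random.default_rng(0)
seq, prev = [], 0
for _ in range(100_000):
    x = 0 if prev == 1 else int(rng.random() < 0.5)
    seq.append(x)
    prev = x

states = cluster_histories(next_symbol_dists(seq, L=3, alphabet=(0, 1)))
print(f"{len(states)} inferred causal states")  # expect 2
```

Real algorithms go well beyond this sketch: they compare histories across lengths, use statistical tests rather than a fixed tolerance, and reconstruct the epsilon-machine's transition structure, not just its states.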
But our goal is not just to take stock of what has been done; it is to convene the leaders of the field of computational mechanics to figure out what comes next.
This Working Group is graciously funded by the Santa Fe Institute, with more sponsors to come.