I don't usually like to write about stuff like this, but I feel like this might actually do some good. Jim Crutchfield (well-known for his work on chaos theory) and I have papers about new methods for continuous-time, discrete-event process inference and prediction (here) and about how the predictive capabilities of dynamical systems depend on their attractor type (here). The reviews-- one from an information theory journal, another from machine learning experts-- unfortunately revealed a lack of common knowledge on these interdisciplinary problems. So I thought I'd put a few key points here for those studying recurrent neural networks in any way, shape, or form.
First, if you have a dynamical system, you can classify its behavior qualitatively by attractor type. There are three types of attractors: fixed points, limit cycles, and beautiful strange attractors. It turns out that this qualitative attractor type is a guide to many computational properties of the dynamical system (again, soon to appear on arXiv). Second, hidden Markov models-- including unifilar ones, in which the current state and the next symbol together determine the next state-- generate observed symbol sequences that are, in general, neither memoryless nor Markovian: only the hidden state process is Markov, and predicting the observed symbols can require arbitrarily long histories. More to come.
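To make the second point concrete, here is a minimal sketch of the standard textbook example, the "even process" (my choice of illustration, not one taken from the papers above): a two-state unifilar HMM whose observed sequence of 0s and 1s is not Markovian at order one, because the probability of the next symbol depends on the parity of the current run of 1s, not just on the last symbol.

```python
import random

# The "even process": a unifilar HMM with states A and B.
# Unifilarity: (current state, emitted symbol) determines the next state.
# From A: emit 0 with prob 1/2 (stay in A), or emit 1 (go to B).
# From B: always emit 1 and return to A, so runs of 1s have even length.

def generate(n, seed=0):
    rng = random.Random(seed)
    state, out = "A", []
    for _ in range(n):
        if state == "A":
            if rng.random() < 0.5:
                out.append(0)          # stay in A
            else:
                out.append(1)          # deterministic move to B
                state = "B"
        else:                          # state B: forced second 1
            out.append(1)
            state = "A"
    return out

def p_next_one_by_run_parity(xs):
    """Empirical P(x_{t+1} = 1 | x_t = 1), split by the parity of the
    trailing run of 1s ending at time t."""
    counts = {0: [0, 0], 1: [0, 0]}    # parity -> [total, ones]
    run = 0
    for t in range(len(xs) - 1):
        run = run + 1 if xs[t] == 1 else 0
        if run > 0:
            parity = run % 2
            counts[parity][0] += 1
            counts[parity][1] += xs[t + 1]
    return {p: ones / total for p, (total, ones) in counts.items()}

probs = p_next_one_by_run_parity(generate(100_000))
# Odd-length runs of 1s always continue (the walker is in B),
# while even-length runs continue with probability about 1/2 --
# so the last symbol alone does not determine the next one.
print(probs)
```

Since both conditional probabilities are computed given the same last symbol (a 1), their disagreement shows directly that the observed process is not a first-order Markov chain, even though the hidden two-state dynamics are trivially Markov.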
4 Comments
8/7/2025 06:51:15 am
What types of real‑world interventions might correspond to adjusting the mass or spring constant in your toy social system model?
Sarah Marzen
3/8/2026 10:11:32 pm
Good question. It'd have to do with how a bunch of almost-Bayesians respond to shifts in thinking, probably, so something having to do with collective cognition. But I don't know!
3/8/2026 11:53:01 am
This is fascinating insight into the world of dynamical systems.
Sarah Marzen
3/8/2026 10:10:41 pm
Thanks-- sorry, I didn't really write much more, but it turns out reviewers often lack basic textbook understanding!