Their brilliant idea today, the one I can actually write up, is that silence from neurons is a memory of the stimulus. The trouble is that silence from one brain region to the next communicates exactly one thing: silence. It is hard to store a memory in a channel with only one state, since memory requires at least two.
The other brilliant idea they had this morning was a Potts-model version of a Hopfield network: at each update, ask whether a neuron should move to a slightly higher or slightly lower Potts state, and move it accordingly. This idea isn't as bad. In fact, I had roughly this idea a decade ago and extended it: skip the incremental moves and simply ask what the optimal state for the neuron is, then jump to it. Too bad nobody uses the Potts model in neuroscience. What a brilliant idea.

They had another idea: to get neural codewords, you compress neural activity so as to retain information about the relevant stimulus, which for Stephanie Palmer and Bill Bialek was the future visual stimulus. I had originally thought of neural codewords as coming from compressing neural activity to retain information about the past stimulus. It's a totally different framework.

And then they gave me a dream with an idea that, again, sounded good but wasn't: build a semi-Markov model, calculate its autocorrelation function, and use that to bound the predictive information. But I can calculate the predictive information of a semi-Markov model exactly, so no bound is needed. This is getting weirder and weirder.
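The Potts variant can be made concrete. A minimal sketch, assuming the standard Potts-Hebb coupling $\delta(\xi_i,\xi_j) - 1/q$ and the "jump to the optimal state" update rather than the incremental one; all names and parameters here are my illustration, not anything from the original idea:

```python
import numpy as np

def potts_hebb_couplings(patterns, q):
    """Hebbian couplings for q-state Potts neurons:
    J[i, j] = sum over patterns of (delta(xi_i, xi_j) - 1/q)."""
    n = patterns.shape[1]
    J = np.zeros((n, n))
    for xi in patterns:
        J += (xi[:, None] == xi[None, :]).astype(float) - 1.0 / q
    np.fill_diagonal(J, 0.0)  # no self-coupling
    return J

def recall(s, J, q, sweeps=5, rng=None):
    """Zero-temperature asynchronous dynamics: each neuron jumps to the
    state that maximizes its local field (the 'optimal state' update,
    not a +1/-1 step through the Potts states)."""
    if rng is None:
        rng = np.random.default_rng()
    s = s.copy()
    n = len(s)
    for _ in range(sweeps):
        for i in rng.permutation(n):
            fields = [(J[i] * (s == k)).sum() for k in range(q)]
            s[i] = int(np.argmax(fields))
    return s
```

With a single stored pattern and mild corruption, these dynamics restore the pattern in a sweep or two; the incremental higher/lower update would take longer to reach the same fixed point.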
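On that last point: the semi-Markov calculation is more involved, but for the special case of a plain first-order Markov chain the one-step predictive information is just the mutual information between successive states, $I(X_t; X_{t+1})$, and it is computable exactly from the transition matrix. A sketch of that special case (my illustration, not the semi-Markov derivation itself):

```python
import numpy as np

def markov_predictive_info(T):
    """One-step predictive information I(X_t; X_{t+1}) in bits for a
    stationary Markov chain with row-stochastic transition matrix T."""
    # stationary distribution: left eigenvector of T with eigenvalue 1
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    # I = sum_{x, x'} pi(x) T(x, x') log2( T(x, x') / pi(x') )
    joint = pi[:, None] * T
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(joint > 0, joint * np.log2(T / pi[None, :]), 0.0)
    return terms.sum()
```

A chain whose next state is independent of the current one gives zero predictive information; any persistence in the dynamics gives a positive value, with no autocorrelation-based bound required.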
May 2024