I've been wanting to write this post for a while, but I never had the courage. Here goes.
Every so often, I encounter a paper that proposes a new objective function for agent behavior. Sometimes it's something like predictive information; sometimes it's something more like entropy rate. In both cases, I have a bit of an issue. When you try to, say, maximize predictability while minimizing memory, you end up either flipping coins (when you penalize memory too much) or running in very large circles (when you don't penalize it enough). There doesn't usually seem to be an interesting intermediate behavior. When you maximize entropy rate, you typically end up flipping coins. The key to making these objective functions interesting, I think, is to add enough constraints that they start doing interesting things. And since this now impinges upon an old project that I may pick up again in the near future, that's all I'll say!
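(A minimal numerical sketch of the "maximize entropy rate and you end up flipping coins" point, using my own toy example rather than anything from a specific paper: for a two-state agent that switches state with probability p, the entropy rate is the binary entropy of p, and it peaks at p = 0.5, i.e. an unbiased coin flip.)

```python
import numpy as np

def entropy_rate(p):
    """Entropy rate (bits/step) of a binary process that switches state with prob p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log(0) at the endpoints
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Sweep the switching probability and find where the entropy rate is largest.
ps = np.linspace(0.01, 0.99, 99)
rates = entropy_rate(ps)
best = ps[np.argmax(rates)]
print(f"entropy rate is maximized at p = {best:.2f} ({rates.max():.3f} bits/step)")
# -> p = 0.50: the entropy-rate-maximizing agent is just flipping a fair coin.
```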