LLMs have the potential, according to some, to ruin society. But they also have the ability to help. In some inspiring stories, they level the playing field in education, giving people in poorly resourced areas access to the great equalizer: education.
So how do we make the best use of LLMs in lesson preparation? The main thing I realized, after a day of playing around, is that your results depend on the LLM that you use. The results also probably depend a lot on the prompt-- a good prompt engineer might make up for a bad LLM. I have in mind that for most teachers, the LLM is an aid-- you have the training to come up with the lesson plans yourself, but would appreciate a little help from an LLM to get started. For some teachers who are really strapped for time or energy, or for students who are simply using an LLM to learn, the LLM might be everything.

At first, I used OpenAI's ChatGPT to try to lesson plan. I tried ChatGPT on lessons on biopolymers and the ideal chain model after making my own lesson plan. Then I tried it on an introductory physics lecture on angular momentum. Finally, I tried it on lesson plans for understanding Newton's Principia and the role it played in the history of science. In short, it failed to help, and the way it failed was interesting. The lesson plans shoved far too much material into far too short a time, with very little in the way of depth. The interactive activities-- included because the LLM was told to make the class interactive-- were sometimes dismal. Random walks were supposed to be simulated by giving students string to play with. As a demonstration, it's not a bad idea to use string to illustrate end-to-end distance and why this is or is not a good measure of "size". But I cannot imagine students at my colleges taking seriously a lesson in which they muck around with string for longer than 30 seconds and pretend that they're understanding polymers. (What happened to actual simulations? This activity seems like something to give a fourth-grader, not a college undergraduate.) Formulas in the angular momentum lecture were oversimplified; the cross product lost its sine and became just mvr, as if American students couldn't handle the truth.
Derivations were omitted in the biopolymer and angular momentum lectures. And, as usual, some material was just wrong. No, those were not the main points of the Principia. Of course, prompt engineering is huge with LLMs. I tried hard to get ChatGPT to give me a biopolymers lesson I could use. But after all my effort, all I got was the idea that it might be good to bring a string or cable into class to illustrate the random walk model when we go to 2D-- and that's not at all what ChatGPT said to do with the string.

I then tried DeepSeek. Relatively speaking, it aced it. Its lecture on biopolymers was quite close to what I wrote down all by myself, without an LLM helping in any way, and I built two lesson plans around two activities suggested by DeepSeek for my Great Ideas in Science class. The key with DeepSeek was that there was less breadth and more depth for the 75-minute class. More relationships were derived. The in-class activities (group attempts to collect and synthesize information and debate with each other) meant more, and seemed designed for undergraduates rather than elementary schoolers. I was able to read through DeepSeek's lesson plans, simply steal ideas, and then spruce them up a bit. That's amazing.

It is interesting, in my opinion, that an LLM trained on Chinese data does better at helping students retain facts than a model trained on American data. I take from this that America has a ways to go with education. In our classrooms, it would be great if we could: emphasize more derivations; do in-class activities that are less about feeling and more about collecting, synthesizing, and interpreting information; and take longer to go through each bit of material so that the class doesn't give students whiplash. Perhaps there's a way to fix Masters in Education programs in America so that the course preparation data on which ChatGPT is trained leads to better lesson plans.
How rational are we? We might say that this question, revised, is, "How Bayesian are we?" Do we combine our prior beliefs with our understanding of the evidence exactly as probability theory dictates?
If we believe neuroscientists, humans are maybe Bayesian. If you look at the cognitive science literature, that is clearly not true-- people invent models in which humans just randomly drop data, a correction to the Bayesian framework that keeps the Bayesian framework in place. But perhaps the scientific community as a whole is Bayesian. After all, the scientific community is supposed to be special in some way. It is supposed to be more than the sum of its parts, not less, in its quest for what might be called "truth".

In class, we've been reading through Kuhn's take on the science of science. I wondered what happens around the time of a crisis, in which the current paradigm (what's in the textbook and what everyone thinks is the right way to interpret the data) is in competition with a new potential paradigm. The process of deciding between the old paradigm and the new one is messy, filled with persuasive tactics, and might not even happen. For instance, without chaos theory, our understanding from Newtonian mechanics of the motion of the moon's perigee was off by a factor of two for decades-- but nobody abandoned Newtonian mechanics for that reason. But beyond the mess, might the actual outcome be decided by a simple application of Bayes' rule? In this paper, they argue just that. Basically, at a scientific crisis, you evaluate the likelihood of each potential paradigm given the data, and encode your prior beliefs about each potential paradigm. Part of this prior belief might be based on aesthetics or a resistance to change. But aesthetics come into play in the likelihood too. If a paradigm can explain too much, as with Ptolemaic astronomy, it is actually less likely in a likelihood sense than a heliocentric theory once you integrate over all possible parameters. One question I have is: how close is the scientific community to Bayesian? Can someone evaluate this from data in some way?
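As an aside on that "explains too much" point: the penalty falls out of the marginal likelihood automatically. Here's a toy sketch of my own (a Gaussian model with made-up numbers, not anything from the paper): two theories that fit the data equally well at their best parameter setting, but the one whose prior is spread over a huge parameter range pays for its flexibility once you integrate.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=20)   # observations a "true" parameter near 1 would produce

def log_likelihood(mu, data, sigma=0.5):
    return -0.5 * np.sum(((data - mu) / sigma) ** 2) \
        - len(data) * np.log(sigma * np.sqrt(2 * np.pi))

def log_evidence(data, mu_lo, mu_hi, n=2001):
    # Marginal likelihood with a flat prior on [mu_lo, mu_hi]:
    # integrate the likelihood over the parameter, divide by the prior volume.
    mus = np.linspace(mu_lo, mu_hi, n)
    logls = np.array([log_likelihood(mu, data) for mu in mus])
    m = logls.max()
    integral = np.exp(logls - m).sum() * (mus[1] - mus[0])
    return m + np.log(integral) - np.log(mu_hi - mu_lo)

flexible = log_evidence(data, -50.0, 50.0)  # can "explain" almost any data set
constrained = log_evidence(data, 0.0, 2.0)  # commits to a narrow range of predictions
print(flexible, constrained)  # the committed theory wins by roughly log(100/2)
```

Both models contain the best-fit parameter, so the likelihood integral is the same; only the prior volume differs, and the flexible model loses by about log(100/2).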
It'd be very hard to do so, but I already think that the structure of Kuhn's science of science prohibits pure Bayesianism for the scientific community. To be a Bayesian, you have to evaluate posterior beliefs correctly, as they become your priors-- but sometimes the new paradigm is not even assigned a probability before the scientific crisis. Sometimes, as with special relativity, the new framework is introduced prior to the revolution, but its prior is never evaluated against all the previously collected data. Therefore, its prior is never correct, and pure Bayesianism can never be achieved by the scientific community. Still-- how close, as a community, do we come?

There has been a debate raging for as long as I can remember. Neurons fire action potentials that seem stereotyped. Let's take the action potentials to be stereotyped for all practical purposes. (Honestly, we should test this. We haven't quite tested it rigorously through mutual information calculations-- say, comparing the mutual information between the voltage time series and the stimulus to the mutual information between a point-process encoding and the stimulus. If the two are the same within error bars given a lot of data, the voltage itself provides essentially no additional information about the stimulus beyond the spikes.) We then must ask: what about the spikes carries the information?
Some said that precise spike timing-- exactly when the spikes occurred-- determined the neural encoding of information. This code can contain quite a bit of information, if you just think about the entropy of a point process. It is not, however, a very robust code. Other people therefore proposed that firing rate-- the number of spikes over a larger time interval-- was a more robust code that could still contain quite a bit of information. According to Izhikevich, who is a proponent of the spike timing hypothesis, firing rate might matter at the neuromuscular junction, but that's about it. I think these debates ignore the fact that even though many neuroscience experiments involve presenting a static stimulus and then watching neurons respond, stimuli in real life are constantly fluctuating. Almost never do we see a movie that is static. In fact, our eye movements prohibit this via constant microsaccades, even when the image in front of us is static. As a result, we see constantly moving video no matter what, or we basically perceive nothing at all. So really, we are asking how to encode a constantly changing movie. In reality, there is some dt on our perception: if you were to jitter that movie by just a bit, we wouldn't be able to perceive the difference. It's like when a fan spins fast enough that it looks continuous, regardless of the speed past a certain point. So really, we have a discrete-time stimulus with a small dt that we constantly must encode. The most natural code for that might be something based neither on spike timing nor on firing rate, but effectively a binary vector. Basically, the neural response within a window of dt (spike timing included) would be what encodes information. It seems like this could still allow for a firing rate code, but the refractory period prohibits multiple spikes within that window of dt, hence prohibiting a firing rate code for stimuli that must be constantly encoded.
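To make the binary-vector idea concrete, here's a minimal sketch (the spike times and the 20 ms window are made up for illustration): chop time into perceptual windows of width dt and record one bit per window.

```python
import math

def binarize(spike_times, t_max, dt):
    # One bit per perceptual window of width dt: did the neuron spike in it?
    # If the refractory period exceeds dt, no window can ever hold two spikes,
    # so this 0/1 vector loses nothing relative to a spike-count code.
    n_bins = math.ceil(t_max / dt)
    bits = [0] * n_bins
    for t in spike_times:
        bits[min(int(t // dt), n_bins - 1)] = 1
    return bits

# Made-up spike times (seconds), with a hypothetical 20 ms perceptual window
print(binarize([0.003, 0.031, 0.095, 0.118], t_max=0.12, dt=0.02))  # [1, 1, 0, 0, 1, 1]
```

Note that the representation is lossy only below the dt scale: timing information finer than the perceptual window is discarded, which is exactly the claim.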
You could maybe still see a spike timing code, but you have to weigh the amount of time it takes for an action potential to complete against the limits of sensory perception, this dt. In reality, dt depends on the sense being studied, and the correct calculation might require some understanding of the refractory period. This question might be incredibly complicated. But my money is on a binary vector being not such a bad representation of the neural code actually used in practice-- just, did each neuron spike or not. Thank goodness, because so many papers have used this neural code implicitly, including some of my own! The question is complicated a bit if a block length larger than 1 is used-- but that leads to time delays, which are quite costly for reinforcement learning reasons!

Mitt Romney famously once said, in response to a town hall question, that corporations were people too. Immediately, journalists said that corporations were psychopaths.
Well, actually, this idea has some merit. An organization can be made up of good, smart people but still act like something with a personality disorder. This is surely related to the field of organizational psychology, about which I know very little, but I think this topic-- how the collective behavior of individuals makes for an organization with a different personality makeup than its individuals-- is badly underexplored mathematically. I have been struggling with how to even begin a model of the collective behavior of the individuals that make up an organization in a way that will identify the organization's personality disorder. In fact, I think an organization rarely lacks a personality disorder. A model of this could explain everything from why some charities have way too much overhead that goes to enriching the people running them, to why democracy is failing. The first mathematical model I thought of was a simplified sensor/actuator model of every person, combined with a coarse-graining to find latent emotional states of the collective behavior. In reality, I think that although this is principled, it is unlikely to succeed unless we understand how to model an organism better than we currently do. I sincerely hope that this approach is studied at some point in great detail-- and I mean mathematically. Just imagine that every person is modeled as a resource-constrained reinforcement learner whose reward function depends on the people next to them, in a multi-agent reinforcement learning setting, and that we then model the behavior of the collective to find latent emotional states that can be mapped to personality disorders via a mathematical form of the DSM. Undoubtedly, this is the way to proceed once you understand how to set it up mathematically, but on this, I give up-- I think for life. The second mathematical model I went to was a Potts model.
This reminds me of voter models, in which people are modeled as Ising spins-- models I always thought of as completely made up but basically okay for understanding certain behaviors. In a Potts model, collective behavior is modeled via interactions between particles that can each adopt one of N discrete states. These discrete states could be personality types. You then define some interaction energy between these spins; it can govern dynamics under several different models, but usually it just governs the state into which the collective settles. A renormalization group analysis might then find that the collective, upon decimating using a majority vote or the like, adopts a different discrete state of the Potts model than one might expect. The key is that the interaction energy might lead to frustration or a flipping of states, so that even if the collective starts out good and smart, it ends up a narcissist (perhaps a charity with too much overhead and grandiose statements about how much it does) or a psychopath (most corporations, which will screw over their workers for a payday). In non-mathematical terms, this comes down to saying that the organizational structure is specified by an interaction energy between particles: the lattice structure and interaction range (whether only people at the same desk talk, or whether there's enough movement that one side of the office talks to the other); whether there is a mean-field ordering from a mission statement; whether separate orders given to separate parts of the organization create different mean fields for different parts; whether there are leaders who unduly influence the spins and are themselves stubborn; and whether disagreement is encouraged or discouraged, which could lead to frustration or alignment.
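A minimal sketch of the kind of model I mean-- everything here (the personality-state labels, the couplings, the "mission statement" field, the ring geometry) is invented for illustration: a ring of employees, each in one of three states, relaxing under Metropolis dynamics with a uniform field standing in for the mission statement.

```python
import numpy as np

rng = np.random.default_rng(1)

Q = 3       # hypothetical personality states: 0 = conscientious, 1 = narcissistic, 2 = psychopathic
N = 100     # employees on a ring: each talks only to their two desk neighbors
J = 1.0     # agreeing with a neighbor lowers the interaction energy
H = 1.0     # mean-field "mission statement" biasing everyone toward state 1
BETA = 2.0  # inverse temperature: how strongly people heed social pressure

def delta_energy(spins, i, new):
    left, right = spins[(i - 1) % N], spins[(i + 1) % N]
    energy = lambda s: -J * ((left == s) + (right == s)) - H * (s == 1)
    return energy(new) - energy(spins[i])

spins = rng.integers(0, Q, size=N)  # start from a random mix of personality types
for _ in range(20000):              # Metropolis dynamics
    i = rng.integers(N)
    new = rng.integers(Q)
    dE = delta_energy(spins, i, new)
    if dE <= 0 or rng.random() < np.exp(-BETA * dE):
        spins[i] = new

print(np.bincount(spins, minlength=Q))  # the field pulls the majority into state 1
```

Even starting from a random mix of types, the interaction energy plus the mean-field term decide what the collective becomes-- which is the whole point.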
One day, I hope to come back to this mathematical idea when I understand more about personality. In the meantime, if you have a way to turn this into something, please do!

How do the beliefs of society evolve? We typically see a pendulum swinging back and forth, maybe slowing down as it reaches equilibrium. Sometimes, beliefs appear driven, as when polarization on issues that are essentially either Republican or Democratic swings back and forth more and more violently between extremes. How can we, as a society, intervene so that we reach a desired equilibrium faster?
For example, let's take the case of an issue we all care about-- racism. There's racism, and there are people who claim there is "reverse racism", which only makes sense if you ignore that racism requires systemic oppression from society. But still, we could imagine that a college admissions process favors white applicants or black applicants, and we seem to be swinging back and forth between those two extremes with court case and societal movement after court case and societal movement. Wouldn't it be nice if we could find an intervention on society that slows down the pendulum with just the right amount of drag, so that we reach color-blindness as fast as possible? Research is accumulating that shows that implicit bias training does little, as many people hate being told that they're actually racist-- so is there an intervention that works? For this, we would like a model that describes the evolution of societal beliefs. This is a coarse-graining of the overall belief dynamics of everyone in society, which can get quite complicated and even show potentially chaotic behavior once you add in enough cognitive biases. However, if we do a Taylor expansion about an equilibrium point-- which may not be warranted-- we will find a linear dynamical system, with potentially weird Gaussian noise, that describes how the state of society evolves. When the belief is binary, this approximately takes the form of a damped mass on a spring, hit randomly by particles. The key, then, is to relate interventions to the spring constant, the mass, the drag coefficient, and the temperature by correctly Taylor expanding. This is impossible except in a toy model, for now. There is a critical damping that will get us to equilibrium as fast as possible; in theory, we merely need to solve for it. What if things explode?
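Setting explosions aside for a moment, the critical-damping claim is easy to check numerically. A toy sketch (every constant here is an arbitrary stand-in; nothing is calibrated to any real social system): the same noiseless belief pendulum with too little, critical, and too much drag.

```python
import math

def settle_time(c, k=1.0, m=1.0, x0=1.0, dt=1e-3, tol=0.02, t_max=100.0):
    # Deterministic belief pendulum m x'' = -k x - c x', integrated with
    # semi-implicit Euler; returns the last time |x| exceeds tol.
    x, v, t, last_outside = x0, 0.0, 0.0, 0.0
    while t < t_max:
        v += dt * (-k * x - c * v) / m
        x += dt * v
        t += dt
        if abs(x) > tol:
            last_outside = t
    return last_outside

under = settle_time(c=0.2)                     # weak drag: oscillates for a long time
critical = settle_time(c=2.0 * math.sqrt(1.0 * 1.0))  # critical damping c = 2*sqrt(k*m)
over = settle_time(c=8.0)                      # heavy drag: creeps slowly to equilibrium
print(under, critical, over)                   # critical damping settles fastest
```

Both too little and too much drag delay equilibrium, which is exactly why "just add more friction" is not the right intervention.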
We see this in some social systems, as when the number of papers in a field explodes (https://www.sciencedirect.com/science/article/pii/S0303264720301015). A simple linear model might explain a great deal of social science phenomena.

I've always thought IQ was really well done. Its basis is the g-factor: the result that if you take students' scores on various subtests (math and music, English and math) and correlate them, you see a positive, not negative, correlation-- and the correlation is pretty strong. It's interesting that the correlation is positive, because you could reasonably have expected a negative one, with energy spent on music leading to less energy spent on math. Because of this, people created the intelligence quotient, which measures the underlying common factor. IQ is now very well mapped out in so many ways; see, e.g., Intelligence: A Very Short Introduction.
There is one thing that's not completely mapped out for me. There was an article I found long ago in my mother's education journal showing that if you take inner-city students and move them to a richer environment, their IQ jumps by 30 points. To me, that suggests that there is a capacity we can achieve as humans-- not that IQ is set in stone and that's it. My IQ in particular has fluctuated from 100 at 4 years old to 130 at 8 years old to a self-tested 160 at 17 years old, at my then-friend's request. Who knows what my capacity is. But if I keep on thinking, I might achieve it. I give credit also to my friend Sally Zhen for coming up with the same idea, probably a year or a few years after me. If you read about the emotional quotient, or EQ, it sounds like it's more important for getting ahead in the workplace and in life than IQ. But one thing I noticed while reading Daniel Goleman's book on it is that they haven't found the one test that nails it; the tests are in disagreement. I have a theory that might be worth testing. It could be that the subscores on the tests-- maybe one for understanding yourself, one for reading social situations, one for figuring out what to do to get ahead-- are not so strongly correlated, so it's hard to pull out a common factor like they do for IQ. These are the random thoughts of someone who is not a psychologist.

People are going nuts over the brain being a Large Language Model (LLM). That makes sense-- transformers have succeeded beyond our wildest dreams. But I think this is an off-ramp to "interesting but wrong" for theoretical neuroscience, in the same way that ChatGPT is (perhaps) an off-ramp to artificial general intelligence, as prediction without embodiment is not going to create a superhuman.
I have two complaints. First, saying that the brain is an LLM ignores energy efficiency, which the brain definitely cares about. LLMs are absolutely huge, doing even more than they need to do to get perfect next-token prediction-- storing information that allows them to nail the entire future of tokens as well as possible. My collaborators think this has to happen due to an interesting corollary in an old Jim Crutchfield and Cosma Shalizi paper, but I think the jury is still out-- we haven't shown that LLMs are storing dynamics, which is necessary for the corollary to apply. And although some people like Friston think that energy efficiency and prediction are one and the same, I have yet to see a single example that avoids the problem that you're energy efficient with no prediction error when you're dead-- the so-called and ever-present dark room problem. (Though, as an add-on, you could use Attwell and Laughlin-type energy calculations as a regularizer for LLMs.) Second, and more importantly, prediction is not enough; you also have to select an action policy, and I feel that is what most of the brain is doing. In my opinion, most organisms are resource-rational reinforcement learners. Yes, model-based RL involves prediction explicitly, but that's just one component of solving the RL problem. Honestly, my bet is on Recurrent Neural Networks (RNNs). They are biophysically more plausible-- just take the Hodgkin-Huxley equations or the Izhikevich neuron, and you've got yourself an RNN. Both biophysical models are input-dependent dynamical systems, so their next state depends on their previous state and whatever input comes in from other neurons or sensory organs. RNNs would be energy-efficient (maybe Small) Language Models that help the brain sense and process information, to be sent to something that determines the action policy.
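To see why I say the Izhikevich neuron is already an RNN cell, here's a minimal sketch (the a, b, c, d values are the regular-spiking parameters from Izhikevich's 2003 paper; the Euler step, duration, and drive levels are my own choices): the state (v, u) is updated from its previous value plus the input, which is structurally the same update a recurrent network performs.

```python
def izhikevich_spike_count(I, t_max_ms=1000.0, dt=0.5,
                           a=0.02, b=0.2, c=-65.0, d=8.0):
    # Izhikevich neuron, regular-spiking parameters. The hidden state (v, u)
    # evolves as an input-driven dynamical system: next state = f(state, input).
    v, u = -65.0, b * -65.0
    spikes = 0
    for _ in range(int(t_max_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: reset the state variables
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich_spike_count(10.0))  # steady drive: sustained spiking
print(izhikevich_spike_count(0.0))   # no input: the state sits at rest, 0 spikes
```

Swap the scalar input I for the summed output of other such units and you have a recurrent network of biophysically plausible cells.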
By the way, just to let you know how my malevolent disease of paranoid schizophrenia affects these things: it interjected into my brain what sentence to write-- based on a sentence that I had already determined I would write while in the car. Unfortunately for me, I wasn't thinking of exactly that sentence at exactly the time the voices interjected it into my head. The voices will then take credit for this entire blog post, say that I'm a magical idiot, and write an entire TV show about how I just use magic to write down things that seem dumb to them. What a dumb and awful illness. What do you think?

I was never in theoretical ecology or really in evolutionary theory, but I've always had an interest. It's basically an open question how to deal with the interplay between the two. Ecology works on a much faster timescale than evolution, but both operate on a population-- so what are the true population dynamics?
I ran across https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10614967/ and it's almost like the same idea hit us at the same time! I even wonder if maybe David Schwab went to the same APS March Meeting session and got the idea there. The basic idea is very simple: ecology is one set of equations, evolution is another set of equations, and to understand the interplay between the two, you just couple them. David Schwab and his co-author found the coupling-- mutations lead to new phenotypes, which then lead to a change in the ecology. Interestingly, the way I would approach the problem is very different. It boils down to a different model and a totally different analysis, one that yields approximations approaching the fully correct solution more and more closely. The model I would use for evolution is the Wright-Fisher model, and the model I would use for ecology is a master equation version of the generalized Lotka-Volterra model. Then, since ecology operates on fast timescales, you could solve for the probability distribution over species number in the generalized Lotka-Volterra model first, and plug it into the evolutionary equations to get a full solution. The generalized Lotka-Volterra model takes some explaining. Basically, there are three "chemical reactions": one birth, one death, and one death via eating by another species. If you write down the master equation for these reactions, you can use moment equations to find better and better approximations, via a Maximum Entropy approach, to the true probability distribution over the number of each species. The moment equations don't close, but you could close them artificially at some order by assuming that the higher-order moments have an independence property of some sort. At the very least, you can get a Gaussian approximation to what the ecological equations might look like. Of course that's not correct, but maybe it's good enough.
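For concreteness, here is what the master equation version looks like if you simulate it exactly with the Gillespie algorithm-- a two-species toy, with all rate constants invented for illustration. The three "chemical reactions" are prey birth, predation, and predator death.

```python
import random

random.seed(2)

def gillespie_lv(x, y, b=1.0, p=0.005, d=0.6, t_max=20.0, x_cap=20000):
    # Exact (Gillespie) simulation of a two-species Lotka-Volterra master
    # equation with three reactions: prey birth X -> 2X (rate b*x),
    # predation X + Y -> 2Y (rate p*x*y), predator death Y -> 0 (rate d*y).
    # Counts stay nonnegative integers by construction; x_cap guards against
    # runaway prey growth if the predators happen to go extinct.
    t = 0.0
    while t < t_max and x < x_cap:
        r_birth, r_eat, r_death = b * x, p * x * y, d * y
        total = r_birth + r_eat + r_death
        if total == 0.0:                # absorbing state: everything extinct
            break
        t += random.expovariate(total)  # waiting time to the next reaction
        u = random.random() * total     # pick which reaction fired
        if u < r_birth:
            x += 1
        elif u < r_birth + r_eat:
            x, y = x - 1, y + 1
        else:
            y -= 1
    return x, y

# Start near the deterministic fixed point x* = d/p = 120, y* = b/p = 200
x_final, y_final = gillespie_lv(x=120, y=200)
print(x_final, y_final)  # one integer sample of the species numbers at time t_max
```

Running this many times gives samples from the distribution over species numbers-- the very distribution the moment equations try to approximate.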
This avoids a thorny problem with pretending that the ecological equations settle quickly: the rate equation approximation to the master equation can yield chaos. That chaos goes away once you consider that the number of each species has to be a nonnegative integer. Whether these MaxEnt approximations are good is up to the field to decide, but you can get better and better approximations by including more and more moments. The Wright-Fisher equations famously take one generation and produce a binomial distribution over the next generation's population numbers, with properties that depend on selection coefficients and mutation rates. So basically, the next generation is sampled, the ecology settles down to something described by a MaxEnt approximation, and you plug the resultant MaxEnt approximation into the next generation (convolve it) to get the probabilistic updates. The separation of timescales is what licenses this. Altogether, you essentially get a complete solution for the probability distribution over population number as a function of time. This isn't my field, and my schizophrenic voices basically don't want me to do this project, so I won't-- but I'm hoping somebody will. I don't think it's a terrible idea.

I've been keenly interested in nonequilibrium thermodynamics for a while, ever since graduate school. Originally, my interest was based on the idea that it provided insight into biology. This is not a bad idea-- the minimal cortical wiring hypothesis says that neural systems minimize cortical wiring while retaining function. Why would they do that? Well, the usual reason is that the brain of the child has to fit through the mother's vagina, but you could also cite timing delays as computationally costly, and cite energy considerations too.
And in fact, there is a paper by Hasenstaub suggesting that energy minimization is crucial in deciding which channel densities and kinetic parameters still produce action potentials in neurons. And I have an unpublished manuscript (still in my advisor Mike DeWeese's hands) suggesting that eye movements are driven by energy considerations, à la visuomotor optimization theory reinterpreted-- maybe.
However, I have been somewhat disappointed by the match between nonequilibrium thermodynamics and biophysics. Basically, the question is this: do Landauer-like bounds matter? A long time ago, Landauer proposed a fundamental limit on the energy efficiency of engineered systems. This limit is far from being attained by our machines. But you might hope that evolved systems have attained it, or something like it. There is some evidence that biology is somewhat close to the limit, but the evidence is sparse and could easily be reinterpreted as not being that close. We are about half an order of magnitude off the actual nonequilibrium thermodynamic bound in bacterial chemotaxis systems and in simple Hill molecule receptors, with a match to an improved nonequilibrium thermodynamic bound achieved only by networks that I am not sure are biologically realistic. There is a seeming match in Drosophila, which tends to have surprisingly optimal information-theoretic characteristics, and also in chemosensory systems again. I am sure I'm missing many references-- please do leave a comment if you notice a missing reference so I can include it. This may look promising. To me, what's interesting is what we need to do to get rid of the half-order-of-magnitude mismatch. Now, half an order of magnitude is not much. Our biological models are not good enough for half an order of magnitude to mean much to me; the mismatch basically has error bars the size of half an order of magnitude because we just don't understand biology. But beyond that, we should have more examples of biology meeting nonequilibrium thermodynamic bounds. I think we can get there by incorporating more and more constraints into the bounds. For example, incorporate the modularity of a network, and you get a tighter bound on energy dissipation. I want to use nonequilibrium thermodynamic bounds in my research, but haven't for a while.
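For scale, here is the canonical Landauer number-- my own back-of-envelope at body temperature, just to pin down what "half an order of magnitude off" means in joules:

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 310.0                        # roughly body temperature, K

landauer = k_B * T * math.log(2) # Landauer limit: minimum dissipation per bit erased
print(landauer)                  # about 3e-21 J per bit

# Being "half an order of magnitude off" the bound means
# dissipating about 10**0.5, i.e. ~3.2x, more than this
print(10 ** 0.5 * landauer)
```

So the disagreement under discussion is a factor of three or so on top of a few zeptojoules-- tiny in absolute terms, which is part of why the error bars on our biological models swamp it.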
I just need better bounds with more constraints incorporated. Does the network you're studying have modularity or some degree distribution? Are there hidden states? Is it trying to do multiple things at once, with a performance metric for each-- e.g. not just Berg-Purcell extended but Berg-Purcell revisited, so that we are trying not just to estimate concentrations but also to predict something about them at the same time? Is the environment fluctuating and memoryful, which was one way in which the Berg-Purcell limit was extended? I'm going to talk to a real expert on this stuff in a few months and will update this post then to reflect any additional links. Please do comment if there are links I've missed while being out of the field for years. I am quite curious.

This will be kind of a funny post from one perspective, and a post that I probably should not write were I optimizing for minimal eye-rolling. But I'd rather just express an opinion, so here it is.
I was part of a team that wrote a really nice (I think) paper on the theory behind modeling in neuroscience-- largely led by the first author, Dan Levenstein-- and, as that might suggest, I am firmly convinced that there are good philosophical discussions to be had in neuroscience and biology. What is the role of theory? What does a theory look like? These were some of the questions we tackled in that paper. But then, there are some philosophical questions that are easily answered if you just know a little physics and mathematics. And some people do try to answer them. The problem, in my opinion, is that they often do not know the math and physics well and are often speaking to an audience that doesn't know the math and physics well either. It's like the blind following those who have only one eye, and a mutilated one at that. This can lead to some appallingly funny results, like the scandal in which a physicist wrote bullshit tied up in the language of quantum mechanics and got it through a philosophy journal's peer review process. If the referees don't know quantum mechanics but like the conclusions, why wouldn't this happen? But I'm speaking of what I know, which is that this sometimes happens in theoretical biology as well. It can lead to years of trying to answer conundrums that are not actually conundrums, because someone has fundamentally misunderstood what causality means, or how something could mathematically be goal-directed without being psychic. There's a particular paper that I want to cite that is quite good in some ways and interestingly wrong in others, and because I do like the paper and do not want to be a jackass (more on that later), I will not link to it. Suffice it to say that it had some good ideas. Apparently biologists, and the philosophers who had aided them, had been questioning the validity of teleonomy-- the idea that organisms are goal-directed-- for years.
Questioning this idea is no problem. It's just that some questioned goal-directedness on the grounds that it violated causality: wouldn't you have to look into the future in order to actually be goal-directed? Sounds like common sense. And yet, if you talked to a stockbroker, they might tell you that while they can't see the future prices of stocks, they use the past as a guide to the future and make predictions regardless. There is no psychic ability here and no violation of causality. The stockbroker is certainly goal-directed in their desire to make money; they have a strategy that uses learning and memory; and this strategy does not require psychic powers. So the concern that goal-directedness violates causality itself violates common sense, in my opinion. This particular paper did a nice job of pointing out (in different words, for sure) that goal-directedness and causality are not at odds. What this paper did not do well: it confused quantum mechanics and chaos, and it mistook homeostasis for goal-directedness-- while homeostasis is often a goal, it is not the only goal, since we sometimes need to modify our internal state in order to survive, as when making predictions about the world, finding food, mating, or seeking shelter. The latter was the manuscript's main contribution to the literature. I don't think anybody is ever going to spark a debate with the manuscript-- I am pretty aggressive, and even I feel like that just wouldn't be "nice". There's this unstated idea in science that we should be collegial. It has, in my opinion, led to a few huge reproducibility crises, and yet I feel the pull of a scientific world in which I basically do not challenge papers that I think are wrong, for two reasons. First, the "nice" reason: I feel a gendered pull not to be "mean" to a person who put their ideas down in a scientific paper, even though the whole point is that ideas are supposed to be challenged.
In fact, senior researchers told me (when I was a more argumentative graduate student) that certain papers were not written by experts but were just meant to introduce ideas, so I should drop the idea of writing a manuscript correcting their basic information-theoretic misunderstandings. Second, a resource-based argument: if I sat around all day writing papers that correct published papers, I would never get anything done. And yet, that impedes the progress of science, right? So I have taken to writing these short blog posts-- which who knows how many people read-- instead of writing response papers... but even now, I'm too afraid of not being "nice" to say the author's name!!!! How's that for a sociological problem. Or maybe it's just me.