1. Scott Alexander on cognition models. Reading the book On Intelligence by Jeff Hawkins when it came out in 2004 was a revelation to me. That’s because Hawkins provided an overarching theory for how cognition works, with his memory/prediction model. From page 99: “The recalled memory is compared with the sensory input stream. It both ‘fills in’ the current input and predicts what will be seen next. By comparing the actual sensory input with recalled memory, the animal not only understands where it is but can see into the future.”

Reading this in 2019, it seems unsurprising. And the framework is not original to Hawkins. But it was the first time I’d encountered a well-written, convincing and unified framework for human cognition. Mind is prediction. At a base level, sense data constantly flows in, and predicted future sense data constantly flows out. At a higher level, mind is prediction of how possible actions will affect the world.

An overarching framework is critical for science, because it stops you from drowning in details. It’s like completing the outside edge of a puzzle. Once that’s done, the frame guides you on where to place all the other pieces.
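To make the memory/prediction idea concrete, here is a toy sketch of that loop. It is my own illustration, not Hawkins’ actual algorithm: an agent predicts the next sense datum from its “memory”, compares the prediction with what actually arrives, and updates memory from the error.

```python
import random

# Toy illustration only (not Hawkins' algorithm): predict incoming sense data
# from memory, compare with the actual input, and learn from the error.

class PredictiveAgent:
    def __init__(self, learning_rate=0.1):
        self.memory = 0.0              # stands in for recalled memory
        self.learning_rate = learning_rate

    def predict(self):
        # "The recalled memory ... predicts what will be seen next."
        return self.memory

    def observe(self, sense_datum):
        # Compare the actual input with the prediction, then update memory.
        error = sense_datum - self.predict()
        self.memory += self.learning_rate * error
        return error

agent = PredictiveAgent()
for t in range(200):
    datum = 5.0 + random.gauss(0, 0.5)   # a noisy but stable "world"
    error = agent.observe(datum)

print(f"prediction: {agent.predict():.2f}, last error: {error:.2f}")
```

After a couple of hundred observations the agent’s predictions settle near the true signal, and only the residual noise shows up as prediction error, which is the sense in which “mind is prediction” cashes out computationally.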
So I really liked this post by Scott Alexander, where he created his “Grand Unified Chart” below. It 100% matched my priors 🙂

Scott Alexander explains:
All of these are examples of interpreting the world through a combination of pre-existing ideas about what the world should be like (first column), plus actually experiencing the world (last column).
All of these domains share an idea that the interaction between facts and theories is bidirectional. Your facts may eventually determine what theory you have. But your theory also determines what facts you see and notice. Nor do contradictory facts immediately change a theory. The process of theory change is complicated, fiercely resisted by hard-to-describe factors, and based on some sort of idea of global tension that can’t be directly reduced to any specific contradiction.
Why do all of these areas share this same structure? I think because it’s built into basic algorithms that the brain uses for almost everything (see the Psychology and Neuroscience links above). And that in turn is because it’s just factually the most effective way to do epistemology, a little like asking “why does so much cryptography use prime numbers”.
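The “pre-existing ideas plus experience” structure Alexander describes is essentially Bayes’ rule. Here is a minimal numeric illustration (my own hypothetical numbers, not from his post): a prior over two theories, combined with how well each theory predicts a new fact, yields an updated belief. Note that the disfavoured theory survives the update, which is the “contradictory facts don’t immediately change a theory” point.

```python
# Hypothetical numbers: a prior belief over two theories, updated by one fact.
prior = {"theory_A": 0.7, "theory_B": 0.3}        # pre-existing ideas about the world
likelihood = {"theory_A": 0.2, "theory_B": 0.6}   # how well each theory predicts the new fact

unnormalized = {t: prior[t] * likelihood[t] for t in prior}
total = sum(unnormalized.values())
posterior = {t: p / total for t, p in unnormalized.items()}

print(posterior)  # weight shifts toward theory_B, but theory_A is not erased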
Last point. The full title of Hawkins’ book is On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. That is to say, we could add a new row to Alexander’s table for artificial intelligence. It slots in perfectly. (And yes, I’m aware some would consider today’s AI a subfield of Bayesian stats.) We’re still a long way from general AI. But the point here is that the framework to judge AI progress already exists. If mind is prediction from sensory streams, generalized mind (AGI) requires the ability to go beyond narrow, specific domains.
2. Tanner Greer on virtue signalling. Greer first reminds us how Marx believed economic structures and institutions drove class consciousness, which in turn determined people’s ideas. Ideas are nothing more than an expression of class interest. Which means, as Greer says:
A straight line can be drawn between statements like these and the great terrors imposed upon captive populations by Marxist dictators like Stalin, Mao, and Pol Pot. Men like Stalin did not oppose freedom of expression simply because it threatened their personal power. They opposed freedom of expression because they earnestly believed it was a pointless exercise. There is no point in debating with representatives of the bourgeoisie when the ideas of the bourgeoisie were not pliable to debate. If the opposing stance was not the product of reason, it could not be undone by reason.
Then to Greer’s analogy to signalling theory:
The central proposition of signalling theory is that the vast majority of arguments made in the public sphere are not made in good faith. An outraged tweet is not written to express genuine emotion, but to signal solidarity with the ‘right’ side. A verbose blog post is not written to persuade its readers of its argument, but of the cleverness of its author. A well circulated censure of some racist act is not written to convict the racist, but to display the Wokeness of the censor. The connecting string in all of these cases is that your arguments are less about your ideas than about shaping other people’s perceptions of you. Whether you believe you are writing primarily to shape other people’s perceptions of you is immaterial. As with the Marxist theorists, signalling theorists are happy to conclude that signalling does not need to be a fully conscious process. In place of a class consciousness imposed by the material circumstances of an individual’s social status, signalling theorists trace the origins of self-interested arguments to mental social-status ‘modules’ imposed by the material circumstances of an individual’s evolutionary heritage.
And
Civilized discourse depends very much on both parties in a duel of words recognizing the self-serving intentions that lie behind the other party’s speech… and then deciding not to mention them. This is Reason’s Pact: the bare minimum required for fruitful rational discourse to take place. We underestimate how many institutions in our society depend on us maintaining this illusion. Jettison the ideal of ‘reason’ as a governing principle, and all you have left are words as war.
One quibble. Greer notes “Robin Hanson, doyen of modern signalling studies, fights a perpetual battle to convince his readers that none of the ulterior motives he ascribes to human sociality writ large are behind his twitter polls.” Fair point on the irony of Hanson being attacked for impure motives and signalling. But I would add, in Hanson’s defense, that I’ve not seen Hanson use that style of argument against his accusers. See this example. In fact, the central argument of Hanson’s book on signalling is that social policy could be improved if we take signalling into account. That is to say, I read Hanson as a nerdish theorist who writes about signalling with the hope of finding ways to work around it, leading to better social policies. Though perhaps that doesn’t matter; signalling theory is being weaponized regardless. Quibble aside, an outstanding post. Worth reading in full.
3. No One Is Prepared for Hagfish Slime. It’s yucky. Has a great picture of a slime-covered Prius. And is written by the excellent Ed Yong. Recommended.
4. Iterative embryo selection. CRISPR gene editing is not reliable enough to do the hundreds or thousands of edits it would take to dramatically impact complex traits. But one way around this problem is iterative embryo selection: “conducting multiple generations of embryo selection in a petri dish by exploiting gametogenesis or stem cells.” So you use IVF, get embryos, do gene edits, then replicate those embryos to get a new set of cells, select the ones without side effects, and repeat. The selection step allows you to weed out bad edits. This is not completely workable now, and it will be applied to livestock first. But the technique will accelerate when CRISPR can do large numbers of complex edits. This was a new idea to me. link
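The logic of the technique is easiest to see written out as a loop. The sketch below is purely illustrative Python of my own; the functions standing in for the lab steps (edit_genome, score, and so on) are made up, and the “genome” is just a list of trait markers.

```python
import random

# Purely illustrative: edit a batch, select away the bad edits, re-derive a
# fresh batch from the best cells, and repeat for several "generations".

def make_embryo(n_loci=20):
    return [random.randint(0, 1) for _ in range(n_loci)]

def edit_genome(genome, edits_per_round=3, error_rate=0.2):
    # Attempt a few edits; some fail (marked -1, simulating unwanted side effects).
    genome = genome[:]
    for _ in range(edits_per_round):
        locus = random.randrange(len(genome))
        genome[locus] = 1 if random.random() > error_rate else -1
    return genome

def score(genome):
    # Bad edits disqualify the embryo; otherwise count desired variants carried.
    return -1 if -1 in genome else sum(genome)

def iterative_selection(generations=5, batch_size=50):
    population = [make_embryo() for _ in range(batch_size)]
    for _ in range(generations):
        edited = [edit_genome(g) for g in population]
        viable = [g for g in edited if score(g) >= 0]   # weed out bad edits
        best = max(viable, key=score)
        # "Repeat": derive a new batch of cells from the best embryo and go again.
        population = [best[:] for _ in range(batch_size)]
    return best

print("desired variants carried:", score(iterative_selection()))
```

The point of the selection step is visible in the code: each round only has to get a few edits right, because everything that went wrong is discarded before the next round, so the desired variants accumulate across generations.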
5. Screentime worries overblown. A new paper with N = 355,358 argues that worries about screen time are overblown, even if the effects are hard to pin down. The effect of wearing glasses is bigger. Paper here. News article here.
6. On desiring your enemies to be evil. I was going to link to some Covington stories, but decided not to. Here’s C.S. Lewis from his 1952 book instead:
Suppose one reads a story of filthy atrocities in the paper. Then suppose that something turns up suggesting that the story might not be quite true, or not quite so bad as it was made out. Is one’s first feeling, ‘Thank God, even they aren’t quite so bad as that,’ or is it a feeling of disappointment, and even a determination to cling to the first story for the sheer pleasure of thinking your enemies are as bad as possible? If it is the second then it is, I am afraid, the first step in a process which, if followed to the end, will make us into devils. You see, one is beginning to wish that black was a little blacker. If we give that wish its head, later on we shall wish to see grey as black, and then to see white itself as black. Finally we shall insist on seeing everything — God and our friends and ourselves included — as bad, and not be able to stop doing it: we shall be fixed for ever in a universe of pure hatred.
And that’s all for this post. Thanks for your time!
Interesting!
I have a real problem understanding Scott Alexander on cognition models. I can’t remember who said it, but it was something like “A theory which explains everything explains nothing.” So theories like Marxism explain nothing. Anyway, how can you explain all those theories in such a short chart? If this sort of model is able to categorize everything, every thought, every theory, why can’t it be used to categorize, I dunno, basketball players like Lebron, Durant and Curry? Put them in the chart too. I went over to the original blog post and waded into a multilayer meta-analysis of Bayes analysis, which doesn’t really require KL divergence; all Bayes theory does is calculate probabilities, and KL divergence is just one sort of test, you could use anything you like. I just got the feeling the reason they concentrated on the Bayes theory is maybe they also could not make head or tail of the rest of the chart, like, for instance, psychology being prediction, surprisal and sense data. If you had given me this chart and filled in all the blanks except for psychology, there is absolutely zero chance a priori I could have predicted that psychology would have these fields. Same with neuroscience: there’s zero chance I would have put in NMDA, dopamine and AMPA instead of a thousand other choices; for instance, I might have guessed mirror neurons and the amygdala. As you probably know, I don’t think scientists really understand why exactly science works, and I certainly don’t think philosophers have a good grasp on what exactly science is and why it’s different from ordinary human endeavors.
I think it was the mathematician Richard Courant who defined logic as “an experiment in the imagination.” This seems to fit well with your point about cognition in general.