While neuroscience has made tremendous progress over the past decade, the details of this progress can be opaque to the uninitiated. The field is new and complex enough that textbooks are not the best source of information (unlike, arguably, in foundational math and physics), and the rapidly expanding literature can be difficult to parse without specialized training. The sociology of neuroscience (What do people care about? Where do researchers think progress is possible?) can also be hard to ascertain from papers alone.
I propose to create a series of articles/essays to help make our current understanding of neuroscience more accessible to interested outsiders with some technical expertise. The hope is to use these articles to start a 'living guide' to neuroscience that is analogous to Lilian Weng's blog (https://lilianweng.github.io/), a popular and well-maintained reference for concepts in machine learning research. These articles would be posted to LessWrong as well as my personal website.
These articles would be created from scratch, and would reflect many conversations I've had, presentations I've heard, and papers I've read as a neuroscientist-in-training over the past few years.
Why might this be useful? Is there really a need for something like this in a world with Wikipedia, review articles, and large language models (LLMs)? In principle, someone interested in (for example) memory could read Wikipedia, or some collection of review articles, or ask an LLM to generate text summarizing memory-related research.
The main advantage of what I propose is better curation of ideas. As a neuroscientist with insider knowledge, I have a reasonably good sense of which details to omit (making the presentation more compact than an encyclopedia's); how to maintain a coherent overarching story across different topics (making the presentation more unified than a set of review articles); and how to ensure the ideas presented are both state-of-the-art and fairly accurate (unlike LLM-generated text). The goal is to present an understanding which is current, somewhat technical (i.e., it may involve some equations, so that readers with technical expertise can grasp relevant details precisely), and easy to reference.
Neuroscience is an astonishingly large field, so even selecting topics to write about can be challenging. Here are 10 core essays (grouped into five themes, two essays per theme) that I think are worth writing; I would write them throughout 2024 at a pace of about one essay per month:
1. Substrate of nervous systems
- modeling biological neural networks
- neural and molecular variability
2. Studying nervous systems
- measuring neural activity
- fitting neural data
3. Types of explanations
- explanations in neuroscience
- the Bayesian brain as a theoretical framework
4. Learning
- types of learning and learning rules / meaning of "biologically plausible"
- reinforcement learning in the brain
5. Capabilities
- memory
- probabilistic reasoning
These topics are selected both for their fundamental importance and because they reflect my own expertise and interests. (This list is tentative; I am open to a different set of topics if you feel something else would be better.) I am asking for $5000, more or less exclusively for labor. This works out to about $500 per essay, although the funding buys more than the essays themselves: I will participate in ongoing discussion, maintain the articles, and possibly write additional ones later. The hope is that I build momentum and continue writing similar articles beyond this initial year, while keeping existing ones reasonably up to date.
Each essay might take ~20 hours, including reading and writing (a very rough estimate; I am not yet sure whether it errs high or low). At a pace of about one essay per month, that comes to roughly 5 hours per week, which is doable given my other research and writing obligations.
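For concreteness, the implied back-of-the-envelope arithmetic (rough, and assuming all ten essays get written):

\[
\frac{\$5000}{10\ \text{essays}} = \$500\ \text{per essay}, \qquad
10 \times 20\,\text{h} = 200\,\text{h} \;\Rightarrow\; \$25/\text{h}, \qquad
\frac{20\,\text{h/essay}}{\approx 4\,\text{weeks}} \approx 5\,\text{h/week}.
\]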
As a final comment, my interest (and presumably that of many others) is not only in learning about neuroscience for its own sake, but also in extracting lessons that may be relevant to understanding, building, and controlling artificial intelligence (AI). Throughout these articles, connections with and implications for AI would be identified and discussed.
Whoever writes the synthesis I described above should ideally (i) be educated in neuroscience, (ii) be able to connect neuroscience with other fields of study (and particularly with AI), (iii) actively engage in neuroscience research themselves, (iv) be capable of communicating their ideas clearly, and (v) have the professional bandwidth to do the task justice. I am a postdoctoral researcher in theoretical/computational neuroscience at Harvard Medical School (HMS), and will explain below why I believe I satisfy each of these criteria.
1. Neuroscience background. A formal background typically involves coursework, so three graduate-level courses I have audited while at Harvard are worth mentioning: a survey course for first-year neuroscience graduate students (co-taught by many experts, and covering a mix of molecular and systems neuroscience), a theoretical neuroscience course taught by the up-and-coming Cengiz Pehlevan, and another theoretical neuroscience course, emphasizing physics-inspired approaches, taught by Haim Sompolinsky (arguably one of the greatest living theorists).
I have read some or all of various foundational textbooks (e.g., Dayan and Abbott's "Theoretical Neuroscience", Sutton and Barto's "Reinforcement Learning: An Introduction"). I have read many, many papers, both experimental and theoretical. I have attended many seminars and major conferences (e.g., Cosyne, NeurIPS, the Society for Neuroscience's annual meeting).
In short, I do neuroscience for a living, and have been immersed in it for several years. If I don't know a topic well, I probably know labs that work on it (or can ask around to quickly find out). For many interesting topics, I personally know someone who works on them.
2. Broader educational background. My undergraduate training was in both math and physics, and I took many graduate-level courses in both subjects before and during my PhD (e.g., analysis, algebra, topology, algebraic topology, Lie groups and algebras, mechanics, quantum mechanics, quantum field theory; I can produce a complete list if necessary). This has given me a solid picture of how to use math as a tool to study nature, along with a sense of its strengths and limitations.
I obtained a physics PhD in 2021, specializing in theoretical biophysics. (Specifically, my PhD work primarily concerned stochastic mathematical models of cellular processes like transcription.) Since I had taken most core courses required by my PhD program as an undergrad, I focused on biology-related coursework as a graduate student. Highlights include courses in systems biology and the physics of living systems, as well as a year-long survey course on biology intended for first-year biology graduate students (which I was able to take via special permission).
As a postdoc, I have also worked on artificial intelligence research in my spare time. (On this front, I am most excited about a paper on diffusion models that I hope to post as a preprint within the next month.) Although I haven't taken any machine learning courses, I have been an avid participant in HMS's "Machine Learning from Scratch" seminar, and have been its main organizer since fall 2022.
In summary, I am broadly educated in math, physics, biology, and AI, in addition to neuroscience.
3. Current research. I work in the Drugowitsch Lab, which is broadly interested in using theoretical and computational approaches to better understand brains, and which often takes a Bayesian perspective. One project I'm working on studies how the brain disentangles self-motion from object motion when assessing whether an object is moving or stationary (think about how you retain a good sense of how objects move even while shaking your head or running). Motion judgments, like many other perceptual computations, can be performed "optimally" using a Bayesian algorithm; so far, we've shown that participants performing a motion-judgment task appear to use an approximately Bayes-optimal strategy. We've also developed a theoretical model of how neural circuits could implement algorithms involving Bayesian model comparison (sketched below).
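To give a flavor of what Bayesian model comparison means here, a minimal illustrative sketch (the notation is illustrative, not the exact formulation from our work): given sensory evidence $s$, an observer can compare the hypotheses that the object is moving ($M$) versus stationary ($S$) via the posterior odds

\[
\frac{P(M \mid s)}{P(S \mid s)} = \frac{P(s \mid M)}{P(s \mid S)} \cdot \frac{P(M)}{P(S)},
\]

where each likelihood marginalizes over the unobserved self-motion $v$, e.g. $P(s \mid S) = \int P(s \mid v, S)\, p(v)\, dv$. Saying participants are approximately Bayes-optimal means their judgments track these posterior odds.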
I also work on various other things, like how individual cells can act as memory storage sites, the theoretical properties of different kinds of neural codes, and how dopamine could be used to implement a cause-effect learning algorithm.
4. Writing experience. I have published research papers and written a few essays (see links in the "more about me" section). Most of my neuroscience-related work is not out yet, but several important papers should be released this year (2024).
5. Professional bandwidth. Postdocs like me strike a reasonable balance: enough expertise and perspective, but not (yet) too much responsibility and pressure. Someone outside the academic system might also enjoy such a balance, but would likely lack current insider knowledge.
Twitter: https://twitter.com/johnjvastola
Google Scholar: https://scholar.google.com/citations?user=LMFIFBcAAAAJ&hl=en
Personal website: https://www.johnvastola.com/
Website of my lab: https://www.drugowitschlab.org/
Technical writing example (co-first author): https://www.johnvastola.com/assets/pdf/natcomm_interp.pdf
Two essays written for different 2020 essay contests:
Essay writing example 1: https://cdn.vanderbilt.edu/vu-my/wp-content/uploads/sites/2692/2020/11/02095804/vastola_2020_FQXi_essay.pdf
Essay writing example 2 (runner-up; see https://engage.aps.org/fhpp/resources/essay-contest): https://cdn.vanderbilt.edu/vu-my/wp-content/uploads/sites/2692/2020/10/26165329/vastola_fhpcontest_entry.pdf
$5000
80%
The uncertainty is mostly about completing things in a timely fashion; some parts of the year are extremely busy research-wise.