The Rage of Research

I realized recently that not everyone experiences research emotions the same way that I do, so I decided to write up my typical ‘research emotion journey’. If you experience similarly strong emotions when doing research, I’d love to hear about them! I could even do a follow-up blog post summarizing the findings.

There are two key emotions that drive my research habits: wonder, and pure, unadulterated rage.

It’s funny, because I’m normally a pretty chill person. I don’t get upset about things on a daily basis. But when it comes to research, a lot of my initial exploratory behavior is driven by a deep internal dissatisfaction with the question at hand. It’s extraordinarily frustrating.

I’ll separate it into a few stages: 

Stage 1: the question hooks in the mind

Most research rabbit holes I go down start out with a seemingly innocuous question. I’ll just kind of feel I need to know something. For physiology, one question was ‘how do tissues work?’. For physics, the innocuous question was ‘what is entropy?’. I never learn. Honestly, I just assume the answer will be in some book (the really satisfying answer never is) and do a first Wikipedia search to get something satisfying. I’ll read a sentence like ‘Entropy expresses the number Ω of different configurations that a system defined by macroscopic variables could assume’.
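For the record, the formula that sentence is compressing is Boltzmann’s entropy, which just counts those Ω configurations (k_B is Boltzmann’s constant):

$$ S = k_B \ln \Omega $$

One tidy line of symbols, resting on a mountain of unstated context.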

And then, the rage will set in. 

Stage 2: pure, unadulterated rage

Something about explanations like the above - a sentence so obviously incomprehensible given no other context - sets me off on a complete emotional tangent. ‘But why?????’ my brain will scream. Everything else gets blocked out. I can’t click another thing or read anything else until I understand what this is actually saying. What does a state mean, how do you know there are a finite number of them, what actually is a macroscopic variable, how does all of this fit into literally anything else?

There are then two paths: 

  1. Shut down and constrain this area of intellectual space to one that I will forever just internally snarl at without having done the work to unearth it (unsatisfying, but necessary if there’s something else important going on)
  2. Start the wormhole

Stage 3: what actually is the thing????

A lot of stage 3 can best be characterized by reading everything I possibly can about a subject and internally screaming ‘but what actually is the thing???’ after every sentence. For example: a state - but what actually is the thing; entropy came originally from a heat engine - but what actually is a heat engine; entropy is actually heat over temperature - but what in Newton’s name is heat actually????? What is temperature???? - and following this rabbit hole all the way down. It’s like a branching tree of indignation that just cannot stop growing (and always gets stuck somewhere, when I reach the limit of what we know), infuriated that each definition refuses to resolve itself.
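‘Heat over temperature’ there is the classical Clausius definition, in which entropy change is reversible heat flow divided by temperature:

$$ dS = \frac{\delta Q_{\mathrm{rev}}}{T} $$

which, maddeningly, looks nothing like the state-counting formula above - connecting the two is essentially the whole project of statistical mechanics.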

For entropy, the tree came to an abrupt halt when I asked Scott Aaronson, an obviously knowledgeable MIT expert on quantum information theory, what entropy actually was and why it wasn’t actually just a subjective thing - how could people claim it was this objective principle when, according to everything I understood, it really wasn’t? He said, ultimately, that yes: if there is one universal wavefunction of which all worlds are branches, then if you could know that wavefunction you would know deterministically how everything would turn out, and so the entropy would actually be 0. But we’re all presumably entangled with one branch, and so that’s why, subjectively, there is entropy associated with the future. (Note: I may have misunderstood what Scott was saying, but the point is that it was the first reasonable explanation I’d heard.)

That shut me up - not because it necessarily answered the question permanently, but because it sounded reasonable. And I felt like I’d gotten to the brink of what we knew.

Then there’s a gush of unceasing wonder when I realize I’ve now loaded a full new programming language into my brain and can fluidly manipulate concepts and compute things in a new realm of intellectual space. But that comes after. Before it is the seemingly inexhaustible river of questions and raw emotion that, to me, characterizes falling in love with a question while almost being driven mad by it.

Lastly, here are a few heuristics I’ve started to use when in the rage stage, which seem to help me get to the final explanation quickly:

  1. Historical research: How did things get to be the way they are? Many satisfying conceptual frameworks for me come from seeing the historical or temporal past of a thing - for example, understanding tissue structure through the lens of development, or anatomy through comparative anatomy. It also usually nicely resolves seemingly weird things where you wonder ‘why’ and then realize the answer is ‘someone did a thing a while ago that was useful then but makes no sense now’.
  2. Trying to visualize things: Especially in biology, my brain explodes when I see a conceptual diagram with shapes explaining how a pathway works. I have to understand things in terms of concentrations, numbers, where things are in the cell, and how many would be likely to be bumping into each other (see the sketch after this list). I’m still getting better at this, but a fluid quantitative understanding is required before my brain can move on.
  3. Watching a bunch of random adjacent talks: The good people in the field already know what I need to know. I just need to watch enough obscure talks that the perfect explanation seeps through. 
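On heuristic 2, here is the kind of back-of-envelope arithmetic I mean - a minimal sketch with rough, illustrative numbers (the E. coli-sized cell volume and the micromolar concentration are my assumptions, not universal values):

```python
# Back-of-envelope: how many molecules is "1 micromolar" in one cell?
AVOGADRO = 6.022e23      # molecules per mole
cell_volume_L = 1e-15    # ~1 femtoliter, roughly an E. coli-sized cell
concentration_M = 1e-6   # 1 uM, a plausible signaling-protein concentration

copies = concentration_M * cell_volume_L * AVOGADRO
print(f"~{copies:.0f} molecules per cell")  # ~602 molecules
```

Six hundred copies, not some unimaginable Avogadro-scale number - suddenly ‘molecules bumping into each other’ is something you can actually picture.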

Anyway, that’s my typical rage-filled journey through research. Would be curious to hear about yours! 


Responses
Sounds familiar :) The life sciences in general are even worse to reason about - as I found out not that long ago - because rather than a few neatly constructed abstractions, as one finds in engineering or physics, it's more a haphazardly constructed collection of more or less accepted facts and more or less accepted theories to hang them together. But rather than rage I'd say it's *dissatisfaction* at the state of my knowledge, at seeing something and noticing I don't understand it; thinking I'm not good enough yet. I can't stand that; I must know! This materialises itself in obsessiveness and eventually in wanting to write about it if I can't find a satisfactory answer in one place. Some more heuristics to add to the list:

  1. Cross-country research: Not as useful in the life sciences but useful in the social sciences. How do regulations, healthcare systems, etc. work in different countries?
  2. Multiple sources to avoid overfitting: A paper may claim X and make X look good, but what's its broader context? In practice, this means reading a few literature reviews and meta-analyses. Even a single review may be insufficient. Just reading a lot of papers helps one notice what terms are central and repeated a lot, what is accessory, what is controversial, and how good the facts are.
  3. Writing: When I don't find good explanations, I try to write the explanation I'd like to have found in the first place. At times, I think I've understood something, but until I can commit it to writing it may be a fake understanding, later revealed to be a mere illusion. Writing also puts a permanent record out there for others to scrutinise and correct if needed.
  4. Extrapolation: This is more of a check on understanding. If a paper finds X, and one thinks X implies Y, is Y actually the case? One can search for experiments that support or reject Y and see if one got it right.
  5. Cycling between optimism and cynicism: Upon finding some new fact, try to gather as much evidence for it as possible, as if driven by confirmation bias. Then switch and go as hard against the fact as possible, finding flaws or problems. Then repeat and, as Tyler says, solve for the equilibrium.
  6. Comprehensively reading everything relevant that cited an interesting paper, as I describe here: https://github.com/jlricon/open_nintil/blob/mas... Useful especially if it's an old paper - maybe there is more recent evidence that contradicts or adds nuance.

Jose
I've been consuming the internet for 20+ years; this is the piece I've related to most, ever. Thank you! Currently on three benders:

  1. Photosynthesis (underratedly complex and maddening - currently number one in the 'but what actually is the thing???' rankings, b/c it's so DECEPTIVELY simple)
  2. Mitosis (kind of cool actually, and very visually available; I do rage a bit at how DNA is able to exist when the nuclear membrane dissolves during prophase: 'I thought DNA couldn't exist outside the nucleus - how is it totally fine when the nucleus dissolves?!')
  3. Electricity (amps, current, watts, volts, ohms - just a mess.)
Hello! Loved this post. I had exactly the same issue with textbook entropy explanations! Here's the mental solution I found, if interesting: people overload 'probability' to mean (1) fundamental physical uncertainty (e.g. wavefunction collapse) and (2) the abstract kind of uncertainty you get when you try to simulate an extremely complex physical system perfectly on a bounded computer with imperfect initial conditions, so you start with a bad approximation that gets worse and worse as the simulation evolves. I haven't seen the second explanation written up anywhere as a definition of entropy, but it makes much more sense to me:

  - You can imagine uncertainty increases as you run the simulation just because errors accumulate; that's the second law.
  - If you pause the simulation, the simulation error is going to stay fixed; that's the third law.
  - Sometimes you can't fully specify the initial conditions of every particle (perhaps your microscope is limited, or perhaps your computer has run out of memory), but your computer has a killer CPU and the physical system happens to be simple enough to evolve literally perfectly on the computer. So even though your initial conditions may have been somewhat wrong, the 'wrongness' of the simulation is going to stay constant in some sense; that's a 'reversible system'.

So entropy sort of makes sense as the gap between simulation and reality, i.e. 'the number of additional bits needed to fully specify reality at time T given the simulation result'. It still sounds totally dumb (e.g. how can a seemingly-absolute physical quantity depend on the computer simulating it?), but I think that (1) it's dumb in the ways that entropy itself is dumb and (2) it doesn't feel as gross to think about as all the explanations on Wikipedia.
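A minimal numerical sketch of this simulation-gap picture (an illustration under assumptions - the chaotic logistic map stands in for the 'physical system', and the step count and target precision are arbitrary):

```python
# Evolve the chaotic logistic map from two nearby starting points and count
# roughly how many bits of correction would be needed to recover "reality"
# from the drifting "simulation".
import math

PRECISION = 1e-16                  # target precision for fully specifying reality

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)     # logistic map in its fully chaotic regime

reality, simulation = 0.3, 0.3 + 1e-12   # imperfect initial conditions
for step in range(1, 41):
    reality, simulation = logistic(reality), logistic(simulation)
    if step % 8 == 0:
        gap = abs(reality - simulation)
        bits = max(0.0, math.log2(gap / PRECISION))  # bits needed to close the gap
        print(f"step {step:2d}: gap = {gap:.2e}, ~{bits:.0f} bits of correction")
```

The gap roughly doubles each step (this map's Lyapunov exponent is ln 2), so the bits of correction grow steadily until the error saturates - a toy version of the 'second law' in the comment above.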
You quote someone saying: 'if you could know that wavefunction you would know deterministically how everything would turn out and so the entropy would actually be 0'. This is totally wrong! See my book on Entropy for Smart Kids.
Very informative