“Oooooh, the Lord is good to me
And so I thank the Lord
For giving me the things I need
The sun and the rain and the apple seed
The Lord is good to me”
At summer camp we learned that Johnny Appleseed travelled around, randomly planting seeds throughout pioneer-era America, thus providing migrants with a source of food and a tasty treat in the rough wilderness. No one told us that apple trees don’t grow true from seed, and that the vast majority produce inedible fruit (don’t worry, this isn’t going to be a metaphor about postdocs from certain labs). What they do create, as we all learn later, is an easy raw ingredient for booze. And that’s why there are songs about John Chapman.
In modernity, we have lost virtually all of the apple diversity that once characterized North America. Not just the sour cider apples, but all of the freak discoveries of something unique, weird, delicious when roasted, keeps all winter, good for soup, good for pie, amazing off the tree but doesn’t last—anything that was interesting enough for people to take cuttings and graft it into their own apple garden. Apples selected for industrial-scale production are not chosen for taste alone, but also for how they look in great heaps in supermarkets (uniform size and shape, smooth skin, no brown), how well they survive shipment by truck or ship, ease of picking, yield per hectare. (I learned this from a guy at a farmer’s market and from Michael Pollan on the radio.)
This sucks. Some of the best apples I’ve ever tasted look like malformed potatoes. You can still find some of these local varietals at farmer’s markets in the eastern U.S., and they are maintained in isolated places or agricultural research facilities, but most never made it west, and the vast majority of apples are just not available to you and me.
Like John Chapman and apples, the relationship between NIH and “health” is not exactly straightforward, and not what most people think. The Johnny Appleseed version is that the NIH is working for the taxpayer to improve the health and quality of life of the good people of America. Yes and no. Let’s leave aside that if aggregate “health” were the actual goal, almost every dollar spent by the NIH would be better spent on prevention, education, regulation of industry, and alleviating poverty. So what is the NIH for?
With the apple story from summer camp, we went from “food to sustain the weary and brave pioneer” (I’m picturing Michael Landon) to “cider to get so drunk that you can briefly forget you live in a hole in the ground and will probably freeze/starve this winter.” Here, we go from “improving health” to…what? A somewhat more realistic description of the goal of the NIH is not health per se, but the development of biomedical interventions. That’s important, too, and I think non-controversially beneficial.
But the people who have administered the NIH—and this includes the US Congress, believe it or not—have been much smarter and more strategic than this. At least since the post-war period, the NIH has put enormous emphasis on also funding research that has nothing to do with health at all. They have done this by defining “health” as the institutes’ ultimate goal. Most suitably-motivated scientists can think of ways their research may ultimately serve the public health interest. And this is not being sneaky, this is exactly why the NIH’s mission statement and goals are phrased this way. So when you feel tinges of guilt for reaching for health relevance that feels tenuous, relax: it’s a feature, not a bug.
Why do this? In part, it’s a recognition that biomedical advances largely depend on fundamental discoveries, and that fundamental discoveries are nearly impossible to engineer. You can’t go looking for them. If you want more of what you have, you cut and splice from existing apple trees—create clones (I am going to torture the crap out of this metaphor). But if you need truly new things, you have to just plant as many seeds as you can. Only once in a while will you get edible fruit, but many of the scientists are actually happiest with the cider. Win-win.
The other reason, of course, is that the NIH represents the transfer of public funds to local institutions across the country—mostly universities, some companies. This means jobs, development, prestige—stuff congress critters love to take credit for. Until relatively recently, even most Congressional Republicans recognized not only the value of basic research but the value of having federally funded basic research programs at the universities in their districts.
But times are tight. Grant success rates are at historic lows. The purchasing power of the typical NIH grant has declined steadily. Too many PhDs are being produced and there aren’t enough jobs. We don’t even know how many postdocs there are. But we know that a much, much smaller fraction will have the opportunities their mentors did to pursue a research career. All sorts of terrible incentives arise from these conditions.
The instinct in tight times is to prioritize. And my fear is that prioritization means NIH leadership making choices about what kind of proximate basic research goals to fund, despite decades of success doing essentially the opposite: funding the best scientific ideas of almost any kind (here is not the place to go into how the CSR often fails at this, but we can accept that this is at least its stated purpose).
The recent BRAIN Initiative report allays some of the worst fears about centralization of resources and top-down mandates of research priorities, but not all of them. We’ll see how it develops. This is of course life or death for young PIs like myself, who, in the first years of our appointments, are absolutely dependent on the NIH for starting our programs and keeping our jobs. Whether we fit the particular programs or RFAs of BRAINI or not, how will any study section look on the relevance of young neuroscientists who don’t fit the BRAINI mold? Any policy shifts or mandates that narrowly define what the scope of neuroscience research will be could in effect exclude me from funding, and that is the end of my career.
But it’s not just self-interest. It would be a deep mistake for the NIH to try to pick winners in advance, or even narrow the pool. It is still hard for me to see the focus on “neurotechnologies” as anything other than an already massively successful and expanding field staking claim to essentially guaranteed funding at the expense of others. It is a positive feedback loop of resource concentration. The training pipeline and job market dynamics (everyone wants to hire someone fundable!) mean we will endlessly clone and transplant a few lab “types” and create a generation of people working on the same systems with the same technology. And then what? A supermarket with nothing but golden delicious*. As I keep repeating, conformity is the death of creative science, and how the funding system works determines whether conformists thrive or we have an intellectually diverse field.
You can’t pick apple seeds in advance, you have to plant as many as you can. Neuroscience in the Era of Strong Bullshit is already in danger from mono-cropping tendencies, group-think, bandwagons and concentrating resources around particular institutions, technologies, or experimental approaches. I think recording from a million mouse neurons at once sounds kind of cool…go fucking figure it out and publish the results of each step along the way. But I’ll be damned if I’ll agree a priori, “Yes! That must be done! Set aside $200M!”
That’s why this whole thing can feel like an end run around peer review. When you appeal to POTUS to somehow legitimize or put some federal stamp of inevitability and necessity on specific kinds of neuroscience, when you start to make totally unsupported and idiotic equivalencies between neuroscience and space exploration or high energy physics, it starts to sound like a boondoggle. Let’s be scientists, not shysters. Let’s fund neuroscience, but let’s not institutionalize our worst instincts.
*I hate golden delicious.
Methodological crazes are all like this. Think back on your favorite that is no longer. Or that is still a thing, but not the thing, and now we know the caveats and why a lot of the early work was bullshit: it came before the astronomical false positive rates were known, and before the caveats and artefacts that must be dealt with were understood, all ignored in our youthful zeal to publish it in Nature. The early days of almost any method are rife with cowboys: sloppy experiments, over-interpreted. Cowboy is too romantic. Low-hanging-fruit harvesters.
I won’t pick on particular methods, but the pattern is pretty clear, historically. New methods are often so uncritically acclaimed that it causes major distortions in science. Whether or not they end up causing seismic shifts in our knowledge or practice of science (most don’t), they make or break careers and incontrovertibly cause seismic shifts in funding and publishing.
And guess what? There were people back then, saying exactly what I (and many others) have said about optogenetics. And they were ignored, and you could argue what harm was done? We learned how to use (or not use) and interpret these methods appropriately. So what if the naysayers are almost always right, technically. We still make progress. And by naysayers, I mean people who simply say “let’s not put all our eggs in here” or “this is promising, but let’s be sure we continue to fund a variety of approaches” or “what is this even for and why are we spending tens of millions on it.” Because no scientist is actually opposed to new methods.
I say that the problem is that in the early days of new methods, we massively reward over-promising, sloppy thinking, and over-interpreted experiments. Massively. And this is terrible. Re-allocating funds to chase the new, hot thing basically means creating an incentive structure that rewards Type I errors (and/or being a blowhard) at the expense of quality and diversity of ideas and methods.
And this is where I haz the wow sad realization about (maybe) how this keeps happening: for your career, Type I errors aren’t errors at all, and Type II errors (or even reasonable caution) can kill you.
In going all brainless and ga-ga and “let’s fund the fuck out of it” over any and all uses of a new technology that looks like it’s all-that-and-a-side-of-chips, we extend massive credit to the low-hanging-fruit harvesters, and by the time the sobering bill arrives and we’ve separated the bit of wheat from the mountains of chaff, all their postdocs have tenure doing the same thing.
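The career asymmetry between Type I and Type II errors can be made concrete with a toy simulation. Everything here is an invented assumption for illustration (the base rate of true hypotheses, statistical power, the replication step, and the paper-counting payoff), not data about any real field:

```python
import random

# Toy model of the incentive asymmetry described above. All parameters
# (base rate of true hypotheses, power, alpha, paper-counting payoff)
# are invented assumptions for illustration only.

def career_score(strategy, n_experiments=100, base_rate=0.10,
                 power=0.8, alpha=0.05, rng=random):
    """Count 'papers' for a scientist following the given strategy.

    'flashy'  : publish any positive result immediately, so Type I
                errors become publications, i.e. career rewards.
    'careful' : replicate before publishing, which filters out most
                false positives but halves throughput.
    """
    trials = n_experiments if strategy == "flashy" else n_experiments // 2
    papers = 0
    for _ in range(trials):
        real = rng.random() < base_rate
        positive = rng.random() < (power if real else alpha)
        if strategy == "careful" and positive:
            # Replication: a false positive rarely survives a second test.
            positive = rng.random() < (power if real else alpha)
        if positive:
            papers += 1
    return papers  # committees count papers, not whether they hold up

rng = random.Random(0)
flashy = sum(career_score("flashy", rng=rng) for _ in range(200))
careful = sum(career_score("careful", rng=rng) for _ in range(200))
print("flashy papers: ", flashy)   # several times more publications
print("careful papers:", careful)
```

Under these assumptions the flashy strategy racks up several times more publications, even though a large share of them are false positives; nothing in the payoff penalizes that until the sobering bill arrives, by which point the tenure decisions have already been made.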
And look at neuroscience today. Look at the “You Are Your Connectome”* fan club of consciousness uploaders and deepity commentariat. Look at how optogenetics is going to either cure everything or enslave us to the government.
Look—if you dare—at the Human Brain Project, which is kind of the F35 jet + tulip mania wrapped up in some kind of Willy Wonka fever nightmare.
Neuroscience is in an era of Strong Bullshit. And I get how that’s an opportunity for the field despite it being annoying. But it is my sincere hope that things like the BRAIN Initiative can at the very least find a responsible way to use the hype without being defined by the hype.
That was a tiring day of blogging.
Here is a plea to neuroscience editors to find a 100 fucking pound grain of salt to take with reviewers who prescribe optogenetic experiments in knee-jerk fashion. Now that I have written this, my internal annoyance is externalized to the internet where it can no longer bother me. Right?
1. Optogenetics is great when it is appropriate to the experimental question and in contexts where it has been shown (convincingly, with science) to work as advertised. I like to use it; however, those conditions are rarely met for my work.
2. The sum total of my experience with “optogenetic experiments proposed by reviewers” (n=7 reviewers) is that reviewers who demand new optogenetic experiments (i.e. not extensions/controls with regard to existing optogenetic experiments in the paper) are 100% idiots.
3. This is not because of the particular problems with optogenetics per se, though that is some part of it. It is almost entirely because they are terrible experiments by any standard: at best irrelevant, but usually in the reviewer 3 sweet spot of being exceedingly difficult and completely non-informative.
4. This pattern says something important about scientists who think the latest craze is the answer to everything. And that thing is “ignore them.”
5. Given the totally unearned “gee whiz” bonus that a paper gets through the use of optogenetics, don’t you think the authors probably considered ways in which it could be used? And maybe they have good reasons for not using it?
Just stop it. Imposed conformity of any kind—theoretical, methodological, experimental system—is the death of creative science.
I am very uncomfortable with the first conversation I have with new or prospective students. They are usually enthusiastic and eager to make a good impression, and I am too. I want them to be excited about graduate school and about science. I want to know what their long term plans are if they have any (it is fine not to). Because whatever their plans are, that’s what I want to help them do. At the same time, I want them to help me and the lab succeed.
I don’t want to lie to them. Joining my lab comes with real risks. I have a short window of time in which to obtain substantial ongoing funding. I am “untested” as a PI (though I feel pretty fucking tested, academic life-wise). All of the issues around publishing, the culture of science, careerism, funding…I think for them it all seems very abstract and faraway, even as they are making the choice to step into the center of it. You can tell people to read the Scientopia blogs, but until they hit their first big disappointment, hate their PI for the first time, and drunk-Google “why the fuck did I go to grad school” most of them probably won’t. They are moving to a fun new city and all possibilities lie ahead. Why get bogged down with worrying about career bullshit? All of my instincts agree with this attitude, but I don’t know if it’s right.
So what can I say to someone who is obviously smart and motivated and wants to pursue an academic career? Something that is completely honest yet not demotivating? Should I really describe how I think the next 10 years of their life might go? Should I tell them they should be looking for someone who has a higher profile and more resources to start their career, and only settle for me if they have to? I don’t want them to look back on joining my lab as something they did blindly, or that they were set up to fail. And I don’t want to be self-defeating. Or, you know, shit on people’s dreams.
What I do say is this: you’ve chosen a fantastic graduate program (it’s true). And what matters now is learning to be an experimentalist and publishing good science that we believe in. Luckily, that’s what matters for both of us, so our shared goals will keep us both motivated.
What I don’t tell them is how unfair a lot of this is going to be. How arbitrary. How my failures might affect them. That the advantages they potentially give up by trusting me and joining my lab might lead to lost opportunities. I have the instinct to protect them from bullshit, but I don’t want to “protect” them from realities that will affect their career, and that they should develop the personal and professional skills to deal with. At least they aren’t postdocs. For graduate students, I can take some solace in the idea that I can help them strategically move on from my lab when the time comes, whether it’s a postdoc or a job outside academia or whatever.
Preach it, Captain, and pass the bottle.
There are plenty of others willing to call you a failure. A fool. A loser. A hopeless souse. Don’t you ever say it of yourself. You send out the wrong signal, that is what people pick up. Don’t you understand? You care about something, you fight for it. You hit a wall, you push through it. There’s something you need to know about failure, Tintin. You can never let it defeat you.
Well, not really a retraction, but a pivot. A while back, I minimized the whole OA thing as something that is not an impediment to the daily practice of science (What Limits My Science?). People raised good points in the comments about access by journalists, etc. But I argued why it was a battle I wasn’t choosing to be involved with.
I’m still not ready to make OA my “issue”… I think some of the diehards in that movement have undermined it by presenting it as a with-us-or-against-us moral battle and by refusing to acknowledge that it is a separate (and to me, less serious) problem from glam/prestige bullshit. For example, eLife and PLOS Biology, JIF-humping prestige journals if there ever were such, critically undermine the idea that OA is about changing the way we assess scientific papers and scientists. But I’m going to take their side here.
Several things have happened to make me feel more strongly about OA as a disruptive tool. First, I had long assumed that the glam bullshit was a problem (like homophobia and Matlock) that we would primarily solve with funerals. Sure, you occasionally meet a mini-BSD clone, they are hilarious at first. You assume they will have an awakening at some point. The opposite was driven home for me, however, when encountering someone who works in the same field as me and graduated from the same SLAC, same department, same year as me. I had never heard of him. (Would it surprise you, reader, that as an undergraduate I was not someone who seemed likely to pursue much of anything, let alone an academic career?) Anyway, this guy was the worst. Obsessed with what journals his papers were in, clearly judging himself (and practically begging you to judge him) by all of the things that count in the world of prestige, press releases, media coverage, and nascent science celebrity. Is his work good? It’s fine. Fundable. He is working hard in a crowded corner of neuroscience, straddling several bandwagons, using all the right buzzwords, sure to win BRAINI approval, right in the sweet spot of risk-free science that we have all been coerced into agreeing is innovative and essential these days.
Second, I got a surprising and unpleasantly up-close look at how communication and collusion between BSDs and glam editors often works. It turned my stomach to see the degree to which work from some labs is solicited and clearly given preferential and kid glove treatment. Somehow, I had at least imagined that although being a BSD is an advantage, at least you were having to go through the gauntlet like everyone else. Turns out: no. This is a fraud. It has to stop.
Again, while neither of these things are about OA per se, they are driven currently by a few non-OA publishers. Like getting Capone on taxes, maybe OA is a useful wedge issue. I will freely admit that some of the best work in my field is published in those journals, I just don’t think there is any reason for it to be. Articles should at least start their lives on a level playing field. Competing for hyper-limited spots in “top” journals via a process that is so tainted by prestige and influence (not to mention the random/noisy filter of peer review) just isn’t good enough. Curating the literature isn’t going to be as hard as people think. YMMV, but Google Scholar hasn’t missed a beat for me, and has led me to things I might’ve missed due to a relatively obscure venue.
Maybe it’s true that quasi-glam OA things like eLife and PLOS Biology can be stepping stones to ease us out of judging scientific papers by journal branding instead of by manuscript content. My one experience with PLOS Biology was just as frustrating as dealing with a glam journal, so I’m skeptical. But waiting for funerals isn’t going to work if my generation is inheriting these biases and habits. Most of my peers (and myself) argue something like “I know it’s bullshit, but hate the game not the player.” It’s a slippery slope, and success breeds complacency, then acceptance, then self-delusion.
There is nothing I hate worse than a collection of elite “experts” sitting around deciding what everyone else should think is interesting (<cough>BRAINI<cough>). Thus, I was not amused by this arriving in my twitfeed:
First: Pay. Fucking. Walled.
Second: I skimmed the list of experts (many are not neuroscientists). One of them is a Google executive who—I shit you not—suggests that we read a book by Ray Kurzweil.
Third: Optogenetics. WOW! What crystal ball from 2005 did this visionary genius look into?
Which brings me to the main point. This kind of list is self-defeating. No one would’ve put “light-activated ion channels” on this list 10 years ago. It’s totally fucking unnecessary to sit around guessing, and in the medium to long term you are guaranteed to look stupid. In the short term, it enforces buzz, hype, and conformity.
Fund young scientists. Period. We know what to do with the money.
My reaction summarized thusly: