Faking data and committing fraud is the cardinal sin of science. It’s a really big deal, and I honestly think ORI is toothless when it comes to consequences (oh you robbed a bank? that’s cool, keep it, but we’re going to pay someone to follow you around and make sure you don’t rob another one for 5 whole years). Fraud occurs for many reasons, but I think that in many (or even most, it’s impossible to say, as I’m sure most fraud is never detected) cases, senior authors and co-authors can be mostly blameless. If someone is working in your lab and gives you convincing but fake results, it can be extremely hard to detect. Even harder if they are giving you a subset of real results, just those that support a hypothesis, for example. There is a lot of trust placed in people who are under enormous stress and career pressure. It’s bound to happen at some frequency.
On the other hand, it is clear from the online discussion (and the real life of any science trainee), that there are many labs in which cheating is enabled by a system of rewards and expectations, created by the PI, that certain results are what is necessary for praise and career advancement. Data fakers are still to blame, but obviously there is some additional culpability here.
Zooming out even further, the media “witch hunts” that result from high-profile fraud cases result from the absurdity of the hype that surrounds science in the first place. If journals, institutions, and the media (and we have to admit it, us scientists too) didn’t drive this machine of fame, celebrity, and all the attendant dishonest bullshit that comes with it, perhaps there wouldn’t be a need for such vicious recrimination when someone we just anointed a Great Science Hero turns out to be a flawed person. Despite university press releases, the “News” arm of the glamour mags, and the growing annual cycle of Cash Prizes and Top 10 Lists, science almost never happens in stunning leaps of individual genius, and progress never depends on any one person or lab.
Fraud–faking science results–should be punished. Jobs and grants are zero sum–everyone who gets one by cheating stole from an honest scientist (or a less able cheater, I guess). It should probably end your research career. Let’s not pretend that’s a “witch hunt” or a disproportionate response. We think doping athletes should be banned, embezzling bankers should be fired and lose their licenses. What’s the difference? Why should we bother or care about “rehabilitating” Marc Hauser? He had a chance to contribute and blew it, there are many, many others deserving of that chance.
But science is done in teams. If the faker is a trainee, it’s harder to tease apart what happened, and questions inevitably arise. Mistakes and negligence should have consequences too, but these are complicated: the PI and co-authors sit somewhere along a spectrum of being innocent victims of the fraudster, of having been negligent in oversight, or of having actively contributed through (non-fraudulent) bad leadership or research practices. However, anyone who reads Retraction Watch knows that except in cases where it is proven that the PI actively participated in the fraud, there are essentially no career or funding consequences for them. They may take a credibility hit for a while. Is that bad? I don’t know.
For me, it raises questions about our expectations of a PI. As I wrote, almost everything said about Obokata should have raised red flags instead of garnering praise. Can someone running two labs and holding an administrative position, managing perhaps 20+ scientists and staff, even fulfill the minimal expectations we have for 1) instilling the right system of values and incentives that will minimize the temptation to commit fraud; and 2) being familiar enough with the people, experiments, and data in their lab(s!) to have a fair shot at noticing when something might be amiss?
These are management roles that, in my view, cannot be delegated if 1) You are the grant holder and 2) Your name is last on the paper. The inevitable accumulation of funds and trainees that comes from the kind of reward system we have—grants beget grants in a positive feedback loop—leads to very large labs that become the epicenters of disciplines and subfields. But is it even plausible that these PIs are competently doing their jobs and meeting their oversight responsibilities?
Update: The suicide of Yoshiki Sasai is appalling. Everything here is out of proportion and almost every step in this sad saga is a crystallization of the most pathological features of how “big time” science is practiced today.
Whatever personal demons conspired with the professional fallout surrounding his role in the STAP thing, Sasai’s death is a tragedy, a human life ruined and lost over some science experiments. All authors on the work bear some responsibility for the fraud. Obokata, it seems, faked results. A supremely irrational act, given the work’s overblown promise (which preceded any actual experimental results by years) and the inevitable scrutiny it would receive. Maybe she believed so much that she could only see what she wanted to—it had to be true, it had to work. That’s not science, yet obsessive commitment to an idea or theory is often portrayed as a positive quality—rogue genius is vindicated and proven right! They all laughed at me at the university, but now I’m giving a TED Talk.
The responsibility of senior authors is less direct—a responsibility to identify and not exploit the kind of obsessive ambition that leads to fraud. (Is it really that hard to spot?) A responsibility to make it clear that you want the right answer, not the most expedient or exciting answer. Not the best result for your career, the real result.
And what about everyone else? Journals, colleagues, scientists, journalists? Do we really need hero narratives? The splashy results that will “change everything”? The hype machine is out of fucking control. We are adopting the language of biz-speak bullshit and starting to buy into these empty non-values about techno-utopian revolutionaries and lone geniuses. We all participate in the culture of valuing glam, prestige, prizes. Who gets the 8-figure grants while everyone else struggles to stay afloat? Who can I get a selfie with at SfN? Who gets to stamp their name all over the culmination of decades of work by hundreds or thousands? We’ve become cultish: around people, journals, technologies, institutions. As if these are things that matter more than the colleagues around us, or our own integrity. It’s pathetic, and we can be better.
[Repost from April 1, 2014]
Some snippets on the STAP author, from an admiring piece before problems started coming to light:
“There were many days when I wanted to give up on my research and cried all night long,”
He described Obokata as competitive and persistent, saying the graduate student learned the cell cultivation technique from scratch and worked on experiments around the clock.
She said she spends more than 12 hours a day throughout the week at her laboratory,
I think about my research all day long, including when I am taking a bath and when I am on a date with my boyfriend,
There is a powerful, pop-culture image of the single-minded, obsessed, tireless scientist, whose personal sacrifices are rewarded by the discovery of Truth. It’s a lie. The best (and more importantly, happiest) scientists I know are people who are interested in many things, who approach all aspects of their lives with engagement, purpose and openness. I know people like the description here. They are, in my experience, sick. They are unhappy. They think in ruts. They are stubborn. They are unpleasant to work with. They are selfish. They are often single-minded to the point of being negligent. They are terrified of not living up to expectations.
We need to stop presenting and encouraging these traits as admirable or desirable in young scientists.
And what about the field of stem cells? As someone who works in a field that seems to be experiencing a rising tide of bullshit and tech-driven hype, this worries me:
The field was described as “a mess” by one senior researcher with 20 years’ experience, and as having a “very unhealthy, competitive attitude, nourished by top tier journals”, by another.
What is clear is that the senior scientists who praised, encouraged, and stood to benefit from Obokata’s obsessive and self-destructive nature will suffer few if any career consequences.
The trajectory of Haruko Obokata was meteoric.
I remember reading at some point that humans become better at logical reasoning if you state the problem in a way that is socially familiar to them. I think I have discovered a critical exception to this phenomenon, and that is when the context is a power hierarchy that has favored the subject.
That’s the only explanation I can think of for PIs who get all shirty and defensive about how they treat trainees when confronted with the systemic problems with the training and career opportunities we are offering the next generation of scientists.
A different take on passion
“Passion” has been a topic of derision in science ever since St K3rn’s epic, gassy whine set the new standard for entitled boomer-d00d condescension to young scientists. (Also, it looks like K3rn and Chuck Vacanti look for the same qualities in their trainees.) I’ve found the word funny in most contexts for years thanks to the other David Mitchell’s video below. It is funny, and perhaps not coincidentally ends with Johns Hopkins, which makes me think that K3rn’s musings on what he thinks “passion” is may have been derived from a dumb university marketing campaign! Hilarious.
But it is nice to see blogger Parklife rehabilitate the word and make it about the qualities of a great mentor rather than a patronizing scold.
Don’t be skeevy
Next, Prof-Like Substance has a really great common sense post about men hitting on women at conferences. For fuck’s sake… just don’t. See the comments for hilarious concern troll “anon” who professes to be such a hapless and helpless thing that it is impossible to ever tell if he is behaving appropriately with a female human.
Life-hack: If you are completely clueless about something—driving, handling radioactive material, cooking rice on a stove top, speaking to 52% of humans—don’t do it. Find somewhere to develop these skills where you will not cause harm, annoyance, and discomfort to others.
“Oooooh, the Lord is good to me
And so I thank the Lord
For giving me the things I need
The sun and the rain and the apple seed
The Lord is good to me”
At summer camp we learned that Johnny Appleseed travelled around, randomly planting seeds throughout pioneer-era America, thus providing migrants with a source of food and a tasty treat in the rough wilderness. No one told us that apple trees don’t grow true from seed, and that the vast majority produce inedible fruit (don’t worry, this isn’t going to be a metaphor about postdocs from certain labs). What they do create, as we all learn later, is an easy raw ingredient for booze. And that’s why there are songs about John Chapman.
In modernity, we have lost virtually all of the apple diversity that once characterized North America. Not just the sour cider apples, but all of the freak discoveries of something unique, weird, delicious when roasted, keeps all winter, good for soup, good for pie, amazing off the tree but doesn’t last—anything that was interesting enough for people to take cuttings and graft it into their own apple garden. Apples selected for industrial-scale production are not chosen for taste alone, but also for how they look in great heaps in supermarkets (uniform size and shape, smooth skin, no brown), how well they survive shipment by truck or ship, ease of picking, yield per hectare. (I learned this from a guy at a farmer’s market and from Michael Pollan on the radio.)
This sucks. Some of the best apples I’ve ever tasted look like malformed potatoes. You can still find some of these local varietals at farmer’s markets in the eastern U.S., and they are maintained in isolated places or agricultural research facilities, but most never made it west, and the vast majority of apples are just not available to you and me.
Like John Chapman and apples, the relationship between NIH and “health” is not exactly straightforward, and not what most people think. The Johnny Appleseed version is that the NIH is working for the taxpayer to improve the health and quality of life of the good people of America. Yes and no. Let’s leave aside that if aggregate “health” were the actual goal, almost every dollar spent by the NIH would be better spent on prevention, education, regulation of industry, and alleviating poverty. So what is the NIH for?
With the apple story from summer camp, we went from “food to sustain the weary and brave pioneer” (I’m picturing Michael Landon) to “cider to get so drunk that you can briefly forget you live in a hole in the ground and will probably freeze/starve this winter.” Here, we go from “improving health” to…what? A somewhat more realistic description of the goal of the NIH is not health per se, but the development of biomedical interventions. That’s important, too, and I think non-controversially beneficial.
But the people who have administered the NIH—and this includes the US Congress, believe it or not—have been much smarter and more strategic than this. At least since the post-war period, the NIH has put enormous emphasis on also funding research that has nothing to do with health at all. They have done this by defining “health” as the institutes’ ultimate, rather than proximate, goal. Most suitably-motivated scientists can think of ways their research may ultimately serve the public health interest. And this is not being sneaky, this is exactly why the NIH’s mission statement and goals are phrased this way. So when you feel tinges of guilt for reaching for health relevance that feels tenuous, relax: it’s a feature, not a bug.
Why do this? In part, it’s a recognition that biomedical advances largely depend on fundamental discoveries, and that fundamental discoveries are nearly impossible to engineer. You can’t go looking for them. If you want more of what you have, you cut and splice from existing apple trees—create clones (I am going to torture the crap out of this metaphor). But if you need truly new things, you have to just plant as many seeds as you can. Only once in a while will you get edible fruit, but many of the scientists are actually happiest with the cider. Win-win.
The other reason, of course, is that the NIH represents the transfer of public funds to local institutions across the country—mostly universities, some companies. This means jobs, development, prestige—stuff congress critters love to take credit for. Until relatively recently, even most Congressional Republicans recognized not only the value of basic research but the value of having federally funded basic research programs at the universities in their districts.
But times are tight. Grant success rates are at historic lows. The purchasing power of the typical NIH grant has declined steadily. Too many PhDs are being produced and there aren’t enough jobs. We don’t even know how many postdocs there are. But we know that a much, much smaller fraction will have the opportunities their mentors did to pursue a research career. All sorts of terrible incentives arise from these conditions.
The instinct in tight times is to prioritize. And my fear is that prioritization means NIH leadership making choices about what kind of proximate basic research goals to fund, despite decades of success doing essentially the opposite: funding the best scientific ideas of almost any kind (here is not the place to go into how the CSR often fails at this, but we can accept that this is at least its stated purpose).
The recent BRAIN Initiative report allays some of the worst fears about centralization of resources and top-down mandates of research priorities, but not all of them. We’ll see how it develops. This is of course life or death for young PIs like myself, who, in the first years of our appointments, are absolutely dependent on the NIH for starting our programs and keeping our jobs. Whether we fit the particular programs or RFAs of BRAINI or not, how will any study section look on the relevance of young neuroscientists who don’t fit the BRAINI mold? Any policy shifts or mandates that narrowly define what the scope of neuroscience research will be could in effect exclude me from funding, and that is the end of my career.
But it’s not just self-interest. It would be a deep mistake for the NIH to try to pick winners in advance, or even narrow the pool. It is still hard for me to see the focus on “neurotechnologies” as anything other than an already massively successful and expanding field staking claim to essentially guaranteed funding at the expense of others. It is a positive feedback loop of resource concentration. The training pipeline and job market dynamics (everyone wants to hire someone fundable!) mean we will endlessly clone and transplant a few lab “types” and create a generation of people working on the same systems with the same technology. And then what? A supermarket with nothing but golden delicious*. As I keep repeating, conformity is the death of creative science, and how the funding system works determines whether conformists thrive or we have an intellectually diverse field.
You can’t pick apple seeds in advance, you have to plant as many as you can. Neuroscience in the Era of Strong Bullshit is already in danger from mono-cropping tendencies, group-think, bandwagons and concentrating resources around particular institutions, technologies, or experimental approaches. I think recording from a million mouse neurons at once sounds kind of cool…go fucking figure it out and publish the results of each step along the way. But I’ll be damned if I’ll agree a priori, “Yes! That must be done! Set aside $200M!”
That’s why this whole thing can feel like an end run around peer review. When you appeal to POTUS to somehow legitimize or put some federal stamp of inevitability and necessity on specific kinds of neuroscience, when you start to make totally unsupported and idiotic equivalencies between neuroscience and space exploration or high energy physics, it starts to sound like a boondoggle. Let’s be scientists, not shysters. Let’s fund neuroscience, but let’s not institutionalize our worst instincts.
*I hate golden delicious.
Methodological crazes are all like this. Think back on your favorite that is no longer. Or is still a thing, but not the thing, and now we know the caveats and why a lot of the early work was bullshit: the astronomical false positive rates that hadn’t yet been discovered, or the caveats and artefacts that must be dealt with but were ignored in our youthful zeal to publish in Nature. The early days of almost any method are rife with cowboys: sloppy experiments, over-interpreted. Cowboy is too romantic. Low-hanging-fruit harvesters.
I won’t pick on particular methods, but the pattern is pretty clear, historically. New methods are often so uncritically acclaimed that it causes major distortions in science. Whether or not they end up causing seismic shifts in our knowledge or practice of science (most don’t), they make or break careers and incontrovertibly cause seismic shifts in funding and publishing.
And guess what? There were people back then, saying exactly what I (and many others) have said about optogenetics. And they were ignored, and you could argue: what harm was done? We learned how to use (or not use) and interpret these methods appropriately. So what if the naysayers are almost always right, technically. We still make progress. And by naysayers, I mean people who simply say “let’s not put all our eggs in here” or “this is promising, but let’s be sure we continue to fund a variety of approaches” or “what is this even for and why are we spending tens of millions on it.” Because no scientist is actually opposed to new methods.
I say that the problem is that in the early days of new methods, we massively reward over-promising, sloppy thinking, and over-interpreted experiments. Massively. And this is terrible. Re-allocating funds to chase the new, hot thing basically means creating an incentive structure that rewards Type I errors (and/or being a blowhard) at the expense of quality and diversity of ideas and methods.
And this is where I haz the wow sad realization about (maybe) how this keeps happening: for your career, Type I errors aren’t errors at all, and Type II errors (or even reasonable caution) can kill you.
In going all brainless and ga-ga and “let’s fund the fuck out of it” over any and all uses of a new technology that looks like it’s all-that-and-a-side-of-chips, we extend massive credit to the low-hanging-fruit harvesters, and by the time the sobering bill arrives and we’ve separated the bit of wheat from the mountains of chaff, all their postdocs have tenure doing the same thing.
And look at neuroscience today. Look at the “You Are Your Connectome”* fan club of consciousness uploaders and deepity commentariat. Look at how optogenetics is going to either cure everything or enslave us to the government.
Look—if you dare—at the Human Brain Project, which is kind of the F35 jet + tulip mania wrapped up in some kind of Willy Wonka fever nightmare.
Neuroscience is in an era of Strong Bullshit. And I get how that’s an opportunity for the field despite it being annoying. But it is my sincere hope that things like the BRAIN Initiative can at the very least find a responsible way to use the hype without being defined by the hype.
That was a tiring day of blogging.
Here is a plea to neuroscience editors to find a 100 fucking pound grain of salt to take with reviewers who prescribe optogenetic experiments in knee-jerk fashion. Now that I have written this, my internal annoyance is externalized to the internet where it can no longer bother me. Right?
1. Optogenetics is great when it is appropriate to the experimental question and in contexts where it has been shown (convincingly, with science) to work as advertised. I like to use it; however, the conditions above are rarely met for my work.
2. The sum total of my experience with “optogenetic experiments proposed by reviewers” (n=7 reviewers) is that reviewers who demand new optogenetic experiments (i.e. not extensions/controls with regard to existing optogenetic experiments in the paper) are 100% idiots.
3. This is not because of the particular problems with optogenetics per se, though those play some part. It is almost entirely because they are terrible experiments by any standard. At best they are irrelevant, but usually they hit the reviewer 3 sweet spot of being exceedingly difficult and completely non-informative.
4. This pattern says something important about scientists who think the latest craze is the answer to everything. And that thing is “ignore them.”
5. Given the totally unearned “gee whiz” bonus that a paper gets through the use of optogenetics, don’t you think the authors probably considered ways in which it could be used? And maybe they have good reasons for not using it?
Just stop it. Imposed conformity of any kind—theoretical, methodological, experimental system—is the death of creative science.