The well-measured bean

Another discussion about author rank over at the Drugmonkey Web Log. Commenter GMP raised the point that in collaborations, comp/theory is relegated to the fly-over section of the author list, while the bench scientists are the coastal elites at the beginning and end. Meh. If that is generally true of comp/theory, then it is known in that field and those in the field will judge your work accordingly. That’s why papers and grants and tenure files all involve evaluation by people also in your field.

And this is what is dumb about hand-wringing over this shit and all kinds of altmetric wackaloonery. All disciplines / subdisciplines / subsubdisciplines have their cultural norms, for better or worse. Part of your job as a scientist is to know the norms of your tribe. This desire to standardize, measure, quantify anything and everything across and within disciplines is misbegotten nonsense driven by people who are bean counters at heart, but just want better and better ways of counting, weighing, comparing, and describing beans. If you can’t figure out which beans are which in your discipline just by tasting them, you’re in trouble, and math isn’t going to help. The practice of science is composed of culture – well, many cultures. It’s like trying to come up with statistics to compare the dramatic qualities of various community theater groups. Yeah, you probably could, but who cares? The theater dorks know what’s what.

Obviously, I see the utility of looking at this if you are specifically and professionally interested in publishing practices in science, if, say, you work in library science or something. If you are a working scientist, my advice is to ignore it all with extreme prejudice. If you start paying attention to shitty stats, then the stats will become your goal, and you will become an empty shell of a person.

Relatedly: The fallacy of the age of big data is that all data are interesting.


7 Comments on “The well-measured bean”

  1. rxnm says:

    I should add a large caveat that one theoretical advantage of having quantifiable metrics is that historically (and still), qualitative judgment is used as a shield for bias (personal, gender, race, professional). The cultural norms of science have the same ugly history as the rest of our cultural norms. However, I do think these are separable issues, and it is possible to identify and redress bias while retaining the kind of peer review and scientific judgement that doesn’t reduce someone’s scholarly activity to hit points and ability scores.

  2. There is a practical problem, though: how do you get rid of the numbers? Once they’re around, you can’t pretend they don’t exist, and people will look at them – no matter how shitty they are (e.g., the IF). I’m all for dropping metrics entirely, but the only feasible solution may be to foster an inflation of metrics, all of them scientifically vetted and sound. That not only devalues each individual metric but also, in theory, allows a constantly evolving set of metrics to be used, which makes it impossible to predict which metric will count and leaves ‘just doing science’ as the most straightforward ESS (evolutionarily stable strategy).

  3. […] “Big Data” is one of those ideas that scares my card-carrying ACLU side and also offends my cranky side because it’s a trendy, buzzwordy thing.  So I appreciate this quote: […]

  4. @ Björn Brembs – people might just simmer your bevy of metrics down to a single number by PCA or some other dimensionality-reduction method. I would, at least. A minimal sketch of that kind of reduction is below, assuming scikit-learn is available; the metric names and values are made up purely for illustration.
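
    ```python
    # Illustrative sketch only: collapse several invented per-scientist
    # metrics into a single "score" via the first principal component.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Made-up metrics for five scientists:
    # [h-index, citations/year, altmetric score, number of papers]
    metrics = np.array([
        [12,  80, 15, 30],
        [25, 300, 40, 60],
        [ 8,  40,  5, 20],
        [30, 500, 90, 75],
        [18, 150, 25, 45],
    ], dtype=float)

    # Standardize so no metric dominates by scale alone,
    # then project onto the first principal component.
    z = StandardScaler().fit_transform(metrics)
    score = PCA(n_components=1).fit_transform(z).ravel()

    print(score)  # one number per scientist -- exactly the reduction being criticized
    ```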

  5. When you’re doing interdisciplinary work (hardcore interdisciplinary, like moving from electrical engineering into molecular biology, not immunology into biochemistry), this “culture” becomes something of a problem. I often find myself explaining to biofolks that conference proceedings in electrical engineering are peer-reviewed works, roughly equivalent to publishing in a low-IF journal, not brain farts sent off in hopes of getting an intellectual spa treatment once a year.

    Don’t know how big a deal this is – it’s easy enough to explain on a resumé with a brief note, fwiw. But people attempting integrative efforts may run into problems because of it.

  6. rxnm says:

    Yes, everyone thinks they love interdisciplinary whatnot, but almost every instinct I’ve seen is boundary policing.

    That said, I think almost all computational biology approaches are pretty safely their own disciplines at this point. Conferences, journals, etc.

