May and June are the epicentre of what I call “conference season.” For me that means looking longingly at my garden as I leave on weekends that should be dedicated to barbecues and street hockey with the kid. However, it also means lots of intensive time for rapid-fire thinking. As a research-embedded health librarian, I often go to non-LIS-type conferences that are aimed at health researchers. While I really like my librarian colleagues, this spring I’ve been thinking about the juxtaposition of how we present research in LIS fora versus health fora.
To my fellow Health Librarianship Researchers: We need to toughen up.
(Frankly, this should probably be addressed to all librarianship researchers, not just those in health, but health is my current niche and where my illustrative examples come from today.)
1) Stand up for your methods!
When I am listening to a health librarian give a research presentation, it is all too common for me to end up cringing at either weak research methodology or weak defense of methods (or both). We need to deal with this.
I’ll give a specific example from a recent conference, because the speaker in this example is a very well-respected tenured LIS faculty member at an ALA-accredited institution, with a long and established track record of important research and advocacy for libraries, and thus, I think, fair game for public critique of research presentations. Dr. Marshall gave a keynote talk at CHLA/ABSC on a study that will be very important to health libraries and librarians – especially in clinical settings such as hospitals. Yes, this is the much-needed and highly anticipated update to the Rochester Study! Very exciting stuff.
Now, obviously this talk was a conference presentation, and in that kind of setting there’s never enough time to fully describe methods. That’s part of why we have time for questions afterward – so audience members can ask about areas of particular interest that were not explained in the talk.
In the question period after the talk, when people asked about methods, however, I found that even this prominent LIS faculty member was a bit wishy-washy. For example, rather than defending the reliability of her research, or explaining why reliability was perhaps not the appropriate question to ask about the qualitative portion of her investigation, the speaker demurred, basically saying that well, nothing’s perfect and we all do our best.
WHAT? I mean, yes nothing’s perfect and I’m sure we all do try to do our best, but that is how you respond to someone questioning your methods? When I mentally place this faculty member at my workplace, presenting to the faculty in my home department, she gets torn apart. Sitting there in the audience, I had a vision of the dreaded librarian “niceness” working to discredit our field in the face of other disciplines.
To give this speaker the benefit of the doubt, she might present and defend completely differently in front of another audience – say, an audience of economists. Also, she could have been having an off-day, or any number of things. BUT, this isn’t the only example I’ve seen of this type of thing, and she’s not doing librarians any favours by being soft in front of members of our own discipline. As a leader in our field, she should be modelling rigorous research and the ability to explain and defend it for us.
2) Policy-based evidence: We need to recognize and avoid it
I saw a few examples of this during the current conference season, but I feel bad pointing specific fingers because I don’t want to be “mean” or discouraging to novice researchers. (Yes, I am aware that this is the dreaded librarian “niceness” manifesting in me, and I don’t have the distance to know if it’s good or bad.) However, I don’t think my naming a particular presentation from a particular conference is that useful, as you can probably conjure up your own examples of library policy-based evidence without much effort.
Here’s what I see: Librarians do a lot of surveys. Especially master’s-level academic librarians, who are supposed to do some research and base policy decisions on some sort of evidence. User surveys are pretty common, and this is reinforced by our love of LibQUAL+. We also have this idea that survey research is “simple” and thus a master’s-level professional can do it just fine with no methodological problems. This I would dispute. Surveys *can* be simple, just as many other research methods can be simple. But surveys are also really easy to do badly. And we do a fair number of bad, or at least biased, surveys.
At pretty much every library conference I attend, I see presentations of surveys with conclusions that do not follow from the actual results, and/or surveys that were clearly (albeit often not purposely) designed to justify a particular policy move. This is certainly not unique to libraries, or health libraries. Lots of fields generate policy-based evidence. The federal government does it at times. (*ahem* Long form census -> National Household Survey)
But when I see librarians doing things like:
- presenting surveys with extremely low response rates,
- offering no demographic information to support the assumption that this small sample is representative of the whole population, and then
- basing conclusions on the responses of the majority of a tiny minority of the whole, with no discussion of response bias,
I am frankly appalled. I know ML/IS research methods classes tend to be generic and weak, but the fact that we in the profession continue to reward shoddy research methods with conference presentations and other support is horrible, and it does a huge disservice to our profession. Not only are we probably making poor decisions based on lousy research, but we are completely undermining our own efforts to position librarians as professionals with research expertise.
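To see why the combination of a low response rate and unexamined response bias is so dangerous, here is a toy simulation (all numbers invented for illustration, not drawn from any real survey): a population that is mostly satisfied can still produce a respondent pool that is mostly unsatisfied, simply because unhappy users are more motivated to reply.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 2,000 library users:
# 70% satisfied with current service, 30% unsatisfied.
population = ["satisfied"] * 1400 + ["unsatisfied"] * 600

# Assume unsatisfied users are far more motivated to respond:
# 2% of satisfied users reply, versus 20% of unsatisfied users.
responses = [
    p for p in population
    if random.random() < (0.02 if p == "satisfied" else 0.20)
]

rate = len(responses) / len(population)
share_unsatisfied = responses.count("unsatisfied") / len(responses)

print(f"Response rate: {rate:.1%}")
print(f"Unsatisfied among respondents: {share_unsatisfied:.1%}")
# The respondent pool skews heavily unsatisfied (roughly 80%),
# even though 70% of the actual population is satisfied.
```

Reading the raw results as “most of our users are unhappy” would invert the true picture – which is exactly the kind of conclusion-that-does-not-follow I keep seeing presented.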
Yet, even I am reluctant to rake someone, especially a first-time presenter, over the coals in the open question period after a low-quality presentation. It’s “mean.” I feel peer pressure not to ask the same questions of my librarian peers that I would ask of my researcher peers back at home/work. I reassure myself that maybe I will talk to an individual privately afterward, if I can catch her. But honestly, this often doesn’t happen.
What do we do? How do we toughen up? How do we get others in our profession to toughen up?
I think one thing that would help would be for our visible and prominent leaders in the field to engage personally in more public methodological debate regarding LIS research. If we’re going to do research and position ourselves as research experts (or even just research-competent), we need to sharpen our chops.
Follow-up post: here