Category Archives: research

Fact-finding: Not an ethics-free zone

Hey folks – I’m popping back in here to guest-post today. Still doing the PhD student thing and still won’t be back around regularly. But here’s something I thought we in libraryland should be thinking about. -Greyson

Canadian author/storyteller Ivan Coyote recently published an article about the importance of respecting people’s preferred names and pronouns. The article opens with the following anecdote:

A couple of weeks ago I got an email from a young woman, a college student, who claimed that her professor had assigned her entire class a special little assignment, for extra credits, for students who could track down my legal name and bring it to class. This young woman had tried and tried, she said, to find it online, but couldn’t, and she really wanted those extra marks. Would I be so kind as to just tell her?

I took a deep breath. I was flabbergasted, skin crawling with chill fingers at how totally creepy this felt, an entire college English or writing or queer studies or whatever class assigned the task of violating my privacy for extra credit at school.

Go read the article, really. It’s good. But not what this post is about.

This post is about another article, “Teaching Students to be Rude,” that was written in reaction to Coyote’s column. In this response article, journalist Bert Archer does two noteworthy things that we need to discuss.

  1. Asserts that fact-checking (or, in LIS-speak, information seeking) is a nearly “ethics-free zone” and certainly impolite and invasive
  2. Argues that librarians are very useful because we can and will find anything

You may be wondering what the connection is between librarians and some alleged college student trying to find out Ivan Coyote’s birth name. The connection is Bert Archer’s mind. Although Coyote doesn’t say that the student was a library student (and, in fact, implies the contrary, as library science is a grad degree in North America), Archer assumes it.

Why would Archer assume that it was a library student doing this invasive information-seeking? Because, in Archer’s words,

“I think this sort of assignment is exactly what I expect from librarians.”

Think about that for a minute. Let it sink in. Teaching students to dig up people’s private personal information is “exactly what I expect from librarians.”

Scary.  

We may need some librarian PR here. But not the usual kind. Archer got the “not everything is on the Internet” memo. His experience as a journalist has taught him to value the information retrieval expertise of librarians. He knows that, even in the era of Google and Wikipedia, “Unsearchables remain.” He writes,

“Reporters at the Toronto Star, for instance, know how useful librarians can be. They can ask their in-house librarians anything, and get an answer back quick.”

I am flattered by Archer’s (only nearly true) assertion that librarians can find anything. However, librarians also have ethics and are both students and creators of information policy. Library associations have taken more than one major professional stand in favour of protecting personal privacy.

Skill without ethics is not my librarianship.

It’s not the American Library Association’s librarianship, either. Yes, “Access” is the first of the ALA’s listed Core Values of Librarianship, but it’s immediately followed by “Confidentiality/Privacy.” Also among the core values on the list are diversity, the public good and social responsibility – all items that might give pause to an information professional digging up the birth name of a gender variant individual just to feed the public’s curiosity. The Code of Professional Ethics for Librarians is also offered for guidance when values – e.g., the free flow of information and patron privacy – may conflict with each other.

Archer implies that, were he writing a biographical dictionary entry on Coyote, he could ask a librarian to find out Coyote’s birth name. Honestly, many librarians (especially given a decent research budget) probably could obtain nearly anyone’s birth name, medical histories, library borrowing history, and various other bits of private information. However, would we provide that information to be published? I’d like to think that most of us would not. I would sincerely hope that if Archer asked his librarian to find Ivan’s birth name to publish, the librarian would contact Ivan and subsequently let Archer know that it was inappropriate to include such information in the entry.

Digging up and/or publishing someone’s private personal information isn’t, as Archer states, “Rude.” It’s a violation of privacy. Rude is interrupting someone, or not saying “excuse me” after you belch. Librarians are not known for being rude. They’re particularly not known for violating people’s privacy. And I think it’s a matter of concern that Bert Archer, and now perhaps many people who read his column, think they may no longer be able to trust their librarian with that potentially-embarrassing health or legal question they have.

Let me set the record straight here. Dear world: If you disclose to your librarian, in her/his professional capacity, something private about yourself, we are duty-bound to keep your confidence. Even if you are a public figure, famous author or movie star.

Not because it would be “rude” not to. Because we have professional ethics.

I understand that I will likely differ from Archer on many questions of ethics, as he also thinks it’s just fine and part of the job for a journalist (or, presumably, a librarian) to “ask a heaving mother for a picture of her just raped and murdered child.”

I hope I don’t differ from the majority of librarians on such questions, though.

 -Greyson

Disclosure: Ivan Coyote is an acquaintance of mine. Don’t know if having met in person, or having overlapping social circles, makes a difference here, but there it is in case it does. 



Filed under ethics, gender, research, The Profession

Libraries and Statistics – What are the Issues?

I originally wrote this in 2008, but never really knew what to do with it. I am not against the use of quantitative statistics – I actually quite enjoy doing multivariate stats, although I haven’t touched them much since entering the library profession.

Using statistics makes sense for libraries.  Statistics provide our funders, boards, and senior administrators with a snapshot of the inputs and outputs occurring in libraries.  I hope to write a future posting which talks about the importance of using the narrative to ensure we are also capturing the impacts and outcomes of library services.

I hope this makes sense, but here goes.

Librarians, like most other professionals, have traditionally collected numbers, especially descriptive statistics, as a primary means of measuring, evaluating, and justifying or changing current or proposed operations.  While quantitative statistics have served a valuable function within the traditional library environment, there are many drawbacks to using numbers to represent the attitudes and behaviours of patrons.

Using Quantitative Measures

While the collection of numbers is usually viewed as an unbiased methodology for gathering information, numeric indicators have a number of drawbacks.  Quantitative methodology is a deductive approach, in which the researcher:

  • acts as the expert who determines the questions used to collect information or data, and
  • usually generates those questions by referring to other library studies (conducting a literature review), or by relying on her or his own expertise, to determine the concepts to measure.

When this occurs, we as library staff define which concepts are important to measure. Staff also decide:

  • what questions to ask,
  • how to ask them, and
  • how to measure them.

This process should, but does not always, involve clearly pre-determining the concepts to measure.  It can provide a one-time snapshot, or a longer-term picture, of a social phenomenon.

Numbers permit us to test hypotheses, predictions, and causal connections between the measured concepts.  Under specific circumstances, the use of statistical procedures allows findings from a sample (e.g., a small number of older library users) to be inferred to a population (e.g., all older users in the library system).

Issue I.

Ultimately, the numeric basis of quantitative research is one of its major weaknesses.  Although concepts are defined and asked about by library staff so that they can be measured, the terminology used is not always interpreted consistently.

(Example #1 – After a program, library staff may ask participants what they thought of it using a five-point scale: 1=Very bad / 2=Bad / 3=Neutral / 4=Good / 5=Very good.  One person may have had a horrible time but only rate the experience as “bad”, while another might have been mildly annoyed by the person sitting beside them and rate their experience as “very bad”.  The response therefore depends on the definition the individual places on the concept – not on how the survey constructor defined it.)

(Example #2 – The number of library users in one library system may be defined as the number of people who check out books, while another system may measure the same concept as the number of people who enter its buildings.  Therefore, when numeric data are collected and compared, between branches and across systems, it is very important that library staff constantly ensure that apples are being compared with apples – not oranges.)
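To make Example #2 concrete, here is a tiny Python sketch (with invented patron IDs and a hypothetical one-week activity log) showing how two reasonable definitions of “library user” yield very different counts from the same underlying activity:

```python
# Hypothetical one-week activity log: (patron_id, action)
log = [
    ("p1", "visit"), ("p1", "checkout"),
    ("p2", "visit"),
    ("p3", "visit"), ("p3", "checkout"),
    ("p4", "visit"),
    ("p5", "visit"),
]

# System A defines a "user" as someone who checks out an item.
users_by_checkout = {pid for pid, action in log if action == "checkout"}

# System B defines a "user" as anyone who enters the building.
users_by_visit = {pid for pid, action in log if action == "visit"}

print(len(users_by_checkout))  # 2
print(len(users_by_visit))     # 5
```

Same patrons, same week – but System A reports 2 users and System B reports 5, so comparing the two systems’ “user counts” directly would be comparing apples with oranges.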

Issue II.

By pre-determining the concepts to measure and compare, the librarian is viewed as the expert who knows, prior to data collection, which concepts or variables are important.  This process is very inflexible, and does not provide members of the community the opportunity to provide information about how they see the world, outside the prescribed measurement tool created by the librarian.

Issue III.

By far the most dangerous consequence of the improper use of quantitative statistics occurs when the people collecting and interpreting data conclude that the findings are causal or predictive.  With quantitative statistics, the level of measurement (nominal, ordinal, interval-ratio) determines which research questions can be asked, which can be answered, and what types of analysis can be performed.  For example, people may talk about the “correlation” between concepts, although correlation does not show causality – it is a measure of association – and it is only properly computed between concepts measured at the interval-ratio level, not the nominal-level data (frequencies or counts) that libraries primarily collect.
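A quick illustrative sketch of the correlation-is-not-causation point (all numbers invented): two library measures that are both driven by a third factor – here, neighbourhood population – will correlate strongly even though neither causes the other:

```python
import random

random.seed(1)

# Hypothetical data for 200 branches: neighbourhood population (a
# confounder) drives both programme attendance and checkout counts;
# neither measure causes the other.
n = 200
population = [random.uniform(1_000, 50_000) for _ in range(n)]
attendance = [0.01 * p + random.gauss(0, 20) for p in population]
checkouts = [0.05 * p + random.gauss(0, 100) for p in population]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(attendance, checkouts)
print(round(r, 2))  # strong positive correlation, yet no causal link
```

Concluding from this r that attendance “drives” checkouts (or vice versa) would be exactly the interpretive error described above.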

In addition, findings from small samples should only be generalized to larger populations when library staff can show that the sample drawn is representative of the population (e.g., remember the older-user example discussed above: survey results from a sample of older users can only be generalized to the population of older users if the sample reflects that population).  If sample characteristics do not reflect the population, there is a danger of introducing bias into the results and interpretations – which drive library policy (e.g., only older library users who are highly mobile filled out the survey, because it was conducted in winter and those with mobility issues could not get to the branch through the snow).  This is a real threat to library systems, since a small, innocent survey that is improperly interpreted may be relied upon to direct future library services and policies.
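Here is a small simulation of the winter-survey scenario (group sizes and satisfaction scores are invented for illustration), showing how reaching only the highly mobile group biases the estimate for all older users:

```python
import random

random.seed(42)

# Hypothetical population of 1,000 older library users. Suppose 30%
# have mobility issues and rate current services ~2/5 on average,
# while highly mobile users rate them ~4/5.
population = (
    [("mobile", random.gauss(4.0, 0.5)) for _ in range(700)]
    + [("limited", random.gauss(2.0, 0.5)) for _ in range(300)]
)

true_mean = sum(score for _, score in population) / len(population)

# A winter in-branch survey only reaches the mobile group.
respondents = [score for group, score in population if group == "mobile"]
survey_mean = sum(respondents) / len(respondents)

print(round(true_mean, 2))    # ≈ 3.4
print(round(survey_mean, 2))  # ≈ 4.0 — biased upward
```

A policy set using the survey mean would overestimate satisfaction for the very group – users with mobility issues – whose needs the service most needs to address.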

~ Ken


Filed under public libraries, research, Uncategorized

Conference Season Continued: OA advocacy with my researcher hat on

I don’t try to hide it – I believe that we’re in a transitional period to fully open access (OA)* scholarly journal publishing, at least in the sciences. And while I could see this playing out in different ways that have varying impact on equity, concentration of wealth, quality of scientific publishing, etc., by and large I do believe this transition is a step forward for equity and knowledge, through increasing access to information (one of the core values of librarianship).

I’ve been involved with various OA interest group/committee/task forces, as well as policy development and empirical research projects related to advancing the state of OA. I’ve given educational talks, webinars, oral & poster conference presentations, and published articles on various aspects of OA.

Yet, I have come to recognize that some very important “advocacy” work on the OA file may not be writing letters to politicians or giving formal talks, but the informal talks I have with editors in venues such as non-LIS conferences. In other words, sometimes I think that my potential for advancing OA as a member of the research-author community is just as great as that in my role as a librarian, OA advocate and researcher.

I was reminded of this during a recent conference session, when I had the opportunity to talk with a couple of editor-types (journal editors or journal editorial/advisory board members) as they stopped to encourage me to consider submitting work to their journal. Research conferences are natural opportunities for editor-types to publicize their journals and recruit authors/articles of interest. Poster sessions are a natural ground for not-yet-published research that may soon be looking for a home in manuscript form. Therefore, as a poster presenter at a large research conference, one can expect to talk with editors.

In OA advocacy, I think we tend to focus a lot on the author-publisher dynamic in terms of negotiating copyright and advocating for journal policy change. This makes sense on the individual-article level, and to some extent with the advocating for policy-change level. But editors may be quite important for effecting journal level change to OA, and communicating to publishers through another route. Journal editors are often in a crossover position, both researchers in their own right and in a close working relationship with the publishing company managing their journal.** In scholarly communities, editors are often one of “us” – researchers – rather than one of “them” – publishing industry folk. As a researcher’s career advances, it’s often expected that s/he will take on academic community service such as journal editing. And as researchers, they’re still going to conferences in their given field, so journal editor duties, such as scouting out potential articles, dovetail well with their own scholarly interests.

So, when I’m there by my poster and an editor hands me her/his card*** and suggests I consider a particular journal for publishing, I ask if it’s open access. If they say no (generally meaning not Gold OA), I ask if authors are allowed to archive a copy in a repository such as PubMed Central. Most (but not all) say Yes to that now. I let them know about the many funder mandates under which my research group is obligated, and also that it’s important to me ethically and career-wise as an early-career researcher to make my work accessible to the widest audience possible. If I’ve already published the research I’m presenting, I make a point of clearly mentioning that it’s available open access online, so anyone can read it without a subscription. And then we talk about their journal’s new policy matters column, or the scope of their journal, or a question they have about my poster, or whatever else. I don’t sit on the OA point forever, but I do ask it, and generally first thing, when an editor suggests their journal. I think this makes a difference.

Does this make a difference? It’s possible that I’m deluding myself and seeing impact where I want to. I can’t quantify the difference this type of questioning makes, but I have had one or two instances where editors have come back to me in another year, or emailed me much later, to let me know that they have moved to OA, or have checked and will comply with Canadian funder policies (which generally have shorter embargo periods than US/UK funder policies).  So I think it helps. And I sincerely encourage other research folk who are also concerned with OA to adopt this strategy when talking with editors.

I’ve talked with publishers at LIS conferences, and it’s not the same thing at all. By now, they expect some OA flak from us pesky librarians. These days, staff from the major publishers are either prepped with the official OA line or else have to defer to the higher powers in decisions about things like OA – their job at the conference  is mainly to “build relationships” with potential customers and ultimately to sell product/subscriptions.

Now, different people have access to different advocacy and policy-making opportunities, and some people I know are senior enough in their fields to, say, be at the table when a major research funder is developing their research policies. This type of access is a major opportunity – and having funder policies in place gives me a much stronger position from which to ask journals to go OA. That said, it’s not my opportunity at this point in my life. And the people at those tables probably don’t need my tips on OA advocacy anyway. But to all those of us who are more junior or just not in those circles, we can influence policy in our own way. Some of it is “loud” and public – professional or scholarly association letters to research funders in favour of OA policies, for example. But I’ve come to think that a significant portion of that is kind of quiet, too. So next time you’re presenting research at a conference, I encourage you to mention OA as a priority (not THE priority – we all know there are many) in your publishing decisions. It matters to editors. They want your submissions.

-Greyson

*Open Access here fairly broadly applied to mean scholarly publications that are free to read online.

**While I recognize that several scholarly journals are published independently, by a scholarly society, or through a “publisher” such as a library that hosts OJS, in my health publishing experience the dominant model is increasingly a journal that is independently edited but managed by a larger publishing company, although the scale of these publishers varies widely.

***Of course, not all editors self-identify as such when they’re cruising conference posters. I don’t emphasize it as much to everyone, but I do try to mention it when possible, in order to influence fellow researchers and undercover editors – for example I might say, “The first manuscript from this project was published in Journal XYZ and it’s freely available online so you can Google it. The second is currently under review at Journal ABC, an open access journal, so watch for it in the future.”


Filed under OA, publishing, research

Toughening ourselves up as librarian-researchers: Follow up Post #1

I wasn’t aware that I had posted my bit about disappointment with LIS conference research presentations smack dab between the EBLIP6 conference and the launch of the UK-based Developing Research Excellence and Methods (DREaM) project. Serendipity at its finest! Thanks for the attention, and for helping me feel less like an isolated downer, folks.

A few interesting things that have come to my attention via links, twitter and the like

In the UK, there’s this Library and Information Science Research Coalition that’s been around for a couple of years now, although I hadn’t heard of it over here in Canada. It was started by the British Library, CILIP, JISC, and a few other partner organizations. The member orgs get together to influence the LIS research agenda. These folks are behind the new DREaM project I referred to above, which is funded by the UK Arts and Humanities Research Council (AHRC) and looks cool:

A key goal of the project is to build capacity and capability in the development and implementation of innovative methods and techniques in undertaking LIS research.

Someone also pointed to an article in the journal Library & Information Science Research, which I was vaguely aware of as a journal of LIS research but which wasn’t really on my radar as a venue publishing stuff about LIS research. The article was a commentary by Ray Lyons on sloppy survey research (which I don’t see openly archived anywhere yet, but hopefully Lyons will do something about that soon), and it included the following gem of a statement:

…we in the library and information profession sometimes prefer convenience and expedience over accuracy and thoroughness. Like the most impatient of information seekers, we ignore the fact that inadequate information gathering techniques will lead us quite expediently to the wrong answers.

So true! The same rushed sloppiness we bemoan in information seekers, we too often embody. I mean, I know I do – one of the things I love about being a librarian is that I can beat any of my coworkers in an information duel. I am Quick Draw McInformationist. My “google-fu” is strong and my prowess with controlled vocabularies is stronger. But that’s not the way I should conduct empirical research.

To be continued…Next post will have some thoughts on how we can improve things here/now


Filed under academic libraries, LIS education, research, The Profession

Update on withdrawn CIHR trials policy

In an only somewhat-overdue update (thanks to conference season interrupting my regular blogging activities – I do write on the road, but need to get sleep and give a post a read-over before I can push “publish”), the Canadian Institutes of Health Research (kind of the Canadian NIH, for US American readers) has put out a new message regarding the missing CIHR trials policy that we’ve been following since late March.

To backtrack a bit, while I was on the road and having fun with research and colleagues over the past month, there was more coverage of the CIHR trials policy disappearance, including Michael Geist’s blog and the National Post. Additionally, “rapid response” letters from around the world continued to roll in to the BMJ related to their article, some with great titles such as, “Canadians step back from the well of transparency while the World is thirsting for it” and “CIHR decides it must compromise my interests as a patient.”

Then, right around June 15 or so (I saw it on the 17th), CIHR President Dr. Alain Beaudet issued a “Message from the President – Policy on ethical conduct of research involving humans.” Go read it: http://www.cihr-irsc.gc.ca/e/43756.html. If you’re anything like me, you may need to read it a few times over, because it’s not the clearest statement ever made.

Here’s what I think it’s saying:

  1. That the March removal of the trials policy (“Registration and Results Disclosure of Controlled and Uncontrolled Trials”) was about “harmonization” and deference to the TCPS-2
  2. The TCPS-2 has some requirements for trial registration and public disclosure
  3. While the trials disclosure requirements in the TCPS-2 and the former CIHR trials registration policy were in the same spirit, the CIHR policy had more specific directions about what needed to be done
  4. CIHR will (at some unspecified point) be integrating certain of these more stringent operational requirements as part of the terms and conditions of its “relevant programs.” These include: a) publication of the systematic review used to justify the trial, b) registration and compliance with WHO requirements for minimum data disclosure, and c) submission of final reports in CONSORT format.
  5. CIHR will propose 4 revisions to the TCPS-2 for “prompt consideration” (not clear on how soon this can/will happen): a) applying to all trials, not just clinical trials, b) requirement to update trial registration when the trial protocol changes, c) requiring that serious adverse events be reported in post-trial publications, and d) and a requirement to deposit aggregate data in an unbiased, publicly accessible database.
  6. In the interim, CIHR will specify that researchers have to “comply with all the requirements mentioned above” (not sure whether this means 4a-c or is also inclusive of 5a-d).

So, what does this mean? Are we all good now?

Well, we’re better off than we were before the press coverage, I think. We’re not as well off as we would have been, had the trials policy never been pulled.

Remaining questions:

  1. Really??? I’m still kind of skeptical that 3 months after the trials policy and the TCPS-2 came out, both of which had been in development for a looong time, someone suddenly just went, “Oh, gosh, you know what? It’s not okay to have both of these policy statements!” Why am I skeptical? Well, because it just doesn’t make sense. CIHR had a trials policy that wasn’t 100% the same as the TCPS-1. Tri-council funders have all sorts of different policies that are more stringent than the TCPS, and it’s not a problem (e.g., the beloved CIHR Access to Research Outputs policy). It’s just not adding up, and at this point the message seems to be that it doesn’t matter if it’s not adding up, CIHR is sticking to their story.
  2. When will the CIHR be implementing these new requirements for relevant grant programs, and how will this implementation be different from the trial policy rules?
  3. What’s the process for revisions to the TCPS, and how long does it take?
  4. When are we going to see the requirement to make individual-level/micro/”raw” data publicly available? The item listed above in 5d, which currently has no teeth, only requires aggregate data deposit. What does this mean? How aggregate? Does this have to include all adverse events? We need to be able to reanalyse this data to look for harms to specific groups. This is one of the most important parts of the scrapped trials policy, and there is no mention of it in the new statement from CIHR.

I think the international attention and public pressure on CIHR over the withdrawal of the new trials policy likely contributed to these developments, which seem like a step back in the right direction. However, without teeth in the current requirements, and a return of the publicly accessible micro-level data archiving requirement, it seems like 3 steps forward, 2 steps back at this point.

-Greyson

Previous posts on this topic, in case you haven’t been following along:


Filed under democracy, ethics, funding, government, Health, OA, research

Conference Season 2011: Librarianship Researchers, we need to toughen up

May/June are the epicentre of what I call “conference season.” For me that means looking longingly at my garden as I leave on weekends that should be dedicated to barbecues and street hockey with the kid. However, it also means lots of intensive time for rapid-fire thinking. As a research-embedded health librarian, I often go to non-LIS-type conferences that are aimed at health researchers. While I really like my librarian colleagues, this spring I’ve been thinking about the juxtaposition of how we present research in LIS fora versus health fora.

To my fellow Health Librarianship Researchers: We need to toughen up.

(Frankly, this should probably be addressed to all librarianship researchers, not just those in health, but health is my current niche and where my illustrative examples come from today)

1) Stand up for your methods!

When I am listening to a health librarian give a research presentation, it is all too common for me to end up cringing at what is either weak research methodology or weak defense of methods (or both). We need to deal with this.

I’ll give a specific example from a recent conference, because the speaker in this example is a very well-respected tenured LIS faculty member at an ALA-accredited institution, who has a long and established track record of important research and advocacy for libraries, and is thus, I think, fair game for public critique of research presentations. Dr. Marshall gave a keynote talk at CHLA/ABSC on a study that will be very important to health libraries and librarians – especially in clinical settings such as hospitals. Yes, this is the much-needed and highly anticipated update to the Rochester Study! Very exciting stuff.

Now, obviously this talk was a conference presentation, and in that kind of setting there’s never enough time to fully describe methods. That’s part of why we have time for questions afterward – so audience members can ask about areas of particular interest that were not explained in the talk.

In the question period after the talk, when people asked about methods, however, I found that even this prominent LIS faculty member was a bit wishy-washy. For example, rather than defending the reliability of her research, or explaining why reliability was perhaps not the appropriate question to ask about the qualitative portion of her investigation, the speaker demurred, basically saying that well, nothing’s perfect and we all do our best.

WHAT? I mean, yes nothing’s perfect and I’m sure we all do try to do our best, but that is how you respond to someone questioning your methods? When I mentally place this faculty member at my workplace, presenting to the faculty in my home department, she gets torn apart. Sitting there in the audience, I had a vision of the dreaded librarian “niceness” working to discredit our field in the face of other disciplines.

To give this speaker the benefit of the doubt, she might present and defend completely differently in front of another audience – say, an audience of economists. Also, she could have been having an off-day, or any number of things. BUT, this isn’t the only example I’ve seen of this type of thing, and she’s not doing librarians any favours by being soft in front of members of our own discipline. As a leader in our field, she should be modelling rigorous research, and the ability to explain and defend it, for us.

2) Policy-based evidence: We need to recognize and avoid it

I saw a few examples of this during the current conference season, but I feel bad pointing specific fingers because I don’t want to be “mean” or discouraging to novice researchers. (Yes, I am aware that this is the dreaded librarian “niceness” manifesting in me, and I don’t have the distance to know if it’s good or bad.)  However, I don’t think my naming a particular presentation from a particular conference is that useful, as you can probably conjure up your own examples of library policy-based evidence without much effort.

Here’s what I see: Librarians do a lot of surveys. Especially Masters-level academic librarians, who are supposed to do some research and base policy decisions on some sort of evidence. User surveys are pretty common, and this is reinforced by our love of LIBQUAL+. We also have this idea that survey research is “simple” and thus a masters-level professional can do it just fine with no methodological problems. This I would dispute. Surveys *can* be simple, just as many other research methods can be simple. But surveys are also really easy to do badly. And we do a fair amount of bad, or at least biased, surveys.

At pretty much every library conference I attend, I see presentations of surveys with conclusions that do not follow from the actual results, and/or surveys that were clearly (albeit often not purposely) designed to justify a particular policy move. This is certainly not unique to libraries, or health libraries. Lots of fields generate policy-based evidence. The federal government does it at times. (*ahem* Long form census -> National Household Survey)

But when I see librarians doing things like:

  1. presenting surveys with extremely low response rates,
  2. providing no demographic information to show that the small sample is representative of the whole population, and
  3. basing conclusions on the responses of a majority of a tiny minority of the whole, with no discussion of response bias,

I am frankly appalled. I know ML/IS research methods classes tend to be generic and weak, but the fact that we in the profession continue to reward shoddy research methods with conference presentations and other support is horrible, and it does a huge disservice to our profession. Not only are we probably making poor decisions based on lousy research, but we are completely undermining our own efforts to position librarians as professionals with research expertise.
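A bit of back-of-the-envelope arithmetic (with hypothetical numbers) shows why a “majority” of respondents can be a tiny minority of the whole:

```python
# Hypothetical survey: 2,000 cardholders invited, 80 responses (4%),
# of whom 48 (a 60% "majority" of respondents) favour a policy change.
invited = 2000
responses = 80
in_favour = 48

response_rate = responses / invited
share_of_invited = in_favour / invited

print(f"{response_rate:.0%}")     # 4% responded at all
print(f"{share_of_invited:.0%}")  # only 2% of those invited said yes
```

Reporting that “60% of users favour the change” on the basis of such a survey, without any discussion of who the 4% of responders were, is exactly the kind of policy-based evidence described above.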

Yet, even I am reluctant to rake someone, especially a first-time presenter, over the coals in the open question period after a low-quality presentation. It’s “mean.” I feel peer pressure not to ask the same questions of my librarian peers that I would ask to my researcher peers back at home/work. I reassure myself that maybe I will talk to an individual privately afterward, if I can catch her. But honestly, this often doesn’t happen.

What do we do? How do we toughen up? How do we get others in our profession to toughen up?

I think one thing that would help would be for our visible and prominent leaders in the field to engage personally in more public methodological debate regarding LIS research. If we’re going to do research and position ourselves as research experts (or even just research-competent), we need to sharpen our chops.

-Greyson

Follow-up post: here


Filed under academic libraries, research, The Profession