Experiments with a 3-credit Research Course

This semester I seized on an opportunity to teach a new, 3-credit course in our Honors Program. It’s called Quest for Answers: Intro to Research, and I joined two other adjunct professors to design the syllabus. Since the status of librarians is always a hot topic, I should say that I’m teaching this class as an adjunct, hired by the Honors Program. Librarians do not have faculty status at Tulane, and the library cannot offer its own credit-bearing courses. However, my department saw the value in my contributing to this new course, and I have their permission and blessing to work on this in addition to my daily responsibilities.

I don’t think my teaching comrades would argue with me that the course content is mostly my doing;  many of the concepts we’ll discuss and work with in class are at the heart of information literacy in higher ed. It’s also worth noting that the course is interdisciplinary, and is meant to serve students from across the liberal arts and sciences.

This blog has been a bit dormant lately, so this seems like a good venue in which to reflect upon the course as it unfolds. I’ve also been asked to share my syllabus, which I’m happy to do, and this seems like an efficient way to disseminate it.

So please find attached my syllabus for COLQ 2010, the honors colloquium course I’m teaching this fall:



What is Data Literacy?

This March I’ll be presenting at the ACRL 2015 conference with Christine Murray (Bates College) on teaching data literacy in the library. To help me prepare and perhaps preview our discussion, I thought I’d post a few thoughts on the blog to get the juices flowing. Let’s begin with some definitions as they appear in both the library literature and the scholarship of statistics education in order to answer the question: what is data literacy?

In Libraryland, “data literacy” seems to be the most popular term (over statistical literacy, quantitative literacy, and numeracy), and consists of two aspects: information literacy and data management. From an information literacy perspective, the emphasis is on statistics, which are considered a special form of information but one that still falls under the information literacy umbrella. For example, Schield (2004:6) describes statistical literacy as the critical consumption of statistical information when used as evidence in arguments. Similarly, Stephenson and Caravello (2007) advocate for librarians to promote statistical literacy by assisting learners to locate and evaluate authoritative statistical sources, recalling Standards 2 and 3 of the 2000 ACRL Information Literacy Standards, as well as reference classics like the annual Statistical Abstract of the United States.

From the data management perspective, the emphasis is on data rather than statistics, and focuses on the organizational skills needed to create, process, and preserve original data sets. Returning to Schield (2004:7), he defines data literacy as the ability to obtain and manipulate data, but reserves these skills for certain fields of study such as business or the social sciences. Carlson et al. (2011:631), based on interviews with faculty and GIS students, emphasize the importance of data management and curation skills required to “store, describe, organize, track, preserve, and interoperate data.” There is plenty of literature on data management, a hot topic in Libraryland fueled by interest in e-science initiatives, new data requirements for federal grants, and the creation of institutional repositories. In my experience, though, discussion of data management is often divorced from statistical literacy, perhaps due to its focus on faculty and other experts rather than data novices. Calzada Prado and Marzal (2013) do attempt to unify the information literacy and data management aspects under one rubric, although their proposal for five data literacy standards is largely derivative of the soon-to-be-sunsetted 2000 ACRL Information Literacy Standards, which doesn’t bode well for their wider adoption.

Turning away from librarianship, we find that statisticians and statistics educators typically use the term “statistical literacy” to describe the knowledge, skills, and dispositions surrounding their field. One widely cited exposition of statistical literacy is that of Iddo Gal (2002:2-3), who identifies two interrelated components: the ability to interpret and critically evaluate statistical information, as well as the ability to discuss and communicate one’s understanding, opinions, and concerns regarding such statistical information. Gal (2002:4) further describes a model of interrelated knowledge elements and dispositions that together enable statistically literate behavior. Gal’s definition will no doubt look familiar to information literacy librarians, incorporating the evaluative and communicative aspects of information literacy along with the dispositions and affective components we find highlighted under the new ACRL Framework.

But what is the nature of statistical information, the object of Gal’s model for statistical literacy? It may be helpful to consider this in terms put forth by George Cobb and David S. Moore (1997). In their oft cited article on statistics pedagogy, they break down statistical analysis into three interrelated phases: data production, data analysis, and formal inference. Each of these phases produces statistical information requiring varying levels of contextual and mathematical knowledge.

Data production includes aspects of the research process related to designing a study, creating a data set, and preparing the data for short-term and long-term analysis. Viewed from the library, the data production phase is most closely associated with data management skills. Data analysis, next in Cobb and Moore’s schema, consists of the exploratory and descriptive phase of data-driven research. This includes examining the data set to discover trends or outliers, and using descriptive statistics to reduce large amounts of data into summary information such as measures of central tendency and variance (e.g., mean, median, mode, range, percentiles, standard deviation). Through this analysis, researchers can make hypotheses or predictions about phenomena revealed by the data. Finally, formal inference can be used to draw conclusions about a population from findings in sample data. Here we find the notorious formulas full of Greek letters such as Student’s t-test, chi-square test, ANOVA, and regression models. I’ll return to Cobb and Moore’s pedagogical advice in a future post.
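The descriptive measures in the data analysis phase are simple enough to show concretely. Here is a minimal sketch using only Python’s standard-library statistics module; the data values are invented for illustration:

```python
import statistics

# Hypothetical sample: minutes spent per visit at a reference desk
visits = [5, 7, 7, 12, 15, 22, 30, 45]

print("mean:  ", statistics.mean(visits))       # average value
print("median:", statistics.median(visits))     # middle value
print("mode:  ", statistics.mode(visits))       # most frequent value
print("range: ", max(visits) - min(visits))     # spread between min and max
print("stdev: ", statistics.stdev(visits))      # sample standard deviation
```

Each of these reduces the raw list to one summary number, which is exactly the data-reduction work Cobb and Moore assign to the data analysis phase.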

So back to the original question: what is data literacy?

I suggest librarians borrow heavily from statistics educators when trying to answer this question. To paraphrase Gal and apply his definition to Cobb and Moore’s three phases of statistical analysis, the simplest definition of data literacy is the ability to interpret, evaluate, and communicate statistical information. Central to this ability is an understanding of how statistical information is created, encompassing data production, data analysis, and formal inference. In other words, data literacy includes the ability to evaluate the modes of data production, including the underlying research design and means of sampling, and how this impacts the possible findings. Data literacy also includes the ability to interpret the results of formal inference tests, including confidence intervals and the probability that findings are representative of a population rather than coincidental to the given sample. And finally, data literacy includes the ability to interpret and communicate about the descriptive statistics learners and citizens encounter everyday, from unemployment rates to political polling.
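To make the inference piece concrete, consider the margin of error reported alongside a typical political poll. A minimal sketch in Python, with invented polling numbers, shows the normal-approximation confidence interval for a proportion:

```python
import math

# Invented polling example: 520 of 1,000 respondents favor a candidate
n = 1000
p_hat = 520 / n  # sample proportion = 0.52

# Approximate 95% confidence interval for the population proportion
# (normal approximation; 1.96 is the z-value for 95% confidence)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - margin, p_hat + margin
print(f"52% +/- {margin:.1%} -> ({low:.1%}, {high:.1%})")
```

Interpreting that interval, rather than computing it, is the data literacy skill: the poll’s “52%” is a sample estimate, and the true population value plausibly lies anywhere in the interval.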

And what about data management? Ultimately it belongs to the data production phase of Cobb and Moore’s schema, and is perhaps one aspect of data literacy that, as Schield intimated, can be reserved for the specialists. While the data literate person can identify and evaluate the soundness of a research design and data collection methods, perhaps only trained practitioners need the specialized skills to carry out a full-fledged project involving data curation and advanced tools. And in most instances, teaching these skills is beyond the purview of librarians. Stay tuned for more on this and data literacy instruction in the library.

Calzada Prado, Javier and Miguel Ángel Marzal. 2013. “Incorporating Data Literacy into Information Literacy Programs: Core Competencies and Contents.” Libri: International Journal of Libraries & Information Services 63(2):123–34.
Carlson, Jacob, Michael Fosmire, C. C. Miller, and Megan Sapp Nelson. 2011. “Determining Data Information Literacy Needs: A Study of Students and Research Faculty.” portal: Libraries and the Academy 11(2):629–57.
Cobb, George W. and David S. Moore. 1997. “Mathematics, Statistics, and Teaching.” The American Mathematical Monthly 104(9):801–23.
Gal, Iddo. 2002. “Adults’ Statistical Literacy: Meanings, Components, Responsibilities.” International Statistical Review 70(1):1–25.
Schield, Milo. 2004. “Information Literacy, Statistical Literacy and Data Literacy.” IASSIST Quarterly 28(2):6–11.
Stephenson, Elizabeth and Patti Schifter Caravello. 2007. “Incorporating Data Literacy into Undergraduate Information Literacy Programs in the Social Sciences: A Pilot Project.” Reference Services Review 35(4):525–40.

Zappos and Library Instruction: Caveat Emptor

Stop me if you’ve heard this one: Librarian needs an engaging and effective way to teach a class of undergrads how to use the library’s shiny new discovery tool. Knowing that good pedagogy begins with students’ existing knowledge and experience and builds on those to create new knowledge, she begins her one-shot session by visiting the online shopping site, Zappos.com. Zappos, best known for shoes, uses a facet structure similar to that of most discovery tool interfaces. Students know how to buy shoes, and can apply their online shopping skills to better navigate the database of information sources. Our librarian can even ad lib a few jokes on how much librarians love sensible shoes.

Some version of this lesson plan resurfaces among librarians with regularity. It came up in a discussion on teaching with discovery tools at LOEX 2013. In October my good friend Daniel posted his thoughts about using Zappos in a one-shot session, eliciting numerous positive responses. And at an ALA 2014 conference session on threshold concepts, the Zappos metaphor showed up again, followed by several tweets of approval. As if to confirm that librarians are head over shoes for Zappos, there was even a tour of Zappos’s headquarters in Las Vegas on the ALA conference schedule.

So why does this bother me? I’m not necessarily opposed to using a commercial website in my teaching, and besides, thought-provoking pieces on libraries and the neoliberal condition already exist here and here. My problem with using Zappos in library instruction is pedagogical, and stems from the flawed assumption on which this teaching strategy is based: Students know how to buy shoes, and can apply their online shopping skills to better navigate the database of information sources. Let’s consider each part of this statement in turn.

1. Students know how to buy shoes.
Let’s dispatch this first assumption quickly. While the premise appears simple enough, my critical theorist readers have already noted that the librarian has actually imposed a particular version of American middle-class experience on her classroom. She imagines that her students buy shoes at Zappos.com, have sufficient disposable income to spend an average $130 per order [1], and belong to a socioeconomic group for whom shopping online is both ubiquitous and natural. If her students do not share this particular experience, then the pedagogical value of the Zappos demo is much diminished. Maybe her students do shop online, but don’t like Zappos. Will students’ negative feelings toward Zappos be applied to the library, library-provided databases, or the instruction session itself? Even worse, the librarian could inadvertently alienate students who can’t afford Zappos’s prices, or don’t shop online. The digital divide is still a problem, and is not always a function of socioeconomic status.[2] In sum, it makes pedagogical sense to begin with students’ existing knowledge, but we must take care not to assume students share our own professional or socioeconomic experiences.

2. Students can apply their online shopping skills to better navigate the database of information sources.
Moving past the pitfalls explored above, let’s get to the heart of the matter and explore the utility of comparing the Zappos interface with that of the typical library database. On the surface, the Zappos website and the typical library database have much in common. Both provide a basic search box, and both include some kind of facet structure that allows users to filter results according to preset categories. And that’s where the similarities end.

First, students know what a shoe is. Before the shopping begins, students already have ideas about what kinds of shoes they like, what kinds of shoes they need, what size they wear, what styles are appropriate for social situations they will encounter, and perhaps even which brands have the best reputation for quality or cultural cachet. Most of this knowledge has been acquired and reinforced over time, and is deployed subconsciously, allowing the shopper to focus on finer details to make a final selection. Students often lack this intuitive understanding when it comes to scholarly communication. Most students are just learning about the style and structure of academic writing, the values associated with peer review and high-impact-factor journals, and the nature and appropriate uses of different information sources. Equating the search techniques of shoe shopping and information seeking may give students the false impression that selecting an information source is as simple as checking off criteria from a list of options: Peer-reviewed journal, check. Relevant subject heading, check. Recent date, check. If the article fits, cite it.

Second, students buy shoes, but must create knowledge. In our online shopping scenario, once a desirable pair of shoes has been identified, the consumer completes the purchase, receives the product, tries it on, and either wears it or returns it for another size or style. The equivalent process when writing a research paper might mean finding an article on the right subject, selecting a quote and fitting it into an already written paper. If the professor says the source doesn’t work, the student finds a new one to substitute. Thus, the Zappos metaphor reinforces the notion of information as a static consumer good, to be acquired, used, and exchanged like fashion accessories.

But conducting academic research should be so much more. Students are tasked with gathering information sources, analyzing and critiquing them, generating questions, and combining information sources with their experience and collected data to create new knowledge. The Zappos equivalent of an actual research project would mean gathering shoes, investigating how and why they were made and by whom, examining worker conditions throughout the manufacturing and supply chain, and then dismantling the shoes to discover their internal structures in order to build one’s own shoe based on the knowledge gained from studying an array of styles, materials, and cobbling techniques. The analogy is almost comical at this point, but highlights the chasm separating shoe shopping from information literacy that may be obscured by a seemingly innocuous teaching metaphor.

In the end, the similarity between the Zappos website and a library database exists only at the mechanical level, and fails to advance student learning of critical thinking and research skills that we aim for in our information literacy instruction. Furthermore, the potential harm of alienating learners who don’t share the experiences of middle-class online consumers is a high risk to take for relatively low-level learning outcomes, and seems to contradict librarians’ efforts to infuse critical theory and social justice into our pedagogy. Add to that Zappos’s questionable relationship with the privacy of their employees [3], and I don’t see how we can salvage the Zappos metaphor for information literacy instruction.


1Zappos average order value (AOV) was $130 when it was acquired by Amazon. See “Analysis and retailer impact of Amazon’s acquisition of Zappos,” ChannelAdvisor Blog (July 24, 2009). Available online at http://www.amazonstrategies.com/2009/07/thoughts-on-the-amazon-acquisition-of-zappos.html. Accessed July 7, 2014.

2In Louisiana, only 2 out of 3 households have broadband Internet, and in a 2012 survey, Louisiana respondents who lack Internet access imagine using it to communicate with friends or to find information, not for commercial activity.

3Adam Auriemma, “Zappos Zaps Its Job Postings,” Wall Street Journal (May 26, 2014). Available online at http://online.wsj.com/news/articles/SB10001424052702304811904579586300322355082. Accessed July 2, 2014.

Correlation Coefficients, or Applying What I Learned at LOEX 2014

LOEX is one of my favorite conferences. Its smallness makes it more intimate than ALA or ACRL. It’s “all inclusive,” which promotes those in-between-sessions conversations that are often the most fruitful. And everything is about instruction.  Win win win.

And sometimes one of the most helpful takeaways appears in an unexpected format. All the sessions I attended at LOEX 2014 in Grand Rapids, Michigan, were great, but the one that paid the most immediate dividends for what’s happening right now at my library was the lightning talk by Chantelle Swaren of the University of Tennessee at Chattanooga. In a strictly timed 7-minute presentation to all LOEX attendees after lunch, Chantelle explained the statistical concept of correlation, and demonstrated how to use Microsoft Excel to generate correlation coefficients. This is a statistical method for revealing relationships among the piles of data we have lying around: circulation stats, survey results, instruction session attendance, etc.
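Excel’s CORREL function handles the arithmetic, but the underlying Pearson correlation coefficient is simple enough to sketch in a few lines of Python; the paired figures below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists.

    Returns a value between -1 (perfect negative relationship)
    and +1 (perfect positive relationship); 0 means no linear relationship.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two variance terms of the denominator
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Invented example: instruction sessions attended vs. databases searched
sessions = [1, 2, 2, 3, 5, 6]
searches = [3, 4, 6, 7, 9, 12]
print(round(pearson_r(sessions, searches), 3))
```

In a spreadsheet the equivalent is `=CORREL(A2:A7, B2:B7)` with the two columns of data; either way, the coefficient is only as meaningful as the question behind the paired variables.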

As it happened, my library received the results of our local Ithaka S+R Survey of faculty right before I departed for Grand Rapids.  While Chantelle was giving her presentation, I had the Excel spreadsheet of the survey responses (scrubbed of identifying information, of course), in my e-mail inbox. How fortuitous! I have to admit that I did test out the Excel correlation function before dinner that same day, but I still had some things to learn about correlation and the survey data before I could make these numbers meaningful.

Since then, I’ve done a little reading (thanks Wikipedia!) to better understand how statistical correlation works and what its limitations are. And now that my entire library is focused on digesting and interpreting our Ithaka Survey results, I’ve been putting my new Excel skills to good use. The folks at Ithaka sold us an analytic report of the survey findings, but that mostly included comparisons of the Tulane faculty responses to those of the 2012 national faculty survey. I could have done that myself since the data set for the national survey is in the ICPSR data archive. I wanted to know more. For example, does a respondent’s perception of librarians’ impact on student success have any relationship to how they value librarians overall? (Answer: It does.) Or does a respondent’s heavy use of the library collections have any relationship to their willingness to divert funds away from the library building and staff? (Answer: It doesn’t.) Correlation of these survey responses does not mean causation, but it does generate some interesting questions about how the library is perceived and valued by our faculty, and what aspects of our work with them may have the most impact.

As a library we’re still digesting the survey results, but my work with correlation coefficients has quantified what most librarians in public services have known for a long time: the more visible we make our work to our users, the more they will value us as partners in their research and teaching. When we make ourselves invisible–a common side-effect of making discovery and access easier for users–the work of librarians becomes devalued simply because our users don’t know about it. How my library will respond to these findings is still being discussed, but to me the solutions are obvious: put more resources into the visible services that generate value for the library as a whole, and find ways to make the inherently invisible work of librarians (collection development, technical services, electronic resources management) more visible outside the library walls. It’s up to all of us to demonstrate value to our communities, and the numbers suggest that even at a research institution, just having a great collection is not enough.

Primary Colors: My presentation at LOEX 2014

The LOEX 2014 conference has just ended, and I wanted to make sure folks that attended my presentation could access my slides: click the title below to download the PowerPoint, or view them on SlideShare.

Primary Colors: The Art of Teaching & Learning with Primary Sources in the Library

Thanks to everyone that attended my discussion of teaching with primary sources in the library! I’ll post again soon with my takeaways from this great library instruction conference.

ACRL’s Proposed Framework for Information Literacy: A View from the Disciplines

I’m feeling a bit behind the times on this, but I held off on posting about ACRL’s proposed Framework for Information Literacy. I promised to contribute a column on the Framework to ANSS Currents, the newsletter of the Anthropology & Sociology Section of ACRL, and didn’t want to preempt that with a blog post.

The spring 2014 issue of ANSS Currents has been released, and my thoughts on the new Framework from the perspective of subject liaison librarians begin on page 19. For a little context, the new Framework is intended to replace the existing ACRL information literacy standards, which were adopted in 2000. Since then, a myriad of subject-specific standards have been developed, including standards for information literacy in anthropology and sociology, and it will be interesting to see what becomes of those should the new Framework be implemented. You’ll have to read the column in ANSS Currents for my initial thoughts on the matter.

Tomorrow I head off to Grand Rapids, Michigan, for the LOEX 2014 conference, and I expect to have some great discussions on the new Framework and the future of information literacy efforts in academic libraries. I’m also presenting on my work with students using primary sources, so it’s going to be a weekend full of library awesomeness.