How AI is changing research, teaching, and learning
On December 5, 2025, Assessment and User Experience Librarian Mary Huissen (MH) sat down with Amanda Licastro (AL), Head of Digital Scholarship Strategies and one of Swarthmore's campus experts on AI, for a conversation about how artificial intelligence is affecting libraries and research. The interview has been edited for length and clarity.
MH: You've become one of Swarthmore's campus experts on AI, working in collaboration with partners in ITS, and you participate regularly in the groups helping the College respond to AI, including the Board of Managers. But for our readers, could you talk more specifically about how the Libraries are leading responses to AI?
AL: This is a great question. I'm going to break it down into two buckets:
The first bucket is evaluating available tools; the second is making transparent decisions for our community about which tools we do and do not provide access to.
I work with Andrew Ruether and the Academic Technology team to evaluate tools our community requests. That process has produced the tools we currently offer data-protected access to: Google Gemini, NotebookLM, and, through LibreChat, OpenAI and Claude. There have also been requests for other tools that, after a deep dive, we decided not to provide or support campus-wide for a variety of reasons.
Within the Libraries, we are looking at how AI is now integrated into our library databases and the tools we support, and asking which of those integrations add value, actually giving our patrons something worthwhile, and which do not.
In our initial evaluation of library tools, we found that most of the AI add-ons were simply using ChatGPT under the hood, that the results added nothing, and that they were sometimes misleading.
I'll give you an example. I looked up a Margaret Atwood poem that uses an extended metaphor about the heart to talk about human relationships, and the AI add-on, a ChatGPT overlay, gave me information about cardiology. It obviously didn't understand the metaphor or the poetic nature of the piece. That isn't just unhelpful; it could mislead someone who doesn't understand what Atwood was aiming at in the poem.
It also obfuscates what is actually happening. It makes it seem like the database itself is offering an internal artificial intelligence tool when, in fact, it is outsourcing to OpenAI without being transparent about it. So students, faculty, or staff could accidentally use OpenAI even if, in other contexts, they would object to that.
I would rather provide safe access to OpenAI separately, so that you have to make the conscious choice to go there. And you can: you could download that Margaret Atwood poem, go to LibreChat, upload the poem, and ask OpenAI all the questions you want. But that would be a conscious choice. Having the AI integrated into the database obfuscates that choice; it makes it seem like an internal tool purpose-built for that resource, which it isn't.
The only database where we left the AI add-on enabled was Statista, because Statista is a very specific kind of resource dealing with high-level statistics, and you need a certain fluency with statistics even to use it. For all the rest, we turned the add-ons off out of concern that they were inaccurate, added no value, and, in fact, obfuscated a decision we really want our patrons to make consciously for themselves rather than having it made for them.
MH: I had a serendipitous conversation last spring when one of the external honors examiners happened to wander by my office and we struck up a conversation about AI. This professor identified himself as neurodivergent and talked about how enormously helpful these tools are to him. I'm talking here specifically about library catalogs, not the databases within them, like JSTOR: the catalogs themselves can have an AI option where you enter a search query and it pulls the top relevant results and briefly summarizes them for you. He found this feature a great time-saver and a real reduction in his cognitive load. It is actually available for Tripod, and we looked into it but decided not to enable it because it wasn't good enough yet. But someday it probably will be, and I'm wondering about that. Are we going to be able to keep up with evaluating these things? Because there will be a flood of them.
AL: This is a great question, and it's actually difficult to keep even library colleagues up to date and well informed about how fast these tools are developing. If you had asked me at this exact time last year whether you could use Google Gemini or LibreChat to do a literature review for a research project, I would have said no, because they would fabricate sources. That is no longer true. If you use Google Gemini for a literature review today, you're going to get excellent results.
You can set up a Google Gemini Gem that acts as an academic researcher and pulls only relevant, peer-reviewed, scholarly sources, with links to those sources. In some cases it will link directly to the full text; in others you will have to go to Tripod to find the full text of the article and evaluate it for yourself. Either way, it will provide an excellent overview of sources and, as you said, summaries of the broad categories of research on that topic.
We have run workshops on this, showing students and faculty how to create a literature review with something like Google Gemini and then evaluate the sources it points to for themselves, using the cognitive skills we build in higher education: reading the abstract, looking at the methodology, considering the context and the authority of the authors, all of that rhetorical analysis work, and then identifying the small subset of those resources that they find valid and reasonable for their research project. They can even put those results into NotebookLM and create a small dataset of human-vetted resources to interact with in a variety of ways.
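For readers who want to experiment outside Gemini's web interface, the sketch below approximates that kind of Gem programmatically. It is a minimal illustration, assuming Google's google-generativeai Python SDK and an API key; the model name, instruction text, and example query are hypothetical stand-ins, not the configuration used in the Libraries' workshops, and a real Gem is created through Gemini's own interface.

```python
# A minimal sketch, assuming the google-generativeai SDK is installed and an
# API key is stored in the GOOGLE_API_KEY environment variable. The model
# name and instruction text are illustrative, not the Libraries' actual Gem.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# A Gem is configured through Gemini's web interface; a system instruction
# is the closest programmatic equivalent to a Gem's custom instructions.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=(
        "You are an academic research assistant. Suggest only relevant, "
        "peer-reviewed, scholarly sources, with full citations and links. "
        "If you are not certain a source exists, say so instead of guessing."
    ),
)

# An example literature-review query, echoing the Atwood example above.
response = model.generate_content(
    "Give me an overview of recent scholarship on extended metaphor "
    "in Margaret Atwood's poetry, grouped by research approach."
)
print(response.text)
```

As with the workshop workflow, whatever the model returns still needs to be checked against Tripod or the underlying databases before it earns a place in a project.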
I think the key for us is weighing the pros and cons of these tools. We want students to understand how to use a library catalog and how to evaluate their sources. But we also want them to understand how artificial intelligence tools may or may not be useful for those processes once they have those basic rhetorical analysis skills. In many cases, the answer might be that the tools are not useful. In other cases, they may be incredibly useful.
The neurodivergent community often finds these tools incredibly useful for organization, time management, and access to multimodal forms of information, including visual and audio representations that our traditional tools don't currently provide.
MH: I've mentioned before that I started following Leo Lo, the University of Virginia's new Dean of Libraries and Advisor for AI Literacy. Given his title, he has a lot to say about libraries and AI. He cites evidence that growing use of AI tools confirms what librarians have been noticing: shifts away from searching to asking, from finding sources to consuming synthesized answers. He says this changes the act of learning and makes AI literacy and information literacy, and I'll add data literacy, more critical than ever. How do you respond?
AL: Part of our job in libraries, and for me as a humanist teaching humanities classes, is to give students space to slow down and engage with texts, and I mean texts broadly. In my classes that means novels, but it also means films, it also means art…
MH: Let's not forget music!
AL: Yes, music, all of it.
I think part of the work we're doing is giving people space and time to slow down and engage with the analog, even if that means electronic versions of analog objects. It could be a text, an archival object, or a piece of art that has been digitized. The point is giving people the time and space to look closely, observe, think, and form their own opinions and reactions to that content, whatever it may be. It is important for us to make that space.
And maybe that means we have to change our pedagogy: do less in our classes, teach fewer texts, fewer pieces of art, fewer readings, so that students don't feel pressure to ingest large amounts of content quickly.
Maybe that means providing space to engage with our special collections, or to do original research through oral history and community engagement. We need to figure out how to slow down and make room for that work, because it is incredibly important that students feel they have the time and space to do it without the pressure of deadlines and grades.
There is a lot of conversation happening in the committees I've been part of about how our grading structures create that pressure. We need to rethink assessment if we are serious about having students slow down and think critically about analog objects.
MH: Why should the Libraries be leaders in this conversation?
AL: Because we are information professionals. Libraries have historically been part of conversations about privacy and access, and fundamentally, questions around generative artificial intelligence are questions about privacy and access.
Sometimes privacy and access are complementary, and sometimes they're at odds. That is absolutely the case with generative AI. We can talk fluently and expertly with folks about issues of privacy and access around generative AI and information literacy because that is our training. I don't think anyone is better suited to having those conversations than librarians.
Do you have questions about AI ethics, copyright, or teaching with AI tools? Reach out to Amanda or join the Teaching & Learning Commons' Critical AI Inquiry Group!