Breakout Sessions - 11:15 a.m.
Introducing "Curated Pairs"
To offer a diverse range of perspectives, there are select Curated Pair sessions. These one-hour blocks feature two 30-minute presentations centered on complementary themes. Presentations that are part of a pair are marked with matching labels, e.g., [Pair A]. To ensure a seamless experience for our speakers and the audience, these paired sessions are treated as a single 60-minute commitment. We ask that attendees remain for the duration of both talks.
Room Assignments
Location details will be posted by May 12.
Track: Teaching, Learning & Student Formation
Eric R. Severson, Adjunct Professor, Philosophy, Woods College
[Two Presentations in 1 hour] [Pair E]
This presentation explores the problem of entitlement as a worrisome and often overlooked danger in learning that is supported by Generative AI. Borrowing an analogy from plumbing, I use the concept of “backflow” to discuss the surreptitious impact of technology on the way we relate to the world. After sketching a philosophy of technology, this paper outlines the potential backflow of entitlement from these tools to the user. Part of a larger project that includes student surveys and experiments, this presentation claims that there are no neutral interactions with AI chatbots, nor any other tool or machine. We are changed by the tools we take up, for better and for worse. The presentation ends with a two-fold call to action, seeking to: 1) help students become aware of the danger of “backflow” in modern technology, and 2) challenge developers to create tools that attend to the character development of students.
This is a 30-minute presentation paired with "Pedagogical Challenges as a result of AI."
Tim Lindgren, Assistant Director, Design Innovation, Center for Digital Innovation in Learning
Noël Ingram, Digital Teaching Programs Administrator, Center for Digital Innovation in Learning
Colleen Dallavalle, Associate Vice President, Student Engagement & Formation, Division of Student Affairs
Belle Liang, Professor & Ascione Family Formation Fellow, Lynch School of Education & Human Development
Elisa Liang, Ph.D. Student in Counseling Psychology, Lynch School of Education and Human Development
Student panelists: Toby Ting, Junior theology/philosophy major; Mackenzie Duffy, Sophomore
[Panel Discussion]
How do students actually think about AI—and how can we create space for honest conversation at Boston College? This panel shares two connected programs that grew out of CDIL's 2024-25 Student AI Advisors working group. First, a summer research internship (CDIL and the Purpose Lab) trained undergraduate interns in qualitative research methods so they could interview other students about formative experiences of AI at BC. Those findings informed a second initiative: "BC Students Talk AI," a peer-led pizza conversation program developed in partnership with Student Affairs. Students hosted small-group discussions with friends and classmates, then completed reflections on what they heard and learned. The panel will share preliminary findings from both programs and discuss the collaborative process—how units across campus worked together to treat AI as a conversation starter about student formation, trust, and dialogue.
Paula Mathieu, Associate Professor of English, English
Melanie Hubbard, Head of Digital Scholarship & Data Services, Libraries
Noël Ingram, Digital Teaching Programs Administrator, CDIL
[Two Presentations in 1 hour] [Pair D]
In this presentation, Paula Mathieu (English), Noël Ingram (CDIL), and Melanie Hubbard (Libraries) will discuss their collaboration on designing a GenAI learning module for First-Year Writing (FWS), a project focused on centering student voices. Topics will include their rationale for choosing the FWS program as the module’s launching point, their design approach, how they are involving students in the creation process, and their plan for piloting and assessing the module.
This is a 30-minute presentation paired with "Teaching Students Responsible AI-Assisted Literature Searching."
Julie Bostian, Associate Director, Employer Engagement, BC Career Center
Maura Quinn, VP, Early Talent Acquisition & Engagement Programs, Liberty Mutual Insurance, Chair-Elect, National Association of Colleges and Employers, BC Alumna
Li Steinberg, VP of Talent Acquisition, Shea & Company
[Panel Discussion]
This panel brings together Boston-based recruiting leaders from a range of industries to discuss the impact of AI in current college recruiting. The goal of this session is to provide BC faculty and staff with information about the AI skills employers are seeking in new college hires as well as the role of AI in the hiring process today. By understanding how employers distinguish “augmenting” work (using AI to enhance performance) from “automated” work (using AI primarily for shortcuts and efficiency), attendees will gain clearer insight into which AI and non‑AI career readiness skills employers are prioritizing — and use these insights to better guide their work with students.
Marcus Breen, Associate Professor of the Practice, Communication, MCAS
[Two Presentations in 1 hour] [Pair E]
A critical reflection emerging from Critical Challenges: AI in an Evolving Crisis, a Communication course taught on and off at BC since 2019. This presentation introduces the concept of pedagogical blockage, reinforced by AI in the classroom, in which teaching outcomes directed at Jesuit Liberal Arts formation are constrained by AI. Three examples are offered: 1. Context setting: pre-ChatGPT AI students focused on science-oriented careers resisting critical thinking or formation strategies; 2. Post-ChatGPT teaching and learning: AI advocacy by CSOM professors compared with Liberal Arts refusal, challenged to the point of despair about AI’s impact; 3. Gender differences in the uptake of AI. While highlighting the contrast between Humanities and CSOM approaches may be unfair, the contradiction in BC teaching is a pedagogical blockage. This is a crisis for the Jesuit concept of formation.
This is a 30-minute presentation paired with "Backflow: AI Chatbots and Habituated Entitlement."
Justin Henriques, Associate Professor, Human-Centered Engineering
Jenna Tonn, Associate Professor of the Practice, Human-Centered Engineering
[Two Presentations in 1 hour] [Pair F]
This talk will present two case studies of how GenAI is used in the Department of Human-Centered Engineering to help support student formation in the development of engineering judgement. One case study is the first-year course Making the Modern World: Design, Ethics & Engineering; the other is the junior-year design course Collaborative Service Engineering Project. These courses use a scaffolded framework that treats GenAI as a tool that must be connected to discernment (i.e., integrating technical reasoning, stakeholder context, ethical responsibility, and professional responsibility). The approach aligns GenAI use with competency and skill development and requires students to transparently validate and verify outputs. We will discuss this scaffolded framework and how it is communicated to students in different ways across the two courses.
This is a 30-minute presentation paired with "Using AI to promote critical thinking."
Shveta Parekh, Lecturer, Psychology and Neuroscience
[Two Presentations in 1 hour] [Pair D]
As generative AI becomes increasingly accessible, many students default to general-purpose tools (e.g., ChatGPT) to summarize or replace engagement with primary literature. In this session, I share a structured approach used in undergraduate research practicums to redirect students toward discipline-appropriate AI tools, such as Consensus, Elicit, SciSpace, and Research Rabbit, for literature discovery, synthesis, and field mapping. Rather than generating answers, these tools are integrated as scaffolding mechanisms that support question development, identification of key debates, and efficient navigation of the research landscape while maintaining an explicit expectation of reading and evaluating original papers. I will describe specific assignments, guardrails, and student workflows currently implemented in neuroscience research courses, along with observed shifts in student research behavior. This session highlights how intentional AI integration can strengthen research literacy, academic integrity, and student formation while preparing students for responsible AI use in graduate and professional settings.
This is a 30-minute presentation paired with "Building a Critical AI Literacy Module for First-Year Writing: A Cross-Campus Collaboration."
Gene M. Heyman, Senior Lecturer, Psychology & Neuroscience
[Two Presentations in 1 hour] [Pair F]
I teach a large introductory Psychology & Neuroscience course. One of my fall-semester goals was to develop an AI-based assignment that would teach students to use AI in ways that promote critical thinking and the pleasure of finding things out on their own. The students chose a topic related to course content and were instructed (1) to probe AI for the consensus (“gold standard”) measures and control conditions used in research on their topic; (2) to probe AI for two to four experiments that used the recommended measures and control conditions; (3) to summarize the studies and then, on the basis of their summary, to probe AI for results that conflicted with it. The students turned in a three-page history of their conversation with AI and a two-page summary of what they had learned, including comments on the discrepant results. The papers were excellent.
This is a 30-minute presentation paired with "Supporting Student Formation through Developing Engineering Judgement with GenAI."
Track: Research
Martin Scanlan, Professor, Educational Leadership and Higher Education
Luciana Wing, Undergraduate Student, LSEHD
[Presentation]
This presentation explores generative AI as a research tool in digital case studies (DCS). Case studies are common tools to examine educational infrastructure. While traditional case studies are narrative-based and linear, DCS incorporate multiple forms of media (e.g., text, video, and audio) and are interactive and nonlinear. We are engaged in two projects building DCS focused on exemplary practices in schools—one exploring inclusive education for students with disabilities, and the other exploring formative education in Catholic schools. Using Anthropic's Claude Projects to develop custom chatbots, we are populating each DCS with curated data (e.g., textual data from interviews, focus groups, and artifacts) and related empirical literature. The chatbots' conversational interface invites users to pose questions and explore the educational infrastructure of these exemplary practices on their own terms. This approach offers a replicable model for researchers seeking to make complex qualitative data more accessible and interactive for practitioner audiences.
William Watkins, Associate Technology Consultant/M.A. alumnus, Information Technology Services/Philosophy
[Presentation]
As a coextensive fact of embodied existence, we are on display. The questions, however, are to what degree and to which observers we are on display. The answer depends on which space counts as observed space, how willing we are to exhibit behaviors in known observed space, and how keen the inferential process to which this information is supplied. This research pursues that line of inquiry by investigating what information can be gleaned from our day-to-day activities in online environments, what functions as a source of data collection, and how that data is subsequently aggregated, categorized, and utilized. The already alarming state of observation is exacerbated by the advent of AI, insofar as it enhances the usability of unorganized mass data, narrows our capacity for information sourcing outside of observed space, and improves the acuity of inference through its capacity for automated pattern recognition, simulation, and experimentation.
Julia DeVoy, PhD, MTS, MBA, MLS '26, Associate Dean of Undergraduate Programs and Students, LSEHD
[Workshop/Demo]
This session explores the dual role of Generative AI (GenAI) as both a 'non-human stakeholder' and a strategic governance tool within Higher Education. Moving beyond normative assertions, the 'Stakeholder Salience Framework' is used to consider how the 'agentic' capabilities of AI, such as reasoning, planning, and acting, may challenge traditional corporate governance boundaries. Preliminary examination suggests that shifting from 'reactive compliance' to a 'proactive AI-integrated GRC' (Governance, Risk, and Compliance) maturity approach will foster greater long-term organizational resilience. By operationalizing AI's stakeholder 'interests' through its reliance on data, alongside Higher Education's interest in ethical alignment, this work introduces a theoretical roadmap for managing the implications of autonomous technological agents within Higher Education.
Track: Operational Efficiency
Ravindra Harve, Enterprise Data Architect, Information Technology Services
Lance Tucker, Associate Director (Data & Reporting), ITS
[Presentation]
In 2014, ITS explored address verification software to help departments maintain accurate postal addresses and improve communication with University constituents. Research with Gartner, vendors, and peer institutions revealed that this solution was popular among organizations with large customer bases that faced inconsistent address formats from unregulated sources.
The primary challenge was cost. Options ranged from tens of thousands of dollars for format standardization to hundreds of thousands for full address verification. University Advancement used an annual batch cleansing service. International address verification was the most expensive, with costs increasing with scope.
In 2026, we revisited this challenge, exploring AI tools as a cost-effective alternative. In this session, we will present five use cases and assess whether this approach proved helpful.
Scott Olivieri, Managing Director, Web Services, Office of University Communications
[Two Presentations in 1 hour] [Pair G]
Data is not just in back-office databases; it's everywhere. Chaotic email inboxes, massive Excel and Word documents, and scattered department web pages are critical sources of unstructured data we can tap to gain a deeper understanding of our organizational operations, opportunities, target audiences, and challenges. I'll share three examples of how I used Gemini, NotebookLM, and Claude to transform this unstructured madness into actionable insights.
1. SUPPORT & CUSTOMER SERVICE. Our group email support inbox was stuffed with thousands of messages; we would respond and then archive each one. AI tools helped us transform this archive into a useful report that categorized requests, documented workload, identified successes, and helped us be proactive rather than reactive.
2. WEBSITE EFFECTIVENESS. How do people perceive your department or school website? AI has transformed user testing. I will show you how to begin assessing website effectiveness by using synthetic personas and provide tips on optimizing your website for GenAI.
3. REPORTING. Finally, I'll share how we used AI to transform a 200-page Word document with dozens of tables and images into an interactive and engaging reporting application.
This is a 30-minute presentation paired with "More than just Chatbots: AI and the Future of Graduate Enrollment."
Adam Poluzzi, Associate Vice Provost, Graduate Enrollment Management, Provost Office
Frances Stearns, Director, Graduate Enrollment Services
Brett DiMarzo, Director, Graduate Enrollment Digital Strategy
Alyssa Volivar, Associate Director, Graduate CRM Operations
[Two Presentations in 1 hour] [Pair G]
AI is redefining Graduate Enrollment Management (GEM) amid a looming "demographic cliff" and tightening budgets. This session moves beyond the "if" of AI to provide a pragmatic roadmap for implementation, centered on operational efficiency and hyper-personalized engagement.
The presentation explores three critical pillars:
(1) Admissions: The shift from simple automation to evaluative decision-making and the impact of applicant-side AI usage.
(2) Marketing & Recruitment: Navigating radical changes in student search behavior while leveraging human-centered content as a differentiator.
(3) Boston College Case Study: Insights into our collaborative, cross-campus framework for ethical integration.
Attendees will learn to identify admissions friction points, optimize marketing for "AI discoverability," and navigate the ethical implications of automation. This session offers a collaborative framework for evolving operations and leading change together.
This is a 30-minute presentation paired with "Harvesting Actionable Insights from Unstructured Data."
Peter Salvitti, Chief Technologist, ITS
[Presentation]
University staff navigate complex processes every day: document lookups, first-draft writing, cross-departmental communication, all built for an earlier generation of tools. This session demonstrates how two tools already available through our Google Workspace, Gemini and NotebookLM, can immediately streamline those workflows, with no new software and no IT ticket required.
Through several live demos, attendees will see AI retrieve cited answers from policy documents, draft job descriptions from rough notes, analyze anonymized survey data, translate technical jargon into plain English, and more. Every workflow follows a single principle: Human-Led, AI-Supported. The AI handles the lookup and the first draft; you make the decisions.
Attendees will leave with ready-to-use prompts and instructions for turning any prompt into a permanent, reusable tool using Gemini Gems. All activities are covered under the University's Workspace Enterprise data protection agreement.
