Introduction
Google’s 2023 launch of NotebookLM catalyzed far-reaching changes in the way university researchers investigate, organize, write, and deliver new research. By 2025, the product had moved into full production and been adopted by many universities across the United States and around the world. Some institutions, such as the University of San Diego, have made NotebookLM and other Google tools their only officially approved AI platforms, “emphasizing that these tools comply with both FERPA and GDPR regulations and making them the only AI platforms approved for use with institutional data” (University of San Diego, 2025).
With this new technology, researchers can shift their workload away from manually finding, scouring, annotating, and organizing sources and let the tool do that work for them. By uploading source packs of PDFs, web articles, datasets, YouTube videos, and Google Drive files, researchers can have Google’s latest LLMs handle much of the middle of the research process. The true impact of this displacement, however, lies in its effect on the researcher’s cognitive workflow. Adopting Google’s NotebookLM appears to shift the middle of the research workflow from manual skimming and organizing to AI-assisted summarizing, which plausibly reduces time-to-synthesis in reported practice; however, this efficiency introduces a verification burden, as automated citations are error-prone and require human review.
Case & Scope
In this case study, Nichole Lim and Matthew Sorenson assess how adopting Google’s NotebookLM in place of traditional reading, annotating, and outlining shifts researchers’ time allocation across the pipeline (search, close reading, synthesis, writing), and how that reallocation affects time-to-synthesis, citation correctness, and argument quality. We pursue this question through the lens of AI and Digital Labor as new foundations of educational technology.
Our exploratory case focuses on U.S. higher education, 2023–2025, with the “tipping point” occurring when NotebookLM moved from pilot curiosity to an approved, institutionally supported research tool within Google Workspace campuses. We treat the displaced workflow as “search → close-read → annotate → outline → draft” and the LM-mediated workflow as “search → upload a source pack → prompt → verify.” The chosen method and source base includes publicly available policy pages, service catalogs, and instructor guidance from university IT sites, along with Google’s official feature documentation.
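To make this comparison concrete, the sketch below shows one way a study team might log and compare stage-level time allocation across the two workflows. It is a minimal illustration of the measurement idea, assuming self-reported stage timings; the stage labels, class, and function names are our own shorthand, not an instrument drawn from the sources we reviewed.

```python
from dataclasses import dataclass

# Stage labels for the two pipelines described above. These are our own
# shorthand, not terms from any published study protocol.
TRADITIONAL = ["search", "close_read", "annotate", "outline", "draft"]
LM_MEDIATED = ["search", "upload_source_pack", "prompt", "verify", "draft"]

@dataclass
class StageLog:
    workflow: str   # "traditional" or "lm_mediated"
    stage: str      # one of the labels above
    minutes: float  # self-reported time spent on the stage

def time_share(logs: list[StageLog], workflow: str) -> dict[str, float]:
    """Return each stage's share of total reported time for one workflow."""
    totals: dict[str, float] = {}
    for log in logs:
        if log.workflow == workflow:
            totals[log.stage] = totals.get(log.stage, 0.0) + log.minutes
    grand_total = sum(totals.values()) or 1.0
    return {stage: minutes / grand_total for stage, minutes in totals.items()}
```

Comparing the two resulting distributions would show whether time actually moves out of close reading and into prompting and verification, which is precisely the reallocation our research question targets.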
Case Background
To understand what NotebookLM is actually changing, it helps to zoom out and look at how research and note-taking have usually worked in U.S. universities. For most of that history, the basic pattern has been the same: you find sources, you read them closely, and you write things down yourself. Earlier generations did that in commonplace books and handwritten notebooks; later, systems like Cornell notes gave students a more structured way to split a page into cues, main notes, and a summary (Pauk & Owens, 2010). Even after typewriters, laptops, and tablets arrived, the expectation didn’t really change. Students and faculty still printed PDFs, highlighted them, copied key quotes into Word or Google Docs, and built their own outlines. Many instructors, drawing on research such as Mueller and Oppenheimer (2014), continue to argue for handwriting or at least active manual note-taking because the slower pace forces students to process and organize ideas as they write.
Over time, tools around that core pattern got faster and smarter. Citation indexes, Google Scholar, and library databases made it much easier to discover articles. Reference managers like Zotero took the pain out of storing citations and formatting reference lists. Cloud tools such as Google Docs and campus learning management systems (LMSs) made it normal to collaborate online, share readings, and comment directly on PDFs. But even with all of that infrastructure in place, the heavy lifting of skimming, annotating, and turning a stack of readings into a coherent argument still fell on individual researchers. The bottleneck had moved away from search and storage and into the middle of the workflow.
NotebookLM steps directly into that middle. Instead of just keeping a folder of PDFs, researchers can upload “source packs” of articles, book chapters, web pages, Google Docs, slide decks, datasets, and even YouTube transcripts. From there, they can ask the system to generate quick summaries, answer questions about particular passages, build timelines, suggest themes, create flashcards, and record audio overviews. In other words, a large chunk of the work that used to happen through slow, manual skimming and reorganizing text can now be done by an AI-powered notebook. This is unfolding on campuses that are already heavily tied into Google Workspace or similar ecosystems, where disability and student-support offices are under pressure to provide fast, consistent notes and alternative formats. That combination of mature cloud tools, accessibility pressures, and new AI capabilities sets the stage for a real change in how research labor is distributed.
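The phrase “source pack” is doing real work here: what the AI can be trusted to say depends entirely on what was uploaded. As a hypothetical illustration, the sketch below shows how a researcher might keep a simple manifest of a notebook’s sources for later audit; NotebookLM does not expose such a record, so the structure and names are our own.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical bookkeeping structure for a "source pack"; NotebookLM does
# not expose this record. It is our own sketch of how a researcher might
# document what a notebook was grounded in.
@dataclass
class Source:
    title: str
    kind: str       # e.g., "pdf", "web", "youtube", "gdoc", "dataset"
    location: str   # URL or file path
    added: date = field(default_factory=date.today)

@dataclass
class SourcePack:
    topic: str
    sources: list[Source] = field(default_factory=list)

    def manifest(self) -> str:
        """Plain-text record kept alongside the notebook, so a reviewer can
        see exactly which materials the AI's answers were grounded in."""
        lines = [f"Source pack: {self.topic}"]
        for s in self.sources:
            lines.append(f"- [{s.kind}] {s.title} ({s.location}), added {s.added}")
        return "\n".join(lines)
```

A manifest like this matters later in the case: when verification becomes the bottleneck, the first question is always which sources the system was actually working from.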
In this sense, NotebookLM is not just another note-taking app, but a tool that actively reassigns parts of the research workload from humans to an AI system. This is where questions of AI and digital labor start to surface. Which tasks move to the tool, which stay with the researcher, what new skills become part of the researcher’s job, and how does the use of AI impact the quality of the insights of the researcher?
The Tipping Point
The tipping point for NotebookLM in U.S. higher education is easiest to see when we look at specific universities that have moved from “interesting new tool” to official endorsement. One clear example is the University of San Diego (USD). In 2025, USD announced that Google’s AI products, specifically Gemini and NotebookLM, would be the institution’s preferred AI toolkit, emphasizing their fit with Google Workspace and their compliance with the Family Educational Rights and Privacy Act (FERPA) and the General Data Protection Regulation (GDPR). In practice, that means a student or faculty member who wants to use AI for research is being nudged toward NotebookLM rather than a random public chatbot, and they are told explicitly that this is the safe, supported option for work involving institutional data.
Similar patterns are showing up at large public universities. The University of Minnesota’s Office of Information Technology (n.d.) now presents NotebookLM as one of the generative AI tools that has been reviewed and approved for campus use, and campus IT units run trainings on how to use it to manage research notes responsibly. Florida State University’s Information Technology Services, as part of broader AI-for-education initiatives with Google, encourages students to use Gemini-powered tools, including NotebookLM, to create summaries, study guides, and practice questions, while faculty are shown how to turn existing course materials into interactive notebooks. Other institutions list NotebookLM alongside their supported software and bring it into professional development sessions, positioning it not as an add-on curiosity but as part of everyday academic work.
Taken together, these moves mark a real shift in the U.S. context. Universities are no longer treating NotebookLM as something students quietly experiment with on the side, but as part of an officially supported AI stack: reviewed for data protection, covered in workshops, and in some cases framed as the primary acceptable AI option for sensitive data. Once that happens, the research pipeline on those campuses can realistically shift from “search → read → annotate → outline → draft” to “search → upload → prompt → verify.” That is the tipping point this case study is interested in: the moment when the middle of the research process is increasingly handled by NotebookLM, and the key questions become how that reallocation of effort affects speed, citation correctness, coverage, and the overall quality of the arguments researchers are able to make.
The timeline to the tipping point includes several points of evolution, but none more powerful than the introduction of generative AI and large language models to the general public, which is discussed further in the next section.
Catalysts & Actors
Many different forces had to come together before NotebookLM could move from an interesting Google experiment to something that is officially encouraged on U.S. campuses. In our view, the real spark for this shift was not NotebookLM itself, but the introduction of large language models to the general public. Before late 2022, AI was already present in search engines, plagiarism detectors, and recommendation systems, but it was mostly invisible. When tools like ChatGPT and Gemini appeared, students and faculty suddenly had a very visible interface that could summarize readings, explain difficult concepts, and draft text on demand. This moment changed what people thought was reasonable to automate and opened the door for something like NotebookLM to be taken seriously as an academic tool.
Once that door was open, different actors on campus started to reshape how research work is done. University leadership and IT governance teams play a major role. Chief information officers, provosts, and data protection officers decide which AI tools are allowed, what counts as “approved,” and how they can be used with institutional data. When they sign enterprise agreements with Google, review data flows, and publish lists of preferred tools that include Gemini and NotebookLM, they are not just making a technical decision. They are actively deciding which parts of the research workload can be handed off to AI systems and under what conditions.
Disability and student-support offices are another important catalyst. These offices are often under intense pressure to provide high-quality notes, transcripts, and alternative formats for a growing number of students, and to do so very quickly. For them, generative AI is not just a shiny new tool; it looks like a way to reduce some of the most repetitive and time-consuming tasks. When they experiment with NotebookLM to produce first-pass notes or summaries, the labor shifts from humans doing all of the capturing and typing to humans curating, checking, and correcting AI-generated drafts. The job moves from “create every note” to “oversee and improve what the AI produces.”
Faculty, librarians, and teaching assistants also sit at the center of this change. On one hand, NotebookLM can save time by turning reading lists into summaries, study guides, timelines, and practice questions. On the other hand, it introduces new kinds of work: designing prompts, setting expectations in the syllabus, teaching students how to verify AI outputs, and grading not just the final product but also the way students documented their use of AI. Students themselves experience a similar shift. They may spend less time manually summarizing every article and more time choosing which sources to include in a notebook, asking good questions, and deciding when to trust or override the system’s answers.
Seen through an AI and digital labor lens, these catalysts and actors are not just “adopting a new tool”; they are redistributing research labor across people and machines (Crawford, 2021). Some tasks move from humans to NotebookLM (first-pass summaries, basic organization), some tasks stay firmly with humans (interpretation, argument-building), and some new tasks appear (prompting, verification, documentation), effectively configuring the user’s research approach (Woolgar, 1990). This case study is interested in exactly that redistribution: who does what work after NotebookLM is introduced, and what that means for the quality and integrity of research on U.S. campuses.
Pedagogical & Practical Consequences
The official adoption of NotebookLM, and its displacement of traditional methods, creates a dramatic shift in the academic experience, yielding both profound pedagogical changes and measurable practical consequences that directly address the core argument of this case study. To fully grasp this transformation, we analyze the outcomes through the lenses of AI and Digital Labor. The shift is defined by a fundamental redistribution of labor across people and machines and the subsequent introduction of new forms of academic responsibility and risk.
The most significant pedagogical consequence is the redistribution of cognitive labor. Students and researchers are shifting from “slow thinking” (the manual, time-intensive labor of close reading, annotating, and synthesizing) to “fast thinking” (strategic prompt design and the curation of AI-generated output). This change necessitates a reevaluation of fundamental research skills: does NotebookLM deskill students in close reading and citation building, or does it upskill them in advanced skills like prompt engineering and high-level synthesis? The emergence of these new skills is now an integral part of academic research pedagogy (Zarilla, 2025). The focus of “active note-taking” is no longer the physical act of handwriting, but the intellectual challenge of critical source selection and prompt design, shifting the pedagogical focus from how to take notes to what to ask the AI and how to verify its results (Reyna, 2025). Furthermore, by enabling shared source packs and collective annotation, the tool enhances collaborative research and accommodates diverse learning preferences, empowering researchers by automating routine tasks and elevating research capabilities rather than replacing the human researcher (Noreika, 2025).
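One classroom-level consequence of “what to ask the AI and how to verify” is that prompts themselves become documentable work. The sketch below is a hypothetical scaffold for that documentation; the field names are illustrative and not drawn from any university’s actual policy.

```python
# Hypothetical scaffold for documenting AI use in coursework; the field names
# are illustrative and not drawn from any university's actual policy.
def build_documented_prompt(task: str, sources: list[str], verification_plan: str) -> str:
    """Compose a prompt that records scope and a verification plan alongside
    the question itself, leaving an auditable trail of the student's AI use."""
    return (
        f"Task: {task}\n"
        f"Sources in scope: {', '.join(sources)}\n"
        "Answer only from the sources listed above, quoting them directly.\n"
        f"[Verification plan, kept with the submission: {verification_plan}]"
    )

# Example: the student commits to checking every quoted passage by hand.
print(build_documented_prompt(
    task="Compare the two readings' claims about note-taking and retention",
    sources=["Mueller & Oppenheimer (2014)", "Pauk & Owens (2010), ch. 10"],
    verification_plan="locate each quoted passage in the original PDF before citing it",
))
```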
Timeline to the Tipping Point

Historical Foundation
- Science Citation Index (SCI): Eugene Garfield publishes the SCI, kick-starting citation-linked discovery. (Science)
- Google Scholar (beta): Google launches Scholar, bringing web-scale discovery to scholarly literature. (Office of Scholarly Communication)
- Zotero (public launch): RRCHNM releases Zotero, automating capture, tagging, and citation formatting. (rrchnm.org)
- Google Docs goes public: Google merges Writely and Spreadsheets, normalizing cloud co-authoring. (WIRED)

GenAI Era
- Project Tailwind (I/O 2023): Google previews an AI-first notebook grounded in the user’s own sources. (TechCrunch)
- NotebookLM name and U.S. Labs rollout: Tailwind is rebranded and opened to a small U.S. cohort. (blog.google)
- Feature build-out: Noteboard, faster source jumping, and other usability upgrades land. (blog.google)
- Global rollout on Gemini 1.5 Pro: Support expands to 200+ countries and territories, adding Slides and web links as sources. (blog.google)
- Audio Overviews: A notebook can be turned into a podcast-style discussion of its sources. (blog.google)
- Three-pane UI and NotebookLM Plus: The Sources/Chat/Studio layout and an organization-friendly tier debut. (blog.google)
- Audio Overviews in 50+ languages: Multilingual expansion of the podcast-style summaries. (blog.google)
- Gemini 2.5 Flash under the hood: A performance and latency boost, plus preparation for standalone mobile apps. (Android Central)

Institutional Adoption
- University of San Diego: USD names Gemini and NotebookLM its preferred AI tools and “the only AI platforms approved for use with institutional data.” (sandiego.edu)
- University of Minnesota (OIT): UMN announces NotebookLM availability campus-wide; OIT posts instructor guidance and training. (it.umn.edu)
- Florida State University: FSU publishes NotebookLM how-to and student guidance in its Canvas/ITS knowledge base. (Office of Digital Learning)
- Internet2 + Google: The NET+ AI Education Leadership Program launches to speed responsible campus integration of Gemini for Education and NotebookLM. (blog.google)

This cognitive shift directly translates into the practical consequences driving university adoption. The primary benefit is the dramatic efficiency gain in the research workflow. Time-to-synthesis is significantly reduced, as the tool streamlines the transition from raw data to meaningful understanding by saving substantial time in the “scour, annotate, and organize” phases. For instance, the Audio Overview feature in NotebookLM has been credited with cutting reading time by 90%, boosting productivity and compressing the traditional annotation phase (Shah, 2025).
However, this efficiency is achieved through the critical trade-off identified in our thesis, specifically concerning citation correctness. While the time freed up allows the researcher to spend more effort on critical revision and counter-argument development, the automated citation extraction process presents a measurable risk. Large language models (LLMs) generate citations probabilistically and may “hallucinate” references despite a formally correct structure (Wu et al., 2025). This unreliability makes human verification labor necessary, a new cost of using AI that ultimately redistributes work from humans performing all organizational tasks to humans curating, prompting, and verifying AI-generated outputs.
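What that verification labor looks like in practice can be sketched concretely. The following is a minimal, hypothetical check, assuming the researcher has extracted plain text from each cited source; it is not a NotebookLM feature, and a failed check only flags a citation for manual review rather than proving an error.

```python
import difflib

# A minimal sketch of the new verification labor, assuming plain text has
# been extracted from each cited source. Not a NotebookLM feature.
def passage_supported(quote: str, source_text: str, threshold: float = 0.85) -> bool:
    """Fuzzy-match an AI-attributed quotation against the cited source's text,
    tolerating minor rewording. Returns True when a close match is found."""
    quote_norm = " ".join(quote.lower().split())
    words = source_text.lower().split()
    window = max(1, len(quote_norm.split()))
    for i in range(max(1, len(words) - window + 1)):
        candidate = " ".join(words[i:i + window])
        if difflib.SequenceMatcher(None, quote_norm, candidate).ratio() >= threshold:
            return True
    return False

# Example: a supported quote passes; a hallucinated one is flagged for review.
source = "The pen is mightier than the keyboard: longhand notes aid processing."
assert passage_supported("pen is mightier than the keyboard", source)
assert not passage_supported("laptops always improve retention", source)
```

Even a crude check like this illustrates the Digital Labor point: the work does not disappear, it moves into tooling, curation, and review.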
Viewing these consequences through the AI and Digital Labor lenses clarifies the new responsibilities inherent in this displacement. The AI lens reveals that the adoption of NotebookLM carries substantial ethical and compliance costs. Institutions like the University of San Diego emphasize adherence to FERPA and GDPR regulations, underscoring that the practical expense extends beyond licensing fees to encompass ongoing governance labor related to data minimization and the protection of sensitive information. Correspondingly, the Digital Labor lens demonstrates that the researcher’s role is not eliminated but transformed. Researchers now perform essential digital labor, specifically prompt engineering, error correction, and source verification, effectively evolving the researcher into a supervisor responsible for overseeing and curating the AI’s work (Noreika, 2025). These findings directly respond to our research question regarding time allocation, citation correctness, and argument quality. This “human-in-the-loop” model preserves the researcher’s cognitive engagement and ultimate responsibility, reframing academic labor as mediated by AI but fundamentally dependent on active human oversight.
Critical Analysis
The Efficiency-Verification Paradox: Reallocated Labor and Hidden Costs
This case study’s findings support the claim that the deployment of NotebookLM delivers on its core promises: radical efficiency and increased access to research synthesis. The tool acts as a digital laborer, absorbing the time-intensive tasks of scouring, organizing, and preliminary annotating, thereby yielding a predicted reduction in time-to-synthesis for complex literature. As demonstrated by Shah (2025), this benefit is significant, with a reported 90% reduction in reading time. Furthermore, the tool streamlines the preparation process for accommodation requests by offering reproducible prep trails. However, this acceleration fundamentally redefines the research timeline, shifting the core bottleneck from data acquisition to verification. This transition exposes the citation correctness liability, a critical unintended consequence and a Human-Computer Interaction (HCI) flaw. While the LLM excels at synthesis and high-level abstract work, it demonstrably struggles with mechanical adherence such as citation formatting and accurate source attribution. The tool thus fails in error recovery, creating a new, hidden cost in human effort: the user is forced to invest additional labor to verify every citation and correct misattributions (Sebők & Kiss, 2025). This tension divides the user’s role, requiring them to leverage the AI for rapid synthesis while simultaneously acting as the rigorous quality-control checkpoint.
The Cognitive Shift: Deskilling and the Evolution of Digital Labor
Examining this new workflow through the Digital Labor lens reveals a significant cognitive shift. On one hand, the automation of annotation and summarization presents the risk of deskilling close reading. When students develop an over-trust in AI-generated summaries, they may engage in reading avoidance, foregoing the manual, slow-thinking cognitive labor traditionally necessary for deep retention and critical analysis. The very cognitive work that students are slowly de-practicing is also the work that builds a scholarly foundation. Conversely, this reallocated labor compels researchers to engage in a new, essential form of digital labor: the oversight and curation of AI output. The researcher’s role is not eliminated but evolves from being the primary “digger” of information to becoming the “director” of the information-seeking process. This new professional skill set, centered on prompt engineering and source verification, is increasingly central to professional development, demanding advanced capabilities in strategic tool utilization and ethical diligence.
Algorithmic Pragmatics and the Narrowing of Inquiry
Beyond the verification burden, the adoption of the LLM-augmented workflow introduces the risk of algorithmic pragmatics and narrowing. This phenomenon describes the tendency for users to unconsciously tailor their research questions to what the AI model can answer most easily and coherently. By favoring prompts that produce clear, concise summaries, researchers become bound by the AI’s strengths. This adaptation subtly narrows the scope of inquiry, prioritizing goal-oriented research over the often-tedious, tangential discoveries characteristic of traditional academic exploration. Jones (2020) elaborates on this through the concept of the Pragmatic Web, wherein algorithms act as active mediators of meaning and interaction by interpreting user behavior and shaping information flow. Rather than being passive tools, these algorithmic agents influence not only what questions users ask but also which avenues of inquiry remain accessible, reconfiguring research pragmatics within algorithmically driven networks. As Joseph (2025) suggests, while AI’s efficiency favors direct answers, this narrowing effect risks limiting the richness and intellectual diversity of exploratory research, potentially foreclosing the valuable, unexpected connections that are often found in manual discovery. The user is therefore configured not only by policy and the tool’s design, but also by the output itself, which influences the intellectual direction of the research.
Systemic Critique: Governance, Equity, and Ethical Sustainability
Finally, the critical lens of AI reveals a clear systemic challenge of equity and long-term sustainability. The introduction of the tool immediately creates a dichotomy of “winners” and “losers.” The “winners” are high-AI-literacy students and university courses and departments that establish clear guardrails and verification requirements, who can maximize efficiency while actively managing risk. Conversely, the “losers” are novices who lack the foundational skills to spot AI errors, and researchers operating in policy-ambiguous contexts where ethical usage is undefined. This disparity highlights a key challenge in Diffusion Theory: the tool’s powerful relative advantage is undermined by its low compatibility with contexts of limited pre-existing AI literacy (Dearing, 2009). This leads directly to a critique of the workflow’s sustainability. The model is inherently fragile if speed alone is rewarded. For example, if students are rewarded solely for timely delivery without rigorous citation checks, the system is critically flawed, risking a collapse of academic integrity and the quality of submitted work. Ultimately, this situation demonstrates the complex dynamic of “configuring the user,” where the tool’s design, institutional policy, and pedagogical incentives co-produce the evolving, and sometimes compromised, behavior of the researcher.
Conclusion
The trajectory of this transformation confirms that NotebookLM’s adoption is not simply replacing traditional note-taking methods; it is fundamentally transforming the research process and the nature of academic labor itself. While traditional practices retain undeniable benefits for developing critical thinking and synthesis, the integrated AI workflow offers compelling efficiency gains, particularly the crucial reduction in time-to-synthesis. However, this unprecedented speed comes with a persistent paradox and a critical trade-off: the inherent unreliability of LLMs in citation correctness. This confirms our thesis that NotebookLM’s adoption reshapes researchers’ time allocation and output quality, and that acceleration risks academic integrity. Since both human and AI processes possess flaws, the success and sustainability of this new academic compact depend entirely on a process designed around rigorous human verification. This necessitates that universities and institutions take immediate, proactive steps to teach and assess verification as a core competency, to develop new digital labor skills such as effective prompt engineering and audit literacy, and to deploy policies that protect equity and privacy, such as embedding FERPA-aligned safeguards. Only by configuring the user not merely as a consumer of AI output, but as an active and essential supervisor of its work, can institutions ensure that the efficiencies of the digital age maintain the quality and authenticity of scholarly research.
References
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Dearing, J. W. (2009). Applying diffusion of innovation theory to intervention development. Research on Social Work Practice, 19(5), 503–518. https://doi.org/10.1177/1049731509335569
Florida State University, Information Technology Services. (n.d.). Google NotebookLM. https://its.fsu.edu/services/desktop-and-mobile-computing/google-notebooklm
Jones, R. H. (2020). The rise of the Pragmatic Web: Implications for rethinking meaning and interaction. In C. Tagg & M. Evans (Eds.), Message and medium: English language practices across old and new media (pp. 17–37). De Gruyter Mouton.
Joseph, J. (2025). The algorithmic self: how AI is reshaping human identity, introspection, and agency. Frontiers in Psychology, 16, 1645795. https://doi.org/10.3389/fpsyg.2025.1645795
Martin, R., & Johnson, S. (2023, July 12). Introducing NotebookLM. Google. https://blog.google/technology/ai/notebooklm-google-ai/
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159–1168. https://doi.org/10.1177/0956797614524581
Noreika, A. (2025). Advancements in AI-powered research tools. SentiSight.ai. https://www.sentisight.ai/advancements-in-ai-powered-research-tools/
Pauk, W., & Owens, R. J. Q. (2010). How to study in college (10th ed.). Wadsworth. (See Chapter 10, “The Cornell System: Take Effective Notes,” pp. 235–277.)
Reyna, J. (2025). The potential of Google NotebookLM for teaching and learning. Paper presented at eLearning 2025, Bangkok, Thailand. https://www.academia.edu/143048105/2025_The_Potential_of_Google_NotebookLM_for_Teaching_and_Learning
Sebők, M., & Kiss, R. (2025). Testing AI-assisted literature reviews with Notebook LM. Prompt Revolution. https://promptrevolution.poltextlab.com/testing-ai-assisted-literature-reviews-with-notebook-lm/
Shah, P. (2025, October 16). My new research workflow uses a secret NotebookLM trick to cut reading time by 90%. XDA-Developers. https://www.xda-developers.com/new-research-workflow-uses-notebooklm-trick-to-cut-reading-time/
University of Minnesota, Office of Information Technology. (n.d.). NotebookLM. IT@UMN. https://it.umn.edu/services-technologies/notebooklm
University of San Diego. (2025, August 13). USD adopts Google AI products for campus use. https://www.sandiego.edu/news/detail.php?_focus=97108
Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(S1), 58–99. https://doi.org/10.1111/j.1467-954X.1990.tb03349.x
Wu, K., Wu, E., Wei, K., Zhang, A., Casasola, A., Nguyen, T., Riantawan, S., Shi, P., Ho, D., & Zou, J. (2025). An automated framework for assessing how well LLMs cite relevant medical references. Nature Communications, 16, 3615. https://doi.org/10.1038/s41467-025-58551-6
Zarilla, P. (2025). NotebookLM: Google’s AI-powered notebook transforming research & note-taking. LinkedIn. https://www.linkedin.com/pulse/notebooklm-googles-ai-powered-notebook-transforming-research-zarilla-n8q4e/
AI Use Statement: AI tools were used only for outlining, finding publicly available sources, and grammar cleanup. All ideas, analysis, and conclusions are solely those of Matthew Sorenson and Nichole Lim.
