University of Phoenix


Generative AI in Doctoral Education

[Image: A group of doctoral students sit in a circle and participate in a discussion]

By Patricia Akojie, Ph.D., with research team members Louise Underdahl, Ph.D., and Marlene Blake, Ph.D.

Context

Since the public release of ChatGPT in late 2022, generative artificial intelligence has quickly entered the landscape of higher education. In doctoral programs in particular, these tools offer promising possibilities for idea generation, drafting, language refinement, and research organization. At the same time, their rapid adoption has raised significant questions about academic integrity, originality, and the preservation of critical thinking. While conversations about managing generative AI in universities have been widespread, systematic mapping of how these tools are being used ethically in doctoral research remains limited.

Method

To address this gap, this study was conducted under the 2025 Center for Educational and Instructional Technology Research (CEITR) Research Lab. It employed a scoping review methodology guided by Arksey and O’Malley’s framework. Scoping reviews are especially appropriate for emerging and complex topics where concepts are still evolving and the literature is dispersed across disciplines. Rather than evaluating the effectiveness of a single intervention, this approach maps the breadth and nature of existing scholarship, identifies patterns, and highlights areas requiring further inquiry.

Following the established framework, the review proceeded through six structured stages: identifying the research questions, systematically searching relevant literature, selecting studies based on predefined inclusion and exclusion criteria, charting and extracting data, synthesizing findings into thematic categories, and consulting with stakeholders. The review focused specifically on academic applications of generative AI within doctoral education, including student use in dissertation research and faculty responses within supervisory contexts. Studies addressing ethical concerns, policy development, instructional implications, and student outcomes were included to ensure comprehensive coverage of the topic.
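The study-selection stage described above can be sketched in code. This is a minimal, hypothetical illustration of screening records against predefined inclusion and exclusion criteria; the criteria names and record fields here are assumptions for demonstration, not the study's actual coding scheme.

```python
# Hypothetical sketch of scoping-review study selection: each record is
# screened against illustrative inclusion criteria. Fields and thresholds
# are assumptions, not the study's actual protocol.

from dataclasses import dataclass


@dataclass
class Record:
    title: str
    year: int
    doctoral_context: bool  # addresses doctoral education specifically
    peer_reviewed: bool


def include(record: Record) -> bool:
    """Illustrative inclusion criteria: published in or after 2022
    (post-ChatGPT release), peer reviewed, and focused on doctoral
    education."""
    return (
        record.year >= 2022
        and record.peer_reviewed
        and record.doctoral_context
    )


records = [
    Record("GenAI in dissertation supervision", 2024, True, True),
    Record("Chatbots in K-12 classrooms", 2023, False, True),
]

# Records passing screening move on to the charting and synthesis stages.
included = [r for r in records if include(r)]
```

In a real review, the charting stage would then extract structured data (themes, methods, outcomes) from each included record, feeding the thematic synthesis described above.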

Findings

The analysis identifies four interrelated areas shaping the integration of generative AI in doctoral education. First, generative AI can enhance methodological processes by assisting with idea organization, language refinement, and engagement with scholarly literature, yet it must remain a cognitive aid rather than a substitute for independent reasoning to preserve analytical rigor and critical thinking. Second, these tools offer advantages within doctoral supervision by streamlining routine tasks such as preliminary editing and summarization, potentially freeing supervisors or doctoral committee members to focus more intentionally on higher-order mentoring, theoretical development, and methodological depth, provided expectations are clearly defined. Third, ethical concerns remain central, with recurring apprehension about plagiarism, fairness in assessment, dependency, and the broader implications for scholarly integrity, underscoring the need for transparency and shared accountability in authorship. Finally, the literature consistently calls for structured policy development and targeted training to establish clear guidelines for responsible use while building AI literacy among faculty and doctoral students to ensure innovation aligns with academic standards.

Key Takeaway

Collectively, the findings suggest that generative AI is neither inherently beneficial nor inherently harmful to doctoral scholarship. Its impact depends largely on how institutions, faculty, and students frame and regulate its use. Educators carry a dual responsibility: to safeguard the integrity of doctoral research while also preparing graduates to navigate and ethically leverage advanced technologies. Meeting this responsibility requires intentional curriculum design, ongoing professional development, and sustained attention to evolving regulatory and ethical expectations.

Contribution

The scoping review contributes to the growing body of scholarship examining the intersection of artificial intelligence and doctoral education. By mapping current research and identifying thematic patterns, the study offers a structured understanding of how generative AI is shaping dissertation practice. Importantly, it also highlights the need for continued inquiry, particularly in the areas of ethical standardization and policy clarity. As doctoral programs continue to adapt to rapidly evolving technologies, thoughtful integration of generative AI will remain a pressing concern. The conversation is not about whether these tools will be used, but how they can be incorporated in ways that preserve rigor, promote integrity, and strengthen scholarly development.

Dr. Patricia Akojie

ABOUT THE AUTHOR

Patricia Akojie, PhD, M.Ed., M.Sc., earned her M.Sc. in Instructional Design and an Educational Technology Certificate from Western Kentucky University and her PhD in Educational Policy Studies and Evaluation from the University of Kentucky. She served as a high school social studies teacher and administrator for 24 years before transitioning to higher education. From 2005 to 2017, she was Director of the Graduate Education Program at Brescia University, and she has been affiliated with the University of Phoenix since 2004. Dr. Akojie currently serves as a Doctoral Program Manager in the College of Doctoral Studies and is an active contributor to the University of Phoenix Research Hub, including her role as a Research Scholar with the Center for Educational and Instructional Technology Research (CEITR). She was featured as a Faculty Spotlight in May 2020 and received the Faculty Sperling Award in 2025. She can be reached at pakojie@email.phoenix.edu.