Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
I didn’t set out to write an essay about academic identity, generative AI, and publishing politics. But, as with so many qualitative journeys, the story found me first. What started as a playful experiment with image generation soon became a critical turning point in how I understand knowledge, creativity, and resistance within the academy. Seeing Myself in Pixels, my latest autoethnographic essay, tells that story.
Let me share a little of what happened and why I think it matters. I used GenAI, specifically image-generating tools like DALL·E and ChatGPT, to create two visual representations of my academic identity: one as an educator, the other as a researcher. These weren’t just illustrations for a paper. They were evidence. They helped me see myself more clearly, ask deeper questions, and articulate values that had previously remained unspoken.
And then… they were deleted.
Despite surviving peer review and editorial approval, the images were removed by the publisher at the final stage. The reason? A blanket policy against GenAI-generated figures, even though the book itself was about GenAI in higher education.
The irony wasn’t lost on me.
In qualitative research, we often talk about “making the invisible visible”. That’s what these images did. They surfaced metaphors, values, tensions, and identities that weren’t easily captured in prose. Working with GenAI wasn’t smooth or straightforward. It forced me to confront stereotypes (“university professor” almost always defaulted to an older white man), question aesthetic choices, and reckon with my own reactions. The image of myself as a hyper-glamorous figure in a fitted and revealing dress? That stung. But instead of rejecting it outright, I used the discomfort as compost for deeper reflection. Why did this feel wrong? What gendered expectations were being surfaced?
In this way, the process became iterative and affective. Prompt. Reflect. Revise. Repeat. I called this my “evidence tree” method: germinating ideas, branching into new directions, tending to nuance, and harvesting insight.
But when those co-created images were removed from the final chapter, it wasn’t just the visuals that were lost. It was a particular way of knowing that got silenced.
So, I made a choice. I published the images in an open-access repository, cited them in the chapter, and wrote Seeing Myself in Pixels to document the erasure.
Call it a quiet act of academic defiance.
This experience laid bare a fundamental tension in academia: What kinds of knowledge are considered legitimate? And who gets to decide? We talk a lot about innovation in research, but often still cling to conventional forms. Text is trusted. Emotion is suspect. Visuals are decoration, not argument. And anything co-created with GenAI? Treated as a risk, not a resource.
But this isn’t just about policy. It’s about power. The gatekeeping logic that removed my images echoes broader dynamics in scholarly communication where multimodal, affective, or experimental work is often sidelined. The publishing system rewards neatness over nuance, prose over presence, and familiarity over innovation. By documenting this erasure, I wanted to make visible the institutional mechanisms that quietly shape what counts as knowledge.
This wasn’t just a personal journey; it reshaped how I teach, too. In my autoethnography unit, I now invite students to create GenAI-generated images as part of their own identity explorations in the classroom. Together, we ask: How does visual co-creation help us see ourselves and our stories differently? What gets revealed when we look beyond text?
One of my proudest classroom moments involved unveiling a GenAI-generated “Academic Avenger” action figure of myself, complete with accessories. Yes, it was playful. Yes, the figure wasn’t perfect. But it sparked meaningful discussion about academic labour, visibility, and imagination. Creativity became rigour. GenAI became an interlocutor. Learning came alive.
Throughout this process, I didn’t just use GenAI as a tool; I actively collaborated with it. I’ve trained a customised version of ChatGPT called Artzi Dax (named after my favourite Star Trek 🖖🏼 character, with a nod to artistic flair), who has now become an intellectual partner of sorts. It (or should I say she?) didn’t write the essay for me. Artzi Dax helped me think more clearly, revise more creatively, and reflect more deeply. In fact, it was Artzi Dax who helped me transform my original four research steps into the growth metaphor, which ultimately shaped the entire essay.
So can a machine be a co-researcher? I believe so, if we approach it reflexively, ethically, and with imagination. Seeing Myself in Pixels is not just an essay. It’s a call to action. We need to rethink what counts as scholarly labour. We need to make room for affect, imagination, and multimodality. And we need to resist the quiet silencing that happens when institutional norms override epistemic possibility.
If academia is to remain intellectually vibrant and humane, we must create space for new forms of knowledge creation: forms that not only tell, but show; that not only argue, but resonate. As one of my students once said:
✨ “Don’t let the Muggles get you down.” ✨
Questions to ponder
How do institutional publishing norms shape what “counts” as valid research?
In what ways can GenAI serve as a co-creator rather than a mere tool?
What hidden forms of knowledge might we be ignoring in academic work?
How can educators integrate creative co-creation methods into research training?
Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
Hi everyone! I’m excited to share a new podcast episode that I think you’ll enjoy. Last week, I had the privilege of presenting my latest research on generative AI to the Australian Association for Research in Education. We explored some of the big questions shaping higher education right now, particularly how we can foster AI literacy in ways that are compassionate and equitable, and what that type of learning might look like in an AI-driven world. If you’re curious about the discussion, you can listen to the full talk below. The papers I mentioned are freely available, so if something sparks your interest, you can explore the research by clicking on the links. I’d love to hear your thoughts! 🙂
Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
Redi Pudyanti is an educator and researcher pursuing her PhD on the influence of local wisdom on graduate employability. Her other research interests are Indigenisation, decolonisation, and generative AI.
Acknowledgement: This blog post extends our presentation at the Higher Education Research and Development Society of Australasia (HERDSA) Conference 2025. We acknowledge the other co-authors of our paper, as it was a truly collaborative project: Huy-Hoang Huynh, Ziqi Li, Abdul Qawi Noori, and Zhiheng Zhou. As the South African proverb says: “If you want to run fast, run alone; if you want to run far, run together”.
In the halls of academia, where prestige often correlates with fluency in a particular kind of English communication, having a voice can feel like a privilege, not a right. For multilingual scholars, this can create a disconnect between who they are and what academia expects of them. These scholars have rich and diverse intellectual contributions, but these are often filtered, flattened, or forgotten by the English-language customs of academia. This isn’t just about grammar or vocabulary. It’s about whose knowledge counts, whose voice is deemed legitimate, and how power circulates in scholarly spaces.
Academic writing is often seen as a neutral skill: something that anyone can learn with enough practice, feedback, and hard work. In reality, though, this idea of neutrality is misleading. Beneath the surface, academic writing carries a host of hidden expectations: about how to structure an argument, what kind of tone sounds “professional”, which sources are seen as credible, and even what types of ideas are considered valuable or “good”. These expectations aren’t universal: they’re shaped by English-speaking academic traditions and Western ways of thinking.
For multilingual scholars, especially those coming from different cultural and educational backgrounds, this can feel like stepping into a performance where the rules haven’t been explained. They’re expected not just to write clearly, but to sound a certain way: to mirror the phrasing, logic, and stylistic choices of native English speakers who have been immersed in these Western ways of thinking from an early age. It’s a bit like being asked to join a play mid-scene, in a language that’s not your own, with the added pressure of sounding polished and convincing. The result is often a quiet and persistent pressure to conform: to smooth out cultural expression, to set aside familiar ways of knowing, and to rewrite one’s voice to match what academia deems “legitimate”. In this context, writing isn’t just about communicating ideas, it becomes a test of belonging.
This experience can be deeply isolating. When your ideas are dismissed because they don’t fit a particular format, or when you constantly feel like your writing is being judged through the lens of language proficiency rather than substance, it can leave you feeling invisible. For many multilingual scholars, it’s not just a matter of learning the rules; it’s the emotional weight of having to silence parts of who you are just to be taken seriously. Over time, this can lead to a sense of marginalisation, where your contributions feel undervalued, and your cultural perspective feels out of place. It’s certainly not that these scholars lack ideas or insight; it’s that the academic system often fails to make room for how those ideas are expressed.
We have found that these challenges can make academic life feel like a constant uphill battle, especially when the very structures meant to support learning and innovation exclude our ways of thinking and being. Yet, rather than remain silent or adapt unquestioningly, we have been actively seeking new ways to engage with academia in ways that honour both our cultural identities and scholarly ambitions. This is where our latest research began: with a shared desire to not only survive academia, but to reshape it. Through community, reflection, and the careful integration of generative AI, we began to imagine what a more just and inclusive academic future could look like.
Writing together, thinking together: a decolonising vision for academic writing
Our new paper offers a timely vision of the future: one where academic spaces are reimagined as inclusive, relational, and linguistically diverse, and where generative AI is embraced not as a threat to academic integrity and rigour, but as a partner in knowledge creation. To develop this vision of academia, we combined the Southern African philosophy of Ubuntu (a philosophy that says, “I am because we are”) with collaborative autoethnography and the strategic use of generative AI, reframing it as a relational tool for epistemic justice. As noted in another blog post, epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. In our vision of a more just and inclusive academic future, multilingual scholars will feel empowered to contribute fully, confidently, and in ways that honour their linguistic and cultural identities within global scholarly conversations.
One of the most important parts of our study was how we chose to think about and use generative AI. Ubuntu reminded us that we’re shaped by our relationships with others, and that knowledge and growth are shared, not owned by any one person. In many academic settings, writing is treated as something you do alone. Seeing academic writing through the Ubuntu philosophy, however, we saw knowledge creation and dissemination as something academia should do together. In our group, we gave each other feedback not to criticise, but to support and learn from one another. In this spirit, generative AI became more than just a helper. It became a kind of thinking partner that joined us in our conversations, helping us express our ideas more clearly while still keeping our voices true to who we are. Our generative AI use empowered us while also honouring our identities as multilingual speakers engaging with global academia.
At its heart, our work is about challenging the status quo in academia: we aim to decolonise how knowledge can be created and shared in academia. As shown in our figure below, we started with Ubuntu, a philosophy that puts relationships, community, and shared responsibility at the centre. From there, we used a method called collaborative autoethnography, which allowed us to tell our personal stories, learn from each other in a supportive, reflective way, and explore the cultural complexities present within academia. Then we brought in generative AI, not to make our writing faster, but to help us express our ideas more clearly, question academic norms, and speak up in ways that felt true to ourselves. These three elements aren’t separate steps. Like threads in a tapestry, they are woven together to create a new way of doing research. Together, they helped us imagine a more inclusive kind of academic voice: one that’s ethical, shared, and shaped by many perspectives, not just one. The dots in the diagram show how these ideas flow between people, values, and technology, all working together to build a better future for academic work.
Stories of reclamation and agency
One of the most vivid examples from the study involves the translation of a Chinese idiom which, when processed through a conventional tool, was reduced to a flat literalism: “seeing flowers in the mist, looking at the moon in the water”. While technically accurate, the translation missed the metaphorical essence of the idiom. When Ziqi, one of the authors, posed the same phrase to ChatGPT, the response captured both the poetic beauty and interpretive depth she needed by offering: “The situation is shrouded in mystery, constantly shifting, and challenging to grasp”. In that moment, the idiom didn’t just survive translation, it transcended it. For Ziqi, this wasn’t merely a linguistic success; it was a profound moment of affirmation. Her cultural ways of knowing embedded in the metaphor’s symbolism and rhythm didn’t have to be abandoned or diluted to be legible in academic English. They could be translated with meaning, not despite it.
For others in the group, generative AI proved equally transformative in different contexts. It supported the generation of constructive feedback, assisted in structuring complex presentations, and offered clarity around dense theoretical frameworks. In Redi’s case of balancing the demands of doctoral research with motherhood, generative AI became an unexpected ally in maintaining wellbeing. Whether generating weekly schedules, planning meals, or brainstorming research questions, it helped her lighten the cognitive load, carving out space for reflection, family, and rest.
Importantly, though, our paper isn’t a love letter to generative AI. We are acutely aware of the ethical tensions. Generative AI tools are shaped by the biases of their training data, data which are often steeped in colonial logics, linguistic hierarchies, and Western-centric perspectives. The risk of overreliance or uncritical adoption is real. Yet, what shines through in our reflections is not techno-optimism, but intentionality. We didn’t blindly accept what generative AI offered. Instead, we engaged with it critically, revising, interrogating, and adapting output to ensure the content preserved cultural nuance and scholarly integrity.
Implications for academia
What we did wasn’t about quietly fitting in or changing ourselves to match the usual expectations of academic writing. It was something more powerful: an act of academic reclamation. We used generative AI thoughtfully and with care, not to erase our voices, but to amplify them. Our voices are shaped by different cultures, languages, and ways of thinking, and these types of voices don’t always fit neatly into the typical mould of English-speaking academia. By working with generative AI, not just relying on it, we found ways to express our ideas more clearly without losing who we are. We’re not just trying to keep up, we’re helping to change what academic writing can be. We’re showing that it’s possible to honour cultural and linguistic diversity in research, and that there’s real value in broadening what counts as a “legitimate” academic voice. In doing so, we’re not just joining the conversation, we’re reshaping it. Prompt by prompt, paragraph by paragraph, we’re building an academic world that listens to more voices, tells more stories, and reflects more diverse ways of knowing.
Our study calls on educators, institutions, publishers, and policy-makers to rethink what counts as “good writing” and whose voices are heard in academic discourses. It invites everyone to question the academic orthodoxy that frames multilingual ways of thinking as flawed or generative AI use as inherently dishonest. It shows that when used ethically and reflexively, generative AI can level the playing field, not by simplifying scholars’ ideas, but by enabling them to be expressed more fully. By integrating Ubuntu, collaborative autoethnography, and generative AI, we empowered each other and contributed to decolonising the academy by advancing non-traditional voices. Our research presents a compelling vision for a more inclusive academy: one where multilingualism is celebrated, not hidden, and where academic voice is something to be reclaimed, not earned through conformity. As Lynette notes in the paper:
I have also had many discussions with colleagues in other countries who seem to believe that the use of generative AI has led to the loss of academic rigour or critical thinking in students’ work. They either lament that they cannot clearly detect AI written work with tools such as Turnitin, or claim that whenever they see the words “delve” or “tapestry” they know that it is written by AI and should therefore be considered as cheating. […] I see this viewpoint as a form of academic orthodoxy, where written academic work is considered “rigorous” only when it has been written as it has always been. […] I wonder whether the same debates were circulating in academia when the typewriter was invented and those who used to write academic missives by hand thought that the typewriter would be a danger to academic rigour?
You can listen to the HERDSA2025 presentation below.
By the way, Lynette also shared two other studies at this year’s HERDSA conference. Follow the links below to explore those studies in more detail.
Whose standards define “good” academic writing? How do linguistic norms in academia privilege certain voices while marginalising others? In what ways might generative AI disrupt or reinforce these norms, and what responsibilities do scholars have in shaping its use?
Can technology be decolonial? Given that most generative AI tools are trained on predominantly Western data sources, is it possible for them to support decolonial knowledge practices? What conditions would need to be met for generative AI to serve as a truly inclusive and relational academic partner?
What does ethical generative AI use look like? Reflecting on the Ubuntu-inspired approach described in the post, how can scholars use generative AI tools ethically, without losing their cultural specificity or scholarly voice? How might institutions better support this kind of critical and agentive generative AI engagement?
Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
We are seeking expressions of interest for our new book provisionally titled Positionality & Reflexivity in Research (Editors: Sun Yee Yip and Lynette Pretorius from Monash University).
Whose research is it? Who owns it? Whose interests does it serve? Who benefits from it? Who has designed its questions and framed its scope? Who will carry it out? Who will write it up? How will its results be disseminated?
Research across various knowledge traditions has challenged the notions of neutrality and objectivity, increasingly recognising that the framing of a research problem is inextricably linked to who is granted the power to participate in knowledge creation within the institutional spaces of the academy and who has access to that knowledge.
To address the presence and impact of knowledge makers on the forms that knowledge takes, social science research has introduced “position” and often “positionality statements” as genres in which researchers typically consider certain social identities, including but not limited to race, class, gender, sexual orientation, and (dis)ability. A researcher’s positionality can influence all aspects of the research process, including study design, research questions, data collection, and analysis, and understanding one’s positionality can shape the outcomes and trustworthiness of the results (Yip, 2023). While positionality statements have traditionally been a common feature of qualitative research, some researchers have recently also argued for their relevance in quantitative research (Jamieson et al., 2022).
Reflexivity, the process of critically examining one’s assumptions, biases, and perspectives and how they might impact the research process, is considered a fundamental element in addressing a researcher’s positionality. It challenges researchers to critically analyse their positionality—their role, assumptions, and influence on the research process—and to reflect on how their engagement shapes their understanding of the issue under investigation, their research design, findings, the theories they develop, and the communication of results (Addyman, 2025; Smith, 2021).
Yet, despite the growing recognition of the importance of positionality and reflexivity, there remains surprisingly little evidence in resulting publications of researchers explicitly addressing their lived experiences in the field and how they practise reflexivity. This lack of transparency obscures the iterative and adaptive role that reflexivity plays in shaping research practices, insights, and contributions to theory development. By conceptualising their positionality and embracing reflexivity more effectively, researchers can examine their impact on the research process, reveal their work’s relational and emotional dynamics, and contribute in academically rigorous and practically relevant ways.
Given the increasing demand for researchers to disclose their positions in relation to the research they conduct and articulate their reflexivity practices, we invite chapters that offer profound and critical insights into personal experiences of examining positionalities and engaging in reflexivity within your research. This may pertain to your PhD projects or beyond.
We suggest that the chapter address (but not be limited to) the following guiding questions:
What is your research about?
What motivated you to embark on this research?
What is your position or standpoint in relation to your research?
How does your position impact different aspects of your research (e.g., research design, methodology, findings/results, theorisation)?
How did you practise reflexivity? What strategies did you adopt or not adopt? Why did you utilise these strategies? In what ways were they helpful or unhelpful? What were the challenges? How did you address or overcome these challenges?
What did you learn in the process?
How has this shaped your future practice?
Please express your interest by submitting an abstract/chapter proposal of no more than 500 words by clicking on the button below. You can also refer to this list, which we regularly update to reflect relevant published work on this topic.
The AI Literacy Lab is an interdisciplinary collaboration of educators and researchers investigating the implications of generative AI for teaching, learning, and scholarly inquiry.
Generative AI is reshaping the world, one image, paragraph, and data point at a time. Whether you’re a curious newcomer, an educator trying to keep up with the latest trends, or a student dipping your toes into artificial intelligence, you’re in the right place. Let’s unpack what generative AI actually is, why it matters, and how you can begin your learning journey with a few hand-picked videos.
What is generative AI?
Generative AI refers to a class of artificial intelligence that can create new content like text, images, music, code, and more. Think of it as a creative partner trained on vast amounts of data. These systems learn patterns, styles, and structures, and then use that knowledge to generate novel outputs that often feel surprisingly human-like.
Why should you care?
Generative AI isn’t just a tech trend, it’s a shift in how we produce knowledge, express creativity, and interact with machines. For educators, it’s reshaping pedagogy. For students, it’s changing how assignments are written and evaluated. For researchers, it’s opening up new methods of inquiry, simulation, and communication.
How do you use generative AI?
Prompt design is crucial when using generative AI because the quality of your prompt directly shapes the relevance, clarity, and creativity of the AI’s response. Well-crafted prompts help you guide generative AI more effectively, turning it into a powerful tool for learning, research, and problem-solving.
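To make this concrete, here is a minimal sketch in Python (using the official openai client library) that contrasts a vague prompt with a more carefully designed one. The model name, prompt wording, and helper function are illustrative assumptions rather than recommendations from this post; the same principle applies when typing prompts into any chat interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice; substitute whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt gives the model little to work with, so the reply tends to be generic.
vague_reply = ask("Tell me about AI literacy.")

# A designed prompt specifies role, audience, scope, and output format.
designed_reply = ask(
    "You are an experienced higher education researcher. In three bullet points, "
    "explain what AI literacy means for first-year doctoral students, and suggest "
    "one classroom activity for each point."
)

print(vague_reply)
print(designed_reply)
```

Comparing the two replies is a quick way to see how much of the “quality” of a generative AI response is really the quality of the prompt.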
Developing AI literacy
Understanding the basics isn’t just about staying relevant; it’s about becoming literate in a rapidly evolving digital world. AI literacy is now a key component of digital citizenship, academic integrity, and lifelong learning.
The AI Literacy Lab is an interdisciplinary collaboration of educators and researchers investigating the implications of generative AI for teaching, learning, and scholarly inquiry.
Are you exploring how generative AI is transforming the research landscape? Have you developed innovative approaches, ethical insights, or practical applications regarding AI in research? If so, we invite you to contribute a chapter to our forthcoming open access book: Generative AI-Enhanced Research: Ethical, Practical, and Transformative Approaches.
This edited collection will serve as a go-to resource for researchers, academics, educators, and students interested in harnessing generative AI tools across the research lifecycle. Our aim is to showcase a diverse range of perspectives, theoretical frameworks, and methodological innovations that illuminate the evolving role of AI in academic work.
We welcome contributions in the form of conceptual papers, empirical studies, reflective case narratives, and practical guides. Key areas of interest include (but are not limited to):
Ethical challenges and considerations in generative AI-enhanced research
Generative AI in research design and literature review
Generative AI in data collection and analysis
Generative AI in writing, publishing, and dissemination
Generative AI and research training, critical thinking, and future trends
Interested? Learn more and submit your abstract here.
Abstracts are due by 30 June 2025!
Update: Abstract deadline extended to 18 July 2025 – get your abstracts in quick if you haven’t yet!
The AI Literacy Lab is an interdisciplinary collaboration of educators and researchers investigating the implications of generative AI for teaching, learning, and scholarly inquiry.
You are warmly invited to participate in the International Conference on AI for Higher Education (AI4HE). Facilitated by the Human-AI Collaborative Knowledgebase for Education and Research (HACKER) and the AI Literacy Lab, the conference provides an opportunity to share knowledge of AI in Higher Education, network with peers and participate in practical workshops.
The conference will be on 26 and 27 November 2025 and will run electronically through Zoom. The conference is FREE 🙂
Presentations can take various formats and should focus on the use of generative AI in higher education settings. Some questions you can use to prompt your thinking are:
What constitutes AI literacy for researchers today?
How can we effectively embed AI literacy into research training and higher education curricula?
What new methodological possibilities or tensions arise when generative AI is integrated into the research process?
How do we ethically use generative AI in research without compromising scholarly integrity, originality, trustworthiness, and rigour?
Who gets to decide what constitutes ‘authorship’ or ‘contribution’ when generative AI tools are involved in the production of knowledge?
How does the use of generative AI in research reshape our understanding of the researcher’s role, voice, and epistemic authority?
What does it mean to ‘position oneself’ in relation to a generative AI tool? Is it a collaborator, instrument, co-author, or something else entirely?
Abstracts are due by the 20th of June. To submit an abstract or register to attend, click on the button below. See you there!
Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
Chris Pretorius is a doctoral candidate specialising in spiritual health and practice, with an interest in the intersections between technology and theology.
The rise of generative AI has sparked new conversations about its role in academic research. While generative AI tools like ChatGPT have proven effective for summarisation, pattern recognition, and text classification, their potential in deep, interpretive qualitative data analysis remains underexplored. In our recent study, we examine the integration of ChatGPT as an active collaborator in qualitative data analysis. Our findings highlight ChatGPT’s ability to streamline initial coding, enhance reflexivity and higher-order thinking, and support knowledge co-construction while emphasising the necessity of human oversight.
Our study marks an exciting step forward in the integration of generative AI into qualitative inquiry. By approaching generative AI as a partner rather than a passive tool, we believe researchers will be able to harness its potential while preserving the richness and depth that define qualitative research.
As illustrated in another blog post, qualitative data analysis is often a laborious process, requiring meticulous coding, interpretation, and reflection. Traditional computer-assisted qualitative data analysis software, such as NVivo and MAXQDA, has long been used to help streamline aspects of qualitative data analysis. However, generative AI, and specifically ChatGPT, introduces an additional layer of adaptability, offering real-time feedback and dynamic analytical capabilities. This made us wonder how effective it would be in the qualitative data analysis process.
In our paper, we explore how ChatGPT can function beyond a simple data processing tool by actively participating in the interpretive process. Rather than merely classifying text, we found that ChatGPT could highlight implicit themes, suggest theoretical frameworks, and prompt deeper reflections on the data from both the researcher and participant. However, ChatGPT’s capacity is highly contingent on the researcher’s ability to craft well-designed prompts.
One of the key takeaways from the study is the significance of effective prompt design. We note that ChatGPT’s responses were only as good as the prompts it received. Initially, we found that ChatGPT’s responses lacked depth or were fixated on single aspects of a topic while neglecting others. By refining our prompts, explicitly defining key concepts, and structuring questions carefully, we were able to guide ChatGPT toward more nuanced and insightful analyses.
We developed a series of 31 prompts to explore our dataset (see the prompts here). This iterative prompting process not only improved ChatGPT’s analytical output but also helped the researcher clarify her own theoretical perspectives. Our study consequently frames this prompt design process as a reflexive exercise, demonstrating how the act of crafting prompts can refine a researcher’s conceptual thinking and analytical approach.
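For readers who work with generative AI programmatically, here is a minimal sketch of what this kind of iterative, multi-turn prompting can look like in Python with the official openai client. The prompts, interview excerpt, and model name below are illustrative placeholders only; they are not the 31 prompts used in the study.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

excerpt = "…paste an interview excerpt from your own dataset here…"

# Each later prompt builds on the previous answer, mirroring the refine-and-re-ask cycle.
prompts = [
    f"Identify the main themes in this excerpt and quote the text that supports each one:\n{excerpt}",
    "For each theme, note any implicit meanings that the participant does not state directly.",
    "Using Bourdieu's notion of 'illusio' (investment in the socially constructed values of a "
    "field), indicate whether this excerpt shows evidence of illusio, and explain your reasoning.",
]

messages = [
    {"role": "system", "content": "You are assisting with reflexive qualitative data analysis."}
]

for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)  # hypothetical model choice
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the context for the next turn
    print(answer, "\n---")
```

Because the full message history is re-sent each turn, later prompts can refine or challenge earlier interpretations, which is where the reflexive value of the exercise lies: each revision of a prompt is also a revision of the researcher’s own thinking.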
An unexpected yet valuable outcome of using ChatGPT in the research process was its ability to stimulate the researcher’s higher-order thinking. By engaging with the ChatGPT-generated interpretations, the researcher was prompted to critically assess underlying assumptions, refine theoretical lenses, and explore alternative perspectives she might not have initially considered. This process encouraged deeper engagement, pushing the researcher to interrogate her own biases and methodological choices. As a result, the interaction with ChatGPT became an intellectual exercise in itself, allowing the researcher to refine and expand her analytical thinking in ways that traditional methods may not have facilitated as effectively.
One of the most striking findings from our study was ChatGPT’s ability to uncover implicit meanings within qualitative data. For example, when asked about concepts like “illusio” (investment in the socially constructed values within a field), ChatGPT was able to infer instances of this concept even when it was not explicitly mentioned in the data. However, we also found that the ChatGPT-generated interpretations sometimes diverged from participants’ own perspectives. This emphasises the critical role of human oversight. Generative AI lacks self-awareness (at least at the moment!), meaning that its responses must be carefully evaluated. Generative AI can be a powerful tool for organising and prompting analysis, but it is the researcher’s interpretive lens that ultimately determines the depth and rigour of qualitative inquiry.
One of the most innovative aspects of our study is its participatory approach, in which both the researcher and the participant engaged with ChatGPT’s analyses. Instead of using generative AI as a behind-the-scenes tool, the study involved participants in critically appraising ChatGPT’s findings, thereby decentralising the researcher’s authority over data interpretation. This triadic model (researcher, participant, and ChatGPT) fostered greater participant agency in the research process. By giving participants the opportunity to review and respond to ChatGPT-generated interpretations, we ensured that the generative AI-assisted analyses did not overwrite or misrepresent participants’ lived experiences. This approach not only enhanced the ethical integrity of the generative AI-assisted research but also enriched the depth and authenticity of the findings.
Questions to ponder
What are the potential benefits and risks of using AI tools like ChatGPT in qualitative research?
How can researchers ensure that ChatGPT-assisted analyses remain ethically sound and participant-driven?
Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
The advent of generative artificial intelligence (GenAI) has opened up transformative possibilities in academic research. Tools like ChatGPT, Gemini, and Claude hold the potential to help with idea and content development, structure and research design, literature review and synthesis, data management and analysis, as well as proofreading and editing. However, as enticing as these advancements are, they bring ethical challenges that require careful navigation. To bridge this gap between potential and responsibility, my colleagues and I developed the ETHICAL framework for GenAI use, which has just been published open access!
The ETHICAL framework offers a structured approach, with each letter in the acronym representing a principle that users should embed into their practices. The framework has been summarised in this handy picture.
The ETHICAL Framework for Responsible Generative AI Use, republished from here under a CC-BY license.
Examine policies and guidelines
Researchers must consult international, national, and institutional GenAI policies. This involves not only aligning with global GenAI ethics recommendations but also understanding the specifics of local guidelines. Adhering to these ensures compliance and fosters trust. As an example, my institution has an entire policy suite relating to responsible GenAI use in both teaching and research.
Think about the social impacts
GenAI can reinforce biases and perpetuate inequalities. Researchers should critically evaluate the societal consequences of using GenAI, considering both environmental sustainability and digital equity.
Harness understanding of the technology
A robust understanding of how GenAI tools operate (beyond their surface-level functionalities) is essential. Researchers must grasp the limitations and ethical implications of the technologies they use and should promote AI literacy within their academic communities. I have written other blog posts about what AI literacy is and how you can build your AI literacy. This handy video explains the components of AI literacy.
Indicate use transparently
Transparency is key to maintaining academic integrity. Researchers should explicitly disclose where and how GenAI tools were used, documenting their role in the research process. This fosters accountability and mitigates risks related to copyright and authorship disputes. This video provides a simple guide to formatting GenAI acknowledgements.
Critically engage with outputs
GenAI outputs are not infallible and require rigorous validation. Researchers bear the ultimate responsibility for ensuring that GenAI-generated content aligns with disciplinary standards and is free from inaccuracies or ethical breaches.
Access secure versions
Security and privacy are paramount when using GenAI. Free versions of tools may not offer adequate protections for sensitive data, underscoring the need for secure, institutional subscriptions or private deployments of GenAI models.
Look at user agreements
Many GenAI tools have complex user agreements, which can have significant implications for data ownership and privacy. Researchers should carefully review these terms to ensure ethical compliance and to safeguard their intellectual property.
The ETHICAL framework encourages universities to incorporate AI literacy into their curricula, ensuring that both students and faculty are prepared to navigate the ethical complexities of GenAI-enhanced research. The ETHICAL framework is also not just a set of guidelines, it’s a call to action. For educators, researchers, and institutions alike, the message is clear: the future of GenAI in higher education depends on our collective ability to navigate its challenges responsibly. The ETHICAL framework provides a compass for doing just that, fostering a research culture that is as ethical as it is forward-thinking.
Questions to ponder
How can universities integrate AI literacy into their existing curricula effectively?
What steps can researchers take to ensure equitable access to GenAI tools across diverse socio-economic contexts?
How should publishers and peer-review committees adapt to the growing use of GenAI in manuscript preparation?
Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.
Dr Sweta Vijaykumar Patel is a lecturer, researcher, and mentor specialising in early childhood education, creative methodologies, teacher education and culturally responsive pedagogy.
As qualitative researchers, we’ve often used pseudonyms in our work to protect the identities of participants. It’s a standard practice and one that’s meant to safeguard confidentiality while ensuring their stories remain authentic. But recently, we conducted a study that made us pause and rethink how we approach pseudonyms. It highlighted the power of inviting participants to choose their own pseudonyms and how that simple act can transform the research process.
In our study, 40 doctoral students shared their experiences of academia, and part of that was choosing pseudonyms for themselves and their institution. They were also asked to explain the reasons why they chose those names. Reading through their choices, we were struck by how much thought and emotion they poured into these names. For some, the pseudonym chosen was deeply personal. One participant, for instance, chose “Chess” to reflect their strategic navigation through life as an autistic, trans individual. Another participant selected “Kurdi,” proudly emphasising their Kurdish heritage and lifelong pursuit of knowledge. These names were more than identifiers; they were declarations of identity, resilience, and aspiration. Some picked hopeful names like “The University of Dreams” for their institutions, reflecting admiration or ambition. But not all pseudonyms were positive. One participant, for example, referred to their institution as “The University of Business,” critiquing the commodification of education. Another layered nuance onto their name, highlighting disillusionment with systemic issues they encountered. These choices offered us a window into their experiences, highlighting both their struggles and triumphs.
Letting participants name themselves wasn’t just a small methodological tweak; it was a purposeful act of empowerment. By giving participants the opportunity to take control of their own representation, we were able to disrupt the traditional power dynamics that so often define research. It wasn’t just about collecting data; it was about fostering trust, collaboration, and authenticity.
Of course, there are challenges with this approach. Participants might feel pressure to choose names that conform to researchers’ expectations or worry about how their pseudonyms will be interpreted. It also takes time and effort to create a supportive environment where participants feel comfortable making these decisions. But the benefits (including greater trust, richer data, and more ethical representation) far outweigh the hurdles.
Conducting this study has changed how we think about our own research practices. It’s a reminder that the small details, even something as simple as giving a participant the chance to name themselves, can carry huge implications. When participants take control of their representation, it deepens the authenticity of their stories and strengthens the research process.
We also see this as a challenge to examine the systems within which we work. We’ve shown that names aren’t just labels; they’re an opportunity for participants to reclaim their stories, critique their environments, and express their identities on their own terms. As noted in another blog post, epistemic justice is about fairness: it ensures that everyone’s voice and knowledge are equally respected, no matter where they come from or how they express themselves. As researchers, we’re in positions of power, and it’s easy to perpetuate epistemic injustice without even realising it. But when we hand the reins to participants, we’re making a deliberate choice to amplify their voices and honour their expertise.
For us, this study is also a reminder to slow down, reflect, and listen. Research isn’t just about collecting data; it’s about honouring the people behind the stories. And sometimes, it starts with something as simple and as profound as the researcher asking, “What’s in a name?”
Questions to ponder
How can you create a space where research participants feel truly empowered to represent themselves?
What does it mean to approach research as a collaboration, rather than a process of data extraction?
How can naming practices become tools for resistance and critique in your own work?