Accurately assessing students’ use of generative AI acknowledgements in assignments

Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.


Lecturers play a pivotal role in shaping the learning of their students. In a metric-focused university environment, this role necessitates assessing students’ learning throughout their educational journey. Assessing assignments not only gauges students’ understanding of the subject matter but also evaluates the development of critical academic skills. These skills, such as research, analysis, and effective communication, are integral components of a well-rounded higher education.

Assessing transferable skills

The skills assessed must align with what is taught within the unit. When students perceive a direct connection between what is taught and what is assessed, their engagement and comprehension are heightened. Consequently, if we are going to assess students not only on their content knowledge but also on their transferable skills, we need to provide them with the tools to succeed.

I believe that transferable skills enhance the applicability of students’ disciplinary knowledge. For years, I have worked to develop academic skills resources which are now embedded across the units within our Faculty. These resources include a suite of just-in-time online videos freely available on YouTube, as well as two written booklets (Doing Assignments and Writing Theses) that explicitly teach academic communication skills.

Over the years, I have also worked to improve the assignment rubrics within our Faculty to more accurately assess the skills that are taught within individual units. For example, I have worked with another staff member to develop templates for staff to provide feedback on academic language and literacy. We designed these templates to allow assessors to label specific mistakes for students and to provide students with referrals to appropriate support. Giving students specific labels for their errors helps them to see where they can improve. The referrals to appropriate resources and support help students improve their skills, encouraging self-directed learning.

It is important to note that we usually recommend that these skills account for no more than 10% of the total grade for the assignment. This is because the main focus of the assessment should be the content – students should be able to clearly demonstrate an understanding and critical evaluation of the topic of the assignment. However, the students’ use of academic language and academic literacy can enhance the quality of their disciplinary content, or it can hinder the meaning of their ideas. As such, our templates allow for 5% to be attached to academic language (specifically, the elements listed in blue here) and 5% to academic literacy (the elements listed in purple here).

Assessing AI literacy

In the era of rapid technological advancement, the rise of generative artificial intelligence (AI) introduces a new dimension to education. As students are increasingly exposed to AI tools, it becomes imperative for educators to teach them how to use these tools effectively. As I have highlighted in another blog post, I firmly believe that it is our role as educators to teach students how to collaborate effectively with AI and evaluate the results obtained, a concept termed AI literacy. I see AI literacy as an essential transferable skill.

A key component of using AI ethically is acknowledging it effectively in written work. It is important to highlight, though, that if we are going to require students to demonstrate AI literacy, including the accurate acknowledgement of the use of AI tools, we need to teach it in our units and also assess it accurately. In my units, I teach students that an acknowledgement should include the name of the AI used, a description of how it was used (including the prompt used where appropriate), and an explanation of how the information was then adapted in the final version of the document. I also provide students with the example below so that they can see how an acknowledgement is used in practice.

I acknowledge that I used ChatGPT (OpenAI, https://chat.openai.com/) in this assignment to improve my written expression quality and generate summaries of the six articles I used in the annotated bibliography section. The summary prompt provided to the AI was “Write a 350 word abstract for this article. Include a summary of the topic of the article, the methodology used, the key findings, and the overall conclusion”. I adapted the summaries it produced to reflect my argument, style, and voice. I also adapted the summaries to better link with my topic under investigation. When I wanted the AI to help me improve my writing clarity, I pasted my written text and asked it to rewrite my work “in less words”, “in a more academic style”, or “using shorter sentences”. I also asked it to explain why it made the changes it did so that I could use this collaborative discussion as a learning process to improve my academic communication skills. I take responsibility for the final version of the text in my assignment.

Clear guidelines within rubrics should also be established to evaluate the ethical and responsible use of AI, reinforcing the importance of acknowledging the role of these tools in academic work. Given my previous work developing rubric templates for staff, I have recently developed a template for the acknowledgement of AI use within assignments. In my template, this criterion falls within the “academic literacy” section of the rubric I mentioned earlier. I have included the rubric criteria below so that other educators can use it as needed. The grading scale is the one used in my university, but it can be easily adapted to other grading scales.

  • High Distinction (80-100%): There was an excellent explanation about how generative AI software was used. This included, where appropriate, explicit details about the software used, the prompts provided to the AI, and explanations as to how the output of the generative AI was adapted for use within the assignment.  
  • Distinction (70-79%): There was a clear explanation about how generative AI software was used. This included, where appropriate, sufficient detail about the software used, the prompts provided to the AI, and explanations as to how the output of the generative AI was adapted for use within the assignment. 
  • Credit (60-69%): There was a reasonably clear explanation about how generative AI software was used. The explanation lacked sufficient details regarding one of the following: the software used, the prompts provided to the AI, and/or explanations as to how the output of the generative AI was adapted for use within the assignment.
  • Pass (50-59%): There was some explanation about how generative AI software was used. The explanation lacked several of the following: the software used, the prompts provided to the AI and/or explanations as to how the output of the generative AI was adapted for use within the assignment.
  • Fail (Below 50%): There was little or no explanation about how generative AI software was used.

Questions to ponder

The blog post outlines a rubric for assessing the acknowledgement and use of generative AI in student assignments. Considering the varying levels of detail and adaptation of AI-generated content required for different grades, what are your thoughts on the fairness and effectiveness of this approach?

How might this rubric evolve as generative AI technology becomes more advanced and commonplace in educational environments?

Autoethnography: What is it and how do you do it?

Dr Lynette Pretorius


Autoethnography has become an increasingly popular research methodology, particularly within the humanities and social sciences. I use it regularly because its emphasis on personal experiences, reflexivity, and storytelling allows for a deeper exploration of complex experiences and societies. So what is autoethnography? The name comes from three core aspects: self (auto), culture (ethno), and writing (graphy). So, literally, autoethnography is an approach to research and writing that seeks to describe and systematically analyse personal experience to better understand cultural experiences.

As I’ve noted in a recent book chapter, there are several reasons why I find autoethnography a particularly compelling research methodology.

  • First, autoethnography allows researchers to purposely explore personal experiences to understand a particular culture or society. Researching personal experiences is becoming increasingly important as individuals’ stories are recognised as important sources of knowledge. Personal experiences can provide unique insights into social, cultural, and historical contexts and highlight the complexities of human experience.
  • Second, autoethnography considers insider knowledge as a valuable source of data. Researchers are the participants in their own studies and the stories which are told often explore transformative experiences for the researcher, frequently taking the form of epiphanies that significantly influenced the author’s worldview. I believe that this allows researchers to provide more meaningful insights into complex phenomena compared with more traditional objective research methods.
  • Third, autoethnography empowers researchers as it allows them to embrace emotionality and uncertainty and highlight topics that may be considered hidden or taboo. Autoethnography allows researchers to connect with their own emotions and experiences and, in doing so, find their voice. It allows them to challenge the narratives that often dominate research and to tell their own stories in their own words.
  • And finally, autoethnography is a more accessible type of research for those outside of academia because it is written from personal experience in easy-to-understand language. The autoethnographer also does not merely narrate an experience for their audience. Instead, they try to engage the audience in the conversation so that the audience can understand experiences which may be different from their own. By sharing their own experiences, they can create a space for others to share theirs, fostering a more equitable and inclusive research process.

It is important to note that autoethnography does have some challenges. Some researchers critique it as a methodology because it is not scientific enough, while others say it is not artistic enough. I believe, however, that these critiques fail to see the value of combining both science and art when exploring complex phenomena. By combining the two, autoethnographers can advocate for social change to address perceived societal wrongs.

So how do you actually do autoethnography in your research project? It is important to remember that there is no one way to do autoethnography. What is most important is to develop systematic data collection and analysis methods that help you deeply explore your personal experience.

First, it is important to have a series of reflective prompts to help you explore your experiences. I use a simple prompt strategy, which gives very open initial prompts to allow me to delve into my personal experiences, analyse my emotions and thoughts during that period, reflect on how I feel now, and determine how my previous experiences have impacted my current philosophy or practice.

  • Describing the experience
    • What happened?
    • What did I do?
  • Analysing the experience
    • What was I thinking and feeling?
    • How do I feel now?
    • What went well?
    • What could I have done better?
  • Creating a step-by-step plan
    • How will this information be useful in the future?
    • How can I modify my practice in the future?
    • What help do I need?

Second, you need a way to record your reflections. I like to start my reflection journey by voice or video recording a conversation I have with myself, thinking about my past experiences. I start by thinking about what happened, what I did, what I was thinking and feeling at the time, and how I feel now. Then, I explore how I think the experience has informed my way of being now. How has it shaped my future practice? Why? After finishing the recording, I transcribe it and use this transcription as my initial data. I have also recently used discussions and images created with generative AI as part of my autoethnography data generation process. If you want to see how that is done, you can watch this video or read the paper.

Third, you can also consult relevant artefacts as part of your autoethnography, such as photos and documents from the past to help you think and reflect more deeply about an experience. You can also consult other important figures, such as family or friends from your past, to help you see the experience from multiple viewpoints. A good example of an autoethnography that used artefacts as additional data sources can be found here. It is important to note that you will require ethics approval for your study if you use photos with other people in them or if the significant people you consult are possibly identifiable in your final project.

Fourth, you use the writing process as part of your reflection process. Through the writing process, you further reflect on what you were thinking and feeling during the experiences you are describing. These reflections can remind you of other experiences that shaped your understanding of that experience. This continuous writing and re-writing of your story becomes a further data source that allows you to engage more deeply with your experiences. Remember to lean into your story’s more emotive and vulnerable parts, as this will allow you to uncover hidden perspectives in your understanding more effectively. Ask yourself, why did this experience make me feel this way? What does it tell me about the context in which I found myself?

Finally, as you write about your experiences, you should incorporate your theoretical analysis. Start looking for key concepts you have identified in your reflections and how they link to your overarching research problem. Which theoretical concepts do they reflect? What can others learn from your experience?

In conclusion, good-quality autoethnography explores personal experiences to illuminate a particular cultural context.  Autoethnography is not merely telling your story. It is analysing your story to uncover previously ignored perspectives within a particular research context.

One final thing to note is that autoethnography can be done by one researcher or by a group of researchers. When done together, this type of autoethnography is called collaborative autoethnography. Collaborative autoethnography is particularly pertinent when examining complex social phenomena, such as marginalisation and the pursuit of social justice, as it facilitates the inclusion of multiple perspectives and voices. In this way, the individual voices of the researchers work together to illuminate common themes or experiences.

Questions to ponder

Autoethnography emphasises the value of personal experiences in understanding cultural contexts. Reflect on an experience from your life that could offer unique insights into a particular cultural or societal aspect. How could analysing this personal experience using autoethnography enhance our understanding of broader cultural phenomena?

What are your thoughts on balancing the scientific rigour and artistic expression in autoethnography? Can you think of any specific situations or contexts where this methodology might be particularly beneficial or problematic?

Learning how to evaluate the reliability of online sources

Dr Lynette Pretorius


This post is based on an article I recently published.

It is commonly thought that contemporary students are digital natives who are naturally able to use sophisticated digital literacy in their daily practices because they have been immersed in the digital age their entire lives. Research, though, shows that the concept of being a digital native is a myth. For example, studies have shown that students born in the digital age use technology frequently, but that this often requires only basic technology knowledge (e.g., how to type a search into an internet browser or how to send and receive emails or instant messages).

It is clear from the research that students require significant support to learn how to use specific technologies for learning. Students entering university are not necessarily familiar with the skills needed to access information at a university level. For example, many have never had to search for or read academic journal articles before. It is, therefore, incumbent upon us to teach students how to find this type of information on the internet and how to assess the reliability of the information they obtain.

There is clear evidence that, while students are able to use technology (e.g., search engines) to find information, they give little attention to evaluating the quality of that information. As educators, we need to help students learn how to effectively evaluate information for relevance, accuracy, and authority so that they can enter the online information landscape and resolve conflicts between online media and scholarly content.

I explicitly teach students how to evaluate the reliability of sources during my orientation workshops each semester. This is done in a two-hour workshop focused on how to read academic sources effectively. A key component of this workshop is an online interactive tutorial which I developed several years ago. I have recently made the tutorial freely available for other educators to use in their classrooms.

The tutorial incorporates case-based learning and self-discovery to encourage learning through experience. After completing each case, the students are provided with an expert evaluation of the reliability of the source. There are five cases, as outlined below:

  • Blog Post
    • Students are presented with a blog post discussing the science of salt lamps and how they can be used to treat asthma. Students are asked to decide whether the source is reliable or unreliable for use in their assignment. Students are also asked to provide a reason for their evaluation. After submitting their answers for each question, students are provided with a video explaining how to evaluate the reliability of sources.
  • Wikipedia
    • Students are presented with a Wikipedia entry for the Opium War. Students are asked whether they think Wikipedia is an appropriate first step in research. They are given three options from which to choose:
      • Yes, you should research a topic on Wikipedia first, as it gives you a broad understanding of the ideas important to the topic.
      • Sometimes, as you can gain some useful information and Wikipedia can provide links to other resources such as journal articles, books, and academic websites.
      • No, as Wikipedia can be edited by anyone, the reliability of the information is suspect.
  • History Website
    • Students are presented with a history website discussing the Opium War. Students are asked to select items they think make the source reliable from the following list: the author is a historian, the author has written several articles on the website, the article uses historical dates and Chinese names, the author lived and worked in Asia, and the article is easy to understand. Students are also asked to select items they think make the source unreliable from the following list: there are no references, the article does not indicate to which institution the author is affiliated, the website sounds unreliable, and the links to further information redirect to other pages on the same website. Students are then asked to provide an overall evaluation of the source’s reliability.
  • Newspaper Article
    • Students are presented with a newspaper article discussing a new medical treatment for heart disease. Students are asked whether this source can be used in an assignment by choosing one of the following options:
      • Yes. This article clearly describes a new pharmaceutical treatment for heart disease, quotes a respected professor in the field, and highlights the key research findings.
      • Sometimes. These types of articles can be useful as they provide information in easy-to-understand language and can provide links to the original research.
      • No. You should never use these types of articles in an academic assignment.
  • Journal Article
    • Students are presented with a journal article presenting qualitative data from an educational research paper about self-discovery learning at university. Students are asked to select items they think make the source reliable from the following list: the article is published in an international education journal, the authors work at an academic institution and have qualifications in the field, the article describes original research, the authors use data to support their claims, and the article uses technical terms. Students are also asked to provide an overall evaluation of the source’s reliability and to provide a reason for their evaluation.

In my research paper, I evaluated my teaching strategy and found that this approach can effectively teach students how to discern the reliability of sources. It helps students deepen their personal understanding of what makes sources reliable or not. By analysing the responses students provided to the blog post, I discovered that students had not previously considered that evaluating the reliability of a source would be an important consideration for writing assignments. I also found that students’ evaluations of sources were dependent on their personal opinions about the topic, rather than on any verifiable evidence provided in the source. Then, as they moved through the tutorial, students started to discover which aspects were most important in establishing the credibility and reliability of research. By the time they reached the final source, they were much more cautious when assessing the research, often asking for further details about the source.

Through my research, I was able to demonstrate that the students in my study had changed their way of looking at online information. They had crossed a threshold in understanding which permanently transformed their way of thinking. This demonstrates the value of explicit instruction through self-discovery learning as a pedagogical tool for teachers.

Questions to ponder

How do you personally evaluate the credibility of information you find online? What specific criteria or strategies do you use, and how do these align with or differ from the methods outlined in the tutorial described in the study?

How has the skill of evaluating digital sources impacted your academic work or research? Can you recall a situation where discerning the reliability of a source significantly influenced the outcome of your project or research? How did this experience shape your approach to digital literacy?

Building a sense of belonging for students who do not live on campus

Dr Lynette Pretorius


Students who do not live on campus and commute to university (often termed commuter students) can experience a sense of detachment from the university community, which can adversely affect their student experience. Juggling travel, studies, and other commitments means that these students can feel like they are visitors to their own campus. In a recent paper, my colleagues and I describe and evaluate the non-residential colleges (NRC) program at Monash University, an initiative designed to specifically foster a greater sense of connection for commuter students.

The NRC program creates a space where commuter students can experience similar support programs and campus activities as those who live in the residences on campus. Students are assigned a college mentor (a student who has already studied at the university for a while). These mentors are each responsible for providing mentoring and pastoral support for a small group of students. They also organise social events for their mentees and larger events for the whole college. Each college also has a college head and deputy head, who are members of staff with an interest in student engagement and belonging. There are also administrative staff who oversee the program to ensure an equitable experience for all students. In this way, NRC provides extra-curricular support for commuter students, aiming to emulate the community feel of traditional residential colleges, thereby building students’ sense of belonging.

It is important to note that “sense of belonging” is not just a feel-good term. Research consistently demonstrates that a sense of belonging plays a critical role in the academic and personal development of students. Some of the benefits of feeling connected to your place of study include:

  1. Academic success: Numerous studies have shown a strong connection between a sense of belonging and academic achievement. When students feel like they are a part of their university community, they are more likely to be motivated, engaged, and committed to their studies.
  2. Mental health and wellbeing: The transition to university life can be challenging, often marked by a sense of isolation and disconnection. Feeling connected to the university community can provide emotional support, reduce stress and anxiety, and improve mental health.
  3. Retention rates: When students feel valued and connected, they are less likely to drop out and more likely to complete their degrees.
  4. Personal development: University is a time for personal growth and development. A sense of belonging can facilitate this by providing a safe environment where students can explore their identities, build confidence, and develop interpersonal skills.

We wanted to evaluate the effectiveness of the NRC program, so we surveyed students who were part of the NRC program and students who were not, focusing on their sense of belonging, campus engagement, and overall student experience. We found that NRC students had a more positive university experience compared to non-NRC students. There were four key insights from the study:

  1. The NRC program was effective in enhancing students’ sense of belonging to the university community. This was achieved through increased interaction with peers and staff, along with more frequent campus attendance.
  2. Participants in the NRC program reported a more positive university experience compared to non-NRC students. This was reflected in their choice of words describing their experience, with a higher selection of positive terms like “friendly”, “community”, “comfortable”, and “supportive”.
  3. The study showed that NRC students were more likely to remain on campus after classes and interact more with their peers and teaching staff, indicating an increased engagement in both social and academic aspects of university life.
  4. Interestingly, NRC students were also more likely to have contemplated ways to enhance their employability, suggesting a broader impact of the program beyond just academic and social engagement. This was despite the NRC program not focusing on employability. We think this benefit comes from discussions students have with their mentors, who may be considering employability as they are further along in their course of study.

As universities continue to evolve and adapt to the diverse needs of their student populations, initiatives like the NRC program can play a pivotal role in shaping a more inclusive and supportive educational environment. A strong sense of belonging is linked to the creation of an inclusive environment that respects and values diversity. It is important to ensure that all students, regardless of their background, feel welcomed and accepted. This is particularly important in university settings, where students from various identities, cultures, and backgrounds come together. The NRC program’s success in fostering community, engagement, and a sense of belonging is a compelling argument for the adoption of similar initiatives in tertiary institutions worldwide.

Importantly, this study underscores the importance of acknowledging that the goal of a university education is not just academic achievement. As educators, we should support the holistic development of our students by encouraging them to engage with initiatives such as the NRC program. In this way, we can help them seek out and engage with opportunities for a more fulfilling university experience.

Questions to ponder

  1. In your opinion, how important is building a sense of community within a university? Can online platforms and social media complement initiatives like the NRC program?
  2. What role can technology play in enhancing the sense of belonging and community for commuter students?

Fostering AI literacy as students, teachers, and researchers

Dr Lynette Pretorius


Credit: This blog post is an adapted form of a recent paper I wrote.

Artificial intelligence (AI) has been present in society for several years – think, for example, of computer grammar-checking software, autocorrect on your phone, or GPS apps. Recently, however, there has been a significant advancement in AI research with the development of generative AI technologies like ChatGPT. Generative AI refers to technologies which can perform tasks that require creativity by using computer-based networks to create new content based on what they have previously learnt.

For example, generative AI technologies now exist which can write poetry or paint a picture. Indeed, I entered the title of one of my published books (Research and Teaching in a Pandemic World) into a generative AI which paints pictures (Dream by WOMBO). The image it generated accurately represented the book’s content and was eye-catching; I believe it would have been a very suitable picture for the book’s cover. Check it out:

(Note: This response was generated by Dream by WOMBO (WOMBO Studios, Inc., https://dream.ai/) on December 12, 2021 by entering the prompt “research and teaching in a pandemic world” into the generator and selecting a preferred style of artwork.)

The introduction of generative AI has, however, led to a certain amount of panic among educators; many workshops, discussions, policy debates, and curriculum redesign sessions have been run, particularly in the higher education context. Educators acknowledge that generative AI can also be leveraged to support student learning. In fact, it is clear that students will likely be expected to know how to use this technology when they enter the workforce. Importantly, though, there has also been significant concern that generative AI would encourage students to cheat. For example, many educators fear that students could enter their essay topic into a generative AI and that it would generate an original piece of work for them which would meet the task requirements to pass.

I believe what is missing from these discussions regarding generative AI is the fact that assessment regimes focus predominantly on the product of learning. This focus assumes that the final assignment is indicative of all the student’s learning but neglects the importance of the learning process. This is where generative AI can be a valuable tool. From this perspective, the technology should be considered as an aide, with the intellectual work of the user lying in the choice of an appropriate prompt, the assessment of the suitability of the output, and subsequent modification of that prompt if the output does not seem suitable. Some examples of the use of generative AIs as an aide include helping students develop an outline or brainstorm ideas for an assignment, providing feedback to students on their work, guiding students in learning how to improve the communication of their ideas, and acting as an after-hours tutor or a way for English-language learners to improve their written skills. Using generative AI in this more educative manner can help students better engage with the process of their learning.

In a similar way to when Microsoft Word first introduced a spell-checker, I believe generative AI will become part of our everyday interactions in a more digitally connected and inclusive world. Importantly, though, as mentioned above, while generative AI may help the user create something, it is dependent on the user providing it with appropriate prompts to be effective. The user is also responsible for evaluating the accuracy or usefulness of what is generated. As such, we need to teach students how to communicate effectively and collaboratively with generative AI technologies, as well as evaluate the trustworthiness of the results obtained – a concept termed AI literacy. I believe AI literacy is likely to soon become a key graduate attribute for all students as we move into a more digital world which integrates human and non-human actions to perform complex tasks.

It appears that my university has come to the same conclusion. Monash University’s generative AI policy notes that students and researchers at Monash University are allowed to use generative AI, provided that appropriate acknowledgement is made in the text to indicate what role the generative AI played in creating the final product. The University has also created a whole range of resources which are freely accessible to students and the wider public to help them learn how to use generative AI ethically. I have recently developed a video (Using generative artificial intelligence in your assignments and research) that explains what generative AI is and what it can be used for in assignments and research.

In my teaching practice, I now advise students to use generative AI as a tool to help them improve their approaches to their assignments. I suggest, in particular, that generative AI can be used as a tool to start brainstorming and planning for their assignment or research project. I include examples of how generative AI can be used for various purposes in my classes. For example, I highlight that generative AI may be able to assist a researcher in generating some starting research questions, but it is the researcher’s responsibility to refine these questions to reflect their particular research focus, theoretical lens, and so on. I emphasise to students that generative AI will not do all the work for them; they need to understand that they are still responsible for deciding what to do with the information, linking the ideas together, and showing deeper creativity and problem-solving in the final version of their work.

I have recently showcased this approach in two videos which are freely available on YouTube. The first video (Using generative artificial intelligence in your assignments and research) explains what generative AI is and what it can be used for in assignments and research. The second video (Using generative AI to develop your research questions) showcases a worked example of how I collaborated with a generative AI to formulate research questions for a PhD project. These videos can be reused by other educators as needed.

The second video starts by showing students how I have used ChatGPT to brainstorm a starting point for a research project by asking it to “Act as a researcher” and list the key concerns of doctoral training programmes. In this way, I show the students the importance of prompt design in the way they collaborate with the generative AI. In the video, I show that ChatGPT provided me with a list of seven core concerns and note that, using my expertise in the field, I have evaluated these concerns and can confirm that they are representative of the thinking in the discipline. In the rest of the video, I showcase how I can continue my conversation with the generative AI by asking it to formulate a research question that investigates the identified core concerns. I show students how I collaborated with the generative AI to refine the research question until, in the end, a good-quality question was developed which incorporated the specificity and theoretical positioning necessary for a PhD-level research question.

It is important to note that students are likely not yet experts in their field when they are designing their research questions. Therefore, it is important to provide them with guidance as to how to evaluate the ideas produced by generative AI. This includes highlighting that a generative AI is not always accurate, that it may disregard some information which may be pertinent to a specific research project, or that it may fabricate information. Students need to learn that a generative AI is not a tool similar to an encyclopedia which contains all the correct information. Rather, generative AI is a tool which responds to prompts by generating answers it “thinks” would be appropriate in that particular context. Consequently, I advise students to use generative AI as a starting point, but that they should then explore the literature to further assess the accuracy of the core concerns identified earlier as well as the viability of the research question for their project.

It is also worth noting that generative AI could be used as a way to help students see what a good research question might look like, rather than using it specifically to develop a research question for their particular research project. Generative AI may also be useful in helping students see how to organise the themes in the literature. In this way, we encourage students to use generative AI as part of the learning process, allowing them to scaffold their skills so that they can use their creativity and other higher-order thinking skills to further advance knowledge in their discipline.

Students should also be taught how to appropriately acknowledge the use of generative AI in their work. Monash University has provided template statements for students to use. I use these template statements as part of my regular workshops. In this way, I show students that ethical practice is to acknowledge which parts of the work the generative AI did and which parts of the work were done by a person.

I have also recently used such an acknowledgement in one of my research papers. I have included it below for other researchers to use in their work.

I acknowledge that I used ChatGPT (OpenAI, https://chat.openai.com/) to generate an initial draft outline of the introduction of this manuscript. The prompt provided for this outline was “Act as a social science researcher and write an outline for a paper advocating for change to survey design to collect more diverse participant information”. I adapted the outline it produced for the introduction to reflect my own argument, style, and voice. This section was also significantly adapted through the peer review process. As such, the final version of the manuscript does not include any unmodified content generated by ChatGPT.

As with all new technologies, there are potential challenges and risks that should be considered. Firstly, generative AI technologies can generate results which seem correct but are factually inaccurate or entirely made up. Secondly, there is the issue of equity of access. It is incumbent upon us as educators to ensure that all students have equal access to the technologies they may be required to use in the classroom. Thirdly, there is the risk that the generative AI may learn and reproduce biases present in society. Finally, for researchers, there are also ethical concerns relating to the retention and possible generation of potentially sensitive data.

Generative AI is, at its core, a natural evolution of the technology we already use in our daily practices. In an ever-increasingly digital world, generative AI will become integral to how we function as a society. It is, therefore, incumbent upon us as educators to teach our students how to use the technology effectively, develop AI literacy, and use their higher-order thinking and creativity to further refine the responses they obtain. I believe that this form of explicit modelling is how we, as educators, can help students develop an understanding of generative AI as a tool to improve their work. In this way, we focus on the process of learning, rather than being so focused on the ultimate product for assessment.

Questions to ponder

How do you think AI literacy can be integrated into current educational curricula to enhance learning while ensuring academic integrity? What are the potential challenges and benefits of incorporating generative AI into classroom settings?

How should students and researchers navigate the ethical implications of using AI-generated content in their assignments and research?

Improving students’ understanding by building a culture of academic integrity

Dr Lynette Pretorius


Credit: This blog post is an adapted form of a case study I wrote for Advance HE.

Universities have been cracking down on cheating and all sorts of dishonest academic behaviour recently. They’ve rolled out a bunch of strict rules related to academic integrity and use fancy software to keep an eye out for academic misconduct. In this space, there’s this idea floating around that you should either focus your attention entirely on fighting cheating or you should only be championing academic honesty (Dawson, 2021). However, I believe that this is a false dichotomy. It’s not just about telling students what not to do, even though this is of course important; it’s also about getting them involved in the process, making them understand and own up to their responsibilities. It’s teaching them the ropes of being academically honest through real experiences. In this way, we create a culture of academic integrity (Cutri et al., 2021). This means encouraging students to think about their own academic integrity practices, talking about academic integrity openly, and using mistakes as teachable moments, especially when it comes to plagiarism.

Encouraging students to think about their own academic integrity practices

I’m a big believer in the power of self-reflection because I know that reflecting on your own experiences and beliefs can really open your eyes, spark growth, and sharpen your skills (Cahusac de Caux et al., 2017). That’s why I always make sure my students get the chance to think about their own academic integrity practices. For example, I recently completed a project with some PhD students where we dived into the research on academic integrity and they got to reflect on why they approached academic integrity in certain ways. It was eye-opening for them to see how their academic identities shaped their approaches to academic integrity. One student, for example, mentioned how coming from a country where textbooks were almost worshipped, they found it difficult to critically analyse other studies. They weren’t used to pointing out flaws or gaps in research, which led them to rely a lot on direct quotes. This project showed us that sometimes it’s a lack of confidence that drives how students write. We ended up developing a model of academic integrity at the doctoral level, which highlighted how feeling like an impostor can lead to plagiarism and other dishonest academic practices. We published our findings in an open-access paper in 2021 and you can access it by clicking on the button below.

Talking about academic integrity openly

Over the last decade, I’ve been developing different ways to help students get better at playing by the academic rules, including workshops, online videos, and something I call the Practice Turnitin Assignment. Every semester, I run a workshop named “Referencing and Academic Integrity”. It’s open to all the students in my faculty, and it’s all about understanding what counts as plagiarism, how to make sure work is original, and what is considered the right way to reference sources in our faculty. All the notes for this workshop are provided beforehand and are also publicly available through our Doing Assignments Booklet. If you would like to use these notes, you can download them by clicking on the button below.

I’ve been teaming up with my colleagues to improve how we describe assignments, design our marking guides, and give feedback. We’ve been making a real point of showing how crucial it is to back up arguments with solid evidence. It’s all about emphasising the importance of being honest in your work. This includes explicit marking rubric criteria linked to the use of references to support work as well as clear criteria associated with formatting the references correctly. This is because these two things are separate academic skills – one focuses on being able to support your arguments while the other emphasises being able to follow a template.

I also decided to develop a bunch of snappy videos on YouTube. I’ve played around with different styles for these videos, and students can pick and choose what they watch and in what order. It’s great for giving students the info they need, right when they need it. You can learn more about how I designed these videos here. Turns out, my YouTube channel’s a bit of a hit – it had racked up over a million views the last time I checked! Over 8,000 people have also subscribed so that they can be notified when I create new videos. The best bit? I’ve made all my videos freely available, so any educator out there can use them in their classes. Check out my channel by clicking on the button below.

Finally, I developed a resource called the “Practice Turnitin Assignment” which is available to all students in my faculty. My university uses Turnitin to spot any copied work, but I figured why not use it as a teaching tool as well? I set up a special Turnitin assignment where students can submit their work, but no staff checks it and it doesn’t get stored in Turnitin’s database. This means students can use the Practice Turnitin Assignment to test their summarising and paraphrasing skills, see where they might be going wrong, and fix their work before they submit it. In this way, I am encouraging students to check the academic integrity of their work as part of their assignment writing process.

Using mistakes as teachable moments

Let’s be real, though. Even with all my hard work, my colleagues and I will still spot cases of plagiarism every semester. Most of the time, it’s not because students are trying to purposely cheat. More often than not they just don’t understand how to apply the rules (e.g., they just don’t know how to reference properly). Sure, this does necessitate penalties, like failing the assignment, but I also see this as a chance for a teachable moment.

That’s why I’ve set up a process where, when a student slips up and it’s clear they didn’t mean to, their lecturer can send them my way. I sit down with them and we go over how they can improve their academic integrity practices in the future. After our discussion, I give them a special hurdle task that is all about taking a piece of text and rewriting it in their own words, in just one paragraph. This way, they get to apply the skills we talked about, demonstrating that they’re ready to apply improved academic integrity skills in their future assignments. If this sounds like something you would like to use yourself, you can download the task below.

Questions to ponder

Have you ever had a moment of realisation about your own academic integrity practices? How did this awareness influence your approach to academic work, and what steps did you take to enhance your understanding and application of academic integrity principles?

In your opinion, what is the right balance between using technology to prevent cheating and educating students about academic integrity?

Benefits of doctoral writing groups

Dr Lynette Pretorius


For many years now, I have been working to improve the experiences of PhD students. One practice I’ve found particularly useful is incorporating collaborative and peer-based learning through doctoral writing groups. My work with writing groups started way back in 2013 and, over more than a decade, I have further refined my approach. I currently facilitate four such groups on a fortnightly basis. Writing groups embody some of the most important aspects of learning: working together to co-construct personal knowledge through experience, constantly reflecting on one’s own understanding to improve professional practice, and building rich experiences that inspire learning and foster an environment of empowerment.

My approach to teaching in these groups is unique: doctoral writing groups are not common and, even in settings where they are available, they are usually run in a very different manner. My doctoral writing groups are set up as a peer-based environment where small groups of students receive feedback on their academic writing from the facilitator and their fellow students. There are three sections of each writing group meeting:

  • Collegial chat: Meetings start with a friendly discussion time where participants can share their doctoral journeys over the past two weeks.
  • Reflection: Ten minutes of discussion where students who shared their written work in the previous meeting reflect on how they have incorporated the feedback they received.
  • Feedback and discussion: The rest of the meeting is focused on students sharing their written work and receiving feedback on areas for improvement in a peer-learning environment.

My writing groups have been set up in this way to create a space for authentic learning about actual writing, where peers support peers. Participants discuss suggestions for improvement as a group, fostering an environment where all participants learn from the feedback provided. As such, in many ways, the learning in a doctoral writing group is a continuous process of reading, discussion, personal reflection, and peer-based learning. In this way, the writing group becomes a site of academic social practice.

I also wanted to create a collegial space in which any question would be valid at any stage of the process. To achieve these goals, modelling of the academic writing process was particularly important. During meetings, I will regularly share draft documents I am currently writing, explaining to the writing group what I aim to achieve with that text. I will then also model how I would provide feedback to myself, highlighting errors in logic, poor phrasing, lack of evidence, or other academic language and literacy issues. Through this modelling, students gain an authentic insight into how academic writing is actually done. This helps to normalise the concept of writing as a process and helps them to learn how to critique others’ (and their own) work.

Collegiality is the cornerstone of the success of this type of group. Feedback discussions and personal reflections would not be effective if the students do not feel safe and part of the learning community. It is important to create a safe space to allow for the collegial critique of each other’s written work. I do this by establishing expectations from the beginning. Each participant is provided with the writing group’s code of conduct. If you want to create a code of conduct for your writing group, you can use the one below.

Ensuring a safe space

In order to ensure that all participants are treated with respect, we should behave in a manner that affirms everyone’s worth, dignity, and significance.

  • As part of the writing group, you are expected to critique others’ work, but this should never be done in a way that disrespects the other person. Do not use language that devalues another person or the significance of their research. All participants in the writing group have the same right to be there and should be treated in a way that affirms their worth and significance.
  • Be respectful with the words you use when you talk to or about others. Listen to others and take note of their reactions to your tone of voice and manner.
  • Never use derogatory language, put-downs, or racist or sexist language, even sarcastically or as a joke.
  • Show respect for other cultures, traditions, or religions. Remember that everyone does not necessarily think the way you do. Avoid statements that reflect ignorance or bias about other cultures, traditions, or religions.
  • Have zero tolerance for discrimination. If you believe someone is behaving in a discriminatory way, you should feel comfortable raising the issue in the group or talking to Lynette afterwards. We do not condone any discriminatory behaviour in the writing group setting.

Respectfully critique someone else’s work

  • When giving feedback to another participant, start by highlighting what you thought was done well in the text you read.
  • Focus on areas for improvement in academic style and language. This can include suggestions for improvement in referencing, style, voice, organisation of ideas, as well as any area of English language.
  • If you are knowledgeable about the topic that the other person wrote about in their text, you can also provide them with suggestions for improvement in content. This can include suggestions for further readings, as well as theories or concepts that can be added to strengthen the arguments in the text.

Want to learn more about the benefits of academic writing groups? My research has demonstrated that writing groups are spaces for academic pastoral care which foster academic identity and sense of belonging. You can learn more by watching the research presentation or reading the paper below. Why not start a writing group today?

Questions to ponder

Have you ever participated in a doctoral writing group or a similar peer-based learning environment? Share your experiences regarding how this setup impacted your learning, writing skills, and academic identity. Did you encounter any challenges in giving or receiving feedback, and how did you overcome them?

In your opinion, what are the key elements of effective feedback in an academic setting? How can such feedback contribute not only to the improvement of academic writing but also to the development of a sense of belonging and academic identity among doctoral students?

Embracing flexibility in assessment to enhance higher-order thinking

Dr Lynette Pretorius


Innovations in assessment task design are essential if we as educators are to encourage our students to see assessment as a learning process, rather than just a means towards a grade. In a recent study I did with some of my colleagues from accounting, we developed a flexible assessment regime designed to bolster students’ higher-order thinking skills, particularly critical thinking, reflection, and self-directed learning. We did this by giving students the option to choose how to complete their assessment during the semester, kind of like a “choose your own adventure” assessment regime.

In the paper we wrote, we describe how we developed optional critical thinking tasks for a core second-year undergraduate accounting unit. Our assessment regime gave students autonomy to choose whether to invest time and effort into optional tasks. In this way, students were allowed to take control of their learning trajectory throughout the semester. Their choice affected the way the assessments were weighted in the unit, as shown below. It is important to note that we wanted to ensure that students were not deterred from choosing to attempt the optional tasks because of any perceived risks. As such, students’ final overall grades depended either on just the two compulsory tasks or on all four assessments, whichever was higher.

Assessment task | Choice 1 (completion of all four tasks) | Choice 2 (completion of only the two compulsory tasks)
Answering teacher-developed pre-lecture quiz questions | 10% | Not applicable
Students developing their own critical thinking questions for the tutorial sessions | 15% | Not applicable
Compulsory coursework tasks | 15% | 20%
Compulsory exam | 60% | 80%
Flexible assessment regime
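
To make the “whichever is higher” rule concrete, below is a minimal sketch (in Python) of how a student’s final grade could be calculated under this regime, using the weightings from the table above. The function and the example marks are hypothetical illustrations, not the actual calculation used in the study.

def final_grade(quiz, questions, coursework, exam, attempted_optional):
    """Hypothetical sketch of the flexible grading rule.

    All marks are percentages (0-100). Weightings follow the table above:
    Choice 1 uses all four tasks; Choice 2 uses only the compulsory tasks.
    """
    # Choice 2: only the two compulsory tasks (coursework 20%, exam 80%).
    compulsory_only = 0.20 * coursework + 0.80 * exam

    if not attempted_optional:
        return compulsory_only

    # Choice 1: all four tasks (quiz 10%, questions 15%, coursework 15%, exam 60%).
    all_four = (0.10 * quiz + 0.15 * questions
                + 0.15 * coursework + 0.60 * exam)

    # Students who attempt the optional tasks are never disadvantaged:
    # the final grade is whichever weighting yields the higher result.
    return max(compulsory_only, all_four)


# Example with hypothetical marks: here the optional tasks lift the final grade.
print(final_grade(quiz=85, questions=80, coursework=65, exam=70,
                  attempted_optional=True))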

The design of the optional assessment tasks encouraged students to reflect on their learning needs, question their existing knowledge, and identify gaps in their understanding. In this way, we hoped to promote a deeper level of engagement with the content and foster a more active learning experience. The critical thinking questions were used in tutorials in a peer-learning environment, allowing students to work together in groups to find answers to the questions they had generated. This helped to foster shared learning.

A large proportion of the cohort in our study chose to complete the optional tasks, with two-thirds of the cohort rating the flexible assessment regime as a “very good” or “good” idea by the end of the semester. Students who completed the optional tasks received grades that were 12% higher than those of students who chose to complete only the compulsory tasks. Qualitative data also highlighted that students realised they had improved their higher-order thinking, particularly their critical thinking ability and their reflection skills.

It was interesting to see that several students complained that it would be better if the teacher simply gave them the answers to the questions, instead of encouraging them to discover the answers for themselves. In particular, they thought that critical thinking was not necessary in accounting. Students also believed that the flexible assessment regime did not affect the grades they ultimately received, despite the clear quantitative difference in grades mentioned earlier. This suggests that students had not yet recognised that improved higher-order thinking, such as critical thinking, helped them in other tasks such as the final exam, or that critical thinking can enhance the applicability of their content knowledge in accounting. For accounting students, higher-order thinking such as critical thinking is important for several reasons, including:

  • It helps students solve accounting problems by enabling them to analyse problems, identify causes, and come up with effective solutions. The discipline content taught in the unit we adapted, for example, includes cost-volume-profit analysis, necessitating critical thinking and problem-solving skills.
  • It encourages informed decision-making. Effective graduates from this unit would need higher-order thinking to be able to make thoughtful and reasoned management decisions related to cost behaviours and projections.
  • It fosters students’ capacity to adapt and innovate in constantly evolving contexts. Using higher-order thinking allows students to learn how to critically think about a situation, assess their knowledge, and creatively apply their skills in an environment where variables related to things such as costs, cost behaviours, and cost allocations are constantly changing.

Our study, therefore, highlights that it is important for educators to explain the relevance of the higher-order thinking skills they are fostering in their classrooms to the disciplinary field more broadly.

In summary, the flexible assessment regime in our study was carefully crafted to not only assess students’ understanding but also to actively engage them in the process of learning. By requiring students to generate questions and seek answers collaboratively, these tasks were instrumental in promoting self-reflection, problem-solving, and critical thinking, which are key components of higher-order thinking. Other educators may choose to use a similar strategy and encourage their students to choose their own assessment adventure, thereby fostering deeper learning and student engagement.

Questions to ponder

How can flexible assessment be adapted to different disciplinary fields?

In what ways can educators ensure that flexible assessment regimes are equitable and inclusive for all students, regardless of their backgrounds?

Developing students’ critical thinking and clinical reasoning through problem-based assessment

Profile Image

Dr Lynette Pretorius

Contact details

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.


In clinical education, the challenge is not just to impart content knowledge, but also to help students develop critical real-world clinical skills. This is particularly true when it comes to critical thinking and clinical reasoning skills. In a paper I recently wrote with colleagues from a midwifery unit, we demonstrate how constructive alignment between a course’s graduate attributes and a unit’s learning experiences and assessment tasks can help students develop clinical reasoning skills.

Critical thinking and clinical reasoning are foundational skills in midwifery for several reasons:

  • Complex decision-making: Midwifery involves making decisions in complex, often unpredictable situations. Critical thinking and clinical reasoning enable midwives to assess and interpret patient data, consider various options, and make informed decisions that ensure the best outcomes for their patients.
  • Adapting to diverse scenarios: Every childbirth is unique, and midwives encounter a wide range of scenarios. Critical thinking and clinical reasoning equip them with the ability to adapt their knowledge to different contexts and provide tailored care based on individual needs and circumstances.
  • Safety and quality of care: Good critical thinking and clinical reasoning are key to patient safety and the quality of care. They allow midwives to identify and respond to potential complications promptly and effectively, which is vital in a field where situations can change rapidly and have critical consequences.
  • Holistic patient care: Midwifery is not just about the physical aspects of childbirth; it encompasses the emotional, psychological, and social well-being of the patient and their family. Critical thinking and clinical reasoning help midwives to consider all these aspects in their care, leading to more comprehensive and personalised support.
  • Lifelong learning and professional development: Midwifery, like all healthcare professions, is constantly evolving. Critical thinking and clinical reasoning are essential for midwives to engage in continuous learning, keep up with the latest evidence and practices, and refine their skills over time.
  • Collaborative practice: Midwifery often involves working in teams with other healthcare professionals. Critical thinking and clinical reasoning are important for effective communication and collaboration, ensuring that all team members are aligned in their approach to patient care.

Consequently, midwifery educators need to develop curricula which balance academic content with skills development. This is also true for assessment tasks. Traditionally, assessment tasks in midwifery have revolved around essay questions, which often fail to test students’ clinical reasoning and decision-making skills. Recognising this gap, we embarked on a curriculum redesign journey, aiming to make our assessment task more clinically relevant using problem-based learning.

We wanted to make our assignment more problem-based, as there is ample evidence that real-world scenarios can make students’ education more clinically relevant. We believe real-world scenarios are useful for several reasons:

  • Bridging theory and practice: Assessments that mirror real-world scenarios enable students to apply theoretical knowledge in practical contexts. This helps to bridge the gap between what they learn in the classroom and what they will encounter in their professional lives, making their education more relevant and effective.
  • Developing critical thinking: Real-world-focused assessments often require critical thinking, problem-solving, decision-making, and other key skills that are essential in professional settings. By incorporating these elements into assessments, educators ensure that students are not just learning information, but are also developing the skills they need to use that information effectively.
  • Encouraging active learning: Real-world assessments often involve active, experiential learning, which is typically more engaging and effective than passive learning methods. This can lead to a deeper understanding of the subject matter and a more enjoyable learning experience for students.
  • Fostering lifelong learning: In the rapidly changing modern world, the ability to learn and adapt is crucial. Real-world assessments encourage students to be self-directed learners who can seek out information, analyse problems, and find solutions independently.
  • Preparing for professional challenges: The workplace presents challenges that are often complex and unpredictable. Assessments that simulate real-world situations prepare students for these challenges, equipping them with the experience and confidence to handle similar situations in their future careers.

To achieve our goals, we needed to constructively align our assessment tasks, learning outcomes, and learning activities. This helped us ensure that the outcomes we hoped to achieve with our unit were effectively developed in our classroom activities and that our assessment tasks actually assessed the skills we taught. To help us constructively align our assessment task to the learning outcomes of the unit, we utilised the Research Skills Development (RSD) framework.

We changed the previous essay-based assessment task into a scenario-based question, requiring students to apply clinical reasoning to a specific case, as shown below.

Lola is a G1P0, EDC 16/4/2012, singleton pregnancy, positive blood group, currently taking pregnancy multivitamins, she has attended the routine schedule of antenatal care with no adverse issues identified. Lola presents to your maternity unit at 10:30hrs with a history of irregular contractions since 02:30hrs, with contractions now becoming regular at four minutely intervals. Lola’s membranes ruptured at 01:00hrs with clear liquor draining. On admission the abdominal palpation reveals a baby presenting in a right occipito-posterior position (ROP), with the fetal head 3/5ths above the pelvic brim. A vaginal examination is performed, with the cervix found to be posterior, 1-2cms long, 2cms dilated, station -2, and membranes are confirmed ruptured. Critically discuss the care required for the laboring woman with the fetus presenting in an occipito-posterior position, including possible outcomes this woman may expect.

This case was designed to test a broad range of competencies, including critical thinking and clinical reasoning in complex clinical situations. These students had not previously encountered a similar assignment in their course. By consulting the RSD framework, we decided to target the assessment at Level IV, encouraging students to research and analyse the scenario themselves, but with some structured guidance. An assignment preparation session was therefore conducted to examine the scenario-based question in a peer learning environment, with interactive group discussions used to analyse the assessment task and decide how best to approach the assignment. The three discussion prompts used in the class are listed below:

“What are the key symptoms or features in this case?”

“What do the key symptoms mean?”

“How will I care for Lola?”

Students were encouraged to work in groups to decide on appropriate answers to these questions, and they were then asked to present their ideas to the class. Discussion between the groups was used to foster the investigation of different opinions and ideas.

The implementation of this new assessment approach was met with positive feedback from both students and staff. The scenario-based question was appreciated for its clinical relevance, and the structured guidance helped students focus on critical aspects of midwifery care. This suggests that students were more concerned with understanding the implications of the case for clinical practice than with simply answering the question, reflecting a deeper level of engagement and critical thinking.

The study also provided insights into the feedback process following the assessment. Students received extensive, focused feedback from the academics who marked the assignments. Several students also took up the optional opportunity to meet with the lecturers after receiving the feedback, seeking verbal insights into their performance. Students highlighted that the feedback they received was useful in helping them know how to improve in the future. Staff observed, through the feedback they provided on assignments, that the new approach led to a deeper engagement with the content and a better understanding of clinical reasoning.

A marking rubric was developed to accurately assess the research skills developed as part of this process. This marking rubric is freely available and can be used by other educators as needed. It can be found here (pages 386-387).

By shifting from traditional essay-based tasks to scenario-based questions aligned with the RSD framework and constructive alignment theory, we succeeded in enhancing student engagement, critical thinking, and clinical reasoning skills. Students benefit from an educational approach that prepares them for real-world challenges, fostering skills that are directly applicable to their future professional practice. This study also offers a framework for integrating educational theories into the design of practical assessment tasks and rubrics, which can be useful for other educators.

Questions to ponder

What are some of the key factors in assessment design that can encourage deeper learning and critical thinking in students?

In what ways could this approach to curriculum design impact the quality of healthcare provided by future graduates in clinical settings?

Trauma, anxiety, depression, solitude: The impact of COVID-19 on academic identity

Profile Image

Dr Basil Cahusac de Caux

Contact details

Dr Basil Cahusac de Caux is an Assistant Professor with a specialisation in the sociology of higher education, postgraduate research, and the sociology of language.

Profile Image

Dr Lynette Pretorius

Contact details

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, academic identity, student wellbeing, AI literacy, autoethnography, and research skills development.


Profile Image

Dr Luke Macaulay


Dr Luke Macaulay is a research fellow, researching the education and employment experiences of people from refugee and asylum seeking backgrounds.


Credit: Text and images have been republished from an article in the Monash Lens, https://lens.monash.edu/@education/2023/04/28/1385557/trauma-anxiety-depression-solitude-the-impact-of-covid-19-on-academic-indentity

COVID-19 brought about unprecedented changes to society, causing widespread disruption to many aspects of our lives.

The pandemic has impacted people from all walks of life, but particularly hard-hit have been academics, early-career researchers (ECRs), and PhD students. They’ve had to face a range of challenges, from adapting to new ways of working, to dealing with the closure of research facilities and universities.

Here, we explore the ways in which the pandemic has affected this group.

Much is drawn directly from insights in our book Research and Teaching in a Pandemic World, published in January. We used a research methodology where academics, ECRs, and PhD students could tell their personal stories of their pandemic experiences.

Some were filled with trauma, grief, and loss. Other times, the stories highlighted moments of resilience and growth.

This shows the pandemic affected each person differently, and that we should value and respect these diverse experiences as we move into the next stage of the pandemic (and hopefully a future post-pandemic world).

A focus on academic identity

Our research focused on the challenges to academic identity, an integral part of academics’ lives. It’s developed through teaching and researching, and it is shaped by the values and beliefs of the academic community.

For many individuals, academic identity is a fundamental part of who they are. It defines their sense of purpose, and provides them with a sense of belonging within an academic community.

The pandemic had a profound impact on academic identity. The closure of research facilities and universities significantly hindered researchers’ ability to conduct their work. This led to delays in research projects, which can be particularly challenging for ECRs and PhD students, who rely on their research to progress in their careers.

To further complicate matters, many are employed on fixed-term contracts, meaning their employment is dependent on their ability to secure research funding. However, with the closure of facilities, many funding opportunities dried up.

This has had a particularly negative impact on ECRs and PhD students, who are often in a more precarious position than their more established tenured colleagues.

Many had to adapt to new ways of working, such as remote teaching and learning, which led to a sense of disconnection from colleagues, students, and the broader academic community.

As a result, academics, ECRs, and PhD students struggled to develop their academic identities in the conventional way (that is, through face-to-face interaction, networking, and collaboration).

Instead, they had to discover and develop their academic identities amid chronic uncertainty and restrictions on mobility. This involved resorting to new techniques and strategies, typically immersing themselves in individual research projects, writing, and meaning-making.

[Image: A small wooden mannequin sitting on an open book with head in hands]

Solitude singled out

Solitude was the main theme that coloured the stories of academic identity development during the pandemic.

Perhaps the most demonstrable impact of the pandemic has been its toll on mental health.

Mental health is an important part of believing you can contribute to your chosen field, so challenges to mental health can have a significant impact on academic identity development.

Our previous research has already highlighted a mental health crisis in academia, particularly for those early in their research journey. This was markedly exacerbated by the pandemic.

The isolation and uncertainty led to increased levels of anxiety and depression. Stories of individuals progressively becoming mentally unwell, with anxiety, depression, and difficulties dealing with trauma, became increasingly common. The ensuing frustration and anxiety drove many to question whether an academic career was the wisest route to take.

That said, the pandemic also spurred some ECRs to develop cognitive hardiness. This means we now have a group of budding academics who have cultivated greater levels of resilience. One hopes their perseverance will not only shape their future research activities, but that this key trait will also be absorbed through association by their new “post-pandemic” colleagues.

The marginalisation of individuals

The COVID-19 pandemic has not only impacted academics, ECRs, and PhD students as a whole, but it’s also contributed to the marginalisation of certain individuals within academia.

We’ve previously shown that challenges to academic identity development can lead to feelings of marginalisation. The stories shared with us for this book showed that the pandemic amplified already existing inequalities in academia, with individuals from marginalised groups – including women, people of colour, and those with disabilities – facing disproportionate challenges.

Coupled with broader societal issues such as gender-based discrimination, systemic racism, and war, those from marginalised groups struggled even more to have their voices heard.

There’s now a growing imperative to address such issues in ways that make all academics valued for the work that they do. We need to aim for equity and justice in our communities of practice.

Several of the stories shared with us told of the particular challenges academic parents faced. Those with children, and especially those who were academic mothers, talked about how they had more caregiving responsibilities.

The closure of schools and daycare centres meant many parents had to balance working from home with caring for their children. This made it difficult for parents in academia to maintain their productivity and meet their work obligations, leading to additional stress and anxiety.

It also made it difficult for them to progress their careers, leading to further marginalisation within academia.

[Image: A young female scientist sitting alone among scientific equipment]

The upside for parents

In some aspects, however, pandemic-related restrictions were also a boon for parents, who were able to involve themselves more in their children’s education and daily activities. The blurred boundaries between work and home resulted in a chaotic but occasionally meaningful realignment of priorities for parents working from home.

Overall, the COVID-19 pandemic has had a profound impact on academics, ECRs, and PhD students. The closure of facilities, the move to remote teaching and learning, and the impact on mental health and job prospects have all combined to create a challenging environment.

It’s important we recognise the challenges faced by this group during this difficult time, and provide them with the continued support they need to carry out their important work. Society depends on it.