Participant recruitment strategies in research


Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, AI literacy, research literacy, academic identity, and student wellbeing.


The way researchers select their participants impacts the validity and reliability of their findings, making participant recruitment one of the most crucial steps in the research process. But how do researchers go about this task? What strategies do they use to ensure their sample accurately reflects the broader population or the group they are investigating? Let’s explore some common participant recruitment strategies, breaking down their strengths, weaknesses, and best use cases. This post will cover six key sampling techniques: convenience sampling, purposive sampling, snowball sampling, random sampling, stratified sampling, and quota sampling.

Convenience Sampling

Convenience sampling, as the name implies, revolves around ease of access and availability. This method involves selecting participants who are nearby, easily accessible, and willing to take part in the study. It’s a go-to choice for researchers when they need to collect data quickly and with minimal effort. Instead of engaging in time-consuming and resource-intensive processes to identify and recruit participants, convenience sampling allows researchers to gather data from those who happen to be in the right place at the right time, or who meet the study’s basic criteria and are easy to contact.

One of the most notable benefits of convenience sampling is its speed and cost-effectiveness. Since participants are easy to reach, researchers can save both time and resources that would otherwise be spent on recruitment strategies, travel, or extensive outreach. For example, if you’re studying employee engagement in the workplace, you might simply survey your colleagues, since they are readily available and meet the general criteria of being employees. You don’t need to look far or conduct an elaborate recruitment process. This ease of implementation is especially valuable when dealing with limited budgets or tight deadlines. Convenience sampling also simplifies logistics, as researchers don’t need to source participants from outside their immediate environment, which can be particularly helpful in the initial stages of research where the primary goal is to test ideas or gather preliminary data.

Despite its practicality, convenience sampling carries a significant risk of bias. Since the sample is drawn from a pool of easily accessible participants, it may not reflect the diversity of the broader population. This lack of representation can lead to skewed results, limiting the generalisability of the study’s conclusions. Moreover, convenience sampling often captures a very specific subset of individuals: those who are willing to participate. People who are available and motivated to take part may differ significantly from those who are harder to reach, potentially introducing a self-selection bias. This means that the participants in your study might share certain characteristics that make them different from the larger group you’re trying to understand, thereby limiting the accuracy and breadth of the findings.

Convenience sampling is best suited for exploratory research, pilot studies, or projects where time and resources are constrained. It’s a practical method when the research goal is to test hypotheses, gather preliminary data, or explore an emerging field. However, for studies where generalising findings to a larger population is critical, convenience sampling is not recommended. In these cases, a more representative sampling method, such as random or stratified sampling, would yield more reliable and valid results.

Purposive Sampling

Purposive sampling, also known as purposeful sampling, is a strategically driven approach to participant selection, designed to align closely with the objectives of the research. It involves the deliberate selection of individuals who possess specific characteristics, knowledge, or experiences that are directly relevant to the study’s focus. The intention here is not to gather a wide, diverse group of participants, but to choose individuals whose particular insights can provide depth and richness to the data. In purposive sampling, researchers carefully define the criteria for inclusion, selecting participants based on how well they fit the study’s needs. This targeted approach helps ensure that the participants are not only suitable but also capable of offering the kind of focused and contextually relevant information that the research seeks to uncover.

The primary strength of purposive sampling lies in its efficiency and precision. By handpicking participants based on specific criteria, researchers can ensure that every individual involved in the study has a direct connection to the research topic, which enhances the quality of the data collected. For instance, if a researcher is investigating the experiences of people recovering from cancer, they would purposefully select participants who have undergone cancer treatment, ensuring that the data collected is directly relevant to the research question. This method is especially useful in qualitative research, where the goal is often to gain a deeper understanding of a particular phenomenon rather than to generalise findings to a larger population. Moreover, purposive sampling is often more practical when working with small or hard-to-reach populations. In studies involving niche groups, such as people with rare medical conditions or members of specific subcultures, purposive sampling enables researchers to focus on finding individuals who meet the study’s strict criteria, bypassing the need for broader recruitment efforts that may yield less relevant participants.

While purposive sampling offers many advantages in terms of relevance and efficiency, it also comes with inherent limitations, the most significant of which is the risk of selection bias. Since participants are chosen subjectively by the researcher, there is always the potential for bias in the selection process. The researcher’s choices may be influenced by preconceived notions about who would provide the most useful data, which could result in an unbalanced or unrepresentative sample. Since the sample is intentionally selective, it does not provide an accurate cross-section of a broader group. As a result, purposive sampling is not ideal for studies where broad generalisability is a key objective.

Purposive sampling is most commonly employed in qualitative research, where the goal is to explore specific themes, experiences, or phenomena in great detail. It is particularly useful when researchers are investigating a clearly defined group or phenomenon, such as in case studies, ethnographic research, or studies focusing on specialised areas like mental health, education, or organisational behaviour. Additionally, purposive sampling is often used in evaluation research, where the goal is to assess a programme, policy, or intervention. By focusing on individuals with firsthand experience, researchers can gather detailed feedback that is crucial for evaluating the effectiveness of the intervention.

Snowball Sampling

Snowball sampling is a participant recruitment method that relies heavily on social networks and personal referrals to build a sample. The process begins with a small group of initial participants who are chosen based on their relevance to the study. These participants are then asked to refer others they know who meet the study’s criteria, who in turn refer more people, and so on, creating a snowball effect. Over time, the sample grows organically, expanding through connections within a specific community or network.
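This referral process can be sketched as a simple wave-by-wave traversal of a social network. The network below is entirely invented for illustration; in a real study the "connections" would only be revealed as each participant makes their referrals.

```python
from collections import deque

# Hypothetical referral network: who each participant knows and could refer.
knows = {
    "seed_1": ["p2", "p3"],
    "p2": ["p4"],
    "p3": ["p4", "p5"],
    "p4": [],
    "p5": ["p6"],
    "p6": [],
}

# Start with the initial participant and follow referrals wave by wave.
recruited = {"seed_1"}
queue = deque(["seed_1"])
while queue:
    current = queue.popleft()
    for referral in knows[current]:
        if referral not in recruited:  # each person joins the sample only once
            recruited.add(referral)
            queue.append(referral)

print(sorted(recruited))  # the sample has grown from one seed to six people
```

Note how the final sample is entirely determined by who the seed participant happens to know, which is exactly the source of bias discussed below.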

This method is especially useful when researchers are working with hard-to-reach populations. These might include people in marginalised groups, individuals involved in illegal activities, or those with experiences that are not easily accessible through conventional recruitment methods, such as people who have experienced homelessness or are part of underground subcultures. In many cases, people within these groups may not want to reveal their identities to researchers, especially if their involvement in the group is sensitive or stigmatised. However, through personal referrals from trusted peers, they may be more likely to participate. The trust established between members of the community can make them more comfortable with sharing their experiences, allowing researchers to collect rich, authentic data from participants who would otherwise be unreachable. Snowball sampling can also be highly cost-efficient and flexible.

Despite its advantages, snowball sampling has several potential drawbacks, the most notable being the risk of bias. Since participants are recruited through personal networks, the sample is often restricted to people who are socially connected, which can limit the diversity of the sample. This lack of diversity can skew the results, making it difficult to generalise findings to the broader population. Moreover, snowball sampling can create a chain of referrals that is disproportionately shaped by the initial participants. If the first few participants are not representative of the population being studied, their referrals may perpetuate this imbalance, further reducing the sample’s representativeness. Another challenge is the difficulty in controlling the sample size. Since snowball sampling relies on personal referrals, the growth of the sample can be unpredictable. In some cases, the “snowball” may gather momentum quickly, leading to a large, varied participant pool. In other instances, recruitment may stall if participants are unwilling or unable to refer others, resulting in a sample that is too small to draw meaningful conclusions.

Given its strengths and limitations, snowball sampling is most effective in studies where recruiting participants through traditional methods would be difficult or impractical. It is particularly well-suited for research involving rare populations, sensitive topics, or hidden communities where members may be reluctant to come forward on their own. This method is also useful in qualitative research, where the goal is to collect in-depth, nuanced data from a specific group rather than to achieve broad generalisability. In exploratory research, snowball sampling can help researchers generate preliminary data about populations that are otherwise difficult to access. It allows for a gradual expansion of the sample, giving researchers the flexibility to adjust their recruitment strategy based on the data collected. However, because of the potential for bias, snowball sampling is generally not recommended for studies that require representative samples or where generalisability to the broader population is a primary concern.

Random Sampling

Random sampling, as the name suggests, is a method where each individual in the population has an equal chance of being selected, which makes the process akin to drawing names out of a hat. By giving every person an equal opportunity to be included, random sampling minimises bias and maximises the likelihood that the sample will accurately represent the broader population. A simple example would be assigning numbers to everyone in a population and using a random number generator to pick participants. This quality is what makes random sampling a preferred choice in large-scale surveys and experimental research, where the goal is to ensure that the findings can be applied to a larger group.
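The "numbers out of a hat" idea can be sketched in a few lines using Python's standard library; the population here is invented purely for illustration.

```python
import random

# Hypothetical sampling frame: a numbered list of everyone in the population.
population = [f"person_{i}" for i in range(1, 1001)]  # 1,000 people

random.seed(42)  # fixed seed only so the example is reproducible
sample = random.sample(population, k=50)  # each person has an equal chance

print(len(sample))       # 50 participants drawn
print(len(set(sample)))  # 50 -- sampling without replacement, so no repeats
```

The crucial (and often difficult) precondition is the first line: you need a complete, up-to-date list of the population before any of this works, a point taken up below.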

One of the most notable strengths of random sampling is its ability to provide high external validity. Since the method does not favour any particular subset of the population, the findings from a study using random sampling are more likely to be generalisable, meaning that they can be applied to the wider population with a greater degree of confidence. Another key benefit is the reduction of systematic bias. In other sampling methods, certain individuals or groups may be over-represented due to researcher influence or convenience. With random sampling, this risk is minimised because selection is left entirely to chance. The random nature of this method ensures that personal preferences, biases, or logistical factors do not affect who is chosen for the study.

Despite its many advantages, random sampling can be challenging to implement, particularly in studies with large populations. Some of the main drawbacks are the time and cost involved. To conduct random sampling on a large scale, researchers need access to a complete and up-to-date list of the population from which they’re drawing their sample. In some cases, obtaining such a list can be difficult or impossible, especially when working with fragmented or hard-to-reach populations. Additionally, there can be significant logistical hurdles. In small populations, random sampling may be fairly straightforward, but when dealing with larger populations, coordinating a random selection process can become complex. This can involve significant costs, not just in terms of the initial recruitment of participants, but also in terms of travel, communication, and follow-up procedures.

Given the costs and logistical challenges, random sampling is best suited for large quantitative studies, particularly those where generalisability is the primary goal. If the research is designed to draw conclusions about the broader population, such as in public health research, market research, or large-scale sociological studies, random sampling is ideal because it provides the most unbiased and representative data possible. In cases where time and budget constraints are more pressing, or where the research is exploratory rather than aiming for population-level generalisability, other sampling methods (such as convenience or purposive sampling) might be more appropriate.

Stratified Sampling

Stratified sampling is a method used by researchers to ensure that their sample accurately reflects the diversity of the population by focusing on key subgroups, or “strata.” The basic idea is that the population is divided into distinct groups based on important characteristics such as age, gender, income level, education, or ethnicity. Once these groups are defined, participants are then randomly selected from each stratum. This approach allows researchers to ensure that the sample mirrors the proportions of these subgroups in the overall population, leading to more precise and reliable findings. For example, if a population consists of 40% males, 55% females, and 5% transgender people, the researcher ensures that the sample has the same proportional representation. This method is particularly effective in studies where the population consists of individuals with varying characteristics that could influence the outcome of the study. By ensuring that all relevant subgroups are proportionally represented, stratified sampling helps researchers avoid over-representing or under-representing certain groups.
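A minimal sketch of this proportional allocation, using the 40% / 55% / 5% split from the example (the population of 1,000 people is invented for illustration):

```python
import random

# Hypothetical population mirroring the example's proportions:
# 40% male, 55% female, 5% transgender (1,000 people in total).
population = (
    [("male", i) for i in range(400)]
    + [("female", i) for i in range(550)]
    + [("transgender", i) for i in range(50)]
)

# Step 1: divide the population into strata by the characteristic of interest.
strata = {}
for gender, person_id in population:
    strata.setdefault(gender, []).append((gender, person_id))

# Step 2: randomly select from each stratum in proportion to its share.
sample_size = 100
random.seed(0)
sample = []
for gender, members in strata.items():
    n = round(sample_size * len(members) / len(population))
    sample.extend(random.sample(members, n))

counts = {g: sum(1 for s, _ in sample if s == g) for g in strata}
print(counts)  # {'male': 40, 'female': 55, 'transgender': 5}
```

Random selection still happens, but only *within* each stratum, which is what guarantees the final sample mirrors the population's proportions.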

One of the main strengths of stratified sampling is its ability to produce a highly representative sample of the population. By ensuring that each subgroup is properly represented, this method increases the precision of the results, which in turn improves the reliability of the study’s findings. This is especially important in research where differences between subgroups are a key focus. Moreover, by dividing the population into strata and then randomly selecting participants from each group, stratified sampling ensures a more balanced and accurate representation, which minimises the risk of sampling errors. Finally, the ability to analyse subgroup differences is a key advantage of stratified sampling, particularly in fields like sociology, economics, and public health, where understanding these differences is critical.

While stratified sampling offers many advantages, it does come with certain challenges, particularly in terms of the time and resources required to implement it. One of the most time-consuming aspects of this method is the need to define and organise the strata before selecting participants. Researchers must have a clear understanding of which characteristics are most relevant to the study and must have detailed information about the population to create the strata. Furthermore, in some cases, this information may not be readily available, or the population may be too complex to neatly divide into well-defined strata. Stratified sampling can also be more logistically complicated than simpler methods like convenience sampling. Researchers need to ensure that they have enough participants in each stratum to allow for meaningful analysis, which can require more recruitment efforts. If some strata are smaller or harder to reach, the researcher may need to put in extra effort to find participants from those groups, increasing both time and costs.

Given its ability to provide a highly representative sample, stratified sampling is best used in studies where representation across key subgroups is critical. It is particularly useful when researchers are interested in analysing differences between subgroups, such as age, income, or geographic location. Stratified sampling is also valuable in demographic studies, where the goal is often to understand the characteristics of various subgroups within a population.

Quota Sampling

Quota sampling is a sampling method that shares certain goals with stratified sampling, particularly the aim of capturing diversity across specific subgroups. However, the fundamental difference lies in how the sample is selected. While stratified sampling relies on random selection from each subgroup, quota sampling allows researchers to directly control who is recruited by actively seeking participants to fill predefined quotas based on certain characteristics, such as age, gender, education level, or income. Once the quota for each subgroup is filled, no further participants from that group are recruited, ensuring that the final sample meets the predetermined criteria for representation.
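The quota-filling logic can be sketched as follows. The quota targets and arrival order are invented for illustration; the key point is that there is no randomisation, as participants are simply accepted in the order the researcher happens to reach them until each quota closes.

```python
# Hypothetical quota targets for an age-group comparison.
quotas = {"18-34": 2, "35-54": 2, "55+": 1}

# Participants in the order the researcher happens to reach them.
arrivals = [
    {"name": "Ana", "age_group": "18-34"},
    {"name": "Ben", "age_group": "18-34"},
    {"name": "Cal", "age_group": "18-34"},  # quota already full: turned away
    {"name": "Dee", "age_group": "35-54"},
    {"name": "Eli", "age_group": "55+"},
    {"name": "Fay", "age_group": "35-54"},
]

sample = []
for person in arrivals:
    group = person["age_group"]
    if quotas.get(group, 0) > 0:  # accept only while this quota is still open
        sample.append(person)
        quotas[group] -= 1
    if all(remaining == 0 for remaining in quotas.values()):
        break  # every quota filled: recruitment stops

print([p["name"] for p in sample])  # ['Ana', 'Ben', 'Dee', 'Eli', 'Fay']
```

Compare this with the stratified sketch above: the quotas guarantee the final proportions, but *who* fills each quota depends entirely on the researcher's outreach, which is the source of the selection bias discussed below.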

One of the main advantages of quota sampling is that it guarantees the inclusion of specific subgroups in the sample. By setting quotas for each group, the researcher ensures that the final sample reflects the desired characteristics or proportions, which is particularly important when the goal of the research is to compare different groups. Another key benefit of quota sampling is its efficiency. Since the researcher can directly seek out participants who meet the required criteria, the process can be completed more quickly and at a lower cost than methods like stratified sampling. Moreover, quota sampling offers a greater degree of control over the composition of the sample. The researcher can adjust the quotas based on the needs of the study, ensuring that specific groups are represented according to the study’s objectives.

Despite its advantages, quota sampling also has several limitations, the most significant of which is the potential for bias. Since participants are not selected randomly, there is a risk that the sample may not accurately represent the broader population, even if the quotas are met. The recruitment process is subjective, as it relies on the researcher’s judgement and outreach methods, which can introduce selection bias. This lack of randomisation means that the results from a quota sample may not be generalisable to the larger population, especially if certain characteristics or perspectives are overlooked during recruitment. Additionally, quota sampling can lead to incomplete representation within each subgroup. While the researcher may set quotas based on broad characteristics like age or gender, other important factors may not be considered. This can result in a sample that, while meeting the quota criteria, lacks internal diversity within the subgroups, which can limit the depth and richness of the data collected. Another challenge with quota sampling is that it requires detailed knowledge of the population beforehand. The researcher must have a clear understanding of the proportions of different groups within the population to set accurate quotas. This can be difficult if reliable demographic data is not available, or if the population is highly fragmented or diverse in ways that are not easily captured by simple quotas.

Quota sampling is best suited for studies where the primary goal is to compare specific groups or ensure representation across key subgroups. It is commonly used in market research, opinion polling, and social research, where researchers need to gather data quickly and cost-effectively while ensuring that certain groups are represented. This method is also useful in studies where strict randomisation is not feasible or necessary. For example, in research involving focus groups or interviews, where the goal is to gather in-depth insights from specific subgroups, quota sampling allows the researcher to select participants who fit the desired profile without the logistical complexities of random selection.


Questions to ponder

How do different sampling methods influence the validity of research findings?

Can convenience sampling ever be justified in large-scale research?

In what scenarios might snowball sampling offer a better solution than random sampling?

Exploring 10 popular research designs: a quick guide


Dr Lynette Pretorius



In research, the design chosen plays a pivotal role in determining how data are collected, analysed, and interpreted. Each design provides a unique lens through which researchers can explore their questions, offering distinct advantages and limitations. Below, I summarise ten common research designs, spanning qualitative, quantitative, and mixed methods approaches.

Action Research

Action research is a collaborative and iterative approach that seeks to solve real-world problems while simultaneously generating knowledge. It is characterised by its participatory nature, where researchers and participants collaborate to identify problems and implement solutions. This collaborative process ensures that the research is deeply rooted in the needs and realities of the community or organisation being studied. By involving stakeholders in every step, action research not only increases the relevance of the findings but also empowers participants by giving them ownership of the process. This makes it particularly impactful in settings like schools, where teachers and administrators can actively contribute to shaping educational practices.

What sets action research apart is its cyclical nature. Unlike traditional research, where data are collected and analysed in a linear fashion, action research involves continuous cycles of planning, acting, observing, and reflecting. Another important feature of action research is its adaptability. As new insights emerge, the research design can be adjusted to address unforeseen challenges or opportunities. This flexibility allows for iterative learning and continuous improvement, fostering a more dynamic and responsive research environment. This makes it particularly well-suited for environments where ongoing change is necessary, such as schools or businesses aiming to improve their operations or outcomes. However, this adaptability also introduces challenges, particularly in maintaining rigour and objectivity. Balancing the need for scientific validity with the practical demands of real-world problem-solving requires careful planning and reflective practice, often making the role of the researcher one of facilitator as much as investigator.

Autoethnography

I have previously written another blog post which explains autoethnography in detail. In essence, autoethnography is a research design that combines the study of personal experience with broader social and cultural analysis. In this approach, the researcher uses their own life as the primary source of data, reflecting on their personal experiences to explore larger cultural or societal issues. Researchers are the participants in their own studies and the stories which are told often explore transformative experiences for the researcher, frequently taking the form of epiphanies that significantly influenced the author’s worldview. By blending autobiography and ethnography, autoethnography allows researchers to provide an insider’s perspective on their own social context, making it a powerful tool for examining how individual identity and experiences are shaped by, and in turn shape, cultural norms, values, and power dynamics.

One of the strengths of autoethnography is its ability to highlight marginalised voices or experiences that are often overlooked in traditional research. It provides a platform for self-reflection and critical analysis, allowing researchers to connect their individual stories to larger collective experiences. However, the highly personal nature of this research design also presents challenges. Balancing subjectivity with academic rigour requires careful reflection to avoid the research becoming overly introspective or self-indulgent. Autoethnographers must navigate the fine line between personal storytelling and scholarly analysis, ensuring that their narrative contributes meaningfully to the understanding of broader social or cultural issues. Despite these challenges, autoethnography remains a powerful approach for exploring the intersection of the personal and the political, offering rich, emotionally resonant insights into the complexities of human experience.

Note that autoethnography can be done by one researcher or by a group of researchers. When done together, this type of autoethnography is called collaborative autoethnography. Collaborative autoethnography is particularly pertinent when examining complex social phenomena, such as marginalisation and the pursuit of social justice, as it facilitates the inclusion of multiple perspectives and voices. In this way, the individual voices of the researchers work together to illuminate common themes or experiences.

Case Study

Case study research is particularly effective for exploring complex phenomena in depth and within their real-life context. The case study design focuses on an in-depth examination of a ‘case,’ which could be an individual, group, organisation, or event. Case studies can be either descriptive, exploring what is happening, or explanatory, seeking to understand why and how something occurs. They often use multiple data sources, such as interviews, observations, and documents, to provide a comprehensive understanding of the case. Unlike other designs that seek to generalise findings across large populations, case studies focus on the intricacies of a ‘case’. The depth of focus of a case study also presents limitations: namely, the findings from a single case may not be applicable to other contexts. Despite this, case studies are often used as a stepping stone for further research, providing in-depth insights that can inform broader studies.

The distinction between single-case and multiple-case designs lies in the scope and focus of the research. A single-case design centres around an in-depth examination of one particular case, which is often chosen because it is either unique, critical, or illustrative of a broader phenomenon. This design is beneficial when the case is exceptional or offers significant insight into a rare or novel situation. In contrast, a multiple-case design involves studying several cases to compare and contrast findings across different contexts or instances. Multiple-case designs offer more robust evidence, as they allow researchers to identify patterns or variations across cases, increasing the potential for generalising findings to a broader population or set of circumstances.

Document or Policy Analysis

Document or policy analysis is a qualitative research design that involves systematically reviewing and interpreting existing documents to extract meaningful data relevant to a research question. These documents can range from government reports, personal letters, and organisational records to media articles, policy documents, and historical texts. It involves examining the formulation, implementation, and outcomes of documents or policies by analysing relevant data, understanding stakeholder perspectives, and evaluating the potential impacts of various options. Researchers use document analysis to identify patterns, themes, or trends within written materials, which can offer valuable insights into social, political, or organisational contexts. One of the strengths of document analysis is that it allows researchers to access data that is already available, making it a relatively unobtrusive approach that does not require direct interaction with participants.

This research design is particularly useful when studying past events, policies, or organisational practices, as documents can provide a rich historical or contextual backdrop. Additionally, document analysis can be used in conjunction with other research designs, such as case studies, to triangulate findings and enhance the depth of the research. However, one of the challenges of this design is assessing the credibility, bias, or completeness of the documents. Researchers must critically evaluate the sources to ensure that the information is reliable and relevant to their study. Despite these challenges, document analysis remains a valuable tool for exploring existing written records and uncovering insights that may not be easily accessible through other research designs.

Ethnography

Ethnography is a deeply immersive research design that involves the researcher becoming part of the community or environment they are studying. This approach allows researchers to gather first-hand insights into the social dynamics, practices, and beliefs of a group from the inside. Rather than relying on external observation or second-hand accounts, ethnographers immerse themselves among their participants, often for extended periods. This enables them to capture the complexities of human behaviour in its natural setting, offering a nuanced understanding of cultural practices and social interactions.

One of the unique aspects of ethnography is its emphasis on the participants’ perspectives. By prioritising the voices and experiences of the people being studied, ethnographers aim to represent the world as seen through the eyes of the participants. However, this approach also raises challenges, particularly around maintaining objectivity and managing the researcher’s role in influencing the group they are observing. Ethnography requires careful ethical considerations, such as gaining informed consent and respecting privacy, given the often intimate nature of the research. Despite these challenges, the rich, contextual insights that ethnography provides make it a powerful approach for understanding the lived experiences of individuals within their cultural and social environments.

Experimental and Quasi-Experimental Design

Experimental research is a highly controlled design that seeks to establish cause-and-effect relationships by manipulating one or more independent variables and observing their impact on dependent variables. This research design typically involves two groups: an experimental group that receives the treatment or intervention and a control group that does not. By randomly assigning participants to these groups, researchers can minimise bias and ensure that differences in outcomes are directly attributable to the variable being tested, rather than external factors. This randomisation strengthens the internal validity of the experiment.

Quasi-experimental designs are similar to experimental research but differ in one key aspect: they lack the random assignment of participants to experimental and control groups. In cases where randomisation is either impractical or unethical, such as in educational settings or when studying pre-existing groups, quasi-experimental designs provide a valuable alternative. While researchers still manipulate an independent variable and observe its effect on a dependent variable, the absence of randomisation means that there may be pre-existing differences between groups. As a result, researchers must account for these differences when analysing the outcomes, often using statistical methods to control for confounding variables.

Grounded Theory

Grounded theory is a qualitative research design intended to generate theory directly from the data rather than to test an existing hypothesis or apply a pre-existing theoretical framework. Unlike more traditional research approaches, grounded theory allows the theory to emerge naturally through the iterative process of data collection and analysis. Researchers continuously compare new data with previously gathered information. This ongoing comparison enables them to identify recurring patterns, concepts, and categories, which are then refined into a coherent theoretical framework. Grounded theory is particularly useful when studying processes, interactions, or behaviours for which theories do not yet exist or may not fully explain the phenomena.

One of the major advantages of grounded theory is its flexibility. Since it does not require researchers to adhere to a rigid hypothesis or framework from the start, the design allows for the exploration of unexpected insights that may arise during data collection. This makes it a powerful approach for investigating complex or under-researched topics. However, the open-ended nature of grounded theory can also be a challenge, as it requires researchers to be highly reflexive and adaptable throughout the research process. The absence of a pre-set framework means that analysis can be time-consuming, with researchers needing to sift through large amounts of data to construct a meaningful theory that adequately reflects the participants’ experiences and emerging patterns.

Narrative Inquiry

Narrative inquiry is a qualitative research design that focuses on the stories people tell about their personal experiences, aiming to understand how individuals construct meaning in their lives. Unlike other research approaches that may prioritise external observation or objective measurement, narrative inquiry dives into the subjective world of the participant. Researchers collect these narratives through interviews, journals, letters, or even autobiographies, and analyse how individuals structure their stories to make sense of their experiences. This approach is particularly useful in fields where understanding personal identity, life transitions, or cultural contexts requires a close examination of how people frame and interpret their lived experiences.

A key feature of narrative inquiry is its emphasis on the co-construction of meaning between the researcher and the participant. The researcher does not just passively collect stories but actively engages in dialogue, interpreting the narratives while considering how their own perspectives and biases influence the analysis. This collaborative process allows for a richer understanding of the subject matter but also demands a high level of reflexivity from the researcher. Since narratives are shaped by memory, culture, and social influences, researchers must carefully navigate issues of subjectivity, ensuring that the participant's voice is authentically represented while also providing a critical analysis of how the story fits within broader social or cultural patterns.

Phenomenology

Phenomenology is a qualitative research design that seeks to explore and understand individuals’ lived experiences of a particular phenomenon. Rather than focusing on objective measures or external observations, phenomenology prioritises subjective experience, aiming to uncover the essence of how people perceive, interpret, and make sense of their experiences. Researchers using this design typically collect data through a variety of in-depth methods such as interviews or reflections, allowing participants to describe their personal encounters with the phenomenon in their own words. The goal is to view the experience as closely as possible through the eyes of the individuals who lived it, capturing its richness and complexity without external influence.

While this research design provides deep insights into human consciousness and subjective experience, it can be challenging to generalise the findings due to the intensely personal nature of the data. Nevertheless, phenomenology's strength lies in its ability to provide a profound, context-rich understanding of how individuals uniquely experience and interpret specific aspects of life, making it invaluable for exploring complex, emotionally charged, or abstract phenomena.

Survey Research

Survey research is a widely utilised design in both quantitative and qualitative research that involves gathering data from a large group of respondents, typically through structured questionnaires. This approach is highly versatile, allowing researchers to collect information about a wide range of topics, including attitudes, behaviours, preferences, and demographic characteristics. One of the main advantages of survey research is its ability to gather data from a broad population efficiently, making it possible to identify trends, correlations, or patterns within large datasets. Surveys can be administered in various formats, such as online, by phone, or in person, providing flexibility in how researchers reach their target audience.

However, the quality and reliability of the data collected through surveys depend heavily on the survey's design. Well-constructed surveys require carefully worded questions that avoid bias and confusion, and they must be designed to ensure that respondents understand and can accurately answer the questions. Another challenge is ensuring a high response rate, as low participation can skew results and affect the study's representativeness. Despite these limitations, survey research remains a powerful tool in fields like marketing, social sciences, public health, and education, where large-scale data collection is necessary to inform policies, identify trends, or make generalisations about a population's characteristics or behaviours.

You can also learn more about research designs and methods by watching the videos below.

Questions to ponder

How does the nature of the research question influence the decision to use a particular research design?

How do ethical concerns shape the choice of research design?

What types of research questions are best suited for case study research, and how do these differ from questions better addressed through autoethnography?

The power of collaborative writing and peer feedback in doctoral writing groups


Dr Basil Cahusac de Caux


Dr Basil Cahusac de Caux is an Associate Professor with a specialisation in the sociology of higher education, postgraduate research, and the sociology of language.



Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, AI literacy, research literacy, academic identity, and student wellbeing.


Have you ever wondered how doctoral students can navigate the challenging journey of academic writing? For many, the answer lies in the strength of community and the power of collaborative feedback. Our recent paper explores this very subject, examining how doctoral writing groups can transform the academic experience through peer feedback and collective learning.

Our study centres on a collaborative book project where doctoral students wrote and peer-reviewed each other’s chapters, ultimately producing a book titled Wellbeing in Doctoral Education: Insights and Guidance from the Student Experience. This project wasn’t just about writing; it was about creating a community of practice, where students learned together, shared experiences, and supported each other through the arduous process of academic writing. The concept of communities of practice is pivotal in understanding this study. These communities are formed by individuals who share a passion or concern for something they do, learning to do it better through regular interaction.

In the context of our specific doctoral writing groups, the shared domain was academic writing and publishing of the academic book, and the community was formed through mutual engagement and support. Participants were united by their commitment to improving their academic writing through peer feedback. This shared focus provided a common ground for all members, fostering a sense of belonging and purpose. Building a supportive community was crucial. The writing groups created a space where students felt safe to share their work, provide feedback, and discuss their challenges. This environment of trust and collegiality was essential for effective learning and personal growth. Through their interactions, the group developed a shared repertoire of resources, experiences, and practices. This included not just the technical aspects of writing but also the emotional and psychological support needed to thrive in academia. Participants learned from each other, gaining insights into different writing styles, feedback techniques, and academic expectations.

One of the most significant findings from our study was the transformative power of peer feedback. Participants found that receiving and giving feedback was instrumental in improving their writing. Feedback was not only about correcting mistakes but also about providing affirmation and recognising the potential and effort of the writers. This helped build confidence and self-esteem. Another powerful aspect of peer feedback was the opportunity to learn from others. This process helped participants identify their own mistakes and areas for improvement. By reviewing peers’ work, participants also gained new perspectives and ideas that they could apply to their own writing.

Our findings illustrate how peer feedback and collaborative practices within writing groups can significantly enhance the doctoral experience. Participants discovered that, despite their unique backgrounds and stories, they shared common challenges in their academic journeys. This realisation fostered a sense of community and mutual understanding. Our findings highlight the dual nature of the doctoral experience: each student has a unique narrative, yet their struggles and successes resonate with others. This shared experience of uncovering commonalities amidst diversity facilitated a deeper understanding and appreciation of one another’s viewpoints, thereby fostering a sense of community and collegiality within the group. This collective recognition of shared struggles also helped alleviate feelings of isolation and promoted a supportive environment. Our findings also emphasise the importance of reflective writing and feedback in promoting personal growth and academic development. Through sharing their stories, participants articulated and reshaped their identities in academia, which helped them navigate both personal and academic development.

Our study highlights the immense value of collaborative writing and peer feedback in doctoral education. By fostering a supportive community of practice, doctoral students can navigate the complexities of academic writing more effectively, develop their academic identities, and build the confidence needed to succeed in academia. This approach not only improves writing skills but also provides emotional and psychological support, making the doctoral journey a more enriching and less isolating experience.

The findings of our study have several important implications for doctoral education:

  • Institutions should encourage the formation of writing groups and other collaborative learning opportunities to help doctoral students develop their writing skills and academic identities.
  • Developing students’ ability to give and receive feedback is crucial. Our study shows that feedback literacy can significantly enhance the quality of academic writing and the overall learning experience.
  • Creating a safe and supportive environment where students can share their work and experiences is essential for their personal and academic growth.

Taken together, our study shows that embracing the power of community and collaboration could be the key to transforming the doctoral experience, making it more supportive, inclusive, and ultimately, more successful for all students involved.

Questions to ponder

How do your emotions influence academic writing and reactions to feedback?

Are there hidden practices of publishing that should be discussed more openly?

How can academic institutions better support the formation of communities of practice among doctoral students?

What are some challenges that might arise in implementing peer feedback systems, and how can they be addressed?

In what ways can the process of giving and receiving feedback be made more effective and less emotionally taxing for students?

The AI literacy framework for higher education


Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, AI literacy, research literacy, academic identity, and student wellbeing.



Dr Basil Cahusac de Caux


Dr Basil Cahusac de Caux is an Associate Professor with a specialisation in the sociology of higher education, postgraduate research, and the sociology of language.


In an era where generative artificial intelligence (AI) permeates every aspect of our lives, AI literacy in higher education has never been more crucial. In our recent paper, we delve into our own journeys of developing AI literacy, showcasing how educators can seamlessly integrate AI into their teaching practices. Our goal is to cultivate a new generation of AI-literate educators and graduates. Through our experiences, we also created a comprehensive framework for AI literacy, highlighting the transformative potential of embracing AI in educational settings.

We embraced AI with optimism and enthusiasm, seeing it as a tool to be harnessed rather than feared. In our recent paper, we passionately argue that AI literacy is an indispensable skill for today’s graduates. We emphasise that this mindset requires a significant cultural shift in higher education, advocating for the integration of AI as a valuable learning aid. By fostering this change, we can unlock AI’s potential to enhance education and empower students to thrive in an increasingly digital world.

Our journey began with curiosity and a willingness to experiment with AI in our educational practices. Lynette, for instance, integrated AI into her role, showcasing its capacity as an academic language and literacy tutor. She encouraged her students, many of whom are from non-English speaking backgrounds, to use tools like Grammarly and ChatGPT to improve their academic writing. By doing so, she highlighted the importance of collaboration between students and AI, promoting deeper learning and engagement.

In a Master's level course on autoethnography, Lynette inspired her students to harness generative AI for creative data generation. She showcased how tools like DALL-E could be used to create artworks that visually represent their research experiences. This approach not only ignited students' creativity but also deepened their engagement with their assignments, allowing them to explore their research from a unique and innovative perspective.

Basil introduced his students to the power of generative AI through hands-on assignments. One notable task involved creating a public awareness campaign centred around the UN’s Sustainable Development Goals. Students utilised DALL-E to produce compelling visuals, showcasing AI’s ability to amplify creativity and enhance learning outcomes. This practical approach not only highlighted the transformative potential of AI but also encouraged students to engage deeply with important global issues through innovative and impactful media.

While the benefits of AI in education were clear to us, we also encountered ethical considerations and challenges. In our paper, we emphasised the importance of transparency and informed consent when using AI in research and teaching. For example, we ensured that students and research participants were aware of how their data would be used and the potential biases inherent in AI-generated content. Moreover, we highlighted the environmental impact of using AI technologies. The energy consumption of AI models is significant, raising concerns about their sustainability. This awareness is crucial as educators and institutions navigate the integration of AI into their practices.

From our experiences and reflections, we developed a groundbreaking AI literacy framework for higher education, encompassing five domains: foundational, conceptual, social, ethical, and emotional. As illustrated in the figure below, this comprehensive framework is designed to empower educators and students with the essential skills to adeptly navigate the intricate AI landscape in education. By promoting a holistic and responsible approach to AI literacy, our framework aims to revolutionise the integration of AI in academia, fostering a new generation of informed and conscientious AI users.

Elements of AI Literacy in Higher Education. Download here.

From these essential domains of AI literacy, we have crafted a comprehensive framework for AI literacy in higher education.

The framework underscores the following key features:

  • Foundational Understanding: Mastering the basics of accessing and using AI platforms.
  • Information Management: Skillfully locating, organising, evaluating, using, and repurposing information.
  • Interactive Communication: Engaging with AI platforms as interlocutors to create meaningful discourse.
  • Ethical Citizenship: Conducting oneself ethically as a digital citizen.
  • Socio-Emotional Awareness: Incorporating socio-emotional intelligence in AI interactions.

The AI Literacy Framework for Higher Education. Download here.

Our AI literacy framework has significant implications for higher education. It provides a structured approach for integrating AI into teaching and research, emphasising the importance of ethical considerations and emotional awareness. By fostering AI literacy, educators can prepare students for a future where AI plays a central role in various professional fields.

Embracing AI literacy in higher education is not just about integrating new technologies; it’s about preparing students for a rapidly changing world. Our AI literacy framework offers a comprehensive guide for educators to navigate this transition, promoting ethical, effective, and emotionally aware use of AI. As we move forward, fostering AI literacy will be crucial in shaping the future of education and empowering the next generation of learners.

Questions to ponder

How can educators ensure that all students, regardless of their technological proficiency, can access and utilise generative AI tools effectively?

In what ways can generative AI tools be used to enhance students’ conceptual understanding of course materials?

How can the concept of generative AI as a collaborator be integrated into classroom discussions and activities?

How can educators model ethical behaviour and digital citizenship when using generative AI tools in their teaching?

How can understanding the emotional impacts of generative AI interactions improve the overall learning experience?

How can the AI literacy framework be practically integrated into different academic disciplines and curricula?

Developing AI literacy in your writing and research


Dr Lynette Pretorius

Dr Lynette Pretorius is an award-winning educator and researcher specialising in doctoral education, AI literacy, research literacy, academic identity, and student wellbeing.


I have recently developed and delivered a masterclass about how you can develop your AI literacy in your writing and research practice. This included a series of examples from my own experiences. I thought I’d provide a summary of this masterclass in a blog post so that everyone can benefit from my experiences. You can also watch the full masterclass below.


Artificial intelligence (AI) has been present in society for several years and refers to technologies which can perform tasks that used to require human intelligence. This includes, for example, computer grammar-checking software, autocomplete or autocorrect functions on our mobile phone keyboards, or navigation applications which can direct a person to a particular place. Recently, however, there has been a significant advancement in AI research with the development of generative AI technologies. Generative AI refers to technologies which can perform tasks that require creativity. In other words, these generative AI technologies use computer-based networks to create new content based on what they have previously learnt. These types of artistic creations have previously been thought to be the domain of only human intelligence and, consequently, the introduction of generative AI has been hailed as a "game-changer" for society.

I am using generative AI in all sorts of ways. The AIs I use most frequently include Google's built-in generative AI in email, chat, Google Docs, etc., which learns from your writing to suggest likely responses. I also use Grammarly Pro to help me identify errors in my students' writing, allowing me more time to give constructive feedback about their writing, rather than trying to find examples. This is super time-saving, particularly given how many student emails I get and the number of assignments and thesis chapters I read! I also frequently use a customised version of ChatGPT 4, which I trained to do things the way I would like them to be done. This includes responding in a specific tone and style, reporting information in specific ways, and doing qualitative data analysis. Finally, I use Leonardo AI and DALL-E to generate images, Otter AI to help me transcribe some of my research, Research Rabbit to help me locate useful literature on a topic, and AILYZE to help conduct initial thematic analysis of qualitative data.

The moral panic that was initiated at the start of 2023 with the advent of ChatGPT caused debates in higher education. Some people insisted that generative AI would encourage students to cheat, thereby posing a significant risk to academic integrity. Others, however, advocated that the use of generative AI could make education more accessible to those who are traditionally marginalised and help students in their learning. I came to believe that the ability to use generative AI would be a core skill in the future, but that AI literacy would be essential. This led me to publish a paper where I defined AI literacy as:

AI literacy is understanding "how to communicate effectively and collaboratively with generative AI technologies, as well as evaluate the trustworthiness of the results obtained".

Pretorius, L. (2023). Fostering AI literacy: A teaching practice reflection. Journal of Academic Language & Learning, 17(1), T1-T8. https://journal.aall.org.au/index.php/jall/article/view/891/435435567   

This prompted me to start to develop ways to teach AI literacy in my practices. I have collated some tips below.

  • Firstly, you should learn to become a prompt wizard! One of the best tips I can give you is to provide your generative AI with context. You should tell your AI how you would like it to do something by giving it a role (e.g., “Act as an expert on inclusive education research and explain [insert your concept here]”). This will give you much more effective results.
  • Secondly, as I have already alluded to above, you can train your AIs to work for you in specific ways! So be a bit brave and explore what you can do.
  • Thirdly, when you ask it to make changes to something (e.g., to fix your grammar, improve your writing clarity/flow), ask it to also explain why it made the changes it did. In this way, you can use the collaborative discussion you are having with your AI as a learning process to improve your skills.
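
The role-plus-context tip above can be sketched in Python. This is a minimal illustration only: the helper name `build_role_prompt` is my own invention, and the commented-out call assumes the official `openai` client (which requires an API key), so treat it as a pattern rather than a fixed recipe.

```python
# Build a role-plus-context prompt, following the "give your AI a role" tip.
# The helper name and message structure here are illustrative, not a fixed API.

def build_role_prompt(role: str, task: str) -> list[dict]:
    """Return a chat message list that assigns the AI a role before the task."""
    return [
        {"role": "system", "content": f"Act as {role}."},
        {"role": "user", "content": task},
    ]

messages = build_role_prompt(
    "an expert on inclusive education research",
    "Explain the concept of universal design for learning.",
)

# With the official openai Python client, the call would then look like:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4", messages=messages)
```

Separating the role (system message) from the task (user message) makes it easy to reuse the same role across many prompts, which is essentially what training a customised GPT does for you automatically.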

The most common prompts I use in my work are listed below. The Thesis Whisperer has also shared several common prompts, which you can find here.

  • "Write this paragraph in fewer words."
  • "Can you summarise this text in a more conversational tone?"
  • "What are five critical thinking questions about this text?"

I have previously talked about how you can use generative AI to help you design your research questions.

I have since also discovered that you can use generative AI as a data generation tool. For example, I have recently used DALL-E to create an artwork which represents my academic identity as a teacher and researcher. I have written a chapter about this process and how I used the conversation between myself and DALL-E as a data source. This chapter will be published soon (hopefully!).

Most recently, I have started using my customised ChatGPT 4 as a data analysis tool. I have a project that has a large amount of qualitative data. To help me with a first-level analysis of this large dataset, I have developed a series of 31 prompts based on theories and concepts I know I am likely to use in my research. This has allowed me to start the analysis of my data and give me direction as to areas for further exploration. I have given an example of one of the research prompts below.

In this study, capital is defined as the assets that individuals vie for, acquire, and exchange to gain or maintain power within their fields of practice. This study is particularly interested in six capitals: symbolic capital (prestige, recognition), human capital (technical knowledge and professional skills), social capital (networks or relationships), cultural capital (cultural knowledge and embodied behaviours), identity capital (formation of work identities), and psychological capital (hope, efficacy, resilience, and optimism). Using this definition, explain the capitals which have played a part in the doctoral student's journey described in the transcript.
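
As a rough sketch, pairing each theory-based prompt with each transcript for a first-level pass might look like the Python below. The list name, function name, and example strings are illustrative assumptions, not the actual project code, and only one of the 31 prompts is shown.

```python
# Pair every theory-based prompt with every transcript to build the
# first-level analysis requests. All names and strings are illustrative.

ANALYSIS_PROMPTS = [
    "Using this definition of capital, explain the capitals which have "
    "played a part in the doctoral student's journey described in the transcript.",
    # ...the remaining theory-based prompts would be listed here
]

def build_analysis_requests(transcripts: list[str]) -> list[str]:
    """Combine each analysis prompt with each transcript into one request."""
    requests = []
    for transcript in transcripts:
        for prompt in ANALYSIS_PROMPTS:
            requests.append(f"{prompt}\n\nTranscript:\n{transcript}")
    return requests

requests = build_analysis_requests(
    ["Transcript of interview 1", "Transcript of interview 2"]
)
# Each request would then be sent to the customised GPT, with the researcher
# reviewing, correcting, and extending every first-pass output.
```

Keeping the prompts in one list makes the first-level pass systematic: every transcript is interrogated with the same theoretical lenses, and the researcher's job shifts to verifying and deepening the AI's suggestions.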

What I have been particularly impressed by so far is my AI's ability to detect implicit meaning in the transcripts of the interviews I conducted. I expected it to be pretty good at explaining explicit mentions of concepts, but had not anticipated it would be so good at understanding more nuanced and layered meanings. This is a project that is still in progress, and I expect very interesting results.

There are some ethical considerations which should be taken into account when using generative AIs.

  • Privacy/confidentiality: Data submitted to some generative AIs could be used to train the generative AI further (often depending on whether you have a paid or free version). Make sure to check the privacy statements and always seek informed consent from your research participants.
  • Artwork: Generative AIs were trained with artwork without express consent from artists. Additionally, it is worth considering who the actual artist/author/creator of the artwork is when you use generative AI to create it. I consider both the user and the AI as collaborators working to create the artwork together.
  • Bias propagation: Since generative AIs are trained based on data from society, there is a risk that they may reflect biases present in the training data, perpetuating stereotypes or discrimination.
  • Sustainability: Recent research demonstrates that generative AI does contribute significantly to the user's carbon footprint.

It is also important to ethically and honestly acknowledge how you have used generative AI in your work by distinguishing what work you have done and what work it has done. I have previously posted a template acknowledgement for students and researchers to use. I have recently updated the acknowledgement I use in my work and have included it below.

I acknowledge that I used a customised version of ChatGPT 4 (OpenAI, https://chat.openai.com/) during the preparation of this manuscript to help me refine my phrasing and reduce my word count. The output from ChatGPT 4 was then significantly adapted to reflect my own style and voice, as well as during the peer review process. I take full responsibility for the final content of the manuscript.

My final tip: be brave! Go and explore what is out there and see what you can achieve! You may be surprised how much it revolutionises your practices, freeing up your brain space to do really cool and creative higher-order thinking!

Questions to ponder

How does the use of generative AI impact traditional roles and responsibilities within academia and research?

Discuss the implications of defining a ‘collaborative’ relationship between humans and generative AI in research and educational contexts. What are the potential benefits and pitfalls?

How might the reliance on generative AI for tasks like grammar checking and data analysis affect the skill development of students and researchers?

The blog post mentions generative AI’s ability to detect implicit meanings in data analysis. Can you think of specific instances or types of research where this capability would be particularly valuable or problematic?

Reflect on the potential environmental impact of using generative AI as noted in the blog. What measures can be taken to mitigate this impact while still benefiting from AI technologies in academic and research practices?