Note: All analyses control for participant age, gender, race, and socioeconomic status.
Partners’ known or suspected ESI behavior was also significantly associated across relationships, supporting the hypotheses for our second research question. When compared to those who reported no partner ESI in the first relationship, participants who reported known partner ESI in the first relationship were 2.4 times more likely to report known partner ESI in the second relationship (22% compared to 9%). Further, participants who reported suspected partner ESI in the first relationship were 4.3 times more likely to suspect their second relationship partners of ESI (37% compared to 6%).
Hypotheses regarding our third research question were not supported. We found no evidence that engagement in ESI in first relationships predicted any differences in the likelihood of reporting one’s partner’s known or suspected ESI in the second relationship.
Finally, in follow-up analyses, neither gender nor marriage significantly moderated the link between ESI in first relationships and ESI in second relationships for any model. Thus, we found no evidence that the persistence of ESI from one relationship to the next differed for women versus men or for couples who married compared to those who did not.
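As a methodological aside, a moderation test of this kind is typically specified as an interaction term in a logistic regression. The sketch below is not the authors' model; it uses synthetic data and hypothetical variable names purely to illustrate how a non-significant interaction coefficient corresponds to "no evidence of moderation":

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "esi_rel1": rng.integers(0, 2, n),  # ESI in first relationship (hypothetical coding)
    "female": rng.integers(0, 2, n),    # potential moderator
})
# Simulate second-relationship ESI with a main effect of prior ESI
# and, by construction, no true moderation by gender.
logit_p = -1.5 + 1.1 * df["esi_rel1"]
df["esi_rel2"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Moderation is carried by the esi_rel1:female interaction term;
# a non-significant coefficient gives no evidence the link differs by gender.
model = smf.logit("esi_rel2 ~ esi_rel1 * female", data=df).fit(disp=0)
print(model.params["esi_rel1:female"], model.pvalues["esi_rel1:female"])
```

The same template applies to the marriage moderator by swapping in a marital-status indicator.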
The current study addressed an important gap in the literature on infidelity in romantic relationships by examining persistent or serial risk of infidelity across subsequent romantic relationships over time. Results from this study indicated that people who engaged in infidelity themselves, knew about a partner’s infidelity, or suspected a partner of infidelity had a higher risk of having those same infidelity experiences again in their next romantic relationships. These findings controlled for many demographic variables that are predictive of engaging in infidelity, and they did not vary based on gender or marital status.
Overall rates of infidelity in this sample were toward the high end of the range of previous estimates, with 44% of participants reporting engaging in infidelity themselves during the relationships captured by this study, 30% reporting having at least one partner who they knew engaged in infidelity, and 18% reporting that they suspected a partner of engaging in infidelity. These higher rates are expected, given that this was an unmarried sample at baseline, and unmarried samples tend to have higher rates of infidelity than married samples (Treas & Giesen, 2000). Two notable departures from the prior literature were that there was no difference in the prevalence of reporting one’s own or a partner’s infidelity for women and men in this sample, and that participants with more years of education were less likely to report infidelity. These findings suggest that the existing understanding of gender and education differences in infidelity is nuanced; it likely reflects a complex interplay of social forces (e.g., power, privilege, and opportunity) that is not easily captured by simple demographic characteristics and that may be changing rapidly along with larger societal changes. Previous descriptions of demographic risk factors for infidelity do not necessarily accurately characterize the younger, unmarried population represented in the current study.
Our results indicated a three-fold increase in the likelihood that a person will engage in infidelity if they already have a history of engaging in ESI, and a two- to four-fold increase in the likelihood of having a partner engage in ESI if a person knew about or suspected infidelity from a past relationship partner. Thus, effects in the current study were generally medium in size.
These findings suggest that previous engagement in infidelity is an important risk factor predicting engagement in infidelity in a subsequent relationship, even after accounting for key demographic risk factors. At the same time, it is important to interpret these effects in the context of their base rates, which suggest that most people who reported either their own or their partner’s infidelity during their first relationship in this study did not report having that same experience again in their second relationship during the study timeframe. That is, although a history of infidelity may be an important risk factor of which to be aware, it is not necessarily true that someone who is “once a cheater” is “always a cheater.” Understanding what distinguishes those who experience repeated infidelity from those who do not remains an important next step, both for understanding the development of infidelity risk and for designing effective interventions for individuals who would like to stop negative relationship behaviors and experiences from carrying over into their future relationships.
One important consideration is that first relationships and second relationships were somewhat different in the current study. First relationships were longer and more likely to involve living together. This makes sense in our sample, given that first relationships began before the study timeframe, whereas second relationships were newer simply by virtue of our data collection procedure. First relationships also necessarily ended during the study, but not all second relationships did. These differences likely explain the differences in rates of infidelity in first and second relationships. However, we do not believe these differences alter the conclusions reached from the current analyses. It may be the case that even more participants with infidelity in first relationships would have gone on to report infidelity again in the second relationships that were still ongoing, which would have strengthened the effects found in our analyses. The only circumstance that would threaten the validity of our primary conclusions about serial infidelity risk would be if participants without first-relationship infidelity were to “catch up” to those with first-relationship infidelity by reporting a greater rate of infidelity later on in second relationships, and we do not know of any reason to expect that to be the case.
We found no evidence that reports of suspected or known partner infidelity were related to a person’s own past history of engaging in infidelity. These null results belie the common wisdom that those who are suspicious of their partners’ fidelity have likely engaged in infidelity themselves, at least within the context of the two subsequent young adult romantic relationships captured in the present study. On the other hand, our results did indicate that even when they left one relationship and began another, people who suspected previous partner ESI were much more likely to be suspicious of their new relationship partners as well. Individual differences in trait suspiciousness or jealousy, independent of relationship context, may play a role in suspecting a relationship partner of infidelity; for example, parent relationship models (e.g., Rhoades, Stanley, Markman, & Ragan, 2012) and stable relationship attachment styles (e.g., DeWall et al., 2011) may impact persistent attitudes or beliefs about fidelity. Further, little is known about the accuracy of suspicions of infidelity. Future research investigating how frequently individuals are correct when suspecting partner infidelity could shed light on the rationale people may have for being suspicious of their partners.
Perhaps most intriguingly, we found that participants who said they were certain that their previous relationship partners engaged in ESI were more than twice as likely to go on to report feeling certain that their current partner had engaged in ESI in their next relationship. We cannot make assertions about causality using data from the current study. It may be that some individuals have persistent relationship styles that tend to create a relationship context in which a partner’s infidelity is likely (Allen et al., 2005). Alternatively, some people may learn that these types of behaviors are more acceptable or expected after experiencing them once (e.g., Glass & Wright, 1992; Simon et al., 2001), and thus may become more tolerant of signs of infidelity in future relationship partners. This explanation is consistent with theories that posit a bidirectional link between infidelity experiences and attitudes. It may also be the case that socioeconomic constraints, cultural values, or limited partner pools make certain individuals more likely to select or tolerate infidelity in partners again and again. For example, scholars of race and relationships posit that social factors causing an unequal gender ratio in Black communities create a context in which male infidelity is ignored, tolerated, or even considered normative (Bowleg, Lucas, & Tschann, 2004; Pinderhughes, 2002). Finally, because we did not assess how or why participants knew about their partners’ infidelity, we must consider the possibility that even participants who reported being “certain” about their partners’ infidelity could be reporting subjective perceptions related to their own suspicion, jealousy, or other personality traits. These individuals may be more likely to repeatedly report certain knowledge of partner infidelity despite lacking definitive evidence.
In addition to filling an important gap in our understanding of serial infidelity, results from this study may be relevant for clinical interventions as well. Prevention efforts such as relationship education may help individuals interrupt the tendency to repeatedly engage in ESI in different relationships. Other researchers have identified a need for infidelity prevention to identify the people who are most at risk, and to address the contextual and situational risks that may then lead to infidelity (Markman, 2005). This study demonstrated that past infidelity is an important indicator to identify those who are at continued risk of engaging in infidelity, over and above common demographic risk factors. Moreover, this study points to an opportunity for clinicians to help individuals identify the circumstances that led to past infidelity in order to avoid repeating similar patterns again in future relationships.
The findings regarding serial partner similarities may indicate an important additional target for preventative relationship education aimed at helping individuals make better decisions in their romantic lives. Relationship interventions can encourage participants to make informed choices about selecting potential partners based on those partners’ romantic histories. Interventions can also teach skills appropriate for mitigating the particular risks that may accompany having a relationship with someone who has engaged in infidelity during a previous relationship. For example, professionals have noted that couples’ ability to discuss the individual and relationship factors that led up to infidelity is a strong indicator of successful relationship recovery (Gordon, Baucom, & Snyder, 2004), and there is a growing body of research in support of explicit conversations aimed at defining relationships as they undergo transitions (Stanley, Rhoades, & Markman, 2006). Preliminary research suggests that such conversations may be particularly important with regard to managing infidelity (Knopp, Vandenberg, Rhoades, Stanley, & Markman, 2016). It may be that conversations aimed at reaching a mutual definition of relationship fidelity and anticipating potential barriers to maintaining fidelity could be beneficial for couples who are at risk due to past experiences or other risk factors, though this intervention remains to be empirically evaluated.
The current study has limitations that are important to consider. First, the sample is not likely to represent all people in the United States equally well. The ages of eligible participants at the time of initial recruitment were restricted to between 18 and 35 years old; the effects of serial infidelity may be different among younger adolescents or older adults, particularly considering the different relationship structures and expectations that exist throughout the lifespan. Although the sample from the current study was a subset of a larger group of participants that was representative of the U.S. in terms of geographic location, race, and ethnicity, the smaller sample used here is not likely to reliably represent all racial and ethnic minorities, because non-White participants comprised a small proportion of the sample. In addition, inclusion criteria for participants in this study involved being in a relationship with “someone of the opposite sex” at the time of recruitment. Thus, findings may not generalize to people who have same-gender relationships.
As previously discussed, the measurement of infidelity in the current study has some limitations. Although most research on infidelity to date, including the current study, has defined infidelity as ESI (cf. Allen et al., 2005; Blow & Hartnett, 2005a), ESI is an imperfect proxy for infidelity. In particular, it is unclear in this study whether the ESI was considered to be allowed in the relationship (e.g., as in consensually non-monogamous relationships). Our data indicate that at the end of the study, 2% of our sample reported being in an “open relationship,” but we do not know this information about the majority of the relationships included in the current analysis; further, the term “open relationship” may not capture all different forms of consensual ESI. Future research could define infidelity in more precise ways to distinguish it from consensual non-monogamy and could also measure different facets of infidelity (e.g., Luo, Cartun, & Snider, 2010).
Finally, all data in the current study are self-reported and are therefore subject to reporting bias and shared method variance. This issue is exacerbated by the fact that infidelity is a sensitive topic that is subject to social desirability effects in research (e.g., Whisman & Snyder, 2007). Thus, these results may be partially explained by consistency in willingness to report infidelity: individuals who are unwilling to report their own (or a partner’s) infidelity in one relationship are probably also not willing to report infidelity at any point in the future. The current study’s procedure – collecting surveys by mail rather than in person – may have helped to ameliorate this issue, but future research could try alternative methods of collecting data about infidelity.
The current study provides novel contributions to established notions of infidelity across serial relationships, including that personal engagement in ESI and perceptions of partner engagement in ESI predict increased risk of serial infidelity in subsequent relationships. Infidelity can harm individuals and relationships, and these results can inform prevention or intervention efforts by targeting risk factors based on previous relationship patterns in addition to the various individual, relational, and contextual factors demonstrated to predict infidelity in previous work. Although intervention research will be necessary to explore which risk factors are most useful to address and through what mechanisms, our findings clearly demonstrate the need for researchers and clinicians to take into account previous infidelity patterns while developing an understanding of how to predict and intervene with regard to risk for serial infidelity.
Research reported in this publication was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health [award number R01HD047564]. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Declaration of Conflicting Interests:
The authors declare that they have no conflicts of interest with respect to authorship or the publication of this article.
Kayla Knopp, University of Denver.
Shelby Scott, Veteran Affairs Eastern Colorado Health Care System.
Lane Ritchie, University of Denver.
Galena K. Rhoades, University of Denver.
Howard J. Markman, University of Denver.
Scott M. Stanley, University of Denver.
Within China’s scientific community, there persists a notion that English is not only a medium of communication, but also a bridge to global recognition. The use of Chinese remains taboo, a silent sacrifice at the altar of international acceptance.
When China’s Chang’e-5 mission in 2020 retrieved the first lunar samples in decades from the moon’s near side, the first research was carried out by a joint team of Chinese and Western scientists and appeared in Science magazine in October 2021.
This was followed by three more scientific papers published by Nature in the same month, according to an editor with the Science China Press, a scientific journal publishing company of the Chinese Academy of Sciences (CAS), recalling the global sensation.
The rocks collected in 2020 led to a number of surprising discoveries, as they turned out to be much younger than the samples brought back by the US Apollo and Soviet Luna missions in the 1960s and 1970s.
“We certainly hope that some of our country’s groundbreaking scientific and technological achievements can appear in China’s top journals, so that we can expand our influence,” said the editor, who asked not to be named.
It was not always so. Tu Youyou, who won China’s first Nobel Prize for science in 2015, published her paper on the discovery of artemisinin in the Chinese Science Bulletin in 1977.
The journal, co-sponsored by CAS and the National Natural Science Foundation of China, once published many major discoveries but since the 1990s has suffered from a lack of quality manuscripts.
Speaking at a conference in 2018, George Gao Fu – a leading scientist in the field of virology and immunology and former head of the Chinese Centre for Disease Control and Prevention (CDC) – said Chinese as a language of academic communication “used to be glorious”.
Breakthroughs, including Tu’s achievement and the discovery of high-temperature iron-based superconducting materials, had been published first in Chinese-language journals and then recognised by the world, he said.
However, for three decades China’s important scientific research results were “basically first reported by foreign journals”, Gao noted.
Interestingly, just a few years later, Gao led a landmark study by a Chinese CDC team on the epidemiology of Covid-19 that was first published in January 2020 by the New England Journal of Medicine.
The move caused controversy in China, where the public was eager for any information about the new coronavirus that causes Covid-19 as the country grappled with the early stages of the pandemic.
The response to the overseas publication of the study reflected a broader, uncomfortable dilemma for Chinese researchers: while they recognise the importance or necessity of writing in their native language, it is difficult on a practical level.
Newton’s Principia Mathematica was written in Latin. Einstein’s first influential papers were written in German. Marie Curie’s work was published in French.
Yet, since the middle of the last century, there has been a shift in the global scientific community, with most scientific research now published in a single language – English, which is spoken by only about 18 per cent of the world’s population.
While it is estimated that up to 98 per cent of global scientific research is published in English, the number of papers by Chinese scholars has been climbing.
As early as 2010, Chinese biologist Zhu Zuoyan, a CAS academician, observed that the number of papers published by scholars from China had risen from 0.2 per cent of the world’s total to 10 per cent within a decade, second only to the United States.
But China’s academic evaluation system encourages the flow of excellent papers to foreign journals, which had partly led to the country’s lack of international academic impact, despite having the second largest number of academic journals – more than 4,800 – in the world, he said.
In late 2019, Li Zhimin, former director of the Ministry of Education’s Science and Technology Development Centre, called for papers to be published in the country’s official language if the research is funded by the government.
The requirement would make it easier for funders to review research projects, facilitate exchanges with their domestic counterparts and improve the nation’s scientific literacy, he said.
A CAS physicist, who declined to be named, stressed that the proposal to “write research results on the soil of the motherland” could not simply be understood as submitting and publishing articles in domestic journals and in Chinese.
That would be “parochial”, he said. Rather, the key is to focus research on solving crucial issues or problems in China’s development, rather than blindly following global research hotspots and wasting research funds and resources.
But at an individual level, there are plenty of pragmatic reasons and incentives for researchers to do just that. Under China’s evaluation system, getting articles published in prestigious English-language journals often brings rewards.
In addition to promotion opportunities and academic honours, there is also fame, with overseas scientific recognition tending to attract wide media and public attention.
Last month, for example, biologist Zhu Jiapeng earned a prize from Nanjing University of Traditional Chinese Medicine for his “outstanding contribution” and a grant of 1 million yuan (US$138,000) as one of the lead authors in a study published by Nature.
Astronomer Deng Licai, with the National Astronomical Observatories under CAS, believes that research results from national missions such as the Chang’e programme should be prioritised for publication in domestic journals.
Deng, who has been a team leader on China’s giant telescope project since its development in the 1990s and 2000s, said he insisted that the first batch of studies to emerge from the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (Lamost) appeared in domestic journals.
“This can firstly highlight the nationality of these independent and cutting-edge major scientific projects, and also help to enhance the international impact of domestic academic journals,” he said.
But English has become the international scientific community’s lingua franca and should be used as a medium of communication, Deng said, adding that it had nothing to do with politics.
According to Deng, the scientists who study the Chang’e-6 lunar samples could consider publishing in some of China’s English-language journals, such as Research in Astronomy and Astrophysics (RAA).
“All of our pre-research articles on the Lamost programme published in RAA have made it into international lists of highly cited articles,” he said.
Chinese Academy of Social Sciences researcher Zhu Rui, who prefers to publish in Chinese journals – partly because the academy encourages it – said that using his own language when writing academic papers is not an obstacle, as long as the scientific community maintains substantive communication.
The proliferation of artificial intelligence (AI) technology has brought both innovative opportunities and unprecedented challenges to the education sector. Although AI makes education more accessible and efficient, the intentional misuse of AI chatbots in facilitating academic cheating has become a growing concern. By using the indirect questioning technique via a list experiment to minimize social desirability bias, this research contributes to the ongoing dialog on academic integrity in the era of AI. Our findings reveal that students conceal AI-powered academic cheating behaviors when directly questioned, as the prevalence of cheaters observed via list experiments is almost threefold the prevalence of cheaters observed via the basic direct questioning approach. Interestingly, our subsample analysis shows that AI-powered academic cheating behaviors differ significantly across genders and grades, as higher-grade female students are more likely to cheat than newly enrolled female students. Conversely, male students consistently engage in academic cheating throughout all grades. Furthermore, we discuss potential reasons for the heterogeneous effects in academic cheating behavior among students such as gender disparity, academic-related pressure, and peer effects. Implications are also suggested for educational institutions to promote innovative approaches that harness the benefits of AI technologies while safeguarding academic integrity.
Artificial intelligence (AI) has emerged as a transformative technology, reshaping how businesses and individuals interact, communicate, and access services (Kutyauripo et al., 2023; Olan et al., 2022; Phan et al., 2023; Wang et al., 2023). The rapid adoption of these intelligent virtual applications has occurred across many sectors such as business, agriculture, transportation, and healthcare services (Ali et al., 2023; Du et al., 2023; Kulkov, 2021; Kumar et al., 2023; Wang et al., 2022). In a similar vein, the field of education has undergone significant transformation with the incorporation of AI applications (Mubin et al., 2020; Qu et al., 2022; Udupa, 2022). Specifically, AI virtual assistants are altering teacher-student interactions, content delivery, and learning methods (Aung et al., 2022; Dai et al., 2023). By providing detailed instruction, instantaneous assistance, greater interactivity, and streamlined administration, AI-powered chatbots are revolutionizing the educational system (Ratten & Jones, 2023). Education is improved in terms of accessibility, efficiency, and engagement through the use of AI virtual assistants. AI-powered chatbots render lectures more accessible and productive for all educational stakeholders (Kasneci et al., 2023).
While AI-powered applications offer many valuable outcomes in the field of education, there are also a lot of potential drawbacks regarding data privacy, accuracy, overreliance, and ethical concerns (Guo et al., 2023; Kasneci et al., 2023; Koo, 2023; Sollosy & McInerney, 2022). Importantly, academic misconduct issues have been raised by the intervention of AI-powered chatbots, which present challenging problems for educational institutions (Fyfe, 2023; Sweeney, 2023). AI-powered chatbots, which are outfitted with sophisticated algorithms and capabilities, provide students with a wide range of assistance during assignments or exams (Ansari et al., 2023; Cotton et al., 2023; Currie, 2023; Dalalah & Dalalah, 2023; Moisset & Ciampi De Andrade, 2023). With the assistance of AI chatbots, students can quickly and easily access auto-generated answers, responses, or plagiarized content, pushing them to break the fundamental regulations of academic integrity (Bakar-Corez & Kocaman-Karoglu, 2023; Li et al., 2023). Importantly, students might intentionally use AI-generated responses for academic cheating purposes that appear highly credible but may not be easily detectable by any anti-plagiarism applications (Choi et al., 2023; Livberber & Ayvaz, 2023; Sweeney, 2023). The intricate interplay between AI chatbots and academic cheating raises emerging concerns among educational institutions in preserving the principles of academic integrity (Guo & Wang, 2023; Kasneci et al., 2023).
Although previous studies have provided valuable insights into academic cheating in the digital age, noticeable research gaps remain. First, most existing studies rely on the direct questioning approach in their data collection method to examine academic cheating behavior. For instance, Ossai et al. (2023) examined the relationship between academic performance and academic integrity among 3,214 Nigerian high school students via the direct questioning approach in a paper survey. Similarly, Park (2020) examined a sample of 2,360 Korean college students by employing direct questions to measure the frequency of cheating behaviors on a 5-point Likert scale. Regarding the differences in academic cheating behavior in online education and face-to-face education, Ababneh et al. (2022) used online questionnaires to investigate 176 UAE undergraduates. However, examining highly sensitive issues such as academic cheating via the direct questioning approach may raise concerns about the reliability of outcomes due to the effect of social desirability bias. Specifically, social desirability bias is a widely observed phenomenon wherein individuals provide untruthful responses to align with societal norms or expectations, thus helping them to present themselves positively rather than revealing accurate or precise information (Blair & Imai, 2012). Biased responses can arise from the predilection to pursue social validation or an aversion to criticism. Importantly, social desirability bias can manifest in diverse settings, encompassing interviews, surveys, or other data collection methods that rely on self-reports, notwithstanding the anonymity afforded by these approaches (Larson, 2019). As a result, social desirability bias can significantly compromise the credibility and accuracy of research outcomes.
The skewing of data resulting from untruthful participants can bias the findings and produce erroneous conclusions (Ahmad et al., 2023 ; Latkin et al., 2017 ; Ried et al., 2022 ). In the context of the education sector, direct responses to academic cheating might be biased, as students might conceal academic cheating behavior for a variety of reasons, often rooted in a complex interplay of academic and social reasons. For academic reasons, cheating is typically considered a violation of academic integrity regulations and can result in disciplinary actions ranging from failing a specific assignment to even expulsion from the institution. In terms of social reasons, admitting to academic dishonesty might negatively affect students’ self-esteem and reputation. As such, students may conceal their cheating behavior in basic direct questioning to avoid unexpected consequences.
Second, numerous studies have examined heterogeneity in cheating behavior by gender. For instance, Yazici et al. (2023) indicate that females report a lower prevalence of academic cheating in face-to-face education. In a similar vein, Mohd Salleh et al. (2013) highlighted that male students are more likely to violate academic integrity than their female counterparts. Conversely, Ezquerra et al. (2018) and Ip et al. (2018) found no difference in academic cheating between males and females. Despite these valuable findings on gender heterogeneity, how gender disparities in academic cheating vary across grades remains understudied.
Addressing these gaps is essential for developing a comprehensive understanding of academic cheating in the era of AI. This study seeks to answer the following research question: To what extent do undergraduates conceal AI-powered academic cheating behaviors when investigated via direct versus indirect questioning? Regarding the scope of cheating behaviors, we focus on cheating history (students who have cheated) and cheating intention (students who intend to cheat in the future). By delving into this question, our study aims to uncover not only the current prevalence of AI-powered academic cheating among undergraduates but also its heterogeneity among students with diverse individual characteristics. To do so, we examine a sample of 1,386 Vietnamese undergraduates to unveil academic cheating behaviors involving ChatGPT (Generative Pre-trained Transformer), an AI-powered language model developed by OpenAI. In terms of popularity, ChatGPT reached 100 million monthly active users just two months after its launch in November 2022, becoming the fastest-growing consumer application in history (UBS, 2023). Based on the reliable outcomes of the list experiment, our study contributes valuable insights that inform policy formulation and management strategies, ultimately striving for academic integrity in the Fourth Industrial Revolution.
The remainder of this paper is structured as follows: Section 2 provides data descriptions. Section 3 describes the research methodology and the experiment design to investigate academic cheating behaviors among undergraduates. Section 4 presents the main findings. Section 5 provides a discussion. The last section provides conclusions and explores the potential implications of preventing AI-powered academic cheating.
Our study was conducted in May 2023 at Thai Nguyen University, one of three Vietnamese regional universities. The experiment included three stages. In the first stage, we sent collaboration invitations to all 9 graduate schools of Thai Nguyen University, as these administrative formalities are mandatory in Vietnam. We obtained acceptance letters from 4 graduate schools: the Graduate School of Education, the Graduate School of Medicine and Pharmacy, the Graduate School of Engineering, and the Graduate School of Information Technology. We then confirmed the total number of undergraduates in the participating graduate schools and selected an initial sample of 1,450 participants, allocated proportionally to each school's share of the total undergraduate enrollment across the four schools. In the second stage, we sent each graduate school survey invitations containing a QR code linking to the online survey, powered by Qualtrics. In the last stage, each graduate school distributed the survey invitations to all its undergraduates via internal management systems, with the number of responses per school proportionally capped by the system. From 9 May 2023 to 12 May 2023, we received a total of 1,386 valid responses. The distribution of respondents across the four graduate schools is shown in Appendix Table 6.
Regarding the awareness among undergraduates about punishment for academic misconduct, all participating graduate schools regularly inform their students about the punishment policy for academic cheating (including AI-powered academic cheating) at the beginning of each academic semester. All academic misconduct is strictly prohibited, and offenders have to face strict punishments including expulsion from educational institutions. Footnote 4
Table 1 shows descriptive statistics of respondents in our study. On average, students are approximately 20.3 years old. Male students are dominant, as they account for 57.3% of respondents. In terms of grade, newly enrolled students represent more than one-third of the sample. Footnote 5 Regarding ethnicity, 26.6% of respondents were minority ethnic students. In terms of after-school activities, nearly three-fourths of the students were members of social associations, while 26.3% of students reported that they engaged in part-time jobs.
The list experiment, also referred to as the item count technique or unmatched count technique, is a survey method used in social sciences and polling to collect sensitive or confidential information from respondents while maintaining their anonymity (Blair & Imai, 2012 ; Li & Van den Noortgate, 2022 ; Igarashi & Nagayoshi, 2022 ). The indirect questioning method is especially effective for examining sensitive topics that respondents may be reluctant to admit openly, such as illegal activities, socially undesirable behaviors, or stigmatized beliefs (Hinsley et al., 2019 ). While maintaining respondent anonymity, list experiments enable researchers to collect more precise and trustworthy data on sensitive topics. The list experiment method has been used in a wide range of social topics, including political science, public health, discrimination, consumer behavior, and food security (Eriksen et al., 2018 ; Harris et al., 2018 ; Lépine et al., 2020 ; Nicholson & Huang, 2022 ; Song et al., 2022 ; Tadesse et al., 2020 ).
The basic design of the list experiment includes two distinct groups: a control group and a treatment group. The control group is presented with a list containing n nonsensitive statements. The treatment group receives the same n nonsensitive statements plus an additional sensitive statement. Respondents are then asked to report only the total number of statements that apply to them, without specifying which ones (Blair & Imai, 2012). The prevalence of the sensitive behavior is measured by comparing the average number of statements reported between the control group and the treatment group; this difference in averages is used to infer the prevalence of the sensitive item without revealing individual responses, making it an indirect questioning approach. The key assumption in the list experiment is that respondents in both groups will, on average, provide truthful answers about the nonsensitive statements (Imai, 2011). Therefore, any difference in the average counts between the treatment and control groups can be attributed to the prevalence of respondents who are associated with the sensitive statement.
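The difference-in-means logic above can be sketched in a few lines. The response counts below are purely illustrative, not the study's data:

```python
# Minimal sketch of the list-experiment difference-in-means estimator.
from statistics import mean

def list_experiment_prevalence(control_counts, treatment_counts):
    """Estimated prevalence of the sensitive item: the gap in mean
    number of endorsed statements (the treatment list has one extra item)."""
    return mean(treatment_counts) - mean(control_counts)

# Hypothetical counts: control saw 4 items (0-4), treatment saw 5 (0-5).
control = [2, 1, 3, 2, 2, 1, 3, 2]
treated = [3, 2, 3, 2, 3, 2, 4, 3]
prevalence = list_experiment_prevalence(control, treated)
print(prevalence)  # 0.75
```

Under the no-liars assumption, the 0.75 gap in mean counts is read as 75% of respondents endorsing the sensitive item in this toy sample.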
We adopt the basic design of the list experiment with a few adjustments to elicit responses to multiple academic cheating-related statements. Specifically, we designed one control group and two separate treatment groups, and respondents were randomly allocated to one of the three groups. Table 2 describes the detailed experimental design. Our experiment included two separate phases: Phase 1 (the list experiment) investigated AI-powered academic cheating behaviors via indirect questioning, while Phase 2 investigated the same behaviors via direct questioning.
All groups participated in Phase 1. Respondents in the control group received a list containing four nonsensitive statements. Treatment group 1 received the same four nonsensitive statements plus an additional sensitive statement measuring the prevalence of students who had cheated using ChatGPT (cheating history). Similarly, treatment group 2 received the four nonsensitive statements plus an additional sensitive statement measuring the prevalence of students who intend to cheat using ChatGPT (cheating intention). In Phase 1, all respondents were asked to indicate only the total number of statements they agreed with, allowing us to calculate the average response value of each group. We then captured the prevalence of cheating history as the difference in average response value between the control group and treatment group 1, and the prevalence of cheating intention as the difference between the control group and treatment group 2.
Next, we investigated academic cheating behaviors via direct questioning (Phase 2). Only respondents in the control group participated in this phase: unlike respondents in the treatment groups, they had not engaged with the sensitive statements during the list experiment, which guards against carryover effects on the accuracy of the outcomes. In Phase 2, respondents in the control group answered "yes" or "no" to two direct academic cheating-related questions (cheating history and cheating intention). By doing so, we can observe the prevalence of respondents reporting cheating history and cheating intention via direct questioning.
To estimate the prevalence of sensitive behaviors, list experiments must satisfy three key assumptions: (1) random assignment, (2) no liars, and (3) no design effect (Imai, 2011 ). These three assumptions are empirically validated in this subsection.
First, we ran balance tests to confirm whether respondents were allocated to the treatment groups randomly, regardless of demographic characteristics. Guaranteed randomization underpins accurate causal analysis, reduced bias, increased statistical power, and generalizability in list experiments (Imai, 2011), and it is crucial in any experimental design to keep the control and treatment groups similar in terms of respondent characteristics. Table 3 depicts the outcomes of the balance tests. Since no significant differences in respondent characteristics exist across groups, we can confirm that random assignment was well guaranteed in our list experiment.
Second, the "no liars" assumption, validated through the absence of floor and ceiling effects, plays a pivotal role in the list experiment framework. The floor effect manifests when certain respondents consistently disagree with all survey statements, while the ceiling effect occurs when respondents consistently affirm all statements. Such response patterns often stem from respondents' privacy concerns, and they can undermine the reliability of estimates derived from a list experiment: if a significant number of respondents consistently select extreme response options, the accuracy of the estimated prevalence of sensitive attitudes is called into question (Blair & Imai, 2012). To counteract these effects, we applied the design method of Glynn (2013) by including at least one nonsensitive statement predicted to be rejected by the majority of respondents and another predicted to be accepted by the majority. Based on the distribution of response values presented in Appendix Table 7, there were no ceiling or floor effects, as the proportions of entirely affirmative or entirely negative responses in our list experiment were all below 9%.
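As a rough illustration (with made-up response counts, not the study's data), the floor/ceiling check amounts to computing the share of all-negative and all-affirmative responses:

```python
# Sketch of the floor/ceiling-effect check on list-experiment counts.
def extreme_shares(counts, list_length):
    """Share of respondents endorsing no items (floor) or all items (ceiling)."""
    n = len(counts)
    floor = sum(c == 0 for c in counts) / n
    ceiling = sum(c == list_length for c in counts) / n
    return floor, ceiling

# Hypothetical response counts on a 4-item control list.
responses = [2, 1, 3, 0, 2, 4, 2, 3, 1, 2]
floor, ceiling = extreme_shares(responses, list_length=4)
print(floor, ceiling)  # each share should stay small, as in the paper's < 9%
```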
Finally, we examine whether a design effect appears in our list experiment. A design effect exists when the presence of a sensitive item alters respondents' tendencies to select nonsensitive items. Since list experiments rely on differences in the average number of statements chosen between treatment and control groups, the selection of nonsensitive items should not be affected by the presence of the sensitive statement (Blair & Imai, 2012). Design effects undermine the dependability of results derived from a list experiment and pose challenges for drawing precise conclusions. To address this, we applied the design effect test package of Tsai (2019), and based on the outcomes described in Appendix Table 8, no design effects existed in our list experiment.
Our primary objective is to examine the magnitude of misreporting about AI-powered academic cheating behaviors among respondents. To do so, we first estimate the prevalence of academic cheating behaviors among undergraduates via the list experiment by employing the estimation model of Lépine et al. (2020), modified to control for multiple covariates and school-level fixed effects Footnote 6 as follows:
$${Y}_{is} = {\alpha }_{1} + {\tau }_{1}{T}_{is} + \delta {\mathbf{X}}_{is} + {\theta }_{s} + {\varepsilon }_{is} \quad (1)$$
\({Y}_{is}\) represents the response value (number of statements that the respondent agrees with) reported by respondent i in school s. \({\alpha }_{1}\) is the intercept, indicating the constant term in the model. \({T}_{is}\) represents the binary treatment variable for respondent i in school s (\({T}_{is}\) = 0 for the control group and \({T}_{is}\) = 1 for the treatment group). \({\tau }_{1}\) corresponds to the prevalence of the sensitive cheating behavior elaborated in Section 3.1, which is equivalent to the difference in average response value between the control and treatment groups. \({\mathbf{X}}_{{\varvec{i}}{\varvec{s}}}\) is a vector of student-level covariates of respondent i in school s, including age, gender, ethnicity, grade, social association membership, and part-time job engagement, while \(\delta\) is the vector of coefficients associated with these covariates. \({\theta }_{s}\) denotes the school-level fixed effects, which capture unobserved school-specific characteristics, and \({\varepsilon }_{is}\) is the error term that represents unobserved factors or random variations in the dependent variable \({Y}_{is}\).
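A minimal simulated sketch of this regression (all data and variable names here are hypothetical, with one covariate and four schools): the coefficient on the treatment dummy recovers the sensitive-item prevalence \({\tau }_{1}\).

```python
# Simulated OLS version of Eq. (1): count response on a treatment dummy,
# a covariate, and school fixed effects (dummy-encoded, one school dropped).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)                  # treatment dummy T_is
age = rng.normal(20, 1.5, n)                   # one illustrative covariate
school = rng.integers(0, 4, n)                 # four schools -> fixed effects
true_prev = 0.25                               # assumed true prevalence
# Response count: endorsements of 4 nonsensitive items, plus the sensitive
# item for the share of the treatment group to whom it applies.
y = (rng.binomial(4, 0.5, n)
     + treat * rng.binomial(1, true_prev, n)).astype(float)

# Design matrix: intercept, T_is, covariate, school dummies.
X = np.column_stack([np.ones(n), treat, age] +
                    [(school == s).astype(float) for s in (1, 2, 3)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[1], 3))                       # estimate of tau_1, near 0.25
```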
To measure the magnitude of misreporting between direct and indirect questioning, we compare the outcomes obtained via the list experiment and via direct questioning. Specifically, we use the immediate form of a two-sample t-test with the unequal-variances option (Welch's test) to compare the estimated prevalence of academic cheating behaviors obtained from the list experiment with the prevalence of affirmative responses to academic cheating behavior obtained from direct questioning.
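The "immediate form" of this test works from summary statistics alone. A sketch with illustrative numbers (the means, standard deviations, and group sizes below are hypothetical, not the study's actual inputs):

```python
# Welch's (unequal-variance) two-sample t-test from summary statistics,
# mirroring the immediate form used to compare the two questioning methods.
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """t statistic and Welch-Satterthwaite degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Hypothetical summary stats: (prevalence, std. dev., group size) for the
# list-experiment estimate versus the direct-questioning estimate.
t_stat, df = welch_t(0.237, 0.45, 900, 0.096, 0.30, 450)
print(round(t_stat, 2), round(df, 1))  # 6.84 1242.3
```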
We further examine heterogeneity in AI-powered academic cheating behaviors across different subsamples. Equation 2 represents our estimation model to evaluate the heterogeneous effects in the subsamples:
$${Y}_{is} = {\alpha }_{2} + {\tau }_{2}{T}_{is} + \gamma \left({T}_{is}\times {G}_{is}\right) + \eta {G}_{is} + {v}_{is} \quad (2)$$
in which \({G}_{is}\) is the subsample dummy for respondent i in school s for the potential factor of interest. For instance, when we examine the heterogeneous effects of academic cheating behaviors by gender, \({G}_{is}\) equals 1 for male respondents and 0 for female respondents (i.e., a male dummy). \({\alpha }_{2}\) is the intercept, indicating the constant term in the model, and \(\eta\) captures the baseline difference in response values between subsamples. \({\tau }_{2}\) indicates the prevalence of academic cheating behavior in the subsample with \({G}_{is}\) = 0, which is equivalent to the difference in average response value between the control and treatment groups in that subsample, while \({\tau }_{2}+ \gamma\) indicates the prevalence of sensitive cheating behavior in the subsample with \({G}_{is}\) = 1. Hence, \(\gamma\) corresponds to the difference in the prevalence of academic cheating behavior between subsamples. \({v}_{is}\) is the error term that represents unobserved factors or random variations in the dependent variable \({Y}_{is}\).
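The interaction logic can be checked on simulated data (everything below is hypothetical): the coefficient on the treatment-by-subsample interaction recovers \(\gamma\), the gap in estimated prevalence between subsamples.

```python
# Simulated heterogeneity model: treatment dummy, subsample dummy (e.g., male),
# and their interaction; tau2 and gamma recover the subgroup prevalences.
import numpy as np

rng = np.random.default_rng(1)
n = 4000
treat = rng.integers(0, 2, n)                      # T_is
male = rng.integers(0, 2, n)                       # subsample dummy G_is
prev = np.where(male == 1, 0.35, 0.10)             # assumed subgroup prevalences
y = (rng.binomial(4, 0.5, n)
     + treat * rng.binomial(1, prev)).astype(float)

# Design matrix: intercept, T, G, and the T x G interaction.
X = np.column_stack([np.ones(n), treat, male, treat * male])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
tau2, gamma = beta[1], beta[3]
print(round(tau2, 2), round(gamma, 2))             # near 0.10 and 0.25
```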
Our main findings are highlighted in this section. First, we present the results of both the list experiment and direct questioning, as well as the misreporting magnitude observed from these two questioning techniques. Next, we investigate the heterogeneous effects of AI-powered academic cheating behaviors among subsamples.
The list experiment reveals a significantly higher prevalence of students who had cheated using ChatGPT. Table 4 depicts the prevalence of academic cheating behaviors and the magnitude of misreporting between the two questioning methods. Under direct questioning, only 9.6% of respondents reported that they had cheated; under the list experiment, the prevalence of cheaters rose to 23.7%, roughly 2.5 times higher. These results suggest that confessing to cheating was an especially sensitive issue among students, as the misreporting magnitude between indirect and direct questioning was 14 percentage points (significant at the 5% level). In terms of cheating intention, no significant difference exists between the two questioning methods, as the prevalence of students reporting an intention to cheat remains similar between the list experiment and direct questioning (21.6% and 22.5%, respectively).
Subsample analysis effectively detects differential responses or outcomes among diverse demographic, social, or contextual groups. By rigorously examining heterogeneous effects among subsamples, our study found disparities in AI-powered academic cheating behavior across different subsamples.
In terms of the heterogeneous effects of cheating history by gender, male students are more likely than female students to have used ChatGPT to cheat. Figure 1 shows the disparity in cheating history among respondents by gender. In the pooled sample, 35.1% of male students reported that they had cheated, more than triple the prevalence among female students. The gap between the two genders is approximately 25 percentage points, significant at the 10% level. Furthermore, the gender difference in cheating history is even larger among newly enrolled students (40.1 percentage points, significant at the 5% level). Conversely, no significant gender differences in cheating history exist in higher grades.
Heterogeneous effects of cheating history by gender. Note: Fig. 1a represents the estimated prevalence of respondents who reported affirmative responses to cheating history by gender. Figure 1b represents the disparity in cheating history by gender (male dummy). Robust standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1
Heterogeneous effects of cheating behavior by grade among the majority ethnic group. Note: Fig. 2a represents the estimated prevalence of respondents who reported affirmative responses to sensitive statements by grade. Figure 2b represents the disparity in cheating behaviors by grade (higher-grade dummy). Robust standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1
Importantly, the cheating history of each gender differs significantly across grades. Among female students, those in higher grades are more likely to cheat than newly enrolled female students. As shown in Appendix Table 9, approximately 33% of female students in higher grades reported that they had used ChatGPT to cheat, while no evidence of cheating was found among newly enrolled female students. The difference in cheating history among female students across grades is 43 percentage points (significant at the 5% level). Conversely, no such difference exists among male students, who consistently engage in academic cheating in all grades. In particular, approximately 42.5% of higher-grade male students admitted that they had cheated, compared with 30.1% of newly enrolled male students; however, this difference is not statistically significant.
With regard to the heterogeneous effects of cheating intention by gender, male and female students show no disparity in cheating intention in the pooled sample (23% and 22.4%, respectively). Correspondingly, no heterogeneous effect on academic cheating intention was found by gender across grades (as shown in Appendix Fig. 3 ).
Regarding the heterogeneous effects of academic cheating behavior by ethnicity, higher-grade students are more likely to cheat than newly enrolled students within the majority ethnic group. Figure 2 represents the heterogeneous effects of cheating behavior between newly enrolled students and higher-grade students in the majority ethnic group. Specifically, 38.3% of higher-grade students admitted that they had used ChatGPT to cheat, which is more than fourfold the prevalence of newly enrolled students reporting the same behavior. Concerning cheating intention among majority ethnic students, both newly enrolled students and higher-grade students had the intention to cheat using ChatGPT, but the difference in cheating intention between these two groups is not statistically significant.
Regarding the heterogeneous effects of academic cheating behavior by major, only information technology students showed significant prevalence of both cheating history and cheating intention (38.0% and 33.9%, respectively). However, there is no significant difference in cheating history between information technology majors and other majors. Furthermore, information technology students are more likely to intend to cheat than medicine and pharmacy students (as shown in Appendix Fig. 4).
To examine the stability and reliability of our results, we conducted additional robustness tests by controlling for multiple covariates and fixed effects at the school level. Based on the outcomes of the robustness tests presented in Table 5 , we confirm that our results are strongly consistent with those indicated in the previous sections. In addition, we further examine the consistency of our findings regarding heterogeneous effects across subsamples. As shown in Appendix Fig. 5 , the results of robustness tests validate the consistency of the subsample analysis results.
By using the indirect questioning approach via a list experiment, our findings show that students conceal academic cheating behavior under direct questioning. Any confession of academic cheating may subject the student to negative consequences. Cheating is often punishable by failing assignments or exams, academic probation, or even expulsion from academic institutions. Furthermore, students may be concerned about how their peers, teachers, and parents will perceive them if they are identified as cheaters. Admitting to academic cheating can harm their reputation as honest and capable students. Cheating is frequently associated with moral and ethical stigma. Students conceal their cheating to avoid feelings of shame, guilt, or remorse associated with their dishonest behavior. Consequently, respondents understandably conceal truthful answers when directly questioned.
Our subsample analysis highlighted the heterogeneity in AI-powered academic cheating behavior by gender, as male students are more likely to cheat than female students. In terms of pooled sample analysis, our results align with the findings of previous studies (e.g., Mohd Salleh et al., 2013 ; Yazici et al., 2023 ). Gender disparities in moral attitudes and risk-taking tendencies possibly cause heterogeneous effects in cheating behavior between male and female students. Regarding the moral attitude, Ip et al. ( 2018 ) highlight that male students hold a more forgiving perspective toward acts of academic cheating than their female counterparts. Gender disparities in academic cheating may be attributed to the notion that women, who tend to prioritize social harmony, are less inclined to violate regulations, while men, who often exhibit greater competitiveness, may be more inclined to transgress rules in pursuit of success (Fisher & Brunell, 2014 ). In a similar vein, Zhang et al. ( 2018 ) reveal that female students exhibit considerably more negative attitudes toward academic misconduct and demonstrate greater levels of discomfort when they are detected as cheaters. In terms of risk-taking tendencies, Chala ( 2021 ) suggested that, on average, the propensity for risk-taking behaviors is greater for males than for females. Male students may be inclined to engage in academic dishonesty as a means to attain their academic objectives due to their greater propensity for taking risks.
In terms of heterogeneity in cheating behavior by grade, higher-grade students are more likely to cheat than newly enrolled students in the majority ethnic group. Our findings contrast with some previous studies. For instance, Bakar-Corez and Kocaman-Karoglu ( 2023 ) found a higher level of academic dishonesty among master’s students than among Ph.D. students. In a similar vein, Lord Ferguson et al. ( 2022 ) highlighted that the prevalence of academic dishonesty is higher among undergraduates than graduates. Importantly, we found that the cheating history of each gender differs substantially across grades. Although male students are more likely to cheat by using ChatGPT in the pooled sample, our subsample analysis shows that no significant difference in cheating history by gender exists among higher-grade students. Conversely, there was a substantial difference in cheating history by gender among newly enrolled students, as the prevalence of cheating among males is strongly dominant. Specifically, female students seem to change their cheating behaviors over time, as they are more likely to cheat in higher grades, as opposed to male students who consistently report cheating history across grades.
Academic-related pressure and peer effects might explain why higher-grade students are more likely to cheat than their counterparts. First, academic-related pressure is usually high for juniors and seniors, particularly in their final academic years. Higher-grade students may engage in academic dishonesty because they perceive it as a band-aid solution for meeting heightened expectations and securing future career prospects (Ababneh et al., 2022). Additionally, the final academic years are often especially stressful due to the accumulation of coursework, exams, and deadlines, and students might cheat to alleviate the stress of managing multiple courses and assignments (Amigud & Lancaster, 2019; Costley, 2019). Indeed, Orok et al. (2023) revealed that fear of failure is the most common reason for engaging in academic dishonesty, reported by 77% of respondents. Second, higher-grade students might be more likely to engage in academic cheating due to the peer dishonesty effect. For instance, Zhao et al. (2022) reveal that peer dishonesty has a strong positive relationship with academic cheating, as observing peers engaging in academic misconduct potentially reinforces the idea that cheating is an effective way to achieve academic objectives without detection by educational institutions. In a similar vein, Lucifora and Tonello (2015) found that peer effects significantly influence academic cheating behaviors, as the likelihood of cheating increases when educational institutions loosen class monitoring. Over the academic journey, the probability of witnessing peer cheating rises for higher-grade students, potentially influencing them to follow their peers in violating academic integrity with the assistance of AI.
This study has provided valuable insights into academic cheating in the era of AI growth. Although AI applications can be valuable educational tools, they also pose risks to academic integrity. By examining a sample of 1,386 Vietnamese undergraduates via a list experiment designed to minimize social desirability bias, we found a significant magnitude of misreporting of AI-powered academic cheating behaviors among undergraduates. Specifically, the prevalence of cheaters observed via the list experiment is roughly 2.5 times the prevalence observed via direct questioning. Regarding heterogeneous effects among subsamples, we observed that female students are more likely to cheat in the later grades, while male students engage in academic cheating in all grades. In addition, academic cheating is more common in the final academic years among the majority ethnic group.
Based on our findings, we suggest potential implications for safeguarding academic integrity. In terms of theoretical implications, academic cheating should be measured via indirect questioning, as students understandably conceal truthful answers due to the sensitivity of cheating issues; educational policies for promoting academic integrity can be effective only if cheating behaviors are accurately measured. In terms of practical implications, male students and higher-grade students of the majority ethnicity warrant closer attention, as these groups showed a greater prevalence of AI-powered academic cheating. In addition, our subsample analysis shows that female students are more likely to engage in academic dishonesty in higher grades; therefore, educational institutions should strengthen management policies for these students during their final academic years. To prevent AI-powered cheating while leveraging the advantages of AI in education, supportive and preventive solutions should be applied concurrently. Regarding supportive solutions, educational institutions should, for instance, offer counseling services to students dealing with stress, anxiety, or other personal issues that may facilitate academic dishonesty, alongside more intensive orientation programs that educate students about the proper use of AI, curbing AI-powered academic cheating while continuing to improve learning effectiveness. Regarding preventive solutions, educational institutions should consider investing in advanced monitoring systems to detect AI-powered academic cheating. Simultaneously, implementing adaptive assessment methods, including randomization, dynamic question generation, and algorithmic modifications, is necessary to mitigate academic dishonesty facilitated by AI.
While this study contributes to the understanding of AI-powered academic cheating in education, it is important to acknowledge the remaining limitations. Because several graduate schools refused to participate, our study is limited to only four specific graduate schools. The generalizability of findings to other student populations, educational backgrounds, or major contexts may be restricted. To address these limitations, further research, methodological improvement, and cross-disciplinary cooperation are needed to deeply investigate academic cheating behavior in the era of accelerated AI.
The dataset of this study is available in Mendeley Data at: https://data.mendeley.com/datasets/jyrw4wrtjr/1
Ossai et al. ( 2023 ) used the following direct statement to measure cheating behavior: “I sometimes copy already prepared assignments from my friends”.
Park ( 2020 ) used the following direct question to measure cheating behavior: “How often did you conduct the following behaviors in the past semester?”.
Ababneh et al. ( 2022 ) used the following direct question to measure cheating behavior: “During the past year, how frequently did you cheat on online tests/exams at your university?”.
As members of Thai Nguyen University, all four participating graduate schools apply Circular No. 10/2016/TT-BGDĐT (Regulations for Student Affairs in Formal Higher Education Programs), issued by the Vietnam Ministry of Education and Training, to sanction academic offenders. Under this circular, first-time offenders fail the subjects in which they cheated and receive a formal warning. Repeat offenders face enhanced punishment, up to expulsion from the academic institution.
We separate students into two groups by grade: newly enrolled students (freshmen and sophomores) and higher-grade students (juniors and seniors).
To estimate the prevalence of academic cheating behaviors, a t-test comparing the mean response value between the control group and the treatment groups (a difference-in-means estimator) is sufficient. However, following the model of Lépine et al. ( 2020 ), we supplemented this with Ordinary Least Squares (OLS) regressions that control for multiple covariates and school-level fixed effects, which helps address potential biases and improves the robustness of the estimates.
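The difference-in-means logic behind the list experiment estimator can be sketched as follows; the item counts below are hypothetical illustration data, not our survey responses.

```python
import numpy as np

def list_experiment_prevalence(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment.

    The treatment group sees the control items plus one sensitive item,
    so the mean difference in reported counts estimates the prevalence
    of the sensitive behavior.
    """
    control = np.asarray(control_counts, dtype=float)
    treatment = np.asarray(treatment_counts, dtype=float)
    estimate = treatment.mean() - control.mean()
    # Standard error of the difference in means (unequal variances)
    se = np.sqrt(control.var(ddof=1) / len(control)
                 + treatment.var(ddof=1) / len(treatment))
    return estimate, se

# Hypothetical counts: the control group rated 4 innocuous items,
# the treatment group the same 4 plus the sensitive cheating item.
control = [1, 2, 2, 3, 1, 2, 2, 3, 1, 2]
treatment = [2, 3, 2, 3, 2, 3, 3, 2, 2, 3]
est, se = list_experiment_prevalence(control, treatment)
# est = 2.5 - 1.9 = 0.6, i.e., an estimated 60% prevalence here
```

Regressing the reported count on the treatment indicator reproduces this estimate as the treatment coefficient while allowing covariates and school fixed effects to be added, which is the OLS approach described above.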
Ababneh, K. I., Ahmed, K., & Dedousis, E. (2022). Predictors of cheating in online exams among business students during the Covid pandemic: Testing the theory of planned behavior. The International Journal of Management Education, 20 (3), 100713. https://doi.org/10.1016/j.ijme.2022.100713
Ahmad, S., Lensink, R., & Mueller, A. (2023). Religion, social desirability bias and financial inclusion: Evidence from a list experiment on Islamic (micro-)finance. Journal of Behavioral and Experimental Finance, 38 , 100795. https://doi.org/10.1016/j.jbef.2023.100795
Ali, Md. A., Dhanaraj, R. K., & Nayyar, A. (2023). A high performance-oriented AI-enabled IoT-based pest detection system using sound analytics in large agricultural field. Microprocessors and Microsystems, 103 , 104946. https://doi.org/10.1016/j.micpro.2023.104946
Amigud, A., & Lancaster, T. (2019). 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Computers & Education, 134 , 98–107. https://doi.org/10.1016/j.compedu.2019.01.017
Ansari, A. N., Ahmad, S., & Bhutta, S. M. (2023). Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12223-4
Aung, Z. H., Sanium, S., Songsaksuppachok, C., Kusakunniran, W., Precharattana, M., Chuechote, S., Pongsanon, K., & Ritthipravat, P. (2022). Designing a novel teaching platform for AI : A case study in a Thai school context. Journal of Computer Assisted Learning, 38 (6), 1714–1729. https://doi.org/10.1111/jcal.12706
Bakar-Corez, A., & Kocaman-Karoglu, A. (2023). E-dishonesty among postgraduate students and its relation to self-esteem. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12105-9
Blair, G., & Imai, K. (2012). Statistical analysis of list experiments. Political Analysis, 20 (1), 47–77. https://doi.org/10.1093/pan/mpr048
Chala, W. D. (2021). Perceived seriousness of academic cheating behaviors among undergraduate students: An Ethiopian experience. International Journal for Educational Integrity, 17 (1), 2. https://doi.org/10.1007/s40979-020-00069-z
Choi, E. P. H., Lee, J. J., Ho, M.-H., Kwok, J. Y. Y., & Lok, K. Y. W. (2023). Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Education Today, 125 , 105796. https://doi.org/10.1016/j.nedt.2023.105796
Costley, J. (2019). Student perceptions of academic dishonesty at a cyber-university in South Korea. Journal of Academic Ethics, 17 (2), 205–217. https://doi.org/10.1007/s10805-018-9318-1
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International , 1–12. https://doi.org/10.1080/14703297.2023.2190148
Currie, G. M. (2023). Academic integrity and artificial intelligence: Is ChatGPT hype, hero or heresy? Seminars in Nuclear Medicine, 53 (5), 719–730. https://doi.org/10.1053/j.semnuclmed.2023.04.008
Dai, Y., Lin, Z., Liu, A., & Wang, W. (2023). An embodied, analogical and disruptive approach of AI pedagogy in upper elementary education: An experimental study. British Journal of Educational Technology , bjet.13371. https://doi.org/10.1111/bjet.13371
Dalalah, D., & Dalalah, O. M. A. (2023). The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT. The International Journal of Management Education, 21 (2), 100822. https://doi.org/10.1016/j.ijme.2023.100822
Du, P., He, X., Cao, H., Garg, S., Kaddoum, G., & Hassan, M. M. (2023). AI-based energy-efficient path planning of multiple logistics UAVs in intelligent transportation systems. Computer Communications, 207 , 46–55. https://doi.org/10.1016/j.comcom.2023.04.032
Eriksen, S., Lutz, C., & Tadesse, G. (2018). Social desirability, opportunism and actual support for farmers’ market organisations in Ethiopia. The Journal of Development Studies, 54 (2), 343–358. https://doi.org/10.1080/00220388.2017.1299138
Ezquerra, L., Kolev, G. I., & Rodriguez-Lara, I. (2018). Gender differences in cheating: Loss vs. gain framing. Economics Letters, 163 , 46–49. https://doi.org/10.1016/j.econlet.2017.11.016
Fisher, T. D., & Brunell, A. B. (2014). A bogus pipeline approach to studying gender differences in cheating behavior. Personality and Individual Differences, 61–62 , 91–96. https://doi.org/10.1016/j.paid.2014.01.019
Fyfe, P. (2023). How to cheat on your final paper: Assigning AI for student writing. AI & Society, 38 (4), 1395–1405. https://doi.org/10.1007/s00146-022-01397-z
Glynn, A. N. (2013). What can we learn with statistical truth serum? Public Opinion Quarterly, 77 (S1), 159–172. https://doi.org/10.1093/poq/nfs070
Guo, K., & Wang, D. (2023). To resist it or to embrace it? Examining ChatGPT’s potential to support teacher feedback in EFL writing. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12146-0
Guo, K., Zhong, Y., Li, D., & Chu, S. K. W. (2023). Effects of chatbot-assisted in-class debates on students’ argumentation skills and task motivation. Computers & Education, 203 , 104862. https://doi.org/10.1016/j.compedu.2023.104862
Harris, A. S., Findley, M. G., Nielson, D. L., & Noyes, K. L. (2018). The economic roots of anti-immigrant prejudice in the global south: Evidence from South Africa. Political Research Quarterly, 71 (1), 228–241. https://doi.org/10.1177/1065912917734062
Hinsley, A., Keane, A., St. John, F. A. V., Ibbett, H., & Nuno, A. (2019). Asking sensitive questions using the unmatched count technique: Applications and guidelines for conservation. Methods in Ecology and Evolution , 10 (3), 308–319. https://doi.org/10.1111/2041-210X.13137
Igarashi, A., & Nagayoshi, K. (2022). Norms to be prejudiced: List experiments on attitudes towards immigrants in Japan. Social Science Research, 102 , 102647. https://doi.org/10.1016/j.ssresearch.2021.102647
Imai, K. (2011). Multivariate regression analysis for the item count technique. Journal of the American Statistical Association, 106 (494), 407–416. https://doi.org/10.1198/jasa.2011.ap10415
Ip, E. J., Pal, J., Doroudgar, S., Bidwal, M. K., & Shah-Manek, B. (2018). Gender-based differences among pharmacy students involved in academically dishonest behavior. American Journal of Pharmaceutical Education, 82 (4), 6274. https://doi.org/10.5688/ajpe6274
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences , 103 , 102274. https://doi.org/10.1016/j.lindif.2023.102274
Koo, M. (2023). Harnessing the potential of chatbots in education: The need for guidelines to their ethical use. Nurse Education in Practice, 68 , 103590. https://doi.org/10.1016/j.nepr.2023.103590
Kulkov, I. (2021). The role of artificial intelligence in business transformation: A case of pharmaceutical companies. Technology in Society, 66 , 101629. https://doi.org/10.1016/j.techsoc.2021.101629
Kumar, P., Sharma, S. K., & Dutot, V. (2023). Artificial intelligence (AI)-enabled CRM capability in healthcare: The impact on service innovation. International Journal of Information Management, 69 , 102598. https://doi.org/10.1016/j.ijinfomgt.2022.102598
Kutyauripo, I., Rushambwa, M., & Chiwazi, L. (2023). Artificial intelligence applications in the agrifood sectors. Journal of Agriculture and Food Research, 11 , 100502. https://doi.org/10.1016/j.jafr.2023.100502
Larson, R. B. (2019). Controlling social desirability bias. International Journal of Market Research, 61 (5), 534–547. https://doi.org/10.1177/1470785318805305
Latkin, C. A., Edwards, C., Davey-Rothwell, M. A., & Tobin, K. E. (2017). The relationship between social desirability bias and self-reports of health, substance use, and social network factors among urban substance users in Baltimore, Maryland. Addictive Behaviors, 73 , 133–136. https://doi.org/10.1016/j.addbeh.2017.05.005
Lépine, A., Treibich, C., & D’Exelle, B. (2020). Nothing but the truth: Consistency and efficiency of the list experiment method for the measurement of sensitive health behaviours. Social Science & Medicine, 266 , 113326. https://doi.org/10.1016/j.socscimed.2020.113326
Li, J., & Van den Noortgate, W. (2022). A meta-analysis of the relative effectiveness of the item count technique compared to direct questioning. Sociological Methods & Research, 51 (2), 760–799. https://doi.org/10.1177/0049124119882468
Li, L., Ma, Z., Fan, L., Lee, S., Yu, H., & Hemphill, L. (2023). ChatGPT in education: A discourse analysis of worries and concerns on social media. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12256-9
Livberber, T., & Ayvaz, S. (2023). The impact of artificial intelligence in academia: Views of Turkish academics on ChatGPT. Heliyon, 9 (9), e19688. https://doi.org/10.1016/j.heliyon.2023.e19688
Lord Ferguson, S., Flostrand, A., Lam, J., & Pitt, L. (2022). Caught in a vicious cycle? Student perceptions of academic dishonesty in the business classroom. The International Journal of Management Education, 20 (3), 100677. https://doi.org/10.1016/j.ijme.2022.100677
Lucifora, C., & Tonello, M. (2015). Cheating and social interactions. Evidence from a randomized experiment in a national evaluation program. Journal of Economic Behavior & Organization, 115 , 45–66. https://doi.org/10.1016/j.jebo.2014.12.006
Mohd Salleh, M. I., Alias, N. R., Hamid, H. A., & Yusoff, Z. (2013). Academic dishonesty among undergraduates in the higher education. International Journal of Academic Research, 5 (2), 222–227. https://doi.org/10.7813/2075-4124.2013/5-2/B.34
Moisset, X., & Ciampi De Andrade, D. (2023). Neuro-ChatGPT? Potential threats and certain opportunities. Revue Neurologique, 179 (6), 517–519. https://doi.org/10.1016/j.neurol.2023.02.066
Mubin, O., Cappuccio, M., Alnajjar, F., Ahmad, M. I., & Shahid, S. (2020). Can a robot invigilator prevent cheating? AI & Society, 35 (4), 981–989. https://doi.org/10.1007/s00146-020-00954-8
Nicholson, S. P., & Huang, H. (2022). Making the list: Reevaluating political trust and social desirability in China. American Political Science Review, 1–8. https://doi.org/10.1017/S0003055422000946
Olan, F., Ogiemwonyi Arakpogun, E., Suklan, J., Nakpodia, F., Damij, N., & Jayawickrama, U. (2022). Artificial intelligence and knowledge sharing: Contributing factors to organizational performance. Journal of Business Research, 145 , 605–615. https://doi.org/10.1016/j.jbusres.2022.03.008
Orok, E., Adeniyi, F., Williams, T., Dosunmu, O., Ikpe, F., Orakwe, C., & Kukoyi, O. (2023). Causes and mitigation of academic dishonesty among healthcare students in a Nigerian university. International Journal for Educational Integrity, 19 (1), 13. https://doi.org/10.1007/s40979-023-00135-2
Ossai, M. C., Ethe, N., Edougha, D. E., & Okeh, O. D. (2023). Academic integrity during examinations, age and gender as predictors of academic performance among high school students. International Journal of Educational Development, 100 , 102811. https://doi.org/10.1016/j.ijedudev.2023.102811
Park, S. (2020). Goal contents as predictors of academic cheating in college students. Ethics & Behavior, 30 (8), 628–639. https://doi.org/10.1080/10508422.2019.1668275
Phan, Q. N., Tseng, C.-C., Thi Hoai Le, T., & Nguyen, T. B. N. (2023). The application of chatbot on Vietnamese migrant workers’ right protection in the implementation of new generation free trade agreements (FTAs). AI & Society, 38 (4), 1771–1783. https://doi.org/10.1007/s00146-022-01416-z
Qu, J., Zhao, Y., & Xie, Y. (2022). Artificial intelligence leads the reform of education models. Systems Research and Behavioral Science, 39 (3), 581–588. https://doi.org/10.1002/sres.2864
Ratten, V., & Jones, P. (2023). Generative artificial intelligence (ChatGPT): Implications for management educators. The International Journal of Management Education, 21 (3), 100857. https://doi.org/10.1016/j.ijme.2023.100857
Ried, L., Eckerd, S., & Kaufmann, L. (2022). Social desirability bias in PSM surveys and behavioral experiments: Considerations for design development and data collection. Journal of Purchasing and Supply Management, 28 (1), 100743. https://doi.org/10.1016/j.pursup.2021.100743
Sollosy, M., & McInerney, M. (2022). Artificial intelligence and business education: What should be taught. The International Journal of Management Education, 20 (3), 100720. https://doi.org/10.1016/j.ijme.2022.100720
Song, J., Iida, T., Takahashi, Y., & Tovar, J. (2022). Buying votes across borders? A list experiment on Mexican immigrants in the United States. Canadian Journal of Political Science, 55 (4), 852–872. https://doi.org/10.1017/S0008423922000567
Sweeney, S. (2023). Who wrote this? Essay mills and assessment – considerations regarding contract cheating and AI in higher education. The International Journal of Management Education, 21 (2), 100818. https://doi.org/10.1016/j.ijme.2023.100818
Tadesse, G., Abate, G. T., & Zewdie, T. (2020). Biases in self-reported food insecurity measurement: A list experiment approach. Food Policy, 92 , 101862. https://doi.org/10.1016/j.foodpol.2020.101862
Tsai, C. (2019). Statistical analysis of the item-count technique using stata. The Stata Journal, 19 (2), 390–434. https://doi.org/10.1177/1536867X19854018
UBS. (2023). Let's chat about ChatGPT . https://secure.ubs.com/public/api/v2/investment-content/documents/XILxY9V9P5RazGpDA1Cr_Q?apikey=Y8VdAx8vhk1P9YXDlEOo2Eoco1fqKwDk&Accept-Language=de-CH . Accessed 6 Aug 2023.
Udupa, P. (2022). Application of artificial intelligence for university information system. Engineering Applications of Artificial Intelligence, 114 , 105038. https://doi.org/10.1016/j.engappai.2022.105038
Wang, Z., Li, M., Lu, J., & Cheng, X. (2022). Business innovation based on artificial intelligence and blockchain technology. Information Processing & Management, 59 (1), 102759. https://doi.org/10.1016/j.ipm.2021.102759
Wang, Z., Liu, Y., & Niu, X. (2023). Application of artificial intelligence for improving early detection and prediction of therapeutic outcomes for gastric cancer in the era of precision oncology. Seminars in Cancer Biology, 93 , 83–96. https://doi.org/10.1016/j.semcancer.2023.04.009
Yazici, S., Yildiz Durak, H., Aksu Dünya, B., & Şentürk, B. (2023). Online versus face-to-face cheating: The prevalence of cheating behaviours during the pandemic compared to the pre-pandemic among Turkish university students. Journal of Computer Assisted Learning, 39 (1), 231–254. https://doi.org/10.1111/jcal.12743
Zhang, Y., Yin, H., & Zheng, L. (2018). Investigating academic dishonesty among Chinese undergraduate students: Does gender matter? Assessment & Evaluation in Higher Education, 43 (5), 812–826. https://doi.org/10.1080/02602938.2017.1411467
Zhao, L., Mao, H., Compton, B. J., Peng, J., Fu, G., Fang, F., Heyman, G. D., & Lee, K. (2022). Academic dishonesty and its relations to peer cheating and culture: A meta-analysis of the perceived peer cheating effect. Educational Research Review, 36 , 100455. https://doi.org/10.1016/j.edurev.2022.100455
We are grateful to the managing board, staff, and students of Thai Nguyen University for their excellent assistance with our survey. We would like to thank anonymous reviewers for taking the time and effort necessary to review the manuscript, which helped us to improve the quality of the manuscript.
Open Access funding provided by Hiroshima University.
Authors and affiliations.
Graduate School of Humanities and Social Sciences, Hiroshima University, 1-5-1 Kagamiyama, Higashi-Hiroshima, 739-8529, Japan
Hung Manh Nguyen
The IDEC Institute, Hiroshima University, 1-5-1 Kagamiyama, Higashi-Hiroshima, 739-8529, Japan
Daisaku Goto
Network for Education and Research On Peace and Sustainability (NERPS), Hiroshima University, 1-5-1 Kagamiyama, Higashi-Hiroshima, 739-8529, Japan
Correspondence to Hung Manh Nguyen.
Informed consent.
Informed consent was obtained from all participants in the experiment.
The authors have no relevant financial or nonfinancial interests to disclose.
Heterogeneous effects of the cheating intention by gender. Note: Fig. 3a represents the estimated prevalence of respondents who reported affirmatively for cheating intention by gender. Figure 3b represents the disparity in cheating intention by gender (male dummy). Robust standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1
Heterogeneous effects of cheating behavior by major. Note: Fig. 4a represents the estimated prevalence of respondents who reported affirmatively to sensitive statements. Figure 4b represents the disparity in cheating behaviors by major (the base category is Information Technology). Covariates include age, gender, ethnicity, grade, social, and part-time job. Robust standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1
Subsample robustness tests. Note: Fig. 5a represents the heterogeneous effects of cheating history by gender (male dummy). Figure 5b represents the heterogeneous effects of cheating history by gender among newly enrolled students (male dummy). Figure 5c represents the heterogeneous effects of cheating history by grade among students with majority ethnicity (higher-grade dummy). Covariates include age, gender, ethnicity, grade, social, and part-time job. Fixed effects at the school level. Robust standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1
Nguyen, H.M., Goto, D. Unmasking academic cheating behavior in the artificial intelligence era: Evidence from Vietnamese undergraduates. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12495-4
Received: 07 November 2023
Accepted: 18 January 2024
Published: 05 February 2024
At the 2024 Worldwide Developers Conference, we introduced Apple Intelligence, a personal intelligence system integrated deeply into iOS 18, iPadOS 18, and macOS Sequoia.
Apple Intelligence comprises multiple highly capable generative models that are specialized for our users’ everyday tasks, and can adapt on the fly for their current activity. The foundation models built into Apple Intelligence have been fine-tuned for user experiences such as writing and refining text, prioritizing and summarizing notifications, creating playful images for conversations with family and friends, and taking in-app actions to simplify interactions across apps.
In the following overview, we will detail how two of these models — a ~3 billion parameter on-device language model, and a larger server-based language model available with Private Cloud Compute and running on Apple silicon servers — have been built and adapted to perform specialized tasks efficiently, accurately, and responsibly. These two foundation models are part of a larger family of generative models created by Apple to support users and developers; this includes a coding model to build intelligence into Xcode, as well as a diffusion model to help users express themselves visually, for example, in the Messages app. We look forward to sharing more information soon on this broader set of models.
Apple Intelligence is designed with our core values at every step and built on a foundation of groundbreaking privacy innovations.
Additionally, we have created a set of Responsible AI principles to guide how we develop AI tools, as well as the models that underpin them:
These principles are reflected throughout the architecture that enables Apple Intelligence, connects features and tools with specialized models, and scans inputs and outputs to provide each feature with the information needed to function responsibly.
In the remainder of this overview, we provide details on decisions such as: how we develop models that are highly capable, fast, and power-efficient; how we approach training these models; how our adapters are fine-tuned for specific user needs; and how we evaluate model performance for both helpfulness and unintended harm.
Our foundation models are trained on Apple's AXLearn framework, an open-source project we released in 2023. It builds on top of JAX and XLA, and allows us to train the models with high efficiency and scalability on various training hardware and cloud platforms, including TPUs and both cloud and on-premise GPUs. We used a combination of data parallelism, tensor parallelism, sequence parallelism, and Fully Sharded Data Parallel (FSDP) to scale training along multiple dimensions such as data, model, and sequence length.
We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. Web publishers have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.
We never use our users’ private personal data or user interactions when training our foundation models, and we apply filters to remove personally identifiable information like social security and credit card numbers that are publicly available on the Internet. We also filter profanity and other low-quality content to prevent its inclusion in the training corpus. Beyond filtering, we perform data extraction and deduplication, and apply a model-based classifier to identify high-quality documents.
We find that data quality is essential to model success, so we utilize a hybrid data strategy in our training pipeline, incorporating both human-annotated and synthetic data, and conduct thorough data curation and filtering procedures. We have developed two novel algorithms in post-training: (1) a rejection sampling fine-tuning algorithm with teacher committee, and (2) a reinforcement learning from human feedback (RLHF) algorithm with mirror descent policy optimization and a leave-one-out advantage estimator. We find that these two algorithms lead to significant improvement in the model’s instruction-following quality.
In addition to ensuring our generative models are highly capable, we have used a range of innovative techniques to optimize them on-device and on our private cloud for speed and efficiency. We have applied an extensive set of optimizations for both first token and extended token inference performance.
Both the on-device and server models use grouped-query-attention. We use shared input and output vocab embedding tables to reduce memory requirements and inference cost. These shared embedding tensors are mapped without duplications. The on-device model uses a vocab size of 49K, while the server model uses a vocab size of 100K, which includes additional language and technical tokens.
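To illustrate why grouped-query attention shrinks the KV cache, here is a minimal NumPy sketch (not Apple's implementation; all dimensions and weights are arbitrary): query heads are partitioned into groups that share a single key/value head, so only n_kv_heads key/value projections need to be cached.

```python
import numpy as np

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    """Single-layer grouped-query attention (GQA) sketch.

    Query heads are split into groups that share one key/value head,
    shrinking the KV cache by a factor of n_q_heads / n_kv_heads.
    """
    seq, d_model = x.shape
    d_head = d_model // n_q_heads
    group = n_q_heads // n_kv_heads  # query heads per KV head

    q = (x @ wq).reshape(seq, n_q_heads, d_head)
    k = (x @ wk).reshape(seq, n_kv_heads, d_head)
    v = (x @ wv).reshape(seq, n_kv_heads, d_head)

    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group  # the shared KV head for this query head
        scores = q[:, h] @ k[:, kv].T / np.sqrt(d_head)
        # Numerically stable softmax over the sequence dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h] = weights @ v[:, kv]
    return out.reshape(seq, d_model)

rng = np.random.default_rng(0)
d_model, n_q, n_kv, seq = 64, 8, 2, 5
x = rng.standard_normal((seq, d_model))
wq = rng.standard_normal((d_model, d_model)) * 0.1
wk = rng.standard_normal((d_model, d_model // n_q * n_kv)) * 0.1
wv = rng.standard_normal((d_model, d_model // n_q * n_kv)) * 0.1
y = grouped_query_attention(x, wq, wk, wv, n_q, n_kv)
```

Here the K and V projections are a quarter the size of the Q projection (2 KV heads serving 8 query heads), which is the memory saving GQA provides at inference time.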
For on-device inference, we use low-bit palettization, a critical optimization technique that achieves the necessary memory, power, and performance requirements. To maintain model quality, we developed a new framework using LoRA adapters that incorporates a mixed 2-bit and 4-bit configuration strategy — averaging 3.5 bits-per-weight — to achieve the same accuracy as the uncompressed models.
Additionally, we use an interactive model latency and power analysis tool, Talaria, to better guide the bit rate selection for each operation. We also utilize activation quantization and embedding quantization, and have developed an approach to enable efficient Key-Value (KV) cache update on our neural engines.
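Palettization can be illustrated with a toy 1-D k-means quantizer (a sketch, not Apple's production pipeline): each weight is replaced by an index into a small learned palette, so a 4-bit palette stores only 16 distinct values per tensor.

```python
import numpy as np

def palettize(weights, n_bits):
    """Compress a weight tensor to a small palette via 1-D k-means.

    With n_bits bits per weight, each value is replaced by the nearest
    of 2**n_bits palette centroids; only the indices and the tiny
    palette need to be stored.
    """
    flat = weights.ravel()
    n_colors = 2 ** n_bits
    # Initialize centroids at evenly spaced quantiles of the weights.
    palette = np.quantile(flat, np.linspace(0, 1, n_colors))
    for _ in range(20):  # Lloyd's algorithm iterations
        idx = np.abs(flat[:, None] - palette[None, :]).argmin(axis=1)
        for c in range(n_colors):
            members = flat[idx == c]
            if members.size:
                palette[c] = members.mean()
    idx = np.abs(flat[:, None] - palette[None, :]).argmin(axis=1)
    return palette, idx.reshape(weights.shape)

rng = np.random.default_rng(1)
w = rng.standard_normal((32, 32)).astype(np.float32)
palette4, idx4 = palettize(w, 4)   # 16-entry palette (4-bit)
w_hat = palette4[idx4]             # dequantized weights
err4 = np.abs(w - w_hat).mean()    # small reconstruction error
```

As a rough memory estimate, at an average of 3.5 bits per weight a ~3-billion-parameter model needs about 3 × 10⁹ × 3.5 / 8 ≈ 1.3 GB for its weights, versus roughly 6 GB at 16 bits.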
With this set of optimizations, on iPhone 15 Pro we are able to reach time-to-first-token latency of about 0.6 millisecond per prompt token, and a generation rate of 30 tokens per second. Notably, this performance is attained before employing token speculation techniques, from which we see further enhancement on the token generation rate.
Our foundation models are fine-tuned for users’ everyday activities, and can dynamically specialize themselves on-the-fly for the task at hand. We utilize adapters, small neural network modules that can be plugged into various layers of the pre-trained model, to fine-tune our models for specific tasks. For our models we adapt the attention matrices, the attention projection matrix, and the fully connected layers in the point-wise feedforward networks for a suitable set of the decoding layers of the transformer architecture.
By fine-tuning only the adapter layers, the original parameters of the base pre-trained model remain unchanged, preserving the general knowledge of the model while tailoring the adapter layers to support specific tasks.
We represent the values of the adapter parameters using 16 bits, and for the ~3 billion parameter on-device model, the parameters for a rank 16 adapter typically require tens of megabytes. The adapter models can be dynamically loaded, temporarily cached in memory, and swapped — giving our foundation model the ability to specialize itself on the fly for the task at hand while efficiently managing memory and guaranteeing the operating system's responsiveness.
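The adapter mechanics can be sketched with a generic LoRA linear layer (an illustration, not Apple's code): the frozen base weight is augmented with a low-rank product, and only the two small matrices are stored and swapped per task.

```python
import numpy as np

class LoRALinear:
    """Linear layer with a frozen base weight and a rank-r LoRA adapter.

    Only A and B are trained; the base weight W stays unchanged, so
    switching tasks means swapping only the small A/B matrices.
    """

    def __init__(self, w_base, rank=16, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = w_base.shape
        self.w = w_base                     # frozen pre-trained weight
        self.a = rng.standard_normal((rank, d_in)) * 0.01
        self.b = np.zeros((d_out, rank))    # zero init: adapter starts as a no-op

    def __call__(self, x):
        # Base projection plus the low-rank update x A^T B^T
        return x @ self.w.T + x @ self.a.T @ self.b.T

    def n_adapter_params(self):
        return self.a.size + self.b.size

rng = np.random.default_rng(2)
w = rng.standard_normal((256, 256))
layer = LoRALinear(w, rank=16)
x = rng.standard_normal((4, 256))
y = layer(x)
# With B initialized to zero, the output equals the base layer's output.
base = x @ w.T
```

Each adapted 256-dimensional layer here adds only 8,192 parameters; scaled up to the attention and feedforward matrices of a ~3B-parameter model at 16 bits each, per-adapter totals in the tens of megabytes are plausible, consistent with the figure above.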
To facilitate the training of the adapters, we created an efficient infrastructure that allows us to rapidly retrain, test, and deploy adapters when either the base model or the training data gets updated. The adapter parameters are initialized using the accuracy-recovery adapter introduced in the Optimization section.
Our focus is on delivering generative models that can enable users to communicate, work, express themselves, and get things done across their Apple products. When benchmarking our models, we focus on human evaluation as we find that these results are highly correlated to user experience in our products. We conducted performance evaluations on both feature-specific adapters and the foundation models.
To illustrate our approach, we look at how we evaluated our adapter for summarization. As product requirements for summaries of emails and notifications differ in subtle but important ways, we fine-tune accuracy-recovery low-rank (LoRA) adapters on top of the palettized model to meet these specific requirements. Our training data is based on synthetic summaries generated from bigger server models, filtered by a rejection sampling strategy that keeps only the high quality summaries.
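Rejection-sampling-style filtering reduces, in essence, to scoring each candidate and discarding those below a quality bar. The scorer below is a hypothetical stand-in for whatever model-based judgment rates each summary; the threshold and candidates are illustrative only.

```python
def rejection_filter(summaries, score_fn, threshold=0.5):
    """Keep only candidate summaries whose quality score clears a threshold.

    score_fn stands in for a quality model or heuristic that judges
    each candidate; everything below the threshold is rejected from
    the training set.
    """
    return [s for s in summaries if score_fn(s) >= threshold]

# Toy scorer: prefer summaries that are short but non-trivial.
def toy_score(summary):
    n = len(summary.split())
    return 1.0 if 5 <= n <= 30 else 0.0

candidates = [
    "Meeting moved to 3pm Thursday; bring the Q3 numbers.",
    "ok",
    "The email says the meeting is moved and asks for the Q3 report.",
]
kept = rejection_filter(candidates, toy_score)
# The one-word candidate is rejected; the two real summaries survive.
```

In the pipeline described above, the surviving high-quality summaries become the fine-tuning data for the summarization adapter.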
To evaluate the product-specific summarization, we use a set of 750 responses carefully sampled for each use case. These evaluation datasets emphasize a diverse set of inputs that our product features are likely to face in production, and include a stratified mixture of single and stacked documents of varying content types and lengths. Because these are product features, it was important to evaluate performance against datasets representative of real use cases. We find that our models with adapters generate better summaries than a comparable model.
As part of responsible development, we identified and evaluated specific risks inherent to summarization. For example, summaries occasionally remove important nuance or other details in ways that are undesirable. However, we found that the summarization adapter did not amplify sensitive content in over 99% of targeted adversarial examples. We continue to adversarially probe to identify unknown harms and expand our evaluations to help guide further improvements.
In addition to evaluating feature specific performance powered by foundation models and adapters, we evaluate both the on-device and server-based models’ general capabilities. We utilize a comprehensive evaluation set of real-world prompts to test the general model capabilities. These prompts are diverse across different difficulty levels and cover major categories such as brainstorming, classification, closed question answering, coding, extraction, mathematical reasoning, open question answering, rewriting, safety, summarization, and writing.
We compare our models with both open-source models (Phi-3, Gemma, Mistral, DBRX) and commercial models of comparable size (GPT-3.5-Turbo, GPT-4-Turbo)[1]. We find that our models are preferred by human graders over most comparable competitor models. On this benchmark, our on-device model, with ~3B parameters, outperforms larger models including Phi-3-mini, Mistral-7B, and Gemma-7B. Our server model compares favorably to DBRX-Instruct, Mixtral-8x22B, and GPT-3.5-Turbo while being highly efficient.
We use a set of diverse adversarial prompts to test the model performance on harmful content, sensitive topics, and factuality. We measure the violation rates of each model as evaluated by human graders on this evaluation set, with a lower number being desirable. Both the on-device and server models are robust when faced with adversarial prompts, achieving violation rates lower than open-source and commercial models.
Our models are preferred by human graders as safe and helpful over competitor models for these prompts. However, considering the broad capabilities of large language models, we understand the limitation of our safety benchmark. We are actively conducting both manual and automatic red-teaming with internal and external teams to continue evaluating our models' safety.
To further evaluate our models, we use the Instruction-Following Eval (IFEval) benchmark to compare their instruction-following capabilities with those of models of comparable size. The results suggest that both our on-device and server models follow detailed instructions better than the open-source and commercial models of comparable size.
We evaluate our models’ writing ability on our internal summarization and composition benchmarks, consisting of a variety of writing instructions. These results do not involve our feature-specific adapter for summarization (seen in Figure 3), nor do we have an adapter focused on composition.
The Apple foundation models and adapters introduced at WWDC24 underlie Apple Intelligence, the new personal intelligence system that is integrated deeply into iPhone, iPad, and Mac, enabling powerful capabilities across language, images, actions, and personal context. Our models have been created with the purpose of helping users do everyday activities across their Apple products, developed responsibly at every stage and guided by Apple’s core values. We look forward to sharing more information soon on our broader family of generative models, including language, diffusion, and coding models.
[1] We compared against the following model versions: gpt-3.5-turbo-0125, gpt-4-0125-preview, Phi-3-mini-4k-instruct, Mistral-7B-Instruct-v0.2, Mixtral-8x22B-Instruct-v0.1, Gemma-1.1-2B, and Gemma-1.1-7B. The open-source and Apple models are evaluated in bfloat16 precision.
Language Testing in Asia, volume 12, Article number: 56 (2022)
Contract cheating, or students outsourcing their assignments to be completed by others, has emerged as a significant threat to academic integrity in higher education institutions around the world. During the COVID-19 pandemic, when traditional face-to-face instruction became unsustainable, the number of contract cheating students increased dramatically. Through focus group interviews, this study sought the perspectives of 25 students enrolled in first-year writing at a private higher education institution in Kuwait during the pandemic in 2020–2021 on their attitudes towards contract cheating. MAXQDA 2020 was used to examine the data. The participants believe that the primary motivations for engaging in contract cheating are the opportunities presented by online learning and the psychological and physical challenges students experienced during online learning. Those who did not cheat had some shared traits, such as a competitive spirit, confidence in their talents, and a strong desire to learn. Additionally, those with high moral values avoided cheating. To combat contract cheating, students believe that teaching and evaluation techniques should be drastically altered and that students should be educated about plagiarism, while institutions should impose tougher sanctions on repeat offenders.
Universities met their teaching commitments through remote teaching because traditional face-to-face teaching temporarily became unsustainable during the COVID-19 pandemic that hit the world between 2020 and 2022. Because students were not physically present in the classroom during this time, many higher education institutions conducted tests through digital platforms or replaced exams with essays and other types of written tasks (Gamage et al., 2020). These alternative approaches to assessment raised concerns about the technology and infrastructure required to join online sessions, about privacy (as information technology devices demanded access to students' cameras, microphones, and desktops), and, most importantly, about academic integrity. Replacing exams with writing and/or take-home assignments posed a danger to academic integrity, necessitating the adoption of fraud-free methods (Almeida & Monteiro, 2021).
Academic dishonesty, which refers to committing or contributing to dishonest acts in teaching, learning, research, and related academic activities (Cizek, 2003 ; Whitley & Keith-Spiegel, 2001 ), has long been a source of concern in higher education and has been on the rise in recent years. According to some estimates, up to 75% of university students have engaged in some type of academic misconduct during their academic career (Brimble & Stevenson-Clarke, 2005 ; McCabe & Bowers, 1994 ).
Plagiarism is one of the most persistent issues confronting higher education institutions, and it can take many forms, including “copy and paste” without citing the source; patch-writing; providing incorrect or incomplete citations or references; presenting or citing a secondary source as a primary source; ghost-writing; and contract cheating (De Jager & Brown, 2010; Ellery, 2008; Ellis et al., 2018; Park, 2003; Zafarghandi et al., 2012). Clarke and Lancaster (2006) coined the term “contract cheating” to characterize the unacknowledged usage of materials prepared by another person or entity involved in the sale of academic resources. Students outsource their coursework to others to do, usually for a fee, and then present it as their own. Contract cheating is widely regarded as a growing problem that most academic institutions are dealing with. To make matters worse, it is difficult to spot ghostwritten work because it is a new piece of writing tailored to course requirements and specific assignments (Ali & AlHassan, 2021).
Although contract cheating is common in both traditional face-to-face and online settings, it is more likely to take place in the latter. There are strong indications that contract cheating ran rampant during the pandemic and became a significant COVID-19 side effect for higher education institutions. According to Lancaster and Cotarlan (2021), the number of requests for answers to academic questions on a popular student website jumped by over 200% during the pandemic. Lancaster (2020) found that a website providing essay writing services expanded the number of tutors it recruited and was able to offer discounts due to the increased profitability. Similarly, during the first COVID-interrupted semester, additional research found that university students believed their colleagues cheated when classes went online (Daniels et al., 2021) and that their willingness and pressure to cheat were stronger online than in person (Walsh et al., 2021). Likewise, King and Case (2014) discovered that over a 5-year period, the number of students who self-reported academic cheating increased, and around 75% of students said it was easier to cheat in online assessments.
There are several possible explanations for why students engaged in contract cheating more often in online education than in in-person education. Psychological distance adversely affected interpersonal relationships, and the internet obscured the line between academically honest and dishonest behavior (Eshet, 2022). Sudden campus closures and the abrupt transition to online teaching modes provided more opportunities for students to complete assignments with online assistance. Furthermore, essay mills saw the lack of face-to-face interaction and proctoring on campus as an opportunity and used aggressive marketing methods to attract students. Through social media, students quickly became aware of a wide variety of options for carrying out contract cheating; as a result, contract cheating has emerged as a real threat to academic integrity (Erguvan, 2021; Hill et al., 2021; Bautista & Pentang, 2022).
Academic dishonesty is a complicated system comprising a variety of components that interact in unanticipated ways. Due to the vast number of paper mills, full-text databases, and collaborative web pages, many researchers attribute the rise in academic dishonesty to the increased usage of the internet, which generates “opportunities” for cheating (Peytcheva-Forsyth et al., 2018; Townley & Parsell, 2004). Students engage in academic dishonesty for a variety of reasons, according to researchers: desire for high grades, procrastination, time pressure to complete assignments or study for tests, lack of organizational skills, fear of failing a course (loss of time and money), lack of understanding of academic dishonesty, and plagiarism not being considered a serious offense (Eshet et al., 2012; Jone, 2011; McGee, 2013). Academic dishonesty is also influenced by social factors, including peer pressure, social attitudes and norms about academic dishonesty, domestic job market circumstances (Carpenter et al., 2006; Gallant & Drinan, 2006), and a “competitive culture” of earning excellent grades or succeeding in school (Roberts & Hai-Jew, 2009). Furthermore, if the dominant culture does not regard academic dishonesty as a significant problem that requires attention, such situations will be handled on an individual basis, and formal consequences will rarely be pursued. The existence of an institutional policy on academic integrity, a code of honor, and effective disciplinary procedures performed by educational institutions are all institutional elements that may affect academic dishonesty (Roberts & Hai-Jew, 2009; Vilchez & Thirunarayanan, 2011).
Many theories have been developed to describe why and how students engage in plagiarism and what factors play a role in this. Plagiarism has commonly been understood using theoretical frameworks originating from the criminology literature, which conceptualizes students as delinquents; however, that is rather problematic and ineffective in the long run (DiPietro, 2010). Other theories include deterrence theory, rational choice theory, neutralization theory, planned behavior theory, situational ethics, social learning theory, and self-control theory (DiPietro, 2010; Sattler et al., 2013). This research was guided by social learning theory and rational choice theory.
Social learning theory by Albert Bandura can be applied to explain learners’ plagiarism behavior. Bandura (1997) theorizes that learning is a cognitive process that takes place in a social context and can occur through “the influence of example” by observing a behavior and/or the consequences of the behavior. Therefore, if learners discover their fellow classmates plagiarizing, getting high grades, and receiving nominal or no punishment at all for these acts, they will also feel inclined to adopt cheating. If a behavior is learned with a perceived negative consequence associated with it, then an individual is more likely to inhibit that behavior for him- or herself. However, positive reinforcement, which can also mean the absence of a negative consequence associated with the behavior, may encourage behaviors, whether they are positive or negative (Denler et al., 2014).
Students choose to plagiarize in their assignments or tests for a range of reasons, and it is possible to examine students’ motivation within the framework of rational choice theory, according to which every individual follows the principle of maximum utility when making a decision (Hawes, 2020). Individuals compare potential advantages to the possible costs entailed by their decision, and a course of action is chosen after weighing the advantages and disadvantages of all possible alternatives. Therefore, the decision to cheat and plagiarize results from a cost–benefit analysis. On the one hand, plagiarism offers some benefits: it allows students to finish work quickly, save time, and improve their grades; on the other hand, there are counter-factors such as the risk of being caught. Potential losses become real only if the plagiarism is discovered, which is not always likely. The consequences of plagiarism for students might include unsatisfactory marks for the assignment, reproach by teachers, or other disciplinary punishments. Nevertheless, such measures do not seem significant enough to deter students from plagiarizing. The risk of being caught has a medium negative effect on cheating, the fear of punishment has a small negative effect, and the importance of the outcome has a medium positive effect (Whitley, 1998).
Even though large numbers of students are claimed to partake in contract cheating in Kuwaiti higher education, such as purchasing papers from shops on campus that seemingly provide only printing and photocopying services (Al Jiyyar, 2017 ), there is little research on contract cheating in Kuwait. Indeed, a literature review by Ahsan et al. ( 2021 ) identifies research deficits outside of Australia, the UK, and Canada, as well as in contexts of contract cheating such as society, culture, and religion. Contract cheating during and after COVID-19 is another dimension that has received insufficient attention.
As a result, the purpose of this study is to investigate students' opinions of contract cheating occurring in first-year writing classes in a private university context in Kuwait, a country in which little research has been done in this area. The questions that guided the study are as follows:
Why did more students engage in contract cheating during the pandemic?
What stopped students from engaging in contract cheating?
What consequences did contract cheating students face, if any?
What should be done to curb contract cheating?
In this exploratory case study, participants’ perspectives were acquired through focus group interviews, a popular strategy for acquiring qualitative data (Morgan, 1996). The strength of focus groups is that participants interact in groups, and this interaction can provide insights into the causes of complicated actions (Carey & Smith, 1994; Morgan & Krueger, 1993). Because members simultaneously question and explain themselves to one another, focus groups are more than the sum of separate individual interviews. Because of the sensitive nature of the subject, the researcher determined that a group discussion, rather than individual interviews, would yield more insightful data.
Regarding the number and size of focus groups, authors offer varied advice. Various researchers have noted that focus group sizes may range from four to five, six to eight, or even eight to twelve people, and some have suggested that there are no universal standards for the optimal number of focus groups (as cited in Gundumogula, 2020).
The researcher recruited twenty-five students for this study and five focus group interviews were scheduled, with each interview containing five to six students. There were eleven females and fourteen males among the participants. Purposeful sampling was used as the sampling method. Other faculty members in the department were contacted and asked to provide a list of potential participants. The list of participants suggested by faculty members was screened for eligibility to see if they met the following inclusion criteria:
Fluent in English.
Currently registered in a course during the 2021–2022 Fall term in the university.
Enrolled in a university and attended online classes in the previous academic year, 2020–2021.
Diversity in gender, discipline, and academic performance was observed in recruitment. The selected potential participants were contacted by e-mail, and official invitations to the online meeting were sent to those who expressed interest in attending.
The informed consent form which was approved by the Institutional Review Board included a detailed description of the interview process and confidentiality information. The participants were asked to read and sign these forms before the interview took place.
In qualitative research, because the researcher is the instrument in semi-structured or unstructured qualitative interviews, unique researcher attributes have the potential to influence the collection of empirical materials (Pezalla et al., 2012); therefore, explicitly identifying oneself is more important than it is in quantitative research. In this study, the researcher is a faculty member in the institution where the research took place and is thus familiar with the plagiarism and contract cheating habits and attitudes of Kuwaiti undergraduate students. She also included some of her own as well as her colleagues’ students in the study. She stated clearly at the beginning of the study that student responses would not be used for any course-related assessment or evaluation and that the collected data would be used for research only. She joined the focus group meetings only as a listener, and another trained colleague moderated them. Although every effort was made to ensure objectivity, certain biases may remain, and these biases may shape the way the data were collected and the participants’ experiences are interpreted in this study.
All sessions were conducted online, due to the pandemic restrictions still in place, and in English, between November and December 2021. The sessions were recorded and simultaneously transcribed using the built-in function of the online meeting platform. The participants were informed of recording at the time of recruitment, as well as the beginning of each session.
Each online focus group lasted between 60 and 90 minutes, which allowed in-depth discussions. The focus group sessions were moderated by a trained Education Department faculty member. A script was developed to guide the discussion; the moderator used it to explain the purpose of the focus group, go over the focus group rules, and reinforce the confidentiality of all the information shared.
Although there are no general rules as to the optimal number of focus group discussions, researchers state that four focus groups are generally sufficient, but that response saturation should be considered after the third focus group discussion (Nyamathi & Shuler, 1990). Guest et al. (2017) stated that more than 80% of all themes were discoverable within two to three focus groups, and 90% within three to six; three focus groups were also enough to identify the most prevalent themes within the data set. The number of focus groups in this study was guided by theme saturation. After five focus group sessions, the information collected was becoming repetitive and no new themes were emerging, so it was decided that data saturation had been reached.
Content and thematic analysis methodologies were used to analyze the data collected in this study. Content analysis refers to the process in which presentations of behavior or qualitative data from self-reports are analyzed (Karataş, 2015). Content analysis is more related to the initial analysis and the coding process, where researchers look for redundant and similar codes (Humble & Mozelius, 2022). The thematic analysis occurs after the coding process, as researchers aggregate similar codes to form major concepts or themes. Basically, thematic analysis converts qualitative data into quantitative data. Once data is transcribed, it is reviewed repeatedly so that the researcher can identify trends in the meaning conveyed by language. The themes identified are re-analyzed so that they become more refined and relevant and are given codes (Boyatzis, 1998; Braun & Clarke, 2006). The researcher can then annotate the transcript with the codes that have been identified. In this study, we started with content analysis as a more basic way of approaching the data material, and then proceeded with thematic analysis to detect, analyze, and report themes, as well as to organize and describe data in dimensions. As distinct and fundamental qualitative approaches, the two should be used by qualitative researchers, as their transparent structures provide clear and user-friendly methods for analyzing data (Vaismoradi et al., 2013).
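The code-then-theme workflow described above can be illustrated with a small sketch. The segment codes and theme assignments below are invented examples; in the study itself this aggregation was performed interactively in MAXQDA 2020, not via code:

```python
from collections import Counter

# Hypothetical segment-level codes assigned during content analysis.
coded_segments = [
    "easy grades", "opportunity online", "easy grades", "culture pressure",
    "moral values", "fear of getting caught", "opportunity online",
    "self-confidence", "moral values", "easy grades",
]

# Codes aggregated into broader themes during thematic analysis.
code_to_theme = {
    "easy grades": "motivators",
    "opportunity online": "motivators",
    "culture pressure": "motivators",
    "moral values": "deterrents",
    "fear of getting caught": "deterrents",
    "self-confidence": "deterrents",
}

# Content analysis: code frequencies; thematic analysis: theme frequencies.
code_counts = Counter(coded_segments)
theme_counts = Counter(code_to_theme[c] for c in coded_segments)
# theme_counts == Counter({"motivators": 6, "deterrents": 4})
```

This is the sense in which thematic analysis "converts qualitative data into quantitative data": once segments carry codes and codes roll up into themes, frequencies can be tabulated and compared.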
Each student was assigned a code to safeguard their anonymity and confidentiality during the study. The name of the institution was also taken out of the transcribed focus group sessions. The researcher ran a pilot focus group with some students who were not in the sample to verify the understandability of the questions for the focus group interviews’ reliability and validity.
When there were any discrepancies, the meeting platform’s transcriptions were compared to the audio recordings and corrected. The written data was then uploaded to the MAXQDA 2020 program, which allows for systematic data analysis (Kuckartz & Rädiker, 2019). The initial codes were constructed using an inductive approach, and codes that were connected to each other were grouped together and assigned names. Following that, the emerging themes were explained in a way that readers could comprehend. Finally, the researcher provided an interpretation of the findings as well as supporting figures.
The first question analyzed students’ perceptions regarding why more students engaged in contract cheating during the pandemic. In line with the statements of the participants, the motivators category was defined with ten different codes; in order of frequency, wanting to get easy grades, having more opportunities to cheat, the challenges of online education and difficult assignments, culture/pressure, and insecurity/lack of ability emerged as the major motivators (Fig. 1).
Motivators for contract cheating
The participants’ statements regarding the major motivators are below:
“For starters, the online education sort of opened the window of opportunity for students. Now when you see a before and after image, you would think that before we didn’t have access to these sorts of resources. People usually were very busy going to day to day classes. They didn’t have time to research these sort of things. I believe that because people had a bit more free time to do many activities or do whatever during the pandemic because of online education or online learning, they were able to come across these resources through search engines and were able to practice how to use these facilities.” (FG3-1).
Among the motivators, the challenges of online education and the difficulty of assignments in online learning were also mentioned as a reason for resorting to contract cheating. Participants mentioned that students had difficulty accessing information and could not focus during online education:
“Although there are office hours, maybe they need face to face with the professor, in order to learn how to write it properly, because personally when I had an essay writing class, it was easy for me because when we wrote one paragraph, we would review it one on one with the professor. So I feel like that’s why when it came to online, the percentage got a lot higher than when we were on campus.” (FG5-3).
An interesting code that came out of student responses was the culture in Kuwait. Participants mentioned that Kuwaiti children grow up in an environment where everything is done for them. Another element of the culture is the pressure on students to get good grades and graduate with a high GPA to be eligible for government jobs, therefore cheating is considered almost acceptable.
“In Kuwait, in particular, culturally speaking, because of how Kuwaitis were raised or brought up or how they live through an environment of luxury and lack of hardship, to say the very least, it led them to this mindset where they could do these things because they have the option of doing so because it’s so easy to them.” (FG3-1).
When participants were asked what stopped some students from contract cheating during online education, their responses revealed six different codes. The major deterrents to cheating emerged as moral and religious values and having certain personality traits. Some minor deterrents were fear of getting caught, not wanting to risk future job prospects, not wanting to get trapped in a vicious cycle, and seeing contract cheating as a waste of money (Fig. 2).
Reasons for not misconducting
One of the most popular responses was the student’s moral values as a reason for not cheating in their assignments.
“Some students, have strong moral values. So, no matter what challenges they face, they would not resort to cheating, because they believe that it’s wrong.” (FG1-2).
Kuwait is officially an Islamic country and Kuwaitis are quite religious people. This was also perceived as part of the non-cheating students’ set of moral values.
“Religion definitely does play a role because in Islam we know that if you cheat to get yourself success, everything you earn from that success is going to be forbidden upon you, so you won’t benefit in the end. But nowadays the religious commitment is not that big.” (FG4-2).
Another code that the participants’ responses revealed was certain personality traits. Under this code participants mentioned three different subcodes: self-confidence, motivation to learn, and competitiveness (Fig. 3 ).
Hierarchical code—subcodes model of reasons for not misconducting
Participants mentioned that when students have enough self-confidence, motivation, and competitiveness, they do not need any help, they are excited about their achievements, and they cannot trust anyone else to do their work for them.
“I think some students do not like to depend on other people to do their work. Also they are not trustworthy like they can’t trust them to do their work as they feel more confident doing their own work. In this way they will improve themselves to become a better person.” (FG2-4). “Eventually people we will come back to fully on a real life and we’ll be in a position where you cannot cheat. So, in order to enhance our knowledge, or to do better in future classes, they wouldn’t cheat, so they can actually learn something.” (FG5-4).
Another reason why some students did not cheat was the fear of getting caught, according to students’ perceptions. Participants mentioned that some students did not resort to cheating because they were afraid of the outcomes in case they get caught. This was also similar to another deterrent mentioned by students as not wanting to risk future job prospects. Some quotes below exemplify such perceptions:
“I think one of the biggest things that most students fear when it comes down to plagiarizing or cheating is getting caught. But I also think what would devastate a student is if the teacher or the instructor make an example out of the student. Because if you pull out their assignment in front of the entire class and say that ‘this is plagiarized, and because of that, I will give you a zero’, you know you would be set as an example, and I think that would break a student, and so I think that thought or the fact that you’re getting caught. And then being exposed is what really scares or that fear that set a lot of students aside from wanting to plagiarize or cheat.” (FG1-2). “This would affect them in long term, and they wouldn’t be able to do stuff that normal person would be able to do and complete their assignments and all of that stuff. They will have problems in their jobs later on their lives.” (FG4-3). “I think it’s all a certain mindset. Some students don’t cheat because they realize there’s no meaning to it in a sense that if you do cheat your whole life, you’ll keep cheating… if you cheat now, you’re going to cheat in your whole life and there’s no point to it.” (FG3-1).
Students were asked about their perceptions of the consequences cheating students faced. Did cheaters get caught? Were they penalized sufficiently when they were caught? Did cheating ever go unnoticed? The responses revealed six different codes, which can be summarized as follows: most of the time cheating went unnoticed, and when it was noticed, students received punishments ranging from failing the assignment and/or the course to being suspended from the university (Fig. 4).
Punishments when students get caught
The most striking finding the analysis revealed was that most participants thought instructors did not even realize that cheating had taken place.
“I think it’s less likely that the professor will catch the student until and unless they know the student and his past assignments. Because the professors don’t know anything about them. They don’t know how they’ve been doing. I think they get their assistants to check the paper so they are very less likely they can catch the culprit.” (FG4-3).
In cases where students got caught contract cheating, the most common consequence they faced was getting a zero and failing that specific assignment. Failing the entire course, being suspended from the university for a semester, and being blacklisted on a list that circulates among faculty members were also mentioned by some participants.
“I had no experience with people getting caught with cheating, but like usually if they get caught they just get a zero for the essay. For that particular assignment, not for the whole course.” (FG4-1).
Students also talked about being interrogated by the instructor as a consequence of cheating. This interrogation sometimes took place in private, but sometimes in public, in front of their peers, which was a big source of embarrassment for the student:
“To make it even better, they should do the punishments publicly so other students can see this person is being punished for this reason so they can scare everyone else from being humiliated in front of the class. Well, what we did in our old school if someone cheats, they rip the paper on the spot and then kick the student out the class.” (FG5-1).
The final research question of the study focused on solutions to this academic misconduct and asked students to make suggestions to prevent this problem. Students were reminded to consider all the stakeholders in their suggestions, such as students, instructors, and the administration of the institution in which they are studying. Their responses revealed nine codes as seen in Fig. 5 . The solutions could be classified as positive and negative ones, with the positive, more nurturing solutions being changing teaching and assessment methods, educating students about cheating, giving students second chances, offering more learning support services, conducting face-to-face education and raising awareness on social media and on campus. Some students were in favor of more punitive solutions, such as applying harsher punishments and stricter control, and using anti-cheating software and equipment.
Suggestions for combatting contract cheating
The suggestion with the highest frequency was changing the teaching and assessment methods. Participants mentioned that the education system relies too much on rote learning and memorization and should include more hands-on assignments and projects. Accordingly, assessment should move from multiple-choice and written exams to more applied methods and performance assessment.
“That’s the only way I could see this work is if the entire like education industry changed the way they move forward with their teaching and their learning, make it more practical with experience… more than just about grades. The fact that you know your grades are on the line and students compare one of their grades, their GPA over the other, the pressure is intense, and that’s where you know people resort to things that are much easier. But if it’s more about, having fun, learning, experiencing, things are quite practical to the world out there specifically tailored to what they want to do in the future, that would completely eliminate the. You know, the problem of plagiarism and cheating or whatnot, because then they’re doing something they love doing.” (FG1-2).
The next most common code was educating students about cheating. Participants mentioned that students need to be educated about cheating and given clear warnings about the consequences. Some participants expressed the need for a more nurturing environment and said that students should be given a second chance when caught. Offering more learning support services was also proposed by some participants as a way to encourage students to seek help from legitimate sources.
“I do believe that a severe punishment would refrain the students from cheating, but what would be an even better learning objective to make them understand that cheating would be wrong, plagiarizing would be wrong is to let them understand how severe it is beforehand so they wouldn’t do it in the first place, not by punishing them if they cheated. Making it sink down deep end that this is how severe cheating is… this is what will happen. Because some students don’t really understand the full gravity of what they’re doing and what would happen if they’re caught.” (FG3-1).
“Maybe they’re lazy, but in most cases they need help, they need people to help them with things they don’t understand. They need help with things that they probably don’t know. Or maybe they weren’t focused in class on a particular day. Students need help, like education is not easy. It’s a learning process. We as people learn through mistakes and experiences, but we also need a guiding hand in order for us to succeed in life and for us we should not focus solely on punishments because at the end of the day students or these young people are the future of the nation of Kuwait of this country. If we can guide them to a better path instead of punishing them and then going on a very darker or very negligent path, it would have been much better. Not only for us,
Other codes that came out of our analysis were conducting face-to-face education, using anti-cheating software and equipment in exams to detect cheating, warning classmates about the outcomes of cheating, and raising awareness on social media. Some participants asked for harsher punishments to combat cheating.
The first research question, on the motivators of contract cheating, revealed that some students outsourced their tasks simply because they wanted easy grades, and online learning made this possible by providing more opportunities to cheat. Many students were just looking to get by and pass the course, because the shift to online education had drastically affected their ability to learn and retain information, and they intended to cheat only in the short term.
According to Gallant (cited in Dey, 2021 ), there was probably increased cheating because there were more temptations and opportunities. When colleges shut down or restricted in-person access, students were taking exams in their bedrooms, with unrestricted access to their phones and other information technologies. This spurred cheating to take on new and different forms. Regarding students cheating in online courses, if students feel anonymous and unlikely to be adequately monitored, they may assume that the likelihood of being caught cheating is virtually zero and cheat more in online classes using online resources. Previous research has shown that participants had a higher propensity to cheat when chances of being caught were less likely (Kajackaite & Gneezy, 2017 ). Despite being supervised through a web camera, the teachers cannot control the surroundings or the computer screens of the students. In class, students are regularly monitored and watched and thus are less free to consult sources of information, but with the physical distance, those odds decrease, and so cheating increases.
College students wanted to be part of the herd: they did not want to be left out when their peers were earning good scores without putting in any effort. This is further reinforced by research findings that students are less inclined to cheat when they believe their peers are trustworthy, and that the misconception that “everyone else is doing it” encourages cheating (Carpenter et al., 2006 ; Daniels et al., 2021 ; Turner & Uludag, 2013 ). Observing their peers’ cheating activities in online classes through group chats, as our participants’ responses reveal, encouraged more students to cheat, especially after the initial shock had worn off and they felt more at ease in the second semester of online education.
The difficulties of online education have been cited in various studies as a factor contributing to contract cheating. Along with the opportunities online learning provided, stress and pressure built up, and the pandemic essentially intensified a feeling of potential loss among college students. Asking questions during exams was difficult without the in-person experience: students could ask questions via email or attend virtual office hours, but many missed the ease of raising a hand and getting a question answered in real time. According to a study conducted in Vietnam (Tran et al., 2021 ), students had generally negative feelings toward online education, with 63.31% of respondents stating that they disliked online exams and 64.8% stating that online learning was only marginally effective and only a temporary solution. The difficulties of assessing and testing online, of understanding the course, and of communicating with peers were identified as negatives.
With the outbreak of the pandemic, many students found their surroundings transformed completely. Such a change probably caused an increased fear of loss, with students away from their friends and normal social environment, and away from the usual learning atmosphere and resources they were used to. They developed a fear of losing social connections, falling behind in class, losing internship and career opportunities, and so on. In a study that surveyed students from all over the USA (Hoyt et al., 2021 ), students reported that the loss of their social life had a major influence on their mental state during the pandemic. When these results are considered alongside the established idea that increased fear of loss can cause a biological reaction that increases dishonest behavior, it can reasonably be assumed that an increase in fear of loss is one of the primary reasons colleges all over the world detected abnormally high cheating rates among their students (Arie, 2021 ). Similar findings were seen in other research, including in Hong Kong, where students said they struggled to maintain self-discipline when studying alone on online platforms (Mok et al., 2021 ); students experienced stress, worry, and pressure as a result of the pandemic (Sahu, 2020 ), and they did not find online learning to be fully rewarding, particularly when they experienced disruptions during online classes due to insufficient educational and institutional assistance (Fauzi & Sastra Khusuma, 2020 ; Xie & Yang, 2020 ).
The purpose of our second research question was to examine students’ perspectives on the motivations for not cheating. Students’ responses emphasize the importance of their own moral compass, as well as particular personality traits like self-confidence, ambition to learn, and competitiveness, as key deterrents to cheating. This is consistent with research that emphasizes the role of attitudes and beliefs in preventing academic misconduct and promoting an ethical culture (Rundle et al., 2019 ; Grym & Liljander, 2016 ). Strong individual views and ideals regarding integrity, according to Reedy et al. ( 2021 ), minimize the likelihood of students cheating. Following these two major deterrents, fear of being caught emerged as a third code, which is corroborated by the findings of an Australian study by Rundle et al. ( 2019 ). Three significant predictors of fear of detection and punishment were identified in Rundle’s regression analysis (Machiavellianism, narcissism, and consistency of interest), implying that students who scored high on these traits are more likely to report fear of detection and punishment as a reason for not engaging in contract cheating. These findings imply that appealing to students’ values and beliefs while conveying clear messages about academic integrity could be an effective method for improving the integrity of online and offline exams.
The study’s third finding concerned the consequences of cheating. Students were asked what the consequences and punishments were when cheaters were caught. Surprisingly, the highest frequency was observed in the code that cheaters were generally not caught and contract cheating went unnoticed. This finding is intriguing, as we are not sure whether instructors really fail to recognize cheating or tend to ignore it and take no action because it is difficult to present hard evidence of contract cheating. Research found that faculty were able to identify 62% of contract cheating when advised to look for it specifically (Dawson & Sutherland-Smith, 2019 ), but when they were unaware of the possible presence of contract cheating, they could not detect any (Lines, 2016 ). Although Erguvan’s ( 2021 ) study on faculty awareness of contract cheating found that faculty members are confident in their ability to spot it, even when cases are detected, teaching staff are concerned that proving cheating may be difficult (Walker & Townley, 2012 ). Faculty members frequently express concerns about cheating during online education, but they have not always been able to detect and punish cheating as they would like, due to a lack of security measures, reliable plagiarism detection tools, and training on online assessment and cheating prevention (Meccawy et al., 2021 ). Because of the problem’s complexity and the difficulty of solving it, faculty members may simply choose to ignore it (Coren, 2011 ; McCabe, 2005 ).
When faculty members detect students cheating on an assignment, the most typical repercussions are failing the assignment and being interrogated by the teacher, sometimes in private and sometimes in public. Students, however, stated that failing an assignment is not a strong enough deterrent, because these assignments often carry so small a proportion of the total course grade that failing them has little impact. Another study found that academic dishonesty is frequent among Kuwaiti university students because the danger of detection and the severity of sanctions for academic misconduct are minimal (Alsuwaileh et al., 2016 ). Participants in that study believed academic dishonesty remains widespread because sanctions are insufficient. According to most of those participants, embarrassment is the only informal sanction for academic dishonesty, and they would be embarrassed more by the lecturer than by their friends or families.
According to the findings of the fourth research question, students believe that the existing educational system merely encourages memorization and does not teach them the real skills they need in their jobs. Students expressed a desire for a change in the university's teaching and assessment techniques. Indeed, there is a growing body of research on the role of assessment in contract cheating prevention and detection. Some suggest increasing the use of invigilated examinations (Clarke & Lancaster, 2006 ; Lines, 2016 ) and in-class viva voce examinations (Carroll & Appleton, 2001 ) to reduce the potential for cheating. Others focus on reducing motivations to cheat by increasing student engagement through personalized topics (Sutherland-Smith, 2013 ) and using authentic assessment, which aims to engage students in “real-world” tasks (Collins et al., 2007 ; QAA, 2020 ).
However, some researchers are skeptical about the impact of changing the assessment in curbing contract cheating and suggest that authentic assessment does not necessarily assure academic integrity and that educators need to be aware that cheating may take place even in applied and “authentic” exams such as oral exam/viva or practical exam (Harper et al., 2021 ; Ellis et al, 2020 ).
Text-rich forms of assessment, according to Harper et al. ( 2021 ), should keep their place in university assessment strategies, not because they are impervious to contract cheating, but because faculty become more proficient at detecting cheating in written assignments like research papers, which enables them to develop more personalized relationships with students. This supports the students’ belief that, in addition to assessment, a shift in pedagogy could play a role in minimizing contract cheating.
Our research suggests that students want to be educated about academic integrity and given explicit warnings about the consequences, but they also believe punishments should be severe enough to work as deterrents. Rational choice theory may offer hints about how to curb plagiarism in this regard. Universities should increase the benefits associated with non-plagiarized work and publicly circulate information about plagiarism; otherwise, punishments and sanctions will not deter plagiarism. Academic integrity values can be fostered, and students can become familiar with this culture, through course objectives and activities. A system of progressive educational punishment might likewise be implemented (Cinali, 2016 ; Mervis, 2012 ). Faculty members and university administrators should diminish the benefits of plagiarism and increase its costs and the probability of detection. If students still choose to plagiarize, they must take higher risks into account; otherwise, they need either to be experts in hiding plagiarism or to invest greater effort in producing plagiarism that is hard to detect, which will reduce the time-saving benefits of plagiarism (Collins et al., 2007 ) and in turn reduce the number of students committing it.
The findings of this study reveal that, in students’ perceptions, online learning has driven more students to contract cheat, primarily by approaching an essay mill or a tutor and paying them to do the work. Students said they were tempted by the opportunities presented by online learning, such as the absence of proctoring or any obligation to turn on their webcams during exams, the ease of finding a tutor to do the work at a very affordable rate, and their lack of motivation and skills to cope with the challenges of online classes, which led them to choose the easy way. Some students described the feeling as “being part of the herd,” which could be compared to the trendy acronym FOMO (fear of missing out): the feeling or perception that others are having more fun, living better lives, or experiencing better things than you are, often exacerbated by social media.
To summarize, academic integrity violations have been on the rise as a result of COVID-19-mandated online or hybrid education systems which may tempt many students to continue using their tried and tested methods of cheating when they return to face-to-face instruction. Therefore, violations of academic integrity necessitate a rethinking of teaching and evaluation methodologies. Higher education institutions must adapt to the changing contract cheating marketplace and ensure that the faculty are aware of contract cheating and can recognize the indicators of contract cheating. Students should be given the message that their tutors are aware of contract cheating services. To keep up with the constant changes in technology, academic integrity processes must be current, resilient, and assessed on a regular basis.
If we do not take immediate action, contract cheating will likely reach epidemic proportions. We need to take a comprehensive approach that includes a focus on assessment design, a strengthened culture of integrity, and robust technical tools. We should also urge academics to perform ongoing research on ways to improve academic integrity during and post pandemic higher education instruction.
We are confident this paper will add significant value to the existing literature; however, we cannot be sure that the focus groups captured a representative sample of students studying in higher education institutions in Kuwait. It is also important to note that the study is limited to the experiences and assumptions of the students who participated, and therefore the findings should not be generalized.
The qualitative data that support the findings of this study are available on request from the corresponding author, DE. The data are not publicly available as they contain information that could compromise the privacy of research participants.
COVID-19: Coronavirus disease
MAXQDA: Max Qualitative Data Analysis
Ahsan, K., Akbar, S., & Kam, B. (2021). Contract cheating in higher education: A systematic literature review and future research agenda. Assessment & Evaluation in Higher Education . https://doi.org/10.1080/02602938.2021.1931660
Ali, H.I. & Alhassan, A. (2021) Fighting contract cheating and ghostwriting in higher education: Moving towards a multidimensional approach. Cogent Education , 8 (1). https://doi.org/10.1080/2331186X.2021.1885837
Al Jiyyar, A. (2017). Cheated education: Research papers for sale at Kuwaiti University. Arab Reporters for Investigative Journalism (ARIJ). Retrieved November 17, 2021. https://en.arij.net/investigation/cheated-education-research-papers-for-sale-at-kuwait-university
Almeida, F., & Monteiro, J. (2021). The challenges of assessing and evaluating the students at distance. Journal of Online Higher Education, 5 (1), 3–10. https://doaj.org/article/eaab3e55f98346eeaa410db7528b7f59
AlSuwaileh, B. G., Russ-Eft, D. F., & Alshurai, S. R. (2016). Academic dishonesty: a mixed-method study of rational choice among students at the College of Basic Education in Kuwait. Journal of Education and Practice , 7(30), 139–151. Retrieved November 17, 2021, https://www.iiste.org/Journals/index.php/JEP/article/view/33629/34573 .
Arie, R. (2021). Academic dishonesty and COVID-19: a biological explanation. University Writing Program, Brandeis University. https://www.brandeis.edu/writingprogram/write-now/2020-2021/arierotem/arie-rotem.pdf
Bandura, A. (1997). Self-efficacy: The exercise of control . W H Freeman/Times Books/ Henry Holt & Co.
Bautista, R. M., & Pentang, J. T. (2022). Ctrl C + Ctrl V: Plagiarism and knowledge on referencing and citation among pre-service teachers. International Journal of Multidisciplinary: Applied Business and Education Research, 3 (2), 245–257. https://doi.org/10.11594/ijmaber.03.02.10
Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development . Sage.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3 (2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Brimble, M., & Stevenson-Clarke, P. (2005). Perceptions of the prevalence and seriousness of academic dishonesty in Australian universities. Australian Educational Researcher, 32 , 19–44. https://doi.org/10.1007/BF03216825
Carey, M. A., & Smith, M. W. (1994). Capturing the group effect in focus groups: A special concern in analysis. Qualitative Health Research, 4 (1), 123–127. https://doi.org/10.1177/104973239400400108
Carpenter, D. D., Harding, T. S., Finelli, C. J., Montgomery, S. M., & Passow, H. J. (2006). Engineering students’ perception of and attitudes towards cheating. Journal of Engineering Education, 95 (3), 181–194.
Carroll, J., & Appleton, J. (2001). Plagiarism: a good practice guide. https://i.unisa.edu.au/siteassets/staff/tiu/documents/plagiarism---a-good-practice-guide-by-oxford-brookes-university.pdf .
Cinali, G. (2016). Middle Eastern perspectives of academic integrity: a view from the Gulf region. In T. Bretag (Ed.), Handbook of Academic Integrity. Singapore: Springer. https://doi.org/10.1007/978-981-287-098-8_8
Cizek, G. (2003). Detecting and preventing classroom cheating. Thousand Oaks: Corwin Press. ISBN 0-7619-4655-1
Clarke, R. & Lancaster, T. (2006). Eliminating the successor to plagiarism? Identifying the usage of contract cheating sites. Proceedings of the 2nd International Plagiarism Conference available at: https://marketing-porg-statamic-assets-uswest-2.s3-us-west-2.amazonaws.com/main/Clarke2_fullpaper2006.pdf
Collins, A., Judge, G., & Rickman, N. (2007). On the economics of plagiarism. European Journal of Law and Economics., 24 (2), 93–107. https://doi.org/10.1007/s10657-007-9028-4
Coren, A. (2011). Turning a blind eye: Faculty who ignore student cheating. Journal of Academic Ethics., 9 (4), 291–305. https://doi.org/10.1007/s10805-011-9147-y
Daniels, L. M., Goegan, L. D., & Parker, P. C. (2021). The impact of COVID-19 triggered changes to instruction and assessment on university students’ self-reported motivation, engagement, and perceptions. Social Psychology of Education., 24 (1), 299–318. https://doi.org/10.1007/s11218-021-09612-3
Dawson, P., & Sutherland-Smith, W. (2019). Can training improve marker accuracy at detecting contract cheating? A multi-disciplinary pre-post study. Assessment & Evaluation in Higher Education, 44 (5), 715–725. https://doi.org/10.1080/02602938.2018.1531109
Denler, H., Walters, C., & Benzon, M. (2014). Social cognitive theory. http://www.education.com/reference/article/social-cognitive-theory/
De Jager, K., & Brown, C. (2010). The tangled web: Investigating academics’ views of plagiarism at the University of Cape Town. Studies in Higher Education, 35 (5), 513–528. https://doi.org/10.1080/03075070903222641
Dey, S. (2021). Reports of cheating at colleges soar during the pandemic. https://www.npr.org/2021/08/27/1031255390/reports-of-cheating-at-colleges-soar-during-the-pandemic .
DiPietro, M. (2010). 14: Theoretical frameworks for academic dishonesty. To Improve the Academy, 28(1), 250–262. https://doi.org/10.1002/j.2334-4822.2010.tb00606.x
Ellery, K. (2008). An investigation into electronic-source plagiarism in a first-year essay assignment. Assessment and Evaluation in Higher Education, 33 (6), 607–617. https://doi.org/10.1080/02602930701772788
Ellis, C., van Haeringen, K., Harper, R., Bretag, T., Zucker, I., McBride, S., Rozenberg, P., Newton, P. & Saddiqui, S. (2020). Does authentic assessment assure academic integrity? Evidence from contract cheating data. Higher Education Research & Development, 39(3), 454–469, https://doi.org/10.1080/07294360.2019.1680956
Ellis, C., Zucker, I. M., & Randall, D. (2018). The infernal business of contract cheating: Understanding the business processes and models of academic custom writing sites. International Journal for Educational Integrity, 14(1). https://doi.org/10.1007/s40979-017-0024-3
Erguvan, I. D. (2021). The rise of contract cheating during the COVID-19 pandemic: A qualitative study through the eyes of academics in Kuwait. Language Testing in Asia, 11(34). https://doi.org/10.1186/s40468-021-00149-y
Eshet, Y., Grinautski, K., & Peled, Y. (2012). Learning motivation and student academic dishonesty: a comparison between face-to-face and online courses. Proceedings of the Chais conference on instructional technology research 2012: Learning in technology era. (pp. 22–29). The Open University of Israel. Raanana
Eshet, Y. (2022). Contract cheating in Israel during the COVID-19 pandemic. In European Conference. https://philpapers.org/archive/ESHCCI-2.pdf
Fauzi, I., & Sastra Khusuma, I. H. (2020). Teachers’ elementary school in online learning of COVID-19 pandemic conditions. Jurnal Iqra’: Kajian Ilmu Pendidikan, 5 (1), 58–70. https://doi.org/10.25217/ji.v5i1.914
Gallant, T. B., & Drinan, P. (2006). Organizational theory and student cheating: Explanation, responses, and strategies. The Journal of Higher Education, 77 (5), 839–860.
Gamage, K. A., Silva, E. K., & Gunawardhana, N. (2020). Online delivery and assessment during COVID-19: Safeguarding academic integrity. Education Sciences, 10 (11), 301. https://doi.org/10.3390/educsci10110301
Grym, J., & Liljander, V. (2016). To cheat or not to cheat? The effect of a moral reminder on cheating. Nordic Journal of Business , 65(3–4), 18–37. http://njb.fi/wp-content/uploads/2017/01/Grym_Liljander.pdf
Guest, G., Namey, E., & McKenna, K. (2017). How many focus groups are enough? Building an evidence base for nonprobability sample sizes. Field Methods, 29 (1), 3–22. https://doi.org/10.1177/1525822X16639015
Gundumogula, M. (2020). Importance of focus groups in qualitative research. International Journal of Humanities and Social Science (IJHSS), 8 (11), 299–302. https://doi.org/10.24940/theijhss/2020/v8/i11/HS2011-082
Harper, R., Bretag, T., & Rundle, K. (2021). Detecting contract cheating: Examining the role of assessment type. Higher Education Research & Development, 40 (2), 263–278. https://doi.org/10.1080/07294360.2020.1724899
Hawes, D. (2020). Rational choice and political power. Journal of Contemporary European Studies, 28 (1), 132. https://doi.org/10.1080/14782804.2019.1684744
Hill, G., Mason, J., & Dunn, A. (2021). Contract cheating: An increasing challenge for global academic community arising from COVID-19. Research and Practice in Technology Enhanced Learning, 16(1). https://doi.org/10.1186/s41039-021-00166-8
Hoyt, L. T., Cohen, A. K., Dull, B., Maker Castro, E., & Yazdani, N. (2021). “Constant stress has become the new normal”: Stress and anxiety inequalities among U.S. college students in the time of COVID-19. Journal of Adolescent Health, 68 (2), 270–276. https://doi.org/10.1016/j.jadohealth.2020.10.030
Humble, N., & Mozelius, P. (2022). Content analysis or thematic analysis: Doctoral students' perceptions of similarities and differences. Electronic Journal of Business Research Methods, 20(3), 89–98. https://doi.org/10.34190/ejbrm.20.3.2920
Jones, D. L. R. (2011). Academic dishonesty: Are more students cheating? Business Communication Quarterly, 74 (2), 141–150.
Kajackaite, A., & Gneezy, U. (2017). Incentives and cheating. Games and Economic Behavior, 102 , 433–444. https://doi.org/10.1016/j.geb.2017.01.015
Kuckartz, U., & Rädiker, S. (2019). Analyzing qualitative data with MAXQDA: Text, audio, and video . Springer. https://doi.org/10.1007/978-3-030-15671-8
Karataş, Z. (2015). Sosyal bilimlerde nitel araştırma yöntemleri [Qualitative research methods in social sciences]. Manevi Temelli Sosyal Hizmet Araştırmaları Dergisi, 1(1), 62–80. https://www.academia.edu/33009261/Sosyal_Hizmet_E_Dergi_SOSYAL_B%C4%B0L%C4%B0MLERDE_N%C4%B0TEL_ARA%C5%9ETIRMA_Y%C3%96NTEMLER%C4%B0 .
King, D. L., & Case, C. J. (2014). E-cheating: incidence and trends among college students. Issues in Information Systems, 15 (1), 20–27. https://iacis.org/iis/2014/4_iis_2014_20-27.pdf
Lancaster, T. (2020). Commercial contract cheating provision through micro-outsourcing web sites. International Journal for Educational Integrity, 16(1). https://doi.org/10.1007/s40979-020-00053-7
Lancaster, T., & Cotarlan, C. (2021). Contract cheating by STEM students through a file sharing website: A COVID-19 pandemic perspective. International Journal for Educational Integrity, 17(3). https://doi.org/10.1007/s40979-021-00070-0
Lines, L. (2016). Ghostwriters guaranteeing grades? The quality of online ghostwriting services available to tertiary students in Australia. Teaching in Higher Education, 21(8), 889–914. https://doi.org/10.1080/13562517.2016.1198759
McCabe, D. L., & Bowers, W. (1994). Academic dishonesty among male college students: A thirty-year perspective. Journal of College Student Development, 35 , 3–10.
McCabe, D. L. (2005). Cheating among college and university students: A North American perspective. International Journal for Educational Integrity, 1 (1). https://doi.org/10.21913/ijei.v1i1.14
McGee, P. (2013). Supporting academic honesty in online courses. The Journal of Educators Online, 10 (1), 1–31.
Meccawy, Z., Meccawy, M. & Alsobhi, A. (2021). Assessment in ‘survival mode’: student and faculty perceptions of online assessment practices in HE during Covid-19 pandemic. International Journal for Educational Integrity. 17(16). https://doi.org/10.1007/s40979-021-00083-9
Mervis, J. (2012). Growing pains in the desert. Science, 338 (7), 1276–1281. https://doi.org/10.1126/science.338.6112.1276
Mok, K. H., Xiong, W., & Bin Aedy Rahman, H. N. (2021). COVID-19 pandemic’s disruption on university teaching and learning and competence cultivation: Student evaluation of online learning experiences in Hong Kong. International Journal of Chinese Education, 10(1), 221258682110070. https://doi.org/10.1177/22125868211007011
Morgan, D. L. (1996). Focus groups. Annual Review of Sociology, 22 , 129–152. https://doi.org/10.1146/annurev.soc.22.1.129
Morgan, D. L., & Krueger, R. A. (1993). When to use focus groups and why. In D. L. Morgan (Ed.), Successful focus groups: advancing the state of the art (pp. 3–19). Sage Publications Inc. https://doi.org/10.4135/9781483349008.n1
Nyamathi, A., & Shuler, P. (1990). Focus group interview: A research technique for informed nursing practice. Journal of Advanced Nursing, 15 (11), 1281–1288. https://doi.org/10.1111/j.1365-2648.1990.tb01743.x
Park, C. (2003). In other (people’s) words: Plagiarism by university students–literature and lessons. Assessment and Evaluation in Higher Education, 28 (5), 471–488.
Peytcheva-Forsyth, R., Aleksieva, L., & Yovkova, B. (2018). The impact of technology on cheating and plagiarism in the assessment – the teachers’ and students’ perspectives. AIP Conference Proceedings, 2048 , 020037. https://doi.org/10.1063/1.5082055
Pezalla, A. E., Pettigrew, J., & Miller-Day, M. (2012). Researching the researcher-as-instrument: An exercise in interviewer self-reflexivity. Qualitative Research: QR, 12 (2), 165–185. https://doi.org/10.1177/1487941111422107
Reedy, A., Pfitzner, D., Rook, L., & Ellis, L. (2021). Responding to the COVID-19 emergency: Student and academic staff perceptions of academic integrity in the transition to online exams at three Australian universities. International Journal for Educational Integrity, 17(9). https://doi.org/10.1007/s40979-021-00075-9
Roberts, C. J., & Hai-Jew, S. (2009). Issues of academic integrity: An online course for students addressing academic dishonesty. MERLOT Journal of Online Learning and Teaching, 5 (2), 182–196.
Rundle, K., Curtis, G. J., & Clare, J. (2019). Why students do not engage in contract cheating. Frontiers in Psychology, 10 (2229). https://doi.org/10.3389/fpsyg.2019.02229
Sahu, P. (2020). Closure of universities due to coronavirus disease 2019 (COVID-19): Impact on education and mental health of students and academic staff. Cureus, 12 (4), e7541. https://doi.org/10.7759/cureus.7541
Sattler, S., Graeff, P., & Willen, S. (2013). Explaining the decision to plagiarize: An empirical test of the interplay between rationality, norms, and opportunity. Deviant Behavior, 34 (6), 444–463. https://doi.org/10.1080/01639625.2012.735909
Sutherland-Smith, W. (2013). Crossing the line: collusion or collaboration in university group work? Australian Universities Review, 55(1). https://files.eric.ed.gov/fulltext/EJ1004398.pdf
The Quality Assurance Agency for Higher Education. (2020). Contracting to cheat in higher education. https://www.qaa.ac.uk/docs/qaa/guidance/contracting-to-cheat-in-higher-education-2nd-edition.pdf
Townley, C., & Parsell, M. (2004). Technology and academic virtue: Student plagiarism through the looking glass. Ethics and Information Technology, 6 (4), 271–277. https://doi.org/10.1007/s10676-005-5606-8
Tran, T. K., Dinh, H., Nguyen, H., Le, D.-N., Nguyen, D.-K., Tran, A. C., … Tieu, S. (2021). The impact of the COVID-19 pandemic on college students: an online survey. Sustainability , 13:10762. https://doi.org/10.3390/su131910762
Turner, S. W., & Uludag, S. (2013). Student perceptions of cheating in online and traditional classes. In 2013 IEEE frontiers in education conference (FIE 2013): Oklahoma City, Oklahoma, USA, 23 - 26 October 2013. IEEE. https://doi.org/10.1109/FIE.2013.6685007
Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences, 15 (3), 398–405. https://doi.org/10.1111/nhs.12048
Vilchez, M., & Thirunarayanan, M.O. (2011). Cheating in online courses: A qualitative study. Department of Teaching and Learning. 14. https://digitalcommons.fiu.edu/tl_fac/14
Walker, M., & Townley, C. (2012). Contract cheating: a new challenge for academic honesty? Journal of Academic Ethics, 10 (1), 27–44. https://doi.org/10.1007/s10805-012-9150-y
Walsh, L. L., Lichti, D. A., Zambrano-Varghese, C. M., Borgaonkar, A. D., Sodhi, J. S., Moon, S., Wester, E. R., & Callis-Duehl, K. L. (2021). Why and how science students in the United States think their peers cheat more frequently online: Perspectives during the COVID-19 pandemic. International Journal for Educational Integrity, 17(23). https://doi.org/10.1007/s40979-021-00089-3
Whitley, B. E. (1998). Factors associated with cheating among college students: A review. Research in Higher Education, 39 (3), 235–274.
Whitley, B. E., & Keith-Spiegel, P. (2001). Academic dishonesty: An educator's guide. Psychology Press. https://doi.org/10.4324/9781410604279
Xie, Z., & Yang, J. (2020). Autonomous learning of elementary students at home during the COVID-19 epidemic: a case study of the second elementary school in Daxie, Ningbo, Zhejiang province, China. Best Evidence of Chinese Education, 4 (2), 535–541. https://doi.org/10.15354/bece.20.rp009
Zafarghandi, A., Khoshroo, F., & Barkat, B. (2012). An investigation of Iranian EFL Masters students’ perceptions of plagiarism. International Journal for Educational Integrity, 8 , 69–85. https://doi.org/10.21913/IJEI.v8i2.811
This study has been funded by Gulf University for Science and Technology in Kuwait, internal seed grant number 234553.
Authors and affiliations.
Gulf University for Science and Technology, West Mishref, Kuwait
Inan Deniz Erguvan
As the sole author of this manuscript, Deniz Erguvan wrote the literature review, collected and analyzed the data, and produced the discussion section. The author read and approved the final manuscript.
DE has been teaching undergraduate students at a private university in Kuwait for the past 12 years. She is quite familiar with contract cheating practices and has observed a significant surge in contract cheating among students during the pandemic.
Correspondence to Inan Deniz Erguvan.
Competing interests.
The author declares no competing interests.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article.
Erguvan, I. D. University students' understanding of contract cheating: a qualitative case study in Kuwait. Lang Test Asia 12, 56 (2022). https://doi.org/10.1186/s40468-022-00208-y
Received: 29 August 2022
Accepted: 19 November 2022
Published: 02 December 2022
DOI: https://doi.org/10.1186/s40468-022-00208-y
In a new BU-led paper, astrophysicists calculate the likelihood that Earth was exposed to cold, harsh interstellar clouds, a phenomenon not previously considered in geologic climate models.
For a brief period of time millions of years ago, Earth may have been plunged out of the sun’s protective plasma shield, called the heliosphere, which is depicted here as the dark gray bubble over the backdrop of interstellar space. According to new research, this could have exposed Earth to high levels of radiation and influenced the climate. Photo courtesy of Opher, et al., Nature Astronomy
Around two million years ago, Earth was a very different place, with our early human ancestors living alongside saber-toothed tigers, mastodons, and enormous rodents. And they may have been cold: Earth had fallen into a deep freeze, with multiple ice ages coming and going until about 12,000 years ago. Scientists theorize that ice ages occur for a number of reasons, including the planet’s tilt and rotation, shifting plate tectonics, volcanic eruptions, and carbon dioxide levels in the atmosphere. But what if drastic changes like these are not only a result of Earth’s environment, but also the sun’s location in the galaxy?
In a new paper published in Nature Astronomy, BU-led researchers find evidence that some two million years ago, the solar system encountered an interstellar cloud so dense that it could have interfered with the sun’s solar wind. They believe it shows that the sun’s location in space might shape Earth’s history more than previously considered.
Our whole solar system is swathed in a protective plasma shield that emanates from the sun, known as the heliosphere. It’s made from a constant flow of charged particles, called solar wind, that stretches well past Pluto, wrapping the planets in what NASA calls “a giant bubble.” It protects us from radiation and galactic rays that could alter DNA, and scientists believe it’s part of the reason life evolved on Earth as it did. According to the latest paper, the cold cloud compressed the heliosphere in such a way that it briefly placed Earth and the other planets in the solar system outside of its influence.
“This paper is the first to quantitatively show there was an encounter between the sun and something outside of the solar system that would have affected Earth’s climate,” says BU space physicist Merav Opher, an expert on the heliosphere and lead author of the paper.
Her models have quite literally shaped our scientific understanding of the heliosphere, and how the bubble is structured by the solar wind pushing up against the interstellar medium—the space in our galaxy between stars and beyond the heliosphere. Her theory is that the heliosphere is shaped like a puffy croissant, an idea that shook the space physics community. Now, she’s shedding new light on how the heliosphere, and where the sun moves through space, could affect Earth’s atmospheric chemistry.
“Stars move, and now this paper is showing not only that they move, but they encounter drastic changes,” says Opher, a BU College of Arts & Sciences professor of astronomy and member of the University’s Center for Space Physics. She worked on the study during a yearlong Harvard Radcliffe Institute fellowship.
Opher and her collaborators essentially looked back in time, using sophisticated computer models to visualize where the sun was positioned two million years in the past—and, with it, the heliosphere and the rest of the solar system. They also mapped the path of the Local Ribbon of Cold Clouds system, a string of large, dense, very cold clouds mostly made of hydrogen atoms. Their simulations showed that one of the clouds close to the end of that ribbon, named the Local Lynx of Cold Cloud, could have collided with the heliosphere.
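The core idea of tracing a star’s position back in time can be illustrated with a deliberately simplified kinematic sketch. The study itself uses sophisticated models rather than this straight-line extrapolation; the `trace_back` helper and the coordinate values below are illustrative assumptions, with only the sun’s approximate peculiar velocity (roughly 11, 12, and 7 km/s in the three standard components) drawn from the literature.

```python
# Toy sketch: linearly trace a star's past position from its current
# position and space velocity, ignoring the Galaxy's gravitational
# potential. This is NOT the paper's actual model.

PC_PER_KM = 1.0 / 3.0857e13       # parsecs per kilometer
SECONDS_PER_YEAR = 3.156e7        # seconds in one year

def trace_back(position_pc, velocity_kms, years):
    """Return the position (in parsecs) `years` ago, assuming
    constant straight-line motion at `velocity_kms` (km/s)."""
    return tuple(
        p - v * SECONDS_PER_YEAR * PC_PER_KM * years
        for p, v in zip(position_pc, velocity_kms)
    )

# Place the sun at the origin today and rewind two million years
# using its approximate peculiar velocity of (11, 12, 7) km/s.
past = trace_back((0.0, 0.0, 0.0), (11.0, 12.0, 7.0), 2e6)
print(past)  # each coordinate ends up tens of parsecs from today's position
```

Even this crude estimate shows why the encounter is plausible: over two million years the sun drifts a few tens of parsecs, comparable to the scale of local interstellar cloud structures, which is why the real orbit reconstructions can intersect the cold-cloud ribbon.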
If that had happened, says Opher, Earth would have been fully exposed to the interstellar medium, where gas and dust mix with the leftover atomic elements of exploded stars, including iron and plutonium. Normally, the heliosphere filters out most of these radioactive particles. But without protection, they can easily reach Earth. According to the paper, this aligns with geological evidence that shows increased 60Fe (iron-60) and 244Pu (plutonium-244) isotopes in the ocean, Antarctic snow, and ice cores—and on the moon—from the same time period. The timing also matches with temperature records that indicate a cooling period.
“Only rarely does our cosmic neighborhood beyond the solar system affect life on Earth,” says Avi Loeb, director of Harvard University’s Institute for Theory and Computation and coauthor on the paper. “It is exciting to discover that our passage through dense clouds a few million years ago could have exposed the Earth to a much larger flux of cosmic rays and hydrogen atoms. Our results open a new window into the relationship between the evolution of life on Earth and our cosmic neighborhood.”
The outside pressure from the Local Lynx of Cold Cloud could have continually blocked out the heliosphere for a couple of hundred years to a million years, Opher says—depending on the size of the cloud. “But as soon as the Earth was away from the cold cloud, the heliosphere engulfed all the planets, including Earth,” she says. And that’s how it is today.
It’s impossible to know the exact effect the cold cloud had on Earth—like if it could have spurred an ice age. But there are a couple of other cold clouds in the interstellar medium that the sun has likely encountered in the billions of years since it was born, Opher says. And it will probably stumble across more in another million years or so.
Opher and her collaborators are now working to trace where the sun was seven million years ago, and even further back. Pinpointing the location of the sun millions of years in the past, as well as the cold cloud system, is possible with data collected by the European Space Agency’s Gaia mission, which is building the largest 3D map of the galaxy and giving an unprecedented look at the speeds at which stars move.
“This cloud was indeed in our past, and if we crossed something that massive, we were exposed to the interstellar medium,” Opher says. The effect of crossing paths with so much hydrogen and radioactive material is unclear, so Opher and her team at BU’s NASA-funded SHIELD (Solar wind with Hydrogen Ion Exchange and Large-scale Dynamics) DRIVE Science Center are now exploring the effect it could have had on Earth’s radiation, as well as the atmosphere and climate.
“This is only the beginning,” Opher says. She hopes that this paper will open the door to much more exploration of how the solar system was influenced by outside forces in the deep past.
This research was supported by NASA.
Jessica Colarossi is a science writer for The Brink. She graduated with a BS in journalism from Emerson College in 2016, with focuses on environmental studies and publishing. While a student, she interned at ThinkProgress in Washington, D.C., where she wrote over 30 stories, most of them relating to climate change, coral reefs, and women’s health.