
Assessing the role of AI Ethics in Education on Sustainable Development & Institutional AI Policies 

  • Vu Thanh Hao Ngoc
  • Jan 16
  • 10 min read

Updated: Jan 25

Abstract


In today's globalised world, the prevalence of AI is a double-edged sword for education. On the one hand, it supports students' learning and the development of intrinsic abilities; on the other, the potential for misuse and violation of academic integrity is considerable. In this context, determining the role of AI ethical awareness is crucial for successful education and sustainability. This essay argues that AI ethics can foster development through improvements in human capital, contributing to long-run growth as described by endogenous growth theory, while abusive use can distort productivity and generate negative externalities via false signalling, further harming long-term growth. Relevant AI academic integrity policies across institutions at different education levels are assessed to investigate awareness-raising solutions; the role of ethical AI use in academia grounded in moral principles is highlighted, alongside regulations backed by various forms of discipline. Limitations affecting the feasibility of AI policies in practice are considered from technical, behavioural and practical perspectives. The policy proposal in this essay aims to bridge the cognitive and behavioural gap in early education on AI ethical awareness and to set a clear, comprehensible framework for school-level AI policy. In higher education, a nudge can be provided through step-by-step guidelines and the expansion of AI policies to promote responsible AI usage. Further research might investigate effective awareness interventions, rather than sole reliance on regulation, in pursuit of sustainable development.


Introduction


Education is a vital factor contributing to sustainable growth, considering not only purely economic growth but also human development (Lucas, 1988; Romer, 1990). In this innovative era, the integration of Artificial Intelligence (AI) in education is increasingly prevalent due to its significant benefits. Nevertheless, the role of AI awareness in modern education's contribution to development has not been assessed: existing papers mainly examine AI usage itself, not how learners actually use it, nor its influence at the macro level.


The rise of AI has brought countless advantages to modern education, from supporting students in their learning to efficient assessment (Rahayu, 2023). Other evidence echoes these findings, suggesting that AI can foster motivation, critical thinking and productivity by promoting an active and engaging learning environment (Bittle & El-Gayar, 2025; Chaudhary et al., 2024). The benefits for education are undeniable; however, the technology has been criticised for its detrimental influence on the preservation of academic integrity in today's globalised world (Bittle & El-Gayar, 2025), raising concerns about the quality of education. Moreover, abusing GenAI can distort cognitive abilities (Zhai et al., 2024). That said, the prime consideration is not whether to use AI in education, but whether learners use it ethically and remain aware of its risks, mitigating the danger of misuse. AI ethical awareness, which involves the ethical perception and application of AI (Kong & Zhu, 2025), is an irreplaceable component of successful education in this innovative era.


This essay argues that AI ethical awareness is the key variable determining whether sustainability is achieved: high awareness improves human capital, while low awareness distorts it and worsens the economy. Several implications could improve awareness, accounting for both early-stage and higher education for a well-rounded approach.



The role of AI ethical awareness in education


It is clear that AI tools are advantageous in education; nevertheless, a crucial determinant of success is AI ethics, that is, how students utilise AI ethically and responsibly. This part of the essay analyses how AI ethics in education can contribute to sustainability.


From the individual perspective, AI offers a personalised and adaptive learning environment that maximises productivity. Beyond knowledge, the technology can foster intrinsic qualities such as critical thinking and creativity (Chaudhary et al., 2024; Fan et al., 2025). These qualities are core components of human capital, which, according to human capital theory, is the engine that raises future productivity. Specifically, perceiving and utilising AI mindfully helps learners improve their academic performance as well as their problem-solving skills (Bittle & El-Gayar, 2025). By improving these competencies, ethical use of AI enhances not only learning success but also human capital growth, which supports competitiveness in today's rapidly changing economy and, in turn, job security and lifelong learning, as workers possess robust competencies.


Regarding the influence on the economy, AI ethics plays a significant role in sustainable development. According to endogenous growth theory (Lucas, 1988; Romer, 1990), long-run economic growth is the result of human capital, knowledge accumulation and innovation, the internal forces within the economy. A systematic review suggests that AI can foster sustainability in individual development through knowledge accessibility, efficiency and personalised learning, and that these sustainable improvements require the sustainability of AI itself, involving ethical, fair, transparent and responsible use (Bansal, 2025). Following the earlier argument, high awareness of AI use, including ethical use in academics, fosters human capital and the innate value of labour. That human capital, through continuing creativity and innovation, promotes more sustainable and responsible technological advancement, leading to social sustainability and long-term growth. In this sense, AI ethics is a central driver of long-run development.
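The growth mechanism described here can be sketched with the standard Lucas (1988) formulation of endogenous growth. The notation below is a textbook rendering added for illustration, not an equation taken from the essay's sources:

```latex
% Human capital accumulation (Lucas, 1988):
%   h      : human capital per worker
%   u      : fraction of time devoted to production (1-u to learning)
%   \delta : effectiveness of the education technology
\dot{h} = \delta (1 - u)\, h

% Output with human-capital-augmented labour:
Y = A K^{\beta} (u h L)^{1-\beta}
```

On this reading, ethical AI use that genuinely builds skills raises the effective \(\delta\), while misuse that hollows out learning lowers it, shrinking the growth rate of \(h\) and hence of long-run output.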


Despite these benefits, learners with low ethical awareness of AI use will continue to exist. Because these individuals are unaware of the drawbacks of violating academic integrity, especially when using AI tools, they may cheat, misuse AI or plagiarise in class assessments, or, worse, suffer distortion of their cognitive abilities (Bittle & El-Gayar, 2025; Zhai et al., 2024). This detrimentally affects learning objectives and learners' competency, calling the actual outcome of education into question. It also worsens the quality of human capital, as people may lack incentives to innovate because of AI over-reliance or cognitive deterioration. Consequently, productivity drops, as the labour force is no longer competent, further harming the economy's potential growth.


Using AI to cheat can also create false signals, leading to a misallocation of labour and a negative externality for society. Concerning assessment, the development of AI makes generated content challenging to distinguish from human-authored work (Wakjira et al., 2025). It can therefore be argued that when these technologies are used to cheat, maintaining learning quality becomes a challenge for educators. As a result, grades inflate and become asymmetric with intrinsic ability: cheaters use tools to pass exams and complete assignments without putting in genuine effort. This creates a false signal to employers, who are unable to determine candidates' full capacity from scores alone, leading to a misallocation of labour in which equilibrium wages do not match labour quality. In the long term, this slows economic growth or even exacerbates the economy's vulnerability. According to endogenous growth theory, long-term growth is not merely due to technological change; it depends heavily on human capital and innovation (Romer, 1990). Grade inflation from AI cheating produces high-achieving students without skills, decelerating innovation due to a low level of human capital. Consequently, these factors make the economy more vulnerable and undermine potential growth.
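The false-signalling argument can be made precise with a minimal screening condition; the symbols below are illustrative assumptions added for clarity, not part of any cited model:

```latex
% Competitive wage equals expected productivity conditional on the grade signal g:
w(g) = \mathbb{E}[\theta \mid g]

% Without cheating, a high grade is informative:
%   \Pr(\theta_H \mid g_{\text{high}}) \approx 1
% When AI cheating lets low-ability students also obtain g_{\text{high}},
% \Pr(\theta_H \mid g_{\text{high}}) falls toward the population share of \theta_H,
% so w(g_{\text{high}}) drifts to the unconditional mean \mathbb{E}[\theta]
% and no longer tracks true ability.
```

That drift is precisely the misallocation in the text: equilibrium wages detach from labour quality because the grade signal has been degraded.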


Additionally, this false signalling creates a negative externality for society, which must invest more in anti-cheating technologies to prevent labour misallocation and maintain integrity. Even though cheaters are the ones engaging in this behaviour, the cost of assessment and prevention falls on society. For instance, the University of California (UC) Berkeley and UC Davis paid approximately $430,000 for Turnitin, an anti-plagiarism software tool, over the past four years, while UC Irvine's contract for the same tool exceeded $550,000 over a seven-year period (Mathewson, 2025). Firms may also have to spend more time and effort on recruitment, particularly on assessing candidates' ability, increasing hiring costs. This illustrates the massive spending on preventing academic misconduct and alleviating labour misallocation, which is inefficient since those expenditures could instead fund categories such as research and development for future growth.
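As a rough per-year comparison of the contract figures cited above (a back-of-the-envelope annualisation, not a calculation from the source itself):

```python
# Annualising the approximate Turnitin contract figures reported by
# Mathewson (2025); amounts are rounded as given in the essay.
berkeley_davis_total = 430_000   # USD over 4 years (UC Berkeley + UC Davis)
irvine_total = 550_000           # USD over 7 years (UC Irvine, "more than")

berkeley_davis_per_year = berkeley_davis_total / 4
irvine_per_year = irvine_total / 7

print(f"Berkeley/Davis: ~${berkeley_davis_per_year:,.0f} per year")
print(f"Irvine:         >${irvine_per_year:,.0f} per year")
```

Even annualised, each contract runs to roughly six figures per year for detection alone, before counting staff time spent on misconduct cases.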


Policies and practical challenges

The role of awareness in academic AI usage is vital in determining the success of education and sustainability. Accordingly, UNESCO has promulgated its Guidance for Generative AI in Education and Research to support countries in planning immediate actions and long-term policies to develop human capacity. Echoing this, many institutions at the higher education level have AI, academic integrity and other relevant policies. Prestigious universities such as Harvard have clear guidelines for using generative AI tools, aiming to guide students towards ethical and responsible application (Harvard University Information Technology, n.d.). At the high school level, the International Baccalaureate Organization (IBO) also devotes a section of its academic integrity policy to AI for IB students (IBO, 2023), emphasising ethics in application and forms of discipline for academic misconduct (disqualification, diploma withdrawal, and so on). Most of the aforementioned guidelines follow the principles of academic integrity, as well as data privacy and intellectual property. Clearly, worldwide organisations acknowledge the significance of maintaining academic honesty in the AI era and offer policies to improve awareness among students.


Even though such measures exist to maintain AI ethics, the policies themselves face considerable obstacles in practice that affect their effectiveness. First, AI content detection tools have limited capability in identifying AI-generated content. A study showed inconsistent accuracy among AI detectors, which may misclassify AI-generated text as human-written and vice versa (Wakjira et al., 2025). This challenges the reliability of learning outcome assessment and the transparency of work. Second, although institutions have clear guidelines for the ethical application of AI tools, outlining methods and consequences for violations, they are unable to fully manage students' behaviour, especially when students act in their self-interest; and because the detection tools universities rely on are flawed, educators cannot give valid evaluations of students' work or ensure that everyone follows the regulations. Third, school AI policy is not sufficiently widespread in general education. While the IBO and other international qualifications such as Pearson A-Level have proper regulations for this issue, the rest do not: in the US, 31% of public schools have a formal policy for AI usage, and only 14% of them run AI ethics educational activities (National Center for Education Statistics, 2024). Moreover, about 60% of educators reported that their school's AI policy was not clear to them or their students (Langreo, 2025). This leads to shortcomings in disseminating awareness of AI ethics to students in general. Fourth, it is complicated to quantify abstract concepts such as AI ethics, so policy-makers cannot obtain precise, measurable outcomes to determine the effectiveness of their policies. In other words, it is hard to determine whether AI policies actually raise students' ethical awareness or merely encourage them to develop new cheating strategies.
Consequently, there is insufficient evidence on outcomes with which to assess impact, suggesting that the introduction of AI policies might not be as effective as proposed. All of these considerations suggest that implementing AI policy in an academic context requires empirical evidence for feasibility evaluation and modification.


Recommendation

Beyond setting new, formal guidelines in the academic context, practical factors influencing effectiveness should be considered. This part of the essay recommends well-rounded implications for practice.


At the school level, AI policies are not sufficiently widespread or clear to educators and students alike. AI policies should be encouraged within a clear and comprehensive framework, referencing reliable sources such as UNESCO's Guidance for Generative AI in Education and Research; this provides a structure enhancing clarity and accessibility in the school context. Alongside this, there should be cognitive intervention about AI in the early stages of education to reduce the risk of future misuse. According to Piaget's cognitive development theory, children can reason about moral principles and their consequences in the formal operational stage (Piaget, 1952). The social learning theory likewise suggests that humans learn behaviours from their environment through imitation (Bandura, 1977). On this basis, schools could arrange AI ethics sessions for younger students, consisting of both AI ethical principles and examples of ethical AI use. The purpose of this hybrid measure is to offer a holistic approach covering both the cognitive and behavioural aspects of AI ethics, further reinforcing students' ethical actions.

At the college level and above, educators might implement policy with proper guidance and forms of discipline, drawing a clear borderline for ethical use in context. This contextualises the policies, delivering comprehensive guidance for both teachers and learners while extending the implications more broadly. By framing AI regulations through choice architecture, institutions can replace vague mandates with specific behavioural prompts to enhance compliance. Clear guidance, such as "disclose the specific AI prompts used" rather than "maintain integrity", functions as a nudge that lowers cognitive load (Thaler & Sunstein, 2008). This design aligns ethical behaviour with the path of least resistance, reducing student resistance through simplified, accessible decision-making.


Conclusion

In sum, AI ethical awareness can be considered an essential factor determining the quality of human capital, a core component of sustainable growth. Accordingly, there are proper measures to improve awareness, or at least to regulate misuse, guiding students towards academic integrity. However, these policies face obstacles in practice and require empirical evidence for suitable modification. Acknowledging the technical challenges and considering long-term human growth, further research might concentrate on developing effective measures to raise learners' awareness of AI ethics in early education for sustainability.


References

Bandura, A. (1977). Social learning theory. Prentice Hall.

Bansal, C. (2025). AI ethics and sustainability: Accelerating paradigm shifts toward sustainable development. Journal of Strategy & Innovation, 36(1), Article 200537. https://doi.org/10.1016/j.jsinno.2025.200537

Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review and research agenda. Information, 16(4), Article 296. https://doi.org/10.3390/info16040296

Chaudhary, A. A., Arif, S., Calimlim, R. J. F., Khan, S. Z., & Sadia, A. (2024). The impact of AI-powered educational tools on student engagement and learning outcomes at higher education level. International Journal of Contemporary Issues in Social Sciences, 3(2), 2842–2852.

Fan, L., Deng, K., & Liu, F. (2025). Educational impacts of generative artificial intelligence on learning and performance of engineering students in China. Scientific Reports, 15, Article 26521. https://doi.org/10.1038/s41598-025-06930-w

Harvard University Information Technology. (n.d.). Generative AI guidelines. Harvard University. https://www.huit.harvard.edu/ai/guidelines

International Baccalaureate Organization. (2023). Academic integrity policy (Updated March 2023). https://www.ibo.org/globalassets/new-structure/programmes/shared-resources/pdfs/academic-integrity-policy-en.pdf

Kong, S. C., & Zhu, J. (2025). Developing and validating an artificial intelligence ethical awareness scale for secondary and university students: Cultivating ethical awareness through problem-solving with artificial intelligence tools. Computers and Education: Artificial Intelligence, 9, Article 100447. https://doi.org/10.1016/j.caeai.2025.100447

Langreo, L. (2025). Schools’ AI policies are still not clear to teachers and students. Education Week. https://www.edweek.org/technology/schools-ai-policies-are-still-not-clear-to-teachers-and-students/2025/01

Lucas, R. E., Jr. (1988). On the mechanics of economic development. Journal of Monetary Economics, 22(1), 3–42.

Mathewson, T. G. (2025, June 26). Turnitin charged colleges vastly different amounts to detect plagiarism. The Markup. https://themarkup.org/artificial-intelligence/2025/06/26/plagiarism-detector-costs-california

National Center for Education Statistics. (2024). School Pulse Panel – Interactive results. https://nces.ed.gov/surveys/spp/results.asp#technology-dec24-chart-4

Piaget, J. (1952). The origins of intelligence in children (M. Cook, Trans.). International Universities Press. (Original work published 1936)

Rahayu, S. (2023). The impact of artificial intelligence on education: Opportunities and challenges. Jurnal Educatio, 9(4), 2132–2140. https://doi.org/10.31949/educatio.v9i4.6110

Romer, P. M. (1990). Endogenous technological change. Journal of Political Economy, 98(5, Pt. 2), S71–S102.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

Wakjira, T., Tijani, I., Alam, M. S., Mashal, M., & Hasan, M. (2025). Can we trust AI content detection tools for critical decision-making? Information, 16(10), Article 904. https://doi.org/10.3390/info16100904

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11, Article 28. https://doi.org/10.1186/s40561-024-00316-7


