Toward Ethical Use of Generative AI in AP Courses

Kyrie Zhixuan Zhou, Madelyn Sanfilippo, Allison Sinnott

The emergence of generative AI and Large Language Models (LLMs) has sparked heated discussion around their ethical usage. These tools provide creative support (Di Fede et al., 2022) and task automation (Wen et al., 2023), among many other uses. However, ethical concerns have emerged, ranging from privacy risks (Zhang et al., 2023) to bias (Felkner et al., 2023). The stakes are higher when it comes to the ethical use of generative AI in high schools. High schoolers are the next generation’s decision-makers, and educating them on the ethical use of generative AI in academia is crucial. Yet education policies regulating generative AI use remain limited. Our synthesis and recommendations are based on the College Board’s 2023-24 Guidance for Artificial Intelligence Tools and Other Services for AP courses. AP courses offer undergraduate-level curricula to high schoolers, with examinations and projects for college credit equivalency. Most AP courses are assessed through a paper examination. The following courses, however, require a project or portfolio completed throughout the school year under the supervision of a teacher and are therefore subject to the student AI policy.

AP Computer Science Principles

Restrictions on the use of generative AI are relatively loose in the AP Computer Science Principles course. Acceptable generative AI use includes “utilizing generative AI tools as supplementary resources for understanding coding principles, assisting in code development, and debugging.” Simultaneously, students are warned that “generative AI tools can produce incomplete code, code that creates or introduces biases, code with errors, inefficiencies in how the code executes, or code complexities that make it difficult to understand and therefore explain the code.” Students are thus asked to “take the responsibility to review and understand any code co-written with AI tools, ensuring its functionality,” as well as “explain their code in detail.” Such guidance strikes a balance between generative AI tools’ roles as efficiency boosters, learning peers, and collaborative agents on the one hand, and as error-prone, bias-introducing, imperfect co-workers on the other. By permitting the responsible use of generative AI to develop code, the College Board allows students to explore and reflect on how to achieve fruitful and ethical human-AI collaboration in computing.

AP Art and Design

In contrast to the AP Computer Science Principles course, the AP Art and Design course strictly prohibits the use of AI tools: “The use of Artificial Intelligence tools is categorically prohibited at any stage of the creative process.” The College Board links the usage of AI to plagiarism: “It is unethical, constitutes plagiarism, and often violates copyright law simply to copy another work or image (even in another medium) and represent it as one’s own.” Generative AI tools threaten to displace professional artists, e.g., by mimicking the artistic style of specific artists (Shan et al., 2023). Researchers have identified open problems regarding the regulation of generative AI in the arts, e.g., the role of platform interventions in tracking source provenance (Epstein et al., 2023). The blanket prohibition of AI tools in the AP Art and Design course is an overly simplified response to open questions about copyright, source provenance, and the protection of artists’ rights, as well as to the lack of policies and laws in this realm.

AP Capstone

AP Capstone courses, including AP Seminar and AP Research, enable high schoolers to dive into the academic world and research. There is ongoing debate over the ethics of using generative AI, particularly LLMs, in research. For example, it is unclear how to preserve the privacy of survey respondents when LLMs are used in data analysis (Shen et al., 2023). Some researchers also characterize text written by ChatGPT as plagiarized and thus unacceptable (Thorp, 2023). The complexity of the research process makes it hard to decide which steps can ethically incorporate AI use, and how. The College Board does not prohibit the use of generative AI in research, but instead encourages responsible use: “Generative AI tools must be used ethically, responsibly, and intentionally to support student learning, not to bypass it.” Some concrete use cases are provided, such as “exploration of potential topics of inquiry, initial searches for sources of information, confirming their understanding of a complex text, or checking their writing for grammar and tone.” To ensure students genuinely engage with the performance tasks and do not use generative AI to bypass work, they are asked to complete interim “checkpoints” (similar to oral defenses) with their teacher. Teachers are responsible for “confirming the student’s final submission is, to the best of their knowledge, authentic student work.” This is where biases can be introduced. Non-native English writing has been misclassified as AI-generated, raising concerns about fairness and robustness (Liang et al., 2023). There are currently no best practices for teachers to detect AI-generated materials, whether with detection tools or by eye.

The College Board as an Ethical AI Promoter?

The College Board is hardly an ideal promoter of ethical AI use. It has been reported to disclose sensitive student information, such as GPAs and SAT scores, to social media platforms. Despite its “not-for-profit” status, the College Board commodifies students and monetizes their data, with the associated profits benefiting its executives. As a seller of student data and a harvester of student money, the College Board’s stance on regulating AI use may reflect profitability concerns, e.g., keeping test results convincing to attract high schoolers who seek a better chance of getting into a good university. It may thus emphasize financial interests over the benefits that AI use brings to students, including creativity, efficiency, and digital literacy around systems that will certainly shape their future education, careers, and lives.


Academic integrity vs. benefits of generative AI usage. In addition to supporting creativity and ideation, generative AI also helps in engaging students, facilitating collaboration, and personalizing learning experiences (Cotton et al., 2023). While detecting and preventing academic dishonesty has always been challenging, educators can teach and proactively encourage the ethical use of AI tools in education.

Differentiating AI use in different contexts. The College Board’s differentiation between the policies of different courses is helpful. The benefits of generative AI in assisting computing and research are acknowledged: responsible use is allowed in AP Capstone courses and in the AP Computer Science Principles course. The overly simplified regulation in the AP Art and Design course, however, should be overhauled.

Providing guidelines to schools and teachers. The College Board fails to inform educators of best practices for regulating AI use in academic settings. It is crucial to establish a policy for assessments that do not contribute to an AP score, where grading is at the school’s or teacher’s discretion. Schools and teachers often lack education on AI ethics (Kilhoffer et al., 2023), let alone on the benefits and risks of newly emerging generative AI. They may fear what they do not understand and therefore create overly strict policies against the use of this technology. With a heightened level of ethical discussion surrounding generative AI, the College Board should provide more concrete guidelines for schools and teachers regarding their decisions on AI use in the classroom, toward enabling students to use generative AI fruitfully and responsibly.


Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1-12.

Di Fede, G., Rocchesso, D., Dow, S. P., & Andolina, S. (2022, June). The Idea Machine: LLM-based Expansion, Rewriting, Combination, and Suggestion of Ideas. In Proceedings of the 14th Conference on Creativity and Cognition (pp. 623-627). 

Epstein, Z., Hertzmann, A., Investigators of Human Creativity (2023). Art and the science of generative AI. Science, 380(6650), 1110-1111.

Felkner, V. K., Chang, H. C. H., Jang, E., & May, J. (2023). WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models. arXiv preprint arXiv:2306.15087.

Kilhoffer, Z., Zhou, Z., Wang, F., Tamton, F., Huang, Y., Kim, P., Yeh, T., & Wang, Y. (2023, May). “How technical do you get? I’m an English teacher”: Teaching and Learning Cybersecurity and AI Ethics in High School. In 2023 IEEE Symposium on Security and Privacy (SP) (pp. 2032-2032). IEEE.

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.

Shan, S., Cryan, J., Wenger, E., Zheng, H., Hanocka, R., & Zhao, B. Y. (2023). Glaze: Protecting artists from style mimicry by text-to-image models. arXiv preprint arXiv:2302.04222.

Shen, H., Li, T., Li, T. J. J., Park, J. S., & Yang, D. (2023). Shaping the Emerging Norms of Using Large Language Models in Social Computing Research. arXiv preprint arXiv:2307.04280.

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313.

Wen, H., Li, Y., Liu, G., Zhao, S., Yu, T., Li, T. J. J., … & Liu, Y. (2023). Empowering LLM to use Smartphone for Intelligent Task Automation. arXiv preprint arXiv:2308.15272.

Zhang, Z., Jia, M., Yao, B., Das, S., Lerner, A., Wang, D., & Li, T. (2023). “It’s a Fair Game”, or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents. arXiv preprint arXiv:2309.11653.

Cite this article in APA as: Zhou, K. Z., Sanfilippo, M., Sinnott, A. Toward ethical use of generative AI in AP courses. (2023, November 9). Information Matters, Vol. 3, Issue 11. https://informationmatters.org/2023/11/toward-ethical-use-of-generative-ai-in-ap-courses/


  • Kyrie Zhixuan Zhou

    Kyrie Zhixuan Zhou is a PhD candidate in the School of Information Sciences at the University of Illinois at Urbana-Champaign. His research interests are broadly in HCI and Usable Security. He aims to understand, design, and govern ICT/AI experience for vulnerable populations.

  • Madelyn Sanfilippo

    Madelyn Rose Sanfilippo is an Assistant Professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign. Her research empirically explores governance of sociotechnical systems and practically supports decision-making in, management of, and participation in a diverse public sphere. Using mixed-methods, including computational social science approaches and institutional analysis, she addresses research questions about: participation and legitimacy; social justice issues; privacy; and differences between policies or regulations and sociotechnical practices. Her most recent book Governing Privacy in Knowledge Commons was published by Cambridge University Press in 2021.

  • Allison Sinnott

    Allison Sinnott is a 12th grade student at East Meadow High School. Her current research is focused on social impact of blockchain technology and ethical artificial intelligence usage.
