Ethical Considerations for AI

Introduction

Artificial Intelligence (AI) tools are constantly emerging, bringing increasingly diverse opportunities to amplify and enhance digital skills, critical thinking, and creativity, as well as the potential to subvert current practice. The aim of this guidance paper is to provide some principles which can be applied in a local context, informing how and when AI can be ethically and effectively applied to enhance and extend our scholarly activities within the broader context of:[1]

  • Governance, i.e. privacy, security, regulatory, and accountability concerns.
  • Pedagogy, i.e. learning and teaching concerns.
  • Operations, i.e. infrastructure and training concerns.

[1] Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3.

Context

Whether the use of AI is appropriate, allowable, or ethical in teaching and learning activities is a localised, discipline-informed decision. The factors influencing this decision will vary and depend entirely on context. Banning the use of AI is therefore impractical, if it is not already, as services begin to integrate AI features as commonplace enhancements.

The suggestion to revert to more ‘traditional’ approaches will not serve the needs of our community of users, as this fails to prepare our students and staff to use such skills outside the HE environment. A more balanced approach is required: one which considers the appropriateness of, and the integrity required in, the use of AI, and where the core consideration when introducing AI into teaching and learning activities is:

What learning outcome do you want your students to demonstrate, and how can AI be ethically included to support them?

Examples of the importance of context when introducing AI into teaching and learning

An assessment measuring film-making expertise (editing, camera work, sound, lighting, etc.) may not be assessing a student’s scriptwriting capabilities. As such, any (declared) use of AI to write the script would offer a significant time saving and therefore be an effective use of AI.

This would not be the case if the student were studying creative writing, where use of the same AI-generated script in an assessment would be ethically wrong. By contrast, using an AI animation tool (if such a tool exists) to turn the human-generated original creative writing assignment into a short animated film may, in these circumstances, be acceptable.

AI Ethical Framework

The University is committed to supporting the ethical use of generative AI. To that end, we propose four principles to inform its adoption by staff and students: Authenticity, Accountability, Access, and Awareness.

Authenticity

There are two elements within the theme of authenticity: personalisation and transparency. The University strategy already sets the goal of “developing assessments that are inclusive, authentic, transparent, ambitious, and co-created with stakeholders.” Developing authentic assessments that, where practical, require students to include self-reflection and critical analysis, to cross-reference prior learning, or to make connections to their lived experience is key to tackling any potential misuse of AI in assessments. When setting assessments, academics should adapt them to reflect how AI might support or subvert the learning outcomes, in line with authentic assessment practice. It should be noted that concerns around plagiarism, and the outsourcing of integrity to commercial platforms, have been problems for much longer than ChatGPT has been around.

Setting clear expectations on when and how the available AI tools can be used by students[1], without subverting the learning outcomes being assessed, will encourage appropriate use. Students should also describe how and why they have used AI in their self-reflection responses. This extends to acknowledgement of its use: AI, as a non-human generator of content, should not be referenced as an ‘author’ on any work completed with its assistance.[2] Guidance for staff and students will be provided via Libraries and Learning Skills (website); however, this will inevitably become an area requiring further monitoring as AI becomes increasingly blended into and accepted within everyday practice.[3] This activity may also require regular review and updating of the University’s assessment framework to address the emergent challenges raised by AI.

The opportunities for co-creation and creativity with the use of AI in teaching and learning, particularly in assessments, have the potential to be a timely enabler for our students (for example, rapidly generating personalised tests or providing Socratic tutoring support). Effective use can enhance critical thinking and show how imagination and curiosity have a place in students becoming the producers of their own knowledge, through partnership and collaboration.
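To make this concrete, here is a minimal, hypothetical sketch of generating a personalised practice quiz with a large language model via the OpenAI Python client. The model name, prompt wording, and helper function are illustrative assumptions rather than a recommended or University-endorsed configuration, and any output would still need the critical evaluation described under Accountability below.

```python
# A hedged sketch: generating personalised practice questions with an LLM.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment
# variable; the model name, prompts, and helper are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def personalised_quiz(topic: str, weak_areas: list[str], n_questions: int = 5) -> str:
    """Ask the model for short-answer questions targeting a student's weak areas."""
    prompt = (
        f"Write {n_questions} short-answer practice questions on {topic}, "
        f"focusing on areas the student finds difficult: {', '.join(weak_areas)}. "
        "Do not include the answers."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever is available
        messages=[
            {"role": "system", "content": "You are a Socratic tutor who asks guiding questions."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(personalised_quiz("film editing", ["continuity", "pacing"]))
```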

Accountability

Generative AI uses sophisticated predictive algorithms to generate the most likely outputs based on vast amounts of internet-sourced data. The results can be impressive but cannot be relied upon to be 100% accurate (so-called ‘AI hallucinations’), or to be free of bias from the data the AI service was trained on (which could be problematic where historical data is inherently biased). All users of AI therefore must accept full responsibility for the outputs of their use of AI. It may seem obvious, but for the removal of any doubt: if the AI outputs are later proven to be false or poorly constructed, the responsibility for those errors lies solely with the individual who chose to use them. All users of AI must independently and critically evaluate AI output for accuracy before using it in their own work, including checking factual statements and citations.
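As one concrete illustration of such independent checking, the hedged sketch below verifies that a DOI cited in an AI-generated reference actually resolves in the Crossref registry. The helper function is our own illustration: a successful lookup confirms only that the work exists, not that it supports the claim attributed to it.

```python
# A hedged sketch of one verification step for AI-supplied citations:
# checking that a cited DOI exists in the Crossref registry. Existence
# alone does not prove the source says what the AI claims it says.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves to a record in Crossref."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

# DOI of the Chan (2023) framework cited in this paper.
print(doi_exists("10.1186/s41239-023-00408-3"))   # expected: True
print(doi_exists("10.9999/not-a-real-citation"))  # expected: False
```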

Furthermore, accountability also relates to the question of data provenance and copyright. Data provenance refers to the documented origin, ownership, creation, and propagation of the data repurposed by generative AI in its responses; in other words, the documented history of a digital object. By examining the provenance of an object, we can gauge its trustworthiness and our confidence in using it. Provenance is crucial for validating, debugging, auditing, and evaluating the quality and reliability of data. For copyright, this extends to reuse, especially if the source data the AI has drawn on to build its response is itself copyrighted. Because of how language models are developed, data provenance is often unverifiable, meaning we can place little trust in the outputs and should use them with caution.
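To make the idea concrete, the sketch below models the minimum fields one might record before trusting or reusing a digital object. The field names simply follow the description above and are illustrative; established vocabularies such as W3C PROV cover this ground formally.

```python
# A hedged sketch of a provenance record for a digital object, capturing
# the origin, ownership, creation, and propagation history described above.
# Field names are illustrative; formal standards such as W3C PROV exist
# for production use.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    origin: str    # where the data originally came from
    owner: str     # who holds rights over it (copyright matters here)
    created: str   # when and how it was created
    licence: str   # terms under which it may be reused
    propagation: list[str] = field(default_factory=list)  # each hand-off or transformation

    def trustworthy(self) -> bool:
        """A crude check: only trust data whose history is fully documented."""
        return all([self.origin, self.owner, self.created, self.licence])

# For most generative-AI output these fields are unknown, which is
# precisely why the provenance of such output is questionable.
ai_output = ProvenanceRecord(origin="", owner="", created="", licence="")
print(ai_output.trustworthy())  # False
```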

Appropriate adaptations to teaching and assessments can improve efficiency and enhance practice.[4] To that end, AI can be used to shape our thinking and inform what we create (as productivity support), but it should never be relied upon to deliver our final outputs (especially if those outputs are what is being assessed). ‘Process’ when using AI is important for learning; in essence, what we learn depends on how we approach the learning activity. If AI is being used to generate a product (e.g., a report, image, or essay), and the product is all that we assess, then AI is indeed a problem. If we assess the process of learning, then AI is only another input into that process, alongside many others, and is less of a problem. We therefore have an opportunity to explore a wide range of assessment tasks in response to the introduction of generative AI, which may address some of our longer-standing problems with assessment.[5] The embedded use of AI might support an approach to assessment that enables more authentic, creative, and innovative processes than we may be familiar with.

How and when AI is used in teaching will vary across disciplines and contexts, so its use should be an academically informed decision. This decision should be a collective endeavour, made through conversations with students and staff, so that its application in practice is transparent and supportive. Guidance provided to both staff and students needs to share good practice on such adaptations and to outline what is and is not allowed in the completion of an assessment.

An example of the potential challenges AI may introduce within commonly used tools

A grammar-checking service can help catch mistakes (typos, spelling errors, etc.) in a student’s original text which, if corrected, do not substantially change the student’s original work. However, paraphrasing tools may breach existing guidance on proof-reading, and tools that generate original responses clearly replace the student’s own output. In such cases, their use may substantially replace or amend a student’s original submission and therefore fail to accurately evidence the student’s mastery of the desired learning.

Access

Access to AI technologies, and the accessibility of these services, is a core ethical consideration for their inclusion within teaching and learning activities. Not all students will be able to afford or use emergent AI services, which could give some students an unfair advantage. As such, factors of access must be considered before making AI tools available within the institution.

Accessibility is therefore a key element, especially with AI being increasingly integrated into Office 365. It is proposed that the University should only recommend tools that all students can access, or that are supported by the University, to guarantee equitable access. This is especially important given that some of the more popular language models will likely offer tiered access. For example, ChatGPT is already available in a free version that is scaled back and relies on a more limited model, and a paid-for tier that includes access to a much more advanced version of the model. Students who can afford access to better language models will have advantages over those who can only use the free version.

Awareness

The emergence of AI is not a new phenomenon, and its use is already commonplace; however, awareness of that use may not be well understood at this point and may become increasingly invisible over time. As such, it is highly likely that students and staff could be using (or misusing) AI without fully appreciating or recognising this fact. The University of Lincoln should ensure all its members are AI literate, and should develop support and communications that promote technical understanding of AI (its opportunities, limitations, and potential for bias), ethical awareness, academic integrity, and critical thinking and evaluation, with continuous learning at their heart.

Academic integrity and AI detection

Tools such as Turnitin have developed AI-detection services to support academic integrity. However, these are not 100% accurate and should not be relied upon alone. Any score provided is purely indicative and is not, in isolation, proof of an offence or non-offence. Academic colleagues should continue to use their own judgement and experience when assessing the originality of a student’s submission, seeking other sources of evidence, such as the accuracy of citations, before raising the matter with the student or reporting a potential academic offence.

Clear examples of what is appropriate in each context will need to be set through AI literacy. These may vary between disciplines and contexts. As such, additional training and support for the use of AI, with clear instructions within assessment criteria, will be needed. This is likely to develop over time and become embedded within teaching as a central learning activity.

In addition, when using AI services, it is important that users carefully consider how the service manages personal data. Are you inadvertently providing personal data (yours, or someone else’s) to the service? GDPR should already make us aware of these issues, and existing guidance may cover us, but we should be extra vigilant about data protection concerns when using AI services. Students and staff need to be aware of the difference between the data used to train the language model and the data submitted to the model as part of a prompt. Serious privacy concerns arise when submitting personally identifiable information as part of using services like ChatGPT.[6] Anonymisation of any data submitted, and removal of business-identifiable information, is a minimum; if you are unsure, support and guidance can be obtained from the ICO team by emailing compliance@lincoln.ac.uk.
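As a minimal illustration of that anonymisation step, the sketch below strips some common personally identifiable patterns (email addresses and long digit runs such as phone or student ID numbers) from text before it is submitted to an external AI service. The patterns and helper are illustrative only; pattern-based redaction is a baseline, not a substitute for a proper data protection review.

```python
# A hedged sketch of redacting common personally identifiable patterns
# before text is submitted to an external AI service. The patterns below
# (emails, long digit runs such as phone or student ID numbers) are
# illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[PHONE-OR-ID]"),
]

def redact(text: str) -> str:
    """Replace matching spans with placeholder tokens before prompting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise this email from jane.doe@lincoln.ac.uk (student 2019 123 456)."
print(redact(prompt))
# Summarise this email from [EMAIL] (student [PHONE-OR-ID]).
```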


[1] For example, as part of a critical analysis of AI output or productivity support in summarising complex documents.

[2] Committee on Publication Ethics. (2023, February). Artificial intelligence and authorship. https://publicationethics.org/news/artificial-intelligence-and-authorship

[3] Wilson, N. (2023, June 16). Keep calm, and carry on … Using Chat GPT. HEPI. https://www.hepi.ac.uk/2023/06/16/keep-calm-and-carry-on-using-chat-gpt/

[4] Bowden, M. (2023, April 4). What is the future for student assessment in the light of AI and ChatGPT? HEPI. https://www.hepi.ac.uk/2023/04/04/what-is-the-future-for-student-assessment-in-the-light-of-ai-and-chatgpt/

[5] Bowden, M. (2023, April 4). What is the future for student assessment in the light of AI and ChatGPT? HEPI. https://www.hepi.ac.uk/2023/04/04/what-is-the-future-for-student-assessment-in-the-light-of-ai-and-chatgpt/

[6] OpenAI. Terms of use. https://openai.com/policies/terms-of-use

Next Steps

1. Explore mechanisms for guidance on AI adoption

Any colleague wishing to seek general advice and preliminary support for introducing AI services into their teaching should still contact Digital Education or their representative Digital Lead for initial discussions on their potential use.

However, adoption of AI and its appropriateness within a discipline or area, as discussed, can be nuanced and dependent on how or why it is implemented. To ensure appropriate safeguards are in place within disciplinary contexts, the University may wish to establish additional approval mechanisms before adoption.

At this point, existing approval mechanisms for introducing any new technology will provide oversight. In addition, a Special Interest Group (SIG) will be created to provide a community of practice to help shape and inform the adoption of AI technologies across the University. Consisting of academic colleagues, student representatives, and professional services staff, this group will support colleagues across the University by answering questions and contributing to ongoing discussions on the effective and ethical implementation of AI.

There is clearly enormous potential for the University in adopting AI in teaching, support, and research. We are already seeing a significant increase in publications in this domain, which demonstrates the low-hanging fruit available to a wide range of researchers. The SIG may also prove useful in connecting potential research projects worth pursuing, possibly as a research programme into the pedagogical implications of AI in higher education.

2. Develop AI literacy programmes

Both students and staff will need support in the implementation of AI-enhanced learning and teaching activities. It will not be enough to simply give staff permission to update assessment tasks, nor can we expect students to know how to navigate a different assessment landscape. Most stakeholders are used to a learning, teaching and assessment context that is relatively well-defined, with high levels of certainty and structure. AI-enhanced activities that align with institutional frameworks (e.g., Student as Producer, Skills for LIFE) are likely to require faculty and student development processes. This will require thinking differently and creatively, to challenge and change the traditional subject outputs and learning behaviours.

The University should also explore the development of a range of use cases for which generative AI could benefit higher education, including content creation, curriculum and assessment design, research opportunities (both studying the impact of generative AI on LTA, and using generative AI for research purposes in other domains), administrative support, student support (e.g. those who are not first-language English speakers, those with communication challenges), and so on.

Partnering with the Students’ Union, LALT, Libraries and Learning Skills, and Digital Leads, Digital Services will explore existing external support opportunities (for example, the AI ethics course from Jisc (website)), or deliver a bespoke range of guidance and support materials to increase AI awareness throughout AY2324.