Governance and Policies

How and When to use AI?

The application of AI will differ between disciplines, departments and contexts. When deciding to use AI, it may be helpful to seek advice and support as a collective endeavour first. Through conversation and consultation with students, colleagues or departments, ethical, practical, and technical considerations can be explored, while ensuring the application is transparent and supportive. Wider working groups are currently discussing implementation at different levels of the University and the support required. It is also recommended to think about the value that the use of AI will add and the benefits or enhancements it may offer. When AI is used for education purposes, it is important to identify whether it will align constructively with your intended outcomes. The ethical adoption of AI must also be considered; for more context, we recommend visiting the AAAA framework (web).

AI within education

Where the use of AI is chosen, it is important that students and staff are informed of and understand the different approaches. Guidance should be created to clarify expectations around:

• What tools they can use.
• Specific guidance for using the selected tool(s).
• Any limitations/challenges that may be placed on using the tools.
• How to acknowledge use of the tool as specified by the University guidance (web).
• Whether they need to reflect upon or describe their use of the tool within their submission.

Please be aware that there are multiple approaches to embedding AI, and the guidance given to students needs to reflect this.

University Specific Tools

For anyone familiar with ChatGPT, Bing Chat for Enterprise offers access to the latest AI models through the Bing search page (in any browser) or the Edge browser sidebar. It is a text-based AI model that can respond to prompts. Bing Chat also now includes DALL-E 3, a text-to-image AI model that can produce realistic images and drawings from prompts.

What is Copilot?

As part of Microsoft 365, the University currently has access to Copilot (formerly Bing Chat Enterprise), one of our supported AI tools. It can be accessed by all staff members (date to be confirmed) via any browser or the Edge browser sidebar. You will need to log in with your University credentials to access this tool.


Key features:

• Built on the same AI models as ChatGPT.
• Commercial data protection (user and commercial data is encrypted).
• Chat data is not saved.
• Chat data is not used to further train the AI.

All of the usual capabilities are there, plus an enhanced way to use the tool for our own organisational benefit, for example rewriting documents or summarising meeting minutes.

Further guidance:

  • To get the best experience, use the Edge browser and the Copilot sidebar (big blue icon, top right), and open documents in the browser.
  • You will need to log in to the browser with your University credentials.
  • Access is currently limited to full-time staff; licence-based roles such as associate lecturer are not yet covered.
  • It cannot be deployed to students (enterprise licences only).
  • Text entries are restricted to 4,000 characters or fewer, depending on the conversation style (creativity level) you select.
  • It will not keep a record of your query once you click to start a new topic or log out of the browser. Your query is therefore protected, but if you do not take a copy, it will be lost.
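Given the 4,000-character limit noted above, a longer document needs to be split before it can be pasted into the chat. A minimal sketch of one way to do this (the limit value comes from the guidance above; splitting on paragraph breaks is an illustrative choice, not a Copilot requirement):

```python
# Split long text into chunks that fit within a prompt character limit.
# The 4,000-character cap comes from the guidance above; preferring
# paragraph boundaries is an illustrative choice, not a requirement.

MAX_CHARS = 4000

def chunk_text(text: str, limit: int = MAX_CHARS) -> list[str]:
    """Split text into chunks of at most `limit` characters,
    keeping whole paragraphs together where possible."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # A single paragraph longer than the limit is hard-split.
        while len(para) > limit:
            chunks.append(para[:limit])
            para = para[limit:]
        if len(current) + len(para) + 2 <= limit:
            current = f"{current}\n\n{para}" if current else para
        else:
            chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk can then be pasted into the chat as a separate prompt.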

How does Copilot (Bing Chat) work?


Staff must never place organisational data into a large language model (e.g. ChatGPT) that has not been approved for use by the University: this risks your data becoming freely available and being used to train the models. Copilot prevents this and is approved for use.

Data Protection & Fair Use

General Data Protection Regulation (GDPR)

When using AI services, it is important to consider carefully how the service manages personal data. Are you inadvertently providing personal data (yours, or someone else’s) to the service? Both students and staff need to be aware of the difference between the data used to train the language model and the data submitted to the model as part of a prompt. Serious privacy concerns can arise when personally identifiable information is submitted as part of using services like ChatGPT. Anonymisation of any data submitted and removal of business-identifiable information is a minimum requirement; if you are unsure, support and guidance can be obtained by contacting the ICO team.
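As an illustration of the minimum anonymisation step described above, obvious identifiers can be stripped from text before it is pasted into any AI service. A rough sketch only: the patterns below are examples (the 8-digit student ID format is an assumption), they will not catch every identifier, and they are no substitute for advice from the ICO team.

```python
import re

# Illustrative redaction of common identifiers before text is shared
# with an AI service. These patterns are examples only -- real
# anonymisation needs review against the data actually being shared.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "student_id": re.compile(r"\b\d{8}\b"),  # assumed 8-digit ID format
}

def redact(text: str) -> str:
    """Replace matched identifiers with [REDACTED-<kind>] placeholders."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{kind.upper()}]", text)
    return text
```

The redacted text can then be reviewed by eye before being used in a prompt; a pattern-based pass like this should always be the first step, not the last.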

Equality and Fair Use of AI

When accessing AI technologies for education purposes, the accessibility level of those services needs to be considered, especially when included as part of learning and teaching activities. The University wants to ensure equitable and fair use when incorporating AI tools to ensure all students can succeed. To support this goal, the University is recommending that only AI tools to which all students have the same level of access are chosen. This is due to some tools providing ‘paid-for’ advanced versions of the system which may unfairly advantage students from certain economic backgrounds.

Academic Integrity

With the rise of AI comes an increased possibility of collusion or misconduct in relation to academic offences. However, AI as a tool can be used effectively within assessments to support or enhance the offering and should not be discounted. There are three key areas to consider when looking at academic integrity:

  1. Originality statements
  2. AI acknowledgement
  3. AI Checking using Turnitin

Originality statements

As part of the University’s guidance around academic offences, an originality statement is visible to students alongside guidance on assessment submissions. The statement was developed in collaboration with the College Directors of Academic Quality and Standards and in alignment with the regulations set out by the University. It highlights that the submission is the student’s:

“own work, without input from either commercial or non-commercial writers or editors or advanced technologies such as artificial intelligence services.” (reference originality statement)

If you are planning to encourage student use of AI as a supportive element within an assessment, it is important to let your students know the acceptable limits and the acknowledgements required. See the AI and Assessment (web) section for more detail on altering assessments to include AI.

If including AI tools, please ensure you amend the originality statement to the following wording. The statement is usually found within the assessment folder in Blackboard Ultra under ‘Academic Offences & Originality Report’; in Canvas, it can be found within the assessment instructions.

“…own work, without input from either commercial or non-commercial writers or editors or advanced technologies such as artificial intelligence services unless explicitly allowed and referenced.”

Click here for further advice on adding AI within your assessments (web)

AI Acknowledgement

If AI tools have been built into the assessment, alongside clear instructions and guidance on expectations, the students must acknowledge any AI usage appropriately. The University library has created a guide on examples of how to acknowledge the use of AI that can be shared with students.

University of Lincoln Library | Study skills: Artificial Intelligence | Web

If you are allowed to use AI to help you develop your ideas, research or plan the writing process, it must be acknowledged appropriately, even if you do not include any content generated from AI in your assessment.

Tutors should provide guidance on how to do this. If this guidance has not been given, students are advised to acknowledge the use of generative AI tools by:

  • Including a statement of acknowledgement.
  • Providing a description of how the tool was used.
  • Saving a copy of the transcript of your questions and the AI-generated responses as an appendix. You can do this either by right-clicking on the page and selecting ‘Save as’ to save the webpage, or by taking a screenshot.

AI Checker

Turnitin has a built-in AI checker that can help identify markers related to documents created with AI. It is built into Turnitin Feedback Studio and generates a report indicating whether it is possible that AI has been used. Although the report gives an insight into this possibility, it should not be treated as proof of use on its own; it is only an indicator that can contribute evidence as part of a wider investigation. Staff also need to be aware that any returned scores may detect AI sources that were authorised for use as part of the assessment.

To support the AI checker, we also recommend a ‘Promote, Prevent, Detect’ approach, which looks at wider evidence to form a clearer picture of potential misconduct. Information on this approach can be found here:

Digital Education | Artificial Intelligence in Assessment | Web

If you still believe there has been an element of misconduct involving AI, you will need to follow the usual academic offences investigation procedures. Advice and guidance on using Turnitin for academic integrity investigations can be found here:

Digital Education | Resources hub – Investigating academic integrity in Turnitin | Web