Guidance and support for the use of AI in Cambridge Technicals in Digital Media
04 March 2024
John Hibbert, Media Subject Advisor
The rapid and ongoing advances in generative artificial intelligence (AI) tools bring both benefits and challenges to education and assessment. In this blog, we highlight the guidance available for managing AI use in Cambridge Technicals in Digital Media and how to deal with misuse in assessments.
What is AI?
AI tools generate text or images in response to user prompts and questions. Their responses are based on the data sets on which they have been trained. ChatGPT is the best-known example of an AI chatbot, but many other chatbots and tools are available.
Some forms of AI, such as personalised content recommendation systems and ad targeting, are already well established within the media industries. Its use is becoming increasingly widespread across different media sectors, from generating news stories to supporting a range of media production processes. AI tools are increasingly being integrated into media production software such as Photoshop, and AI is becoming a key feature of production and post-production tools across different media forms, including publishing, video production, animation and video games.
Appropriate use of AI
Whether the use of AI by students is appropriate for a given task will depend on the marking criteria and the nature of the task.
Most appropriate uses of AI for Cambridge Technicals in Digital Media relate to preparing students for externally assessed units and supporting general teaching and learning. AI can be used to support homework and flipped learning activities, as well as to create notes and provide definitions of key terminology. For example, students could use AI to research the purpose and format of different pre-production documents for Unit 2, or to get explanations of key concepts such as connotation for Unit 1. It's worth bearing in mind that information provided by AI can include errors, so it should be checked before being shared with your students.
For internally assessed units, AI could be used as a research tool. For example, students could use AI as a research source to explore the conventions of the media product they are going to create in Unit 3, or to generate ideas for potential content. If students do use AI when producing work for assessment, they must acknowledge its use and clearly indicate how it has been used, for example by naming the tool, stating when and why it was used, and keeping copies of the prompts and responses.
Inappropriate use of AI
As with plagiarism, students can use AI to create work which they then try to pass off as their own. Where a student has used AI to complete all or part of their work, they are not demonstrating their own knowledge, understanding and application of skills. This may prevent them from presenting their own authentic evidence.
Examples of AI misuse include using or modifying AI responses without acknowledgement, disguising the use of AI, or using it for substantial sections of work. You can support your students by teaching them about appropriate use of AI in Digital Media, demonstrating how to reference AI correctly where its use is appropriate, and having clear policies for AI use within your department.
Inappropriate uses of AI could include using it to generate written content, for example replicating content provided by AI to describe the work of professional photographers in LO1 of Unit 8. Using AI to create planning and pre-production documents for internally assessed units would be another potential misuse, as would including AI-generated content in media products or submitting media products created by AI. Students should avoid any use of AI tools in production work that prevents them from showing they have independently met the required criteria.
What to do when candidates misuse AI in assessments
Teachers must not accept work which is not the student’s own. Ultimately the Head of Centre has the responsibility for ensuring that students do not submit inauthentic work.
If you suspect AI misuse and the student has not yet signed the declaration of authentication, your centre doesn't need to report the malpractice to OCR: you can resolve the matter before the declaration is signed.
If AI misuse is suspected after the work has been formally submitted and the authentication sheet signed, it should be reported using a JCQ M1 form, as outlined in the JCQ AI guidance available in the Malpractice section of the JCQ website. Please email your completed forms to OCR at malpractice@ocr.org.uk.
Further support
Please refer to the JCQ document AI use in assessments: Protecting the integrity of assessment for further information on managing the use of AI within your assessments.
We are also producing a range of support resources, including recorded webinars, on our AI support page.
Stay connected
If you have any questions about this topic, please comment below, email us at media@ocr.org.uk, or message us on X (formerly Twitter) @OCR_Media_Film. You can also sign up to our mailing list for information about resources and support.
About the author
John Hibbert has been Subject Advisor for Media and Film Studies since 2018. Prior to joining OCR, John taught a range of media and film studies qualifications in secondary schools, spending his last eight years in teaching as a head of department. Predictably, in his spare time he is a keen filmgoer, and he also enjoys reading and miserable indie music.