Guidance and support for the use of AI in A Level Computer Science NEA
19 February 2024
Ceredig Cattanach-Chell, Computing Subject Advisor
Generative artificial intelligence (AI) tools are developing quickly. These tools bring benefits and challenges to education and assessment.
In this blog I highlight the guidance available for managing AI use in A Level Computer Science. I also look at how to deal with suspected misuse in assessments.
What are AI tools?
AI tools use user input (prompts and questions) to generate text or images. AI tools are trained on large data sets, and the response from an AI tool depends on how it has been trained. ChatGPT is the best-known example of an AI chatbot; it has been trained on a vast amount of text from the internet. There are also many other chatbots and tools available.
The primary focus of AI use in the A Level NEA is likely to be tools that generate program code. This includes common chatbots such as ChatGPT, Google Bard and Microsoft Copilot.
AI tools may also be integrated into common desktop applications, such as MS Word, Google Workspace, MS Visual Studio, and so on.
Appropriate use of AI in the NEA
The appropriate use of AI is determined by:
- The specific mark scheme.
- The nature of the task.
Analysis
AI can support the generation of project ideas. AI tools can help with initial project concepts and can broaden the scope of a project. For example, a student could use ChatGPT to provide stimulus with the prompt: “Write me 10 game project ideas that could use OOP paradigms.”
Ideas from ChatGPT, or the candidate’s own ideas, could then be developed further with a follow-up prompt: “State 10 ways that I could use power-ups in this game.”
AI tools could also be used to identify similar ideas or types of projects, which may speed up the research process. However, it is important that this stage is not driven solely by AI tools.
Design
Appropriate use is more of a challenge in this section. AI tools could be used to suggest testing strategies. The candidate must then develop these into specifics for their project.
Otherwise, AI use in this section should be discouraged.
Implementation
AI tools can be used to support debugging. They could also be used to suggest ideas and methods for troubleshooting. For example, a candidate could ask an AI tool to suggest how a method or object could be written, but the candidate must show clearly how this suggestion has been adapted to suit their project.
There is no harm in using AI tools to support the coding process, provided that all use is well documented. The documentation must clearly show where AI tools were used, and how the candidate then developed their independent work from that starting point.
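As an illustration, here is a minimal sketch of what well-documented AI use might look like inside a candidate’s code. It assumes a Python project; the Player class, method names and appendix reference are hypothetical, not taken from any real submission.

# Hypothetical example: documenting AI-assisted code in a candidate's project.

class Player:
    # Simple player for a grid-based game.
    def __init__(self, x, y, grid_width, grid_height):
        self.x = x
        self.y = y
        self.grid_width = grid_width
        self.grid_height = grid_height

    # AI use: the basic structure of this method was suggested by ChatGPT
    # (see Appendix A, prompt 3 - hypothetical reference). I adapted it to use
    # my grid size attributes and added the boundary check, which the
    # suggested code did not include.
    def move(self, dx, dy):
        new_x = self.x + dx
        new_y = self.y + dy
        if 0 <= new_x < self.grid_width and 0 <= new_y < self.grid_height:
            self.x, self.y = new_x, new_y

player = Player(0, 0, 10, 10)
player.move(1, 0)
print(player.x, player.y)   # 1 0

The code comment, together with the appendix, makes it clear where the AI suggestion ends and the candidate’s independent work begins.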
The challenge in using AI tools is that AI-generated code is not always functional or error free. A lack of understanding of how the code works can then limit a candidate’s ability to test and evaluate it at a later stage.
Testing and evaluation
AI use in this area should be very limited. The testing uses test data defined in earlier sections, and the evaluation is clearly linked back to specific stakeholder requirements. AI tools are unlikely to be able to generate detailed and specific feedback for this. This stage is also an opportunity to create evidence of interaction between the candidate and stakeholders, an opportunity that is undermined by the use of AI tools.
Inappropriate use of AI in the NEA
AI allows students to claim credit for responses that are not independently created.
Where a student has used AI to complete work, they are not demonstrating their own knowledge, understanding and application of skills. This prevents the student from presenting their own authentic evidence and may limit access to mark bands.
Examples of AI misuse include:
- using or modifying AI responses without acknowledgement
- disguising the use of AI
- using AI for substantial sections of work.
It is important that your department has an AI policy, based on your school or centre policy. Further guidance on this is available on the JCQ website.
Teach students about appropriate use of AI in computer science before they start their NEA. Demonstrate how to reference AI correctly, including how to evidence the use in an appendix.
Analysis
AI can provide great starting points. However, candidates should steer away from using it to generate final ideas. Analysis requires interaction with stakeholders, and candidates restrict their access to the upper mark bands where this is limited. Overuse of AI may restrict engagement with stakeholders, as candidates become too dependent on the AI responses.
Much of the analysis is open to AI abuse. Take care early on to ensure that candidates do not rely on AI too heavily. Plenty of short review points and teacher monitoring at this stage of the NEA are key.
Design
Because of the iterative nature of the NEA, the design does not have to be fully completed in one go. For example, a high-level sketch or idea is often refined after a “first attempt” has been made at building a user interface. After testing, these earlier designs may be refined (and documented), the project goes through another iteration, and the results of the tweaked designs are re-evaluated.
The lure of AI may lead candidates to create over-specific designs which reflect a final piece of code rather than an early, pseudocode-style design.
Examiners will not reward reverse-engineered designs. We expect there to be errors and tweaks that need to be made; this is why the projects are iterative in nature.
Therefore, a candidate who overuses AI to generate ‘code perfect’ designs is likely to cause themselves issues.
Full and perfect ‘code-like’ designs for algorithms should also raise suspicion at teacher level.
Development
Much of the development encourages independent work. However, AI tools may be used to auto-comment code and solve challenges along the way.
The key to ‘good’ use of AI is to encourage research into techniques, rather than solutions. Researching techniques allows a candidate to adapt and modify their findings for their project.
For example, researching “shortest path algorithms” would allow a candidate to explore which pathfinding algorithm they want to use and why. Prompting an AI tool with “Write Dijkstra’s shortest path for my code” limits their ability to show independent development.
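To illustrate the difference, here is a minimal sketch of the kind of independent implementation a candidate might write after researching the technique, rather than pasting in generated code. It assumes a Python project; the graph of game locations and all names are hypothetical.

import heapq

# Hypothetical example: a candidate's own shortest-path routine, written after
# researching Dijkstra's algorithm and adapted to a small map of game locations.
def shortest_path(graph, start, goal):
    # Lowest known cost to reach each location from the start.
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]          # priority queue of (cost so far, location)

    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > distances[node]:
            continue              # skip stale queue entries
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < distances[neighbour]:
                distances[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return None                   # goal is unreachable

game_map = {
    "spawn": {"forest": 2, "cave": 5},
    "forest": {"cave": 1, "castle": 7},
    "cave": {"castle": 3},
    "castle": {},
}
print(shortest_path(game_map, "spawn", "castle"))   # 6

The point is not the algorithm itself, but that the candidate can explain, test and adapt every line of it.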
Spotting misuse of AI comes down to knowing your candidates. Being able to spot:
- sudden changes in work production
- changes in coding styles
- sudden redevelopment without justification
will help you challenge candidates and reassure you about authenticity.
Testing
Using AI to test a program is of little help, as the tests and data will have been defined earlier in the project.
One area where AI could be misused is in the remedial action taken to resolve issues. We do not expect every project to work perfectly; there may be bugs in the system at final completion.
However, simply copying errors or issues into an AI tool and getting it to solve them without referencing is cheating. We would therefore encourage teachers to discuss the solutions candidates produce, focusing on how they reached the solution and how well they understand it.
Evaluation
AI tools are very good at writing evaluations when given enough information. A key pointer to an AI-generated evaluation is its generic nature. There will also likely be a lack of evidence of interaction between the candidate and stakeholders.
Using AI to generate the evaluation is poor practice and will make it very difficult to access the upper mark bands.
Dealing with misuse of AI in the NEA
Teachers must not accept work which is not the student’s own. Ultimately the Head of Centre has the responsibility for ensuring that students’ work is authentic.
If you suspect AI misuse before a candidate has signed the declaration of authenticity, you can resolve the matter internally. You do not need to report this to OCR.
If AI misuse is suspected after a candidate has signed the declaration of authenticity, you must report suspected malpractice to OCR.
Guidance on reporting malpractice is outlined in the JCQ AI guidance and in the Malpractice section of the JCQ website.
Further support
Please refer to the JCQ AI use in assessments: Protecting the integrity of assessment document for further information on managing the use of AI within your assessments.
We are also producing a range of support resources, including recorded webinars, on our AI support page.
Stay connected
Share your thoughts in the comments below. If you have any questions, you can email us at ComputerScience@ocr.org.uk, call us on 01223 553998 or post on X (formerly Twitter) @OCR_ICT. You can also sign up to subject updates to keep up to date with the latest news, updates and resources.
About the author
Before joining OCR in 2015, Ceredig had eight years teaching experience across a wide range of schools, including primary, secondary, academies and SEN sectors. At OCR he supported the development of the new GCSE (9-1) Computer Science and Entry Level R354, and led on the delivery of teacher delivery packs, a key element of the new GCSE’s success with teachers. Ceredig has a degree in Computer Science from Liverpool University and post-grads from Liverpool Hope and Cambridge Universities. Outside work, Ceredig is a keen modeller/painter, gamer and all-around geek.