Some recent work with programme designers in other UK institutions suggests to me that quality assurance and enhancement measures continue to be appended to existing policies and practices in UK HEIs, rather than prompting a revitalising redesign of the entire design and approval process.
This is a shame because it has produced a great deal of work for faculty in designing and administering programmes and modules, not least when it comes to assessment. Whatever you feel about intended learning outcomes (ILOs) and their constraints or structural purpose, there is nearly universal agreement that the purpose of assessment is not to assess students’ ‘knowledge of the content’ on a module. Rather, the intention of assessment is to demonstrate higher learning skills, most commonly codified in the intended learning outcomes. I have written elsewhere about the paucity of effectively written ILOs and the tendency to focus them almost entirely on the cognitive domain (intellectual skills), omitting the other skill domains, notably the affective (professional skills) and the psychomotor (transferable skills). Here I want to identify the need for close proximity between ILOs and assessment criteria.
It seems to me that well-designed intended learning outcomes lead to cogent assessment design. They also make it simpler to use a transparent marking rubric, shared by both markers and students.
To illustrate this, I wanted to share two alternative approaches to aligning assessment to the outcomes of a specific module. In order to preserve the confidentiality of the module in question, some elements have been omitted, but hopefully the point will still be clearly made.
A Complex Approach to Assessment Alignment
- Intended Learning Outcomes are written (normally at the end of the ‘design’ process)
- ILOs are mapped to different categorisations of domains: Knowledge & Understanding, Intellectual Skills, Professional Skills and Attitudes, and Transferable Skills.
- ILOs are mapped against assessments, sometimes even mapped to subject topics or weeks.
- Students get first sight of the assessment.
- Assessment Criteria are written for students using different categories of judgement: Organisation, Implementation, Analysis, Application, Structure, Referencing, etc.
- Assessment Marking Schemes are then written for assessors, often with guidance as to what might be expected at specific threshold stages in the marking scheme.
- General Grading Criteria are then developed to map the scheme’s outcomes back to the ILOs.
A Streamlined Version of Aligned Assessment
I realise that this proposed structure is not suitable for all contexts, educational levels and disciplines. Nonetheless, I would advocate it as the optimal approach.
- ILOs are written using a clear delineation of domains: Knowledge, Cognitive (Intellectual), Affective (Values) and Psychomotor (Skills). These use appropriate verb structures tied directly to the appropriate levels. This process is explained in this earlier post.
- A comprehensive marking rubric is then shared with both students and assessors. It identifies all of the ILOs that are being assessed. In principle, in UK Higher Education we should only be assessing the ILOs, NOT the content. The rubric differentiates the types of responses expected to achieve each grading level.
- There is an option to automatically sum grades given against specific outcomes or to take a more holistic view.
- It is possible to weight specific ILOs as being worth more marks than others.
- This approach works for portfolio assessment, but also for a model of assessment where there are perhaps two or three separate pieces of assessment, assuming each piece is linked to two or three ILOs.
- Feedback is given against each ILO on the same rubric (I use Excel workbooks).
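The summing and weighting options above can be sketched as a small calculation. This is a hypothetical illustration, not taken from any real module: the ILO names, grades and weights are invented, and a real rubric might instead take the holistic view mentioned above.

```python
def weighted_module_grade(grades, weights):
    """Combine per-ILO rubric grades (0-100) into one module mark.

    Each ILO's grade is multiplied by its weight, so that specific
    ILOs can be worth more marks than others, then the total is
    divided by the sum of the weights to return to a 0-100 scale.
    """
    if set(grades) != set(weights):
        raise ValueError("every graded ILO needs a weight, and vice versa")
    total_weight = sum(weights.values())
    return sum(grades[ilo] * weights[ilo] for ilo in grades) / total_weight


# Hypothetical example: three ILOs, with the analysis outcome
# weighted double relative to the other two.
grades = {"ILO1_knowledge": 65, "ILO2_analysis": 72, "ILO3_communication": 58}
weights = {"ILO1_knowledge": 1, "ILO2_analysis": 2, "ILO3_communication": 1}

mark = weighted_module_grade(grades, weights)
# (65*1 + 72*2 + 58*1) / 4 = 66.75
```

In a spreadsheet this is simply a SUMPRODUCT of the grade and weight columns divided by the weight total; the point is that the arithmetic is transparent to students because the same rubric carries both the grades and the weights.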
I would suggest that it makes sense to use this streamlined process even if it means rewriting your existing ILOs. I’d be happy to engage in debate with anyone about how best to use the streamlined process in their context.