Policy Directory

GOOD PRACTICE ASSESSMENT GUIDELINES

Date approved: 15 September 2004
Date Policy will take effect: 15 September 2004
Date of Next Review: December 2010
Approved by: Academic Senate
Custodian title & e-mail address: Director, Academic Quality and Standards Unit, quality@uow.edu.au
Author:
Responsible Faculty/Division & Unit: Academic Quality and Standards Unit, Deputy Vice-Chancellor (Education) Portfolio
Supporting documents, procedures & forms of this policy: Academic Integrity and Plagiarism Policy
References & Legislation: Code of Practice – Honours; Code of Practice – Student Professional Experience; General Course Rules; Student Academic Consideration Policy; Academic Grievance Policy (Coursework and Honours Students); Privacy Policy
Audience: Public – accessible to anyone
Expiry Date of Policy:

1 Introduction / Background

  • 1. "Good Practice - Assessment" provides guidance to UOW academic teaching staff in implementing the requirements of the Code of Practice - Teaching & Assessment. It includes:
        • a. Core principles of effective assessment;
        • b. Specific guidance regarding a number of commonly occurring assessment issues;
        • c. Examples of good practice from across the disciplines; and
        • d. Faculty/unit processes to support good assessment practice.
  • 2. For an assessment practice or tool to qualify for inclusion in this Guide, it must be judged by a sub-group of the University Education Committee as meeting all the requirements of the Code of Practice - Teaching and Assessment and all the standards of the Good Practice Assessment Guidelines.
  • 3. The document fits within the framework of UOW teaching and assessment policies as set out in the flowchart in section 4 below.

2 Scope / Purpose

  • Not available.

3 Definitions

  • Not available.

4 Flowchart

[Image: flowchart of the UOW teaching and assessment policy framework – not reproduced]

5 Frequently Asked Questions

  • Which methods of assessment do I use? See Choosing assessment methods.
  • What do I need to know as a Subject Co-ordinator? See Indicators of Quality Assessment Checklist.
  • What do I need to know as a Program Co-ordinator? See Indicators of Quality Assessment Checklist.
  • How can I minimise opportunities for students to plagiarise or cheat in some way? See Section on Minimising Plagiarism.
  • How do I assess large numbers of students? See Section on Assessing Large Classes.
  • I want my students to work in teams on a project – how can I assess each student’s contribution? See Section on Group Work.
  • What assessment issues do I need to think about when I’m setting an in-session test? See Section on In-Session Tests.
  • How can I be clearer about what level of performance I expect from my students? See Section on Assessment Criteria.
  • How can I improve consistency in marking when different people are marking the one assignment? See Section on Marking.
  • Can I award marks for class participation? How? See Section on Class Participation.
  • What about online assignments – how do they work? See Section on Online Assessment.

6 PART A: DESIGNING ASSESSMENT

General Assessment Principles

The principles of good assessment practice are the same in any learning environment. Any assessment practice should be valid and measure intended objectives; should be reliable and consistent; should be flexible using a variety of methods and approaches; and should be fair and bias free. (Kerka, Wonacott, Grossman, & Wagner, 2000)

Purposes of assessment

Assessment is an essential part of quality learning and teaching in higher education and is usually the key factor influencing how students approach the learning and teaching process. Carefully designed assessment tasks can positively affect the approach of students to their study and the quality of their learning. In particular, they can contribute significantly to the development of the University of Wollongong Graduate Qualities.

The purposes of assessment are:

    • □ To promote learning;
    • □ To measure performance, by awarding marks which indicate whether and how well a particular student has attained the stated learning outcomes;
    • □ To determine whether a particular student is sufficiently well-prepared in a subject area to proceed to the next level of instruction;
    • □ To provide feedback to students which indicates levels of attainment, and to indicate and diagnose misunderstandings and learning difficulties;
    • □ To provide feedback to teaching staff to indicate areas in which students are experiencing difficulties, and to identify and diagnose ineffective teaching.

Effective assessment

Key to effective assessment are the linkages between outcomes, the design of assessment tasks, criteria, marking procedures and feedback. These are recognised in the Code of Practice – Teaching and Assessment which provides that “learning outcomes must be congruent with the content of the subject and assessment processes and practices”.

Assessment practices should be reviewed regularly to ensure their effectiveness. The following diagram illustrates the cycle of planning, implementing, reviewing and improving assessment practices at the subject level. It also illustrates the role of the relevant Assessment Committee(s) in overseeing this process and providing input into individual subject and overall course assessment practices.

  • [Image: cycle of planning, implementing, reviewing and improving assessment practices – not reproduced]
    • □ Assessment should be in a form that allows the determination of student performance, measured against the stated student outcomes of that subject.
    • □ The assessment process should provide for appropriate and timely feedback.
    • □ Weightings for each assessment component, and deadlines for submitting material for assessment, should take into consideration the stated student outcomes of the subject, including identified graduate attributes, and the required function of the assessment.
    • □ Assessment methods should provide for a range of assessment instruments and processes to encourage the development of a range of attributes and skills.
    • □ Assessment should be reliable and valid.
    • □ Assessment tasks and methods should take into account, where possible, past experience/prior student feedback.
    • □ Assessment design should be responsive to students’ context such as multiple curricula, different study environments and different cultural contexts.
    • □ Assessment methods should provide reasonable accommodation for students with a disability - refer to the Disability Policy – Students on the UOW Policy Directory.

Choosing assessment methods

There is a wide variety of assessment methods. In selecting the assessment method that is “appropriate for the intended learning outcomes” (as required by the Code of Practice – Teaching and Assessment), it is important to consider the following issues:

The role of the subject in the overall subject sequence of the program – for example, the role of core subjects within a program or programs. In particular:

    • □ Specific knowledge – basic to the field and/or serving as a prerequisite for later learning
    • □ Specific professional or academic skills
    • □ Specific graduate attributes

The flow of the learning within the subject, for example:

    • □ Sequence of content delivery
    • □ Sequence of skill development - both within the subject and the program
    • □ Specific skills prior to practical application

The main purpose(s) of the specific assessment tasks, for example:

    • □ To provide early feedback to students
    • □ To measure how well a student has attained stated learning outcomes
    • □ To identify which students are experiencing difficulties
    • □ To develop specific graduate attributes
    • □ To determine whether the student can proceed to the next level of instruction
    • □ To measure how well a student can apply knowledge to professional practice
    • □ To determine achievement of professional accreditation requirements
    • □ To diagnose teaching effectiveness.

Practical considerations, for example:

    • □ Size of the class
    • □ Mix of students – international; working; mature age; etc
    • □ Single or mixed campuses or off-campus
    • □ Consider online delivery of assessment and assessment support services
    • □ Resources available – staff with required expertise; equipment; facilities
    • □ Ease of setting
    • □ Ease of marking
    • □ Potential for plagiarism and other forms of cheating
    • □ Stress on students and staff
    • □ Need for disabilities support

A Comparison of Assessment Methods

The following table provides an overview of common assessment tasks and practical issues for consideration:

Table 1 – A Comparison of Assessment Methods

Assessment method: Assignment – essay
  Possible objectives: Research and synthesise information; make an argument; interpret and evaluate ideas.
  Possible advantages: Relatively easy to set. Appropriate for testing higher-order thinking.
  Possible disadvantages/issues: Reduced reliability with different markers. May be time-consuming to mark.
  Considerations (see below): Authenticity; Plagiarism; Language; Online; Group Work

Assessment method: Assignment – problem centred or case study
  Possible objectives: Problem solving; application and interpretation of knowledge; synthesise and evaluate.
  Possible advantages: More realistic test of ability, e.g. closer to performances required in professional practice.
  Possible disadvantages/issues: Cases/problems must be well designed to include an appropriate level of complexity and generate genuine inquiry.
  Considerations (see below): Authenticity; Plagiarism; Online; Group Work; Disabilities Support

Assessment method: Assignment – short answer questions
  Possible objectives: Knowledge and understanding.
  Possible advantages: Reasonably easy to set. Allows broad coverage of the syllabus. Consistency in marking.
  Possible disadvantages/issues: Little opportunity to make an argument or display original thinking.
  Considerations (see below): Authenticity; Plagiarism; Online; Group Work; Disabilities Support

Assessment method: Projects and theses
  Possible objectives: Identify, define and solve problems; research and synthesise information; structure and present an argument.
  Possible advantages: Allows students to pursue individual interests – can be highly motivating. Allows for creative and original work.
  Possible disadvantages/issues: May require unforeseen amounts of work on the student’s part. Time-consuming to mark. Reliability in marking difficult to achieve.
  Considerations (see below): Authenticity; Plagiarism; Group Work; Online; Language

Assessment method: Exam – essay
  Possible objectives: Remember, organise and structure information; structure and present an argument under pressure.
  Possible advantages: Relatively easy to set. Allows confidence about authorship.
  Possible disadvantages/issues: Different questions often require different levels of ability (e.g. describe v. criticise), so comparisons of student performance are difficult.
  Considerations (see below): Online; Language; Disabilities Support

Assessment method: Exam – open book
  Possible objectives: Problem solving; application and interpretation of knowledge; use reference materials effectively.
  Possible advantages: Less study time spent on memorising; thought required in studying for the exam and in writing the response.
  Possible disadvantages/issues: Questions should be set so that they require real thinking and not just looking up the answer.
  Considerations (see below): Online; Language; Disabilities Support

Assessment method: Exam – oral/vivas
  Possible objectives: Oral communication skills; reasoning behind judgements and actions.
  Possible advantages: May be used to confirm practical/clinical assessments.
  Possible disadvantages/issues: May be subjective. Personal factors may influence the assessor. Variability in the questions asked may mean students face different tests. May be highly stressful for some students.
  Considerations (see below): Online; Language; Disabilities Support

Assessment method: Exam – problem centred or case study
  Possible objectives: Problem solving; application and interpretation of knowledge; synthesise and evaluate material.
  Possible advantages: More realistic test of ability, e.g. closer to performances required in professional practice.
  Possible disadvantages/issues: Cases/problems must be well designed to include an appropriate level of complexity and generate genuine inquiry.
  Considerations (see below): Online; Language; Disabilities Support

Assessment method: Exam – short answer questions
  Possible objectives: Knowledge and understanding.
  Possible advantages: Reasonably easy to set. Allows broad coverage of the syllabus. Consistency in marking.
  Possible disadvantages/issues: Little opportunity to make an argument or display original thinking.
  Considerations (see below): Online; Language; Disabilities Support

Assessment method: Exam – MCQ, true/false etc
  Possible objectives: Knowledge and understanding; interpret data.
  Possible advantages: Broad coverage of the syllabus possible. Reliable marking.
  Possible disadvantages/issues: Difficult and time-consuming to set. Cannot test the ability to make an argument, defend a judgement or display original thinking.
  Considerations (see below): Plagiarism; Online; Language; Disabilities Support

Assessment method: In-session tests/quizzes
  Possible objectives: Knowledge and understanding; interpret data; diagnosis.
  Possible advantages: Useful means of assessing progress. Can provide an early warning sign for students who are experiencing difficulties. Can be used as a pre-test to help prepare students for final exams.
  Possible disadvantages/issues: Can be difficult to supervise in a large lecture theatre. Little opportunity to make an argument or display original thinking.
  Considerations (see below): Weightings; Plagiarism; Online; Disabilities Support

Assessment method: Laboratory exercises/reports
  Possible objectives: Practical skills; safety requirements; follow procedures accurately; understanding of scientific method; document experiments.
  Possible advantages: A learning experience as well as an assessment task – learning by doing.
  Possible disadvantages/issues: The written report, rather than practical skills, is usually what is assessed.
  Considerations (see below): Plagiarism; Authenticity; Group Work; Disabilities Support

Assessment method: Journals, diaries and log books
  Possible objectives: Reflection on practice; growth in understanding; reasoning behind judgements and actions; attitudes.
  Possible advantages: Requires deep thinking about practical exercises and field placements. May help to integrate theory and practice.
  Possible disadvantages/issues: Often an unfamiliar assessment tool that students may resist. Difficult to assess attitudes.
  Considerations (see below): Authenticity; Group Work

Assessment method: Seminar presentation
  Possible objectives: Oral presentation skills; lead a discussion; research, organise information and make an argument.
  Possible advantages: May simulate presentations required in professional practice.
  Possible disadvantages/issues: Guidance on effective presentation and group facilitation skills is required. Variability in the audience may make comparisons difficult.
  Considerations (see below): Plagiarism; Group Work; Online; Language; Disabilities Support

Assessment method: Performance (music, dance, theatre, etc)
  Possible objectives: Interpret an artistic work; creativity; technical skill.
  Possible advantages: Multiple assessors improve the reliability of a highly subjective assessment event; assessment by a panel will improve reliability.
  Possible disadvantages/issues: Panel assessment can be highly stressful for students. Criteria for successful performance need to be clear.
  Considerations (see below): Group Work; Online; Language; Disabilities Support

Assessment method: Creative work (exhibitions, portfolios, websites etc)
  Possible objectives: Creativity and originality; technical skill; application of knowledge.
  Possible advantages: Allows students to pursue individual interests – can be highly motivating. Allows for creative and original work. Assessment by a panel will improve reliability.
  Possible disadvantages/issues: May require unforeseen amounts of work on the student’s part. Time-consuming to mark. Reliability in marking difficult to achieve.
  Considerations (see below): Authenticity; Plagiarism; Online; Group Work

Assessment method: Simulated professional tasks
  Possible objectives: Technical skill; interpersonal skills; problem-solving ability; application of knowledge; attitudes.
  Possible advantages: Closely approximates professional work.
  Possible disadvantages/issues: Preparation of markers’ checklists and training of assessors may be necessary to ensure reliability. May be time-consuming and expensive to assess. Reliability in marking difficult to achieve.
  Considerations (see below): Group Work; Online; Language; Disabilities Support

Assessment method: Design tasks
  Possible objectives: Problem-solving ability; creativity; technical skills; presentation skills.
  Possible advantages: Allows students to pursue individual interests – can be highly motivating. Allows for creative and original work. Assessment by a panel will improve reliability. Displays or presentations of design solutions help make standards clear to students.
  Possible disadvantages/issues: Difficult to make reliable assessments of widely differing design solutions – clearly written and weighted criteria will help.
  Considerations (see below): Group Work; Online; Language; Disabilities Support

Assessment method: Class participation
  Possible objectives: Preparation, oral communication skills, comprehension, involvement and enthusiasm.
  Possible advantages: May improve attendance and preparation. Offers an opportunity to assess students’ engagement with, and ability to debate, ideas.
  Possible disadvantages/issues: Criteria for successful performance need to be clear. Assessment may be highly subjective and unreliable. Provision for equal opportunity to participate is required.
  Considerations (see below): Weightings; Online; Language; Disabilities Support

Adapted from a checklist by David Jacques in: Module 10 Course Design Certificate in Teaching and Learning in Higher Education course materials. 1989. Oxford Centre for Staff Development.

Considerations

Weightings

This indicates that there may be certain restrictions on how much weight can be assigned to a certain type of assessment task. The Code of Practice – Teaching and Assessment states that:

    • □ No single assessment task is to count for more than 70% of assessment;
    • □ Group work is not to constitute more than 50% of assessment, unless approved by the Faculty Education Committee (FEC);
    • □ Class participation – if worth more than 10%, a record of participation should be made by the academic staff member conducting the class.

In addition, it is recommended that no single in-session test should constitute more than 30% of the total mark.
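The following is a minimal illustrative sketch (in Python) of how a subject co-ordinator or assessment committee might check a proposed assessment scheme against the limits quoted above. The function, data structure and example scheme are hypothetical and not part of any UOW system; only the threshold values come from the Code and the recommendation above.

    def check_weightings(tasks):
        """Check a proposed assessment scheme against the weighting limits above.

        tasks: list of dicts with keys 'name', 'weight' (percent) and 'kind',
        where 'kind' is 'individual', 'group', 'participation' or 'in_session_test'.
        """
        warnings = []
        for t in tasks:
            if t["weight"] > 70:
                warnings.append(f"{t['name']}: a single task should not exceed 70% of assessment")
            if t["kind"] == "in_session_test" and t["weight"] > 30:
                warnings.append(f"{t['name']}: an in-session test above 30% exceeds the recommendation")
            if t["kind"] == "participation" and t["weight"] > 10:
                warnings.append(f"{t['name']}: participation above 10% requires a record to be kept")
        group_total = sum(t["weight"] for t in tasks if t["kind"] == "group")
        if group_total > 50:
            warnings.append(f"group work totals {group_total}%, which needs FEC approval")
        if sum(t["weight"] for t in tasks) != 100:
            warnings.append("weights do not sum to 100%")
        return warnings

    # Hypothetical example scheme.
    scheme = [
        {"name": "Essay", "weight": 40, "kind": "individual"},
        {"name": "Group report", "weight": 40, "kind": "group"},
        {"name": "In-session test", "weight": 20, "kind": "in_session_test"},
    ]
    print(check_weightings(scheme))   # [] - this scheme is within the limits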

Authenticity

This indicates that there may be a problem with ensuring that the person who presents the work has done the assessment task. Any assessment task that is done without adequate supervision will have this problem.

Formal examinations provide the best authenticity. In-class tests are potentially less authentic unless all student cards are checked carefully, which can be hard to do in a lecture theatre and in limited time. Take-home tasks should be set so that they are difficult for someone not engaged in the subject to complete, and assessment tasks should be varied each time the subject is run. Information on how to ensure authenticity in laboratory classes is provided in Part B.

Plagiarism

This indicates that there is scope for students to copy other people’s work, fail to reference other people’s work, or cheat in some other way. Plagiarism may overlap with the problem of authenticity. Formal exams reduce the risk of plagiarism, although even during formal exams it would be possible to copy someone else’s multiple choice answers if computer marking sheets were in use. If students sit close together during in-class tests it may be easy to copy, unless different versions of the test papers are issued to adjacent rows of students.

Group Work

This indicates that this method of assessment may be effectively undertaken as a group activity. More details on assessing group work are provided in Part B, Section 5.

Language

This indicates that some consideration should be given to students’ English language proficiency when setting and/or wording the task. Written and oral assessment tasks place great demands on students' English language skills, and this may put some students whose first language is not English at a disadvantage. Where these assessment tasks are used, consideration might be given to providing students with early feedback on their language skills, providing support for those who require language development assistance and articulating how language issues are to be dealt with in the grading process. Alternatively, if multiple choice tests are used, careful attention will need to be paid to the wording of questions; for example, the same terminology used in teaching would need to be reproduced in the assessment questions, and the use of double negatives would be inappropriate.

Online

This indicates that this method of assessment may be effectively undertaken as an online assessment task. More details on Online Assessment are provided in Part B.

Disabilities Support

This indicates that the assessment task may have predictable implications for students with common disabilities. Students with speech, vision or hearing impairment may have difficulty in class presentations, for example, while students whose memory or concentration is affected by a learning disability or an injury may need extra time in exam situations. When setting tasks, it can be helpful to keep in mind ways in which the task could be adapted or the deadline could fairly be extended. In some cases, an alternative task will be required.

Designing Assessment Tasks to Minimise Plagiarism

General principles

The Code of Practice – Teaching and Assessment states that the design of assessment tasks should take into consideration the need to minimise opportunities for plagiarism and other forms of cheating. It is important, however, that emphasis in this area does not interfere with the quality of assessment design: the choice and design of assessment tasks must remain true to the expected learning outcomes while also minimising opportunities for plagiarism.

Problems and specific issues

There are two critical and complementary means for minimising plagiarism: the first is task design, and the second involves teaching students about plagiarism and, in particular, how to avoid it. As Carroll (2000, cited in James et al, 2002, p. 44) argues, it is necessary to create a culture of “involvement and interest rather than one of merely detection and punishment”. The following suggestions are designed to assist you in achieving both.

Examples of good practice

Develop a culture of involvement:

        • □ Discuss the issue of plagiarism in class and make the students aware of the various definitions, instances and penalties;
        • □ Provide instruction and resources that teach students the skills of paraphrasing, summarising, critical analysis, arguing and referencing (particularly if you have a first or second year subject). Ask Learning Development staff to work with you on this; and
        • □ Direct students to references and guidelines relevant to your academic area.

Written Assessment Tasks: Make the task specific enough that students are unable simply to download a response from the web. For example:

        • □ Choose a topical question and ask students to argue something specific about it (eg. The current Australian Government’s treatment of refugees is an international human rights issue. Do you agree or disagree? Support your argument with evidence.);
        • □ Base it on a particular journal article or newspaper article (eg. Critically evaluate the Hardaker & Ward (1997) article in light of the information presented in the other six recommended readings.);
        • □ Ask students to relate particular theories/concepts to events/ issues in current newspaper articles (eg. Find an article relating to one of the concepts presented in the lecture series. Analyse and comment on it in light of your learning so far this session.);
        • □ Use case studies;
        • □ Get students to integrate theory and experience (eg. field trips, practicums, reflective writing); and
        • □ Ask students to analyse and report on specific aspects of a local/ national company.

Stage the Assessment Tasks, so that students develop skills related to their use of source material throughout the course; for example:

        • □ To prepare students for ‘Assignment 1: Critical evaluation of three given articles’, teach the students to critically evaluate an article in class earlier in session and have them write it up and hand it in for feedback; or as part of tutorial preparation, ask students to bring a summary of at least one essential reading and collect them at random for marking – give students feedback on their summarising skills;
        • □ Arrange the assessment task so that students need to hand in several drafts of the essay (Stefani & Carroll, 2001);
        • □ Ask students to make brief presentations to the class based on their written assignments; and
        • □ Ask students to submit an annotated bibliography either before or when the essay is due.

Group Work: Group work is intended to encourage cooperative learning and teamwork skills, and this can be achieved if the task is well-designed and clearly communicated to students. Concern about copying must be dealt with initially in the delivery of the assessment task. Related to plagiarism is the fact that some students may over-rely on other group members to do the majority of the work while they all receive the same mark.

        • □ Be clear about the purpose of setting the task, the expected learning process and the outcomes (product and process).
        • □ In class and in the subject outline, make the marking criteria explicit.
        • □ Clearly explain the difference between collaboration and copying.
        • □ Ask the students to submit individual assignments.
        • □ Ask students to write a short reflective paper on what they learnt from the process, or use reflective journals for the same purpose.
        • □ Include a peer assessment component with the assessment.

Example: ACCY102 – Group Contract/ Group Report/ Individual Portfolio and Reflective Piece.

In Session Tests and Short Answer Quizzes: Where testing is conducted during session in lecture theatres or classrooms, copying can be an issue. Where possible, you may do the following:

        • □ Ask students to sit with at least one space between them in lecture theatres and classrooms.
        • □ For electronic quizzes, randomise questions and answers.
        • □ For paper-based tests, vary the sequence of questions on several versions of the same paper, and systematically distribute the different versions to the class.

Exam - MCQ, True / False: The concern with multiple choice testing in the exam situation relates to the ease with which patterns of dots (on MCQ answer sheets) or circled true/false answers may be copied, particularly in crowded conditions. To make such copying more difficult, you might randomise the questions on several versions of the exam paper and systematically distribute the different papers to the class.
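As a hedged illustration of the versioning suggested above for electronic quizzes, in-session tests and MCQ exam papers, the following Python sketch generates several versions of a paper with the questions and their answer options in different orders. The question bank, function name and version count are hypothetical; a real quiz tool would normally provide this feature itself.

    import random

    def make_versions(question_bank, n_versions, seed=1):
        """Return n_versions papers, each with the questions and their answer
        options shuffled into a different order. question_bank is a list of
        (stem, [options]) tuples. A fixed seed lets the versions (and hence the
        per-version answer keys) be regenerated exactly when marking."""
        rng = random.Random(seed)
        versions = []
        for _ in range(n_versions):
            questions = list(question_bank)
            rng.shuffle(questions)
            paper = []
            for stem, options in questions:
                opts = list(options)
                rng.shuffle(opts)   # note: the answer key now differs per version
                paper.append((stem, opts))
            versions.append(paper)
        return versions

    # Hypothetical two-question bank; a real bank would come from the quiz tool.
    bank = [
        ("Which consideration applies to take-home essays?",
         ["Authenticity", "Weightings", "Disabilities Support"]),
        ("What is the recommended maximum weighting for a single in-session test?",
         ["30%", "50%", "70%"]),
    ]
    for i, paper in enumerate(make_versions(bank, n_versions=3), start=1):
        print(f"Version {i}:", [stem for stem, _ in paper])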

Seminar or tutorial presentations: At the heart of the issue of plagiarism in seminar presentations is the students’ citation of evidence, particularly when a paper is not required. Like writing, a good presentation should be clear about where the evidence has been sourced, yet such an expectation is often not mentioned when this type of assessment task is set. The following suggestions may help.

        • □ Make your expectations clear in writing and verbally.
        • □ Develop and communicate the marking criteria to students and include a criterion related to citation/ referencing.
        • □ Where a paper is not required, ask students to hand in an annotated bibliography or similar, identifying where they sourced their information.
        • □ Where appropriate and feasible, model for students the behaviour expected or teach them explicitly.

Projects and theses: While thesis work is invariably an individual effort, project work may involve a group of individuals. Thus, while concern regarding the use and citation of evidence is common to both, authenticity is also a consideration of the latter. For concerns relating to group work, see Section 5B of this document. For other concerns with regard to project and thesis work, you might consider the following:

        • □ Ask students to regularly hand in samples of their notes and use these to give them feedback on their identification of key ‘issues’ and their integration of these into their work.
        • □ Ask for an annotated bibliography for each section/ chapter.
        • □ Ask students to keep a log book of their learning throughout the project/ thesis.

Indicators of Quality Assessment Checklist

(Adapted, in part, from the checklist for quality in student assessment, "Assessing Learning in Australian Universities", 2002, CSHE.)

At the Subject Level

□ Assessment tasks address both the learning outcomes and graduate attributes listed for the subject.

□ Student choice is provided in assessment tasks and weightings at certain times.

□ Student workloads are considered in scheduling and designing assessment tasks for the subject, and in relation to assessment tasks in other subjects likely to be taken at the same time.

□ Excessive assessment is avoided – assessment tasks are designed to sample student learning.

□ Assessment tasks are balanced – they include both developmental (“formative”) and judgemental (“summative”) tasks. Early low-stakes, low-weight assessment is used to provide students with feedback.

□ Criteria for marking are included – clearly articulated learning outcomes and criteria for levels of achievement are described, with marks allocated accordingly.

□ Assessment is fair – assessment tasks are checked to minimise inherent biases that may disadvantage particular student groups.

□ Plagiarism is minimised – careful task design, explicit education and appropriate monitoring of academic honesty.

□ Feedback – students receive explanatory and diagnostic feedback as well as marks.

At the Course Level

□ Overall plan – subject assessment is integrated into an overall plan for course assessment, and:

      □ there is a clear alignment between expected learning outcomes, what is taught and learnt, and the knowledge and skills assessed; and

      □ there is a steady progression in the complexity and demands of assessment requirements in the later years of courses.

□ Student workloads are considered in scheduling and designing assessment tasks.

□ Subject outlines are reviewed before session for consistency and completeness of information on assessment.

□ A variety of assessment methods is employed so that the limitations of particular methods are minimised.

□ There is a regular cycle of review of courses/subjects that includes review of the appropriateness and monitoring of assessment.

□ Review of subjects includes communication with the coordinators of all programs that include the subject/s.

At the Faculty/Academic Unit Level* – Faculty Education Committee/Assessment Committee(s)

□ There is a faculty/academic unit policy that guides assessment practices and the development and monitoring of assessment.

□ There is a regular cycle of review of programs and subjects that includes review of the appropriateness and monitoring of assessment.

□ Monitoring of marks – trends over time and across subjects are reviewed at the end of each session.

□ Assignment cover sheets comply with University guidelines.

□ Processes are in place:

      □ to track the collection and return of assignments;

      □ to inform students fully of the assessment procedures and policies.

*The level at which these tasks are undertaken may vary across faculties/academic units.

7 PART B: IMPLEMENTING ASSESSMENT – PRACTICAL ISSUES

Introduction

When designing the assessment tasks for a subject, a number of questions need to be answered:

    • □ Which tasks are most appropriate for the learning objectives of the subject, given whatever constraints you need to deal with, such as student numbers? Refer to Table 1 above.
    • □ What weightings should be assigned to each task? There are some university principles regarding weightings (refer to previous Section on weightings) but you also need to consider the emphasis you wish to place on the different aspects of your subject, as the weightings will convey this emphasis to the students.
    • □ Which criteria will be used to mark the task? How will you communicate this to students? Refer to the following section on Criteria.
    • □ How will the tasks be marked? Practical considerations that need to be considered include consistency, team marking, double marking, etc. Refer to the following section on Marking.
    • □ How will feedback be given to students? Feedback is integral to the assessment (and learning) process. Refer to the following section on Feedback.

Assessment Criteria

General Principles:

The University's Code of Practice - Teaching and Assessment requires that clear criteria be developed for marking each assessment task and be made available to students.

Criteria for assessment should be:

    • □ Specific to each task
    • □ Clear and sufficiently detailed so as to provide guidance to students undertaking the assessment task
    • □ Transparent (i.e. stated in advance – refer section below)
    • □ Justifiable (i.e. linked to learning objectives) and achievable
    • □ Appropriate to weightings
    • □ Where appropriate, supported by a verbal or written statement about what constitutes the various levels of performance (refer to the example under Examples of Good Practice below).

Stating Assessment Criteria:

Criteria can be stated in many ways. These depend on the type of assessment task. Sometimes specific criteria for assessment cannot be stated in advance without defeating the purpose of the assessment (by informing the learner of what is to be tested). However, it is desirable that the criteria should be made explicit at some stage (e.g. after the work has been marked). For example, if an examination requires the solving of a mathematical problem, the examiner may require the use of logical methods or particular processes. Students should know this, preferably before, but at least after, they have sat for the examination. If an essay is intended to test a student's ability to organise an argument logically, this should be stated and preferably a statement about what constitutes the various levels of performance should be provided.

How much detail?

The question of how detailed assessment criteria should be is a matter of judgement. It seems that students find very general statements such as 'advanced analytical skills' of little use. On the other hand, as discussed above, it is reductive and counter-productive to try to pin everything down. Nevertheless, general statements may provide a useful guide. They can indicate, for instance, that grammar and spelling will be taken into account, or that a certain range of reference to sources is expected. It is probably helpful to look at some examples from colleagues.

Using Criteria as a basis for standardising marking:

Assessment criteria are the basis for marking. When more than one marker is involved, subject co-ordinators should be mindful that other markers may not necessarily share a common understanding of the assessment question. Clear and specific assessment criteria and discussion of marking schemes will be required in advance.

Linking to Learning Outcomes and Performance Levels:

In drafting assessment criteria, it is important to refer to the student learning outcomes and to give some thought to how the criteria can be justified and how they will inform feedback to students. When designing criteria, it is also important to check that the performance levels are achievable by students undertaking the subject.

Making all Criteria Explicit:

It is essential that students are made aware that there are global criteria listed in the subject outline that apply to all tasks (eg. penalties for lateness, word lengths, etc). In addition, there may be criteria that are commonly assumed by academics that need to be clearly communicated to students. Examples would include:

    • □ Presentation style: font size, line spacing, margins;
    • □ Mode of expression: grammar, syntax, spelling (when not already documented);
    • □ Ways of referencing.

It is also recommended practice that these criteria are consistently observed, and students be given feedback accordingly. In some instances it may be helpful to have a discussion with students about these kinds of factors which influence an assessment of their work.

Examples of Good Practice

[Image: example of assessment criteria – not reproduced]

Providing Feedback to Students

General principles

The University's Code of Practice - Teaching and Assessment requires staff to provide to all students 'appropriate, helpful and explanatory feedback', as promptly as possible, on all work submitted for assessment. Both teaching staff and students should understand:

        • □ Why students are required to do certain tasks as part of their assessment in a subject or course;
        • □ What the criteria for assessment are, and how they are applied; and
        • □ Why students receive the mark or grade awarded.

Constructive and timely feedback on assessment is important in order to:

        • □ Assist learning;
        • □ Reward achievement;
        • □ Provide encouragement;
        • □ Explain grades received; and
        • □ Indicate standards expected in a particular discipline or professional area.

Effective Feedback needs to be:

        • □ Specific and detailed - so that students can understand where their strengths and weaknesses are and how they can improve;
        • □ Timely – a guiding principle is that students should get feedback on one piece of work in time for this to be of benefit for the next;
        • □ Directly linked to the criteria for assessment.

Examples of Good Practice

Annotations on written work should provide sufficient detail to be constructive and should address all criteria. They may be used in conjunction with marking sheets.

In-session tests and quizzes:

When returning in-class tests, provide a brief written statement of the criteria and/or the key points which a test question was designed to elicit, including examples of good and bad ways of answering the question. (Discussion of past test papers may be used to help students prepare for tests.)

Tests administered in-session may have an explicit diagnostic function; if so, they should indicate clearly to students whether or not they are meeting the learning objectives. Online multiple-choice quizzes may provide a means for giving feedback on each answer choice.
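As an illustration of how an online quiz can attach diagnostic feedback to each answer choice, the following Python sketch pairs every option with an explanation, so that students receive a comment as well as a mark. The question, field names and feedback text are hypothetical and are not drawn from any particular UOW quiz platform.

    # Hypothetical question: the stem, options and feedback text are illustrative only.
    QUESTION = {
        "stem": "Which practice best supports consistent marking across a teaching team?",
        "options": {
            "A": ("Each tutor develops their own criteria.",
                  "Not quite: divergent criteria reduce consistency (see the section on Marking)."),
            "B": ("A pre-marking team meeting to discuss written grading criteria.",
                  "Correct: shared, written criteria discussed before marking promote consistency."),
            "C": ("Releasing results without comparing marks across groups.",
                  "Not quite: a second meeting to compare results helps detect inconsistency."),
        },
        "correct": "B",
    }

    def mark_and_explain(question, chosen):
        """Return (score, feedback) for the option a student chose."""
        _text, feedback = question["options"][chosen]
        score = 1 if chosen == question["correct"] else 0
        return score, feedback

    score, feedback = mark_and_explain(QUESTION, "A")
    print(score, feedback)   # 0 Not quite: divergent criteria reduce consistency ...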

Written assignments, essays and practical reports - ways to provide feedback include:

    • □ Standard coversheets which set out the general criteria against which written work will be judged, and which provide for the marker to indicate in some standardised form the extent to which the specified criteria have been met by an individual student. Such coversheets also include space for specific comments by the marker. (Example: CCS Essay Cover Sheet)
    • □ Marking sheets which allow the marker to tick a box (or mark a point on a scale) indicating the strength or weaknesses of the writer in meeting the objectives of the assessment - again with space for comments. (Example: DESN212 Project Marking sheet)
    • □ Generic sheets that indicate the main points or issues which students were expected to discuss, consider or solve in the assignment, possibly indicating some techniques or methods that could have been applied.
    • □ Model answers (e.g. in quantitative subjects) with which students can compare their own efforts.

Other feedback

    • □ Feedback is also provided in other ways. In laboratory and clinical classes, and in some other smaller classes, students receive feedback on several aspects of their work orally, though this may not be systematic nor articulate the criteria for assessment as clearly as might be hoped.
    • □ Online quizzes can automate and standardise the feedback given to individual students.

Marking

General principles

Marking student work can be as straightforward as adding up marks for correct or incorrect answers in a quiz. However, in many instances, marking involves the interpretation of criteria and the translation of abstract feedback into a numerical mark. Marking practices are a highly sensitive issue for students and therefore should be transparent, consistent and able to be explained, according to some or all of the following:

        • □ the unit’s/Faculty’s general standards on marking, at appropriate levels
        • □ the stated criteria in the subject outline
        • □ marks assigned to other students for the same piece of work, including in situations where marking is done by teams
        • □ special consideration issues, as agreed by the subject coordinator
        • □ penalties for late submission, partially completed work, poor referencing or other issues, which have been notified to students in advance

Remember that the Code of Practice – Teaching and Assessment requires that a numerical mark be granted for every assessment task (except in pass/fail subjects) and that students may obtain their final examination marks on application to the Subject Coordinator.

Problems and specific issues

Unit and Faculty assessment committees provide oversight of the overall distribution of student results, and are an opportunity to identify and discuss apparently inconsistent marking practices. Nevertheless, there are many circumstances in which consistency can be hard to monitor.

In subjects with a small enrolment, for example, where the subject coordinator is likely to mark all student assignments, there is the potential for an individual marker’s interpretation of marking criteria to diverge from those of the rest of the unit or Faculty. In large subjects, or subjects distributed across campuses, care needs to be taken to ensure common interpretations of the marking criteria are used by the whole teaching team.

Consistency with other institutions (particularly in the case of Study Abroad and Exchange students) is not possible, but students who may be confused by differences in their results should nonetheless have confidence in the consistency of the internal processes used to determine their grades at this university.

Examples of good practice

The key to developing a transparent and accountable marking system is for written grading criteria to be developed for each task, specifying as far as possible what standard of work will be judged as a Pass, Credit, Distinction and High Distinction, and for these to be communicated to everyone involved, including students (see criteria).

Within subject teaching teams:

    • □ A two stage process is strongly recommended to promote consistency:
        • □ before marking commences - an initial team marking meeting to distribute and discuss grading criteria prior to marking, allowing plenty of time for tutors to ask questions.
        • □ before release of results - a second team marking meeting to compare results across groups, paying particular attention to marks which are close to the borderline of a higher or lower band; and to discuss problematic students.
    • □ Subject teaching teams may also engage in double-marking of a selection of already graded tasks, including work which has been judged as a Fail, either as a matter of course, or where grading deviations suggest some inconsistency.
    • □ Where a subject is taught at multiple campuses, it can be helpful for staff occasionally to exchange marking batches, to minimise the potential for local bias.
    • □ Consideration should be given to rotating the marking of assignments across a teaching team.
    • □ Individual markers can check they are marking to the same standard as others in their unit by asking a colleague to look over a selection of graded tasks sampled from each band.
    • □ Procedures need to be in place should a discrepancy in marking occur (e.g. >10% variation in marks).
    • □ Heads of academic units can also initiate this process from time to time, to check that all staff are working to the same standards.

Group Work

General Principles

The Code of Practice – Teaching and Assessment states that “group work must be assessed by means which allow the real contribution of each member of the group to be determined and should not constitute more than 50% of assessment for a subject, unless with special approval, in accordance with the procedures…”. Procedures should be transparent, equitable and contain proper processes of review.

Students working in groups can produce a richness and variety of data and interpretation that may not be produced by an individual student. Group work may meet some of the needs of students who have a preference for collaborative learning. Group work supports the development of a broad range of student abilities and graduate attributes, in particular a capacity for teamwork but also a capacity for independent judgment and the ability to evaluate one's own performance. Assessment tasks should therefore be designed to value group work processes and outcomes.

Key questions in group assessment are:

    • □ What will be assessed? (outcomes/process/both)
    • □ What will be the assessment criteria?
    • □ Who will decide on the criteria? (lecturer/group/individual students/combination)
    • □ Who will assess the work? (lecturer/peer/self/combination)
    • □ How will marks be distributed? (individual student work/common group mark/group mark allocated differentially within the group)
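One common way to allocate a group mark differentially within the group (the last option above) is to scale the shared mark by peer-assessed contribution ratings. The following Python sketch is illustrative only: the weighting formula, ratings and cap are assumptions, not a method prescribed by the Code of Practice.

    def allocate_individual_marks(group_mark, peer_ratings, cap=100.0):
        """Scale a shared group mark by each member's average peer rating.

        peer_ratings: dict mapping student -> average contribution rating.
        individual mark = group mark x (individual rating / group average rating),
        capped at 'cap'. This formula is one common option, not a prescribed method.
        """
        average = sum(peer_ratings.values()) / len(peer_ratings)
        return {
            student: round(min(group_mark * rating / average, cap), 1)
            for student, rating in peer_ratings.items()
        }

    # Hypothetical group of three sharing a report mark of 75/100.
    ratings = {"Student A": 4.5, "Student B": 4.0, "Student C": 2.5}
    print(allocate_individual_marks(75, ratings))
    # {'Student A': 92.0, 'Student B': 81.8, 'Student C': 51.1}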

Problems and specific issues

Problem: Conflict of values – higher education values and recognises individual achievement, and this may conflict with the concept of assessment of collaborative tasks.
Suggestions: Discussion of the purposes of group work may help overcome this to some extent.

Problem: Student preference for individual work.
Suggestions: Discussion of the purposes of group work, and assessment of each student’s contribution, may help overcome this to some extent.

Problem: Allocation of marks to reflect individual contributions.
Suggestions: Use individual reflective journals as an avenue to assess each student’s capacity to interpret the group process. Have students design a work plan, allocate specific tasks to each member and submit this at the commencement of the group task. Require a work journal to be submitted. For first year subjects, marks could be based on the individual student’s contribution to content; for higher level subjects, assessment of a student’s work could also reflect their contribution to the process and their insight into that process. Use of lecturer-designated or group-negotiated team roles can help spread the workload evenly and prevent one person from “carrying” the group (see DESN301 example).

Problem: Student concern about a negative effect on grades.
Suggestions: Some research indicates that low-achieving students tend to achieve higher than usual scores when they work in groups, but that high-achieving students tend to receive similar grades to those they receive when working alone (Exley and Dennick, 2004). Developing an assessment system that allocates marks according to individual effort may help to overcome this concern.

Problem: Difficulty in meeting as a group.
Suggestions: Online discussion can ameliorate this to some extent. Incorporate team tasks into tutorial activities – this enables members to meet and the group process to be observed.

Problem: Student self-assessment – over- and under-estimation.
Suggestions: According to Nightingale et al (1996: 96-97), some research indicates no general tendency for students to over- or under-estimate their own performance; however, able and/or experienced students make more accurate judgements of their own performance than do less able/experienced students. According to Exley and Dennick (2004: 184), students will initially give each other equal marks and only lower a group member’s mark if they have not contributed. Providing specific peer marking criteria, and requiring records of work such as meeting logs or learning journals to be handed in, will help overcome this.

Problem: Students being exclusionary – local students may not be inclusive and culturally sensitive towards students from the non-dominant culture, e.g. international students, women students.
Suggestions: Training in group skills, discussing teamwork issues, having two students from the same culture and two female students in a group, and monitoring group processes will help overcome this.

Problem: Dispute of a shared mark.
Suggestions: A group task for which a shared mark is awarded can only be considered if:

      □ individual work journals or log books are kept by all members of the group, and

      □ dispute resolution involves consultation with all members of the group.

Guidelines for group work

Subject coordinators should ensure all students are able to do their best work within groups.

    • □ Allow sufficient time for groups to meet/undertake their group activity.
    • □ Establish explicit guidelines and procedures for group work activities and provide these to the students in writing (this may require significant class time for discussion).
    • □ Require group participants to carry out specific introductory activities so that the group is able to develop an equitable and effective working relationship.
    • □ Teach students the skills to work in groups (group processes and procedures).
    • □ Provide relevant pro formas for recording processes, for example:
        • □ meeting attendance
        • □ levels of contribution per meeting/task
        • □ decisions made
        • □ actions to be taken by whom
        • □ standard of work completed by each member.

Guidelines for group assessment

Subject coordinators should:

        • □ provide clear and achievable assessment criteria and standards
        • □ require individual work journals/log books be kept for those tasks where a group mark is to be allocated
        • □ be clear as to what amount of work and effort is required;
        • □ make advance plans for the possibility of any group disbanding;
        • □ allocate marks based on contribution;
        • □ provide examples of highly graded group work;
        • □ require groups to submit a progress report on group process for formative assessment;
        • □ consider formative peer assessment of the main tasks of individual members;
        • □ consider setting up online discussion forums to support out-of-class group work.

Other considerations

The following ideas can help ensure groups complete the task to the satisfaction of all members.

    • □ Require groups to submit a work journal recording individual contributions and group progress and planning.
    • □ Ask students to keep a reflective journal and submit a certain number of reflections or a short reflective paper on what they learned from the process.
    • □ Require groups to design a work plan to be submitted for formative feedback.
    • □ If students are required to allocate marks and/or provide feedback, train students in peer and self-assessment.
    • □ Where individual marks for group outcomes are to be allocated by the lecturer, groups may be required to allocate individual tasks on which the group outcomes may be built. A record of these tasks may be submitted with the final project so that marks can be allocated.
    • □ Where individual assessment tasks follow group activities these should indicate how the group activity helped the development of the outcome.
    • □ Where possible, support students in establishing their own assessment criteria.
    • □ For transparency of group work records, consider requiring students to post meeting outcomes to an online discussion forum for easy reference by group members and also to assist in dispute resolution if required.

Examples of good practice - Subject ACCY102 Co-ordinator: Kathy Rudkin, Accounting & Finance

Purpose

Students are expected to develop skills in teamwork and written communication. There is a strong emphasis on individual learning processes and contribution to a group outcome.

Teaching Strategy

  • 1. The lecturer sets up the group assignment as a partnership and joint venture. The partnership analyses the technical accounting document.
  • 2. The lecturer specifies the requirement to negotiate, write and sign a partnership agreement.
  • 3. The lecturer provides guidelines of the contents of a partnership agreement, including:
    • □ The purpose of the partnership;
    • □ Timing and location of meetings acceptable to all members, and contact information (Do not give out private details);
    • □ Requirements for valid meetings, e.g. quorum, number of meetings allowed to be missed etc;
    • □ Agreed meeting procedures;
    • □ A requirement for each group member to take turns in taking minutes at each meeting, and that these minutes must be distributed to and approved by other partners at the next meeting;
    • □ Rules for allocating work;
    • □ Method of sharing marks for the group report;
    • □ Rules for one member leaving;
    • □ Rules for admitting a new partner;
    • □ Rules for settling disputes between partners;
    • □ Rules for dealing with the prolonged illness of a partner;
    • □ Terms of the partnership and the basis for liquidation;
    • □ The lecturer provides dates by which parts of the work are to be submitted as a group.
    • □ The lecturer provides for an individual component to be submitted for assessment – extract:

Each individual student must hand in:

  • 4. A portfolio in a plastic sleeved folder of the processes of your part of the work done in the completion of the group project. Each piece of work should be dated, and placed in the folder in chronological order. At a minimum your portfolio must contain:

      □ A typed two page reflective summary. This should reflect on the learning process that you have undergone in doing this assignment. Points addressed should include what you felt you did well, and what you would do differently next time.

      □ Minutes you took of all group meetings that you attended.

      □ 10 specimens of development work that demonstrate the process you have gone through in making a contribution to your group’s partnership agreement and report. These specimens should not include printed material from web sites or photocopied reference material. Rather they should be working papers of your individual work in progress and analysis. Development work must be in English and may be hand written.

      □ A diary summary of time contributed to the project.

  • 5. The lecturer provides guidelines in the event of disputes – extract:
    • □ Where problems arise, groups must consult with the tutor or subject coordinator as soon as possible and before the due date.
    • □ In the event of dispute, partnership agreements and portfolios may be considered in apportioning marks between group members.
  • 6. Tertiary Literacies - a capacity for, and understanding of, teamwork:
        • □ interacts effectively with other people both on a one to one basis and in groups, to achieve a shared goal
        • □ develops leadership skills in order to undertake leadership roles
        • □ understands and responds to the needs of clients
        • □ develops people management strategies
        • □ values the opinions of others and appreciates their diversity
        • □ demonstrates a commitment to principles of equity
        • □ demonstrates the ability to compromise and negotiate
        • □ engages in and receives constructive criticism and argument
        • □ can work with geographically dispersed teams including members based offshore

Assessing Practical Skills – Laboratory Exercises and Reports

General Principles

Assessment should be designed to test the skills learned in the laboratory. Assessment of practical skills should be done with a practical test.

Problems and Specific Issues

Students are often required to work in groups of two or three in laboratories, because of the pressures on equipment, space and consumables. This can be a good learning environment where the individual students learn by discussions with each other and observation of others performing practical tasks. However it can be difficult to get all students in a group to learn all skills involved in the activities of the laboratory (preparation for the experiment, setting up the experiment, taking the results, interpreting the results).

Examples of Good Practice

    • □ Encourage students to take on differing roles in a group throughout the semester.
    • □ Have all students keep a logbook of all the activities in the laboratory and have the demonstrator check, sign and date the logbook at each laboratory session.
    • □ Have each student in the group write a report on a different experiment.
    • □ Change the experiments each time the laboratory is run.
    • □ Have one student in the group prepare one experiment before the laboratory, supervise the running of the experiment and collection of results, and write the report. Each student in the group does this in turn.
    • □ Run an individual laboratory test.

Assessing Large Classes

Assessing large groups of students poses additional challenges for academic staff. The key to effective assessment – developing criteria, guides, exemplars and models, and discussing these with students and other staff – will initially add to the workload of subject co-ordinators. However, as James et al (2002) note, this preparatory work will lead to potential gains, e.g. by reducing marking time, avoiding time-wasting issues when many staff are involved in marking, and improving the overall quality of teaching and learning.

Problems and specific issues

Challenge: Avoiding assessment that encourages shallow learning.
Response: Avoid over-reliance on exam-based assessment using MCQ or short-answer questions. Be aware of the limitations of particular methods of assessment.

Challenge: Providing high quality, individual feedback.
Response: Assess early; provide students with marking criteria before they undertake the assessment task; prepare a list of common problems in completing the task, along with explanations or model answers; use standardised feedback sheets which incorporate the assessment criteria; use online discussion boards where appropriate; use a subject homepage to provide basic information and FAQs related to assessment; after using MCQs, provide students with a written rationale and explanations for correct or high-scoring answers; an online quiz can automate and standardise this feedback.

Challenge: Fairly assessing a diverse mix of students.
Response: Early in session, briefly survey students to identify prior knowledge, expectations, etc.; set an early ‘hurdle task’ to identify students at risk of failing written assessments and offer assistance through Learning Development; organise support tutorials; ensure tutorials follow lectures and not vice versa; use a variety of assessment tasks; ensure English-language assistance for students who need such help.

Challenge: Managing the volume of marking and co-ordinating the staff involved in marking.
Response: Provide clear marking criteria; make past exam papers and model answers available; provide examples of various levels of work (from a Pass grade to an HD). More details about team marking are provided elsewhere in Part B.

Challenge: Avoiding plagiarism.
Response: The likelihood of plagiarism/cheating in large classes may be increased by students’ feelings of anonymity or less opportunity to check referencing. The key to minimising plagiarism is in the design of assessment tasks, as discussed in Part A.

Adapted from James, R., McInnis, C., & Devlin, M. “Assessing Learning in Australian Universities” CSHE, University of Melbourne, 2002

Class Participation

General principles

The Code of Practice - Teaching and Assessment provides that class participation or online discussion may be assessed and marks may be awarded. Where marks are so awarded:

        • □ clear criteria must be provided for assigning marks; and
        • □ where class participation is worth more than 10% of the marks for a subject, a record should be made by the academic staff member conducting the class, in accordance with Section 8 of this Code.

Using class participation as an assessment tool is a valuable approach to rewarding students for their contribution to an active and cooperative learning process. Not only does it encourage students to attend and to take responsibility for their own learning, it also ensures that they are exposed to a range of interpretations of subject-related concepts and issues, thus promoting deeper learning. Participation requires students to prepare and to engage actively by articulating ideas and defending academic argument, thereby developing oral communication skills.

Problems and specific issues

The major issues related to using classroom participation as an assessment item include failure to provide clear guidelines on how students are expected to participate, insufficient information about the marking criteria, and the potential for marker subjectivity. Some staff allocate marks for attendance rather than participation, and these are often seen by students as easy marks. This is contrary to the Code.

Where students are expected to participate in discussion, the facilitator is responsible for ensuring that all students have the opportunity to participate. Structuring discussion in pairs and small groups may give students a better opportunity to participate.

Assessment of participation is less reliable in large classes and/or where there is limited class time (e.g. one hour tutorial a week).

Examples of good practice

    • □ Set clear criteria by which participation will be marked.
    • □ Differentiate between attendance and participation.
    • □ Keep criteria simple (see Maznevski, 1996 and Tyler, 2003 for examples).
    • □ Consider how reliably participation can be assessed, taking into account class size and available class time.
    • □ Inform students of ways they will need to prepare to participate effectively in class.
    • □ Train tutors in facilitating equitable participation.
    • □ Provide feedback on the nature and quality of participation you are observing in class.
    • □ Direct specific questions to individual class members to facilitate participation.
    • □ Maintain records of marks achieved by each student every week.

Example 1 Class participation marking criteria

Grade 0: Absent.

Grade 1: Present. Tries to respond when called on but does not offer much. Demonstrates infrequent involvement in discussion.

Grade 2: Demonstrates adequate preparation: knows basic case or reading facts, but does not show evidence of trying to interpret or analyse them. Offers straightforward information (e.g. straight from the case or reading), without elaboration or very infrequently (perhaps once a class). Does not offer to contribute to discussion, but contributes to a moderate degree when called on. Demonstrates sporadic involvement.

Grade 3: Demonstrates good preparation: knows case or reading facts well and has thought through their implications. Offers interpretations and analysis of case material (more than just facts) to the class. Contributes well to discussion in an ongoing way: responds to other students’ points, thinks through own points, questions others in a constructive way, offers and supports suggestions that may be counter to the majority opinion. Demonstrates consistent ongoing involvement.

Grade 4: Demonstrates excellent preparation: has analysed the case exceptionally well, relating it to readings and other material (e.g. course material, discussions, experience, etc.). Offers analysis, synthesis and evaluation of case material, e.g. puts together pieces of the discussion to develop new approaches that take the class further. Contributes in a very significant way to ongoing discussion: keeps analysis focused, responds very thoughtfully to other students’ comments, contributes to cooperative argument-building, suggests alternative ways of approaching material and helps the class analyse which approaches are appropriate. Demonstrates ongoing, very active involvement.

Maznevski, M. (1996). Grading Class Participation. Teaching Concerns: Newsletter of the Teaching Resource Center for Faculty and Teaching Assistants. University of Virginia. http://www.trc.virginia.edu/tc/1996/Grading.htm (accessed 11/5/04).

Example 2 Class participation marking criteria

Outstanding Contributor

Contributions in class reflect exceptional preparation. Ideas offered are always substantive; provide one or more major insights as well as direction for the class. Challenges are well substantiated and persuasively presented. If this person were not a member of the class, the quality of discussion would be diminished markedly.

Good Contributor

Contributions in class reflect thorough preparation. Ideas offered are usually substantive; provide good insights and sometimes direction for the class. Challenges are well substantiated and often persuasive. If this person were not a member of the class, the quality of discussion would be diminished.

Adequate Contributor

Contributions in class reflect satisfactory preparation. Ideas offered are sometimes substantive, provide generally useful insights but seldom offer a new direction for the discussion. Challenges are sometimes presented, fairly well substantiated, and are sometimes persuasive. If this person were not a member of the class, the quality of discussion would be diminished somewhat.

Unsatisfactory Contributor

Contributions in class reflect inadequate preparation. Ideas offered are seldom substantive; provide few if any insights and never a constructive direction for the class. Integrative comments and effective challenges are absent. If this person were not a member of the class, valuable air-time would be saved.

Non-Participant

This person says little or nothing in class. Hence, there is not an adequate basis for evaluation. If this person were not a member of the class, the quality of discussion would not be changed.

In-session Tests

General principles

In-session tests are defined as “tests, quizzes or in-class reviews that are held within session, either during regular class times or outside classes.”

In-session testing, when designed to support student learning:

        • □ is a useful means of assessing a student’s progress in a subject;
        • □ can provide an early warning sign for students who are experiencing difficulties;
        • □ consolidates learning of modules within a subject;
        • □ supports incremental learning;
        • □ prepares students for final exams.

In-session tests must be conducted in accordance with written procedures approved by the Faculty Education Committee, which should include appropriate processes to review the question papers and require appropriate arrangements to be made for seating and supervision to minimise the possibility of cheating (Code of Practice-T&A, Section 5.4.3).

As required by Section B of the Subject Outline Checklist, details of in-session tests including date, time and location of the test, as well as its weighting in relation to the subject, should be listed in the subject outline.

Specific issues and guidelines

Weighting of tasks:

Weighting of marks should be commensurate with the assessment task. No single in-session test should be worth more than 30% of the final mark for the subject.

For in-session tests worth 10% or more of the final mark, the following guidelines are recommended:

Timing:

      □ In-session tests occurring in regular class time should not exceed the time allocated for that class.

      □ Students who are unable to attend in-session tests, whether these are held inside or outside scheduled teaching times, may apply under the Special Consideration Policy for alternative arrangements to be made.

      □ When scheduling in-session tests, staff are advised to consult the Calendar of Religious Observance.

Supervision/ Security

    • □ Appropriate seating arrangements, allowing for adequate space between students, should be organised to minimise the possibility of academic misconduct. Where this is not possible because of large class size and/or limited space, additional supervision should be provided.
    • □ Adequate supervision of in-session tests needs to be arranged in advance. As a general practice, the same arrangements used for formal exams should be instituted for in-session tests – that is, the ratio of one supervisor and one assistant for up to 60 students should apply, with an additional supervisor required for each additional 60 students in attendance.
    • □ Appropriate arrangements should be made for securing test papers prior to the test.
    • □ Large classes: Security can be a problem for in-session testing conducted in large lecture theatres and checking student identity cards can be difficult and time consuming. Attention needs to be given to how best to use in-session testing under these circumstances – refer Examples of Good Practice: Using in-session multiple choice tests to support learning.
    • □ Students can be issued with a password to assist with security during online quizzes.

Identification of Students

Depending on the size of the class, students may be required to produce their student identity card for verification by the supervisor. Faculty/Unit procedures need to address this issue and specify what actions are to be taken if students are unable to produce their ID card. The following examples are provided as a guide only.

    • □ If a student is unable to produce a student ID card, some other means of identification may be required. For example, the student may be admitted to the room on condition that another form of photographic identification with matching signature is presented, e.g. driver's licence or passport.
    • □ In a situation where a student fails to produce a satisfactory form of identification, the student may be required to leave his/her signature with the supervisor and be advised to report to the subject co-ordinator on the following day with his/her ID card so that signatures can be checked.

Students with Disabilities/Special Needs

    • □ The Academic Unit, in consultation with the individual student and the Disabilities Liaison Officer, should make appropriate provision, in terms of resources and timing, for students with disabilities and/or special needs.
    • □ Where an in-session test has been delayed for a student with a disability and/or special need, every effort should be made to reschedule the test so that a final mark can be determined by the end of the session.

Examples of good practice

Using in-session multiple choice tests to support learning:

Computer-generated multiple choice tests can be used as an aid to formative learning. Sly (1999) uses them as an optional pre-test for students. The questions are generated and marked by computer, students receive feedback, and presumably they can take several pre-tests. Sly found that practice tests improve student performance on computer-managed learning assessment. (Sly, L. (1999) Assessment and Evaluation in Higher Education, 24(3), pp. 339–343.)

Using a ‘pyramid exam’ technique, Cohen and Henle (1995, in Yuretich, 2003) require students to sit the same exam in several different ways. Students first take the quiz or test individually under test conditions; the questions are usually very difficult, and a good proportion are expected to be answered incorrectly by many students in the class. Students hand this in and then, immediately afterwards in the same lecture, work with another student to complete the same test, discussing their approach to the questions and the solutions. Finally, the most difficult questions are discussed and solved as a whole group. This approach transforms what might be an ordinary multiple choice test by introducing analysis, critical thinking, teamwork and problem-solving skills. (Yuretich, R. (2003) Encouraging critical thinking, Journal of College Science Teaching, 33(3), pp. 40–45.)

Sharon Robinson, Biological Sciences.

First year students receive an introduction on how to complete the quiz during the lecture. The quiz is administered and once marked the lecturer works through the answers to support the learning process. This is done over several weeks.

Second year students receive an introduction on how to complete short answer questions during the lecture. The test is administered and once marked is followed up with working through model answers during a lecture.

The above in-session tests are worth 15% of the subject mark. Classes of 80 or fewer students are moved to the laboratories for additional security; classes of more than 80 sit the test in lecture theatres. It is acknowledged that the latter is not an ideal test condition.

Notification of in-session tests is included in subject outlines and discussed in lectures.

James Wallman, Biological Sciences.

First year students have two small quizzes during the session. The quizzes take about 10 minutes at the start of practical sessions. They are open book, and the questions are based on the notes students should have taken in their practical classes in the previous weeks.

The intention of the quizzes is to ensure students develop good work practice in recording scientific information correctly in their practical classes. The types of questions are similar to those in the final exam and as such could be considered as formative assessment.

Each quiz is worth 10%. Tests are conducted in the lab and are supervised by the lecturer and four demonstrators.

Students with special needs will be catered for by allowing them to do the test at the end of the prac class or at another time, depending on the circumstances.

Online Assessment

General Principles

Online assessment may refer either to tasks that are completed in the conventional manner but submitted using online delivery mechanisms (via WebCT assignment drop-boxes, for example), or to those tasks which have been designed in part to assess students' skills in working in an online environment.

Assessment tasks can be distributed, completed, submitted and returned online in a variety of ways, including timed quizzes, essays submitted and graded via WebCT, web-based projects or portfolios displayed online, or contributions to participation in online discussion. Assessment of presentation skills can be conducted via videoconference, and Information Communication Technology (ICT) is a key component in group work, enabling the effectiveness of group work processes to be precisely documented via journals and an ongoing record of team correspondence.

Online modes of assessment can be flexible and practical for both lecturer and students, particularly for students working at other campuses, and have the additional benefit of developing graduate skills in virtual teamwork and in the use of ICT to meet deadlines.

Where contributions to group work or class participation are assessed, many students are assisted by being able to work in flexible, asynchronous discussion environments which enable them to develop sophisticated and thoughtful contributions in their own time. Students with language difficulties, students in large classes, and students with hearing, vision or learning impairments may all be assisted where their online participation is assessed.

There are some specific benefits to academic staff using online assessment. Receiving and returning work online automatically creates a copy both for tutor and student, enabling the tutor to track the student’s development over the whole assessment cycle. The use of assessment drop-box facilities in Learning Management Packages such as WebCT also eases the practical problems involved in team-marking and double-marking, providing a facility for all members of a marking team to look over each other’s responses and grades.

Problems and specific issues

Assessment tasks that are performed online raise the question of authenticity: can the academic be sure that the submitted work has been completed by the student who claims to have completed it? Online examinations and tests would need supervision. Secondly, using a computer connected to the network to complete an assignment can open up a much larger pool of material to copy from than is available in some other environments, particularly those used for timed examinations.

Assessment submitted via networked computers can create problems of equity and access, significantly advantaging students with good online connections at home over students who depend on university facilities. In addition, assessment tasks should be designed to minimise accidental bias against students with poor technical skills. Reasonable accommodation must be provided for students whose disabilities prevent them from working effectively online.

Online assessment places pressure on academic staff to develop and maintain particular technical skills, particularly when working in a cross-platform environment, in order to ensure that the quality of a student’s work is not accidentally compromised in transfer between computers, or at least that the academic staff member can recognise where this has been the case. Online assessment (e.g. the use of chat rooms) can be time consuming, and careful management may be required (e.g. making synchronous chat sessions time-limited).

Use of online assessment places additional pressure on the University to guarantee network security, network availability, and data back-up and ongoing storage of student work.

Examples of good practice

Where online assessment is used, students should be provided with clear statements of the rationale for using online assessment, and should be made aware of their and the University’s responsibilities in terms of security, access and storage.

Criteria for assessment should be particularly clear about the significance of technical skills, timely submission and other factors which may be driven by access to technology. Students working online need to know exactly what is being assessed in terms of content, presentation, and means of submission, and that the relevant staff have the appropriate skills to assess this.

In the assessment of contribution to discussion, the subject outline must clarify whether or not students are expected to write in formal sentences, whether they should include references, and whether all participation postings are being assessed or only a selection.

Where there is concern about the authenticity of the results of timed quizzes, some steps can be taken towards partial invigilation, provided university facilities are used to complete the assignment. In general, however, the question of authenticity is best addressed by developing a range of tasks for the subject which will reveal more distinctive characteristics of the student submitting work. As a general guide, multiple-choice or short answer quizzes should not comprise the majority of the overall mark.

Prior preparation of alternative tasks or deadlines in the event of network failure should minimise stress on staff and students.

Disabilities Support

General Principles

In a university setting, a disability is considered to be a condition or event which impedes a student’s access to their education and, in this context, which impairs their ability to participate equitably in the assessment of their learning. The Code of Practice - Teaching and Assessment requires that assessment tasks are capable of being amended or substituted to provide "reasonable accommodation for students with a disability." This accommodation may be requested by the student, directly or with the support of Student Support Advisers (SSAs). While the emphasis should remain on the learning outcomes of the subject, there is a broad range of possible modifications and alternatives to assessment tasks, and/or to the environment in which the task is conducted (including its time constraints), which can be achieved.

Details of some of the options which may be requested are listed here.

Problems and specific issues

While the impact on learning and assessment faced by some students with physical disabilities is immediately apparent, there are many students who are managing less obvious disabilities, including learning disabilities, mental health problems, and problems with chronic pain management.

The design of assessment tasks cannot be expected to anticipate every form of apparent or hidden disability which might affect a student's performance in a particular task. Nevertheless, designing tasks that can easily be renegotiated or substituted will assist students with disabilities, and is likely to be of benefit to all students in the class.

When designing an assessment task, think about students taking the assessment task who might be affected by:

        • □ temporary serious illness and recovery
        • □ the effects of medication, or changes to medication
        • □ mental health issues
        • □ learning disabilities
        • □ chronic or recurring physical conditions
        • □ conditions affecting communication with others, including speech, hearing and vision impairments
        • □ conditions which affect the ability to work effectively using a computer
        • □ conditions which restrict access to particular physical environments, including mobility impairments and phobias.

What changes could be made? Are there tasks which would be very hard to substitute? Can you make appropriate accommodation for different student needs?

Examples of good practice

There are a range of language, memory and sustained concentration issues which can affect students with learning disabilities, including those caused by injury, with particular impact on their performance in timed exams. Consider offering an alternative to a timed exam, or design an exam which can easily be completed in shorter sections, over a longer time. Allowing all students to take short breaks, or to complete timed work online, may be useful to everyone.

When designing online environments for assessment, bear in mind that site design and layout which assists students with vision impairments or learning disabilities is likely to benefit all students. (Opening all Options II)

When designing or selecting the physical environment of lab-based or professional practice assessment tasks, take into consideration the possibility that students with diagnosed phobias may be affected as well as students with physical mobility problems. In each case, it may be possible to construct team assessment tasks which offer a range of different roles to a group of students. It may also be helpful to allow all students extra time to become familiar with an environment before being asked to work there.

There are a number of communication issues which can affect students’ participation in assessed class discussion or oral presentations. Again, consider the use of team assessment tasks, or provide a range of options (including online presentations or emailed contributions to discussion) to enable all students to communicate the results of their work.

References

[Note: References used in this document are selective only and are not considered to be comprehensive. Suggestions for additional relevant references are welcomed.]

General

  • 1. Brown, G. (2001), Assessment: A Guide for Lecturers, Learning and Teaching Support Network Generic Centre, http://www.ltsn.ac.uk/home.asp (accessed on 8 March 2004)
  • 2. Centre for the Study of Higher Education, (2002), Assessing Learning in Australian Universities - Ideas, strategies and resources for quality in student assessment, www.cshe.unimelb.edu.au/assessinglearning (accessed on 16 February 2004)
  • 3. Kerka, Wonacott, Grossman, & Wagner, 2000
  • 4. Isaacs, G. (2001), Assessment for Learning, University of Queensland
  • 5. Murphy, R. (2001), A Briefing on Key Skills in Higher Education, Learning and Teaching Support Network Generic Centre, http://www.ltsn.ac.uk/home.asp (accessed on 8 March 2004)
  • 6. University of Queensland, Teaching and Educational Development Institute, Teaching and Learning Support, http://www.tedi.uq.edu.au/teaching/assessment/ (accessed on 10 May 2004)

Minimising Plagiarism

  • 7. Clanchy, J. & Ballard, B. (1991) Essay Writing for Students. Melbourne; Longman Cheshire.
  • 8. James, R., McInnis, C. & Devlin, M. (2002) Assessing Learning in Australian Universities. CSHE/ AUTC. www.cshe.unimelb.edu.au/assessinglearning (accessed 18 March 2004)
  • 9. Stefani, L. & Carroll, J. (2001) A Briefing on Plagiarism. Assessment Series No. 10. Learning and Teaching Support Network.

Group Work

  • 10. Chalmers, D. and Volet, S. (1997) Common Misconceptions about Students from South East Asia Studying in Australia in HERD, Vol.16, No. 1
  • 11. Dunn, L. et al (2004) The Student Assessment Handbook. London: Routledge Falmer.
  • 12. Exley, K. and Dennick, R. (2004) Small Group Teaching: Tutorials, Seminars and Beyond. London: RoutledgeFalmer.
  • 13. Nightingale, P. et al (1996) Assessing Learning in Universities. Sydney: UNSW Press.
  • 14. Wright and Lander (2003)
  • 15. Volet (2001)
  • 16. Volet and Ang (1998)

Class Participation

  • 17. Maznevski, M. (1996) Grading Class Participation. Teaching concerns. Newsletter of the Teaching Resource Centre for Faculty and Teaching Assistants. January, 1996. http://www.trc.virginia.edu (accessed 3 March 2004)
  • 18. CIDR (2003) Strategies: fostering equitable class participation. Inclusive teaching. http://depts.washington.edu/cidweb/inclusive/foster.html (accessed 3 February 2004)
  • 19. Tyler, J. (2004) Class Participation Assessment Guidelines. http://www.brown.edu/Departments/Italian_Studies/dweb/pedagogy/particip-assessm.shtml (accessed 3 February 2004)

In-session Testing

  • 20. Cohen, D. and Henle, J. (1995) The pyramid exam, UME Trends, (2), p 15
  • 21. Jacobs, L.C. and Chase, C. (1992) Developing and Using Tests Effectively, Jossey-Bass, San Francisco.

Assessing Large Classes

  • 22. Teaching Large Classes, Teaching and Educational Development Institute, University of Queensland, 2002, http://www.tedi.uq.edu.au/largeclasses/ (accessed on 10 May 2004)

Disabilities support

  • 23. Opening all Options II, DEST and University of Tasmania, http://student.admin.utas.edu.au/services/options/academics.htm (accessed 10 May 2004)

8 PART C: FACULTY/UNIT PROCESSES

At the Course Level:

The assessment related responsibilities at the course coordinator level are a combination of academic, pedagogical, professional, management and quality assurance issues. The academic, pedagogical and professional issues should be clarified in conjunction with the academic staff involved, External Advisory Committee members and representatives of professional organisations. The resultant overall structure for the course should incorporate assessment strategies relevant to the course outcomes. These broad assessment strategies will then be translated into actual assessment tasks within individual subjects. The Course coordinator has a quality assurance role to ensure that the balance and timing of assessment tasks between subjects is appropriate. Quality review by the course coordinator should also incorporate a plan for the regular review of subjects and courses, with processes to act on the outcomes of such reviews.

Overall plan

  • □ Subject assessment is integrated into an overall plan for course assessment; and
  • □ there is a clear alignment between expected learning outcomes, what is taught and learnt, and the knowledge and skills assessed; and
  • □ there is a steady progression in the complexity and demands of assessment requirements in the later years of courses.

Ideally the overall course plan is developed in conjunction with internal academics and relevant external professionals and organisations via an External Course Advisory Committee.

See Course and Subject Approval Kit

□ Student workloads are considered in scheduling and designing assessment tasks.

A factor to be considered by the Unit-level assessment committee.

□ Subject outlines are reviewed before session for consistency and completeness of information on assessment.

See Subject Outline Checklist in COPTA

Sample Template included in Appendix 3

□ A variety of assessment methods is employed so that the limitations of particular methods are minimised.

See Table 1 in Section A (link)

□ There is a regular cycle of review of courses/subjects that includes review of the appropriateness and monitoring of assessment.

Add link to Course and Subject Review Guidelines (currently under development by QA team)

□ Review of subjects includes communication with co-ordinators of all course programs that include the subject/s.

As above

At the Faculty/Unit Level:

The University has in place a number of policy guidelines and requirements regarding assessment matters. It is the responsibility of Faculties (in some cases Units) to implement and monitor these policies and guidelines. Faculties must be able to demonstrate that they are meeting their responsibilities and that appropriate processes are in place.

There are faculty/ academic unit guidelines for:

  • In-session testing
  • Course/Campus Transfers
  • Scaling of marks
  • Grievance Resolution

See Faculty Guidelines re In-session Tests – template (under development)

Guidelines for Course and Inter- Campus Transfers (under development)

Sample Faculty Academic Grievance Resolution Flowchart (rtf word document)

Sample Academic Grievance Form (rtf word document)

Standards for the Finalisation of Student Results, Schedule 1: Scaling Guidelines

□ There is a regular cycle of review of courses and subjects that includes review of the appropriateness and monitoring of assessment

Add link to Course and Subject Review Guidelines (currently under development by QA team)

□ Monitoring of marks – trends over time and across subjects are reviewed at the end of each session. External cross-marking may be used as an optional QA strategy, between UOW campuses or between UOW and other institutions.

See Standards for the Finalisation of Academic Results

Sample Form for Recording Variation to Marks

□ Processes are in place for the secure submission and return of assignments

Sample cover sheet included in Appendix 4.

9 Roles & Responsibilities

  • 1. Not available.

10 Version Control and Change History

Version Control

Version – Date Effective – Approved By – Amendment

1 – 15 September 2004 – Academic Senate – New Guidelines

2 – 5 February 2009 – Deputy Vice-Chancellor (Academic) – Migrated to UOW Procedure Template as per Policy Directory Refresh

3 – 5 August 2009 – Senior Manager, Policy and Governance – Correcting minor error in section 1.3

3 – 13 August 2009 – Deputy Vice-Chancellor (Academic) – Minor amendment to replace reference to SEDLOs with reference to Student Support Advisers

4 – 9 March 2010 – Senior Manager, Policy and Governance Unit – Future review date identified in accordance with Standard on UOW Policy

5 – 25 March 2010 – Senior Manager, Policy and Governance Unit – Updated to reflect policy name change for Code of Practice – Professional Experience and Academic Integrity and Plagiarism Policy

6 – 06 June 2011 – Senior Manager, Policy and Governance Unit – Updated broken links

7 – 24 January 2014 – Deputy Vice-Chancellor (Education) – Clarification of references to Deferred Assessment/Examination and Supplementary Assessment/Examination

8 – TBA – University Council – Amendments to reflect the implementation of the new Standards for the Finalisation of Student Results, which replace the previous Assessment Committee Standards and Assessment Guidelines – Scaling

Appendix 1: Faculty/Unit Guidelines for In-Session Tests – Checklist

Definition

In-session tests are defined as “tests, quizzes or in-class reviews that are held within session, either during regular class times or outside classes.”

Background

These guidelines have been developed to ensure that in-session testing is undertaken in a fair and equitable manner and minimises opportunities for cheating, as outlined in the Code of Practice - Teaching and Assessment.

Checklist for the Conduct of In-Session Tests

Subject Outline – information to include

□ Nature of test, eg mcq, short answer, online, whether questions are the same or randomly assigned from a test bank

□ Date and time of test, eg in lecture time, Wed 5th May

□ Location of test, eg computer lab 41.101

□ Weighting of test, eg 15% of total mark

□ Weighting of in-session tests commensurate with assessment task (Good Practice Assessment Guidelines recommends that no single in-session test should be worth more than 30% of the final mark for the subject)

□ Duration of test, eg first 15 minutes of the tutorial class; 45 minutes during scheduled lecture time

Test paper

□ Reviewed by the Unit Assessment committee prior to printing test paper

□ Test papers are secured within the academic unit prior to conduct of the test

Location arrangements

□ Seating is adequate for the test task and minimises opportunities for cheating

□ Adequate supervision is provided. The ratio of one supervisor and one assistant for up to 60 students, with an additional supervisor for each additional 60 students in attendance

Identification of Students

□ Student identity to be checked (preferably by ID card)

□ If the student is not carrying a Student ID card, another form of photo ID is recorded, e.g. driver's licence or passport.

□ If the student does not have a form of photo ID, the student is required to leave his/her signature with the supervisor and be advised to report to the subject co-ordinator on the following day with his/her ID card so that signatures can be checked.

□ Online quizzes – students are issued with a password for the duration of the test

Alternative arrangements

□ Information about alternative arrangements is made available to students who cannot attend the scheduled test time and/or location, e.g. for reasons of special consideration or clashes with other compulsory academic events (before setting the date for the test, check the Calendar of Religious Observance) [link]

□ Appropriate arrangements are made for students with a disability or special needs – in consultation with the student and the Disabilities Liaison Officer.

Deferred tests

□ Students are notified in advance of the arrangements for sitting a deferred test, if they were not able to sit the examination on the day due to unforeseen circumstances.

□ Students are to seek special consideration via SOLS should they be unable to sit the in-session test at the nominated time due to unforeseen circumstances.

Marking and Feedback

□ Students receive their mark within a designated, short period of time, eg within one week.

□ Students are provided with feedback on their performance, either individually on papers or collectively in class discussion.

Grievance procedure

□ The Faculty’s Grievance Policy includes reference to steps students may take if they have a grievance about the conduct of the in-session test or the mark that they received.

Appendix 2: Group Work - Good Practice Example

The following example is of a complex group work task which comprises 50% of the assessment.

ENG154 - Engineering Design and Innovation

Coordinator – A/Prof Peter Wypych

PART A – Design Component and Creative Design Project (50% Weighting)

      □ Part A (Design)

      □ Part B (Drawing)

      □ Creative Design Competition

Subject Details:

Objective

At the end of this course, the student should possess:

    • □ An ability to identify apparent and real design problems.
    • □ An ability to identify a large number of alternatives for the given design problem.
    • □ An ability to evaluate various alternatives against various design criteria, such as environmental, economic, technical, human and legal.
    • □ An ability to think independently and develop required imagination and insight for the given problem.
    • □ An ability to work in teams.
    • □ An ability to write technical reports.

Subject Details - Outline:

The lecture schedule and the details on tutorial activities are given below:

Week 1: Form groups of 4 students (for tutorial work and the Creative Design Competition – see later). Group membership must be finalised by the beginning of the Week 3 tutorial. Complete Tutorial Questions 1 and 2 at the end of “Lecture 1”. Hand in answers by the end of class (group submission).

Week 2: Discuss team/group work requirements (e.g. chairperson, minutes, regular meetings, brainstorming, action items, allocation of jobs). Brainstorm for the selection of a design project – see the Tutorial Question at the end of “Lecture 2”. No answers to be submitted this week. Finalise design groups (e.g. membership).

Week 3: Complete Tutorial Questions 1 and 2 (involving design report assessment) at the end of “Lecture 3”. Hand in answers by the end of class (group submission).

Week 4: Complete Tutorial Questions 1 and 2 at the end of “Lecture 4”. Hand in answers by the end of class (group submission). Also, as homework, complete your draft preliminary creative design report for review in the Week 5 tutorial – make sufficient copies (see below).

Week 5: Each group to submit at the beginning of the tutorial one (1) copy per member of the draft preliminary creative design report, which will be evaluated by other students (random basis). Peer review forms will be handed out for this purpose. Each student is to complete the review and submit the completed form as soon as possible. The tutor will check all the reviews and return the forms to the groups by the end of class (so that each group has feedback). Also, as homework, complete the Tutorial Question at the end of “Lecture 5”. Hand in your answers in the Week 6 tutorial (group submission).

Week 6: Preliminary oral presentation of design projects (max. 2-minute set-up; 7-minute oral presentation by all group members; 2-minute discussion). NOTE: any student who is absent without special consideration will receive zero for the oral presentation; each group is responsible for any audio-visual equipment in addition to what is in the room. Groups re-submit the preliminary design report incorporating peer review comments (one report per group). The tutor will mark each report using the same review form used previously. The mark distribution for the report will normally be equal (unless specified otherwise by students – this needs to be signed by the students).

Week 7: Return of preliminary design reports with feedback. Complete the Tutorial Question towards the end of “Lecture 8”. Hand in answers by the end of class (group submission).

Week 8: Evaluation of a 2-hole paper-punching machine for human factors – complete the Tutorial Question towards the end of “Lecture 9”. Hand in answers by the end of class (group submission). The tutor will supply the 2-hole puncher and 20 sheets of A4 paper for testing/evaluation purposes. Note: all students must take care of the punching machines, re-package them (as supplied) and return them to the tutor.

Week 9: Complete Tutorial Questions 1 and 2 at the end of “Lecture 8”. Hand in answers by the end of class (group submission).

Week 10: Complete the Tutorial Question at the end of “Lecture 11”. Hand in answers by the end of class (group submission). Also, as homework, complete the draft final creative design report for review in the Week 12 tutorial – make sufficient copies (see below).

Week 11: Each group to submit at the beginning of the tutorial one (1) copy per member of the draft final creative design report, which will be evaluated by other students (random basis). Peer review forms will be handed out for this purpose. Each student is to complete the review and submit the completed form as soon as possible. The tutor will check all the reviews and return the forms to the groups by the end of class (so that each group has feedback).

Week 12: Oral presentation of final designs (max. 2-minute set-up; 8-minute oral presentation by all group members; 2-minute discussion). Submission of final design reports incorporating peer review comments. The tutor will mark each report using the same review form used in the previous week. NOTE: a special review of all design models will be arranged (e.g. all groups will be asked to display their models to all the tutors, so that the six finalists can be decided).

Assessment Guidelines

Criteria:

The assessment consists of both formative (for feedback purpose) and summative (for final grading) assessment schemes. The distribution of various components of assessment is shown in Table 2. The tutorial work and the creative design project will be carried out mainly in groups of 4 members. The composition of groups will be decided in the first class on a voluntary basis.

Table 2. Subject assessment components:

    Assessment component – Weighting (%)

    Formative:
        Tutorial hand-in – 9.0
        Midterm exam – 10.0
        Preliminary design report – 3.0
        Preliminary oral presentation – 2.0
        Peer review of Preliminary Report – 1.0
        Peer review of Final report – 1.0

    Summative:
        Final design report – 10.0
        Creative design model – 10.0
        Final oral presentation – 4.0

    Total – 50.0

Tutorial hand-ins

The tutorial hand-ins form part of the formative assessment. In each tutorial, one submission is required from each group. Each tutorial hand-in should contain only the names of the group members who are present on that day. The role of each member in the group (e.g. writing, calculations, moderating) should rotate from week to week. The distribution of tutorial marks amongst individual members depends on each individual’s participation, which will be strictly assessed during the tutorial sessions. All tutorial submissions should be made at the end of the tutorial session where indicated. Unless discussed and arranged otherwise, each submission will carry equal weighting for those who attended the tutorial class.

Creative design report

Each group works on a chosen design problem and produces a design report and a physical model, both of which are submitted for assessment. The details are given in the “Creative Design Project Manual”.

Peer review of design reports

Draft copies of both preliminary and final design reports will be reviewed by the other students. Each student must complete a peer review form. At the end of the tutorial class, the relevant forms will be handed back to each group. The group will subsequently incorporate all reasonable comments in the final submission of the report. Each student who submits a completed peer review form will get a mark (see below).

    • □ Tutors will distribute the reports randomly and make sure that a student does not get a report of a group who is sitting next to him/her.
    • □ Students should indicate N/A against each question that is not applicable.
    • □ Minutes should be in an Appendix.
    • □ Students should check for proper paragraphs in the report.
    • □ Students should not attempt to assign any marks.
    • □ Marking of the peer review sheet:
        • □ 100% for full marking with significant comments/feedback;
        • □ 50% for full marking with only occasional comments;
        • □ 0% for anything less than full marking.

Design report marking

The respective tutors mark the design reports. All reports should include the percentage contribution of each member (on the front cover). Individual contributions should be discussed between all the group members and signed by the respective member against his/her contribution. For reports with uneven contributions (signed by all students), the following method will be used to distribute the mark:

    • □ Cumulative Mark (CM) = Raw Mark × Number of Members in the Group
    • □ Student Mark = Individual Contribution (%) × CM / 100
    • □ Condition: maximum possible individual mark = 100.

For example, consider the following for the final design report and model:

    • □ Report mark = 65/100
    • □ Model Mark = 75/100
    • □ Total Raw Mark (report and model) = 70/100
    • □ Four members: A=10%, B=25%, C=25% and D=40%

Based on the above method:

        • □ CM = 70 × 4 = 280
        • □ A receives 10 × 280/100 = 28/100
        • □ B and C each receive 25 × 280/100 = 70/100 (the same as the total raw mark, because each carried an equal share of the work)
        • □ D receives 40 × 280/100 = 112, reduced to the maximum possible value of 100.
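
The calculation above can also be expressed as a short script. The following Python sketch is illustrative only, assuming the contributions are supplied as simple percentages that sum to 100; the function and variable names are not part of the subject documentation. It reproduces the distribution method and the worked example, including the cap at the maximum possible mark.

    def distribute_group_mark(raw_mark, contributions, max_mark=100):
        # Cumulative Mark (CM) = Raw Mark x Number of Members in the Group
        cumulative_mark = raw_mark * len(contributions)
        # Student Mark = Individual Contribution (%) x CM / 100, capped at the maximum possible mark
        return {member: min(percent * cumulative_mark / 100, max_mark)
                for member, percent in contributions.items()}

    # Worked example: total raw mark of 70/100, contributions A=10%, B=25%, C=25%, D=40%
    marks = distribute_group_mark(70, {"A": 10, "B": 25, "C": 25, "D": 40})
    print(marks)  # {'A': 28.0, 'B': 70.0, 'C': 70.0, 'D': 100}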

Prizes for creative design model:

The model developed as a part of the creative design activity will be assessed during the final oral presentation in Week 13. Based on these presentations, which may have to be run at different times (to be confirmed), and a special review of all designs/models (to be confirmed), the top 6 creative design projects will be selected by the tutors for the final judging competition. An overall winner will be announced from this competition. Other prizes will be awarded for other categories.

Oral presentations:

There are two oral presentations: one in Week 6 and the second (final) one in Week 13. Each group will have a maximum of 2 minutes for set-up, including audio-visuals. The Week 6 presentation will comprise 7 minutes of formal presentation and 2 minutes of discussion; the Week 13 presentation will comprise 8 minutes of formal presentation and 2 minutes of discussion. All group members should be present on each day. The topics should be divided evenly between the group members for presentation purposes, and each group member should contribute to the discussion period.

Mid-term exam:

To assess individual understanding of the subject, there will be a one-hour mid-term exam during lecture time in Week 6.

Appendix 3: Sample Subject Outline

  • [Images: sample subject outline pages]

Appendix 4: Sample Assignment Cover Sheet

  • [Image: sample assignment cover sheet]
Last reviewed: 11 July, 2014
