Like any well-designed product, an effective assessment tool depends on the effort invested in its research, development, testing, and refinement. The design process unfolds in four essential steps:
- Step 1 – Clarify the evidence requirements: immerse yourself in the mandatory requirements of the assessment task(s).
- Step 2 – Choose your assessment methods: use your grasp of the specified competencies to select suitable assessment method(s).
- Step 3 – Design and develop your assessment tools: formulate the assessment tool(s).
- Step 4 – Trial, refine and review your tools: run trials and fine-tune your tools. This iterative process will bolster your confidence in the tool’s flexibility and its ability to yield valid, reliable, and equitable judgments.
The precision and attention devoted to these steps are instrumental in crafting assessment tools that not only meet the necessary standards but also enhance the overall assessment experience.
Step 1 – Clarify the Evidence Requirements
Gaining Insight into Competence
Numerous trainer and assessor guides suggest commencing the process by gauging your own grasp of the competency criteria outlined in the units. This involves visualising a skilled individual proficiently performing the tasks associated with the job. When you have a clear understanding of the tasks, potential contingencies, and contextual applications of these skills, you are prepared to devise a training program and choose an appropriate assessment approach. Crafting a comprehensive competency profile after scrutinising the entirety of job-related activities empowers you to identify opportunities for grouping units of competency, mirroring real-world workplace scenarios.
For determining a person’s competency, a set of criteria or benchmarks is essential for the assessment. Within the Vocational Education and Training (VET) sector, these benchmarks are often established through national competency standards, starting with units of competency. Additional benchmarks might encompass assessment criteria, evidence prerequisites from accredited courses, international or Australian standards, and benchmarks set by the organisation – including operational procedures, Workplace Health and Safety (WHS) standards, and product specifications. The diagram below broadly outlines the interplay among benchmarks, evidence requirements, assessment methods and tools, and the resulting evidence.
Confirming Evidence Requirements
Evidence represents the information that, when weighed against a unit of competency, enables confident judgment regarding an individual’s competence. Unlike some assessment approaches, competency-based assessment does not involve comparisons between learners. Instead, learners are assessed against clear, predefined standards. In order to ascertain the necessary evidence collection, it’s crucial to have a comprehensive understanding of competency requirements. This entails examining various sources of essential information, such as:
- Elements of the unit(s) of competency
- Performance criteria
- Foundation skills
- Performance evidence and knowledge evidence
- Assessment context information
- Dimensions of competency: task, task management, contingency management, and job/role environment skills
- Relevant employability skills, if applicable
- Relevant Australian Qualifications Framework descriptor
- Associated workplace processes, procedures, and systems for contextualising assessed activities, including any legal, Workplace Health and Safety (WHS), or legislative considerations.
A strong grasp of the requisite evidence for each unit is not only vital for the Registered Training Organisation (RTO) to make valid judgments but also for identifying opportunities to cluster units and employ shared evidence for assessment decisions.
Understanding Your Learners
The individuals for whom your assessment methods and tools are intended can encompass a wide range or may originate from a well-defined target demographic, such as a specific enterprise or industry sector. These learners might consist of employees holding specific job profiles or a group defined by the stipulations of a funding body. Whenever feasible, it holds significance to clearly identify your learner group. This identification is pivotal for tailoring suitable tools that effectively cater to their unique requirements.
A complete and unequivocal understanding of the standards or criteria against which you are assessing, along with the evidence requirements and, when feasible, the attributes of your learners, is fundamental to the design process. The time invested in determining the necessary evidence will yield substantial benefits when crafting your learning program, assessment plan, and corresponding tools.
Step 2 – Choose your assessment methods.
When it comes to handpicking the assessment techniques you will employ for specific units of competency, your guiding principle should be the individual unit(s) of competency themselves. With a vivid picture of a competent worker in mind and a clear understanding of the desired knowledge and skills your learners should showcase, you are now equipped to ascertain the methods that will aptly gather the required evidence. This collaborative process involves learners, as well as colleagues, trainers, assessors, and representatives from the industry or enterprise.
In the realm of real work, tasks often transcend the boundaries delineated by individual units of competency. More frequently, a genuine work activity necessitates a blend of multiple units of competency working in tandem. Crafting an effective assessment task might involve skilfully clustering several competencies to emulate a real-world task or a specific job role. The extent to which units can be clustered to mimic authentic work activities varies across training packages.
A noteworthy consideration is the principle of clustering. This entails gathering related units of competency under a single assessment to capture a holistic view of a learner’s skills. The suitability of clustering hinges on various factors such as the nature of the work activity, assessment context, qualification’s training and assessment arrangements, available time, resources, facilities, and personnel. For successful clustering, trainers and assessors must possess a comprehensive understanding of the training package and the pertinent workplace setting.
Strategising Evidence Collection
Selecting an appropriate assessment method is a central challenge of professional practice. It requires weighing various techniques to determine the most fitting approach. This process often draws on the assessment methods outlined in the table below, each considered for its “best fit” to the context and purpose of the assessment.
By thoughtfully delving into the array of options and aligning them with the unique demands of your assessment, you will uncover the techniques that ensure a comprehensive and accurate measurement of learners’ competencies.
Considering the Needs of Your Learners
The selection of an appropriate assessment method is guided by a multitude of factors, with a paramount focus on addressing the unique needs of your learners. While upholding the integrity of the targeted unit(s) of competency or competency cluster, your choice of methods should delicately balance the circumstances of your learners. This consideration encompasses various aspects, such as their individual situations and capabilities.
For instance, Indigenous learners might be more inclined to showcase their knowledge through practical demonstrations rather than verbal discussions. Learners with disabilities could require additional time to complete tasks effectively. Individuals returning to education or the workforce after a prolonged period of unemployment might grapple with confidence issues and discomfort in performing in front of others. It is imperative that any adjustments you make to the assessment method maintain alignment with all the stipulated requirements of the unit(s) of competency.
While this pertinent information might not be immediately accessible during the planning phase, it is pivotal to remain flexible in your methodology and assessment plan. As you gain a clearer understanding of learners’ needs, adapting your approach becomes crucial. Pre-emptively establishing how you will identify learners’ requirements and utilise this insight to tailor your assessment process is a strategic move.
Your decisions are also influenced by evaluating the language, literacy, and numeracy proficiencies of your learners, coupled with the skill levels mandated by the qualification. Should any uncertainties arise, tapping into the expertise of specialist Language, Literacy, and Numeracy (LLN) professionals can prove invaluable in making informed judgments.
To the extent feasible, active participation of industry representatives, employers, and learners themselves is integral in shaping the assessment process. Their involvement not only offers pragmatic insights to you but also fosters their enduring commitment and satisfaction with the calibre of training and assessment you provide.
Determining the Evidence Collectors
The process of choosing your assessment methods inevitably involves decisions about the individuals responsible for gathering the evidence. The guidelines outlined in training package assessment requirements can offer valuable insights into potential evidence collectors. Whether it’s the learner, the trainer/assessor, or an external third-party evidence gatherer, ensuring the precision of your assessment tools’ instruments and instructions is paramount. These components must distinctly outline expectations and establish a coherent framework for evidence gatherers to navigate.
Where will you gather the evidence?
The decision of where to amass the evidence is significantly influenced by the stipulations set forth in the training package or course. In many cases, the workplace is the recommended and preferred environment, demanding careful consideration of safety measures and endeavours to minimise disruptions to regular operations.
Should workplace-based assessment prove unviable or unsuitable, an alternative approach involves selecting settings and methodologies that allow learners to showcase their competence to the specified performance level. Simulation serves as one such method of evidence gathering, encompassing scenarios where learners tackle tasks, activities, or problems in contexts detached from the actual workplace, yet closely mirroring it.
Simulations manifest in various forms, ranging from replicating authentic workplace scenarios, akin to the use of flight simulators, to role-playing exercises founded on real workplace situations, and even reconstructing business scenarios using spreadsheets.
Before embarking on a simulation-based approach:
- Confirm the stipulations delineated in the relevant training package and consider industry perspectives on simulation use.
- Consider forging partnerships with local enterprises that can provide access to workplaces, equipment, genuine workplace documentation, or insights on creating a convincingly realistic simulated environment.
- Undertake a comprehensive review of the entire qualification or the specific units of competency slated for assessment. This examination aids in incorporating opportunities to assess complete work tasks or clusters of competencies within the simulation framework.
Determining the Timing of Evidence Gathering
The timing of your evidence collection endeavours should thoughtfully consider both the requirements of the enterprise and the needs of your learners. For instance, scheduling assessment activities involving bakers or their establishments right before Easter might be impractical. Similarly, for learners with religious obligations, avoiding assessment times that coincide with prayer times would be considerate. Whenever possible, strive to sidestep periods that clash with typical family responsibilities, such as school drop-off and pick-up times.
Additional Practical Considerations
Numerous practical factors will play a role in shaping your choice of assessment methods. These factors encompass the following considerations, all of which affect your ability to effectively manage the chosen evidence collection process:
- The diverse mix of learners under your guidance.
- The scale of the learner cohort.
- The geographical location of your learners, whether they are on or off campus.
- Accessibility to necessary equipment and facilities for both you and your learners.
- Budgetary implications and resource demands.
- The potential stress exerted on learners and staff due to your assessment requirements.
Meeting Your Obligations
Irrespective of the nature of the evidence you gather and evaluate, adherence to regulatory standards is paramount. Prior to constructing your assessment tools, allocate time to scrutinise whether the chosen assessment methods align with the core principles of assessment.
Your chosen assessment methods should be:
- Valid: They should genuinely assess what they claim to evaluate.
- Reliable: Other trainers or assessors, when presented with the same evidence, would arrive at the same judgment.
- Flexible: Learners’ needs are duly accommodated in terms of methods, timing, and location.
- Fair: The assessment methods should provide a level playing field, allowing all learners to effectively demonstrate their competence.
With your assessment methods determined, you now stand ready to craft your assessment tools.
Step 3 – Design and develop your assessment tools.
With a clear understanding of the evidence prerequisites and a defined selection of assessment methods, it’s now time to embark on the design of your assessment tools. These tools encompass both the instrument itself and the accompanying instructions or procedures for collecting and interpreting evidence. They meet evidence gatherers’ need for impartiality and transparency, while simultaneously providing learners with clarity and structure.
The fundamental aim of assessment tools is to offer unambiguous guidance and robust support to learners. This ensures there is no room for ambiguity concerning the expected tasks or the foundation upon which trainers/assessors will base their evaluations. In addition to their primary functions, well-designed assessment tools can also serve as valuable assets for documentation and reporting.
In general, assessment tools are tailored to meet a range of practical necessities, including:
- The learner’s personal details.
- Identification of the trainer/assessor.
- The assessment dates.
- The title of the relevant unit or cluster.
- The assessment context.
- The assessment procedure.
- A comprehensive list of knowledge/skills slated for assessment.
- The achieved competence and assessment outcomes.
- Constructive feedback for the learner.
- Signatures of the learner and the trainer/assessor, accompanied by dates.
- Clear instructions for the learner, trainer/assessor, or other evidence gatherer.
- Resource requisites for the assessment.
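As a rough illustration only, the practical details listed above could be captured in a simple record structure underpinning a cover sheet or template. All field names and values below are hypothetical, not drawn from any standard or training package.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an assessment record capturing the practical
# details an assessment tool typically documents; names are illustrative.
@dataclass
class AssessmentRecord:
    learner_name: str
    assessor_name: str
    assessment_date: str               # e.g. "2024-05-01"
    unit_or_cluster_title: str         # unit or cluster being assessed
    context: str                       # workplace, simulated environment, etc.
    procedure: str                     # summary of the assessment procedure
    skills_assessed: list = field(default_factory=list)
    resources_required: list = field(default_factory=list)
    outcome: str = ""                  # recorded after the judgment is made
    feedback: str = ""                 # constructive feedback for the learner
    learner_signed: bool = False
    assessor_signed: bool = False

# Example: a record is created at the start of the process, with the
# outcome, feedback, and signatures completed after the assessment.
record = AssessmentRecord(
    learner_name="J. Citizen",
    assessor_name="A. Trainer",
    assessment_date="2024-05-01",
    unit_or_cluster_title="Hypothetical Example Unit",
    context="Simulated workplace",
    procedure="Observation of practical task plus oral questioning",
    skills_assessed=["Task skills", "Contingency management"],
)
```

Structuring the record this way makes it easy to check that every required field is present before the outcome is signed off.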
When devising these tools, it’s crucial to ensure alignment with the rules of evidence. For instance, the tools must facilitate the collection of evidence that is:
- Valid: It comprehensively covers all requisites of the unit of competency.
- Sufficient: It enables a judgment of competence to be made over time and across diverse situations.
- Current: It pertains to contemporary competent performance.
- Authentic: It genuinely represents the learner’s independent work.
Fit for Purpose
Your chosen assessment method takes shape and gains structure through the design of your assessment tool. It is crucial for the tool to be tailored for its intended purpose. This requires careful consideration of the most suitable tool to support the chosen assessment approach effectively and efficiently. During the design process, it is vital to pay close attention to the language, literacy, and numeracy (LLN) skill levels of the learners and align with the requisites of the specific units of competency.
Standardised tools often present a valuable option, offering a cost-effective foundation from which you can customise your own tools. These standardised tools also foster common understanding among groups of trainers/assessors and instil confidence, particularly for new professionals in the field. Leveraging the skills and expertise of others is important, especially when dealing with aspects outside your technical expertise, such as LLN skills, or when seeking feedback on the tools you’ve developed.
Guidance for Learners and Trainers/Assessors
Instructions for both learners and trainers/assessors constitute an integral component of all assessment tools. These instructions should address the “what, when, where, how, and why” of the assessment processes. They may include suggestions for reasonable adjustments to cater to diversity and provide advice on recording requirements for trainers/assessors/observers. These instructions, drafted in clear and concise language, can be incorporated within the instrument itself or presented in a separate document.
Electronic assessment (e-assessment) must meet the same standards as any other form of assessment within the VET sector, including the principles of assessment and the rules of evidence. Challenges arise in ensuring the security and accessibility of assessments and in authenticating candidates’ identities. Additional considerations include the ongoing maintenance of e-assessment tools, electronic feedback, record-keeping, and student data retention.
To ensure regulatory compliance in e-assessment practices, the National Quality Council introduced specific guidelines and case studies in 2011. These guidelines, initially published by the National VET e-Learning Strategy, offer direction for maintaining standards. Access to the e-Assessment Guidelines for the VET Sector can be obtained through the NCVER research database at voced.edu.au.
At this stage of development, many RTOs embark on mapping the alignment between their assessment tools and the respective units of competency. This process is designed to verify that the evidence gathered adequately covers all the unit requirements, thereby contributing to the RTO’s confidence in the validity and sufficiency of the evidence.
The mapping approach employed by an RTO can vary, ranging from simple notations on the unit of competency to the creation of detailed mapping matrices. Generally, the more comprehensive the mapping document, the more valuable it becomes for validation and any redevelopment endeavours as the unit undergoes updates. The following examples have proven effective for RTOs utilising them.
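To illustrate the idea behind a mapping matrix, the sketch below models it as a simple coverage table and flags any unit requirement not addressed by at least one assessment tool. The requirement identifiers and tool names are hypothetical, not taken from any actual unit of competency.

```python
# Hypothetical mapping of unit requirements to the assessment tools
# claimed to gather evidence for them (PC = performance criterion,
# KE = knowledge evidence, PE = performance evidence).
mapping = {
    "PC 1.1": ["Written questions"],
    "PC 1.2": ["Practical observation"],
    "PC 2.1": ["Practical observation", "Third-party report"],
    "KE 1":   ["Written questions"],
    "PE 1":   [],  # gap: no tool currently gathers this evidence
}

def find_gaps(mapping):
    """Return requirement IDs with no assessment tool mapped to them."""
    return [req for req, tools in mapping.items() if not tools]

print(find_gaps(mapping))  # → ['PE 1']
```

Even a small script like this mirrors the purpose of a detailed mapping matrix: making gaps in evidence coverage visible before validation, rather than after.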
Step 4 – Trial, refine and review your tools.
Ensuring Alignment and Effectiveness of Assessment Resources
To guarantee the alignment, currency, sufficiency, and effectiveness of your assessment resources with the training package requirements, a critical step is their thorough review by fellow trainers/assessors and a comprehensive trial prior to implementation.
Soliciting input from your peers, learners, and industry experts serves as a validation mechanism to confirm that these tools proficiently facilitate evidence collection and maintain an appropriate level of challenge for the qualification level. Differing perspectives present an opportunity for transparent discussions and the resolution of any ambiguities or misconceptions before the tools are introduced to learners.
Conducting trials of your tools prior to their formal deployment with learners allows you to assess the user-friendliness of the format, gauge the suitability of literacy and numeracy levels, ensure clarity in instructions, and ascertain the practicality of the format for documenting assessment evidence and judgments. This process also offers insights into the appropriateness of allocated timeframes for assessment tasks and the overall cost-effectiveness of the tool.
Throughout the trial period, it’s important to evaluate the adaptability of the tool. Its capacity to accommodate variations in context and learners’ needs while maintaining the integrity of valid and reliable assessment outcomes is a key consideration.
The review of assessment tools can be executed through diverse approaches, ranging from sharing them with fellow trainers/assessors to involving a panel of trainers/assessors for industry-wide validation. Collaborating with others often results in fresh perspectives that lead to valuable enhancements.
Websites providing ongoing information relevant to competency‑based assessment
- Australian Qualifications Framework: aqf.edu.au
- Australian Skills Quality Authority: The regulatory and accreditation authority for RTOs that offer or deliver training and assessment in Western Australia, New South Wales, the Australian Capital Territory, Queensland, Tasmania, South Australia and the Northern Territory, and to overseas students. – asqa.gov.au
- Department of Training and Workforce Development (WA): Policies, information on training and professional development activities and useful links, for example, to Training Councils. – dtwd.wa.gov.au
- Training Accreditation Council: The regulatory and accreditation authority in WA. – tac.wa.gov.au
- Training.gov.au: Provides comprehensive national training information on training packages, registered training providers, qualifications, accredited courses, skill sets and units of competency. – training.gov.au